id | title | text | formulas | url |
---|---|---|---|---|
10198492 | Nilpotent orbit | In mathematics, nilpotent orbits are generalizations of nilpotent matrices that play an important role
in representation theory of real and complex semisimple Lie groups and semisimple Lie algebras.
Definition.
An element "X" of a semisimple Lie algebra "g" is called nilpotent if its adjoint endomorphism
"ad X": "g" → "g", "ad X"("Y") = ["X","Y"]
is nilpotent, that is, ("ad X")"n" = 0 for large enough "n". Equivalently, "X" is nilpotent if its characteristic polynomial "p""ad X"("t") is equal to "t"dim "g".
A semisimple Lie group or algebraic group "G" acts on its Lie algebra via the adjoint representation, and the property of being nilpotent is invariant under this action. A nilpotent orbit is an orbit of the adjoint action such that any (equivalently, all) of its elements is (are) nilpotent.
Examples.
Nilpotent formula_0 matrices with complex entries form the main motivating case for the general theory, corresponding to the complex general linear group. From the Jordan normal form of matrices we know that each nilpotent matrix is conjugate to a unique matrix with Jordan blocks of sizes formula_1 where formula_2 is a partition of "n". Thus in the case "n"=2 there are two nilpotent orbits, the "zero orbit" consisting of the zero matrix and corresponding to the partition ("1","1") and the "principal orbit" consisting of all non-zero matrices "A" with zero trace and determinant,
formula_3 with formula_4
corresponding to the partition ("2"). Geometrically, this orbit is a two-dimensional complex quadratic cone in the four-dimensional vector space of formula_5 matrices, minus its apex.
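The "n" = 2 case above can be checked directly: by the Cayley–Hamilton theorem, a traceless 2×2 matrix with zero determinant squares to zero. The following Python sketch is illustrative only and is not part of the article.

```python
# Illustrative sketch: any traceless matrix [[x, y], [z, -x]] lying on the cone
# x^2 + y*z = 0 satisfies A @ A = 0, so every non-zero such matrix belongs to the
# principal nilpotent orbit for n = 2.
import numpy as np

def is_nilpotent_2x2(x, y, z):
    A = np.array([[x, y], [z, -x]], dtype=complex)
    return np.allclose(A @ A, 0)

print(is_nilpotent_2x2(1, 1, -1))  # True:  x^2 + y*z = 0, point of the principal orbit
print(is_nilpotent_2x2(1, 1, 1))   # False: off the cone, not nilpotent
```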
The complex special linear group is a subgroup of the general linear group with the same nilpotent orbits. However, if we replace the "complex" special linear group with the "real" special linear group, new nilpotent orbits may arise. In particular, for "n"=2 there are now 3 nilpotent orbits: the zero orbit and two real half-cones (without the apex), corresponding to positive and negative values of formula_6 in the parametrization above.
Poset structure.
Nilpotent orbits form a partially ordered set: given two nilpotent orbits, "O"1 is less than or equal to "O"2 if "O"1 is contained in the Zariski closure of "O"2. This poset has a unique minimal element, the zero orbit, and a unique
maximal element, the "regular nilpotent orbit", but in general, it is not a graded poset.
If the ground field is algebraically closed then the zero orbit is covered by a unique orbit, called the "minimal orbit", and the regular orbit covers a unique orbit, called the "subregular orbit".
In the case of the special linear group "SL""n", the nilpotent orbits are parametrized by the partitions of "n". By a theorem of Gerstenhaber, the ordering of the orbits corresponds to the dominance order on the partitions of "n". Moreover, if "G" is an isometry group of a bilinear form, i.e. an orthogonal or symplectic subgroup of "SL""n", then its nilpotent orbits are parametrized by partitions of "n" satisfying a certain parity condition and the corresponding poset structure is induced by the dominance order on all partitions (this is a nontrivial theorem, due to Gerstenhaber and Hesselink). | [
{
"math_id": 0,
"text": "n\\times n"
},
{
"math_id": 1,
"text": "\\lambda_1\\geq \\lambda_2\\geq\\ldots\\geq\\lambda_r,"
},
{
"math_id": 2,
"text": "\\lambda"
},
{
"math_id": 3,
"text": " A=\\begin{bmatrix}x & y\\\\ z & -x \\end{bmatrix}, \\quad (x,y,z)\\ne (0,0,0)\\quad{\\;}"
},
{
"math_id": 4,
"text": "x^2+yz=0,"
},
{
"math_id": 5,
"text": "2\\times 2"
},
{
"math_id": 6,
"text": "y-z"
}
] | https://en.wikipedia.org/wiki?curid=10198492 |
1020021 | Distance (graph theory) | Length of shortest path between two nodes of a graph
In the mathematical field of graph theory, the distance between two vertices in a graph is the number of edges in a shortest path (also called a graph geodesic) connecting them. This is also known as the geodesic distance or shortest-path distance. Notice that there may be more than one shortest path between two vertices. If there is no path connecting the two vertices, i.e., if they belong to different connected components, then conventionally the distance is defined as infinite.
In the case of a directed graph the distance "d"("u","v") between two vertices u and v is defined as the length of a shortest directed path from u to v consisting of arcs, provided at least one such path exists. Notice that, in contrast with the case of undirected graphs, "d"("u","v") does not necessarily coincide with "d"("v","u")—so it is just a quasi-metric, and it might be the case that one is defined while the other is not.
Related concepts.
A metric space defined over a set of points in terms of distances in a graph defined over the set is called a graph metric.
The vertex set (of an undirected graph) and the distance function form a metric space, if and only if the graph is connected.
The eccentricity "ϵ"("v") of a vertex v is the greatest distance between v and any other vertex; in symbols,
formula_0
It can be thought of as how far a node is from the node most distant from it in the graph.
The radius r of a graph is the minimum eccentricity of any vertex or, in symbols,
formula_1
The diameter d of a graph is the maximum eccentricity of any vertex in the graph. That is, d is the greatest distance between any pair of vertices or, alternatively,
formula_2
To find the diameter of a graph, first find the shortest path between each pair of vertices. The greatest length of any of these paths is the diameter of the graph.
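As an illustration of these definitions, the following Python sketch (not part of the article; the function and variable names are chosen here for convenience) computes eccentricities, the radius and the diameter of a small connected, unweighted, undirected graph using breadth-first search.

```python
# Minimal sketch: eccentricity, radius and diameter of a connected unweighted graph.
from collections import deque

def bfs_distances(graph, source):
    """graph: dict mapping each vertex to an iterable of neighbours."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def eccentricity(graph, v):
    return max(bfs_distances(graph, v).values())

def radius_and_diameter(graph):
    ecc = {v: eccentricity(graph, v) for v in graph}
    return min(ecc.values()), max(ecc.values())

# Path graph 1-2-3-4: radius 2 (central vertices 2 and 3), diameter 3.
path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(radius_and_diameter(path))  # (2, 3)
```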
A central vertex in a graph of radius r is one whose eccentricity is r—that is, a vertex whose distance from its furthest vertex is equal to the radius, equivalently, a vertex v such that "ϵ"("v") = "r".
A peripheral vertex in a graph of diameter d is one whose eccentricity is d—that is, a vertex whose distance from its furthest vertex is equal to the diameter. Formally, v is peripheral if "ϵ"("v") = "d".
A pseudo-peripheral vertex v has the property that, for any vertex u, if u is as far away from v as possible, then v is as far away from u as possible. Formally, a vertex v is pseudo-peripheral if, for each vertex u with "d"("u","v") = "ϵ"("v"), it holds that "ϵ"("u") = "ϵ"("v").
A level structure of the graph, given a starting vertex, is a partition of the graph's vertices into subsets by their distances from the starting vertex.
A geodetic graph is one for which every pair of vertices has a unique shortest path connecting them. For example, all trees are geodetic.
The weighted shortest-path distance generalises the geodesic distance to weighted graphs. In this case it is assumed that the weight of an edge represents its length or, for complex networks, the cost of the interaction, and the weighted shortest-path distance "d""W"("u", "v") is the minimum sum of weights across all the paths connecting u and v. See the shortest path problem for more details and algorithms.
Algorithm for finding pseudo-peripheral vertices.
Often peripheral sparse matrix algorithms need a starting vertex with a high eccentricity. A peripheral vertex would be perfect, but is often hard to calculate. In most circumstances a pseudo-peripheral vertex can be used. A pseudo-peripheral vertex can easily be found with the following algorithm:
1. Choose a vertex formula_3.
2. Among all the vertices that are as far from formula_3 as possible, let formula_4 be one with minimal degree.
3. If formula_5 then set formula_6 and repeat with step 2, else formula_4 is a pseudo-peripheral vertex.
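A minimal Python sketch of this procedure is shown below; it is illustrative only and reuses the bfs_distances helper and the example graph from the previous sketch.

```python
# Sketch of the pseudo-peripheral vertex search described above.
# Relies on bfs_distances() and the example graph `path` defined earlier.
def pseudo_peripheral_vertex(graph, u):
    dist_u = bfs_distances(graph, u)
    ecc_u = max(dist_u.values())
    while True:
        # step 2: among vertices farthest from u, pick one of minimal degree
        farthest = [w for w, d in dist_u.items() if d == ecc_u]
        v = min(farthest, key=lambda w: len(graph[w]))
        dist_v = bfs_distances(graph, v)
        ecc_v = max(dist_v.values())
        # step 3: if the eccentricity improved, restart from v; otherwise v is the answer
        if ecc_v > ecc_u:
            u, dist_u, ecc_u = v, dist_v, ecc_v
        else:
            return v

print(pseudo_peripheral_vertex(path, 2))  # 1, an end vertex of the example path graph
```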
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\epsilon(v) = \\max_{u \\in V}d(v,u)."
},
{
"math_id": 1,
"text": "r = \\min_{v \\in V} \\epsilon(v) = \\min_{v \\in V}\\max_{u \\in V}d(v,u)."
},
{
"math_id": 2,
"text": "d = \\max_{v \\in V}\\epsilon(v) = \\max_{v \\in V}\\max_{u \\in V}d(v,u)."
},
{
"math_id": 3,
"text": "u"
},
{
"math_id": 4,
"text": "v"
},
{
"math_id": 5,
"text": "\\epsilon(v) > \\epsilon(u)"
},
{
"math_id": 6,
"text": "u=v"
}
] | https://en.wikipedia.org/wiki?curid=1020021 |
10200558 | Hot band | In molecular vibrational spectroscopy, a hot band is a band centred on a hot transition, which is a transition between two excited vibrational states, i.e. neither is the overall ground state. In infrared or Raman spectroscopy, hot bands refer to those transitions for a particular vibrational mode which arise from a state containing thermal population of another vibrational mode. For example, for a molecule with 3 normal modes, formula_0, formula_1 and formula_2, the transition formula_3 ← formula_4, would be a hot band, since the initial state has one quantum of excitation in the formula_2 mode. Hot bands are distinct from combination bands, which involve simultaneous excitation of multiple normal modes with a single photon, and overtones, which are transitions that involve changing the vibrational quantum number for a normal mode by more than 1.
Vibrational hot bands.
In the harmonic approximation, the normal modes of a molecule are not coupled, and all vibrational quantum levels are equally spaced, so hot bands would not be distinguishable from so-called "fundamental" transitions arising from the overall vibrational ground state. However, vibrations of real molecules always have some anharmonicity, which causes coupling between different vibrational modes that in turn shifts the observed frequencies of hot bands in vibrational spectra. Because anharmonicity decreases the spacing between adjacent vibrational levels, hot bands exhibit red shifts (appear at lower frequencies than the corresponding fundamental transitions). The magnitude of the observed shift is correlated to the degree of anharmonicity in the corresponding normal modes.
Both the lower and upper states involved in the transition are excited states. Therefore, the lower excited state must be populated for a hot band to be observed. The most common form of excitation is by thermal energy. The population of the lower excited state is then given by the Boltzmann distribution.
In general the population can be expressed as
formula_5
where kB is the Boltzmann constant and E is the energy difference between the two states.
In simplified form this can be expressed as
formula_6
where "ν" is the wavenumber [cm−1] of the hot band and "T" is the temperature [K]. Thus, the intensity of a hot band, which is proportional to the population of the lower excited state, increases as the temperature increases.
Combination bands.
As mentioned above, combination bands involve changes in vibrational quantum numbers of more than one normal mode. These transitions are forbidden by harmonic oscillator selection rules, but are observed in vibrational spectra of real systems due to anharmonic couplings of normal modes. Combination bands typically have weak spectral intensities, but can become quite intense in cases where the anharmonicity of the vibrational potential is large. Broadly speaking, there are two types of combination bands.
Difference transition.
A difference transition, or difference band, occurs between excited states of two different vibrations. Using the 3 mode example from above, formula_7 ← formula_8, is a difference transition. For difference bands involving transfer of a single quantum of excitation, as in the example, the frequency is approximately equal to the difference between the fundamental frequencies. The difference is not exact because there is anharmonicity in both vibrations. However the term "difference band" also applies to cases where more than one quantum is transferred, such as formula_8 ← formula_9.
Since the initial state of a difference band is always an excited state, difference bands are necessarily "hot bands"! Difference bands are seldom observed in conventional vibrational spectra, because they are forbidden transitions according to harmonic selection rules, and because populations of vibrationally excited states tend to be quite low.
Sum transition.
A sum transition (sum band), occurs when two or more fundamental vibrations are excited simultaneously. For instance, formula_3 ← formula_10 and formula_11 ← formula_12, are examples of sum transitions. The frequency of a sum band is slightly less than the sum of the frequencies of the fundamentals, again due to anharmonic shifts in both vibrations.
Sum transitions are harmonic-forbidden, and thus typically have low intensities relative to vibrational fundamentals. Also, sum bands can be, but are not always, hot bands, and thus may also show reduced intensities from thermal population effects, as described above. Sum bands are more commonly observed than difference bands in vibrational spectra | [
{
"math_id": 0,
"text": "\\nu_1"
},
{
"math_id": 1,
"text": "\\nu_2"
},
{
"math_id": 2,
"text": "\\nu_3"
},
{
"math_id": 3,
"text": "101"
},
{
"math_id": 4,
"text": " 001"
},
{
"math_id": 5,
"text": "{{N}\\over{N_0}} = {{e^{-E/k_BT}}}"
},
{
"math_id": 6,
"text": "{{N}\\over{N_0}} = {{e^{- \\nu /0.6952T}}}"
},
{
"math_id": 7,
"text": "010"
},
{
"math_id": 8,
"text": "100"
},
{
"math_id": 9,
"text": "020"
},
{
"math_id": 10,
"text": "000"
},
{
"math_id": 11,
"text": "012"
},
{
"math_id": 12,
"text": "001"
}
] | https://en.wikipedia.org/wiki?curid=10200558 |
10201 | Exothermic process | Thermodynamic process that releases energy to its surroundings
In thermodynamics, an exothermic process (from the Ancient Greek for 'outward' and 'thermal') is a thermodynamic process or reaction that releases energy from the system to its surroundings, usually in the form of heat, but also in the form of light (e.g. a spark, flame, or flash), electricity (e.g. a battery), or sound (e.g. the explosion heard when burning hydrogen). The term "exothermic" was first coined by the 19th-century French chemist Marcellin Berthelot.
The opposite of an exothermic process is an endothermic process, one that absorbs energy, usually in the form of heat. The concept is frequently applied in the physical sciences to chemical reactions where chemical bond energy is converted to thermal energy (heat).
Two types of chemical reactions.
Exothermic and endothermic describe two types of chemical reactions or systems found in nature, as follows:
Exothermic.
An exothermic reaction occurs when heat is released to the surroundings. According to the IUPAC, an exothermic reaction is "a reaction for which the overall standard enthalpy change Δ"H"⚬ is negative". Some examples of exothermic processes are fuel combustion, condensation and nuclear fission, which is used in nuclear power plants to release large amounts of energy.
Endothermic.
In an endothermic reaction or system, energy is taken from the surroundings in the course of the reaction, usually driven by a favorable entropy increase in the system. An example of an endothermic reaction is a first aid cold pack, in which the reaction of two chemicals, or dissolving of one in another, requires calories from the surroundings, and the reaction cools the pouch and surroundings by absorbing heat from them.
Photosynthesis, the process that allows plants to convert carbon dioxide and water to sugar and oxygen, is an endothermic process: plants absorb radiant energy from the sun and use it in an endothermic, otherwise non-spontaneous process. The chemical energy stored can be freed by the inverse (spontaneous) process: combustion of sugar, which gives carbon dioxide, water and heat (radiant energy).
Energy release.
Exothermic refers to a transformation in which a closed system releases energy (heat) to the surroundings, expressed by
formula_0
When the transformation occurs at constant pressure and without exchange of electrical energy, heat Q is equal to the enthalpy change, i.e.
formula_1
while at constant volume, according to the first law of thermodynamics it equals internal energy (U) change, i.e.
formula_2
In an adiabatic system (i.e. a system that does not exchange heat with the surroundings), an otherwise exothermic process results in an increase in temperature of the system.
In exothermic chemical reactions, the heat that is released by the reaction takes the form of electromagnetic energy or kinetic energy of molecules. The transition of electrons from one quantum energy level to another causes light to be released. This light is equivalent in energy to some of the stabilization energy of the chemical reaction, i.e. the bond energy. The released light can be absorbed by other molecules in solution to give rise to molecular translations and rotations, which gives rise to the classical understanding of heat. In an exothermic reaction, the activation energy (the energy needed to start the reaction) is less than the energy that is subsequently released, so there is a net release of energy.
Examples.
Some examples of exothermic processes are:
Implications for chemical reactions.
Exothermic chemical reactions are generally more spontaneous than their endothermic counterparts.
In a thermochemical reaction that is exothermic, the heat may be listed among the products of the reaction.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Q > 0."
},
{
"math_id": 1,
"text": "\\Delta H < 0,"
},
{
"math_id": 2,
"text": "\\Delta U = Q + 0 > 0."
}
] | https://en.wikipedia.org/wiki?curid=10201 |
10201642 | Polar alignment | Method of orienting telescopes and other celestial observation devices
Polar alignment is the act of aligning the rotational axis of a telescope's equatorial mount or a sundial's gnomon with a celestial pole to parallel Earth's axis.
Alignment methods.
The method to use differs depending on whether the alignment is taking place in daylight or at night. Furthermore, the method differs if the alignment is done in the Northern Hemisphere or Southern Hemisphere. The purpose of the alignment also must be considered; for example, the value of accuracy is much more significant in astrophotography than in casual stargazing.
Aiming at the pole stars.
In the Northern Hemisphere, sighting Polaris the North Star is the usual procedure for aligning a telescope mount's polar axis parallel to the Earth's axis. Polaris is approximately three-quarters of a degree from the North Celestial Pole, and is easily seen by the naked eye.
σ Octantis, sometimes known as the South Star, can be sighted in the Southern Hemisphere to perform a polar alignment. At magnitude +5.6, it is difficult for inexperienced observers to locate in the sky. Its declination of -88° 57′ 23″ places it 1° 2′ 37″ from the South Celestial Pole. An even closer star, BQ Octantis of magnitude +6.9, lies 10′ from the South Pole as of 2016. Although not visible to the naked eye, it is easily visible in most polar 'scopes. (It will lie closest to the South Pole, namely 9′, in the year 2027.)
Rough alignment method.
In the Northern Hemisphere, rough alignment can be done by visually aligning the axis of the telescope mount with Polaris. In the Southern hemisphere or places where Polaris is not visible, a rough alignment can be performed by ensuring the mount is level, adjusting the latitude adjustment pointer to match the observer's latitude, and aligning the axis of the mount with true south or north by means of a magnetic compass. (This requires taking the local magnetic declination into account). This method can sometimes be adequate for general observing through the eyepiece or for very wide angle astro-imaging with a tripod-mounted camera; it is often used, with an equatorially-mounted telescope, as a starting point in amateur astronomy.
There are ways to improve the accuracy of this method. For example, instead of reading the latitude scale directly, a calibrated precision inclinometer can be used to measure the altitude of the polar axis of the mount. If the setting circles of the mount are then used to find a bright object of known coordinates, the object should mismatch only as to azimuth, so that centering the object by adjusting the azimuth of the mount should complete the polar alignment process. Typically, this provides enough accuracy to allow tracked (i.e. motorized) telephoto images of the sky.
For astro-imaging through a lens or telescope of significant magnification, a more accurate alignment method is necessary to refine the rough alignment, using one of the following approaches.
Polarscope method.
An alignment suitable for visual observation and short-exposure imaging (up to a few minutes) can be achieved with a polar scope. This is a low-magnification telescope mounted co-axially with the mount (and adjusted to maximize the accuracy of this alignment). A special reticle is used to align the mount with Polaris (or, in the Southern Hemisphere, with a group of stars near the polar region). While early polar scopes needed careful adjustment of the mount to match the time of year and day, this process can be simplified using computer apps that calculate the correct position of the reticle. A new-style northern-hemisphere reticle uses a 'clock-face' style with 72 divisions (representing 20-minute intervals) and circles to compensate for the drift of Polaris over around thirty years. Use of this reticle can allow alignment to within an arc minute or two.
Drift alignment method.
Drift alignment is a method to refine the polar alignment after a rough alignment is done. The method is based on attempting to track stars in the sky using the clock drive; any error in the polar alignment will show up as the drift of the stars in the eyepiece/sensor. Adjustments are then made to reduce the drift, and the process is repeated until the tracking is satisfactory. For the polar axis altitude adjustment, one can attempt to track a star low in the east or west. For the azimuth adjustment, one typically attempts to track a star close to the meridian, with declination about 20° from the equator, in the hemisphere opposite of the observing location.
Astrometric (plate) solving.
For telescopes combined with an imaging camera connected to a computer, it is possible to achieve very accurate polar alignment (within 0.1 minutes of arc). An initial rough alignment is first performed using the polar scope. An image can then be captured and a star database is used to identify the exact field of view when aimed at stars near the pole - 'plate solving'. The telescope is then rotated ninety degrees around its right ascension axis and a new 'plate solve' is carried out. The error in the point around which the images rotate compared to the true pole is calculated automatically and the operator can be given simple instructions to adjust the mount for a more accurate polar alignment.
Mathematical, two-star polar alignment.
The polar error in elevation and azimuth can be calculated by pointing the telescope at two stars, or by taking two astrometric solves of two positions, and using the measured error in right ascension and declination. From the difference between the right ascension and declination of the telescope encoder and the second star's position, the elevation and azimuth error of the polar alignment can be calculated. The basic formulas are as follows:
formula_0formula_1
where
formula_2 is Right ascension
formula_3 is Declination
formula_4 is Site latitude
formula_5 is the hour angle of the reference point equals (formula_2 - Local sidereal time)
formula_6 is Error in Right Ascension
formula_7 is Error in Declination
formula_8 is Polar error in elevation (altitude)
formula_9 is Polar error in azimuth
The inverse can be calculated if the above formula is written in matrix notation. So the polar error expressed in Δe and Δa can be calculated from the Δα and Δδ between the telescope encoder and the second reference star.
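A hedged numerical sketch of this inversion is shown below; all input values are made-up examples, angles are in radians, and the function name is chosen here for illustration only.

```python
# Solve the 2x2 linear system above for the polar-axis errors (de, da) given the
# measured offsets (dalpha, ddelta) at a reference star of declination delta and
# hour angle h, observed from a site at latitude phi.
import math

def polar_axis_error(dalpha, ddelta, delta, h, phi):
    # coefficient matrix of the two formulas above
    a11 = math.tan(delta) * math.sin(h)
    a12 = math.sin(phi) - math.cos(phi) * math.tan(delta) * math.cos(h)
    a21 = math.cos(h)
    a22 = math.cos(phi) * math.sin(h)
    det = a11 * a22 - a12 * a21
    de = ( a22 * dalpha - a12 * ddelta) / det
    da = (-a21 * dalpha + a11 * ddelta) / det
    return de, da

# Example: site latitude 50 deg, star at declination 20 deg, hour angle 30 deg.
print(polar_axis_error(0.001, 0.0005,
                       math.radians(20), math.radians(30), math.radians(50)))
```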
Polar Alignment with Excel.
Polar Alignment with Excel is a method for polar alignment of equatorial mountings for astronomical telescopes, using a digital camera and a computer.
Photography.
A digital camera with a standard lens is mounted on the telescope and pointed at the celestial pole. Exposure is set at "B" (Bulb) and an image is taken while the camera is slowly turned around the polar axis. This yields a kind of star-trail image. The beginning and the end of the star trails must be clearly marked with a few seconds of static exposure. Due to the rotation, the information about the current direction of the axis is hidden in the image. Alternatively, two static images can be taken, which differ by a rotation around the polar axis.
Evaluation.
For the evaluation of the images, a special Excel spreadsheet has been developed. For three stars, the rectangular X-Y-coordinates are measured at both ends of their trails or on both static images. In addition, we need the current right ascension and declination of the 3 stars, the longitude and latitude of the observatory, and the date and time the images were taken. The spreadsheet then outputs the necessary corrections of the azimuth and the pole height in degrees and, in an auxiliary field, the corresponding number of turns of the adjustment screws, thus allowing a direct approach to the correct alignment.
The Excel spreadsheet and detailed instructions for use are available for free download at the website of the vhs-observatory Neumuenster.
Equipment.
Crosshair eyepiece.
A crosshair eyepiece is an ordinary ocular with the only difference being that it has a crosshair for aiming and measurement of the angular distance. This is useful in any type of polar alignment, but especially in drift.
Dedicated polar scope.
A small telescope usually with an etched reticle is inserted into the rotational axis of the mount.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{\\displaystyle \\Delta\\alpha=\\Delta e\\cdot \\tan{(\\delta)}\\cdot \\sin(h)+\\Delta a\\cdot\\left(\\sin(\\Phi)-\\cos(\\Phi)\\cdot\\tan(\\delta)\\cdot\\cos(h)\\right)}"
},
{
"math_id": 1,
"text": "\\Delta\\delta=\\Delta e\\cdot\\cos(h)+\\Delta a\\cdot\\cos(\\Phi)\\cdot\\sin(h)"
},
{
"math_id": 2,
"text": "\\alpha"
},
{
"math_id": 3,
"text": "\\delta"
},
{
"math_id": 4,
"text": "\\Phi"
},
{
"math_id": 5,
"text": "h"
},
{
"math_id": 6,
"text": "\\Delta\\alpha"
},
{
"math_id": 7,
"text": "\\Delta\\delta"
},
{
"math_id": 8,
"text": "\\Delta e"
},
{
"math_id": 9,
"text": "\\Delta a"
}
] | https://en.wikipedia.org/wiki?curid=10201642 |
10202429 | Relative change | Comparisons in quantitative sciences
In any quantitative science, the terms relative change and relative difference are used to compare two quantities while taking into account the "sizes" of the things being compared, i.e. dividing by a "standard" or "reference" or "starting" value. The comparison is expressed as a ratio and is a unitless number. By multiplying these ratios by 100 they can be expressed as percentages so the terms percentage change, percent(age) difference, or relative percentage difference are also commonly used. The terms "change" and "difference" are used interchangeably.
Relative change is often used as a quantitative indicator of quality assurance and quality control for repeated measurements where the outcomes are expected to be the same. A special case of percent change (relative change expressed as a percentage) called "percent error" occurs in measuring situations where the reference value is the accepted or actual value (perhaps theoretically determined) and the value being compared to it is experimentally determined (by measurement).
The relative change formula is not well-behaved under many conditions. Various alternative formulas, called "indicators of relative change", have been proposed in the literature. Several authors have found "log change" and log points to be satisfactory indicators, but these have not seen widespread use.
Definition.
Given two numerical quantities, "vref" and "v" with "vref" some "reference value," their "actual change", "actual difference", or "absolute change" is Δ"v" = "v" − "vref". The term absolute difference is sometimes also used even though the absolute value is not taken; the sign of Δ typically is uniform, e.g. across an increasing data series. If the relationship of the value with respect to the reference value (that is, larger or smaller) does not matter in a particular application, the absolute value may be used in place of the actual change in the above formula to produce a value for the relative change which is always non-negative. The actual difference is not usually a good way to compare the numbers, in particular because it depends on the unit of measurement. For instance, 1 m is the same as 100 cm, but the absolute difference between 1 m and 2 m is 1, while the absolute difference between 100 cm and 200 cm is 100, giving the impression of a larger difference. But even with constant units, the relative change helps judge the importance of the respective change. For example, a given increase in the price of a valuable item is considered big if the starting price is low, but rather small when the starting price is already high.
We can adjust the comparison to take into account the "size" of the quantities involved, by defining, for positive values of "vref":
formula_0
The relative change is independent of the unit of measurement employed; for example, the relative change from 1 m to 2 m is 1 (a 100% increase), the same as the relative change from 100 cm to 200 cm. The relative change is not defined if the reference value ("vref") is zero, and gives negative values for positive increases if "vref" is negative, hence it is not usually defined for negative reference values either. For example, we might want to calculate the relative change of −10 to −6. The above formula gives (−6 − (−10)) / (−10) = 4 / (−10) = −0.4, indicating a decrease, yet in fact the reading increased.
Measures of relative change are unitless numbers expressed as a fraction. Corresponding values of percent change would be obtained by multiplying these values by 100 (and appending the % sign to indicate that the value is a percentage).
Domain.
The domain restriction of relative change to positive numbers often poses a constraint. To avoid this problem it is common to take the absolute value, so that the relative change formula works correctly for all nonzero values of "v"ref:
formula_1
This still does not solve the issue when the reference is zero. It is common to instead use an indicator of relative change, and take the absolute values of both v and formula_2. Then the only problematic case is formula_3, which can usually be addressed by appropriately extending the indicator. For example, for arithmetic mean this formula may be used:
formula_4
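The two expressions above translate directly into code; the following Python sketch is illustrative only.

```python
# Classical relative change (with |v_ref| in the denominator) and the symmetric
# indicator d_r that divides by the mean of |x| and |y|, as defined above.
def relative_change(v_ref, v):
    return (v - v_ref) / abs(v_ref)            # undefined for v_ref == 0

def symmetric_relative_difference(x, y):
    if x == 0 and y == 0:
        return 0.0
    return abs(x - y) / ((abs(x) + abs(y)) / 2)

print(relative_change(100_000, 110_000))        # 0.1, i.e. +10%
print(symmetric_relative_difference(0, 5))      # 2.0 -- still defined at a zero reference
```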
Percent error.
The percent error is a special case of the percentage form of relative change calculated from the absolute change between the experimental (measured) and theoretical (accepted) values, and dividing by the theoretical (accepted) value.
formula_5
The terms "Experimental" and "Theoretical" used in the equation above are commonly replaced with similar terms. Other terms used for "experimental" could be "measured," "calculated," or "actual" and another term used for "theoretical" could be "accepted." Experimental value is what has been derived by use of calculation and/or measurement and is having its accuracy tested against the theoretical value, a value that is accepted by the scientific community or a value that could be seen as a goal for a successful result.
Although it is common practice to use the absolute value version of relative change when discussing percent error, in some situations, it can be beneficial to remove the absolute values to provide more information about the result. Thus, if an experimental value is less than the theoretical value, the percent error will be negative. This negative result provides additional information about the experimental result. For example, experimentally calculating the speed of light and coming up with a negative percent error says that the experimental value is a velocity that is less than the speed of light. This is a big difference from getting a positive percent error, which means the experimental value is a velocity that is greater than the speed of light (violating the theory of relativity) and is a newsworthy result.
The percent error equation, when rewritten by removing the absolute values, becomes:
formula_6
It is important to note that the two values in the numerator do not commute. Therefore, it is vital to preserve the order as above: subtract the theoretical value from the experimental value and not vice versa.
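A short illustrative sketch of the signed form of the formula follows; the "measured" speed-of-light value used below is a made-up example, echoing the discussion above.

```python
# Signed percent error: experimental minus theoretical, divided by |theoretical|.
def percent_error(experimental, theoretical):
    return (experimental - theoretical) / abs(theoretical) * 100

# Example: a hypothetical measured speed of light of 2.90e8 m/s vs the accepted 2.998e8 m/s.
print(round(percent_error(2.90e8, 2.998e8), 2))   # about -3.27: negative, i.e. slower than c
```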
Percentage change.
A percentage change is a way to express a change in a variable. It represents the relative change between the old value and the new one.
For example, if a house is worth $100,000 today and the year after its value goes up to $110,000, the percentage change of its value can be expressed as
formula_7
It can then be said that the worth of the house went up by 10%.
More generally, if "V"1 represents the old value and "V"2 the new one,
formula_8
Some calculators directly support this via a %CH or Δ% function.
When the variable in question is a percentage itself, it is better to talk about its change by using percentage points, to avoid confusion between relative difference and absolute difference.
Example of percentages of percentages.
If a bank were to raise the interest rate on a savings account from 3% to 4%, the statement that "the interest rate was increased by 1%" would be incorrect and misleading. The absolute change in this situation is 1 percentage point (4% − 3%), but the relative change in the interest rate is:
formula_9
In general, the term "percentage point(s)" indicates an absolute change or difference of percentages, while the percent sign or the word "percentage" refers to the relative change or difference.
Examples.
Comparisons.
Car "M" costs $50,000 and car "L" costs $40,000. We wish to compare these costs. With respect to car "L", the absolute difference is $10,000 = $50,000 − $40,000. That is, car "M" costs $10,000 more than car "L". The relative difference is,
formula_10
and we say that car "M" costs 25% "more than" car "L". It is also common to express the comparison as a ratio, which in this example is,
formula_11
and we say that car "M" costs 125% "of" the cost of car "L".
In this example the cost of car "L" was considered the reference value, but we could have made the choice the other way and considered the cost of car "M" as the reference value. The absolute difference is now −$10,000 = $40,000 − $50,000 since car "L" costs $10,000 less than car "M". The relative difference,
formula_12
is also negative since car "L" costs 20% "less than" car "M". The ratio form of the comparison,
formula_13
says that car "L" costs 80% "of" what car "M" costs.
It is the use of the words "of" and "less/more than" that distinguish between ratios and relative differences.
Indicators of relative change.
The (classical) relative change above is but one of the possible measures/indicators of relative change. An "indicator of relative change" from "x" (initial or reference value) to "y" (new value) formula_14 is a binary real-valued function defined for the domain of interest which satisfies the following properties:
1. It has the appropriate sign: formula_15
2. It is independent of the unit of measurement: for every formula_16, formula_17.
3. It is normalized: formula_18
The normalization condition is motivated by the observation that R scaled by a constant formula_19 still satisfies the other conditions besides normalization. Furthermore, due to the independence condition, every R can be written as a single-argument function H of the ratio formula_20. The normalization condition is then that formula_21. This implies all indicators behave like the classical one when formula_20 is close to 1.
Usually the indicator of relative change is presented as the actual change Δ scaled by some function of the values "x" and "y", say "f"("x", "y").
formula_22
As with classical relative change, the general relative change is undefined if "f"("x", "y") is zero. Various choices for the function "f"("x", "y") have been proposed:
As can be seen in the table, all but the first two indicators have, as their denominator, a "mean". One of the properties of a mean function formula_23 is formula_24, which means that all such indicators have a "symmetry" property that the classical relative change lacks: formula_25. This agrees with the intuition that a relative change from "x" to "y" should have the same magnitude as a relative change in the opposite direction, "y" to "x", just as the relation formula_26 suggests.
Maximum mean change has been recommended when comparing floating point values in programming languages for equality with a certain tolerance. Another application is in the computation of approximation errors when the relative error of a measurement is required. Minimum mean change has been recommended for use in econometrics. Logarithmic change has been recommended as a general-purpose replacement for relative change and is discussed more below.
Tenhunen defines a general relative difference function from "L" (reference value) to "K":
formula_27
which leads to
formula_28
In particular for the special cases formula_29,
formula_30
Logarithmic change.
Of these indicators of relative change, the most natural arguably is the natural logarithm (ln) of the ratio of the two numbers (final and initial), called "log change". Indeed, when formula_31, the following approximation holds:
formula_32
In the same way that relative change is scaled by 100 to get percentages, formula_33 can be scaled by 100 to get what is commonly called log points. Log points are equivalent to the unit centinepers (cNp) when measured for root-power quantities. This quantity has also been referred to as a log percentage and denoted " L%".
Since the derivative of the natural log at 1 is 1, log points are approximately equal to percent change for small differences – for example an increase of 1% equals an increase of 0.995 cNp, and a 5% increase gives a 4.88 cNp increase. This approximation property does not hold for other choices of logarithm base, which introduce a scaling factor due to the derivative not being 1. Log points can thus be used as a replacement for percent change.
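A short illustrative snippet (not from the article) reproduces the numbers quoted above (an increase of 1% ≈ 0.995 cNp, 5% ≈ 4.88 cNp) and shows how the approximation degrades for larger changes:

```python
# Log points (centinepers): 100 * ln(V1/V0).
import math

def log_points(v0, v1):
    return 100 * math.log(v1 / v0)

print(round(log_points(100, 101), 3))   # 0.995 cNp for a +1% change
print(round(log_points(100, 105), 2))   # 4.88 cNp for a +5% change
print(round(log_points(100, 150), 1))   # 40.5 cNp for a +50% change -- approximation degrades
```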
Additivity.
Using log change has the advantages of additivity compared to relative change. Specifically, when using log change, the total change after a series of changes equals the sum of the changes. With percent, summing the changes is only an approximation, with larger error for larger changes. For example:
Note that in the above table, since "relative change 0" (respectively "relative change 1") has the same numerical value as "log change 0" (respectively "log change 1"), it does not correspond to the same variation. The conversion between relative and log changes may be computed as formula_34.
By additivity, formula_35, and therefore additivity implies a sort of symmetry property, namely formula_36 and thus the magnitude of a change expressed in log change is the same whether "V"0 or "V"1 is chosen as the reference. In contrast, for relative change, formula_37, with the difference formula_38 becoming larger as "V"1 or "V"0 approaches 0 while the other remains fixed. For example:
Here 0+ means taking the limit from above towards 0.
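A small numerical sketch of the additivity and antisymmetry properties discussed above (the values are arbitrary examples):

```python
# Log changes add exactly along a chain of values and are antisymmetric;
# relative changes do neither.
import math

v0, v1, v2 = 100.0, 150.0, 120.0

print(math.isclose(math.log(v1/v0) + math.log(v2/v1), math.log(v2/v0)))   # True
print(math.log(v1/v0), -math.log(v0/v1))          # equal (both ~0.4055)
print((v1 - v0)/v0, -(v0 - v1)/v1)                # 0.5 vs ~0.3333 -- not equal
```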
Uniqueness and extensions.
The log change is the unique two-variable function that is additive, and whose linearization matches relative change. There is a family of additive difference functions formula_39 for any formula_40, such that absolute change is formula_41 and log change is formula_42.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\text{Relative change}(v_\\text{ref}, v) = \\frac{\\text{Actual change}}{v_\\text{ref}} = \\frac{v - v_\\text{ref}}{v_\\text{ref}}."
},
{
"math_id": 1,
"text": " \\text{Relative change}(v_\\text{ref}, v) = \\frac{v - v_\\text{ref}}{|v_\\text{ref}|}."
},
{
"math_id": 2,
"text": "v_\\text{reference}"
},
{
"math_id": 3,
"text": "v=v_\\text{reference}=0"
},
{
"math_id": 4,
"text": "d_r(x,y)=\\frac{|x-y|}{(|x|+|y|)/2},\\ d_r(0,0)=0"
},
{
"math_id": 5,
"text": "\\%\\text{ Error} = \\frac{|\\text{Experimental}-\\text{Theoretical}|}{|\\text{Theoretical}|}\\times 100."
},
{
"math_id": 6,
"text": "\\%\\text{ Error} = \\frac{\\text{Experimental}-\\text{Theoretical}}{|\\text{Theoretical}|}\\times100."
},
{
"math_id": 7,
"text": " \\frac{110000-100000}{100000} = 0.1 = 10\\%."
},
{
"math_id": 8,
"text": "\\text{Percentage change} = \\frac{\\Delta V}{V_1} = \\frac{V_2 - V_1}{V_1} \\times100\\% ."
},
{
"math_id": 9,
"text": "\\frac{4\\% - 3\\%}{3\\%} = 0.333\\ldots = 33\\frac{1}{3}\\%."
},
{
"math_id": 10,
"text": "\\frac{\\$10,000}{\\$40,000} = 0.25 = 25\\%,"
},
{
"math_id": 11,
"text": "\\frac{\\$50,000}{\\$40,000} = 1.25 = 125\\%,"
},
{
"math_id": 12,
"text": "\\frac{-\\$10,000}{\\$50,000} = -0.20 = -20\\%"
},
{
"math_id": 13,
"text": "\\frac{\\$40,000}{\\$50,000} = 0.8 = 80\\%"
},
{
"math_id": 14,
"text": "R(x,y)"
},
{
"math_id": 15,
"text": "\\begin{cases}R(x,y)> 0 &\\text{iff } y>x \\\\ R(x,y)= 0 &\\text{iff } y=x \\\\ R(x,y)< 0 &\\text{iff } y<x \\end{cases}."
},
{
"math_id": 16,
"text": "a>0"
},
{
"math_id": 17,
"text": "R(ax,ay)=R(x,y)"
},
{
"math_id": 18,
"text": "\\left.\\frac{d}{dy} R(1,y) \\right|_{y=1} = 1"
},
{
"math_id": 19,
"text": "c>0"
},
{
"math_id": 20,
"text": "y/x"
},
{
"math_id": 21,
"text": "H'(1) = 1"
},
{
"math_id": 22,
"text": " \\text{Relative change}(x, y) = \\frac{\\text{Actual change}\\,\\Delta}{f(x,y)} = \\frac{y - x}{f(x,y)}."
},
{
"math_id": 23,
"text": "m(x,y)"
},
{
"math_id": 24,
"text": "m(x,y)=m(y,x)"
},
{
"math_id": 25,
"text": "R(x,y)=-R(y,x)"
},
{
"math_id": 26,
"text": "\\frac y x = \\frac 1 \\frac{x}{y}"
},
{
"math_id": 27,
"text": "\nH(K,L) = \\begin{cases}\n \\int_1^{K/L} t^{c-1} dt & \\text{when } K>L \\\\\n -\\int_{K/L}^1 t^{c-1} dt & \\text{when } K<L\n\\end{cases}\n"
},
{
"math_id": 28,
"text": "\nH(K,L) = \\begin{cases}\n \\frac{1}{c} \\cdot ((K/L)^c-1) & c \\neq 0 \\\\\n \\ln(K/L) & c = 0, K > 0, L > 0\n\\end{cases}\n"
},
{
"math_id": 29,
"text": "c=\\pm 1"
},
{
"math_id": 30,
"text": "\nH(K,L) = \\begin{cases}\n (K-L)/K & c=-1 \\\\\n (K-L)/L & c=1\n\\end{cases}\n"
},
{
"math_id": 31,
"text": "\\left | \\frac{V_1 - V_0}{V_0} \\right | \\ll 1"
},
{
"math_id": 32,
"text": " \\ln\\frac{V_1}{V_0} = \\int_{V_0}^{V_1}\\frac{{\\mathrm d}V}{V} \\approx \\int_{V_0}^{V_1}\\frac{{\\mathrm d}V}{V_0} = \\frac{V_1 - V_0}{V_0} = \\text{classical relative change}"
},
{
"math_id": 33,
"text": "\\ln\\frac{V_1}{V_0}"
},
{
"math_id": 34,
"text": "\\text{log change} = \\ln(1 + \\text{relative change})"
},
{
"math_id": 35,
"text": "\\ln\\frac{V_1}{V_0} + \\ln\\frac{V_0}{V_1} = 0"
},
{
"math_id": 36,
"text": "\\ln\\frac{V_1}{V_0} = - \\ln\\frac{V_0}{V_1}"
},
{
"math_id": 37,
"text": "\\frac{V_1 - V_0}{V_0} \\neq - \\frac{V_0 - V_1}{V_1}"
},
{
"math_id": 38,
"text": "\\frac{(V_1 - V_0)^2}{V_0 V_1}"
},
{
"math_id": 39,
"text": "F_\\lambda(x,y)"
},
{
"math_id": 40,
"text": "\\lambda\\in\\mathbb{R}"
},
{
"math_id": 41,
"text": "F_0"
},
{
"math_id": 42,
"text": "F_1"
}
] | https://en.wikipedia.org/wiki?curid=10202429 |
1020537 | Bow shock | Shock wave caused by blowing stellar wind
In astrophysics, bow shocks are shock waves in regions where the conditions of density and pressure change dramatically due to blowing stellar wind. Bow shock occurs when the magnetosphere of an astrophysical object interacts with the nearby flowing ambient plasma such as the solar wind. For Earth and other magnetized planets, it is the boundary at which the speed of the stellar wind abruptly drops as a result of its approach to the magnetopause. For stars, this boundary is typically the edge of the astrosphere, where the stellar wind meets the interstellar medium.
Description.
The defining criterion of a shock wave is that the bulk velocity of the plasma drops from "supersonic" to "subsonic", where the speed of sound cs is defined by
formula_0
where formula_1 is the ratio of specific heats, formula_2 is the pressure, and formula_3 is the density of the plasma.
A common complication in astrophysics is the presence of a magnetic field. For instance, the charged particles making up the solar wind follow spiral paths along magnetic field lines. The velocity of each particle as it gyrates around a field line can be treated similarly to a thermal velocity in an ordinary gas, and in an ordinary gas the mean thermal velocity is roughly the speed of sound. At the bow shock, the bulk forward velocity of the wind (which is the component of the velocity parallel to the field lines about which the particles gyrate) drops below the speed at which the particles are gyrating.
Around the Earth.
The best-studied example of a bow shock is that occurring where the Sun's wind encounters Earth's magnetopause, although bow shocks occur around all planets, both unmagnetized, such as
Mars and Venus, and magnetized, such as Jupiter or Saturn. Earth's bow shock is about 17 km thick and located about 90,000 km from the planet.
At comets.
Bow shocks form at comets as a result of the interaction between the solar wind and the cometary ionosphere. Far away from the Sun, a comet is an icy boulder without an atmosphere. As it approaches the Sun, the heat of the sunlight causes gas to be released from the cometary nucleus, creating an atmosphere called a coma. The coma is partially ionized by the sunlight, and when the solar wind passes through this ion coma, the bow shock appears.
The first observations were made in the 1980s and 90s as several spacecraft flew by comets 21P/Giacobini–Zinner, 1P/Halley, and 26P/Grigg–Skjellerup. It was then found that the bow shocks at comets are wider and more gradual than the sharp planetary bow shocks seen at, for example, Earth. These observations were all made near perihelion, when the bow shocks already were fully developed.
The Rosetta spacecraft followed comet 67P/Churyumov–Gerasimenko from far out in the solar system, at a heliocentric distance of 3.6 AU, in toward perihelion at 1.24 AU, and back out again. This allowed Rosetta to observe the bow shock as it formed when the outgassing increased during the comet's journey toward the Sun. In this early stage of development the shock was called the "infant bow shock". The infant bow shock is asymmetric and, relative to the distance to the nucleus, wider than fully developed bow shocks.
Around the Sun.
For several decades, the solar wind has been thought to form a bow shock at the edge of the heliosphere, where it collides with the surrounding interstellar medium. Moving away from the Sun, the point where the solar wind flow becomes subsonic is the termination shock, the point where the interstellar medium and solar wind pressures balance is the heliopause, and the point where the flow of the interstellar medium becomes subsonic would be the bow shock. This solar bow shock was thought to lie at a distance around 230 AU from the Sun – more than twice the distance of the termination shock as encountered by the Voyager spacecraft.
However, data obtained in 2012 from NASA's Interstellar Boundary Explorer (IBEX) indicates the lack of any solar bow shock. Along with corroborating results from the Voyager spacecraft, these findings have motivated some theoretical refinements; current thinking is that formation of a bow shock is prevented, at least in the galactic region through which the Sun is passing, by a combination of the strength of the local interstellar magnetic-field and of the relative velocity of the heliosphere.
Around other stars.
In 2006, a far infrared bow shock was detected near the AGB star R Hydrae.
Bow shocks are also a common feature in Herbig Haro objects, in which a much stronger collimated outflow of gas and dust from the star interacts with the interstellar medium, producing bright bow shocks that are visible at optical wavelengths.
The Hubble Space Telescope has captured images of bow shocks made of dense gases and plasma in the Orion Nebula.
Around massive stars.
If a massive star is a runaway star, it can form an infrared bow shock that is detectable at 24 μm and sometimes at 8 μm with the Spitzer Space Telescope or in the W3/W4 channels of WISE. In 2016, Kobulnicky et al. created the largest Spitzer/WISE bow-shock catalog to date, with 709 bow-shock candidates. To obtain a larger bow-shock catalog, The Milky Way Project (a citizen science project) aims to map infrared bow shocks in the galactic plane. This larger catalog will help in understanding the stellar winds of massive stars.
The closest stars with infrared bow-shocks are:
Most of them belong to the Scorpius–Centaurus association and Theta Carinae, which is the brightest star of IC 2602, might also belong to the Lower Centaurus–Crux subgroup. Epsilon Persei does not belong to this stellar association.
Magnetic draping effect.
A similar effect, known as the magnetic draping effect, occurs when a super-Alfvenic plasma flow impacts an unmagnetized object such as what happens when the solar wind reaches the ionosphere of Venus: the flow deflects around the object draping the magnetic field along the wake flow.
The condition for the flow to be super-Alfvenic means that the relative velocity between the flow and object, formula_4, is larger than the local Alfven velocity formula_5 which means a large Alfvenic Mach number: formula_6. For unmagnetized and electrically conductive objects, the ambient field creates electric currents inside the object, and into the surrounding plasma, such that the flow is deflected and slowed as the time scale of magnetic dissipation is much longer than the time scale of magnetic field advection. The induced currents in turn generate magnetic fields that deflect the flow creating a bow shock. For example, the ionospheres of Mars and Venus provide the conductive environments for the interaction with the solar wind. Without an ionosphere, the flowing magnetized plasma is absorbed by the non-conductive body. The latter occurs, for example, when the solar wind interacts with Moon which has no ionosphere. In magnetic draping, the field lines are wrapped and draped around the leading side of the object creating a narrow sheath which is similar to the bow shocks in the planetary magnetospheres. The concentrated magnetic field increases until the ram pressure becomes comparable to the magnetic pressure in the sheath:
formula_7
where formula_8 is the density of the plasma, formula_9 is the draped magnetic field near the object, and formula_10 is the relative speed between the plasma and the object. Magnetic draping has been detected around planets, moons, solar coronal mass ejections, and galaxies.
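As a rough, hedged illustration of this pressure balance, the following sketch estimates the draped field strength for order-of-magnitude solar-wind values; the density and speed used are typical assumed values, not measurements from the text.

```python
# Estimate B0 from rho0 * v^2 = B0^2 / (2 * mu0), the pressure balance above.
import math

mu0 = 4 * math.pi * 1e-7          # vacuum permeability, T m / A
m_p = 1.67e-27                    # proton mass, kg

n = 5e6                           # assumed number density, m^-3  (5 cm^-3)
v = 4e5                           # assumed flow speed, m/s       (400 km/s)
rho0 = n * m_p

B0 = math.sqrt(2 * mu0 * rho0 * v**2)
print(f"{B0 * 1e9:.0f} nT")       # a few tens of nanotesla
```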
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "c_s^2 = \\gamma p/ \\rho "
},
{
"math_id": 1,
"text": " \\gamma "
},
{
"math_id": 2,
"text": " p "
},
{
"math_id": 3,
"text": " \\rho "
},
{
"math_id": 4,
"text": " v"
},
{
"math_id": 5,
"text": "V_A"
},
{
"math_id": 6,
"text": " M_A \\gg 1"
},
{
"math_id": 7,
"text": "\\rho_0 v^2 = \\frac{B_0^2}{2\\mu_0},"
},
{
"math_id": 8,
"text": " \\rho_0 "
},
{
"math_id": 9,
"text": "B_0 "
},
{
"math_id": 10,
"text": " v "
}
] | https://en.wikipedia.org/wiki?curid=1020537 |
1020661 | Lie–Kolchin theorem | Theorem in the representation theory of linear algebraic groups
In mathematics, the Lie–Kolchin theorem is a theorem in the representation theory of linear algebraic groups; Lie's theorem is the analog for linear Lie algebras.
It states that if "G" is a connected and solvable linear algebraic group defined over an algebraically closed field and
formula_0
a representation on a nonzero finite-dimensional vector space "V", then there is a one-dimensional linear subspace "L" of "V" such that
formula_1
That is, ρ("G") has an invariant line "L", on which "G" therefore acts through a one-dimensional representation. This is equivalent to the statement that "V" contains a nonzero vector "v" that is a common (simultaneous) eigenvector for all formula_2.
It follows directly that every irreducible finite-dimensional representation of a connected and solvable linear algebraic group "G" has dimension one. In fact, this is another way to state the Lie–Kolchin theorem.
The result for Lie algebras was proved by Sophus Lie (1876) and for algebraic groups was proved by Ellis Kolchin (1948, p.19).
The Borel fixed point theorem generalizes the Lie–Kolchin theorem.
Triangularization.
Sometimes the theorem is also referred to as the "Lie–Kolchin triangularization theorem" because by induction it implies that with respect to a suitable basis of "V" the image formula_3 has a "triangular shape"; in other words, the image group formula_3 is conjugate in GL("n","K") (where "n" = dim "V") to a subgroup of the group T of upper triangular matrices, the standard Borel subgroup of GL("n","K"): the image is simultaneously triangularizable.
The theorem applies in particular to a Borel subgroup of a semisimple linear algebraic group "G".
Counter-example.
If the field "K" is not algebraically closed, the theorem can fail. The standard unit circle, viewed as the set of complex numbers formula_4 of absolute value one is a one-dimensional commutative (and therefore solvable) linear algebraic group over the real numbers which has a two-dimensional representation into the special orthogonal group SO(2) without an invariant (real) line. Here the image formula_5 of formula_6 is the orthogonal matrix
formula_7
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rho\\colon G \\to GL(V)"
},
{
"math_id": 1,
"text": "\\rho(G)(L) = L."
},
{
"math_id": 2,
"text": " \\rho(g), \\,\\, g \\in G "
},
{
"math_id": 3,
"text": "\\rho(G)"
},
{
"math_id": 4,
"text": " \\{ x+iy \\in \\mathbb{C} \\mid x^2+y^2=1 \\} "
},
{
"math_id": 5,
"text": " \\rho(z)"
},
{
"math_id": 6,
"text": " z=x+iy "
},
{
"math_id": 7,
"text": " \\begin{pmatrix} x & y \\\\ -y & x \\end{pmatrix}."
}
] | https://en.wikipedia.org/wiki?curid=1020661 |
10208822 | Möbius aromaticity | In organic chemistry, Möbius aromaticity is a special type of aromaticity believed to exist in a number of organic molecules. In terms of molecular orbital theory these compounds have in common a monocyclic array of molecular orbitals in which there is an odd number of out-of-phase overlaps, the opposite pattern compared to the aromatic character of Hückel systems. The nodal plane of the orbitals, viewed as a ribbon, is a Möbius strip, rather than a cylinder, hence the name. The pattern of orbital energies is given by a rotated Frost circle (with the edge of the polygon on the bottom instead of a vertex), so systems with 4"n" electrons are aromatic, while those with 4"n" + 2 electrons are anti-aromatic/non-aromatic. Due to the incrementally twisted nature of the orbitals of a Möbius aromatic system, stable Möbius aromatic molecules need to contain at least 8 electrons, although 4-electron Möbius aromatic transition states are well known in the context of the Dewar–Zimmerman framework for pericyclic reactions. Möbius molecular systems were considered in 1964 by Edgar Heilbronner by application of the Hückel method, but the first such isolable compound was not synthesized until 2003, by the group of Rainer Herges. However, the fleeting "trans-"C9H9+ cation, one conformation of which is shown on the right, was proposed to be a Möbius aromatic reactive intermediate in 1998, based on computational and experimental evidence.
Hückel-Möbius aromaticity.
The Herges compound (6 in the image below) was synthesized in several photochemical cycloaddition reactions from tetradehydrodianthracene 1 and the ladderane syn-tricyclooctadiene 2 as a substitute for cyclooctatetraene.
Intermediate 5 was a mixture of 2 isomers and the final product 6 a mixture of 5 isomers with different cis and trans configurations. One of them was found to have a C2 molecular symmetry corresponding to a Möbius aromatic, and another, Hückel, isomer was found with Cs symmetry. Despite having 16 electrons in its pi system (a 4"n" count, antiaromatic by the Hückel rule), the Heilbronner prediction was borne out because, according to Herges, the Möbius compound was found to have aromatic properties. With bond lengths deduced from X-ray crystallography, a HOMA value of 0.50 was obtained (for the polyene part alone) and 0.35 for the whole compound, which qualifies it as moderately aromatic.
It was pointed out by Henry Rzepa that the conversion of intermediate 5 to 6 can proceed by either a Hückel or a Möbius transition state.
The difference was demonstrated in a hypothetical pericyclic ring opening reaction to cyclododecahexaene. The Hückel TS (left) involves 6 electrons (arrow pushing in red) with Cs molecular symmetry conserved throughout the reaction. The ring opening is disrotatory and suprafacial and both bond length alternation and NICS values indicate that the 6 membered ring is aromatic. The Möbius TS with 8 electrons on the other hand has lower computed activation energy and is characterized by C2 symmetry, a conrotatory and antarafacial ring opening and 8-membered ring aromaticity.
Another interesting system is the cyclononatetraenyl cation explored for over 30 years by Paul v. R. Schleyer et al. This reactive intermediate is implied in the solvolysis of the bicyclic chloride 9-deutero-9'-chlorobicyclo[6.1.0]-nonatriene 1 to the indene dihydroindenol 4. The starting chloride is deuterated in only one position but in the final product deuterium is distributed at every available position. This observation is explained by invoking a twisted 8-electron cyclononatetraenyl cation 2 for which a NICS value of -13.4 (outsmarting benzene) is calculated. A more recent study, however, suggests that the stability of "trans"-C9H9+ is not much different in energy compared to a Hückel topology isomer. The same study suggested that for [13]annulenyl cation, the Möbius topology penta-"trans"-C13H13+ is a global energy minimum and predicts that it may be directly observable.
In 2005 the same P. v. R. Schleyer questioned the 2003 Herges claim: he analyzed the same crystallographic data and concluded that there was indeed a large degree of bond length alternation resulting in a HOMA value of -0.02, a computed NICS value of -3.4 ppm also did not point towards aromaticity and (also inferred from a computer model) steric strain would prevent effective pi-orbital overlap.
A Hückel-Möbius aromaticity switch (2007) has been described based on a 28 pi-electron porphyrin system:
The phenylene rings in this molecule are free to rotate forming a set of conformers: one with a Möbius half-twist and another with a Hückel double-twist (a figure-eight configuration) of roughly equal energy.
In 2014, Zhu and Xia (with the help of Schleyer) synthesized a planar Möbius system that consisted of two pentene rings connected with an osmium atom. They formed derivatives where osmium had 16 and 18 electrons and determined that Craig–Möbius aromaticity is more important for the stabilization of the molecule than the metal's electron count.
Transition states.
In contrast to the rarity of Möbius aromatic "ground state" molecular systems, there are many examples of pericyclic "transition states" that exhibit Möbius aromaticity. The classification of a pericyclic transition state as either Möbius or Hückel topology determines whether 4"N" or 4"N" + 2 electrons are required to make the transition state aromatic or antiaromatic, and therefore, allowed or forbidden, respectively. Based on the energy level diagrams derived from Hückel MO theory, (4"N" + 2)-electron Hückel and (4"N")-electron Möbius transition states are aromatic and allowed, while (4"N" + 2)-electron Möbius and (4"N")-electron Hückel transition states are antiaromatic and forbidden. This is the basic premise of the Möbius-Hückel concept.
Derivation of Hückel MO theory energy levels for Möbius topology.
From the figure above, it can also be seen that the interaction between two consecutive formula_0 AOs is attenuated by a factor of formula_1 relative to the usual Hückel system, owing to the incremental twisting between orbitals, where formula_2 is the angle of twist between consecutive orbitals. For this reason the resonance integral formula_3 is given by
formula_4,
where formula_5 is the standard Hückel resonance integral value (with completely parallel orbitals).
Nevertheless, after going all the way around, the "N"th and 1st orbitals are almost completely out of phase. (If the twisting were to continue after the formula_6th orbital, the formula_7st orbital would be exactly phase-inverted compared to the 1st orbital). For this reason, in the Hückel matrix the resonance integral between carbon formula_8 and formula_6 is formula_9.
For the generic formula_6 carbon Möbius system, the Hamiltonian matrix formula_10 is:
formula_11.
Eigenvalues for this matrix can now be found, which correspond to the energy levels of the Möbius system. Since formula_10 is a formula_12 matrix, we will have formula_6 eigenvalues formula_13 and formula_6 MOs. Defining the variable
formula_14,
we have:
formula_15.
To find nontrivial solutions to this equation, we set the determinant of this matrix to zero to obtain
formula_16.
Hence, we find the energy levels for a cyclic system with Möbius topology,
formula_17.
In contrast, recall the energy levels for a cyclic system with Hückel topology,
formula_18.
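The closed-form expressions above can be checked numerically by diagonalizing the Hückel matrix directly. The sketch below is an illustration only (assuming NumPy is available); it uses placeholder numerical values α = 0 and β′ = 1, since only the pattern of levels matters here.

```python
import numpy as np

def huckel_energies(n, mobius=True, alpha=0.0, beta=1.0):
    """Eigenvalues of an n-membered cyclic Hückel system.

    mobius=True applies the anti-periodic (Möbius) boundary condition, i.e. the
    corner resonance integrals are -beta; mobius=False gives an ordinary Hückel
    ring. alpha and beta are placeholder values for the Coulomb and
    (twist-attenuated) resonance integrals.
    """
    H = np.zeros((n, n))
    np.fill_diagonal(H, alpha)
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = beta
    H[0, -1] = H[-1, 0] = -beta if mobius else beta
    return np.sort(np.linalg.eigvalsh(H))

N = 8
numeric = huckel_energies(N, mobius=True)
analytic = np.sort([2.0 * np.cos((2 * k + 1) * np.pi / N) for k in range(N)])
print(np.allclose(numeric, analytic))  # True: matches E_k = alpha + 2*beta'*cos((2k+1)*pi/N)
```

For even N every Möbius level found this way is doubly degenerate, which is why the closed-form expression only needs the distinct values of k up to ⌈N/2⌉ − 1.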
References.
<templatestyles src="Reflist/styles.css" />
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p_z"
},
{
"math_id": 1,
"text": "\\cos\\omega"
},
{
"math_id": 2,
"text": "\\omega=\\pi/N"
},
{
"math_id": 3,
"text": "\\beta^\\prime"
},
{
"math_id": 4,
"text": "\\beta^\\prime=\\beta\\cos(\\pi/N)"
},
{
"math_id": 5,
"text": "\\beta"
},
{
"math_id": 6,
"text": "N"
},
{
"math_id": 7,
"text": "(N+1)"
},
{
"math_id": 8,
"text": "1"
},
{
"math_id": 9,
"text": "-\\beta^\\prime"
},
{
"math_id": 10,
"text": "\\mathbf{H}"
},
{
"math_id": 11,
"text": "\n\\mathbf{H}=\n\\begin{pmatrix}\n \\alpha& \\beta' & 0 &\\cdots& -\\beta' \\\\\n \\beta' & \\alpha& \\beta' & \\cdots & 0 \\\\\n 0 & \\beta' & \\alpha & \\cdots & 0 \\\\\n \\vdots &\\vdots &\\vdots &\\ddots &\\vdots \\\\\n -\\beta' &0& 0 & \\cdots & \\alpha\n\\end{pmatrix}\n"
},
{
"math_id": 12,
"text": "N\\times N"
},
{
"math_id": 13,
"text": "E_k"
},
{
"math_id": 14,
"text": "x_k=\\frac{\\alpha-E_k}{\\beta'}"
},
{
"math_id": 15,
"text": "\n \\begin{pmatrix}\n x_k& 1 & 0 &\\cdots& -1 \\\\\n 1 & x_k& 1 & \\cdots & 0 \\\\\n 0 & 1 & x_k & \\cdots & 0 \\\\\n \\vdots &\\vdots &\\vdots &\\ddots &\\vdots \\\\\n -1 &0& 0 & \\cdots & x_k\n \\end{pmatrix}\n\\cdot\n \\begin{pmatrix}\n c_1^{(k)} \\\\\n c_2^{(k)} \\\\\n c_3^{(k)} \\\\\n \\vdots\\\\\n c_N^{(k)} \\\\\n \\end{pmatrix}=0\n"
},
{
"math_id": 16,
"text": "\n x_k=-2\\cos{\\frac{(2k+1)\\pi}{N}}\n"
},
{
"math_id": 17,
"text": "\n E_k=\\alpha+2\\beta^\\prime\\cos{\\frac{(2k+1)\\pi}{N}}\\quad\n(k=0,1,\\ldots, \\lceil N/2 \\rceil-1)\n"
},
{
"math_id": 18,
"text": "\n E_k=\\alpha+2\\beta\\cos{\\frac{2k\\pi}{N}}\\quad\n(k=0,1,\\ldots, \\lfloor N/2 \\rfloor)\n"
}
] | https://en.wikipedia.org/wiki?curid=10208822 |
1020980 | Starling equation | Mathematical description of fluid movements
The Starling principle holds that extracellular fluid movements between blood and tissues are determined by differences in hydrostatic pressure and colloid osmotic pressure (oncotic pressure) between plasma inside microvessels and interstitial fluid outside them. The Starling equation, proposed many years after the death of Starling, describes that relationship in mathematical form and can be applied to many biological and non-biological semipermeable membranes. The classic Starling principle and the equation that describes it have in recent years been revised and extended.
Every day around 8 litres of water (solvent) containing a variety of small molecules (solutes) leaves the blood stream of an adult human and perfuses the cells of the various body tissues. Interstitial fluid drains by afferent lymph vessels to one of the regional lymph node groups, where around 4 litres per day is reabsorbed to the blood stream. The remainder of the lymphatic fluid is rich in proteins and other large molecules and rejoins the blood stream via the thoracic duct which empties into the great veins close to the heart. Filtration from plasma to interstitial (or tissue) fluid occurs in microvascular capillaries and post-capillary venules. In most tissues the micro vessels are invested with a continuous internal surface layer that includes a fibre matrix now known as the endothelial glycocalyx whose interpolymer spaces function as a system of small pores, radius circa 5 nm. Where the endothelial glycocalyx overlies a gap in the junction molecules that bind endothelial cells together (inter endothelial cell cleft), the plasma ultrafiltrate may pass to the interstitial space, leaving larger molecules reflected back into the plasma.
A small number of continuous capillaries are specialised to absorb solvent and solutes from interstitial fluid back into the blood stream through fenestrations in endothelial cells, but the volume of solvent absorbed every day is small.
Discontinuous capillaries as found in sinusoidal tissues of bone marrow, liver and spleen have little or no filter function.
The rate at which fluid is filtered across vascular endothelium (transendothelial filtration) is determined by the sum of two outward forces, capillary pressure (formula_0) and interstitial protein osmotic pressure (formula_1), and two absorptive forces, plasma protein osmotic pressure (formula_2) and interstitial pressure (formula_3). The Starling equation describes these forces in mathematical terms. It is one of the Kedem–Katchalsky equations which bring non-steady-state thermodynamics to the theory of osmotic pressure across membranes that are at least partly permeable to the solute responsible for the osmotic pressure difference. The second Kedem–Katchalsky equation explains the transendothelial transport of solutes, formula_4.
The equation.
The classic Starling equation reads as follows:
formula_5
where:
By convention, outward force is defined as positive, and inward force is defined as negative. If Jv is positive, solvent is leaving the capillary (filtration). If negative, solvent is entering the capillary (absorption).
Applying the classic Starling equation, it had long been taught that continuous capillaries filter out fluid in their arteriolar section and reabsorb most of it in their venular section, as shown by the diagram.
However, empirical evidence shows that, in most tissues, the flux of the intraluminal fluid of capillaries is continuous and, primarily, effluent. Efflux occurs along the whole length of a capillary. Fluid filtered to the space outside a capillary is mostly returned to the circulation via lymph nodes and the thoracic duct.
A mechanism for this phenomenon is the Michel-Weinbaum model, in honour of two scientists who, independently, described the filtration function of the glycocalyx. Briefly, the colloid osmotic pressure πi of the interstitial fluid has been found to have no effect on Jv, and the colloid osmotic pressure difference that opposes filtration is now known to be πp minus the subglycocalyx πg, the latter being close to zero while there is adequate filtration to flush interstitial proteins out of the interendothelial cleft. Consequently, Jv is much less than previously calculated, and the unopposed diffusion of interstitial proteins into the subglycocalyx space if and when filtration falls wipes out the colloid osmotic pressure difference necessary for reabsorption of fluid to the capillary.
The revised Starling equation is compatible with the steady-state Starling principle:
formula_11
where:
Pressures are often measured in millimetres of mercury (mmHg), and the filtration coefficient in millilitres per minute per millimetre of mercury (ml·min−1·mmHg−1).
Filtration coefficient.
In some texts the product of hydraulic conductivity and surface area is called the filtration coefficient Kfc.
Reflection coefficient.
Staverman's reflection coefficient, "σ", is a unitless constant that is specific to the permeability of a membrane to a given solute.
The Starling equation, written without "σ", describes the flow of a solvent across a membrane that is impermeable to the solutes contained within the solution.
"σn" corrects for the partial permeability of a semipermeable membrane to a solute "n".
Where "σ" is close to 1, the plasma membrane is less permeable to the denoted species (for example, larger molecules such as albumin and other plasma proteins), which may flow across the endothelial lining, from higher to lower concentrations, more slowly, while allowing water and smaller solutes through the glycocalyx filter to the extravascular space.
Approximated values.
Following are typically quoted values for the variables in the classic Starling equation:
It is reasoned that some albumin escapes from the capillaries and enters the interstitial fluid where it would produce a flow of water equivalent to that produced by a hydrostatic pressure of +3 mmHg. Thus, the difference in protein concentration would produce a flow of fluid into the vessel at the venous end equivalent to 28 − 3 = 25 mmHg of hydrostatic pressure. The total oncotic pressure present at the venous end could be considered as +25 mmHg.
In the beginning (arteriolar end) of a capillary, there is a net driving force (formula_13) outwards from the capillary of +9 mmHg. In the end (venular end), on the other hand, there is a net driving force of −8 mmHg.
Assuming that the net driving force declines linearly along the capillary, there is a mean net driving force outwards from the capillary as a whole, so that more fluid exits a capillary than re-enters it. The lymphatic system drains this excess.
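The arithmetic above can be written as a short sketch of the classic equation. The hydrostatic values used below are illustrative assumptions only, chosen together with σ = 1 and the 25 mmHg effective oncotic difference derived above so as to reproduce the +9 mmHg and −8 mmHg net forces just quoted; they are not measurements taken from this article.

```python
def net_filtration_pressure(p_c, p_i, sigma, pi_p, pi_i):
    """Net driving force (P_c - P_i) - sigma*(pi_p - pi_i); positive values favour filtration."""
    return (p_c - p_i) - sigma * (pi_p - pi_i)

# Hydrostatic pressures below (in mmHg) are assumed, illustrative values:
print(net_filtration_pressure(p_c=34, p_i=0, sigma=1.0, pi_p=28, pi_i=3))  # +9, arteriolar end
print(net_filtration_pressure(p_c=17, p_i=0, sigma=1.0, pi_p=28, pi_i=3))  # -8, venular end
```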
J. Rodney Levick argues in his textbook that the interstitial force is often underestimated, and measurements used to populate the revised Starling equation show the absorbing forces to be consistently less than capillary or venular pressures.
Specific organs.
Kidneys.
Glomerular capillaries have a continuous glycocalyx layer in health and the total transendothelial filtration rate of solvent (formula_6) to the renal tubules is normally around 125 ml/min (about 180 litres/day). Glomerular capillary formula_6 is more familiarly known as the glomerular filtration rate (GFR). In the rest of the body's capillaries, formula_6 is typically 5 ml/min (around 8 litres/day), and the fluid is returned to the circulation "via" afferent and efferent lymphatics.
Lungs.
The Starling equation can describe the movement of fluid from pulmonary capillaries to the alveolar air space.
Clinical significance.
Woodcock and Woodcock showed in 2012 that the revised Starling equation (steady-state Starling principle) provides scientific explanations for clinical observations concerning intravenous fluid therapy. Traditional teaching of both filtration and absorption of fluid occurring in a single capillary has been superseded by the concept of a vital circulation of extracellular fluid running parallel to the circulation of blood. New approaches to the treatment of oedema (tissue swelling) are suggested.
History.
The Starling equation is named for the British physiologist Ernest Starling, who is also recognised for the Frank–Starling law of the heart. Starling can be credited with identifying that the "absorption of isotonic salt solutions (from the extravascular space) by the blood vessels is determined by this osmotic pressure of the serum proteins" in 1896. | [
{
"math_id": 0,
"text": " P_c "
},
{
"math_id": 1,
"text": " \\pi_i "
},
{
"math_id": 2,
"text": " \\pi_p "
},
{
"math_id": 3,
"text": " P_i "
},
{
"math_id": 4,
"text": " J_s "
},
{
"math_id": 5,
"text": "\\ J_v = L_\\mathrm{p} S ( [P_\\mathrm{c} - P_\\mathrm{i}] - \\sigma[\\pi_\\mathrm{p} - \\pi_\\mathrm{i}] )"
},
{
"math_id": 6,
"text": " J_v "
},
{
"math_id": 7,
"text": " [P_\\mathrm{c} - P_\\mathrm{i}] - \\sigma[\\pi_\\mathrm{p} - \\pi_\\mathrm{i}] "
},
{
"math_id": 8,
"text": " L_p "
},
{
"math_id": 9,
"text": " S "
},
{
"math_id": 10,
"text": " \\sigma "
},
{
"math_id": 11,
"text": "\\ J_v = L_\\mathrm{p} S ( [P_\\mathrm{c} - P_\\mathrm{i}] - \\sigma[\\pi_\\mathrm{p} - \\pi_\\mathrm{g}] )"
},
{
"math_id": 12,
"text": " \\pi_g "
},
{
"math_id": 13,
"text": " [P_\\mathrm{c} - P_\\mathrm{i}] - \\sigma[\\pi_\\mathrm{c} - \\pi_\\mathrm{i}]"
}
] | https://en.wikipedia.org/wiki?curid=1020980 |
1021 | Aspect ratio | Attribute of a geometric shape
The aspect ratio of a geometric shape is the ratio of its sizes in different dimensions. For example, the aspect ratio of a rectangle is the ratio of its longer side to its shorter side—the ratio of width to height, when the rectangle is oriented as a "landscape".
The aspect ratio is most often expressed as two integer numbers separated by a colon (x:y), less commonly as a simple or decimal fraction. The values x and y do not represent actual widths and heights but, rather, the proportion between width and height. As an example, 8:5, 16:10, 1.6:1, 8⁄5 and 1.6 are all ways of representing the same aspect ratio.
In objects of more than two dimensions, such as hyperrectangles, the aspect ratio can still be defined as the ratio of the longest side to the shortest side.
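For example, reducing a pair of pixel dimensions to the x:y form (and to its decimal equivalent) only requires dividing by the greatest common divisor; the following is a minimal illustrative sketch.

```python
from math import gcd

def aspect_ratio(width, height):
    """Return the reduced x:y form and the decimal value of width/height."""
    g = gcd(width, height)
    return f"{width // g}:{height // g}", width / height

print(aspect_ratio(1920, 1080))  # ('16:9', 1.777...)
print(aspect_ratio(1920, 1200))  # ('8:5', 1.6) - the same ratio commonly written 16:10
```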
Applications and uses.
The term is most commonly used with reference to:
Aspect ratios of simple shapes.
Rectangles.
For a rectangle, the aspect ratio denotes the ratio of the width to the height of the rectangle. A square has the smallest possible aspect ratio of 1:1.
Examples:
Ellipses.
For an ellipse, the aspect ratio denotes the ratio of the major axis to the minor axis. An ellipse with an aspect ratio of 1:1 is a circle.
Aspect ratios of general shapes.
In geometry, there are several alternative definitions of the aspect ratio of general compact sets in a d-dimensional space:
If the dimension "d" is fixed, then all reasonable definitions of aspect ratio are equivalent to within constant factors.
Notations.
Aspect ratios are mathematically expressed as "x":"y" (pronounced "x-to-y").
Cinematographic aspect ratios are usually denoted as a (rounded) decimal multiple of width vs unit height, while photographic and videographic aspect ratios are usually defined and denoted by whole number ratios of width to height. In digital images there is a subtle distinction between the "display" aspect ratio (the image as displayed) and the "storage" aspect ratio (the ratio of pixel dimensions); see Distinctions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sqrt{2}:1 = 1.414..."
},
{
"math_id": 1,
"text": "\\sqrt{2}"
},
{
"math_id": 2,
"text": "\\sqrt{W^2/WH} = \\sqrt{W/H}"
}
] | https://en.wikipedia.org/wiki?curid=1021 |
1021099 | Hodrick–Prescott filter | Mathematical tool used in macroeconomics
The Hodrick–Prescott filter (also known as Hodrick–Prescott decomposition) is a mathematical tool used in macroeconomics, especially in real business cycle theory, to remove the cyclical component of a time series from raw data. It is used to obtain a smoothed-curve representation of a time series, one that is more sensitive to long-term than to short-term fluctuations. The adjustment of the sensitivity of the trend to short-term fluctuations is achieved by modifying a multiplier formula_0.
The filter was popularized in the field of economics in the 1990s by economists Robert J. Hodrick and Nobel Memorial Prize winner Edward C. Prescott, though it was first proposed much earlier by E. T. Whittaker in 1923. The Hodrick-Prescott filter is a special case of a smoothing spline.
The equation.
The reasoning for the methodology uses ideas related to the decomposition of time series. Let formula_1 for formula_2 denote the logarithms of a time series variable. The series formula_1 is made up of a trend component formula_3 and a cyclical component formula_4 such that formula_5. Given an adequately chosen, positive value of formula_0, there is a trend component that will solve
formula_6
The first term of the equation is the sum of the squared deviations formula_7, which penalizes the cyclical component. The second term is a multiple formula_0 of the sum of the squares of the trend component's second differences. This second term penalizes variations in the growth rate of the trend component. The larger the value of formula_0, the higher is the penalty. Hodrick and Prescott suggest 1600 as a value for formula_0 for quarterly data. Ravn and Uhlig (2002) state that formula_0 should vary by the fourth power of the frequency observation ratio; thus, formula_0 should equal 6.25 (1600/4^4) for annual data and 129,600 (1600*3^4) for monthly data;
in practice, formula_8 for yearly data and formula_9 for monthly data are commonly used, however.
The Hodrick–Prescott filter is explicitly given by
formula_10
where formula_11 denotes the lag operator, as can be seen from the first-order condition for the minimization problem.
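Equivalently, stacking the observations turns the first-order condition into the sparse linear system (I + λD′D)τ = y, where D is the (T − 2) × T second-difference matrix; solving this system is how the filter is usually computed in practice. A minimal sketch (assuming NumPy and SciPy are available) is:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def hp_filter(y, lamb=1600.0):
    """Split a series y into (trend, cycle) with the Hodrick-Prescott filter.

    lamb is the smoothing multiplier: 1600 is the conventional choice for
    quarterly data; see the discussion above for yearly and monthly values.
    """
    y = np.asarray(y, dtype=float)
    T = len(y)
    # D @ tau stacks the second differences tau_{t+1} - 2*tau_t + tau_{t-1}.
    D = sparse.diags([1.0, -2.0, 1.0], offsets=[0, 1, 2], shape=(T - 2, T))
    A = sparse.eye(T) + lamb * (D.T @ D)  # first-order condition (I + lambda*D'D) tau = y
    trend = spsolve(A.tocsc(), y)
    return trend, y - trend
```

For a quarterly series such as log real GDP, `hp_filter(np.log(gdp), 1600.0)` would return the smoothed trend and the cyclical component described by the decomposition above.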
Drawbacks to the Hodrick–Prescott filter.
The Hodrick–Prescott filter will only be optimal when:
The standard two-sided Hodrick–Prescott filter is non-causal as it is not purely backward looking. Hence, it should not be used when estimating DSGE models based on recursive state-space representations (e.g., likelihood-based methods that make use of the Kalman filter). The reason is that the Hodrick–Prescott filter uses observations at formula_12 to construct the current time point formula_13, while the recursive setting assumes that only current and past states influence the current observation. One way around this is to use the one-sided Hodrick–Prescott filter.
Exact algebraic formulas are available for the two-sided Hodrick–Prescott filter in terms of its signal-to-noise ratio formula_0.
A working paper by James D. Hamilton at UC San Diego titled "Why You Should Never Use the Hodrick-Prescott Filter" presents evidence against using the HP filter. Hamilton writes that:
A working paper by Robert J. Hodrick titled "An Exploration of Trend-Cycle Decomposition Methodologies in Simulated Data" examines whether the proposed alternative approach of James D. Hamilton is actually better than the HP filter at extracting the cyclical component of several simulated time series calibrated to approximate U.S. real GDP. Hodrick finds that for time series in which there are distinct growth and cyclical components, the HP filter comes closer to isolating the cyclical component than the Hamilton alternative.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\lambda"
},
{
"math_id": 1,
"text": "y_t\\,"
},
{
"math_id": 2,
"text": "t = 1, 2, ..., T\\,"
},
{
"math_id": 3,
"text": "\\tau_t"
},
{
"math_id": 4,
"text": "c_t"
},
{
"math_id": 5,
"text": "y_t\\ = \\tau_t\\ + c_t\\,"
},
{
"math_id": 6,
"text": "\\min_{\\tau}\\left(\\sum_{t = 1}^T {(y_t - \\tau _t )^2 } + \\lambda \\sum_{t = 2}^{T - 1} {[(\\tau _{t+1} - \\tau _t) - (\\tau _t - \\tau _{t - 1} )]^2 }\\right).\\,"
},
{
"math_id": 7,
"text": "d_t=y_t-\\tau_t"
},
{
"math_id": 8,
"text": "\\lambda = 100"
},
{
"math_id": 9,
"text": "\\lambda = 14,400"
},
{
"math_id": 10,
"text": "\\mathit{HP} = \\left[ \\lambda L^2 - 4\\lambda L + (1 + 6\\lambda) - 4\\lambda L^{-1} + \\lambda L^{-2} \\right]^{-1}"
},
{
"math_id": 11,
"text": "L"
},
{
"math_id": 12,
"text": "t+i, i>0 "
},
{
"math_id": 13,
"text": "t"
}
] | https://en.wikipedia.org/wiki?curid=1021099 |
10211794 | Exner function | Parameter in atmospheric modeling
The Exner function is an important parameter in atmospheric modeling. The Exner function can be viewed as non-dimensionalized pressure and can be defined as:
formula_0
where formula_1 is a standard reference surface pressure, usually taken as 1000 hPa; formula_2 is the gas constant for dry air; formula_3 is the heat capacity of dry air at constant pressure; formula_4 is the absolute temperature; and formula_5 is the potential temperature. | [
{
"math_id": 0,
"text": "\\Pi = \\left( \\frac{p}{p_0} \\right)^{R_d/c_p} = \\frac{T}{\\theta} "
},
{
"math_id": 1,
"text": "p_0"
},
{
"math_id": 2,
"text": "R_d"
},
{
"math_id": 3,
"text": "c_p"
},
{
"math_id": 4,
"text": "T"
},
{
"math_id": 5,
"text": "\\theta"
}
] | https://en.wikipedia.org/wiki?curid=10211794 |
1021312 | Gerhard Frey | German mathematician (born 1944)
Gerhard Frey (born 1 June 1944) is a German mathematician, known for his work in number theory. Following an original idea of Yves Hellegouarch, he developed the notion of Frey–Hellegouarch curves, a construction of an elliptic curve from a purported solution to the Fermat equation, that is central to Wiles's proof of Fermat's Last Theorem.
Education and career.
He studied mathematics and physics at the University of Tübingen, graduating in 1967. He continued his postgraduate studies at Heidelberg University, where he received his PhD in 1970, and his Habilitation in 1973. He was assistant professor at Heidelberg University from 1969–1973, professor at the University of Erlangen (1973–1975) and at Saarland University (1975–1990). Until 2009, he held a chair for number theory at the Institute for Experimental Mathematics at the University of Duisburg-Essen, campus Essen.
Frey was a visiting scientist at several universities and research institutions, including the Ohio State University, Harvard University, the University of California, Berkeley, the Mathematical Sciences Research Institute (MSRI), the Institute for Advanced Studies at Hebrew University of Jerusalem, and the Instituto Nacional de Matemática Pura e Aplicada (IMPA) in Rio de Janeiro.
Frey was also the co-editor of the journal "Manuscripta Mathematica".
Research contributions.
His research areas are number theory and diophantine geometry, as well as applications to coding theory and cryptography.
In 1985, Frey pointed out a connection between Fermat's Last Theorem and the Taniyama-Shimura Conjecture, and this connection was made precise shortly thereafter by Jean-Pierre Serre who formulated a conjecture formula_0 and showed that Taniyama-Shimura+formula_0 implies Fermat. Soon after, Kenneth Ribet proved enough of conjecture formula_0 to deduce that the Taniyama-Shimura Conjecture implies Fermat's Last Theorem. This approach provided a framework for the subsequent successful attack on Fermat's Last Theorem by Andrew Wiles in the 1990s.
In 1998, Frey proposed the idea of Weil descent attack for elliptic curves over finite fields with composite degree. As a result of this attack, cryptographers lost their interest in these curves.
Awards and honors.
Frey was awarded the Gauss medal of the "Braunschweigische Wissenschaftliche Gesellschaft" in 1996 for his work on Fermat's Last Theorem. Since 1998, he has been a member of the Göttingen Academy of Sciences.
In 2006, he received the "Certicom ECC Visionary Award" for his contributions to elliptic-curve cryptography.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\varepsilon"
}
] | https://en.wikipedia.org/wiki?curid=1021312 |
102140 | Perturbation theory | In math and applied mathematics, methods for finding an approximate solution to a problem
In mathematics and applied mathematics, perturbation theory comprises methods for finding an approximate solution to a problem, by starting from the exact solution of a related, simpler problem. A critical feature of the technique is a middle step that breaks the problem into "solvable" and "perturbative" parts. In perturbation theory, the solution is expressed as a power series in a small parameter formula_0. The first term is the known solution to the solvable problem. Successive terms in the series at higher powers of formula_0 usually become smaller. An approximate 'perturbation solution' is obtained by truncating the series, usually by keeping only the first two terms, the solution to the known problem and the 'first order' perturbation correction.
Perturbation theory is used in a wide range of fields, and reaches its most sophisticated and advanced forms in quantum field theory. Perturbation theory (quantum mechanics) describes the use of this method in quantum mechanics. The field in general remains actively and heavily researched across multiple disciplines.
Description.
Perturbation theory develops an expression for the desired solution in terms of a formal power series known as a perturbation series in some "small" parameter, that quantifies the deviation from the exactly solvable problem. The leading term in this power series is the solution of the exactly solvable problem, while further terms describe the deviation in the solution, due to the deviation from the initial problem. Formally, we have for the approximation to the full solution formula_1 a series in the small parameter (here called ε), like the following:
formula_2
In this example, formula_3 would be the known solution to the exactly solvable initial problem, and the terms formula_4 represent the first-order, second-order, third-order, and higher-order terms, which may be found iteratively by a mechanistic but increasingly difficult procedure. For small formula_5 these higher-order terms in the series generally (but not always) become successively smaller. An approximate "perturbative solution" is obtained by truncating the series, often by keeping only the first two terms, expressing the final solution as a sum of the initial (exact) solution and the "first-order" perturbative correction
formula_6
Some authors use big O notation to indicate the order of the error in the approximate solution: for the two-term truncation above, the error is of second order in the small parameter.
If the power series in formula_5 converges with a nonzero radius of convergence, the perturbation problem is called a regular perturbation problem. In regular perturbation problems, the asymptotic solution smoothly approaches the exact solution. However, the perturbation series can also diverge, and the truncated series can still be a good approximation to the true solution if it is truncated at a point at which its elements are minimum. This is called an "asymptotic series". If the perturbation series is divergent or not a power series (for example, if the asymptotic expansion must include non-integer powers formula_7 or negative powers formula_8) then the perturbation problem is called a singular perturbation problem. Many special techniques in perturbation theory have been developed to analyze singular perturbation problems.
Prototypical example.
The earliest use of what would now be called "perturbation theory" was to deal with the otherwise unsolvable mathematical problems of celestial mechanics: for example the orbit of the Moon, which moves noticeably differently from a simple Keplerian ellipse because of the competing gravitation of the Earth and the Sun.
Perturbation methods start with a simplified form of the original problem, which is "simple enough" to be solved exactly. In celestial mechanics, this is usually a Keplerian ellipse. Under Newtonian gravity, an ellipse is exactly correct when there are only two gravitating bodies (say, the Earth and the Moon) but not quite correct when there are three or more objects (say, the Earth, Moon, Sun, and the rest of the Solar System) and not quite correct when the gravitational interaction is stated using formulations from general relativity.
Perturbative expansion.
Keeping the above example in mind, one follows a general recipe to obtain the perturbation series. The perturbative expansion is created by adding successive corrections to the simplified problem. The corrections are obtained by forcing consistency between the unperturbed solution, and the equations describing the system in full. Write formula_9 for this collection of equations; that is, let the symbol formula_9 stand in for the problem to be solved. Quite often, these are differential equations, thus, the letter "D".
The process is generally mechanical, if laborious. One begins by writing the equations formula_9 so that they split into two parts: some collection of equations formula_10 which can be solved exactly, and some additional remaining part formula_11 for some small formula_12 The solution formula_3 (to formula_10) is known, and one seeks the general solution formula_13 to formula_14
Next the approximation formula_15 is inserted into formula_16. This results in an equation for formula_17 which, in the general case, can be written in closed form as a sum over integrals over formula_18 Thus, one has obtained the "first-order correction" formula_19 and thus formula_15 is a good approximation to formula_20 It is a good approximation, precisely because the parts that were ignored were of size formula_21 The process can then be repeated, to obtain corrections formula_22 and so on.
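The recipe can be made concrete on a toy algebraic problem, chosen here purely for illustration: let formula_9 be the equation x² − 1 + εx = 0, so that formula_10 is the exactly solvable x² − 1 = 0 with known root A₀ = 1, and εx plays the role of the small remaining part. Substituting the truncated ansatz and matching powers of ε order by order, as described above, can be done symbolically (a sketch assuming SymPy):

```python
import sympy as sp

# Toy problem (illustrative only): a root of  x**2 - 1 + eps*x = 0,
# perturbing the exactly solvable problem x**2 - 1 = 0 whose root A0 = 1 is known.
eps, a1, a2 = sp.symbols('epsilon A1 A2')
a0 = sp.Integer(1)                            # known zeroth-order solution

x = a0 + eps * a1 + eps**2 * a2               # truncated ansatz A0 + eps*A1 + eps**2*A2
expr = sp.expand(x**2 - 1 + eps * x)

coeffs = sp.Poly(expr, eps).all_coeffs()[::-1]    # coefficients of eps**0, eps**1, eps**2, ...
sol1 = sp.solve(coeffs[1], a1)[0]                 # order eps:    2*A0*A1 + A0 = 0
sol2 = sp.solve(coeffs[2].subs(a1, sol1), a2)[0]  # order eps**2: A1**2 + 2*A0*A2 + A1 = 0
print(sol1, sol2)   # -1/2, 1/8  =>  x ≈ 1 - eps/2 + eps**2/8 for small eps
```

The result matches the Taylor expansion of the exact root (−ε + √(ε² + 4))/2, which is how a perturbative calculation can be checked when an exact solution happens to be available.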
In practice, this process rapidly explodes into a profusion of terms, which become extremely hard to manage by hand. Isaac Newton is reported to have said, regarding the problem of the Moon's orbit, that "It causeth my head to ache." This unmanageability has forced perturbation theory to develop into a high art of managing and writing out these higher order terms. One of the fundamental breakthroughs in quantum mechanics for controlling the expansion are the Feynman diagrams, which allow quantum mechanical perturbation series to be represented by a sketch.
Examples.
Perturbation theory has been used in a large number of different settings in physics and applied mathematics. Examples of the "collection of equations" formula_23 include algebraic equations,
differential equations (e.g., the equations of motion
and commonly wave equations), thermodynamic free energy in statistical mechanics, radiative transfer,
and Hamiltonian operators in quantum mechanics.
Examples of the kinds of solutions that are found perturbatively include the solution of the equation of motion ("e.g.", the trajectory of a particle), the statistical average of some physical quantity ("e.g.", average magnetization), and the ground state energy of a quantum mechanical problem.
Examples of exactly solvable problems that can be used as starting points include linear equations, including linear equations of motion (harmonic oscillator, linear wave equation), statistical or quantum-mechanical systems of non-interacting particles (or in general, Hamiltonians or free energies containing only terms quadratic in all degrees of freedom).
Examples of systems that can be solved with perturbations include systems with nonlinear contributions to the equations of motion, interactions between particles, terms of higher powers in the Hamiltonian/free energy.
For physical problems involving interactions between particles, the terms of the perturbation series may be displayed (and manipulated) using Feynman diagrams.
History.
Perturbation theory was first devised to solve otherwise intractable problems in the calculation of the motions of planets in the solar system. For instance, Newton's law of universal gravitation explained the gravitation between two astronomical bodies, but when a third body is added, the problem was, "How does each body pull on each?" Kepler's orbital equations only solve Newton's gravitational equations when the latter are limited to just two bodies interacting. The gradually increasing accuracy of astronomical observations led to incremental demands in the accuracy of solutions to Newton's gravitational equations, which led many eminent 18th and 19th century mathematicians, notably Lagrange and Laplace, to extend and generalize the methods of perturbation theory.
These well-developed perturbation methods were adopted and adapted to solve new problems arising during the development of quantum mechanics in 20th century atomic and subatomic physics. Paul Dirac developed quantum perturbation theory in 1927 to evaluate when a particle would be emitted in radioactive elements. This was later named Fermi's golden rule. Perturbation theory in quantum mechanics is fairly accessible, mainly because quantum mechanics is limited to linear wave equations, but also since the quantum mechanical notation allows expressions to be written in fairly compact form, thus making them easier to comprehend. This resulted in an explosion of applications, ranging from the Zeeman effect to the hyperfine splitting in the hydrogen atom.
Despite the simpler notation, perturbation theory applied to quantum field theory still easily gets out of hand. Richard Feynman developed the celebrated Feynman diagrams by observing that many terms repeat in a regular fashion. These terms can be replaced by dots, lines, squiggles and similar marks, each standing for a term, a denominator, an integral, and so on; thus complex integrals can be written as simple diagrams, with absolutely no ambiguity as to what they mean. The one-to-one correspondence between the diagrams, and specific integrals is what gives them their power. Although originally developed for quantum field theory, it turns out the diagrammatic technique is broadly applicable to many other perturbative series (although not always worthwhile).
In the second half of the 20th century, as chaos theory developed, it became clear that unperturbed systems were in general completely integrable systems, while the perturbed systems were not. This promptly led to the study of "nearly integrable systems", of which the KAM torus is the canonical example. At the same time, it was also discovered that many (rather special) non-linear systems, which were previously approachable only through perturbation theory, are in fact completely integrable. This discovery was quite dramatic, as it allowed exact solutions to be given. This, in turn, helped clarify the meaning of the perturbative series, as one could now compare the results of the series to the exact solutions.
The improved understanding of dynamical systems coming from chaos theory helped shed light on what was termed the "small denominator problem" or "small divisor problem". In the 19th century Poincaré observed (as perhaps had earlier mathematicians) that sometimes 2nd and higher order terms in the perturbative series have "small denominators": That is, they have the general form formula_24 where formula_25 formula_26 and formula_27 are some complicated expressions pertinent to the problem to be solved, and formula_28 and formula_29 are real numbers; very often they are the energy of normal modes. The small divisor problem arises when the difference formula_30 is small, causing the perturbative correction to "blow up", becoming as large or maybe larger than the zeroth order term. This situation signals a breakdown of perturbation theory: It stops working at this point, and cannot be expanded or summed any further. In formal terms, the perturbative series is an "asymptotic series": A useful approximation for a few terms, but at some point becomes "less" accurate if even more terms are added. The breakthrough from chaos theory was an explanation of why this happened: The small divisors occur whenever perturbation theory is applied to a chaotic system. The one signals the presence of the other.
Beginnings in the study of planetary motion.
Since the planets are very remote from each other, and since their mass is small as compared to the mass of the Sun, the gravitational forces between the planets can be neglected, and the planetary motion is considered, to a first approximation, as taking place along Kepler's orbits, which are defined by the equations of the two-body problem, the two bodies being the planet and the Sun.
Since astronomic data came to be known with much greater accuracy, it became necessary to consider how the motion of a planet around the Sun is affected by other planets. This was the origin of the three-body problem; thus, in studying the system Moon-Earth-Sun, the mass ratio between the Moon and the Earth was chosen as the "small parameter". Lagrange and Laplace were the first to advance the view that the so-called "constants" which describe the motion of a planet around the Sun gradually change: They are "perturbed", as it were, by the motion of other planets and vary as a function of time; hence the name "perturbation theory".
Perturbation theory was investigated by the classical scholars – Laplace, Poisson, Gauss – as a result of which the computations could be performed with a very high accuracy. The discovery of the planet Neptune in 1846 by Le Verrier was based on the deviations in motion of the planet Uranus. He sent the coordinates to J.G. Galle, who successfully observed Neptune through his telescope – a triumph of perturbation theory.
Perturbation orders.
The standard exposition of perturbation theory is given in terms of the order to which the perturbation is carried out: first-order perturbation theory or second-order perturbation theory, and whether the perturbed states are degenerate, which requires singular perturbation. In the singular case extra care must be taken, and the theory is slightly more elaborate.
In chemistry.
Many of the ab initio quantum chemistry methods use perturbation theory directly or are closely related methods. Implicit perturbation theory works with the complete Hamiltonian from the very beginning and never specifies a perturbation operator as such. Møller–Plesset perturbation theory uses the difference between the Hartree–Fock Hamiltonian and the exact non-relativistic Hamiltonian as the perturbation. The zero-order energy is the sum of orbital energies. The first-order energy is the Hartree–Fock energy and electron correlation is included at second-order or higher. Calculations to second, third or fourth order are very common and the code is included in most ab initio quantum chemistry programs. A related but more accurate method is the coupled cluster method.
Shell-crossing.
A shell-crossing (sc) occurs in perturbation theory when matter trajectories intersect, forming a singularity. This limits the predictive power of physical simulations at small scales.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\varepsilon"
},
{
"math_id": 1,
"text": "\\ A\\ ,"
},
{
"math_id": 2,
"text": " A \\equiv A_0 + \\varepsilon^1 A_1 + \\varepsilon^2 A_2 + \\varepsilon^3 A_3 + \\cdots "
},
{
"math_id": 3,
"text": "\\ A_0\\ "
},
{
"math_id": 4,
"text": "\\ A_1, A_2, A_3, \\ldots \\ "
},
{
"math_id": 5,
"text": "\\ \\varepsilon\\ "
},
{
"math_id": 6,
"text": " A \\approx A_0 + \\varepsilon A_1 \\qquad \\mathsf{ for } \\qquad \\varepsilon \\to 0 "
},
{
"math_id": 7,
"text": "\\ \\varepsilon^{\\left(1/2\\right)}\\ "
},
{
"math_id": 8,
"text": "\\ \\varepsilon^{-2}\\ "
},
{
"math_id": 9,
"text": "\\ D\\ "
},
{
"math_id": 10,
"text": "\\ D_0\\ "
},
{
"math_id": 11,
"text": "\\ \\varepsilon D_1\\ "
},
{
"math_id": 12,
"text": "\\ \\varepsilon \\ll 1 ~."
},
{
"math_id": 13,
"text": "\\ A\\ "
},
{
"math_id": 14,
"text": "\\ D = D_0 + \\varepsilon D_1 ~."
},
{
"math_id": 15,
"text": "\\ A \\approx A_0 + \\varepsilon A_1\\ "
},
{
"math_id": 16,
"text": "\\ \\varepsilon D_1"
},
{
"math_id": 17,
"text": "\\ A_1\\ ,"
},
{
"math_id": 18,
"text": "\\ A_0 ~."
},
{
"math_id": 19,
"text": "\\ A_1\\ "
},
{
"math_id": 20,
"text": "\\ A ~."
},
{
"math_id": 21,
"text": "\\ \\varepsilon^2 ~."
},
{
"math_id": 22,
"text": "\\ A_2\\ ,"
},
{
"math_id": 23,
"text": "D"
},
{
"math_id": 24,
"text": "\\ \\frac{\\ \\psi_n V \\phi_m\\ }{\\ (\\omega_n -\\omega_m)\\ }\\ "
},
{
"math_id": 25,
"text": "\\ \\psi_n\\ ,"
},
{
"math_id": 26,
"text": "\\ V\\ ,"
},
{
"math_id": 27,
"text": "\\ \\phi_m\\ "
},
{
"math_id": 28,
"text": "\\ \\omega_n\\ "
},
{
"math_id": 29,
"text": "\\ \\omega_m\\ "
},
{
"math_id": 30,
"text": "\\ \\omega_n - \\omega_m\\ "
}
] | https://en.wikipedia.org/wiki?curid=102140 |
1021510 | Dilution of precision (navigation) | Propagation of error with varying topology
Dilution of precision (DOP), or geometric dilution of precision (GDOP), is a term used in satellite navigation and geomatics engineering to specify the error propagation as a mathematical effect of navigation satellite geometry on positional measurement precision.
Introduction.
The concept of dilution of precision (DOP) originated with users of the Loran-C navigation system. The idea of geometric DOP is to state how errors in the measurement will affect the final state estimation. This can be defined as:
formula_0
Conceptually you can geometrically imagine errors on a measurement resulting in the formula_1 term changing. Ideally small changes in the measured data will not result in large changes in output location. The opposite of this ideal is the situation where the solution is very sensitive to measurement errors. The interpretation of this formula is shown in the figure to the right, showing two possible scenarios with acceptable and poor GDOP.
With the wide adoption of satellite navigation systems, the term has come into much wider usage. Neglecting ionospheric and tropospheric effects, the signal from navigation satellites has a fixed precision. Therefore, the relative satellite-receiver geometry plays a major role in determining the precision of estimated positions and times. Due to the relative geometry of any given satellite to a receiver, the precision in the pseudorange of the satellite translates to a corresponding component in each of the four dimensions of position measured by the receiver (i.e., formula_2, formula_3, formula_4, and formula_5). The precision of multiple satellites in view of a receiver combine according to the relative position of the satellites to determine the level of precision in each dimension of the receiver measurement. When visible navigation satellites are close together in the sky, the geometry is said to be weak and the DOP value is high; when far apart, the geometry is strong and the DOP value is low. Consider two overlapping rings, or annuli, of different centres. If they overlap at right angles, the greatest extent of the overlap is much smaller than if they overlap in near parallel. Thus a low DOP value represents a better positional precision due to the wider angular separation between the satellites used to calculate a unit's position. Other factors that can increase the effective DOP are obstructions such as nearby mountains or buildings.
DOP can be expressed as a number of separate measurements:
These values follow mathematically from the positions of the usable satellites. Signal receivers allow the display of these positions ("skyplot") as well as the DOP values.
The term can also be applied to other location systems that employ several geographically spaced sites. It can occur in electronic counter-countermeasures (electronic warfare) when computing the location of enemy emitters (radar jammers and radio communications devices). Using such an interferometry technique can produce geometric layouts in which there are degrees of freedom that cannot be accounted for due to inadequate configurations.
The effect of geometry of the satellites on position error is called geometric dilution of precision (GDOP) and it is roughly interpreted as ratio of position error to the range error. Imagine that a square pyramid is formed by lines joining four satellites with the receiver at the tip of the pyramid. The larger the volume of the pyramid, the better (lower) the value of GDOP; the smaller its volume, the worse (higher) the value of GDOP will be. Similarly, the greater the number of satellites, the better the value of GDOP.
Interpretation.
The DOP factors are functions of the diagonal elements of the covariance matrix of the parameters, expressed either in a global or a local geodetic frame.
Computation.
As a first step in computing DOP, consider the unit vectors from the receiver to satellite formula_6:
formula_7
where formula_8 denote the position of the receiver and formula_9 denote the position of satellite i. Formulate the matrix, A, which (for 4 pseudorange measurement residual equations) is:
formula_10
The first three elements of each row of "A" are the components of a unit vector from the receiver to the indicated satellite. The last element of each row refers to the partial derivative of pseudorange w.r.t. receiver's clock bias.
Formulate the matrix, "Q", as the covariance matrix resulting from the least-squares normal matrix:
formula_11
In general:
formula_12
where formula_13 is the Jacobian of the sensor measurement residual equations formula_14, with respect to the unknowns, formula_15; formula_16 is the Jacobian of the sensor measurement residual equations with respect to the measured quantities formula_17, and formula_18 is the correlation matrix for noise in the measured quantities.
For the preceding case of 4 range measurement residual equations: formula_19, formula_20, formula_21, formula_22, formula_23, formula_24, formula_25, formula_26 and the measurement noises for the different formula_27 have been assumed to be independent which makes formula_28.
This formula for Q arises from applying best linear unbiased estimation to a linearized version of the sensor measurement residual equations about the current solution formula_29, except that in the case of B.L.U.E., formula_18 is a noise covariance matrix rather than the noise correlation matrix used in DOP, and the reason DOP makes this substitution is to obtain a relative error. When formula_18 is a noise covariance matrix, formula_30 is an estimate of the matrix of covariance of noise in the unknowns due to the noise in the measured quantities. It is the estimate obtained by the "first-order second moment" (F.O.S.M.) uncertainty quantification technique which was state of the art in the 1980s. In order for the F.O.S.M. theory to be strictly applicable, either the input noise distributions need to be Gaussian or the measurement noise standard deviations need to be small relative to the rate of change in the output near the solution. In this context, the second criterion is typically the one that is satisfied.
This (i.e. for the 4 time of arrival/range measurement residual equations) computation is in accordance with [6] where the weighting matrix,
formula_31 happens to simplify down to the identity matrix.
Note that P only simplifies down to the identity matrix because all the sensor measurement residual equations are time of arrival (pseudo range) equations. In other cases, for example when trying to locate someone broadcasting on an international distress frequency, formula_32 would not simplify down to the identity matrix and in that case there would be a "frequency DOP" or FDOP component either in addition to or in place of the TDOP component. (Regarding "in place of the TDOP component": Since the clocks on the legacy International Cospas-Sarsat Programme LEO satellites are much less accurate than GPS clocks, discarding their time measurements would actually increase the geolocation solution accuracy.)
The elements of formula_30 are designated as:
formula_33
PDOP, TDOP, and GDOP are given by:
formula_34
Notice GDOP is the square root of the trace of the formula_30 matrix.
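A direct numerical sketch of this computation is given below (an illustration only, assuming NumPy; the satellite and receiver coordinates are placeholders supplied by the caller). Reading HDOP and VDOP from the first three diagonal entries presumes that the coordinates are already expressed in a local east-north-up frame, as discussed next.

```python
import numpy as np

def dop_values(sat_positions, receiver_position):
    """Return (GDOP, PDOP, HDOP, VDOP, TDOP) for n >= 4 satellites.

    sat_positions: (n, 3) array of satellite coordinates.
    receiver_position: length-3 array, in the same frame.
    HDOP/VDOP are only meaningful if that frame is east-north-up.
    """
    sats = np.asarray(sat_positions, dtype=float)
    recv = np.asarray(receiver_position, dtype=float)
    diffs = sats - recv                          # receiver-to-satellite vectors
    ranges = np.linalg.norm(diffs, axis=1)
    # Each row of A: unit vector to the satellite, plus 1 for the clock-bias term.
    A = np.hstack([diffs / ranges[:, None], np.ones((len(sats), 1))])
    Q = np.linalg.inv(A.T @ A)
    d = np.diag(Q)
    hdop = np.sqrt(d[0] + d[1])
    vdop = np.sqrt(d[2])
    pdop = np.sqrt(d[0] + d[1] + d[2])
    tdop = np.sqrt(d[3])
    gdop = np.sqrt(np.trace(Q))
    return gdop, pdop, hdop, vdop, tdop
```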
The horizontal and vertical dilution of precision,
formula_35,
are both dependent on the coordinate system used. To correspond to the local east-north-up coordinate system,
the covariance matrix is first expressed in that frame, so that its diagonal elements become the squared component dilutions:

    EDOP^2    x       x       x
      x     NDOP^2    x       x
      x       x     VDOP^2    x
      x       x       x     TDOP^2

(the entries marked x are off-diagonal terms that are not used here),
and the derived dilutions:
formula_36
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\operatorname{GDOP} = \\frac{\\Delta ( \\text{output location} )}{\\Delta ( \\text{measured data} )}"
},
{
"math_id": 1,
"text": "\\Delta ( \\text{measured data} )"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "y"
},
{
"math_id": 4,
"text": "z"
},
{
"math_id": 5,
"text": "t"
},
{
"math_id": 6,
"text": "i"
},
{
"math_id": 7,
"text": "\\begin{align}\n &\\left(\\frac{x_i - x}{R_i}, \\frac{y_i - y}{R_i}, \\frac{z_i - z}{R_i}\\right), &\n R_i &= \\sqrt{(x_i - x)^2 + (y_i - y)^2 + (z_i - z)^2}\n\\end{align}"
},
{
"math_id": 8,
"text": "x, y, z"
},
{
"math_id": 9,
"text": "x_i, y_i, z_i"
},
{
"math_id": 10,
"text": "A = \\begin{bmatrix}\n \\frac {x_1 - x} {R_1} & \\frac {y_1 - y} {R_1} & \\frac {z_1 - z} {R_1} & 1 \\\\\n \\frac {x_2 - x} {R_2} & \\frac {y_2 - y} {R_2} & \\frac {z_2 - z} {R_2} & 1 \\\\\n \\frac {x_3 - x} {R_3} & \\frac {y_3 - y} {R_3} & \\frac {z_3 - z} {R_3} & 1 \\\\\n \\frac {x_4 - x} {R_4} & \\frac {y_4 - y} {R_4} & \\frac {z_4 - z} {R_4} & 1\n\\end{bmatrix}"
},
{
"math_id": 11,
"text": "Q = \\left( A^\\mathsf{T} A \\right)^{-1}"
},
{
"math_id": 12,
"text": "Q = \\left(J_\\mathsf{x}^\\mathsf{T} \\left(J_{d} C_{d} J_{d}^\\mathsf{T}\\right)^{-1}J_{x}\\right)^{-1}"
},
{
"math_id": 13,
"text": "J_{x}"
},
{
"math_id": 14,
"text": "f_{i}\\left(\\underline{x}, \\underline{d}\\right) = 0"
},
{
"math_id": 15,
"text": "\\underline{x}"
},
{
"math_id": 16,
"text": "J_{d}"
},
{
"math_id": 17,
"text": "\\underline{d}"
},
{
"math_id": 18,
"text": "C_{d}"
},
{
"math_id": 19,
"text": "\\underline{x} = (x, y, z, \\tau)^\\mathsf{T}"
},
{
"math_id": 20,
"text": "\\underline{d} = \\left(\\tau_{1}, \\tau_{2}, \\tau_{3}, \\tau_{4}\\right)^\\mathsf{T}"
},
{
"math_id": 21,
"text": "\\tau = ct"
},
{
"math_id": 22,
"text": "\\tau_{i} = ct_{i}"
},
{
"math_id": 23,
"text": "R_{i} = |\\tau_{i} - \\tau| = \\sqrt{(\\tau_{i} - \\tau)^{2}}"
},
{
"math_id": 24,
"text": "f_{i}\\left(\\underline{x}, \\underline{d}\\right) = \\sqrt{(x_{i} - x)^{2} + (y_{i} - y)^{2} + (z_{i} - z)^{2}} - \\sqrt{(\\tau_{i} - \\tau )^{2}}"
},
{
"math_id": 25,
"text": "J_{x} = A"
},
{
"math_id": 26,
"text": "J_{d} = -I"
},
{
"math_id": 27,
"text": "\\tau _{i}"
},
{
"math_id": 28,
"text": "C_{d} = I"
},
{
"math_id": 29,
"text": "\\Delta\\underline{x} = -Q*\\left(J_{x}^\\mathsf{T}\\left(J_{d}C_{d}J_{d}^\\mathsf{T}\\right)^{-1}f\\right)"
},
{
"math_id": 30,
"text": "Q"
},
{
"math_id": 31,
"text": "P = \\left(J_{d}C_{d}J_{d}^\\mathsf{T}\\right)^{-1}"
},
{
"math_id": 32,
"text": "P"
},
{
"math_id": 33,
"text": "Q = \\begin{bmatrix}\n \\sigma_x^2 & \\sigma_{xy} & \\sigma_{xz} & \\sigma_{xt} \\\\\n \\sigma_{xy} & \\sigma_{y}^2 & \\sigma_{yz} & \\sigma_{yt} \\\\\n \\sigma_{xz} & \\sigma_{yz} & \\sigma_{z}^2 & \\sigma_{zt} \\\\\n \\sigma_{xt} & \\sigma_{yt} & \\sigma_{zt} & \\sigma_{t}^2\n\\end{bmatrix}"
},
{
"math_id": 34,
"text": "\\begin{align}\n \\operatorname{PDOP} &= \\sqrt{\\sigma_x^2 + \\sigma_y^2 + \\sigma_z^2}\\\\\n \\operatorname{TDOP} &= \\sqrt{\\sigma_{t}^2}\\\\\n \\operatorname{GDOP} &= \\sqrt{\\operatorname{PDOP}^2 + \\operatorname{TDOP}^2}\\\\\n &= \\sqrt{\\operatorname{tr} Q}\\\\\n\\end{align}"
},
{
"math_id": 35,
"text": "\\begin{align}\n \\operatorname{HDOP} &= \\sqrt{\\sigma_n^2 + \\sigma_e^2} \\\\\n \\operatorname{VDOP} &= \\sqrt{\\sigma_u^2}\n\\end{align}"
},
{
"math_id": 36,
"text": "\\begin{align}\n \\operatorname{GDOP} &= \\sqrt{\\operatorname{EDOP}^2 + \\operatorname{NDOP}^2 + \\operatorname{VDOP}^2 + \\operatorname{TDOP}^2} \\\\\n \\operatorname{HDOP} &= \\sqrt{\\operatorname{EDOP}^2 + \\operatorname{NDOP}^2} \\\\\n \\operatorname{PDOP} &= \\sqrt{\\operatorname{EDOP}^2 + \\operatorname{NDOP}^2 + \\operatorname{VDOP}^2}\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=1021510 |
1021521 | Equity premium puzzle | Economics concept
The equity premium puzzle refers to the inability of an important class of economic models to explain the average equity risk premium (ERP) provided by a diversified portfolio of equities over that of government bonds, which has been observed for more than 100 years. There is a significant disparity between returns produced by stocks compared to returns produced by government treasury bills. The equity premium puzzle addresses the difficulty in understanding and explaining this disparity. This disparity is calculated using the equity risk premium:
The equity risk premium is equal to the difference between equity returns and returns from government bonds. It is equal to around 5% to 8% in the United States.
The risk premium represents the compensation awarded to the equity holder for taking on a higher risk by investing in equities rather than government bonds. However, the 5% to 8% premium is considered to be an implausibly high difference and the equity premium puzzle refers to the unexplained reasons driving this disparity.
Description.
The term was coined by Rajnish Mehra and Edward C. Prescott in a study published in 1985 titled "The Equity Premium: A Puzzle". An earlier version of the paper was published in 1982 under the title "A test of the intertemporal asset pricing model". The authors found that a standard general equilibrium model, calibrated to display key U.S. business cycle fluctuations, generated an equity premium of less than 1% for reasonable risk aversion levels. This result stood in sharp contrast with the average equity premium of 6% observed during the historical period.
In 1982, Robert J. Shiller published the first calculation that showed that either a large risk aversion coefficient or counterfactually large consumption variability was required to explain the means and variances of asset returns. Azeredo (2014) shows, however, that increasing the risk aversion level may produce a negative equity premium in an Arrow-Debreu economy constructed to mimic the persistence in U.S. consumption growth observed in the data since 1929.
The intuitive notion that stocks are much riskier than bonds is not a sufficient explanation of the observation that the magnitude of the disparity between the two returns, the equity risk premium (ERP), is so great that it implies an implausibly high level of investor risk aversion that is fundamentally incompatible with other branches of economics, particularly macroeconomics and financial economics.
The process of calculating the equity risk premium, and selection of the data used, is highly subjective to the study in question, but is generally accepted to be in the range of 3–7% in the long-run. Dimson et al. calculated a premium of "around 3–3.5% on a geometric mean basis" for global equity markets during 1900–2005 (2006). However, over any one decade, the premium shows great variability—from over 19% in the 1950s to 0.3% in the 1970s.
In 1997, Siegel found that the actual standard deviation of the 20-year rate of return was only 2.76%. This means that for long-term investors, the risk of holding stocks is smaller than would be expected from looking only at the standard deviation of annual returns. For long-term investors, the actual risks of fixed-income securities are higher, and following this line of reasoning the equity premium should even be negative.
To quantify the level of risk aversion implied if these figures represented the "expected" outperformance of equities over bonds: an investor with that degree of risk aversion would prefer a certain payoff of $51,300 to a 50/50 bet paying either $50,000 or $100,000.
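As an illustrative check of this figure, the following sketch (assuming constant relative risk aversion utility; the helper names and numerical tolerances are purely illustrative) solves numerically for the risk-aversion coefficient at which a sure $51,300 is exactly as attractive as the 50/50 gamble over $50,000 and $100,000. The answer comes out in the high twenties, far above the single-digit values usually considered plausible.

```python
import math

# Illustrative sketch: find the CRRA coefficient gamma at which a sure $51,300
# is exactly as attractive as a 50/50 gamble over $50,000 and $100,000.
# The CRRA form, the bisection bracket and the tolerance are assumptions.
# Payoffs are in units of $10,000 to avoid floating-point underflow for large gamma.

def crra_utility(c, gamma):
    """Constant relative risk aversion utility; log utility at gamma = 1."""
    if abs(gamma - 1.0) < 1e-9:
        return math.log(c)
    return c ** (1.0 - gamma) / (1.0 - gamma)

def certainty_equivalent(gamma, low=5.0, high=10.0, p=0.5):
    """Certain payoff with the same expected utility as the gamble."""
    eu = p * crra_utility(low, gamma) + (1.0 - p) * crra_utility(high, gamma)
    if abs(gamma - 1.0) < 1e-9:
        return math.exp(eu)
    return ((1.0 - gamma) * eu) ** (1.0 / (1.0 - gamma))

def implied_gamma(target_ce=5.13, lo=1.5, hi=200.0, tol=1e-6):
    """Bisection; the certainty equivalent falls as gamma rises."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if certainty_equivalent(mid) > target_ce:
            lo = mid            # not yet risk averse enough
        else:
            hi = mid
    return 0.5 * (lo + hi)

gamma = implied_gamma()
print(f"implied relative risk aversion: {gamma:.1f}")
print(f"certainty equivalent at that gamma: ${certainty_equivalent(gamma) * 10_000:,.0f}")
```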
The puzzle has led to an extensive research effort in both macroeconomics and finance. So far a range of useful theoretical tools and numerically plausible explanations have been presented, but no one solution is generally accepted by economists.
Theory.
The economy has a single representative household whose preferences over stochastic consumption paths are given by:
formula_0
where formula_1 is the subjective discount factor, formula_2 is the per capita consumption at time formula_3, U() is an increasing and concave utility function. In the Mehra and Prescott (1985) economy, the utility function belongs to the constant relative risk aversion class:
formula_4
where formula_5 is the constant relative risk aversion parameter. When formula_6, the utility function is the natural logarithmic function. Weil (1989) replaced the constant relative risk aversion utility function with the Kreps-Porteus nonexpected utility preferences.
formula_7
The Kreps-Porteus utility function has a constant intertemporal elasticity of substitution and a constant coefficient of relative risk aversion which are not required to be inversely related - a restriction imposed by the constant relative risk aversion utility function. The Mehra and Prescott (1985) and Weil (1989) economies are variations of the Lucas (1978) pure exchange economy. In their economies the growth rate of the endowment process, formula_8, follows an ergodic Markov process.
formula_9
where formula_10. This assumption is the key difference between Mehra and Prescott's economy and Lucas' economy where the level of the endowment process follows a Markov Process.
There is a single firm producing the perishable consumption good. At any given time formula_3, the firm's output must be less than or equal to formula_11 which is stochastic and follows formula_12. There is only one equity share held by the representative household.
We work out the intertemporal choice problem. This leads to:
formula_13
as the fundamental equation.
For computing stock returns
formula_14
where
formula_15
gives the result.
The derivative of the Lagrangian with respect to the percentage of stock held must equal zero to satisfy necessary conditions for optimality under the assumptions of no arbitrage and the law of one price.
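To make the mechanics concrete, the following minimal sketch implements the two-state Markov version of this economy and evaluates the fundamental equation numerically. The calibration numbers (mean consumption growth about 1.8%, spread 3.6%, persistence 0.43, β = 0.96, α = 2) are assumptions chosen only to be in the ballpark of values used in the literature; the point is that the resulting equity premium is a small fraction of one percent, far below the observed 6%.

```python
import numpy as np

# Sketch of a two-state Mehra-Prescott exchange economy (all values are assumptions).
mu, delta = 0.018, 0.036          # mean and spread of consumption growth
phi = 0.43                        # probability of staying in the same state
beta, alpha = 0.96, 2.0           # discount factor and relative risk aversion

lam = np.array([1 + mu + delta, 1 + mu - delta])   # growth states x_t
P = np.array([[phi, 1 - phi],
              [1 - phi, phi]])                     # Markov transition matrix

# Price-dividend ratio w solves w_i = beta * sum_j P_ij * lam_j^(1-alpha) * (w_j + 1)
A = beta * P * lam ** (1 - alpha)                  # A_ij = beta P_ij lam_j^(1-alpha)
w = np.linalg.solve(np.eye(2) - A, A @ np.ones(2))

# Conditional gross returns on equity and on the one-period risk-free bond
R_e = (P * (lam * (w + 1) / w[:, None])).sum(axis=1)
R_f = 1.0 / (beta * (P * lam ** (-alpha)).sum(axis=1))

# Average over the stationary distribution (symmetric chain -> 1/2, 1/2)
pi = np.array([0.5, 0.5])
print(f"mean equity return  : {pi @ R_e - 1: .2%}")
print(f"mean risk-free rate : {pi @ R_f - 1: .2%}")
print(f"equity premium      : {pi @ R_e - pi @ R_f: .2%}")
```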
Data.
A large body of data shows that stocks have had higher returns. For example, Jeremy Siegel says that stocks in the United States have returned 6.8% per year over a 130-year period.
Proponents of the capital asset pricing model say that this is due to the higher beta of stocks, and that higher-beta stocks should return even more.
Others have criticized that the period used in Siegel's data is not typical, or the country is not typical.
Possible explanations.
A large number of explanations for the puzzle have been proposed. These include:
Kocherlakota (1996) and Mehra and Prescott (2003) present a detailed analysis of these explanations in financial markets and conclude that the puzzle is real and remains unexplained. Subsequent reviews of the literature have similarly found no agreed resolution.
The equity premium puzzle occupies a special place in financial and economic theory, and more progress is needed to understand the spread of stock returns over bond returns. Determining how the premium evolves over time, and what factors drive it in various countries and regions, remains an active research agenda.
A 2023 paper by Edward McQuarrie argues the equity risk premium may not exist, at least not as is commonly understood, and is furthermore based on data from too narrow a time period in the late 20th century. He argues a more detailed examination of historical data finds "over multi-decade periods, sometimes stocks outperformed bonds, sometimes bonds outperformed stocks and sometimes they performed about the same. New international data confirm this pattern. Asset returns in the US in the 20th century do not generalize."
The equity premium: a deeper puzzle.
Azeredo (2014) showed that traditional pre-1930 consumption measures understate the extent of serial correlation in the U.S. annual real growth rate of per capita consumption of non-durables and services ("consumption growth"). Under alternative measures proposed in the study, the serial correlation of consumption growth is found to be positive. This new evidence implies that an important subclass of dynamic general equilibrium models studied by Mehra and Prescott (1985) generates negative equity premium for reasonable risk-aversion levels, thus further exacerbating the equity premium puzzle.
Rare events hypothesis.
One possible solution to the equity premium puzzle considered by Julliard and Ghosh (2008) is whether it can be explained by the rare events hypothesis, first proposed by Rietz (1988). They hypothesized that extreme economic events such as the Great Depression, the World Wars and the Great Financial Crisis resulted in equity holders demanding high equity premiums to account for the possibility of the significant loss they could suffer if these events were to materialise. As such, when these extreme economic events do not occur, equity holders are rewarded with higher returns. However, Julliard and Ghosh concluded that rare events are unlikely to explain the equity premium puzzle because the Consumption Capital Asset Pricing Model was rejected by their data and much greater risk aversion levels were required to explain the equity premium puzzle. Moreover, extreme economic events affect all assets (both equity and bonds) and they all yield low returns. For example, the equity premium persisted during the Great Depression, and this suggests that an even greater catastrophic economic event is required, and it must be one which affects only stocks, not bonds.
Myopic loss aversion.
Benartzi & Thaler (1995) contend that the equity premium puzzle can be explained by myopic loss aversion, and their explanation is based on Kahneman and Tversky's prospect theory. They rely on two assumptions about decision-making to support their theory: loss aversion and mental accounting. Loss aversion refers to the assumption that investors are more sensitive to losses than to gains; indeed, research estimates the disutility investors feel from a loss to be roughly twice the utility of an equal gain. The second assumption is that investors frequently evaluate their stocks even when the purpose of the investment is to fund retirement or other long-term goals. This makes investors more risk averse than if they evaluated their stocks less frequently. Their study found that the difference between returns gained from stocks and returns gained from bonds decreases when stocks are evaluated less frequently. The two assumptions combined create myopic loss aversion, and Benartzi & Thaler concluded that the equity premium puzzle can be explained by this theory.
Individual characteristics.
Some explanations rely on assumptions about individual behavior and preferences different from those made by Mehra and Prescott. Examples include the prospect theory model of Benartzi and Thaler (1995) based on loss aversion. A problem for this model is the lack of a general model of portfolio choice and asset valuation for prospect theory.
A second class of explanations is based on relaxation of the optimization assumptions of the standard model. The standard model represents consumers as continuously-optimizing dynamically-consistent expected-utility maximizers. These assumptions provide a tight link between attitudes to risk and attitudes to variations in intertemporal consumption which is crucial in deriving the equity premium puzzle. Solutions of this kind work by weakening the assumption of continuous optimization, for example by supposing that consumers adopt satisficing rules rather than optimizing. An example is info-gap decision theory, based on a non-probabilistic treatment of uncertainty, which leads to the adoption of a robust satisficing approach to asset allocation.
Equity characteristics.
Another explanation of the equity premium puzzle focuses on the characteristics of equity that cannot be captured by typical models but are still consistent with optimisation by investors.
The most significant characteristic that is not typically considered is the requirement for equity holders to monitor the firm's activity or to have a manager assist them with this. The principal-agent relationship between corporation managers and equity holders is therefore very prevalent. If an investor chooses not to have a manager, it is likely costly for them to monitor the activity of the corporations they invest in; they often rely heavily on auditors, or they appeal to the market hypothesis in which information about asset values is revealed in the equity markets. This hypothesis is based on the theory that an inexperienced and uninformed investor can count on getting average market returns in an identifiable market portfolio, though it is questionable whether an uninformed investor can actually do this. However, for the characteristics of equity to explain the premium, it is only necessary to hypothesise that people looking to invest do not think they can reach the same level of performance as the market.
Another explanation related to the characteristics of equity was explored by a variety of studies including Holmstrom and Tirole (1998), Bansal and Coleman (1996) and Palomino (1996), and relates to liquidity. Palomino described a noise-trader model in which the market for equities is thin and imperfectly competitive, and the lower its equilibrium price drops, the higher the premium over risk-free bonds rises. Holmstrom and Tirole in their studies developed another role for liquidity in the equity market, which involves firms being willing to pay a premium for bonds over private claims when they face uncertainty over liquidity needs.
Tax distortions.
Another explanation of the observed growing equity premium, argued by McGrattan and Prescott (2001), is that it results from variations over time in taxes and particularly in their effect on interest and dividend income. It is difficult, however, to give credibility to this analysis, owing to the difficulties in the calibration used as well as ambiguity surrounding the existence of any noticeable equity premium before 1945. Even so, the observation that equity premium changes arise in part from the distortion of taxes over time should be taken into account, and gives more validity to the equity premium itself.
Related data is mentioned in the Handbook of the Equity Risk Premium. Beginning in 1919, it captured the post-World War I recovery, while omitting wartime losses and low pre-war returns. After adding these earlier years, the arithmetic average of the British stock premium for the entire 20th century is 6.6%, which is about 2¼ percentage points lower than the incorrect figure inferred from the 1919–1999 data.
Implied volatility.
Graham and Harvey have estimated that, for the United States, the expected average premium during the period June 2000 to November 2006 ranged between 4.65 and 2.50. They found a modest correlation of 0.62 between the 10-year equity premium and a measure of implied volatility (in this case VIX, the Chicago Board Options Exchange Volatility Index).
Dennis, Mayhew & Stivers (2006) find that changes in implied volatility have an asymmetric effect on stock returns. They found that negative changes in implied volatility have a stronger impact on stock returns than positive changes in implied volatility. The authors argue that such an asymmetric volatility effect can be explained by the fact that investors are more concerned with downside risk than upside potential. That is, investors are more likely to react to negative news and expect negative changes in implied volatility to have a stronger impact on stock returns. The authors also find that changes in implied volatility can predict future stock returns. Stocks that experience negative changes in implied volatility have higher expected returns in the future. The authors state that this relationship is caused by the association between negative changes in implied volatility and market downturns.
Yan (2011) presents an explanation for the equity premium puzzle using the slope of the implied volatility smile. The implied volatility smile refers to the pattern of implied volatilities for options contracts with the same expiration date but different strike prices. The slope of the implied volatility smile reflects the market's expectations for future changes in the stock price, with a steeper slope indicating higher expected volatility.
The author shows that the slope of the implied volatility smile is a significant predictor of stock returns, even after controlling for traditional risk factors. Specifically, stocks with steeper implied volatility smiles (i.e., higher jump risk) have higher expected returns, consistent with the equity premium puzzle. The author argues that this relationship between the slope of the implied volatility smile and stock returns can be explained by investors' preference for jump risk. Jump risk refers to the risk of sudden, large movements in the stock price, which are not fully captured by traditional measures of volatility. Yan argues that investors are willing to accept lower average returns on stocks that have higher jump risk, because they expect to be compensated with higher returns during times of market stress.
Information derivatives.
The simplest scientific interpretation of the puzzle suggests that consumption optimization is not responsible for the equity premium. More precisely, the timeseries of aggregate consumption is not a leading explanatory factor of the equity premium.
The human brain is (simultaneously) engaged in many strategies. Each of these strategies has a goal. While individually rational, the strategies are in constant competition for limited resources. Even within a single person this competition produces a highly complex behavior which does not fit any simple model.
Nevertheless, the individual strategies can be understood. In finance this is equivalent to understanding different financial products as information derivatives, i.e. as products which are derived from all the relevant information available to the customer. Even if the numerical values for the equity premium had been unknown, rational examination of the equity product would have accurately predicted the observed ballpark values.
From the information derivatives viewpoint consumption optimization is just one possible goal (which never really comes up in practice in its pure academic form). To a classically trained economist this may feel like a loss of a fundamental principle. But it may also be a much needed connection to reality (capturing the real behavior of live investors). Viewing equities as a stand-alone product (information derivative) does not isolate them from the wider economic picture. Equity investments survive in competition with other strategies. The popularity of equities as an investment strategy demands an explanation. In terms of data this means that the information derivatives approach needs to explain not just the realized equities performance but also the investor-expected equity premia. The data suggest the long-term equity investments have been very good at delivering on the theoretical expectations. This explains the viability of the strategy in addition to its performance (i.e. in addition to the equity premium).
Market failure explanations.
Two broad classes of market failure have been considered as explanations of the equity premium.
First, problems of adverse selection and moral hazard may result in the absence of markets in which individuals can insure themselves against systematic risk in labor income and noncorporate profits.
Second, transaction costs or liquidity constraints may prevent individuals from smoothing consumption over time. In relation to transaction costs, there are significantly greater costs associated with trading stocks than trading bonds. These include costs to acquire information, broker fees, taxes, load fees and the bid-ask spread. As such, when shareholders attempt to capitalise on the equity premium by adjusting their asset allocation and purchasing more stocks, they incur significant trading costs which eliminate the gains from the equity premium. However, Kocherlakota (1996) contends that there is insufficient evidence to support this proposition and further data about the size and sources of trading costs need to be collected before this proposition could be validated.
Denial of equity premium.
A final possible explanation is that there is no puzzle to explain: that there is no equity premium. This can be argued in a number of ways, all of them being different forms of the argument that we don't have enough statistical power to distinguish the equity premium from zero:
A related criticism is that the apparent equity premium is an artifact of observing stock market bubbles in progress.
Note however that most mainstream economists agree that the evidence shows substantial statistical power. Benartzi & Thaler analyzed equity returns over the period 1802–1990 and found that whilst equity returns remained stable between 5.5% and 6.5%, returns on government bonds fell significantly from around 5% to 0.5%. Moreover, analysis of how faculty members funded their retirement showed that people who had invested in stocks received much higher returns than people who had invested in government bonds.
Implications.
Implications for the Individual Investor
For the individual investor, the equity premium may represent a reasonable reward for taking on the risk of buying shares, such that they base their decisions to allocate assets to shares or bonds on how risk tolerant or risk averse they are. On the other hand, if the investor believes that the equity premium arises from mistakes and fears, they would capitalize on those mistakes and fears and invest considerable portions of their assets in shares. Here, it is prudent to note that economists more commonly allocate significant portions of their assets to shares.
Currently, the equity premium is estimated to be 3%. Although this is lower than historical rates, it is still significantly more advantageous than bonds for investors investing in their retirement funds and other long-term funds.
The magnitude of the equity premium brings about substantial implications for policy, welfare and also resource allocation.
Policy and Welfare Implications
Campbell and Cochrane (1995) found, in a study of a model that generates equity premium values consistent with asset prices, that welfare costs are similar in magnitude to welfare benefits. Essentially, a large risk premium in a society where asset prices reflect consumer preferences implies that the cost of welfare fluctuations is also large. It also means that in recessions, welfare costs are excessive regardless of aggregate consumption. As the equity premium rises, the marginal value of recession-state income steadily increases, further raising the welfare costs of recessions. This also raises questions regarding the need for microeconomic policies that operate by way of higher productivity in the long run by trading off short-term pain in the form of adjustment costs. Given the impact on welfare through recessions and the large equity premium, these short-term trade-offs resulting from economic policy are likely not ideal and would preferably take place in times of normal economic activity.
Resource Allocation
When there is a large risk premium associated with equity, there is a high cost of systematic risk in returns. One potential implication concerns individual portfolio decisions. Some research has argued that high rates of return are just signs of misplaced risk-aversion, in which investors could earn high returns with little risk by switching from stocks to other assets such as bonds. Research to the contrary indicates that a large percentage of the general public believes that the stock market is best for investors who are in it for the long haul, which may also link to another implication, namely trends in the equity premium. Some claims have been made that the equity premium has declined over the past few years. This may be supported by other studies claiming that tax declines may continue to reduce the premium, and by the fact that transaction costs in securities markets have declined, which is consistent with a declining premium. The trend implication is also supported by models such as 'noise traders' that create a cyclical premium: when noise traders are excessively optimistic the premium declines, and vice versa when the optimism is replaced with pessimism; this would explain the continued decline of the equity premium as a stock price bubble.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E_0 \\left[\\sum_{t=0}^\\infty \\beta^t U(c_t)\\right]"
},
{
"math_id": 1,
"text": "0<\\beta<1"
},
{
"math_id": 2,
"text": "c_t"
},
{
"math_id": 3,
"text": "t"
},
{
"math_id": 4,
"text": "U(c, \\alpha) = \\frac{c^{(1-\\alpha)}}{1-\\alpha}"
},
{
"math_id": 5,
"text": "0< \\alpha < \\infty "
},
{
"math_id": 6,
"text": "\\alpha = 1"
},
{
"math_id": 7,
"text": "U_t = \\left[c_t^{1-\\rho}+\\beta (E_t U_{t+1}^{1-\\alpha})^{(1-\\rho)/(1-\\alpha)}\\right]^{1/(1-\\rho)}"
},
{
"math_id": 8,
"text": "x_t"
},
{
"math_id": 9,
"text": "P \\left [x_{t+1} = \\lambda_j | x_t = \\lambda_i \\right] = \\phi_{i,j} "
},
{
"math_id": 10,
"text": "x_t \\in \\{\\lambda_1,...,\\lambda_n\\}"
},
{
"math_id": 11,
"text": "y_{t}"
},
{
"math_id": 12,
"text": "y_{t+1}=x_{t+1} y_{t}"
},
{
"math_id": 13,
"text": "p_t U'(c_t) = \\beta E_t[(p_{t+1} + y_{t+1}) U'(c_{t+1})]"
},
{
"math_id": 14,
"text": "1 = \\beta E_t\\left[\\frac{U'(c_{t+1})}{U'(c_t)} R_{e, t+1}\\right]"
},
{
"math_id": 15,
"text": "R_{e, t+1} = (p_{t+1} + y_{t+1}) / p_t"
}
] | https://en.wikipedia.org/wiki?curid=1021521 |
1021753 | Variety (universal algebra) | Class of algebraic structures
In universal algebra, a variety of algebras or equational class is the class of all algebraic structures of a given signature satisfying a given set of identities. For example, the groups form a variety of algebras, as do the abelian groups, the rings, the monoids etc. According to Birkhoff's theorem, a class of algebraic structures of the same signature is a variety if and only if it is closed under the taking of homomorphic images, subalgebras, and (direct) products. In the context of category theory, a variety of algebras, together with its homomorphisms, forms a category; these are usually called "finitary algebraic categories".
A "covariety" is the class of all coalgebraic structures of a given signature.
Terminology.
A variety of algebras should not be confused with an algebraic variety, which means a set of solutions to a system of polynomial equations. They are formally quite distinct and their theories have little in common.
The term "variety of algebras" refers to algebras in the general sense of universal algebra; there is also a more specific sense of algebra, namely as algebra over a field, i.e. a vector space equipped with a bilinear multiplication.
Definition.
A "signature" (in this context) is a set, whose elements are called "operations", each of which is assigned a natural number (0, 1, 2, ...) called its "arity". Given a signature "σ" and a set "V", whose elements are called "variables", a "word" is a finite rooted tree in which each node is labelled by either a variable or an operation, such that every node labelled by a variable has no branches away from the root and every node labelled by an operation "o" has as many branches away from the root as the arity of "o". An "equational law" is a pair of such words; the axiom consisting of the words "v" and "w" is written as "v" = "w".
A "theory" consists of a signature, a set of variables, and a set of equational laws. Any theory gives a variety of algebras as follows. Given a theory "T", an "algebra" of "T" consists of a set "A" together with, for each operation "o" of "T" with arity "n", a function "o""A" : "A""n" → "A" such that for each axiom "v" = "w" and each assignment of elements of "A" to the variables in that axiom, the equation holds that is given by applying the operations to the elements of "A" as indicated by the trees defining "v" and "w". The class of algebras of a given theory "T" is called a "variety of algebras".
Given two algebras of a theory "T", say "A" and "B", a "homomorphism" is a function "f" : "A" → "B" such that
formula_0
for every operation "o" of arity "n". Any theory gives a category where the objects are algebras of that theory and the morphisms are homomorphisms.
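For a finite signature and finite carrier sets, these definitions can be checked mechanically. The following sketch (Python; the class and function names are hypothetical helpers, not from any standard library) represents a finite algebra of signature (2) by an operation on a carrier, verifies an equational law by brute force over all assignments of the variables, and checks the homomorphism condition for a given map.

```python
from itertools import product

# Sketch: finite algebras of signature (2) -- a single binary operation 'op'.
# 'Algebra', 'satisfies_assoc' and 'is_hom' are illustrative helper names.

class Algebra:
    def __init__(self, carrier, op):
        self.carrier = list(carrier)
        self.op = op                      # op: (a, b) -> element of carrier

def satisfies_assoc(alg):
    """Check the identity x(yz) = (xy)z over every assignment of x, y, z."""
    return all(alg.op(x, alg.op(y, z)) == alg.op(alg.op(x, y), z)
               for x, y, z in product(alg.carrier, repeat=3))

def is_hom(f, A, B):
    """Check f(op_A(a1, a2)) == op_B(f(a1), f(a2)) for all a1, a2."""
    return all(f[A.op(a1, a2)] == B.op(f[a1], f[a2])
               for a1, a2 in product(A.carrier, repeat=2))

# Z/4 and Z/2 under addition: both are semigroups (indeed groups).
Z4 = Algebra(range(4), lambda a, b: (a + b) % 4)
Z2 = Algebra(range(2), lambda a, b: (a + b) % 2)

print(satisfies_assoc(Z4), satisfies_assoc(Z2))          # True True
print(is_hom({a: a % 2 for a in Z4.carrier}, Z4, Z2))    # reduction mod 2: True
```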
Examples.
The class of all semigroups forms a variety of algebras of signature (2), meaning that a semigroup has a single binary operation. A sufficient defining equation is the associative law:
formula_1
The class of groups forms a variety of algebras of signature (2,0,1), the three operations being respectively "multiplication" (binary), "identity" (nullary, a constant) and "inversion" (unary). The familiar axioms of associativity, identity and inverse form one suitable set of identities:
formula_2
formula_3
formula_4
The class of rings also forms a variety of algebras. The signature here is (2,2,0,0,1) (two binary operations, two constants, and one unary operation).
If we fix a specific ring "R", we can consider the class of left "R"-modules. To express the scalar multiplication with elements from "R", we need one unary operation for each element of "R". If the ring is infinite, we will thus have infinitely many operations, which is allowed by the definition of an algebraic structure in universal algebra. We will then also need infinitely many identities to express the module axioms, which is allowed by the definition of a variety of algebras. So the left "R"-modules do form a variety of algebras.
The fields do "not" form a variety of algebras; the requirement that all non-zero elements be invertible cannot be expressed as a universally satisfied identity (see below).
The cancellative semigroups also do not form a variety of algebras, since the cancellation property is not an equation, it is an implication that is not equivalent to any set of equations. However, they do form a quasivariety as the implication defining the cancellation property is an example of a quasi-identity.
Birkhoff's Variety theorem.
Given a class of algebraic structures of the same signature, we can define the notions of homomorphism, subalgebra, and product. Garrett Birkhoff proved that a class of algebraic structures of the same signature is a variety if and only if it is closed under the taking of homomorphic images, subalgebras and arbitrary products. This is a result of fundamental importance to universal algebra and known as "Birkhoff's variety theorem" or as the "HSP theorem". "H", "S", and "P" stand, respectively, for the operations of homomorphism, subalgebra, and product.
One direction of the equivalence mentioned above, namely that a class of algebras satisfying some set of identities must be closed under the HSP operations, follows immediately from the definitions. Proving the converse—classes of algebras closed under the HSP operations must be equational—is more difficult.
Using the easy direction of Birkhoff's theorem, we can for example verify the claim made above, that the field axioms are not expressible by any possible set of identities: the product of fields is not a field, so fields do not form a variety.
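The field example can be checked directly on the smallest case: in the direct product GF(2) × GF(2) with componentwise operations, the nonzero element (1, 0) has no multiplicative inverse and is in fact a zero divisor, so the product is not a field. A brute-force sketch (componentwise operations assumed; names are illustrative):

```python
from itertools import product

# Sketch: GF(2) x GF(2) with componentwise multiplication is not a field.
F2 = (0, 1)
mul = lambda a, b: ((a[0] * b[0]) % 2, (a[1] * b[1]) % 2)

carrier = list(product(F2, F2))
one, zero = (1, 1), (0, 0)

a = (1, 0)                                    # a nonzero element
has_inverse = any(mul(a, b) == one for b in carrier)
zero_divisor = any(b != zero and mul(a, b) == zero for b in carrier)
print(f"(1, 0) invertible: {has_inverse}, zero divisor: {zero_divisor}")
# -> (1, 0) invertible: False, zero divisor: True   (e.g. (1,0)*(0,1) = (0,0))
```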
Subvarieties.
A "subvariety" of a variety of algebras "V" is a subclass of "V" that has the same signature as "V" and is itself a variety, i.e., is defined by a set of identities.
Notice that although every group becomes a semigroup when the identity as a constant is omitted (and/or the inverse operation is omitted), the class of groups does "not" form a subvariety of the variety of semigroups because the signatures are different.
Similarly, the class of semigroups that are groups is not a subvariety of the variety of semigroups. The class of monoids that are groups contains formula_5 and does not contain its subalgebra (more precisely, submonoid) formula_6.
However, the class of abelian groups is a subvariety of the variety of groups because it consists of those groups satisfying "xy" = "yx", with no change of signature. The finitely generated abelian groups do not form a subvariety, since by Birkhoff's theorem they don't form a variety, as an arbitrary product of finitely generated abelian groups is not finitely generated.
Viewing a variety "V" and its homomorphisms as a category, a subvariety "U" of "V" is a full subcategory of "V", meaning that for any objects "a", "b" in "U", the homomorphisms from "a" to "b" in "U" are exactly those from "a" to "b" in "V".
Free objects.
Suppose "V" is a non-trivial variety of algebras, i.e. "V" contains algebras with more than one element. One can show that for every set "S", the variety "V" contains a "free algebra FS on S". This means that there is an injective set map "i" : "S" → "FS" that satisfies the following universal property: given any algebra "A" in "V" and any map "k" : "S" → "A", there exists a unique "V"-homomorphism "f" : "FS" → "A" such that "f" ∘ "i" = "k".
This generalizes the notions of free group, free abelian group, free algebra, free module etc. It has the consequence that every algebra in a variety is a homomorphic image of a free algebra.
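As a concrete instance, in the variety of semigroups the free algebra on a set "S" can be realized as the nonempty words over "S" under concatenation, and the universal property says that any assignment of the generators into a semigroup extends uniquely to a homomorphism. A small sketch (function names are illustrative only):

```python
# Sketch: the free semigroup on a set S, realized as nonempty tuples (words)
# under concatenation. 'extend' is the unique homomorphism determined by an
# assignment k : S -> A into any semigroup A with operation 'op'.

def i(s):
    """The injection S -> F_S sending a generator to a one-letter word."""
    return (s,)

def concat(w1, w2):                  # the free semigroup operation
    return w1 + w2

def extend(k, op, word):
    """Unique homomorphism F_S -> A with extend(k, op, i(s)) == k(s)."""
    result = k(word[0])
    for s in word[1:]:
        result = op(result, k(s))
    return result

# Example: map generators 'a' -> 2, 'b' -> 3 into the semigroup (N, +).
k = {'a': 2, 'b': 3}.get
word = concat(i('a'), concat(i('b'), i('a')))   # the word "aba"
print(extend(k, lambda x, y: x + y, word))      # 2 + 3 + 2 = 7
```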
Category theory.
Besides varieties, category theorists use two other frameworks that are equivalent in terms of the kinds of algebras they describe: finitary monads and Lawvere theories. We may go from a variety to a finitary monad as follows. A category with some variety of algebras as objects and homomorphisms as morphisms is called a finitary algebraic category. For any finitary algebraic category "V", the forgetful functor "G" : "V" → Set has a left adjoint "F" : Set → "V", namely the functor that assigns to each set the free algebra on that set. This adjunction is monadic, meaning that the category "V" is equivalent to the Eilenberg–Moore category Set"T" for the monad "T" = "GF". Moreover the monad "T" is finitary, meaning it commutes with filtered colimits.
The monad "T" : Set → Set is thus enough to recover the finitary algebraic category. Indeed, finitary algebraic categories are precisely those categories equivalent to the Eilenberg-Moore categories of finitary monads. Both these, in turn, are equivalent to categories of algebras of Lawvere theories.
Working with monads permits the following generalization. One says a category is an algebraic category if it is monadic over Set. This is a more general notion than "finitary algebraic category" because it admits such categories as "CABA" (complete atomic Boolean algebras) and "CSLat" (complete semilattices) whose signatures include infinitary operations. In those two cases the signature is large, meaning that it forms not a set but a proper class, because its operations are of unbounded arity. The algebraic category of sigma algebras also has infinitary operations, but their arity is countable whence its signature is small (forms a set).
Every finitary algebraic category is a locally presentable category.
Pseudovariety of finite algebras.
Since varieties are closed under arbitrary direct products, all non-trivial varieties contain infinite algebras. Attempts have been made to develop a finitary analogue of the theory of varieties. This led, e.g., to the notion of variety of finite semigroups. This kind of variety uses only finitary products. However, it uses a more general kind of identities.
A "pseudovariety" is usually defined to be a class of algebras of a given signature, closed under the taking of homomorphic images, subalgebras and finitary direct products. Not every author assumes that all algebras of a pseudovariety are finite; if this is the case, one sometimes talks of a "variety of finite algebras". For pseudovarieties, there is no general finitary counterpart to Birkhoff's theorem, but in many cases the introduction of a more complex notion of equations allows similar results to be derived.
Pseudovarieties are of particular importance in the study of finite semigroups and hence in formal language theory. Eilenberg's theorem, often referred to as the "variety theorem", describes a natural correspondence between varieties of regular languages and pseudovarieties of finite semigroups.
Notes.
<templatestyles src="Reflist/styles.css" />
External links.
<templatestyles src="Refbegin/styles.css" />
Two monographs available free online: | [
{
"math_id": 0,
"text": "f(o_A(a_1, \\dots, a_n)) = o_B(f(a_1), \\dots, f(a_n))"
},
{
"math_id": 1,
"text": "x(yz) = (xy)z."
},
{
"math_id": 2,
"text": "x(yz) = (xy)z"
},
{
"math_id": 3,
"text": "1 x = x 1 = x"
},
{
"math_id": 4,
"text": "x x^{-1} = x^{-1} x = 1."
},
{
"math_id": 5,
"text": "\\langle\\mathbb Z,+\\rangle"
},
{
"math_id": 6,
"text": "\\langle\\mathbb N,+\\rangle"
}
] | https://en.wikipedia.org/wiki?curid=1021753 |
1021754 | Sound localization | Biological sound detection process
Sound localization is a listener's ability to identify the location or origin of a detected sound in direction and distance.
The sound localization mechanisms of the mammalian auditory system have been extensively studied. The auditory system uses several cues for sound source localization, including time difference and level difference (or intensity difference) between the ears, and spectral information. Other animals, such as birds and reptiles, also use them but they may use them differently, and some also have localization cues which are absent in the human auditory system, such as the effects of ear movements. Animals with the ability to localize sound have a clear evolutionary advantage.
How sound reaches the brain.
Sound is the perceptual result of mechanical vibrations traveling through a medium such as air or water. Through the mechanisms of compression and rarefaction, sound waves travel through the air, bounce off the pinna and concha of the exterior ear, and enter the ear canal. In mammals, the sound waves vibrate the tympanic membrane (ear drum), causing the three bones of the middle ear to vibrate, which then sends the energy through the oval window and into the cochlea where it is changed into a chemical signal by hair cells in the organ of Corti, which synapse onto spiral ganglion fibers that travel through the cochlear nerve into the brain.
Neural interactions.
In vertebrates, interaural time differences are known to be calculated in the superior olivary nucleus of the brainstem. According to Jeffress, this calculation relies on delay lines: neurons in the superior olive which accept innervation from each ear with different connecting axon lengths. Some cells are more directly connected to one ear than the other, thus they are specific for a particular interaural time difference. This theory is equivalent to the mathematical procedure of cross-correlation. However, because Jeffress's theory is unable to account for the precedence effect, in which only the first of multiple identical sounds is used to determine the sounds' location (thus avoiding confusion caused by echoes), it cannot be entirely used to explain the response. Furthermore, a number of recent physiological observations made in the midbrain and brainstem of small mammals have shed considerable doubt on the validity of Jeffress's original ideas.
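The cross-correlation reading of the Jeffress model can be illustrated with a short numerical sketch: a delayed copy of a signal stands in for the input at the far ear, and the lag at which the cross-correlation peaks recovers the interaural time difference. The sampling rate, signal and delay below are made-up values chosen for illustration.

```python
import numpy as np

# Sketch: recover an interaural time difference by cross-correlation,
# the mathematical analogue of the Jeffress delay-line model.
fs = 48_000                       # sampling rate (Hz), assumed
true_itd = 300e-6                 # 300 microseconds, assumed
delay = int(round(true_itd * fs))

rng = np.random.default_rng(0)
left = rng.standard_normal(fs // 10)           # 100 ms of noise as the "sound"
right = np.roll(left, delay)                   # the far ear hears it later
right[:delay] = 0.0                            # avoid wrap-around artifacts

corr = np.correlate(right, left, mode="full")  # peak index gives the lag
lag = corr.argmax() - (len(left) - 1)
print(f"estimated ITD: {lag / fs * 1e6:.0f} microseconds")
# ~292 microseconds: the true 300 us quantized to the sample grid
```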
Neurons sensitive to interaural level differences (ILDs) are excited by stimulation of one ear and inhibited by stimulation of the other ear, such that the response magnitude of the cell depends on the relative strengths of the two inputs, which in turn, depends on the sound intensities at the ears.
In the auditory midbrain nucleus, the inferior colliculus (IC), many ILD sensitive neurons have response functions that decline steeply from maximum to zero spikes as a function of ILD. However, there are also many neurons with much more shallow response functions that do not decline to zero spikes.
Human auditory system.
Sound localization is the process of determining the location of a sound source. The brain utilizes subtle differences in intensity, spectral, and timing cues to localize sound sources.
Localization can be described in terms of three-dimensional position: the azimuth or horizontal angle, the elevation or vertical angle, and the distance (for static sounds) or velocity (for moving sounds).
The azimuth of a sound is signaled by the difference in arrival times between the ears, by the relative amplitude of high-frequency sounds (the shadow effect), and by the asymmetrical spectral reflections from various parts of our bodies, including torso, shoulders, and pinnae.
The distance cues are the loss of amplitude, the loss of high frequencies, and the ratio of the direct signal to the reverberated signal.
Depending on where the source is located, our head acts as a barrier to change the timbre, intensity, and spectral qualities of the sound, helping the brain orient where the sound emanated from. These minute differences between the two ears are known as interaural cues.
Lower frequencies, with longer wavelengths, diffract the sound around the head forcing the brain to focus only on the phasing cues from the source.
Helmut Haas discovered that we can discern the sound source despite additional reflections 10 decibels louder than the original wave front, using the earliest arriving wave front. This principle is known as the Haas effect, a specific version of the precedence effect. Haas measured that even a 1 millisecond difference in timing between the original sound and the reflected sound increased the sense of spaciousness, allowing the brain to discern the true location of the original sound. The nervous system combines all early reflections into a single perceptual whole, allowing the brain to process multiple different sounds at once. The nervous system will combine reflections that are within about 35 milliseconds of each other and that have a similar intensity.
Duplex theory.
To determine the lateral input direction (left, front, right), the auditory system analyzes the following ear signal information:
In 1907, Lord Rayleigh utilized tuning forks to generate monophonic excitation and studied the theory of lateral sound localization on a human head model without an auricle. He first presented the sound localization theory based on interaural cue differences, which is known as the Duplex Theory. Human ears are on different sides of the head, and thus have different coordinates in space. As shown in the duplex theory figure, since the distances between the acoustic source and the ears differ, there are a time difference and an intensity difference between the sound signals of the two ears. These differences are called the Interaural Time Difference (ITD) and the Interaural Intensity Difference (IID) respectively.
From the duplex theory figure we can see that for source B1 or source B2, there will be a propagation delay between the two ears, which generates the ITD. At the same time, the human head and ears may have a shadowing effect on high-frequency signals, which generates the IID.
For frequencies below 800 Hz, the dimensions of the head (ear distance 21.5 cm, corresponding to an interaural time delay of 626 μs) are smaller than the half wavelength of the sound waves. So the auditory system can determine phase delays between both ears without confusion. Interaural level differences are very low in this frequency range, especially below about 200 Hz, so a precise evaluation of the input direction is nearly impossible on the basis of level differences alone. As the frequency drops below 80 Hz it becomes difficult or impossible to use either time difference or level difference to determine a sound's lateral source, because the phase difference between the ears becomes too small for a directional evaluation.
For frequencies above 1600 Hz the dimensions of the head are greater than the length of the sound waves. An unambiguous determination of the input direction based on interaural phase alone is not possible at these frequencies. However, the interaural level differences become larger, and these level differences are evaluated by the auditory system. Also, delays between the ears can still be detected via some combination of phase differences and group delays, which are more pronounced at higher frequencies; that is, if there is a sound onset, the delay of this onset between the ears can be used to determine the input direction of the corresponding sound source. This mechanism becomes especially important in reverberant environments. After a sound onset there is a short time frame where the direct sound reaches the ears, but not yet the reflected sound. The auditory system uses this short time frame for evaluating the sound source direction, and keeps this detected direction as long as reflections and reverberation prevent an unambiguous direction estimation. The mechanisms described above cannot be used to differentiate between a sound source ahead of the hearer or behind the hearer; therefore additional cues have to be evaluated.
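A common back-of-the-envelope model for the ITD side of the duplex theory is the spherical-head (Woodworth) approximation, ITD(θ) ≈ (r/c)(θ + sin θ). The sketch below assumes a head radius of 8.75 cm and a speed of sound of 343 m/s, which gives a maximal ITD of roughly 650 μs, the same order as the 626 μs figure quoted above.

```python
import math

# Sketch: Woodworth spherical-head approximation for interaural time difference.
# Head radius and speed of sound are assumed round numbers.
HEAD_RADIUS_M = 0.0875       # ~8.75 cm
SPEED_OF_SOUND = 343.0       # m/s at room temperature

def itd_seconds(azimuth_deg):
    """ITD for a distant source at the given azimuth (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 15, 30, 45, 60, 75, 90):
    print(f"azimuth {az:2d} deg -> ITD {itd_seconds(az) * 1e6:6.1f} microseconds")
```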
Pinna filtering effect.
Duplex theory shows that ITD and IID play significant roles in sound localization, but they can only deal with lateral localization problems. For example, if two acoustic sources are placed symmetrically at the front and back of the right side of the human head, they will generate equal ITDs and IIDs, in what is called the cone model effect. However, human ears can still distinguish between these sources. Besides that, in natural hearing, one ear alone, without any ITD or IID, can distinguish between them with high accuracy. Due to the limitations of duplex theory, researchers proposed the pinna filtering effect theory. The shape of the human pinna is concave with complex folds and is asymmetrical both horizontally and vertically. Reflected and direct waves generate a frequency spectrum on the eardrum, related to the acoustic sources. The auditory nerves then localize the sources using this frequency spectrum.
These spectrum clues generated by the pinna filtering effect can be presented as a head-related transfer function (HRTF). The corresponding time domain expressions are called the Head-Related Impulse Response (HRIR). The HRTF is also described as the transfer function from the free field to a specific point in the ear canal. We usually recognize HRTFs as LTI systems:
formula_6
formula_7
where L and R represent the left ear and right ear respectively, formula_8 and formula_9 represent the amplitude of the sound pressure at the entrances to the left and right ear canals, and formula_10 is the amplitude of sound pressure at the center of the head coordinate when listener does not exist. In general, an HRTF's formula_11 and formula_12 are functions of source angular position formula_1, elevation angle formula_13, the distance between the source and the center of the head formula_2, the angular velocity formula_14 and the equivalent dimension of the head formula_15.
At present, the main institutes that work on measuring HRTF database include CIPIC International Lab, MIT Media Lab, the Graduate School in Psychoacoustics at the University of Oldenburg, the Neurophysiology Lab at the University of Wisconsin–Madison and Ames Lab of NASA. Databases of HRIRs from humans with normal and impaired hearing and from animals are publicly available.
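Treating the HRTF as an LTI system leads directly to the usual way of synthesizing a spatialized signal: convolve a mono source with the left- and right-ear impulse responses measured for the desired direction. The sketch below uses made-up placeholder HRIRs (a crude delay-and-attenuation pair); in practice they would be loaded from one of the databases mentioned above.

```python
import numpy as np

# Sketch: binaural rendering as two convolutions with an HRIR pair.
# The HRIRs here are toy placeholders; real ones come from a measured database.
fs = 44_100
mono = np.random.default_rng(1).standard_normal(fs)      # 1 s of test noise

# Placeholder HRIRs: a pure delay plus attenuation difference between ears,
# crudely mimicking a source off to the right.
hrir_left = np.zeros(64);  hrir_left[30] = 0.6            # later, quieter
hrir_right = np.zeros(64); hrir_right[5] = 1.0            # earlier, louder

left_ear = np.convolve(mono, hrir_left)
right_ear = np.convolve(mono, hrir_right)
binaural = np.stack([left_ear, right_ear], axis=1)        # (samples, 2)
print(binaural.shape)         # ready to be written out as a stereo file
```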
Other cues.
The human outer ear, i.e. the structures of the pinna and the external ear canal, form direction-selective filters. Depending on the sound input direction, different filter resonances become active. These resonances implant direction-specific patterns into the frequency responses of the ears, which can be evaluated by the auditory system for sound localization. Together with other direction-selective reflections at the head, shoulders and torso, they form the outer ear transfer functions. These patterns in the ear's frequency responses are highly individual, depending on the shape and size of the outer ear. If sound is presented through headphones, and has been recorded via another head with different-shaped outer ear surfaces, the directional patterns differ from the listener's own, and problems will appear when trying to evaluate directions in the median plane with these foreign ears. As a consequence, front–back permutations or inside-the-head-localization can appear when listening to dummy head recordings, or otherwise referred to as binaural recordings. It has been shown that human subjects can monaurally localize high frequency sound but not low frequency sound. Binaural localization, however, was possible with lower frequencies. This is likely due to the pinna being small enough to only interact with sound waves of high frequency. It seems that people can only accurately localize the elevation of sounds that are complex and include frequencies above 7,000 Hz, and a pinna must be present.
When the head is stationary, the binaural cues for lateral sound localization (interaural time difference and interaural level difference) do not give information about the location of a sound in the median plane. Identical ITDs and ILDs can be produced by sounds at eye level or at any elevation, as long as the lateral direction is constant. However, if the head is rotated, the ITD and ILD change dynamically, and those changes are different for sounds at different elevations. For example, if an eye-level sound source is straight ahead and the head turns to the left, the sound becomes louder (and arrives sooner) at the right ear than at the left. But if the sound source is directly overhead, there will be no change in the ITD and ILD as the head turns. Intermediate elevations will produce intermediate degrees of change, and if the presentation of binaural cues to the two ears during head movement is reversed, the sound will be heard behind the listener. Hans Wallach artificially altered a sound's binaural cues during movements of the head. Although the sound was objectively placed at eye level, the dynamic changes to ITD and ILD as the head rotated were those that would be produced if the sound source had been elevated. In this situation, the sound was heard at the synthesized elevation. The fact that the sound sources objectively remained at eye level prevented monaural cues from specifying the elevation, showing that it was the dynamic change in the binaural cues during head movement that allowed the sound to be correctly localized in the vertical dimension. The head movements need not be actively produced; accurate vertical localization occurred in a similar setup when the head rotation was produced passively, by seating the blindfolded subject in a rotating chair. As long as the dynamic changes in binaural cues accompanied a perceived head rotation, the synthesized elevation was perceived.
In the 1960s Batteau showed the pinna also enhances horizontal localization.
Distance of the sound source.
The human auditory system has only limited possibilities to determine the distance of a sound source. In the close-up-range there are some indications for distance determination, such as extreme level differences (e.g. when whispering into one ear) or specific pinna (the visible part of the ear) resonances in the close-up range.
The auditory system uses these clues to estimate the distance to a sound source:
Signal processing.
Sound processing of the human auditory system is performed in so-called critical bands. The hearing range is segmented into 24 critical bands, each with a width of 1 Bark or 100 Mel. For a directional analysis the signals inside the critical band are analyzed together.
The auditory system can extract the sound of a desired sound source out of interfering noise. This allows the listener to concentrate on only one speaker if other speakers are also talking (the cocktail party effect). With the help of the cocktail party effect sound from interfering directions is perceived attenuated compared to the sound from the desired direction. The auditory system can increase the signal-to-noise ratio by up to 15 dB, which means that interfering sound is perceived to be attenuated to half (or less) of its actual loudness.
In enclosed rooms not only the direct sound from a sound source is arriving at the listener's ears, but also sound which has been reflected at the walls. The auditory system analyses only the direct sound, which is arriving first, for sound localization, but not the reflected sound, which is arriving later (law of the first wave front). So sound localization remains possible even in an echoic environment. This echo cancellation occurs in the Dorsal Nucleus of the Lateral Lemniscus (DNLL).
In order to determine the time periods, where the direct sound prevails and which can be used for directional evaluation, the auditory system analyzes loudness changes in different critical bands and also the stability of the perceived direction. If there is a strong attack of the loudness in several critical bands and if the perceived direction is stable, this attack is in all probability caused by the direct sound of a sound source, which is entering newly or which is changing its signal characteristics. This short time period is used by the auditory system for directional and loudness analysis of this sound. When reflections arrive a little bit later, they do not enhance the loudness inside the critical bands in such a strong way, but the directional cues become unstable, because there is a mix of sound of several reflection directions. As a result, no new directional analysis is triggered by the auditory system.
This first detected direction from the direct sound is taken as the found sound source direction, until other strong loudness attacks, combined with stable directional information, indicate that a new directional analysis is possible. (see Franssen effect)
Specific techniques with applications.
Auditory transmission stereo system.
This kind of sound localization technique provides a true virtual stereo system. It utilizes "smart" manikins, such as KEMAR, to glean signals, or uses DSP methods to simulate the transmission process from sources to ears. After amplifying, recording and transmitting, the two channels of received signals are reproduced through earphones or speakers. This localization approach uses electroacoustic methods to obtain the spatial information of the original sound field by transferring the listener's auditory apparatus to the original sound field. Its most considerable advantages are that its acoustic images are lively and natural. Also, it only needs two independent transmitted signals to reproduce the acoustic image of a 3D system.
3D para-virtualization stereo system.
The representatives of this kind of system are SRS Audio Sandbox, Spatializer Audio Lab and Qsound Qxpander. They use HRTF to simulate the received acoustic signals at the ears from different directions with common two-channel stereo reproduction. Therefore, they can simulate reflected sound waves and improve the subjective sense of space and envelopment. Since they are para-virtualization stereo systems, their major goal is to simulate stereo sound information. Traditional stereo systems use sensors that are quite different from human ears. Although those sensors can receive acoustic information from different directions, they do not have the same frequency response as the human auditory system. Therefore, when the two-channel mode is applied, human auditory systems still cannot perceive a 3D sound field. However, the 3D para-virtualization stereo system overcomes such disadvantages. It uses HRTF principles to glean acoustic information from the original sound field and then produces a lively 3D sound field through common earphones or speakers.
Multichannel stereo virtual reproduction.
Since multichannel stereo systems require many reproduction channels, some researchers adopted HRTF simulation technologies to reduce the number of reproduction channels. They use only two speakers to simulate multiple speakers in a multichannel system. This process is called virtual reproduction. Essentially, such an approach uses both the interaural difference principle and the pinna filtering effect theory. Unfortunately, this kind of approach cannot perfectly substitute for a traditional multichannel stereo system, such as a 5.1/7.1 surround sound system. That is because when the listening zone is relatively large, simulated reproduction through HRTFs may cause inverted acoustic images at symmetric positions.
Animals.
Since most animals have two ears, many of the effects of the human auditory system can also be found in other animals. Therefore, interaural time differences (interaural phase differences) and interaural level differences play a role for the hearing of many animals. But the influences on localization of these effects are dependent on head sizes, ear distances, the ear positions and the orientation of the ears. Smaller animals like insects use different techniques, as the separation of their ears is too small. For the process of animals emitting sound to improve localization, a biological form of active sonar, see animal echolocation.
Lateral information (left, ahead, right).
If the ears are located at the side of the head, similar lateral localization cues as for the human auditory system can be used. This means: evaluation of interaural time differences (interaural phase differences) for lower frequencies and evaluation of interaural level differences for higher frequencies. The evaluation of interaural phase differences is useful, as long as it gives unambiguous results. This is the case as long as the ear distance is smaller than half the wavelength (at most one wavelength) of the sound waves. For animals with a larger head than humans the evaluation range for interaural phase differences is shifted towards lower frequencies; for animals with a smaller head, this range is shifted towards higher frequencies.
The lowest frequency which can be localized depends on the ear distance. Animals with a greater ear distance can localize lower frequencies than humans can. For animals with a smaller ear distance the lowest localizable frequency is higher than for humans.
If the ears are located at the side of the head, interaural level differences appear for higher frequencies and can be evaluated for localization tasks. For animals with ears at the top of the head, no shadowing by the head will appear and therefore there will be much less interaural level differences, which could be evaluated. Many of these animals can move their ears, and these ear movements can be used as a lateral localization cue.
In the median plane (front, above, back, below).
For many mammals there are also pronounced structures in the pinna near the entry of the ear canal. As a consequence, direction-dependent resonances can appear, which could be used as an additional localization cue, similar to the localization in the median plane in the human auditory system.
There are additional localization cues which are also used by animals.
Head tilting.
For sound localization in the median plane (elevation of the sound) also two detectors can be used, which are positioned at different heights. In animals, however, rough elevation information is gained simply by tilting the head, provided that the sound lasts long enough to complete the movement. This explains the innate behavior of cocking the head to one side when trying to localize a sound precisely. To get instantaneous localization in more than two dimensions from time-difference or amplitude-difference cues requires more than two detectors.
Localization with coupled ears (flies).
The tiny parasitic fly "Ormia ochracea" has become a model organism in sound localization experiments because of its unique ear. The animal is too small for the time difference of sound arriving at the two ears to be calculated in the usual way, yet it can determine the direction of sound sources with exquisite precision. The tympanic membranes of opposite ears are directly connected mechanically, allowing resolution of sub-microsecond time differences and requiring a new neural coding strategy. Ho showed that the coupled-eardrum system in frogs can produce increased interaural vibration disparities when only small arrival time and sound level differences were available to the animal's head. Efforts to build directional microphones based on the coupled-eardrum structure are underway.
Bi-coordinate sound localization (owls).
Most owls are nocturnal or crepuscular birds of prey. Because they hunt at night, they must rely on non-visual senses. Experiments by Roger Payne have shown that owls are sensitive to the sounds made by their prey, not the heat or the smell. In fact, the sound cues are both necessary and sufficient for the owls to localize mice from a distant perch. For this to work, the owls must be able to accurately localize both the azimuth and the elevation of the sound source.
Dolphins.
Dolphins (and other odontocetes) rely on echolocation to aid in detecting, identifying, localizing, and capturing prey. Dolphin sonar signals are well suited for localizing multiple, small targets in a three-dimensional aquatic environment by utilizing highly directional (3 dB beamwidth of about 10 deg), broadband (3 dB bandwidth typically of about 40 kHz; peak frequencies between 40 kHz and 120 kHz), short duration clicks (about 40 μs). Dolphins can localize sounds both passively and actively (echolocation) with a resolution of about 1 deg. Cross-modal matching (between vision and echolocation) suggests dolphins perceive the spatial structure of complex objects interrogated through echolocation, a feat that likely requires spatially resolving individual object features and integration into a holistic representation of object shape. Although dolphins are sensitive to small, binaural intensity and time differences, mounting evidence suggests dolphins employ position-dependent spectral cues derived from well-developed head-related transfer functions for sound localization in both the horizontal and vertical planes. A very small temporal integration time (264 μs) allows localization of multiple targets at varying distances. Localization adaptations include pronounced asymmetry of the skull, nasal sacs, and specialized lipid structures in the forehead and jaws, as well as acoustically isolated middle and inner ears.
The role of Prestin in sound localization.
In the realm of mammalian sound localization, the Prestin gene has emerged as a pivotal player, particularly in the fascinating arena of echolocation employed by bats and dolphins. Discovered in 2000, Prestin encodes a protein located in the outer hair cells of the inner ear, facilitating rapid contractions and expansions. This intricate mechanism operates akin to an antique phonograph horn, amplifying sound waves within the cochlea and elevating the overall sensitivity of hearing.
In 2014 Liu and others delved into the evolutionary adaptations of Prestin, unveiling its critical role in the ultrasonic hearing range essential for animal sonar, specifically in the context of echolocation. This adaptation proves instrumental for dolphins navigating through turbid waters and bats seeking sustenance in nocturnal darkness.
Noteworthy is the emission of high-frequency echolocation calls by toothed whales and echolocating bats, showcasing diversity in shape, duration, and amplitude. However, it is their high-frequency hearing that becomes paramount, as it enables the reception and analysis of echoes bouncing off objects in their environment. A meticulous dissection of Prestin protein function in sonar-guided bats and bottlenose dolphins, juxtaposed with nonsonar mammals, sheds light on the intricacies of this process.
Evolutionary analyses of Prestin protein sequences brought forth a compelling observation – a singular amino acid shift from threonine (Thr or T) in sonar mammals to asparagine (Asn or N) in nonsonar mammals. This specific alteration, subject to parallel evolution, emerges as a linchpin in the mammalian echolocation narrative.
Subsequent experiments lent credence to this hypothesis, identifying four key amino acid distinctions in sonar mammals that likely contribute to their distinctive echolocation features. The confluence of evolutionary analyses and empirical findings provides robust evidence, marking a significant juncture in comprehending the Prestin gene's role in the evolutionary trajectory of mammalian echolocation systems. This research underscores the adaptability and evolutionary significance of Prestin, offering valuable insights into the genetic foundations of sound localization in bats and dolphins, particularly within the sophisticated realm of echolocation.
History.
The term 'binaural' literally signifies 'to hear with two ears', and was introduced in 1859 to signify the practice of listening to the same sound through both ears, or to two discrete sounds, one through each ear. It was not until 1916 that Carl Stumpf (1848–1936), a German philosopher and psychologist, distinguished between dichotic listening, which refers to the stimulation of each ear with a different stimulus, and diotic listening, the simultaneous stimulation of both ears with the same stimulus.
Later, it would become apparent that binaural hearing, whether dichotic or diotic, is the means by which sound localization occurs.
Scientific consideration of binaural hearing began before the phenomenon was so named, with speculations published in 1792 by William Charles Wells (1757–1817) based on his research into binocular vision. Giovanni Battista Venturi (1746–1822) conducted and described experiments in which people tried to localize a sound using both ears, or one ear blocked with a finger. This work was not followed up on, and was only recovered after others had worked out how human sound localization works. Lord Rayleigh (1842–1919) would do these same experiments and come to the same results, without knowing Venturi had first done them, almost seventy-five years later.
Charles Wheatstone (1802–1875) did work on optics and color mixing, and also explored hearing. He invented a device he called a "microphone" that involved a metal plate over each ear, each connected to metal rods; he used this device to amplify sound. He also did experiments holding tuning forks to both ears at the same time, or separately, trying to work out how the sense of hearing works, which he published in 1827. Ernst Heinrich Weber (1795–1878), August Seebeck (1805–1849) and William Charles Wells also attempted to compare and contrast what would become known as binaural hearing with the principles of binocular integration generally.
Understanding how the differences in sound signals between two ears contribute to auditory processing in such a way as to enable sound localization and direction was considerably advanced after the invention of the stethophone by Somerville Scott Alison in 1859, who coined the term 'binaural'. Alison based the stethophone on the stethoscope, which had been invented by René Théophile Hyacinthe Laennec (1781–1826); the stethophone had two separate "pickups", allowing the user to hear and compare sounds derived from two discrete locations.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "\\theta"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "c"
},
{
"math_id": 4,
"text": "ITD= \\begin{cases} 3\\times\\frac{r}{c}\\times\\sin\\theta, & \\text{if }f\\leq\\text{4000Hz } \\\\ 2\\times\\frac{r}{c}\\times\\sin\\theta, & \\text{if }f>\\text{ 4000Hz} \\end{cases}"
},
{
"math_id": 5,
"text": "IID=1.0+(f/1000)^{0.8}\\times\\sin\\theta"
},
{
"math_id": 6,
"text": "H_L=H_L(r,\\theta,\\varphi,\\omega,\\alpha)=P_L(r,\\theta,\\varphi,\\omega,\\alpha)/P_0(r,\\omega)"
},
{
"math_id": 7,
"text": "H_R=H_R(r,\\theta,\\varphi,\\omega,\\alpha)=P_R(r,\\theta,\\varphi,\\omega,\\alpha)/P_0(r,\\omega),"
},
{
"math_id": 8,
"text": "P_L"
},
{
"math_id": 9,
"text": "P_R"
},
{
"math_id": 10,
"text": "P_0"
},
{
"math_id": 11,
"text": "H_L"
},
{
"math_id": 12,
"text": "H_R"
},
{
"math_id": 13,
"text": "\\varphi"
},
{
"math_id": 14,
"text": "\\omega"
},
{
"math_id": 15,
"text": "\\alpha"
}
] | https://en.wikipedia.org/wiki?curid=1021754 |
102182 | Celestial mechanics | Branch of astronomy
<templatestyles src="Hlist/styles.css"/>
Celestial mechanics is the branch of astronomy that deals with the motions of objects in outer space. Historically, celestial mechanics applies principles of physics (classical mechanics) to astronomical objects, such as stars and planets, to produce ephemeris data.
History.
Modern analytic celestial mechanics started with Isaac Newton's "Principia" (1687). The name celestial mechanics is more recent than that. Newton wrote that the field should be called "rational mechanics". The term "dynamics" came in a little later with Gottfried Leibniz, and over a century after Newton, Pierre-Simon Laplace introduced the term "celestial mechanics". Prior to Kepler there was little connection between exact, quantitative prediction of planetary positions, using geometrical or numerical techniques, and contemporary discussions of the physical causes of the planets' motion.
Johannes Kepler.
Johannes Kepler (1571–1630) was the first to closely integrate the predictive geometrical astronomy, which had been dominant from Ptolemy in the 2nd century to Copernicus, with physical concepts to produce a "New Astronomy, Based upon Causes, or Celestial Physics" in 1609. His work led to the modern laws of planetary orbits, which he developed using his physical principles and the planetary observations made by Tycho Brahe. Kepler's elliptical model greatly improved the accuracy of predictions of planetary motion, years before Isaac Newton developed his law of gravitation in 1686.
Isaac Newton.
Isaac Newton (25 December 1642 – 31 March 1727) is credited with introducing the idea that the motion of objects in the heavens, such as planets, the Sun, and the Moon, and the motion of objects on the ground, like cannon balls and falling apples, could be described by the same set of physical laws. In this sense he unified "celestial" and "terrestrial" dynamics. Using his law of gravity, Newton confirmed Kepler's Laws for elliptical orbits by deriving them from the gravitational two-body problem, which Newton included in his epochal "Principia".
Joseph-Louis Lagrange.
After Newton, Lagrange (25 January 1736 – 10 April 1813) attempted to solve the three-body problem, analyzed the stability of planetary orbits, and discovered the existence of the Lagrangian points. Lagrange also reformulated the principles of classical mechanics, emphasizing energy more than force, and developing a method to use a single polar coordinate equation to describe any orbit, even those that are parabolic and hyperbolic. This is useful for calculating the behaviour of planets and comets and such (parabolic and hyperbolic orbits are conic section extensions of Kepler's elliptical orbits). More recently, it has also become useful to calculate spacecraft trajectories.
Simon Newcomb.
Simon Newcomb (12 March 1835 – 11 July 1909) was a Canadian-American astronomer who revised Peter Andreas Hansen's table of lunar positions. In 1877, assisted by George William Hill, he recalculated all the major astronomical constants. After 1884 he conceived, with A.M.W. Downing, a plan to resolve much international confusion on the subject. By the time he attended a standardisation conference in Paris, France, in May 1886, the international consensus was that all ephemerides should be based on Newcomb's calculations. A further conference as late as 1950 confirmed Newcomb's constants as the international standard.
Albert Einstein.
Albert Einstein (14 March 1879 – 18 April 1955) explained the anomalous precession of Mercury's perihelion in his 1916 paper "The Foundation of the General Theory of Relativity". This led astronomers to recognize that Newtonian mechanics did not provide the highest accuracy. The observation of binary pulsars – the first discovered in 1974 – whose orbits not only require the use of General Relativity for their explanation, but whose evolution proves the existence of gravitational radiation, led to the 1993 Nobel Physics Prize.
Examples of problems.
Celestial motion, without additional forces such as drag forces or the thrust of a rocket, is governed by the reciprocal gravitational acceleration between masses. A generalization is the "n"-body problem, where a number "n" of masses are mutually interacting via the gravitational force. Although analytically not integrable in the general case, the integration can be well approximated numerically.
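A minimal numerical sketch of such an approximation is the following, for the simplest configuration of a small body moving around a fixed central mass; the gravitational parameter, time step and initial state are illustrative values only, not figures taken from this article:

```python
import math

# Gravitational parameter GM of the central body and initial state of the small
# body (illustrative values giving a mildly elliptical orbit around the origin).
GM = 1.0
x, y = 1.0, 0.0
vx, vy = 0.0, 1.1

dt = 0.001      # time step
steps = 20_000  # number of integration steps

for _ in range(steps):
    r = math.hypot(x, y)
    ax, ay = -GM * x / r**3, -GM * y / r**3   # Newtonian gravitational acceleration
    # Semi-implicit (symplectic) Euler: update the velocity first, then the position.
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt

print(f"final position: ({x:.3f}, {y:.3f}), distance from the central mass: {math.hypot(x, y):.3f}")
```

The same stepping idea extends directly to the general "n"-body case by summing, for each body, the accelerations produced by all the others.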
Examples:
*4-body problem: spaceflight to Mars (for parts of the flight the influence of one or two bodies is very small, so that there we have a 2- or 3-body problem; see also the patched conic approximation)
*3-body problem:
**Quasi-satellite
**Spaceflight to, and stay at a Lagrangian point
In the formula_0 case (two-body problem) the configuration is much simpler than for formula_1. In this case, the system is fully integrable and exact solutions can be found.
Examples:
*A binary star, e.g., Alpha Centauri (approx. the same mass)
*A binary asteroid, e.g., 90 Antiope (approx. the same mass)
A further simplification is based on the "standard assumptions in astrodynamics", which include that one body, the orbiting body, is much smaller than the other, the central body. This is also often approximately valid.
Examples:
*The Solar System orbiting the center of the Milky Way
*A planet orbiting the Sun
*A moon orbiting a planet
*A spacecraft orbiting Earth, a moon, or a planet (in the latter cases the approximation only applies after arrival at that orbit)
Perturbation theory.
Perturbation theory comprises mathematical methods that are used to find an approximate solution to a problem which cannot be solved exactly. (It is closely related to methods used in numerical analysis, which are ancient.) The earliest use of modern perturbation theory was to deal with the otherwise unsolvable mathematical problems of celestial mechanics: Newton's solution for the orbit of the Moon, which moves noticeably differently from a simple Keplerian ellipse because of the competing gravitation of the Earth and the Sun.
Perturbation methods start with a simplified form of the original problem, which is carefully chosen to be exactly solvable. In celestial mechanics, this is usually a Keplerian ellipse, which is correct when there are only two gravitating bodies (say, the Earth and the Moon), or a circular orbit, which is only correct in special cases of two-body motion, but is often close enough for practical use.
The solved, but simplified problem is then "perturbed" to make its time-rate-of-change equations for the object's position closer to the values from the real problem, such as including the gravitational attraction of a third, more distant body (the Sun). The slight changes that result from the terms in the equations – which themselves may have been simplified yet again – are used as corrections to the original solution. Because simplifications are made at every step, the corrections are never perfect, but even one cycle of corrections often provides a remarkably better approximate solution to the real problem.
There is no requirement to stop at only one cycle of corrections. A partially corrected solution can be re-used as the new starting point for yet another cycle of perturbations and corrections. In principle, for most problems the recycling and refining of prior solutions to obtain a new generation of better solutions could continue indefinitely, to any desired finite degree of accuracy.
The common difficulty with the method is that the corrections usually progressively make the new solutions very much more complicated, so each cycle is much more difficult to manage than the previous cycle of corrections. Newton is reported to have said, regarding the problem of the Moon's orbit "It causeth my head to ache."
This general procedure – starting with a simplified problem and gradually adding corrections that make the starting point of the corrected problem closer to the real situation – is a widely used mathematical tool in advanced sciences and engineering. It is the natural extension of the "guess, check, and fix" method used anciently with numbers.
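A classic small-scale illustration of this successive-correction pattern from celestial mechanics (chosen here purely as an example, not as the lunar computation discussed above) is solving Kepler's equation, E = M + e·sin E, for the eccentric anomaly E by repeatedly re-substituting the previous estimate:

```python
import math

def eccentric_anomaly(M, e, iterations=20):
    """Solve Kepler's equation E = M + e*sin(E) by successive substitution.

    M is the mean anomaly in radians and e the orbital eccentricity (0 <= e < 1).
    Each cycle of correction re-inserts the previous estimate of E.
    """
    E = M  # starting guess: the value for an unperturbed circular orbit
    for _ in range(iterations):
        E = M + e * math.sin(E)  # one cycle of "guess, check, and fix"
    return E

# Illustrative values: a moderately eccentric orbit, a quarter period in mean anomaly.
E = eccentric_anomaly(M=math.pi / 2, e=0.2)
print(f"E = {E:.6f} rad, residual = {E - 0.2 * math.sin(E) - math.pi / 2:.2e}")
```

Each pass through the loop plays the role of one cycle of corrections: the estimate is re-inserted into the simplified relation, and the answer improves at every stage.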
Reference frame.
Problems in celestial mechanics are often posed in simplifying reference frames, such as the synodic reference frame applied to the three-body problem, where the origin coincides with the barycenter of the two larger celestial bodies. Other reference frames for n-body simulations include those that place the origin at, and follow, the center of mass of a body, such as the heliocentric and the geocentric reference frames. The choice of reference frame gives rise to many phenomena, including the retrograde motion of superior planets when a geocentric reference frame is used.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
External links.
Research
Artwork
Course notes
Associations
Simulations | [
{
"math_id": 0,
"text": "n=2"
},
{
"math_id": 1,
"text": "n>2"
}
] | https://en.wikipedia.org/wiki?curid=102182 |
10218909 | Bootstrapping (finance) | Method for constructing a fixed-income yield curve
In finance, bootstrapping is a method for constructing a (zero-coupon) fixed-income yield curve from the prices of a set of coupon-bearing products, e.g. bonds and swaps.
A "bootstrapped curve", correspondingly, is one where the prices of the instruments used as an "input" to the curve, will be an exact "output", when these same instruments are valued using this curve.
Here, the term structure of spot returns is recovered from the bond yields by solving for them recursively, by forward substitution: this iterative process is called the "bootstrap method".
The usefulness of bootstrapping is that using only a few carefully selected zero-coupon products, it becomes possible to derive par swap rates (forward and spot) for "all" maturities given the solved curve.
Methodology.
As stated above, the selection of the input securities is important, given that there is a general lack of data points in a yield curve (there are only a fixed number of products in the market), and because the input securities have varying coupon frequencies. It makes sense to construct a curve of zero-coupon instruments from which one can price any yield, whether forward or spot, without the need for further external information.
Note that certain assumptions (e.g. the interpolation method) will always be required.
General methodology.
The general methodology is as follows: (1) Define the set of yielding products - these will generally be coupon-bearing bonds; (2) Derive discount factors for the corresponding terms - these are the internal rates of return of the bonds; (3) 'Bootstrap' the zero-coupon curve, successively calibrating this curve such that it returns the prices of the inputs. A generically stated algorithm for the third step is as follows.
For each input instrument, proceeding through these in terms of increasing maturity, solve for the discount factor (equivalently, the zero rate) at that instrument's maturity such that the instrument is exactly repriced, using the discount factors already derived for the earlier maturities; then add the newly solved point to the curve and move on to the next instrument.
When solved as described here, the curve will be arbitrage-free in the sense that it is exactly consistent with the selected prices. Note that some analysts will instead construct the curve such that it results in a best fit "through" the input prices, as opposed to an exact match, using a method such as Nelson-Siegel.
Regardless of approach, however, there is a requirement that the curve be arbitrage-free in a second sense: that all forward rates are positive. More sophisticated methods for the curve construction — whether targeting an exact- or a best-fit — will additionally target curve "smoothness" as an output,
and the choice of interpolation method here, for rates not directly specified, will then be important.
Forward substitution.
A more detailed description of the forward substitution is as follows. For each stage of the iterative process, we are interested in deriving the n-year zero-coupon bond yield, also known as the internal rate of return of the zero-coupon bond. As there are no intermediate payments on this bond (all the interest and principal is realized at the end of n years), it is sometimes called the n-year spot rate. To derive this rate we observe that the theoretical price of a bond can be calculated as the present value of the cash flows to be received in the future. In the case of swap rates, we want the par bond rate (swaps are priced at par when created) and therefore we require that the present value of the future cash flows and principal be equal to 100%.
formula_0
therefore
formula_1
where
* formula_2 is the coupon rate of the n-year bond
* formula_3 is the length, or day count fraction, of the period formula_4, in years
* formula_5 is the discount factor for that time period
* formula_6 is the discount factor for the entire period, from which we derive the zero-rate.
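A minimal sketch of this forward substitution in Python, assuming for simplicity that every period has a day count fraction of one year and using illustrative par rates rather than market data:

```python
def bootstrap_discount_factors(par_rates, delta=1.0):
    """Bootstrap discount factors from par rates by forward substitution.

    par_rates[n-1] is the coupon C_n of the n-year par instrument; delta is the
    (constant) day count fraction of each period.  Returns [df_1, ..., df_N].
    """
    dfs = []
    for c in par_rates:
        # 1 = c*delta*(df_1 + ... + df_{n-1}) + (1 + c*delta)*df_n  =>  solve for df_n
        df_n = (1.0 - c * delta * sum(dfs)) / (1.0 + c * delta)
        dfs.append(df_n)
    return dfs

# Illustrative par rates for maturities of 1 to 4 years (assumed values).
dfs = bootstrap_discount_factors([0.02, 0.025, 0.03, 0.035])
zero_rates = [df ** (-1.0 / (i + 1)) - 1.0 for i, df in enumerate(dfs)]
print(dfs)         # discount factors df_1 ... df_4
print(zero_rates)  # the corresponding annually compounded spot (zero) rates
```

Each newly solved discount factor is appended to the curve and then reused, unchanged, when solving for the next maturity.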
Recent practice.
After the financial crisis of 2007–2008 swap valuation is typically under a "multi-curve and collateral" framework; the above, by contrast, describes the "self discounting" approach.
Under the new framework, when valuing a Libor-based swap:
(i) the forecasted cashflows are derived from the Libor-curve,
(ii) however, these cashflows are discounted at the OIS-based curve's overnight rate, as opposed to at Libor.
The result is that, in practice, curves are built as a "set" and not individually, where, correspondingly:
(i) "forecast curves" are constructed for "each" floating-leg Libor tenor;
and (ii) discounting is on a single, common OIS curve which must simultaneously be constructed.
The reason for the change is that, post-crisis, the overnight rate is the rate paid on the collateral (variation margin) posted by counterparties on most CSAs. The forward values of the overnight rate can be read from the overnight index swap curve. "OIS-discounting" is now standard, and is sometimes referred to as "CSA-discounting".
References.
References
<templatestyles src="Reflist/styles.css" />
Standard texts | [
{
"math_id": 0,
"text": "1 = C_{n} \\cdot \\Delta_1 \\cdot df_{1} + C_{n} \\cdot \\Delta_2 \\cdot df_{2} + C_{n} \\cdot \\Delta_3 \\cdot df_{3} + \\cdots + (1+ C_{n} \\cdot \\Delta_n ) \\cdot df_n "
},
{
"math_id": 1,
"text": "df_{n} = {(1 - \\sum_{i=1}^{n-1} C_{n} \\cdot \\Delta_i \\cdot df_{i}) \\over (1 + C_{n} \\cdot \\Delta_n )}"
},
{
"math_id": 2,
"text": "C_{n}"
},
{
"math_id": 3,
"text": "\\Delta_i"
},
{
"math_id": 4,
"text": "[i - 1; i]"
},
{
"math_id": 5,
"text": "df_{i}"
},
{
"math_id": 6,
"text": "df_{n}"
}
] | https://en.wikipedia.org/wiki?curid=10218909 |
10220067 | Crystalline cohomology | In mathematics, crystalline cohomology is a Weil cohomology theory for schemes "X" over a base field "k". Its values "H""n"("X"/"W") are modules over the ring "W" of Witt vectors over "k". It was introduced by Alexander Grothendieck (1966, 1968) and developed by Pierre Berthelot (1974).
Crystalline cohomology is partly inspired by the "p"-adic proof of part of the Weil conjectures and is closely related to the algebraic version of de Rham cohomology that was introduced by Grothendieck (1963). Roughly speaking, crystalline cohomology of a variety "X" in characteristic "p" is the de Rham cohomology of a smooth lift of "X" to characteristic 0, while de Rham cohomology of "X" is the crystalline cohomology reduced mod "p" (after taking into account higher "Tor"s).
The idea of crystalline cohomology, roughly, is to replace the Zariski open sets of a scheme by infinitesimal thickenings of Zariski open sets with divided power structures. The motivation for this is that it can then be calculated by taking a local lifting of a scheme from characteristic "p" to characteristic "0" and employing an appropriate version of algebraic de Rham cohomology.
Crystalline cohomology only works well for smooth proper schemes. Rigid cohomology extends it to more general schemes.
Applications.
For schemes in characteristic "p", crystalline cohomology theory can handle questions about "p"-torsion in cohomology groups better than "p"-adic étale cohomology. This makes it a natural backdrop for much of the work on p-adic L-functions.
Crystalline cohomology, from the point of view of number theory, fills a gap in the l-adic cohomology information, which occurs exactly where there are 'equal characteristic primes'. Traditionally the preserve of ramification theory, crystalline cohomology converts this situation into Dieudonné module theory, giving an important handle on arithmetic problems. Conjectures with wide scope on making this into formal statements were enunciated by Jean-Marc Fontaine, the resolution of which is called p-adic Hodge theory.
Coefficients.
For a variety "X" over an algebraically closed field of characteristic "p" > 0, the formula_0-adic cohomology groups for formula_0 any prime number other than "p" give satisfactory cohomology groups of "X", with coefficients in the ring formula_1 of formula_0-adic integers. It is not possible in general to find similar cohomology groups with coefficients in Q"p" (or Z"p", or Q, or Z) having reasonable properties.
The classic reason (due to Serre) is that if "X" is a supersingular elliptic curve, then its endomorphism ring is a maximal order in a quaternion algebra "B" over Q ramified at "p" and ∞. If "X" had a cohomology group over Q"p" of the expected dimension 2, then (the opposite algebra of) "B" would act on this 2-dimensional space over Q"p", which is impossible since "B" is ramified at "p".
Grothendieck's crystalline cohomology theory gets around this obstruction because it produces modules over the ring of Witt vectors of the ground field. So if the ground field is an algebraic closure of F"p", its values are modules over the "p"-adic completion of the maximal unramified extension of Z"p", a much larger ring containing "n"th roots of unity for all "n" not divisible by "p", rather than over Z"p".
Motivation.
One idea for defining a Weil cohomology theory of a variety "X" over a field "k" of characteristic "p" is to 'lift' it to a variety "X"* over the ring of Witt vectors of "k" (that gives back "X" on reduction mod p), then take the de Rham cohomology of this lift. The problem is that it is not at all obvious that this cohomology is independent of the choice of lifting.
The idea of crystalline cohomology in characteristic 0 is to find a direct definition of a cohomology theory as the cohomology of constant sheaves on a suitable site
Inf("X")
over "X", called the infinitesimal site and then show it is the same as the de Rham cohomology of any lift.
The site Inf("X") is a category whose objects can be thought of as some sort of generalization of the conventional open sets of "X". In characteristic 0 its objects are infinitesimal thickenings "U"→"T" of Zariski open subsets "U" of "X". This means that "U" is the closed subscheme of a scheme "T" defined by a nilpotent sheaf of ideals on "T"; for example, Spec("k")→ Spec("k"["x"]/("x"2)).
Grothendieck showed that for smooth schemes "X" over C, the cohomology of the sheaf "O""X" on Inf("X") is the same as the usual (smooth or algebraic) de Rham cohomology.
Crystalline cohomology.
In characteristic "p" the most obvious analogue of the crystalline site defined above in characteristic 0 does not work. The reason is roughly that in order to prove exactness of the de Rham complex, one needs some sort of Poincaré lemma, whose proof in turn uses integration, and integration requires various divided powers, which exist in characteristic 0 but not always in characteristic "p". Grothendieck solved this problem by defining objects of the crystalline site of "X" to be roughly infinitesimal thickenings of Zariski open subsets of "X", together with a divided power structure giving the needed divided powers.
We will work over the ring "W""n" = "W"/"p""n""W" of Witt vectors of length "n" over a perfect field "k" of characteristic "p">0. For example, "k" could be the finite field of order "p", and "W""n" is then the ring Z/"p""n"Z. (More generally one can work over a base scheme "S" which has a fixed sheaf of ideals "I" with a divided power structure.) If "X" is a scheme over "k", then the crystalline site of "X" relative to "W""n", denoted Cris("X"/"W""n"), has as its objects pairs
"U"→"T" consisting of a closed immersion of a Zariski open subset "U" of "X" into some "W""n"-scheme "T"
defined by a sheaf of ideals "J", together with a divided power structure on "J" compatible with the one on "W""n".
Crystalline cohomology of a scheme "X" over "k" is defined to be the inverse limit
formula_2
where
formula_3
is the cohomology of the crystalline site of "X"/"W""n" with values in the sheaf of rings "O" := "O""W""n".
A key point of the theory is that the crystalline cohomology of a smooth scheme "X" over "k" can often be calculated in terms of the algebraic de Rham cohomology of a proper and smooth lifting of "X" to a scheme "Z" over "W". There is a canonical isomorphism
formula_4
of the crystalline cohomology of "X" with the de Rham cohomology of "Z" over the formal scheme of "W"
(an inverse limit of the hypercohomology of the complexes of differential forms).
Conversely the de Rham cohomology of "X" can be recovered as the reduction mod "p" of its crystalline cohomology (after taking higher "Tor"s into account).
Crystals.
If "X" is a scheme over "S" then the sheaf "O""X"/"S" is defined by
"O""X"/"S"("T") = coordinate ring of "T", where we write "T" as an abbreviation for
an object "U" → "T" of Cris("X"/"S").
A crystal on the site Cris("X"/"S") is a sheaf "F" of "O""X"/"S" modules that is rigid in the following sense:
for any map "f" between objects "T", "T"′ of Cris("X"/"S"), the natural map from "f"*"F"("T") to "F"("T"′) is an isomorphism.
This is similar to the definition of a quasicoherent sheaf of modules in the Zariski topology.
An example of a crystal is the sheaf "O""X"/"S".
The term "crystal" attached to the theory, explained in Grothendieck's letter to Tate (1966), was a metaphor inspired by certain properties of algebraic differential equations. These had played a role in "p"-adic cohomology theories (precursors of the crystalline theory, introduced in various forms by Dwork, Monsky, Washnitzer, Lubkin and Katz) particularly in Dwork's work. Such differential equations can be formulated easily enough by means of the algebraic Koszul connections, but in the "p"-adic theory the analogue of analytic continuation is more mysterious (since "p"-adic discs tend to be disjoint rather than overlap). By decree, a "crystal" would have the 'rigidity' and the 'propagation' notable in the case of the analytic continuation of complex analytic functions. (Cf. also the rigid analytic spaces introduced by John Tate, in the 1960s, when these matters were actively being debated.)
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ell"
},
{
"math_id": 1,
"text": "\\mathbf{Z}_\\ell"
},
{
"math_id": 2,
"text": "H^i(X/W)=\\varprojlim H^i(X/W_n)"
},
{
"math_id": 3,
"text": "H^i(X/W_n)= H^i(\\operatorname{Cris}(X/W_n),O)"
},
{
"math_id": 4,
"text": "H^i(X/W) = H^i_{DR}(Z/W) \\quad(= H^i(Z,\\Omega_{Z/W}^*)= \\varprojlim H^i(Z,\\Omega_{Z/W_n}^*))"
}
] | https://en.wikipedia.org/wiki?curid=10220067 |
10220473 | Eilenberg–Zilber theorem | Links the homology groups of a product space with those of the individual spaces
In mathematics, specifically in algebraic topology, the Eilenberg–Zilber theorem is an important result in establishing the link between the homology groups of a product space formula_0 and those of the spaces formula_1 and formula_2. The theorem first appeared in a 1953 paper in the American Journal of Mathematics by Samuel Eilenberg and Joseph A. Zilber. One possible route to a proof is the acyclic model theorem.
Statement of the theorem.
The theorem can be formulated as follows. Suppose formula_1 and formula_2 are topological spaces. Then we have the three chain complexes formula_3, formula_4, and formula_5. (The argument applies equally to the simplicial or singular chain complexes.) We also have the "tensor product complex" formula_6, whose differential is, by definition,
formula_7
for formula_8 and formula_9, formula_10 the differentials on formula_3,formula_4.
Then the theorem says that we have chain maps
formula_11
such that formula_12 is the identity and formula_13 is chain-homotopic to the identity. Moreover, the maps are natural in formula_1 and formula_2. Consequently the two complexes must have the same homology:
formula_14
Statement in terms of composite maps.
The original theorem was proven in terms of acyclic models but more mileage was gotten in a phrasing by Eilenberg and Mac Lane using explicit maps. The standard map formula_15 they produce is traditionally referred to as the Alexander–Whitney map and formula_16 the Eilenberg–Zilber map. The maps are natural in both formula_1 and formula_2 and inverse up to homotopy: one has
formula_17
for a homotopy formula_18 natural in both formula_1 and formula_2 such that further, each of formula_19, formula_20, and formula_21 is zero. This is what would come to be known as a "contraction" or a "homotopy retract datum".
The coproduct.
The diagonal map formula_22 induces a map of cochain complexes formula_23 which, followed by the Alexander–Whitney formula_15 yields a coproduct formula_24 inducing the standard coproduct on formula_25. With respect to these coproducts on
formula_1 and formula_2, the map
formula_26,
also called the Eilenberg–Zilber map, becomes a map of differential graded coalgebras. The composite formula_24 itself is not a map of coalgebras.
Statement in cohomology.
The Alexander–Whitney and Eilenberg–Zilber maps dualize (over any choice of commutative coefficient ring formula_27 with unity) to a pair of maps
formula_28
which are also homotopy equivalences, as witnessed by the duals of the preceding equations, using the dual homotopy formula_29. The coproduct does not dualize straightforwardly, because dualization does not distribute over tensor products of infinitely-generated modules, but there is a natural injection of differential graded algebras formula_30 given by formula_31, the product being taken in the coefficient ring formula_27. This formula_32 induces an isomorphism in cohomology, so one does have the zig-zag of differential graded algebra maps
formula_33
inducing a product formula_34 in cohomology, known as the cup product, because formula_35 and formula_36 are isomorphisms. Replacing formula_37 with formula_38 so the maps all go the same way, one gets the standard cup product on cochains, given explicitly by
formula_39,
which, since cochain evaluation formula_40 vanishes unless formula_41, reduces to the more familiar expression.
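A small computational sketch of this front-face/back-face formula, assuming an ordered simplicial complex whose simplices are written as increasing tuples of vertex labels and whose cochains are stored as dictionaries (the example complex and all values are illustrative):

```python
def cup(alpha, p, beta, q):
    """Cochain-level cup product of a p-cochain alpha and a q-cochain beta.

    Cochains are dicts sending a vertex tuple (a simplex) to a coefficient.
    On a (p+q)-simplex (v_0, ..., v_{p+q}) the product evaluates alpha on the
    front p-face (v_0, ..., v_p) and beta on the back q-face (v_p, ..., v_{p+q}).
    """
    def evaluate(simplex):
        front = simplex[: p + 1]
        back = simplex[p:]
        return alpha.get(front, 0) * beta.get(back, 0)
    return evaluate

# Two 1-cochains on the standard 2-simplex with vertices 0, 1, 2.
a = {(0, 1): 1, (1, 2): 1, (0, 2): 0}
b = {(0, 1): 0, (1, 2): 1, (0, 2): 2}

ab = cup(a, 1, b, 1)
print(ab((0, 1, 2)))  # a(0,1) * b(1,2) = 1 * 1 = 1
```

Only the single term with the matching front-face dimension survives, which is exactly the reduction to the familiar expression noted above.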
Note that if this direct map formula_42 of cochain complexes were in fact a map of differential graded algebras, then the cup product would make formula_43 a commutative graded algebra, which it is not. This failure of the Alexander–Whitney map to be a coalgebra map is an example of the unavailability of commutative cochain-level models for cohomology over fields of nonzero characteristic, and thus is in a way responsible for much of the subtlety and complication in stable homotopy theory.
Generalizations.
An important generalisation to the non-abelian case using crossed complexes is given in the paper by Andrew Tonks below. This gives full details of a result on the (simplicial) classifying space of a crossed complex stated but not proved in the paper by Ronald Brown and Philip J. Higgins on classifying spaces.
Consequences.
The Eilenberg–Zilber theorem is a key ingredient in establishing the Künneth theorem, which expresses the homology groups formula_44 in terms of formula_25 and formula_45. In light of the Eilenberg–Zilber theorem, the content of the Künneth theorem consists in analysing how the homology of the tensor product complex relates to the homologies of the factors. | [
{
"math_id": 0,
"text": "X \\times Y"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "Y"
},
{
"math_id": 3,
"text": "C_*(X)"
},
{
"math_id": 4,
"text": "C_*(Y)"
},
{
"math_id": 5,
"text": "C_*(X \\times Y) "
},
{
"math_id": 6,
"text": "C_*(X) \\otimes C_*(Y)"
},
{
"math_id": 7,
"text": "\\partial_{C_*(X) \\otimes C_*(Y)}( \\sigma \\otimes \\tau) = \\partial_X \\sigma \\otimes \\tau + (-1)^p \\sigma \\otimes \\partial_Y\\tau"
},
{
"math_id": 8,
"text": "\\sigma \\in C_p(X)"
},
{
"math_id": 9,
"text": "\\partial_X"
},
{
"math_id": 10,
"text": "\\partial_Y"
},
{
"math_id": 11,
"text": "F\\colon C_*(X \\times Y) \\rightarrow C_*(X) \\otimes C_*(Y), \\quad G\\colon C_*(X) \\otimes C_*(Y) \\rightarrow C_*(X \\times Y)"
},
{
"math_id": 12,
"text": "FG"
},
{
"math_id": 13,
"text": "GF"
},
{
"math_id": 14,
"text": "H_*(C_*(X \\times Y)) \\cong H_*(C_*(X) \\otimes C_*(Y))."
},
{
"math_id": 15,
"text": "F"
},
{
"math_id": 16,
"text": "G"
},
{
"math_id": 17,
"text": "FG = \\mathrm{id}_{C_*(X) \\otimes C_*(Y)}, \\qquad GF - \\mathrm{id}_{C_*(X \\times Y)} = \\partial_{C_*(X) \\otimes C_*(Y)}H+H\\partial_{C_*(X) \\otimes C_*(Y)}"
},
{
"math_id": 18,
"text": "H"
},
{
"math_id": 19,
"text": "HH"
},
{
"math_id": 20,
"text": "FH"
},
{
"math_id": 21,
"text": "HG"
},
{
"math_id": 22,
"text": "\\Delta\\colon X \\to X \\times X"
},
{
"math_id": 23,
"text": "C_*(X) \\to C_*(X \\times X)"
},
{
"math_id": 24,
"text": "C_*(X) \\to C_*(X) \\otimes C_*(X)"
},
{
"math_id": 25,
"text": "H_*(X)"
},
{
"math_id": 26,
"text": "H_*(X) \\otimes H_*(Y) \\to H_*\\big(C_*(X) \\otimes C_*(Y)\\big)\\ \\overset\\sim\\to\\ H_*(X \\times Y)"
},
{
"math_id": 27,
"text": "k"
},
{
"math_id": 28,
"text": "G^*\\colon C^*(X \\times Y) \\rightarrow \\big(C_*(X) \\otimes C_*(Y)\\big)^*, \\quad F^*\\colon \\big(C_*(X) \\otimes C_*(Y)\\big)^*\\rightarrow C^*(X \\times Y)"
},
{
"math_id": 29,
"text": "H^*"
},
{
"math_id": 30,
"text": "i\\colon C^*(X) \\otimes C^*(Y) \\to \\big(C_*(X) \\otimes C_*(Y)\\big)^*"
},
{
"math_id": 31,
"text": "\\alpha \\otimes \\beta \\mapsto (\\sigma \\otimes \\tau \\mapsto \\alpha(\\sigma)\\beta(\\tau))"
},
{
"math_id": 32,
"text": "i"
},
{
"math_id": 33,
"text": " C^*(X) \\otimes C^*(X)\\ \\overset{i}{\\to}\\ \\big(C_*(X) \\otimes C_*(X)\\big)^*\\ \\overset{G^*}{\\leftarrow}\\ C^*(X \\times X) \\overset{C^*(\\Delta)}{\\to} C^*(X)"
},
{
"math_id": 34,
"text": "\\smile\\colon H^*(X) \\otimes H^*(X) \\to H^*(X)"
},
{
"math_id": 35,
"text": "H^*(i)"
},
{
"math_id": 36,
"text": "H^*(G)"
},
{
"math_id": 37,
"text": "G^*"
},
{
"math_id": 38,
"text": "F^*"
},
{
"math_id": 39,
"text": "\\alpha \\otimes \\beta \\mapsto \\Big(\\sigma \\mapsto (\\alpha \\otimes \\beta)(F^*\\Delta^*\\sigma) =\n\\sum_{p=0}^{\\dim \\sigma} \\alpha(\\sigma|_{\\Delta^{[0,p]}}) \\cdot \\beta(\\sigma|_{\\Delta^{[p,\\dim \\sigma]}})\\Big)"
},
{
"math_id": 40,
"text": "C^p(X) \\otimes C_q(X) \\to k"
},
{
"math_id": 41,
"text": "p=q"
},
{
"math_id": 42,
"text": "C^*(X) \\otimes C^*(X) \\to C^*(X)"
},
{
"math_id": 43,
"text": "C^*(X)"
},
{
"math_id": 44,
"text": "H_*(X \\times Y)"
},
{
"math_id": 45,
"text": "H_*(Y)"
}
] | https://en.wikipedia.org/wiki?curid=10220473 |
10220713 | Zero field splitting | Zero field splitting (ZFS) describes various interactions of the energy levels of a molecule or ion resulting from the presence of more than one unpaired electron. In quantum mechanics, an energy level is called degenerate if it corresponds to two or more different measurable states of a quantum system. In the presence of a magnetic field, the Zeeman effect is well known to split degenerate states. In quantum mechanics terminology, the degeneracy is said to be "lifted" by the presence of the magnetic field. In the presence of more than one unpaired electron, the electrons mutually interact to give rise to two or more energy states. Zero field splitting refers to this lifting of degeneracy even in the absence of a magnetic field. ZFS is responsible for many effects related to the magnetic properties of materials, as manifested in their electron spin resonance spectra and magnetism.
The classic case for ZFS is the spin triplet, i.e., the S=1 spin system. In the presence of a magnetic field, the levels with different values of magnetic spin quantum number (MS=0,±1) are separated and the Zeeman splitting dictates their separation. In the absence of magnetic field, the 3 levels of the triplet are isoenergetic to the first order. However, when the effects of inter-electron repulsions are considered, the energy of the three sublevels of the triplet can be seen to have separated. This effect is thus an example of ZFS. The degree of separation depends on the symmetry of the system.
Quantum mechanical description.
The corresponding Hamiltonian can be written as:
formula_0
Where S is the total spin quantum number, and formula_1 are the spin matrices.
The values of the ZFS parameters are usually defined via the D and E parameters. D describes the axial component of the magnetic dipole–dipole interaction, and E the transverse component. Values of D have been obtained for a wide range of organic biradicals by EPR measurements. This value may be measured by other magnetometry techniques such as SQUID; however, EPR measurements provide more accurate data in most cases. This value can also be obtained with other techniques such as optically detected magnetic resonance (ODMR; a double resonance technique which combines EPR with measurements such as fluorescence, phosphorescence and absorption), with sensitivity down to a single molecule or defect in solids like diamond (e.g. the N-V center) or silicon carbide.
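A minimal numerical sketch for the S = 1 (spin triplet) case, with illustrative values D = 1 and E = 0.1 in arbitrary energy units (these numbers are assumptions, not values from the text), showing that the Hamiltonian above separates the three MS levels even at zero magnetic field:

```python
import numpy as np

# Spin-1 operators in the |m = +1>, |0>, |-1> basis (units of hbar).
Sx = (1 / np.sqrt(2)) * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
Sy = (1 / (np.sqrt(2) * 1j)) * np.array([[0, 1, 0], [-1, 0, 1], [0, -1, 0]])
Sz = np.diag([1.0, 0.0, -1.0])

S = 1.0
D, E = 1.0, 0.1           # illustrative axial and transverse ZFS parameters
I3 = np.eye(3)

H = D * (Sz @ Sz - (S * (S + 1) / 3) * I3) + E * (Sx @ Sx - Sy @ Sy)

# Expect -2D/3, D/3 - E and D/3 + E: three distinct levels with no applied field.
print(np.linalg.eigvalsh(H))
```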
Algebraic derivation.
The starting point is the corresponding Hamiltonian formula_2. formula_3 describes the dipolar spin-spin interaction between two unpaired spins (formula_4 and formula_5). Here formula_6 is the total spin formula_7, and formula_3 is a symmetric and traceless matrix (which it is when formula_3 arises from the dipole-dipole interaction), which means it is diagonalizable.
with formula_3 being traceless (formula_8). For simplicity, formula_9 is defined as formula_10. The Hamiltonian becomes:
The key is to express formula_11 as its mean value plus a deviation formula_12:
The value of the deviation formula_12 is then found by rearranging equation (3):
Inserting (4) and (3) into (2), the result reads:
Note that in the second line in (5), formula_13 was added. By doing so, the identity formula_14 can be used.
Using the fact that formula_3 is traceless (formula_15), equation (5) simplifies to:
Defining the D and E parameters, equation (6) becomes:
with formula_16 and formula_17 the (measurable) zero field splitting values.
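Written out compactly, the chain of identities behind the numbered steps above is as follows (a summary sketch only, using the diagonalized formula_3, the identity formula_14, the traceless condition formula_15 and the definitions formula_16 and formula_17):

```latex
\begin{aligned}
\hat{\mathcal{H}}_D &= D_x S_x^2 + D_y S_y^2 + D_z S_z^2 \\
D_x S_x^2 + D_y S_y^2 &= \tfrac{1}{2}(D_x + D_y)\,(S_x^2 + S_y^2) + \tfrac{1}{2}(D_x - D_y)\,(S_x^2 - S_y^2) \\
\hat{\mathcal{H}}_D &= \tfrac{1}{2}(D_x + D_y)\,\bigl(S(S+1) - S_z^2\bigr) + D_z S_z^2 + \tfrac{1}{2}(D_x - D_y)\,(S_x^2 - S_y^2) \\
&= \tfrac{3}{2}D_z\Bigl(S_z^2 - \tfrac{1}{3}S(S+1)\Bigr) + \tfrac{1}{2}(D_x - D_y)\,(S_x^2 - S_y^2) \\
&= D\Bigl(S_z^2 - \tfrac{1}{3}S(S+1)\Bigr) + E\,(S_x^2 - S_y^2).
\end{aligned}
```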
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\hat{\\mathcal{H}}=D\\left(S_z^2-\\frac{1}{3}S(S+1)\\right)+E(S_x^2-S_y^2) "
},
{
"math_id": 1,
"text": "S_{x,y,z}"
},
{
"math_id": 2,
"text": "\\hat{\\mathcal{H}}_D=\\mathbf{SDS}"
},
{
"math_id": 3,
"text": "\\mathbf{D}"
},
{
"math_id": 4,
"text": "S_1"
},
{
"math_id": 5,
"text": "S_2"
},
{
"math_id": 6,
"text": "S"
},
{
"math_id": 7,
"text": "S=S_1+S_2"
},
{
"math_id": 8,
"text": "D_{xx}+D_{yy}+D_{zz}=0"
},
{
"math_id": 9,
"text": "D_{j}"
},
{
"math_id": 10,
"text": "D_{jj}"
},
{
"math_id": 11,
"text": "D_x S_x^2+D_y S_y^2"
},
{
"math_id": 12,
"text": "\\Delta"
},
{
"math_id": 13,
"text": "S_z^2-S_z^2"
},
{
"math_id": 14,
"text": "S_x^2+S_y^2+S_z^2=S(S+1)"
},
{
"math_id": 15,
"text": "\\frac{1}{2}D_x+\\frac{1}{2}D_y=-\\frac{1}{2}D_z"
},
{
"math_id": 16,
"text": "D=\\frac{3}{2}D_z"
},
{
"math_id": 17,
"text": "E=\\frac{1}{2}\\left(D_x-D_y\\right)"
}
] | https://en.wikipedia.org/wiki?curid=10220713 |
10220853 | Eilenberg–Moore spectral sequence | In mathematics, in the field of algebraic topology, the Eilenberg–Moore spectral sequence addresses the calculation of the homology groups of a pullback over a fibration. The spectral sequence formulates the calculation from knowledge of the homology of the remaining spaces. Samuel Eilenberg and John C. Moore's original paper addresses this for singular homology.
Motivation.
Let formula_0 be a field and let
formula_1 and formula_2
denote singular homology and singular cohomology with coefficients in "k", respectively.
Consider the following pullback formula_3 of a continuous map "p":
formula_4
A frequent question is how the homology of the fiber product, formula_3, relates to the homology of "B", "X" and "E". For example, if "B" is a point, then the pullback is just the usual product formula_5. In this case the Künneth formula says
formula_6
However this relation is not true in more general situations. The Eilenberg−Moore spectral sequence is a device which allows the computation of the (co)homology of the fiber product in certain situations.
Statement.
The Eilenberg–Moore spectral sequence generalizes the above isomorphism to the situation where "p" is a fibration of topological spaces and the base "B" is simply connected. Then there is a convergent spectral sequence with
formula_7
This is a generalization insofar as the zeroth Tor functor is just the tensor product, and in the above special case the cohomology of the point "B" is just the coefficient field "k" (in degree 0).
Dually, we have the following homology spectral sequence:
formula_8
Indications on the proof.
The spectral sequence arises from the study of differential graded objects (chain complexes), not spaces. The following discusses the original homological construction of Eilenberg and Moore. The cohomology case is obtained in a similar manner.
Let
formula_9
be the singular chain functor with coefficients in formula_0. By the Eilenberg–Zilber theorem, formula_10 has a differential graded coalgebra structure over formula_0 with
structure maps
formula_11
In down-to-earth terms, the map assigns to a singular chain "s": "Δn" → "B" the composition of "s" and the diagonal inclusion "B" ⊂ "B" × "B". Similarly, the maps formula_12 and formula_13 induce maps of differential graded coalgebras
formula_14, formula_15.
In the language of comodules, they endow formula_16 and formula_17 with differential graded comodule structures over formula_10, with structure maps
formula_18
and similarly for "E" instead of "X". It is now possible to construct the so-called cobar resolution for
formula_17
as a differential graded formula_10 comodule. The cobar resolution is a standard technique in differential homological algebra:
formula_19
where the "n"-th term formula_20 is given by
formula_21
The maps formula_22 are given by
formula_23
where formula_24 is the structure map for formula_17 as a left formula_10 comodule.
The cobar resolution is a bicomplex, one degree coming from the grading of the chain complexes "S"∗(−), the other being the simplicial degree "n". The total complex of the bicomplex is denoted formula_25.
The link of the above algebraic construction with the topological situation is as follows. Under the above assumptions, there is a map
formula_26
that induces a quasi-isomorphism (i.e. inducing an isomorphism on homology groups)
formula_27
where formula_28 is the cotensor product and Cotor (cotorsion) is the
derived functor for the cotensor product.
To calculate
formula_29,
view
formula_30
as a double complex.
For any bicomplex there are two filtrations (see John McCleary (2001) or the spectral sequence of a filtered complex); in this case the Eilenberg−Moore spectral sequence results from filtering by increasing homological degree (by columns in the standard picture of a spectral sequence). This filtration yields
formula_31
These results have been refined in various ways. For example, William G. Dwyer (1975) refined the convergence results to include spaces for which
formula_32
acts nilpotently on
formula_33
for all formula_34
and Brooke Shipley (1996) further generalized this to include arbitrary pullbacks.
The original construction does not lend itself to computations with other homology theories since there is no reason to expect that such a process would work for a homology theory not derived from chain complexes. However, it is possible to axiomatize the above procedure and give conditions under which the above spectral sequence holds for a general (co)homology theory; see Larry Smith's original work.
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "H_\\ast(-)=H_\\ast(-,k)"
},
{
"math_id": 2,
"text": "H^\\ast(-)=H^\\ast(-,k)"
},
{
"math_id": 3,
"text": "E_f"
},
{
"math_id": 4,
"text": " \\begin{array}{c c c} E_f &\\rightarrow & E \\\\ \\downarrow & & \\downarrow{p}\\\\ X &\\rightarrow_{ f} &B\\\\ \\end{array} "
},
{
"math_id": 5,
"text": "E \\times X"
},
{
"math_id": 6,
"text": "H^*(E_f) = H^*(X \\times E) \\cong H^*(X) \\otimes_k H^*(E)."
},
{
"math_id": 7,
"text": "E_2^{\\ast,\\ast}=\\text{Tor}_{H^\\ast(B)}^{\\ast,\\ast}(H^\\ast(X),H^\\ast(E))\\Rightarrow H^\\ast(E_f)."
},
{
"math_id": 8,
"text": "E^2_{\\ast,\\ast}=\\text{Cotor}^{H_\\ast(B)}_{\\ast,\\ast}(H_\\ast(X),H_\\ast(E))\\Rightarrow H_\\ast(E_f)."
},
{
"math_id": 9,
"text": "S_\\ast(-)=S_\\ast(-,k)"
},
{
"math_id": 10,
"text": "S_\\ast(B)"
},
{
"math_id": 11,
"text": "S_\\ast(B)\\xrightarrow{\\triangle} S_\\ast(B\\times B)\\xrightarrow{\\simeq}S_\\ast(B)\\otimes S_\\ast(B)."
},
{
"math_id": 12,
"text": "f"
},
{
"math_id": 13,
"text": "p"
},
{
"math_id": 14,
"text": "f_\\ast \\colon S_\\ast(X)\\rightarrow S_\\ast(B)"
},
{
"math_id": 15,
"text": "p_\\ast \\colon S_\\ast(E)\\rightarrow S_\\ast(B)"
},
{
"math_id": 16,
"text": "S_\\ast(E)"
},
{
"math_id": 17,
"text": "S_\\ast(X)"
},
{
"math_id": 18,
"text": "S_\\ast(X)\\xrightarrow{\\triangle} S_\\ast(X)\\otimes S_\\ast(X)\\xrightarrow{f_\\ast\\otimes 1} S_\\ast(B)\\otimes S_\\ast(X)"
},
{
"math_id": 19,
"text": " \\mathcal{C}(S_\\ast(X),S_\\ast(B))=\\cdots\\xleftarrow{\\delta_2} \\mathcal{C}_{-2}(S_\\ast(X),S_\\ast(B))\\xleftarrow{\\delta_1} \\mathcal{C}_{-1}(S_\\ast(X),S_\\ast(B))\\xleftarrow{\\delta_0} S_\\ast(X)\\otimes S_\\ast(B),"
},
{
"math_id": 20,
"text": "\\mathcal{C}_{-n}"
},
{
"math_id": 21,
"text": "\\mathcal{C}_{-n}(S_\\ast(X),S_\\ast(B))=S_\\ast(X)\\otimes \\underbrace{S_\\ast(B)\\otimes \\cdots \\otimes S_\\ast(B)}_{n}\\otimes S_\\ast(B)."
},
{
"math_id": 22,
"text": "\\delta_n"
},
{
"math_id": 23,
"text": "\\lambda_f\\otimes\\cdots\\otimes 1 + \\sum_{i=2}^n 1\\otimes\\cdots \\otimes\\triangle_i\\otimes\\cdots\\otimes 1,"
},
{
"math_id": 24,
"text": "\\lambda_f"
},
{
"math_id": 25,
"text": "\\mathbf{\\mathcal{C}}_\\bullet"
},
{
"math_id": 26,
"text": "\\Theta\\colon \\mathbf{\\mathcal{C}}_{\\bullet {\\text{ }\\Box_{S_\\ast(B)}}}S_\\ast(E)\\rightarrow S_\\ast(E_f,k)"
},
{
"math_id": 27,
"text": "\\Theta_\\ast \\colon \\operatorname{Cotor}^{S_\\ast(B)}(S_\\ast(X)S_\\ast(E))\\rightarrow H_\\ast(E_f),"
},
{
"math_id": 28,
"text": "\\Box_{S_\\ast(B)}"
},
{
"math_id": 29,
"text": "H_\\ast(\\mathbf{\\mathcal{C}}_{\\bullet {\\text{ }\\Box_{S_\\ast(B)}}}S_\\ast(E))"
},
{
"math_id": 30,
"text": "\\mathbf{\\mathcal{C}}_{\\bullet {\\text{ }\\Box_{S_\\ast(B)}}}S_\\ast(E)"
},
{
"math_id": 31,
"text": "E^2=\\operatorname{Cotor}^{H_\\ast(B)}(H_\\ast(X),H_\\ast(E))."
},
{
"math_id": 32,
"text": "\\pi_1(B)"
},
{
"math_id": 33,
"text": "H_i(E_f)"
},
{
"math_id": 34,
"text": "i\\geq 0"
}
] | https://en.wikipedia.org/wiki?curid=10220853 |
10221371 | Mechanical watch | Type of watch which uses a clockwork mechanism to measure the passage of time
A mechanical watch is a watch that uses a clockwork mechanism to measure the passage of time, as opposed to quartz watches which function using the vibration modes of a piezoelectric quartz tuning fork, or radio watches, which are quartz watches synchronized to an atomic clock via radio waves. A mechanical watch is driven by a mainspring which must be wound either periodically by hand or via a self-winding mechanism. Its force is transmitted through a series of gears to power the balance wheel, a weighted wheel which oscillates back and forth at a constant rate. A device called an escapement releases the watch's wheels to move forward a small amount with each swing of the balance wheel, moving the watch's hands forward at a constant rate. The escapement is what makes the 'ticking' sound which is heard in an operating mechanical watch. Mechanical watches evolved in Europe in the 17th century from spring powered clocks, which appeared in the 15th century.
Mechanical watches are typically not as accurate as quartz watches, and they eventually require periodic cleaning, lubrication and calibration by a skilled watchmaker. Since the 1970s and 1980s, as a result of the quartz crisis, quartz watches have taken over most of the watch market, and mechanical watches (especially Swiss-made watches) are now mostly marketed as luxury goods, purchased for their aesthetic and luxury values, for appreciation of their fine craftsmanship, or as a status symbol.
Components.
The internal mechanism of a watch, excluding the face and hands, is called the movement. All mechanical watches have these five parts: a mainspring, which stores the energy that powers the watch; a gear train (wheel train), which transmits this force toward the escapement and includes a separate part (the keyless work) for winding and setting; a balance wheel, which oscillates back and forth at a constant rate to keep time; an escapement, which releases the gear train a fixed amount with each swing and gives the balance wheel periodic pushes to keep it swinging; and a dial with hands, which indicates the time.
Additional functions on a watch besides the basic timekeeping ones are traditionally called "complications". Mechanical watches may have complications such as a calendar or date display, a moon phase display, an alarm, a power reserve indicator, or a chronograph (stopwatch function).
Mechanism.
The mechanical watch is a mature technology, and most ordinary watch movements have the same parts and work the same way.
Mainspring and motion work.
The "mainspring" that powers the watch, a spiral ribbon of spring steel, is inside a cylindrical barrel, with the outer end of the mainspring attached to the barrel. The force of the mainspring turns the barrel. The barrel has gear teeth around the outside that turn the "center wheel" once per hour — this wheel has a shaft that goes through the dial. On the dial side the "cannon pinion" is attached with a friction fit (allowing it to slide when setting the hands) and the minute hand is attached to the cannon pinion. The cannon pinion drives a small 12-to-1 reduction gearing called the motion work that turns the hour wheel and hand once for every 12 revolutions of the minute hand.
For the same rate of oscillation, the duration of run, runtime or power reserve of a mechanical watch is mainly a question of what size of mainspring is used, which is, in turn, a question of how much power is needed and how much room is available. If the movement is dirty or worn, the power may not transfer from the mainspring efficiently to the escapement. Service can help restore a degraded runtime.
Most mechanical watch movements have a duration of run between 36 and 72 hours. Some mechanical watch movements are able to run for a week. The exact duration of run for a mechanical movement is calculated with the formula<br>
formula_0<br>
where formula_1 is the number of barrel teeth, formula_2 is the number of center pinion leaves, formula_3 is the number of revolutions of the barrel, and formula_4 is the number of revolutions of the center pinion — the run duration.
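A small worked example of this relationship, assuming (as in the usual layout described above) that the center pinion turns once per hour, and using illustrative tooth counts and barrel turns rather than the figures of any particular movement:

```python
# Illustrative values (assumed, not taken from a specific movement).
barrel_teeth = 80          # number of teeth on the mainspring barrel
center_pinion_leaves = 12  # number of leaves on the center pinion it drives
barrel_turns = 6.0         # useful revolutions delivered by the mainspring

# Each barrel revolution turns the center pinion (teeth / leaves) times, and the
# center pinion turns once per hour, so the run duration in hours is:
run_time_hours = barrel_turns * barrel_teeth / center_pinion_leaves
print(f"duration of run: {run_time_hours:.0f} hours")  # 6 * 80 / 12 = 40 hours
```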
Wheel train.
The center wheel drives the pinion of the third wheel, and the third wheel drives the pinion of the fourth wheel. In watches with the seconds hand in a subsidiary seconds dial, usually located above the 6 o'clock position, the fourth wheel is geared to rotate once per minute, and the second hand is attached directly to the arbour of this wheel.
Escapement.
The fourth wheel also drives the "escape wheel" of the lever escapement. The escape wheel teeth alternately catch on two fingers called "pallets" on the arms of the "pallet lever", which rocks back and forth. The other end of the lever has a fork which engages with an upright "impulse pin" on the "balance wheel" shaft. Each time the balance wheel swings through its center position, it unlocks the lever, which releases one tooth of the escape wheel, allowing the watch's wheels to advance by a fixed amount, moving the hands forward. As the escape wheel turns, its tooth pushes against the lever, which gives the balance wheel a brief push, keeping it swinging back and forth.
Balance wheel.
The balance wheel keeps time for the watch. It consists of a weighted wheel which rotates back and forth, which is returned toward its center position by a fine spiral spring, the balance spring or "hair spring". The wheel and spring together constitute a "harmonic oscillator". The mass of the balance wheel combines with the stiffness of the spring to precisely control the period of each swing or 'beat' of the wheel. A balance wheel's period of oscillation "T" in seconds, the time required for one complete cycle (two beats), is
formula_5
where formula_6 is the wheel's moment of inertia in kilogram-meter2 and formula_7 is the stiffness (spring constant) of its balance spring in newton-meters per radian. Most watch balance wheels oscillate at 5, 6, 8, or 10 beats per second. This translates into 2.5, 3, 4, and 5 Hz respectively, or 18000, 21,600, 28,800, and 36,000 beats per hour (BPH). In most watches there is a "regulator" lever on the balance spring which is used to adjust the rate of the watch. It has two "curb pins" which embrace the last turn of the spring, holding the part behind the pins motionless, so the position of the curb pins determines the length of the spring. Moving the regulator lever slides the curb pins up or down the spring to control its effective length. Sliding the pins up the spring, shortening the spring's length, makes it stiffer, increasing formula_7 in the equation above, decreasing the wheel's period formula_8 so it swings back and forth faster, causing the watch to run faster.
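A numerical sketch of this relationship, assuming the usual torsional harmonic-oscillator form T = 2π·sqrt(I/κ) for the period, with illustrative values of the moment of inertia and balance-spring stiffness chosen only so that the result lands near the common 28,800 beats-per-hour rate:

```python
import math

# Illustrative balance-wheel values (not figures for any particular movement).
I = 1.2e-9       # moment of inertia, kg*m^2
kappa = 7.58e-7  # balance-spring stiffness, N*m/rad

T = 2 * math.pi * math.sqrt(I / kappa)  # period of one full cycle, in seconds
beats_per_hour = 2 * 3600 / T           # two beats (half-swings) per cycle

print(f"T = {T:.3f} s, about {beats_per_hour:,.0f} beats per hour")
```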
Keyless work.
A separate set of gears called the "keyless work" winds the mainspring when the "crown" is rotated, and, when the crown is pulled out a short distance, allows the hands to be turned to set the watch. The stem attached to the crown has a gear called the "clutch" or "castle wheel", with two rings of teeth that project axially from the ends. When the stem is pushed in, the outer teeth turn the "ratchet wheel" on top of the mainspring barrel, which turns the shaft that the inner end of the mainspring is attached to, winding the mainspring tighter around the shaft. A spring-loaded pawl or "click" presses against the ratchet teeth, preventing the mainspring from unwinding. When the stem is pulled out, the inner teeth of the castle wheel engage with a gear which turns the minute wheel. When the crown is turned, the friction coupling of the cannon pinion allows the hands to be rotated.
Center seconds.
If the seconds hand is co-axial with the minute and hour hand, that is it is pivoted at the center of the dial, this arrangement is called "center seconds" or "sweep seconds", because the seconds hand sweeps around the minute track on the dial.
Initially center seconds hands were driven off the third wheel, sometimes via an intermediate wheel, with the gearing on the outside of the top plate. This method of driving the seconds hand is called indirect center seconds. Because the gearing was outside the plates, it added to the thickness of the movement, and because the rotation of the third wheel had to be geared up to turn the seconds hand once a minute, the seconds hand had a fluttering motion.
In 1948 Zenith introduced a watch with a redesigned gear train where the fourth wheel was at the center of the movement, and so could drive a center seconds hand directly. The minute wheel, which had previously been at the center of the movement, was moved off center and drove the minute hand indirectly. Any fluttering due to the indirect gearing is concealed by the relatively slow movement of the minute hand. This redesign brought all the train gearing between the plates and allowed a thinner movement.
Watch jewels.
Jewel bearings were invented and introduced in watches by Nicolas Fatio (or Facio) de Duillier and Pierre and Jacob Debaufre around 1702 to reduce friction. They did not become widely used until the mid-19th century. Until the 20th century they were ground from tiny pieces of natural gems. Watches often had garnet, quartz, or even glass jewels; only top quality watches used sapphire or ruby. In 1902, a process to grow artificial sapphire crystals was invented, making jewels much cheaper. Jewels in modern watches are all synthetic sapphire or (usually) ruby, made of corundum (Al2O3), one of the hardest substances known. The only difference between sapphire and ruby is that different impurities have been added to change the color; there is no difference in their properties as a bearing. The advantage of using jewels is that their ultrahard slick surface has a lower coefficient of friction with metal. The static coefficient of friction of steel-on-steel is 0.58, while that of sapphire-on-steel is 0.10-0.15.
Purposes.
Jewels serve two purposes in a watch. First, reduced friction can increase accuracy. Friction in the wheel train bearings and the escapement causes slight variations in the impulses applied to the balance wheel, causing variations in the rate of timekeeping. The low, predictable friction of jewel surfaces reduces these variations. Second, they can increase the life of the bearings. In unjeweled bearings, the pivots of the watch's wheels rotate in holes in the plates supporting the movement. The sideways force applied by the driving gear causes more pressure and friction on one side of the hole. In some of the wheels, the rotating shaft can wear away the hole until it is oval shaped, eventually causing the gear to jam, stopping the watch.
Types.
In the escapement, jewels are used for the parts that work by sliding friction: the two pallets of the pallet lever and the impulse pin on the balance wheel.
In bearings two different types are used: pierced hole jewels, in which the pivot of a wheel rotates, and flat cap jewels (capstones), against which the end of a pivot bears.
Where they are used.
The number of jewels used in watch movements increased over the last 150 years as jeweling grew less expensive and watches grew more accurate. The only bearings that really need to be jeweled in a watch are the ones in the "going train" - the gear train that transmits force from the mainspring barrel to the balance wheel - since only they are constantly under force from the mainspring. The wheels that turn the hands (the motion work) and the calendar wheels are not under load, while the ones that wind the mainspring (the keyless work) are used very seldom, so they do not wear significantly. Friction has the greatest effect in the wheels that move the fastest, so they benefit most from jewelling. So the first mechanism to be jeweled in watches was the balance wheel pivots, followed by the escapement. As more jeweled bearings were added, they were applied to slower moving wheels, and jewelling progressed up the going train toward the barrel. A 17 jewel watch has every bearing from the balance wheel to the center wheel pivot bearings jeweled, so it was considered a 'fully jeweled' watch. In quality watches, to minimize positional error, capstones were added to the lever and escape wheel bearings, making 21 jewels. Even the mainspring barrel arbor was sometimes jeweled, making the total 23. When self-winding watches were introduced in the 1950s, several wheels in the automatic winding mechanism were jeweled, increasing the count to 25–27.
'Jewel inflation'.
It is doubtful whether adding jewels in addition to the ones listed above is really useful in a watch. It does not increase accuracy, since the only wheels which have an effect on the balance wheel, those in the going train, are already jeweled. Marine chronometers, the most accurate portable timepieces, often have only 7 jewels. Nor does jeweling additional wheel bearings increase the useful life of the movement; as mentioned above most of the other wheels do not get enough wear to need them.
However, by the early 20th century watch movements had been standardized to the point that there was little difference between their mechanisms, besides quality of workmanship. So watch manufacturers made the number of jewels, one of the few metrics differentiating quality watches, a major advertising point, listing it prominently on the watch's face. Consumers, with little else to go on, learned to equate more jewels with more quality in a watch. Although initially this was a good measure of quality, it gave manufacturers an incentive to increase the jewel count.
Around the 1960s this 'jewel craze' reached new heights, and manufacturers made watches with 41, 53, 75, or even 100 jewels. Most of these additional jewels were totally nonfunctional; they never contacted moving parts, and were included just to increase the jewel count. For example, the Waltham 100 jewel watch consisted of an ordinary 17 jewel movement, with 83 tiny pieces of ruby mounted around the automatic winding rotor.
In 1974, the International Organization for Standardization (ISO) in collaboration with the Swiss watch industry standards organization Normes de l'Industrie Horlogère Suisse (NIHS) published a standard, ISO 1112, which prohibited manufacturers from including such nonfunctional jewels in the jewel counts in advertising and sales literature. This stopped the use of totally nonfunctional jewels. However, some experts say manufacturers have continued to inflate the jewel count of their watches by 'upjeweling'; adding functional jeweled bearings to wheels that do not really need them, exploiting loopholes in ISO 1112. Examples given include adding capstones to third and fourth wheel bearings, jeweling minute wheel bearings, and automatic winding ratchet pawls. Arguably none of these additions adds to the accuracy or longevity of the watch.
World time.
Some fine mechanical watches have a "world time" feature, consisting of a "city bezel" as well as an "hour bezel" that rotates according to the relative time zone of the selected city.
There are usually 27 cities (corresponding to 24 major time zones) on the city bezel, starting with GMT/UTC.
History.
Peter Henlein has often been described as the inventor of the first pocket watch, the "Nuremberg egg", in 1510, but this claim appears to be a 19th-century invention and does not appear in older sources.
Until the quartz revolution of the 1970s, all watches were mechanical. Early watches were terribly imprecise; a good one could vary as much as 15 minutes in a day. Modern precision (a few seconds per day) was not attained by any watch until 1760, when John Harrison created his marine chronometers. Industrialization of the movement manufacturing process by the Waltham Watch Company in 1854 made additional precision possible; the company won a gold medal at the 1876 Philadelphia Centennial Exposition for their manufacturing quality.
Mechanical watches are powered by a mainspring. Modern mechanical watches require on the order of 1 microwatt of power on average. Because the mainspring provides an uneven source of power (its torque steadily decreases as the spring unwinds), watches from the early 16th century to the early 19th century featured a chain-driven fusee which served to regulate the torque output of the mainspring throughout its winding. Unfortunately, fusees were fragile and easily broken, and were the source of many problems, especially inaccurate timekeeping when the fusee chain became loose or slack through lack of maintenance.
As new kinds of escapements were created which served to better isolate the watch from its time source, the balance spring, watches could be built without a fusee and still be accurate.
In the 18th century the original verge escapement, which required a fusee, was gradually replaced in better French watches with the cylinder escapement, and in British watches with the duplex escapement. In the 19th century, both were superseded by the lever escapement which has been used almost exclusively ever since. A cheaper version of the lever, the pin lever escapement, patented in 1867 by Georges Frederic Roskopf was used in inexpensive watches until the 1970s.
As manually wound mechanical watches became less popular in the 1970s, watch designers and manufacturers developed the automatic watch. Whereas a manually wound watch must be wound by hand with the pendant (crown), an automatic watch does not need to be wound by the pendant; the ordinary motion of the wearer's arm winds the watch automatically. The interior of an automatic watch houses a pivoted metal or brass weight, the rotor, which swivels on its axis as the watch is moved.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n_2 = \\frac{n_1 \\cdot z_1}{z_2}"
},
{
"math_id": 1,
"text": "z_1"
},
{
"math_id": 2,
"text": "z_2"
},
{
"math_id": 3,
"text": "n_1"
},
{
"math_id": 4,
"text": "n_2"
},
{
"math_id": 5,
"text": "T = 2 \\pi \\sqrt{ \\frac {I}{\\kappa} } \\,"
},
{
"math_id": 6,
"text": "I\\,"
},
{
"math_id": 7,
"text": "\\kappa\\,"
},
{
"math_id": 8,
"text": "T\\,"
}
] | https://en.wikipedia.org/wiki?curid=10221371 |
102221 | Orthogonality | Various meanings of the terms
In mathematics, orthogonality is the generalization of the geometric notion of "perpendicularity". Whereas "perpendicular" is typically followed by "to" when relating two lines to one another (e.g., "line A is perpendicular to line B"), "orthogonal" is commonly used without "to" (e.g., "orthogonal lines A and B").
"Orthogonality" is also used with various meanings that are often weakly related or not related at all with the mathematical meanings.
Etymology.
The word comes from the Ancient Greek "orthós" (ὀρθός), meaning "upright", and "gōnía" (γωνία), meaning "angle".
The Ancient Greek "orthogṓnion" and Classical Latin "orthogonium" originally denoted a rectangle. Later, they came to mean a right triangle. In the 12th century, the post-classical Latin word "orthogonalis" came to mean a right angle or something related to a right angle.
Physics.
Optics.
In optics, polarization states are said to be orthogonal when they propagate independently of each other, as in vertical and horizontal linear polarization or right- and left-handed circular polarization.
Special relativity.
In special relativity, a time axis determined by a rapidity of motion is hyperbolic-orthogonal to a space axis of simultaneous events, also determined by the rapidity. The theory features relativity of simultaneity.
Quantum mechanics.
In quantum mechanics, a sufficient (but not necessary) condition that two eigenstates of a Hermitian operator, formula_0 and formula_1, are orthogonal is that they correspond to different eigenvalues. This means, in Dirac notation, that formula_2 if formula_0 and formula_1 correspond to different eigenvalues. This follows from the fact that Schrödinger's equation is a Sturm–Liouville equation (in Schrödinger's formulation) or that observables are given by Hermitian operators (in Heisenberg's formulation).
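A small numerical illustration of this property, using a randomly generated Hermitian matrix rather than any physical Hamiltonian:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                    # a random Hermitian operator

vals, vecs = np.linalg.eig(H)               # generic solver: no orthogonalization imposed
for m in range(4):
    for n in range(m + 1, 4):
        if not np.isclose(vals[m], vals[n]):
            # eigenstates belonging to different eigenvalues come out orthogonal
            assert abs(np.vdot(vecs[:, m], vecs[:, n])) < 1e-8
print("eigenvectors with distinct eigenvalues are mutually orthogonal")
```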
Art.
In art, the perspective (imaginary) lines pointing to the vanishing point are referred to as "orthogonal lines". The term "orthogonal line" often has a quite different meaning in the literature of modern art criticism. Many works by painters such as Piet Mondrian and Burgoyne Diller are noted for their exclusive use of "orthogonal lines" — not, however, with reference to perspective, but rather referring to lines that are straight and exclusively horizontal or vertical, forming right angles where they intersect. For example, an essay at the web site of the Thyssen-Bornemisza Museum states that "Mondrian ... dedicated his entire oeuvre to the investigation of the balance between orthogonal lines and primary colours."
Computer science.
Orthogonality in programming language design is the ability to use various language features in arbitrary combinations with consistent results. This usage was introduced by Van Wijngaarden in the design of Algol 68:
The number of independent primitive concepts has been minimized in order that the language be easy to describe, to learn, and to implement. On the other hand, these concepts have been applied “orthogonally” in order to maximize the expressive power of the language while trying to avoid deleterious superfluities.
Orthogonality is a system design property which guarantees that modifying the technical effect produced by a component of a system neither creates nor propagates side effects to other components of the system. Typically this is achieved through the separation of concerns and encapsulation, and it is essential for feasible and compact designs of complex systems. The emergent behavior of a system consisting of components should be controlled strictly by formal definitions of its logic and not by side effects resulting from poor integration, i.e., non-orthogonal design of modules and interfaces. Orthogonality reduces testing and development time because it is easier to verify designs that neither cause side effects nor depend on them.
Orthogonal instruction set.
An instruction set is said to be orthogonal if it lacks redundancy (i.e., there is only a single instruction that can be used to accomplish a given task) and is designed such that instructions can use any register in any addressing mode. This terminology results from considering an instruction as a vector whose components are the instruction fields. One field identifies the registers to be operated upon and another specifies the addressing mode. An orthogonal instruction set uniquely encodes all combinations of registers and addressing modes.
Telecommunications.
In telecommunications, multiple access schemes are orthogonal when an ideal receiver can completely reject arbitrarily strong unwanted signals from the desired signal using different basis functions. One such scheme is time-division multiple access (TDMA), where the orthogonal basis functions are nonoverlapping rectangular pulses ("time slots").
Orthogonal frequency-division multiplexing.
Another scheme is orthogonal frequency-division multiplexing (OFDM), which refers to the use, by a single transmitter, of a set of frequency multiplexed signals with the exact minimum frequency spacing needed to make them orthogonal so that they do not interfere with each other. Well known examples include (a, g, and n) versions of 802.11 Wi-Fi; WiMAX; ITU-T G.hn, DVB-T, the terrestrial digital TV broadcast system used in most of the world outside North America; and DMT (Discrete Multi Tone), the standard form of ADSL.
In OFDM, the subcarrier frequencies are chosen so that the subcarriers are orthogonal to each other, meaning that crosstalk between the subchannels is eliminated and intercarrier guard bands are not required. This greatly simplifies the design of both the transmitter and the receiver. In conventional FDM, a separate filter for each subchannel is required.
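A minimal numerical sketch of subcarrier orthogonality over one symbol period, assuming a symbol length of 64 samples (an arbitrary choice for illustration):

```python
import numpy as np

N = 64                                      # samples per OFDM symbol (assumed value)
t = np.arange(N)

def subcarrier(k):
    """Complex subcarrier k at the minimum spacing of one cycle per symbol."""
    return np.exp(2j * np.pi * k * t / N)

# normalized inner products over one symbol period
gram = np.array([[np.vdot(subcarrier(k), subcarrier(l)) / N for l in range(5)]
                 for k in range(5)])
print(np.round(np.abs(gram), 12))           # identity matrix: distinct subcarriers do not interfere
```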
Statistics, econometrics, and economics.
When performing statistical analysis, independent variables that affect a particular dependent variable are said to be orthogonal if they are uncorrelated, since the covariance forms an inner product. In this case the same results are obtained for the effect of any of the independent variables upon the dependent variable, regardless of whether one models the effects of the variables individually with simple regression or simultaneously with multiple regression. If correlation is present, the factors are not orthogonal and different results are obtained by the two methods. This usage arises from the fact that if centered by subtracting the expected value (the mean), uncorrelated variables are orthogonal in the geometric sense discussed above, both as observed data (i.e., vectors) and as random variables (i.e., density functions).
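A brief simulated illustration (the data and coefficients below are assumptions for demonstration only): when the regressors are generated independently, the simple-regression and multiple-regression estimates nearly coincide.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                     # generated independently of x1
y = 2.0 * x1 - 1.5 * x2 + rng.normal(scale=0.1, size=n)

def simple_slope(x, y):
    """Slope of the simple regression of y on a single centred regressor x."""
    xc = x - x.mean()
    return np.dot(xc, y) / np.dot(xc, xc)

X = np.column_stack([np.ones(n), x1, x2])   # multiple regression on both regressors
beta = np.linalg.lstsq(X, y, rcond=None)[0]

print(simple_slope(x1, y), beta[1])         # both close to 2.0
print(simple_slope(x2, y), beta[2])         # both close to -1.5
```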
One econometric formalism that is alternative to the maximum likelihood framework, the Generalized Method of Moments, relies on orthogonality conditions. In particular, the Ordinary Least Squares estimator may be easily derived from an orthogonality condition between the explanatory variables and model residuals.
Taxonomy.
In taxonomy, an orthogonal classification is one in which no item is a member of more than one group, that is, the classifications are mutually exclusive.
Chemistry and biochemistry.
In chemistry and biochemistry, an orthogonal interaction occurs when there are two pairs of substances and each substance can interact with their respective partner, but does not interact with either substance of the other pair. For example, DNA has two orthogonal pairs: cytosine and guanine form a base-pair, and adenine and thymine form another base-pair, but other base-pair combinations are strongly disfavored. As a chemical example, tetrazine reacts with transcyclooctene and azide reacts with cyclooctyne without any cross-reaction, so these are mutually orthogonal reactions, and so, can be performed simultaneously and selectively.
Organic synthesis.
In organic synthesis, orthogonal protection is a strategy allowing the deprotection of functional groups independently of each other.
Supramolecular chemistry.
In supramolecular chemistry the notion of orthogonality refers to the possibility of two or more supramolecular, often non-covalent, interactions being compatible; reversibly forming without interference from the other.
Analytical chemistry.
In analytical chemistry, analyses are "orthogonal" if they make a measurement or identification in completely different ways, thus increasing the reliability of the measurement. Orthogonal testing thus can be viewed as "cross-checking" of results, and the "cross" notion corresponds to the etymologic origin of "orthogonality". Orthogonal testing is often required as a part of a new drug application.
System reliability.
In the field of system reliability orthogonal redundancy is that form of redundancy where the form of backup device or method is completely different from the prone to error device or method. The failure mode of an orthogonally redundant back-up device or method does not intersect with and is completely different from the failure mode of the device or method in need of redundancy to safeguard the total system against catastrophic failure.
Neuroscience.
In neuroscience, a sensory map in the brain which has overlapping stimulus coding (e.g. location and quality) is called an orthogonal map.
Philosophy.
In philosophy, two topics, authors, or pieces of writing are said to be "orthogonal" to each other when they do not substantively cover what could be considered potentially overlapping or competing claims. Thus, texts in philosophy can either support and complement one another, they can offer competing explanations or systems, or they can be orthogonal to each other in cases where the scope, content, and purpose of the pieces of writing are entirely unrelated.
Gaming.
In board games such as chess which feature a grid of squares, 'orthogonal' is used to mean "in the same row/'rank' or column/'file'". This is the counterpart to squares which are "diagonally adjacent". In the ancient Chinese board game Go a player can capture the stones of an opponent by occupying all orthogonally adjacent points.
Other examples.
Stereo vinyl records encode both the left and right stereo channels in a single groove. The V-shaped groove in the vinyl has walls that are 90 degrees to each other, with variations in each wall separately encoding one of the two analogue channels that make up the stereo signal. The cartridge senses the motion of the stylus following the groove in two orthogonal directions: 45 degrees from vertical to either side. A pure horizontal motion corresponds to a mono signal, equivalent to a stereo signal in which both channels carry identical (in-phase) signals.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\psi_m "
},
{
"math_id": 1,
"text": " \\psi_n "
},
{
"math_id": 2,
"text": " \\langle \\psi_m | \\psi_n \\rangle = 0 "
}
] | https://en.wikipedia.org/wiki?curid=102221 |
10222175 | Wythoff's game | Two-player mathematical subtraction game
Wythoff's game is a two-player mathematical subtraction game, played with two piles of counters. Players take turns removing counters from one or both piles; when removing counters from both piles, the numbers of counters removed from each pile must be equal. The game ends when one player removes the last counter or counters, thus winning.
An equivalent description of the game is that a single chess queen is placed somewhere on a large grid of squares, and each player can move the queen towards the lower left corner of the grid: south, west, or southwest, any number of steps. The winner is the player who moves the queen into the corner. The two Cartesian coordinates of the queen correspond to the sizes of two piles in the formulation of the game involving removing counters from piles.
Martin Gardner in his March 1977 "Mathematical Games column" in "Scientific American" claims that the game was played in China under the name 捡石子 "jiǎn shízǐ" ("picking stones"). The Dutch mathematician W. A. Wythoff published a mathematical analysis of the game in 1907.
Optimal strategy.
Any position in the game can be described by a pair of integers ("n", "m") with "n" ≤ "m", describing the size of both piles in the position or the coordinates of the queen. The strategy of the game revolves around "cold positions" and "hot positions": in a cold position, the player whose turn it is to move will lose with best play, while in a hot position, the player whose turn it is to move will win with best play. The optimal strategy from a hot position is to move to any reachable cold position.
The classification of positions into hot and cold can be carried out recursively with the following three rules: the position (0,0) is cold; any position from which a cold position can be reached in a single move is hot; and if every move from a position leads to a hot position, then that position is cold.
For instance, all positions of the form (0, "m") and ("m", "m") with "m" > 0 are hot, by rule 2. However, the position (1,2) is cold, because the only positions that can be reached from it, (0,1), (0,2), (1,0) and (1,1), are all hot. The cold positions ("n", "m") with the smallest values of "n" and "m" are (0, 0), (1, 2), (3, 5), (4, 7), (6, 10) and (8, 13).
For misère game of this game, (0, 1) and (2, 2) are cold positions, and a position ("n", "m") with "m", "n" > 2 is cold if and only if ("n", "m") in normal game is cold.
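A short Python sketch of the recursive classification for the normal-play game, reproducing the small cold positions listed above:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_cold(n, m):
    """True if the Wythoff position (n, m) is cold (a loss for the player to move)."""
    if n > m:
        n, m = m, n
    if (n, m) == (0, 0):
        return True                          # rule 1
    # rules 2 and 3: a position is hot iff some single move reaches a cold position
    for k in range(1, m + 1):                # take k counters from the larger pile
        if is_cold(n, m - k):
            return False
    for k in range(1, n + 1):                # take k from the smaller pile, or k from both
        if is_cold(n - k, m) or is_cold(n - k, m - k):
            return False
    return True

print([(n, m) for m in range(14) for n in range(m + 1) if is_cold(n, m)])
# [(0, 0), (1, 2), (3, 5), (4, 7), (6, 10), (8, 13)]
```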
Formula for cold positions.
Wythoff discovered that the cold positions follow a regular pattern determined by the golden ratio. Specifically, if "k" is any natural number and
formula_0
formula_1
where φ is the golden ratio and we are using the floor function, then ("n""k", "m""k") is the "k"th cold position. These two sequences of numbers are recorded in the Online Encyclopedia of Integer Sequences.
The two sequences "nk" and "mk" are the Beatty sequences associated with the equation
formula_2
As is true in general for pairs of Beatty sequences, these two sequences are complementary: each positive integer appears exactly once in either sequence.
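A small numerical check of these formulas, which reproduces the cold positions computed recursively above:

```python
import math

phi = (1 + math.sqrt(5)) / 2

def kth_cold_position(k):
    n = math.floor(k * phi)          # n_k = floor(k * phi)
    m = math.floor(k * phi * phi)    # m_k = floor(k * phi^2) = n_k + k
    return n, m

print([kth_cold_position(k) for k in range(6)])
# [(0, 0), (1, 2), (3, 5), (4, 7), (6, 10), (8, 13)]
```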
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n_k = \\lfloor k \\phi \\rfloor = \\lfloor m_k \\phi \\rfloor -m_k \\,"
},
{
"math_id": 1,
"text": "m_k = \\lfloor k \\phi^2 \\rfloor = \\lceil n_k \\phi \\rceil = n_k + k \\,"
},
{
"math_id": 2,
"text": "\\frac{1}{\\phi} + \\frac{1}{\\phi^2} = 1 \\,."
}
] | https://en.wikipedia.org/wiki?curid=10222175 |
10223066 | Spacetime algebra | Setting of relativistic physics in geometric algebra
In mathematical physics, spacetime algebra (STA) is the application of Clifford algebra Cl1,3(R), or equivalently the geometric algebra G(M4) to physics. Spacetime algebra provides a "unified, coordinate-free formulation for all of relativistic physics, including the Dirac equation, Maxwell equation and General Relativity" and "reduces the mathematical divide between classical, quantum and relativistic physics."
Spacetime algebra is a vector space that allows not only vectors, but also bivectors (directed quantities associated with particular planes, such as areas or rotations) or blades (quantities associated with particular hyper-volumes) to be combined, as well as rotated, reflected, or Lorentz boosted. It is also the natural parent algebra of spinors in special relativity. These properties allow many of the most important equations in physics to be expressed in particularly simple forms, and can be very helpful towards a more geometric understanding of their meanings.
In comparison to related methods, STA and Dirac algebra are both Clifford Cl1,3 algebras, but STA uses real number scalars while Dirac algebra uses complex number scalars.
The STA spacetime split is similar to the algebra of physical space (APS, Pauli algebra) approach. APS represents spacetime as a paravector, a combined 3-dimensional vector space and a 1-dimensional scalar.
Structure.
For any pair of STA vectors, formula_0, there is a vector (geometric) product formula_1, inner (dot) product formula_2 and outer (exterior, wedge) product formula_3. The vector product is a sum of an inner and outer product:
formula_4
The inner product generates a real number (scalar), and the outer product generates a bivector. The vectors formula_5 and formula_6 are orthogonal if their inner product is zero; vectors formula_5 and formula_6 are parallel if their outer product is zero.
The orthonormal basis vectors are a timelike vector formula_7 and 3 spacelike vectors formula_8. The Minkowski metric tensor's nonzero terms are the diagonal terms, formula_9. For formula_10:
formula_11
The Dirac matrices share these properties, and STA is equivalent to the algebra generated by the Dirac matrices over the field of real numbers; explicit matrix representation is unnecessary for STA.
Products of the basis vectors generate a tensor basis containing one scalar formula_12, four vectors formula_13, six bivectors formula_14, four pseudovectors (trivectors) formula_15 and one pseudoscalar formula_16 with formula_17. The pseudoscalar commutes with all even-grade STA elements, but anticommutes with all odd-grade STA elements.
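These relations can be checked numerically in the standard Dirac matrix representation; the sketch below is only an illustration, since STA itself needs no matrix representation:

```python
import numpy as np

# Pauli matrices and 2x2 blocks
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

# Dirac matrices in the standard (Dirac) representation
g0 = np.block([[I2, Z2], [Z2, -I2]])
gamma = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (s1, s2, s3)]

eta = np.diag([1.0, -1.0, -1.0, -1.0])      # Minkowski metric, signature (+,-,-,-)

for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
print("gamma_mu gamma_nu + gamma_nu gamma_mu = 2 eta_mu_nu holds")
```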
Subalgebra.
STA's even-graded elements (scalars, bivectors, pseudoscalar) form a Clifford Cl3,0(R) even subalgebra equivalent to the APS or Pauli algebra. The STA bivectors are equivalent to the APS vectors and pseudovectors. The STA subalgebra becomes more explicit by renaming the STA bivectors formula_18 as formula_19 and the STA bivectors formula_20 as formula_21. The Pauli matrices, formula_22, are a matrix representation for formula_23. For any pair of formula_24, the nonzero inner products are formula_25, and the nonzero outer products are:
formula_26
The sequence of algebra to even subalgebra continues as algebra of physical space, quaternion algebra, complex numbers and real numbers.
Division.
A nonzero vector formula_5 is a null vector (degree 2 nilpotent) if formula_27. An example is formula_28. Null vectors are tangent to the light cone (null cone). An element formula_6 is an idempotent if formula_29. Two idempotents formula_30 and formula_31 are orthogonal idempotents if formula_32. An example of an orthogonal idempotent pair is formula_33 and formula_34 with formula_35. Proper zero divisors are nonzero elements whose product is zero such as null vectors or orthogonal idempotents. A division algebra is an algebra in which every nonzero element has a multiplicative inverse (reciprocal); this is possible only if there are no proper zero divisors and the only nonzero idempotent is 1.
The only finite-dimensional associative division algebras over the real numbers are the real numbers, complex numbers and quaternions. As STA is not a division algebra, some STA elements lack an inverse; however, division by a non-null vector formula_36 is possible by multiplication by its inverse, defined as formula_37.
Reciprocal frame.
Associated with the orthogonal basis formula_38 is the reciprocal basis set formula_39 satisfying these equations:
formula_40
These reciprocal frame vectors differ only by a sign, with formula_41, but formula_42.
A vector formula_5 may be represented using either the basis vectors or the reciprocal basis vectors formula_43 with summation over formula_44, according to the Einstein notation. The inner product of vector and basis vectors or reciprocal basis vectors generates the vector components.
formula_45
The metric and index gymnastics raise or lower indices:
formula_46
Spacetime gradient.
The spacetime gradient, like the gradient in a Euclidean space, is defined such that the directional derivative relationship is satisfied:
formula_47
This requires the definition of the gradient to be
formula_48
Written out explicitly with formula_49, these partials are
formula_50
Spacetime split.
In STA, a spacetime split is a projection from four-dimensional space into (3+1)-dimensional space in a chosen reference frame, separating quantities into a scalar timelike part and a spacelike part.
This is achieved by pre-multiplication or post-multiplication by a timelike basis vector formula_51, which serves to split a four vector into a scalar timelike and a bivector spacelike component, in the reference frame co-moving with formula_51. With formula_52 we have
formula_53
Spacetime split is a method for representing an even-graded element of spacetime as an element of the Pauli algebra, an algebra where time is a scalar separated from vectors that occur in 3-dimensional space. The method replaces the spacetime basis vectors formula_54 with bivectors formed with the chosen timelike vector formula_51.
As these bivectors formula_55 square to unity, they serve as a spatial basis. Utilizing the Pauli matrix notation, these are written formula_56. Spatial vectors in STA are denoted in boldface; then with formula_57 and formula_58, the formula_51-spacetime split formula_59, and its reverse formula_60 are:
formula_61
However, the above formulas only work in the Minkowski metric with signature (+ - - -). For forms of the spacetime split that work in either signature, the alternate definitions formula_62 and formula_63 must be used.
Transformations.
To rotate a vector formula_64 in geometric algebra, the following formula is used:
formula_65,
where formula_66 is the angle to rotate by, and formula_67 is the normalized bivector representing the plane of rotation so that formula_68.
For a given spacelike bivector, formula_69, so Euler's formula applies, giving the rotation
formula_70.
For a given timelike bivector, formula_71, so a "rotation through time" uses the analogous equation for the split-complex numbers:
formula_72.
Interpreting this equation, these rotations along the time direction are simply hyperbolic rotations. These are equivalent to Lorentz boosts in special relativity.
Both of these transformations are known as Lorentz transformations, and the combined set of all of them is the Lorentz group. To transform an object in STA from any basis (corresponding to a reference frame) to another, one or more of these transformations must be used.
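As an illustrative check (not part of the original exposition), the rotor formula for a timelike bivector can be evaluated numerically, using the Dirac-representation matrices purely as a computational stand-in for the abstract algebra; the rapidity and vector components below are arbitrary assumed values:

```python
import numpy as np

# Dirac-representation matrices used purely as a computational stand-in
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
g1 = np.block([[Z2, s1], [-s1, Z2]])

theta = 0.7                                  # rapidity (assumed value)
B = g1 @ g0                                  # timelike unit bivector, B @ B = identity
R = np.cosh(theta / 2) * np.eye(4) - np.sinh(theta / 2) * B
R_inv = np.cosh(theta / 2) * np.eye(4) + np.sinh(theta / 2) * B

v = 2.0 * g0 + 0.5 * g1                      # vector with assumed components (2.0, 0.5)
v_boosted = R @ v @ R_inv

v0 = np.trace(g0 @ v_boosted).real / 4       # component extraction: tr(gamma^mu V)/4
v1 = -np.trace(g1 @ v_boosted).real / 4

boost = np.array([[np.cosh(theta), -np.sinh(theta)],
                  [-np.sinh(theta), np.cosh(theta)]])
assert np.allclose([v0, v1], boost @ np.array([2.0, 0.5]))
print("rotor transformation reproduces the standard Lorentz boost")
```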
Any spacetime element formula_73 is transformed by multiplication with the pseudoscalar to form its dual element formula_74. The duality rotation transforming the spacetime element formula_73 to the element formula_75 through angle formula_76 with pseudoscalar formula_77 is:
formula_78
Duality rotation occurs only for non-singular Clifford algebra, non-singular meaning a Clifford algebra containing pseudoscalars with a non-zero square.
Grade involution (main involution, inversion) transforms every r-vector formula_79 to formula_80:
formula_81
Reversion transformation occurs by decomposing any spacetime element as a sum of products of vectors and then reversing the order of each product. For multivector
formula_73 arising from a product of vectors formula_82, the reversion is formula_83:
formula_84
Clifford conjugation of a spacetime element formula_73 combines reversion and grade involution transformations, indicated as formula_85:
formula_86
The grade involution, reversion and Clifford conjugation transformations are involutions.
Classical electromagnetism.
The Faraday bivector.
In STA, the electric field and magnetic field can be unified into a single bivector field, known as the Faraday bivector, equivalent to the Faraday tensor. It is defined as:
formula_87
where formula_88 and formula_89 are the usual electric and magnetic fields, and formula_90 is the STA pseudoscalar. Alternatively, expanding formula_91 in terms of components, formula_91 is given by
formula_92
The separate formula_93 and formula_94 fields are recovered from formula_91 using
formula_95
The formula_51 term represents a given reference frame, and as such, using different reference frames will result in apparently different relative fields, exactly as in standard special relativity.
Since the Faraday bivector is a relativistic invariant, further information can be found in its square, giving two new Lorentz-invariant quantities, one scalar, and one pseudoscalar:
formula_96
The scalar part corresponds to the Lagrangian density for the electromagnetic field, and the pseudoscalar part is a less-often seen Lorentz invariant.
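As a numerical illustration with arbitrary assumed field values (and again using a matrix representation purely as a computational stand-in for the abstract algebra), the scalar and pseudoscalar parts of formula_96 can be checked against the expressions above:

```python
import numpy as np

# Dirac-representation matrices as a computational stand-in for the abstract algebra
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
g = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (s1, s2, s3)]
I_ps = g[0] @ g[1] @ g[2] @ g[3]                  # pseudoscalar, squares to -1

E = np.array([1.0, 2.0, -0.5])                    # assumed field values
B = np.array([0.3, -0.1, 0.8])
c = 1.0                                           # natural units

sigma = [g[k] @ g[0] for k in (1, 2, 3)]          # sigma_k = gamma_k gamma_0
F = sum(E[k] * sigma[k] + c * B[k] * I_ps @ sigma[k] for k in range(3))

F2 = F @ F
scalar = np.trace(F2).real / 4                    # grade-0 part of F^2
pseudo = -np.trace(I_ps @ F2).real / 4            # coefficient of the pseudoscalar part

assert np.isclose(scalar, E @ E - c**2 * B @ B)
assert np.isclose(pseudo, 2 * c * E @ B)
print("F^2 = (E^2 - c^2 B^2) + 2 c (E.B) I  verified numerically")
```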
Maxwell's equation.
STA formulates Maxwell's equations in a simpler form as one equation, rather than the 4 equations of vector calculus. Similarly to the above field bivector, the electric charge density and current density can be unified into a single spacetime vector, equivalent to a four-vector. As such, the spacetime current formula_97 is given by
formula_98
where formula_99 are the components of the classical 3-dimensional current density. Combining these quantities in this way makes it particularly clear that the classical charge density is nothing more than a current travelling in the timelike direction given by formula_51.
Combining the electromagnetic field and current density together with the spacetime gradient as defined earlier, we can combine all four of Maxwell's equations into a single equation in STA.
Maxwell's equation:
formula_100
The fact that these quantities are all covariant objects in the STA automatically guarantees Lorentz covariance of the equation, which is much easier to show than when separated into four separate equations.
In this form, it is also much simpler to prove certain properties of Maxwell's equations, such as the conservation of charge. Using the fact that for any bivector field, the divergence of its spacetime gradient is formula_101, one can perform the following manipulation:
formula_102
This equation has the clear meaning that the divergence of the current density is zero, i.e. the total charge and current density over time is conserved.
Using the electromagnetic field, the form of the Lorentz force on a charged particle can also be considerably simplified using STA.
Lorentz force on a charged particle:
formula_103
Potential formulation.
In the standard vector calculus formulation, two potential functions are used: the electric scalar potential, and the magnetic vector potential. Using the tools of STA, these two objects are combined into a single vector field formula_104, analogous to the electromagnetic four-potential in tensor calculus. In STA, it is defined as
formula_105
where formula_106 is the scalar potential, and formula_107 are the components of the magnetic potential. As defined, this field has SI units of webers per meter (V⋅s⋅m−1).
The electromagnetic field can also be expressed in terms of this potential field, using
formula_108
However, this definition is not unique. For any twice-differentiable scalar function formula_109, the potential given by
formula_110
will also give the same formula_91 as the original, due to the fact that
formula_111
This phenomenon is called gauge freedom. The process of choosing a suitable function formula_112 to make a given problem simplest is known as gauge fixing. However, in relativistic electrodynamics, the Lorenz condition is often imposed, where formula_113.
To reformulate the STA Maxwell equation in terms of the potential formula_104, formula_91 is first replaced with the above definition.
formula_114
Substituting in this result, one arrives at the potential formulation of electromagnetism in STA:
Potential equation:
formula_115
Lagrangian formulation.
Analogously to the tensor calculus formalism, the potential formulation in STA naturally leads to an appropriate Lagrangian density.
Electromagnetic Lagrangian density:
formula_116
The multivector-valued Euler-Lagrange equations for the field can be derived, and being loose with the mathematical rigor of taking the partial derivative with respect to something that is not a scalar, the relevant equations become:
formula_117
To begin to re-derive the potential equation from this form, it is simplest to work in the Lorenz gauge, setting
formula_118
This process can be done regardless of the chosen gauge, but this makes the resulting process considerably clearer. Due to the structure of the geometric product, using this condition results in formula_119.
After substituting in formula_120, the same equation of motion as above for the potential field formula_104 is easily obtained.
The Pauli equation.
STA allows the description of the Pauli particle in terms of a real theory in place of a matrix theory. The matrix theory description of the Pauli particle is:
formula_121
where formula_122 is a spinor, formula_123 is the imaginary unit with no geometric interpretation, formula_124 are the Pauli matrices (with the 'hat' notation indicating that formula_125 is a matrix operator and not an element in the geometric algebra), and formula_126 is the Schrödinger Hamiltonian.
The STA approach transforms the matrix spinor representation formula_127 to the STA representation formula_128 using elements, formula_129, of the even-graded spacetime subalgebra and the pseudoscalar formula_130:
formula_131
The Pauli particle is described by the "real Pauli–Schrödinger equation:"
formula_132
where now formula_133 is an even multi-vector of the geometric algebra, and the Schrödinger Hamiltonian is formula_126. Hestenes refers to this as the "real Pauli–Schrödinger theory" to emphasize that this theory reduces to the Schrödinger theory if the term that includes the magnetic field is dropped. The vector formula_134 is an arbitrarily selected fixed vector; a fixed rotation can generate any alternative selected fixed vector formula_135.
The Dirac equation.
STA enables a description of the Dirac particle in terms of a real theory in place of a matrix theory. The matrix theory description of the Dirac particle is:
formula_136
where formula_137 are the Dirac matrices and formula_138 is the imaginary unit with no geometric interpretation.
Using the same approach as for Pauli equation, the STA approach transforms the matrix upper spinor formula_139 and matrix lower spinor formula_140 of the matrix Dirac bispinor formula_141 to the corresponding geometric algebra spinor representations formula_142 and formula_143. These are then combined to represent the full geometric algebra Dirac bispinor formula_144.
formula_145
Following Hestenes' derivation, the Dirac particle is described by the equation:
Dirac equation in STA:
formula_146
Here, formula_133 is the spinor field, formula_51 and formula_147 are elements of the geometric algebra, formula_148 is the electromagnetic four-potential, and formula_149 is the spacetime vector derivative.
Dirac spinors.
A relativistic Dirac spinor formula_128 can be expressed as:
formula_150
where, according to its derivation by David Hestenes, formula_151 is an even multivector-valued function on spacetime, formula_152 is a unimodular spinor or "rotor", and formula_153 and formula_154 are scalar-valued functions. In this construction, the components of formula_133 directly correspond with the components of a Dirac spinor, both having 8 scalar degrees of freedom.
This equation is interpreted as connecting spin with the imaginary pseudoscalar.
The rotor, formula_155, Lorentz transforms the frame of vectors formula_156 into another frame of vectors formula_157 by the operation formula_158; note that formula_159 indicates the reverse transformation.
This has been extended to provide a framework for locally varying vector- and scalar-valued observables and support for the Zitterbewegung interpretation of quantum mechanics originally proposed by Schrödinger.
Hestenes has compared his expression for formula_133 with Feynman's expression for it in the path integral formulation:
formula_160
where formula_161 is the classical action along the formula_162-path.
Using the spinors, the current density from the field can be expressed by
formula_163
Symmetries.
Global phase symmetry is a constant global phase shift of the wave function that leaves the Dirac equation unchanged. Local phase symmetry is a spatially varying phase shift that leaves the Dirac equation unchanged if accompanied by a gauge transformation of the electromagnetic four-potential as expressed by these combined substitutions.
formula_164
In these equations, the local phase transformation is a phase shift formula_165 at spacetime location formula_166 with pseudoscalar formula_77 and formula_134 of the even-graded spacetime subalgebra applied to the wave function formula_128; the gauge transformation is a subtraction of the gradient of the phase shift formula_167 from the electromagnetic four-potential formula_73 with particle electric charge formula_168.
Researchers have applied STA and related Clifford algebra approaches to gauge theories, electroweak interaction, Yang–Mills theory, and the standard model.
The discrete symmetries are parity formula_169, charge conjugation formula_170 and time reversal formula_171 applied to wave function formula_128. These effects are:
formula_172
General relativity.
Researchers have applied STA and related Clifford algebra approaches to relativity, gravity and cosmology. The gauge theory gravity (GTG) uses STA to describe an induced curvature on Minkowski space while admitting a gauge symmetry under "arbitrary smooth remapping of events onto spacetime" leading to this geodesic equation.
formula_173
and the covariant derivative
formula_174
where formula_175 is the connection associated with the gravitational potential, and formula_176 is an external interaction such as an electromagnetic field.
The theory shows some promise for the treatment of black holes, as its form of the Schwarzschild solution does not break down at singularities; most of the results of general relativity have been mathematically reproduced, and the relativistic formulation of classical electrodynamics has been extended to quantum mechanics and the Dirac equation.
Notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "a,b"
},
{
"math_id": 1,
"text": "ab"
},
{
"math_id": 2,
"text": "a \\cdot b"
},
{
"math_id": 3,
"text": "a \\wedge b"
},
{
"math_id": 4,
"text": " a \\cdot b = \\frac{ab +ba}{2} = b \\cdot a, \\quad a \\wedge b= \\frac{ab-ba}{2} = - b \\wedge a, \\quad ab = a \\cdot b + a \\wedge b "
},
{
"math_id": 5,
"text": "a"
},
{
"math_id": 6,
"text": "b"
},
{
"math_id": 7,
"text": "\\gamma_{0} "
},
{
"math_id": 8,
"text": "\\gamma_{1},\\gamma_{2},\\gamma_{3}"
},
{
"math_id": 9,
"text": "(\\eta_{00}, \\eta_{11}, \\eta_{22}, \\eta_{33})=(1, -1, -1, -1)"
},
{
"math_id": 10,
"text": "\\mu , \\nu = 0,1,2,3"
},
{
"math_id": 11,
"text": "\\gamma_{\\mu} \\cdot \\gamma_{\\nu} = \\frac{\\gamma_{\\mu} \\gamma_{\\nu}+ \\gamma_{\\nu} \\gamma_{\\mu}}{2} = \\eta_{\\mu \\nu}, \\quad \\gamma_{0} \\cdot \\gamma_{0}=1, \\ \\gamma_{1} \\cdot \\gamma_{1}=\\gamma_{2} \\cdot \\gamma_{2}=\\gamma_{3} \\cdot \\gamma_{3}=-1, \\quad \\text{ otherwise } \\ \\gamma_{\\mu} \\gamma_{\\nu} = - \\gamma_{\\nu} \\gamma_{\\mu}"
},
{
"math_id": 12,
"text": "\\{1\\}"
},
{
"math_id": 13,
"text": "\\{\\gamma_{0}, \\gamma_{1}, \\gamma_{2}, \\gamma_{3}\\}"
},
{
"math_id": 14,
"text": "\\{\\gamma_{0}\\gamma_{1}, \\, \\gamma_{0}\\gamma_{2},\\, \\gamma_{0}\\gamma_{3}, \\, \\gamma_{1}\\gamma_{2}, \\, \\gamma_{2}\\gamma_{3}, \\, \\gamma_{3}\\gamma_{1}\\}"
},
{
"math_id": 15,
"text": "\\{I\\gamma_{0}, I\\gamma_{1}, I\\gamma_{2}, I\\gamma_{3}\\}"
},
{
"math_id": 16,
"text": "\\{I\\}"
},
{
"math_id": 17,
"text": "I=\\gamma_{0} \\gamma_{1} \\gamma_{2} \\gamma_{3} "
},
{
"math_id": 18,
"text": " ( \\gamma_{1} \\gamma_{0}, \\gamma_{2} \\gamma_{0},\\gamma_{3} \\gamma_{0}) "
},
{
"math_id": 19,
"text": " (\\sigma_{1}, \\sigma_{2}, \\sigma_{3}) "
},
{
"math_id": 20,
"text": " ( \\gamma_{3} \\gamma_{2}, \\gamma_{1} \\gamma_{3},\\gamma_{2} \\gamma_{1})"
},
{
"math_id": 21,
"text": " ( I \\sigma_{1}, I \\sigma_{2},I \\sigma_{3})"
},
{
"math_id": 22,
"text": " \\hat{\\sigma}_{1}, \\hat{\\sigma}_{2}, \\hat{\\sigma}_{3} "
},
{
"math_id": 23,
"text": " \\sigma_{1}, \\sigma_{2}, \\sigma_{3} "
},
{
"math_id": 24,
"text": " (\\sigma_{1}, \\sigma_{2}, \\sigma_{3})"
},
{
"math_id": 25,
"text": "\\sigma_{1} \\cdot \\sigma_{1}=\\sigma_{2} \\cdot \\sigma_{2} =\\sigma_{3} \\cdot \\sigma_{3}=1 "
},
{
"math_id": 26,
"text": "\\begin{align}\n\\sigma_{1} \\wedge \\sigma_{2}&= I \\sigma_{3} \\\\\n\\sigma_{2} \\wedge \\sigma_{3}&= I \\sigma_{1} \\\\\n\\sigma_{3} \\wedge \\sigma_{1}&= I \\sigma_{2} \\\\\n\\end{align}"
},
{
"math_id": 27,
"text": "a^{2}=0"
},
{
"math_id": 28,
"text": "a=\\gamma^{0}+\\gamma^{1}"
},
{
"math_id": 29,
"text": "b^{2}=b"
},
{
"math_id": 30,
"text": "b_{1}"
},
{
"math_id": 31,
"text": "b_{2}"
},
{
"math_id": 32,
"text": "b_{1} b_{2} =0"
},
{
"math_id": 33,
"text": "\\tfrac{1}{2}(1 + \\gamma_0 \\gamma_{k})"
},
{
"math_id": 34,
"text": "\\tfrac{1}{2}(1 - \\gamma_0\\gamma_{k})"
},
{
"math_id": 35,
"text": "k=1,2,3"
},
{
"math_id": 36,
"text": "c"
},
{
"math_id": 37,
"text": "c^{-1}= (c \\cdot c)^{-1} c "
},
{
"math_id": 38,
"text": "\\{ \\gamma_{0}, \\gamma_{1}, \\gamma_{2}, \\gamma_{3} \\}"
},
{
"math_id": 39,
"text": "\\{ \\gamma^{0}, \\gamma^{1}, \\gamma^{2}, \\gamma^{3} \\}"
},
{
"math_id": 40,
"text": "\\gamma_{\\mu} \\cdot \\gamma^{\\nu} = \\delta^{\\nu}_{\\mu} , \\quad \\mu, \\nu =0,1,2,3 "
},
{
"math_id": 41,
"text": "\\gamma^0 = \\gamma_0"
},
{
"math_id": 42,
"text": "\\gamma^1 = -\\gamma_1, \\ \\ \\gamma^2 = -\\gamma_2, \\ \\ \\gamma^3 = -\\gamma_3 "
},
{
"math_id": 43,
"text": "a = a^{\\mu} \\gamma_{\\mu} = a_{\\mu} \\gamma^{\\mu}"
},
{
"math_id": 44,
"text": "\\mu = 0, 1, 2, 3"
},
{
"math_id": 45,
"text": "\\begin{align}a \\cdot \\gamma^{\\nu} &= a^\\nu , \\quad \\nu=0,1,2,3 \\\\ a \\cdot \\gamma_{\\nu} &= a_\\nu , \\quad \\nu=0,1,2,3 \\end{align}"
},
{
"math_id": 46,
"text": " \\begin{align} \\gamma_{\\mu} &= \\eta_{\\mu \\nu} \\gamma^{\\nu} , \\quad \\mu, \\nu =0,1,2,3 \\\\ \\gamma^{\\mu} &= \\eta^{\\mu \\nu} \\gamma_{\\nu} , \\quad \\mu, \\nu =0,1,2,3 \\end{align} "
},
{
"math_id": 47,
"text": "a \\cdot \\nabla F(x)= \\lim_{\\tau \\rightarrow 0} \\frac{F(x + a\\tau) - F(x)}{\\tau} ."
},
{
"math_id": 48,
"text": " \\nabla = \\gamma^\\mu \\frac{\\partial}{\\partial x^\\mu} = \\gamma^\\mu \\partial_\\mu ."
},
{
"math_id": 49,
"text": "x = ct \\gamma_0 + x^k \\gamma_k"
},
{
"math_id": 50,
"text": " \\partial_0 = \\frac{1}{c} \\frac{\\partial}{\\partial t}, \\quad \\partial_k = \\frac{\\partial}{\\partial {x^k}} "
},
{
"math_id": 51,
"text": "\\gamma_0"
},
{
"math_id": 52,
"text": "x = x^\\mu \\gamma_\\mu"
},
{
"math_id": 53,
"text": "\n\\begin{align}\nx \\gamma_0 &= x^0 + x^k \\gamma_k \\gamma_0 \\\\ \\gamma_0 x &= x^0 - x^k \\gamma_k \\gamma_0 \\end{align}\n"
},
{
"math_id": 54,
"text": "(\\gamma)"
},
{
"math_id": 55,
"text": "\\gamma_k \\gamma_0"
},
{
"math_id": 56,
"text": "\\sigma_k = \\gamma_k \\gamma_0"
},
{
"math_id": 57,
"text": "\\mathbf{x} = x^k \\sigma_k"
},
{
"math_id": 58,
"text": "x^0 = c t"
},
{
"math_id": 59,
"text": "x \\gamma_0"
},
{
"math_id": 60,
"text": "\\gamma_0 x"
},
{
"math_id": 61,
"text": "\\begin{align}\nx \\gamma_0 &= x^0 + x^k \\sigma_k = ct + \\mathbf{x} \\\\\n\\gamma_0 x &= x^0 - x^k \\sigma_k = ct - \\mathbf{x}\n\\end{align}\n"
},
{
"math_id": 62,
"text": "\\sigma_k = \\gamma_k \\gamma^0"
},
{
"math_id": 63,
"text": "\\sigma^k = \\gamma_0 \\gamma^k"
},
{
"math_id": 64,
"text": "v"
},
{
"math_id": 65,
"text": "v' = e^{-\\beta \\frac{\\theta}{2}} \\ v \\ e^{\\beta \\frac{\\theta}{2}}"
},
{
"math_id": 66,
"text": "\\theta"
},
{
"math_id": 67,
"text": "\\beta"
},
{
"math_id": 68,
"text": "\\beta\\tilde{\\beta}=1"
},
{
"math_id": 69,
"text": "\\beta^2 = -1"
},
{
"math_id": 70,
"text": "v' = \\left(\\cos\\left(\\frac{\\theta}{2}\\right) - \\beta \\sin\\left(\\frac{\\theta}{2}\\right)\\right) \\ v \\ \\left(\\cos\\left(\\frac{\\theta}{2}\\right) + \\beta \\sin\\left(\\frac{\\theta}{2}\\right)\\right)"
},
{
"math_id": 71,
"text": "\\beta^2 = 1"
},
{
"math_id": 72,
"text": "v' = \\left(\\cosh\\left(\\frac{\\theta}{2}\\right) - \\beta \\sinh\\left(\\frac{\\theta}{2}\\right)\\right) \\ v \\ \\left(\\cosh\\left(\\frac{\\theta}{2}\\right) + \\beta \\sinh\\left(\\frac{\\theta}{2}\\right)\\right)"
},
{
"math_id": 73,
"text": "A"
},
{
"math_id": 74,
"text": "A I"
},
{
"math_id": 75,
"text": "A^{\\prime} "
},
{
"math_id": 76,
"text": "\\phi"
},
{
"math_id": 77,
"text": "I"
},
{
"math_id": 78,
"text": "A^{\\prime}=e^{I \\phi} A"
},
{
"math_id": 79,
"text": "A_{r} "
},
{
"math_id": 80,
"text": "A^{\\ast}_{r} "
},
{
"math_id": 81,
"text": "A^{\\ast}_{r}=(-1)^{r} \\ A_{r}"
},
{
"math_id": 82,
"text": "a_{1} a_{2} \\ldots a_{r-1}a_{r} "
},
{
"math_id": 83,
"text": "A^{\\dagger}"
},
{
"math_id": 84,
"text": "A=a_{1} a_{2} \\ldots a_{r-1} a_{r}, \\quad A^{\\dagger}=a_{r} a_{r-1 }\\ldots a_{2} a_{1}"
},
{
"math_id": 85,
"text": "\\tilde{A}"
},
{
"math_id": 86,
"text": "\\tilde{A}=A^{\\ast \\dagger}"
},
{
"math_id": 87,
"text": "F = \\vec{E} + I c \\vec{B} ,"
},
{
"math_id": 88,
"text": "E"
},
{
"math_id": 89,
"text": "B"
},
{
"math_id": 90,
"text": "I"
},
{
"math_id": 91,
"text": "F"
},
{
"math_id": 92,
"text": "F = E^i \\sigma_i + I c B^i \\sigma_i = E^1 \\gamma_1 \\gamma_0 + E^2 \\gamma_2 \\gamma_0 + E^3 \\gamma_3 \\gamma_0 - c B^1 \\gamma_2 \\gamma_3 - c B^2 \\gamma_3 \\gamma_1 - c B^3 \\gamma_1 \\gamma_2 ."
},
{
"math_id": 93,
"text": "\\vec E"
},
{
"math_id": 94,
"text": "\\vec B"
},
{
"math_id": 95,
"text": "\\begin{align}\n E = \\frac{1}{2}\\left(F - \\gamma_0 F \\gamma_0\\right), \\\\\n I c B = \\frac{1}{2}\\left(F + \\gamma_0 F \\gamma_0\\right) .\n\\end{align}"
},
{
"math_id": 96,
"text": "F^2 = E^2 - c^2 B^2 + 2 I c \\vec{E} \\cdot \\vec{B} ."
},
{
"math_id": 97,
"text": "J"
},
{
"math_id": 98,
"text": "J = c \\rho \\gamma_0 + J^i \\gamma_i , "
},
{
"math_id": 99,
"text": "J^i"
},
{
"math_id": 100,
"text": " \\nabla F = \\mu_0 c J"
},
{
"math_id": 101,
"text": "0"
},
{
"math_id": 102,
"text": "\\begin{align}\n\\nabla \\cdot \\left[\\nabla F\\right] &= \\nabla \\cdot \\left[\\mu_0 c J\\right] \\\\\n0 &= \\nabla \\cdot J .\n\\end{align}"
},
{
"math_id": 103,
"text": " \\mathcal F = q F \\cdot v "
},
{
"math_id": 104,
"text": "A"
},
{
"math_id": 105,
"text": "A = \\frac{\\phi}{c} \\gamma_0 + A^k \\gamma_k"
},
{
"math_id": 106,
"text": "\\phi"
},
{
"math_id": 107,
"text": "A^k"
},
{
"math_id": 108,
"text": "\\frac{1}{c} F = \\nabla \\wedge A ."
},
{
"math_id": 109,
"text": "\\Lambda(\\vec x)"
},
{
"math_id": 110,
"text": "A' = A + \\nabla \\Lambda"
},
{
"math_id": 111,
"text": "\n\\nabla \\wedge \\left(A + \\nabla \\Lambda\\right)\n= \\nabla \\wedge A + \\nabla \\wedge \\nabla \\Lambda\n= \\nabla \\wedge A .\n"
},
{
"math_id": 112,
"text": "\\Lambda"
},
{
"math_id": 113,
"text": "\\nabla \\cdot \\vec{A} = 0"
},
{
"math_id": 114,
"text": "\\begin{align}\n\\frac{1}{c} \\nabla F &= \\nabla \\left(\\nabla \\wedge A\\right) \\\\\n &= \\nabla \\cdot \\left(\\nabla \\wedge A\\right) + \\nabla \\wedge \\left(\\nabla \\wedge A\\right) \\\\\n &= \\nabla^2 A + \\left(\\nabla \\wedge \\nabla\\right) A = \\nabla^2 A + 0\\\\\n &= \\nabla^2 A\n\\end{align}"
},
{
"math_id": 115,
"text": " \\nabla^2 A = \\mu_0 J "
},
{
"math_id": 116,
"text": " \\mathcal L = \\frac{1}{2} \\epsilon_0 F^2 - J \\cdot A "
},
{
"math_id": 117,
"text": "\\nabla \\frac{\\partial \\mathcal L}{\\partial \\left(\\nabla A\\right)} - \\frac{\\partial \\mathcal L}{\\partial A} = 0."
},
{
"math_id": 118,
"text": "\\nabla \\cdot A = 0."
},
{
"math_id": 119,
"text": "\\nabla \\wedge A = \\nabla A"
},
{
"math_id": 120,
"text": "F = c \\nabla A"
},
{
"math_id": 121,
"text": "i \\hbar \\, \\partial_t \\Psi = H_S \\Psi - \\frac{e \\hbar}{2mc} \\, \\hat\\sigma \\cdot \\mathbf{B} \\Psi ,"
},
{
"math_id": 122,
"text": "\\Psi"
},
{
"math_id": 123,
"text": "i"
},
{
"math_id": 124,
"text": "\\hat\\sigma_i"
},
{
"math_id": 125,
"text": "\\hat\\sigma"
},
{
"math_id": 126,
"text": "H_S"
},
{
"math_id": 127,
"text": " | \\psi \\rangle"
},
{
"math_id": 128,
"text": "\\psi"
},
{
"math_id": 129,
"text": "\\mathbf{\\sigma_1 , \\sigma_2 , \\sigma_3}"
},
{
"math_id": 130,
"text": "I = \\sigma_1 \\sigma_2 \\sigma_3"
},
{
"math_id": 131,
"text": "\n| \\psi \\rangle =\n\\begin{vmatrix}\n\\operatorname{cos(\\theta/2) \\ e^{-i \\phi/2}} \\\\\n\\operatorname{sin(\\theta/2) \\ e^{+i \\phi/2}}\n\\end{vmatrix} =\n\\begin{vmatrix}\na^{0}+ia^{3} \\\\\n-a^{2}+ia^{1}\n\\end{vmatrix} \\mapsto \\psi = a^{0}+a^{1} \\mathbf{I \\sigma_{1}}+a^{2} \\mathbf{I \\sigma_{2}}+a^{3} \\mathbf{I \\sigma_{3}}\n"
},
{
"math_id": 132,
"text": "\\partial_t \\psi \\, I \\sigma_3 \\, \\hbar = H_S \\psi - \\frac{e \\hbar}{2mc} \\, \\mathbf{B} \\psi \\sigma_3 ,"
},
{
"math_id": 133,
"text": "\\psi"
},
{
"math_id": 134,
"text": "\\sigma_{3}"
},
{
"math_id": 135,
"text": "\\sigma^{\\prime}_{3}"
},
{
"math_id": 136,
"text": "\\hat \\gamma^\\mu (i \\partial_\\mu - e \\mathbf{A}_\\mu) |\\psi\\rangle = m |\\psi\\rangle ,"
},
{
"math_id": 137,
"text": "\\hat\\gamma"
},
{
"math_id": 138,
"text": "i"
},
{
"math_id": 139,
"text": "| \\psi_{U} \\rangle "
},
{
"math_id": 140,
"text": " | \\psi_{L} \\rangle"
},
{
"math_id": 141,
"text": "| \\psi \\rangle "
},
{
"math_id": 142,
"text": " \\psi_{U}"
},
{
"math_id": 143,
"text": "\\psi_{L} "
},
{
"math_id": 144,
"text": "\\psi "
},
{
"math_id": 145,
"text": "\n|\\psi \\rangle =\n\\begin{vmatrix}\n| \\psi_{U} \\rangle \\\\\n| \\psi_{L} \\rangle\n\\end{vmatrix} \\mapsto \\psi = \\psi_{U} + \\psi_{L} \\mathbf{\\sigma_{3}}\n"
},
{
"math_id": 146,
"text": "\\nabla \\psi \\, I \\sigma_3 - e \\mathbf{A} \\psi = m \\psi \\gamma_0"
},
{
"math_id": 147,
"text": "I \\sigma_3"
},
{
"math_id": 148,
"text": "\\mathbf{A}"
},
{
"math_id": 149,
"text": "\\nabla = \\gamma^\\mu \\partial_\\mu"
},
{
"math_id": 150,
"text": " \\psi = R (\\rho e^{i \\beta})^\\frac{1}{2}"
},
{
"math_id": 151,
"text": " \\psi = \\psi(x)"
},
{
"math_id": 152,
"text": "R = R(x)"
},
{
"math_id": 153,
"text": " \\rho = \\rho(x)"
},
{
"math_id": 154,
"text": " \\beta = \\beta(x)"
},
{
"math_id": 155,
"text": "R"
},
{
"math_id": 156,
"text": "\\gamma_\\mu"
},
{
"math_id": 157,
"text": "e_\\mu"
},
{
"math_id": 158,
"text": "e_\\mu = R \\gamma_\\mu R^{\\dagger}"
},
{
"math_id": 159,
"text": "R^{\\dagger}"
},
{
"math_id": 160,
"text": " \\psi = e^{i \\Phi_\\lambda / \\hbar} ,"
},
{
"math_id": 161,
"text": "\\Phi_\\lambda"
},
{
"math_id": 162,
"text": "\\lambda"
},
{
"math_id": 163,
"text": "J^\\mu = \\bar{\\psi}\\gamma^\\mu\\psi "
},
{
"math_id": 164,
"text": "\\psi \\mapsto \\psi e^{\\alpha (x) I \\sigma_{3}} , \\quad eA \\mapsto eA- \\nabla \\alpha (x) "
},
{
"math_id": 165,
"text": "\\alpha (x)"
},
{
"math_id": 166,
"text": "x"
},
{
"math_id": 167,
"text": "\\nabla \\alpha(x)"
},
{
"math_id": 168,
"text": "e"
},
{
"math_id": 169,
"text": "(\\hat{P})"
},
{
"math_id": 170,
"text": "(\\hat{C})"
},
{
"math_id": 171,
"text": "(\\hat{T})"
},
{
"math_id": 172,
"text": "\\begin{align} \\hat{P}| \\psi \\rangle &\\mapsto \\gamma_{0} \\psi (\\gamma_{0} x \\gamma_{0}) \\gamma_{0} \\\\\n \\hat{C}| \\psi \\rangle &\\mapsto \\psi \\sigma_{1} \\\\\n \\hat{T}| \\psi \\rangle &\\mapsto I \\gamma_{0} \\psi (\\gamma_{0} x \\gamma_{0}) \\gamma_{1}\n\\end{align} "
},
{
"math_id": 173,
"text": " \\frac{d}{d \\tau} R = \\frac{1}{2} (\\Omega - \\omega) R "
},
{
"math_id": 174,
"text": " D_\\tau = \\partial_\\tau + \\frac{1}{2} \\omega ,"
},
{
"math_id": 175,
"text": "\\omega"
},
{
"math_id": 176,
"text": "\\Omega"
}
] | https://en.wikipedia.org/wiki?curid=10223066 |
10224194 | Kai (conjunction) | Kai ( "and"; ; ; sometimes abbreviated "k") is a letter that is a conjunction in Greek, Coptic (ⲕⲁⲓ) and Esperanto ("kaj"; ).
"Kai" is the most frequent word in any Greek text and thus used by statisticians to assess authorship of ancient manuscripts based on the number of times it is used.
Ligature.
Because of its frequent occurrence, "kai" is sometimes abbreviated in Greek manuscripts and in signage, by a ligature (comparable to Latin &), written as ϗ (uppercase variant Ϗ; Coptic variant ⳤ), formed from kappa (κ) with an extra lower stroke.
It may occur with the varia above it: ϗ̀.
Ϗ ϗ
For representation in electronic texts the kai symbol has its own Unicode positions: GREEK KAI SYMBOL (U+03D7) and GREEK CAPITAL KAI SYMBOL (U+03CF).
Authorship of ancient texts.
The number of occurrences of common words that express a general relation ("and", "in", "but", "I", "to be") is essentially random, with the same distribution at least among texts of the same genre. By contrast, the occurrence of the definite article "the" cannot be modeled by simple probabilistic laws, because the number of nouns with a definite article depends on the subject matter.
Table 1 has data about the epistles of Saint Paul. 2nd Thessalonians, Titus, and Philemon were excluded because they were too short to give reliable samples. From an analysis of these and other data the first 4 epistles (Romans, 1 Corinthians, 2 Corinthians, and Galatians) form a consistent group, and all the other epistles lie more than 2 standard deviations from the mean of this group (using formula_0 statistics).
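As a rough illustration of how such a frequency comparison could be set up, the following Python sketch computes a chi-squared statistic for the rate of "kai" in two text samples. The counts are invented for the example (they are not the data of Table 1); only the shape of the computation is meant to be illustrative.

```python
# Chi-squared test of homogeneity for "kai" rates in several text samples.
# Each sample is (kai_count, total_words); the counts below are hypothetical.
def chi_squared(samples):
    total_kai = sum(k for k, n in samples)
    total_words = sum(n for k, n in samples)
    rate = total_kai / total_words            # common "kai" rate under the null hypothesis
    stat = 0.0
    for k, n in samples:
        for observed, expected in ((k, rate * n), (n - k, (1 - rate) * n)):
            stat += (observed - expected) ** 2 / expected
    return stat

print(chi_squared([(950, 7000), (880, 7100)]))   # hypothetical epistle-sized samples
```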
Esperanto.
The Esperanto word "kaj" comes from Greek.
It may be abbreviated as or (among other places, in the PIV dictionary), or, occasionally, as &. However, a few Esperanto speakers experiment with using the Greek kai character in Esperanto texts. Such a contraction is usually criticised, as the symbol is not internationally recognisable.
References.
<templatestyles src="Reflist/styles.css" />
"This article incorporates material from Econ 7800 class notes by Hans G. Ehrbar, which is licensed under . | [
{
"math_id": 0,
"text": "\\chi^2"
}
] | https://en.wikipedia.org/wiki?curid=10224194 |
10225 | Elliptic curve | Algebraic curve
In mathematics, an elliptic curve is a smooth, projective, algebraic curve of genus one, on which there is a specified point O. An elliptic curve is defined over a field K and describes points in "K"2, the Cartesian product of K with itself. If the field's characteristic is different from 2 and 3, then the curve can be described as a plane algebraic curve which consists of solutions ("x", "y") for:
formula_0
for some coefficients a and b in K. The curve is required to be non-singular, which means that the curve has no cusps or self-intersections. (This is equivalent to the condition 4"a"3 + 27"b"2 ≠ 0, that is, being square-free in x.) It is always understood that the curve is really sitting in the projective plane, with the point O being the unique point at infinity. Many sources define an elliptic curve to be simply a curve given by an equation of this form. (When the coefficient field has characteristic 2 or 3, the above equation is not quite general enough to include all non-singular cubic curves; see below.)
An elliptic curve is an abelian variety – that is, it has a group law defined algebraically, with respect to which it is an abelian group – and O serves as the identity element.
If "y"2 = "P"("x"), where P is any polynomial of degree three in x with no repeated roots, the solution set is a nonsingular plane curve of genus one, an elliptic curve. If P has degree four and is square-free this equation again describes a plane curve of genus one; however, it has no natural choice of identity element. More generally, any algebraic curve of genus one, for example the intersection of two quadric surfaces embedded in three-dimensional projective space, is called an elliptic curve, provided that it is equipped with a marked point to act as the identity.
Using the theory of elliptic functions, it can be shown that elliptic curves defined over the complex numbers correspond to embeddings of the torus into the complex projective plane. The torus is also an abelian group, and this correspondence is also a group isomorphism.
Elliptic curves are especially important in number theory, and constitute a major area of current research; for example, they were used in Andrew Wiles's proof of Fermat's Last Theorem. They also find applications in elliptic curve cryptography (ECC) and integer factorization.
An elliptic curve is "not" an ellipse in the sense of a projective conic, which has genus zero: see elliptic integral for the origin of the term. However, there is a natural representation of real elliptic curves with shape invariant "j" ≥ 1 as ellipses in the hyperbolic plane formula_1. Specifically, the intersections of the Minkowski hyperboloid with quadric surfaces characterized by a certain constant-angle property produce the Steiner ellipses in formula_1 (generated by orientation-preserving collineations). Further, the orthogonal trajectories of these ellipses comprise the elliptic curves with "j" ≤ 1, and any ellipse in formula_1 described as a locus relative to two foci is uniquely the elliptic curve sum of two Steiner ellipses, obtained by adding the pairs of intersections on each orthogonal trajectory. Here, the vertex of the hyperboloid serves as the identity on each trajectory curve.
Topologically, a complex elliptic curve is a torus, while a complex ellipse is a sphere.
Elliptic curves over the real numbers.
Although the formal definition of an elliptic curve requires some background in algebraic geometry, it is possible to describe some features of elliptic curves over the real numbers using only introductory algebra and geometry.
In this context, an elliptic curve is a plane curve defined by an equation of the form
formula_0
after a linear change of variables (a and b are real numbers). This type of equation is called a Weierstrass equation, and said to be in Weierstrass form, or Weierstrass normal form.
The definition of elliptic curve also requires that the curve be non-singular. Geometrically, this means that the graph has no cusps, self-intersections, or isolated points. Algebraically, this holds if and only if the discriminant, formula_2, is not equal to zero.
formula_3
The real graph of a non-singular curve has "two" components if its discriminant is positive, and "one" component if it is negative. For example, in the graphs shown in figure to the right, the discriminant in the first case is 64, and in the second case is −368.
The group law.
When working in the projective plane, the equation in homogeneous coordinates becomes :
formula_4
This equation is not defined on the line at infinity, but we can multiply by formula_5 to get one that is :
formula_6
This resulting equation is defined on the whole projective plane, and the curve it defines projects onto the elliptic curve of interest. To find its intersection with the line at infinity, we can just posit formula_7. This implies formula_8, which in a field means formula_9. formula_10, on the other hand, can take any value; thus all triplets formula_11 satisfy the equation. In projective geometry this set is simply the point formula_12, which is thus the unique intersection of the curve with the line at infinity.
Since the curve is smooth, hence continuous, it can be shown that this point at infinity is the identity element of a group structure whose operation is geometrically described as follows:
Since the curve is symmetric about the x-axis, given any point P, we can take −"P" to be the point opposite it. We then have formula_13, since formula_14 lies on the XZ-plane, so that formula_15 is also the reflection of formula_14 through the origin, and thus represents the same projective point.
If P and Q are two points on the curve, then we can uniquely describe a third point "P" + "Q" in the following way. First, draw the line that intersects P and Q. This will generally intersect the cubic at a third point, R. We then take "P" + "Q" to be −"R", the point opposite R.
This definition for addition works except in a few special cases related to the point at infinity and intersection multiplicity. The first is when one of the points is O. Here, we define "P" + "O" = "P" = "O" + "P", making O the identity of the group. If "P" = "Q" we only have one point, so we cannot define the line between them. In this case, we use the tangent line to the curve at this point as our line. In most cases, the tangent will intersect the curve at a second point R, and we can take its opposite. If P and Q are opposites of each other, we define "P" + "Q" = "O". Lastly, if P is an inflection point (a point where the concavity of the curve changes), we take R to be P itself, and "P" + "P" is simply the point opposite P.
Let K be a field over which the curve is defined (that is, the coefficients of the defining equation or equations of the curve are in K) and denote the curve by E. Then the K-rational points of E are the points on E whose coordinates all lie in K, including the point at infinity. The set of K-rational points is denoted by "E"("K"). "E"("K") is a group, because properties of polynomial equations show that if P is in "E"("K"), then −"P" is also in "E"("K"), and if two of P, Q, R are in "E"("K"), then so is the third. Additionally, if K is a subfield of L, then "E"("K") is a subgroup of "E"("L").
Algebraic interpretation.
The above groups can be described algebraically as well as geometrically. Given the curve "y"2 = "x"3 + "bx" + "c" over the field K (whose characteristic we assume to be neither 2 nor 3), and points "P" = ("xP", "yP") and "Q" = ("xQ", "yQ") on the curve, assume first that "xP" ≠ "xQ" (case "1"). Let "y" = "sx" + "d" be the equation of the line that intersects P and Q, which has the following slope:
formula_16
The line and the curve intersect at the points with x-coordinates xP, xQ, and xR, so the two equations have identical y values at these x values.
formula_17
which is equivalent to
formula_18
Since xP, xQ, and xR are solutions, this equation has its roots at exactly the same x values as
formula_19
and because both equations are cubics they must be the same polynomial up to a scalar. Then equating the coefficients of "x"2 in both equations
formula_20
and solving for the unknown xR.
formula_21
yR follows from the line equation
formula_22
and this is an element of K, because s is.
If "xP" = "xQ", then there are two options: if "yP" = −"yQ" (case "3"), including the case where "yP" = "yQ" = 0 (case "4"), then the sum is defined as 0; thus, the inverse of each point on the curve is found by reflecting it across the x-axis.
If "yP" = "yQ" ≠ 0, then "Q" = "P" and "R" = ("x""R", "y""R") = −("P" + "P") = −2"P" = −2"Q" (case "2" using P as R). The slope is given by the tangent to the curve at ("x""P", "y""P").
formula_23
A more general expression for formula_24 that works in both case 1 and case 2 is
formula_25
where equality to ("yP" − "yQ")/("xP" − "xQ") relies on P and Q obeying "y"2 = "x"3 + "bx" + "c".
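The following Python sketch implements the chord-and-tangent formulas above for a curve "y"2 = "x"3 + "bx" + "c" with exact rational arithmetic; the curve and the sample points used at the end are illustrative choices, not part of the general theory.

```python
from fractions import Fraction

# Chord-and-tangent addition on y^2 = x^3 + b*x + c.  Points are (x, y) tuples,
# None stands for the point at infinity O, and exact Fractions avoid rounding.
def add(P, Q, b, c):
    if P is None:
        return Q                               # O + Q = Q
    if Q is None:
        return P                               # P + O = P
    xP, yP = P
    xQ, yQ = Q
    if xP == xQ and yP == -yQ:
        return None                            # opposite points: P + (-P) = O
    if P == Q:
        s = (3 * xP * xP + b) / (2 * yP)       # tangent slope (case 2)
    else:
        s = (yP - yQ) / (xP - xQ)              # secant slope (case 1)
    xR = s * s - xP - xQ
    yR = yP - s * (xP - xR)                    # third intersection point R of the line
    return (xR, -yR)                           # P + Q is the reflection of R in the x-axis

def on_curve(P, b, c):
    if P is None:
        return True
    x, y = P
    return y * y == x ** 3 + b * x + c

# Example on Ljunggren's curve y^2 = x^3 - 2x with P = (-1, 1), Q = (2, 2).
b, c = Fraction(-2), Fraction(0)
P, Q = (Fraction(-1), Fraction(1)), (Fraction(2), Fraction(2))
S = add(P, Q, b, c)
print(S)                                        # (-8/9, -28/27)
assert on_curve(S, b, c) and on_curve(add(P, P, b, c), b, c)
```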
Non-Weierstrass curves.
For the curve "y"2 = "x"3 + "ax"2 + "bx" + "c" (the general form of an elliptic curve with characteristic 3), the formulas are similar, with "s" = ("yP" − "yQ")/("xP" − "xQ") and "xR" = "s"2 − "a" − "xP" − "xQ".
For a general cubic curve not in Weierstrass normal form, we can still define a group structure by designating one of its nine inflection points as the identity O. In the projective plane, each line will intersect a cubic at three points when accounting for multiplicity. For a point P, −"P" is defined as the unique third point on the line passing through O and P. Then, for any P and Q, "P" + "Q" is defined as −"R" where R is the unique third point on the line containing P and Q.
For an example of the group law over a non-Weierstrass curve, see Hessian curves.
Elliptic curves over the rational numbers.
A curve "E" defined over the field of rational numbers is also defined over the field of real numbers. Therefore, the law of addition (of points with real coordinates) by the tangent and secant method can be applied to "E". The explicit formulae show that the sum of two points "P" and "Q" with rational coordinates has again rational coordinates, since the line joining "P" and "Q" has rational coefficients. This way, one shows that the set of rational points of "E" forms a subgroup of the group of real points of "E".
Integral points.
This section is concerned with points "P" = ("x", "y") of "E" such that "x" is an integer.
For example, the equation "y"2 = "x"3 + 17 has eight integral solutions with "y" > 0:
("x", "y") = (−2, 3), (−1, 4), (2, 5), (4, 9), (8, 23), (43, 282), (52, 375), (, ).
As another example, Ljunggren's equation, a curve whose Weierstrass form is "y"2 = "x"3 − 2"x", has only four solutions with "y" ≥ 0 :
("x", "y") = (0, 0), (−1, 1), (2, 2), (338, ).
The structure of rational points.
Rational points can be constructed by the method of tangents and secants detailed above, starting with a "finite" number of rational points. More precisely the Mordell–Weil theorem states that the group "E"(Q) is a finitely generated (abelian) group. By the fundamental theorem of finitely generated abelian groups it is therefore a finite direct sum of copies of Z and finite cyclic groups.
The proof of the theorem involves two parts. The first part shows that for any integer "m" > 1, the quotient group "E"(Q)/"mE"(Q) is finite (this is the weak Mordell–Weil theorem). Second, introducing a height function "h" on the rational points "E"(Q) defined by "h"("P"0) = 0 and "h"("P")
log max(|"p"|, |"q"|) if "P" (unequal to the point at infinity "P"0) has as abscissa the rational number "x" = "p"/"q" (with coprime "p" and "q"). This height function "h" has the property that "h"("mP") grows roughly like the square of "m". Moreover, only finitely many rational points with height smaller than any constant exist on "E".
The proof of the theorem is thus a variant of the method of infinite descent and relies on the repeated application of Euclidean divisions on "E": let "P" ∈ "E"(Q) be a rational point on the curve, writing "P" as the sum 2"P"1 + "Q"1 where "Q"1 is a fixed representant of "P" in "E"(Q)/2"E"(Q), the height of "P"1 is about 1/4 of the one of "P" (more generally, replacing 2 by any "m" > 1, and 1/4 by 1/"m"2). Redoing the same with "P"1, that is to say "P"1 = 2"P"2 + "Q"2, then "P"2 = 2"P"3 + "Q"3, etc. finally expresses "P" as an integral linear combination of points "Qi" and of points whose height is bounded by a fixed constant chosen in advance: by the weak Mordell–Weil theorem and the second property of the height function "P" is thus expressed as an integral linear combination of a finite number of fixed points.
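The quadratic growth of the height under duplication can be observed numerically. The sketch below is an illustrative computation, using the curve "y"2 = "x"3 − 2"x" and the starting point (−1, 1): it repeatedly doubles a point with the tangent formulas and prints the naive heights, whose successive ratios are roughly 4.

```python
from fractions import Fraction
from math import log

b = Fraction(-2)                       # curve y^2 = x^3 - 2x

def double(P):                         # tangent-line doubling, as in the group law
    x, y = P
    s = (3 * x * x + b) / (2 * y)
    xR = s * s - 2 * x
    yR = y - s * (x - xR)
    return (xR, -yR)

def naive_height(P):                   # h(P) = log max(|p|, |q|) with x = p/q reduced
    x = P[0]
    return log(max(abs(x.numerator), abs(x.denominator)))

Q = double((Fraction(-1), Fraction(1)))            # start from 2P, since h(P) = 0 here
heights = []
for _ in range(4):                     # heights of 2P, 4P, 8P, 16P
    heights.append(naive_height(Q))
    Q = double(Q)
print([round(h, 2) for h in heights])
print([round(heights[i + 1] / heights[i], 2) for i in range(3)])   # ratios near 4
```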
The theorem however doesn't provide a method to determine any representatives of "E"(Q)/"mE"(Q).
The rank of "E"(Q), that is the number of copies of Z in "E"(Q) or, equivalently, the number of independent points of infinite order, is called the "rank" of "E". The Birch and Swinnerton-Dyer conjecture is concerned with determining the rank. One conjectures that it can be arbitrarily large, even if only examples with relatively small rank are known. The elliptic curve with the currently largest exactly-known rank is
"y"2 + "xy" + "y" = "x"3 − "x"2 − "x" +
It has rank 20, found by Noam Elkies and Zev Klagsbrun in 2020. Curves of rank higher than 20 have been known since 1994, with lower bounds on their ranks ranging from 21 to 29, but their exact ranks are not known and in particular it is not proven which of them have higher rank than the others or which is the true "current champion".
As for the groups constituting the torsion subgroup of "E"(Q), the following is known: the torsion subgroup of "E"(Q) is one of the 15 following groups (a theorem due to Barry Mazur): Z/"N"Z for "N" = 1, 2, ..., 10, or 12, or Z/2Z × Z/2"N"Z with "N" = 1, 2, 3, 4. Examples for every case are known. Moreover, elliptic curves whose Mordell–Weil groups over Q have the same torsion groups belong to a parametrized family.
The Birch and Swinnerton-Dyer conjecture.
The "Birch and Swinnerton-Dyer conjecture" (BSD) is one of the Millennium problems of the Clay Mathematics Institute. The conjecture relies on analytic and arithmetic objects defined by the elliptic curve in question.
At the analytic side, an important ingredient is a function of a complex variable, "L", the Hasse–Weil zeta function of "E" over Q. This function is a variant of the Riemann zeta function and Dirichlet L-functions. It is defined as an Euler product, with one factor for every prime number "p".
For a curve "E" over Q given by a minimal equation
formula_26
with integral coefficients formula_27, reducing the coefficients modulo "p" defines an elliptic curve over the finite field F"p" (except for a finite number of primes "p", where the reduced curve has a singularity and thus fails to be elliptic, in which case "E" is said to be of bad reduction at "p").
The zeta function of an elliptic curve over a finite field F"p" is, in some sense, a generating function assembling the information of the number of points of "E" with values in the finite field extensions F"pn" of F"p". It is given by
formula_28
The interior sum of the exponential resembles the development of the logarithm and, in fact, the so-defined zeta function is a rational function in "T":
formula_29
where the 'trace of Frobenius' term formula_30 is defined to be the difference between the 'expected' number formula_31 and the number of points on the elliptic curve formula_32 over formula_33, viz.
formula_34
or equivalently,
formula_35.
We may define the same quantities and functions over an arbitrary finite field of characteristic formula_36, with formula_37 replacing formula_36 everywhere.
The of "E" over Q is then defined by collecting this information together, for all primes "p". It is defined by
formula_38
where "N" is the conductor of "E", i.e. the product of primes with bad reduction, in which case "ap" is defined differently from the method above: see Silverman (1986) below.
This product converges for Re("s") > 3/2 only. Hasse's conjecture affirms that the "L"-function admits an analytic continuation to the whole complex plane and satisfies a functional equation relating, for any "s", "L"("E", "s") to "L"("E", 2 − "s"). In 1999 this was shown to be a consequence of the proof of the Shimura–Taniyama–Weil conjecture, which asserts that every elliptic curve over "Q" is a modular curve, which implies that its "L"-function is the "L"-function of a modular form whose analytic continuation is known. One can therefore speak about the values of "L"("E", "s") at any complex number "s".
At "s=1" (the conductor product can be discarded as it is finite), the L-function becomes
formula_39
The "Birch and Swinnerton-Dyer conjecture" relates the arithmetic of the curve to the behaviour of this "L"-function at "s" = 1. It affirms that the vanishing order of the "L"-function at "s" = 1 equals the rank of "E" and predicts the leading term of the Laurent series of "L"("E", "s") at that point in terms of several quantities attached to the elliptic curve.
Much like the Riemann hypothesis, the truth of the BSD conjecture would have multiple consequences, including the following two:
Elliptic curves over finite fields.
Let "K" = F"q" be the finite field with "q" elements and "E" an elliptic curve defined over "K". While the precise number of rational points of an elliptic curve "E" over "K" is in general difficult to compute, Hasse's theorem on elliptic curves gives the following inequality:
formula_44
In other words, the number of points on the curve grows proportionally to the number of elements in the field. This fact can be understood and proven with the help of some general theory; see local zeta function and étale cohomology for example.
The set of points "E"(F"q") is a finite abelian group. It is always cyclic or the product of two cyclic groups; which of the two occurs depends on the curve. For example, the curve defined by
formula_45
over F71 has 72 points (71 affine points including (0,0) and one point at infinity) over this field, whose group structure is given by Z/2Z × Z/36Z. The number of points on a specific curve can be computed with Schoof's algorithm.
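A naive count in Python confirms the figure quoted for this curve; the Hasse bound and the presence of full 2-torsion (which rules out a cyclic group) can be checked at the same time. This is only a brute-force illustration; for large fields one would use Schoof's algorithm as noted above.

```python
# Brute-force count of points on E: y^2 = x^3 - x over F_71.
p = 71
square_counts = {}
for y in range(p):
    square_counts[y * y % p] = square_counts.get(y * y % p, 0) + 1

n_points = 1                                     # the point at infinity
for x in range(p):
    n_points += square_counts.get((x ** 3 - x) % p, 0)

print(n_points)                                  # 72
assert (n_points - (p + 1)) ** 2 <= 4 * p        # Hasse: |N - (p + 1)| <= 2*sqrt(p)

two_torsion = 1 + sum(1 for x in range(p) if (x ** 3 - x) % p == 0)
print(two_torsion)       # 4: full 2-torsion, so E(F_71) cannot be cyclic
```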
Studying the curve over the field extensions of F"q" is facilitated by the introduction of the local zeta function of "E" over F"q", defined by a generating series (also see above)
formula_46
where the field "Kn" is the (unique up to isomorphism) extension of "K" = F"q" of degree "n" (that is, F"qn").
The zeta function is a rational function in "T". To see this, consider the integer formula_47 such that
formula_48
has an associated complex number formula_49 such that
formula_50
where formula_51 is the complex conjugate. We choose formula_49 so that its absolute value is formula_52, that is formula_53, and that formula_54, so that formula_55 and formula_56, or in other words, formula_57.
formula_49 can then be used in the local zeta function as its values when raised to the various powers of n can be said to reasonably approximate the behaviour of formula_47.
formula_58
Then formula_59, so finally
formula_60
For example, the zeta function of "E" : "y"2 + "y" = "x"3 over the field F2 is given by
formula_61
which follows from:
formula_62
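A small Python sketch illustrates how the single count #E(F2) = 3 already determines the counts over every extension: with a = q + 1 − #E(Fq), the traces tr = αr + ᾱr satisfy t0 = 2, t1 = a and tr = a·tr−1 − q·tr−2 (since α + ᾱ = a and αᾱ = q), and #E(Fqr) = qr + 1 − tr. The predictions agree with the case distinction quoted above; the curve is the same example "y"2 + "y" = "x"3 over F2.

```python
# Point counts of E: y^2 + y = x^3 over the extensions F_{2^r}, computed only from
# #E(F_2) = 3.  With a = q + 1 - #E(F_q), the traces t_r = alpha^r + conj(alpha)^r
# satisfy t_0 = 2, t_1 = a, t_r = a*t_{r-1} - q*t_{r-2}, and #E(F_{q^r}) = q^r + 1 - t_r.
q = 2
a = q + 1 - 3            # a = 0 for this curve

def extension_counts(q, a, r_max):
    t = [2, a]
    for r in range(2, r_max + 1):
        t.append(a * t[r - 1] - q * t[r - 2])
    return [q ** r + 1 - t[r] for r in range(1, r_max + 1)]

predicted = extension_counts(q, a, 6)
closed_form = [2 ** r + 1 if r % 2 else 2 ** r + 1 - 2 * (-2) ** (r // 2)
               for r in range(1, 7)]
print(predicted)                  # [3, 9, 9, 9, 33, 81]
assert predicted == closed_form   # matches the case distinction quoted above
```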
The functional equation is
formula_63
As we are only interested in the behaviour of formula_47, we can use a reduced zeta function
formula_64
formula_65
and so
formula_66
which leads directly to the local L-functions
formula_67
The Sato–Tate conjecture is a statement about how the error term formula_68 in Hasse's theorem varies with the different primes "q", if an elliptic curve E over Q is reduced modulo q. It was proven (for almost all such curves) in 2006 due to the results of Taylor, Harris and Shepherd-Barron, and says that the error terms are equidistributed.
Elliptic curves over finite fields are notably applied in cryptography and for the factorization of large integers. These algorithms often make use of the group structure on the points of "E". Algorithms that are applicable to general groups, for example the group of invertible elements in finite fields, F*"q", can thus be applied to the group of points on an elliptic curve. For example, the discrete logarithm is such an algorithm. The interest in this is that choosing an elliptic curve allows for more flexibility than choosing "q" (and thus the group of units in F"q"). Also, the group structure of elliptic curves is generally more complicated.
Elliptic curves over a general field.
Elliptic curves can be defined over any field "K"; the formal definition of an elliptic curve is a non-singular projective algebraic curve over "K" with genus 1 and endowed with a distinguished point defined over "K".
If the characteristic of "K" is neither 2 nor 3, then every elliptic curve over "K" can be written in the form
formula_69
after a linear change of variables. Here "p" and "q" are elements of "K" such that the right hand side polynomial "x"3 − "px" − "q" does not have any double roots. If the characteristic is 2 or 3, then more terms need to be kept: in characteristic 3, the most general equation is of the form
formula_70
for arbitrary constants "b"2, "b"4, "b"6 such that the polynomial on the right-hand side has distinct roots (the notation is chosen for historical reasons). In characteristic 2, even this much is not possible, and the most general equation is
formula_71
provided that the variety it defines is non-singular. If characteristic were not an obstruction, each equation would reduce to the previous ones by a suitable linear change of variables.
One typically takes the curve to be the set of all points ("x","y") which satisfy the above equation and such that both "x" and "y" are elements of the algebraic closure of "K". Points of the curve whose coordinates both belong to "K" are called "K"-rational points.
Many of the preceding results remain valid when the field of definition of "E" is a number field "K", that is to say, a finite field extension of Q. In particular, the group "E(K)" of "K"-rational points of an elliptic curve "E" defined over "K" is finitely generated, which generalizes the Mordell–Weil theorem above. A theorem due to Loïc Merel shows that for a given integer "d", there are (up to isomorphism) only finitely many groups that can occur as the torsion groups of "E"("K") for an elliptic curve defined over a number field "K" of degree "d". More precisely, there is a number "B"("d") such that for any elliptic curve "E" defined over a number field "K" of degree "d", any torsion point of "E"("K") is of order less than "B"("d"). The theorem is effective: for "d" > 1, if a torsion point is of order "p", with "p" prime, then
formula_72
As for the integral points, Siegel's theorem generalizes to the following: Let "E" be an elliptic curve defined over a number field "K", "x" and "y" the Weierstrass coordinates. Then there are only finitely many points of "E(K)" whose "x"-coordinate is in the ring of integers "O""K".
The properties of the Hasse–Weil zeta function and the Birch and Swinnerton-Dyer conjecture can also be extended to this more general situation.
Elliptic curves over the complex numbers.
The formulation of elliptic curves as the embedding of a torus in the complex projective plane follows naturally from a curious property of Weierstrass's elliptic functions. These functions and their first derivative are related by the formula
formula_73
Here, "g"2 and "g"3 are constants; ℘("z") is the Weierstrass elliptic function and its derivative. It should be clear that this relation is in the form of an elliptic curve (over the complex numbers). The Weierstrass functions are doubly periodic; that is, they are periodic with respect to a lattice Λ; in essence, the Weierstrass functions are naturally defined on a torus "T" = C/Λ. This torus may be embedded in the complex projective plane by means of the map
formula_74
This map is a group isomorphism of the torus (considered with its natural group structure) with the chord-and-tangent group law on the cubic curve which is the image of this map. It is also an isomorphism of Riemann surfaces from the torus to the cubic curve, so topologically, an elliptic curve is a torus. If the lattice Λ is related by multiplication by a non-zero complex number c to a lattice "c"Λ, then the corresponding curves are isomorphic. Isomorphism classes of elliptic curves are specified by the j-invariant.
The isomorphism classes can be understood in a simpler way as well. The constants "g"2 and "g"3, called the modular invariants, are uniquely determined by the lattice, that is, by the structure of the torus. However, all real polynomials factorize completely into linear factors over the complex numbers, since the field of complex numbers is the algebraic closure of the reals. So, the elliptic curve may be written as
formula_75
One finds that
formula_76
and
formula_77
with j-invariant "j"("τ"); here "λ"("τ") is sometimes called the modular lambda function. For example, let "τ" = 2"i", then "λ"(2"i") = (−1 + √2)4, which implies that "g"′2, "g"′3, and therefore ("g"′2)3 − 27("g"′3)2 in the formula above are all algebraic numbers if τ involves an imaginary quadratic field. In fact, it yields the integer "j"(2"i") = 663 = 287496.
In contrast, the modular discriminant
formula_78
is generally a transcendental number. In particular, the value of the Dedekind eta function "η"(2"i") is
formula_79
Note that the uniformization theorem implies that every compact Riemann surface of genus one can be represented as a torus. This also allows an easy understanding of the torsion points on an elliptic curve: if the lattice Λ is spanned by the fundamental periods "ω"1 and "ω"2, then the n-torsion points are the (equivalence classes of) points of the form
formula_80
for integers a and b in the range 0 ≤ ("a", "b") < "n".
If
formula_81
is an elliptic curve over the complex numbers and
formula_82
then a pair of fundamental periods of E can be calculated very rapidly by
formula_83
M("w", "z") is the arithmetic–geometric mean of w and z. At each step of the arithmetic–geometric mean iteration, the signs of zn arising from the ambiguity of geometric mean iterations are chosen such that where wn and zn denote the individual arithmetic mean and geometric mean iterations of w and z, respectively. When |"wn" − "zn"| = |"wn" + "zn"|, there is an additional condition that Im() > 0.
Over the complex numbers, every elliptic curve has nine inflection points. Every line through two of these points also passes through a third inflection point; the nine points and 12 lines formed in this way form a realization of the Hesse configuration.
The Dual Isogeny.
Given an isogeny
formula_84
of elliptic curves of degree formula_85, the dual isogeny is an isogeny
formula_86
of the same degree such that
formula_87
Here formula_88 denotes the multiplication-by-formula_85 isogeny formula_89 which has degree formula_90
Construction of the Dual Isogeny.
Often only the existence of a dual isogeny is needed, but it can be explicitly given as the composition
formula_91
where formula_92 is the group of divisors of degree 0. To do this, we need maps formula_93 given by formula_94 where formula_14 is the neutral point of formula_32 and formula_95 given by formula_96
To see that formula_97, note that the original isogeny formula_98 can be written as a composite
formula_99
and that since formula_98 is of degree formula_85, formula_100 is multiplication by formula_85 on formula_101
Alternatively, we can use the smaller Picard group formula_102, a quotient of formula_103 The map formula_104 descends to an isomorphism, formula_105 The dual isogeny is
formula_106
Note that the relation formula_97 also implies the conjugate relation formula_107 Indeed, let formula_108 Then formula_109 But formula_110 is surjective, so we must have formula_111
Algorithms that use elliptic curves.
Elliptic curves over finite fields are used in some cryptographic applications as well as for integer factorization. Typically, the general idea in these applications is that a known algorithm which makes use of certain finite groups is rewritten to use the groups of rational points of elliptic curves. For more see also:
Notes.
<templatestyles src="Reflist/styles.css" />
References.
Serge Lang, in the introduction to the book cited below, stated that "It is possible to write endlessly on elliptic curves. (This is not a threat.)" The following short list is thus at best a guide to the vast expository literature available on the theoretical, algorithmic, and cryptographic aspects of elliptic curves.
External links.
"This article incorporates material from Isogeny on PlanetMath, which is licensed under the ." | [
{
"math_id": 0,
"text": "y^2 = x^3 + ax + b"
},
{
"math_id": 1,
"text": "\\mathbb{H}^2"
},
{
"math_id": 2,
"text": "\\Delta"
},
{
"math_id": 3,
"text": "\\Delta = -16\\left(4a^3 + 27b^2\\right) \\neq 0"
},
{
"math_id": 4,
"text": "\\frac{Y^2}{Z^2} =\n\\frac{X^3}{Z^3}\n+a\\frac{X}{Z} + b"
},
{
"math_id": 5,
"text": "Z^3"
},
{
"math_id": 6,
"text": "ZY^2 = X^3 + aZ^2X + bZ^3"
},
{
"math_id": 7,
"text": "Z = 0"
},
{
"math_id": 8,
"text": "X^3 = 0"
},
{
"math_id": 9,
"text": "X = 0"
},
{
"math_id": 10,
"text": "Y"
},
{
"math_id": 11,
"text": "(0,Y,0)"
},
{
"math_id": 12,
"text": "O = [0:1:0]"
},
{
"math_id": 13,
"text": " -O = O"
},
{
"math_id": 14,
"text": "O"
},
{
"math_id": 15,
"text": "-O"
},
{
"math_id": 16,
"text": "s = \\frac{y_P - y_Q}{x_P - x_Q}"
},
{
"math_id": 17,
"text": "\\left(s x + d\\right)^2 = x^3 + bx + c"
},
{
"math_id": 18,
"text": "x^3 - s^2 x^2 - 2sdx + bx + c - d^2 = 0"
},
{
"math_id": 19,
"text": "(x - x_P) (x - x_Q) (x - x_R) = x^3 + (-x_P - x_Q - x_R) x^2 + (x_P x_Q + x_P x_R + x_Q x_R) x - x_P x_Q x_R "
},
{
"math_id": 20,
"text": "-s^2 = (-x_P - x_Q - x_R)"
},
{
"math_id": 21,
"text": "x_R = s^2 - x_P - x_Q"
},
{
"math_id": 22,
"text": "y_R = y_P - s(x_P - x_R)"
},
{
"math_id": 23,
"text": "\\begin{align}\n s &= \\frac{3{x_P}^2 + b}{2y_P}\\\\\n x_R &= s^2 - 2x_P\\\\\n y_R &= y_P - s(x_P - x_R)\n\\end{align}"
},
{
"math_id": 24,
"text": "s"
},
{
"math_id": 25,
"text": "s = \\frac{{x_P}^2 + x_P x_Q + {x_Q}^2 + b}{y_P + y_Q}"
},
{
"math_id": 26,
"text": "y^2 + a_1xy + a_3y = x^3 + a_2x^2 + a_4x + a_6"
},
{
"math_id": 27,
"text": "a_i"
},
{
"math_id": 28,
"text": "Z(E(\\mathbf{F}_p), T) = \\exp\\left(\\sum_{n=1}^\\infty \\# \\left[E({\\mathbf F}_{p^n})\\right]\\frac{T^n}{n}\\right)"
},
{
"math_id": 29,
"text": "Z(E(\\mathbf{F}_p), T) = \\frac{1 - a_pT + pT^2}{(1 - T)(1 - pT)},"
},
{
"math_id": 30,
"text": "a_p"
},
{
"math_id": 31,
"text": "p+1"
},
{
"math_id": 32,
"text": "E"
},
{
"math_id": 33,
"text": "\\mathbb{F}_p"
},
{
"math_id": 34,
"text": "\na_p = p + 1 - \\#E(\\mathbb{F}_p)\n"
},
{
"math_id": 35,
"text": "\n\\#E(\\mathbb{F}_p) = p + 1 - a_p\n"
},
{
"math_id": 36,
"text": "p"
},
{
"math_id": 37,
"text": "q = p^n"
},
{
"math_id": 38,
"text": "L(E(\\mathbf{Q}), s) = \\prod_{p\\not\\mid N} \\left(1 - a_p p^{-s} + p^{1 - 2s}\\right)^{-1} \\cdot \\prod_{p\\mid N} \\left(1 - a_p p^{-s}\\right)^{-1}"
},
{
"math_id": 39,
"text": "L(E(\\mathbf{Q}), 1) = \\prod_{p\\not\\mid N} \\left(1 - a_p p^{-1} + p^{-1}\\right)^{-1} = \\prod_{p\\not\\mid N} \\frac{p}{p - a_p + 1} = \\prod_{p\\not\\mid N}\\frac{p}{\\#E(\\mathbb{F}_p)}"
},
{
"math_id": 40,
"text": "y^2 = x^3 - n^2x"
},
{
"math_id": 41,
"text": "2x^2 + y^2 + 8z^2 = n"
},
{
"math_id": 42,
"text": "2x^2 + y^2 + 32z^2 = n"
},
{
"math_id": 43,
"text": "y^2=x^3+ax+b"
},
{
"math_id": 44,
"text": "|\\# E(K) - (q + 1)| \\le 2\\sqrt{q}"
},
{
"math_id": 45,
"text": "y^2 = x^3 - x"
},
{
"math_id": 46,
"text": "Z(E(K), T) = \\exp \\left(\\sum_{n=1}^{\\infty} \\# \\left[E(K_n)\\right] {T^n\\over n} \\right)"
},
{
"math_id": 47,
"text": "a_n"
},
{
"math_id": 48,
"text": "\\#E(K_n) = 1 - a_n + q^n"
},
{
"math_id": 49,
"text": "\\alpha"
},
{
"math_id": 50,
"text": " 1 - a_n + q^n = 1 - \\alpha^n - \\bar\\alpha^n + q^n"
},
{
"math_id": 51,
"text": "\\bar\\alpha"
},
{
"math_id": 52,
"text": "\\sqrt{q}"
},
{
"math_id": 53,
"text": "\\alpha = q^{\\frac12}e^{i\\theta}, \\bar\\alpha = q^{\\frac12}e^{-i\\theta}"
},
{
"math_id": 54,
"text": "\\cos n\\theta=\\frac{a_n}{2\\sqrt q}"
},
{
"math_id": 55,
"text": "\\alpha^n\\bar\\alpha^n = q^n"
},
{
"math_id": 56,
"text": "\\alpha^n+\\bar\\alpha^n = a_n"
},
{
"math_id": 57,
"text": "(1 - \\alpha^n)(1 - \\bar\\alpha^n) = 1 - a_n + q^n"
},
{
"math_id": 58,
"text": "\n\\begin{alignat}{2} \nZ(E(K),T) & = \\exp \\left(\\sum_{n=1}^{\\infty} \\left(1 - \\alpha^n - \\bar\\alpha^n + q^n\\right){T^n\\over n} \\right) \\\\ \n& = \\exp \\left(\\sum_{n=1}^{\\infty} {T^n\\over n} - \\sum_{n=1}^{\\infty}\\alpha^n{T^n\\over n} - \\sum_{n=1}^{\\infty}\\bar\\alpha^n{T^n\\over n} + \\sum_{n=1}^{\\infty}q^n{T^n\\over n} \\right) \\\\ \n& = \\exp \\left(-\\ln(1-T) + \\ln(1-\\alpha T) + \\ln(1-\\bar\\alpha T) - \\ln(1-qT) \\right) \\\\ \n& = \\exp \\left(\\ln\\frac{(1-\\alpha T)(1-\\bar\\alpha T)}{(1-T)(1-qT)} \\right) \\\\ \n& =\\frac{(1-\\alpha T)(1-\\bar\\alpha T)}{(1-T)(1-qT)} \\\\ \n\\end{alignat}\n"
},
{
"math_id": 59,
"text": "(1 - \\alpha T)(1 - \\bar\\alpha T) = 1 - aT + qT^2"
},
{
"math_id": 60,
"text": "Z(E(K), T) = \\frac{1 - aT + qT^2}{(1 - qT)(1 - T)}"
},
{
"math_id": 61,
"text": "\\frac{1 + 2T^2}{(1 - T)(1 - 2T)}"
},
{
"math_id": 62,
"text": " \\left| E(\\mathbf{F}_{2^r}) \\right| = \\begin{cases} 2^r + 1 & r \\text{ odd} \\\\ 2^r + 1 - 2(-2)^{\\frac{r}{2}} & r \\text{ even} \\end{cases} "
},
{
"math_id": 63,
"text": "Z \\left(E(K), \\frac{1}{qT} \\right) = \\frac{1 - a\\frac{1}{qT} + q\\left(\\frac{1}{qT}\\right)^2}{(1 - q\\frac{1}{qT})(1 - \\frac{1}{qT})}= \\frac{q^2T^2 - aqT + q}{(qT - q)(qT - 1)} = Z(E(K), T)"
},
{
"math_id": 64,
"text": "Z(a, T) = \\exp \\left(\\sum_{n=1}^{\\infty} -a_n {T^n\\over n} \\right)"
},
{
"math_id": 65,
"text": "Z(a, T) = \\exp \\left(\\sum_{n=1}^{\\infty} -\\alpha^n {T^n\\over n} - \\bar\\alpha^n {T^n\\over n} \\right)"
},
{
"math_id": 66,
"text": "Z(a, T) = \\exp \\left(\\ln(1-\\alpha T) + \\ln(1-\\bar\\alpha T)\\right)"
},
{
"math_id": 67,
"text": "L(E(K), T) = 1 - aT + qT^2"
},
{
"math_id": 68,
"text": "2\\sqrt{q}"
},
{
"math_id": 69,
"text": "y^2 = x^3 - px - q"
},
{
"math_id": 70,
"text": "y^2 = 4x^3 + b_2 x^2 + 2b_4 x + b_6"
},
{
"math_id": 71,
"text": "y^2 + a_1 xy + a_3 y = x^3 + a_2 x^2 + a_4 x + a_6"
},
{
"math_id": 72,
"text": "p < d^{3d^2}"
},
{
"math_id": 73,
"text": "\\wp'(z)^2 = 4\\wp(z)^3 -g_2\\wp(z) - g_3"
},
{
"math_id": 74,
"text": "z \\mapsto \\left[1 : \\wp(z) : \\tfrac12\\wp'(z)\\right]"
},
{
"math_id": 75,
"text": "y^2 = x(x - 1)(x - \\lambda)"
},
{
"math_id": 76,
"text": "\\begin{align}\ng_2' &= \\frac{\\sqrt[3]4}{3} \\left(\\lambda^2 - \\lambda + 1\\right) \\\\[4pt]\ng_3' &= \\frac{1}{27} (\\lambda + 1)\\left(2\\lambda^2 - 5\\lambda + 2\\right)\n\\end{align}"
},
{
"math_id": 77,
"text": "j(\\tau) = 1728\\frac{{g_2'}^3}{{g_2'}^3 - 27{g_3'}^2} = 256\\frac{ \\left(\\lambda^2 - \\lambda + 1\\right)^3}{\\lambda^2\\left(\\lambda - 1\\right)^2}"
},
{
"math_id": 78,
"text": "\\Delta(\\tau) = g_2(\\tau)^3 - 27g_3(\\tau)^2 = (2\\pi)^{12}\\,\\eta^{24}(\\tau)"
},
{
"math_id": 79,
"text": "\\eta(2i)=\\frac{\\Gamma \\left(\\frac14\\right)}{2^\\frac{11}{8} \\pi^\\frac34}"
},
{
"math_id": 80,
"text": " \\frac{a}{n} \\omega_1 + \\frac{b}{n} \\omega_2"
},
{
"math_id": 81,
"text": "E : y^2=4(x-e_1)(x-e_2)(x-e_3)"
},
{
"math_id": 82,
"text": "a_0=\\sqrt{e_1-e_3}, \\qquad b_0=\\sqrt{e_1-e_2}, \\qquad c_0=\\sqrt{e_2-e_3},"
},
{
"math_id": 83,
"text": "\\omega_1=\\frac{\\pi}{\\operatorname{M}(a_0,b_0)}, \\qquad \\omega_2=\\frac{\\pi}{\\operatorname{M}(c_0,ib_0)}"
},
{
"math_id": 84,
"text": " f : E \\rightarrow E' "
},
{
"math_id": 85,
"text": "n"
},
{
"math_id": 86,
"text": "\\hat{f} : E' \\rightarrow E"
},
{
"math_id": 87,
"text": "f \\circ \\hat{f} = [n]."
},
{
"math_id": 88,
"text": "[n]"
},
{
"math_id": 89,
"text": "e\\mapsto ne"
},
{
"math_id": 90,
"text": "n^2."
},
{
"math_id": 91,
"text": " E'\\rightarrow \\mbox{Div}^0(E')\\to\\mbox{Div}^0(E)\\rightarrow E\\,"
},
{
"math_id": 92,
"text": "{\\mathrm{Div}}^0"
},
{
"math_id": 93,
"text": "E \\rightarrow {\\mbox{Div}}^0(E)"
},
{
"math_id": 94,
"text": "P\\to P - O"
},
{
"math_id": 95,
"text": "{\\mbox{Div}}^0(E) \\rightarrow E\\,"
},
{
"math_id": 96,
"text": "\\sum n_P P \\to \\sum n_P P."
},
{
"math_id": 97,
"text": "f \\circ \\hat{f} = [n]"
},
{
"math_id": 98,
"text": "f"
},
{
"math_id": 99,
"text": " E \\rightarrow {\\mbox{Div}}^0(E)\\to {\\mbox{Div}}^0(E')\\to E'\\,"
},
{
"math_id": 100,
"text": "f_* f^*"
},
{
"math_id": 101,
"text": "{\\mbox{Div}}^0(E')."
},
{
"math_id": 102,
"text": "{\\mathrm{Pic}}^0"
},
{
"math_id": 103,
"text": "{\\mbox{Div}}^0."
},
{
"math_id": 104,
"text": "E\\rightarrow {\\mbox{Div}}^0(E)"
},
{
"math_id": 105,
"text": "E\\to{\\mbox{Pic}}^0(E)."
},
{
"math_id": 106,
"text": " E' \\to {\\mbox{Pic}}^0(E')\\to {\\mbox{Pic}}^0(E)\\to E\\,"
},
{
"math_id": 107,
"text": "\\hat{f} \\circ f = [n]."
},
{
"math_id": 108,
"text": "\\phi = \\hat{f} \\circ f."
},
{
"math_id": 109,
"text": "\\phi \\circ \\hat{f} = \\hat{f} \\circ [n] = [n] \\circ \\hat{f}."
},
{
"math_id": 110,
"text": "\\hat{f}"
},
{
"math_id": 111,
"text": "\\phi = [n]."
}
] | https://en.wikipedia.org/wiki?curid=10225 |
1022661 | Symbolic Cholesky decomposition | In the mathematical subfield of numerical analysis the symbolic Cholesky decomposition is an algorithm used to determine the non-zero pattern for the formula_0 factors of a symmetric sparse matrix when applying the Cholesky decomposition or variants.
Algorithm.
Let
formula_1
be a sparse symmetric positive definite matrix with elements from a field formula_2, which we wish to factorize as formula_3.
In order to implement an efficient sparse factorization, it has been found necessary to determine the non-zero structure of the factors before doing any numerical work. To write the algorithm down, we use the following notation:
The following algorithm gives an efficient
symbolic factorization of A :
formula_8 | [
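The pseudocode above translates directly into Python. In the sketch below it is assumed, as is standard for this algorithm, that formula_4 denotes the row indices of the non-zero entries in column "i" of the lower triangle of "A" (diagonal included), that formula_5 is the corresponding non-zero pattern of column "j" of the factor, and that formula_7 is the parent function of the elimination tree; the example matrix pattern is an arbitrary illustration.

```python
# Direct transcription of the symbolic factorization above; columns are numbered 1..n.
def symbolic_cholesky(A_pattern):
    n = len(A_pattern)
    L = {}                                    # L[i]: non-zero row pattern of column i of L
    parent = {i: 0 for i in range(1, n + 1)}  # pi(i) = 0 means "no parent yet"
    for i in range(1, n + 1):
        L[i] = set(A_pattern[i])
        for j in range(1, i):
            if parent[j] == i:                # columns j whose elimination-tree parent is i
                L[i] |= L[j] - {j}
        rest = L[i] - {i}
        parent[i] = min(rest) if rest else 0  # the last column has no parent
    return L, parent

# Example pattern (lower triangle, diagonal included) of a 4 x 4 symmetric matrix.
A_pattern = {1: {1, 2, 4}, 2: {2, 3}, 3: {3, 4}, 4: {4}}
L, parent = symbolic_cholesky(A_pattern)
print(L)        # column 2 of L acquires a fill-in entry in row 4
print(parent)   # elimination-tree parents: {1: 2, 2: 3, 3: 4, 4: 0}
```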
{
"math_id": 0,
"text": "L"
},
{
"math_id": 1,
"text": "A=(a_{ij}) \\in \\mathbb{K}^{n \\times n}"
},
{
"math_id": 2,
"text": "\\mathbb{K}"
},
{
"math_id": 3,
"text": "A = LL^T\\,"
},
{
"math_id": 4,
"text": "\\mathcal{A}_i"
},
{
"math_id": 5,
"text": "\\mathcal{L}_j"
},
{
"math_id": 6,
"text": "\\min\\mathcal{L}_j"
},
{
"math_id": 7,
"text": "\\pi(i)\\,\\!"
},
{
"math_id": 8,
"text": "\n\\begin{align}\n& \\pi(i):=0~\\mbox{for all}~i\\\\\n& \\mbox{For}~i:=1~\\mbox{to}~n\\\\\n& \\qquad \\mathcal{L}_i := \\mathcal{A}_i\\\\\n& \\qquad \\mbox{For all}~j~\\mbox{such that}~\\pi(j) = i\\\\\n& \\qquad \\qquad \\mathcal{L}_i := (\\mathcal{L}_i \\cup \\mathcal{L}_j)\\setminus\\{j\\}\\\\\n& \\qquad \\pi(i) := \\min(\\mathcal{L}_i\\setminus\\{i\\})\n\\end{align}\n"
}
] | https://en.wikipedia.org/wiki?curid=1022661 |
10226740 | Free recoil | Term for recoil energy of a firearm not supported from behind
Free recoil / Frecoil is a vernacular term or jargon for recoil energy of a firearm not supported from behind. Free recoil denotes the translational kinetic energy ("Et") imparted to the shooter of a small arm when discharged and is expressed in joules (J), or foot-pound force (ft·lb"f") for non-SI units of measure. More generally, the term refers to the recoil of a free-standing firearm, in contrast to a firearm securely bolted to or braced by a massive mount or wall. Free recoil should not be confused with recoil:
Free recoil and firearms.
Free recoil, sometimes called "recoil energy", is a byproduct of the propulsive force from the powder charge held within a firearm chamber (metallic cartridge firearm) or breech (gunpowder firearm). The physical event of free recoil occurs when a powder charge is deflagrated within a firearm, resulting in the conversion of chemical energy held within the powder charge into thermodynamic energy. This energy is then transferred to the base of the bullet and to the rear of the cartridge or breech, propelling the firearm "rearward" into the shooter while the projectile is propelled "forward" down the barrel, with increasing velocity, to the muzzle. The rearward energy of the firearm is the free recoil and the forward energy of the bullet is the muzzle energy.
The concept of free recoil comes from the "tolerability" of gross recoil energy. Trying to calculate the net recoil energy of a firearm (also known as felt recoil) is a futile endeavor. Even if the recoil energy loss can be calculated, due to:
the factors of human perception are not calculable.
Therefore, free recoil stands as a scientific measurement of recoil energy, just as the room or outside temperature is measured. How much free recoil a shooter can comfortably tolerate is a matter of personal perception, just as it is a matter of personal perception how comfortable he or she feels at a given room or outside temperature.
There are many factors that determine how a shooter will perceive the free recoil of a firearm. Some of the factors are:
Calculating free recoil.
There are several different ways to calculate free recoil. However, the two most common are the momentum "short" and "long" forms. Both forms will yield the same value.
The short form uses one equation, whereas the long form requires two equations. The long form first finds the velocity of the firearm. With the "velocity" known for the small arm, the free recoil of the small arm can be calculated using the translational kinetic energy equation.
Whereas:
"Etgu" is the translational kinetic energy of the small arm as expressed by the joule (J).
mgu is the weight of the small arm expressed in kilograms (kg).
mp is the weight of the projectile expressed in grams (g).
mc is the weight of the powder charge expressed in grams (g).
vgu is the velocity of the small arm expressed in meters per second (m/s).
vp is the velocity of the projectile expressed in meters per second (m/s).
vc is the velocity of the powder charge expressed in meters per second (m/s).
1000 is the conversion factor to set the equation equal to kilograms.
Calculating free recoil using SI units, example.
Small arm: Mauser 1898 action, chambered in 7×57mm Mauser, rifle weighing 4.54 kilograms (10 pounds).
Projectile: of spitzer type, weighing 9.1 grams (140 grains), with a muzzle velocity of 823 meters per second (2,700 feet per second).
Powder charge: single base nitrocellulose, weighing 2.75 grams (42.5 grains), with a powder charge velocity of 1,585 meters per second (5,200 feet per second).
The momentum short form:
formula_3
and with the numeric values in place;
formula_4
formula_5
formula_6
formula_7
formula_8
formula_9
formula_10 of free recoil
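The same arithmetic can be packaged in a few lines of Python; the function below is a sketch of the momentum short form with the units used above (projectile and charge weights in grams, firearm weight in kilograms, velocities in metres per second, result in joules).

```python
# Momentum "short form": E = 0.5 * [(m_p*v_p + m_c*v_c) / 1000]^2 / m_gu
def free_recoil_energy(m_gu_kg, m_p_g, v_p_ms, m_c_g, v_c_ms):
    momentum = (m_p_g * v_p_ms + m_c_g * v_c_ms) / 1000.0   # kg*m/s imparted to the gun
    return 0.5 * momentum ** 2 / m_gu_kg                    # joules

# Worked example from the text: 7x57mm Mauser in a 4.54 kg rifle.
print(round(free_recoil_energy(4.54, 9.1, 823, 2.75, 1585), 2))   # 15.46
```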
Calculating free recoil using non-SI units.
From the momentum long form in both Imperial units of measure and in an English Engineering format:
Whereas:
"Etgu" is the translational kinetic energy of the small arm as expressed by the foot-pound force (ft·lbf).
"m"gu is the weight of the small arm expressed in pounds (lb).
"m"p is the weight of the projectile expressed in grains (gr).
"m"c is the weight of the powder charge expressed in grains (gr).
"v"gu is the velocity of the small arm expressed in feet per second (ft/s).
"v"p is the velocity of the projectile expressed in feet per second (ft/s).
"v"c is the velocity of the powder charge expressed in feet per second (ft/s).
"g"c is the dimensional constant and is the numeral coefficient of 32.1739
7000 is the conversion factor to set the equation equal to pounds.
Calculated free recoil for small arms.
The following free recoil energy table does not take into consideration: recoil suppression devices, or loss of energy due to auto loading mechanism. English units of measure are enclosed in parentheses.
See also.
See physics of firearms for a more detailed discussion.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E_{tgu} = 0.5 \\cdot [\\tfrac {(m_p \\cdot v_p) + (m_c \\cdot v_c)} { 1000 }]^2 / m_{gu}"
},
{
"math_id": 1,
"text": "v_{gu} = \\tfrac {(m_p \\cdot v_p) + (m_c \\cdot v_c)} {1000 \\cdot m_{gu}}"
},
{
"math_id": 2,
"text": "E_{tgu} = 0.5 \\cdot m_{gu} \\cdot v_{gu}^2\\,"
},
{
"math_id": 3,
"text": "E_{tgu} = 0.5 \\cdot [\\tfrac {(m_p \\cdot v_p) + (m_c \\cdot v_c)} { 1000 } ]^2 / m_{gu}"
},
{
"math_id": 4,
"text": "E_{tgu} = 0.5 \\cdot [\\tfrac {(9.1 \\cdot 823) + (2.75 \\cdot 1585)} { 1000 } ]^2 / 4.54 ="
},
{
"math_id": 5,
"text": "E_{tgu} = 0.5 \\cdot [\\tfrac {(7489.3) + (4358.75)} { 1000 } ]^2 / 4.54 ="
},
{
"math_id": 6,
"text": "E_{tgu} = 0.5 \\cdot [\\tfrac {11848.05} { 1000 } ]^2 / 4.54 ="
},
{
"math_id": 7,
"text": "E_{tgu} = 0.5 \\cdot 11.848^2 / 4.54 = \\,"
},
{
"math_id": 8,
"text": "E_{tgu} = 0.5 \\cdot 140.367 / 4.54 = \\,"
},
{
"math_id": 9,
"text": "E_{tgu} = 70.188 / 4.54 = \\,"
},
{
"math_id": 10,
"text": "E_{tgu} = 15.46J \\,"
},
{
"math_id": 11,
"text": "v_{gu} = \\tfrac {(m_p \\cdot v_p) + (m_c \\cdot v_c)} {7000} / m_{gu} "
},
{
"math_id": 12,
"text": "E_{tgu} = \\tfrac {m_{gu} \\cdot v_{gu}^2}{2g_c}\\,"
}
] | https://en.wikipedia.org/wiki?curid=10226740 |
10231968 | $ (disambiguation) | $ is the dollar or peso currency sign (36 in ASCII), primarily used to represent currencies.
$ may also refer to:
<templatestyles src="Template:TOC_right/styles.css" />
Currency.
The sign is used for:
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title $.
{
"math_id": 0,
"text": "\\mathrm{S}\\!\\!\\!\\Vert"
}
] | https://en.wikipedia.org/wiki?curid=10231968 |
1023353 | Burgers' equation | Partial differential equation
Burgers' equation or Bateman–Burgers equation is a fundamental partial differential equation and convection–diffusion equation occurring in various areas of applied mathematics, such as fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow. The equation was first introduced by Harry Bateman in 1915 and later studied by Johannes Martinus Burgers in 1948. For a given field formula_0 and diffusion coefficient (or "kinematic viscosity", as in the original fluid mechanical context) formula_1, the general form of Burgers' equation (also known as viscous Burgers' equation) in one space dimension is the dissipative system:
formula_2
The term formula_3 can also be rewritten as formula_4. When the diffusion term is absent (i.e. formula_5), Burgers' equation becomes the inviscid Burgers' equation:
formula_6
which is a prototype for conservation equations that can develop discontinuities (shock waves).
The reason for the formation of sharp gradients for small values of formula_1 becomes intuitively clear when one examines the left-hand side of the equation. The term formula_7 is evidently a wave operator describing a wave propagating in the positive formula_8-direction with a speed formula_9. Since the wave speed is formula_9, regions exhibiting large values of formula_9 will be propagated rightwards more quickly than regions exhibiting smaller values of formula_9; in other words, if formula_9 is initially decreasing in the formula_8-direction, then larger formula_9's that lie on the back side will catch up with smaller formula_9's that are on the front side. The role of the right-side diffusive term is essentially to stop the gradient becoming infinite.
Inviscid Burgers' equation.
The inviscid Burgers' equation is a conservation equation, more generally a first order quasilinear hyperbolic equation. The solution of the equation, together with the initial condition
formula_10
can be constructed by the method of characteristics. Let formula_11 be the parameter characterising any given characteristic in the formula_8-formula_11 plane; then the characteristic equations are given by
formula_12
Integration of the second equation tells us that formula_9 is constant along the characteristic and integration of the first equation shows that the characteristics are straight lines, i.e.,
formula_13
where formula_14 is the point (or parameter) on the "x"-axis ("t" = 0) of the "x"-"t" plane from which the characteristic curve is drawn. Since formula_9 on the formula_8-axis is known from the initial condition, and since formula_9 is unchanged as we move along the characteristic emanating from each point formula_15, we write formula_16 on each characteristic. Therefore, the family of trajectories of characteristics parametrized by formula_14 is
formula_17
Thus, the solution is given by
formula_18
This is an implicit relation that determines the solution of the inviscid Burgers' equation provided characteristics don't intersect. If the characteristics do intersect, then a classical solution to the PDE does not exist and leads to the formation of a shock wave. Whether characteristics can intersect or not depends on the initial condition. In fact, the "breaking time" before a shock wave can be formed is given by
formula_19
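For a concrete initial profile the breaking time can be estimated numerically from this formula. The sketch below uses the illustrative choice "f"("x") = exp(−"x"2), for which the exact answer is √("e"/2) ≈ 1.1658, and compares it with a finite-difference estimate of the infimum of "f"′.

```python
from math import exp, sqrt, e

def f(x):                                    # illustrative initial condition
    return exp(-x * x)

# Estimate inf_x f'(x) on a grid with central differences, then t_b = -1 / inf f'.
xs = [i * 1e-3 for i in range(-5000, 5001)]
h = 1e-6
min_slope = min((f(x + h) - f(x - h)) / (2 * h) for x in xs)
t_b = -1.0 / min_slope
print(t_b, sqrt(e / 2))                      # both approximately 1.1658
```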
Complete integral of the inviscid Burgers' equation.
The implicit solution described above containing an arbitrary function formula_20 is called the general integral. However, the inviscid Burgers' equation, being a first-order partial differential equation, also has a complete integral which contains two arbitrary constants (for the two independent variables). Subrahmanyan Chandrasekhar provided the complete integral in 1943, which is given by
formula_21
where formula_22 and formula_23 are arbitrary constants. The complete integral satisfies a linear initial condition, i.e., formula_24. One can also construct the general integral using the above complete integral.
Viscous Burgers' equation.
The viscous Burgers' equation can be converted to a linear equation by the Cole–Hopf transformation,
formula_25
which turns it into the equation
formula_26
which can be integrated with respect to formula_8 to obtain
formula_27
where formula_28 is an arbitrary function of time. Introducing the transformation formula_29 (which does not affect the function formula_0), the required equation reduces to that of the heat equation
formula_30
The diffusion equation can be solved. That is, if formula_31, then
formula_32
The initial function formula_33 is related to the initial function formula_34 by
formula_35
where the lower limit is chosen arbitrarily. Inverting the Cole–Hopf transformation, we have
formula_36
which simplifies, by getting rid of the time-dependent prefactor in the argument of the logarithm, to
formula_37
This solution is derived from the solution of the heat equation for formula_38 that decays to zero as formula_39; other solutions for formula_9 can be obtained starting from solutions of formula_38 that satisfies different boundary conditions.
Some explicit solutions of the viscous Burgers' equation.
Explicit expressions for the viscous Burgers' equation are available. Some of the physically relevant solutions are given below:
Steadily propagating traveling wave.
If formula_34 is such that formula_40 and formula_41 and formula_42, then we have a traveling-wave solution (with a constant speed formula_43) given by
formula_44
This solution, which was originally derived by Harry Bateman in 1915, is used to describe the variation of pressure across a shock wave. When formula_45 and formula_46, the solution reduces to
formula_47
with formula_48.
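This closed form can be checked directly: the sketch below evaluates the residual of the viscous Burgers' equation for the solution above with finite differences at a few arbitrarily chosen points, and the residual is small (of the order of the discretisation error). The viscosity value is an arbitrary illustrative choice.

```python
from math import exp

nu = 0.5                                       # an arbitrary positive viscosity

def u(x, t):                                   # the travelling-wave solution above
    return 2.0 / (1.0 + exp((x - t) / nu))

h = 1e-4
for x, t in [(-1.0, 0.3), (0.2, 1.0), (1.5, 0.7)]:
    u_t  = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_x  = (u(x + h, t) - u(x - h, t)) / (2 * h)
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
    print(abs(u_t + u(x, t) * u_x - nu * u_xx))   # close to zero at each point
```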
Delta function as an initial condition.
If formula_49, where formula_50 (say, the Reynolds number) is a constant, then we have
formula_51
In the limit formula_52, the limiting behaviour is a diffusional spreading of a source and therefore is given by
formula_53
On the other hand, in the limit formula_54, the solution approaches the aforementioned shock-wave solution of Chandrasekhar for the inviscid Burgers' equation and is given by
formula_55
The shock wave location and its speed are given by formula_56 and formula_57
N-wave solution.
The N-wave solution comprises a compression wave followed by a rarefaction wave. A solution of this type is given by
formula_58
where formula_59 may be regarded as an initial Reynolds number at time formula_60, and formula_61, with formula_62, may be regarded as the time-varying Reynolds number.
Other forms.
Multi-dimensional Burgers' equation.
In two or more dimensions, the Burgers' equation becomes
formula_63
One can also extend the equation for the vector field formula_64, as in
formula_65
Generalized Burgers' equation.
The generalized Burgers' equation extends the quasilinear convective term to a more generalized form, i.e.,
formula_66
where formula_67 is any arbitrary function of u. The inviscid formula_5 equation is still a quasilinear hyperbolic equation for formula_68 and its solution can be constructed using method of characteristics as before.
Stochastic Burgers' equation.
Adding space-time noise formula_69, where formula_70 is an formula_71 Wiener process, leads to the stochastic Burgers' equation
formula_72
This stochastic PDE is the one-dimensional version of Kardar–Parisi–Zhang equation in a field formula_73 upon substituting formula_74.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "u(x,t)"
},
{
"math_id": 1,
"text": "\\nu"
},
{
"math_id": 2,
"text": "\\frac{\\partial u}{\\partial t} + u \\frac{\\partial u}{\\partial x} = \\nu\\frac{\\partial^2 u}{\\partial x^2}."
},
{
"math_id": 3,
"text": "u\\partial u/\\partial x"
},
{
"math_id": 4,
"text": "\\partial(u^2/2)/\\partial x"
},
{
"math_id": 5,
"text": "\\nu=0"
},
{
"math_id": 6,
"text": "\\frac{\\partial u}{\\partial t} + u \\frac{\\partial u}{\\partial x} = 0,"
},
{
"math_id": 7,
"text": "\\partial/\\partial t + u \\partial/\\partial x"
},
{
"math_id": 8,
"text": "x"
},
{
"math_id": 9,
"text": "u"
},
{
"math_id": 10,
"text": "\\frac{\\partial u}{\\partial t} + u \\frac{\\partial u}{\\partial x} = 0, \\quad u(x,0) = f(x)"
},
{
"math_id": 11,
"text": "t"
},
{
"math_id": 12,
"text": "\\frac{dx}{dt} = u, \\quad \\frac{du}{dt}=0."
},
{
"math_id": 13,
"text": "u=c, \\quad x = ut + \\xi "
},
{
"math_id": 14,
"text": "\\xi"
},
{
"math_id": 15,
"text": "x=\\xi"
},
{
"math_id": 16,
"text": "u=c=f(\\xi)"
},
{
"math_id": 17,
"text": "x=f(\\xi) t+ \\xi."
},
{
"math_id": 18,
"text": "u(x,t) = f(\\xi) = f(x-ut), \\quad \\xi = x - f(\\xi) t."
},
{
"math_id": 19,
"text": "t_b = \\frac{-1}{\\inf_x \\left(f^\\prime(x)\\right)}."
},
{
"math_id": 20,
"text": "f"
},
{
"math_id": 21,
"text": "u(x,t) = \\frac{ax+b}{at+1}."
},
{
"math_id": 22,
"text": "a"
},
{
"math_id": 23,
"text": "b"
},
{
"math_id": 24,
"text": "f(x) = ax + b"
},
{
"math_id": 25,
"text": "u(x,t) = -2 \\nu \\frac{\\partial}{\\partial x}\\ln \\varphi(x,t),"
},
{
"math_id": 26,
"text": "2 \\nu \\frac{\\partial}{\\partial x}\\left[\\frac{1}{\\varphi}\\left(\\frac{\\partial\\varphi}{\\partial t}- \\nu \\frac{\\partial^2\\varphi}{\\partial x^2}\\right)\\right]=0,"
},
{
"math_id": 27,
"text": "\\frac{\\partial\\varphi}{\\partial t}- \\nu \\frac{\\partial^2\\varphi}{\\partial x^2}=\\varphi \\frac{df(t)}{dt},"
},
{
"math_id": 28,
"text": "df/dt"
},
{
"math_id": 29,
"text": "\\varphi\\to \\varphi e^f"
},
{
"math_id": 30,
"text": "\\frac{\\partial\\varphi}{\\partial t}= \\nu \\frac{\\partial^2\\varphi}{\\partial x^2}."
},
{
"math_id": 31,
"text": "\\varphi(x,0)=\\varphi_0(x)"
},
{
"math_id": 32,
"text": "\\varphi(x,t) = \\frac{1}{\\sqrt{4\\pi \\nu t}}\\int_{-\\infty}^\\infty \\varphi_0(x') \\exp \\left[-\\frac{(x-x')^2}{4\\nu t}\\right]dx'."
},
{
"math_id": 33,
"text": "\\varphi_0(x)"
},
{
"math_id": 34,
"text": "u(x,0)=f(x)"
},
{
"math_id": 35,
"text": "\\ln \\varphi_0(x) = - \\frac{1}{2\\nu}\\int_0^x f(x') dx',"
},
{
"math_id": 36,
"text": "u(x,t)=-2\\nu\\frac{\\partial}{\\partial x}\\ln\\left\\{\\frac{1}{\\sqrt{4\\pi \\nu t}}\\int_{-\\infty}^\\infty \\exp\\left[-\\frac{(x-x')^2}{4\\nu t} - \\frac{1}{2\\nu}\\int_0^{x'}f(x'')dx''\\right]dx'\\right\\}"
},
{
"math_id": 37,
"text": "u(x,t)=-2\\nu\\frac{\\partial}{\\partial x}\\ln\\left\\{\\int_{-\\infty}^\\infty \\exp\\left[-\\frac{(x-x')^2}{4\\nu t} - \\frac{1}{2\\nu}\\int_0^{x'}f(x'')dx''\\right]dx'\\right\\}."
},
{
"math_id": 38,
"text": "\\varphi"
},
{
"math_id": 39,
"text": "x\\to\\pm\\infty"
},
{
"math_id": 40,
"text": "f(-\\infty)=f^+"
},
{
"math_id": 41,
"text": "f(+\\infty)=f^-"
},
{
"math_id": 42,
"text": "f'(x)<0"
},
{
"math_id": 43,
"text": "c=(f^++f^-)/2"
},
{
"math_id": 44,
"text": "u(x,t) = c - \\frac{f^+-f^-}{2}\\tanh\\left[\\frac{f^+-f^-}{4\\nu}(x-ct)\\right]."
},
{
"math_id": 45,
"text": "f^+=2"
},
{
"math_id": 46,
"text": "f^-=0"
},
{
"math_id": 47,
"text": "u(x,t)=\\frac{2}{1+e^{\\frac{x-t}{\\nu}}}"
},
{
"math_id": 48,
"text": "c=1"
},
{
"math_id": 49,
"text": "u(x,0) = 2\\nu Re \\delta(x)"
},
{
"math_id": 50,
"text": "Re"
},
{
"math_id": 51,
"text": "u(x,t)= \\sqrt{\\frac{\\nu}{\\pi t}} \\left[\\frac{(e^{Re}-1)e^{-x^2/4\\nu t}}{1 + (e^{Re}-1) \\mathrm{erfc}(x/\\sqrt{4\\nu t})/\\sqrt{2}}\\right]."
},
{
"math_id": 52,
"text": "Re\\to 0"
},
{
"math_id": 53,
"text": "u(x,t) = \\frac{2\\nu Re}{\\sqrt{4\\pi \\nu t}} \\exp\\left(-\\frac{x^2}{4\\nu t}\\right)."
},
{
"math_id": 54,
"text": "Re\\to \\infty"
},
{
"math_id": 55,
"text": "u(x,t) = \\begin{cases}\\frac{x}{t}, \\quad 0<x<\\sqrt{2\\nu Re\\,t},\\\\\n0, \\quad \\text{otherwise}.\\end{cases}"
},
{
"math_id": 56,
"text": "x=\\sqrt{2\\nu Re\\, t}"
},
{
"math_id": 57,
"text": "\\sqrt{\\nu Re/t}."
},
{
"math_id": 58,
"text": "u(x,t) = \\frac{x}{t}\\left[1 + \\frac{1}{e^{Re_0-1}}\\sqrt{\\frac{t}{t_0}}\\exp\\left(-\\frac{Re(t)x^2}{4\\nu Re_0 t}\\right)\\right]^{-1}"
},
{
"math_id": 59,
"text": "R_0"
},
{
"math_id": 60,
"text": "t=t_0"
},
{
"math_id": 61,
"text": "Re(t) = (1/2\\nu) \\int_0^\\infty udx=\\ln (1+\\sqrt{\\tau/t})"
},
{
"math_id": 62,
"text": "\\tau = t_0 \\sqrt{e^{Re_0}-1}"
},
{
"math_id": 63,
"text": "\\frac{\\partial u}{\\partial t} + u \\cdot \\nabla u = \\nu \\nabla^2 u."
},
{
"math_id": 64,
"text": "\\mathbf u"
},
{
"math_id": 65,
"text": "\\frac{\\partial \\mathbf u}{\\partial t} + \\mathbf u \\cdot \\nabla \\mathbf u = \\nu \\nabla^2 \\mathbf u."
},
{
"math_id": 66,
"text": "\\frac{\\partial u}{\\partial t} + c(u) \\frac{\\partial u}{\\partial x} = \\nu\\frac{\\partial^2 u}{\\partial x^2}."
},
{
"math_id": 67,
"text": "c(u)"
},
{
"math_id": 68,
"text": "c(u)>0"
},
{
"math_id": 69,
"text": "\\eta(x,t) = \\dot W(x,t)"
},
{
"math_id": 70,
"text": "W"
},
{
"math_id": 71,
"text": "L^2(\\mathbb R)"
},
{
"math_id": 72,
"text": "\\frac{\\partial u}{\\partial t} + u \\frac{\\partial u}{\\partial x} = \\nu \\frac{\\partial^2 u}{\\partial x^2}-\\lambda\\frac{\\partial\\eta}{\\partial x}."
},
{
"math_id": 73,
"text": "h(x,t)"
},
{
"math_id": 74,
"text": "u(x,t)=-\\lambda\\partial h/\\partial x"
}
] | https://en.wikipedia.org/wiki?curid=1023353 |
1023390 | Elastance | Mechanical stiffness, or inverse of electrical capacitance or fluid flow compliance
Electrical elastance is the reciprocal of capacitance. The SI unit of elastance is the inverse farad (F−1). The concept is not widely used by electrical and electronic engineers. The value of capacitors is invariably specified in units of capacitance rather than inverse capacitance. However, it is used in theoretical work in network analysis and has some niche applications at microwave frequencies.
The term "elastance" was coined by Oliver Heaviside through the analogy of a capacitor as a spring. The term is also used for analogous quantities in some other energy domains. It maps to stiffness in the mechanical domain, and is the inverse of compliance in the fluid flow domain, especially in physiology. It is also the name of the generalised quantity in bond-graph analysis and other schemes analysing systems across multiple domains.
Usage.
The definition of capacitance ("C") is the charge ("Q") stored per unit voltage ("V").
formula_0
Elastance ("S") is the reciprocal of capacitance, thus,
formula_1
Expressing the values of capacitors as elastance is not done much by practical electrical engineers, although it is sometimes convenient for capacitors in series. The total elastance is simply the sum of the individual elastances in that case. However, it is used by network theorists in their analysis. One advantage is that an increase in elastance increases impedance. This is in the same direction as the other two basic passive elements, resistance and inductance. An example of the use of elastance can be found in the 1926 doctoral thesis of Wilhelm Cauer. On his path to founding network synthesis, he formed the loop matrix A,
formula_2
where L, R, S and Z are the network loop matrices of inductance, resistance, elastance and impedance respectively and "s" is complex frequency. This expression would be significantly more complicated if Cauer had tried to use a matrix of capacitances instead of elastances. The use of elastance here is merely for mathematical convenience, in much the same way as mathematicians use radians rather than the more common units for angles.
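As a small illustration of the series-connection remark above, the following snippet (with arbitrary component values) computes the total elastance of three capacitors in series both directly and via the equivalent capacitance:
 # for capacitors in series, the elastances (S = 1/C) simply add,
 # whereas capacitances combine by reciprocals; values are arbitrary
 caps = [10e-6, 22e-6, 47e-6]                             # farads
 elastance_total = sum(1.0 / c for c in caps)             # F^-1, a plain sum
 cap_total = 1.0 / sum(1.0 / c for c in caps)             # equivalent capacitance
 print(elastance_total)                                   # total elastance of the series chain
 print(cap_total)                                         # same result expressed as a capacitance
 print(abs(elastance_total - 1.0 / cap_total) < 1e-12)    # True: the two views agree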
Elastance is also used in microwave engineering. In this field varactor diodes are used as a voltage variable capacitor in frequency multipliers, parametric amplifiers and variable filters. These diodes store a charge in their junction when reverse biased which is the source of the capacitor effect. The slope of the voltage-stored charge curve is called "differential elastance" in this field.
Units.
The SI unit of elastance is the reciprocal farad (F−1). The term "daraf" is sometimes used for this unit, but it is not approved by SI and its use is discouraged. The term is formed by writing "farad" backwards, in much the same way as the unit "mho" (unit of conductance, also not approved by SI) is formed by writing "ohm" backwards.
The term "daraf" was coined by Arthur E. Kennelly. He used it from at least 1920.
History.
The terms "elastance" and "elastivity" were coined by Oliver Heaviside in 1886. Heaviside coined a great many of the terms used in circuit analysis today, such as impedance, inductance, admittance, and conductance. Heaviside's terminology followed the model of resistance and resistivity with the "-ance" ending used for extensive properties and the "-ivity" ending used for intensive properties. The extensive properties are used in circuit analysis (they are the "values" of components) and the intensive properties are used in field analysis. Heaviside's nomenclature was designed to highlight the connection between corresponding quantities in field and circuit. Elastivity is the intensive property of a material corresponding to the bulk property of a component, elastance. It is the reciprocal of permittivity. As Heaviside put it,
<templatestyles src="Template:Blockquote/styles.css" />Permittivity gives rise to permittance, and elastivity to elastance.
Here, "permittance" is Heaviside's term for capacitance. He did not like any term that suggested that a capacitor was a container for holding charge. He rejected the terms "capacity" (capacitance) and "capacious" (capacitive) and their inverses "incapacity" and "incapacious". The terms current in his time for a capacitor were "condenser" (suggesting that the "electric fluid" could be condensed out) and "leyden" after the Leyden jar, an early form of capacitor, also suggesting some sort of storage. Heaviside preferred the analogy of a mechanical spring under compression, hence his preference for terms that suggested a property of a spring. This preference was a result of Heaviside following James Clerk Maxwell's view of electric current, or at least, Heaviside's interpretation of it. In this view, electric current is a flow caused by the electromotive force and is the analogue of velocity caused by a mechanical force. At the capacitor, this current causes a "displacement" whose rate of change is equal to the current. The displacement is viewed as an electric strain, like a mechanical strain in a compressed spring. The existence of a flow of physical charge is denied, as is the buildup of charge on the capacitor plates. This is replaced with the concept of divergence of the displacement field at the plates, which is numerically equal to the charge collected on the plates in the charge flow view.
For a period in the nineteenth and early-twentieth centuries, some authors followed Heaviside in the use of "elastance" and "elastivity". Today, the reciprocal quantities "capacitance" and "permittivity" are almost universally preferred by electrical engineers. However, elastance does still see some usage by theoretical writers. A further consideration in Heaviside's choice of these terms was a wish to distinguish them from mechanical terms. Thus, he chose "elastivity" rather than "elasticity". This avoids having to write "electrical elasticity" to disambiguate it from "mechanical elasticity".
Heaviside carefully chose his terms to be unique to electromagnetism, most especially avoiding commonality with mechanics. Ironically, many of his terms have subsequently been borrowed back into mechanics and other domains in order to name analogous properties. For instance, it is now necessary to distinguish "electrical impedance" from "mechanical impedance" in some contexts. "Elastance" has also been borrowed back into mechanics for the analogous quantity by some authors, but often "stiffness" is the preferred term instead. However, "elastance" is widely used for the analogous property in the domain of fluid dynamics, especially in the fields of biomedicine and physiology.
Mechanical analogy.
Mechanical–electrical analogies are formed by comparing the mathematical description of the two systems. Quantities that appear in the same place in equations of the same form are called "analogues". There are two main reasons for forming such analogies. The first is to allow electrical phenomena to be explained in terms of the more familiar mechanical systems. For instance, an electrical inductor-capacitor-resistor circuit has differential equations of the same form as a mechanical mass-spring-damper system. In such cases the electrical domain is converted to the mechanical domain. The second, and more important, reason is to allow a system containing both mechanical and electrical parts to be analysed as a unified whole. This is of great benefit in the fields of mechatronics and robotics. In such cases the mechanical domain is most often converted to the electrical domain because network analysis in the electrical domain is highly developed.
The Maxwellian analogy.
In the analogy developed by Maxwell, now known as the impedance analogy, voltage is made analogous to force. The voltage of a source of electric power is still called electromotive force for this reason. Current is analogous to velocity. The time derivative of distance (displacement) is equal to velocity and the time derivative of momentum is equal to force. Quantities in other energy domains that are in this same differential relationship are called respectively "generalised displacement", "generalised velocity", "generalised momentum", and "generalised force". In the electrical domain, it can be seen that the generalised displacement is charge, explaining the Maxwellians' use of the term "displacement".
Since elastance is the ratio of voltage over charge, then it follows that the analogue of elastance in another energy domain is the ratio of a generalised force over a generalised displacement. Thus, an elastance can be defined in any energy domain. "Elastance" is used as the name of the generalised quantity in the formal analysis of systems with multiple energy domains, such as is done with bond graphs.
Other analogies.
Maxwell's analogy is not the only way that analogies can be constructed between mechanical and electrical systems. There are any number of ways to do this. One very common system is the mobility analogy. In this analogy force maps to current instead of voltage. Electrical impedance no longer maps to mechanical impedance, and likewise, electrical elastance no longer maps to mechanical elastance.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " C = {Q \\over V}"
},
{
"math_id": 1,
"text": " S = {V \\over Q} \\ . "
},
{
"math_id": 2,
"text": "\\mathbf{A}= s^2 \\mathbf{L} + s \\mathbf{R} + \\mathbf{S} = s \\mathbf{Z}"
}
] | https://en.wikipedia.org/wiki?curid=1023390 |
10237 | Exponentiation by squaring | Algorithm for fast exponentiation
In mathematics and computer programming, exponentiating by squaring is a general method for fast computation of large positive integer powers of a number, or more generally of an element of a semigroup, like a polynomial or a square matrix. Some variants are commonly referred to as square-and-multiply algorithms or binary exponentiation. These can be of quite general use, for example in modular arithmetic or powering of matrices. For semigroups for which additive notation is commonly used, like elliptic curves used in cryptography, this method is also referred to as double-and-add.
Basic method.
Recursive version.
The method is based on the observation that, for any integer formula_0, one has:
formula_1
If the exponent n is zero then the answer is 1. If the exponent is negative then we can reuse the previous formula by rewriting the value using a positive exponent. That is,
formula_2
Together, these may be implemented directly as the following recursive algorithm:
In: an integer x; an integer n
Out: xn
exp_by_squaring(x, n)
if n < 0 then
return exp_by_squaring(1 / x, -n);
else if n = 0 then
return 1;
else if n is even then
return exp_by_squaring(x * x, n / 2);
else if n is odd then
return x * exp_by_squaring(x * x, (n - 1) / 2);
end function
In each recursive call, the least significant digit of the binary representation of n is removed. It follows that the number of recursive calls is formula_3 the number of bits of the binary representation of n. So this algorithm computes this number of squarings and a lower number of multiplications, which is equal to the number of 1s in the binary representation of n. This logarithmic number of operations is to be compared with the trivial algorithm which requires "n" − 1 multiplications.
This algorithm is not tail-recursive. This implies that it requires an amount of auxiliary memory that is roughly proportional to the number of recursive calls, or even more if the amount of data per iteration is increasing.
The algorithms of the next section use a different approach; the resulting algorithms need the same number of operations, but use an amount of auxiliary memory that is roughly the same as the memory required to store the result.
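A direct transcription of the recursive algorithm above into Python might look as follows; passing a Fraction for the negative-exponent case is an illustrative choice so that 1 / x stays exact.
 from fractions import Fraction
 def exp_by_squaring(x, n):
     # mirrors the recursive pseudocode above
     if n < 0:
         return exp_by_squaring(1 / x, -n)
     if n == 0:
         return 1
     if n % 2 == 0:
         return exp_by_squaring(x * x, n // 2)
     return x * exp_by_squaring(x * x, (n - 1) // 2)
 print(exp_by_squaring(3, 13))            # 1594323
 print(exp_by_squaring(Fraction(2), -5))  # 1/32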
With constant auxiliary memory.
The variants described in this section are based on the formula
formula_4
If one applies this formula recursively, starting with "y" = 1, one eventually gets an exponent equal to 0, and the desired result is then the left factor.
This may be implemented as a tail-recursive function:
Function exp_by_squaring(x, n)
return exp_by_squaring2(1, x, n)
Function exp_by_squaring2(y, x, n)
if n < 0 then return exp_by_squaring2(y, 1 / x, -n);
else if n = 0 then return y;
else if n is even then return exp_by_squaring2(y, x * x, n / 2);
else if n is odd then return exp_by_squaring2(x * y, x * x, (n - 1) / 2).
The iterative version of the algorithm also uses a bounded auxiliary space, and is given by
Function exp_by_squaring_iterative(x, n)
if n < 0 then
x := 1 / x;
n := -n;
if n = 0 then return 1
y := 1;
while n > 1 do
if n is odd then
y := x * y;
n := n - 1;
x := x * x;
n := n / 2;
return x * y
The correctness of the algorithm results from the fact that formula_5 is invariant during the computation; it is formula_6 at the beginning; and it is formula_7 at the end.
These algorithms use exactly the same number of operations as the algorithm of the preceding section, but the multiplications are done in a different order.
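The invariant formula_5 is easy to check in code. The sketch below implements the iterative version for non-negative integer arguments and asserts the invariant on every pass; the reference value target is computed only for the purpose of that check.
 def exp_by_squaring_iterative(x, n):
     # iterative version for non-negative integer n, with the invariant verified
     if n == 0:
         return 1
     target = x ** n               # reference value, used only to verify the invariant
     y = 1
     while n > 1:
         if n % 2 == 1:
             y = x * y
             n -= 1
         x = x * x
         n //= 2
         assert y * x ** n == target   # y * x^n is unchanged by every pass
     return x * y
 print(exp_by_squaring_iterative(2, 10))   # 1024
 print(exp_by_squaring_iterative(7, 13))   # 96889010407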
Computational complexity.
A brief analysis shows that such an algorithm uses formula_8 squarings and at most formula_8 multiplications, where formula_9 denotes the floor function. More precisely, the number of multiplications is one less than the number of ones present in the binary expansion of "n". For "n" greater than about 4 this is computationally more efficient than naively multiplying the base with itself repeatedly.
Each squaring results in approximately double the number of digits of the previous, and so, if multiplication of two "d"-digit numbers is implemented in O("d""k") operations for some fixed "k", then the complexity of computing "x""n" is given by
formula_10
2"k"-ary method.
This algorithm calculates the value of "xn" after expanding the exponent in base 2"k". It was first proposed by Brauer in 1939. In the algorithm below we make use of the following function "f"(0) = ("k", 0) and "f"("m") = ("s", "u"), where "m" = "u"·2"s" with "u" odd.
Algorithm:
y := 1; i := l - 1
while i ≥ 0 do
(s, u) := f(ni)
for j := 1 to k - s do
y := y2
y := y * xu
for j := 1 to s do
y := y2
i := i - 1
return y
For optimal efficiency, "k" should be the smallest integer satisfying
formula_12
Sliding-window method.
This method is an efficient variant of the 2"k"-ary method. For example, to calculate the exponent 398, which has binary expansion (110 001 110)2, we take a window of length 3 using the 2"k"-ary method algorithm and calculate 1, x3, x6, x12, x24, x48, x49, x98, x99, x198, x199, x398.
But, we can also compute 1, x3, x6, x12, x24, x48, x96, x192, x199, x398, which saves one multiplication and amounts to evaluating (110 001 110)2
Here is the general algorithm:
Algorithm:
y := 1; i := l - 1
while i > -1 do
if ni = 0 then
 y := y2; i := i - 1
 else
 s := max{i - k + 1, 0}
 while ns = 0 do
s := s + 1
for h := 1 to i - s + 1 do
y := y2
u := (ni, ni-1, ..., ns)2
y := y * xu
i := s - 1
return y
Montgomery's ladder technique.
Many algorithms for exponentiation do not provide defence against side-channel attacks. Namely, an attacker observing the sequence of squarings and multiplications can (partially) recover the exponent involved in the computation. This is a problem if the exponent should remain secret, as with many public-key cryptosystems. A technique called "Montgomery's ladder" addresses this concern.
Given the binary expansion of a positive, non-zero integer "n" = ("n""k"−1..."n"0)2 with "n"k−1 = 1, we can compute "xn" as follows:
x1 = x; x2 = x2
for i = k - 2 to 0 do
if ni = 0 then
x2 = x1 * x2; x1 = x12
else
x1 = x1 * x2; x2 = x22
return x1
The algorithm performs a fixed sequence of operations (up to log "n"): a multiplication and squaring takes place for each bit in the exponent, regardless of the bit's specific value. A similar algorithm for multiplication by doubling exists.
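A sketch of the ladder for non-negative integer exponents in plain Python is given below; it illustrates the exponent recoding only and has none of the constant-time or constant-memory-access properties a hardened implementation would need.
 def montgomery_ladder(x, n):
     # one multiplication and one squaring per exponent bit
     if n == 0:
         return 1
     bits = bin(n)[2:]             # binary expansion, most significant bit first
     x1, x2 = x, x * x             # corresponds to the leading bit n_(k-1) = 1
     for b in bits[1:]:            # remaining bits n_(k-2) ... n_0
         if b == '0':
             x2 = x1 * x2
             x1 = x1 * x1
         else:
             x1 = x1 * x2
             x2 = x2 * x2
     return x1
 print(montgomery_ladder(3, 13) == 3 ** 13)   # True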
This specific implementation of Montgomery's ladder is not yet protected against cache timing attacks: memory access latencies might still be observable to an attacker, as different variables are accessed depending on the value of bits of the secret exponent. Modern cryptographic implementations use a "scatter" technique to make sure the processor always misses the faster cache.
Fixed-base exponent.
There are several methods which can be employed to calculate "xn" when the base is fixed and the exponent varies. As one can see, precomputations play a key role in these algorithms.
Yao's method.
Yao's method is orthogonal to the 2"k"-ary method where the exponent is expanded in radix "b" = 2"k" and the computation is as performed in the algorithm above. Let n, ni, b, and bi be integers.
Let the exponent n be written as
formula_14
where formula_15 for all formula_16.
Let "xi" = "xbi".
Then the algorithm uses the equality
formula_17
Given the element x of G, and the exponent n written in the above form, along with the precomputed values "x""b"0..."x""b""w"−1, the element xn is calculated using the algorithm below:
y = 1, u = 1, j = h - 1
while j > 0 do
for i = 0 to w - 1 do
if ni = j then
u = u × xbi
y = y × u
j = j - 1
return y
If we set "h" = 2"k" and "bi" = "hi", then the ni values are simply the digits of n in base h. Yao's method collects in "u" first those xi that appear to the highest power &NoBreak;&NoBreak;; in the next round those with power &NoBreak;&NoBreak; are collected in u as well etc. The variable "y" is multiplied &NoBreak;&NoBreak; times with the initial u, &NoBreak;&NoBreak; times with the next highest powers, and so on.
The algorithm uses roughly "w" + "h" multiplications, and the "w" precomputed elements "x"0, ..., "x""w"−1 must be stored to compute xn.
Euclidean method.
The Euclidean method was first introduced in "Efficient exponentiation using precomputation and vector addition chains" by P. D. Rooij.
This method computes formula_18 in a group G, where n is a natural integer; the algorithm, given below, uses the following equality recursively:
formula_19
where formula_20.
In other words, a Euclidean division of the exponent "n"1 by "n"0 is used to return a quotient q and a remainder "n"1 mod "n"0.
Given the base element x in group G, and the exponent formula_21 written as in Yao's method, the element formula_18 is calculated using formula_22 precomputed values formula_23 and then the algorithm below.
Begin loop
Find formula_24, such that formula_25.
Find formula_26, such that formula_27.
Break loop if formula_28.
Let formula_29, and then let formula_30.
Compute recursively formula_31, and then let formula_32.
End loop;
Return formula_33.
The algorithm first finds the largest value among the "n""i" and then the supremum within the set { "n""i" : "i" ≠ "M" }.
Then it raises "x""M" to the power q, multiplies this value with "x""N", and then assigns "x""N" the result of this computation and "n""M" the value "n""M" modulo "n""N".
Further applications.
The approach also works with semigroups that are not of characteristic zero, for example allowing fast computation of large exponents modulo a number. Especially in cryptography, it is useful to compute powers in a ring of integers modulo q. For example, the evaluation of
13789^722341 (mod 2345) = 2029
would take a very long time and much storage space if the naïve method of computing 13789^722341 and then taking the remainder when divided by 2345 were used. Even using a more effective method will take a long time: square 13789, take the remainder when divided by 2345, multiply the result by 13789, and so on.
Applying above "exp-by-squaring" algorithm, with "*" interpreted as "x" * "y" = "xy" mod 2345 (that is, a multiplication followed by a division with remainder) leads to only 27 multiplications and divisions of integers, which may all be stored in a single machine word. Generally, any of these approaches will take fewer than 2log2(722340) &leq; 40 modular multiplications.
The approach can also be used to compute integer powers in a group, using either of the rules
Power("x", −"n")
Power("x"−1, "n"),
Power("x", −"n")
(Power("x", "n"))−1.
The approach also works in non-commutative semigroups and is often used to compute powers of matrices.
More generally, the approach works with positive integer exponents in every magma for which the binary operation is power associative.
Signed-digit recoding.
In certain computations it may be more efficient to allow negative coefficients and hence use the inverse of the base, provided inversion in G is "fast" or has been precomputed. For example, when computing "x"2"k"−1, the binary method requires "k"−1 multiplications and "k"−1 squarings. However, one could perform k squarings to get "x"2"k" and then multiply by "x"−1 to obtain "x"2"k"−1.
To this end we define the signed-digit representation of an integer n in radix b as
formula_34
"Signed binary representation" corresponds to the particular choice "b" = 2 and formula_35. It is denoted by formula_36. There are several methods for computing this representation. The representation is not unique. For example, take "n" = 478: two distinct signed-binary representations are given by formula_37 and formula_38, where formula_39 is used to denote −1. Since the binary method computes a multiplication for every non-zero entry in the base-2 representation of n, we are interested in finding the signed-binary representation with the smallest number of non-zero entries, that is, the one with "minimal" Hamming weight. One method of doing this is to compute the representation in non-adjacent form, or NAF for short, which is one that satisfies formula_40 and denoted by formula_41. For example, the NAF representation of 478 is formula_42. This representation always has minimal Hamming weight. A simple algorithm to compute the NAF representation of a given integer formula_43 with formula_44 is the following:
formula_45
for "i" = 0 to "l" − 1 do
formula_46
formula_47
return formula_48
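A direct transcription of this procedure into Python is sketched below; the binary expansion is padded with two leading zeros so that the condition formula_44 holds, and the signed digits are produced least-significant first.
 def naf(n):
     # non-adjacent form of n >= 0, least-significant digit first, digits in {-1, 0, 1}
     bits = [int(b) for b in bin(n)[2:][::-1]] + [0, 0]   # n_0 ... n_l with n_l = n_(l-1) = 0
     l = len(bits) - 1
     c = 0
     out = []
     for i in range(l):
         c_next = (c + bits[i] + bits[i + 1]) // 2
         out.append(c + bits[i] - 2 * c_next)
         c = c_next
     return out
 digits = naf(478)
 print(digits)                                            # [0, -1, 0, 0, 0, -1, 0, 0, 0, 1]
 print(sum(d * 2**i for i, d in enumerate(digits)))       # 478: the signed digits reconstruct n
Read most-significant digit first, the output for 478 reproduces the NAF representation formula_42 given above.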
Another algorithm by Koyama and Tsuruoka does not require the condition that formula_49; it still minimizes the Hamming weight.
Alternatives and generalizations.
Exponentiation by squaring can be viewed as a suboptimal addition-chain exponentiation algorithm: it computes the exponent by an addition chain consisting of repeated exponent doublings (squarings) and/or incrementing exponents by "one" (multiplying by "x") only. More generally, if one allows "any" previously computed exponents to be summed (by multiplying those powers of "x"), one can sometimes perform the exponentiation using fewer multiplications (but typically using more memory). The smallest power where this occurs is for "n" = 15:
formula_50 (squaring, 6 multiplies),
formula_51 (optimal addition chain, 5 multiplies if "x"3 is re-used).
In general, finding the "optimal" addition chain for a given exponent is a hard problem, for which no efficient algorithms are known, so optimal chains are typically used for small exponents only (e.g. in compilers where the chains for small powers have been pre-tabulated). However, there are a number of heuristic algorithms that, while not being optimal, have fewer multiplications than exponentiation by squaring at the cost of additional bookkeeping work and memory usage. Regardless, the number of multiplications never grows more slowly than Θ(log "n"), so these algorithms improve asymptotically upon exponentiation by squaring by only a constant factor at best.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n > 0"
},
{
"math_id": 1,
"text": " x^n=\n \\begin{cases}\n x \\, ( x^{2})^{(n - 1)/2}, & \\mbox{if } n \\mbox{ is odd} \\\\\n (x^{2})^{n/2} , & \\mbox{if } n \\mbox{ is even}\n \\end{cases}\n"
},
{
"math_id": 2,
"text": "x^n = \\left(\\frac{1}{x}\\right)^{-n}\\,."
},
{
"math_id": 3,
"text": "\\lceil \\log_2 n\\rceil,"
},
{
"math_id": 4,
"text": " yx^n=\n \\begin{cases}\n (yx) \\, ( x^{2})^{(n - 1)/2}, & \\mbox{if } n \\mbox{ is odd} \\\\\n y\\,(x^{2})^{n/2} , & \\mbox{if } n \\mbox{ is even}.\n \\end{cases}\n"
},
{
"math_id": 5,
"text": "yx^n"
},
{
"math_id": 6,
"text": "1\\cdot x^n=x^n"
},
{
"math_id": 7,
"text": "yx^1=xy "
},
{
"math_id": 8,
"text": "\\lfloor \\log_2n\\rfloor"
},
{
"math_id": 9,
"text": "\\lfloor\\;\\rfloor"
},
{
"math_id": 10,
"text": "\n \\sum\\limits_{i=0}^{O(\\log n)} \\big(2^i O(\\log x)\\big)^k = O\\big((n \\log x)^k\\big).\n"
},
{
"math_id": 11,
"text": "x^3, x^5, ... , x^{2^k-1}"
},
{
"math_id": 12,
"text": "\\lg n < \\frac{k(k + 1) \\cdot 2^{2k}}{2^{k+1} - k - 2} + 1."
},
{
"math_id": 13,
"text": "x^3, x^5, ... ,x^{2^k-1}"
},
{
"math_id": 14,
"text": " n = \\sum_{i=0}^{w-1} n_i b_i,"
},
{
"math_id": 15,
"text": "0 \\leqslant n_i < h"
},
{
"math_id": 16,
"text": "i \\in [0, w-1]"
},
{
"math_id": 17,
"text": "x^n = \\prod_{i=0}^{w-1} x_i^{n_i} = \\prod_{j=1}^{h-1} \\bigg[\\prod_{n_i=j} x_i\\bigg]^j."
},
{
"math_id": 18,
"text": "x^n"
},
{
"math_id": 19,
"text": "x_0^{n_0} \\cdot x_1^{n_1} = \\left(x_0 \\cdot x_1^q\\right)^{n_0} \\cdot x_1^{n_1 \\mod n_0},"
},
{
"math_id": 20,
"text": "q = \\left\\lfloor \\frac{n_1}{n_0} \\right\\rfloor"
},
{
"math_id": 21,
"text": "n"
},
{
"math_id": 22,
"text": "l"
},
{
"math_id": 23,
"text": "x^{b_0}, ..., x^{b_{l_i}}"
},
{
"math_id": 24,
"text": "M \\in [0, l - 1]"
},
{
"math_id": 25,
"text": "\\forall i \\in [0, l - 1], n_M \\ge n_i"
},
{
"math_id": 26,
"text": "N \\in \\big([0, l - 1] - M\\big)"
},
{
"math_id": 27,
"text": "\\forall i \\in \\big([0, l - 1] - M\\big), n_N \\ge n_i"
},
{
"math_id": 28,
"text": "n_N = 0"
},
{
"math_id": 29,
"text": "q = \\lfloor n_M / n_N \\rfloor"
},
{
"math_id": 30,
"text": "n_N = (n_M \\bmod n_N)"
},
{
"math_id": 31,
"text": "x_M^q"
},
{
"math_id": 32,
"text": "x_N = x_N \\cdot x_M^q"
},
{
"math_id": 33,
"text": "x^n = x_M^{n_M}"
},
{
"math_id": 34,
"text": "n = \\sum_{i=0}^{l-1} n_i b^i \\text{ with } |n_i| < b."
},
{
"math_id": 35,
"text": "n_i \\in \\{-1, 0, 1\\}"
},
{
"math_id": 36,
"text": "(n_{l-1} \\dots n_0)_s"
},
{
"math_id": 37,
"text": "(10\\bar 1 1100\\bar 1 10)_s"
},
{
"math_id": 38,
"text": "(100\\bar 1 1000\\bar 1 0)_s"
},
{
"math_id": 39,
"text": "\\bar 1"
},
{
"math_id": 40,
"text": "n_i n_{i+1} = 0 \\text{ for all } i \\geqslant 0"
},
{
"math_id": 41,
"text": "(n_{l-1} \\dots n_0)_\\text{NAF}"
},
{
"math_id": 42,
"text": "(1000\\bar 1 000\\bar 1 0)_\\text{NAF}"
},
{
"math_id": 43,
"text": "n = (n_l n_{l-1} \\dots n_0)_2"
},
{
"math_id": 44,
"text": "n_l = n_{l-1} = 0"
},
{
"math_id": 45,
"text": "c_0=0"
},
{
"math_id": 46,
"text": "c_{i+1} = \\left\\lfloor\\frac{1}{2}(c_i + n_i + n_{i+1})\\right\\rfloor"
},
{
"math_id": 47,
"text": "n_i' = c_i + n_i - 2c_{i+1}"
},
{
"math_id": 48,
"text": "(n_{l-1}' \\dots n_0')_\\text{NAF}"
},
{
"math_id": 49,
"text": "n_i = n_{i+1} = 0"
},
{
"math_id": 50,
"text": "x^{15} = x \\times (x \\times [x \\times x^2]^2)^2"
},
{
"math_id": 51,
"text": "x^{15} = x^3 \\times ([x^3]^2)^2"
}
] | https://en.wikipedia.org/wiki?curid=10237 |
1023857 | Positive set theory | Class of alternative set theories
In mathematical logic, positive set theory is the name for a class of alternative set theories in which the axiom of comprehension holds for at least the positive formulas formula_0 (the smallest class of formulas containing atomic membership and equality formulas and closed under conjunction, disjunction, existential and universal quantification).
Typically, the motivation for these theories is topological: the sets are the classes which are closed under a certain topology. The closure conditions for the various constructions allowed in building positive formulas are readily motivated (and one can further justify the use of universal quantifiers bounded in sets to get generalized positive comprehension): the justification of the existential quantifier seems to require that the topology be compact.
Axioms.
The set theory formula_1 of Olivier Esser consists of the following axioms:
Extensionality.
formula_2
Positive comprehension.
formula_3
where formula_0 is a "positive formula". A positive formula uses only the logical constants formula_4 but not formula_5.
Closure.
formula_6
where formula_0 is a formula. That is, for every formula formula_0, the intersection of all sets which contain every formula_7 such that formula_8 exists. This is called the closure of formula_9 and is written in any of the various ways that topological closures can be presented. This can be put more briefly if class language is allowed (any condition on sets defining a class as in NBG): for any class "C" there is a set which is the intersection of all sets which contain "C" as a subclass. This is a reasonable principle if the sets are understood as closed classes in a topology.
Infinity.
The von Neumann ordinal formula_10 exists. This is not an axiom of infinity in the usual sense; if Infinity does not hold, the closure of formula_10 exists and has itself as its sole additional member (it is certainly infinite); the point of this axiom is that formula_10 contains no additional elements at all, which boosts the theory from the strength of second order arithmetic to the strength of Morse–Kelley set theory with the proper class ordinal a weakly compact cardinal.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\phi"
},
{
"math_id": 1,
"text": "\\mathrm{GPK}^+_\\infty"
},
{
"math_id": 2,
"text": "\\forall x \\forall y (\\forall z (z \\in x \\leftrightarrow z \\in y) \\to x = y)"
},
{
"math_id": 3,
"text": "\\exists x \\forall y (y \\in x \\leftrightarrow \\phi(y))"
},
{
"math_id": 4,
"text": "\\{\\top, \\bot, \\land, \\lor, \\forall, \\exists, =, \\in\\}"
},
{
"math_id": 5,
"text": "\\{\\to, \\neg\\}"
},
{
"math_id": 6,
"text": "\\exists x \\forall y (y \\in x \\leftrightarrow \\forall z (\\forall w (\\phi(w) \\rightarrow w \\in z) \\rightarrow y \\in z))"
},
{
"math_id": 7,
"text": "x"
},
{
"math_id": 8,
"text": "\\phi(x)"
},
{
"math_id": 9,
"text": "\\{x \\mid \\phi(x)\\}"
},
{
"math_id": 10,
"text": "\\omega"
}
] | https://en.wikipedia.org/wiki?curid=1023857 |
1023920 | NTSC-J | Japanese variation of the NTSC analog television standard
NTSC-J or "System J" is the informal designation for the analogue television standard used in Japan. The system is based on the US NTSC (NTSC-M) standard with minor differences. While NTSC-M is an official CCIR and FCC standard, NTSC-J or "System J" are a colloquial indicators.
The system was introduced by NHK and NTV, with regular color broadcasts starting on September 10, 1960.
NTSC-J was replaced by digital broadcasts in 44 of the country's 47 prefectures on 24 July 2011. Analogue broadcasting ended on 31 March 2012 in the three prefectures devastated by the 2011 Tōhoku earthquake and tsunami (Iwate, Miyagi, Fukushima) and the subsequent Fukushima Daiichi nuclear disaster.
The term NTSC-J is also incorrectly and informally used to distinguish regions in console video games, which use televisions (see Marketing definition below).
Technical definition.
Japan implemented the NTSC standard with slight differences. The black and blanking levels of the NTSC-J signal are identical to each other (both at 0 IRE, similar to the PAL video standard), while in American NTSC the black level is slightly higher (7.5 IRE) than blanking level - because of the way this appears in the waveform, the higher black level is also called pedestal. This small difference doesn't cause any incompatibility problems, but needs to be compensated by a slight change of the TV brightness setting in order to achieve proper images.
YIQ color encoding in NTSC-J uses slightly different equations and ranges from regular NTSC. formula_0 has a range of 0 to ±334 (±309 on NTSC-M), and formula_1 has a range of 0 to ±293 (±271 on NTSC-M).
YCbCr equations for NTSC-J are formula_2, while on NTSC-M we have formula_3.
NTSC-J also uses a white reference (color temperature) of 9300K instead of the usual NTSC standard of 6500K.
The over-the-air RF frequencies used in Japan do not match those of the US NTSC standard. On VHF the frequency spacing for each channel is 6 MHz as in North America, South America, the Caribbean, South Korea, Taiwan, Burma (Myanmar) and the Philippines, except between channels 7 and 8 (which overlap). Channels 1 through 3 are reallocated for the expansion of the Japanese FM band. On UHF, the frequency spacing for each channel in Japan is the same, but the channel numbers are 1 lower than in the other areas mentioned - for example, channel 13 in Japan is on the same frequency as channel 14. For more information see Television channel frequencies. Channels 13-62 are used for analog and digital TV broadcasting.
The encoding of the stereo subcarrier also differs between NTSC-M/MTS and Japanese EIAJ MTS broadcasts.
Marketing definition.
The term NTSC-J was informally used to distinguish regions in console video games, which use televisions. NTSC-J is used as the name of the video gaming region of Japan (hence the "J"), South East Asia (some countries only), Taiwan, Hong Kong, Macau, Philippines and South Korea (now NTSC-K) (formerly part of SE Asia with Hong Kong, Taiwan, Japan, etc.).
Most games designated as part of this region will not run on hardware designated as part of the NTSC-U, PAL (or PAL-E, "E" stands for Europe) or NTSC-C (for China) mostly due to the regional differences of the PAL (SECAM was also used in the early 1990s) and NTSC standards. Many older video game systems do not allow games from different regions to be played (accomplished by various forms of regional lockout); however more modern consoles either leave protection to the discretion of publishers, such as Microsoft's Xbox 360, or discontinue its use entirely, like Sony's PlayStation 3 (with a few exceptions).
China received its own designation due to fears of an influx of illegal copies flooding out of China, which is notorious for its rampant copyright infringements. There is also concern of copyright protection through regional lockout built into the video game systems and games themselves, as the same product can be edited by different publishers from one continent to another.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "I"
},
{
"math_id": 1,
"text": "Q"
},
{
"math_id": 2,
"text": "C = (Cb-512)*(0.545)*(\\sin\\omega t) + (Cr-512)*(0.769)*(\\cos\\omega t)"
},
{
"math_id": 3,
"text": "C = (Cb-512)*(0.504)*(\\sin\\omega t) + (Cr-512)*(0.711)*(\\cos\\omega t)"
}
] | https://en.wikipedia.org/wiki?curid=1023920 |
10240442 | CM-field | Complex multiplication field
In mathematics, a CM-field is a particular type of number field, so named for a close connection to the theory of complex multiplication. Another name used is J-field.
The abbreviation "CM" was introduced by .
Formal definition.
A number field "K" is a CM-field if it is a quadratic extension "K"/"F" where the base field "F" is totally real but "K" is totally imaginary. I.e., every embedding of "F" into formula_0 lies entirely within formula_1, but there is no embedding of "K" into formula_1.
In other words, there is a subfield "F" of "K" such that "K" is generated over "F" by a single square root of an element, say
β = formula_2,
in such a way that the minimal polynomial of β over the rational number field formula_3 has all its roots non-real complex numbers. For this α should be chosen "totally negative", so that for each embedding σ of formula_4 into the real number field,
σ(α) < 0.
Properties.
One feature of a CM-field is that complex conjugation on formula_0 induces an automorphism on the field which is independent of its embedding into formula_5. In the notation given, it must change the sign of β.
A number field "K" is a CM-field if and only if it has a "units defect", i.e. if it contains a proper subfield "F" whose unit group has the same formula_6-rank as that of "K" . In fact, "F" is the totally real subfield of "K" mentioned above. This follows from Dirichlet's unit theorem.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb C "
},
{
"math_id": 1,
"text": "\\mathbb R "
},
{
"math_id": 2,
"text": "\\sqrt{\\alpha} "
},
{
"math_id": 3,
"text": " \\mathbb Q"
},
{
"math_id": 4,
"text": "F"
},
{
"math_id": 5,
"text": "\\mathbb C"
},
{
"math_id": 6,
"text": "\\mathbb Z"
},
{
"math_id": 7,
"text": " \\mathbb Q (\\zeta_n) "
},
{
"math_id": 8,
"text": " \\mathbb Q (\\zeta_n +\\zeta_n^{-1}). "
},
{
"math_id": 9,
"text": " \\zeta_n^2+\\zeta_n^{-2}-2 = (\\zeta_n - \\zeta_n^{-1})^2. "
},
{
"math_id": 10,
"text": "x^4 + x^3 - x^2 - x + 1"
}
] | https://en.wikipedia.org/wiki?curid=10240442 |
10240807 | Unbiased estimation of standard deviation | Procedure to estimate standard deviation from a sample
In statistics and in particular statistical theory, unbiased estimation of a standard deviation is the calculation from a statistical sample of an estimated value of the standard deviation (a measure of statistical dispersion) of a population of values, in such a way that the expected value of the calculation equals the true value. Except in some important situations, outlined later, the task has little relevance to applications of statistics since its need is avoided by standard procedures, such as the use of significance tests and confidence intervals, or by using Bayesian analysis.
However, for statistical theory, it provides an exemplar problem in the context of estimation theory which is both simple to state and for which results cannot be obtained in closed form. It also provides an example where imposing the requirement for unbiased estimation might be seen as just adding inconvenience, with no real benefit.
Motivation.
In statistics, the standard deviation of a population of numbers is often estimated from a random sample drawn from the population. This is the sample standard deviation, which is defined by
formula_0
where formula_1 is the sample (formally, realizations from a random variable "X") and formula_2 is the sample mean.
One way of seeing that this is a biased estimator of the standard deviation of the population is to start from the result that "s"2 is an unbiased estimator for the variance σ2 of the underlying population if that variance exists and the sample values are drawn independently with replacement. The square root is a nonlinear function, and only linear functions commute with taking the expectation. Since the square root is a strictly concave function, it follows from Jensen's inequality that the square root of the sample variance is an underestimate.
The use of "n" − 1 instead of "n" in the formula for the sample variance is known as Bessel's correction, which corrects the bias in the estimation of the population "variance," and some, but not all of the bias in the estimation of the population "standard deviation."
It is not possible to find an estimate of the standard deviation which is unbiased for all population distributions, as the bias depends on the particular distribution. Much of the following relates to estimation assuming a normal distribution.
Bias correction.
Results for the normal distribution.
When the random variable is normally distributed, a minor correction exists to eliminate the bias. To derive the correction, note that for normally distributed "X", Cochran's theorem implies that formula_3 has a chi square distribution with formula_4 degrees of freedom and thus its square root, formula_5 has a chi distribution with formula_4 degrees of freedom. Consequently, calculating the expectation of this last expression and rearranging constants,
formula_6
where the correction factor formula_7 is the scale mean of the chi distribution with formula_4 degrees of freedom, formula_8. This depends on the sample size "n," and is given as follows:
formula_9
where Γ(·) is the gamma function. An unbiased estimator of "σ" can be obtained by dividing formula_10 by formula_11. As formula_12 grows large it approaches 1, and even for smaller values the correction is minor. The figure shows a plot of formula_7 versus sample size. The table below gives numerical values of formula_11 and algebraic expressions for some values of formula_12; more complete tables may be found in most textbooks on statistical quality control.
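The factor formula_7 can be computed directly from the gamma-function expression above; working with the log-gamma function avoids overflow for large formula_12. A short sketch:
 import math
 def c4(n):
     # bias-correction factor c4(n) for the sample standard deviation under normality
     return math.sqrt(2.0 / (n - 1)) * math.exp(math.lgamma(n / 2.0) - math.lgamma((n - 1) / 2.0))
 for n in (2, 3, 5, 10, 100):
     print(n, c4(n))        # approaches 1 as n grows; c4(2) = sqrt(2/pi), about 0.7979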
It is important to keep in mind this correction only produces an unbiased estimator for normally and independently distributed "X". When this condition is satisfied, another result about "s" involving formula_11 is that the standard error of "s" is formula_13, while the standard error of the unbiased estimator is formula_14
Rule of thumb for the normal distribution.
If calculation of the function "c"4("n") appears too difficult, there is a simple rule of thumb to take the estimator
formula_15
The formula differs from the familiar expression for "s"2 only by having "n" − 1.5 instead of "n" − 1 in the denominator. This expression is only approximate; in fact,
formula_16
The bias is relatively small: say, for formula_17 it is equal to 2.3%, and for formula_18 the bias is already 0.1%.
Other distributions.
In cases where statistically independent data are modelled by a parametric family of distributions other than the normal distribution, the population standard deviation will, if it exists, be a function of the parameters of the model. One general approach to estimation would be maximum likelihood. Alternatively, it may be possible to use the Rao–Blackwell theorem as a route to finding a good estimate of the standard deviation. In neither case would the estimates obtained usually be unbiased. Notionally, theoretical adjustments might be obtainable to lead to unbiased estimates but, unlike those for the normal distribution, these would typically depend on the estimated parameters.
If the requirement is simply to reduce the bias of an estimated standard deviation, rather than to eliminate it entirely, then two practical approaches are available, both within the context of resampling. These are jackknifing and bootstrapping. Both can be applied either to parametrically based estimates of the standard deviation or to the sample standard deviation.
For non-normal distributions an approximate (up to "O"("n"−1) terms) formula for the unbiased estimator of the standard deviation is
formula_19
where "γ"2 denotes the population excess kurtosis. The excess kurtosis may be either known beforehand for certain distributions, or estimated from the data.
Effect of autocorrelation (serial correlation).
The material above, to stress the point again, applies only to independent data. However, real-world data often does not meet this requirement; it is autocorrelated (also known as serial correlation). As one example, the successive readings of a measurement instrument that incorporates some form of “smoothing” (more correctly, low-pass filtering) process will be autocorrelated, since any particular value is calculated from some combination of the earlier and later readings.
Estimates of the variance, and standard deviation, of autocorrelated data will be biased. The expected value of the sample variance is
formula_20
where "n" is the sample size (number of measurements) and "formula_21" is the autocorrelation function (ACF) of the data. (Note that the expression in the brackets is simply one minus the average expected autocorrelation for the readings.) If the ACF consists of positive values then the estimate of the variance (and its square root, the standard deviation) will be biased low. That is, the actual variability of the data will be greater than that indicated by an uncorrected variance or standard deviation calculation. It is essential to recognize that, if this expression is to be used to correct for the bias, by dividing the estimate formula_22 by the quantity in brackets above, then the ACF must be known analytically, not via estimation from the data. This is because the estimated ACF will itself be biased.
Example of bias in standard deviation.
To illustrate the magnitude of the bias in the standard deviation, consider a dataset that consists of sequential readings from an instrument that uses a specific digital filter whose ACF is known to be given by
formula_23
where "α" is the parameter of the filter, and it takes values from zero to unity. Thus the ACF is positive and geometrically decreasing. The figure shows the ratio of the estimated standard deviation to its known value (which can be calculated analytically for this digital filter), for several settings of "α" as a function of sample size "n". Changing "α" alters the variance reduction ratio of the filter, which is known to be
formula_24
so that smaller values of "α" result in more variance reduction, or “smoothing.” The bias is indicated by values on the vertical axis different from unity; that is, if there were no bias, the ratio of the estimated to known standard deviation would be unity. Clearly, for modest sample sizes there can be significant bias (a factor of two, or more).
Variance of the mean.
It is often of interest to estimate the variance or standard deviation of an estimated mean rather than the variance of a population. When the data are autocorrelated, this has a direct effect on the theoretical variance of the sample mean, which is
formula_25
The variance of the sample mean can then be estimated by substituting an estimate of "σ"2. One such estimate can be obtained from the equation for E[s2] given above. First define the following constants, assuming, again, a known ACF:
formula_26
formula_27
so that
formula_28
This says that the expected value of the quantity obtained by dividing the observed sample variance by the correction factor formula_29 gives an unbiased estimate of the variance. Similarly, re-writing the expression above for the variance of the mean,
formula_30
and substituting the estimate for formula_31 gives
formula_32
which is an unbiased estimator of the variance of the mean in terms of the observed sample variance and known quantities. If the autocorrelations formula_21 are identically zero, this expression reduces to the well-known result for the variance of the mean for independent data. The effect of the expectation operator in these expressions is that the equality holds in the mean (i.e., on average).
Estimating the standard deviation of the population.
Having the expressions above involving the variance of the population, and of an estimate of the mean of that population, it would seem logical to simply take the square root of these expressions to obtain unbiased estimates of the respective standard deviations. However it is the case that, since expectations are integrals,
formula_33
Instead, assume a function "θ" exists such that an unbiased estimator of the standard deviation can be written
formula_34
and "θ" depends on the sample size "n" and the ACF. In the case of NID (normally and independently distributed) data, the radicand is unity and "θ" is just the "c"4 function given in the first section above. As with "c"4, "θ" approaches unity as the sample size increases (as does "γ1").
It can be demonstrated via simulation modeling that ignoring "θ" (that is, taking it to be unity) and using
formula_35
removes all but a few percent of the bias caused by autocorrelation, making this a "reduced"-bias estimator, rather than an "un"biased estimator. In practical measurement situations, this reduction in bias can be significant, and useful, even if some relatively small bias remains. The figure above, showing an example of the bias in the standard deviation vs. sample size, is based on this approximation; the actual bias would be somewhat larger than indicated in those graphs since the transformation bias "θ" is not included there.
Estimating the standard deviation of the sample mean.
The unbiased variance of the mean in terms of the population variance and the ACF is given by
formula_36
and since there are no expected values here, in this case the square root can be taken, so that
formula_37
Using the unbiased estimate expression above for "σ", an estimate of the standard deviation of the mean will then be
formula_38
If the data are NID, so that the ACF vanishes, this reduces to
formula_39
In the presence of a nonzero ACF, ignoring the function "θ" as before leads to the "reduced"-bias estimator
formula_40
which again can be demonstrated to remove a useful majority of the bias.
References.
<templatestyles src="Reflist/styles.css" />
External links.
This article incorporates public domain material from | [
{
"math_id": 0,
"text": "s = \\sqrt{\\frac{\\sum_{i=1}^n (x_i - \\overline{x})^2}{n-1}},"
},
{
"math_id": 1,
"text": "\\{x_1,x_2,\\ldots,x_n\\}"
},
{
"math_id": 2,
"text": "\\overline{x}"
},
{
"math_id": 3,
"text": "(n-1) s^2/\\sigma^2"
},
{
"math_id": 4,
"text": "n-1"
},
{
"math_id": 5,
"text": "\\sqrt{n-1} s/\\sigma"
},
{
"math_id": 6,
"text": "\\operatorname{E}[s] = c_4(n)\\sigma "
},
{
"math_id": 7,
"text": "c_4(n)"
},
{
"math_id": 8,
"text": "\\mu_1/\\sqrt{n-1}"
},
{
"math_id": 9,
"text": "c_4(n) = \\sqrt{\\frac{2}{n-1}} \\frac{\\Gamma\\left(\\frac{n}{2}\\right)}{\\Gamma\\left(\\frac{n-1}{2}\\right)} = 1 - \\frac{1}{4n} - \\frac{7}{32n^2} - \\frac{19}{128n^3} + O(n^{-4})"
},
{
"math_id": 10,
"text": "s"
},
{
"math_id": 11,
"text": " c_4(n)"
},
{
"math_id": 12,
"text": "n"
},
{
"math_id": 13,
"text": "\\sigma\\sqrt{1-c_4^{2}}"
},
{
"math_id": 14,
"text": "\\sigma\\sqrt{c_4^{-2}-1} ."
},
{
"math_id": 15,
"text": "\\hat\\sigma = \\sqrt{ \\frac{1}{n-1.5} \\sum_{i=1}^n(x_i - \\overline {x})^2}"
},
{
"math_id": 16,
"text": "\\operatorname{E} \\left [\\hat\\sigma \\right ] = \\sigma\\cdot\\left ( 1 + \\frac{1}{16n^2} + \\frac{3}{16n^3} + O(n^{-4}) \\right)."
},
{
"math_id": 17,
"text": "n=3"
},
{
"math_id": 18,
"text": "n=9"
},
{
"math_id": 19,
"text": "\\hat\\sigma = \\sqrt{ \\frac{1}{n - 1.5 - \\tfrac{1}{4} \\gamma_2} \\sum_{i=1}^n \\left (x_i - \\overline {x} \\right )^2 },"
},
{
"math_id": 20,
"text": "{\\rm E}\\left[s^2\\right] = \\sigma ^2 \\left[ 1- \\frac{2}{n-1} \\sum_{k=1}^{n-1} \\left( 1-\\frac{k}{n} \\right)\\rho _k \\right]"
},
{
"math_id": 21,
"text": "\\rho_k"
},
{
"math_id": 22,
"text": "s^2"
},
{
"math_id": 23,
"text": "\\rho _k = (1 - \\alpha)^k"
},
{
"math_id": 24,
"text": "{\\rm VRR} =\\frac{\\alpha}{2-\\alpha}"
},
{
"math_id": 25,
"text": "{\\rm Var}\\left[ \\overline x \\right] = \\frac{\\sigma ^2}{n} \\left[ 1+2 \\sum_{k=1}^{n-1} {\\left( 1-\\frac{k}{n} \\right)\\rho _k } \\right] ."
},
{
"math_id": 26,
"text": "\\gamma_1 \\equiv 1 - \\frac{2}{n-1} \\sum_{k=1}^{n-1} { \\left(1 -\\frac{k}{n} \\right)} \\rho _k"
},
{
"math_id": 27,
"text": "\\gamma _2 \\equiv 1 + 2 \\sum_{k=1}^{n-1} { \\left( 1-\\frac{k}{n} \\right)} \\rho_k"
},
{
"math_id": 28,
"text": "{\\rm E}\\left[s^2\\right] = \\sigma ^2 \\gamma _1 \\Rightarrow {\\rm E}\\left[ \\frac{s^2 }{\\gamma _1} \\right] = \\sigma ^2 "
},
{
"math_id": 29,
"text": "\\gamma_1"
},
{
"math_id": 30,
"text": " {\\rm Var}\\left[ {\\overline x} \\right] = \\frac{\\sigma ^2 }{n} \\gamma _2"
},
{
"math_id": 31,
"text": "\\sigma^2"
},
{
"math_id": 32,
"text": "{\\rm Var}\\left[ \\overline {x} \\right] = {\\rm E}\\left[ \\frac{s^2}{\\gamma_1} \\left( \\frac{\\gamma_2}{n} \\right) \\right] = {\\rm E}\\left[ \\frac{s^2}{n} \\left\\{ \\frac{n-1}{\\frac{n}{\\gamma _2 } - 1} \\right\\} \\right]"
},
{
"math_id": 33,
"text": "{\\rm E}[s]\\ne \\sqrt {{\\rm E}\\left[s^2 \\right]} \\ne \\sigma \\sqrt {\\gamma _1 }"
},
{
"math_id": 34,
"text": " {\\rm E}[s] =\\sigma \\theta \\sqrt { \\gamma _1 } \\Rightarrow \\hat \\sigma= \\frac{s}{\\theta \\sqrt {\\gamma_1}}"
},
{
"math_id": 35,
"text": "{\\rm E}[s] \\approx \\sigma \\sqrt{\\gamma _1 } \\Rightarrow \\hat \\sigma \\approx \\frac{s}{\\sqrt { \\gamma _1 }}"
},
{
"math_id": 36,
"text": "{\\rm Var}\\left[ {\\overline x} \\right] = \\frac{\\sigma ^2 }{n} \\gamma _2 "
},
{
"math_id": 37,
"text": "\\sigma_{\\overline x}= \\frac{\\sigma}{\\sqrt { n}} \\sqrt {\\gamma_2} "
},
{
"math_id": 38,
"text": "\\hat \\sigma_{\\overline x} = \\frac{s}{\\theta \\sqrt {n}} \\frac{\\sqrt {\\gamma_2}}{\\sqrt{\\gamma_1}}"
},
{
"math_id": 39,
"text": "\\hat \\sigma_{\\overline x} =\\frac{s}{c_4 \\sqrt {n}}"
},
{
"math_id": 40,
"text": "\\hat \\sigma _{\\overline x} \\approx \\frac{s}{\\sqrt{n}} \\frac{\\sqrt{\\gamma _2}}{\\sqrt{\\gamma_1}} = \\frac{s}{\\sqrt{n}} \\sqrt{\\frac{n-1}{\\frac{n}{\\gamma_2} -1} }"
}
] | https://en.wikipedia.org/wiki?curid=10240807 |
1024131 | Jean Bourgain | Belgian mathematician (1954–2018)
Jean Louis, baron Bourgain (28 February 1954 – 22 December 2018) was a Belgian mathematician. He was awarded the Fields Medal in 1994 in recognition of his work on several core topics of mathematical analysis such as the geometry of Banach spaces, harmonic analysis, ergodic theory and nonlinear partial differential equations from mathematical physics.
Biography.
Bourgain received his PhD from the Vrije Universiteit Brussel in 1977. He was a faculty member at the University of Illinois Urbana-Champaign and, from 1985 until 1995, a professor at the Institut des Hautes Études Scientifiques at Bures-sur-Yvette in France, and from 1994 until 2018 at the Institute for Advanced Study in Princeton, New Jersey. He was an editor for the "Annals of Mathematics". From 2012 to 2014, he was a visiting scholar at UC Berkeley.
His research work included several areas of mathematical analysis such as the geometry of Banach spaces, harmonic analysis, analytic number theory, combinatorics, ergodic theory, partial differential equations and spectral theory, and later also group theory. He proved the uniqueness of the solutions for the initial value problem of the Korteweg–De Vries equation. He formulated what became known as the Bourgain slicing problem in high-dimensional convex geometry. In 1985, he proved Bourgain's embedding theorem in metric dimension reduction, which states that every finite metric space on "n" points can be embedded into an formula_0 space of dimension formula_1 with distortion formula_2. Together with Vitali Milman, he contributed to progress on Mahler’s conjecture in 1987. In 2000, Bourgain connected the Kakeya problem to arithmetic combinatorics. As a researcher, he was the author or coauthor of more than 500 articles.
Together with Ciprian Demeter and Larry Guth, he proved Vinogradov's mean-value theorem in 2015.
Bourgain was diagnosed with pancreatic cancer in late 2014. He died of it on 22 December 2018 at a hospital in Bonheiden, Belgium.
Awards and recognition.
Bourgain received several awards during his career, the most notable being the Fields Medal in 1994.
In 2009 Bourgain was elected a foreign member of the Royal Swedish Academy of Sciences.
In 2010, he received the Shaw Prize in Mathematics.
In 2012, he and Terence Tao received the Crafoord Prize in Mathematics from the Royal Swedish Academy of Sciences.
In 2015, he was made a baron by king Philippe of Belgium.
In 2016, he received the 2017 Breakthrough Prize in Mathematics.
In 2017, he received the 2018 Leroy P. Steele Prizes.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "l_p"
},
{
"math_id": 1,
"text": "O(\\log^2 (n))"
},
{
"math_id": 2,
"text": "O(\\log(n))"
}
] | https://en.wikipedia.org/wiki?curid=1024131 |
10242885 | Algebraic character | Mathematical concept
An algebraic character is a formal expression attached to a module in representation theory of semisimple Lie algebras that generalizes the character of a finite-dimensional representation and is analogous to the Harish-Chandra character of the representations of semisimple Lie groups.
Definition.
Let formula_0 be a semisimple Lie algebra with a fixed Cartan subalgebra formula_1 and let the abelian group formula_2 consist of the (possibly infinite) formal integral linear combinations of formula_3, where formula_4, the (complex) vector space of weights. Suppose that formula_5 is a locally-finite weight module. Then the algebraic character of formula_5 is an element of formula_6
defined by the formula:
formula_7
where the sum is taken over all weight spaces of the module formula_8
Example.
The algebraic character of the Verma module formula_9 with the highest weight formula_10 is given by the formula
formula_11
with the product taken over the set of positive roots.
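As a small illustrative check (not part of the original article), the product formula can be expanded symbolically. For the Lie algebra sl3 there are three positive roots α1, α2 and α1+α2; writing q_i for the formal exponential e^(−α_i), the coefficient of q1^m q2^n in the expanded product is the dimension of the weight space at λ − mα1 − nα2, which for sl3 equals min(m, n) + 1 (Kostant's partition function).
```python
import sympy as sp

q1, q2 = sp.symbols('q1 q2')       # q_i stands for the formal exponential e^{-alpha_i}
N = 6                              # truncation order of each geometric series

geom = lambda t: sum(t**k for k in range(N + 1))     # truncated 1/(1 - t)
ch = sp.expand(geom(q1) * geom(q2) * geom(q1 * q2))  # product over the positive roots of sl_3

# coefficient of q1**m * q2**n = dim of the weight space at lambda - m*alpha1 - n*alpha2
for m in range(4):
    print([ch.coeff(q1, m).coeff(q2, n) for n in range(4)])   # expect min(m, n) + 1
```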
Properties.
Algebraic characters are defined for locally-finite weight modules and are "additive", i.e. the character of a direct sum of modules is the sum of their characters. On the other hand, although one can define multiplication of the formal exponents by the formula formula_12 and extend it to their "finite" linear combinations by linearity, this does not make formula_6 into a ring, because of the possibility of formal infinite sums. Thus the product of algebraic characters is well defined only in restricted situations; for example, for the case of a highest weight module, or a finite-dimensional module. In good situations, the algebraic character is "multiplicative", i.e., the character of the tensor product of two weight modules is the product of their characters.
Generalization.
Characters also can be defined almost "verbatim" for weight modules over a Kac–Moody or generalized Kac–Moody Lie algebra. | [
{
"math_id": 0,
"text": "\\mathfrak{g}"
},
{
"math_id": 1,
"text": "\\mathfrak{h},"
},
{
"math_id": 2,
"text": "A=\\mathbb{Z}[[\\mathfrak{h}^*]]"
},
{
"math_id": 3,
"text": "e^{\\mu}"
},
{
"math_id": 4,
"text": "\\mu\\in\\mathfrak{h}^*"
},
{
"math_id": 5,
"text": "V"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": " ch(V)=\\sum_{\\mu}\\dim V_{\\mu}e^{\\mu}, "
},
{
"math_id": 8,
"text": "V."
},
{
"math_id": 9,
"text": "M_\\lambda"
},
{
"math_id": 10,
"text": "\\lambda"
},
{
"math_id": 11,
"text": " ch(M_{\\lambda})=\\frac{e^{\\lambda}}{\\prod_{\\alpha>0}(1-e^{-\\alpha})},"
},
{
"math_id": 12,
"text": "e^{\\mu}\\cdot e^{\\nu}=e^{\\mu+\\nu}"
}
] | https://en.wikipedia.org/wiki?curid=10242885 |
1024314 | Mental accounting | Mental accounting (or psychological accounting) is a model of consumer behaviour developed by Richard Thaler that attempts to describe the process whereby people code, categorize and evaluate economic outcomes. Mental accounting incorporates the economic concepts of prospect theory and transactional utility theory to evaluate how people create distinctions between their financial resources in the form of mental accounts, which in turn impacts the buyer decision process and reaction to economic outcomes. People are presumed to make mental accounts as a self control strategy to manage and keep track of their spending and resources. People budget money into mental accounts for savings (e.g., saving for a home) or expense categories (e.g., gas money, clothing, utilities). People also are assumed to make mental accounts to facilitate savings for larger purposes (e.g., a home or college tuition). Mental accounting can result in people demonstrating greater loss aversion for certain mental accounts, resulting in cognitive bias that incentivizes systematic departures from consumer rationality. Through an increased understanding of mental accounting differences in decision making based on different resources, and different reactions based on similar outcomes can be greater understood.
As Thaler puts it, “All organizations, from General Motors down to single person households, have explicit and/or implicit accounting systems. The accounting system often influences decisions in unexpected ways”. Particularly, individual expenses will usually not be considered in conjunction with the present value of one’s total wealth; they will be instead considered in the context of two accounts: the current budgetary period (this could be a monthly process due to bills, or yearly due to an annual income), and the category of expense. People can even have multiple mental accounts for the same kind of resource. A person may use different monthly budgets for grocery shopping and eating out at restaurants, for example, and constrain one kind of purchase when its budget has run out while not constraining the other kind of purchase, even though both expenditures draw on the same fungible resource (income).
One detailed application of mental accounting, the Behavioral Life Cycle Hypothesis posits that people mentally frame assets as belonging to either current income, current wealth or future income and this has implications for their behavior as the accounts are largely non-fungible and marginal propensity to consume out of each account is different.
Utility, value and transaction.
In mental accounting theory, the framing effect holds that the way a person subjectively frames a transaction in their mind will determine the utility they receive or expect. The concept of framing is adopted in prospect theory, whose value function is commonly used by mental accounting theorists (Richard Thaler included) in their analysis. In prospect theory, the value function is concave for gains (implying an aversion to risk), indicating decreasing marginal utility with accumulation of gain. The value function is convex for losses (implying a risk-seeking attitude). A concave value function for gains incentivizes risk-averse behavior because each additional gain increases value by less than the previous one. Conversely, a convex value function for losses, which is also steeper than the gain branch, means that the impact of a loss is more detrimental to a person than an equivalent gain, thus incentivizing risk-seeking behavior in order to avoid loss. These properties of the value function capture the concept of loss aversion, which asserts that people are more likely to make decisions in order to minimize loss than to maximise gain.
Given the Prospect Theory framework, how do people interpret, or ‘account for’, multiple transactions/outcomes, of the format formula_0? They can either view the outcomes jointly, and receive formula_1, in which case the outcomes are integrated, or formula_2, in which case we say that the outcomes are segregated. The choice to integrate or segregate multiple outcomes can be beneficial or detrimental to overall utility depending on the correctness of application. Due to the nature of our value function’s different slopes for gains and losses, our utility is maximized in different ways, depending on how we code the four kinds of transactions formula_3 and formula_4 (as gains or as losses):1) Multiple gains: formula_3 and formula_4 are both considered gains. Here, we see that formula_5. Thus, we want to segregate multiple gains.
2) Multiple losses: formula_3 and formula_4 are both considered losses. Here, we see that formula_6. We want to integrate multiple losses.
3) Mixed gain: one of formula_3 and formula_4 is a gain and one is a loss, however the gain is the larger of the two. In this case, formula_7. Utility is maximized when we integrate a mixed gain.
4) Mixed loss: again, one of formula_3 and formula_4 is a gain and one is a loss, however the loss is now significantly larger than the gain. In this case, formula_8. Clearly, we don't want to integrate a mixed loss when the less is significantly larger than the gain. This is often referred to as a "silver lining", a reference to the folk maxim "every cloud has a silver lining". When the loss is just barely larger than the gain, integration may be preferred.
Integration and segregation of outcomes is a means of framing that can impact the overall utility derived from multiple outcomes. Mental accounting interprets the tendency of people to mentally segregate their financial resources into different categories. In the event of financial losses or gains in different mental accounts, people will be impacted differently than if the financial loss was integrated across their entire financial portfolio. In the event of multiple gain and mixed loss, mental accounting will segregate outcomes resulting in maximised utility. In the event of multiple losses and mixed gain, mental accounting will segregate outcomes resulting in minimized utility.
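A short numerical sketch can make the four cases concrete. The Python snippet below (illustrative only) uses a standard prospect-theory-style value function with the commonly cited Tversky–Kahneman parameters (exponent 0.88, loss-aversion coefficient 2.25); with such parameters the "silver lining" case only appears when the gain is very small relative to the loss, consistent with the caveat above.
```python
def v(x, alpha=0.88, lam=2.25):
    """Prospect-theory-style value function: concave for gains, convex and steeper for losses."""
    return x**alpha if x >= 0 else -lam * (-x)**alpha

cases = {
    "multiple gains  (+40, +60)":  (40, 60),
    "multiple losses (-40, -60)":  (-40, -60),
    "mixed gain      (+100, -60)": (100, -60),
    "mixed loss      (+5, -5000)": (5, -5000),
}
for name, (x, y) in cases.items():
    segregated = v(x) + v(y)
    integrated = v(x + y)
    better = "segregate" if segregated > integrated else "integrate"
    print(f"{name}:  segregated = {segregated:9.2f}   integrated = {integrated:9.2f}   -> {better}")
```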
There are two values attached to any transaction - acquisition value and transaction value. "Acquisition value" is the money that one is ready to part with for physically acquiring some good. "Transaction value" is the value one attaches to having a good deal. If the price that one is paying is equal to the mental reference price for the good, the transaction value is zero. If the price is lower than the reference price, the transaction utility is positive. Total utility received from a transaction, then, is the sum of acquisition utility and transaction utility.
Pain of Paying.
A more proximal psychological mechanism through which mental accounting influences spending is through its influence on the pain of paying that is associated with spending money from a mental account. Pain of paying is a negative affective response associated with a financial loss. Prototypical examples are the unpleasant feeling that one experiences when watching the fare increase on a taximeter or at the gas pump. When considering an expense, consumers appear to compare the cost of the expense to the size of an account that it would deplete (e.g., numerator vs. denominator). A $30 t-shirt, for example, would be a subjectively larger expense when drawn from $50 in one's wallet than $500 in one's checking account. The larger the fraction, the more pain of paying the purchase appears to generate and the less likely consumers are to then exchange money for the good. Other evidence of the relation between pain of paying and spending include the lower debt held by consumers who report experiencing a higher pain of paying for the same goods and services than consumers who report experiencing less pain of paying.
Main principles of mental accounts.
Richard Thaler divided the concept mental accounting into two main principles; segregation of gains and losses, and account reference points. Both principles utilize concepts related to utility and pain of paying to interpret how people evaluate economic outcomes.
Segregation of gains and losses.
A main principle of mental accounting is the assertion that people frame gains and losses by segregating into different mental accounts rather than integrating into their overall account. The impact of this tendency means that outcomes can be framed based on the context of a decision. In mental accounting the framing of a decision reduces from the overall account to a smaller segregated account which can incentivize purchase decisions. An example of this was posed by Thaler where people were more inclined to drive 20 minutes to save $5 on a $15 purchase than on a $125 purchase. The principle applies to mental accounting where if gains and losses are viewed relative to a smaller segregated account then the outcome is viewed differently.
Account reference points.
Account reference points refer to the tendency for people to set a reference point on a current decision based on prior outcome in the same mental account. As a result the impact of prior outcomes integrate into the current decision when determining overall utility. An example was posed by Thaler where gamblers were more inclined to make risk-seeking bets on the last race of the day. This phenomenon was justified by the assertion that gamblers segregate the gains and losses from each day into separate accounts and integrate gains and losses for each day in an account. It can then be interpreted that end-of-day risk-seeking bets is an example of loss aversion where gamblers attempt to equalize their daily account.
Practical implications.
Since the inception of the concept, mental accounting has been applied to interpret consumer behavior particularly in the contexts of online shopping, consumer reward points, public taxation policy.
Psychology.
Mental accounting is subject to many logical fallacies and cognitive biases, which hold many implications such as overspending.
Credit cards and cash payments.
Another example of mental accounting is the greater willingness to pay for goods when using credit cards than cash. Swiping a credit card prolongs the payment to a later date (when we pay our monthly bill) and integrates it to a large existing sum (our bill to that point). This delay causes the payment to stick in our memory less clearly and saliently. Furthermore, the payment is no longer perceived in isolation; rather, it is seen as a (relatively) small increase of an already large credit card bill. For example, it might be a change from $120 to $125, instead of a regular, out-of-pocket $5 cost. And as we can see from our value function, this "V(-$125) – V(-$120)" is smaller than "V(-$5)," thus the pain of paying is reduced.
Marketing.
Mental accounting can help marketers predict customer response to the bundling of prices and the segregation of products. People respond more positively to incentives and costs when gains are segregated, losses are integrated, net losses are segregated (the silver lining principle), and net gains are integrated. Automotive dealers, for example, benefit from these principles when they bundle optional features into a single price but segregate each feature included in the bundle (e.g. velvet seat covers, aluminum wheels, anti-theft car lock). Cellular phone companies can use principles of mental accounting when deciding how much to charge consumers for a new smartphone and how much to give them for their trade-in. When the cost of the phone is large and the value of the phone to be traded in is low, it is better to charge consumers a slightly higher price for the phone and return that money to them as a higher value on their trade-in. Conversely, when the cost of the phone and the value of the trade-in are more comparable, because consumers are loss averse, it is better to charge them less for the new phone and offer them less for the trade-in.
Public policy.
Mental accounting can also be utilized in public economics and public policy. Inherently, the way that people (and therefore tax-payers and voters) perceive decisions and outcomes will be influenced by their process of mental accounting. Policy-makers and public economists could potentially apply mental accounting concepts when crafting public systems, trying to understand and identify market failures, redistribute wealth or resources in a fair way, reduce the saliency of sunk costs, limiting or eliminating the Free-rider problem, or even just when delivering bundles of multiple goods or services to taxpayers. The following examples exist where mental accounting applied to public policy and programs produced positive outcomes.
A good example of the importance of considering mental accounting while crafting public policy is demonstrated by authors Justine Hastings and Jesse Shapiro in their analysis of the SNAP (Supplemental Nutritional Assistance Program). They "argue that these findings are not consistent with households treating SNAP funds as fungible with non-SNAP funds, and we support this claim with formal tests of fungibility that allow different households to have different consumption functions" Put differently, their data supports Thaler's (and the concept of mental accounting's) claim that the principle of fungibility is often violated in practice. Furthermore, they find SNAP to be very effective, calculating a marginal propensity to consume SNAP-eligible food (MPCF) out of benefits received by SNAP of 0.5 to 0.6. This is much higher than the MPCF out of cash transfers, which is usually around 0.1.
The implications of taxation policy on taxpayers was examined through mental accounting principles in "Optimal Taxation with Behavioral Insights". The research paper applied the ideology of the three pillars of optimal taxation, and incorporated mental accounting concepts (as well as misperceptions and internalities). Outcomes included novel economic insights, including application of nudges present in optimal taxation frameworks, and challenging the Diamond-Mirrlees productive efficiency result and the Atkinson-Stiglitz uniform commodity taxation proposition, finding they are more likely to fail with behavioral agents.
In the paper "Public vs. Private Mental Accounts: Experimental Evidence from Savings Groups in Colombia", it was demonstrated that mental accounting can be exploited to help nudge people towards saving more. The study found that publicly creating a savings goal greatly increased the savings rate of participants when compared to the control and those who set savings goals privately. The power of the labeling effect was observed to vary based on the savings success history of the participants.
Mental accounting plays a powerful role in our decision-making processes. It is important for public policy experts, researchers, and policy-makers continue to explore the ways that it can be utilized to benefit public welfare.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "(x, y)"
},
{
"math_id": 1,
"text": "Value(x+y)"
},
{
"math_id": 2,
"text": "Value(x) + Value(y)"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "y"
},
{
"math_id": 5,
"text": "Value(x) + Value(y) > Value(x+y)"
},
{
"math_id": 6,
"text": "Value(-x) + Value(-y) < Value( -(x+y) )"
},
{
"math_id": 7,
"text": "Value(x) + Value(-y) < Value(x-y)"
},
{
"math_id": 8,
"text": "Value(x) + Value(-y) >Value(x-y) "
}
] | https://en.wikipedia.org/wiki?curid=1024314 |
1024323 | Pyrgeometer | Device which measures infra-red radiation
A pyrgeometer is a device that measures near-surface infra-red (IR) radiation, approximately from 4.5 μm to 100 μm on the electromagnetic spectrum (thereby excluding solar radiation).
It measures the resistance/voltage changes in a material that is sensitive to the net energy transfer by radiation that occurs between itself and its surroundings (which can be either in or out). By also measuring its own temperature and making some assumptions about the nature of its surroundings it can infer a temperature of the local atmosphere with which it is exchanging radiation.
Since the mean free path of IR radiation in the atmosphere is ~25 meters, this device typically measures IR flux in the nearest 25 meter layer.
Pyrgeometer components.
A pyrgeometer consists of the following major components:
Measurement of long wave downward radiation.
The atmosphere and the pyrgeometer (in effect its sensor surface) exchange long wave IR radiation. This results in a net radiation balance according to:
formula_0
Where (in SI units):
The pyrgeometer's thermopile detects the net radiation balance between the incoming and outgoing long wave radiation flux and converts it to a voltage according to the equation below.
formula_1
Where (in SI units):
The value for S is determined during calibration of the instrument. The calibration is performed at the production factory with a reference instrument traceable to a regional calibration center.
To derive the absolute downward long wave flux, the temperature of the pyrgeometer has to be taken into account. It is measured using a temperature sensor inside the instrument, near the cold junctions of the thermopile. The pyrgeometer is considered to approximate a black body. Due to this it emits long wave radiation according to:
formula_2
Where (in SI units):
From the calculations above the incoming long wave radiation can be derived. This is usually done by rearranging the equations above to yield the so-called pyrgeometer equation by Albrecht and Cox.
formula_3
Where all the variables have the same meaning as before.
As a result, the detected voltage and instrument temperature yield the total global long wave downward radiation.
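As an illustration (values are hypothetical, not from any instrument datasheet), the pyrgeometer equation can be evaluated directly:
```python
SIGMA = 5.670374419e-8        # Stefan–Boltzmann constant in W m^-2 K^-4

def longwave_in(u_emf, sensitivity, t_body):
    """E_in = U_emf / S + sigma*T^4, all in SI units (volts, V/(W m^-2), kelvin)."""
    e_net = u_emf / sensitivity          # net radiation balance seen by the thermopile
    e_out = SIGMA * t_body**4            # long-wave radiation emitted by the instrument itself
    return e_net + e_out

# hypothetical reading: -0.35 mV signal, sensitivity 10 microvolts per W/m^2, housing at 290 K
print(longwave_in(-0.35e-3, 10e-6, 290.0))   # about 366 W/m^2
```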
Usage.
Pyrgeometers are frequently used in meteorology and climatology studies. The atmospheric long-wave downward radiation is of interest for research into long-term climate change.
The signals are generally detected using a data logging system, capable of taking high resolution samples in the millivolt range.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E_\\mathrm{net} = E_\\mathrm{in} - E_\\mathrm{out}"
},
{
"math_id": 1,
"text": " E_\\mathrm{net} = \\frac{U_\\mathrm{emf}}{S}"
},
{
"math_id": 2,
"text": "E_\\mathrm{out} = \\sigma T^4"
},
{
"math_id": 3,
"text": "E_\\mathrm{in} = \\frac{U_\\mathrm{emf}}{S}+ \\sigma T^4"
}
] | https://en.wikipedia.org/wiki?curid=1024323 |
1024614 | Taylor–Proudman theorem | In fluid mechanics, the Taylor–Proudman theorem (after Geoffrey Ingram Taylor and Joseph Proudman) states that when a solid body is moved slowly within a fluid that is steadily rotated with a high angular velocity formula_0, the fluid velocity will be uniform along any line parallel to the axis of rotation. formula_0 must be large compared to the movement of the solid body in order to make the Coriolis force large compared to the acceleration terms.
Derivation.
The Navier–Stokes equations for steady flow, with zero viscosity and a body force corresponding to the Coriolis force, are
formula_1
where formula_2 is the fluid velocity, formula_3 is the fluid density, and formula_4 the pressure. If we assume that formula_5 is a scalar potential and the advective term on the left may be neglected (reasonable if the Rossby number is much less than unity) and that the flow is incompressible (density is constant), the equations become:
formula_6
where formula_0 is the angular velocity vector. If the curl of this equation is taken, the result is the Taylor–Proudman theorem:
formula_7
To derive this, one needs the vector identities
formula_8
and
formula_9
and
formula_10
(because the curl of the gradient is always equal to zero).
Note that formula_11 is also needed (angular velocity is divergence-free).
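The identity for the curl of a cross product can be checked symbolically. The following SymPy sketch (not part of the original derivation) verifies that, for a constant vector Ω and a general smooth field u, curl(Ω × u) = Ω(∇·u) − (Ω·∇)u, which is the combination used above once the constancy of Ω and ∇·Ω = 0 are taken into account.
```python
import sympy as sp

x, y, z = sp.symbols('x y z')
Ox, Oy, Oz = sp.symbols('Omega_x Omega_y Omega_z')        # constant rotation components
u = sp.Matrix([sp.Function(n)(x, y, z) for n in ('u1', 'u2', 'u3')])
Omega = sp.Matrix([Ox, Oy, Oz])

def curl(v):
    return sp.Matrix([v[2].diff(y) - v[1].diff(z),
                      v[0].diff(z) - v[2].diff(x),
                      v[1].diff(x) - v[0].diff(y)])

div_u = u[0].diff(x) + u[1].diff(y) + u[2].diff(z)
omega_dot_grad_u = sp.Matrix([Ox*c.diff(x) + Oy*c.diff(y) + Oz*c.diff(z) for c in u])

lhs = curl(Omega.cross(u))
rhs = Omega*div_u - omega_dot_grad_u   # (u.grad)Omega and u(div Omega) vanish for constant Omega
print(sp.simplify(lhs - rhs))          # expect the zero vector
```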
The vector form of the Taylor–Proudman theorem is perhaps better understood by expanding the dot product:
formula_12
In coordinates for which formula_13, the equations reduce to
formula_14
if formula_15. Thus, "all three" components of the velocity vector are uniform along any line parallel to the z-axis.
Taylor column.
The Taylor column is an imaginary cylinder projected above and below a real cylinder that has been placed parallel to the rotation axis (anywhere in the flow, not necessarily in the center). The flow will curve around the imaginary cylinders just like the real one, due to the Taylor–Proudman theorem, which states that the flow in a rotating, homogeneous, inviscid fluid is two-dimensional in the plane orthogonal to the rotation axis and thus there is no variation in the flow along the formula_16 axis, often taken to be the formula_17 axis.
The Taylor column is a simplified, experimentally observed effect of what transpires in the Earth's atmospheres and oceans.
History.
The result known as the Taylor-Proudman theorem was first derived by Sydney Samuel Hough (1870-1923), a mathematician at Cambridge University, in 1897. Proudman published another derivation in 1916 and Taylor in 1917, then the effect was demonstrated experimentally by Taylor in 1923.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Omega"
},
{
"math_id": 1,
"text": "\n\\rho({\\mathbf u}\\cdot\\nabla){\\mathbf u}={\\mathbf F}-\\nabla p,"
},
{
"math_id": 2,
"text": "{\\mathbf u}"
},
{
"math_id": 3,
"text": "\\rho"
},
{
"math_id": 4,
"text": "p"
},
{
"math_id": 5,
"text": "F=\\nabla\\Phi=-2\\rho\\mathbf\\Omega\\times{\\mathbf u}"
},
{
"math_id": 6,
"text": "\n2\\rho\\mathbf\\Omega\\times{\\mathbf u}=-\\nabla p,"
},
{
"math_id": 7,
"text": "\n({\\mathbf\\Omega}\\cdot\\nabla){\\mathbf u}={\\mathbf 0}.\n"
},
{
"math_id": 8,
"text": "\\nabla\\times(A\\times B)=A(\\nabla\\cdot B)-(A\\cdot\\nabla)B+(B\\cdot\\nabla)A-B(\\nabla\\cdot A)"
},
{
"math_id": 9,
"text": "\\nabla\\times(\\nabla p)=0\\ "
},
{
"math_id": 10,
"text": "\\nabla\\times(\\nabla \\Phi)=0\\ "
},
{
"math_id": 11,
"text": "\\nabla\\cdot{\\mathbf\\Omega}=0"
},
{
"math_id": 12,
"text": "\n\\Omega_x\\frac{\\partial {\\mathbf u}}{\\partial x} + \\Omega_y\\frac{\\partial {\\mathbf u}}{\\partial y} + \\Omega_z\\frac{\\partial {\\mathbf u}}{\\partial z}=0.\n"
},
{
"math_id": 13,
"text": "\\Omega_x=\\Omega_y=0"
},
{
"math_id": 14,
"text": "\n\\frac{\\partial{\\mathbf u}}{\\partial z}=0,"
},
{
"math_id": 15,
"text": "\\Omega_z\\neq 0"
},
{
"math_id": 16,
"text": "\\vec{\\Omega}"
},
{
"math_id": 17,
"text": "\\hat{z}"
}
] | https://en.wikipedia.org/wiki?curid=1024614 |
10246759 | Poincaré–Lindstedt method | Technique used in perturbation theory
In perturbation theory, the Poincaré–Lindstedt method or Lindstedt–Poincaré method is a technique for uniformly approximating periodic solutions to ordinary differential equations, when regular perturbation approaches fail. The method removes secular terms—terms growing without bound—arising in the straightforward application of perturbation theory to weakly nonlinear problems with finite oscillatory solutions.
The method is named after Henri Poincaré, and Anders Lindstedt.
<templatestyles src="Template:Blockquote/styles.css" />All efforts of geometers in the second half of this century have had as main objective the elimination of secular terms.
— The article gives several examples. The theory can be found in Chapter 10 of Nonlinear Differential Equations and Dynamical Systems by Verhulst.
Example: the Duffing equation.
The undamped, unforced Duffing equation is given by
formula_0
for "t" > 0, with 0 < "ε" ≪ 1.
Consider initial conditions
formula_1 formula_2
A perturbation-series solution of the form "x"("t") = "x"0("t") + "ε" "x"1("t") + ... is sought. The first two terms of the series are
formula_3
This approximation grows without bound in time, which is inconsistent with the physical system that the equation models. The term responsible for this unbounded growth, called the secular term, is formula_4. The Poincaré–Lindstedt method allows for the creation of an approximation that is accurate for all time, as follows.
In addition to expressing the solution itself as an asymptotic series, form another series with which to scale time "t":
formula_5 where formula_6
We have the leading order "ω"0 = 1, because when formula_7, the equation has solution formula_8. Then the original problem becomes
formula_9
Now search for a solution of the form "x"("τ") = "x"0("τ") + "ε" "x"1("τ") + ... . The following solutions for the zeroth and first order problem in "ε" are obtained:
formula_10
So the secular term can be removed through the choice "ω"1 = 3/8. Higher orders of accuracy can be obtained by continuing the perturbation analysis along this way. As of now, the approximation—correct up to first order in "ε"—is
formula_11
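A short numerical check (not part of the original article) makes the difference visible: the naive series degrades as t grows because of the secular term, while the frequency-shifted approximation stays accurate. The script below uses SciPy to integrate the Duffing equation with the same initial conditions.
```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.1
t = np.linspace(0, 100, 4001)
sol = solve_ivp(lambda t, s: [s[1], -s[0] - eps*s[0]**3],
                (0, 100), [1.0, 0.0], t_eval=t, rtol=1e-10, atol=1e-12)
x_num = sol.y[0]

# naive perturbation series, containing the secular term t*sin(t)
x_naive = np.cos(t) + eps*((np.cos(3*t) - np.cos(t))/32 - (3/8)*t*np.sin(t))

# Poincaré–Lindstedt first-order approximation with shifted frequency
w = 1 + (3/8)*eps
x_pl = np.cos(w*t) + (eps/32)*(np.cos(3*w*t) - np.cos(w*t))

print("max |error|, naive series       :", np.max(np.abs(x_naive - x_num)))
print("max |error|, Lindstedt (order 1):", np.max(np.abs(x_pl - x_num)))
```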
Example: the van der Pol oscillator.
We solve the van der Pol oscillator only up to order 2. This method can be continued indefinitely in the same way, where the order-n term formula_12 consists of a harmonic term formula_13, plus some super-harmonic terms formula_14. The coefficients of the super-harmonic terms are solved directly, and the coefficients of the harmonic term are determined by expanding down to order-(n+1), and eliminating its secular term.
See chapter 10 of for a derivation up to order 3, and for a computer derivation up to order 164.
Consider the van der Pol oscillator with equationformula_15where formula_16 is a small positive number. Perform substitution to the second order:formula_5 where formula_17which yields the equationformula_18Now plug in formula_19, and we have three equations, for the orders formula_20 respectively:formula_21The first equation has general solution formula_22. Pick origin of time such that formula_23. Then plug it into the second equation to obtain (after some trigonometric identities)formula_24To eliminate the secular term, we must set both formula_25 coefficients to zero, thus we have formula_26yielding formula_27. In particular, we found that when formula_16 increases from zero to a small positive constant, all circular orbits in phase space are destroyed, except the one at radius 2. Now solving formula_28 yields formula_29. We can always absorb formula_30 term into formula_31, so we can WLOG have just formula_32.
Now plug into the third equation to obtainformula_33To eliminate the secular term, we set formula_34.
Thus we find that formula_35.
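This prediction is easy to check numerically (a sketch, not from the original article): integrate the oscillator past its transient, estimate the limit-cycle period from zero crossings, and compare the resulting angular frequency with 1 − ε²/16.
```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.2
f = lambda t, s: [s[1], -eps*(s[0]**2 - 1)*s[1] - s[0]]

sol = solve_ivp(f, (0, 400), [2.0, 0.0], rtol=1e-10, atol=1e-10, dense_output=True)
t = np.linspace(300, 400, 200001)            # sample well after the transient
x = sol.sol(t)[0]
up = t[:-1][(x[:-1] < 0) & (x[1:] >= 0)]     # upward zero crossings
period = np.mean(np.diff(up))

print("numerical omega:", 2*np.pi/period)
print("1 - eps^2/16   :", 1 - eps**2/16)
```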
Example: Mathieu equation.
This is an example of parametric resonance.
Consider the Mathieu equation formula_36, where formula_37 is a constant, and formula_16 is small. The equation's solution would have two time-scales, one fast-varying on the order of formula_38, and another slow-varying on the order of formula_39. So expand the solution as formula_40Now plug into the Mathieu equation and expand to obtainformula_41As before, we have the solutionsformula_42The secular term coefficients in the third equation are formula_43Setting them to zero, we find the equations of motion:
formula_44
Its determinant is formula_45, and so when formula_46, the origin is a saddle point, so the amplitude of oscillation formula_47 grows unboundedly.
In other words, when the angular frequency (in this case, formula_48) in the parameter is sufficiently close to the angular frequency (in this case, formula_49) of the original oscillator, the oscillation grows unboundedly, like a child swinging on a swing pumping all the way to the moon.
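A numerical illustration (a sketch with illustrative values, not from the original article): integrating the Mathieu equation with ε = 0.3 shows slowly growing oscillations for a value of b inside (−1/12, 5/12) and bounded oscillations for a value well outside it.
```python
import numpy as np
from scipy.integrate import solve_ivp

def growth_factor(b, eps=0.3, t_end=800.0):
    f = lambda t, s: [s[1], -(1 + b*eps**2 + eps*np.cos(t))*s[0]]
    sol = solve_ivp(f, (0, t_end), [0.01, 0.0], rtol=1e-9, atol=1e-12, dense_output=True)
    t = np.linspace(0, t_end, 40001)
    x = np.abs(sol.sol(t)[0])
    return x[t > 0.9*t_end].max() / x[t < 0.1*t_end].max()

print("b = 0   (inside the band):", growth_factor(0.0))   # amplitude grows by orders of magnitude
print("b = 1.0 (outside)        :", growth_factor(1.0))   # amplitude stays of order one
```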
Shohat expansion.
For the van der Pol oscillator, we have formula_50 for large formula_16, so as formula_16 becomes large, the serial expansion of formula_51 in terms of formula_16 diverges and we would need to keep more and more terms of it to keep formula_51 bounded. This suggests to us a parametrization that is bounded:formula_52Then, using serial expansions formula_53 and formula_54, and using the same method of eliminating the secular terms, we find formula_55.
Because formula_56, the expansion formula_53 allows us to take a finite number of terms for the series on the right, and it would converge to a finite value at formula_57 limit. Then we would have formula_50, which is exactly the desired asymptotic behavior. This is the idea behind Shohat expansion.
The exact asymptotic constant is formula_58, which as we can see is approached by formula_59. | [
{
"math_id": 0,
"text": "\\ddot{x} + x + \\varepsilon\\, x^3 = 0\\,"
},
{
"math_id": 1,
"text": "x(0) = 1,\\,"
},
{
"math_id": 2,
"text": " \\dot x(0) = 0.\\,"
},
{
"math_id": 3,
"text": "x(t) = \\cos(t) + \\varepsilon \\left[ \\tfrac{1}{32}\\, \\left( \\cos(3t) - \\cos(t) \\right) - \\tfrac{3}{8}\\, t\\, \\sin(t) \\right] + \\cdots.\\,"
},
{
"math_id": 4,
"text": "t\\sin(t)"
},
{
"math_id": 5,
"text": "\\tau = \\omega t,\\,"
},
{
"math_id": 6,
"text": "\\omega = \\omega_0 + \\varepsilon \\omega_1 + \\cdots.\\,"
},
{
"math_id": 7,
"text": "\\epsilon = 0"
},
{
"math_id": 8,
"text": "x = \\cos (t)"
},
{
"math_id": 9,
"text": "\\omega^2\\, x''(\\tau) + x(\\tau) + \\varepsilon\\, x^3(\\tau) = 0\\,"
},
{
"math_id": 10,
"text": "\n \\begin{align}\n x_0 &= \\cos(\\tau) \\\\\n \\text{and }\n x_1 &= \\tfrac{1}{32}\\, \\left(\\cos(3\\tau)-\\cos(\\tau)\\right) + \\left( \\omega_1 - \\tfrac{3}{8} \\right)\\, \\tau\\, \\sin(\\tau).\n \\end{align}\n"
},
{
"math_id": 11,
"text": "\n x(t) \\approx \\cos\\Bigl(\\left(1 + \\tfrac{3}{8}\\, \\varepsilon \\right)\\, t \\Bigr) \n + \\tfrac{1}{32}\\, \\varepsilon\\, \\left[\\cos\\Bigl( 3 \\left(1 + \\tfrac{3}{8}\\,\\varepsilon\\, \\right)\\, t \\Bigr)-\\cos\\Bigl(\\left(1 + \\tfrac{3}{8}\\,\\varepsilon\\, \\right)\\, t \\Bigr)\\right]. \\,\n"
},
{
"math_id": 12,
"text": "\\epsilon^n x_n"
},
{
"math_id": 13,
"text": "a_n\\cos(t) + b_n\\cos(t)"
},
{
"math_id": 14,
"text": "a_{n, 2}\\cos(2t) + b_{n, 2}\\cos(2t) + \\cdots"
},
{
"math_id": 15,
"text": "\\ddot x + \\epsilon (x^2-1) \\dot x + x = 0"
},
{
"math_id": 16,
"text": "\\epsilon"
},
{
"math_id": 17,
"text": "\\omega = 1 + \\epsilon \\omega_1 + \\epsilon^2 \\omega_2 + O(\\epsilon^3)"
},
{
"math_id": 18,
"text": "\\omega^2\\ddot x + \\omega\\epsilon (x^2-1) \\dot x + x = 0"
},
{
"math_id": 19,
"text": "x = x_0 + \\epsilon x_1 + \\epsilon^2 x_2 + O(\\epsilon^3)"
},
{
"math_id": 20,
"text": "1, \\epsilon, \\epsilon^2"
},
{
"math_id": 21,
"text": "\\begin{cases}\n\\ddot x_0 + x_0 = 0 \\\\\n\\ddot x_1 + x_1 + 2\\omega_1 \\ddot x_0 + (x_0^2-1)\\dot x_0 = 0 \\\\\n\\ddot x_2 + x_2 + (\\omega_1^2 + 2\\omega_2)\\ddot x_0 + 2\\omega_1 \\ddot x_1 + 2x_0x_1\\dot x_0 + \\omega_1(x_0^2-1)\\dot x_0 + \\dot x_1(x_0^2-1) = 0\n\\end{cases}"
},
{
"math_id": 22,
"text": "x_0 =A \\cos (\\tau + \\phi)"
},
{
"math_id": 23,
"text": "\\phi = 0"
},
{
"math_id": 24,
"text": "\\ddot x_1 + x_1 + (A-A^3/4) \\sin\\tau - 2\\omega_1 A \\cos \\tau - (A^3/4) \\sin(3\\tau) = 0"
},
{
"math_id": 25,
"text": "\\sin\\tau, \\cos \\tau"
},
{
"math_id": 26,
"text": "\\begin{cases}\nA= A^3/4 \\\\\n2\\omega_1 A = 0\n\\end{cases}"
},
{
"math_id": 27,
"text": "A = 2, \\omega_1 = 0"
},
{
"math_id": 28,
"text": "\\ddot x_1 + x_1 = 2\\sin(3\\tau)"
},
{
"math_id": 29,
"text": "x_1 = B \\cos(\\tau + \\phi) -\\frac 14 \\sin(3\\tau)"
},
{
"math_id": 30,
"text": "\\epsilon B\\cos(\\tau + \\phi)"
},
{
"math_id": 31,
"text": "x_0"
},
{
"math_id": 32,
"text": "x_1 = -\\frac 14 \\sin(3\\tau)"
},
{
"math_id": 33,
"text": "\\ddot x_2 + x_2 -(4\\omega_2 + 1/4)\\cos\\tau - \\frac 34 \\cos 3\\tau - \\frac 54 \\cos 5\\tau = 0"
},
{
"math_id": 34,
"text": "\\omega_2 = -\\frac{1}{16}"
},
{
"math_id": 35,
"text": "\\omega = 1 - \\frac{1}{16}\\epsilon^2 + O(\\epsilon^3)"
},
{
"math_id": 36,
"text": "\\ddot x + (1 + b\\epsilon^2 + \\epsilon \\cos(t))x = 0"
},
{
"math_id": 37,
"text": "b"
},
{
"math_id": 38,
"text": "t"
},
{
"math_id": 39,
"text": "T = \\epsilon^2 t"
},
{
"math_id": 40,
"text": "x(t) = x_0(t, T) + \\epsilon x_1(t, T) + \\epsilon^2 x_2(t, T) + O(\\epsilon^3)"
},
{
"math_id": 41,
"text": "\\begin{cases}\n\\partial_t^2 x_0 + x_0 = 0 \\\\\n\\partial_t^2 x_1 + x_1 = -\\cos(t)x_0 \\\\\n\\partial_t^2 x_2 + x_2 = -bx_0 - 2\\partial_{tT}x_0 - \\cos(t)x_1\n\\end{cases}"
},
{
"math_id": 42,
"text": "\\begin{cases}\nx_0 = A\\cos(t) + B \\sin(t) \\\\\nx_1 = -\\frac A2 + \\frac A6 \\cos(2t) + \\frac B6 \\sin(2t)\n\\end{cases}"
},
{
"math_id": 43,
"text": "\\begin{cases}\n\\frac{1}{12} \\left(-12 b A+5 A-24 B'\\right) \\\\\n\\frac{1}{12} \\left(24 A'-12 b B-B\\right)\n\\end{cases}"
},
{
"math_id": 44,
"text": "\\frac{d}{dT} \\begin{bmatrix}\nA \\\\ B\n\\end{bmatrix} = \n\\begin{bmatrix}\n0 & \\frac 12 (\\frac{1}{12} + b) \\\\\n\\frac 12 (\\frac{5}{12} - b) & 0 \\\\\n\\end{bmatrix}\n\\begin{bmatrix}\nA\\\\B\n\\end{bmatrix}"
},
{
"math_id": 45,
"text": "\\frac 14 (b-5/12) (b + 1/12)"
},
{
"math_id": 46,
"text": "b \\in (-1/12, 5/12)"
},
{
"math_id": 47,
"text": "\\sqrt{A^2 + B^2}"
},
{
"math_id": 48,
"text": "1"
},
{
"math_id": 49,
"text": "\\sqrt{1+b\\epsilon^2 }"
},
{
"math_id": 50,
"text": "\\omega \\sim 1/\\epsilon"
},
{
"math_id": 51,
"text": "\\omega"
},
{
"math_id": 52,
"text": "r := \\frac{\\epsilon}{1+\\epsilon}"
},
{
"math_id": 53,
"text": "\\epsilon \\omega = r + c_2 r^2 + c_3 r^3 + c_4 r^4 + \\cdots"
},
{
"math_id": 54,
"text": "x = x_0 + r x_1 + r^2 x_2 + \\cdots"
},
{
"math_id": 55,
"text": "c_2 = 1, c_3 = \\frac{15}{16}, c_4 = \\frac{13}{16}"
},
{
"math_id": 56,
"text": "\\lim_{\\epsilon \\to \\infty}r = 1"
},
{
"math_id": 57,
"text": "\\epsilon \\to \\infty"
},
{
"math_id": 58,
"text": "\\epsilon\\omega \\to \\frac{2\\pi}{3-2\\ln 2} = 3.8936\\cdots "
},
{
"math_id": 59,
"text": "1 + c_2 + c_3 + c_4 = 3.75"
}
] | https://en.wikipedia.org/wiki?curid=10246759 |
102476 | Log-normal distribution | Probability distribution
In probability theory, a log-normal (or lognormal) distribution is a continuous probability distribution of a random variable whose logarithm is normally distributed. Thus, if the random variable "X" is log-normally distributed, then "Y" = ln("X") has a normal distribution. Equivalently, if "Y" has a normal distribution, then the exponential function of "Y", "X" = exp("Y"), has a log-normal distribution. A random variable which is log-normally distributed takes only positive real values. It is a convenient and useful model for measurements in exact and engineering sciences, as well as medicine, economics and other topics (e.g., energies, concentrations, lengths, prices of financial instruments, and other metrics).
The distribution is occasionally referred to as the Galton distribution or Galton's distribution, after Francis Galton. The log-normal distribution has also been associated with other names, such as McAlister, Gibrat and Cobb–Douglas.
A log-normal process is the statistical realization of the multiplicative product of many independent random variables, each of which is positive. This is justified by considering the central limit theorem in the log domain (sometimes called Gibrat's law). The log-normal distribution is the maximum entropy probability distribution for a random variate X—for which the mean and variance of ln("X") are specified.
Definitions.
Generation and parameters.
Let formula_1 be a standard normal variable, and let formula_2 and formula_0 be two real numbers, with formula_3. Then, the distribution of the random variable
formula_4
is called the log-normal distribution with parameters formula_2 and formula_0. These are the expected value (or mean) and standard deviation of the variable's natural logarithm, "not" the expectation and standard deviation of formula_5 itself.
This relationship is true regardless of the base of the logarithmic or exponential function: If formula_6 is normally distributed, then so is formula_7 for any two positive numbers formula_8 Likewise, if formula_9 is log-normally distributed, then so is formula_10 where formula_11.
In order to produce a distribution with desired mean formula_12 and variance formula_13 one uses
formula_14 and formula_15
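For illustration (a minimal sketch, not from the article's references), these formulas can be checked by direct simulation: draw a large log-normal sample with the computed μ and σ and compare its sample mean and variance with the targets.
```python
import numpy as np

mean_x, var_x = 3.0, 4.0                  # desired arithmetic mean and variance (illustrative)
mu    = np.log(mean_x**2 / np.sqrt(mean_x**2 + var_x))
sigma = np.sqrt(np.log(1 + var_x / mean_x**2))

rng = np.random.default_rng(42)
x = np.exp(mu + sigma * rng.standard_normal(1_000_000))
print(x.mean(), x.var())                  # close to 3.0 and 4.0
```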
Alternatively, the "multiplicative" or "geometric" parameters formula_16 and formula_17 can be used. They have a more direct interpretation: formula_18 is the median of the distribution, and formula_19 is useful for determining "scatter" intervals, see below.
Probability density function.
A positive random variable formula_5 is log-normally distributed (i.e., formula_20), if the natural logarithm of formula_5 is normally distributed with mean formula_21 and variance formula_22
formula_23
Let formula_24 and formula_25 be respectively the cumulative probability distribution function and the probability density function of the formula_26 standard normal distribution, then we have that the probability density function of the log-normal distribution is given by:
formula_27
Cumulative distribution function.
The cumulative distribution function is
formula_28
where formula_24 is the cumulative distribution function of the standard normal distribution (i.e., formula_29).
This may also be expressed as follows:
formula_30
where erfc is the complementary error function.
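As a quick consistency check (a sketch, not part of the article), the expressions for the cdf agree numerically, and they match SciPy's lognorm distribution, whose shape and scale parameters correspond to s = σ and scale = exp(μ):
```python
import numpy as np
from scipy.stats import norm, lognorm
from scipy.special import erfc

mu, sigma, x = 0.5, 0.8, 2.0              # illustrative values
cdf_phi   = norm.cdf((np.log(x) - mu) / sigma)
cdf_erfc  = 0.5 * erfc(-(np.log(x) - mu) / (sigma * np.sqrt(2)))
cdf_scipy = lognorm.cdf(x, s=sigma, scale=np.exp(mu))
print(cdf_phi, cdf_erfc, cdf_scipy)       # all three agree
```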
Multivariate log-normal.
If formula_31 is a multivariate normal distribution, then formula_32 has a multivariate log-normal distribution. The exponential is applied elementwise to the random vector formula_33. The mean of formula_34 is
formula_35
and its covariance matrix is
formula_36
Since the multivariate log-normal distribution is not widely used, the rest of this entry only deals with the univariate distribution.
Characteristic function and moment generating function.
All moments of the log-normal distribution exist and
formula_37
This can be derived by letting formula_38 within the integral. However, the log-normal distribution is not determined by its moments. This implies that it cannot have a defined moment generating function in a neighborhood of zero. Indeed, the expected value formula_39 is not defined for any positive value of the argument formula_40, since the defining integral diverges.
The characteristic function formula_41 is defined for real values of "t", but is not defined for any complex value of "t" that has a negative imaginary part, and hence the characteristic function is not analytic at the origin. Consequently, the characteristic function of the log-normal distribution cannot be represented as an infinite convergent series. In particular, its Taylor formal series diverges:
formula_42
However, a number of alternative divergent series representations have been obtained.
A closed-form formula for the characteristic function formula_43 with formula_40 in the domain of convergence is not known. A relatively simple approximating formula is available in closed form, and is given by
formula_44
where formula_45 is the Lambert W function. This approximation is derived via an asymptotic method, but it stays sharp all over the domain of convergence of formula_46.
Properties.
Probability in different domains.
The probability content of a log-normal distribution in any arbitrary domain can be computed to desired precision by first transforming the variable to normal, then numerically integrating using the ray-trace method. (Matlab code)
Probabilities of functions of a log-normal variable.
Since the probability of a log-normal can be computed in any domain, this means that the cdf (and consequently pdf and inverse cdf) of any function of a log-normal variable can also be computed. (Matlab code)
Geometric or multiplicative moments.
The geometric or multiplicative mean of the log-normal distribution is formula_47. It equals the median. The geometric or multiplicative standard deviation is formula_48.
By analogy with the arithmetic statistics, one can define a geometric variance, formula_49, and a geometric coefficient of variation, formula_50. This term was intended to be "analogous" to the coefficient of variation, for describing multiplicative variation in log-normal data, but this definition of GCV has no theoretical basis as an estimate of formula_51 itself (see also Coefficient of variation).
Note that the geometric mean is smaller than the arithmetic mean. This is due to the AM–GM inequality and is a consequence of the logarithm being a concave function. In fact,
formula_52
In finance, the term formula_53 is sometimes interpreted as a convexity correction. From the point of view of stochastic calculus, this is the same correction term as in Itō's lemma for geometric Brownian motion.
Arithmetic moments.
For any real or complex number "n", the "n"-th moment of a log-normally distributed variable "X" is given by
formula_54
Specifically, the arithmetic mean, expected square, arithmetic variance, and arithmetic standard deviation of a log-normally distributed variable "X" are respectively given by:
formula_55
The arithmetic coefficient of variation formula_56 is the ratio formula_57. For a log-normal distribution it is equal to
formula_58
This estimate is sometimes referred to as the "geometric CV" (GCV), due to its use of the geometric variance. Contrary to the arithmetic standard deviation, the arithmetic coefficient of variation is independent of the arithmetic mean.
The parameters "μ" and "σ" can be obtained, if the arithmetic mean and the arithmetic variance are known:
formula_59
A probability distribution is not uniquely determined by the moments E["X""n"] = exp("nμ" + "n"2"σ"2/2) for "n" ≥ 1. That is, there exist other distributions with the same set of moments. In fact, there is a whole family of distributions with the same moments as the log-normal distribution.
Mode, median, quantiles.
The mode is the point of global maximum of the probability density function. In particular, by solving the equation formula_60, we get that:
formula_61
Since the log-transformed variable formula_62 has a normal distribution, and quantiles are preserved under monotonic transformations, the quantiles of formula_63 are
formula_64
where formula_65 is the quantile of the standard normal distribution.
Specifically, the median of a log-normal distribution is equal to its multiplicative mean,
formula_66
Partial expectation.
The partial expectation of a random variable formula_63 with respect to a threshold formula_67 is defined as
formula_68
Alternatively, by using the definition of conditional expectation, it can be written as formula_69. For a log-normal random variable, the partial expectation is given by:
formula_70
where formula_71 is the normal cumulative distribution function. The derivation of the formula is provided in the . The partial expectation formula has applications in insurance and economics; it is used in solving the partial differential equation leading to the Black–Scholes formula.
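A Monte Carlo sketch (illustrative, not from the article's references) can be used to check the closed form of the partial expectation, here written out explicitly as exp(μ + σ²/2) Φ((μ + σ² − ln "k")/σ):
```python
import numpy as np
from scipy.stats import norm

mu, sigma, k = 0.3, 0.7, 1.5
rng = np.random.default_rng(1)
x = np.exp(mu + sigma * rng.standard_normal(2_000_000))

mc     = np.mean(x * (x > k))                                   # E[X * 1{X > k}] by simulation
closed = np.exp(mu + sigma**2/2) * norm.cdf((mu + sigma**2 - np.log(k)) / sigma)
print(mc, closed)                                               # agree to a few decimal places
```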
Conditional expectation.
The conditional expectation of a log-normal random variable formula_63—with respect to a threshold formula_67—is its partial expectation divided by the cumulative probability of being in that range:
formula_72
Alternative parameterizations.
In addition to the characterization by formula_73 or formula_74, there are multiple ways in which the log-normal distribution can be parameterized. ProbOnto, the knowledge base and ontology of probability distributions, lists seven such forms:
Examples for re-parameterization.
Consider the situation when one would like to run a model using two different optimal design tools, for example PFIM and PopED. The former supports the LN2 parameterization and the latter the LN7 parameterization. Therefore, re-parameterization is required, as otherwise the two tools would produce different results.
For the transition formula_82 following formulas hold formula_83 and formula_84.
For the transition formula_85 following formulas hold formula_86 and formula_87.
All remaining re-parameterisation formulas can be found in the specification document on the project website.
Multiplication and division of independent, log-normal random variables.
If two independent, log-normal variables formula_94 and formula_95 are multiplied [divided], the product [ratio] is again log-normal, with parameters formula_96 [formula_97] and formula_0, where formula_98. This is easily generalized to the product of formula_99 such variables.
More generally, if formula_100 are formula_99 independent, log-normally distributed variables, then formula_101
Multiplicative central limit theorem.
The geometric or multiplicative mean of formula_99 independent, identically distributed, positive random variables formula_102 shows, for formula_103, approximately a log-normal distribution with parameters formula_104 and formula_105, assuming formula_106 is finite.
In fact, the random variables do not have to be identically distributed. It is enough for the distributions of formula_107 to all have finite variance and satisfy the other conditions of any of the many variants of the central limit theorem.
This is commonly known as Gibrat's law.
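A simulation sketch (not from the article) illustrates this: take the product of many i.i.d. positive factors, here uniform on an arbitrary positive interval, and observe that the logarithm of the product is close to normal.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, trials = 200, 50_000
factors = rng.uniform(0.8, 1.25, size=(trials, n))   # any i.i.d. positive factors with finite log-variance
log_products = np.log(factors.prod(axis=1))

print("skewness of log(product) :", stats.skew(log_products))       # near 0
print("excess kurtosis          :", stats.kurtosis(log_products))   # near 0
```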
Other.
A set of data that arises from the log-normal distribution has a symmetric Lorenz curve (see also Lorenz asymmetry coefficient).
The harmonic formula_108, geometric formula_109 and arithmetic formula_110 means of this distribution are related; such relation is given by
formula_111
Log-normal distributions are infinitely divisible, but they are not stable distributions, which can be easily drawn from.
Related distributions.
For a more accurate approximation, one can use the Monte Carlo method to estimate the cumulative distribution function, the pdf and the right tail.
The sum of correlated log-normally distributed random variables can also be approximated by a log-normal distribution formula_123
Statistical inference.
Estimation of parameters.
For determining the maximum likelihood estimators of the log-normal distribution parameters "μ" and "σ", we can use the same procedure as for the normal distribution. Note that
formula_132
where formula_46 is the density function of the normal distribution formula_133. Therefore, the log-likelihood function is
formula_134
Since the first term is constant with regard to "μ" and "σ", both logarithmic likelihood functions, formula_135 and formula_136, reach their maximum with the same formula_2 and formula_0. Hence, the maximum likelihood estimators are identical to those for a normal distribution for the observations formula_137,
formula_138
For finite "n", the estimator for formula_2 is unbiased, but the one for formula_0 is biased. As for the normal distribution, an unbiased estimator for formula_0 can be obtained by replacing the denominator "n" by "n"−1 in the equation for formula_139.
When the individual values formula_140 are not available, but the sample's mean formula_141 and standard deviation "s" is, then the Method of moments can be used. The corresponding parameters are determined by the following formulas, obtained from solving the equations for the expectation formula_142 and variance formula_143 for formula_2 and formula_0:
formula_144
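The two estimation routes can be compared on simulated data (a sketch, with illustrative true values):
```python
import numpy as np

rng = np.random.default_rng(3)
mu_true, sigma_true = 1.0, 0.5
x = np.exp(mu_true + sigma_true * rng.standard_normal(10_000))

# maximum likelihood: normal-distribution estimators applied to ln(x)
lx = np.log(x)
mu_ml, sigma_ml = lx.mean(), lx.std(ddof=0)          # ddof=0 gives the ML (slightly biased) estimator

# method of moments: invert the expressions for E[X] and Var[X]
xbar, s2 = x.mean(), x.var(ddof=1)
mu_mm    = np.log(xbar**2 / np.sqrt(s2 + xbar**2))
sigma_mm = np.sqrt(np.log(1 + s2 / xbar**2))

print("ML :", mu_ml, sigma_ml)
print("MoM:", mu_mm, sigma_mm)
```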
Interval estimates.
The most efficient way to obtain interval estimates when analyzing log-normally distributed data consists of applying the well-known methods based on the normal distribution to logarithmically transformed data and then to back-transform results if appropriate.
Prediction intervals.
A basic example is given by prediction intervals: For the normal distribution, the interval formula_145 contains approximately two thirds (68%) of the probability (or of a large sample), and formula_146 contains 95%. Therefore, for a log-normal distribution,
formula_147 contains 2/3, and
formula_148 contains 95% of the probability. Using estimated parameters, approximately the same percentages of the data should be contained in these intervals.
Confidence interval for "μ*".
Using this principle, note that a confidence interval for formula_2 is formula_149, where formula_150 is the standard error and "q" is the 97.5% quantile of a t distribution with "n-1" degrees of freedom. Back-transformation leads to a confidence interval for formula_151 (the median):
formula_152 with formula_153
Confidence interval for "μ".
The literature discusses several options for calculating the confidence interval for formula_2 (the mean of the log-normal distribution). These include bootstrap as well as various other methods.
Extremal principle of entropy to fix the free parameter "σ".
In applications, formula_0 is a parameter to be determined. For growing processes balanced by production and dissipation, the use of an extremal principle of Shannon entropy shows that
formula_154
This value can then be used to give some scaling relation between the inflexion point and maximum point of the log-normal distribution. This relationship is determined by the base of natural logarithm, formula_155, and exhibits some geometrical similarity to the minimal surface energy principle.
These scaling relations are useful for predicting a number of growth processes (epidemic spreading, droplet splashing, population growth, swirling rate of the bathtub vortex, distribution of language characters, velocity profile of turbulences, etc.).
For example, the log-normal function with such formula_0 fits well with the size of secondarily produced droplets during droplet impact and the spreading of an epidemic disease.
The value formula_156 is used to provide a probabilistic solution for the Drake equation.
Occurrence and applications.
The log-normal distribution is important in the description of natural phenomena. Many natural growth processes are driven by the accumulation of many small percentage changes which become additive on a log scale. Under appropriate regularity conditions, the distribution of the resulting accumulated changes will be increasingly well approximated by a log-normal, as noted in the section above on "Multiplicative Central Limit Theorem". This is also known as Gibrat's law, after Robert Gibrat (1904–1980) who formulated it for companies. If the rate of accumulation of these small changes does not vary over time, growth becomes independent of size. Even if this assumption is not true, the size distributions at any age of things that grow over time tend to be log-normal. Consequently, reference ranges for measurements in healthy individuals are more accurately estimated by assuming a log-normal distribution than by assuming a symmetric distribution about the mean.
A second justification is based on the observation that fundamental natural laws imply multiplications and divisions of positive variables. Examples are the simple gravitation law connecting masses and distance with the resulting force, or the formula for equilibrium concentrations of chemicals in a solution that connects concentrations of educts and products. Assuming log-normal distributions of the variables involved leads to consistent models in these cases.
Specific examples are given in the following subsections. A review and table of log-normal distributions from geology, biology, medicine, food, ecology, and other areas is available in the literature, as is a review article on log-normal distributions in neuroscience with an annotated bibliography.
The image on the right, made with CumFreq, illustrates an example of fitting the log-normal distribution to ranked annually maximum one-day rainfalls showing also the 90% confidence belt based on the binomial distribution.
The rainfall data are represented by plotting positions as part of a cumulative frequency analysis.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma"
},
{
"math_id": 1,
"text": "\\ Z\\ "
},
{
"math_id": 2,
"text": "\\mu"
},
{
"math_id": 3,
"text": "\\sigma > 0"
},
{
"math_id": 4,
"text": " X = e^{\\mu + \\sigma Z} "
},
{
"math_id": 5,
"text": "\\ X\\ "
},
{
"math_id": 6,
"text": "\\ \\log_a(X)\\ "
},
{
"math_id": 7,
"text": "\\ \\log_b(X)\\ "
},
{
"math_id": 8,
"text": "\\ a, b \\neq 1 ~."
},
{
"math_id": 9,
"text": "\\ e^Y\\ "
},
{
"math_id": 10,
"text": "\\ a^Y\\ ,"
},
{
"math_id": 11,
"text": "0 < a \\neq 1"
},
{
"math_id": 12,
"text": "\\mu_X"
},
{
"math_id": 13,
"text": "\\ \\sigma_X^2\\ ,"
},
{
"math_id": 14,
"text": "\\ \\mu = \\ln\\left( \\frac{ \\mu_X^2 }{\\ \\sqrt{ \\mu_X^2 + \\sigma_X^2\\ }\\ }\\right)\\ "
},
{
"math_id": 15,
"text": "\\ \\sigma^2 = \\ln\\left( 1 + \\frac{\\ \\sigma_X^2\\ }{ \\mu_X^2 } \\right) ~."
},
{
"math_id": 16,
"text": "\\ \\mu^* = e^\\mu\\ "
},
{
"math_id": 17,
"text": "\\ \\sigma^* = e^\\sigma\\ "
},
{
"math_id": 18,
"text": "\\ \\mu^*\\ "
},
{
"math_id": 19,
"text": "\\ \\sigma^*\\ "
},
{
"math_id": 20,
"text": "\\ X \\sim \\operatorname{Lognormal}\\left(\\ \\mu, \\sigma^2\\ \\right)\\ "
},
{
"math_id": 21,
"text": " \\mu"
},
{
"math_id": 22,
"text": "\\ \\sigma^2\\ :"
},
{
"math_id": 23,
"text": " \\ln(X) \\sim \\mathcal{N}(\\mu,\\sigma^2)"
},
{
"math_id": 24,
"text": "\\ \\Phi\\ "
},
{
"math_id": 25,
"text": "\\ \\varphi\\ "
},
{
"math_id": 26,
"text": "\\ \\mathcal{N}(\\ 0, 1\\ )\\ "
},
{
"math_id": 27,
"text": "\n\\begin{align}\nf_X(x) & = \\frac{ \\rm{d} }{ {\\rm d} x }\\ \\operatorname{\\mathbb{P}_\\mathit{X}}\\,\\!\\bigl[\\ X \\le x\\ \\bigr] \\\\[6pt]\n& = \\frac{ \\rm{d} }{ {\\rm d} x }\\ \\operatorname{\\mathbb{P}_\\mathit{X}}\\,\\!\\bigl[\\ \\ln X \\le \\ln x\\ \\bigr] \\\\[6pt]\n& = \\frac{ \\rm{d} }{ {\\rm d} x } \\operatorname{\\Phi}\\!\\!\\left( \\frac{\\ \\ln x -\\mu\\ }{ \\sigma } \\right) \\\\[6pt]\n& = \\operatorname{\\varphi}\\!\\left( \\frac{\\ln x - \\mu} \\sigma \\right) \\frac{ \\rm{d} }{ {\\rm d} x } \\left( \\frac{\\ \\ln x - \\mu\\ }{ \\sigma }\\right) \\\\[6pt]\n& = \\operatorname{\\varphi}\\!\\left( \\frac{\\ \\ln x - \\mu\\ }{ \\sigma } \\right) \\frac{ 1 }{\\ \\sigma\\ x\\ } \\\\[6pt]\n& = \\frac{ 1 }{\\ x\\ \\sigma\\sqrt{2\\ \\pi\\ }\\ } \\exp\\left( -\\frac{\\ (\\ln x-\\mu)^2\\ }{2\\ \\sigma^2} \\right) ~.\n\\end{align}\n"
},
{
"math_id": 28,
"text": " F_X(x) = \\Phi\\left( \\frac{(\\ln x) - \\mu} \\sigma \\right) "
},
{
"math_id": 29,
"text": "\\ \\operatorname\\mathcal{N}(\\ 0,\\ 1 )\\ "
},
{
"math_id": 30,
"text": "\n\\frac12 \\left[ 1 + \\operatorname{erf} \\left(\\frac{\\ln x - \\mu}{\\sigma\\sqrt{2}}\\right) \\right] = \\frac12 \\operatorname{erfc} \\left(-\\frac{\\ln x - \\mu}{\\sigma\\sqrt{2}}\\right)\n"
},
{
"math_id": 31,
"text": "\\boldsymbol X \\sim \\mathcal{N}(\\boldsymbol\\mu,\\,\\boldsymbol\\Sigma)"
},
{
"math_id": 32,
"text": "Y_i=\\exp(X_i)"
},
{
"math_id": 33,
"text": "\\boldsymbol X"
},
{
"math_id": 34,
"text": "\\boldsymbol Y"
},
{
"math_id": 35,
"text": "\\operatorname{E}[\\boldsymbol Y]_i=e^{\\mu_i+\\frac{1}{2}\\Sigma_{ii}} ,"
},
{
"math_id": 36,
"text": "\\operatorname{Var}[\\boldsymbol Y]_{ij}=e^{\\mu_i+\\mu_j + \\frac{1}{2}(\\Sigma_{ii}+\\Sigma_{jj}) }( e^{\\Sigma_{ij}} - 1) . "
},
{
"math_id": 37,
"text": "\\operatorname{E}[X^n]= e^{n\\mu+n^2\\sigma^2/2}"
},
{
"math_id": 38,
"text": "z=\\tfrac{\\ln(x) - (\\mu+n\\sigma^2)}{\\sigma}"
},
{
"math_id": 39,
"text": "\\operatorname{E}[e^{t X}]"
},
{
"math_id": 40,
"text": "t"
},
{
"math_id": 41,
"text": "\\operatorname{E}[e^{i t X}]"
},
{
"math_id": 42,
"text": "\\sum_{n=0}^\\infty \\frac{(it)^n}{n!}e^{n\\mu+n^2\\sigma^2/2}"
},
{
"math_id": 43,
"text": "\\varphi(t)"
},
{
"math_id": 44,
"text": "\\varphi(t)\\approx\\frac{\\exp\\left(-\\frac{W^2(-it\\sigma^2e^\\mu) + 2W(-it\\sigma^2e^\\mu)}{2\\sigma^2} \\right)}{\\sqrt{1+W(-it\\sigma^2e^\\mu)}}"
},
{
"math_id": 45,
"text": "W"
},
{
"math_id": 46,
"text": "\\varphi"
},
{
"math_id": 47,
"text": "\\operatorname{GM}[X] = e^\\mu = \\mu^*"
},
{
"math_id": 48,
"text": "\\operatorname{GSD}[X] = e^{\\sigma} = \\sigma^*"
},
{
"math_id": 49,
"text": "\\operatorname{GVar}[X] = e^{\\sigma^2}"
},
{
"math_id": 50,
"text": "\\operatorname{GCV}[X] = e^{\\sigma} - 1"
},
{
"math_id": 51,
"text": "\\operatorname{CV}"
},
{
"math_id": 52,
"text": "\\operatorname{E}[X] = e^{\\mu + \\frac12 \\sigma^2} = e^{\\mu} \\cdot \\sqrt{e^{\\sigma^2}} = \\operatorname{GM}[X] \\cdot \\sqrt{\\operatorname{GVar}[X]}."
},
{
"math_id": 53,
"text": "e^{-\\frac12\\sigma^2}"
},
{
"math_id": 54,
"text": "\\operatorname{E}[X^n] = e^{n\\mu + \\frac12n^2\\sigma^2}."
},
{
"math_id": 55,
"text": "\\begin{align}\n \\operatorname{E}[X] & = e^{\\mu + \\tfrac{1}{2}\\sigma^2}, \\\\[4pt]\n \\operatorname{E}[X^2] & = e^{2\\mu + 2\\sigma^2}, \\\\[4pt]\n \\operatorname{Var}[X] & = \\operatorname{E}[X^2] - \\operatorname{E}[X]^2 = (\\operatorname{E}[X])^2(e^{\\sigma^2} - 1) = e^{2\\mu + \\sigma^2} (e^{\\sigma^2} - 1), \\\\[4pt]\n \\operatorname{SD}[X] & = \\sqrt{\\operatorname{Var}[X]} = \\operatorname{E}[X] \\sqrt{e^{\\sigma^2} - 1}\n = e^{\\mu + \\tfrac{1}{2}\\sigma^2}\\sqrt{e^{\\sigma^2} - 1},\n \\end{align}"
},
{
"math_id": 56,
"text": "\\operatorname{CV}[X]"
},
{
"math_id": 57,
"text": "\\tfrac{\\operatorname{SD}[X]}{\\operatorname{E}[X]}"
},
{
"math_id": 58,
"text": "\\operatorname{CV}[X] = \\sqrt{e^{\\sigma^2} - 1}."
},
{
"math_id": 59,
"text": "\\begin{align}\n\\mu &= \\ln \\left(\\frac{\\operatorname{E}[X]^2}{\\sqrt{\\operatorname{E}[X^2]}}\\right) = \\ln \\left( \\frac{\\operatorname{E}[X]^2}{\\sqrt{\\operatorname{Var}[X] + \\operatorname{E}[X]^2}} \\right), \\\\[4pt]\n \\sigma^2 &= \\ln \\left(\\frac{\\operatorname{E}[X^2]}{\\operatorname{E}[X]^2}\\right) = \\ln \\left(1 + \\frac{\\operatorname{Var}[X]}{\\operatorname{E}[X]^2}\\right).\n \\end{align}"
},
{
"math_id": 60,
"text": "(\\ln f)'=0"
},
{
"math_id": 61,
"text": "\\operatorname{Mode}[X] = e^{\\mu - \\sigma^2}."
},
{
"math_id": 62,
"text": "Y = \\ln X"
},
{
"math_id": 63,
"text": "X"
},
{
"math_id": 64,
"text": "q_X(\\alpha) = e^{\\mu+\\sigma q_\\Phi(\\alpha)} =\\mu^* (\\sigma^*)^{q_\\Phi(\\alpha)},"
},
{
"math_id": 65,
"text": "q_\\Phi(\\alpha)"
},
{
"math_id": 66,
"text": "\\operatorname{Med}[X] = e^\\mu = \\mu^* ~."
},
{
"math_id": 67,
"text": "k"
},
{
"math_id": 68,
"text": " g(k) = \\int_k^\\infty x f_X(x \\mid X > k)\\, dx . "
},
{
"math_id": 69,
"text": "g(k)=\\operatorname{E}[X\\mid X>k] P(X>k)"
},
{
"math_id": 70,
"text": "g(k) = \\int_k^\\infty x f_X(x \\mid X > k)\\, dx = e^{\\mu+\\tfrac{1}{2} \\sigma^2}\\, \\Phi\\!\\left(\\frac{\\mu+\\sigma^2-\\ln k} \\sigma \\right) "
},
{
"math_id": 71,
"text": "\\Phi"
},
{
"math_id": 72,
"text": "\\begin{align}\nE[X\\mid X<k] & =e^{\\mu +\\frac{\\sigma^2}{2}}\\cdot \\frac{\\Phi \\left[\\frac{\\ln(k)-\\mu -\\sigma^2}{\\sigma} \\right]}{\\Phi \\left[\\frac{\\ln(k)-\\mu}{\\sigma} \\right]} \\\\[8pt]\nE[X\\mid X\\geqslant k] &=e^{\\mu +\\frac{\\sigma^2}{2}}\\cdot \\frac{\\Phi \\left[\\frac{\\mu +\\sigma^2-\\ln(k)} \\sigma \\right]}{1-\\Phi \\left[\\frac{\\ln(k)-\\mu}{\\sigma}\\right]} \\\\\n[8pt]\nE[X\\mid X\\in [k_1,k_2]] &=e^{\\mu +\\frac{\\sigma^2}{2}}\\cdot \n\\frac{\n \\Phi \\left[\\frac{\\ln(k_2)-\\mu -\\sigma^2} \\sigma \\right]-\\Phi \\left[\\frac{\\ln(k_1)-\\mu -\\sigma^2} \\sigma \\right]\n}{\n \\Phi \\left[\\frac{\\ln(k_2)-\\mu}{\\sigma}\\right]-\\Phi \\left[\\frac{\\ln(k_1)-\\mu}{\\sigma}\\right]\n}\n\\end{align}"
},
{
"math_id": 73,
"text": "\\mu, \\sigma"
},
{
"math_id": 74,
"text": "\\mu^*, \\sigma^*"
},
{
"math_id": 75,
"text": "P(x;\\boldsymbol\\mu,\\boldsymbol\\sigma)=\\frac{1}{x \\sigma \\sqrt{2 \\pi}} \\exp\\left[-\\frac{(\\ln x - \\mu)^2}{2 \\sigma^2}\\right]"
},
{
"math_id": 76,
"text": "P(x;\\boldsymbol\\mu,\\boldsymbol {v})=\\frac{1}{x \\sqrt{v} \\sqrt{2 \\pi}} \\exp\\left[-\\frac{(\\ln x - \\mu)^2}{2 v}\\right]"
},
{
"math_id": 77,
"text": "P(x;\\boldsymbol m,\\boldsymbol \\sigma) =\\frac{1}{x \\sigma \\sqrt{2 \\pi}} \\exp\\left[-\\frac{\\ln^2(x/m)}{2 \\sigma^2}\\right]"
},
{
"math_id": 78,
"text": "P(x;\\boldsymbol m,\\boldsymbol {cv})= \\frac{1}{x \\sqrt{\\ln(cv^2+1)} \\sqrt{2 \\pi}} \\exp\\left[-\\frac{\\ln^2(x/m)}{2\\ln(cv^2+1)}\\right]"
},
{
"math_id": 79,
"text": "P(x;\\boldsymbol\\mu,\\boldsymbol \\tau)=\\sqrt{\\frac{\\tau}{2 \\pi}} \\frac{1}{x} \\exp\\left[-\\frac{\\tau}{2}(\\ln x-\\mu)^2\\right]"
},
{
"math_id": 80,
"text": "P(x;\\boldsymbol m,\\boldsymbol {\\sigma_g})=\\frac{1}{x \\ln(\\sigma_g)\\sqrt{2 \\pi}} \\exp\\left[-\\frac{\\ln^2(x/m)}{2 \\ln^2(\\sigma_g)}\\right]"
},
{
"math_id": 81,
"text": "P(x;\\boldsymbol {\\mu_N},\\boldsymbol {\\sigma_N})= \\frac{1}{x \\sqrt{2 \\pi \\ln\\left(1+\\sigma_N^2/\\mu_N^2\\right)}} \\exp\\left(-\\frac{\\Big[ \\ln x - \\ln\\frac{\\mu_N}{\\sqrt{1+\\sigma_N^2/\\mu_N^2}}\\Big]^2}{2\\ln(1+\\sigma_N^2/\\mu_N^2)}\\right)"
},
{
"math_id": 82,
"text": "\\operatorname{LN2}(\\mu, v) \\to \\operatorname{LN7}(\\mu_N, \\sigma_N)"
},
{
"math_id": 83,
"text": "\\mu_N = \\exp(\\mu+v/2) "
},
{
"math_id": 84,
"text": "\\sigma_N = \\exp(\\mu+v/2)\\sqrt{\\exp(v)-1}"
},
{
"math_id": 85,
"text": "\\operatorname{LN7}(\\mu_N, \\sigma_N) \\to \\operatorname{LN2}(\\mu, v)"
},
{
"math_id": 86,
"text": "\\mu = \\ln\\left( \\mu_N / \\sqrt{1+\\sigma_N^2/\\mu_N^2} \\right) "
},
{
"math_id": 87,
"text": " v = \\ln(1+\\sigma_N^2/\\mu_N^2)"
},
{
"math_id": 88,
"text": "X \\sim \\operatorname{Lognormal}(\\mu, \\sigma^2)"
},
{
"math_id": 89,
"text": "a X \\sim \\operatorname{Lognormal}( \\mu + \\ln a,\\ \\sigma^2)"
},
{
"math_id": 90,
"text": " a > 0. "
},
{
"math_id": 91,
"text": "\\tfrac{1}{X} \\sim \\operatorname{Lognormal}(-\\mu,\\ \\sigma^2)."
},
{
"math_id": 92,
"text": "X^a \\sim \\operatorname{Lognormal}(a\\mu,\\ a^2 \\sigma^2)"
},
{
"math_id": 93,
"text": "a \\neq 0."
},
{
"math_id": 94,
"text": "X_1"
},
{
"math_id": 95,
"text": "X_2"
},
{
"math_id": 96,
"text": "\\mu=\\mu_1+\\mu_2"
},
{
"math_id": 97,
"text": "\\mu=\\mu_1-\\mu_2"
},
{
"math_id": 98,
"text": "\\sigma^2=\\sigma_1^2+\\sigma_2^2"
},
{
"math_id": 99,
"text": "n"
},
{
"math_id": 100,
"text": "X_j \\sim \\operatorname{Lognormal} (\\mu_j, \\sigma_j^2)"
},
{
"math_id": 101,
"text": "Y = \\textstyle\\prod_{j=1}^n X_j \\sim \\operatorname{Lognormal} \\Big(\\textstyle \\sum_{j=1}^n\\mu_j,\\ \\sum_{j=1}^n \\sigma_j^2 \\Big)."
},
{
"math_id": 102,
"text": "X_i"
},
{
"math_id": 103,
"text": "n \\to\\infty"
},
{
"math_id": 104,
"text": "\\mu = E[\\ln(X_i)]"
},
{
"math_id": 105,
"text": "\\sigma^2 = \\mbox{var}[\\ln(X_i)]/n"
},
{
"math_id": 106,
"text": "\\sigma^2"
},
{
"math_id": 107,
"text": "\\ln(X_i)"
},
{
"math_id": 108,
"text": "H"
},
{
"math_id": 109,
"text": "G"
},
{
"math_id": 110,
"text": "A"
},
{
"math_id": 111,
"text": "H = \\frac{G^2} A."
},
{
"math_id": 112,
"text": "X \\sim \\mathcal{N}(\\mu, \\sigma^2)"
},
{
"math_id": 113,
"text": "\\exp(X) \\sim \\operatorname{Lognormal}(\\mu, \\sigma^2)."
},
{
"math_id": 114,
"text": "\\ln(X) \\sim \\mathcal{N}(\\mu, \\sigma^2)"
},
{
"math_id": 115,
"text": "X_j \\sim \\operatorname{Lognormal}(\\mu_j, \\sigma_j^2)"
},
{
"math_id": 116,
"text": "Y = \\sum_{j=1}^n X_j"
},
{
"math_id": 117,
"text": "Y"
},
{
"math_id": 118,
"text": "Z"
},
{
"math_id": 119,
"text": "\\begin{align}\n \\sigma^2_Z &= \\ln\\!\\left[ \\frac{\\sum e^{2\\mu_j+\\sigma_j^2}(e^{\\sigma_j^2}-1)}{(\\sum e^{\\mu_j+\\sigma_j^2/2})^2} + 1\\right], \\\\\n \\mu_Z &= \\ln\\!\\left[ \\sum e^{\\mu_j+\\sigma_j^2/2} \\right] - \\frac{\\sigma^2_Z}{2}.\n \\end{align}"
},
{
"math_id": 120,
"text": "X_j"
},
{
"math_id": 121,
"text": "\\sigma_j=\\sigma"
},
{
"math_id": 122,
"text": "\\begin{align}\n \\sigma^2_Z &= \\ln\\!\\left[ (e^{\\sigma^2}-1)\\frac{\\sum e^{2\\mu_j}}{(\\sum e^{\\mu_j})^2} + 1\\right], \\\\\n \\mu_Z &= \\ln\\!\\left[ \\sum e^{\\mu_j} \\right] + \\frac{\\sigma^2}{2} - \\frac{\\sigma^2_Z}{2}.\n \\end{align}"
},
{
"math_id": 123,
"text": "\\begin{align}\n\tS_+ &= \\operatorname{E}\\left[\\sum_i X_i \\right] = \\sum_i \n\t\\operatorname{E}[X_i] = \n\t\\sum_i e^{\\mu_i + \\sigma_i^2/2}\n\t\\\\\n\t\\sigma^2_{Z} &= 1/S_+^2 \\, \\sum_{i,j}\n\t \\operatorname{cor}_{ij} \\sigma_i \\sigma_j \\operatorname{E}[X_i] \\operatorname{E}[X_j] =\n\t 1/S_+^2 \\, \\sum_{i,j}\n\t \\operatorname{cor}_{ij} \\sigma_i \\sigma_j e^{\\mu_i+\\sigma_i^2/2} \n\t e^{\\mu_j+\\sigma_j^2/2} \n\t\\\\\n\t\\mu_Z &= \\ln\\left( S_+ \\right) - \\sigma_{Z}^2/2 \n\\end{align}"
},
{
"math_id": 124,
"text": "X+c"
},
{
"math_id": 125,
"text": "x\\in (c, +\\infty)"
},
{
"math_id": 126,
"text": "\\operatorname{E}[X+c] = \\operatorname{E}[X] + c"
},
{
"math_id": 127,
"text": "\\operatorname{Var}[X+c] = \\operatorname{Var}[X]"
},
{
"math_id": 128,
"text": "X\\mid Y \\sim \\operatorname{Rayleigh}(Y)"
},
{
"math_id": 129,
"text": " Y \\sim \\operatorname{Lognormal}(\\mu, \\sigma^2)"
},
{
"math_id": 130,
"text": " X \\sim \\operatorname{Suzuki}(\\mu, \\sigma)"
},
{
"math_id": 131,
"text": " F(x;\\mu,\\sigma) = \\left[\\left(\\frac{e^\\mu}{x}\\right)^{\\pi/(\\sigma \\sqrt{3})} + 1\\right]^{-1}."
},
{
"math_id": 132,
"text": "L(\\mu, \\sigma) = \\prod_{i=1}^n \\frac 1 {x_i} \\varphi_{\\mu,\\sigma} (\\ln x_i),"
},
{
"math_id": 133,
"text": "\\mathcal N(\\mu,\\sigma^2)"
},
{
"math_id": 134,
"text": "\n\\ell (\\mu,\\sigma \\mid x_1, x_2, \\ldots, x_n) = - \\sum _i \\ln x_i + \\ell_N (\\mu, \\sigma \\mid \\ln x_1, \\ln x_2, \\dots, \\ln x_n)."
},
{
"math_id": 135,
"text": "\\ell"
},
{
"math_id": 136,
"text": "\\ell_N"
},
{
"math_id": 137,
"text": "\\ln x_1, \\ln x_2, \\dots, \\ln x_n)"
},
{
"math_id": 138,
"text": "\\widehat \\mu = \\frac {\\sum_i \\ln x_i}{n}, \\qquad \\widehat \\sigma^2 = \\frac {\\sum_i \\left( \\ln x_i - \\widehat \\mu \\right)^2} {n}."
},
{
"math_id": 139,
"text": "\\widehat\\sigma^2"
},
{
"math_id": 140,
"text": "x_1, x_2, \\ldots, x_n"
},
{
"math_id": 141,
"text": "\\bar x"
},
{
"math_id": 142,
"text": "\\operatorname{E}[X]"
},
{
"math_id": 143,
"text": "\\operatorname{Var}[X]"
},
{
"math_id": 144,
"text": " \\mu = \\ln\\left(\\frac{ \\bar x} {\\sqrt{1+\\widehat\\sigma^2/\\bar x^2} } \\right), \\qquad \\sigma^2 = \\ln\\left(1 + {\\widehat\\sigma^2} / \\bar x^2 \\right)."
},
{
"math_id": 145,
"text": "[\\mu-\\sigma,\\mu+\\sigma]"
},
{
"math_id": 146,
"text": "[\\mu-2\\sigma,\\mu+2\\sigma]"
},
{
"math_id": 147,
"text": "[\\mu^*/\\sigma^*,\\mu^*\\cdot\\sigma^*]=[\\mu^* {}^\\times\\!\\!/ \\sigma^*]"
},
{
"math_id": 148,
"text": "[\\mu^*/(\\sigma^*)^2,\\mu^*\\cdot(\\sigma^*)^2] = [\\mu^* {}^\\times\\!\\!/ (\\sigma^*)^2]"
},
{
"math_id": 149,
"text": "[\\widehat\\mu \\pm q \\cdot \\widehat\\mathop{se}]"
},
{
"math_id": 150,
"text": "\\mathop{se} = \\widehat\\sigma / \\sqrt{n}"
},
{
"math_id": 151,
"text": "\\mu^*"
},
{
"math_id": 152,
"text": "[\\widehat\\mu^* {}^\\times\\!\\!/ (\\operatorname{sem}^*)^q]"
},
{
"math_id": 153,
"text": "\\operatorname{sem}^*=(\\widehat\\sigma^*)^{1/\\sqrt{n}}"
},
{
"math_id": 154,
"text": "\\sigma = \\frac{1}{\\sqrt{6}}"
},
{
"math_id": 155,
"text": "e = 2.718\\ldots"
},
{
"math_id": 156,
"text": "\\sigma = 1 \\big/ \\sqrt{6}"
},
{
"math_id": 157,
"text": "G = \\operatorname{erf}\\left(\\frac{\\sigma }{2 }\\right)"
},
{
"math_id": 158,
"text": "\\operatorname{erf}"
},
{
"math_id": 159,
"text": " G=2 \\Phi \\left(\\frac{\\sigma }{\\sqrt{2}}\\right)-1"
},
{
"math_id": 160,
"text": "\\Phi(x)"
}
] | https://en.wikipedia.org/wiki?curid=102476 |
10251864 | Multi-objective optimization | Mathematical concept
Multi-objective optimization or Pareto optimization (also known as multi-objective programming, vector optimization, multicriteria optimization, or multiattribute optimization) is an area of multiple-criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization is a type of vector optimization that has been applied in many fields of science, including engineering, economics, and logistics, where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Minimizing cost while maximizing comfort when buying a car, and maximizing performance whilst minimizing fuel consumption and emission of pollutants of a vehicle are examples of multi-objective optimization problems involving two and three objectives, respectively. In practical problems, there can be more than three objectives.
For a multi-objective optimization problem, it is not guaranteed that a single solution simultaneously optimizes each objective. The objective functions are said to be conflicting. A solution is called nondominated, Pareto optimal, Pareto efficient or noninferior, if none of the objective functions can be improved in value without degrading some of the other objective values. Without additional subjective preference information, there may exist a (possibly infinite) number of Pareto optimal solutions, all of which are considered equally good. Researchers study multi-objective optimization problems from different viewpoints and, thus, there exist different solution philosophies and goals when setting and solving them. The goal may be to find a representative set of Pareto optimal solutions, and/or to quantify the trade-offs in satisfying the different objectives, and/or to find a single solution that satisfies the subjective preferences of a human decision maker (DM).
Bicriteria optimization denotes the special case in which there are two objective functions.
There is a direct relationship between multitask optimization and multi-objective optimization.
Introduction.
A multi-objective optimization problem is an optimization problem that involves multiple objective functions. In mathematical terms, a multi-objective optimization problem can be formulated as
formula_0
where the integer formula_1 is the number of objectives and the set formula_2 is the feasible set of decision vectors, which is typically formula_3 but it depends on the formula_4-dimensional application domain. The feasible set is typically defined by some constraint functions. In addition, the vector-valued objective function is often defined as
formula_5
If some objective function is to be maximized, it is equivalent to minimize its negative or its inverse. We denote formula_6 the image of formula_2; formula_7 a feasible solution or feasible decision; and formula_8an objective vector or an outcome.
In multi-objective optimization, there does not typically exist a feasible solution that minimizes all objective functions simultaneously. Therefore, attention is paid to Pareto optimal solutions; that is, solutions that cannot be improved in any of the objectives without degrading at least one of the other objectives. In mathematical terms, a feasible solution formula_9 is said to (Pareto) dominate another solution formula_10, if
formula_11, and
formula_12.
A solution formula_7 (and the corresponding outcome formula_13) is called Pareto optimal if there does not exist another solution that dominates it. The set of Pareto optimal outcomes, denoted formula_14, is often called the Pareto front, Pareto frontier, or Pareto boundary.
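For a finite set of candidate objective vectors, Pareto dominance and the nondominated subset can be checked directly from this definition. The following Python sketch is a minimal illustration, assuming that every objective is to be minimized; the function names and sample data are not part of the article.

```python
import numpy as np

def dominates(z1, z2):
    """True if objective vector z1 Pareto-dominates z2 (all objectives minimized)."""
    z1, z2 = np.asarray(z1), np.asarray(z2)
    return bool(np.all(z1 <= z2) and np.any(z1 < z2))

def pareto_front(Z):
    """Return the nondominated rows of an array of objective vectors Z."""
    Z = np.asarray(Z, dtype=float)
    keep = [i for i, z in enumerate(Z)
            if not any(dominates(w, z) for j, w in enumerate(Z) if j != i)]
    return Z[keep]

Z = np.array([[1.0, 5.0], [2.0, 2.0], [3.0, 4.0], [5.0, 1.0]])
print(pareto_front(Z))   # the point (3, 4) is dominated by (2, 2) and is removed
```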
The Pareto front of a multi-objective optimization problem is bounded by a so-called nadir objective vector formula_15and an ideal objective vector formula_16, if these are finite. The nadir objective vector is defined as
formula_17
and the ideal objective vector as
formula_18
In other words, the components of the nadir and ideal objective vectors define the upper and lower bounds of the objective function of Pareto optimal solutions. In practice, the nadir objective vector can only be approximated as, typically, the whole Pareto optimal set is unknown. In addition, a utopian objective vector formula_19, such that formula_20 where formula_21 is a small constant, is often defined because of numerical reasons.
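Given a finite approximation of the Pareto front as an array of objective vectors, the ideal, nadir, and utopian vectors can be estimated componentwise. A short numpy sketch follows; the sample points and the value of the small constant are illustrative assumptions.

```python
import numpy as np

Z_pareto = np.array([[1.0, 4.0], [2.0, 2.5], [3.5, 1.0]])  # assumed Pareto-optimal outcomes
z_ideal = Z_pareto.min(axis=0)     # componentwise lower bounds over the Pareto set
z_nadir = Z_pareto.max(axis=0)     # componentwise upper bounds over the Pareto set
z_utopian = z_ideal - 1e-6         # ideal vector shifted by a small epsilon
print(z_ideal, z_nadir, z_utopian)
```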
Examples of applications.
Economics.
In economics, many problems involve multiple objectives along with constraints on what combinations of those objectives are attainable. For example, consumer's demand for various goods is determined by the process of maximization of the utilities derived from those goods, subject to a constraint based on how much income is available to spend on those goods and on the prices of those goods. This constraint allows more of one good to be purchased only at the sacrifice of consuming less of another good; therefore, the various objectives (more consumption of each good is preferred) are in conflict with each other. A common method for analyzing such a problem is to use a graph of indifference curves, representing preferences, and a budget constraint, representing the trade-offs that the consumer is faced with.
Another example involves the production possibilities frontier, which specifies what combinations of various types of goods can be produced by a society with certain amounts of various resources. The frontier specifies the trade-offs that the society is faced with — if the society is fully utilizing its resources, more of one good can be produced only at the expense of producing less of another good. A society must then use some process to choose among the possibilities on the frontier.
Macroeconomic policy-making is a context requiring multi-objective optimization. Typically a central bank must choose a stance for monetary policy that balances competing objectives — low inflation, low unemployment, low balance of trade deficit, etc. To do this, the central bank uses a model of the economy that quantitatively describes the various causal linkages in the economy; it simulates the model repeatedly under various possible stances of monetary policy, in order to obtain a menu of possible predicted outcomes for the various variables of interest. Then in principle it can use an aggregate objective function to rate the alternative sets of predicted outcomes, although in practice central banks use a non-quantitative, judgement-based, process for ranking the alternatives and making the policy choice.
Finance.
In finance, a common problem is to choose a portfolio when there are two conflicting objectives — the desire to have the expected value of portfolio returns be as high as possible, and the desire to have risk, often measured by the standard deviation of portfolio returns, be as low as possible. This problem is often represented by a graph in which the efficient frontier shows the best combinations of risk and expected return that are available, and in which indifference curves show the investor's preferences for various risk-expected return combinations. The problem of optimizing a function of the expected value (first moment) and the standard deviation (square root of the second central moment) of portfolio return is called a two-moment decision model.
Optimal control.
In engineering and economics, many problems involve multiple objectives which are not describable as the-more-the-better or the-less-the-better; instead, there is an ideal target value for each objective, and the desire is to get as close as possible to the desired value of each objective. For example, energy systems typically have a trade-off between performance and cost; one might want to adjust a rocket's fuel usage and orientation so that it arrives both at a specified place and at a specified time; or one might want to conduct open market operations so that both the inflation rate and the unemployment rate are as close as possible to their desired values.
Often such problems are subject to linear equality constraints that prevent all objectives from being simultaneously perfectly met, especially when the number of controllable variables is less than the number of objectives and when the presence of random shocks generates uncertainty. Commonly a multi-objective quadratic objective function is used, with the cost associated with an objective rising quadratically with the distance of the objective from its ideal value. Since these problems typically involve adjusting the controlled variables at various points in time and/or evaluating the objectives at various points in time, intertemporal optimization techniques are employed.
Optimal design.
Product and process design can be largely improved using modern modeling, simulation, and optimization techniques. The key question in optimal design is measuring what is good or desirable about a design. Before looking for optimal designs, it is important to identify characteristics that contribute the most to the overall value of the design. A good design typically involves multiple criteria/objectives such as capital cost/investment, operating cost, profit, quality and/or product recovery, efficiency, process safety, operation time, etc. Therefore, in practical applications, the performance of process and product design is often measured with respect to multiple objectives. These objectives are typically conflicting, i.e., achieving the optimal value for one objective requires some compromise on one or more objectives.
For example, when designing a paper mill, one can seek to decrease the amount of capital invested in a paper mill and enhance the quality of paper simultaneously. If the design of a paper mill is defined by large storage volumes and paper quality is defined by quality parameters, then the problem of optimal design of a paper mill can include objectives such as i) minimization of expected variation of those quality parameters from their nominal values, ii) minimization of the expected time of breaks and iii) minimization of the investment cost of storage volumes. Here, the maximum volume of towers is a design variable. This example of optimal design of a paper mill is a simplification of a model used in the literature. Multi-objective design optimization has also been implemented in engineering systems in circumstances such as control cabinet layout optimization, airfoil shape optimization using scientific workflows, design of nano-CMOS, system on chip design, design of solar-powered irrigation systems, optimization of sand mould systems, engine design, optimal sensor deployment and optimal controller design.
Process optimization.
Multi-objective optimization has been increasingly employed in chemical engineering and manufacturing. In 2009, Fiandaca and Fraga used the multi-objective genetic algorithm (MOGA) to optimize the pressure swing adsorption process (cyclic separation process). The design problem involved the dual maximization of nitrogen recovery and nitrogen purity. The results approximated the Pareto frontier well with acceptable trade-offs between the objectives.
In 2010, Sendín et al. solved a multi-objective problem for the thermal processing of food. They tackled two case studies (bi-objective and triple-objective problems) with nonlinear dynamic models. They used a hybrid approach consisting of the weighted Tchebycheff and the Normal Boundary Intersection approach. The novel hybrid approach was able to construct a Pareto optimal set for the thermal processing of foods.
In 2013, Ganesan et al. carried out the multi-objective optimization of the combined carbon dioxide reforming and partial oxidation of methane. The objective functions were methane conversion, carbon monoxide selectivity, and hydrogen to carbon monoxide ratio. Ganesan used the Normal Boundary Intersection (NBI) method in conjunction with two swarm-based techniques (Gravitational Search Algorithm (GSA) and Particle Swarm Optimization (PSO)) to tackle the problem. Applications involving chemical extraction and bioethanol production processes have posed similar multi-objective problems.
In 2013, Abakarov et al. proposed an alternative technique to solve multi-objective optimization problems arising in food engineering. The Aggregating Functions Approach, the Adaptive Random Search Algorithm, and the Penalty Functions Approach were used to compute the initial set of the non-dominated or Pareto-optimal solutions. The Analytic Hierarchy Process and Tabular Method were used simultaneously for choosing the best alternative among the computed subset of non-dominated solutions for osmotic dehydration processes.
In 2018, Pearce et al. formulated task allocation to human and robotic workers as a multi-objective optimization problem, considering production time and the ergonomic impact on the human worker as the two objectives considered in the formulation. Their approach used a Mixed-Integer Linear Program to solve the optimization problem for a weighted sum of the two objectives to calculate a set of Pareto optimal solutions. Applying the approach to several manufacturing tasks showed improvements in at least one objective in most tasks and in both objectives in some of the processes.
Radio resource management.
The purpose of radio resource management is to satisfy the data rates that are requested by the users of a cellular network. The main resources are time intervals, frequency blocks, and transmit powers. Each user has its own objective function that, for example, can represent some combination of the data rate, latency, and energy efficiency. These objectives are conflicting since the frequency resources are very scarce, thus there is a need for tight spatial frequency reuse which causes immense inter-user interference if not properly controlled. Multi-user MIMO techniques are nowadays used to reduce the interference by adaptive precoding. The network operator would like to provide both great coverage and high data rates; thus, the operator would like to find a Pareto optimal solution that balances the total network data throughput and the user fairness in an appropriate subjective manner.
Radio resource management is often solved by scalarization; that is, selection of a network utility function that tries to balance throughput and user fairness. The choice of utility function has a large impact on the computational complexity of the resulting single-objective optimization problem. For example, the common utility of weighted sum rate gives an NP-hard problem with a complexity that scales exponentially with the number of users, while the weighted max-min fairness utility results in a quasi-convex optimization problem with only a polynomial scaling with the number of users.
Electric power systems.
Reconfiguration, by exchanging the functional links between the elements of the system, represents one of the most important measures which can improve the operational performance of a distribution system. The problem of optimization through the reconfiguration of a power distribution system, in terms of its definition, has historically been a single-objective problem with constraints. Since 1975, when Merlin and Back introduced the idea of distribution system reconfiguration for active power loss reduction, until today, many researchers have proposed diverse methods and algorithms to solve the reconfiguration problem as a single objective problem. Some authors have proposed Pareto optimality based approaches (including active power losses and reliability indices as objectives). For this purpose, different artificial intelligence based methods have been used: microgenetic, branch exchange, particle swarm optimization and non-dominated sorting genetic algorithm.
Inspection of infrastructure.
Autonomous inspection of infrastructure has the potential to reduce costs, risks and environmental impacts, as well as ensuring better periodic maintenance of inspected assets. Typically, planning such missions has been viewed as a single-objective optimization problem, where one aims to minimize the energy or time spent in inspecting an entire target structure. For complex, real-world structures, however, covering 100% of an inspection target is not feasible, and generating an inspection plan may be better viewed as a multiobjective optimization problem, where one aims to both maximize inspection coverage and minimize time and costs. A recent study has indicated that multiobjective inspection planning indeed has the potential to outperform traditional methods on complex structures.
Solution.
As multiple Pareto optimal solutions for multi-objective optimization problems usually exist, what it means to solve such a problem is not as straightforward as it is for a conventional single-objective optimization problem. Therefore, different researchers have defined the term "solving a multi-objective optimization problem" in various ways. This section summarizes some of them and the contexts in which they are used. Many methods convert the original problem with multiple objectives into a single-objective optimization problem. This is called a scalarized problem. If the Pareto optimality of the single-objective solutions obtained can be guaranteed, the scalarization is characterized as done neatly.
Solving a multi-objective optimization problem is sometimes understood as approximating or computing all or a representative set of Pareto optimal solutions.
When decision making is emphasized, the objective of solving a multi-objective optimization problem is referred to as supporting a decision maker in finding the most preferred Pareto optimal solution according to their subjective preferences. The underlying assumption is that one solution to the problem must be identified to be implemented in practice. Here, a human decision maker (DM) plays an important role. The DM is expected to be an expert in the problem domain.
The most preferred results can be found using different philosophies. Multi-objective optimization methods can be divided into four classes, according to when (if at all) the decision maker's preference information is used: no-preference methods, a priori methods, a posteriori methods, and interactive methods.
More information and examples of different methods in the four classes are given in the following sections.
No-preference methods.
When a decision maker does not explicitly articulate any preference information, the multi-objective optimization method can be classified as a no-preference method. A well-known example is the method of global criterion, in which a scalarized problem of the form
formula_22
is solved. In the above problem, formula_23 can be any formula_24 norm, with common choices including formula_25, formula_26, and formula_27. The method of global criterion is sensitive to the scaling of the objective functions. Thus, it is recommended that the objectives be normalized into a uniform, dimensionless scale.
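As a concrete illustration, the scalarized problem can be solved numerically for a simple bi-objective example. The following Python sketch is an illustrative assumption (the objective functions, bounds, and the use of scipy are not from the article); it minimizes the Euclidean distance of the objective vector to the ideal point.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def f(x):
    return np.array([x**2, (x - 2.0) ** 2])   # two conflicting objectives of one decision variable

z_ideal = np.array([0.0, 0.0])                # ideal objective vector of this example

def global_criterion(x, p=2):
    return np.linalg.norm(f(x) - z_ideal, ord=p)

res = minimize_scalar(global_criterion, bounds=(0.0, 2.0), method="bounded")
print(res.x, f(res.x))                        # a compromise solution between the two objectives
```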
A priori methods.
A priori methods require that sufficient preference information is expressed before the solution process. Well-known examples of a priori methods include the utility function method, lexicographic method, and goal programming.
Utility function method.
The utility function method assumes the decision maker's utility function is available. A mapping formula_28 is a utility function if for all formula_29 it holds that formula_30 if the decision maker prefers formula_31 to formula_32, and formula_33 if the decision maker is indifferent between formula_31 and formula_32. The utility function specifies an ordering of the decision vectors (recall that vectors can be ordered in many different ways). Once formula_34 is obtained, it suffices to solve
formula_35
but in practice, it is very difficult to construct a utility function that would accurately represent the decision maker's preferences, particularly since the Pareto front is unknown before the optimization begins.
Lexicographic method.
The lexicographic method assumes that the objectives can be ranked in the order of importance. We assume that the objective functions are in the order of importance so that formula_36 is the most important and formula_37 the least important to the decision maker. Subject to this assumption, various methods can be used to attain the lexicographically optimal solution. Note that a goal or target value is not specified for any objective here, which makes it different from the Lexicographic Goal Programming method.
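One common way to attain a lexicographically optimal solution is sequential optimization: optimize the most important objective first, then optimize the next objective subject to keeping the earlier objectives at (or near) their optimal values. The Python sketch below is a minimal illustration using scipy; the objective functions, starting point, and tolerance are assumptions, not part of the article.

```python
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2           # most important objective
f2 = lambda x: (x[0] + 1.0) ** 2 + (x[1] - 1.0) ** 2   # less important objective

# Stage 1: minimize the most important objective on its own.
res1 = minimize(f1, x0=np.zeros(2))
f1_star = res1.fun

# Stage 2: minimize the next objective while keeping f1 (near-)optimal.
cons = [{"type": "ineq", "fun": lambda x: f1_star + 1e-6 - f1(x)}]
res2 = minimize(f2, x0=res1.x, constraints=cons)
print(res2.x, f1(res2.x), f2(res2.x))
```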
Scalarizing.
Scalarizing a multi-objective optimization problem is an a priori method, which means formulating a single-objective optimization problem such that optimal solutions to the single-objective optimization problem are Pareto optimal solutions to the multi-objective optimization problem. In addition, it is often required that every Pareto optimal solution can be reached with some parameters of the scalarization. With different parameters for the scalarization, different Pareto optimal solutions are produced. A general formulation for a scalarization of a multi-objective optimization problem is
formula_38
where formula_39 is a vector parameter, the set formula_40 is a set depending on the parameter formula_39, and formula_41 is a function.
Very well-known examples are:
formula_42
where the weights of the objectives formula_43 are the parameters of the scalarization.
formula_45
where the upper bounds formula_46 are the parameters of the scalarization and formula_47 is the objective to be minimized; this scalarization is often called the formula_44-constraint method.
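Both of these scalarizations are easy to apply with a general-purpose solver. The following Python sketch is illustrative only (the objective functions, weight grid, bound value, and use of scipy are assumptions): it traces candidate Pareto optimal outcomes with the linear scalarization and then solves one formula_44-constraint problem.

```python
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: x[0] ** 2 + x[1] ** 2
f2 = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2

# Linear scalarization: sweep the weights to trace candidate Pareto optimal outcomes.
front = []
for w in np.linspace(0.1, 0.9, 9):
    res = minimize(lambda x: w * f1(x) + (1.0 - w) * f2(x), x0=np.zeros(2))
    front.append((f1(res.x), f2(res.x)))

# Epsilon-constraint scalarization: minimize f2 subject to an upper bound on f1.
eps = 0.25
cons = [{"type": "ineq", "fun": lambda x: eps - f1(x)}]
res = minimize(f2, x0=np.zeros(2), constraints=cons)
print(front[:3], (f1(res.x), f2(res.x)))
```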
Somewhat more advanced examples are the following:
One example of the achievement scalarizing problems can be formulated as
formula_48
where the term formula_49 is called the augmentation term, formula_50 is a small constant, and formula_51 and formula_19 are the "nadir" and "utopian" vectors, respectively. In the above problem, the parameter is the so-called "reference point" formula_52 representing objective function values preferred by the decision maker.
formula_53
where formula_54 denotes the individual (absolute) optimum of the corresponding objective; objectives 1 to formula_55 are maximized and objectives formula_56 to formula_57 are minimized.
formula_58
where the weights of the objectives formula_43 are the parameters of the scalarization. If the parameters/weights are drawn uniformly in the positive orthant, this scalarization provably converges to the Pareto front, even when the front is non-convex.
For example, portfolio optimization is often conducted in terms of mean-variance analysis. In this context, the efficient set is a subset of the portfolios parametrized by the portfolio mean return formula_59 in the problem of choosing portfolio shares to minimize the portfolio's variance of return formula_60 subject to a given value of formula_59; see Mutual fund separation theorem for details. Alternatively, the efficient set can be specified by choosing the portfolio shares to maximize the function formula_61; the set of efficient portfolios consists of the solutions as formula_62 ranges from zero to infinity.
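As a rough numerical illustration of the latter formulation, one can sweep the risk-aversion parameter formula_62 and maximize formula_61 over a grid of portfolio shares. The Python sketch below is illustrative only; the two-asset return and covariance figures are made-up assumptions.

```python
import numpy as np

mu = np.array([0.06, 0.10])                        # assumed expected returns of two assets
Sigma = np.array([[0.04, 0.006],
                  [0.006, 0.09]])                  # assumed covariance of the returns

def objective(w1, b):
    w = np.array([w1, 1.0 - w1])                   # fully invested, long-only two-asset portfolio
    mean = w @ mu
    std = np.sqrt(w @ Sigma @ w)
    return mean - b * std                          # the scalarized objective mu_P - b * sigma_P

grid = np.linspace(0.0, 1.0, 101)
for b in (0.0, 0.5, 1.0, 2.0, 5.0):
    best = max(grid, key=lambda w1: objective(w1, b))
    print(b, round(best, 2))                       # efficient share of asset 1 for each b
```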
Some of the above scalarizations involve invoking the minimax principle, where always the worst of the different objectives is optimized.
A posteriori methods.
A posteriori methods aim at producing all the Pareto optimal solutions or a representative subset of the Pareto optimal solutions. Most a posteriori methods fall into one of the following three classes: mathematical programming-based methods, evolutionary algorithms, and deep learning-based methods.
Mathematical programming.
Well-known examples of mathematical programming-based a posteriori methods are the Normal Boundary Intersection (NBI), Modified Normal Boundary Intersection (NBIm), Normal Constraint (NC), Successive Pareto Optimization (SPO), and Directed Search Domain (DSD) methods, which solve the multi-objective optimization problem by constructing several scalarizations. The solution to each scalarization yields a Pareto optimal solution, whether locally or globally. The scalarizations of the NBI, NBIm, NC, and DSD methods are constructed to obtain evenly distributed Pareto points that give a good approximation of the real set of Pareto points.
Evolutionary algorithms.
Evolutionary algorithms are popular approaches to generating Pareto optimal solutions to a multi-objective optimization problem. Most evolutionary multi-objective optimization (EMO) algorithms apply Pareto-based ranking schemes. Evolutionary algorithms such as the Non-dominated Sorting Genetic Algorithm-II (NSGA-II), its extended version NSGA-III, Strength Pareto Evolutionary Algorithm 2 (SPEA-2) and multiobjective differential evolution variants have become standard approaches, although some schemes based on particle swarm optimization and simulated annealing are significant. The main advantage of evolutionary algorithms, when applied to solve multi-objective optimization problems, is the fact that they typically generate sets of solutions, allowing computation of an approximation of the entire Pareto front. The main disadvantages of evolutionary algorithms are their lower speed and the fact that the Pareto optimality of the solutions cannot be guaranteed; it is only known that none of the generated solutions is dominated by another.
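The Pareto-based ranking used in such algorithms can be sketched in a few lines. The following pure-Python routine is a schematic assumption of how the ranking step works; it omits the crowding-distance measure and the genetic operators of the full NSGA-II algorithm and simply assigns each objective vector to a successive nondominated front.

```python
import numpy as np

def nondominated_sort(Z):
    """Assign Pareto ranks (0 = first nondominated front) to objective vectors Z (minimization)."""
    Z = np.asarray(Z, dtype=float)
    ranks = np.full(len(Z), -1)
    remaining, rank = set(range(len(Z))), 0
    while remaining:
        front = {i for i in remaining
                 if not any(np.all(Z[j] <= Z[i]) and np.any(Z[j] < Z[i])
                            for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return ranks

Z = np.array([[1.0, 5.0], [2.0, 2.0], [3.0, 4.0], [5.0, 1.0]])
print(nondominated_sort(Z))   # e.g. [0 0 1 0]
```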
Another paradigm for multi-objective optimization, based on novelty search with evolutionary algorithms, was recently improved upon. This paradigm searches for novel solutions in objective space (i.e., novelty search on objective space) in addition to the search for non-dominated solutions. Novelty search is like stepping stones guiding the search to previously unexplored places. It is especially useful in overcoming bias and plateaus as well as guiding the search in many-objective optimization problems.
Deep learning methods.
Deep learning conditional methods are new approaches to generating several Pareto optimal solutions. The idea is to use the generalization capacity of deep neural networks to learn a model of the entire Pareto front from a limited number of example trade-offs along that front, a task called "Pareto Front Learning". Several approaches address this setup, including using hypernetworks and using Stein variational gradient descent.
List of methods.
A number of further a posteriori methods are commonly known in the literature.
Interactive methods.
In interactive methods of optimizing multiple objective problems, the solution process is iterative and the decision maker continuously interacts with the method when searching for the most preferred solution (see e.g., Miettinen 1999, Miettinen 2008). In other words, the decision maker is expected to express preferences at each iteration to get "Pareto optimal solutions" that are of interest to the decision maker and learn what kind of solutions are attainable.
The following steps are commonly present in interactive methods of optimization:
The above aspiration levels refer to desirable objective function values forming a reference point. Instead of mathematical convergence, often used as a stopping criterion in mathematical optimization methods, psychological convergence is often emphasized in interactive methods. Generally speaking, a method is terminated when the decision maker is confident that he/she has found the "most preferred solution available".
Types of preference information.
There are different interactive methods involving different types of preference information. Three types can be identified, based on trade-off information, reference points, and the classification of objective functions, respectively.
On the other hand, a fourth type, in which a small sample of solutions is generated, has also been described. An example of an interactive method utilizing trade-off information is the Zionts-Wallenius method, where the decision maker is shown several objective trade-offs at each iteration, and (s)he is expected to say whether (s)he likes, dislikes, or is indifferent with respect to each trade-off. In reference point-based methods, the decision maker is expected at each iteration to specify a reference point consisting of desired values for each objective and a corresponding Pareto optimal solution(s) is then computed and shown to them for analysis. In classification-based interactive methods, the decision maker is assumed to give preferences in the form of classifying objectives at the current Pareto optimal solution into different classes, indicating how the values of the objectives should be changed to get a more preferred solution. Then, the classification information is considered when new (more preferred) Pareto optimal solution(s) are computed. In the satisficing trade-off method (STOM), three classes are used: objectives whose values 1) should be improved, 2) can be relaxed, and 3) are acceptable as such. In the NIMBUS method, two additional classes are also used: objectives whose values 4) should be improved until a given bound and 5) can be relaxed until a given bound.
Hybrid methods.
Different hybrid methods exist, but here we consider hybridizing MCDM (multi-criteria decision-making) and EMO (evolutionary multi-objective optimization). A hybrid algorithm in multi-objective optimization combines algorithms/approaches from these two fields. Hybrid algorithms of EMO and MCDM are mainly used to overcome shortcomings by utilizing strengths. Several types of hybrid algorithms have been proposed in the literature, e.g., incorporating MCDM approaches into EMO algorithms as a local search operator, leading a DM to the most preferred solution(s), etc. A local search operator is mainly used to enhance the rate of convergence of EMO algorithms.
The roots for hybrid multi-objective optimization can be traced to the first Dagstuhl seminar organized in November 2004. Here, some of the best minds in EMO (Professor Kalyanmoy Deb, Professor Jürgen Branke, etc.) and MCDM (Professor Kaisa Miettinen, Professor Ralph E. Steuer, etc.) realized the potential in combining ideas and approaches of MCDM and EMO fields to prepare hybrids of them. Subsequently, many more Dagstuhl seminars have been arranged to foster collaboration. Recently, hybrid multi-objective optimization has become an important theme in several international conferences in the area of EMO and MCDM.
Visualization of the Pareto front.
Visualization of the Pareto front is one of the a posteriori preference techniques of multi-objective optimization. The a posteriori preference techniques provide an important class of multi-objective optimization techniques. Usually, the a posteriori preference techniques include four steps: (1) the computer approximates the Pareto front, i.e., the Pareto optimal set in the objective space; (2) the decision maker studies the Pareto front approximation; (3) the decision maker identifies the preferred point at the Pareto front; (4) the computer provides the Pareto optimal decision, whose output coincides with the objective point identified by the decision maker. From the point of view of the decision maker, the second step of the a posteriori preference techniques is the most complicated. There are two main approaches to informing the decision maker. First, a number of points of the Pareto front can be provided in the form of a list (interesting discussions and references are available in the literature) or using heatmaps.
Visualization in bi-objective problems: tradeoff curve.
In the case of bi-objective problems, informing the decision maker concerning the Pareto front is usually carried out by its visualization: the Pareto front, often named the tradeoff curve in this case, can be drawn at the objective plane. The tradeoff curve gives full information on objective values and on objective tradeoffs, which inform how improving one objective is related to deteriorating the second one while moving along the tradeoff curve. The decision maker takes this information into account while specifying the preferred Pareto optimal objective point. The idea to approximate and visualize the Pareto front was introduced for linear bi-objective decision problems by S. Gass and T. Saaty. This idea was developed and applied in environmental problems by J.L. Cohon. A review of methods for approximating the Pareto front for various decision problems with a small number of objectives (mainly, two) is provided in the literature.
Visualization in high-order multi-objective optimization problems.
There are two generic ideas for visualizing the Pareto front in high-order multi-objective decision problems (problems with more than two objectives). One of them, which is applicable in the case of a relatively small number of objective points that represent the Pareto front, is based on using the visualization techniques developed in statistics (various diagrams, etc.; see the corresponding subsection below). The second idea proposes the display of bi-objective cross-sections (slices) of the Pareto front. It was introduced by W.S. Meisel in 1973 who argued that such slices inform the decision maker on objective tradeoffs. The figures that display a series of bi-objective slices of the Pareto front for three-objective problems are known as the decision maps. They give a clear picture of tradeoffs between the three criteria. The disadvantages of such an approach are related to the following two facts. First, the computational procedures for constructing the Pareto front's bi-objective slices are unstable since the Pareto front is usually not stable. Secondly, it is applicable in the case of only three objectives. In the 1980s, the idea of W.S. Meisel was implemented in a different form—in the form of the Interactive Decision Maps (IDM) technique. More recently, N. Wesner proposed using a combination of a Venn diagram and multiple scatterplots of the objective space to explore the Pareto frontier and select optimal solutions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\min_{x \\in X} (f_1(x), f_2(x),\\ldots, f_k(x))\n"
},
{
"math_id": 1,
"text": "k\\geq 2"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": " X \\subseteq \\mathbb R^n "
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "\\begin{align}\nf : X &\\to \\mathbb R^k \\\\\nx &\\mapsto\n \\begin{pmatrix}\n f_1(x) \\\\\n \\vdots \\\\\n f_k(x)\n \\end{pmatrix}\n\\end{align}"
},
{
"math_id": 6,
"text": "Y \\subseteq \\mathbb R^k"
},
{
"math_id": 7,
"text": "x^*\\in X"
},
{
"math_id": 8,
"text": "z^* = f(x^*) \\in \\mathbb R^k"
},
{
"math_id": 9,
"text": "x_1\\in X"
},
{
"math_id": 10,
"text": "x_2\\in X"
},
{
"math_id": 11,
"text": "\\forall i \\in \\{1, \\dots, k\\}, f_i(x_1) \\leq f_i(x_2)"
},
{
"math_id": 12,
"text": "\\exists i \\in \\{1, \\dots, k\\}, f_i(x_1) < f_i(x_2)"
},
{
"math_id": 13,
"text": "f(x^*)"
},
{
"math_id": 14,
"text": " X^* "
},
{
"math_id": 15,
"text": " z^{nadir} "
},
{
"math_id": 16,
"text": " z^{ideal} "
},
{
"math_id": 17,
"text": " z^{nadir} = \\begin{pmatrix}\n \\sup_{x^* \\in X^*} f_1(x^*) \\\\\n \\vdots \\\\\n \\sup_{x^* \\in X^*} f_k(x^*)\n\\end{pmatrix} "
},
{
"math_id": 18,
"text": " z^{ideal} = \\begin{pmatrix}\n \\inf_{x^* \\in X^*} f_1(x^*) \\\\\n \\vdots \\\\\n \\inf_{x^* \\in X^*} f_k(x^*)\n\\end{pmatrix}"
},
{
"math_id": 19,
"text": "z^{utop}"
},
{
"math_id": 20,
"text": " z^{utop}_i = z^{ideal}_{i} - \\epsilon, \\forall i \\in \\{1, \\dots , k\\}"
},
{
"math_id": 21,
"text": "\\epsilon>0"
},
{
"math_id": 22,
"text": "\n\\begin{align}\n\\min & \\|f(x)-z^{ideal}\\|\\\\\n\\text{s.t. } & x\\in X\n\\end{align}\n"
},
{
"math_id": 23,
"text": "\\|\\cdot\\|"
},
{
"math_id": 24,
"text": "L_p"
},
{
"math_id": 25,
"text": "L_1"
},
{
"math_id": 26,
"text": "L_2"
},
{
"math_id": 27,
"text": "L_\\infty"
},
{
"math_id": 28,
"text": " u\\colon Y\\rightarrow\\mathbb{R}"
},
{
"math_id": 29,
"text": "\\mathbf{y}^1,\\mathbf{y}^2\\in Y"
},
{
"math_id": 30,
"text": "u(\\mathbf{y}^1)>u(\\mathbf{y}^2)"
},
{
"math_id": 31,
"text": "\\mathbf{y}^1"
},
{
"math_id": 32,
"text": "\\mathbf{y}^2"
},
{
"math_id": 33,
"text": "u(\\mathbf{y}^1)=u(\\mathbf{y}^2)"
},
{
"math_id": 34,
"text": "u"
},
{
"math_id": 35,
"text": " \\max\\;u(\\mathbf{f}(\\mathbf{x}))\\text{ subject to }\\mathbf{x}\\in X,"
},
{
"math_id": 36,
"text": "f_1"
},
{
"math_id": 37,
"text": "f_k"
},
{
"math_id": 38,
"text": "\n\\begin{array}{ll}\n\\min & g(f_1(x),\\ldots,f_k(x),\\theta)\\\\\n\\text{s.t.} & x\\in X_\\theta\n\\end{array}\n"
},
{
"math_id": 39,
"text": "\\theta"
},
{
"math_id": 40,
"text": "X_\\theta\\subseteq X"
},
{
"math_id": 41,
"text": "g:\\mathbb R^{k+1} \\rightarrow \\mathbb R"
},
{
"math_id": 42,
"text": "\n\\min_{x\\in X} \\sum_{i=1}^k w_if_i(x)\n"
},
{
"math_id": 43,
"text": "w_i>0"
},
{
"math_id": 44,
"text": "\\epsilon"
},
{
"math_id": 45,
"text": "\n\\begin{array}{ll}\n\\min & f_j(x)\\\\\n\\text{s.t.} & x \\in X\\\\\n & f_i(x)\\leq \\epsilon_i \\text{ for }i\\in\\{1,\\ldots,k\\}\\setminus\\{j\\}\n\\end{array}\n"
},
{
"math_id": 46,
"text": " \\epsilon_j"
},
{
"math_id": 47,
"text": " f_j "
},
{
"math_id": 48,
"text": "\n\\begin{align}\n\\min & \\max_{i=1,\\ldots,k} \\left[\\frac{f_i(x)-\\bar z_i}{z^{nadir}_i-z_i^{utop}}\\right] + \\rho\\sum_{i=1}^k\\frac{f_i(x)}{z_i^{nadir}-z_i^{utop}}\\\\\n\\text{s.t. } & x\\in S\n\\end{align}\n"
},
{
"math_id": 49,
"text": "\\rho\\sum_{i=1}^k\\frac{f_i(x)}{z_i^{nadir}-z_i^{utop}}"
},
{
"math_id": 50,
"text": "\\rho>0"
},
{
"math_id": 51,
"text": "z^{nadir}"
},
{
"math_id": 52,
"text": "\\bar z"
},
{
"math_id": 53,
"text": "\n\\begin{array}{ll}\n\\max & \\frac{\\sum_{j=1}^r Z_j}{W_j}- \\frac{\\sum_{j=r+1}^s Z_j}{W_{r+1}} \\\\\n\\text{s.t. } & AX=b \\\\\n & X\\geq 0\n\\end{array}\n"
},
{
"math_id": 54,
"text": "W_j"
},
{
"math_id": 55,
"text": "r"
},
{
"math_id": 56,
"text": "r+1"
},
{
"math_id": 57,
"text": "s"
},
{
"math_id": 58,
"text": "\n\\min_{x\\in X} \\max_i \\frac{ f_i(x)}{w_i}\n"
},
{
"math_id": 59,
"text": "\\mu_P"
},
{
"math_id": 60,
"text": "\\sigma_P"
},
{
"math_id": 61,
"text": "\\mu_P - b \\sigma_P "
},
{
"math_id": 62,
"text": "b"
}
] | https://en.wikipedia.org/wiki?curid=10251864 |
10252066 | Choquet integral | A Choquet integral is a subadditive or superadditive integral created by the French mathematician Gustave Choquet in 1953. It was initially used in statistical mechanics and potential theory, but found its way into decision theory in the 1980s, where it is used as a way of measuring the expected utility of an uncertain event. It is applied specifically to membership functions and capacities. In imprecise probability theory, the Choquet integral is also used to calculate the lower expectation induced by a 2-monotone lower probability, or the upper expectation induced by a 2-alternating upper probability.
Using the Choquet integral to denote the expected utility of belief functions measured with capacities is a way to reconcile the Ellsberg paradox and the Allais paradox.
Definition.
The following notation is used:
formula_0 – a set, and formula_1 – a collection of subsets of formula_0;
formula_2 – a function;
formula_3 – a monotone set function (a capacity).
Assume that formula_4 is measurable with respect to formula_1, that is
formula_5
Then the Choquet integral of formula_4 with respect to formula_6 is defined by:
formula_7
where the integrals on the right-hand side are the usual Riemann integral (the integrands are integrable because they are monotone in formula_8).
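For a nonnegative function on a finite set, the integral above reduces to a finite sum over the sorted function values. The following Python sketch is an illustrative assumption of how such a computation can be organized; the function names, the example capacity, and the data are made up and not part of the article.

```python
def choquet_integral(f, capacity, space):
    """Choquet integral of a nonnegative function f (dict: element -> value)
    with respect to a capacity (callable on frozensets) over a finite space."""
    elems = sorted(space, key=lambda s: f[s])        # sort elements by increasing value of f
    total, prev = 0.0, 0.0
    for i, s in enumerate(elems):
        upper = frozenset(elems[i:])                 # level set {s : f(s) >= f(elems[i])}
        total += (f[s] - prev) * capacity(upper)
        prev = f[s]
    return total

space = frozenset({"a", "b", "c"})
f = {"a": 1.0, "b": 3.0, "c": 2.0}
nu = lambda A: (len(A) / len(space)) ** 2            # a monotone capacity with nu(empty set)=0, nu(S)=1
print(choquet_integral(f, nu, space))                # 1 + 4/9 + 1/9 = 1.555...
```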
Properties.
In general the Choquet integral does not satisfy additivity. More specifically, if formula_6 is not a probability measure, it may hold that
formula_9
for some functions formula_4 and formula_10.
The Choquet integral does satisfy the following properties.
Monotonicity.
If formula_11 then
formula_12
Positive homogeneity.
For all formula_13 it holds that
formula_14
Comonotone additivity.
If formula_15 are comonotone functions, that is, if for all formula_16 it holds that
formula_17,
which can be thought of as formula_4 and formula_10 rising and falling together,
then
formula_18
Subadditivity.
If formula_6 is 2-alternating, then
formula_19
Superadditivity.
If formula_6 is 2-monotone, then
formula_20
Alternative representation.
Let formula_21 denote a cumulative distribution function such that formula_22 is formula_23 integrable. Then this following formula is often referred to as Choquet Integral:
formula_24
where formula_25. In particular, if formula_26, then formula_27, which recovers the usual expected value, and if formula_28, then formula_29, which recovers a quantile.
Applications.
The Choquet integral was applied in image processing, video processing and computer vision. In behavioral decision theory, Amos Tversky and Daniel Kahneman use the Choquet integral and related methods in their formulation of cumulative prospect theory.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "\\mathcal{F}"
},
{
"math_id": 2,
"text": "f : S\\to \\mathbb{R}"
},
{
"math_id": 3,
"text": "\\nu : \\mathcal{F}\\to \\mathbb{R}^+"
},
{
"math_id": 4,
"text": "f"
},
{
"math_id": 5,
"text": "\\forall x\\in\\mathbb{R}\\colon \\{s \\in S \\mid f (s) \\geq x\\}\\in\\mathcal{F}"
},
{
"math_id": 6,
"text": "\\nu"
},
{
"math_id": 7,
"text": "\n(C)\\int f d\\nu :=\n\\int_{-\\infty}^0\n(\\nu (\\{s | f (s) \\geq x\\})-\\nu(S))\\, dx\n+\n\\int^\\infty_0\n\\nu (\\{s | f (s) \\geq x\\})\\, dx\n"
},
{
"math_id": 8,
"text": "x"
},
{
"math_id": 9,
"text": "\\int f \\,d\\nu + \\int g \\,d\\nu \\neq \\int (f + g)\\, d\\nu."
},
{
"math_id": 10,
"text": "g"
},
{
"math_id": 11,
"text": "f\\leq g"
},
{
"math_id": 12,
"text": "(C)\\int f\\, d\\nu \\leq (C)\\int g\\, d\\nu"
},
{
"math_id": 13,
"text": "\\lambda\\ge 0"
},
{
"math_id": 14,
"text": "(C)\\int \\lambda f \\,d\\nu = \\lambda (C)\\int f\\, d\\nu,"
},
{
"math_id": 15,
"text": "f,g : S \\rightarrow \\mathbb{R}"
},
{
"math_id": 16,
"text": "s,s' \\in S"
},
{
"math_id": 17,
"text": "(f(s) - f(s')) (g(s) - g(s')) \\geq 0"
},
{
"math_id": 18,
"text": "(C)\\int\\, f d\\nu + (C)\\int g\\, d\\nu = (C)\\int (f + g)\\, d\\nu."
},
{
"math_id": 19,
"text": "(C)\\int\\, f d\\nu + (C)\\int g\\, d\\nu \\ge (C)\\int (f + g)\\, d\\nu."
},
{
"math_id": 20,
"text": "(C)\\int\\, f d\\nu + (C)\\int g\\, d\\nu \\le (C)\\int (f + g)\\, d\\nu."
},
{
"math_id": 21,
"text": "G"
},
{
"math_id": 22,
"text": "G^{-1}"
},
{
"math_id": 23,
"text": "d H"
},
{
"math_id": 24,
"text": "\\int_{-\\infty}^\\infty G^{-1}(\\alpha) d H(\\alpha) = -\\int_{-\\infty}^a H(G(x))dx+ \\int_a^\\infty \\hat{H}(1-G(x)) dx,"
},
{
"math_id": 25,
"text": "\\hat{H}(x)=H(1)-H(1-x)"
},
{
"math_id": 26,
"text": "H(x):=x"
},
{
"math_id": 27,
"text": "\\int_0^1 G^{-1}(x)dx = E[X]"
},
{
"math_id": 28,
"text": "H(x):=1_{[\\alpha,x]}"
},
{
"math_id": 29,
"text": "\\int_0^1 G^{-1}(x)dH(x)= G^{-1}(\\alpha)"
}
] | https://en.wikipedia.org/wiki?curid=10252066 |
1025272 | Bohr–Einstein debates | Series of public disputes between physicists Niels Bohr and Albert Einstein
The Bohr–Einstein debates were a series of public disputes about quantum mechanics between Albert Einstein and Niels Bohr. Their debates are remembered because of their importance to the philosophy of science, insofar as the disagreements—and the outcome of Bohr's version of quantum mechanics becoming the prevalent view—form the root of the modern understanding of physics. Most of Bohr's version of the events held in the Solvay Conference in 1927 and other places was first written by Bohr decades later in an article titled, "Discussions with Einstein on Epistemological Problems in Atomic Physics". Based on the article, the philosophical issue of the debate was whether Bohr's Copenhagen interpretation of quantum mechanics, which centered on his belief of complementarity, was valid in explaining nature. Despite their differences of opinion and the succeeding discoveries that helped solidify quantum mechanics, Bohr and Einstein maintained a mutual admiration that was to last the rest of their lives.
Although Bohr and Einstein disagreed, they were great friends all their lives and enjoyed using each other as a foil.
Pre-revolutionary debates.
Einstein was the first physicist to say that Max Planck's discovery of the energy quanta would require a rewriting of the laws of physics. To support his point, in 1905 Einstein proposed that light sometimes acts as a particle which he called a light quantum (see photon and wave–particle duality). Bohr was one of the most vocal opponents of the photon idea and did not openly embrace it until 1925. The photon appealed to Einstein because he saw it as a physical reality (although a confusing one) behind the numbers presented by Planck mathematically in 1900. Bohr disliked it because it made the choice of mathematical solution arbitrary. Bohr did not like a scientist having to choose between equations. This disagreement was perhaps the first real Bohr-Einstein debate. Einstein had proposed the photon in 1905, and Arthur Compton provided experimental evidence for it in 1922 with his discovery of the Compton effect, but Bohr refused to believe the photon existed even then. Bohr fought back against the existence of the quantum of light (photon) by writing the BKS theory (in collaboration with Hans Kramers and John C. Slater) in 1924. However, after the 1925 Bothe–Geiger coincidence experiment, BKS was proved to be wrong and Einstein's hypothesis was proven to be correct.
The quantum revolution.
The quantum revolution of the mid-1920s occurred under the direction of both Einstein and Bohr, and their post-revolutionary debates were about making sense of the change. Werner Heisenberg's "Umdeutung" paper in 1925 reinterpreted old quantum theory in terms of matrix-like operators, removing the Newtonian elements of space and time from any underlying reality. In parallel, Erwin Schrödinger redeveloped quantum theory in terms of a wave mechanics formulation, leading to the Schrödinger equation. However, when Schrödinger sent a preprint of his new equation to Einstein, Einstein wrote back hailing his equation as a decisive advance of “true genius.” Then, in 1926, Max Born, collaborating with Heisenberg, proposed that the mechanics was to be understood probabilistically, without any causal explanation.
Both Einstein and Schrödinger rejected Born's interpretation with its renunciation of causality which had been a key feature of science previous to old quantum theory and was still a feature of general relativity. In a 1926 letter to Max Born, Einstein wrote:
At first, even Heisenberg had heated disputes with Bohr over whether his matrix mechanics was compatible with Schrödinger's wave mechanics. And Bohr was at first opposed to Heisenberg's uncertainty principle. But by the Fifth Solvay Conference held in October 1927, Heisenberg and Born concluded that the revolution was over and nothing further was needed. It was at that last stage that Einstein's skepticism turned to dismay. He believed that much had been accomplished, but the reasons for the mechanics still needed to be understood.
Einstein's refusal to accept the revolution as complete reflected his desire to see developed a model for the underlying causes from which these apparently random statistical methods resulted. He did not reject the idea that positions in space-time could never be completely known but did not want to allow the uncertainty principle to necessitate a seemingly random, non-deterministic mechanism by which the laws of physics operated. Einstein himself was a statistical thinker but denied that no more needed to be discovered or clarified. Einstein worked the rest of his life to discover a new theory that would make sense of quantum mechanics and return causality to science, what many now call the theory of everything. Bohr, meanwhile, was dismayed by none of the elements that troubled Einstein. He made his own peace with the contradictions by proposing a principle of complementarity that assigns properties only as a result of measurements.
Post-revolution: First stage.
As mentioned above, Einstein's position underwent significant modifications over the course of the years. In the first stage, Einstein refused to accept quantum indeterminism and sought to demonstrate that the uncertainty principle could be violated, suggesting ingenious thought experiments which should permit the accurate determination of incompatible variables, such as position and velocity, or to explicitly reveal simultaneously the wave and the particle aspects of the same process. (The main source and substance for these thought experiments is solely from Bohr's account twenty years later.) Bohr admits: “As regards the account of the conversations I am of course aware that I am relying only on my own memory, just as I am prepared for the possibility that many features of the development of quantum theory, in which Einstein has played so large a part, may appear to himself in a different light.”
Einstein's argument.
The first serious attack by Einstein on the "orthodox" conception took place during the Fifth Solvay International Conference on "Electrons and Photons" in 1927. Einstein pointed out how it was possible to take advantage of the (universally accepted) laws of conservation of energy and of impulse (momentum) in order to obtain information on the state of a particle in a process of interference which, according to the principle of indeterminacy or that of complementarity, should not be accessible.
In order to follow his argumentation and to evaluate Bohr's response, it is convenient to refer to the experimental apparatus illustrated in figure A. A beam of light perpendicular to the "X" axis (here aligned vertically) propagates in the direction "z" and encounters a screen "S"1 with a narrow (relative to the wavelength of the ray) slit. After having passed through the slit, the wave function diffracts with an angular opening that causes it to encounter a second screen "S"2 with two slits. The successive propagation of the wave results in the formation of the interference figure on the final screen "F".
At the passage through the two slits of the second screen "S"2, the wave aspects of the process become essential. In fact, it is precisely the interference between the two terms of the quantum superposition corresponding to states in which the particle is localized in one of the two slits which produces zones of constructive and destructive interference (in which the wave function is nullified). It is also important to note that any experiment designed to evidence the "corpuscular" aspects of the process at the passage of the screen "S"2 (which, in this case, reduces to the determination of which slit the particle has passed through) inevitably destroys the wave aspects, implies the disappearance of the interference figure and the emergence of two concentrated spots of diffraction which confirm our knowledge of the trajectory followed by the particle.
At this point Einstein brings into play the first screen as well and argues as follows: since the incident particles have velocities (practically) perpendicular to the screen "S"1, and since it is only the interaction with this screen that can cause a deflection from the original direction of propagation, by the law of conservation of impulse which implies that the sum of the impulses of two systems which interact is conserved, if the incident particle is deviated toward the top, the screen will recoil toward the bottom and vice versa. In realistic conditions the mass of the screen is so large that it will remain stationary, but, in principle, it is possible to measure even an infinitesimal recoil. If we imagine taking the measurement of the impulse of the screen in the direction "X" after every single particle has passed, we can know, from the fact that the screen will be found recoiled toward the top (bottom), whether the particle in question has been deviated toward the bottom or top, and therefore through which slit in "S"2 the particle has passed. But since the determination of the direction of the recoil of the screen after the particle has passed cannot influence the successive development of the process, we will still have an interference figure on the screen "F". The interference takes place precisely because the state of the system is the "superposition" of two states whose wave functions are non-zero only near one of the two slits. On the other hand, if every particle passes through only the slit "b" or the slit "c", then the set of systems is the statistical mixture of the two states, which means that interference is not possible. If Einstein is correct, then there is a violation of the principle of indeterminacy.
This thought experiment was begun in a simpler form during the general discussion portion of the actual proceedings during the 1927 Solvay conference. In those official proceedings, Bohr's reply is recorded as: “I feel myself in a very difficult position because I don’t understand precisely the point that Einstein is trying to make.” Einstein had explained, “it could happen that the same elementary process produces an action in two or several places on the screen. But the interpretation, according to which psi squared expresses the probability that this particular particle is found at a given point, assumes an entirely peculiar mechanism of action at a distance.” It is clear from this that Einstein was referring to separability (in particular, and most importantly local causality, i.e. locality), not indeterminacy. In fact, Paul Ehrenfest wrote a letter to Bohr stating that the 1927 thought experiments of Einstein had nothing to do with the uncertainty principle, as Einstein had already accepted these “and for a long time never doubted.”
Bohr's response.
Bohr evidently misunderstood Einstein's argument about the quantum mechanical violation of relativistic causality (locality) and instead focused on the consistency of quantum indeterminacy. Bohr's response was to illustrate Einstein's idea more clearly using the diagram in Figure C. (Figure C shows a fixed screen S1 that is bolted down. Then try to imagine one that can slide up or down along a rod instead of a fixed bolt.) Bohr observes that extremely precise knowledge of any (potential) vertical motion of the screen is an essential presupposition in Einstein's argument. In fact, if its velocity in the direction "X" "before" the passage of the particle is not known with a precision substantially greater than that induced by the recoil (that is, if it were already moving vertically with an unknown and greater velocity than that which it derives as a consequence of the contact with the particle), then the determination of its motion after the passage of the particle would not give the information we seek. However, Bohr continues, an extremely precise determination of the velocity of the screen, when one applies the principle of indeterminacy, implies an inevitable imprecision of its position in the direction "X". Before the process even begins, the screen would therefore occupy an indeterminate position at least to a certain extent (defined by the formalism). Now consider, for example, the point "d" in figure A, where the interference is destructive. Any displacement of the first screen would make the lengths of the two paths, "a–b–d" and "a–c–d", different from those indicated in the figure. If the difference between the two paths varies by half a wavelength, at point "d" there will be constructive rather than destructive interference. The ideal experiment must average over all the possible positions of the screen S1, and, for every position, there corresponds, for a certain fixed point "F", a different type of interference, from the perfectly destructive to the perfectly constructive. The effect of this averaging is that the pattern of interference on the screen "F" will be uniformly grey. Once more, our attempt to evidence the corpuscular aspects in "S"2 has destroyed the possibility of interference in "F", which depends crucially on the wave aspects.
As Bohr recognized, for the understanding of this phenomenon "it is decisive that, contrary to genuine instruments of measurement, these bodies along with the particles would constitute, in the case under examination, the system to which the quantum-mechanical formalism must apply. With respect to the precision of the conditions under which one can correctly apply the formalism, it is essential to include the entire experimental apparatus. In fact, the introduction of any new apparatus, such as a mirror, in the path of a particle could introduce new effects of interference which influence essentially the predictions about the results which will be registered at the end." Further along, Bohr attempts to resolve this ambiguity concerning which parts of the system should be considered macroscopic and which not:
"In particular, it must be very clear that...the unambiguous use of spatiotemporal concepts in the description of atomic phenomena must be limited to the registration of observations which refer to images on a photographic lens or to analogous practically irreversible effects of amplification such as the formation of a drop of water around an ion in a dark room."
Bohr's argument about the impossibility of using the apparatus proposed by Einstein to violate the principle of indeterminacy depends crucially on the fact that a macroscopic system (the screen "S"1) obeys quantum laws. On the other hand, Bohr consistently held that, in order to illustrate the microscopic aspects of reality, it is necessary to set off a process of amplification, which involves macroscopic apparatuses, whose fundamental characteristic is that of obeying classical laws and which can be described in classical terms. This ambiguity would later come back in the form of what is still called today the measurement problem.
However, Bohr in his article refuting the EPR paper, states “there is no question of a mechanical disturbance of the system under investigation.” Heisenberg quotes Bohr as saying, “I find all such assertions as ‘observation introduces uncertainty into the phenomenon’ inaccurate and misleading.” Manjit Kumar's book on the Bohr–Einstein debates finds these assertions by Bohr contrary to his arguments. Others, such as the physicist Leon Rosenfeld, did find Bohr's argument convincing.
Uncertainty principle applied to time and energy.
In many textbook examples and popular discussions of quantum mechanics, the principle of indeterminacy is explained by reference to the pair of variables position and velocity (or momentum). It is important to note that the wave nature of physical processes implies that there must exist another relation of indeterminacy: that between time and energy. In order to comprehend this relation, it is convenient to refer to the experiment illustrated in
Figure D, which results in the propagation of a wave which is limited in spatial extension. Assume that, as illustrated in the figure, a ray which is extremely extended longitudinally is propagated toward a screen with a slit furnished with a shutter which remains open only for a very brief interval of time formula_0. Beyond the slit, there will be a wave of limited spatial extension which continues to propagate toward the right.
A perfectly monochromatic wave (such as a musical note which cannot be divided into harmonics) has infinite spatial extent. In order to have a wave which is limited in spatial extension (which is technically called a wave packet), several waves of different frequencies must be superimposed and distributed continuously within a certain interval of frequencies around an average value, such as formula_1.
It then happens that at a certain instant, there exists a spatial region (which moves over time) in which the contributions of the various fields of the superposition add up constructively. Nonetheless, according to a precise mathematical theorem, as we move far away from this region, the phases of the various fields, at any specified point, are distributed randomly and destructive interference is produced. The region in which the wave has non-zero amplitude is therefore spatially limited. It is easy to demonstrate that, if the wave has a spatial extension equal to formula_2 (which means, in our example, that the shutter has remained open for a time formula_3 where v is the velocity of the wave), then the wave contains (or is a superposition of) various monochromatic waves whose frequencies cover an interval formula_4 which satisfies the relation:
formula_5
Remembering that in the Planck relation, frequency and energy are proportional:
formula_6
it follows immediately from the preceding inequality that the particle associated with the wave should possess an energy which is not perfectly defined (since different frequencies are involved in the superposition) and consequently there is indeterminacy in energy:
formula_7
From this it follows immediately that:
formula_8
which is the relation of indeterminacy between time and energy.
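As a rough numerical illustration (the shutter time below is an arbitrary choice, not a value from the text), the following Python snippet evaluates the minimum spreads implied by these relations:

```python
# Back-of-the-envelope numbers for the time-energy indeterminacy relation.
h = 6.62607015e-34       # Planck constant, in J*s
eV = 1.602176634e-19     # joules per electronvolt

dt = 1e-9                # hypothetical shutter opening time: 1 nanosecond
dnu = 1.0 / dt           # minimum spread in frequency, in Hz
dE = h * dnu             # minimum spread in energy, in J

print(f"delta nu >= {dnu:.3e} Hz")
print(f"delta E  >= {dE:.3e} J = {dE / eV * 1e6:.2f} micro-eV")   # about 4.14 micro-eV
```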
Einstein's second criticism.
At the sixth Congress of Solvay in 1930, the indeterminacy relation just discussed was Einstein's target of criticism. His idea contemplates the existence of an experimental apparatus which was subsequently designed by Bohr in such a way as to emphasize the essential elements and the key points which he would use in his response.
Einstein considers a box (called Einstein's box; see figure) containing electromagnetic radiation and a clock which controls the opening of a shutter which covers a hole made in one of the walls of the box. The shutter uncovers the hole for a time formula_0 which can be chosen arbitrarily. During the opening, we are to suppose that a photon, from among those inside the box, escapes through the hole. In this way a wave of limited spatial extension has been created, following the explanation given above. In order to challenge the indeterminacy relation between time and energy, it is necessary to find a way to determine with adequate precision the energy that the photon has brought with it. At this point, Einstein turns to mass–energy equivalence of special relativity: formula_9. From this it follows that knowledge of the mass of an object provides a precise indication about its energy. The argument is therefore very simple: if one weighs the box before and after the opening of the shutter and if a certain amount of energy has escaped from the box, the box will be lighter. The variation in mass multiplied by formula_10
will provide precise knowledge of the energy emitted.
Moreover, the clock will indicate the precise time at which the event of the particle's emission took place. Since, in principle, the mass of the box can be determined to an arbitrary degree of accuracy, the energy emitted can be determined with a precision formula_11 as accurate as one desires. Therefore, the product formula_12 can be rendered less than what is implied by the principle of indeterminacy.
The idea is particularly acute and the argument seemed unassailable. It's important to consider the impact of all of these exchanges on the people involved at the time. Leon Rosenfeld, who had participated in the Congress, described the event several years later:
"It was a real shock for Bohr...who, at first, could not think of a solution. For the entire evening he was extremely agitated, and he continued passing from one scientist to another, seeking to persuade them that it could not be the case, that it would have been the end of physics if Einstein were right; but he couldn't come up with any way to resolve the paradox. I will never forget the image of the two antagonists as they left the club: Einstein, with his tall and commanding figure, who walked tranquilly, with a mildly ironic smile, and Bohr who trotted along beside him, full of excitement...The morning after saw the triumph of Bohr."
Bohr's triumph.
The triumph of Bohr consisted in his demonstrating, once again, that Einstein's subtle argument was not conclusive, but even more so in the way that he arrived at this conclusion by appealing precisely to one of the great ideas of Einstein: the principle of equivalence between gravitational mass and inertial mass, together with the time dilation of special relativity, and a consequence of these—the gravitational redshift. Bohr showed that, in order for Einstein's experiment to function, the box would have to be suspended on a spring in the middle of a gravitational field. In order to obtain a measurement of the weight of the box, a pointer would have to be attached to the box which corresponded with the index on a scale. After the release of a photon, a mass formula_13 could be added to the box to restore it to its original position and this would allow us to determine the energy formula_14 that was lost when the photon left. The box is immersed in a gravitational field of strength formula_15, and the gravitational redshift affects the speed of the clock, yielding uncertainty formula_0 in the time formula_16 required for the pointer to return to its original position. Bohr gave the following calculation establishing the uncertainty relation formula_17.
Let the uncertainty in the mass formula_13 be denoted by formula_18. Let the error in the position of the pointer be formula_19. Adding the load formula_13 to the box imparts a momentum formula_20 that we can measure with an accuracy formula_21, where formula_22 ≈ formula_23. Clearly formula_24, and therefore formula_25. By the redshift formula (which follows from the principle of equivalence and the time dilation), the uncertainty in the time formula_16 is formula_26, and formula_27, and so formula_28. We have therefore proven the claimed formula_29.
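The algebra of this argument can be checked symbolically. The following sympy sketch (symbol names are chosen here for illustration) multiplies the expression for formula_27 by the redshift expression for formula_26 and simplifies the product:

```python
# A small symbolic check of Bohr's chain of relations; all quantities are positive.
import sympy as sp

h, c, g, t, dm, dq = sp.symbols('h c g t Delta_m Delta_q', positive=True)

dE = c**2 * dm                    # Delta E = c^2 * Delta m
dt = g * t * dq / c**2            # gravitational redshift: Delta t = c^(-2) g t Delta q

product = sp.simplify(dE * dt)
print(product)                    # Delta_m * Delta_q * g * t
# Since Delta_p <= t g Delta_m and Delta_p * Delta_q ~ h, this product is at least h,
# which is the relation Delta E * Delta t >= h claimed above.
```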
More recent analyses of the photon box debate questions Bohr's understanding of Einstein's thought experiment, referring instead to a prelude to the EPR paper, focusing on inseparability rather than indeterminism being at issue.
Post-revolution: Second stage.
The second phase of Einstein's "debate" with Bohr and the orthodox interpretation is characterized by an acceptance of the fact that it is, as a practical matter, impossible to simultaneously determine the values of certain incompatible quantities, but the rejection that this implies that these quantities do not actually have precise values. Einstein rejects the probabilistic interpretation of Born and insists that quantum probabilities are epistemic and not ontological in nature. As a consequence, the theory must be incomplete in some way. He recognizes the great value of the theory, but suggests that it "does not tell the whole story", and, while providing an appropriate description at a certain level, it gives no information on the more fundamental underlying level:
"I have the greatest consideration for the goals which are pursued by the physicists of the latest generation which go under the name of quantum mechanics, and I believe that this theory represents a profound level of truth, but I also believe that the restriction to laws of a statistical nature will turn out to be transitory...Without doubt quantum mechanics has grasped an important fragment of the truth and will be a paragon for all future fundamental theories, for the fact that it must be deducible as a limiting case from such foundations, just as electrostatics is deducible from Maxwell's equations of the electromagnetic field or as thermodynamics is deducible from statistical mechanics."
These thoughts of Einstein would set off a line of research into hidden variable theories, such as the Bohm interpretation, in an attempt to complete the edifice of quantum theory. If quantum mechanics can be made "complete" in Einstein's sense, it cannot be done locally; this fact was demonstrated by John Stewart Bell with the formulation of Bell's inequality in 1964. Although the Bell inequality ruled out local hidden variable theories, Bohm's theory was not ruled out. A 2007 experiment ruled out a large class of non-Bohmian non-local hidden variable theories, though not Bohmian mechanics itself.
Post-revolution: Third stage.
The argument of EPR.
In 1935 Einstein, Boris Podolsky and Nathan Rosen developed an argument, published in the magazine "Physical Review" with the title "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?", based on an entangled state of two systems. Before coming to this argument, it is necessary to formulate another hypothesis that comes out of Einstein's work in relativity: the principle of locality. "The elements of physical reality which are objectively possessed cannot be influenced instantaneously at a distance."
David Bohm picked up the EPR argument in 1951. In his textbook "Quantum Theory," he reformulated it in terms of an entangled state of two particles, which can be summarized as follows:
1) Consider a system of two photons which at time "t" are located, respectively, in the spatially distant regions "A" and "B" and which are also in the entangled state of polarization formula_30 described below:
formula_31
2) At time "t" the photon in region A is tested for vertical polarization. Suppose that the result of the measurement is that the photon passes through the filter. According to the reduction of the wave packet, the result is that, at time "t" + "dt", the system becomes
formula_32
3) At this point, the observer in A who carried out the first measurement on photon "1", without doing anything else that could disturb the system or the other photon ("assumption (R)", below), can predict with certainty that photon "2" will pass a test of vertical polarization. It follows that photon "2" possesses an element of physical reality: that of having a vertical polarization.
4) According to the assumption of locality, it cannot have been the action carried out in A which created this element of reality for photon "2". Therefore, we must conclude that the photon possessed the property of being able to pass the vertical polarization test "before" and "independently of" the measurement of photon "1".
5) At time "t", the observer in "A" could have decided to carry out a test of polarization at 45°, obtaining a certain result, for example, that the photon passes the test. In that case, he could have concluded that photon "2" turned out to be polarized at 45°. Alternatively, if the photon did not pass the test, he could have concluded that photon "2" turned out to be polarized at 135°. Combining one of these alternatives with the conclusion reached in 4, it seems that photon "2", before the measurement took place, possessed both the property of being able to pass with certainty a test of vertical polarization and the property of being able to pass with certainty a test of polarization at either 45° or 135°. These properties are incompatible according to the formalism.
6) Since natural and obvious requirements have forced the conclusion that photon "2" simultaneously possesses incompatible properties, this means that, even if it is not possible to determine these properties simultaneously and with arbitrary precision, they are nevertheless possessed objectively by the system. But quantum mechanics denies this possibility and it is therefore an incomplete theory.
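The perfect correlations invoked in steps 3 and 5 can be reproduced numerically. In the following Python sketch (an illustration written here, not part of Bohm's presentation), the entangled polarization state of step 1 is written as a four-component vector, and each polarization test is modelled as a projection acting on one photon of the pair:

```python
# Numerical illustration of the correlations in Bohm's version of the EPR argument.
import numpy as np

V = np.array([1.0, 0.0])                  # vertical polarization
H = np.array([0.0, 1.0])                  # horizontal polarization
D = (V + H) / np.sqrt(2)                  # polarization at 45 degrees

psi = (np.kron(V, V) + np.kron(H, H)) / np.sqrt(2)     # the entangled two-photon state

def measure(state, direction, photon):
    """Project the chosen photon (1 or 2) onto 'direction'; return probability and post-state."""
    P = np.outer(direction, direction)
    op = np.kron(P, np.eye(2)) if photon == 1 else np.kron(np.eye(2), P)
    projected = op @ state
    prob = float(projected @ projected)
    post = projected / np.sqrt(prob) if prob > 0 else projected
    return prob, post

prob1, post = measure(psi, V, photon=1)
print(round(prob1, 3))        # 0.5: photon 1 passes a vertical test half the time
prob2, _ = measure(post, V, photon=2)
print(round(prob2, 3))        # 1.0: photon 2 then passes its vertical test with certainty

prob1, post = measure(psi, D, photon=1)
prob2, _ = measure(post, D, photon=2)
print(round(prob2, 3))        # 1.0: the same perfect correlation at 45 degrees
```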
Bohr's response.
Bohr's response to this argument was published, five months later than the original publication of EPR, in the same magazine "Physical Review" and with exactly the same title as the original. The crucial point of Bohr's answer is distilled in a passage which he later had republished in Paul Arthur Schilpp's book "Albert Einstein, scientist-philosopher" in honor of the seventieth birthday of Einstein. Bohr attacks assumption (R) of EPR by stating:
"The statement of the criterion in question is ambiguous with regard to the expression "without disturbing the system in any way". Naturally, in this case no mechanical disturbance of the system under examination can take place in the crucial stage of the process of measurement. But even in this stage there arises the essential problem of an influence on the precise conditions which define the possible types of prediction which regard the subsequent behaviour of the system...their arguments do not justify their conclusion that the quantum description turns out to be essentially incomplete...This description can be characterized as a rational use of the possibilities of an unambiguous interpretation of the process of measurement compatible with the finite and uncontrollable interaction between the object and the instrument of measurement in the context of quantum theory".
Bohr's presentation of his argument was hard to follow for many of the scientists (although his views were generally accepted). Rosenfeld, who had worked closely with Bohr for many years, later explains Bohr's argument in a way that is perhaps more accessible:
"In the case of the two particles, it is true that the measurement carried out on the first particle does not cause any direct physical disturbance of the second; but the measurement decisively affects the nature of verifiable predictions we will be able to make about this second particle. (...) [A]s long as we do not carry out any measurement (...) we have no control at all over this correlation [between the two particles]. If we really want the system to be subject to study and communication, we must carry out some measurement. If we now observe the position of the first particle, the correlation between the positions of the particles can be used to give us information about where the second particle is, but we have no way of making use of the correlation between the pulses of the particles (...). If we observe the momentum of the first particle, it is just the opposite. We retain control over the momentum correlation, but lose it over the position correlation. The two different measurements define two complementary phenomena that can never be reconciled into a single description of the given two-particle system".
Confirmatory experiments.
Years after Einstein's exposition via his EPR thought experiment, many physicists started performing experiments to test whether the "spooky action at a distance" that he had identified is indeed consistent with the laws of physics. The first experiment to definitively demonstrate this was carried out in 1949, when physicists Chien-Shiung Wu and her colleague Irving Shaknov showcased the predicted correlations using photons; their work was published at the start of the following decade, in 1950.
Later, in 1975, Alain Aspect proposed in an article an experiment meticulous enough to be irrefutable: "Proposed experiment to test the non-separability of quantum mechanics". This led Aspect, together with his assistant Gérard Roger, and Jean Dalibard and Philippe Grangier (two young physics students at the time) to set up several increasingly complex experiments between 1980 and 1982 that further established quantum entanglement. Finally in 1998, the Geneva experiment tested the correlation between two detectors set 30 kilometres apart, virtually across the whole city, using the Swiss optical fibre telecommunication network. The distance gave the necessary time to switch the angles of the polarizers, so the switching could be made completely random. Furthermore, the two distant polarizers were entirely independent. The measurements were recorded on each side, and compared after each experiment by dating each measurement using an atomic clock. The experiment once again verified entanglement under the strictest and most ideal conditions possible. Whereas Aspect's experiment implied that a hypothetical coordination signal would have to travel at least twice as fast as "c", Geneva's implied a speed of 10 million times "c".
Post-revolution: Fourth stage.
In his last writing on the topic, Einstein further refined his position, making it completely clear that what really disturbed him about the quantum theory was the problem of the total renunciation of all minimal standards of realism, even at the microscopic level, that the acceptance of the completeness of the theory implied. Since the early days of quantum theory the assumption of locality and Lorentz invariance guided his thoughts and led to his determination that if we demand strict locality then hidden variables are naturally implied apropos EPR. Bell, starting from this EPR logic (which is widely misunderstood or forgotten) showed that local hidden variables imply a conflict with experiment. Ultimately what was at stake for Einstein was the assumption that physical reality be universally local. Although the majority of experts in the field agree that Einstein was wrong, the current understanding is still not complete (see Interpretation of quantum mechanics).
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\Delta t "
},
{
"math_id": 1,
"text": " \\nu_0 "
},
{
"math_id": 2,
"text": " \\Delta x "
},
{
"math_id": 3,
"text": " \\Delta t = \\Delta x/v "
},
{
"math_id": 4,
"text": " \\Delta \\nu "
},
{
"math_id": 5,
"text": " \\Delta \\nu \\ge \\frac{1}{\\Delta t}. "
},
{
"math_id": 6,
"text": " E = h\\nu \\,"
},
{
"math_id": 7,
"text": " \\Delta E = h\\,\\Delta\\nu \\ge \\frac{h}{\\Delta t}. "
},
{
"math_id": 8,
"text": " \\Delta E \\, \\Delta t \\ge h "
},
{
"math_id": 9,
"text": " E=mc^2 "
},
{
"math_id": 10,
"text": " c^2 "
},
{
"math_id": 11,
"text": " \\Delta E "
},
{
"math_id": 12,
"text": " \\Delta E \\Delta t "
},
{
"math_id": 13,
"text": "m"
},
{
"math_id": 14,
"text": "E = mc^2"
},
{
"math_id": 15,
"text": "g"
},
{
"math_id": 16,
"text": "t"
},
{
"math_id": 17,
"text": " \\Delta E \\Delta t \\ge h "
},
{
"math_id": 18,
"text": "\\Delta m"
},
{
"math_id": 19,
"text": "\\Delta q"
},
{
"math_id": 20,
"text": "p"
},
{
"math_id": 21,
"text": "\\Delta p"
},
{
"math_id": 22,
"text": "\\Delta p \\Delta q"
},
{
"math_id": 23,
"text": "h"
},
{
"math_id": 24,
"text": "\\Delta p \\le tg\\Delta m"
},
{
"math_id": 25,
"text": "tg\\Delta m\\Delta q \\ge h"
},
{
"math_id": 26,
"text": "\\Delta t = c^{-2} gt\\Delta q"
},
{
"math_id": 27,
"text": "\\Delta E = c^2\\Delta m"
},
{
"math_id": 28,
"text": "\\Delta E \\Delta t = c^2\\Delta m \\Delta t \\ge h"
},
{
"math_id": 29,
"text": "\\Delta E\\Delta t \\ge h"
},
{
"math_id": 30,
"text": " \\left|\\Psi\\right\\rang "
},
{
"math_id": 31,
"text": " \\left|\\Psi,t\\right\\rang = \\frac1{\\sqrt{2}}\\left|1,V\\right\\rang \\left|2,V\\right\\rang + \\frac1{\\sqrt{2}}\\left|1,H\\right\\rang \\left|2,H\\right\\rang. "
},
{
"math_id": 32,
"text": "\\left|\\Psi,t+dt\\right\\rang = \\left|1,V\\right\\rang \\left|2,V\\right\\rang. "
}
] | https://en.wikipedia.org/wiki?curid=1025272 |
1025455 | Killing vector field | Vector field on a Riemannian manifold that preserves the metric
In mathematics, a Killing vector field (often called a Killing field), named after Wilhelm Killing, is a vector field on a Riemannian manifold (or pseudo-Riemannian manifold) that preserves the metric. Killing fields are the infinitesimal generators of isometries; that is, flows generated by Killing fields are continuous isometries of the manifold. More simply, the flow generates a symmetry, in the sense that moving each point of an object the same distance in the direction of the Killing vector will not distort distances on the object.
Definition.
Specifically, a vector field formula_0 is a Killing field if the Lie derivative with respect to formula_0 of the metric formula_1 vanishes:
formula_2
In terms of the Levi-Civita connection, this is
formula_3
for all vectors formula_4 and formula_5. In local coordinates, this amounts to the Killing equation
formula_6
This condition is expressed in covariant form. Therefore, it is sufficient to establish it in a preferred coordinate system in order to have it hold in all coordinate systems.
Examples.
Killing field on the circle.
The vector field on a circle that points counterclockwise and has the same length at each point is a Killing vector field, since moving each point on the circle along this vector field simply rotates the circle.
Killing fields on the hyperbolic plane.
A toy example for a Killing vector field is on the upper half-plane formula_7 equipped with the Poincaré metric formula_8. The pair formula_9 is typically called the hyperbolic plane and has Killing vector field formula_10 (using standard coordinates). This should be intuitively clear since the covariant derivative formula_11 transports the metric along an integral curve generated by the vector field (whose image is parallel to the x-axis).
Furthermore, the metric is independent of formula_12 from which we can immediately conclude that formula_10 is a Killing field using one of the results below in this article.
The isometry group of the upper half-plane model (or rather, the component connected to the identity) is formula_13 (see Poincaré half-plane model), and the other two Killing fields may be derived from considering the action of the generators of formula_13 on the upper half-plane. The other two generating Killing fields are dilatation formula_14 and the special conformal transformation formula_15.
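All three fields can be checked directly against the coordinate form of the Killing equation. The following sympy sketch (the helper name is chosen here) evaluates the Lie derivative of the Poincaré metric along each field and finds it to be zero:

```python
# Symbolic check that the three fields above are Killing fields of the Poincare metric,
# using (L_X g)_{ab} = X^c d_c g_{ab} + g_{cb} d_a X^c + g_{ac} d_b X^c.
import sympy as sp

x, y = sp.symbols('x y', positive=True)
coords = (x, y)
g = sp.Matrix([[1 / y**2, 0], [0, 1 / y**2]])     # Poincare metric on the upper half-plane

def lie_derivative_of_metric(X):
    n = len(coords)
    L = sp.zeros(n, n)
    for a in range(n):
        for b in range(n):
            L[a, b] = sum(X[c] * sp.diff(g[a, b], coords[c])
                          + g[c, b] * sp.diff(X[c], coords[a])
                          + g[a, c] * sp.diff(X[c], coords[b]) for c in range(n))
    return L.applyfunc(sp.simplify)

fields = {
    "translation d/dx":    [1, 0],
    "dilatation D":        [x, y],
    "special conformal K": [x**2 - y**2, 2 * x * y],
}
for name, X in fields.items():
    print(name, lie_derivative_of_metric(X))      # each prints the zero matrix
```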
Killing fields on a 2-sphere.
The Killing fields of the two-sphere formula_16, or more generally the formula_17-sphere formula_18 should be obvious from ordinary intuition: spheres, having rotational symmetry, should possess Killing fields which generate rotations about any axis. That is, we expect formula_16 to have symmetry under the action of the 3D rotation group SO(3). That is, by using the "a priori" knowledge that spheres can be embedded in Euclidean space, it is immediately possible to guess the form of the Killing fields. This is not possible in general, and so this example is of very limited educational value.
The conventional chart for the 2-sphere embedded in formula_19 in Cartesian coordinates formula_20 is given by
formula_21
so that formula_22 parametrises the height, and formula_23 parametrises rotation about the formula_24-axis.
The pullback of the standard Cartesian metric formula_25 gives the standard metric on the sphere,
formula_26.
Intuitively, a rotation about any axis should be an isometry. In this chart, the vector field which generates rotations about the formula_24-axis:
formula_27
In these coordinates, the metric components are all independent of formula_23, which shows that formula_28 is a Killing field.
The vector field
formula_29
is not a Killing field; the coordinate formula_22 explicitly appears in the metric. The flow generated by formula_30 goes from north to south; points at the north pole spread apart, those at the south come together. Any transformation that moves points closer or farther apart cannot be an isometry; therefore, the generator of such motion cannot be a Killing field.
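Both statements can be verified by a short symbolic computation. In the following sympy sketch (helper name chosen here), the Lie derivative of the round metric along formula_28 vanishes, while along formula_30 it does not:

```python
# Symbolic check: d/dphi is a Killing field of the unit 2-sphere, d/dtheta is not.
import sympy as sp

th, ph = sp.symbols('theta phi')
coords = (th, ph)
g = sp.diag(1, sp.sin(th)**2)            # round metric d(theta)^2 + sin^2(theta) d(phi)^2

def lie_g(X):
    """(L_X g)_{ab} = X^c d_c g_{ab} + g_{cb} d_a X^c + g_{ac} d_b X^c"""
    n = len(coords)
    return sp.Matrix(n, n, lambda a, b: sp.simplify(
        sum(X[c] * sp.diff(g[a, b], coords[c])
            + g[c, b] * sp.diff(X[c], coords[a])
            + g[a, c] * sp.diff(X[c], coords[b]) for c in range(n))))

print(lie_g([0, 1]))    # d/dphi:   the zero matrix, so it is a Killing field
print(lie_g([1, 0]))    # d/dtheta: the (phi, phi) entry is 2*sin(theta)*cos(theta), not zero
```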
The generator formula_28 is recognized as a rotation about the formula_24-axis
formula_31
A second generator, for rotations about the formula_12-axis, is
formula_32
The third generator, for rotations about the formula_33-axis, is
formula_34
The algebra given by linear combinations of these three generators closes, and obeys the relations
formula_35
This is the Lie algebra formula_36.
Expressing formula_0 and formula_4 in terms of spherical coordinates gives
formula_37
and
formula_38
That these three vector fields are actually Killing fields can be determined in two different ways. One is by explicit computation: just plug in explicit expressions for formula_39 and chug to show that formula_40 This is a worthwhile exercise. Alternatively, one can recognize that formula_41 and formula_5 are the generators of isometries in Euclidean space, and since the metric on the sphere is inherited from the metric in Euclidean space, the isometries are inherited as well.
These three Killing fields form a complete set of generators for the algebra. They are not unique: any linear combination of these three fields is still a Killing field.
There are several subtle points to note about this example.
Killing fields in Minkowski space.
The Killing fields of Minkowski space are the 3 space translations, time translation, three generators of rotations (the little group) and the three generators of boosts. These are the time and space translations formula_44 the three generators of rotations formula_45 and the three generators of boosts formula_46
The boosts and rotations generate the Lorentz group. Together with space-time translations, this forms the Lie algebra for the Poincaré group.
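In Cartesian coordinates the covariant derivatives in formula_6 reduce to partial derivatives, so a representative of each family can be checked directly. The following sympy sketch (with field components written out by hand here) verifies the Killing equation for a translation, a rotation and a boost:

```python
# Check of the flat-space Killing equation d_mu X_nu + d_nu X_mu = 0 for eta = diag(-1,1,1,1);
# the index of each field is lowered with eta before differentiating.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)
eta = sp.diag(-1, 1, 1, 1)

fields_up = {
    "time translation": [1, 0, 0, 0],
    "rotation about z": [0, -y, x, 0],     # -y d/dx + x d/dy
    "boost along x":    [x, t, 0, 0],      #  x d/dt + t d/dx
}

for name, X_up in fields_up.items():
    X = [sum(eta[a, b] * X_up[b] for b in range(4)) for a in range(4)]      # lower the index
    killing_eq = [sp.simplify(sp.diff(X[m], coords[n]) + sp.diff(X[n], coords[m]))
                  for m in range(4) for n in range(4)]
    print(name, all(expr == 0 for expr in killing_eq))                      # True for each field
```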
Killing fields in flat space.
Here we derive the Killing fields for general flat space.
From Killing's equation and the Ricci identity for a covector formula_47,
formula_48
(using abstract index notation) where formula_49 is the Riemann curvature tensor, the following identity may be proven for a Killing field formula_50:
formula_51
When the base manifold formula_52 is flat space, that is, Euclidean space or pseudo-Euclidean space (as for Minkowski space), we can choose global flat coordinates such that in these coordinates, the Levi-Civita connection and hence Riemann curvature vanishes everywhere, giving
formula_53
Integrating and imposing the Killing equation allows us to write the general solution to formula_54 as
formula_55
where formula_56 is antisymmetric. By taking appropriate values of formula_57 and formula_58, we get a basis for the generalised Poincaré algebra of isometries of flat space:
formula_59
formula_60
These generate pseudo-rotations (rotations and boosts) and translations respectively. Intuitively these preserve the (pseudo)-metric at each point.
For (pseudo-)Euclidean space of total dimension formula_17, there are in total formula_61 generators, making flat space maximally symmetric. This number is generic for maximally symmetric spaces. Maximally symmetric spaces can be considered as sub-manifolds of flat space, arising as surfaces of constant proper distance
formula_62
which have O("p", "q") symmetry. If the submanifold has dimension formula_17, this group of symmetries has the expected dimension (as a Lie group).
Heuristically, we can derive the dimension of the Killing field algebra. Treating Killing's equation formula_63 together with the identity formula_64 as a system of second order differential equations for formula_65, we can determine the value of formula_65 at any point given initial data at a point formula_66. The initial data specifies formula_67 and formula_68, but Killing's equation imposes that the covariant derivative is antisymmetric. In total this is formula_69 independent values of initial data.
For concrete examples, see below for examples of flat space (Minkowski space) and maximally symmetric spaces (sphere, hyperbolic space).
Killing fields in general relativity.
Killing fields are used to discuss isometries in general relativity (in which the geometry of spacetime as distorted by gravitational fields is viewed as a 4-dimensional pseudo-Riemannian manifold). In a static configuration, in which nothing changes with time, the time vector will be a Killing vector, and thus the Killing field will point in the direction of forward motion in time. For example, the Schwarzschild metric has four Killing fields: the metric is independent of formula_70, hence formula_71 is a time-like Killing field. The other three are the three generators of rotations discussed above. The Kerr metric for a rotating black hole has only two Killing fields: the time-like field, and a field generating rotations about the axis of rotation of the black hole.
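As an illustration of the statement about the time-like Killing field, the following sympy sketch (coordinates and symbol names chosen here) computes the Christoffel symbols of the Schwarzschild metric and evaluates the Killing equation for formula_71:

```python
# Symbolic check that d/dt is a Killing field of the Schwarzschild metric,
# by evaluating nabla_mu K_nu + nabla_nu K_mu directly.
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2 * M / r
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)      # Schwarzschild metric
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gamma = [[[sum(ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                             - sp.diff(g[b, c], x[d])) for d in range(4)) / 2
           for c in range(4)]
          for b in range(4)]
         for a in range(4)]

K_up = [1, 0, 0, 0]                                     # the field d/dt
K = [sum(g[a, b] * K_up[b] for b in range(4)) for a in range(4)]   # K_a = g_{ab} K^b

def nabla(a, b):
    """nabla_a K_b = d_a K_b - Gamma^c_{ab} K_c"""
    return sp.diff(K[b], x[a]) - sum(Gamma[c][a][b] * K[c] for c in range(4))

killing = sp.Matrix(4, 4, lambda a, b: sp.simplify(nabla(a, b) + nabla(b, a)))
print(killing)                                          # the zero matrix
```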
de Sitter space and anti-de Sitter space are maximally symmetric spaces, with the formula_17-dimensional versions of each possessing formula_72 Killing fields.
Killing field of a constant coordinate.
If the metric coefficients formula_73 in some coordinate basis formula_74 are independent of one of the coordinates formula_75, then formula_76 is a Killing vector, where formula_77 is the Kronecker delta.
To prove this, let us assume formula_78. Then formula_79 and formula_80
Now let us look at the Killing condition
formula_81
and from formula_82, the Killing condition becomes
formula_83
that is formula_84, which is true.
Conversely, if the metric formula_85 admits a Killing field formula_50, then one can construct coordinates for which formula_86. These coordinates are constructed by taking a hypersurface formula_87 such that formula_50 is nowhere tangent to formula_87. Take coordinates formula_88 on formula_87, then define local coordinates formula_89 where formula_70 denotes the parameter along the integral curve of formula_50 based at formula_90 on formula_87. In these coordinates, the Lie derivative reduces to the coordinate derivative, that is,
formula_91
and by the definition of the Killing field the left-hand side vanishes.
Properties.
A Killing field is determined uniquely by a vector at some point and its gradient (i.e. all covariant derivatives of the field at the point).
The Lie bracket of two Killing fields is still a Killing field. The Killing fields on a manifold "M" thus form a Lie subalgebra of vector fields on "M". This is the Lie algebra of the isometry group of the manifold if "M" is complete. A Riemannian manifold with a transitive group of isometries is a homogeneous space.
For compact manifolds, negative Ricci curvature implies that there are no nontrivial Killing fields; nonpositive Ricci curvature implies that every Killing field is parallel, i.e. its covariant derivative along any vector field is identically zero; and if the sectional curvature is positive and the dimension of "M" is even, a Killing field must have a zero.
The covariant divergence of every Killing vector field vanishes.
If formula_0 is a Killing vector field and formula_4 is a harmonic vector field, then formula_92 is a harmonic function.
If formula_0 is a Killing vector field and formula_93 is a harmonic p-form, then formula_94
Geodesics.
Each Killing vector corresponds to a quantity which is conserved along geodesics. This conserved quantity is the metric product between the Killing vector and the geodesic tangent vector. Along an affinely parametrized geodesic with tangent vector formula_95 then given the Killing vector formula_96, the quantity formula_97 is conserved:
formula_98
This aids in analytically studying motions in a spacetime with symmetries.
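As a numerical illustration (the initial data below are arbitrary choices made here), one can integrate the geodesic equations of the unit 2-sphere and watch the metric product of the Killing field formula_28 with the tangent vector stay constant along the curve:

```python
# Numerical check of the conserved quantity g(K, U) along a geodesic of the unit 2-sphere,
# with K = d/dphi, so that g(K, U) = sin(theta)^2 * dphi/dlambda.
import numpy as np
from scipy.integrate import solve_ivp

def geodesic(lam, state):
    th, ph, dth, dph = state
    return [dth, dph,
            np.sin(th) * np.cos(th) * dph**2,       # theta'' from the geodesic equation
            -2.0 / np.tan(th) * dth * dph]          # phi''  from the geodesic equation

state0 = [np.pi / 3, 0.0, 0.2, 0.5]                 # arbitrary initial point and velocity
sol = solve_ivp(geodesic, (0.0, 10.0), state0, rtol=1e-10, atol=1e-12)

th, ph, dth, dph = sol.y
conserved = np.sin(th)**2 * dph                     # g(K, U) along the trajectory
print(conserved.min(), conserved.max())             # equal up to the integration tolerance
```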
Stress-energy tensor.
Given a conserved, symmetric tensor formula_99, that is, one satisfying formula_100 and formula_101, which are properties typical of a stress-energy tensor, and a Killing vector formula_96, we can construct the conserved quantity formula_102 satisfying
formula_103
Cartan decomposition.
As noted above, the Lie bracket of two Killing fields is still a Killing field. The Killing fields on a manifold formula_52 thus form a Lie subalgebra formula_104 of all vector fields on formula_105 Selecting a point formula_106 the algebra formula_104 can be decomposed into two parts:
formula_107
and
formula_108
where formula_109 is the covariant derivative. These two parts intersect trivially but do not in general split formula_104. For instance, if formula_52 is a Riemannian homogeneous space, we have formula_110 if and only if formula_52 is a Riemannian symmetric space.
Intuitively, the isometries of formula_52 locally define a submanifold formula_111 of the total space, and the Killing fields show how to "slide along" that submanifold. They span the tangent space of that submanifold. The tangent space formula_112 should have the same dimension as the isometries acting effectively at that point. That is, one expects formula_113 Yet, in general, the number of Killing fields is larger than the dimension of that tangent space. How can this be? The answer is that the "extra" Killing fields are redundant. Taken all together, the fields provide an over-complete basis for the tangent space at any particular selected point; linear combinations can be made to vanish at that particular point. This was seen in the example of the Killing fields on a 2-sphere: there are 3 Killing fields; at any given point, two span the tangent space at that point, and the third one is a linear combination of the other two. Picking any two defines formula_114 the remaining degenerate linear combinations define an orthogonal space formula_115
Cartan involution.
The Cartan involution is defined as the mirroring or reversal of the direction of a geodesic. Its differential flips the direction of the tangents to a geodesic. It is a linear operator of norm one; it has two invariant subspaces, of eigenvalue +1 and −1. These two subspaces correspond to formula_116 and formula_117 respectively.
This can be made more precise. Fixing a point formula_118 consider a geodesic formula_119 passing through formula_66, with formula_120 The involution formula_121 is defined as
formula_122
This map is an involution, in that formula_123 When restricted to geodesics along the Killing fields, it is also clearly an isometry. It is uniquely defined.
Let formula_124 be the group of isometries generated by the Killing fields. The function formula_125 defined by
formula_126
is a homomorphism of formula_124. Its infinitesimal formula_127 is
formula_128
The Cartan involution is a Lie algebra homomorphism, in that
formula_129
for all formula_130 The subspace formula_131 has odd parity under the Cartan involution, while formula_132 has even parity. That is, denoting the Cartan involution at point formula_118 as formula_133 one has
formula_134
and
formula_135
where formula_136 is the identity map. From this, it follows that the subspace formula_132 is a Lie subalgebra of formula_104, in that
formula_137
As these are even and odd parity subspaces, the Lie brackets split, so that
formula_138
and
formula_139
The above decomposition holds at all points formula_118 for a symmetric space formula_52; proofs can be found in Jost. They also hold in more general settings, but not necessarily at all points of the manifold.
For the special case of a symmetric space, one explicitly has that formula_140 that is, the Killing fields span the entire tangent space of a symmetric space. Equivalently, the curvature tensor is covariantly constant on locally symmetric spaces, and so these are locally parallelizable; this is the Cartan–Ambrose–Hicks theorem.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "g"
},
{
"math_id": 2,
"text": "\\mathcal{L}_{X} g = 0 \\,."
},
{
"math_id": 3,
"text": "g\\left(\\nabla_Y X, Z\\right) + g\\left(Y, \\nabla_Z X\\right) = 0 \\,"
},
{
"math_id": 4,
"text": "Y"
},
{
"math_id": 5,
"text": "Z"
},
{
"math_id": 6,
"text": "\\nabla_\\mu X_\\nu + \\nabla_{\\nu} X_\\mu = 0 \\,."
},
{
"math_id": 7,
"text": "M = \\mathbb{R}^2_{y > 0}"
},
{
"math_id": 8,
"text": "g = y^{-2}\\left(dx^2 + dy^2\\right)"
},
{
"math_id": 9,
"text": "(M, g)"
},
{
"math_id": 10,
"text": "\\partial_x"
},
{
"math_id": 11,
"text": "\\nabla_{\\partial_x}g"
},
{
"math_id": 12,
"text": "x"
},
{
"math_id": 13,
"text": "\\text{SL}(2, \\mathbb{R})"
},
{
"math_id": 14,
"text": "D = x\\partial_x + y\\partial_y"
},
{
"math_id": 15,
"text": "K = (x^2 - y^2)\\partial_x + 2xy \\partial_y"
},
{
"math_id": 16,
"text": "S^2"
},
{
"math_id": 17,
"text": "n"
},
{
"math_id": 18,
"text": "S^n"
},
{
"math_id": 19,
"text": "\\mathbb{R}^3"
},
{
"math_id": 20,
"text": "(x,y,z)"
},
{
"math_id": 21,
"text": "x = \\sin\\theta\\cos\\phi,\\qquad y = \\sin\\theta\\sin\\phi,\\qquad z = \\cos\\theta"
},
{
"math_id": 22,
"text": "\\theta"
},
{
"math_id": 23,
"text": "\\phi"
},
{
"math_id": 24,
"text": "z"
},
{
"math_id": 25,
"text": "ds^2 = dx^2 + dy^2 + dz^2"
},
{
"math_id": 26,
"text": "ds^2 = d\\theta^2 + \\sin^2\\theta d\\phi^2"
},
{
"math_id": 27,
"text": "\\frac{\\partial}{\\partial\\phi}."
},
{
"math_id": 28,
"text": "\\partial_\\phi"
},
{
"math_id": 29,
"text": "\\frac{\\partial}{\\partial\\theta}"
},
{
"math_id": 30,
"text": "\\partial_\\theta"
},
{
"math_id": 31,
"text": "Z = x\\partial_y - y\\partial_x = \\sin^2\\theta \\,\\partial_\\phi"
},
{
"math_id": 32,
"text": "X = z\\partial_y - y\\partial_z"
},
{
"math_id": 33,
"text": "y"
},
{
"math_id": 34,
"text": "Y = z\\partial_x - x\\partial_z"
},
{
"math_id": 35,
"text": "[X,Y] = Z \\quad [Y,Z] = X \\quad [Z,X] = Y."
},
{
"math_id": 36,
"text": "\\mathfrak{so}(3)"
},
{
"math_id": 37,
"text": "X = \\sin^2\\theta \\,(\\sin\\phi\\partial_\\theta + \\cot\\theta\\cos\\phi\\partial_\\phi)"
},
{
"math_id": 38,
"text": "Y = \\sin^2 \\theta \\,(\\cos\\phi\\partial_\\theta - \\cot\\theta\\sin\\phi\\partial_\\phi)"
},
{
"math_id": 39,
"text": "\\mathcal{L}_Xg"
},
{
"math_id": 40,
"text": "\\mathcal{L}_Xg=\\mathcal{L}_Yg=\\mathcal{L}_Zg=0."
},
{
"math_id": 41,
"text": "X, Y"
},
{
"math_id": 42,
"text": "\\sin^2\\theta"
},
{
"math_id": 43,
"text": "\\partial_\\phi = X/\\sin^2\\theta"
},
{
"math_id": 44,
"text": " \\partial_t ~, \\qquad \\partial_x ~, \\qquad \\partial_y ~, \\qquad \\partial_z ~;"
},
{
"math_id": 45,
"text": "-y \\partial_x + x \\partial_y ~, \\qquad -z \\partial_y + y \\partial_z ~, \\qquad -x \\partial_z + z \\partial_x ~;"
},
{
"math_id": 46,
"text": "x \\partial_t + t \\partial_x~, \\qquad y \\partial_t + t \\partial_y ~, \\qquad z \\partial_t + t \\partial_z."
},
{
"math_id": 47,
"text": "K_a"
},
{
"math_id": 48,
"text": "\\nabla_a\\nabla_b K_c - \\nabla_b\\nabla_a K_c = R^d{}_{cab}K_d"
},
{
"math_id": 49,
"text": "R^a{}_{bcd}"
},
{
"math_id": 50,
"text": "X^a"
},
{
"math_id": 51,
"text": "\\nabla_a\\nabla_b X_c = R^d{}_{acb}X_d."
},
{
"math_id": 52,
"text": "M"
},
{
"math_id": 53,
"text": "\\partial_\\mu\\partial_\\nu X_\\rho = 0."
},
{
"math_id": 54,
"text": "X_\\rho"
},
{
"math_id": 55,
"text": "X^\\rho = \\omega^{\\rho\\sigma} x_\\sigma + c^\\rho"
},
{
"math_id": 56,
"text": "\\omega^{\\mu\\nu} = -\\omega^{\\nu\\mu}"
},
{
"math_id": 57,
"text": "\\omega^{\\mu\\nu}"
},
{
"math_id": 58,
"text": "c^\\rho"
},
{
"math_id": 59,
"text": "M_{\\mu\\nu} = x_\\mu\\partial_\\nu - x_\\nu\\partial_\\mu"
},
{
"math_id": 60,
"text": "P_\\rho = \\partial_\\rho."
},
{
"math_id": 61,
"text": "n(n+1)/2"
},
{
"math_id": 62,
"text": "\\{\\mathbf{x}\\in\\mathbb{R}^{p,q}:\\eta(\\mathbf{x},\\mathbf{x})=\\pm \\frac{1}{\\kappa^2}\\}"
},
{
"math_id": 63,
"text": "\\nabla_a X_b + \\nabla_b X_a = 0"
},
{
"math_id": 64,
"text": "\\nabla_a\\nabla_b X_c = R^c{}_{bad}X_c."
},
{
"math_id": 65,
"text": "X_a"
},
{
"math_id": 66,
"text": "p"
},
{
"math_id": 67,
"text": "X_a(p)"
},
{
"math_id": 68,
"text": "\\nabla_a X_b(p)"
},
{
"math_id": 69,
"text": "n^2 - n(n-1)/2 = n(n+1)/2"
},
{
"math_id": 70,
"text": "t"
},
{
"math_id": 71,
"text": "\\partial_t"
},
{
"math_id": 72,
"text": "\\frac{n(n+1)}{2}"
},
{
"math_id": 73,
"text": "g_{\\mu \\nu} \\,"
},
{
"math_id": 74,
"text": "dx^{a} \\,"
},
{
"math_id": 75,
"text": "x^{\\kappa} \\,"
},
{
"math_id": 76,
"text": "K^{\\mu} = \\delta^{\\mu}_{\\kappa} \\,"
},
{
"math_id": 77,
"text": "\\delta^{\\mu}_{\\kappa} \\,"
},
{
"math_id": 78,
"text": "g_{\\mu \\nu},_0 = 0 \\,"
},
{
"math_id": 79,
"text": "K^\\mu = \\delta^\\mu_0 \\,"
},
{
"math_id": 80,
"text": "K_{\\mu} = g_{\\mu \\nu} K^\\nu = g_{\\mu \\nu} \\delta^\\nu_0 = g_{\\mu 0} \\,"
},
{
"math_id": 81,
"text": "K_{\\mu;\\nu} + K_{\\nu;\\mu} = K_{\\mu,\\nu} + K_{\\nu,\\mu} - 2\\Gamma^\\rho_{\\mu\\nu}K_\\rho = g_{\\mu 0,\\nu} + g_{\\nu 0,\\mu} - g^{\\rho\\sigma}(g_{\\sigma\\mu,\\nu} + g_{\\sigma\\nu,\\mu} - g_{\\mu\\nu,\\sigma})g_{\\rho 0} \\,"
},
{
"math_id": 82,
"text": "g_{\\rho 0}g^{\\rho \\sigma} = \\delta_0^\\sigma \\,"
},
{
"math_id": 83,
"text": "g_{\\mu 0,\\nu} + g_{\\nu 0,\\mu} - (g_{0\\mu,\\nu} + g_{0\\nu,\\mu} - g_{\\mu\\nu,0}) = 0 \\,"
},
{
"math_id": 84,
"text": "g_{\\mu\\nu,0} = 0"
},
{
"math_id": 85,
"text": "\\mathbf{g}"
},
{
"math_id": 86,
"text": "\\partial_0 g_{\\mu\\nu} = 0"
},
{
"math_id": 87,
"text": "\\Sigma"
},
{
"math_id": 88,
"text": "x^i"
},
{
"math_id": 89,
"text": "(t,x^i)"
},
{
"math_id": 90,
"text": "(x^i)"
},
{
"math_id": 91,
"text": "\\mathcal{L}_Xg_{\\mu\\nu} = \\partial_0 g_{\\mu\\nu}"
},
{
"math_id": 92,
"text": "g(X, Y)"
},
{
"math_id": 93,
"text": "\\omega"
},
{
"math_id": 94,
"text": "\\mathcal{L}_{X} \\omega = 0 \\,."
},
{
"math_id": 95,
"text": "U^a"
},
{
"math_id": 96,
"text": "X_b"
},
{
"math_id": 97,
"text": "U^bX_b"
},
{
"math_id": 98,
"text": "U^a\\nabla_a(U^bX_b)=0"
},
{
"math_id": 99,
"text": "T^{ab}"
},
{
"math_id": 100,
"text": "T^{ab} = T^{ba}"
},
{
"math_id": 101,
"text": "\\nabla_a T^{ab}=0"
},
{
"math_id": 102,
"text": "J^a := T^{ab}X_b"
},
{
"math_id": 103,
"text": "\\nabla_a J^a = 0."
},
{
"math_id": 104,
"text": "\\mathfrak{g}"
},
{
"math_id": 105,
"text": "M."
},
{
"math_id": 106,
"text": "p \\in M~,"
},
{
"math_id": 107,
"text": "\\mathfrak{h} = \\{ X\\in\\mathfrak{g} : X(p) = 0 \\}"
},
{
"math_id": 108,
"text": "\\mathfrak{m} = \\{ X\\in\\mathfrak{g} : \\nabla X(p) = 0 \\}"
},
{
"math_id": 109,
"text": "\\nabla"
},
{
"math_id": 110,
"text": "\\mathfrak{g} = \\mathfrak{h} \\oplus \\mathfrak{m}"
},
{
"math_id": 111,
"text": "N"
},
{
"math_id": 112,
"text": "T_pN"
},
{
"math_id": 113,
"text": "T_pN \\cong \\mathfrak{m}~."
},
{
"math_id": 114,
"text": "\\mathfrak{m};"
},
{
"math_id": 115,
"text": "\\mathfrak{h}."
},
{
"math_id": 116,
"text": "\\mathfrak{p}"
},
{
"math_id": 117,
"text": "\\mathfrak{m},"
},
{
"math_id": 118,
"text": "p \\in M"
},
{
"math_id": 119,
"text": "\\gamma: \\mathbb{R} \\to M"
},
{
"math_id": 120,
"text": "\\gamma(0) = p~."
},
{
"math_id": 121,
"text": "\\sigma_p"
},
{
"math_id": 122,
"text": "\\sigma_p(\\gamma(\\lambda)) = \\gamma(-\\lambda)"
},
{
"math_id": 123,
"text": "\\sigma_p^2 = 1~."
},
{
"math_id": 124,
"text": "G"
},
{
"math_id": 125,
"text": "s_p: G \\to G"
},
{
"math_id": 126,
"text": "s_p(g) = \\sigma_p \\circ g \\circ \\sigma_p = \\sigma_p \\circ g \\circ \\sigma_p^{-1}"
},
{
"math_id": 127,
"text": "\\theta_p: \\mathfrak{g} \\to \\mathfrak{g}"
},
{
"math_id": 128,
"text": "\\theta_p(X) = \\left. \\frac{d}{d\\lambda} s_p\\left(e^{\\lambda X}\\right) \\right|_{\\lambda=0}"
},
{
"math_id": 129,
"text": "\\theta_p[X, Y] = \\left[\\theta_p X, \\theta_p Y\\right]"
},
{
"math_id": 130,
"text": "X, Y \\in \\mathfrak{g}~."
},
{
"math_id": 131,
"text": "\\mathfrak{m}"
},
{
"math_id": 132,
"text": "\\mathfrak{h}"
},
{
"math_id": 133,
"text": "\\theta_p"
},
{
"math_id": 134,
"text": "\\left.\\theta_p\\right|_{\\mathfrak{m}} = -Id"
},
{
"math_id": 135,
"text": "\\left.\\theta_p\\right|_{\\mathfrak{h}} = +Id"
},
{
"math_id": 136,
"text": "Id"
},
{
"math_id": 137,
"text": "[\\mathfrak{h}, \\mathfrak{h}] \\subset \\mathfrak{h} ~."
},
{
"math_id": 138,
"text": "[\\mathfrak{h}, \\mathfrak{m}] \\subset \\mathfrak{m}"
},
{
"math_id": 139,
"text": "[\\mathfrak{m}, \\mathfrak{m}] \\subset \\mathfrak{h} ~."
},
{
"math_id": 140,
"text": "T_pM \\cong \\mathfrak{m};"
},
{
"math_id": 141,
"text": "\\mathcal{L}_{X} g = \\lambda g\\,"
},
{
"math_id": 142,
"text": "\\lambda."
},
{
"math_id": 143,
"text": "\\nabla T \\,"
}
] | https://en.wikipedia.org/wiki?curid=1025455 |
1025655 | Balassa–Samuelson effect | Tendency for consumer prices to be systematically higher in more developed countries
The Balassa–Samuelson effect, also known as Harrod–Balassa–Samuelson effect (Kravis and Lipsey 1983), the Ricardo–Viner–Harrod–Balassa–Samuelson–Penn–Bhagwati effect (Samuelson 1994, p. 201), or productivity biased purchasing power parity (PPP) (Officer 1976) is the tendency for consumer prices to be systematically higher in more developed countries than in less developed countries. This observation about the systematic differences in consumer prices is called the "Penn effect". The Balassa–Samuelson hypothesis is the proposition that this can be explained by the greater variation in productivity between developed and less developed countries in the traded goods' sectors which in turn affects wages and prices in the non-tradable goods sectors.
Béla Balassa and Paul Samuelson independently proposed the causal mechanism for the Penn effect in the early 1960s.
The theory.
The Balassa–Samuelson effect depends on inter-country differences in the relative productivity of the tradable and non-tradable sectors.
The empirical “Penn Effect”.
By the law of one price, entirely tradable goods cannot vary greatly in price by location (because buyers can source from the lowest cost location). However, most services must be delivered locally (e.g. hairdressing), and many manufactured goods such as furniture have high transportation costs (or, conversely, low value-to-weight or low value-to-bulk ratios), which makes deviations from the law of one price (known as purchasing power parity or PPP-deviations) persistent. The Penn effect is that PPP-deviations usually occur in the same direction: where incomes are high, average price levels are typically high.
Basic form of the effect.
The simplest model which generates a Balassa–Samuelson effect has two countries, two goods (one tradable, and a country specific nontradable) and one factor of production, labor. For simplicity assume that productivity, as measured by marginal product (in terms of goods produced) of labor, in the nontradable sector is equal between countries and normalized to one.
formula_0
where "nt" denotes the nontradable sector and 1 and 2 indexes the two countries.
In each country, under the assumption of competition in the labor market the wage ends up being equal to the value of the marginal product, or the sector's price times MPL. (Note that this is not necessary, just sufficient, to produce the Penn effect. What is needed is that wages are at least related to productivity.)
formula_1
formula_2
Where the subscript "t" denotes the tradables sector. Note that the lack of a country specific subscript on the price of tradables means that tradable goods prices are equalized between the two countries.
Suppose that country 2 is the more productive, and hence, the wealthier one. This means that
formula_3
which implies that
formula_4.
So with the same (world) price for tradable goods, the price of nontradable goods will be lower in the less productive country, resulting in an overall lower price level.
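To make the mechanism concrete, the following minimal sketch (in Python, with purely illustrative numbers that are not estimates for any actual economy, and with an equal-weight price index chosen only for simplicity) evaluates the model's equations for two countries that share a world price for the tradable good but differ in tradable-sector productivity.

```python
# Illustrative sketch of the basic Balassa-Samuelson model (hypothetical numbers).
# Assumptions: one factor (labour), MPL in the nontradable sector normalised to 1
# in both countries, and a single world price for the tradable good.

p_t = 1.0                    # world price of the tradable good
mpl_t = {"country 1": 1.0,   # tradable-sector marginal product of labour
         "country 2": 3.0}   # country 2 is the more productive one
mpl_nt = 1.0                 # nontradable-sector MPL, equal across countries

for country, mpl in mpl_t.items():
    wage = p_t * mpl                 # competitive labour market: w = p_t * MPL_t
    p_nt = wage / mpl_nt             # and also w = p_nt * MPL_nt, so p_nt = w
    # A crude "price level": equal-weight average of tradable and nontradable prices.
    price_level = 0.5 * (p_t + p_nt)
    print(f"{country}: wage={wage:.2f}, nontradable price={p_nt:.2f}, "
          f"price level={price_level:.2f}")

# The richer, more productive country 2 ends up with higher wages, a higher
# nontradable price and hence a higher overall price level, even though
# tradable prices are identical: the Penn effect.
```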
The effect in more detail.
A typical discussion of this argument would include the following features:
Equivalent Balassa–Samuelson effect within a country.
The average asking price for a house in a prosperous city can be ten times that of an identical house in a depressed area of the "same country". Therefore, the RER-deviation exists independent of what happens to the "nominal exchange rate" (which is always 1 for areas sharing the same currency). Looking at the price level distribution within a country gives a clearer picture of the effect, because this removes three complicating factors:
A pint of pub beer is famously more expensive in the south of England than the north, but supermarket beer prices are very similar. This may be treated as anecdotal evidence in favour of the Balassa–Samuelson hypothesis, since supermarket beer is an easily transportable, traded good. (Although pub beer is transportable, the pub itself is not.) The BS-hypothesis explanation for the price differentials is that the 'productivity' of pub employees (in pints served per hour) is more uniform than the 'productivity' (in foreign currency earned per year) of people working in the dominant tradable sector in each region of the country (financial services in the south of England, manufacturing in the north). Although the employees of southern pubs are not significantly more productive than their counterparts in the north, southern pubs must pay wages comparable to those offered by other southern firms in order to keep their staff. This results in southern pubs incurring a higher labour cost per pint served.
Empirical evidence on the Balassa–Samuelson effect.
Evidence for the Penn effect is well established in today's world (and is readily observable when traveling internationally). However, the Balassa–Samuelson (BS) hypothesis implies that countries with rapidly expanding economies should tend to have more rapidly appreciating exchange rates (for instance the Four Asian Tigers); conventional econometric tests yield mixed findings for this prediction.
In total, since it was (re)discovered in 1964, according to Tica and Druzic (2006) the HBS theory "has been tested 60 times in 98 countries in time series or panel analyses and in 142 countries in cross-country analyses. In these analyzed estimates, country specific HBS coefficients have been estimated 166 times in total, and at least once for 65 different countries". Many papers have been published since then. Bahmani-Oskooee and Abm (2005) & Egert, Halpern and McDonald (2006) also provide quite interesting surveys of empirical evidence on BS effect.
Over time, the testing of the HBS model has evolved quite dramatically. Panel data and time series techniques have crowded out old cross-section tests, demand side and terms of trade variables have emerged as explanatory variables, new econometric methodologies have replaced old ones, and recent improvements with endogenous tradability have provided direction for future researchers.
The sector approach combined with panel data analysis and/or cointegration has become a benchmark for empirical tests. Consensus has been reached on the testing of internal and external HBS effects (vis a vis a numeraire country) with a strong reservation against the purchasing power parity assumption in the tradable sector.
The vast majority of the evidence supports the HBS model. A deeper analysis of the empirical evidence shows that the strength of the results is strongly influenced by the nature of the tests and set of countries analyzed. Almost all cross-section tests confirm the model, while panel data results confirm the model for the majority of countries included in the tests. Although some negative results have been returned, there has been strong support for the predictions of a cointegration between relative productivity and relative prices within a country and between countries, while the interpretation of evidence for cointegration between real exchange rate and relative productivity has been much more controversial.
Therefore, most of the contemporary authors (e.g.: Egert, Halpern and McDonald (2006); Drine & Rault (2002)) analyze main BS assumptions separately:
Refinements to the econometric techniques and debate about alternative models are continuing in the International economics community. For instance:
"A possible explanation of the BS empirical rejection may simply be that there are additional long-run real exchange determinants that have to be considered." Drine & Rault conclude.
The next section lists some of the alternative proposals to an explanation of the Penn effect, but there are significant econometric problems with testing the BS-hypothesis, and the lack of strong evidence for it between modern economies may not refute it, or even imply that it produces a small effect. For instance, other effects of exchange rate movements might mask the long-term BS-hypothesis mechanism (making it harder to detect if it exists). Exchange rate movements are believed by some to affect productivity; if this is true then regressing RER movements on differential productivity growth will be 'polluted' by a totally different relationship between the variables.
Alternative, and additional causes of the Penn effect.
Most professional economists accept that the Balassa–Samuelson effect model has some merit. However other sources of the Penn effect RER/GDP relationship have been proposed:
The distribution sector.
In a 2001 International Monetary Fund working paper Macdonald & Ricci accept that relative productivity changes produce PPP-deviations, but argue that this is not confined to tradables versus non-tradable sectors. Quoting the abstract: "an increase in the productivity and competitiveness of the distribution sector with respect to foreign countries leads to an appreciation of the real exchange rate, similarly to what a relative increase in the domestic productivity of tradables does".
The Dutch Disease.
Capital inflows (say to the Netherlands) may stimulate currency appreciation through demand for money. As the RER appreciates, the competitiveness of the traded-goods sectors falls (in terms of the international price of traded goods).
In this model, there has been no change in real economy productivities, but money price productivity in traded goods has been exogenously lowered through currency appreciation. Since capital inflow is associated with high-income states (e.g. Monaco) this could explain part of the RER/Income correlation.
Yves Bourdet and Hans Falck have studied the effect of Cape Verde remittances on the traded-goods sector. They find that, as local incomes have risen with a doubling of remittances from abroad, the Cape Verde RER has appreciated 14% (during the 1990s). The export sector of the Cape Verde economy suffered a similar fall in productivity during the same period, which was caused entirely by capital flows and not by the BS-effect.
Services are a 'superior good'.
Rudi Dornbusch (1998) and others say that income rises can change the ratio of demand for goods and services (tradable and non-tradable sectors). This is because services tend to be superior goods, which are consumed proportionately more heavily at higher incomes.
A shift in preferences at the microeconomic level, caused by an income effect can change the make-up of the consumer price index to include proportionately more expenditure on services. This alone may shift the consumer price index, and might make the non-trade sector look relatively less productive than it had been when demand was lower; if service quality (rather than quantity) follows diminishing returns to labour input, a general demand for a higher service quality automatically produces a reduction in per-capita productivity.
A typical labour market pattern is that high-GDP countries have a higher ratio of service-sector to traded-goods-sector employment than low-GDP countries. If the traded/non-traded consumption ratio is also correlated with the price level, the Penn effect would still be observed with labour productivity rising equally fast (in identical technologies) between countries.
The protectionism explanation.
Lipsey and Swedenborg (1996) show a strong correlation between the barriers to Free trade and the domestic price level. If wealthy countries feel more able to protect their native producers than developing nations (e.g. with tariffs on agricultural imports) we should expect to see a correlation between rising GDP and rising prices (for goods in protected industries - especially food).
This explanation is similar to the BS-effect, since an industry needing protection must be measurably less productive in the world market of the commodity it produces. However, this reasoning is slightly different from the pure BS-hypothesis, because the goods being produced are 'traded-goods', even though protectionist measures mean that they are more expensive on the domestic market than the international market, so they will not be "traded" internationally.
Trade theory implications.
The supply-side economists (and others) have argued that raising International competitiveness through policies that promote traded goods sectors' productivity (at the expense of other sectors) will increase a nation's GDP, and increase its standard of living, when compared with treating the sectors equally. The Balassa–Samuelson effect might be one reason to oppose this trade theory, because it predicts that: "a GDP gain in traded goods does not lead to as much of an improvement in the living standard as an equal GDP increase in the non-traded sector". (This is due to the effect's prediction that the CPI will increase by more in the former case.)
History.
The Balassa–Samuelson effect model was developed independently in 1964 by Béla Balassa and Paul Samuelson. The effect had previously been hypothesized in the first edition of Roy Forbes Harrod's "International Economics" (1939, pp. 71–77), but this portion was not included in subsequent editions.
Partly because empirical findings have been mixed, and partly to differentiate the model from its conclusion, modern papers tend to refer to the Balassa–Samuelson "hypothesis", rather than the Balassa–Samuelson "effect". (See for instance: "A panel data analysis of the Balassa-Samuelson hypothesis", referred to above.)
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
"results do not show supportive evidence for the Balassa–Samuelson effect in the long run."
"Real appreciation is also observed in tradables and often accounts for the bulk in the overall appreciation". | [
{
"math_id": 0,
"text": "MPL_{nt,1}=MPL_{nt,2}=1"
},
{
"math_id": 1,
"text": "w_1=p_{nt,1}*MPL_{nt,1}=p_{t}*MPL_{t,1}"
},
{
"math_id": 2,
"text": "w_2=p_{nt,2}*MPL_{nt,2}=p_{t}*MPL_{t,2}"
},
{
"math_id": 3,
"text": "MPL_{t,1}<MPL_{t,2}"
},
{
"math_id": 4,
"text": "p_{nt,1}<p_{nt,2}"
}
] | https://en.wikipedia.org/wiki?curid=1025655 |
1025748 | Lévy process | Stochastic process in probability theory
In probability theory, a Lévy process, named after the French mathematician Paul Lévy, is a stochastic process with independent, stationary increments: it represents the motion of a point whose successive displacements are random, in which displacements in pairwise disjoint time intervals are independent, and displacements in different time intervals of the same length have identical probability distributions. A Lévy process may thus be viewed as the continuous-time analog of a random walk.
The most well known examples of Lévy processes are the Wiener process, often called the Brownian motion process, and the Poisson process. Further important examples include the Gamma process, the Pascal process, and the Meixner process. Aside from Brownian motion with drift, all other proper (that is, not deterministic) Lévy processes have discontinuous paths. All Lévy processes are additive processes.
Mathematical definition.
A Lévy process is a stochastic process formula_0 that satisfies the following properties:
formula_1 almost surely (the process starts at the origin);
independence of increments: for any choice of times formula_2 the increments formula_3 are mutually independent;
stationary increments: for any formula_4 the increment formula_5 is equal in distribution to formula_6
continuity in probability: for any formula_7 and formula_8 it holds that formula_9
If formula_10 is a Lévy process then one may construct a version of formula_10 such that formula_11 is almost surely right-continuous with left limits.
Properties.
Independent increments.
A continuous-time stochastic process assigns a random variable "X""t" to each point "t" ≥ 0 in time. In effect it is a random function of "t". The increments of such a process are the differences "X""s" − "X""t" between its values at different times "t" < "s". To call the increments of a process independent means that increments "X""s" − "X""t" and "X""u" − "X""v" are independent random variables whenever the two time intervals do not overlap and, more generally, any finite number of increments assigned to pairwise non-overlapping time intervals are mutually (not just pairwise) independent.
Stationary increments.
To call the increments stationary means that the probability distribution of any increment "X""t" − "X""s" depends only on the length "t" − "s" of the time interval; increments on equally long time intervals are identically distributed.
If formula_10 is a Wiener process, the probability distribution of "X""t" − "X""s" is normal with expected value 0 and variance "t" − "s".
If formula_10 is a Poisson process, the probability distribution of "X""t" − "X""s" is a Poisson distribution with expected value λ("t" − "s"), where λ > 0 is the "intensity" or "rate" of the process.
If formula_10 is a Cauchy process, the probability distribution of "X""t" − "X""s" is a Cauchy distribution with density formula_12.
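The stationary and independent increments described above can be checked empirically by simulation. The sketch below (a hypothetical illustration with arbitrary parameter values) builds Poisson-process paths on a grid and compares increments taken over two disjoint intervals of equal length.

```python
import numpy as np

rng = np.random.default_rng(0)

lam, dt, n_steps, n_paths = 2.0, 0.05, 200, 5000   # arbitrary illustrative values

# Simulate Poisson-process paths by summing independent Poisson(lam*dt) increments.
increments = rng.poisson(lam * dt, size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)

# Increments over two disjoint windows of 10 steps each (interval length 0.5).
inc_a = paths[:, 30] - paths[:, 20]
inc_b = paths[:, 130] - paths[:, 120]

print("mean/var of first increment :", inc_a.mean(), inc_a.var())
print("mean/var of second increment:", inc_b.mean(), inc_b.var())
print("expected (Poisson, lam*0.5) :", lam * 0.5)
# Both increments have approximately the same Poisson(lam * 0.5) distribution,
# and their empirical correlation is near zero, reflecting independence:
print("correlation:", np.corrcoef(inc_a, inc_b)[0, 1])
```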
Infinite divisibility.
The distribution of a Lévy process has the property of infinite divisibility: given any integer "n", the law of a Lévy process at time t can be represented as the law of the sum of "n" independent random variables, which are precisely the increments of the Lévy process over time intervals of length "t"/"n," which are independent and identically distributed by assumptions 2 and 3. Conversely, for each infinitely divisible probability distribution formula_13, there is a Lévy process formula_10 such that the law of formula_14 is given by formula_13.
Moments.
In any Lévy process with finite moments, the "n"th moment formula_15, is a polynomial function of "t"; these functions satisfy a binomial identity:
formula_16
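For a concrete check, the moments of a standard Wiener process are available in closed form (zero for odd "n", and equal to ("n"−1)!! times "t" raised to the power "n"/2 for even "n"), so the binomial identity can be verified directly at arbitrary numeric times, as in the following sketch.

```python
from math import comb

def double_factorial(m):
    """Double factorial m!!, with the convention that it equals 1 for m <= 0."""
    return 1 if m <= 0 else m * double_factorial(m - 2)

def mu(n, time):
    """n-th moment E[W_time**n] of a standard Wiener process (0 for odd n)."""
    return 0.0 if n % 2 else double_factorial(n - 1) * time ** (n // 2)

t, s = 0.7, 1.9          # arbitrary time lengths
for n in range(1, 7):
    lhs = mu(n, t + s)
    rhs = sum(comb(n, k) * mu(k, t) * mu(n - k, s) for k in range(n + 1))
    print(n, abs(lhs - rhs) < 1e-9)   # the binomial identity holds for each n
```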
Lévy–Khintchine representation.
The distribution of a Lévy process is characterized by its characteristic function, which is given by the Lévy–Khintchine formula (general for all infinitely divisible distributions): If formula_17 is a Lévy process, then its characteristic function formula_18 is given by
formula_19
where formula_20, formula_21, and formula_22 is a σ-finite measure called the Lévy measure of formula_10, satisfying the property
formula_23
In the above, formula_24 is the indicator function. Because characteristic functions uniquely determine their underlying probability distributions, each Lévy process is uniquely determined by the "Lévy–Khintchine triplet" formula_25. The terms of this triplet suggest that a Lévy process can be seen as having three independent components: a linear drift, a Brownian motion, and a Lévy jump process, as described below. This immediately gives that the only (nondeterministic) continuous Lévy process is a Brownian motion with drift; similarly, every Lévy process is a semimartingale.
Lévy–Itô decomposition.
Because the characteristic functions of independent random variables multiply, the Lévy–Khintchine theorem suggests that every Lévy process is the sum of Brownian motion with drift and another independent random variable, a Lévy jump process. The Lévy–Itô decomposition describes the latter as a (stochastic) sum of independent Poisson random variables.
Let formula_26— that is, the restriction of formula_22 to formula_27, normalized to be a probability measure; similarly, let formula_28 (but do not rescale). Then
formula_29
The former is the characteristic function of a compound Poisson process with intensity formula_30 and child distribution formula_31. The latter is that of a compensated generalized Poisson process (CGPP): a process with countably many jump discontinuities on every interval a.s., but such that those discontinuities are of magnitude less than formula_32. If formula_33, then the CGPP is a pure jump process. Therefore in terms of processes one may decompose formula_10 in the following way
formula_34
where formula_35 is the compound Poisson process with jumps larger than formula_32 in absolute value and formula_36 is the aforementioned compensated generalized Poisson process which is also a zero-mean martingale.
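In the finite-activity case the compensated small-jump part is absent and a path is simply a drift plus a Brownian motion plus a compound Poisson process. The sketch below simulates such a path; all parameter values and the normal jump-size (child) distribution are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

T, n = 10.0, 10_000
dt = T / n
a, sigma = 0.3, 0.5          # drift and Brownian scale (arbitrary)
jump_rate = 1.5              # intensity of the compound Poisson part
jump_law = lambda k: rng.normal(2.0, 0.4, size=k)   # assumed jump-size distribution

t = np.linspace(dt, T, n)

# Continuous part: a*t + sigma*B_t
brownian = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n))
continuous = a * t + sigma * brownian

# Jump part: compound Poisson with the given intensity and child distribution.
n_jumps = rng.poisson(jump_rate * T)
jump_times = np.sort(rng.uniform(0.0, T, size=n_jumps))
jump_sizes = jump_law(n_jumps)
jumps = np.array([jump_sizes[jump_times <= s].sum() for s in t])

X = continuous + jumps       # a sample path of the (finite-activity) Lévy process
print("number of jumps:", n_jumps, " X(T) =", X[-1])
```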
Generalization.
A Lévy random field is a multi-dimensional generalization of Lévy process.
Still more general are decomposable processes.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X=\\{X_t:t \\geq 0\\}"
},
{
"math_id": 1,
"text": "X_0=0 \\,"
},
{
"math_id": 2,
"text": "0 \\leq t_1 < t_2<\\cdots <t_n <\\infty"
},
{
"math_id": 3,
"text": "X_{t_2}-X_{t_1}, X_{t_3}-X_{t_2},\\dots,X_{t_n}-X_{t_{n-1}}"
},
{
"math_id": 4,
"text": "s<t \\,"
},
{
"math_id": 5,
"text": "X_t-X_s \\,"
},
{
"math_id": 6,
"text": "X_{t-s}; \\,"
},
{
"math_id": 7,
"text": "\\varepsilon>0"
},
{
"math_id": 8,
"text": "t\\ge 0"
},
{
"math_id": 9,
"text": "\\lim_{h\\rightarrow 0} P(|X_{t+h}-X_t|>\\varepsilon)=0."
},
{
"math_id": 10,
"text": "X"
},
{
"math_id": 11,
"text": "t \\mapsto X_t"
},
{
"math_id": 12,
"text": "f(x; t) = { 1 \\over \\pi } \\left[ { t \\over x^2 + t^2 } \\right] "
},
{
"math_id": 13,
"text": "F"
},
{
"math_id": 14,
"text": "X_1"
},
{
"math_id": 15,
"text": "\\mu_n(t) = E(X_t^n)"
},
{
"math_id": 16,
"text": "\\mu_n(t+s)=\\sum_{k=0}^n {n \\choose k} \\mu_k(t) \\mu_{n-k}(s)."
},
{
"math_id": 17,
"text": " X = (X_t)_{t\\geq 0} "
},
{
"math_id": 18,
"text": " \\varphi_X(\\theta) "
},
{
"math_id": 19,
"text": "\\varphi_X(\\theta)(t) := \\mathbb{E}\\left[e^{i\\theta X(t)}\\right] = \\exp{\\left(t\\left(ai\\theta - \\frac{1}{2}\\sigma^2\\theta^2 + \\int_{\\R\\setminus\\{0\\}}{\\left(e^{i\\theta x}-1 -i\\theta x\\mathbf{1}_{|x|<1}\\right)\\,\\Pi(dx)}\\right)\\right)}\n"
},
{
"math_id": 20,
"text": "a \\in \\mathbb{R}"
},
{
"math_id": 21,
"text": "\\sigma\\ge 0"
},
{
"math_id": 22,
"text": "\\Pi"
},
{
"math_id": 23,
"text": "\\int_{\\R\\setminus\\{0\\}}{\\min(1,x^2)\\,\\Pi(dx)} < \\infty. "
},
{
"math_id": 24,
"text": "\\mathbf{1}"
},
{
"math_id": 25,
"text": "(a,\\sigma^2, \\Pi)"
},
{
"math_id": 26,
"text": "\\nu=\\frac{\\Pi|_{\\R\\setminus(-1,1)}}{\\Pi(\\R\\setminus(-1,1))}"
},
{
"math_id": 27,
"text": "\\R\\setminus(-1,1)"
},
{
"math_id": 28,
"text": "\\mu=\\Pi|_{(-1,1)\\setminus\\{0\\}}"
},
{
"math_id": 29,
"text": "\\int_{\\R\\setminus\\{0\\}}{\\left(e^{i\\theta x}-1 -i\\theta x\\mathbf{1}_{|x|<1}\\right)\\,\\Pi(dx)}=\\Pi(\\R\\setminus(-1,1))\\int_{\\R}{(e^{i\\theta x}-1)\\,\\nu(dx)}+\\int_{\\R}{(e^{i\\theta x}-1-i\\theta x)\\,\\mu(dx)}."
},
{
"math_id": 30,
"text": "\\Pi(\\R\\setminus(-1,1))"
},
{
"math_id": 31,
"text": "\\nu"
},
{
"math_id": 32,
"text": "1"
},
{
"math_id": 33,
"text": "\\int_{\\R}{|x|\\,\\mu(dx)}<\\infty"
},
{
"math_id": 34,
"text": "X_t=\\sigma B_t + at+Y_t+Z_t, t\\geq 0,"
},
{
"math_id": 35,
"text": "Y"
},
{
"math_id": 36,
"text": "Z_t"
}
] | https://en.wikipedia.org/wiki?curid=1025748 |
1025794 | Minimum degree algorithm | Matrix manipulation algorithm
In numerical analysis, the minimum degree algorithm is an algorithm used to permute the rows and columns of a symmetric sparse matrix before applying the Cholesky decomposition, to reduce the number of non-zeros in the Cholesky factor.
This results in reduced storage requirements and means that the Cholesky factor can be applied with fewer arithmetic operations. (Sometimes it may also pertain to an incomplete Cholesky factor used as a preconditioner—for example, in the preconditioned conjugate gradient algorithm.)
Minimum degree algorithms are often used in the finite element method where the reordering of nodes can be carried out depending only on the topology of the mesh, rather than on the coefficients in the partial differential equation, resulting in efficiency savings when the same mesh is used for a variety of coefficient values.
Given a linear system
formula_0
where A is an formula_1 real symmetric sparse square matrix. The Cholesky factor L will typically suffer 'fill in', that is have more non-zeros than the upper triangle of A. We seek a permutation matrix P, so that the matrix
formula_2, which is also symmetric, has the least possible fill in its Cholesky factor. We solve the reordered system
formula_3
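The importance of the ordering can be seen on a tiny example (a dense NumPy Cholesky is used here purely for illustration; real sparse codes never form dense factors). The classic "arrowhead" matrix fills in completely if the dense row and column are eliminated first, and not at all if they are eliminated last.

```python
import numpy as np

# Arrowhead matrix: dense first row/column, otherwise diagonal. Factorising it
# as-is fills in completely; eliminating the dense node last gives no fill.
n = 6
A = np.eye(n) * 4.0
A[0, 1:] = A[1:, 0] = 1.0          # symmetric positive definite arrowhead

def nnz_lower(L, tol=1e-12):
    """Count the nonzero entries of a lower-triangular factor."""
    return int(np.sum(np.abs(np.tril(L)) > tol))

L_bad = np.linalg.cholesky(A)

perm = list(range(1, n)) + [0]     # put the dense vertex (degree n-1) last,
P = np.eye(n)[:, perm]             # exactly what minimum degree would choose
L_good = np.linalg.cholesky(P.T @ A @ P)

print("nonzeros in L, original order:", nnz_lower(L_bad))
print("nonzeros in L, reordered     :", nnz_lower(L_good))
```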
The problem of finding the best ordering is an NP-complete problem and is thus intractable, so heuristic methods are used instead. The minimum degree algorithm is derived from a method first proposed by Markowitz in 1959 for non-symmetric linear programming problems, which is loosely described as follows. At each step in Gaussian elimination row and column permutations are performed so as to minimize the number of off diagonal non-zeros in the pivot row and column. A symmetric version
of Markowitz's method was described by Tinney and Walker in 1967, and Rose later derived a graph-theoretic version of the algorithm in which the factorization is only simulated; this was named the minimum degree algorithm. The graph referred to is the graph with "n" vertices, with vertices "i" and "j" connected by an edge when formula_4, and the "degree" is the degree of the vertices. A crucial aspect of such algorithms is a tie-breaking strategy when there is a choice of renumbering resulting in the same degree.
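A minimal sketch of the basic greedy strategy might look as follows. It works on the elimination graph, repeatedly removing a vertex of smallest current degree and adding the fill edges that elimination would create among its neighbours; it is a didactic illustration only, not the multiple or approximate minimum degree algorithms used in practice, and its tie-breaking is arbitrary.

```python
def minimum_degree_ordering(adjacency):
    """Greedy minimum degree ordering.

    adjacency: dict mapping each vertex to the set of its neighbours
    (the graph of the symmetric matrix, i.e. i-j is an edge when a_ij != 0).
    Returns an elimination ordering of the vertices.
    """
    adj = {v: set(nbrs) for v, nbrs in adjacency.items()}
    order = []
    while adj:
        # Pick a vertex of minimum current degree (ties broken arbitrarily here;
        # real implementations use careful tie-breaking strategies).
        v = min(adj, key=lambda u: len(adj[u]))
        nbrs = adj.pop(v)
        order.append(v)
        # Simulate elimination: the neighbours of v become pairwise connected (fill).
        for u in nbrs:
            adj[u].discard(v)
            adj[u] |= (nbrs - {u})
    return order


# Example: the graph of a small 5-by-5 sparse symmetric matrix.
graph = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4}, 3: {1}, 4: {2}}
print(minimum_degree_ordering(graph))
```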
A version of the minimum degree algorithm was implemented in the MATLAB function symmmd (where MMD stands for multiple minimum degree), but has now been superseded by a symmetric approximate multiple minimum degree function symamd, which is faster. This is confirmed by theoretical analysis, which shows that for graphs with "n" vertices and "m" edges, MMD has a tight upper bound of formula_5 on its running time, whereas for AMD a tight bound of formula_6 holds. Cummings, Fahrbach, and Fatehpuria designed an exact minimum degree algorithm with formula_6 running time, and showed that no such algorithm can exist that runs in time formula_7, for any formula_8, assuming the strong exponential time hypothesis. | [
{
"math_id": 0,
"text": " \\mathbf{A}\\mathbf{x} = \\mathbf{b}"
},
{
"math_id": 1,
"text": "n \\times n"
},
{
"math_id": 2,
"text": "\\mathbf{P}^T\\mathbf{A}\\mathbf{P}"
},
{
"math_id": 3,
"text": " \\left(\\mathbf{P}^T\\mathbf{A}\\mathbf{P}\\right) \\left(\\mathbf{P}^T\\mathbf{x}\\right) = \\mathbf{P}^T\\mathbf{b}."
},
{
"math_id": 4,
"text": "a_{ij} \\ne 0"
},
{
"math_id": 5,
"text": "O(n^2m)"
},
{
"math_id": 6,
"text": "O(nm)"
},
{
"math_id": 7,
"text": "O(nm^{1-\\varepsilon})"
},
{
"math_id": 8,
"text": "\\varepsilon > 0"
}
] | https://en.wikipedia.org/wiki?curid=1025794 |
1025901 | Reynolds decomposition | In fluid dynamics and turbulence theory, Reynolds decomposition is a mathematical technique used to separate the expectation value of a quantity from its fluctuations.
Decomposition.
For example, for a quantity formula_0 the decomposition would be
formula_1
where formula_2 denotes the expectation value of formula_0, (often called the steady component/time, spatial or ensemble average), and formula_3, are the deviations from the expectation value (or fluctuations). The fluctuations are defined as the expectation value subtracted from quantity formula_0 such that their time average equals zero.
The expected value, formula_2, is often found from an ensemble average, which is an average taken over multiple experiments under identical conditions. The expected value is also sometimes denoted formula_4, but it is also often seen with the over-bar notation.
Direct numerical simulation, that is, resolving the Navier–Stokes equations completely in formula_5, is only possible on extremely fine computational grids and with small time steps, even when Reynolds numbers are low, and it becomes prohibitively expensive at high Reynolds numbers. Because of these computational constraints, simplifications of the Navier–Stokes equations are useful for parameterizing turbulent motions that are smaller than the computational grid, allowing larger computational domains.
Reynolds decomposition allows the simplification of the Navier–Stokes equations by substituting in the sum of the steady component and the perturbations to the velocity profile and taking the mean value. The resulting equation contains a nonlinear term, known as the Reynolds stress, which arises from the averaged products of the velocity fluctuations and represents the effect of turbulence on the mean flow.
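As a small numerical illustration (using a synthetic signal rather than real turbulence data), the decomposition of a sampled quantity into its mean and fluctuation, and the vanishing average of the fluctuation, can be demonstrated as follows.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "measured" signal: a steady component plus random fluctuations.
t = np.linspace(0.0, 10.0, 5000)
u = 3.0 + 0.4 * rng.standard_normal(t.size)   # u(t) = u_mean + u'(t)

u_mean = u.mean()          # the expectation / time-averaged component
u_prime = u - u_mean       # the fluctuation u'

print("mean component:", u_mean)
print("average of u' :", u_prime.mean())       # ~ 0 by construction
print("variance of u':", u_prime.var())        # strength of the fluctuations
# The average of the fluctuations is zero, while averaged products such as
# mean(u' * u') (and, with several velocity components, mean(u'_i * u'_j))
# are generally nonzero; these averaged products are the Reynolds stresses.
```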
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "u"
},
{
"math_id": 1,
"text": "u(x,y,z,t) = \\overline{u(x,y,z)} + u'(x,y,z,t) "
},
{
"math_id": 2,
"text": "\\overline{u}"
},
{
"math_id": 3,
"text": "u'"
},
{
"math_id": 4,
"text": "\\langle u\\rangle"
},
{
"math_id": 5,
"text": "(x,y,z,t)"
}
] | https://en.wikipedia.org/wiki?curid=1025901 |
10261414 | Hot chocolate effect | Phenomenon of wave mechanics
The hot chocolate effect is a phenomenon of wave mechanics in which the pitch heard from tapping a cup of hot liquid rises after the addition of a soluble powder. The effect is thought to happen because upon initial stirring, entrained gas bubbles reduce the speed of sound in the liquid, lowering the frequency. As the bubbles clear, sound travels faster in the liquid and the frequency increases.
Name.
The effect was initially observed when making instant coffee and pouring beer, but also occurs in other situations such as adding salt to supersaturated hot water or cold beer. Recent research has found many more substances which create the effect, even in initially non-supersaturated liquids.
It was named and popularized by Frank Crawford of the Lawrence Berkeley National Laboratory starting in 1980 after the effect itself was pointed out to him by Nancy Steiner, though the effect had been reported several times in the preceding decades.
Description.
The effect can be observed by pouring hot milk or hot water into a mug, stirring in chocolate powder, and tapping the bottom of the mug with a spoon. The pitch of the taps will increase progressively with no relation to the speed or force of tapping. Subsequent stirring of the same solution (without adding more chocolate powder) will gradually decrease the pitch again, followed by another increase. This process can be repeated a number of times, until equilibrium has been reached. Musical effects can be achieved by varying the strength and timing of the stirring action along with the timing of the tapping action.
Explanation.
The phenomenon is explained by the effect of bubble density on the speed of sound in the liquid. The note heard is the frequency of a standing wave where a quarter wavelength is the distance between the base of the mug and the liquid surface. This frequency "f" is equal to the speed "v" of the wave divided by four times the height of the water column h:
formula_0
The speed of sound "v" in a homogeneous liquid or gas is dependent on the fluid's mass density (formula_1) and adiabatic bulk modulus (formula_2), according to the Newton-Laplace formula:
formula_3
Water is approximately 800 times denser than air, and air is approximately 15,000 times more compressible than water. (Compressibility is the inverse of the bulk modulus formula_2.) When water is filled with air bubbles, the fluid's density is still very close to the density of water, but the compressibility will be the compressibility of air. This greatly reduces the speed of sound in the liquid. Wavelength is constant for a given volume of fluid; therefore the frequency (pitch) of the sound will decrease as long as gas bubbles are present.
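A rough numerical sketch of this argument, using typical textbook values for water and air and treating the bubbly liquid as having the density of water but the compressibility of air (the limiting case described above), with an assumed 10 cm liquid column, is given below.

```python
import math

h = 0.10                 # assumed height of the liquid column in the mug, m

rho_water = 1000.0       # kg/m^3
K_water = 2.2e9          # adiabatic bulk modulus of water, Pa (typical value)
K_air = 1.4e5            # adiabatic bulk modulus of air at 1 atm, Pa (~ gamma * p)

def fundamental(K, rho, height):
    v = math.sqrt(K / rho)           # Newton-Laplace speed of sound
    return v / (4.0 * height)        # quarter-wave resonance, f = v / (4h)

f_clear = fundamental(K_water, rho_water, h)
f_bubbly = fundamental(K_air, rho_water, h)   # water density, air compressibility

print(f"speed of sound, clear water : {math.sqrt(K_water / rho_water):7.1f} m/s")
print(f"speed of sound, bubbly limit: {math.sqrt(K_air / rho_water):7.1f} m/s")
print(f"tap pitch, clear water : {f_clear:7.1f} Hz")
print(f"tap pitch, bubbly limit: {f_bubbly:7.1f} Hz")
# As the bubbles clear, the pitch rises from the low (bubbly) value back
# towards the much higher clear-water value.
```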
Different rates of bubble formation will generate different acoustic profiles, allowing differentiation of the added solutes. | [
{
"math_id": 0,
"text": "\nf = \\frac{1}{4}\\frac{v}{h}\n"
},
{
"math_id": 1,
"text": "\\rho"
},
{
"math_id": 2,
"text": "K"
},
{
"math_id": 3,
"text": "\nv = \\sqrt{\\frac{K}{\\rho}}\n"
}
] | https://en.wikipedia.org/wiki?curid=10261414 |
10261692 | Ground track | Path on the surface of the Earth or another body directly below an aircraft or satellite
A ground track or ground trace is the path on the surface of a planet directly below an aircraft's or satellite's trajectory. In the case of satellites, it is also known as a suborbital track or subsatellite track, and is the vertical projection of the satellite's orbit onto the surface of the Earth (or whatever body the satellite is orbiting).
A satellite ground track may be thought of as a path along the Earth's surface that traces the movement of an imaginary line between the satellite and the center of the Earth. In other words, the ground track is the set of points at which the satellite will pass directly overhead, or cross the zenith, in the frame of reference of a ground observer.
Aircraft ground tracks.
In air navigation, ground tracks typically approximate an arc of a great circle, this being the shortest distance between two points on the Earth's surface. In order to follow a specified ground track, a pilot must adjust their heading in order to compensate for the effect of wind. Aircraft routes are planned to avoid restricted airspace and dangerous areas, and to pass near navigation beacons.
Satellite ground tracks.
The ground track of a satellite can take a number of different forms, depending on the values of the orbital elements, parameters that define the size, shape, and orientation of the satellite's orbit.
Direct and retrograde motion.
Typically, satellites have a roughly sinusoidal ground track. A satellite with an orbital inclination between zero and ninety degrees is said to be in what is called a "direct" or "prograde orbit", meaning that it orbits in the same direction as the planet's rotation. A satellite with an orbital inclination between 90° and 180° (or, equivalently, between 0° and −90°) is said to be in a "retrograde orbit".
A satellite in a direct orbit with an orbital period less than one day will tend to move from west to east along its ground track. This is called "apparent direct" motion. A satellite in a direct orbit with an orbital period "greater" than one day will tend to move from east to west along its ground track, in what is called "apparent retrograde" motion. This effect occurs because the satellite orbits more slowly than the speed at which the Earth rotates beneath it. Any satellite in a true retrograde orbit will always move from east to west along its ground track, regardless of the length of its orbital period.
Because a satellite in an eccentric orbit moves faster near perigee and slower near apogee, it is possible for a satellite to track eastward during part of its orbit and westward during another part. This phenomenon allows for ground tracks that cross over themselves in a single orbit, as in the geosynchronous and Molniya orbits discussed below.
Effect of orbital period.
A satellite whose orbital period is an integer fraction of a day (e.g., 24 hours, 12 hours, 8 hours, etc.) will follow roughly the same ground track every day. This ground track is shifted east or west depending on the longitude of the ascending node, which can vary over time due to perturbations of the orbit. If the period of the satellite is slightly longer than an integer fraction of a day, the ground track will shift west over time; if it is slightly shorter, the ground track will shift east.
As the orbital period of a satellite increases, approaching the rotational period of the Earth (in other words, as its average orbital speed slows towards the rotational speed of the Earth), its sinusoidal ground track will become compressed longitudinally, meaning that the "nodes" (the points at which it crosses the equator) will become closer together until at geosynchronous orbit they lie directly on top of each other. For orbital periods "longer" than the Earth's rotational period, an increase in the orbital period corresponds to a longitudinal stretching out of the (apparent retrograde) ground track.
A satellite whose orbital period is "equal" to the rotational period of the Earth is said to be in a geosynchronous orbit. Its ground track will have a "figure eight" shape over a fixed location on the Earth, crossing the equator twice each day. It will track eastward when it is on the part of its orbit closest to perigee, and westward when it is closest to apogee.
A special case of the geosynchronous orbit, the geostationary orbit, has an eccentricity of zero (meaning the orbit is circular), and an inclination of zero in the Earth-Centered, Earth-Fixed coordinate system (meaning the orbital plane is not tilted relative to the Earth's equator). The "ground track" in this case consists of a single point on the Earth's equator, above which the satellite sits at all times. Note that the satellite is still orbiting the Earth — its apparent lack of motion is due to the fact that the Earth is rotating about its own center of mass at the same rate as the satellite is orbiting.
Effect of inclination.
Orbital inclination is the angle formed between the plane of an orbit and the equatorial plane of the Earth. The geographic latitudes covered by the ground track will range from "–i" to "i", where "i" is the orbital inclination. In other words, the greater the inclination of a satellite's orbit, the further north and south its ground track will pass. A satellite with an inclination of exactly 90° is said to be in a polar orbit, meaning it passes over the Earth's north and south poles.
Launch sites at lower latitudes are often preferred partly for the flexibility they allow in orbital inclination; the initial inclination of an orbit is constrained to be greater than or equal to the launch latitude. Vehicles launched from Cape Canaveral, for instance, will have an initial orbital inclination of at least 28°27′, the latitude of the launch site—and to achieve this minimum requires launching with a due east azimuth, which may not always be feasible given other launch constraints. At the extremes, a launch site located on the equator can launch directly into any desired inclination, while a hypothetical launch site at the north or south pole would only be able to launch into polar orbits. (While it is possible to perform an orbital inclination change maneuver once on orbit, such maneuvers are typically among the most costly, in terms of fuel, of all orbital maneuvers, and are typically avoided or minimized to the extent possible.)
In addition to providing for a wider range of initial orbit inclinations, low-latitude launch sites offer the benefit of requiring less energy to make orbit (at least for prograde orbits, which comprise the vast majority of launches), due to the initial velocity provided by the Earth's rotation. The desire for equatorial launch sites, coupled with geopolitical and logistical realities, has fostered the development of floating launch platforms, most notably Sea Launch.
Effect of argument of perigee.
If the argument of perigee is zero, meaning that perigee and apogee lie in the equatorial plane, then the ground track of the satellite will appear the same above and below the equator (i.e., it will exhibit 180° rotational symmetry about the orbital nodes.) If the argument of perigee is non-zero, however, the satellite will behave differently in the northern and southern hemispheres. The Molniya orbit, with an argument of perigee near −90°, is an example of such a case. In a Molniya orbit, apogee occurs at a high latitude (63°), and the orbit is highly eccentric ("e" = 0.72). This causes the satellite to "hover" over a region of the northern hemisphere for a long time, while spending very little time over the southern hemisphere. This phenomenon is known as "apogee dwell", and is desirable for communications for high latitude regions.
Repeat orbits.
As orbital operations are often required to monitor a specific location on Earth, orbits that cover the same ground track periodically are often used. On Earth, these orbits are commonly referred to as Earth-repeat orbits, and are often designed with "frozen orbit" parameters to achieve a repeat ground track orbit with stable (minimally time-varying) orbit elements. These orbits use the nodal precession effect to shift the orbit so the ground track coincides with that of a previous orbit, so that this essentially balances out the offset in the revolution of the orbited body. The longitudinal rotation of the orbited body after a certain period of time is given by:
formula_0
where
formula_1 is the orbital period of the satellite,
formula_2 is the rotational period of the orbited body (one sidereal day in the case of the Earth).
The effect of the nodal precession can be quantified as:
formula_3
where
formula_4 is the second dynamic form factor ("J"2) of the orbited body,
formula_5 is its equatorial radius,
formula_6 is the orbital inclination,
formula_7 is the semi-major axis of the orbit,
formula_8 is the orbital eccentricity.
These two effects must cancel out after a set formula_9 orbital revolutions and formula_10 (sidereal) days. Hence, equating the elapsed time to the orbital period of the satellite and combining the above two equations yields an equation which holds for any orbit that is a repeat orbit:
formula_11
where
formula_9 is the number of orbital revolutions after which the ground track repeats,
formula_10 is the number of sidereal days after which the ground track repeats,
formula_12 is the standard gravitational parameter of the orbited body.
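As a numerical sketch (with rounded Earth constants and an arbitrary, illustrative candidate orbit), the two longitudinal shifts can be evaluated directly to see how close a given orbit comes to satisfying the repeat condition.

```python
import math

# Rounded Earth constants (illustrative precision only).
mu = 3.986e14          # standard gravitational parameter, m^3/s^2
J2 = 1.0826e-3         # second dynamic form factor
Re = 6.378e6           # equatorial radius, m
T_E = 86164.1          # sidereal day, s

def longitude_shift_per_rev(a, e, inc):
    """Total shift of the ground track per revolution, in radians."""
    T = 2.0 * math.pi * math.sqrt(a**3 / mu)                  # orbital period
    dL1 = -2.0 * math.pi * T / T_E                            # planet rotation
    dL2 = -3.0 * math.pi * J2 * Re**2 * math.cos(inc) / (a**2 * (1.0 - e**2)**2)
    return dL1 + dL2

# Candidate orbit: crude guess for j = 15 revolutions in k = 1 sidereal day,
# obtained by ignoring the J2 term (near-circular, roughly ISS-like inclination).
j, k = 15, 1
a = (mu * (T_E / j / (2.0 * math.pi))**2) ** (1.0 / 3.0)
shift = j * abs(longitude_shift_per_rev(a, 0.001, math.radians(51.6)))

print("semi-major axis guess:", round(a / 1e3), "km")
print("shift over", j, "revs:", round(shift, 4), "rad (target", round(k * 2 * math.pi, 4), "rad)")
# The crude guess ignores nodal precession, so the shift misses k*2*pi slightly;
# the repeat condition in the text is met only once a (or i) is adjusted so that
# the two longitudinal effects cancel exactly after j revolutions and k days.
```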
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta L_1 = -2 \\pi \\frac{T}{T_E}"
},
{
"math_id": 1,
"text": "T"
},
{
"math_id": 2,
"text": "T_E"
},
{
"math_id": 3,
"text": "\\Delta L_2 = - \\frac{3 \\pi J_2 R_e^2 cos(i)}{a^2(1-e^2)^2}"
},
{
"math_id": 4,
"text": "J_2"
},
{
"math_id": 5,
"text": "R_e"
},
{
"math_id": 6,
"text": "i"
},
{
"math_id": 7,
"text": "a"
},
{
"math_id": 8,
"text": "e"
},
{
"math_id": 9,
"text": "j"
},
{
"math_id": 10,
"text": "k"
},
{
"math_id": 11,
"text": "j\\left| \\Delta L_1 + \\Delta L_2 \\right| = j\\left|-2 \\pi \\frac{2 \\pi \\sqrt{\\frac{a^3}{\\mu}}}{T_E} - \\frac{3 \\pi J_2 R_e^2 cos(i)}{a^2(1-e^2)^2} \\right| = k 2 \\pi"
},
{
"math_id": 12,
"text": "\\mu"
}
] | https://en.wikipedia.org/wiki?curid=10261692 |
1026409 | Opel Monza | The Opel Monza is an executive fastback coupe produced by the German automaker Opel from 1977 to 1986. It was marketed in the United Kingdom as the Vauxhall Royale Coupé by Vauxhall.
Monza A1 (1977–1982).
The Monza was planned as a successor for the Commodore Coupé. In the late 1970s the Commodore C model was made as a two-door version (as was the Rekord E1), but still as a sedan-type car. The first model of the Commodore, the "A" series, had a regular coupé in the production line, and Opel desired to make a newer version of its large luxury coupé. Work began in 1976 and in 1978 the first Monzas were available to buy. The cars to compete with would be the Mercedes-Benz C123 and later the Mercedes-Benz C124 and the BMW 6 Series (E24) coupé models, and any other large luxury coupé. There was plenty of space inside for the class, and the seats were upmarket.
The internals consisted of parts mainly borrowed from the Rekord E1 and later the E2, which meant cloth seats and much plastic on the dashboard and inner doors. Even the rev counter and the tachometer were taken directly from the Rekord E models. The model experienced some gearbox problems. The engine range for the Monza A1 consisted of the 3.0S, the 2.8S, the newly developed 3.0E and later the 2.5E (with fuel injection, the 3.0 produced 180 bhp and 248 Nm), giving a wide choice. The three-speed Borg Warner automatic transmission from the Commodore range needed to be modified to cope with the new and improved power outputs. Opel's own four-speed manual gearboxes were not up to the job and, instead of putting in a more modern five-speed manual gearbox, Opel turned to gearbox and transmission producer Getrag and installed the Getrag 264 four-speed manual gearbox in the early Monzas. But buyers of a big, luxurious coupé expected modern equipment as well, and Opel soon obliged: the Getrag 240 (for the 2.5 engines) and the Getrag 265 (for the 3.0E), both five-speed manual gearboxes, replaced the old four-speed unit.
The Monza, which shared the same layout as the Senator A1, nevertheless had very good driving abilities. It handled well thanks to the newly developed MacPherson strut system at the front, as used on the Rekord E1 and E2, while the new independent rear suspension gave the car soft, yet firm, driving characteristics and excellent stability for such a big car. The engine range, although not economical, was also very good, and there were few problems with the extremely reliable engines. The six-cylinder engines were all of overhead camshaft design, and many engine parts, such as the water pump and the drive train, were the same parts as used on the four-cylinder version. This meant the engine had not only been proven over many years in the Commodore, Admiral and Diplomat ranges, but was also very reliable. Although the first generation of 3.0E engines in the Monza A1 had overheating problems when standing still, this could easily be fixed by fitting an oil cooler.
Opel introduced the "C" package. The "C" cars were fitted with extra instruments (oil pressure, voltmeter etc.) and the interior was either red, dark blue, green, or brown.
The A1 also came with a sports package or "S" package. The cars were all marked as "S" models on the front wings, and came with 15-inch Ronal alloy wheels and a 45% limited slip differential.
Four full-sized adults had plenty of space. Even the boot was large, and the rear seats folded down to create even more space. The A1 was not a great hit with customers even though it was relatively cheap.
With the 3.0-litre engine, the Monza was at that time the fastest car Opel had ever built, capable of speeds as high as 215 km/h and of accelerating from 0 to 100 km/h in just 8.2 seconds.
Monza A2 (1982–1986).
In 1982, the Monza, Rekord and Senator all got a face-lift and were named the A2 (E2 for the Rekord). The A2 looked similar to the A1 overall but with some small changes to the front end. The headlights noticeably increased in size, and the front was more streamlined than the A1. The car was much more slippery, with drag resistance down by around ten percent (from 0.40 to 0.35 formula_0). Also the chrome parts like bumpers etc. were changed to a matt black finish, or with plastic parts. The bumpers were now made of plastic and gave the Monza the look of a sports car in appearance, and actually did look similar to the Opel Manta, despite the ample size difference. The rear lights were the same and the orange front indicators were now clear glass, giving a much more modern look to the car. Overall the update was regarded as successful although retrospectively some of the purity of the lines of the early car were lost.
At a time of rising fuel prices, the need for fuel efficiency was becoming paramount, and Opel decided to change the engine specifications of the Monza. This meant introducing the inline four-cylinder CIH 2.0E engine from the Rekord E2 (replaced by the torquier 2.2 in October 1984). However, as the Monza weighed almost 1400 kg and these two engines produced only 115 PS, the four-cylinder cars were underpowered and thus unpopular. The 2.5E was given a new Bosch injection system, so between 136 and 140 PS was available, while the 2.8S was taken out of production. The 3.0E engine stayed at the top of the range; it was given an upgraded Bosch fuel injection system and fuel consumption improved somewhat.
The cars now came with a more luxurious interior, electrically controlled side mirrors and even an on-board computer recording fuel consumption, speed and range.
The launch of the A2 in the UK saw the demise of the Vauxhall Royale Coupe, which had been sold alongside the Monza, resulting in only the Opel model being available on the market. The Royale was disparagingly described by "Autocar" as "an effeminate, frilly, titivated version of the [Monza] with fussy wheels and an unpleasant (often pastel-shaded) velour-smothered interior".
Monza GSE.
The last incarnation of the Monza was the GSE edition in mid-1983; basically the A2 car, but a high-specification model which had Recaro sports seats, digital LCD instruments, firmer suspension, the Getrag five-speed manual transmission, an enhanced all-black interior, and a small boot spoiler. Also GSE models are equipped with a 40% limited slip differential, an addition that had to be ordered separately on earlier 3.0E cars when purchasing.
By the time the Senator was updated to the new Senator B and the Monza cancelled, 43,812 Monzas had been built. There was no direct Monza replacement, although the idea of a large Opel/Vauxhall sporting car was carried on in the Lotus Carlton/Lotus Omega saloon. Bitter Cars built prototypes with a 4.0 engine under the bonnet. Three were built: two left-hand drive and one right-hand drive. One of the left-hand-drive cars burned out on a motorway in Germany and the other is in a museum, while the right-hand-drive car is in Somerset, UK.
Holden Monza.
In Australia, local racing legend Peter Brock had plans to import, modify and market the Opel Monza Coupé as the Holden Monza with the Holden 5 Litre V8 fitted, through his own HDT (Holden Dealer Team) business, but the plans eventually fell through. This was due to the expense of adapting the car to Australian Design Rules. One model was built with modifications, including a 5.0-litre Holden V8 engine.
Other uses of the Monza name.
In South Africa, a saloon version of the smaller Opel Kadett E was also sold as the Opel Monza. In Brazil and Venezuela, a version of the Opel Ascona C was sold as the Chevrolet Monza, which featured a three-door fastback body unique to Latin America. There was also an unrelated Chevrolet Monza in the United States.
Since 2019, Chinese buyers have been offered another Chevrolet Monza, this time a four-door sedan.
2013 Monza Concept.
The Opel Monza Concept is a three-door 2+2 fastback coupé plug-in hybrid concept car with 2 gullwing doors for easy access to the rear seats unveiled at the Frankfurt Motor Show in September 2013. The concept was also shown under the British Vauxhall marque.
The concept shares the same basic plug-in hybrid setup as the Chevrolet Volt and Opel Ampera called "VOLTEC", but using a turbocharged 1-liter 3-cylinder natural gas-powered engine as its range extender instead of General Motors’ current 1.4-liter gasoline Voltec engine. The Monza Concept is the first car to feature cutting-edge LED projection infotainment.
Dr. Karl-Thomas Neumann, the CEO of Opel has been quoted as saying "The Monza Concept is nothing less than our vision of the automotive future". According to Opel, this concept is the role-model for the next generation of Opel cars, and because of its modular chassis design, future cars based on it would be able to accommodate gasoline, diesel or electric power.
Chief designer Ed Welburn of General Motors said "The gullwing doors will go into production and concept".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\scriptstyle C_\\mathrm x\\,"
}
] | https://en.wikipedia.org/wiki?curid=1026409 |
102651 | Bernoulli process | Random process of binary (boolean) random variables
In probability and statistics, a Bernoulli process (named after Jacob Bernoulli) is a finite or infinite sequence of binary random variables, so it is a discrete-time stochastic process that takes only two values, canonically 0 and 1. The component Bernoulli variables "X""i" are identically distributed and independent. Prosaically, a Bernoulli process is a repeated coin flipping, possibly with an unfair coin (but with consistent unfairness). Every variable "X""i" in the sequence is associated with a Bernoulli trial or experiment. They all have the same Bernoulli distribution. Much of what can be said about the Bernoulli process can also be generalized to more than two outcomes (such as the process for a six-sided die); this generalization is known as the Bernoulli scheme.
The problem of determining the process, given only a limited sample of Bernoulli trials, may be called the problem of checking whether a coin is fair.
Definition.
A "Bernoulli process" is a finite or infinite sequence of independent random variables "X"1, "X"2, "X"3, ..., such that
In other words, a Bernoulli process is a sequence of independent identically distributed Bernoulli trials.
Independence of the trials implies that the process is memoryless. Given that the probability "p" is known, past outcomes provide no information about future outcomes. (If "p" is unknown, however, the past informs about the future indirectly, through inferences about "p".)
If the process is infinite, then from any point the future trials constitute a Bernoulli process identical to the whole process, the fresh-start property.
Interpretation.
The two possible values of each "X""i" are often called "success" and "failure". Thus, when expressed as a number 0 or 1, the outcome may be called the number of successes on the "i"th "trial".
Two other common interpretations of the values are true or false and yes or no. Under any interpretation of the two values, the individual variables "X""i" may be called Bernoulli trials with parameter p.
In many applications time passes between trials, as the index i increases. In effect, the trials "X"1, "X"2, ... "X"i, ... happen at "points in time" 1, 2, ..., "i", ... That passage of time and the associated notions of "past" and "future" are not necessary, however. Most generally, any "X"i and "X""j" in the process are simply two from a set of random variables indexed by {1, 2, ..., "n"}, the finite cases, or by {1, 2, 3, ...}, the infinite cases.
One experiment with only two possible outcomes, often referred to as "success" and "failure", usually encoded as 1 and 0, can be modeled as a Bernoulli distribution. Several random variables and probability distributions beside the Bernoullis may be derived from the Bernoulli process: the number of successes in the first "n" trials, which follows a binomial distribution; the number of trials needed to obtain "r" successes, which follows a negative binomial distribution; and the number of trials needed to obtain a single success, which follows a geometric distribution (a special case of the negative binomial distribution).
The negative binomial variables may be interpreted as random waiting times.
Formal definition.
The Bernoulli process can be formalized in the language of probability spaces as a random sequence of independent realisations of a random variable that can take values of heads or tails. The state space for an individual value is denoted by formula_1
Borel algebra.
Consider the countably infinite direct product of copies of formula_2. It is common to examine either the one-sided set formula_3 or the two-sided set formula_4. There is a natural topology on this space, called the product topology. The sets in this topology are finite sequences of coin flips, that is, finite-length strings of "H" and "T" ("H" stands for heads and "T" stands for tails), with the rest of (infinitely long) sequence taken as "don't care". These sets of finite sequences are referred to as cylinder sets in the product topology. The set of all such strings forms a sigma algebra, specifically, a Borel algebra. This algebra is then commonly written as formula_5 where the elements of formula_6 are the finite-length sequences of coin flips (the cylinder sets).
Bernoulli measure.
If the chances of flipping heads or tails are given by the probabilities formula_7, then one can define a natural measure on the product space, given by formula_8 (or by formula_9 for the two-sided process). In other words, if a discrete random variable "X" has a "Bernoulli distribution" with parameter "p", where 0 ≤ "p" ≤ 1, and its probability mass function is given by
formula_10 and formula_11.
We denote this distribution by Ber("p").
Given a cylinder set, that is, a specific sequence of coin flip results formula_12 at times formula_13, the probability of observing this particular sequence is given by
formula_14
where "k" is the number of times that "H" appears in the sequence, and "n"−"k" is the number of times that "T" appears in the sequence. There are several different kinds of notations for the above; a common one is to write
formula_15
where each formula_16 is a binary-valued random variable with formula_17 in Iverson bracket notation, meaning either formula_18 if formula_19 or formula_20 if formula_21. This probability formula_22 is commonly called the Bernoulli measure.
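As a concrete illustration, the following Python sketch (the function name and the use of an 'H'/'T' string are conventions chosen here, not part of the formal definition) evaluates the Bernoulli measure of a cylinder set directly from this formula.
def cylinder_probability(flips, p):
    # Probability of the exact finite pattern `flips` (a string over 'H' and 'T')
    # under the Bernoulli measure with P(H) = p and P(T) = 1 - p.
    k = flips.count('H')              # number of heads specified
    n = len(flips)                    # total number of flips specified
    return p**k * (1 - p)**(n - k)

print(cylinder_probability("HHT", 0.5))   # 0.125 for a fair coin
print(cylinder_probability("HHT", 0.7))   # 0.7 * 0.7 * 0.3, about 0.147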
Note that the probability of any specific, infinitely long sequence of coin flips is exactly zero; this is because formula_23, for any formula_24. Thus any single infinite sequence has measure zero, and only sets of sequences can carry positive probability. Nevertheless, one can still say that some classes of infinite sequences of coin flips are far more likely than others; this is given by the asymptotic equipartition property.
To conclude the formal definition, a Bernoulli process is then given by the probability triple formula_25, as defined above.
Law of large numbers, binomial distribution and central limit theorem.
Let us assume the canonical process with formula_26 represented by formula_27 and formula_28 represented by formula_29. The law of large numbers states that the average of the sequence, i.e., formula_30, will approach the expected value almost certainly, that is, the events which do not satisfy this limit have zero probability. The expectation value of flipping "heads", assumed to be represented by 1, is given by formula_31. In fact, one has
formula_32
for any given random variable formula_16 out of the infinite sequence of Bernoulli trials that compose the Bernoulli process.
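The statement can be illustrated numerically; the sketch below is a toy simulation with an arbitrarily chosen bias p = 0.3, using Python's standard random module, and shows the sample average settling near p as the number of trials grows.
import random

def sample_mean(p, n, seed=0):
    # Average of n simulated Bernoulli(p) trials, coding heads as 1 and tails as 0.
    rng = random.Random(seed)
    return sum(1 if rng.random() < p else 0 for _ in range(n)) / n

p = 0.3
for n in (10, 1000, 100000):
    print(n, sample_mean(p, n))   # the sample averages settle near 0.3 as n grows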
One is often interested in knowing how often one will observe "H" in a sequence of "n" coin flips. This is given by simply counting: Given "n" successive coin flips, that is, given the set of all possible strings of length "n", the number "N"("k","n") of such strings that contain "k" occurrences of "H" is given by the binomial coefficient
formula_33
If the probability of flipping heads is given by "p", then the total probability of seeing a string of length "n" with "k" heads is
formula_34
where formula_35.
The probability measure thus defined is known as the Binomial distribution.
As can be seen from the above formula, if "n" = 1 the "Binomial distribution" reduces to the "Bernoulli distribution"; that is, the "Bernoulli distribution" is exactly the special case of the "Binomial distribution" with "n" = 1.
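This reduction is easy to check numerically; the sketch below uses math.comb from the Python standard library for the binomial coefficient, with an arbitrary choice of p.
from math import comb

def binomial_pmf(k, n, p):
    # P(S_n = k) for n independent Bernoulli(p) trials.
    return comb(n, k) * p**k * (1 - p)**(n - k)

p = 0.7
# With n = 1 the two binomial probabilities are exactly the Bernoulli probabilities p and 1 - p.
print(binomial_pmf(1, 1, p), p)        # 0.7  0.7
print(binomial_pmf(0, 1, p), 1 - p)    # both about 0.3 (up to floating point)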
Of particular interest is the question of the value of formula_36 for sufficiently long sequences of coin flips, that is, for the limit formula_37. In this case, one may make use of Stirling's approximation to the factorial, and write
formula_38
Inserting this into the expression for "P"("k","n"), one obtains the Normal distribution; this is the content of the central limit theorem, and this is the simplest example thereof.
The combination of the law of large numbers, together with the central limit theorem, leads to an interesting and perhaps surprising result: the asymptotic equipartition property. Put informally, one notes that, yes, over many coin flips, one will observe "H" exactly "p" fraction of the time, and that this corresponds exactly with the peak of the Gaussian. The asymptotic equipartition property essentially states that this peak is infinitely sharp, with infinite fall-off on either side. That is, given the set of all possible infinitely long strings of "H" and "T" occurring in the Bernoulli process, this set is partitioned into two: those strings that occur with probability 1, and those that occur with probability 0. This partitioning is known as the Kolmogorov 0-1 law.
The size of this set is interesting, also, and can be explicitly determined: the logarithm of it is exactly the entropy of the Bernoulli process. Once again, consider the set of all strings of length "n". The size of this set is formula_39. Of these, only a certain subset are likely; the size of this set is formula_40 for formula_41. By using Stirling's approximation, putting it into the expression for "P"("k","n"), solving for the location and width of the peak, and finally taking formula_37 one finds that
formula_42
This value is the Bernoulli entropy of a Bernoulli process. Here, "H" stands for entropy; not to be confused with the same symbol "H" standing for "heads".
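The entropy and the corresponding typical-set size formula_40 can be evaluated directly; the following sketch is only a numerical restatement of the formula, with arbitrary values of p and n.
from math import log2

def bernoulli_entropy(p):
    # H(p) = -p log2(p) - (1 - p) log2(1 - p), with the convention H(0) = H(1) = 0.
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

p, n = 0.3, 100
H = bernoulli_entropy(p)
print(H)             # about 0.881 bits per flip
print(2 ** (n * H))  # approximate size of the typical set, versus 2**100 possible strings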
John von Neumann posed a question about the Bernoulli process regarding the possibility of a given process being isomorphic to another, in the sense of the isomorphism of dynamical systems. The question long defied analysis, but was finally and completely answered with the Ornstein isomorphism theorem. This breakthrough resulted in the understanding that the Bernoulli process is unique and universal; in a certain sense, it is the single most random process possible; nothing is 'more' random than the Bernoulli process (although one must be careful with this informal statement; certainly, systems that are mixing are, in a certain sense, "stronger" than the Bernoulli process, which is merely ergodic but not mixing. However, such processes do not consist of independent random variables: indeed, many purely deterministic, non-random systems can be mixing).
Dynamical systems.
The Bernoulli process can also be understood to be a dynamical system, as an example of an ergodic system and specifically, a measure-preserving dynamical system, in one of several different ways. One way is as a shift space, and the other is as an odometer. These are reviewed below.
Bernoulli shift.
One way to create a dynamical system out of the Bernoulli process is as a shift space. There is a natural translation symmetry on the product space formula_43 given by the shift operator
formula_44
The Bernoulli measure, defined above, is translation-invariant; that is, given any cylinder set formula_45, one has
formula_46
and thus the Bernoulli measure is a Haar measure; it is an invariant measure on the product space.
Instead of the probability measure formula_47, consider some arbitrary function formula_48. The pushforward
formula_49
defined by formula_50 is again some function formula_51 Thus, the map formula_52 induces another map formula_53 on the space of all functions formula_51 That is, given some formula_48, one defines
formula_54
The map formula_53 is a linear operator, as (obviously) one has formula_55 and formula_56 for functions formula_57 and constant formula_58. This linear operator is called the transfer operator or the "Ruelle–Frobenius–Perron operator". This operator has a spectrum, that is, a collection of eigenfunctions and corresponding eigenvalues. The largest eigenvalue is the Frobenius–Perron eigenvalue, and in this case, it is 1. The associated eigenvector is the invariant measure: in this case, it is the Bernoulli measure. That is, formula_59
If one restricts formula_53 to act on polynomials, then the eigenfunctions are (curiously) the Bernoulli polynomials! This coincidence of naming was presumably not known to Bernoulli.
The 2x mod 1 map.
The above can be made more precise. Given an infinite string of binary digits formula_60 write
formula_61
The resulting formula_62 is a real number in the unit interval formula_63 The shift formula_52 induces a homomorphism, also called formula_52, on the unit interval. Since formula_64 one can see that formula_65 This map is called the dyadic transformation; for the doubly-infinite sequence of bits formula_66 the induced homomorphism is the Baker's map.
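The correspondence between shifting the bit sequence and the map formula_65 can be checked on finite truncations; the sketch below uses an arbitrary finite bit list, for which the two computations agree exactly because the partial sums are dyadic rationals.
def to_real(bits):
    # Map a finite bit list (b0, b1, ...) to the partial sum of b_n / 2**(n + 1).
    return sum(b / 2**(n + 1) for n, b in enumerate(bits))

bits = [1, 0, 1, 1, 0, 0, 1, 0]
y = to_real(bits)
shifted = to_real(bits[1:])        # apply the shift operator T: drop b0
print((2 * y) % 1.0, shifted)      # identical: shifting the bits is the map y -> 2y mod 1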
Consider now the space of functions in formula_62. Given some formula_67 one can find that
formula_68
Restricting the action of the operator formula_53 to polynomials, one finds that it has a discrete spectrum given by
formula_69
where the formula_70 are the Bernoulli polynomials. Indeed, the Bernoulli polynomials obey the identity
formula_71
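This identity can be verified symbolically; the sketch below assumes a standard SymPy installation, in which bernoulli(n, y) denotes the Bernoulli polynomial.
from sympy import symbols, bernoulli, expand, Rational

y = symbols('y')
for n in range(6):
    lhs = Rational(1, 2) * bernoulli(n, y / 2) + Rational(1, 2) * bernoulli(n, (y + 1) / 2)
    rhs = bernoulli(n, y) / 2**n
    print(n, expand(lhs - rhs) == 0)   # True for every n checked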
The Cantor set.
Note that the sum
formula_72
gives the Cantor function, as conventionally defined. This is one reason why the set formula_73 is sometimes called the Cantor set.
Odometer.
Another way to create a dynamical system is to define an odometer. Informally, this is exactly what it sounds like: just "add one" to the first position, and let the odometer "roll over" by using carry bits as the odometer rolls over. This is nothing more than base-two addition on the set of infinite strings. Since addition forms a group, and the Bernoulli process was already given a topology above, this provides a simple example of a topological group.
In this case, the transformation formula_52 is given by
formula_74
It leaves the Bernoulli measure invariant only for the special case of formula_75 (the "fair coin"); otherwise not. Thus, formula_52 is a measure-preserving dynamical system only in that case; otherwise, it is merely a conservative system.
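The odometer is easy to state in code; the sketch below works on a finite truncation of the bit sequence (on the all-ones input the carry would propagate forever, which a finite list cannot represent, so that case simply wraps to all zeros here).
def odometer(bits):
    # Add 1 with carry to a bit sequence, least-significant position first.
    out = list(bits)
    for i, b in enumerate(out):
        if b == 0:
            out[i] = 1        # the carry is absorbed here and the map stops
            return out
        out[i] = 0            # 1 + 1 = 0, carry on to the next position
    return out                # an all-ones input wraps to all zeros in this finite sketch

print(odometer([1, 1, 0, 1, 0]))   # [0, 0, 1, 1, 0], matching T(1,1,0,...) = (0,0,1,...)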
Bernoulli sequence.
The term "Bernoulli sequence" is often used informally to refer to a realization of a Bernoulli process.
However, the term has an entirely different formal definition as given below.
Suppose a Bernoulli process is formally defined as a single random variable (see the preceding section). For every infinite sequence "x" of coin flips, there is a sequence of integers
formula_76
called the "Bernoulli sequence" associated with the Bernoulli process. For example, if "x" represents a sequence of coin flips, then the associated Bernoulli sequence is the list of natural numbers or time-points for which the coin toss outcome is "heads".
So defined, a Bernoulli sequence formula_77 is also a random subset of the index set, the natural numbers formula_78.
Almost all Bernoulli sequences formula_77 are ergodic sequences.
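For a finite realization, the associated Bernoulli sequence is just the list of indices at which a 1 occurs; a small illustration (the particular bit list is arbitrary):
x = [0, 1, 1, 0, 1, 0, 0, 1]                              # a finite realization of coin flips
heads_times = [n for n, b in enumerate(x, start=1) if b == 1]
print(heads_times)                                        # [2, 3, 5, 8]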
Randomness extraction.
From any Bernoulli process one may derive a Bernoulli process with "p" = 1/2 by the von Neumann extractor, the earliest randomness extractor, which actually extracts uniform randomness.
Basic von Neumann extractor.
Represent the observed process as a sequence of zeroes and ones, or bits, and group that input stream in non-overlapping pairs of successive bits, such as (11)(00)(10)... . Then for each pair: if the two bits are equal, discard the pair; if the two bits differ, output the first bit of the pair.
In summary: an input pair 01 produces the output 0, an input pair 10 produces the output 1, and the input pairs 00 and 11 produce no output.
For example, an input stream of eight bits "10011011" would be grouped into pairs as "(10)(01)(10)(11)". Then, according to the rule above, these pairs are translated into the output of the procedure:
"(1)(0)(1)()" (="101").
In the output stream 0 and 1 are equally likely, as 10 and 01 are equally likely in the original, both having probability "p"(1−"p") = (1−"p")"p". This extraction of uniform randomness does not require the input trials to be independent, only uncorrelated. More generally, it works for any exchangeable sequence of bits: all sequences that are finite rearrangements are equally likely.
The von Neumann extractor uses two input bits to produce either zero or one output bits, so the output is shorter than the input by a factor of at least 2. On average the computation discards proportion "p"2 + (1 − "p")2 of the input pairs (00 and 11), which is near one when "p" is near zero or one, and is minimized at 1/2 when "p" = 1/2 for the original process (in which case the output stream is 1/4 the length of the input stream on average).
Von Neumann (classical) main operation pseudocode:
if (Bit1 ≠ Bit2) {
    output(Bit1)
}
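The same procedure can be written compactly in Python; the following sketch (function name and bit-list representation are choices made here for illustration) reproduces the example above.
def von_neumann_extract(bits):
    # Classical von Neumann extractor: read bits in non-overlapping pairs,
    # emit the first bit of each unequal pair, discard equal pairs.
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

print(von_neumann_extract([1, 0, 0, 1, 1, 0, 1, 1]))   # [1, 0, 1], as in the example above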
Iterated von Neumann extractor.
This decrease in efficiency, or waste of randomness present in the input stream, can be mitigated by iterating the algorithm over the input data. This way the output can be made to be "arbitrarily close to the entropy bound".
The iterated version of the von Neumann algorithm, also known as advanced multi-level strategy (AMLS), was introduced by Yuval Peres in 1992. It works recursively, recycling "wasted randomness" from two sources: the sequence of discard-non-discard decisions, and the values of discarded pairs (0 for 00, and 1 for 11). It relies on the fact that, given the sequence already generated, both of those sources are still exchangeable sequences of bits, and thus eligible for another round of extraction. While such generation of additional sequences can be iterated infinitely to extract all available entropy, an infinite amount of computational resources is required; therefore the number of iterations is typically fixed to a low value, either fixed in advance or calculated at runtime.
More concretely, on an input sequence, the algorithm consumes the input bits in pairs, generating output together with two new sequences (the names follow the AMLS paper's notation): if the two bits of a pair differ, the first bit is sent to the output and a 1 is appended to sequence 1; if the two bits are equal, a 0 is appended to sequence 1 and the common bit value is appended to sequence 2.
(If the length of the input is odd, the last bit is completely discarded.) Then the algorithm is applied recursively to each of the two new sequences, until the input is empty.
Example: The input stream from the AMLS paper, "11001011101110" using 1 for H and 0 for T, is processed this way:
Starting from step 1, the input is a concatenation of sequence 2 and sequence 1 from the previous step (the order is arbitrary but should be fixed). The final output is "()()(1)()(1)()(1)(1)()()(0)(0)()(0)(1)(1)()(1)" (="1111000111"), so from 14 bits of input 10 bits of output were generated, as opposed to 3 bits through the von Neumann algorithm alone. The constant output of exactly 2 bits per round per bit pair (compared with a variable none to 1 bit in classical VN) also allows for constant-time implementations which are resistant to timing attacks.
Von Neumann–Peres (iterated) main operation pseudocode:
if (Bit1 ≠ Bit2) {
    output(1, Sequence1)
    output(Bit1)
} else {
    output(0, Sequence1)
    output(Bit1, Sequence2)
}
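A recursive Python sketch of this procedure is given below, for illustration only; it applies the recursion to each derived sequence separately with a fixed maximum depth, whereas the worked example above concatenates sequence 2 and sequence 1 into a single input before each new round, so the exact order of output bits may differ.
def peres_extract(bits, depth=4):
    # Iterated von Neumann extractor sketch with a fixed recursion depth.
    if depth == 0 or len(bits) < 2:
        return []
    out, seq1, seq2 = [], [], []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)      # classical von Neumann output bit
            seq1.append(1)     # record that this pair produced output
        else:
            seq1.append(0)     # record that this pair was discarded
            seq2.append(a)     # recycled value of the discarded pair (0 for 00, 1 for 11)
    # recycle the two auxiliary sequences for further rounds of extraction
    return out + peres_extract(seq1, depth - 1) + peres_extract(seq2, depth - 1)

stream = [1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0]   # the "11001011101110" example
print(peres_extract(stream))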
Another tweak was presented in 2016, based on the observation that the Sequence2 channel doesn't provide much throughput, and a hardware implementation with a finite number of levels can benefit from discarding it earlier in exchange for processing more levels of Sequence1.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "i"
},
{
"math_id": 1,
"text": "2=\\{H,T\\} ."
},
{
"math_id": 2,
"text": "2=\\{H,T\\}"
},
{
"math_id": 3,
"text": "\\Omega=2^\\mathbb{N}=\\{H,T\\}^\\mathbb{N}"
},
{
"math_id": 4,
"text": "\\Omega=2^\\mathbb{Z}"
},
{
"math_id": 5,
"text": "(\\Omega, \\mathcal{B})"
},
{
"math_id": 6,
"text": "\\mathcal{B}"
},
{
"math_id": 7,
"text": "\\{p,1-p\\}"
},
{
"math_id": 8,
"text": "P=\\{p, 1-p\\}^\\mathbb{N}"
},
{
"math_id": 9,
"text": "P=\\{p, 1-p\\}^\\mathbb{Z}"
},
{
"math_id": 10,
"text": "pX(1)=P(X=1)=p"
},
{
"math_id": 11,
"text": "pX(0)=P(X=0)=1-p"
},
{
"math_id": 12,
"text": "[\\omega_1, \\omega_2,\\cdots\\omega_n]"
},
{
"math_id": 13,
"text": "1,2,\\cdots,n"
},
{
"math_id": 14,
"text": "P([\\omega_1, \\omega_2,\\cdots ,\\omega_n])= p^k (1-p)^{n-k}"
},
{
"math_id": 15,
"text": "P(X_1=x_1, X_2=x_2,\\cdots, X_n=x_n)= p^k (1-p)^{n-k}"
},
{
"math_id": 16,
"text": "X_i"
},
{
"math_id": 17,
"text": "x_i=[\\omega_i=H]"
},
{
"math_id": 18,
"text": "1"
},
{
"math_id": 19,
"text": "\\omega_i=H"
},
{
"math_id": 20,
"text": "0"
},
{
"math_id": 21,
"text": "\\omega_i=T"
},
{
"math_id": 22,
"text": "P"
},
{
"math_id": 23,
"text": "\\lim_{n\\to\\infty}p^n=0"
},
{
"math_id": 24,
"text": "0\\le p<1"
},
{
"math_id": 25,
"text": "(\\Omega, \\mathcal{B}, P)"
},
{
"math_id": 26,
"text": " H "
},
{
"math_id": 27,
"text": " 1 "
},
{
"math_id": 28,
"text": " T "
},
{
"math_id": 29,
"text": " 0 "
},
{
"math_id": 30,
"text": " \\bar{X}_{n}:=\\frac{1}{n}\\sum_{i=1}^{n}X_{i} "
},
{
"math_id": 31,
"text": "p"
},
{
"math_id": 32,
"text": "\\mathbb{E}[X_i]=\\mathbb{P}([X_i=1])=p,"
},
{
"math_id": 33,
"text": "N(k,n) = {n \\choose k}=\\frac{n!}{k! (n-k)!}"
},
{
"math_id": 34,
"text": "\\mathbb{P}([S_n=k]) = {n\\choose k} p^k (1-p)^{n-k} , "
},
{
"math_id": 35,
"text": " S_n=\\sum_{i=1}^{n}X_i "
},
{
"math_id": 36,
"text": "S_{n}"
},
{
"math_id": 37,
"text": "n\\to\\infty"
},
{
"math_id": 38,
"text": "n! = \\sqrt{2\\pi n} \\;n^n e^{-n}\n \\left(1 + \\mathcal{O}\\left(\\frac{1}{n}\\right)\\right)"
},
{
"math_id": 39,
"text": "2^n"
},
{
"math_id": 40,
"text": "2^{nH}"
},
{
"math_id": 41,
"text": "H\\le 1"
},
{
"math_id": 42,
"text": "H=-p\\log_2 p - (1-p)\\log_2(1-p)"
},
{
"math_id": 43,
"text": "\\Omega=2^\\mathbb{N}"
},
{
"math_id": 44,
"text": "T(X_0, X_1, X_2, \\cdots) = (X_1, X_2, \\cdots)"
},
{
"math_id": 45,
"text": "\\sigma\\in\\mathcal{B}"
},
{
"math_id": 46,
"text": "P(T^{-1}(\\sigma))=P(\\sigma)"
},
{
"math_id": 47,
"text": "P:\\mathcal{B}\\to\\mathbb{R}"
},
{
"math_id": 48,
"text": "f:\\mathcal{B}\\to\\mathbb{R}"
},
{
"math_id": 49,
"text": "f\\circ T^{-1}"
},
{
"math_id": 50,
"text": "\\left(f\\circ T^{-1}\\right)(\\sigma) = f(T^{-1}(\\sigma))"
},
{
"math_id": 51,
"text": "\\mathcal{B}\\to\\mathbb{R}."
},
{
"math_id": 52,
"text": "T"
},
{
"math_id": 53,
"text": "\\mathcal{L}_T"
},
{
"math_id": 54,
"text": "\\mathcal{L}_T f = f \\circ T^{-1}"
},
{
"math_id": 55,
"text": "\\mathcal{L}_T(f+g)= \\mathcal{L}_T(f) + \\mathcal{L}_T(g)"
},
{
"math_id": 56,
"text": "\\mathcal{L}_T(af)= a\\mathcal{L}_T(f)"
},
{
"math_id": 57,
"text": "f,g"
},
{
"math_id": 58,
"text": "a"
},
{
"math_id": 59,
"text": "\\mathcal{L}_T(P)= P."
},
{
"math_id": 60,
"text": "b_0, b_1, \\cdots"
},
{
"math_id": 61,
"text": "y=\\sum_{n=0}^\\infty \\frac{b_n}{2^{n+1}}."
},
{
"math_id": 62,
"text": "y"
},
{
"math_id": 63,
"text": "0\\le y\\le 1."
},
{
"math_id": 64,
"text": "T(b_0, b_1, b_2, \\cdots) = (b_1, b_2, \\cdots),"
},
{
"math_id": 65,
"text": "T(y)=2y\\bmod 1."
},
{
"math_id": 66,
"text": "\\Omega=2^\\mathbb{Z},"
},
{
"math_id": 67,
"text": "f(y)"
},
{
"math_id": 68,
"text": "\\left[\\mathcal{L}_T f\\right](y) = \\frac{1}{2}f\\left(\\frac{y}{2}\\right)+\\frac{1}{2}f\\left(\\frac{y+1}{2}\\right)"
},
{
"math_id": 69,
"text": "\\mathcal{L}_T B_n= 2^{-n}B_n"
},
{
"math_id": 70,
"text": "B_n"
},
{
"math_id": 71,
"text": "\\frac{1}{2}B_n\\left(\\frac{y}{2}\\right)+\\frac{1}{2}B_n\\left(\\frac{y+1}{2}\\right) = 2^{-n}B_n(y)"
},
{
"math_id": 72,
"text": "y=\\sum_{n=0}^\\infty \\frac{b_n}{3^{n+1}}"
},
{
"math_id": 73,
"text": "\\{H,T\\}^\\mathbb{N}"
},
{
"math_id": 74,
"text": "T\\left(1,\\dots,1,0,X_{k+1},X_{k+2},\\dots\\right) = \\left(0,\\dots,0,1,X_{k+1},X_{k+2},\\dots \\right)."
},
{
"math_id": 75,
"text": "p=1/2"
},
{
"math_id": 76,
"text": "\\mathbb{Z}^x = \\{n\\in \\mathbb{Z} : X_n(x) = 1 \\} \\, "
},
{
"math_id": 77,
"text": "\\mathbb{Z}^x"
},
{
"math_id": 78,
"text": "\\mathbb{N}"
}
] | https://en.wikipedia.org/wiki?curid=102651 |
1026522 | Boltzmann equation | Equation of statistical mechanics
The Boltzmann equation or Boltzmann transport equation (BTE) describes the statistical behaviour of a thermodynamic system not in a state of equilibrium; it was devised by Ludwig Boltzmann in 1872.
The classic example of such a system is a fluid with temperature gradients in space causing heat to flow from hotter regions to colder ones, by the random but biased transport of the particles making up that fluid. In the modern literature the term Boltzmann equation is often used in a more general sense, referring to any kinetic equation that describes the change of a macroscopic quantity in a thermodynamic system, such as energy, charge or particle number.
The equation arises not by analyzing the individual positions and momenta of each particle in the fluid but rather by considering a probability distribution for the position and momentum of a typical particle—that is, the probability that the particle occupies a given very small region of space (mathematically the volume element formula_0) centered at the position formula_1, and has momentum nearly equal to a given momentum vector formula_2 (thus occupying a very small region of momentum space formula_3), at an instant of time.
The Boltzmann equation can be used to determine how physical quantities change, such as heat energy and momentum, when a fluid is in transport. One may also derive other properties characteristic to fluids such as viscosity, thermal conductivity, and electrical conductivity (by treating the charge carriers in a material as a gas). See also convection–diffusion equation.
The equation is a nonlinear integro-differential equation, and the unknown function in the equation is a probability density function in six-dimensional space of a particle position and momentum. The problem of existence and uniqueness of solutions is still not fully resolved, but some recent results are quite promising.
Overview.
The phase space and density function.
The set of all possible positions r and momenta p is called the phase space of the system; in other words a set of three coordinates for each position coordinate "x, y, z", and three more for each momentum component "px", "py", "pz". The entire space is 6-dimensional: a point in this space is (r, p) = ("x, y, z, px, py, pz"), and each coordinate is parameterized by time "t". The small volume ("differential volume element") is written
formula_4
Since the probability of N molecules, which "all" have r and p within formula_5, is in question, at the heart of the equation is a quantity "f" which gives this probability per unit phase-space volume, or probability per unit length cubed per unit momentum cubed, at an instant of time t. This is a probability density function: "f"(r, p, "t"), defined so that,
formula_6
is the number of molecules which "all" have positions lying within a volume element formula_7 about r and momenta lying within a momentum space element formula_8 about p, at time t. Integrating over a region of position space and momentum space gives the total number of particles which have positions and momenta in that region:
formula_9
which is a 6-fold integral. While "f" is associated with a number of particles, the phase space is for one-particle (not all of them, which is usually the case with deterministic many-body systems), since only one r and p is in question. It is not part of the analysis to use r1, p1 for particle 1, r2, p2 for particle 2, etc. up to r"N", p"N" for particle "N".
It is assumed the particles in the system are identical (so each has an identical mass m). For a mixture of more than one chemical species, one distribution is needed for each, see below.
Principal statement.
The general equation can then be written as
formula_10
where the "force" term corresponds to the forces exerted on the particles by an external influence (not by the particles themselves), the "diff" term represents the diffusion of particles, and "coll" is the collision term – accounting for the forces acting between particles in collisions. Expressions for each term on the right side are provided below.
Note that some authors use the particle velocity v instead of momentum p; they are related in the definition of momentum by p = "m"v.
The force and diffusion terms.
Consider particles described by "f", each experiencing an "external" force F not due to other particles (see the collision term for the latter treatment).
Suppose at time t some number of particles all have position r within element formula_7 and momentum p within formula_8. If a force F instantly acts on each particle, then at time "t" + Δ"t" their position will be formula_11 and momentum p + Δp = p + FΔ"t". Then, in the absence of collisions, "f" must satisfy
formula_12
Note that we have used the fact that the phase space volume element formula_5 is constant, which can be shown using Hamilton's equations (see the discussion under Liouville's theorem). However, since collisions do occur, the particle density in the phase-space volume formula_5 changes, so
where Δ"f" is the "total" change in "f". Dividing (1) by formula_13 and taking the limits Δ"t" → 0 and Δ"f" → 0, we have
The total differential of "f" is:
where ∇ is the gradient operator, · is the dot product,
formula_14
is a shorthand for the momentum analogue of ∇, and ê"x", ê"y", ê"z" are Cartesian unit vectors.
Final statement.
Dividing (3) by "dt" and substituting into (2) gives:
formula_15
In this context, F(r, "t") is the force field acting on the particles in the fluid, and m is the mass of the particles. The term on the right hand side is added to describe the effect of collisions between particles; if it is zero then the particles do not collide. The collisionless Boltzmann equation, where individual collisions are replaced with long-range aggregated interactions, e.g. Coulomb interactions, is often called the Vlasov equation.
This equation is more useful than the principal one above, yet still incomplete, since "f" cannot be solved unless the collision term in "f" is known. This term cannot be found as easily or generally as the others – it is a statistical term representing the particle collisions, and requires knowledge of the statistics the particles obey, like the Maxwell–Boltzmann, Fermi–Dirac or Bose–Einstein distributions.
The collision term (Stosszahlansatz) and molecular chaos.
Two-body collision term.
A key insight applied by Boltzmann was to determine the collision term resulting solely from two-body collisions between particles that are assumed to be uncorrelated prior to the collision. This assumption was referred to by Boltzmann as the "Stosszahlansatz" and is also known as the "molecular chaos assumption". Under this assumption the collision term can be written as a momentum-space integral over the product of one-particle distribution functions:
formula_16
where p"A" and p"B" are the momenta of any two particles (labeled as "A" and "B" for convenience) before a collision, p′"A" and p′"B" are the momenta after the collision,
formula_17
is the magnitude of the relative momenta (see relative velocity for more on this concept), and "I"("g", Ω) is the differential cross section of the collision, in which the relative momenta of the colliding particles turns through an angle θ into the element of the solid angle "d"Ω, due to the collision.
Simplifications to the collision term.
Since much of the challenge in solving the Boltzmann equation originates with the complex collision term, attempts have been made to "model" and simplify the collision term. The best known model equation is due to Bhatnagar, Gross and Krook. The assumption in the BGK approximation is that the effect of molecular collisions is to force a non-equilibrium distribution function at a point in physical space back to a Maxwellian equilibrium distribution function and that the rate at which this occurs is proportional to the molecular collision frequency. The Boltzmann equation is therefore modified to the BGK form:
formula_18
where formula_19 is the molecular collision frequency, and formula_20 is the local Maxwellian distribution function given the gas temperature at this point in space. This is also called "relaxation time approximation".
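For a spatially homogeneous, force-free gas the BGK equation reduces to the relaxation equation d"f"/d"t" = "ν"("f"0 − "f"), so "f" decays exponentially toward the local Maxwellian. The following Python sketch (arbitrary units and parameter values, simple forward-Euler time stepping) illustrates this behaviour on a one-dimensional velocity grid.
import numpy as np

v = np.linspace(-5.0, 5.0, 201)                  # one-dimensional velocity grid
dv = v[1] - v[0]
f0 = np.exp(-v**2 / 2) / np.sqrt(2 * np.pi)      # target Maxwellian (m = k_B = T = 1)
f = np.exp(-np.abs(v))                           # an arbitrary non-equilibrium start
f /= np.sum(f) * dv                              # normalize to unit number density

nu, dt = 1.0, 0.01                               # collision frequency and time step
for step in range(1000):
    f = f + dt * nu * (f0 - f)                   # forward-Euler step of df/dt = nu (f0 - f)

print(np.max(np.abs(f - f0)))                    # small: f has relaxed to the Maxwellian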
General equation (for a mixture).
For a mixture of chemical species labelled by indices "i" = 1, 2, 3, ..., "n" the equation for species i is
formula_21
where "fi" = "fi"(r, p"i", "t"), and the collision term is
formula_22
where "f′" = "f′"(p′"i", "t"), the magnitude of the relative momenta is
formula_23
and "Iij" is the differential cross-section, as before, between particles "i" and "j". The integration is over the momentum components in the integrand (which are labelled "i" and "j"). The sum of integrals describes the entry and exit of particles of species "i" in or out of the phase-space element.
Applications and extensions.
Conservation equations.
The Boltzmann equation can be used to derive the fluid dynamic conservation laws for mass, charge, momentum, and energy. For a fluid consisting of only one kind of particle, the number density n is given by
formula_24
The average value of any function "A" is
formula_25
Since the conservation equations involve tensors, the Einstein summation convention will be used where repeated indices in a product indicate summation over those indices. Thus formula_26 and formula_27, where formula_28 is the particle velocity vector. Define formula_29 as some function of momentum formula_30 only, which is conserved in a collision. Assume also that the force formula_31 is a function of position only, and that "f" is zero for formula_32. Multiplying the Boltzmann equation by "A" and integrating over momentum yields four terms, which, using integration by parts, can be expressed as
formula_33
formula_34
formula_35
formula_36
where the last term is zero, since "A" is conserved in a collision. The values of "A" correspond to moments of velocity formula_28 (and momentum formula_30, as they are linearly dependent).
Zeroth moment.
Letting formula_37, the mass of the particle, the integrated Boltzmann equation becomes the conservation of mass equation:
formula_38
where formula_39 is the mass density, and formula_40 is the average fluid velocity.
First moment.
Letting formula_41, the momentum of the particle, the integrated Boltzmann equation becomes the conservation of momentum equation:
formula_42
where formula_43 is the pressure tensor (the viscous stress tensor plus the hydrostatic pressure).
Second moment.
Letting formula_44, the kinetic energy of the particle, the integrated Boltzmann equation becomes the conservation of energy equation:
formula_45
where formula_46 is the kinetic thermal energy density, and formula_47 is the heat flux vector.
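As a numerical illustration of taking moments, the sketch below builds a drifting one-dimensional Maxwellian on a velocity grid and recovers the density, mean velocity and kinetic thermal energy density from its zeroth, first and second moments; this is a one-dimensional toy model with m = kB = 1 and arbitrary parameter values.
import numpy as np

m, n0, V0, T0 = 1.0, 2.0, 0.5, 1.3               # mass, density, drift velocity, temperature
v = np.linspace(-10.0, 10.0, 2001)
dv = v[1] - v[0]
f = n0 * np.exp(-m * (v - V0)**2 / (2 * T0)) / np.sqrt(2 * np.pi * T0 / m)

n = np.sum(f) * dv                               # zeroth moment: number density
V = np.sum(v * f) * dv / n                       # first moment: mean (fluid) velocity
u = np.sum(0.5 * m * (v - V)**2 * f) * dv        # second moment: kinetic thermal energy density

print(n, V, u)    # approximately 2.0, 0.5 and (1/2) n0 T0 = 1.3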
Hamiltonian mechanics.
In Hamiltonian mechanics, the Boltzmann equation is often written more generally as
formula_48
where L is the Liouville operator (there is an inconsistent definition between the Liouville operator as defined here and the one in the article linked) describing the evolution of a phase space volume and C is the collision operator. The non-relativistic form of L is
formula_49
Quantum theory and violation of particle number conservation.
It is possible to write down relativistic quantum Boltzmann equations for relativistic quantum systems in which the number of particles is not conserved in collisions. This has several applications in physical cosmology, including the formation of the light elements in Big Bang nucleosynthesis, the production of dark matter and baryogenesis. It is not a priori clear that the state of a quantum system can be characterized by a classical phase space density "f". However, for a wide class of applications a well-defined generalization of "f" exists which is the solution of an effective Boltzmann equation that can be derived from first principles of quantum field theory.
General relativity and astronomy.
The Boltzmann equation is of use in galactic dynamics. A galaxy, under certain assumptions, may be approximated as a continuous fluid; its mass distribution is then represented by "f"; in galaxies, physical collisions between the stars are very rare, and the effect of "gravitational collisions" can be neglected for times far longer than the age of the universe.
Its generalization in general relativity is
formula_50
where Γαβγ is the Christoffel symbol of the second kind (this assumes there are no external forces, so that particles move along geodesics in the absence of collisions), with the important subtlety that the density is a function in mixed contravariant-covariant phase space, with coordinates (x^i, p_i), as opposed to fully contravariant phase space with coordinates (x^i, p^i).
In physical cosmology the fully covariant approach has been used to study the cosmic microwave background radiation. More generically the study of processes in the early universe often attempt to take into account the effects of quantum mechanics and general relativity. In the very dense medium formed by the primordial plasma after the Big Bang, particles are continuously created and annihilated. In such an environment quantum coherence and the spatial extension of the wavefunction can affect the dynamics, making it questionable whether the classical phase space distribution "f" that appears in the Boltzmann equation is suitable to describe the system. In many cases it is, however, possible to derive an effective Boltzmann equation for a generalized distribution function from first principles of quantum field theory. This includes the formation of the light elements in Big Bang nucleosynthesis, the production of dark matter and baryogenesis.
Solving the equation.
Exact solutions to the Boltzmann equations have been proven to exist in some cases; this analytical approach provides insight, but is not generally usable in practical problems.
Instead, numerical methods (including finite elements and lattice Boltzmann methods) are generally used to find approximate solutions to the various forms of the Boltzmann equation. Example applications range from hypersonic aerodynamics in rarefied gas flows to plasma flows. An application of the Boltzmann equation in electrodynamics is the calculation of the electrical conductivity - the result is in leading order identical with the semiclassical result.
Close to local equilibrium, solution of the Boltzmann equation can be represented by an asymptotic expansion in powers of Knudsen number (the Chapman–Enskog expansion). The first two terms of this expansion give the Euler equations and the Navier–Stokes equations. The higher terms have singularities. The problem of developing mathematically the limiting processes, which lead from the atomistic view (represented by Boltzmann's equation) to the laws of motion of continua, is an important part of Hilbert's sixth problem.
Limitations and further uses of the Boltzmann equation.
The Boltzmann equation is valid only under several assumptions. For instance, the particles are assumed to be pointlike, i.e. without having a finite size. There exists a generalization of the Boltzmann equation that is called the Enskog equation. The collision term is modified in Enskog equations such that particles have a finite size, for example they can be modelled as spheres having a fixed radius.
No further degrees of freedom besides translational motion are assumed for the particles. If there are internal degrees of freedom, the Boltzmann equation has to be generalized and might possess inelastic collisions.
Many real fluids like liquids or dense gases have, besides the features mentioned above, more complex forms of collisions: there will be not only binary, but also ternary and higher-order collisions. These must be derived by using the BBGKY hierarchy.
Boltzmann-like equations are also used for the movement of cells. Since cells are composite particles that carry internal degrees of freedom, the corresponding generalized Boltzmann equations must have inelastic collision integrals. Such equations can describe invasions of cancer cells in tissue, morphogenesis, and chemotaxis-related effects.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "d^3 \\mathbf{r}"
},
{
"math_id": 1,
"text": "\\mathbf{r}"
},
{
"math_id": 2,
"text": " \\mathbf{p}"
},
{
"math_id": 3,
"text": "d^3 \\mathbf{p}"
},
{
"math_id": 4,
"text": " d^3\\mathbf{r} \\, d^3\\mathbf{p} = dx \\, dy \\, dz \\, dp_x \\, dp_y \\, dp_z. "
},
{
"math_id": 5,
"text": " d^3\\mathbf{r} \\, d^3\\mathbf{p}"
},
{
"math_id": 6,
"text": "dN = f (\\mathbf{r},\\mathbf{p},t) \\, d^3\\mathbf{r} \\, d^3\\mathbf{p}"
},
{
"math_id": 7,
"text": " d^3\\mathbf{r}"
},
{
"math_id": 8,
"text": " d^3\\mathbf{p}"
},
{
"math_id": 9,
"text": "\\begin{align}\nN & = \\int\\limits_\\mathrm{momenta} d^3\\mathbf{p} \\int\\limits_\\mathrm{positions} d^3\\mathbf{r}\\,f (\\mathbf{r},\\mathbf{p},t) \\\\[5pt]\n& = \\iiint\\limits_\\mathrm{momenta} \\quad \\iiint\\limits_\\mathrm{positions} f(x,y,z, p_x,p_y,p_z, t) \\, dx \\, dy \\, dz \\, dp_x \\, dp_y \\, dp_z\n\\end{align}"
},
{
"math_id": 10,
"text": "\n \\frac{df}{dt} =\n \\left(\\frac{\\partial f}{\\partial t}\\right)_\\text{force} +\n \\left(\\frac{\\partial f}{\\partial t}\\right)_\\text{diff} +\n \\left(\\frac{\\partial f}{\\partial t}\\right)_\\text{coll},\n"
},
{
"math_id": 11,
"text": " \\mathbf{r} + \\Delta \\mathbf{r} = \\mathbf{r} +\\frac{\\mathbf{p}}{m} \\, \\Delta t "
},
{
"math_id": 12,
"text": "\nf \\left (\\mathbf{r}+\\frac{\\mathbf{p}}{m} \\, \\Delta t,\\mathbf{p}+\\mathbf{F} \\, \\Delta t, t+\\Delta t \\right )\\,d^3\\mathbf{r}\\,d^3\\mathbf{p} = f(\\mathbf{r}, \\mathbf{p},t) \\, d^3\\mathbf{r} \\, d^3\\mathbf{p}\n"
},
{
"math_id": 13,
"text": " d^3\\mathbf{r} \\, d^3\\mathbf{p} \\, \\Delta t"
},
{
"math_id": 14,
"text": "\n\\frac{\\partial f}{\\partial \\mathbf{p}} = \\mathbf{\\hat{e}}_x\\frac{\\partial f}{\\partial p_x} + \\mathbf{\\hat{e}}_y\\frac{\\partial f}{\\partial p_y} + \\mathbf{\\hat{e}}_z \\frac{\\partial f}{\\partial p_z}= \\nabla_\\mathbf{p}f\n"
},
{
"math_id": 15,
"text": "\\frac{\\partial f}{\\partial t} + \\frac{\\mathbf{p}}{m}\\cdot\\nabla f + \\mathbf{F} \\cdot \\frac{\\partial f}{\\partial \\mathbf{p}} = \\left(\\frac{\\partial f}{\\partial t} \\right)_\\mathrm{coll}"
},
{
"math_id": 16,
"text": "\n \\left(\\frac{\\partial f}{\\partial t}\\right)_\\text{coll} =\n \\iint g I(g, \\Omega)[f(\\mathbf{r},\\mathbf{p'}_A, t) f(\\mathbf{r},\\mathbf{p'}_B,t) - f(\\mathbf{r},\\mathbf{p}_A,t) f(\\mathbf{r},\\mathbf{p}_B,t)] \\,d\\Omega \\,d^3\\mathbf{p}_B,\n"
},
{
"math_id": 17,
"text": "\n g = |\\mathbf{p}_B - \\mathbf{p}_A| = |\\mathbf{p'}_B - \\mathbf{p'}_A|\n"
},
{
"math_id": 18,
"text": "\\frac{\\partial f}{\\partial t} + \\frac{\\mathbf{p}}{m}\\cdot\\nabla f + \\mathbf{F} \\cdot \\frac{\\partial f}{\\partial \\mathbf{p}} = \\nu (f_0 - f),"
},
{
"math_id": 19,
"text": "\\nu"
},
{
"math_id": 20,
"text": "f_0"
},
{
"math_id": 21,
"text": "\\frac{\\partial f_i}{\\partial t} + \\frac{\\mathbf{p}_i}{m_i} \\cdot \\nabla f_i + \\mathbf{F} \\cdot \\frac{\\partial f_i}{\\partial \\mathbf{p}_i} = \\left(\\frac{\\partial f_i}{\\partial t} \\right)_\\text{coll},"
},
{
"math_id": 22,
"text": " \\left(\\frac{\\partial f_i}{\\partial t} \\right)_{\\mathrm{coll}} = \\sum_{j=1}^n \\iint g_{ij} I_{ij}(g_{ij}, \\Omega)[f'_i f'_j - f_i f_j] \\,d\\Omega\\,d^3\\mathbf{p'},"
},
{
"math_id": 23,
"text": "g_{ij} = |\\mathbf{p}_i - \\mathbf{p}_j| = |\\mathbf{p'}_i - \\mathbf{p'}_j|,"
},
{
"math_id": 24,
"text": "n = \\int f \\,d^3\\mathbf{p}."
},
{
"math_id": 25,
"text": "\\langle A \\rangle = \\frac 1 n \\int A f \\,d^3\\mathbf{p}."
},
{
"math_id": 26,
"text": "\\mathbf{x} \\mapsto x_i"
},
{
"math_id": 27,
"text": "\\mathbf{p} \\mapsto p_i = m v_i"
},
{
"math_id": 28,
"text": "v_i"
},
{
"math_id": 29,
"text": "A(p_i)"
},
{
"math_id": 30,
"text": "p_i"
},
{
"math_id": 31,
"text": "F_i"
},
{
"math_id": 32,
"text": "p_i \\to \\pm\\infty"
},
{
"math_id": 33,
"text": "\\int A \\frac{\\partial f}{\\partial t} \\,d^3\\mathbf{p} = \\frac{\\partial }{\\partial t} (n \\langle A \\rangle),"
},
{
"math_id": 34,
"text": "\\int \\frac{p_j A}{m}\\frac{\\partial f}{\\partial x_j} \\,d^3\\mathbf{p} = \\frac{1}{m}\\frac{\\partial}{\\partial x_j}(n\\langle A p_j \\rangle),"
},
{
"math_id": 35,
"text": "\\int A F_j \\frac{\\partial f}{\\partial p_j} \\,d^3\\mathbf{p} = -n F_j\\left\\langle \\frac{\\partial A}{\\partial p_j}\\right\\rangle,"
},
{
"math_id": 36,
"text": "\\int A \\left(\\frac{\\partial f}{\\partial t}\\right)_\\text{coll} \\,d^3\\mathbf{p} = 0,"
},
{
"math_id": 37,
"text": "A = m(v_i)^0 = m"
},
{
"math_id": 38,
"text": "\\frac{\\partial}{\\partial t}\\rho + \\frac{\\partial}{\\partial x_j}(\\rho V_j) = 0,"
},
{
"math_id": 39,
"text": "\\rho = mn"
},
{
"math_id": 40,
"text": "V_i = \\langle v_i\\rangle"
},
{
"math_id": 41,
"text": "A = m(v_i)^1 = p_i"
},
{
"math_id": 42,
"text": "\\frac{\\partial}{\\partial t}(\\rho V_i) + \\frac{\\partial}{\\partial x_j}(\\rho V_i V_j+P_{ij}) - n F_i = 0,"
},
{
"math_id": 43,
"text": "P_{ij} = \\rho \\langle (v_i-V_i) (v_j-V_j) \\rangle"
},
{
"math_id": 44,
"text": "A = \\frac{m(v_i)^2}{2} = \\frac{p_i p_i}{2m}"
},
{
"math_id": 45,
"text": "\\frac{\\partial}{\\partial t}(u + \\tfrac{1}{2}\\rho V_i V_i) + \\frac{\\partial}{\\partial x_j} (uV_j + \\tfrac{1}{2}\\rho V_i V_i V_j + J_{qj} + P_{ij}V_i) - nF_iV_i = 0,"
},
{
"math_id": 46,
"text": "u = \\tfrac{1}{2} \\rho \\langle (v_i-V_i) (v_i-V_i) \\rangle"
},
{
"math_id": 47,
"text": "J_{qi} = \\tfrac{1}{2} \\rho \\langle(v_i - V_i)(v_k - V_k)(v_k - V_k)\\rangle"
},
{
"math_id": 48,
"text": "\\hat{\\mathbf{L}}[f]=\\mathbf{C}[f], "
},
{
"math_id": 49,
"text": "\\hat{\\mathbf{L}}_\\mathrm{NR} = \\frac{\\partial}{\\partial t} + \\frac{\\mathbf{p}}{m} \\cdot \\nabla + \\mathbf{F}\\cdot\\frac{\\partial}{\\partial \\mathbf{p}}\\,."
},
{
"math_id": 50,
"text": "\\hat{\\mathbf{L}}_\\mathrm{GR}[f] = p^\\alpha\\frac{\\partial f}{\\partial x^\\alpha} - \\Gamma^\\alpha{}_{\\beta\\gamma} p^\\beta p^\\gamma \\frac{\\partial f}{\\partial p^\\alpha} = C[f],"
}
] | https://en.wikipedia.org/wiki?curid=1026522 |
102653 | Bernoulli trial | Any experiment with two possible random outcomes
In the theory of probability and statistics, a Bernoulli trial (or binomial trial) is a random experiment with exactly two possible outcomes, "success" and "failure", in which the probability of success is the same every time the experiment is conducted. It is named after Jacob Bernoulli, a 17th-century Swiss mathematician, who analyzed them in his "" (1713).
The mathematical formalization and advanced formulation of the Bernoulli trial is known as the Bernoulli process.
Since a Bernoulli trial has only two possible outcomes, it can be framed as a "yes or no" question. For example: Did the coin land heads? Was the newborn child a girl?
Success and failure are in this context labels for the two outcomes, and should not be construed literally or as value judgments. More generally, given any probability space, for any event (set of outcomes), one can define a Bernoulli trial according to whether the event occurred or not (event or complementary event). Examples of Bernoulli trials include: flipping a coin (where heads may be taken as success), rolling a die (where, say, a six counts as success and any other outcome as failure), and asking a randomly chosen voter whether they support a given proposal.
Definition.
Independent repeated trials of an experiment with exactly two possible outcomes are called Bernoulli trials. Call one of the outcomes "success" and the other outcome "failure". Let formula_0 be the probability of success in a Bernoulli trial, and formula_1 be the probability of failure. Then the probability of success and the probability of failure sum to one, since these are complementary events: "success" and "failure" are mutually exclusive and exhaustive. Thus, one has the following relations:
formula_2
Alternatively, these can be stated in terms of odds: given probability "formula_0" of success and "formula_1" of failure, the "odds for" are formula_3 and the "odds against" are formula_4 These can also be expressed as numbers, by dividing, yielding the odds for, formula_5, and the odds against, formula_6:
formula_7
These are multiplicative inverses, so they multiply to 1, with the following relations:
formula_8
In the case that a Bernoulli trial is representing an event from finitely many equally likely outcomes, where "formula_9" of the outcomes are success and "formula_10" of the outcomes are failure, the odds for are formula_11 and the odds against are formula_12 This yields the following formulas for probability and odds:
formula_13
Here the odds are computed by dividing the number of outcomes, not the probabilities, but the proportion is the same, since these ratios only differ by multiplying both terms by the same constant factor.
Random variables describing Bernoulli trials are often encoded using the convention that 1 = "success", 0 = "failure".
Closely related to a Bernoulli trial is a binomial experiment, which consists of a fixed number formula_14 of statistically independent Bernoulli trials, each with a probability of success formula_0, and counts the number of successes. A random variable corresponding to a binomial experiment is denoted by formula_15, and is said to have a "binomial distribution".
The probability of exactly formula_16 successes in the experiment formula_15 is given by:
formula_17
where formula_18 is a binomial coefficient.
Bernoulli trials may also lead to negative binomial distributions (which count the number of successes in a series of repeated Bernoulli trials until a specified number of failures are seen), as well as various other distributions.
When multiple Bernoulli trials are performed, each with its own probability of success, these are sometimes referred to as Poisson trials.
Examples.
Tossing coins.
Consider the simple experiment where a fair coin is tossed four times. Find the probability that exactly two of the tosses result in heads.
Solution.
For this experiment, let a heads be defined as a "success" and a tails as a "failure." Because the coin is assumed to be fair, the probability of success is formula_19. Thus, the probability of failure, formula_1, is given by
formula_20.
Using the equation above, the probability of exactly two tosses out of four total tosses resulting in a heads is given by:
formula_21
Rolling dice.
What is the probability that, when three independent fair six-sided dice are rolled, exactly two yield sixes?
Solution.
On one die, the probability of rolling a six is formula_22. Thus, the probability of not rolling a six is formula_23.
As above, the probability of exactly two sixes out of three rolls is
formula_24
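Both worked examples can be verified with a few lines of Python; math.comb gives the binomial coefficient and Fraction keeps the arithmetic exact.
from math import comb
from fractions import Fraction

def binomial(k, n, p):
    # P(exactly k successes in n Bernoulli trials with success probability p).
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binomial(2, 4, Fraction(1, 2)))   # 3/8  : two heads in four fair coin tosses
print(binomial(2, 3, Fraction(1, 6)))   # 5/72 : two sixes in three fair die rolls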
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "q"
},
{
"math_id": 2,
"text": "\np = 1 - q, \\quad \\quad q = 1 - p, \\quad \\quad p + q = 1."
},
{
"math_id": 3,
"text": "p:q"
},
{
"math_id": 4,
"text": "q:p."
},
{
"math_id": 5,
"text": "o_f"
},
{
"math_id": 6,
"text": "o_a"
},
{
"math_id": 7,
"text": "\n\\begin{align}\no_f &= p/q = p/(1-p) = (1-q)/q\\\\\no_a &= q/p = (1-p)/p = q/(1-q).\n\\end{align}\n"
},
{
"math_id": 8,
"text": "\no_f = 1/o_a, \\quad o_a = 1/o_f, \\quad o_f \\cdot o_a = 1."
},
{
"math_id": 9,
"text": "S"
},
{
"math_id": 10,
"text": "F"
},
{
"math_id": 11,
"text": "S:F"
},
{
"math_id": 12,
"text": "F:S."
},
{
"math_id": 13,
"text": "\n\\begin{align}\np &= S/(S+F)\\\\\nq &= F/(S+F)\\\\\no_f &= S/F\\\\\no_a &= F/S.\n\\end{align}\n"
},
{
"math_id": 14,
"text": "n"
},
{
"math_id": 15,
"text": "B(n,p)"
},
{
"math_id": 16,
"text": "k"
},
{
"math_id": 17,
"text": "P(k)={n \\choose k} p^k q^{n-k}"
},
{
"math_id": 18,
"text": "{n \\choose k}"
},
{
"math_id": 19,
"text": "p = \\tfrac{1}{2}"
},
{
"math_id": 20,
"text": "q = 1 - p = 1 - \\tfrac{1}{2} = \\tfrac{1}{2}"
},
{
"math_id": 21,
"text": "\\begin{align}\nP(2)\n &= {4 \\choose 2} p^{2} q^{4-2} \\\\\n &= 6 \\times \\left(\\tfrac{1}{2}\\right)^2 \\times \\left(\\tfrac{1}{2}\\right)^2 \\\\\n &= \\dfrac {3}{8}.\n\\end{align}"
},
{
"math_id": 22,
"text": "p = \\tfrac{1}{6}"
},
{
"math_id": 23,
"text": "q = 1 - p = \\tfrac{5}{6}"
},
{
"math_id": 24,
"text": "\\begin{align}\nP(2)\n &= {3 \\choose 2} p^{2} q^{3-2} \\\\\n &= 3 \\times \\left(\\tfrac{1}{6}\\right)^2 \\times \\left(\\tfrac{5}{6}\\right)^1 \\\\\n &= \\dfrac {5}{72} \\approx 0.069.\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=102653 |
10265555 | Notation for differentiation | Notation of differential calculus
In differential calculus, there is no single uniform notation for differentiation. Instead, various notations for the derivative of a function or variable have been proposed by various mathematicians. The usefulness of each notation varies with the context, and it is sometimes advantageous to use more than one notation in a given context. The most common notations for differentiation (and its opposite operation, the antidifferentiation or indefinite integration) are listed below.
Leibniz's notation.
"dy""dx""d"2"y""dx"2The first and second derivatives of y with respect to x, in the Leibniz notation.
The original notation employed by Gottfried Leibniz is used throughout mathematics. It is particularly common when the equation "y" = "f"("x") is regarded as a functional relationship between dependent and independent variables "y" and "x". Leibniz's notation makes this relationship explicit by writing the derivative as
formula_0
Furthermore, the derivative of "f" at "x" is therefore written
formula_1
Higher derivatives are written as
formula_2
This is a suggestive notational device that comes from formal manipulations of symbols, as in,
formula_3
The value of the derivative of "y" at a point "x"
"a" may be expressed in two ways using Leibniz's notation:
formula_4.
Leibniz's notation allows one to specify the variable for differentiation (in the denominator). This is especially helpful when considering partial derivatives. It also makes the chain rule easy to remember and recognize:
formula_5
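These manipulations are easy to reproduce in a computer algebra system; the SymPy sketch below (illustrative only, assuming a standard SymPy installation) checks the chain rule on a concrete composite function.
from sympy import symbols, sin, diff, simplify

x, u = symbols('x u')
u_of_x = x**2 + 1                                 # u as a function of x
y_of_u = sin(u)                                   # y as a function of u

chain = (diff(y_of_u, u) * diff(u_of_x, x)).subs(u, u_of_x)   # dy/du * du/dx
direct = diff(y_of_u.subs(u, u_of_x), x)                      # d/dx of sin(x**2 + 1)
print(simplify(chain - direct))                               # 0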
Leibniz's notation for differentiation does not require assigning a meaning to symbols such as "dx" or "dy" (known as differentials) on their own, and some authors do not attempt to assign these symbols meaning. Leibniz treated these symbols as infinitesimals. Later authors have assigned them other meanings, such as infinitesimals in non-standard analysis, or exterior derivatives. Commonly, "dx" is left undefined or equated with formula_6, while "dy" is assigned a meaning in terms of "dx", via the equation
formula_7
which may also be written, e.g.
formula_8
(see below). Such equations give rise to the terminology found in some texts wherein the derivative is referred to as the "differential coefficient" (i.e., the coefficient of "dx").
Some authors and journals set the differential symbol "d" in roman type instead of italic: d"x". The ISO/IEC 80000 scientific style guide recommends this style.
Leibniz's notation for antidifferentiation.
formula_9<br>formula_10The single and double indefinite integrals of y with respect to x, in the Leibniz notation.
Leibniz introduced the integral symbol ∫ in "Analyseos tetragonisticae pars secunda" and "Methodi tangentium inversae exempla" (both from 1675). It is now the standard symbol for integration.
formula_11
Lagrange's notation.
"f"′("x")A function f of x, differentiated once in Lagrange's notation.
One of the most common modern notations for differentiation is named after Joseph Louis Lagrange, even though it was actually invented by Euler and just popularized by the former. In Lagrange's notation, a prime mark denotes a derivative. If "f" is a function, then its derivative evaluated at "x" is written
formula_12.
It first appeared in print in 1749.
Higher derivatives are indicated using additional prime marks, as in formula_13 for the second derivative and formula_14 for the third derivative. The use of repeated prime marks eventually becomes unwieldy. Some authors continue by employing Roman numerals, usually in lower case, as in
formula_15
to denote fourth, fifth, sixth, and higher order derivatives. Other authors use Arabic numerals in parentheses, as in
formula_16
This notation also makes it possible to describe the "n"th derivative, where "n" is a variable. This is written
formula_17
Unicode characters related to Lagrange's notation include
When there are two independent variables for a function "f"("x", "y"), the following convention may be followed:
formula_18
Lagrange's notation for antidifferentiation.
"f"(−1)("x")"f"(−2)("x")The single and double indefinite integrals of f with respect to x, in the Lagrange notation.
When taking the antiderivative, Lagrange followed Leibniz's notation:
formula_19
However, because integration is the inverse operation of differentiation, Lagrange's notation for higher order derivatives extends to integrals as well. Repeated integrals of "f" may be written as
formula_20 for the first integral (this is easily confused with the inverse function formula_21),
formula_22 for the second integral,
formula_23 for the third integral, and
formula_24 for the "n"th integral.
D-notation.
"Dxy""D"2"f"The x derivative of y and the second derivative of f, Euler notation.
This notation is sometimes called Euler's notation although it was introduced by Louis François Antoine Arbogast, and it seems that Leonhard Euler did not use it.
This notation uses a differential operator denoted as "D" (D operator) or "D̃" (Newton–Leibniz operator). When applied to a function "f"("x"), it is defined by
formula_25
Higher derivatives are notated as "powers" of "D" (where the superscripts denote iterated composition of "D"), as in
formula_26 for the second derivative,
formula_27 for the third derivative, and
formula_28 for the "n"th derivative.
D-notation leaves implicit the variable with respect to which differentiation is being done. However, this variable can also be made explicit by putting its name as a subscript: if "f" is a function of a variable "x", this is done by writing
formula_29 for the first derivative,
formula_30 for the second derivative,
formula_31 for the third derivative, and
formula_32 for the "n"th derivative.
When "f" is a function of several variables, it is common to use "∂", a stylized cursive lower-case d, rather than ""D"". As above, the subscripts denote the derivatives that are being taken. For example, the second partial derivatives of a function "f"("x", "y") are:
formula_33
See .
D-notation is useful in the study of differential equations and in differential algebra.
D-notation for antiderivatives.
"D""y""D"−2"f"The x antiderivative of y and the second antiderivative of f, Euler notation.
D-notation can be used for antiderivatives in the same way that Lagrange's notation is as follows
formula_34 for a first antiderivative,
formula_35 for a second antiderivative, and
formula_36 for an "n"th antiderivative.
Newton's notation.
"ẋ""ẍ"The first and second derivatives of x, Newton's notation.
Isaac Newton's notation for differentiation (also called the dot notation, fluxions, or sometimes, crudely, the flyspeck notation for differentiation) places a dot over the dependent variable. That is, if "y" is a function of "t", then the derivative of "y" with respect to "t" is
formula_37
Higher derivatives are represented using multiple dots, as in
formula_38
Newton extended this idea quite far:
formula_39
Unicode characters related to Newton's notation include:
Newton's notation is generally used when the independent variable denotes time. If location "y" is a function of "t", then formula_37 denotes velocity and formula_40 denotes acceleration. This notation is popular in physics and mathematical physics. It also appears in areas of mathematics connected with physics such as differential equations.
When taking the derivative of a dependent variable "y" = "f"("x"), an alternative notation exists:
formula_41
Newton developed the following partial differential operators using side-dots on a curved X ( ⵋ ). Definitions given by Whiteside are below:
formula_42
Newton's notation for integration.
"x̍""x̎"The first and second antiderivatives of x, in one of Newton's notations.
Newton developed many different notations for integration in his "Quadratura curvarum" (1704) and later works: he wrote a small vertical bar or prime above the dependent variable ("y̍" ), a prefixing rectangle (▭"y"), or the inclosure of the term in a rectangle ("y") to denote the "fluent" or time integral (absement).
formula_43
To denote multiple integrals, Newton used two small vertical bars or primes ("y̎"), or a combination of previous symbols ▭"y̍" "y̍", to denote the second time integral (absity).
formula_44
Higher order time integrals were as follows:
formula_45
This mathematical notation did not become widespread because of printing difficulties and the Leibniz–Newton calculus controversy.
Partial derivatives.
"fx""fxy"A function f differentiated against x, then against x and y.
When more specific types of differentiation are necessary, such as in multivariate calculus or tensor analysis, other notations are common.
For a function "f" of a single independent variable "x", we can express the derivative using subscripts of the independent variable:
formula_46
This type of notation is especially useful for taking partial derivatives of a function of several variables.
Partial derivatives are generally distinguished from ordinary derivatives by replacing the differential operator "d" with a "∂" symbol. For example, we can indicate the partial derivative of "f"("x", "y", "z") with respect to "x", but not to "y" or "z" in several ways:
formula_47
What makes this distinction important is that a non-partial derivative such as formula_48 "may", depending on the context, be interpreted as a rate of change in formula_49 relative to formula_50 when all variables are allowed to vary simultaneously, whereas with a partial derivative such as formula_51 it is explicit that only one variable should vary.
Other notations can be found in various subfields of mathematics, physics, and engineering; see for example the Maxwell relations of thermodynamics. The symbol formula_52 is the derivative of the temperature "T" with respect to the volume "V" while keeping constant the entropy (subscript) "S", while formula_53 is the derivative of the temperature with respect to the volume while keeping constant the pressure "P". This becomes necessary in situations where the number of variables exceeds the degrees of freedom, so that one has to choose which other variables are to be kept fixed.
Higher-order partial derivatives with respect to one variable are expressed as
formula_54
and so on. Mixed partial derivatives can be expressed as
formula_55
In this last case the variables are written in inverse order between the two notations, explained as follows:
formula_56
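The subscript and ∂ notations above are easy to reproduce with a computer algebra system. The following is a minimal sketch using SymPy; the particular function "f" is an arbitrary example chosen for this sketch, not part of the article.

```python
# Partial and mixed partial derivatives with SymPy.
# The function f below is an arbitrary example for this sketch.
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 * y**2 + sp.sin(x * y)

f_x  = sp.diff(f, x)       # f_x  = ∂f/∂x
f_xx = sp.diff(f, x, 2)    # f_xx = ∂²f/∂x²
f_xy = sp.diff(f, x, y)    # f_xy = (f_x)_y = ∂²f/∂y∂x
f_yx = sp.diff(f, y, x)    # f_yx = (f_y)_x = ∂²f/∂x∂y

# For smooth functions the two mixed partials coincide (Clairaut's theorem):
assert sp.simplify(f_xy - f_yx) == 0
```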
So-called multi-index notation is used in situations when the above notation becomes cumbersome or insufficiently expressive. When considering functions on formula_57, we define a multi-index to be an ordered list of formula_58 non-negative integers: formula_59. We then define, for formula_60, the notation
formula_61
In this way some results (such as the Leibniz rule) that are tedious to write in other ways can be expressed succinctly; some examples can be found in the article on multi-indices.
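As a hedged illustration, the multi-index derivative can be implemented directly from the definition above; the helper name and the example function below are assumptions of this sketch.

```python
# A minimal sketch of the multi-index derivative ∂^α f for functions on R^n, using SymPy.
import sympy as sp

def partial_alpha(f, variables, alpha):
    """Apply ∂^α = ∂^α1/∂x1^α1 ··· ∂^αn/∂xn^αn to the symbolic expression f."""
    for var, order in zip(variables, alpha):
        if order:
            f = sp.diff(f, var, order)
    return f

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = x1**2 * x2**3 * sp.exp(x3)
print(partial_alpha(f, (x1, x2, x3), (1, 2, 0)))   # ∂³f/∂x2²∂x1
```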
Notation in vector calculus.
Vector calculus concerns differentiation and integration of vector or scalar fields. Several notations specific to the case of three-dimensional Euclidean space are common.
Assume that ("x", "y", "z") is a given Cartesian coordinate system, that A is a vector field with components formula_62, and that formula_63 is a scalar field.
The differential operator introduced by William Rowan Hamilton, written ∇ and called del or nabla, is symbolically defined in the form of a vector,
formula_64
where the terminology "symbolically" reflects that the operator ∇ will also be treated as an ordinary vector.
∇"φ"Gradient of the scalar field "φ".
formula_67
The divergence formula_68 of the vector field A is obtained by taking the symbolic dot product of the operator ∇ with A:
formula_69
∇2"φ"The Laplacian of the scalar field "φ".
formula_71
The curl formula_72 (also written formula_73) of the vector field A is obtained by taking the symbolic cross product of the operator ∇ with A:
formula_74
Many symbolic operations of derivatives can be generalized in a straightforward manner by the gradient operator in Cartesian coordinates. For example, the single-variable product rule has a direct analogue in the multiplication of scalar fields by applying the gradient operator, as in
formula_75
Many other rules from single variable calculus have vector calculus analogues for the gradient, divergence, curl, and Laplacian.
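As a concrete illustration, the four operators above can be computed symbolically; the following is a minimal sketch using SymPy's vector module, where the particular fields φ and A are arbitrary examples chosen for this sketch.

```python
# Gradient, divergence, curl and Laplacian in Cartesian coordinates with SymPy.
from sympy.vector import CoordSys3D, gradient, divergence, curl

N = CoordSys3D('N')                          # Cartesian system with base vectors i, j, k
x, y, z = N.x, N.y, N.z

phi = x**2 * y + z                           # an example scalar field φ
A = x*y*N.i + y*z*N.j + z*x*N.k              # an example vector field A

grad_phi      = gradient(phi)                # ∇φ
div_A         = divergence(A)                # ∇ · A
curl_A        = curl(A)                      # ∇ × A
laplacian_phi = divergence(gradient(phi))    # ∇²φ = div grad φ

print(grad_phi, div_A, curl_A, laplacian_phi, sep='\n')
```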
Further notations have been developed for more exotic types of spaces. For calculations in Minkowski space, the d'Alembert operator, also called the d'Alembertian, wave operator, or box operator is represented as formula_76, or as formula_77 when not in conflict with the symbol for the Laplacian.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{dy}{dx}."
},
{
"math_id": 1,
"text": "\\frac{df}{dx}(x)\\text{ or }\\frac{d f(x)}{dx}\\text{ or }\\frac{d}{dx} f(x)."
},
{
"math_id": 2,
"text": "\\frac{d^2y}{dx^2}, \\frac{d^3y}{dx^3}, \\frac{d^4y}{dx^4}, \\ldots, \\frac{d^ny}{dx^n}."
},
{
"math_id": 3,
"text": "\\frac{d\\left(\\frac{dy}{dx}\\right)}{dx} = \\left(\\frac{d}{dx}\\right)^2y = \\frac{d^2y}{dx^2}."
},
{
"math_id": 4,
"text": "\\left.\\frac{dy}{dx}\\right|_{x=a} \\text{ or } \\frac{dy}{dx}(a)"
},
{
"math_id": 5,
"text": "\\frac{dy}{dx} = \\frac{dy}{du} \\cdot \\frac{du}{dx}."
},
{
"math_id": 6,
"text": "\\Delta x"
},
{
"math_id": 7,
"text": "dy = \\frac{dy}{dx} \\cdot dx, "
},
{
"math_id": 8,
"text": "df = f'(x) \\cdot dx "
},
{
"math_id": 9,
"text": "\\int y\\,dx"
},
{
"math_id": 10,
"text": "\\iint y\\,dx^2"
},
{
"math_id": 11,
"text": "\\begin{align}\n \\int y'\\,dx &= \\int f'(x)\\,dx = f(x) + C_0 = y + C_0 \\\\\n \\int y\\,dx &= \\int f(x)\\,dx = F(x) + C_1 \\\\\n \\iint y\\,dx^2 &= \\int \\left ( \\int y\\,dx \\right ) dx = \\int_{X\\times X} f(x)\\,dx = \\int F(x)\\,dx = g(x) + C_2 \\\\\n \\underbrace{\\int \\dots \\int}_{\\!\\! n} y\\,\\underbrace{dx \\dots dx}_n &= \\int_{\\underbrace{X\\times\\cdots\\times X}_n} f(x)\\,dx = \\int s(x)\\,dx = S(x) + C_n\n\\end{align}"
},
{
"math_id": 12,
"text": "f'(x)"
},
{
"math_id": 13,
"text": "f''(x)"
},
{
"math_id": 14,
"text": "f'''(x)"
},
{
"math_id": 15,
"text": "f^{\\mathrm{iv}}(x), f^{\\mathrm{v}}(x), f^{\\mathrm{vi}}(x), \\ldots,"
},
{
"math_id": 16,
"text": "f^{(4)}(x), f^{(5)}(x), f^{(6)}(x), \\ldots."
},
{
"math_id": 17,
"text": "f^{(n)}(x)."
},
{
"math_id": 18,
"text": "\\begin{align}\n f^\\prime &= \\frac{\\partial f}{\\partial x} = f_x \\\\[5pt]\n f_\\prime &= \\frac{\\partial f}{\\partial y} = f_y \\\\[5pt]\n f^{\\prime\\prime} &= \\frac{\\partial ^2 f}{\\partial x^2} = f_{xx} \\\\[5pt]\n f_\\prime^\\prime &= \\frac{\\partial ^2 f}{\\partial y \\partial x}\\ = f_{xy} \\\\[5pt]\n f_{\\prime\\prime} &= \\frac{\\partial ^2 f}{\\partial y^2} = f_{yy}\n\\end{align}"
},
{
"math_id": 19,
"text": "f(x) = \\int f'(x)\\,dx = \\int y'\\,dx."
},
{
"math_id": 20,
"text": "f^{(-1)}(x)"
},
{
"math_id": 21,
"text": "f^{-1}(x)"
},
{
"math_id": 22,
"text": "f^{(-2)}(x)"
},
{
"math_id": 23,
"text": "f^{(-3)}(x)"
},
{
"math_id": 24,
"text": "f^{(-n)}(x)"
},
{
"math_id": 25,
"text": "(Df)(x) = \\frac{df(x)}{dx}."
},
{
"math_id": 26,
"text": "D^2f"
},
{
"math_id": 27,
"text": "D^3f"
},
{
"math_id": 28,
"text": "D^nf"
},
{
"math_id": 29,
"text": "D_x f"
},
{
"math_id": 30,
"text": "D^2_x f"
},
{
"math_id": 31,
"text": "D^3_x f"
},
{
"math_id": 32,
"text": "D^n_x f"
},
{
"math_id": 33,
"text": "\n\\begin{align}\n& \\partial_{xx} f = \\frac{\\partial^2 f}{\\partial x^2}, \\\\[5pt]\n& \\partial_{xy} f = \\frac{\\partial^2 f}{\\partial y\\,\\partial x}, \\\\[5pt]\n& \\partial_{yx} f = \\frac{\\partial^2 f}{\\partial x\\,\\partial y}, \\\\[5pt]\n& \\partial_{yy} f = \\frac{\\partial^2 f}{\\partial y^2}.\n\\end{align}\n"
},
{
"math_id": 34,
"text": "D^{-1}f(x)"
},
{
"math_id": 35,
"text": "D^{-2}f(x)"
},
{
"math_id": 36,
"text": "D^{-n}f(x)"
},
{
"math_id": 37,
"text": "\\dot y"
},
{
"math_id": 38,
"text": "\\ddot y, \\overset{...}{y}"
},
{
"math_id": 39,
"text": "\\begin{align}\n \\ddot{y} &\\equiv \\frac{d^2y}{dt^2} = \\frac{d}{dt}\\left(\\frac{dy}{dt}\\right) = \\frac{d}{dt}\\Bigl(\\dot{y}\\Bigr) = \\frac{d}{dt}\\Bigl(f'(t)\\Bigr) = D_t^2 y = f''(t) = y''_t \\\\[5pt]\n \\overset{...}{y} &= \\dot{\\ddot{y}} \\equiv \\frac{d^3y}{dt^3} = D_t^3 y = f'''(t) = y'''_t \\\\[5pt]\n \\overset{\\,4}{\\dot{y}} &= \\overset{....}{y} = \\ddot{\\ddot{y}} \\equiv \\frac{d^4y}{dt^4} = D_t^4 y = f^{\\rm IV}(t) = y^{(4)}_t \\\\[5pt]\n \\overset{\\,5}{\\dot{y}} &= \\ddot{\\overset{...}{y}} = \\dot{\\ddot{\\ddot{y}}} = \\ddot{\\dot{\\ddot{y}}} \\equiv \\frac{d^5y}{dt^5} = D_t^5 y = f^{\\rm V}(t) = y^{(5)}_t \\\\[5pt]\n \\overset{\\,6}{\\dot{y}} &= \\overset{...}{\\overset{...}{y}} \\equiv \\frac{d^6y}{dt^6} = D_t^6 y = f^{\\rm VI}(t) = y^{(6)}_t \\\\[5pt]\n \\overset{\\,7}{\\dot{y}} &= \\dot{\\overset{...}{\\overset{...}{y}}} \\equiv \\frac{d^7y}{dt^7} = D_t^7 y = f^{\\rm VII}(t) = y^{(7)}_t \\\\[5pt]\n \\overset{\\,10}{\\dot{y}} &= \\ddot{\\ddot{\\ddot{\\ddot{\\ddot{y}}}}} \\equiv \\frac{d^{10}y}{dt^{10}} = D_t^{10} y = f^{\\rm X}(t) = y^{(10)}_t \\\\[5pt]\n \\overset{\\,n}{\\dot{y}} &\\equiv \\frac{d^ny}{dt^n} = D_t^n y = f^{(n)}(t) = y^{(n)}_t\n\\end{align}"
},
{
"math_id": 40,
"text": "\\ddot y"
},
{
"math_id": 41,
"text": "\\frac{\\dot{y}}{\\dot{x}} = \\dot{y}:\\dot{x} \\equiv \\frac{dy}{dt}:\\frac{dx}{dt} = \\frac{\\frac{dy}{dt}}{\\frac{dx}{dt}} = \\frac{dy}{dx} = \\frac{d}{dx}\\Bigl(f(x)\\Bigr) = D y = f'(x) = y'."
},
{
"math_id": 42,
"text": "\\begin{align}\n \\mathcal{X} \\ &=\\ f(x,y) \\,, \\\\[5pt]\n \\cdot\\mathcal{X} \\ &=\\ x\\frac{\\partial f}{\\partial x} = xf_x\\,, \\\\[5pt]\n \\mathcal{X}\\!\\cdot \\ &=\\ y\\frac{\\partial f}{\\partial y} = yf_y\\,, \\\\[5pt]\n \\colon\\!\\mathcal{X}\\,\\text{ or }\\,\\cdot\\!\\left(\\cdot\\mathcal{X}\\right) \\ &=\\ x^2\\frac{\\partial^2 f}{\\partial x^2} = x^2 f_{xx}\\,, \\\\[5pt]\n \\mathcal{X}\\colon\\,\\text{ or }\\,\\left(\\mathcal{X}\\cdot\\right)\\!\\cdot \\ &=\\ y^2\\frac{\\partial^2 f}{\\partial y^2} = y^2 f_{yy}\\,, \\\\[5pt]\n \\cdot\\mathcal{X}\\!\\cdot\\ \\ &=\\ xy\\frac{\\partial^2 f}{\\partial x \\, \\partial y} = xy f_{xy}\\,,\n\\end{align}"
},
{
"math_id": 43,
"text": "\\begin{align}\n y &= \\Box \\dot{y} \\equiv \\int \\dot{y} \\,dt = \\int f'(t) \\,dt = D_t^{-1} (D_t y) = f(t) + C_0 = y_t + C_0 \\\\\n \\overset{\\,\\prime}{y} &= \\Box y \\equiv \\int y \\,dt = \\int f(t) \\,dt = D_t^{-1} y = F(t) + C_1\n\\end{align}"
},
{
"math_id": 44,
"text": "\\overset{\\,\\prime\\prime}{y} = \\Box \\overset{\\,\\prime}{y} \\equiv \\int \\overset{\\,\\prime}{y} \\,dt = \\int F(t) \\,dt = D_t^{-2} y = g(t) + C_2"
},
{
"math_id": 45,
"text": "\\begin{align}\n \\overset{\\,\\prime\\prime\\prime}{y} &= \\Box \\overset{\\,\\prime\\prime}{y} \\equiv \\int \\overset{\\,\\prime\\prime}{y} \\,dt = \\int g(t) \\,dt = D_t^{-3} y = G(t) + C_3 \\\\\n \\overset{\\,\\prime\\prime\\prime\\prime}{y} &= \\Box \\overset{\\,\\prime\\prime\\prime}{y} \\equiv \\int \\overset{\\,\\prime\\prime\\prime}{y} \\,dt = \\int G(t) \\,dt = D_t^{-4} y = h(t) + C_4 \\\\\n \\overset{\\;n}\\overset{\\,\\prime}{y} &= \\Box \\overset{\\;n-1}\\overset{\\,\\prime}y \\equiv \\int \\overset{\\;n-1}\\overset{\\,\\prime}y \\,dt = \\int s(t) \\,dt = D_t^{-n} y = S(t) + C_n\n\\end{align}"
},
{
"math_id": 46,
"text": "\\begin{align}\n f_x &= \\frac{df}{dx} \\\\[5pt]\n f_{x x} &= \\frac{d^2f}{dx^2}.\n\\end{align}"
},
{
"math_id": 47,
"text": "\\frac{\\partial f}{\\partial x} = f_x = \\partial_x f."
},
{
"math_id": 48,
"text": "\\textstyle \\frac{df}{dx}"
},
{
"math_id": 49,
"text": "f"
},
{
"math_id": 50,
"text": "x"
},
{
"math_id": 51,
"text": "\\textstyle \\frac{\\partial f}{\\partial x}"
},
{
"math_id": 52,
"text": "\\left(\\frac{\\partial T}{\\partial V}\\right)_{\\!S} "
},
{
"math_id": 53,
"text": "\\left(\\frac{\\partial T}{\\partial V}\\right)_{\\!P} "
},
{
"math_id": 54,
"text": "\n\\begin{align}\n& \\frac{\\partial^2f}{\\partial x^2} = f_{xx}, \\\\[5pt]\n& \\frac{\\partial^3f}{\\partial x^3} = f_{xxx},\n\\end{align}\n"
},
{
"math_id": 55,
"text": "\\frac{\\partial^2 f}{\\partial y \\partial x} = f_{xy}."
},
{
"math_id": 56,
"text": "\n\\begin{align}\n& (f_x)_y = f_{xy}, \\\\[5pt]\n& \\frac{\\partial}{\\partial y}\\!\\left(\\frac{\\partial f}{\\partial x}\\right) = \\frac{\\partial^2f}{\\partial y \\, \\partial x}.\n\\end{align}\n"
},
{
"math_id": 57,
"text": "\\R^n"
},
{
"math_id": 58,
"text": "n"
},
{
"math_id": 59,
"text": "\\alpha = (\\alpha_1,\\ldots,\\alpha_n), \\ \\alpha_i \\in \\Z_{\\geq 0}"
},
{
"math_id": 60,
"text": "f:\\R^n \\to X"
},
{
"math_id": 61,
"text": "\\partial^\\alpha f = \\frac{\\partial^{\\alpha_1}}{\\partial x_1^{\\alpha_1}} \\cdots \\frac{\\partial^{\\alpha_n}}{\\partial x_n^{\\alpha_n}} f"
},
{
"math_id": 62,
"text": "\\mathbf{A} = (\\mathbf{A}_x, \\mathbf{A}_y, \\mathbf{A}_z)"
},
{
"math_id": 63,
"text": "\\varphi = \\varphi(x,y,z)"
},
{
"math_id": 64,
"text": "\\nabla = \\left( \\frac{\\partial}{\\partial x}, \\frac{\\partial}{\\partial y}, \\frac{\\partial}{\\partial z} \\right)\\!,"
},
{
"math_id": 65,
"text": "\\mathrm{grad\\,} \\varphi"
},
{
"math_id": 66,
"text": "\\varphi"
},
{
"math_id": 67,
"text": "\\begin{align}\n \\operatorname{grad} \\varphi\n &= \\left( \\frac{\\partial \\varphi}{\\partial x}, \\frac{\\partial \\varphi}{\\partial y}, \\frac{\\partial \\varphi}{\\partial z} \\right) \\\\\n &= \\left( \\frac{\\partial}{\\partial x}, \\frac{\\partial}{\\partial y}, \\frac{\\partial}{\\partial z} \\right) \\varphi \\\\\n &= \\nabla \\varphi\n\\end{align}"
},
{
"math_id": 68,
"text": "\\mathrm{div}\\,\\mathbf{A}"
},
{
"math_id": 69,
"text": "\\begin{align}\n \\operatorname{div} \\mathbf{A}\n &= {\\partial A_x \\over \\partial x} + {\\partial A_y \\over \\partial y} + {\\partial A_z \\over \\partial z} \\\\\n &= \\left( \\frac{\\partial}{\\partial x}, \\frac{\\partial}{\\partial y}, \\frac{\\partial}{\\partial z} \\right) \\cdot \\mathbf{A} \\\\\n &= \\nabla \\cdot \\mathbf{A}\n\\end{align}"
},
{
"math_id": 70,
"text": "\\operatorname{div} \\operatorname{grad} \\varphi"
},
{
"math_id": 71,
"text": "\\begin{align}\n \\operatorname{div} \\operatorname{grad} \\varphi\n &= \\nabla \\cdot (\\nabla \\varphi) \\\\\n &= (\\nabla \\cdot \\nabla) \\varphi \\\\\n &= \\nabla^2 \\varphi \\\\\n &= \\Delta \\varphi \\\\\n\\end{align}"
},
{
"math_id": 72,
"text": "\\mathrm{curl}\\,\\mathbf{A}"
},
{
"math_id": 73,
"text": "\\mathrm{rot}\\,\\mathbf{A}"
},
{
"math_id": 74,
"text": "\\begin{align}\n \\operatorname{curl} \\mathbf{A}\n &= \\left(\n {\\partial A_z \\over {\\partial y} } - {\\partial A_y \\over {\\partial z} },\n {\\partial A_x \\over {\\partial z} } - {\\partial A_z \\over {\\partial x} },\n {\\partial A_y \\over {\\partial x} } - {\\partial A_x \\over {\\partial y} }\n \\right) \\\\\n &= \\left( {\\partial A_z \\over {\\partial y} } - {\\partial A_y \\over {\\partial z} } \\right) \\mathbf{i} +\n \\left( {\\partial A_x \\over {\\partial z} } - {\\partial A_z \\over {\\partial x} } \\right) \\mathbf{j} +\n \\left( {\\partial A_y \\over {\\partial x} } - {\\partial A_x \\over {\\partial y} } \\right) \\mathbf{k} \\\\\n &= \\begin{vmatrix}\n \\mathbf{i} & \\mathbf{j} & \\mathbf{k} \\\\\n \\cfrac{\\partial}{\\partial x} & \\cfrac{\\partial}{\\partial y} & \\cfrac{\\partial}{\\partial z} \\\\\n A_x & A_y & A_z\n \\end{vmatrix} \\\\\n &= \\nabla \\times \\mathbf{A}\n\\end{align}"
},
{
"math_id": 75,
"text": "(f g)' = f' g+f g' ~~~ \\Longrightarrow ~~~ \\nabla(\\phi \\psi) = (\\nabla \\phi) \\psi + \\phi (\\nabla \\psi)."
},
{
"math_id": 76,
"text": "\\Box"
},
{
"math_id": 77,
"text": "\\Delta"
}
] | https://en.wikipedia.org/wiki?curid=10265555 |
1026848 | Weyl tensor | Measure of the curvature of a pseudo-Riemannian manifold
In differential geometry, the Weyl curvature tensor, named after Hermann Weyl, is a measure of the curvature of spacetime or, more generally, a pseudo-Riemannian manifold. Like the Riemann curvature tensor, the Weyl tensor expresses the tidal force that a body feels when moving along a geodesic. The Weyl tensor differs from the Riemann curvature tensor in that it does not convey information on how the volume of the body changes, but rather only how the shape of the body is distorted by the tidal force. The Ricci curvature, or trace component of the Riemann tensor, contains precisely the information about how volumes change in the presence of tidal forces, so the Weyl tensor is the traceless component of the Riemann tensor. This tensor has the same symmetries as the Riemann tensor, but satisfies the extra condition that it is trace-free: metric contraction on any pair of indices yields zero. It is obtained from the Riemann tensor by subtracting a tensor that is a linear expression in the Ricci tensor.
In general relativity, the Weyl curvature is the only part of the curvature that exists in free space—a solution of the vacuum Einstein equation—and it governs the propagation of gravitational waves through regions of space devoid of matter. More generally, the Weyl curvature is the only component of curvature for Ricci-flat manifolds and always governs the characteristics of the field equations of an Einstein manifold.
In dimensions 2 and 3 the Weyl curvature tensor vanishes identically. In dimensions ≥ 4, the Weyl curvature is generally nonzero. If the Weyl tensor vanishes in dimension ≥ 4, then the metric is locally conformally flat: there exists a local coordinate system in which the metric tensor is proportional to a constant tensor. This fact was a key component of Nordström's theory of gravitation, which was a precursor of general relativity.
Definition.
The Weyl tensor can be obtained from the full curvature tensor by subtracting out various traces. This is most easily done by writing the Riemann tensor as a (0,4) valence tensor (by contracting with the metric). The (0,4) valence Weyl tensor is then
formula_0
where "n" is the dimension of the manifold, "g" is the metric, "R" is the Riemann tensor, "Ric" is the Ricci tensor, "s" is the scalar curvature, and formula_1 denotes the Kulkarni–Nomizu product of two symmetric (0,2) tensors:
formula_2
In tensor component notation, this can be written as
formula_3
The ordinary (1,3) valent Weyl tensor is then given by contracting the above with the inverse of the metric.
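As a hedged numerical sketch (not part of the article's presentation), the Kulkarni–Nomizu product and the trace subtraction above can be written directly in terms of component arrays; the use of NumPy and the function names are assumptions of this sketch, and the curvature data are assumed to be given as fully covariant components.

```python
# Sketch: Kulkarni–Nomizu product and the (0,4) Weyl tensor from the decomposition above.
import numpy as np

def kulkarni_nomizu(h, k):
    """(h KN k)_{abcd} = h_ac k_bd + h_bd k_ac - h_ad k_bc - h_bc k_ad."""
    return (np.einsum('ac,bd->abcd', h, k) + np.einsum('bd,ac->abcd', h, k)
            - np.einsum('ad,bc->abcd', h, k) - np.einsum('bc,ad->abcd', h, k))

def weyl_tensor(g, riemann, ricci, scalar):
    """Fully covariant Weyl tensor from metric, Riemann and Ricci tensors and scalar curvature."""
    n = g.shape[0]
    traceless_ric = ricci - (scalar / n) * g
    return (riemann
            - kulkarni_nomizu(traceless_ric, g) / (n - 2)
            - scalar * kulkarni_nomizu(g, g) / (2 * n * (n - 1)))

# Trace-freeness can be checked by contracting a pair of indices with the inverse
# metric, e.g. np.einsum('ac,abcd->bd', np.linalg.inv(g), C) should vanish numerically.
```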
The decomposition above expresses the Riemann tensor as an orthogonal direct sum, in the sense that
formula_4
This decomposition, known as the Ricci decomposition, splits the Riemann curvature tensor into its irreducible components under the action of the orthogonal group. In dimension 4, the Weyl tensor further decomposes into invariant factors for the action of the special orthogonal group, the self-dual and anti-self-dual parts "C"+ and "C"−.
The Weyl tensor can also be expressed using the Schouten tensor, which is a trace-adjusted multiple of the Ricci tensor,
formula_5
Then
formula_6
In indices,
formula_7
where formula_8 is the Riemann tensor, formula_9 is the Ricci tensor, formula_10 is the Ricci scalar (the scalar curvature) and brackets around indices refers to the antisymmetric part. Equivalently,
formula_11
where "S" denotes the Schouten tensor.
Properties.
Conformal rescaling.
The Weyl tensor has the special property that it is invariant under conformal changes to the metric. That is, if formula_12 for some positive scalar function formula_13, then the (1,3) valent Weyl tensor satisfies formula_14. For this reason the Weyl tensor is also called the conformal tensor. It follows that a necessary condition for a Riemannian manifold to be conformally flat is that the Weyl tensor vanish. In dimensions ≥ 4 this condition is sufficient as well. In dimension 3 the vanishing of the Cotton tensor is a necessary and sufficient condition for the Riemannian manifold to be conformally flat. Any 2-dimensional (smooth) Riemannian manifold is conformally flat, a consequence of the existence of isothermal coordinates.
Indeed, the existence of a conformally flat scale amounts to solving the overdetermined partial differential equation
formula_15
In dimension ≥ 4, the vanishing of the Weyl tensor is the only integrability condition for this equation; in dimension 3, it is the Cotton tensor instead.
Symmetries.
The Weyl tensor has the same symmetries as the Riemann tensor. This includes:
formula_16
In addition, of course, the Weyl tensor is trace free:
formula_17
for all "u", "v". In indices these four conditions are
formula_18
Bianchi identity.
Taking traces of the usual second Bianchi identity of the Riemann tensor eventually shows that
formula_19
where "S" is the Schouten tensor. The valence (0,3) tensor on the right-hand side is the Cotton tensor, apart from the initial factor.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C = R - \\frac{1}{n-2}\\left(\\mathrm{Ric} - \\frac{s}{n}g\\right) {~\\wedge\\!\\!\\!\\!\\!\\!\\!\\!\\;\\bigcirc~} g - \\frac{s}{2n(n - 1)}g {~\\wedge\\!\\!\\!\\!\\!\\!\\!\\!\\;\\bigcirc~} g"
},
{
"math_id": 1,
"text": "h {~\\wedge\\!\\!\\!\\!\\!\\!\\!\\!\\;\\bigcirc~} k"
},
{
"math_id": 2,
"text": "\\begin{align} \n (h {~\\wedge\\!\\!\\!\\!\\!\\!\\!\\!\\;\\bigcirc~} k)\\left(v_1, v_2, v_3, v_4\\right) =\\quad\n &h\\left(v_1, v_3\\right)k\\left(v_2, v_4\\right) + h\\left(v_2, v_4\\right)k\\left(v_1, v_3\\right) \\\\\n {}-{} &h\\left(v_1, v_4\\right)k\\left(v_2, v_3\\right) - h\\left(v_2, v_3\\right)k\\left(v_1, v_4\\right)\n\\end{align}"
},
{
"math_id": 3,
"text": "\\begin{align} \n C_{ik\\ell m} = R_{ik\\ell m}\n +{} &\\frac{1}{n - 2} \\left(R_{im}g_{k\\ell} - R_{i\\ell}g_{km} + R_{k\\ell}g_{im} - R_{km}g_{i\\ell} \\right) \\\\\n {}+{} &\\frac{1}{(n - 1)(n - 2)} R \\left(g_{i\\ell}g_{km} - g_{im}g_{k\\ell} \\right).\\ \n\\end{align}"
},
{
"math_id": 4,
"text": "|R|^2 = |C|^2 + \\left|\\frac{1}{n - 2}\\left(\\mathrm{Ric} - \\frac{s}{n}g\\right) {~\\wedge\\!\\!\\!\\!\\!\\!\\!\\!\\;\\bigcirc~} g\\right|^2 + \\left|\\frac{s}{2n(n - 1)}g {~\\wedge\\!\\!\\!\\!\\!\\!\\!\\!\\;\\bigcirc~} g\\right|^2."
},
{
"math_id": 5,
"text": "P = \\frac{1}{n - 2}\\left(\\mathrm{Ric} - \\frac{s}{2(n-1)}g\\right)."
},
{
"math_id": 6,
"text": "C = R - P {~\\wedge\\!\\!\\!\\!\\!\\!\\!\\!\\;\\bigcirc~} g."
},
{
"math_id": 7,
"text": "C_{abcd} = R_{abcd} - \\frac{2}{n - 2}\\left(g_{a[c}R_{d]b} - g_{b[c}R_{d]a}\\right) + \\frac{2}{(n - 1)(n - 2)}R~g_{a[c}g_{d]b}"
},
{
"math_id": 8,
"text": "R_{abcd}"
},
{
"math_id": 9,
"text": "R_{ab}"
},
{
"math_id": 10,
"text": "R"
},
{
"math_id": 11,
"text": "{C_{ab}}^{cd} = {R_{ab}}^{cd} - 4S_{[a}^{[c}\\delta_{b]}^{d]}"
},
{
"math_id": 12,
"text": "g_{\\mu\\nu}\\mapsto g'_{\\mu\\nu} = f g_{\\mu\\nu}"
},
{
"math_id": 13,
"text": "f"
},
{
"math_id": 14,
"text": "{C'}^{a}_{\\ \\ bcd} = C^{a}_{\\ \\ bcd}"
},
{
"math_id": 15,
"text": "Ddf - df\\otimes df + \\left(|df|^2 + \\frac{\\Delta f}{n - 2}\\right)g = \\operatorname{Ric}."
},
{
"math_id": 16,
"text": "\\begin{align}\n C(u, v) &= -C(v, u) \\\\\n \\langle C(u, v)w, z \\rangle &= -\\langle C(u, v)z, w \\rangle \\\\\n C(u, v)w + C(v, w)u + C(w, u)v &= 0.\n\\end{align}"
},
{
"math_id": 17,
"text": "\\operatorname{tr} C(u, \\cdot)v = 0"
},
{
"math_id": 18,
"text": "\\begin{align}\n C_{abcd} = -C_{bacd} &= -C_{abdc} \\\\\n C_{abcd} + C_{acdb} + C_{adbc} &= 0 \\\\\n {C^a}_{bac} &= 0.\n\\end{align}"
},
{
"math_id": 19,
"text": "\\nabla_a {C^a}_{bcd} = 2(n - 3)\\nabla_{[c}S_{d]b}"
}
] | https://en.wikipedia.org/wiki?curid=1026848 |
1027229 | Method of analytic tableaux | Tool for proving a logical formula
In proof theory, the semantic tableau (plural: tableaux), also called an analytic tableau, truth tree, or simply tree, is a decision procedure for sentential and related logics, and a proof procedure for formulae of first-order logic. An analytic tableau is a tree structure computed for a logical formula, having at each node a subformula of the original formula to be proved or refuted. Computation constructs this tree and uses it to prove or refute the whole formula. The tableau method can also determine the satisfiability of finite sets of formulas of various logics. It is the most popular proof procedure for modal logics.
A method of truth trees contains a fixed set of rules for producing trees from a given logical formula, or set of logical formulas. Those trees will have more formulas at each branch, and in some cases, a branch can come to contain both a formula and its negation, which is to say, a contradiction. In that case, the branch is said to close. If every branch in a tree closes, the tree itself is said to close. In virtue of the rules for construction of tableaux, a closed tree is a proof that the original formula, or set of formulas, used to construct it was itself self-contradictory, and therefore false. Conversely, a tableau can also prove that a logical formula is tautologous: if a formula is tautologous, its negation is a contradiction, so a tableau built from its negation will close.
History.
In his "Symbolic Logic Part II", Charles Lutwidge Dodgson (also known by his literary pseudonym, Lewis Carroll) introduced the Method of Trees, the earliest modern use of a truth tree.
The method of semantic tableaux was invented by the Dutch logician Evert Willem Beth (Beth 1955) and simplified, for classical logic, by Raymond Smullyan (Smullyan 1968, 1995). Smullyan's simplification, "one-sided tableaux", is described here. Smullyan's method has been generalized to arbitrary many-valued propositional and first-order logics by Walter Carnielli (Carnielli 1987).
Tableaux can be intuitively seen as sequent systems upside-down. This symmetrical relation between tableaux and sequent systems was formally established in (Carnielli 1991).
Propositional logic.
Background.
A formula in propositional logic consists of letters, which stand for propositions, and connectives for conjunction, disjunction, conditionals, biconditionals, and negation. The truth or falsehood of a proposition is called its truth value. A formula, or set of formulas, is said to be satisfiable if there is a possible assignment of truth-values to the propositional letters such that the entire formula, which combines the letters with connectives, is itself true as well. Such an assignment is said to "satisfy" the formula.
A tableau checks whether a given set of formulae is satisfiable or not. It can be used to check either validity or entailment: a formula is valid if its negation is unsatisfiable, and formulae formula_0 imply formula_1 if formula_2 is unsatisfiable.
Several notational variants for the logical connectives (conjunction, disjunction, negation, and the conditional) are in common use; this article uses a single fixed set of symbols throughout, and readers more familiar with a different notation should have no difficulty translating.
General method.
The main principle of propositional tableaux is to attempt to "break" complex formulae into smaller ones until complementary pairs of literals are produced or no further expansion is possible.
The method works on a tree whose nodes are labeled with formulae. At each step, this tree is modified; in the propositional case, the only allowed changes are additions of a node as descendant of a leaf. The procedure starts by generating the tree made of a chain of all formulae in the set to prove unsatisfiability. Then, the expansion rules described below may be applied repeatedly and nondeterministically, each application appending new nodes to a leaf.
Eventually, this procedure will terminate, because at some point a rule has been applied to every formula to which one can be applied, and the expansion rules guarantee that every node in the tree is simpler than the formula used to create it.
The principle of tableau is that formulae in nodes of the same branch are considered in conjunction while the different branches are considered to be disjuncted. As a result, a tableau is a tree-like representation of a formula that is a disjunction of conjunctions. This formula is equivalent to the set to prove unsatisfiability. The procedure modifies the tableau in such a way that the formula represented by the resulting tableau is equivalent to the original one. One of these conjunctions may contain a pair of complementary literals, in which case that conjunction is proved to be unsatisfiable. If all conjunctions are proved unsatisfiable, the original set of formulae is unsatisfiable.
And.
Whenever a branch of a tableau contains a formula formula_5 that is the conjunction of two formulae, these two formulae are both consequences of that formula. This fact can be formalized by the following rule for expansion of a tableau:
(formula_6) If a branch of the tableau contains a conjunctive formula formula_5, add to its leaf the chain of two nodes containing the formulae formula_4 and formula_1
This rule is generally written as follows:
formula_7
A variant of this rule allows a node to contain a set of formulae rather than a single one. In this case, the formulae in this set are considered in conjunction, so one can add formula_8 at the end of a branch containing formula_5. More precisely, if a node on a branch is labeled formula_9, one can add to the branch the new leaf formula_10.
Or.
If a branch of a tableau contains a formula that is a disjunction of two formulae, such as formula_11, the following rule can be applied:
(formula_12) If a node on a branch contains a disjunctive formula formula_11, then create two sibling children to the leaf of the branch, containing the formulae formula_4 and formula_1, respectively.
This rule splits a branch into two, differing only for the final node. Since branches are considered in disjunction to each other, the two resulting branches are equivalent to the original one, as the disjunction of their non-common nodes is precisely formula_11. The rule for disjunction is generally formally written using the symbol formula_13 for separating the formulae of the two distinct nodes to be created:
formula_14
If nodes are assumed to contain sets of formulae, this rule is replaced by: if a node is labeled formula_15, a leaf of the branch this node is in can be appended two sibling child nodes labeled formula_16 and formula_17, respectively.
Not.
The aim of tableaux is to generate progressively simpler formulae until pairs of opposite literals are produced or no other rule can be applied. Negation can be treated by initially rewriting formulae into negation normal form, so that negation only occurs in front of literals. Alternatively, one can use De Morgan's laws during the expansion of the tableau, so that for example formula_18 is treated as formula_19. Rules that introduce or remove a pair of negations (such as in formula_20) are also used in this case (otherwise, there would be no way of expanding a formula like formula_21):
formula_22
formula_23
Closure.
Every tableau can be considered as a graphical representation of a formula, which is equivalent to the set the tableau is built from. This formula is as follows: each branch of the tableau represents the conjunction of its formulae; the tableau represents the disjunction of its branches. The expansion rules transform a tableau into one having an equivalent represented formula. Since the tableau is initialized as a single branch containing the formulae of the input set, all subsequent tableaux obtained from it represent formulae which are equivalent to that set (in the variant where the initial tableau is the single node labeled true, the formulae represented by tableaux are consequences of the original set).
The method of tableaux works by starting with the initial set of formulae and then adding to the tableau simpler and simpler formulae until contradiction is shown in the simple form of opposite literals. Since the formula represented by a tableau is the disjunction of the formulae represented by its branches, contradiction is obtained when every branch contains a pair of opposite literals.
Once a branch contains a literal and its negation, its corresponding formula is unsatisfiable. As a result, this branch can be now "closed", as there is no need to further expand it. If all branches of a tableau are closed, the formula represented by the tableau is unsatisfiable; therefore, the original set is unsatisfiable as well. Obtaining a tableau where all branches are closed is a way for proving the unsatisfiability of the original set. In the propositional case, one can also prove that satisfiability is proved by the impossibility of finding a closed tableau, provided that every expansion rule has been applied everywhere it could be applied. In particular, if a tableau contains some open (non-closed) branches and every formula that is not a literal has been used by a rule to generate a new node on every branch the formula is in, the set is satisfiable.
This rule takes into account that a formula may occur in more than one branch (this is the case if there is at least a branching point "below" the node). In this case, the rule for expanding the formula has to be applied so that its conclusion(s) are appended to all of these branches that are still open, before one can conclude that the tableau cannot be further expanded and that the formula is therefore satisfiable.
Set-labeled tableau.
A variant of tableau is to label nodes with sets of formulae rather than single formulae. In this case, the initial tableau is a single node labeled with the set to be proved satisfiable. The formulae in a set are therefore considered to be in conjunction.
The rules of expansion of the tableau can now work on the leaves of the tableau, ignoring all internal nodes. For conjunction, the rule is based on the equivalence of a set containing a conjunction formula_5 with the set containing both formula_4 and formula_1 in place of it. In particular, if a leaf is labeled with formula_9, a node can be appended to it with label formula_10:
formula_24
For disjunction, a set formula_25 is equivalent to the disjunction of the two sets formula_26 and formula_27. As a result, if the first set labels a leaf, two children can be appended to it, labeled with the latter two formulae.
formula_28
Finally, if a set contains both a literal and its negation, this branch can be closed:
formula_29
A tableau for a given finite set "X" is a finite (upside down) tree with root "X" in which all child nodes are obtained by applying the tableau rules to their parents. A branch in such a tableau is closed if its leaf node contains "closed". A tableau is closed if all its branches are closed. A tableau is open if at least one branch is not closed.
Below are two closed tableaux for the set
formula_30
Each rule application is marked at the right-hand side. Both achieve the same effect; the first closes faster, the only difference being the order in which the reductions are performed.
formula_31
and second, longer one, with the rules applied in a different order:
formula_32
The first tableau closes after only one rule application, while the second one takes many more steps to close. Clearly, we would prefer to always find the shortest closed tableaux, but it can be shown that a single algorithm that finds the shortest closed tableaux for all input sets of formulae cannot exist.
The three rules formula_33, formula_34 and formula_35 given above are then enough to decide if a given set formula_36 of formulae in negation normal form is jointly satisfiable:
Just apply all possible rules in all possible orders until we find a closed tableau for formula_36 or until we exhaust all possibilities and conclude that every tableau for formula_36 is open.
In the first case, formula_36 is jointly unsatisfiable and in the second case the leaf node of the open branch gives an assignment to the atomic formulae and negated atomic formulae which makes formula_36 jointly satisfiable. Classical logic actually has the rather nice property that we need to investigate only (any) one tableau completely: if it closes then formula_36 is unsatisfiable and if it is open then formula_36 is satisfiable. But this property is not generally enjoyed by other logics.
These rules suffice for all of classical logic by taking an initial set of formulae "X" and replacing each member "C" by its logically equivalent negation normal form "C' ", giving a set of formulae "X' ". We know that "X" is satisfiable if and only if "X' " is satisfiable, so it suffices to search for a closed tableau for "X' " using the procedure outlined above.
By setting formula_37 we can test whether the formula "A" is a tautology of classical logic:
If the tableau for formula_38 closes then formula_3 is unsatisfiable and so "A" is a tautology since no assignment of truth values will ever make "A" false. Otherwise any open leaf of any open branch of any open tableau for formula_38 gives an assignment that falsifies "A".
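The decision procedure described above can be sketched in a few lines of code. The following is one possible implementation, not a canonical one: formulae in negation normal form are encoded as nested tuples (an assumption of this sketch), a branch closes on a pair of complementary literals, and non-literal formulae are expanded by the conjunction and disjunction rules.

```python
# A small propositional tableau prover for formulae in negation normal form.
# Encoding (an assumption of this sketch): a propositional letter is a string 'p',
# a negated letter is ('not', 'p'), and compound formulae are ('and', F, G) or ('or', F, G).
def closes(branch):
    """True if every tableau branch extending `branch` (a list of formulae) closes."""
    letters = {f for f in branch if isinstance(f, str)}
    negated = {f[1] for f in branch if isinstance(f, tuple) and f[0] == 'not'}
    if letters & negated:                  # closure: complementary pair of literals
        return True
    for f in branch:
        if isinstance(f, tuple) and f[0] in ('and', 'or'):
            rest = [g for g in branch if g is not f]
            if f[0] == 'and':              # conjunction: add both components
                return closes(rest + [f[1], f[2]])
            return (closes(rest + [f[1]])  # disjunction: split into two branches
                    and closes(rest + [f[2]]))
    return False                           # only literals left, no contradiction: open

def satisfiable(formulae):
    return not closes(list(formulae))

# {p ∨ q, ¬p, ¬q} is unsatisfiable: every branch closes.
print(satisfiable([('or', 'p', 'q'), ('not', 'p'), ('not', 'q')]))    # False
# A formula A is a tautology exactly when closes([N]) holds,
# where N is the negation normal form of ¬A.
```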
Conditional.
Classical propositional logic usually has a connective to denote material implication. If we write this connective as ⇒, then the formula "A" ⇒ "B" stands for "if "A" then "B"". It is possible to give a tableau rule for breaking down "A" ⇒ "B" into its constituent formulae. Similarly, we can give one rule each for breaking down each of ¬("A" ∧ "B"), ¬("A" ∨ "B"), ¬(¬"A"), and ¬("A" ⇒ "B"). Together these rules would give a terminating procedure for deciding whether a given set of formulae is simultaneously satisfiable in classical logic since each rule breaks down one formula into its constituents but no rule builds larger formulae out of smaller constituents. Thus we must eventually reach a node that contains only atoms and negations of atoms. If this last node matches (id) then we can close the branch, otherwise it remains open.
But note that the following equivalences hold in classical logic where (...) = (...) means that the left hand side formula is logically equivalent to the right hand side formula:
formula_39
If we start with an arbitrary formula "C" of classical logic, and apply these equivalences repeatedly to replace the left hand sides with the right hand sides in "C", then we will obtain a formula "C' " which is logically equivalent to "C" but which has the property that "C' " contains no implications, and ¬ appears in front of atomic formulae only. Such a formula is said to be in negation normal form and it is possible to prove formally that every formula "C" of classical logic has a logically equivalent formula "C' " in negation normal form. That is, "C" is satisfiable if and only if "C' " is satisfiable.
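This rewriting is easy to mechanize. Below is a hedged sketch of a conversion into negation normal form using the equivalences just listed; the nested-tuple encoding of formulae (with 'imp' for the conditional) is an assumption of this sketch, as in the earlier sketch.

```python
# Sketch of conversion to negation normal form, pushing negations inward.
# Formulae are nested tuples: ('not', F), ('and', F, G), ('or', F, G),
# ('imp', F, G) for F ⇒ G, and plain strings for atoms.
def nnf(f):
    if isinstance(f, str):
        return f
    op = f[0]
    if op == 'imp':                                   # A ⇒ B  =  ¬A ∨ B
        return ('or', nnf(('not', f[1])), nnf(f[2]))
    if op in ('and', 'or'):
        return (op, nnf(f[1]), nnf(f[2]))
    g = f[1]                                          # op == 'not'
    if isinstance(g, str):
        return ('not', g)                             # negation of an atom stays as is
    if g[0] == 'not':                                 # ¬¬A = A
        return nnf(g[1])
    if g[0] == 'and':                                 # ¬(A ∧ B) = ¬A ∨ ¬B
        return ('or', nnf(('not', g[1])), nnf(('not', g[2])))
    if g[0] == 'or':                                  # ¬(A ∨ B) = ¬A ∧ ¬B
        return ('and', nnf(('not', g[1])), nnf(('not', g[2])))
    if g[0] == 'imp':                                 # ¬(A ⇒ B) = A ∧ ¬B
        return ('and', nnf(g[1]), nnf(('not', g[2])))

print(nnf(('not', ('imp', 'p', 'q'))))                # ('and', 'p', ('not', 'q'))
```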
Propositional tableau with unification.
The above rules for propositional tableau can be simplified by using uniform notation. In uniform notation, each formula is either of type formula_40 (alpha) or of type formula_41 (beta). Each formula of type alpha is assigned the two components formula_42, and each formula of type beta is assigned the two components formula_43. Formulae of type alpha can be thought of as being conjunctive, as both formula_44 and formula_45 are implied by formula_46 being true. Formulae of type beta can be thought of as being disjunctive, as either formula_47 or formula_48 is implied by formula_49 being true. The tables below show how to determine the type and the components of any given propositional formula:
formula_50 formula_51
In each table, the left-most column shows all the possible structures for the formulae of type alpha or beta, and the right-most columns show their respective components. Alternatively, the rules for uniform notation can be expressed using signed formulae:
formula_52
When constructing a propositional tableau using the above notation, whenever one encounters a formula of type alpha, its two components formula_53 are added to the current branch that is being expanded. Whenever one encounters a formula of type beta on some branch formula_54, one can split formula_54 into two branches, one with the set {formula_54, formula_47} of formulae, and the other with the set {formula_54, formula_48} of formulae.
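One possible way to encode the uniform notation in code is sketched below, again using the nested-tuple encoding of formulae (an assumption of this sketch): a non-literal formula is classified as alpha or beta and its two components are returned.

```python
# Sketch of Smullyan's uniform notation: type and components of a propositional formula.
def classify(f):
    """Return ('alpha', c1, c2) or ('beta', c1, c2), or None for a literal."""
    if isinstance(f, tuple) and f[0] == 'and':
        return ('alpha', f[1], f[2])                        # A ∧ B : A , B
    if isinstance(f, tuple) and f[0] == 'or':
        return ('beta', f[1], f[2])                         # A ∨ B : A | B
    if isinstance(f, tuple) and f[0] == 'imp':
        return ('beta', ('not', f[1]), f[2])                # A ⇒ B : ¬A | B
    if isinstance(f, tuple) and f[0] == 'not' and isinstance(f[1], tuple):
        g = f[1]
        if g[0] == 'and':
            return ('beta', ('not', g[1]), ('not', g[2]))   # ¬(A ∧ B) : ¬A | ¬B
        if g[0] == 'or':
            return ('alpha', ('not', g[1]), ('not', g[2]))  # ¬(A ∨ B) : ¬A , ¬B
        if g[0] == 'imp':
            return ('alpha', g[1], ('not', g[2]))           # ¬(A ⇒ B) : A , ¬B
    return None                                             # literal (double negation handled separately)
```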
First-order logic tableau.
Tableaux are extended to first-order predicate logic by two rules for dealing with universal and existential quantifiers, respectively. Two different sets of rules can be used; both employ a form of Skolemization for handling existential quantifiers, but differ on the handling of universal quantifiers.
The set of formulae to check for validity is here supposed to contain no free variables; this is not a limitation as free variables are implicitly universally quantified, so universal quantifiers over these variables can be added, resulting in a formula with no free variables.
First-order tableau without unification.
A first-order formula formula_55 implies all formulae formula_56 where formula_57 is a ground term. The following inference rule is therefore correct:
formula_58 where formula_57 is an arbitrary ground term
Contrarily to the rules for the propositional connectives, multiple applications of this rule to the same formula may be necessary. As an example, the set formula_59 can only be proved unsatisfiable if both formula_60 and formula_61 are generated from formula_62.
Existential quantifiers are dealt with by means of Skolemization. In particular, a formula with a leading existential quantifier like formula_63 generates its Skolemization formula_64, where formula_65 is a new constant symbol.
formula_66 where formula_65 is a new constant symbol
The Skolem term formula_65 is a constant (a function of arity 0) because the quantification over formula_67 does not occur within the scope of any universal quantifier. If the original formula contained some universal quantifiers such that the quantification over formula_67 was within their scope, these quantifiers have evidently been removed by the application of the rule for universal quantifiers.
The rule for existential quantifiers introduces new constant symbols. These symbols can be used by the rule for universal quantifiers, so that formula_68 can generate formula_69 even if formula_65 was not in the original formula but is a Skolem constant created by the rule for existential quantifiers.
The above two rules for universal and existential quantifiers are correct, and so are the propositional rules: if a set of formulae generates a closed tableau, this set is unsatisfiable. Completeness can also be proved: if a set of formulae is unsatisfiable, there exists a closed tableau built from it by these rules. However, actually finding such a closed tableau requires a suitable policy of application of rules. Otherwise, an unsatisfiable set can generate an infinite-growing tableau. As an example, the set formula_70 is unsatisfiable, but a closed tableau is never obtained if one unwisely keeps applying the rule for universal quantifiers to formula_62, generating for example formula_71. A closed tableau can always be found by ruling out this and similar "unfair" policies of application of tableau rules.
The rule for universal quantifiers formula_72 is the only non-deterministic rule, as it does not specify which term to instantiate with. Moreover, while the other rules need to be applied only once for each formula and each path the formula is in, this one may require multiple applications. Application of this rule can however be restricted by delaying the application of the rule until no other rule is applicable and by restricting the application of the rule to ground terms that already appear in the path of the tableau. The variant of tableaux with unification shown below aims at solving the problem of non-determinism.
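The two quantifier rules can be illustrated with a minimal sketch; the nested-tuple encoding of first-order formulae and terms, and the helper names below, are assumptions of this sketch rather than part of the standard presentation.

```python
# Sketch of the two quantifier rules for tableaux without unification.
# Formulae and terms are nested tuples; ('forall', 'x', F) and ('exists', 'x', F)
# are the quantified cases, atoms look like ('P', 'x'), and terms are strings
# or tuples such as ('f', t1, ..., tn).
import itertools

_fresh = itertools.count()

def substitute(f, var, term):
    """Replace every free occurrence of the variable `var` in `f` by `term`."""
    if isinstance(f, str):
        return term if f == var else f
    if f[0] in ('forall', 'exists') and f[1] == var:
        return f                                    # var is re-bound here, stop
    return (f[0],) + tuple(substitute(g, var, term) for g in f[1:])

def universal_rule(f, ground_term):
    """From ∀x φ, generate φ[x := t] for an arbitrary ground term t."""
    assert f[0] == 'forall'
    return substitute(f[2], f[1], ground_term)

def existential_rule(f):
    """From ∃x φ, generate φ[x := c] for a new Skolem constant c."""
    assert f[0] == 'exists'
    return substitute(f[2], f[1], 'c%d' % next(_fresh))

print(universal_rule(('forall', 'x', ('P', 'x')), 'a'))   # ('P', 'a')
print(existential_rule(('exists', 'x', ('Q', 'x'))))      # ('Q', 'c0')
```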
First-order tableau with unification.
The main problem of tableau without unification is how to choose a ground term formula_57 for the universal quantifier rule. Indeed, every possible ground term can be used, but clearly most of them might be useless for closing the tableau.
A solution to this problem is to "delay" the choice of the term to the time when the consequent of the rule allows closing at least a branch of the tableau. This can be done by using a variable instead of a term, so that formula_55 generates formula_73, and then allowing substitutions to later replace formula_74 with a term. The rule for universal quantifiers becomes:
formula_75 where formula_74 is a variable not occurring anywhere else in the tableau
While the initial set of formulae is supposed not to contain free variables, a formula of the tableau may contain the free variables generated by this rule. These free variables are implicitly considered universally quantified.
This rule employs a variable instead of a ground term. What is gained by this change is that these variables can be then given a value when a branch of the tableau can be closed, solving the problem of generating terms that might be useless.
As an example, formula_77 can be proved unsatisfiable by first generating formula_78; the negation of this literal is unifiable with formula_79, the most general unifier being the substitution that replaces formula_80 with formula_81; applying this substitution results in replacing formula_78 with formula_60, which closes the tableau.
This rule closes at least a branch of the tableau—the one containing the considered pair of literals. However, the substitution has to be applied to the whole tableau, not only on these two literals. This is expressed by saying that the free variables of the tableau are "rigid": if an occurrence of a variable is replaced by something else, all other occurrences of the same variable must be replaced in the same way. Formally, the free variables are (implicitly) universally quantified and all formulae of the tableau are within the scope of these quantifiers.
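Closing a branch requires computing a most general unifier of a literal and the negation of another literal. A standard syntactic unification algorithm can be sketched as follows; the term encoding and the convention that variable names start with an uppercase letter are assumptions of this sketch.

```python
# Sketch of syntactic unification of two terms or atoms.
# Terms are strings (variables start with an uppercase letter in this sketch)
# or tuples ('f', t1, ..., tn).
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

def unify(s, t, subst=None):
    """Return a most general unifier of s and t as a dict, or None if none exists."""
    subst = {} if subst is None else dict(subst)
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if is_var(t):
        return None if occurs(t, s, subst) else {**subst, t: s}
    if isinstance(s, tuple) and isinstance(t, tuple) and s[0] == t[0] and len(s) == len(t):
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# Example: P(X, f(a)) and P(b, f(Y)) unify with {X: b, Y: a}.
print(unify(('P', 'X', ('f', 'a')), ('P', 'b', ('f', 'Y'))))
```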
Existential quantifiers are dealt with by Skolemization. Contrary to the tableau without unification, Skolem terms may not be simple constants. Indeed, formulae in a tableau with unification may contain free variables, which are implicitly considered universally quantified. As a result, a formula like formula_63 may be within the scope of universal quantifiers; if this is the case, the Skolem term is not a simple constant but a term made of a new function symbol and the free variables of the formula.
formula_82 where formula_83 is a new function symbol and formula_84 the free variables of formula_85
This rule incorporates a simplification over a rule where formula_84 are the free variables of the branch, not of formula_85 alone. This rule can be further simplified by the reuse of a function symbol if it has already been used in a formula that is identical to formula_85 up to variable renaming.
The formula represented by a tableau is obtained in a way that is similar to the propositional case, with the additional assumption that free variables are considered universally quantified. As for the propositional case, formulae in each branch are conjoined and the resulting formulae are disjoined. In addition, all free variables of the resulting formula are universally quantified. All these quantifiers have the whole formula in their scope. In other words, if formula_86 is the formula obtained by disjoining the conjunction of the formulae in each branch, and formula_84 are the free variables in it, then formula_87 is the formula represented by the tableau. The following considerations apply:
The following two variants are also correct.
Tableaux with unification can be proved complete: if a set of formulae is unsatisfiable, it has a tableau-with-unification proof. However, actually finding such a proof may be a difficult problem. Contrarily to the case without unification, applying a substitution can modify the existing part of a tableau; while applying a substitution closes at least a branch, it may make other branches impossible to close (even if the set is unsatisfiable).
A solution to this problem is "delayed instantiation": no substitution is applied until one that closes all branches at the same time is found. With this variant, a proof for an unsatisfiable set can always be found by a suitable policy of application of the other rules. This method however requires the whole tableau to be kept in memory: the general method closes branches, which can be then discarded, while this variant does not close any branch until the end.
The problem that some tableaux that can be generated are impossible to close even if the set is unsatisfiable is common to other sets of tableau expansion rules: even if some specific sequences of application of these rules allow constructing a closed tableau (if the set is unsatisfiable), some other sequences lead to tableaux that cannot be closed. General solutions for these cases are outlined in the "Searching for a tableau" section.
Tableau calculi and their properties.
A tableau calculus is a set of rules that allows building and modification of a tableau. Propositional tableau rules, tableau rules without unification, and tableau rules with unification, are all tableau calculi. Some important properties a tableau calculus may or may not possess are completeness, destructiveness, and proof confluence.
A tableau calculus is called complete if it allows building a tableau proof for every given unsatisfiable set of formulae. The tableau calculi mentioned above can be proved complete.
A remarkable difference between tableau with unification and the other two calculi is that the latter two calculi only modify a tableau by adding new nodes to it, while the former one allows substitutions to modify the existing part of the tableau. More generally, tableau calculi are classed as "destructive" or "non-destructive" depending on whether they only add new nodes to tableau or not. Tableau with unification is therefore destructive, while propositional tableau and tableau without unification are non-destructive.
Proof confluence is the property of a tableau calculus being able to obtain a proof for an arbitrary unsatisfiable set from an arbitrary tableau, assuming that this tableau has itself been obtained by applying the rules of the calculus. In other words, in a proof confluent tableau calculus, from an unsatisfiable set one can apply whatever set of rules and still obtain a tableau from which a closed one can be obtained by applying some other rules.
Proof procedures.
A tableau calculus is simply a set of rules that prescribes how a tableau can be modified. A proof procedure is a method for actually finding a proof (if one exists). In other words, a tableau calculus is a set of rules, while a proof procedure is a policy of application of these rules. Even if a calculus is complete, not every possible choice of application of rules leads to a proof of an unsatisfiable set. For example, formula_102 is unsatisfiable, but both tableaux with unification and tableaux without unification allow the rule for the universal quantifiers to be applied repeatedly to the last formula, while simply applying the rule for disjunction to the third one would directly lead to closure.
For proof procedures, a definition of completeness has been given: a proof procedure is strongly complete if it allows finding a closed tableau for any given unsatisfiable set of formulae. Proof confluence of the underlying calculus is relevant to completeness: proof confluence is the guarantee that a closed tableau can be always generated from an arbitrary partially constructed tableau (if the set is unsatisfiable). Without proof confluence, the application of a 'wrong' rule may result in the impossibility of making the tableau complete by applying other rules.
Propositional tableaux and tableaux without unification have strongly complete proof procedures. In particular, a complete proof procedure is that of applying the rules in a "fair" way. This is because the only way such calculi cannot generate a closed tableau from an unsatisfiable set is by not applying some applicable rules.
For propositional tableaux, fairness amounts to expanding every formula in every branch. More precisely, for every formula and every branch the formula is in, the rule having the formula as a precondition has been used to expand the branch. A fair proof procedure for propositional tableaux is strongly complete.
For first-order tableaux without unification, the condition of fairness is similar, with the exception that the rule for universal quantifiers might require more than one application. Fairness amounts to expanding every universal quantifier infinitely often. In other words, a fair policy of application of rules cannot keep applying other rules without expanding every universal quantifier in every branch that is still open once in a while.
Searching for a closed tableau.
If a tableau calculus is complete, every unsatisfiable set of formulae has an associated closed tableau. While this tableau can always be obtained by applying some of the rules of the calculus, the problem of which rules to apply for a given formula still remains. As a result, completeness does not automatically imply the existence of a feasible policy of application of rules that always leads to a closed tableau for every given unsatisfiable set of formulae. While a fair proof procedure is complete for ground tableau and tableau without unification, this is not the case for tableau with unification.
A general solution for this problem is that of searching the space of tableaux until a closed one is found (if any exists, that is, the set is unsatisfiable). In this approach, one starts with an empty tableau and then recursively applies every possible applicable rule. This procedure visits an (implicit) tree whose nodes are labeled with tableaux, such that the tableau in a node is obtained from the tableau in its parent by applying one of the valid rules.
Since each branch can be infinite, this tree has to be visited breadth-first rather than depth-first. This requires a large amount of space, as the breadth of the tree can grow exponentially. A method that may visit some nodes more than once but works in polynomial space is to visit in a depth-first manner with iterative deepening: one first visits the tree depth first up to a certain depth, then increases the depth and performs the visit again. This particular procedure uses the depth (which is also the number of tableau rules that have been applied) for deciding when to stop at each step. Various other parameters (such as the size of the tableau labeling a node) have been used instead.
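The search strategy just described can be sketched generically; the functions `expansions` and `is_closed` below are hypothetical stand-ins for a rule-application enumerator and a closure test, not part of any particular prover.

```python
# Generic iterative-deepening search over the (implicit) tree of tableaux.
def depth_limited(tab, depth, expansions, is_closed):
    if is_closed(tab):
        return tab
    if depth == 0:
        return None
    for child in expansions(tab):            # tableaux reachable by one rule application
        found = depth_limited(child, depth - 1, expansions, is_closed)
        if found is not None:
            return found
    return None

def iterative_deepening(initial, expansions, is_closed, max_depth=50):
    """Re-run a depth-first search with an increasing depth bound; this uses only
    polynomial space, at the price of revisiting shallow nodes."""
    for depth in range(max_depth + 1):
        found = depth_limited(initial, depth, expansions, is_closed)
        if found is not None:
            return found
    return None                               # no closed tableau found within the bound
```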
Reducing search.
The size of the search tree depends on the number of (children) tableaux that can be generated from a given (parent) one. Reducing the number of such tableaux therefore reduces the required search.
A way of reducing this number is to disallow the generation of some tableaux based on their internal structure. An example is the condition of regularity: if a branch contains a literal, using an expansion rule that generates the same literal again is useless, because the branch containing two copies of the literal would have the same set of formulae as the original one. This expansion can be disallowed because, if a closed tableau exists, it can be found without it. This restriction is structural because it can be checked by looking only at the structure of the tableau to be expanded.
Different methods for reducing search disallow the generation of some tableaux on the ground that a closed tableau can still be found by expanding the other ones. These restrictions are called global. As an example of a global restriction, one may employ a rule that specifies which of the open branches is to be expanded. As a result, if a tableau has for example two non-closed branches, the rule specifies which one is to be expanded, disallowing the expansion of the second one. This restriction reduces the search space because one possible choice is now forbidden; completeness is however not harmed, as the second branch will still be expanded if the first one is eventually closed. As an example, a tableau with root formula_103, child formula_104, and two leaves formula_81 and formula_105 can be closed in two ways: applying formula_33 first to formula_81 and then to formula_105, or vice versa. There is clearly no need to follow both possibilities; one may consider only the case in which formula_33 is first applied to formula_81 and disregard the case in which it is first applied to formula_105. This is a global restriction because what allows neglecting this second expansion is the presence of the other tableau, where expansion is applied to formula_81 first and formula_105 afterwards.
Clause tableaux.
When applied to sets of clauses (rather than of arbitrary formulae), tableaux methods allow for a number of efficiency improvements. A first-order clause is a formula formula_106 that does not contain free variables and such that each formula_107 is a literal. The universal quantifiers are often omitted for clarity, so that for example formula_108 actually means formula_109. Note that, if taken literally, these two formulae are not the same as for satisfiability: rather, the satisfiability of formula_108 is the same as that of formula_110. That free variables are universally quantified is not a consequence of the definition of first-order satisfiability; it is rather used as an implicit common assumption when dealing with clauses.
The only expansion rules that are applicable to a clause are formula_72 and formula_34; these two rules can be replaced by their combination without losing completeness. In particular, the following rule corresponds to applying in sequence the rules formula_72 and formula_34 of the first-order calculus with unification.
formula_111 where formula_112 is obtained by replacing every variable with a new one in formula_113
When the set to be checked for satisfiability is only composed of clauses, this and the unification rules are sufficient to prove unsatisfiability. In other words, the tableau calculus composed of formula_114 and formula_76 is complete.
Since the clause expansion rule only generates literals and never new clauses, the clauses to which it can be applied are only clauses of the input set. As a result, the clause expansion rule can be further restricted to the case where the clause is in the input set.
formula_111 where formula_112 is obtained by replacing every variable with a new one in formula_113, which is a clause of the input set
Since this rule directly exploits the clauses in the input set, there is no need to initialize the tableau to the chain of the input clauses. The initial tableau can therefore be initialized with the single node labeled formula_115; this label is often omitted as implicit. As a result of this further simplification, every node of the tableau (apart from the root) is labeled with a literal.
A number of optimizations can be used for clause tableaux. These optimizations are aimed at reducing the number of possible tableaux to be explored when searching for a closed tableau, as described in the "Searching for a closed tableau" section above.
Connection tableau.
Connection is a condition over tableau that forbids expanding a branch using clauses that are unrelated to the literals that are already in the branch. Connection can be defined in two ways:
Both conditions apply only to branches consisting of more than just the root. The second definition allows for the use of a clause containing a literal that unifies with the negation of a literal in the branch, while the first further constrains that literal to be in the leaf of the current branch.
If clause expansion is restricted by connectedness (either strong or weak), its application produces a tableau in which a substitution can be applied to one of the new leaves, closing its branch. In particular, this is the leaf containing the literal of the clause that unifies with the negation of a literal in the branch (or the negation of the literal in the parent, in the case of strong connection).
Both conditions of connectedness lead to a complete first-order calculus: if a set of clauses is unsatisfiable, it has a closed connected (strongly or weakly) tableau. Such a closed tableau can be found by searching in the space of tableaux as explained in the "Searching for a closed tableau" section. During this search, connectedness eliminates some possible choices of expansion, thus reducing search. In other words, while the tableau in a node of the tree can in general be expanded in several different ways, connection may allow only a few of them, thus reducing the number of resulting tableaux that need to be further expanded.
This can be seen on the following (propositional) example. The tableau made of a chain formula_116 for the set of clauses formula_117 can in general be expanded using each of the four input clauses, but connection only allows the expansion that uses formula_118. As a result, the tree of tableaux has four leaves in general but only one if connectedness is imposed: connectedness leaves only one tableau to try to expand, instead of the four to consider in general. In spite of this reduction of choices, the completeness theorem implies that a closed tableau can be found if the set is unsatisfiable.
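The following sketch, restricted to the propositional case and using an assumed encoding of literals as strings with '~' for negation, shows how the connection condition filters the input clauses usable for expanding a branch; only the filtering step is modeled, not the full proof search.

```python
def complement(lit):
    """Return the complementary literal: a <-> ~a (propositional case)."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def connected_clauses(branch, clauses, strong=True):
    """Clauses allowed by the connection condition for expanding `branch`.

    strong=True  : the clause must contain the complement of the leaf literal;
    strong=False : the complement of any literal of the branch suffices.
    """
    targets = {complement(branch[-1])} if strong else {complement(l) for l in branch}
    return [c for c in clauses if any(lit in targets for lit in c)]

# The example from the text: branch "true - a" and clauses
# {a, ~a v b, ~c v d, ~b}; only ~a v b is connected to the branch.
clauses = [["a"], ["~a", "b"], ["~c", "d"], ["~b"]]
print(connected_clauses(["a"], clauses))        # [['~a', 'b']]
```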
The connectedness conditions, when applied to the propositional (clausal) case, make the resulting calculus non-confluent. As an example, formula_119 is unsatisfiable, but applying formula_114 to formula_81 generates the chain formula_116, which is not closed and to which no other expansion rule can be applied without violating either strong or weak connectedness. In the case of weak connectedness, confluence holds provided that the clause used for expanding the root is relevant to unsatisfiability, that is, it is contained in a minimally unsatisfiable subset of the set of clauses. Unfortunately, the problem of checking whether a clause meets this condition is itself a hard problem. In spite of non-confluence, a closed tableau can be found using search, as presented in the "Searching for a closed tableau" section above. While search is made necessary, connectedness reduces the possible choices of expansion, thus making search more efficient.
Regular tableaux.
A tableau is regular if no literal occurs twice in the same branch. Enforcing this condition allows for a reduction of the possible choices of tableau expansion, as the clauses that would generate a non-regular tableau cannot be expanded.
These disallowed expansion steps are however useless. If formula_1 is a branch containing a literal formula_120, and formula_121 is a clause whose expansion violates regularity, then formula_121 contains formula_120. In order to close the tableau, one needs to expand and close, among others, the branch formula_122, in which formula_120 occurs twice. However, the formulae in this branch are exactly the same as the formulae of formula_1 alone. As a result, the same expansion steps that close formula_122 also close formula_1. This means that expanding formula_121 was unnecessary; moreover, if formula_121 contained other literals, its expansion generated other leaves that needed to be closed. In the propositional case, the expansions needed to close these leaves are completely useless; in the first-order case, they may only affect the rest of the tableau because of some unifications; these can however be combined with the substitutions used to close the rest of the tableau.
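A minimal sketch of the regularity test, again in the propositional case and with an assumed string encoding of literals, is the following; the first-order case would additionally have to take the current substitution into account.

```python
def is_regular_expansion(branch, clause):
    """Regularity check: expanding `branch` (a list of literals) with `clause`
    is allowed only if none of the clause's literals already occurs in the branch,
    since a duplicated literal would make the resulting branch non-regular."""
    return all(lit not in branch for lit in clause)

# The first expansion is rejected: it would put a second copy of "a" on the branch.
print(is_regular_expansion(["a", "~b"], ["a", "c"]))   # False
print(is_regular_expansion(["a", "~b"], ["b", "c"]))   # True
```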
Tableaux for modal logics.
In a modal logic, a model comprises a set of "possible worlds", each one associated to a truth evaluation; an "accessibility relation" specifies when a world is "accessible" from another one. A modal formula may specify not only conditions over a possible world, but also on the ones that are accessible from it. As an example, formula_123 is true in a world if formula_4 is true in all worlds that are accessible from it.
As for propositional logic, tableaux for modal logics are based on recursively breaking formulae into their basic components. Expanding a modal formula may however require stating conditions over different worlds. As an example, if formula_124 is true in a world, then there exists a world accessible from it where formula_4 is false. However, one cannot simply add the following rule to the propositional ones.
formula_125
In propositional tableaux all formulae refer to the same truth evaluation, but the precondition of the rule above holds in one world while the consequence holds in another. Not taking this into account would generate incorrect results. For example, formula formula_126 states that formula_81 is true in the current world and formula_81 is false in a world that is accessible from it. Simply applying formula_33 and the expansion rule above would produce formula_81 and formula_127, but these two formulae should not in general generate a contradiction, as they hold in different worlds. Modal tableaux calculi do contain rules of the kind of the one above, but include mechanisms to avoid the incorrect interaction of formulae referring to different worlds.
Technically, tableaux for modal logics check the satisfiability of a set of formulae: they check whether there exists a model formula_128 and world formula_129 such that the formulae in the set are true in that model and world. In the example above, while formula_81 states the truth of formula_81 in formula_129, the formula formula_130 states the truth of formula_127 in some world formula_131 that is accessible from formula_129 and which may in general be different from formula_129. Tableaux calculi for modal logic take into account that formulae may refer to different worlds.
This fact has an important consequence: formulae that hold in a world may imply conditions over different successors of that world. Unsatisfiability may then be proved from the subset of formulae referring to a single successor. This holds if a world may have more than one successor, which is true for most modal logics. If this is the case, a formula like formula_132 is true if a successor where formula_3 holds exists and a successor where formula_133 holds exists. Conversely, if one can show unsatisfiability of formula_3 in an arbitrary successor, the formula is proved unsatisfiable without checking for worlds where formula_133 holds. At the same time, if one can show unsatisfiability of formula_133, there is no need to check formula_3. As a result, while there are two possible ways to expand formula_132, one of these two ways is always sufficient to prove unsatisfiability if the formula is unsatisfiable. For example, one may expand the tableau by considering an arbitrary world where formula_3 holds. If this expansion leads to unsatisfiability, the original formula is unsatisfiable. However, it is also possible that unsatisfiability cannot be proved this way, and that the world where formula_133 holds should have been considered instead. As a result, one can always prove unsatisfiability by expanding either formula_124 only or formula_134 only; however, if the wrong choice is made the resulting tableau may not be closed. Expanding either subformula leads to tableau calculi that are complete but not proof-confluent. Searching as described in the "Searching for a closed tableau" section may therefore be necessary.
Depending on whether the precondition and consequence of a tableau expansion rule refer to the same world or not, the rule is called static or transitional. While the rules for propositional connectives are all static, not all rules for modal connectives are transitional: for example, in every modal logic including axiom T, it holds that formula_123 implies formula_4 in the same world. As a result, the corresponding (modal) tableau expansion rule is static, as both its precondition and consequence refer to the same world.
Formula-deleting tableau.
A method for avoiding formulae referring to different worlds interacting in the wrong way is to make sure that all formulae of a branch refer to the same world. This condition is initially true, as all formulae in the set to be checked for consistency are assumed to refer to the same world. When expanding a branch, two situations are possible: either the new formulae refer to the same world as the others in the branch, or not. In the first case, the rule is applied normally. In the second case, all formulae of the branch that do not also hold in the new world are deleted from the branch, and possibly added to all other branches that are still relative to the old world.
As an example, in S5 every formula formula_123 that is true in a world is also true in all accessible worlds (that is, in all accessible worlds both formula_4 and formula_123 are true). Therefore, when applying formula_135, whose consequence holds in a different world, one deletes all formulae from the branch, but can keep all formulae formula_123, as these hold in the new world as well. In order to retain completeness, the deleted formulae are then added to all other branches that still refer to the old world.
World-labeled tableau.
A different mechanism for ensuring the correct interaction between formulae referring to different worlds is to switch from formulae to labeled formulae: instead of writing formula_4, one would write formula_136 to make it explicit that formula_4 holds in world formula_129.
All propositional expansion rules are adapted to this variant by stating that they all refer to formulae with the same world label. For example, formula_137 generates two nodes labeled with formula_136 and formula_138; a branch is closed only if it contains two opposite literals of the same world, like formula_139 and formula_140; no closure is generated if the two world labels are different, like in formula_139 and formula_141.
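A small sketch of the closure test for world-labeled tableaux, with labeled literals encoded as (world, literal) pairs and '~' marking negation (an assumed encoding), is the following.

```python
def negate(lit):
    """Map a to ~a and ~a back to a (propositional literals as strings)."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def branch_closes(labelled_literals):
    """A branch of a world-labelled tableau closes only if it contains two
    opposite literals carrying the *same* world label."""
    seen = set(labelled_literals)
    return any((world, negate(lit)) in seen for (world, lit) in seen)

print(branch_closes({("w", "a"), ("w", "~a")}))    # True: opposite literals, same world
print(branch_closes({("w", "a"), ("w1", "~a")}))   # False: the worlds differ
```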
A modal expansion rule may have a consequence that refers to different worlds. For example, the rule for formula_124 would be written as follows
formula_142
The precondition and consequent of this rule refer to worlds formula_129 and formula_131, respectively. The various calculi use different methods for keeping track of the accessibility of the worlds used as labels. Some include pseudo-formulae like formula_143 to denote that formula_131 is accessible from formula_129. Some others use sequences of integers as world labels, this notation implicitly representing the accessibility relation (for example, formula_144 is accessible from formula_145.)
Set-labeling tableaux.
The problem of interaction between formulae holding in different worlds can be overcome by using set-labeling tableaux. These are trees whose nodes are labeled with sets of formulae; the expansion rules explain how to attach new nodes to a leaf, based only on the label of the leaf (and not on the label of other nodes in the branch).
Tableaux for modal logics are used to verify the satisfiability of a set of modal formulae in a given modal logic. Given a set of formulae formula_146, they check the existence of a model formula_128 and a world formula_129 such that formula_147.
The expansion rules depend on the particular modal logic used. A tableau system for the basic modal logic K can be obtained by adding to the propositional tableau rules the following one:
formula_148
Intuitively, the precondition of this rule expresses the truth of all formulae formula_0 at all accessible worlds, and the truth of formula_133 at some accessible world. The consequence of this rule is the set of formulae that must hold at one of those worlds where formula_133 is true.
More technically, modal tableaux methods check the existence of a model formula_128 and a world formula_129 that make the set of formulae true. If formula_149 are true in formula_129, there must be a world formula_131 that is accessible from formula_129 and that makes formula_150 true. This rule therefore amounts to deriving a set of formulae that must be satisfied in such a world formula_131.
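The following sketch illustrates one application of the rule formula_153 on a node label encoded as a set of strings, with boxed formulae written "box F" and negated boxed ones "~box F" (an assumed toy encoding); it returns the label of the new node, or None when the rule is not applicable, for example because of a formula that is neither boxed nor a negated box.

```python
def apply_K(label):
    """One application of the (K) rule to a node label (a set of formula strings).
    The rule is applicable only when every formula of the label is boxed except
    exactly one negated boxed formula; the result is the label of the new node,
    which refers to an accessible world."""
    boxed  = [f[len("box "):]  for f in label if f.startswith("box ")]
    negbox = [f[len("~box "):] for f in label if f.startswith("~box ")]
    rest   = [f for f in label if not f.startswith(("box ", "~box "))]
    if rest or len(negbox) != 1:
        return None                      # (K) not applicable
    return set(boxed) | {"~" + negbox[0]}

print(apply_K({"box b", "box (b -> c)", "~box c"}))   # {'b', '(b -> c)', '~c'}
print(apply_K({"a", "box b", "~box c"}))              # None: 'a' blocks the rule
```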
While the preconditions formula_149 are assumed satisfied by formula_151, the consequences formula_150 are assumed satisfied in formula_152: same model but possibly different worlds. Set-labeled tableaux do not explicitly keep track of the world where each formula is assumed true: two nodes may or may not refer to the same world. However, the formulae labeling any given node are assumed true at the same world.
As a result of the possibly different worlds where formulae are assumed true, a formula in a node is not automatically valid in all its descendants, as every application of the modal rule corresponds to a move from a world to another one. This condition is automatically captured by set-labeling tableaux, as expansion rules are based only on the leaf where they are applied and not on its ancestors.
Notably, formula_153 does not directly extend to multiple negated boxed formulae such as in formula_154: while there exists an accessible world where formula_155 is false and one in which formula_156 is false, these two worlds are not necessarily the same.
Unlike the propositional rules, formula_153 places constraints on the whole set of formulae of the node it is applied to. For example, it cannot be applied to a node labeled by formula_157; while this set is inconsistent and this could be easily proved by applying formula_153, this rule cannot be applied because of the formula formula_81, which is not even relevant to the inconsistency. Removal of such formulae is made possible by the rule:
formula_158
The addition of this rule (thinning rule) makes the resulting calculus non-confluent: a tableau for an inconsistent set may be impossible to close, even if a closed tableau for the same set exists.
Rule formula_159 is non-deterministic: the set of formulae to be removed (or to be kept) can be chosen arbitrarily; this creates the problem of choosing a set of formulae to discard that is not so large that it makes the resulting set satisfiable and not so small that it leaves the necessary expansion rules inapplicable. Having a large number of possible choices makes the problem of searching for a closed tableau harder.
This non-determinism can be avoided by restricting the usage of formula_159 so that it is only applied before a modal expansion rule, and so that it only removes the formulae that make that other rule inapplicable. This condition can also be formulated by merging the two rules into a single one. The resulting rule produces the same result as the old one, but implicitly discards all formulae that made the old rule inapplicable. This mechanism for removing formula_159 has been proved to preserve completeness for many modal logics.
Axiom T expresses reflexivity of the accessibility relation: every world is accessible from itself. The corresponding tableau expansion rule is:
formula_160
This rule relates conditions over the same world: if formula_161 is true in a world, by reflexivity formula_1 is also true "in the same world". This rule is static, not transitional, as both its precondition and consequent refer to the same world.
This rule copies formula_161 from the precondition to the consequent, in spite of this formula having been "used" to generate formula_1. This is correct, as the considered world is the same, so formula_161 also holds there. This "copying" is necessary in some cases. It is for example necessary to prove the inconsistency of formula_162: the only applicable rules are, in order, formula_163; this sequence is blocked if formula_164 is not copied.
Auxiliary tableaux.
A different method for dealing with formulae holding in alternate worlds is to start a different tableau for each new world that is introduced in the tableau. For example, formula_124 implies that formula_4 is false in an accessible world, so one starts a new tableau rooted at formula_3. This new tableau is attached to the node of the original tableau where the expansion rule has been applied; a closure of this tableau immediately generates a closure of all branches that contain that node, regardless of whether the same node is associated with other auxiliary tableaux. The expansion rules for the auxiliary tableaux are the same as for the original one; therefore, an auxiliary tableau can in turn have other (sub-)auxiliary tableaux.
Global assumptions.
The above modal tableaux establish the consistency of a set of formulae, and can be used for solving the local logical consequence problem. This is the problem of telling whether, for each model formula_128, if formula_4 is true in a world formula_129, then formula_1 is also true in the same world. This is the same as checking whether formula_1 is true in a world of a model, in the assumption that formula_4 is also true in the same world of the same model.
A related problem is the global consequence problem, where the assumption is that a formula (or set of formulae) formula_165 is true in all possible worlds of the model. The problem is that of checking whether, in all models formula_128 where formula_165 is true in all worlds, formula_1 is also true in all worlds.
Local and global assumption differ on models where the assumed formula is true in some worlds but not in others. As an example, formula_166 entails formula_167 globally but not locally. Local entailment does not hold in a model consisting of two worlds making formula_168 and formula_169 true, respectively, and where the second is accessible from the first; in the first world, the assumptions are true but formula_167 is false. This counterexample works because formula_168 can be assumed true in a world and false in another one. If however the same assumption is considered global, formula_170 is not allowed in any world of the model.
These two problems can be combined, so that one can check whether formula_1 is a local consequence of formula_4 under the global assumption formula_165. Tableaux calculi can deal with a global assumption by a rule allowing its addition to every node, regardless of the world it refers to.
Notations.
The following conventions are sometimes used.
Uniform notation.
When writing tableaux expansion rules, formulae are often denoted using a convention, so that for example formula_46 is always considered to be formula_171. The following table provides the notation for formulae in propositional, first-order, and modal logic.
Each label in the first column is taken to stand for any formula of the forms listed in the other columns of its row. An overlined formula such as formula_172 indicates that formula_44 is the negation of whatever formula appears in its place, so that for example in the formula formula_173 the subformula formula_44 is the negation of formula_81.
Since every label indicates many equivalent formulae, this notation allows writing a single rule for all these equivalent formulae. For example, the conjunction expansion rule is formulated as:
formula_174
Signed formulae.
A formula in a tableau is assumed to be true. Signed tableaux allow stating that a formula is false. This is generally achieved by adding a label to each formula, where the label T indicates formulae assumed true and F those assumed false. A different but equivalent notation is to write formulae that are assumed true at the left of the node and formulae assumed false at its right.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A_1,\\ldots,A_n"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "\\{A_1,\\ldots,A_n,\\neg B\\}"
},
{
"math_id": 3,
"text": "\\neg A"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "A \\wedge B"
},
{
"math_id": 6,
"text": "\\wedge"
},
{
"math_id": 7,
"text": "(\\land) \\frac{A \\wedge B}{\\begin{array}{c} A \\\\ B\\end{array}}"
},
{
"math_id": 8,
"text": "\\{A, B\\}"
},
{
"math_id": 9,
"text": "X \\cup \\{A \\wedge B\\}"
},
{
"math_id": 10,
"text": "X \\cup \\{A, B\\}"
},
{
"math_id": 11,
"text": "A \\vee B"
},
{
"math_id": 12,
"text": "\\vee"
},
{
"math_id": 13,
"text": "|"
},
{
"math_id": 14,
"text": "(\\vee) \\frac{A \\vee B}{A\\mid B}"
},
{
"math_id": 15,
"text": "Y \\cup \\{A \\vee B\\}"
},
{
"math_id": 16,
"text": "Y \\cup \\{A\\}"
},
{
"math_id": 17,
"text": "Y \\cup \\{B\\}"
},
{
"math_id": 18,
"text": "\\neg (A \\wedge B)"
},
{
"math_id": 19,
"text": "\\neg A \\vee \\neg B"
},
{
"math_id": 20,
"text": "\\neg \\neg A"
},
{
"math_id": 21,
"text": "\\neg \\neg (A \\wedge B) )"
},
{
"math_id": 22,
"text": "(\\neg 1) \\frac{A}{\\neg \\neg A}"
},
{
"math_id": 23,
"text": "(\\neg 2) \\frac{\\neg \\neg A}{A}"
},
{
"math_id": 24,
"text": "(\\wedge) \\frac{X \\cup \\{A \\wedge B\\}}{X \\cup \\{A, B\\}}"
},
{
"math_id": 25,
"text": "X \\cup \\{A \\vee B\\}"
},
{
"math_id": 26,
"text": "X \\cup \\{A\\}"
},
{
"math_id": 27,
"text": "X \\cup \\{B\\}"
},
{
"math_id": 28,
"text": "(\\vee) \\frac{X \\cup \\{A \\vee B\\}}{X \\cup \\{A\\}|X \\cup \\{B\\}}"
},
{
"math_id": 29,
"text": "(id) \\frac{X \\cup \\{p, \\neg p\\}}{closed}"
},
{
"math_id": 30,
"text": "X = \\{r \\wedge \\neg r,\\; p \\wedge ((\\neg p \\vee q) \\wedge \\neg q)\\}"
},
{
"math_id": 31,
"text": "\n\\dfrac{\\quad\n \\dfrac{\n \\quad\n r \\wedge \\neg r,\\; p \\wedge ((\\neg p \\vee q) \\wedge \\neg q)\n \\quad\n }{\n r,\\; \\neg r,\\; p \\wedge ((\\neg p \\vee q) \\wedge \\neg q)\n }(\\wedge)\n}{\nclosed\n} (\\wedge)\n"
},
{
"math_id": 32,
"text": "\n\\dfrac{\n \\quad\n \\dfrac {\n \\quad\n \\dfrac\n {\n \\quad\n r \\wedge \\neg r,\\; p \\wedge ((\\neg p \\vee q) \\wedge \\neg q)\n \\quad\n }{\n r \\wedge \\neg r,\\; p,\\; ((\\neg p \\vee q) \\wedge \\neg q)\n } (\\wedge)\n \\quad\n }{\n r \\wedge \\neg r,\\; p,\\; (\\neg p \\vee q),\\; \\neg q\n } (\\wedge)\n}{\n \\quad\n \\dfrac {\n \\quad\n r \\wedge \\neg r,\\; p,\\; \\neg p,\\; \\neg q\n \\quad\n }{\n closed\n } (id)\n \\quad\\quad\n \\dfrac {\n \\quad\n r \\wedge \\neg r,\\; p,\\; q,\\; \\neg q\n \\quad\n }{\n closed\n } (id)\n} (\\vee)\n"
},
{
"math_id": 33,
"text": "(\\wedge)"
},
{
"math_id": 34,
"text": "(\\vee)"
},
{
"math_id": 35,
"text": "(id)"
},
{
"math_id": 36,
"text": "X'"
},
{
"math_id": 37,
"text": "X = \\{\\neg A\\}"
},
{
"math_id": 38,
"text": "\\{\\neg A\\}"
},
{
"math_id": 39,
"text": "\n\\begin{array}{lcl}\n\\neg (A \\land B) & = & \\neg A \\lor \\neg B \\\\\n\\neg (A \\lor B) & = & \\neg A \\land \\neg B \\\\\n\\neg (\\neg A) & = & A \\\\\n\\neg (A \\Rightarrow B) & = & A \\land \\neg B \\\\\nA \\Rightarrow B & = & \\neg A \\lor B \\\\\nA \\Leftrightarrow B & = & (A \\land B) \\lor (\\neg A \\land \\neg B) \\\\\n\\neg (A \\Leftrightarrow B) & = & (A \\land \\neg B) \\lor (\\neg A \\land B)\n\\end{array}\n"
},
{
"math_id": 40,
"text": "\\alpha "
},
{
"math_id": 41,
"text": "\\beta "
},
{
"math_id": 42,
"text": "\\alpha_1, \\alpha_2"
},
{
"math_id": 43,
"text": "\\beta_1, \\beta_2"
},
{
"math_id": 44,
"text": "\\alpha_1"
},
{
"math_id": 45,
"text": "\\alpha_2"
},
{
"math_id": 46,
"text": "\\alpha"
},
{
"math_id": 47,
"text": "\\beta_1"
},
{
"math_id": 48,
"text": "\\beta_2"
},
{
"math_id": 49,
"text": "\\beta"
},
{
"math_id": 50,
"text": "\\begin{array}{c|c|c} \\alpha & \\alpha_1 & \\alpha_2 \\\\ \\hline X \\wedge Y&X&Y\n\\\\ \\neg(X \\vee Y)&\\neg X&\\neg Y\\\\ \\neg(X \\implies Y)&X&\\neg Y\\\\ \\neg\\neg X&X&X\\\\ \\end{array}"
},
{
"math_id": 51,
"text": "\\begin{array}{c|c|c} \\beta & \\beta_1 & \\beta_2 \\\\ \\hline \\neg(X \\wedge Y)&\\neg X&\\neg Y \n\\\\ X \\vee Y&X&Y\\\\ X \\implies Y&\\neg X&Y\\\\ \\neg\\neg X&X&X\\\\ \\end{array}"
},
{
"math_id": 52,
"text": "\\begin{array}{c|c|c} \\alpha & \\alpha_1 & \\alpha_2 \\\\ \\hline T\\,X \\wedge Y&T\\,\\,X&T\\,\\,Y\\\\ \nF\\,\\,X \\vee Y&F\\,\\,X&F\\,\\,Y\\\\ F\\,\\,X \\implies Y&T\\,\\,X&F\\,\\,Y\\\\ F\\,\\,\\neg X&T\\,\\,X&T\\,\\,Y\\\\ \nT\\,\\,\\neg X&F\\,\\,X&F\\,\\,Y\\end{array}\n\\qquad \\begin{array}{c|c|c} \\beta & \\beta_1 & \\beta_2 \\\\ \n\\hline F\\,\\,X \\wedge Y&F\\,\\,Y&F\\,\\,X\\\\ T\\,\\,X \\vee Y&T\\,\\,X&T\\,\\,Y\n\\\\ T\\,\\,X \\implies Y&F\\,\\,X&T\\,\\,Y\\\\ F\\,\\,\\neg X&T\\,\\,X&T\\,\\,X\\\\ \nT\\,\\,\\neg X&F\\,\\,X&F\\,\\,Y\\end{array}"
},
{
"math_id": 53,
"text": "\\alpha_1,\\alpha_2"
},
{
"math_id": 54,
"text": "\\theta "
},
{
"math_id": 55,
"text": "\\forall x . \\gamma(x)"
},
{
"math_id": 56,
"text": "\\gamma(t)"
},
{
"math_id": 57,
"text": "t"
},
{
"math_id": 58,
"text": "(\\forall) \\frac{\\forall x . \\gamma(x)}{\\gamma(t)}"
},
{
"math_id": 59,
"text": "\\{\\neg P(a) \\vee \\neg P(b), \\forall x . P(x)\\}"
},
{
"math_id": 60,
"text": "P(a)"
},
{
"math_id": 61,
"text": "P(b)"
},
{
"math_id": 62,
"text": "\\forall x . P(x)"
},
{
"math_id": 63,
"text": "\\exists x . \\delta(x)"
},
{
"math_id": 64,
"text": "\\delta(c)"
},
{
"math_id": 65,
"text": "c"
},
{
"math_id": 66,
"text": "(\\exists) \\frac{\\exists x . \\delta(x)}{\\delta(c)}"
},
{
"math_id": 67,
"text": "x"
},
{
"math_id": 68,
"text": "\\forall y . \\gamma(y)"
},
{
"math_id": 69,
"text": "\\gamma(c)"
},
{
"math_id": 70,
"text": "\\{\\neg P(f(c)), \\forall x . P(x)\\}"
},
{
"math_id": 71,
"text": "P(c), P(f(c)), P(f(f(c))), \\ldots"
},
{
"math_id": 72,
"text": "(\\forall)"
},
{
"math_id": 73,
"text": "\\gamma(x')"
},
{
"math_id": 74,
"text": "x'"
},
{
"math_id": 75,
"text": "(\\forall) \\frac{\\forall x . \\gamma(x)}{\\gamma(x')}"
},
{
"math_id": 76,
"text": "(\\sigma)"
},
{
"math_id": 77,
"text": "\\{\\neg P(a), \\forall x . P(x)\\}"
},
{
"math_id": 78,
"text": "P(x_1)"
},
{
"math_id": 79,
"text": "\\neg P(a)"
},
{
"math_id": 80,
"text": "x_1"
},
{
"math_id": 81,
"text": "a"
},
{
"math_id": 82,
"text": "(\\exists) \\frac{\\exists x . \\delta(x)}{\\delta(f(x_1,\\ldots,x_n))}"
},
{
"math_id": 83,
"text": "f"
},
{
"math_id": 84,
"text": "x_1,\\ldots,x_n"
},
{
"math_id": 85,
"text": "\\delta"
},
{
"math_id": 86,
"text": "F"
},
{
"math_id": 87,
"text": "\\forall x_1,\\ldots,x_n . F"
},
{
"math_id": 88,
"text": "\\gamma"
},
{
"math_id": 89,
"text": "A(x)"
},
{
"math_id": 90,
"text": "B(x)"
},
{
"math_id": 91,
"text": "\\forall x . (...A(x)...B(x)...)"
},
{
"math_id": 92,
"text": "(...A(x)...B(x)...)"
},
{
"math_id": 93,
"text": "(...A(t)...A(t')...)"
},
{
"math_id": 94,
"text": "t'"
},
{
"math_id": 95,
"text": "P(x) \\rightarrow P(c)"
},
{
"math_id": 96,
"text": "D=\\{1,2\\}, P(1)=\\bot, P(2)=\\top, c=1"
},
{
"math_id": 97,
"text": "x=2"
},
{
"math_id": 98,
"text": "\\{P(x),\\neg P(c)\\}"
},
{
"math_id": 99,
"text": "P(x)"
},
{
"math_id": 100,
"text": "\\neg P(c)"
},
{
"math_id": 101,
"text": "\\forall x . (P(x) \\rightarrow P(c))"
},
{
"math_id": 102,
"text": "\\{P(f(c)), R(c), \\neg P(f(c)) \\vee \\neg R(c), \\forall x . Q(x)\\}"
},
{
"math_id": 103,
"text": "\\neg a \\wedge \\neg b"
},
{
"math_id": 104,
"text": "a \\vee b"
},
{
"math_id": 105,
"text": "b"
},
{
"math_id": 106,
"text": "\\forall x_1,\\ldots,x_n L_1 \\vee \\cdots \\vee L_m"
},
{
"math_id": 107,
"text": "L_i"
},
{
"math_id": 108,
"text": "P(x,y) \\vee Q(f(x))"
},
{
"math_id": 109,
"text": "\\forall x,y . P(x,y) \\vee Q(f(x))"
},
{
"math_id": 110,
"text": "\\exists x,y . P(x,y) \\vee Q(f(x))"
},
{
"math_id": 111,
"text": "(C) \\frac{L_1 \\vee \\cdots \\vee L_n}{L_1'|\\cdots|L_n'}"
},
{
"math_id": 112,
"text": "L_1' \\vee \\cdots \\vee L_n'"
},
{
"math_id": 113,
"text": "L_1 \\vee \\cdots \\vee L_n"
},
{
"math_id": 114,
"text": "(C)"
},
{
"math_id": 115,
"text": "true"
},
{
"math_id": 116,
"text": "true - a"
},
{
"math_id": 117,
"text": "\\{a, \\neg a \\vee b, \\neg c \\vee d, \\neg b\\}"
},
{
"math_id": 118,
"text": "\\neg a \\vee b"
},
{
"math_id": 119,
"text": "\\{a, b, \\neg b\\}"
},
{
"math_id": 120,
"text": "L"
},
{
"math_id": 121,
"text": "C"
},
{
"math_id": 122,
"text": "B - L"
},
{
"math_id": 123,
"text": "\\Box A"
},
{
"math_id": 124,
"text": "\\neg \\Box A"
},
{
"math_id": 125,
"text": "\\frac{\\neg \\Box A}{\\neg A}"
},
{
"math_id": 126,
"text": "a \\wedge \\neg \\Box a"
},
{
"math_id": 127,
"text": "\\neg a"
},
{
"math_id": 128,
"text": "M"
},
{
"math_id": 129,
"text": "w"
},
{
"math_id": 130,
"text": "\\neg \\Box a"
},
{
"math_id": 131,
"text": "w'"
},
{
"math_id": 132,
"text": "\\neg \\Box A \\wedge \\neg \\Box B"
},
{
"math_id": 133,
"text": "\\neg B"
},
{
"math_id": 134,
"text": "\\neg \\Box B"
},
{
"math_id": 135,
"text": "\\frac{\\neg \\Box B}{\\neg B}"
},
{
"math_id": 136,
"text": "w:A"
},
{
"math_id": 137,
"text": "w:A \\wedge B"
},
{
"math_id": 138,
"text": "w:B"
},
{
"math_id": 139,
"text": "w:a"
},
{
"math_id": 140,
"text": "w:\\neg a"
},
{
"math_id": 141,
"text": "w':\\neg a"
},
{
"math_id": 142,
"text": "\\frac{w:\\neg \\Box A}{w':\\neg A}"
},
{
"math_id": 143,
"text": "wRw'"
},
{
"math_id": 144,
"text": "(1,4,2,3)"
},
{
"math_id": 145,
"text": "(1,4,2)"
},
{
"math_id": 146,
"text": "S"
},
{
"math_id": 147,
"text": "M,w \\models S"
},
{
"math_id": 148,
"text": "(K) \\frac{\\Box A_1; \\ldots ; \\Box A_n ; \\neg \\Box B}{A_1; \\ldots ; A_n ; \\neg B}"
},
{
"math_id": 149,
"text": "\\Box A_1; \\ldots ; \\Box A_n ; \\neg \\Box B"
},
{
"math_id": 150,
"text": "A_1; \\ldots ; A_n ; \\neg B"
},
{
"math_id": 151,
"text": "M,w"
},
{
"math_id": 152,
"text": "M,w'"
},
{
"math_id": 153,
"text": "(K)"
},
{
"math_id": 154,
"text": "\\Box A_1; \\ldots; \\Box A_n; \\neg \\Box B_1; \\neg \\Box B_2"
},
{
"math_id": 155,
"text": "B_1"
},
{
"math_id": 156,
"text": "B_2"
},
{
"math_id": 157,
"text": "a; \\Box b; \\Box (b \\rightarrow c); \\neg \\Box c"
},
{
"math_id": 158,
"text": "(\\theta) \\frac{A_1;\\ldots;A_n;B_1;\\ldots;B_m}{A_1;\\ldots;A_n}"
},
{
"math_id": 159,
"text": "(\\theta)"
},
{
"math_id": 160,
"text": "(T) \\frac{A_1;\\ldots;A_n;\\Box B}{A_1;\\ldots;A_n; \\Box B; B}"
},
{
"math_id": 161,
"text": "\\Box B"
},
{
"math_id": 162,
"text": "\\Box(a \\wedge \\neg \\Box a)"
},
{
"math_id": 163,
"text": "(T), (\\wedge), (\\theta), (K)"
},
{
"math_id": 164,
"text": "\\Box a"
},
{
"math_id": 165,
"text": "G"
},
{
"math_id": 166,
"text": "\\{P, \\neg \\Box (P \\wedge Q)\\}"
},
{
"math_id": 167,
"text": "\\neg \\Box Q"
},
{
"math_id": 168,
"text": "P"
},
{
"math_id": 169,
"text": "\\neg P, Q"
},
{
"math_id": 170,
"text": "\\neg P"
},
{
"math_id": 171,
"text": "\\alpha_1 \\wedge \\alpha_2"
},
{
"math_id": 172,
"text": "\\overline{\\alpha_1}"
},
{
"math_id": 173,
"text": "\\neg (a \\vee b)"
},
{
"math_id": 174,
"text": "(\\alpha) \\frac{\\alpha}{\\begin{array}{c}\\alpha_1\\\\ \\alpha_2\\end{array}}"
}
] | https://en.wikipedia.org/wiki?curid=1027229 |
10273855 | Positive-definite kernel | Generalization of a positive-definite matrix
In operator theory, a branch of mathematics, a positive-definite kernel is a generalization of a positive-definite function or a positive-definite matrix. It was first introduced by James Mercer in the early 20th century, in the context of solving integral operator equations. Since then, positive-definite functions and their various analogues and generalizations have arisen in diverse parts of mathematics. They occur naturally in Fourier analysis, probability theory, operator theory, complex function-theory, moment problems, integral equations, boundary-value problems for partial differential equations, machine learning, embedding problem, information theory, and other areas.
Definition.
Let formula_0 be a nonempty set, sometimes referred to as the index set. A symmetric function formula_1 is called a positive-definite (p.d.) kernel on formula_2 if
holds for all formula_3, formula_4.
In probability theory, a distinction is sometimes made between positive-definite kernels, for which equality in (1.1) implies formula_5, and positive semi-definite (p.s.d.) kernels, which do not impose this condition. Note that this is equivalent to requiring that every finite matrix constructed by pairwise evaluation, formula_6, has either entirely positive (p.d.) or nonnegative (p.s.d.) eigenvalues.
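On any particular finite sample the condition can be checked numerically: the Gram matrix obtained by pairwise evaluation must have no negative eigenvalues (checking one sample is of course only a necessary condition for the kernel as a whole). The following is a minimal sketch in Python/NumPy, using the Gaussian kernel as an example; the function names are illustrative.

```python
import numpy as np

def gram_matrix(kernel, points):
    """Matrix K_ij = K(x_i, x_j) obtained by pairwise evaluation of the kernel."""
    return np.array([[kernel(x, y) for y in points] for x in points])

def is_positive_semidefinite(kernel, points, tol=1e-10):
    """Check the p.s.d. condition on a finite sample: all eigenvalues of the
    (symmetric) Gram matrix must be nonnegative."""
    eigenvalues = np.linalg.eigvalsh(gram_matrix(kernel, points))
    return bool(np.all(eigenvalues >= -tol))

# Example: the Gaussian kernel on a few real points.
gaussian = lambda x, y, sigma=1.0: np.exp(-np.abs(x - y) ** 2 / (2 * sigma ** 2))
print(is_positive_semidefinite(gaussian, [0.0, 0.3, 1.5, 2.0]))   # True
```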
In mathematical literature, kernels are usually complex-valued functions. That is, a complex-valued function formula_7 is called a Hermitian kernel if formula_8 and positive definite if for every finite set of points formula_3 and any complex numbers formula_9,
formula_10
where formula_11 denotes the complex conjugate. In the rest of this article we assume real-valued functions, which is the common practice in applications of p.d. kernels.
History.
Positive-definite kernels, as defined in (1.1), appeared first in 1909 in a paper on integral equations by James Mercer. Several other authors made use of this concept in the following two decades, but none of them explicitly used kernels formula_46, i.e. p.d. functions (indeed M. Mathias and S. Bochner seem not to have been aware of the study of p.d. kernels). Mercer’s work arose from Hilbert’s paper of 1904 on Fredholm integral equations of the second kind:
In particular, Hilbert had shown that
where formula_47 is a continuous real symmetric kernel, formula_48 is continuous, formula_49 is a complete system of orthonormal eigenfunctions, and formula_50’s are the corresponding eigenvalues of (1.2). Hilbert defined a “definite” kernel as one for which the double integral
formula_51
satisfies formula_52 except for formula_53. The original object of Mercer’s paper was to characterize the kernels which are definite in the sense of Hilbert, but Mercer soon found that the class of such functions was too restrictive to characterize in terms of determinants. He therefore defined a continuous real symmetric kernel formula_54 to be of positive type (i.e. positive-definite) if formula_55 for all real continuous functions formula_48 on formula_56, and he proved that (1.1) is a necessary and sufficient condition for a kernel to be of positive type. Mercer then proved that for any continuous p.d. kernel the expansion
formula_57
holds absolutely and uniformly.
At about the same time W. H. Young, motivated by a different question in the theory of integral equations, showed that for continuous kernels condition (1.1) is equivalent to formula_55 for all formula_58.
E.H. Moore initiated the study of a very general kind of p.d. kernel. If formula_59 is an abstract set, he calls functions formula_60 defined on formula_61 “positive Hermitian matrices” if they satisfy (1.1) for all formula_62. Moore was interested in generalization of integral equations and showed that to each such formula_47 there is a Hilbert space formula_37 of functions such that, for each formula_63. This property is called the reproducing property of the kernel and turns out to have importance in the solution of boundary-value problems for elliptic partial differential equations.
Another line of development in which p.d. kernels played a large role was the theory of harmonics on homogeneous spaces as begun by E. Cartan in 1929, and continued by H. Weyl and S. Ito. The most comprehensive theory of p.d. kernels in homogeneous spaces is that of M. Krein which includes as special cases the work on p.d. functions and irreducible unitary representations of locally compact groups.
In probability theory, p.d. kernels arise as covariance kernels of stochastic processes.
Connection with reproducing kernel Hilbert spaces and feature maps.
Positive-definite kernels provide a framework that encompasses some basic Hilbert space constructions. In the following we present a tight relationship between positive-definite kernels and two mathematical objects, namely reproducing kernel Hilbert spaces and feature maps.
Let formula_64 be a set, formula_37 a Hilbert space of functions formula_65, and formula_66 the corresponding inner product on formula_37. For any formula_67 the evaluation functional formula_68 is defined by formula_69.
We first define a reproducing kernel Hilbert space (RKHS):
Definition: Space formula_37 is called a reproducing kernel Hilbert space if the evaluation functionals are continuous.
Every RKHS has a special function associated to it, namely the reproducing kernel:
Definition: A reproducing kernel is a function formula_70 such that
formula_71, and
formula_72 for every formula_73 and every formula_67.
The latter property is called the reproducing property.
The following result shows equivalence between RKHS and reproducing kernels:
<templatestyles src="Math_theorem/styles.css" />
Theorem — Every reproducing kernel formula_47 induces a unique RKHS, and every RKHS has a unique reproducing kernel.
Now the connection between positive definite kernels and RKHS is given by the following theorem
<templatestyles src="Math_theorem/styles.css" />
Theorem — Every reproducing kernel is positive-definite, and every positive definite kernel defines a unique RKHS, of which it is the unique reproducing kernel.
Thus, given a positive-definite kernel formula_47, it is possible to build an associated RKHS with formula_47 as a reproducing kernel.
As stated earlier, positive definite kernels can be constructed from inner products. This fact can be used to connect p.d. kernels with another interesting object that arises in machine learning applications, namely the feature map. Let formula_74 be a Hilbert space, and formula_75 the corresponding inner product. Any map formula_76 is called a feature map. In this case we call formula_74 the feature space. It is easy to see that every feature map defines a unique p.d. kernel by
formula_77
Indeed, positive definiteness of formula_47 follows from the p.d. property of the inner product. On the other hand, every p.d. kernel, and its corresponding RKHS, have many associated feature maps. For example: Let formula_78, and formula_79 for all formula_67. Then formula_80, by the reproducing property.
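The following sketch illustrates this construction of a kernel from a feature map. The feature map used here is a hypothetical one for inputs in the plane, chosen so that the induced kernel coincides with the inhomogeneous quadratic polynomial kernel; it is only meant as an example of the general recipe formula_77.

```python
import numpy as np

def kernel_from_feature_map(phi):
    """Turn a feature map phi : X -> F into the kernel K(x, y) = <phi(x), phi(y)>_F."""
    return lambda x, y: float(np.dot(phi(x), phi(y)))

def phi(v):
    """A hypothetical feature map on R^2 whose induced kernel equals (x.y + 1)^2."""
    x1, x2 = v
    return np.array([x1 * x1, x2 * x2,
                     np.sqrt(2) * x1 * x2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     1.0])

K = kernel_from_feature_map(phi)
x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(K(x, y), (np.dot(x, y) + 1) ** 2)   # both equal 0.25
```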
This suggests a new look at p.d. kernels as inner products in appropriate Hilbert spaces, or in other words p.d. kernels can be viewed as similarity maps which quantify effectively how similar two points formula_48 and formula_81 are through the value formula_60. Moreover, through the equivalence of p.d. kernels and its corresponding RKHS, every feature map can be used to construct a RKHS.
Kernels and distances.
Kernel methods are often compared to distance based methods such as nearest neighbors. In this section we discuss parallels between their two respective ingredients, namely kernels formula_47 and distances formula_82.
Here by a distance function between each pair of elements of some set formula_64, we mean a metric defined on that set, i.e. any nonnegative-valued function formula_82 on formula_83 which satisfies
formula_84,
formula_85 if and only if formula_86,
formula_87
formula_88
One link between distances and p.d. kernels is given by a particular kind of kernel, called a negative definite kernel, and defined as follows
Definition: A symmetric function formula_89 is called a negative definite (n.d.) kernel on formula_2 if
holds for any formula_90 and formula_91 such that formula_92.
The parallel between n.d. kernels and distances is the following: whenever a n.d. kernel vanishes on the set formula_93, and is zero only on this set, then its square root is a distance for formula_2. At the same time, not every distance corresponds to a n.d. kernel; this holds only for Hilbertian distances, where a distance formula_82 is called Hilbertian if one can embed the metric space formula_94 isometrically into some Hilbert space.
On the other hand, n.d. kernels can be identified with a subfamily of p.d. kernels known as infinitely divisible kernels. A nonnegative-valued kernel formula_47 is said to be infinitely divisible if for every formula_95 there exists a positive-definite kernel formula_96 such that formula_97.
Another link is that a p.d. kernel induces a pseudometric, where the first constraint on the distance function is loosened to allow formula_98 for formula_99. Given a positive-definite kernel formula_100, we can define a distance function as:
formula_101
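A short sketch of this induced (pseudo)metric follows; with the linear kernel it reduces to the ordinary Euclidean distance, which the example verifies numerically.

```python
import numpy as np

def kernel_distance(K, x, y):
    """Distance induced by a positive-definite kernel:
    d(x, y) = sqrt(K(x,x) - 2 K(x,y) + K(y,y))."""
    return float(np.sqrt(max(K(x, x) - 2.0 * K(x, y) + K(y, y), 0.0)))

linear = lambda x, y: float(np.dot(x, y))
x, y = np.array([1.0, 2.0]), np.array([4.0, 6.0])
print(kernel_distance(linear, x, y))          # 5.0
print(float(np.linalg.norm(x - y)))           # 5.0, the Euclidean distance
```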
Some applications.
Kernels in machine learning.
Positive-definite kernels, through their equivalence with reproducing kernel Hilbert spaces (RKHS), are particularly important in the field of statistical learning theory because of the celebrated representer theorem which states that every minimizer function in an RKHS can be written as a linear combination of the kernel function evaluated at the training points. This is a practically useful result as it effectively simplifies the empirical risk minimization problem from an infinite dimensional to a finite dimensional optimization problem.
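As a hedged illustration of how the representer theorem reduces the problem to a finite-dimensional one, the following sketch fits a kernel ridge regressor: the minimizer is sought only among combinations of the kernel function evaluated at the training points, so computing it amounts to solving an n × n linear system. The regularization constant and the kernel are arbitrary choices made for the example.

```python
import numpy as np

def fit_kernel_ridge(X, y, kernel, lam=1e-2):
    """Return coefficients c such that f(x) = sum_i c_i K(x, x_i) minimizes the
    regularized empirical risk; by the representer theorem it suffices to solve
    the n x n linear system (K + lam * n * I) c = y."""
    n = len(X)
    K = np.array([[kernel(a, b) for b in X] for a in X])
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def predict(X_train, c, kernel, x):
    return float(sum(ci * kernel(x, xi) for ci, xi in zip(c, X_train)))

rbf = lambda a, b: np.exp(-np.abs(a - b) ** 2)
X_train = np.array([0.0, 1.0, 2.0, 3.0])
y_train = np.sin(X_train)
c = fit_kernel_ridge(X_train, y_train, rbf)
print(predict(X_train, c, rbf, 1.5))   # roughly sin(1.5)
```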
Kernels in probabilistic models.
There are several different ways in which kernels arise in probability theory.
Assume now that a noise variable formula_113, with zero mean and variance formula_114, is added to formula_48, such that the noise is independent for different formula_48 and independent of formula_115 there, then the problem of finding a good estimate for formula_103 is identical to the above one, but with a modified kernel given by formula_116.
Numerical solution of partial differential equations.
One of the greatest application areas of so-called meshfree methods is in the numerical solution of PDEs. Some of the popular meshfree methods are closely related to positive-definite kernels (such as meshless local Petrov Galerkin (MLPG), Reproducing kernel particle method (RKPM) and smoothed-particle hydrodynamics (SPH)). These methods use radial basis kernel for collocation.
Other applications.
In the literature on computer experiments and other engineering experiments, one increasingly encounters models based on p.d. kernels, RBFs or kriging. One such topic is response surface methodology. Other types of applications that boil down to data fitting are rapid prototyping and computer graphics. Here one often uses implicit surface models to approximate or interpolate point cloud data.
Applications of p.d. kernels in various other branches of mathematics are in multivariate integration, multivariate optimization, and in numerical analysis and scientific computing, where one studies fast, accurate and adaptive algorithms ideally implemented in high-performance computing environments.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathcal X "
},
{
"math_id": 1,
"text": " K: \\mathcal X \\times \\mathcal X \\to \\mathbb{R}"
},
{
"math_id": 2,
"text": "\\mathcal X"
},
{
"math_id": 3,
"text": "x_1, \\dots, x_n\\in \\mathcal X"
},
{
"math_id": 4,
"text": "n\\in \\mathbb{N}, c_1, \\dots, c_n \\in \\mathbb{R}"
},
{
"math_id": 5,
"text": "c_i = 0\\; (\\forall i)"
},
{
"math_id": 6,
"text": "\\mathbf{K}_{ij} = K(x_i, x_j)"
},
{
"math_id": 7,
"text": "K:\\mathcal{X}\\times \\mathcal{X} \\to \\mathbb{C}"
},
{
"math_id": 8,
"text": "K(x,y)=\\overline{K(y,x)}"
},
{
"math_id": 9,
"text": "\\xi_1, \\dots, \\xi_n \\in \\mathbb{C}"
},
{
"math_id": 10,
"text": "\\sum_{i=1}^n \\sum_{j=1}^n \\xi_i \\overline{\\xi}_j K(x_i, x_j) \\ge 0"
},
{
"math_id": 11,
"text": "\\overline{\\xi}_j"
},
{
"math_id": 12,
"text": " (K_i)_{i\\in\\mathbb{N}},\\ \\ K_i:\\mathcal X\\times \\mathcal X\\to \\mathbb{R}"
},
{
"math_id": 13,
"text": " \\sum_{i=1}^n \\lambda_i K_i"
},
{
"math_id": 14,
"text": " \\lambda_1,\\dots,\\lambda_n \\ge 0"
},
{
"math_id": 15,
"text": " K_1^{a_1}\\dots K_n^{a_n}"
},
{
"math_id": 16,
"text": " a_1,\\dots,a_n\\in \\mathbb{N}"
},
{
"math_id": 17,
"text": " K = \\lim_{n\\to\\infty}K_n"
},
{
"math_id": 18,
"text": " (\\mathcal X_i)_{i=1}^n "
},
{
"math_id": 19,
"text": " (K_i)_{i=1}^n,\\ \\ K_i:\\mathcal X_i\\times \\mathcal X_i\\to \\mathbb{R}"
},
{
"math_id": 20,
"text": " K((x_1,\\dots,x_n),(y_1,\\dots,y_n)) =\\prod_{i=1}^n K_i(x_i,y_i) "
},
{
"math_id": 21,
"text": " K((x_1,\\dots,x_n),(y_1,\\dots,y_n)) =\\sum_{i=1}^n K_i(x_i,y_i) "
},
{
"math_id": 22,
"text": " \\mathcal X = \\mathcal X_1 \\times \\dots \\times \\mathcal X_n"
},
{
"math_id": 23,
"text": " \\mathcal X_0\\subset \\mathcal X"
},
{
"math_id": 24,
"text": " K_0"
},
{
"math_id": 25,
"text": " K"
},
{
"math_id": 26,
"text": " \\mathcal X_0\\times \\mathcal X_0"
},
{
"math_id": 27,
"text": "\\mathbb{R}^d"
},
{
"math_id": 28,
"text": " K(\\mathbf{x}, \\mathbf{y}) = \\mathbf{x}^T\\mathbf{y}, \\quad \\mathbf{x},\\mathbf{y} \\in\\mathbb{R}^d"
},
{
"math_id": 29,
"text": " K(\\mathbf{x}, \\mathbf{y}) =(\\mathbf{x}^T\\mathbf{y}+r)^n, \\quad \\mathbf{x},\\mathbf{y} \\in\\mathbb{R}^d, r \\ge 0, n \\ge 1 "
},
{
"math_id": 30,
"text": " K(\\mathbf{x}, \\mathbf{y}) =e^{-\\frac{\\|\\mathbf{x} - \\mathbf{y}\\|^2}{2\\sigma^2}}, \\quad \\mathbf{x}, \\mathbf{y}\\in\\mathbb{R}^d, \\sigma>0"
},
{
"math_id": 31,
"text": " K(\\mathbf{x}, \\mathbf{y}) =e^{-\\alpha\\|\\mathbf{x} - \\mathbf{y}\\|}, \\quad \\mathbf{x}, \\mathbf{y}\\in\\mathbb{R}^d, \\alpha>0"
},
{
"math_id": 32,
"text": " K(x,y) =e^{-\\alpha|x-y|}, \\quad x,y \\in\\mathbb{R}, \\alpha>0"
},
{
"math_id": 33,
"text": "W^k_2(\\mathbb{R}^d)"
},
{
"math_id": 34,
"text": " K(x,y) = \\|x-y\\|_2^{k-\\frac d2}B_{k-\\frac d2}(\\|x-y\\|_2)"
},
{
"math_id": 35,
"text": "B_\\nu"
},
{
"math_id": 36,
"text": " K(x,y) = \\operatorname{sinc}(\\alpha(x-y)), \\quad x,y\\in\\mathbb{R}, \\alpha>0"
},
{
"math_id": 37,
"text": "H"
},
{
"math_id": 38,
"text": "(\\cdot,\\cdot)_H:H\\times H\\to \\mathbb{R}"
},
{
"math_id": 39,
"text": " \\sum_{i,j=1}^n c_i c_j (x_i, x_j)_H = \\left(\\sum_{i=1}^n c_i x_i, \\sum_{j=1}^n c_j x_j\\right)_H= \\left\\|\\sum_{i=1}^n c_i x_i\\right\\|_H^2\\ge 0 "
},
{
"math_id": 40,
"text": "\\mathbb{R}_+^d"
},
{
"math_id": 41,
"text": "\\chi"
},
{
"math_id": 42,
"text": " \\psi_{JD}= H\\left(\\frac{\\theta+\\theta'}2\\right)-\\frac{H(\\theta)+H(\\theta')}2,"
},
{
"math_id": 43,
"text": " \\psi_{\\chi^2}= \\sum_i\\frac{(\\theta_i-\\theta_i')^2}{\\theta_i+\\theta_i'},\\quad \\psi_{TV}= \\sum_i \\left|\\theta_i - \\theta_i'\\right|,"
},
{
"math_id": 44,
"text": " \\psi_{H_1}= \\sum_i \\left|\\sqrt{\\theta_i}-\\sqrt{\\theta_i'}\\right|,\\psi_{H_2}= \\sum_i \\left|\\sqrt{\\theta_i} - \\sqrt{\\theta_i'}\\right|^2,"
},
{
"math_id": 45,
"text": " K(\\theta,\\theta') = e^{-\\alpha \\psi(\\theta,\\theta')}, \\alpha>0. "
},
{
"math_id": 46,
"text": "K(x,y) = f(x-y)"
},
{
"math_id": 47,
"text": "K"
},
{
"math_id": 48,
"text": "x"
},
{
"math_id": 49,
"text": "\\{\\psi_n\\}"
},
{
"math_id": 50,
"text": "\\lambda_n"
},
{
"math_id": 51,
"text": "J(x) =\\int_a^b \\int_a^b K(s,t)x(s)x(t)\\ \\mathrm{d}s\\; \\mathrm{d}t"
},
{
"math_id": 52,
"text": "J(x)>0"
},
{
"math_id": 53,
"text": "x(t) = 0"
},
{
"math_id": 54,
"text": "K(s,t)"
},
{
"math_id": 55,
"text": "J(x)\\ge 0"
},
{
"math_id": 56,
"text": "[a,b]"
},
{
"math_id": 57,
"text": " K(s,t) = \\sum_n \\frac{\\psi_n(s)\\psi_n(t)}{\\lambda_n} "
},
{
"math_id": 58,
"text": "x\\in L^1[a,b]"
},
{
"math_id": 59,
"text": "E"
},
{
"math_id": 60,
"text": "K(x,y)"
},
{
"math_id": 61,
"text": "E \\times E"
},
{
"math_id": 62,
"text": "x_i\\in E"
},
{
"math_id": 63,
"text": "f\\in H, f(y) = (f,K(\\cdot,y))_H"
},
{
"math_id": 64,
"text": "X"
},
{
"math_id": 65,
"text": "f:X\\to \\mathbb{R}"
},
{
"math_id": 66,
"text": "(\\cdot,\\cdot)_H : H \\times H \\to \\mathbb{R}"
},
{
"math_id": 67,
"text": "x\\in X"
},
{
"math_id": 68,
"text": "e_x:H\\to \\mathbb{R}"
},
{
"math_id": 69,
"text": "f\\mapsto e_x(f) =f(x)"
},
{
"math_id": 70,
"text": "K:X\\times X \\to \\mathbb{R}"
},
{
"math_id": 71,
"text": "K_x(\\cdot)\\in H, \\forall x\\in X"
},
{
"math_id": 72,
"text": "(f,K_x) =f(x)"
},
{
"math_id": 73,
"text": "f\\in H"
},
{
"math_id": 74,
"text": "F"
},
{
"math_id": 75,
"text": "(\\cdot,\\cdot)_F"
},
{
"math_id": 76,
"text": "\\Phi: X\\to F"
},
{
"math_id": 77,
"text": "K(x,y) =(\\Phi(x),\\Phi(y))_F."
},
{
"math_id": 78,
"text": "F=H"
},
{
"math_id": 79,
"text": "\\Phi(x) = K_x"
},
{
"math_id": 80,
"text": "(\\Phi(x),\\Phi(y))_F = (K_x,K_y)_H = K(x,y)"
},
{
"math_id": 81,
"text": "y"
},
{
"math_id": 82,
"text": "d"
},
{
"math_id": 83,
"text": "\\mathcal X\\times \\mathcal X"
},
{
"math_id": 84,
"text": " d(x,y) \\ge 0"
},
{
"math_id": 85,
"text": " d(x,y)=0"
},
{
"math_id": 86,
"text": "x=y"
},
{
"math_id": 87,
"text": " d(x,y) = d(y,x), "
},
{
"math_id": 88,
"text": " d(x,z) \\le d(x,y)+d(y,z)."
},
{
"math_id": 89,
"text": "\\psi:\\mathcal X\\times \\mathcal X\\to \\mathbb{R}"
},
{
"math_id": 90,
"text": "n\\in \\mathbb{N}, x_1, \\dots, x_n\\in \\mathcal X,"
},
{
"math_id": 91,
"text": "c_1, \\dots, c_n \\in \\mathbb{R}"
},
{
"math_id": 92,
"text": "\\sum_{i=1}^n c_i=0"
},
{
"math_id": 93,
"text": "\\{(x,x):x\\in\\mathcal X\\}"
},
{
"math_id": 94,
"text": "(\\mathcal X,d)"
},
{
"math_id": 95,
"text": "n\\in\\mathbb{N}"
},
{
"math_id": 96,
"text": "K_n"
},
{
"math_id": 97,
"text": "K=(K_n)^n"
},
{
"math_id": 98,
"text": " d(x,y) = 0 "
},
{
"math_id": 99,
"text": " x \\neq y "
},
{
"math_id": 100,
"text": " K "
},
{
"math_id": 101,
"text": " d(x,y) = \\sqrt{ K(x,x) - 2 K(x,y) + K(y,y) } "
},
{
"math_id": 102,
"text": "f(x)"
},
{
"math_id": 103,
"text": "f"
},
{
"math_id": 104,
"text": "(x_i,f_i)=(x_i,f(x_i))"
},
{
"math_id": 105,
"text": "f_i"
},
{
"math_id": 106,
"text": "x_i"
},
{
"math_id": 107,
"text": "Z(x_i)"
},
{
"math_id": 108,
"text": "E[Z(x_i)]"
},
{
"math_id": 109,
"text": "x,y\\in\\mathcal X"
},
{
"math_id": 110,
"text": "Z(x)"
},
{
"math_id": 111,
"text": "Z(y)"
},
{
"math_id": 112,
"text": " K(x,y)=E[Z(x)\\cdot Z(y)] "
},
{
"math_id": 113,
"text": "\\epsilon(x)"
},
{
"math_id": 114,
"text": "\\sigma^2"
},
{
"math_id": 115,
"text": "Z"
},
{
"math_id": 116,
"text": "K(x,y)=E[Z(x)\\cdot Z(y)] + \\sigma^2\\delta_{xy}"
},
{
"math_id": 117,
"text": "x_1,\\dots,x_n\\in\\mathcal X"
},
{
"math_id": 118,
"text": "f(x) = \\frac 1 n \\sum_{i=1}^n K\\left(\\frac{x-x_i}h\\right)"
}
] | https://en.wikipedia.org/wiki?curid=10273855 |
10273917 | Projective cover | In the branch of abstract mathematics called category theory, a projective cover of an object "X" is in a sense the best approximation of "X" by a projective object "P". Projective covers are the dual of injective envelopes.
Definition.
Let formula_0 be a category and "X" an object in formula_0. A projective cover is a pair ("P","p"), with "P" a projective object in formula_0 and "p" a superfluous epimorphism in Hom("P", "X").
If "R" is a ring, then in the category of "R"-modules, a superfluous epimorphism is then an epimorphism formula_1 such that the kernel of "p" is a superfluous submodule of "P".
Properties.
Projective covers and their superfluous epimorphisms, when they exist, are unique up to isomorphism. The isomorphism need not be unique, however, since the projective property is not a full-fledged universal property.
The main effect of "p" having a superfluous kernel is the following: if "N" is any proper submodule of "P", then formula_2. Informally speaking, this shows the superfluous kernel causes "P" to cover "M" optimally, that is, no submodule of "P" would suffice. This does not depend upon the projectivity of "P": it is true of all superfluous epimorphisms.
If ("P","p") is a projective cover of "M", and "P' " is another projective module with an epimorphism formula_3, then there is a split epimorphism α from "P' " to "P" such that formula_4
Unlike injective envelopes and flat covers, which exist for every left (right) "R"-module regardless of the ring "R", left (right) "R"-modules do not in general have projective covers. A ring "R" is called left (right) perfect if every left (right) "R"-module has a projective cover in "R"-Mod (Mod-"R").
A ring is called semiperfect if every finitely generated left (right) "R"-module has a projective cover in "R"-Mod (Mod-"R"). "Semiperfect" is a left-right symmetric property.
A ring is called "lift/rad" if idempotents lift from "R"/"J" to "R", where "J" is the Jacobson radical of "R". The property of being lift/rad can be characterized in terms of projective covers: "R" is lift/rad if and only if direct summands of the "R" module "R"/"J" (as a right or left module) have projective covers.
Examples.
In the category of "R" modules:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{C}"
},
{
"math_id": 1,
"text": "p : P \\to X"
},
{
"math_id": 2,
"text": "p(N) \\ne M"
},
{
"math_id": 3,
"text": "p':P'\\rightarrow M"
},
{
"math_id": 4,
"text": "p\\alpha=p'"
}
] | https://en.wikipedia.org/wiki?curid=10273917 |
10274 | Enthalpy | Measure of energy in a thermodynamic system
Enthalpy () is the sum of a thermodynamic system's internal energy and the product of its pressure and volume. It is a state function in thermodynamics used in many measurements in chemical, biological, and physical systems at a constant external pressure, which is conveniently provided by the large ambient atmosphere. The pressure–volume term expresses the work formula_0 that was done against constant external pressure formula_1 to establish the system's physical dimensions from formula_2 to some final volume formula_3 (as formula_4), i.e. to make room for it by displacing its surroundings.
The pressure-volume term is very small for solids and liquids at common conditions, and fairly small for gases. Therefore, enthalpy is a stand-in for energy in chemical systems; bond, lattice, solvation, and other chemical "energies" are actually enthalpy differences. As a state function, enthalpy depends only on the final configuration of internal energy, pressure, and volume, not on the path taken to achieve it.
In the International System of Units (SI), the unit of measurement for enthalpy is the joule. Other historical conventional units still in use include the calorie and the British thermal unit (BTU).
The total enthalpy of a system cannot be measured directly because the internal energy contains components that are unknown, not easily accessible, or are not of interest for the thermodynamic problem at hand. In practice, a change in enthalpy is the preferred expression for measurements at constant pressure, because it simplifies the description of energy transfer. When transfer of matter into or out of the system is also prevented and no electrical or mechanical (stirring shaft or lift pumping) work is done, at constant pressure the enthalpy change equals the energy exchanged with the environment by heat.
In chemistry, the standard enthalpy of reaction is the enthalpy change when reactants in their standard states (p = 1 bar; usually T = 298 K) change to products in their standard states.
This quantity is the standard heat of reaction at constant pressure and temperature, but it can be measured by calorimetric methods even if the temperature does vary during the measurement, provided that the initial and final pressure and temperature correspond to the standard state. The value does not depend on the path from initial to final state because enthalpy is a state function.
Enthalpies of chemical substances are usually listed for 1 bar (100 kPa) pressure as a standard state. Enthalpies and enthalpy changes for reactions vary as a function of temperature,
but tables generally list the standard heats of formation of substances at 25 °C (298 K). For endothermic (heat-absorbing) processes, the change Δ"H" is a positive value; for exothermic (heat-releasing) processes it is negative.
The enthalpy of an ideal gas is independent of its pressure or volume, and depends only on its temperature, which correlates to its thermal energy. Real gases at common temperatures and pressures often closely approximate this behavior, which simplifies practical thermodynamic design and analysis.
The word "enthalpy" is derived from the Greek word "enthalpein", which means to heat.
Definition.
The enthalpy "H" of a thermodynamic system is defined as the sum of its internal energy and the product of its pressure and volume:
where "U" is the internal energy, p is pressure, and V is the volume of the system; p V is sometimes referred to as the pressure energy Ɛp .
Enthalpy is an extensive property; it is proportional to the size of the system (for homogeneous systems). As intensive properties, the "specific enthalpy", is referenced to a unit of mass m of the system, and the "molar enthalpy", where n is the number of moles. For inhomogeneous systems the enthalpy is the sum of the enthalpies of the component subsystems:
formula_5
where
H is the total enthalpy of all the subsystems,
k refers to the various subsystems,
Hk refers to the enthalpy of each subsystem.
A closed system may lie in thermodynamic equilibrium in a static gravitational field, so that its pressure p varies continuously with altitude, while, because of the equilibrium requirement, its temperature T is invariant with altitude. (Correspondingly, the system's gravitational potential energy density also varies with altitude.) Then the enthalpy summation becomes an integral:
formula_6
where
ρ ("rho") is density (mass per unit volume),
h is the specific enthalpy (enthalpy per unit mass),
("ρh") represents the enthalpy density (enthalpy per unit volume),
d"V" denotes an infinitesimally small element of volume within the system, for example, the volume of an infinitesimally thin horizontal layer.
The integral therefore represents the sum of the enthalpies of all the elements of the volume.
The enthalpy of a closed homogeneous system is its energy function "H"("S", "p"), with its entropy "S"[ "p" ] and its pressure p as natural state variables which provide a differential relation for d"H" of the simplest form, derived as follows. We start from the first law of thermodynamics for closed systems for an infinitesimal process:
formula_7
where
δ"Q" is a small amount of heat added to the system,
δ"W" is a small amount of work performed by the system.
In a homogeneous system in which only reversible processes or pure heat transfer are considered, the second law of thermodynamics gives δ"Q" = "T" d"S", with T the absolute temperature and d"S" the infinitesimal change in entropy S of the system. Furthermore, if only p V work is done, δ"W" = "p" d"V". As a result,
formula_8
Adding d("p V") to both sides of this expression gives
formula_9
or
formula_10
So
formula_11
and the coefficients of the natural variable differentials d"S" and d"p" are just the single variables T and V.
Other expressions.
The above expression of d"H" in terms of entropy and pressure may be unfamiliar to some readers. There are also expressions in terms of more directly measurable variables such as temperature and pressure:(p88)
formula_12
Here Cp is the heat capacity at constant "pressure" and α is the coefficient of (cubic) thermal expansion:
formula_13
With this expression one can, in principle, determine the enthalpy if Cp and V are known as functions of p and T . However the expression is more complicated than formula_14 because T is not a natural variable for the enthalpy H.
At constant pressure, formula_15 so that formula_16 For an ideal gas, formula_17 reduces to this form even if the process involves a pressure change, because α"T" = 1 in that case.
In a more general form, the first law describes the internal energy with additional terms involving the chemical potential and the number of particles of various types. The differential statement for d"H" then becomes
formula_18
where μi is the chemical potential per particle for a type i particle, and Ni is the number of such particles. The last term can also be written as "μi" d"ni" (with d"ni" the number of moles of component i added to the system and, in this case, μi the molar chemical potential) or as "μi" d"mi" (with d"mi" the mass of component i added to the system and, in this case, μi the specific chemical potential).
Characteristic functions and natural state variables.
The enthalpy, "H"("S"["p"], "p", {"Ni"}), expresses the thermodynamics of a system in the "energy representation". As a function of state, its arguments include both one intensive and several extensive state variables. The state variables "S"["p"], "p", and {"Ni"} are said to be the "natural state variables" in this representation. They are suitable for describing processes in which they are determined by factors in the surroundings. For example, when a virtual parcel of atmospheric air moves to a different altitude, the pressure surrounding it changes, and the process is often so rapid that there is too little time for heat transfer. This is the basis of the so-called adiabatic approximation that is used in meteorology.
Conjugate with the enthalpy, with these arguments, the other characteristic function of state of a thermodynamic system is its entropy, "S"["H", "p", {"Ni"}], as a function of the same list of variables of state, except that the entropy, "S"["p"], is replaced in the list by the enthalpy, H. It expresses the "entropy representation". The state variables H, p, and {"Ni"} are said to be the "natural state variables" in this representation. They are suitable for describing processes in which they are experimentally controlled. For example, H and p can be controlled by allowing heat transfer, and by varying only the external pressure on the piston that sets the volume of the system.
Physical interpretation.
The U term is the energy of the system, and the p V term can be interpreted as the work that would be required to "make room" for the system if the pressure of the environment remained constant. When a system, for example, n moles of a gas of volume V at pressure p and temperature T, is created or brought to its present state from absolute zero, energy must be supplied equal to its internal energy U plus p V, where p V is the work done in pushing against the ambient (atmospheric) pressure.
In physics and statistical mechanics it may be more interesting to study the internal properties of a constant-volume system and therefore the internal energy is used.
In chemistry, experiments are often conducted at constant atmospheric pressure, and the pressure–volume work represents a small, well-defined energy exchange with the atmosphere, so that Δ"H" is the appropriate expression for the heat of reaction. For a heat engine, the change in its enthalpy after a full cycle is equal to zero, since the final and initial state are equal.
Relationship to heat.
In order to discuss the relation between the enthalpy increase and heat supply, we return to the first law for closed systems, with the physics sign convention: d"U" = δ"Q" − δ"W", where the heat δ"Q" is supplied by conduction, radiation, Joule heating. We apply it to the special case with a constant pressure at the surface. In this case the work is given by δ"W" = "p" d"V" (where p is the pressure at the surface, d"V" is the increase of the volume of the system). Cases of long range electromagnetic interaction require further state variables in their formulation, and are not considered here. In this case the first law reads:
formula_19
Now,
formula_20
So
formula_21
If the system is under constant pressure, d"p" = 0, and consequently the increase in enthalpy of the system is equal to the heat added:
formula_22 This is why the
now-obsolete term "heat content" was used for enthalpy in the 19th century.
Applications.
In thermodynamics, one can calculate enthalpy by determining the requirements for creating a system from "nothingness"; the mechanical work required differs based upon the conditions that obtain during the creation of the thermodynamic system.
Energy must be supplied to remove particles from the surroundings to make space for the creation of the system, assuming that the pressure p remains constant; this is the "p V" term. The supplied energy must also provide the change in internal energy, which includes activation energies, ionization energies, mixing energies, vaporization energies, chemical bond energies, and so forth. Together, these constitute the change in the enthalpy "U" + "p V". For systems at constant pressure, with no external work done other than the p V work, the change in enthalpy is the heat received by the system.
For a simple system with a constant number of particles at constant pressure, the difference in enthalpy is the maximum amount of thermal energy derivable from an isobaric thermodynamic process.
Heat of reaction.
The total enthalpy of a system cannot be measured directly; the "enthalpy change" of a system is measured instead. Enthalpy change is defined by the following equation:
formula_23
where
Δ"H" is the "enthalpy change",
Hf is the final enthalpy of the system (in a chemical reaction, the enthalpy of the products or the system at equilibrium),
Hi is the initial enthalpy of the system (in a chemical reaction, the enthalpy of the reactants).
For an exothermic reaction at constant pressure, the system's change in enthalpy, Δ"H", is negative due to the products of the reaction having a smaller enthalpy than the reactants, and equals the heat released in the reaction if no electrical or shaft work is done. In other words, the overall decrease in enthalpy is achieved by the generation of heat.
Conversely, for a constant-pressure endothermic reaction, Δ"H" is positive and equal to the heat "absorbed" in the reaction.
From the definition of enthalpy as "H" = "U" + "p V", the enthalpy change at constant pressure is Δ"H" = Δ"U" + "p" Δ"V". However, for most chemical reactions the work term "p" Δ"V" is much smaller than the internal energy change Δ"U", which is approximately equal to Δ"H". As an example, for the combustion of carbon monoxide 2 CO(g) + O2(g) → 2 CO2(g), Δ"H" and Δ"U" differ by only a few kilojoules.
Since the differences are so small, reaction enthalpies are often described as reaction energies and analyzed in terms of bond energies.
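The smallness of the work term can be illustrated with the ideal-gas relation Δ("p V") = Δ"n"gas "R T"; the short sketch below is an illustration of that estimate (not a tabulated value), applied to the carbon monoxide example above:

```python
# Illustrative estimate of the p V work term for 2 CO(g) + O2(g) -> 2 CO2(g),
# using the ideal-gas relation Delta(pV) = Delta(n_gas) * R * T at constant T and p.
R = 8.314            # gas constant, J/(mol K)
T = 298.15           # temperature, K
delta_n_gas = 2 - (2 + 1)        # 2 mol CO2 formed, 2 mol CO + 1 mol O2 consumed

delta_pV = delta_n_gas * R * T   # in J
print(f"Delta(pV) = {delta_pV / 1000:.2f} kJ")   # about -2.48 kJ

# So Delta H = Delta U + Delta(pV) differs from Delta U by only ~2.5 kJ,
# while the reaction enthalpy itself is hundreds of kJ.
```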
Specific enthalpy.
The specific enthalpy of a uniform system is defined as "h" = "H"/"m", where m is the mass of the system. The SI unit for specific enthalpy is joule per kilogram. It can be expressed in other specific quantities by "h" = "u" + "p v", where u is the specific internal energy, p is the pressure, and v is specific volume, which is equal to 1/"ρ", where ρ is the density.
Enthalpy changes.
An enthalpy change describes the change in enthalpy observed in the constituents of a thermodynamic system when undergoing a transformation or chemical reaction. It is the difference between the enthalpy after the process has completed, i.e. the enthalpy of the products assuming that the reaction goes to completion, and the initial enthalpy of the system, namely the reactants. These processes are specified solely by their initial and final states, so that the enthalpy change for the reverse is the negative of that for the forward process.
A common standard enthalpy change is the enthalpy of formation, which has been determined for a large number of substances. Enthalpy changes are routinely measured and compiled in chemical and physical reference works, such as the CRC Handbook of Chemistry and Physics. The following is a selection of enthalpy changes commonly recognized in thermodynamics.
When used in these recognized terms the qualifier "change" is usually dropped and the property is simply termed "enthalpy of 'process"'. Since these properties are often used as reference values it is very common to quote them for a standardized set of environmental parameters, or standard conditions, such as a pressure of 1 bar and a temperature of 25 °C.
For such standardized values the name of the enthalpy is commonly prefixed with the term "standard", e.g. "standard enthalpy of formation".
Chemical properties.
Enthalpy of reaction - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of substance reacts completely.
Enthalpy of formation - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a compound is formed from its elementary antecedents.
Enthalpy of combustion - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a substance burns completely with oxygen.
Enthalpy of hydrogenation - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of an unsaturated compound reacts completely with an excess of hydrogen to form a saturated compound.
Enthalpy of atomization - is defined as the enthalpy change required to separate one mole of a substance into its constituent atoms completely.
Enthalpy of neutralization - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of water is formed when an acid and a base react.
Standard enthalpy of solution - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a solute is dissolved completely in an excess of solvent, so that the solution is at infinite dilution.
Standard enthalpy of denaturation (biochemistry) - is defined as the enthalpy change required to denature one mole of compound.
Enthalpy of hydration - is defined as the enthalpy change observed when one mole of gaseous ions are completely dissolved in water forming one mole of aqueous ions.
Physical properties.
Enthalpy of fusion - is defined as the enthalpy change required to completely change the state of one mole of substance from solid to liquid.
Enthalpy of vaporization - is defined as the enthalpy change required to completely change the state of one mole of substance from liquid to gas.
Enthalpy of sublimation - is defined as the enthalpy change required to completely change the state of one mole of substance from solid to gas.
Lattice enthalpy - is defined as the energy required to separate one mole of an ionic compound into separated gaseous ions to an infinite distance apart (meaning no force of attraction).
Enthalpy of mixing - is defined as the enthalpy change upon mixing of two (non-reacting) chemical substances.
Open systems.
In thermodynamic open systems, mass (of substances) may flow in and out of the system boundaries. The first law of thermodynamics for open systems states: The increase in the internal energy of a system is equal to the amount of energy added to the system by mass flowing in and by heating, minus the amount lost by mass flowing out and in the form of work done by the system:
formula_24
where Uin is the average internal energy entering the system, and Uout is the average internal energy leaving the system.
The region of space enclosed by the boundaries of the open system is usually called a control volume, and it may or may not correspond to physical walls. If we choose the shape of the control volume such that all flow in or out occurs perpendicular to its surface, then the flow of mass into the system performs work as if it were a piston of fluid pushing mass into the system, and the system performs work on the flow of mass out as if it were driving a piston of fluid. There are then two types of work performed: "Flow work" described above, which is performed on the fluid (this is also often called "p V work"), and "mechanical work" ("shaft work"), which may be performed on some mechanical device such as a turbine or pump.
These two types of work are expressed in the equation
formula_25
Substitution into the equation above for the control volume (cv) yields:
formula_26
The definition of enthalpy, H, permits us to use this thermodynamic potential to account for both internal energy and p V work in fluids for open systems:
formula_27
If we allow also the system boundary to move (e.g. due to moving pistons), we get a rather general form of the first law for open systems.
In terms of time derivatives, using Newton's dot notation for time derivatives, it reads:
formula_28
with sums over the various places k where heat is supplied, mass flows into the system, and boundaries are moving. The Ḣk terms represent enthalpy flows, which can be written as
formula_29
with ṁk the mass flow and ṅk the molar flow at position k respectively. The term "p"k d"V"k/d"t" represents the rate of change of the system volume at position k that results in p V power done by the system. The parameter P represents all other forms of power done by the system such as shaft power, but it can also be, say, electric power produced by an electrical power plant.
Note that the previous expression holds true only if the kinetic energy flow rate is conserved between system inlet and outlet. Otherwise, it has to be included in the enthalpy balance. During steady-state operation of a device ("see turbine, pump, and engine"), the average d"U"/d"t" may be set equal to zero. This yields a useful expression for the average power generation for these devices in the absence of chemical reactions:
formula_30
where the angle brackets denote time averages. The technical importance of the enthalpy is directly related to its presence in the first law for open systems, as formulated above.
Diagrams.
The enthalpy values of important substances can be obtained using commercial software. Practically all relevant material properties can be obtained either in tabular or in graphical form. There are many types of diagrams, such as "h"–"T" diagrams, which give the specific enthalpy as function of temperature for various pressures, and "h"–"p" diagrams, which give h as function of p for various T. One of the most common diagrams is the temperature–specific entropy diagram ("T"–"s" diagram). It gives the melting curve and saturated liquid and vapor values together with isobars and isenthalps. These diagrams are powerful tools in the hands of the thermal engineer.
Some basic applications.
The points a through h in the figure play a role in the discussion in this section.
Points e and g are saturated liquids, and point h is a saturated gas.
Throttling.
One of the simple applications of the concept of enthalpy is the so-called throttling process, also known as Joule–Thomson expansion. It concerns a steady adiabatic flow of a fluid through a flow resistance (valve, porous plug, or any other type of flow resistance) as shown in the figure. This process is very important, since it is at the heart of domestic refrigerators, where it is responsible for the temperature drop between ambient temperature and the interior of the refrigerator. It is also the final stage in many types of liquefiers.
For a steady state flow regime, the enthalpy of the system (dotted rectangle) has to be constant. Hence
formula_31
Since the mass flow is constant, the specific enthalpies at the two sides of the flow resistance are the same:
formula_32
that is, the enthalpy per unit mass does not change during the throttling. The consequences of this relation can be demonstrated using the diagram above.
Example 1.
Point c is at 200 bar and room temperature (300 K). A Joule–Thomson expansion from 200 bar to 1 bar follows a curve of constant enthalpy of roughly 425 kJ/kg (not shown in the diagram) lying between the 400 and 450 kJ/kg isenthalps and ends in point d, which is at a temperature of about 270 K. Hence the expansion from 200 bar to 1 bar cools nitrogen from 300 K to 270 K. In the valve, there is a lot of friction, and a lot of entropy is produced, but still the final temperature is below the starting value.
Example 2.
Point e is chosen so that it is on the saturated liquid line. Throttling from this point to a pressure of 1 bar ends in the two-phase region (point f). This means that a mixture of gas and liquid leaves the throttling valve. Since the enthalpy is an extensive parameter, the enthalpy in f ( "h"f ) is equal to the enthalpy in g ( "h"g ) multiplied by the liquid fraction in f ( "x"f ) plus the enthalpy in h ( "h"h ) multiplied by the gas fraction in f (1 − "x"f ) . So
formula_33
With the numbers read off the diagram, one finds "x"f ≈ 0.64.
This means that the mass fraction of the liquid in the liquid–gas mixture that leaves the throttling valve is 64%.
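The liquid fraction follows from solving the relation above for "x"f. The sketch below illustrates only the algebra; the enthalpy values in it are placeholders, not the diagram values used in this example:

```python
# Solve h_f = x_f * h_g + (1 - x_f) * h_h for the liquid mass fraction x_f.
# The numbers are placeholders chosen only to illustrate the calculation.
h_f = 100.0   # kJ/kg, enthalpy entering (and leaving) the throttling valve
h_g = 30.0    # kJ/kg, saturated-liquid enthalpy at the exit pressure (placeholder)
h_h = 230.0   # kJ/kg, saturated-vapor enthalpy at the exit pressure (placeholder)

x_f = (h_f - h_h) / (h_g - h_h)
print(f"liquid fraction x_f = {x_f:.2f}")   # 0.65 with these placeholder values
```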
Compressors.
A power P is applied e.g. as electrical power. If the compression is adiabatic, the gas temperature goes up. In the reversible case it would be at constant entropy, which corresponds with a vertical line in the "T" – "s" diagram. For example, compressing nitrogen from 1 bar (point a) to 2 bar (point b) would result in a temperature increase from 300 K to 380 K. In order to let the compressed gas exit at ambient temperature "T"a, heat exchange, e.g. by cooling water, is necessary. In the ideal case the compression is isothermal. The average heat flow to the surroundings is Q̇. Since the system is in the steady state the first law gives
formula_34
The minimal power needed for the compression is realized if the compression is reversible. In that case the second law of thermodynamics for open systems gives
formula_35
Eliminating Q̇ gives for the minimal power
formula_36
For example, compressing 1 kg of nitrogen from 1 bar to 200 bar costs at least ("h"2 − "h"1) − "T"a("s"2 − "s"1), evaluated with the data obtained from the "T" – "s" diagram.
The relation for the power can be further simplified by writing it as
formula_37
With d"h" = "T" d"s" + "v" d"p", this results in the final relation
formula_38
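For an ideal gas at constant temperature, "v" = "R"s"T"/"p" with "R"s the specific gas constant, so the integral evaluates to "R"s"T"a ln("p"2/"p"1). The sketch below applies this ideal-gas estimate to the nitrogen example; since nitrogen is not ideal at 200 bar, the value read from a real "T" – "s" diagram differs somewhat:

```python
# Ideal-gas estimate of the minimal specific compression work P_min/m_dot = integral of v dp
# for isothermal compression of nitrogen, using v = R_s * T / p.
from math import log

R_s = 8.314 / 0.028    # specific gas constant of N2, J/(kg K)  (molar mass 28 g/mol)
T_a = 300.0            # ambient temperature, K
p1, p2 = 1e5, 200e5    # pressures, Pa

w_min = R_s * T_a * log(p2 / p1)       # J/kg
print(f"minimal work is roughly {w_min / 1000:.0f} kJ/kg")   # about 470 kJ/kg
```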
History and etymology.
The term "enthalpy" was coined relatively late in the history of thermodynamics, in the early 20th century. Energy was introduced in a modern sense by Thomas Young in 1802, while entropy by Rudolf Clausius in 1865. "Energy" uses the root of the Greek word ("ergon"), meaning "work", to express the idea of capacity to perform work. "Entropy" uses the Greek word ("tropē") meaning "transformation" or "turning". "Enthalpy" uses the root of the Greek word ("thalpos") "warmth, heat".
The term expresses the obsolete concept of "heat content", as d"H" refers to the amount of heat gained in a process at constant pressure only, but not in the general case when pressure is variable. J. W. Gibbs used the term "a heat function for constant pressure" for clarity.
Introduction of the concept of "heat content" H is associated with Benoît Paul Émile Clapeyron and Rudolf Clausius (Clausius–Clapeyron relation, 1850).
The term "enthalpy" first appeared in print in 1909. It is attributed to Heike Kamerlingh Onnes, who most likely introduced it orally the year before, at the first meeting of the Institute of Refrigeration in Paris. It gained currency only in the 1920s, notably with the "Mollier Steam Tables and Diagrams", published in 1927.
Until the 1920s, the symbol H was used, somewhat inconsistently, for "heat" in general. The definition of H as strictly limited to enthalpy or "heat content at constant pressure" was formally proposed by A. W. Porter in 1922.
Notes.
See also.
References.
Bibliography.
| [
{
"math_id": 0,
"text": "W"
},
{
"math_id": 1,
"text": "P_{ext}"
},
{
"math_id": 2,
"text": "V_{system, initial}=0"
},
{
"math_id": 3,
"text": "V_{system, final}"
},
{
"math_id": 4,
"text": "W=P_{ext}\\Delta V"
},
{
"math_id": 5,
"text": " H = \\sum_k H_k \\; ,"
},
{
"math_id": 6,
"text": " H = \\int \\left( \\rho\\, h \\right) \\, \\mathrm{d}V \\; ,"
},
{
"math_id": 7,
"text": " \\mathrm{d}U = \\mathrm{\\delta}\\, Q - \\mathrm{\\delta}\\, W \\; ,"
},
{
"math_id": 8,
"text": "\\mathrm{d}U = T\\,\\mathrm{d}S - p\\,\\mathrm{d}V ~."
},
{
"math_id": 9,
"text": " \\mathrm{d}U + \\mathrm{d}(p\\,V) = T\\,\\mathrm{d}S - p\\,\\mathrm{d}V + \\mathrm{d}(p\\,V) \\; ,"
},
{
"math_id": 10,
"text": "\\mathrm{d}(U + p\\,V) = T\\,\\mathrm{d}S + V\\,\\mathrm{d}p ~."
},
{
"math_id": 11,
"text": " \\mathrm{d}H(S,\\,p) = T\\,\\mathrm{d}S + V\\,\\mathrm{d}p ~."
},
{
"math_id": 12,
"text": " \\mathrm{d}H = C_\\mathsf{p}\\,\\mathrm{d}T + V\\,(1 - \\alpha T)\\,\\mathrm{d}p ~."
},
{
"math_id": 13,
"text": " \\alpha = \\frac{\\,1\\,}{V} \\left( \\frac{\\partial V}{\\,\\partial T\\,} \\right)_\\mathsf{p} ~."
},
{
"math_id": 14,
"text": "\\;\\mathrm{d}H = T\\,\\mathrm{d}S + V\\,\\mathrm{d}p\\;"
},
{
"math_id": 15,
"text": "\\; \\mathrm{d}P = 0 \\;"
},
{
"math_id": 16,
"text": "\\; \\mathrm{d}H = C_\\mathsf{p}\\,\\mathrm{d}T ~."
},
{
"math_id": 17,
"text": "\\;\\mathrm{d}H\\;"
},
{
"math_id": 18,
"text": " \\mathrm{d}H = T\\,\\mathrm{d}S + V\\,\\mathrm{d}p + \\sum_i \\mu_i\\,\\mathrm{d}N_i \\; ,"
},
{
"math_id": 19,
"text": " \\mathrm{d}U = \\mathrm{\\delta}\\, Q - p \\, \\mathrm{d}V ~."
},
{
"math_id": 20,
"text": " \\mathrm{d}H = \\mathrm{d}U + \\mathrm{d}(p\\,V) ~."
},
{
"math_id": 21,
"text": "\\begin{align}\n\\mathrm{d}H &= \\mathrm{\\delta}Q + V\\,\\mathrm{d}p + p \\,\\mathrm{d}V - p\\,\\mathrm{d}V\\\\\n &= \\mathrm{\\delta}Q + V\\,\\mathrm{d}p ~.\n\\end{align}"
},
{
"math_id": 22,
"text": " \\mathrm{d}H = \\mathrm{\\delta}\\, Q"
},
{
"math_id": 23,
"text": " \\Delta H = H_\\mathsf{f} - H_\\mathsf{i} \\, ,"
},
{
"math_id": 24,
"text": " \\mathrm{d}U = \\mathrm{\\delta}\\, Q + \\mathrm{d}U_\\mathsf{in} - \\mathrm{d}U_\\mathsf{out} - \\mathrm{\\delta}\\, W \\,,"
},
{
"math_id": 25,
"text": " \\mathrm{\\delta}\\, W = \\mathrm{d}(p_\\mathsf{out} V_\\mathsf{out}) - \\mathrm{d}(p_\\mathsf{in} V_\\mathsf{in}) + \\mathrm{\\delta}\\, W_\\mathsf{shaft} ~."
},
{
"math_id": 26,
"text": " \\mathrm{d}U_\\mathsf{cv} = \\mathrm{\\delta}\\, Q + \\mathrm{d}U_\\mathsf{in} + \\mathrm{d}\\left( p_\\mathsf{in} V_\\mathsf{in} \\right) - \\mathrm{d}U_\\mathsf{out} - \\mathrm{d}\\left( p_\\mathsf{out} V_\\mathsf{out} \\right) - \\mathrm{\\delta} W_\\mathsf{shaft} ~."
},
{
"math_id": 27,
"text": " \\mathrm{d}U_\\mathsf{cv} = \\mathrm{\\delta}\\, Q + \\mathrm{d}H_\\mathsf{in} - \\mathrm{d}H_\\mathsf{out} - \\mathrm{\\delta}\\, W_\\mathsf{shaft} ~."
},
{
"math_id": 28,
"text": " \\frac{ \\mathrm{d}U }{\\, \\mathrm{d}\\, t \\,} = \\sum_k \\dot Q_k + \\sum_k \\dot H_k - \\sum_k p_k\\frac{\\,\\mathrm{d}V_k}{\\, \\mathrm{d}t \\,} - P \\, ,"
},
{
"math_id": 29,
"text": " \\dot H_k = h_k \\dot m_k = H_\\mathsf{m} \\dot n_k \\, ,"
},
{
"math_id": 30,
"text": " P = \\sum_k \\left\\langle \\dot Q_k \\right\\rangle\n+ \\sum_k \\left\\langle \\dot H_k \\right\\rangle\n- \\sum_k \\left\\langle p_k\\frac{\\, \\mathrm{d}V_k }{\\,\\mathrm{d}t\\,} \\right\\rangle \\, ,"
},
{
"math_id": 31,
"text": " 0 = \\dot m h_1 - \\dot m h_2 ~."
},
{
"math_id": 32,
"text": " h_1 = h_2 \\; ,"
},
{
"math_id": 33,
"text": " h_\\mathbf\\mathsf{f} = x_\\mathbf\\mathsf{f} h_\\mathbf\\mathsf{g} + (1 - x_\\mathbf\\mathsf{f})h_\\mathsf\\mathbf{h} ~."
},
{
"math_id": 34,
"text": " 0 = -\\dot Q + \\dot m\\, h_1 - \\dot m\\, h_2 + P ~."
},
{
"math_id": 35,
"text": " 0 = -\\frac{\\, \\dot Q \\,}{T_\\mathsf{a}} + \\dot m \\, s_1 - \\dot m \\, s_2 ~."
},
{
"math_id": 36,
"text": " \\frac{\\, P_\\mathsf{min} \\,}{ \\dot m } = h_2 - h_1 - T_\\mathsf{a}\\left( s_2 - s_1 \\right) ~."
},
{
"math_id": 37,
"text": " \\frac{\\, P_\\mathsf{min} \\,}{ \\dot m } = \\int_1^2 \\left( \\mathrm{d}h - T_\\mathsf{a}\\,\\mathrm{d}s \\right) ~."
},
{
"math_id": 38,
"text": "\\frac{\\, P_\\mathsf{min} }{ \\dot m } = \\int_1^2 v\\,\\mathrm{d}p ~."
}
] | https://en.wikipedia.org/wiki?curid=10274 |
10274436 | Topological modular forms | In mathematics, topological modular forms (tmf) is the name of a spectrum that describes a generalized cohomology theory. In concrete terms, for any integer "n" there is a topological space formula_0, and these spaces are equipped with certain maps between them, so that for any topological space "X", one obtains an abelian group structure on the set formula_1 of homotopy classes of continuous maps from "X" to formula_0. One feature that distinguishes tmf is the fact that its coefficient ring, formula_2(point), is almost the same as the graded ring of holomorphic modular forms with integral cusp expansions. Indeed, these two rings become isomorphic after inverting the primes 2 and 3, but this inversion erases a lot of torsion information in the coefficient ring.
The spectrum of topological modular forms is constructed as the global sections of a sheaf of E-infinity ring spectra on the moduli stack of (generalized) elliptic curves. This theory has relations to the theory of modular forms in number theory, the homotopy groups of spheres, and conjectural index theories on loop spaces of manifolds. tmf was first constructed by Michael Hopkins and Haynes Miller; many of the computations can be found in preprints and articles by Paul Goerss, Hopkins, Mark Mahowald, Miller, Charles Rezk, and Tilman Bauer.
Construction.
The original construction of tmf uses the obstruction theory of Hopkins, Miller, and Paul Goerss, and is based on ideas of Dwyer, Kan, and Stover. In this approach, one defines a presheaf Otop ("top" stands for topological) of multiplicative cohomology theories on the etale site of the moduli stack of elliptic curves and shows that this can be lifted in an essentially unique way to a sheaf of E-infinity ring spectra. This sheaf has the following property: to any etale elliptic curve over a ring R, it assigns an E-infinity ring spectrum (a classical elliptic cohomology theory) whose associated formal group is the formal group of that elliptic curve.
A second construction, due to Jacob Lurie, constructs tmf rather by describing the moduli problem it represents and applying general representability theory to then show existence: just as the moduli stack of elliptic curves represents the functor that assigns to a ring the category of elliptic curves over it, the stack together with the sheaf of E-infinity ring spectra represents the functor that assigns to an E-infinity ring its category of oriented derived elliptic curves, appropriately interpreted. These constructions work over the moduli stack of smooth elliptic curves, and they also work for the Deligne-Mumford compactification of this moduli stack, in which elliptic curves with nodal singularities are included. TMF is the spectrum that results from the global sections over the moduli stack of smooth curves, and tmf is the spectrum arising as the global sections of the Deligne–Mumford compactification.
TMF is a periodic version of the connective tmf. While the ring spectra used to construct TMF are periodic with period 2, TMF itself has period 576. The periodicity is related to the modular discriminant.
Relations to other parts of mathematics.
Some interest in tmf comes from string theory and conformal field theory. Graeme Segal first proposed in the 1980s to provide a geometric construction of elliptic cohomology (the precursor to tmf) as some kind of moduli space of conformal field theories, and these ideas have been continued and expanded by Stephan Stolz and Peter Teichner. Their program is to try to construct TMF as a moduli space of supersymmetric Euclidean field theories.
In work more directly motivated by string theory, Edward Witten introduced the Witten genus, a homomorphism from the string bordism ring to the ring of modular forms, using equivariant index theory on a formal neighborhood of the trivial locus in the loop space of a manifold. This associates to any spin manifold with vanishing half first Pontryagin class a modular form. By work of Hopkins, Matthew Ando, Charles Rezk and Neil Strickland, the Witten genus can be lifted to topology. That is, there is a map from the string bordism spectrum to tmf (a so-called "orientation") such that the Witten genus is recovered as the composition of the induced map on the homotopy groups of these spectra and a map of the homotopy groups of tmf to modular forms. This made it possible to prove certain divisibility statements about the Witten genus. The orientation of tmf is in analogy with the Atiyah–Bott–Shapiro map from the spin bordism spectrum to classical K-theory, which is a lift of the Dirac equation to topology. | [
{
"math_id": 0,
"text": "\\operatorname{tmf}^{n}"
},
{
"math_id": 1,
"text": "\\operatorname{tmf}^{n}(X)"
},
{
"math_id": 2,
"text": "\\operatorname{tmf}^{0}"
}
] | https://en.wikipedia.org/wiki?curid=10274436 |
10274608 | Stress majorization | Geometric placement based on ideal distances
Stress majorization is an optimization strategy used in multidimensional scaling (MDS) where, for a set of "formula_0" "formula_1"-dimensional data items, a configuration "formula_2" of formula_0 points in "formula_3 formula_4"-dimensional space is sought that minimizes the so-called "stress" function formula_5. Usually "formula_3" is formula_6 or formula_7, i.e. the "formula_8" matrix "formula_2" lists points in formula_9 or formula_10dimensional Euclidean space so that the result may be visualised (i.e. an MDS plot). The function formula_11 is a cost or loss function that measures the squared differences between ideal (formula_1-dimensional) distances and actual distances in "r"-dimensional space. It is defined as:
formula_12
where formula_13 is a weight for the measurement between a pair of points formula_14, formula_15 is the Euclidean distance between formula_16 and formula_17 and formula_18 is the ideal distance between the points (their separation) in the formula_1-dimensional data space. Note that formula_19 can be used to specify a degree of confidence in the similarity between points (e.g. 0 can be specified if there is no information for a particular pair).
A configuration formula_2 which minimizes formula_5 gives a plot in which points that are close together correspond to points that are also close together in the original formula_1-dimensional data space.
There are many ways that formula_20 could be minimized. For example, Kruskal recommended an iterative steepest descent approach. However, a significantly better (in terms of guarantees on, and rate of, convergence) method for minimizing stress was introduced by Jan de Leeuw. De Leeuw's "iterative majorization" method at each step minimizes a simple convex function which both bounds formula_11 from above and touches the surface of formula_11 at a point formula_21, called the "supporting point". In convex analysis such a function is called a "majorizing" function. This iterative majorization process is also referred to as the SMACOF algorithm ("Scaling by MAjorizing a COmplicated Function").
The SMACOF algorithm.
The stress function formula_11 can be expanded as follows:
formula_22
Note that the first term is a constant formula_23 and the second term is quadratic in formula_2 (i.e. for the Hessian matrix formula_24 the second term is equivalent to trformula_25) and therefore relatively easily solved. The third term is bounded by:
formula_26
where formula_27 has:
formula_28 for formula_29
and formula_30 for formula_31
and formula_32.
Proof of this inequality is by the Cauchy-Schwarz inequality, see Borg (pp. 152–153).
Thus, we have a simple quadratic function formula_33 that majorizes stress:
formula_34
formula_35
The iterative minimization procedure is then:
at the formula_36 step, set formula_37;
find formula_38;
stop if formula_39, otherwise repeat.
This algorithm has been shown to decrease stress monotonically (see de Leeuw).
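A minimal sketch of this procedure for the unweighted case (all formula_19 equal to 1), in which the majorizing update reduces to the Guttman transform X ← B(Z)Z/n, is given below; it is an illustration, not a reference implementation:

```python
# Minimal SMACOF sketch for unit weights.
import numpy as np

def smacof(delta, r=2, n_iter=100, eps=1e-6, seed=0):
    """delta: (n, n) symmetric matrix of ideal distances."""
    rng = np.random.default_rng(seed)
    n = delta.shape[0]
    X = rng.normal(size=(n, r))                     # random starting configuration

    def distances(Y):
        diff = Y[:, None, :] - Y[None, :, :]
        return np.sqrt((diff ** 2).sum(-1))

    def stress(Y):
        d = distances(Y)
        iu = np.triu_indices(n, 1)
        return ((d[iu] - delta[iu]) ** 2).sum()

    old = stress(X)
    for _ in range(n_iter):
        d = distances(X)
        with np.errstate(divide="ignore", invalid="ignore"):
            B = np.where(d > 0, -delta / d, 0.0)    # b_ij = -delta_ij / d_ij(Z)
        np.fill_diagonal(B, 0.0)
        np.fill_diagonal(B, -B.sum(axis=1))         # b_ii = -sum over j != i of b_ij
        X = B @ X / n                               # Guttman transform (unit weights)
        new = stress(X)
        if old - new < eps:
            break
        old = new
    return X

# Example: recover a 2-D layout from the distances of five random planar points.
pts = np.random.default_rng(1).normal(size=(5, 2))
delta = np.sqrt(((pts[:, None] - pts[None, :]) ** 2).sum(-1))
X = smacof(delta)
```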
Use in graph drawing.
Stress majorization and algorithms similar to SMACOF also have application in the field of graph drawing. That is, one can find a reasonably aesthetically appealing layout for a network or graph by minimizing a stress function over the positions of the nodes in the graph. In this case, the formula_18 are usually set to the graph-theoretic distances between nodes "formula_16" and "formula_17" and the weights formula_19 are taken to be formula_40. Here, formula_41 is chosen as a trade-off between preserving long- or short-range ideal distances. Good results have been shown for formula_42.
References.
| [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "m"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "r"
},
{
"math_id": 4,
"text": "(\\ll m)"
},
{
"math_id": 5,
"text": "\\sigma(X)"
},
{
"math_id": 6,
"text": "2"
},
{
"math_id": 7,
"text": "3"
},
{
"math_id": 8,
"text": "(n\\times r)"
},
{
"math_id": 9,
"text": "2-"
},
{
"math_id": 10,
"text": "3-"
},
{
"math_id": 11,
"text": "\\sigma"
},
{
"math_id": 12,
"text": "\\sigma(X)=\\sum_{i<j\\le n}w_{ij}(d_{ij}(X)-\\delta_{ij})^2"
},
{
"math_id": 13,
"text": "w_{ij}\\ge 0"
},
{
"math_id": 14,
"text": "(i,j)"
},
{
"math_id": 15,
"text": "d_{ij}(X)"
},
{
"math_id": 16,
"text": "i"
},
{
"math_id": 17,
"text": "j"
},
{
"math_id": 18,
"text": "\\delta_{ij}"
},
{
"math_id": 19,
"text": "w_{ij}"
},
{
"math_id": 20,
"text": " \\sigma(X)"
},
{
"math_id": 21,
"text": "Z"
},
{
"math_id": 22,
"text": "\n\\sigma(X)=\\sum_{i<j\\le n}w_{ij}(d_{ij}(X)-\\delta_{ij})^2\n=\\sum_{i<j}w_{ij}\\delta_{ij}^2 + \\sum_{i<j}w_{ij}d_{ij}^2(X)-2\\sum_{i<j}w_{ij}\\delta_{ij}d_{ij}(X)\n"
},
{
"math_id": 23,
"text": "C"
},
{
"math_id": 24,
"text": "V"
},
{
"math_id": 25,
"text": "X'VX"
},
{
"math_id": 26,
"text": "\n\\sum_{i<j}w_{ij}\\delta_{ij}d_{ij}(X)=\\,\\operatorname{tr}\\, X'B(X)X \\ge \\,\\operatorname{tr}\\, X'B(Z)Z\n"
},
{
"math_id": 27,
"text": "B(Z)"
},
{
"math_id": 28,
"text": "b_{ij}=-\\frac{w_{ij}\\delta_{ij}}{d_{ij}(Z)}"
},
{
"math_id": 29,
"text": "d_{ij}(Z)\\ne 0, i \\ne j"
},
{
"math_id": 30,
"text": "b_{ij}=0"
},
{
"math_id": 31,
"text": "d_{ij}(Z)=0, i\\ne j"
},
{
"math_id": 32,
"text": "b_{ii}=-\\sum_{j=1,j\\ne i}^n b_{ij}"
},
{
"math_id": 33,
"text": "\\tau(X,Z)"
},
{
"math_id": 34,
"text": "\\sigma(X)=C+\\,\\operatorname{tr}\\, X'VX - 2 \\,\\operatorname{tr}\\, X'B(X)X\n"
},
{
"math_id": 35,
"text": "\\le C+\\,\\operatorname{tr}\\, X' V X - 2 \\,\\operatorname{tr}\\, X'B(Z)Z = \\tau(X,Z)\n"
},
{
"math_id": 36,
"text": "k^{th}"
},
{
"math_id": 37,
"text": "Z\\leftarrow X^{k-1}"
},
{
"math_id": 38,
"text": "X^k\\leftarrow \\min_X \\tau(X,Z)"
},
{
"math_id": 39,
"text": "\\sigma(X^{k-1})-\\sigma(X^{k})<\\epsilon"
},
{
"math_id": 40,
"text": "\\delta_{ij}^{-\\alpha}"
},
{
"math_id": 41,
"text": "\\alpha"
},
{
"math_id": 42,
"text": "\\alpha=2"
}
] | https://en.wikipedia.org/wiki?curid=10274608 |
10275945 | 6-demicube | Uniform 6-polytope
In geometry, a 6-demicube or demihexeract is a uniform 6-polytope, constructed from a "6-cube" (hexeract) with alternated vertices removed. It is part of a dimensionally infinite family of uniform polytopes called demihypercubes.
E. L. Elte identified it in 1912 as a semiregular polytope, labeling it as HM6 for a 6-dimensional "half measure" polytope.
Coxeter named this polytope as 131 from its Coxeter diagram, with a ring on one of the 1-length branches. It can be named similarly by a 3-dimensional exponential Schläfli symbol formula_0 or {3,33,1}.
Cartesian coordinates.
Cartesian coordinates for the vertices of a demihexeract centered at the origin are alternate halves of the hexeract:
(±1,±1,±1,±1,±1,±1)
with an odd number of plus signs.
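The vertex count can be checked by direct enumeration, since exactly half of the 64 hexeract vertices have an odd number of plus signs; a short check:

```python
# Enumerate the 6-demicube vertices described above.
from itertools import product

verts = [v for v in product((1, -1), repeat=6) if v.count(1) % 2 == 1]
print(len(verts))   # 32
```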
As a configuration.
This configuration matrix represents the 6-demicube. The rows and columns correspond to vertices, edges, faces, cells, 4-faces and 5-faces. The diagonal numbers say how many of each element occur in the whole 6-demicube. The nondiagonal numbers say how many of the column's element occur in or at the row's element.
The diagonal f-vector numbers are derived through the Wythoff construction, dividing the full group order of a subgroup order by removing one mirror at a time.
Related polytopes.
There are 47 uniform polytopes with D6 symmetry; 31 are shared by the B6 symmetry, and 16 are unique:
The 6-demicube, 131 is third in a dimensional series of uniform polytopes, expressed by Coxeter as k31 series. The fifth figure is a Euclidean honeycomb, 331, and the final is a noncompact hyperbolic honeycomb, 431. Each progressive uniform polytope is constructed from the previous as its vertex figure.
It is also the second in a dimensional series of uniform polytopes and honeycombs, expressed by Coxeter as 13k series. The fourth figure is the Euclidean honeycomb 133 and the final is a noncompact hyperbolic honeycomb, 134.
Skew icosahedron.
Coxeter identified a subset of 12 vertices that form a regular skew icosahedron {3, 5} with the same symmetries as the icosahedron itself, but at different angles. He dubbed this the regular skew icosahedron. | [
{
"math_id": 0,
"text": "\\left\\{3 \\begin{array}{l}3, 3, 3\\\\3\\end{array}\\right\\}"
}
] | https://en.wikipedia.org/wiki?curid=10275945 |
10275954 | 7-cube | 7-dimensional hypercube
In geometry, a 7-cube is a seven-dimensional hypercube with 128 vertices, 448 edges, 672 square faces, 560 cubic cells, 280 tesseract 4-faces, 84 penteract 5-faces, and 14 hexeract 6-faces.
It can be named by its Schläfli symbol {4,35}, being composed of 3 6-cubes around each 5-face. It can be called a hepteract, a portmanteau of tesseract (the "4-cube") and "hepta" for seven (dimensions) in Greek. It can also be called a regular tetradeca-7-tope or tetradecaexon, being a 7 dimensional polytope constructed from 14 regular facets.
Related polytopes.
The "7-cube" is 7th in a series of hypercube:
The dual of a 7-cube is called a 7-orthoplex, and is a part of the infinite family of cross-polytopes.
Applying an "alternation" operation, deleting alternating vertices of the hepteract, creates another uniform polytope, called a demihepteract, (part of an infinite family called demihypercubes), which has 14 demihexeractic and 64 6-simplex 6-faces.
As a configuration.
This configuration matrix represents the 7-cube. The rows and columns correspond to vertices, edges, faces, cells, 4-faces, 5-faces and 6-faces. The diagonal numbers say how many of each element occur in the whole 7-cube. The nondiagonal numbers say how many of the column's element occur in or at the row's element.
formula_0
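The diagonal of this matrix can be reproduced from the standard count of k-faces of an n-cube, 2^(n−k)·C(n,k); a short check:

```python
# Number of k-faces of the n-cube: 2**(n-k) * C(n, k), here for n = 7.
from math import comb

n = 7
print([2 ** (n - k) * comb(n, k) for k in range(n)])
# [128, 448, 672, 560, 280, 84, 14]
```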
Cartesian coordinates.
Cartesian coordinates for the vertices of a hepteract centered at the origin and edge length 2 are
(±1,±1,±1,±1,±1,±1,±1)
while the interior of the same consists of all points (x0, x1, x2, x3, x4, x5, x6) with -1 < xi < 1. | [
{
"math_id": 0,
"text": "\\begin{bmatrix}\\begin{matrix}\n128 & 7 & 21 & 35 & 35 & 21 & 7 \n\\\\ 2 & 448 & 6 & 15 & 20 & 15 & 6 \n\\\\ 4 & 4 & 672 & 5 & 10 & 10 & 5 \n\\\\ 8 & 12 & 6 & 560 & 4 & 6 & 4 \n\\\\ 16 & 32 & 24 & 8 & 280 & 3 & 3 \n\\\\ 32 & 80 & 80 & 40 & 10 & 84 & 2 \n\\\\ 64 & 192 & 240 & 160 & 60 & 12 & 14 \n\\end{matrix}\\end{bmatrix}"
}
] | https://en.wikipedia.org/wiki?curid=10275954 |
10275958 | 7-demicube | Uniform 7-polytope
In geometry, a demihepteract or 7-demicube is a uniform 7-polytope, constructed from the 7-hypercube (hepteract) with alternated vertices removed. It is part of a dimensionally infinite family of uniform polytopes called demihypercubes.
E. L. Elte identified it in 1912 as a semiregular polytope, labeling it as HM7 for a 7-dimensional "half measure" polytope.
Coxeter named this polytope as 141 from its Coxeter diagram, with a ring on one of the 1-length branches, and Schläfli symbol formula_0 or {3,34,1}.
Cartesian coordinates.
Cartesian coordinates for the vertices of a demihepteract centered at the origin are alternate halves of the hepteract:
(±1,±1,±1,±1,±1,±1,±1)
with an odd number of plus signs.
As a configuration.
This configuration matrix represents the 7-demicube. The rows and columns correspond to vertices, edges, faces, cells, 4-faces, 5-faces and 6-faces. The diagonal numbers say how many of each element occur in the whole 7-demicube. The nondiagonal numbers say how many of the column's element occur in or at the row's element.
The diagonal f-vector numbers are derived through the Wythoff construction, dividing the full group order of a subgroup order by removing one mirror at a time.
Related polytopes.
There are 95 uniform polytopes with D7 symmetry; 63 are shared by the B7 symmetry, and 32 are unique:
References.
| [
{
"math_id": 0,
"text": "\\left\\{3 \\begin{array}{l}3, 3, 3, 3\\\\3\\end{array}\\right\\}"
}
] | https://en.wikipedia.org/wiki?curid=10275958 |
10275959 | 8-demicube | In geometry, a demiocteract or 8-demicube is a uniform 8-polytope, constructed from the 8-hypercube, octeract, with alternated vertices removed. It is part of a dimensionally infinite family of uniform polytopes called demihypercubes.
E. L. Elte identified it in 1912 as a semiregular polytope, labeling it as HM8 for an 8-dimensional "half measure" polytope.
Coxeter named this polytope as 151 from its Coxeter diagram, with a ring on
one of the 1-length branches, and Schläfli symbol formula_0 or {3,35,1}.
Cartesian coordinates.
Cartesian coordinates for the vertices of an 8-demicube centered at the origin are alternate halves of the 8-cube:
(±1,±1,±1,±1,±1,±1,±1,±1)
with an odd number of plus signs.
Related polytopes and honeycombs.
This polytope is the vertex figure for the uniform tessellation, 251 with Coxeter-Dynkin diagram: | [
{
"math_id": 0,
"text": "\\left\\{3 \\begin{array}{l}3, 3, 3, 3, 3\\\\3\\end{array}\\right\\}"
}
] | https://en.wikipedia.org/wiki?curid=10275959 |
10275960 | 9-demicube | Uniform 9-polytope
In geometry, a demienneract or 9-demicube is a uniform 9-polytope, constructed from the 9-cube, with alternated vertices removed. It is part of a dimensionally infinite family of uniform polytopes called demihypercubes.
E. L. Elte identified it in 1912 as a semiregular polytope, labeling it as HM9 for a 9-dimensional "half measure" polytope.
Coxeter named this polytope as 161 from its Coxeter diagram, with a ring on
one of the 1-length branches, and Schläfli symbol formula_0 or {3,36,1}.
Cartesian coordinates.
Cartesian coordinates for the vertices of a demienneract centered at the origin are alternate halves of the enneract:
(±1,±1,±1,±1,±1,±1,±1,±1,±1)
with an odd number of plus signs. | [
{
"math_id": 0,
"text": "\\left\\{3 \\begin{array}{l}3, 3, 3, 3, 3, 3\\\\3\\end{array}\\right\\}"
}
] | https://en.wikipedia.org/wiki?curid=10275960 |
10275967 | 8-cube | 8-dimensional hypercube
In geometry, an 8-cube is an eight-dimensional hypercube. It has 256 vertices, 1024 edges, 1792 square faces, 1792 cubic cells, 1120 tesseract 4-faces, 448 5-cube 5-faces, 112 6-cube 6-faces, and 16 7-cube 7-faces.
It is represented by Schläfli symbol {4,36}, being composed of 3 7-cubes around each 6-face. It is called an octeract, a portmanteau of tesseract (the "4-cube") and "oct" for eight (dimensions) in Greek. It can also be called a regular hexdeca-8-tope or hexadecazetton, being an 8-dimensional polytope constructed from 16 regular facets.
It is a part of an infinite family of polytopes, called hypercubes. The dual of an 8-cube can be called an 8-orthoplex and is a part of the infinite family of cross-polytopes.
Cartesian coordinates.
Cartesian coordinates for the vertices of an 8-cube centered at the origin and edge length 2 are
(±1,±1,±1,±1,±1,±1,±1,±1)
while the interior of the same consists of all points (x0, x1, x2, x3, x4, x5, x6, x7) with -1 < xi < 1.
As a configuration.
This configuration matrix represents the 8-cube. The rows and columns correspond to vertices, edges, faces, cells, 4-faces, 5-faces, 6-faces, and 7-faces. The diagonal numbers say how many of each element occur in the whole 8-cube. The nondiagonal numbers say how many of the column's element occur in or at the row's element.
formula_0
The diagonal f-vector numbers are derived through the Wythoff construction, dividing the full group order of a subgroup order by removing one mirror at a time.
Derived polytopes.
Applying an "alternation" operation, deleting alternating vertices of the octeract, creates another uniform polytope, called a "8-demicube", (part of an infinite family called demihypercubes), which has 16 demihepteractic and 128 8-simplex facets.
Related polytopes.
The "8-cube" is 8th in an infinite series of hypercube:
References.
| [
{
"math_id": 0,
"text": "\\begin{bmatrix}\\begin{matrix}\n256 & 8 & 28 & 56 & 70 & 56 & 28 & 8\n\\\\ 2 & 1024 & 7 & 21 & 35 & 35 & 21 & 7\n\\\\ 4 & 4 & 1792 & 6 & 15 & 20 & 15 & 6\n\\\\ 8 & 12 & 6 & 1792 & 5 & 10 & 10 & 5\n\\\\ 16 & 32 & 24 & 8 & 1120 & 4 & 6 & 4\n\\\\ 32 & 80 & 80 & 40 & 10 & 448 & 3 & 3\n\\\\ 64 & 192 & 240 & 160 & 60 & 12 & 112 & 2\n\\\\ 128 & 448 & 672 & 560 & 280 & 84 & 14 & 16\n\\end{matrix}\\end{bmatrix}"
}
] | https://en.wikipedia.org/wiki?curid=10275967 |
10275976 | 7-orthoplex | Regular 7- polytope
In geometry, a 7-orthoplex, or 7-cross polytope, is a regular 7-polytope with 14 vertices, 84 edges, 280 triangle faces, 560 tetrahedron cells, 672 5-cells "4-faces", 448 "5-faces", and 128 "6-faces".
It has two constructed forms, the first being regular with Schläfli symbol {35,4}, and the second with alternately labeled (checkerboarded) facets, with Schläfli symbol {3,3,3,3,31,1} or Coxeter symbol 411.
It is a part of an infinite family of polytopes, called cross-polytopes or "orthoplexes". The dual polytope is the 7-hypercube, or hepteract.
As a configuration.
This configuration matrix represents the 7-orthoplex. The rows and columns correspond to vertices, edges, faces, cells, 4-faces, 5-faces and 6-faces. The diagonal numbers say how many of each element occur in the whole 7-orthoplex. The nondiagonal numbers say how many of the column's element occur in or at the row's element.
formula_0
Construction.
There are two Coxeter groups associated with the 7-orthoplex, one regular, dual of the hepteract with the C7 or [4,3,3,3,3,3] symmetry group, and a half symmetry with two copies of 6-simplex facets, alternating, with the D7 or [34,1,1] symmetry group. A lowest symmetry construction is based on a dual of a 7-orthotope, called a 7-fusil.
Cartesian coordinates.
Cartesian coordinates for the vertices of a 7-orthoplex, centered at the origin are
(±1,0,0,0,0,0,0), (0,±1,0,0,0,0,0), (0,0,±1,0,0,0,0), (0,0,0,±1,0,0,0), (0,0,0,0,±1,0,0), (0,0,0,0,0,±1,0), (0,0,0,0,0,0,±1)
Every vertex pair is connected by an edge, except opposites. | [
{
"math_id": 0,
"text": "\\begin{bmatrix}\\begin{matrix}\n14 & 12 & 60 & 160 & 240 & 192 & 64 \n\\\\ 2 & 84 & 10 & 40 & 80 & 80 & 32 \n\\\\ 3 & 3 & 280 & 8 & 24 & 32 & 16 \n\\\\ 4 & 6 & 4 & 560 & 6 & 12 & 8 \n\\\\ 5 & 10 & 10 & 5 & 672 & 4 & 4 \n\\\\ 6 & 15 & 20 & 15 & 6 & 448 & 2 \n\\\\ 7 & 21 & 35 & 35 & 21 & 7 & 128 \n\\end{matrix}\\end{bmatrix}"
}
] | https://en.wikipedia.org/wiki?curid=10275976 |
10275985 | 6-simplex | Uniform 6-polytope
In geometry, a 6-simplex is a self-dual regular 6-polytope. It has 7 vertices, 21 edges, 35 triangle faces, 35 tetrahedral cells, 21 5-cell 4-faces, and 7 5-simplex 5-faces. Its dihedral angle is cos−1(1/6), or approximately 80.41°.
Alternate names.
It can also be called a heptapeton, or hepta-6-tope, as a 7-facetted polytope in 6-dimensions. The name "heptapeton" is derived from "hepta" for seven facets in Greek and "-peta" for having five-dimensional facets, and "-on". Jonathan Bowers gives a heptapeton the acronym hop.
As a configuration.
This configuration matrix represents the 6-simplex. The rows and columns correspond to vertices, edges, faces, cells, 4-faces and 5-faces. The diagonal numbers say how many of each element occur in the whole 6-simplex. The nondiagonal numbers say how many of the column's element occur in or at the row's element. This self-dual simplex's matrix is identical to its 180 degree rotation.
formula_0
Coordinates.
The Cartesian coordinates for an origin-centered regular heptapeton having edge length 2 are:
formula_1
formula_2
formula_3
formula_4
formula_5
formula_6
The vertices of the "6-simplex" can be more simply positioned in 7-space as permutations of:
(0,0,0,0,0,0,1)
This construction is based on facets of the 7-orthoplex.
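These seven permutations are mutually equidistant (every pairwise distance is √2), which is what makes them the vertices of a regular 6-simplex; the explicit coordinates above are instead scaled to edge length 2. A short check:

```python
# Verify that the permutations of (0,0,0,0,0,0,1) are pairwise at distance sqrt(2).
from itertools import combinations
from math import dist, isclose, sqrt

verts = [tuple(1 if j == i else 0 for j in range(7)) for i in range(7)]
assert all(isclose(dist(u, v), sqrt(2)) for u, v in combinations(verts, 2))
print(len(verts), "vertices, all pairwise distances equal", sqrt(2))
```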
Related uniform 6-polytopes.
The regular 6-simplex is one of 35 uniform 6-polytopes based on the [3,3,3,3,3] Coxeter group, all shown here in A6 Coxeter plane orthographic projections.
Notes.
| [
{
"math_id": 0,
"text": "\\begin{bmatrix}\\begin{matrix}7 & 6 & 15 & 20 & 15 & 6 \\\\ 2 & 21 & 5 & 10 & 10 & 5 \\\\ 3 & 3 & 35 & 4 & 6 & 4 \\\\ 4 & 6 & 4 & 35 & 3 & 3 \\\\ 5 & 10 & 10 & 5 & 21 & 2 \\\\ 6 & 15 & 20 & 15 & 6 & 7 \\end{matrix}\\end{bmatrix}"
},
{
"math_id": 1,
"text": "\\left(\\sqrt{1/21},\\ \\sqrt{1/15},\\ \\sqrt{1/10},\\ \\sqrt{1/6},\\ \\sqrt{1/3},\\ \\pm1\\right)"
},
{
"math_id": 2,
"text": "\\left(\\sqrt{1/21},\\ \\sqrt{1/15},\\ \\sqrt{1/10},\\ \\sqrt{1/6},\\ -2\\sqrt{1/3},\\ 0\\right)"
},
{
"math_id": 3,
"text": "\\left(\\sqrt{1/21},\\ \\sqrt{1/15},\\ \\sqrt{1/10},\\ -\\sqrt{3/2},\\ 0,\\ 0\\right)"
},
{
"math_id": 4,
"text": "\\left(\\sqrt{1/21},\\ \\sqrt{1/15},\\ -2\\sqrt{2/5},\\ 0,\\ 0,\\ 0\\right)"
},
{
"math_id": 5,
"text": "\\left(\\sqrt{1/21},\\ -\\sqrt{5/3},\\ 0,\\ 0,\\ 0,\\ 0\\right)"
},
{
"math_id": 6,
"text": "\\left(-\\sqrt{12/7},\\ 0,\\ 0,\\ 0,\\ 0,\\ 0\\right)"
}
] | https://en.wikipedia.org/wiki?curid=10275985 |
10276020 | 7-simplex | Type of 7-polytope
In 7-dimensional geometry, a 7-simplex is a self-dual regular 7-polytope. It has 8 vertices, 28 edges, 56 triangle faces, 70 tetrahedral cells, 56 5-cell 4-faces, 28 5-simplex 5-faces, and 8 6-simplex 6-faces. Its dihedral angle is cos−1(1/7), or approximately 81.79°.
Alternate names.
It can also be called an octaexon, or octa-7-tope, as an 8-facetted polytope in 7-dimensions. The name "octaexon" is derived from "octa" for eight facets in Greek and "-ex" for having six-dimensional facets, and "-on". Jonathan Bowers gives an octaexon the acronym oca.
As a configuration.
This configuration matrix represents the 7-simplex. The rows and columns correspond to vertices, edges, faces, cells, 4-faces, 5-faces and 6-faces. The diagonal numbers say how many of each element occur in the whole 7-simplex. The nondiagonal numbers say how many of the column's element occur in or at the row's element. This self-dual simplex's matrix is identical to its 180 degree rotation.
formula_0
Symmetry.
There are many lower symmetry constructions of the 7-simplex.
Some are expressed as join partitions of two or more lower simplexes. The symmetry order of each join is the product of the symmetry order of the elements, and raised further if identical elements can be interchanged.
Coordinates.
The Cartesian coordinates of the vertices of an origin-centered regular octaexon having edge length 2 are:
formula_1
formula_2
formula_3
formula_4
formula_5
formula_6
formula_7
More simply, the vertices of the "7-simplex" can be positioned in 8-space as permutations of (0,0,0,0,0,0,0,1). This construction is based on facets of the 8-orthoplex.
Related polytopes.
This polytope is a facet in the uniform tessellation 331 with Coxeter-Dynkin diagram:
This polytope is one of 71 uniform 7-polytopes with A7 symmetry.
Notes.
| [
{
"math_id": 0,
"text": "\\begin{bmatrix}\\begin{matrix}8 & 7 & 21 & 35 & 35 & 21 & 7 \\\\ 2 & 28 & 6 & 15 & 20 & 15 & 6 \\\\ 3 & 3 & 56 & 5 & 10 & 10 & 5 \\\\ 4 & 6 & 4 & 70 & 4 & 6 & 4 \\\\ 5 & 10 & 10 & 5 & 56 & 3 & 3 \\\\ 6 & 15 & 20 & 15 & 6 & 28 & 2 \\\\ 7 & 21 & 35 & 35 & 21 & 7 & 8 \\end{matrix}\\end{bmatrix}"
},
{
"math_id": 1,
"text": "\\left(\\sqrt{1/28},\\ \\sqrt{1/21},\\ \\sqrt{1/15},\\ \\sqrt{1/10},\\ \\sqrt{1/6},\\ \\sqrt{1/3},\\ \\pm1\\right)"
},
{
"math_id": 2,
"text": "\\left(\\sqrt{1/28},\\ \\sqrt{1/21},\\ \\sqrt{1/15},\\ \\sqrt{1/10},\\ \\sqrt{1/6},\\ -2\\sqrt{1/3},\\ 0\\right)"
},
{
"math_id": 3,
"text": "\\left(\\sqrt{1/28},\\ \\sqrt{1/21},\\ \\sqrt{1/15},\\ \\sqrt{1/10},\\ -\\sqrt{3/2},\\ 0,\\ 0\\right)"
},
{
"math_id": 4,
"text": "\\left(\\sqrt{1/28},\\ \\sqrt{1/21},\\ \\sqrt{1/15},\\ -2\\sqrt{2/5},\\ 0,\\ 0,\\ 0\\right)"
},
{
"math_id": 5,
"text": "\\left(\\sqrt{1/28},\\ \\sqrt{1/21},\\ -\\sqrt{5/3},\\ 0,\\ 0,\\ 0,\\ 0\\right)"
},
{
"math_id": 6,
"text": "\\left(\\sqrt{1/28},\\ -\\sqrt{12/7},\\ 0,\\ 0,\\ 0,\\ 0,\\ 0\\right)"
},
{
"math_id": 7,
"text": "\\left(-\\sqrt{7/4},\\ 0,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0\\right)"
}
] | https://en.wikipedia.org/wiki?curid=10276020 |
10276044 | 8-orthoplex | In geometry, an 8-orthoplex or 8-cross polytope is a regular 8-polytope with 16 vertices, 112 edges, 448 triangle faces, 1120 tetrahedron cells, 1792 5-cells "4-faces", 1792 "5-faces", 1024 "6-faces", and 256 "7-faces".
It has two constructive forms, the first being regular with Schläfli symbol {36,4}, and the second with alternately labeled (checkerboarded) facets, with Schläfli symbol {3,3,3,3,3,31,1} or Coxeter symbol 511.
It is a part of an infinite family of polytopes, called cross-polytopes or "orthoplexes". The dual polytope is an 8-hypercube, or octeract.
As a configuration.
This configuration matrix represents the 8-orthoplex. The rows and columns correspond to vertices, edges, faces, cells, 4-faces, 5-faces, 6-faces and 7-faces. The diagonal numbers say how many of each element occur in the whole 8-orthoplex. The nondiagonal numbers say how many of the column's element occur in or at the row's element.
formula_0
The diagonal f-vector numbers are derived through the Wythoff construction, dividing the full group order of a subgroup order by removing individual mirrors.
Construction.
There are two Coxeter groups associated with the 8-orthoplex, one regular, dual of the octeract with the C8 or [4,3,3,3,3,3,3] symmetry group, and a half symmetry with two copies of 7-simplex facets, alternating, with the D8 or [35,1,1] symmetry group. A lowest symmetry construction is based on a dual of an 8-orthotope, called an 8-fusil.
Cartesian coordinates.
Cartesian coordinates for the vertices of an 8-orthoplex, centered at the origin are
(±1,0,0,0,0,0,0,0), (0,±1,0,0,0,0,0,0), (0,0,±1,0,0,0,0,0), (0,0,0,±1,0,0,0,0),
(0,0,0,0,±1,0,0,0), (0,0,0,0,0,±1,0,0), (0,0,0,0,0,0,±1,0), (0,0,0,0,0,0,0,±1)
Every vertex pair is connected by an edge, except opposites.
Images.
It is used in its alternated form 511 with the 8-simplex to form the 521 honeycomb.
References.
| [
{
"math_id": 0,
"text": "\\begin{bmatrix}\\begin{matrix}\n 16 & 14 & 84 & 280 & 560 & 672 & 448 & 128\n\\\\ 2 & 112 & 12 & 60 & 160 & 240 & 192 & 64\n\\\\ 3 & 3 & 448 & 10 & 40 & 80 & 80 & 32\n\\\\ 4 & 6 & 4 & 1120 & 8 & 24 & 32 & 16\n\\\\ 5 & 10 & 10 & 5 & 1792 & 6 & 12 & 8\n\\\\ 6 & 15 & 20 & 15 & 6 & 1792 & 4 & 4\n\\\\ 7 & 21 & 35 & 35 & 21 & 7 & 1024 & 2\n\\\\ 8 & 28 & 56 & 70 & 56 & 28 & 8 & 256\n\\end{matrix}\\end{bmatrix}"
}
] | https://en.wikipedia.org/wiki?curid=10276044 |
10276223 | Proper equilibrium | Proper equilibrium is a refinement of Nash Equilibrium by Roger B. Myerson.
Proper equilibrium further refines Reinhard Selten's notion of a
trembling hand perfect equilibrium by assuming that more costly trembles are made with
significantly smaller probability than less
costly ones.
Definition.
Given a normal form game and a parameter formula_0, a totally mixed strategy profile formula_1 is defined to be formula_2-proper if, whenever a player has two pure strategies s and s' such that the expected payoff of playing s is smaller than the expected payoff of
playing s' (that is formula_3), then the probability assigned to s
is at most formula_2 times the probability assigned to s'.
The strategy profile of the game is said to be a proper equilibrium
if it is a limit point, as formula_2 approaches 0, of a sequence of formula_2-proper strategy profiles.
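The definition can be checked mechanically for small finite games. The sketch below is an illustration only; the data format (payoff dictionaries keyed by pure-strategy profiles, and mixed strategies as probability dictionaries) is an assumption of the example rather than anything from the article. It tests whether a given totally mixed profile is formula_2-proper:
<syntaxhighlight lang="python">
from itertools import product, permutations

def is_epsilon_proper(payoffs, profile, eps):
    """Check the epsilon-proper condition for a finite game in normal form.

    payoffs[i][pure] is player i's payoff at the pure-strategy profile `pure`
    (a tuple with one pure strategy per player); profile[i] maps player i's
    pure strategies to strictly positive probabilities (totally mixed).
    """
    players = range(len(profile))
    for i in players:
        others = [j for j in players if j != i]

        def expected(s):
            total = 0.0
            for combo in product(*(profile[j].keys() for j in others)):
                chosen = dict(zip(others, combo))
                chosen[i] = s
                prob = 1.0
                for j in others:
                    prob *= profile[j][chosen[j]]
                total += prob * payoffs[i][tuple(chosen[j] for j in players)]
            return total

        for s, t in permutations(profile[i], 2):
            # if s does strictly worse than t, s may get at most eps times t's weight
            if expected(s) < expected(t) and profile[i][s] > eps * profile[i][t]:
                return False
    return True
</syntaxhighlight>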
Example.
The game to the right is a variant of Matching Pennies.
Player 1 (row player) hides a
penny and if Player 2 (column player) guesses correctly whether it is heads up or tails up, he gets the penny. In
this variant, Player 2 has a third option: Grabbing the penny without guessing.
The Nash equilibria of the game are the strategy profiles where Player 2 grabs the penny
with probability 1. Any mixed strategy of Player 1 is in (Nash) equilibrium with this pure strategy
of Player 2. Any such pair is even trembling hand perfect.
Intuitively, since Player 1 expects Player 2 to grab the penny, he is not concerned about
leaving Player 2 uncertain about whether it is heads up or tails up. However, it can be seen
that the unique proper equilibrium of this game is the one where Player 1 hides the penny heads up with probability 1/2 and tails up with probability 1/2 (and Player 2 grabs the penny).
This unique proper equilibrium can be motivated
intuitively as follows: Player 1 fully expects Player 2 to grab the penny.
However, Player 1 still prepares for the unlikely event that Player 2 does not grab the
penny and instead for some reason decides to make a guess. Player 1 prepares for this event by
making sure that Player 2 has no information about whether the penny is heads up or tails up,
exactly as in the original Matching Pennies game.
Proper equilibria of extensive games.
One may apply the properness notion to extensive form games in two different ways, completely analogous to the
two different ways trembling hand perfection
is applied to extensive games. This leads to the notions of normal form proper equilibrium
and extensive form proper equilibrium of an extensive form game. It was shown by van
Damme that a normal form proper equilibrium of an extensive form game is behaviorally equivalent to
a quasi-perfect equilibrium of that game. | [
{
"math_id": 0,
"text": "\\epsilon > 0"
},
{
"math_id": 1,
"text": "\\sigma"
},
{
"math_id": 2,
"text": "\\epsilon"
},
{
"math_id": 3,
"text": " u(s,\\sigma_{-i})<u(s',\\sigma_{-i})"
}
] | https://en.wikipedia.org/wiki?curid=10276223 |
10277277 | Octonion algebra | In mathematics, an octonion algebra or Cayley algebra over a field "F" is a composition algebra over "F" that has dimension 8 over "F". In other words, it is a 8-dimensional unital non-associative algebra "A" over "F" with a non-degenerate quadratic form "N" (called the "norm form") such that
formula_0
for all "x" and "y" in "A".
The most well-known example of an octonion algebra is the classical octonions, which are an octonion algebra over R, the field of real numbers. The split-octonions also form an octonion algebra over R. Up to R-algebra isomorphism, these are the only octonion algebras over the reals. The algebra of bioctonions is the octonion algebra over the complex numbers C.
The octonion algebra for "N" is a division algebra if and only if the form "N" is anisotropic. A split octonion algebra is one for which the quadratic form "N" is isotropic (i.e., there exists a non-zero vector "x" with "N"("x") = 0). Up to "F"-algebra isomorphism, there is a unique split octonion algebra over any field "F". When "F" is algebraically closed or a finite field, these are the only octonion algebras over "F".
Octonion algebras are always non-associative. They are, however, alternative algebras, alternativity being a weaker form of associativity. Moreover, the Moufang identities hold in any octonion algebra. It follows that the invertible elements in any octonion algebra form a Moufang loop, as do the elements of unit norm.
The construction of general octonion algebras over an arbitrary field "k" was described by Leonard Dickson in his book "Algebren und ihre Zahlentheorie" (1927) (page 264) and repeated by Max Zorn. The product depends on selection of a γ from "k". Given "q" and "Q" from a quaternion algebra over "k", the octonion is written "q" + "Q"e. Another octonion may be written "r" + "R"e. Then with * denoting the conjugation in the quaternion algebra, their product is
formula_1
Zorn’s German language description of this Cayley–Dickson construction contributed to the persistent use of this eponym describing the construction of composition algebras.
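A small numerical sketch of this doubling product can verify the composition law formula_0 on random elements. In the sketch, the choice γ = −1 (giving the classical division octonions), the tuple-based quaternion arithmetic and the norm N("q" + "Q"e) = N("q") − γN("Q") are conventions of the illustration, not statements taken from the article.
<syntaxhighlight lang="python">
import random

# Quaternions as 4-tuples (w, x, y, z) over the reals.
def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qconj(a):      return (a[0], -a[1], -a[2], -a[3])
def qadd(a, b):    return tuple(s + t for s, t in zip(a, b))
def qscale(c, a):  return tuple(c * s for s in a)
def qnorm(a):      return sum(s * s for s in a)

GAMMA = -1.0  # gamma = -1 yields the classical (division) octonions over the reals

def omul(x, y):
    """(q + Qe)(r + Re) = (qr + gamma * R* Q) + (Rq + Q r*) e"""
    q, Q = x
    r, R = y
    return (qadd(qmul(q, r), qscale(GAMMA, qmul(qconj(R), Q))),
            qadd(qmul(R, q), qmul(Q, qconj(r))))

def onorm(x):
    q, Q = x
    return qnorm(q) - GAMMA * qnorm(Q)  # N(q + Qe) = N(q) - gamma * N(Q)

random.seed(0)
rand_q = lambda: tuple(random.uniform(-1, 1) for _ in range(4))
a, b = (rand_q(), rand_q()), (rand_q(), rand_q())
print(abs(onorm(omul(a, b)) - onorm(a) * onorm(b)) < 1e-9)  # True: the norm is multiplicative
</syntaxhighlight>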
Cohl Furey has proposed that octonion algebras can be utilized in an attempt to reconcile components of the standard model.
Classification.
It is a theorem of Adolf Hurwitz that the "F"-isomorphism classes of the norm form are in one-to-one correspondence with the isomorphism classes of octonion "F"-algebras. Moreover, the possible norm forms are exactly the Pfister 3-forms over "F".
Since any two octonion "F"-algebras become isomorphic over the algebraic closure of "F", one can apply the ideas of non-abelian Galois cohomology. In particular, by using the fact that the automorphism group of the split octonions is the split algebraic group G2, one sees the correspondence of isomorphism classes of octonion "F"-algebras with isomorphism classes of G2-torsors over "F". These isomorphism classes form the non-abelian Galois cohomology set formula_2.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N(xy) = N(x)N(y)"
},
{
"math_id": 1,
"text": "(q + Qe)(r + Re) = (qr + \\gamma R^* Q) + (Rq + Q r^* )e ."
},
{
"math_id": 2,
"text": "H^1(F, G_2)"
}
] | https://en.wikipedia.org/wiki?curid=10277277 |
1027784 | Simplicial set | Mathematical construction used in homotopy theory
In mathematics, a simplicial set is an object composed of "simplices" in a specific way. Simplicial sets are higher-dimensional generalizations of directed graphs, partially ordered sets and categories. Formally, a simplicial set may be defined as a contravariant functor from the simplex category to the category of sets. Simplicial sets were introduced in 1950 by Samuel Eilenberg and Joseph A. Zilber.
Every simplicial set gives rise to a "nice" topological space, known as its geometric realization. This realization consists of geometric simplices, glued together according to the rules of the simplicial set. Indeed, one may view a simplicial set as a purely combinatorial construction designed to capture the essence of a "well-behaved" topological space for the purposes of homotopy theory. Specifically, the category of simplicial sets carries a natural model structure, and the corresponding homotopy category is equivalent to the familiar homotopy category of topological spaces.
Simplicial sets are used to define quasi-categories, a basic notion of higher category theory. A construction analogous to that of simplicial sets can be carried out in any category, not just in the category of sets, yielding the notion of simplicial objects.
Motivation.
A simplicial set is a categorical (that is, purely algebraic) model capturing those topological spaces that can be built up (or faithfully represented up to homotopy) from simplices and their incidence relations. This is similar to the approach of CW complexes to modeling topological spaces, with the crucial difference that simplicial sets are purely algebraic and do not carry any actual topology.
To get back to actual topological spaces, there is a "geometric realization" functor which turns simplicial sets into compactly generated Hausdorff spaces. Most classical results on CW complexes in homotopy theory are generalized by analogous results for simplicial sets. While algebraic topologists largely continue to prefer CW complexes, there is a growing contingent of researchers interested in using simplicial sets for applications in algebraic geometry where CW complexes do not naturally exist.
Intuition.
Simplicial sets can be viewed as a higher-dimensional generalization of directed multigraphs. A simplicial set contains vertices (known as "0-simplices" in this context) and arrows ("1-simplices") between some of these vertices. Two vertices may be connected by several arrows, and directed loops that connect a vertex to itself are also allowed. Unlike directed multigraphs, simplicial sets may also contain higher simplices. A 2-simplex, for instance, can be thought of as a two-dimensional "triangular" shape bounded by a list of three vertices "A", "B", "C" and three arrows "B" → "C", "A" → "C" and "A" → "B". In general, an "n"-simplex is an object made up from a list of "n" + 1 vertices (which are 0-simplices) and "n" + 1 faces (which are ("n" − 1)-simplices). The vertices of the "i"-th face are the vertices of the "n"-simplex minus the "i"-th vertex. The vertices of a simplex need not be distinct and a simplex is not determined by its vertices and faces: two different simplices may share the same list of faces (and therefore the same list of vertices), just like two different arrows in a multigraph may connect the same two vertices.
Simplicial sets should not be confused with abstract simplicial complexes, which generalize simple undirected graphs rather than directed multigraphs.
Formally, a simplicial set "X" is a collection of sets "X""n", "n" = 0, 1, 2, ..., together with certain maps between these sets: the "face maps" "d""n","i" : "X""n" → "X""n"−1 ("n" = 1, 2, 3, ... and 0 ≤ "i" ≤ "n") and "degeneracy maps" "s""n","i" : "X""n"→"X""n"+1 ("n" = 0, 1, 2, ... and 0 ≤ "i" ≤ "n"). We think of the elements of "X""n" as the "n"-simplices of "X". The map "d""n","i" assigns to each such "n"-simplex its "i"-th face, the face "opposite to" (i.e. not containing) the "i"-th vertex. The map "s""n","i" assigns to each "n"-simplex the degenerate ("n"+1)-simplex which arises from the given one by duplicating the "i"-th vertex. This description implicitly requires certain consistency relations among the maps "d""n","i" and "s""n","i". Rather than requiring these "simplicial identities" explicitly as part of the definition, the short and elegant modern definition uses the language of category theory.
Formal definition.
Let Δ denote the simplex category. The objects of Δ are nonempty linearly ordered sets of the form
["n"] = {0, 1, ..., "n"}
with "n"≥0. The morphisms in Δ are (non-strictly) order-preserving functions between these sets.
A simplicial set "X" is a contravariant functor
"X" : Δ → Set
where Set is the category of sets. (Alternatively and equivalently, one may define simplicial sets as covariant functors from the opposite category Δop to Set.) Given a simplicial set "X," we often write "Xn" instead of "X"(["n"]).
Simplicial sets form a category, usually denoted sSet, whose objects are simplicial sets and whose morphisms are natural transformations between them. This is nothing but the category of presheaves on Δ. As such, it is a topos.
Face and degeneracy maps and simplicial identities.
The simplex category Δ is generated by two particularly important families of morphisms (maps), whose images under a given simplicial set functor are called face maps and degeneracy maps of that simplicial set.
The "face maps" of a simplicial set "X" are the images in that simplicial set of the morphisms formula_0, where formula_1 is the only (order-preserving) injection formula_2 that "misses" formula_3.
Let us denote these face maps by formula_4 respectively, so that formula_5 is a map formula_6. If the first index is clear, we write formula_7 instead of formula_5.
The "degeneracy maps" of the simplicial set "X" are the images in that simplicial set of the morphisms formula_8, where formula_9 is the only (order-preserving) surjection formula_10 that "hits" formula_3 twice.
Let us denote these degeneracy maps by formula_11 respectively, so that formula_12 is a map formula_13. If the first index is clear, we write formula_14 instead of formula_12.
The defined maps satisfy the following simplicial identities:
1. formula_15 if "i" < "j". (This is short for formula_16 if 0 ≤ "i" < "j" ≤ "n".)
2. formula_17 if "i" < "j".
3. formula_18 if "i" = "j" or "i" = "j" + 1.
4. formula_19 if "i" > "j" + 1.
5. formula_20 if "i" ≤ "j".
Conversely, given a sequence of sets "Xn" together with maps formula_21 and formula_22 that satisfy the simplicial identities, there is a unique simplicial set "X" that has these face and degeneracy maps. So the identities provide an alternative way to define simplicial sets.
Examples.
Given a partially ordered set ("S",≤), we can define a simplicial set "NS", the nerve of "S", as follows: for every object ["n"] of Δ we set "NS"(["n"]) = hompo-set( ["n"] , "S"), the order-preserving maps from ["n"] to "S". Every morphism φ:["n"]→["m"] in Δ is an order preserving map, and via composition induces a map "NS"(φ) : "NS"(["m"]) → "NS"(["n"]). It is straightforward to check that "NS" is a contravariant functor from Δ to Set: a simplicial set.
Concretely, the "n"-simplices of the nerve "NS", i.e. the elements of "NS""n"="NS"(["n"]), can be thought of as ordered length-("n"+1) sequences of elements from "S": ("a"0 ≤ "a"1 ≤ ... ≤ "a""n"). The face map "d""i" drops the "i"-th element from such a list, and the degeneracy maps "s""i" duplicates the "i"-th element.
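This concrete description is easy to prototype. The sketch below (an illustration assuming the three-element chain 0 ≤ 1 ≤ 2 as "S"; the function names are ad hoc) lists the "n"-simplices of "NS" as non-decreasing sequences and spot-checks two of the simplicial identities:
<syntaxhighlight lang="python">
from itertools import combinations_with_replacement

S = [0, 1, 2]  # the chain 0 <= 1 <= 2

def simplices(n):
    """n-simplices of NS: order-preserving maps [n] -> S, i.e. non-decreasing (n+1)-tuples."""
    return list(combinations_with_replacement(S, n + 1))

def face(i, x):        # d_i drops the i-th entry
    return x[:i] + x[i + 1:]

def degeneracy(i, x):  # s_i duplicates the i-th entry
    return x[:i + 1] + (x[i],) + x[i + 1:]

for x in simplices(3):
    for j in range(4):
        for i in range(j):          # d_i d_j = d_{j-1} d_i for i < j
            assert face(i, face(j, x)) == face(j - 1, face(i, x))
        for i in (j, j + 1):        # d_i s_j = identity for i = j or i = j + 1
            assert face(i, degeneracy(j, x)) == x

print([len(simplices(n)) for n in range(4)])  # [3, 6, 10, 15]
</syntaxhighlight>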
A similar construction can be performed for every category "C", to obtain the nerve "NC" of "C". Here, "NC"(["n"]) is the set of all functors from ["n"] to "C", where we consider ["n"] as a category with objects 0,1...,"n" and a single morphism from "i" to "j" whenever "i" ≤ "j".
Concretely, the "n"-simplices of the nerve "NC" can be thought of as sequences of "n" composable morphisms in "C": "a"0 → "a"1 → ... → "a""n". (In particular, the 0-simplices are the objects of "C" and the 1-simplices are the morphisms of "C".) The face map "d"0 drops the first morphism from such a list, the face map "d""n" drops the last, and the face map "d""i" for 0 < "i" < "n" drops "ai" and composes the "i"th and ("i" + 1)th morphisms. The degeneracy maps "s""i" lengthen the sequence by inserting an identity morphism at position "i".
We can recover the poset "S" from the nerve "NS" and the category "C" from the nerve "NC"; in this sense simplicial sets generalize posets and categories.
Another important class of examples of simplicial sets is given by the singular set "SY" of a topological space "Y". Here "SY""n" consists of all the continuous maps from the standard topological "n"-simplex to "Y". The singular set is further explained below.
The standard "n"-simplex and the category of simplices.
The standard "n"-simplex, denoted Δ"n", is a simplicial set defined as the functor homΔ(-, ["n"]) where ["n"] denotes the ordered set {0, 1, ... ,"n"} of the first ("n" + 1) nonnegative integers. (In many texts, it is written instead as hom(["n"],-) where the homset is understood to be in the opposite category Δop.)
By the Yoneda lemma, the "n"-simplices of a simplicial set "X" stand in 1–1 correspondence with the natural transformations from Δ"n" to "X," i.e. formula_23.
Furthermore, "X" gives rise to a category of simplices, denoted by formula_24 , whose objects are maps ("i.e." natural transformations) Δ"n" → "X" and whose morphisms are natural transformations Δ"n" → Δ"m" over "X" arising from maps ["n"] "→" ["m"] in Δ. That is, formula_24 is a slice category of Δ over "X". The following isomorphism shows that a simplicial set "X" is a colimit of its simplices:
formula_25
where the colimit is taken over the category of simplices of "X".
Geometric realization.
There is a functor |•|: sSet "→" CGHaus called the geometric realization taking a simplicial set "X" to its corresponding realization in the category of compactly-generated Hausdorff topological spaces. Intuitively, the realization of "X" is the topological space (in fact a CW complex) obtained if every "n-"simplex of "X" is replaced by a topological "n-"simplex (a certain "n-"dimensional subset of ("n" + 1)-dimensional Euclidean space defined below) and these topological simplices are glued together in the fashion the simplices of "X" hang together. In this process the orientation of the simplices of "X" is lost.
To define the realization functor, we first define it on standard n-simplices Δ"n" as follows: the geometric realization |Δ"n"| is the standard topological "n"-simplex in general position given by
formula_26
The definition then naturally extends to any simplicial set "X" by setting
|X| = limΔ"n" → "X" | Δ"n"|
where the colimit is taken over the n-simplex category of "X". The geometric realization is functorial on sSet.
It is significant that we use the category CGHaus of compactly-generated Hausdorff spaces, rather than the category Top of topological spaces, as the target category of geometric realization: like sSet and unlike Top, the category CGHaus is cartesian closed; the categorical product is defined differently in the categories Top and CGHaus, and the one in CGHaus corresponds to the one in sSet via geometric realization.
Singular set for a space.
The singular set of a topological space "Y" is the simplicial set "SY" defined by
("SY")(["n"]) = homTop(|Δ"n"|, "Y") for each object ["n"] ∈ Δ.
Every order-preserving map φ:["n"]→["m"] induces a continuous map |Δ"n"|→|Δ"m"| in a natural way, which by composition yields "SY"("φ") : "SY"(["m"]) → "SY"(["n"]). This definition is analogous to a standard idea in singular homology of "probing" a target topological space with standard topological "n"-simplices. Furthermore, the singular functor "S" is right adjoint to the geometric realization functor described above, i.e.:
homTop(|"X"|, "Y") ≅ homsSet("X", "SY")
for any simplicial set "X" and any topological space "Y". Intuitively, this adjunction can be understood as follows: a continuous map from the geometric realization of "X" to a space "Y" is uniquely specified if we associate to every simplex of "X" a continuous map from the corresponding standard topological simplex to "Y," in such a fashion that these maps are compatible with the way the simplices in "X" hang together.
Homotopy theory of simplicial sets.
In order to define a model structure on the category of simplicial sets, one has to define fibrations, cofibrations and weak equivalences. One can define fibrations to be Kan fibrations. A map of simplicial sets is defined to be a weak equivalence if its geometric realization is a weak homotopy equivalence of spaces. A map of simplicial sets is defined to be a cofibration if it is a monomorphism of simplicial sets. It is a difficult theorem of Daniel Quillen that the category of simplicial sets with these classes of morphisms becomes a model category, and indeed satisfies the axioms for a proper closed simplicial model category.
A key turning point of the theory is that the geometric realization of a Kan fibration is a Serre fibration of spaces. With the model structure in place, a homotopy theory of simplicial sets can be developed using standard homotopical algebra methods. Furthermore, the geometric realization and singular functors give a Quillen equivalence of closed model categories inducing an equivalence
|•|: "Ho"(sSet) ↔ "Ho"(Top)
between the homotopy category for simplicial sets and the usual homotopy category of CW complexes with homotopy classes of continuous maps between them. It is part of the general
definition of a Quillen adjunction that the right adjoint functor (in this case, the singular set functor) carries fibrations (resp. trivial fibrations) to fibrations (resp. trivial fibrations).
Simplicial objects.
A simplicial object "X" in a category "C" is a contravariant functor
"X" : Δ → "C"
or equivalently a covariant functor
"X": Δop → "C,"
where Δ still denotes the simplex category and op the opposite category. When "C" is the category of sets, we are just talking about the simplicial sets that were defined above. Letting "C" be the category of groups or category of abelian groups, we obtain the categories sGrp of simplicial groups and sAb of simplicial abelian groups, respectively.
Simplicial groups and simplicial abelian groups also carry closed model structures induced by that of the underlying simplicial sets.
The homotopy groups of simplicial abelian groups can be computed by making use of the Dold–Kan correspondence which yields an equivalence of categories between simplicial abelian groups and bounded chain complexes and is given by functors
"N:" sAb → Ch+
and
Γ: Ch+ → sAb.
History and uses of simplicial sets.
Simplicial sets were originally used to give precise and convenient descriptions of classifying spaces of groups. This idea was vastly extended by Grothendieck's idea of
considering classifying spaces of categories, and in particular by Quillen's work on algebraic K-theory. In this work, which earned him a Fields Medal, Quillen
developed surprisingly efficient methods for manipulating
infinite simplicial sets. These methods were used in other areas on the border between algebraic geometry and topology. For instance, the André–Quillen homology of a ring is a "non-abelian homology", defined and studied in this way.
Both the algebraic K-theory and the André–Quillen homology are defined using algebraic data to write down a simplicial set, and then taking the homotopy groups of this simplicial set.
Simplicial methods are often useful when one wants to prove that a space is a loop space. The basic idea is that if formula_27 is a group with classifying space formula_28, then formula_27 is homotopy equivalent to the loop space formula_29. If formula_28 itself is a group, we can iterate the procedure, and formula_27 is homotopy equivalent to the double loop space formula_30. In case formula_27 is an abelian group, we can actually iterate this infinitely many times, and obtain that formula_27 is an infinite loop space.
Even if formula_31 is not an abelian group, it can happen that it has a composition which is sufficiently commutative so that one can use the above idea to prove that formula_31 is an infinite loop space. In this way, one can prove that the algebraic formula_32-theory of a ring, considered as a topological space, is an infinite loop space.
In recent years, simplicial sets have been used in higher category theory and derived algebraic geometry. Quasi-categories can be thought of as categories in which the composition of morphisms is defined only up to homotopy, and information about the composition of higher homotopies is also retained. Quasi-categories are defined as simplicial sets satisfying one additional condition, the weak Kan condition.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\delta^{n,0},\\dotsc,\\delta^{n,n}\\colon[n-1]\\to[n]"
},
{
"math_id": 1,
"text": "\\delta^{n,i}"
},
{
"math_id": 2,
"text": "[n-1]\\to[n]"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "d_{n,0},\\dotsc,d_{n,n}"
},
{
"math_id": 5,
"text": "d_{n,i}"
},
{
"math_id": 6,
"text": "X_n \\to X_{n-1}"
},
{
"math_id": 7,
"text": "d_i"
},
{
"math_id": 8,
"text": "\\sigma^{n,0},\\dotsc,\\sigma^{n,n}\\colon[n+1]\\to[n]"
},
{
"math_id": 9,
"text": "\\sigma^{n,i}"
},
{
"math_id": 10,
"text": "[n+1]\\to[n]"
},
{
"math_id": 11,
"text": "s_{n,0},\\dotsc,s_{n,n}"
},
{
"math_id": 12,
"text": "s_{n,i}"
},
{
"math_id": 13,
"text": "X_n \\to X_{n+1}"
},
{
"math_id": 14,
"text": "s_i"
},
{
"math_id": 15,
"text": "d_i d_j = d_{j-1} d_i"
},
{
"math_id": 16,
"text": "d_{n-1,i} d_{n,j} = d_{n-1,j-1} d_{n,i}"
},
{
"math_id": 17,
"text": "d_i s_j = s_{j-1}d_i"
},
{
"math_id": 18,
"text": "d_i s_j = \\text{id}"
},
{
"math_id": 19,
"text": "d_i s_j = s_j d_{i-1}"
},
{
"math_id": 20,
"text": "s_i s_j = s_{j+1} s_i"
},
{
"math_id": 21,
"text": "d_{n,i} : X_n \\to X_{n-1}"
},
{
"math_id": 22,
"text": "s_{n,i} : X_n \\to X_{n+1}"
},
{
"math_id": 23,
"text": "X_n = X([n])\\cong \\operatorname{Nat}(\\operatorname{hom}_\\Delta(-,[n]),X)=\n\\operatorname{hom}_{\\textbf{sSet}}(\\Delta^n,X)"
},
{
"math_id": 24,
"text": "\\Delta\\downarrow{X}"
},
{
"math_id": 25,
"text": "X \\cong \\varinjlim_{\\Delta^n \\to X} \\Delta^n"
},
{
"math_id": 26,
"text": "|\\Delta^n| = \\{(x_0, \\dots, x_n) \\in \\mathbb{R}^{n+1}: 0\\leq x_i \\leq 1, \\sum x_i = 1 \\}."
},
{
"math_id": 27,
"text": "G"
},
{
"math_id": 28,
"text": "BG"
},
{
"math_id": 29,
"text": "\\Omega BG"
},
{
"math_id": 30,
"text": "\\Omega^2 B(BG)"
},
{
"math_id": 31,
"text": "X"
},
{
"math_id": 32,
"text": "K"
}
] | https://en.wikipedia.org/wiki?curid=1027784 |
10279126 | Aristotelian physics | Natural sciences as described by Aristotle
Aristotelian physics is the form of natural philosophy described in the works of the Greek philosopher Aristotle (384–322 BC). In his work "Physics", Aristotle intended to establish general principles of change that govern all natural bodies, both living and inanimate, celestial and terrestrial – including all motion (change with respect to place), quantitative change (change with respect to size or number), qualitative change, and substantial change ("coming to be" [coming into existence, 'generation'] or "passing away" [no longer existing, 'corruption']). To Aristotle, 'physics' was a broad field including subjects which would now be called the philosophy of mind, sensory experience, memory, anatomy and biology. It constitutes the foundation of the thought underlying many of his works.
Key concepts of Aristotelian physics include the structuring of the cosmos into concentric spheres, with the Earth at the centre and celestial spheres around it. The terrestrial sphere was made of four elements, namely earth, air, fire, and water, subject to change and decay. The celestial spheres were made of a fifth element, an unchangeable aether. Objects made of these elements have natural motions: those of earth and water tend to fall; those of air and fire, to rise. The speed of such motion depends on their weights and the density of the medium. Aristotle argued that a vacuum could not exist as speeds would become infinite.
Aristotle described four causes or explanations of change as seen on earth: the material, formal, efficient, and final causes of things. As regards living things, Aristotle's biology relied on observation of natural kinds, both the basic kinds and the groups to which these belonged. He did not conduct experiments in the modern sense, but relied on amassing data, observational procedures such as dissection, and making hypotheses about relationships between measurable quantities such as body size and lifespan.
Methods.
<templatestyles src="Template:Blockquote/styles.css" />Aristotle
While consistent with common human experience, Aristotle's principles were not based on controlled, quantitative experiments, so they do not describe our universe in the precise, quantitative way now expected of science. Contemporaries of Aristotle like Aristarchus rejected these principles in favor of heliocentrism, but their ideas were not widely accepted. Aristotle's principles were difficult to disprove merely through casual everyday observation, but later development of the scientific method challenged his views with experiments and careful measurement, using increasingly advanced technology such as the telescope and vacuum pump.
<templatestyles src="Template:Blockquote/styles.css" />In claiming novelty for their doctrines, those natural philosophers who developed the "new science" of the seventeenth century frequently contrasted "Aristotelian" physics with their own. Physics of the former sort, so they claimed, emphasized the qualitative at the expense of the quantitative, neglected mathematics and its proper role in physics (particularly in the analysis of local motion), and relied on such suspect explanatory principles as final causes and "occult" essences. Yet in his "Physics" Aristotle characterizes physics or the "science of nature" as pertaining to magnitudes ("megethê"), motion (or "process" or "gradual change" – "kinêsis"), and time ("chronon") ("Phys" III.4 202b30–1). Indeed, the "Physics" is largely concerned with an analysis of motion, particularly local motion, and the other concepts that Aristotle believes are requisite to that analysis.
There are clear differences between modern and Aristotelian physics, the main being the use of mathematics, largely absent in Aristotle. Some recent studies, however, have re-evaluated Aristotle's physics, stressing both its empirical validity and its continuity with modern physics.
Concepts.
Elements and spheres.
Aristotle divided his universe into "terrestrial spheres" which were "corruptible" and where humans lived, and moving but otherwise unchanging celestial spheres.
Aristotle believed that four classical elements make up everything in the terrestrial spheres: earth, air, fire and water. He also held that the heavens are made of a special weightless and incorruptible (i.e. unchangeable) fifth element called "aether". Aether also has the name "quintessence", meaning, literally, "fifth being".
Aristotle considered heavy matter such as iron and other metals to consist primarily of the element earth, with a smaller amount of the other three terrestrial elements. Other, lighter objects, he believed, have less earth, relative to the other three elements in their composition.
The four classical elements were not invented by Aristotle; they were originated by Empedocles. During the Scientific Revolution, the ancient theory of classical elements was found to be incorrect, and was replaced by the empirically tested concept of chemical elements.
Celestial spheres.
According to Aristotle, the Sun, Moon, planets and stars are embedded in perfectly concentric "crystal spheres" that rotate eternally at fixed rates. Because the celestial spheres are incapable of any change except rotation, the terrestrial sphere of fire must account for the heat, starlight and occasional meteorites. The lowest, lunar sphere is the only celestial sphere that actually comes in contact with the sublunary orb's changeable, terrestrial matter, dragging the rarefied fire and air along underneath as it rotates. The celestial spheres are composed of the special element "aether", eternal and unchanging, the sole capability of which is a uniform circular motion at a given rate (relative to the diurnal motion of the outermost sphere of fixed stars). Like Homer's "æthere" (αἰθήρ) – the "pure air" of Mount Olympus – it was the divine counterpart of the air breathed by mortal beings (άήρ, "aer").
The concentric, aetherial, cheek-by-jowl "crystal spheres" that carry the Sun, Moon and stars move eternally with unchanging circular motion. Spheres are embedded within spheres to account for the "wandering stars" (i.e. the planets, which, in comparison with the Sun, Moon and stars, appear to move erratically). Mercury, Venus, Mars, Jupiter, and Saturn are the only planets (including minor planets) which were visible before the invention of the telescope, which is why Neptune and Uranus are not included, nor are any asteroids. Later, the belief that all spheres are concentric was forsaken in favor of Ptolemy's deferent and epicycle model. Aristotle submits to the calculations of astronomers regarding the total number of spheres and various accounts give a number in the neighborhood of fifty spheres. An unmoved mover is assumed for each sphere, including a "prime mover" for the sphere of fixed stars. The unmoved movers do not push the spheres (nor could they, being immaterial and dimensionless) but are the final cause of the spheres' motion, i.e. they explain it in a way that's similar to the explanation "the soul is moved by beauty".
Terrestrial change.
Unlike the eternal and unchanging celestial aether, each of the four terrestrial elements are capable of changing into either of the two elements they share a property with: e.g. the cold and wet (water) can transform into the hot and wet (air) or the cold and dry (earth). Any apparent change from cold and wet into the hot and dry (fire) is actually a two-step process, as first one of the property changes, then the other. These properties are predicated of an actual substance relative to the work it is able to do; that of heating or chilling and of desiccating or moistening. The four elements exist "only" with regard to this capacity and relative to some potential work. The celestial element is eternal and unchanging, so only the four terrestrial elements account for "coming to be" and "passing away" – or, in the terms of Aristotle's On Generation and Corruption (Περὶ γενέσεως καὶ φθορᾶς), "generation" and "corruption".
Natural place.
The Aristotelian explanation of gravity is that all bodies move toward their natural place. For the elements earth and water, that place is the center of the (geocentric) universe; the natural place of water is a concentric shell around the Earth because earth is heavier; it sinks in water. The natural place of air is likewise a concentric shell surrounding that of water; bubbles rise in water. Finally, the natural place of fire is higher than that of air but below the innermost celestial sphere (carrying the Moon).
In Book "Delta" of his "Physics" (IV.5), Aristotle defines "topos" (place) in terms of two bodies, one of which contains the other: a "place" is where the inner surface of the former (the containing body) touches the contained body. This definition remained dominant until the beginning of the 17th century, even though it had been questioned and debated by philosophers since antiquity. The most significant early critique was made in terms of geometry by the 11th-century Arab polymath al-Hasan Ibn al-Haytham (Alhazen) in his "Discourse on Place".
Natural motion.
Terrestrial objects rise or fall, to a greater or lesser extent, according to the ratio of the four elements of which they are composed. For example, earth, the heaviest element, and water, fall toward the center of the cosmos; hence the Earth and for the most part its oceans, will have already come to rest there. At the opposite extreme, the lightest elements, air and especially fire, rise up and away from the center.
The elements are not proper "substances" in Aristotelian theory (or the modern sense of the word). Instead, they are abstractions used to explain the varying natures and behaviors of actual materials in terms of ratios between them.
Motion and change are closely related in Aristotelian physics. Motion, according to Aristotle, involved a change from potentiality to actuality. He gave examples of four types of change, namely change in substance, in quality, in quantity and in place.
Aristotle proposed that the speed at which two identically shaped objects sink or fall is directly proportional to their weights and inversely proportional to the density of the medium through which they move. While describing their terminal velocity, Aristotle must stipulate that there would be no limit at which to compare the speed of atoms falling through a vacuum (they could move indefinitely fast because there would be no particular place for them to come to rest in the void). It is now understood, however, that at any time prior to achieving terminal velocity in a relatively resistance-free medium like air, two such objects are expected to have nearly identical speeds because both are experiencing a force of gravity proportional to their masses and have thus been accelerating at nearly the same rate. This became especially apparent from the eighteenth century when partial vacuum experiments began to be made, but some two hundred years earlier Galileo had already demonstrated that objects of different weights reach the ground in similar times.
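Read as a formula, this rule says the speed "v" of a falling body is proportional to its weight "W" and inversely proportional to the density ρ of the medium, v ∝ W/ρ. A toy numerical sketch of its predictions (the weights, densities and proportionality constant below are made-up illustration values, not from the text):
<syntaxhighlight lang="python">
def aristotelian_speed(weight, medium_density, k=1.0):
    """Aristotle's rule v = k * W / rho; it diverges as the medium density tends to 0."""
    return k * weight / medium_density

for medium, rho in [("water", 1000.0), ("air", 1.2)]:
    ratio = aristotelian_speed(10.0, rho) / aristotelian_speed(1.0, rho)
    print(medium, ratio)  # 10.0 in both media: a ten-times heavier body is predicted to fall ten times faster

print(aristotelian_speed(1.0, 1e-9))  # 1e9: in a near-void the predicted speed grows without bound
</syntaxhighlight>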
Unnatural motion.
Apart from the natural tendency of terrestrial exhalations to rise and objects to fall, unnatural or forced motion from side to side results from the turbulent collision and sliding of the objects as well as transmutation between the elements (On Generation and Corruption).
Chance.
In his "Physics" Aristotle examines accidents (συμβεβηκός, "symbebekòs") that have no cause but chance. "Nor is there any definite cause for an accident, but only chance (τύχη, "týche"), namely an indefinite (ἀόριστον, "aóriston") cause" ("Metaphysics" V, 1025a25).
It is obvious that there are principles and causes which are generable and destructible apart from the actual processes of generation and destruction; for if this is not true, everything will be of necessity: that is, if there must necessarily be some cause, other than accidental, of that which is generated and destroyed. Will this be, or not? Yes, if this happens; otherwise not ("Metaphysics" VI, 1027a29).
Continuum and vacuum.
Aristotle argues against the indivisibles of Democritus (which differ considerably from the historical and the modern use of the term "atom"). As a place without anything existing at or within it, Aristotle argued against the possibility of a vacuum or void. Because he believed that the speed of an object's motion is proportional to the force being applied (or, in the case of natural motion, the object's weight) and inversely proportional to the density of the medium, he reasoned that objects moving in a void would move indefinitely fast – and thus any and all objects surrounding the void would immediately fill it. The void, therefore, could never form.
The "voids" of modern-day astronomy (such as the Local Void adjacent to our own galaxy) have the opposite effect: ultimately, bodies off-center are ejected from the void due to the gravity of the material outside.
Four causes.
According to Aristotle, there are four ways to explain the "aitia" or causes of change. He writes that "we do not have knowledge of a thing until we have grasped its why, that is to say, its cause."
Aristotle held that there were four kinds of causes.
Material.
The material cause of a thing is that of which it is made. For a table, that might be wood; for a statue, that might be bronze or marble.
<templatestyles src="Template:Blockquote/styles.css" />"In one way we say that the "aition" is that out of which, as existing, something comes to be, like the bronze for the statue, the silver for the phial, and their genera" (194b23—6). By "genera", Aristotle means more general ways of classifying the matter (e.g. "metal"; "material"); and that will become important. A little later on, he broadens the range of the material cause to include letters (of syllables), fire and the other elements (of physical bodies), parts (of wholes), and even premisses (of conclusions: Aristotle re-iterates this claim, in slightly different terms, in "An. Post" II. 11).
Formal.
The formal cause of a thing is the essential property that makes it the kind of thing it is. In "Metaphysics" Book Α Aristotle emphasizes that form is closely related to essence and definition. He says for example that the ratio 2:1, and number in general, is the cause of the octave.
<templatestyles src="Template:Blockquote/styles.css" />"Another [cause] is the form and the exemplar: this is the formula (logos) of the essence "(to ti en einai)", and its genera, for instance the ratio 2:1 of the octave" ("Phys" II.3 194b26—8)... Form is not just shape... We are asking (and this is the connection with essence, particularly in its canonical Aristotelian formulation) what it is to be some thing. And it is a feature of musical harmonics (first noted and wondered at by the Pythagoreans) that intervals of this type do indeed exhibit this ratio in some form in the instruments used to create them (the length of pipes, of strings, etc.). In some sense, the ratio explains what all the intervals have in common, why they turn out the same.
Efficient.
The efficient cause of a thing is the primary agency by which its matter took its form. For example, the efficient cause of a baby is a parent of the same species and that of a table is a carpenter, who knows the form of the table. In his "Physics" II, 194b29—32, Aristotle writes: "there is that which is the primary originator of the change and of its cessation, such as the deliberator who is responsible [sc. for the action] and the father of the child, and in general the producer of the thing produced and the changer of the thing changed".
<templatestyles src="Template:Blockquote/styles.css" />Aristotle’s examples here are instructive: one case of mental and one of physical causation, followed by a perfectly general characterization. But they conceal (or at any rate fail to make patent) a crucial feature of Aristotle’s concept of efficient causation, and one which serves to distinguish it from most modern homonyms. For Aristotle, any process requires a constantly operative efficient cause as long as it continues. This commitment appears most starkly to modern eyes in Aristotle’s discussion of projectile motion: what keeps the projectile moving after it leaves the
hand? "Impetus", "momentum", much less "inertia", are not possible answers. There must be a mover, distinct (at least in some sense) from the thing moved, which is exercising its motive capacity at every moment of the projectile’s flight (see "Phys" VIII. 10 266b29—267a11). Similarly, in every case of animal generation, there is always some thing responsible for the continuity of that generation, although it may do so by way of some intervening instrument ("Phys" II.3 194b35—195a3).
Final.
The final cause is that for the sake of which something takes place, its aim or teleological purpose: for a germinating seed, it is the adult plant, for a ball at the top of a ramp, it is coming to rest at the bottom, for an eye, it is seeing, for a knife, it is cutting.
<templatestyles src="Template:Blockquote/styles.css" />Goals have an explanatory function: that is a commonplace, at least in the context of action-ascriptions. Less of a commonplace is the view espoused by Aristotle, that finality and purpose are to be found throughout nature, which is for him the realm of those things which contain within themselves principles of movement and rest (i.e. efficient causes); thus it makes sense to attribute purposes not only to natural things themselves, but also to their parts: the parts of a natural whole exist for the sake of the whole. As Aristotle himself notes, "for the sake of" locutions are ambiguous: ""A" is for the sake of "B"" may mean that "A" exists or is undertaken in order to bring "B" about; or it may mean that "A" is for "B’s" benefit ("An" II.4 415b2—3, 20—1); but both types of finality have, he thinks, a crucial role to play in natural, as well as deliberative, contexts. Thus a man may exercise for the sake of his health: and so "health", and not just the hope of achieving it, is the cause of his action (this distinction is not trivial). But the eyelids are for the sake of the eye (to protect it: "PA" II.1 3) and the eye for the sake of the animal as a whole (to help it function properly: cf. "An" II.7).
Biology.
According to Aristotle, the science of living things proceeds by gathering observations about each natural kind of animal, organizing them into genera and species (the "differentiae" in "History of Animals") and then going on to study the causes (in "Parts of Animals" and "Generation of Animals", his three main biological works).
<templatestyles src="Template:Blockquote/styles.css" />The four causes of animal generation can be summarized as follows. The mother and father represent the material and efficient causes, respectively. The mother provides the matter out of which the embryo is formed, while the father provides the agency that informs that material and triggers its development. The formal cause is the definition of the animal’s substantial being ("GA" I.1 715a4: "ho logos tês ousias"). The final cause is the adult form, which is the end for the sake of which development takes place.
Organism and mechanism.
The four elements make up the uniform materials such as blood, flesh and bone, which are themselves the matter out of which are created the non-uniform organs of the body (e.g. the heart, liver and hands) "which in turn, as parts, are matter for the functioning body as a whole ("PA" II. 1 646a 13—24)".
<templatestyles src="Template:Blockquote/styles.css" />[There] is a certain obvious conceptual economy about the view that in natural processes naturally constituted things simply seek to realize in full actuality the potentials contained within them (indeed, this is what "is" for them to be natural); on the other hand, as the detractors of Aristotelianism from the seventeenth century on were not slow to point out, this economy is won at the expense of any serious empirical content. Mechanism, at least as practiced by Aristotle’s contemporaries and predecessors, may have been explanatorily inadequate – but at least it was an "attempt" at a general account given in reductive terms of the lawlike connections between things. Simply introducing what later reductionists were to scoff at as "occult qualities" does not explain – it merely, in the manner of Molière’s famous satirical joke, serves to re-describe the effect. Formal talk, or so it is said, is vacuous.<br><br>Things are not however quite as bleak as this. For one thing, there’s no point in trying to engage in reductionist science if you don’t have the wherewithal, empirical and conceptual, to do so successfully: science shouldn't be simply unsubstantiated speculative metaphysics. But more than that, there is a point to describing the world in such teleologically loaded terms: it makes sense of things in a way that atomist speculations do not. And further, Aristotle’s talk of species-forms is not as empty as his opponents would insinuate. He doesn't simply say that things do what they do because that's the sort of thing they do: the whole point of his classificatory biology, most clearly exemplified in "PA", is to show what sorts of function go with what, which presuppose which and which are subservient to which. And in this sense, formal or functional biology is susceptible of a type of reductionism. We start, he tells us, with the basic animal kinds which we all pre-theoretically (although not indefeasibly) recognize (cf. "PA" I.4): but we then go on to show how their parts relate to one another: why it is, for instance, that only blooded creatures have lungs, and how certain structures in one species are analogous or homologous to those in another (such as scales in fish, feathers in birds, hair in mammals). And the answers, for Aristotle, are to be found in the economy of functions, and how they all contribute to the overall well-being (the final cause in this sense) of the animal.
"See also Organic form."
Psychology.
According to Aristotle, perception and thought are similar, though not exactly alike in that perception is concerned only with the external objects that are acting on our sense organs at any given time, whereas we can think about anything we choose. Thought is about universal forms, in so far as they have been successfully understood, based on our memory of having encountered instances of those forms directly.
<templatestyles src="Template:Blockquote/styles.css" />Aristotle’s theory of cognition rests on two central pillars: his account of perception and his account of thought. Together, they make up a significant portion of his psychological writings, and his discussion of other mental states depends critically on them. These two activities, moreover, are conceived of in an analogous manner, at least with regard to their most basic forms. Each activity is triggered by its object – each, that is, is about the very thing that brings it about. This simple causal account explains the reliability of cognition: perception and thought are, in effect, transducers, bringing information about the world into our cognitive systems, because, at least in their most basic forms, they are infallibly about the causes that bring them about ("An" III.4 429a13–18). Other, more complex mental states are far from infallible. But they are still tethered to the world, in so far as they rest on the unambiguous and direct contact perception and thought enjoy with their objects.
Medieval commentary.
The Aristotelian theory of motion came under criticism and modification during the Middle Ages. Modifications began with John Philoponus in the 6th century, who partly accepted Aristotle's theory that "continuation of motion depends on continued action of a force" but modified it to include his idea that a hurled body also acquires an inclination (or "motive power") for movement away from whatever caused it to move, an inclination that secures its continued motion. This impressed virtue would be temporary and self-expending, meaning that all motion would tend toward the form of Aristotle's natural motion.
In "The Book of Healing" (1027), the 11th-century Persian polymath Avicenna developed Philoponean theory into the first coherent alternative to Aristotelian theory. Inclinations in the Avicennan theory of motion were not self-consuming but permanent forces whose effects were dissipated only as a result of external agents such as air resistance, making him "the first to conceive such a permanent type of impressed virtue for non-natural motion". Such a self-motion ("mayl") is "almost the opposite of the Aristotelian conception of violent motion of the projectile type, and it is rather reminiscent of the principle of inertia, i.e. Newton's first law of motion."
The eldest Banū Mūsā brother, Ja'far Muhammad ibn Mūsā ibn Shākir (800-873), wrote the "Astral Motion" and "The Force of Attraction". The Persian physicist, Ibn al-Haytham (965-1039) discussed the theory of attraction between bodies. It seems that he was aware of the magnitude of acceleration due to gravity and he discovered that the heavenly bodies "were accountable to the laws of physics". During his debate with Avicenna, al-Biruni also criticized the Aristotelian theory of gravity firstly for denying the existence of levity or gravity in the celestial spheres; and, secondly, for its notion of circular motion being an innate property of the heavenly bodies.
Hibat Allah Abu'l-Barakat al-Baghdaadi (1080–1165) wrote "al-Mu'tabar", a critique of Aristotelian physics where he negated Aristotle's idea that a constant force produces uniform motion, as he realized that a force applied continuously produces acceleration, a fundamental law of classical mechanics and an early foreshadowing of Newton's second law of motion. Like Newton, he described acceleration as the rate of change of speed.
In the 14th century, Jean Buridan developed the theory of impetus as an alternative to the Aristotelian theory of motion. The theory of impetus was a precursor to the concepts of inertia and momentum in classical mechanics. Buridan and Albert of Saxony also refer to Abu'l-Barakat in explaining that the acceleration of a falling body is a result of its increasing impetus. In the 16th century, Al-Birjandi discussed the possibility of the Earth's rotation and, in his analysis of what might occur if the Earth were rotating, developed a hypothesis similar to Galileo's notion of "circular inertia". He described it in terms of the following observational test:
<templatestyles src="Template:Blockquote/styles.css" />"The small or large rock will fall to the Earth along the path of a line that is perpendicular to the plane ("sath") of the horizon; this is witnessed by experience ("tajriba"). And this perpendicular is away from the tangent point of the Earth’s sphere and the plane of the perceived ("hissi") horizon. This point moves with the motion of the Earth and thus there will be no difference in place of fall of the two rocks."
Life and death of Aristotelian physics.
The reign of Aristotelian physics, the earliest known speculative theory of physics, lasted almost two millennia. After the work of many pioneers such as Copernicus, Tycho Brahe, Galileo, Kepler, Descartes and Newton, it became generally accepted that Aristotelian physics was neither correct nor viable. Despite this, it survived as a scholastic pursuit well into the seventeenth century, until universities amended their curricula.
In Europe, Aristotle's theory was first convincingly discredited by Galileo's studies. Using a telescope, Galileo observed that the Moon was not entirely smooth, but had craters and mountains, contradicting the Aristotelian idea of the incorruptibly perfect smooth Moon. Galileo also criticized this notion theoretically; a perfectly smooth Moon would reflect light unevenly like a shiny billiard ball, so that the edges of the moon's disk would have a different brightness than the point where a tangent plane reflects sunlight directly to the eye. A rough moon reflects in all directions equally, leading to a disk of approximately equal brightness which is what is observed. Galileo also observed that Jupiter has moons – i.e. objects revolving around a body other than the Earth – and noted the phases of Venus, which demonstrated that Venus (and, by implication, Mercury) traveled around the Sun, not the Earth.
According to legend, Galileo dropped balls of various densities from the Tower of Pisa and found that lighter and heavier ones fell at almost the same speed. His experiments actually took place using balls rolling down inclined planes, a form of falling sufficiently slow to be measured without advanced instruments.
In a relatively dense medium such as water, a heavier body falls faster than a lighter one. This led Aristotle to speculate that the rate of falling is proportional to the weight and inversely proportional to the density of the medium. From his experience with objects falling in water, he concluded that water is approximately ten times denser than air. By weighing a volume of compressed air, Galileo showed that this overestimates the density of air by a factor of forty. From his experiments with inclined planes, he concluded that if friction is neglected, all bodies fall at the same rate (which is also not quite true, since not only friction but also the density of the medium relative to the density of the bodies has to be negligible; Aristotle correctly noticed that the density of the medium is a factor, but focused on body weight instead of density, while Galileo neglected the density of the medium, which led him to the correct conclusion for a vacuum).
Galileo also advanced a theoretical argument to support his conclusion. He asked if two bodies of different weights and different rates of fall are tied by a string, does the combined system fall faster because it is now more massive, or does the lighter body in its slower fall hold back the heavier body? The only convincing answer is neither: all the systems fall at the same rate.
Followers of Aristotle were aware that the motion of falling bodies was not uniform, but picked up speed with time. Since time is an abstract quantity, the peripatetics postulated that the speed was proportional to the distance. Galileo established experimentally that the speed is proportional to the time, but he also gave a theoretical argument that the speed could not possibly be proportional to the distance. In modern terms, if the rate of fall is proportional to the distance, the differential expression for the distance y travelled after time t is:
formula_0
with the condition that formula_1. Galileo demonstrated that this system would stay at formula_2 for all time. If a perturbation set the system into motion somehow, the object would pick up speed exponentially in time, not linearly.
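A brief numerical sketch of this argument (using formula_0 with a unit proportionality constant and a forward-Euler step, both choices of the illustration): with formula_1 the body never starts to move, while a tiny perturbation grows exponentially rather than linearly.
<syntaxhighlight lang="python">
def fall(y0, k=1.0, dt=1e-3, t_end=5.0):
    """Integrate dy/dt = k*y (speed proportional to distance) with a simple Euler step."""
    y, t = y0, 0.0
    while t < t_end:
        y += k * y * dt
        t += dt
    return y

print(fall(0.0))   # 0.0 -- starting from rest at y = 0, the body never moves
print(fall(1e-6))  # about 1.48e-4, i.e. roughly 1e-6 * e**5: exponential, not linear, growth
</syntaxhighlight>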
Standing on the surface of the Moon in 1971, David Scott famously repeated Galileo's experiment by dropping a feather and a hammer from each hand at the same time. In the absence of a substantial atmosphere, the two objects fell and hit the Moon's surface at the same time.
The first convincing mathematical theory of gravity – in which two masses are attracted toward each other by a force whose effect decreases according to the inverse square of the distance between them – was Newton's law of universal gravitation. This, in turn, was replaced by the General theory of relativity due to Albert Einstein.
Modern evaluations of Aristotle's physics.
Modern scholars differ in their opinions of whether Aristotle's physics were sufficiently based on empirical observations to qualify as science, or else whether they were derived primarily from philosophical speculation and thus fail to satisfy the scientific method.
Carlo Rovelli has argued that Aristotle's physics are an accurate and non-intuitive representation of a particular domain (motion in fluids), and thus are just as scientific as Newton's laws of motion, which also are accurate in some domains while failing in others (i.e. special and general relativity).
Notes.
<templatestyles src="Refbegin/styles.css" />
a Here, the term "Earth" does not refer to planet Earth, known by modern science to be composed of a large number of chemical elements. Modern chemical elements are not conceptually similar to Aristotle's elements; the term "air", for instance, does not refer to breathable air.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n{dy\\over dt} \\propto y\n"
},
{
"math_id": 1,
"text": "y(0)=0"
},
{
"math_id": 2,
"text": "y=0"
}
] | https://en.wikipedia.org/wiki?curid=10279126 |
10280254 | Isothermal coordinates | In mathematics, specifically in differential geometry, isothermal coordinates on a Riemannian manifold are local coordinates where the metric is conformal to the Euclidean metric. This means that in isothermal coordinates, the Riemannian metric locally has the form
formula_0
where formula_1 is a positive smooth function. (If the Riemannian manifold is oriented, some authors insist that a coordinate system must agree with that orientation to be isothermal.)
Isothermal coordinates on surfaces were first introduced by Gauss. Korn and Lichtenstein proved that isothermal coordinates exist around any point on a two dimensional Riemannian manifold.
By contrast, most higher-dimensional manifolds do not admit isothermal coordinates anywhere; that is, they are not usually locally conformally flat. In dimension 3, a Riemannian metric is locally conformally flat if and only if its Cotton tensor vanishes. In dimensions > 3, a metric is locally conformally flat if and only if its Weyl tensor vanishes.
Isothermal coordinates on surfaces.
In 1822, Carl Friedrich Gauss proved the existence of isothermal coordinates on an arbitrary surface with a real-analytic Riemannian metric, following earlier results of
Joseph Lagrange in the special case of surfaces of revolution. The construction used by Gauss made use of the Cauchy–Kowalevski theorem, so that his method is fundamentally restricted to the real-analytic context. Following innovations in the theory of two-dimensional partial differential equations by Arthur Korn, Leon Lichtenstein found in 1916 the general existence of isothermal coordinates for Riemannian metrics of lower regularity, including smooth metrics and even Hölder continuous metrics.
Given a Riemannian metric on a two-dimensional manifold, the transition function between isothermal coordinate charts, which is a map between open subsets of R2, is necessarily angle-preserving. The angle-preserving property together with orientation-preservation is one characterization (among many) of holomorphic functions, and so an "oriented" coordinate atlas consisting of isothermal coordinate charts may be viewed as a holomorphic coordinate atlas. This demonstrates that a Riemannian metric and an orientation on a two-dimensional manifold combine to induce the structure of a Riemann surface (i.e. a one-dimensional complex manifold). Furthermore, given an oriented surface, two Riemannian metrics induce the same holomorphic atlas if and only if they are conformal to one another. For this reason, the study of Riemann surfaces is identical to the study of conformal classes of Riemannian metrics on oriented surfaces.
By the 1950s, expositions of the ideas of Korn and Lichtenstein were put into the language of complex derivatives and the Beltrami equation by Lipman Bers and Shiing-shen Chern, among others. In this context, it is natural to investigate the existence of generalized solutions, which satisfy the relevant partial differential equations but are no longer interpretable as coordinate charts in the usual way. This was initiated by Charles Morrey in his seminal 1938 article on the theory of elliptic partial differential equations on two-dimensional domains, leading later to the measurable Riemann mapping theorem of Lars Ahlfors and Bers.
Beltrami equation.
The existence of isothermal coordinates can be proved by applying known existence theorems for the Beltrami equation, which rely on Lp estimates for singular integral operators of Calderón and Zygmund. A simpler approach to the Beltrami equation has been given more recently by Adrien Douady.
If the Riemannian metric is given locally as
formula_2
then in the complex coordinate formula_3, it takes the form
formula_4
where formula_5 and formula_6 are smooth with formula_7 and formula_8. In fact
formula_9
In isothermal coordinates formula_10 the metric should take the form
formula_11
with ρ smooth. The complex coordinate formula_12 satisfies
formula_13
so that the coordinates ("u", "v") will be isothermal if the Beltrami equation
formula_14
has a diffeomorphic solution. Such a solution has been proved to exist in any neighbourhood where formula_15.
Existence via local solvability for elliptic partial differential equations.
The existence of isothermal coordinates on a smooth two-dimensional Riemannian manifold is a corollary of the standard "local solvability" result in the analysis of elliptic partial differential equations. In the present context, the relevant elliptic equation is the condition for a function to be harmonic relative to the Riemannian metric. The local solvability then states that any point p has a neighborhood U on which there is a harmonic function u with nowhere-vanishing derivative.
Isothermal coordinates are constructed from such a function in the following way. Harmonicity of u is identical to the closedness of the differential 1-form formula_16 defined using the Hodge star operator formula_17 associated to the Riemannian metric. The Poincaré lemma thus implies the existence of a function v on U with formula_18 By definition of the Hodge star, formula_19 and formula_20 are orthogonal to one another and hence linearly independent, and it then follows from the inverse function theorem that u and v form a coordinate system on some neighborhood of p. This coordinate system is automatically isothermal, since the orthogonality of formula_19 and formula_20 implies the diagonality of the metric, and the norm-preserving property of the Hodge star implies the equality of the two diagonal components.
Gaussian curvature.
In the isothermal coordinates formula_10, the Gaussian curvature takes the simpler form
formula_21
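As an illustration, the formula can be checked symbolically for the hyperbolic upper half-plane metric ds² = (du² + dv²)/v², for which ρ = −2 ln v (a sketch using the SymPy library; the example metric and variable names are chosen only for illustration):
import sympy as sp
u, v = sp.symbols('u v', positive=True)
rho = -2*sp.log(v)                                    # e**rho = 1/v**2
K = -sp.Rational(1, 2)*sp.exp(-rho)*(sp.diff(rho, u, 2) + sp.diff(rho, v, 2))
print(sp.simplify(K))                                 # prints -1, the constant Gaussian curvature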
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " g = \\varphi (dx_1^2 + \\cdots + dx_n^2),"
},
{
"math_id": 1,
"text": "\\varphi"
},
{
"math_id": 2,
"text": " ds^2 = E \\, dx^2 + 2F \\, dx \\, dy + G \\, dy^2,"
},
{
"math_id": 3,
"text": " z = x + iy"
},
{
"math_id": 4,
"text": " ds^2 = \\lambda| \\, dz +\\mu \\, d\\overline{z}|^2,"
},
{
"math_id": 5,
"text": "\\lambda"
},
{
"math_id": 6,
"text": "\\mu"
},
{
"math_id": 7,
"text": "\\lambda>0"
},
{
"math_id": 8,
"text": "\\left\\vert \\mu \\right\\vert < 1"
},
{
"math_id": 9,
"text": " \\lambda={1\\over 4} ( E + G +2\\sqrt{EG -F^2}),\\,\\,\\, {\\displaystyle \\mu ={(E-G+2iF) \\over 4\\lambda }}."
},
{
"math_id": 10,
"text": "(u,v)"
},
{
"math_id": 11,
"text": " ds^2 = e^{\\rho} (du^2 + dv^2)"
},
{
"math_id": 12,
"text": " w = u + iv"
},
{
"math_id": 13,
"text": "e^{\\rho} \\, |dw|^2 = e^{\\rho} |w_{z}|^2 | \\, dz + {w_{\\overline {z}}\\over w_z} \\, d\\overline{z}|^2,"
},
{
"math_id": 14,
"text": " {\\partial w\\over \\partial \\overline{z}} = \\mu {\\partial w\\over \\partial z}"
},
{
"math_id": 15,
"text": "\\lVert \\mu \\rVert_\\infty<1"
},
{
"math_id": 16,
"text": "\\star du,"
},
{
"math_id": 17,
"text": "\\star"
},
{
"math_id": 18,
"text": "dv=\\star du."
},
{
"math_id": 19,
"text": "du"
},
{
"math_id": 20,
"text": "dv"
},
{
"math_id": 21,
"text": " K = -\\frac{1}{2} e^{-\\rho} \\left(\\frac{\\partial^2 \\rho}{\\partial u^2} + \\frac{\\partial^2 \\rho}{\\partial v^2}\\right)."
}
] | https://en.wikipedia.org/wiki?curid=10280254 |
10280692 | Digital biquad filter | Second order recursive digital linear filter
In signal processing, a digital biquad filter is a second order recursive linear filter, containing two poles and two zeros. "Biquad" is an abbreviation of "biquadratic", which refers to the fact that in the Z domain, its transfer function is the ratio of two quadratic functions:
formula_0
The coefficients are often normalized such that "a"0 = 1:
formula_1
High-order infinite impulse response filters can be highly sensitive to quantization of their coefficients, and can easily become unstable. This is much less of a problem with first and second-order filters; therefore, higher-order filters are typically implemented as serially-cascaded biquad sections (and a first-order filter if necessary). The two poles of the biquad filter must be inside the unit circle for it to be stable. In general, this is true for all discrete filters, i.e., all poles must be inside the unit circle in the Z-domain for the filter to be stable.
Implementation.
Direct form 1.
The most straightforward implementation is the direct form 1, which has the following difference equation:
formula_2
or, if normalized:
formula_3
Here the formula_4, formula_5 and formula_6 coefficients determine zeros, and formula_7, formula_8 determine the position of the poles.
Flow graph of biquad filter in direct form 1:
When these sections are cascaded for filters of order greater than 2, the efficiency of implementation can be improved by noticing that the formula_9 delays of a section's output are duplicated at the next section's input, so two storage (delay) components may be eliminated between sections.
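A minimal direct form 1 implementation in Python (illustrative only; the coefficients are assumed to be normalized so that "a"0 = 1):
def biquad_df1(x, b0, b1, b2, a1, a2):
    """Filter the sequence x with a normalized direct form 1 biquad."""
    x1 = x2 = y1 = y2 = 0.0          # delayed input and output samples
    y = []
    for xn in x:
        yn = b0*xn + b1*x1 + b2*x2 - a1*y1 - a2*y2
        x2, x1 = x1, xn              # shift the input delay line
        y2, y1 = y1, yn              # shift the output delay line
        y.append(yn)
    return y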
Direct form 2.
The direct form 2 implements the same normalized transfer function as direct form 1, but in two parts:
formula_10
and using the difference equation:
formula_11
Flow graph of biquad filter in direct form 2:
The direct form 2 implementation only needs "N" delay units, where "N" is the order of the filter – potentially half as many as direct form 1. The derivation from the normalized direct form 1 is as follows:
formula_3
Assume the substitution:
formula_12
Which results in:
formula_13
Isolating the formula_4, formula_5 and formula_6 coefficients:
formula_14
Which assuming formula_15 yields the above result:
formula_10
The disadvantage is that direct form 2 increases the possibility of arithmetic overflow for filters of high "Q" or resonance. It has been shown that as "Q" increases, the round-off noise of both direct form topologies increases without bound. This is because, conceptually, the signal is first passed through an all-pole filter (which normally boosts gain at the resonant frequencies) before the result of that is saturated, then passed through an all-zero filter (which often attenuates much of what the all-pole half amplifies).
The direct form 2 implementation is called the canonical form, because it uses the minimal number of delays, adders and multipliers, while yielding the same transfer function as the direct form 1 implementation.
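A corresponding direct form 2 sketch in Python, which keeps only the two shared state variables (illustrative only; again "a"0 = 1 is assumed):
def biquad_df2(x, b0, b1, b2, a1, a2):
    """Filter the sequence x with a normalized direct form 2 biquad."""
    w1 = w2 = 0.0                    # the two shared delay elements w[n-1], w[n-2]
    y = []
    for xn in x:
        wn = xn - a1*w1 - a2*w2      # all-pole half
        yn = b0*wn + b1*w1 + b2*w2   # all-zero half
        w2, w1 = w1, wn
        y.append(yn)
    return y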
Transposed direct forms.
Each of the two direct forms may be transposed by reversing the flow graph without altering the transfer function. Branch points are changed to summers and summers are changed to branch points. These provide modified implementations that accomplish the same transfer function which can be mathematically significant in a real-world implementation where precision may be lost in state storage.
The difference equations for transposed direct form 2 are:
formula_16
where
formula_17
and
formula_18
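These recurrences can be sketched directly in Python (illustrative only; "a"0 = 1 is assumed):
def biquad_tdf2(x, b0, b1, b2, a1, a2):
    """Filter the sequence x with a transposed direct form 2 biquad."""
    s1 = s2 = 0.0                    # state variables s1[n-1] and s2[n-1]
    y = []
    for xn in x:
        yn = b0*xn + s1
        s1 = s2 + b1*xn - a1*yn
        s2 = b2*xn - a2*yn
        y.append(yn)
    return y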
Transposed direct form 1.
The direct form 1 flow graph is transposed into the transposed direct form 1 flow graph.
Transposed direct form 2.
The direct form 2 flow graph is transposed into the transposed direct form 2 flow graph.
Quantizing noise.
When a sample of n bits is multiplied by a coefficient of m bits, the product has n+m bits. These products are typically accumulated in a DSP register; since the addition of five products may need 3 overflow bits, this register is often large enough to hold n+m+3 bits. The z−1 is implemented by storing a value for one sample time, and this storage register is usually only n bits wide, so the accumulator value is rounded to fit n bits; this rounding introduces quantizing noise.
In the direct form 1 arrangement, there is a single quantizing/rounding function Q(z):
In the direct form 2 arrangement, there also is a quantizing/rounding function for an intermediate value. In a cascade, the value may not need rounding between stages, but the final output may need rounding.
Fixed-point DSP usually prefers the non-transposed forms, using an accumulator with a large number of bits that is rounded only when stored in main memory. Floating-point DSP usually prefers the transposed forms: each multiplication and potentially each addition is rounded, but the additions retain higher precision when both operands have similar magnitude.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ H(z)=\\frac{b_0+b_1z^{-1}+b_2z^{-2}} {a_0+a_1z^{-1}+a_2z^{-2} }"
},
{
"math_id": 1,
"text": "\\ H(z)=\\frac{b_0+b_1z^{-1}+b_2z^{-2}} {1+a_1z^{-1}+a_2z^{-2} }"
},
{
"math_id": 2,
"text": "\\ y[n] = \\frac{1}{a_0} \\left ( b_0x[n] + b_1x[n-1] + b_2x[n-2] - a_1y[n-1] - a_2y[n-2] \\right )"
},
{
"math_id": 3,
"text": "\\ y[n] = b_0x[n] + b_1x[n-1] + b_2x[n-2] - a_1y[n-1] - a_2y[n-2] "
},
{
"math_id": 4,
"text": "b_0"
},
{
"math_id": 5,
"text": "b_1"
},
{
"math_id": 6,
"text": "b_2"
},
{
"math_id": 7,
"text": "a_1"
},
{
"math_id": 8,
"text": "a_2"
},
{
"math_id": 9,
"text": "z^{-1}"
},
{
"math_id": 10,
"text": "\\ y[n]=b_0 w[n]+b_1 w[n-1]+b_2 w[n-2],"
},
{
"math_id": 11,
"text": "\\ w[n]=x[n]-a_1 w[n-1]-a_2 w[n-2]."
},
{
"math_id": 12,
"text": "\\ y[n] = b_0w[n] + b_1w[n-1] + b_2w[n-2] "
},
{
"math_id": 13,
"text": "\\ y[n] = b_0x[n] + b_1x[n-1] + b_2x[n-2] \n\n- a_1(b_0w[n-1] + b_1w[n-2] + b_2w[n-3]) \n\n- a_2(b_0w[n-2] + b_1w[n-3] + b_2w[n-4]) "
},
{
"math_id": 14,
"text": "\\ y[n] = b_0(x[n] - a_1w[n-1] - a_2w[n-2])\n\n+ b_1(x[n-1] - a_1w[n-2] - a_2w[n-3])\n\n+ b_2(x[n-2] - a_1w[n-3] - a_2w[n-4]) "
},
{
"math_id": 15,
"text": "\\ w[n]=x[n]-a_1 w[n-1]-a_2 w[n-2]"
},
{
"math_id": 16,
"text": "\\ y[n]=b_0 x[n]+s_1[n-1],"
},
{
"math_id": 17,
"text": "\\ s_1[n]=s_2[n-1]+b_1 x[n]-a_1 y[n]"
},
{
"math_id": 18,
"text": "\\ s_2[n]=b_2 x[n]-a_2 y[n]."
}
] | https://en.wikipedia.org/wiki?curid=10280692 |
1028158 | Canonical basis | Basis of a type of algebraic structure
In mathematics, a canonical basis is a basis of an algebraic structure that is canonical in a sense that depends on the precise context:
Representation theory.
The canonical basis for the irreducible representations of a quantized enveloping algebra of
type formula_2 and also for the plus part of that algebra was introduced by Lusztig by
two methods: an algebraic one (using a braid group action and PBW bases) and a topological one
(using intersection cohomology). Specializing the parameter formula_3 to formula_4 yields a canonical basis for the irreducible representations of the corresponding simple Lie algebra, which was
not known earlier. Specializing the parameter formula_3 to formula_5 yields something like a shadow of a basis. This shadow (but not the basis itself) for the case of irreducible representations
was considered independently by Kashiwara; it is sometimes called the crystal basis.
The definition of the canonical basis was extended to the Kac-Moody setting by Kashiwara (by an algebraic method) and by Lusztig (by a topological method).
There is a general concept underlying these bases:
Consider the ring of integral Laurent polynomials formula_6 with its two subrings formula_7 and the automorphism formula_8 defined by formula_9.
A "precanonical structure" on a free formula_10-module formula_11 consists of
A "standard basis" formula_12 of formula_11,
An interval finite partial order on formula_13, that is, formula_14 is finite for all formula_15,
A dualization operation, that is, a bijection formula_16 of order two that is formula_8-semilinear and is also denoted by formula_8.
If a precanonical structure is given, then one can define the formula_17 submodule formula_18 of formula_11.
A "canonical basis" of the precanonical structure is then a formula_10-basis formula_19 of formula_11 that satisfies:
formula_20 and
formula_21
for all formula_15.
One can show that there exists at most one canonical basis for each precanonical structure. A sufficient condition for existence is that the polynomials formula_22 defined by formula_23 satisfy formula_24 and formula_25.
A canonical basis induces an isomorphism from formula_26 to formula_27.
Hecke algebras.
Let formula_28 be a Coxeter group. The corresponding Iwahori–Hecke algebra formula_29 has the standard basis formula_30, the group is partially ordered by the Bruhat order, which is interval finite, and it has a dualization operation defined by formula_31. This is a precanonical structure on formula_29 that satisfies the sufficient condition above and the corresponding canonical basis of formula_29 is the Kazhdan–Lusztig basis
formula_32
with formula_33 being the Kazhdan–Lusztig polynomials.
Linear algebra.
If we are given an "n" × "n" matrix formula_1 and wish to find a matrix formula_34 in Jordan normal form, similar to formula_1, we are interested only in sets of linearly independent generalized eigenvectors. A matrix in Jordan normal form is an "almost diagonal matrix," that is, as close to diagonal as possible. A diagonal matrix formula_35 is a special case of a matrix in Jordan normal form. An ordinary eigenvector is a special case of a generalized eigenvector.
Every "n" × "n" matrix formula_1 possesses "n" linearly independent generalized eigenvectors. Generalized eigenvectors corresponding to distinct eigenvalues are linearly independent. If formula_36 is an eigenvalue of formula_1 of algebraic multiplicity formula_37, then formula_1 will have formula_37 linearly independent generalized eigenvectors corresponding to formula_36.
For any given "n" × "n" matrix formula_1, there are infinitely many ways to pick the "n" linearly independent generalized eigenvectors. If they are chosen in a particularly judicious manner, we can use these vectors to show that formula_1 is similar to a matrix in Jordan normal form. In particular,
Definition: A set of "n" linearly independent generalized eigenvectors is a canonical basis if it is composed entirely of Jordan chains.
Thus, once we have determined that a generalized eigenvector of rank "m" is in a canonical basis, it follows that the "m" − 1 vectors formula_38 that are in the Jordan chain generated by formula_39 are also in the canonical basis.
Computation.
Let formula_40 be an eigenvalue of formula_1 of algebraic multiplicity formula_41. First, find the ranks (matrix ranks) of the matrices formula_42. The integer formula_43 is determined to be the "first integer" for which formula_44 has rank formula_45 ("n" being the number of rows or columns of formula_1, that is, formula_1 is "n" × "n").
Now define
formula_46
The variable formula_47 designates the number of linearly independent generalized eigenvectors of rank "k" (generalized eigenvector rank; see generalized eigenvector) corresponding to the eigenvalue formula_40 that will appear in a canonical basis for formula_1. Note that
formula_48
Once we have determined the number of generalized eigenvectors of each rank that a canonical basis has, we can obtain the vectors explicitly (see generalized eigenvector).
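These ranks are straightforward to compute numerically; a sketch using NumPy (illustrative only, and subject to the usual caveats of floating-point rank determination):
import numpy as np
def rank_profile(A, eigenvalue, max_power):
    """Return rho_1, ..., rho_max_power for the given eigenvalue of A."""
    n = A.shape[0]
    B = A - eigenvalue * np.eye(n)
    ranks = [n] + [np.linalg.matrix_rank(np.linalg.matrix_power(B, k))
                   for k in range(1, max_power + 1)]
    return [ranks[k - 1] - ranks[k] for k in range(1, max_power + 1)]
For the 6 × 6 matrix of the example below, rank_profile(A, 4, 4) gives [1, 1, 1, 1] and rank_profile(A, 5, 2) gives [1, 1].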
Example.
This example illustrates a canonical basis with two Jordan chains. Unfortunately, it is a little difficult to construct an interesting example of low order.
The matrix
formula_49
has eigenvalues formula_50 and formula_51 with algebraic multiplicities formula_52 and formula_53, but geometric multiplicities formula_54 and formula_55.
For formula_56 we have formula_57
formula_58 has rank 5,
formula_59 has rank 4,
formula_60 has rank 3,
formula_61 has rank 2.
Therefore formula_62
formula_63
formula_64
formula_65
formula_66
Thus, a canonical basis for formula_1 will have, corresponding to formula_56 one generalized eigenvector each of ranks 4, 3, 2 and 1.
For formula_67 we have formula_68
formula_69 has rank 5,
formula_70 has rank 4.
Therefore formula_71
formula_72
formula_73
Thus, a canonical basis for formula_1 will have, corresponding to formula_67 one generalized eigenvector each of ranks 2 and 1.
A canonical basis for formula_1 is
formula_74
formula_75 is the ordinary eigenvector associated with formula_76.
formula_77 and formula_78 are generalized eigenvectors associated with formula_76.
formula_79 is the ordinary eigenvector associated with formula_80.
formula_81 is a generalized eigenvector associated with formula_80.
A matrix formula_34 in Jordan normal form, similar to formula_1 is obtained as follows:
formula_82
formula_83
where the matrix formula_84 is a generalized modal matrix for formula_1 and formula_85. | [
{
"math_id": 0,
"text": "(X^i)_i"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "ADE"
},
{
"math_id": 3,
"text": "q"
},
{
"math_id": 4,
"text": "q=1"
},
{
"math_id": 5,
"text": "q=0"
},
{
"math_id": 6,
"text": "\\mathcal{Z}:=\\mathbb{Z}\\left[v,v^{-1}\\right]"
},
{
"math_id": 7,
"text": "\\mathcal{Z}^{\\pm}:=\\mathbb{Z}\\left[v^{\\pm 1}\\right]"
},
{
"math_id": 8,
"text": "\\overline{\\cdot}"
},
{
"math_id": 9,
"text": "\\overline{v}:=v^{-1}"
},
{
"math_id": 10,
"text": "\\mathcal{Z}"
},
{
"math_id": 11,
"text": "F"
},
{
"math_id": 12,
"text": "(t_i)_{i\\in I}"
},
{
"math_id": 13,
"text": "I"
},
{
"math_id": 14,
"text": "(-\\infty,i] := \\{j\\in I \\mid j\\leq i\\}"
},
{
"math_id": 15,
"text": "i\\in I"
},
{
"math_id": 16,
"text": "F\\to F"
},
{
"math_id": 17,
"text": "\\mathcal{Z}^{\\pm}"
},
{
"math_id": 18,
"text": "F^{\\pm} := \\sum \\mathcal{Z}^{\\pm} t_j"
},
{
"math_id": 19,
"text": "(c_i)_{i\\in I}"
},
{
"math_id": 20,
"text": "\\overline{c_i}=c_i"
},
{
"math_id": 21,
"text": "c_i \\in \\sum_{j\\leq i} \\mathcal{Z}^+ t_j \\text{ and } c_i \\equiv t_i \\mod vF^+"
},
{
"math_id": 22,
"text": "r_{ij}\\in\\mathcal{Z}"
},
{
"math_id": 23,
"text": "\\overline{t_j}=\\sum_i r_{ij} t_i"
},
{
"math_id": 24,
"text": "r_{ii}=1"
},
{
"math_id": 25,
"text": "r_{ij}\\neq 0 \\implies i\\leq j"
},
{
"math_id": 26,
"text": "\\textstyle F^+\\cap \\overline{F^+} = \\sum_i \\mathbb{Z}c_i"
},
{
"math_id": 27,
"text": "F^+/vF^+"
},
{
"math_id": 28,
"text": "(W,S)"
},
{
"math_id": 29,
"text": "H"
},
{
"math_id": 30,
"text": "(T_w)_{w\\in W}"
},
{
"math_id": 31,
"text": "\\overline{T_w}:=T_{w^{-1}}^{-1}"
},
{
"math_id": 32,
"text": "C_w' = \\sum_{y\\leq w} P_{y,w}(v^2) T_y"
},
{
"math_id": 33,
"text": "P_{y,w}"
},
{
"math_id": 34,
"text": "J"
},
{
"math_id": 35,
"text": "D"
},
{
"math_id": 36,
"text": "\\lambda"
},
{
"math_id": 37,
"text": "\\mu"
},
{
"math_id": 38,
"text": " \\mathbf x_{m-1}, \\mathbf x_{m-2}, \\ldots , \\mathbf x_1 "
},
{
"math_id": 39,
"text": " \\mathbf x_m "
},
{
"math_id": 40,
"text": " \\lambda_i "
},
{
"math_id": 41,
"text": " \\mu_i "
},
{
"math_id": 42,
"text": " (A - \\lambda_i I), (A - \\lambda_i I)^2, \\ldots , (A - \\lambda_i I)^{m_i} "
},
{
"math_id": 43,
"text": "m_i"
},
{
"math_id": 44,
"text": " (A - \\lambda_i I)^{m_i} "
},
{
"math_id": 45,
"text": "n - \\mu_i "
},
{
"math_id": 46,
"text": " \\rho_k = \\operatorname{rank}(A - \\lambda_i I)^{k-1} - \\operatorname{rank}(A - \\lambda_i I)^k \\qquad (k = 1, 2, \\ldots , m_i)."
},
{
"math_id": 47,
"text": " \\rho_k "
},
{
"math_id": 48,
"text": " \\operatorname{rank}(A - \\lambda_i I)^0 = \\operatorname{rank}(I) = n ."
},
{
"math_id": 49,
"text": "A = \\begin{pmatrix} \n4 & 1 & 1 & 0 & 0 & -1 \\\\\n0 & 4 & 2 & 0 & 0 & 1 \\\\\n0 & 0 & 4 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 5 & 1 & 0 \\\\\n0 & 0 & 0 & 0 & 5 & 2 \\\\\n0 & 0 & 0 & 0 & 0 & 4\n\\end{pmatrix}"
},
{
"math_id": 50,
"text": " \\lambda_1 = 4 "
},
{
"math_id": 51,
"text": " \\lambda_2 = 5 "
},
{
"math_id": 52,
"text": " \\mu_1 = 4 "
},
{
"math_id": 53,
"text": " \\mu_2 = 2 "
},
{
"math_id": 54,
"text": " \\gamma_1 = 1 "
},
{
"math_id": 55,
"text": " \\gamma_2 = 1 "
},
{
"math_id": 56,
"text": " \\lambda_1 = 4,"
},
{
"math_id": 57,
"text": " n - \\mu_1 = 6 - 4 = 2, "
},
{
"math_id": 58,
"text": " (A - 4I) "
},
{
"math_id": 59,
"text": " (A - 4I)^2 "
},
{
"math_id": 60,
"text": " (A - 4I)^3 "
},
{
"math_id": 61,
"text": " (A - 4I)^4 "
},
{
"math_id": 62,
"text": "m_1 = 4."
},
{
"math_id": 63,
"text": " \\rho_4 = \\operatorname{rank}(A - 4I)^3 - \\operatorname{rank}(A - 4I)^4 = 3 - 2 = 1,"
},
{
"math_id": 64,
"text": " \\rho_3 = \\operatorname{rank}(A - 4I)^2 - \\operatorname{rank}(A - 4I)^3 = 4 - 3 = 1,"
},
{
"math_id": 65,
"text": " \\rho_2 = \\operatorname{rank}(A - 4I)^1 - \\operatorname{rank}(A - 4I)^2 = 5 - 4 = 1,"
},
{
"math_id": 66,
"text": " \\rho_1 = \\operatorname{rank}(A - 4I)^0 - \\operatorname{rank}(A - 4I)^1 = 6 - 5 = 1."
},
{
"math_id": 67,
"text": " \\lambda_2 = 5,"
},
{
"math_id": 68,
"text": " n - \\mu_2 = 6 - 2 = 4, "
},
{
"math_id": 69,
"text": " (A - 5I) "
},
{
"math_id": 70,
"text": " (A - 5I)^2 "
},
{
"math_id": 71,
"text": "m_2 = 2."
},
{
"math_id": 72,
"text": " \\rho_2 = \\operatorname{rank}(A - 5I)^1 - \\operatorname{rank}(A - 5I)^2 = 5 - 4 = 1,"
},
{
"math_id": 73,
"text": " \\rho_1 = \\operatorname{rank}(A - 5I)^0 - \\operatorname{rank}(A - 5I)^1 = 6 - 5 = 1."
},
{
"math_id": 74,
"text": "\n\\left\\{ \\mathbf x_1, \\mathbf x_2, \\mathbf x_3, \\mathbf x_4, \\mathbf y_1, \\mathbf y_2 \\right\\} =\n\\left\\{\n\\begin{pmatrix} -4 \\\\ 0 \\\\ 0 \\\\ 0 \\\\ 0 \\\\ 0 \\end{pmatrix},\n\\begin{pmatrix} -27 \\\\ -4 \\\\ 0 \\\\ 0 \\\\ 0 \\\\ 0 \\end{pmatrix},\n\\begin{pmatrix} 25 \\\\ -25 \\\\ -2 \\\\ 0 \\\\ 0 \\\\ 0 \\end{pmatrix},\n\\begin{pmatrix} 0 \\\\ 36 \\\\ -12 \\\\ -2 \\\\ 2 \\\\ -1 \\end{pmatrix},\n\\begin{pmatrix} 3 \\\\ 2 \\\\ 1 \\\\ 1 \\\\ 0 \\\\ 0 \\end{pmatrix},\n\\begin{pmatrix} -8 \\\\ -4 \\\\ -1 \\\\ 0 \\\\ 1 \\\\ 0 \\end{pmatrix} \n\\right\\}.\n"
},
{
"math_id": 75,
"text": " \\mathbf x_1 "
},
{
"math_id": 76,
"text": " \\lambda_1 "
},
{
"math_id": 77,
"text": " \\mathbf x_2, \\mathbf x_3 "
},
{
"math_id": 78,
"text": " \\mathbf x_4 "
},
{
"math_id": 79,
"text": " \\mathbf y_1 "
},
{
"math_id": 80,
"text": " \\lambda_2 "
},
{
"math_id": 81,
"text": " \\mathbf y_2 "
},
{
"math_id": 82,
"text": "\nM =\n\\begin{pmatrix} \\mathbf x_1 & \\mathbf x_2 & \\mathbf x_3 & \\mathbf x_4 & \\mathbf y_1 & \\mathbf y_2 \\end{pmatrix} =\n\\begin{pmatrix}\n-4 & -27 & 25 & 0 & 3 & -8 \\\\\n 0 & -4 & -25 & 36 & 2 & -4 \\\\\n 0 & 0 & -2 & -12 & 1 & -1 \\\\\n 0 & 0 & 0 & -2 & 1 & 0 \\\\\n 0 & 0 & 0 & 2 & 0 & 1 \\\\\n 0 & 0 & 0 & -1 & 0 & 0\n\\end{pmatrix},\n"
},
{
"math_id": 83,
"text": " J = \\begin{pmatrix}\n4 & 1 & 0 & 0 & 0 & 0 \\\\\n0 & 4 & 1 & 0 & 0 & 0 \\\\\n0 & 0 & 4 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 4 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 5 & 1 \\\\\n0 & 0 & 0 & 0 & 0 & 5\n\\end{pmatrix},\n"
},
{
"math_id": 84,
"text": "M"
},
{
"math_id": 85,
"text": "AM = MJ"
}
] | https://en.wikipedia.org/wiki?curid=1028158 |
10282799 | Lie bracket of vector fields | Operator in differential topology
In the mathematical field of differential topology, the Lie bracket of vector fields, also known as the Jacobi–Lie bracket or the commutator of vector fields, is an operator that assigns to any two vector fields "X" and "Y" on a smooth manifold "M" a third vector field denoted ["X", "Y"].
Conceptually, the Lie bracket ["X", "Y"] is the derivative of "Y" along the flow generated by "X", and is sometimes denoted "formula_0" ("Lie derivative of Y along X"). This generalizes to the Lie derivative of any tensor field along the flow generated by "X".
The Lie bracket is an R-bilinear operation and turns the set of all smooth vector fields on the manifold "M" into an (infinite-dimensional) Lie algebra.
The Lie bracket plays an important role in differential geometry and differential topology, for instance in the Frobenius integrability theorem, and is also fundamental in the geometric theory of nonlinear control systems.
V. I. Arnold refers to this as the "fisherman derivative", as one can imagine being a fisherman, holding a fishing rod, sitting in a boat. Both the boat and the float are flowing according to vector field "X", and the fisherman lengthens/shrinks and turns the fishing rod according to vector field "Y". The Lie bracket is the amount of dragging on the fishing float relative to the surrounding water.
Definitions.
There are three conceptually different but equivalent approaches to defining the Lie bracket:
Vector fields as derivations.
Each smooth vector field formula_1 on a manifold "M" may be regarded as a differential operator acting on smooth functions formula_2 (where formula_3 and formula_4 is of class formula_5) when we define formula_6 to be another function whose value at a point formula_7 is the directional derivative of "f" at "p" in the direction "X"("p"). In this way, each smooth vector field "X" becomes a derivation on "C"∞("M"). Furthermore, any derivation on "C"∞("M") arises from a unique smooth vector field "X".
In general, the commutator formula_8 of any two derivations formula_9 and formula_10 is again a derivation, where formula_11 denotes composition of operators. This can be used to define the Lie bracket as the vector field corresponding to the commutator derivation:
formula_12
Flows and limits.
Let formula_13 be the flow associated with the vector field "X", and let D denote the tangent map derivative operator. Then the Lie bracket of "X" and "Y" at the point "x" ∈ "M" can be defined as the Lie derivative:
formula_14
This also measures the failure of the flow in the successive directions formula_15 to return to the point "x":
formula_16
In coordinates.
Though the above definitions of Lie bracket are intrinsic (independent of the choice of coordinates on the manifold "M"), in practice one often wants to compute the bracket in terms of a specific coordinate system formula_17. We write formula_18 for the associated local basis of the tangent bundle, so that general vector fields can be written formula_19and formula_20for smooth functions formula_21. Then the Lie bracket can be computed as:
formula_22
If "M" is (an open subset of) R"n", then the vector fields "X" and "Y" can be written as smooth maps of the form formula_23 and formula_24, and the Lie bracket formula_25 is given by:
formula_26
where formula_27 and formula_28 are "n" × "n" Jacobian matrices (formula_29 and formula_30 respectively using index notation) multiplying the "n" × 1 column vectors "X" and "Y".
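The coordinate formula can be checked symbolically; a sketch using SymPy (the two vector fields, a rotation field and a radial scaling field on R2, are chosen only for illustration):
import sympy as sp
u1, u2 = sp.symbols('u1 u2')
coords = [u1, u2]
X = [-u2, u1]        # rotation field
Y = [u1, u2]         # radial (scaling) field
def lie_bracket(X, Y, coords):
    """[X, Y]^i = sum_j (X^j d_j Y^i - Y^j d_j X^i)."""
    return [sp.simplify(sum(X[j]*sp.diff(Y[i], coords[j]) - Y[j]*sp.diff(X[i], coords[j])
                            for j in range(len(coords))))
            for i in range(len(coords))]
print(lie_bracket(X, Y, coords))   # [0, 0]: rotations and dilations of the plane commute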
Properties.
The Lie bracket of vector fields equips the real vector space formula_31 of all vector fields on "M" (i.e., smooth sections of the tangent bundle formula_32) with the structure of a Lie algebra, which means [ • , • ] is a map formula_33 with:
R-bilinearity,
anti-symmetry, formula_34,
the Jacobi identity, formula_35
An immediate consequence of the second property is that formula_36 for any formula_37.
Furthermore, there is a "product rule" for Lie brackets. Given a smooth (scalar-valued) function "f" on "M" and a vector field "Y" on "M", we get a new vector field "fY" by multiplying the vector "Yx" by the scalar "f"("x") at each point "x" ∈ "M". Then:
formula_38
where we multiply the scalar function "X"("f") with the vector field "Y", and the scalar function "f" with the vector field ["X", "Y"].
This turns the vector fields with the Lie bracket into a Lie algebroid.
Vanishing of the Lie bracket of "X" and "Y" means that following the flows in these directions defines a surface embedded in "M", with "X" and "Y" as coordinate vector fields:
Theorem: formula_39 iff the flows of "X" and "Y" commute locally, meaning formula_40 for all "x" ∈ "M" and sufficiently small "s", "t".
This is a special case of the Frobenius integrability theorem.
Examples.
For a Lie group "G", the corresponding Lie algebra formula_41 is the tangent space at the identity formula_42, which can be identified with the vector space of left invariant vector fields on "G". The Lie bracket of two left invariant vector fields is also left invariant, which defines the Jacobi–Lie bracket operation formula_43.
For a matrix Lie group, whose elements are matrices formula_44, each tangent space can be represented as matrices: formula_45, where formula_46 means matrix multiplication and "I" is the identity matrix. The invariant vector field corresponding to formula_47 is given by formula_48, and a computation shows the Lie bracket on formula_49 corresponds to the usual commutator of matrices:
formula_50
Generalizations.
As mentioned above, the Lie derivative can be seen as a generalization of the Lie bracket. Another generalization of the Lie bracket (to vector-valued differential forms) is the Frölicher–Nijenhuis bracket.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{L}_X Y"
},
{
"math_id": 1,
"text": "X : M \\rightarrow TM"
},
{
"math_id": 2,
"text": "f(p)"
},
{
"math_id": 3,
"text": "p \\in M"
},
{
"math_id": 4,
"text": "f"
},
{
"math_id": 5,
"text": "C^\\infty(M)"
},
{
"math_id": 6,
"text": "X(f)"
},
{
"math_id": 7,
"text": "p"
},
{
"math_id": 8,
"text": "\\delta_1\\circ \\delta_2 - \\delta_2\\circ\\delta_1"
},
{
"math_id": 9,
"text": "\\delta_1"
},
{
"math_id": 10,
"text": "\\delta_2"
},
{
"math_id": 11,
"text": "\\circ"
},
{
"math_id": 12,
"text": "[X,Y](f) = X(Y(f))-Y(X(f)) \\;\\;\\text{ for all } f\\in C^\\infty(M)."
},
{
"math_id": 13,
"text": "\\Phi^X_t"
},
{
"math_id": 14,
"text": "[X, Y]_x \\ =\\ (\\mathcal{L}_X Y)_x \\ :=\\ \\lim_{t \\to 0}\\frac{(\\mathrm{D}\\Phi^X_{-t}) Y_{\\Phi^X_t(x)} \\,-\\, Y_x}t \n\\ =\\ \n\\left.\\tfrac{\\mathrm{d}}{\\mathrm{d} t}\\right|_{t=0} (\\mathrm{D}\\Phi^X_{-t}) Y_{\\Phi^X_t(x)} ."
},
{
"math_id": 15,
"text": "X,Y,-X,-Y"
},
{
"math_id": 16,
"text": "[X, Y]_x \\ =\\ \\left.\\tfrac12\\tfrac{\\mathrm{d}^2}{\\mathrm{d}t^2}\\right|_{t=0} (\\Phi^Y_{-t} \\circ \\Phi^X_{-t} \\circ \\Phi^Y_{t} \\circ \\Phi^X_{t})(x) \n\\ =\\ \\left.\\tfrac{\\mathrm{d}}{\\mathrm{d} t}\\right|_{t=0} (\\Phi^Y_{\\!-\\sqrt{t}} \\circ \\Phi^X_{\\!-\\sqrt{t}} \\circ \\Phi^Y_{\\!\\sqrt{t}} \\circ \\Phi^X_{\\!\\sqrt{t}})(x) ."
},
{
"math_id": 17,
"text": "\\{ x^i \\}"
},
{
"math_id": 18,
"text": "\\partial_i = \\tfrac{\\partial}{\\partial x^i}"
},
{
"math_id": 19,
"text": "\\textstyle X=\\sum_{i=1}^n X^i \\partial_i"
},
{
"math_id": 20,
"text": "\\textstyle Y=\\sum_{i=1}^n Y^i \\partial_i"
},
{
"math_id": 21,
"text": "X^i, Y^i:M\\to\\mathbb{R}"
},
{
"math_id": 22,
"text": "[X,Y] := \\sum_{i=1}^n\\left(X(Y^i) - Y(X^i)\\right) \\partial_i = \\sum_{i=1}^n \\sum_{j=1}^n \\left(X^j \\partial_j Y^i - Y^j \\partial_j X^i \\right) \\partial_i ."
},
{
"math_id": 23,
"text": "X:M\\to\\mathbb{R}^n"
},
{
"math_id": 24,
"text": "Y:M\\to\\mathbb{R}^n"
},
{
"math_id": 25,
"text": "[X,Y]:M\\to\\mathbb{R}^n"
},
{
"math_id": 26,
"text": "[X,Y] := J_Y X - J_X Y"
},
{
"math_id": 27,
"text": "J_Y"
},
{
"math_id": 28,
"text": "J_X"
},
{
"math_id": 29,
"text": "\\partial_jY^i"
},
{
"math_id": 30,
"text": "\\partial_jX^i"
},
{
"math_id": 31,
"text": "V=\\Gamma(TM)"
},
{
"math_id": 32,
"text": "TM\\to M"
},
{
"math_id": 33,
"text": "V\\times V\\to V"
},
{
"math_id": 34,
"text": "[X, Y] = -[Y, X]"
},
{
"math_id": 35,
"text": "[X, [Y, Z]] + [Z, [X, Y]] + [Y, [Z, X]] = 0 ."
},
{
"math_id": 36,
"text": "[X, X] = 0"
},
{
"math_id": 37,
"text": "X"
},
{
"math_id": 38,
"text": " [X, fY] \\ =\\ X\\!(f)\\, Y \\,+\\, f\\, [X,Y] ,"
},
{
"math_id": 39,
"text": "[X,Y]=0\\,"
},
{
"math_id": 40,
"text": "(\\Phi^Y_t \\Phi^X_s) (x) =(\\Phi^X_{s}\\, \\Phi^Y_t)(x)"
},
{
"math_id": 41,
"text": "\\mathfrak{g}"
},
{
"math_id": 42,
"text": "T_eG"
},
{
"math_id": 43,
"text": "[\\,\\cdot\\,,\\,\\cdot\\,]: \\mathfrak g \\times \\mathfrak g\\to \\mathfrak g"
},
{
"math_id": 44,
"text": "g \\in G \\subset M_{n\\times n}(\\mathbb{R})"
},
{
"math_id": 45,
"text": "T_{g}G = g\\cdot T_I G \\subset M_{n\\times n}(\\mathbb{R})"
},
{
"math_id": 46,
"text": "\\cdot"
},
{
"math_id": 47,
"text": "X\\in \\mathfrak{g}=T_IG"
},
{
"math_id": 48,
"text": "X_g = g\\cdot X\\in T_gG"
},
{
"math_id": 49,
"text": "\\mathfrak g"
},
{
"math_id": 50,
"text": "[X,Y] \\ =\\ X\\cdot Y - Y\\cdot X ."
}
] | https://en.wikipedia.org/wiki?curid=10282799 |
10283 | Erlang (unit) | Load measure in telecommunications
The erlang (symbol E) is a dimensionless unit that is used in telephony as a measure of offered load or carried load on service-providing elements such as telephone circuits or telephone switching equipment. A single cord circuit has the capacity to be used for 60 minutes in one hour. Full utilization of that capacity, 60 minutes of traffic, constitutes 1 erlang.
Carried traffic in erlangs is the average number of concurrent calls measured over a given period (often one hour), while offered traffic is the traffic that would be carried if all call-attempts succeeded. How much offered traffic is carried in practice will depend on what happens to unanswered calls when all servers are busy.
The CCITT named the international unit of telephone traffic the erlang in 1946 in honor of Agner Krarup Erlang. In Erlang's analysis of efficient telephone line usage he derived the formulae for two important cases, Erlang-B and Erlang-C, which became foundational results in teletraffic engineering and queueing theory. His results, which are still used today, relate quality of service to the number of available servers. Both formulae take offered load as one of their main inputs (in erlangs), which is often expressed as call arrival rate times average call length.
A distinguishing assumption behind the Erlang B formula is that there is no queue, so that if all service elements are already in use then a newly arriving call will be blocked and subsequently lost. The formula gives the probability of this occurring. In contrast, the Erlang C formula provides for the possibility of an unlimited queue and it gives the probability that a new call will need to wait in the queue due to all servers being in use. Erlang's formulae apply quite widely, but they may fail when congestion is especially high causing unsuccessful traffic to repeatedly retry. One way of accounting for retries when no queue is available is the Extended Erlang B method.
Traffic measurements of a telephone circuit.
When used to represent carried traffic, a value (which can be a non-integer such as 43.5) followed by “erlangs” represents the average number of concurrent calls carried by the circuits (or other service-providing elements), where that average is calculated over some reasonable period of time. The period over which the average is calculated is often one hour, but shorter periods (e.g., 15 minutes) may be used where it is known that there are short spurts of demand and a traffic measurement is desired that does not mask these spurts.
One erlang of carried traffic refers to a single resource being in continuous use, or two channels each being in use fifty percent of the time, and so on. For example, if an office has two telephone operators who are both busy all the time, that would represent two erlangs (2 E) of traffic; or a radio channel that is occupied continuously during the period of interest (e.g. one hour) is said to have a load of 1 erlang.
When used to describe offered traffic, a value followed by “erlangs” represents the average number of concurrent calls that would have been carried if there were an unlimited number of circuits (that is, if the call-attempts that were made when all circuits were in use had not been rejected). The relationship between offered traffic and carried traffic depends on the design of the system and user behavior. Three common models are (a) callers whose call-attempts are rejected go away and never come back, (b) callers whose call-attempts are rejected try again within a fairly short space of time, and (c) the system allows users to wait in queue until a circuit becomes available.
A third measurement of traffic is instantaneous traffic, expressed as a certain number of erlangs, meaning the exact number of calls taking place at a point in time. In this case the number is a non-negative integer. Traffic-level-recording devices, such as moving-pen recorders, plot instantaneous traffic.
Erlang's analysis.
The concepts and mathematics introduced by Agner Krarup Erlang have broad applicability beyond telephony. They apply wherever users arrive more or less at random to receive exclusive service from any one of a group of service-providing elements without prior reservation, for example, where the service-providing elements are ticket-sales windows, toilets on an airplane, or motel rooms. (Erlang's models do not apply where the service-providing elements are shared between several concurrent users or different amounts of service are consumed by different users, for instance, on circuits carrying data traffic.)
The goal of Erlang's traffic theory is to determine exactly how many service-providing elements should be provided in order to satisfy users, without wasteful over-provisioning. To do this, a target is set for the grade of service (GoS) or quality of service (QoS). For example, in a system where there is no queuing, the GoS may be that no more than 1 call in 100 is blocked (i.e., rejected) due to all circuits being in use (a GoS of 0.01), which becomes the target probability of call blocking, "Pb", when using the Erlang B formula.
There are several resulting formulae, including Erlang B, Erlang C and the related Engset formula, based on different models of user behavior and system operation. These may each be derived by means of a special case of continuous-time Markov processes known as a birth–death process. The more recent Extended Erlang B method provides a further traffic solution that draws on Erlang's results.
Calculating offered traffic.
Offered traffic (in erlangs) is related to the call arrival rate, "λ", and the average call-holding time (the average time of a phone call), "h", by:
formula_0
provided that "h" and "λ" are expressed using the same units of time (seconds and calls per second, or minutes and calls per minute).
The practical measurement of traffic is typically based on continuous observations over several days or weeks, during which the instantaneous traffic is recorded at regular, short intervals (such as every few seconds). These measurements are then used to calculate a single result, most commonly the busy-hour traffic (in erlangs). This is the average number of concurrent calls during a given one-hour period of the day, where that period is selected to give the highest result. (This result is called the time-consistent busy-hour traffic). An alternative is to calculate a busy-hour traffic value separately for each day (which may correspond to slightly different times each day) and take the average of these values. This generally gives a slightly higher value than the time-consistent busy-hour value.
Where the existing busy-hour carried traffic, "E"c, is measured on an already overloaded system, with a significant level of blocking, it is necessary to take account of the blocked calls in estimating the busy-hour offered traffic "E"o (which is the traffic value to be used in the Erlang formulae). The offered traffic can be estimated by "E"o = "E"c/(1 − "P"b). For this purpose, where the system includes a means of counting blocked calls and successful calls, "P"b can be estimated directly from the proportion of calls that are blocked. Failing that, "P"b can be estimated by using "E"c in place of "E"o in the Erlang formula and the resulting estimate of "P"b can then be used in "E"o = "E"c/(1 − "P"b) to provide a first estimate of "E"o.
Another method of estimating "E"o in an overloaded system is to measure the busy-hour call arrival rate, "λ" (counting successful calls and blocked calls), and the average call-holding time (for successful calls), "h", and then estimate "E"o using the formula "E" = "λh".
For a situation where the traffic to be handled is completely new traffic, the only choice is to try to model expected user behavior. For example, one could estimate active user population, "N", expected level of use, "U" (number of calls/transactions per user per day), busy-hour concentration factor, "C" (proportion of daily activity that will fall in the busy hour), and average holding time/service time, "h" (expressed in minutes). A projection of busy-hour offered traffic would then be "E"o = "NUCh"/60 erlangs. (The division by 60 translates the busy-hour call/transaction arrival rate into a per-minute value, to match the units in which "h" is expressed.)
Erlang B formula.
The Erlang B formula (or Erlang-B with a hyphen), also known as the Erlang loss formula, is a formula for the blocking probability that describes the probability of call losses for a group of identical parallel resources (telephone lines, circuits, traffic channels, or equivalent), sometimes referred to as an M/M/c/c queue. It is, for example, used to dimension a telephone network's links. The formula was derived by Agner Krarup Erlang and is not limited to telephone networks, since it describes a probability in a queuing system (albeit a special case with a number of servers but no queueing space for incoming calls to wait for a free server). Hence, the formula is also used in certain inventory systems with lost sales.
The formula applies under the condition that an unsuccessful call, because the line is busy, is not queued or retried, but instead really vanishes forever. It is assumed that call attempts arrive following a Poisson process, so call arrival instants are independent. Further, it is assumed that the message lengths (holding times) are exponentially distributed (Markovian system), although the formula turns out to apply under general holding time distributions.
The Erlang B formula assumes an infinite population of sources (such as telephone subscribers), which jointly offer traffic to "N" servers (such as telephone lines). The rate expressing the frequency at which new calls arrive, λ, (birth rate, traffic intensity, etc.) is constant, and does "not" depend on the number of active sources. The total number of sources is assumed to be infinite.
The Erlang B formula calculates the blocking probability of a buffer-less loss system, where a request that is not served immediately is aborted, causing that no requests become queued. Blocking occurs when a new request arrives at a time where all available servers are currently busy. The formula also assumes that blocked traffic is cleared and does not return.
The formula provides the GoS (grade of service) which is the probability "Pb" that a new call arriving to the resources group is rejected because all resources (servers, lines, circuits) are busy: "B"("E", "m") where "E" is the total offered traffic in erlang, offered to "m" identical parallel resources (servers, communication channels, traffic lanes).
formula_1
where:
formula_17 is the total offered traffic in erlangs,
formula_18 is the number of identical parallel resources (servers),
formula_2 = "B"("E", "m") is the probability of blocking (loss).
Note: The "erlang" is a dimensionless load unit calculated as the mean arrival rate, λ, multiplied by the mean call holding time, "h".
See Little's law to prove that the erlang unit has to be dimensionless for Little's Law to be dimensionally sane.
This may be expressed recursively as follows, in a form that is used to simplify the calculation of tables of the Erlang B formula:
formula_3
formula_4
Typically, instead of "B"("E", "m") the inverse 1/"B"("E", "m") is calculated in numerical computation in order to ensure numerical stability:
formula_5
formula_6
Function ErlangB (E As Double, m As Integer) As Double
    Dim InvB As Double
    Dim j As Integer
    InvB = 1.0
    For j = 1 To m
        InvB = 1.0 + InvB * j / E
    Next j
    ErlangB = 1.0 / InvB
End Function
or a Python version
def erlang_b(E, m: int) -> float:
    """Calculate the probability of call losses."""
    inv_b = 1.0
    for j in range(1, m + 1):
        inv_b = 1.0 + inv_b * j / E
    return 1.0 / inv_b
The Erlang B formula is decreasing and convex in "m".
It requires that call arrivals can be modeled by a Poisson process, which is not always a good match, but is valid for any statistical distribution of call holding times with a finite mean.
It applies to traffic transmission systems that do not buffer traffic.
More modern examples compared to POTS where Erlang B is still applicable, are optical burst switching (OBS) and several current approaches to optical packet switching (OPS).
Erlang B was developed as a trunk sizing tool for telephone networks with holding times in the minutes range, but being a mathematical equation it applies on any time-scale.
Extended Erlang B.
Extended Erlang B differs from the classic Erlang-B assumptions by allowing for a proportion of blocked callers to try again, causing an increase in offered traffic from the initial baseline level. It is an iterative calculation rather than a formula and adds an extra parameter, the recall factor formula_7, which defines the recall attempts.
The steps in the process are as follows. It starts at iteration formula_8 with a known initial baseline level of traffic formula_9, which is successively adjusted to calculate a sequence of new offered traffic values formula_10, each of which accounts for the recalls arising from the previously calculated offered traffic formula_11.
1. Calculate the probability of a caller being blocked on their first attempt
formula_12
as above for Erlang B.
2. Calculate the probable number of blocked calls
formula_13
3. Calculate the number of recalls, formula_14, assuming a fixed Recall Factor, formula_7,
formula_15
4. Calculate the new offered traffic
formula_16
where formula_9 is the initial (baseline) level of traffic.
5. Return to step 1, substituting formula_10 for formula_11, and iterate until a stable value of formula_17 is obtained.
Once a satisfactory value of formula_17 has been found, the blocking probability formula_2 and the recall factor can be used to calculate the probability that all of a caller's attempts are lost, not just their first call but also any subsequent retries.
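The iteration can be sketched in the same style as the erlang_b function above (illustrative only; it assumes that function and uses a fixed number of iterations rather than an explicit convergence test):
def extended_erlang_b(E0, m: int, recall_factor: float, iterations: int = 100) -> float:
    """Iterate the offered traffic and return the final blocking probability."""
    E = E0
    for _ in range(iterations):
        Pb = erlang_b(E, m)                 # step 1: blocking probability
        recalls = E * Pb * recall_factor    # steps 2-3: blocked calls that retry
        E = E0 + recalls                    # step 4: new offered traffic
    return erlang_b(E, m)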
Erlang C formula.
The Erlang C formula expresses the probability that an arriving customer will need to queue (as opposed to immediately being served). Just as the Erlang B formula, Erlang C assumes an infinite population of sources, which jointly offer traffic of formula_17 erlangs to formula_18 servers. However, if all the servers are busy when a request arrives from a source, the request is queued. An unlimited number of requests may be held in the queue in this way simultaneously. This formula calculates the probability of queuing offered traffic, assuming that blocked calls stay in the system until they can be handled. This formula is used to determine the number of agents or customer service representatives needed to staff a call centre, for a specified desired probability of queuing. However, the Erlang C formula assumes that callers never hang up while in queue, which makes the formula predict that more agents should be used than are really needed to maintain a desired service level.
formula_19
where:
formula_17 is the total traffic offered in units of erlangs,
formula_18 is the number of servers,
formula_20 is the probability that a customer has to wait for service.
It is assumed that the call arrivals can be modeled by a Poisson process and that call holding times are described by an exponential distribution, therefore the Erlang C formula follows from the assumptions of the M/M/c queue model.
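A sketch of the Erlang C formula in the same style as the Erlang B code above (illustrative only; it requires formula_17 < formula_18 so that the queue does not grow without bound):
from math import factorial
def erlang_c(E, m: int) -> float:
    """Probability that an arriving call has to wait (E erlangs offered to m servers, E < m)."""
    top = (E**m / factorial(m)) * (m / (m - E))
    return top / (sum(E**i / factorial(i) for i in range(m)) + top)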
Limitations of the Erlang formula.
When Erlang developed the Erlang-B and Erlang-C traffic equations, they were developed on a set of assumptions. These assumptions are accurate under most conditions; however in the event of extremely high traffic congestion, Erlang's equations fail to accurately predict the correct number of circuits required because of re-entrant traffic. This is termed a high-loss system, where congestion breeds further congestion at peak times. In such cases, it is first necessary for many additional circuits to be made available so that the high loss can be alleviated. Once this action has been taken, congestion will return to reasonable levels and Erlang's equations can then be used to determine exactly how many circuits are really required.
An example of an instance which would cause such a High Loss System to develop would be if a TV-based advertisement were to announce a particular telephone number to call at a specific time. In this case, a large number of people would simultaneously phone the number provided. If the service provider had not catered for this sudden peak demand, extreme traffic congestion will develop and Erlang's equations cannot be used.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " E = \\lambda h "
},
{
"math_id": 1,
"text": "P_b = B(E,m) = \\frac{\\frac{E^m}{m!}} { \\sum_{i=0}^m \\frac{E^i}{i!}} "
},
{
"math_id": 2,
"text": "P_b"
},
{
"math_id": 3,
"text": "B(E,0) = 1. \\,"
},
{
"math_id": 4,
"text": "B(E,j) = \\frac{E B(E,j - 1)}{E B(E,j - 1) + j} \\ \\forall{j} = 1,2,\\ldots,m. "
},
{
"math_id": 5,
"text": "\\frac{1}{B(E,0)} = 1"
},
{
"math_id": 6,
"text": "\\frac{1}{B(E,j)} = 1 + \\frac{j}{E}\\frac{1}{B(E,j - 1)} \\ \\forall{j} = 1,2,\\ldots,m. "
},
{
"math_id": 7,
"text": "R_f"
},
{
"math_id": 8,
"text": "k=0"
},
{
"math_id": 9,
"text": "E_{0}"
},
{
"math_id": 10,
"text": "E_{k+1}"
},
{
"math_id": 11,
"text": "E_{k}"
},
{
"math_id": 12,
"text": "P_b = B(E_k,m)\\,"
},
{
"math_id": 13,
"text": "B_e = E_k P_b\\,"
},
{
"math_id": 14,
"text": "R"
},
{
"math_id": 15,
"text": "R = B_e R_f\\,"
},
{
"math_id": 16,
"text": "E_{k+1}=E_{0}+R\\,"
},
{
"math_id": 17,
"text": "E"
},
{
"math_id": 18,
"text": "m"
},
{
"math_id": 19,
"text": "P_w = {{\\frac{E^m}{m!} \\frac{m}{m - E}} \\over \\left ( \\sum\\limits_{i=0}^{m-1} \\frac{E^i}{i!} \\right ) + \\frac{E^m}{m!} \\frac{m}{m - E}} \\,"
},
{
"math_id": 20,
"text": "P_w"
}
] | https://en.wikipedia.org/wiki?curid=10283 |
1028314 | Penman equation | The Penman equation describes evaporation ("E") from an open water surface, and was developed by Howard Penman in 1948. Penman's equation requires daily mean temperature, wind speed, air pressure, and solar radiation to predict E. Simpler Hydrometeorological equations continue to be used where obtaining such data is impractical, to give comparable results within specific contexts, e.g. humid vs arid climates.
Details.
Numerous variations of the Penman equation are used to estimate evaporation from water, and land. Specifically the Penman–Monteith equation refines weather based potential evapotranspiration (PET) estimates of vegetated land areas. It is widely regarded as one of the most accurate models, in terms of estimates.
The original equation was developed by Howard Penman at the Rothamsted Experimental Station, Harpenden, UK.
The equation for evaporation given by Penman is:
formula_0
where:
"m" = Slope of the saturation vapor pressure curve (Pa K−1)
"R"n = Net irradiance (W m−2)
"ρ"a = density of air (kg m−3)
"c"p = heat capacity of air (J kg−1 K−1)
δ"e" = vapor pressure deficit (Pa)
"g"a = momentum surface aerodynamic conductance (m s−1)
"λ"v = latent heat of vaporization (J kg−1)
"γ" = psychrometric constant (Pa K−1)
which (if the SI units in parentheses are used) will give the evaporation "E"mass in units of kg/(m2·s), kilograms of water evaporated every second for each square meter of area.
Removing "λ"v makes it evident that the equation is fundamentally an energy balance. Replacing "λ"v with "L"v = "λ"v"ρ"water gives the evaporation rate in familiar precipitation units, "ET"vol, which has units of m/s (or, more commonly, mm/day), because it is a volume flux of m3/s per m2 of area, i.e. m/s.
This equation assumes a daily time step, so that net heat exchange with the ground is insignificant, and a unit area surrounded by similar open water or vegetation, so that net heat and vapor exchange with the surrounding area cancels out. Sometimes "R"n is replaced by "A", the total net available energy, when a situation warrants accounting for additional heat fluxes.
Temperature, wind speed, and relative humidity impact the values of "m", "g"a, "c"p, "ρ"a, and δ"e".
Shuttleworth (1993).
In 1993, W.Jim Shuttleworth modified and adapted the Penman equation to use SI, which made calculating evaporation simpler. The resultant equation is:
formula_1
where:
"E"mass = Evaporation rate (mm day−1)
"m" = Slope of the saturation vapor pressure curve (kPa K−1)
"R"n = Net irradiance (MJ m−2 day−1)
"γ" = psychrometric constant = formula_2 (kPa K−1)
"U"2 = wind speed (m s−1)
δ"e" = vapor pressure deficit (kPa)
"λ"v = latent heat of vaporization (MJ kg−1)
Note: this formula implicitly includes the division of the numerator by the density of water (1000 kg m−3) to obtain evaporation in units of mm d−1
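A sketch of the Shuttleworth (1993) form in Python (illustrative only; the inputs must be supplied in the units listed above, and "λ"v ≈ 2.45 MJ kg−1 is used here as a typical default value):
def penman_shuttleworth(m, Rn, gamma, U2, delta_e, lambda_v=2.45):
    """Open-water evaporation rate in mm/day from the Shuttleworth (1993) form."""
    return (m * Rn + gamma * 6.43 * (1.0 + 0.536 * U2) * delta_e) / (lambda_v * (m + gamma))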
δ"e" = (es - ea) = (1 – relative humidity) es
"e"s = saturated vapor pressure of air, as is found inside plant stoma.
"e"a = vapor pressure of free flowing air.
"e"s, mmHg = exp(21.07-5336/"T"a), approximation by Merva, 1975
Some useful relationships.
Therefore formula_3, mmHg/K
"T"a = air temperature in kelvins
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "E_{\\mathrm{mass}}=\\frac{m R_n + \\rho_a c_p \\left(\\delta e \\right) g_a }{\\lambda_v \\left(m + \\gamma \\right) }\n"
},
{
"math_id": 1,
"text": "E_{\\mathrm{mass}}=\\frac{m R_n + \\gamma * 6.43\\left(1+0.536 * U_2 \\right)\\delta e}{\\lambda_v \\left(m + \\gamma \\right) }\n"
},
{
"math_id": 2,
"text": "\\frac{0.0016286 * P_{kPa}} {\\lambda_v}"
},
{
"math_id": 3,
"text": "m= \\Delta =\\frac{d e_s}{d T_a} = \\frac{5336}{T_a^2} e^{\\left(21.07 - \\frac{5336}{T_a}\\right)}"
}
] | https://en.wikipedia.org/wiki?curid=1028314 |
1028321 | Wigner semicircle distribution | Probability distribution
The Wigner semicircle distribution, named after the physicist Eugene Wigner, is the probability distribution on [−"R", "R"] whose probability density function "f" is a scaled semicircle (i.e., a semi-ellipse) centered at (0, 0):
formula_0
for −"R" ≤ "x" ≤ "R", and "f"("x") = 0 if "|x|" > "R". The parameter R is commonly referred to as the "radius" parameter of the distribution.
The distribution arises as the limiting distribution of the eigenvalues of many random symmetric matrices, that is, as the dimensions of the random matrix approach infinity. The distribution of the spacing or gaps between eigenvalues is addressed by the similarly named Wigner surmise.
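This limiting behaviour is easy to see numerically. The following Python sketch (assuming NumPy is available) builds a large random real symmetric matrix with standard normal entries, scaled so that the limiting radius is "R" = 2, and compares the eigenvalue histogram with the semicircle density.
```python
import numpy as np

# Sketch (assumes NumPy): the eigenvalues of a large random real symmetric
# matrix with i.i.d. standard normal entries, symmetrized and scaled by
# 1/sqrt(N), approximately follow the semicircle law with radius R = 2.
rng = np.random.default_rng(0)
N = 2000
A = rng.standard_normal((N, N))
H = (A + A.T) / np.sqrt(2 * N)                  # real symmetric, scaled
eigs = np.linalg.eigvalsh(H)

R = 2.0
hist, edges = np.histogram(eigs, bins=40, range=(-R, R), density=True)
centers = (edges[:-1] + edges[1:]) / 2
theory = (2 / (np.pi * R**2)) * np.sqrt(R**2 - centers**2)
print(np.round(hist[18:22], 3), np.round(theory[18:22], 3))  # both near 1/pi
```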
General properties.
Because of symmetry, all of the odd-order moments of the Wigner distribution are zero. For positive integers n, the 2"n"-th moment of this distribution is
formula_1
In the typical special case that "R" = 2, this sequence coincides with the Catalan numbers 1, 2, 5, 14, etc. In particular, the second moment is "R"2/4 and the fourth moment is "R"4/8, which shows that the excess kurtosis is −1. As can be calculated using the residue theorem, the Stieltjes transform of the Wigner distribution is given by
formula_2
for complex numbers z with positive imaginary part, where the complex square root is taken to have positive imaginary part.
The Wigner distribution coincides with a scaled and shifted beta distribution: if Y is a beta-distributed random variable with parameters "α" = "β" = 3/2, then the random variable 2"RY" – "R" exhibits a Wigner semicircle distribution with radius R. By this transformation it is direct to compute some statistical quantities for the Wigner distribution in terms of those for the beta distributions, which are better known. In particular, it is direct to recover the characteristic function of the Wigner distribution from that of Y:
formula_3
where 1"F"1 is the confluent hypergeometric function and "J"1 is the Bessel function of the first kind. Likewise the moment generating function can be calculated as
formula_4
where "I"1 is the modified Bessel function of the first kind. The final equalities in both of the above lines are well-known identities relating the confluent hypergeometric function with the Bessel functions.
The Chebyshev polynomials of the second kind are orthogonal polynomials with respect to the Wigner semicircle distribution of radius 1.
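As a quick check of the beta-distribution relation and the moment formula, the following Python sketch (assuming NumPy) samples 2"RY" – "R" with Y ~ Beta(3/2, 3/2) and "R" = 2 and compares the empirical even moments with the Catalan numbers.
```python
import numpy as np

# Sketch (assumes NumPy): sample X = 2*R*Y - R with Y ~ Beta(3/2, 3/2),
# which follows the semicircle law of radius R, and compare the empirical
# even moments for R = 2 with the Catalan numbers.
rng = np.random.default_rng(0)
R = 2.0
Y = rng.beta(1.5, 1.5, size=1_000_000)
X = 2 * R * Y - R

catalan = [1, 2, 5, 14]                                    # expected 2n-th moments
empirical = [float(np.mean(X ** (2 * n))) for n in range(1, 5)]
print([round(m, 2) for m in empirical], catalan)
```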
Relation to free probability.
In free probability theory, the role of Wigner's semicircle distribution is analogous to that of the normal distribution in classical probability theory. Namely,
in free probability theory, the role of cumulants is occupied by "free cumulants", whose relation to ordinary cumulants is simply that the role of the set of all partitions of a finite set in the theory of ordinary cumulants is replaced by the set of all noncrossing partitions of a finite set. Just as the cumulants of degree more than 2 of a probability distribution are all zero if and only if the distribution is normal, so also, the "free" cumulants of degree more than 2 of a probability distribution are all zero if and only if the distribution is Wigner's semicircle distribution.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(x)={2 \\over \\pi R^2}\\sqrt{R^2-x^2\\,}\\, "
},
{
"math_id": 1,
"text": "\\frac{1}{n+1}\\left({R \\over 2}\\right)^{2n} {2n\\choose n}\\, "
},
{
"math_id": 2,
"text": "s(z)=-\\frac{2}{R^2}(z-\\sqrt{z^2-R^2})"
},
{
"math_id": 3,
"text": "\\varphi(t)=e^{-iRt}\\varphi_Y(2Rt)=e^{-iRt}{}_1F_1\\left(\\frac{3}{2}; 3; 2iRt\\right)=\\frac{2J_1(Rt)}{Rt},"
},
{
"math_id": 4,
"text": "M(t)=e^{-Rt}M_Y(2Rt)=e^{-Rt}{}_1F_1\\left(\\frac{3}{2}; 3; 2Rt\\right)=\\frac{2I_1(Rt)}{Rt}"
}
] | https://en.wikipedia.org/wiki?curid=1028321 |
1028589 | Normal basis | In mathematics, specifically the algebraic theory of fields, a normal basis is a special kind of basis for Galois extensions of finite degree, characterised as forming a single orbit for the Galois group. The normal basis theorem states that any finite Galois extension of fields has a normal basis. In algebraic number theory, the study of the more refined question of the existence of a normal integral basis is part of Galois module theory.
Normal basis theorem.
Let formula_0 be a Galois extension with Galois group formula_1. The classical normal basis theorem states that there is an element formula_2 such that formula_3 forms a basis of "K", considered as a vector space over "F". That is, any element formula_4 can be written uniquely as formula_5 for some elements formula_6
A normal basis contrasts with a primitive element basis of the form formula_7, where formula_2 is an element whose minimal polynomial has degree formula_8.
Group representation point of view.
A field extension "K" / "F" with Galois group "G" can be naturally viewed as a representation of the group "G" over the field "F" in which each automorphism is represented by itself. Representations of "G" over the field "F" can be viewed as left modules for the group algebra "F"["G"]. Every homomorphism of left "F"["G"]-modules formula_9 is of form formula_10 for some formula_11. Since formula_12 is a linear basis of "F"["G"] over "F", it follows easily that formula_13 is bijective iff formula_14 generates a normal basis of "K" over "F". The normal basis theorem therefore amounts to the statement saying that if "K" / "F" is finite Galois extension, then formula_15 as left formula_16-module. In terms of representations of "G" over "F", this means that "K" is isomorphic to the regular representation.
Case of finite fields.
For finite fields this can be stated as follows: Let formula_17 denote the field of "q" elements, where "q" = "p""m" is a prime power, and let formula_18 denote its extension field of degree "n" ≥ 1. Here the Galois group is formula_19 with formula_20 a cyclic group generated by the "q"-power Frobenius automorphism formula_21 with formula_22 Then there exists an element "β" ∈ "K" such that
formula_23
is a basis of "K" over "F".
Proof for finite fields.
In case the Galois group is cyclic as above, generated by formula_24 with formula_25 the normal basis theorem follows from two basic facts. The first is the linear independence of characters: a "multiplicative character" is a mapping "χ" from a group "H" to a field "K" satisfying formula_26; then any distinct characters formula_27 are linearly independent in the "K"-vector space of mappings. We apply this to the Galois group automorphisms formula_28 thought of as mappings from the multiplicative group formula_29. Now formula_30 as an "F"-vector space, so we may consider formula_31 as an element of the matrix algebra M"n"("F"); since its powers formula_32 are linearly independent (over "K" and a fortiori over "F"), its minimal polynomial must have degree at least "n", i.e. it must be formula_33.
The second basic fact is the classification of finitely generated modules over a PID such as formula_34. Every such module "M" can be represented as formula_35, where formula_36 may be chosen so that they are monic polynomials or zero and formula_37 is a multiple of formula_36. formula_38 is the monic polynomial of smallest degree annihilating the module, or zero if no such non-zero polynomial exists. In the first case formula_39, in the second case formula_40. In our case of cyclic "G" of size "n" generated by formula_24 we have an "F"-algebra isomorphism formula_41 where "X" corresponds to formula_42, so every formula_16-module may be viewed as an formula_34-module with multiplication by "X" being multiplication by formula_43. In case of "K" this means formula_44, so the monic polynomial of smallest degree annihilating "K" is the minimal polynomial of formula_24. Since "K" is a finite dimensional "F"-space, the representation above is possible with formula_45. Since formula_46 we can only have formula_47, and formula_48 as "F"["X"]-modules. (Note this is an isomorphism of "F"-linear spaces, but "not" of rings or "F"-algebras.) This gives isomorphism of formula_16-modules formula_49 that we talked about above, and under it the basis formula_50 on the right side corresponds to a normal basis formula_51 of "K" on the left.
Note that this proof would also apply in the case of a cyclic Kummer extension.
Example.
Consider the field formula_52 over formula_53, with Frobenius automorphism formula_54. The proof above clarifies the choice of normal bases in terms of the structure of "K" as a representation of "G" (or "F"["G"]-module). The irreducible factorization
formula_55 means we have a direct sum of "F"["G"]-modules (by the Chinese remainder theorem):formula_56
The first component is just formula_0, while the second is isomorphic as an "F"["G"]-module to formula_57 under the action formula_58 (Thus formula_59 as "F"["G"]-modules, but "not" as "F"-algebras.)
The elements formula_2 which can be used for a normal basis are precisely those outside either of the submodules, so that formula_60 and formula_61. In terms of the "G"-orbits of "K", which correspond to the irreducible factors of:
formula_62
the elements of formula_63 are the roots of formula_64, the nonzero elements of the submodule formula_65 are the roots of formula_66, while the normal basis, which in this case is unique, is given by the roots of the remaining factor formula_67.
By contrast, for the extension field formula_68 in which "n" = 4 is divisible by "p" = 2, we have the "F"["G"]-module isomorphism
formula_69
Here the operator formula_70 is not diagonalizable, the module "L" has nested submodules given by generalized eigenspaces of formula_24, and the normal basis elements "β" are those outside the largest proper generalized eigenspace, the elements with formula_71.
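The claims in the GF(8) example can be checked by brute force. The Python sketch below models GF(8) as polynomials over GF(2) modulo "x"3 + "x" + 1 (an arbitrary, assumed choice of irreducible cubic, made only to have concrete coordinates) and verifies that the elements whose Frobenius orbit {"β", "β"2, "β"4} is linearly independent over GF(2) are exactly the roots of "t"3 + "t"2 + 1.
```python
# Brute-force check of the GF(8) example. GF(8) is modeled as polynomials
# over GF(2) modulo x^3 + x + 1 (an assumed, arbitrary irreducible cubic);
# an element is a 3-bit integer whose bit i is the coefficient of x^i.

MOD = 0b1011  # x^3 + x + 1

def gf8_mul(a, b):
    """Multiply two elements of GF(8)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:      # reduce the degree-3 term
            a ^= MOD
    return r

def frob(a):
    """Frobenius automorphism: squaring."""
    return gf8_mul(a, a)

def is_normal_generator(b):
    """True if {b, b^2, b^4} is linearly independent over GF(2)."""
    v = (b, frob(b), frob(frob(b)))
    span = {(c0 * v[0]) ^ (c1 * v[1]) ^ (c2 * v[2])
            for c0 in (0, 1) for c1 in (0, 1) for c2 in (0, 1)}
    return len(span) == 8

def is_root_of_t3_t2_1(a):
    """Evaluate t^3 + t^2 + 1 at a (addition in characteristic 2 is XOR)."""
    a2 = gf8_mul(a, a)
    a3 = gf8_mul(a2, a)
    return a3 ^ a2 ^ 1 == 0

normal_generators = [b for b in range(8) if is_normal_generator(b)]
roots = [b for b in range(8) if is_root_of_t3_t2_1(b)]
print(normal_generators == roots, normal_generators)   # True [3, 5, 7]
```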
Application to cryptography.
The normal basis is frequently used in cryptographic applications based on the discrete logarithm problem, such as elliptic curve cryptography, since arithmetic using a normal basis is typically more computationally efficient than using other bases.
For example, in the field formula_52 above, we may represent elements as bit-strings:
formula_72
where the coefficients are bits formula_73 Now we can square elements by doing a left circular shift, formula_74, since squaring "β"4 gives "β"8 = "β". This makes the normal basis especially attractive for cryptosystems that utilize frequent squaring.
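In code, this squaring rule is literally a rotation of the coordinate vector, which is the property hardware and software implementations exploit. The tiny Python sketch below illustrates it; the specific bit pattern is a hypothetical example.
```python
# Squaring in a normal-basis representation of GF(2^n) is a cyclic rotation
# of the coordinate vector (a_{n-1}, ..., a_1, a_0); no field arithmetic is
# needed. The element chosen here is a hypothetical example.
def square_in_normal_basis(bits):
    return bits[1:] + bits[:1]        # left circular shift

alpha = [1, 0, 1]                     # a2*beta^4 + a1*beta^2 + a0*beta
print(square_in_normal_basis(alpha))  # [0, 1, 1], i.e. beta^2 + beta
```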
Proof for the case of infinite fields.
Suppose formula_75 is a finite Galois extension of the infinite field "F". Let ["K" : "F"] = "n", formula_76, where formula_77. By the primitive element theorem there exists formula_4 such that formula_78 and formula_79. Let us write formula_80. formula_81's (monic) minimal polynomial "f" over "F" is the irreducible degree "n" polynomial given by the formula
formula_82
Since "f" is separable (it has simple roots) we may define
formula_83
In other words,
formula_84
Note that formula_85 and formula_86 for formula_87. Next, define an formula_88 matrix "A" of polynomials over "K" and a polynomial "D" by
formula_89
Observe that formula_90, where "k" is determined by formula_91; in particular formula_47 iff formula_92. It follows that formula_93 is the permutation matrix corresponding to the permutation of "G" which sends each formula_94 to formula_95. (We denote by formula_93 the matrix obtained by evaluating formula_96 at formula_97.) Therefore, formula_98. We see that "D" is a non-zero polynomial, and therefore it has only a finite number of roots. Since we assumed "F" is infinite, we can find formula_99 such that formula_100. Define
formula_101
We claim that formula_102 is a normal basis. We only have to show that formula_103 are linearly independent over "F", so suppose formula_104 for some formula_105. Applying the automorphism formula_94 yields formula_106 for all "i". In other words, formula_107. Since formula_108, we conclude that formula_109, which completes the proof.
It is tempting to take formula_110 because formula_111. But this is impermissible because we used the fact that formula_112 to conclude that for any "F"-automorphism formula_113 and polynomial formula_114 over formula_115 the value of the polynomial formula_116 at "a" equals formula_117.
Primitive normal basis.
A primitive normal basis of an extension of finite fields "E" / "F" is a normal basis for "E" / "F" that is generated by a primitive element of "E", that is a generator of the multiplicative group "E"×. (Note that this is a more restrictive definition of primitive element than that mentioned above after the general normal basis theorem: one requires powers of the element to produce every non-zero element of "E", not merely a basis.) Lenstra and Schoof (1987) proved that every finite field extension possesses a primitive normal basis, the case when "F" is a prime field having been settled by Harold Davenport.
Free elements.
If "K" / "F" is a Galois extension and "x" in "K" generates a normal basis over "F", then "x" is free in "K" / "F". If "x" has the property that for every subgroup "H" of the Galois group "G", with fixed field "K""H", "x" is free for "K" / "K""H", then "x" is said to be completely free in "K" / "F". Every Galois extension has a completely free element.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F\\subset K"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "\\beta\\in K"
},
{
"math_id": 3,
"text": "\\{g(\\beta) : g\\in G\\}"
},
{
"math_id": 4,
"text": "\\alpha \\in K"
},
{
"math_id": 5,
"text": "\\alpha = \\sum_{g\\in G} a_g\\, g(\\beta)"
},
{
"math_id": 6,
"text": "a_g\\in F."
},
{
"math_id": 7,
"text": "\\{1,\\beta,\\beta^2,\\ldots,\\beta^{n-1}\\}"
},
{
"math_id": 8,
"text": "n=[K:F]"
},
{
"math_id": 9,
"text": "\\phi:F[G]\\rightarrow K"
},
{
"math_id": 10,
"text": "\\phi(r) = r\\beta"
},
{
"math_id": 11,
"text": "\\beta \\in K"
},
{
"math_id": 12,
"text": "\\{1\\cdot \\sigma| \\sigma \\in G\\}"
},
{
"math_id": 13,
"text": "\\phi"
},
{
"math_id": 14,
"text": "\\beta"
},
{
"math_id": 15,
"text": "K \\cong F[G]"
},
{
"math_id": 16,
"text": "F[G]"
},
{
"math_id": 17,
"text": "F = \\mathrm{GF}(q)=\\mathbb{F}_q"
},
{
"math_id": 18,
"text": "K= \\mathrm{GF}(q^n)=\\mathbb{F}_{q^n}"
},
{
"math_id": 19,
"text": "G = \\text{Gal}(K/F) = \\{1,\\Phi,\\Phi^2,\\ldots,\\Phi^{n-1}\\}"
},
{
"math_id": 20,
"text": "\\Phi^n = 1,"
},
{
"math_id": 21,
"text": "\\Phi(\\alpha)=\\alpha^q,"
},
{
"math_id": 22,
"text": "\\Phi^n = 1 =\\mathrm{Id}_K."
},
{
"math_id": 23,
"text": "\\{\\beta, \\Phi(\\beta), \\Phi^2(\\beta),\\ldots,\\Phi^{n-1}(\\beta)\\}\n\\ = \\ \n\\{\\beta, \\beta^q, \\beta^{q^2}, \\ldots,\\beta^{q^{n-1}}\\!\\}"
},
{
"math_id": 24,
"text": "\\Phi"
},
{
"math_id": 25,
"text": "\\Phi^n=1,"
},
{
"math_id": 26,
"text": "\\chi(h_1h_2)=\\chi(h_1)\\chi(h_2)"
},
{
"math_id": 27,
"text": "\\chi_1,\\chi_2,\\ldots "
},
{
"math_id": 28,
"text": "\\chi_i=\\Phi^i: K \\to K,"
},
{
"math_id": 29,
"text": "H=K^\\times"
},
{
"math_id": 30,
"text": "K\\cong F^n"
},
{
"math_id": 31,
"text": "\\Phi : F^n\\to F^n"
},
{
"math_id": 32,
"text": "1,\\Phi,\\ldots,\\Phi^{n-1}"
},
{
"math_id": 33,
"text": "X^n-1"
},
{
"math_id": 34,
"text": "F[X]"
},
{
"math_id": 35,
"text": "M \\cong\\bigoplus_{i=1}^k \\frac{F[X]}{(f_i(X))}"
},
{
"math_id": 36,
"text": "f_i(X)"
},
{
"math_id": 37,
"text": "f_{i+1}(X)"
},
{
"math_id": 38,
"text": "f_k(X)"
},
{
"math_id": 39,
"text": "\\dim_F M = \\sum_{i=1}^k \\deg f_i"
},
{
"math_id": 40,
"text": "\\dim_F M = \\infty"
},
{
"math_id": 41,
"text": "F[G]\\cong \\frac {F[X]}{(X^n-1)}"
},
{
"math_id": 42,
"text": "1 \\cdot \\Phi"
},
{
"math_id": 43,
"text": "1\\cdot\\Phi"
},
{
"math_id": 44,
"text": "X\\alpha = \\Phi(\\alpha)"
},
{
"math_id": 45,
"text": "f_k(X)=X^n-1"
},
{
"math_id": 46,
"text": "\\dim_F(K) = n,"
},
{
"math_id": 47,
"text": "k=1"
},
{
"math_id": 48,
"text": "K \\cong \\frac{F[X]}{(X^n{-}\\,1)}"
},
{
"math_id": 49,
"text": "K\\cong F[G]"
},
{
"math_id": 50,
"text": "\\{1,X,X^2,\\ldots,X^{n-1}\\}"
},
{
"math_id": 51,
"text": "\\{\\beta, \\Phi(\\beta),\\Phi^2(\\beta),\\ldots,\\Phi^{n-1}(\\beta)\\}"
},
{
"math_id": 52,
"text": "K=\\mathrm{GF}(2^3)=\\mathbb{F}_{8}"
},
{
"math_id": 53,
"text": "F=\\mathrm{GF}(2)=\\mathbb{F}_{2}"
},
{
"math_id": 54,
"text": "\\Phi(\\alpha)=\\alpha^2"
},
{
"math_id": 55,
"text": "X^n-1 \\ =\\ X^3-1\\ = \\ (X{-}1)(X^2{+}X{+}1) \\ \\in\\ F[X]"
},
{
"math_id": 56,
"text": "K\\ \\cong\\ \\frac{F[X]}{(X^3{-}\\,1)} \\ \\cong\\ \\frac{F[X]}{(X{+}1)} \\oplus \\frac{F[X]}{(X^2{+}X{+}1)}."
},
{
"math_id": 57,
"text": "\\mathbb{F}_{2^2} \\cong \\mathbb{F}_2[X]/(X^2{+}X{+}1)"
},
{
"math_id": 58,
"text": "\\Phi\\cdot X^i = X^{i+1}."
},
{
"math_id": 59,
"text": "K \\cong \\mathbb F_2\\oplus \\mathbb F_4"
},
{
"math_id": 60,
"text": "(\\Phi{+}1)(\\beta)\\neq 0"
},
{
"math_id": 61,
"text": "(\\Phi^2{+}\\Phi{+}1)(\\beta)\\neq 0"
},
{
"math_id": 62,
"text": "t^{2^3}-t \\ = \\ t(t{+}1)\\left(t^3 + t + 1\\right)\\left(t^3 + t^2 + 1\\right)\\ \\in\\ F[t],"
},
{
"math_id": 63,
"text": "F=\\mathbb{F}_2"
},
{
"math_id": 64,
"text": "t(t{+}1)"
},
{
"math_id": 65,
"text": "\\mathbb{F}_4"
},
{
"math_id": 66,
"text": "t^3+t+1"
},
{
"math_id": 67,
"text": "t^3{+}t^2{+}1"
},
{
"math_id": 68,
"text": "L = \\mathrm{GF}(2^4)=\\mathbb{F}_{16}"
},
{
"math_id": 69,
"text": "L \\ \\cong\\ \\mathbb{F}_2[X]/(X^4{-}1)\\ =\\ \\mathbb{F}_2[X]/(X{+}1)^4."
},
{
"math_id": 70,
"text": "\\Phi\\cong X"
},
{
"math_id": 71,
"text": "(\\Phi{+}1)^3(\\beta)\\neq 0"
},
{
"math_id": 72,
"text": "\\alpha \\ =\\ (a_2,a_1,a_0)\\ =\\ a_2\\Phi^2(\\beta) + a_1\\Phi(\\beta)+a_0\\beta\\ =\\ a_2\\beta^4 + a_1\\beta^2 +a_0\\beta,"
},
{
"math_id": 73,
"text": "a_i\\in \\mathrm{GF}(2)=\\{0,1\\}."
},
{
"math_id": 74,
"text": "\\alpha^2=\\Phi(a_2,a_1,a_0) = (a_1,a_0,a_2)"
},
{
"math_id": 75,
"text": "K/F"
},
{
"math_id": 76,
"text": "\\text{Gal}(K/F) = G =\\{\\sigma_1...\\sigma_n\\}"
},
{
"math_id": 77,
"text": "\\sigma_1 = \\text{Id}"
},
{
"math_id": 78,
"text": "i\\ne j\\implies \\sigma_i(\\alpha)\\ne\\sigma_j(\\alpha)"
},
{
"math_id": 79,
"text": "K=F[\\alpha]"
},
{
"math_id": 80,
"text": "\\alpha_i = \\sigma_i(\\alpha)"
},
{
"math_id": 81,
"text": "\\alpha"
},
{
"math_id": 82,
"text": "\\begin {align}\nf(X) &= \\prod_{i=1}^n(X - \\alpha_i)\n\\end {align}"
},
{
"math_id": 83,
"text": "\n\\begin {align}\ng(X) &= \\ \\frac{f(X)}{(X-\\alpha)f'(\\alpha)}\\\\\ng_i(X) &= \\ \\frac{f(X)}{(X-\\alpha_i) f'(\\alpha_i)} =\\ \\sigma_i(g(X)).\n\\end {align}\n"
},
{
"math_id": 84,
"text": "\\begin {align}\ng_i(X)&= \\prod_{\\begin {array}{c}1 \\le j \\le n \\\\ j\\ne i\\end {array}}\\frac{X-\\alpha_j}{\\alpha_i - \\alpha_j}\\\\\ng(X)&= g_1(X).\n\\end {align}"
},
{
"math_id": 85,
"text": "g(\\alpha)=1"
},
{
"math_id": 86,
"text": "g_i(\\alpha)=0"
},
{
"math_id": 87,
"text": "i \\ne 1"
},
{
"math_id": 88,
"text": "n \\times n"
},
{
"math_id": 89,
"text": "\n\\begin {align}\nA_{ij}(X) &= \\sigma_i(\\sigma_j(g(X)) = \\sigma_i(g_j(X))\\\\\nD(X) &= \\det A(X).\n\\end {align}"
},
{
"math_id": 90,
"text": "A_{ij}(X) = g_k(X)"
},
{
"math_id": 91,
"text": "\\sigma_k = \\sigma_i \\cdot \\sigma_j"
},
{
"math_id": 92,
"text": "\\sigma_i = \\sigma_j^{-1}"
},
{
"math_id": 93,
"text": "A(\\alpha)"
},
{
"math_id": 94,
"text": "\\sigma_i"
},
{
"math_id": 95,
"text": "\\sigma_i^{-1}"
},
{
"math_id": 96,
"text": "A(X)"
},
{
"math_id": 97,
"text": "x=\\alpha"
},
{
"math_id": 98,
"text": "D(\\alpha) = \\det A(\\alpha) = \\pm 1"
},
{
"math_id": 99,
"text": "a\\in F"
},
{
"math_id": 100,
"text": "D(a)\\ne 0"
},
{
"math_id": 101,
"text": "\n\\begin {align}\n\\beta &= g(a) \\\\ \n\\beta_i &= g_i(a) = \\sigma_i(\\beta).\n\\end {align}\n"
},
{
"math_id": 102,
"text": "\\{\\beta_1, \\ldots, \\beta_n\\}"
},
{
"math_id": 103,
"text": "\\beta_1, \\ldots,\\beta_n"
},
{
"math_id": 104,
"text": "\\sum_{j=1}^n x_j \\beta_j = 0"
},
{
"math_id": 105,
"text": "x_1...x_n\\in F"
},
{
"math_id": 106,
"text": "\\sum_{j=1}^n x_j \\sigma_i(g_j(a)) = 0"
},
{
"math_id": 107,
"text": "A(a) \\cdot \\overline {x} = \\overline {0}"
},
{
"math_id": 108,
"text": "\\det A(a) = D(a) \\ne 0"
},
{
"math_id": 109,
"text": "\\overline x = \\overline 0"
},
{
"math_id": 110,
"text": "a=\\alpha"
},
{
"math_id": 111,
"text": "D(\\alpha)\\neq0"
},
{
"math_id": 112,
"text": "a \\in F"
},
{
"math_id": 113,
"text": "\\sigma"
},
{
"math_id": 114,
"text": "h(X)"
},
{
"math_id": 115,
"text": "K"
},
{
"math_id": 116,
"text": "\\sigma(h(X))"
},
{
"math_id": 117,
"text": "\\sigma(h(a))"
}
] | https://en.wikipedia.org/wiki?curid=1028589 |
102883 | Belief | Mental state of holding a proposition or premise to be true
A belief is a subjective attitude that a proposition is true or a state of affairs is the case. A subjective attitude is a mental state of having some stance, take, or opinion about something. In epistemology, philosophers use the term "belief" to refer to attitudes about the world which can be either true or false. To believe something is to take it to be true; for instance, to believe that snow is white is comparable to accepting the truth of the proposition "snow is white". However, holding a belief does not require active introspection. For example, few individuals carefully consider whether or not the sun will rise tomorrow, simply assuming that it will. Moreover, beliefs need not be "occurrent" (e.g. a person actively thinking "snow is white"), but can instead be "dispositional" (e.g. a person who if asked about the color of snow would assert "snow is white").
There are various ways that contemporary philosophers have tried to describe beliefs, including as representations of ways that the world could be (Jerry Fodor), as dispositions to act as if certain things are true (Roderick Chisholm), as interpretive schemes for making sense of someone's actions (Daniel Dennett and Donald Davidson), or as mental states that fill a particular function (Hilary Putnam). Some have also attempted to offer significant revisions to our notion of belief, including eliminativists about belief who argue that there is no phenomenon in the natural world which corresponds to our folk psychological concept of belief (Paul Churchland) and formal epistemologists who aim to replace our bivalent notion of belief ("either we have a belief or we don't have a belief") with the more permissive, probabilistic notion of credence ("there is an entire spectrum of degrees of belief, not a simple dichotomy between belief and non-belief").
Beliefs are the subject of various important philosophical debates. Notable examples include: "What is the rational way to revise one's beliefs when presented with various sorts of evidence?", "Is the content of our beliefs entirely determined by our mental states, or do the relevant facts have any bearing on our beliefs (e.g. if I believe that I'm holding a glass of water, is the non-mental fact that water is H2O part of the content of that belief)?", "How fine-grained or coarse-grained are our beliefs?", and "Must it be possible for a belief to be expressible in language, or are there non-linguistic beliefs?"
Conceptions.
Various conceptions of the essential features of beliefs have been proposed, but there is no consensus as to which is the right one. "Representationalism" is the traditionally dominant position. Its most popular version maintains that attitudes toward representations, which are typically associated with propositions, are mental attitudes that constitute beliefs. These attitudes are part of the internal constitution of the mind holding the attitude. This view contrasts with "functionalism", which defines beliefs not in terms of the internal constitution of the mind but in terms of the function or the causal role played by beliefs. According to "dispositionalism", beliefs are identified with dispositions to behave in certain ways. This view can be seen as a form of functionalism, defining beliefs in terms of the behavior they tend to cause. "Interpretationism" constitutes another conception, which has gained popularity in contemporary philosophy. It holds that the beliefs of an entity are in some sense dependent on or relative to someone's interpretation of this entity. "Representationalism" tends to be associated with mind-body-dualism. "Naturalist" considerations against this dualism are among the motivations for choosing one of the alternative conceptions.
Representationalism.
Representationalism characterizes beliefs in terms of mental representations. Representations are usually defined as objects with semantic properties—like having content, referring to something, or being true or false. Beliefs form a special class of mental representations since they do not involve sensory qualities in order to represent something, unlike perceptions or episodic memories. Because of this, it seems natural to construe beliefs as attitudes towards propositions, which also constitute non-sensory representations, i.e. as propositional attitudes. As mental attitudes, beliefs are characterized by both their content and their mode. The content of an attitude is what this attitude is directed at: its object. Propositional attitudes are directed at propositions. Beliefs are usually distinguished from other propositional attitudes, like desires, by their mode or the way in which they are directed at propositions. The mode of beliefs has a mind-to-world direction of fit: beliefs try to represent the world as it is; they do not, unlike desires, involve an intention to change it. For example, if Rahul believes that it will be sunny today, then he has a mental attitude towards the proposition "It will be sunny today" which affirms that this proposition is true. This is different from Sofía's desire that it will be sunny today, despite the fact that both Rahul and Sofía have attitudes toward the same proposition. The mind-to-world direction of fit of beliefs is sometimes expressed by saying that beliefs aim at truth. This aim is also reflected in the tendency to revise one's belief upon receiving new evidence that an existing belief is false. Upon hearing a forecast of bad weather, Rahul is likely to change his mental attitude but Sofía is not.
There are different ways of conceiving how mental representations are realized in the mind. One form of this is the "language of thought hypothesis", which claims that mental representations have a language-like structure, sometimes referred to as "mentalese". Just like regular language, this involves simple elements that are combined in various ways according to syntactic rules to form more complex elements that act as bearers of meaning. On this conception, holding a belief would involve storing such a complex element in one's mind. Different beliefs are separated from each other in that they correspond to different elements stored in the mind. A more holistic alternative to the "language of thought hypothesis" is the "map-conception", which uses an analogy of maps to elucidate the nature of beliefs. According to this view, the belief system of a mind should be conceived of not as a set of many individual sentences but as a map encoding the information contained in these sentences. For example, the fact that Brussels is halfway between Paris and Amsterdam can be expressed both linguistically as a sentence and in a map through its internal geometrical relations.
Functionalism.
Functionalism contrasts with representationalism in that it defines beliefs not in terms of the internal constitution of the mind but in terms of the function or the causal role played by them. This view is often combined with the idea that the same belief can be realized in various ways and that it does not matter how it is realized as long as it plays the causal role characteristic to it. As an analogy, a hard drive is defined in a functionalist manner: it performs the function of storing and retrieving digital data. This function can be realized in many different ways: being made of plastic or steel, or using magnetism or laser. Functionalists hold that something similar is true for beliefs (or mental states in general). Among the roles relevant to beliefs is their relation to perceptions and to actions: perceptions usually cause beliefs and beliefs cause actions. For example, seeing that a traffic light has switched to red is usually associated with a belief that the light is red, which in turn causes the driver to bring the car to a halt. Functionalists use such characteristics to define beliefs: whatever is caused by perceptions in a certain way and also causes behavior in a certain way is called a belief. This is not just true for humans but may include animals, hypothetical aliens or even computers. From this perspective, it would make sense to ascribe the belief that a traffic light is red to a self-driving car behaving just like a human driver.
Dispositionalism is sometimes seen as a specific form of functionalism. It defines beliefs only concerning their role as causes of behavior or as dispositions to behave in a certain way. For example, a belief that there is a pie in the pantry is associated with the disposition to affirm this when asked and to go to the pantry when hungry. While it is uncontroversial that beliefs shape our behavior, the thesis that beliefs can be defined exclusively through their role in producing behavior has been contested. The problem arises because the mechanisms shaping our behavior seem to be too complex to single out the general contribution of one particular belief for any possible situation. For example, one may decide not to affirm that there is a pie in the pantry when asked because one wants to keep it secret. Or one might not eat the pie despite being hungry, because one also believes that it is poisoned. Due to this complexity, we are unable to define even a belief as simple as this one in terms of the behavioral dispositions for which it could be responsible.
Interpretationism.
According to interpretationism, the beliefs of an entity are in some sense dependent on, or relative to, someone's interpretation of this entity. Daniel Dennett is an important defender of such a position. He holds that we ascribe beliefs to entities in order to predict how they will behave. Entities with simple behavioral patterns can be described using physical laws or in terms of their function. Dennett refers to these forms of explanation as the "physical stance" and the "design stance". These stances are contrasted with the intentional stance, which is applied to entities with a more complex behavior by ascribing beliefs and desires to these entities. For example, we can predict that a chess player will move her queen to f7 if we ascribe to her the desire to win the game and the belief that this move will achieve that. The same procedure can also be applied to predicting how a chess computer will behave. The entity has the belief in question if this belief can be used to predict its behavior. Having a belief is relative to an interpretation since there may be different equally good ways of ascribing beliefs to predict behavior. So there may be another interpretation that predicts the move of the queen to f7 that does not involve the belief that this move will win the game. Another version of interpretationism is due to Donald Davidson, who uses the thought experiment of radical interpretation, in which the goal is to make sense of the behavior and language of another person from scratch without any knowledge of this person's language. This process involves ascribing beliefs and desires to the speaker. The speaker really has these beliefs if this project can be successful in principle.
Interpretationism can be combined with eliminativism and instrumentalism about beliefs. Eliminativists hold that, strictly speaking, there are no beliefs. Instrumentalists agree with eliminativists but add that belief-ascriptions are useful nonetheless. This usefulness can be explained in terms of interpretationism: belief-ascriptions help us in predicting how entities will behave. It has been argued that interpretationism can also be understood in a more realistic sense: that entities really have the beliefs ascribed to them and that these beliefs participate in the causal network. But, for this to be possible, it may be necessary to define interpretationism as a methodology and not as an ontological outlook on beliefs.
Origins.
Biologist Lewis Wolpert discusses the importance of causal beliefs and associates the making and use of tools with the origin of human beliefs.
Historical.
In the context of Ancient Greek thought, three related concepts were identified regarding the concept of belief: "pistis," "doxa," and "dogma." Simplified, "pistis" refers to "trust" and "confidence," "doxa" refers to "opinion" and "acceptance," and "dogma" refers to the positions of a philosopher or of a philosophical school such as Stoicism.
Types.
Beliefs can be categorized into various types depending on their ontological status, their degree, their object or their semantic properties.
Occurrent and dispositional.
Having an occurrent belief that the Grand Canyon is in Arizona involves entertaining the representation associated with this belief—for example, by actively thinking about it. But the great majority of our beliefs are not active most of the time: they are merely dispositional. They usually become activated or occurrent when needed or relevant in some way and then fall back into their dispositional state afterward. For example, the belief that 57 is greater than 14 was probably dispositional to the reader before reading this sentence, has become occurrent while reading it and may soon become dispositional again as the mind focuses elsewhere. The distinction between occurrent and dispositional beliefs is sometimes identified with the distinction between conscious and unconscious beliefs. But it has been argued that, despite overlapping, the two distinctions do not match. The reason for this is that beliefs can shape one's behavior and be involved in one's reasoning even if the subject is not conscious of them. Such beliefs are cases of unconscious occurrent mental states. On this view, being occurrent corresponds to being active, either consciously or unconsciously.
A dispositional belief is not the same as a disposition to believe. We have various dispositions to believe given the right perceptions; for example, to believe that it is raining given a perception of rain. Without this perception, there is still a disposition to believe but no actual dispositional belief. On a dispositionalist conception of belief, there are no occurrent beliefs, since all beliefs are defined in terms of dispositions.
Full and partial.
An important dispute in formal epistemology concerns the question of whether beliefs should be conceptualized as "full" beliefs or as "partial" beliefs. Full beliefs are all-or-nothing attitudes: either one has a belief in a proposition or one does not. This conception is sufficient to understand many belief ascriptions found in everyday language: for example, Pedro's belief that the Earth is bigger than the Moon. But some cases involving comparisons between beliefs are not easily captured through full beliefs alone: for example, that Pedro's belief that the Earth is bigger than the Moon is more certain than his belief that the Earth is bigger than Venus. Such cases are most naturally analyzed in terms of partial beliefs involving degrees of belief, so-called "credences". The higher the degree of a belief, the more certain the believer is that the believed proposition is true. This is usually formalized by numbers between 0 and 1: a degree of 1 represents an absolutely certain belief, a belief of 0 corresponds to an absolutely certain disbelief and all the numbers in between correspond to intermediate degrees of certainty. In the Bayesian approach, these degrees are interpreted as subjective probabilities: e.g. a belief of degree 0.9 that it will rain tomorrow means that the agent thinks that the probability of rain tomorrow is 90%. Bayesianism uses this relation between beliefs and probability to define the norms of rationality in terms of the laws of probability. This includes both synchronic laws about what one should believe at any moment and diachronic laws about how one should revise one's beliefs upon receiving new evidence.
The central question in the dispute between full and partial beliefs is whether these two types are really distinct types or whether one type can be explained in terms of the other. One answer to this question is called the "Lockean thesis". It states that partial beliefs are basic and that full beliefs are to be conceived as partial beliefs above a certain threshold: for example, that every belief above 0.9 is a full belief. Defenders of a primitive notion of full belief, on the other hand, have tried to explain partial beliefs as full beliefs about probabilities. On this view, having a partial belief of degree 0.9 that it will rain tomorrow is the same as having a full belief that the probability of rain tomorrow is 90%. Another approach circumvents the notion of probability altogether and replaces degrees of belief with degrees of disposition to revise one's full belief. From this perspective, both a belief of degree 0.6 and a belief of degree 0.9 may be seen as full beliefs. The difference between them is that the former belief can readily be changed upon receiving new evidence while the latter is more stable.
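As a toy illustration of credences, Bayesian updating, and the Lockean thesis, the following Python sketch updates a degree of belief as evidence arrives and applies a threshold of 0.9 to decide when it counts as a full belief; the prior, the likelihoods, and the threshold are all arbitrary assumptions made for the example.
```python
# Toy sketch: Bayesian updating of a credence (degree of belief) and a
# Lockean threshold turning it into a full belief. All numbers below are
# illustrative assumptions.
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

THRESHOLD = 0.9                      # assumed Lockean threshold for full belief

credence = 0.5                       # initial degree of belief that it will rain
for _ in range(3):                   # three independent pieces of supporting evidence
    credence = bayes_update(credence, p_evidence_if_true=0.8, p_evidence_if_false=0.3)
    status = "full belief" if credence >= THRESHOLD else "partial belief"
    print(round(credence, 3), status)
```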
Belief-in and belief-that.
Traditionally, philosophers have mainly focused in their inquiries concerning belief on the notion of "belief-that". Belief-that can be characterized as a propositional attitude to a claim which is either true or false. "Belief-in", on the other hand, is more closely related to notions like trust or faith in that it refers usually to an attitude to persons. "Belief-in" plays a central role in many religious traditions in which "belief in God" is one of the central virtues of their followers. The difference between belief-in and belief-that is sometimes blurry since various expressions using the term "belief in" seem to be translatable into corresponding expressions using the term "belief that" instead. For example, a "belief in" fairies may be said to be a "belief that" fairies exist. In this sense, belief-in is often used when the entity is not real, or its existence is in doubt. Typical examples would include: "he believes in witches and ghosts" or "many children believe in Santa Claus" or "I believe in a deity". Not all usages of belief-in concern the existence of something: some are "commendatory" in that they express a positive attitude towards their object. It has been suggested that these cases can also be accounted for in terms of belief-that. For example, a "belief in" marriage could be translated as a "belief that" marriage is good. Belief-in is used in a similar sense when expressing self-confidence or faith in one's self or one's abilities.
Defenders of a reductive account of belief-in have used this line of thought to argue that "belief in God" can be analyzed in a similar way: e.g. that it amounts to a belief that God exists with his characteristic attributes, like omniscience and omnipotence. Opponents of this account often concede that belief-in may entail various forms of belief-that, but that there are additional aspects to belief-in that are not reducible to belief-that. For example, a "belief in" an ideal may involve the "belief that" this ideal is something good, but it additionally involves a positive evaluative attitude toward this ideal that goes beyond a mere propositional attitude. Applied to the "belief in" God, opponents of the reductive approach may hold that a "belief that" God exists may be a necessary pre-condition for "belief in" God, but that it is not sufficient.
"De dicto" and "de re".
The difference between "de dicto" and "de re" beliefs or the corresponding ascriptions concerns the contributions singular terms like names and other referential devices make to the semantic properties of the belief or its ascription. In regular contexts, the truth-value of a sentence does not change upon substitution of co-referring terms. For example, since the names "Superman" and "Clark Kent" refer to the same person, we can replace one with the other in the sentence "Superman is strong" without changing its truth-value; this issue is more complicated in case of belief ascriptions. For example, Lois believes that Superman is strong but she does not believe that Clark Kent is strong. This difficulty arises due to the fact that she does not know that the two names refer to the same entity. Beliefs or belief ascriptions for which this substitution does not generally work are "de dicto", otherwise, they are "de re". In a "de re" sense, Lois does believe that Clark Kent is strong, while in a "de dicto" sense she does not. The contexts corresponding to "de dicto" ascriptions are known as referentially opaque contexts while "de re" ascriptions are referentially transparent.
Collective belief.
A collective belief is referred to when people speak of what "we" believe when this is not simply elliptical for what "we all" believe.
Sociologist Émile Durkheim wrote of collective beliefs and proposed that they, like all "social facts", "inhered in" social groups as opposed to individual persons. Jonathan Dancy states that "Durkheim's discussion of collective belief, though suggestive, is relatively obscure".
Margaret Gilbert has offered a related account in terms of the joint commitment of a number of persons as a body to accept a certain belief. According to this account, individuals who together collectively believe something need not personally believe it individually. Gilbert's work on the topic has stimulated a developing literature among philosophers. One question that has arisen is whether and how philosophical accounts of belief in general need to be sensitive to the possibility of collective belief.
Collective belief can play a role in social control and serve as a touchstone for identifying and purging heresies, deviancy, or political deviationism.
Contents.
As mental representations, beliefs have contents, which is what the belief is about or what it represents. Within philosophy, there are various disputes about how the contents of beliefs are to be understood. Holists and molecularists hold that the content of one particular belief depends on or is determined by other beliefs belonging to the same subject, which is denied by atomists. The question of dependence or determination also plays a central role in the internalism–externalism debate. Internalism states that the contents of someone's beliefs depend only on what is internal to that person and are determined entirely by things going on inside this person's head. Externalism, on the other hand, holds that the relations to one's environment also have a role to play in this.
Atomism, molecularism and holism.
The disagreement between "atomism, molecularism and holism" concerns the question of how the content of one belief depends on the contents of other beliefs held by the same subject. Atomists deny such dependence relations, molecularists restrict them to only a few closely related beliefs while holists hold that they may obtain between any two beliefs, however unrelated they seem. For example, assume that Mei and Benjamin both affirm that Jupiter is a planet. The most straightforward explanation, given by the atomists, would be that they have the same belief, i.e. that they hold the same content to be true. But now assume that Mei also believes that Pluto is a planet, which is denied by Benjamin. This indicates that they have different concepts of planet, which would mean that they were affirming different contents when they both agreed that Jupiter is a planet. This reasoning leads to molecularism or holism because the content of the Jupiter-belief depends on the Pluto-belief in this example.
An important motivation for this position comes from W. V. Quine's confirmational holism, which holds that, because of this interconnectedness, we cannot confirm or disconfirm individual hypotheses; rather, confirmation happens at the level of the theory as a whole. Another motivation is due to considerations of the nature of learning: it is often not possible to understand one concept, like force in Newtonian physics, without understanding other concepts, like mass or kinetic energy. One problem for holism is that genuine disagreements seem to be impossible or very rare: disputants would usually talk past each other since they never share exactly the same web of beliefs needed to determine the content of the source of the disagreement.
Internalism and externalism.
"Internalism and externalism" disagree about whether the contents of our beliefs are determined only by what's happening in our head or also by other factors. Internalists deny such a dependence on external factors. They hold that a person and a molecule-by-molecule copy would have exactly the same beliefs. Hilary Putnam objects to this position by way of his twin Earth thought experiment. He imagines a twin Earth in another part of the universe that is exactly like ours, except that their water has a different chemical composition despite behaving just like ours. According to Putnam, the reader's thought that water is wet is about "our water" while the reader's twin's thought on twin Earth that water is wet is about "their water". This is the case despite the fact that the two readers have the same molecular composition. So it seems necessary to include external factors in order to explain the difference. One problem with this position is that this difference in content does not bring any causal difference with it: the two readers act in exactly the same way. This casts doubt on the thesis that there is any genuine difference in need of explanation between the contents of the two beliefs.
Epistemology.
Epistemology is concerned with delineating the boundary between justified belief and opinion, and is generally involved with a theoretical philosophical study of knowledge. The primary problem in epistemology is to understand what is needed to have knowledge. The notion derives from Plato's dialogue "Theaetetus", where the epistemology of Socrates most clearly departs from that of the sophists, who appear to have defined knowledge as "justified true belief". Socrates dismisses the tendency to base knowledge ("episteme") on common opinion ("doxa"): it results from failing to distinguish a dispositive belief ("doxa") from knowledge ("episteme"), even when the opinion is regarded as correct (n.b., "orthé", not "alethia") in terms of right, and juristically so (according to the premises of the dialogue), which was the task of the rhetors to prove. Plato dismisses this possibility of an affirmative relation between opinion and knowledge even when the one who opines grounds his belief on the rule, and is able to add justification ("logos": reasonable and necessarily plausible assertions/evidence/guidance) to it. A belief can be based fully or partially on intuition.
Plato has been credited with the "justified true belief" theory of knowledge, even though Plato in the "Theaetetus" elegantly dismisses it, and even posits this argument of Socrates as a cause for his death penalty. The epistemologists Gettier and Goldman have questioned the "justified true belief" definition.
Justified true belief.
Justified true belief is a definition of knowledge that gained approval during the Enlightenment, "justified" standing in contrast to "revealed". There have been attempts to trace it back to Plato and his dialogues, more specifically in the "Theaetetus" and the "Meno". The concept of justified true belief states that in order to know that a given proposition is true, one must not only believe the relevant true proposition, but also have justification for doing so. In more formal terms, an agent formula_0 knows that a proposition formula_1 is true if and only if: formula_1 is true, formula_0 believes that formula_1 is true, and formula_0 is justified in believing that formula_1 is true.
That theory of knowledge suffered a significant setback with the discovery of Gettier problems, situations in which the above conditions were seemingly met but where many philosophers deny that anything is known. Robert Nozick suggested a clarification of "justification" which he believed eliminates the problem: the justification has to be such that were the justification false, the knowledge would be false. Bernecker and Dretske (2000) argue that "no epistemologist since Gettier has seriously and successfully defended the traditional view." On the other hand, Paul Boghossian argues that the justified true belief account is the "standard, widely accepted" definition of knowledge.
Belief systems.
A belief system comprises a set of mutually supportive beliefs. The beliefs of any such system can be religious, philosophical, political, ideological, or a combination of these.
Glover's view.
The British philosopher Jonathan Glover, following Meadows (2008), says that beliefs are always part of a belief system, and that tenanted belief systems are difficult for the tenants to completely revise or reject. He suggests that beliefs have to be considered holistically, and that no belief exists in isolation in the mind of the believer. Each belief always implicates and relates to other beliefs. Glover provides the example of a patient with an illness who returns to a doctor, but the doctor says that the prescribed medicine is not working. At that point, the patient has a great deal of flexibility in choosing what beliefs to keep or reject: the patient could believe that the doctor is incompetent, that the doctor's assistants made a mistake, that the patient's own body is unique in some unexpected way, that Western medicine is ineffective, or even that Western science is entirely unable to discover truths about ailments.
This insight has relevance for inquisitors, missionaries, agitprop groups and thought-police. The British philosopher Stephen Law has described some belief systems (including belief in homeopathy, psychic powers, and alien abduction) as "claptrap" and says that such belief-systems can "draw people in and hold them captive so they become willing slaves of claptrap ... if you get sucked in, it can be extremely difficult to think your way clear again".
Religion.
Religion is a personal set or institutionalized system of religious attitudes, beliefs, and practices; the service or worship of God or the supernatural. Religious belief is distinct from religious practice and from religious behaviours—with some believers not practicing religion and some practitioners not believing religion. "Belief" is no less of a theoretical term than is "religion". Religious beliefs often relate to the existence, characteristics and worship of a deity or deities, to the idea of divine intervention in the universe and in human life, or to the deontological explanations for the values and practices centered on the teachings of a spiritual leader or community. In contrast to other belief systems, religious beliefs are usually codified.
Forms.
A popular view holds that different religions each have identifiable and exclusive sets of beliefs or creeds, but surveys of religious belief have often found that the official doctrine and descriptions of the beliefs offered by religious authorities do not always agree with the privately held beliefs of those who identify as members of a particular religion. For a broad classification of the kinds of religious belief, see below.
Fundamentalism.
First self-applied as a term to the conservative doctrine outlined by anti-modernist Protestants in the United States, "fundamentalism" in religious terms denotes strict adherence to an interpretation of scriptures that are generally associated with theologically conservative positions or traditional understandings of the text and are distrustful of innovative readings, new revelation, or alternative interpretations. Religious fundamentalism has been identified in the media as being associated with fanatical or zealous political movements around the world that have used a strict adherence to a particular religious doctrine as a means to establish political identity and to enforce societal norms.
Orthodoxy.
First used in the context of Early Christianity, the term "orthodoxy" relates to religious belief that closely follows the edicts, apologies, and hermeneutics of a prevailing religious authority. In the case of Early Christianity, this authority was the communion of bishops, and is often referred to by the term "Magisterium". The term "orthodox" was applied almost as an epithet to a group of Jewish believers who held to pre-Enlightenment understanding of Judaism—now known as Orthodox Judaism. The Eastern Orthodox Church of Christianity and the Catholic Church each consider themselves to be the true heir to Early Christian belief and practice. The antonym of "orthodox" is "heterodox", and those adhering to orthodoxy often accuse the heterodox of apostasy, schism, or heresy.
Modernism/reform.
The Renaissance and later the Enlightenment in Europe exhibited varying degrees of religious tolerance and intolerance towards new and old religious ideas. The "philosophes" took particular exception to many of the more fantastical claims of religions and directly challenged religious authority and the prevailing beliefs associated with the established churches. In response to the liberalizing political and social movements, some religious groups attempted to integrate Enlightenment ideals of rationality, equality, and individual liberty into their belief systems, especially in the nineteenth and twentieth centuries. Reform Judaism and Liberal Christianity offer two examples of such religious associations.
Attitudes to other religions.
Adherents of particular religions deal with the differing doctrines and practices espoused by other religions or by other religious denominations in a variety of ways.
Exclusivism.
People with exclusivist beliefs typically explain other beliefs either as in error, or as corruptions or counterfeits of the true faith. This approach is a fairly consistent feature among smaller new religious movements that often rely on doctrine that claims a unique revelation by the founders or leaders, and considers it a matter of faith that the "correct" religion has a monopoly on truth. All three major Abrahamic monotheistic religions have passages in their holy scriptures that attest to the primacy of the scriptural testimony, and indeed monotheism itself is often vouched as an innovation characterized specifically by its explicit rejection of earlier polytheistic faiths.
Some exclusivist faiths incorporate a specific element of proselytization. This is a strongly-held belief in the Christian tradition which follows the doctrine of the Great Commission, and is less emphasized by the Islamic faith where the Quranic edict "There shall be no compulsion in religion" (2:256) is often quoted as a justification for toleration of alternative beliefs. The Jewish tradition does not actively seek out converts.
Exclusivism correlates with conservative, fundamentalist, and orthodox approaches of many religions, while pluralistic and syncretist approaches either explicitly downplay or reject the exclusivist tendencies within a religion.
Inclusivism.
People with inclusivist beliefs recognize some truth in all faith systems, highlighting agreements and minimizing differences. This attitude is sometimes associated with Interfaith dialogue or with the Christian Ecumenical movement, though in principle such attempts at pluralism are not necessarily inclusivist and many actors in such interactions (for example, the Roman Catholic Church) still hold to exclusivist dogma while participating in inter-religious organizations. Explicitly inclusivist religions include many that are associated with the New Age movement, as well as modern reinterpretations of Hinduism and Buddhism. The Baháʼí Faith considers it doctrine that there is truth in all faith-systems.
Pluralism and syncretism are two closely related concepts. People with pluralist beliefs make no distinction between faith systems, viewing each one as valid within a particular culture. People with syncretic views blend the views of a variety of different religions or traditional beliefs into a unique fusion which suits their particular experiences and contexts (eclecticism). Unitarian Universalism exemplifies a syncretic faith.
Adherence.
Typical reasons for adherence to religion include the following:
Psychologist James Alcock also summarizes a number of apparent benefits which reinforce religious belief. These include prayer appearing to account for successful resolution of problems, "a bulwark against existential anxiety and fear of annihilation," an increased sense of control, companionship with one's deity, a source of self-significance, and group identity.
Apostasy.
Typical reasons for rejection of religion include:
Psychology.
Mainstream psychology and related disciplines have traditionally treated belief as if it were the simplest form of mental representation and therefore one of the building blocks of conscious thought. Philosophers have tended to be more abstract in their analysis, and much of the work examining the viability of the belief concept stems from philosophical analysis.
The concept of belief presumes a subject (the believer) and an object of belief (the proposition). Like other propositional attitudes, belief implies the existence of mental states and intentionality, both of which are hotly debated topics in the philosophy of mind, whose foundations and relation to brain states are still controversial.
Beliefs are sometimes divided into core beliefs (that are actively thought about) and dispositional beliefs (that may be ascribed to someone who has not thought about the issue). For example, if asked "do you believe tigers wear pink pajamas?" a person might answer that they do not, despite the fact they may never have thought about this situation before.
Philosopher Lynne Rudder Baker has outlined four main contemporary approaches to belief in her book "Saving Belief":
Strategic approaches make a distinction between rules, norms and beliefs as follows:
Belief formation and revision.
Belief revision is a term commonly used to refer to the modification of beliefs. An extensive amount of scientific research and philosophical discussion exists around belief revision. Generally speaking, the process of belief revision entails the believer weighing the set of truths and/or evidence, and the dominance of a set of truths or evidence on an alternative to a held belief can lead to revision. One process of belief revision is Bayesian updating (or Bayesian inference) and is often referenced for its mathematical basis and conceptual simplicity. However, such a process may not be representative for individuals whose beliefs are not easily characterized as probabilistic.
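The following sketch illustrates one step of Bayesian updating as a model of belief revision; the prior, the likelihoods, and the observed "evidence" are hypothetical numbers chosen purely for illustration.

```python
# A minimal sketch of Bayesian updating as one model of belief revision.
# All numbers here are hypothetical, chosen only to show the mechanics.
prior = 0.30                 # initial degree of belief in some hypothesis H
p_evidence_given_h = 0.80    # probability of the observed evidence if H is true
p_evidence_given_not_h = 0.20

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_evidence = p_evidence_given_h * prior + p_evidence_given_not_h * (1 - prior)
posterior = p_evidence_given_h * prior / p_evidence
print(round(posterior, 3))   # the belief is revised upward, to about 0.632
```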
There are several techniques for individuals or groups to change the beliefs of others; these methods generally fall under the umbrella of persuasion. Persuasion can take on more specific forms such as consciousness raising when considered in an activist or political context. Belief modification may also occur as a result of the experience of outcomes. Because goals are based, in part, on beliefs, the success or failure at a particular goal may contribute to the modification of beliefs that supported the original goal.
Whether or not belief modification actually occurs is dependent not only on the extent of truths or evidence for the alternative belief, but also characteristics outside the specific truths or evidence. This includes, but is not limited to: the source characteristics of the message, such as credibility; social pressures; the anticipated consequences of a modification; or the ability of the individual or group to act on the modification. Therefore, individuals seeking to achieve belief modification in themselves or others need to consider all possible forms of resistance to belief revision.
Glover maintains that any person can continue to hold any belief if they would really like to (for example, with help from "ad hoc" hypotheses). One belief can be held fixed, and other beliefs will be altered around it. Glover warns that some beliefs may not be entirely explicitly believed (for example, some people may not realize they have racist belief-systems adopted from their environment as a child). Glover believes that people tend to first realize that beliefs can change, and may be contingent on their upbringing, around age 12 or 15.
Glover emphasizes that beliefs are difficult to change. He says that one may try to rebuild one's beliefs on more secure foundations (axioms), like building a new house, but warns that this may not be possible. Glover offers the example of René Descartes, saying: "[Descartes] starts off with the characteristic beliefs of a 17th-century Frenchman; he then junks the lot, he rebuilds the system, and somehow it looks a lot like the beliefs of a 17th-century Frenchman." To Glover, belief systems are not like houses but are instead like boats. As Glover puts it: "Maybe the whole thing needs rebuilding, but inevitably at any point you have to keep enough of it intact to keep floating."
Models of belief formation.
Psychologists study belief formation and the relationship between beliefs and actions. Three types of "models of belief formation" and change have been proposed: conditional inference process models, linear models and information processing models.
Conditional inference process models emphasize the role of inference for belief formation. When asked to estimate the likelihood that a statement is true, people allegedly search their memory for information that has implications for the validity of this statement. Once this information has been identified, they estimate the likelihood that the statement would be true if the information were true, and the likelihood that the statement would be true if the information were false. If their estimates for these two probabilities differ, people average them, weighting each by the likelihood that the information is true and false. Thus, information bears directly on beliefs of another, related statement.
Unlike the previous model, linear models take into consideration the possibility of multiple factors influencing belief formation. Using regression procedures, these models predict belief formation on the basis of several different pieces of information, with weights assigned to each piece on the basis of their relative importance.
Information processing models address the fact that the responses people have to belief-relevant information is unlikely to be predicted from the objective basis of the information that they can recall at the time their beliefs are reported. Instead, these responses reflect the number and meaning of the thoughts that people have about the message at the time that they encounter it.
Some influences on people's belief formation include:
However, even educated people, well aware of the process by which beliefs form, still strongly cling to their beliefs, and act on those beliefs even against their own self-interest. In her book "Leadership Therapy", Anna Rowley states: "You want your beliefs to change. It's proof that you are keeping your eyes open, living fully, and welcoming everything that the world and people around you can teach you." This view implies that people's beliefs may evolve as they gain new experiences.
Prediction.
Different psychological models have tried to predict people's beliefs and some of them try to estimate the exact probabilities of beliefs. For example, Robert Wyer developed a model of subjective probabilities. When people rate the likelihood of a certain statement (e.g., "It will rain tomorrow"), this rating can be seen as a subjective probability value. The subjective probability model posits that these subjective probabilities follow the same rules as objective probabilities. For example, the law of total probability might be applied to predict a subjective probability value. Wyer found that this model produces relatively accurate predictions for probabilities of single events and for changes in these probabilities, but that the probabilities of several beliefs linked by "and" or "or" do not follow the model as well.
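As a small, purely illustrative companion to the subjective-probability model described above, the law of total probability can be used to derive one subjective probability from others; all of the numbers below are hypothetical.

```python
# Applying the law of total probability to subjective probabilities,
# in the spirit of the model described above; all values are hypothetical.
p_cloudy = 0.40                    # subjective probability that it will be cloudy tonight
p_rain_given_cloudy = 0.70         # subjective probability of rain tomorrow if cloudy
p_rain_given_clear = 0.10          # subjective probability of rain tomorrow if not cloudy

p_rain = p_rain_given_cloudy * p_cloudy + p_rain_given_clear * (1 - p_cloudy)
print(round(p_rain, 2))            # 0.34: the predicted subjective probability of rain
```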
Delusion.
In the DSM-5, delusions are defined as fixed false beliefs that are not changed even when confronted with conflicting evidence.
Belief studies.
There is research investigating specific beliefs, types of beliefs and patterns of beliefs. For example, a study estimated contemporary prevalence and associations with belief in witchcraft around the world, which (in its data) varied between 9% and 90% between nations and is still a widespread element in worldviews globally. It also shows associations such as with lower "innovative activity", higher levels of anxiety, lower life expectancy, and higher religiosity.<ref name="10.1371/journal.pone.0276872"></ref> Other research is investigating beliefs in misinformation and their resistance to correction, including with respect to misinformation countermeasures. It describes cognitive, social and affective processes that leave people vulnerable to the formation of false beliefs. A study introduced the concept of "false social reality" which refers to widespread perceptions of public opinion that are shown to be false, such as underestimated general public support in the U.S. for climate change mitigation policies. Studies also suggested some uses of psychedelics can shift beliefs in some humans in certain ways, such as increasing attribution of consciousness to various entities (including plants and inanimate objects) and towards panpsychism and fatalism.
Emotion and beliefs.
Research has indicated that emotion and cognition act in conjunction to produce beliefs, and more specifically emotion plays a vital role in the formation and maintenance of beliefs.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "P"
}
] | https://en.wikipedia.org/wiki?curid=102883 |
1028841 | Simplex category | Category of non-empty finite ordinals and order-preserving maps
In mathematics, the simplex category (or simplicial category or nonempty finite ordinal category) is the category of non-empty finite ordinals and order-preserving maps. It is used to define simplicial and cosimplicial objects.
Formal definition.
The simplex category is usually denoted by formula_0. There are several equivalent descriptions of this category. formula_0 can be described as the category of "non-empty finite ordinals" as objects, thought of as totally ordered sets, and "(non-strictly) order-preserving functions" as morphisms. The objects are commonly denoted formula_1 (so that formula_2 is the ordinal formula_3). The category is generated by coface and codegeneracy maps, which amount to inserting or deleting elements of the orderings. (See simplicial set for relations of these maps.)
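As an informal illustration (not part of the article's formal development), the coface and codegeneracy maps can be written down as ordinary functions on the ordinals formula_1, and two of the cosimplicial identities they satisfy can be spot-checked directly, as in the sketch below.

```python
# A small illustration of the generating maps of the simplex category,
# viewed as functions between the ordinals [n] = {0, 1, ..., n}.
def coface(i):
    """delta_i: [n-1] -> [n], the order-preserving injection that omits i from its image."""
    return lambda k: k if k < i else k + 1

def codegeneracy(i):
    """sigma_i: [n+1] -> [n], the order-preserving surjection that takes the value i twice."""
    return lambda k: k if k <= i else k - 1

# Spot-check two of the cosimplicial identities on small ordinals.
n = 3
for j in range(n + 2):
    for i in range(j):        # delta_j . delta_i = delta_i . delta_{j-1} for i < j
        assert all(coface(j)(coface(i)(k)) == coface(i)(coface(j - 1)(k)) for k in range(n))
for j in range(n):
    for i in range(j + 1):    # sigma_j . sigma_i = sigma_i . sigma_{j+1} for i <= j
        assert all(codegeneracy(j)(codegeneracy(i)(k)) == codegeneracy(i)(codegeneracy(j + 1)(k))
                   for k in range(n + 3))
print("cosimplicial identities verified")
```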
A simplicial object is a presheaf on formula_0, that is a contravariant functor from formula_0 to another category. For instance, simplicial sets are contravariant with the codomain category being the category of sets. A cosimplicial object is defined similarly as a covariant functor originating from formula_0.
Augmented simplex category.
The augmented simplex category, denoted by formula_4 is the category of "all finite ordinals and order-preserving maps", thus formula_5, where formula_6. Accordingly, this category might also be denoted FinOrd. The augmented simplex category is occasionally referred to as algebraists' simplex category and the above version is called topologists' simplex category.
A contravariant functor defined on formula_4 is called an augmented simplicial object and a covariant functor out of formula_4 is called an augmented cosimplicial object; when the codomain category is the category of sets, for example, these are called augmented simplicial sets and augmented cosimplicial sets respectively.
The augmented simplex category, unlike the simplex category, admits a natural monoidal structure. The monoidal product is given by concatenation of linear orders, and the unit is the empty ordinal formula_7 (the lack of a unit prevents this from qualifying as a monoidal structure on formula_0). In fact, formula_4 is the monoidal category freely generated by a single monoid object, given by formula_8 with the unique possible unit and multiplication. This description is useful for understanding how any comonoid object in a monoidal category gives rise to a simplicial object since it can then be viewed as the image of a functor from formula_9 to the monoidal category containing the comonoid; by forgetting the augmentation we obtain a simplicial object. Similarly, this also illuminates the construction of simplicial objects from monads (and hence adjoint functors) since monads can be viewed as monoid objects in endofunctor categories. | [
{
"math_id": 0,
"text": "\\Delta"
},
{
"math_id": 1,
"text": " [n] = \\{0, 1, \\dots, n\\} "
},
{
"math_id": 2,
"text": " [n] "
},
{
"math_id": 3,
"text": " n+1 "
},
{
"math_id": 4,
"text": "\\Delta_+"
},
{
"math_id": 5,
"text": "\\Delta_+=\\Delta\\cup [-1]"
},
{
"math_id": 6,
"text": "[-1]=\\emptyset"
},
{
"math_id": 7,
"text": "[-1]"
},
{
"math_id": 8,
"text": "[0]"
},
{
"math_id": 9,
"text": "\\Delta_+^\\text{op}"
}
] | https://en.wikipedia.org/wiki?curid=1028841 |
1028978 | Combs method | The Combs method is a rule base reduction method of writing fuzzy logic rules described by William E. Combs in 1997. It is designed to prevent combinatorial explosion in fuzzy logic rules.
The Combs method takes advantage of the logical equality formula_0.
Equality proof.
The simplest proof of the given equality uses truth tables: both sides take the same truth value under every assignment of "p", "q" and "r", as the brute-force check below illustrates.
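```python
# A brute-force verification of the logical equivalence used by the Combs method,
# ((p and q) implies r) <=> ((p implies r) or (q implies r)), over all truth assignments.
from itertools import product

def implies(a, b):
    return (not a) or b

for p, q, r in product([False, True], repeat=3):
    lhs = implies(p and q, r)
    rhs = implies(p, r) or implies(q, r)
    assert lhs == rhs
print("equivalence holds for all eight truth assignments")
```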
Combinatorial explosion.
Suppose we have a fuzzy system that considers N variables at a time, each of which can fit into at least one of S sets. The number of rules necessary to cover all the cases in a traditional fuzzy system is formula_1, whereas the Combs method would need only formula_2 rules. For example, if we have five sets and five variables to consider to produce one output, covering all the cases would require 3125 rules in a traditional system, while the Combs method would require only 25 rules, taming the combinatorial explosion that occurs when more inputs or more sets are added to the system.
This article will focus on the Combs method itself. To learn more about the way rules are traditionally formed, see fuzzy logic and fuzzy associative matrix.
Example.
Suppose we were designing an artificial personality system that determined how friendly the personality is supposed to be towards a person in a strategic video game. The personality would consider its own fear, trust, and love in the other person. A set of rules in the Combs system might look like this:
The table translates to:
[IF Fear IS Unafraid THEN Friendship IS Enemies OR
IF Fear IS ModerateFear THEN Friendship IS Neutral OR
IF Fear IS Afraid THEN Friendship IS GoodFriends ]
OR
[IF Trust IS Distrusting THEN Friendship IS Enemies OR
IF Trust IS ModerateTrust THEN Friendship IS Neutral OR
IF Trust IS Trusting THEN Friendship IS GoodFriends]
OR
[IF Love IS Unloving THEN Friendship IS Enemies OR
IF Love IS ModerateLove THEN Friendship IS Neutral OR
IF Love IS Loving THEN Friendship IS GoodFriends]
In this case, because the table follows a straightforward pattern in the output, it could be rewritten as:
Each column of the table maps to the output provided in the last row. To obtain the output of the system, we just average the outputs of each rule for that output. For example, to calculate how much the computer is Enemies with the player, we take the average of how much the computer is Unafraid, Distrusting, and Unloving of the player. When all three averages are obtained, the result can then be defuzzified by any of the traditional means.
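A minimal sketch of this averaging step follows; the membership values assigned to the three inputs are hypothetical, and a real system would obtain them by fuzzifying measured inputs before averaging and then defuzzifying the result.

```python
# A sketch of the Combs-style inference described above, using hypothetical membership
# values for the three inputs; each output membership is the average over the inputs.
memberships = {
    # degree of membership contributed to each output, per input variable
    "Fear":  {"Enemies": 0.7, "Neutral": 0.3, "GoodFriends": 0.0},   # Unafraid / Moderate / Afraid
    "Trust": {"Enemies": 0.2, "Neutral": 0.5, "GoodFriends": 0.3},   # Distrusting / Moderate / Trusting
    "Love":  {"Enemies": 0.1, "Neutral": 0.6, "GoodFriends": 0.3},   # Unloving / Moderate / Loving
}

friendship = {
    output: sum(memberships[var][output] for var in memberships) / len(memberships)
    for output in ("Enemies", "Neutral", "GoodFriends")
}
print(friendship)   # e.g. "Enemies" averages the Unafraid, Distrusting and Unloving degrees;
                    # the resulting fuzzy output would then be defuzzified as usual
```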
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "((p \\land q) \\Rightarrow r) \\iff ((p \\Rightarrow r) \\lor (q \\Rightarrow r))"
},
{
"math_id": 1,
"text": "S^N"
},
{
"math_id": 2,
"text": "S \\times N"
}
] | https://en.wikipedia.org/wiki?curid=1028978 |
10290343 | Dirac algebra | In mathematical physics, the Dirac algebra is the Clifford algebra formula_0. This was introduced by the mathematical physicist P. A. M. Dirac in 1928 in developing the Dirac equation for spin-1/2 particles with a matrix representation of the gamma matrices, which represent the generators of the algebra.
The gamma matrices are a set of four formula_1 matrices formula_2 with entries in formula_3, that is, elements of formula_4 that satisfy
formula_5
where by convention, an identity matrix has been suppressed on the right-hand side. The numbers formula_6 are the components of the Minkowski metric.
For this article we fix the signature to be "mostly minus", that is, formula_7.
The Dirac algebra is then the linear span of the identity, the gamma matrices formula_8 as well as any linearly independent products of the gamma matrices. This forms a finite-dimensional algebra over the field formula_9 or formula_3, with dimension formula_10.
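The defining anticommutation relation can be checked concretely in a matrix representation. The sketch below uses the Dirac (standard) representation of the gamma matrices, which is one possible choice and not dictated by the algebra itself, and verifies the relation numerically with NumPy.

```python
import numpy as np

# Numerical check of {gamma^mu, gamma^nu} = 2 eta^{mu nu} I_4 in the Dirac representation.
I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)      # Pauli matrices
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

gamma = [np.block([[I2, Z2], [Z2, -I2]])] + \
        [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]   # gamma^0, gamma^1..3
eta = np.diag([1, -1, -1, -1])                                   # mostly-minus signature

for mu in range(4):
    for nu in range(4):
        anticom = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anticom, 2 * eta[mu, nu] * np.eye(4))
print("anticommutation relations verified")
```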
Basis for the algebra.
The algebra has a basis
formula_11
formula_12
formula_13
formula_14
formula_15
where in each expression each Greek index is strictly increasing as we move to the right, so in particular no index is repeated within an expression. Counting the elements of each type (1 + 4 + 6 + 4 + 1) shows that the dimension of the algebra is 16.
The algebra can be generated by taking products of the formula_8 alone: the identity arises as
formula_16
while the others are explicitly products of the formula_8.
These elements span the space generated by formula_8. We conclude that we really do have a basis of the Clifford algebra generated by the formula_17
Quadratic powers and Lorentz algebra.
For the theory in this section, there are many choices of conventions found in the literature, often corresponding to factors of formula_18. For clarity, here we will choose conventions to minimise the number of numerical factors needed, though this may lead to generators being anti-Hermitian rather than Hermitian.
There is another common way to write the quadratic subspace of the Clifford algebra:
formula_19
with formula_20. Note formula_21.
There is another way to write this which holds even when formula_22:
formula_23
This form can be used to show that the formula_24 form a representation of the Lorentz algebra (with real conventions)
formula_25
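This relation can be checked numerically in any concrete representation. The sketch below again assumes the Dirac representation of the gamma matrices (an assumption about the representation, not part of the abstract algebra) and verifies the commutation relation above for all index combinations.

```python
import numpy as np

# Check that S^{mu nu} = (1/4)[gamma^mu, gamma^nu] satisfies the quoted Lorentz-algebra relations,
# using the Dirac representation of the gamma matrices.
I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
gamma = [np.block([[I2, Z2], [Z2, -I2]])] + \
        [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def comm(a, b):
    return a @ b - b @ a

S = [[comm(gamma[m], gamma[n]) / 4 for n in range(4)] for m in range(4)]

for m in range(4):
    for n in range(4):
        for r in range(4):
            for s_ in range(4):
                lhs = comm(S[m][n], S[r][s_])
                rhs = (S[m][s_] * eta[n, r] - S[n][s_] * eta[m, r]
                       + S[n][r] * eta[m, s_] - S[m][r] * eta[n, s_])
                assert np.allclose(lhs, rhs)
print("Lorentz algebra relations verified")
```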
Physics conventions.
It is common convention in physics to include a factor of formula_18 in the quadratic generators, commonly writing σμν = (i/2)[γμ, γν], so that Hermitian conjugation (where transposing is done with respect to the spacetime greek indices) gives a 'Hermitian matrix' of sigma generators. Only 6 of these are independent, due to antisymmetry of the bracket; they span the six-dimensional representation space of the tensor (1, 0) ⊕ (0, 1)-representation of the Lorentz algebra inside formula_26. Moreover, they satisfy the same Lorentz-algebra commutation relations as the formula_24 above (up to factors of formula_18), and hence constitute a representation of the Lorentz algebra (in addition to spanning a representation space) sitting inside formula_27 the formula_28 spin representation.
Spin(1, 3).
The exponential map for matrices is well defined. The formula_24 satisfy the Lorentz algebra, and turn out to exponentiate to a representation of the spin group formula_29 of the Lorentz group formula_30 (strictly, the future-directed part formula_31 connected to the identity). The formula_24 are then the spin generators of this representation.
We emphasize that formula_24 is itself a matrix, "not" the components of a matrix. Its components as a formula_1 complex matrix are labelled by convention using greek letters from the start of the alphabet formula_32.
The action of formula_24 on a spinor formula_33, which in this setting is an element of the vector space formula_34, is
formula_35, or in components,
formula_36
This corresponds to an infinitesimal Lorentz transformation on a spinor. Then a finite Lorentz transformation, parametrized by the components formula_37 (antisymmetric in formula_38) can be expressed as
formula_39
From the property that
formula_40
it follows that
formula_41
And formula_42 as defined above satisfies
formula_43
This motivates the definition of Dirac adjoint for spinors formula_33, of
formula_44.
The corresponding transformation for formula_42 is
formula_45.
With this, it becomes simple to construct Lorentz invariant quantities for construction of Lagrangians such as the Dirac Lagrangian.
Quartic power.
The quartic subspace contains a single basis element,
formula_46
where formula_47 is the totally antisymmetric tensor such that formula_48 by convention.
This is antisymmetric under exchange of any two adjacent gamma matrices.
"γ"5.
When considering the complex span, this basis element can alternatively be taken to be
formula_49
More details can be found here.
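As a quick numerical sanity check, again assuming the Dirac representation of the gamma matrices, the sketch below verifies the antisymmetry of the quartic element under swapping adjacent factors, and two standard properties of γ5 not spelled out above: it squares to the identity and anticommutes with each gamma matrix.

```python
import numpy as np

# Checks on the quartic element and gamma^5 in the Dirac representation.
I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g = [np.block([[I2, Z2], [Z2, -I2]])] + \
    [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]

quartic = g[0] @ g[1] @ g[2] @ g[3]
assert np.allclose(g[1] @ g[0] @ g[2] @ g[3], -quartic)   # antisymmetry under an adjacent swap

gamma5 = 1j * quartic                                      # gamma^5 = i gamma^0 gamma^1 gamma^2 gamma^3
assert np.allclose(gamma5 @ gamma5, np.eye(4))
assert all(np.allclose(gamma5 @ g[mu] + g[mu] @ gamma5, 0) for mu in range(4))
print("quartic element checks passed")
```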
As a volume form.
By total antisymmetry of the quartic element, it can be considered to be a volume form. In fact, this observation extends to a discussion of Clifford algebras as a generalization of the exterior algebra: both arise as quotients of the tensor algebra, but the exterior algebra gives a more restrictive quotient, where the anti-commutators all vanish.
Derivation starting from the Dirac and Klein–Gordon equation.
The defining form of the gamma elements can be derived if one assumes the covariant form of the Dirac equation:
formula_50
and the Klein–Gordon equation:
formula_51
to be given, and requires that these equations lead to consistent results.
Derivation from consistency requirement (proof). Multiplying the Dirac equation by its conjugate equation yields:
formula_52
The demand of consistency with the Klein–Gordon equation leads immediately to:
formula_53
where formula_54 is the anticommutator, formula_6 is the Minkowski metric with signature (+ − − −) and formula_55 is the 4x4 unit matrix.
Cl1,3(C) and Cl1,3(R).
The Dirac algebra can be regarded as a complexification of the real spacetime algebra Cl1,3(formula_9):
formula_56
Cl1,3(formula_9) differs from Cl1,3(formula_3): in Cl1,3(formula_9) only "real" linear combinations of the gamma matrices and their products are allowed.
Proponents of geometric algebra strive to work with real algebras wherever that is possible. They argue that it is generally possible (and usually enlightening) to identify the presence of an imaginary unit in a physical equation. Such units arise from one of the many quantities in a real Clifford algebra that square to −1, and these have geometric significance because of the properties of the algebra and the interaction of its various subspaces. Some of these proponents also question whether it is necessary or even useful to introduce an additional imaginary unit in the context of the Dirac equation.
In the mathematics of Riemannian geometry, it is conventional to define the Clifford algebra Clp,q(formula_9) for arbitrary dimensions p,q; the anti-commutation of the Weyl spinors emerges naturally from the Clifford algebra. The Weyl spinors transform under the action of the spin group formula_57. The complexification of the spin group, called the spinc group formula_58, is a product formula_59 of the spin group with the circle formula_60 with the product formula_61 just a notational device to identify formula_62 with formula_63 The geometric point of this is that it disentangles the real spinor, which is covariant under Lorentz transformations, from the formula_64 component, which can be identified with the formula_64 fiber of the electromagnetic interaction. The formula_61 is entangling parity and charge conjugation in a manner suitable for relating the Dirac particle/anti-particle states (equivalently, the chiral states in the Weyl basis). The bispinor, insofar as it has linearly independent left and right components, can interact with the electromagnetic field. This is in contrast to the Majorana spinor and the ELKO spinor, which cannot ("i.e." they are electrically neutral), as they explicitly constrain the spinor so as to not interact with the formula_65 part coming from the complexification. The ELKO spinor (Eigenspinoren des Ladungskonjugationsoperators) is a class 5 Lounesto spinor.
Insofar as the presentation of charge and parity can be a confusing topic in conventional quantum field theory textbooks, the more careful dissection of these topics in a general geometric setting can be elucidating. Standard expositions of the Clifford algebra construct the Weyl spinors from first principles; that they "automatically" anti-commute is an elegant geometric by-product of the construction, completely by-passing any arguments that appeal to the Pauli exclusion principle (or the sometimes common sensation that Grassmann variables have been introduced via "ad hoc" argumentation.)
In contemporary physics practice, the Dirac algebra continues to be the standard environment the spinors of the Dirac equation "live" in, rather than the spacetime algebra.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{Cl}_{1,3}(\\mathbb{C})"
},
{
"math_id": 1,
"text": "4\\times 4"
},
{
"math_id": 2,
"text": "\\{\\gamma^\\mu\\} = \\{\\gamma^0,\\gamma^1, \\gamma^2, \\gamma^3\\}"
},
{
"math_id": 3,
"text": "\\mathbb{C}"
},
{
"math_id": 4,
"text": "\\text{Mat}_{4\\times 4}(\\mathbb{C})"
},
{
"math_id": 5,
"text": "\\displaystyle\\{ \\gamma^\\mu, \\gamma^\\nu \\} = \\gamma^\\mu \\gamma^\\nu + \\gamma^\\nu \\gamma^\\mu = 2 \\eta^{\\mu \\nu},"
},
{
"math_id": 6,
"text": "\\eta^{\\mu \\nu} \\,"
},
{
"math_id": 7,
"text": "(+,-,-,-)"
},
{
"math_id": 8,
"text": "\\gamma^\\mu"
},
{
"math_id": 9,
"text": "\\mathbb{R}"
},
{
"math_id": 10,
"text": "16 = 2^4"
},
{
"math_id": 11,
"text": "I_4,"
},
{
"math_id": 12,
"text": "\\gamma^\\mu,"
},
{
"math_id": 13,
"text": "\\gamma^\\mu\\gamma^\\nu,"
},
{
"math_id": 14,
"text": "\\gamma^\\mu\\gamma^\\nu\\gamma^\\rho,"
},
{
"math_id": 15,
"text": "\\gamma^\\mu\\gamma^\\nu\\gamma^\\rho\\gamma^\\sigma = \\gamma^0\\gamma^1\\gamma^2\\gamma^3"
},
{
"math_id": 16,
"text": "I_4 = (\\gamma^0)^2"
},
{
"math_id": 17,
"text": "\\gamma^\\mu."
},
{
"math_id": 18,
"text": "\\pm i"
},
{
"math_id": 19,
"text": "S^{\\mu\\nu} = \\frac{1}{4}[\\gamma^\\mu,\\gamma^\\nu]"
},
{
"math_id": 20,
"text": "\\mu\\neq\\nu"
},
{
"math_id": 21,
"text": "S^{\\mu\\nu} = - S^{\\nu\\mu}"
},
{
"math_id": 22,
"text": "\\mu=\\nu"
},
{
"math_id": 23,
"text": "S^{\\mu\\nu} = \\frac{1}{2}(\\gamma^\\mu\\gamma^\\nu - \\eta^{\\mu\\nu})."
},
{
"math_id": 24,
"text": "S^{\\mu\\nu}"
},
{
"math_id": 25,
"text": "[S^{\\mu\\nu}, S^{\\rho\\sigma}] = S^{\\mu\\sigma}\\eta^{\\nu\\rho} - S^{\\nu\\sigma}\\eta^{\\mu\\rho} + S^{\\nu\\rho}\\eta^{\\mu\\sigma} - S^{\\mu\\rho}\\eta^{\\nu\\sigma}."
},
{
"math_id": 26,
"text": "\\mathcal{Cl}_{1,3}(\\R)"
},
{
"math_id": 27,
"text": "\\mathcal{Cl}_{1,3}(\\R),"
},
{
"math_id": 28,
"text": "\\left(\\frac{1}{2},0\\right)\\oplus\\left(0,\\frac{1}{2}\\right)"
},
{
"math_id": 29,
"text": "\\text{Spin}(1,3)"
},
{
"math_id": 30,
"text": "\\text{SO}(1,3)"
},
{
"math_id": 31,
"text": "\\text{SO}(1,3)^+"
},
{
"math_id": 32,
"text": "\\alpha,\\beta,\\cdots"
},
{
"math_id": 33,
"text": "\\psi"
},
{
"math_id": 34,
"text": "\\mathbb{C}^4"
},
{
"math_id": 35,
"text": "\\psi\\mapsto S^{\\mu\\nu}\\psi"
},
{
"math_id": 36,
"text": "\\psi^\\alpha \\mapsto (S^{\\mu\\nu})^\\alpha{}_\\beta\\psi^\\beta."
},
{
"math_id": 37,
"text": "\\omega_{\\mu\\nu}"
},
{
"math_id": 38,
"text": "\\mu,\\nu"
},
{
"math_id": 39,
"text": "S := \\exp\\left(\\frac{1}{2}\\omega_{\\mu\\nu}S^{\\mu\\nu}\\right)."
},
{
"math_id": 40,
"text": "(\\gamma^\\mu)^\\dagger = \\gamma^0\\gamma^\\mu\\gamma^0,"
},
{
"math_id": 41,
"text": "(S^{\\mu\\nu})^\\dagger = -\\gamma^0 S^{\\mu\\nu}\\gamma^0."
},
{
"math_id": 42,
"text": "S"
},
{
"math_id": 43,
"text": "S^\\dagger = \\gamma^0 S^{-1} \\gamma^0"
},
{
"math_id": 44,
"text": "\\bar\\psi:= \\psi^\\dagger \\gamma^0"
},
{
"math_id": 45,
"text": "\\bar S := \\gamma^0 S^\\dagger \\gamma^0 = S^{-1}"
},
{
"math_id": 46,
"text": "\\gamma^0\\gamma^1\\gamma^2\\gamma^3 = \\frac{1}{4!}\\epsilon_{\\mu\\nu\\rho\\sigma}\\gamma^\\mu\\gamma^\\nu\\gamma^\\rho\\gamma^\\sigma,"
},
{
"math_id": 47,
"text": "\\epsilon_{\\mu\\nu\\rho\\sigma}"
},
{
"math_id": 48,
"text": "\\epsilon_{0123} = +1"
},
{
"math_id": 49,
"text": "\\gamma^5 := i\\gamma^0\\gamma^1\\gamma^2\\gamma^3."
},
{
"math_id": 50,
"text": "-i \\hbar \\gamma^\\mu \\partial_\\mu \\psi + m c \\psi = 0 \\,."
},
{
"math_id": 51,
"text": " - \\partial_t^2 \\psi + \\nabla^2 \\psi = m^2 \\psi"
},
{
"math_id": 52,
"text": "\\psi^{\\dagger} ( i \\hbar \\gamma^\\mu \\partial_\\mu + m c ) ( -i \\hbar \\gamma^\\nu \\partial_\\nu + m c ) \\psi = 0 \\,."
},
{
"math_id": 53,
"text": "\\displaystyle\\{ \\gamma^\\mu, \\gamma^\\nu \\} = \\gamma^\\mu \\gamma^\\nu + \\gamma^\\nu \\gamma^\\mu = 2 \\eta^{\\mu \\nu} I_4 "
},
{
"math_id": 54,
"text": "\\{ , \\}"
},
{
"math_id": 55,
"text": "\\ I_4 \\,"
},
{
"math_id": 56,
"text": " \\mathrm{Cl}_{1,3}(\\Complex) = \\mathrm{Cl}_{1,3}(\\R) \\otimes \\Complex. "
},
{
"math_id": 57,
"text": "\\mathrm{Spin}(n)"
},
{
"math_id": 58,
"text": "\\mathrm{Spin}^\\mathbb{C}(n)"
},
{
"math_id": 59,
"text": "\\mathrm{Spin}(n)\\times_{\\mathbb{Z}_2} S^1"
},
{
"math_id": 60,
"text": "S^1 \\cong U(1)"
},
{
"math_id": 61,
"text": "\\times_{\\mathbb{Z}_2}"
},
{
"math_id": 62,
"text": "(a,u)\\in \\mathrm{Spin}(n)\\times S^1"
},
{
"math_id": 63,
"text": "(-a, -u)."
},
{
"math_id": 64,
"text": "U(1)"
},
{
"math_id": 65,
"text": "S^1"
}
] | https://en.wikipedia.org/wiki?curid=10290343 |
1029137 | Eigenplane | In mathematics, an eigenplane is a two-dimensional invariant subspace in a given vector space. By analogy with the term "eigenvector" for a vector which, when operated on by a linear operator is another vector which is a scalar multiple of itself, the term eigenplane can be used to describe a two-dimensional plane (a "2-plane"), such that the operation of a linear operator on a vector in the 2-plane always yields another vector in the same 2-plane.
A particular case that has been studied is that in which the linear operator is an isometry "M" of the hypersphere (written "S3") represented within four-dimensional Euclidean space:
formula_0
where s and t are four-dimensional column vectors and Λθ is a two-dimensional eigenrotation within the eigenplane.
In the usual eigenvector problem, there is freedom to multiply an eigenvector by an arbitrary non-zero scalar; in this case there is freedom to multiply by an arbitrary rotation of the eigenplane.
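A concrete, purely illustrative numerical example: a 4 × 4 isometry built from two independent plane rotations has the first coordinate plane as an eigenplane, and the defining relation above can be checked directly.

```python
import numpy as np

# A 4x4 isometry that rotates two orthogonal 2-planes by different (hypothetical) angles;
# the first coordinate plane is then an eigenplane with eigenrotation Lambda_theta1.
def rot2(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta1, theta2 = 0.3, 1.1
M = np.block([[rot2(theta1), np.zeros((2, 2))],
              [np.zeros((2, 2)), rot2(theta2)]])

st = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.0, 0.0],
               [0.0, 0.0]])                     # columns s, t spanning the candidate 2-plane

assert np.allclose(M @ st, st @ rot2(theta1))   # M [s t] = [s t] Lambda_theta1
print("the first coordinate plane is an eigenplane")
```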
This case is potentially physically interesting in the case that the shape of the universe is a multiply connected 3-manifold, since finding the angles of the eigenrotations of a candidate isometry for topological lensing is a way to falsify such hypotheses. | [
{
"math_id": 0,
"text": "M \\; [ \\mathbf{s} \\; \\mathbf{t} ] \\; = \\; [ \\mathbf{s} \\; \\mathbf{t} ] \\Lambda_\\theta "
}
] | https://en.wikipedia.org/wiki?curid=1029137 |
1029177 | Primitive polynomial (field theory) | Minimal polynomial of a primitive element in a finite field
In finite field theory, a branch of mathematics, a primitive polynomial is the minimal polynomial of a primitive element of the finite field GF("p""m"). This means that a polynomial "F"("X") of degree m with coefficients in GF("p") = Z/"p"Z is a "primitive polynomial" if it is monic and has a root "α" in GF("p""m") such that formula_0 is the entire field GF("p""m"). This implies that "α" is a primitive ("p""m" − 1)-root of unity in GF("p""m").
Properties.
("x" − "α") ("x" − "α""p") ("x" − "α""p"2) … ("x" − "α""p""m"−1). That the coefficients of a polynomial of this form, for any α in GF("p""n"), not necessarily primitive, lie in GF("p") follows from the property that the polynomial is invariant under application of the Frobenius automorphism to its coefficients (using "α""p""n"
"α") and from the fact that the fixed field of the Frobenius automorphism is GF("p").
Examples.
Over GF(3) the polynomial "x"2 + 1 is irreducible but not primitive because it divides "x"4 − 1: its roots generate a cyclic group of order 4, while the multiplicative group of GF(32) is a cyclic group of order 8. The polynomial "x"2 + 2"x" + 2, on the other hand, is primitive. Denote one of its roots by α. Then, because the natural numbers less than and relatively prime to 32 − 1 = 8 are 1, 3, 5, and 7, the four primitive roots in GF(32) are α, "α"3 = 2"α" + 1, "α"5 = 2"α", and "α"7 = "α" + 2. The primitive roots α and "α"3 are algebraically conjugate. Indeed "x"2 + 2"x" + 2 = ("x" − "α") ("x" − (2"α" + 1)). The remaining primitive roots "α"5 and "α"7 = ("α"5)3 are also algebraically conjugate and produce the second primitive polynomial: "x"2 + "x" + 2 = ("x" − 2"α") ("x" − ("α" + 2)).
For degree 3, GF(33) has "φ"(33 − 1) = "φ"(26) = 12 primitive elements. As each primitive polynomial of degree 3 has three roots, all necessarily primitive, there are 12 / 3 = 4 primitive polynomials of degree 3. One primitive polynomial is "x"3 + 2"x" + 1. Denoting one of its roots by γ, the algebraically conjugate elements are "γ"3 and "γ"9. The other primitive polynomials are associated with algebraically conjugate sets built on other primitive elements "γ""r" with r relatively prime to 26:
formula_1
Applications.
Field element representation.
Primitive polynomials can be used to represent the elements of a finite field. If "α" in GF("p""m") is a root of a primitive polynomial "F"("x"), then the nonzero elements of GF("p""m") are represented as successive powers of "α":
formula_2
This allows an economical representation in a computer of the nonzero elements of the finite field, by representing an element by the corresponding exponent of formula_3 This representation makes multiplication easy, as it corresponds to addition of exponents modulo formula_4
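A small sketch of this representation, using the primitive polynomial "x"2 + 2"x" + 2 over GF(3) from the example above: the nonzero elements of GF(32) are listed as successive powers of a root α, reproducing the identities "α"3 = 2"α" + 1, "α"5 = 2"α" and "α"7 = "α" + 2.

```python
# Listing the nonzero elements of GF(3^2) as powers of a root alpha of x^2 + 2x + 2 over GF(3).
p = 3
# From alpha^2 + 2*alpha + 2 = 0 we get alpha^2 = alpha + 1 (mod 3), used as the reduction rule:
reduction = (1, 1)          # alpha^2 = 1*alpha + 1

def times_alpha(elem):
    """Multiply an element a*alpha + b (stored as (a, b)) by alpha and reduce mod x^2 + 2x + 2."""
    a, b = elem
    # (a*alpha + b)*alpha = a*alpha^2 + b*alpha = a*(alpha + 1) + b*alpha
    return ((a * reduction[0] + b) % p, (a * reduction[1]) % p)

elem = (1, 0)               # alpha^1
for k in range(1, p * p):
    print(f"alpha^{k} = {elem[0]}*alpha + {elem[1]}")
    elem = times_alpha(elem)
```

Running this prints the eight nonzero elements in exponent order, ending with alpha^8 = 1, in agreement with the example.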
Pseudo-random bit generation.
Primitive polynomials over GF(2), the field with two elements, can be used for pseudorandom bit generation. In fact, every linear-feedback shift register with maximum cycle length (which is 2"n" − 1, where "n" is the length of the linear-feedback shift register) may be built from a primitive polynomial.
In general, for a primitive polynomial of degree "m" over GF(2), this process will generate 2"m" − 1 pseudo-random bits before repeating the same sequence.
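A minimal sketch of such a generator follows, using the primitive trinomial "x"4 + "x" + 1 over GF(2); since the degree is 4, the maximum period is 24 − 1 = 15. The initial state is an arbitrary non-zero choice.

```python
def lfsr_bits(taps, state, n):
    """Fibonacci LFSR over GF(2): 'taps' are the exponents of the feedback polynomial's
    nonzero terms (excluding the constant term), 'state' is a bit list of length max(taps)."""
    out = []
    for _ in range(n):
        out.append(state[-1])                     # output the oldest bit
        feedback = 0
        for t in taps:
            feedback ^= state[len(state) - t]     # XOR the tapped positions
        state = [feedback] + state[:-1]           # shift in the new feedback bit
    return out

# Primitive trinomial x^4 + x + 1 over GF(2): the sequence has maximum period 15.
bits = lfsr_bits(taps=[4, 1], state=[1, 0, 0, 0], n=30)
print(bits[:15] == bits[15:30])   # True: the output repeats with period 2^4 - 1 = 15
```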
CRC codes.
The cyclic redundancy check (CRC) is an error-detection code that operates by interpreting the message bitstring as the coefficients of a polynomial over GF(2) and dividing it by a fixed generator polynomial also over GF(2); see Mathematics of CRC. Primitive polynomials, or multiples of them, are sometimes a good choice for generator polynomials because they can reliably detect two bit errors that occur far apart in the message bitstring, up to a distance of 2"n" − 1 for a degree "n" primitive polynomial.
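The division step can be sketched as bitwise polynomial long division over GF(2); the 8-bit message below is hypothetical, and the generator used is the degree-3 primitive polynomial "x"3 + "x" + 1.

```python
def crc_remainder(message_bits, generator_bits):
    """Interpret both arguments as GF(2) polynomial coefficients (most significant first) and
    return the remainder of the shifted message polynomial divided by the generator polynomial."""
    n = len(generator_bits) - 1                 # degree of the generator
    padded = list(message_bits) + [0] * n       # multiply the message by x^n
    for i in range(len(message_bits)):
        if padded[i]:
            for j, g in enumerate(generator_bits):
                padded[i + j] ^= g              # subtract (XOR) a shifted copy of the generator
    return padded[-n:]                          # the last n bits are the check value

message = [1, 0, 1, 1, 0, 0, 1, 0]              # a hypothetical 8-bit message
print(crc_remainder(message, [1, 0, 1, 1]))     # 3-bit check value for generator x^3 + x + 1
```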
Primitive trinomials.
A useful class of primitive polynomials is the primitive trinomials, those having only three nonzero terms: "xr" + "xk" + 1. Their simplicity makes for particularly small and fast linear-feedback shift registers. A number of results give techniques for locating and testing primitiveness of trinomials.
For polynomials over GF(2), where 2"r" − 1 is a Mersenne prime, a polynomial of degree "r" is primitive if and only if it is irreducible. (Given an irreducible polynomial, it is "not" primitive only if the period of "x" is a non-trivial factor of 2"r" − 1. Primes have no non-trivial factors.) Although the Mersenne Twister pseudo-random number generator does not use a trinomial, it does take advantage of this.
Richard Brent has been tabulating primitive trinomials of this form, such as "x"74207281 + "x"30684570 + 1. This can be used to create a pseudo-random number generator of the huge period 274207281 − 1, a number with more than 22 million decimal digits.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\{0,1,\\alpha, \\alpha^2,\\alpha^3, \\ldots \\alpha^{p^m-2}\\}"
},
{
"math_id": 1,
"text": "\\begin{align}x^3+2x+1 & = (x-\\gamma)(x-\\gamma^3)(x-\\gamma^9)\\\\\nx^3+2x^2+x+1 &= (x-\\gamma^5)(x-\\gamma^{5\\cdot3})(x-\\gamma^{5\\cdot9}) = (x-\\gamma^5)(x-\\gamma^{15})(x-\\gamma^{19})\\\\\nx^3+x^2+2x+1 &= (x-\\gamma^7)(x-\\gamma^{7\\cdot3})(x-\\gamma^{7\\cdot9}) = (x-\\gamma^7)(x-\\gamma^{21})(x-\\gamma^{11})\\\\\nx^3+2x^2+1 &= (x-\\gamma^{17})(x-\\gamma^{17\\cdot3})(x-\\gamma^{17\\cdot9}) = (x-\\gamma^{17})(x-\\gamma^{25})(x-\\gamma^{23}).\n\\end{align}"
},
{
"math_id": 2,
"text": "\n\\mathrm{GF}(p^m) = \\{ 0, 1= \\alpha^0, \\alpha, \\alpha^2, \\ldots, \\alpha^{p^m-2} \\} .\n"
},
{
"math_id": 3,
"text": "\\alpha."
},
{
"math_id": 4,
"text": "p^m-1."
}
] | https://en.wikipedia.org/wiki?curid=1029177 |
10294 | Encryption | Process of converting plaintext to ciphertext
In cryptography, encryption is the process of transforming (more specifically, encoding) information in a way that, ideally, only authorized parties can decode. This process converts the original representation of the information, known as plaintext, into an alternative form known as ciphertext. Despite its goal, encryption does not itself prevent interference but denies the intelligible content to a would-be interceptor.
For technical reasons, an encryption scheme usually uses a pseudo-random encryption key generated by an algorithm. It is possible to decrypt the message without possessing the key but, for a well-designed encryption scheme, considerable computational resources and skills are required. An authorized recipient can easily decrypt the message with the key provided by the originator to recipients but not to unauthorized users.
Historically, various forms of encryption have been used to aid in cryptography. Early encryption techniques were often used in military messaging. Since then, new techniques have emerged and become commonplace in all areas of modern computing. Modern encryption schemes use the concepts of public-key and symmetric-key. Modern encryption techniques ensure security because modern computers are inefficient at cracking the encryption.
History.
Ancient.
One of the earliest forms of encryption is symbol replacement, which was first found in the tomb of Khnumhotep II, who lived in 1900 BC Egypt. Symbol replacement encryption is “non-standard,” which means that the symbols require a cipher or key to understand. This type of early encryption was used throughout Ancient Greece and Rome for military purposes. One of the most famous military encryption developments was the Caesar cipher, a system in which each letter of the plaintext is shifted a fixed number of positions along the alphabet to obtain the encoded letter. A message encoded in this way could be decoded by anyone who knew the fixed shift.
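A minimal sketch of the Caesar cipher just described; the shift of 3 and the sample message are arbitrary illustrative choices.

```python
# Caesar cipher: shift each letter by a fixed amount; decoding shifts back by the same amount.
def caesar(text, shift):
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)                  # leave spaces and punctuation unchanged
    return ''.join(result)

encoded = caesar("ATTACK AT DAWN", 3)
print(encoded)                      # "DWWDFN DW GDZQ"
print(caesar(encoded, -3))          # recovers the original message
```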
Around 800 AD, Arab mathematician Al-Kindi developed the technique of frequency analysis – which was an attempt to systematically crack ciphers, including the Caesar cipher. This technique looked at the frequency of letters in the encrypted message to determine the appropriate shift. This technique was rendered ineffective after the creation of the polyalphabetic cipher, described by Al-Qalqashandi (1355–1418) and Leon Battista Alberti (in 1465), which incorporated different sets of languages. In order for frequency analysis to be useful, the person trying to decrypt the message would need to know which language the sender chose.
19th–20th century.
Around 1790, Thomas Jefferson theorized a cipher to encode and decode messages in order to provide a more secure way of military correspondence. The cipher, known today as the Wheel Cipher or the Jefferson Disk, although never actually built, was theorized as a spool that could jumble an English message up to 36 characters. The message could be decrypted by plugging in the jumbled message to a receiver with an identical cipher.
A similar device to the Jefferson Disk, the M-94, was developed in 1917 independently by US Army Major Joseph Mauborgne. This device was used in U.S. military communications until 1942.
In World War II, the Axis powers used a more advanced version of the M-94 called the Enigma Machine. The Enigma Machine was more complex because unlike the Jefferson Wheel and the M-94, each day the jumble of letters switched to a completely new combination. Each day's combination was only known by the Axis, so many thought the only way to break the code would be to try over 17,000 combinations within 24 hours. The Allies used computing power to severely limit the number of reasonable combinations they needed to check every day, leading to the breaking of the Enigma Machine.
Modern.
Today, encryption is used in the transfer of communication over the Internet for security and commerce. As computing power continues to increase, computer encryption is constantly evolving to prevent eavesdropping attacks. One of the first "modern" ciphers, DES, used a 56-bit key with 72,057,594,037,927,936 possibilities; it was cracked in 22 hours and 15 minutes by EFF's DES cracker in 1999, which used a brute-force method. Modern encryption standards often use stronger key sizes, such as 256 bits, as in AES (256-bit mode), Twofish, ChaCha20-Poly1305 and Serpent. Ciphers using a 128-bit or larger key, like AES, cannot practically be brute-forced, because the total number of keys (3.4028237e+38 for a 128-bit key) is far too large. The most likely option for cracking ciphers with large key sizes is to find vulnerabilities in the cipher itself, such as inherent biases and backdoors, or to exploit physical side effects through side-channel attacks. For example, RC4, a stream cipher, was cracked due to inherent biases and vulnerabilities in the cipher.
Encryption in cryptography.
In the context of cryptography, encryption serves as a mechanism to ensure confidentiality. Since data may be visible on the Internet, sensitive information such as passwords and personal communication may be exposed to potential interceptors. The process of encrypting and decrypting messages involves keys. The two main types of keys in cryptographic systems are symmetric-key and public-key (also known as asymmetric-key).
Many complex cryptographic algorithms often use simple modular arithmetic in their implementations.
Types.
In symmetric-key schemes, the encryption and decryption keys are the same. Communicating parties must have the same key in order to achieve secure communication. The German Enigma Machine utilized a new symmetric-key each day for encoding and decoding messages.
In public-key encryption schemes, the encryption key is published for anyone to use and encrypt messages. However, only the receiving party has access to the decryption key that enables messages to be read. Public-key encryption was first described in a secret document in 1973; beforehand, all encryption schemes were symmetric-key (also called private-key). Although it appeared only after that classified work, the paper of Diffie and Hellman was published in a journal with a large readership, and the value of the methodology was explicitly described. The method became known as the Diffie-Hellman key exchange.
RSA (Rivest–Shamir–Adleman) is another notable public-key cryptosystem. Created in 1978, it is still used today for applications involving digital signatures. Using number theory, the RSA algorithm selects two prime numbers, which help generate both the encryption and decryption keys.
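A deliberately tiny, insecure sketch of the RSA idea just described; real keys use primes hundreds of digits long together with padding schemes, and the primes, exponent and message below are arbitrary illustrative choices.

```python
# Toy RSA key generation, encryption and decryption with deliberately small primes.
p, q = 61, 53                    # two small primes (hypothetical choices)
n = p * q                        # modulus, part of both keys
phi = (p - 1) * (q - 1)          # Euler's totient of n
e = 17                           # public exponent, coprime to phi
d = pow(e, -1, phi)              # private exponent: modular inverse of e mod phi (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)          # encryption with the public key (n, e)
recovered = pow(ciphertext, d, n)        # decryption with the private key (n, d)
assert recovered == message
print(n, e, d, ciphertext)
```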
A publicly available public-key encryption application called Pretty Good Privacy (PGP) was written in 1991 by Phil Zimmermann, and distributed free of charge with source code. PGP was purchased by Symantec in 2010 and is regularly updated.
Uses.
Encryption has long been used by militaries and governments to facilitate secret communication. It is now commonly used in protecting information within many kinds of civilian systems. For example, the Computer Security Institute reported that in 2007, 71% of companies surveyed utilized encryption for some of their data in transit, and 53% utilized encryption for some of their data in storage. Encryption can be used to protect data "at rest", such as information stored on computers and storage devices (e.g. USB flash drives). In recent years, there have been numerous reports of confidential data, such as customers' personal records, being exposed through loss or theft of laptops or backup drives; encrypting such files at rest helps protect them if physical security measures fail. Digital rights management systems, which prevent unauthorized use or reproduction of copyrighted material and protect software against reverse engineering (see also copy protection), is another somewhat different example of using encryption on data at rest.
Encryption is also used to protect data in transit, for example data being transferred via networks (e.g. the Internet, e-commerce), mobile telephones, wireless microphones, wireless intercom systems, Bluetooth devices and bank automatic teller machines. There have been numerous reports of data in transit being intercepted in recent years. Data should also be encrypted when transmitted across networks in order to protect against eavesdropping of network traffic by unauthorized users.
Data erasure.
Conventional methods for permanently deleting data from a storage device involve overwriting the device's whole content with zeros, ones, or other patterns – a process which can take a significant amount of time, depending on the capacity and the type of storage medium. Cryptography offers a way of making the erasure almost instantaneous. This method is called crypto-shredding. An example implementation of this method can be found on iOS devices, where the cryptographic key is kept in a dedicated 'effaceable storage'. Because the key is stored on the same device, this setup on its own does not offer full privacy or security protection if an unauthorized person gains physical access to the device.
Limitations.
Encryption is used in the 21st century to protect digital data and information systems. As computing power increased over the years, encryption technology has only become more advanced and secure. However, this advancement in technology has also exposed a potential limitation of today's encryption methods.
The length of the encryption key is an indicator of the strength of the encryption method. For example, the original Data Encryption Standard (DES) used a 56-bit key, meaning it had 2^56 possible keys. With today's computing power, a 56-bit key is no longer secure, being vulnerable to brute-force attacks.
Quantum computing utilizes properties of quantum mechanics in order to process large amounts of data simultaneously. Quantum computing has been found to achieve computing speeds thousands of times faster than today's supercomputers. This computing power presents a challenge to today's encryption technology. For example, RSA encryption utilizes the multiplication of very large prime numbers to create a semiprime number for its public key. Decoding messages without the private key requires this semiprime number to be factored, which can take a very long time with modern computers; it would take a supercomputer anywhere from weeks to months to factor such a number. However, quantum computing can use quantum algorithms to factor this semiprime number in the same amount of time it takes for normal computers to generate it. This would make all data protected by current public-key encryption vulnerable to quantum computing attacks. Other encryption techniques like elliptic curve cryptography and symmetric key encryption are also vulnerable to quantum computing.
While quantum computing could be a threat to encryption security in the future, quantum computing as it currently stands is still very limited. Quantum computing currently is not commercially available, cannot handle large amounts of code, and only exists as computational devices, not computers. Furthermore, quantum computing advancements will be able to be utilized in favor of encryption as well. The National Security Agency (NSA) is currently preparing post-quantum encryption standards for the future. Quantum encryption promises a level of security that will be able to counter the threat of quantum computing.
Attacks and countermeasures.
Encryption is an important tool but is not sufficient alone to ensure the security or privacy of sensitive information throughout its lifetime. Most applications of encryption protect information only at rest or in transit, leaving sensitive data in clear text and potentially vulnerable to improper disclosure during processing, such as by a cloud service for example. Homomorphic encryption and secure multi-party computation are emerging techniques to compute encrypted data; these techniques are general and Turing complete but incur high computational and/or communication costs.
In response to encryption of data at rest, cyber-adversaries have developed new types of attacks. These more recent threats to encryption of data at rest include cryptographic attacks, stolen ciphertext attacks, attacks on encryption keys, insider attacks, data corruption or integrity attacks, data destruction attacks, and ransomware attacks. Data fragmentation and active defense data protection technologies attempt to counter some of these attacks, by distributing, moving, or mutating ciphertext so it is more difficult to identify, steal, corrupt, or destroy.
The debate around encryption.
The question of balancing the need for national security with the right to privacy has been debated for years, since encryption has become critical in today's digital society. The modern encryption debate started around the 1990s, when the US government tried to restrict the spread of strong cryptography because, in its view, it would threaten national security. The debate is polarized around two opposing views: those who see strong encryption as a problem because it makes it easier for criminals to hide their illegal acts online, and those who argue that encryption keeps digital communications safe. The debate heated up in 2014, when big technology companies like Apple and Google enabled encryption by default on their devices. This was the start of a series of controversies that put governments, companies and internet users at odds.
Integrity protection of Ciphertexts.
Encryption, by itself, can protect the confidentiality of messages, but other techniques are still needed to protect the integrity and authenticity of a message; for example, verification of a message authentication code (MAC) or a digital signature usually done by a hashing algorithm or a PGP signature. Authenticated encryption algorithms are designed to provide both encryption and integrity protection together. Standards for cryptographic software and hardware to perform encryption are widely available, but successfully using encryption to ensure security may be a challenging problem. A single error in system design or execution can allow successful attacks. Sometimes an adversary can obtain unencrypted information without directly undoing the encryption. See for example traffic analysis, TEMPEST, or Trojan horse.
Integrity protection mechanisms such as MACs and digital signatures must be applied to the ciphertext when it is first created, typically on the same device used to compose the message, to protect a message end-to-end along its full transmission path; otherwise, any node between the sender and the encryption agent could potentially tamper with it. Encrypting at the time of creation is only secure if the encryption device itself has correct keys and has not been tampered with. If an endpoint device has been configured to trust a root certificate that an attacker controls, for example, then the attacker can both inspect and tamper with encrypted data by performing a man-in-the-middle attack anywhere along the message's path. The common practice of TLS interception by network operators represents a controlled and institutionally sanctioned form of such an attack, but countries have also attempted to employ such attacks as a form of control and censorship.
Ciphertext length and padding.
Even when encryption correctly hides a message's content and it cannot be tampered with at rest or in transit, a message's "length" is a form of metadata that can still leak sensitive information about the message. For example, the well-known CRIME and BREACH attacks against HTTPS were side-channel attacks that relied on information leakage via the length of encrypted content. Traffic analysis is a broad class of techniques that often employs message lengths to infer sensitive information about traffic flows by aggregating information about a large number of messages.
Padding a message's payload before encrypting it can help obscure the cleartext's true length, at the cost of increasing the ciphertext's size and introducing or increasing bandwidth overhead. Messages may be padded randomly or deterministically, with each approach having different tradeoffs. Encrypting and padding messages to form padded uniform random blobs or PURBs is a practice guaranteeing that the cipher text leaks no metadata about its cleartext's content, and leaks asymptotically minimal formula_0 information via its length.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "O(\\log\\log M)"
}
] | https://en.wikipedia.org/wiki?curid=10294 |
10295395 | Elliptic cohomology | Algebraic invariant of topological spaces
In mathematics, elliptic cohomology is a cohomology theory in the sense of algebraic topology. It is related to elliptic curves and modular forms.
History and motivation.
Historically, elliptic cohomology arose from the study of elliptic genera. It was known by Atiyah and Hirzebruch that if formula_0 acts smoothly and non-trivially on a spin manifold, then the index of the Dirac operator vanishes. In 1983, Witten conjectured that in this situation the equivariant index of a certain twisted Dirac operator is at least constant. This led to certain other problems concerning formula_0-actions on manifolds, which could be solved by Ochanine by the introduction of elliptic genera. In turn, Witten related these to (conjectural) index theory on free loop spaces. Elliptic cohomology, invented in its original form by Landweber, Stong and Ravenel in the late 1980s, was introduced to clarify certain issues with elliptic genera and provide a context for (conjectural) index theory of families of differential operators on free loop spaces. In some sense it can be seen as an approximation to the K-theory of the free loop space.
Definitions and constructions.
Call a cohomology theory formula_1 even periodic if formula_2 for i odd and there is an invertible element formula_3. These theories possess a complex orientation, which gives a formal group law. A particularly rich source for formal group laws are elliptic curves. A cohomology theory formula_4 with
formula_5
is called "elliptic" if it is even periodic and its formal group law is isomorphic to a formal group law of an elliptic curve formula_6 over formula_7. The usual construction of such elliptic cohomology theories uses the Landweber exact functor theorem. If the formal group law of formula_6 is Landweber exact, one can define an elliptic cohomology theory (on finite complexes) by
formula_8
Franke has identified the conditions needed to fulfill Landweber exactness: first, formula_7 needs to be flat over formula_9; second, there is no irreducible component formula_10 of formula_11 on which the fiber formula_12 is supersingular for every formula_13.
These conditions can be checked in many cases related to elliptic genera. Moreover, the conditions are fulfilled in the universal case in the sense that the map from the moduli stack of elliptic curves to the moduli stack of formal groups
formula_14
is flat. This then gives a presheaf of cohomology theories formula_15 over the site of affine schemes flat over the moduli stack of elliptic curves. The desire to get a universal elliptic cohomology theory by taking global sections has led to the construction of the topological modular forms formula_16 as the homotopy limit of this presheaf over the previous site.
{
"math_id": 0,
"text": "S^1"
},
{
"math_id": 1,
"text": "A^*"
},
{
"math_id": 2,
"text": "A^i = 0"
},
{
"math_id": 3,
"text": "u\\in A^2"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "A^0 = R"
},
{
"math_id": 6,
"text": "E"
},
{
"math_id": 7,
"text": "R"
},
{
"math_id": 8,
"text": "A^*(X) = MU^*(X)\\otimes_{MU^*}R[u,u^{-1}]. \\, "
},
{
"math_id": 9,
"text": "\\mathbb{Z}"
},
{
"math_id": 10,
"text": "X"
},
{
"math_id": 11,
"text": "\\text{Spec }R/pR"
},
{
"math_id": 12,
"text": "E_x"
},
{
"math_id": 13,
"text": "x\\in X"
},
{
"math_id": 14,
"text": "\\mathcal{M}_{1,1}\\to\\mathcal{M}_{fg}"
},
{
"math_id": 15,
"text": "\\mathcal{O}_{e\\ell\\ell}^{pre}: \\text{Aff}/(\\mathcal{M}_{1,1})_{flat} \\to \\textbf{Spectra}"
},
{
"math_id": 16,
"text": "\\mathbf{Tmf} = \\underset{X \\to \\mathcal{M}_{1,1}}{\\textbf{Holim}}\\text{ } \\mathcal{O}_{e\\ell\\ell}^{pre}(X)"
}
] | https://en.wikipedia.org/wiki?curid=10295395 |