id | title | text | formulas | url
---|---|---|---|---|
1072325 | Hearing the shape of a drum | To hear the shape of a drum is to infer information about the shape of the drumhead from the sound it makes, i.e., from the list of overtones, via the use of mathematical theory.
"Can One Hear the Shape of a Drum?" is the title of a 1966 article by Mark Kac in the "American Mathematical Monthly" which made the question famous, though this particular phrasing originates with Lipman Bers. Similar questions can be traced back all the way to physicist Arthur Schuster in 1882. For his paper, Kac was given the Lester R. Ford Award in 1967 and the Chauvenet Prize in 1968.
The frequencies at which a drumhead can vibrate depend on its shape. The Helmholtz equation calculates the frequencies if the shape is known. These frequencies are the eigenvalues of the Laplacian in the space. A central question is whether the shape can be predicted if the frequencies are known; for example, whether a Reuleaux triangle can be recognized in this way. Kac admitted that he did not know whether it was possible for two different shapes to yield the same set of frequencies. The question of whether the frequencies determine the shape was finally answered in the negative in the early 1990s by Gordon, Webb and Wolpert.
Formal statement.
More formally, the drum is conceived as an elastic membrane whose boundary is clamped. It is represented as a domain "D" in the plane. Denote by "λ""n" the Dirichlet eigenvalues for "D": that is, the eigenvalues of the Dirichlet problem for the Laplacian:
formula_0
Two domains are said to be isospectral (or homophonic) if they have the same eigenvalues. The term "homophonic" is justified because the Dirichlet eigenvalues are precisely the fundamental tones that the drum is capable of producing: they appear naturally as Fourier coefficients in the solution of the wave equation with clamped boundary.
Therefore, the question may be reformulated as: what can be inferred on "D" if one knows only the values of "λ""n"? Or, more specifically: are there two distinct domains that are isospectral?
Related problems can be formulated for the Dirichlet problem for the Laplacian on domains in higher dimensions or on Riemannian manifolds, as well as for other elliptic differential operators such as the Cauchy–Riemann operator or Dirac operator. Other boundary conditions besides the Dirichlet condition, such as the Neumann boundary condition, can be imposed. See spectral geometry and isospectral as related articles.
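To make the eigenvalue problem in formula_0 concrete, the following minimal sketch (not part of the article; the unit square and the grid size are arbitrary choices) discretizes the clamped Laplacian with a standard five-point finite-difference stencil and compares its lowest eigenvalues with the exact values π²(j² + k²) for the unit square.

```python
# A minimal numerical sketch, assuming the unit square as the domain "D": the
# five-point finite-difference approximation of -Delta with clamped (Dirichlet)
# boundary, compared against the exact eigenvalues pi^2 (j^2 + k^2).
import numpy as np

n = 30                                        # interior grid points per side (arbitrary)
h = 1.0 / (n + 1)
T = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2   # 1-D -d^2/dx^2
I = np.eye(n)
laplacian_2d = np.kron(T, I) + np.kron(I, T)  # 2-D Dirichlet -Delta on the grid

numeric = np.sort(np.linalg.eigvalsh(laplacian_2d))[:5]
exact = np.sort([np.pi**2 * (j**2 + k**2) for j in range(1, 4) for k in range(1, 4)])[:5]
print(np.round(numeric, 2))                   # close to the exact values below
print(np.round(exact, 2))                     # [19.74  49.35  49.35  78.96  98.7]
```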
The answer.
In 1964, John Milnor observed that a theorem on lattices due to Ernst Witt implied the existence of a pair of 16-dimensional flat tori that have the same eigenvalues but different shapes. However, the problem in two dimensions remained open until 1992, when Carolyn Gordon, David Webb, and Scott Wolpert constructed, based on the Sunada method, a pair of regions in the plane that have different shapes but identical eigenvalues. The regions are concave polygons. The proof that both regions have the same eigenvalues uses the symmetries of the Laplacian. This idea has been generalized by Buser, Conway, Doyle, and Semmler who constructed numerous similar examples. So, the answer to Kac's question is: for many shapes, one cannot hear the shape of the drum "completely". However, some information can be inferred.
On the other hand, Steve Zelditch proved that the answer to Kac's question is positive if one imposes restrictions to certain convex planar regions with analytic boundary. It is not known whether two non-convex analytic domains can have the same eigenvalues. It is known that the set of domains isospectral with a given one is compact in the C∞ topology. Moreover, the sphere (for instance) is spectrally rigid, by Cheng's eigenvalue comparison theorem. It is also known, by a result of Osgood, Phillips, and Sarnak that the moduli space of Riemann surfaces of a given genus does not admit a continuous isospectral flow through any point, and is compact in the Fréchet–Schwartz topology.
Weyl's formula.
Weyl's formula states that one can infer the area "A" of the drum by counting how rapidly the "λ""n" grow. We define "N"("R") to be the number of eigenvalues smaller than "R" and we get
formula_1
where "d" is the dimension, and formula_2 is the volume of the "d"-dimensional unit ball. Weyl also conjectured that the next term in the approximation below would give the perimeter of "D". In other words, if "L" denotes the length of the perimeter (or the surface area in higher dimension), then one should have
formula_3
For a smooth boundary, this was proved by Victor Ivrii in 1980, under the additional condition that the manifold does not admit a two-parameter family of periodic geodesics, as a sphere would.
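As a numerical illustration of formula_1 and formula_3 (not part of the article), the sketch below uses a rectangle, whose Dirichlet eigenvalues are known in closed form, and compares the exact eigenvalue count with Weyl's leading term and the two-term refinement; with d = 2 and the Dirichlet sign these reduce to N(R) ≈ AR/(4π) − L√R/(4π). The side lengths and the cutoff R are arbitrary choices.

```python
# A small numerical check, assuming a rectangular drum with sides a and b, whose
# Dirichlet eigenvalues are known exactly: lambda_{j,k} = pi^2 (j^2/a^2 + k^2/b^2).
# With d = 2 (omega_2 = pi, omega_1 = 2) and the Dirichlet sign, formula_3 becomes
# N(R) ~ A R / (4 pi) - L sqrt(R) / (4 pi).
import numpy as np

a, b = 1.0, 1.5                               # side lengths (illustrative values)
A, L = a * b, 2 * (a + b)                     # area and perimeter
R = 4000.0                                    # count eigenvalues up to this threshold

j_max = int(a * np.sqrt(R) / np.pi) + 1
k_max = int(b * np.sqrt(R) / np.pi) + 1
j, k = np.meshgrid(np.arange(1, j_max + 1), np.arange(1, k_max + 1))
eigenvalues = np.pi**2 * (j**2 / a**2 + k**2 / b**2)

n_exact = np.count_nonzero(eigenvalues <= R)
n_leading = A * R / (4 * np.pi)                         # Weyl's leading term
n_two_term = n_leading - L * np.sqrt(R) / (4 * np.pi)   # with the perimeter correction
print(n_exact, round(n_leading), round(n_two_term))     # the two-term estimate is closer
```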
The Weyl–Berry conjecture.
For non-smooth boundaries, Michael Berry conjectured in 1979 that the correction should be of the order of
formula_4
where "D" is the Hausdorff dimension of the boundary. This was disproved by J. Brossard and R. A. Carmona, who then suggested that one should replace the Hausdorff dimension with the upper box dimension. In the plane, this was proved if the boundary has dimension 1 (1993), but mostly disproved for higher dimensions (1996); both results are by and Pomerance.
| [
{
"math_id": 0,
"text": "\n\\begin{cases}\n\\Delta u + \\lambda u = 0\\\\\nu|_{\\partial D} = 0\n\\end{cases}\n"
},
{
"math_id": 1,
"text": "A = \\omega_d^{-1}(2\\pi)^d \\lim_{R\\to\\infty}\\frac{N(R)}{R^{d/2}},"
},
{
"math_id": 2,
"text": "\\omega_d"
},
{
"math_id": 3,
"text": "N(R) = (2\\pi)^{-d}\\omega_d AR^{d/2} \\mp \\frac{1}{4}(2\\pi)^{-d+1}\\omega_{d-1} LR^{(d-1)/2} + o(R^{(d-1)/2})."
},
{
"math_id": 4,
"text": "R^{D/2},"
}
] | https://en.wikipedia.org/wiki?curid=1072325 |
10723825 | Waterman polyhedron | In geometry, the Waterman polyhedra are a family of polyhedra discovered around 1990 by the mathematician Steve Waterman. A Waterman polyhedron is created by packing spheres according to the cubic close(st) packing (CCP), also known as the face-centered cubic (fcc) packing, then sweeping away the spheres that are farther from the center than a defined radius, then creating the convex hull of the sphere centers.
Waterman polyhedra form a vast family of polyhedra. Some of them have a number of nice properties such as multiple symmetries, or interesting and regular shapes. Others are just a collection of faces formed from irregular convex polygons.
The most popular Waterman polyhedra are those with centers at the point (0,0,0) and built out of hundreds of polygons. Such polyhedra resemble spheres. In fact, the more faces a Waterman polyhedron has, the more it resembles its circumscribed sphere in volume and total area.
With each point of 3D space we can associate a family of Waterman polyhedra with different values of radii of the circumscribed spheres. Therefore, from a mathematical point of view we can consider Waterman polyhedra as 4D spaces W(x, y, z, r), where x, y, z are coordinates of a point in 3D, and r is a positive number greater than 1.
Seven origins of cubic close(st) packing (CCP).
There can be seven origins defined in CCP, where n = {1, 2, 3, …}; each origin has its own sweep radius, given as a function of n (formula_0 through formula_6).
Depending on the origin of the sweeping, a different shape and resulting polyhedron are obtained.
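A minimal construction sketch is given below (not Waterman's own software). It assumes the common representation of the CCP/fcc sphere centres as the integer points with an even coordinate sum and the origin-1 sweep radius formula_0; the convex hull is computed with SciPy, and the reported facets are triangles rather than the merged polygonal faces.

```python
# A hedged sketch of W_n with origin 1 at (0, 0, 0): keep the CCP/fcc lattice
# points (integer points with an even coordinate sum) lying within the sweep
# radius sqrt(2n) (formula_0), then take the convex hull of those centres.
import itertools
import numpy as np
from scipy.spatial import ConvexHull

def waterman_points(n):
    r_squared = 2 * n                         # squared sweep radius for origin 1
    bound = int(np.ceil(np.sqrt(r_squared)))
    return np.array([
        (x, y, z)
        for x, y, z in itertools.product(range(-bound, bound + 1), repeat=3)
        if (x + y + z) % 2 == 0 and x * x + y * y + z * z <= r_squared
    ], dtype=float)

hull = ConvexHull(waterman_points(7))         # W7 O1, mentioned below
print(len(hull.points), "swept sphere centres,",
      len(hull.vertices), "hull vertices,",
      len(hull.simplices), "triangulated facets")
```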
Relation to Platonic and Archimedean solids.
Some Waterman polyhedra create Platonic solids and Archimedean solids. For this comparison of Waterman polyhedra they are normalized, e.g. W2 O1 has a different size or volume than W1 O6, but has the same form as an octahedron.
Archimedean solids.
The W7 O1 might be mistaken for a truncated cuboctahedron, just as W3 O1 = W12 O1 might be mistaken for a rhombicuboctahedron, but those Waterman polyhedra have two edge lengths and therefore do not qualify as Archimedean solids.
Generalized Waterman polyhedra.
Generalized Waterman polyhedra are defined as the convex hull derived from the point set of any spherical extraction from a regular lattice.
Included is a detailed analysis of the following 10 lattices – bcc, cuboctahedron, diamond, fcc, hcp, truncated octahedron, rhombic dodecahedron, simple cubic, truncated tet tet, truncated tet truncated octahedron cuboctahedron.
Each of the 10 lattices was examined to isolate those particular origin points that manifested a unique polyhedron, as well as possessing some minimal symmetry requirement. From a viable origin point within a lattice, there exists an unlimited series of polyhedra. Given its proper sweep interval, there is a one-to-one correspondence between each integer value and a generalized Waterman polyhedron. | [
{
"math_id": 0,
"text": "\\sqrt{2n}"
},
{
"math_id": 1,
"text": "\\tfrac12\\sqrt{2+4n}"
},
{
"math_id": 2,
"text": "\\tfrac13\\sqrt{6(n+1)}"
},
{
"math_id": 3,
"text": "\\tfrac13\\sqrt{3+6n}"
},
{
"math_id": 4,
"text": "\\tfrac12\\sqrt{3+8(n-1)}"
},
{
"math_id": 5,
"text": "\\tfrac12\\sqrt{1+4n}"
},
{
"math_id": 6,
"text": "\\sqrt{1+2(n-1)}"
}
] | https://en.wikipedia.org/wiki?curid=10723825 |
1072520 | Velocity factor | Ratio of the speed at which a wavefront passes through the medium to the speed of light in vacuum
The velocity factor (VF), also called wave propagation speed or velocity of propagation (VoP or formula_0), of a transmission medium is the ratio of the speed at which a wavefront (of an electromagnetic signal, a radio signal, a light pulse in an optical fibre or a change of the electrical voltage on a copper wire) passes through the medium, to the speed of light in vacuum. For optical signals, the velocity factor is the reciprocal of the refractive index.
The speed of radio signals in vacuum, for example, is the speed of light, and so the velocity factor of a radio wave in vacuum is 1.0 (unity). In air, the velocity factor is ~0.9997. In electrical cables, the velocity factor mainly depends on the insulating material (see table below).
The use of the terms "velocity of propagation" and "wave propagation speed" to mean a ratio of speeds is confined to the computer networking and cable industries. In a general science and engineering context, these terms would be understood to mean a true speed or velocity in units of distance per time, while "velocity factor" is used for the ratio.
Typical velocity factors.
Velocity factor is an important characteristic of communication media such as category 5 cables and radio transmission lines. Plenum data cable typically has a VF between 0.42 and 0.72 (42% to 72% of the speed of light in vacuum) and riser cable around 0.70 (approximately 210,000,000 m/s or 4.76 ns per metre).
Some typical velocity factors for radio communications cables provided in handbooks and texts are given in the following table:
Calculating velocity factor.
Electric wave.
VF equals the reciprocal of the square root of the dielectric constant (relative permittivity), formula_1 or formula_2, of the material through which the signal passes:
formula_3
in the usual case where the relative permeability, formula_4, is 1. In the most general case:
formula_5
which includes unusual magnetic conducting materials, such as ferrite.
The velocity factor for a lossless transmission line is given by:
formula_6
where formula_7 is the distributed inductance (in henries per unit length), formula_8 is the capacitance between the two conductors (in farads per unit length), and formula_9 is the speed of light in vacuum.
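The following minimal sketch (not from the article's table) works both formulas through with illustrative numbers: a dielectric constant of about 2.25, typical of solid polyethylene, and hypothetical per-metre line constants chosen so that the lossless-line expression gives the same result.

```python
# A small worked sketch with illustrative numbers (not taken from the table in
# the article): the velocity factor from the dielectric constant, and the same
# value recovered from assumed per-metre constants L' and C' of a lossless line.
import math

c0 = 299_792_458.0                            # speed of light in vacuum, m/s

eps_r = 2.25                                  # solid polyethylene, roughly
vf_dielectric = 1 / math.sqrt(eps_r)          # ~0.667

L_per_m = 2.5e-7                              # hypothetical inductance, H/m
C_per_m = 1.0e-10                             # hypothetical capacitance, F/m
vf_line = 1 / (c0 * math.sqrt(L_per_m * C_per_m))

print(f"VF from epsilon_r: {vf_dielectric:.3f}")
print(f"VF from L'C':      {vf_line:.3f}")
print(f"signal delay:      {1e9 / (vf_dielectric * c0):.2f} ns per metre")
```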
Optical wave.
VF equals the reciprocal of the refractive index formula_10 of the medium, usually optical fiber.
formula_11
| [
{
"math_id": 0,
"text": "v_\\mathrm{P}"
},
{
"math_id": 1,
"text": "\\kappa"
},
{
"math_id": 2,
"text": "\\epsilon_\\mathrm{r}"
},
{
"math_id": 3,
"text": "\\mathrm{VF} = { \\frac{1}{\\sqrt{\\kappa}} } \\ "
},
{
"math_id": 4,
"text": "\\mu_\\mathrm{r}"
},
{
"math_id": 5,
"text": "\\mathrm{VF} = { \\frac{1}{\\sqrt{\\mu_\\mathrm{r}\\epsilon_\\mathrm{r}}} } \\ "
},
{
"math_id": 6,
"text": "\\mathrm{VF} = { \\frac{1}{c_\\mathrm{0}\\sqrt{L'C'}} } \\ "
},
{
"math_id": 7,
"text": "L'"
},
{
"math_id": 8,
"text": "C'"
},
{
"math_id": 9,
"text": "c_\\mathrm{0}"
},
{
"math_id": 10,
"text": "{n}"
},
{
"math_id": 11,
"text": "\\mathrm{VF} = { \\frac{1}{n} } \\ "
}
] | https://en.wikipedia.org/wiki?curid=1072520 |
1072573 | Hand formula | United States legal term
In the United States, the Hand formula, also known as the Hand rule, calculus of negligence, or BPL formula, is a conceptual formula created by Judge Learned Hand which describes a process for determining whether a legal duty of care has been breached (see negligence). The original description of the calculus was in "United States v. Carroll Towing Co.", in which an improperly secured barge had drifted away from a pier and caused damage to several other boats.
Articulation of the rule.
Hand stated:
<templatestyles src="Template:Blockquote/styles.css" />[T]he owner's duty, as in other similar situations, to provide against resulting injuries is a function of three variables: (1) The probability that she will break away; (2) the gravity of the resulting injury, if she does; (3) the burden of adequate precautions.
This relationship has been formalized by the law and economics school as follows: an act is in breach of the duty of care if:
formula_0
where "B" is the cost (burden) of taking precautions, and "P" is the probability of loss ("L"). "L" is the gravity of loss. The product of P x L must be a greater amount than B to create a duty of due care for the defendant.
Rationale.
The calculus of negligence is based on the Coase theorem. The tort system acts as if, before the injury or damage, a contract had been made between the parties under the assumption that a rational, cost-minimizing individual will not spend money on taking precautions if those precautions are more expensive than the costs of the harm that they prevent. In other words, rather than spending money on safety, the individual will simply allow harm to occur and pay for the costs of that harm, because that will be more cost-efficient than taking precautions. This represents cases where B is greater than PL.
If the harm could be avoided for "less" than the cost of the harm (B is less than PL), then the individual "should" take the precautions, rather than allowing the harm to occur. If precautions were not taken, we find that a legal duty of care has been breached, and we impose liability on the individual to pay for the harm.
This approach, in theory, leads to an optimal allocation of resources; where harm can be cheaply avoided, the legal system requires precautions. Where precautions are prohibitively expensive, it does not. In marginal-cost terms, we require individuals to invest one unit of precautions up until the point that those precautions prevent exactly one unit of harm, and no less.
Mathematical rationale.
The Hand formula attempts to formalize the intuitive notion that when the expected loss formula_1 exceeds the cost of taking precautions, the duty of care has been breached:formula_2To assess the expected loss, statistical methods, such as regression analysis, may be used. A common metric for quantifying losses in the case of work accidents is the present value of lost future earnings and medical costs associated with the accident. In the case when the probability of loss is assumed to be a single number formula_3, and formula_4 is the loss from the event occurring, the familiar form of the Hand formula is recovered. More generally, for continuous outcomes the Hand formula takes form:formula_5where formula_6 is the domain for losses and formula_7 is the probability density function of losses. Assuming that losses are positive, common choices for loss distributions include the gamma, lognormal, and Weibull distributions.
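The sketch below is purely illustrative of the comparison of expected loss against the burden of precautions; the lognormal choice is one of the severity distributions mentioned above, and the parameters, event probability and precaution cost are made-up numbers.

```python
# A purely illustrative sketch of the generalized comparison E[L] > B: a point
# probability that the accident happens at all, combined with a lognormal
# severity distribution.  All numbers below are made up for the example.
from scipy.stats import lognorm

sigma, median = 1.0, 50_000.0                 # hypothetical severity parameters
severity = lognorm(s=sigma, scale=median)     # loss size, given that the event occurs

p_event = 0.02                                # hypothetical probability of the event
expected_loss = p_event * severity.mean()     # expected loss E[L]

B = 1_000.0                                   # hypothetical cost of precautions

print(f"expected loss: {expected_loss:,.0f}")
print("duty of care breached (E[L] > B):", expected_loss > B)
```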
Criticism.
Critics point out that the term "gravity of loss" (L) is vague, and could entail a wide variety of damages, from a scratched fender to several dead victims. Even then, exactly how a juror should assign a value to such a loss is itself abstract. The speculative nature of the rule also shows in how a juror is supposed to determine the probability of loss (P).
Additionally, the rule fails to account for possible alternatives, whether it be the use of alternate methods to reach the same outcome, or abandoning the risky activity altogether.
Human teams estimating risk need to guard against judgment errors, cf. absolute probability judgement.
Use in practice.
In the U.S., juries, with guidance from the court, decide what particular acts or omissions constitute negligence, so a reference to the standard of ordinary care removes the need to discuss this conceptual formula. Juries are not told this formula but essentially use their common sense to decide what an ordinarily careful person would have done under the circumstances. The Hand formula has less practical value for the lay researcher seeking to understand how the courts actually determine negligence cases in the United States than the jury instructions used by the courts in the individual states.
Outside legal proceedings, this formula is the core premise of insurance, risk management, quality assurance, information security and privacy practices. It factors into due care and due diligence decisions in business risk. Restrictions exist in the cases where the loss applies to human life or the probability of adverse finding in court cases. One famous case of abuse by industry in recent years related to the Ford Pinto.
Quality assurance techniques extend the use of probability and loss to include uncertainty bounds in each quantity and possible interactions between uncertainty in probability and impact for two purposes. First, to more accurately model customer acceptance and process reliability to produce wanted outcomes. Second, to seek cost effective factors either up or down stream of the event that produce better results at sustainably reduced costs. Example, simply providing a protective rail near a cliff also includes quality manufacture features of the rail as part of the solution. Reasonable signs warning of the risk before persons reach the cliff may actually be more effective in reducing fatalities than the rail itself.
Australia.
In Australia, the calculus of negligence is a normative judgement with no formula or rule.
In New South Wales, the test is how a reasonable person (or other standard of care) would respond to the risk in the circumstances considering the 'probability that the harm would occur if care were not taken' and, 'the likely seriousness of the harm', 'the burden of taking precautions to avoid the risk of harm', and the 'social utility of the activity that creates the risk of harm'. State and Territory legislatures require that the social utility of the activity that creates the risk of harm be taken into account in determining whether or not a reasonable person would have taken precautions against that risk of harm. For example, in "Haris v Bulldogs Rugby League Club Limited" the court considered the social utility of holding football matches when determining whether a football club took sufficient precautions to protect spectators from the risk of being struck by fireworks set off as part of the entertainment during a game.
| [
{
"math_id": 0,
"text": "PL>B"
},
{
"math_id": 1,
"text": "\\mathbb{E}(L)"
},
{
"math_id": 2,
"text": "\\mathbb{E}(L) > B"
},
{
"math_id": 3,
"text": "P"
},
{
"math_id": 4,
"text": "L"
},
{
"math_id": 5,
"text": "\\int_{\\Omega} Lf(L)dL > B"
},
{
"math_id": 6,
"text": "\\Omega"
},
{
"math_id": 7,
"text": "f(L)"
}
] | https://en.wikipedia.org/wiki?curid=1072573 |
10726541 | Holm–Bonferroni method | Statistical method
In statistics, the Holm–Bonferroni method, also called the Holm method or Bonferroni–Holm method, is used to counteract the problem of multiple comparisons. It is intended to control the family-wise error rate (FWER) and offers a simple test uniformly more powerful than the Bonferroni correction. It is named after Sture Holm, who codified the method, and Carlo Emilio Bonferroni.
Motivation.
When considering several hypotheses, the problem of multiplicity arises: the more hypotheses are tested, the higher the probability of obtaining Type I errors (false positives). The Holm–Bonferroni method is one of many approaches for controlling the FWER, i.e., the probability that one or more Type I errors will occur, by adjusting the rejection criterion for each of the individual hypotheses.
Formulation.
The method is as follows: sort the formula_0 "p"-values from lowest to highest as formula_1, with correspondingly ordered hypotheses formula_2, and fix a significance level formula_3. If formula_4, reject formula_5 and continue; otherwise stop. If formula_6, reject formula_7 and continue, and so on: reject formula_9 whenever formula_8 holds, stopping at the first index formula_13 for which the inequality fails, so that formula_14 are rejected and formula_15 are not rejected (if formula_16, no hypothesis is rejected at all).
This method ensures that the FWER is at most formula_10, in the strong sense.
Rationale.
The simple Bonferroni correction rejects only null hypotheses with "p"-value less than or equal to formula_11, in order to ensure that the FWER, i.e., the risk of rejecting one or more true null hypotheses (i.e., of committing one or more type I errors) is at most formula_10. The cost of this protection against type I errors is an increased risk of failing to reject one or more false null hypotheses (i.e., of committing one or more type II errors).
The Holm–Bonferroni method also controls the FWER at formula_10, but with a lower increase of type II error risk than the classical Bonferroni method. The Holm–Bonferroni method sorts the "p"-values from lowest to highest and compares them to nominal alpha levels of formula_11 to formula_10 (respectively), namely the values formula_12.
Proof.
Let formula_17 be the family of hypotheses sorted by their p-values formula_18. Let formula_19 be the set of indices corresponding to the (unknown) true null hypotheses, having formula_20 members.
Claim: If we wrongly reject some true hypothesis, there is a true hypothesis formula_21 for which formula_22 at most formula_23.
First note that, in this case, there is at least one true hypothesis, so formula_24. Let formula_25 be such that formula_21 is the first rejected true hypothesis. Then formula_26 are all rejected false hypotheses. It follows that formula_27 and, hence, formula_28 (1). Since formula_21 is rejected, it must be that formula_29 by definition of the testing procedure. Using (1), we conclude that formula_30, as desired.
So let us define the random event formula_31. Note that, for formula_32, since formula_33 is a true null hypothesis, we have that formula_34. Subadditivity of the probability measure implies that formula_35. Therefore, the probability to reject a true hypothesis is at most formula_10.
Alternative proof.
The Holm–Bonferroni method can be viewed as a closed testing procedure, with the Bonferroni correction applied locally on each of the intersections of null hypotheses.
The closure principle states that a hypothesis formula_33 in a family of hypotheses formula_2 is rejected – while controlling the FWER at level formula_10 – if and only if all the sub-families of the intersections with formula_33 are rejected at level formula_10.
The Holm–Bonferroni method is a "shortcut procedure", since it makes formula_0 or less comparisons, while the number of all intersections of null hypotheses to be tested is of order formula_36.
It controls the FWER in the strong sense.
In the Holm–Bonferroni procedure, we first test formula_37. If it is not rejected, then the intersection of all null hypotheses formula_38 is not rejected either, so that there exists at least one non-rejected intersection hypothesis for each of the elementary hypotheses formula_2, and thus we reject none of the elementary hypotheses.
If formula_37 is rejected at level formula_39, then all the intersection sub-families that contain it are rejected too, and thus formula_37 is rejected.
This is because formula_40 is the smallest p-value in each of those intersection sub-families, and the size of each sub-family is at most formula_0, so the corresponding Bonferroni threshold is at least formula_39.
The same rationale applies for formula_41. However, since formula_37 is already rejected, it is sufficient to reject all the intersection sub-families of formula_41 that do not contain formula_37. Once formula_42 holds, all the intersections that contain formula_41 are rejected.
The same applies for each formula_43.
Example.
Consider four null hypotheses formula_44 with unadjusted p-values formula_45, formula_46, formula_47 and formula_48, to be tested at significance level formula_49. Since the procedure is step-down, we first test formula_50, which has the smallest p-value formula_51. The p-value is compared to formula_52, the null hypothesis is rejected and we continue to the next one. Since formula_53 we reject formula_54 as well and continue. The next hypothesis formula_55 is not rejected since formula_56. We stop testing and conclude that formula_5 and formula_57 are rejected and formula_7 and formula_55 are not rejected while controlling the family-wise error rate at level formula_49. Note that even though formula_58 applies, formula_7 is not rejected. This is because the testing procedure stops once a failure to reject occurs.
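A minimal sketch of the step-down procedure (not from Holm's paper) that reproduces the four-hypothesis example above:

```python
# A minimal sketch (not from Holm's paper) of the step-down procedure, reproducing
# the four-hypothesis example above.
import numpy as np

def holm(p_values, alpha=0.05):
    """Return a boolean array: True where the corresponding hypothesis is rejected."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                     # p-values from smallest to largest
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):        # rank 0 corresponds to k = 1
        if p[idx] <= alpha / (m - rank):      # threshold alpha / (m + 1 - k)
            reject[idx] = True
        else:
            break                             # first failure: keep this and all later ones
    return reject

print(holm([0.01, 0.04, 0.03, 0.005]))        # [ True False False  True]: reject H1 and H4
```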
Extensions.
Holm–Šidák method.
When the hypothesis tests are not negatively dependent, it is possible to replace formula_59 with:
formula_60
resulting in a slightly more powerful test.
Weighted version.
Let formula_61 be the ordered unadjusted p-values. Let formula_62, formula_63 correspond to formula_64. Reject formula_62 as long as
formula_65
Adjusted "p"-values.
The adjusted "p"-values for Holm–Bonferroni method are:
formula_66
In the earlier example, the adjusted "p"-values are formula_67, formula_68, formula_69 and formula_70. Only hypotheses formula_5 and formula_57 are rejected at level formula_49.
Similar adjusted "p"-values for Holm-Šidák method can be defined recursively as formula_71, where formula_72. Due to the inequality formula_73 for formula_74, the Holm-Šidák method will be more powerful than Holm–Bonferroni method.
The weighted adjusted "p"-values are:
formula_75
A hypothesis is rejected at level α if and only if its adjusted "p"-value is less than α. In the earlier example using equal weights, the adjusted "p"-values are 0.03, 0.06, 0.06, and 0.02. This is another way to see that using α = 0.05, only hypotheses one and four are rejected by this procedure.
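The adjusted "p"-values of the earlier example can be computed directly from the formula above; the sketch below also cross-checks against the statsmodels implementation (assuming that library is available).

```python
# The adjusted p-values of the example above, computed from the formula and
# cross-checked against statsmodels (assuming it is installed).
import numpy as np
from statsmodels.stats.multitest import multipletests

p = np.array([0.01, 0.04, 0.03, 0.005])

order = np.argsort(p)
m = p.size
stepwise = (m - np.arange(m)) * p[order]          # (m - j + 1) * p_(j), j = 1..m
adjusted_sorted = np.minimum(1.0, np.maximum.accumulate(stepwise))
adjusted = np.empty_like(p)
adjusted[order] = adjusted_sorted                 # back to the original hypothesis order
print(adjusted)                                   # [0.03 0.06 0.06 0.02]

reject, p_holm, _, _ = multipletests(p, alpha=0.05, method="holm")
print(reject, p_holm)                             # same rejections, same adjusted p-values
```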
Alternatives and usage.
The Holm–Bonferroni method is "uniformly" more powerful than the classic Bonferroni correction, meaning that it is always at least as powerful.
There are other methods for controlling the FWER that are more powerful than Holm–Bonferroni. For instance, in the Hochberg procedure, rejection of formula_76 is made after finding the "maximal" index formula_13 such that formula_77. Thus, the Hochberg procedure is uniformly more powerful than the Holm procedure. However, the Hochberg procedure requires the hypotheses to be independent or under certain forms of positive dependence, whereas Holm–Bonferroni can be applied without such assumptions. A similar step-up procedure is the Hommel procedure, which is uniformly more powerful than the Hochberg procedure.
Naming.
Carlo Emilio Bonferroni did not take part in inventing the method described here. Holm originally called the method the "sequentially rejective Bonferroni test", and it became known as Holm–Bonferroni only after some time. Holm's motives for naming his method after Bonferroni are explained in the original paper: "The use of the Boole inequality within multiple inference theory is usually called the Bonferroni technique, and for this reason we will call our test the sequentially rejective Bonferroni test." | [
{
"math_id": 0,
"text": "m"
},
{
"math_id": 1,
"text": "P_1,\\ldots,P_m"
},
{
"math_id": 2,
"text": "H_1,\\ldots,H_m"
},
{
"math_id": 3,
"text": " \\alpha "
},
{
"math_id": 4,
"text": "P_1 \\leq \\alpha / m"
},
{
"math_id": 5,
"text": "H_1"
},
{
"math_id": 6,
"text": "P_2 \\leq \\alpha / (m-1)"
},
{
"math_id": 7,
"text": "H_2"
},
{
"math_id": 8,
"text": "P_k \\leq \\frac{\\alpha}{m+1-k}"
},
{
"math_id": 9,
"text": "H_k"
},
{
"math_id": 10,
"text": "\\alpha"
},
{
"math_id": 11,
"text": "\\frac{\\alpha}{m}"
},
{
"math_id": 12,
"text": "\\frac{\\alpha}{m}, \\frac{\\alpha}{m-1}, \\ldots , \\frac{\\alpha}{2}, \\frac{\\alpha}{1}"
},
{
"math_id": 13,
"text": "k"
},
{
"math_id": 14,
"text": "H_{(1)}, \\ldots , H_{(k-1)}"
},
{
"math_id": 15,
"text": " H_{(k)}, ... , H_{(m)}"
},
{
"math_id": 16,
"text": "k = 1"
},
{
"math_id": 17,
"text": "H_{(1)}\\ldots H_{(m)}"
},
{
"math_id": 18,
"text": "P_{(1)}\\leq P_{(2)}\\leq\\cdots\\leq P_{(m)}"
},
{
"math_id": 19,
"text": "I_0"
},
{
"math_id": 20,
"text": "m_0"
},
{
"math_id": 21,
"text": "H_{(\\ell)}"
},
{
"math_id": 22,
"text": "P_{(\\ell)}"
},
{
"math_id": 23,
"text": "\\frac{\\alpha}{m_0}"
},
{
"math_id": 24,
"text": "m_0 \\geq 1"
},
{
"math_id": 25,
"text": "\\ell"
},
{
"math_id": 26,
"text": "H_{(1)}, \\ldots, H_{(\\ell-1)}"
},
{
"math_id": 27,
"text": "\\ell-1 \\leq m - m_0"
},
{
"math_id": 28,
"text": "\\frac{1}{m-\\ell+1}\\leq \\frac{1}{m_0}"
},
{
"math_id": 29,
"text": "P_{(\\ell)} \\leq \\frac{\\alpha}{m-\\ell+1}"
},
{
"math_id": 30,
"text": "P_{(\\ell)} \\leq \\frac{\\alpha}{m_0}"
},
{
"math_id": 31,
"text": "A= \\bigcup_{i \\in I_0} \\left\\{ P_i \\leq \\frac{\\alpha}{m_0}\\right\\}"
},
{
"math_id": 32,
"text": "i \\in I_o"
},
{
"math_id": 33,
"text": "H_i"
},
{
"math_id": 34,
"text": "P\\left(\\left\\{ P_i \\leq \\frac{\\alpha}{m_0}\\right\\}\\right) = \\frac{\\alpha}{m_0}"
},
{
"math_id": 35,
"text": "\\Pr(A)\\leq\\sum_{i\\in I_0}P\\left(\\left\\{ P_i \\leq \\frac{\\alpha}{m_0}\\right\\}\\right) = \\sum_{i \\in I_0} \\frac{\\alpha}{m_0} = \\alpha"
},
{
"math_id": 36,
"text": "2^m"
},
{
"math_id": 37,
"text": "H_{(1)}"
},
{
"math_id": 38,
"text": "\\bigcap\\nolimits_{i = 1}^m H_i"
},
{
"math_id": 39,
"text": "\\alpha/m"
},
{
"math_id": 40,
"text": "P_{(1)}"
},
{
"math_id": 41,
"text": "H_{(2)}"
},
{
"math_id": 42,
"text": "P_{(2)}\\leq\\alpha/(m-1)"
},
{
"math_id": 43,
"text": "1\\leq i \\leq m"
},
{
"math_id": 44,
"text": "H_1,\\ldots,H_4"
},
{
"math_id": 45,
"text": "p_1=0.01"
},
{
"math_id": 46,
"text": "p_2=0.04"
},
{
"math_id": 47,
"text": "p_3=0.03"
},
{
"math_id": 48,
"text": "p_4=0.005"
},
{
"math_id": 49,
"text": "\\alpha=0.05"
},
{
"math_id": 50,
"text": "H_4=H_{(1)}"
},
{
"math_id": 51,
"text": "p_4=p_{(1)}=0.005"
},
{
"math_id": 52,
"text": "\\alpha/4=0.0125"
},
{
"math_id": 53,
"text": "p_1=p_{(2)}=0.01<0.0167=\\alpha/3"
},
{
"math_id": 54,
"text": "H_1=H_{(2)}"
},
{
"math_id": 55,
"text": "H_3"
},
{
"math_id": 56,
"text": "p_3=p_{(3)}=0.03 > 0.025=\\alpha/2"
},
{
"math_id": 57,
"text": "H_4"
},
{
"math_id": 58,
"text": "p_2=p_{(4)}=0.04 < 0.05=\\alpha"
},
{
"math_id": 59,
"text": "\\frac{\\alpha}{m},\\frac{\\alpha}{m-1}, \\ldots, \\frac{\\alpha}{1}"
},
{
"math_id": 60,
"text": "1-(1-\\alpha)^{1/m},1-(1-\\alpha)^{1/(m-1)},\\ldots,1-(1-\\alpha)^{1}"
},
{
"math_id": 61,
"text": "P_{(1)},\\ldots,P_{(m)}"
},
{
"math_id": 62,
"text": "H_{(i)}"
},
{
"math_id": 63,
"text": "0\\leq w_{(i)}"
},
{
"math_id": 64,
"text": "P_{(i)}"
},
{
"math_id": 65,
"text": "P_{(j)}\\leq \\frac{w_{(j)}}{\\sum^m_{k=j} w_{(k)}} \\alpha,\\quad j=1,\\ldots,i"
},
{
"math_id": 66,
"text": "\\widetilde{p}_{(i)}=\\max_{j\\leq i}\\left\\{ (m-j+1)p_{(j)}\\right\\}_1, \\text{ where } \\{x\\}_1 \\equiv \\min(x,1)."
},
{
"math_id": 67,
"text": "\\widetilde{p}_1 = 0.03"
},
{
"math_id": 68,
"text": "\\widetilde{p}_2 = 0.06"
},
{
"math_id": 69,
"text": "\\widetilde{p}_3 = 0.06"
},
{
"math_id": 70,
"text": "\\widetilde{p}_4 = 0.02"
},
{
"math_id": 71,
"text": "\\widetilde{p}_{(i)}=\\max\\left\\{\\widetilde{p}_{(i-1)}, 1 - (1-p_{(i)})^{m-i+1} \\right\\}"
},
{
"math_id": 72,
"text": "\\widetilde{p}_{(1)} = 1 - (1 - p_{(1)})^m"
},
{
"math_id": 73,
"text": "1 - (1 - \\alpha)^{1/n} < \\alpha/n"
},
{
"math_id": 74,
"text": "n \\geq 2"
},
{
"math_id": 75,
"text": "\\widetilde{p}_{(i)}=\\max_{j\\leq i} \\left\\{\\frac{\\sum^m_{k=j}{w_{(k)}}}{w_{(j)}} p_{(j)}\\right\\}_1, \\text{ where } \\{x\\}_1 \\equiv \\min(x,1)."
},
{
"math_id": 76,
"text": "H_{(1)} \\ldots H_{(k)}"
},
{
"math_id": 77,
"text": "P_{(k)} \\leq \\frac{\\alpha}{m+1-k}"
}
] | https://en.wikipedia.org/wiki?curid=10726541 |
1072751 | Habitable zone | Orbits where planets may have liquid surface water
In astronomy and astrobiology, the habitable zone (HZ), or more precisely the circumstellar habitable zone (CHZ), is the range of orbits around a star within which a planetary surface can support liquid water given sufficient atmospheric pressure. The bounds of the HZ are based on Earth's position in the Solar System and the amount of radiant energy it receives from the Sun. Due to the importance of liquid water to Earth's biosphere, the nature of the HZ and the objects within it may be instrumental in determining the scope and distribution of planets capable of supporting Earth-like extraterrestrial life and intelligence.
The habitable zone is also called the Goldilocks zone, a metaphor, allusion and antonomasia of the children's fairy tale of "Goldilocks and the Three Bears", in which a little girl chooses from sets of three items, rejecting the ones that are too extreme (large or small, hot or cold, etc.), and settling on the one in the middle, which is "just right".
Since the concept was first presented in 1953, many stars have been confirmed to possess an HZ planet, including some systems that consist of multiple HZ planets. Most such planets, being either super-Earths or gas giants, are more massive than Earth, because massive planets are easier to detect. On November 4, 2013, astronomers reported, based on Kepler space telescope data, that there could be as many as 40 billion Earth-sized planets orbiting in the habitable zones of Sun-like stars and red dwarfs in the Milky Way. About 11 billion of these may be orbiting Sun-like stars. Proxima Centauri b, located about 4.2 light-years (1.3 parsecs) from Earth in the constellation of Centaurus, is the nearest known exoplanet, and is orbiting in the habitable zone of its star. The HZ is also of particular interest to the emerging field of habitability of natural satellites, because planetary-mass moons in the HZ might outnumber planets.
In subsequent decades, the HZ concept began to be challenged as a primary criterion for life, so the concept is still evolving. Since the discovery of evidence for extraterrestrial liquid water, substantial quantities of it are now thought to occur outside the circumstellar habitable zone. The concept of deep biospheres, like Earth's, that exist independently of stellar energy, is now generally accepted in astrobiology given the large amount of liquid water known to exist in lithospheres and asthenospheres of the Solar System. Sustained by other energy sources, such as tidal heating or radioactive decay, or pressurized by non-atmospheric means, liquid water may be found even on rogue planets, or their moons. Liquid water can also exist at a wider range of temperatures and pressures as a solution, for example with sodium chlorides in seawater on Earth, chlorides and sulphates on equatorial Mars, or ammoniates, due to its different colligative properties. In addition, other circumstellar zones, where non-water solvents favorable to hypothetical life based on alternative biochemistries could exist in liquid form at the surface, have been proposed.
History.
An estimate of the range of distances from the Sun allowing the existence of liquid water appears in Newton's "Principia" (Book III, Section 1, corol. 4). The philosopher Louis Claude de Saint-Martin speculated in his 1802 work "Man: His True Nature and Ministry", "... we may presume, that, being susceptible of vegetation, it [the Earth] has been placed, in the series of planets, in the rank which was necessary, and at exactly the right distance from the sun, to accomplish its secondary object of vegetation; and from this we might infer that the other planets are either too near or too remote from the sun, to vegetate."
The concept of a circumstellar habitable zone was first introduced
in 1913, by Edward Maunder in his book "Are The Planets Inhabited?". The concept was later discussed in 1953 by Hubertus Strughold, who in his treatise "The Green and the Red Planet: A Physiological Study of the Possibility of Life on Mars", coined the term "ecosphere" and referred to various "zones" in which life could emerge. In the same year, Harlow Shapley wrote "Liquid Water Belt", which described the same concept in further scientific detail. Both works stressed the importance of liquid water to life. Su-Shu Huang, an American astrophysicist, first introduced the term "habitable zone" in 1959 to refer to the area around a star where liquid water could exist on a sufficiently large body, and was the first to introduce it in the context of planetary habitability and extraterrestrial life. A major early contributor to the habitable zone concept, Huang argued in 1960 that circumstellar habitable zones, and by extension extraterrestrial life, would be uncommon in multiple star systems, given the gravitational instabilities of those systems.
The concept of habitable zones was further developed in 1964 by Stephen H. Dole in his book "Habitable Planets for Man", in which he discussed the concept of the circumstellar habitable zone as well as various other determinants of planetary habitability, eventually estimating the number of habitable planets in the Milky Way to be about 600 million. At the same time, science-fiction author Isaac Asimov introduced the concept of a circumstellar habitable zone to the general public through his various explorations of space colonization. The term "Goldilocks zone" emerged in the 1970s, referencing specifically a region around a star whose temperature is "just right" for water to be present in the liquid phase. In 1993, astronomer James Kasting introduced the term "circumstellar habitable zone" to refer more precisely to the region then (and still) known as the habitable zone. Kasting was the first to present a detailed model for the habitable zone for exoplanets.
An update to habitable zone concept came in 2000 when astronomers Peter Ward and Donald Brownlee introduced the idea of the "galactic habitable zone", which they later developed with Guillermo Gonzalez. The galactic habitable zone, defined as the region where life is most likely to emerge in a galaxy, encompasses those regions close enough to a galactic center that stars there are enriched with heavier elements, but not so close that star systems, planetary orbits, and the emergence of life would be frequently disrupted by the intense radiation and enormous gravitational forces commonly found at galactic centers.
Subsequently, some astrobiologists propose that the concept be extended to other solvents, including dihydrogen, sulfuric acid, dinitrogen, formamide, and methane, among others, which would support hypothetical life forms that use an alternative biochemistry. In 2013, further developments in habitable zone concepts were made with the proposal of a circum "planetary" habitable zone, also known as the "habitable edge", to encompass the region around a planet where the orbits of natural satellites would not be disrupted, and at the same time tidal heating from the planet would not cause liquid water to boil away.
It has been noted that the current term of 'circumstellar habitable zone' poses confusion as the name suggests that planets within this region will possess a habitable environment. However, surface conditions are dependent on a host of different individual properties of that planet. This misunderstanding is reflected in excited reports of 'habitable planets'. Since it is completely unknown whether conditions on these distant HZ worlds could host life, different terminology is needed.
Determination.
Whether a body is in the circumstellar habitable zone of its host star is dependent on the radius of the planet's orbit (for natural satellites, the host planet's orbit), the mass of the body itself, and the radiative flux of the host star. Given the large spread in the masses of planets within a circumstellar habitable zone, coupled with the discovery of super-Earth planets which can sustain thicker atmospheres and stronger magnetic fields than Earth, circumstellar habitable zones are now split into two separate regions—a "conservative habitable zone" in which lower-mass planets like Earth can remain habitable, complemented by a larger "extended habitable zone" in which a planet like Venus, with stronger greenhouse effects, can have the right temperature for liquid water to exist at the surface.
Solar System estimates.
Estimates for the habitable zone within the Solar System range from 0.38 to 10.0 astronomical units, though arriving at these estimates has been challenging for a variety of reasons. Numerous planetary mass objects orbit within, or close to, this range and as such receive sufficient sunlight to raise temperatures above the freezing point of water. However, their atmospheric conditions vary substantially.
The aphelion of Venus, for example, touches the inner edge of the zone in most estimates and, while atmospheric pressure at the surface is sufficient for liquid water, a strong greenhouse effect raises surface temperatures to around 460 °C, at which water can only exist as vapor. The entire orbits of the Moon, Mars, and numerous asteroids also lie within various estimates of the habitable zone. Only at Mars' lowest elevations (less than 30% of the planet's surface) are atmospheric pressure and temperature sufficient for water to, if present, exist in liquid form for short periods. At Hellas Basin, for example, atmospheric pressures can reach 1,115 Pa and temperatures above zero Celsius (about the triple point for water) for 70 days in the Martian year. Despite indirect evidence in the form of seasonal flows on warm Martian slopes, no confirmation has been made of the presence of liquid water there. While other objects orbit partly within this zone, including comets, Ceres is the only one of planetary mass. A combination of low mass and an inability to mitigate evaporation and atmosphere loss against the solar wind makes it impossible for these bodies to sustain liquid water on their surface.
Despite this, studies are strongly suggestive of past liquid water on the surface of Venus, Mars, Vesta and Ceres, suggesting a more common phenomenon than previously thought. Since sustainable liquid water is thought to be essential to support complex life, most estimates, therefore, are inferred from the effect that a repositioned orbit would have on the habitability of Earth or Venus as their surface gravity allows sufficient atmosphere to be retained for several billion years.
According to the extended habitable zone concept, planetary-mass objects with atmospheres capable of inducing sufficient radiative forcing could possess liquid water farther out from the Sun. Such objects could include those whose atmospheres contain a high component of greenhouse gas and terrestrial planets much more massive than Earth (super-Earth class planets), that have retained atmospheres with surface pressures of up to 100 kbar. There are no examples of such objects in the Solar System to study; not enough is known about the nature of atmospheres of these kinds of extrasolar objects, and their position in the habitable zone cannot determine the net temperature effect of such atmospheres including induced albedo, anti-greenhouse or other possible heat sources.
For reference, the average distance from the Sun of some major bodies within the various estimates of the habitable zone is: Mercury, 0.39 AU; Venus, 0.72 AU; Earth, 1.00 AU; Mars, 1.52 AU; Vesta, 2.36 AU; Ceres and Pallas, 2.77 AU; Jupiter, 5.20 AU; Saturn, 9.58 AU. In the most conservative estimates, only Earth lies within the zone; in the most permissive estimates, even Saturn at perihelion, or Mercury at aphelion, might be included.
Extrasolar extrapolation.
Astronomers use stellar flux and the inverse-square law to extrapolate circumstellar habitable zone models created for the Solar System to other stars. For example, according to Kopparapu's habitable zone estimate, although the Solar System has a circumstellar habitable zone centered at 1.34 AU from the Sun, a star with 0.25 times the luminosity of the Sun would have a habitable zone centered at formula_0, i.e. 0.5 times that distance from the star, corresponding to a distance of 0.67 AU. Various complicating factors, though, including the individual characteristics of stars themselves, mean that extrasolar extrapolation of the HZ concept is more complex.
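A small sketch of this scaling is given below (illustrative only; the 1.34 AU centre is the Kopparapu value quoted above, and the other luminosities are arbitrary choices).

```python
# A small sketch of the inverse-square scaling: habitable-zone distances scale
# with the square root of stellar luminosity.  The 1.34 AU centre is the
# Kopparapu value quoted above; the other luminosities are illustrative.
import math

HZ_CENTRE_SUN_AU = 1.34

def hz_centre(luminosity_in_suns):
    """Centre of the habitable zone (AU) for a star of the given luminosity (L_sun)."""
    return HZ_CENTRE_SUN_AU * math.sqrt(luminosity_in_suns)

for lum in (0.25, 1.0, 4.0):
    print(f"L = {lum:>4} L_sun  ->  HZ centre ~ {hz_centre(lum):.2f} AU")
# L = 0.25 L_sun gives ~0.67 AU, matching the example above.
```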
Spectral types and star-system characteristics.
Some scientists argue that the concept of a circumstellar habitable zone is actually limited to stars in certain types of systems or of certain spectral types. Binary systems, for example, have circumstellar habitable zones that differ from those of single-star planetary systems, in addition to the orbital stability concerns inherent with a three-body configuration. If the Solar System were such a binary system, the outer limits of the resulting circumstellar habitable zone could extend as far as 2.4 AU.
With regard to spectral types, Zoltán Balog proposes that O-type stars cannot form planets due to the photoevaporation caused by their strong ultraviolet emissions. Studying ultraviolet emissions, Andrea Buccino found that only 40% of stars studied (including the Sun) had overlapping liquid water and ultraviolet habitable zones. Stars smaller than the Sun, on the other hand, have distinct impediments to habitability. For example, Michael Hart proposed that only main-sequence stars of spectral class K0 or brighter could offer habitable zones, an idea which has evolved in modern times into the concept of a tidal locking radius for red dwarfs. Within this radius, which is coincident with the red-dwarf habitable zone, it has been suggested that the volcanism caused by tidal heating could cause a "tidal Venus" planet with high temperatures and no hospitable environment for life.
Others maintain that circumstellar habitable zones are more common and that it is indeed possible for water to exist on planets orbiting cooler stars. Climate modeling from 2013 supports the idea that red dwarf stars can support planets with relatively constant temperatures over their surfaces in spite of tidal locking. Astronomy professor Eric Agol argues that even white dwarfs may support a relatively brief habitable zone through planetary migration. At the same time, others have written in similar support of semi-stable, temporary habitable zones around brown dwarfs. Also, a habitable zone in the outer parts of stellar systems may exist during the pre-main-sequence phase of stellar evolution, especially around M-dwarfs, potentially lasting for billion-year timescales.
Stellar evolution.
Circumstellar habitable zones change over time with stellar evolution. For example, hot O-type stars, which may remain on the main sequence for fewer than 10 million years, would have rapidly changing habitable zones not conducive to the development of life. Red dwarf stars, on the other hand, which can live for hundreds of billions of years on the main sequence, would have planets with ample time for life to develop and evolve. Even while stars are on the main sequence, though, their energy output steadily increases, pushing their habitable zones farther out; our Sun, for example, was 75% as bright in the Archaean as it is now, and in the future, continued increases in energy output will put Earth outside the Sun's habitable zone, even before it reaches the red giant phase. In order to deal with this increase in luminosity, the concept of a "continuously habitable zone" has been introduced. As the name suggests, the continuously habitable zone is a region around a star in which planetary-mass bodies can sustain liquid water for a given period. Like the general circumstellar habitable zone, the continuously habitable zone of a star is divided into a conservative and extended region.
In red dwarf systems, gigantic stellar flares which could double a star's brightness in minutes and huge starspots which can cover 20% of the star's surface area, have the potential to strip an otherwise habitable planet of its atmosphere and water. As with more massive stars, though, stellar evolution changes their nature and energy flux, so by about 1.2 billion years of age, red dwarfs generally become sufficiently constant to allow for the development of life.
Once a star has evolved sufficiently to become a red giant, its circumstellar habitable zone will change dramatically from its main-sequence size. For example, the Sun is expected to engulf the previously habitable Earth as a red giant. However, once a red giant star reaches the horizontal branch, it achieves a new equilibrium and can sustain a new circumstellar habitable zone, which in the case of the Sun would range from 7 to 22 AU. At that stage, Saturn's moon Titan would likely be habitable in Earth's temperature sense. Given that this new equilibrium lasts for about 1 Gyr, and because life on Earth emerged no later than 0.7 Gyr after the formation of the Solar System, life could conceivably develop on planetary mass objects in the habitable zone of red giants. However, around such a helium-burning star, important life processes like photosynthesis could only happen around planets where the atmosphere has carbon dioxide, as by the time a solar-mass star becomes a red giant, planetary-mass bodies would have already absorbed much of their free carbon dioxide. Moreover, as Ramirez and Kaltenegger (2016) showed, intense stellar winds would completely remove the atmospheres of such smaller planetary bodies, rendering them uninhabitable anyway. Thus, Titan would not be habitable even after the Sun becomes a red giant. Nevertheless, life need not originate during this stage of stellar evolution for it to be detected. Once the star becomes a red giant, and the habitable zone extends outward, the icy surface would melt, forming a temporary atmosphere that can be searched for signs of life that may have been thriving before the start of the red giant stage.
Desert planets.
A planet's atmospheric conditions influence its ability to retain heat so that the location of the habitable zone is also specific to each type of planet: desert planets (also known as dry planets), with very little water, will have less water vapor in the atmosphere than Earth and so have a reduced greenhouse effect, meaning that a desert planet could maintain oases of water closer to its star than Earth is to the Sun. The lack of water also means there is less ice to reflect heat into space, so the outer edge of desert-planet habitable zones is further out.
Other considerations.
A planet cannot have a hydrosphere—a key ingredient for the formation of carbon-based life—unless there is a source for water within its stellar system. The origin of water on Earth is still not completely understood; possible sources include the result of impacts with icy bodies, outgassing, mineralization, leakage from hydrous minerals from the lithosphere, and photolysis. For an extrasolar system, an icy body from beyond the frost line could migrate into the habitable zone of its star, creating an ocean planet with seas hundreds of kilometers deep such as GJ 1214 b or Kepler-22b may be.
Maintenance of liquid surface water also requires a sufficiently thick atmosphere. Possible origins of terrestrial atmospheres are currently theorised to include outgassing, impact degassing and ingassing. Atmospheres are thought to be maintained through similar processes along with biogeochemical cycles and the mitigation of atmospheric escape. In a 2013 study led by Italian astronomer Giovanni Vladilo, it was shown that the size of the circumstellar habitable zone increased with greater atmospheric pressure. Below an atmospheric pressure of about 15 millibars, it was found that habitability could not be maintained because even a small shift in pressure or temperature could render water unable to form as a liquid.
Although traditional definitions of the habitable zone assume that carbon dioxide and water vapor are the most important greenhouse gases (as they are on the Earth), a study led by Ramses Ramirez and co-author Lisa Kaltenegger has shown that the size of the habitable zone is greatly increased if prodigious volcanic outgassing of hydrogen is also included along with the carbon dioxide and water vapor. The outer edge in the Solar System would extend out as far as 2.4 AU in that case. Similar increases in the size of the habitable zone were computed for other stellar systems. An earlier study by Ray Pierrehumbert and Eric Gaidos had eliminated the CO2-H2O concept entirely, arguing that young planets could accrete many tens to hundreds of bars of hydrogen from the protoplanetary disc, providing enough of a greenhouse effect to extend the solar system outer edge to 10 AU. In this case, though, the hydrogen is not continuously replenished by volcanism and is lost within millions to tens of millions of years.
In the case of planets orbiting in the HZs of red dwarf stars, the extremely close distances to the stars cause tidal locking, an important factor in habitability. For a tidally locked planet, the sidereal day is as long as the orbital period, causing one side to permanently face the host star and the other side to face away. In the past, such tidal locking was thought to cause extreme heat on the star-facing side and bitter cold on the opposite side, making many red dwarf planets uninhabitable; however, three-dimensional climate models in 2013 showed that the side of a red dwarf planet facing the host star could have extensive cloud cover, increasing its bond albedo and reducing significantly temperature differences between the two sides.
Planetary mass natural satellites have the potential to be habitable as well. However, these bodies need to fulfill additional parameters, in particular being located within the circumplanetary habitable zones of their host planets. More specifically, moons need to be far enough from their host giant planets that they are not transformed by tidal heating into volcanic worlds like Io, but must remain within the Hill radius of the planet so that they are not pulled out of the orbit of their host planet. Red dwarfs that have masses less than 20% of that of the Sun cannot have habitable moons around giant planets, as the small size of the circumstellar habitable zone would put a habitable moon so close to the star that it would be stripped from its host planet. In such a system, a moon close enough to its host planet to maintain its orbit would have tidal heating so intense as to eliminate any prospects of habitability.
A planetary object that orbits a star with high orbital eccentricity may spend only some of its year in the HZ and experience a large variation in temperature and atmospheric pressure. This would result in dramatic seasonal phase shifts where liquid water may exist only intermittently. It is possible that subsurface habitats could be insulated from such changes and that extremophiles on or near the surface might survive through adaptations such as hibernation (cryptobiosis) and/or hyperthermostability. Tardigrades, for example, can survive, in a dehydrated state, temperatures from just above absolute zero to well above the boiling point of water. Life on a planetary object orbiting outside the HZ might hibernate on the cold side as the planet approaches the apastron where the planet is coolest and become active on approach to the periastron when the planet is sufficiently warm.
Extrasolar discoveries.
A 2015 review concluded that the exoplanets Kepler-62f, Kepler-186f and Kepler-442b were likely the best candidates for being potentially habitable. These are at a distance of 990, 490 and 1,120 light-years away, respectively. Of these, Kepler-186f is closest in size to Earth with 1.2 times Earth's radius, and it is located towards the outer edge of the habitable zone around its red dwarf star. Among the nearest terrestrial exoplanet candidates, Tau Ceti e is 11.9 light-years away. It is in the inner edge of its planetary system's habitable zone, giving it a correspondingly high estimated average surface temperature.
Studies that have attempted to estimate the number of terrestrial planets within the circumstellar habitable zone tend to reflect the availability of scientific data. A 2013 study by Ravi Kumar Kopparapu put "ηe", the fraction of stars with planets in the HZ, at 0.48, meaning that there may be roughly 95–180 billion habitable planets in the Milky Way. However, this is merely a statistical prediction; only a small fraction of these possible planets have yet been discovered.
Previous studies have been more conservative. In 2011, Seth Borenstein concluded that there are roughly 500 million habitable planets in the Milky Way. NASA's Jet Propulsion Laboratory 2011 study, based on observations from the Kepler mission, raised the number somewhat, estimating that about "1.4 to 2.7 percent" of all stars of spectral class F, G, and K are expected to have planets in their HZs.
Early findings.
The first discoveries of extrasolar planets in the HZ occurred just a few years after the first extrasolar planets were discovered. However, these early detections were all gas giant-sized, and many were in eccentric orbits. Despite this, studies indicate the possibility of large, Earth-like moons around these planets supporting liquid water.
One of the first discoveries was 70 Virginis b, a gas giant initially nicknamed "Goldilocks" due to it being neither "too hot" nor "too cold". Later study revealed temperatures analogous to Venus, ruling out any potential for liquid water. 16 Cygni Bb, also discovered in 1996, has an extremely eccentric orbit that spends only part of its time in the HZ; such an orbit would cause extreme seasonal effects. In spite of this, simulations have suggested that a sufficiently large companion could support surface water year-round.
Gliese 876 b, discovered in 1998, and Gliese 876 c, discovered in 2001, are both gas giants discovered in the habitable zone around Gliese 876 that may also have large moons. Another gas giant, Upsilon Andromedae d, was discovered in 1999 orbiting in Upsilon Andromedae's habitable zone.
Announced on April 4, 2001, HD 28185 b is a gas giant found to orbit entirely within its star's circumstellar habitable zone and has a low orbital eccentricity, comparable to that of Mars in the Solar System. Tidal interactions suggest it could harbor habitable Earth-mass satellites in orbit around it for many billions of years, though it is unclear whether such satellites could form in the first place.
HD 69830 d, a gas giant with 17 times the mass of Earth, was found in 2006 orbiting within the circumstellar habitable zone of HD 69830, 41 light years away from Earth. The following year, 55 Cancri f was discovered within the HZ of its host star 55 Cancri A. Hypothetical satellites with sufficient mass and composition are thought to be able to support liquid water at their surfaces.
Though, in theory, such giant planets could possess moons, the technology did not exist to detect moons around them, and no extrasolar moons had been discovered. Planets within the zone with the potential for solid surfaces were therefore of much higher interest.
Habitable super-Earths.
The 2007 discovery of Gliese 581c, the first super-Earth in the circumstellar habitable zone, created significant interest in the system among the scientific community, although the planet was later found to have extreme surface conditions that may resemble Venus. Gliese 581 d, another planet in the same system and thought to be a better candidate for habitability, was also announced in 2007. Its existence was disconfirmed in 2014, but only for a short time; as of 2015, no newer disconfirmations of the planet had been published. Gliese 581 g, yet another planet thought to have been discovered in the circumstellar habitable zone of the system, was considered to be more habitable than both Gliese 581 c and d. However, its existence was also disconfirmed in 2014, and astronomers are divided about its existence.
Discovered in August 2011, HD 85512 b was initially speculated to be habitable, but the new circumstellar habitable zone criteria devised by Kopparapu et al. in 2013 place the planet outside the circumstellar habitable zone.
Kepler-22b, discovered in December 2011 by the Kepler space probe, is the first known transiting exoplanet discovered in the habitable zone of a Sun-like star. With a radius 2.4 times that of Earth, Kepler-22b has been predicted by some to be an ocean planet. Gliese 667 Cc, discovered in 2011 but announced in 2012, is a super-Earth orbiting in the circumstellar habitable zone of Gliese 667 C. It is one of the most Earth-like planets known.
Gliese 163 c, discovered in September 2012 in orbit around the red dwarf Gliese 163, is located 49 light years from Earth. The planet has 6.9 Earth masses and 1.8–2.4 Earth radii, and with its close orbit receives 40 percent more stellar radiation than Earth, leading to surface temperatures of about °C. HD 40307 g, a candidate planet tentatively discovered in November 2012, is in the circumstellar habitable zone of HD 40307. In December 2012, Tau Ceti e and Tau Ceti f were found in the circumstellar habitable zone of Tau Ceti, a Sun-like star 12 light years away. Although more massive than Earth, they are among the least massive planets found to date orbiting in the habitable zone; however, Tau Ceti f, like HD 85512 b, did not fit the new circumstellar habitable zone criteria established by the 2013 Kopparapu study. It is now considered uninhabitable.
Near Earth-sized planets and Solar analogs.
Recent discoveries have uncovered planets that are thought to be similar in size or mass to Earth. "Earth-sized" ranges are typically defined by mass. The lower range used in many definitions of the super-Earth class is 1.9 Earth masses; likewise, sub-Earths range up to the size of Venus (~0.815 Earth masses). An upper limit of 1.5 Earth radii is also considered, given that above 1.5 R🜨 the average planet density rapidly decreases with increasing radius, indicating these planets have a significant fraction of volatiles by volume overlying a rocky core. A genuinely Earth-like planet – an Earth analog or "Earth twin" – would need to meet many conditions beyond size and mass; such properties are not observable using current technology.
A solar analog (or "solar twin") is a star that resembles the Sun. No exact solar twin has been found. However, some stars are nearly identical to the Sun and are considered solar twins. An exact solar twin would be a G2V star with a temperature of 5,778 K, an age of 4.6 billion years, the correct metallicity, and a solar luminosity variation of 0.1%. Stars with an age of 4.6 billion years are at their most stable state. Proper metallicity and size are also critical to low luminosity variation.
Using data collected by NASA's Kepler space telescope and the W. M. Keck Observatory, scientists have estimated that 22% of solar-type stars in the Milky Way galaxy have Earth-sized planets in their habitable zone.
On 7 January 2013, astronomers from the Kepler team announced the discovery of Kepler-69c (formerly "KOI-172.02"), an Earth-size exoplanet candidate (1.7 times the radius of Earth) orbiting Kepler-69, a star similar to the Sun, in the HZ and expected to offer habitable conditions. The discovery of two planets orbiting in the habitable zone of Kepler-62 by the Kepler team was announced on April 19, 2013. The planets, named Kepler-62e and Kepler-62f, are likely solid planets with sizes 1.6 and 1.4 times the radius of Earth, respectively.
With a radius estimated at 1.1 times that of Earth, Kepler-186f, whose discovery was announced in April 2014, is the closest in size to Earth of any exoplanet confirmed by the transit method, though its mass remains unknown and its parent star is not a solar analog.
Kapteyn b, discovered in June 2014, is a possible rocky world of about 4.8 Earth masses and about 1.5 Earth radii, found orbiting in the habitable zone of the red subdwarf Kapteyn's Star, 12.8 light-years away.
On 6 January 2015, NASA announced the 1000th confirmed exoplanet discovered by the Kepler Space Telescope. Three of the newly confirmed exoplanets were found to orbit within habitable zones of their related stars: two of the three, Kepler-438b and Kepler-442b, are near-Earth-size and likely rocky; the third, Kepler-440b, is a super-Earth. However, Kepler-438b was found to be subject to powerful flares, so it is now considered uninhabitable. On 16 January, K2-3d, a planet of 1.5 Earth radii, was found orbiting within the habitable zone of K2-3, receiving 1.4 times the intensity of visible light that Earth does.
Kepler-452b, announced on 23 July 2015, is 50% bigger than Earth, is likely rocky, and takes approximately 385 Earth days to orbit within the habitable zone of its G-class (solar analog) star Kepler-452.
The discovery of a system of three tidally-locked planets orbiting the habitable zone of an ultracool dwarf star, TRAPPIST-1, was announced in May 2016. The discovery is considered significant because it dramatically increases the possibility of smaller, cooler, more numerous and closer stars possessing habitable planets.
Two potentially habitable planets discovered by the K2 mission in July 2016 orbit the M dwarf K2-72, around 227 light years from the Sun: K2-72c and K2-72e are both of similar size to Earth and receive similar amounts of stellar radiation.
Announced on 20 April 2017, LHS 1140b is a super-dense super-Earth 39 light years away, with 6.6 times Earth's mass and 1.4 times its radius; its star has 15% the mass of the Sun but much less observable stellar flare activity than most M dwarfs. The planet is one of the few observable by both transit and radial velocity whose mass is confirmed and whose atmosphere may be studied.
Discovered by radial velocity in June 2017, with approximately three times the mass of Earth, Luyten b orbits within the habitable zone of Luyten's Star just 12.2 light-years away.
At 11 light-years away, the second-closest potentially habitable planet, Ross 128 b, was announced in November 2017 following a decade of radial velocity study of the relatively "quiet" red dwarf star Ross 128. At 1.35 times Earth's mass, it is roughly Earth-sized and likely rocky in composition.
Discovered in March 2018, K2-155d is about 1.64 times the radius of Earth, is likely rocky and orbits in the habitable zone of its red dwarf star 203 light years away.
One of the earliest discoveries by the Transiting Exoplanet Survey Satellite (TESS), announced on July 31, 2019, is the super-Earth GJ 357 d, orbiting near the outer edge of the habitable zone of a red dwarf 31 light years away.
K2-18b is an exoplanet 124 light-years away, orbiting in the habitable zone of K2-18, a red dwarf. This planet is significant for the water vapor found in its atmosphere, a detection announced on September 17, 2019.
In September 2020, astronomers identified 24 contenders for superhabitable planets (planets potentially better suited for life than Earth) from among the more than 4,000 confirmed exoplanets known at the time, based on astrophysical parameters as well as the natural history of known life forms on Earth.
Habitability outside the HZ.
Liquid-water environments have been found to exist in the absence of atmospheric pressure and at temperatures outside the HZ temperature range. For example, Saturn's moons Titan and Enceladus and Jupiter's moons Europa and Ganymede, all of which are outside the habitable zone, may hold large volumes of liquid water in subsurface oceans.
Outside the HZ, tidal heating and radioactive decay are two possible heat sources that could contribute to the existence of liquid water. Abbot and Switzer (2011) put forward the possibility that subsurface water could exist on rogue planets as a result of radioactive decay-based heating and insulation by a thick surface layer of ice.
With some theorising that life on Earth may have actually originated in stable, subsurface habitats, it has been suggested that it may be common for wet subsurface extraterrestrial habitats such as these to 'teem with life'. On Earth itself, living organisms may be found more than below the surface.
Another possibility is that outside the HZ organisms may use alternative biochemistries that do not require water at all. Astrobiologist Christopher McKay has suggested that methane (CH4) may be a solvent conducive to the development of "cryolife", with the Sun's "methane habitable zone" being centered on from the star. This distance is coincident with the location of Titan, whose lakes and rain of methane make it an ideal location to find McKay's proposed cryolife. In addition, testing of a number of organisms has found some are capable of surviving in extra-HZ conditions.
Significance for complex and intelligent life.
The Rare Earth hypothesis argues that complex and intelligent life is uncommon and that the HZ is one of many critical factors. According to Ward & Brownlee (2004) and others, an HZ orbit and surface water are not only a primary requirement to sustain life but also a requirement to support the secondary conditions required for multicellular life to emerge and evolve. The secondary habitability factors are both geological (the role of surface water in sustaining necessary plate tectonics) and biochemical (the role of radiant energy in supporting photosynthesis for necessary atmospheric oxygenation). But others, such as Ian Stewart and Jack Cohen in their 2002 book "Evolving the Alien", argue that complex intelligent life may arise outside the HZ. Intelligent life outside the HZ may have evolved in subsurface environments, from alternative biochemistries or even from nuclear reactions.
On Earth, several complex multicellular life forms (or eukaryotes) have been identified with the potential to survive conditions that might exist outside the conservative habitable zone. Geothermal energy sustains ancient hydrothermal vent ecosystems, supporting large complex life forms such as "Riftia pachyptila". Similar environments may be found in oceans pressurised beneath solid crusts, such as those of Europa and Enceladus, outside of the habitable zone. Numerous microorganisms have been tested in simulated conditions and in low Earth orbit, including eukaryotes. An animal example is the "Milnesium tardigradum", which can withstand extreme temperatures well above the boiling point of water and the cold vacuum of outer space. In addition, the lichens "Rhizocarpon geographicum" and "Xanthoria elegans" have been found to survive in an environment where the atmospheric pressure is far too low for surface liquid water and where the radiant energy is also much lower than that which most plants require to photosynthesize. The fungi "Cryomyces antarcticus" and "Cryomyces minteri" are also able to survive and reproduce in Mars-like conditions.
Species, including humans, known to possess animal cognition require large amounts of energy, and have adapted to specific conditions, including an abundance of atmospheric oxygen and the availability of large quantities of chemical energy synthesized from radiant energy. If humans are to colonize other planets, true Earth analogs in the HZ are most likely to provide the closest natural habitat; this concept was the basis of Stephen H. Dole's 1964 study. With suitable temperature, gravity, atmospheric pressure and the presence of water, the necessity of spacesuits or space habitat analogs on the surface may be eliminated, and complex Earth life can thrive.
Planets in the HZ remain of paramount interest to researchers looking for intelligent life elsewhere in the universe. The Drake equation, sometimes used to estimate the number of intelligent civilizations in our galaxy, contains the factor or parameter ne, which is the average number of planetary-mass objects orbiting within the HZ of each star. A low value lends support to the Rare Earth hypothesis, which posits that intelligent life is a rarity in the Universe, whereas a high value provides evidence for the Copernican mediocrity principle, the view that habitability—and therefore life—is common throughout the Universe. A 1971 NASA report by Drake and Bernard Oliver proposed the "water hole", based on the spectral absorption lines of the hydrogen and hydroxyl components of water, as a good, obvious band for communication with extraterrestrial intelligence that has since been widely adopted by astronomers involved in the search for extraterrestrial intelligence. According to Jill Tarter, Margaret Turnbull and many others, HZ candidates are the priority targets to narrow waterhole searches and the Allen Telescope Array now extends Project Phoenix to such candidates.
Because the HZ is considered the most likely habitat for intelligent life, METI efforts have also been focused on systems likely to have planets there. The 2001 Teen Age Message and 2003 Cosmic Call 2, for example, were sent to the 47 Ursae Majoris system, known to contain three Jupiter-mass planets and possibly with a terrestrial planet in the HZ. The Teen Age Message was also directed to the 55 Cancri system, which has a gas giant in its HZ. A Message from Earth in 2008, and Hello From Earth in 2009, were directed to the Gliese 581 system, containing three planets in the HZ—Gliese 581 c, d, and the unconfirmed g.
References.
External links.
| [
{
"math_id": 0,
"text": "\\sqrt{0.25}"
}
] | https://en.wikipedia.org/wiki?curid=1072751 |
10728412 | Convolution random number generator | Non-uniform number generator
In statistics and computer software, a convolution random number generator is a pseudo-random number sampling method that can be used to generate random variates from certain classes of probability distribution. The particular advantage of this type of approach is that it allows advantage to be taken of existing software for generating random variates from other, usually non-uniform, distributions. However, faster algorithms may be obtainable for the same distributions by other more complicated approaches.
A number of distributions can be expressed in terms of the (possibly weighted) sum of two or more random variables from other distributions. (The distribution of the sum is the convolution of the distributions of the individual random variables).
Example.
Consider the problem of generating a random variable with an Erlang distribution, formula_0. Such a random variable can be defined as the sum of "k" random variables each with an exponential distribution formula_1. This problem is equivalent to generating a random number for a special case of the Gamma distribution, in which the shape parameter takes an integer value.
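A minimal sketch of this convolution approach in Python (assuming NumPy is available; the function name and the sanity check are illustrative, and the parameterisation follows this article, in which each of the "k" exponentials has rate "k""θ"):

```python
import numpy as np

def erlang_convolution(k, theta, size=1, rng=None):
    """Draw Erlang(k, theta) variates as sums of k independent exponential variates."""
    if rng is None:
        rng = np.random.default_rng()
    # Each exponential has rate k*theta, i.e. mean (scale) 1/(k*theta).
    exps = rng.exponential(scale=1.0 / (k * theta), size=(size, k))
    return exps.sum(axis=1)

# Sanity check: the sample mean should be close to 1/theta
# (see the expectation computed below).
samples = erlang_convolution(k=3, theta=0.5, size=100_000)
print(samples.mean())  # approximately 2.0
```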
Notice that:
formula_2
One can now generate formula_3 samples using a random number generator for the exponential distribution:
if formula_4 then formula_5 | [
{
"math_id": 0,
"text": "X\\ \\sim \\operatorname{Erlang}(k, \\theta)"
},
{
"math_id": 1,
"text": "\\operatorname{Exp}(k \\theta) \\,"
},
{
"math_id": 2,
"text": "\\operatorname{E}[X] = \\frac{1}{k \\theta} + \\frac{1}{k \\theta} + \\cdots + \\frac{1}{k \\theta} = \\frac{1}{\\theta} ."
},
{
"math_id": 3,
"text": "\\operatorname{Erlang}(k, \\theta)"
},
{
"math_id": 4,
"text": "X_i\\ \\sim \\operatorname{Exp}(k \\theta)"
},
{
"math_id": 5,
"text": "X=\\sum_{i=1}^k X_i \\sim \\operatorname{Erlang}(k,\\theta) ."
}
] | https://en.wikipedia.org/wiki?curid=10728412 |
1072894 | Nitrogen-13 | Isotope of nitrogen
Nitrogen-13 (13N) is a radioisotope of nitrogen used in positron emission tomography (PET). It has a half-life of a little under ten minutes, so it must be made at the PET site. A cyclotron may be used for this purpose.
Nitrogen-13 is used to tag ammonia molecules for PET myocardial perfusion imaging.
Production.
Nitrogen-13 is used in medical PET imaging in the form of 13N-labelled ammonia. It can be produced with a medical cyclotron, using a target of pure water with a trace amount of ethanol. The reactants are oxygen-16 (present as H2O) and a proton, and the products are nitrogen-13 and an alpha particle (helium-4).
1H + 16O → 13N + 4He
The proton must be accelerated to have total energy greater than 5.66 MeV. This is the threshold energy for this reaction, as it is endothermic (i.e., the mass of the products is greater than that of the reactants, so energy needs to be supplied, which is converted to mass). For this reason, the proton needs to carry extra energy to induce the nuclear reaction.
The energy difference is actually 5.22 MeV, but if the proton only supplied this energy, the products would be formed with no kinetic energy. As momentum must be conserved, the true energy that needs to be supplied by the proton is given by:
formula_0
where formula_1 is the mass of 4He and formula_2 is the mass of 13N; therefore formula_3 = 0.307806661. The presence of ethanol (at a concentration of ~5 mM) in aqueous solution allows the convenient formation of ammonia as nitrogen-13 is produced. Other routes of producing 13N-labelled ammonia exist, some of which facilitate co-generation of other light radionuclides for diagnostic imaging.
Nitrogen-13 plays a significant role in the CNO cycle, which is the dominant source of energy in main-sequence stars more massive than 1.5 times the mass of the Sun.
Lightning may have a role in the production of nitrogen-13.
References.
| [
{
"math_id": 0,
"text": "K =(1+m/M) |E|"
},
{
"math_id": 1,
"text": "m"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "m/M"
}
] | https://en.wikipedia.org/wiki?curid=1072894 |
1072915 | Kernel (linear algebra) | Vectors mapped to 0 by a linear map
In mathematics, the kernel of a linear map, also known as the null space or nullspace, is the part of the domain which is mapped to the zero vector of the co-domain; the kernel is always a linear subspace of the domain. That is, given a linear map "L" : "V" → "W" between two vector spaces V and W, the kernel of L is the vector space of all elements v of V such that "L"(v) = 0, where 0 denotes the zero vector in W, or more symbolically:
formula_0
Properties.
The kernel of L is a linear subspace of the domain V.
In the linear map formula_1 two elements of V have the same image in W if and only if their difference lies in the kernel of L, that is,
formula_2
From this, it follows that the image of L is isomorphic to the quotient of V by the kernel:
formula_3
In the case where V is finite-dimensional, this implies the rank–nullity theorem:
formula_4
where the term "rank" refers to the dimension of the image of L, formula_5 while "nullity" refers to the dimension of the kernel of L, formula_6
That is,
formula_7
so that the rank–nullity theorem can be restated as
formula_8
When V is an inner product space, the quotient formula_9 can be identified with the orthogonal complement in V of formula_10. This is the generalization to linear operators of the row space, or coimage, of a matrix.
Generalization to modules.
The notion of kernel also makes sense for homomorphisms of modules, which are generalizations of vector spaces where the scalars are elements of a ring, rather than a field. The domain of the mapping is a module, with the kernel constituting a submodule. Here, the concepts of rank and nullity do not necessarily apply.
In functional analysis.
If V and W are topological vector spaces such that W is finite-dimensional, then a linear operator "L": "V" → "W" is continuous if and only if the kernel of L is a closed subspace of V.
Representation as matrix multiplication.
Consider a linear map represented as an "m" × "n" matrix A with coefficients in a field K (typically formula_11 or formula_12), that is operating on column vectors x with n components over K.
The kernel of this linear map is the set of solutions to the equation "A"x = 0, where 0 is understood as the zero vector. The dimension of the kernel of "A" is called the nullity of "A". In set-builder notation,
formula_13
The matrix equation is equivalent to a homogeneous system of linear equations:
formula_14
Thus the kernel of "A" is the same as the solution set to the above homogeneous equations.
Subspace properties.
The kernel of an "m" × "n" matrix A over a field K is a linear subspace of K"n". That is, the kernel of A, the set Null("A"), has the following three properties: Null("A") always contains the zero vector 0; if x ∈ Null("A") and y ∈ Null("A"), then x + y ∈ Null("A"); and if x ∈ Null("A") and c is a scalar in K, then "c"x ∈ Null("A").
The row space of a matrix.
The product "A"x can be written in terms of the dot product of vectors as follows:
formula_15
Here, a1, ... , a"m" denote the rows of the matrix A. It follows that x is in the kernel of A, if and only if x is orthogonal (or perpendicular) to each of the row vectors of A (since orthogonality is defined as having a dot product of 0).
The row space, or coimage, of a matrix A is the span of the row vectors of A. By the above reasoning, the kernel of A is the orthogonal complement to the row space. That is, a vector x lies in the kernel of A, if and only if it is perpendicular to every vector in the row space of A.
The dimension of the row space of A is called the rank of "A", and the dimension of the kernel of A is called the nullity of A. These quantities are related by the rank–nullity theorem
formula_16
Left null space.
The left null space, or cokernel, of a matrix A consists of all column vectors x such that xT"A" = 0T, where T denotes the transpose of a matrix. The left null space of A is the same as the kernel of "A"T. The left null space of A is the orthogonal complement to the column space of A, and is dual to the cokernel of the associated linear transformation. The kernel, the row space, the column space, and the left null space of A are the four fundamental subspaces associated with the matrix A.
Nonhomogeneous systems of linear equations.
The kernel also plays a role in the solution to a nonhomogeneous system of linear equations:
formula_17
If u and v are two possible solutions to the above equation, then
formula_18
Thus, the difference of any two solutions to the equation "A"x = b lies in the kernel of A.
It follows that any solution to the equation "A"x = b can be expressed as the sum of a fixed solution v and an arbitrary element of the kernel. That is, the solution set to the equation "A"x = b is
formula_19
Geometrically, this says that the solution set to "A"x = b is the translation of the kernel of A by the vector v. See also Fredholm alternative and flat (geometry).
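As a concrete, illustrative check of this decomposition, the following Python sketch (assuming NumPy and SciPy are available; the matrix and variable names are made up) builds one particular solution with a least-squares solver and a kernel basis with scipy.linalg.null_space, then verifies that adding any kernel element still solves the system:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])
b = np.array([7., 8.])

v = np.linalg.lstsq(A, b, rcond=None)[0]  # one particular solution of A v = b
N = null_space(A)                         # columns form a basis of the kernel of A

c = np.array([2.5])                       # arbitrary coefficients for the kernel basis
x = v + N @ c                             # another solution, per the decomposition above
print(np.allclose(A @ x, b))              # True
```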
Illustration.
The following is a simple illustration of the computation of the kernel of a matrix (see Computation by Gaussian elimination, below, for methods better suited to more complex calculations). The illustration also touches on the row space and its relation to the kernel.
Consider the matrix
formula_20
The kernel of this matrix consists of all vectors ("x", "y", "z") ∈ R3 for which
formula_21
which can be expressed as a homogeneous system of linear equations involving x, y, and z:
formula_22
The same linear equations can also be written in matrix form as:
formula_23
Through Gauss–Jordan elimination, the matrix can be reduced to:
formula_24
Rewriting the matrix in equation form yields:
formula_25
The elements of the kernel can be further expressed in parametric vector form, as follows:
formula_26
Since c is a free variable ranging over all real numbers, this can be expressed equally well as:
formula_27
The kernel of A is precisely the solution set to these equations (in this case, a line through the origin in R3). Here, the vector (−1,−26,16)T constitutes a basis of the kernel of A. The nullity of A is 1.
The following dot products are zero:
formula_28
which illustrates that vectors in the kernel of A are orthogonal to each of the row vectors of A.
These two (linearly independent) row vectors span the row space of A—a plane orthogonal to the vector (−1,−26,16)T.
With the rank 2 of A, the nullity 1 of A, and the dimension 3 of the domain of A, we have an illustration of the rank–nullity theorem.
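The same computation can be checked numerically; a small sketch in Python, assuming NumPy and SciPy are available (the variable names are illustrative):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[ 2., 3., 5.],
              [-4., 2., 3.]])

ns = null_space(A)              # orthonormal basis of the kernel; shape (3, 1), so nullity 1
v = np.array([-1., -26., 16.])  # the basis vector found by hand above

print(np.allclose(A @ v, 0))                   # True: v lies in the kernel
print(np.allclose(np.cross(ns[:, 0], v), 0))   # True: the computed basis is parallel to v
```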
Computation by Gaussian elimination.
A basis of the kernel of a matrix may be computed by Gaussian elimination.
For this purpose, given an "m" × "n" matrix A, we first construct the row-augmented matrix formula_34 where "I" is the "n" × "n" identity matrix.
Computing its column echelon form by Gaussian elimination (or any other suitable method), we get a matrix formula_35 A basis of the kernel of A consists in the non-zero columns of C such that the corresponding column of B is a zero column.
In fact, the computation may be stopped as soon as the upper matrix is in column echelon form: the remainder of the computation consists in changing the basis of the vector space generated by the columns whose upper part is zero.
For example, suppose that
formula_36
Then
formula_37
Putting the upper part in column echelon form by column operations on the whole matrix gives
formula_38
The last three columns of B are zero columns. Therefore, the three last vectors of C,
formula_39
are a basis of the kernel of A.
Proof that the method computes the kernel: Since column operations correspond to post-multiplication by invertible matrices, the fact that formula_40 reduces to formula_41 means that there exists an invertible matrix formula_42 such that formula_43 with formula_44 in column echelon form. Thus formula_45, formula_46, and formula_47. A column vector formula_48 belongs to the kernel of formula_49 (that is formula_50) if and only if formula_51 where formula_52. As formula_44 is in column echelon form, formula_53, if and only if the nonzero entries of formula_54 correspond to the zero columns of formula_44. By multiplying by formula_55, one may deduce that this is the case if and only if formula_56 is a linear combination of the corresponding columns of formula_55.
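The procedure just described can be sketched directly in Python; the following is a minimal illustration (the function name is ours) that works over exact rationals via fractions.Fraction and reproduces the basis found in the example above:

```python
from fractions import Fraction

def kernel_basis(A):
    """Basis of the kernel of A, via column reduction of the stacked matrix [A; I]."""
    m, n = len(A), len(A[0])
    # Stack A on top of the n x n identity matrix, using exact rational arithmetic.
    M = [[Fraction(x) for x in row] for row in A]
    M += [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    rows = m + n
    pivot = 0                           # next column position to receive a pivot
    for r in range(m):                  # put the upper block in column echelon form
        col = next((c for c in range(pivot, n) if M[r][c] != 0), None)
        if col is None:
            continue                    # no pivot in this row
        for i in range(rows):           # move the pivot column into place
            M[i][pivot], M[i][col] = M[i][col], M[i][pivot]
        d = M[r][pivot]
        for i in range(rows):           # normalise the pivot column
            M[i][pivot] /= d
        for c in range(n):              # eliminate row r from every other column
            if c != pivot and M[r][c] != 0:
                f = M[r][c]
                for i in range(rows):
                    M[i][c] -= f * M[i][pivot]
        pivot += 1
    # Columns whose upper block (B) is zero give kernel vectors in the lower block (C).
    return [[M[m + i][c] for i in range(n)]
            for c in range(n) if all(M[i][c] == 0 for i in range(m))]

A = [[1, 0, -3, 0, 2, -8],
     [0, 1, 5, 0, -1, 4],
     [0, 0, 0, 1, 7, -9],
     [0, 0, 0, 0, 0, 0]]
for v in kernel_basis(A):
    # prints (as Fractions) the three basis vectors:
    # (3, -5, 1, 0, 0, 0), (-2, 1, 0, -7, 1, 0), (8, -4, 0, 9, 0, 1)
    print(v)
```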
Numerical computation.
The problem of computing the kernel on a computer depends on the nature of the coefficients.
Exact coefficients.
If the coefficients of the matrix are exactly given numbers, the column echelon form of the matrix may be computed with Bareiss algorithm more efficiently than with Gaussian elimination. It is even more efficient to use modular arithmetic and Chinese remainder theorem, which reduces the problem to several similar ones over finite fields (this avoids the overhead induced by the non-linearity of the computational complexity of integer multiplication).
For coefficients in a finite field, Gaussian elimination works well, but for the large matrices that occur in cryptography and Gröbner basis computation, better algorithms are known, which have roughly the same computational complexity, but are faster and behave better with modern computer hardware.
Floating point computation.
For matrices whose entries are floating-point numbers, the problem of computing the kernel makes sense only for matrices such that the number of rows is equal to their rank: because of the rounding errors, a floating-point matrix has almost always a full rank, even when it is an approximation of a matrix of a much smaller rank. Even for a full-rank matrix, it is possible to compute its kernel only if it is well conditioned, i.e. it has a low condition number.
Even for a well-conditioned full-rank matrix, Gaussian elimination does not behave correctly: it introduces rounding errors that are too large for getting a significant result. As the computation of the kernel of a matrix is a special instance of solving a homogeneous system of linear equations, the kernel may be computed with any of the various algorithms designed to solve homogeneous systems. State-of-the-art software for this purpose is the LAPACK library.
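For floating-point matrices, a standard approach is to take the singular value decomposition and keep the right singular vectors belonging to negligible singular values (this is, for instance, what SciPy's null_space routine does). A minimal sketch, assuming NumPy; the tolerance choice here is simplistic compared with production libraries:

```python
import numpy as np

def null_space_svd(A, rtol=1e-10):
    """Orthonormal basis of the (numerical) kernel of A via the SVD."""
    U, s, Vt = np.linalg.svd(A)
    tol = rtol * (s[0] if s.size else 0.0)       # treat tiny singular values as zero
    rank = int((s > tol).sum())
    return Vt[rank:].T                           # right singular vectors for the zero part

A = np.array([[ 2., 3., 5.],
              [-4., 2., 3.]])
print(null_space_svd(A).ravel())   # proportional to (-1, -26, 16), up to sign and scaling
```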
See also.
Notes and references.
Bibliography.
| [
{
"math_id": 0,
"text": "\\ker(L) = \\left\\{ \\mathbf{v} \\in V \\mid L(\\mathbf{v})=\\mathbf{0} \\right\\} = L^{-1}(\\mathbf{0})."
},
{
"math_id": 1,
"text": "L : V \\to W,"
},
{
"math_id": 2,
"text": "L\\left(\\mathbf{v}_1\\right) = L\\left(\\mathbf{v}_2\\right) \\quad \\text{ if and only if } \\quad L\\left(\\mathbf{v}_1-\\mathbf{v}_2\\right) = \\mathbf{0}."
},
{
"math_id": 3,
"text": "\\operatorname{im}(L) \\cong V / \\ker(L)."
},
{
"math_id": 4,
"text": "\\dim(\\ker L) + \\dim(\\operatorname{im} L) = \\dim(V)."
},
{
"math_id": 5,
"text": "\\dim(\\operatorname{im} L),"
},
{
"math_id": 6,
"text": "\\dim(\\ker L)."
},
{
"math_id": 7,
"text": "\\operatorname{Rank}(L) = \\dim(\\operatorname{im} L) \\qquad \\text{ and } \\qquad \\operatorname{Nullity}(L) = \\dim(\\ker L),"
},
{
"math_id": 8,
"text": "\\operatorname{Rank}(L) + \\operatorname{Nullity}(L) = \\dim \\left(\\operatorname{domain} L\\right)."
},
{
"math_id": 9,
"text": "V / \\ker(L)"
},
{
"math_id": 10,
"text": "\\ker(L)"
},
{
"math_id": 11,
"text": "\\mathbb{R}"
},
{
"math_id": 12,
"text": "\\mathbb{C}"
},
{
"math_id": 13,
"text": "\\operatorname{N}(A) = \\operatorname{Null}(A) = \\operatorname{ker}(A) = \\left\\{ \\mathbf{x}\\in K^n \\mid A\\mathbf{x} = \\mathbf{0} \\right\\}."
},
{
"math_id": 14,
"text": "A\\mathbf{x}=\\mathbf{0} \\;\\;\\Leftrightarrow\\;\\;\n\\begin{alignat}{7}\na_{11} x_1 &&\\; + \\;&& a_{12} x_2 &&\\; + \\;\\cdots\\; + \\;&& a_{1n} x_n &&\\; = \\;&&& 0 \\\\\na_{21} x_1 &&\\; + \\;&& a_{22} x_2 &&\\; + \\;\\cdots\\; + \\;&& a_{2n} x_n &&\\; = \\;&&& 0 \\\\\n && && && && &&\\vdots\\ \\;&&& \\\\\na_{m1} x_1 &&\\; + \\;&& a_{m2} x_2 &&\\; + \\;\\cdots\\; + \\;&& a_{mn} x_n &&\\; = \\;&&& 0\\text{.} \\\\\n\\end{alignat}"
},
{
"math_id": 15,
"text": "A\\mathbf{x} = \\begin{bmatrix} \\mathbf{a}_1 \\cdot \\mathbf{x} \\\\ \\mathbf{a}_2 \\cdot \\mathbf{x} \\\\ \\vdots \\\\ \\mathbf{a}_m \\cdot \\mathbf{x} \\end{bmatrix}."
},
{
"math_id": 16,
"text": "\\operatorname{rank}(A) + \\operatorname{nullity}(A) = n."
},
{
"math_id": 17,
"text": "A\\mathbf{x} = \\mathbf{b}\\quad \\text{or} \\quad \\begin{alignat}{7}\na_{11} x_1 &&\\; + \\;&& a_{12} x_2 &&\\; + \\;\\cdots\\; + \\;&& a_{1n} x_n &&\\; = \\;&&& b_1 \\\\\na_{21} x_1 &&\\; + \\;&& a_{22} x_2 &&\\; + \\;\\cdots\\; + \\;&& a_{2n} x_n &&\\; = \\;&&& b_2 \\\\\n && && && && &&\\vdots\\ \\;&&& \\\\\na_{m1} x_1 &&\\; + \\;&& a_{m2} x_2 &&\\; + \\;\\cdots\\; + \\;&& a_{mn} x_n &&\\; = \\;&&& b_m \\\\\n\\end{alignat}"
},
{
"math_id": 18,
"text": "A(\\mathbf{u} - \\mathbf{v}) = A\\mathbf{u} - A\\mathbf{v} = \\mathbf{b} - \\mathbf{b} = \\mathbf{0}"
},
{
"math_id": 19,
"text": "\\left\\{ \\mathbf{v}+\\mathbf{x} \\mid A \\mathbf{v}=\\mathbf{b} \\land \\mathbf{x}\\in\\operatorname{Null}(A) \\right\\},"
},
{
"math_id": 20,
"text": "A = \\begin{bmatrix} 2 & 3 & 5 \\\\ -4 & 2 & 3 \\end{bmatrix}."
},
{
"math_id": 21,
"text": "\\begin{bmatrix} 2 & 3 & 5 \\\\ -4 & 2 & 3 \\end{bmatrix} \\begin{bmatrix} x \\\\ y \\\\ z \\end{bmatrix} = \\begin{bmatrix} 0 \\\\ 0 \\end{bmatrix},"
},
{
"math_id": 22,
"text": "\\begin{align}\n 2x + 3y + 5z &= 0, \\\\\n-4x + 2y + 3z &= 0.\n\\end{align}"
},
{
"math_id": 23,
"text": "\n \\left[\\begin{array}{ccc|c}\n 2 & 3 & 5 & 0 \\\\\n -4 & 2 & 3 & 0\n \\end{array}\\right].\n"
},
{
"math_id": 24,
"text": "\n \\left[\\begin{array}{ccc|c}\n 1 & 0 & 1/16 & 0 \\\\\n 0 & 1 & 13/8 & 0\n \\end{array}\\right].\n"
},
{
"math_id": 25,
"text": "\\begin{align}\nx &= -\\frac{1}{16}z \\\\\ny &= -\\frac{13}{8}z.\n\\end{align}"
},
{
"math_id": 26,
"text": "\\begin{bmatrix} x \\\\ y \\\\ z\\end{bmatrix} = c \\begin{bmatrix} -1/16 \\\\ -13/8 \\\\ 1\\end{bmatrix}\\quad (\\text{where }c \\in \\mathbb{R})"
},
{
"math_id": 27,
"text": "\n\\begin{bmatrix} x \\\\ y \\\\ z \\end{bmatrix}\n = c \\begin{bmatrix} -1 \\\\ -26 \\\\ 16 \\end{bmatrix}.\n"
},
{
"math_id": 28,
"text": "\n \\begin{bmatrix} 2 & 3 & 5 \\end{bmatrix}\n \\begin{bmatrix} -1 \\\\ -26 \\\\ 16 \\end{bmatrix}\n= 0\n\\quad\\mathrm{and}\\quad\n \\begin{bmatrix} -4 & 2 & 3 \\end{bmatrix}\n \\begin{bmatrix} -1 \\\\ -26 \\\\ 16 \\end{bmatrix}\n= 0 ,\n"
},
{
"math_id": 29,
"text": " L(x_1, x_2, x_3) = (2 x_1 + 3 x_2 + 5 x_3,\\; - 4 x_1 + 2 x_2 + 3 x_3)"
},
{
"math_id": 30,
"text": " \\begin{alignat}{7}\n 2x_1 &\\;+\\;& 3x_2 &\\;+\\;& 5x_3 &\\;=\\;& 0 \\\\\n -4x_1 &\\;+\\;& 2x_2 &\\;+\\;& 3x_3 &\\;=\\;& 0\n\\end{alignat}"
},
{
"math_id": 31,
"text": "L(f) = f(0.3)."
},
{
"math_id": 32,
"text": "D(f) = \\frac{df}{dx}."
},
{
"math_id": 33,
"text": " s(x_1, x_2, x_3, x_4, \\ldots) = (x_2, x_3, x_4, \\ldots)."
},
{
"math_id": 34,
"text": " \\begin{bmatrix}A \\\\ \\hline I \\end{bmatrix},"
},
{
"math_id": 35,
"text": " \\begin{bmatrix} B \\\\\\hline C \\end{bmatrix}."
},
{
"math_id": 36,
"text": "A = \\begin{bmatrix}\n1 & 0 & -3 & 0 & 2 & -8 \\\\\n0 & 1 & 5 & 0 & -1 & 4 \\\\\n0 & 0 & 0 & 1 & 7 & -9 \\\\\n0 & 0 & 0 & 0 & 0 & 0\n\\end{bmatrix}. "
},
{
"math_id": 37,
"text": " \\begin{bmatrix} A \\\\ \\hline I \\end{bmatrix} =\n\\begin{bmatrix}\n1 & 0 & -3 & 0 & 2 & -8 \\\\\n0 & 1 & 5 & 0 & -1 & 4 \\\\\n0 & 0 & 0 & 1 & 7 & -9 \\\\\n0 & 0 & 0 & 0 & 0 & 0 \\\\\n\\hline\n1 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 1 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 1\n\\end{bmatrix}. "
},
{
"math_id": 38,
"text": " \\begin{bmatrix} B \\\\ \\hline C \\end{bmatrix} =\n\\begin{bmatrix}\n1 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 1 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 \\\\\n\\hline\n1 & 0 & 0 & 3 & -2 & 8 \\\\\n0 & 1 & 0 & -5 & 1 & -4 \\\\\n0 & 0 & 0 & 1 & 0 & 0 \\\\\n0 & 0 & 1 & 0 & -7 & 9 \\\\\n0 & 0 & 0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 1\n\\end{bmatrix}. "
},
{
"math_id": 39,
"text": "\\left[\\!\\! \\begin{array}{r} 3 \\\\ -5 \\\\ 1 \\\\ 0 \\\\ 0 \\\\ 0 \\end{array} \\right] ,\\;\n\\left[\\!\\! \\begin{array}{r} -2 \\\\ 1 \\\\ 0 \\\\ -7 \\\\ 1 \\\\ 0 \\end{array} \\right],\\;\n\\left[\\!\\! \\begin{array}{r} 8 \\\\ -4 \\\\ 0 \\\\ 9 \\\\ 0 \\\\ 1 \\end{array} \\right] "
},
{
"math_id": 40,
"text": "\\begin{bmatrix} A \\\\ \\hline I \\end{bmatrix}"
},
{
"math_id": 41,
"text": "\\begin{bmatrix} B \\\\ \\hline C \\end{bmatrix}"
},
{
"math_id": 42,
"text": "P"
},
{
"math_id": 43,
"text": " \\begin{bmatrix} A \\\\ \\hline I \\end{bmatrix} P = \\begin{bmatrix} B \\\\ \\hline C \\end{bmatrix}, "
},
{
"math_id": 44,
"text": "B"
},
{
"math_id": 45,
"text": "AP = B"
},
{
"math_id": 46,
"text": "IP = C "
},
{
"math_id": 47,
"text": " AC = B "
},
{
"math_id": 48,
"text": "\\mathbf v"
},
{
"math_id": 49,
"text": "A"
},
{
"math_id": 50,
"text": "A \\mathbf v = \\mathbf 0"
},
{
"math_id": 51,
"text": "B \\mathbf w = \\mathbf 0,"
},
{
"math_id": 52,
"text": "\\mathbf w = P^{-1} \\mathbf v = C^{-1} \\mathbf v"
},
{
"math_id": 53,
"text": "B \\mathbf w = \\mathbf 0"
},
{
"math_id": 54,
"text": "\\mathbf w"
},
{
"math_id": 55,
"text": "C"
},
{
"math_id": 56,
"text": "\\mathbf v = C \\mathbf w"
}
] | https://en.wikipedia.org/wiki?curid=1072915 |
1072943 | Forward algorithm | Hidden Markov model algorithm
The forward algorithm, in the context of a hidden Markov model (HMM), is used to calculate a 'belief state': the probability of a state at a certain time, given the history of evidence. The process is also known as "filtering". The forward algorithm is closely related to, but distinct from, the Viterbi algorithm.
Introduction.
The forward and backward algorithms should be placed within the context of probability as they appear to simply be names given to a set of standard mathematical procedures within a few fields. For example, neither "forward algorithm" nor "Viterbi" appear in the Cambridge encyclopedia of mathematics. The main observation to take away from these algorithms is how to organize Bayesian updates and inference to be computationally efficient in the context of directed graphs of variables (see sum-product networks).
For a hidden Markov model, this probability is written as formula_0. Here formula_1 is the hidden state, which is abbreviated as formula_2, and formula_3 are the observations formula_4 to formula_5.
The backward algorithm complements the forward algorithm by taking into account the future history if one wanted to improve the estimate for past times. This is referred to as "smoothing" and the forward/backward algorithm computes formula_6 for formula_7. Thus, the full forward/backward algorithm takes into account all evidence. Note that a belief state can be calculated at each time step, but doing this does not, in a strict sense, produce the most likely state "sequence", but rather the most likely state at each time step, given the previous history. In order to achieve the most likely sequence, the Viterbi algorithm is required. It computes the most likely state sequence given the history of observations, that is, the state sequence that maximizes formula_8.
Algorithm.
The goal of the forward algorithm is to compute the joint probability formula_9, where for notational convenience we have abbreviated formula_1 as formula_2 and formula_10 as formula_3. Once the joint probability formula_9 is computed, the other probabilities formula_11 and formula_12 are easily obtained.
Both the state formula_2 and observation formula_13 are assumed to be discrete, finite random variables. The hidden Markov model's state transition probabilities formula_14, observation/emission probabilities formula_15, and initial prior probability formula_16 are assumed to be known. Furthermore, the sequence of observations formula_3 are assumed to be given.
Computing formula_9 naively would require marginalizing over all possible state sequences formula_17, the number of which grows exponentially with formula_5. Instead, the forward algorithm takes advantage of the conditional independence rules of the hidden Markov model (HMM) to perform the calculation recursively.
To demonstrate the recursion, let
formula_18.
Using the chain rule to expand formula_19, we can then write
formula_20.
Because formula_13 is conditionally independent of everything but formula_2, and formula_2 is conditionally independent of everything but formula_21, this simplifies to
formula_22.
Thus, since formula_15 and formula_14 are given by the model's emission distributions and transition probabilities, which are assumed to be known, one can quickly calculate formula_23 from formula_24 and avoid incurring exponential computation time.
The recursion formula given above can be written in a more compact form. Let formula_25 be the transition probabilities and formula_26 be the emission probabilities, then
formula_27
where formula_28 is the transition probability matrix, formula_29 is the i-th row of the emission probability matrix formula_30 which corresponds to the actual observation formula_31 at time formula_5, and formula_32 is the alpha vector. The formula_33 is the Hadamard product between the transpose of formula_29 and formula_34.
The initial condition is set in accordance to the prior probability over formula_35 as
formula_36.
Once the joint probability formula_37 has been computed using the forward algorithm, we can easily obtain the related joint probability formula_12 as
formula_38
and the required conditional probability formula_11 as
formula_39
Once the conditional probability has been calculated, we can also find the point estimate of formula_2. For instance, the MAP estimate of formula_2 is given by
formula_40
while the MMSE estimate of formula_2 is given by
formula_41
The forward algorithm is easily modified to account for observations from variants of the hidden Markov model as well, such as the Markov jump linear system.
Pseudocode.
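In outline, the recursion above translates directly into code. The following is a minimal sketch in Python (assuming NumPy; the function and variable names are ours, and the small numerical example uses made-up probabilities):

```python
import numpy as np

def forward(A, B, prior, observations):
    """Forward algorithm for a discrete HMM.

    A[i, j]  = p(x_t = i | x_{t-1} = j)   (transition probabilities)
    B[i, j]  = p(y_t = i | x_t = j)       (emission probabilities)
    prior[j] = p(x_0 = j)
    Returns the final joint vector alpha(x_T) and the probability of the
    whole observation sequence.
    """
    alpha = B[observations[0]] * prior        # alpha(x_0) = p(y_0 | x_0) p(x_0)
    for y in observations[1:]:
        alpha = B[y] * (A @ alpha)            # alpha_t = b_t ⊙ (A alpha_{t-1})
    return alpha, alpha.sum()

# Illustrative, made-up numbers: 2 hidden states, 3 possible observation symbols.
A = np.array([[0.7, 0.4],
              [0.3, 0.6]])
B = np.array([[0.5, 0.1],
              [0.4, 0.3],
              [0.1, 0.6]])
prior = np.array([0.6, 0.4])

alpha_T, evidence = forward(A, B, prior, [0, 1, 2])
print(alpha_T / alpha_T.sum())   # filtering distribution over the final hidden state
```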
Example.
This example considers inferring the possible states of the weather from the observed condition of seaweed. We have observations of seaweed for three consecutive days as dry, damp, and soggy, in that order. The possible states of weather can be sunny, cloudy, or rainy. In total, there can be formula_49 such weather sequences. Exploring all such possible state sequences is computationally very expensive. To reduce this complexity, the forward algorithm comes in handy; the trick lies in using the conditional independence of the sequence steps to calculate partial probabilities, formula_50 as shown in the above derivation. Hence, we can calculate the probabilities as the product of the appropriate observation/emission probability, formula_15 (the probability of the observation formula_13 seen at time t given the state), with the sum of probabilities of reaching that state at time t, calculated using transition probabilities. This reduces the complexity of the problem from searching the whole search space to just using previously computed formula_51's and transition probabilities.
Complexity.
The complexity of the forward algorithm is formula_52, where formula_53 is the number of possible hidden states (such as the three weather states in the example above) and formula_54 is the length of the sequence of the observed variable. This is a clear reduction from the ad hoc method of exploring all possible state sequences, which has a complexity of formula_55.
History.
The forward algorithm is one of the algorithms used to solve the decoding problem. Since the development of speech recognition and pattern recognition and related fields like computational biology which use HMMs, the forward algorithm has gained popularity.
Applications.
The forward algorithm is mostly used in applications that require determining the probability of being in a specific state given a known sequence of observations. The algorithm can be applied wherever we can train a model as we receive data, using Baum–Welch or any general EM algorithm. The forward algorithm will then tell us about the probability of data with respect to what is expected from our model. One of the applications can be in the domain of finance, where it can help decide on when to buy or sell tangible assets.
It can have applications in all fields where we apply hidden Markov models. Popular ones include natural language processing domains like part-of-speech tagging and speech recognition. Recently it is also being used in the domain of bioinformatics.
The forward algorithm can also be applied to weather prediction. We can have an HMM describing the weather and its relation to the state of observations for a few consecutive days (some examples could be dry, damp, soggy, sunny, cloudy, rainy, etc.). We can consider calculating the probability of observing any sequence of observations recursively given the HMM. We can then calculate the probability of reaching an intermediate state as the sum of all possible paths to that state. Thus the partial probabilities for the final observation will hold the probability of reaching those states going through all possible paths.
References.
| [
{
"math_id": 0,
"text": "p(x_t | y_{1:t} )"
},
{
"math_id": 1,
"text": "x(t)"
},
{
"math_id": 2,
"text": "x_t"
},
{
"math_id": 3,
"text": "y_{1:t}"
},
{
"math_id": 4,
"text": "1"
},
{
"math_id": 5,
"text": "t"
},
{
"math_id": 6,
"text": "p(x_t | y_{1:T} )"
},
{
"math_id": 7,
"text": "1 < t < T"
},
{
"math_id": 8,
"text": "p(x_{0:t}|y_{0:t})"
},
{
"math_id": 9,
"text": "p(x_t,y_{1:t})"
},
{
"math_id": 10,
"text": "(y(1), y(2), ..., y(t))"
},
{
"math_id": 11,
"text": "p(x_t|y_{1:t})"
},
{
"math_id": 12,
"text": "p(y_{1:t})"
},
{
"math_id": 13,
"text": "y_t"
},
{
"math_id": 14,
"text": "p(x_t|x_{t-1})"
},
{
"math_id": 15,
"text": "p(y_t|x_t)"
},
{
"math_id": 16,
"text": "p(x_0)"
},
{
"math_id": 17,
"text": "\\{x_{1:t-1}\\}"
},
{
"math_id": 18,
"text": "\\alpha(x_t) = p(x_t,y_{1:t}) = \\sum_{x_{t-1}}p(x_t,x_{t-1},y_{1:t})"
},
{
"math_id": 19,
"text": "p(x_t,x_{t-1},y_{1:t})"
},
{
"math_id": 20,
"text": "\\alpha(x_t) = \\sum_{x_{t-1}}p(y_t|x_t,x_{t-1},y_{1:t-1})p(x_t|x_{t-1},y_{1:t-1})p(x_{t-1},y_{1:t-1})"
},
{
"math_id": 21,
"text": "x_{t-1}"
},
{
"math_id": 22,
"text": "\\alpha(x_t) = p(y_t|x_t)\\sum_{x_{t-1}}p(x_t|x_{t-1})\\alpha(x_{t-1})"
},
{
"math_id": 23,
"text": "\\alpha(x_t)"
},
{
"math_id": 24,
"text": "\\alpha(x_{t-1})"
},
{
"math_id": 25,
"text": "a_{ij}=p(x_t=i|x_{t-1}=j)"
},
{
"math_id": 26,
"text": "b_{ij}=p(y_t=i|x_t=j)"
},
{
"math_id": 27,
"text": "\\mathbf{\\alpha}_t = \\mathbf{b}_t^T \\odot \\mathbf{A} \\mathbf{\\alpha}_{t-1}"
},
{
"math_id": 28,
"text": "\\mathbf{A} = [a_{ij}]"
},
{
"math_id": 29,
"text": "\\mathbf{b}_t"
},
{
"math_id": 30,
"text": "\\mathbf{B} = [b_{ij}]"
},
{
"math_id": 31,
"text": "y_t = i"
},
{
"math_id": 32,
"text": "\\mathbf{\\alpha}_t = [\\alpha(x_t=1),\\ldots, \\alpha(x_t=n)]^T"
},
{
"math_id": 33,
"text": "\\odot"
},
{
"math_id": 34,
"text": "\\mathbf{A} \\mathbf{\\alpha}_{t-1}"
},
{
"math_id": 35,
"text": "x_0"
},
{
"math_id": 36,
"text": "\\alpha(x_0) = p(y_0|x_0)p(x_0)"
},
{
"math_id": 37,
"text": "\\alpha(x_t) = p(x_t,y_{1:t})"
},
{
"math_id": 38,
"text": "p(y_{1:t}) = \\sum_{x_t} p(x_t, y_{1:t}) = \\sum_{x_t} \\alpha(x_t)"
},
{
"math_id": 39,
"text": "p(x_t|y_{1:t}) = \\frac{p(x_t,y_{1:t})}{p(y_{1:t})} = \\frac{\\alpha(x_t)}{\\sum_{x_t} \\alpha(x_t)}."
},
{
"math_id": 40,
"text": "\\widehat{x}_t^{MAP} = \\arg \\max_{x_t} \\; p(x_t|y_{1:t}) = \\arg \\max_{x_t} \\; \\alpha(x_t),"
},
{
"math_id": 41,
"text": "\\widehat{x}_t^{MMSE} = \\mathbb{E}[x_t|y_{1:t}] = \\sum_{x_t} x_t p(x_t|y_{1:t}) = \\frac{\\sum_{x_t} x_t \\alpha(x_t)}{\\sum_{x_t} \\alpha(x_t)}."
},
{
"math_id": 42,
"text": "t = 0"
},
{
"math_id": 43,
"text": "p(y_t|x_t) "
},
{
"math_id": 44,
"text": "y_{1:T}"
},
{
"math_id": 45,
"text": "\\alpha(x_0)"
},
{
"math_id": 46,
"text": "t = 1"
},
{
"math_id": 47,
"text": "T"
},
{
"math_id": 48,
"text": "p(x_T|y_{1:T})= \\frac{\\alpha(x_T)}{\\sum_{x_T} \\alpha(x_T)}"
},
{
"math_id": 49,
"text": "3^3=27"
},
{
"math_id": 50,
"text": "\\alpha(x_t) = p(x_t,y_{1:t}) = p(y_t|x_t)\\sum_{x_{t-1}}p(x_t|x_{t-1})\\alpha(x_{t-1})"
},
{
"math_id": 51,
"text": "\\alpha"
},
{
"math_id": 52,
"text": "\\Theta(nm^2)"
},
{
"math_id": 53,
"text": "m"
},
{
"math_id": 54,
"text": "n"
},
{
"math_id": 55,
"text": "\\Theta(nm^n)"
}
] | https://en.wikipedia.org/wiki?curid=1072943 |
10731502 | Calcium hexaboride | Chemical compound
Calcium hexaboride (sometimes calcium boride) is a compound of calcium and boron with the chemical formula CaB6. It is an important material due to its high electrical conductivity, hardness, chemical stability, and melting point. It is a black, lustrous, chemically inert powder with a low density. It has the cubic structure typical for metal hexaborides, with octahedral units of 6 boron atoms combined with calcium atoms. CaB6 and lanthanum-doped CaB6 both show weak ferromagnetic properties, which is a remarkable fact because calcium and boron are neither magnetic, nor have inner 3d or 4f electronic shells, which are usually required for ferromagnetism.
Properties.
CaB6 has been investigated in the past due to a variety of peculiar physical properties, such as superconductivity, valence fluctuation and Kondo effects. However, the most remarkable property of CaB6 is its ferromagnetism. It occurs at unexpectedly high temperature (600 K) and with low magnetic moment (below 0.07 formula_0 per atom). Proposed explanations for this high-temperature ferromagnetism include the ferromagnetic phase of a dilute electron gas, linkage to the presumed excitonic state in calcium boride, and external impurities on the surface of the sample. The impurities might include iron and nickel, probably coming from impurities in the boron used to prepare the sample.
CaB6 is insoluble in H2O, MeOH (methanol), and EtOH (ethanol) and dissolves slowly in acids. Its microhardness is 27 GPa, Knoop hardness is 2600 kg/mm2, Young modulus is 379 GPa, and electrical resistivity is greater than 2·10¹⁰ Ω·m for pure crystals. CaB6 is a semiconductor with an energy gap estimated as 1.0 eV. The low, semi-metallic conductivity of many CaB6 samples can be explained by unintentional doping due to impurities and possible non-stoichiometry.
Structural information.
The crystal structure of calcium hexaboride is a cubic lattice with calcium at the cell centre and compact, regular octahedra of boron atoms linked at the vertices by B-B bonds to give a three-dimensional boron network. Each calcium has 24 nearest-neighbor boron atoms. The calcium atoms are arranged in simple cubic packing so that there are holes between groups of eight calcium atoms situated at the vertices of a cube. The simple cubic structure is expanded by the introduction of the octahedral B6 groups, and the structure is a CsCl-like packing of the calcium and hexaboride groups. Another way of describing calcium hexaboride is as having a metal cation and B62− octahedral polymeric anions in a CsCl-type structure, where the calcium atoms occupy the Cs sites and the B6 octahedra the Cl sites. The Ca-B bond length is 3.05 Å and the B-B bond length is 1.7 Å.
43Ca NMR data contains δpeak at -56.0 ppm and δiso at -41.3 ppm, where δiso is taken as peak max + 0.85 width; the negative shift is due to the high coordination number.
Raman Data: Calcium hexaboride has three Raman peaks at 754.3, 1121.8, and 1246.9 cm−1 due to the active modes A1g, Eg, and T2g respectively.
Observed vibrational frequencies (cm−1): 1270 (strong) from A1g stretch, 1154 (med.) and 1125 (shoulder) from Eg stretch, 526, 520, 485, and 470 from F1g rotation, 775 (strong) and 762 (shoulder) from F2g bend, 1125 (strong) and 1095 (weak) from F1u bend, 330 and 250 from F1u translation, and 880 (med.) and 779 from F2u bend.
Preparation.
Calcium hexaboride can be prepared by the reduction of calcium oxide and boron oxide with magnesium:
CaO + 3 B2O3 + 10 Mg → CaB6 + 10 MgO
Other methods of producing CaB6 powder include:
Ca + 6B → CaB6
Ca(OH)2 + 7B → CaB6 + BO(g) + H2O(g)
CaCl2 + 6NaBH4 → CaB6 + 2NaCl + 12H2 + 4Na
This last route results in relatively poor quality material.
Uses.
Calcium hexaboride is used in the manufacturing of boron-alloyed steel and as a deoxidation agent in production of oxygen-free copper. The latter results in higher conductivity than conventionally phosphorus-deoxidized copper owing to the low solubility of boron in copper. CaB6 can also serve as a high temperature material, surface protection, abrasives, tools, and wear resistant material.
CaB6 is highly conductive, has a low work function, and thus can be used as a hot cathode material. When used at elevated temperature, calcium hexaboride will oxidize, degrading its properties and shortening its usable lifespan.
CaB6 is also a promising candidate for n-type thermoelectric materials, because its power factor is larger than or comparable to that of common thermoelectric materials Bi2Te3 and PbTe.
CaB6 can also be used as an antioxidant in carbon-bonded refractories.
Precautions.
Calcium hexaboride is irritating to the eyes, skin, and respiratory system. This product should be handled with proper protective eyewear and clothing. Never put calcium hexaboride down the drain or add water to it.
References.
| [
{
"math_id": 0,
"text": "\\mu_\\mathrm{B}"
}
] | https://en.wikipedia.org/wiki?curid=10730216 |
1073133 | Marcel Riesz | Hungarian mathematician
Marcel Riesz (16 November 1886 – 4 September 1969) was a Hungarian mathematician, known for work on summation methods, potential theory, and other parts of analysis, as well as number theory, partial differential equations, and Clifford algebras. He spent most of his career in Lund, Sweden.
Marcel is the younger brother of Frigyes Riesz, who was also an important mathematician and at times they worked together (see F. and M. Riesz theorem).
Biography.
Marcel Riesz was born in Győr, Austria-Hungary. He was the younger brother of the mathematician Frigyes Riesz. In 1904, he won the Loránd Eötvös competition. While a student at Budapest University, he also studied in Göttingen, and he spent the academic year 1910–11 in Paris. Earlier, in 1908, he attended the International Congress of Mathematicians in Rome. There he met Gösta Mittag-Leffler; three years later, Mittag-Leffler would invite Riesz to come to Sweden.
Riesz obtained his PhD at Eötvös Loránd University under the supervision of Lipót Fejér. In 1911, he moved to Sweden, where from 1911 to 1925 he taught at Stockholm University.
From 1926 to 1952, he was a professor at Lund University. According to Lars Gårding, Riesz arrived in Lund as a renowned star of mathematics, and for a time his appointment may have seemed like an exile. Indeed, there was no established school of mathematics in Lund at the time. However, Riesz managed to turn the tide and make the academic atmosphere more active.
After retiring from Lund University, he spent 10 years at universities in the United States, working as a visiting research professor in Maryland, Chicago, and elsewhere.
After ten years of intense work with little rest, he suffered a breakdown. Riesz returned to Lund in 1962. After a long illness, he died there in 1969.
Riesz was elected a member of the Royal Swedish Academy of Sciences in 1936.
Mathematical work.
Classical analysis.
The work of Riesz as a student of Fejér in Budapest was devoted to trigonometric series:
formula_0
One of his results states that if
formula_1
and if the Fejer means of the series tend to zero, then all the coefficients "a""n" and "b""n" are zero.
His results on summability of trigonometric series include a generalisation of Fejér's theorem to Cesàro means of arbitrary order. He also studied the summability of power and Dirichlet series, and coauthored a book on the latter with G.H. Hardy.
In 1916, he introduced the Riesz interpolation formula for trigonometric polynomials, which allowed him to give a new proof of Bernstein's inequality.
He also introduced the Riesz function Riesz("x"), and showed that the Riemann hypothesis is equivalent to the bound Riesz("x") = O("x"^(1/4 + "ε")) as "x" → ∞, for any "ε" > 0.
Together with his brother Frigyes Riesz, he proved the F. and M. Riesz theorem, which implies, in particular, that if "μ" is a complex measure on the unit circle such that
formula_2
then the variation |"μ"| of "μ" and the Lebesgue measure on the circle are mutually absolutely continuous.
Functional-analytic methods.
Part of the analytic work of Riesz in the 1920s used methods of functional analysis.
In the early 1920s, he worked on the moment problem, to which he introduced the operator-theoretic approach by proving the Riesz extension theorem (which predated the closely related Hahn–Banach theorem).
Later, he devised an interpolation theorem to show that the Hilbert transform is a bounded operator in "L""p" (1 < "p" < ∞). The generalisation of the interpolation theorem by his student Olaf Thorin is now known as the Riesz–Thorin theorem.
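The boundedness claim can be probed numerically on the circle. A rough sketch, implementing a discrete periodic Hilbert transform as the Fourier multiplier −i·sgn("n") (the concrete discretization is an added assumption; the observed ratios are empirical illustrations only, not the sharp constants of the theorem):

```python
# Rough numerical illustration: the periodic, discrete Hilbert transform (conjugate
# function), realized as the Fourier multiplier -i*sgn(n), stays bounded on L^p.
# Random trigonometric data only probe the bound; they do not attain the sharp constant.
import numpy as np

rng = np.random.default_rng(0)
N = 1024  # number of sample points on the circle

def hilbert_transform(f):
    """Multiply the discrete Fourier coefficients by -i*sign(frequency)."""
    c = np.fft.fft(f)
    freqs = np.fft.fftfreq(N) * N          # integer frequencies 0, 1, ..., -1
    return np.real(np.fft.ifft(-1j * np.sign(freqs) * c))

def lp_norm(f, p):
    return np.mean(np.abs(f) ** p) ** (1.0 / p)

for p in (1.5, 2.0, 4.0):
    worst = 0.0
    for _ in range(200):
        f = rng.standard_normal(N)
        worst = max(worst, lp_norm(hilbert_transform(f), p) / lp_norm(f, p))
    print(f"p = {p}:  largest observed ||Hf||_p / ||f||_p = {worst:.3f}")
```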
Riesz also established, independently of Andrey Kolmogorov, what is now called the "Kolmogorov–Riesz compactness criterion" in "L""p": a subset "K" ⊂ "L""p"(R"n") is precompact if and only if the following three conditions hold: (a) "K" is bounded;
(b) for every "ε" > 0 there exists "R" > 0 so that
formula_3
for every "f" ∈ "K";
(c) for every "ε" > 0 there exists "ρ" > 0 so that
formula_4
for every "y" ∈ R"n" with |"y"| < "ρ", and every "f" ∈ "K".
Potential theory, PDE, and Clifford algebras.
After 1930, the interests of Riesz shifted to potential theory and partial differential equations. He made use of "generalised potentials", generalisations of the Riemann–Liouville integral. In particular, Riesz discovered the Riesz potential, a generalisation of the Riemann–Liouville integral to dimension higher than one.
In the 1940s and 1950s, Riesz worked on Clifford algebras. His 1958 lecture notes, the complete version of which was only published in 1993, were dubbed by the physicist David Hestenes "the midwife of the rebirth" of Clifford algebras.
Students.
Riesz's doctoral students in Stockholm include Harald Cramér and Einar Carl Hille. In Lund, Riesz supervised the theses of Otto Frostman, Lars Gårding, Lars Hörmander, and Olof Thorin.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\frac{a_0}{2} + \\sum_{n=1}^\\infty \\left\\{ a_n \\cos (nx) + b_n \\sin(nx) \\right\\}.\\, "
},
{
"math_id": 1,
"text": " \\sum_{n=1}^\\infty \\frac{|a_n|+|b_n|}{n^2} < \\infty,\\, "
},
{
"math_id": 2,
"text": " \\int z^n d\\mu(z) = 0, n=1,2,3\\cdots,\\, "
},
{
"math_id": 3,
"text": " \\int_{|x|>R} |f(x)|^p dx < \\epsilon^p\\,"
},
{
"math_id": 4,
"text": " \\int_{\\mathbb{R}^n} |f(x+y)-f(x)|^p dx < \\epsilon^p\\,"
}
] | https://en.wikipedia.org/wiki?curid=1073133 |
10731502 | Honeycomb structure | Natural or man-made structures that have the geometry of a honeycomb
Honeycomb structures are natural or man-made structures that have the geometry of a honeycomb, which allows the minimization of the amount of material used, reaching minimal weight and minimal material cost. The geometry of honeycomb structures can vary widely, but the common feature of all such structures is an array of hollow cells formed between thin vertical walls. The cells are often columnar and hexagonal in shape. A honeycomb-shaped structure provides a material with minimal density and relatively high out-of-plane compression and out-of-plane shear properties.
Man-made honeycomb structural materials are commonly made by layering a honeycomb material between two thin layers that provide strength in tension. This forms a plate-like assembly. Honeycomb materials are widely used where flat or slightly curved surfaces are needed and their high specific strength is valuable. They are widely used in the aerospace industry for this reason, and honeycomb materials in aluminum, fibreglass and advanced composite materials have been featured in aircraft and rockets since the 1950s. They can also be found in many other fields, from packaging materials in the form of paper-based honeycomb cardboard, to sporting goods like skis and snowboards.
Introduction.
Natural honeycomb structures include beehives, honeycomb weathering in rocks, tripe, and bone.
Man-made honeycomb structures include sandwich-structured composites with honeycomb cores. Man-made honeycomb structures are manufactured using a variety of different materials, depending on the intended application and required characteristics: from paper or thermoplastics, used where low strength and stiffness suffice for low-load applications, to aluminum or fiber-reinforced plastics, which provide high strength and stiffness for high-performance applications. The strength of laminated or sandwich panels depends on the size of the panel, the facing material used, and the number or density of the honeycomb cells within it. Honeycomb composites are used widely in many industries, from aerospace, automotive and furniture to packaging and logistics.
The material takes its name from its visual resemblance to a bee's honeycomb – a hexagonal sheet structure.
History.
The hexagonal comb of the honey bee has been admired and wondered about since ancient times. The first man-made honeycomb, according to Greek mythology, is said to have been manufactured by Daedalus from gold by lost wax casting more than 3000 years ago. Marcus Varro reports that the Greek geometers Euclid and Zenodorus found that the hexagon shape makes the most efficient use of space and building materials. The interior ribbing and hidden chambers in the dome of the Pantheon in Rome are an early example of a honeycomb structure.
Galileo Galilei discussed in 1638 the resistance of hollow solids: "Art, and nature even more, makes use of these in thousands of operations in which robustness is increased without adding weight, as is seen in the bones of birds and in many stalks that are light and very resistant to bending and breaking."
Robert Hooke discovered in 1665 that the natural cellular structure of cork is similar to the hexagonal honeybee comb, and Charles Darwin stated in 1859 that "the comb of the hive-bee, as far as we can see, is absolutely perfect in economizing labour and wax".
The first paper honeycomb structures might have been made by the Chinese 2000 years ago for ornaments, but no reference for this has been found. Paper honeycombs and the expansion production process were invented in Halle/Saale, Germany, by Hans Heilbrun in 1901 for decorative applications. The first honeycomb structures made from corrugated metal sheets were proposed for beekeeping in 1890. For the same purpose, as foundation sheets to harvest more honey, a honeycomb moulding process using a paper-paste glue mixture was patented in 1878. The three basic techniques for honeycomb production that are still used today—expansion, corrugation and moulding—were already developed by 1901 for non-sandwich applications.
Hugo Junkers first explored the idea of a honeycomb core within a laminate structure. He proposed and patented the first honeycomb cores for aircraft application in 1915. He described in detail his concept to replace the fabric-covered aircraft structures with metal sheets and reasoned that a metal sheet can also be loaded in compression if it is supported at very small intervals by arranging side by side a series of square or rectangular cells or triangular or hexagonal hollow bodies. The problem of bonding a continuous skin to cellular cores later led Junkers to the open corrugated structure, which could be riveted or welded together.
The first use of honeycomb structures for structural applications was independently proposed for building applications and published as early as 1914. In 1934 Edward G. Budd patented a welded steel honeycomb sandwich panel made from corrugated metal sheets, and in 1937 Claude Dornier aimed to solve the core-skin bonding problem by rolling or pressing a skin which is in a plastic state into the core cell walls. The first successful structural adhesive bonding of honeycomb sandwich structures was achieved by Norman de Bruyne of Aero Research Limited, who patented an adhesive with the right viscosity to form resin fillets on the honeycomb core in 1938. The North American XB-70 Valkyrie made extensive use of stainless steel honeycomb panels, using a brazing process developed by the manufacturer.
A summary of the important developments in the history of honeycomb technology is given below:
Manufacture.
The three traditional honeycomb production techniques, expansion, corrugation, and moulding, were all developed by 1901 for non-sandwich applications. For decorative applications the expanded honeycomb production reached a remarkable degree of automation in the first decade of the 20th century.
Today honeycomb cores are manufactured via the expansion process and the corrugation process from composite materials such as glass-reinforced plastic (also known as fiberglass), carbon fiber reinforced plastic, Nomex aramide paper reinforced plastic, or from a metal (usually aluminum).
Honeycombs from metals (like aluminum) are today produced by the expansion process. Continuous processes of folding honeycombs from a single aluminum sheet after cutting slits had already been developed around 1920.
Continuous in-line production of metal honeycomb can be done from metal rolls by cutting and bending.
Thermoplastic honeycomb cores (usually made from polypropylene) are usually produced by extrusion, via a block of extruded profiles or extruded tubes from which the honeycomb sheets are sliced.
Recently a new, unique process to produce thermoplastic honeycombs has been implemented, allowing continuous production of a honeycomb core as well as in-line production of honeycombs with direct lamination of skins into a cost-efficient sandwich panel.
Applications.
Composite honeycomb structures have been used in numerous engineering and scientific applications.
More recent developments show that honeycomb structures are also advantageous in applications involving nanohole arrays in anodized alumina, microporous arrays in polymer thin films, activated carbon honeycombs, and photonic band gap honeycomb structures.
Aerodynamics.
A honeycomb mesh is often used in aerodynamics to reduce or to create wind turbulence. It is also used to obtain a standard profile in a wind tunnel (temperature, flow speed). A major factor in choosing the right mesh is the length ratio (length vs honeycomb cell diameter) "L/d".
Length ratio < 1:
Honeycomb meshes of low length ratio can be used on vehicle front grilles. Besides the aesthetic reasons, these meshes are used as screens to obtain a uniform flow profile and to reduce the intensity of turbulence.
Length ratio ≫ 1:
Honeycomb meshes of large length ratio reduce lateral turbulence and eddies of the flow. Early wind tunnels used them with no screens; unfortunately, this method introduced high turbulence intensity in the test section. Most modern tunnels use both honeycomb and screens.
While aluminium honeycombs are in common use in the industry, other materials are offered for specific applications. Users of metal structures should take care to remove burrs, as they can introduce additional turbulence. Polycarbonate structures are a low-cost alternative.
The honeycombed, screened center of this open-circuit air intake for Langley's first wind tunnel ensured a steady, non-turbulent flow of air. Two mechanics pose near the entrance end of the actual tunnel, where air was pulled into the test section through a honeycomb arrangement to smooth the flow.
Honeycomb is not the only cross-section available in order to reduce eddies in an airflow. Square, rectangular, circular and hexagonal cross-sections are other choices available, although honeycomb is generally the preferred choice.
Properties.
In combination with two skins applied on the honeycomb, the structure offers a sandwich panel with excellent rigidity at minimal weight. The behavior of the honeycomb structures is orthotropic, meaning the panels react differently depending on the orientation of the structure. It is therefore necessary to distinguish between the directions of symmetry, the so-called L and W-direction. The L-direction is the strongest and the stiffest direction. The weakest direction is at 60° from the L-direction (in the case of a regular hexagon) and the most compliant direction is the W-direction.
Another important property of honeycomb sandwich core is its compression strength. Due to the efficient hexagonal configuration, where walls support each other, compression strength of honeycomb cores is typically higher (at same weight) compared to other sandwich core structures such as, for instance, foam cores or corrugated cores.
The mechanical properties of honeycombs depend on their cell geometry, the properties of the material from which the honeycomb is constructed (often referred to as the solid), which include the Young's modulus, yield stress, and fracture stress of the material, and the relative density of the honeycomb (the density of the honeycomb normalized by that of the solid, ρ*/ρs). The ratios of the effective elastic moduli to the solid's Young's modulus, "e.g.", formula_0 and formula_1, of low-density honeycombs are independent of the solid. The mechanical properties of honeycombs will also vary based on the direction in which the load is applied.
In-plane loading: Under in-plane loading, it is often assumed that the wall thickness of the honeycomb is small compared to the length of the wall. For a regular honeycomb, the relative density is proportional to the wall thickness to wall length ratio (t/L), and the Young's modulus is proportional to (t/L)^3. Under a high enough compressive load, the honeycomb reaches a critical stress and fails due to one of the following mechanisms – elastic buckling, plastic yielding, or brittle crushing. The mode of failure depends on the material of the solid which the honeycomb is made of. Elastic buckling of the cell walls is the mode of failure for elastomeric materials, ductile materials fail due to plastic yielding, and brittle crushing is the mode of failure when the solid is brittle. The elastic buckling stress is proportional to the relative density cubed, the plastic collapse stress is proportional to the relative density squared, and the brittle crushing stress is proportional to the relative density squared. Following the critical stress and failure of the material, a plateau stress is observed, in which increases in strain occur while the stress of the honeycomb remains roughly constant. Once a certain strain is reached, the material begins to undergo densification as further compression pushes the cell walls together.
Out-of-plane loading: Under out-of-plane loading, the out-of-plane Young's modulus of a regular hexagonal honeycomb is proportional to the relative density of the honeycomb. The elastic buckling stress is proportional to (t/L)^3, while the plastic buckling stress is proportional to (t/L)^(5/3).
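These scaling relations can be made concrete in a short numerical sketch. The prefactors used below (relative density ≈ (2/√3)(t/L) and in-plane modulus ratio ≈ 2.3(t/L)^3 for a regular hexagon) are commonly quoted Gibson–Ashby-type estimates and are assumptions added here, not values stated in the text above:

```python
# Minimal sketch of the scaling relations for a regular hexagonal honeycomb.
# Assumed prefactors (Gibson-Ashby type estimates, not stated in the text above):
#   relative density         rho*/rho_s ~ (2/sqrt(3)) * (t/L)
#   in-plane Young's modulus E*/E_s     ~ 2.3 * (t/L)**3
#   out-of-plane modulus     E3*/E_s    ~ rho*/rho_s   (proportional to relative density)
import math

def regular_hexagon_estimates(t_over_L):
    rel_density = 2.0 / math.sqrt(3.0) * t_over_L
    in_plane_modulus_ratio = 2.3 * t_over_L ** 3
    out_of_plane_modulus_ratio = rel_density
    return rel_density, in_plane_modulus_ratio, out_of_plane_modulus_ratio

for t_over_L in (0.01, 0.05, 0.1):
    rho, e_in, e_out = regular_hexagon_estimates(t_over_L)
    print(f"t/L = {t_over_L}:  rho*/rho_s = {rho:.4f},  "
          f"E*_in/E_s = {e_in:.2e},  E*_out/E_s = {e_out:.4f}")
```

The printout illustrates why honeycomb cores are used in sandwich panels: for thin walls, the out-of-plane stiffness (linear in t/L) is orders of magnitude larger than the in-plane stiffness (cubic in t/L).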
The shape of the honeycomb cell is often varied to meet different engineering applications. Shapes that are commonly used besides the regular hexagonal cell include triangular cells, square cells, circular-cored hexagonal cells, and circular-cored square cells. The relative densities of these cells will depend on their new geometry.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\kappa^*/E_\\text{s}"
},
{
"math_id": 1,
"text": "E^* / E_\\text{s}"
}
] | https://en.wikipedia.org/wiki?curid=10731502 |
1073230 | Eilenberg–MacLane space | Topological space with only one nontrivial homotopy group
In mathematics, specifically algebraic topology, an Eilenberg–MacLane space is a topological space with a single nontrivial homotopy group.
Let "G" be a group and "n" a positive integer. A connected topological space "X" is called an Eilenberg–MacLane space of type formula_0, if it has "n"-th homotopy group formula_1 isomorphic to "G" and all other homotopy groups trivial. Assuming that "G" is abelian in the case that formula_2, Eilenberg–MacLane spaces of type formula_0 always exist, and are all weak homotopy equivalent. Thus, one may consider formula_0 as referring to a weak homotopy equivalence class of spaces. It is common to refer to any representative as "a formula_0" or as "a model of formula_0". Moreover, it is common to assume that this space is a CW-complex (which is always possible via CW approximation).
The name is derived from Samuel Eilenberg and Saunders Mac Lane, who introduced such spaces in the late 1940s.
As such, an Eilenberg–MacLane space is a special kind of topological space that in homotopy theory can be regarded as a building block for CW-complexes via fibrations in a Postnikov system. These spaces are important in many contexts in algebraic topology, including computations of homotopy groups of spheres, definition of cohomology operations, and for having a strong connection to singular cohomology.
A generalised Eilenberg–MacLane space is a space which has the homotopy type of a product of Eilenberg–MacLane spaces
formula_3.
Examples.
The unit circle formula_4 is a formula_5. The infinite-dimensional complex projective space formula_6 is a model of formula_7. The infinite-dimensional real projective space formula_8 is a formula_9. The wedge sum of "k" circles formula_10 is a formula_11, where formula_12 is the free group on "k" generators. The complement of any knot in the 3-sphere formula_13 is a formula_14; this is the "asphericity of knots", a theorem of Christos Papakyriakopoulos. Any compact, connected, non-positively curved Riemannian manifold "M" is a formula_15, where formula_16 (this follows from the Cartan–Hadamard theorem). The infinite lens space formula_17, given by the quotient of formula_18 by the free action formula_19 for formula_20, is a formula_21. The configuration space of formula_22 distinct ordered points in the plane formula_25 is a formula_23, where formula_24 is the pure braid group on formula_22 strands; correspondingly, the configuration space of formula_22 unordered points in the plane is a formula_26, where formula_27 denotes the braid group on formula_22 strands. The infinite symmetric product formula_28 of the "n"-sphere is a formula_29; more generally, formula_30 is a formula_31 for any Moore space formula_32.
Some further elementary examples can be constructed from these by using the fact that the product formula_33 is formula_34. For instance the "n"-dimensional torus formula_35 is a formula_36.
Remark on constructing Eilenberg–MacLane spaces.
For formula_37 and formula_38 an arbitrary group the construction of formula_39 is identical to that of the classifying space of the group formula_38. Note that if G has a torsion element, then every CW-complex of type K(G,1) has to be infinite-dimensional.
There are multiple techniques for constructing higher Eilenberg–MacLane spaces. One of these is to construct a Moore space formula_40 for an abelian group formula_41: take the wedge of "n"-spheres, one for each generator of the group "A", and realise the relations between these generators by attaching "(n+1)"-cells via corresponding maps in formula_42 of said wedge sum. Note that the lower homotopy groups formula_43 are already trivial by construction. Now iteratively kill all higher homotopy groups formula_44 by successively attaching cells of dimension greater than formula_45, and define formula_46 as the direct limit under inclusion of this iteration.
Another useful technique is to use the geometric realization of simplicial abelian groups. This gives an explicit presentation of simplicial abelian groups which represent Eilenberg–MacLane spaces.
Another simplicial construction, in terms of classifying spaces and universal bundles, is given in J. Peter May's book.
Since taking the loop space lowers the homotopy groups by one slot, we have a canonical homotopy equivalence formula_47, hence there is a fibration sequence
formula_48.
Note that this is not a cofibration sequence ― the space formula_49 is not the homotopy cofiber of formula_50.
This fibration sequence can be used to study the cohomology of formula_49 from formula_0 using the Leray spectral sequence. This was exploited by Jean-Pierre Serre while he studied the homotopy groups of spheres using the Postnikov system and spectral sequences.
Properties of Eilenberg–MacLane spaces.
Bijection between homotopy classes of maps and cohomology.
An important property of formula_51's is that for any abelian group "G", and any based CW-complex "X", the set formula_52 of based homotopy classes of based maps from "X" to formula_53 is in natural bijection with the "n"-th singular cohomology group formula_54 of the space "X". Thus one says that the formula_55 are representing spaces for singular cohomology with coefficients in "G". Since
formula_56
there is a distinguished element formula_57 corresponding to the identity. The above bijection is given by the pullback of that element formula_58. This is similar to the Yoneda lemma of category theory.
A constructive proof of this theorem can be found in the references; another proof, which makes use of the relation between omega-spectra and generalized reduced cohomology theories, can be found there as well, and the main idea is sketched later in this article.
Loop spaces / Omega spectra.
The loop space of an Eilenberg–MacLane space is again an Eilenberg–MacLane space: formula_59. Further there is an adjoint relation between the loop-space and the reduced suspension: formula_60, which gives formula_61 the structure of an abelian group, where the operation is the concatenation of loops. This makes the bijection formula_62 mentioned above a group isomorphism.
Also this property implies that Eilenberg–MacLane spaces with various "n" form an omega-spectrum, called an "Eilenberg–MacLane spectrum". This spectrum defines via formula_63 a reduced cohomology theory on based CW-complexes and for any reduced cohomology theory formula_64 on CW-complexes with formula_65 for formula_66 there is a natural isomorphism formula_67, where formula_68 denotes reduced singular cohomology. Therefore these two cohomology theories coincide.
In a more general context, Brown representability says that every reduced cohomology theory on based CW-complexes comes from an omega-spectrum.
Relation with Homology.
For a fixed abelian group formula_38 there are maps on the stable homotopy groups
formula_69
induced by the map formula_70. Taking the direct limit over these maps, one can verify that this defines a reduced homology theory
formula_71
on CW complexes. Since formula_72 vanishes for formula_73, formula_74 agrees with reduced singular homology formula_75 with coefficients in G on CW-complexes.
Functoriality.
It follows from the universal coefficient theorem for cohomology that the Eilenberg–MacLane space is a "quasi-functor" of the group; that is, for each positive integer formula_22 if formula_76 is any homomorphism of abelian groups, then there is a non-empty set
formula_77
satisfying formula_78
where formula_79 denotes the homotopy class of a continuous map formula_80 and formula_81
Relation with Postnikov/Whitehead tower.
Every connected CW-complex formula_82 possesses a Postnikov tower, that is an inverse system of spaces:
formula_83
such that for every formula_84: the map formula_85 induces an isomorphism on formula_86 for formula_87; formula_88 for formula_89; and the map formula_90 is a fibration with fiber formula_91.
Dually there exists a Whitehead tower, which is a sequence of CW-complexes:
formula_92
such that for every formula_84: the map formula_93 induces an isomorphism on formula_86 for formula_89; the space formula_94 is formula_84-connected; and the map formula_95 is a fibration with fiber formula_96.
With the help of Serre spectral sequences, computations of higher homotopy groups of spheres can be made. For instance, formula_97 and formula_98 can be computed using a Whitehead tower of formula_99; more generally, the groups formula_100 can be computed using Postnikov systems.
Cohomology operations.
For fixed natural numbers "m,n" and abelian groups "G,H" there exists a bijection between the set of all cohomology operations formula_101 and formula_102, defined by formula_103, where formula_104 is a fundamental class.
As a result, cohomology operations cannot decrease the degree of the cohomology groups, and degree-preserving cohomology operations correspond to coefficient homomorphisms formula_105. This follows from the universal coefficient theorem for cohomology and the ("m"−1)-connectedness of formula_106.
Some interesting examples of cohomology operations are Steenrod squares and powers, when formula_107 are finite cyclic groups. When studying those, the importance of the cohomology of formula_108 with coefficients in formula_109 becomes apparent quickly; some extensive tables of those groups can be found in the references.
Group (co)homology.
One can define the group (co)homology of G with coefficients in the group A as the singular (co)homology of the Eilenberg-MacLane space formula_39 with coefficients in A.
Further Applications.
The loop space construction described above is used in string theory to obtain, for example, the string group, the fivebrane group and so on, as the Whitehead tower arising from the short exact sequence
formula_110
with formula_111 the string group, and formula_112 the spin group. The relevance of formula_7 lies in the fact that there are the homotopy equivalences
formula_113
for the classifying space formula_114, and the fact formula_115. Notice that because the complex spin group is a group extension
formula_116,
the String group can be thought of as a "higher" complex spin group extension, in the sense of higher group theory, since the space formula_7 is an example of a higher group. It can be thought of as the topological realization of the groupoid formula_117 whose object is a single point and whose morphisms are the group formula_118. Because of these homotopical properties, the construction generalizes: any given space formula_119 can be used to start a short exact sequence that kills the homotopy group formula_120 in a topological group.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
Cartan seminar and applications.
The Cartan seminar contains many fundamental results about Eilenberg–MacLane spaces, including their homology and cohomology, and applications for calculating the homotopy groups of spheres. | [
{
"math_id": 0,
"text": "K(G,n)"
},
{
"math_id": 1,
"text": "\\pi_n(X)"
},
{
"math_id": 2,
"text": "n > 1"
},
{
"math_id": 3,
"text": "\\prod_{m}K(G_m,m)"
},
{
"math_id": 4,
"text": "S^1"
},
{
"math_id": 5,
"text": "K(\\Z,1)"
},
{
"math_id": 6,
"text": "\\mathbb{CP}^{\\infty}"
},
{
"math_id": 7,
"text": "K(\\Z,2)"
},
{
"math_id": 8,
"text": "\\mathbb{RP}^{\\infty}"
},
{
"math_id": 9,
"text": "K(\\Z/2,1)"
},
{
"math_id": 10,
"text": "\\textstyle\\bigvee_{i=1}^k S^1"
},
{
"math_id": 11,
"text": "K(F_k,1)"
},
{
"math_id": 12,
"text": "F_k"
},
{
"math_id": 13,
"text": "S^3"
},
{
"math_id": 14,
"text": "K(G,1)"
},
{
"math_id": 15,
"text": "K(\\Gamma,1)"
},
{
"math_id": 16,
"text": "\\Gamma=\\pi_1(M)"
},
{
"math_id": 17,
"text": " L(\\infty, q)"
},
{
"math_id": 18,
"text": "S^\\infty"
},
{
"math_id": 19,
"text": " (z \\mapsto e^{2\\pi i m/q}z) "
},
{
"math_id": 20,
"text": " m \\in \\Z/q "
},
{
"math_id": 21,
"text": "K(\\mathbb{Z}/q,1)"
},
{
"math_id": 22,
"text": "n"
},
{
"math_id": 23,
"text": "K(P_n,1)"
},
{
"math_id": 24,
"text": "P_n"
},
{
"math_id": 25,
"text": " \\mathbb{R}^2 "
},
{
"math_id": 26,
"text": "K(B_n,1)"
},
{
"math_id": 27,
"text": "B_n"
},
{
"math_id": 28,
"text": " SP(S^n)"
},
{
"math_id": 29,
"text": "K(\\mathbb{Z},n)"
},
{
"math_id": 30,
"text": " SP(M(G,n)) "
},
{
"math_id": 31,
"text": " K(G,n) "
},
{
"math_id": 32,
"text": " M(G,n) "
},
{
"math_id": 33,
"text": "K(G,n) \\times K(H,n)"
},
{
"math_id": 34,
"text": "K(G\\times H,n)"
},
{
"math_id": 35,
"text": "\\mathbb{T}^n"
},
{
"math_id": 36,
"text": " K(\\mathbb{Z}^n, 1)"
},
{
"math_id": 37,
"text": " n = 1 "
},
{
"math_id": 38,
"text": " G "
},
{
"math_id": 39,
"text": " K(G,1) "
},
{
"math_id": 40,
"text": "M(A,n)"
},
{
"math_id": 41,
"text": "A"
},
{
"math_id": 42,
"text": " \\pi_n(\\bigvee S^n) "
},
{
"math_id": 43,
"text": "\\pi_{i < n} (M(A,n)) "
},
{
"math_id": 44,
"text": "\\pi_{i > n} (M(A,n)) "
},
{
"math_id": 45,
"text": " n + 1 "
},
{
"math_id": 46,
"text": " K(A,n) "
},
{
"math_id": 47,
"text": "K(G,n)\\simeq\\Omega K(G,n+1)"
},
{
"math_id": 48,
"text": "K(G,n) \\to * \\to K(G,n+1)"
},
{
"math_id": 49,
"text": "K(G,n+1)"
},
{
"math_id": 50,
"text": "K(G,n) \\to *"
},
{
"math_id": 51,
"text": "K(G, n)"
},
{
"math_id": 52,
"text": "[X, K(G,n)]"
},
{
"math_id": 53,
"text": " K(G,n)"
},
{
"math_id": 54,
"text": "H^n(X, G)"
},
{
"math_id": 55,
"text": "K(G,n)'s"
},
{
"math_id": 56,
"text": "\\begin{array}{rcl}\nH^n(K(G,n),G) &=& \\operatorname{Hom}(H_n(K(G,n);\\Z), G) \\\\\n&=& \\operatorname{Hom}(\\pi_n(K(G,n)), G) \\\\\n&=& \\operatorname{Hom}(G,G),\n\\end{array}"
},
{
"math_id": 57,
"text": "u \\in H^n(K(G,n),G)"
},
{
"math_id": 58,
"text": " f \\mapsto f^*u "
},
{
"math_id": 59,
"text": "\\Omega K(G,n) \\cong K(G,n-1)"
},
{
"math_id": 60,
"text": " [\\Sigma X, Y] = [X,\\Omega Y] "
},
{
"math_id": 61,
"text": "[X,K(G,n)] \\cong [X,\\Omega^2K(G,n+2)] "
},
{
"math_id": 62,
"text": " [X, K(G,n)] \\to H^n(X, G) "
},
{
"math_id": 63,
"text": " X \\mapsto h^n(X):= [X, K(G,n)] "
},
{
"math_id": 64,
"text": " h^* "
},
{
"math_id": 65,
"text": " h^n(S^0) = 0 "
},
{
"math_id": 66,
"text": " n \\neq 0"
},
{
"math_id": 67,
"text": " h^n(X) \\cong \\tilde{H}^n(X, h^0(S^0) "
},
{
"math_id": 68,
"text": " \\tilde{H^*} "
},
{
"math_id": 69,
"text": " \\pi_{q+n}^s(X \\wedge K(G,n)) \\cong \\pi_{q+n+1}^s(X \\wedge \\Sigma K(G,n)) \\to \\pi_{q+n+1}^s(X \\wedge K(G,n+1)) "
},
{
"math_id": 70,
"text": " \\Sigma K(G,n) \\to K(G,n+1)"
},
{
"math_id": 71,
"text": "h_q(X) = \\varinjlim _{n} \\pi_{q+n}^s(X \\wedge K(G,n)) "
},
{
"math_id": 72,
"text": " h_q(S^0) = \\varinjlim \\pi_{q+n}^s(K(G,n)) "
},
{
"math_id": 73,
"text": " q \\neq 0"
},
{
"math_id": 74,
"text": " h_* "
},
{
"math_id": 75,
"text": "\\tilde{H}_*(\\cdot,G) "
},
{
"math_id": 76,
"text": "a\\colon G \\to G'"
},
{
"math_id": 77,
"text": "K(a,n) = \\{[f]: f\\colon K(G,n) \\to K(G',n), H_n(f) = a\\},"
},
{
"math_id": 78,
"text": "K(a \\circ b,n) \\supset K(a,n) \\circ K(b,n) \\text{ and } 1 \\in K(1,n), "
},
{
"math_id": 79,
"text": "[f]"
},
{
"math_id": 80,
"text": "f"
},
{
"math_id": 81,
"text": "S \\circ T := \\{s \\circ t: s \\in S, t \\in T \\}."
},
{
"math_id": 82,
"text": " X "
},
{
"math_id": 83,
"text": "\\cdots \\to X_3 \\xrightarrow{p_3} X_2 \\xrightarrow{p_2} X_1 \\simeq K(\\pi_1(X), 1) "
},
{
"math_id": 84,
"text": " n "
},
{
"math_id": 85,
"text": " X \\to X_n "
},
{
"math_id": 86,
"text": " \\pi_i "
},
{
"math_id": 87,
"text": " i \\leq n"
},
{
"math_id": 88,
"text": " \\pi_i(X_n) = 0 "
},
{
"math_id": 89,
"text": " i > n "
},
{
"math_id": 90,
"text": " X_n \\xrightarrow{p_n} X_{n-1} "
},
{
"math_id": 91,
"text": " K(\\pi_n(X),n)"
},
{
"math_id": 92,
"text": "\\cdots \\to X_3 \\to X_2 \\to X_1 \\to X "
},
{
"math_id": 93,
"text": " X_n \\to X "
},
{
"math_id": 94,
"text": " X_n "
},
{
"math_id": 95,
"text": " X_n \\to X_{n-1}"
},
{
"math_id": 96,
"text": " K(\\pi_n(X), n-1) "
},
{
"math_id": 97,
"text": " \\pi_4(S^3) "
},
{
"math_id": 98,
"text": " \\pi_5(S^3) "
},
{
"math_id": 99,
"text": " S^3 "
},
{
"math_id": 100,
"text": " \\pi_{n+i}(S^n) \\ i \\leq 3 "
},
{
"math_id": 101,
"text": "\\Theta :H^m(\\cdot,G) \\to H^n(\\cdot,H) "
},
{
"math_id": 102,
"text": " H^n(K(G,m),H) "
},
{
"math_id": 103,
"text": " \\Theta \\mapsto \\Theta(\\alpha) "
},
{
"math_id": 104,
"text": " \\alpha \\in H^m(K(G,m),G) "
},
{
"math_id": 105,
"text": " \\operatorname{Hom}(G,H) "
},
{
"math_id": 106,
"text": " K(G,m) "
},
{
"math_id": 107,
"text": " G=H"
},
{
"math_id": 108,
"text": " K(\\Z /p ,n) "
},
{
"math_id": 109,
"text": " \\Z /p "
},
{
"math_id": 110,
"text": "0\\rightarrow K(\\Z,2)\\rightarrow \\operatorname{String}(n)\\rightarrow \\operatorname{Spin}(n)\\rightarrow 0"
},
{
"math_id": 111,
"text": "\\text{String}(n)"
},
{
"math_id": 112,
"text": "\\text{Spin}(n)"
},
{
"math_id": 113,
"text": "K(\\mathbb{Z},1) \\simeq U(1) \\simeq B\\Z"
},
{
"math_id": 114,
"text": "B\\Z"
},
{
"math_id": 115,
"text": "K(\\Z,2) \\simeq BU(1)"
},
{
"math_id": 116,
"text": "0\\to K(\\Z,1) \\to \\text{Spin}^\\Complex(n) \\to \\text{Spin}(n) \\to 0"
},
{
"math_id": 117,
"text": "\\mathbf{B}U(1)"
},
{
"math_id": 118,
"text": "U(1)"
},
{
"math_id": 119,
"text": "K(\\Z,n)"
},
{
"math_id": 120,
"text": "\\pi_{n+1}"
}
] | https://en.wikipedia.org/wiki?curid=1073230 |
1073567 | Sigma-additive set function | Mapping function
In mathematics, an additive set function is a function formula_0 mapping sets to numbers, with the property that its value on a union of two disjoint sets equals the sum of its values on these sets, namely, formula_1 If this additivity property holds for any two sets, then it also holds for any finite number of sets, namely, the function value on the union of "k" disjoint sets (where "k" is a finite number) equals the sum of its values on the sets. Therefore, an additive set function is also called a finitely additive set function (the terms are equivalent). However, a finitely additive set function might not have the additivity property for a union of an "infinite" number of sets. A σ-additive set function is a function that has the additivity property even for countably infinitely many sets, that is, formula_2
Additivity and sigma-additivity are particularly important properties of measures. They are abstractions of how intuitive properties of size (length, area, volume) of a set sum when considering multiple objects. Additivity is a weaker condition than σ-additivity; that is, σ-additivity implies additivity.
The term modular set function is equivalent to additive set function; see modularity below.
Additive (or finitely additive) set functions.
Let formula_3 be a set function defined on an algebra of sets formula_4 with values in formula_5 (see the extended real number line). The function formula_3 is called additive or finitely additive, if whenever formula_6 and formula_7 are disjoint sets in formula_8 then
formula_9
A consequence of this is that an additive function cannot take both formula_10 and formula_11 as values, for the expression formula_12 is undefined.
One can prove by mathematical induction that an additive function satisfies
formula_13
for any formula_14 disjoint sets in formula_15
σ-additive set functions.
Suppose that formula_4 is a σ-algebra. If for every sequence formula_16 of pairwise disjoint sets in formula_8
formula_17
holds then formula_3 is said to be countably additive or 𝜎-additive.
Every 𝜎-additive function is additive but not vice versa, as shown below.
τ-additive set functions.
Suppose that in addition to a sigma algebra formula_18 we have a topology formula_19 If for every directed family of measurable open sets formula_20
formula_21
we say that formula_3 is formula_22-additive. In particular, if formula_3 is inner regular (with respect to compact sets) then it is τ-additive.
Properties.
Useful properties of an additive set function formula_3 include the following.
Value of empty set.
Either formula_23 or formula_3 assigns formula_24 to all sets in its domain, or formula_3 assigns formula_10 to all sets in its domain. "Proof": additivity implies that for every set formula_25 formula_26 If formula_27 then this equality can be satisfied only by plus or minus infinity.
Monotonicity.
If formula_3 is non-negative and formula_28 then formula_29 That is, formula_3 is a monotone set function. Similarly, if formula_3 is non-positive and formula_28 then formula_30
Modularity.
A set function formula_3 on a family of sets formula_31 is called a modular set function and a valuation if whenever formula_25 formula_32 formula_33 and formula_34 are elements of formula_35 then
formula_36
The above property is called modularity and the argument below proves that additivity implies modularity.
Given formula_6 and formula_32 formula_37 "Proof": write formula_38 and formula_39 and formula_40 where all sets in the union are disjoint. Additivity implies that both sides of the equality equal formula_41
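As a concrete check, the counting measure on finite sets is additive, and the modularity identity can be verified exhaustively on a small universe. A minimal sketch (the six-point universe is an arbitrary illustrative choice):

```python
# Small sketch: the counting measure (cardinality) on finite sets is additive, and the
# modularity identity  mu(A | B) + mu(A & B) == mu(A) + mu(B)  can be verified directly
# over all pairs of subsets of a small universe.
import itertools

universe = set(range(6))
mu = len  # counting measure on finite sets

subsets = [set(c) for r in range(len(universe) + 1)
           for c in itertools.combinations(universe, r)]

assert all(mu(A | B) + mu(A & B) == mu(A) + mu(B)
           for A, B in itertools.product(subsets, repeat=2))
print("modularity holds for the counting measure on all subsets of", universe)
```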
However, the related properties of "submodularity" and "subadditivity" are not equivalent to each other.
Note that modularity has a different and unrelated meaning in the context of complex functions; see modular form.
Set difference.
If formula_28 and formula_42 is defined, then formula_43
Examples.
An example of a 𝜎-additive function is the function formula_3 defined over the power set of the real numbers, such that
formula_44
If formula_16 is a sequence of disjoint sets of real numbers, then either none of the sets contains 0, or precisely one of them does. In either case, the equality
formula_45
holds.
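A minimal sketch of this example, representing sets by membership predicates; only finite additivity on a few hand-picked disjoint sets is checked, since σ-additivity itself quantifies over all countable disjoint unions and cannot be verified mechanically:

```python
# Small sketch of the Dirac-type set function mu(A) = 1 if 0 is in A, else 0.
# Sets are represented by membership predicates; the test sets are hand picked.

def mu(A):
    return 1 if A(0) else 0

# Pairwise disjoint test sets: the intervals [n, n+1) for n = -3, ..., 2.
intervals = [(lambda n: (lambda x: n <= x < n + 1))(n) for n in range(-3, 3)]

def union(sets):
    return lambda x: any(A(x) for A in sets)

print("mu of the union:   ", mu(union(intervals)))              # -> 1
print("sum of the mu(A_n):", sum(mu(A) for A in intervals))     # -> 1, as additivity requires
```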
See measure and signed measure for more examples of 𝜎-additive functions.
A "charge" is defined to be a finitely additive set function that maps formula_46 to formula_47 (Cf. ba space for information about "bounded" charges, where we say a charge is "bounded" to mean its range is a bounded subset of "R".)
An additive function which is not σ-additive.
An example of an additive function which is not σ-additive is obtained by considering formula_3, defined over the Lebesgue sets of the real numbers formula_48 by the formula
formula_49
where formula_50 denotes the Lebesgue measure and formula_51 the Banach limit. It satisfies formula_52 and if formula_53 then formula_54
One can check that this function is additive by using the linearity of the limit. That this function is not σ-additive follows by considering the sequence of disjoint sets
formula_55
for formula_56 The union of these sets is the positive reals, and formula_3 applied to the union is then one, while formula_3 applied to any of the individual sets is zero, so the sum of formula_57 is also zero, which proves the counterexample.
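For these particular sets the ordinary limit in the definition exists, so any Banach limit agrees with it, which makes the counterexample easy to illustrate numerically. A rough sketch (the cutoff k = 10^6 is an arbitrary illustrative choice):

```python
# Rough numerical illustration of the counterexample.  For A_n = [n, n+1) and for their
# union (the positive reals), the ordinary limit of (1/k) * lambda(A ∩ (0, k)) exists,
# so it agrees with any Banach limit: mu of the union is 1, while every mu(A_n) is 0.

def mu_approx(length_up_to, k=10 ** 6):
    """Approximate mu(A) by (1/k) * lambda(A ∩ (0, k)) for a large finite cutoff k."""
    return length_up_to(k) / k

def interval_length(n):
    """Return the function k -> lambda([n, n+1) ∩ (0, k))."""
    return lambda k: max(0.0, min(float(k), n + 1.0) - n)

def union_length(k):
    """lambda((0, infinity) ∩ (0, k)) = k."""
    return float(k)

print("mu(union of all A_n) ≈", mu_approx(union_length))        # -> 1.0
print("mu(A_5)              ≈", mu_approx(interval_length(5)))  # -> 1e-06, tending to 0
print("sum of mu(A_n)       ≈", sum(mu_approx(interval_length(n)) for n in range(100)))
```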
Generalizations.
One may define additive functions with values in any additive monoid (for example any group or more commonly a vector space). For sigma-additivity, one needs in addition that the concept of limit of a sequence be defined on that set. For example, spectral measures are sigma-additive functions with values in a Banach algebra. Another example, also from quantum mechanics, is the positive operator-valued measure.
See also.
"This article incorporates material from additive on PlanetMath, which is licensed under the ."
References.
<templatestyles src="Reflist/styles.css" />
{
"math_id": 0,
"text": "\\mu"
},
{
"math_id": 1,
"text": "\\mu(A \\cup B) = \\mu(A) + \\mu(B)."
},
{
"math_id": 2,
"text": "\\mu\\left(\\bigcup_{n=1}^\\infty A_n\\right) = \\sum_{n=1}^\\infty \\mu(A_n)."
},
{
"math_id": 3,
"text": "\\mu"
},
{
"math_id": 4,
"text": "\\scriptstyle\\mathcal{A}"
},
{
"math_id": 5,
"text": "[-\\infty, \\infty]"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "B"
},
{
"math_id": 8,
"text": "\\scriptstyle\\mathcal{A},"
},
{
"math_id": 9,
"text": "\\mu(A \\cup B) = \\mu(A) + \\mu(B)."
},
{
"math_id": 10,
"text": "- \\infty"
},
{
"math_id": 11,
"text": "+ \\infty"
},
{
"math_id": 12,
"text": "\\infty - \\infty"
},
{
"math_id": 13,
"text": "\\mu\\left(\\bigcup_{n=1}^N A_n\\right)=\\sum_{n=1}^N \\mu\\left(A_n\\right)"
},
{
"math_id": 14,
"text": "A_1, A_2, \\ldots, A_N"
},
{
"math_id": 15,
"text": "\\mathcal{A}."
},
{
"math_id": 16,
"text": "A_1, A_2, \\ldots, A_n, \\ldots"
},
{
"math_id": 17,
"text": "\\mu\\left(\\bigcup_{n=1}^\\infty A_n\\right) = \\sum_{n=1}^\\infty \\mu(A_n),"
},
{
"math_id": 18,
"text": "\\mathcal{A},"
},
{
"math_id": 19,
"text": "\\tau."
},
{
"math_id": 20,
"text": "\\mathcal{G} \\subseteq \\mathcal{A} \\cap \\tau,"
},
{
"math_id": 21,
"text": "\\mu\\left(\\bigcup \\mathcal{G} \\right) = \\sup_{G\\in\\mathcal{G}} \\mu(G),"
},
{
"math_id": 22,
"text": "\\tau"
},
{
"math_id": 23,
"text": "\\mu(\\varnothing) = 0,"
},
{
"math_id": 24,
"text": "\\infty"
},
{
"math_id": 25,
"text": "A,"
},
{
"math_id": 26,
"text": "\\mu(A) = \\mu(A \\cup \\varnothing) = \\mu(A) + \\mu( \\varnothing)."
},
{
"math_id": 27,
"text": "\\mu(\\varnothing) \\neq 0,"
},
{
"math_id": 28,
"text": "A \\subseteq B"
},
{
"math_id": 29,
"text": "\\mu(A) \\leq \\mu(B)."
},
{
"math_id": 30,
"text": "\\mu(A) \\geq \\mu(B)."
},
{
"math_id": 31,
"text": "\\mathcal{S}"
},
{
"math_id": 32,
"text": "B,"
},
{
"math_id": 33,
"text": "A\\cup B,"
},
{
"math_id": 34,
"text": "A\\cap B"
},
{
"math_id": 35,
"text": "\\mathcal{S},"
},
{
"math_id": 36,
"text": " \\phi(A\\cup B)+ \\phi(A\\cap B) = \\phi(A) + \\phi(B)"
},
{
"math_id": 37,
"text": "\\mu(A \\cup B) + \\mu(A \\cap B) = \\mu(A) + \\mu(B)."
},
{
"math_id": 38,
"text": "A = (A \\cap B) \\cup (A \\setminus B)"
},
{
"math_id": 39,
"text": "B = (A \\cap B) \\cup (B \\setminus A)"
},
{
"math_id": 40,
"text": "A \\cup B = (A \\cap B) \\cup (A \\setminus B) \\cup (B \\setminus A),"
},
{
"math_id": 41,
"text": "\\mu(A \\setminus B) + \\mu(B \\setminus A) + 2\\mu(A \\cap B)."
},
{
"math_id": 42,
"text": "\\mu(B) - \\mu(A)"
},
{
"math_id": 43,
"text": "\\mu(B \\setminus A) = \\mu(B) - \\mu(A)."
},
{
"math_id": 44,
"text": "\\mu (A)= \\begin{cases} 1 & \\mbox{ if } 0 \\in A \\\\ \n 0 & \\mbox{ if } 0 \\notin A.\n\\end{cases}"
},
{
"math_id": 45,
"text": "\\mu\\left(\\bigcup_{n=1}^\\infty A_n\\right) = \\sum_{n=1}^\\infty \\mu(A_n)"
},
{
"math_id": 46,
"text": "\\varnothing"
},
{
"math_id": 47,
"text": "0."
},
{
"math_id": 48,
"text": "\\R"
},
{
"math_id": 49,
"text": "\\mu(A) = \\lim_{k\\to\\infty} \\frac{1}{k} \\cdot \\lambda(A \\cap (0,k)),"
},
{
"math_id": 50,
"text": "\\lambda"
},
{
"math_id": 51,
"text": "\\lim"
},
{
"math_id": 52,
"text": "0 \\leq \\mu(A) \\leq 1"
},
{
"math_id": 53,
"text": "\\sup A < \\infty"
},
{
"math_id": 54,
"text": "\\mu(A) = 0."
},
{
"math_id": 55,
"text": "A_n = [n,n + 1)"
},
{
"math_id": 56,
"text": "n = 0, 1, 2, \\ldots"
},
{
"math_id": 57,
"text": "\\mu(A_n)"
}
] | https://en.wikipedia.org/wiki?curid=1073567 |
1073746 | Brown's representability theorem | On representability of a contravariant functor on the category of connected CW complexes
In mathematics, Brown's representability theorem in homotopy theory gives necessary and sufficient conditions for a contravariant functor "F" on the homotopy category "Hotc" of pointed connected CW complexes, to the category of sets Set, to be a representable functor.
More specifically, we are given
"F": "Hotc"op → Set,
and there are certain obviously necessary conditions for "F" to be of type "Hom"(—, "C"), with "C" a pointed connected CW-complex that can be deduced from category theory alone. The statement of the substantive part of the theorem is that these necessary conditions are then sufficient. For technical reasons, the theorem is often stated for functors to the category of pointed sets; in other words the sets are also given a base point.
Brown representability theorem for CW complexes.
The representability theorem for CW complexes, due to Edgar H. Brown, is the following. Suppose that:
"F" sends coproducts (i.e. wedge sums) in "Hotc" to products in Set: formula_0
"F" sends homotopy pushouts in "Hotc" to weak pullbacks. This is often stated as a Mayer–Vietoris axiom: for any CW complex "W" covered by two subcomplexes "U" and "V", and any elements "u" ∈ "F"("U") and "v" ∈ "F"("V") restricting to the same element of "F"("U" ∩ "V"), there is an element "w" ∈ "F"("W") restricting to "u" and "v", respectively.
Then "F" is representable by some CW complex "C", that is to say there is an isomorphism
"F"("Z") ≅ "Hom""Hotc"("Z", "C")
for any CW complex "Z", which is natural in "Z" in that for any morphism from "Z" to another CW complex "Y" the induced maps "F"("Y") → "F"("Z") and "Hom""Hot"("Y", "C") → "Hom""Hot"("Z", "C") are compatible with these isomorphisms.
The converse statement also holds: any functor represented by a CW complex satisfies the above two properties. This direction is an immediate consequence of basic category theory, so the deeper and more interesting part of the equivalence is the other implication.
The representing object "C" above can be shown to depend functorially on "F": any natural transformation from "F" to another functor satisfying the conditions of the theorem necessarily induces a map of the representing objects. This is a consequence of Yoneda's lemma.
Taking "F"("X") to be the singular cohomology group "H""i"("X","A") with coefficients in a given abelian group "A", for fixed "i" > 0; then the representing space for "F" is the Eilenberg–MacLane space "K"("A", "i"). This gives a means of showing the existence of Eilenberg-MacLane spaces.
Variants.
Since the homotopy category of CW-complexes is equivalent to the localization of the category of all topological spaces at the weak homotopy equivalences, the theorem can equivalently be stated for functors on a category defined in this way.
However, the theorem is false without the restriction to "connected" pointed spaces, and an analogous statement for unpointed spaces is also false.
A similar statement does, however, hold for spectra instead of CW complexes. Brown also proved a general categorical version of the representability theorem, which includes both the version for pointed connected CW complexes and the version for spectra.
A version of the representability theorem in the case of triangulated categories is due to Amnon Neeman. Together with the preceding remark, it gives a criterion for a (covariant) functor "F": "C" → "D" between triangulated categories satisfying certain technical conditions to have a right adjoint functor. Namely, if "C" and "D" are triangulated categories with "C" compactly generated and "F" a triangulated functor commuting with arbitrary direct sums, then "F" is a left adjoint. Neeman has applied this to proving the Grothendieck duality theorem in algebraic geometry.
Jacob Lurie has proved a version of the Brown representability theorem for the homotopy category of a pointed quasicategory with a compact set of generators which are cogroup objects in the homotopy category. For instance, this applies to the homotopy category of pointed connected CW complexes, as well as to the unbounded derived category of a Grothendieck abelian category (in view of Lurie's higher-categorical refinement of the derived category).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F(\\vee_\\alpha X_\\alpha) \\cong \\prod_\\alpha F(X_\\alpha),"
}
] | https://en.wikipedia.org/wiki?curid=1073746 |
10738834 | Fatou's theorem | In mathematics, specifically in complex analysis, Fatou's theorem, named after Pierre Fatou, is a statement concerning holomorphic functions on the unit disk and their pointwise extension to the boundary of the disk.
Motivation and statement of theorem.
If we have a holomorphic function formula_0 defined on the open unit disk formula_1, it is reasonable to ask under what conditions we can extend this function to the boundary of the unit disk. To do this, we can look at what the function looks like on each circle inside the disk centered at 0, each with some radius formula_2. This defines a new function:
formula_3
where
formula_4
is the unit circle. Then it would be expected that the values of the extension of formula_0 onto the circle should be the limit of these functions, and so the question reduces to determining when formula_5 converges, and in what sense, as formula_6, and how well defined is this limit. In particular, if the formula_7 norms of these formula_5 are well behaved, we have an answer:
Theorem. Let formula_8 be a holomorphic function such that
formula_9
where formula_5 are defined as above. Then formula_5 converges to some function formula_10 pointwise almost everywhere and in formula_7 norm. That is,
formula_11
Now, notice that this pointwise limit is a radial limit. That is, the limit being taken is along a straight line from the center of the disk to the boundary of the circle, and the statement above hence says that
formula_12
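A small numerical sketch of the theorem for a concrete function in "H"2: the choice "f"("z") = Σ_{n≥1} "z"^"n"/"n" = −log(1 − "z") is an added example, not one mentioned above; its Taylor coefficients are square-summable, and Parseval's identity is used to evaluate the "L"2 distances:

```python
# Small sketch: for f(z) = sum_{n>=1} z**n / n  (in H^2, since sum 1/n**2 < infinity),
# Parseval gives ||f_r - f_1||_{L^2}**2 = sum_{n>=1} (1 - r**n)**2 / n**2, which tends
# to 0 as r -> 1.  The radial limit at one fixed boundary point (away from z = 1) is
# also evaluated from the closed form f(z) = -log(1 - z).
import cmath

def l2_error(r, terms=200_000):
    return sum((1 - r ** n) ** 2 / n ** 2 for n in range(1, terms)) ** 0.5

for r in (0.9, 0.99, 0.999):
    print(f"r = {r}:  ||f_r - f_1||_2 ≈ {l2_error(r):.4f}")

theta = cmath.pi / 3  # a boundary point away from the singularity at z = 1
boundary_value = -cmath.log(1 - cmath.exp(1j * theta))
for r in (0.9, 0.99, 0.999):
    radial_value = -cmath.log(1 - r * cmath.exp(1j * theta))
    print(f"r = {r}:  |f(r e^(i theta)) - f_1(e^(i theta))| ≈ "
          f"{abs(radial_value - boundary_value):.5f}")
```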
The natural question is, with this boundary function defined, will we converge pointwise to this function by taking a limit in any other way? That is, suppose instead of following a straight line to the boundary, we follow an arbitrary curve formula_13 converging to some point formula_14 on the boundary. Will formula_0 converge to formula_15? (Note that the above theorem is just the special case of formula_16). It turns out that the curve formula_17 needs to be "non-tangential", meaning that the curve does not approach its target on the boundary in a way that makes it tangent to the boundary of the circle. In other words, the range of formula_17 must be contained in a wedge emanating from the limit point. We summarize as follows:
Definition. Let formula_13 be a continuous path such that formula_18. Define
formula_19
That is, formula_20 is the wedge inside the disk with angle formula_21 whose axis passes between formula_14 and zero. We say that formula_17 converges "non-tangentially" to formula_14, or that it is a "non-tangential limit", if there exists formula_22 such that formula_17 is contained in formula_20 and formula_23.
Fatou's Theorem. Let formula_24 Then for almost all formula_25
formula_26
for every non-tangential limit formula_17 converging to formula_27 where formula_28 is defined as above. | [
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "\\mathbb{D}=\\{z:|z|<1\\}"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "\\begin{cases} f_r:S^1 \\to \\Complex \\\\ f_{r}(e^{i\\theta})=f(re^{i\\theta}) \\end{cases}"
},
{
"math_id": 4,
"text": "S^1:=\\{e^{i\\theta}:\\theta\\in[0,2\\pi]\\}=\\{z\\in \\Complex:|z|=1\\},"
},
{
"math_id": 5,
"text": "f_r"
},
{
"math_id": 6,
"text": "r\\to 1"
},
{
"math_id": 7,
"text": "L^p"
},
{
"math_id": 8,
"text": "f:\\mathbb{D}\\to\\Complex"
},
{
"math_id": 9,
"text": "\\sup_{0<r<1}\\| f_r\\|_{L^p(S^{1})}<\\infty,"
},
{
"math_id": 10,
"text": "f_1\\in L^p(S^{1})"
},
{
"math_id": 11,
"text": "\\begin{align}\n\\left |f_r(e^{i\\theta})-f_{1}(e^{i\\theta}) \\right | &\\to 0 && \\text{for almost every } \\theta\\in [0,2\\pi] \\\\\n\\|f_r-f_1\\|_{L^p(S^1)} &\\to 0\n\\end{align}"
},
{
"math_id": 12,
"text": " f(re^{i\\theta})\\to f_1(e^{i\\theta}) \\qquad \\text{for almost every } \\theta."
},
{
"math_id": 13,
"text": "\\gamma:[0,1)\\to \\mathbb{D}"
},
{
"math_id": 14,
"text": "e^{i\\theta}"
},
{
"math_id": 15,
"text": "f_{1}(e^{i\\theta})"
},
{
"math_id": 16,
"text": "\\gamma(t)=te^{i\\theta}"
},
{
"math_id": 17,
"text": "\\gamma"
},
{
"math_id": 18,
"text": "\\lim\\nolimits_{t\\to 1}\\gamma(t)=e^{i\\theta}\\in S^{1}"
},
{
"math_id": 19,
"text": "\\begin{align}\n\\Gamma_\\alpha &=\\{z:\\arg z\\in [\\pi-\\alpha,\\pi+\\alpha]\\} \\\\\n\\Gamma_\\alpha(\\theta) &=\\mathbb{D}\\cap e^{i\\theta}(\\Gamma_\\alpha+1)\n\\end{align}"
},
{
"math_id": 20,
"text": "\\Gamma_\\alpha(\\theta)"
},
{
"math_id": 21,
"text": "2\\alpha"
},
{
"math_id": 22,
"text": "0<\\alpha<\\tfrac{\\pi}{2}"
},
{
"math_id": 23,
"text": "\\lim\\nolimits_{t\\to 1}\\gamma(t)=e^{i\\theta}"
},
{
"math_id": 24,
"text": "f\\in H^p(\\mathbb{D})."
},
{
"math_id": 25,
"text": "\\theta\\in[0,2\\pi],"
},
{
"math_id": 26,
"text": "\\lim_{t\\to 1}f(\\gamma(t))=f_1(e^{i\\theta})"
},
{
"math_id": 27,
"text": "e^{i\\theta},"
},
{
"math_id": 28,
"text": "f_1"
}
] | https://en.wikipedia.org/wiki?curid=10738834 |
10739141 | Trace monoid | Generalization of strings in computer science
In computer science, a trace is a set of strings, wherein certain letters in the string are allowed to commute, but others are not. It generalizes the concept of a string, by not forcing the letters to always be in a fixed order, but allowing certain reshufflings to take place. Traces were introduced by Pierre Cartier and Dominique Foata in 1969 to give a combinatorial proof of MacMahon's master theorem. Traces are used in theories of concurrent computation, where commuting letters stand for portions of a job that can execute independently of one another, while non-commuting letters stand for locks, synchronization points or thread joins.
The trace monoid or free partially commutative monoid is a monoid of traces. In a nutshell, it is constructed as follows: sets of commuting letters are given by an independency relation. These induce an equivalence relation of equivalent strings; the elements of the equivalence classes are the traces. The equivalence relation then partitions up the free monoid (the set of all strings of finite length) into a set of equivalence classes; the result is still a monoid; it is a quotient monoid and is called the "trace monoid". The trace monoid is universal, in that all dependency-homomorphic (see below) monoids are in fact isomorphic.
Trace monoids are commonly used to model concurrent computation, forming the foundation for process calculi. They are the object of study in trace theory. The utility of trace monoids comes from the fact that they are isomorphic to the monoid of dependency graphs; thus allowing algebraic techniques to be applied to graphs, and vice versa. They are also isomorphic to history monoids, which model the history of computation of individual processes in the context of all scheduled processes on one or more computers.
Trace.
Let formula_0 denote the free monoid, that is, the set of all strings written in the alphabet formula_1. Here, the asterisk denotes, as usual, the Kleene star. An independency relation formula_2 on formula_1 then induces a (symmetric) binary relation formula_3 on formula_0, where formula_4 if and only if there exist formula_5, and a pair formula_6 such that formula_7 and formula_8. Here, formula_9 and formula_10 are understood to be strings (elements of formula_0), while formula_11 and formula_12 are letters (elements of formula_1).
The trace is defined as the reflexive transitive closure of formula_3. The trace is thus an equivalence relation on formula_0, and is denoted by formula_13, where formula_14 is the dependency relation corresponding to formula_15 that is formula_16 and conversely formula_17 Clearly, different dependencies will give different equivalence relations.
The transitive closure implies that formula_18 if and only if there exists a sequence of strings formula_19 such that formula_20 and formula_21 and formula_22 for all formula_23. The trace is stable under the monoid operation on formula_0 (concatenation) and is therefore a congruence relation on formula_0.
The trace monoid, commonly denoted as formula_24, is defined as the quotient monoid
formula_25
The homomorphism
formula_26
is commonly referred to as the natural homomorphism or canonical homomorphism. That the terms "natural" or "canonical" are deserved follows from the fact that this morphism embodies a universal property, as discussed in a later section.
One will also find the trace monoid denoted as formula_27 where formula_2 is the independency relation. Confusingly, one can also find the commutation relation used instead of the independency relation (it differs by including all the diagonal elements).
Examples.
Consider the alphabet formula_28. A possible dependency relation is
formula_29
The corresponding independency is
formula_30
Therefore, the letters formula_31 commute. Thus, for example, a trace equivalence class for the string formula_32 would be
formula_33
The equivalence class formula_34 is an element of the trace monoid.
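A minimal sketch that recovers this equivalence class by exhaustively applying adjacent swaps of independent letters (a breadth-first search over the relation formula_3 defined above; the enumeration is exponential in general and is meant only to illustrate the definition):

```python
# Small sketch: compute the trace equivalence class [w]_D by breadth-first search,
# repeatedly swapping adjacent pairs of independent letters.
from collections import deque

independency = {('b', 'c'), ('c', 'b')}

def trace_class(word):
    seen = {word}
    queue = deque([word])
    while queue:
        w = queue.popleft()
        for i in range(len(w) - 1):
            if (w[i], w[i + 1]) in independency:
                swapped = w[:i] + w[i + 1] + w[i] + w[i + 2:]
                if swapped not in seen:
                    seen.add(swapped)
                    queue.append(swapped)
    return seen

print(sorted(trace_class("abababbca")))
# -> ['abababbca', 'abababcba', 'ababacbba'], matching the class listed above
```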
Properties.
The cancellation property states that equivalence is maintained under right cancellation. That is, if formula_35, then formula_36. Here, the notation formula_37 denotes right cancellation, the removal of the first occurrence of the letter "a" from the string "w", starting from the right-hand side. Equivalence is also maintained by left-cancellation. Several corollaries follow:
Embedding: formula_38 if and only if formula_39 for strings "x" and "y". Thus, the trace is a congruence.
Independence: if formula_40 and formula_41, then "a" is independent of "b"; that is, formula_42. Furthermore, there exists a string "w" such that formula_43 and formula_44.
Projection rule: equivalence is maintained under string projection, so that if formula_38, then formula_45.
A strong form of Levi's lemma holds for traces. Specifically, if formula_46 for strings "u", "v", "x", "y", then there exist strings formula_47 and formula_48 such that formula_49
for all letters formula_50 and formula_51 such that formula_52 occurs in formula_53 and formula_54 occurs in formula_55, and
formula_56
formula_57
Universal property.
A dependency morphism (with respect to a dependency "D") is a morphism
formula_58
to some monoid "M", such that the "usual" trace properties hold, namely:
1. formula_59 implies that formula_60
2. formula_42 implies that formula_61
3. formula_62 implies that formula_63
4. formula_64 and formula_41 imply that formula_42
Dependency morphisms are universal, in the sense that for a given, fixed dependency "D", if formula_58 is a dependency morphism to a monoid "M", then "M" is isomorphic to the trace monoid formula_65. In particular, the natural homomorphism is a dependency morphism.
Normal forms.
There are two well-known normal forms for words in trace monoids. One is the "lexicographic" normal form, due to Anatolij V. Anisimov and Donald Knuth, and the other is the "Foata" normal form due to Pierre Cartier and Dominique Foata who studied the trace monoid for its combinatorics in the 1960s.
Unicode's Normalization Form Canonical Decomposition (NFD) is an example of a lexicographic normal form - the ordering is to sort consecutive characters with non-zero canonical combining class by that class.
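A minimal sketch of the lexicographic normal form, obtained here as the lexicographically least representative of the class by the same kind of brute-force enumeration used in the Examples section (practical algorithms compute the normal form directly, without enumerating the class):

```python
# Illustration only: the lexicographic normal form is the lexicographically smallest
# representative of a trace; here it is found by brute-force enumeration of the class.
def lexicographic_normal_form(word, independency=frozenset({('b', 'c'), ('c', 'b')})):
    seen, stack = {word}, [word]
    while stack:
        w = stack.pop()
        for i in range(len(w) - 1):
            if (w[i], w[i + 1]) in independency:
                s = w[:i] + w[i + 1] + w[i] + w[i + 2:]
                if s not in seen:
                    seen.add(s)
                    stack.append(s)
    return min(seen)

print(lexicographic_normal_form("acbacb"))     # -> 'abcabc'  (b and c commute)
print(lexicographic_normal_form("abababbca"))  # -> 'abababbca'
```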
Trace languages.
Just as a formal language can be regarded as a subset of formula_0, the set of all possible strings, so a trace language is defined as a subset of formula_65, the set of all possible traces.
Alternatively, but equivalently, a language formula_66 is a trace language, or is said to be consistent with dependency "D" if
formula_67
where
formula_68
is the trace closure of a set of strings.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
General references
Seminal publications | [
{
"math_id": 0,
"text": "\\Sigma^*"
},
{
"math_id": 1,
"text": "\\Sigma"
},
{
"math_id": 2,
"text": "I"
},
{
"math_id": 3,
"text": "\\sim"
},
{
"math_id": 4,
"text": "u\\sim v"
},
{
"math_id": 5,
"text": "x,y\\in \\Sigma^*"
},
{
"math_id": 6,
"text": "(a,b)\\in I"
},
{
"math_id": 7,
"text": "u=xaby"
},
{
"math_id": 8,
"text": "v=xbay"
},
{
"math_id": 9,
"text": "u,v,x"
},
{
"math_id": 10,
"text": "y"
},
{
"math_id": 11,
"text": "a"
},
{
"math_id": 12,
"text": "b"
},
{
"math_id": 13,
"text": "\\equiv_D"
},
{
"math_id": 14,
"text": "D"
},
{
"math_id": 15,
"text": "I ,"
},
{
"math_id": 16,
"text": "D = (\\Sigma \\times \\Sigma) \\setminus I"
},
{
"math_id": 17,
"text": "I = (\\Sigma \\times \\Sigma) \\setminus D ."
},
{
"math_id": 18,
"text": "u\\equiv v"
},
{
"math_id": 19,
"text": "(w_0,w_1,\\cdots,w_n)"
},
{
"math_id": 20,
"text": "u\\sim w_0"
},
{
"math_id": 21,
"text": "v\\sim w_n"
},
{
"math_id": 22,
"text": "w_i\\sim w_{i+1}"
},
{
"math_id": 23,
"text": "0\\le i < n"
},
{
"math_id": 24,
"text": "\\mathbb {M}(D)"
},
{
"math_id": 25,
"text": "\\mathbb {M}(D) = \\Sigma^* / \\equiv_D."
},
{
"math_id": 26,
"text": "\\phi_D:\\Sigma^*\\to \\mathbb {M}(D)"
},
{
"math_id": 27,
"text": "M(\\Sigma,I)"
},
{
"math_id": 28,
"text": "\\Sigma=\\{a,b,c\\}"
},
{
"math_id": 29,
"text": "\\begin{matrix} D \n &=& \\{a,b\\}\\times\\{a,b\\} \\quad \\cup \\quad \\{a,c\\}\\times\\{a,c\\} \\\\\n &=& \\{a,b\\}^2 \\cup \\{a,c\\}^2 \\\\\n &=& \\{ (a,b),(b,a),(a,c),(c,a),(a,a),(b,b),(c,c)\\} \n\\end{matrix}"
},
{
"math_id": 30,
"text": "I_D=\\{(b,c)\\,,\\,(c,b)\\}"
},
{
"math_id": 31,
"text": "b,c"
},
{
"math_id": 32,
"text": "abababbca"
},
{
"math_id": 33,
"text": "[abababbca]_D = \\{abababbca\\,,\\; abababcba\\,,\\; ababacbba \\}"
},
{
"math_id": 34,
"text": "[abababbca]_D"
},
{
"math_id": 35,
"text": "w\\equiv v"
},
{
"math_id": 36,
"text": "(w\\div a)\\equiv (v\\div a)"
},
{
"math_id": 37,
"text": "w\\div a"
},
{
"math_id": 38,
"text": "w \\equiv v"
},
{
"math_id": 39,
"text": "xwy\\equiv xvy"
},
{
"math_id": 40,
"text": "ua\\equiv vb"
},
{
"math_id": 41,
"text": "a\\ne b"
},
{
"math_id": 42,
"text": "(a,b)\\in I_D"
},
{
"math_id": 43,
"text": "u\\equiv wb"
},
{
"math_id": 44,
"text": "v\\equiv wa"
},
{
"math_id": 45,
"text": "\\pi_\\Sigma(w)\\equiv \\pi_\\Sigma(v)"
},
{
"math_id": 46,
"text": "uv\\equiv xy"
},
{
"math_id": 47,
"text": "z_1, z_2, z_3"
},
{
"math_id": 48,
"text": "z_4"
},
{
"math_id": 49,
"text": "(w_2, w_3)\\in I_D"
},
{
"math_id": 50,
"text": "w_2\\in\\Sigma"
},
{
"math_id": 51,
"text": "w_3\\in\\Sigma"
},
{
"math_id": 52,
"text": "w_2"
},
{
"math_id": 53,
"text": "z_2"
},
{
"math_id": 54,
"text": "w_3"
},
{
"math_id": 55,
"text": "z_3"
},
{
"math_id": 56,
"text": "u\\equiv z_1z_2,\\qquad v\\equiv z_3z_4,"
},
{
"math_id": 57,
"text": "x\\equiv z_1z_3,\\qquad y\\equiv z_2z_4."
},
{
"math_id": 58,
"text": "\\psi:\\Sigma^*\\to M"
},
{
"math_id": 59,
"text": "\\psi(w)=\\psi(\\varepsilon)"
},
{
"math_id": 60,
"text": "w=\\varepsilon"
},
{
"math_id": 61,
"text": "\\psi(ab)=\\psi(ba)"
},
{
"math_id": 62,
"text": "\\psi(ua)=\\psi(v)"
},
{
"math_id": 63,
"text": "\\psi(u)=\\psi(v\\div a)"
},
{
"math_id": 64,
"text": "\\psi(ua)=\\psi(vb)"
},
{
"math_id": 65,
"text": "\\mathbb{M}(D)"
},
{
"math_id": 66,
"text": "L\\subseteq\\Sigma^*"
},
{
"math_id": 67,
"text": "L = [L]_D"
},
{
"math_id": 68,
"text": "[L]_D = \\bigcup_{w \\in L} [w]_D"
}
] | https://en.wikipedia.org/wiki?curid=10739141 |
10739341 | Dependency relation | In computer science, in particular in concurrency theory, a dependency relation is a binary relation on a finite domain formula_0, symmetric, and reflexive; i.e. a finite tolerance relation. That is, it is a finite set of ordered pairs formula_1, such that
1. if formula_2 then formula_3 (symmetry), and
2. formula_5 for all formula_4 (reflexivity).
In general, dependency relations are not transitive; thus, they generalize the notion of an equivalence relation by discarding transitivity.
formula_0 is also called the alphabet on which formula_1 is defined. The independency induced by formula_1 is the binary relation formula_6
formula_7
That is, the independency is the set of all ordered pairs that are not in formula_1. The independency relation is symmetric and irreflexive. Conversely, given any symmetric and irreflexive relation formula_6 on a finite alphabet, the relation
formula_8
is a dependency relation.
The pair formula_9 is called the concurrent alphabet. The pair formula_10 is called the independency alphabet or reliance alphabet, but this term may also refer to the triple formula_11 (with formula_6 induced by formula_1). Elements formula_12 are called dependent if formula_13 holds, and independent otherwise (i.e. if formula_14 holds).
Given a reliance alphabet formula_11, a symmetric and irreflexive relation formula_15 can be defined on the free monoid formula_16 of all possible strings of finite length by: formula_17 for all strings formula_18 and all independent symbols formula_19. The equivalence closure of formula_15 is denoted formula_20 or formula_21 and called formula_11-equivalence. Informally, formula_22 holds if the string formula_23 can be transformed into formula_24 by a finite sequence of swaps of adjacent independent symbols. The equivalence classes of formula_20 are called traces, and are studied in trace theory.
Examples.
Given the alphabet formula_25, a possible dependency relation is formula_26.
The corresponding independency is formula_27. Then e.g. the symbols formula_28 are independent of one another, and e.g. formula_29 are dependent. The string formula_30 is equivalent to formula_31 and to formula_32, but to no other string.
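A small Python check (not part of the article; names are illustrative) of this example constructs the independency as the complement of the dependency and verifies that the strings above differ only by swaps of adjacent independent symbols:

```python
# Build the independency I = (Sigma x Sigma) \ D and test single adjacent swaps.
sigma = {"a", "b", "c"}
D = {("a", "b"), ("b", "a"), ("a", "c"), ("c", "a"), ("a", "a"), ("b", "b"), ("c", "c")}
I = {(x, y) for x in sigma for y in sigma} - D
print(sorted(I))  # [('b', 'c'), ('c', 'b')]

def one_swap(u, v, indep):
    """True if v arises from u by one swap of adjacent independent symbols."""
    if len(u) != len(v):
        return False
    diffs = [i for i in range(len(u)) if u[i] != v[i]]
    return (len(diffs) == 2 and diffs[1] == diffs[0] + 1
            and u[diffs[0]] == v[diffs[1]] and u[diffs[1]] == v[diffs[0]]
            and (u[diffs[0]], u[diffs[1]]) in indep)

print(one_swap("acbba", "abcba", I))  # True
print(one_swap("abcba", "abbca", I))  # True
```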
References.
| [
{
"math_id": 0,
"text": "\\Sigma"
},
{
"math_id": 1,
"text": "D"
},
{
"math_id": 2,
"text": "(a,b)\\in D"
},
{
"math_id": 3,
"text": "(b,a) \\in D"
},
{
"math_id": 4,
"text": "a \\in \\Sigma"
},
{
"math_id": 5,
"text": "(a,a) \\in D"
},
{
"math_id": 6,
"text": "I"
},
{
"math_id": 7,
"text": "I = (\\Sigma \\times \\Sigma) \\setminus D"
},
{
"math_id": 8,
"text": "D = (\\Sigma \\times \\Sigma) \\setminus I"
},
{
"math_id": 9,
"text": "(\\Sigma, D)"
},
{
"math_id": 10,
"text": "(\\Sigma, I)"
},
{
"math_id": 11,
"text": "(\\Sigma, D, I)"
},
{
"math_id": 12,
"text": "x,y \\in \\Sigma"
},
{
"math_id": 13,
"text": "xDy"
},
{
"math_id": 14,
"text": "xIy"
},
{
"math_id": 15,
"text": "\\doteq"
},
{
"math_id": 16,
"text": "\\Sigma^*"
},
{
"math_id": 17,
"text": "x a b y \\doteq x b a y"
},
{
"math_id": 18,
"text": "x, y \\in \\Sigma^*"
},
{
"math_id": 19,
"text": "a, b \\in I"
},
{
"math_id": 20,
"text": "\\equiv"
},
{
"math_id": 21,
"text": "\\equiv_{(\\Sigma, D, I)}"
},
{
"math_id": 22,
"text": "p \\equiv q"
},
{
"math_id": 23,
"text": "p"
},
{
"math_id": 24,
"text": "q"
},
{
"math_id": 25,
"text": "\\Sigma=\\{a,b,c\\}"
},
{
"math_id": 26,
"text": "D = \\{ (a,b),\\, (b,a),\\, (a,c),\\, (c,a),\\, (a,a),\\, (b,b),\\, (c,c) \\}"
},
{
"math_id": 27,
"text": "I=\\{(b,c),\\,(c,b)\\}"
},
{
"math_id": 28,
"text": "b,c"
},
{
"math_id": 29,
"text": "a,b"
},
{
"math_id": 30,
"text": "a c b b a"
},
{
"math_id": 31,
"text": "a b c b a"
},
{
"math_id": 32,
"text": "a b b c a"
}
] | https://en.wikipedia.org/wiki?curid=10739341 |
1074140 | Cotton tensor | In differential geometry, the Cotton tensor on a (pseudo)-Riemannian manifold of dimension "n" is a third-order tensor concomitant of the metric. The vanishing of the Cotton tensor for "n" = 3 is a necessary and sufficient condition for the manifold to be locally conformally flat. By contrast, in dimensions "n" ≥ 4,
the vanishing of the Cotton tensor is necessary but not sufficient for the metric to be conformally flat; instead, the corresponding necessary and sufficient condition in these higher dimensions is the vanishing of the Weyl tensor, while the Cotton tensor just becomes a constant times
the divergence of the Weyl tensor. For "n" < 3 the Cotton tensor is identically zero. The concept is named after Émile Cotton.
The proof of the classical result that for "n" = 3 the vanishing of the Cotton tensor is equivalent to the metric being conformally flat is given by Eisenhart using a standard integrability argument. This tensor density is uniquely characterized by its conformal properties coupled with the demand that it be differentiable for arbitrary metrics, as shown by .
Recently, the study of three-dimensional spaces has become of great interest, because the Cotton tensor restricts the relation between the Ricci tensor and the energy–momentum tensor of matter in the Einstein equations and plays an important role in the Hamiltonian formalism of general relativity.
Definition.
In coordinates, and denoting the Ricci tensor by "R""ij" and the scalar curvature by "R", the components of the Cotton tensor are
formula_0
The Cotton tensor can be regarded as a vector valued 2-form, and for "n" = 3 one can use the Hodge star operator to convert this into a second order trace free tensor density
formula_1
sometimes called the "Cotton–York tensor".
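The coordinate formula above can be checked symbolically. The following minimal sketch (not from the article; the conformal factor is an arbitrary illustrative choice) uses SymPy to build the Cotton tensor of a conformally flat three-dimensional metric and verify that all of its components vanish, as the equivalence quoted above requires:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = [x, y, z]
n = 3

# Conformally flat metric g = exp(2*omega) * delta; its Cotton tensor must vanish.
omega = x + 2*y                      # arbitrary smooth conformal factor (assumption)
g = sp.eye(n) * sp.exp(2*omega)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} of the metric
Gamma = [[[sum(ginv[a, d]*(sp.diff(g[d, b], coords[c]) + sp.diff(g[d, c], coords[b])
                           - sp.diff(g[b, c], coords[d])) for d in range(n))/2
           for c in range(n)] for b in range(n)] for a in range(n)]

# Riemann tensor R^a_{bcd}, Ricci tensor R_{ij} and scalar curvature R
def riemann(a, b, c, d):
    t = sp.diff(Gamma[a][b][d], coords[c]) - sp.diff(Gamma[a][b][c], coords[d])
    t += sum(Gamma[a][c][e]*Gamma[e][b][d] - Gamma[a][d][e]*Gamma[e][b][c] for e in range(n))
    return t

Ric = sp.Matrix(n, n, lambda i, j: sp.simplify(sum(riemann(a, i, a, j) for a in range(n))))
R = sp.simplify(sum(ginv[i, j]*Ric[i, j] for i in range(n) for j in range(n)))

def nabla_ric(k, i, j):
    """Covariant derivative of the Ricci tensor, nabla_k R_{ij}."""
    return (sp.diff(Ric[i, j], coords[k])
            - sum(Gamma[l][k][i]*Ric[l, j] + Gamma[l][k][j]*Ric[i, l] for l in range(n)))

def cotton(i, j, k):
    """Cotton tensor C_{ijk} as defined in the text."""
    t = nabla_ric(k, i, j) - nabla_ric(j, i, k)
    t += (sp.diff(R, coords[j])*g[i, k] - sp.diff(R, coords[k])*g[i, j]) / (2*(n - 1))
    return sp.simplify(t)

print(all(cotton(i, j, k) == 0 for i in range(n) for j in range(n) for k in range(n)))  # True
```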
Properties.
Conformal rescaling.
Under conformal rescaling of the metric formula_2, for some scalar function formula_3, the Christoffel symbols transform as
formula_4
where formula_5 is the tensor
formula_6
The Riemann curvature tensor transforms as
formula_7
In formula_8-dimensional manifolds, we obtain the Ricci tensor by contracting the transformed Riemann tensor to see it transform as
formula_9
Similarly the Ricci scalar transforms as
formula_10
Combining all these facts permits us to conclude that the Cotton tensor transforms as
formula_11
or using coordinate independent language as
formula_12
where the gradient is contracted with the Weyl tensor "W".
Symmetries.
The Cotton tensor has the following symmetries:
formula_13
and therefore
formula_14
In addition the Bianchi formula for the Weyl tensor can be rewritten as
formula_15
where formula_16 is the positive divergence in the first component of "W".
References.
| [
{
"math_id": 0,
"text": "C_{ijk} = \\nabla_{k} R_{ij} - \\nabla_{j} R_{ik} + \\frac{1}{2(n-1)}\\left( \\nabla_{j}Rg_{ik} - \\nabla_{k}Rg_{ij}\\right)."
},
{
"math_id": 1,
"text": "C_i^j = \\nabla_{k} \\left( R_{li} - \\frac{1}{4} Rg_{li}\\right)\\epsilon^{klj},"
},
{
"math_id": 2,
"text": "\\tilde{g} = e^{2\\omega} g"
},
{
"math_id": 3,
"text": "\\omega"
},
{
"math_id": 4,
"text": "\\widetilde{\\Gamma}^{\\alpha}_{\\beta\\gamma}=\\Gamma^{\\alpha}_{\\beta\\gamma}+S^{\\alpha}_{\\beta\\gamma}"
},
{
"math_id": 5,
"text": "S^{\\alpha}_{\\beta\\gamma}"
},
{
"math_id": 6,
"text": "S^{\\alpha}_{\\beta\\gamma} = \\delta^{\\alpha}_{\\gamma} \\partial_{\\beta} \\omega + \\delta^{\\alpha}_{\\beta} \\partial_{\\gamma} \\omega - g_{\\beta\\gamma} \\partial^{\\alpha} \\omega"
},
{
"math_id": 7,
"text": "{\\widetilde{R}^{\\lambda}}{}_{\\mu\\alpha\\beta}={R^{\\lambda}}_{\\mu\\alpha\\beta}+\\nabla_{\\alpha}S^{\\lambda}_{\\beta\\mu}-\\nabla_{\\beta}S^{\\lambda}_{\\alpha\\mu}+S^{\\lambda}_{\\alpha\\rho}S^{\\rho}_{\\beta\\mu}-S^{\\lambda}_{\\beta\\rho}S^{\\rho}_{\\alpha\\mu}"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "\\widetilde{R}_{\\beta\\mu}=R_{\\beta\\mu}-g_{\\beta\\mu}\\nabla^{\\alpha}\\partial_{\\alpha}\\omega-(n-2)\\nabla_{\\mu}\\partial_{\\beta}\\omega+(n-2)(\\partial_{\\mu}\\omega\\partial_{\\beta}\\omega-g_{\\beta\\mu}\\partial^{\\lambda}\\omega\\partial_{\\lambda}\\omega)"
},
{
"math_id": 10,
"text": "\\widetilde{R}=e^{-2\\omega}R-2e^{-2\\omega}(n-1)\\nabla^{\\alpha}\\partial_{\\alpha}\\omega-(n-2)(n-1)e^{-2\\omega}\\partial^{\\lambda}\\omega\\partial_{\\lambda}\\omega"
},
{
"math_id": 11,
"text": "\\widetilde{C}_{\\alpha\\beta\\gamma}=C_{\\alpha\\beta\\gamma}+(n-2)\\partial_{\\lambda}\\omega {W_{\\beta\\gamma\\alpha}}^{\\lambda}"
},
{
"math_id": 12,
"text": " \\tilde{C} = C \\; + (n-2) \\; \\operatorname{grad} \\, \\omega \\; \\lrcorner \\; W,"
},
{
"math_id": 13,
"text": "C_{ijk} = - C_{ikj} \\, "
},
{
"math_id": 14,
"text": "C_{[ijk]} = 0. \\, "
},
{
"math_id": 15,
"text": "\\delta W = (3-n) C, \\, "
},
{
"math_id": 16,
"text": "\\delta"
}
] | https://en.wikipedia.org/wiki?curid=1074140 |
1074464 | Well test | Evaluation of how much water can be pumped from a water well
In hydrology, a well test is conducted to evaluate the amount of water that can be pumped from a particular water well. More specifically, a well test will allow prediction of the maximum rate at which water can be pumped from a well, and the distance that the water level in the well will fall for a given pumping rate and duration of pumping.
Well testing differs from aquifer testing in that the behaviour of the well is primarily of concern in the former, while the characteristics of the aquifer (the geological formation or unit that supplies water to the well) are quantified in the latter.
When water is pumped from a well the water level in the well falls. This fall is called drawdown. The amount of water that can be pumped is limited by the drawdown produced. Typically, drawdown also increases with the length of time that the pumping continues.
Well losses vs. aquifer losses.
The components of observed drawdown in a pumping well were first described by Jacob (1947), and the test was refined independently by Hantush (1964) and Bierschenk (1963) as consisting of two related components,
formula_0,
where s is drawdown (units of length e.g., m), formula_1 is the pumping rate (units of volume flowrate e.g., m³/day), formula_2 is the aquifer loss coefficient (which increases with time — as predicted by the Theis solution) and formula_3 is the well loss coefficient (which is constant for a given flow rate).
The first term of the equation (formula_4) describes the linear component of the drawdown; i.e., the part in which doubling the pumping rate doubles the drawdown.
The second term (formula_5) describes what is often called the 'well losses'; the non-linear component of the drawdown. To quantify this it is necessary to pump the well at several different flow rates (commonly called "steps"). Rorabaugh (1953) added to this analysis by making the exponent an arbitrary power (usually between 1.5 and 3.5).
To analyze this equation, both sides are divided by the discharge rate (formula_1), leaving formula_6 on the left side, which is commonly referred to as "specific drawdown". The right hand side of the equation becomes that of a straight line. Plotting the specific drawdown after a set amount of time (formula_7) since the beginning of each step of the test (since drawdown will continue to increase with time) versus pumping rate should produce a straight line.
formula_8
Fitting a straight line through the observed data, the slope of the best fit line will be formula_3 (well losses) and the intercept of this line with formula_9 will be formula_2 (aquifer losses). This process is fitting an idealized model to real world data, and seeing what parameters in the model make it fit reality best. The assumption is then made that these fitted parameters best represent reality (given the assumptions that went into the model are true).
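The following short numerical sketch (not from the article; the step-test data are invented purely for illustration) shows how the straight-line fit can be done with NumPy's least-squares polynomial fit:

```python
# Hantush-Bierschenk style fit: s/Q = B + C*Q from a step drawdown test.
import numpy as np

Q = np.array([500.0, 1000.0, 1500.0, 2000.0])   # pumping rates, m^3/day (illustrative)
s = np.array([1.6, 3.8, 6.6, 10.0])             # drawdown at fixed elapsed time, m (illustrative)

specific_drawdown = s / Q                        # s/Q for each step
C, B = np.polyfit(Q, specific_drawdown, 1)       # slope C (well loss), intercept B (aquifer loss)
print(f"B (aquifer-loss coefficient) = {B:.2e} day/m^2")
print(f"C (well-loss coefficient)    = {C:.2e} day^2/m^5")
```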
The relationship above is for fully penetrating wells in confined aquifers (the same assumptions used in the Theis solution for determining aquifer characteristics in an aquifer test).
Well efficiency.
Often the "well efficiency" is determined from this sort of test, this is a percentage indicating the fraction of total observed drawdown in a pumping well which is due to aquifer losses (as opposed to being due to flow through the well screen and inside the borehole). A perfectly efficient well, with perfect well screen and where the water flows inside the well in a frictionless manner would have 100% efficiency. Unfortunately well efficiency is hard to compare between wells because it depends on the characteristics of the aquifer too (the same amount of well losses compared to a more transmissive aquifer would give a lower efficiency).
Specific capacity.
Specific capacity is the discharge that a water well can produce per unit of drawdown. It is normally obtained from a step drawdown test. Specific capacity is expressed as:
formula_10
where
formula_11 is the specific capacity ([L2T−1]; m²/day or USgal/day/ft)
formula_1 is the pumping rate ([L3T−1]; m³/day or USgal/day), and
formula_12 is the drawdown ([L]; m or ft)
The specific capacity of a well is also a function of the pumping rate at which it is determined. Due to non-linear well losses the specific capacity will decrease at higher pumping rates. This complication makes the absolute value of specific capacity of little use, though it is useful for comparing the efficiency of the same well through time (e.g., to see if the well requires rehabilitation).
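For illustration only (invented numbers), the specific capacity formula above is a one-line computation:

```python
# Specific capacity S_c = Q / (h0 - h), using illustrative values.
Q = 1200.0         # pumping rate, m^3/day
drawdown = 8.5     # h0 - h, m
print(f"S_c = {Q / drawdown:.1f} m^2/day")   # about 141 m^2/day
```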
References.
Additional references on pumping test analysis methods other than the one described above (typically referred to as the Hantush-Bierschenk method) can be found in the general references on aquifer tests and hydrogeology. | [
{
"math_id": 0,
"text": " s = BQ + CQ^2 "
},
{
"math_id": 1,
"text": "Q"
},
{
"math_id": 2,
"text": "B"
},
{
"math_id": 3,
"text": "C"
},
{
"math_id": 4,
"text": "BQ"
},
{
"math_id": 5,
"text": "CQ^2"
},
{
"math_id": 6,
"text": "s/Q"
},
{
"math_id": 7,
"text": "\\Delta t"
},
{
"math_id": 8,
"text": " \\frac{s}{Q} = B + CQ "
},
{
"math_id": 9,
"text": "Q=0"
},
{
"math_id": 10,
"text": "S_c=\\frac{Q}{h_0 - h}"
},
{
"math_id": 11,
"text": "S_c"
},
{
"math_id": 12,
"text": "h_0 - h"
}
] | https://en.wikipedia.org/wiki?curid=1074464 |
1074742 | Dedekind-infinite set | Set with an equinumerous proper subset
In mathematics, a set "A" is Dedekind-infinite (named after the German mathematician Richard Dedekind) if some proper subset "B" of "A" is equinumerous to "A". Explicitly, this means that there exists a bijective function from "A" onto some proper subset "B" of "A". A set is Dedekind-finite if it is not Dedekind-infinite (i.e., no such bijection exists). Proposed by Dedekind in 1888, Dedekind-infiniteness was the first definition of "infinite" that did not rely on the definition of the natural numbers.
A simple example is formula_0, the set of natural numbers. From Galileo's paradox, there exists a bijection that maps every natural number "n" to its square "n"2. Since the set of squares is a proper subset of formula_0, formula_0 is Dedekind-infinite.
Until the foundational crisis of mathematics showed the need for a more careful treatment of set theory, most mathematicians assumed that a set is infinite if and only if it is Dedekind-infinite. In the early twentieth century, Zermelo–Fraenkel set theory, today the most commonly used form of axiomatic set theory, was proposed as an axiomatic system to formulate a theory of sets free of paradoxes such as Russell's paradox. Using the axioms of Zermelo–Fraenkel set theory with the originally highly controversial axiom of choice included (ZFC) one can show that a set is Dedekind-finite if and only if it is finite in the usual sense. However, there exists a model of Zermelo–Fraenkel set theory without the axiom of choice (ZF) in which there exists an infinite, Dedekind-finite set, showing that the axioms of ZF are not strong enough to prove that every set that is Dedekind-finite is finite. There are definitions of finiteness and infiniteness of sets besides the one given by Dedekind that do not depend on the axiom of choice.
A vaguely related notion is that of a Dedekind-finite ring.
Comparison with the usual definition of infinite set.
This definition of "infinite set" should be compared with the usual definition: a set "A" is infinite when it cannot be put in bijection with a finite ordinal, namely a set of the form {0, 1, 2, ..., "n"−1} for some natural number "n" – an infinite set is one that is literally "not finite", in the sense of bijection.
During the latter half of the 19th century, most mathematicians simply assumed that a set is infinite if and only if it is Dedekind-infinite. However, this equivalence cannot be proved with the axioms of Zermelo–Fraenkel set theory without the axiom of choice (AC) (usually denoted "ZF"). The full strength of AC is not needed to prove the equivalence; in fact, the equivalence of the two definitions is strictly weaker than the axiom of countable choice (CC). (See the references below.)
Dedekind-infinite sets in ZF.
A set "A" is Dedekind-infinite if it satisfies any, and then all, of the following equivalent (over ZF) conditions:
it is dually Dedekind-infinite if:
it is weakly Dedekind-infinite if it satisfies any, and then all, of the following equivalent (over ZF) conditions:
and it is infinite if:
Then, ZF proves the following implications: Dedekind-infinite ⇒ dually Dedekind-infinite ⇒ weakly Dedekind-infinite ⇒ infinite.
There exist models of ZF having an infinite Dedekind-finite set. Let "A" be such a set, and let "B" be the set of finite injective sequences from "A". Since "A" is infinite, the function "drop the last element" from "B" to itself is surjective but not injective, so "B" is dually Dedekind-infinite. However, since "A" is Dedekind-finite, so is "B" (if "B" had a countably infinite subset, then using the fact that the elements of "B" are injective sequences, one could exhibit a countably infinite subset of "A").
When sets have additional structures, both kinds of infiniteness can sometimes be proved equivalent over ZF. For instance, ZF proves that a well-ordered set is Dedekind-infinite if and only if it is infinite.
History.
The term is named after the German mathematician Richard Dedekind, who first explicitly introduced the definition. It is notable that this definition was the first definition of "infinite" that did not rely on the definition of the natural numbers (unless one follows Poincaré and regards the notion of number as prior to even the notion of set). Although such a definition was known to Bernard Bolzano, he was prevented from publishing his work in any but the most obscure journals by the terms of his political exile from the University of Prague in 1819. Moreover, Bolzano's definition was more accurately a relation that held between two infinite sets, rather than a definition of an infinite set "per se".
For a long time, many mathematicians did not even entertain the thought that there might be a distinction between the notions of infinite set and Dedekind-infinite set. In fact, the distinction was not really realised until after Ernst Zermelo formulated the AC explicitly. The existence of infinite, Dedekind-finite sets was studied by Bertrand Russell and Alfred North Whitehead in 1912; these sets were at first called "mediate cardinals" or "Dedekind cardinals".
With the general acceptance of the axiom of choice among the mathematical community, these issues relating to infinite and Dedekind-infinite sets have become less central to most mathematicians. However, the study of Dedekind-infinite sets played an important role in the attempt to clarify the boundary between the finite and the infinite, and also an important role in the history of the AC.
Relation to the axiom of choice.
Since every infinite well-ordered set is Dedekind-infinite, and since the AC is equivalent to the well-ordering theorem stating that every set can be well-ordered, clearly the general AC implies that every infinite set is Dedekind-infinite. However, the equivalence of the two definitions is much weaker than the full strength of AC.
In particular, there exists a model of ZF in which there exists an infinite set with no countably infinite subset. Hence, in this model, there exists an infinite, Dedekind-finite set. By the above, such a set cannot be well-ordered in this model.
If we assume the axiom CC (i. e., ACω), then it follows that every infinite set is Dedekind-infinite. However, the equivalence of these two definitions is in fact strictly weaker than even the CC. Explicitly, there exists a model of ZF in which every infinite set is Dedekind-infinite, yet the CC fails (assuming consistency of ZF).
Proof of equivalence to infinity, assuming axiom of countable choice.
That every Dedekind-infinite set is infinite can be easily proven in ZF: every finite set has by definition a bijection with some finite ordinal "n", and one can prove by induction on "n" that this is not Dedekind-infinite.
By using the axiom of countable choice (denotation: axiom CC) one can prove the converse, namely that every infinite set "X" is Dedekind-infinite, as follows:
First, define a function over the natural numbers (that is, over the finite ordinals) "f" : N → Power(Power("X")), so that for every natural number "n", "f"("n") is the set of finite subsets of "X" of size "n" (i.e. that have a bijection with the finite ordinal "n"). "f"("n") is never empty, or otherwise "X" would be finite (as can be proven by induction on "n").
The image of f is the countable set {"f"("n") | "n" ∈ N}, whose members are themselves infinite (and possibly uncountable) sets. By using the axiom of countable choice we may choose one member from each of these sets, and this member is itself a finite subset of "X". More precisely, according to the axiom of countable choice, a (countable) set exists, "G" = {"g"("n") | "n" ∈ N}, so that for every natural number "n", "g"("n") is a member of "f"("n") and is therefore a finite subset of "X" of size "n".
Now, we define "U" as the union of the members of "G". "U" is an infinite countable subset of "X", and a bijection from the natural numbers to "U", "h" : N → "U", can be easily defined. We may now define a bijection "B" : "X" → "X" \ "h"(0) that takes every member not in "U" to itself, and takes "h"("n") for every natural number to "h"("n" + 1). Hence, "X" is Dedekind-infinite, and we are done.
Generalizations.
Expressed in category-theoretical terms, a set "A" is Dedekind-finite if in the category of sets, every monomorphism "f" : "A" → "A" is an isomorphism. A von Neumann regular ring "R" has the analogous property in the category of (left or right) "R"-modules if and only if in "R", "xy" = 1 implies "yx" = 1. More generally, a "Dedekind-finite ring" is any ring that satisfies the latter condition. Beware that a ring may be Dedekind-finite even if its underlying set is Dedekind-infinite, e.g. the integers.
Notes.
| [
{
"math_id": 0,
"text": "\\mathbb{N}"
}
] | https://en.wikipedia.org/wiki?curid=1074742 |
1074990 | Clebsch–Gordan coefficients | Coefficients in angular momentum eigenstates of quantum systems
In physics, the Clebsch–Gordan (CG) coefficients are numbers that arise in angular momentum coupling in quantum mechanics. They appear as the expansion coefficients of total angular momentum eigenstates in an uncoupled tensor product basis. In more mathematical terms, the CG coefficients are used in representation theory, particularly of compact Lie groups, to perform the explicit direct sum decomposition of the tensor product of two irreducible representations (i.e., a reducible representation into irreducible representations, in cases where the numbers and types of irreducible components are already known abstractly). The name derives from the German mathematicians Alfred Clebsch and Paul Gordan, who encountered an equivalent problem in invariant theory.
From a vector calculus perspective, the CG coefficients associated with the SO(3) group can be defined simply in terms of integrals of products of spherical harmonics and their complex conjugates. The addition of spins in quantum-mechanical terms can be read directly from this approach as spherical harmonics are eigenfunctions of total angular momentum and projection thereof onto an axis, and the integrals correspond to the Hilbert space inner product. From the formal definition of angular momentum, recursion relations for the Clebsch–Gordan coefficients can be found. There also exist complicated explicit formulas for their direct calculation.
The formulas below use Dirac's bra–ket notation and the Condon–Shortley phase convention is adopted.
Review of the angular momentum operators.
Angular momentum operators are self-adjoint operators jx, jy, and jz that satisfy the commutation relations
formula_0
where "ε""klm" is the Levi-Civita symbol. Together the three operators define a "vector operator", a rank one Cartesian tensor operator,
formula_1
It is also known as a spherical vector, since it is also a spherical tensor operator. It is only for rank one that spherical tensor operators coincide with the Cartesian tensor operators.
By developing this concept further, one can define another operator j2 as the inner product of j with itself:
formula_2
This is an example of a Casimir operator. It is diagonal and its eigenvalue characterizes the particular irreducible representation of the angular momentum algebra formula_3. This is physically interpreted as the square of the total angular momentum of the states on which the representation acts.
One can also define "raising" (j+) and "lowering" (j−) operators, the so-called ladder operators,
formula_4
Spherical basis for angular momentum eigenstates.
It can be shown from the above definitions that j2 commutes with jx, jy, and jz:
formula_5
When two Hermitian operators commute, a common set of eigenstates exists. Conventionally, j2 and jz are chosen. From the commutation relations, the possible eigenvalues can be found. These eigenstates are denoted |"j" "m"⟩, where "j" is the "angular momentum quantum number" and "m" is the "angular momentum projection" onto the z-axis.
They comprise the spherical basis, are complete, and satisfy the following eigenvalue equations,
formula_6
The raising and lowering operators can be used to alter the value of "m",
formula_7
where the ladder coefficient is given by:
"C"±("j", "m") = √("j" ("j" + 1) − "m" ("m" ± 1)).     (1)
In principle, one may also introduce a (possibly complex) phase factor in the definition of formula_8. The choice made in this article is in agreement with the Condon–Shortley phase convention. The angular momentum states are orthogonal (because their eigenvalues with respect to a Hermitian operator are distinct) and are assumed to be normalized,
formula_9
Here the italicized "j" and "m" denote integer or half-integer angular momentum quantum numbers of a particle or of a system. On the other hand, the roman jx, jy, jz, j+, j−, and j2 denote operators. The formula_10 symbols are Kronecker deltas.
Tensor product space.
We now consider systems with two physically different angular momenta "j"1 and "j"2. Examples include the spin and the orbital angular momentum of a single electron, or the spins of two electrons, or the orbital angular momenta of two electrons. Mathematically, this means that the angular momentum operators act on a space formula_11 of dimension formula_12 and also on a space formula_13 of dimension formula_14. We are then going to define a family of "total angular momentum" operators acting on the tensor product space formula_15, which has dimension formula_16. The action of the total angular momentum operator on this space constitutes a representation of the SU(2) Lie algebra, but a reducible one. The reduction of this reducible representation into irreducible pieces is the goal of Clebsch–Gordan theory.
Let "V"1 be the (2 "j"1 + 1)-dimensional vector space spanned by the states
formula_17
and "V"2 the (2 "j"2 + 1)-dimensional vector space spanned by the states
formula_18
The tensor product of these spaces, "V"3 ≡ "V"1 ⊗ "V"2, has a (2 "j"1 + 1) (2 "j"2 + 1)-dimensional "uncoupled" basis
formula_19
Angular momentum operators are defined to act on states in "V"3 in the following manner:
formula_20
and
formula_21
where 1 denotes the identity operator.
The total angular momentum operators are defined by the coproduct (or tensor product) of the two representations acting on "V"1⊗"V"2,
formula_22
The total angular momentum operators can be shown to "satisfy the very same commutation relations",
formula_23
where "k", "l", "m" ∈ {"x", "y", "z"}. Indeed, the preceding construction is the standard method for constructing an action of a Lie algebra on a tensor product representation.
Hence, a set of "coupled" eigenstates exist for the total angular momentum operator as well,
formula_24
for "M" ∈ {−"J", −"J" + 1, ..., "J"}. Note that it is common to omit the ["j"1 "j"2] part.
The total angular momentum quantum number "J" must satisfy the triangular condition that
formula_25
such that the three nonnegative integer or half-integer values could correspond to the three sides of a triangle.
The total number of total angular momentum eigenstates is necessarily equal to the dimension of "V"3:
formula_26
As this computation suggests, the tensor product representation decomposes as the direct sum of one copy of each of the irreducible representations of dimension formula_27, where formula_28 ranges from formula_29 to formula_30 in increments of 1. As an example, consider the tensor product of the three-dimensional representation corresponding to formula_31 with the two-dimensional representation with formula_32. The possible values of formula_28 are then formula_33 and formula_34. Thus, the six-dimensional tensor product representation decomposes as the direct sum of a two-dimensional representation and a four-dimensional representation.
The goal is now to describe the preceding decomposition explicitly, that is, to explicitly describe basis elements in the tensor product space for each of the component representations that arise.
The total angular momentum states form an orthonormal basis of "V"3:
formula_35
These rules may be iterated to, e.g., combine n doublets (s=1/2) to obtain the Clebsch-Gordan decomposition series (Catalan's triangle),
formula_36
where formula_37 is the integer floor function; and the number preceding the boldface irreducible representation dimensionality (2"j"+1) label indicates multiplicity of that representation in the representation reduction. For instance, from this formula, addition of three spin 1/2s yields a spin 3/2 and two spin 1/2s, formula_38.
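A quick Python check (not part of the article) of this multiplicity formula for small numbers of doublets:

```python
# Multiplicities of each total-spin representation when n doublets are combined.
from math import comb

def doublet_power_decomposition(n):
    """Return {dimension (2j+1): multiplicity} for the n-fold tensor power of spin 1/2."""
    out = {}
    for k in range(n // 2 + 1):
        mult = (n + 1 - 2 * k) * comb(n + 1, k) // (n + 1)
        out[n + 1 - 2 * k] = mult
    return out

print(doublet_power_decomposition(3))   # {4: 1, 2: 2} -> one spin 3/2 and two spin 1/2
print(doublet_power_decomposition(4))   # {5: 1, 3: 3, 1: 2}
```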
Formal definition of Clebsch–Gordan coefficients.
The coupled states can be expanded via the completeness relation (resolution of identity) in the uncoupled basis:
|["j"1 "j"2] "J" "M"⟩ = Σ"m"1,"m"2 |"j"1 "m"1 "j"2 "m"2⟩ ⟨"j"1 "m"1 "j"2 "m"2|"J" "M"⟩.
The expansion coefficients
formula_39
are the "Clebsch–Gordan coefficients". Note that some authors write them in a different order such as . Another common notation is
⟨"j"1 "m"1 "j"2 "m"2|"J" "M"⟩ = "C".
Applying the operators
formula_40
to both sides of the defining equation shows that the Clebsch–Gordan coefficients can only be nonzero when
formula_41
Recursion relations.
The recursion relations were discovered by physicist Giulio Racah from the Hebrew University of Jerusalem in 1941.
Applying the total angular momentum raising and lowering operators
formula_42
to the left hand side of the defining equation gives
formula_43
Applying the same operators to the right hand side gives
formula_44
Combining these results gives recursion relations for the Clebsch–Gordan coefficients, where "C"± was defined in (1):
formula_45
Taking the upper sign with the condition that "M" = "J" gives the initial recursion relation:
formula_46
In the Condon–Shortley phase convention, one adds the constraint that
formula_47
(and is therefore also real).
The Clebsch–Gordan coefficients ⟨"j"1 "m"1 "j"2 "m"2 | "J" "M"⟩ can then be found from these recursion relations. The normalization is fixed by the requirement that the sum of the squares of the coefficients equals one, which is equivalent to the requirement that the norm of the state |["j"1 "j"2] "J" "J"⟩ must be one.
The lower sign in the recursion relation can be used to find all the Clebsch–Gordan coefficients with "M" = "J" − 1. Repeated use of that equation gives all coefficients.
This procedure to find the Clebsch–Gordan coefficients shows that they are all real in the Condon–Shortley phase convention.
Orthogonality relations.
These are most clearly written down by introducing the alternative notation
formula_48
The first orthogonality relation is
formula_49
(derived from the fact that formula_50) and the second one is
formula_51
Special cases.
For "J"
0 the Clebsch–Gordan coefficients are given by
formula_52
For "J"
"j"1 + "j"2 and "M"
"J" we have
formula_53
For "j"1
"j"2
"J" / 2 and "m"1
−"m"2 we have
formula_54
For "j"1
"j"2
"m"1
−"m"2 we have
formula_55
For "j"2
1, "m"2
0 we have
formula_56
For "j"2
1/2 we have
formula_57
Symmetry properties.
formula_58
A convenient way to derive these relations is by converting the Clebsch–Gordan coefficients to Wigner 3-j symbols using 3. The symmetry properties of Wigner 3-j symbols are much simpler.
Rules for phase factors.
Care is needed when simplifying phase factors: a quantum number may be a half-integer rather than an integer, therefore (−1)^(2"k") is not necessarily 1 for a given quantum number "k" unless it can be proven to be an integer. Instead, it is replaced by the following weaker rule:
formula_59
for any angular-momentum-like quantum number "k".
Nonetheless, a combination of "ji" and "mi" is always an integer, so the stronger rule applies for these combinations:
formula_60
This identity also holds if the sign of either "ji" or "mi" or both is reversed.
It is useful to observe that any phase factor for a given ("ji", "mi") pair can be reduced to the canonical form:
formula_61
where "a" ∈ {0, 1, 2, 3} and "b" ∈ {0, 1} (other conventions are possible too). Converting phase factors into this form makes it easy to tell whether two phase factors are equivalent. (Note that this form is only "locally" canonical: it fails to take into account the rules that govern combinations of ("ji", "mi") pairs such as the one described in the next paragraph.)
An additional rule holds for combinations of "j"1, "j"2, and "j"3 that are related by a Clebsch-Gordan coefficient or Wigner 3-j symbol:
formula_62
This identity also holds if the sign of any "ji" is reversed, or if any of them are substituted with an "mi" instead.
Relation to Wigner 3-j symbols.
Clebsch–Gordan coefficients are related to Wigner 3-j symbols which have more convenient symmetry relations.
The factor (−1)^(2"j"2) is due to the Condon–Shortley constraint that ⟨"j"1 "j"1 "j"2 ("J" − "j"1)|"J J"⟩ > 0, while (−1)^("J" − "M") is due to the time-reversed nature of |"J M"⟩.
This allows one to reach the general expression:
formula_63
The summation is performed over those integer values k for which the argument of each factorial in the denominator is non-negative, i.e. the summation limits "K" and "N" are taken to be: the lower one formula_64 the upper one formula_65 Factorials of negative numbers are conventionally taken equal to zero, so that the values of the 3"j" symbol at, for example, formula_66 or formula_67 are automatically set to zero.
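As a sketch (not part of the article), the explicit formula above can be implemented directly; exact rational arithmetic is used for the radicand and the sum, and the function and variable names are illustrative:

```python
from fractions import Fraction as F
from math import factorial, sqrt

def fact(x):
    """Factorial of a nonnegative integer-valued number (int or Fraction)."""
    assert x >= 0 and x == int(x)
    return factorial(int(x))

def clebsch_gordan(j1, m1, j2, m2, J, M):
    """<j1 m1 j2 m2 | J M> from the explicit formula quoted above."""
    if m1 + m2 != M or not abs(j1 - j2) <= J <= j1 + j2:
        return 0.0
    pref = F(2 * J + 1) * fact(j1 + j2 - J) * fact(j1 - j2 + J) * fact(-j1 + j2 + J) / fact(j1 + j2 + J + 1)
    pref *= fact(j1 - m1) * fact(j1 + m1) * fact(j2 - m2) * fact(j2 + m2) * fact(J - M) * fact(J + M)
    k = max(0, j2 - J - m1, j1 - J + m2)          # lower summation limit K
    k_max = min(j1 + j2 - J, j1 - m1, j2 + m2)    # upper summation limit N
    total = F(0)
    while k <= k_max:
        total += F((-1) ** int(k)) / (fact(k) * fact(j1 + j2 - J - k) * fact(j1 - m1 - k)
                                      * fact(j2 + m2 - k) * fact(J - j2 + m1 + k) * fact(J - j1 - m2 + k))
        k += 1
    return float(total) * sqrt(float(pref))

half = F(1, 2)
# Two spin-1/2s: the triplet |1 0> has both coefficients +1/sqrt(2); the singlet picks up a sign.
print(clebsch_gordan(half,  half, half, -half, 1, 0))   # ~ 0.7071
print(clebsch_gordan(half, -half, half,  half, 1, 0))   # ~ 0.7071
print(clebsch_gordan(half, -half, half,  half, 0, 0))   # ~ -0.7071
```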
Relation to Wigner D-matrices.
formula_68
Relation to spherical harmonics.
In the case where integers are involved, the coefficients can be related to integrals of spherical harmonics:
formula_69
It follows from this and orthonormality of the spherical harmonics that CG coefficients are in fact the expansion coefficients of a product of two spherical harmonics in terms of a single spherical harmonic:
formula_70
Other properties.
formula_71
Clebsch–Gordan coefficients for specific groups.
For arbitrary groups and their representations, Clebsch–Gordan coefficients are not known in general. However, algorithms to produce Clebsch–Gordan coefficients for the special unitary group SU("n") are known. In particular, SU(3) Clebsch-Gordan coefficients have been computed and tabulated because of their utility in characterizing hadronic decays, where a flavor-SU(3) symmetry exists that relates the up, down, and strange quarks. A web interface for tabulating SU(N) Clebsch–Gordan coefficients is readily available.
See also.
Remarks.
Notes.
Further reading.
| [
{
"math_id": 0,
"text": "\n\\begin{align}\n &[\\mathrm{j}_k, \\mathrm{j}_l]\n \\equiv \\mathrm{j}_k \\mathrm{j}_l - \\mathrm{j}_l \\mathrm{j}_k\n = i \\hbar \\varepsilon_{klm} \\mathrm{j}_m\n & k, l, m &\\in \\{ \\mathrm{x, y, z}\\},\n\\end{align}\n"
},
{
"math_id": 1,
"text": "\n\\mathbf j = (\\mathrm{j_x}, \\mathrm{j_y}, \\mathrm{j_z}).\n"
},
{
"math_id": 2,
"text": "\n\\mathbf j^2 = \\mathrm{j_x^2} + \\mathrm{j_y^2} + \\mathrm{j_z^2}.\n"
},
{
"math_id": 3,
"text": "\\mathfrak{so}(3,\\mathbb{R}) \\cong \\mathfrak{su}(2)"
},
{
"math_id": 4,
"text": "\n\\mathrm {j_\\pm} = \\mathrm{j_x} \\pm i \\mathrm{j_y}.\n"
},
{
"math_id": 5,
"text": "\n\\begin{align}\n &[\\mathbf j^2, \\mathrm {j}_k] = 0 & k &\\in \\{\\mathrm x, \\mathrm y, \\mathrm z\\}.\n\\end{align}\n"
},
{
"math_id": 6,
"text": "\n\\begin{align}\n \\mathbf j^2 |j \\, m\\rangle &= \\hbar^2 j (j + 1) |j \\, m\\rangle, & j &\\in \\{0, \\tfrac{1}{2}, 1, \\tfrac{3}{2}, \\ldots\\} \\\\\n \\mathrm{j_z} |j \\, m\\rangle &= \\hbar m |j \\, m\\rangle, & m &\\in \\{-j, -j + 1, \\ldots, j\\}.\n\\end{align}\n"
},
{
"math_id": 7,
"text": "\n \\mathrm j_\\pm |j \\, m\\rangle = \\hbar C_\\pm(j, m) |j \\, (m \\pm 1)\\rangle,\n"
},
{
"math_id": 8,
"text": "C_\\pm(j, m)"
},
{
"math_id": 9,
"text": " \n\\langle j \\, m | j' \\, m' \\rangle = \\delta_{j, j'} \\delta_{m, m'}.\n"
},
{
"math_id": 10,
"text": "\\delta"
},
{
"math_id": 11,
"text": "V_1"
},
{
"math_id": 12,
"text": "2j_1+1"
},
{
"math_id": 13,
"text": "V_2"
},
{
"math_id": 14,
"text": "2j_2 + 1"
},
{
"math_id": 15,
"text": "V_1 \\otimes V_2"
},
{
"math_id": 16,
"text": "(2j_1+1)(2j_2+1)"
},
{
"math_id": 17,
"text": "\n\\begin{align}\n &|j_1 \\, m_1\\rangle, & m_1 &\\in \\{-j_1, -j_1 + 1, \\ldots, j_1\\}\n\\end{align},\n"
},
{
"math_id": 18,
"text": "\n\\begin{align}\n &|j_2 \\, m_2\\rangle, & m_2 &\\in \\{-j_2, -j_2 + 1, \\ldots, j_2\\}\n\\end{align}.\n"
},
{
"math_id": 19,
"text": "\n |j_1 \\, m_1 \\, j_2 \\, m_2\\rangle \\equiv |j_1 \\, m_1\\rangle \\otimes |j_2 \\, m_2\\rangle,\n \\quad m_1 \\in \\{-j_1, -j_1 + 1, \\ldots, j_1\\},\n \\quad m_2 \\in \\{-j_2, -j_2 + 1, \\ldots, j_2\\}.\n"
},
{
"math_id": 20,
"text": "\n (\\mathbf j \\otimes 1) |j_1 \\, m_1 \\, j_2 \\, m_2\\rangle \\equiv \\mathbf j |j_1 \\, m_1\\rangle \\otimes |j_2 \\, m_2\\rangle\n"
},
{
"math_id": 21,
"text": "\n (1 \\otimes \\mathrm \\mathbf j) |j_1 \\, m_1 \\, j_2 \\, m_2\\rangle \\equiv |j_1 \\, m_1\\rangle \\otimes \\mathbf j |j_2 \\, m_2\\rangle,\n"
},
{
"math_id": 22,
"text": "\\mathbf{J} \\equiv \\mathbf{j}_1 \\otimes 1 + 1 \\otimes \\mathbf{j}_2~."
},
{
"math_id": 23,
"text": "\n [\\mathrm{J}_k, \\mathrm{J}_l] = i \\hbar \\varepsilon_{k l m} \\mathrm{J}_m ~,\n"
},
{
"math_id": 24,
"text": "\n\\begin{align}\n \\mathbf{J}^2 |[j_1 \\, j_2] \\, J \\, M\\rangle &= \\hbar^2 J (J + 1) |[j_1 \\, j_2] \\, J \\, M\\rangle \\\\\n \\mathrm{J_z} |[j_1 \\, j_2] \\, J \\, M\\rangle &= \\hbar M |[j_1 \\, j_2] \\, J \\, M\\rangle\n\\end{align}\n"
},
{
"math_id": 25,
"text": "\n |j_1 - j_2| \\leq J \\leq j_1 + j_2,\n"
},
{
"math_id": 26,
"text": "\n \\sum_{J = |j_1 - j_2|}^{j_1 + j_2} (2 J + 1) = (2 j_1 + 1) (2 j_2 + 1) ~.\n"
},
{
"math_id": 27,
"text": "2J+1"
},
{
"math_id": 28,
"text": "J"
},
{
"math_id": 29,
"text": "|j_1 - j_2|"
},
{
"math_id": 30,
"text": "j_1 + j_2"
},
{
"math_id": 31,
"text": "j_1 = 1"
},
{
"math_id": 32,
"text": "j_2 = 1/2"
},
{
"math_id": 33,
"text": "J = 1/2"
},
{
"math_id": 34,
"text": "J = 3/2"
},
{
"math_id": 35,
"text": "\n \\left\\langle J\\, M | J'\\, M' \\right\\rangle = \\delta_{J, J'}\\delta_{M, M'}~.\n"
},
{
"math_id": 36,
"text": "\n \\mathbf{2}^{\\otimes n} = \\bigoplus_{k=0}^{\\lfloor n/2 \\rfloor}~\n \\left(\\frac{n + 1 - 2k}{n + 1}{n + 1 \\choose k}\\right)~(\\mathbf{n} + \\mathbf{1} - \\mathbf{2}\\mathbf{k})~,\n"
},
{
"math_id": 37,
"text": "\\lfloor n/2 \\rfloor"
},
{
"math_id": 38,
"text": "{\\mathbf 2}\\otimes{\\mathbf 2}\\otimes{\\mathbf 2} = {\\mathbf 4}\\oplus{\\mathbf 2}\\oplus{\\mathbf 2}"
},
{
"math_id": 39,
"text": "\n\\langle j_1 \\, m_1 \\, j_2 \\, m_2 | J \\, M \\rangle\n"
},
{
"math_id": 40,
"text": "\n\\begin{align}\n \\mathrm J&=\\mathrm j \\otimes 1+1\\otimes\\mathrm j \\\\\n \\mathrm J_{\\mathrm z}&=\\mathrm j_{\\mathrm z}\\otimes 1+1\\otimes\\mathrm j_{\\mathrm z}\n\\end{align}\n"
},
{
"math_id": 41,
"text": "\n\\begin{align}\n |j_1 - j_2| \\leq J &\\leq j_1 + j_2 \\\\\n M &= m_1 + m_2.\n\\end{align}\n"
},
{
"math_id": 42,
"text": "\n \\mathrm J_\\pm = \\mathrm j_\\pm \\otimes 1 + 1 \\otimes \\mathrm j_\\pm\n"
},
{
"math_id": 43,
"text": "\n\\begin{align}\n \\mathrm J_\\pm |[j_1 \\, j_2] \\, J \\, M\\rangle\n &= \\hbar C_\\pm(J, M) |[j_1 \\, j_2] \\, J \\, (M \\pm 1)\\rangle \\\\\n &= \\hbar C_\\pm(J, M)\n \\sum_{m_1, m_2}\n |j_1 \\, m_1 \\, j_2 \\, m_2\\rangle\n \\langle j_1 \\, m_1 \\, j_2 \\, m_2 | J \\, (M \\pm 1)\\rangle\n\\end{align}\n"
},
{
"math_id": 44,
"text": "\n\\begin{align}\n \\mathrm J_\\pm &\\sum_{m_1, m_2}\n |j_1 \\, m_1 \\, j_2 \\, m_2\\rangle\n \\langle j_1 \\, m_1 \\, j_2 \\, m_2 | J \\, M\\rangle \\\\\n = \\hbar &\\sum_{m_1, m_2} \\Bigl(\n C_\\pm(j_1, m_1) |j_1 \\, (m_1 \\pm 1) \\, j_2 \\, m_2\\rangle\n + C_\\pm(j_2, m_2) |j_1 \\, m_1 \\, j_2 \\, (m_2 \\pm 1)\\rangle\n \\Bigr) \\langle j_1 \\, m_1 \\, j_2 \\, m_2 | J \\, M\\rangle \\\\\n = \\hbar &\\sum_{m_1, m_2} |j_1 \\, m_1 \\, j_2 \\, m_2\\rangle \\Bigl(\n C_\\pm(j_1, m_1 \\mp 1) \\langle j_1 \\, (m_1 \\mp 1) \\, j_2 \\, m_2 | J \\, M\\rangle\n + C_\\pm(j_2, m_2 \\mp 1) \\langle j_1 \\, m_1 \\, j_2 \\, (m_2 \\mp 1) | J \\, M\\rangle\n \\Bigr) .\n\\end{align}\n"
},
{
"math_id": 45,
"text": "\n C_\\pm(J, M) \\langle j_1 \\, m_1 \\, j_2 \\, m_2 | J \\, (M \\pm 1)\\rangle\n = C_\\pm(j_1, m_1 \\mp 1) \\langle j_1 \\, (m_1 \\mp 1) \\, j_2 \\, m_2 | J \\, M\\rangle\n + C_\\pm(j_2, m_2 \\mp 1) \\langle j_1 \\, m_1 \\, j_2 \\, (m_2 \\mp 1) | J \\, M\\rangle.\n"
},
{
"math_id": 46,
"text": "\n 0 = C_+(j_1, m_1 - 1) \\langle j_1 \\, (m_1 - 1) \\, j_2 \\, m_2 | J \\, J\\rangle\n + C_+(j_2, m_2 - 1) \\langle j_1 \\, m_1 \\, j_2 \\, (m_2 - 1) | J \\, J\\rangle.\n"
},
{
"math_id": 47,
"text": "\\langle j_1 \\, j_1 \\, j_2 \\, (J - j_1) | J \\, J\\rangle > 0"
},
{
"math_id": 48,
"text": "\n \\langle J \\, M | j_1 \\, m_1 \\, j_2 \\, m_2 \\rangle \\equiv \\langle j_1 \\, m_1 \\, j_2 \\, m_2 | J \\, M \\rangle\n"
},
{
"math_id": 49,
"text": "\n \\sum_{J = |j_1 - j_2|}^{j_1 + j_2} \\sum_{M = -J}^J\n \\langle j_1 \\, m_1 \\, j_2 \\, m_2 | J \\, M \\rangle\n \\langle J \\, M | j_1 \\, m_1' \\, j_2 \\, m_2' \\rangle\n = \\langle j_1 \\, m_1 \\, j_2 \\, m_2 | j_1 \\, m_1' \\, j_2 \\, m_2' \\rangle\n = \\delta_{m_1, m_1'} \\delta_{m_2, m_2'}\n"
},
{
"math_id": 50,
"text": "\\mathbf 1 = \\sum_x |x\\rangle \\langle x|"
},
{
"math_id": 51,
"text": "\n \\sum_{m_1, m_2}\n \\langle J \\, M | j_1 \\, m_1 \\, j_2 \\, m_2 \\rangle\n \\langle j_1 \\, m_1 \\, j_2 \\, m_2 | J' \\, M' \\rangle\n = \\langle J \\, M | J' \\, M' \\rangle\n = \\delta_{J, J'} \\delta_{M, M'}.\n"
},
{
"math_id": 52,
"text": "\n \\langle j_1 \\, m_1 \\, j_2 \\, m_2 | 0 \\, 0 \\rangle\n = \\delta_{j_1, j_2} \\delta_{m_1, -m_2} \\frac{(-1)^{j_1 - m_1}}{\\sqrt{2 j_1 + 1}}.\n"
},
{
"math_id": 53,
"text": " \\langle j_1 \\, j_1 \\, j_2 \\, j_2 | (j_1 + j_2) \\, (j_1 + j_2) \\rangle = 1."
},
{
"math_id": 54,
"text": "\n \\langle j_1 \\, m_1 \\, j_1 \\, (-m_1) | (2 j_1) \\, 0 \\rangle\n = \\frac{(2 j_1)!^2}{(j_1 - m_1)! (j_1 + m_1)! \\sqrt{(4 j_1)!}}.\n"
},
{
"math_id": 55,
"text": "\n \\langle j_1 \\, j_1 \\, j_1 \\, (-j_1) | J \\, 0 \\rangle\n = (2 j_1)! \\sqrt{\\frac{2 J + 1}{(J + 2 j_1 + 1)! (2 j_1 - J)!}}.\n"
},
{
"math_id": 56,
"text": "\n\\begin{align}\n \\langle j_1 \\, m \\, 1 \\, 0 | (j_1 + 1) \\, m \\rangle &= \\sqrt{\\frac{(j_1 - m + 1) (j_1 + m + 1)}{(2 j_1 + 1) (j_1 + 1)}} \\\\\n \\langle j_1 \\, m \\, 1 \\, 0 | j_1 \\, m \\rangle &= \\frac{m}{\\sqrt{j_1 (j_1 + 1)}} \\\\\n \\langle j_1 \\, m \\, 1 \\, 0 | (j_1 - 1) \\, m \\rangle &= -\\sqrt{\\frac{(j_1 - m) (j_1 + m)}{j_1 (2 j_1 + 1)}}\n\\end{align}\n"
},
{
"math_id": 57,
"text": "\n\\begin{align}\n \\left\\langle j_1 \\, \\left( M - \\frac{1}{2} \\right) \\, \\frac{1}{2} \\, \\frac{1}{2} \\Bigg| \\left( j_1 \\pm \\frac{1}{2} \\right) \\, M \\right\\rangle &= \\pm \\sqrt{\\frac{1}{2} \\left( 1 \\pm \\frac{M}{j_1 + \\frac{1}{2}} \\right)} \\\\\n \\left\\langle j_1 \\, \\left( M + \\frac{1}{2} \\right) \\, \\frac{1}{2} \\, \\left( -\\frac{1}{2} \\right) \\Bigg| \\left( j_1 \\pm \\frac{1}{2} \\right) \\, M \\right\\rangle &= \\sqrt{\\frac{1}{2} \\left( 1 \\mp \\frac{M}{j_1 + \\frac{1}{2}} \\right)}\n\\end{align}\n"
},
{
"math_id": 58,
"text": "\n\\begin{align}\n\\langle j_1 \\, m_1 \\, j_2 \\, m_2 | J \\, M \\rangle\n &= (-1)^{j_1 + j_2 - J} \\langle j_1 \\, (-m_1) \\, j_2 \\, (-m_2) | J \\, (-M)\\rangle \\\\\n &= (-1)^{j_1 + j_2 - J} \\langle j_2 \\, m_2 \\, j_1 \\, m_1 | J \\, M \\rangle \\\\\n &= (-1)^{j_1 - m_1} \\sqrt{\\frac{2 J + 1}{2 j_2 + 1}} \\langle j_1 \\, m_1 \\, J \\, (-M)| j_2 \\, (-m_2) \\rangle \\\\\n &= (-1)^{j_2 + m_2} \\sqrt{\\frac{2 J + 1}{2 j_1 + 1}} \\langle J \\, (-M) \\, j_2 \\, m_2| j_1 \\, (-m_1) \\rangle \\\\\n &= (-1)^{j_1 - m_1} \\sqrt{\\frac{2 J + 1}{2 j_2 + 1}} \\langle J \\, M \\, j_1 \\, (-m_1) | j_2 \\, m_2 \\rangle \\\\\n &= (-1)^{j_2 + m_2} \\sqrt{\\frac{2 J + 1}{2 j_1 + 1}} \\langle j_2 \\, (-m_2) \\, J \\, M | j_1 \\, m_1 \\rangle\n\\end{align}\n"
},
{
"math_id": 59,
"text": "(-1)^{4 k} = 1"
},
{
"math_id": 60,
"text": "(-1)^{2 (j_i - m_i)} = 1"
},
{
"math_id": 61,
"text": "(-1)^{a j_i + b (j_i - m_i)}"
},
{
"math_id": 62,
"text": "(-1)^{2 (j_1 + j_2 + j_3)} = 1"
},
{
"math_id": 63,
"text": "\n\\begin{align}\n \\langle j_1 \\, m_1 \\, j_2 \\, m_2 | J \\, M \\rangle\n& \\equiv \\delta(m_1+m_2,M) \\sqrt{\\frac{(2J+1)(j_1+j_2-J)!(j_1-j_2+J)!(-j_1+j_2+J)!}{(j_1+j_2+J+1)!}}\\ \\times {} \\\\[6pt]\n&\\times\\sqrt{(j_1-m_1)!(j_1+m_1)!(j_2-m_2)!(j_2+m_2)!(J-M)!(J+M)!}\\ \\times {} \\\\[6pt]\n&\\times\\sum_{k=K}^N \\frac{(-1)^k}{k!(j_1+j_2-J-k)!(j_1-m_1-k)!(j_2+m_2-k)!(J-j_2+m_1+k)!(J-j_1-m_2+k)!},\n\\end{align}.\n"
},
{
"math_id": 64,
"text": "K=\\max(0, j_2-J-m_1, j_1-J+m_2),"
},
{
"math_id": 65,
"text": "N=\\min(j_1+j_2-J, j_1-m_1, j_2+m_2)."
},
{
"math_id": 66,
"text": "j_3>j_1+j_2"
},
{
"math_id": 67,
"text": "j_1<m_1"
},
{
"math_id": 68,
"text": "\n\\begin{align}\n &\\int_0^{2 \\pi} d \\alpha \\int_0^\\pi \\sin \\beta \\, d\\beta \\int_0^{2 \\pi} d \\gamma \\,\n D^J_{M, K}(\\alpha, \\beta, \\gamma)^*\n D^{j_1}_{m_1, k_1}(\\alpha, \\beta, \\gamma)\n D^{j_2}_{m_2, k_2}(\\alpha, \\beta, \\gamma) \\\\\n {}={} &\\frac{8 \\pi^2}{2 J + 1}\n \\langle j_1 \\, m_1 \\, j_2 \\, m_2 | J \\, M \\rangle\n \\langle j_1 \\, k_1 \\, j_2 \\, k_2 | J \\, K \\rangle\n\\end{align}\n"
},
{
"math_id": 69,
"text": "\n \\int_{4 \\pi} Y_{\\ell_1}^{m_1}{}^*(\\Omega) Y_{\\ell_2}^{m_2}{}^*(\\Omega) Y_L^M (\\Omega) \\, d \\Omega\n = \\sqrt{\\frac{(2 \\ell_1 + 1) (2 \\ell_2 + 1)}{4 \\pi (2 L + 1)}}\n \\langle \\ell_1 \\, 0 \\, \\ell_2 \\, 0 | L \\, 0 \\rangle\n \\langle \\ell_1 \\, m_1 \\, \\ell_2 \\, m_2 | L \\, M \\rangle\n"
},
{
"math_id": 70,
"text": "\n Y_{\\ell_1}^{m_1}(\\Omega) Y_{\\ell_2}^{m_2}(\\Omega)\n = \\sum_{L, M}\n \\sqrt{\\frac{(2 \\ell_1 + 1) (2 \\ell_2 + 1)}{4 \\pi (2 L + 1)}}\n \\langle \\ell_1 \\, 0 \\, \\ell_2 \\, 0 | L \\, 0 \\rangle\n \\langle \\ell_1 \\, m_1 \\, \\ell_2 \\, m_2 | L \\, M \\rangle\n Y_L^M (\\Omega)\n"
},
{
"math_id": 71,
"text": "\\sum_m (-1)^{j - m} \\langle j \\, m \\, j \\, (-m) | J \\, 0 \\rangle = \\delta_{J, 0} \\sqrt{2 j + 1}"
}
] | https://en.wikipedia.org/wiki?curid=1074990 |
1074997 | Lamb shift | Difference in energy of hydrogenic atom electron states not predicted by the Dirac equation
In physics, the Lamb shift, named after Willis Lamb, is an anomalous difference in energy between two electron orbitals in a hydrogen atom. The difference was not predicted by theory and it cannot be derived from the Dirac equation, which predicts identical energies. Hence the Lamb "shift" is a deviation from theory seen in the differing energies contained by the 2"S"1/2 and 2"P"1/2 orbitals of the hydrogen atom.
The Lamb shift is caused by interactions between the virtual photons created through vacuum energy fluctuations and the electron as it moves around the hydrogen nucleus in each of these two orbitals. The Lamb shift has since played a significant role through vacuum energy fluctuations in theoretical prediction of Hawking radiation from black holes.
This effect was first measured in 1947 in the Lamb–Retherford experiment on the hydrogen microwave spectrum and this measurement provided the stimulus for renormalization theory to handle the divergences. It was the harbinger of modern quantum electrodynamics developed by Julian Schwinger, Richard Feynman, Ernst Stueckelberg, Sin-Itiro Tomonaga and Freeman Dyson. Lamb won the Nobel Prize in Physics in 1955 for his discoveries related to the Lamb shift.
Importance.
In 1978, on Lamb's 65th birthday, Freeman Dyson addressed him as follows: "Those years, when the Lamb shift was the central theme of physics, were golden years for all the physicists of my generation. You were the first to see that this tiny shift, so elusive and hard to measure, would clarify our thinking about particles and fields."
Derivation.
This heuristic derivation of the electrodynamic level shift follows Theodore A. Welton's approach.
The fluctuations in the electric and magnetic fields associated with the QED vacuum perturb the electric potential due to the atomic nucleus. This perturbation causes a fluctuation in the position of the electron, which explains the energy shift. The difference of potential energy is given by
formula_0
Since the fluctuations are isotropic,
formula_1
formula_2
So one can obtain
formula_3
The classical equation of motion for the electron displacement ("δr")"k" induced by a single mode of the field of wave vector "k" and frequency "ν" is
formula_4
and this is valid only when the frequency "ν" is greater than "ν"0 in the Bohr orbit, formula_5. The electron is unable to respond to the fluctuating field if the fluctuation frequency is smaller than the natural orbital frequency in the atom.
For the field oscillating at "ν",
formula_6
therefore
formula_7
where formula_8 is some large normalization volume (the volume of the hypothetical "box" containing the hydrogen atom), and formula_9 denotes the hermitian conjugate of the preceding term. By the summation over all formula_10
formula_11
This result diverges if no limits are placed on the integral (at both large and small frequencies). As mentioned above, this method is expected to be valid only when formula_5, or equivalently formula_12. It is also valid only for wavelengths longer than the Compton wavelength, or equivalently formula_13. Therefore, one can choose the upper and lower limit of the integral and these limits make the result converge.
formula_14.
For the atomic orbital and the Coulomb potential,
formula_15
since it is known that
formula_16
For "p" orbitals, the nonrelativistic wave function vanishes at the origin (at the nucleus), so there is no energy shift. But for "s" orbitals there is some finite value at the origin,
formula_17
where the Bohr radius is
formula_18
Therefore,
formula_19.
Finally, the difference of the potential energy becomes:
formula_20
where formula_21 is the fine-structure constant. This shift is about 500 MHz, within an order of magnitude of the observed shift of 1057 MHz; the observed shift corresponds to an energy of only 7.00 × 10^-25 J, or 4.37 × 10^-6 eV.
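A quick numerical check of this estimate (not part of the article; CODATA constant values are used):

```python
import math

alpha = 7.2973525693e-3       # fine-structure constant
me_c2_eV = 0.51099895e6       # electron rest energy, eV
h_eV_s = 4.135667696e-15      # Planck constant, eV*s

dE = alpha**5 * me_c2_eV * math.log(1 / (math.pi * alpha)) / (6 * math.pi)
print(f"Welton estimate: {dE:.2e} eV  ~  {dE / h_eV_s / 1e6:.0f} MHz")
# about 2.1e-06 eV, i.e. roughly 510 MHz -- the "about 500 MHz" quoted above
```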
Welton's heuristic derivation of the Lamb shift is similar to, but distinct from, the calculation of the Darwin term using Zitterbewegung, a contribution to the fine structure that is of lower order in formula_21 than the Lamb shift.
Lamb–Retherford experiment.
In 1947 Willis Lamb and Robert Retherford carried out an experiment using microwave techniques to stimulate radio-frequency transitions between
2"S"1/2 and 2"P"1/2 levels of hydrogen. By using lower frequencies than for optical transitions the Doppler broadening could be neglected (Doppler broadening is proportional to the frequency). The energy difference Lamb and Retherford found was a rise of about 1000 MHz (0.03 cm−1) of the 2"S"1/2 level above the 2"P"1/2 level.
This particular difference is a one-loop effect of quantum electrodynamics, and can be interpreted as the influence of virtual photons that have been emitted and re-absorbed by the atom. In quantum electrodynamics the electromagnetic field is quantized and, like the harmonic oscillator in quantum mechanics, its lowest state is not zero. Thus, there exist small zero-point oscillations that cause the electron to execute rapid oscillatory motions. The electron is "smeared out" and each radius value is changed from "r" to "r" + "δr" (a small but finite perturbation).
The Coulomb potential is therefore perturbed by a small amount and the degeneracy of the two energy levels is removed. The new potential can be approximated (using atomic units) as follows:
formula_22
The Lamb shift itself is given by
formula_23
with "k"("n", 0) around 13 varying slightly with "n", and
formula_24
with log("k"("n",ℓ)) a small number (approx. −0.05) making "k"("n",ℓ) close to unity.
For a derivation of Δ"E"Lamb see, for example, the references below.
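As a rough numerical check (not part of the article) of the first formula, taking "n" = 2 and "k"(2,0) ≈ 13 from the text:

```python
# Evaluate Delta E_Lamb = alpha^5 * m_e c^2 * k(n,0) / (4 n^3) for the 2S level.
alpha = 7.2973525693e-3
me_c2_eV = 0.51099895e6
h_eV_s = 4.135667696e-15

n, k_n0 = 2, 13.0             # k(2,0) ~ 13, as stated above
dE = alpha**5 * me_c2_eV * k_n0 / (4 * n**3)
print(f"{dE:.2e} eV  ~  {dE / h_eV_s / 1e6:.0f} MHz")   # ~1.0e3 MHz, near the observed 1057 MHz
```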
In the hydrogen spectrum.
In 1947, Hans Bethe was the first to explain the Lamb shift in the hydrogen spectrum, and he thus laid the foundation for the modern development of quantum electrodynamics. Bethe was able to derive the Lamb shift by implementing the idea of mass renormalization, which allowed him to calculate the observed energy shift as the difference between the shift of a bound electron and the shift of a free electron.
The Lamb shift currently provides a measurement of the fine-structure constant α to better than one part in a million, allowing a precision test of quantum electrodynamics.
References.
| [
{
"math_id": 0,
"text": "\\Delta V = V(\\vec{r}+\\delta \\vec{r})-V(\\vec{r})=\\delta \\vec{r} \\cdot \\nabla V (\\vec{r}) + \\frac{1}{2} (\\delta \\vec{r} \\cdot \\nabla)^2V(\\vec{r})+\\cdots"
},
{
"math_id": 1,
"text": "\\langle \\delta \\vec{r} \\rangle _{\\rm vac} =0,"
},
{
"math_id": 2,
"text": "\\langle (\\delta \\vec{r} \\cdot \\nabla )^2 \\rangle _{\\rm vac} = \\frac{1}{3} \\langle (\\delta \\vec{r})^2\\rangle _{\\rm vac} \\nabla ^2."
},
{
"math_id": 3,
"text": "\\langle \\Delta V\\rangle =\\frac{1}{6} \\langle (\\delta \\vec{r})^2\\rangle _{\\rm vac}\\left\\langle \\nabla ^2\\left(\\frac{-e^2}{4\\pi \\epsilon _0r}\\right)\\right\\rangle _{\\rm at}."
},
{
"math_id": 4,
"text": "m\\frac{d^2}{dt^2} (\\delta r)_{\\vec{k}}=-eE_{\\vec{k}},"
},
{
"math_id": 5,
"text": "\\nu > \\pi c/a_0"
},
{
"math_id": 6,
"text": "\\delta r(t)\\cong \\delta r(0)(e^{-i\\nu t}+e^{i\\nu t}),"
},
{
"math_id": 7,
"text": "(\\delta r)_{\\vec{k}} \\cong \\frac{e}{mc^2k^2} E_{\\vec{k}}=\\frac{e}{mc^2k^2} \\mathcal{E} _{\\vec{k}} \\left (a_{\\vec{k}}e^{-i\\nu t+i\\vec{k}\\cdot \\vec{r}}+h.c. \\right) \\qquad \\text{with} \\qquad \\mathcal{E} _{\\vec{k}}=\\left(\\frac{\\hbar ck/2}{\\epsilon _0 \\Omega}\\right)^{1/2},"
},
{
"math_id": 8,
"text": "\\Omega"
},
{
"math_id": 9,
"text": "h.c."
},
{
"math_id": 10,
"text": "\\vec{k},"
},
{
"math_id": 11,
"text": "\\begin{align}\n\\langle (\\delta \\vec{r} )^2\\rangle _{\\rm vac} &=\\sum_{\\vec{k}} \\left(\\frac{e}{mc^2k^2} \\right)^2 \\left\\langle 0\\left |(E_{\\vec{k}})^2 \\right |0 \\right \\rangle \\\\\n&=\\sum_{\\vec{k}} \\left(\\frac{e}{mc^2k^2} \\right)^2\\left(\\frac{\\hbar ck}{2\\epsilon _0 \\Omega} \\right) \\\\\n&=2\\frac{\\Omega}{(2\\pi )^3}4\\pi \\int dkk^2\\left(\\frac{e}{mc^2k^2} \\right)^2\\left(\\frac{\\hbar ck}{2\\epsilon_0 \\Omega}\\right) && \\text{since continuity of } \\vec{k} \\text{ implies } \\sum_{\\vec{k}} \\to 2 \\frac{\\Omega}{(2\\pi)^3} \\int d^3 k \\\\\n&=\\frac{1}{2\\epsilon_0\\pi^2}\\left(\\frac{e^2}{\\hbar c}\\right)\\left(\\frac{\\hbar}{mc}\\right)^2\\int \\frac{dk}{k}\n\\end{align}"
},
{
"math_id": 12,
"text": "k > \\pi/a_0"
},
{
"math_id": 13,
"text": "k < mc/\\hbar"
},
{
"math_id": 14,
"text": "\\langle(\\delta\\vec{r})^2\\rangle_{\\rm vac}\\cong\\frac{1}{2\\epsilon_0\\pi^2}\\left(\\frac{e^2}{\\hbar c}\\right)\\left(\\frac{\\hbar}{mc}\\right)^2\\ln\\frac{4\\epsilon_0\\hbar c}{e^2}"
},
{
"math_id": 15,
"text": "\\left\\langle\\nabla^2\\left(\\frac{-e^2}{4\\pi\\epsilon_0r}\\right)\\right\\rangle_{\\rm at}=\\frac{-e^2}{4\\pi\\epsilon_0}\\int d\\vec{r}\\psi^*(\\vec{r})\\nabla^2\\left(\\frac{1}{r}\\right)\\psi(\\vec{r})=\\frac{e^2}{\\epsilon_0}|\\psi(0)|^2,"
},
{
"math_id": 16,
"text": "\\nabla^2\\left(\\frac{1}{r}\\right)=-4\\pi\\delta(\\vec{r})."
},
{
"math_id": 17,
"text": "\\psi_{2S}(0)=\\frac{1}{(8\\pi a_0^3)^{1/2}},"
},
{
"math_id": 18,
"text": "a_0=\\frac{4\\pi\\epsilon_0\\hbar^2}{me^2}."
},
{
"math_id": 19,
"text": "\\left\\langle\\nabla^2\\left(\\frac{-e^2}{4\\pi\\epsilon_0r}\\right)\\right\\rangle_{\\rm at}=\\frac{e^2}{\\epsilon_0}|\\psi_{2S}(0)|^2=\\frac{e^2}{8\\pi\\epsilon_0a_0^3}"
},
{
"math_id": 20,
"text": "\\langle\\Delta V\\rangle=\\frac{4}{3}\\frac{e^2}{4\\pi\\epsilon_0}\\frac{e^2}{4\\pi\\epsilon_0\\hbar c}\\left(\\frac{\\hbar}{mc}\\right)^2\\frac{1}{8\\pi a_0^3}\\ln\\frac{4\\epsilon_0\\hbar c}{e^2} = \\alpha^5 mc^2 \\frac{1}{6\\pi} \\ln\\frac{1}{\\pi\\alpha},"
},
{
"math_id": 21,
"text": "\\alpha"
},
{
"math_id": 22,
"text": "\\langle E_\\mathrm{pot} \\rangle=-\\frac{Ze^2}{4\\pi\\epsilon_0}\\left\\langle\\frac{1}{r+\\delta r}\\right\\rangle."
},
{
"math_id": 23,
"text": "\\Delta E_\\mathrm{Lamb}=\\alpha^5 m_e c^2 \\frac{k(n,0)}{4n^3}\\ \\mathrm{for}\\ \\ell=0\\, "
},
{
"math_id": 24,
"text": "\\Delta E_\\mathrm{Lamb}=\\alpha^5 m_e c^2 \\frac{1}{4n^3}\\left[k(n,\\ell)\\pm \\frac{1}{\\pi(j+\\frac{1}{2})(\\ell+\\frac{1}{2})}\\right]\\ \\mathrm{for}\\ \\ell\\ne 0\\ \\mathrm{and}\\ j=\\ell\\pm\\frac{1}{2},"
}
] | https://en.wikipedia.org/wiki?curid=1074997 |
1075005 | Exergy | Maximum energy available for use
Exergy, often referred to as "available energy" or "useful work potential", is a fundamental concept in the field of thermodynamics and engineering. It plays a crucial role in understanding and quantifying the quality of energy within a system and its potential to perform useful work. Exergy analysis has widespread applications in various fields, including energy engineering, environmental science, and industrial processes.
From a scientific and engineering perspective, second-law-based exergy analysis is valuable because it provides a number of benefits over energy analysis alone. These benefits include the basis for determining energy quality (or exergy content), enhancing the understanding of fundamental physical phenomena, and improving design, performance evaluation and optimization efforts. In thermodynamics, the exergy of a system is the maximum useful work that can be produced as the system is brought into equilibrium with its environment by an ideal process. The specification of an "ideal process" allows the determination of "maximum work" production. From a conceptual perspective, exergy is the "ideal" potential of a system to do work or cause a change as it achieves equilibrium with its environment. Exergy is also known as "availability". Exergy is non-zero when there is dis-equilibrium between the system and its environment, and exergy is zero when equilibrium is established (the state of maximum entropy for the system plus its environment).
Determining exergy was one of the original goals of thermodynamics. The term "exergy" was coined in 1956 by Zoran Rant (1904–1972) by using the Greek "ex" and "ergon", meaning "from work",[3] but the concept had been earlier developed by J. Willard Gibbs (the namesake of Gibbs free energy) in 1873.[4]
Energy is neither created nor destroyed, but is simply converted from one form to another (see First law of thermodynamics). In contrast to energy, exergy is always destroyed when a process is non-ideal or irreversible (see Second law of thermodynamics). To illustrate, when someone states that "I used a lot of energy running up that hill", the statement contradicts the first law. Although the energy is not consumed, intuitively we perceive that something is. The key point is that energy has quality or measures of usefulness, and this energy quality (or exergy content) is what is consumed or destroyed. This occurs because everything, all real processes, produce entropy and the destruction of exergy or the rate of "irreversibility" is proportional to this entropy production (Gouy–Stodola theorem), where entropy production may be calculated as the net increase in entropy of the system together with its surroundings. Entropy production is due to things such as friction, heat transfer across a finite temperature difference and mixing. In distinction from "exergy destruction", "exergy loss" is the transfer of exergy across the boundaries of a system, such as with mass or heat loss, where the exergy flow or transfer is potentially recoverable. The energy quality or exergy content of these mass and energy losses is low in many situations or applications, where exergy content is defined as the ratio of exergy to energy on a percentage basis. For example, while the exergy content of electrical work produced by a thermal power plant is 100%, the exergy content of low-grade heat rejected by the power plant, at say, 41 degrees Celsius, relative to an environment temperature of 25 degrees Celsius, is only 5%.
Definitions.
Exergy is a combination property of a system and its environment because it depends on the state of both and is a consequence of dis-equilibrium between them. Exergy is neither a thermodynamic property of matter nor a thermodynamic potential of a system. Exergy and energy always have the same units, and the joule (symbol: J) is the unit of energy in the International System of Units (SI). The internal energy of a system is always measured from a fixed reference state and is therefore always a state function. Some authors define the exergy of the system to be changed when the environment changes, in which case it is not a state function. Other writers prefer a slightly alternate definition of the available energy or exergy of a system where the environment is firmly defined, as an unchangeable absolute reference state, and in this alternate definition, exergy becomes a property of the state of the system alone.
However, from a theoretical point of view, exergy may be defined without reference to any environment. If the intensive properties of different finitely extended elements of a system differ, there is always the possibility to extract mechanical work from the system. Yet, with such an approach one has to abandon the requirement that the environment is large enough relative to the "system" such that its intensive properties, such as temperature, are unchanged due to its interaction with the system. So that exergy is defined in an absolute sense, it will be assumed in this article, unless otherwise stated, that the environment's intensive properties are unchanged by its interaction with the system.
For a heat engine, the exergy can be simply defined in an absolute sense, as the energy input times the Carnot efficiency, assuming the low-temperature heat reservoir is at the temperature of the environment. Since many systems can be modeled as a heat engine, this definition can be useful for many applications.
Terminology.
The term exergy is also used, by analogy with its physical definition, in information theory related to reversible computing. Exergy is also synonymous with "available energy", "exergic energy", "essergy" (considered archaic), "utilizable energy", "available useful work", "maximum (or minimum) work", "maximum (or minimum) work content", "reversible work", "ideal work", "availability" or "available work".
Implications.
The exergy destruction of a cycle is the sum of the exergy destruction of the processes that compose that cycle. The exergy destruction of a cycle can also be determined without tracing the individual processes by considering the entire cycle as a single process and using one of the exergy destruction equations.
Examples.
For two thermal reservoirs at temperatures "T"H and "T"C < "T"H, as considered by Carnot, the exergy is the work "W" that can be done by a reversible engine. Specifically, with "Q"H the heat provided by the hot reservoir, Carnot's analysis gives "W"/"Q"H = ("T"H − "T"C)/"T"H. Although exergy or maximum work is determined by conceptually utilizing an ideal process, it is a property of a system in a given environment. Exergy analysis is not merely for reversible cycles: it applies to all cycles and, indeed, to all thermodynamic processes, including non-cyclic and non-ideal ones.
As an example, consider the non-cyclic process of expansion of an ideal gas. For free expansion in an isolated system, the energy and temperature do not change, so by energy conservation no work is done. On the other hand, for expansion done against a moveable wall that always matched the (varying) pressure of the expanding gas (so the wall develops negligible kinetic energy), with no heat transfer (adiabatic wall), the maximum work would be done. This corresponds to the exergy. Thus, in terms of exergy, Carnot considered the exergy for a cyclic process with two thermal reservoirs (fixed temperatures). Just as the work done depends on the process, so the exergy depends on the process, reducing to Carnot's result for Carnot's case.
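The contrast between these two processes can be made concrete with a minimal sketch, assuming a monatomic ideal gas and the textbook adiabatic relations; the numbers are illustrative:

```python
# Work from letting 1 mol of a monatomic ideal gas double its volume, two ways.
R = 8.314              # J/(mol K)
n_mol, T1 = 1.0, 300.0
gamma = 5.0 / 3.0      # monatomic ideal gas
Cv = R / (gamma - 1.0)
V_ratio = 2.0          # V2 / V1

W_free = 0.0           # free expansion into vacuum: no work, no heat, temperature unchanged

# Reversible adiabatic expansion against a wall matching the gas pressure:
# T2 = T1 (V1/V2)^(gamma - 1), and the drop in internal energy is delivered as work.
T2 = T1 * V_ratio ** (-(gamma - 1.0))
W_max = n_mol * Cv * (T1 - T2)

print(f"free expansion:       W = {W_free:.0f} J")
print(f"reversible adiabatic: W = {W_max:.0f} J")   # roughly 1.4 kJ for these numbers
```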
W. Thomson (from 1892, Lord Kelvin), as early as 1849 was exercised by what he called “lost energy”, which appears to be the same as “destroyed energy” and what has been called “anergy”. In 1874 he wrote that “lost energy” is the same as the energy dissipated by, e.g., friction, electrical conduction (electric field-driven charge diffusion), heat conduction (temperature-driven thermal diffusion), viscous processes (transverse momentum diffusion) and particle diffusion (ink in water). On the other hand, Kelvin did not indicate how to compute the “lost energy”. This awaited the 1931 and 1932 works of Onsager on irreversible processes.
Mathematical description.
An application of the second law of thermodynamics.
Exergy uses system boundaries in a way that is unfamiliar to many. We imagine the presence of a Carnot engine between the system and its reference environment even though this engine does not exist in the real world. Its only purpose is to measure the results of a "what-if" scenario to represent the most efficient work interaction possible between the system and its surroundings.
If a real-world reference environment is chosen that behaves like an unlimited reservoir that remains unaltered by the system, then Carnot's speculation about the consequences of a system heading towards equilibrium with time is addressed by two equivalent mathematical statements. Let "B", the exergy or available work, decrease with time, and "S"total, the entropy of the system and its reference environment enclosed together in a larger isolated system, increase with time:
For macroscopic systems (above the thermodynamic limit), these statements are both expressions of the second law of thermodynamics if the following expression is used for exergy:
where the extensive quantities for the system are: "U" = Internal energy, "V" = Volume, and "N"i = Moles of component "i". The intensive quantities for the surroundings are: "P"R = Pressure, "T"R = temperature, "μ"i, R
= Chemical potential of component "i". Indeed the total entropy of the universe reads:
the second term formula_0 being the entropy of the surroundings to within a constant.
Individual terms also often have names attached to them: formula_1 is called "available PV work", formula_2 is called "entropic loss" or "heat loss" and the final term is called "available chemical energy."
Other thermodynamic potentials may be used to replace internal energy so long as proper care is taken in recognizing which natural variables correspond to which potential. For the recommended nomenclature of these potentials, see (Alberty, 2001). Equation (2) is useful for processes where system volume, entropy, and the number of moles of various components change because internal energy is also a function of these variables and no others.
An alternative definition of internal energy does not separate available chemical potential from "U". This expression is useful (when substituted into equation (1)) for processes where system volume and entropy change, but no chemical reaction occurs:
In this case, a given set of chemicals at a given entropy and volume will have a single numerical value for this thermodynamic potential. A multi-state system may complicate or simplify the problem because the Gibbs phase rule predicts that intensive quantities will no longer be completely independent from each other.
A historical and cultural tangent.
In 1848, William Thomson, 1st Baron Kelvin, asked (and immediately answered) the question
Is there any principle on which an absolute thermometric scale can be founded? It appears to me that Carnot's theory of the motive power of heat enables us to give an affirmative answer.
With the benefit of the hindsight contained in equation (5), we are able to understand the historical impact of Kelvin's idea on physics. Kelvin suggested that the best temperature scale would describe a constant ability for a unit of temperature in the surroundings to alter the available work from Carnot's engine. From equation (3):
Rudolf Clausius recognized the presence of a proportionality constant in Kelvin's analysis and gave it the name entropy in 1865 from the Greek for "transformation" because it quantifies the amount of energy lost during the conversion from heat to work. The available work from a Carnot engine is at its maximum when the surroundings are at a temperature of absolute zero.
Physicists then, as now, often look at a property with the word "available" or "utilizable" in its name with a certain unease. The idea of what is available raises the question of "available to what?" and raises a concern about whether such a property is anthropocentric. Laws derived using such a property may not describe the universe but instead, describe what people wish to see.
The field of statistical mechanics (beginning with the work of Ludwig Boltzmann in developing the Boltzmann equation) relieved many physicists of this concern. From this discipline, we now know that macroscopic properties may all be determined from properties on a microscopic scale where entropy is more "real" than temperature itself ("see Thermodynamic temperature"). Microscopic kinetic fluctuations among particles cause entropic loss, and this energy is unavailable for work because these fluctuations occur randomly in all directions. The anthropocentric act is taken, in the eyes of some physicists and engineers today, when someone draws a hypothetical boundary and says, in effect: "This is my system; what occurs beyond it is surroundings." In this context, exergy is sometimes described as an anthropocentric property, both by some who use it and by some who don't. However, exergy is based on the dis-equilibrium between a system and its environment, so it is both real and necessary to define the system distinctly from its environment. It can be agreed that entropy is generally viewed as a more fundamental property of matter than exergy.
A potential for every thermodynamic situation.
In addition to formula_3 and formula_4, the other thermodynamic potentials are frequently used to determine exergy. For a given set of chemicals at a given entropy and pressure, enthalpy "H" is used in the expression:
For a given set of chemicals at a given temperature and volume, Helmholtz free energy "A" is used in the expression:
For a given set of chemicals at a given temperature and pressure, Gibbs free energy "G" is used in the expression:
where formula_5 is evaluated at the isothermal system temperature (formula_6), and formula_7 is defined with respect to the isothermal temperature of the system's environment (formula_8). The exergy formula_7 is the energy formula_9 reduced by the product of the entropy times the environment temperature formula_8, which is the slope or partial derivative of the internal energy with respect to entropy in the environment. That is, higher entropy reduces the exergy or free energy available relative to the energy level formula_9.
Work can be produced from this energy, such as in an isothermal process, but any entropy generation during the process will cause the destruction of exergy (irreversibility) and the reduction of these thermodynamic potentials. Further, exergy losses can occur if mass and energy are transferred out of the system at non-ambient or elevated temperature, pressure or chemical potential. Exergy losses are potentially recoverable, though, because the exergy has not been destroyed, such as what occurs in waste heat recovery systems (although the energy quality or exergy content is typically low). As a special case, an isothermal process operating at ambient temperature will have no thermally related exergy losses.
Exergy Analysis involving Radiative Heat Transfer.
All matter emits radiation continuously as a result of its non-zero (absolute) temperature. This emitted energy flow is proportional to the material’s temperature raised to the fourth power. As a result, any radiation conversion device that seeks to absorb and convert radiation (while reflecting a fraction of the incoming source radiation) inherently emits its own radiation. Also, given that reflected and emitted radiation can occupy the same direction or solid angle, the entropy flows, and as a result, the exergy flows, are generally not independent. The entropy and exergy balance equations for a control volume (CV), re-stated to correctly apply to situations involving radiative transfer, are expressed as,
formula_10
where formula_11 or "Π" denotes entropy production within the control volume, and,
formula_12
This rate equation for the exergy within an open system X ("Ξ or B") takes into account the exergy transfer rates across the system boundary by heat transfer (q for conduction and convection, and "M" by radiative fluxes), by mechanical or electrical work transfer ("W"), and by mass transfer ("m"), as well as taking into account the exergy destruction ("I") that occurs within the system due to irreversibilities or non-ideal processes. Note that chemical exergy, kinetic energy, and gravitational potential energy have been excluded for simplicity.
The exergy irradiance or flux M, and the exergy radiance N (where M = πN for isotropic radiation), depend on the spectral and directional distribution of the radiation (for example, see the next section on ‘Exergy Flux of Radiation with an Arbitrary Spectrum’). Sunlight can be crudely approximated as blackbody, or more accurately, as graybody radiation. Note that, although a graybody spectrum looks similar to a blackbody spectrum, its entropy and exergy are very different.
Petela determined that the exergy of isotropic blackbody radiation was given by the expression,
formula_13
where the exergy within the enclosed system is X ("Ξ or B"), c is the speed of light, V is the volume occupied by the enclosed radiation system or void, T is the material emission temperature, To is the environmental temperature, and x is the dimensionless temperature ratio To/T.
However, for decades this result was contested in terms of its relevance to the conversion of radiation fluxes, and in particular, solar radiation. For example, Bejan stated that “Petela’s efficiency is no more than a convenient, albeit artificial way, of non-dimensionalizing the calculated work output” and that Petela’s efficiency “is not a ‘conversion efficiency.’ ” However, it has been shown that Petela’s result represents the exergy of blackbody radiation. This was done by resolving a number of issues, including that of inherent irreversibility, defining the environment in terms of radiation, the effect of inherent emission by the conversion device and the effect of concentrating source radiation.
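A small numerical illustration of this result is given below. It is a sketch: it evaluates the dimensionless factor 1 − (4/3)x + (1/3)x^4 with x = To/T, the Petela conversion factor commonly quoted in the literature for enclosed blackbody radiation, which multiplies the radiation energy in the expression above.

```python
def petela_factor(T_env, T_emit):
    """Exergy-to-energy ratio of enclosed blackbody radiation: 1 - 4x/3 + x^4/3, with x = T_env/T_emit."""
    x = T_env / T_emit
    return 1.0 - 4.0 * x / 3.0 + x ** 4 / 3.0

print(f"{petela_factor(300.0, 5762.0):.3f}")  # ~0.93 for solar-temperature radiation against a 300 K environment
print(f"{petela_factor(300.0, 400.0):.3f}")   # ~0.11 for a 400 K emitter: low-grade radiation carries little exergy
```

The first value is consistent with the exergy content quoted below for the extraterrestrial (AM0) solar spectrum.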
Exergy Flux of Radiation with an Arbitrary Spectrum (including Sunlight).
In general, terrestrial solar radiation has an arbitrary non-blackbody spectrum. Ground-level spectrums can vary greatly due to reflection, scattering and absorption in the atmosphere. The emission spectrums of thermal radiation in engineering systems can likewise vary widely.
In determining the exergy of radiation with an arbitrary spectrum, it must be considered whether reversible or ideal conversion (zero entropy production) is possible. It has been shown that reversible conversion of blackbody radiation fluxes across an infinitesimal temperature difference is theoretically possible. This reversible conversion can be achieved, even in principle, only because equilibrium can exist between blackbody radiation and matter. Non-blackbody radiation, however, cannot even exist in equilibrium with itself, nor with its own emitting material.
Unlike blackbody radiation, non-blackbody radiation cannot exist in equilibrium with matter, so it appears likely that the interaction of non-blackbody radiation with matter is always an inherently irreversible process. For example, an enclosed non-blackbody radiation system (such as a void inside a solid mass) is unstable and will spontaneously equilibrate to blackbody radiation unless the enclosure is perfectly reflecting (i.e., unless there is no thermal interaction of the radiation with its enclosure – which is not possible in actual, or real, non-ideal systems). Consequently, a cavity initially devoid of thermal radiation inside a non-blackbody material will spontaneously and rapidly (due to the high velocity of the radiation), through a series of absorption and emission interactions, become filled with blackbody radiation rather than non-blackbody radiation.
The approaches by Petela and Karlsson both assume that reversible conversion of non-blackbody radiation is theoretically possible, without addressing or considering this issue. Exergy is not a property of the system alone; it is a property of both the system and its environment. Thus, it is of key importance that non-blackbody radiation cannot exist in equilibrium with matter, indicating that the interaction of non-blackbody radiation with matter is an inherently irreversible process.
The flux (irradiance) of radiation with an arbitrary spectrum, based on the inherent irreversibility of non-blackbody radiation conversion, is given by the expression,
formula_14
The exergy flux formula_15 is expressed as a function of only the energy flux or irradiance formula_9 and the environment temperature formula_16. For graybody radiation, the exergy flux is given by the expression,
formula_17
As one would expect, the exergy flux of non-blackbody radiation reduces to the result for blackbody radiation when emissivity is equal to one.
Note that the exergy flux of graybody radiation can be a small fraction of the energy flux. For example, the ratio of exergy flux to energy flux formula_18 for graybody radiation with emissivity formula_19 is equal to 40.0%, for formula_20 and formula_21. That is, a maximum of only 40% of the graybody energy flux can be converted to work in this case (already only 50% of that of the blackbody energy flux with the same emission temperature). Graybody radiation has a spectrum that looks similar to the blackbody spectrum, but the entropy and exergy flux cannot be accurately approximated as that of blackbody radiation with the same emission temperature. However, it can be reasonably approximated by the entropy flux of blackbody radiation with the same energy flux (lower emission temperature).
Blackbody radiation has the highest entropy-to-energy ratio of all radiation with the same energy flux, but the lowest entropy-to-energy ratio, and the highest exergy content, of all radiation with the same emission temperature. For example, the exergy content of graybody radiation is lower than that of blackbody radiation with the same emission temperature and decreases as emissivity decreases. For the example above with formula_22 the exergy flux of the blackbody radiation source flux is 52.5% of the energy flux compared to 40.0% for graybody radiation with formula_19, or compared to 15.5% for graybody radiation with formula_23.
The Exergy Flux of Sunlight.
In addition to the production of power directly from sunlight, solar radiation provides most of the exergy for processes on Earth, including processes that sustain living systems directly, as well as all fuels and energy sources that are used for transportation and electric power production (directly or indirectly). The main exceptions are nuclear fission power plants and geothermal energy (the latter due largely to natural radioactive decay). Solar energy is, for the most part, thermal radiation from the Sun with an emission temperature near 5762 K, but it also includes small amounts of higher-energy radiation from the fusion reaction or from higher thermal emission temperatures within the Sun. Thus the source of most energy on Earth is ultimately nuclear in origin.
The figure below depicts typical solar radiation spectrums under clear sky conditions for AM0 (extraterrestrial solar radiation), AM1 (terrestrial solar radiation with solar zenith angle of 0 degrees) and AM4 (terrestrial solar radiation with solar zenith angle of 75.5 degrees). The solar spectrum at sea level (terrestrial solar spectrum) depends on a number of factors including the position of the Sun in the sky, atmospheric turbidity, the level of local atmospheric pollution, and the amount and type of cloud cover. These spectrums are for relatively clear air (α = 1.3, β = 0.04) assuming a U.S. standard atmosphere with 20 mm of precipitable water vapor and 3.4 mm of ozone. The Figure shows the spectral energy irradiance (W/m2μm) which does not provide information regarding the directional distribution of the solar radiation. The exergy content of the solar radiation, assuming that it is subtended by the solid angle of the ball of the Sun (no circumsolar), is 93.1%, 92.3% and 90.8%, respectively, for the AM0, AM1 and the AM4 spectrums.
The exergy content of terrestrial solar radiation is also reduced because of the diffuse component caused by the complex interaction of solar radiation, originally in a very small solid angle beam, with material in the Earth’s atmosphere. The characteristics and magnitude of diffuse terrestrial solar radiation depends on a number of factors, as mentioned, including the position of the Sun in the sky, atmospheric turbidity, the level of local atmospheric pollution, and the amount and type of cloud cover. Solar radiation under clear sky conditions exhibits a maximum intensity towards the Sun (circumsolar radiation) but also exhibits an increase in intensity towards the horizon (horizon brightening). In contrast for opaque overcast skies the solar radiation can be completely diffuse with a maximum intensity in the direction of the zenith and monotonically decreasing towards the horizon. The magnitude of the diffuse component generally varies with frequency, being highest in the ultraviolet region.
The dependence of the exergy content on directional distribution can be illustrated by considering, for example, the AM1 and AM4 terrestrial spectrums depicted in the figure, with the following simplified cases of directional distribution:
• For AM1: 80% of the solar radiation is contained in the solid angle subtended by the Sun, 10% is contained and isotropic in a solid angle 0.008 sr (this field of view includes circumsolar radiation), while the remaining 10% of the solar radiation is diffuse and isotropic in the solid angle 2π sr.
• For AM4: 65% of the solar radiation is contained in the solid angle subtended by the Sun, 20% of the solar radiation is contained and isotropic in a solid angle 0.008 sr, while the remaining 15% of the solar radiation is diffuse and isotropic in the solid angle 2π sr. Note that when the Sun is low in the sky the diffuse component can be the dominant part of the incident solar radiation.
For these cases of directional distribution, the exergy contents of the terrestrial solar radiation for the AM1 and AM4 spectrums depicted are 80.8% and 74.0%, respectively. From these sample calculations it is evident that the exergy content of terrestrial solar radiation is strongly dependent on the directional distribution of the radiation. This result is interesting because one might expect that the performance of a conversion device would depend on the incoming rate of photons and their spectral distribution but not on the directional distribution of the incoming photons. However, for a given incoming flux of photons with a certain spectral distribution, the entropy (level of disorder) is higher the more diffuse the directional distribution. From the second law of thermodynamics, the incoming entropy of the solar radiation cannot be destroyed and consequently reduces the maximum work output that can be obtained by a conversion device.
Chemical exergy.
Similar to thermomechanical exergy, chemical exergy depends on the temperature and pressure of a system as well as on its composition. The key difference in evaluating chemical exergy versus thermomechanical exergy is that thermomechanical exergy does not take into account the difference between the chemical composition of the system and that of the environment. If the temperature, pressure or composition of a system differs from the environment's state, then the overall system will have exergy.
The definition of chemical exergy resembles the standard definition of thermomechanical exergy, but with a few differences. Chemical exergy is defined as the maximum work that can be obtained when the considered system is brought into reaction with reference substances present in the environment. Defining the exergy reference environment is one of the most vital parts of analyzing chemical exergy. In general, the environment is defined as the composition of air at 25 °C and 1 atm of pressure. At these properties air consists of N2=75.67%, O2=20.35%, H2O(g)=3.12%, CO2=0.03% and other gases=0.83%. These molar fractions will become of use when applying Equation 8 below.
Consider a substance CaHbOc entering a system for which one wants to find the maximum theoretical work. By using the following equations, one can calculate the chemical exergy of the substance in a given system. Below, Equation 9 uses the Gibbs function of the applicable element or compound to calculate the chemical exergy. Equation 10 is similar but uses standard molar chemical exergy, which scientists have determined based on several criteria, including the ambient temperature and pressure at which a system is being analyzed and the concentration of the most common components. These values can be found in thermodynamic books or in online tables.
Important equations.
where:
where formula_30 is the standard molar chemical exergy taken from a table for the specific conditions that the system is being evaluated.
Equation 10 is more commonly used due to the simplicity of only having to look up the standard chemical exergy for given substances. Using a standard table works well for most cases; even if the environmental conditions vary slightly, the difference is most likely negligible.
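As an illustration of the tabulated-value route of Equation 10, the following sketch evaluates the chemical exergy of an ideal-gas mixture from standard molar chemical exergies. The numerical values used here are placeholders of the kind found in published tables, not authoritative data, and the mixture relation assumes ideal-gas behaviour.

```python
import math

R = 8.314        # J/(mol K)
T0 = 298.15      # K, reference-environment temperature

# Standard molar chemical exergies, J/mol (illustrative placeholder values;
# consult a published table matched to the chosen reference environment).
e_ch = {"N2": 720.0, "O2": 3970.0, "CO2": 19870.0, "H2O(g)": 9500.0}

# Mole fractions of the mixture being evaluated (must sum to 1).
y = {"N2": 0.78, "O2": 0.21, "CO2": 0.0004, "H2O(g)": 0.0096}

# Ideal-gas mixture: e_mix = sum_i y_i e_ch,i + R T0 sum_i y_i ln(y_i)
e_mix = sum(y[k] * e_ch[k] for k in y) + R * T0 * sum(y[k] * math.log(y[k]) for k in y)
print(f"mixture chemical exergy ~ {e_mix:.0f} J/mol")  # small here, since this mixture is close to ambient air
```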
Total exergy.
After finding the chemical exergy in a given system, one can find the total exergy by adding it to the thermomechanical exergy. Depending on the situation, the amount of chemical exergy added can be very small. If the system being evaluated involves combustion, the amount of chemical exergy is very large and necessary to find the total exergy of the system.
Irreversibility.
Irreversibility accounts for the amount of exergy destroyed in a closed system, or in other words, the wasted work potential. This is also called dissipated energy. For highly efficient systems, the value of "I" is low, and vice versa. The equation to calculate the irreversibility of a closed system, as it relates to the exergy of that system, is as follows:
where formula_31, also denoted by "Π", is the entropy generated by processes within the system. If formula_32 then there are irreversibilities present in the system. If formula_33 then there are no irreversibilities present in the system. The value of "I", the irreversibility, can not be negative, as this implies entropy destruction, a direct violation of the second law of thermodynamics.
Exergy analysis also relates the actual work of a work-producing device to the maximum work that could be obtained in the reversible or ideal process:
That is, the irreversibility is the ideal maximum work output minus the actual work production. Whereas, for a work consuming device such as refrigeration or heat pump, irreversibility is the actual work input minus the ideal minimum work input.
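A minimal sketch of this bookkeeping for a work-producing device is given below; the turbine figures are made-up illustrative values, and the entropy-generation form assumes the Gouy–Stodola relation mentioned earlier.

```python
T0 = 298.15        # K, temperature of the surroundings (dead state)

# Illustrative work-producing device (e.g., a turbine)
W_rev = 500.0      # kW, work output of the ideal, reversible process
W_act = 464.0      # kW, actual work output

I = W_rev - W_act  # irreversibility: exergy destroyed, kW
S_gen = I / T0     # implied entropy generation rate (Gouy-Stodola: I = T0 * S_gen), kW/K

print(f"irreversibility    I     = {I:.1f} kW")
print(f"entropy generation S_gen = {S_gen:.3f} kW/K")
```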
The first term at the right part is related to the difference in exergy at inlet and outlet of the system:
where "B" is also denoted by "Ξ or X".
For an isolated system there are no heat or work interactions or transfers of exergy between the system and its surroundings. The exergy of an isolated system can therefore only decrease, by a magnitude equal to the irreversibility of that system or process,
Applications.
Applying equation (1) to a subsystem yields:
This expression applies equally well for theoretical ideals in a wide variety of applications: electrolysis (decrease in "G"), galvanic cells and fuel cells (increase in "G"), explosives (increase in "A"), heating and refrigeration (exchange of "H"), motors (decrease in "U") and generators (increase in "U").
Utilization of the exergy concept often requires careful consideration of the choice of reference environment because, as Carnot knew, unlimited reservoirs do not exist in the real world. A system may be maintained at a constant temperature to simulate an unlimited reservoir in the lab or in a factory, but those systems cannot then be isolated from a larger surrounding environment. However, with a proper choice of system boundaries, a reasonable constant reservoir can be imagined. A process sometimes must be compared to "the most realistic impossibility," and this invariably involves a certain amount of guesswork.
Engineering applications.
One goal of energy and exergy methods in engineering is to compute what comes into and out of several possible designs before a design is built. Energy input and output will always balance according to the First Law of Thermodynamics or the energy conservation principle. Exergy output will not equal the exergy input for real processes since a part of the exergy input is always destroyed according to the Second Law of Thermodynamics for real processes. After the input and output are calculated, an engineer will often want to select the most efficient process. An energy efficiency or "first law efficiency" will determine the most efficient process based on wasting as little energy as possible relative to energy inputs. An exergy efficiency or "second-law efficiency" will determine the most efficient process based on wasting "and destroying" as little available work as possible from a given input of available work, per unit of whatever the desired output is.
Exergy has been applied in a number of design applications in order to optimize systems or identify components or subsystems with the greatest potential for improvement. For instance, an exergy analysis of environmental control systems on the international space station revealed the oxygen generation assembly as the subsystem which destroyed the most exergy.
Exergy is particularly useful for broad engineering analyses with many systems of varied nature, since it can account for mechanical, electrical, nuclear, chemical, or thermal systems. For this reason, Exergy analysis has also been used to optimize the performance of rocket vehicles. Exergy analysis affords additional insight, relative to energy analysis alone, because it incorporates the second law, and considers both the system and its relationship with its environment. For example, exergy analysis has been used to compare possible power generation and storage systems on the moon, since exergy analysis is conducted in reference to the unique environmental operating conditions of a specific application, such as on the surface of the moon.
Application of exergy to unit operations in chemical plants was partially responsible for the huge growth of the chemical industry during the 20th century.
As a simple example of exergy, air at atmospheric conditions of temperature, pressure, "and composition" contains energy but no exergy when it is chosen as the thermodynamic reference state known as ambient. Individual processes on Earth such as combustion in a power plant often eventually result in products that are incorporated into the atmosphere, so defining this reference state for exergy is useful even though the atmosphere itself is not at equilibrium and is full of long and short term variations.
If standard ambient conditions are used for calculations during chemical plant operation when the actual weather is very cold or hot, then certain parts of a chemical plant might seem to have an exergy efficiency of greater than 100%. Without taking into account the non-standard atmospheric temperature variation, these calculations can give an impression of being a perpetual motion machine. Using actual conditions will give actual values, but standard ambient conditions are useful for initial design calculations.
Applications in natural resource utilization.
In recent decades, utilization of exergy has spread outside of physics and engineering to the fields of industrial ecology, ecological economics, systems ecology, and energetics. Defining where one field ends and the next begins is a matter of semantics, but applications of exergy can be placed into rigid categories.
After the milestone work of Jan Szargut, who emphasized the relation between exergy and availability, it is worth recalling "Exergy Ecology and Democracy" by Goran Wall, a short essay which highlights the close relation between exergy destruction and environmental and social disruption.
From this work has derived a substantial research activity in ecological economics and environmental accounting, which performs exergy-cost analyses in order to evaluate the impact of human activity on the current and future natural environment. As with ambient air, this often requires the unrealistic substitution of properties from a natural environment in place of the reference state environment of Carnot. For example, ecologists and others have developed reference conditions for the ocean and for the Earth's crust. Exergy values for human activity using this information can be useful for comparing policy alternatives based on the efficiency of utilizing natural resources to perform work. Typical questions that may be answered are:
Does the human production of one unit of an economic good by method "A" utilize more of a resource's exergy than by method "B"?
Does the human production of economic good "A" utilize more of a resource's exergy than the production of good "B"?
Does the human production of economic good "A" utilize a resource's exergy more efficiently than the production of good "B"?
There has been some progress in standardizing and applying these methods.
Measuring exergy requires the evaluation of a system's reference state environment. With respect to the applications of exergy on natural resource utilization, the process of quantifying a system requires the assignment of value (both utilized and potential) to resources that are not always easily dissected into typical cost-benefit terms. However, to fully realize the potential of a system to do work, it is becoming increasingly imperative to understand exergetic potential of natural resources, and how human interference alters this potential.
Referencing the inherent qualities of a system in place of a reference state environment is the most direct way that ecologists determine the exergy of a natural resource. Specifically, it is easiest to examine the thermodynamic properties of a system, and the reference substances that are acceptable within the reference environment. This determination allows for the assumption of qualities in a natural state: deviation from these levels may indicate a change in the environment caused by outside sources. There are three kinds of reference substances that are acceptable, due to their proliferation on the planet: gases within the atmosphere, solids within the Earth's crust, and molecules or ions in seawater. By understanding these basic models, it's possible to determine the exergy of multiple earth systems interacting, like the effects of solar radiation on plant life. These basic categories are utilized as the main components of a reference environment when examining how exergy can be defined through natural resources.
Other qualities within a reference state environment include temperature, pressure, and any number of combinations of substances within a defined area. Again, the exergy of a system is determined by the potential of that system to do work, so it is necessary to determine the baseline qualities of a system before it is possible to understand the potential of that system. The thermodynamic value of a resource can be found by multiplying the exergy of the resource by the cost of obtaining the resource and processing it.
Today, it is becoming increasingly popular to analyze the environmental impacts of natural resource utilization, especially for energy usage. To understand the ramifications of these practices, exergy is utilized as a tool for determining the impact potential of emissions, fuels, and other sources of energy. Combustion of fossil fuels, for example, is examined with respect to assessing the environmental impacts of burning coal, oil, and natural gas. The current methods for analyzing the emissions from these three products can be compared to the process of determining the exergy of the systems affected; specifically, it is useful to examine these with regard to the reference state environment of gases within the atmosphere. In this way, it is easier to determine how human action is affecting the natural environment.
Applications in sustainability.
In systems ecology, researchers sometimes consider the exergy of the current formation of natural resources from a small number of exergy inputs (usually solar radiation, tidal forces, and geothermal heat). This application not only requires assumptions about reference states, but it also requires assumptions about the real environments of the past that might have been close to those reference states. Can we decide which is the most "realistic impossibility" over such a long period of time when we are only speculating about the reality?
For instance, comparing oil exergy to coal exergy using a common reference state would require geothermal exergy inputs to describe the transition from biological material to fossil fuels during millions of years in the Earth's crust, and solar radiation exergy inputs to describe the material's history before then when it was part of the biosphere. This would need to be carried out mathematically backwards through time, to a presumed era when the oil and coal could be assumed to be receiving the same exergy inputs from these sources. A speculation about a past environment is different from assigning a reference state with respect to known environments today. Reasonable guesses about real ancient environments may be made, but they are untestable guesses, and so some regard this application as pseudoscience or pseudo-engineering.
The field describes this accumulated exergy in a natural resource over time as embodied energy with units of the "embodied joule" or "emjoule".
The important application of this research is to address sustainability issues in a quantitative fashion through a sustainability measurement:
Does the human production of an economic good deplete the exergy of Earth's natural resources more quickly than those resources are able to receive exergy?
If so, how does this compare to the depletion caused by producing the same good (or a different one) using a different set of natural resources?
Exergy and environmental policy.
Today's environmental policies do not consider exergy as an instrument for a more equitable and effective environmental policy. Recently, exergy analysis has exposed an important fault in current governmental GHG emission balances, which often do not consider emissions related to international transport, so that the impacts of imports and exports are not accounted for.
Some preliminary case studies of the impacts of import/export transportation and of technology have provided evidence for the opportunity of introducing an effective exergy-based taxation that could reduce the fiscal impact on citizens.
In addition, exergy can be a valuable instrument for estimating progress toward the UN Sustainable Development Goals (SDGs).
Assigning one thermodynamically obtained value to an economic good.
A technique proposed by systems ecologists is to consolidate the three exergy inputs described in the last section into the single exergy input of solar radiation, and to express the total input of exergy into an economic good as a "solar embodied joule" or "sej". ("See Emergy") Exergy inputs from solar, tidal, and geothermal forces all at one time had their origins at the beginning of the solar system under conditions which could be chosen as an initial reference state, and other speculative reference states could in theory be traced back to that time. With this tool we would be able to answer:
What fraction of the total human depletion of the Earth's exergy is caused by the production of a particular economic good?
What fraction of the total human and non-human depletion of the Earth's exergy is caused by the production of a particular economic good?
No additional thermodynamic laws are required for this idea, and the principles of energetics may confuse many issues for those outside the field. The combination of untestable hypotheses, unfamiliar jargon that contradicts accepted jargon, intense advocacy among its supporters, and some degree of isolation from other disciplines have contributed to this protoscience being regarded by many as a pseudoscience. However, its basic tenets are only a further utilization of the exergy concept.
Implications in the development of complex physical systems.
A common hypothesis in systems ecology is that the design engineer's observation that a greater capital investment is needed to create a process with increased exergy efficiency is actually the economic result of a fundamental law of nature. By this view, exergy is the analogue of economic currency in the natural world. The analogy to capital investment is the accumulation of exergy into a system over long periods of time resulting in embodied energy. The analogy of capital investment resulting in a factory with high exergy efficiency is an increase in natural organizational structures with high exergy efficiency. ("See Maximum power"). Researchers in these fields describe biological evolution in terms of increases in organism complexity due to the requirement for increased exergy efficiency because of competition for limited sources of exergy.
Some biologists have a similar hypothesis. A biological system (or a chemical plant) with a number of intermediate compartments and intermediate reactions is more efficient because the process is divided up into many small substeps, and this is closer to the reversible ideal of an infinite number of infinitesimal substeps. Of course, an excessively large number of intermediate compartments comes at a capital cost that may be too high.
Testing this idea in living organisms or ecosystems is impossible for all practical purposes because of the large time scales and small exergy inputs involved for changes to take place. However, if this idea is correct, it would not be a new fundamental law of nature. It would simply be living systems and ecosystems maximizing their exergy efficiency by utilizing laws of thermodynamics developed in the 19th century.
Philosophical and cosmological implications.
Some proponents of utilizing exergy concepts describe them as a biocentric or ecocentric alternative for terms like quality and value. The "deep ecology" movement views economic usage of these terms as an anthropocentric philosophy which should be discarded. A possible universal thermodynamic concept of value or utility appeals to those with an interest in monism.
For some, the result of this line of thinking about tracking exergy into the deep past is a restatement of the cosmological argument that the universe was once at equilibrium and an input of exergy from some First Cause created a universe full of available work. Current science is unable to describe the first 10−43 seconds of the universe ("See Timeline of the Big Bang"). An external reference state is not able to be defined for such an event, and (regardless of its merits), such an argument may be better expressed in terms of entropy.
Quality of energy types.
The ratio of exergy to energy in a substance can be considered a measure of energy quality. Forms of energy such as macroscopic kinetic energy, electrical energy, and chemical Gibbs free energy are 100% recoverable as work, and therefore have exergy equal to their energy. However, forms of energy such as radiation and thermal energy can not be converted completely to work, and have exergy content less than their energy content. The exact proportion of exergy in a substance depends on the amount of entropy relative to the surrounding environment as determined by the Second Law of Thermodynamics.
Exergy is useful when measuring the efficiency of an energy conversion process. The exergetic, or 2nd Law, efficiency is a ratio of the exergy output divided by the exergy input. This formulation takes into account the quality of the energy, often offering a more accurate and useful analysis than efficiency estimates only using the First Law of Thermodynamics.
Work can be extracted also from bodies colder than the surroundings. When energy flows into the cold body, work is performed by this energy obtained from the large reservoir, the surroundings. A quantitative treatment of the notion of energy quality rests on the definition of energy. According to the standard definition, energy is a measure of the ability to do work. Work can involve the movement of a mass by a force that results from a transformation of energy. If there is an energy transformation, the second law of thermodynamics says that this process must involve the dissipation of some energy as heat. Measuring the amount of heat released is one way of quantifying the energy, or ability to do work and apply a force over a distance.
Exergy of heat available at a temperature.
Maximal possible conversion of heat to work, or exergy content of heat, depends on the temperature at which heat is available and the temperature level at which the reject heat can be disposed, that is the temperature of the surrounding. The upper limit for conversion is known as Carnot efficiency and was discovered by Nicolas Léonard Sadi Carnot in 1824. See also Carnot heat engine.
Carnot efficiency is
where "T""H" is the higher temperature and "T""C" is the lower temperature, both as absolute temperature. From Equation 15 it is clear that in order to maximize efficiency one should maximize "T""H" and minimize "T""C".
Exergy exchanged is then:
where "T""source" is the temperature of the heat source, and "T""o" is the temperature of the surrounding.
Connection with economic value.
Exergy in a sense can be understood as a measure of the value of energy. Since high-exergy energy carriers can be used for more versatile purposes, due to their ability to do more work, they can be postulated to hold more economic value. This can be seen in the prices of energy carriers, i.e. high-exergy energy carriers such as electricity tend to be more valuable than low-exergy ones such as various fuels or heat. This has led to the substitution of more valuable high-exergy energy carriers with low-exergy energy carriers, when possible. An example is heating systems, where higher investment to heating systems allows using low-exergy energy sources. Thus high-exergy content is being substituted with capital investments.
Exergy based Life Cycle Assessment (LCA).
Exergy of a system is the maximum useful work possible during a process that brings the system into equilibrium with a heat reservoir. Wall clearly states the relation between exergy analysis and resource accounting. This intuition, confirmed by Dewulf and by Sciubba, led to exergo-economic accounting and to methods specifically dedicated to LCA such as exergetic material input per unit of service (EMIPS). The
concept of material input per unit of service (MIPS) is quantified in terms of the second law of thermodynamics, allowing the calculation of both resource input and service output in exergy terms. This exergetic material input per unit of service (EMIPS) has been elaborated for transport technology. The service not only takes into account the total mass to be transported
and the total distance, but also the mass per single transport and the delivery time. The applicability of the EMIPS methodology relates specifically to the transport system and allows an effective coupling with life cycle assessment. The exergy analysis according to EMIPS allowed the definition of a precise strategy for reducing the environmental impacts of transport toward more sustainable transport. Such a strategy requires reducing the weight of vehicles, adopting sustainable styles of driving, reducing the friction of tires, encouraging electric and hybrid vehicles, improving the walking and cycling environment in cities, and enhancing the role of public transport, especially electric rail.
History.
Carnot.
In 1824, Sadi Carnot studied the improvements developed for steam engines by James Watt and others. Carnot utilized a purely theoretical perspective for these engines and developed new ideas. He wrote:
The question has often been raised whether the motive power of heat is unbounded, whether the possible improvements in steam engines have an assignable limit—a limit by which the nature of things will not allow to be passed by any means whatever... In order to consider in the most general way the principle of the production of motion by heat, it must be considered independently of any mechanism or any particular agent. It is necessary to establish principles applicable not only to steam-engines but to all imaginable heat-engines... The production of motion in steam-engines is always accompanied by a circumstance on which we should fix our attention. This circumstance is the re-establishing of equilibrium... Imagine two bodies A and B, kept each at a constant temperature, that of A being higher than that of B. These two bodies, to which we can give or from which we can remove the heat without causing their temperatures to vary, exercise the functions of two unlimited reservoirs...
Carnot next described what is now called the Carnot engine, and proved by a thought experiment that any heat engine performing better than this engine would be a perpetual motion machine. Even in the 1820s, there was a long history of science forbidding such devices. According to Carnot, "Such a creation is entirely contrary to ideas now accepted, to the laws of mechanics and of sound physics. It is inadmissible."<templatestyles src="Citation/styles.css"/>[4]
This description of an upper bound to the work that may be done by an engine was the earliest modern formulation of the second law of thermodynamics. Because it involves no mathematics, it still often serves as the entry point for a modern understanding of both the second law and entropy. Carnot's focus on heat engines, equilibrium, and heat reservoirs is also the best entry point for understanding the closely related concept of exergy.
Carnot believed in the incorrect caloric theory of heat that was popular during his time, but his thought experiment nevertheless described a fundamental limit of nature. As kinetic theory replaced caloric theory through the early and mid-19th century ("see Timeline of thermodynamics"), several scientists added mathematical precision to the first and second laws of thermodynamics and developed the concept of entropy. Carnot's focus on processes at the human scale (above the thermodynamic limit) led to the most universally applicable concepts in physics. Entropy and the second law are applied today in fields ranging from quantum mechanics to physical cosmology.
Gibbs.
In the 1870s, Josiah Willard Gibbs unified a large quantity of 19th century thermochemistry into one compact theory. Gibbs's theory incorporated the new concept of a chemical potential to cause change when distant from a chemical equilibrium into the older work begun by Carnot in describing thermal and mechanical equilibrium and their potentials for change. Gibbs's unifying theory resulted in the thermodynamic potential state functions describing differences from thermodynamic equilibrium.
In 1873, Gibbs derived the mathematics of "available energy of the body and medium" into the form it has today.<templatestyles src="Citation/styles.css"/>[3] (See the equations above). The physics describing exergy has changed little since that time.
Helmholtz.
In the 1880s, German scientist Hermann von Helmholtz derived the equation for the maximum work which can be reversibly obtained from a closed system.
Rant.
In 1956, Yugoslav scholar Zoran Rant proposed the concept of exergy, extending the work of Gibbs and Helmholtz. Since then, continuous development in exergy analysis has seen many applications in thermodynamics, and exergy has been accepted as the maximum theoretical useful work which can be obtained from a system with respect to its environment.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " - U/T_R - P_R V /T_R + \\sum_i \\mu_{i,R} N_i /T_R "
},
{
"math_id": 1,
"text": "P_R V"
},
{
"math_id": 2,
"text": "T_R S"
},
{
"math_id": 3,
"text": "U "
},
{
"math_id": 4,
"text": " U[\\boldsymbol{\\mu}]"
},
{
"math_id": 5,
"text": "G"
},
{
"math_id": 6,
"text": "T"
},
{
"math_id": 7,
"text": "B"
},
{
"math_id": 8,
"text": "T_R"
},
{
"math_id": 9,
"text": "H"
},
{
"math_id": 10,
"text": " \\frac {dS_{CV}}{dt} = \\int_{CV boundary} (\\frac{q_{cc}}{T_b} + J_{NetRad})dA+\\sum_{i}^m({\\dot{m}_i}s_i)-\\sum_{o}^n({\\dot{m}_o}s_o)+\\dot{S}_{gen}"
},
{
"math_id": 11,
"text": "{S}_{gen}"
},
{
"math_id": 12,
"text": " \\frac {{dX}_{CV}}{dt} = \\int_{CV boundary}[{q_{cc}}(1-\\frac{T_o}{T_b}) + M_{NetRad}]dA-({\\dot{W}_{CV}}-P_o(\\frac{dV_{CV}}{dt}))+\\sum_{i}^r({\\dot{m}_i}(h_i-{T_o}{s_i}))-\\dot{I}_{CV}"
},
{
"math_id": 13,
"text": "M_{BR} = \\frac{cX}{4V} = \\sigma T^4 (1-\\frac{4}{3}x+\\frac{1}{3}x^4)"
},
{
"math_id": 14,
"text": "M=H-T_o(\\frac{4}{3}\\sigma^{0.25} H^{0.75})+\\frac{\\sigma}{3}T_o^4)"
},
{
"math_id": 15,
"text": "M"
},
{
"math_id": 16,
"text": "T_o"
},
{
"math_id": 17,
"text": "M_{GR}=\\sigma T^4 (\\epsilon-\\frac{4}{3}x\\epsilon^{0.75}+\\frac{1}{3}x^4)"
},
{
"math_id": 18,
"text": "(M/H)"
},
{
"math_id": 19,
"text": "\\epsilon = 0.50"
},
{
"math_id": 20,
"text": "T = 500^oC"
},
{
"math_id": 21,
"text": " T_o = 27^oC (x = 0.388)"
},
{
"math_id": 22,
"text": "x = 0.388"
},
{
"math_id": 23,
"text": "\\epsilon = 0.10"
},
{
"math_id": 24,
"text": "\\bar{g}_{x}"
},
{
"math_id": 25,
"text": "\\left( T_{0}, p_{0} \\right)"
},
{
"math_id": 26,
"text": "\\bar{g}_{F}"
},
{
"math_id": 27,
"text": "\\bar{R}"
},
{
"math_id": 28,
"text": "T_{0}"
},
{
"math_id": 29,
"text": "y_{x}^{e}"
},
{
"math_id": 30,
"text": "\\bar{e}_{x}^{ch}"
},
{
"math_id": 31,
"text": " S_\\text{gen} "
},
{
"math_id": 32,
"text": "I > 0"
},
{
"math_id": 33,
"text": "I = 0"
}
] | https://en.wikipedia.org/wiki?curid=1075005 |
1075022 | Cluster decomposition | Locality condition in quantum field theory
In physics, the cluster decomposition property states that experiments carried out far from each other cannot influence each other. Usually applied to quantum field theory, it requires that vacuum expectation values of operators localized in bounded regions factorize whenever these regions become sufficiently distant from each other. First formulated by Eyvind Wichmann and James H. Crichton in 1963 in the context of the "S"-matrix, it was conjectured by Steven Weinberg that in the low energy limit the cluster decomposition property, together with Lorentz invariance and quantum mechanics, inevitably leads to quantum field theory. String theory satisfies all three of the conditions and so provides a counterexample to this being true at all energy scales.
Formulation.
The "S"-matrix formula_0 describes the amplitude for a process with an initial state formula_1 evolving into a final state formula_2. If the initial and final states consist of two clusters, with formula_3 and formula_4 close to each other but far from the pair formula_5 and formula_6, then the cluster decomposition property requires the "S"-matrix to factorize
formula_7
as the distance between the two clusters increases. The physical interpretation of this is that any two spatially well separated experiments formula_8 and formula_9 cannot influence each other. This condition is fundamental to the ability to do physics without having to know the state of the entire universe. By expanding the "S"-matrix into a sum of a product of connected "S"-matrix elements formula_10, which at the perturbative level are equivalent to connected Feynman diagrams, the cluster decomposition property can be restated as demanding that connected "S"-matrix elements must vanish whenever some of their clusters of particles are far apart from each other.
This position space formulation can also be reformulated in terms of the momentum space "S"-matrix formula_11. Since its Fourier transformation gives the position space connected "S"-matrix, this only depends on position through the exponential terms. Therefore, performing a uniform translation in a direction formula_12 on a subset of particles will effectively change the momentum space "S"-matrix as
formula_13
By translational invariance, a translation of all particles cannot change the "S"-matrix, therefore formula_14 must be proportional to a momentum conserving delta function formula_15 to ensure that the translation exponential factor vanishes. If there is an additional delta function of only a subset of momenta corresponding to some cluster of particles, then this cluster can be moved arbitrarily far through a translation without changing the "S"-matrix, which would violate cluster decomposition. This means that in momentum space the property requires that the "S"-matrix only has a single delta function.
Cluster decomposition can also be formulated in terms of correlation functions, where for any two operators formula_16 and formula_17 localized to some region, the vacuum expectation values factorize as the two operators become distantly separated
formula_18
This formulation allows for the property to be applied to theories that lack an "S"-matrix such as conformal field theories. It is in terms of these Wightman functions that the property is usually formulated in axiomatic quantum field theory. In some formulations, such as Euclidean constructive field theory, it is explicitly introduced as an axiom.
Properties.
If a theory is constructed from creation and annihilation operators, then the cluster decomposition property automatically holds. This can be seen by expanding out the "S"-matrix as a sum of Feynman diagrams which allows for the identification of connected "S"-matrix elements with connected Feynman diagrams. Vertices arise whenever creation and annihilation operators commute past each other leaving behind a single momentum delta function. In any connected diagram with V vertices, I internal lines and L loops, I-L of the delta functions go into fixing internal momenta, leaving V-(I-L) delta functions unfixed. A form of Euler's formula states that any graph with C disjoint connected components satisfies C = V-I+L. Since the connected "S"-matrix elements correspond to C=1 diagrams, these only have a single delta function and thus the cluster decomposition property, as formulated above in momentum space in terms of delta functions, holds.
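The purely graph-theoretic identity used in this counting, C = V − I + L, can be spot-checked on small examples (a sketch using networkx; the sample graphs are arbitrary and stand in for simple diagram topologies):

```python
import networkx as nx

def check_euler(G):
    V, I = G.number_of_nodes(), G.number_of_edges()
    C = nx.number_connected_components(G)
    L = len(nx.cycle_basis(G))        # number of independent loops
    return C == V - I + L

g1 = nx.cycle_graph(4)                                        # V=4, I=4, L=1, C=1
g2 = nx.disjoint_union(nx.path_graph(3), nx.cycle_graph(3))   # V=6, I=5, L=1, C=2
print(check_euler(g1), check_euler(g2))                       # True True
```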
Microcausality, the locality condition requiring commutation relations of local operators to vanish for spacelike separations, is a sufficient condition for the "S"-matrix to satisfy cluster decomposition. In this sense cluster decomposition serves a similar purpose for the "S"-matrix as microcausality does for fields, preventing causal influence from propagating between regions that are distantly separated. However, cluster decomposition is weaker than having no superluminal causation since it can be formulated for classical theories as well.
A key requirement for cluster decomposition is a unique vacuum state; the property fails if the vacuum state is a mixed state. The rate at which the correlation functions factorize depends on the spectrum of the theory: if it has a mass gap of mass formula_19 then there is an exponential falloff formula_20, while if there are massless particles present then it can be as slow as formula_21. | [
{
"math_id": 0,
"text": "S_{\\beta \\alpha}"
},
{
"math_id": 1,
"text": "\\alpha"
},
{
"math_id": 2,
"text": "\\beta"
},
{
"math_id": 3,
"text": "\\alpha_1"
},
{
"math_id": 4,
"text": "\\beta_1"
},
{
"math_id": 5,
"text": "\\alpha_2"
},
{
"math_id": 6,
"text": "\\beta_2"
},
{
"math_id": 7,
"text": "\nS_{\\beta \\alpha} \\rightarrow S_{\\beta_1 \\alpha_1}S_{\\beta_2\\alpha_2}\n"
},
{
"math_id": 8,
"text": "\\alpha_1 \\rightarrow \\beta_1"
},
{
"math_id": 9,
"text": "\\alpha_2 \\rightarrow \\beta_2"
},
{
"math_id": 10,
"text": "S_{\\beta \\alpha}^c"
},
{
"math_id": 11,
"text": "\\tilde S^c_{\\beta \\alpha}"
},
{
"math_id": 12,
"text": "\\boldsymbol a"
},
{
"math_id": 13,
"text": "\n\\tilde S_{\\beta \\alpha}^c \\xrightarrow{\\boldsymbol x_i \\rightarrow \\boldsymbol x_i+\\boldsymbol a} e^{i\\boldsymbol a\\cdot (\\sum_i \\boldsymbol p_i)} \\tilde S_{\\beta \\alpha}^c.\n"
},
{
"math_id": 14,
"text": "\\tilde S_{\\beta \\alpha}"
},
{
"math_id": 15,
"text": "\\delta (\\Sigma \\boldsymbol p)"
},
{
"math_id": 16,
"text": "\\mathcal O_1(x)"
},
{
"math_id": 17,
"text": "\\mathcal O_2(x)"
},
{
"math_id": 18,
"text": "\n\\lim_{|\\boldsymbol x|\\rightarrow \\infty}\\langle \\mathcal O_1(\\boldsymbol x)\\mathcal O_2(0)\\rangle \\rightarrow \\langle \\mathcal O_1\\rangle \\langle \\mathcal O_2 \\rangle.\n"
},
{
"math_id": 19,
"text": "m"
},
{
"math_id": 20,
"text": "\\langle \\phi(x) \\phi(0)\\rangle \\sim e^{-m|x|}"
},
{
"math_id": 21,
"text": "1/|x|^2"
}
] | https://en.wikipedia.org/wiki?curid=1075022 |
1075092 | Polyakov action | 2D conformal field theory used in string theory
In physics, the Polyakov action is an action of the two-dimensional conformal field theory describing the worldsheet of a string in string theory. It was introduced by Stanley Deser and Bruno Zumino and independently by L. Brink, P. Di Vecchia and P. S. Howe in 1976, and has become associated with Alexander Polyakov after he made use of it in quantizing the string in 1981. The action reads:
formula_0
where formula_1 is the string tension, formula_2 is the metric of the target manifold, formula_3 is the worldsheet metric, formula_4 its inverse, and formula_5 is the determinant of formula_3. The metric signature is chosen such that timelike directions are + and the spacelike directions are −. The spacelike worldsheet coordinate is called formula_6, whereas the timelike worldsheet coordinate is called formula_7. This is also known as the nonlinear sigma model.
The Polyakov action must be supplemented by the Liouville action to describe string fluctuations.
Global symmetries.
N.B.: Here, a symmetry is said to be local or global from the two-dimensional theory (on the worldsheet) point of view. For example, Lorentz transformations, which are local symmetries of the space-time, are global symmetries of the theory on the worldsheet.
The action is invariant under (i) spacetime translations, X^α → X^α + b^α, and (ii) infinitesimal Lorentz transformations, X^α → X^α + ω^α_δ X^δ,
where formula_8, and formula_9 is a constant. This forms the Poincaré symmetry of the target manifold.
The invariance under (i) follows since the action formula_10 depends only on the first derivative of formula_11. The proof of the invariance under (ii) is as follows:
formula_12
Local symmetries.
The action is invariant under worldsheet diffeomorphisms (or coordinates transformations) and Weyl transformations.
Diffeomorphisms.
Assume the following transformation:
formula_13
It transforms the metric tensor in the following way:
formula_14
One can see that:
formula_15
One knows that the Jacobian of this transformation is given by
formula_16
which leads to
formula_17
and one sees that
formula_18
Summing up this transformation and relabeling formula_19, we see that the action is invariant.
Weyl transformation.
Assume the Weyl transformation:
formula_20
then
formula_21
Finally, in two worldsheet dimensions the factor of Λ(σ) picked up by the square root of minus the determinant cancels against the factor of 1/Λ(σ) picked up by the inverse metric formula_4, so the combination appearing in the action is unchanged, and one can see that the action is invariant under the Weyl transformation. If we consider "n"-dimensional (spatially) extended objects whose action is proportional to their worldsheet area/hyperarea, unless "n" = 1, the corresponding Polyakov action would contain another term breaking Weyl symmetry.
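This two-dimensional cancellation can be verified symbolically; the following sketch (using sympy, with a flat worldsheet metric and a positive scale factor Λ chosen purely for illustration) checks that the combination multiplying the embedding derivatives in the action is unchanged by a Weyl rescaling:

```python
import sympy as sp

Lam = sp.symbols('Lambda', positive=True)   # Weyl factor, Lambda = exp(phi) > 0
eta = sp.Matrix([[1, 0], [0, -1]])          # flat worldsheet metric, signature (+, -)

def weyl_density(h):
    # the combination sqrt(-det h) * h^{ab} appearing in the Polyakov action
    return sp.sqrt(-h.det()) * h.inv()

print(weyl_density(eta))          # Matrix([[1, 0], [0, -1]])
print(weyl_density(Lam * eta))    # the same matrix: the Weyl factor drops out in two dimensions
```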
One can define the stress–energy tensor:
formula_22
Let's define:
formula_23
Because of Weyl symmetry, the action does not depend on formula_24:
formula_25
where we've used the functional derivative chain rule.
Relation with Nambu–Goto action.
Writing the Euler–Lagrange equation for the metric tensor formula_26 one obtains that
formula_27
Knowing also that:
formula_28
One can write the variational derivative of the action:
formula_29
where formula_30, which leads to
formula_31
If the auxiliary worldsheet metric tensor formula_32 is calculated from the equations of motion:
formula_33
and substituted back to the action, it becomes the Nambu–Goto action:
formula_34
However, the Polyakov action is more easily quantized because it is linear.
Equations of motion.
Using diffeomorphisms and Weyl transformation, with a Minkowskian target space, one can make the physically insignificant transformation formula_35, thus writing the action in the "conformal gauge":
formula_36
where formula_37.
Keeping in mind that formula_38 one can derive the constraints:
formula_39
Substituting formula_40, one obtains
formula_41
And consequently
formula_42
The boundary conditions required to make the second (boundary) part of the variation of the action vanish are as follows. For closed strings, periodicity is imposed: formula_43 For open strings, one imposes at each endpoint (σ = 0 and σ = π) either a Neumann condition, X'^μ = 0, or a Dirichlet condition, δX^μ = 0 (the endpoint is held fixed).
Working in light-cone coordinates formula_44, we can rewrite the equations of motion as
formula_45
Thus, the solution can be written as formula_46, and the stress–energy tensor is now diagonal. By Fourier-expanding the solution and imposing canonical commutation relations on the coefficients, applying the second equation of motion motivates the definition of the Virasoro operators and leads to the Virasoro constraints that vanish when acting on physical states.
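A quick symbolic check (a sketch with sympy; f and g are arbitrary placeholder profiles) confirms that any such sum of a left-mover and a right-mover solves the conformal-gauge wave equation:

```python
import sympy as sp

tau, sigma = sp.symbols('tau sigma', real=True)
f, g = sp.Function('f'), sp.Function('g')

# General ansatz: X(tau, sigma) = f(xi^+) + g(xi^-) with xi^± = tau ± sigma
X = f(tau + sigma) + g(tau - sigma)

wave_eq = sp.diff(X, tau, 2) - sp.diff(X, sigma, 2)   # box X in conformal gauge
print(sp.simplify(wave_eq))                           # 0
```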
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{S} = \\frac{T}{2} \\int\\mathrm{d}^2\\sigma\\, \\sqrt{-h}\\,h^{ab} g_{\\mu\\nu}(X) \\partial_a X^\\mu(\\sigma) \\partial_b X^\\nu(\\sigma),"
},
{
"math_id": 1,
"text": "T"
},
{
"math_id": 2,
"text": "g_{\\mu\\nu}"
},
{
"math_id": 3,
"text": "h_{ab}"
},
{
"math_id": 4,
"text": "h^{ab}"
},
{
"math_id": 5,
"text": "h"
},
{
"math_id": 6,
"text": "\\sigma"
},
{
"math_id": 7,
"text": "\\tau"
},
{
"math_id": 8,
"text": " \\omega_{\\mu \\nu} = -\\omega_{\\nu \\mu} "
},
{
"math_id": 9,
"text": " b^\\alpha "
},
{
"math_id": 10,
"text": " \\mathcal{S} "
},
{
"math_id": 11,
"text": " X^\\alpha "
},
{
"math_id": 12,
"text": "\\begin{align}\n \\mathcal{S}'\n &= {T \\over 2}\\int \\mathrm{d}^2\\sigma\\, \\sqrt{-h}\\, h^{ab} g_{\\mu \\nu} \\partial_a \\left( X^\\mu + \\omega^\\mu_{\\ \\delta} X^\\delta \\right) \\partial_b \\left( X^\\nu + \\omega^\\nu_{\\ \\delta} X^\\delta \\right) \\\\\n &= \\mathcal{S} + {T \\over 2}\\int \\mathrm{d}^2\\sigma\\, \\sqrt{-h}\\, h^{ab} \\left( \\omega_{\\mu \\delta} \\partial_a X^\\mu \\partial_b X^\\delta + \\omega_{\\nu \\delta} \\partial_a X^\\delta \\partial_b X^\\nu \\right) + \\operatorname{O}\\left(\\omega^2\\right) \\\\\n &= \\mathcal{S} + {T \\over 2}\\int \\mathrm{d}^2\\sigma\\, \\sqrt{-h}\\, h^{ab} \\left( \\omega_{\\mu \\delta} + \\omega_{\\delta \\mu } \\right) \\partial_a X^\\mu \\partial_b X^\\delta + \\operatorname{O}\\left(\\omega^2\\right) \\\\\n &= \\mathcal{S} + \\operatorname{O}\\left(\\omega^2\\right).\n\\end{align}"
},
{
"math_id": 13,
"text": " \\sigma^\\alpha \\rightarrow \\tilde{\\sigma}^\\alpha\\left(\\sigma,\\tau \\right). "
},
{
"math_id": 14,
"text": " h^{ab}(\\sigma) \\rightarrow \\tilde{h}^{ab} = h^{cd} (\\tilde{\\sigma})\\frac{\\partial {\\sigma}^a}{\\partial \\tilde{\\sigma}^c} \\frac{\\partial {\\sigma}^b}{\\partial \\tilde{\\sigma}^d}. "
},
{
"math_id": 15,
"text": "\n \\tilde{h}^{ab} \\frac{\\partial}{\\partial {\\sigma}^a} X^\\mu(\\tilde{\\sigma}) \\frac{\\partial}{\\partial \\sigma^b} X^\\nu(\\tilde{\\sigma}) =\n h^{cd} \\left(\\tilde{\\sigma}\\right)\\frac{\\partial \\sigma^a}{\\partial \\tilde{\\sigma}^c} \\frac{\\partial \\sigma^b}{\\partial \\tilde{\\sigma}^d} \\frac{\\partial}{\\partial \\sigma^a} X^\\mu(\\tilde{\\sigma})\\frac{\\partial}{\\partial {\\sigma}^b} X^\\nu(\\tilde{\\sigma}) =\n h^{ab}\\left(\\tilde{\\sigma}\\right)\\frac{\\partial}{\\partial \\tilde{\\sigma}^a}X^\\mu(\\tilde{\\sigma}) \\frac{\\partial}{\\partial \\tilde{\\sigma}^b} X^\\nu(\\tilde{\\sigma}).\n"
},
{
"math_id": 16,
"text": " \\mathrm{J} = \\operatorname{det} \\left( \\frac{\\partial \\tilde{\\sigma}^\\alpha}{\\partial \\sigma^\\beta} \\right), "
},
{
"math_id": 17,
"text": "\\begin{align}\n \\mathrm{d}^2 \\tilde{\\sigma} &= \\mathrm{J} \\mathrm{d}^2 \\sigma \\\\\n h &= \\operatorname{det} \\left( h_{ab} \\right) \\\\\n \\Rightarrow \\tilde{h} &= \\mathrm{J}^2 h,\n\\end{align}"
},
{
"math_id": 18,
"text": " \\sqrt{-\\tilde{h}} \\mathrm{d}^2 {\\sigma} = \\sqrt{-h \\left(\\tilde{\\sigma}\\right)} \\mathrm{d}^2 \\tilde{\\sigma}. "
},
{
"math_id": 19,
"text": " \\tilde{\\sigma} = \\sigma "
},
{
"math_id": 20,
"text": " h_{ab} \\to \\tilde{h}_{ab} = \\Lambda(\\sigma) h_{ab}, "
},
{
"math_id": 21,
"text": "\\begin{align}\n \\tilde{h}^{ab} &= \\Lambda^{-1}(\\sigma) h^{ab}, \\\\\n \\operatorname{det} \\left( \\tilde{h}_{ab} \\right) &= \\Lambda^2(\\sigma) \\operatorname{det} (h_{ab}).\n\\end{align}"
},
{
"math_id": 22,
"text": " T^{ab} = \\frac{-2}{\\sqrt{-h}} \\frac{\\delta S}{\\delta h_{ab}}. "
},
{
"math_id": 23,
"text": " \\hat{h}_{ab} = \\exp\\left(\\phi(\\sigma)\\right) h_{ab}. "
},
{
"math_id": 24,
"text": " \\phi "
},
{
"math_id": 25,
"text": " \\frac{\\delta S}{\\delta \\phi} = \\frac{\\delta S}{\\delta \\hat{h}_{ab}} \\frac{\\delta \\hat{h}_{ab}}{\\delta \\phi} = -\\frac12 \\sqrt{-h} \\,T_{ab}\\, e^{\\phi}\\, h^{ab} = -\\frac12 \\sqrt{-h} \\,T^a_{\\ a} \\,e^{\\phi} = 0 \\Rightarrow T^{a}_{\\ a} = 0, "
},
{
"math_id": 26,
"text": " h^{ab} "
},
{
"math_id": 27,
"text": " \\frac{\\delta S}{\\delta h^{ab}} = T_{ab} = 0. "
},
{
"math_id": 28,
"text": " \\delta \\sqrt{-h} = -\\frac12 \\sqrt{-h} h_{ab} \\delta h^{ab}. "
},
{
"math_id": 29,
"text": " \\frac{\\delta S}{\\delta h^{ab}} = \\frac{T}{2} \\sqrt{-h} \\left( G_{ab} - \\frac12 h_{ab} h^{cd} G_{cd} \\right), "
},
{
"math_id": 30,
"text": " G_{ab} = g_{\\mu \\nu} \\partial_a X^\\mu \\partial_b X^\\nu "
},
{
"math_id": 31,
"text": "\\begin{align}\n T_{ab} &= T \\left( G_{ab} - \\frac12 h_{ab} h^{cd} G_{cd} \\right) = 0, \\\\\n G_{ab} &= \\frac12 h_{ab} h^{cd} G_{cd}, \\\\\n G &= \\operatorname{det} \\left( G_{ab} \\right) = \\frac14 h \\left( h^{cd} G_{cd} \\right)^2.\n\\end{align}"
},
{
"math_id": 32,
"text": "\\sqrt{-h}"
},
{
"math_id": 33,
"text": " \\sqrt{-h} = \\frac{2 \\sqrt{-G}}{h^{cd} G_{cd}} "
},
{
"math_id": 34,
"text": " S = {T \\over 2}\\int \\mathrm{d}^2 \\sigma \\sqrt{-h} h^{ab} G_{ab} = {T \\over 2}\\int \\mathrm{d}^2 \\sigma \\frac{2 \\sqrt{-G}}{h^{cd} G_{cd}} h^{ab} G_{ab} = T \\int \\mathrm{d}^2 \\sigma \\sqrt{-G}."
},
{
"math_id": 35,
"text": "\\sqrt{-h} h^{ab} \\rightarrow \\eta^{ab}"
},
{
"math_id": 36,
"text": " \\mathcal{S} = {T \\over 2}\\int \\mathrm{d}^2 \\sigma \\sqrt{-\\eta} \\eta^{ab} g_{\\mu \\nu} (X) \\partial_a X^\\mu (\\sigma) \\partial_b X^\\nu(\\sigma) = {T \\over 2}\\int \\mathrm{d}^2 \\sigma \\left( \\dot{X}^2 - X'^2 \\right), "
},
{
"math_id": 37,
"text": " \\eta_{ab} = \\left( \\begin{array}{cc} 1 & 0 \\\\ 0 & -1 \\end{array} \\right) "
},
{
"math_id": 38,
"text": " T_{ab} = 0 "
},
{
"math_id": 39,
"text": "\\begin{align}\n T_{01} &= T_{10} = \\dot{X} X' = 0, \\\\\n T_{00} &= T_{11} = \\frac12 \\left( \\dot{X}^2 + X'^2 \\right) = 0.\n\\end{align}"
},
{
"math_id": 40,
"text": " X^\\mu \\to X^\\mu + \\delta X^\\mu "
},
{
"math_id": 41,
"text": "\\begin{align}\n \\delta \\mathcal{S}\n &= T \\int \\mathrm{d}^2 \\sigma \\eta^{ab} \\partial_a X^\\mu \\partial_b \\delta X_\\mu \\\\\n &= -T \\int \\mathrm{d}^2 \\sigma \\eta^{ab} \\partial_a \\partial_b X^\\mu \\delta X_\\mu + \\left( T \\int d \\tau X' \\delta X \\right)_{\\sigma=\\pi} - \\left( T \\int d \\tau X' \\delta X \\right)_{\\sigma=0} \\\\\n &= 0.\n\\end{align}"
},
{
"math_id": 42,
"text": " \\square X^\\mu = \\eta^{ab} \\partial_a \\partial_b X^\\mu = 0. "
},
{
"math_id": 43,
"text": " X^\\mu(\\tau, \\sigma + \\pi) = X^\\mu(\\tau, \\sigma). "
},
{
"math_id": 44,
"text": "\\xi^\\pm = \\tau \\pm \\sigma"
},
{
"math_id": 45,
"text": "\\begin{align}\n \\partial_+ \\partial_- X^\\mu &= 0, \\\\\n (\\partial_+ X)^2 = (\\partial_- X)^2 &= 0.\n\\end{align}"
},
{
"math_id": 46,
"text": "X^\\mu = X^\\mu_+ (\\xi^+) + X^\\mu_- (\\xi^-)"
}
] | https://en.wikipedia.org/wiki?curid=1075092 |
1075379 | Vitali–Hahn–Saks theorem | In mathematics, the Vitali–Hahn–Saks theorem, introduced by Vitali (1907), Hahn (1922), and Saks (1933), proves that under some conditions a sequence of measures converging point-wise does so uniformly and the limit is also a measure.
Statement of the theorem.
Let formula_0 be a measure space with formula_1 and let formula_2 be a sequence of complex measures. Assume that each formula_2 is absolutely continuous with respect to formula_3 and that for all formula_4 the finite limits exist: formula_5 Then the absolute continuity of the formula_2 with respect to formula_6 is uniform in formula_7 that is, formula_8 implies that formula_9 uniformly in formula_10 Also formula_11 is countably additive on formula_12
Preliminaries.
Given a measure space formula_13 a distance can be constructed on formula_14 the set of measurable sets formula_4 with formula_15 This is done by defining
formula_16 where formula_17 is the symmetric difference of the sets formula_18
This gives rise to a metric space formula_19 by identifying two sets formula_20 when formula_21 Thus a point formula_22 with representative formula_23 is the set of all formula_24 such that formula_25
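For a finite measure space, this distance is easy to compute directly; the following sketch (with arbitrary assumed weights) illustrates the definition and spot-checks the triangle inequality:

```python
import itertools, random

m = {0: 0.5, 1: 1.0, 2: 0.25, 3: 2.0, 4: 0.1, 5: 0.75}   # assumed weights m({i})
measure = lambda B: sum(m[i] for i in B)

def d(B1, B2):
    """d(B1, B2) = m(B1 symmetric-difference B2)."""
    return measure(B1 ^ B2)

random.seed(1)
sets = [frozenset(i for i in m if random.random() < 0.5) for _ in range(20)]
assert all(d(A, C) <= d(A, B) + d(B, C) + 1e-12
           for A, B, C in itertools.product(sets, repeat=3))
print("triangle inequality holds on the sample")
```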
Proposition: formula_19 with the metric defined above is a complete metric space.
"Proof:" Let
formula_26
Then
formula_27
This means that the metric space formula_19 can be identified with a subset of the Banach space formula_28.
Let formula_29, with
formula_30
Then we can choose a sub-sequence formula_31 such that formula_32 exists almost everywhere and formula_33. It follows that formula_34 for some formula_35 (furthermore, formula_36 if and only if formula_37 for formula_38 large enough, so we have formula_39 the limit inferior of the sequence), and hence formula_40 Therefore, formula_19 is complete.
Proof of Vitali-Hahn-Saks theorem.
Each formula_2 defines a function formula_41 on formula_42 by taking formula_43. This function is well defined; that is, it is independent of the representative formula_44 of the class formula_45, due to the absolute continuity of formula_2 with respect to formula_6. Moreover, formula_46 is continuous.
For every formula_47 the set
formula_48
is closed in formula_42, and by the hypothesis formula_49 we have that
formula_50
By the Baire category theorem, at least one formula_51 must contain a non-empty open set of formula_42. This means that there exist formula_52 and formula_53 such that
formula_54 implies formula_55
On the other hand, any formula_4 with formula_56 can be represented as formula_57 with formula_58 and formula_59. This can be done, for example by taking formula_60 and formula_61. Thus, if formula_56 and formula_62 then
formula_63
Therefore, by the absolute continuity of formula_64 with respect to formula_6, and since formula_65 is arbitrary, we get that formula_66 implies formula_67 uniformly in formula_10 In particular, formula_68 implies formula_69
By the additivity of the limit it follows that formula_11 is finitely-additive. Then, since formula_70 it follows that formula_11 is actually countably additive.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(S,\\mathcal{B},m)"
},
{
"math_id": 1,
"text": "m(S)<\\infty,"
},
{
"math_id": 2,
"text": "\\lambda_n"
},
{
"math_id": 3,
"text": "m,"
},
{
"math_id": 4,
"text": "B\\in\\mathcal{B}"
},
{
"math_id": 5,
"text": "\\lim_{n\\to\\infty}\\lambda_n(B)=\\lambda(B)."
},
{
"math_id": 6,
"text": "m"
},
{
"math_id": 7,
"text": "n,"
},
{
"math_id": 8,
"text": "\\lim_B m(B)=0"
},
{
"math_id": 9,
"text": "\\lim_{B}\\lambda_n(B)=0"
},
{
"math_id": 10,
"text": "n."
},
{
"math_id": 11,
"text": "\\lambda"
},
{
"math_id": 12,
"text": "\\mathcal{B}."
},
{
"math_id": 13,
"text": "(S,\\mathcal{B},m),"
},
{
"math_id": 14,
"text": "\\mathcal{B}_0,"
},
{
"math_id": 15,
"text": "m(B) < \\infty."
},
{
"math_id": 16,
"text": "d(B_1,B_2) = m(B_1\\Delta B_2),"
},
{
"math_id": 17,
"text": "B_1\\Delta B_2 = (B_1\\setminus B_2) \\cup (B_2\\setminus B_1)"
},
{
"math_id": 18,
"text": "B_1,B_2\\in\\mathcal{B}_0."
},
{
"math_id": 19,
"text": "\\tilde{\\mathcal{B}_0}"
},
{
"math_id": 20,
"text": "B_1,B_2\\in \\mathcal{B}_0"
},
{
"math_id": 21,
"text": "m(B_1\\Delta B_2)=0."
},
{
"math_id": 22,
"text": "\\overline{B}\\in\\tilde{\\mathcal{B}_0}"
},
{
"math_id": 23,
"text": "B\\in\\mathcal{B}_0"
},
{
"math_id": 24,
"text": "B_1\\in\\mathcal{B}_0"
},
{
"math_id": 25,
"text": "m(B\\Delta B_1) = 0."
},
{
"math_id": 26,
"text": "\\chi_B(x)=\\begin{cases}1,&x\\in B\\\\0,&x\\notin B\\end{cases}"
},
{
"math_id": 27,
"text": "d(B_1,B_2)=\\int_S|\\chi_{B_1}(s)-\\chi_{B_2}(x)|dm"
},
{
"math_id": 28,
"text": "L^1(S,\\mathcal{B},m)"
},
{
"math_id": 29,
"text": "B_n\\in\\mathcal{B}_0"
},
{
"math_id": 30,
"text": "\\lim_{n,k\\to\\infty}d(B_n,B_k)=\\lim_{n,k\\to\\infty}\\int_S|\\chi_{B_n}(x)-\\chi_{B_k}(x)|dm=0"
},
{
"math_id": 31,
"text": "\\chi_{B_{n'}}"
},
{
"math_id": 32,
"text": "\\lim_{n'\\to\\infty}\\chi_{B_{n'}}(x)=\\chi(x)"
},
{
"math_id": 33,
"text": "\\lim_{n'\\to\\infty}\\int_S|\\chi(x)-\\chi_{B_{n'}(x)}|dm=0"
},
{
"math_id": 34,
"text": "\\chi=\\chi_{B_{\\infty}}"
},
{
"math_id": 35,
"text": "B_{\\infty}\\in\\mathcal{B}_0"
},
{
"math_id": 36,
"text": "\\chi (x) = 1"
},
{
"math_id": 37,
"text": "\\chi_{B_{n'}} (x) = 1"
},
{
"math_id": 38,
"text": "n'"
},
{
"math_id": 39,
"text": "B_{\\infty} = \\liminf_{n'\\to\\infty}B_{n'} = {\\bigcup_{n'=1}^\\infty}\\left({\\bigcap_{m=n'}^\\infty}B_m\\right) "
},
{
"math_id": 40,
"text": "\\lim_{n\\to\\infty}d(B_\\infty,B_n)=0."
},
{
"math_id": 41,
"text": "\\overline{\\lambda}_n(\\overline{B})"
},
{
"math_id": 42,
"text": "\\tilde{\\mathcal{B}}"
},
{
"math_id": 43,
"text": "\\overline{\\lambda}_n(\\overline{B})=\\lambda_n(B)"
},
{
"math_id": 44,
"text": "B"
},
{
"math_id": 45,
"text": "\\overline{B}"
},
{
"math_id": 46,
"text": "\\overline{\\lambda}_n"
},
{
"math_id": 47,
"text": "\\epsilon>0"
},
{
"math_id": 48,
"text": "F_{k,\\epsilon}=\\{\\overline{B}\\in\\tilde{\\mathcal{B}}:\\ \\sup_{n\\geq1}|\\overline{\\lambda}_k(\\overline{B})-\\overline{\\lambda}_{k+n}(\\overline{B})|\\leq\\epsilon\\}"
},
{
"math_id": 49,
"text": "\\lim_{n\\to\\infty}\\lambda_n(B)=\\lambda(B)"
},
{
"math_id": 50,
"text": "\\tilde{\\mathcal{B}}=\\bigcup_{k=1}^{\\infty}F_{k,\\epsilon}"
},
{
"math_id": 51,
"text": "F_{k_0,\\epsilon}"
},
{
"math_id": 52,
"text": "\\overline{B_0}\\in\\tilde{\\mathcal{B}}"
},
{
"math_id": 53,
"text": "\\delta>0"
},
{
"math_id": 54,
"text": "d(B,B_0)<\\delta"
},
{
"math_id": 55,
"text": "\\sup_{n\\geq1}|\\overline{\\lambda}_{k_0}(\\overline{B})-\\overline{\\lambda}_{k_0+n}(\\overline{B})|\\leq\\epsilon"
},
{
"math_id": 56,
"text": "m(B)\\leq\\delta"
},
{
"math_id": 57,
"text": "B=B_1\\setminus B_2"
},
{
"math_id": 58,
"text": "d(B_1,B_0)\\leq\\delta"
},
{
"math_id": 59,
"text": "d(B_2,B_0)\\leq \\delta"
},
{
"math_id": 60,
"text": "B_1=B\\cup B_0"
},
{
"math_id": 61,
"text": "B_2=B_0\\setminus(B\\cap B_0)"
},
{
"math_id": 62,
"text": "k\\geq k_0"
},
{
"math_id": 63,
"text": "\\begin{align}|\\lambda_k(B)|&\\leq|\\lambda_{k_0}(B)|+|\\lambda_{k_0}(B)-\\lambda_k(B)|\\\\&\\leq|\\lambda_{k_0}(B)|+|\\lambda_{k_0}(B_1)-\\lambda_k(B_1)|+|\\lambda_{k_0}(B_2)-\\lambda_k(B_2)|\\\\&\\leq|\\lambda_{k_0}(B)|+2\\epsilon\\end{align}"
},
{
"math_id": 64,
"text": "\\lambda_{k_0}"
},
{
"math_id": 65,
"text": "\\epsilon"
},
{
"math_id": 66,
"text": "m(B)\\to0"
},
{
"math_id": 67,
"text": "\\lambda_n(B) \\to 0"
},
{
"math_id": 68,
"text": "m(B) \\to 0"
},
{
"math_id": 69,
"text": "\\lambda(B) \\to 0."
},
{
"math_id": 70,
"text": "\\lim_{m(B) \\to 0}\\lambda(B) = 0"
}
] | https://en.wikipedia.org/wiki?curid=1075379 |
1075596 | Hahn embedding theorem | Description of linearly ordered groups
In mathematics, especially in the area of abstract algebra dealing with ordered structures on abelian groups, the Hahn embedding theorem gives a simple description of all linearly ordered abelian groups. It is named after Hans Hahn.
Overview.
The theorem states that every linearly ordered abelian group "G" can be embedded as an ordered subgroup of the additive group formula_0 endowed with a lexicographical order, where formula_1 is the additive group of real numbers (with its standard order), Ω is the set of "Archimedean equivalence classes" of "G", and formula_0 is the set of all functions from Ω to formula_1 which vanish outside a well-ordered set.
Let 0 denote the identity element of "G". For any nonzero element "g" of "G", exactly one of the elements "g" or −"g" is greater than 0; denote this element by |"g"|. Two nonzero elements "g" and "h" of "G" are "Archimedean equivalent" if there exist natural numbers "N" and "M" such that "N"|"g"| > |"h"| and "M"|"h"| > |"g"|. Intuitively, this means that neither "g" nor "h" is "infinitesimal" with respect to the other. The group "G" is Archimedean if "all" nonzero elements are Archimedean-equivalent. In this case, Ω is a singleton, so formula_0 is just the group of real numbers. Then Hahn's Embedding Theorem reduces to Hölder's theorem (which states that a linearly ordered abelian group is Archimedean if and only if it is a subgroup of the ordered additive group of the real numbers).
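To make these notions concrete, here is a small sketch (in Python, with an arbitrary finite search bound, so it is only suggestive) for the lexicographically ordered group Z × Z: it exhibits two Archimedean classes, matching the fact that this group embeds into formula_0 with Ω of size two:

```python
def absval(g):
    return g if g > (0, 0) else (-g[0], -g[1])   # tuples compare lexicographically

def equiv(g, h, bound=1000):
    """Archimedean equivalence tested by searching N, M up to a bound
    (a heuristic: failing to find N, M below the bound is not a proof)."""
    big = lambda a, b: any((n * a[0], n * a[1]) > b for n in range(1, bound))
    return big(absval(g), absval(h)) and big(absval(h), absval(g))

print(equiv((0, 1), (0, 5)))   # True: same Archimedean class
print(equiv((1, 0), (0, 1)))   # False: no multiple of (0, 1) exceeds (1, 0) lexicographically
```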
gives a clear statement and proof of the theorem. The papers of and together provide another proof. See also .
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{R}^\\Omega"
},
{
"math_id": 1,
"text": "\\mathbb{R}"
}
] | https://en.wikipedia.org/wiki?curid=1075596 |
10756143 | Abel–Jacobi map | Construction in algebraic geometry
In mathematics, the Abel–Jacobi map is a construction of algebraic geometry which relates an algebraic curve to its Jacobian variety. In Riemannian geometry, it is a more general construction mapping a manifold to its Jacobi torus.
The name derives from the theorem of Abel and Jacobi that two effective divisors are linearly equivalent if and only if they are indistinguishable under the Abel–Jacobi map.
Construction of the map.
In complex algebraic geometry, the Jacobian of a curve "C" is constructed using path integration. Namely, suppose "C" has genus "g", which means topologically that
formula_0
Geometrically, this homology group consists of (homology classes of) "cycles" in "C", or in other words, closed loops. Therefore, we can choose 2"g" loops formula_1 generating it. On the other hand, another more algebro-geometric way of saying that the genus of "C" is "g" is that
formula_2
where "K" is the canonical bundle on "C".
By definition, this is the space of globally defined holomorphic differential forms on "C", so we can choose "g" linearly independent forms formula_3. Given forms and closed loops we can integrate, and we define 2"g" vectors
formula_4
It follows from the Riemann bilinear relations that the formula_5 generate a nondegenerate lattice formula_6 (that is, they are a real basis for formula_7), and the Jacobian is defined by
formula_8
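As a concrete numerical sketch for genus one (the curve y^2 = x^3 - x and its standard period integral are used purely as an illustration), one lattice generator of the Jacobian can be obtained by integrating the holomorphic form dx/y over a real cycle, and compared with the classical closed-form value:

```python
import mpmath as mp

# One real period of omega = dx/y on y^2 = x^3 - x:
#   Omega_1 = 2 * integral from 1 to infinity of dx / sqrt(x^3 - x)
omega1 = 2 * mp.quad(lambda x: 1 / mp.sqrt(x**3 - x), [1, 2, mp.inf])

closed_form = mp.gamma(mp.mpf(1) / 4)**2 / mp.sqrt(2 * mp.pi)   # classical lemniscatic value

print(omega1)        # ~5.2441
print(closed_form)   # ~5.2441
```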
The Abel–Jacobi map is then defined as follows. We pick some base point formula_9 and, nearly mimicking the definition of formula_10 define the map
formula_11
Although this is seemingly dependent on a path from formula_12 to formula_13 any two such paths define a closed loop in formula_14 and, therefore, an element of formula_15 so integration over it gives an element of formula_16 Thus the difference is erased in the passage to the quotient by formula_6. Changing base-point formula_12 does change the map, but only by a translation of the torus.
The Abel–Jacobi map of a Riemannian manifold.
Let formula_17 be a smooth compact manifold. Let formula_18 be its fundamental group. Let formula_19 be its abelianisation map. Let formula_20 be the torsion subgroup of formula_21. Let formula_22 be the quotient by torsion. If formula_17 is a surface, formula_23 is non-canonically isomorphic to formula_24, where formula_25 is the genus; more generally, formula_23 is non-canonically isomorphic to formula_26, where formula_27 is the first Betti number. Let formula_28 be the composite homomorphism.
Definition. The cover formula_29 of the manifold formula_17 corresponding to the subgroup formula_30 is called the universal (or maximal) free abelian cover.
Now assume formula_17 has a Riemannian metric. Let formula_31 be the space of harmonic 1-forms on formula_17, with dual formula_32 canonically identified with formula_33. By integrating an integral harmonic 1-form along paths from a basepoint formula_34, we obtain a map to the circle formula_35.
Similarly, in order to define a map formula_36 without choosing a basis for cohomology, we argue as follows. Let formula_37 be a point in the universal cover formula_38 of formula_17. Thus formula_37 is represented by a point of formula_17 together with a path formula_39 from formula_40 to it. By integrating along the path formula_39, we obtain a linear form on formula_31:
formula_41
This gives rise to a map
formula_42
which, furthermore, descends to a map
formula_43
where formula_44 is the universal free abelian cover.
Definition. The Jacobi variety (Jacobi torus) of formula_17 is the torus
formula_45
Definition. The "Abel–Jacobi map"
formula_46
is obtained from the map above by passing to quotients.
The Abel–Jacobi map is unique up to translations of the Jacobi torus. The map has applications in Systolic geometry. The Abel–Jacobi map of a Riemannian manifold shows up in the large time asymptotics of the heat kernel on a periodic manifold ( and ).
In much the same way, one can define a graph-theoretic analogue of Abel–Jacobi map as a piecewise-linear map from a finite graph into a flat torus (or a Cayley graph associated with a finite abelian group), which is closely related to asymptotic behaviors of random walks on crystal lattices, and can be used for design of crystal structures.
The Abel–Jacobi map of a compact Riemann surface.
We provide an analytic construction of the Abel-Jacobi map on compact Riemann surfaces.
Let formula_17 denote a compact Riemann surface of genus formula_47. Let formula_48 be a canonical homology basis on formula_17, and formula_49 the dual basis for formula_50, which is a formula_25-dimensional complex vector space consisting of holomorphic differential forms. By "dual basis" we mean formula_51, for formula_52. We can form a symmetric matrix whose entries are formula_53, for formula_54. Let formula_55 be the lattice generated by the formula_56 columns of the formula_57 matrix whose entries consist of formula_58 for formula_54 where formula_59. We call formula_60 the "Jacobian variety" of formula_17, which is a compact, commutative formula_25-dimensional complex Lie group.
We can define a map formula_61 by choosing a point formula_62 and setting
formula_63
which is a well-defined holomorphic mapping with rank 1 (maximal rank). Then we can naturally extend this to a mapping of divisor classes;
If we denote formula_64 the "divisor class group" of formula_17 then define a map formula_65 by setting
formula_66
Note that if formula_67, then this map is independent of the choice of the base point, so we can define the base-point-independent map
formula_68 where formula_69 denotes the divisors of degree zero of formula_17.
Abel's theorem below shows that the kernel of the map formula_70 is precisely the subgroup of principal divisors. Together with the Jacobi inversion problem, it follows that formula_71 is isomorphic as a group to the group of divisors of degree zero modulo its subgroup of principal divisors.
Abel–Jacobi theorem.
The following theorem was proved by Abel (known as Abel's theorem): Suppose that
formula_72
is a divisor (meaning a formal integer-linear combination of points of "C"). We can define
formula_73
and therefore speak of the value of the Abel–Jacobi map on divisors. The theorem is then that if "D" and "E" are two "effective" divisors, meaning that the formula_74 are all positive integers, then
formula_75 if and only if formula_76 is linearly equivalent to formula_77 This implies that the Abel-Jacobi map induces an injective map (of abelian groups) from the space of divisor classes of degree zero to the Jacobian.
Jacobi proved that this map is also surjective (known as Jacobi inversion problem), so the two groups are naturally isomorphic.
The Abel–Jacobi theorem implies that the Albanese variety of a compact complex curve (dual of holomorphic 1-forms modulo periods) is isomorphic to its Jacobian variety (divisors of degree 0 modulo equivalence). For higher-dimensional compact projective varieties the Albanese variety and the Picard variety are dual but need not be isomorphic. | [
{
"math_id": 0,
"text": "H_1(C, \\Z) \\cong \\Z^{2g}."
},
{
"math_id": 1,
"text": "\\gamma_1, \\ldots, \\gamma_{2g}"
},
{
"math_id": 2,
"text": "H^0(C, K) \\cong \\Complex^g,"
},
{
"math_id": 3,
"text": "\\omega_1, \\ldots, \\omega_g"
},
{
"math_id": 4,
"text": "\\Omega_j = \\left(\\int_{\\gamma_j} \\omega_1, \\ldots, \\int_{\\gamma_j} \\omega_g\\right) \\in \\Complex^g."
},
{
"math_id": 5,
"text": "\\Omega_j"
},
{
"math_id": 6,
"text": "\\Lambda"
},
{
"math_id": 7,
"text": "\\Complex^g \\cong \\R^{2g}"
},
{
"math_id": 8,
"text": "J(C) = \\Complex^g/\\Lambda."
},
{
"math_id": 9,
"text": "p_0 \\in C"
},
{
"math_id": 10,
"text": "\\Lambda,"
},
{
"math_id": 11,
"text": "\\begin{cases} u : C \\to J(C) \\\\ u(p) = \\left( \\int_{p_0}^p \\omega_1, \\dots, \\int_{p_0}^p \\omega_g\\right) \\bmod \\Lambda \\end{cases}"
},
{
"math_id": 12,
"text": "p_0"
},
{
"math_id": 13,
"text": "p,"
},
{
"math_id": 14,
"text": "C"
},
{
"math_id": 15,
"text": "H_1(C, \\Z),"
},
{
"math_id": 16,
"text": "\\Lambda."
},
{
"math_id": 17,
"text": "M"
},
{
"math_id": 18,
"text": "\\pi = \\pi_1(M)"
},
{
"math_id": 19,
"text": "f: \\pi \\to \\pi^{ab}"
},
{
"math_id": 20,
"text": "\\operatorname{tor}= \\operatorname{tor}(\\pi^{ab})"
},
{
"math_id": 21,
"text": "\\pi^{ab}"
},
{
"math_id": 22,
"text": "g: \\pi^{ab} \\to \\pi^{ab}/\\operatorname{tor}"
},
{
"math_id": 23,
"text": "\\pi^{ab}/\\operatorname{tor}"
},
{
"math_id": 24,
"text": "\\Z^{2g}"
},
{
"math_id": 25,
"text": "g"
},
{
"math_id": 26,
"text": "\\Z^b "
},
{
"math_id": 27,
"text": "b"
},
{
"math_id": 28,
"text": "\\varphi=g \\circ f : \\pi \\to \\Z^b "
},
{
"math_id": 29,
"text": "\\bar M"
},
{
"math_id": 30,
"text": "\\ker(\\varphi) \\subset \\pi"
},
{
"math_id": 31,
"text": "E"
},
{
"math_id": 32,
"text": "E^*"
},
{
"math_id": 33,
"text": "H_1(M,\\R)"
},
{
"math_id": 34,
"text": "x_0\\in M"
},
{
"math_id": 35,
"text": "\\R/\\Z=S^1"
},
{
"math_id": 36,
"text": "M\\to H_1(M,\\R) / H_1(M,\\Z)_{\\R}"
},
{
"math_id": 37,
"text": "x"
},
{
"math_id": 38,
"text": "\\tilde{M}"
},
{
"math_id": 39,
"text": "c"
},
{
"math_id": 40,
"text": "x_0"
},
{
"math_id": 41,
"text": "h\\to \\int_c h."
},
{
"math_id": 42,
"text": "\\tilde{M}\\to E^* = H_1(M,\\R),"
},
{
"math_id": 43,
"text": "\\begin{cases} \\overline{A}_M: \\overline{M}\\to E^* \\\\ c\\mapsto \\left( h\\mapsto \\int_c h \\right) \\end{cases}"
},
{
"math_id": 44,
"text": "\\overline{M}"
},
{
"math_id": 45,
"text": "J_1(M)=H_1(M,\\R)/H_1(M,\\Z)_{\\R}."
},
{
"math_id": 46,
"text": "A_M: M \\to J_1(M),"
},
{
"math_id": 47,
"text": "g>0"
},
{
"math_id": 48,
"text": "\\{a_1,...,a_g,b_1,...,b_g\\}"
},
{
"math_id": 49,
"text": "\\{\\zeta_1,...,\\zeta_g\\}"
},
{
"math_id": 50,
"text": "\\mathcal{H}^1(M)"
},
{
"math_id": 51,
"text": "\\int_{a_k}\\zeta_j = \\delta_{jk}"
},
{
"math_id": 52,
"text": "j,k = 1,...,g"
},
{
"math_id": 53,
"text": "\\int_{b_k}\\zeta_j"
},
{
"math_id": 54,
"text": "j,k=1,...,g"
},
{
"math_id": 55,
"text": "L"
},
{
"math_id": 56,
"text": "2g"
},
{
"math_id": 57,
"text": "g\\times 2g"
},
{
"math_id": 58,
"text": "\\int_{c_k}\\zeta_j"
},
{
"math_id": 59,
"text": "c_k\\in\\{a_k,b_k\\}"
},
{
"math_id": 60,
"text": "J(M)=\\Bbb C^g/L(M)"
},
{
"math_id": 61,
"text": "\\varphi:M\\to J(M)"
},
{
"math_id": 62,
"text": "P_0\\in M"
},
{
"math_id": 63,
"text": "\n\\varphi(P) = \\left(\\int_{P_0}^P\\zeta_1,...,\\int_{P_0}^P\\zeta_g\\right).\n"
},
{
"math_id": 64,
"text": "\\mathrm{Div}(M)"
},
{
"math_id": 65,
"text": "\\varphi:\\mathrm{Div}(M)\\to J(M)"
},
{
"math_id": 66,
"text": "\n\\varphi(D) = \\sum_{j=1}^r\\varphi(P_j)-\\sum_{j=1}^s\\varphi(Q_j),\\quad D = P_1\\cdots P_r/Q_1\\cdots Q_s.\n"
},
{
"math_id": 67,
"text": "r =s"
},
{
"math_id": 68,
"text": "\\varphi_0:\\mathrm{Div}^{(0)}(M)\\to J(M)"
},
{
"math_id": 69,
"text": "\\mathrm{Div}^{(0)}(M)"
},
{
"math_id": 70,
"text": "\\varphi_0"
},
{
"math_id": 71,
"text": "J(M)"
},
{
"math_id": 72,
"text": "D = \\sum\\nolimits_i n_i p_i"
},
{
"math_id": 73,
"text": "u(D) = \\sum\\nolimits_i n_i u(p_i)"
},
{
"math_id": 74,
"text": "n_i"
},
{
"math_id": 75,
"text": "u(D) = u(E)"
},
{
"math_id": 76,
"text": "D"
},
{
"math_id": 77,
"text": "E."
}
] | https://en.wikipedia.org/wiki?curid=10756143 |
10756580 | Metabolic flux analysis | Experimental fluxomics technique
Metabolic flux analysis (MFA) is an experimental fluxomics technique used to examine production and consumption rates of metabolites in a biological system. At an intracellular level, it allows for the quantification of metabolic fluxes, thereby elucidating the central metabolism of the cell. Various methods of MFA, including isotopically stationary metabolic flux analysis, isotopically non-stationary metabolic flux analysis, and thermodynamics-based metabolic flux analysis, can be coupled with stoichiometric models of metabolism and mass spectrometry methods with isotopic mass resolution to elucidate the transfer of moieties containing isotopic tracers from one metabolite into another and derive information about the metabolic network. Metabolic flux analysis (MFA) has many applications such as determining the limits on the ability of a biological system to produce a biochemical such as ethanol, predicting the response to gene knockout, and guiding the identification of bottleneck enzymes in metabolic networks for metabolic engineering efforts.
Metabolic flux analysis may use 13C-labeled isotope tracers for isotopic labeling experiments. Nuclear magnetic resonance (NMR) techniques and mass spectrometry may then be used to measure metabolite labeling patterns to provide information for determination of pathway fluxes. Because MFA typically requires rigorous flux calculation of complex metabolic networks, publicly available software tools have been developed to automate MFA and reduce its computational burden.
Experimental method.
Although using a stoichiometric balance and constraints of the metabolites comprising the metabolic network can elucidate fluxes, this approach has limitations, including difficulty in resolving fluxes through parallel, cyclic, and reversible pathways. Moreover, there is limited insight on how metabolites interconvert in a metabolic network without the use of isotope tracers. Thus, the use of isotopes has become the dominant technique for MFA.
Isotope labeling experiments.
Isotope labeling experiments are optimal for gathering experimental data necessary for MFA. Because fluxes determine the isotopic labeling patterns of intracellular metabolites, measuring these patterns allows for inference of fluxes. The first step in the workflow of isotope labeling experiments is cell culture on labeled substrates. A substrate such as glucose is labeled by isotope(s), most often 13C, and is introduced into the culture medium. The medium also typically contains vitamins and essential amino acids to facilitate cells' growth. The labeled substrate is then metabolized by the cells, leading to the incorporation of the 13C tracer in other intracellular metabolites. After the cells reach steady-state physiology (i.e., constant metabolite concentrations in culture), cells are then lysed to extract metabolites. For mammalian cells, extraction involves quenching of cells using methanol to stop their cellular metabolism and subsequent extraction of metabolites using a methanol–water extraction. Concentrations of metabolites and labeled isotope in metabolites of the extracts are measured by instruments like liquid chromatography-mass spectrometry or NMR, which also provide information on the position and number of labeled atoms on the metabolites. These data are necessary for gaining insight into the dynamics of intracellular metabolism and metabolite turnover rates to infer metabolic flux.
Methodologies.
Isotopically stationary.
A predominant method for metabolic flux analysis is isotopically stationary MFA. This technique for flux quantitation is applicable under metabolic and isotopic steady-state, two conditions that assume that metabolite concentrations and isotopomer distributions are not changing over time, respectively. Knowledge of the stoichiometric matrix (S) comprising the consumption and production of metabolites within biochemical reactions is needed to balance fluxes (v) around the assumed metabolic network model. Assuming metabolic steady-state, metabolic fluxes can thus be quantitated by solving the inverse of the following simple linear algebra equation:
formula_0
To reduce the possible solution space for flux distributions, isotopically stationary MFA requires additional stoichiometric constraints such as growth rates, substrate secretion and uptake, and product accumulation rates as well as upper and lower bounds for fluxes. Although isotopically stationary MFA allows precise deduction of metabolic fluxes through mathematical modeling, the analysis is limited to batch cultures during the exponential phase. Moreover, after addition of a labeled substrate, the time-point for when metabolic and isotopic steady-state may be accurately assumed can be difficult to determine.
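As a small illustration of the flux-balance constraint (a sketch with an assumed toy network, not a real metabolic model), the admissible steady-state flux directions are the null space of the stoichiometric matrix:

```python
import numpy as np
from scipy.linalg import null_space

# Toy linear pathway A_ext -> A -> B -> C -> C_ext (reactions v1..v4);
# rows are the internal metabolites A, B, C, columns are reactions.
S = np.array([
    [ 1, -1,  0,  0],   # A
    [ 0,  1, -1,  0],   # B
    [ 0,  0,  1, -1],   # C
])

basis = null_space(S)                 # solutions of S @ v = 0
print((basis / basis[0, 0]).ravel())  # [1. 1. 1. 1.]: v1 = v2 = v3 = v4 at steady state
```

In realistic networks the null space is higher-dimensional, which is why the measurements and bounds described above are needed to pin down a unique flux distribution.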
Isotopically non-stationary.
When isotope labeling is transient and has not yet equilibrated, isotopically non-stationary MFA (INST-MFA) is advantageous in deducing fluxes, particularly for systems with slow labeling dynamics. Similar to isotopically stationary MFA, this method requires mass and isotopomer balances to characterize the stoichiometry and atom transitions of the metabolic network. Unlike traditional MFA methods, however, INST-MFA requires applying ordinary differential equations to examine how isotopic labeling patterns of metabolites change over time; such examination can be accomplished by measuring changing isotopic labeling patterns over different time points to input into INST-MFA. INST-MFA is thus a powerful method for elucidating fluxes of systems with pathway bottlenecks and revealing metabolic phenotypes of autotrophic organisms. Although INST-MFA's computationally intensive demands previously hindered its widespread use, newly developed software tools have streamlined INST-MFA to decrease computational time and demand.
Thermodynamics-based.
Thermodynamics-Based Metabolic Flux Analysis (TMFA) is a specialized type of metabolic flux analysis which utilizes linear thermodynamic constraints in addition to mass balance constraints to generate thermodynamically feasible fluxes and metabolite activity profiles. TMFA takes into consideration only pathways and fluxes that are feasible by using the Gibbs free energy change of the reactions and activities of the metabolites that are part of the model. By calculating Gibbs free energies of metabolic reactions and consequently their thermodynamic favorability, TMFA facilitates identification of limiting pathway bottleneck reactions that may be ideal candidates for pathway regulation.
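A minimal sketch of the kind of constraint TMFA adds (the standard Gibbs energy and the metabolite activities below are assumed values for illustration only):

```python
from math import log

R, T = 8.314e-3, 298.15      # kJ/(mol*K), K
dG0 = -25.0                  # assumed standard Gibbs energy of reaction A -> B, kJ/mol
a_A, a_B = 1e-4, 1e-2        # assumed activities of A and B

dG = dG0 + R * T * log(a_B / a_A)    # Gibbs energy of reaction at these activities
print(round(dG, 1), "thermodynamically feasible" if dG < 0 else "infeasible")
```

TMFA imposes such sign constraints, together with bounds on metabolite activities, on every reaction carrying nonzero flux.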
Software.
Simulation algorithms are needed to model the biological system and calculate the fluxes of all pathways in a complex network. Several computational software tools exist to meet the need for efficient and precise tools for flux quantitation. Generally, the steps for applying modeling software towards MFA include performing a metabolic reconstruction to compile all desired enzymatic reactions and metabolites, providing experimental information such as the labeling pattern of the substrate, defining constraints such as growth equations, and minimizing the error between the experimental and simulated results to obtain final fluxes. Examples of MFA software include 13CFLUX2 and OpenFLUX, which evaluate 13C labeling experiments for flux calculation under metabolic and isotopically stationary conditions. The increasing interest in developing computation tools for INST-MFA calculation has also led to the development of software applications such as INCA, which was the first software capable of performing INST-MFA and simulating transient isotope labeling experiments.
Applications.
Biofuel production.
Metabolic flux analysis has been used to guide scale-up efforts for fermentation of biofuels. By directly measuring enzymatic reaction rates, MFA can capture the dynamics of cells' behavior and metabolic phenotypes in bioreactors during large-scale fermentations. For example, MFA models were used to optimize the conversion of xylose into ethanol in xylose-fermenting yeast by using calculated flux distributions to determine maximal theoretical capacities of the selected yeast towards ethanol production.
Metabolic engineering.
Identification of bottleneck enzymes determines rate-limiting reactions that limit the productivity of a biosynthetic pathway. Moreover, MFA can help predict unexpected phenotypes of genetically engineered strains by constructing a fundamental understanding of how fluxes are wired in engineered cells. For example, by calculating the Gibbs free energies of reactions in "Escherichia coli" metabolism, TMFA facilitated identification of a thermodynamic bottleneck reaction in a genome-scale model of "Escherichia coli."
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S \\times v = 0"
}
] | https://en.wikipedia.org/wiki?curid=10756580 |
1075916 | Generalized signal averaging | Within signal processing, in many cases only one image with noise is available, and averaging is then realized in a local neighbourhood. Results are acceptable if the noise is smaller in size than the smallest objects of interest in the image, but blurring of edges is a serious disadvantage. In the case of smoothing within a single image, one has to assume that there are no changes in the gray levels of the underlying image data. This assumption is clearly violated at locations of image edges, and edge blurring is a direct consequence of violating the assumption.
Description.
Averaging is a special case of discrete convolution. For a 3 by 3 neighbourhood, the convolution mask "M" is:
formula_0
The weight of the central pixel may be increased to better approximate the properties of noise with a Gaussian probability distribution:
formula_1
formula_2
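A short sketch of local averaging by discrete convolution (the noisy test image is generated for illustration; the 3 × 3 mask is the uniform mask "M" above):

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
image = 100 + rng.normal(0, 10, size=(64, 64))   # constant patch plus Gaussian noise

M = np.full((3, 3), 1 / 9)                       # 3x3 averaging mask
smoothed = convolve(image, M, mode='reflect')

print(image.std(), smoothed.std())               # noise standard deviation drops roughly threefold
```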
A suitable page for beginners about matrices is at:
https://web.archive.org/web/20060819141930/http://www.gamedev.net/reference/programming/features/imageproc/page2.asp
The whole article starts on page: https://web.archive.org/web/20061019072001/http://www.gamedev.net/reference/programming/features/imageproc/
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nM = \\frac{1}{9} \\begin{bmatrix}1 & 1 & 1 \\\\\n1 & 1 & 1 \\\\\n1 & 1 & 1 \\\\\n\\end{bmatrix}\n"
},
{
"math_id": 1,
"text": "\nM = \\frac{1}{10} \\begin{bmatrix}1 & 1 & 1 \\\\\n1 & 2 & 1 \\\\\n1 & 1 & 1 \\\\\n\\end{bmatrix}\n"
},
{
"math_id": 2,
"text": "\nM = \\frac{1}{16} \\begin{bmatrix}1 & 2 & 1 \\\\\n2 & 4 & 2 \\\\\n1 & 2 & 1 \\\\\n\\end{bmatrix}\n"
}
] | https://en.wikipedia.org/wiki?curid=1075916 |
1076026 | Angel problem | Question in combinatorial game theory
The angel problem is a question in combinatorial game theory proposed by John Horton Conway. The game is commonly referred to as the angels and devils game. The game is played by two players called the angel and the devil. It is played on an infinite chessboard (or equivalently the points of a 2D lattice). The angel has a power "k" (a natural number 1 or higher), specified before the game starts. The board starts empty with the angel in one square. On each turn, the angel jumps to a different empty square which could be reached by at most "k" moves of a chess king, i.e. the distance from the starting square is at most "k" in the infinity norm. The devil, on its turn, may add a block on any single square not containing the angel. The angel may leap over blocked squares, but cannot land on them. The devil wins if the angel is unable to move. The angel wins by surviving indefinitely.
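The angel's move rule is easy to state programmatically; the following sketch (in Python, with an arbitrary example position) enumerates the squares a power-"k" angel may jump to:

```python
def angel_moves(pos, k, blocked):
    """Empty squares within Chebyshev (infinity-norm) distance k of pos."""
    x, y = pos
    return [(x + dx, y + dy)
            for dx in range(-k, k + 1)
            for dy in range(-k, k + 1)
            if (dx, dy) != (0, 0) and (x + dx, y + dy) not in blocked]

# Example: a 2-angel at the origin after the devil has eaten two squares.
print(len(angel_moves((0, 0), 2, blocked={(1, 1), (0, 2)})))   # 24 - 2 = 22 legal destinations
```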
The angel problem is: can an angel with high enough power win?
There must exist a winning strategy for one of the players. If the devil can force a win then it can do so in a finite number of moves. If the devil cannot force a win then there is always an action that the angel can take to avoid losing and a winning strategy for it is always to pick such a move. More abstractly, the "pay-off set" (i.e., the set of all plays in which the angel wins) is a closed set (in the natural topology on the set of all plays), and it is known that such games are determined. Of course, for any infinite game, if player 2 doesn't have a winning strategy, player 1 can always pick a move that leads to a position where player 2 doesn't have a winning strategy, but in some games, simply playing forever doesn't confer a win to player 1, so undetermined games may exist.
Conway offered a reward for a general solution to this problem ($100 for a winning strategy for an angel of sufficiently high power, and $1000 for a proof that the devil can win irrespective of the angel's power). Progress was made first in higher dimensions. In late 2006, the original problem was solved when independent proofs appeared, showing that an angel can win. Bowditch proved that a 4-angel (that is, an angel with power "k" = 4) can win and Máthé and Kloster gave proofs that a 2-angel can win.
History.
The problem was first published in the 1982 book "Winning Ways" by Berlekamp, Conway, and Guy, under the name "the angel and the square-eater."
In two dimensions, some early partial results included:
In three dimensions, it was shown that:
Finally, in 2006 there emerged four independent and almost simultaneous proofs that the angel has a winning strategy in two dimensions.
Brian Bowditch's proof works for the 4-angel, while Oddvar Kloster's proof and András Máthé's proof work for the 2-angel. Additionally, Peter Gacs has a claimed proof which requires a much stronger angel; the details are fairly complex and have not been reviewed by a journal for accuracy. The proofs by Bowditch and Máthé have been published in "Combinatorics, Probability and Computing". The proof by Kloster has been published in "Theoretical Computer Science".
Further unsolved questions.
In 3D, given that the angel always increases its "y"-coordinate, and that the devil is limited to three planes, it is unknown whether the devil has a winning strategy.
Proof sketches.
Kloster's 2-angel proof.
Oddvar Kloster discovered a constructive algorithm to solve the problem with a 2-angel. This algorithm is quite simple and also optimal, since, as noted above, the devil has a winning strategy against a 1-angel.
We start out by drawing a vertical line immediately to the left of the angel's starting position, down to formula_2 and up to formula_3. This line represents the path the angel will take, which will be updated after each of the devil's moves, and partitions the board's squares into a "left set" and a "right set." Once a square becomes part of the left set, it will remain so for the remainder of the game, and the angel will not make any future moves to any of these squares. Every time the devil blocks off a new square, we search over all possible modifications to the path such that we move one or more squares in the right set which the devil has blocked off into the left set. We will only do this if the path increases in length by no more than twice the number of blocked squares moved into the left set. Of such qualifying paths, we choose one that moves the greatest number of blocked off squares into the left set. The angel then makes two steps along this path, keeping the path to its left when moving in the forward direction (so if the devil were not blocking off squares, the angel would travel north indefinitely). Note that when going clockwise around a corner, the angel will not move for one step, because the two segments touching the corner have the same square to their right.
Máthé's 2-angel proof.
Máthé introduces the "nice devil," which never destroys a square that
the angel could have chosen to occupy on an earlier turn. When the angel plays against the nice devil it concedes defeat if the devil manages to confine it to a finite bounded region of the board (otherwise the angel could just hop back and forth between two squares and never lose).
Máthé's proof breaks into two parts:
Roughly speaking, in the second part, the angel wins against the nice devil by pretending that the entire left half-plane is destroyed (in addition to any squares actually destroyed by the nice devil), and treating destroyed squares as the walls of a maze, which it then skirts by means of a "hand-on-the-wall" technique. That is, the angel keeps its left hand on the wall of the maze and runs alongside the wall. One then proves that a nice devil cannot trap an angel that adopts this strategy.
The proof of the first part is by contradiction, and hence Máthé's proof does not immediately yield an explicit winning strategy against the real devil. However, Máthé remarks that his proof could in principle be adapted to give such an explicit strategy.
Bowditch's 4-angel proof.
Brian Bowditch defines a variant (game 2) of the original game with the following rule changes:
A circuitous path is a path formula_4 where formula_5 is a semi-infinite arc (a non self-intersecting path with a starting point but no ending point) and formula_6 are pairwise disjoint loops with the following property:
Bowditch considers a variant (game 1) of the game with rule changes 2 and 3 and a 5-devil. He then shows that a winning strategy in this game will yield a winning strategy in our original game for a 4-angel. He then goes on to show that an angel playing against a 5-devil (game 2) can achieve a win using a fairly simple algorithm.
Bowditch claims that a 4-angel can win the original version of the game by imagining a phantom angel playing a 5-devil in the game 2.
The angel follows the path the phantom would take but avoiding the loops. Hence as the path formula_12 is a semi-infinite arc the angel does not return to any square it has previously been to and so the path is a winning path even in the original game.
3D version of the problem.
"Guardian" proof.
The proof, which shows that in a three-dimensional version of the game a high powered angel has a winning strategy, makes use of "guardians". For each cube of any size, there is a guardian that watches over that cube. The guardians decide at each move whether the cube they are watching over is unsafe, safe, or almost safe. The definitions of "safe" and "almost safe" need to be chosen to ensure this works. This decision is based purely on the density of blocked points in that cube and the size of that cube.
If the angel is given no orders, then it just moves up. If some cubes that the angel is occupying cease to be safe, then the guardian of the biggest of these cubes is instructed to arrange for the angel to leave through one of the borders of that cube. If a guardian is instructed to escort the angel out of its cube to a particular face, the guardian does so by plotting a path of subcubes that are all safe. The guardians in these cubes are then instructed to escort the angel through their respective subcubes. The angel's path in a given subcube is not determined until the angel arrives at that cube. Even then, the path is only determined roughly. This ensures the devil cannot just choose a place on the path sufficiently far along it and block it.
The strategy can be proven to work because the time it takes the devil to convert a safe cube in the angel's path to an unsafe cube is longer than the time it takes the angel to get to that cube.
This proof was published by Imre Leader and Béla Bollobás in 2006. A substantially similar proof was published by Martin Kutz in 2005.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "d_1 < d_2 < d_3 < \\cdots"
},
{
"math_id": 1,
"text": "d"
},
{
"math_id": 2,
"text": "y = -\\infty"
},
{
"math_id": 3,
"text": "y = \\infty"
},
{
"math_id": 4,
"text": "\\pi = \\cup^{\\infty}_{i=1} (\\sigma_i \\cup \\gamma_i)"
},
{
"math_id": 5,
"text": "\\sigma = \\cup^{\\infty}_{i=1} \\sigma_i"
},
{
"math_id": 6,
"text": "{\\gamma_i}"
},
{
"math_id": 7,
"text": "\\forall i: |\\gamma_i|\\leq i"
},
{
"math_id": 8,
"text": "|\\gamma_i|"
},
{
"math_id": 9,
"text": "\\gamma_i"
},
{
"math_id": 10,
"text": "\\sigma_i"
},
{
"math_id": 11,
"text": "\\sigma_{i+1}"
},
{
"math_id": 12,
"text": "\\sigma"
}
] | https://en.wikipedia.org/wiki?curid=1076026 |
10760797 | Zobel network | "For the wave filter invented by Zobel and sometimes named after him see m-derived filters."
Zobel networks are a type of filter section based on the image-impedance design principle. They are named after Otto Zobel of Bell Labs, who published a much-referenced paper on image filters in 1923. The distinguishing feature of Zobel networks is that the input impedance is fixed in the design independently of the transfer function. This characteristic is achieved at the expense of a much higher component count compared to other types of filter sections. The impedance would normally be specified to be constant and purely resistive. For this reason, Zobel networks are also known as constant resistance networks. However, any impedance achievable with discrete components is possible.
Zobel networks were formerly widely used in telecommunications to flatten and widen the frequency response of copper land lines, producing a higher performance line from one originally intended for ordinary telephone use. Analogue technology has given way to digital technology and they are now little used.
When used to cancel out the reactive portion of loudspeaker impedance, the design is sometimes called a Boucherot cell. In this case, only half the network is implemented as fixed components, the other half being the real and imaginary components of the loudspeaker impedance. This network is more akin to the power factor correction circuits used in electrical power distribution, hence the association with Boucherot's name.
A common circuit form of Zobel networks is the bridged T network. The term "bridged T" is often used to mean a Zobel network, sometimes incorrectly when the circuit implementation is not actually a bridged T.
"Parts of this article or section rely on the reader's knowledge of the complex impedance representation of capacitors and inductors and on knowledge of the frequency domain representation of signals".
Derivation.
The basis of a Zobel network is a balanced bridge circuit as shown in the circuit to the right. The condition for balance is that;
formula_0
If this is expressed in terms of a normalised "Z"0 = 1 as is conventionally done in filter tables, then the balance condition is simply;
formula_1
Or, formula_2 is simply the inverse, or dual impedance of formula_3.
The bridging impedance "Z""B" is across the balance points and hence has no potential across it. Consequently, it will draw no current and its value makes no difference to the function of the circuit. Its value is often chosen to be "Z"0 for reasons which will become clear in the discussion of bridged T circuits further on.
Input impedance.
The input impedance is given by
formula_4
Substituting the balance condition,
formula_5
yields
formula_6
The input impedance can be designed to be purely resistive by setting
formula_7
The input impedance will then be real and independent of ω in band and out of band no matter what complexity of filter section is chosen.
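The constant-resistance property is easy to confirm numerically. The sketch below (an addition, not part of the original article) picks an arbitrary series inductor for "Z", forms the dual branch "Z' " = "R"0²/"Z", and evaluates the bridge input impedance from the expression above; the component values are placeholders.

```python
import numpy as np

R0 = 600.0                                # nominal design resistance (placeholder)
omega = 2 * np.pi * np.logspace(1, 5, 9)  # a spread of test frequencies

L = 10e-3                                 # arbitrary series element: an inductor
Z = 1j * omega * L
Z_dual = R0**2 / Z                        # balance condition Z' = R0^2 / Z

# Two parallel branches (Z0 + Z') and (Z + Z0), with Z0 = R0 purely resistive.
Z_in = 1 / (1 / (R0 + Z_dual) + 1 / (Z + R0))

print(np.allclose(Z_in, R0))              # True at every frequency
```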
Transfer function.
If the "Z"0 in the bottom right of the bridge is taken to be the output load then a transfer function of "V"o/"V"in can be calculated for the section. Only the RHS (right-hand side) branch needs to be considered in this calculation. The reason for this can be seen by considering that there is no current flow through "Z""B". None of the current flowing through the LHS (left-hand side) branch is going to flow into the load. The LHS branch, therefore, cannot possibly affect the output. It certainly affects the input impedance (and hence the input terminal voltage) but not the transfer function. The transfer function can now easily be seen to be;
formula_8
Bridged T implementation.
The load impedance is actually the impedance of the following stage or of a transmission line and can sensibly be omitted from the circuit diagram. If we also set;
formula_9
then the circuit to the right results. This is referred to as a bridged T circuit because the impedance "Z" is seen to "bridge" across the T section. The purpose of setting "Z""B" = "Z"0 is to make the filter section symmetrical. This has the advantage that it will then present the same impedance, "Z"0, at both the input and the output port.
Types of section.
A Zobel filter section can be implemented for low-pass, high-pass, band-pass or band-stop. It is also possible to implement a flat frequency response attenuator. This last is of some importance for the practical filter sections described later.
Attenuator.
For an attenuator section, "Z" is simply
formula_10
and,
formula_11
The attenuation of the section is given by;
formula_12
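Inverting this expression gives the series resistor needed for a prescribed basic loss, with the shunt resistor following from the duality condition. A small sketch (an illustrative addition; the 600 Ω default is just a common audio impedance, not mandated by the text):

```python
import math

def zobel_attenuator(loss_db, R0=600.0):
    """Series and shunt resistors for a constant-resistance attenuator.

    Inverts L = 20*log10(R/R0 + 1) for the series resistor R, then applies
    the duality condition R' = R0**2 / R for the shunt resistor.
    """
    R = R0 * (10 ** (loss_db / 20.0) - 1.0)
    R_dual = R0 ** 2 / R
    return R, R_dual

print(zobel_attenuator(6.0))   # roughly (597, 603) ohms for a 6 dB section
```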
Low pass.
For a low-pass filter section, "Z" is an inductor and "Z" ' is a capacitor;
formula_13
and
formula_14
where
formula_15
The transfer function of the section is given by
formula_16
The 3 dB point occurs when "ωL = R"0 so the 3 dB cut-off frequency is given by
formula_17
where "ω" is in the stop band well above "ω"c,
formula_18
it can be seen from this that "A"("ω") is falling away in the stop band at the classic 6 dB/8ve (or 20 dB/decade).
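A low-pass section can therefore be dimensioned directly from the desired cut-off frequency. The following sketch (an addition; the example figures are placeholders) uses the relations "ω"c = "R"0/"L" and "C' " = "L"/"R"0² quoted above.

```python
import math

def lowpass_zobel(f_c, R0=600.0):
    """Series inductor L and dual shunt capacitor C' for a low-pass section."""
    omega_c = 2 * math.pi * f_c
    L = R0 / omega_c          # from omega_c = R0 / L
    C_dual = L / R0 ** 2      # duality condition C' = L / R0^2
    return L, C_dual

L, C = lowpass_zobel(3400.0)  # e.g. a 3.4 kHz cut-off on a 600 ohm line
print(L, C)                   # about 28 mH and 78 nF
```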
High pass.
For a high-pass filter section, "Z" is a capacitor and "Z' " is an inductor:
formula_19
and
formula_20
where
formula_21
The transfer function of the section is given by
formula_22
The 3 dB point occurs when "ωC" = <templatestyles src="Fraction/styles.css" />1⁄"R"0 so the 3 dB cut-off frequency is given by
formula_23
In the stop band,
formula_24
falling at 6 dB/8ve with decreasing frequency.
Band pass.
For a band-pass filter section, "Z" is a series resonant circuit and "Z' " is a shunt resonant circuit;
formula_25
and
formula_26
The transfer function of the section is given by
formula_27
The 3 dB points occur when |1 − "ω"²"LC"| = "ωCR"0 so the 3 dB cut-off frequencies are given by
formula_28
from which the centre frequency, "ω""m", and bandwidth, Δ"ω", can be determined:
formula_29
Note that this is different from the resonant frequency
formula_30
the relationship between them being given by
formula_31
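These relationships are easy to check numerically. The sketch below (an addition; the element values are arbitrary) computes the two cut-off frequencies from the quoted formula and confirms that their difference is Δ"ω" = "R"0/"L", their geometric mean is the resonant frequency "ω"0, and their arithmetic mean is "ω""m".

```python
import math

R0 = 600.0      # placeholder design resistance
L = 50e-3       # series-resonant branch inductance
C = 100e-9      # series-resonant branch capacitance

delta_omega = R0 / L                       # bandwidth (rad/s)
omega_0 = 1.0 / math.sqrt(L * C)           # resonant frequency of the branch
omega_m = math.sqrt((delta_omega / 2) ** 2 + omega_0 ** 2)   # centre frequency

# The two 3 dB cut-offs from the quoted expression.
root = math.sqrt((R0 * C) ** 2 + 4 * L * C)
omega_hi = (R0 * C + root) / (2 * L * C)
omega_lo = (-R0 * C + root) / (2 * L * C)

print(math.isclose(omega_hi - omega_lo, delta_omega))    # True
print(math.isclose(omega_hi * omega_lo, omega_0 ** 2))   # True
print(math.isclose((omega_hi + omega_lo) / 2, omega_m))  # True
```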
Band stop.
For a band-stop filter section, "Z" is a shunt resonant circuit and "Z' " is a series resonant circuit:
formula_32
and
formula_33
The transfer function and bandwidth can be found by analogy with the band-pass section.
formula_34
And,
formula_35
Practical sections.
Zobel networks are rarely used for traditional frequency filtering. Other filter types are significantly more efficient for this purpose. Where Zobels come into their own is in frequency equalisation applications, particularly on transmission lines. The difficulty with transmission lines is that the impedance of the line varies in a complex way across the band and is tedious to measure. For most filter types, this variation in impedance will cause the response to differ significantly from the theoretical one, and is mathematically difficult to compensate for, even assuming that the impedance is known precisely. If Zobel networks are used, however, it is only necessary to measure the line response into a fixed resistive load and then design an equaliser to compensate it. It is entirely unnecessary to know anything at all about the line impedance as the Zobel network will present exactly the same impedance to line as the measuring instruments. Its response will therefore be precisely as theoretically predicted. This is a tremendous advantage where high quality lines with flat frequency responses are desired.
Basic loss.
For audio lines, it is invariably necessary to combine L/C filter components with resistive attenuator components in the same filter section. The reason for this is that the usual design strategy is to require the section to attenuate all frequencies down to the level of the frequency in the passband with the lowest level. Without the resistor components, the filter, at least in theory, would increase attenuation without limit. The attenuation in the stop band of the filter (that is, the limiting maximum attenuation) is referred to as the "basic loss" of the section. In other words, the flat part of the band is attenuated by the basic loss down to the level of the falling part of the band which it is desired to equalise. The following discussion of practical sections relates in particular to audio transmission lines.
6 dB/octave roll-off.
The most significant effect that needs to be compensated for is that at some cut-off frequency the line response starts to roll-off like a simple low-pass filter. The effective bandwidth of the line can be increased with a section that is a high-pass filter matching this roll-off, combined with an attenuator. In the flat part of the pass-band only the attenuator part of the filter section is significant. This is set at an attenuation equal to the level of the highest frequency of interest. All frequencies up to this point will then be equalised flat to an attenuated level. Above this point, the output of the filter will again start to roll-off.
Mismatched lines.
Quite commonly in telecomms networks, a circuit is made up of two sections of line which do not have the same characteristic impedance. For instance 150 Ω and 300 Ω. One effect of this is that the roll-off can start at 6 dB/octave at an initial cut-off frequency formula_36, but then at formula_37 can become suddenly steeper. This situation then requires (at least) two high-pass sections to compensate each operating at a different formula_38.
Bumps and dips.
Bumps and dips in the passband can be compensated for with band-stop and band-pass sections respectively. Again, an attenuator element is also required, but usually rather smaller than that required for the roll-off. These anomalies in the pass-band can be caused by mismatched line segments as described above. Dips can also be caused by ground temperature variations.
Transformer roll-off.
Occasionally, a low-pass section is included to compensate for excessive line transformer roll-off at the low frequency end. However, this effect is usually very small compared to the other effects noted above.
Low frequency sections will usually have inductors of high values. Such inductors have many turns and consequently tend to have significant resistance. In order to keep the section constant resistance at the input, the dual branch of the bridge T must contain a dual of the stray resistance, that is, a resistor in parallel with the capacitor. Even with the compensation, the stray resistance still has the effect of inserting attenuation at low frequencies. This in turn has the effect of slightly reducing the amount of LF lift the section would otherwise have produced. The basic loss of the section can be increased by the same amount as the stray resistance is inserting and this will return the LF lift achieved to that designed for.
Compensation of inductor resistance is not such an issue at high frequencies where the inductors will tend to be smaller. In any case, for a high-pass section the inductor is in series with the basic loss resistor and the stray resistance can merely be subtracted from that resistor. On the other hand, the compensation technique may be required for resonant sections, especially a high Q resonator being used to lift a very narrow band. For these sections the value of inductors can also be large.
Temperature compensation.
An adjustable attenuation high-pass filter can be used to compensate for changes in ground temperature. Ground temperature is very slow varying in comparison to surface temperature. Adjustments are usually only required 2-4 times per year for audio applications.
Typical filter chain.
A typical complete filter will consist of a number of Zobel sections for roll-off, frequency dips and temperature followed by a flat attenuator section to bring the level down to a standard attenuation. This is followed by a fixed gain amplifier to bring the signal back up to a usable level, typically 0 dBu. The gain of the amplifier is usually at most 45 dB. Any more and the amplification of line noise will tend to cancel out the quality benefits of improved bandwidth. This limit on amplification essentially limits how much the bandwidth can be increased by these techniques. No one part of the incoming signal band will be amplified by the full 45 dB. The 45 dB is made up of the line loss in the flat part of its spectrum plus the basic loss of each section. In general, each section will be minimum loss at a different frequency band, hence the amplification in that band will be limited to the basic loss of just that one filter section, assuming insignificant overlap. A typical choice for R0 is 600 Ω. A good quality transformer (usually essential, but not shown on the diagram), known as a repeating coil, is at the beginning of the chain where the line terminates.
Other section implementations.
Besides the Bridged T, there are a number of other possible section forms that can be used.
L-sections.
As mentioned above, formula_39 can be set to any desired impedance without affecting the input impedance. In particular, setting it as either an open circuit or a short circuit results in a simplified section circuit, called L–sections. These are shown above for the case of a high pass section with basic loss.
The input port still presents an impedance of formula_40 (provided that the output is terminated in formula_40) but the output port no longer presents a constant impedance. Both the open-circuit and the short-circuit L–sections are capable of being reversed so that formula_40 is then presented at the output and the variable impedance is presented at the input.
To retain the constant-impedance benefit of Zobel networks, the variable impedance port must not face the line impedance. Nor should it face the variable impedance port of another L-section. Facing the amplifier is acceptable since the input impedance of the amplifier is normally arranged to be formula_40 within acceptable tolerances. In other words, variable impedance must not face variable impedance.
Balanced bridged T.
The Zobel networks described here can be used to equalise land lines composed of twisted pair or star quad cables. The balanced circuit nature of these lines delivers a good common mode rejection ratio (CMRR). To maintain the CMRR, circuits connected to the line should maintain the balance. For this reason, balanced versions of Zobel networks are sometimes required. This is achieved by halving the impedance of the series components and then putting identical components in the return leg of the circuit.
Balanced C-sections.
A C–section is a balanced version of an L–section. The balance is achieved in the same way as a balanced full bridged T section by placing half of the series impedance in, what was, the common conductor. C–sections, like the L–section from which they are derived, can come in both open-circuit and short circuit varieties. The same restrictions apply to C–sections regarding impedance terminations as to L–sections.
X-section.
It is possible to transform a bridged–T section into a Lattice, or X–section (see Bartlett's bisection theorem). The X–section is a kind of bridge circuit, but usually drawn as a lattice, hence the name. Its topology makes it intrinsically balanced but it is never used to implement the constant resistance filters of the kind described here because of the increased component count. The component count increase arises out of the transformation process rather than the balance. There is however, one common application for this topology, the lattice phase equaliser, which is also constant resistance and also invented by Zobel. This circuit differs from those described here in that the bridge circuit is not generally in the balanced condition.
Half sections.
In respect of constant resistance filters, the term half section has a somewhat different meaning to other kinds of image filter. Generally, a half section is formed by cutting through the midpoint of the series impedance and shunt admittance of a full section of a ladder network. It is literally half a section. Here, however, there is a somewhat different definition. A half section is either the series impedance (series half-section) or shunt admittance (shunt half-section) that, when connected between source and load impedances of R0, will result in the same transfer function as some arbitrary constant resistance circuit. The purpose of using half sections is that the same functionality is achieved with a drastically reduced component count.
If a constant resistance circuit has an input Vin, then a generator with an impedance R0 must have an open-circuit voltage of E=2Vin in order to produce Vin at the input of the constant resistance circuit. If now the constant resistance circuit is replaced by an impedance of 2Z, as in the diagram above, it can be seen by simple symmetry that the voltage Vin will appear halfway along the impedance 2Z. The output of this circuit can now be calculated as,
formula_41
which is precisely the same as a bridged T section with series element Z. The series half-section is thus a series impedance of 2Z. By corresponding reasoning, the shunt half-section is a shunt impedance of 1⁄2Z' (or twice the admittance).
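This equivalence can be checked with a quick numerical sketch (an addition, not from the original article); the branch impedance and frequency are arbitrary placeholders.

```python
import numpy as np

R0 = 600.0
omega = 2 * np.pi * 1000.0
Z = 1j * omega * 50e-3            # an arbitrary branch impedance for the test

# Series half-section: generator E = 2*Vin behind R0, series element 2*Z,
# terminated in R0 -- a simple potential divider.
Vin = 1.0
E = 2 * Vin
Vout_half = E * R0 / (R0 + 2 * Z + R0)

# Transfer function of the full constant-resistance section, R0 / (Z + R0).
Vout_full = Vin * R0 / (Z + R0)

print(np.isclose(Vout_half, Vout_full))   # True: the two agree exactly
```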
It must be emphasised that these half sections are far from being constant resistance. They have the same transfer function as a constant resistance network, but only when correctly terminated. An equaliser will not give good results if a half-section is positioned facing the line since the line will have a variable (and probably unknown) impedance. Likewise, two half-sections cannot be connected directly to each other as these both will have variable impedances. However, if a sufficiently large attenuator is placed between the two variable impedances, this will have the effect of masking the effect. A high value attenuator will have an input impedance formula_42 no matter what the terminating impedance on the other side. In the example practical chain shown above there is a 22 dB attenuator required in the chain. This does not need to be at the end of the chain, it can be placed anywhere desired and used to mask two mismatched impedances. It can also be split into two or more parts and used for masking more than one mismatch.
"See also Boucherot cell"
Zobel networks and loudspeaker drivers.
Zobel networks can be used to make the impedance a loudspeaker presents to its amplifier output appear as a steady resistance. This is beneficial to the amplifier performance. The impedance of a loudspeaker is partly resistive. The resistance represents the energy transferred from the amplifier to the sound output plus some heating losses in the loudspeaker. However, the speaker also possesses inductance due to the windings of its coil. The impedance of the loudspeaker is thus typically modelled as a series resistor and inductor. A parallel circuit of a series resistor and capacitor of the correct values will form a Zobel bridge. It is obligatory to choose formula_43 because the centre point between the inductor and resistor is inaccessible (and, in fact, fictitious - the resistor and inductor are distributed quantities as in a transmission line). The loudspeaker may be modelled more accurately by a more complex equivalent circuit. The compensating Zobel network will also become more complex to the same degree.
Note that the circuit will work just as well if the capacitor and resistor are interchanged. In this case the circuit is no longer a Zobel balanced bridge but clearly the impedance has not changed. The same circuit could have been arrived at by designing from Boucherot's minimising reactive power point of view. From this design approach there is no difference in the order of the capacitor and the resistor and Boucherot cell might be considered a more accurate description.
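For the simple series resistor-inductor model of the driver, the compensating values follow from the duality condition: the resistor equals the voice-coil resistance and the capacitor is "L"e/"R"e². The sketch below (an addition; the driver figures are hypothetical) confirms that the parallel combination is then purely resistive at all frequencies.

```python
import numpy as np

# Hypothetical driver model: voice-coil resistance in series with inductance.
Re = 8.0        # ohms
Le = 0.5e-3     # henries

# Compensating series R-C network placed across the driver (Boucherot cell).
R = Re
C = Le / Re ** 2

omega = 2 * np.pi * np.logspace(1, 4, 7)
Z_driver = Re + 1j * omega * Le
Z_comp = R + 1.0 / (1j * omega * C)

Z_total = Z_driver * Z_comp / (Z_driver + Z_comp)   # parallel combination
print(np.allclose(Z_total, Re))   # True: the amplifier sees a steady 8 ohms
```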
Video equalisers.
Zobel networks can be used for the equalisation of video lines as well as audio lines. There is, however, a noticeably different approach taken with the two types of signal. The difference in the cable characteristics can be summarised as follows:
This more predictable response of video allows a different design approach. The video equaliser is built as a single bridged T section but with a rather more complex network for Z. For short lines, or for a trimming equaliser, a Bode filter topology might be used. For longer lines a network with Cauer filter topology might be used. Another driver for this approach is the fact that a video signal occupies a large number of octaves, around 20 or so. If equalised with simple basic sections, a large number of filter sections would be required. Simple sections are designed, typically, to equalise a range of one or two octaves.
Bode equaliser.
A Bode network, as with a Zobel network, is a symmetrical bridge T network which meets the constant k condition. It does not however meet the constant resistance condition, that is, the bridge is not in balance. Any impedance network, Z, can be used in a Bode network, just as with a Zobel network, but the high pass section shown for correcting high-end frequencies is the most common. A Bode network terminated in a variable resistor can be used to produce a variable impedance at the input terminals of the network. A useful property of this network is that the input impedance can be made to vary from a capacitive impedance through a purely resistive impedance to an inductive impedance all by adjusting the single load potentiometer, RL. The bridging resistor, R0, is chosen to equal the nominal impedance so that in the special case when RL is set to R0 the network behaves as a Zobel network and Zin is also equal to R0.
The Bode network is used in an equaliser by connecting the whole network such that the input impedance of the Bode network, Zin, is in series with the load. Since the impedance of the Bode network can be either capacitive or inductive depending on the position of the adjustment potentiometer, the response may be a boost or a cut to the band of frequencies it is acting on. The transfer function of this arrangement is:
formula_44
The Bode equaliser can be converted into a constant resistance filter by using the entire Bode network as the Z branch of a Zobel network, resulting in a rather complex network of bridge T networks embedded in a larger bridge T. It can be seen that this results in the same transfer function by noting that the transfer function of the Bode equaliser is identical to the transfer function of the general form of Zobel equaliser. Note that the dual of a constant resistance bridge T network is the identical network. The dual of a Bode network is therefore the same network except for the load resistance RL, which must be the inverse, RL', in the dual circuit. To adjust the equaliser RL and RL' must be ganged, or otherwise kept in step such that as RL increases RL' will decrease and vice versa.
Cauer equaliser.
To equalise long video lines, a network with Cauer topology is used as the Z impedance of a Zobel constant resistance network. Just as the input impedance of a Bode network is used as the Z impedance of a Zobel network to form a Zobel Bode equaliser, so the input impedance of a Cauer network is used to make a Zobel Cauer equaliser. The equaliser is required to correct an attenuation increasing with frequency and for this a Cauer ladder network consisting of series resistors and shunt capacitors is required. Optionally, there may be an inductor included in series with the first capacitor which increases the equalisation at the high end due to the steeper slope produced as resonance is approached. This may be required on longer lines. The shunt resistor R1 provides the basic loss of the Zobel network in the usual way.
The dual of a RC Cauer network is a LR Cauer network which is required for the Z' impedance as shown in the example. Adjustment is a bit problematic with this equaliser. In order to maintain the constant resistance, the pairs of components C1/L1', C2/L2' etc., must remain dual impedances as the component is adjusted, so both parts of the pair must be adjusted together. With the Zobel Bode equaliser, this is a simple matter of ganging two pots together - a component configuration available off-the-shelf. Ganging together a variable capacitor and inductor is not, however, a very practical solution. These equalisers tend to be "hand built", one solution being to select the capacitors on test and fit fixed values according to the measurements and then adjust the inductors until the required match is achieved. The furthest element of the ladder from the driving point is equalising the lowest frequency of interest. This is adjusted first as it will also have an effect on higher frequencies and from there progressively higher frequencies are adjusted working along the ladder towards the driving point.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
*Zobel, O. J., "Distortion correction in electrical circuits with constant resistance recurrent networks", Bell System Technical Journal, Vol. 7 (1928), p. 438.
*"Redifon Radio Diary, 1970", William Collins Sons & Co, 1969 | [
{
"math_id": 0,
"text": "\\frac{Z}{Z_0} = \\frac{Z_0}{Z'}"
},
{
"math_id": 1,
"text": "Z = \\frac{1}{Z'}"
},
{
"math_id": 2,
"text": "\\scriptstyle Z'"
},
{
"math_id": 3,
"text": "\\scriptstyle Z"
},
{
"math_id": 4,
"text": "\\frac{1}{Z_\\text{in}} = \\frac{1}{Z_0 + Z'} + \\frac{1}{Z + Z_0}"
},
{
"math_id": 5,
"text": "Z' = \\frac{Z_0^2}{Z}"
},
{
"math_id": 6,
"text": "Z_\\text{in} = Z_0"
},
{
"math_id": 7,
"text": "Z_0 = R_0\\!\\,"
},
{
"math_id": 8,
"text": "A(\\omega) = \\frac{Z_0}{Z + Z_0}"
},
{
"math_id": 9,
"text": "Z_B = Z_0\\,\\!"
},
{
"math_id": 10,
"text": "Z = R\\,\\!"
},
{
"math_id": 11,
"text": "Z' = R' = \\frac{R_0^2}{R}"
},
{
"math_id": 12,
"text": "L = 20\\log\\left(\\frac{R}{R_0} + 1\\right)\\, \\text{dB}"
},
{
"math_id": 13,
"text": "Z = i \\omega L\\!\\,"
},
{
"math_id": 14,
"text": "Z' = \\frac{1}{i \\omega C'}"
},
{
"math_id": 15,
"text": " C' = \\frac{L}{R_0^2}"
},
{
"math_id": 16,
"text": "A(\\omega) = \\frac{R_0}{i \\omega L + R_0}"
},
{
"math_id": 17,
"text": " \\omega_c = \\frac{R_0}{L}"
},
{
"math_id": 18,
"text": "A(\\omega) \\approx \\frac{R_0}{i \\omega L}"
},
{
"math_id": 19,
"text": "Z = \\frac{1}{i \\omega C}"
},
{
"math_id": 20,
"text": "Z' = i \\omega L' \\!\\,"
},
{
"math_id": 21,
"text": "L' = CR_0^2 \\!\\,"
},
{
"math_id": 22,
"text": "A(\\omega) = \\frac{i \\omega C R_0}{1 + i \\omega C R_0}"
},
{
"math_id": 23,
"text": "\\omega_c = \\frac{1}{C R_0}"
},
{
"math_id": 24,
"text": "A(\\omega) \\approx i \\omega C R_0"
},
{
"math_id": 25,
"text": "Z = i \\omega L + \\frac{1}{i \\omega C}"
},
{
"math_id": 26,
"text": "Y' = \\frac{1}{Z'} = i \\omega C' + \\frac{1}{i \\omega L'}"
},
{
"math_id": 27,
"text": "A(\\omega) = \\frac{i \\omega C R_0}{1 + i \\omega C R_0 - \\omega^2 LC}"
},
{
"math_id": 28,
"text": " \\omega_c = \\frac{1}{2LC}\\left(\\pm R_0 C + \\sqrt{R_0^2C^2 + 4LC}\\right)"
},
{
"math_id": 29,
"text": "\\begin{align}\n \\Delta\\omega &= \\frac{R_0}{L} \\\\\n \\omega_m &= \\sqrt{\\frac{R_0^2}{4 L^2} + \\frac{1}{LC}}\n\\end{align}"
},
{
"math_id": 30,
"text": "\\omega_0 = \\sqrt{\\frac{1}{LC}}"
},
{
"math_id": 31,
"text": "\\omega_m^2 = \\left(\\frac{\\Delta \\omega}{2}\\right)^2 + \\omega_0^2"
},
{
"math_id": 32,
"text": "Y = \\frac{1}{Z}= i \\omega C + \\frac{1}{i \\omega L}"
},
{
"math_id": 33,
"text": "Z' = i \\omega L' + \\frac{1}{i \\omega C'}"
},
{
"math_id": 34,
"text": "\\Delta\\omega = \\frac{1}{C R_0}"
},
{
"math_id": 35,
"text": "\\omega_m = \\sqrt{\\left(\\frac{1}{2 R_0 C}\\right)^2 + \\frac{1}{LC}}"
},
{
"math_id": 36,
"text": "\\scriptstyle f_{c1}"
},
{
"math_id": 37,
"text": "\\scriptstyle f_{c2}"
},
{
"math_id": 38,
"text": "\\scriptstyle f_c"
},
{
"math_id": 39,
"text": "\\scriptstyle Z_B"
},
{
"math_id": 40,
"text": "\\scriptstyle R_0"
},
{
"math_id": 41,
"text": "\\frac{V_O}{V_{in}} = \\frac{R_0}{Z + R_0}"
},
{
"math_id": 42,
"text": "\\scriptstyle \\approx R_0"
},
{
"math_id": 43,
"text": "\\scriptstyle R_B \\;=\\; \\infin"
},
{
"math_id": 44,
"text": "A(\\omega) = \\frac{R_0}{Z_{in} + R_0}"
}
] | https://en.wikipedia.org/wiki?curid=10760797 |
10761147 | Reeb vector field | In mathematics, the Reeb vector field, named after the French mathematician Georges Reeb, is a notion that appears in various domains of contact geometry including:
Definition.
Let formula_2 be a contact structure on a manifold formula_3 of dimension formula_4, given as formula_5 for a 1-form formula_0 on formula_3 such that formula_6. Given such a contact form formula_0, there exists a unique vector field (the "Reeb vector field") formula_7 on formula_3 satisfying the two conditions formula_8 and formula_9.
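A standard worked example (added here for illustration; it is not part of the original text): on R3 with coordinates (x, y, z), take the contact form α = dz − y dx. Then dα = dx ∧ dy and α ∧ dα = dx ∧ dy ∧ dz ≠ 0, and the Reeb vector field is ∂/∂z, since contracting ∂/∂z into dα gives zero while α(∂/∂z) = 1.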
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\alpha"
},
{
"math_id": 1,
"text": "R \\in \\mathrm{ker }\\ d\\alpha, \\ \\alpha (R) = 1 "
},
{
"math_id": 2,
"text": "\\xi"
},
{
"math_id": 3,
"text": "M"
},
{
"math_id": 4,
"text": "2n+1"
},
{
"math_id": 5,
"text": "\\xi = Ker \\; \\alpha"
},
{
"math_id": 6,
"text": "\\alpha \\wedge (d \\alpha)^n \\neq 0"
},
{
"math_id": 7,
"text": "X_\\alpha"
},
{
"math_id": 8,
"text": "i(X_\\alpha)d \\alpha = 0"
},
{
"math_id": 9,
"text": "i(X_\\alpha) \\alpha = 1"
}
] | https://en.wikipedia.org/wiki?curid=10761147 |
1076205 | Price equation | Description of how a trait or gene changes in frequency over time
In the theory of evolution and natural selection, the Price equation (also known as Price's equation or Price's theorem) describes how a trait or allele changes in frequency over time. The equation uses a covariance between a trait and fitness, to give a mathematical description of evolution and natural selection. It provides a way to understand the effects that gene transmission and natural selection have on the frequency of alleles within each new generation of a population. The Price equation was derived by George R. Price, working in London to re-derive W.D. Hamilton's work on kin selection. Examples of the Price equation have been constructed for various evolutionary cases. The Price equation also has applications in economics.
The Price equation is a mathematical relationship between various statistical descriptors of population dynamics, rather than a physical or biological law, and as such is not subject to experimental verification. In simple terms, it is a mathematical statement of the expression "survival of the fittest".
Statement.
The Price equation shows that a change in the average amount formula_0 of a trait in a population from one generation to the next (formula_1) is determined by the covariance between the amounts formula_2 of the trait for subpopulation formula_3 and the fitnesses formula_4 of the subpopulations, together with the expected change in the amount of the trait value due to fitness, namely formula_5:
formula_6
Here formula_7 is the average fitness over the population, and formula_8 and formula_9 represent the population mean and covariance respectively. The fitness formula_7 is the ratio of the number of offspring produced by the whole population to the number of adult individuals in the population, and formula_4 is the same ratio for subpopulation formula_3 alone.
If the covariance between fitness (formula_4) and trait value (formula_2) is positive, the trait value is expected to rise on average across population formula_3. If the covariance is negative, the characteristic is harmful, and its frequency is expected to drop.
The second term, formula_5, represents the portion of formula_1 due to all factors other than direct selection which can affect trait evolution. This term can encompass genetic drift, mutation bias, or meiotic drive. Additionally, this term can encompass the effects of multi-level selection or group selection. Price (1972) referred to this as the "environment change" term, and denoted both terms using partial derivative notation (∂NS and ∂EC). This concept of environment includes interspecies and ecological effects. Price describes this as follows:
<templatestyles src="Template:Blockquote/styles.css" />Fisher adopted the somewhat unusual point of view of regarding dominance and epistasis as being environment effects. For example, he writes (1941): ‘A change in the proportion of any pair of genes itself constitutes a change in the environment in which individuals of the species find themselves.’ Hence he regarded the natural selection effect on M as being limited to the additive or linear effects of changes in gene frequencies, while everything else – dominance, epistasis, population pressure, climate, and interactions with other species – he regarded as a matter of the environment.
Proof.
Suppose we are given four equal-length lists of real numbers formula_10, formula_2, formula_11, formula_12 from which we may define formula_13. formula_10 and formula_2 will be called the parent population numbers and characteristics associated with each index "i". Likewise formula_11 and formula_12 will be called the child population numbers and characteristics, and formula_14 will be called the fitness associated with index "i". (Equivalently, we could have been given formula_10, formula_2, formula_4, formula_12 with formula_15.) Define the parent and child population totals "n" and "n"′ as the sums of the "n""i" and the "n"′"i" respectively, and the probabilities (or frequencies) "q""i" = "n""i"/"n" and "q"′"i" = "n"′"i"/"n"′.
Note that these are of the form of probability mass functions in that formula_16 and are in fact the probabilities that a random individual drawn from the parent or child population has a characteristic formula_2. Define the fitnesses:
formula_17
The average of any list formula_18 is given by:
formula_19
so the average characteristics are defined as:
and the average fitness is:
formula_20
A simple theorem can be proved:
formula_21
so that:
formula_22
and
formula_23
The covariance of formula_4 and formula_2 is defined by:
formula_24
Defining formula_25, the expectation value of formula_26 is
formula_27
The sum of the two terms is:
formula_28
Using the above mentioned simple theorem, the sum becomes
formula_29
where
formula_30.
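The identity just derived can be checked with a short numerical sketch (an addition, not from the original article); the population numbers and characteristics are arbitrary placeholders.

```python
import numpy as np

# Hypothetical parent/child data for three subpopulations.
n = np.array([100.0, 200.0, 100.0])        # parent numbers n_i
z = np.array([1.0, 2.0, 4.0])              # parent characteristics z_i
n_child = np.array([150.0, 180.0, 120.0])  # child numbers n_i'
z_child = np.array([1.2, 2.0, 3.5])        # child characteristics z_i'

q = n / n.sum()
q_child = n_child / n_child.sum()
w_i = n_child / n                          # subpopulation fitnesses
w = np.sum(q * w_i)                        # mean fitness (= n'/n)

z_bar = np.sum(q * z)
z_bar_child = np.sum(q_child * z_child)

lhs = w * (z_bar_child - z_bar)            # w * (change in mean character)
cov = np.sum(q * w_i * z) - w * z_bar      # cov(w_i, z_i)
exp_term = np.sum(q * w_i * (z_child - z)) # E(w_i * delta z_i)

print(np.isclose(lhs, cov + exp_term))     # True
```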
Derivation of the continuous-time Price equation.
Consider a set of groups with formula_31 that are characterized by a particular trait, denoted by formula_32. The number formula_33 of individuals belonging to group formula_3 experiences exponential growth:
formula_34
where formula_35 corresponds to the fitness of the group. We want to derive an equation describing the time-evolution of the expected value of the trait:
formula_36
Based on the chain rule, we may derive an ordinary differential equation:
formula_37
A further application of the chain rule for formula_38 gives us:
formula_39
Summing up the components gives us that:
formula_40
which is also known as the replicator equation. Now, note that:
formula_41
Therefore, putting all of these components together, we arrive at the continuous-time Price equation:
formula_42
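The continuous-time statement can be illustrated with a single Euler step of the replicator equation, holding the trait values fixed so that the dynamic term vanishes. This is a sketch with arbitrary illustrative numbers, not part of the original derivation.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])     # fixed trait values of the groups
f = np.array([0.1, 0.3, 0.2])     # group fitnesses
p = np.array([0.5, 0.3, 0.2])     # current frequencies

# With the x_i held constant, the equation reduces to d/dt E(x) = Cov(x, f).
cov = np.sum(p * x * f) - np.sum(p * x) * np.sum(p * f)

# One small Euler step of the replicator equation dp_i/dt = p_i (f_i - E(f)).
dt = 1e-6
mu_before = np.sum(p * x)
p_next = p + dt * p * (f - np.sum(p * f))
mu_after = np.sum(p_next * x)

print(np.isclose((mu_after - mu_before) / dt, cov))   # True
```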
Simple Price equation.
When the characteristic values formula_2 do not change from the parent to the child generation, the second term in the Price equation becomes zero resulting in a simplified version of the Price equation:
formula_43
which can be restated as:
formula_44
where formula_45 is the fractional fitness: formula_46.
This simple Price equation can be proven using the definition in Equation (2) above. It makes this fundamental statement about evolution: "If a certain inheritable characteristic is correlated with an increase in fractional fitness, the average value of that characteristic in the child population will be increased over that in the parent population."
Applications.
The Price equation can describe any system that changes over time, but is most often applied in evolutionary biology. The evolution of sight provides an example of simple directional selection. The evolution of sickle cell anemia shows how a heterozygote advantage can affect trait evolution. The Price equation can also be applied to population context dependent traits such as the evolution of sex ratios. Additionally, the Price equation is flexible enough to model second order traits such as the evolution of mutability. The Price equation also provides an extension of the founder effect, showing the change in population traits across different settlements.
Dynamical sufficiency and the simple Price equation.
Sometimes the genetic model being used encodes enough information into the parameters used by the Price equation to allow the calculation of the parameters for all subsequent generations. This property is referred to as dynamical sufficiency. For simplicity, the following looks at dynamical sufficiency for the simple Price equation, but is also valid for the full Price equation.
Referring to the definition in Equation (2), the simple Price equation for the character formula_0 can be written:
formula_47
For the second generation:
formula_48
The simple Price equation for formula_0 only gives us the value of formula_49 for the first generation, but does not give us the value of formula_50 and formula_51, which are needed to calculate formula_52 for the second generation. The variables formula_4 and formula_51 can both be thought of as characteristics of the first generation, so the Price equation can be used to calculate them as well:
formula_53
The five 0-generation variables formula_7, formula_0, formula_51, formula_54, and formula_55 must be known before proceeding to calculate the three first generation variables formula_50, formula_49, and formula_56, which are needed to calculate formula_52 for the second generation. It can be seen that in general the Price equation cannot be used to propagate forward in time unless there is a way of calculating the higher moments formula_57 and formula_58 from the lower moments in a way that is independent of the generation. Dynamical sufficiency means that such equations can be found in the genetic model, allowing the Price equation to be used alone as a propagator of the dynamics of the model forward in time.
Full Price equation.
The simple Price equation was based on the assumption that the characters formula_2 do not change over one generation. If it is assumed that they do change, with formula_12 being the value of the character in the child population, then the full Price equation must be used. A change in character can come about in a number of ways. The following two examples illustrate two such possibilities, each of which introduces new insight into the Price equation.
Genotype fitness.
We focus on the idea of the fitness of the genotype. The index formula_3 indicates the genotype and the number of type formula_3 genotypes in the child population is:
formula_59
which gives fitness:
formula_60
Since the individual mutability formula_2 does not change, the average mutabilities will be:
formula_61
with these definitions, the simple Price equation now applies.
Lineage fitness.
In this case we want to look at the idea that fitness is measured by the number of children an organism has, regardless of their genotype. Note that we now have two methods of grouping, by lineage, and by genotype. It is this complication that will introduce the need for the full Price equation. The number of children an formula_3-type organism has is:
formula_62
which gives fitness:
formula_63
We now have characters in the child population which are the average character of the formula_3-th parent.
formula_64
with global characters:
formula_61
with these definitions, the full Price equation now applies.
Criticism.
The use of the change in average characteristic (formula_65) per generation as a measure of evolutionary progress is not always appropriate. There may be cases where the average remains unchanged (and the covariance between fitness and characteristic is zero) while evolution is nevertheless in progress. For example, if we have formula_66, formula_67, and formula_68, then for the child population, formula_69 showing that the peak fitness at formula_70 is in fact fractionally increasing the population of individuals with formula_71. However, the average characteristics are "z=2" and "z'=2" so that formula_72. The covariance formula_73 is also zero. The simple Price equation is required here, and it yields "0=0". In other words, it yields no information regarding the progress of evolution in this system.
A critical discussion of the use of the Price equation can be found in van Veelen (2005), van Veelen "et al". (2012), and van Veelen (2020). Frank (2012) discusses the criticism in van Veelen "et al". (2012).
Cultural references.
Price's equation features in the plot and title of the 2008 thriller film "WΔZ".
The Price equation also features in posters in the computer game "BioShock 2", in which a consumer of a "Brain Boost" tonic is seen deriving the Price equation while simultaneously reading a book. The game is set in the 1950s, substantially before Price's work.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "z"
},
{
"math_id": 1,
"text": "\\Delta z"
},
{
"math_id": 2,
"text": "z_i"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "w_i"
},
{
"math_id": 5,
"text": "\\mathrm{E}(w_i \\Delta z_i)"
},
{
"math_id": 6,
"text": "\\Delta{z} = \\frac{1}{w}\\operatorname{cov}(w_i, z_i) + \\frac{1}{w}\\operatorname{E}(w_i\\,\\Delta z_i)."
},
{
"math_id": 7,
"text": "w"
},
{
"math_id": 8,
"text": "\\operatorname{E}"
},
{
"math_id": 9,
"text": "\\operatorname{cov}"
},
{
"math_id": 10,
"text": "n_i"
},
{
"math_id": 11,
"text": "n_i'"
},
{
"math_id": 12,
"text": "z_i'"
},
{
"math_id": 13,
"text": "w_i=n_i'/n_i"
},
{
"math_id": 14,
"text": "w_i'"
},
{
"math_id": 15,
"text": "n_i'=w_i n_i"
},
{
"math_id": 16,
"text": "\\sum_i q_i = \\sum_i q_i' = 1"
},
{
"math_id": 17,
"text": "w_i\\;\\stackrel{\\mathrm{def}}{=}\\;n_i'/n_i"
},
{
"math_id": 18,
"text": "x_i"
},
{
"math_id": 19,
"text": "E(x_i)=\\sum_i q_i x_i"
},
{
"math_id": 20,
"text": "w\\;\\stackrel{\\mathrm{def}}{=}\\;\\sum_i q_i w_i"
},
{
"math_id": 21,
"text": "q_i w_i = \\left(\\frac{n_i}{n}\\right)\\left(\\frac{n_i'}{n_i}\\right) = \\left(\\frac{n_i'}{n'}\\right) \\left(\\frac{n'}{n}\\right)=q_i'\\left(\\frac{n'}{n}\\right)"
},
{
"math_id": 22,
"text": "w=\\frac{n'}{n}\\sum_i q_i' = \\frac{n'}{n}"
},
{
"math_id": 23,
"text": "q_i w_i = w\\,q_i'"
},
{
"math_id": 24,
"text": "\\operatorname{cov}(w_i,z_i)\\;\\stackrel{\\mathrm{def}}{=}\\;E(w_i z_i)-E(w_i)E(z_i) = \\sum_i q_i w_i z_i - w z"
},
{
"math_id": 25,
"text": "\\Delta z_i \\;\\stackrel{\\mathrm{def}}{=}\\; z_i'-z_i"
},
{
"math_id": 26,
"text": "w_i \\Delta z_i"
},
{
"math_id": 27,
"text": "E(w_i \\Delta z_i) = \\sum q_i w_i (z_i'-z_i) = \\sum_i q_i w_i z_i' - \\sum_i q_i w_i z_i"
},
{
"math_id": 28,
"text": "\\operatorname{cov}(w_i,z_i)+E(w_i \\Delta z_i) = \\sum_i q_i w_i z_i - w z + \\sum_i q_i w_i z_i' - \\sum_i q_i w_i z_i = \\sum_i q_i w_i z_i' - w z "
},
{
"math_id": 29,
"text": "\\operatorname{cov}(w_i,z_i)+E(w_i \\Delta z_i) = w\\sum_i q_i' z_i' - w z = w z'-wz = w\\Delta z"
},
{
"math_id": 30,
"text": "\\Delta z\\;\\stackrel{\\mathrm{def}}{=}\\;z'-z"
},
{
"math_id": 31,
"text": "i = 1,...,n"
},
{
"math_id": 32,
"text": "x_{i}"
},
{
"math_id": 33,
"text": "n_{i}"
},
{
"math_id": 34,
"text": "{dn_{i}\\over{dt}} = f_{i}n_{i}"
},
{
"math_id": 35,
"text": "f_{i}"
},
{
"math_id": 36,
"text": "\\mathbb{E}(x) = \\sum_{i}p_{i}x_{i} \\equiv \\mu, \\quad p_{i} = {n_{i}\\over{\\sum_{i}n_{i}}}"
},
{
"math_id": 37,
"text": "\\begin{aligned}\n{d\\mu\\over{dt}} &= \\sum_{i} {\\partial \\mu\\over{\\partial p_{i}}}{dp_{i}\\over{dt}} + \\sum_{i} {\\partial \\mu\\over{\\partial x_{i}}}{dx_{i}\\over{dt}} \\\\\n&= \\sum_{i} x_{i}{dp_{i}\\over{dt}} + \\sum_{i} p_{i}{dx_{i}\\over{dt}} \\\\\n&= \\sum_{i} x_{i}{dp_{i}\\over{dt}} + \\mathbb{E}\\left( {dx\\over{dt}} \\right)\n\\end{aligned}"
},
{
"math_id": 38,
"text": "dp_{i}/dt"
},
{
"math_id": 39,
"text": "{dp_{i}\\over{dt}} = \\sum_{j}{\\partial p_{i}\\over{\\partial n_{j}}}{dn_{j}\\over{dt}}, \\quad {\\partial p_{i}\\over{\\partial n_{j}}} = \\begin{cases} -p_{i}/N, \\quad &i\\neq j \\\\ (1-p_{i})/N, \\quad &i=j \\end{cases}"
},
{
"math_id": 40,
"text": "\\begin{aligned}\n{dp_{i}\\over{dt}} &= p_{i}\\left(f_{i} - \\sum_{j}p_{j}f_{j}\\right) \\\\\n&= p_{i}\\left[f_{i} - \\mathbb{E}(f)\\right]\n\\end{aligned}"
},
{
"math_id": 41,
"text": "\\begin{aligned}\n\\sum_{i} x_{i}{dp_{i}\\over{dt}} &= \\sum_{i} p_{i}x_{i}\\left[f_{i} - \\mathbb{E}(f)\\right] \\\\\n&= \\mathbb{E}\\left\\{ x_{i}\\left[f_{i}-\\mathbb{E}(f)\\right] \\right\\} \\\\\n&= \\text{Cov}(x,f)\n\\end{aligned}"
},
{
"math_id": 42,
"text": "{d\\over{dt}}\\mathbb{E}(x) = \\underbrace{\\text{Cov}(x,f)}_{\\text{Selection effect}} + \\underbrace{\\mathbb{E}(\\dot{x})}_{\\text{Dynamic effect}}"
},
{
"math_id": 43,
"text": "w\\,\\Delta z = \\operatorname{cov}\\left(w_i, z_i\\right)"
},
{
"math_id": 44,
"text": "\\Delta z = \\operatorname{cov}\\left(v_i, z_i\\right)"
},
{
"math_id": 45,
"text": "v_i"
},
{
"math_id": 46,
"text": "v_i=w_i/w"
},
{
"math_id": 47,
"text": "w(z' - z) = \\langle w_i z_i \\rangle - wz"
},
{
"math_id": 48,
"text": "w'(z'' - z') = \\langle w'_i z'_i \\rangle - w'z'"
},
{
"math_id": 49,
"text": "z'"
},
{
"math_id": 50,
"text": "w'"
},
{
"math_id": 51,
"text": "\\langle w_iz_i\\rangle"
},
{
"math_id": 52,
"text": "z''"
},
{
"math_id": 53,
"text": "\\begin{align}\n w(w' - w) &= \\langle w_i^2\\rangle - w^2 \\\\\n w\\left(\\langle w'_i z'_i\\rangle - \\langle w_i z_i\\rangle\\right) &= \\langle w_i ^2 z_i\\rangle - w\\langle w_i z_i\\rangle\n\\end{align}"
},
{
"math_id": 54,
"text": "\\langle w_i^2\\rangle"
},
{
"math_id": 55,
"text": "\\langle w_i^2z_i"
},
{
"math_id": 56,
"text": "\\langle w'_iz'_i\\rangle"
},
{
"math_id": 57,
"text": "\\langle w_i^n\\rangle"
},
{
"math_id": 58,
"text": "\\langle w_i^nz_i\\rangle"
},
{
"math_id": 59,
"text": "n'_i = \\sum_j w_{ji}n_j\\,"
},
{
"math_id": 60,
"text": "w_i = \\frac{n'_i}{n_i}"
},
{
"math_id": 61,
"text": "\\begin{align}\n z &= \\frac{1}{n}\\sum_i z_i n_i \\\\\n z' &= \\frac{1}{n'}\\sum_i z_i n'_i\n\\end{align}"
},
{
"math_id": 62,
"text": "n'_i = n_i\\sum_j w_{ij}\\,"
},
{
"math_id": 63,
"text": "w_i = \\frac{n'_i}{n_i} = \\sum_j w_{ij}"
},
{
"math_id": 64,
"text": "z'_j = \\frac{\\sum_i n_i z_i w_{ij} }{\\sum_i n_i w_{ij}}"
},
{
"math_id": 65,
"text": "z'-z"
},
{
"math_id": 66,
"text": "z_i=(1,2,3)"
},
{
"math_id": 67,
"text": "n_i=(1,1,1)"
},
{
"math_id": 68,
"text": "w_i=(1,4,1)"
},
{
"math_id": 69,
"text": "n_i'=(1,4,1)"
},
{
"math_id": 70,
"text": "w_2=4"
},
{
"math_id": 71,
"text": "z_i=2"
},
{
"math_id": 72,
"text": "\\Delta z=0"
},
{
"math_id": 73,
"text": "\\mathrm{cov}(z_i,w_i)"
}
] | https://en.wikipedia.org/wiki?curid=1076205 |
1076393 | Net income | Measure of the profitability of a business venture
In business and accounting, net income (also total comprehensive income, net earnings, net profit, bottom line, sales profit, or credit sales) is an entity's income minus cost of goods sold, expenses, depreciation and amortization, interest, and taxes for an accounting period.
It is computed as the residual of all revenues and gains less all expenses and losses for the period, and has also been defined as the net increase in shareholders' equity that results from a company's operations. It is different from gross income, which only deducts the cost of goods sold from revenue.
For households and individuals, net income refers to the (gross) income minus taxes and other deductions (e.g. mandatory pension contributions).
Definition.
Net income can be distributed among holders of common stock as a dividend or held by the firm as an addition to retained earnings. As profit and earnings are used synonymously for income (also depending on UK and US usage), net earnings and net profit are commonly found as synonyms for net income. Often, the term income is substituted for net income, yet this is not preferred due to the possible ambiguity. Net income is informally called the bottom line because it is typically found on the last line of a company's income statement (a related term is top line, meaning revenue, which forms the first line of the account statement).
In simplistic terms, net profit is the money left over after paying all the expenses of an endeavor. In practice this can get very complex in large organizations. The bookkeeper or accountant must itemise and allocate revenues and expenses properly to the specific working scope and context in which the term is applied.
Net income is usually calculated per annum, for each fiscal year. The items deducted will typically include tax expense, financing expense (interest expense), and minority interest. Likewise, preferred stock dividends will be subtracted too, though they are not an expense. For a merchandising company, subtracted costs may be the cost of goods sold, sales discounts, and sales returns and allowances. For a product company, advertising, manufacturing, & design and development costs are included. Net income can also be calculated by adding a company's operating income to non-operating income and then subtracting off taxes.
The net profit margin percentage is a related ratio. This figure is calculated by dividing net profit by revenue or turnover, and it represents profitability, as a percentage.
An equation for net income.
Net profit: To calculate net profit for a venture (such as a company, division, or project), subtract all costs, including a fair share of total corporate overheads, from the gross revenues or turnover.
formula_0
A detailed example of a net income calculation:
formula_1
Net profit is a measure of the fundamental profitability of the venture. "It is the revenues of the activity less the costs of the activity. The main complication is . . . when needs to be allocated" across ventures. "Almost by definition, overheads are costs that cannot be directly tied to any specific" project, product, or division. "The classic example would be the cost of headquarters staff." "Although it is theoretically possible to calculate profits for any sub-(venture), such as a product or region, often the calculations are rendered suspect by the need to allocate overhead costs." Because overhead costs generally do not come in neat packages, their allocation across ventures is not an exact science.
Example.
Net profit on a P & L (profit and loss) account:
Another equation to calculate net income:
Net sales (revenue)
– Cost of goods sold
= Gross profit
– SG&A expenses (combined costs of operating the company)
– Research and development (R&D)
= Earnings before interest, taxes, depreciation and amortization (EBITDA)
– Depreciation and amortization
= Earnings before interest and taxes (EBIT)
– Interest expense (cost of borrowing money)
= Earnings before taxes (EBT)
– Tax expense
= Net income (EAT)
Net sales = gross sales – (customer discounts, returns, and allowances)
Gross profit = net sales – cost of goods sold
Operating profit = gross profit – total operating expenses
Net profit = operating profit – taxes – interest
Net profit = net sales – cost of goods sold – operating expense – taxes – interest
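The last equation above can be written directly as a small function; the figures below are invented purely for illustration and are not taken from any standard:
```python
def net_income(net_sales, cogs, operating_expenses, taxes, interest):
    """Net profit = net sales - cost of goods sold
    - operating expense - taxes - interest (the last equation above)."""
    return net_sales - cogs - operating_expenses - taxes - interest

# Hypothetical figures, chosen only for illustration
print(net_income(net_sales=100_000, cogs=40_000,
                 operating_expenses=25_000, taxes=8_000, interest=2_000))
# 25000
```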
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{Net Profit} = \\text{Sales Revenue} - \\text{Total Costs}"
},
{
"math_id": 1,
"text": "\\text{Net Income} = \\text{Gross Profit} - \\text{Operating Expenses} - \\text{Other Business Expenses} - \\text{Taxes} - \\text{Interest on Debt} + \\text{Other Income}"
}
] | https://en.wikipedia.org/wiki?curid=1076393 |
10766404 | Generalised circle | Concept in geometry including line and circle
In geometry, a generalized circle, sometimes called a "cline" or "circline", is a straight line or a circle, the curves of constant curvature in the Euclidean plane.
The natural setting for generalized circles is the extended plane, a plane along with one point at infinity through which every straight line is considered to pass. Given any three distinct points in the extended plane, there exists precisely one generalized circle passing through all three.
Generalized circles sometimes appear in Euclidean geometry, which has a well-defined notion of distance between points, and where every circle has a center and radius: the point at infinity can be considered infinitely distant from any other point, and a line can be considered as a degenerate circle without a well-defined center and with infinite radius (zero curvature). A reflection across a line is a Euclidean isometry (distance-preserving transformation) which maps lines to lines and circles to circles; but an inversion in a circle is not, distorting distances and mapping any line to a circle passing through the reference circle's center, and vice versa.
However, generalized circles are fundamental to inversive geometry, in which circles and lines are considered indistinguishable, the point at infinity is not distinguished from any other point, and the notions of curvature and distance between points are ignored. In inversive geometry, reflections, inversions, and more generally their compositions, called Möbius transformations, map generalized circles to generalized circles, and preserve the inversive relationships between objects.
The extended plane can be identified with the sphere using a stereographic projection. The point at infinity then becomes an ordinary point on the sphere, and all generalized circles become circles on the sphere.
Extended complex plane.
The extended Euclidean plane can be identified with the extended complex plane, so that equations of complex numbers can be used to describe lines, circles and inversions.
Bivariate linear equation.
A circle formula_0 is the set of points formula_1 in a plane that lie at radius formula_2 from a center point formula_3
formula_4
In the complex plane, formula_5 is a complex number and formula_0 is a set of complex numbers. Using the property that a complex number multiplied by its conjugate is the square of its modulus (its Euclidean distance from the origin), an implicit equation for formula_0 is:
formula_6
This is a homogeneous bivariate linear polynomial equation in terms of the complex variable formula_1 and its conjugate formula_7 of the form
formula_8
where coefficients formula_9 and formula_10 are real, and formula_11 and formula_12 are complex conjugates.
By dividing by formula_9 and then reversing the steps above, the radius formula_2 and center formula_5 can be recovered from any equation of this form. The equation represents a generalized circle in the plane when formula_2 is real, which occurs when formula_13 so that the squared radius formula_14 is positive. When formula_9 is zero, the equation defines a straight line.
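As a concrete illustration, this recovery can be carried out numerically. The following sketch (written for this purpose, not a library routine) takes formula_9, formula_11 and formula_10, treats formula_12 as the conjugate of formula_11, and returns the center and radius:
```python
def circle_from_coefficients(A, B, D):
    """Center and radius of  A*z*conj(z) + B*z + conj(B)*conj(z) + D = 0,
    with A and D real.  Returns None for a line (A = 0) or an empty locus."""
    if A == 0:
        return None                      # a straight line: no finite center/radius
    C = B.conjugate()
    center = -C / A                      # gamma = -C/A
    r_squared = ((B * C).real - A * D) / A**2
    return (center, r_squared ** 0.5) if r_squared > 0 else None

# Circle of center 1+2j and radius 3:  A = 1, B = -(1-2j), D = |1+2j|**2 - 9
print(circle_from_coefficients(1, -(1 - 2j), -4))   # ((1+2j), 3.0)
```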
Complex reciprocal.
That the reciprocal transformation formula_15 maps generalized circles to generalized circles is straightforward to verify:
formula_16
Lines through the origin (formula_17) map to lines through the origin; lines not through the origin (formula_18) map to circles through the origin; circles through the origin (formula_19) map to lines not through the origin; and circles not through the origin (formula_20) map to circles not through the origin.
Complex matrix representation.
The defining equation of a generalized circle
formula_21
can be written as a matrix equation
formula_22
Symbolically,
formula_23
with coefficients placed into an invertible hermitian matrix formula_24 representing the circle, and formula_25 a vector representing an extended complex number.
Two such matrices specify the same generalized circle if and only if one is a scalar multiple of the other.
To transform the generalized circle represented by formula_26 by the Möbius transformation formula_27 apply the inverse of the Möbius transformation formula_28 to the vector formula_29 in the implicit equation,
formula_30
so the new circle can be represented by the matrix formula_31
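A short numerical sketch of this rule follows; the particular circle and transformation are chosen only as an example and are not from a reference:
```python
import numpy as np

def transform_circle(C_mat, H):
    """Matrix of the image circle under the Moebius transformation with
    matrix H: computes G^T C conj(G) with G the inverse of H."""
    G = np.linalg.inv(H)
    return G.T @ C_mat @ np.conj(G)

unit_circle = np.array([[1, 0], [0, -1]], dtype=complex)   # z*conj(z) - 1 = 0
reciprocal = np.array([[0, 1], [1, 0]], dtype=complex)      # z -> 1/z
print(transform_circle(unit_circle, reciprocal))
# equals [[-1, 0], [0, 1]], a scalar multiple of the original matrix:
# the unit circle is mapped onto itself by z -> 1/z.
```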
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Gamma"
},
{
"math_id": 1,
"text": "z"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "\\gamma."
},
{
"math_id": 4,
"text": "\\Gamma(\\gamma, r) = \\{ z : \\text{the distance between } z \\text{ and } \\gamma \\text{ is } r \\} "
},
{
"math_id": 5,
"text": "\\gamma"
},
{
"math_id": 6,
"text": "\\begin{align}\nr^2 &= \\left| z - \\gamma \\right|^2 = (z-\\gamma)\\overline{(z-\\gamma)} \\\\[5mu]\n0 &= z \\bar z - \\bar \\gamma z - \\gamma \\bar z + \\left(\\gamma \\bar \\gamma - r^2\\right).\n\\end{align}"
},
{
"math_id": 7,
"text": "\\bar{z},"
},
{
"math_id": 8,
"text": "\nA z \\bar z + B z + C \\bar z + D = 0,\n"
},
{
"math_id": 9,
"text": "A"
},
{
"math_id": 10,
"text": "D"
},
{
"math_id": 11,
"text": "B"
},
{
"math_id": 12,
"text": "C"
},
{
"math_id": 13,
"text": "AD < BC"
},
{
"math_id": 14,
"text": "r^2 = (BC - AD)/A^2"
},
{
"math_id": 15,
"text": "z \\mapsto w = 1/z"
},
{
"math_id": 16,
"text": "\\begin{align}\n0 &= A z \\bar z + B z + C \\bar z + D \\\\[5mu]\n&= \\frac{A}{w\\bar w} + \\frac{B}{w} + \\frac{C}{\\bar w} + D \\\\[5mu]\n&= A + B \\bar w + C w + D w \\bar w \\\\[5mu]\n&= D \\bar w w + C w + B \\bar w + A .\n\\end{align}"
},
{
"math_id": 17,
"text": "A = D = 0"
},
{
"math_id": 18,
"text": "A = 0, D \\neq 0"
},
{
"math_id": 19,
"text": "A \\neq 0, D =0"
},
{
"math_id": 20,
"text": "A \\neq 0, D \\neq 0"
},
{
"math_id": 21,
"text": "0 = A z \\bar z + B z + C \\bar z + D"
},
{
"math_id": 22,
"text": "\n0 = \\begin{pmatrix}z & 1 \\end{pmatrix}\n\\begin{pmatrix}A & B \\\\ C & D \\end{pmatrix}\n\\begin{pmatrix}\\bar{z} \\\\ 1 \\end{pmatrix}.\n"
},
{
"math_id": 23,
"text": "0 = \\mathbf{z}^\\text{T} \\mathfrak C \\, \\bar{\\mathbf{z}},"
},
{
"math_id": 24,
"text": "\\mathfrak C = {\\mathfrak C}^\\dagger"
},
{
"math_id": 25,
"text": "\\mathbf{z} = \\begin{pmatrix} z & 1 \\end{pmatrix}^\\text{T}"
},
{
"math_id": 26,
"text": "\\mathfrak C"
},
{
"math_id": 27,
"text": "\\mathfrak H,"
},
{
"math_id": 28,
"text": "\\mathfrak G = \\mathfrak H^{-1}"
},
{
"math_id": 29,
"text": "\\mathbf{z}"
},
{
"math_id": 30,
"text": "\\begin{align}\n0 &= \\left({\\mathfrak G} \\mathbf{z}\\right)^\\text{T}\n\\mathfrak C \\,\n\\overline{(\n \\mathfrak G\n \\mathbf{z}\n)} \\\\ [5mu]\n&= \n\\mathbf{z}^\\text{T}\n\\left({\\mathfrak G}^\\text{T} \\mathfrak C \\bar{\\mathfrak G} \\right)\n\\bar{\\mathbf{z}},\n\\end{align}"
},
{
"math_id": 31,
"text": "{\\mathfrak G}^\\text{T} {\\mathfrak C} \\bar{\\mathfrak G}."
}
] | https://en.wikipedia.org/wiki?curid=10766404 |
10766937 | Semiautomaton | In mathematics and theoretical computer science, a semiautomaton is a deterministic finite automaton having inputs but no output. It consists of a set "Q" of states, a set Σ called the input alphabet, and a function "T": "Q" × Σ → "Q" called the transition function.
Associated with any semiautomaton is a monoid called the characteristic monoid, input monoid, transition monoid or transition system of the semiautomaton, which acts on the set of states "Q". This may be viewed either as an action of the free monoid of strings in the input alphabet Σ, or as the induced transformation semigroup of "Q".
In older books like Clifford and Preston (1967) semigroup actions are called "operands".
In category theory, semiautomata essentially are functors.
Transformation semigroups and monoid acts.
A transformation semigroup or transformation monoid is a pair formula_0 consisting of a set "Q" (often called the "set of states") and a semigroup or monoid "M" of functions, or "transformations", mapping "Q" to itself. They are functions in the sense that every element "m" of "M" is a map formula_1. If "s" and "t" are two functions of the transformation semigroup, their semigroup product is defined as their function composition formula_2.
Some authors regard "semigroup" and "monoid" as synonyms. Here a semigroup need not have an identity element; a monoid is a semigroup with an identity element (also called "unit"). Since the notion of functions acting on a set always includes the notion of an identity function, which when applied to the set does nothing, a transformation semigroup can be made into a monoid by adding the identity function.
"M"-acts.
Let "M" be a monoid and "Q" be a non-empty set. If there exists a multiplicative operation
formula_3
formula_4
which satisfies the properties
formula_5
for 1 the unit of the monoid, and
formula_6
for all formula_7 and formula_8, then the triple formula_9 is called a right "M"-act or simply a right act. In long-hand, formula_10 is the "right multiplication of elements of Q by elements of M". The right act is often written as formula_11.
A left act is defined similarly, with
formula_12
formula_13
and is often denoted as formula_14.
An "M"-act is closely related to a transformation monoid. However the elements of "M" need not be functions "per se", they are just elements of some monoid. Therefore, one must demand that the action of formula_10 be consistent with multiplication in the monoid ("i.e." formula_15), as, in general, this might not hold for some arbitrary formula_10, in the way that it does for function composition.
Once one makes this demand, it is completely safe to drop all parenthesis, as the monoid product and the action of the monoid on the set are completely associative. In particular, this allows elements of the monoid to be represented as strings of letters, in the computer-science sense of the word "string". This abstraction then allows one to talk about string operations in general, and eventually leads to the concept of formal languages as being composed of strings of letters.
Another difference between an "M"-act and a transformation monoid is that for an "M"-act "Q", two distinct elements of the monoid may determine the same transformation of "Q". If we demand that this does not happen, then an "M"-act is essentially the same as a transformation monoid.
"M"-homomorphism.
For two "M"-acts formula_11 and formula_16 sharing the same monoid formula_17, an "M"-homomorphism formula_18 is a map formula_19 such that
formula_20
for all formula_21 and formula_22. The set of all "M"-homomorphisms is commonly written as formula_23 or formula_24.
The "M"-acts and "M"-homomorphisms together form a category called "M"-Act.
Semiautomata.
A semiautomaton is a triple formula_25 where formula_26 is a non-empty set, called the "input alphabet", "Q" is a non-empty set, called the "set of states", and "T" is the "transition function"
formula_27
When the set of states "Q" is a finite set (it need not be), a semiautomaton may be thought of as a deterministic finite automaton formula_28, but without the initial state formula_29 or set of accept states "A". Alternatively, it is a finite-state machine that has no output, only an input.
Any semiautomaton induces an act of a monoid in the following way.
Let formula_30 be the free monoid generated by the alphabet formula_26 (so that the superscript * is understood to be the Kleene star); it is the set of all finite-length strings composed of the letters in formula_26.
For every word "w" in formula_30, let formula_31 be the function, defined recursively as follows, for all "q" in "Q": if formula_32, then formula_33, where formula_34 denotes the empty string; if formula_35 is a single letter, then formula_36; and if formula_37 with formula_38 and formula_39, then formula_40.
Let formula_41 be the set
formula_42
The set formula_41 is closed under function composition; that is, for all formula_43, one has formula_44. It also contains formula_45, which is the identity function on "Q". Since function composition is associative, the set formula_41 is a monoid: it is called the input monoid, characteristic monoid, characteristic semigroup or transition monoid of the semiautomaton formula_25.
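A small concrete example may help. In the following sketch (the states, alphabet and transition table are invented for illustration), the transition function is extended from letters to words and the composition law formula_44 is checked:
```python
# States {0, 1, 2}, alphabet {'a', 'b'}, and a transition function T,
# all invented for this example.
T = {
    (0, 'a'): 1, (0, 'b'): 0,
    (1, 'a'): 2, (1, 'b'): 0,
    (2, 'a'): 2, (2, 'b'): 1,
}

def T_word(q, w):
    """Extend T to words: apply the letters of w to q, first letter first."""
    for letter in w:
        q = T[(q, letter)]
    return q

# The composition law T_w o T_v = T_{vw}: acting with v and then with w
# is the same as acting with the concatenated word vw.
assert T_word(T_word(0, 'ab'), 'ba') == T_word(0, 'abba')
```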
Properties.
If the set of states "Q" is finite, then the transition functions are commonly represented as state transition tables. The structure of all possible transitions driven by strings in the free monoid has a graphical depiction as a de Bruijn graph.
The set of states "Q" need not be finite, or even countable. As an example, semiautomata underpin the concept of quantum finite automata. There, the set of states "Q" is given by the complex projective space formula_46, and individual states are referred to as "n"-state qubits. State transitions are given by unitary "n"×"n" matrices. The input alphabet formula_26 remains finite, and other typical concerns of automata theory remain in play. Thus, the quantum semiautomaton may be simply defined as the triple formula_47 when the alphabet formula_26 has "p" letters, so that there is one unitary matrix formula_48 for each letter formula_38. Stated in this way, the quantum semiautomaton has many geometrical generalizations. Thus, for example, one may take a Riemannian symmetric space in place of formula_46, and selections from its group of isometries as transition functions.
The syntactic monoid of a regular language is isomorphic to the transition monoid of the minimal automaton accepting the language.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(M,Q)"
},
{
"math_id": 1,
"text": "m\\colon Q\\to Q"
},
{
"math_id": 2,
"text": "(st)(q)=(s\\circ t)(q)=s(t(q))"
},
{
"math_id": 3,
"text": "\\mu\\colon Q\\times M \\to Q"
},
{
"math_id": 4,
"text": "(q,m)\\mapsto qm=\\mu(q,m)"
},
{
"math_id": 5,
"text": "q1=q"
},
{
"math_id": 6,
"text": "q(st)=(qs)t"
},
{
"math_id": 7,
"text": "q\\in Q"
},
{
"math_id": 8,
"text": "s,t\\in M"
},
{
"math_id": 9,
"text": "(Q,M,\\mu)"
},
{
"math_id": 10,
"text": "\\mu"
},
{
"math_id": 11,
"text": "Q_M"
},
{
"math_id": 12,
"text": "\\mu\\colon M\\times Q \\to Q"
},
{
"math_id": 13,
"text": "(m,q)\\mapsto mq=\\mu(m,q)"
},
{
"math_id": 14,
"text": "\\,_MQ"
},
{
"math_id": 15,
"text": "\\mu(q, st)=\\mu(\\mu(q,s),t)"
},
{
"math_id": 16,
"text": "B_M"
},
{
"math_id": 17,
"text": "M"
},
{
"math_id": 18,
"text": "f\\colon Q_M\\to B_M"
},
{
"math_id": 19,
"text": "f\\colon Q \\to B "
},
{
"math_id": 20,
"text": "f(qm)=f(q)m"
},
{
"math_id": 21,
"text": "q\\in Q_M"
},
{
"math_id": 22,
"text": "m\\in M"
},
{
"math_id": 23,
"text": "\\mathrm{Hom}(Q_M, B_M)"
},
{
"math_id": 24,
"text": "\\mathrm{Hom}_M(Q, B)"
},
{
"math_id": 25,
"text": "(Q,\\Sigma,T)"
},
{
"math_id": 26,
"text": "\\Sigma"
},
{
"math_id": 27,
"text": "T\\colon Q\\times \\Sigma \\to Q."
},
{
"math_id": 28,
"text": "(Q,\\Sigma,T,q_0,A)"
},
{
"math_id": 29,
"text": "q_0"
},
{
"math_id": 30,
"text": "\\Sigma^*"
},
{
"math_id": 31,
"text": "T_w\\colon Q\\to Q"
},
{
"math_id": 32,
"text": "w=\\varepsilon"
},
{
"math_id": 33,
"text": "T_\\varepsilon(q)=q"
},
{
"math_id": 34,
"text": "\\varepsilon"
},
{
"math_id": 35,
"text": "w=\\sigma"
},
{
"math_id": 36,
"text": "T_\\sigma(q)=T(q,\\sigma)"
},
{
"math_id": 37,
"text": "w=\\sigma v"
},
{
"math_id": 38,
"text": "\\sigma\\in\\Sigma"
},
{
"math_id": 39,
"text": "v\\in \\Sigma^*"
},
{
"math_id": 40,
"text": "T_w(q)=T_v(T_\\sigma(q))"
},
{
"math_id": 41,
"text": "M(Q,\\Sigma,T)"
},
{
"math_id": 42,
"text": "M(Q,\\Sigma,T)=\\{T_w \\vert w\\in\\Sigma^* \\}."
},
{
"math_id": 43,
"text": "v,w\\in\\Sigma^*"
},
{
"math_id": 44,
"text": "T_w\\circ T_v=T_{vw}"
},
{
"math_id": 45,
"text": "T_\\varepsilon"
},
{
"math_id": 46,
"text": "\\mathbb{C}P^n"
},
{
"math_id": 47,
"text": "(\\mathbb{C}P^n,\\Sigma,\\{U_{\\sigma_1},U_{\\sigma_2},\\dotsc,U_{\\sigma_p}\\})"
},
{
"math_id": 48,
"text": "U_\\sigma"
}
] | https://en.wikipedia.org/wiki?curid=10766937 |
10768144 | Transition system | State machine that may have infinite states
In theoretical computer science, a transition system is a concept used in the study of computation. It is used to describe the potential behavior of discrete systems. It consists of states and transitions between states, which may be labeled with labels chosen from a set; the same label may appear on more than one transition. If the label set is a singleton, the system is essentially unlabeled, and a simpler definition that omits the labels is possible.
Transition systems coincide mathematically with abstract rewriting systems (as explained further in this article) and directed graphs. They differ from finite-state automata in several ways: the set of states is not necessarily finite, the set of transitions is not necessarily finite or even countable, and no "start" or "final" states are distinguished.
Transition systems can be represented as directed graphs.
Formal definition.
Formally, a transition system is a pair formula_0 where formula_1 is a set of states and formula_2, the "transition relation", is a subset of formula_3. We say that there is a transition from state formula_4 to state formula_5 iff formula_6, and denote it formula_7.
A labelled transition system is a tuple formula_8 where formula_1 is a set of states, formula_9 is a set of labels, and formula_2, the "labelled transition relation", is a subset of formula_10. We say that there is a transition from state formula_4 to state formula_5 with label formula_11 iff formula_12 and denote it
formula_13
Labels can represent different things depending on the language of interest. Typical uses of labels include representing input expected, conditions that must be true to trigger the transition, or actions performed during the transition. Labelled transition systems were originally introduced as "named" transition systems.
Coalgebra formulation.
The formal definition can be rephrased as follows. Labelled state transition systems on formula_1 with labels from formula_9 correspond one-to-one with functions formula_15, where formula_16 is the (covariant) powerset functor. Under this bijection formula_8 is sent to formula_17, defined by
formula_18.
In other words, a labelled state transition system is a coalgebra for the functor formula_19.
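The correspondence can be made concrete with a few lines of code; the states, labels and transitions below are invented purely for illustration:
```python
# A labelled transition system stored as a set of triples (p, label, q).
transitions = {
    ('s0', 'a', 's1'),
    ('s0', 'b', 's0'),
    ('s1', 'a', 's2'),
}

def xi(p):
    """The coalgebra map: send a state to its set of (label, successor) pairs."""
    return {(label, q) for (src, label, q) in transitions if src == p}

print(xi('s0'))   # {('a', 's1'), ('b', 's0')}
```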
Relation between labelled and unlabelled transition system.
There are many relations between these concepts. Some are simple, such as observing that a labelled transition system where the set of labels consists of only one element is equivalent to an unlabelled transition system. However, not all these relations are equally trivial.
Comparison with abstract rewriting systems.
As a mathematical object, an unlabeled transition system is identical with an (unindexed) abstract rewriting system. If we consider the rewriting relation as an indexed set of relations, as some authors do, then a labeled transition system is equivalent to an abstract rewriting system with the indices being the labels. The focus of the study and the terminology are different, however. In a transition system one is interested in interpreting the labels as actions, whereas in an abstract rewriting system the focus is on how objects may be transformed (rewritten) into others.
Extensions.
In model checking, a transition system is sometimes defined to include an additional labeling function for the states as well, resulting in a notion that encompasses that of Kripke structure.
Action languages are extensions of transition systems, adding a set of "fluents" "F", a set of values "V", and a function that maps "F" × "S" to "V".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(S, T)"
},
{
"math_id": 1,
"text": "S"
},
{
"math_id": 2,
"text": "T"
},
{
"math_id": 3,
"text": "S \\times S"
},
{
"math_id": 4,
"text": "p"
},
{
"math_id": 5,
"text": "q"
},
{
"math_id": 6,
"text": "(p, q) \\in T"
},
{
"math_id": 7,
"text": "p \\rightarrow q"
},
{
"math_id": 8,
"text": "(S, \\Lambda, T)"
},
{
"math_id": 9,
"text": "\\Lambda"
},
{
"math_id": 10,
"text": "S \\times \\Lambda \\times S"
},
{
"math_id": 11,
"text": "\\alpha"
},
{
"math_id": 12,
"text": "(p, \\alpha, q) \\in T"
},
{
"math_id": 13,
"text": "\np \\xrightarrow{\\alpha} q \\,.\n"
},
{
"math_id": 14,
"text": "(p, \\alpha, q)"
},
{
"math_id": 15,
"text": "S \\to \\mathcal{P}(\\Lambda \\times S) "
},
{
"math_id": 16,
"text": "\\mathcal{P}"
},
{
"math_id": 17,
"text": "\\xi_T : S \\to \\mathcal{P}(\\Lambda \\times S) "
},
{
"math_id": 18,
"text": " p \\mapsto \\{\\,(\\alpha, q) \\in \\Lambda \\times S \\mid p \\xrightarrow{\\alpha} q \\,\\}"
},
{
"math_id": 19,
"text": "P(\\Lambda \\times {-})"
}
] | https://en.wikipedia.org/wiki?curid=10768144 |
10770999 | Variable-order Markov model | Markov-based processes with variable "memory"
In the mathematical theory of stochastic processes, variable-order Markov (VOM) models are an important class of models that extend the well known Markov chain models. In contrast to the Markov chain models, where each random variable in a sequence with a Markov property depends on a fixed number of random variables, in VOM models this number of conditioning random variables may vary based on the specific observed realization.
This realization sequence is often called the "context"; therefore the VOM models are also called "context trees". VOM models are nicely rendered by colorized probabilistic suffix trees (PST). The flexibility in the number of conditioning random variables turns out to be of real advantage for many applications, such as statistical analysis, classification and prediction.
Example.
Consider for example a sequence of random variables, each of which takes a value from the ternary alphabet {"a", "b", "c"}. Specifically, consider the string constructed from infinite concatenations of the sub-string "aaabc": "aaabcaaabcaaabcaaabc…aaabc".
The VOM model of maximal order 2 can approximate the above string using "only" the following five conditional probability components: Pr("a" | "aa") = 0.5, Pr("b" | "aa") = 0.5, Pr("c" | "b") = 1.0, Pr("a" | "c") = 1.0, Pr("a" | "ca") = 1.0.
In this example, Pr("c" | "ab") = Pr("c" | "b") = 1.0; therefore, the shorter context "b" is sufficient to determine the next character. Similarly, the VOM model of maximal order 3 can generate the string exactly using only five conditional probability components, which are all equal to 1.0.
To construct the Markov chain of order 1 for the next character in that string, one must estimate the following 9 conditional probability components: Pr("a" | "a"), Pr("a" | "b"), Pr("a" | "c"), Pr("b" | "a"), Pr("b" | "b"), Pr("b" | "c"), Pr("c" | "a"), Pr("c" | "b"), Pr("c" | "c"). To construct the Markov chain of order 2 for the next character, one must estimate 27 conditional probability components: Pr("a" | "aa"), Pr("a" | "ab"), …, Pr("c" | "cc"). And to construct the Markov chain of order three for the next character one must estimate the following 81 conditional probability components: Pr("a" | "aaa"), Pr("a" | "aab"), …, Pr("c" | "ccc").
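The counts of 9, 27 and 81 quoted above follow from the fact that a fixed-order model of order k over an alphabet of size |A| must estimate |A|^(k+1) conditional probability components; a one-line check (illustrative only):
```python
# Number of conditional probability components Pr(x | s) that a fixed-order
# Markov chain over the ternary alphabet {"a", "b", "c"} must estimate.
A = 3                                  # alphabet size
for k in (1, 2, 3):                    # order of the chain
    print(f"order {k}: {A ** (k + 1)} components")   # 9, 27, 81
```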
In practical settings there is seldom sufficient data to accurately estimate the exponentially increasing number of conditional probability components as the order of the Markov chain increases.
The variable-order Markov model assumes that in realistic settings, there are certain realizations of states (represented by contexts) in which some past states are independent from the future states; accordingly, "a great reduction in the number of model parameters can be achieved."
Definition.
Let A be a state space (finite alphabet) of size formula_0.
Consider a sequence with the Markov property formula_1 of n realizations of random variables, where formula_2 is the state (symbol) at position i formula_3, and the concatenation of states formula_4 and formula_5 is denoted by formula_6.
Given a training set of observed states, formula_7, the construction algorithm of the VOM models learns a model P that provides a probability assignment for each state in the sequence given its past (previously observed symbols) or future states.
Specifically, the learner generates a conditional probability distribution formula_8 for a symbol formula_9 given a context formula_10, where the * sign represents a sequence of states of any length, including the empty context.
VOM models attempt to estimate conditional distributions of the form formula_8 where the context length formula_11 varies depending on the available statistics.
In contrast, conventional Markov models attempt to estimate these conditional distributions by assuming a fixed context length formula_12 and, hence, can be considered as special cases of the VOM models.
Effectively, for a given training sequence, the VOM models are found to obtain better model parameterization than the fixed-order Markov models, which leads to a better variance–bias tradeoff of the learned models.
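The following Python sketch illustrates the idea of conditioning on a context of variable length. It is a simplified toy: it estimates counts for all contexts up to a maximum length D and predicts from the longest context seen in training, rather than performing the pruned context-tree construction used by actual VOM learning algorithms:
```python
from collections import Counter, defaultdict

def train(sequence, D=2):
    """Count symbol occurrences after every context of length 0..D."""
    counts = defaultdict(Counter)
    for i in range(len(sequence)):
        for d in range(D + 1):
            if i - d >= 0:
                counts[sequence[i - d:i]][sequence[i]] += 1
    return counts

def predict(counts, context, D=2):
    """P(x | longest suffix of context that was observed in training)."""
    for d in range(min(D, len(context)), -1, -1):
        suffix = context[len(context) - d:]
        if suffix in counts:
            total = sum(counts[suffix].values())
            return {x: c / total for x, c in counts[suffix].items()}
    return {}

counts = train("aaabc" * 100)
print(predict(counts, "ab"))   # {'c': 1.0}
print(predict(counts, "xb"))   # {'c': 1.0} -- falls back to the context "b"
```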
Application areas.
Various efficient algorithms have been devised for estimating the parameters of the VOM model.
VOM models have been successfully applied to areas such as machine learning, information theory and bioinformatics, including specific applications such as coding and data compression, document compression, classification and identification of DNA and protein sequences, statistical process control, spam filtering, haplotyping, speech recognition, sequence analysis in social sciences, and others.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "|A|"
},
{
"math_id": 1,
"text": "x_1^{n}=x_1x_2\\dots x_n"
},
{
"math_id": 2,
"text": " x_i\\in A"
},
{
"math_id": 3,
"text": "\\scriptstyle (1 \\le i \\le n)"
},
{
"math_id": 4,
"text": "x_i"
},
{
"math_id": 5,
"text": "x_{i+1}"
},
{
"math_id": 6,
"text": "x_ix_{i+1}"
},
{
"math_id": 7,
"text": "x_1^{n}"
},
{
"math_id": 8,
"text": "P(x_i\\mid s)"
},
{
"math_id": 9,
"text": "x_i \\in A"
},
{
"math_id": 10,
"text": "s\\in A^*"
},
{
"math_id": 11,
"text": "|s| \\le D"
},
{
"math_id": 12,
"text": "|s| = D"
}
] | https://en.wikipedia.org/wiki?curid=10770999 |
1077155 | Compact Linear Collider | Concept for a linear particle accelerator
The Compact Linear Collider (CLIC) is a concept for a future linear particle accelerator that aims to explore the next energy frontier. CLIC would collide electrons with positrons and is currently the only mature option for a multi-TeV linear collider. The accelerator would be between 11 and 50 km (7 and 31 mi) long, more than ten times longer than the existing Stanford Linear Accelerator (SLAC) in California, US. CLIC is proposed to be built at CERN, across the border between France and Switzerland near Geneva, with first beams starting by the time the Large Hadron Collider (LHC) has finished operations around 2035.
The CLIC accelerator would use a novel two-beam acceleration technique at an acceleration gradient of 100 MV/m, and its staged construction would provide collisions at three centre-of-mass energies up to 3 TeV for optimal physics reach. Research and development (R&D) are being carried out to achieve the high precision physics goals under challenging beam and background conditions.
CLIC aims to discover new physics beyond the Standard Model of particle physics, through precision measurements of Standard Model properties as well as direct detection of new particles. The collider would offer high sensitivity to electroweak states, exceeding the predicted precision of the full LHC programme. The current CLIC design includes the possibility for electron beam polarisation.
The CLIC collaboration produced a Conceptual Design Report (CDR) in 2012, complemented by an updated energy staging scenario in 2016. Additional detailed studies of the physics case for CLIC, an advanced design of the accelerator complex and the detector, as well as numerous R&D results are summarised in a recent series of CERN Yellow Reports.
Background.
There are two main types of particle colliders, which differ in the types of particles they collide: lepton colliders and hadron colliders. Each type of collider can produce different final states of particles and can study different physics phenomena. Examples of hadron colliders are the ISR, the SPS and the LHC at CERN, and the Tevatron in the US. Examples of lepton colliders are the SuperKEKB in Japan, the BEPC II in China, DAFNE in Italy, the VEPP in Russia, SLAC in the US, and the Large Electron–Positron Collider at CERN. Some of these lepton colliders are still running.
Hadrons are compound objects, which lead to more complicated collision events and limit the achievable precision of physics measurements. This is for instance why the Large Hadron Collider was designed to operate at such a high energy even while it was already known the Higgs particle ought to be found at around the energies it eventually was: the lesser accuracy of a hadron collider necessitated more numerous and higher energy impacts to compensate. Lepton colliders on the other hand collide fundamental particles, therefore the initial state of each event is known and higher precision measurements can be achieved.
Another means of categorizing colliders is by their physical geometry: either linear or circular. Circular colliders benefit from being able to accelerate particles over and over to reach very high energies, and from being able to repeatedly intersect their beams, to reach very high numbers of collisions between individual particles.
On the other hand, they are limited by the fact that keeping the particles circulating means constantly accelerating them inwards. This makes charged particles emit synchrotron radiation, eventually leading to a significant energy loss and a limit on achievable collision energy. This so-called synchrotron loss is especially harmful to lepton colliders, because at a given energy it scales as the fourth power of the particle's Lorentz factor (its energy divided by its rest mass), and the only stable leptons around (electrons and positrons) are, as the name says, very light. They have to be brought to much higher Lorentz factors than heavier particles (baryons) in order to reach the same energy, and synchrotron loss becomes the limiting factor.
As a linear collider, CLIC will not have this problem. It still has to tackle the problem of not being able to recirculate its beams, which, despite the name "compact", necessitates a massive scale and a rather unconventional design to reach the high accelerating gradients required.
Three energy stages.
CLIC is foreseen to be built and operated in three stages with different centre-of-mass energies: 380 GeV, 1.5 TeV, and 3 TeV. The integrated luminosities at each stage are expected to be 1 ab−1, 2.5 ab−1, and 5 ab−1 respectively, providing a broad physics programme over a 27-year period. These centre-of-mass energies have been motivated by current LHC data and studies of the physics potential carried out by the CLIC study.
Already at 380 GeV, CLIC has good coverage of Standard Model physics; the energy stages beyond this allow for the discovery of new physics as well as increased precision measurements of Standard Model processes. Additionally, CLIC will operate at the top quark pair-production threshold around 350 GeV with the aim of precisely measuring the properties of the top quark.
Physics case for CLIC.
CLIC would allow the exploration of new energy ranges, provide possible solutions to unanswered problems, and enable the discovery of phenomena beyond our current understanding.
Higgs physics.
The current LHC data suggest that the particle found in 2012 is the Higgs boson as predicted by the Standard Model of particle physics. However, the LHC can only partially answer questions about the true nature of this particle, such as its composite/fundamental nature, coupling strengths, and possible role in an extended electroweak sector. CLIC could examine these questions in more depth by measuring the Higgs couplings to a precision not achieved before. The 380 GeV stage of CLIC allows, for example, accurate model-independent measurements of Higgs boson couplings to fermions and bosons through the Higgsstrahlung and WW-fusion production processes. The second and third stages give access to phenomena such as the top-Yukawa coupling, rare Higgs decays and the Higgs self-coupling.
Top-quark physics.
The top quark, the heaviest of all known fundamental particles, has currently never been studied in electron-positron collisions. The CLIC linear collider plans to have an extensive top quark physics programme. A major aim of this programme would be a threshold scan around the top quark pair-production threshold (~350 GeV) to precisely determine the mass and other significant properties of the top quark. For this scan, CLIC currently plans to devote 10% of the running time of the first stage, collecting 100 fb−1. This study would allow the top quark mass to be ascertained in a theoretically well-defined manner and at a higher precision than possible with hadron colliders. CLIC would also aim to measure the top quark electroweak couplings to the Z boson and the photon, as deviations of these values from those predicted by the Standard Model could be evidence of new physics phenomena, such as extra dimensions. Further observation of top quark decays with flavour-changing neutral currents at CLIC would be an indirect indication of new physics, as these should not be seen by CLIC under current Standard Model predictions.
New phenomena.
CLIC could discover new physics phenomena either through indirect measurements or by direct observation. Large deviations in precision measurements of particle properties from the Standard Model prediction would indirectly signal the presence of new physics. Such indirect methods give access to energy scales far beyond the available collision energy, reaching sensitivities of up to tens of TeV.
Examples of indirect measurements CLIC would be capable of at 3 TeV are: using the production of muon pairs to provide evidence of a Z′ boson (reach up to ~30 TeV) indicating a simple gauge extension beyond the Standard Model; using vector boson scattering for giving insight into the mechanism of electroweak symmetry breaking; and exploiting the combination of several final states to determine the elementary or composite nature of the Higgs boson (reach of compositeness scale up to ~50 TeV). Direct pair production of particles up to a mass of 1.5 TeV, and single particle production up to a mass of 3 TeV is possible at CLIC. Due to the clean environment of electron-positron colliders, CLIC would be able to measure the properties of these potential new particles to a very high precision. Examples of particles CLIC could directly observe at 3 TeV are some of those proposed by the supersymmetry theory: charginos, neutralinos (both ~≤ 1.5 TeV), and sleptons (≤ 1.5 TeV).
However, research from experimental data on the cosmological constant, LIGO noise, and pulsar timing, suggests it's very unlikely that there are any new particles with masses much higher than those which can be found in the standard model or the LHC. On the other hand, this research has also indicated that quantum gravity or perturbative quantum field theory will become strongly coupled before 1 PeV, leading to other new physics in the TeVs.
Beams and accelerators.
To reach the desired 3 TeV beam energy, while keeping the length of the accelerator compact, CLIC targets an accelerating gradient up to 100 MV/m. CLIC is based on normal-conducting acceleration cavities operated at room temperature, as they allow for higher acceleration gradients than superconducting cavities. With this technology, the main limitation is the high-voltage breakdown rate (BDR), which follows the empirical law formula_0, where formula_1 is the accelerating gradient and formula_2 is the RF pulse length. The high accelerating gradient and the target BDR value (3 × 10−7 pulse−1m−1) drive most of the beam parameters and machine design.
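To illustrate how steep this empirical dependence is, the following back-of-the-envelope sketch (not taken from CLIC documentation) evaluates the relative change in breakdown rate for a 10% increase in gradient at fixed pulse length:
```python
def relative_bdr(gradient_ratio, pulse_length_ratio=1.0):
    """Relative change in breakdown rate under the law BDR ~ E**30 * tau**5."""
    return gradient_ratio ** 30 * pulse_length_ratio ** 5

print(relative_bdr(1.10))   # ~17: a 10% higher gradient gives ~17 times the BDR
```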
In order to reach these high accelerating gradients while keeping the power consumption affordable, CLIC makes use of a novel two-beam-acceleration scheme: a so-called Drive Beam runs parallel to the colliding Main Beam. The Drive Beam is decelerated in special devices called Power Extraction and Transfer Structures (PETS) that extract energy from the Drive Beam in the form of powerful Radio Frequency (RF) waves, which is then used to accelerate the Main Beam. Up to 90% of the energy of the Drive Beam is extracted and efficiently transferred to the Main Beam.
Main beam.
The electrons needed for the main beam are produced by illuminating a GaAs-type cathode with a Q-switched polarised laser, and are longitudinally polarised at the level of 80%. The positrons for the main beam are produced by sending a 5 GeV electron beam onto a tungsten target. After an initial acceleration up to 2.86 GeV, both electrons and positrons enter damping rings for emittance reduction by radiation damping. Both beams are then further accelerated to 9 GeV in a common booster linac. Long transfer lines transport the two beams to the beginning of the main linacs where they are accelerated up to 1.5 TeV before going into the Beam Delivery System (BDS), which squeezes and brings the beams into collision. The two beams collide at the interaction point (IP) with a 20 mrad crossing angle in the horizontal plane.
Drive beam.
Each Drive Beam complex is composed of a 2.5 km-long linac, followed by a Drive Beam Recombination Complex: a system of delay lines and combiner rings where the incoming beam pulses are interleaved to ultimately form a 12 GHz sequence and a local beam current as high as 100 A. Each 2.5 km-long Drive Beam linac is powered by 1 GHz klystrons. This produces a 148 μs-long beam (for the 1.5 TeV energy stage scenario) with a bunching frequency of 0.5 GHz. Every 244 ns the bunching phase is switched by 180 degrees, i.e. odd and even buckets at 1 GHz are filled alternately. This phase-coding allows the first factor two recombination: the odd bunches are delayed in a Delay Loop (DL), while the even bunches bypass it. The time of flight of the DL is about 244 ns and tuned at the picosecond level such that the two trains of bunches can merge, forming several 244 ns-long trains with bunching frequency at 1 GHz, separated by 244 ns of empty space. This new time-structure allows for further factor 3 and factor 4 recombination in the following combiner rings with a similar mechanism as in the DL. The final time structure of the beam is made of several (up to 25) 244 ns-long trains of bunches at 12 GHz, spaced by gaps of about 5.5 μs. The recombination is timed such that each combined train arrives in its own decelerator sector, synchronized with the arrival of the Main Beam. The use of low-frequency (1 GHz), long-pulse-length (148 μs) klystrons for accelerating the Drive Beam and the beam recombination makes it more convenient than using klystrons to directly accelerate the Main Beam.
Test facilities.
The main technology challenges of the CLIC accelerator design have been successfully addressed in various test facilities. The Drive Beam production and recombination, and the two-beam acceleration concept were demonstrated at the CLIC Test Facility 3 (CTF3). X-band high-power klystron-based RF sources were built in stages at the high-gradient X-band test facility (XBOX), CERN. These facilities provide the RF power and infrastructure required for the conditioning and verification of the performance of CLIC accelerating structures, and other X-band based projects. Additional X-band high-gradient tests are being carried out at the NEXTEF facility at KEK and at SLAC; a new test stand is being commissioned at Tsinghua University, and further test stands are being constructed at INFN Frascati and SINAP in Shanghai.
CLIC detector.
A state-of-the-art detector is essential to profit from the complete physics potential of CLIC. The current detector design, named CLICdet, has been optimised via full simulation studies and R&D activities. The detector follows the standard design of large particle detectors at high energy colliders: a cylindrical detector volume with a layered configuration, surrounding the beam axis. CLICdet would have dimensions of ~13 × 12 m (height × length) and weigh ~8000 tonnes.
Detector Layers.
CLICdet consists of four main layers of increasing radius: vertex and tracking system, calorimeters, solenoid magnet, and muon detector.
The vertex and tracking system is located at the innermost region of CLICdet and aims to detect the position and momenta of particles with minimum adverse impact on their energy and trajectory. The vertex detector is cylindrical with three double layers of detector materials at increasing radii and has three segmented disks at each end in a spiral configuration to aid air flow cooling. These are assumed to be made of 25x25 μm2 silicon pixels of thickness 50 μm, and the aim is to have a single point resolution of 3 μm. The tracking system is made of silicon sensor modules expected to be 200 μm thick.
The calorimeters surround the vertex and tracking system and aim to measure the energy of particles via absorption. The electromagnetic calorimeter (ECAL) consists of ~40 layers of silicon/tungsten in a sandwich structure; the hadronic calorimeter (HCAL) has 60 steel absorber plates with scintillating material inserted in between.
These inner CLICdet layers are enclosed in a superconducting solenoid magnet with a field strength of 4 T. This magnetic field bends charged particles, allowing for momentum and charge measurements. The magnet is then surrounded by an iron yoke which would contain large area detectors for muon identification.
The detector also has a luminosity calorimeter (LumiCal) to measure the products of Bhabha scattering events, a beam calorimeter to complete the ECAL coverage down to 10 mrads polar angle, and an intra-train feedback system to counteract luminosity loss due to relative beam-beam offsets.
Power pulsing and cooling.
Strict requirements on the material budget for the vertex and tracking system do not allow the use of conventional liquid cooling systems for CLICdet. Therefore, it is proposed that a dry gas cooling system will be used for this inner region. Air gaps have been factored into the design of the detector to allow the flow of the gas, which will be air or nitrogen. To allow for effective air cooling, the average power consumption of the silicon sensors in the vertex detector needs to be lowered. Therefore, these sensors will operate via a current-based power pulsing scheme: switching the sensors from a high to low power consumption state whenever possible, corresponding to the 50 Hz bunch train crossing rate.
Status.
As of 2017, approximately two percent of the CERN annual budget is invested in the development of CLIC technologies. The first stage of CLIC, with a length of around 11 km, is currently estimated at a cost of six billion CHF. CLIC is a global project involving more than 70 institutes in more than 30 countries. It consists of two collaborations: the CLIC detector and physics collaboration (CLICdp), and the CLIC accelerator study. CLIC is currently in the development stage, conducting performance studies for accelerator parts and systems, detector technology and optimisation studies, and physics analysis. In parallel, the collaborations are working with the theory community to evaluate the physics potential of CLIC.
The CLIC project has submitted two concise documents as input to the next update of the European Strategy for Particle Physics (ESPP) summarising the physics potential of CLIC as well as the status of the CLIC accelerator and detector projects.
The update of the ESPP is a community-wide process, which is expected to conclude in May 2020 with the publication of a strategy document.
Detailed information on the CLIC project is available in CERN Yellow Reports, on the CLIC potential for New Physics, the CLIC project implementation plan and the Detector technologies for CLIC. An overview is provided in the 2018 CLIC Summary Report.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "BDR \\propto E^{30}\\tau^5"
},
{
"math_id": 1,
"text": "E"
},
{
"math_id": 2,
"text": "\\tau"
}
] | https://en.wikipedia.org/wiki?curid=1077155 |
1077261 | Spin quantum number | Quantum number parameterizing spin and angular momentum
In physics and chemistry, the spin quantum number is a quantum number (designated s) that describes the intrinsic angular momentum (or spin angular momentum, or simply "spin") of an electron or other particle. It has the same value for all particles of the same type, such as s = 1/2 for all electrons. It is an integer for all bosons, such as photons, and a half-odd-integer for all fermions, such as electrons and protons.
The component of the spin along a specified axis is given by the spin magnetic quantum number, conventionally written ms. The value of ms is the component of spin angular momentum, in units of the reduced Planck constant ħ, parallel to a given direction (conventionally labelled the z–axis). It can take values ranging from +s to −s in integer increments. For an electron, ms can be either +1/2 or −1/2.
Nomenclature.
The phrase "spin quantum number" refers to quantized spin angular momentum.
The symbol s is used for the spin quantum number, and ms is described as the spin magnetic quantum number or as the z-component of spin sz.
Both the total spin and the z-component of spin are quantized, leading to two quantum numbers, the spin and spin magnetic quantum numbers. The (total) spin quantum number has only one value for every elementary particle. Some introductory chemistry textbooks describe ms as the "spin quantum number", and s is not mentioned since its value is a fixed property of the electron; some even use the variable s in place of ms.
The two spin quantum numbers formula_0 and formula_1 are the spin angular momentum analogs of the two orbital angular momentum quantum numbers formula_2 and formula_3.
Spin quantum numbers apply also to systems of coupled spins, such as atoms that may contain more than one electron. Capitalized symbols are used: S for the total electronic spin, and mS or MS for the z-axis component. A pair of electrons in a spin singlet state has S = 0, and a pair in the triplet state has S = 1, with mS = −1, 0, or +1. Nuclear-spin quantum numbers are conventionally written I for spin, and mI or MI for the z-axis component.
The name "spin" comes from a geometrical spinning of the electron about an axis, as proposed by Uhlenbeck and Goudsmit. However, this simplistic picture was quickly realized to be physically unrealistic, because it would require the electrons to rotate faster than the speed of light. It was therefore replaced by a more abstract quantum-mechanical description.
History.
During the period between 1916 and 1925, much progress was being made concerning the arrangement of electrons in the periodic table. In order to explain the Zeeman effect in the Bohr atom, Sommerfeld proposed that electrons would be based on three 'quantum numbers', n, k, and m, that described the size of the orbit, the shape of the orbit, and the direction in which the orbit was pointing. Irving Langmuir had explained in his 1919 paper regarding electrons in their shells, "Rydberg has pointed out that these numbers are obtained from the series formula_4. The factor two suggests a fundamental two-fold symmetry for all stable atoms." This formula_5 configuration was adopted by Edmund Stoner, in October 1924 in his paper 'The Distribution of Electrons Among Atomic Levels' published in the Philosophical Magazine.
Despite its qualitative success, the Sommerfeld quantum number scheme failed to explain the Zeeman effect in weak magnetic field strengths, the anomalous Zeeman effect. In December 1924, Wolfgang Pauli showed that the core electron angular momentum was not related to the effect as had previously been assumed. Rather he proposed that only the outer "light" electrons determined the angular momentum, and he hypothesized that this required a fourth quantum number with a two-valuedness. This fourth quantum number became the spin magnetic quantum number.
Electron spin.
A spin-1/2 particle is characterized by an angular momentum quantum number for spin "s" = 1/2. In solutions of the Schrödinger–Pauli equation, angular momentum is quantized according to this number, so that the magnitude of the spin angular momentum is
formula_6
The hydrogen spectrum fine structure is observed as a doublet corresponding to two possibilities for the "z"-component of the angular momentum, where for any given direction z:
formula_7
whose solution has only two possible z-components for the electron. In the electron, the two different spin orientations are sometimes called "spin-up" or "spin-down".
The spin property of an electron would give rise to a magnetic moment, which was a requisite for the fourth quantum number.
The magnetic moment vector of an electron spin is given by:
formula_8
where formula_9 is the electron charge, formula_10 is the electron mass, and formula_11 is the electron spin g-factor, which is approximately 2.0023.
Its "z"-axis projection is given by the spin magnetic quantum number formula_12 according to:
formula_13
where formula_14 is the Bohr magneton.
When atoms have even numbers of electrons the spin of each electron in each orbital has opposing orientation to that of its immediate neighbor(s). However, many atoms have an odd number of electrons or an arrangement of electrons in which there is an unequal number of "spin-up" and "spin-down" orientations. These atoms or electrons are said to have unpaired spins that are detected in electron spin resonance.
Nuclear spin.
Atomic nuclei also have spins. The nuclear spin I is a fixed property of each nucleus and may be either an integer or a half-integer. The component mI of nuclear spin parallel to the z–axis can have (2I + 1) values I, I–1, ..., –I. For example, a 14N nucleus has I = 1, so that there are 3 possible orientations relative to the z–axis, corresponding to states mI = +1, 0 and −1.
The spins I of different nuclei are interpreted using the nuclear shell model. Even-even nuclei with even numbers of both protons and neutrons, such as 12C and 16O, have spin zero. Odd mass number nuclei have half-integer spins, such as 3/2 for 7Li, 1/2 for 13C and 5/2 for 17O, usually corresponding to the angular momentum of the last nucleon added. Odd-odd nuclei with odd numbers of both protons and neutrons have integer spins, such as 3 for 10B, and 1 for 14N. Values of nuclear spin for a given isotope are found in the lists of isotopes for each element. (See isotopes of oxygen, isotopes of aluminium, etc.)
Detection of spin.
When lines of the hydrogen spectrum are examined at very high resolution, they are found to be closely spaced doublets. This splitting is called fine structure, and was one of the first pieces of experimental evidence for electron spin. The direct observation of the electron's intrinsic angular momentum was achieved in the Stern–Gerlach experiment.
Stern–Gerlach experiment.
The theory of spatial quantization of the spin angular momentum of electrons in atoms situated in a magnetic field needed to be proved experimentally. In 1922 (two years before the theoretical description of the spin was created) Otto Stern and Walter Gerlach observed it in an experiment they conducted.
Silver atoms were evaporated using an electric furnace in a vacuum. Using thin slits, the atoms were guided into a flat beam and the beam sent through an in-homogeneous magnetic field before colliding with a metallic plate. The laws of classical physics predict that the collection of condensed silver atoms on the plate should form a thin solid line in the same shape as the original beam. However, the in-homogeneous magnetic field caused the beam to split in two separate directions, creating two lines on the metallic plate.
The phenomenon can be explained with the spatial quantization of the spin moment of momentum. In atoms the electrons are paired such that one spins upward and one downward, neutralizing the effect of their spin on the action of the atom as a whole. But in the valence shell of silver atoms, there is a single electron whose spin remains unbalanced.
The unbalanced spin creates a spin magnetic moment, making the electron act like a very small magnet. As the atoms pass through the in-homogeneous magnetic field, the force moment in the magnetic field influences the electron's dipole until its position matches the direction of the stronger field. The atom would then be pulled toward or away from the stronger magnetic field a specific amount, depending on the value of the valence electron's spin. When the spin of the electron is +1/2 the atom moves away from the stronger field, and when the spin is −1/2 the atom moves toward it. Thus the beam of silver atoms is split while traveling through the in-homogeneous magnetic field, according to the spin of each atom's valence electron.
In 1927 Phipps and Taylor conducted a similar experiment, using atoms of hydrogen, with similar results. Later, scientists conducted experiments using other atoms that have only one electron in their valence shell (copper, gold, sodium, potassium). Every time, two lines formed on the metallic plate.
The atomic nucleus also may have spin, but protons and neutrons are much heavier than electrons (about 1836 times), and the magnetic dipole moment is inversely proportional to the mass. So the nuclear magnetic dipole moment is much smaller than that of the whole atom. This small magnetic dipole was later measured by Stern, Frisch and Estermann.
Electron paramagnetic resonance.
For atoms or molecules with an unpaired electron, transitions in a magnetic field can also be observed in which only the spin quantum number changes, without change in the electron orbital or the other quantum numbers. This is the method of electron paramagnetic resonance (EPR) or electron spin resonance (ESR), used to study free radicals. Since only the magnetic interaction of the spin changes, the energy change is much smaller than for transitions between orbitals, and the spectra are observed in the microwave region.
Relation to spin vectors.
For a solution of either the nonrelativistic Pauli equation or the relativistic Dirac equation, the quantized angular momentum (see angular momentum quantum number) can be written as:
formula_15
where formula_16 is the quantized spin vector, formula_17 is the norm of the spin vector, "s" is the spin quantum number associated with the spin angular momentum, and formula_18 is the reduced Planck constant.
Given an arbitrary direction z (usually determined by an external magnetic field) the spin z-projection is given by
formula_19
where ms is the magnetic spin quantum number, ranging from −s to +s in steps of one. This generates 2s + 1 different values of ms.
The allowed values for s are non-negative integers or half-integers. Fermions have half-integer values, including the electron, proton and neutron, which all have s = 1/2. Bosons (such as the photon and all mesons) have integer spin values.
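As a small illustration, the allowed values of ms for a given s can be enumerated directly; the choice s = 3/2 below is arbitrary:
```python
from fractions import Fraction

s = Fraction(3, 2)                        # an arbitrary half-integer example
m_values = [-s + k for k in range(int(2 * s) + 1)]
print([str(m) for m in m_values])         # ['-3/2', '-1/2', '1/2', '3/2']
```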
Algebra.
The algebraic theory of spin is a carbon copy of the theory of angular momentum in quantum mechanics.
First of all, spin satisfies the fundamental commutation relation:
formula_20
formula_21
where formula_22 is the (antisymmetric) Levi-Civita symbol. This means that it is impossible to know two coordinates of the spin at the same time because of the restriction of the uncertainty principle.
Next, the eigenvectors of formula_23 and formula_24 satisfy:
formula_25
formula_26
formula_27
where formula_28 are the ladder (or "raising" and "lowering") operators.
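These relations can be verified numerically in the simplest case s = 1/2, where the spin operators (in units of ħ) are one half of the Pauli matrices. The following sketch is a routine check written for illustration, not material from the original article:
```python
import numpy as np

# Spin-1/2 operators in units of hbar (one half of the Pauli matrices).
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]])
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

# Fundamental commutation relation: [S_x, S_y] = i S_z.
assert np.allclose(Sx @ Sy - Sy @ Sx, 1j * Sz)

# S^2 = s(s+1) * identity, with s = 1/2.
assert np.allclose(Sx @ Sx + Sy @ Sy + Sz @ Sz, 0.75 * np.eye(2))

# The raising operator S_+ = S_x + i S_y maps |1/2, -1/2> to |1/2, +1/2>
# with coefficient sqrt(s(s+1) - m(m+1)) = 1.
S_plus = Sx + 1j * Sy
down, up = np.array([0, 1], dtype=complex), np.array([1, 0], dtype=complex)
assert np.allclose(S_plus @ down, up)
```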
Energy levels from the Dirac equation.
In 1928, Paul Dirac developed a relativistic wave equation, now termed the Dirac equation, which predicted the spin magnetic moment correctly, and at the same time treated the electron as a point-like particle. Solving the Dirac equation for the energy levels of an electron in the hydrogen atom, all four quantum numbers including s occurred naturally and agreed well with experiment.
Total spin of an atom or molecule.
For some atoms the spins of several unpaired electrons (s1, s2, ...) are coupled to form a "total spin" quantum number S. This occurs especially in light atoms (or in molecules formed only of light atoms) when spin–orbit coupling is weak compared to the coupling between spins or the coupling between orbital angular momenta, a situation known as L S coupling because L and S are constants of motion. Here L is the total "orbital" angular momentum quantum number.
For atoms with a well-defined S, the multiplicity of a state is defined as 2"S" + 1. This is equal to the number of different possible values of the total (orbital plus spin) angular momentum J for a given (L, S) combination, provided that S ≤ L (the typical case). For example, if S = 1, there are three states which form a triplet. The eigenvalues of Sz for these three states are +1ħ, 0, and −1ħ. The term symbol of an atomic state indicates its values of L, S, and J.
As examples, the ground states of both the oxygen atom and the dioxygen molecule have two unpaired electrons and are therefore triplet states. The atomic state is described by the term symbol 3P, and the molecular state by the term symbol 3Σ.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "s"
},
{
"math_id": 1,
"text": "m_s"
},
{
"math_id": 2,
"text": "l"
},
{
"math_id": 3,
"text": "m_l"
},
{
"math_id": 4,
"text": "N = 2(1 + 2^2 + 2^2 + 3^2 + 3^2 + 4^2)"
},
{
"math_id": 5,
"text": "2n^2"
},
{
"math_id": 6,
"text": " \\| \\bold{S} \\| = \\hbar\\sqrt{s(s+1)} = \\tfrac{\\sqrt{3}}{2}\\ \\hbar ~."
},
{
"math_id": 7,
"text": " s_z = \\pm \\tfrac{1}{2}\\hbar ~."
},
{
"math_id": 8,
"text": "\\ \\boldsymbol{\\mu}_\\text{s} = -\\frac{e}{\\ 2m\\ }\\ g_\\text{s}\\ \\bold{S}\\ "
},
{
"math_id": 9,
"text": "-e"
},
{
"math_id": 10,
"text": "m"
},
{
"math_id": 11,
"text": "g_\\text{s}"
},
{
"math_id": 12,
"text": "m_\\text{s}"
},
{
"math_id": 13,
"text": "\\mu_z = -m_\\text{s}\\ g_\\text{s}\\ \\mu_\\mathsf{B} = \\pm \\tfrac{1}{2}\\ g_\\text{s}\\ \\mu_\\mathsf{B}\\ "
},
{
"math_id": 14,
"text": "\\ \\mu_\\mathsf{B}\\ "
},
{
"math_id": 15,
"text": " \\Vert \\mathbf{s} \\Vert = \\sqrt{s \\, (s+1)\\,} \\, \\hbar"
},
{
"math_id": 16,
"text": "\\mathbf{s}"
},
{
"math_id": 17,
"text": "\\Vert \\mathbf{s}\\Vert"
},
{
"math_id": 18,
"text": "\\hbar"
},
{
"math_id": 19,
"text": "s_z = m_s \\, \\hbar"
},
{
"math_id": 20,
"text": "\\ [S_i, S_j ] = i\\ \\hbar\\ \\epsilon_{ijk}\\ S_k\\ ,"
},
{
"math_id": 21,
"text": "\\ \\left[S_i, S^2 \\right] = 0\\ "
},
{
"math_id": 22,
"text": "\\ \\epsilon_{ijk}\\ "
},
{
"math_id": 23,
"text": "\\ S^2\\ "
},
{
"math_id": 24,
"text": "\\ S_z\\ "
},
{
"math_id": 25,
"text": "\\ S^2\\ | s, m_s \\rangle= {\\hbar}^2\\ s(s+1)\\ | s, m_s \\rangle\\ "
},
{
"math_id": 26,
"text": "\\ S_z\\ | s, m_s \\rangle = \\hbar\\ m_s\\ | s, m_s \\rangle\\ "
},
{
"math_id": 27,
"text": "\\ S_\\pm\\ | s, m_s \\rangle = \\hbar\\ \\sqrt{s(s+1) - m_s(m_s \\pm 1)\\ }\\; | s, m_s \\pm 1 \\rangle\\ "
},
{
"math_id": 28,
"text": "\\ S_\\pm = S_x \\pm i S_y\\ "
}
] | https://en.wikipedia.org/wiki?curid=1077261 |
10773039 | Eötvös rule | The Eötvös rule, named after the Hungarian physicist Loránd (Roland) Eötvös (1848–1919), enables the prediction of the surface tension of an arbitrary liquid pure substance at all temperatures. The density, molar mass and critical temperature of the liquid have to be known. At the critical point the surface tension is zero.
The first assumption of the Eötvös rule is:
1. The surface tension is a linear function of the temperature.
This assumption is approximately fulfilled for most known liquids. When plotting the surface tension versus the temperature a fairly straight line can be seen which has a surface tension of zero at the critical temperature.
The Eötvös rule also gives a relation of the surface tension behaviour of different liquids in respect to each other:
2. The temperature dependence of the surface tension can be plotted for all liquids in a way that the data collapses to a single master curve. To do so either the molar mass, the density, or the molar volume of the corresponding liquid has to be known.
More accurate versions are found on the main page for surface tension.
The Eötvös rule.
If "V" is the molar volume and "T"c the critical temperature of a liquid the surface tension γ is given by
formula_0
where "k" is a constant valid for all liquids, with a value of 2.1×10−7 J/(K·mol2/3).
More precise values can be gained when considering that the line normally passes the temperature axis 6 K before the critical point:
formula_1
The molar volume "V" is given by the molar mass "M" and the density ρ
formula_2
The term formula_3 is also referred to as the "molar surface tension" γmol :
formula_4
A useful representation that prevents the use of the unit mol−2/3 is given by the Avogadro constant NA :
formula_5
As John Lennard-Jones and Corner showed in 1940 by means of statistical mechanics, the constant "k"′ is nearly equal to the Boltzmann constant.
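As a quick numerical illustration of the Ramsay–Shields form above, the following minimal Python sketch (not part of the rule itself; the molar mass, density and critical temperature of benzene are standard literature values assumed here purely for illustration) evaluates the predicted surface tension at 20 °C. The measured value is about 28–29 mN/m.
M = 0.07811        # kg/mol, molar mass of benzene (assumed literature value)
rho = 876.0        # kg/m^3, density of benzene near 20 °C (assumed literature value)
Tc = 562.0         # K, critical temperature of benzene (assumed literature value)
k = 2.1e-7         # J/(K·mol^(2/3)), Eötvös constant
T = 293.15         # K, temperature of interest
V = M / rho                                     # molar volume in m^3/mol
gamma = k * (Tc - 6.0 - T) / V ** (2.0 / 3.0)   # Ramsay–Shields form of the rule
print(round(gamma * 1e3, 1), "mN/m")            # roughly 28 mN/m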
Water.
For water, the following equation is valid between 0 and 100 °C.
formula_6
History.
As a student, Eötvös started to research surface tension and developed a new method for its determination. The Eötvös rule was first found phenomenologically and published in 1886. In 1893 William Ramsay and Shields showed an improved version considering that the line normally passes the temperature axis 6 K before the critical point. John Lennard-Jones and Corner published (1940) a derivation of the equation by means of statistical mechanics. In 1945 E. A. Guggenheim gave a further improved variant of the equation. | [
{
"math_id": 0,
"text": "\\gamma V^{2/3} = k(T_c - T)\\,"
},
{
"math_id": 1,
"text": "\\gamma V^{2/3} = k(T_c - 6 \\ \\mathrm{K} - T)\\,"
},
{
"math_id": 2,
"text": "V = M / \\rho\\,"
},
{
"math_id": 3,
"text": "\\gamma V^{2/3}"
},
{
"math_id": 4,
"text": "\\gamma_{mol} = \\gamma V^{2/3}\\,"
},
{
"math_id": 5,
"text": "\\gamma = k' \\left( \\frac{M}{\\rho N_A} \\right)^{-2/3}(T_c - 6 \\ \\mathrm{K} - T) = k' \\left( \\frac{N_A}{V} \\right)^{2/3}(T_c - 6 \\ \\mathrm{K} - T)"
},
{
"math_id": 6,
"text": "\\gamma = 0.07275 \\ \\mathrm{ N/m} \\cdot (1-0.002 \\cdot (T - 291 \\ \\mathrm{K}))"
}
] | https://en.wikipedia.org/wiki?curid=10773039 |
10775385 | Complete quotient | In the metrical theory of regular continued fractions, the "k"th complete quotient ζ "k" is obtained by ignoring the first "k" partial denominators "a""i". For example, if a regular continued fraction is given by
formula_0
then the successive complete quotients ζ "k" are given by
formula_1
A recursive relationship.
From the definition given above we can immediately deduce that
formula_2
or, equivalently,
formula_3
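As a small illustration (not part of the original text), the recursion above can be iterated numerically; the following Python sketch computes the first few partial quotients "a""k" and complete quotients ζ "k" of an irrational number, here √2. Floating-point round-off limits how many terms are reliable.
import math

def complete_quotients(x, n):
    # first n partial quotients a_k and complete quotients zeta_k of x,
    # using zeta_0 = x, a_k = floor(zeta_k), zeta_{k+1} = 1/(zeta_k - a_k)
    a_list, zeta_list = [], []
    zeta = x
    for _ in range(n):
        a = math.floor(zeta)
        a_list.append(a)
        zeta_list.append(zeta)
        zeta = 1.0 / (zeta - a)      # would fail for rational x once zeta becomes an integer
    return a_list, zeta_list

a_list, zeta_list = complete_quotients(math.sqrt(2), 5)
print(a_list)        # [1, 2, 2, 2, 2], i.e. sqrt(2) = [1; 2, 2, 2, ...]
print(zeta_list[1])  # about 2.4142, i.e. 1 + sqrt(2), the first complete quotient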
Complete quotients and the convergents of "x".
Denoting the successive convergents of the regular continued fraction "x" = ["a"0; "a"1, "a"2, …] by "A"0, "A"1/"B"1, "A"2/"B"2, … (as explained more fully in the article fundamental recurrence formulas), it can be shown that
formula_4
for all "k" ≥ 0.
This result can be better understood by recalling that the successive convergents of an infinite regular continued fraction approach the value "x" in a sort of zig-zag pattern:
formula_5
so that when "k" is even we have "A""k"/"B""k" < "x" < "A""k"+1/"B""k"+1, and when "k" is odd we have "A""k"+1/"B""k"+1 < "x" < "A""k"/"B""k". In either case, the "k" + 1st complete quotient ζ "k"+1 is the unique real number that expresses "x" in the form of a semiconvergent.
Complete quotients and equivalent real numbers.
An equivalence relation defined by LFTs.
Consider the set of linear fractional transformations (LFTs) defined by
formula_6
where "a", "b", "c", and "d" are integers, and "ad" − "bc" = ±1. Since this set of LFTs contains an identity element (0 + "x")/1, and since it is closed under composition of functions, and every member of the set has an inverse in the set, these LFTs form a group (the group operation being composition of functions), GL(2,Z).
We can define an equivalence relation on the set of real numbers by means of this group of linear fractional transformations. We will say that two real numbers "x" and "y" are equivalent (written "x" ~ "y") if
formula_7
for some integers "a", "b", "c", and "d" such that "ad" − "bc" = ±1.
Clearly this relation is symmetric, reflexive, and transitive, so it is an equivalence relation and it can be used to separate the real numbers into equivalence classes. All the rational numbers are equivalent, because each rational number is equivalent to zero. What can be said about the irrational numbers? Do they also fall into a single equivalence class?
A theorem about "equivalent" irrational numbers.
Two irrational numbers "x" and "y" are equivalent under this scheme if and only if the infinitely long "tails" in their expansions as regular continued fractions are exactly the same. More precisely, the following theorem can be proved.
Let "x" and "y" be two irrational (real) numbers, and let the "k"th complete quotient in the regular continued fraction expansions of "x" and "y" be denoted by ζ "k" and ψ "k", respectively, Then "x" ~ "y" (under the equivalence defined in the preceding section) if and only if there are positive integers "m" and "n" such that ζ "m" = ψ "n".
An example.
The golden ratio φ is the irrational number with the very simplest possible expansion as a regular continued fraction: φ = [1; 1, 1, 1, …]. The theorem tells us first that if "x" is any real number whose expansion as a regular continued fraction contains the infinite string
[1, 1, 1, 1, …], then there are integers "a", "b", "c", and "d" (with "ad" − "bc" = ±1) such that
formula_8
Conversely, if "a", "b", "c", and "d" are integers (with "ad" − "bc" = ±1), then the regular continued fraction expansion of every real number "y" that can be expressed in the form
formula_9
eventually reaches a "tail" that looks just like the regular continued fraction for φ. | [
{
"math_id": 0,
"text": "\nx = [a_0; a_1, a_2, a_3, \\dots] = a_0 + \\cfrac{1}{a_1 + \\cfrac{1}{a_2 + \\cfrac{1}{a_3 + \\cfrac{1}{\\ddots}}}},\n"
},
{
"math_id": 1,
"text": "\n\\begin{align}\n\\zeta_0 & = [a_0; a_1, a_2, a_3, \\dots]\\\\\n\\zeta_1 & = [a_1; a_2, a_3, a_4, \\dots]\\\\\n\\zeta_2 & = [a_2; a_3, a_4, a_5, \\dots]\\\\\n\\zeta_k & = [a_k; a_{k+1}, a_{k+2}, a_{k+3}, \\dots]. \\,\n\\end{align}\n"
},
{
"math_id": 2,
"text": "\n\\zeta_k = a_k + \\frac{1}{\\zeta_{k+1}} = [a_k; \\zeta_{k+1}], \\,\n"
},
{
"math_id": 3,
"text": "\n\\zeta_{k+1} = \\frac{1}{\\zeta_k - a_k}.\\,\n"
},
{
"math_id": 4,
"text": "\nx = \\frac{A_k \\zeta_{k+1} + A_{k-1}}{B_k \\zeta_{k+1} + B_{k-1}}\\,\n"
},
{
"math_id": 5,
"text": "\nA_0 < \\frac{A_2}{B_2} < \\frac{A_4}{B_4} < \\cdots < \\frac{A_{2n}}{B_{2n}} < x\n< \\frac{A_{2n+1}}{B_{2n+1}} < \\cdots < \\frac{A_5}{B_5} < \\frac{A_3}{B_3} < \\frac{A_1}{B_1}.\\,\n"
},
{
"math_id": 6,
"text": "\nf(x) = \\frac{a + bx}{c + dx}\\,\n"
},
{
"math_id": 7,
"text": "\ny = f(x) = \\frac{a + bx}{c + dx}\\,\n"
},
{
"math_id": 8,
"text": "\nx = \\frac{a + b\\phi}{c + d\\phi}.\\,\n"
},
{
"math_id": 9,
"text": "\ny = \\frac{a + b\\phi}{c + d\\phi}\\,\n"
}
] | https://en.wikipedia.org/wiki?curid=10775385 |
10775645 | Real structure | Mathematics concept
In mathematics, a real structure on a complex vector space is a way to decompose the complex vector space in the direct sum of two real vector spaces. The prototype of such a structure is the field of complex numbers itself, considered as a complex vector space over itself and with the conjugation map formula_0, with formula_1, giving the "canonical" real structure on formula_2, that is formula_3.
The conjugation map is antilinear: formula_4 and formula_5.
Vector space.
A real structure on a complex vector space "V" is an antilinear involution formula_6. A real structure defines a real subspace formula_7, its fixed locus, and the natural map
formula_8
is an isomorphism. Conversely any vector space that is the complexification
of a real vector space has a natural real structure.
One first notes that every complex space "V" has a realification obtained by taking the same vectors as in the original set and restricting the scalars to be real. If formula_9 and formula_10 then the vectors formula_11 and formula_12 are linearly independent in the realification of "V". Hence:
formula_13
Naturally, one would wish to represent "V" as the direct sum of two real vector spaces, the "real and imaginary parts of "V"". There is no canonical way of doing this: such a splitting is an additional real structure in "V". It may be introduced as follows. Let formula_14 be an antilinear map such that formula_15, that is an antilinear involution of the complex space "V".
Any vector formula_16 can be written formula_17,
where formula_18 and formula_19.
Therefore, one gets a direct sum of vector spaces formula_20 where:
formula_21 and formula_22.
Both sets formula_23 and formula_24 are real vector spaces. The linear map formula_25, where formula_26, is an isomorphism of real vector spaces, whence:
formula_27.
The first factor formula_23 is also denoted by formula_28 and is left invariant by formula_29, that is formula_30. The second factor formula_24 is
usually denoted by formula_31. The direct sum formula_20 reads now as:
formula_32,
i.e. as the direct sum of the "real" formula_28 and "imaginary" formula_31 parts of "V". This construction strongly depends on the choice of an antilinear involution of the complex vector space "V". The complexification of the real vector space formula_28, i.e.,
formula_33 admits
a natural real structure and hence is canonically isomorphic to the direct sum of two copies of formula_34:
formula_35.
This yields a natural linear isomorphism formula_36 between complex vector spaces with a given real structure.
A real structure on a complex vector space "V", that is an antilinear involution formula_14, may be equivalently described in terms of the linear map formula_37 from the vector space formula_38 to the complex conjugate vector space formula_39 defined by
formula_40.
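To make the decomposition concrete, here is a small Python sketch (an illustration, not part of the article) for the canonical real structure on C^n, where σ is componentwise complex conjugation and every vector splits as "v" = "v"+ + "v"−.
def sigma(v):
    return [z.conjugate() for z in v]   # antilinear involution: componentwise conjugation

v = [1 + 2j, 5 - 3j, 4 + 0j]
v_plus  = [(z + w) / 2 for z, w in zip(v, sigma(v))]   # fixed by sigma: the "real" part
v_minus = [(z - w) / 2 for z, w in zip(v, sigma(v))]   # negated by sigma: the "imaginary" part

assert all(abs(p + m - z) < 1e-12 for p, m, z in zip(v_plus, v_minus, v))
assert sigma(v_plus) == v_plus
assert sigma(v_minus) == [-z for z in v_minus]
print(v_plus)    # [(1+0j), (5+0j), (4+0j)]
print(v_minus)   # [2j, -3j, 0j]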
Algebraic variety.
For an algebraic variety defined over a subfield of the real numbers,
the real structure is the complex conjugation acting on the points of the variety in complex projective or affine space.
Its fixed locus is the space of real points of the variety (which may be empty).
Scheme.
For a scheme defined over a subfield of the real numbers, complex conjugation
is in a natural way a member of the Galois group of the algebraic closure of the base field.
The real structure is the Galois action of this conjugation on the extension of the
scheme over the algebraic closure of the base field.
The real points are the points whose residue field is fixed (which may be empty).
Reality structure.
In mathematics, a reality structure on a complex vector space "V" is a decomposition of "V" into two real subspaces, called the real and imaginary parts of "V":
formula_41
Here "V"R is a real subspace of "V", i.e. a subspace of "V" considered as a vector space over the real numbers. If "V" has complex dimension "n" (real dimension 2"n"), then "V"R must have real dimension "n".
The standard reality structure on the vector space formula_42 is the decomposition
formula_43
In the presence of a reality structure, every vector in "V" has a real part and an imaginary part, each of which is a vector in "V"R:
formula_44
In this case, the complex conjugate of a vector "v" is defined as follows:
formula_45
This map formula_46 is an antilinear involution, i.e.
formula_47
Conversely, given an antilinear involution formula_48 on a complex vector space "V", it is possible to define a reality structure on "V" as follows. Let
formula_49
and define
formula_50
Then
formula_41
This is actually the decomposition of "V" as the eigenspaces of the real linear operator "c". The eigenvalues of "c" are +1 and −1, with eigenspaces "V"R and formula_51 "V"R, respectively. Typically, the operator "c" itself, rather than the eigenspace decomposition it entails, is referred to as the reality structure on "V".
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma: {\\mathbb C} \\to {\\mathbb C}\\,"
},
{
"math_id": 1,
"text": "\\sigma (z)={\\bar z}"
},
{
"math_id": 2,
"text": "{\\mathbb C}\\,"
},
{
"math_id": 3,
"text": "{\\mathbb C}={\\mathbb R}\\oplus i{\\mathbb R}\\,"
},
{
"math_id": 4,
"text": "\\sigma (\\lambda z)={\\bar \\lambda}\\sigma(z)\\,"
},
{
"math_id": 5,
"text": "\\sigma (z_1+z_2)=\\sigma(z_1)+\\sigma(z_2)\\,"
},
{
"math_id": 6,
"text": "\\sigma: V \\to V"
},
{
"math_id": 7,
"text": "V_{\\mathbb{R}} \\subset V"
},
{
"math_id": 8,
"text": " V_{\\mathbb R} \\otimes_{\\mathbb{R}} {\\mathbb C} \\to V "
},
{
"math_id": 9,
"text": "t\\in V\\,"
},
{
"math_id": 10,
"text": "t\\neq 0"
},
{
"math_id": 11,
"text": "t\\,"
},
{
"math_id": 12,
"text": "it\\,"
},
{
"math_id": 13,
"text": " \\dim_{\\mathbb R}V = 2\\dim_{\\mathbb C}V "
},
{
"math_id": 14,
"text": "\\sigma: V \\to V\\,"
},
{
"math_id": 15,
"text": "\\sigma\\circ\\sigma=id_{V}\\,"
},
{
"math_id": 16,
"text": "v\\in V\\,"
},
{
"math_id": 17,
"text": "{v = v^{+} + v^{-}}\\,"
},
{
"math_id": 18,
"text": "v^+ ={1\\over {2}}(v+\\sigma v)"
},
{
"math_id": 19,
"text": "v^- ={1\\over {2}}(v-\\sigma v)\\,"
},
{
"math_id": 20,
"text": "V=V^{+}\\oplus V^{-}\\,"
},
{
"math_id": 21,
"text": "V^{+}=\\{v\\in V | \\sigma v = v\\}"
},
{
"math_id": 22,
"text": "V^{-}=\\{v\\in V | \\sigma v = -v\\}\\,"
},
{
"math_id": 23,
"text": "V^+\\,"
},
{
"math_id": 24,
"text": "V^-\\,"
},
{
"math_id": 25,
"text": "K: V^+ \\to V^-\\,"
},
{
"math_id": 26,
"text": "K(t)=it\\,"
},
{
"math_id": 27,
"text": " \\dim_{\\mathbb R}V^+ = \\dim_{\\mathbb R}V^- = \\dim_{\\mathbb C}V\\,"
},
{
"math_id": 28,
"text": "V_{\\mathbb{R}}\\,"
},
{
"math_id": 29,
"text": "\\sigma\\,"
},
{
"math_id": 30,
"text": "\\sigma(V_{\\mathbb{R}})\\subset V_{\\mathbb{R}}\\,"
},
{
"math_id": 31,
"text": "iV_{\\mathbb{R}}\\,"
},
{
"math_id": 32,
"text": "V=V_{\\mathbb{R}} \\oplus iV_{\\mathbb{R}}\\,"
},
{
"math_id": 33,
"text": "V^{\\mathbb{C}}= V_{\\mathbb R} \\otimes_{\\mathbb{R}} \\mathbb{C}\\,"
},
{
"math_id": 34,
"text": "V_{\\mathbb R}\\,"
},
{
"math_id": 35,
"text": "V_{\\mathbb R} \\otimes_{\\mathbb{R}} \\mathbb{C}= V_{\\mathbb{R}} \\oplus iV_{\\mathbb{R}}\\,"
},
{
"math_id": 36,
"text": " V_{\\mathbb R} \\otimes_{\\mathbb{R}} \\mathbb{C} \\to V\\,"
},
{
"math_id": 37,
"text": "\\hat \\sigma:V\\to\\bar V\\,"
},
{
"math_id": 38,
"text": "V\\,"
},
{
"math_id": 39,
"text": "\\bar V\\,"
},
{
"math_id": 40,
"text": "v \\mapsto \\hat\\sigma (v):=\\overline{\\sigma(v)}\\,"
},
{
"math_id": 41,
"text": "V = V_\\mathbb{R} \\oplus i V_\\mathbb{R}."
},
{
"math_id": 42,
"text": "\\mathbb{C}^n"
},
{
"math_id": 43,
"text": "\\mathbb{C}^n = \\mathbb{R}^n \\oplus i\\,\\mathbb{R}^n."
},
{
"math_id": 44,
"text": "v = \\operatorname{Re}\\{v\\}+i\\,\\operatorname{Im}\\{v\\}"
},
{
"math_id": 45,
"text": "\\overline v = \\operatorname{Re}\\{v\\} - i\\,\\operatorname{Im}\\{v\\}"
},
{
"math_id": 46,
"text": "v \\mapsto \\overline v"
},
{
"math_id": 47,
"text": "\\overline{\\overline v} = v,\\quad \\overline{v + w} = \\overline{v} + \\overline{w},\\quad\\text{and}\\quad\n\\overline{\\alpha v} = \\overline\\alpha \\, \\overline{v}."
},
{
"math_id": 48,
"text": "v \\mapsto c(v)"
},
{
"math_id": 49,
"text": "\\operatorname{Re}\\{v\\}=\\frac{1}{2}\\left(v + c(v)\\right),"
},
{
"math_id": 50,
"text": "V_\\mathbb{R} = \\left\\{\\operatorname{Re}\\{v\\} \\mid v \\in V \\right\\}."
},
{
"math_id": 51,
"text": "i"
}
] | https://en.wikipedia.org/wiki?curid=10775645 |
10777748 | Weitzenböck's inequality | In mathematics, Weitzenböck's inequality, named after Roland Weitzenböck, states that for a triangle of side lengths formula_0, formula_1, formula_2, and area formula_3, the following inequality holds:
formula_4
Equality occurs if and only if the triangle is equilateral. Pedoe's inequality is a generalization of Weitzenböck's inequality. The Hadwiger–Finsler inequality is a strengthened version of Weitzenböck's inequality.
Geometric interpretation and proof.
Rewriting the inequality above allows for a more concrete geometric interpretation, which in turn provides an immediate proof.
formula_5
Now the summands on the left side are the areas of equilateral triangles erected over the sides of the original triangle, and hence the inequality states that the sum of the areas of these equilateral triangles is always greater than or equal to three times the area of the original triangle.
formula_6
This can now be shown by replicating the area of the triangle three times within the equilateral triangles. To achieve that, the Fermat point is used to partition the triangle into three obtuse subtriangles, each with a formula_7 angle, and each of those subtriangles is replicated three times within the equilateral triangle next to it. This only works if every angle of the triangle is smaller than formula_7, since otherwise the Fermat point is not located in the interior of the triangle and becomes a vertex instead. However, if one angle is greater than or equal to formula_7, it is possible to replicate the whole triangle three times within the largest equilateral triangle, so the sum of the areas of all equilateral triangles is still at least three times the area of the triangle.
Further proofs.
The proof of this inequality was set as a question in the International Mathematical Olympiad of 1961. Even so, the result is not too difficult to derive using Heron's formula for the area of a triangle:
formula_8
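Before the proofs, a quick numerical sanity check may be helpful. The following Python sketch (an illustration, not part of the original proofs) samples random side lengths, computes the area with Heron's formula above, and confirms that a² + b² + c² ≥ 4√3 Δ holds in every sampled case.
import math, random

def heron(a, b, c):
    s = 0.5 * (a + b + c)
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

for _ in range(10_000):
    a, b, c = sorted(random.uniform(0.1, 10.0) for _ in range(3))
    if a + b <= c:                      # skip side lengths that do not form a triangle
        continue
    delta = heron(a, b, c)
    assert a * a + b * b + c * c >= 4.0 * math.sqrt(3.0) * delta - 1e-9

print("inequality held in all sampled cases; equality needs a = b = c")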
First method.
It can be shown that the area of the inner Napoleon's triangle, which must be nonnegative, is
formula_9
so the expression in parentheses must be greater than or equal to 0.
Second method.
This method assumes no knowledge of inequalities except that all squares are nonnegative.
formula_10
and the result follows immediately by taking the positive square root of both sides. From the first inequality we can also see that equality occurs only when formula_11 and the triangle is equilateral.
Third method.
This proof assumes knowledge of the AM–GM inequality.
formula_12
As we have used the arithmetic-geometric mean inequality, equality only occurs when formula_11 and the triangle is equilateral.
Fourth method.
Write formula_13, so the sum formula_14 and formula_15, i.e. formula_16. But formula_17, so formula_18; combining this with formula_16 gives the desired inequality.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a"
},
{
"math_id": 1,
"text": "b"
},
{
"math_id": 2,
"text": "c"
},
{
"math_id": 3,
"text": "\\Delta"
},
{
"math_id": 4,
"text": "a^2 + b^2 + c^2 \\geq 4\\sqrt{3}\\, \\Delta. "
},
{
"math_id": 5,
"text": "\\frac{\\sqrt{3}}{4}a^2 + \\frac{\\sqrt{3}}{4}b^2 + \\frac{\\sqrt{3}}{4}c^2 \\geq 3\\, \\Delta. "
},
{
"math_id": 6,
"text": "\\Delta_a + \\Delta_b + \\Delta_c \\geq 3\\, \\Delta. "
},
{
"math_id": 7,
"text": "120^\\circ"
},
{
"math_id": 8,
"text": "\n\\begin{align}\n\\Delta & {} =\\frac{1}{4}\\sqrt{(a+b+c)(a+b-c)(b+c-a)(c+a-b)} \\\\[4pt]\n& {} =\\frac{1}{4}\\sqrt{2(a^2 b^2+a^2c^2+b^2c^2)-(a^4+b^4+c^4)}.\n\\end{align}\n"
},
{
"math_id": 9,
"text": "\\frac{\\sqrt{3}}{24}(a^2+b^2+c^2-4\\sqrt{3}\\Delta),"
},
{
"math_id": 10,
"text": "\n\\begin{align}\n{} & (a^2 - b^2)^2 + (b^2 - c^2)^2 + (c^2 - a^2)^2 \\geq 0 \\\\[5pt]\n{} \\iff & 2(a^4+b^4+c^4) - 2(a^2 b^2+a^2c^2+b^2c^2) \\geq 0 \\\\[5pt]\n{} \\iff & \\frac{4(a^4+b^4+c^4)}{3} \\geq \\frac{4(a^2 b^2+a^2c^2+b^2c^2)}{3} \\\\[5pt]\n{} \\iff & \\frac{(a^4+b^4+c^4) + 2(a^2 b^2+a^2c^2+b^2c^2)}{3} \\geq 2(a^2 b^2+a^2c^2+b^2c^2)-(a^4+b^4+c^4) \\\\[5pt]\n{} \\iff & \\frac{(a^2 + b^2 + c^2)^2}{3} \\geq (4\\Delta)^2,\n\\end{align}\n"
},
{
"math_id": 11,
"text": "a = b = c"
},
{
"math_id": 12,
"text": "\n\\begin{align}\n& & (a-b)^2+(b-c)^2+(c-a)^2 & \\geq & & 0 \\\\\n\\iff & & 2a^2+2b^2+2c^2 & \\geq & & 2ab+2bc+2ac \\\\\n\\iff & & 3(a^2+b^2+c^2) & \\geq & & (a + b + c)^2 \\\\\n\\iff & & a^2+b^2+c^2 & \\geq & & \\sqrt{3(a+b+c)\\left(\\frac{a+b+c}{3}\\right)^3} \\\\\n\\Rightarrow & & a^2+b^2+c^2 & \\geq & & \\sqrt{3 (a+b+c)(-a+b+c)(a-b+c)(a+b-c)} \\\\\n\\iff & & a^2+b^2+c^2 & \\geq & & 4 \\sqrt3 \\Delta.\n\\end{align}\n"
},
{
"math_id": 13,
"text": "x=\\cot A, c=\\cot A+\\cot B>0"
},
{
"math_id": 14,
"text": "S=\\cot A+\\cot B+\\cot C=c+\\frac{1-x(c-x)}{c}"
},
{
"math_id": 15,
"text": "cS=c^2-xc+x^2+1=\\left(x-\\frac{c}{2}\\right)^2+\\left(\\frac{c\\sqrt{3}}{2}-1\\right)^2+c\\sqrt{3}\\ge c\\sqrt{3}"
},
{
"math_id": 16,
"text": "S\\ge\\sqrt{3}"
},
{
"math_id": 17,
"text": "\\cot A=\\frac{b^2+c^2-a^2}{4\\Delta}"
},
{
"math_id": 18,
"text": "S=\\frac{a^2+b^2+c^2}{4\\Delta}"
}
] | https://en.wikipedia.org/wiki?curid=10777748 |
10779 | Frequency | Number of occurrences or cycles per unit time
Frequency (symbol "f"), most often measured in "hertz" (symbol: Hz), is the number of occurrences of a repeating event per unit of time. It is also occasionally referred to as "temporal frequency" for clarity and to distinguish it from "spatial frequency". Ordinary frequency is related to "angular frequency" (symbol "ω", with SI unit radian per second) by a factor of 2π. The period (symbol "T") is the interval of time between events, so the period is the reciprocal of the frequency: "T" = 1/"f".
Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals (sound), radio waves, and light.
For example, if a heart beats at a frequency of 120 times per minute (2 hertz), the period—the interval between beats—is half a second (60 seconds divided by 120 beats).
Definitions and units.
For cyclical phenomena such as oscillations, waves, or for examples of simple harmonic motion, the term "frequency" is defined as the number of cycles or repetitions per unit of time. The conventional symbol for frequency is "f"; the Greek letter "ν" (nu) is also used. The "period" "T" is the time taken to complete one cycle of an oscillation or rotation. The frequency and the period are related by the equation
formula_0
The term "temporal frequency" is used to emphasise that the frequency is characterised by the number of occurrences of a repeating event per unit time.
The SI unit of frequency is the hertz (Hz), named after the German physicist Heinrich Hertz by the International Electrotechnical Commission in 1930. It was adopted by the CGPM (Conférence générale des poids et mesures) in 1960, officially replacing the previous name, "cycle per second" (cps). The SI unit for the period, as for all measurements of time, is the second. A traditional unit of frequency used with rotating mechanical devices, where it is termed "rotational frequency", is revolution per minute, abbreviated r/min or rpm. 60 rpm is equivalent to one hertz.
Period versus frequency.
As a matter of convenience, longer and slower waves, such as ocean surface waves, are more typically described by wave period rather than frequency. Short and fast waves, like audio and radio, are usually described by their frequency. Some commonly used conversions are listed below:
Related quantities.
d"N"/d"t"; it is a type of frequency applied to rotational motion.
formula_1 formula_2
The unit of angular frequency is the radian per second (rad/s) but, for discrete-time signals, can also be expressed as radians per sampling interval, which is a dimensionless quantity. Angular frequency is frequency multiplied by 2π.
In wave propagation.
For periodic waves in nondispersive media (that is, media in which the wave speed is independent of frequency), frequency has an inverse relationship to the wavelength, "λ" (lambda). Even in dispersive media, the frequency "f" of a sinusoidal wave is equal to the phase velocity "v" of the wave divided by the wavelength "λ" of the wave:
formula_5
In the special case of electromagnetic waves in vacuum, then "v" = "c", where "c" is the speed of light in vacuum, and this expression becomes
formula_6
When monochromatic waves travel from one medium to another, their frequency remains the same—only their wavelength and speed change.
Measurement.
Measurement of frequency can be done in the following ways:
Counting.
Calculating the frequency of a repeating event is accomplished by counting the number of times that event occurs within a specific time period, then dividing the count by the period. For example, if 71 events occur within 15 seconds the frequency is:
formula_7
If the number of counts is not very large, it is more accurate to measure the time interval for a predetermined number of occurrences, rather than the number of occurrences within a specified time. The latter method introduces a random error into the count of between zero and one count, so on average half a count. This is called "gating error" and causes an average error in the calculated frequency of formula_8, or a fractional error of formula_9 where formula_10 is the timing interval and formula_11 is the measured frequency. This error decreases with frequency, so it is generally a problem at low frequencies where the number of counts "N" is small.
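As a small illustration (the 71-count example above, with the remaining numbers following from it), the following Python sketch computes the frequency from a count over a gate time together with the corresponding gating error.
counts = 71
gate_time = 15.0                       # seconds, the timing interval T_m
f = counts / gate_time                 # measured frequency, about 4.73 Hz

abs_error = 1.0 / (2.0 * gate_time)    # average gating error in hertz
frac_error = abs_error / f             # fractional error 1 / (2 f T_m)
print(f, abs_error, frac_error)        # about 4.73 Hz, 0.033 Hz, 0.7 %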
Stroboscope.
An old method of measuring the frequency of rotating or vibrating objects is to use a stroboscope. This is an intense repetitively flashing light (strobe light) whose frequency can be adjusted with a calibrated timing circuit. The strobe light is pointed at the rotating object and the frequency adjusted up and down. When the frequency of the strobe equals the frequency of the rotating or vibrating object, the object completes one cycle of oscillation and returns to its original position between the flashes of light, so when illuminated by the strobe the object appears stationary. Then the frequency can be read from the calibrated readout on the stroboscope. A downside of this method is that an object rotating at an integer multiple of the strobing frequency will also appear stationary.
Frequency counter.
Higher frequencies are usually measured with a frequency counter. This is an electronic instrument which measures the frequency of an applied repetitive electronic signal and displays the result in hertz on a digital display. It uses digital logic to count the number of cycles during a time interval established by a precision quartz time base. Cyclic processes that are not electrical, such as the rotation rate of a shaft, mechanical vibrations, or sound waves, can be converted to a repetitive electronic signal by transducers and the signal applied to a frequency counter. As of 2018, frequency counters can cover the range up to about 100 GHz. This represents the limit of direct counting methods; frequencies above this must be measured by indirect methods.
Heterodyne methods.
Above the range of frequency counters, frequencies of electromagnetic signals are often measured indirectly utilizing heterodyning (frequency conversion). A reference signal of a known frequency near the unknown frequency is mixed with the unknown frequency in a nonlinear mixing device such as a diode. This creates a heterodyne or "beat" signal at the difference between the two frequencies. If the two signals are close together in frequency the heterodyne is low enough to be measured by a frequency counter. This process only measures the difference between the unknown frequency and the reference frequency. To convert higher frequencies, several stages of heterodyning can be used. Current research is extending this method to infrared and light frequencies (optical heterodyne detection).
Examples.
Light.
Visible light is an electromagnetic wave, consisting of oscillating electric and magnetic fields traveling through space. The frequency of the wave determines its color: 400 THz is red light, 800 THz is violet light, and between these (in the range 400–800 THz) are all the other colors of the visible spectrum. An electromagnetic wave with a frequency less than 400 THz will be invisible to the human eye; such waves are called infrared (IR) radiation. At even lower frequency, the wave is called a microwave, and at still lower frequencies it is called a radio wave. Likewise, an electromagnetic wave with a frequency higher than 800 THz will also be invisible to the human eye; such waves are called ultraviolet (UV) radiation. Even higher-frequency waves are called X-rays, and higher still are gamma rays.
All of these waves, from the lowest-frequency radio waves to the highest-frequency gamma rays, are fundamentally the same, and they are all called electromagnetic radiation. They all travel through vacuum at the same speed (the speed of light), giving them wavelengths inversely proportional to their frequencies.
formula_12
where "c" is the speed of light ("c" in vacuum or less in other media), "f" is the frequency and "λ" is the wavelength.
In dispersive media, such as glass, the speed depends somewhat on frequency, so the wavelength is not quite inversely proportional to frequency.
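The relationship between frequency and wavelength in vacuum can be illustrated with a short Python sketch (an illustration only), converting the red and violet frequencies quoted above into wavelengths.
c = 299_792_458.0                 # m/s, speed of light in vacuum
for f_THz in (400.0, 800.0):      # roughly red and violet light
    wavelength_nm = c / (f_THz * 1e12) * 1e9
    print(f_THz, "THz ->", round(wavelength_nm), "nm")   # about 750 nm and 375 nm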
Sound.
Sound propagates as mechanical vibration waves of pressure and displacement, in air or other substances. In general, frequency components of a sound determine its "color", its timbre. When speaking about the frequency (in singular) of a sound, it means the property that most determines its pitch.
The frequencies an ear can hear are limited to a specific range of frequencies. The audible frequency range for humans is typically given as being between about 20 Hz and 20,000 Hz (20 kHz), though the high frequency limit usually reduces with age. Other species have different hearing ranges. For example, some dog breeds can perceive vibrations up to 60,000 Hz.
In many media, such as air, the speed of sound is approximately independent of frequency, so the wavelength of the sound waves (distance between repetitions) is approximately inversely proportional to frequency.
Line current.
In Europe, Africa, Australia, southern South America, most of Asia, and Russia, the frequency of the alternating current in household electrical outlets is 50 Hz (close to the tone G), whereas in North America and northern South America, the frequency of the alternating current in household electrical outlets is 60 Hz (between the tones B♭ and B; that is, a minor third above the European frequency). The frequency of the 'hum' in an audio recording can show in which of these general regions the recording was made.
Aperiodic frequency.
Aperiodic frequency is the rate of incidence or occurrence of non-cyclic phenomena, including random processes such as radioactive decay. It is expressed with the unit reciprocal second (s−1) or, in the case of radioactivity, with the unit becquerel.
It is defined as a rate, "f" = "N"/Δ"t", involving the number of entities counted or the number of events happened ("N") during a given time duration (Δ"t"); it is a physical quantity of type temporal rate.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f = \\frac{1}{T}."
},
{
"math_id": 1,
"text": "y(t) = \\sin \\theta(t) = \\sin(\\omega t) = \\sin(2 \\mathrm{\\pi} f t)"
},
{
"math_id": 2,
"text": "\\frac{\\mathrm{d} \\theta}{\\mathrm{d} t} = \\omega = 2 \\mathrm{\\pi} f ."
},
{
"math_id": 3,
"text": "y(t) = \\sin \\theta(t,x) = \\sin(\\omega t + kx)"
},
{
"math_id": 4,
"text": "\\frac{\\mathrm{d} \\theta}{\\mathrm{d} x} = k = 2 \\pi \\xi."
},
{
"math_id": 5,
"text": "\nf = \\frac{v}{\\lambda}.\n"
},
{
"math_id": 6,
"text": "\nf = \\frac{c}{\\lambda}.\n"
},
{
"math_id": 7,
"text": "f = \\frac{71}{15 \\,\\text{s}} \\approx 4.73 \\, \\text{Hz}."
},
{
"math_id": 8,
"text": "\\Delta f = \\frac{1}{2T_\\text{m}}"
},
{
"math_id": 9,
"text": "\\frac{\\Delta f}{f} = \\frac{1}{2 f T_\\text{m}}"
},
{
"math_id": 10,
"text": "T_\\text{m}"
},
{
"math_id": 11,
"text": "f"
},
{
"math_id": 12,
"text": "\\displaystyle c=f\\lambda,"
}
] | https://en.wikipedia.org/wiki?curid=10779 |
10782668 | Irregularity of distributions | Mathematical problem
The irregularity of distributions problem, stated first by Hugo Steinhaus, is a numerical problem with a surprising result. The problem is to find "N" numbers, formula_0, all between 0 and 1, for which the following conditions hold: the first two numbers must lie in different halves of the unit interval, the first three numbers in different thirds, the first four numbers in different fourths, and so on, up to the first "N" numbers, which must lie in different "N"ths.
Mathematically, we are looking for a sequence of real numbers
formula_0
such that for every "n" ∈ {1, ..., "N"} and every "k" ∈ {1, ..., "n"} there is some "i" ∈ {1, ..., "n"} such that
formula_1
Solution.
The surprising result is that there is a solution up to "N" = 17, but starting at "N" = 18 and above it is impossible. A possible solution for "N" ≤ 17 is shown diagrammatically on the right; numerically it is as follows:
formula_2
In this example, considering for instance the first 5 numbers, we have
formula_3
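The conditions are easy to check mechanically. The following Python sketch (an illustration, not part of the original problem statement) verifies that the 17 numbers above satisfy the requirement for every "n" and "k".
xs = [0.029, 0.971, 0.423, 0.71, 0.27, 0.542, 0.852, 0.172, 0.62,
      0.355, 0.777, 0.1, 0.485, 0.905, 0.218, 0.667, 0.324]

def satisfies_steinhaus(xs):
    # for every n <= N and every k <= n, some x_i with i <= n lies in [(k-1)/n, k/n)
    for n in range(1, len(xs) + 1):
        first_n = xs[:n]
        for k in range(1, n + 1):
            if not any((k - 1) / n <= x < k / n for x in first_n):
                return False
    return True

print(satisfies_steinhaus(xs))   # True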
Mieczysław Warmus concluded that 768 (1536, counting symmetric solutions separately) distinct sets of intervals satisfy the conditions for "N" = 17. | [
{
"math_id": 0,
"text": "x_1,\\ldots,x_N"
},
{
"math_id": 1,
"text": "\\frac{k-1}{n} \\leq x_i < \\frac{k}{n}."
},
{
"math_id": 2,
"text": "\n\\begin{align}\nx_{1} & = 0.029 \\\\\nx_{2} & = 0.971 \\\\\nx_{3} & = 0.423 \\\\\nx_{4} & = 0.71 \\\\\nx_{5} & = 0.27 \\\\\nx_{6} & = 0.542 \\\\\nx_{7} & = 0.852 \\\\\nx_{8} & = 0.172 \\\\\nx_{9} & = 0.62 \\\\\nx_{10} & = 0.355 \\\\\nx_{11} & = 0.777 \\\\\nx_{12} & = 0.1 \\\\\nx_{13} & = 0.485 \\\\\nx_{14} & = 0.905 \\\\\nx_{15} & = 0.218 \\\\\nx_{16} & = 0.667 \\\\\nx_{17} & = 0.324\n\\end{align}\n"
},
{
"math_id": 3,
"text": "0 < x_1 < \\frac{1}{5} < x_5 < \\frac{2}{5} < x_3 < \\frac{3}{5} < x_4 < \\frac{4}{5} < x_2 < 1."
}
] | https://en.wikipedia.org/wiki?curid=10782668 |
10783043 | Fagnano's problem | Optimisation problem in triangle geometry
In geometry, Fagnano's problem is an optimization problem that was first stated by Giovanni Fagnano in 1775:
<templatestyles src="Template:Blockquote/styles.css" />For a given acute triangle determine the inscribed triangle of minimal perimeter.
The solution is the orthic triangle, with vertices at the base points of the altitudes of the given triangle.
Solution.
The orthic triangle, with vertices at the base points of the altitudes of the given triangle, has the smallest perimeter of all triangles inscribed into an acute triangle, hence it is the solution of Fagnano's problem. Fagnano's original proof used calculus methods and an intermediate result given by his father Giulio Carlo de' Toschi di Fagnano. Later however several geometric proofs were discovered as well, amongst others by Hermann Schwarz and Lipót Fejér. These proofs use the geometrical properties of reflections to determine some minimal path representing the perimeter.
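A direct numerical check is also possible. The following Python sketch (an illustration, with an arbitrarily chosen acute triangle) computes the orthic triangle from the feet of the altitudes and confirms that randomly chosen inscribed triangles never have a smaller perimeter.
import random

def foot(p, a, b):
    # foot of the perpendicular from point p onto the line through a and b
    ax, ay = a; bx, by = b; px, py = p
    t = ((px - ax) * (bx - ax) + (py - ay) * (by - ay)) / ((bx - ax) ** 2 + (by - ay) ** 2)
    return (ax + t * (bx - ax), ay + t * (by - ay))

def perimeter(p, q, r):
    d = lambda u, v: ((u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2) ** 0.5
    return d(p, q) + d(q, r) + d(r, p)

def point_on(u, v):
    t = random.random()
    return (u[0] + t * (v[0] - u[0]), u[1] + t * (v[1] - u[1]))

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.5, 3.0)                 # an acute triangle
orthic = (foot(A, B, C), foot(B, C, A), foot(C, A, B))       # feet of the three altitudes
best = perimeter(*orthic)

for _ in range(10_000):
    candidate = (point_on(B, C), point_on(C, A), point_on(A, B))
    assert perimeter(*candidate) >= best - 1e-9

print("orthic perimeter:", best)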
Physical principles.
A solution from physics is found by imagining putting a rubber band that follows Hooke's Law around the three sides of a triangular frame formula_0, such that it could slide around smoothly. Then the rubber band would end up in a position that minimizes its elastic energy, and therefore minimize its total length. This position gives the minimal perimeter triangle.
The tension inside the rubber band is the same everywhere in the rubber band, so in its resting position, we have, by Lami's theorem, formula_1
Therefore, this minimal triangle is the orthic triangle. | [
{
"math_id": 0,
"text": "ABC"
},
{
"math_id": 1,
"text": "\\angle bcA = \\angle acB, \\angle caB = \\angle baC, \\angle abC = \\angle cbA"
}
] | https://en.wikipedia.org/wiki?curid=10783043 |
10784136 | Negative base | Non-standard positional numeral system
A negative base (or negative radix) may be used to construct a non-standard positional numeral system. Like other place-value systems, each position holds multiples of the appropriate power of the system's base; but that base is negative—that is to say, the base b is equal to −r for some natural number r (r ≥ 2).
Negative-base systems can accommodate all the same numbers as standard place-value systems, but both positive and negative numbers are represented without the use of a minus sign (or, in computer representation, a sign bit); this advantage is countered by an increased complexity of arithmetic operations. The need to store the information normally contained by a negative sign often results in a negative-base number being one digit longer than its positive-base equivalent.
The common names for negative-base positional numeral systems are formed by prefixing "nega-" to the name of the corresponding positive-base system; for example, negadecimal (base −10) corresponds to decimal (base 10), negabinary (base −2) to binary (base 2), negaternary (base −3) to ternary (base 3), and negaquaternary (base −4) to quaternary (base 4).
Example.
Consider what is meant by the representation 12,243 in the negadecimal system, whose base b is −10.
The representation 12,243 (which is intended to be negadecimal notation) is equivalent to 8,163 in decimal notation, because 10,000 + (−2,000) + 200 + (−40) + 3 = 8,163.
On the other hand, −8,163 in decimal would be written 9,977 in negadecimal.
History.
Negative numerical bases were first considered by Vittorio Grünwald in an 1885 monograph published in "Giornale di Matematiche di Battaglini". Grünwald gave algorithms for performing addition, subtraction, multiplication, division, root extraction, divisibility tests, and radix conversion. Negative bases were later mentioned in passing by A. J. Kempner in 1936 and studied in more detail by Zdzisław Pawlak and A. Wakulicz in 1957.
Negabinary was implemented in the early Polish computer BINEG (and UMC), built 1957–59, based on ideas by Z. Pawlak and A. Lazarkiewicz from the Mathematical Institute in Warsaw. Implementations since then have been rare.
zfp, a floating-point compression algorithm from the Lawrence Livermore National Laboratory, uses negabinary to store numbers. According to zfp's documentation:
Unlike sign-magnitude representations, the leftmost one-bit in negabinary simultaneously encodes the sign and approximate magnitude of a number. Moreover, unlike two’s complement, numbers small in magnitude have many leading zeros in negabinary regardless of sign, which facilitates encoding.
Notation and use.
Denoting the base as −r, every integer a can be written uniquely as
formula_0
where each digit dk is an integer from 0 to "r" − 1 and the leading digit dn > 0 (unless "n" = 0). The base −r expansion of a is then given by the string "d""n""d""n"−1..."d"1"d"0.
Negative-base systems may thus be compared to signed-digit representations, such as balanced ternary, where the radix is positive but the digits are taken from a partially negative range. (In the table below the digit of value −1 is written as the single character T.)
Some numbers have the same representation in base −r as in base r. For example, the numbers from 100 to 109 have the same representations in decimal and negadecimal. Similarly,
formula_1
and is represented by 10001 in binary and 10001 in negabinary.
Some numbers with their expansions in a number of positive and corresponding negative bases are:
Note that, with the exception of nega balanced ternary, the base −r expansions of negative integers have an even number of digits, while the base −r expansions of the non-negative integers have an odd number of digits.
Calculation.
The base −r expansion of a number can be found by repeated division by −r, recording the non-negative remainders in formula_2, and concatenating those remainders, starting with the last. Note that if "a" / "b" is "c" with remainder "d", then "bc" + "d" = "a" and therefore "d" = "a" − "bc". To arrive at the correct conversion, the value for c must be chosen such that d is non-negative and minimal. For the fourth line of the following example this means that
formula_3
has to be chosen — and not formula_4 nor formula_5
For example, to convert 146 in decimal to negaternary:
formula_6
Reading the remainders backward we obtain the negaternary representation of 14610: 21102–3.
Proof: -3 · (-3 · (-3 · (-3 · ( 2 ) + 1 ) + 1 ) + 0 ) + 2 = (((2 · (–3) + 1) · (–3) + 1) · (–3) + 0) · (–3) + 2 = 14610.
Reading the remainders "forward" we can obtain the negaternary least-significant-digit-first representation.
Proof: 2 + ( 0 + ( 1 + ( 1 + ( 2 ) · -3 ) · -3) · -3 ) · -3 = 14610.
Note that in most programming languages, the result (in integer arithmetic) of dividing a negative number by a negative number is rounded towards 0, usually leaving a negative remainder. In such a case we have "a" = (−"r")"c" + "d" = (−"r")"c" + "d" − "r" + "r" = (−"r")("c" + 1) + ("d" + "r"). Because |"d"| < "r", ("d" + "r") is the positive remainder. Therefore, to get the correct result in such case, computer implementations of the above algorithm should add 1 and r to the quotient and remainder respectively.
Example implementation code.
To negabinary.
C#.
static string ToNegabinary(int val)
{
    string result = string.Empty;

    while (val != 0)
    {
        int remainder = val % -2;
        val = val / -2;

        if (remainder < 0)
        {
            remainder += 2;
            val += 1;
        }

        result = remainder.ToString() + result;
    }

    return result;
}
C++.
#include <bitset>
#include <climits>
#include <cstdlib>

auto to_negabinary(int value)
{
    std::bitset<sizeof(int) * CHAR_BIT> result;
    std::size_t bit_position = 0;

    while (value != 0)
    {
        const auto div_result = std::div(value, -2);
        if (div_result.rem < 0)
            value = div_result.quot + 1;   // compensate for making the remainder non-negative
        else
            value = div_result.quot;
        result.set(bit_position, div_result.rem != 0);
        ++bit_position;
    }

    return result;
}
To negaternary.
C#.
static string Negaternary(int val)
{
    string result = string.Empty;

    while (val != 0)
    {
        int remainder = val % -3;
        val = val / -3;

        if (remainder < 0)
        {
            remainder += 3;
            val += 1;
        }

        result = remainder.ToString() + result;
    }

    return result;
}
Python.
def negaternary(i: int) -> str:
    """Decimal to negaternary."""
    if i == 0:
        digits = ["0"]
    else:
        digits = []
        while i != 0:
            i, remainder = divmod(i, -3)
            if remainder < 0:
                i, remainder = i + 1, remainder + 3
            digits.append(str(remainder))
    return "".join(digits[::-1])

>>> negaternary(1000)
'2212001'
Common Lisp.
(defun negaternary (i)   ; defun header reconstructed; the function name is assumed
  (if (zerop i)
      "0"
      (let ((digits "")
            (rem 0))
        (loop while (not (zerop i)) do
              (progn
                (multiple-value-setq (i rem) (truncate i -3))
                (when (minusp rem)
                  (incf i)
                  (incf rem 3))
                (setf digits (concatenate 'string (write-to-string rem) digits))))
        digits)))
To any negative base.
Java.
import java.util.ArrayList;
import java.util.Collections;

public ArrayList<Integer> negativeBase(int input, int base) {
    ArrayList<Integer> result_rev = new ArrayList<>();
    int number = input;
    while (number != 0) {
        int i = number % base;
        number /= base;
        if (i < 0) {
            i += Math.abs(base);   // force a non-negative digit ...
            number++;              // ... and compensate in the quotient
        }
        result_rev.add(i);         // least significant digit first
    }
    Collections.reverse(result_rev);   // most significant digit first
    return result_rev;
}
The above gives the result as an ArrayList of integers, so that the code does not have to decide how to represent digits for a base smaller than −10. To display the result as a string, one can choose a mapping of digit values to characters. For example:
import java.util.ArrayList;
import java.util.stream.Collectors;

final String alphabet = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ@_";

public String toBaseString(ArrayList<Integer> lst) {
    // Would throw an exception if a digit is beyond the 64 possible characters.
    return lst.stream()
              .map(n -> String.valueOf(alphabet.charAt(n)))
              .collect(Collectors.joining(""));
}
AutoLisp.
;; NUM is any number.
;; BAZ is any number in the interval [-10, -2]. (This is forced by how we do string notation.)
;; NUM and BAZ will be truncated to an integer if they're floats (e.g. 14.25
;; will be truncated to 14, -123456789.87 to -123456789, etc.).
(defun negabase (num baz / dig rst)   ; defun header reconstructed; the function name is assumed
  (if (and (numberp num)
           (numberp baz)
           (<= (fix baz) -2)
           (> (fix baz) -11))
    (progn
      (setq baz (float (fix baz))
            num (float (fix num))
            dig (if (= num 0) "0" ""))
      (while (/= num 0)
        (setq rst (- num (* baz (setq num (fix (/ num baz))))))
        (if (minusp rst)
          (setq num (1+ num)
                rst (- rst baz)))
        (setq dig (strcat (itoa (fix rst)) dig)))
      dig)
    (progn
      (prompt
        (cond
          ((and (not (numberp num))
                (not (numberp baz)))
            "\nWrong number and negabase.")
          ((not (numberp num))
            "\nWrong number.")
          ((not (numberp baz))
            "\nWrong negabase.")
          (t
            "\nNegabase must be inside [-10 -2] interval.")))
      (princ))))
Shortcut calculation.
The following algorithms assume that the input is available as a bitstring in standard binary (here a 32-bit unsigned integer) and that the result is to be read as a bitstring in the negative base.
To negabinary.
The conversion to "negabinary" (base −2; digits in formula_7) allows a remarkable shortcut
(C implementation):
#include <stdint.h>

uint32_t toNegaBinary(uint32_t value)   // input in standard binary
{
    uint32_t Schroeppel2 = 0xAAAAAAAA;             // = 2/3*((2*2)^16-1) = ...1010
    return (value + Schroeppel2) ^ Schroeppel2;    // eXclusive OR
}
// resulting unsigned int to be interpreted as string of elements ε {0,1} (bits)
JavaScript port for the same shortcut calculation:
function toNegaBinary(value) {
    const Schroeppel2 = 0xAAAAAAAA;
    // Convert as in C, then convert to a NegaBinary String
    return ((value + Schroeppel2) ^ Schroeppel2).toString(2);
}
The algorithm was first described by Schroeppel in HAKMEM (1972) as item 128. Wolfram MathWorld documents a version in the Wolfram Language by D. Librik (Szudzik).
To negaquaternary.
The conversion to "negaquaternary" (base −4; digits in formula_11) allows a similar shortcut (C implementation):
#include <stdint.h>

uint32_t toNegaQuaternary(uint32_t value)   // input in standard binary
{
    uint32_t Schroeppel4 = 0xCCCCCCCC;             // = 4/5*((2*4)^8-1) = ...11001100 = ...3030
    return (value + Schroeppel4) ^ Schroeppel4;    // eXclusive OR
}
// resulting unsigned int to be interpreted as string of elements ε {0,1,2,3} (pairs of bits)
JavaScript port for the same shortcut calculation:
function toNegaQuaternary(value) {
    const Schroeppel4 = 0xCCCCCCCC;
    // Convert as in C, then convert to NegaQuaternary String
    return ((value + Schroeppel4) ^ Schroeppel4).toString(4);
}
Arithmetic operations.
The following describes the arithmetic operations for negabinary; calculations in larger bases are similar.
Addition.
Adding negabinary numbers proceeds bitwise, starting from the least significant bits; the bits from each addend are summed with the (balanced ternary) carry from the previous bit (0 at the LSB). This sum is then decomposed into an output bit and carry for the next iteration as show in the table:
The second row of this table, for instance, expresses the fact that −1 = 1 + 1 × −2; the fifth row says 2 = 0 + −1 × −2; etc.
As an example, to add 1010101−2 (1 + 4 + 16 + 64 = 85) and 1110100−2 (4 + 16 − 32 + 64 = 52),
Carry: 1 −1 0 −1 1 −1 0 0 0
First addend: 1 0 1 0 1 0 1
Second addend: 1 1 1 0 1 0 0 +
Number: 1 −1 2 0 3 −1 2 0 1
Bit (result): 1 1 0 0 1 1 0 0 1
Carry: 0 1 −1 0 −1 1 −1 0 0
so the result is 110011001−2 (1 − 8 + 16 − 128 + 256 = 137).
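A short Python sketch (an illustration, not part of the original description) implements the column-by-column addition with the carry rules from the table above and reproduces this example.
def add_negabinary(a, b):
    # digit strings, most significant digit first; each column sum s is split as
    # s = bit + carry * (-2) with bit in {0, 1} and carry in {-1, 0, 1}
    a, b = a[::-1], b[::-1]
    result, carry = [], 0
    for i in range(max(len(a), len(b)) + 2):
        s = carry
        if i < len(a): s += int(a[i])
        if i < len(b): s += int(b[i])
        bit = s % 2
        carry = (bit - s) // 2
        result.append(str(bit))
    return "".join(result)[::-1].lstrip("0") or "0"

print(add_negabinary("1010101", "1110100"))   # 110011001, i.e. 85 + 52 = 137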
Another method.
While adding two negabinary numbers, every time a carry is generated an extra carry should be propagated to next bit. Consider same example as above
Extra carry: 1 1 1 0 1 0 0 0
Carry: 0 1 1 0 1 0 0 0
First addend: 1 0 1 0 1 0 1
Second addend: 1 1 1 0 1 0 0 +
Answer: 1 1 0 0 1 1 0 0 1
Negabinary full adder.
A full adder circuit can be designed to add numbers in negabinary. The following logic is used to calculate the sum and carries:
formula_12
formula_13
formula_14
Incrementing negabinary numbers.
Incrementing a negabinary number can be done by using the following formula:
formula_15
Subtraction.
To subtract, multiply each bit of the second number by −1, and add the numbers, using the same table as above.
As an example, to compute 1101001−2 (1 − 8 − 32 + 64 = 25) minus 1110100−2 (4 + 16 − 32 + 64 = 52),
Carry: 0 1 −1 1 0 0 0
First number: 1 1 0 1 0 0 1
Second number: −1 −1 −1 0 −1 0 0 +
Number: 0 1 −2 2 −1 0 1
Bit (result): 0 1 0 0 1 0 1
Carry: 0 0 1 −1 1 0 0
so the result is 100101−2 (1 + 4 −32 = −27).
Unary negation, −"x", can be computed as binary subtraction from zero, 0 − "x".
Multiplication and division.
Shifting to the left multiplies by −2, shifting to the right divides by −2.
To multiply, multiply like normal decimal or binary numbers, but using the negabinary rules for adding the carry, when adding the numbers.
First number: 1 1 1 0 1 1 0
Second number: 1 0 1 1 0 1 1 ×
1 1 1 0 1 1 0
1 1 1 0 1 1 0
1 1 1 0 1 1 0
1 1 1 0 1 1 0
1 1 1 0 1 1 0 +
Carry: 0 −1 0 −1 −1 −1 −1 −1 0 −1 0 0
Number: 1 0 2 1 2 2 2 3 2 0 2 1 0
Bit (result): 1 0 0 1 0 0 0 1 0 0 0 1 0
Carry: 0 −1 0 −1 −1 −1 −1 −1 0 −1 0 0
For each column, add "carry" to "number", and divide the sum by −2, to get the new "carry", and the resulting bit as the remainder.
Comparing negabinary numbers.
It is possible to compare negabinary numbers by slightly adjusting a normal unsigned binary comparator. When comparing the numbers formula_17 and formula_18, invert each odd positioned bit of both numbers.
After this, compare formula_17 and formula_18 using a standard unsigned comparator.
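A brief Python sketch (an illustration only) checks this rule: flipping the odd-positioned bits adds the same constant to every number's unsigned value, so unsigned comparison of the adjusted bit strings orders the negabinary values correctly.
import random

def negabinary_value(s):
    return sum(int(bit) * (-2) ** i for i, bit in enumerate(reversed(s)))

def comparison_key(s, width=16):
    s = s.zfill(width)
    flipped = "".join(bit if (width - 1 - i) % 2 == 0 else str(1 - int(bit))
                      for i, bit in enumerate(s))
    return int(flipped, 2)            # unsigned value after flipping odd-positioned bits

for _ in range(1000):
    a = format(random.randrange(1 << 12), "b")
    b = format(random.randrange(1 << 12), "b")
    assert (negabinary_value(a) < negabinary_value(b)) == (comparison_key(a) < comparison_key(b))
print("comparisons agree")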
Fractional numbers.
Base −r representation may of course be carried beyond the radix point, allowing the representation of non-integer numbers.
As with positive-base systems, terminating representations correspond to fractions where the denominator is a power of the base; repeating representations correspond to other rationals, and for the same reason.
Non-unique representations.
Unlike positive-base systems, where integers and terminating fractions have non-unique representations (for example, in decimal 0.999... = 1) in negative-base systems the integers have only a single representation. However, there do exist rationals with non-unique representations. For the digits {0, 1, ..., t} with formula_19 the biggest digit and
formula_20
we have
formula_21 as well as
formula_22
So every number formula_23 with a terminating fraction formula_24 added has two distinct representations.
For example, in negaternary, i.e. formula_25 and formula_26, there is
formula_27.
Such non-unique representations can be found by considering the largest and smallest possible representations with integer parts 0 and 1 respectively, and then noting that they are equal. (Indeed, this works with any integer-base system.) The rationals thus non-uniquely expressible are those of form
formula_28
with formula_29
Imaginary base.
Just as using a negative base allows the representation of negative numbers without an explicit negative sign, using an imaginary base allows the representation of Gaussian integers. Donald Knuth proposed the quater-imaginary base (base 2i) in 1955.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a = \\sum_{i=0}^{n}d_{i}(-r)^{i}"
},
{
"math_id": 1,
"text": "17=2^4+2^0=(-2)^4+(-2)^0"
},
{
"math_id": 2,
"text": "\\{0, 1, \\ldots, r-1\\}"
},
{
"math_id": 3,
"text": "-5 \\div (-3) = 2 ~\\mathrm{remainder}~ 1"
},
{
"math_id": 4,
"text": "= 3 ~\\mathrm{remainder}~ 4"
},
{
"math_id": 5,
"text": "= 1 ~\\mathrm{remainder}~ -\\!2."
},
{
"math_id": 6,
"text": "\\begin{array}{rr}\n 146 \\div (-3) = & -48 ~\\mathrm{remainder}~ 2 \\\\\n -48 \\div (-3) = & 16 ~\\mathrm{remainder}~ 0 \\\\\n 16 \\div (-3) = & -5 ~\\mathrm{remainder}~ 1 \\\\\n -5 \\div (-3) = & 2 ~\\mathrm{remainder}~ 1 \\\\\n 2 \\div (-3) = & 0 ~\\mathrm{remainder}~ 2\n\\end{array}"
},
{
"math_id": 7,
"text": "\\{0, 1\\}"
},
{
"math_id": 8,
"text": "D"
},
{
"math_id": 9,
"text": "D = \\{0, |b|-1\\}"
},
{
"math_id": 10,
"text": "b\\in \\{-2,-4\\}"
},
{
"math_id": 11,
"text": "\\{0, 1, 2, 3\\}"
},
{
"math_id": 12,
"text": "s_i = a_i \\oplus b_i \\oplus c_i^+ \\oplus c_i^-"
},
{
"math_id": 13,
"text": "c_{i+1}^+ = \\overline{a_i}\\overline{b_i}\\overline{c_i^+}c_i^-"
},
{
"math_id": 14,
"text": "c_{i+1}^- = a_i b_i \\overline{c_i^-} + (a_i \\oplus b_i)c_i^+ \\overline{c_i^-}"
},
{
"math_id": 15,
"text": "2x \\oplus ((2x \\oplus x) + 1)"
},
{
"math_id": 16,
"text": "2x"
},
{
"math_id": 17,
"text": "A"
},
{
"math_id": 18,
"text": "B"
},
{
"math_id": 19,
"text": "\\mathbf{t}:=r\\!-\\!\\!1=-b\\!-\\!\\!1"
},
{
"math_id": 20,
"text": "T:=0.\\overline{01}_b = \\sum_{i=1}^{\\infty} b^{-2i} = \\frac1{b^2-1} = \\frac1{r^2-1}"
},
{
"math_id": 21,
"text": "0.\\overline{0\\mathbf{t}}_b=\\mathbf{t}T=\\frac{r\\!-\\!\\!1}{r^2-1}=\\frac1{r+1}"
},
{
"math_id": 22,
"text": "1.\\overline{\\mathbf{t}0}_b = 1+\\mathbf{t}bT = \\frac{(r^2-1)+(r\\!-\\!\\!1)b}{r^2-1}= \\frac1{r+1}."
},
{
"math_id": 23,
"text": "\\frac1{r+1}+z"
},
{
"math_id": 24,
"text": "z\\in \\Z r^{\\Z}"
},
{
"math_id": 25,
"text": "b=-3"
},
{
"math_id": 26,
"text": "\\mathbf{t}=2"
},
{
"math_id": 27,
"text": "1.\\overline{02}_{(-3)} = \\frac{5}{4} = 2.\\overline{20}_{(-3)}"
},
{
"math_id": 28,
"text": "\\frac{z(r+1) + 1}{b^i(r + 1)}"
},
{
"math_id": 29,
"text": "z,i\\in \\Z."
}
] | https://en.wikipedia.org/wiki?curid=10784136 |
1078521 | Shapiro time delay | Time delay caused by space-time distortion near massive objects
The Shapiro time delay effect, or gravitational time delay effect, is one of the four classic Solar System tests of general relativity. Radar signals passing near a massive object take slightly longer to travel to a target and longer to return than they would if the mass of the object were not present. The time delay is caused by time dilation, which increases the time it takes light to travel a given distance from the perspective of an outside observer. In a 1964 article entitled "Fourth Test of General Relativity", Irwin Shapiro wrote:
Because, according to the general theory, the speed of a light wave depends on the strength of the gravitational potential along its path, these time delays should thereby be increased by almost 2×10−4 sec when the radar pulses pass near the sun. Such a change, equivalent to 60 km in distance, could now be measured over the required path length to within about 5 to 10% with presently obtainable equipment.
Throughout this article discussing the time delay, Shapiro uses "c" as the speed of light and calculates the time delay of the passage of light waves or rays over finite coordinate distance according to a Schwarzschild solution to the Einstein field equations.
History.
The time delay effect was first predicted in 1964, by Irwin Shapiro. Shapiro proposed an observational test of his prediction: bounce radar beams off the surface of Venus and Mercury and measure the round-trip travel time. When the Earth, Sun, and Venus are most favorably aligned, Shapiro showed that the expected time delay, due to the presence of the Sun, of a radar signal traveling from the Earth to Venus and back, would be about 200 microseconds, well within the limitations of 1960s-era technology.
The first tests, performed in 1966 and 1967 using the MIT Haystack radar antenna, were successful, matching the predicted amount of time delay. The experiments have been repeated many times since then, with increasing accuracy.
Calculating time delay.
In a nearly static gravitational field of moderate strength (say, of stars and planets, but not one of a black hole or close binary system of neutron stars) the effect may be considered as a special case of gravitational time dilation. The measured elapsed time of a light signal in a gravitational field is longer than it would be without the field, and for moderate-strength nearly static fields the difference is directly proportional to the classical gravitational potential, precisely as given by standard gravitational time dilation formulas.
Time delay due to light traveling around a single mass.
Shapiro's original formulation was derived from the Schwarzschild solution and included terms to the first order in solar mass (formula_0) for a proposed Earth-based radar pulse bouncing off an inner planet and returning passing close to the Sun:
formula_1
where formula_2 is the distance of closest approach of the radar wave to the center of the Sun, formula_3 is the distance along the line of flight from the Earth-based antenna to the point of closest approach to the Sun, and formula_4 represents the distance along the path from this point to the planet. The right-hand side of this equation is primarily due to the variable speed of the light ray; the contribution from the change in path, being of second order in formula_0, is negligible. formula_5 is the Landau symbol indicating the order of the error term.
For a signal going around a massive object, the time delay can be calculated as the following:
formula_6
Here formula_7 is the unit vector pointing from the observer to the source, and formula_8 is the unit vector pointing from the observer to the gravitating mass formula_0. The dot denotes the usual Euclidean dot product.
Using formula_9, this formula can also be written as
formula_10
which is a fictive extra distance the light has to travel. Here formula_11 is the Schwarzschild radius.
In PPN parameters,
formula_12
which is twice the Newtonian prediction (with formula_13).
The doubling of the Shapiro factor can be explained by the fact that there is not only the gravitational time dilation, but also the radial stretching of space, both of which contribute equally in general relativity for the time delay as they also do for the deflection of light.
formula_14
formula_15
formula_16
formula_17
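The order of magnitude quoted by Shapiro can be checked with a few lines of Python. The sketch below keeps only the well-known leading logarithmic term of the round-trip excess delay, 4GM/c3 · ln(4"x""e""x""p"/"d"2), and uses rounded values for the solar gravitational parameter and the Earth and Venus orbital radii; these numbers are illustrative assumptions, not data from any particular experiment.

import math

# Rough order-of-magnitude check of Shapiro's ~2e-4 s prediction for an
# Earth-Venus radar echo at superior conjunction. Only the leading logarithmic
# term of the round-trip excess delay is kept; all constants are rounded.
GM_sun = 1.327e20        # m^3/s^2
c = 2.998e8              # m/s
x_e = 1.496e11           # Sun-Earth distance, m
x_p = 1.082e11           # Sun-Venus distance, m
d = 6.96e8               # closest approach, about one solar radius, m

delay = 4 * GM_sun / c**3 * math.log(4 * x_e * x_p / d**2)
print(f"round-trip Shapiro delay ~ {delay * 1e6:.0f} microseconds")   # roughly 230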
Interplanetary probes.
Shapiro delay must be considered along with ranging data when trying to accurately determine the distance to interplanetary probes such as the Voyager and Pioneer spacecraft.
Shapiro delay of neutrinos and gravitational waves.
From the nearly simultaneous observations of neutrinos and photons from SN 1987A, the Shapiro delay for high-energy neutrinos must be the same as that for photons to within 10%, consistent with recent estimates of the neutrino mass, which imply that those neutrinos were moving at very close to the speed of light. After the direct detection of gravitational waves in 2016, the one-way Shapiro delay was calculated by two groups and is about 1800 days. In general relativity and other metric theories of gravity, though, the Shapiro delay for gravitational waves is expected to be the same as that for light and neutrinos. However, in theories such as tensor–vector–scalar gravity and other modified GR theories, which reproduce Milgrom's law and avoid the need for dark matter, the Shapiro delay for gravitational waves is much smaller than that for neutrinos or photons. The observed 1.7-second difference in arrival times seen between gravitational wave and gamma ray arrivals from neutron star merger GW170817 was far less than the estimated Shapiro delay of about 1000 days. This rules out a class of modified models of gravity that dispense with the need for dark matter.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "\\Delta t \\approx \\frac{4GM}{c^3} \\left(\\ln\\left[\\frac{x_p + (x_p^2 + d^2)^{1/2}}{-x_e + (x_e^2 + d^2)^{1/2}} \\right] - \\frac{1}{2}\\left[\\frac{x_p}{(x_p^2 + d^2)^{1/2}} + \\frac{2x_e+x_p}{(x_e^2 + d^2)^{1/2}}\\right]\\right) + \\mathcal{O}\\left(\\frac{G^2M^2}{c^5 d}\\right),"
},
{
"math_id": 2,
"text": "d"
},
{
"math_id": 3,
"text": "x_e"
},
{
"math_id": 4,
"text": "x_p"
},
{
"math_id": 5,
"text": "\\mathcal{O}"
},
{
"math_id": 6,
"text": "\\Delta t = -\\frac{2GM}{c^3} \\ln(1 - \\mathbf{R}\\cdot\\mathbf{x})."
},
{
"math_id": 7,
"text": "\\mathbf{R}"
},
{
"math_id": 8,
"text": "\\mathbf{x}"
},
{
"math_id": 9,
"text": "\\Delta x = c \\Delta t"
},
{
"math_id": 10,
"text": "\\Delta x = -R_s \\ln(1 - \\mathbf{R}\\cdot\\mathbf{x}),"
},
{
"math_id": 11,
"text": "R_s = \\frac{2GM}{c^2}"
},
{
"math_id": 12,
"text": "\\Delta t = -(1 + \\gamma) \\frac{R_s}{2c} \\ln(1 - \\mathbf{R}\\cdot\\mathbf{x}),"
},
{
"math_id": 13,
"text": "\\gamma = 0"
},
{
"math_id": 14,
"text": "\\tau = t\\sqrt{1-\\tfrac{R_s}{r}}"
},
{
"math_id": 15,
"text": "c' = c\\sqrt{1-\\tfrac{R_s}{r}}"
},
{
"math_id": 16,
"text": "s' = \\frac{s}{\\sqrt{1-\\tfrac{R_s}{r}}}"
},
{
"math_id": 17,
"text": "T = \\frac{s'}{c'} = \\frac{s}{c\\left(1-\\tfrac{R_s}{r}\\right)} "
}
] | https://en.wikipedia.org/wiki?curid=1078521 |
1078637 | Herbrand–Ribet theorem | Result on the class group of certain number fields, strengthening Ernst Kummer's theorem
In mathematics, the Herbrand–Ribet theorem is a result on the class group of certain number fields. It is a strengthening of Ernst Kummer's theorem to the effect that the prime "p" divides the class number of the cyclotomic field of "p"-th roots of unity if and only if "p" divides the numerator of the "n"-th Bernoulli number "B""n"
for some "n", 0 < "n" < "p" − 1. The Herbrand–Ribet theorem specifies what, in particular, it means when "p" divides such an "B""n".
Statement.
The Galois group Δ of the cyclotomic field of "p"th roots of unity for an odd prime "p", Q(ζ) with ζ"p" = 1, consists of the "p" − 1 group elements σ"a", where formula_0. As a consequence of Fermat's little theorem, in the ring of "p"-adic integers formula_1 we have "p" − 1 roots of unity, each of which is congruent mod "p" to some number in the range 1 to "p" − 1; we can therefore define a Dirichlet character ω (the Teichmüller character) with values in formula_1 by requiring that for "n" relatively prime to "p", ω("n") be congruent to "n" modulo "p". The "p" part of the class group is a formula_1-module (since it is "p"-primary), hence a module over the group ring formula_2. We now define idempotent elements of the group ring for each "n" from 1 to "p" − 1, as
formula_3
It is easy to see that formula_4 and formula_5 where formula_6 is the Kronecker delta. This allows us to break up the "p" part of the ideal class group "G" of Q(ζ) by means of the idempotents; if "G" is the "p"-primary part of the ideal class group, then, letting "G""n" = ε"n"("G"), we have formula_7.
The Herbrand–Ribet theorem states that for odd "n", "G""n" is nontrivial if and only if "p" divides the Bernoulli number "B""p"−"n".
The theorem makes no assertion about even values of "n", but there is no known "p" for which "G""n" is nontrivial for any even "n": triviality for all "p" would be a consequence of Vandiver's conjecture.
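As a concrete illustration, the smallest irregular prime is "p" = 37, which divides the numerator of "B"32; by the theorem the component "G"5 (since "p" − "n" = 32 forces "n" = 5) is therefore nontrivial. The short sketch below, assuming the SymPy library is available, checks the divisibility directly.

from sympy import bernoulli, fraction

# p = 37 is irregular: it divides the numerator of B_32, so by the
# Herbrand-Ribet theorem the eigenspace G_n with n = p - 32 = 5 is nontrivial.
p = 37
for n in range(2, p - 2, 2):                 # even indices 2, 4, ..., p - 3
    numerator, _ = fraction(bernoulli(n))
    if int(numerator) % p == 0:
        print(f"{p} divides the numerator of B_{n}; hence G_{p - n} is nontrivial")
# prints: 37 divides the numerator of B_32; hence G_5 is nontrivial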
Proofs.
The part saying "p" divides "B""p"−"n" if "G""n" is not trivial is due to Jacques Herbrand. The converse, that if "p" divides "B""p"−"n" then "G""n" is not trivial is due to Kenneth Ribet, and is considerably more difficult. By class field theory, this can only be true if there is an unramified extension of the field of "p"th roots of unity by a cyclic extension of degree "p" which behaves in the specified way under the action of Σ; Ribet proves this by actually constructing such an extension using methods in the theory of modular forms. A more elementary proof of Ribet's converse to Herbrand's theorem, a consequence of the theory of Euler systems, can be found in Washington's book.
Generalizations.
Ribet's methods were developed further by Barry Mazur and Andrew Wiles in order to prove the main conjecture of Iwasawa theory, a corollary of which is a strengthening of the Herbrand–Ribet theorem: the power of "p" dividing "B""p"−"n" is exactly the power of "p" dividing the order of "G""n".
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma_a(\\zeta) = \\zeta^a"
},
{
"math_id": 1,
"text": "\\mathbb{Z}_p"
},
{
"math_id": 2,
"text": "\\mathbb{Z}_p[\\Delta]"
},
{
"math_id": 3,
"text": "\\epsilon_n = \\frac{1}{p-1}\\sum_{a=1}^{p-1} \\omega(a)^n \\sigma_a^{-1}."
},
{
"math_id": 4,
"text": "\\sum\\epsilon_n = 1 "
},
{
"math_id": 5,
"text": " \\epsilon_i\\epsilon_j = \\delta_{ij}\\epsilon_i "
},
{
"math_id": 6,
"text": " \\delta_{ij} "
},
{
"math_id": 7,
"text": " G = \\oplus G_n "
}
] | https://en.wikipedia.org/wiki?curid=1078637 |
10788068 | Convex metric space | In mathematics, convex metric spaces are, intuitively, metric spaces with the property that any "segment" joining two points in that space has other points in it besides the endpoints.
Formally, consider a metric space ("X", "d") and let "x" and "y" be two points in "X". A point "z" in "X" is said to be "between" "x" and "y" if all three points are distinct, and
formula_0
that is, the triangle inequality becomes an equality. A convex metric space is a metric space ("X", "d") such that, for any two distinct points "x" and "y" in "X", there exists a third point "z" in "X" lying between "x" and "y".
Metric convexity:
Metric segments.
Let formula_5 be a metric space (which is not necessarily convex). A subset formula_6 of formula_7 is called a metric segment between two distinct points formula_1 and formula_2 in formula_8 if there exists a closed interval formula_9 on the real line and an isometry
formula_10
such that formula_11 formula_12 and formula_13
It is clear that any point in such a metric segment formula_6 except for the "endpoints" formula_1 and formula_2 is between formula_1 and formula_14 As such, if a metric space formula_5 admits metric segments between any two distinct points in the space, then it is a convex metric space.
The converse is not true, in general. The rational numbers form a convex metric space with the usual distance, yet there exists no segment connecting two rational numbers which is made up of rational numbers only. If however, formula_5 is a convex metric space, and, in addition, it is complete, one can prove that for any two points formula_15 in formula_7 there exists a metric segment connecting them (which is not necessarily unique).
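The rational-number example can be checked exactly with a few lines of code; the sketch below uses Python's Fraction type so that the "triangle equality" holds without rounding error, showing that the midpoint supplies a rational point between any two distinct rationals.

from fractions import Fraction

# The rationals with the usual distance are metrically convex: the midpoint z of
# two distinct rationals x and y is rational and satisfies d(x,z) + d(z,y) = d(x,y).
def d(a, b):
    return abs(a - b)

x, y = Fraction(1, 3), Fraction(7, 5)
z = (x + y) / 2
assert x != z != y
assert d(x, z) + d(z, y) == d(x, y)
print(z, "lies between", x, "and", y)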
Convex metric spaces and convex sets.
As mentioned in the examples section, closed subsets of Euclidean spaces are convex metric spaces if and only if they are convex sets. It is then natural to think of convex metric spaces as generalizing the notion of convexity beyond Euclidean spaces, with usual linear segments replaced by metric segments.
It is important to note, however, that metric convexity defined this way does not have one of the most important properties of Euclidean convex sets, that being that the intersection of two convex sets is convex. Indeed, as mentioned in the examples section, a circle, with the distance between two points measured along the shortest arc connecting them, is a (complete) convex metric space. Yet, if formula_1 and formula_2 are two points on a circle diametrically opposite to each other, there exist two metric segments connecting them (the two arcs into which these points split the circle), and those two arcs are metrically convex, but their intersection is the set formula_16 which is not metrically convex. | [
{
"math_id": 0,
"text": "d(x, z)+d(z, y)=d(x, y),\\,"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "y"
},
{
"math_id": 3,
"text": "z"
},
{
"math_id": 4,
"text": "y,"
},
{
"math_id": 5,
"text": "(X, d)"
},
{
"math_id": 6,
"text": "S"
},
{
"math_id": 7,
"text": "X"
},
{
"math_id": 8,
"text": "X,"
},
{
"math_id": 9,
"text": "[a, b]"
},
{
"math_id": 10,
"text": "\\gamma:[a, b] \\to X,\\,"
},
{
"math_id": 11,
"text": "\\gamma([a, b])=S,"
},
{
"math_id": 12,
"text": "\\gamma(a)=x"
},
{
"math_id": 13,
"text": "\\gamma(b)=y."
},
{
"math_id": 14,
"text": "y."
},
{
"math_id": 15,
"text": "x\\ne y"
},
{
"math_id": 16,
"text": "\\{x, y\\}"
}
] | https://en.wikipedia.org/wiki?curid=10788068 |
10789680 | Residue field | In mathematics, the residue field is a basic construction in commutative algebra. If "R" is a commutative ring and "m" is a maximal ideal, then the residue field is the quotient ring "k" = "R"/"m", which is a field. Frequently, "R" is a local ring and "m" is then its unique maximal ideal.
In abstract algebra, the splitting field of a polynomial is constructed using residue fields. Residue fields are also applied in algebraic geometry, where to every point "x" of a scheme "X" one associates its residue field "k"("x"). One can say a little loosely that the residue field of a point of an abstract algebraic variety is the 'natural domain' for the coordinates of the point.
Definition.
Suppose that "R" is a commutative local ring, with maximal ideal "m". Then the residue field is the quotient ring "R"/"m".
Now suppose that "X" is a scheme and "x" is a point of "X". By the definition of scheme, we may find an affine neighbourhood "U" = Spec("A") of "x", with "A" some commutative ring. Considered in the neighbourhood "U", the point "x" corresponds to a prime ideal "p" ⊆ "A" (see Zariski topology). The "local ring" of "X" at "x" is by definition the localization "Ap" of "A" by "A" \ "p", and "Ap" has maximal ideal "m" = "p·Ap". Applying the construction above, we obtain the residue field of the point "x" :
"k"("x") := "A""p" / "p"·"A""p".
One can prove that this definition does not depend on the choice of the affine neighbourhood "U".
A point is called "K"-rational for a certain field "K", if "k"("x") = "K".
Example.
Consider the affine line A1("k") = Spec("k"["t"]) over a field "k". If "k" is algebraically closed, there are exactly two types of prime ideals, namely the maximal ideals ("t" − "a") for "a" in "k", and the zero ideal (0).
The residue fields are formula_0 for the maximal ideals, and formula_1, the field of rational functions over "k" in one variable, for the zero ideal.
If "k" is not algebraically closed, then more types arise, for example if "k" = R, then the prime ideal ("t"2 + 1) has residue field isomorphic to C.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k[t]_{(t-a)}/(t-a)k[t]_{(t-a)} \\cong k"
},
{
"math_id": 1,
"text": "k[t]_{(0)} \\cong k(t)"
}
] | https://en.wikipedia.org/wiki?curid=10789680 |
1079106 | Open quantum system | Quantum mechanical system that interacts with a quantum-mechanical environment
In physics, an open quantum system is a quantum-mechanical system that interacts with an external quantum system, which is known as the "environment" or a "bath". In general, these interactions significantly change the dynamics of the system and result in quantum dissipation, such that the information contained in the system is lost to its environment. Because no quantum system is completely isolated from its surroundings, it is important to develop a theoretical framework for treating these interactions in order to obtain an accurate understanding of quantum systems.
Techniques developed in the context of open quantum systems have proven powerful in fields such as quantum optics, quantum measurement theory, quantum statistical mechanics, quantum information science, quantum thermodynamics, quantum cosmology, quantum biology, and semi-classical approximations.
Quantum system and environment.
A complete description of a quantum system requires the inclusion of the environment. Completely describing the resulting combined system then requires the inclusion of its environment, which results in a new system that can only be completely described if its environment is included and so on. The eventual outcome of this process of embedding is the state of the whole universe described by a wavefunction formula_0. The fact that every quantum system has some degree of openness also means that no quantum system can ever be in a pure state. A pure state is unitary equivalent to a zero-temperature ground state, forbidden by the third law of thermodynamics.
Even if the combined system is in a pure state and can be described by a wavefunction formula_1, a subsystem in general cannot be described by a wavefunction. This observation motivated the formalism of density matrices, or density operators, introduced by John von Neumann in 1927 and independently, but less systematically by Lev Landau in 1927 and Felix Bloch in 1946. In general, the state of a subsystem is described by the density operator formula_2 and the expectation value of an observable formula_3 by the scalar product formula_4. There is no way to know if the combined system is pure from the knowledge of observables of the subsystem alone. In particular, if the combined system has quantum entanglement, the state of the subsystem is not pure.
Dynamics.
In general, the time evolution of closed quantum systems is described by unitary operators acting on the system. For open systems, however, the interactions between the system and its environment make it so that the dynamics of the system cannot be accurately described using unitary operators alone.
The time evolution of quantum systems can be determined by solving the effective equations of motion, also known as master equations, that govern how the density matrix describing the system changes over time and the dynamics of the observables that are associated with the system. In general, however, the environment that we want to model as being a part of our system is very large and complicated, which makes finding exact solutions to the master equations difficult, if not impossible. As such, the theory of open quantum systems seeks an economical treatment of the dynamics of the system and its observables. Typical observables of interest include things like energy and the robustness of quantum coherence (i.e. a measure of a state's coherence). Loss of energy to the environment is termed quantum dissipation, while loss of coherence is termed quantum decoherence.
Due to the difficulty of determining the solutions to the master equations for a particular system and environment, a variety of techniques and approaches have been developed. A common objective is to derive a reduced description wherein the system's dynamics are considered explicitly and the bath's dynamics are described implicitly. The main assumption is that the entire system-environment combination is a large closed system. Therefore, its time evolution is governed by a unitary transformation generated by a global Hamiltonian. For the combined system bath scenario the global Hamiltonian can be decomposed into:
formula_5
where formula_6 is the system's Hamiltonian, formula_7 is the bath Hamiltonian and formula_8 is the system-bath interaction. The state of the system can then be obtained from a partial trace over the combined system and bath: formula_9.
Another common assumption that is used to make systems easier to solve is that the state of the system at the next moment depends only on the current state of the system; in other words, the system does not have a memory of its previous states. Systems that have this property are known as Markovian systems. This approximation is justified when the system in question has enough time to relax to equilibrium before being perturbed again by interactions with its environment. For systems that have very fast or very frequent perturbations from their coupling to their environment, this approximation becomes much less accurate.
Markovian equations.
When the interaction between the system and the environment is weak, a time-dependent perturbation theory seems appropriate for treating the evolution of the system. In other words, if the interaction between the system and its environment is weak, then any changes to the combined system over time can be approximated as originating from only the system in question. Another typical assumption is that the system and bath are initially uncorrelated formula_10. This idea originated with Felix Bloch and was expanded upon by Alfred Redfield in his derivation of the Redfield equation. The Redfield equation is a Markovian master equation that describes the time evolution of the reduced density matrix of the system. The drawback of the Redfield equation is that it does not preserve the positivity of the density operator.
A formal construction of a local equation of motion with a Markovian property is an alternative to a reduced derivation. The theory is based on an axiomatic approach. The basic starting point is a completely positive map. The assumption is that the initial system-environment state is uncorrelated formula_10 and the combined dynamics is generated by a unitary operator. Such a map falls under the category of Kraus operator. The most general type of a time-homogeneous master equation with the Markovian property describing non-unitary evolution of the density matrix ρ that is trace-preserving and completely positive for any initial condition is the Gorini–Kossakowski–Sudarshan–Lindblad equation or GKSL equation:
formula_11
formula_12 is a (Hermitian) Hamiltonian part and formula_13:
formula_14
is the dissipative part describing implicitly through system operators formula_15 the influence of the bath on the system.
The Markov property imposes that the system and bath are uncorrelated at all times formula_16.
The GKSL equation is unidirectional and leads any initial state formula_17 to a steady state solution which is an invariant of the equation of motion formula_18.
The family of maps generated by the GKSL equation forms a quantum dynamical semigroup. In some fields, such as quantum optics, the term Lindblad superoperator is often used to express the quantum master equation for a dissipative system. E. B. Davies derived GKSL master equations with the Markovian property using perturbation theory and additional approximations, such as the rotating-wave (secular) approximation, thus fixing the flaws of the Redfield equation. The Davies construction is consistent with the Kubo–Martin–Schwinger stability criterion for thermal equilibrium, i.e. the KMS state. An alternative approach to fixing the Redfield equation has been proposed by J. Thingna, J.-S. Wang, and P. Hänggi, which allows the system-bath interaction to play a role in an equilibrium state differing from the KMS state.
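As a minimal numerical illustration of the GKSL equation, the sketch below integrates it for a two-level system with a single jump operator describing spontaneous emission. The Hamiltonian, decay rate, time step and initial state are arbitrary illustrative choices (with ħ = 1), and in practice a dedicated package such as QuTiP would normally be used rather than the hand-written integrator shown here.

import numpy as np

# Two-level system: H = 0.5*omega*sigma_z, one jump operator V = sqrt(gamma)*sigma_minus.
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_minus = np.array([[0, 0], [1, 0]], dtype=complex)

omega, gamma = 1.0, 0.2
H = 0.5 * omega * sigma_z
V = np.sqrt(gamma) * sigma_minus

def lindblad_rhs(rho):
    """Right-hand side of the GKSL equation for a single jump operator V."""
    comm = -1j * (H @ rho - rho @ H)
    diss = V @ rho @ V.conj().T - 0.5 * (V.conj().T @ V @ rho + rho @ V.conj().T @ V)
    return comm + diss

# Start in the excited state and integrate with fourth-order Runge-Kutta.
rho = np.array([[1, 0], [0, 0]], dtype=complex)
dt, steps = 0.01, 2000
for _ in range(steps):
    k1 = lindblad_rhs(rho)
    k2 = lindblad_rhs(rho + 0.5 * dt * k1)
    k3 = lindblad_rhs(rho + 0.5 * dt * k2)
    k4 = lindblad_rhs(rho + dt * k3)
    rho = rho + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

print("excited-state population at t = 20:", rho[0, 0].real)   # close to exp(-gamma*t)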
In 1981, Amir Caldeira and Anthony J. Leggett proposed a simplifying assumption in which the bath is decomposed to normal modes represented as harmonic oscillators linearly coupled to the system. As a result, the influence of the bath can be summarized by the bath spectral function. This method is known as the Caldeira–Leggett model, or harmonic bath model. To proceed and obtain explicit solutions, the path integral formulation description of quantum mechanics is typically employed. A large part of the power behind this method is the fact that harmonic oscillators are relatively well-understood compared to the true coupling that exists between the system and the bath. Unfortunately, while the Caldeira-Leggett model is one that leads to a physically consistent picture of quantum dissipation, its ergodic properties are too weak and so the dynamics of the model do not generate wide-scale quantum entanglement between the bath modes.
An alternative bath model is a spin bath. At low temperatures and weak system-bath coupling, the Caldeira-Leggett and spin bath models are equivalent. But for higher temperatures or strong system-bath coupling, the spin bath model has strong ergodic properties. Once the system is coupled, significant entanglement is generated between all modes. In other words, the spin bath model can simulate the Caldeira-Leggett model, but the opposite is not true.
An example of a natural system coupled to a spin bath is a nitrogen-vacancy (N-V) center in diamond. In this example, the color center is the system and the bath consists of carbon-13 (13C) impurities which interact with the system via the magnetic dipole-dipole interaction.
For open quantum systems where the bath has oscillations that are particularly fast, it is possible to average them out by looking at sufficiently large changes in time. This is possible because the average amplitude of fast oscillations over a large time scale is equal to the central value, which can always be chosen to be zero with a minor shift along the vertical axis. This method of simplifying problems is known as the secular approximation.
Non-Markovian equations.
Open quantum systems that do not have the Markovian property are generally much more difficult to solve. This is largely due to the fact that the next state of a non-Markovian system is determined by each of its previous states, which rapidly increases the memory requirements to compute the evolution of the system. Currently, the methods of treating these systems employ what are known as projection operator techniques. These techniques employ a projection operator formula_19, which effectively applies the trace over the environment as described previously. The result of applying formula_19 to formula_20 (i.e. calculating formula_21) is called the "relevant part" of formula_20. For completeness, another operator formula_22 is defined so that formula_23 where formula_24 is the identity matrix. The result of applying formula_22 to formula_20 (i.e. calculating formula_25) is called the "irrelevant part" of formula_20. The primary goal of these methods is to then derive a master equation that defines the evolution of formula_21.
One such derivation using the projection operator technique results in what is known as the Nakajima–Zwanzig equation. This derivation highlights the problem of the reduced dynamics being non-local in time:
formula_26
Here the effect of the bath throughout the time evolution of the system is hidden in the memory kernel formula_27. While the Nakajima-Zwanzig equation is an exact equation that holds for almost all open quantum systems and environments, it can be very difficult to solve. This means that approximations generally need to be introduced to reduce the complexity of the problem into something more manageable. As an example, the assumption of a fast bath is required to lead to a time local equation: formula_28. Other examples of valid approximations include the weak-coupling approximation and the single-coupling approximation.
In some cases, the projection operator technique can be used to reduce the dependence of the system's next state on all of its previous states. This method of approaching open quantum systems is known as the time-convolutionless projection operator technique, and it is used to generate master equations that are inherently local in time. Because these equations can neglect more of the history of the system, they are often easier to solve than things like the Nakajima-Zwanzig equation.
Another approach emerges as an analogue of classical dissipation theory developed by Ryogo Kubo and Y. Tanimura. This approach is connected to hierarchical equations of motion which embed the density operator in a larger space of auxiliary operators such that a time local equation is obtained for the whole set and their memory is contained in the auxiliary operators. | [
{
"math_id": 0,
"text": "\\Psi"
},
{
"math_id": 1,
"text": " \\Psi "
},
{
"math_id": 2,
"text": " \\rho "
},
{
"math_id": 3,
"text": " A "
},
{
"math_id": 4,
"text": " (\\rho \\cdot A) = \\rm{tr}\\{ \\rho A \\} "
},
{
"math_id": 5,
"text": " H=H_{\\rm S}+H_{\\rm B}+H_{\\rm SB} "
},
{
"math_id": 6,
"text": "H_{\\rm S}"
},
{
"math_id": 7,
"text": "H_{\\rm B} "
},
{
"math_id": 8,
"text": "H_{\\rm SB}"
},
{
"math_id": 9,
"text": "\\rho_{\\rm S} (t) =\\rm{tr}_{\\rm B} \\{\\rho_{SB} (t)\\} "
},
{
"math_id": 10,
"text": " \\rho(0)=\\rho_{\\rm S} \\otimes \\rho_{\\rm B} "
},
{
"math_id": 11,
"text": "\\dot\\rho_{\\rm S}=-{i\\over\\hbar}[H_{\\rm S},\\rho_{\\rm S}]+{\\cal L}_{\\rm D}(\\rho_{\\rm S}) "
},
{
"math_id": 12,
"text": " H_{\\rm S}"
},
{
"math_id": 13,
"text": "{\\cal L}_{\\rm D}"
},
{
"math_id": 14,
"text": "{\\cal L}_{\\rm D}(\\rho_{\\rm S})=\\sum_n \\left(V_n\\rho_{\\rm S} V_n^\\dagger-\\frac{1}{2}\\left(\\rho_{\\rm S} V_n^\\dagger V_n + V_n^\\dagger V_n\\rho_{\\rm S}\\right)\\right)"
},
{
"math_id": 15,
"text": " V_n "
},
{
"math_id": 16,
"text": " \\rho_{\\rm SB}=\\rho_{\\rm S} \\otimes \\rho_{\\rm B} "
},
{
"math_id": 17,
"text": " \\rho_{\\rm S}"
},
{
"math_id": 18,
"text": " \\dot \\rho_{\\rm S}(t \\rightarrow \\infty ) = 0 "
},
{
"math_id": 19,
"text": "\\mathcal{P}"
},
{
"math_id": 20,
"text": "\\rho"
},
{
"math_id": 21,
"text": "\\mathcal{P}\\rho"
},
{
"math_id": 22,
"text": "\\mathcal{Q}"
},
{
"math_id": 23,
"text": "\\mathcal{P}+\\mathcal{Q}=\\mathcal{I}"
},
{
"math_id": 24,
"text": "\\mathcal{I}"
},
{
"math_id": 25,
"text": "\\mathcal{Q}\\rho"
},
{
"math_id": 26,
"text": "\\partial_t{\\rho }_\\mathrm{S}=\\mathcal{P}{\\cal L}{{\\rho}_\\mathrm{S}}+\\int_{0}^{t}{dt'\\mathcal{K}({t}'){{\\rho }_\\mathrm{S}}(t-{t}')}."
},
{
"math_id": 27,
"text": " \\kappa (\\tau)"
},
{
"math_id": 28,
"text": " \\partial_t \\rho_S = {\\cal L } \\rho_S "
}
] | https://en.wikipedia.org/wiki?curid=1079106 |
1079129 | Aquifer test | Pumping water into an aquifer to monitor its response
In hydrogeology, an aquifer test (or a pumping test) is conducted to evaluate an aquifer by "stimulating" the aquifer through constant pumping, and observing the aquifer's "response" (drawdown) in observation wells. Aquifer testing is a common tool that hydrogeologists use to characterize a system of aquifers, aquitards and flow system boundaries.
A slug test is a variation on the typical aquifer test where an instantaneous change (increase or decrease) is made, and the effects are observed in the same well. This is often used in geotechnical engineering settings to get a quick estimate (minutes instead of days) of the aquifer properties immediately around the well.
Aquifer tests are typically interpreted by using an analytical model of aquifer flow (the most fundamental being the Theis solution) to match the data observed in the real world, then assuming that the parameters from the idealized model apply to the real-world aquifer. In more complex cases, a numerical model may be used to analyze the results of an aquifer test.
Aquifer testing differs from well testing in that the behaviour of the well is primarily of concern in the latter, while the characteristics of the aquifer are quantified in the former. Aquifer testing also often utilizes one or more monitoring wells, or piezometers ("point" observation wells). A monitoring well is simply a well which is not being pumped (but is used to monitor the hydraulic head in the aquifer). Typically monitoring and pumping wells are screened across the same aquifers.
General characteristics.
Most commonly an aquifer test is conducted by pumping water from one well at a steady rate and for at least one day, while carefully measuring the water levels in the monitoring wells. When water is pumped from the pumping well the pressure in the aquifer that feeds that well declines. This decline in pressure will show up as drawdown (change in hydraulic head) in an observation well. Drawdown decreases with radial distance from the pumping well and drawdown increases with the length of time that the pumping continues.
The aquifer characteristics which are evaluated by most aquifer tests are the transmissivity and the storativity of the aquifer.
Additional aquifer characteristics which are sometimes evaluated, depending on the type of aquifer, include the specific yield (for unconfined aquifers), the leakage through and storage properties of adjacent aquitards, and the presence and location of aquifer boundaries.
Analysis methods.
An appropriate model or solution to the groundwater flow equation must be chosen to fit the observed data. There are many different choices of models, depending on which factors are deemed important, for example whether the aquifer is confined, unconfined or leaky, whether the wells fully penetrate the aquifer, whether wellbore storage is significant, and whether nearby flow boundaries affect the drawdown.
Nearly all aquifer test solution methods are based on the Theis solution; it is built upon the most simplifying assumptions. Other methods relax one or more of the assumptions the Theis solution is built on, and therefore they get a more flexible (and more complex) result.
Transient Theis solution.
The Theis equation was created by Charles Vernon Theis (working for the US Geological Survey) in 1935, from heat transfer literature (with the mathematical help of C.I. Lubin), for two-dimensional radial flow to a point sink in an infinite, homogeneous aquifer. It is simply
formula_0
where "s" is the drawdown (change in hydraulic head at a point since the beginning of the test in units of distance), "u" is a dimensionless parameter, "Q" is the discharge (pumping) rate of the well (volume per unit time), "T" and "S" are the transmissivity and storativity of the aquifer around the well (distance squared per unit time and dimensionless, respectively), "r" is the distance from the pumping well to the point where the drawdown was observed, "t" is the time since pumping began, and "W(u)" is the "Well function" (called the incomplete gamma function, formula_1, in non-hydrogeology literature). The well function is given by the infinite series
formula_2
where "γ" is the Euler constant (=0.577216...). Typically this equation is used to find the average "T" and "S" values near a pumping well, from drawdown data collected during an aquifer test. This is a simple form of inverse modeling, since the result ("s") is measured in the well, "r", "t", and "Q" are observed, and values of "T" and "S" which best reproduce the measured data are put into the equation until a best fit between the observed data and the analytic solution is found.
The Theis solution is based on the following assumptions: the flow in the aquifer obeys Darcy's law; the aquifer is homogeneous, isotropic, confined, of uniform thickness and of infinite areal extent; the pumping well fully penetrates the aquifer and has a negligible radius; the pumping rate is constant; and water is released from storage instantaneously as the hydraulic head declines.
Even though these assumptions are rarely all met, depending on the degree to which they are violated (e.g., if the boundaries of the aquifer are well beyond the part of the aquifer which will be tested by the pumping test) the
solution may still be useful.
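As a forward-modelling sketch (with invented parameter values; in an actual aquifer test "T" and "S" would instead be fitted to the observed drawdowns), the Theis drawdown can be evaluated using the exponential integral available in SciPy, since "W"("u") equals the exponential integral E1("u").

import numpy as np
from scipy.special import exp1   # exp1(u) = E1(u) = W(u), the Theis well function

Q = 0.01      # pumping rate, m^3/s (illustrative value)
T = 1.0e-3    # transmissivity, m^2/s (illustrative value)
S = 1.0e-4    # storativity, dimensionless (illustrative value)
r = 30.0      # distance from pumping well to observation point, m
t = np.array([600.0, 3600.0, 86400.0])    # times since pumping began, s

u = r**2 * S / (4 * T * t)
s = Q / (4 * np.pi * T) * exp1(u)          # predicted drawdown, m
for ti, si in zip(t, s):
    print(f"t = {ti:>7.0f} s   drawdown = {si:.2f} m")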
Steady-state Thiem solution.
Steady-state radial flow to a pumping well is commonly called the Thiem solution, it comes about from application of Darcy's law to cylindrical shell control volumes (i.e., a cylinder with a larger radius which has a smaller radius cylinder cut out of it) about the pumping well; it is commonly written as:
formula_3
In this expression "h0" is the background hydraulic head, "h0"-"h" is the drawdown at the radial distance "r" from the pumping well, "Q" is the discharge rate of the pumping well (at the origin), "T" is the transmissivity, and "R" is the radius of influence, or the distance at which the head is still "h0". These conditions (steady-state flow to a pumping well with no nearby boundaries) "never truly occur" in nature, but it can often be used as an approximation to actual conditions; the solution is derived by assuming there is a circular constant head boundary (e.g., a lake or river in full contact with the aquifer) surrounding the pumping well at a distance "R".
Sources of error.
Of critical importance in both aquifer and well testing is the accurate recording of data. Not only must water levels and the time of the measurement be carefully recorded, but the pumping rates must be periodically checked and recorded. An unrecorded change in pumping rate of as little as 2% can be misleading when the data are analysed.
Further reading.
The US Geological Survey has some very useful free references on pumping test interpretation.
Some commercial printed references on aquifer test interpretation are also available.
More book titles can be found in the "further reading" section of the hydrogeology article, most of which contain some material on aquifer test analysis or the theory behind these test methods. | [
{
"math_id": 0,
"text": "\\begin{align}\ns &= \\frac{Q}{4\\pi T}W(u) \\\\[0.5em]\nu &= \\frac{r^2 S}{4Tt}\n\\end{align}"
},
{
"math_id": 1,
"text": "\\Gamma(0,u)"
},
{
"math_id": 2,
"text": "\\begin{align}\nW(u) = \\Gamma(0,u) = -\\gamma - \\ln(u) + u - \\frac{u^2}{2 \\times 2!} + \\frac{u^3}{3 \\times 3!} - \\frac{u^4}{4 \\times 4!} + \\cdots\n\\end{align}"
},
{
"math_id": 3,
"text": "h_0 - h = \\frac{Q}{2\\pi T} \\ln\\left( \\frac{R}{r} \\right) "
}
] | https://en.wikipedia.org/wiki?curid=1079129 |
1079432 | Subdivision surface | Curved surface derived from a coarse polygon mesh
In the field of 3D computer graphics, a subdivision surface (commonly shortened to SubD surface or Subsurf) is a curved surface represented by the specification of a coarser polygon mesh and produced by a recursive algorithmic method. The curved surface, the underlying "inner mesh", can be calculated from the coarse mesh, known as the "control cage" or "outer mesh", as the functional limit of an iterative process of subdividing each polygonal face into smaller faces that better approximate the final underlying curved surface. Less commonly, a simple algorithm is used to add geometry to a mesh by subdividing the faces into smaller ones without changing the overall shape or volume.
The opposite is reducing polygons or un-subdividing.
Overview.
A subdivision surface algorithm is recursive in nature. The process starts with a base level polygonal mesh. A refinement scheme is then applied to this mesh. This process takes that mesh and subdivides it, creating new vertices and new faces. The positions of the new vertices in the mesh are computed based on the positions of nearby old vertices, edges, and/or faces. In many refinement schemes, the positions of old vertices are also altered (possibly based on the positions of new vertices).
This process produces a "denser" mesh than the original one, containing more polygonal faces (often by a factor of 4). This resulting mesh can be passed through the same refinement scheme again and again to produce more and more refined meshes. Each iteration is often called a subdivision "level", starting at zero (before any refinement occurs).
The "limit" subdivision surface is the surface produced from this process being iteratively applied infinitely many times. In practical use however, this algorithm is only applied a limited, and fairly small (formula_0), number of times.
Mathematically, the neighborhood of an "extraordinary vertex" (non-4-valent node for quad refined meshes) of a subdivision surface is a spline with a parametrically singular point.
Refinement schemes.
Subdivision surface refinement schemes can be broadly classified into two categories: "interpolating" and "approximating".
In general, approximating schemes have greater smoothness, but the user has less overall control of the outcome. This is analogous to spline surfaces and curves, where Bézier curves are required to interpolate certain control points, while B-Splines are not (and are more approximate).
Subdivision surface schemes can also be categorized by the type of polygon that they operate on: some function best for quadrilaterals (quads), while others primarily operate on triangles (tris).
Approximating schemes.
"Approximating" means that the limit surfaces approximate the initial meshes, and that after subdivision the newly generated control points are not in the limit surfaces. There are five approximating subdivision schemes:
Interpolating schemes.
After subdivision, the control points of the original mesh and the newly generated control points are interpolated on the limit surface. The earliest work was the so-called "butterfly scheme" by Dyn, Levin and Gregory (1990), who extended the four-point interpolatory subdivision scheme for curves to a subdivision scheme for surfaces. Zorin, Schröder and Sweldens (1996) noticed that the butterfly scheme cannot generate smooth surfaces for irregular triangle meshes and thus modified this scheme. Kobbelt (1996) further generalized the four-point interpolatory subdivision scheme for curves to the tensor product subdivision scheme for surfaces. In 1991, Nasri proposed a scheme for interpolating Doo-Sabin; while in 1993 Halstead, Kass, and DeRose proposed one for Catmull-Clark.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\leq 5"
}
] | https://en.wikipedia.org/wiki?curid=1079432 |
1079448 | Dirac operator | First-order differential linear operator on spinor bundle, whose square is the Laplacian
In mathematics and quantum mechanics, a Dirac operator is a differential operator that is a formal square root, or half-iterate, of a second-order operator such as a Laplacian. The original case which concerned Paul Dirac was to factorise formally an operator for Minkowski space, to get a form of quantum theory compatible with special relativity; to get the relevant Laplacian as a product of first-order operators he introduced spinors. It was first published in 1928 by Dirac.
Formal definition.
In general, let "D" be a first-order differential operator acting on a vector bundle "V" over a Riemannian manifold "M". If
formula_0
where ∆ is the Laplacian of "V", then "D" is called a Dirac operator.
In high-energy physics, this requirement is often relaxed: only the second-order part of "D"2 must equal the Laplacian.
Examples.
Example 1.
"D" = −"i" ∂"x" is a Dirac operator on the tangent bundle over a line.
Example 2.
Consider a simple bundle of notable importance in physics: the configuration space of a particle with spin 1/2 confined to a plane, which is also the base manifold. It is represented by a wavefunction "ψ" : R2 → C2
formula_1
where "x" and "y" are the usual coordinate functions on R2. "χ" specifies the probability amplitude for the particle to be in the spin-up state, and similarly for "η". The so-called spin-Dirac operator can then be written
formula_2
where "σ""i" are the Pauli matrices. Note that the anticommutation relations for the Pauli matrices make the proof of the above defining property trivial. Those relations define the notion of a Clifford algebra.
Solutions to the Dirac equation for spinor fields are often called "harmonic spinors".
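The algebra behind this example is easy to verify numerically. The sketch below (plain NumPy, with the standard matrix representation of the Pauli matrices) checks the anticommutation relations σ"i"σ"j" + σ"j"σ"i" = 2δ"ij""I"; these are exactly what make the mixed-derivative terms in "D"2 cancel, leaving the Laplacian acting on each component of the spinor.

import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2)

def anticommutator(a, b):
    return a @ b + b @ a

# {sigma_i, sigma_j} = 2 * delta_ij * I
assert np.allclose(anticommutator(sigma_x, sigma_x), 2 * I2)
assert np.allclose(anticommutator(sigma_y, sigma_y), 2 * I2)
assert np.allclose(anticommutator(sigma_x, sigma_y), np.zeros((2, 2)))

# Hence D^2 = -(d^2/dx^2 + d^2/dy^2) * I: the cross terms cancel and the square
# of the spin-Dirac operator is the (positive) Laplacian on each component.
print("Pauli anticommutation relations verified")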
Example 3.
Feynman's Dirac operator describes the propagation of a free fermion in three dimensions and is elegantly written
formula_3
using the Feynman slash notation. In introductory textbooks to quantum field theory, this will appear in the form
formula_4
where formula_5 are the off-diagonal Dirac matrices formula_6, with formula_7 and the remaining constants are formula_8 the speed of light, formula_9 being the Planck constant, and formula_10 the mass of a fermion (for example, an electron). It acts on a four-component wave function formula_11, the Sobolev space of smooth, square-integrable functions. It can be extended to a self-adjoint operator on that domain. The square, in this case, is not the Laplacian, but instead formula_12 (after setting formula_13)
Example 4.
Another Dirac operator arises in Clifford analysis. In euclidean "n"-space this is
formula_14
where {"ej": "j" = 1, ..., "n"} is an orthonormal basis for euclidean "n"-space, and R"n" is considered to be embedded in a Clifford algebra.
This is a special case of the Atiyah–Singer–Dirac operator acting on sections of a spinor bundle.
Example 5.
For a spin manifold, "M", the Atiyah–Singer–Dirac operator is locally defined as follows: For "x" ∈ "M" and "e1"("x"), ..., "ej"("x") a local orthonormal basis for the tangent space of "M" at "x", the Atiyah–Singer–Dirac operator is
formula_15
where formula_16 is the spin connection, a lifting of the Levi-Civita connection on "M" to the spinor bundle over "M". The square in this case is not the Laplacian, but instead formula_17 where formula_18 is the scalar curvature of the connection.
Example 6.
On a Riemannian manifold formula_19 of dimension formula_20 with Levi-Civita connection formula_21 and an orthonormal basis formula_22, we can define the exterior derivative formula_23 and the coderivative formula_24 as
formula_25.
Then we can define a Dirac-Kähler operator formula_26, as follows
formula_27.
The operator acts on sections of the Clifford bundle in general, and it can be restricted to the spinor bundle, an ideal of the Clifford bundle, only if the projection operator onto the ideal is parallel.
Generalisations.
In Clifford analysis, the operator "D" : "C"∞(R"k" ⊗ R"n", "S") → "C"∞(R"k" ⊗ R"n", C"k" ⊗ "S") acting on spinor valued functions defined by
formula_28
is sometimes called Dirac operator in "k" Clifford variables. In the notation, "S" is the space of spinors, formula_29 are "n"-dimensional variables and formula_30 is the Dirac operator in the "i"-th variable. This is a common generalization of the Dirac operator ("k" = 1) and the Dolbeault operator ("n" = 2, "k" arbitrary). It is an invariant differential operator, invariant under the action of the group SL("k") × Spin("n"). The resolution of "D" is known only in some special cases.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "D^2=\\Delta, \\,"
},
{
"math_id": 1,
"text": "\\psi(x,y) = \\begin{bmatrix}\\chi(x,y) \\\\ \\eta(x,y)\\end{bmatrix}"
},
{
"math_id": 2,
"text": "D=-i\\sigma_x\\partial_x-i\\sigma_y\\partial_y ,"
},
{
"math_id": 3,
"text": "D=\\gamma^\\mu\\partial_\\mu\\ \\equiv \\partial\\!\\!\\!/,"
},
{
"math_id": 4,
"text": "D = c\\vec\\alpha \\cdot (-i\\hbar\\nabla_x) + mc^2\\beta"
},
{
"math_id": 5,
"text": "\\vec\\alpha = (\\alpha_1, \\alpha_2, \\alpha_3)"
},
{
"math_id": 6,
"text": "\\alpha_i=\\beta\\gamma_i"
},
{
"math_id": 7,
"text": "\\beta=\\gamma_0"
},
{
"math_id": 8,
"text": "c"
},
{
"math_id": 9,
"text": "\\hbar"
},
{
"math_id": 10,
"text": "m"
},
{
"math_id": 11,
"text": "\\psi(x) \\in L^2(\\mathbb{R}^3, \\mathbb{C}^4)"
},
{
"math_id": 12,
"text": "D^2=\\Delta+m^2"
},
{
"math_id": 13,
"text": "\\hbar=c=1."
},
{
"math_id": 14,
"text": "D=\\sum_{j=1}^{n}e_{j}\\frac{\\partial}{\\partial x_{j}}"
},
{
"math_id": 15,
"text": "D=\\sum_{j=1}^{n}e_{j}(x)\\tilde{\\Gamma}_{e_{j}(x)} ,"
},
{
"math_id": 16,
"text": "\\tilde{\\Gamma}"
},
{
"math_id": 17,
"text": "D^2=\\Delta+R/4"
},
{
"math_id": 18,
"text": "R"
},
{
"math_id": 19,
"text": "(M, g)"
},
{
"math_id": 20,
"text": "n=dim(M)"
},
{
"math_id": 21,
"text": "\\nabla"
},
{
"math_id": 22,
"text": "\\{e_{a}\\}_{a=1}^{n}"
},
{
"math_id": 23,
"text": "d"
},
{
"math_id": 24,
"text": "\\delta"
},
{
"math_id": 25,
"text": "d= e^{a}\\wedge \\nabla_{e_{a}}, \\quad \\delta =e^{a} \\lrcorner \\nabla_{e_{a}}"
},
{
"math_id": 26,
"text": "D"
},
{
"math_id": 27,
"text": "D = e^{a}\\nabla_{e_{a}}=d-\\delta"
},
{
"math_id": 28,
"text": "f(x_1,\\ldots,x_k)\\mapsto\n\\begin{pmatrix}\n\\partial_{\\underline{x_1}}f\\\\\n\\partial_{\\underline{x_2}}f\\\\\n\\ldots\\\\\n\\partial_{\\underline{x_k}}f\\\\\n\\end{pmatrix}"
},
{
"math_id": 29,
"text": "x_i=(x_{i1},x_{i2},\\ldots,x_{in})"
},
{
"math_id": 30,
"text": "\\partial_{\\underline{x_i}}=\\sum_j e_j\\cdot \\partial_{x_{ij}}"
}
] | https://en.wikipedia.org/wiki?curid=1079448 |
1079466 | Förster resonance energy transfer | Photochemical energy transfer mechanism
Förster resonance energy transfer (FRET), fluorescence resonance energy transfer, resonance energy transfer (RET) or electronic energy transfer (EET) is a mechanism describing energy transfer between two light-sensitive molecules (chromophores). A donor chromophore, initially in its electronic excited state, may transfer energy to an acceptor chromophore through nonradiative dipole–dipole coupling. The efficiency of this energy transfer is inversely proportional to the sixth power of the distance between donor and acceptor, making FRET extremely sensitive to small changes in distance.
Measurements of FRET efficiency can be used to determine if two fluorophores are within a certain distance of each other. Such measurements are used as a research tool in fields including biology and chemistry.
FRET is analogous to near-field communication, in that the radius of interaction is much smaller than the wavelength of light emitted. In the near-field region, the excited chromophore emits a virtual photon that is instantly absorbed by a receiving chromophore. These virtual photons are undetectable, since their existence violates the conservation of energy and momentum, and hence FRET is known as a "radiationless" mechanism. Quantum electrodynamical calculations have been used to determine that radiationless (FRET) and radiative energy transfer are the short- and long-range asymptotes of a single unified mechanism.
Terminology.
Förster resonance energy transfer is named after the German scientist Theodor Förster. When both chromophores are fluorescent, the term "fluorescence resonance energy transfer" is often used instead, although the energy is not actually transferred by fluorescence. In order to avoid an erroneous interpretation of the phenomenon that is always a nonradiative transfer of energy (even when occurring between two fluorescent chromophores), the name "Förster resonance energy transfer" is preferred to "fluorescence resonance energy transfer"; however, the latter enjoys common usage in scientific literature. FRET is not restricted to fluorescence and occurs in connection with phosphorescence as well.
Theoretical basis.
The FRET efficiency (formula_0) is the quantum yield of the energy-transfer transition, i.e. the probability of energy-transfer event occurring per donor excitation event:
formula_1
where formula_2 is the radiative decay rate of the donor, formula_3 is the rate of energy transfer, and formula_4 are the rates of any other de-excitation pathways excluding energy transfers to other acceptors.
The FRET efficiency depends on many physical parameters that can be grouped as: 1) the distance between the donor and the acceptor (typically in the range of 1–10 nm), 2) the spectral overlap of the donor emission spectrum and the acceptor absorption spectrum, and 3) the relative orientation of the donor emission dipole moment and the acceptor absorption dipole moment.
formula_0 depends on the donor-to-acceptor separation distance formula_5 with an inverse 6th-power law due to the dipole–dipole coupling mechanism:
formula_6
with formula_7 being the Förster distance of this pair of donor and acceptor, i.e. the distance at which the energy transfer efficiency is 50%.
The Förster distance depends on the overlap integral of the donor emission spectrum with the acceptor absorption spectrum and their mutual molecular orientation as expressed by the following equation all in SI units:
formula_8
where formula_9 is the fluorescence quantum yield of the donor in the absence of the acceptor, formula_10 is the dipole orientation factor, formula_11 is the refractive index of the medium, formula_12 is the Avogadro constant, and formula_13 is the spectral overlap integral calculated as
formula_14
where formula_15 is the donor emission spectrum, formula_16 is the donor emission spectrum normalized to an area of 1, and formula_17 is the acceptor molar extinction coefficient, normally obtained from an absorption spectrum.
The orientation factor κ is given by
formula_18
where formula_19 denotes the normalized transition dipole moment of the respective fluorophore, and formula_20 denotes the normalized inter-fluorophore displacement.
formula_10 = 2/3 is often assumed. This value is obtained when both dyes are freely rotating and can be considered to be isotropically oriented during the excited-state lifetime. If either dye is fixed or not free to rotate, then formula_10 = 2/3 will not be a valid assumption. In most cases, however, even modest reorientation of the dyes results in enough orientational averaging that formula_10 = 2/3 does not result in a large error in the estimated energy-transfer distance due to the sixth-power dependence of formula_7 on formula_10. Even when formula_10 is quite different from 2/3, the error can be associated with a shift in formula_7, and thus determinations of changes in relative distance for a particular system are still valid. Fluorescent proteins do not reorient on a timescale that is faster than their fluorescence lifetime. In this case 0 ≤ formula_10 ≤ 4.
The units of the data are usually not in SI units. Using the original units to calculate the Förster distance is often more convenient. For example, the wavelength is often in unit nm and the extinction coefficient is often in unit formula_21, where formula_22 is concentration formula_23. formula_13 obtained from these units will have unit formula_24. To use unit Å (formula_25) for the formula_26, the equation is adjusted to
formula_27 (Åformula_28)
For time-dependent analyses of FRET, the rate of energy transfer (formula_3) can be used directly instead:
formula_29
where formula_30 is the donor's fluorescence lifetime in the absence of the acceptor.
The FRET efficiency relates to the quantum yield and the fluorescence lifetime of the donor molecule as follows:
formula_31
where formula_32 and formula_33 are the donor fluorescence lifetimes in the presence and absence of an acceptor respectively, or as
formula_34
where formula_35 and formula_36 are the donor fluorescence intensities with and without an acceptor respectively.
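The relations above are straightforward to evaluate; the sketch below uses made-up values for the Förster distance, the donor–acceptor separations and the donor lifetimes (not data for any real dye pair) to illustrate the steep sixth-power distance dependence and the lifetime form of the efficiency.

def fret_efficiency_from_distance(r, r0):
    """E = 1 / (1 + (r/R0)^6): inverse sixth-power dependence on the separation r."""
    return 1.0 / (1.0 + (r / r0) ** 6)

def fret_efficiency_from_lifetimes(tau_da, tau_d):
    """E = 1 - tau'_D / tau_D, from donor lifetimes with and without the acceptor."""
    return 1.0 - tau_da / tau_d

r0 = 5.0                        # hypothetical Forster distance, nm
for r in (2.5, 5.0, 7.5):       # donor-acceptor separations, nm
    print(r, fret_efficiency_from_distance(r, r0))
# At r = R0 the efficiency is exactly 0.5 and it falls off steeply on either side.

print(fret_efficiency_from_lifetimes(tau_da=1.2, tau_d=2.4))   # 0.5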
Experimental confirmation of the FRET theory.
The inverse sixth-power distance dependence of Förster resonance energy transfer was experimentally confirmed by Wilchek, Edelhoch and Brand using tryptophyl peptides. Stryer, Haugland and Yguerabide also experimentally demonstrated the theoretical dependence of Förster resonance energy transfer on the overlap integral by using a fused indolosteroid as a donor and a ketone as an acceptor. Calculations on FRET distances of some example dye-pairs can be found here.
However, many discrepancies between particular experiments and the theory have been observed in complicated environments, where the orientations and quantum yields of the molecules are difficult to estimate.
Methods to measure FRET efficiency.
In fluorescence microscopy, fluorescence confocal laser scanning microscopy, as well as in molecular biology, FRET is a useful tool to quantify molecular dynamics in biophysics and biochemistry, such as protein-protein interactions, protein–DNA interactions, and protein conformational changes. For monitoring the complex formation between two molecules, one of them is labeled with a donor and the other with an acceptor. The FRET efficiency is measured and used to identify interactions between the labeled complexes. There are several ways of measuring the FRET efficiency by monitoring changes in the fluorescence emitted by the donor or the acceptor.
Sensitized emission.
One method of measuring FRET efficiency is to measure the variation in acceptor emission intensity. When the donor and acceptor are in proximity (1–10 nm) due to the interaction of the two molecules, the acceptor emission will increase because of the intermolecular FRET from the donor to the acceptor. For monitoring protein conformational changes, the target protein is labeled with a donor and an acceptor at two loci. When a twist or bend of the protein brings the change in the distance or relative orientation of the donor and acceptor, FRET change is observed. If a molecular interaction or a protein conformational change is dependent on ligand binding, this FRET technique is applicable to fluorescent indicators for the ligand detection.
Photobleaching FRET.
FRET efficiencies can also be inferred from the photobleaching rates of the donor in the presence and absence of an acceptor. This method can be performed on most fluorescence microscopes; one simply shines the excitation light (of a frequency that will excite the donor but not the acceptor significantly) on specimens with and without the acceptor fluorophore and monitors the donor fluorescence (typically separated from acceptor fluorescence using a bandpass filter) over time. The timescale is that of photobleaching, which is seconds to minutes, with fluorescence in each curve being given by
formula_37
where formula_38 is the photobleaching decay time constant and depends on whether the acceptor is present or not. Since photobleaching consists in the permanent inactivation of excited fluorophores, resonance energy transfer from an excited donor to an acceptor fluorophore prevents the photobleaching of that donor fluorophore, and thus high FRET efficiency leads to a longer photobleaching decay time constant:
formula_39
where formula_40 and formula_38 are the photobleaching decay time constants of the donor in the presence and in the absence of the acceptor respectively. (Notice that the fraction is the reciprocal of that used for lifetime measurements).
This technique was introduced by Jovin in 1989. Its use of an entire curve of points to extract the time constants can give it accuracy advantages over the other methods. Also, the fact that time measurements are over seconds rather than nanoseconds makes it easier than fluorescence lifetime measurements, and because photobleaching decay rates do not generally depend on donor concentration (unless acceptor saturation is an issue), the careful control of concentrations needed for intensity measurements is not needed. It is, however, important to keep the illumination the same for the with- and without-acceptor measurements, as photobleaching increases markedly with more intense incident light.
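A minimal sketch of this analysis is shown below. It assumes hypothetical donor photobleaching traces stored as NumPy arrays, fits the single-exponential decay model given above to each trace with SciPy, and combines the two fitted time constants into a FRET efficiency; the synthetic data and parameter values are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def bleach_model(t, background, amplitude, tau_pb):
    """Single-exponential photobleaching decay of donor fluorescence."""
    return background + amplitude * np.exp(-t / tau_pb)

def fitted_tau(t, intensity):
    """Fit the decay model and return the photobleaching time constant."""
    p0 = (intensity.min(), intensity.max() - intensity.min(), t.mean())
    popt, _ = curve_fit(bleach_model, t, intensity, p0=p0)
    return popt[2]

def fret_efficiency_from_photobleaching(t, donor_only, donor_acceptor):
    """E = 1 - tau_pb / tau_pb', where tau_pb' is measured with the acceptor present."""
    tau_without = fitted_tau(t, donor_only)       # tau_pb (acceptor absent)
    tau_with = fitted_tau(t, donor_acceptor)      # tau_pb' (longer when FRET occurs)
    return 1.0 - tau_without / tau_with

# Hypothetical synthetic traces: bleaching is slower when the acceptor is present.
t = np.linspace(0.0, 120.0, 200)                        # seconds
donor_only = bleach_model(t, 50.0, 1000.0, 20.0)        # tau_pb  = 20 s
donor_acceptor = bleach_model(t, 50.0, 1000.0, 50.0)    # tau_pb' = 50 s
print(fret_efficiency_from_photobleaching(t, donor_only, donor_acceptor))  # ~0.6
```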
Lifetime measurements.
FRET efficiency can also be determined from the change in the fluorescence lifetime of the donor. The lifetime of the donor will decrease in the presence of the acceptor. Lifetime measurements of the FRET-donor are used in fluorescence-lifetime imaging microscopy (FLIM).
Single-molecule FRET (smFRET).
smFRET is a group of methods using various microscopic techniques to measure a pair of donor and acceptor fluorophores that are excited and detected at the single molecule level. In contrast to "ensemble FRET" or "bulk FRET" which provides the FRET signal of a high number of molecules, single-molecule FRET is able to resolve the FRET signal of each individual molecule. The variation of the smFRET signal is useful to reveal kinetic information that an ensemble measurement cannot provide, especially when the system is under equilibrium. Heterogeneity among different molecules can also be observed. This method has been applied in many measurements of biomolecular dynamics such as DNA/RNA/protein folding/unfolding and other conformational changes, and intermolecular dynamics such as reaction, binding, adsorption, and desorption that are particularly useful in chemical sensing, bioassays, and biosensing.
Fluorophores used for FRET.
CFP-YFP pairs.
One common pair of fluorophores for biological use is a cyan fluorescent protein (CFP) – yellow fluorescent protein (YFP) pair. Both are color variants of green fluorescent protein (GFP). Labeling with organic fluorescent dyes requires purification, chemical modification, and intracellular injection of a host protein. GFP variants can be attached to a host protein by genetic engineering, which can be more convenient. Additionally, a fusion of CFP and YFP ("tandem-dimer") linked by a protease cleavage sequence can be used as a cleavage assay.
BRET.
A limitation of FRET performed with fluorophore donors is the requirement for external illumination to initiate the fluorescence transfer, which can lead to background noise in the results from direct excitation of the acceptor or to photobleaching. To avoid this drawback, bioluminescence resonance energy transfer (or BRET) has been developed. This technique uses a bioluminescent luciferase (typically the luciferase from "Renilla reniformis") rather than CFP to produce an initial photon emission compatible with YFP.
BRET has also been implemented using a different luciferase enzyme, engineered from the deep-sea shrimp "Oplophorus gracilirostris". This luciferase is smaller (19 kD) and brighter than the more commonly used luciferase from "Renilla reniformis", and has been named NanoLuc or NanoKAZ. Promega has developed a patented substrate for NanoLuc called furimazine, though other valuable coelenterazine substrates for NanoLuc have also been published. A split-protein version of NanoLuc developed by Promega has also been used as a BRET donor in experiments measuring protein-protein interactions.
Homo-FRET.
In general, "FRET" refers to situations where the donor and acceptor proteins (or "fluorophores") are of two different types. In many biological situations, however, researchers might need to examine the interactions between two, or more, proteins of the same type—or indeed the same protein with itself, for example if the protein folds or forms part of a polymer chain of proteins or for other questions of quantification in biological cells or "in vitro" experiments.
Obviously, spectral differences will not be the tool used to detect and measure FRET, as both the acceptor and donor protein emit light with the same wavelengths. Yet researchers can detect differences in the polarisation between the light which excites the fluorophores and the light which is emitted, in a technique called FRET anisotropy imaging; the level of quantified anisotropy (difference in polarisation between the excitation and emission beams) then becomes an indicative guide to how many FRET events have happened.
In the field of nano-photonics, FRET can be detrimental if it funnels excitonic energy to defect sites, but it is also essential to charge collection in organic and quantum-dot-sensitized solar cells, and various FRET-enabled strategies have been proposed for different opto-electronic devices. It is then essential to understand how isolated nano-emitters behave when they are stacked in a dense layer. Nanoplatelets are especially promising candidates for strong homo-FRET exciton diffusion because of their strong in-plane dipole coupling and low Stokes shift. Fluorescence microscopy study of such single chains demonstrated that energy transfer by FRET between neighbor platelets causes energy to diffuse over a typical 500-nm length (about 80 nano emitters), and the transfer time between platelets is on the order of 1 ps.
Others.
Various compounds besides fluorescent proteins can also be used as FRET donor–acceptor pairs.
Applications.
The applications of fluorescence resonance energy transfer (FRET) have expanded tremendously in the last 25 years, and the technique has become a staple in many biological and biophysical fields. FRET can be used as a spectroscopic ruler to measure distance and detect molecular interactions in a number of systems and has applications in biology and biochemistry.
Proteins.
FRET is often used to detect and track interactions between proteins. Additionally, FRET can be used to measure distances between domains in a single protein by tagging different regions of the protein with fluorophores and measuring emission to determine distance. This provides information about protein conformation, including secondary structures and protein folding. This extends to tracking functional changes in protein structure, such as conformational changes associated with myosin activity. Applied in vivo, FRET has been used to detect the location and interactions of cellular structures including integrins and membrane proteins.
Membranes.
FRET can be used to observe membrane fluidity, movement and dispersal of membrane proteins, membrane lipid-protein and protein-protein interactions, and successful mixing of different membranes. FRET is also used to study formation and properties of membrane domains and lipid rafts in cell membranes and to determine surface density in membranes.
Chemosensor.
FRET-based probes can detect the presence of various molecules: the probe's structure is affected by small molecule binding or activity, which can turn the FRET system on or off. This is often used to detect anions, cations, small uncharged molecules, and some larger biomacromolecules as well. Similarly, FRET systems have been designed to detect changes in the cellular environment due to such factors as pH, hypoxia, or mitochondrial membrane potential.
Signaling pathways.
Another use for FRET is in the study of metabolic or signaling pathways. For example, FRET and BRET have been used in various experiments to characterize G-protein coupled receptor activation and consequent signaling mechanisms. Other examples include the use of FRET to analyze such diverse processes as bacterial chemotaxis and caspase activity in apoptosis.
Proteins and nucleotides folding kinetics.
The folding dynamics of proteins, DNA, RNA, and other polymers have been measured using FRET. Usually, these systems are at equilibrium, and their kinetics are hidden in ensemble measurements. However, the kinetics can be measured with single-molecule FRET and proper placement of the acceptor and donor dyes on the molecules. See single-molecule FRET for a more detailed description.
Other applications.
In addition to common uses previously mentioned, FRET and BRET are also effective in the study of biochemical reaction kinetics. FRET is increasingly used for monitoring pH dependent assembly and disassembly and is valuable in the analysis of nucleic acids encapsulation. This technique can be used to determine factors affecting various types of nanoparticle formation as well as the mechanisms and effects of nanomedicines.
Other methods.
A different, but related, mechanism is Dexter electron transfer.
An alternative method of detecting protein–protein proximity is bimolecular fluorescence complementation (BiFC), in which two parts of a fluorescent protein are each fused to other proteins. When these two parts meet, they form a fluorophore on a timescale of minutes or hours.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E"
},
{
"math_id": 1,
"text": "E = \\frac{k_\\text{ET}}{k_f + k_\\text{ET} + \\sum{k_i}},"
},
{
"math_id": 2,
"text": "k_f"
},
{
"math_id": 3,
"text": "k_\\text{ET}"
},
{
"math_id": 4,
"text": "k_i"
},
{
"math_id": 5,
"text": "r"
},
{
"math_id": 6,
"text": "E = \\frac{1}{1 + (r/R_0)^6}"
},
{
"math_id": 7,
"text": "R_0"
},
{
"math_id": 8,
"text": " {R_0}^6 = \\frac{2.07}{128 \\, \\pi^5 \\, N_A} \\, \\frac{\\kappa^2 \\,Q_D}{n^4} J "
},
{
"math_id": 9,
"text": "Q_\\text{D}"
},
{
"math_id": 10,
"text": "\\kappa^2"
},
{
"math_id": 11,
"text": "n"
},
{
"math_id": 12,
"text": "N_\\text{A}"
},
{
"math_id": 13,
"text": "J"
},
{
"math_id": 14,
"text": " J = \\frac{\\int f_\\text{D}(\\lambda) \\epsilon_\\text{A}(\\lambda) \\lambda^4 \\, d\\lambda}{\\int f_\\text{D}(\\lambda) \\, d\\lambda} = \\int \\overline{f_\\text{D}}(\\lambda) \\epsilon_\\text{A}(\\lambda) \\lambda^4 \\, d\\lambda,"
},
{
"math_id": 15,
"text": "f_\\text{D}"
},
{
"math_id": 16,
"text": "\\overline{f_\\text{D}}"
},
{
"math_id": 17,
"text": "\\epsilon_\\text{A}"
},
{
"math_id": 18,
"text": "\\kappa = \\hat\\mu_\\text{A} \\cdot \\hat\\mu_\\text{D} - 3 (\\hat\\mu_\\text{D} \\cdot \\hat R) (\\hat\\mu_\\text{A} \\cdot \\hat R), "
},
{
"math_id": 19,
"text": "\\hat\\mu_i"
},
{
"math_id": 20,
"text": "\\hat R"
},
{
"math_id": 21,
"text": "M^{-1} cm^{-1}"
},
{
"math_id": 22,
"text": "M"
},
{
"math_id": 23,
"text": "mol/L"
},
{
"math_id": 24,
"text": "M^{-1} cm^{-1} nm^4"
},
{
"math_id": 25,
"text": "10^{-10}m"
},
{
"math_id": 26,
"text": " R_0"
},
{
"math_id": 27,
"text": " {R_0}^6 = 8.785 \\times 10^{-5} \\frac{\\kappa^2 \\,Q_D}{n^4} J "
},
{
"math_id": 28,
"text": "^6"
},
{
"math_id": 29,
"text": "k_\\text{ET} = (\\frac{R_0}{r})^6 \\, \\frac{1}{\\tau_D}"
},
{
"math_id": 30,
"text": "\\tau_D"
},
{
"math_id": 31,
"text": "E = 1 - \\tau'_\\text{D}/\\tau_\\text{D},"
},
{
"math_id": 32,
"text": "\\tau_\\text{D}'"
},
{
"math_id": 33,
"text": "\\tau_\\text{D}"
},
{
"math_id": 34,
"text": "E = 1 - F_\\text{D}'/F_\\text{D},"
},
{
"math_id": 35,
"text": "F_\\text{D}'"
},
{
"math_id": 36,
"text": "F_\\text{D}"
},
{
"math_id": 37,
"text": "\\text{background} + \\text{constant} \\cdot e^{-\\text{time}/\\tau_\\text{pb}},"
},
{
"math_id": 38,
"text": "\\tau_\\text{pb}"
},
{
"math_id": 39,
"text": " E = 1 - \\tau_\\text{pb}/\\tau_\\text{pb}',"
},
{
"math_id": 40,
"text": "\\tau_\\text{pb}'"
}
] | https://en.wikipedia.org/wiki?curid=1079466 |
1079517 | Algebraic code-excited linear prediction | Speech coding standard
Algebraic code-excited linear prediction (ACELP) is a speech coding algorithm in which a limited set of pulses is distributed as excitation to a linear prediction filter. It is a linear predictive coding (LPC) algorithm that is based on the code-excited linear prediction (CELP) method and has an algebraic structure. ACELP was developed in 1989 by the researchers at the Université de Sherbrooke in Canada.
The ACELP method is widely employed in current speech coding standards such as AMR, EFR, AMR-WB (G.722.2), VMR-WB, EVRC, EVRC-B, SMV, TETRA, PCS 1900, MPEG-4 CELP and ITU-T G-series standards G.729, G.729.1 (first coding stage) and G.723.1. The ACELP algorithm is also used in the proprietary ACELP.net codec. Audible Inc. uses a modified version for its audiobooks. It is also used in conference-calling software and speech compression tools, and has become one of the 3GPP formats.
The ACELP patent expired in 2018 and is now royalty-free.
Features.
The main advantage of ACELP is that the algebraic codebook it uses can be made very large (> 50 bits) without running into storage (RAM/ROM) or complexity (CPU time) problems.
Technology.
The ACELP algorithm is based on that used in code-excited linear prediction (CELP), but ACELP codebooks have a specific algebraic structure imposed upon them.
A 16-bit algebraic codebook is used in the innovative codebook search, the aim of which is to find the best innovation and gain parameters. The innovation vector contains, at most, four non-zero pulses.
In ACELP, a block of "N" speech samples is synthesized by filtering an appropriate innovation sequence from a codebook, scaled by a gain factor "g""c", through two time-varying filters.
The long-term (pitch) synthesis filter is given by:
formula_0
The short-term synthesis filter is given by:
formula_1
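The cascade of the two synthesis filters can be illustrated with a short Python sketch using SciPy's lfilter. The pitch lag "T", pitch gain, LPC coefficients and the sparse algebraic excitation below are illustrative assumptions, not values taken from any particular codec standard.

```python
import numpy as np
from scipy.signal import lfilter

# Illustrative parameters only (not from any specific standard).
N = 40                      # subframe length in samples
T, g_p = 34, 0.8            # pitch lag (samples) and long-term (pitch) gain
lpc_den = [1.0, -1.2, 0.5]  # A(z) = 1 + a_1 z^-1 + a_2 z^-2 (short-term filter 1/A(z))
g_c = 1.5                   # innovation (fixed-codebook) gain

# Sparse algebraic innovation: a few signed unit pulses, as in an ACELP codebook.
c = np.zeros(N)
c[[3, 12, 25, 36]] = [1.0, -1.0, 1.0, -1.0]

# Long-term (pitch) synthesis filter 1/B(z) = 1/(1 - g_p z^-T).
pitch_den = np.zeros(T + 1)
pitch_den[0], pitch_den[T] = 1.0, -g_p
excitation = lfilter([1.0], pitch_den, g_c * c)

# Short-term (LPC) synthesis filter 1/A(z) produces the synthesized speech block.
speech = lfilter([1.0], lpc_den, excitation)
print(speech[:8])
```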
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac1{B(z)} = \\frac1{ 1 - g_p z^{-T} }"
},
{
"math_id": 1,
"text": "\\frac1{A(z)} = \\frac1{ 1 + \\sum_{i=1}^P a_i z^{-i} }"
}
] | https://en.wikipedia.org/wiki?curid=1079517 |
10796362 | Immunoglobulin light chain | Small antibody polypeptide subunit (immunoglobin)
The immunoglobulin light chain is the small polypeptide subunit of an antibody (immunoglobulin).
A typical antibody is composed of two immunoglobulin (Ig) heavy chains and two Ig light chains.
In humans.
There are two types of light chain in humans: the kappa (κ) chain and the lambda (λ) chain.
Antibodies are produced by B lymphocytes, each expressing only one class of light chain. Once set, light chain class remains fixed for the life of the B lymphocyte. In a healthy individual, the total kappa-to-lambda ratio is roughly 2:1 in serum (measuring intact whole antibodies) or 1:1.5 if measuring free light chains, with a highly divergent ratio indicative of neoplasm. The free light chain ratio ranges from 0.26 to 1.65. Both the kappa and the lambda chains can increase proportionately, maintaining a normal ratio. This is usually indicative of something other than a blood cell dyscrasia, such as kidney disease.
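The free light chain ratio mentioned above is a simple quotient; the sketch below (purely illustrative, not clinical guidance, with hypothetical laboratory values) computes it and checks it against the quoted reference interval.

```python
def free_light_chain_ratio(kappa_mg_per_l, lambda_mg_per_l):
    """Serum free kappa/lambda ratio; the reference interval quoted above is 0.26-1.65."""
    ratio = kappa_mg_per_l / lambda_mg_per_l
    within_reference = 0.26 <= ratio <= 1.65
    return ratio, within_reference

# Hypothetical laboratory values in mg/L.
print(free_light_chain_ratio(19.0, 25.0))   # (0.76, True)  -- within the reference interval
print(free_light_chain_ratio(150.0, 12.0))  # (12.5, False) -- highly divergent ratio
```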
In other animals.
The immunoglobulin light chain genes in tetrapods can be classified into three distinct groups: kappa (κ), lambda (λ), and sigma (σ). The divergence of the κ, λ, and σ isotypes preceded the radiation of tetrapods. The σ isotype was lost after the evolution of the amphibian lineage and before the emergence of the reptilian lineage.
Other types of light chains can be found in lower vertebrates, such as the Ig-Light-Iota chain of Chondrichthyes and Teleostei.
Camelids are unique among mammals as they also have fully functional antibodies which have two heavy chains, but lack the light chains usually paired with each heavy chain.
Sharks also possess, as part of their adaptive immune systems, a functional heavy-chain homodimeric antibody-like molecule referred to as IgNAR (immunoglobulin new antigen receptor). IgNAR is believed to have never had an associated light chain, in contrast with the understanding that the heavy-chain-only antibodies in camelids may have lost their light chain partners through evolution.
Structure.
Only one type of light chain is present in a typical antibody, thus the two light chains of an individual antibody are identical.
Each light chain is composed of two tandem immunoglobulin domains: one constant (CL) domain and one variable (VL) domain that participates in antigen binding.
The approximate length of a light chain protein is from 211 to 217 amino acids. The constant region determines what class (kappa or lambda) the light chain is. The lambda class has 4 subtypes (λ1, λ2, λ3, and λ7).
In pathology.
Individual B-cells in lymphoid tissue possess either kappa or lambda light chains, but never both together.
Using immunohistochemistry, it is possible to determine the relative abundance of B-cells expressing kappa and lambda light chains. If the lymph node or similar tissue is reactive, or otherwise benign, it should possess a mixture of kappa positive and lambda positive cells. If, however, one type of light chain is significantly more common than the other, the cells are likely all derived from a small clonal population, which may indicate a malignant condition, such as B-cell lymphoma.
Free immunoglobulin light chains secreted by neoplastic plasma cells, such as in multiple myeloma, can be called Bence Jones protein when detected in the urine, although there is a trend to refer to these as urinary free light chains.
Increased levels of free Ig light chains have also been detected in various inflammatory diseases. It is important to note that, in contrast to increased levels in lymphoma patients, these Ig light chains are polyclonal. Recent studies have shown that these Ig light chains can bind to mast cells and, using their ability to bind antigen, facilitate activation of these mast cells. Activation of mast cells results in the release of various pro-inflammatory mediators which are believed to contribute to the development of the inflammatory disease. Recent studies have shown that Ig light chains not only activate mast cells but also dorsal root ganglia and neutrophils, expanding their possible role as mediators in inflammatory disease.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\lambda"
}
] | https://en.wikipedia.org/wiki?curid=10796362 |
10796713 | Balanced flow | Model of atmospheric motion
In atmospheric science, balanced flow is an idealisation of atmospheric motion. The idealisation consists in considering the behaviour of one isolated parcel of air having constant density, its motion on a horizontal plane subject to selected forces acting on it and, finally, steady-state conditions.
Balanced flow is often an accurate approximation of the actual flow, and is useful in improving the qualitative understanding and interpretation of atmospheric motion.
In particular, the balanced-flow speeds can be used as estimates of the wind speed for particular arrangements of the atmospheric pressure on Earth's surface.
The momentum equations in natural coordinates.
Trajectories.
The momentum equations are written primarily for the generic trajectory of a packet of flow travelling on a horizontal plane and taken at a certain elapsed time called "t". The position of the packet is defined by the distance on the trajectory "s"="s"("t") which it has travelled by time "t". In reality, however, the trajectory is the outcome of the balance of forces upon the particle. In this section we assume it is known from the start, for convenience of presentation. When we consider the motion determined by the forces selected next, we will have clues about which type of trajectory fits the particular balance of forces.
The trajectory at a position "s" has one tangent unit vector s that invariably points in the direction of growing "s"'s, as well as one unit vector n, perpendicular to s, that points towards the local centre of curvature O.
The centre of curvature is found on the 'inner side' of the bend, and can shift across either side of the trajectory according to the shape of it.
The distance between the parcel position and the centre of curvature is the radius of curvature "R" at that position.
The radius of curvature approaches an infinite length at the points where the trajectory becomes straight and the positive orientation of n is not determined in this particular case (discussed in geostrophic flows).
The frame of reference (s,n) is shown by the red arrows in the figure. This frame is termed natural or intrinsic because the axes continuously adjust to the moving parcel, and so they are the most closely connected to its fate.
Kinematics.
The velocity vector (V) is oriented like s and has intensity (speed) "V" = d"s"/d"t". This speed is always a positive quantity, since any parcel moves along its own trajectory and, for increasing times (d"t">0), the distance travelled increases as well (d"s">0).
The acceleration vector of the parcel is decomposed in the "tangential" acceleration parallel to s and in the "centripetal" acceleration along positive n. The tangential acceleration only changes the speed "V" and is equal to D"V"/D"t", where big d's denote the material derivative. The centripetal acceleration always points towards the centre of curvature O and only changes the direction s of the forward displacement while the parcel moves on.
Forces.
In the balanced-flow idealization we consider a three-way balance of forces, namely: the pressure gradient force, the Coriolis force, and the frictional force.
In the fictitious situation drawn in the figure, the pressure force pushes the parcel forward along the trajectory and inward with respect to the bend; the Coriolis force pushes inwards (outwards) of the bend in the northern (southern) hemisphere; and friction pulls (necessarily) rearwards.
Governing equations.
For the dynamical equilibrium of the parcel, either component of acceleration times the parcel's mass is equal to the components of the external forces acting in the same direction.
As the equations of equilibrium for the parcel are written in natural coordinates, the component equations for the horizontal momentum per unit mass are expressed as follows:
formula_0
formula_1
in the forward and sideway directions respectively, where ρ is the density of air.
The terms can be broken down as follows: formula_2 is the tangential acceleration of the parcel; formula_3 is the component of the pressure force per unit mass along the trajectory; formula_4 is the deceleration due to friction; formula_5 is the centripetal acceleration; formula_6 is the component of the pressure force per unit mass perpendicular to the trajectory; and formula_7 is the Coriolis force per unit mass (the ambiguity of sign depends on the mutual orientation of the Coriolis force and the unit vector n).
Steady-state assumption.
In the following discussions, we consider steady-state flow.
The speed cannot thus change with time, and the component forces producing "tangential acceleration" need to sum up to zero.
In other words, active and resistive forces must balance out in the forward direction in order that formula_8.
Importantly, no assumption is made yet on whether the right-hand side forces are of either significant or negligible magnitude there. Moreover, trajectories and streamlines coincide in steady-state conditions, and the pairs of adjectives tangential/normal and streamwise/cross-stream become interchangeable. An atmospheric flow in which the tangential acceleration is not negligible is called allisobaric.
The velocity direction can still change in space along the trajectory that, excluding inertial flows, is set by the pressure pattern.
General framework.
The schematisations.
Omitting specific terms in the tangential and normal balance equations, we obtain one of the five following idealized flows: antitriptic, geostrophic, cyclostrophic, inertial, and gradient flows.
By reasoning on the balance of the remaining terms, we can understand which arrangement of the pressure field supports such a flow, along which path the parcel of air travels, and with which speed it does so.
In each idealisation a different subset of contributions is retained: antitriptic flow keeps friction and the pressure force; geostrophic flow keeps the pressure and Coriolis forces; cyclostrophic flow keeps the pressure force and the curvature (centripetal) term; inertial flow keeps the Coriolis force and the curvature term; and gradient flow keeps the pressure force, the Coriolis force and the curvature term together.
The Ekman layer's schematisation is also mentioned for completeness, and is treated separately since it involves the internal friction of air rather than that between air and ground.
The limitations.
Vertical differences of air properties.
The equations were said to apply to parcels of air moving on horizontal planes.
Indeed, when one considers a column of atmosphere, it is seldom the case that the air density is the same all the way up, since temperature and moisture content, hence density, change with height.
Every parcel within such a column moves according to the air properties at its own height.
Homogeneous sheets of air may slide one over the other, so long as stable stratification of lighter air on top of heavier air leads to well-separated layers.
If some air happens to be heavier/lighter than that in the surroundings, though, vertical motions do occur and modify the horizontal motion in turn.
In nature downdrafts and updrafts can sometimes be more rapid and intense than the motion parallel to the ground.
The balanced-flow equations do not contain either a force representing the sinking/buoyancy action or the vertical component of velocity.
Consider also that the pressure is normally known through instruments (barometers) near the ground/sea level.
The isobars of the ordinary weather charts summarise these pressure measurements, adjusted to the mean sea level for uniformity of presentation, at one particular time.
Such values represent the weight of the air column overhead without indicating the details of the changes of the air's specific weight overhead.
Also, by Bernoulli's theorem, the measured pressure is not exactly the weight of the air column, should significant vertical motion of air occur.
Thus, the pressure force acting on individual parcels of air at different heights is not really known through the measured values.
When using information from a surface-pressure chart in balanced-flow formulations, the forces are best viewed as applied to the entire air column.
A difference of air speed within every air column invariably occurs near the ground/sea, however, even if the air density is the same everywhere and no vertical motion occurs.
There, the roughness of the contact surface slows down the air motion above, and this retarding effect peters out with height.
See, for example, planetary boundary layer.
Frictional antitriptic flow applies near the ground, while the other schematisations apply far enough from the ground not to feel its "braking" effect ("free-air flow").
This is a reason to keep the two groups conceptually separated.
The transition from the low-altitude to the high-altitude schematisations is bridged by Ekman-like schematisations, where air-to-air friction, Coriolis and pressure forces are in balance.
In summary, the balanced-flow speeds apply well to air columns that can be regarded as homogeneous (constant density, no vertical motion) or, at most, stably stratified (non-constant density, yet no vertical motion).
An uncertainty in the estimate arises if we are not able to verify these conditions to occur.
They also cannot describe the motion of the entire column from the contact surface with the Earth up to the outer atmosphere, because of the on-off handling of the friction forces.
Horizontal differences of air properties.
Even if air columns are homogeneous with height, the density of each column can change from location to location, firstly since air masses have different temperatures and moisture content depending on their origin; and then since air masses modify their properties as they flow over Earth's surface.
For example, in extra-tropical cyclones the air circulating around a pressure low typically comes with a sector of warmer temperature wedged within colder air.
The gradient-flow model of cyclonic circulation does not allow for these features.
Balanced-flow schematisations can be used to estimate the wind speed in air flows covering several degrees of latitude of Earth's surface.
However, in this case assuming a constant Coriolis parameter is unrealistic, and the balanced-flow speeds should only be applied locally.
See Rossby waves as an example of when changes of latitude are dynamically effective.
Unsteadiness.
The balanced-flow approach identifies typical trajectories and steady-state wind speeds derived from balance-giving pressure patterns.
In reality, pressure patterns and the motion of air masses are tied together, since accumulation (or density increase) of air mass somewhere increases the pressure on the ground and vice versa.
Any new pressure gradient will cause a new displacement of air, and thus a continuous rearrangement.
As weather itself demonstrates, steady-state conditions are exceptional.
Since friction, pressure gradient and Coriolis forces do not necessarily balance out, air masses actually accelerate and decelerate, so the actual speed depends on its past values too.
As seen next, the neat arrangement of pressure fields and flow trajectories, either parallel or at a right angle, in balanced-flow follows from the assumption of steady flow.
The steady-state balanced-flow equations do not explain how the flow was set in motion in the first place.
Also, if pressure patterns change quickly enough, balanced-flow speeds cannot help track the air parcels over long distances, simply because the forces that the parcel feels have changed while it is displaced.
The particle will end up somewhere else compared to the case that it had followed the original pressure pattern.
In summary, the balanced-flow equations give consistent steady-state wind speeds that can estimate the situation at a certain moment and a certain place.
These speeds cannot be confidently used to understand where the air is moving to in the long run, because the forcing naturally changes or the trajectories are skewed with respect to the pressure pattern.
Antitriptic flow.
Antitriptic flow describes a steady-state flow in a spatially varying pressure field when the frictional force balances the pressure gradient force, while the Coriolis force and the centripetal acceleration are both negligible.
The name comes from the Greek words 'anti' (against, counter-) and 'triptein' (to rub) – meaning that this kind of flow proceeds by countering friction.
Formulation.
In the streamwise momentum equation, friction balances the pressure gradient component without being negligible (so that "K"≠0).
The pressure gradient vector is only made by the component along the trajectory tangent s.
The balance in the streamwise direction determines the antitriptic speed as formula_9
A positive speed is guaranteed by the fact that antitriptic flows move along the downward slope of the pressure field, so that mathematically formula_10.
Provided the product "KV" is constant and ρ stays the same, "p" turns out to vary linearly with "s" and the trajectory is such that the parcel feels equal pressure drops while it covers equal distances.
In the cross-stream momentum equation, the Coriolis force and normal pressure gradient are both negligible, leading to no net bending action.
As the centrifugal term formula_11 vanishes while the speed is non-zero, the radius of curvature goes to infinity, and the trajectory must be a straight line.
In addition, the trajectory is perpendicular to the isobars since formula_12.
Since this condition occurs when the n direction is that of an isobar, s is perpendicular to the isobars.
Thus, antitriptic isobars need to be equispaced circles or straight lines.
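As a rough numerical illustration of the streamwise balance above, the sketch below evaluates the antitriptic speed; the friction coefficient "K", the density and the along-flow pressure gradient are all assumed, hypothetical values.

```python
def antitriptic_speed(dp_ds, rho, K):
    """V = -(1/(K*rho)) * dp/ds; positive because the flow runs down the
    pressure slope along the trajectory (dp/ds < 0)."""
    return -dp_ds / (K * rho)

# Illustrative values only: 1 hPa per 100 km along the flow,
# K = 1e-4 s^-1 (assumed linear friction coefficient), rho = 1.2 kg/m^3.
print(antitriptic_speed(dp_ds=-100.0 / 100e3, rho=1.2, K=1e-4))  # ~8.3 m/s
```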
Application.
Antitriptic flow is probably the least used of the five balanced-flow idealizations, because the conditions are quite strict. However, it is the only one for which the friction underneath is regarded as a primary contribution.
Therefore, the antitriptic schematisation applies to flows that take place near the Earth's surface, in a region known as constant-stress layer.
In reality, the flow in the constant-stress layer has a component parallel to the isobars too, since it is often driven by the faster flow overhead.
This occurs owing to the so-called "free-air flow" at high altitudes, which tends to be parallel to the isobars, and to the Ekman flow at intermediate altitudes, which causes a reduction of the free-air speed and a turning of direction while approaching the surface.
Because the Coriolis effects are neglected, antitriptic flow occurs either near the equator (irrespective of the motion's length scale) or elsewhere whenever the Ekman number of the flow is large (normally for small-scale processes), as opposed to geostrophic flows.
Antitriptic flow can be used to describe some boundary-layer phenomena such as sea breezes, Ekman pumping, and the low level jet of the Great Plains.
Geostrophic flow.
Geostrophic flow describes a steady-state flow in a spatially varying pressure field when the frictional force and the centripetal acceleration are both negligible and the Coriolis force balances the pressure gradient force.
This condition is called geostrophic equilibrium or geostrophic balance (also known as geostrophy).
The name 'geostrophic' stems from the Greek words 'ge' (Earth) and 'strephein' (to turn).
This etymology does not suggest turning of trajectories, rather a rotation around the Earth.
Formulation.
In the streamwise momentum equation, negligible friction is expressed by "K"=0 and, for steady-state balance, negligible streamwise pressure force follows.
The speed cannot be determined by this balance.
However, formula_13 entails that the trajectory must run along isobars, else the moving parcel would experience changes of pressure like in antitriptic flows.
Absence of bending is thus only possible if the isobars are straight lines in the first instance.
So, geostrophic flows take the appearance of a stream channelled along such isobars.
In the cross-stream momentum equation, non-negligible Coriolis force is balanced by the pressure force, in a way that the parcel does not experience any bending action.
Since the trajectory does not bend, the positive orientation of n cannot be determined for lack of a centre of curvature.
The signs of the normal vector components become uncertain in this case.
However, the pressure force must exactly counterbalance the Coriolis force anyway, so the parcel of air needs to travel with the Coriolis force contrary to the decreasing sideways slope of pressure.
Therefore, irrespective of the uncertainty in formally setting the unit vector n, the parcel always travels with the lower pressure at its left (right) in the northern (southern) hemisphere.
The geostrophic speed is formula_14
The expression of geostrophic speed resembles that of antitriptic speed: here the speed is determined by the magnitude of the pressure gradient across (instead of along) the trajectory that develops along (instead of across) an isobar.
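A back-of-the-envelope evaluation of this expression is straightforward; the sketch below uses assumed values for the air density, the cross-stream pressure gradient and the Coriolis parameter (the same orders of magnitude used in the comparison at the end of this article).

```python
def geostrophic_speed(dp_dn, rho, f):
    """V = (1/rho) * |(1/f) * dp/dn| from the cross-stream balance."""
    return abs(dp_dn / (rho * f))

# Assumed values: 1 hPa per 100 km, rho = 1.167 kg/m^3,
# f = 1.15e-4 s^-1 (roughly 45 degrees of latitude).
print(geostrophic_speed(dp_dn=100.0 / 100e3, rho=1.167, f=1.15e-4))  # ~7.5 m/s
```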
Application.
Modelers, theoreticians, and operational forecasters frequently make use of geostrophic/quasi-geostrophic approximation.
Because friction is unimportant, the geostrophic balance fits flows high enough above the Earth's surface.
Because the Coriolis force is relevant, it normally fits processes with small Rossby number, typically having large length scales.
Geostrophic conditions are also realised for flows having small Ekman number, as opposed to antitriptic conditions.
It is frequent that the geostrophic conditions develop between a well-defined pair of pressure high and low; or that a major geostrophic stream is flanked by several higher- and lower-pressure regions at either side of it (see images).
Although the balanced-flow equations do not allow for internal (air-to-air) friction, the flow directions in geostrophic streams and nearby rotating systems are also consistent with shear contact between those.
The speed of a geostrophic stream is larger (smaller) than that in the curved flow around a pressure low (high) with the same pressure gradient: this feature is explained by the more general gradient-flow schematisation.
This helps use the geostrophic speed as a back-of-the-envelope estimate of more complex arrangements—see also the balanced-flow speeds compared below.
The etymology and the pressure charts shown suggest that geostrophic flows may describe atmospheric motion at rather large scales, although not necessarily so.
Cyclostrophic flow.
Cyclostrophic flow describes a steady-state flow in a spatially varying pressure field when the frictional and Coriolis forces are both negligible and the centripetal acceleration is entirely sustained by the pressure gradient force.
Trajectories do bend. The name 'cyclostrophic' stems from the Greek words 'kyklos' (circle) and 'strephein' (to turn).
Formulation.
Like in geostrophic balance, the flow is frictionless and, for steady-state motion, the trajectories follow the isobars.
In the cross-stream momentum equation, only the Coriolis force is discarded, so that the centripetal acceleration is just the cross-stream pressure force per unit mass
formula_15
This implies that the trajectory is subject to a bending action, and that the cyclostrophic speed is
formula_16
So, the cyclostrophic speed is determined by the magnitude of the pressure gradient across the trajectory and by the radius of curvature of the isobar.
The flow is faster the farther it is from its centre of curvature, although the increase is less than linear.
Another implication of the cross-stream momentum equation is that a cyclostrophic flow can only develop next to a low-pressure area.
This is implied in the requirement that the quantity under the square root is positive.
Recall that the cyclostrophic trajectory was found to be an isobar.
Only if the pressure increases outwards from the centre of curvature is the pressure derivative negative and the square root well defined – the pressure at the centre of curvature must thus be a low.
The above mathematics gives no clue whether the cyclostrophic rotation ends up being clockwise or anticlockwise, meaning that the eventual arrangement is a consequence of effects not allowed for in the relationship, namely the rotation of the parent cell.
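A minimal numerical sketch of the cyclostrophic speed is given below; the dust-devil-like pressure drop, radius and density are purely illustrative assumptions, chosen only to show how the formula is used.

```python
import math

def cyclostrophic_speed(dp_dn, rho, radius):
    """V = sqrt(-(R/rho) * dp/dn); requires pressure increasing outwards from
    the centre of curvature, i.e. dp/dn < 0 with n pointing toward the centre."""
    if dp_dn >= 0:
        raise ValueError("Cyclostrophic balance needs a pressure low at the centre of curvature.")
    return math.sqrt(-radius * dp_dn / rho)

# Purely illustrative dust-devil-like numbers: a 1 hPa drop over 10 m, R = 10 m.
print(cyclostrophic_speed(dp_dn=-100.0 / 10.0, rho=1.1, radius=10.0))  # ~9.5 m/s
```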
Application.
The cyclostrophic schematisation is realistic when Coriolis and frictional forces are both negligible, that is for flows having large Rossby number and small Ekman number.
Coriolis effects are ordinarily negligible in lower latitudes or on smaller scales.
Cyclostrophic balance can be achieved in systems such as tornadoes, dust devils and waterspouts.
Cyclostrophic speed can also be seen as one of the contribution of the gradient balance-speed, as shown next.
Among the studies using the cyclostrophic schematisation,
Rennó and Bluestein use the cyclostrophic speed equation to construct a theory for waterspouts;
and Winn, Hunyady, and Aulich use the cyclostrophic approximation to compute the maximum tangential winds of a large tornado which passed near Allison, Texas on 8 June 1995.
Inertial flow.
Unlike all other flows, inertial balance implies a uniform pressure field.
In this idealisation, the pressure field is uniform, so the pressure gradient force vanishes, and friction is negligible as well.
The only remaining action is the Coriolis force, which imparts curvature to the trajectory.
Formulation.
As before, frictionless flow in steady-state conditions implies that formula_17.
However, in this case isobars are not defined in the first place.
We cannot anticipate anything about the trajectory from the arrangement of the pressure field.
In the cross-stream momentum equation, after omitting the pressure force, the centripetal acceleration is the Coriolis force per unit mass.
The sign ambiguity disappears, because the bending is solely determined by the Coriolis force, which alone sets the side of curvature – so this force always has a positive sign.
The inertial rotation will be clockwise (anticlockwise) in the northern (southern) hemisphere.
The momentum equation
formula_18
gives us the inertial speed
formula_19
The inertial speed's equation only helps determine either the speed or the radius of curvature once the other is given.
The trajectory resulting from this motion is also known as an inertial circle.
The balance-flow model gives no clue on the initial speed of an inertial circle, which needs to be triggered by some external perturbation.
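Given one of the two quantities, the other follows at once, together with the time needed to complete one inertial circle; the sketch below uses an assumed mid-latitude ocean current as an example.

```python
import math

def inertial_radius(speed, f):
    """R = V / |f| from the inertial balance V = |f| R."""
    return speed / abs(f)

def inertial_period(f):
    """Time to complete one inertial circle: 2*pi / |f|."""
    return 2.0 * math.pi / abs(f)

# Assumed mid-latitude example: a 0.1 m/s ocean current with f = 1.0e-4 s^-1.
print(inertial_radius(0.1, 1.0e-4))        # 1000 m radius
print(inertial_period(1.0e-4) / 3600.0)    # about 17.5 hours per circle
```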
Application.
Since atmospheric motion is due largely to pressure differences, inertial flow is not very applicable in atmospheric dynamics.
However, the inertial speed appears as a contribution to the solution of the gradient speed (see next).
Moreover, inertial flows are observed in ocean currents, where flows are less driven by pressure differences than in air because of the higher density—inertial balance can occur at depths such that the friction transmitted downwards by the surface winds vanishes.
Gradient flow.
Gradient flow is an extension of geostrophic flow as it accounts for curvature too, making this a more accurate approximation for the flow in the upper atmosphere.
However, mathematically gradient flow is slightly more complex, and geostrophic flow may be fairly accurate, so the gradient approximation is not as frequently mentioned.
Gradient flow is also an extension of the cyclostrophic balance, as it allows for the effect of the Coriolis force, making it suitable for flows with any Rossby number.
Finally, it is an extension of inertial balance, as it allows for a pressure force to drive the flow.
Formulation.
Like in all but the antitriptic balance, frictional and pressure forces are neglected in the streamwise momentum equation, so that it follows from formula_20 that the flow is parallel to the isobars.
Solving the full cross-stream momentum equation as a quadratic equation for "V" yields
formula_21
Not all solutions of the gradient wind speed yield physically plausible results: the right-hand side as a whole needs to be positive because of the definition of speed, and the quantity under the square root needs to be non-negative.
The first sign ambiguity follows from the mutual orientation of the Coriolis force and unit vector n, whereas the second follows from the square root.
The important cases of cyclonic and anticyclonic circulations are discussed next.
Pressure lows and cyclones.
For regular cyclones (air circulation around pressure lows), the pressure force is inward (positive term) and the Coriolis force outward (negative term) irrespective of the hemisphere.
The cross-trajectory momentum equation is
formula_22
Dividing both sides by |"f"|"V", one recognizes that
formula_23
whereby the cyclonic gradient speed "V" is smaller than the corresponding (less accurate) geostrophic estimate, and naturally approaches it as the radius of curvature grows (as the inertial speed goes to infinity).
In cyclones, therefore, curvature slows down the flow compared to the no-curvature value of geostrophic speed.
See also the balanced-flow speeds compared below.
The positive root of the cyclone equation is
formula_24
This speed is always well defined as the quantity under the square root is always positive.
Pressure highs and anticyclones.
In anticyclones (air circulation around pressure highs), the Coriolis force is always inward (and positive), and the pressure force outward (and negative) irrespective of the hemisphere.
The cross-trajectory momentum equation is
formula_25
Dividing both sides by |"f"|"V", we obtain
formula_26
whereby the anticyclonic gradient speed "V" is larger than the geostrophic value and approaches it as the radius of curvature becomes larger.
In anticyclones, therefore, the curvature of isobars speeds up the airflow compared to the (geostrophic) no-curvature value.
See also the balanced-flow speeds compared below.
There are two positive roots for V, but the only one consistent with the limit to geostrophic conditions is
formula_27
which requires formula_28 in order to be meaningful.
This condition can be translated into the requirement that, given a high-pressure zone with a constant pressure slope at a certain latitude, there must be a circular region around the high without wind.
On its circumference the air blows at half the corresponding inertial speed (at the cyclostrophic speed), and the radius is
formula_29
obtained by solving the above inequality for "R".
Outside this circle the speed decreases to the geostrophic value as the radius of curvature increases.
The width of this radius grows with the intensity of the pressure gradient.
Application.
Gradient flow is useful in studying atmospheric flow rotating around high- and low-pressure centers with small Rossby numbers.
This is the case where the radius of curvature of the flow about the pressure centers is small, and geostrophic flow no longer applies with a useful degree of accuracy.
Balanced-flow speeds compared.
Each balanced-flow idealisation gives a different estimate for the wind speed in the same conditions.
Here we focus on the schematisations valid in the upper atmosphere.
Firstly, imagine that a sample parcel of air flows 500 meters above the sea surface, so that frictional effects are already negligible.
The density of (dry) air at 500 meters above the mean sea level is 1.167 kg/m3 according to its equation of state.
Secondly, let the pressure force driving the flow be measured by a rate of change taken as 1 hPa/100 km (an average value).
Recall that it is not the value of the pressure that is important, but the slope with which it changes across the trajectory.
This slope applies equally well to the spacing of straight isobars (geostrophic flow) or of curved isobars (cyclostrophic and gradient flows).
Thirdly, let the parcel travel at a latitude of 45 degrees, either in the southern or northern hemisphere—so the Coriolis force is at play with a Coriolis parameter of 0.000115 Hz.
The balanced-flow speeds also change with the radius of curvature R of the trajectory/isobar.
In case of circular isobars, like in schematic cyclones and anticyclones, the radius of curvature is also the distance from the pressure low and high respectively.
Taking two such distances R as 100 km and 300 km, the speeds can be evaluated as shown in the sketch below (in m/s).
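The original comparison table is not reproduced here, but all the speeds follow directly from the formulas above; the minimal Python sketch below evaluates them with the density, pressure gradient and Coriolis parameter just stated. Its output is consistent with the narrative that follows (for example, the no-wind radius R* around a pressure high comes out near 260 km).

```python
import math

rho = 1.167           # air density at 500 m (kg/m^3)
dpdn = 100.0 / 100e3  # pressure gradient magnitude: 1 hPa per 100 km (Pa/m)
f = 1.15e-4           # Coriolis parameter at 45 degrees of latitude (1/s)

def geostrophic():
    return dpdn / (rho * f)

def cyclostrophic(R):
    return math.sqrt(R * dpdn / rho)

def inertial(R):
    return f * R

def gradient_cyclone(R):
    # Positive root: V = -V_inertial/2 + sqrt(V_inertial^2/4 + V_cyclostrophic^2).
    return -inertial(R) / 2 + math.sqrt(inertial(R) ** 2 / 4 + cyclostrophic(R) ** 2)

def gradient_anticyclone(R):
    # Root consistent with the geostrophic limit; undefined inside R* (calm area).
    disc = inertial(R) ** 2 / 4 - cyclostrophic(R) ** 2
    return None if disc < 0 else inertial(R) / 2 - math.sqrt(disc)

R_star = 4 * dpdn / (rho * f ** 2)  # radius of the calm region around a pressure high
print(f"R* = {R_star / 1000:.0f} km")
for R in (100e3, 300e3):
    anti = gradient_anticyclone(R)
    print(f"R = {R / 1e3:.0f} km: geostrophic {geostrophic():.1f}, "
          f"cyclostrophic {cyclostrophic(R):.1f}, inertial {inertial(R):.1f}, "
          f"gradient cyclone {gradient_cyclone(R):.1f}, "
          f"gradient anticyclone {anti if anti is None else round(anti, 1)}")
```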
The chart shows how the different speeds change in the conditions chosen above and with increasing radius of curvature.
The geostrophic speed (pink line) does not depend on curvature at all, and it appears as a horizontal line.
However, the cyclonic and anticyclonic gradient speeds approach it as the radius of curvature becomes indefinitely large—geostrophic balance is indeed the limiting case of gradient flow for vanishing centripetal acceleration (that is, for pressure and Coriolis force exactly balancing out).
The cyclostrophic speed (black line) increases from zero and its rate of growth with R is less than linear.
In reality an unbounded speed growth is impossible because the conditions supporting the flow change at some distance.
Also recall that the cyclostrophic conditions apply to small-scale processes, so extrapolation to higher radii is physically meaningless.
The inertial speed (green line), which is independent of the pressure gradient that we chose, increases linearly from zero and it soon becomes much larger than any other.
The gradient speed comes with two curves valid for the speeds around a pressure low (blue) and a pressure high (red).
The wind speed in cyclonic circulation grows from zero as the radius increases and is always less than the geostrophic estimate.
In the anticyclonic-circulation example, there is no wind within the distance of 260 km (point R*) – this is the area of no/low winds around a pressure high.
At that distance the first anticyclonic wind has the same speed as the cyclostrophic winds (point Q), and half of that of the inertial wind (point P).
Farther away from point R*, the anticyclonic wind slows down, approaching the geostrophic value from above.
There is also another noteworthy point in the curve, labelled as S, where inertial, cyclostrophic and geostrophic speeds are equal.
The radius at S is always a fourth of R*, that is 65 km here.
Some limitations of the schematisations become also apparent.
For example, as the radius of curvature increases along a meridian, the corresponding change of latitude implies different values of the Coriolis parameter and, in turn, force.
Conversely, the Coriolis force stays the same if the radius is along a parallel.
So, in the case of circular flow, the speed of the parcel is unlikely to remain constant around the full circle, because the air parcel will feel a different intensity of the Coriolis force as it travels across different latitudes.
Additionally, the pressure fields quite rarely take the shape of neat circular isobars that keep the same spacing all around the circle.
Also, important differences of density occur in the horizontal plane as well, for example when warmer air joins the cyclonic circulation, thus creating a warm sector between a cold front and a warm front.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{DV}{Dt} = -\\frac{1}{\\rho}\\frac{\\partial p}{\\partial s} - K V"
},
{
"math_id": 1,
"text": " \\frac{V^2}{R} = - \\frac{1}{\\rho}\\frac{\\partial p}{\\partial n} \\pm f V,"
},
{
"math_id": 2,
"text": "{DV}/{Dt}"
},
{
"math_id": 3,
"text": " -{\\partial p}/{\\partial s}"
},
{
"math_id": 4,
"text": " -K V "
},
{
"math_id": 5,
"text": " {V^2}/{R}"
},
{
"math_id": 6,
"text": " -{\\partial p}/{\\partial n}"
},
{
"math_id": 7,
"text": " \\pm f V "
},
{
"math_id": 8,
"text": " DV/Dt=0 "
},
{
"math_id": 9,
"text": " V = - \\frac{1}{K\\rho} \\frac{\\partial p}{\\partial s}"
},
{
"math_id": 10,
"text": " {\\partial p}/{\\partial s} <0"
},
{
"math_id": 11,
"text": "{V^2}/{R}"
},
{
"math_id": 12,
"text": " \\partial p/\\partial n=0"
},
{
"math_id": 13,
"text": " \\partial p/\\partial s=0"
},
{
"math_id": 14,
"text": " V = \\frac{1}{\\rho} \\left| \\frac{1}{f} \\frac {\\partial p}{\\partial n} \\right| ."
},
{
"math_id": 15,
"text": " \\frac{V^2}{R} = -\\frac{1}{\\rho}\\frac{\\partial p}{\\partial n}."
},
{
"math_id": 16,
"text": " V = \\sqrt{ -\\frac{R}{\\rho} \\frac{\\partial p}{\\partial n}}."
},
{
"math_id": 17,
"text": "\\partial p / \\partial s =0 "
},
{
"math_id": 18,
"text": " \\frac{V^2}{R} = \\left| f \\right| V ,"
},
{
"math_id": 19,
"text": " V = \\left| f \\right| R ."
},
{
"math_id": 20,
"text": "\\partial p / \\partial s = 0"
},
{
"math_id": 21,
"text": " V = \\pm \\frac{ f R }{2} \\pm \\sqrt{ \\frac{f^2 R^2}{4} - \\frac{R}{\\rho} \\frac{\\partial p}{\\partial n} } ."
},
{
"math_id": 22,
"text": " \\frac{V^2}{R} = \\frac{1}{\\rho}\\left|\\frac{\\partial p}{\\partial n}\\right| - \\left| f \\right| V."
},
{
"math_id": 23,
"text": " \\frac{ V_\\text{geostrophic} }{ V_\\text{cyclone} } = 1 + \\frac{ V_\\text{cyclone} }{ V_\\text{inertial} } > 1 ,"
},
{
"math_id": 24,
"text": " V_\\text{cyclone} = -\\frac{ V_\\text{inertial} }{2} + \\sqrt{ \\frac{V_\\text{inertial}^2}{4} + V_\\text{cyclostrophic}^2 }. "
},
{
"math_id": 25,
"text": " \\frac{V^2}{R} = -\\frac{1}{\\rho} \\left|\\frac{\\partial p}{\\partial n}\\right| + \\left| f \\right| V."
},
{
"math_id": 26,
"text": " \\frac{ V_\\text{geostrophic} }{ V_\\text{anticyclone} } = 1 - \\frac{ V_\\text{anticyclone} }{ V_\\text{inertial} } < 1, "
},
{
"math_id": 27,
"text": " V_\\text{anticyclone} = \\frac{ V_\\text{inertial} }{2} - \\sqrt{ \\frac{ V_\\text{inertial}^2 }{4} - V_\\text{cyclostrophic}^2 } "
},
{
"math_id": 28,
"text": " V_\\text{inertial} \\ge 2 V_\\text{cyclostrophic} "
},
{
"math_id": 29,
"text": " R^* = \\frac{4}{\\rho f^2} \\left| \\frac{\\partial p}{\\partial n} \\right| ,"
}
] | https://en.wikipedia.org/wiki?curid=10796713 |
10796779 | Janko group J1 | Sporadic simple group
In the area of modern algebra known as group theory, the Janko group "J1" is a sporadic simple group of order
2^3 · 3 · 5 · 7 · 11 · 19 = 175560
≈ 2×10^5.
History.
"J1" is one of the 26 sporadic groups and was originally described by Zvonimir Janko in 1965. It is the only Janko group whose existence was proved by Janko himself and was the first sporadic group to be found since the discovery of the Mathieu groups in the 19th century. Its discovery launched the modern theory of sporadic groups.
In 1986 Robert A. Wilson showed that "J1" cannot be a subgroup of the monster group. Thus it is one of the 6 sporadic groups called the pariahs.
Properties.
The smallest faithful complex representation of "J1" has dimension 56. "J1" can be characterized abstractly as the unique simple group with abelian 2-Sylow subgroups and with an involution whose centralizer is isomorphic to the direct product of the group of order two and the alternating group A5 of order 60, which is to say, the rotational icosahedral group. That was Janko's original conception of the group.
In fact Janko and Thompson were investigating groups similar to the Ree groups 2"G"2(3^(2"n"+1)), and showed that if a simple group "G" has abelian Sylow 2-subgroups and a centralizer of an involution of the form Z/2Z×"PSL"2("q") for "q" a prime power at least 3, then either "q" is a power of 3 and "G" has the same order as a Ree group (it was later shown that "G" must be a Ree group in this case) or "q" is 4 or 5. Note that "PSL"2("4")="PSL"2("5")="A"5. This last exceptional case led to the Janko group "J1".
"J1" has no outer automorphisms and its Schur multiplier is trivial.
"J1" is contained in the O'Nan group as the subgroup of elements fixed by an outer automorphism of order 2.
Constructions.
Modulo 11 representation.
Janko found a modular representation in terms of 7 × 7 orthogonal matrices in the field of eleven elements, with generators given by
formula_0
and
formula_1
Y has order 7 and Z has order 5. Janko (1966) credited W. A. Coppel for recognizing this representation as an embedding into Dickson's simple group "G"2(11) (which has a 7-dimensional representation over the field with 11 elements).
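The stated orders of these generators can be checked numerically. The sketch below is a minimal verification, assuming the two matrices are transcribed exactly from formula_0 and formula_1 above; it computes matrix powers modulo 11 with NumPy and reports the multiplicative order of each generator.

```python
import numpy as np

def order_mod_p(matrix, p, max_order=200):
    """Smallest k > 0 with matrix^k equal to the identity over the field with p elements."""
    m = np.array(matrix) % p
    power = np.eye(len(m), dtype=int)
    for k in range(1, max_order + 1):
        power = power @ m % p
        if np.array_equal(power, np.eye(len(m), dtype=int)):
            return k
    raise ValueError("order exceeds max_order")

# Y is the 7x7 cyclic permutation matrix shown above.
Y = np.roll(np.eye(7, dtype=int), 1, axis=1)

# Z as listed above, with entries taken modulo 11.
Z = np.array([
    [-3,  2, -1, -1, -3, -1, -3],
    [-2,  1,  1,  3,  1,  3,  3],
    [-1, -1, -3, -1, -3, -3,  2],
    [-1, -3, -1, -3, -3,  2, -1],
    [-3, -1, -3, -3,  2, -1, -1],
    [ 1,  3,  3, -2,  1,  1,  3],
    [ 3,  3, -2,  1,  1,  3,  1],
])

print(order_mod_p(Y, 11))  # expected: 7
print(order_mod_p(Z, 11))  # expected: 5
```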
Permutation representation.
"J1" is the automorphism group of the Livingstone graph, a distance-transitive graph with 266 vertices and 1463 edges. The stabilizer of a vertex is PSL2(11), and the stabilizer of an edge is 2×A5.
This permutation representation can be constructed implicitly by starting with the subgroup PSL2(11) and adjoining 11 involutions "t0"...,"tX". PSL2(11) permutes these involutions under the exceptional 11-point representation, so they may be identified with points in the Paley biplane. Additional relations among these involutions (combined) are then sufficient to define "J"1.
Presentation.
There is also a pair of generators a, b such that
a^2 = b^3 = (ab)^7 = (abab^-1)^10 = 1
J1 is thus a Hurwitz group, a finite homomorphic image of the (2,3,7) triangle group.
Maximal subgroups.
Janko (1966) found the 7 conjugacy classes of maximal subgroups of "J1". Maximal simple subgroups of order 660 afford "J1" a permutation representation of degree 266. He found that there are 2 conjugacy classes of subgroups isomorphic to the alternating group A5, both found in the simple subgroups of order 660. "J1" has non-abelian simple proper subgroups of only 2 isomorphism types.
The notation "A"."B" means a group with a normal subgroup "A" with quotient "B", and
"D"2"n" is the dihedral group of order 2"n".
Number of elements of each order.
The greatest order of any element of the group is 19. The conjugacy class orders and sizes are found in the ATLAS.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{\\mathbf Y} = \\left ( \\begin{matrix}\n0 & 1 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 1 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 1 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 1 \\\\\n1 & 0 & 0 & 0 & 0 & 0 & 0 \\end{matrix} \\right )"
},
{
"math_id": 1,
"text": "{\\mathbf Z} = \\left ( \\begin{matrix}\n-3 & +2 & -1 & -1 & -3 & -1 & -3 \\\\\n-2 & +1 & +1 & +3 & +1 & +3 & +3 \\\\\n-1 & -1 & -3 & -1 & -3 & -3 & +2 \\\\\n-1 & -3 & -1 & -3 & -3 & +2 & -1 \\\\\n-3 & -1 & -3 & -3 & +2 & -1 & -1 \\\\\n+1 & +3 & +3 & -2 & +1 & +1 & +3 \\\\\n+3 & +3 & -2 & +1 & +1 & +3 & +1 \\end{matrix} \\right )."
}
] | https://en.wikipedia.org/wiki?curid=10796779 |
10796938 | Janko group J3 | Sporadic simple group
In the area of modern algebra known as group theory, the Janko group "J3" or the Higman-Janko-McKay group "HJM" is a sporadic simple group of order
2^7 · 3^5 · 5 · 17 · 19 = 50232960.
History and properties.
"J3" is one of the 26 Sporadic groups and was predicted by Zvonimir Janko in 1969 as one of two new simple groups having 21+4:A5 as a centralizer of an involution (the other is the Janko group "J2").
"J3" was shown to exist by Graham Higman and John McKay (1969).
In 1982 R. L. Griess showed that "J3" cannot be a subquotient of the monster group. Thus it is one of the 6 sporadic groups called the pariahs.
J3 has an outer automorphism group of order 2 and a Schur multiplier of order 3, and its triple cover has a unitary 9-dimensional representation over the finite field with 4 elements; this triple cover has also been constructed via an underlying geometry. J3 has a modular representation of dimension eighteen over the finite field with 9 elements.
It has a complex projective representation of dimension eighteen.
Constructions.
Using matrices.
J3 can be constructed by many different generators. Two from the ATLAS list are 18x18 matrices over the finite field of order 9, with matrix multiplication carried out with finite field arithmetic:
formula_0
and
formula_1
Using the subgroup PSL(2,16).
The automorphism group "J"3:2 can be constructed by starting with the subgroup PSL(2,16):4 and adjoining 120 involutions, which are identified with the Sylow 17-subgroups. Note that these 120 involutions are outer elements of "J"3:2. One then defines the following relation:
formula_2
where formula_3 is the Frobenius automorphism of order 4, and formula_4 is the unique 17-cycle that sends
formula_5
Curtis showed, using a computer, that this relation is sufficient to define "J"3:2.
Using a presentation.
In terms of generators a, b, c, and d its automorphism group J3:2 can be presented as
formula_6
A presentation for J3 in terms of (different) generators a, b, c, d is
formula_7
Maximal subgroups.
The 9 conjugacy classes of maximal subgroups of "J3" have been determined.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\left( \\begin{matrix}\n0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 & 0 & 0 \\\\\n3 & 7 & 4 & 8 & 4 & 8 & 1 & 5 & 5 & 1 & 2 & 0 & 8 & 6 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 \\\\\n4 & 8 & 6 & 2 & 4 & 8 & 0 & 4 & 0 & 8 & 4 & 5 & 0 & 8 & 1 & 1 & 8 & 5 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 & 0 \\\\\n\\end{matrix} \\right)\n"
},
{
"math_id": 1,
"text": "\\left( \\begin{matrix}\n4 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n4 & 4 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 \\\\\n2 & 7 & 4 & 5 & 7 & 4 & 8 & 5 & 6 & 7 & 2 & 2 & 8 & 8 & 0 & 0 & 5 & 0 \\\\\n4 & 7 & 5 & 8 & 6 & 1 & 1 & 6 & 5 & 3 & 8 & 7 & 5 & 0 & 8 & 8 & 6 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n8 & 2 & 5 & 5 & 7 & 2 & 8 & 1 & 5 & 5 & 7 & 8 & 6 & 0 & 0 & 7 & 3 & 8 \\\\\n\\end{matrix} \\right)\n"
},
{
"math_id": 2,
"text": "\\left(\\begin{matrix}1&1\\\\1&0\\end{matrix}\\sigma t_{(\\nu,\\nu7)}\\right)^5=1"
},
{
"math_id": 3,
"text": "\\sigma"
},
{
"math_id": 4,
"text": "t_{(\\nu,\\nu7)}"
},
{
"math_id": 5,
"text": "\\infty\\rightarrow0\\rightarrow1\\rightarrow7"
},
{
"math_id": 6,
"text": "a^{17} = b^8 = a^ba^{-2} = c^2 = b^cb^3 = (abc)^4 = (ac)^{17} = d^2 = [d, a] = [d, b] = (a^3b^{-3}cd)^5 = 1."
},
{
"math_id": 7,
"text": "a^{19} = b^9 = a^ba^2 = c^2 = d^2 = (bc)^2 = (bd)^2 = (ac)^3 = (ad)^3 = (a^2ca^{-3}d)^3 = 1."
}
] | https://en.wikipedia.org/wiki?curid=10796938 |
10796995 | Janko group J4 | Sporadic simple group
In the area of modern algebra known as group theory, the Janko group "J4" is a sporadic simple group of order
86,775,571,046,077,562,880
= 2²¹ · 3³ · 5 · 7 · 11³ · 23 · 29 · 31 · 37 · 43
≈ 9×10¹⁹.
History.
"J4" is one of the 26 Sporadic groups. Zvonimir Janko found J4 in 1975 by studying groups with an involution centralizer of the form 21 + 12.3.(M22:2). Its existence and uniqueness was shown using computer calculations by Simon P. Norton and others in 1980. It has a modular representation of dimension 112 over the finite field with 2 elements and is the stabilizer of a certain 4995 dimensional subspace of the exterior square, a fact which Norton used to construct it, and which is the easiest way to deal with it computationally. and gave computer-free proofs of uniqueness. and gave a computer-free proof of existence by constructing it as an amalgams of groups 210:SL5(2) and (210:24:A8):2 over a group 210:24:A8.
The Schur multiplier and the outer automorphism group are both trivial.
Since 37 and 43 are not supersingular primes, "J4" cannot be a subquotient of the monster group. Thus it is one of the 6 sporadic groups called the pariahs.
Representations.
The smallest faithful complex representation has dimension 1333; there are two complex conjugate representations of this dimension. The smallest faithful representation over any field is a 112 dimensional representation over the field of 2 elements.
The smallest permutation representation is on 173,067,389 points and has rank 20, with point stabilizer of the form 2¹¹:M24. The points can be identified with certain "special vectors" in the 112-dimensional representation.
Presentation.
It has a presentation in terms of three generators a, b, and c as
formula_0
Alternatively, one can start with the subgroup M24 and adjoin 3795 involutions, which are identified with the trios. By adding a certain relation, certain products of commuting involutions generate the binary Golay cocode, which extends to the maximal subgroup 2¹¹:M24. Bolt, Bray, and Curtis showed, using a computer, that adding just one more relation is sufficient to define "J"4.
Maximal subgroups.
The 13 conjugacy classes of maximal subgroups of "J4" are listed in the table below.
A Sylow 3-subgroup of "J4" is a Heisenberg group: order 27, non-abelian, all non-trivial elements of order 3. | [
{
"math_id": 0,
"text": "\\begin{align}\na^2 &=b^3=c^2=(ab)^{23}=[a,b]^{12}=[a,bab]^5=[c,a]= \\left ((ab)^2ab^{-1} \\right)^3 \\left (ab(ab^{-1})^2 \\right)^3=\\left (ab \\left (abab^{-1} \\right )^3 \\right )^4 \\\\\n&=\\left [c,(ba)^2 b^{-1}ab^{-1} (ab)^3 \\right]= \\left (bc^{(bab^{-1}a)^2} \\right )^3= \\left ((bababab)^3 c c^{(ab)^3b(ab)^6b} \\right )^2=1.\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=10796995 |
10797086 | Janko group J2 | Sporadic simple group
In the area of modern algebra known as group theory, the Janko group "J2" or the Hall-Janko group "HJ" is a sporadic simple group of order
2⁷ · 3³ · 5² · 7 = 604800
≈ 6×10⁵.
History and properties.
"J2" is one of the 26 Sporadic groups and is also called Hall–Janko–Wales group. In 1969 Zvonimir Janko predicted J2 as one of two new simple groups having 21+4:A5 as a centralizer of an involution (the other is the Janko group J3). It was constructed by Marshall Hall and David Wales (1968) as a rank 3 permutation group on 100 points.
Both the Schur multiplier and the outer automorphism group have order 2. As a permutation group on 100 points J2 has involutions moving all 100 points and involutions moving just 80 points. The former involutions are
products of 25 double transpositions, an odd number, and hence lift to elements of order 4 in the double cover 2.A100. The double cover 2.J2 occurs as a subgroup of the Conway group Co0.
J2 is the only one of the 4 Janko groups that is a subquotient of the monster group; it is thus part of what Robert Griess calls the Happy Family. Since it is also found in the Conway group Co1, it is therefore part of the second generation of the Happy Family.
Representations.
It is a subgroup of index two of the group of automorphisms of the Hall–Janko graph, leading to a permutation representation of degree 100. It is also a subgroup of index two of the group of automorphisms of the Hall–Janko Near Octagon, leading to a permutation representation of degree 315.
It has a modular representation of dimension six over the field of four elements; if in characteristic two we have "w"² + "w" + 1 = 0, then J2 is generated by the two matrices
formula_0
and
formula_1
These matrices satisfy the equations
formula_2
J2 is thus a Hurwitz group, a finite homomorphic image of the (2,3,7) triangle group.
The matrix representation given above constitutes an embedding into Dickson's group "G"2(4). There is only one conjugacy class of J2 in "G"2(4). Every subgroup J2 contained in "G"2(4) extends to a subgroup J2:2= Aut(J2) in "G"2(4):2= Aut("G"2(4)) ("G"2(4) extended by the field automorphisms of F4). "G"2(4) is in turn isomorphic to a subgroup of the Conway group Co1.
Maximal subgroups.
There are 9 conjugacy classes of maximal subgroups of "J2". Some are here described in terms of action on the Hall–Janko graph.
Simple, containing 36 simple subgroups of order 168 and 63 involutions, all conjugate, each moving 80 points. A given involution is found in 12 168-subgroups, thus fixes them under conjugacy. Its centralizer has structure 4.S4, which contains 6 additional involutions.
Containing 2² × A5 (order 240), centralizer of 3 involutions each moving 100 points
Conjugacy classes.
The maximum order of any element is 15. As permutations, elements act on the 100 vertices of the Hall–Janko graph.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n{\\mathbf A} = \\begin{pmatrix}\nw^2 & w^2 & 0 & 0 & 0 & 0 \\\\ \n 1 & w^2 & 0 & 0 & 0 & 0 \\\\ \n 1 & 1 & w^2 & w^2 & 0 & 0 \\\\ \n w & 1 & 1 & w^2 & 0 & 0 \\\\ \n 0 & w^2 & w^2 & w^2 & 0 & w \\\\ \nw^2 & 1 & w^2 & 0 & w^2 & 0\n\\end{pmatrix}\n"
},
{
"math_id": 1,
"text": "\n{\\mathbf B} = \\begin{pmatrix}\n w & 1 & w^2 & 1 & w^2 & w^2 \\\\ \n w & 1 & w & 1 & 1 & w \\\\ \n w & w & w^2 & w^2 & 1 & 0 \\\\ \n 0 & 0 & 0 & 0 & 1 & 1 \\\\ \nw^2 & 1 & w^2 & w^2 & w & w^2 \\\\ \nw^2 & 1 & w^2 & w & w^2 & w\n\\end{pmatrix}.\n"
},
{
"math_id": 2,
"text": "\n{\\mathbf A}^2 = {\\mathbf B}^3 = ({\\mathbf A}{\\mathbf B})^7 = \n({\\mathbf A}{\\mathbf B}{\\mathbf A}{\\mathbf B}{\\mathbf B})^{12} = 1.\n"
}
] | https://en.wikipedia.org/wiki?curid=10797086 |
10797093 | Karamata's inequality | Algebra theorem about convex functions
In mathematics, Karamata's inequality, named after Jovan Karamata, also known as the majorization inequality, is a theorem in elementary algebra for convex and concave real-valued functions, defined on an interval of the real line. It generalizes the discrete form of Jensen's inequality, and generalizes in turn to the concept of Schur-convex functions.
Statement of the inequality.
Let "I" be an interval of the real line and let "f" denote a real-valued, convex function defined on "I". If "x"1, …, "xn" and "y"1, …, "yn" are numbers in "I" such that ("x"1, …, "xn") majorizes ("y"1, …, "yn"), then
Here majorization means that "x"1, …, "xn" and "y"1, …, "yn" satisfy
and we have the inequalities
and the equality
If "f" is a strictly convex function, then the inequality (1) holds with equality if and only if we have "xi" = "yi" for all "i" ∈ {1, …, "n"}.
Example.
The finite form of Jensen's inequality is a special case of this result. Consider the real numbers "x"1, …, "xn" ∈ "I" and let
formula_0
denote their arithmetic mean. Then ("x"1, …, "xn") majorizes the "n"-tuple ("a", "a", …, "a"), since the arithmetic mean of the "i" largest numbers of ("x"1, …, "xn") is at least as large as the arithmetic mean "a" of all the "n" numbers, for every "i" ∈ {1, …, "n" − 1}. By Karamata's inequality (1) for the convex function "f",
formula_1
Dividing by "n" gives Jensen's inequality. The sign is reversed if "f" is concave.
Proof of the inequality.
We may assume that the numbers are in decreasing order as specified in (2).
If "xi" = "yi" for all "i" ∈ {1, …, "n"}, then the inequality (1) holds with equality, hence we may assume in the following that "xi" ≠ "yi" for at least one "i".
If "xi" = "yi" for an "i" ∈ {1, …, "n"}, then the inequality (1) and the majorization properties (3) and (4) are not affected if we remove "xi" and "yi". Hence we may assume that "xi" ≠ "yi" for all "i" ∈ {1, …, "n"}.
It is a property of convex functions that for two numbers "x" ≠ "y" in the interval "I" the slope
formula_2
of the secant line through the points ("x", "f" ("x")) and ("y", "f" ("y")) of the graph of "f" is a monotonically non-decreasing function in "x" for "y" fixed (and vice versa). This implies that
for all "i" ∈ {1, …, "n" − 1}. Define "A"0 = "B"0 = 0 and
formula_3
for all "i" ∈ {1, …, "n"}. By the majorization property (3), "Ai" ≥ "Bi" for all "i" ∈ {1, …, "n" − 1} and by (4), "An" = "Bn". Hence,
which proves Karamata's inequality (1).
To discuss the case of equality in (1), note that "x"1 > "y"1 by (3) and our assumption "xi" ≠ "yi" for all "i" ∈ {1, …, "n" − 1}. Let "i" be the smallest index such that ("xi", "yi") ≠ ("x""i"+1, "y""i"+1), which exists due to (4). Then "Ai" > "Bi". If "f" is strictly convex, then there is strict inequality in (6), meaning that "c""i"+1 < "ci". Hence there is a strictly positive term in the sum on the right hand side of (7) and equality in (1) cannot hold.
If the convex function "f" is non-decreasing, then "cn" ≥ 0. The relaxed condition (5) means that "An" ≥ "Bn", which is enough to conclude that "cn"("An"−"Bn") ≥ 0 in the last step of (7).
If the function "f" is strictly convex and non-decreasing, then "cn" > 0. It only remains to discuss the case "An" > "Bn". However, then there is a strictly positive term on the right hand side of (7) and equality in (1) cannot hold.
References.
<templatestyles src="Reflist/styles.css" />
External links.
An explanation of Karamata's inequality and majorization theory can be found here. | [
{
"math_id": 0,
"text": "a := \\frac{x_1+x_2+\\cdots+x_n}{n}"
},
{
"math_id": 1,
"text": "f(x_1)+f(x_2)+ \\cdots +f(x_n) \\ge f(a)+f(a)+\\cdots+f(a) = nf(a)."
},
{
"math_id": 2,
"text": "\\frac{f(x)-f(y)}{x-y}"
},
{
"math_id": 3,
"text": "A_i=x_1+\\cdots+x_i,\\qquad B_i=y_1+\\cdots+y_i"
}
] | https://en.wikipedia.org/wiki?curid=10797093 |
10799951 | Power supply rejection ratio | In electronic systems, power supply rejection ratio (PSRR), also supply-voltage rejection ratio ("k"SVR; SVR), is a term widely used to describe the capability of an electronic circuit to suppress any power supply variations to its output signal.
In the specifications of operational amplifiers, the PSRR is defined as the ratio of the change in supply voltage to the equivalent (differential) output voltage it produces, often expressed in decibels. An ideal op-amp would have infinite PSRR, as the device should have no change to the output voltage with any changes to the power supply voltage. The output voltage will depend on the feedback circuit, as is the case of regular input offset voltages. But testing is not confined to DC (zero frequency); often an operational amplifier will also have its PSRR given at various frequencies (in which case the ratio is one of RMS amplitudes of sinewaves present at a power supply compared with the output, with gain taken into account). Unwanted oscillation, including motorboating, can occur when an amplifying stage is too sensitive to signals fed via the power supply from a later power amplifier stage.
Some manufacturers specify PSRR in terms of the offset voltage it causes at the amplifier's inputs; others specify it in terms of the output; there is no industry standard for this issue. The following formula assumes it is specified in terms of input:
formula_0
where formula_1 is the voltage gain.
For example: an amplifier with a PSRR of 100 dB in a circuit to give 40 dB closed-loop gain would allow about 1 millivolt of power supply ripple to be superimposed on the output for every 1 volt of ripple in the supply. This is because
formula_2.
And since that's 60 dB of rejection, the sign is negative so:
formula_3
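The arithmetic above can be packaged as a short Python sketch; the function name and the assumption that PSRR is input-referred (as in the formula above) are illustrative.

def output_ripple(v_supply_ripple, psrr_db, gain_db):
    # Supply ripple referred to the input by the (input-referred) PSRR ...
    v_in_equiv = v_supply_ripple * 10 ** (-psrr_db / 20)
    # ... then amplified by the closed-loop gain appears at the output.
    return v_in_equiv * 10 ** (gain_db / 20)

print(output_ripple(1.0, psrr_db=100, gain_db=40))   # about 0.001 V = 1 mV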
Note: | [
{
"math_id": 0,
"text": "\\text{PSRR} [\\text{dB}] = 10 \\log_{10} \\left(\\frac{\\Delta {V_\\text{supply}}^2 {A_v}^2}{{\\Delta {V_\\text{out}}^2}}\\right)\\text{dB}"
},
{
"math_id": 1,
"text": "A_v"
},
{
"math_id": 2,
"text": "100\\ \\text{dB} - 40\\ \\text{dB} = 60\\ \\text{dB}"
},
{
"math_id": 3,
"text": "1\\ \\text{V} \\cdot 10^\\frac{-60}{20} = 0.001\\ \\text{V} = 1\\ \\text{mV}"
}
] | https://en.wikipedia.org/wiki?curid=10799951 |
10800208 | Zuckerman functor | In mathematics, a Zuckerman functor is used to construct representations of real reductive Lie groups from representations of Levi subgroups. They were introduced by Gregg Zuckerman (1978). The Bernstein functor is closely related.
Definition.
The Zuckerman functor Γ is defined by
formula_0
and the Bernstein functor Π is defined by
formula_1
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\Gamma^{g,K}_{g,L\\cap K}(W) = \\hom_{R(g,L\\cap K)}(R(g,K),W)_K"
},
{
"math_id": 1,
"text": "\\Pi^{g,K}_{g,L\\cap K}(W) = R(g,K)\\otimes_{R(g,L\\cap K)}W."
}
] | https://en.wikipedia.org/wiki?curid=10800208 |
1080101 | Autotransformer | Type of electrical transformer
In electrical engineering, an autotransformer is an electrical transformer with only one winding. The "auto" (Greek for "self") prefix refers to the single coil acting alone. In an autotransformer, portions of the same winding act as both the primary winding and secondary winding sides of the transformer. In contrast, an ordinary transformer has separate primary and secondary windings that are not connected by an electrically conductive path between them.
The autotransformer winding has at least three electrical connections to the winding. Since part of the winding does "double duty", autotransformers have the advantages of often being smaller, lighter, and cheaper than typical dual-winding transformers, but the disadvantage of not providing electrical isolation between primary and secondary circuits. Other advantages of autotransformers include lower leakage reactance, lower losses, lower excitation current, and increased VA rating for a given size and mass.
An example of an application of an autotransformer is one style of traveler's voltage converter, that allows 230-volt devices to be used on 120-volt supply circuits, or the reverse. An autotransformer with multiple taps may be applied to adjust the voltage at the end of a long distribution circuit to correct for excess voltage drop; when automatically controlled, this is one example of a voltage regulator.
Operation.
An autotransformer has a single winding with two end terminals and one or more terminals at intermediate tap points. It is a transformer in which the primary and secondary coils have part of their turns in common. The portion of the winding shared by both the primary and secondary is the common section. The portion of the winding not shared by both the primary and secondary is the series section. The primary voltage is applied across two of the terminals. The secondary voltage is taken from two terminals, one terminal of which is usually in common with a primary voltage terminal.
Since the volts-per-turn is the same in both windings, each develops a voltage in proportion to its number of turns. In an autotransformer, part of the output current flows directly from the input to the output (through the series section), and only part is transferred inductively (through the common section), allowing a smaller, lighter, cheaper core to be used as well as requiring only a single winding. However the voltage and current ratio of autotransformers can be formulated the same as other two-winding transformers:
formula_0 formula_1
The ampere-turns provided by the series section of the winding:
formula_2
The ampere-turns provided by the common section of the winding:
formula_3
For ampere-turn balance, "FS" = "FC":
formula_4
Therefore:
formula_5
One end of the winding is usually connected in common to both the voltage source and the electrical load. The other end of the source and load are connected to taps along the winding. Different taps on the winding correspond to different voltages, measured from the common end. In a step-down transformer the source is usually connected across the entire winding while the load is connected by a tap across only a portion of the winding. In a step-up transformer, conversely, the load is attached across the full winding while the source is connected to a tap across a portion of the winding. For a step-up transformer, the subscripts in the above equations are reversed where, in this situation, formula_6 and formula_7 are greater than formula_8 and formula_9, respectively.
As in a two-winding transformer, the ratio of secondary to primary voltages is equal to the ratio of the number of turns of the winding they connect to. For example, connecting the load between the middle of the winding and the common terminal end of the winding of the autotransformer will result in the output load voltage being 50% of the primary voltage. Depending on the application, that portion of the winding used solely in the higher-voltage (lower current) portion may be wound with wire of a smaller gauge, though the entire winding is directly connected.
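As a rough illustration of the ideal relations above (V1/V2 = N1/N2 = a and I1/I2 = N2/N1), the Python sketch below evaluates a step-down autotransformer; the tap position, source voltage and load are illustrative values, not from this article.

def ideal_autotransformer(v1, n1, n2, p_load):
    a = n1 / n2                # turns ratio between full winding and tap
    v2 = v1 / a                # voltage at the tap (secondary)
    i2 = p_load / v2           # load current drawn from the tap
    i1 = i2 / a                # source current (ideal, lossless transformer)
    return v2, i1, i2

v2, i1, i2 = ideal_autotransformer(v1=230.0, n1=200, n2=100, p_load=460.0)
print(v2, i1, i2)              # 115 V at the tap, 2 A from the source, 4 A to the load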
If one of the center-taps is used for the ground, then the autotransformer can be used as a balun to convert a balanced line (connected to the two end taps) to an unbalanced line (the side with the ground).
An autotransformer does not provide electrical isolation between its windings as an ordinary transformer does; if the neutral side of the input is not at ground voltage, the neutral side of the output will not be either. A failure of the isolation of the windings of an autotransformer can result in full input voltage applied to the output. Also, a break in the part of the winding that is used as both primary and secondary will result in the transformer acting as an inductor in series with the load (which under light load conditions may result in nearly full input voltage being applied to the output). These are important safety considerations when deciding to use an autotransformer in a given application.
Because it requires both fewer windings and a smaller core, an autotransformer for power applications is typically lighter and less costly than a two-winding transformer, up to a voltage ratio of about 3:1; beyond that range, a two-winding transformer is usually more economical.
In three phase power transmission applications, autotransformers have the limitations of not suppressing harmonic currents and of acting as another source of ground fault currents. A large three-phase autotransformer may have a "buried" delta winding, not connected to the outside of the tank, to absorb some harmonic currents.
In practice, losses mean that both standard transformers and autotransformers are not perfectly reversible; one designed for stepping down a voltage will deliver slightly less voltage than required if it is used to step up. The difference is usually slight enough to allow reversal where the actual voltage level is not critical.
Like multiple-winding transformers, autotransformers use time-varying magnetic fields to transfer power. They require alternating currents to operate properly and will not function on direct current. Because the primary and secondary windings are electrically connected, an autotransformer will allow current to flow between windings and therefore does not provide AC or DC isolation.
Applications.
Power transmission and distribution.
Autotransformers are frequently used in power applications to interconnect systems operating at different voltage classes, for example 132 kV to 66 kV for transmission. Another application in industry is to adapt machinery built (for example) for 480 V supplies to operate on a 600 V supply. They are also often used for providing conversions between the two common domestic mains voltage bands in the world (100 V–130 V and 200 V–250 V). The links between the UK 400 kV and 275 kV "Super Grid" networks are normally three phase autotransformers with taps at the common neutral end.
On long rural power distribution lines, special autotransformers with automatic tap-changing equipment are inserted as voltage regulators, so that customers at the far end of the line receive the same average voltage as those closer to the source. The variable ratio of the autotransformer compensates for the voltage drop along the line.
A special form of auto transformer called a "zig zag" is used to provide grounding on three-phase systems that otherwise have no connection to ground. A zig-zag transformer provides a path for current that is common to all three phases (so-called zero sequence current).
Audio system.
In audio applications, tapped autotransformers are used to adapt speakers to constant-voltage audio distribution systems, and for impedance matching such as between a low-impedance microphone and a high-impedance amplifier input.
Railways.
In railway applications, it is common to power the trains at 25 kV AC. To increase the distance between electricity Grid feeder points, they can be arranged to supply a split-phase 25-0-25 kV feed with the third wire (opposite phase) out of reach of the train's overhead collector pantograph. The 0 V point of the supply is connected to the rail while one 25 kV point is connected to the overhead contact wire. At frequent (about 10 km) intervals, an autotransformer links the contact wire to rail and to the second (antiphase) supply conductor. This system increases usable transmission distance, reduces induced interference into external equipment and reduces cost. A variant is occasionally seen where the supply conductor is at a different voltage to the contact wire with the autotransformer ratio modified to suit.
Autotransformer starter.
Autotransformers can be used as a method of soft starting induction motors. One of the well-known designs of such starters is Korndörfer starter.
History.
The autotransformer starter was invented in 1908, by Max Korndorfer of Berlin. He filed the application with the U.S. Patent office in May 1908 and was granted the patent US 1,096,922 in May 1914. Max Korndorfer assigned his patent to the General Electric Company.
An induction motor draws very high starting current during its acceleration to full rated speed, typically 6 to 10 times the full load current. Reduced starting current is desirable where the electrical grid is not of sufficient capacity, or where the driven load cannot withstand high starting torque. One basic method to reduce the starting current is with a reduced voltage autotransformer with taps at 50%, 65% and 80% of the applied line voltage; once the motor is started the autotransformer is switched out of circuit.
Variable autotransformers.
By exposing part of the winding coils and making the secondary connection through a sliding brush, a continuously variable turns ratio can be obtained, allowing for very smooth control of output voltage. The output voltage is not limited to the discrete voltages represented by actual number of turns. The voltage can be smoothly varied between turns as the brush has a relatively high resistance (compared with a metal contact) and the actual output voltage is a function of the relative area of brush in contact with adjacent windings. The relatively high resistance of the brush also prevents it from acting as a short circuited turn when it contacts two adjacent turns. Typically the primary connection connects to only a part of the winding allowing the output voltage to be varied smoothly from zero to above the input voltage and thus allowing the device to be used for testing electrical equipment at the limits of its specified voltage range.
The output voltage adjustment can be manual or automatic. The manual type is applicable only for relatively low voltage and is known as a variable AC transformer (often referred to by the trademark name Variac). These are often used in repair shops for testing devices under different voltages or to simulate abnormal line voltages.
The type with automatic voltage adjustment can be used as automatic voltage regulator, to maintain a steady voltage at the customers' service during a wide range of line and load conditions. Another application is a lighting dimmer that doesn't produce the EMI typical of most thyristor dimmers.
Variac trademark.
From 1934 to 2002, Variac was a U.S. trademark of General Radio for a variable autotransformer intended to conveniently vary the output voltage for a steady AC input voltage. In 2004, Instrument Service Equipment applied for and obtained the "Variac" trademark for the same type of product. The term "variac" has become a genericised trademark, being used to refer to a variable autotransformer.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{V_1}{V_2} = \\frac{N_1}{N_2} = a "
},
{
"math_id": 1,
"text": "(0 < V_2 < V_1)"
},
{
"math_id": 2,
"text": "F_S = (N_1 - N_2)I_1 = \\left(1-\\frac{1}{a}\\right)N_1I_1"
},
{
"math_id": 3,
"text": "F_C = N_2(I_2 - I_1) = \\frac{N_1}{a}(I_2-I_1)"
},
{
"math_id": 4,
"text": "\\left(1-\\frac{1}{a}\\right)N_1 I_1 = \\frac{N_1}{a}(I_2-I_1)"
},
{
"math_id": 5,
"text": "\\frac{I_1}{I_2} = \\frac{1}{a} = \\frac{N_2}{N_1}"
},
{
"math_id": 6,
"text": "N_2"
},
{
"math_id": 7,
"text": "V_2"
},
{
"math_id": 8,
"text": "N_1"
},
{
"math_id": 9,
"text": "V_1"
}
] | https://en.wikipedia.org/wiki?curid=1080101 |
1080226 | Nitrogenase | Class of enzymes
Nitrogenases are enzymes (EC 1.18.6.1, EC 1.19.6.1) that are produced by certain bacteria, such as cyanobacteria (blue-green bacteria) and rhizobacteria. These enzymes are responsible for the reduction of nitrogen (N2) to ammonia (NH3). Nitrogenases are the only family of enzymes known to catalyze this reaction, which is a step in the process of nitrogen fixation. Nitrogen fixation is required for all forms of life, with nitrogen being essential for the biosynthesis of molecules (nucleotides, amino acids) that create plants, animals and other organisms. They are encoded by the Nif genes or homologs. They are related to protochlorophyllide reductase.
Classification and structure.
Although the equilibrium formation of ammonia from molecular hydrogen and nitrogen has an overall negative enthalpy of reaction (formula_0), the activation energy is very high (formula_1). Nitrogenase acts as a catalyst, reducing this energy barrier such that the reaction can take place at ambient temperatures.
A usual assembly consists of two components:
Reductase.
The Fe protein, the dinitrogenase reductase or NifH, is a dimer of identical subunits which contains one [Fe4S4] cluster and has a mass of approximately 60-64kDa. The function of the Fe protein is to transfer electrons from a reducing agent, such as ferredoxin or flavodoxin to the nitrogenase protein. Ferredoxin or flavodoxin can be reduced by one of six mechanisms: 1. by a , 2. by a bi-directional hydrogenase, 3. in a photosynthetic reaction center, 4. by coupling electron flow to dissipation of the proton motive force, 5. by electron bifurcation, or 6. by a . The transfer of electrons requires an input of chemical energy which comes from the binding and hydrolysis of ATP. The hydrolysis of ATP also causes a conformational change within the nitrogenase complex, bringing the Fe protein and MoFe protein closer together for easier electron transfer.
Nitrogenase.
The MoFe protein is a heterotetramer consisting of two α subunits and two β subunits, with a mass of approximately 240–250 kDa. The MoFe protein also contains two iron–sulfur clusters, known as P-clusters, located at the interface between the α and β subunits and two FeMo cofactors, within the α subunits. The oxidation state of Mo in these nitrogenases was formerly thought to be Mo(V), but more recent evidence is for Mo(III). (Molybdenum in other enzymes is generally bound to molybdopterin as fully oxidized Mo(VI)).
Electrons from the Fe protein enter the MoFe protein at the P-clusters, which then transfer the electrons to the FeMo cofactors. Each FeMo cofactor then acts as a site for nitrogen fixation, with N2 binding in the central cavity of the cofactor.
Variations.
The MoFe protein can be replaced by alternative nitrogenases in environments low in the Mo cofactor. Two types of such nitrogenases are known: the vanadium–iron (VFe; "Vnf") type and the iron–iron (FeFe; "Anf") type. Both form an assembly of two α subunits, two β subunits, and two δ (sometimes γ) subunits. The delta subunits are homologous to each other, and the alpha and beta subunits themselves are homologous to the ones found in MoFe nitrogenase. The gene clusters are also homologous, and these subunits are interchangeable to some degree. All nitrogenases use a similar Fe-S core cluster, and the variations come in the cofactor metal.
The Anf nitrogenase in "Azotobacter vinelandii" is organized in an "anfHDGKOR" operon. This operon still requires some of the Nif genes to function. An engineered minimal 10-gene operon that incorporates these additional essential genes has been constructed.
Mechanism.
General mechanism.
Nitrogenase is an enzyme responsible for catalyzing nitrogen fixation, which is the reduction of nitrogen (N2) to ammonia (NH3) and a process vital to sustaining life on Earth. There are three types of nitrogenase found in various nitrogen-fixing bacteria: molybdenum (Mo) nitrogenase, vanadium (V) nitrogenase, and iron-only (Fe) nitrogenase. Molybdenum nitrogenase, which can be found in diazotrophs such as legume-associated rhizobia, is the nitrogenase that has been studied the most extensively and thus is the most well characterized. Vanadium nitrogenase and iron-only nitrogenase can both be found in select species of Azotobacter as an alternative nitrogenase. Equations 1 and 2 show the balanced reactions of nitrogen fixation in molybdenum nitrogenase and vanadium nitrogenase respectively.
All nitrogenases are two-component systems made up of Component I (also known as dinitrogenase) and Component II (also known as dinitrogenase reductase). Component I is a MoFe protein in molybdenum nitrogenase, a VFe protein in vanadium nitrogenase, and an Fe protein in iron-only nitrogenase. Component II is a Fe protein that contains the Fe-S cluster, which transfers electrons to Component I. Component I contains 2 metal clusters: the P-cluster, and the FeMo-cofactor (FeMo-co). Mo is replaced by V or Fe in vanadium nitrogenase and iron-only nitrogenase respectively. During catalysis, 2 equivalents of MgATP are hydrolysed, which helps to decrease the potential of the Fe-S cluster and drive transfer of electrons to the P-cluster, and finally to the FeMo-co, where reduction of N2 to NH3 takes place.
Lowe-Thorneley kinetic model.
The reduction of nitrogen to two molecules of ammonia is carried out at the FeMo-co of Component I after the sequential addition of proton and electron equivalents from Component II. Steady state, freeze quench, and stopped-flow kinetics measurements carried out in the 1970s and 1980s by Lowe, Thorneley, and others provided a kinetic basis for this process. The Lowe-Thorneley (LT) kinetic model was developed from these experiments and documents the eight correlated proton and electron transfers required throughout the reaction. Each intermediate stage is depicted as En where n = 0–8, corresponding to the number of equivalents transferred. The transfer of four equivalents is required before the productive addition of N2, although reaction of E3 with N2 is also possible. Notably, nitrogen reduction has been shown to require 8 equivalents of protons and electrons as opposed to the 6 equivalents predicted by the balanced chemical reaction.
Intermediates E0 through E4.
Spectroscopic characterization of these intermediates has allowed for greater understanding of nitrogen reduction by nitrogenase, however, the mechanism remains an active area of research and debate. Briefly listed below are spectroscopic experiments for the intermediates before the addition of nitrogen:
E0 – This is the resting state of the enzyme before catalysis begins. EPR characterization shows that this species has a spin of 3/2.
E1 – The one electron reduced intermediate has been trapped during turnover under N2. Mössbauer spectroscopy of the trapped intermediate indicates that the FeMo-co is integer spin greater than 1.
E2 – This intermediate is proposed to contain the metal cluster in its resting oxidation state with the two added electrons stored in a bridging hydride and the additional proton bonded to a sulfur atom. Isolation of this intermediate in mutated enzymes shows that the FeMo-co is high spin and has a spin of 3/2.
E3 – This intermediate is proposed to be the singly reduced FeMo-co with one bridging hydride and one hydride.
E4 – Termed the Janus intermediate after the Roman god of transitions, this intermediate is positioned after exactly half of the electron proton transfers and can either decay back to E0 or proceed with nitrogen binding and finish the catalytic cycle. This intermediate is proposed to contain the FeMo-co in its resting oxidation state with two bridging hydrides and two sulfur bonded protons. This intermediate was first observed using freeze quench techniques with a mutated protein in which residue 70, a valine amino acid, is replaced with isoleucine. This modification prevents substrate access to the FeMo-co. EPR characterization of this isolated intermediate shows a new species with a spin of ½. ENDOR experiments have provided insight into the structure of this intermediate, revealing the presence of two bridging hydrides. 95Mo and 57Fe ENDOR show that the hydrides bridge between two iron centers. Cryoannealing of the trapped intermediate at -20 °C results in the successive loss of two hydrogen equivalents upon relaxation, proving that the isolated intermediate is consistent with the E4 state. The decay of E4 to E2 + H2 and finally to E0 and 2H2 has confirmed the EPR signal associated with the E2 intermediate.
The above intermediates suggest that the metal cluster is cycled between its original oxidation state and a singly reduced state with additional electrons being stored in hydrides. It has alternatively been proposed that each step involves the formation of a hydride and that the metal cluster actually cycles between the original oxidation state and a singly oxidized state.
Distal and alternating pathways for N2 fixation.
While the mechanism for nitrogen fixation prior to the Janus E4 complex is generally agreed upon, there are currently two hypotheses for the exact pathway in the second half of the mechanism: the "distal" and the "alternating" pathway. In the distal pathway, the terminal nitrogen is hydrogenated first, releases ammonia, then the nitrogen directly bound to the metal is hydrogenated. In the alternating pathway, one hydrogen is added to the terminal nitrogen, then one hydrogen is added to the nitrogen directly bound to the metal. This alternating pattern continues until ammonia is released. Because each pathway favors a unique set of intermediates, attempts to determine which path is correct have generally focused on the isolation of said intermediates, such as the nitrido in the distal pathway, and the diazene and hydrazine in the alternating pathway. Attempts to isolate the intermediates in nitrogenase itself have so far been unsuccessful, but the use of model complexes has allowed for the isolation of intermediates that support both sides depending on the metal center used. Studies with Mo generally point towards a distal pathway, while studies with Fe generally point towards an alternating pathway.
Specific support for the distal pathway has mainly stemmed from the work of Schrock and Chatt, who successfully isolated the nitrido complex using Mo as the metal center in a model complex. Specific support for the alternating pathway stems from a few studies. Iron only model clusters have been shown to catalytically reduce N2. Small tungsten clusters have also been shown to follow an alternating pathway for nitrogen fixation. The vanadium nitrogenase releases hydrazine, an intermediate specific to the alternating mechanism. However, the lack of characterized intermediates in the native enzyme itself means that neither pathway has been definitively proven. Furthermore, computational studies have been found to support both sides, depending on whether the reaction site is assumed to be at Mo (distal) or at Fe (alternating) in the MoFe cofactor.
Mechanism of MgATP binding.
Binding of MgATP is one of the central events to occur in the mechanism employed by nitrogenase. Hydrolysis of the terminal phosphate group of MgATP provides the energy needed to transfer electrons from the Fe protein to the MoFe protein. The binding interactions between the MgATP phosphate groups and the amino acid residues of the Fe protein are well understood by comparing to similar enzymes, while the interactions with the rest of the molecule are more elusive due to the lack of a Fe protein crystal structure with MgATP bound (as of 1996). Three protein residues have been shown to have significant interactions with the phosphates. In the absence of MgATP, a salt bridge exists between residue 15, lysine, and residue 125, aspartic acid. Upon binding, this salt bridge is interrupted. Site-specific mutagenesis has demonstrated that when the lysine is substituted for a glutamine, the protein's affinity for MgATP is greatly reduced and when the lysine is substituted for an arginine, MgATP cannot bind due to the salt bridge being too strong. The necessity of specifically aspartic acid at site 125 has been shown through noting altered reactivity upon mutation of this residue to glutamic acid. Residue 16, serine, has been shown to bind MgATP. Site-specific mutagenesis was used to demonstrate this fact. This has led to a model in which the serine remains coordinated to the Mg2+ ion after phosphate hydrolysis in order to facilitate its association with a different phosphate of the now ADP molecule. MgATP binding also induces significant conformational changes within the Fe protein. Site-directed mutagenesis was employed to create mutants in which MgATP binds but does not induce a conformational change. Comparing X-ray scattering data in the mutants versus in the wild-type protein led to the conclusion that the entire protein contracts upon MgATP binding, with a decrease in radius of approximately 2.0 Å.
Other mechanistic details.
Many mechanistic aspects of catalysis remain unknown. No crystallographic analysis has been reported on substrate bound to nitrogenase.
Nitrogenase is able to reduce acetylene, but is inhibited by carbon monoxide, which binds to the enzyme and thereby prevents binding of dinitrogen. Dinitrogen prevents acetylene binding, but acetylene does not inhibit binding of dinitrogen and requires only one electron for reduction to ethylene. Due to the oxidative properties of oxygen, most nitrogenases are irreversibly inhibited by dioxygen, which degradatively oxidizes the Fe-S cofactors. This requires mechanisms for nitrogen fixers to protect nitrogenase from oxygen "in vivo". Despite this problem, many use oxygen as a terminal electron acceptor for respiration. Although the ability of some nitrogen fixers such as Azotobacteraceae to employ an oxygen-labile nitrogenase under aerobic conditions has been attributed to a high metabolic rate, allowing oxygen reduction at the cell membrane, the effectiveness of such a mechanism has been questioned at oxygen concentrations above 70 μM (ambient concentration is 230 μM O2), as well as during additional nutrient limitations. A molecule found in the nitrogen-fixing nodules of leguminous plants, leghemoglobin, which can bind to dioxygen via a heme prosthetic group, plays a crucial role in buffering O2 at the active site of the nitrogenase, while concomitantly allowing for efficient respiration.
Nonspecific reactions.
In addition to dinitrogen reduction, nitrogenases also reduce protons to dihydrogen, meaning nitrogenase is also a dehydrogenase. A list of other reactions carried out by nitrogenases is shown below:
HC≡CH → H2C=CH2
N–=N+=O → N2 + H2O
N=N=N– → N2 + NH3
C≡N- → CH4, NH3, H3C–CH3, H2C=CH2 (CH3NH2)
N≡C–R → RCH3 + NH3
C≡N–R → CH4, H3C–CH3, H2C=CH2, C3H8, C3H6, RNH2
O=C=S → CO + H2S
O=C=O → CO + H2O
S=C=N– → H2S + HCN
O=C=N– → H2O + HCN, CO + NH3
Furthermore, dihydrogen functions as a competitive inhibitor, carbon monoxide functions as a non-competitive inhibitor, and carbon disulfide functions as a rapid-equilibrium inhibitor of nitrogenase.
Vanadium nitrogenases have also been shown to catalyze the conversion of CO into alkanes through a reaction comparable to Fischer-Tropsch synthesis.
Organisms that synthesize nitrogenase.
There are two types of bacteria that synthesize nitrogenase and are required for nitrogen fixation. These are:
Similarity to other proteins.
The three subunits of nitrogenase exhibit significant sequence similarity to three subunits of the light-independent version of protochlorophyllide reductase that performs the conversion of protochlorophyllide to chlorophyll. This protein is present in gymnosperms, algae, and photosynthetic bacteria but has been lost by angiosperms during evolution.
Separately, two of the nitrogenase subunits (NifD and NifH) have homologues in methanogens that do not fix nitrogen e.g. "Methanocaldococcus jannaschii". Little is understood about the function of these "class IV" "nif" genes, though they occur in many methanogens. In "M. jannaschii" they are known to interact with each other and are constitutively expressed.
Measurement of nitrogenase activity.
As with many assays for enzyme activity, it is possible to estimate nitrogenase activity by measuring the rate of conversion of the substrate (N2) to the product (NH3). Since NH3 is involved in other reactions in the cell, it is often desirable to label the substrate with 15N to provide accounting or "mass balance" of the added substrate. A more common assay, the acetylene reduction assay or ARA, estimates the activity of nitrogenase by taking advantage of the ability of the enzyme to reduce acetylene gas to ethylene gas. These gases are easily quantified using gas chromatography. Though first used in a laboratory setting to measure nitrogenase activity in extracts of "Clostridium pasteurianum" cells, ARA has been applied to a wide range of test systems, including field studies where other techniques are difficult to deploy. For example, ARA was used successfully to demonstrate that bacteria associated with rice roots undergo seasonal and diurnal rhythms in nitrogenase activity, which were apparently controlled by the plant.
Unfortunately, the conversion of data from nitrogenase assays to actual moles of N2 reduced (particularly in the case of ARA), is not always straightforward and may either underestimate or overestimate the true rate for a variety of reasons. For example, H2 competes with N2 but not acetylene for nitrogenase (leading to overestimates of nitrogenase by ARA). Bottle or chamber-based assays may produce negative impacts on microbial systems as a result of containment or disruption of the microenvironment through handling, leading to underestimation of nitrogenase. Despite these weaknesses, such assays are very useful in assessing relative rates or temporal patterns in nitrogenase activity.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\Delta H^{0} = -45.2 \\ \\mathrm{kJ} \\, \\mathrm{mol^{-1}} \\; \\mathrm{NH_3} "
},
{
"math_id": 1,
"text": " E_\\mathrm{A} = 230-420 \\ \\mathrm{kJ} \\, \\mathrm{mol^{-1}} "
}
] | https://en.wikipedia.org/wiki?curid=1080226 |
10803719 | String operations | In computer science, in the area of formal language theory, frequent use is made of a variety of string functions; however, the notation used is different from that used for computer programming, and some commonly used functions in the theoretical realm are rarely used when programming. This article defines some of these basic terms.
Strings and languages.
A string is a finite sequence of characters.
The empty string is denoted by formula_0.
The concatenation of two strings formula_1 and formula_2 is denoted by formula_3, or shorter by formula_4.
Concatenating with the empty string makes no difference: formula_5.
Concatenation of strings is associative: formula_6.
For example, formula_7.
A language is a finite or infinite set of strings.
Besides the usual set operations like union, intersection etc., concatenation can be applied to languages:
if both formula_8 and formula_9 are languages, their concatenation formula_10 is defined as the set of concatenations of any string from formula_8 and any string from formula_9, formally formula_11.
Again, the concatenation dot formula_12 is often omitted for brevity.
The language formula_13 consisting of just the empty string is to be distinguished from the empty language formula_14.
Concatenating any language with the former doesn't make any change: formula_15,
while concatenating with the latter always yields the empty language: formula_16.
Concatenation of languages is associative: formula_17.
For example, abbreviating formula_18, the set of all three-digit decimal numbers is obtained as formula_19. The set of all decimal numbers of arbitrary length is an example for an infinite language.
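For finite languages, concatenation is easy to model with sets of strings. The Python sketch below mirrors the digit example above; the variable names are illustrative.

def concat(S, T):
    # Concatenation of two languages: every s in S followed by every t in T.
    return {s + t for s in S for t in T}

D = {str(d) for d in range(10)}           # the ten decimal digit strings
three_digit = concat(concat(D, D), D)     # D . D . D
print(len(three_digit))                   # 1000 strings, "000" through "999"
assert concat(D, {""}) == D               # {eps} is the identity for concatenation
assert concat(D, set()) == set()          # concatenating with {} yields {}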
Alphabet of a string.
The alphabet of a string is the set of all of the characters that occur in a particular string. If "s" is a string, its alphabet is denoted by
formula_20
The alphabet of a language formula_8 is the set of all characters that occur in any string of formula_8, formally:
formula_21.
For example, the set formula_22 is the alphabet of the string formula_23,
and the above formula_24 is the alphabet of the above language formula_19 as well as of the language of all decimal numbers.
String substitution.
Let "L" be a language, and let Σ be its alphabet. A string substitution or simply a substitution is a mapping "f" that maps characters in Σ to languages (possibly in a different alphabet). Thus, for example, given a character "a" ∈ Σ, one has "f"("a")="L""a" where "L""a" ⊆ Δ* is some language whose alphabet is Δ. This mapping may be extended to strings as
"f"(ε)=ε
for the empty string ε, and
"f"("sa")="f"("s")"f"("a")
for string "s" ∈ "L" and character "a" ∈ Σ. String substitutions may be extended to entire languages as
formula_25
Regular languages are closed under string substitution. That is, if each character in the alphabet of a regular language is substituted by another regular language, the result is still a regular language.
Similarly, context-free languages are closed under string substitution.
A simple example is the conversion "f"uc(.) to uppercase, which may be defined e.g. as follows:
For the extension of "f"uc to strings, we have e.g.
For the extension of "f"uc to languages, we have e.g.
String homomorphism.
A string homomorphism (often referred to simply as a homomorphism in formal language theory) is a string substitution such that each character is replaced by a single string. That is, formula_26, where formula_1 is a string, for each character formula_27.
String homomorphisms are monoid morphisms on the free monoid, preserving the empty string and the binary operation of string concatenation. Given a language formula_28, the set formula_29 is called the homomorphic image of formula_28. The inverse homomorphic image of a string formula_1 is defined as
formula_30
while the inverse homomorphic image of a language formula_28 is defined as
formula_31
In general, formula_32, while one does have
formula_33
and
formula_34
for any language formula_28.
The class of regular languages is closed under homomorphisms and inverse homomorphisms.
Similarly, the context-free languages are closed under homomorphisms and inverse homomorphisms.
A string homomorphism is said to be ε-free (or e-free) if formula_35 for all "a" in the alphabet formula_36. Simple single-letter substitution ciphers are examples of (ε-free) string homomorphisms.
An example string homomorphism "g"uc can also be obtained by defining similar to the above substitution: "g"uc(‹a›) = ‹A›, ..., "g"uc(‹0›) = ε, but letting "g"uc be undefined on punctuation chars.
Examples for inverse homomorphic images are
For the latter language, "g"uc("g"uc−1({ ‹A›, ‹bb› })) = "g"uc({ ‹a› }) = { ‹A› } ≠ { ‹A›, ‹bb› }.
The homomorphism "g"uc is not ε-free, since it maps e.g. ‹0› to ε.
A very simple string homomorphism example that maps each character to just a character is the conversion of an EBCDIC-encoded string to ASCII.
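For finite languages, a homomorphism and its inverse image can be sketched as follows; the mapping "h" and the candidate set are illustrative and are not the "g"uc of the previous paragraphs.

def h_apply(h, s):
    # A homomorphism maps each character to a single string.
    return "".join(h[ch] for ch in s)

def h_image(h, L):
    return {h_apply(h, s) for s in L}

def h_preimage(h, L, candidates):
    # Inverse homomorphic image, searched over a finite candidate set.
    return {s for s in candidates if h_apply(h, s) in L}

h = {"a": "A", "b": "BB", "0": ""}         # not eps-free: '0' maps to the empty string
print(h_image(h, {"a0b", "ab", "0"}))      # {'ABB', ''}
print(h_preimage(h, {"A"}, {"a", "a0", "b", "0a0"}))   # {'a', 'a0', '0a0'}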
String projection.
If "s" is a string, and formula_36 is an alphabet, the string projection of "s" is the string that results by removing all characters that are not in formula_36. It is written as formula_37. It is formally defined by removal of characters from the right hand side:
formula_38
Here formula_0 denotes the empty string. The projection of a string is essentially the same as a projection in relational algebra.
String projection may be promoted to the projection of a language. Given a formal language "L", its projection is given by
formula_39
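A direct Python sketch of string projection and its promotion to languages (the alphabet and strings below are illustrative):

def project(s, sigma):
    # Keep, in order, only the characters of s that belong to the alphabet sigma.
    return "".join(ch for ch in s if ch in sigma)

def project_language(L, sigma):
    return {project(s, sigma) for s in L}

print(project("a1b2c3", {"a", "b", "c"}))               # 'abc'
print(project_language({"ab1", "1a1b1"}, {"a", "b"}))   # {'ab'}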
Right and left quotient.
The right quotient of a character "a" from a string "s" is the truncation of the character "a" in the string "s", from the right hand side. It is denoted as formula_40. If the string does not have "a" on the right hand side, the result is the empty string. Thus:
formula_41
The quotient of the empty string may be taken:
formula_42
Similarly, given a subset formula_43 of a monoid formula_44, one may define the quotient subset as
formula_45
Left quotients may be defined similarly, with operations taking place on the left of a string.
Hopcroft and Ullman (1979) define the quotient "L"1/"L"2 of the languages "L"1 and "L"2 over the same alphabet as "L"1/"L"2 = {"s" | ∃"t"∈"L"2. "st"∈"L"1}.
This is not a generalization of the above definition, since, for a string "s" and distinct characters "a", "b", Hopcroft's and Ullman's definition implies that {"sa"} / {"b"} yields {}, rather than {ε}.
The left quotient (when defined similar to Hopcroft and Ullman 1979) of a singleton language "L"1 and an arbitrary language "L"2 is known as Brzozowski derivative; if "L"2 is represented by a regular expression, so can be the left quotient.
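The two notions of right quotient in this section can be sketched for finite languages as follows; the example strings are illustrative.

def char_right_quotient(s, a):
    # s/a as defined above: truncate a trailing a, otherwise return the empty string.
    return s[:-1] if s.endswith(a) else ""

def language_right_quotient(L1, L2):
    # Hopcroft-Ullman style quotient: L1/L2 = { s | there is t in L2 with s t in L1 }.
    return {w[:-len(t)] if t else w for w in L1 for t in L2 if w.endswith(t)}

print(char_right_quotient("abc", "c"))                   # 'ab'
print(char_right_quotient("abc", "b"))                   # '' (no trailing b)
print(language_right_quotient({"abc", "abd"}, {"c"}))    # {'ab'}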
Syntactic relation.
The right quotient of a subset formula_43 of a monoid formula_44 defines an equivalence relation, called the right syntactic relation of "S". It is given by
formula_46
The relation is clearly of finite index (has a finite number of equivalence classes) if and only if the family of right quotients is finite; that is, if
formula_47
is finite. In the case that "M" is the monoid of words over some alphabet, "S" is then a regular language, that is, a language that can be recognized by a finite state automaton. This is discussed in greater detail in the article on syntactic monoids.
Right cancellation.
The right cancellation of a character "a" from a string "s" is the removal of the first occurrence of the character "a" in the string "s", starting from the right hand side. It is denoted as formula_48 and is recursively defined as
formula_49
The empty string is always cancellable:
formula_50
Clearly, right cancellation and projection commute:
formula_51
Prefixes.
The set of prefixes of a string is the set of all prefixes of that string, with respect to a given language:
formula_52
where formula_53.
The prefix closure of a language is
formula_54
Example:
formula_55
A language is called prefix closed if formula_56.
The prefix closure operator is idempotent:
formula_57
The prefix relation is a binary relation formula_58 such that formula_59 if and only if formula_60. This relation is a particular example of a prefix order.
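A small Python sketch of the prefix set, the prefix closure and the prefix relation (the language below is the {abc} example above; everything else is illustrative):

def prefixes(s):
    # All prefixes of s, including the empty string and s itself.
    return {s[:i] for i in range(len(s) + 1)}

def prefix_closure(L):
    out = set()
    for s in L:
        out |= prefixes(s)
    return out

def is_prefix(s, t):
    return t.startswith(s)

L = {"abc"}
print(prefix_closure(L))                                       # {'', 'a', 'ab', 'abc'}
assert prefix_closure(prefix_closure(L)) == prefix_closure(L)  # idempotent
assert is_prefix("ab", "abc") and not is_prefix("b", "abc")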
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\varepsilon"
},
{
"math_id": 1,
"text": "s"
},
{
"math_id": 2,
"text": "t"
},
{
"math_id": 3,
"text": "s \\cdot t"
},
{
"math_id": 4,
"text": "s t"
},
{
"math_id": 5,
"text": "s \\cdot \\varepsilon = s = \\varepsilon \\cdot s"
},
{
"math_id": 6,
"text": "s \\cdot (t \\cdot u) = (s \\cdot t) \\cdot u"
},
{
"math_id": 7,
"text": "(\\langle b \\rangle \\cdot \\langle l \\rangle) \\cdot (\\varepsilon \\cdot \\langle ah \\rangle) = \\langle bl \\rangle \\cdot \\langle ah \\rangle = \\langle blah \\rangle"
},
{
"math_id": 8,
"text": "S"
},
{
"math_id": 9,
"text": "T"
},
{
"math_id": 10,
"text": "S \\cdot T"
},
{
"math_id": 11,
"text": "S \\cdot T = \\{ s \\cdot t \\mid s \\in S \\land t \\in T \\}"
},
{
"math_id": 12,
"text": "\\cdot"
},
{
"math_id": 13,
"text": "\\{\\varepsilon\\}"
},
{
"math_id": 14,
"text": "\\{\\}"
},
{
"math_id": 15,
"text": "S \\cdot \\{\\varepsilon\\} = S = \\{\\varepsilon\\} \\cdot S"
},
{
"math_id": 16,
"text": "S \\cdot \\{\\} = \\{\\} = \\{\\} \\cdot S"
},
{
"math_id": 17,
"text": "S \\cdot (T \\cdot U) = (S \\cdot T) \\cdot U"
},
{
"math_id": 18,
"text": "D = \\{ \\langle 0 \\rangle, \\langle 1 \\rangle, \\langle 2 \\rangle, \\langle 3 \\rangle, \\langle 4 \\rangle, \\langle 5 \\rangle, \\langle 6 \\rangle, \\langle 7 \\rangle, \\langle 8 \\rangle, \\langle 9 \\rangle \\}"
},
{
"math_id": 19,
"text": "D \\cdot D \\cdot D"
},
{
"math_id": 20,
"text": "\\operatorname{Alph}(s)"
},
{
"math_id": 21,
"text": "\\operatorname{Alph}(S) = \\bigcup_{s \\in S} \\operatorname{Alph}(s)"
},
{
"math_id": 22,
"text": "\\{\\langle a \\rangle,\\langle c \\rangle,\\langle o \\rangle\\}"
},
{
"math_id": 23,
"text": "\\langle cacao \\rangle"
},
{
"math_id": 24,
"text": "D"
},
{
"math_id": 25,
"text": "f(L)=\\bigcup_{s\\in L} f(s)"
},
{
"math_id": 26,
"text": "f(a)=s"
},
{
"math_id": 27,
"text": "a"
},
{
"math_id": 28,
"text": "L"
},
{
"math_id": 29,
"text": "f(L)"
},
{
"math_id": 30,
"text": "f^{-1}(s) = \\{ w \\mid f(w) = s \\}"
},
{
"math_id": 31,
"text": "f^{-1}(L) = \\{ s \\mid f(s) \\in L \\}"
},
{
"math_id": 32,
"text": "f(f^{-1}(L)) \\neq L"
},
{
"math_id": 33,
"text": "f(f^{-1}(L)) \\subseteq L"
},
{
"math_id": 34,
"text": "L \\subseteq f^{-1}(f(L))"
},
{
"math_id": 35,
"text": "f(a) \\neq \\varepsilon"
},
{
"math_id": 36,
"text": "\\Sigma"
},
{
"math_id": 37,
"text": "\\pi_\\Sigma(s)\\,"
},
{
"math_id": 38,
"text": "\\pi_\\Sigma(s) = \\begin{cases} \n\\varepsilon & \\mbox{if } s=\\varepsilon \\mbox{ the empty string} \\\\\n\\pi_\\Sigma(t) & \\mbox{if } s=ta \\mbox{ and } a \\notin \\Sigma \\\\ \n\\pi_\\Sigma(t)a & \\mbox{if } s=ta \\mbox{ and } a \\in \\Sigma \n\\end{cases}"
},
{
"math_id": 39,
"text": "\\pi_\\Sigma (L)=\\{\\pi_\\Sigma(s)\\ \\vert\\ s\\in L \\}"
},
{
"math_id": 40,
"text": "s/a"
},
{
"math_id": 41,
"text": "(sa)/ b = \\begin{cases} \ns & \\mbox{if } a=b \\\\\n\\varepsilon & \\mbox{if } a \\ne b\n\\end{cases}"
},
{
"math_id": 42,
"text": "\\varepsilon / a = \\varepsilon"
},
{
"math_id": 43,
"text": "S\\subset M"
},
{
"math_id": 44,
"text": "M"
},
{
"math_id": 45,
"text": "S/a=\\{s\\in M\\ \\vert\\ sa\\in S\\}"
},
{
"math_id": 46,
"text": "\\sim_S \\;\\,=\\, \\{(s,t)\\in M\\times M\\ \\vert\\ S/s = S/t \\}"
},
{
"math_id": 47,
"text": "\\{S/m\\ \\vert\\ m\\in M\\}"
},
{
"math_id": 48,
"text": "s\\div a"
},
{
"math_id": 49,
"text": "(sa)\\div b = \\begin{cases} \ns & \\mbox{if } a=b \\\\\n(s\\div b)a & \\mbox{if } a \\ne b\n\\end{cases}"
},
{
"math_id": 50,
"text": "\\varepsilon \\div a = \\varepsilon"
},
{
"math_id": 51,
"text": "\\pi_\\Sigma(s)\\div a = \\pi_\\Sigma(s \\div a )"
},
{
"math_id": 52,
"text": "\\operatorname{Pref}_L(s) = \\{t\\ \\vert\\ s=tu \\mbox { for } t,u\\in \\operatorname{Alph}(L)^*\\}"
},
{
"math_id": 53,
"text": "s\\in L"
},
{
"math_id": 54,
"text": "\\operatorname{Pref} (L) = \\bigcup_{s\\in L} \\operatorname{Pref}_L(s) = \\left\\{ t\\ \\vert\\ s=tu; s\\in L; t,u\\in \\operatorname{Alph}(L)^* \\right\\}"
},
{
"math_id": 55,
"text": "L=\\left\\{abc\\right\\}\\mbox{ then } \\operatorname{Pref}(L)=\\left\\{\\varepsilon, a, ab, abc\\right\\}"
},
{
"math_id": 56,
"text": "\\operatorname{Pref} (L) = L"
},
{
"math_id": 57,
"text": "\\operatorname{Pref} (\\operatorname{Pref} (L)) =\\operatorname{Pref} (L)"
},
{
"math_id": 58,
"text": "\\sqsubseteq"
},
{
"math_id": 59,
"text": "s\\sqsubseteq t "
},
{
"math_id": 60,
"text": "s \\in \\operatorname{Pref}_L(t)"
}
] | https://en.wikipedia.org/wiki?curid=10803719 |
10807945 | Odd–even sort | In computing, an odd–even sort or odd–even transposition sort (also known as brick sort or parity sort) is a relatively simple sorting algorithm, developed originally for use on parallel processors with local interconnections. It is a comparison sort related to bubble sort, with which it shares many characteristics. It functions by comparing all odd/even indexed pairs of adjacent elements in the list and, if a pair is in the wrong order (the first is larger than the second), swapping the elements. The next step repeats this for even/odd indexed pairs (of adjacent elements). It then alternates between odd/even and even/odd steps until the list is sorted.
Sorting on processor arrays.
On parallel processors, with one value per processor and only local left–right neighbor connections, the processors all concurrently do a compare–exchange operation with their neighbors, alternating between odd–even and even–odd pairings. This algorithm was originally presented, and shown to be efficient on such processors, by Habermann in 1972.
The algorithm extends efficiently to the case of multiple items per processor. In the Baudet–Stevenson odd–even merge-splitting algorithm, each processor sorts its own sublist at each step, using any efficient sort algorithm, and then performs a merge splitting, or transposition–merge, operation with its neighbor, with neighbor pairing alternating between odd–even and even–odd on each step.
Batcher's odd–even mergesort.
A related but more efficient sort algorithm is the Batcher odd–even mergesort, using compare–exchange operations and perfect-shuffle operations.
Batcher's method is efficient on parallel processors with long-range connections.
Algorithm.
The single-processor algorithm, like bubblesort, is simple but not very efficient. Here a zero-based index is assumed:
function oddEvenSort(list) {
    function swap(list, i, j) {
        var temp = list[i];
        list[i] = list[j];
        list[j] = temp;
    }

    var sorted = false;
    while (!sorted) {
        sorted = true;
        // odd–even pass: compare elements at odd indices with their right neighbours
        for (var i = 1; i < list.length - 1; i += 2) {
            if (list[i] > list[i + 1]) {
                swap(list, i, i + 1);
                sorted = false;
            }
        }
        // even–odd pass: compare elements at even indices with their right neighbours
        for (var i = 0; i < list.length - 1; i += 2) {
            if (list[i] > list[i + 1]) {
                swap(list, i, i + 1);
                sorted = false;
            }
        }
    }
}
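For example (a hypothetical call, not part of the original listing), the function sorts an array in place:

var data = [5, 3, 8, 1, 9, 2];
oddEvenSort(data);
// data is now [1, 2, 3, 5, 8, 9]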
Proof of correctness.
Claim: Let formula_2 be a sequence of data ordered by <. The odd–even sort algorithm correctly sorts this data in formula_3 passes. (A pass here is defined to be a full sequence of odd–even, or even–odd comparisons. The passes occur in order pass 1: odd–even, pass 2: even–odd, etc.)
Proof:
This proof is based loosely on one by Thomas Worsch.
Since the sorting algorithm only involves comparison-swap operations and is oblivious (the order of comparison-swap operations does not depend on the data), by Knuth's 0–1 sorting principle, it suffices to check correctness when each formula_4 is either 0 or 1. Assume that there are formula_5 1s.
Observe that the rightmost 1 can be either in an even or odd position, so it might not be moved by the first odd–even pass. But after the first odd–even pass, the rightmost 1 will be in an even position. It follows that it will be moved to the right by all remaining passes. Since the rightmost 1 starts in position greater than or equal to formula_5, it must be moved at most formula_6 steps. It follows that it takes at most formula_7 passes to move the rightmost 1 to its correct position.
Now, consider the second rightmost 1. After two passes, the 1 to its right will have moved right by at least one step. It follows that, for all remaining passes, we can view the second rightmost 1 as the rightmost 1. The second rightmost 1 starts in position at least formula_8 and must be moved to position at most formula_9, so it must be moved at most formula_10 steps. After at most 2 passes, the rightmost 1 will have already moved, so the entry to the right of the second rightmost 1 will be 0. Hence, for all passes after the first two, the second rightmost 1 will move to the right. It thus takes at most formula_11 passes to move the second rightmost 1 to its correct position.
Continuing in this manner, by induction it can be shown that the formula_12-th rightmost 1 is moved to its correct position in at most formula_13 passes. Since formula_14, it follows that the formula_12-th rightmost 1 is moved to its correct position in at most formula_15 passes. The list is thus correctly sorted in formula_3 passes. QED.
We remark that each pass takes formula_1 steps, so this algorithm has formula_0 complexity.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "O(n^2)"
},
{
"math_id": 1,
"text": "O(n)"
},
{
"math_id": 2,
"text": "a_1, ..., a_n"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "a_i"
},
{
"math_id": 5,
"text": "e"
},
{
"math_id": 6,
"text": "n - e"
},
{
"math_id": 7,
"text": "n - e + 1"
},
{
"math_id": 8,
"text": "e - 1"
},
{
"math_id": 9,
"text": "n - 1"
},
{
"math_id": 10,
"text": "(n - 1) - (e - 1) = n - e"
},
{
"math_id": 11,
"text": "n - e + 2"
},
{
"math_id": 12,
"text": "i"
},
{
"math_id": 13,
"text": "n - e + i"
},
{
"math_id": 14,
"text": "i \\leq e"
},
{
"math_id": 15,
"text": "n - e + e = n"
}
] | https://en.wikipedia.org/wiki?curid=10807945 |
1081285 | Point-set triangulation | Simplicial complex in Euclidean geometry
A triangulation of a set of points formula_0 in the Euclidean space formula_1 is a simplicial complex that covers the convex hull of formula_0, and whose vertices belong to formula_0. In the plane (when formula_0 is a set of points in formula_2), triangulations are made up of triangles, together with their edges and vertices. Some authors require that all the points of formula_0 are vertices of its triangulations. In this case, a triangulation of a set of points formula_0 in the plane can alternatively be defined as a maximal set of non-crossing edges between points of formula_0. In the plane, triangulations are special cases of planar straight-line graphs.
A particularly interesting kind of triangulations are the Delaunay triangulations. They are the geometric duals of Voronoi diagrams. The Delaunay triangulation of a set of points formula_0 in the plane contains the Gabriel graph, the nearest neighbor graph and the minimal spanning tree of formula_0.
Triangulations have a number of applications, and there is an interest in finding "good" triangulations of a given point set under some criteria, for instance minimum-weight triangulations. Sometimes it is desirable to have a triangulation with special properties, e.g., one in which all triangles have large angles (long and narrow ("splinter") triangles are avoided).
Given a set of edges that connect points of the plane, the problem of determining whether they contain a triangulation is NP-complete.
Regular triangulations.
Some triangulations of a set of points formula_3 can be obtained by lifting the points of formula_0 into formula_4 (which amounts to adding a coordinate formula_5 to each point of formula_0), by computing the convex hull of the lifted set of points, and by projecting the lower faces of this convex hull back on formula_1. The triangulations built this way are referred to as the regular triangulations of formula_0. When the points are lifted to the paraboloid of equation formula_6, this construction results in the Delaunay triangulation of formula_0. Note that, in order for this construction to provide a triangulation, the lower convex hull of the lifted set of points needs to be simplicial. In the case of Delaunay triangulations, this amounts to requiring that no formula_7 points of formula_0 lie on the same sphere.
Combinatorics in the plane.
Every triangulation of any set formula_0 of formula_8 points in the plane has formula_9 triangles and formula_10 edges where formula_11 is the number of points of formula_0 in the boundary of the convex hull of formula_0. This follows from a straightforward Euler characteristic argument.
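A direct transcription of these counts (the function name is illustrative):

function triangulationSize(n, h) {
    // n points in the plane, h of them on the boundary of the convex hull;
    // every triangulation using all points has the same number of triangles and edges
    return {
        triangles: 2 * n - h - 2,
        edges: 3 * n - h - 3
    };
}

triangulationSize(4, 3);   // one interior point: { triangles: 3, edges: 6 }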
Algorithms to build triangulations in the plane.
Triangle Splitting Algorithm: Find the convex hull of the point set formula_0 and triangulate this hull as a polygon. Choose an interior point and draw edges to the three vertices of the triangle that contains it. Continue this process until all interior points are exhausted.
Incremental Algorithm: Sort the points of formula_0 according to x-coordinates. The first three points determine a triangle. Consider the next point formula_12 in the ordered set and connect it with all previously considered points formula_13 that are visible to formula_12. Continue this process of adding one point of formula_0 at a time until all of formula_0 has been processed.
Time complexity of various algorithms.
The following table reports time complexity results for the construction of triangulations of point sets in the plane, under different optimality criteria, where formula_8 is the number of points.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{P}"
},
{
"math_id": 1,
"text": "\\mathbb{R}^d"
},
{
"math_id": 2,
"text": "\\mathbb{R}^2"
},
{
"math_id": 3,
"text": "\\mathcal{P}\\subset\\mathbb{R}^d"
},
{
"math_id": 4,
"text": "\\mathbb{R}^{d+1}"
},
{
"math_id": 5,
"text": "x_{d+1}"
},
{
"math_id": 6,
"text": "x_{d+1} = x_1^2+\\cdots+x_d^2"
},
{
"math_id": 7,
"text": "d+2"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": " 2n - h - 2"
},
{
"math_id": 10,
"text": "3n - h - 3"
},
{
"math_id": 11,
"text": "h"
},
{
"math_id": 12,
"text": "p"
},
{
"math_id": 13,
"text": "\\{p_1,..., p_k\\}"
}
] | https://en.wikipedia.org/wiki?curid=1081285 |
10814713 | Levi's lemma | In theoretical computer science and mathematics, especially in the area of combinatorics on words, the Levi lemma states that, for all strings "u", "v", "x" and "y", if "uv" = "xy", then there exists a string "w" such that either
"uw = x" and "v" = "wy" (if |"u"| ≤ |"x"|)
or
"u" = "xw" and "wv" = "y" (if |"u"| ≥ |"x"|)
That is, there is a string "w" that is "in the middle", and can be grouped to one side or the other. Levi's lemma is named after Friedrich Wilhelm Levi, who published it in 1944.
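The lemma is constructive: the string "w" is just the part by which the longer of "u" and "x" overhangs the shorter. A small sketch (the function name is ours) computes it:

function leviMiddle(u, v, x, y) {
    // assumes the hypothesis uv = xy; returns w and which grouping applies
    if (u + v !== x + y) return null;
    if (u.length <= x.length) {
        return { w: x.slice(u.length), grouping: "uw = x and v = wy" };
    }
    return { w: u.slice(x.length), grouping: "u = xw and wv = y" };
}

leviMiddle("ab", "cde", "abcd", "e");   // { w: "cd", grouping: "uw = x and v = wy" }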
Applications.
Levi's lemma can be applied repeatedly in order to solve word equations; in this context it is sometimes called the Nielsen transformation by analogy with the Nielsen transformation for groups. For example, starting with an equation "xα" = "yβ" where "x" and "y" are the unknowns, we can transform it (assuming "|x| ≥ |y|", so there exists "t" such that "x"="yt") to "ytα" = "yβ", thus to "tα" = "β". This approach results in a graph of substitutions generated by repeatedly applying Levi's lemma. If each unknown appears at most twice, then a word equation is called quadratic; in a quadratic word equation the graph obtained by repeatedly applying Levi's lemma is finite, so it is decidable if a quadratic word equation has a solution. A more general method for solving word equations is Makanin's algorithm.
Generalizations.
The above is known as the Levi lemma for strings; the lemma can occur in a more general form in graph theory and in monoid theory; for example, there is a more general Levi lemma for traces originally due to Christine Duboc.
Several proofs of Levi's Lemma for traces can be found in "The Book of Traces".
A monoid in which Levi's lemma holds is said to have the equidivisibility property. The free monoid of strings and string concatenation has this property (by Levi's lemma for strings), but by itself equidivisibility is not enough to guarantee that a monoid is free. However an equidivisible monoid "M" is free if additionally there exists a homomorphism "f" from "M" to the monoid of natural numbers (free monoid on one generator) with the property that the preimage of 0 contains only the identity element of "M", i.e. formula_0. (Note that "f" simply being a homomorphism does not guarantee this latter property, as there could be multiple elements of "M" mapped to 0.) A monoid for which such a homomorphism exists is also called "graded" (and the "f" is called a gradation).
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f^{-1}(0) = \\{1_M\\}"
}
] | https://en.wikipedia.org/wiki?curid=10814713 |
1081538 | Complex polygon | Polygon in complex space, or which self-intersects
The term complex polygon can mean two different things: in geometry, a polygon lying in the complex Hilbert plane, which has two complex dimensions; in computer graphics, a polygon whose boundary comprises more than one circuit, or which intersects itself.
Geometry.
In geometry, a complex polygon is a polygon in the complex Hilbert plane, which has two complex dimensions.
A complex number may be represented in the form formula_0, where formula_1 and formula_2 are real numbers, and formula_3 is the square root of formula_4. Multiples of formula_3 such as formula_5 are called "imaginary numbers". A complex number lies in a complex plane having one real and one imaginary dimension, which may be represented as an Argand diagram. So a single complex dimension comprises two spatial dimensions, but of different kinds - one real and the other imaginary.
The unitary plane comprises two such complex planes, which are orthogonal to each other. Thus it has two real dimensions and two imaginary dimensions.
A complex polygon is a (complex) two-dimensional (i.e. four spatial dimensions) analogue of a real polygon. As such it is an example of the more general complex polytope in any number of complex dimensions.
In a "real" plane, a visible figure can be constructed as the "real conjugate" of some complex polygon.
Computer graphics.
In computer graphics, a complex polygon is a polygon which has a boundary comprising discrete circuits, such as a polygon with a hole in it.
Self-intersecting polygons are also sometimes included among the complex polygons. Vertices are only counted at the ends of edges, not where edges intersect in space.
A formula relating an integral over a bounded region to a closed line integral may still apply when the "inside-out" parts of the region are counted negatively.
Moving around the polygon, the total amount one "turns" at the vertices can be any integer times 360°, e.g. 720° for a pentagram and 0° for an angular "eight".
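A sketch of this turning count for a closed polygon given by its vertex list (the helper name is ours; vertices are assumed to be traversed in order, with no two consecutive vertices coincident):

function totalTurning(vertices) {
    // sum of the signed exterior angles at the vertices, in degrees;
    // ±360° for a convex polygon, ±720° for a pentagram, 0° for an angular "eight"
    var n = vertices.length, total = 0;
    for (var i = 0; i < n; i++) {
        var a = vertices[i], b = vertices[(i + 1) % n], c = vertices[(i + 2) % n];
        var v1 = [b[0] - a[0], b[1] - a[1]];   // incoming edge direction at b
        var v2 = [c[0] - b[0], c[1] - b[1]];   // outgoing edge direction at b
        total += Math.atan2(v1[0] * v2[1] - v1[1] * v2[0], v1[0] * v2[0] + v1[1] * v2[1]);
    }
    return total * 180 / Math.PI;
}

// Example: a regular pentagram (pentagon vertices visited in the order 0, 2, 4, 1, 3)
var star = [0, 2, 4, 1, 3].map(function (k) {
    var t = 2 * Math.PI * k / 5;
    return [Math.cos(t), Math.sin(t)];
});
totalTurning(star);   // ≈ 720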
References.
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(a + ib)"
},
{
"math_id": 1,
"text": "a"
},
{
"math_id": 2,
"text": "b"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "-1"
},
{
"math_id": 5,
"text": "ib"
}
] | https://en.wikipedia.org/wiki?curid=1081538 |
10818503 | Feller's coin-tossing constants | Feller's coin-tossing constants are a set of numerical constants which describe asymptotic probabilities that in "n" independent tosses of a fair coin, no run of "k" consecutive heads (or, equally, tails) appears.
William Feller showed that if this probability is written as "p"("n","k") then
formula_0
where α"k" is the smallest positive real root of
formula_1
and
formula_2
Values of the constants.
For formula_3 the constants are related to the golden ratio, formula_4, and Fibonacci numbers; the constants are formula_5 and formula_6. The exact probability "p"(n,2) can be calculated either by using Fibonacci numbers, "p"(n,2) = formula_7 or by solving a direct recurrence relation leading to the same result. For higher values of formula_8, the constants are related to generalizations of Fibonacci numbers such as the tribonacci and tetranacci numbers. The corresponding exact probabilities can be calculated as "p"(n,k) = formula_9.
Example.
If we toss a fair coin ten times then the exact probability that no pair of heads comes up in succession (i.e. "n" = 10 and "k" = 2) is "p"(10,2) = formula_10 = 0.140625. The approximation formula_11 gives 1.44721356... × 1.23606797...^(−11) = 0.1406263...
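These numbers can be reproduced with a short calculation. The sketch below counts the toss sequences with no run of "k" heads by the standard run-length recurrence (for "k" = 2 this reproduces the Fibonacci formula above), divides by 2"n" for the exact probability, and compares with the asymptotic approximation; the function name and the explicit constants for "k" = 2 are spelled out for illustration:

// Exact p(n, k): probability that n fair-coin tosses contain no run of k heads.
// c[i] counts length-i sequences with no such run: c[i] = 2^i for i < k, and
// c[i] = c[i-1] + ... + c[i-k] otherwise (classify by the number of trailing heads).
// For k = 2, c[n] is the Fibonacci number F(n+2).
function noRunProbability(n, k) {
    var c = [];
    for (var i = 0; i <= n; i++) {
        if (i < k) {
            c[i] = Math.pow(2, i);
        } else {
            c[i] = 0;
            for (var j = 1; j <= k; j++) c[i] += c[i - j];
        }
    }
    return c[n] / Math.pow(2, n);
}

noRunProbability(10, 2);                 // 0.140625  (= 9/64)

// Asymptotic approximation for k = 2: alpha_2 = sqrt(5) - 1, beta_2 = 1 + 1/sqrt(5).
var alpha2 = Math.sqrt(5) - 1;           // 1.23606797...
var beta2  = 1 + 1 / Math.sqrt(5);       // 1.44721356...
beta2 / Math.pow(alpha2, 11);            // 0.1406263...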
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\lim_{n\\rightarrow \\infty} p(n,k) \\alpha_k^{n+1}=\\beta_k\n"
},
{
"math_id": 1,
"text": "x^{k+1}=2^{k+1}(x-1)"
},
{
"math_id": 2,
"text": "\\beta_k={2-\\alpha_k \\over k+1-k\\alpha_k}."
},
{
"math_id": 3,
"text": "k=2"
},
{
"math_id": 4,
"text": "\\varphi"
},
{
"math_id": 5,
"text": "\\sqrt{5}-1=2\\varphi-2=2/\\varphi"
},
{
"math_id": 6,
"text": "1+1/\\sqrt{5}"
},
{
"math_id": 7,
"text": "\\tfrac{F_{n+2}}{2^n}"
},
{
"math_id": 8,
"text": "k"
},
{
"math_id": 9,
"text": "\\tfrac{F^{(k)}_{n+2}}{2^n}"
},
{
"math_id": 10,
"text": "\\tfrac{9}{64}"
},
{
"math_id": 11,
"text": "p(n,k) \\approx \\beta_k / \\alpha_k^{n+1}"
}
] | https://en.wikipedia.org/wiki?curid=10818503 |
1082294 | Compound Poisson process | A compound Poisson process is a continuous-time stochastic process with jumps. The jumps arrive randomly according to a Poisson process and the size of the jumps is also random, with a specified probability distribution. To be precise, a compound Poisson process, parameterised by a rate formula_0 and jump size distribution "G", is a process formula_1 given by
formula_2
where, formula_3 is the counting variable of a Poisson process with rate formula_4, and formula_5 are independent and identically distributed random variables, with distribution function "G", which are also independent of formula_6
When formula_7 are non-negative integer-valued random variables, then this compound Poisson process is known as a stuttering Poisson process.
Properties of the compound Poisson process.
The expected value of a compound Poisson process can be calculated using a result known as Wald's equation as:
formula_8
Making similar use of the law of total variance, the variance can be calculated as:
formula_9
Lastly, using the law of total probability, the moment generating function can be given as follows:
formula_10
formula_11
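The mean and variance above are easy to check by simulation. A minimal sketch follows, assuming an illustrative jump distribution (Uniform(0,1)) and rate; the helper names are ours, and the Poisson sampler uses Knuth's product-of-uniforms method, which is adequate when the expected number of jumps is small:

// Sample a Poisson(mu) variate (Knuth's product-of-uniforms method).
function samplePoisson(mu) {
    var limit = Math.exp(-mu), n = 0, product = Math.random();
    while (product > limit) {
        n += 1;
        product *= Math.random();
    }
    return n;
}

// One sample of Y(t): draw N(t) ~ Poisson(lambda * t), then sum N(t) independent jumps.
function sampleCompoundPoisson(lambda, t, sampleJump) {
    var n = samplePoisson(lambda * t), sum = 0;
    for (var i = 0; i < n; i++) sum += sampleJump();
    return sum;
}

// With jumps D ~ Uniform(0,1): E(D) = 1/2 and E(D^2) = 1/3, so for lambda = 2, t = 3
// the formulas above give E(Y(t)) = 3 and var(Y(t)) = 2.
var m = 100000, total = 0;
for (var i = 0; i < m; i++) total += sampleCompoundPoisson(2, 3, Math.random);
total / m;   // sample mean, close to 3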
Exponentiation of measures.
Let "N", "Y", and "D" be as above. Let "μ" be the probability measure according to which "D" is distributed, i.e.
formula_12
Let "δ"0 be the trivial probability distribution putting all of the mass at zero. Then the probability distribution of "Y"("t") is the measure
formula_13
where the exponential exp("ν") of a finite measure "ν" on Borel subsets of the real line is defined by
formula_14
and
formula_15
is a convolution of measures, and the series converges weakly. | [
{
"math_id": 0,
"text": "\\lambda > 0"
},
{
"math_id": 1,
"text": "\\{\\,Y(t) : t \\geq 0 \\,\\}"
},
{
"math_id": 2,
"text": "Y(t) = \\sum_{i=1}^{N(t)} D_i"
},
{
"math_id": 3,
"text": " \\{\\,N(t) : t \\geq 0\\,\\}"
},
{
"math_id": 4,
"text": "\\lambda"
},
{
"math_id": 5,
"text": " \\{\\,D_i : i \\geq 1\\,\\}"
},
{
"math_id": 6,
"text": " \\{\\,N(t) : t \\geq 0\\,\\}.\\,"
},
{
"math_id": 7,
"text": " D_i "
},
{
"math_id": 8,
"text": "\\operatorname E(Y(t)) = \\operatorname E(D_1 + \\cdots + D_{N(t)}) = \\operatorname E(N(t))\\operatorname E(D_1) = \\operatorname E(N(t)) \\operatorname E(D) = \\lambda t \\operatorname E(D)."
},
{
"math_id": 9,
"text": "\n\\begin{align}\n\\operatorname{var}(Y(t)) &= \\operatorname E(\\operatorname{var}(Y(t)\\mid N(t))) + \\operatorname{var}(\\operatorname E(Y(t)\\mid N(t))) \\\\[5pt]\n&= \\operatorname E(N(t)\\operatorname{var}(D)) + \\operatorname{var}(N(t) \\operatorname E(D)) \\\\[5pt]\n&= \\operatorname{var}(D) \\operatorname E(N(t)) + \\operatorname E(D)^2 \\operatorname{var}(N(t)) \\\\[5pt]\n&= \\operatorname{var}(D)\\lambda t + \\operatorname E(D)^2\\lambda t \\\\[5pt]\n&= \\lambda t(\\operatorname{var}(D) + \\operatorname E(D)^2) \\\\[5pt]\n&= \\lambda t \\operatorname E(D^2).\n\\end{align}\n"
},
{
"math_id": 10,
"text": "\\Pr(Y(t)=i) = \\sum_n \\Pr(Y(t)=i\\mid N(t)=n)\\Pr(N(t)=n) "
},
{
"math_id": 11,
"text": "\n\\begin{align}\n\\operatorname E(e^{sY}) & = \\sum_i e^{si} \\Pr(Y(t)=i) \\\\[5pt]\n& = \\sum_i e^{si} \\sum_{n} \\Pr(Y(t)=i\\mid N(t)=n)\\Pr(N(t)=n) \\\\[5pt]\n& = \\sum_n \\Pr(N(t)=n) \\sum_i e^{si} \\Pr(Y(t)=i\\mid N(t)=n) \\\\[5pt]\n& = \\sum_n \\Pr(N(t)=n) \\sum_i e^{si}\\Pr(D_1 + D_2 + \\cdots + D_n=i) \\\\[5pt]\n& = \\sum_n \\Pr(N(t)=n) M_D(s)^n \\\\[5pt]\n& = \\sum_n \\Pr(N(t)=n) e^{n\\ln(M_D(s))} \\\\[5pt]\n& = M_{N(t)}(\\ln(M_D(s))) \\\\[5pt]\n& = e^{\\lambda t \\left( M_D(s) - 1 \\right) }.\n\\end{align}\n"
},
{
"math_id": 12,
"text": "\\mu(A) = \\Pr(D \\in A).\\,"
},
{
"math_id": 13,
"text": "\\exp(\\lambda t(\\mu - \\delta_0))\\,"
},
{
"math_id": 14,
"text": "\\exp(\\nu) = \\sum_{n=0}^\\infty {\\nu^{*n} \\over n!}"
},
{
"math_id": 15,
"text": " \\nu^{*n} = \\underbrace{\\nu * \\cdots *\\nu}_{n \\text{ factors}}"
}
] | https://en.wikipedia.org/wiki?curid=1082294 |
1082550 | Kronecker–Weber theorem | Every finite abelian extension of Q is contained within some cyclotomic field
In algebraic number theory, it can be shown that every cyclotomic field is an abelian extension of the rational number field Q, having Galois group of the form formula_0. The Kronecker–Weber theorem provides a partial converse: every finite abelian extension of Q is contained within some cyclotomic field. In other words, every algebraic integer whose Galois group is abelian can be expressed as a sum of roots of unity with rational coefficients. For example,
formula_1 formula_2 and formula_3
The theorem is named after Leopold Kronecker and Heinrich Martin Weber.
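The three identities above can be checked numerically: the imaginary parts cancel in the first and third sums, and the second sum is purely imaginary. A short sketch (the helper names are ours):

function re(k, n) { return Math.cos(2 * Math.PI * k / n); }   // real part of e^(2*pi*i*k/n)
function im(k, n) { return Math.sin(2 * Math.PI * k / n); }   // imaginary part of e^(2*pi*i*k/n)

re(1, 5) - re(2, 5) - re(3, 5) + re(4, 5);   // 2.2360679...  = sqrt(5)
im(1, 3) - im(2, 3);                         // 1.7320508...  = sqrt(3), the coefficient of i in sqrt(-3)
re(1, 12) - re(5, 12);                       // 1.7320508...  = sqrt(3), since e^(pi*i/6) = e^(2*pi*i/12)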
Field-theoretic formulation.
The Kronecker–Weber theorem can be stated in terms of fields and field extensions.
Precisely, the Kronecker–Weber theorem states: every finite abelian extension of the rational numbers Q is a subfield of a cyclotomic field.
That is, whenever an algebraic number field has a Galois group over Q that is an abelian group, the field is a subfield of a field obtained by adjoining a root of unity to the rational numbers.
For a given abelian extension "K" of Q there is a "minimal" cyclotomic field that contains it. The theorem allows one to define the conductor of "K" as the smallest integer "n" such that "K" lies inside the field generated by the "n"-th roots of unity. For example the quadratic fields have as conductor the absolute value of their discriminant, a fact generalised in class field theory.
History.
The theorem was first stated by Kronecker (1853) though his argument was not complete for extensions of degree a power of 2.
Weber (1886) published a proof, but this had some gaps and errors that were pointed out and corrected by . The first complete proof was given by Hilbert (1896).
Generalizations.
Lubin and Tate (1965, 1966) proved the local Kronecker–Weber theorem which states that any abelian extension of a local field can be constructed using cyclotomic extensions and Lubin–Tate extensions. Hazewinkel (1975), Rosen (1981) and Lubin (1981) gave other proofs.
Hilbert's twelfth problem asks for generalizations of the Kronecker–Weber theorem to base fields other than the rational numbers, and asks for the analogues of the roots of unity for those fields. A different approach to abelian extensions is given by class field theory. | [
{
"math_id": 0,
"text": "(\\mathbb Z/n\\mathbb Z)^\\times"
},
{
"math_id": 1,
"text": "\\sqrt{5} = e^{2 \\pi i / 5} - e^{4 \\pi i / 5} - e^{6 \\pi i / 5} + e^{8 \\pi i / 5},"
},
{
"math_id": 2,
"text": "\\sqrt{-3} = e^{2 \\pi i / 3} - e^{4 \\pi i / 3},"
},
{
"math_id": 3,
"text": "\\sqrt{3} = e^{\\pi i / 6} - e^{5 \\pi i / 6}."
}
] | https://en.wikipedia.org/wiki?curid=1082550 |
1082645 | Topology optimization | Mathematical method for optimizing material layout under given conditions
Topology optimization is a mathematical method that optimizes material layout within a given design space, for a given set of loads, boundary conditions and constraints with the goal of maximizing the performance of the system. Topology optimization is different from shape optimization and sizing optimization in the sense that the design can attain any shape within the design space, instead of dealing with predefined configurations.
The conventional topology optimization formulation uses a finite element method (FEM) to evaluate the design performance. The design is optimized using either gradient-based mathematical programming techniques such as the optimality criteria algorithm and the method of moving asymptotes or non gradient-based algorithms such as genetic algorithms.
Topology optimization has a wide range of applications in aerospace, mechanical, bio-chemical and civil engineering. Currently, engineers mostly use topology optimization at the concept level of a design process. Due to the free forms that naturally occur, the result is often difficult to manufacture. For that reason the result emerging from topology optimization is often fine-tuned for manufacturability. Adding constraints to the formulation in order to increase the manufacturability is an active field of research. In some cases results from topology optimization can be directly manufactured using additive manufacturing; topology optimization is thus a key part of design for additive manufacturing.
Problem statement.
A topology optimization problem can be written in the general form of an optimization problem as:
formula_0
The problem statement includes the following: an objective function formula_1 to be minimized; the material distribution formula_2 as the design variable, together with the state field formula_3 obtained from the governing equations for a given formula_4; the design space formula_5 within which material may be placed; and formula_6 constraints formula_7 that the solution must satisfy, such as a limit on the total amount of material.
Evaluating formula_8 often includes solving a differential equation. This is most commonly done using the finite element method since these equations do not have a known analytical solution.
Implementation methodologies.
There are various implementation methodologies that have been used to solve topology optimization problems.
Solving with discrete/binary variables.
Solving topology optimization problems in a discrete sense is done by discretizing the design domain into finite elements. The material densities inside these elements are then treated as the problem variables: a density of one indicates the presence of material, while zero indicates an absence of material. Since the attainable topological complexity of the design depends on the number of elements, a large number is preferred, but this comes at a cost. Firstly, solving the FEM system becomes more expensive. Secondly, algorithms that can handle a large number (several thousands of elements is not uncommon) of discrete variables with multiple constraints are unavailable. Moreover, they are impractically sensitive to parameter variations. In the literature, problems with up to 30000 variables have been reported.
Solving the problem with continuous variables.
The complexities stated earlier in solving topology optimization problems with binary variables have caused the community to search for other options. One is to model the densities with continuous variables, so that the material densities can also attain values between zero and one. Gradient-based algorithms that handle large numbers of continuous variables and multiple constraints are available, but the material properties then have to be modelled in a continuous setting. This is done through interpolation. One of the most widely implemented interpolation methodologies is the Solid Isotropic Material with Penalisation method (SIMP). This interpolation is essentially a power law formula_9. It interpolates the Young's modulus of the material to the scalar selection field. The value of the penalisation parameter formula_10 is generally taken between formula_11, which has been shown to be consistent with a physically realizable material micro-structure. In the SIMP method a lower bound on the Young's modulus is added, formula_12, to make sure the derivatives of the objective function are non-zero when the density becomes zero. The higher the penalisation factor, the more SIMP penalises the algorithm for using non-binary densities. Unfortunately, the penalisation parameter also introduces non-convexities.
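A minimal sketch of the SIMP interpolation and its effect (the numeric values are illustrative; "E0" denotes the small lower bound and "E1" the stiffness of the solid material):

function simpYoungsModulus(rho, p, E0, E1) {
    // E(rho) = E0 + rho^p * (E1 - E0), with 0 <= rho <= 1;
    // the lower bound E0 keeps the stiffness non-zero at rho = 0
    return E0 + Math.pow(rho, p) * (E1 - E0);
}

// With p = 3, intermediate densities are penalised: material at half density supplies
// only about one eighth of the full stiffness, pushing optimal designs toward 0/1 values.
simpYoungsModulus(0.5, 3, 1e-9, 1.0);   // ≈ 0.125
simpYoungsModulus(1.0, 3, 1e-9, 1.0);   // 1.0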
Commercial software.
There are several commercial topology optimization software packages on the market. Most of them use topology optimization as a hint of how the optimal design should look, and manual geometry re-construction is required. There are a few solutions which produce optimal designs ready for additive manufacturing.
Examples.
Structural compliance.
A stiff structure is one that has the least possible displacement under a given set of boundary conditions. A global measure of the displacements is the strain energy (also called compliance) of the structure under the prescribed boundary conditions. The lower the strain energy, the higher the stiffness of the structure. So, the objective function of the problem is to minimize the strain energy.
On a broad level, one can visualize that the more the material, the less the deflection as there will be more material to resist the loads. So, the optimization requires an opposing constraint, the volume constraint. This is in reality a cost factor, as we would not want to spend a lot of money on the material. To obtain the total material utilized, an integration of the selection field over the volume can be done.
Finally the elasticity governing differential equations are plugged in so as to get the final problem statement.
formula_13
subject to:
formula_14 (bounds on the density),
formula_15 (volume constraint),
formula_16 (equilibrium),
formula_17 (constitutive relation).
But a straightforward implementation in the finite element framework of such a problem is still infeasible, owing to issues such as mesh dependency of the solutions and checkerboard patterns of alternating solid and void elements.
Some techniques such as filtering based on image processing are currently being used to alleviate some of these issues. Although it seemed like this was purely a heuristic approach for a long time, theoretical connections to nonlocal elasticity have been made to support the physical sense of these methods.
Multiphysics problems.
Fluid-structure-interaction.
Fluid-structure-interaction is a strongly coupled phenomenon and concerns the interaction between a stationary or moving fluid and an elastic structure. Many engineering applications and natural phenomena are subject to fluid-structure-interaction, and taking such effects into consideration is therefore critical in the design of many engineering applications. Topology optimisation for fluid-structure-interaction problems has been studied in the literature. Design solutions obtained for different Reynolds numbers differ from one another, which indicates that the coupling between the fluid and the structure is resolved in the design problems.
Thermoelectric energy conversion.
Thermoelectricity is a multi-physics problem which concerns the interaction and coupling between electric and thermal energy in semiconducting materials. Thermoelectric energy conversion can be described by two separately identified effects: the Seebeck effect and the Peltier effect. The Seebeck effect concerns the conversion of thermal energy into electric energy, and the Peltier effect concerns the conversion of electric energy into thermal energy. By spatially distributing two thermoelectric materials in a two-dimensional design space with a topology optimisation methodology, it is possible to exceed the performance of the constitutive thermoelectric materials for thermoelectric coolers and thermoelectric generators.
3F3D Form Follows Force 3D Printing.
The current proliferation of 3D printer technology has allowed designers and engineers to use topology optimization techniques when designing new products. Topology optimization combined with 3D printing can result in lower weight, improved structural performance and a shortened design-to-manufacturing cycle. This matters because the resulting designs, while efficient, might not be realisable with more traditional manufacturing techniques.
Internal contact.
Internal contact can be included in topology optimization by applying the third medium contact method. The third medium contact (TMC) method is an implicit contact formulation that is continuous and differentiable. This makes TMC suitable for use with gradient-based approaches to topology optimization.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\n&\\underset{\\rho}{\\operatorname{minimize}} & &F = F(\\mathbf{u(\\rho), \\rho}) = \\int_{\\Omega} f(\\mathbf{u(\\rho), \\rho}) \\mathrm{d}V \\\\\n&\\operatorname{subject\\;to} \n& &G_0(\\rho) = \\int_{\\Omega} \\rho \\mathrm{d}V - V_0 \\leq 0 \\\\\n&&&G_j(\\mathbf{u}(\\rho), \\rho) \\leq 0 \\text{ with } j = 1, ..., m\n\\end{align}"
},
{
"math_id": 1,
"text": "F(\\mathbf{u(\\rho), \\rho})"
},
{
"math_id": 2,
"text": " \\rho(\\mathbf{x}) "
},
{
"math_id": 3,
"text": " \\mathbf{u}=\\mathbf{u}(\\mathbf{\\rho})"
},
{
"math_id": 4,
"text": " \\rho "
},
{
"math_id": 5,
"text": " (\\Omega)"
},
{
"math_id": 6,
"text": "\\scriptstyle m "
},
{
"math_id": 7,
"text": " G_j(\\mathbf{u}(\\rho), \\rho) \\leq 0 "
},
{
"math_id": 8,
"text": " \\mathbf{u(\\rho)} "
},
{
"math_id": 9,
"text": " E \\;=\\; E_0 \\,+\\, \\rho^p (E_1 - E_0) "
},
{
"math_id": 10,
"text": "p"
},
{
"math_id": 11,
"text": " [1,\\, 3]"
},
{
"math_id": 12,
"text": " E_0 "
},
{
"math_id": 13,
"text": "\\min_{\\rho}\\; \\int_{\\Omega} \\frac{1}{2} \\mathbf{\\sigma}:\\mathbf{\\varepsilon} \\,\\mathrm{d}\\Omega"
},
{
"math_id": 14,
"text": " \\rho \\,\\in\\, [0,\\, 1] "
},
{
"math_id": 15,
"text": " \\int_{\\Omega} \\rho\\, \\mathrm{d}\\Omega \\;\\leq\\; V^*"
},
{
"math_id": 16,
"text": " \\mathbf{\\nabla}\\cdot\\mathbf{\\sigma} \\,+\\, \\mathbf{F} \\;=\\; {\\mathbf{0}} "
},
{
"math_id": 17,
"text": " \\mathbf{\\sigma} \\;=\\; \\mathsf{C}:\\mathbf{\\varepsilon}"
}
] | https://en.wikipedia.org/wiki?curid=1082645 |
1082841 | Fictitious force | Force on objects moving within a reference frame that rotates with respect to an inertial frame
<templatestyles src="Hlist/styles.css"/>
A fictitious force is a force that appears to act on a mass whose motion is described using a non-inertial frame of reference, such as a linearly accelerating or rotating reference frame.
Fictitious forces are invoked to maintain the validity and thus use of Newton's second law of motion, in frames of reference which are "not" inertial.
Measurable examples of fictitious forces.
Passengers in a vehicle accelerating in the forward direction may perceive, for instance, that they are acted upon by a force pushing them back toward the backrest of their seats. An example in a rotating reference frame is the impression of a force that seems to move objects outward toward the rim of a centrifuge or carousel.
The fictitious force called a pseudo force might also be referred to as a body force. It is due to an object's inertia when the reference frame does not move inertially any more but begins to accelerate relative to the free object. In terms of the example of the passenger vehicle, a pseudo force seems to be active just before the body touches the backrest of the seat in the car. A person in the car leaning forward first moves a bit backward in relation to the already accelerating car, before touching the backrest. The motion in this short period just seems to be the result of a force on the person; i.e., it is a pseudo force. A pseudo force does not arise from any physical interaction between two objects, such as electromagnetism or contact forces. It is just a consequence of the acceleration a of the physical object the non-inertial reference frame is connected to, i.e. the vehicle in this case. From the viewpoint of the respective accelerating frame, an acceleration of the inert object appears to be present, apparently requiring a "force" for this to have happened.
As stated by Iro:
<templatestyles src="Template:Blockquote/styles.css" />Such an additional force due to nonuniform relative motion of two reference frames is called a "pseudo-force".
The pseudo force on an object arises as an imaginary influence when the frame of reference used to describe the object's motion is accelerating compared to a non-accelerating frame. The pseudo force "explains," using Newton's second law mechanics, why an object does not follow Newton's second law and "floats freely" as if weightless. As a frame may accelerate in any arbitrary way, so may pseudo forces be equally arbitrary (but only in direct response to the acceleration of the frame). An example of a pseudo force as defined by Iro is the Coriolis force, perhaps better called the Coriolis effect. The gravitational force would also be a fictitious force (pseudo force) in a field model in which particles distort spacetime due to their mass, such as in the theory of general relativity.
Assuming Newton's second law in the form F = "m"a, fictitious forces are always proportional to the mass "m".
The fictitious force that has been called an inertial force is also referred to as a d'Alembert force, or sometimes as a pseudo force. D'Alembert's principle is just another way of formulating Newton's second law of motion. It defines an inertial force as the negative of the product of mass times acceleration, just for the sake of easier calculations.
Four fictitious forces have been defined for frames accelerated in commonly occurring ways: one caused by any relative acceleration of the origin in a straight line (rectilinear acceleration); two involving rotation, the centrifugal force and the Coriolis force; and a fourth, the Euler force, caused by a variable rate of rotation, should that occur.
Background.
The role of fictitious forces in Newtonian mechanics is described by Tonnelat:
<templatestyles src="Template:Blockquote/styles.css" />For Newton, the appearance of acceleration always indicates the existence of absolute motion – absolute motion of matter where "real" forces are concerned; absolute motion of the reference system, where so-called "fictitious" forces, such as inertial forces or those of Coriolis, are concerned.
Fictitious forces arise in classical mechanics and special relativity in all non-inertial frames.
Inertial frames are privileged over non-inertial frames because they do not have physics whose causes are outside of the system, while non-inertial frames do. Fictitious forces, or physics whose cause is outside of the system, are no longer necessary in general relativity, since these physics are explained with the geodesics of spacetime: "The field of all possible space-time null geodesics or photon paths unifies the absolute local non-rotation standard throughout space-time.".
On Earth.
The surface of the Earth is a rotating reference frame. To solve classical mechanics problems exactly in an Earthbound reference frame, three fictitious forces must be introduced: the Coriolis force, the centrifugal force (described below) and the Euler force. The Euler force is typically ignored because the variations in the angular velocity of the rotating surface of the Earth are usually insignificant. Both of the other fictitious forces are weak compared to most typical forces in everyday life, but they can be detected under careful conditions. For example, Léon Foucault used his Foucault pendulum to show that a Coriolis force results from the Earth's rotation.
If the Earth were to rotate twenty times faster (making each day only ~72 minutes long), people could easily get the impression that such fictitious forces were pulling on them, as on a spinning carousel; people in temperate and tropical latitudes would, in fact, need to hold on in order to avoid being launched into orbit by the centrifugal force.
When moving along the equator, for instance in a ship travelling in an easterly direction, objects appear to be slightly lighter than on the way back. This has been observed and is named the Eötvös effect.
Detection of non-inertial reference frame.
Observers inside a closed box that is moving with a constant velocity cannot detect their own motion; however, observers within an accelerating reference frame can detect that they are in a non-inertial reference frame from the fictitious forces that arise. For example, for straight-line acceleration Vladimir Arnold presents the following theorem:
<templatestyles src="Template:Blockquote/styles.css" />In a coordinate system "K" which moves by translation relative to an inertial system "k", the motion of a mechanical system takes place as if the coordinate system were inertial, but on every point of mass "m" an additional "inertial force" acted: F
−"m"a, where a is the acceleration of the system "K".
Other accelerations also give rise to fictitious forces, as described mathematically below. The physical explanation of motions in an inertial frame is the simplest possible, requiring no fictitious forces: fictitious forces are zero, providing a means to distinguish inertial frames from others.
An example of the detection of a non-inertial, rotating reference frame is the precession of a Foucault pendulum. In the non-inertial frame of the Earth, the fictitious Coriolis force is necessary to explain observations. In an inertial frame outside the Earth, no such fictitious force is necessary.
Example concerning Circular motion.
The effect of a fictitious force also occurs when a car takes a bend. Observed from a non-inertial frame of reference attached to the car, the fictitious force called the centrifugal force appears. As the car enters a left turn, a suitcase first on the left rear seat slides to the right rear seat and then continues until it comes into contact with the closed door on the right. This motion marks the phase of the fictitious centrifugal force, since it is the inertia of the suitcase that produces this piece of movement. It may seem that there must be a force responsible for this movement, but actually this movement arises because of the inertia of the suitcase, which is (still) a 'free object' within an already accelerating frame of reference.
After the suitcase has come into contact with the closed door of the car, contact forces come into play. The centripetal force on the car is now also transferred to the suitcase, and Newton's third law applies, with the centripetal force as the action and the so-called reactive centrifugal force as the reaction. The reactive centrifugal force is also due to the inertia of the suitcase; now, however, the inertia appears as a resistance to a change in its state of motion.
Suppose that a few miles further on the car travels a roundabout at constant speed, again and again; the occupants will then feel as if they are being pushed to the outside of the vehicle by the (reactive) centrifugal force, away from the centre of the turn.
The situation can be viewed from inertial as well as from non-inertial frames.
A classic example of a fictitious force in circular motion is the experiment of rotating spheres tied by a cord and spinning around their centre of mass. In this case, the identification of a rotating, non-inertial frame of reference can be based upon the vanishing of fictitious forces. In an inertial frame, fictitious forces are not necessary to explain the tension in the string joining the spheres. In a rotating frame, Coriolis and centrifugal forces must be introduced to predict the observed tension.
In the rotating reference frame perceived on the surface of the Earth, a centrifugal force reduces the apparent force of gravity by about one part in a thousand, depending on latitude. This reduction is zero at the poles, maximum at the equator.
The fictitious Coriolis force, which is observed in rotational frames, is ordinarily visible only in very large-scale motion like the projectile motion of long-range guns or the circulation of the Earth's atmosphere (see Rossby number). Neglecting air resistance, an object dropped from a 50-meter-high tower at the equator will fall 7.7 millimetres eastward of the spot below where it is dropped because of the Coriolis force.
Fictitious forces and work.
Fictitious forces can be considered to do work, provided that they move an object on a trajectory that changes its energy from potential to kinetic. For example, consider some persons in rotating chairs holding a weight in their outstretched hands. If they pull their hand inward toward their body, from the perspective of the rotating reference frame, they have done work against the centrifugal force. When the weight is let go, it spontaneously flies outward relative to the rotating reference frame, because the centrifugal force does work on the object, converting its potential energy into kinetic. From an inertial viewpoint, of course, the object flies away from them because it is suddenly allowed to move in a straight line. This illustrates that the work done, like the total potential and kinetic energy of an object, can be different in a non-inertial frame than in an inertial one.
Gravity as a fictitious force.
The notion of "fictitious force" also arises in Einstein's general theory of relativity. All fictitious forces are proportional to the mass of the object upon which they act, which is also true for gravity. This led Albert Einstein to wonder whether gravity could be modeled as a fictitious force. He noted that a freefalling observer in a closed box would not be able to detect the force of gravity; hence, freefalling reference frames are equivalent to inertial reference frames (the equivalence principle). Developing this insight, Einstein formulated a theory with gravity as a fictitious force, and attributed the apparent acceleration due to gravity to the curvature of spacetime. This idea underlies Einstein's theory of general relativity. See the Eötvös experiment.
Mathematical derivation of fictitious forces.
General derivation.
Many problems require use of noninertial reference frames, for example, those involving satellites and particle accelerators. Figure 2 shows a particle with mass "m" and position vector xA("t") in a particular inertial frame A. Consider a non-inertial frame B whose origin relative to the inertial one is given by XAB("t"). Let the position of the particle in frame B be xB("t"). What is the force on the particle as expressed in the coordinate system of frame B?
To answer this question, let the coordinate axis in B be represented by unit vectors u"j" with "j" any of { 1, 2, 3 } for the three coordinate axes. Then
formula_0
The interpretation of this equation is that xB is the vector displacement of the particle as expressed in terms of the coordinates in frame B at the time "t". From frame A the particle is located at:
formula_1
As an aside, the unit vectors { u"j" } cannot change magnitude, so derivatives of these vectors express only rotation of the coordinate system B. On the other hand, vector XAB simply locates the origin of frame B relative to frame A, and so cannot include rotation of frame B.
Taking a time derivative, the velocity of the particle is:
formula_2
The second term summation is the velocity of the particle, say vB as measured in frame B. That is:
formula_3
The interpretation of this equation is that the velocity of the particle seen by observers in frame A consists of what observers in frame B call the velocity, namely vB, plus two extra terms related to the rate of change of the frame-B coordinate axes. One of these is simply the velocity of the moving origin vAB. The other is a contribution to velocity due to the fact that different locations in the non-inertial frame have different apparent velocities due to the rotation of the frame; a point seen from a rotating frame has a rotational component of velocity that is greater the further the point is from the origin.
To find the acceleration, another time differentiation provides:
formula_4
Using the same formula already used for the time derivative of xB, the velocity derivative on the right is:
formula_5
Consequently,
The interpretation of this equation is as follows: the acceleration of the particle in frame A consists of what observers in frame B call the particle acceleration aB, but in addition, there are three acceleration terms related to the movement of the frame-B coordinate axes: one term related to the acceleration of the origin of frame B, namely aAB, and two terms related to the rotation of frame B. Consequently, observers in B will see the particle motion as possessing "extra" acceleration, which they will attribute to "forces" acting on the particle, but which observers in A say are "fictitious" forces arising simply because observers in B do not recognize the non-inertial nature of frame B.
The factor of two in the Coriolis force arises from two equal contributions: (i) the apparent change of an inertially constant velocity with time because rotation makes the direction of the velocity seem to change (a "d"vB/d"t" term) and (ii) an apparent change in the velocity of an object when its position changes, putting it nearer to or further from the axis of rotation (the change in formula_6 due to change in "x j" ).
To put matters in terms of forces, the accelerations are multiplied by the particle mass:
formula_7
The force observed in frame B, FB = "m"aB is related to the actual force on the particle, FA, by
formula_8
where:
formula_9
Thus, problems may be solved in frame B by assuming that Newton's second law holds (with respect to quantities in that frame) and treating Ffictitious as an additional force.
Below are a number of examples applying this result for fictitious forces. More examples can be found in the article on centrifugal force.
Rotating coordinate systems.
A common situation in which noninertial reference frames are useful is when the reference frame is rotating. Because such rotational motion is non-inertial, due to the acceleration present in any rotational motion, a fictitious force can always be invoked by using a rotational frame of reference. Despite this complication, the use of fictitious forces often simplifies the calculations involved.
To derive expressions for the fictitious forces, derivatives are needed for the apparent time rate of change of vectors that take into account time-variation of the coordinate axes. If the rotation of frame 'B' is represented by a vector Ω pointed along the axis of rotation with the orientation given by the right-hand rule, and with magnitude given by
formula_10
then the time derivative of any of the three unit vectors describing frame B is
formula_11
and
formula_12
as is verified using the properties of the vector cross product. These derivative formulas now are applied to the relationship between acceleration in an inertial frame, and that in a coordinate frame rotating with time-varying angular velocity ω("t"). From the previous section, where subscript A refers to the inertial frame and B to the rotating frame, setting aAB = 0 to remove any translational acceleration, and focusing on only rotational properties (see Eq. 1):
formula_13
formula_14
Collecting terms, the result is the so-called "acceleration transformation formula":
formula_15
The physical acceleration aA due to what observers in the inertial frame A call "real external forces" on the object is, therefore, not simply the acceleration aB seen by observers in the rotational frame B, but has several additional geometric acceleration terms associated with the rotation of B. As seen in the rotational frame, the acceleration aB of the particle is given by rearrangement of the above equation as:
formula_16
The net force upon the object according to observers in the rotating frame is FB = "m"aB. If their observations are to result in the correct force on the object when using Newton's laws, they must consider that the additional force Ffict is present, so the end result is FB = FA + Ffict. Thus, the fictitious force used by observers in B to get the correct behaviour of the object from Newton's laws equals:
formula_17
Here, the first term is the "Coriolis force", the second term is the "centrifugal force", and the third term is the "Euler force".
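As a sketch (the vector names follow the derivation above; the helper functions and the numeric example are ours), the three terms can be evaluated componentwise for a given angular velocity, angular acceleration, and the position and velocity measured in the rotating frame:

// 3-vector helpers.
function cross(a, b) {
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]];
}
function scale(s, a) { return [s * a[0], s * a[1], s * a[2]]; }
function add(a, b)   { return [a[0] + b[0], a[1] + b[1], a[2] + b[2]]; }

// Fictitious force in a frame rotating with angular velocity Omega (and angular
// acceleration dOmegaDt), acting on mass m at position xB with velocity vB,
// both measured in the rotating frame:
//   F_fict = -2 m Omega x vB  -  m Omega x (Omega x xB)  -  m dOmegaDt x xB
//            (Coriolis)          (centrifugal)               (Euler)
function fictitiousForce(m, Omega, dOmegaDt, xB, vB) {
    var coriolis    = scale(-2 * m, cross(Omega, vB));
    var centrifugal = scale(-m, cross(Omega, cross(Omega, xB)));
    var euler       = scale(-m, cross(dOmegaDt, xB));
    return add(add(coriolis, centrifugal), euler);
}

// Example: Omega = [0,0,1] rad/s, unit mass at rest in the rotating frame at [1,0,0]:
// only the centrifugal term survives, pointing radially outward.
fictitiousForce(1, [0, 0, 1], [0, 0, 0], [1, 0, 0], [0, 0, 0]);   // [1, 0, 0]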
Orbiting coordinate systems.
As a related example, suppose the moving coordinate system "B" rotates with a constant angular speed ω in a circle of radius "R" about the fixed origin of inertial frame "A", but maintains its coordinate axes fixed in orientation, as in Figure 3. The acceleration of an observed body is now (see Eq. 1):
formula_18
where the summations are zero inasmuch as the unit vectors have no time dependence. The origin of the system "B" is located according to frame "A" at:
formula_19
leading to a velocity of the origin of frame "B" as:
formula_20
leading to an acceleration of the origin of "B" given by:
formula_21
Because the first term, which is
formula_22
is of the same form as the normal centrifugal force expression:
formula_23
it is a natural extension of standard terminology (although there is no standard terminology for this case) to call this term a "centrifugal force". Whatever terminology is adopted, the observers in frame "B" must introduce a fictitious force, this time due to the acceleration from the orbital motion of their entire coordinate frame, that is radially outward away from the centre of rotation of the origin of their coordinate system:
formula_24
and of magnitude:
formula_25
This "centrifugal force" has differences from the case of a rotating frame. In the rotating frame the centrifugal force is related to the distance of the object from the origin of frame "B", while in the case of an orbiting frame, the centrifugal force is independent of the distance of the object from the origin of frame "B", but instead depends upon the distance of the origin of frame "B" from "its" centre of rotation, resulting in the "same" centrifugal fictitious force for "all" objects observed in frame "B".
Orbiting and rotating.
As a combination example, Figure 4 shows a coordinate system "B" that orbits inertial frame "A" as in Figure 3, but the coordinate axes in frame "B" turn so unit vector u1 always points toward the centre of rotation. This example might apply to a test tube in a centrifuge, where vector u1 points along the axis of the tube toward its opening at its top. It also resembles the Earth–Moon system, where the Moon always presents the same face to the Earth. In this example, unit vector u3 retains a fixed orientation, while vectors u1, u2 rotate at the same rate as the origin of coordinates. That is,
formula_26 formula_27
formula_28 formula_29
Hence, the acceleration of a moving object is expressed as (see Eq. 1):
formula_30
where the angular acceleration term is zero for the constant rate of rotation.
Because the first term, which is
formula_31
is of the same form as the normal centrifugal force expression:
formula_23
it is a natural extension of standard terminology (although there is no standard terminology for this case) to call this term the "centrifugal force". Applying this terminology to the example of a tube in a centrifuge, if the tube is far enough from the center of rotation, |XAB| = "R" ≫ |xB|, all the matter in the test tube sees the same acceleration (the same centrifugal force). Thus, in this case, the fictitious force is primarily a uniform centrifugal force along the axis of the tube, away from the centre of rotation, with a value |Ffict| = ω² "R", where "R" is the distance of the matter in the tube from the centre of the centrifuge. It is the standard specification of a centrifuge to use the "effective" radius of the centrifuge to estimate its ability to provide centrifugal force. Thus, the first estimate of centrifugal force in a centrifuge can be based upon the distance of the tubes from the centre of rotation, and corrections applied if needed.
Also, the test tube confines motion to the direction down the length of the tube, so vB is opposite to u1 and the Coriolis force is opposite to u2, that is, against the wall of the tube. If the tube is spun for a long enough time, the velocity vB drops to zero as the matter comes to an equilibrium distribution. For more details, see the articles on sedimentation and the Lamm equation.
A related problem is that of centrifugal forces for the Earth–Moon–Sun system, where three rotations appear: the daily rotation of the Earth about its axis, the lunar-month rotation of the Earth–Moon system about its centre of mass, and the annual revolution of the Earth–Moon system about the Sun. These three motions influence the tides.
Crossing a carousel.
Figure 5 shows another example comparing the observations of an inertial observer with those of an observer on a rotating carousel. The carousel rotates at a constant angular velocity represented by the vector Ω with magnitude "ω", pointing upward according to the right-hand rule. A rider on the carousel walks radially across it at a constant speed, in what appears to the walker to be the straight line path inclined at 45° in Figure 5. To the stationary observer, however, the walker travels a spiral path. The points identified on both paths in Figure 5 correspond to the same times spaced at equal time intervals. We ask how two observers, one on the carousel and one in an inertial frame, formulate what they see using Newton's laws.
Inertial observer.
The observer at rest describes the path followed by the walker as a spiral. Adopting the coordinate system shown in Figure 5, the trajectory is described by r("t"):
formula_32
where the added π/4 sets the path angle at 45° to start with (just an arbitrary choice of direction), u"R" is a unit vector in the radial direction pointing from the centre of the carousel to the walker at the time "t". The radial distance "R"("t") increases steadily with time according to:
formula_33
with "s" the speed of walking. According to simple kinematics, the velocity is then the first derivative of the trajectory:
formula_34
with uθ a unit vector perpendicular to uR at time "t" (as can be verified by noticing that the vector dot product with the radial vector is zero) and pointing in the direction of travel.
The acceleration is the first derivative of the velocity:
formula_35
The last term in the acceleration is radially inward of magnitude ω2 "R", which is therefore the instantaneous centripetal acceleration of circular motion. The first term is perpendicular to the radial direction and points in the direction of travel. Its magnitude is 2"sω", and it represents the acceleration of the walker as the edge of the carousel is neared: the arc of circle travelled in a fixed time increases, as can be seen by the increased spacing between points for equal time steps on the spiral in Figure 5 as the outer edge of the carousel is approached.
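This decomposition of the acceleration into a term of magnitude 2"sω" along uθ and a radially inward centripetal term of magnitude ω2 "R" can be checked numerically. The following Python sketch differentiates the spiral trajectory by finite differences and compares the result with the closed-form expression; the walking speed, rotation rate and evaluation time are illustrative values, not taken from the figure.

```python
import numpy as np

s, omega = 1.0, 0.5      # walking speed (m/s) and carousel rate (rad/s); illustrative values
t, dt = 4.0, 1e-4        # evaluation time (s) and finite-difference step

def r(t):
    """Spiral trajectory r(t) = R(t) u_R with R(t) = s t, as seen by the inertial observer."""
    R, phi = s * t, omega * t + np.pi / 4
    return np.array([R * np.cos(phi), R * np.sin(phi)])

# acceleration by central finite differences
a_numeric = (r(t + dt) - 2 * r(t) + r(t - dt)) / dt**2

# closed form: a = 2 s omega u_theta - omega^2 R u_R
phi = omega * t + np.pi / 4
u_R = np.array([np.cos(phi), np.sin(phi)])
u_theta = np.array([-np.sin(phi), np.cos(phi)])
a_closed = 2 * s * omega * u_theta - omega**2 * (s * t) * u_R

print(a_numeric)   # the two results agree to numerical precision
print(a_closed)
```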
Applying Newton's laws, multiplying the acceleration by the mass of the walker, the inertial observer concludes that the walker is subject to two forces: the inward radially directed centripetal force and another force perpendicular to the radial direction that is proportional to the speed of the walker.
Rotating observer.
The rotating observer sees the walker travel a straight line from the centre of the carousel to the periphery, as shown in Figure 5. Moreover, the rotating observer sees that the walker moves at a constant speed in the same direction, so applying Newton's law of inertia, there is "zero" force upon the walker. These conclusions do not agree with those of the inertial observer. To obtain agreement, the rotating observer has to introduce fictitious forces that appear to exist in the rotating world, even though there is no apparent reason for them: no gravitational mass, electric charge, or anything else that could account for them.
To agree with the inertial observer, the forces applied to the walker must be exactly those found above. They can be related to the general formulas already derived, namely:
formula_36
In this example, the velocity seen in the rotating frame is:
formula_37
with uR a unit vector in the radial direction. The position of the walker as seen on the carousel is:
formula_38
and the time derivative of Ω is zero for uniform angular rotation. Noticing that
formula_39
and
formula_40
we find:
formula_41
To obtain a straight-line motion in the rotating world, a force exactly opposite in sign to the fictitious force must be applied to reduce the net force on the walker to zero, so Newton's law of inertia will predict a straight line motion, in agreement with what the rotating observer sees. The fictitious forces that must be combated are the Coriolis force (first term) and the centrifugal force (second term). (These terms are approximate.) By applying forces to counter these two fictitious forces, the rotating observer ends up applying exactly the same forces upon the walker that the inertial observer predicted were needed.
Because they differ only by the constant walking velocity, the walker and the rotating observer see the same accelerations. From the walker's perspective, the fictitious force is experienced as real, and combating this force is necessary to stay on a straight radial path at constant speed. It is like battling a crosswind while being thrown to the edge of the carousel.
Observation.
Notice that this kinematical discussion does not delve into the mechanism by which the required forces are generated. That is the subject of kinetics. In the case of the carousel, the kinetic discussion would involve perhaps a study of the walker's shoes and the friction they need to generate against the floor of the carousel, or perhaps the dynamics of skateboarding if the walker switched to travel by skateboard. Whatever the means of travel across the carousel, the forces calculated above must be realized. A very rough analogy is heating your house: you must have a certain temperature to be comfortable, but whether you heat by burning gas or by burning coal is another problem. Kinematics sets the thermostat, kinetics fires the furnace.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathbf{x}_\\mathrm{B} = \\sum_{j=1}^3 x_j \\mathbf{u}_j \\, . "
},
{
"math_id": 1,
"text": "\\mathbf{x}_\\mathrm{A} =\\mathbf{X}_\\mathrm{AB} + \\sum_{j=1}^3 x_j \\mathbf{u}_j \\, . "
},
{
"math_id": 2,
"text": " \\frac {d \\mathbf{x}_\\mathrm{A}}{dt} =\\frac{d \\mathbf{X}_\\mathrm{AB}}{dt} + \\sum_{j=1}^3 \\frac{dx_j}{dt} \\mathbf{u}_j + \\sum_{j=1}^3 x_j \\frac{d \\mathbf{u}_j}{dt} \\, . "
},
{
"math_id": 3,
"text": " \\frac {d \\mathbf{x}_\\mathrm{A}}{dt} =\\mathbf{v}_\\mathrm{AB}+ \\mathbf{v}_\\mathrm{B} + \\sum_{j=1}^3 x_j \\frac{d \\mathbf{u}_j}{dt}. "
},
{
"math_id": 4,
"text": " \\frac {d^2 \\mathbf{x}_\\mathrm{A}}{dt^2} = \\mathbf{a}_\\mathrm{AB}+\\frac {d\\mathbf{v}_\\mathrm{B}}{dt} + \\sum_{j=1}^3 \\frac {dx_j}{dt} \\frac{d \\mathbf{u}_j}{dt} + \\sum_{j=1}^3 x_j \\frac{d^2 \\mathbf{u}_j}{dt^2}. "
},
{
"math_id": 5,
"text": "\\frac {d\\mathbf{v}_\\mathrm{B}}{dt} =\\sum_{j=1}^3 \\frac{d v_j}{dt} \\mathbf{u}_j+ \\sum_{j=1}^3 v_j \\frac{d \\mathbf{u}_j}{dt} =\\mathbf{a}_\\mathrm{B} + \\sum_{j=1}^3 v_j \\frac{d \\mathbf{u}_j}{dt}. "
},
{
"math_id": 6,
"text": "\\sum x_j\\, d\\mathbf{u}_j/dt"
},
{
"math_id": 7,
"text": "\\mathbf{F}_\\mathrm{A} = \\mathbf{F}_\\mathrm{B} + m\\mathbf{a}_\\mathrm{AB}+ 2m \\sum_{j=1}^3 v_j \\frac{d \\mathbf{u}_j}{dt} + m \\sum_{j=1}^3 x_j \\frac{d^2 \\mathbf{u}_j}{dt^2}\\ . "
},
{
"math_id": 8,
"text": "\\mathbf{F}_\\mathrm{B} = \\mathbf{F}_\\mathrm{A} + \\mathbf{F}_\\mathrm{fictitious},"
},
{
"math_id": 9,
"text": " \\mathbf{F}_\\mathrm{fictitious} = -m\\mathbf{a}_\\mathrm{AB} - 2m\\sum_{j=1}^3 v_j \\frac{d \\mathbf{u}_j}{dt} - m \\sum_{j=1}^3 x_j \\frac{d^2 \\mathbf{u}_j}{dt^2}\\, . "
},
{
"math_id": 10,
"text": " |\\boldsymbol{\\Omega} | = \\frac {d \\theta }{dt} = \\omega (t), "
},
{
"math_id": 11,
"text": " \\frac {d \\mathbf{u}_j (t)}{dt} = \\boldsymbol{\\Omega} \\times \\mathbf{u}_j (t), "
},
{
"math_id": 12,
"text": "\\frac {d^2 \\mathbf{u}_j (t)}{dt^2}= \\frac{d\\boldsymbol{\\Omega}}{dt} \\times \\mathbf{u}_j +\\boldsymbol{\\Omega} \\times \\frac{d \\mathbf{u}_j (t)}{dt} = \\frac{d\\boldsymbol{\\Omega}}{dt} \\times \\mathbf{u}_j+ \\boldsymbol{\\Omega} \\times \\left[ \\boldsymbol{\\Omega} \\times \\mathbf{u}_j (t) \\right], "
},
{
"math_id": 13,
"text": " \\frac {d^2 \\mathbf{x}_\\mathrm{A}}{dt^2}=\\mathbf{a}_\\mathrm{B} + 2\\sum_{j=1}^3 v_j \\ \\frac{d \\mathbf{u}_j}{dt} + \\sum_{j=1}^3 x_j \\frac{d^2 \\mathbf{u}_j}{dt^2},"
},
{
"math_id": 14,
"text": "\\begin{align}\n\\mathbf{a}_\\mathrm{A} &= \\mathbf{a}_\\mathrm{B} +\\ 2\\sum_{j=1}^3 v_j \\boldsymbol{\\Omega} \\times \\mathbf{u}_j (t) + \\sum_{j=1}^3 x_j \\frac{d\\boldsymbol{\\Omega}}{dt} \\times \\mathbf{u}_j \\ + \\sum_{j=1}^3 x_j \\boldsymbol{\\Omega} \\times \\left[ \\boldsymbol{\\Omega} \\times \\mathbf{u}_j (t) \\right] \\\\\n&=\\mathbf{a}_\\mathrm{B} + 2 \\boldsymbol{\\Omega} \\times\\sum_{j=1}^3 v_j \\mathbf{u}_j (t) + \\frac{d\\boldsymbol{\\Omega}}{dt} \\times \\sum_{j=1}^3 x_j \\mathbf{u}_j + \\boldsymbol{\\Omega} \\times \\left[\\boldsymbol{\\Omega} \\times \\sum_{j=1}^3 x_j \\mathbf{u}_j (t) \\right].\n\\end{align}"
},
{
"math_id": 15,
"text": "\\mathbf{a}_\\mathrm{A}=\\mathbf{a}_\\mathrm{B} + 2\\boldsymbol{\\Omega} \\times\\mathbf{v}_\\mathrm{B} + \\frac{d\\boldsymbol{\\Omega}}{dt} \\times \\mathbf{x}_\\mathrm{B} + \\boldsymbol{\\Omega} \\times \\left(\\boldsymbol{\\Omega} \\times \\mathbf{x}_\\mathrm{B} \\right)\\, ."
},
{
"math_id": 16,
"text": "\n\\mathbf{a}_\\mathrm{B} = \\mathbf{a}_\\mathrm{A} - 2\\boldsymbol{\\Omega} \\times \\mathbf{v}_\\mathrm{B} - \\boldsymbol{\\Omega} \\times (\\boldsymbol\\Omega \\times \\mathbf{x}_\\mathrm{B}) - \\frac{d \\boldsymbol\\Omega}{dt} \\times \\mathbf{x}_\\mathrm{B}.\n"
},
{
"math_id": 17,
"text": "\n\\mathbf{F}_{\\mathrm{fict}} = - 2 m \\boldsymbol\\Omega \\times \\mathbf{v}_\\mathrm{B} - m \\boldsymbol\\Omega \\times (\\boldsymbol\\Omega \\times \\mathbf{x}_\\mathrm{B}) - m \\frac{d \\boldsymbol\\Omega}{dt} \\times \\mathbf{x}_\\mathrm{B}.\n"
},
{
"math_id": 18,
"text": "\\begin{align}\n\\frac {d^2 \\mathbf{x}_{A}}{dt^2} &= \\mathbf{a}_{AB}+\\mathbf{a}_{B} + 2\\ \\sum_{j=1}^3 v_j \\ \\frac{d \\mathbf{u}_j}{dt} + \\sum_{j=1}^3 x_j \\ \\frac{d^2 \\mathbf{u}_j}{dt^2} \\\\\n&=\\mathbf{a}_{AB}\\ +\\mathbf{a}_B\\ ,\n\\end{align}"
},
{
"math_id": 19,
"text": "\\mathbf{X}_{AB} = R \\left( \\cos ( \\omega t) , \\ \\sin (\\omega t) \\right) \\ ,"
},
{
"math_id": 20,
"text": "\\mathbf{v}_{AB} = \\frac{d}{dt} \\mathbf{X}_{AB} = \\mathbf{\\Omega \\times X}_{AB} \\ , "
},
{
"math_id": 21,
"text": "\\mathbf{a}_{AB} = \\frac{d^2}{dt^2} \\mathbf{X}_{AB} = \\mathbf{ \\Omega \\ \\times } \\left( \\mathbf{ \\Omega \\times X}_{AB}\\right) = - \\omega^2 \\mathbf{X}_{AB} \\, ."
},
{
"math_id": 22,
"text": "\\mathbf{ \\Omega \\ \\times } \\left( \\mathbf{ \\Omega \\times X}_{AB}\\right)\\, , "
},
{
"math_id": 23,
"text": "\\boldsymbol{\\Omega} \\times \\left( \\boldsymbol{\\Omega} \\times \\mathbf{x}_B \\right)\\, ,"
},
{
"math_id": 24,
"text": "\\mathbf{F}_{\\mathrm{fict}} = m \\omega^2 \\mathbf{X}_{AB} \\, , "
},
{
"math_id": 25,
"text": "|\\mathbf{F}_{\\mathrm{fict}}| = m \\omega^2 R \\, . "
},
{
"math_id": 26,
"text": "\\mathbf{u}_1 = (-\\cos \\omega t ,\\ -\\sin \\omega t )\\ ;\\ "
},
{
"math_id": 27,
"text": "\\mathbf{u}_2 = (\\sin \\omega t ,\\ -\\cos \\omega t ) \\, . "
},
{
"math_id": 28,
"text": "\\frac{d}{dt}\\mathbf{u}_1 = \\mathbf{\\Omega \\times u_1}= \\omega\\mathbf{u}_2\\ ;"
},
{
"math_id": 29,
"text": " \\ \\frac{d}{dt}\\mathbf{u}_2 = \\mathbf{\\Omega \\times u_2} = -\\omega\\mathbf{u}_1\\ \\ ."
},
{
"math_id": 30,
"text": "\\begin{align}\n\\frac {d^2 \\mathbf{x}_{A}}{dt^2}&=\\mathbf{a}_{AB}+\\mathbf{a}_B + 2\\ \\sum_{j=1}^3 v_j \\ \\frac{d \\mathbf{u}_j}{dt} + \\ \\sum_{j=1}^3 x_j \\ \\frac{d^2 \\mathbf{u}_j}{dt^2}\\\\\n&=\\mathbf{ \\Omega \\ \\times } \\left( \\mathbf{ \\Omega \\times X}_{AB}\\right) +\\mathbf{a}_B + 2\\ \\sum_{j=1}^3 v_j\\ \\mathbf{\\Omega \\times u_j} \\ +\\ \\sum_{j=1}^3 x_j\\ \\boldsymbol{\\Omega} \\times \\left( \\boldsymbol{\\Omega} \\times \\mathbf{u}_j \\right)\\\\\n&=\\mathbf{ \\Omega \\ \\times } \\left( \\mathbf{ \\Omega \\times X}_{AB}\\right) + \\mathbf{a}_B + 2\\ \\boldsymbol{\\Omega} \\times\\mathbf{v}_B\\ \\ +\\ \\boldsymbol{\\Omega} \\times \\left( \\boldsymbol{\\Omega} \\times \\mathbf{x}_B \\right)\\\\\n&=\\mathbf{ \\Omega \\ \\times } \\left( \\mathbf{ \\Omega \\times} (\\mathbf{ X}_{AB}+\\mathbf{x}_B) \\right) + \\mathbf{a}_B + 2\\ \\boldsymbol{\\Omega} \\times\\mathbf{v}_B\\ \\, ,\n\\end{align}"
},
{
"math_id": 31,
"text": "\\mathbf{ \\Omega \\ \\times } \\left( \\mathbf{ \\Omega \\times} (\\mathbf{ X}_{AB}+\\mathbf{x}_B) \\right)\\, , "
},
{
"math_id": 32,
"text": "\\mathbf{r}(t) =R(t)\\mathbf{u}_R = \\begin{bmatrix} x(t) \\\\ y(t) \\end{bmatrix} = \\begin{bmatrix} R(t)\\cos (\\omega t + \\pi/4) \\\\ R(t)\\sin (\\omega t + \\pi/4) \\end{bmatrix}, "
},
{
"math_id": 33,
"text": "R(t) = s t,"
},
{
"math_id": 34,
"text": "\\begin{align}\n\\mathbf{v}(t) &= \\frac{dR}{dt} \\begin{bmatrix} \\cos (\\omega t + \\pi/4) \\\\ \\sin (\\omega t + \\pi/4) \\end{bmatrix} + \\omega R(t) \\begin{bmatrix} -\\sin(\\omega t + \\pi/4) \\\\ \\cos (\\omega t + \\pi/4) \\end{bmatrix} \\\\\n&= \\frac{dR}{dt} \\mathbf{u}_R + \\omega R(t) \\mathbf{u}_{\\theta},\n\\end{align}"
},
{
"math_id": 35,
"text": "\\begin{align}\n\\mathbf{a}(t) &= \\frac{d^2 R}{dt^2} \\begin{bmatrix} \\cos (\\omega t + \\pi/4) \\\\ \\sin (\\omega t + \\pi/4) \\end{bmatrix} + 2 \\frac {dR}{dt} \\omega \\begin{bmatrix} -\\sin(\\omega t + \\pi/4) \\\\ \\cos (\\omega t + \\pi/4) \\end{bmatrix} - \\omega^2 R(t) \\begin{bmatrix} \\cos (\\omega t + \\pi/4) \\\\ \\sin (\\omega t + \\pi/4) \\end{bmatrix} \\\\\n&=2s\\omega \\begin{bmatrix} -\\sin(\\omega t + \\pi/4) \\\\ \\cos (\\omega t + \\pi/4) \\end{bmatrix} -\\omega^2 R(t) \\begin{bmatrix} \\cos (\\omega t + \\pi/4) \\\\ \\sin (\\omega t + \\pi/4) \\end{bmatrix} \\\\\n&=2s\\ \\omega \\ \\mathbf{u}_{\\theta}-\\omega^2 R(t)\\ \\mathbf{u}_R \\, .\n\\end{align}"
},
{
"math_id": 36,
"text": "\n\\mathbf{F}_{\\mathrm{fict}} =\n- 2 m \\boldsymbol\\Omega \\times \\mathbf{v}_\\mathrm{B} - m \\boldsymbol\\Omega \\times (\\boldsymbol\\Omega \\times \\mathbf{x}_\\mathrm{B} ) - m \\frac{d \\boldsymbol\\Omega}{dt} \\times \\mathbf{x}_\\mathrm{B}.\n"
},
{
"math_id": 37,
"text": "\\mathbf{v}_\\mathrm{B} = s \\mathbf{u}_R, "
},
{
"math_id": 38,
"text": "\\mathbf{x}_\\mathrm{B} = R(t)\\mathbf{u}_R, "
},
{
"math_id": 39,
"text": "\\boldsymbol\\Omega \\times \\mathbf{u}_R =\\omega \\mathbf{u}_{\\theta} "
},
{
"math_id": 40,
"text": "\\boldsymbol\\Omega \\times \\mathbf{u}_{\\theta} =-\\omega \\mathbf{u}_R \\, ,"
},
{
"math_id": 41,
"text": "\\mathbf{F}_{\\mathrm{fict}} = - 2 m \\omega s \\mathbf{u}_{\\theta} + m \\omega^2 R(t) \\mathbf{u}_R."
}
] | https://en.wikipedia.org/wiki?curid=1082841 |
10829022 | Liljequist parhelion | Atmospheric optical phenomenon
A Liljequist parhelion is a rare halo, an optical phenomenon in the form of a brightened spot on the parhelic circle approximately 150–160° from the sun; i.e., between the position of the 120° parhelion and the anthelion.
When the sun touches the horizon, a Liljequist parhelion is located approximately 160° from the sun and is about 10° long. As the sun rises to 30° the phenomenon gradually moves towards 150°, and once the sun is higher than 30° the optical effect vanishes. The parhelia are caused by light rays passing through oriented plate crystals.
The phenomenon was first observed by Gösta Hjalmar Liljequist in 1951 at Maudheim, Antarctica during the Norwegian–British–Swedish Antarctic Expedition in 1949–1952. It was then simulated by Dr. Eberhard Tränkle (1937–1997) and Robert Greenler in 1987 and theoretically explained by Walter Tape in 1994.
A theoretical and experimental investigation of the Liljequist parhelion caused by perfect hexagonal plate crystals showed that the azimuthal position of maximum intensity occurs at
formula_0,
where the refractive index formula_1 to use for the angle formula_2 of total internal reflection is Bravais' index for inclined rays, i.e. formula_3 for a solar elevation formula_4. For ice at zero solar elevation this angle is formula_5. The dispersion of ice causes a variation of this angle, leading to a bluish/cyan coloring close to this azimuthal coordinate. The halo ends towards the anthelion at an angle formula_6
formula_7.
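These expressions are easy to evaluate numerically. The Python sketch below assumes a refractive index of 1.31 for ice in visible light and reads the index throughout as the Bravais index for inclined rays; both choices are assumptions made here for illustration.

```python
import numpy as np

def liljequist_angles(elevation_deg, n=1.31):
    """Azimuth of maximum intensity and azimuth where the halo ends,
    for a solar elevation given in degrees (n = 1.31 assumed for ice)."""
    e = np.radians(elevation_deg)
    n_b = np.sqrt(n**2 - np.sin(e)**2) / np.cos(e)   # Bravais index for inclined rays
    a_tir = np.arcsin(1.0 / n_b)                     # angle of total internal reflection
    x = n_b * np.sin(np.pi / 3 - a_tir)
    theta_L1 = 2.0 * np.arccos(x)                    # azimuth of maximum intensity
    theta_L2 = 5.0 * np.pi / 6 + np.arcsin(x)        # azimuth where the halo ends
    return np.degrees(theta_L1), np.degrees(theta_L2)

print(liljequist_angles(0.0))   # approximately (153.1, 163.5) degrees at zero solar elevation
```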
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\theta_{{\\rm L}1}=2\\arccos\\left(n\\sin\\left(\\frac{\\pi}{3} - \\alpha_{\\rm TIR}\\right)\\right)"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "\\alpha_{\\rm TIR}=\\arcsin(1/n)"
},
{
"math_id": 3,
"text": "n(e)=\\sqrt{n^2-\\sin\\left(e\\right)^2}/\\cos\\left(e\\right)"
},
{
"math_id": 4,
"text": "e"
},
{
"math_id": 5,
"text": "\\theta_{{\\rm L}1}\\approx 153^{\\circ}"
},
{
"math_id": 6,
"text": "\\theta_{{\\rm L}2}^{\\rm max}"
},
{
"math_id": 7,
"text": "\\theta_{{\\rm L}2}^{\\rm max}=\\frac{5\\pi}{6} + \\arcsin\\left(n\\sin\\left(\\frac{\\pi}{3}-\\alpha_{\\rm TIR}\\right)\\right)"
}
] | https://en.wikipedia.org/wiki?curid=10829022 |
1082916 | Magnetic helicity | Measure of magnetic field topology
In plasma physics, magnetic helicity is a measure of the linkage, twist, and writhe of a magnetic field.
Magnetic helicity is a useful concept in the analysis of systems with extremely low resistivity, such as astrophysical systems. When resistivity is low, magnetic helicity is conserved over long timescales to a good approximation. Magnetic helicity dynamics are particularly important in analyzing solar flares and coronal mass ejections, and are also relevant to the dynamics of the solar wind. Its conservation is significant in dynamo processes, and it also plays a role in fusion research, for example in reversed field pinch experiments.
When a magnetic field contains magnetic helicity, it tends to form large-scale structures from small-scale ones. This process can be referred to as an inverse transfer in Fourier space. This property of increasing the scale of structures makes magnetic helicity special in three dimensions: ordinary three-dimensional turbulent flows behave in the opposite way, tending to "destroy" structure, in the sense that large-scale vortices break up into smaller ones until they dissipate into heat through viscous effects. Through a parallel but inverted process, the opposite happens for magnetic vortices: small helical structures with non-zero magnetic helicity combine and form large-scale magnetic fields. This is visible in the dynamics of the heliospheric current sheet, a large magnetic structure in the Solar System.
Mathematical definition.
Generally, the helicity formula_0 of a smooth vector field formula_1 confined to a volume formula_2 is the standard measure of the extent to which the field lines wrap and coil around one another. It is defined as the volume integral over formula_2 of the scalar product of formula_1 and its curl, formula_3:
formula_4
Magnetic helicity.
Magnetic helicity formula_5 is the helicity of a magnetic vector potential formula_6 where formula_7 is the associated magnetic field confined to a volume formula_2. Magnetic helicity can then be expressed as
formula_8
Since the magnetic vector potential is not gauge invariant, the magnetic helicity is also not gauge invariant in general. As a consequence, the magnetic helicity of a physical system cannot be measured directly. Under certain conditions and assumptions, one can however measure the current helicity of a system and, under further conditions and assumptions, deduce the magnetic helicity from it.
Magnetic helicity has units of magnetic flux squared: Wb2 (webers squared) in SI units and Mx2 (maxwells squared) in Gaussian Units.
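As an illustration of the definition (and of the gauge-invariance remark below for a periodic system without net flux), the volume integral can be evaluated numerically for a simple periodic field. The Python sketch below is a minimal, illustrative example: it builds a Beltrami-type field B = (cos "kz", -sin "kz", 0) in a periodic box, recovers a vector potential in the Coulomb gauge from its Fourier transform, and integrates A·B; for this particular field the curl of B equals "k"B, so the exact answer is the box volume divided by "k".

```python
import numpy as np

N, L, k0 = 32, 2 * np.pi, 1.0      # grid size, box length (volume V = L**3), test wavenumber

x = np.linspace(0, L, N, endpoint=False)
Z = np.meshgrid(x, x, x, indexing="ij")[2]

# Beltrami-type test field: curl B = k0 B, hence A = B / k0 and H = V / k0 exactly
B = np.array([np.cos(k0 * Z), -np.sin(k0 * Z), np.zeros_like(Z)])

# vector potential in the Coulomb gauge, from Fourier space: A_k = i k x B_k / |k|^2
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
K2 = KX**2 + KY**2 + KZ**2
K2[0, 0, 0] = 1.0                   # avoid division by zero; the mean mode of B is zero anyway
Bk = np.fft.fftn(B, axes=(1, 2, 3))
Ak = 1j * np.array([KY * Bk[2] - KZ * Bk[1],
                    KZ * Bk[0] - KX * Bk[2],
                    KX * Bk[1] - KY * Bk[0]]) / K2
A = np.real(np.fft.ifftn(Ak, axes=(1, 2, 3)))

H = np.sum(A * B) * (L / N) ** 3    # discrete approximation of the volume integral of A . B
print(H, L**3 / k0)                 # the two values agree closely
```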
Current helicity.
The current helicity, or helicity formula_9 of the magnetic field formula_10 confined to a volume formula_2, can be expressed as
formula_11
where formula_12 is the current density. Unlike magnetic helicity, current helicity is not an ideal invariant (it is not conserved even when the electrical resistivity is zero).
Gauge considerations.
Magnetic helicity is a gauge-dependent quantity, because formula_13 can be redefined by adding a gradient to it (a gauge transformation). However, for perfectly conducting boundaries or periodic systems without a net magnetic flux, the magnetic helicity contained in the whole domain is gauge invariant, that is, independent of the gauge choice. A gauge-invariant "relative helicity" has been defined for volumes with non-zero magnetic flux on their boundary surfaces.
Topological interpretation.
The name "helicity" is because the trajectory of a fluid particle in a fluid with velocity formula_14 and vorticity formula_15 forms a helix in regions where the kinetic helicity formula_16. When formula_17, the resulting helix is right-handed and when formula_18 it is left-handed. This behavior is very similar to that found concerning magnetic field lines.
Regions where magnetic helicity is not zero can also contain other sorts of magnetic structures, such as helical magnetic field lines. Magnetic helicity is a continuous generalization of the topological concept of linking number to the differential quantities required to describe the magnetic field. Where linking numbers describe how many times curves are interlinked, magnetic helicity describes how many magnetic field lines are interlinked.
Magnetic helicity is proportional to the sum of the topological quantities twist and writhe for magnetic field lines. The twist is the rotation of the flux tube around its axis, and writhe is the rotation of the flux tube axis itself. Topological transformations can change twist and writhe numbers, but conserve their sum. As magnetic flux tubes (collections of closed magnetic field line loops) tend to resist crossing each other in magnetohydrodynamic fluids, magnetic helicity is very well-conserved.
As with many quantities in electromagnetism, magnetic helicity is closely related to fluid mechanical helicity, the corresponding quantity for fluid flow lines, and their dynamics are interlinked.
Properties.
Ideal quadratic invariance.
In the late 1950s, Lodewijk Woltjer and Walter M. Elsässer independently discovered the ideal invariance of magnetic helicity, that is, its conservation when resistivity is zero. Woltjer's proof, valid for a closed system, is repeated in the following:
In ideal magnetohydrodynamics, the time evolution of a magnetic field and magnetic vector potential can be expressed using the induction equation as
formula_19
respectively, where formula_20 is the gradient of a scalar potential, fixed by the choice of gauge. Choosing the gauge so that this term vanishes, formula_21, the time evolution of magnetic helicity in a volume formula_2 is given by:
formula_22
The dot product in the integrand of the first term is zero since formula_23 is orthogonal to the cross product formula_24, and the second term can be integrated by parts to give
formula_25
where the second term is a surface integral over the boundary surface formula_26 of the closed system. The dot product in the integrand of the first term is zero because formula_27 is orthogonal to formula_28 (In the chosen gauge, the time derivative of the vector potential equals formula_24, which is perpendicular to the magnetic field.) The second term also vanishes because motions inside the closed system cannot affect the vector potential outside, so that at the boundary surface formula_29 since the magnetic vector potential is a continuous function. Therefore,
formula_30
and magnetic helicity is ideally conserved. In all situations where magnetic helicity is gauge invariant, magnetic helicity is ideally conserved without the need for the specific gauge choice formula_31
Magnetic helicity remains conserved in a good approximation even with a small but finite resistivity, in which case magnetic reconnection dissipates energy.
Inverse transfer.
Small-scale helical structures tend to form larger and larger magnetic structures. This can be called an "inverse transfer" in Fourier space, as opposed to the "(direct)" energy cascade in three-dimensional turbulent hydrodynamical flows. The possibility of such an inverse transfer was first proposed by Uriel Frisch and collaborators and has been verified through many numerical experiments. As a consequence, the presence of magnetic helicity offers a possible explanation for the existence and sustainment of large-scale magnetic structures in the Universe.
An argument for this inverse transfer is repeated here; it is based on the so-called "realizability condition" on the magnetic helicity Fourier spectrum formula_32 (where formula_33 is the Fourier coefficient at the wavevector formula_34 of the magnetic field formula_35, and similarly for formula_36, the star denoting the complex conjugate). The "realizability condition" corresponds to an application of the Cauchy–Schwarz inequality, which yields:
formula_37
with formula_38 the magnetic energy spectrum. To obtain this inequality, the fact that formula_39 (with formula_40 the solenoidal part of the Fourier transformed magnetic vector potential, orthogonal to the wavevector in Fourier space) has been used, since formula_41. The factor 2 is not present in the paper since the magnetic helicity is defined there alternatively as formula_42.
One can then imagine an initial situation with no velocity field and a magnetic field only present at two wavevectors formula_43 and formula_44. We assume a fully helical magnetic field, which means that it saturates the realizability condition: formula_45 and formula_46. Assuming that all the energy and magnetic helicity is transferred to another wavevector formula_47, the conservation of magnetic helicity on the one hand and of the total energy formula_48 (the sum of magnetic and kinetic energy) on the other hand gives:
formula_49
formula_50
The second equality for the energy comes from the fact that the initial state is assumed to have no kinetic energy. It then follows that necessarily formula_51. Indeed, if we had formula_52, then:
formula_53
which would break the realizability condition. This means that formula_54. In particular, for formula_55, the magnetic helicity is transferred to a smaller wavevector, which means to larger scales.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H^{\\mathbf f}"
},
{
"math_id": 1,
"text": "\\mathbf f"
},
{
"math_id": 2,
"text": "V"
},
{
"math_id": 3,
"text": "\\nabla\\times{\\mathbf f}"
},
{
"math_id": 4,
"text": " H^{\\mathbf f} = \\int_V {\\mathbf f} \\cdot \\left(\\nabla\\times{\\mathbf f}\\right)\\ dV . "
},
{
"math_id": 5,
"text": "H^{\\mathbf M}"
},
{
"math_id": 6,
"text": "{\\mathbf A}"
},
{
"math_id": 7,
"text": "\\nabla \\times {\\mathbf A}={\\mathbf B}"
},
{
"math_id": 8,
"text": " H^{\\mathbf M} = \\int_V {\\mathbf A}\\cdot{\\mathbf B}\\ dV . "
},
{
"math_id": 9,
"text": "M^{\\mathbf{J}}"
},
{
"math_id": 10,
"text": "\\mathbf{B}"
},
{
"math_id": 11,
"text": " H^{\\mathbf J} = \\int_V {\\mathbf B}\\cdot{\\mathbf J}\\ dV "
},
{
"math_id": 12,
"text": " {\\mathbf J} = \\nabla \\times {\\mathbf B} "
},
{
"math_id": 13,
"text": "\\mathbf A"
},
{
"math_id": 14,
"text": "\\boldsymbol v"
},
{
"math_id": 15,
"text": "\\boldsymbol{\\omega}=\\nabla \\times \\boldsymbol{v}"
},
{
"math_id": 16,
"text": "\\textstyle H^K=\\int \\mathbf v \\cdot \\boldsymbol{\\omega} \\neq 0"
},
{
"math_id": 17,
"text": "\\textstyle H^K > 0"
},
{
"math_id": 18,
"text": "\\textstyle H^K < 0"
},
{
"math_id": 19,
"text": " \\frac{\\partial {\\mathbf B}}{\\partial t} = \\nabla \\times ({\\mathbf v} \\times {\\mathbf B}),\\quad \\frac{\\partial {\\mathbf A}}{\\partial t} = {\\mathbf v} \\times {\\mathbf B} + \\nabla\\Phi, "
},
{
"math_id": 20,
"text": " \\nabla\\Phi "
},
{
"math_id": 21,
"text": "\\nabla \\Phi = \\mathbf{0}"
},
{
"math_id": 22,
"text": "\\begin{align}\n\\frac{\\partial H^{\\mathbf M}}{\\partial t} &= \\int_V \\left( \\frac{\\partial {\\mathbf A}}{\\partial t} \\cdot {\\mathbf B} + {\\mathbf A} \\cdot \\frac{\\partial {\\mathbf B}}{\\partial t} \\right) dV \\\\\n&= \\int_V ({\\mathbf v} \\times {\\mathbf B}) \\cdot{\\mathbf B}\\ dV + \\int_V {\\mathbf A} \\cdot \\left(\\nabla \\times \\frac{\\partial {\\mathbf A}}{\\partial t}\\right) dV .\n\\end{align}"
},
{
"math_id": 23,
"text": "{\\mathbf B}"
},
{
"math_id": 24,
"text": "{\\mathbf v} \\times {\\mathbf B}"
},
{
"math_id": 25,
"text": " \\frac{\\partial H^{\\mathbf M}}{\\partial t} = \\int_V \\left(\\nabla \\times {\\mathbf A}\\right) \\cdot \\frac{\\partial {\\mathbf A}}{\\partial t}\\ dV + \\int_{\\partial V} \\left({\\mathbf A} \\times \\frac{\\partial {\\mathbf A}}{\\partial t}\\right) \\cdot d\\mathbf{S} "
},
{
"math_id": 26,
"text": "\\partial V"
},
{
"math_id": 27,
"text": " \\nabla \\times {\\mathbf A} = {\\mathbf B} "
},
{
"math_id": 28,
"text": " \\partial {\\mathbf A}/\\partial t ."
},
{
"math_id": 29,
"text": " \\partial {\\mathbf A}/\\partial t = \\mathbf{0} "
},
{
"math_id": 30,
"text": " \\frac{\\partial H^{\\mathbf M}}{\\partial t} = 0 , "
},
{
"math_id": 31,
"text": " \\nabla \\Phi = \\mathbf{0} . "
},
{
"math_id": 32,
"text": " \\hat{H}^M_{\\mathbf k} = \\hat{\\mathbf A}^*_{\\mathbf k} \\cdot \\hat{\\mathbf B}_{\\mathbf k} "
},
{
"math_id": 33,
"text": " \\hat{\\mathbf B}_{\\mathbf k} "
},
{
"math_id": 34,
"text": " {\\mathbf k} "
},
{
"math_id": 35,
"text": " {\\mathbf B} "
},
{
"math_id": 36,
"text": " \\hat{\\mathbf A} "
},
{
"math_id": 37,
"text": " \\left|\\hat{H}^M_{\\mathbf k}\\right| \\leq \\frac{2E^M_{\\mathbf k}}{|{\\mathbf k}|} ,"
},
{
"math_id": 38,
"text": " E^M_{\\mathbf k} = \\frac{1}{2} \\hat{\\mathbf B}^*_{\\mathbf k}\\cdot\\hat{\\mathbf B}_{\\mathbf k} "
},
{
"math_id": 39,
"text": " |\\hat{\\mathbf B}_{\\mathbf k}|=|{\\mathbf k}||\\hat{\\mathbf A}^\\perp_{\\mathbf k}| "
},
{
"math_id": 40,
"text": " \\hat{\\mathbf A}^\\perp_{\\mathbf k} "
},
{
"math_id": 41,
"text": " \\hat{\\mathbf{B}}_{\\mathbf k} = i {\\mathbf k} \\times \\hat{\\mathbf{A}}_{\\mathbf k} "
},
{
"math_id": 42,
"text": " \\frac{1}{2} \\int_V {\\mathbf A} \\cdot {\\mathbf B}\\ dV "
},
{
"math_id": 43,
"text": " \\mathbf p "
},
{
"math_id": 44,
"text": " \\mathbf q "
},
{
"math_id": 45,
"text": " \\left|\\hat{H}^M_{\\mathbf p}\\right| = \\frac{2E^M_{\\mathbf p}}{|{\\mathbf p}|} "
},
{
"math_id": 46,
"text": " \\left|\\hat{H}^M_{\\mathbf q}\\right| = \\frac{2E^M_{\\mathbf q}}{|{\\mathbf q}|} "
},
{
"math_id": 47,
"text": " \\mathbf k "
},
{
"math_id": 48,
"text": " E^T = E^M + E^K "
},
{
"math_id": 49,
"text": " H^M_{\\mathbf k} = H^M_{\\mathbf p} + H^M_{\\mathbf q}, "
},
{
"math_id": 50,
"text": " E^T_{\\mathbf k} = E^T_{\\mathbf p}+E^T_{\\mathbf q} = E^M_{\\mathbf p}+E^M_{\\mathbf q}. "
},
{
"math_id": 51,
"text": " |\\mathbf k| \\leq \\max(|\\mathbf p|, |\\mathbf q| ) "
},
{
"math_id": 52,
"text": " |\\mathbf k| > \\max(|\\mathbf p|,|\\mathbf q| ) "
},
{
"math_id": 53,
"text": " H^M_{\\mathbf k} = H^M_{\\mathbf p} + H^M_{\\mathbf q} = \\frac{2E^M_{\\mathbf p}}{|\\mathbf p|} + \\frac{2E^M_{\\mathbf q}}{|\\mathbf q|} > \\frac{2\\left(E^M_{\\mathbf p} + E^M_{\\mathbf q}\\right)}{|\\mathbf k|} = \\frac{2E^T_{\\mathbf k}}{|\\mathbf k|} \\geq \\frac{2E^M_{\\mathbf k}}{|\\mathbf k|}, "
},
{
"math_id": 54,
"text": " |\\mathbf k| \\leq \\max(|\\mathbf p|,|\\mathbf q| ) "
},
{
"math_id": 55,
"text": " |{\\mathbf p}| = |{\\mathbf q}| "
}
] | https://en.wikipedia.org/wiki?curid=1082916 |
1082946 | Hydrodynamical helicity | Aspect of Eulerian fluid dynamics
In fluid dynamics, helicity is, under appropriate conditions, an invariant of the Euler equations of fluid flow, having a topological interpretation as a measure of linkage and/or knottedness of vortex lines in the flow. This was first proved by Jean-Jacques Moreau in 1961; Moffatt derived it independently in 1969, without knowledge of Moreau's paper. This helicity invariant is an extension of Woltjer's theorem for magnetic helicity.
Let formula_0 be the velocity field and formula_1 the corresponding vorticity field. Under the following three conditions, the vortex lines are transported with (or 'frozen in') the flow: (i) the fluid is inviscid; (ii) either the flow is incompressible (formula_2), or it is compressible with a barotropic relation formula_3 between pressure p and density ρ; and (iii) any body forces acting on the fluid are conservative. Under these conditions, any closed surface S whose normal vectors are orthogonal to the vorticity (that is, formula_4) is, like vorticity, transported with the flow.
Let V be the volume inside such a surface. Then the helicity in V, denoted H, is defined by the volume integral
formula_5
For a localised vorticity distribution in an unbounded fluid, V can be taken to be the whole space, and H is then the total helicity of the flow. H is invariant precisely because the vortex lines are frozen in the flow and their linkage and/or knottedness is therefore conserved, as recognized by Lord Kelvin (1868). Helicity is a pseudo-scalar quantity: it changes sign under change from a right-handed to a left-handed frame of reference; it can be considered as a measure of the handedness (or chirality) of the flow. Helicity is one of the four known integral invariants of the Euler equations; the other three are energy, momentum and angular momentum.
For two linked unknotted vortex tubes having circulations formula_6 and formula_7, and no internal twist, the helicity is given by formula_8, where n is the Gauss linking number of the two tubes, and the plus or minus is chosen according as the linkage is right- or left-handed.
For a single knotted vortex tube with circulation formula_9, then, as shown by Moffatt & Ricca (1992), the helicity is given by formula_10, where formula_11 and formula_12 are the writhe and twist of the tube; the sum formula_13 is known to be invariant under continuous deformation of the tube.
The invariance of helicity provides an essential cornerstone of the subject topological fluid dynamics and magnetohydrodynamics, which is concerned with global properties of flows and their topological characteristics.
Meteorology.
In meteorology, helicity corresponds to the transfer of vorticity from the environment to an air parcel in convective motion. Here the definition of helicity is simplified to only use the horizontal component of wind and vorticity, and to only integrate in the vertical direction, replacing the volume integral with a one-dimensional definite integral or line integral:
formula_14
where formula_15 is the horizontal wind velocity, formula_16 the corresponding vorticity, and the integral is taken over altitude between the levels "Z"1 and "Z"2.
According to this formula, if the horizontal wind does not change direction with altitude, H will be zero as formula_15 and formula_16 are perpendicular, making their scalar product nil. H is then positive if the wind veers (turns clockwise) with altitude and negative if it backs (turns counterclockwise). This helicity used in meteorology has units of energy per unit mass [m2/s2] and is thus interpreted as a measure of the energy transfer by the wind shear with altitude, including the change of wind direction.
This notion is used to predict the possibility of tornadic development in a thundercloud. In this case, the vertical integration is limited to the layer below cloud tops (generally 3 km or 10,000 feet), and the horizontal wind is replaced by the wind relative to the storm, obtained by subtracting the storm's motion:
formula_17
where "C" is the cloud motion relative to the ground.
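In practice, SRH is estimated layer by layer from a discretely sampled wind profile (a hodograph). The Python sketch below uses the standard discrete form of the integral; the wind profile and storm motion are invented values, purely for illustration.

```python
def storm_relative_helicity(u, v, cx, cy):
    """Approximate SRH (m^2/s^2) from horizontal wind components u, v (m/s)
    sampled at increasing heights, given a storm motion (cx, cy).
    Each term is the discrete contribution of one layer to the integral of
    (V - C) . (curl V) dz."""
    srh = 0.0
    for n in range(len(u) - 1):
        srh += (u[n + 1] - cx) * (v[n] - cy) - (u[n] - cx) * (v[n + 1] - cy)
    return srh

# invented profile with the wind veering (turning clockwise) with height
u = [0.0, 5.0, 10.0, 14.0, 16.0]   # eastward components from the surface up to 3 km
v = [10.0, 12.0, 10.0, 6.0, 2.0]   # northward components
print(storm_relative_helicity(u, v, cx=7.0, cy=4.0))   # about 148 m^2/s^2, positive as expected
```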
Critical values of SRH (Storm Relative Helicity) for tornadic development, as researched in North America, are:
Helicity in itself is not the only component of severe thunderstorms, and these values are to be taken with caution. That is why the Energy Helicity Index (EHI) has been created. It is the result of SRH multiplied by the CAPE (Convective Available Potential Energy) and then divided by a threshold CAPE:
formula_18
This incorporates not only the helicity but the energy of the air parcel and thus tries to eliminate weak potential for thunderstorms even in strong SRH regions. The critical values of EHI:
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{u}(x,t)"
},
{
"math_id": 1,
"text": "\\nabla\\times\\mathbf{u}"
},
{
"math_id": 2,
"text": "\\nabla\\cdot\\mathbf{u} = 0"
},
{
"math_id": 3,
"text": "p = p(\\rho)"
},
{
"math_id": 4,
"text": "\\mathbf{n} \\cdot (\\nabla\\times\\mathbf{u}) = 0"
},
{
"math_id": 5,
"text": "\nH=\\int_{V}\\mathbf{u}\\cdot\\left(\\nabla\\times\\mathbf{u}\\right)\\,dV \\;.\n"
},
{
"math_id": 6,
"text": "\\kappa_1"
},
{
"math_id": 7,
"text": "\\kappa_2"
},
{
"math_id": 8,
"text": "H = \\plusmn 2n \\kappa_1 \\kappa_2"
},
{
"math_id": 9,
"text": "\\kappa"
},
{
"math_id": 10,
"text": "H = \\kappa^2 (Wr + Tw)"
},
{
"math_id": 11,
"text": "Wr"
},
{
"math_id": 12,
"text": "Tw"
},
{
"math_id": 13,
"text": "Wr + Tw"
},
{
"math_id": 14,
"text": "\nH = \\int_{Z_1}^{Z_2}{ \\vec V_h} \\cdot \\vec \\zeta_h \\,d{Z} = \\int_{Z_1}^{Z_2}{ \\vec V_h} \\cdot \\nabla \\times \\vec V_h \\,d{Z} ,"
},
{
"math_id": 15,
"text": "V_h"
},
{
"math_id": 16,
"text": "\\nabla \\times V_h"
},
{
"math_id": 17,
"text": "\\mathrm{SRH} = \\int_{Z_1}^{Z_2}{ \\left ( \\vec V_h - \\vec C \\right )} \\cdot \\nabla \\times \\vec V_h \\,d{Z} "
},
{
"math_id": 18,
"text": "\\mathrm{EHI} = \\frac{\\mathrm{CAPE} \\times \\mathrm{SRH}}{\\text{160,000}}"
}
] | https://en.wikipedia.org/wiki?curid=1082946 |
10830506 | Face diagonal | Concept in geometry
In geometry, a face diagonal of a polyhedron is a diagonal on one of the faces, in contrast to a "space diagonal" passing through the interior of the polyhedron.
A cuboid has twelve face diagonals (two on each of the six faces), and it has four space diagonals. The cuboid's face diagonals can have up to three different lengths, since the faces come in congruent pairs and the two diagonals on any face are equal. The cuboid's space diagonals all have the same length. If the edge lengths of a cuboid are "a", "b", and "c", then the distinct rectangular faces have edges ("a", "b"), ("a", "c"), and ("b", "c"); so the respective face diagonals have lengths formula_0 formula_1 and formula_2
Thus each face diagonal of a cube with side length "a" is formula_3.
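As a small illustration (the edge lengths below are arbitrary example values):

```python
import math

def face_diagonals(a, b, c):
    """Lengths of the three distinct face diagonals of an a x b x c cuboid."""
    return math.hypot(a, b), math.hypot(a, c), math.hypot(b, c)

print(face_diagonals(1, 1, 1))    # (1.414..., 1.414..., 1.414...): each is sqrt(2) for a unit cube
print(face_diagonals(3, 4, 12))   # (5.0, 12.369..., 12.649...)
```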
A regular dodecahedron has 60 face diagonals (and 100 space diagonals).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sqrt{a^2+b^2},"
},
{
"math_id": 1,
"text": "\\sqrt{a^2+c^2},"
},
{
"math_id": 2,
"text": "\\sqrt{b^2+c^2}."
},
{
"math_id": 3,
"text": "a\\sqrt 2"
}
] | https://en.wikipedia.org/wiki?curid=10830506 |
10830607 | Minnesota Starvation Experiment | 1944–1945 US medical experiment
The Minnesota Starvation Experiment, also known as the Minnesota Semi-Starvation Experiment, the Minnesota Starvation-Recovery Experiment and the Starvation Study, was a clinical study performed at the University of Minnesota between November 19, 1944, and December 20, 1945. The investigation was designed to determine the physiological effects of severe and prolonged dietary restriction and the effectiveness of dietary rehabilitation strategies.
The purpose of the study was twofold: first, to produce a definitive treatise on the physical and psychological effects of prolonged, famine-like semi-starvation on healthy men, as well as subsequent effectiveness of dietary rehabilitation from this condition and, second, to use the scientific results produced to guide the Allied relief assistance to famine victims in Europe and Asia at the end of World War II. It was recognized early in 1944 that millions of people were in grave danger of mass famine as a result of the conflict, and information was needed regarding the effects of semi-starvation—and the impact of various rehabilitation strategies—if postwar relief efforts were to be effective.
The study was developed in coordination with the Civilian Public Service (CPS, 1941–1947) of conscientious objectors and the Selective Service System and used 36 men selected from a pool of over 200 CPS volunteers.
The study was divided into four phases: a twelve-week baseline control phase; a 24-week starvation phase, during which each participant lost an average of 25% of his pre-starvation body weight; and two recovery phases, in which various rehabilitative diets were tried. The first rehabilitative phase restricted intake to 2,000–3,000 calories a day. The second rehabilitative phase was unrestricted, letting the subjects eat as much food as they wanted.
Among the conclusions from the study was the confirmation that prolonged semi-starvation produces significant increases in depression, hysteria and hypochondriasis; most of the subjects experienced periods of severe emotional distress and depression. Participants exhibited a preoccupation with food, both during the starvation period and the rehabilitation phase. Sexual interest was drastically reduced, and the volunteers showed signs of social withdrawal and isolation.
Preliminary pamphlets containing key results from the Minnesota Starvation Experiment were used by aid workers in Europe and Asia in the months after WWII. In 1950, Ancel Keys and colleagues published the results in a two-volume, 1,385 page text entitled "The Biology of Human Starvation" (University of Minnesota Press).
This study was independent of the much broader Warsaw Ghetto Hunger Study performed in 1942 in the Warsaw Ghetto by 28 doctors of The Jewish Hospital in Warsaw. Their results were published in 1946.
Principal investigators.
Physiologist Ancel Keys was the lead investigator of the Minnesota Starvation Experiment. He was directly responsible for the X-ray analysis and administrative work and the general supervision of the activities in the Laboratory of Physiological Hygiene which he had founded at the University of Minnesota in 1940 after leaving positions at Harvard's Fatigue Laboratory and the Mayo Clinic. Starting in 1941, he served as a special assistant to the U.S. Secretary of War and worked with the Army to develop rations for troops in combat, the K-rations. Keys was Director for the Laboratory of Physiological Hygiene for 26 years, retired in 1972 and died in 2004 at the age of 100.
Olaf Mickelsen, a biochemist with a Ph.D. from the University of Wisconsin in 1939, was responsible for the chemical analyses conducted in the Laboratory of Physiological Hygiene during the Starvation Study, and the daily dietary regime of the CPS subjects—including the supervision of the kitchen and its staff. During the study, he was an associate professor of biochemistry and physiological hygiene at the University of Minnesota.
Henry Longstreet Taylor, with a Ph.D. from the University of Minnesota in 1941, had the major responsibility of recruiting the 36 CPS volunteers and maintaining the morale of the participants and their involvement in the study. During the study he collaborated with Austin Henschel in conducting the physical performance, respiration and postural tests. He joined the faculty at the Laboratory of Physiological Hygiene, where he held a joint appointment with the Department of Physiology. His research concentrated on problems in cardiovascular physiology, temperature regulation, metabolism, nutrition, aging, and cardiovascular epidemiology.
Austin Henschel shared the responsibility of screening the CPS volunteers with Taylor for selection, had charge of the blood morphology, and scheduling all the tests and measurements of the subjects during the course of the study. He was a member of the faculty in the Laboratory of Physiological Hygiene and the Department of Medicine at the University of Minnesota.
Josef Brožek (1914–2004) was responsible for psychological studies during the Starvation Study, including the psychomotor tests, anthropometric measurements, and statistical analysis of the results. He received his Ph.D. in psychology from Charles University in Prague, Czechoslovakia, in 1937 and emigrated to the United States in 1939. He joined the Laboratory of Physiological Hygiene at the University of Minnesota in 1941, where he served in a succession of positions over a 17-year period. His research concerned malnutrition and behavior, visual illumination and performance, and aging.
Methods.
Recruitment of volunteers.
The experiment was planned in cooperation with the Civilian Public Service (CPS) and the Selective Service System, using volunteers selected from the ranks of conscientious objectors who had been inducted into public wartime service. Ancel Keys obtained approval from the War Department to select participants from the CPS. Availability of a sufficient number of healthy volunteers willing to subject themselves to the year-long invasion of privacy, nutritional deprivation, and physical and mental hardship was essential for the successful execution of the experiment.
In early 1944, a recruitment brochure was drafted and distributed within the network of CPS work camps throughout the United States. Over 400 men volunteered to participate in the study as an alternative to military service; of these, about 100 were selected for examination. Drs. Taylor, Brožek, and Henschel from the Minnesota Laboratory of Physiological Hygiene traveled to the various CPS units to interview the potential candidates and administer physical and psychological tests to the volunteers. Thirty-six men were ultimately selected who demonstrated evidence of the required mental and physical health, the ability to get along reasonably well within a group while enduring deprivation and hardship, and sufficient commitment to the relief and rehabilitation objectives of the investigation to complete the study. All subjects were white males, with ages ranging from 22 to 33 years old.
Of the 36 volunteer subjects, 15 were members of the Historic Peace Churches (Mennonites, Church of the Brethren and Quakers). Others included Methodists, Presbyterians and Baptists, along with one participant each who was Jewish, Episcopalian, Evangelical & Reformed, Disciples of Christ, Congregational, and Evangelical Mission Covenant, and two participants without a declared religion.
The 36 CPS participants in the Minnesota Starvation Experiment were: William Anderson, Harold Blickenstaff, Wendell Burrous, Edward Cowles, George Ebeling, Carlyle Frederick, Jasper Garner, Lester Glick, James Graham, Earl Heckman, Roscoe Hinkle, Max Kampelman, Sam Legg, Phillip Liljengren, Howard Lutz, Robert McCullagh, William McReynolds, Dan Miller, L. Wesley Miller, Richard Mundy, Daniel Peacock, James Plaugher, Woodrow Rainwater, Donald Sanders, Cedric (Henry) Scholberg, Charles Smith, William Stanton, Raymond Summers, Marshall Sutton, Kenneth Tuttle, Robert Villwock, William Wallace, Franklin Watkins, W. Earl Weygandt, Robert Wiloughby, and Gerald Wilsnack.
Study period and phases.
The 12-month clinical study was performed at the University of Minnesota between November 19, 1944 and December 20, 1945.
Throughout the duration of the study each man was assigned specific work tasks, was expected to walk each week, and was required to keep a personal diary. An extensive battery of tests was periodically administered, including the collection of metabolic and physical measurements; X-ray examinations; treadmill performance; and intelligence and psychological evaluation.
The study was divided into four distinct phases: the control period, the semi-starvation period, the restricted rehabilitation period, and the unrestricted rehabilitation period.
During the starvation period, the subjects received two meals per day designed to induce the same level of nutritional stress for each participant. Since each subject had distinct metabolic characteristics, the diet of each man was adjusted throughout the starvation period to produce roughly a 25% total weight loss over the 24-week period.
The researchers tracked each subject's weight as a function of time elapsed since the beginning of the starvation period. For each subject, the weight versus time plot was expected—as well as enforced—to form a particular curve, the "prediction weight-loss curve", whose characteristics were decided before the commencement of the experiment. The postulated curves turned out to be quite predictive for most subjects. If a subject did veer off his curve in any given week, his caloric intake for the next week would be adjusted, by varying the amount of bread and potatoes, to bring him back to the curve; however, the required adjustments were usually minor. The shapes of the curves were chosen "based on the concept that the rate of weight loss would progressively decrease and reach a relative plateau" at the final weight.
For each subject, the weight vs. time curve was taken to be quadratic in time (in fact, an upward-opening parabola) with the minimum located at 24 weeks, at which point the weight is supposed to be equal to the final target body weight (the minimum is where the curve has zero slope; this corresponds to the "plateau" mentioned above). Mathematically, this means that the curve for each subject was given by
formula_0
where formula_1 is the time (measured in weeks) elapsed since the beginning of the starvation period, formula_2 is the subject's weight at time formula_1, and formula_3 is the final weight that the subject was supposed to reach at the end of the 24-week period. The constant formula_4 is determined by the requirement that formula_5 be the initial weight formula_6, i.e. by solving
formula_7
for formula_4; this gives
formula_8.
The authors expressed this in terms of the percent total weight loss formula_9,
formula_10
(which, as stated above, was supposed to be about 25% for all subjects), obtaining
formula_11.
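As an illustration (the starting weight below is an arbitrary example, not data from the study), the weight prescribed for any week of the starvation phase follows directly from these expressions:

```python
def prescribed_weight(t_weeks, W_i, P=25.0):
    """Weight prescribed at week t of the 24-week starvation phase, for an
    initial weight W_i and a target total loss of P percent (same units as W_i)."""
    K = P / (100.0 * 24**2) * W_i
    W_f = W_i * (1.0 - P / 100.0)
    return W_f + K * (24.0 - t_weeks) ** 2

W_i = 70.0   # example initial body weight in kilograms
for t in (0, 6, 12, 18, 24):
    print(t, round(prescribed_weight(t, W_i), 1))
# prints 70.0, 62.3, 56.9, 53.6 and 52.5 kg: the loss slows and plateaus at the target weight
```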
Results.
Preliminary pamphlets containing key results from the Minnesota Starvation Experiment were produced and used extensively by aid workers in Europe and Asia in the months after World War II.
The full report of results from the Minnesota Starvation Experiment was published 5 years later, in 1950 in a two-volume, 1,385-page text titled "The Biology of Human Starvation", University of Minnesota Press. The 50-chapter work contains an extensive analysis of the physiological and psychological data collected during the study, and a comprehensive literature review.
Two subjects were dismissed for failing to maintain the dietary restrictions imposed during the starvation phase of the experiment, and the data for two others were not used in the analysis of the results.
Among the conclusions from the study was the confirmation that prolonged semi-starvation produces significant increases in depression, hysteria and hypochondriasis as measured using the Minnesota Multiphasic Personality Inventory. Indeed, most of the subjects experienced periods of severe emotional distress and depression.
The rehabilitation phase proved to be psychologically the hardest phase for most of the men, with extreme effects including self-mutilation: one subject, Sam Legg, amputated three fingers of his hand with an axe, though he was unsure whether he had done so intentionally or accidentally. Participants exhibited a preoccupation with food, both during the starvation period and the rehabilitation phase. Sexual interest was drastically reduced, and the volunteers showed signs of social withdrawal and isolation. The participants reported a decline in concentration, comprehension and judgment capabilities, although the standardized tests administered showed no actual signs of diminished capacity. There were marked declines in physiological processes indicative of decreases in each subject's basal metabolic rate (the energy required by the body in a state of rest), reflected in reduced body temperature, respiration and heart rate. Some of the subjects exhibited edema in their extremities, presumably due to decreased levels of plasma proteins, given that the body's ability to construct key proteins like albumin depends on available energy sources.
Related work.
One of the crucial observations of the Minnesota Starvation Experiment discussed by a number of researchers in the nutritional sciences—including Ancel Keys—is that the physical effects of the induced semi-starvation during the study closely approximate the conditions experienced by people with a range of eating disorders such as anorexia nervosa and bulimia nervosa. As a result of the study it has been postulated that many of the profound social and psychological effects of these disorders may result from undernutrition, and recovery depends on physical re-nourishment as well as psychological treatment. | [
{
"math_id": 0,
"text": "W(t)=W_{f}+K\\, (24-t)^{2},"
},
{
"math_id": 1,
"text": "t"
},
{
"math_id": 2,
"text": "W(t)"
},
{
"math_id": 3,
"text": "W_{f}"
},
{
"math_id": 4,
"text": "K"
},
{
"math_id": 5,
"text": "W(t=0)"
},
{
"math_id": 6,
"text": "W_{i}"
},
{
"math_id": 7,
"text": "W_{i}=W_{f}+K\\, (24-0)^{2}"
},
{
"math_id": 8,
"text": "K=\\frac{W_{i}-W_{f}}{24^{2}}"
},
{
"math_id": 9,
"text": "P"
},
{
"math_id": 10,
"text": "P=100 \\times \\frac{W_{i}-W_{f}}{W_{i}}"
},
{
"math_id": 11,
"text": "K=\\frac{P}{100 \\times 24^{2}}\\,W_{i}"
}
] | https://en.wikipedia.org/wiki?curid=10830607 |
10831433 | Douglas sea scale | Scale to estimate the roughness of the sea for navigation
The Douglas sea scale is a scale which measures the height of the waves and the swell of the sea. The scale is simple to apply and is expressed in one of 10 degrees.
The scale.
The Douglas sea scale, also called the "international sea and swell scale", was devised in 1921 by Captain H. P. Douglas, who later became vice admiral Sir Percy Douglas and hydrographer of the Royal Navy. Its purpose is to estimate the roughness of the sea for navigation. The scale has two codes: one code is for estimating the sea state, the other code is for describing the swell of the sea.
State of the sea (wind sea).
The Degree (D) value has an almost linear dependence on the square root of the average wave Height (H) above, i.e., formula_0. Using linear regression on the table above, the coefficients can be calculated for the low Height values (formula_1) and for the high Height values (formula_2). The Degree can then be approximated as the average of the low and high estimations, i.e.: formula_3 where [.] denotes optional rounding to the closest integer value. Without the rounding to integer, the root mean square error of this approximation is formula_4.
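A minimal Python sketch of this approximation is shown below. It assumes the two heights passed in are the low and high wave heights in metres (a single observed height can be used for both), and uses the regression coefficients quoted above.

```python
import math

LAM_L, BETA_L = 2.3236, 1.2551   # fit against the low height values
LAM_H, BETA_H = 2.0872, 0.6091   # fit against the high height values

def douglas_degree(h_low, h_high=None, rounded=True):
    """Approximate Douglas sea-state degree from wave height(s) in metres."""
    if h_high is None:
        h_high = h_low               # use a single observed height for both estimates
    d = 0.5 * (LAM_L * math.sqrt(h_low) + LAM_H * math.sqrt(h_high)) + 0.5 * (BETA_L + BETA_H)
    return round(d) if rounded else d

print(douglas_degree(2.5, 4.0))   # a 2.5-4 m sea gives degree 5 ("rough" on the standard scale)
```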
Wave length and height classification.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "D\\simeq\\beta+\\lambda\\sqrt{H}"
},
{
"math_id": 1,
"text": "\\lambda_L=2.3236 , \\beta_L=1.2551"
},
{
"math_id": 2,
"text": "\\lambda_H=2.0872, \\beta_H=0.6091"
},
{
"math_id": 3,
"text": "D\\simeq\\left [ \\tfrac{1}{2}\\left ( \\lambda_L\\sqrt{H_L}+\\lambda_H\\sqrt{H_H} \\right )+\\tfrac{1}{2}\\left ( \\beta_L+\\beta_H \\right ) \\right ]"
},
{
"math_id": 4,
"text": "RMSE\\leq0.18"
}
] | https://en.wikipedia.org/wiki?curid=10831433 |
10833335 | History monoid | In mathematics and computer science, a history monoid is a way of representing the histories of concurrently running computer processes as a collection of strings, each string representing the individual history of a process. The history monoid provides a set of synchronization primitives (such as locks, mutexes or thread joins) for providing rendezvous points between a set of independently executing processes or threads.
History monoids occur in the theory of concurrent computation, and provide a low-level mathematical foundation for process calculi, such as CSP, the language of communicating sequential processes, or CCS, the calculus of communicating systems. History monoids were first presented by M.W. Shields.
History monoids are isomorphic to trace monoids (free partially commutative monoids) and to the monoid of dependency graphs. As such, they are free objects and are universal. The history monoid is a type of semi-abelian categorical product in the category of monoids.
Product monoids and projection.
Let
formula_0
denote an "n"-tuple of (not necessarily pairwise disjoint) alphabets formula_1. Let formula_2 denote all possible combinations of one finite-length string from each alphabet:
formula_3
(In more formal language, formula_2 is the Cartesian product of the free monoids of the formula_1. The superscript star is the Kleene star.) Composition in the product monoid is component-wise, so that, for
formula_4
and
formula_5
then
formula_6
for all formula_7 in formula_2. Define the union alphabet to be
formula_8
(The union here is the set union, not the disjoint union.) Given any string formula_9, we can pick out just the letters in some formula_10 using the corresponding string projection formula_11. A distribution formula_12 is the mapping that operates on formula_9 with all of the formula_13, separating it into components in each free monoid:
formula_14
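String projection and the distribution map are straightforward to express in code. In the Python sketch below the two alphabets are hypothetical, chosen only for illustration (the worked example later in this article uses its own alphabets, which are not reproduced here).

```python
def project(s, alphabet):
    """String projection: keep only the letters of s that belong to the alphabet."""
    return "".join(ch for ch in s if ch in alphabet)

def distribute(s, alphabets):
    """Distribution map: project s onto each alphabet, giving the tuple of
    individual histories, one per free monoid."""
    return tuple(project(s, a) for a in alphabets)

# hypothetical alphabets sharing the letter "c", which acts as a synchronization point
alphabets = ({"a", "b", "c"}, {"c", "d", "e"})

print(distribute("abdecdabe", alphabets))   # ('abcab', 'decde')
print(distribute("adbecdabe", alphabets))   # same tuple: swapping the independent letters
                                            # b and d does not change the individual histories
```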
Histories.
For every formula_15, the tuple formula_16 is called the elementary history of "a". It serves as an indicator function for the inclusion of a letter "a" in an alphabet formula_1. That is,
formula_17
where
formula_18
Here, formula_19 denotes the empty string. The history monoid formula_20 is the submonoid of the product monoid formula_2 generated by the elementary histories: formula_21 (where the superscript star is the Kleene star applied with a component-wise definition of composition as given above). The elements of formula_20 are called global histories, and the projections of a global history are called individual histories.
Connection to computer science.
The use of the word "history" in this context, and the connection to concurrent computing, can be understood as follows. An individual history is a record of the sequence of states of a process (or thread or machine); the alphabet formula_1 is the set of states of the process.
A letter that occurs in two or more alphabets serves as a synchronization primitive between the various individual histories. That is, if such a letter occurs in one individual history, it must also occur in another history, and serves to "tie" or "rendezvous" them together.
Consider, for example, formula_22 and formula_23. The union alphabet is of course formula_24. The elementary histories are formula_25, formula_26, formula_27, formula_28 and formula_29. In this example, an individual history of the first process might be formula_30 while the individual history of the second machine might be formula_31. Both of these individual histories are represented by the global history formula_32, since the projection of this string onto the individual alphabets yields the individual histories. In the global history, the letters formula_33 and formula_34 can be considered to commute with the letters formula_35 and formula_36, in that these can be rearranged without changing the individual histories. Such commutation is simply a statement that the first and second processes are running concurrently, and are unordered with respect to each other; they have not (yet) exchanged any messages or performed any synchronization.
The letter formula_37 serves as a synchronization primitive, as its occurrence marks a spot in both the global and individual histories, that cannot be commuted across. Thus, while the letters formula_33 and formula_34 can be re-ordered past formula_35 and formula_36, they cannot be reordered past formula_37. Thus, the global history formula_38 and the global history formula_39 both have as individual histories formula_40 and formula_41, indicating that the execution of formula_35 may happen before or after formula_34. However, the letter formula_37 is synchronizing, so that formula_36 is guaranteed to happen after formula_34, even though formula_36 is in a different process than formula_34.
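The commutation behaviour described above can be checked directly with a small, self-contained sketch using the same projection idea:

```python
def proj(word, alphabet):
    return "".join(ch for ch in word if ch in alphabet)

A1, A2 = {"a", "b", "c"}, {"a", "d", "e"}
for w in ("bcdabe", "bdcaeb", "bcdbae"):
    print(w, proj(w, A1), proj(w, A2))
# bcdabe -> bcab dae
# bdcaeb -> bcab dae   (same individual histories: d and e commute past b and c)
# bcdbae -> bcba dae   (different: here 'b' has been moved past the synchronizing 'a')
```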
Properties.
A history monoid is isomorphic to a trace monoid, and as such, is a type of semi-abelian categorical product in the category of monoids. In particular, the history monoid formula_42 is isomorphic to the trace monoid formula_43 with the dependency relation given by
formula_44
In simple terms, this is just the formal statement of the informal discussion given above: the letters in an alphabet formula_1 can be commutatively re-ordered past the letters in an alphabet formula_45, unless they are letters that occur in both alphabets. Thus, traces are exactly global histories, and vice versa.
Conversely, given any trace monoid formula_43, one can construct an isomorphic history monoid by taking a sequence of alphabets formula_46 where formula_47 ranges over all pairs in formula_48. | [
{
"math_id": 0,
"text": "A=(\\Sigma_1,\\Sigma_2,\\ldots,\\Sigma_n)"
},
{
"math_id": 1,
"text": "\\Sigma_k"
},
{
"math_id": 2,
"text": "P(A)"
},
{
"math_id": 3,
"text": "P(A)=\\Sigma_1^* \\times \\Sigma_2^* \\times \\cdots \\times \\Sigma_n^*"
},
{
"math_id": 4,
"text": "\\mathbf{u}=(u_1,u_2,\\ldots,u_n) \\, "
},
{
"math_id": 5,
"text": "\\mathbf{v}=(v_1,v_2,\\ldots,v_n) \\, "
},
{
"math_id": 6,
"text": "\\mathbf{uv}=(u_1v_1,u_2v_2,\\ldots,u_nv_n) \\, "
},
{
"math_id": 7,
"text": "\\mathbf{u}, \\mathbf{v}"
},
{
"math_id": 8,
"text": "\\Sigma=\\Sigma_1 \\cup \\Sigma_2 \\cup \\cdots \\cup \\Sigma_n. \\,"
},
{
"math_id": 9,
"text": "w\\in \\Sigma^*"
},
{
"math_id": 10,
"text": "\\Sigma_k^*"
},
{
"math_id": 11,
"text": "\\pi_k:\\Sigma^*\\to\\Sigma_k^*"
},
{
"math_id": 12,
"text": "\\pi:\\Sigma^*\\to P(A)"
},
{
"math_id": 13,
"text": "\\pi_k"
},
{
"math_id": 14,
"text": "\\pi(w)\\mapsto (\\pi_1(w), \\pi_2(w), \\ldots , \\pi_n(w)). \\,"
},
{
"math_id": 15,
"text": "a\\in\\Sigma"
},
{
"math_id": 16,
"text": "\\pi(a)"
},
{
"math_id": 17,
"text": "\\pi(a)=(a_1,a_2,\\ldots,a_n)"
},
{
"math_id": 18,
"text": "a_k=\\begin{cases} \na \\mbox{ if } a\\in \\Sigma_k \\\\\n\\varepsilon \\mbox { otherwise }.\n\\end{cases}"
},
{
"math_id": 19,
"text": "\\varepsilon"
},
{
"math_id": 20,
"text": "H(A)"
},
{
"math_id": 21,
"text": "H(A) = \\{ \\pi(a) | a\\in\\Sigma \\}^*"
},
{
"math_id": 22,
"text": "\\Sigma_1=\\{a,b,c\\}"
},
{
"math_id": 23,
"text": "\\Sigma_2=\\{a,d,e\\}"
},
{
"math_id": 24,
"text": "\\Sigma=\\{a,b,c,d,e\\}"
},
{
"math_id": 25,
"text": "(a,a)"
},
{
"math_id": 26,
"text": "(b,\\varepsilon)"
},
{
"math_id": 27,
"text": "(c,\\varepsilon)"
},
{
"math_id": 28,
"text": "(\\varepsilon,d)"
},
{
"math_id": 29,
"text": "(\\varepsilon,e)"
},
{
"math_id": 30,
"text": "bcbcc"
},
{
"math_id": 31,
"text": "ddded"
},
{
"math_id": 32,
"text": "bcbdddcced"
},
{
"math_id": 33,
"text": "b"
},
{
"math_id": 34,
"text": "c"
},
{
"math_id": 35,
"text": "d"
},
{
"math_id": 36,
"text": "e"
},
{
"math_id": 37,
"text": "a"
},
{
"math_id": 38,
"text": "bcdabe"
},
{
"math_id": 39,
"text": "bdcaeb"
},
{
"math_id": 40,
"text": "bcab"
},
{
"math_id": 41,
"text": "dae"
},
{
"math_id": 42,
"text": "H(\\Sigma_1,\\Sigma_2,\\ldots,\\Sigma_n)"
},
{
"math_id": 43,
"text": "\\mathbb{M}(D)"
},
{
"math_id": 44,
"text": "D=\\left(\\Sigma_1\\times\\Sigma_1\\right)\\cup\n\\left(\\Sigma_2\\times\\Sigma_2\\right)\\cup \\cdots \\cup\n\\left(\\Sigma_n\\times\\Sigma_n\\right)."
},
{
"math_id": 45,
"text": "\\Sigma_j"
},
{
"math_id": 46,
"text": "\\Sigma_{a, b} = \\{a, b\\}"
},
{
"math_id": 47,
"text": "(a, b)"
},
{
"math_id": 48,
"text": "D"
}
] | https://en.wikipedia.org/wiki?curid=10833335 |
10835 | Frequency modulation | Encoding of information in a carrier wave by varying the instantaneous frequency of the wave
Frequency modulation (FM) is the encoding of information in a carrier wave by varying the instantaneous frequency of the wave. The technology is used in telecommunications, radio broadcasting, signal processing, and computing.
In analog frequency modulation, such as the radio broadcasting of an audio signal representing voice or music, the instantaneous frequency deviation, i.e. the difference between the frequency of the carrier and its center frequency, has a functional relation to the modulating signal amplitude.
Digital data can be encoded and transmitted with a type of frequency modulation known as frequency-shift keying (FSK), in which the instantaneous frequency of the carrier is shifted among a set of frequencies. The frequencies may represent digits, such as '0' and '1'. FSK is widely used in computer modems, such as fax modems, telephone caller ID systems, garage door openers, and other low-frequency transmissions. Radioteletype also uses FSK.
Frequency modulation is widely used for FM radio broadcasting. It is also used in telemetry, radar, seismic prospecting, and monitoring newborns for seizures via EEG, two-way radio systems, sound synthesis, magnetic tape-recording systems and some video-transmission systems. In radio transmission, an advantage of frequency modulation is that it has a larger signal-to-noise ratio and therefore rejects radio frequency interference better than an equal power amplitude modulation (AM) signal. For this reason, most music is broadcast over FM radio.
However, under severe enough multipath conditions it performs much more poorly than AM, with distinct high-frequency noise artifacts that are audible at lower volumes and with less complex tones. With high enough volume and carrier deviation, audio distortion starts to occur that would otherwise not be present without multipath or with an AM signal.
Frequency modulation and phase modulation are the two complementary principal methods of angle modulation; phase modulation is often used as an intermediate step to achieve frequency modulation. These methods contrast with amplitude modulation, in which the amplitude of the carrier wave varies, while the frequency and phase remain constant.
Theory.
If the information to be transmitted (i.e., the baseband signal) is formula_0 and the sinusoidal carrier is formula_1, where "fc" is the carrier's base frequency, and "Ac" is the carrier's amplitude, the modulator combines the carrier with the baseband data signal to get the transmitted signal:
formula_2
where formula_3, formula_4 being the sensitivity of the frequency modulator and formula_5 being the amplitude of the modulating signal or baseband signal.
In this equation, formula_6 is the "instantaneous frequency" of the oscillator and formula_7 is the "frequency deviation", which represents the maximum shift away from "fc" in one direction, assuming "x""m"("t") is limited to the range ±1.
It is important to realize that this process of integrating the instantaneous frequency to create an instantaneous phase is quite different from what the term "frequency modulation" naively implies, namely directly adding the modulating signal to the carrier frequency
formula_8
which would result in a modulated signal that has spurious local minima and maxima that do not correspond to those of the carrier.
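The distinction can be seen numerically in a short sketch; the sample rate, frequencies and amplitudes below are arbitrary illustrative choices:

```python
import numpy as np

fs = 48_000                                   # sample rate, Hz (arbitrary)
t = np.arange(0, 0.05, 1 / fs)                # 50 ms of signal
fc, f_delta, fm = 1_000.0, 200.0, 50.0        # carrier, peak deviation, tone (Hz)

x_m = np.cos(2 * np.pi * fm * t)              # modulating signal, |x_m| <= 1
phase = 2 * np.pi * np.cumsum(fc + f_delta * x_m) / fs   # ~ 2*pi * integral of f(tau)
y = np.cos(phase)                             # FM signal (A_c = 1)

# The naive "add the signal to the carrier frequency" form is a different waveform:
y_naive = np.cos(2 * np.pi * (fc + f_delta * x_m) * t)
print(np.max(np.abs(y - y_naive)))            # clearly nonzero
```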
While most of the energy of the signal is contained within "fc" ± "f"Δ, it can be shown by Fourier analysis that a wider range of frequencies is required to precisely represent an FM signal. The frequency spectrum of an actual FM signal has components extending infinitely, although their amplitude decreases and higher-order components are often neglected in practical design problems.
Sinusoidal baseband signal.
Mathematically, a baseband modulating signal may be approximated by a sinusoidal continuous wave signal with a frequency "fm". This method is also named as single-tone modulation. The integral of such a signal is:
formula_9
In this case, the expression for y(t) above simplifies to:
formula_10
where the amplitude formula_11 of the modulating sinusoid is represented in the peak deviation formula_3 (see frequency deviation).
The harmonic distribution of a sine wave carrier modulated by such a sinusoidal signal can be represented with Bessel functions; this provides the basis for a mathematical understanding of frequency modulation in the frequency domain.
Modulation index.
As in other modulation systems, the modulation index indicates by how much the modulated variable varies around its unmodulated level. It relates to variations in the carrier frequency:
formula_12
where formula_13 is the highest frequency component present in the modulating signal "x""m"("t"), and formula_14 is the peak frequency-deviation – i.e. the maximum deviation of the "instantaneous frequency" from the carrier frequency. For a sine wave modulation, the modulation index is seen to be the ratio of the peak frequency deviation of the carrier wave to the frequency of the modulating sine wave.
If formula_15, the modulation is called narrowband FM (NFM), and its bandwidth is approximately formula_16. Sometimes modulation index formula_17 is considered NFM and other modulation indices are considered wideband FM (WFM or FM).
For digital modulation systems, for example, binary frequency shift keying (BFSK), where a binary signal modulates the carrier, the modulation index is given by:
formula_18
where formula_19 is the symbol period, and formula_20 is used as the highest frequency of the modulating binary waveform by convention, even though it would be more accurate to say it is the highest "fundamental" of the modulating binary waveform. In the case of digital modulation, the carrier formula_21 is never transmitted. Rather, one of two frequencies is transmitted, either formula_22 or formula_23, depending on the binary state 0 or 1 of the modulation signal.
If formula_24, the modulation is called "wideband FM" and its bandwidth is approximately formula_25. While wideband FM uses more bandwidth, it can improve the signal-to-noise ratio significantly; for example, doubling the value of formula_14, while keeping formula_26 constant, results in an eight-fold improvement in the signal-to-noise ratio. (Compare this with chirp spread spectrum, which uses extremely wide frequency deviations to achieve processing gains comparable to traditional, better-known spread-spectrum modes).
With a tone-modulated FM wave, if the modulation frequency is held constant and the modulation index is increased, the (non-negligible) bandwidth of the FM signal increases but the spacing between spectra remains the same; some spectral components decrease in strength as others increase. If the frequency deviation is held constant and the modulation frequency increased, the spacing between spectra increases.
Frequency modulation can be classified as narrowband if the change in the carrier frequency is about the same as the signal frequency, or as wideband if the change in the carrier frequency is much higher (modulation index > 1) than the signal frequency. For example, narrowband FM (NFM) is used for two-way radio systems such as Family Radio Service, in which the carrier is allowed to deviate only 2.5 kHz above and below the center frequency with speech signals of no more than 3.5 kHz bandwidth. Wideband FM is used for FM broadcasting, in which music and speech are transmitted with up to 75 kHz deviation from the center frequency and carry audio with up to a 20 kHz bandwidth and subcarriers up to 92 kHz.
Bessel functions.
For the case of a carrier modulated by a single sine wave, the resulting frequency spectrum can be calculated using Bessel functions of the first kind, as a function of the sideband number and the modulation index. The carrier and sideband amplitudes are illustrated for different modulation indices of FM signals. For particular values of the modulation index, the carrier amplitude becomes zero and all the signal power is in the sidebands.
Since the sidebands are on both sides of the carrier, their count is doubled, and then multiplied by the modulating frequency to find the bandwidth. For example, 3 kHz deviation modulated by a 2.2 kHz audio tone produces a modulation index of 1.36. Suppose that we limit ourselves to only those sidebands that have a relative amplitude of at least 0.01. Then, examining the chart shows this modulation index will produce three sidebands. These three sidebands, when doubled, give us (6 × 2.2 kHz) or a 13.2 kHz required bandwidth.
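The sideband count in this example can be checked numerically with SciPy's Bessel functions; the 0.01 relative-amplitude cutoff is the one assumed above:

```python
from scipy.special import jv   # Bessel function of the first kind, J_n

f_dev, f_mod = 3.0, 2.2        # kHz: deviation and modulating tone
h = f_dev / f_mod              # modulation index, about 1.36

n = 1
while abs(jv(n, h)) >= 0.01:   # count sidebands with relative amplitude >= 0.01
    n += 1
significant = n - 1

print(round(h, 2), significant, round(2 * significant * f_mod, 1))   # 1.36, 3, 13.2 (kHz)
```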
Carson's rule.
A rule of thumb, "Carson's rule" states that nearly all (≈98 percent) of the power of a frequency-modulated signal lies within a bandwidth formula_27 of:
formula_28
where formula_29, as defined above, is the peak deviation of the instantaneous frequency formula_30 from the center carrier frequency formula_31, formula_32 is the modulation index, which is the ratio of the frequency deviation to the highest frequency in the modulating signal, and formula_13 is the highest frequency in the modulating signal.
Carson's rule in this form applies only to sinusoidal modulating signals. For non-sinusoidal signals:
formula_33
where W is the highest frequency in the non-sinusoidal modulating signal and D is the deviation ratio, that is, the ratio of the frequency deviation to the highest frequency of the modulating signal.
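For example, with the broadcast-FM figures quoted above (75 kHz peak deviation and audio up to 20 kHz), Carson's rule gives a bandwidth of about 190 kHz; a trivial helper:

```python
def carson_bandwidth(peak_deviation_hz, highest_mod_freq_hz):
    """Approximate FM bandwidth by Carson's rule: 2*(deviation + highest frequency)."""
    return 2 * (peak_deviation_hz + highest_mod_freq_hz)

print(carson_bandwidth(75e3, 20e3))   # 190000.0 Hz
```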
Noise reduction.
FM provides improved signal-to-noise ratio (SNR), as compared for example with AM. Compared with an optimum AM scheme, FM typically has poorer SNR below a certain signal level called the noise threshold, but above a higher level – the full improvement or full quieting threshold – the SNR is much improved over AM. The improvement depends on modulation level and deviation. For typical voice communications channels, improvements are typically 5–15 dB. FM broadcasting using wider deviation can achieve even greater improvements. Additional techniques, such as pre-emphasis of higher audio frequencies with corresponding de-emphasis in the receiver, are generally used to improve overall SNR in FM circuits. Since FM signals have constant amplitude, FM receivers normally have limiters that remove AM noise, further improving SNR.
Implementation.
Modulation.
FM signals can be generated using either direct or indirect frequency modulation:
Demodulation.
Many FM detector circuits exist. A common method for recovering the information signal is through a Foster–Seeley discriminator or ratio detector. A phase-locked loop can be used as an FM demodulator. "Slope detection" demodulates an FM signal by using a tuned circuit which has its resonant frequency slightly offset from the carrier. As the frequency rises and falls the tuned circuit provides a changing amplitude of response, converting FM to AM. AM receivers may detect some FM transmissions by this means, although it does not provide an efficient means of detection for FM broadcasts. In Software-Defined Radio implementations the demodulation may be carried out by using the Hilbert transform (implemented as a filter) to recover the instantaneous phase, and thereafter differentiating this phase (using another filter) to recover the instantaneous frequency. Alternatively, a complex mixer followed by a bandpass filter may be used to translate the signal to baseband, after which the procedure proceeds as before.
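A rough sketch of the Hilbert-transform approach described above, with illustrative signal parameters not taken from this article:

```python
import numpy as np
from scipy.signal import hilbert

fs = 48_000
t = np.arange(0, 0.1, 1 / fs)
fc, f_delta, fm = 5_000.0, 500.0, 100.0

# Single-tone FM test signal, as in the "Sinusoidal baseband signal" section.
y = np.cos(2 * np.pi * fc * t + (f_delta / fm) * np.sin(2 * np.pi * fm * t))

analytic = hilbert(y)                           # analytic signal via the Hilbert transform
phase = np.unwrap(np.angle(analytic))           # instantaneous phase
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency, Hz

recovered = (inst_freq - fc) / f_delta          # should approximate cos(2*pi*fm*t)
reference = np.cos(2 * np.pi * fm * t[:-1])     # np.diff shortens the array by one
print(np.max(np.abs(recovered[200:-200] - reference[200:-200])))   # small residual error
```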
Applications.
Doppler effect.
When an echolocating bat approaches a target, its outgoing sounds return as echoes, which are Doppler-shifted upward in frequency. In certain species of bats, which produce constant frequency (CF) echolocation calls, the bats compensate for the Doppler shift by lowering their call frequency as they approach a target. This keeps the returning echo in the same frequency range of the normal echolocation call. This dynamic frequency modulation is called the Doppler Shift Compensation (DSC), and was discovered by Hans Schnitzler in 1968.
Magnetic tape storage.
FM is also used at intermediate frequencies by analog VCR systems (including VHS) to record the luminance (black and white) portions of the video signal. Commonly, the chrominance component is recorded as a conventional AM signal, using the higher-frequency FM signal as bias. FM is the only feasible method of recording the luminance ("black-and-white") component of video to (and retrieving video from) magnetic tape without distortion; video signals have a large range of frequency components – from a few hertz to several megahertz, too wide for equalizers to work with due to electronic noise below −60 dB. FM also keeps the tape at saturation level, acting as a form of noise reduction; a limiter can mask variations in playback output, and the FM capture effect removes print-through and pre-echo. A continuous pilot-tone, if added to the signal – as was done on V2000 and many Hi-band formats – can keep mechanical jitter under control and assist timebase correction.
These FM systems are unusual, in that they have a ratio of carrier to maximum modulation frequency of less than two; contrast this with FM audio broadcasting, where the ratio is around 10,000. Consider, for example, a 6-MHz carrier modulated at a 3.5-MHz rate; by Bessel analysis, the first sidebands are on 9.5 and 2.5 MHz and the second sidebands are on 13 MHz and −1 MHz. The result is a reversed-phase sideband on +1 MHz; on demodulation, this results in unwanted output at 6 – 1 = 5 MHz. The system must be designed so that this unwanted output is reduced to an acceptable level.
Sound.
FM is also used at audio frequencies to synthesize sound. This technique, known as FM synthesis, was popularized by early digital synthesizers and became a standard feature in several generations of personal computer sound cards.
Radio.
Edwin Howard Armstrong (1890–1954) was an American electrical engineer who invented wideband frequency modulation (FM) radio.
He patented the regenerative circuit in 1914, the superheterodyne receiver in 1918 and the super-regenerative circuit in 1922. Armstrong presented his paper, "A Method of Reducing Disturbances in Radio Signaling by a System of Frequency Modulation", (which first described FM radio) before the New York section of the Institute of Radio Engineers on November 6, 1935. The paper was published in 1936.
As the name implies, wideband FM (WFM) requires a wider signal bandwidth than amplitude modulation by an equivalent modulating signal; this also makes the signal more robust against noise and interference. Frequency modulation is also more robust against signal-amplitude-fading phenomena. As a result, FM was chosen as the modulation standard for high frequency, high fidelity radio transmission, hence the term "FM radio" (although for many years the BBC called it "VHF radio" because commercial FM broadcasting uses part of the VHF band – the FM broadcast band). FM receivers employ a special detector for FM signals and exhibit a phenomenon known as the "capture effect", in which the tuner "captures" the stronger of two stations on the same frequency while rejecting the other (compare this with a similar situation on an AM receiver, where both stations can be heard simultaneously). Frequency drift or a lack of selectivity may cause one station to be overtaken by another on an adjacent channel. Frequency drift was a problem in early (or inexpensive) receivers; inadequate selectivity may affect any tuner.
A wideband FM signal can also be used to carry a stereo signal; this is done with multiplexing and demultiplexing before and after the FM process. The FM modulation and demodulation process is identical in stereo and monaural processes.
FM is commonly used at VHF radio frequencies for high-fidelity broadcasts of music and speech. In broadcast services, where audio fidelity is important, wideband FM is generally used. Analog TV sound is also broadcast using FM. Narrowband FM is used for voice communications in commercial and amateur radio settings. In two-way radio, narrowband FM (NBFM) is used to conserve bandwidth for land mobile, marine mobile and other radio services.
A high-efficiency radio-frequency switching amplifier can be used to transmit FM signals (and other constant-amplitude signals). For a given signal strength (measured at the receiver antenna), switching amplifiers use less battery power and typically cost less than a linear amplifier. This gives FM another advantage over other modulation methods requiring linear amplifiers, such as AM and QAM.
There are reports that on October 5, 1924, Professor Mikhail A. Bonch-Bruevich, during a scientific and technical conversation in the Nizhny Novgorod Radio Laboratory, reported on his new method of telephony, based on a change in the period of oscillations. A demonstration of frequency modulation was carried out on a laboratory model.
Hearing assistive technology.
Frequency modulated systems are a widespread and commercially available assistive technology that make speech more understandable by improving the signal-to-noise ratio in the user's ear. They are also called "auditory trainers", a term which refers to any sound amplification system not classified as a hearing aid. They intensify signal levels from the source by 15 to 20 decibels. FM systems are used by hearing-impaired people as well as children whose listening is affected by disorders such as auditory processing disorder or ADHD. For people with sensorineural hearing loss, FM systems result in better speech perception than hearing aids. They can be coupled with behind-the-ear hearing aids to allow the user to alternate the setting. FM systems are more convenient and cost-effective than alternatives such as cochlear implants, but many users use FM systems infrequently due to their conspicuousness and need for recharging.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x_m(t)"
},
{
"math_id": 1,
"text": "x_c(t) = A_c \\cos (2 \\pi f_c t)\\,"
},
{
"math_id": 2,
"text": "\\begin{align} \n y(t) &= A_c \\cos\\left(2\\pi \\int_0^t f(\\tau) d\\tau\\right) \\\\\n &= A_c \\cos\\left(2\\pi \\int_0^t \\left[f_c + f_\\Delta x_m(\\tau)\\right] d\\tau\\right) \\\\ \n &= A_c \\cos\\left(2\\pi f_c t + 2\\pi f_\\Delta \\int_0^t x_m(\\tau) d\\tau\\right) \\\\ \n\\end{align}"
},
{
"math_id": 3,
"text": "f_\\Delta = K_f A_m"
},
{
"math_id": 4,
"text": "K_f"
},
{
"math_id": 5,
"text": "A_m"
},
{
"math_id": 6,
"text": "f(\\tau)\\,"
},
{
"math_id": 7,
"text": "f_\\Delta\\,"
},
{
"math_id": 8,
"text": "\\begin{align} \n y(t) &= A_c \\cos\\left(2\\pi \\left[f_c + f_\\Delta x_m(t)\\right] t \\right) \n\\end{align}"
},
{
"math_id": 9,
"text": "\\int_0^t x_m(\\tau)d\\tau = \\frac{\\sin\\left(2\\pi f_m t\\right)}{2\\pi f_m}\\,"
},
{
"math_id": 10,
"text": "y(t) = A_c \\cos\\left(2\\pi f_c t + \\frac{f_\\Delta}{f_m} \\sin\\left(2\\pi f_m t\\right)\\right)\\,"
},
{
"math_id": 11,
"text": "A_m\\,"
},
{
"math_id": 12,
"text": "h = \\frac{\\Delta{}f}{f_m} = \\frac{f_\\Delta \\left|x_m(t)\\right|}{f_m}"
},
{
"math_id": 13,
"text": "f_m\\,"
},
{
"math_id": 14,
"text": "\\Delta{}f\\,"
},
{
"math_id": 15,
"text": "h \\ll 1"
},
{
"math_id": 16,
"text": "2f_m\\,"
},
{
"math_id": 17,
"text": "h < 0.3"
},
{
"math_id": 18,
"text": "h = \\frac{\\Delta{}f}{f_m} = \\frac{\\Delta{}f}{\\frac{1}{2T_s}} = 2\\Delta{}fT_s \\ "
},
{
"math_id": 19,
"text": "T_s\\,"
},
{
"math_id": 20,
"text": "f_m = \\frac{1}{2T_s}\\,"
},
{
"math_id": 21,
"text": "f_c\\,"
},
{
"math_id": 22,
"text": "f_c + \\Delta f"
},
{
"math_id": 23,
"text": "f_c - \\Delta f"
},
{
"math_id": 24,
"text": "h \\gg 1"
},
{
"math_id": 25,
"text": "2f_\\Delta\\,"
},
{
"math_id": 26,
"text": "f_m"
},
{
"math_id": 27,
"text": " B_T\\, "
},
{
"math_id": 28,
"text": "B_T = 2\\left(\\Delta f + f_m\\right) = 2f_m(\\beta + 1)"
},
{
"math_id": 29,
"text": "\\Delta f\\,"
},
{
"math_id": 30,
"text": "f(t)\\,"
},
{
"math_id": 31,
"text": "f_c"
},
{
"math_id": 32,
"text": "\\beta"
},
{
"math_id": 33,
"text": "B_T = 2(\\Delta f + W) = 2W(D + 1)"
}
] | https://en.wikipedia.org/wiki?curid=10835 |
10836213 | Springer correspondence | In mathematics, the Springer representations are certain representations of the Weyl group "W" associated to unipotent conjugacy classes of a semisimple algebraic group "G". There is another parameter involved, a representation of a certain finite group "A"("u") canonically determined by the unipotent conjugacy class. To each pair ("u", φ) consisting of a unipotent element "u" of "G" and an irreducible representation "φ" of "A"("u"), one can associate either an irreducible representation of the Weyl group, or 0. The association
formula_0
depends only on the conjugacy class of "u" and generates a correspondence between the irreducible representations of the Weyl group and the pairs ("u", φ) modulo conjugation, called the Springer correspondence. It is known that every irreducible representation of "W" occurs exactly once in the correspondence, although φ may be a non-trivial representation. The Springer correspondence has been described explicitly in all cases by Lusztig, Spaltenstein and Shoji. The correspondence, along with its generalizations due to Lusztig, plays a key role in Lusztig's classification of the irreducible representations of finite groups of Lie type.
Construction.
Several approaches to Springer correspondence have been developed. T. A. Springer's original construction proceeded by defining an action of "W" on the top-dimensional l-adic cohomology groups of the algebraic variety "B""u" of the Borel subgroups of "G" containing a given unipotent element "u" of a semisimple algebraic group "G" over a finite field. This construction was generalized by Lusztig, who also eliminated some technical assumptions. Springer later gave a different construction, using the ordinary cohomology with rational coefficients and complex algebraic groups.
Kazhdan and Lusztig found a topological construction of Springer representations using the Steinberg variety and, allegedly, discovered Kazhdan–Lusztig polynomials in the process. Generalized Springer correspondence has been studied by Lusztig and Spaltenstein and by Lusztig in his work on character sheaves. Borho and MacPherson gave yet another construction of the Springer correspondence.
Example.
For the special linear group "SL""n", the unipotent conjugacy classes are parametrized by partitions of "n": if "u" is a unipotent element, the corresponding partition is given by the sizes of the Jordan blocks of "u". All groups "A"("u") are trivial.
The Weyl group "W" is the symmetric group "S""n" on "n" letters. Its irreducible representations over a field of characteristic zero are also parametrized by the partitions of "n".
The Springer correspondence in this case is a bijection, and in the standard parametrizations, it is given by transposition of the partitions (so that the trivial representation of the Weyl group corresponds to the regular unipotent class, and the sign representation corresponds to the identity element of "G").
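The transposition of partitions used here is easy to compute; a small illustrative helper, not part of the article:

```python
def transpose(partition):
    """Conjugate (transposed) partition: the column lengths of the Young diagram."""
    return [sum(1 for p in partition if p > i) for i in range(max(partition))]

print(transpose([3, 1]))   # [2, 1, 1]
print(transpose([4]))      # [1, 1, 1, 1]
```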
Applications.
Springer correspondence turned out to be closely related to the classification of primitive ideals in the universal enveloping algebra of a complex semisimple Lie algebra, both as a general principle and as a technical tool. Many important results are due to Anthony Joseph. A geometric approach was developed by Borho, Brylinski, and MacPherson.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " (u,\\phi) \\mapsto E_{u,\\phi} \\quad u\\in U(G), \\phi\\in\\widehat{A(u)}, E_{u,\\phi}\\in\\widehat{W} "
}
] | https://en.wikipedia.org/wiki?curid=10836213 |
10836468 | Erdős–Stone theorem | Theorem in extremal graph theory
In extremal graph theory, the Erdős–Stone theorem is an asymptotic result generalising Turán's theorem to bound the number of edges in an "H"-free graph for a non-complete graph "H". It is named after Paul Erdős and Arthur Stone, who proved it in 1946, and it has been described as the “fundamental theorem of extremal graph theory”.
Statement for Turán graphs.
The "extremal number" ex("n"; "H") is defined to be the maximum number of edges in a graph with "n" vertices not containing a subgraph isomorphic to "H"; see the Forbidden subgraph problem for more examples of problems involving the extremal number. Turán's theorem says that ex("n"; "K""r") = "t""r" − 1("n"), the number of edges of the Turán graph "T"("n", r − 1), and that the Turán graph is the unique such extremal graph. The Erdős–Stone theorem extends this result to "H" = "K""r"("t"), the complete "r"-partite graph with "t" vertices in each class, which is the graph obtained by taking "Kr" and replacing each vertex with "t" independent vertices:
formula_0
Statement for arbitrary non-bipartite graphs.
If "H" is an arbitrary graph whose chromatic number is "r" > 2, then "H" is contained in "K""r"("t") whenever "t" is at least as large as the largest color class in an "r"-coloring of "H", but it is not contained in the Turán graph "T"("n","r" − 1), as this graph and therefore each of its subgraphs can be colored with "r" − 1 colors.
It follows that the extremal number for "H" is at least as large as the number of edges in "T"("n","r" − 1), and at most equal to the extremal function for "K""r"("t"); that is,
formula_1
For bipartite graphs "H", however, the theorem does not give a tight bound on the extremal function. It is known that, when "H" is bipartite, ex("n"; "H") = "o"("n"2), and for general bipartite graphs little more is known. See Zarankiewicz problem for more on the extremal functions of bipartite graphs.
Turán density.
Another way of describing the Erdős–Stone theorem is using the Turán density of a graph formula_2, which is defined by formula_3. This determines the extremal number formula_4 up to an additive formula_5 error term. It can also be thought of as follows: given a sequence of graphs formula_6, each not containing formula_2, such that the number of vertices goes to infinity, the Turán density is the maximum possible limit of their edge densities. The Erdős–Stone theorem determines the Turán density for all graphs, showing that any graph formula_2 with chromatic number formula_7 has a Turán density of formula_8
Proof.
One proof of the Erdős–Stone theorem uses an extension of the Kővári–Sós–Turán theorem to hypergraphs, as well as the supersaturation theorem, by creating a corresponding hypergraph for every graph that is formula_9-free and showing that the hypergraph has some bounded number of edges. The Kővári–Sós–Turán theorem says, among other things, that the extremal number of formula_10, the complete bipartite graph with formula_11 vertices in each part, is at most formula_12 for a constant formula_13. This can be extended to hypergraphs: defining formula_14 to be the formula_15-partite formula_15-graph with formula_16 vertices in each part, one has formula_17 for some constant formula_13.
Now, for a given graph formula_18 with formula_19, and some graph formula_20 with formula_21 vertices that does not contain a subgraph isomorphic to formula_2, we define the formula_15-graph formula_22 with the same vertices as formula_20 and a hyperedge between vertices in formula_22 if they form a clique in formula_20. Note that if formula_22 contains a copy of formula_14, then the original graph formula_20 contains a copy of formula_2, as every pair of vertices in distinct parts must have an edge. Thus, formula_22 contains no copies of formula_14, and so it has formula_23 hyperedges, indicating that there are formula_23 copies of formula_24 in formula_20. By supersaturation, this means that the edge density of formula_20 is within formula_25 of the Turán density of formula_26, which is formula_27 by Turán's theorem; thus, the edge density is bounded above by formula_28.
On the other hand, we can achieve this bound by taking the Turán graph formula_29, which contains no copies of formula_9 but has formula_30 edges, showing that this value is the maximum and concluding the proof.
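The edge density of the Turán graph can be checked numerically; the helper below counts edges of the complete multipartite graph directly (an illustrative sketch, not from the article):

```python
def turan_edges(n, parts):
    """Number of edges of the Turan graph T(n, parts)."""
    sizes = [n // parts + (1 if i < n % parts else 0) for i in range(parts)]
    return n * (n - 1) // 2 - sum(s * (s - 1) // 2 for s in sizes)

n, r = 3000, 4
density = turan_edges(n, r - 1) / (n * (n - 1) / 2)
print(density, (r - 2) / (r - 1))   # both close to 2/3 for large n
```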
Quantitative results.
Several versions of the theorem have been proved that more precisely characterise the relation of "n", "r", "t" and the "o"(1) term. Define the notation "s""r",ε("n") (for 0 < ε < 1/(2("r" − 1))) to be the greatest "t" such that every graph of order "n" and size
formula_31
contains a "K""r"("t").
Erdős and Stone proved that
formula_32
for "n" sufficiently large. The correct order of "s""r",ε("n") in terms of "n" was found by Bollobás and Erdős: for any given "r" and ε there are constants "c"1("r", ε) and "c"2("r", ε) such that "c"1("r", ε) log "n" < "s""r",ε("n") < "c"2("r", ε) log "n". Chvátal and Szemerédi then determined the nature of the dependence on "r" and ε, up to a constant:
formula_33 for sufficiently large "n".
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mbox{ex}(n; K_r(t)) = \\left( \\frac{r-2}{r-1} + o(1) \\right){n\\choose2}."
},
{
"math_id": 1,
"text": "\\mbox{ex}(n; H) = \\left( \\frac{r-2}{r-1} + o(1) \\right){n\\choose2}."
},
{
"math_id": 2,
"text": "H"
},
{
"math_id": 3,
"text": "\\pi(H) = \\lim_{n \\to \\infty}\\frac{\\text{ex}(n; H)}{n \\choose 2}"
},
{
"math_id": 4,
"text": "\\text{ex}(n; H)"
},
{
"math_id": 5,
"text": "o(n^2)"
},
{
"math_id": 6,
"text": "G_1, G_2, \\dots"
},
{
"math_id": 7,
"text": "r > 2"
},
{
"math_id": 8,
"text": "\\pi(H) = \\frac{r - 2}{r - 1}."
},
{
"math_id": 9,
"text": "K_r(t)"
},
{
"math_id": 10,
"text": "K_2(t)"
},
{
"math_id": 11,
"text": "t"
},
{
"math_id": 12,
"text": "\\text{ex}(K_2(t); n) \\leq Cn^{2 - 1/t}"
},
{
"math_id": 13,
"text": "C"
},
{
"math_id": 14,
"text": "K_{s, \\dots, s}^{(r)}"
},
{
"math_id": 15,
"text": "r"
},
{
"math_id": 16,
"text": "s"
},
{
"math_id": 17,
"text": "\\text{ex}(K_{s, \\dots, s}^{(r)}, n) \\leq Cn^{r - s^{1-r}}"
},
{
"math_id": 18,
"text": "H = K_r(t)"
},
{
"math_id": 19,
"text": "r > 1, s \\geq 1"
},
{
"math_id": 20,
"text": "G"
},
{
"math_id": 21,
"text": "n"
},
{
"math_id": 22,
"text": "F"
},
{
"math_id": 23,
"text": "o(n^r)"
},
{
"math_id": 24,
"text": "K_r"
},
{
"math_id": 25,
"text": "o(1)"
},
{
"math_id": 26,
"text": "K_{r}"
},
{
"math_id": 27,
"text": "\\frac{r - 2}{r - 1}"
},
{
"math_id": 28,
"text": "\\frac{r - 2}{r - 1}+ o(1)"
},
{
"math_id": 29,
"text": "T(n, r-1)"
},
{
"math_id": 30,
"text": "\\left( \\frac{r - 2}{r - 1} - o(1) \\right){n \\choose 2}"
},
{
"math_id": 31,
"text": "\\left( \\frac{r-2}{2(r-1)} + \\varepsilon \\right)n^2"
},
{
"math_id": 32,
"text": "s_{r,\\varepsilon}(n) \\geq \\left(\\underbrace{\\log\\cdots\\log}_{r-1} \\, n\\right)^{1/2}"
},
{
"math_id": 33,
"text": "\\frac{1}{500\\log(1/\\varepsilon)}\\log n < s_{r,\\varepsilon}(n) < \\frac{5}{\\log(1/\\varepsilon)}\\log n"
}
] | https://en.wikipedia.org/wiki?curid=10836468 |
10836723 | Optimal rotation age | In forestry, the optimal rotation age is the growth period required to derive maximum value from a stand of timber. The calculation of this period is specific to each stand and to the economic and sustainability goals of the harvester.
Economically optimum rotation age.
In forestry rotation analysis, economically optimum rotation can be defined as “that age of rotation when the harvest of stumpage will generate the maximum revenue or economic yield”. In an economically optimum forest rotation analysis, the decision regarding the optimum rotation age is made by calculating the maximum net present value. It can be shown as follows:
Since the benefit is generated over multiple years, it is necessary to calculate the particular harvesting age that will generate the maximum revenue. The age of maximum revenue is found by discounting future expected benefits, which gives the present value of revenue and of costs. From these, the net present value (NPV) of profit is calculated.
This can be done as follows:
NPV = PVR − PVC, where PVR is the present value of revenue and PVC is the present value of cost. Rotation will be undertaken at the age where the NPV is maximum.
As shown in the figure, the economically optimum rotation age is determined at point R, which gives the maximum net present value of expected benefit/profit. Rotation at any age before or after R will cause the expected benefit/profit to fall.
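A minimal numerical sketch of this maximisation; the growth curve, timber price, planting cost and discount rate below are made-up illustrative figures:

```python
import math

def volume(t):
    """Illustrative logistic growth of merchantable volume (cubic metres)."""
    return 500.0 / (1.0 + 60.0 * math.exp(-0.15 * t))

def npv(age, price=50.0, planting_cost=1000.0, rate=0.05):
    """Single-rotation NPV = PVR - PVC for a harvest at `age` years."""
    pvr = price * volume(age) / (1.0 + rate) ** age   # discounted revenue
    pvc = planting_cost                               # planting cost paid at time zero
    return pvr - pvc

best = max(range(1, 101), key=npv)
print(best, round(npv(best), 1))   # rotation age R with the largest NPV
```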
Biologically optimum rotation age.
Biologists use the concept of maximum sustainable yield (MSY) or mean annual increment (MAI) to determine the optimal harvest age of timber. MSY can be defined as “the largest yield that can be harvested which does not deplete the resource (timber) irreparably and which leaves the resource in good shape for future uses”. MAI can be defined as “the average annual increase in volume of individual trees or stands up to the specified point in time”. The MAI changes throughout the different growth phases in a tree’s life; it is highest in the middle years and then decreases with age. The point at which the MAI peaks is commonly used to identify the biological maturity of the tree and its readiness for harvesting.
As the age of the forest increases, the volume initially grows at a slow rate; after a certain time period the volume begins to grow rapidly and reaches a maximum, beyond which the growth in volume begins to decline. This is directly related to the MAI: the MAI increases slowly at first, then increases at a faster rate, and reaches its maximum (point M) during the middle years (age A), where the gain in volume levels off; beyond point M, or after the tree reaches age A, the MAI begins to decrease.
Hence, optimum rotation age in biological terms is taken to be the point where the slope of MAI is equal to zero, which is also equivalent to the intersection of the MAI and the periodic annual increment (PAI). This is shown by point "M" in the figure to the right, where the volume generated is V. Beyond the age A, the MAI starts to decline.
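The rule "harvest where the MAI peaks, equivalently where MAI meets PAI" can be illustrated with a made-up yield curve; none of the numbers below come from the article:

```python
import math

def volume(t):
    return 10.0 * t ** 2 * math.exp(-0.05 * t)   # illustrative volume at age t

def mai(t):
    return volume(t) / t                          # mean annual increment

def pai(t, dt=1e-5):
    return (volume(t + dt) - volume(t - dt)) / (2 * dt)   # periodic annual increment

ages = [a / 10 for a in range(1, 1000)]
best = max(ages, key=mai)                          # age at which MAI peaks
print(best, round(abs(mai(best) - pai(best)), 3))  # 20.0, and MAI equals PAI there
```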
Non-timber forest use and effect on rotation.
So far our analysis has only calculated the optimum rotation age in terms of timber production, but as we incorporate the various other non-timber forest products (NTFPs) derived from the forest, the optimum rotation age can change significantly. For NTFPs that rely on standing timber/trees, the optimum rotation age shifts upwards, i.e. the rotation age increases. This can be illustrated with the help of the following diagram.
Here, we see that the original rotation age is estimated to be R1, but as we incorporate the value of NTFPs that rely on standing timber, the expected future benefit increases, leading to an increase in the NPV from P1 to P2. This increase in the NPV causes the rotation age to increase, as it becomes more beneficial to keep the trees standing for longer and harvest at R2, compared to harvesting at the pre-determined age R1.
Factors that forces harvesting age to change.
There are many factors that influence the harvesting age. Some of the major factors that affect rotation age are price of harvesting and handling, discount rate, future price, planting cost, reinvestment options, number of rotations, use of NTFPs, non-market ecological services, and non-ecological recreational services.
Mathematical model.
Suppose that the growth rate of a stand of trees satisfies the equation formula_0, where formula_1 represents the volume of merchantable timber. This modification of the logistic growth model yields the solution formula_2. Now suppose that we are interested in solving the optimal control problem formula_3, where formula_4 is the amount of timber harvested. Assume that the final time formula_5 is fixed. This leads to the Hamiltonian formula_6, so that formula_7. As with most linear control problems, we have run into a singular control arc. The adjoint equation is formula_8. Solving for the singular solution formula_9, we find that formula_10. Using the governing differential equation in the problem statement, we find the singular control formula_11 to be formula_12. According to the maximum principle, the optimal harvesting rate should be formula_13. To find formula_14, we have to find the time when formula_15, that is, formula_16. For example, if formula_17, then the switching time is given by formula_18.
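A numerical sketch of the switching time: the parameters a, b, K, V0 and the discount rate are illustrative choices, and the bisection simply looks for the age at which the unharvested stock reaches the singular arc:

```python
a, b, K, V0 = 0.3, 0.02, 1000.0, 50.0

def V_unharvested(t):
    """Growth with h = 0, from the closed-form solution quoted above."""
    return K / (1.0 + ((K - V0) / V0) * (1.0 + b * t) ** (-a / b))

def V_singular(t, delta):
    """Singular arc V*(t) = (K/2) * [1 - (delta/a)(1 + b t)]."""
    return (K / 2.0) * (1.0 - (delta / a) * (1.0 + b * t))

def switching_time(delta):
    """Bisection for tau with V_unharvested(tau) = V*(tau)."""
    lo, hi = 0.0, 200.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if V_unharvested(mid) < V_singular(mid, delta):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(switching_time(0.05), 2))                       # tau for a 5% discount rate
tau_closed = (((K - V0) / V0) ** (b / a) - 1.0) / b
print(round(switching_time(0.0), 2), round(tau_closed, 2))  # delta = 0 matches the closed form
```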
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{dV\\over{dt}} = {a\\over{1+bt}}V\\left(1-{V\\over{K}}\\right), \\quad V(0) = V_{0}"
},
{
"math_id": 1,
"text": "V(t)"
},
{
"math_id": 2,
"text": "V(t) = {K\\over{1+{K-V_{0}\\over{V_{0}}}(1+bt)^{-a/b}}}"
},
{
"math_id": 3,
"text": "\\begin{aligned}\n& & \\max_{h(t)} \\; \\int_{0}^{T} e^{-\\delta t}h(t) \\; dt \\\\\n\\text{subject to} & & {dV\\over{dt}} = {a\\over{1+bt}}V\\left(1-{V\\over{K}}\\right) - h \\\\\n& & h \\in [0,\\infty], \\; V(t) \\geq 0\n\\end{aligned}"
},
{
"math_id": 4,
"text": "h(t)"
},
{
"math_id": 5,
"text": "T"
},
{
"math_id": 6,
"text": "\\mathcal{H} = h + \\lambda \\left[ {a\\over{1+bt}}V\\left(1-{V\\over{K}}\\right) - h \\right] \\implies {\\partial\\mathcal{H}\\over{\\partial h}} = 1-\\lambda"
},
{
"math_id": 7,
"text": "\\lambda^{*} = 1"
},
{
"math_id": 8,
"text": "\\begin{aligned}\n\\dot{\\lambda} - \\delta \\lambda &= -{\\partial\\mathcal{H}\\over{\\partial V}} \\\\\n&= -\\lambda {a\\over{1+bt}}\\left(1-{2V\\over{K}}\\right)\n\\end{aligned}"
},
{
"math_id": 9,
"text": "V^{*}"
},
{
"math_id": 10,
"text": "V^{*} = {K\\over{2}} \\left[ 1-{\\delta\\over{a}}(1+bt) \\right]"
},
{
"math_id": 11,
"text": "h^{*}"
},
{
"math_id": 12,
"text": "\\begin{aligned}\nh^{*} &= {a\\over{1+bt}}V^{*}\\left(1-{V^{*}\\over{K}}\\right) - \\dot{V}^{*} \\\\\n&= {K\\over{4}} \\left[{a\\over{1+bt}} - {\\delta^{2}\\over{a}}(1+bt)\\right]\n\\end{aligned}"
},
{
"math_id": 13,
"text": "h(t) = \\begin{cases} 0, \\quad &t\\in(0,\\tau) \\\\ h^{*}, \\quad &t\\in(\\tau,T) \\\\ \\infty, \\quad &t=T \\end{cases}"
},
{
"math_id": 14,
"text": "\\tau"
},
{
"math_id": 15,
"text": "V = V^{*}"
},
{
"math_id": 16,
"text": "{K\\over{1+{K-V_{0}\\over{V_{0}}}(1+b\\tau)^{-a/b}}} = {K\\over{2}} \\left[ 1-{\\delta\\over{a}}(1+b\\tau) \\right]"
},
{
"math_id": 17,
"text": "\\delta = 0"
},
{
"math_id": 18,
"text": "\\tau = {1\\over{b}}\\left[ \\left( {K-V_{0}\\over{V_{0}}} \\right)^{b/a} - 1 \\right]"
}
] | https://en.wikipedia.org/wiki?curid=10836723 |
1083721 | Cabibbo–Kobayashi–Maskawa matrix | Unitary matrix containing information on the weak interaction
In the Standard Model of particle physics, the Cabibbo–Kobayashi–Maskawa matrix, CKM matrix, quark mixing matrix, or KM matrix is a unitary matrix which contains information on the strength of the flavour-changing weak interaction. Technically, it specifies the mismatch of quantum states of quarks when they propagate freely and when they take part in the weak interactions. It is important in the understanding of CP violation. This matrix was introduced for three generations of quarks by Makoto Kobayashi and Toshihide Maskawa, adding one generation to the matrix previously introduced by Nicola Cabibbo. This matrix is also an extension of the GIM mechanism, which only includes two of the three current families of quarks.
The matrix.
Predecessor – the Cabibbo matrix.
In 1963, Nicola Cabibbo introduced the Cabibbo angle (θc) to preserve the universality of the weak interaction.
Cabibbo was inspired by previous work by Murray Gell-Mann and Maurice Lévy,
on the effectively rotated nonstrange and strange vector and axial weak currents, which he references.
In light of current concepts (quarks had not yet been proposed), the Cabibbo angle is related to the relative probability that down and strange quarks decay into up quarks ( |Vud|2 and |Vus|2 , respectively). In particle physics jargon, the object that couples to the up quark via charged-current weak interaction is a superposition of down-type quarks, here denoted by d′.
Mathematically this is:
formula_0
or using the Cabibbo angle:
formula_1
Using the currently accepted values for |Vud| and |Vus| (see below), the Cabibbo angle can be calculated using
formula_2
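The same arithmetic in a two-line check, using the magnitudes quoted above:

```python
import math

V_ud, V_us = 0.97427, 0.22534
print(round(math.degrees(math.atan(V_us / V_ud)), 2))   # 13.02 (degrees)
```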
When the charm quark was discovered in 1974, it was noticed that the down and strange quark could decay into either the up or charm quark, leading to two sets of equations:
formula_3
formula_4
or using the Cabibbo angle:
formula_5
formula_6
This can also be written in matrix notation as:
formula_7
or using the Cabibbo angle
formula_8
where the various |Vij|2 represent the probability that the quark of flavor j decays into a quark of flavor i. This 2×2 rotation matrix is called the "Cabibbo matrix", and was subsequently expanded to the 3×3 CKM matrix.
CKM matrix.
In 1973, observing that CP-violation could not be explained in a four-quark model, Kobayashi and Maskawa generalized the Cabibbo matrix into the Cabibbo–Kobayashi–Maskawa matrix (or CKM matrix) to keep track of the weak decays of three generations of quarks:
formula_9
On the left are the weak interaction doublet partners of down-type quarks, and on the right is the CKM matrix, along with a vector of mass eigenstates of down-type quarks. The CKM matrix describes the probability of a transition from one flavour j quark to another flavour i quark. These transitions are proportional to |Vij|2.
As of 2023, the best determination of the individual magnitudes of the CKM matrix elements was:
formula_10
Using those values, one can check the unitarity of the CKM matrix. In particular, we find that the first-row matrix elements give: formula_11
The difference from the theoretical value of 1 poses a tension of 2.2 standard deviations. Non-unitarity would be an indication of physics beyond the Standard Model.
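The quoted first-row sum follows directly from the magnitudes listed above:

```python
V_ud, V_us, V_ub = 0.97373, 0.2243, 0.00382
print(round(V_ud ** 2 + V_us ** 2 + V_ub ** 2, 4))   # 0.9985, about 2 sigma below 1
```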
The choice of usage of down-type quarks in the definition is a convention, and does not represent a physically preferred asymmetry between up-type and down-type quarks. Other conventions are equally valid: The mass eigenstates u, c, and t of the up-type quarks can equivalently define the matrix in terms of "their" weak interaction partners u′, c′, and t′. Since the CKM matrix is unitary, its inverse is the same as its conjugate transpose, which the alternate choices use; it appears as the same matrix, in a slightly altered form.
General case construction.
To generalize the matrix, count the number of physically important parameters in this matrix V which appear in experiments. If there are N generations of quarks (2N flavours), then an N × N unitary matrix (that is, a matrix V such that V†V = I, where V† is the conjugate transpose of V and I is the identity matrix) requires N2 real parameters to be specified. Of these, 2N − 1 can be absorbed into the phases of the quark fields, leaving (N − 1)2 physical parameters: N(N − 1)/2 quark mixing angles and (N − 1)(N − 2)/2 CP-violating complex phases.
N = 2.
For the case N = 2, there is only one parameter, which is a mixing angle between two generations of quarks. Historically, this was the first version of the CKM matrix, when only two generations were known. It is called the Cabibbo angle after its inventor Nicola Cabibbo.
N = 3.
For the Standard Model case (N = 3), there are three mixing angles and one CP-violating complex phase.
Observations and predictions.
Cabibbo's idea originated from a need to explain two observed phenomena: the transitions u ↔ d, e ↔ νe, and μ ↔ νμ had similar amplitudes, while the transitions with a change in strangeness had amplitudes only about a quarter of those with no change in strangeness.
Cabibbo's solution consisted of postulating "weak universality" (see below) to resolve the first issue, along with a mixing angle "θ"c, now called the "Cabibbo angle", between the d and s quarks to resolve the second.
For two generations of quarks, there can be no CP violating phases, as shown by the counting of the previous section. Since CP violations "had" already been seen in 1964, in neutral kaon decays, the Standard Model that emerged soon after clearly indicated the existence of a third generation of quarks, as Kobayashi and Maskawa pointed out in 1973. The discovery of the bottom quark at Fermilab (by Leon Lederman's group) in 1976 therefore immediately started off the search for the top quark, the missing third-generation quark.
Note, however, that the specific values that the angles take on are "not" a prediction of the standard model: They are free parameters. At present, there is no generally-accepted theory that explains why the angles should have the values that are measured in experiments.
Weak universality.
The constraints of unitarity of the CKM-matrix on the diagonal terms can be written as
formula_12
separately for each generation j. This implies that the sum of all couplings of any "one" of the up-type quarks to "all" the down-type quarks is the same for all generations. This relation is called "weak universality" and was first pointed out by Nicola Cabibbo in 1967. Theoretically it is a consequence of the fact that all SU(2) doublets couple with the same strength to the vector bosons of weak interactions. It has been subjected to continuing experimental tests.
The unitarity triangles.
The remaining constraints of unitarity of the CKM-matrix can be written in the form
formula_13
For any fixed and different i and j, this is a constraint on three complex numbers, one for each k, which says that these numbers form the sides of a triangle in the complex plane. There are six choices of i and j (three independent), and hence six such triangles, each of which is called a "unitary triangle". Their shapes can be very different, but they all have the same area, which can be related to the CP violating phase. The area vanishes for the specific parameters in the Standard Model for which there would be no CP violation. The orientation of the triangles depend on the phases of the quark fields.
A popular quantity amounting to twice the area of the unitarity triangle is the Jarlskog invariant (introduced by Cecilia Jarlskog in 1985),
formula_14
For Greek indices denoting up quarks and Latin ones down quarks, the 4-tensor formula_15 is doubly antisymmetric,
formula_16
Up to antisymmetry, it only has 9 = 3 × 3 non-vanishing components, which, remarkably, from the unitarity of V, can be shown to be "all identical in magnitude", that is,
formula_17
so that
formula_18
Since the three sides of the triangles are open to direct experiment, as are the three angles, a class of tests of the Standard Model is to check that the triangle closes. This is the purpose of a modern series of experiments under way at the Japanese BELLE and the American BaBar experiments, as well as at LHCb in CERN, Switzerland.
Parameterizations.
Four independent parameters are required to fully define the CKM matrix. Many parameterizations have been proposed, and three of the most common ones are shown below.
KM parameters.
The original parameterization of Kobayashi and Maskawa used three angles ( θ1, θ2, θ3 ) and a CP-violating phase angle ( δ ). θ1 is the Cabibbo angle. For brevity, the cosines and sines of the angles θk are denoted ck and sk, for k = 1, 2, 3, respectively.
formula_19
"Standard" parameters.
A "standard" parameterization of the CKM matrix uses three Euler angles ( θ12, θ23, θ13 ) and one CP-violating phase ( δ13 ). θ12 is the Cabibbo angle. Couplings between quark generations j and k vanish if . Cosines and sines of the angles are denoted cjk and sjk, respectively.
formula_20
The 2008 values for the standard parameters were:
θ12 = , θ13 = , θ23 =
and
δ13 = radians = .
Wolfenstein parameters.
A third parameterization of the CKM matrix was introduced by Lincoln Wolfenstein with the four parameters λ, A, ρ, and η, which would all 'vanish' (would be zero) if there were no coupling. The four Wolfenstein parameters have the property that all are of order 1 and are related to the 'standard' parameterization by λ = s12, Aλ2 = s23, and Aλ3(ρ − iη) = s13e−iδ13.
Although the Wolfenstein parameterization of the CKM matrix can be as exact as desired when carried to high order, it is mainly used for generating convenient approximations to the standard parameterization. The approximation to order λ3, good to better than 0.3% accuracy, is:
formula_21
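A short numerical check that the order-λ3 matrix above is unitary only up to terms of order λ4; the parameter values are round illustrative numbers, not the measured values quoted elsewhere in this article:

```python
import numpy as np

lam, A, rho, eta = 0.22, 0.82, 0.14, 0.35   # illustrative, order-1 values
V = np.array([
    [1 - lam**2 / 2,                    lam,             A * lam**3 * (rho - 1j * eta)],
    [-lam,                              1 - lam**2 / 2,  A * lam**2],
    [A * lam**3 * (1 - rho - 1j * eta), -A * lam**2,     1],
])

deviation = np.abs(V @ V.conj().T - np.eye(3)).max()
print(deviation, lam**4)   # the deviation from unitarity is of order lambda**4
```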
Rates of CP violation correspond to the parameters ρ and η.
Using the values of the previous section for the CKM matrix, as of 2008 the best determination of the Wolfenstein parameter values is:
λ = , A = , ρ = , and η = .
Nobel Prize.
In 2008, Kobayashi and Maskawa shared one half of the Nobel Prize in Physics "for the discovery of the origin of the broken symmetry which predicts the existence of at least three families of quarks in nature". Some physicists were reported to harbor bitter feelings about the fact that the Nobel Prize committee failed to reward the work of Cabibbo, whose prior work was closely related to that of Kobayashi and Maskawa. Asked for a reaction on the prize, Cabibbo preferred to give no comment.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " d' = V_\\mathrm{ud} \\; d ~~ + ~~ V_\\mathrm{us} \\; s ~,"
},
{
"math_id": 1,
"text": " d' = \\cos \\theta_\\mathrm{c} \\; d ~~ + ~~ \\sin \\theta_\\mathrm{c} \\; s ~."
},
{
"math_id": 2,
"text": " \\tan\\theta_\\mathrm{c} = \\frac{\\, |V_\\mathrm{us}| \\,}{|V_\\mathrm{ud}|} = \\frac{0.22534}{0.97427} \\quad \\Rightarrow \\quad \\theta_\\mathrm{c}= ~13.02^\\circ ~."
},
{
"math_id": 3,
"text": " d' = V_\\mathrm{ud} \\; d ~~ + ~~ V_\\mathrm{us} \\; s ~,"
},
{
"math_id": 4,
"text": " s' = V_\\mathrm{cd} \\; d ~~ + ~~ V_\\mathrm{cs} \\; s ~;"
},
{
"math_id": 5,
"text": " d' = ~~~ \\cos{\\theta_\\mathrm{c}} \\; d ~~+~~ \\sin{\\theta_\\mathrm{c}} \\; s ~,"
},
{
"math_id": 6,
"text": " s' = - \\sin{\\theta_\\mathrm{c}} \\; d ~~+~~ \\cos{\\theta_\\mathrm{c}} \\; s ~."
},
{
"math_id": 7,
"text": "\n\\begin{bmatrix} d' \\\\ s' \\end{bmatrix} =\n\\begin{bmatrix} V_\\mathrm{ud} & V_\\mathrm{us} \\\\ V_{cd} & V_{cs} \\\\ \\end{bmatrix}\n\\begin{bmatrix} d \\\\ s \\end{bmatrix} ~,\n"
},
{
"math_id": 8,
"text": "\n\\begin{bmatrix} d' \\\\ s' \\end{bmatrix} =\n\\begin{bmatrix} ~~\\cos{ \\theta_\\mathrm{c} } & \\sin{ \\theta_\\mathrm{c} } \\\\ -\\sin{\\theta_\\mathrm{c}} & \\cos{\\theta_\\mathrm{c}}\\\\ \\end{bmatrix}\n\\begin{bmatrix} d \\\\ s \\end{bmatrix}~,\n"
},
{
"math_id": 9,
"text": "\\begin{bmatrix} d' \\\\ s' \\\\ b' \\end{bmatrix} = \\begin{bmatrix} V_\\mathrm{ud} & V_\\mathrm{us} & V_\\mathrm{ub} \\\\ V_\\mathrm{cd} & V_\\mathrm{cs} & V_\\mathrm{cb} \\\\ V_\\mathrm{td} & V_\\mathrm{ts} & V_\\mathrm{tb} \\end{bmatrix} \\begin{bmatrix} d \\\\ s \\\\ b \\end{bmatrix}~."
},
{
"math_id": 10,
"text": "\n\\begin{bmatrix}\n|V_{ud}| & |V_{us}| & |V_{ub}| \\\\\n|V_{cd}| & |V_{cs}| & |V_{cb}| \\\\\n|V_{td}| & |V_{ts}| & |V_{tb}|\n\\end{bmatrix} = \\begin{bmatrix}\n0.97373 \\pm 0.00031 & 0.2243 \\pm 0.0008 & 0.00382 \\pm 0.00020 \\\\\n0.221 \\pm 0.004 & 0.975 \\pm 0.006 & 0.0408 \\pm 0.0014 \\\\\n0.0086 \\pm 0.0002 & 0.0415 \\pm 0.0009 & 1.014 \\pm 0.029\n\\end{bmatrix}.\n"
},
{
"math_id": 11,
"text": " |V_\\mathrm{ud}|^2 + |V_\\mathrm{us}|^2 + |V_\\mathrm{ub}|^2 = 0.9985 \\pm 0.0007~;"
},
{
"math_id": 12,
"text": "\\sum_k |V_{jk}|^2 = \\sum_k |V_{kj}|^2 = 1"
},
{
"math_id": 13,
"text": "\\sum_k V_{ik}V^*_{jk} = 0 ~."
},
{
"math_id": 14,
"text": " J = c_{12}c_{13}^2 c_{23}s_{12}s_{13}s_{23}\\sin \\delta \\approx 3\\cdot10^{-5} ~."
},
{
"math_id": 15,
"text": "\\;(\\alpha,\\beta;i,j)\\equiv \\operatorname{Im} (V_{\\alpha i} V_{\\beta j} V^*_{\\alpha j} V_{\\beta i}^{*}) \\;"
},
{
"math_id": 16,
"text": "(\\beta,\\alpha;i,j) = -(\\alpha,\\beta;i,j)=(\\alpha,\\beta;j,i) ~."
},
{
"math_id": 17,
"text": "\n(\\alpha,\\beta;i,j)= J ~ \\begin{bmatrix} \\;~~0 & \\;~~1 & -1 \\\\ -1 & \\;~~0 & \\;~~1 \\\\ \\;~~1 & -1 & \\;~~0 \\end{bmatrix}_{\\alpha \\beta} \\otimes \\begin{bmatrix} \\;~~0 & \\;~~1 & -1 \\\\ -1 & \\;~~0 & \\;~~1 \\\\ \\;~~1 & -1 & \\;~~0 \\end{bmatrix}_{ij} \\;,\n"
},
{
"math_id": 18,
"text": "J = (u,c;s,b) = (u,c;d,s) = (u,c;b,d) = (c,t;s,b) = (c,t;d,s) = (c,t;b,d)\n = (t,u;s,b) = (t,u;b,d) = (t,u;d,s) ~."
},
{
"math_id": 19,
"text": "\\begin{bmatrix} c_1 & -s_1 c_3 & -s_1 s_3 \\\\\n s_1 c_2 & c_1 c_2 c_3 - s_2 s_3 e^{i\\delta} & c_1 c_2 s_3 + s_2 c_3 e^{i\\delta}\\\\\n s_1 s_2 & c_1 s_2 c_3 + c_2 s_3 e^{i\\delta} & c_1 s_2 s_3 - c_2 c_3 e^{i\\delta} \\end{bmatrix}. "
},
{
"math_id": 20,
"text": " \\begin{align} & \\begin{bmatrix} 1 & 0 & 0 \\\\ 0 & c_{23} & s_{23} \\\\ 0 & -s_{23} & c_{23} \\end{bmatrix}\n \\begin{bmatrix} c_{13} & 0 & s_{13}e^{-i\\delta_{13}} \\\\ 0 & 1 & 0 \\\\ -s_{13}e^{i\\delta_{13}} & 0 & c_{13} \\end{bmatrix}\n \\begin{bmatrix} c_{12} & s_{12} & 0 \\\\ -s_{12} & c_{12} & 0 \\\\ 0 & 0 & 1 \\end{bmatrix} \\\\\n & = \\begin{bmatrix} c_{12}c_{13} & s_{12} c_{13} & s_{13}e^{-i\\delta_{13}} \\\\\n -s_{12}c_{23} - c_{12}s_{23}s_{13}e^{i\\delta_{13}} & c_{12}c_{23} - s_{12}s_{23}s_{13}e^{i\\delta_{13}} & s_{23}c_{13}\\\\\n s_{12}s_{23} - c_{12}c_{23}s_{13}e^{i\\delta_{13}} & -c_{12}s_{23} - s_{12}c_{23}s_{13}e^{i\\delta_{13}} & c_{23}c_{13} \\end{bmatrix}. \\end{align} "
},
{
"math_id": 21,
"text": "\\begin{bmatrix} 1 - \\tfrac{1}{2}\\lambda^2 & \\lambda & A\\lambda^3(\\rho-i\\eta) \\\\\n -\\lambda & 1-\\tfrac{1}{2}\\lambda^2 & A\\lambda^2 \\\\\n A\\lambda^3(1-\\rho-i\\eta) & -A\\lambda^2 & 1 \\end{bmatrix} + O(\\lambda^4) ~. "
}
] | https://en.wikipedia.org/wiki?curid=1083721 |
1083873 | CUSIP | Code identifying a North American financial security
A CUSIP () is a nine-character numeric or alphanumeric code that uniquely identifies a North American financial security for the purposes of facilitating clearing and settlement of trades. All CUSIP identifiers are fungible, which means that a unique CUSIP identifier for each individual security stays the same, regardless of the exchange where the shares were purchased or venue on which the shares were traded. CUSIP was adopted as an American national standard by the Accredited Standards Committee X9 and is designated ANSI X9.6. CUSIP was re-approved as an ANSI standard in December 2020. The acronym derives from Committee on Uniform Security Identification Procedures.
The CUSIP system is owned by the American Bankers Association (ABA) and is operated by FactSet Research Systems Inc. The operating body, CUSIP Global Services (CGS), also serves as the national numbering agency (NNA) for North America, and the CUSIP serves as the National Securities Identification Number (NSIN) for products issued from both the United States and Canada. In its role as the NNA, CUSIP Global Services (CGS) also assigns all US-based ISINs.
History.
The origins of the CUSIP system go back to 1964, when the financial markets were dealing with what was known as the securities settlement paper crunch on Wall Street. At that time, increased trading volumes of equity securities, which were settled by the exchange of paper stock certificates, caused a backlog in clearing and settlement activities. In fact, stock markets had to close early on some days just to allow back-office processing to keep pace. To address the challenge, the New York Clearing House Association approached the ABA to develop a more efficient system for the trading, clearing, and settlement of securities trades by uniquely identifying all current and future securities. Their work was unveiled four years later, in December 1968, with the first publication of the CUSIP directory.
Over the ensuing years, a growing number of market authorities and regulators came to recognize the value of the CUSIP system and embrace its usage. With sustained reinvestment from the operators of the CUSIP system, CUSIP coverage has steadily grown over the years. It now includes the following asset classes: government, municipal, and international securities (through the CUSIP International Numbering System, or CINS); initial public offerings (IPOs); preferred stock; funds; certificates of deposit; syndicated loans; and US and Canadian listed options. The CGS database contains issuer and issue-level identifiers, plus standardized descriptive data, for more than 62 million financial instruments and entities. CGS is also the designated numbering agency responsible for assigning the ISIN in over 35 countries.
CUSIP operates with insight and guidance from the industry-appointed CUSIP Board of Trustees, made up of senior-level operations and data executives from major banks and other financial institutions.
Antitrust review.
In November 2009, the European Commission charged S&P Capital IQ with abusing its position as the sole provider of ISIN codes for U.S. securities by requiring European financial firms (in the European Economic Area) and data vendors to pay licensing fees for their use. The European Commission described the behavior as unfair pricing, noting that in cases such as clearing or regulatory compliance, there are no acceptable alternatives.
In its formal statement of objections, the European Commission alleged that S&P Capital IQ was abusing its position by requiring financial services companies and information service providers to pay license fees for the use of U.S. ISINs. The European Commission claimed that comparable agencies elsewhere in the world either do not charge fees at all, or do so on the basis of distribution cost, rather than usage.
While strongly disagreeing with the European Commission, CGS/S&P Capital IQ offered to create a low-cost, low-value feed of certain US ISINs for use by market participants in the European Economic Area. A formal agreement was reached on November 15, 2011.
Format.
A CUSIP is a nine-character alphanumeric code. The first six characters are known as the base (or CUSIP-6), and uniquely identify the issuer. Issuer codes are assigned alphabetically from a series that includes deliberately built-in gaps for future expansion. The 7th and 8th digits identify the exact issue. The 9th digit is a checksum (some clearing bodies ignore or truncate the last digit). The last three characters of the issuer code can be letters, in order to provide more room for expansion.
Issuer numbers 990 to 999 and 99A to 99Z in each group of 1,000 numbers are reserved for internal use. This permits a user to assign an issuer number to any issuer which might be relevant to his holdings but which does not qualify for coverage under the CUSIP numbering system. Other issuer numbers (990000 to 999999 and 99000A to 99999Z) are also reserved for the user so that they may be assigned to non-security assets or to number miscellaneous internal assets.
The 7th and 8th digits identify the exact issue, the format being dependent on the type of security. In general, numbers are used for equities and letters are used for fixed income. For discount commercial paper, the first issue character is generated by taking the letter code of the maturity month and the second issue character is the day of the maturity date, with letters used for numbers over 9. The first security issued by any particular issuer is numbered "10". Newer issues are numbered by adding ten to the last used number up to 80, at which point the next issue is "88" and then goes down by tens. The issue number "01" is used to label all options on equities from that issuer.
Fixed income issues are labeled in a similar fashion, but because there are so many of them, letters are used instead of digits. The first issue is labeled "AA", the next "A2", then "2A" and onto "A3". To avoid confusion, the letters I and O are not used, since they might be mistaken for the digits 1 and 0.
CUSIP also reserves the special characters '*', '@' and '#' for use with private placement numbers (PPNs) used by the insurance industry.
The 9th digit is an automatically generated check digit using the "Modulus 10 Double Add Double" technique based on the Luhn algorithm. To calculate the check digit every second digit is multiplied by two. Letters are converted to numbers based on their ordinal position in the alphabet, starting with A equal to 10.
TBA CUSIP format.
There is a special assignment of CUSIP numbers for TBA Security. Working with the MBSCC, CUSIP Global Services (CGS) developed a specialized identification scheme for TBA (To Be Announced) mortgage-backed securities.
TBA CUSIPs incorporate, within the identifier itself, a security’s mortgage type (Ginnie Mae, Fannie Mae, Freddie Mac), coupon, maturity and settlement month.
TBA Algorithm:
Check digit lookup table.
The values below are summed for the first 8 characters, then reduced to 1 digit by formula_0:
Check digit pseudocode.
algorithm Cusip-Check-Digit is
    input: "cusip", an 8-character CUSIP.
    "sum" := 0
    for 0 ≤ "i" < 8 do
        "c" := the "i"th character of "cusip"
        if "c" is a digit then
            "v" := numeric value of the digit "c"
        else if "c" is a letter then
            "p" := ordinal position of "c" in the alphabet (A=1, B=2...)
            "v" := "p" + 9
        else if "c" = "*" then
            "v" := 36
        else if "c" = "@" then
            "v" := 37
        else if "c" = "#" then
            "v" := 38
        end if
        if "i" is not even then
            "v" := "v" × 2
        end if
        "sum" := "sum" + int("v" div 10) + "v" mod 10
    repeat
    return (10 - ("sum" mod 10)) mod 10
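The pseudocode translates directly into executable code. Below is a minimal Python sketch of the same "Modulus 10 Double Add Double" computation; the 8-character base used in the example is there only to show the calling convention.

```python
def cusip_check_digit(cusip8: str) -> int:
    """Compute the 9th (check) digit for the first 8 characters of a CUSIP."""
    total = 0
    for i, c in enumerate(cusip8.upper()):
        if c.isdigit():
            v = int(c)
        elif c.isalpha():
            v = ord(c) - ord('A') + 10      # A=10, B=11, ...
        elif c == '*':
            v = 36
        elif c == '@':
            v = 37
        elif c == '#':
            v = 38
        else:
            raise ValueError(f"invalid CUSIP character: {c!r}")
        if i % 2 == 1:                       # every second character is doubled
            v *= 2
        total += v // 10 + v % 10            # "double add double": add the digits of v
    return (10 - total % 10) % 10

# Example 8-character issuer+issue base; prints the full 9-character identifier.
base = "03783310"
print(base + str(cusip_check_digit(base)))
```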
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "modulo(10 - modulo(sum, 10), 10)"
}
] | https://en.wikipedia.org/wiki?curid=1083873 |
10841844 | Pirinçlik Air Base | Closed American military base in Turkey
Pirinçlik Air Base (), also known as Pirinçlik Air Station, formerly Diyarbakır Air Station, was a 41-year-old American-Turkish military base near Diyarbakir, Turkey. Notable base commanders include Col. Dale Lee Norman. It was known as NATO's frontier post for monitoring the former Soviet Union and the Middle East; it was completely closed on 30 September 1997.
This return was the result of the general drawdown of US bases in Europe and improvements in space surveillance technology. The base near the southeastern city of Diyarbakir housed sensitive electronic intelligence-gathering systems for listening in on the Middle East, Caucasus and Russia.
The Pirinçlik sensor system consisted of two radio frequency (RF) mechanical radar systems providing radar intelligence, space surveillance, and missile warning data to multiple users. Observations from Diyarbakır were normally the first radar reports of new Russian satellite launches from Kapustin Yar in the early days of satellite tracking; see Project Space Track. The site operated both a detection radar (AN/FPS-17) and a mechanical tracking radar (AN/FPS-79). Although limited by their mechanical technology, Pirinçlik's two radars gave the advantage of tracking two objects simultaneously in real time. Its location close to the southern Soviet Union made it the only ground sensor capable of tracking actual deorbits of Soviet space hardware. In addition, the Pirinçlik radar was the only 24-hour-per-day eastern hemisphere deep-space sensor.
AN/FPS-17 and AN/FPS-79 radar systems.
The AN/FPS-17 Space Surveillance Radar developed by the Rome Air Development Center (RADC) was the first surveillance radar system designed to detect objects in space. The FPS-17 detection scanning radars have fixed antennae oriented toward the Soviet Union. The Air Force FPS-79 UHF tracking radar at Diyarbakir-Pirinçlik in Turkey is capable of tracking missiles during flight. The 10-meter diameter dish antenna system has a variable focus feed horn system which can provide a wide beam for target detection, and a narrow beam for tracking (other similar radars have scan rates in excess of formula_0 per second). Operating at 432 MHz, this radar has a maximum detection range in excess of 4,300 kilometers.
Lincoln Laboratory’s phase-coded pulse-modulation receiver/exciter for the VHF AN/FPS-17 radar, built at the Pirinçlik site in eastern Turkey by the General Electric Company, allowed U.S. observers to monitor missile test launches from Kapustin Yar, deep within the Soviet Union. Subsequent installation of another AN/FPS-17 radar on Shemya, a western island in the chain of Aleutian Islands off Alaska, made it possible for U.S. observers to monitor Soviet missile test flights to the Kamchatka peninsula. The AN/FPS-17 radar was the first demonstration of pulse compression in an operational radar system.
In 1970, the name Diyarbakir Air Station was changed to that of Pirinçlik, the name of the small village 30 km west of Diyarbakir where the unit was actually located. On 1 June 1972, the 7022d Air Base Squadron was activated, under the command of the 39th Tactical Group. On 30 July 1981, the squadron was assigned to The U.S. Logistics Group. Its mission was to support 19th Surveillance Squadron, SAC, at Pirinçlik. It received logistical support from İncirlik Air Base.
Pirinçlik Air Station was a remote site, where personnel lived in quonset hut dorms, had one club for socialization, could not leave the base at night, and had few shopping or entertainment opportunities other than an occasional temporary duty to İncirlik. This site was so small that the perimeter fence was practically visible from anywhere on base. The staff consisted of 150 airmen which during the 1980s and after included about 20 or so females, 30+ officers, 120 American civilian contractors, and nearly 300 Turkish military and civilians.
On September 30, 1996, Lockheed Martin Corporation, Syracuse, N.Y., was awarded a $16,221,360 face value increase to a fixed-price incentive contract to provide for FY 1997 operation, maintenance, and logistic support of the sensor facilities at Pirinclik Air Station. The work was performed at Pirinçlik Air Station. The contract was completed in September 1997. The 21st Space Wing, Peterson AFB, Colorado, was the contracting activity.
Base closure in 1997.
The Secretary of Defense announced February 13, 1997, that the U.S. Department of Defense would end or reduce operations at seven European installations as a result of the latest round of base and force realignment actions. The phrase "return" means the entire installation is vacated by U.S. forces and returned to the control of the host nation. This round included six installations in Germany and one in Turkey—Pirinçlik Air Base. This action began immediately, with the return of the installation to the host nation planned for September 1997. It affected about 117 U.S. Air Force personnel then assigned to the base.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{10^o}"
}
] | https://en.wikipedia.org/wiki?curid=10841844 |
1084216 | Compound steam engine | Steam engine where steam is expanded in stages
A compound steam engine unit is a type of steam engine where steam is expanded in two or more stages.
A typical arrangement for a compound engine is that the steam is first expanded in a high-pressure "(HP)" cylinder, then having given up heat and losing pressure, it exhausts directly into one or more larger-volume low-pressure "(LP)" cylinders. Multiple-expansion engines employ additional cylinders, of progressively lower pressure, to extract further energy from the steam.
Invented in 1781, this technique was first employed on a Cornish beam engine in 1804. Around 1850, compound engines were first introduced into Lancashire textile mills.
Compound systems.
There are many compound systems and configurations, but there are two basic types, according to how HP and LP piston strokes are phased and hence whether the HP exhaust is able to pass directly from HP to LP (Woolf compounds) or whether pressure fluctuation necessitates an intermediate "buffer" space in the form of a steam chest or pipe known as a "receiver" (receiver compounds).
In a single-expansion (or 'simple') steam engine, the high-pressure steam enters the cylinder at boiler pressure through an inlet valve. The steam pressure forces the piston down the cylinder, until the valve shuts (e.g. after 25% of the piston's stroke). After the steam supply is cut off the trapped steam continues to expand, pushing the piston to the end of its stroke, where the exhaust valve opens and expels the partially depleted steam to the atmosphere, or to a condenser. This "cut-off" allows much more work to be extracted, since the expansion of the steam is doing additional work beyond that done by the steam at boiler pressure.
An earlier cut-off increases the expansion ratio, which in principle allows more energy to be extracted and increases efficiency. Ideally, the steam would expand adiabatically, and the temperature would drop corresponding to the volume increase. However, in practice the material of the surrounding cylinder acts as a heat reservoir, cooling the steam in the earlier part of the expansion and heating it in the later part. These irreversible heat flows decrease the efficiency of the process, so that beyond a certain point, further increasing the expansion ratio would actually decrease efficiency, in addition to decreasing the mean effective pressure and thus the power of the engine.
Compounding engines.
A solution to the dilemma was invented in 1804 by British engineer Arthur Woolf, who patented his "Woolf high pressure compound engine" in 1805. In the compound engine, high-pressure steam from the boiler first expands in a high-pressure (HP) cylinder and then enters one or more subsequent lower pressure (LP) cylinders. The complete expansion of the steam occurs across multiple cylinders and, as there is less expansion in each cylinder, the steam cools less in each cylinder, making higher expansion ratios practical and increasing the efficiency of the engine.
There are other advantages: as the temperature range is smaller, cylinder condensation is reduced. Loss due to condensation is restricted to the LP cylinder. Pressure difference is less in each cylinder so there is less steam leakage at the piston and valves. The turning moment is more uniform, so balancing is easier and a smaller flywheel may be used. Only the smaller HP cylinder needs to be built to withstand the highest pressure, which reduces the overall weight. Similarly, components are subject to less strain, so they can be lighter. The reciprocating parts of the engine are lighter, reducing the engine vibrations. The compound could be started at any point in the cycle, and in the event of mechanical failure the compound could be reset to act as a simple, and thus keep running.
To derive equal work from lower-pressure steam requires a larger cylinder volume as this steam occupies a greater volume. Therefore, the bore, and in rare cases the stroke as well, are increased in low-pressure cylinders, resulting in larger cylinders.
Double-expansion (usually just known as 'compound') engines expand the steam in two stages, but this does not imply that all such engines have two cylinders. They may have four cylinders working as two LP-HP pairs, or the work of the large LP cylinder can be split across two smaller cylinders, with one HP cylinder exhausting into either LP cylinder, giving a 3-cylinder layout where the cylinder and piston diameter of all three are about the same, making the reciprocating masses easier to balance.
Two-cylinder compounds can be arranged as:
The adoption of compounding was widespread for stationary industrial units where the need was for increased power at decreasing cost, and almost universal for marine engines after 1880. It was not widely used in railway locomotives where it was often perceived as complicated and unsuitable for the harsh railway operating environment and limited space afforded by the loading gauge (particularly in Britain). Compounding was never common on British railways and not employed at all after 1930, but was used in a limited way in many other countries.
The first successful attempt to fly a heavier-than-air fixed-wing aircraft solely on steam power occurred in 1933, when George and William Besler converted a Travel Air 2000 biplane to fly on a 150 hp angle-compound V-twin steam engine of their own design instead of the usual Curtiss OX-5 inline or radial aviation gasoline engine it would have normally used.
Multiple-expansion engines.
It is a logical extension of the compound engine (described above) to split the expansion into yet more stages to increase efficiency. The result is the multiple-expansion engine. Such engines use either three or four expansion stages and are known as "triple-" and "quadruple-expansion engines" respectively. These engines use a series of double-acting cylinders of progressively increasing diameter and/or stroke and hence volume. These cylinders are designed to divide the work into three or four equal portions, one for each expansion stage. The adjacent image shows an animation of a triple-expansion engine. The steam travels through the engine from left to right. The valve chest for each of the cylinders is to the left of the corresponding cylinder.
Applications.
Mill engines.
Though the first mills were driven by water power, once steam engines were adopted the manufacturer no longer needed to site the mills by running water. Cotton spinning required ever larger mills to fulfil the demand, and this drove the owners to demand increasingly powerful engines. When boiler pressure had exceeded 60 psi, compound engines achieved a thermo-dynamic advantage, but it was the mechanical advantage of the smoother stroke that was the deciding factor in the adoption of compounds. In 1859, there was 75,886 ihp (indicated horsepower) of engines in mills in the Manchester area, of which 32,282 ihp was provided by compounds, though only 41,189 ihp was generated from boilers operated at over 60 psi.
To generalise, between 1860 and 1926 all Lancashire mills were driven by compounds. The last compound built was by Buckley and Taylor for Wye No.2 mill, Shaw. This engine was a cross-compound design to 2,500 ihp, driving a 24 ft, 90 ton flywheel, and operated until 1965.
Marine applications.
In the marine environment, the general requirement was for autonomy and increased operating range, as ships had to carry their coal supplies. The old salt-water boiler was thus no longer adequate and had to be replaced by a closed fresh-water circuit with condenser. The result from 1880 onwards was the multiple-expansion engine using three or four expansion stages ("triple-" and "quadruple-expansion engines"). These engines used a series of double-acting cylinders of progressively increasing diameter and/or stroke (and hence volume) designed to divide the work into three or four, as appropriate, equal portions for each expansion stage. Where space is at a premium, two smaller cylinders of a large sum volume might be used for the low-pressure stage. Multiple-expansion engines typically had the cylinders arranged in-line, but various other formations were used. In the late 19th century, the Yarrow-Schlick-Tweedy balancing 'system' was used on some marine triple-expansion engines. Y-S-T engines divided the low-pressure expansion stages between two cylinders, one at each end of the engine. This allowed the crankshaft to be better balanced, resulting in a smoother, faster-responding engine which ran with less vibration. This made the 4-cylinder triple-expansion engine popular with large passenger liners (such as the Olympic class), but was ultimately replaced by the virtually vibration-free steam turbine.
The development of this type of engine was important for its use in steamships as by exhausting to a condenser the water could be reclaimed to feed the boiler, which was unable to use seawater. Land-based steam engines could simply exhaust much of their steam, as feed water was usually readily available. Prior to and during World War II, the expansion engine dominated marine applications where high vessel speed was not essential. It was superseded by the steam turbine when speed was required, such as for warships and ocean liners. HMS "Dreadnought" of 1905 was the first major warship to replace the proven technology of the reciprocating engine with the then-novel steam turbine.
Application to railway locomotives.
For railway locomotive applications the main benefit sought from compounding was economy in fuel and water consumption plus high power/weight ratio due to temperature and pressure drop taking place over a longer cycle, this resulting in increased efficiency; additional perceived advantages included more even torque.
While designs for compound locomotives may date as far back as James Samuel's 1856 patent for a "continuous expansion locomotive", the practical history of railway compounding begins with Anatole Mallet's designs in the 1870s. Mallet locomotives were operated in the United States up to the end of mainline steam by the Norfolk and Western Railway. The designs of Alfred George de Glehn in France also saw significant use, especially in the rebuilds of André Chapelon. A wide variety of compound designs were tried around 1900, but most were short-lived in popularity, due to their complexity and maintenance liability. In the 20th century the superheater was widely adopted, and the vast majority of steam locomotives were simple-expansion (with some compound locomotives converted to simple). It was realised by engineers that locomotives at steady speed were worked most efficiently with a wide-open regulator and early cut-off, the latter being set via the reversing gear. A locomotive operating at very early cut-off of steam (e.g. at 15% of the piston stroke) allows maximum expansion of the steam, with less wasted energy at the end of the stroke. Superheating eliminates the condensation and rapid loss of pressure that would otherwise occur with such expansion.
Large American locomotives used two cross-compound steam-driven air compressors, e.g. the Westinghouse 8 1/2" 150-D, for the train brakes.
The Yarrow-Schlick-Tweedy system.
The presentation follows Sommerfeld's textbook, which contains a diagram (Figure 17) that is not reproduced for copyright reasons.
Consider a 4-cylinder engine on a ship. Let x be the vertical direction, z be the fore-aft direction, and y be the port-starboard direction. Let the 4 cylinders be mounted in a row along the z-axis, so that their pistons are pointed downwards. The pistons are connected to the same crankshaft via long vertical rods. Now, we set up the fundamental quantities of the engine:
formula_0 are the reciprocating masses of the four pistons (together with their rods and gear);
formula_1 and formula_2 are the fore-aft distances of cylinders 2, 3 and 4 from cylinder 1 along the z-axis;
formula_3 are the lengths of the connecting rods;
formula_4 are the crank radii;
formula_5 are the crank angles, which all increase at the same rate as the crankshaft turns;
the phase differences formula_6 are denoted formula_7, for formula_8.
Now, as the engine operates, the vertical position of cylinder formula_9 is equal to formula_10. By trigonometry, we have
formula_11
As each cylinder moves up and down, it exerts a vertical force on its mounting frame equaling formula_12. The YST system aims to make sure that the total of all 4 forces cancels out as exactly as possible. Specifically, it aims to make sure that the total force (along the x-axis) and the total torque (around the y-axis) are both zero:
formula_13
This can be achieved if
formula_14
Now, plugging in the equations, we find that it means (up to second-order)
formula_15
Plugging in formula_16 and expanding the cosine functions, we see that, with formula_17 arbitrary, the factors of formula_18 must vanish separately. This gives us 8 equations to solve, which is in general possible if there are at least 8 variables of the system that we can vary.
Of the variables of the system, formula_19 are fixed by the design of the cylinders. Also, the absolute values of formula_20 do not matter, only their ratios matter. Together, this gives us 9 variables to vary: formula_21.
The YST system requires at least 4 cylinders. With 3 cylinders, the same derivation gives us only 6 variables to vary, which is insufficient to solve all 8 equations.
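As a concrete way of reading the two balance conditions, the sketch below (Python, with invented parameters rather than any actual YST design) evaluates how much the total vertical force term and the pitching-torque term vary over one crankshaft revolution; a balanced YST layout would choose the rod lengths, cylinder spacings and phase angles so that both variations essentially vanish.

```python
import numpy as np

# Illustrative (not actual YST) parameters: masses, crank radii, rod lengths,
# cylinder positions along the shaft, and crank phase angles alpha_i (alpha_1 = 0).
M     = np.array([1.0, 1.6, 1.6, 1.0])        # reciprocating masses
r     = np.array([0.30, 0.35, 0.35, 0.30])    # crank radii
l     = np.array([1.5, 1.5, 1.5, 1.5])        # connecting-rod lengths
a     = np.array([0.0, 1.0, 2.0, 3.0])        # fore-aft positions (cylinder 1 at origin)
alpha = np.radians([0.0, 100.0, 230.0, 330.0])  # crank phases

phi1 = np.linspace(0.0, 2.0 * np.pi, 721)           # one crankshaft revolution
phi  = phi1[:, None] + alpha[None, :]                # phi_i = phi_1 + alpha_i
x    = r * np.cos(phi) + np.sqrt(l**2 - (r * np.sin(phi))**2)  # piston heights

force_sum  = (M * x).sum(axis=1)       # proportional to sum_i M_i x_i
torque_sum = (M * a * x).sum(axis=1)   # proportional to sum_i M_i a_i x_i

# If both sums were constant in phi_1, the net vertical force and the pitching
# torque would vanish to this order, i.e. the engine would be balanced.
print("force variation :", force_sum.max() - force_sum.min())
print("torque variation:", torque_sum.max() - torque_sum.min())
```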
The YST system is used on ships such as the SS Kaiser Wilhelm der Grosse and SS Deutschland (1900).
Notes.
^ Cylinder phasing:
With two-cylinder compounds used in railway work, the pistons are connected to the cranks as with a two-cylinder simple at 90° out-of-phase with each other ("quartered").
When the double-expansion group is duplicated, producing a 4-cylinder compound, the individual pistons within the group are usually balanced at 180°, the groups being set at 90° to each other. In one case (the first type of Vauclain compound), the pistons worked in the same phase driving a common crosshead and crank, again set at 90° as for a two-cylinder engine.
With the 3-cylinder compound arrangement, the LP cranks were either set at 90° with the HP one at 135° to the other two, or in some cases all three cranks were set at 120°.
^ ihp:
The power of a mill engine was originally measured in Nominal Horse Power (nhp), but this system understated the power of a compound; the McNaught system, suitable for compounds, measures ihp, or indicated horse power. As a rule of thumb, ihp is 2.6 times nhp in a compound engine.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "M_1, M_2, M_3, M_4"
},
{
"math_id": 1,
"text": "a_2"
},
{
"math_id": 2,
"text": "a_3, a_4"
},
{
"math_id": 3,
"text": "l_1, l_2, l_3, l_4"
},
{
"math_id": 4,
"text": "r_1, r_2, r_3, r_4"
},
{
"math_id": 5,
"text": "\\phi_1, \\phi_2, \\phi_3, \\phi_4"
},
{
"math_id": 6,
"text": "\\phi_i - \\phi_1"
},
{
"math_id": 7,
"text": "\\alpha_i"
},
{
"math_id": 8,
"text": "i = 2, 3, 4"
},
{
"math_id": 9,
"text": "i"
},
{
"math_id": 10,
"text": "x_i"
},
{
"math_id": 11,
"text": "x_i =r_i \\cos\\phi_i + \\sqrt{l_i^2(r_i\\sin\\phi_i)^2} = l_1 + r_i\\cos\\phi_i - \\frac{r_i^2}{l_i} (1-\\cos(2\\phi_i))/2 + O(r_i^3/l^2)"
},
{
"math_id": 12,
"text": "M_i\\ddot x_i"
},
{
"math_id": 13,
"text": "\\sum_{i=1}^4 M_i \\ddot x_i = 0; \\quad \\sum_{i=2}^4 M_i a_i\\ddot x_i = 0"
},
{
"math_id": 14,
"text": "\\sum_{i=1}^4 M_i x_i = Const; \\quad \\sum_{i=2}^4 M_i a_i x_i = Const"
},
{
"math_id": 15,
"text": "\\sum_{i=1}^4 M_i (r_i \\cos\\phi_i - \\frac{r_i^2}{2l_i} \\cos(2\\phi_i))= 0; \\quad \\sum_{i=2}^4 M_i a_i (r_i \\cos\\phi_i - \\frac{r_i^2}{2l_i} \\cos(2\\phi_i)) = 0"
},
{
"math_id": 16,
"text": "\\phi_i = \\phi_1 + \\alpha_i"
},
{
"math_id": 17,
"text": "\\phi_1"
},
{
"math_id": 18,
"text": "\\sin(\\phi_1), \\cos(\\phi_1), \\sin(2\\phi_1), \\cos(2\\phi_1)"
},
{
"math_id": 19,
"text": "M_i, r_i"
},
{
"math_id": 20,
"text": "a_2, a_3, a_4"
},
{
"math_id": 21,
"text": "l_1, l_2, l_3, l_4, \\frac{a_3}\n{a_2}, \\frac{a_4}{a_2}, \\alpha_2, \\alpha_3, \\alpha_4"
}
] | https://en.wikipedia.org/wiki?curid=1084216 |
1084219 | Gravimetry | Measurement of the strength of a gravitational field
Gravimetry is the measurement of the strength of a gravitational field. Gravimetry may be used when either the magnitude of a gravitational field or the properties of matter responsible for its creation are of interest. The study of gravity changes belongs to geodynamics.
Units of measurement.
Gravity is usually measured in units of acceleration. In the SI system of units, the standard unit of acceleration is metres per second squared (m/s2). Other units include the cgs gal (sometimes known as a "galileo", in either case with symbol Gal), which equals 1 centimetre per second squared, and the "g" ("g"n), equal to 9.80665 m/s2. The value of the "g"n is defined as approximately equal to the acceleration due to gravity at the Earth's surface, although the actual acceleration varies slightly by location.
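A short sketch of the unit conversions described above (plain arithmetic, no external data; the printed values follow directly from the definitions in the text):

```python
GAL  = 0.01          # 1 Gal = 1 cm/s^2 = 0.01 m/s^2
MGAL = 1e-3 * GAL    # 1 mGal = 0.001 Gal
G_N  = 9.80665       # standard gravity g_n, in m/s^2

# Express standard gravity in Gal and mGal:
print(G_N / GAL)     # 980.665 Gal
print(G_N / MGAL)    # 980665.0 mGal
```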
Gravimeters.
A gravimeter is an instrument used to measure gravitational acceleration. Every mass has an associated gravitational potential. The gradient of this potential is a force. A gravimeter measures this gravitational force.
For a small body, general relativity predicts gravitational effects indistinguishable from the effects of acceleration by the equivalence principle. Thus, gravimeters can be regarded as special-purpose accelerometers. Many weighing scales may be regarded as simple gravimeters. In one common form, a spring is used to counteract the force of gravity pulling on an object. The change in length of the spring may be calibrated to the force required to balance the gravitational pull. The resulting measurement may be made in units of force (such as the newton); however, gravimeters display their measurements in units of gals (cm/s2), and in parts per million, parts per billion, or parts per trillion of the average vertical acceleration with respect to the Earth.
Though similar in design to other accelerometers, gravimeters are typically designed to be much more sensitive. Their first uses were to measure the changes in gravity from the varying densities and distribution of masses inside the Earth, from temporal tidal variations in the shape and distribution of mass in the oceans, atmosphere and earth.
The resolution of gravimeters can be increased by averaging samples over longer periods. Fundamental characteristics of gravimeters are the accuracy of a single measurement (a single "sample") and the sampling rate.
formula_0
for example:
formula_1
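A quick numerical illustration of this averaging relation; the single-sample resolution used here is an assumed figure, not a specification of any particular instrument:

```python
import math

single_sample_resolution = 1.0   # assumed resolution of one 1-second sample (arbitrary units)
samples_per_minute = 60

resolution_per_minute = single_sample_resolution / math.sqrt(samples_per_minute)
print(resolution_per_minute)     # ~0.13: averaging 60 samples improves resolution ~7.7 times
```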
Besides precision, stability is also an important property for a gravimeter as it allows the monitoring of gravity "changes". These changes can be the result of mass displacements inside the Earth, or of vertical movements of the Earth's crust on which measurements are being made.
The first gravimeters were vertical accelerometers, specialized for measuring the constant downward acceleration of gravity on the Earth's surface. The Earth's vertical gravity varies from place to place over its surface by about ±0.5%. It varies by about ±1,000 nanometres per second squared at any location because of the changing positions of the Sun and Moon relative to the Earth.
The majority of modern gravimeters use specially designed metal or quartz zero-length springs to support the test mass. The special property of these springs is that the natural resonant period of oscillation of the spring–mass system can be made very long – approaching a thousand seconds. This detunes the test mass from most local vibration and mechanical noise, increasing the sensitivity and utility of the gravimeter. Quartz and metal springs are chosen for different reasons; quartz springs are less affected by magnetic and electric fields while metal springs have a much lower drift due to elongation over time. The test mass is sealed in an air-tight container so that tiny changes of barometric pressure from blowing wind and other weather do not change the buoyancy of the test mass in air. Spring gravimeters are, in practice, relative instruments that measure the difference in gravity between different locations. A relative instrument also requires calibration by comparing instrument readings taken at locations with known absolute values of gravity.
Absolute gravimeters provide such measurements by determining the gravitational acceleration of a test mass in vacuum. A test mass is allowed to fall freely inside a vacuum chamber and its position is measured with a laser interferometer and timed with an atomic clock. The laser wavelength is known to ±0.025 ppb and the clock is stable to ±0.03 ppb. Care must be taken to minimize the effects of perturbing forces such as residual air resistance (even in vacuum), vibration, and magnetic forces. Such instruments are capable of an accuracy of about two parts per billion or 0.002 mGal and reference their measurement to atomic standards of length and time. Their primary use is for calibrating relative instruments, monitoring crustal deformation, and in geophysical studies requiring high accuracy and stability. However, absolute instruments are somewhat larger and significantly more expensive than relative spring gravimeters and are thus relatively rare.
Relative gravimeter usually refer to differential comparisons of gravity from one place to another. They are designed to subtract the average vertical gravity automatically. They can be calibrated at a location where the gravity is known accurately and then transported to the location where the gravity is to be measured. Or they can be calibrated in absolute units at their operating location.
Applications.
Researchers use more sophisticated gravimeters when precise measurements are needed. When measuring the Earth's gravitational field, measurements are made to the precision of microgals to find density variations in the rocks making up the Earth. Several types of gravimeters exist for making these measurements, including some that are essentially refined versions of the spring scale described above. These measurements are used to quantify gravity anomalies.
Gravimeters can detect vibrations and gravity changes from human activities. Depending on the interests of the researcher or operator, this might be counteracted by integral vibration isolation and signal processing.
Gravimeters have been designed to mount in vehicles, including aircraft (note the field of aerogravity), ships and submarines. These special gravimeters isolate acceleration from the movement of the vehicle and subtract it from measurements. The acceleration of the vehicles is often hundreds or thousands of times stronger than the changes in gravity being measured.
A gravimeter (the "Lunar Surface Gravimeter") deployed on the surface of the Moon during the 1972 Apollo 17 mission did not work due to a design error. A second device (the "Traverse Gravimeter Experiment") functioned as anticipated.
Gravimeters are used for petroleum and mineral prospecting, seismology, geodesy, geophysical surveys and other geophysical research, and for metrology. Their fundamental purpose is to map the gravity field in space and time.
Most current work is Earth-based, with a few satellites around Earth, but gravimeters are also applicable to the Moon, Sun, planets, asteroids, stars, galaxies and other bodies. Gravitational wave experiments monitor the changes with time in the gravitational potential itself, rather than the gradient of the potential that the gravimeter is tracking. This distinction is somewhat arbitrary. The subsystems of the gravitational radiation experiments are very sensitive to changes in the gradient of the potential. The local gravity signals on Earth that interfere with gravitational wave experiments are disparagingly referred to as "Newtonian noise", since Newtonian gravity calculations are sufficient to characterize many of the local (earth-based) signals.
There are many methods for displaying acceleration fields, also called "gravity fields". This includes traditional 2D maps, but increasingly 3D video. Since gravity and acceleration are the same, "acceleration field" might be preferable, since "gravity" is an oft-misused prefix.
Commercial absolute gravimeters.
Gravimeters for measuring the Earth's gravity as precisely as possible are getting smaller and more portable. A common type measures the acceleration of small masses free falling in a vacuum, when the accelerometer is firmly attached to the ground. The mass includes a retroreflector and terminates one arm of a Michelson interferometer. By counting and timing the interference fringes, the acceleration of the mass can be measured. A more recent development is a "rise and fall" version that tosses the mass upward and measures both upward and downward motion. This allows cancellation of some measurement errors; however, "rise and fall" gravimeters are not yet in common use. Absolute gravimeters are used in the calibration of relative gravimeters, surveying for gravity anomalies (voids), and for establishing the vertical control network.
Atom interferometric and atomic fountain methods are used for precise measurement of the Earth's gravity, and atomic clocks and purpose-built instruments can use time dilation (also called general relativistic) measurements to track changes in the gravitational potential and gravitational acceleration on the Earth.
The term "absolute" does not convey the instrument's stability, sensitivity, accuracy, ease of use, and bandwidth. The words "Absolute" and "relative" should not be used when more specific characteristics can be given.
Relative gravimeters.
The most common gravimeters are spring-based. They are used in gravity surveys over large areas for establishing the figure of the geoid over those areas. They are basically a weight on a spring, and by measuring the amount by which the weight stretches the spring, local gravity can be measured. However, the strength of the spring must be calibrated by placing the instrument in a location with a known gravitational acceleration.
The current standard for sensitive gravimeters is the "superconducting gravimeter", which operates by suspending a superconducting niobium sphere in an extremely stable magnetic field; the current required to generate the magnetic field that suspends the niobium sphere is proportional to the strength of the Earth's gravitational acceleration. The superconducting gravimeter achieves sensitivities of 10−11 m·s−2 (one nanogal), approximately one trillionth (10−12) of the Earth surface gravity. In a demonstration of the sensitivity of the superconducting gravimeter, Virtanen (2006) describes how an instrument at Metsähovi, Finland, detected the gradual increase in surface gravity as workmen cleared snow from its laboratory roof.
The largest component of the signal recorded by a superconducting gravimeter is the tidal gravity of the Sun and Moon acting at the station. This is roughly ±1,000 nanometres per second squared at most locations. "SGs", as they are called, can detect and characterize Earth tides, changes in the density of the atmosphere, the effect of changes in the shape of the surface of the ocean, the effect of the atmosphere's pressure on the Earth, changes in the rate of rotation of the Earth, oscillations of the Earth's core, distant and nearby seismic events, and more.
Many broadband three-axis seismometers in common use are sensitive enough to track the Sun and Moon. When operated to report acceleration, they are useful gravimeters. Because they have three axes, it is possible to solve for their position and orientation, by either tracking the arrival time and pattern of seismic waves from earthquakes, or by referencing them to the Sun and Moon tidal gravity.
Recently, the SGs, and broadband three-axis seismometers operated in gravimeter mode, have begun to detect and characterize the small gravity signals from earthquakes. These signals arrive at the gravimeter at the speed of light, so have the potential to improve earthquake early warning methods. There is some activity to design purpose-built gravimeters of sufficient sensitivity and bandwidth to detect these prompt gravity signals from earthquakes. Not just the magnitude 7+ events, but also the smaller, much more frequent, events.
Newer MEMS gravimeters, atom gravimeters – MEMS gravimeters offer the potential for low-cost arrays of sensors. MEMS gravimeters are currently variations on spring type accelerometers where the motions of a tiny cantilever or mass are tracked to report acceleration. Much of the research is focused on different methods of detecting the position and movements of these small masses. In Atom gravimeters, the mass is a collection of atoms.
For a given restoring force, the central frequency of the instrument is often given by
formula_2 (in radians per second)
The term for the "force constant" changes if the restoring force is electrostatic, magnetostatic, electromagnetic, optical, microwave, acoustic, or any of dozens of different ways to keep the mass stationary. The "force constant" is just the coefficient of the displacement term in the equation of motion:
"'"m" "a" + "b" "v" + "k" "x" + constant
"F"("X","t")
m mass, a acceleration, b viscosity, v velocity, k force constant, x displacement
F external force as a function of location/position and time.
F is the force being measured, and is the acceleration.
"'"g"("X","t")
"a" + + + + higher derivatives of the restoring force
Precise GPS stations can be operated as gravimeters since they are increasingly measuring three-axis positions over time, which, when differentiated twice, give an acceleration signal.
The satellite-borne gravimeters GOCE and GRACE mostly operated in gravity gradiometer mode. They yielded detailed information about the Earth's time-varying gravity field. The spherical harmonic gravitational potential models are slowly improving in both spatial and temporal resolution. Taking the gradient of the potentials gives estimates of local acceleration, which are what is measured by the gravimeter arrays. The superconducting gravimeter network has been used to ground-truth the satellite potentials. This should eventually improve both the satellite and Earth-based methods and intercomparisons.
Transportable relative gravimeters also exist; they employ an extremely stable inertial platform to compensate for the masking effects of motion and vibration, a difficult engineering feat. The first transportable relative gravimeters were, reportedly, a secret military technology developed in the 1950–1960s as a navigational aid for nuclear submarines. Subsequently in the 1980s, transportable relative gravimeters were reverse engineered by the civilian sector for use on ship, then in air and finally satellite-borne gravity surveys.
Microgravimetry.
Microgravimetry is an important branch developed on the foundation of classical gravimetry. Microgravity investigations are carried out to solve various problems of engineering geology, mainly the location of voids and their monitoring. Very detailed measurements of high accuracy can indicate voids of any origin, provided their size and depth are large enough to produce a gravity effect stronger than the level of confidence of the relevant gravity signal.
History.
The modern gravimeter was developed by Lucien LaCoste and Arnold Romberg in 1936.
They also invented most subsequent refinements, including the ship-mounted gravimeter, in 1965, temperature-resistant instruments for deep boreholes, and lightweight hand-carried instruments. Most of their designs remain in use with refinements in data collection and data processing.
Satellite gravimetry.
Currently, the static and time-variable Earth's gravity field parameters are determined using modern satellite missions, such as GOCE, CHAMP, Swarm, GRACE and GRACE-FO. The lowest-degree parameters, including the Earth's oblateness and geocenter motion are best determined from satellite laser ranging.
Large-scale gravity anomalies can be detected from space, as a by-product of satellite gravity missions, e.g., GOCE. These satellite missions aim at the recovery of a detailed gravity field model of the Earth, typically presented in the form of a spherical-harmonic expansion of the Earth's gravitational potential, but alternative presentations, such as maps of geoid undulations or gravity anomalies, are also produced.
The Gravity Recovery and Climate Experiment (GRACE) consisted of two satellites that detected gravitational changes across the Earth. Also these changes could be presented as gravity anomaly temporal variations. The Gravity Recovery and Interior Laboratory (GRAIL) also consisted of two spacecraft orbiting the Moon, which orbited for three years before their deorbit in 2015.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{Resolution} = {\\text{SingleMeasurementResolution} \\over \\sqrt \\text{NumberOfSamples}}"
},
{
"math_id": 1,
"text": "\\text{Resolution per minute} = {\\text{Resolution per second} \\over \\sqrt {60}}"
},
{
"math_id": 2,
"text": " \\omega = 2\\pi \\times \\text{Frequency} = \\sqrt {\\text{Force Constant} \\over \\text{Effective Mass}} "
}
] | https://en.wikipedia.org/wiki?curid=1084219 |
1084263 | Glutamate dehydrogenase | Hexameric enzyme
Glutamate dehydrogenase (GLDH, GDH) is an enzyme observed in both prokaryotes and eukaryotic mitochondria. It catalyses the reversible oxidative deamination of glutamate to α-ketoglutarate, using NAD(P)+ as a cofactor. This reaction also yields ammonia, which in eukaryotes is canonically processed as a substrate in the urea cycle. Typically, the α-ketoglutarate to glutamate reaction does not occur in mammals, as the glutamate dehydrogenase equilibrium favours the production of ammonia and α-ketoglutarate. Glutamate dehydrogenase also has a very low affinity for ammonia (a high Michaelis constant formula_0 of about 1 mM), and therefore toxic levels of ammonia would have to be present in the body for the reverse reaction to proceed (that is, from α-ketoglutarate and ammonia to glutamate and NAD(P)+). However, in the brain, the NAD+/NADH ratio in brain mitochondria encourages oxidative deamination (i.e. glutamate to α-ketoglutarate and ammonia). In bacteria, the ammonia is assimilated into amino acids via glutamate and aminotransferases. In plants, the enzyme can work in either direction depending on environment and stress. Transgenic plants expressing microbial GLDHs show improved tolerance to herbicide, water deficit, and pathogen infections, and are more nutritionally valuable.
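To see what the quoted high Michaelis constant implies, the sketch below evaluates the standard Michaelis–Menten rate v = Vmax·[S]/(Km + [S]) for ammonia at an assumed near-physiological concentration and at a concentration comparable to the Km; the concentrations used are rough assumptions for illustration only, not values from the text.

```python
def michaelis_menten(s, km, vmax=1.0):
    """Reaction velocity as a fraction of Vmax for substrate concentration s."""
    return vmax * s / (km + s)

KM_AMMONIA = 1.0      # mM, as quoted above

# Assumed illustrative ammonia concentrations (mM):
physiological = 0.03  # tens of micromolar (assumed)
toxic         = 1.0   # on the order of the Km itself

print(michaelis_menten(physiological, KM_AMMONIA))  # ~0.03 of Vmax: little reverse flux
print(michaelis_menten(toxic, KM_AMMONIA))          # 0.5 of Vmax
```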
The enzyme represents a key link between catabolic and anabolic pathways, and is, therefore, ubiquitous in eukaryotes. In humans the relevant genes are called "GLUD1" (glutamate dehydrogenase 1) and "GLUD2" (glutamate dehydrogenase 2), and there are also at least five GLDH pseudogenes in the human genome as well.
Clinical application.
GLDH can be measured in a medical laboratory to evaluate the liver function. Elevated blood serum GLDH levels indicate liver damage and GLDH plays an important role in the differential diagnosis of liver disease, especially in combination with aminotransferases. GLDH is localised in mitochondria, therefore practically none is liberated in generalised inflammatory diseases of the liver such as viral hepatitides. Liver diseases in which necrosis of hepatocytes is the predominant event, such as toxic liver damage or hypoxic liver disease, are characterised by high serum GLDH levels. GLDH is important for distinguishing between acute viral hepatitis and acute toxic liver necrosis or acute hypoxic liver disease, particularly in the case of liver damage with very high aminotransferases. In clinical trials, GLDH can serve as a measurement for the safety of a drug.
Enzyme immunoassay (EIA) for glutamate dehydrogenase (GDH) can be used as screening tool for patients with "Clostridioides difficile" infection. The enzyme is expressed constitutively by most strains of C.diff, and can thus be easily detected in stool. Diagnosis is generally confirmed with a follow-up EIA for C. Diff toxins A and B.
Cofactors.
NAD+ (or NADP+) is a cofactor for the glutamate dehydrogenase reaction, producing α-ketoglutarate and ammonium as a byproduct.
Based on which cofactor is used, glutamate dehydrogenase enzymes are divided into the following three classes:
EC 1.4.1.2: glutamate dehydrogenase, NAD+-dependent;
EC 1.4.1.3: glutamate dehydrogenase [NAD(P)+], able to use either NAD+ or NADP+;
EC 1.4.1.4: glutamate dehydrogenase, NADP+-dependent.
Role in flow of nitrogen.
Ammonia incorporation in animals and microbes occurs through the actions of glutamate dehydrogenase and glutamine synthetase. Glutamate plays the central role in mammalian and microbe nitrogen flow, serving as both a nitrogen donor and a nitrogen acceptor.
Regulation of glutamate dehydrogenase.
In humans, the activity of glutamate dehydrogenase is controlled through ADP-ribosylation, a covalent modification carried out by the gene sirt4. This regulation is relaxed in response to caloric restriction and low blood glucose. Under these circumstances, glutamate dehydrogenase activity is raised in order to increase the amount of α-ketoglutarate produced, which can be used to provide energy by being used in the citric acid cycle to ultimately produce ATP.
In microbes, the activity is controlled by the concentration of ammonium and/or the like-sized rubidium ion, which binds to an allosteric site on GLDH and changes the Km (Michaelis constant) of the enzyme.
The control of GLDH through ADP-ribosylation is particularly important in insulin-producing β cells. Beta cells secrete insulin in response to an increase in the ATP:ADP ratio, and, as amino acids are broken down by GLDH into α-ketoglutarate, this ratio rises and more insulin is secreted. SIRT4 is necessary to regulate the metabolism of amino acids as a method of controlling insulin secretion and regulating blood glucose levels.
Bovine liver glutamate dehydrogenase was found to be regulated by nucleotides in the late 1950s and early 1960s by Carl Frieden.
In addition to describing the effects of nucleotides like ADP, ATP and GTP he described in detail the different kinetic behavior of NADH and NADPH. As such it was one of the earliest enzymes to show what was later described as allosteric behavior.
The activation of mammalian GDH by L-leucine and some other hydrophobic amino acids has also been long known, however localization of the binding site was not clear. Only recently the new allosteric binding site for L-leucine was identified in a mammalian enzyme.
Mutations which alter the allosteric binding site of GTP cause permanent activation of glutamate dehydrogenase, and lead to hyperinsulinism-hyperammonemia syndrome.
Regulation.
Allosteric regulation:
This protein may use the morpheein model of allosteric regulation.
Allosteric inhibitors:
Activators:
Other Inhibitors:
Additionally, mouse GLDH shows substrate inhibition, by which GLDH activity decreases at high glutamate concentrations.
Isozymes.
Humans express the following glutamate dehydrogenase isozymes:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K_m"
},
{
"math_id": 1,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=1084263 |
10845121 | Builder's Old Measurement | Measurement of the internal volume of a sailing vessel (c. 1650–1849)
Builder's Old Measurement (BOM, bm, OM, and o.m.) is the method used in England from approximately 1650 to 1849 for calculating the cargo capacity of a ship. It is a volumetric measurement of cubic capacity. It estimated the tonnage of a ship based on length and maximum beam. It is expressed in "tons burden" and abbreviated "tons bm".
The formula is:
formula_0
where:
The Builder's Old Measurement formula remained in effect until the advent of steam propulsion. Steamships required a different method of estimating tonnage, because the ratio of length to beam was larger and a significant volume of internal space was used for boilers and machinery. In 1849, the Moorsom System was created in the United Kingdom. The Moorsom system calculates the cargo-carrying capacity in cubic feet, another method of volumetric measurement. The capacity in cubic feet is then divided by 100 cubic feet of capacity per gross ton, resulting in a tonnage expressed in tons.
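As a worked example of the formula above, the following Python sketch computes the burthen of a hypothetical ship; the dimensions used are invented for illustration.

```python
def bom_tonnage(length_ft: float, beam_ft: float) -> float:
    """Builder's Old Measurement, in tons burthen (length and beam in feet)."""
    return (length_ft - beam_ft * 3 / 5) * beam_ft * (beam_ft / 2) / 94

# Hypothetical dimensions: 100 ft length, 30 ft maximum beam.
print(round(bom_tonnage(100, 30), 1))   # ~392.6 tons bm
```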
History and derivation.
King Edward I levied the first tax on the hire of ships in England in 1303 based on tons burthen. Later, King Edward III levied a tax of 3 shillings on each "tun" of imported wine, roughly . At that time a "tun" was a wine container of 252 wine gallons, approx weighing about , a weight known today as a long ton or imperial ton. In order to estimate the capacity of a ship in terms of 'tun' for tax purposes, an early formula used in England was:
formula_1
where:
The numerator yields the ship's volume expressed in cubic feet.
If a "tun" is deemed to be equivalent to 100 cubic feet, then the tonnage is simply the number of such 100 cubic feet 'tun' units of volume.
In 1678 Thames shipbuilders used a method assuming that a ship's burden would be 3/5 of its displacement. Since tonnage is calculated by multiplying length × beam × draft × block coefficient, all divided by 35 ft3 per ton of seawater, the resulting formula would be:
formula_2
where:
Or, simplifying:
formula_3
In 1694 a new British law required that tonnage for tax purposes be calculated according to a similar formula:
formula_4
This formula remained in effect until the Builder's Old Measurement rule (above) was put into use in 1720, and then mandated by Act of Parliament in 1773.
Depth.
The height from the underside of the hull, excluding the keel itself, at the ship's midpoint, to the top of the uppermost full length deck.
Depth of hold.
Interior space; the height from the lowest part of the hull inside the ship, at its midpoint, to the ceiling that is made up of the uppermost full length deck. For old warships it is to the ceiling that is made up of the "lowermost" full length deck.
The main deck, which is used in the context of depth measurement, is usually defined as the uppermost full length deck. For the 16th century ship "Mary Rose", the main deck is the "second" uppermost full length deck. In a calculation of the tonnage of "Mary Rose" the draft was used instead of the depth.
American tons burthen.
The British took the length measurement from the outside of the stem to the outside of the sternpost, whereas the Americans measured from inside the posts. The British measured breadth from outside the planks, whereas the Americans measured the breadth from inside the planks. Lastly, the British divided by 94, whereas the Americans divided by 95.
The upshot was that American calculations gave a lower number than the British ones. The British measure yields values about 6% greater than the American. For instance, when the British measured the captured , their calculations gave her a burthen of 1533 tons, whereas the American calculations gave the burthen as 1444 tons.
The US system was in use from 1789 until 1864, when a modified version of the Moorsom System was adopted.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\text{Tonnage} = \\frac {(\\text{Length}- (\\text{Beam}\\times\\frac{3} {5})) \\times \\text{Beam} \\times \\frac {\\text{Beam}}{2}} {94}"
},
{
"math_id": 1,
"text": " \\text{Tonnage} = \\frac {\\text{Length}\\times \\text{Beam} \\times \\text{Depth}} {100}"
},
{
"math_id": 2,
"text": " \\text{Tonnage} = \\frac {\\text{Length}\\times \\text{Beam} \\times \\frac {\\text{Beam}}{2} \\times \\frac {3}{5}\\times {0.62}} {35} "
},
{
"math_id": 3,
"text": " \\text{Tonnage} = \\frac {\\text{Length}\\times\\text{Beam} \\times \\frac \\text{Beam}{2}} {94} "
},
{
"math_id": 4,
"text": " \\text{Tonnage} = \\frac {\\text{Length}\\times\\text{Beam} \\times \\text{Depth}} {94}"
}
] | https://en.wikipedia.org/wiki?curid=10845121 |