1010522
Disjunction and existence properties
In mathematical logic, the disjunction and existence properties are the "hallmarks" of constructive theories such as Heyting arithmetic and constructive set theories (Rathjen 2005). Definitions. Related properties. Rathjen (2005) lists five properties that a theory may possess. These include the disjunction property (DP), the existence property (EP), and three additional properties: These properties can only be directly expressed for theories that have the ability to quantify over natural numbers and, for CR1, quantify over functions from formula_10 to formula_10. In practice, one may say that a theory has one of these properties if a definitional extension of the theory has the property stated above (Rathjen 2005). Results. Non-examples and examples. Almost by definition, a theory that accepts excluded middle while having independent statements does not have the disjunction property. So all classical theories expressing Robinson arithmetic do not have it. Most classical theories, such as Peano arithmetic and ZFC, in turn do not validate the existence property either, e.g. because they validate the least number principle, which is an existence claim. But some classical theories, such as ZFC plus the axiom of constructibility, do have a weaker form of the existence property (Rathjen 2005). Heyting arithmetic is well known for having the disjunction property and the (numerical) existence property. While the earliest results were for constructive theories of arithmetic, many results are also known for constructive set theories (Rathjen 2005). John Myhill (1973) showed that IZF with the axiom of replacement eliminated in favor of the axiom of collection has the disjunction property, the numerical existence property, and the existence property. Michael Rathjen (2005) proved that CZF has the disjunction property and the numerical existence property. Freyd and Scedrov (1990) observed that the disjunction property holds in free Heyting algebras and free topoi. In categorical terms, in the free topos, that corresponds to the fact that the terminal object, formula_11, is not the join of two proper subobjects. Together with the existence property it translates to the assertion that formula_11 is an indecomposable projective object: the functor it represents (the global-section functor) preserves epimorphisms and coproducts. Relationship between properties. There are several relationships between the five properties discussed above. In the setting of arithmetic, the numerical existence property implies the disjunction property. The proof uses the fact that a disjunction can be rewritten as an existential formula quantifying over natural numbers: formula_12. Therefore, if formula_13 is a theorem of formula_14, so is formula_15. Thus, assuming the numerical existence property, there exists some formula_16 such that formula_17 is a theorem. Since formula_18 is a numeral, one may concretely check the value of formula_19: if formula_20 then formula_21 is a theorem and if formula_22 then formula_23 is a theorem. Harvey Friedman (1974) proved that in any recursively enumerable extension of intuitionistic arithmetic, the disjunction property implies the numerical existence property. The proof uses self-referential sentences in a way similar to the proof of Gödel's incompleteness theorems. The key step is to find a bound on the existential quantifier in a formula (∃"x")A("x"), producing a bounded existential formula (∃"x"<"n")A("x"). The bounded formula may then be written as a finite disjunction A(1)∨A(2)∨...∨A(n).
Finally, disjunction elimination may be used to show that one of the disjuncts is provable. History. Kurt Gödel (1932) stated without proof that intuitionistic propositional logic (with no additional axioms) has the disjunction property; this result was proven and extended to intuitionistic predicate logic by Gerhard Gentzen (1934, 1935). Stephen Cole Kleene (1945) proved that Heyting arithmetic has the disjunction property and the existence property. Kleene's method introduced the technique of realizability, which is now one of the main methods in the study of constructive theories (Kohlenbach 2008; Troelstra 1973).
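The coding of disjunction by a numerical existential quantifier used in the argument above can be spelled out in both directions. The following derivation is an illustrative sketch (it is not taken verbatim from the cited sources); the key point is that equality of natural numbers is decidable even intuitionistically.

```latex
\begin{align*}
A &\;\Longrightarrow\; (0=0 \to A) \wedge (0 \neq 0 \to B)
   && \text{take the witness } n = 0,\\
B &\;\Longrightarrow\; (1=0 \to A) \wedge (1 \neq 0 \to B)
   && \text{take the witness } n = 1,\\
(\bar{n}=0 \to A) \wedge (\bar{n} \neq 0 \to B)
  &\;\Longrightarrow\; A \vee B
   && \text{by cases on the decidable statement } \bar{n} = 0 .
\end{align*}
```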
[ { "math_id": 0, "text": "(\\exists x \\in \\mathbb{N})\\varphi(x)" }, { "math_id": 1, "text": "\\varphi(\\bar{n})" }, { "math_id": 2, "text": "n \\in \\mathbb{N}\\text{.}" }, { "math_id": 3, "text": "\\bar{n}" }, { "math_id": 4, "text": "T" }, { "math_id": 5, "text": "(\\forall x \\in \\mathbb{N})(\\exists y \\in \\mathbb{N})\\varphi(x,y)" }, { "math_id": 6, "text": "f_e" }, { "math_id": 7, "text": "(\\forall x)\\varphi(x,f_e(x))" }, { "math_id": 8, "text": "(\\exists f \\colon \\mathbb{N}\\to\\mathbb{N}) \\psi(f)" }, { "math_id": 9, "text": "\\psi(f_e)" }, { "math_id": 10, "text": "\\mathbb{N}" }, { "math_id": 11, "text": "\\mathbf{1}" }, { "math_id": 12, "text": " A \\vee B \\equiv (\\exists n) [ (n=0 \\to A) \\wedge (n \\neq 0 \\to B)]" }, { "math_id": 13, "text": " A \\vee B " }, { "math_id": 14, "text": " T " }, { "math_id": 15, "text": " \\exists n\\colon (n=0 \\to A) \\wedge (n \\neq 0 \\to B) " }, { "math_id": 16, "text": " s " }, { "math_id": 17, "text": " (\\bar{s}=0 \\to A) \\wedge (\\bar{s} \\neq 0 \\to B) " }, { "math_id": 18, "text": " \\bar{s} " }, { "math_id": 19, "text": " s" }, { "math_id": 20, "text": " s=0 " }, { "math_id": 21, "text": " A " }, { "math_id": 22, "text": " s \\neq 0 " }, { "math_id": 23, "text": " B " } ]
https://en.wikipedia.org/wiki?curid=1010522
10105237
Sylvester equation
In mathematics, in the field of control theory, a Sylvester equation is a matrix equation of the form: formula_0 It is named after English mathematician James Joseph Sylvester. Then given matrices "A", "B", and "C", the problem is to find the possible matrices "X" that obey this equation. All matrices are assumed to have coefficients in the complex numbers. For the equation to make sense, the matrices must have appropriate sizes, for example they could all be square matrices of the same size. But more generally, "A" and "B" must be square matrices of sizes "n" and "m" respectively, and then "X" and "C" both have "n" rows and "m" columns. A Sylvester equation has a unique solution for "X" exactly when there are no common eigenvalues of "A" and −"B". More generally, the equation "AX" + "XB" = "C" has been considered as an equation of bounded operators on a (possibly infinite-dimensional) Banach space. In this case, the condition for the uniqueness of a solution "X" is almost the same: There exists a unique solution "X" exactly when the spectra of "A" and −"B" are disjoint. Existence and uniqueness of the solutions. Using the Kronecker product notation and the vectorization operator formula_1, we can rewrite Sylvester's equation in the form formula_2 where formula_3 is of dimension formula_4, formula_5 is of dimension formula_6, formula_7 of dimension formula_8 and formula_9 is the formula_10 identity matrix. In this form, the equation can be seen as a linear system of dimension formula_11. Theorem. Given matrices formula_12 and formula_13, the Sylvester equation formula_14 has a unique solution formula_15 for any formula_16 if and only if formula_3 and formula_17 do not share any eigenvalue. Proof. The equation formula_14 is a linear system with formula_18 unknowns and the same number of equations. Hence it is uniquely solvable for any given formula_19 if and only if the homogeneous equation formula_20 admits only the trivial solution formula_21. (i) Assume that formula_3 and formula_17 do not share any eigenvalue. Let formula_7 be a solution to the abovementioned homogeneous equation. Then formula_22, which can be lifted to formula_23 for each formula_24 by mathematical induction. Consequently, formula_25 for any polynomial formula_26. In particular, let formula_26 be the characteristic polynomial of formula_3. Then formula_27 due to the Cayley–Hamilton theorem; meanwhile, the spectral mapping theorem tells us formula_28 where formula_29 denotes the spectrum of a matrix. Since formula_3 and formula_17 do not share any eigenvalue, formula_30 does not contain zero, and hence formula_31 is nonsingular. Thus formula_32 as desired. This proves the "if" part of the theorem. (ii) Now assume that formula_3 and formula_17 share an eigenvalue formula_33. Let formula_34 be a corresponding right eigenvector for formula_3, formula_35 be a corresponding left eigenvector for formula_17, and formula_36. Then formula_37, and formula_38 Hence formula_7 is a nontrivial solution to the aforesaid homogeneous equation, justifying the "only if" part of the theorem. Q.E.D. As an alternative to the spectral mapping theorem, the nonsingularity of formula_31 in part (i) of the proof can also be demonstrated by the Bézout's identity for coprime polynomials. Let formula_39 be the characteristic polynomial of formula_17. Since formula_3 and formula_17 do not share any eigenvalue, formula_26 and formula_39 are coprime. Hence there exist polynomials formula_40 and formula_41 such that formula_42. 
By the Cayley–Hamilton theorem, formula_43. Thus formula_44, implying that formula_31 is nonsingular. The theorem remains true for real matrices with the caveat that one considers their complex eigenvalues. The proof for the "if" part is still applicable; for the "only if" part, note that both formula_45 and formula_46 satisfy the homogeneous equation formula_47, and they cannot be zero simultaneously. Roth's removal rule. Given two square complex matrices "A" and "B", of size "n" and "m", and a matrix "C" of size "n" by "m", one can ask when the following two square matrices of size "n" + "m" are similar to each other: formula_48 and formula_49. The answer is that these two matrices are similar exactly when there exists a matrix "X" such that "AX" − "XB" = "C". In other words, "X" is a solution to a Sylvester equation. This is known as Roth's removal rule. One easily checks one direction: If "AX" − "XB" = "C" then formula_50 Roth's removal rule does not generalize to infinite-dimensional bounded operators on a Banach space. Nevertheless, Roth's removal rule does generalize to systems of Sylvester equations. Numerical solutions. A classical algorithm for the numerical solution of the Sylvester equation is the Bartels–Stewart algorithm, which consists of transforming formula_3 and formula_5 into Schur form by a QR algorithm, and then solving the resulting triangular system via back-substitution. This algorithm, whose computational cost is formula_51 arithmetical operations, is used, among others, by LAPACK and the codice_0 function in GNU Octave. See also the codice_1 function in that language. In some specific image processing applications, the derived Sylvester equation has a closed-form solution. Notes. <templatestyles src="Reflist/styles.css" />
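The Bartels–Stewart approach described above is what SciPy's solve_sylvester implements, and the Kronecker/vectorization form formula_2 can be checked directly against it. The following is a small illustrative sketch; the matrices are arbitrary random examples, not taken from the article.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
C = rng.standard_normal((n, m))

# Bartels-Stewart solver for A X + X B = C (Schur forms + back-substitution).
X = solve_sylvester(A, B, C)
print(np.allclose(A @ X + X @ B, C))                  # True

# Same solution via the vectorized linear system
# (I_m kron A + B^T kron I_n) vec(X) = vec(C), with column-major vec.
K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
x = np.linalg.solve(K, C.flatten(order="F"))
print(np.allclose(x.reshape((n, m), order="F"), X))   # True

# Uniqueness holds because A and -B (generically, for random matrices) share no eigenvalue.
print(np.linalg.eigvals(A), np.linalg.eigvals(-B))
```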
[ { "math_id": 0, "text": "A X + X B = C." }, { "math_id": 1, "text": "\\operatorname{vec}" }, { "math_id": 2, "text": " (I_m \\otimes A + B^T \\otimes I_n) \\operatorname{vec}X = \\operatorname{vec}C," }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "n\\! \\times\\! n" }, { "math_id": 5, "text": "B" }, { "math_id": 6, "text": "m\\!\\times\\!m" }, { "math_id": 7, "text": "X" }, { "math_id": 8, "text": "n\\!\\times\\!m" }, { "math_id": 9, "text": "I_k" }, { "math_id": 10, "text": "k \\times k" }, { "math_id": 11, "text": "mn \\times mn" }, { "math_id": 12, "text": "A\\in \\mathbb{C}^{n\\times n}" }, { "math_id": 13, "text": "B\\in \\mathbb{C}^{m\\times m}" }, { "math_id": 14, "text": "AX+XB=C" }, { "math_id": 15, "text": "X\\in \\mathbb{C}^{n\\times m}" }, { "math_id": 16, "text": "C\\in\\mathbb{C}^{n\\times m}" }, { "math_id": 17, "text": "-B" }, { "math_id": 18, "text": "mn" }, { "math_id": 19, "text": "C" }, { "math_id": 20, "text": "\nAX+XB=0\n" }, { "math_id": 21, "text": "0" }, { "math_id": 22, "text": "AX=X(-B)" }, { "math_id": 23, "text": "\nA^kX = X(-B)^k\n" }, { "math_id": 24, "text": "k \\ge 0" }, { "math_id": 25, "text": "\np(A) X = X p(-B)\n" }, { "math_id": 26, "text": "p" }, { "math_id": 27, "text": "p(A)=0" }, { "math_id": 28, "text": "\n\\sigma(p(-B)) = p(\\sigma(-B)),\n" }, { "math_id": 29, "text": "\\sigma(\\cdot)" }, { "math_id": 30, "text": "p(\\sigma(-B))" }, { "math_id": 31, "text": "p(-B)" }, { "math_id": 32, "text": "X= 0" }, { "math_id": 33, "text": "\\lambda" }, { "math_id": 34, "text": "u" }, { "math_id": 35, "text": "v" }, { "math_id": 36, "text": "X=u{v}^*" }, { "math_id": 37, "text": "X\\neq 0" }, { "math_id": 38, "text": "\nAX+XB = A(uv^*)-(uv^*)(-B) = \\lambda uv^*-\\lambda uv^* = 0.\n" }, { "math_id": 39, "text": "q" }, { "math_id": 40, "text": "f" }, { "math_id": 41, "text": "g" }, { "math_id": 42, "text": "p(z)f(z)+q(z)g(z)\\equiv 1" }, { "math_id": 43, "text": "q(-B)=0" }, { "math_id": 44, "text": "p(-B)f(-B)=I" }, { "math_id": 45, "text": "\\mathrm{Re}(uv^*)" }, { "math_id": 46, "text": "\\mathrm{Im}(uv^*)" }, { "math_id": 47, "text": "AX+XB=0" }, { "math_id": 48, "text": " \\begin{bmatrix} A & C \\\\ 0 & B \\end{bmatrix}" }, { "math_id": 49, "text": "\\begin{bmatrix} A & 0 \\\\0&B \\end{bmatrix}" }, { "math_id": 50, "text": "\\begin{bmatrix}I_n & X \\\\ 0 & I_m \\end{bmatrix} \\begin{bmatrix} A&C\\\\0&B \\end{bmatrix} \\begin{bmatrix} I_n & -X \\\\ 0& I_m \\end{bmatrix} = \\begin{bmatrix} A&0\\\\0&B \\end{bmatrix}." }, { "math_id": 51, "text": "\\mathcal{O}(n^3)" } ]
https://en.wikipedia.org/wiki?curid=10105237
10105571
Fort space
Examples of topological spaces In mathematics, there are a few topological spaces named after M. K. Fort, Jr. Fort space. Fort space is defined by taking an infinite set "X", with a particular point "p" in "X", and declaring open the subsets "A" of "X" such that: "A" does not contain "p", or "A" contains all but finitely many points of "X". The subspace formula_0 has the discrete topology and is open and dense in "X". The space "X" is homeomorphic to the one-point compactification of an infinite discrete space. Modified Fort space. Modified Fort space is similar but has two particular points. So take an infinite set "X" with two distinct points "p" and "q", and declare open the subsets "A" of "X" such that: "A" contains neither "p" nor "q", or "A" contains all but finitely many points of "X". The space "X" is compact and T1, but not Hausdorff. Fortissimo space. Fortissimo space is defined by taking an uncountable set "X", with a particular point "p" in "X", and declaring open the subsets "A" of "X" such that: "A" does not contain "p", or "A" contains all but countably many points of "X". The subspace formula_0 has the discrete topology and is open and dense in "X". The space "X" is not compact, but it is a Lindelöf space. It is obtained by taking an uncountable discrete space, adding one point and defining a topology such that the resulting space is Lindelöf and contains the original space as a dense subspace. Similarly to Fort space being the one-point compactification of an infinite discrete space, one can describe Fortissimo space as the "one-point Lindelöfication" of an uncountable discrete space. Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "X\\setminus\\{p\\}" } ]
https://en.wikipedia.org/wiki?curid=10105571
10106425
Orr–Sommerfeld equation
The Orr–Sommerfeld equation, in fluid dynamics, is an eigenvalue equation describing the linear two-dimensional modes of disturbance to a viscous parallel flow. The solution to the Navier–Stokes equations for a parallel, laminar flow can become unstable if certain conditions on the flow are satisfied, and the Orr–Sommerfeld equation determines precisely what the conditions for hydrodynamic stability are. The equation is named after William McFadden Orr and Arnold Sommerfeld, who derived it at the beginning of the 20th century. Formulation. The equation is derived by solving a linearized version of the Navier–Stokes equation for the perturbation velocity field formula_0, where formula_1 is the unperturbed or basic flow. The perturbation velocity has the wave-like solution formula_2 (real part understood). Using this knowledge, and the streamfunction representation for the flow, the following dimensional form of the Orr–Sommerfeld equation is obtained: formula_3, where formula_4 is the dynamic viscosity of the fluid, formula_5 is its density, and formula_6 is the potential or stream function. In the case of zero viscosity (formula_7), the equation reduces to Rayleigh's equation. The equation can be written in non-dimensional form by measuring velocities according to a scale set by some characteristic velocity formula_8, and by measuring lengths according to channel depth formula_9. Then the equation takes the form formula_10, where formula_11 is the Reynolds number of the base flow. The relevant boundary conditions are the no-slip boundary conditions at the channel top and bottom formula_12 and formula_13, formula_14 at formula_12 and formula_15 in the case where formula_6 is the potential function. Or: formula_16 at formula_12 and formula_15 in the case where formula_6 is the stream function. The eigenvalue parameter of the problem is formula_17 and the eigenvector is formula_6. If the imaginary part of the wave speed formula_17 is positive, then the base flow is unstable, and the small perturbation introduced to the system is amplified in time. Solutions. For all but the simplest of velocity profiles formula_18, numerical or asymptotic methods are required to calculate solutions. Some typical flow profiles are discussed below. In general, the spectrum of the equation is discrete and infinite for a bounded flow, while for unbounded flows (such as boundary-layer flow), the spectrum contains both continuous and discrete parts. For plane Poiseuille flow, it has been shown that the flow is unstable (i.e. one or more eigenvalues formula_17 has a positive imaginary part) for some formula_19 when formula_20 and the neutrally stable mode at formula_21 having formula_22, formula_23. To see the stability properties of the system, it is customary to plot a dispersion curve, that is, a plot of the growth rate formula_24 as a function of the wavenumber formula_19. The first figure shows the spectrum of the Orr–Sommerfeld equation at the critical values listed above. This is a plot of the eigenvalues (in the form formula_25) in the complex plane. The rightmost eigenvalue is the most unstable one. At the critical values of Reynolds number and wavenumber, the rightmost eigenvalue is exactly zero. For higher (lower) values of Reynolds number, the rightmost eigenvalue shifts into the positive (negative) half of the complex plane. Then, a fuller picture of the stability properties is given by a plot exhibiting the functional dependence of this eigenvalue; this is shown in the second figure. 
On the other hand, the spectrum of eigenvalues for Couette flow indicates stability, at all Reynolds numbers. However, in experiments, Couette flow is found to be unstable to small, but "finite," perturbations for which the linear theory, and the Orr–Sommerfeld equation do not apply. It has been argued that the non-normality of the eigenvalue problem associated with Couette (and indeed, Poiseuille) flow might explain that observed instability. That is, the eigenfunctions of the Orr–Sommerfeld operator are complete but non-orthogonal. Then, the energy of the disturbance contains contributions from all eigenfunctions of the Orr–Sommerfeld equation. Even if the energy associated with each eigenvalue considered separately is decaying exponentially in time (as predicted by the Orr–Sommerfeld analysis for the Couette flow), the cross terms arising from the non-orthogonality of the eigenvalues can increase transiently. Thus, the total energy increases transiently (before tending asymptotically to zero). The argument is that if the magnitude of this transient growth is sufficiently large, it destabilizes the laminar flow, however this argument has not been universally accepted. A nonlinear theory explaining transition, has also been proposed. Although that theory does include linear transient growth, the focus is on 3D nonlinear processes that are strongly suspected to underlie transition to turbulence in shear flows. The theory has led to the construction of so-called complete 3D steady states, traveling waves and time-periodic solutions of the Navier-Stokes equations that capture many of the key features of transition and coherent structures observed in the near wall region of turbulent shear flows. Even though "solution" usually implies the existence of an analytical result, it is common practice in fluid mechanics to refer to numerical results as "solutions" - regardless of whether the approximated solutions satisfy the Navier-Stokes equations in a mathematically satisfactory way or not. It is postulated that transition to turbulence involves the dynamic state of the fluid evolving from one solution to the next. The theory is thus predicated upon the actual existence of such solutions (many of which have yet to be observed in a physical experimental setup). This relaxation on the requirement of exact solutions allows a great deal of flexibility, since exact solutions are extremely difficult to obtain (contrary to numerical solutions), at the expense of rigor and (possibly) correctness. Thus, even though not as rigorous as previous approaches to transition, it has gained immense popularity. An extension of the Orr–Sommerfeld equation to the flow in porous media has been recently suggested. Mathematical methods for free-surface flows. For Couette flow, it is possible to make mathematical progress in the solution of the Orr–Sommerfeld equation. In this section, a demonstration of this method is given for the case of free-surface flow, that is, when the upper lid of the channel is replaced by a free surface. Note first of all that it is necessary to modify upper boundary conditions to take account of the free surface. In non-dimensional form, these conditions now read formula_26 at formula_27, formula_28, formula_29 at formula_30. The first free-surface condition is the statement of continuity of tangential stress, while the second condition relates the normal stress to the surface tension. Here formula_31 are the Froude and Weber numbers respectively. 
For Couette flow formula_32, the four linearly independent solutions to the non-dimensional Orr–Sommerfeld equation are, formula_33, formula_34 formula_35 where formula_36 is the Airy function of the first kind. Substitution of the superposition solution formula_37 into the four boundary conditions gives four equations in the four unknown constants formula_38. For the equations to have a non-trivial solution, the determinant condition formula_39 must be satisfied. This is a single equation in the unknown "c", which can be solved numerically or by asymptotic methods. It can be shown that for a range of wavenumbers formula_19 and for sufficiently large Reynolds numbers, the growth rate formula_40 is positive. References. <templatestyles src="Reflist/styles.css" />
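For plane Poiseuille flow, the temporal spectrum described above is routinely computed with Chebyshev collocation. The following is a minimal sketch in the spirit of the classical treatment in Trefethen's "Spectral Methods in MATLAB" (Program 40); it assumes the non-dimensional base flow U(z) = 1 − z², hard-wires the wavenumber α = 1 (close to the critical value), and returns eigenvalues λ = −iαc, so an eigenvalue with positive real part signals instability.

```python
import numpy as np
from scipy.linalg import eig

def cheb(N):
    """Chebyshev differentiation matrix and Gauss-Lobatto points (after Trefethen)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x

def orr_sommerfeld_eigs(Re=5772.22, N=100):
    """Spectrum of the Orr-Sommerfeld operator for plane Poiseuille flow, alpha = 1."""
    D, x = cheb(N)
    D2 = (D @ D)[1:N, 1:N]
    # Fourth derivative with the clamped conditions phi = phi' = 0 built in.
    S = np.diag(np.hstack([0.0, 1.0 / (1.0 - x[1:N] ** 2), 0.0]))
    D4 = (np.diag(1.0 - x**2) @ np.linalg.matrix_power(D, 4)
          - 8.0 * np.diag(x) @ np.linalg.matrix_power(D, 3)
          - 12.0 * (D @ D)) @ S
    D4 = D4[1:N, 1:N]
    I = np.eye(N - 1)
    U = np.diag(1.0 - x[1:N] ** 2)           # base flow U(z) = 1 - z^2
    A = (D4 - 2.0 * D2 + I) / Re - 2.0j * I - 1.0j * U @ (D2 - I)
    B = D2 - I
    return eig(A, B)[0]                      # generalized eigenvalues lambda = -i*alpha*c

lam = orr_sommerfeld_eigs()
print("largest growth rate:", lam.real.max())   # marginally stable (about zero) near Re_c
```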
[ { "math_id": 0, "text": "\\mathbf{u} = \\left(U(z)+u'(x,z,t), 0 ,w'(x,z,t)\\right)" }, { "math_id": 1, "text": "(U(z), 0, 0)" }, { "math_id": 2, "text": "\\mathbf{u}' \\propto \\exp(i \\alpha (x - c t))" }, { "math_id": 3, "text": "\\frac{\\mu}{i\\alpha\\rho} \\left({d^2 \\over d z^2} - \\alpha^2\\right)^2 \\varphi = (U - c)\\left({d^2 \\over d z^2} - \\alpha^2\\right) \\varphi - U'' \\varphi" }, { "math_id": 4, "text": "\\mu" }, { "math_id": 5, "text": "\\rho" }, { "math_id": 6, "text": "\\varphi" }, { "math_id": 7, "text": "\\mu=0" }, { "math_id": 8, "text": "U_0" }, { "math_id": 9, "text": "h" }, { "math_id": 10, "text": "{1 \\over i \\alpha \\, Re} \\left({d^2 \\over d z^2} - \\alpha^2\\right)^2 \\varphi = (U - c)\\left({d^2 \\over d z^2} - \\alpha^2\\right) \\varphi - U'' \\varphi" }, { "math_id": 11, "text": "Re=\\frac{\\rho U_0 h}{\\mu}" }, { "math_id": 12, "text": "z = z_1" }, { "math_id": 13, "text": "z = z_2" }, { "math_id": 14, "text": "\\alpha \\varphi = {d \\varphi \\over d z} = 0" }, { "math_id": 15, "text": "z = z_2," }, { "math_id": 16, "text": "\\alpha \\varphi = {d \\varphi \\over d x} = 0" }, { "math_id": 17, "text": "c" }, { "math_id": 18, "text": "U" }, { "math_id": 19, "text": "\\alpha" }, { "math_id": 20, "text": "Re > Re_c = 5772.22" }, { "math_id": 21, "text": "Re = Re_c" }, { "math_id": 22, "text": "\\alpha_c = 1.02056" }, { "math_id": 23, "text": "c_r = 0.264002" }, { "math_id": 24, "text": "\\text{Im}(\\alpha{c})" }, { "math_id": 25, "text": "\\lambda=-i\\alpha{c}" }, { "math_id": 26, "text": "\\varphi={d \\varphi \\over d z}=0," }, { "math_id": 27, "text": "z = 0" }, { "math_id": 28, "text": "\\frac{d^2\\varphi}{dz^2}+\\alpha^2\\varphi=0" }, { "math_id": 29, "text": "\\Omega\\equiv\\frac{d^3\\varphi}{dz^3}+i\\alpha Re\\left[\\left(c-U\\left(z_2=1\\right)\\right)\\frac{d\\varphi}{dz}+\\varphi\\right]-i\\alpha Re\\left(\\frac{1}{Fr}+\\frac{\\alpha^2}{We}\\right)\\frac{\\varphi}{c-U\\left(z_2=1\\right)}=0," }, { "math_id": 30, "text": "\\,z=1" }, { "math_id": 31, "text": "Fr=\\frac{U_0^2}{gh},\\,\\,\\ We=\\frac{\\rho u_0^2 h}{\\sigma}" }, { "math_id": 32, "text": "U\\left(z\\right)=z" }, { "math_id": 33, "text": "\\chi_1\\left(z\\right)=\\sinh\\left(\\alpha z\\right),\\qquad \\chi_2\\left(z\\right)=\\cosh\\left(\\alpha z\\right)" }, { "math_id": 34, "text": "\\chi_3\\left(z\\right)=\\frac{1}{\\alpha}\\int_\\infty^z\\sinh\\left[\\alpha\\left(z-\\xi\\right)\\right]Ai\\left[e^{i\\pi/6}\\left(\\alpha Re\\right)^{1/3}\\left(\\xi-c-\\frac{i\\alpha}{Re}\\right)\\right]d\\xi," }, { "math_id": 35, "text": "\\chi_4\\left(z\\right)=\\frac{1}{\\alpha}\\int_\\infty^z\\sinh\\left[\\alpha\\left(z-\\xi\\right)\\right]Ai\\left[e^{5i\\pi/6}\\left(\\alpha Re\\right)^{1/3}\\left(\\xi-c-\\frac{i\\alpha}{Re}\\right)\\right]d\\xi," }, { "math_id": 36, "text": "Ai\\left(\\cdot\\right)" }, { "math_id": 37, "text": "\\varphi=\\sum_{i=1}^4 c_i\\chi_i\\left(z\\right)" }, { "math_id": 38, "text": "c_i" }, { "math_id": 39, "text": 
"\\left|\\begin{array}{cccc}\\chi_1\\left(0\\right)&\\chi_2\\left(0\\right)&\\chi_3\\left(0\\right)&\\chi_4\\left(0\\right)\\\\\n\\chi_1'\\left(0\\right)&\\chi_2'\\left(0\\right)&\\chi_3'\\left(0\\right)&\\chi_4'\\left(0\\right)\\\\\n\\Omega_1\\left(1\\right)&\\Omega_2\\left(1\\right)&\\Omega_3\\left(1\\right)&\\Omega_4\\left(1\\right)\\\\\n\\chi_1''\\left(1\\right)+\\alpha^2\\chi_1\\left(1\\right)&\\chi_2''\\left(1\\right)+\\alpha^2\\chi_2\\left(1\\right)&\\chi_3''\\left(1\\right)+\\alpha^2\\chi_3\\left(1\\right)&\\chi_4''\\left(1\\right)+\\alpha^2\\chi_4\\left(1\\right)\\end{array}\\right|=0\n" }, { "math_id": 40, "text": "\\alpha c_{\\text{i}}" } ]
https://en.wikipedia.org/wiki?curid=10106425
1010712
Closed convex function
Term in mathematics In mathematics, a function formula_0 is said to be closed if for each formula_1, the sublevel set formula_2 is a closed set. Equivalently, the function formula_4 is closed if and only if its epigraph, defined by formula_3, is a closed set. This definition is valid for any function, but it is most often used for convex functions. A proper convex function is closed if and only if it is lower semi-continuous. References. <templatestyles src="Reflist/styles.css" />
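Two standard worked examples, added here only for illustration:

```latex
% f(x) = 1/x with \operatorname{dom} f = (0,\infty) is closed: every nonempty
% sublevel set is closed, since for \alpha > 0
\{\, x \in (0,\infty) \mid 1/x \le \alpha \,\} = [\,1/\alpha,\ \infty).
% f(x) = x with \operatorname{dom} f = (0,1) is not closed: for instance
\{\, x \in (0,1) \mid x \le \tfrac12 \,\} = (0,\ \tfrac12\,]
% is not closed in \mathbb{R}; equivalently, the epigraph is not a closed set.
```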
[ { "math_id": 0, "text": "f: \\mathbb{R}^n \\rightarrow \\mathbb{R} " }, { "math_id": 1, "text": " \\alpha \\in \\mathbb{R}" }, { "math_id": 2, "text": " \\{ x \\in \\mbox{dom} f \\vert f(x) \\leq \\alpha \\} " }, { "math_id": 3, "text": " \\mbox{epi} f = \\{ (x,t) \\in \\mathbb{R}^{n+1} \\vert x \\in \\mbox{dom} f,\\; f(x) \\leq t\\} " }, { "math_id": 4, "text": " f " }, { "math_id": 5, "text": "\\mbox{dom} f " }, { "math_id": 6, "text": " f" }, { "math_id": 7, "text": "f: \\mathbb R^n \\rightarrow \\mathbb R " }, { "math_id": 8, "text": "\\infty" } ]
https://en.wikipedia.org/wiki?curid=1010712
10109430
Reynolds analogy
The Reynolds Analogy is popularly known to relate turbulent momentum and heat transfer. That is because in a turbulent flow (in a pipe or in a boundary layer) the transport of momentum and the transport of heat largely depend on the same turbulent eddies: the velocity and the temperature profiles have the same shape. The main assumption is that heat flux q/A in a turbulent system is analogous to momentum flux τ, which suggests that the ratio τ/(q/A) must be constant for all radial positions. The complete Reynolds analogy is: formula_0 Experimental data for gas streams agree approximately with the above equation if the Schmidt and Prandtl numbers are near 1.0 and only skin friction is present in flow past a flat plate or inside a pipe. When liquids are present and/or form drag is present, the analogy is conventionally known to be invalid. In 2008, the qualitative form of validity of Reynolds' analogy was revisited for laminar flow of incompressible fluid with variable dynamic viscosity (μ). It was shown that the inverse dependence of Reynolds number ("Re") and skin friction coefficient ("c"f) is the basis for validity of the Reynolds' analogy, in laminar convective flows with constant and variable μ. For μ = const. it reduces to the popular form of Stanton number ("St") increasing with increasing "Re", whereas for variable μ it reduces to "St" increasing with decreasing "Re". Consequently, the Chilton-Colburn analogy of "St"·"Pr"^(2/3) increasing with increasing "c"f is qualitatively valid whenever the Reynolds' analogy is valid. Further, the validity of the Reynolds' analogy is linked to the applicability of Prigogine's Theorem of Minimum Entropy Production. Thus, Reynolds' analogy is valid for flows that are close to developed, for which changes in the gradients of field variables (velocity and temperature) along the flow are small.
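A small numerical illustration of the analogy formula_0 as stated above; the operating conditions are assumed values for an air-like gas with Pr near 1, chosen only to show the arithmetic.

```python
# Reynolds analogy: f/2 = St = h/(cp*G) = k'_c/V_av, so a measured friction
# factor gives estimates of the heat and mass transfer coefficients (Pr, Sc ~ 1).
f = 0.0046      # Fanning friction factor (assumed, e.g. from a correlation)
cp = 1007.0     # specific heat, J/(kg K)              (assumed, air)
G = 12.0        # mass velocity rho*V_av, kg/(m^2 s)   (assumed)
rho = 1.2       # density, kg/m^3                      (assumed)

St = f / 2.0                 # Stanton number
h = St * cp * G              # heat transfer coefficient, W/(m^2 K)
kc = St * (G / rho)          # mass transfer coefficient k'_c = (f/2) * V_av, m/s
print(f"St = {St:.4f}, h = {h:.1f} W/(m^2 K), k'_c = {kc:.3f} m/s")
```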
[ { "math_id": 0, "text": " \\frac{f}{2} = \\frac{h}{C_p\\times G} = \\frac{k'_c}{V_{av}} " } ]
https://en.wikipedia.org/wiki?curid=10109430
10109649
Courant algebroid
In a field of mathematics known as differential geometry, a Courant algebroid was originally introduced by Zhang-Ju Liu, Alan Weinstein and Ping Xu in their investigation of doubles of Lie bialgebroids in 1997. Liu, Weinstein and Xu named it after Courant, who in 1990 had implicitly devised the standard prototype of a Courant algebroid through his discovery of a skew-symmetric bracket on formula_0, today called the Courant bracket, which fails to satisfy the Jacobi identity. Both this standard example and the double of a Lie bialgebra are special instances of Courant algebroids. Definition. A Courant algebroid consists of the data of a vector bundle formula_1 with a bracket formula_2, a non-degenerate fiber-wise inner product formula_3, and a bundle map formula_4 (called anchor) subject to the following four axioms: formula_5, formula_6, formula_7, and formula_8, where formula_9 are sections of "formula_10" and "formula_11" is a smooth function on the base manifold "formula_12". The map "formula_13" is the composition formula_14, with "formula_15" the de Rham differential, formula_16 the dual map of formula_17, and "formula_18" the isomorphism formula_19 induced by the inner product. Skew-Symmetric Definition. An alternative definition can be given to make the bracket skew-symmetric as formula_20 This no longer satisfies the Jacobi identity axiom above. It instead fulfills a homotopic Jacobi identity, formula_21, where "formula_22" is formula_23 The Leibniz rule and the invariance of the scalar product become modified by the relation formula_24 and the violation of skew-symmetry gets replaced by the axiom formula_25 The skew-symmetric bracket "formula_26" together with the derivation "formula_27" and the Jacobiator "formula_22" form a strongly homotopic Lie algebra. Properties. The bracket "formula_28" is not skew-symmetric as one can see from the third axiom. Instead it fulfills a certain Jacobi identity (first axiom) and a Leibniz rule (second axiom). From these two axioms one can derive that the anchor map "formula_17" is a morphism of brackets: formula_29 The fourth rule is an invariance of the inner product under the bracket. Polarization leads to formula_30 Examples. An example of a Courant algebroid is given by the Dorfman bracket on the direct sum formula_0 with a twist introduced by Ševera (1998), defined as: formula_31 where "formula_32" are vector fields, formula_33 are 1-forms and "formula_34" is a closed 3-form twisting the bracket. This bracket is used to describe the integrability of generalized complex structures. A more general example arises from a Lie algebroid "formula_35" whose induced differential on formula_36 will be written as "formula_37" again. Then use the same formula as for the Dorfman bracket with "formula_34" an "A"-3-form closed under "formula_37". Another example of a Courant algebroid is a quadratic Lie algebra, i.e. a Lie algebra with an invariant scalar product. Here the base manifold is just a point and thus the anchor map (and "formula_27") are trivial. The example described in the paper by Weinstein et al. comes from a Lie bialgebroid, i.e. "formula_35" a Lie algebroid (with anchor formula_38 and bracket formula_39), also its dual formula_36 a Lie algebroid (inducing the differential formula_40 on formula_41) and formula_42 (where on the right-hand side you extend the "formula_35"-bracket to formula_43 using the graded Leibniz rule). This notion is symmetric in "formula_35" and formula_36 (see Roytenberg). 
Here formula_44 with anchor formula_45 and the bracket is the skew-symmetrization of the above in formula_46 and "formula_47" (equivalently in "formula_48" and formula_49): formula_50 Dirac structures. Given a Courant algebroid with the inner product formula_51 of split signature (e.g. the standard one formula_0), a Dirac structure is a maximally isotropic integrable vector subbundle "formula_52", i.e. formula_53, formula_54, formula_55. Examples. As discovered by Courant, and in parallel by Dorfman, the graph of a 2-form formula_56 is maximally isotropic and moreover integrable if and only if formula_57, i.e. the 2-form is closed under the de Rham differential, i.e. is a presymplectic structure. A second class of examples arises from bivectors formula_58 whose graph is maximally isotropic and integrable if and only if formula_59, i.e. formula_17 is a Poisson bivector on "formula_12". Generalized complex structures. Given a Courant algebroid with inner product of split signature, a generalized complex structure "formula_52" is a Dirac structure in the complexified Courant algebroid with the additional property formula_60 where formula_61 means complex conjugation with respect to the standard complex structure on the complexification. As studied in detail by Gualtieri, generalized complex structures permit the study of geometry analogous to complex geometry. Examples. Examples are, besides presymplectic and Poisson structures, also the graph of a complex structure formula_62.
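As a concrete illustration of the (untwisted) Dorfman bracket formula_31 on formula_0, consider the following computation over the plane; the sections are arbitrary choices made for this example and are not taken from the literature cited above.

```latex
% M = R^2 with coordinates (x,y), twist H = 0, and sections
%   \phi = \partial_x + y\,dx, \qquad \psi = \partial_y + x\,dy .
% Since [\partial_x,\partial_y] = 0,
\mathcal{L}_{\partial_x}(x\,dy) = dy, \qquad
\iota_{\partial_y}\,d(y\,dx) = \iota_{\partial_y}(dy \wedge dx) = dx,
% so the Dorfman bracket of the two sections is
[\phi,\psi] = 0 + dy - dx,
% a pure 1-form, while the pairing of \phi and \psi vanishes because
(y\,dx)(\partial_y) = 0 \quad\text{and}\quad (x\,dy)(\partial_x) = 0 .
```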
[ { "math_id": 0, "text": "TM\\oplus T^*M" }, { "math_id": 1, "text": "E\\to M" }, { "math_id": 2, "text": "[\\cdot,\\cdot]:\\Gamma E \\times \\Gamma E \\to \\Gamma E" }, { "math_id": 3, "text": "\\langle \\cdot, \\cdot \\rangle: E\\times E\\to M\\times\\R" }, { "math_id": 4, "text": "\\rho:E\\to TM " }, { "math_id": 5, "text": "[\\phi, [\\chi, \\psi]] = [[\\phi, \\chi], \\psi] + [\\chi, [\\phi, \\psi]]" }, { "math_id": 6, "text": "[\\phi, f\\psi] = \\rho(\\phi)f\\psi +f[\\phi, \\psi]" }, { "math_id": 7, "text": "[\\phi,\\psi] + [\\psi,\\phi] = \\tfrac12 D\\langle \\phi,\\psi \\rangle" }, { "math_id": 8, "text": "\\rho(\\phi)\\langle \\psi,\\chi \\rangle= \\langle [\\phi,\\psi],\\chi \\rangle + \\langle \\psi, [\\phi,\\chi] \\rangle " }, { "math_id": 9, "text": "\\phi, \\chi, \\psi" }, { "math_id": 10, "text": "E" }, { "math_id": 11, "text": "f" }, { "math_id": 12, "text": "M" }, { "math_id": 13, "text": "D: \\mathcal{C}^\\infty(M) \\to \\Gamma E" }, { "math_id": 14, "text": "\\kappa^{-1}\\rho^T d: \\mathcal{C}^\\infty(M) \\to \\Gamma E" }, { "math_id": 15, "text": "d: \\mathcal{C}^\\infty(M) \\to \\Omega^1 (M)" }, { "math_id": 16, "text": "\\rho^T" }, { "math_id": 17, "text": "\\rho" }, { "math_id": 18, "text": "\\kappa" }, { "math_id": 19, "text": "E \\to E^*" }, { "math_id": 20, "text": "[[\\phi,\\psi]]= \\tfrac12\\big([\\phi,\\psi]-[\\psi,\\phi]\\big.)" }, { "math_id": 21, "text": " [[\\phi,[[\\psi,\\chi]]\\,]] +\\text{cycl.} = DT(\\phi,\\psi,\\chi)" }, { "math_id": 22, "text": "T" }, { "math_id": 23, "text": "T(\\phi,\\psi,\\chi)=\\frac13\\langle [\\phi,\\psi],\\chi\\rangle +\\text{cycl.}" }, { "math_id": 24, "text": " [[\\phi,\\psi]] = [\\phi,\\psi] -\\tfrac12 D\\langle \\phi,\\psi\\rangle" }, { "math_id": 25, "text": " \\rho\\circ D = 0 " }, { "math_id": 26, "text": "[[\\cdot,\\cdot]]" }, { "math_id": 27, "text": "D" }, { "math_id": 28, "text": "[\\cdot,\\cdot]" }, { "math_id": 29, "text": " \\rho[\\phi,\\psi] = [\\rho(\\phi),\\rho(\\psi)] ." }, { "math_id": 30, "text": " \\rho(\\phi)\\langle \\chi,\\psi\\rangle= \\langle [\\phi,\\chi],\\psi\\rangle +\\langle \\chi,[\\phi,\\psi]\\rangle ." 
}, { "math_id": 31, "text": " [X+\\xi, Y+\\eta] = [X,Y]+(\\mathcal{L}_X\\,\\eta -\\iota_Y d\\xi +\\iota_X \\iota_Y H)" }, { "math_id": 32, "text": "X,Y" }, { "math_id": 33, "text": "\\xi, \\eta" }, { "math_id": 34, "text": "H" }, { "math_id": 35, "text": "A" }, { "math_id": 36, "text": "A^*" }, { "math_id": 37, "text": "d" }, { "math_id": 38, "text": "\\rho_A" }, { "math_id": 39, "text": "[.,.]_A" }, { "math_id": 40, "text": "d_{A^*}" }, { "math_id": 41, "text": "\\wedge^* A" }, { "math_id": 42, "text": "d_{A^*}[X,Y]_A=[d_{A^*}X,Y]_A+[X,d_{A^*}Y]_A" }, { "math_id": 43, "text": "\\wedge^*A" }, { "math_id": 44, "text": "E=A\\oplus A^*" }, { "math_id": 45, "text": "\\rho(X+\\alpha)=\\rho_A(X)+\\rho_{A^*}(\\alpha)" }, { "math_id": 46, "text": "X" }, { "math_id": 47, "text": "\\alpha" }, { "math_id": 48, "text": "Y" }, { "math_id": 49, "text": "\\beta" }, { "math_id": 50, "text": "[X+\\alpha,Y+\\beta]= ([X,Y]_A +\\mathcal{L}^{A^*}_{\\alpha}Y-\\iota_\\beta d_{A^*}X) +([\\alpha,\\beta]_{A^*} +\\mathcal{L}^A_X\\beta-\\iota_Yd_{A}\\alpha)" }, { "math_id": 51, "text": "\\langle \\cdot, \\cdot \\rangle" }, { "math_id": 52, "text": "L \\to M" }, { "math_id": 53, "text": " \\langle L,L\\rangle \\equiv 0" }, { "math_id": 54, "text": " \\mathrm{rk}\\,L=\\tfrac12\\mathrm{rk}\\,E" }, { "math_id": 55, "text": " [\\Gamma L,\\Gamma L]\\subset \\Gamma L" }, { "math_id": 56, "text": "\\omega \\in \\Omega^2(M)" }, { "math_id": 57, "text": "d \\omega = 0" }, { "math_id": 58, "text": "\\Pi\\in\\Gamma(\\wedge^2 TM)" }, { "math_id": 59, "text": "[\\Pi,\\Pi] = 0" }, { "math_id": 60, "text": " L \\cap \\bar{L} = 0" }, { "math_id": 61, "text": "\\bar{\\ }" }, { "math_id": 62, "text": "J: TM \\to TM" } ]
https://en.wikipedia.org/wiki?curid=10109649
10109665
Chilton and Colburn J-factor analogy
Chilton–Colburn J-factor analogy (also known as the "modified Reynolds analogy") is a successful and widely used analogy between heat, momentum, and mass transfer. The basic mechanisms and mathematics of heat, mass, and momentum transport are essentially the same. Among many analogies (like Reynolds analogy, Prandtl–Taylor analogy) developed to directly relate heat transfer coefficients, mass transfer coefficients and friction factors, the Chilton and Colburn J-factor analogy proved to be the most accurate. It is written as follows: formula_0 This equation permits the prediction of an unknown transfer coefficient when one of the other coefficients is known. The analogy is valid for fully developed turbulent flow in conduits with "Re" > 10000, 0.7 < "Pr" < 160, and tubes where "L"/"d" > 60 (the same constraints as the Sieder–Tate correlation). A wider range of data can be correlated by the Friend–Metzner analogy. The relationship between heat and mass transfer is: formula_1 References. <templatestyles src="Reflist/styles.css" />
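A sketch of how the j-factors are used in practice, estimating the heat and mass transfer coefficients from a known friction factor via formula_0; the property values below are illustrative assumptions, not data from the article.

```python
# Chilton-Colburn: f/2 = J_H = St * Pr^(2/3) = J_D = (k'_c / v) * Sc^(2/3)
f_over_2 = 0.0023      # f/2 from a friction-factor correlation (assumed)
cp, G = 1007.0, 12.0   # specific heat J/(kg K) and mass velocity kg/(m^2 s) (assumed)
rho = 1.2              # density kg/m^3, to convert G into an average velocity (assumed)
Pr, Sc = 0.71, 0.60    # Prandtl and Schmidt numbers (assumed)

h = f_over_2 * cp * G / Pr ** (2.0 / 3.0)        # heat transfer coefficient, W/(m^2 K)
kc = f_over_2 * (G / rho) / Sc ** (2.0 / 3.0)    # mass transfer coefficient, m/s
print(f"h   = {h:.1f} W/(m^2 K)")
print(f"k'c = {kc:.4f} m/s")
```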
[ { "math_id": 0, "text": "J_M=\\frac{f}{2} = J_H = \\frac{h}{c_p\\, G}\\,{Pr}^{\\frac{2}{3}}= J_D = \\frac{k'_c}{\\overline{v}} \\cdot {Sc}^{\\frac{2}{3}}" }, { "math_id": 1, "text": "J_M = \\frac{f}{2} = \\frac{Sh}{Re\\, Sc^{\\frac{1}{3}}} = J_H = \\frac{f}{2} = \\frac{Nu}{Re\\, Pr^{\\frac{1}{3}}}" } ]
https://en.wikipedia.org/wiki?curid=10109665
101107
Dentition
Development and arrangement of teeth Dentition pertains to the development of teeth and their arrangement in the mouth. In particular, it is the characteristic arrangement, kind, and number of teeth in a given species at a given age. That is, the number, type, and morpho-physiology (that is, the relationship between the shape and form of the tooth in question and its inferred function) of the teeth of an animal. Terminology. Animals whose teeth are all of the same type, such as most non-mammalian vertebrates, are said to have "homodont" dentition, whereas those whose teeth differ morphologically are said to have "heterodont" dentition. The dentition of animals with two successions of teeth (deciduous, permanent) is referred to as "diphyodont", while the dentition of animals with only one set of teeth throughout life is "monophyodont". The dentition of animals in which the teeth are continuously discarded and replaced throughout life is termed "polyphyodont". The dentition of animals in which the teeth are set in sockets in the jawbones is termed "thecodont". Overview. The evolutionary origin of the vertebrate dentition remains contentious. Current theories suggest either an "outside-in" or "inside-out" evolutionary origin to teeth, with the dentition arising from odontodes on the skin surface moving into the mouth, or vice versa. Despite this debate, it is accepted that vertebrate teeth are homologous to the dermal denticles found on the skin of basal Gnathostomes (i.e. Chondrichtyans). Since the origin of teeth some 450 mya, the vertebrate dentition has diversified within the reptiles, amphibians, and fish: however most of these groups continue to possess a long row of pointed or sharp-sided, undifferentiated teeth ("homodont") that are completely replaceable. The mammalian pattern is significantly different. The teeth in the upper and lower jaws in mammals have evolved a close-fitting relationship such that they operate together as a unit. "They 'occlude', that is, the chewing surfaces of the teeth are so constructed that the upper and lower teeth are able to fit precisely together, cutting, crushing, grinding or tearing the food caught between." Mammals have up to four distinct types of teeth, though not all types are present in all mammals. These are the incisor ("cutting"), the canine, the premolar, and the molar ("grinding"). The incisors occupy the front of the tooth row in both upper and lower jaws. They are normally flat, chisel-shaped teeth that meet in an edge-to-edge bite. Their function is cutting, slicing, or gnawing food into manageable pieces that fit into the mouth for further chewing. The canines are immediately behind the incisors. In many mammals, the canines are pointed, tusk-shaped teeth, projecting beyond the level of the other teeth. In carnivores, they are primarily offensive weapons for bringing down prey. In other mammals such as some primates, they are used to split open hard-surfaced food. In humans, the canine teeth are the main components in occlusal function and articulation. The mandibular teeth function against the maxillary teeth in a particular movement that is harmonious to the shape of the occluding surfaces. This creates the incising and grinding functions. The teeth must mesh together the way gears mesh in a transmission. 
If the interdigitation of the opposing cusps and incisal edges are not directed properly the teeth will wear abnormally (attrition), break away irregular crystalline enamel structures from the surface (abrasion), or fracture larger pieces (abfraction). This is a three-dimensional movement of the mandible in relation to the maxilla. There are three points of guidance: the two posterior points provided by the temporomandibular joints and the anterior component provided by the incisors and canines. The incisors mostly control the vertical opening of the chewing cycle when the muscles of mastication move the jaw forwards and backwards (protrusion/retrusion). The canines come into function guiding the vertical movement when the chewing is side to side (lateral). The canines alone can cause the other teeth to separate at the extreme end of the cycle (cuspid guided function) or all the posterior teeth can continue to stay in contact (group function). The entire range of this movement is the envelope of masticatory function. The initial movement inside this envelope is directed by the shape of the teeth in contact and the Glenoid Fossa/Condyle shape. The outer extremities of this envelope are limited by muscles, ligaments and the articular disc of the TMJ. Without the guidance of anterior incisors and canines, this envelope of function can be destructive to the remaining teeth resulting in periodontal trauma from occlusion seen as wear, fracture or tooth loosening and loss. The premolars and molars are at the back of the mouth. Depending on the particular mammal and its diet, these two kinds of teeth prepare pieces of food to be swallowed by grinding, shearing, or crushing. The specialised teeth—incisors, canines, premolars, and molars—are found in the same order in every mammal. In many mammals, the infants have a set of teeth that fall out and are replaced by adult teeth. These are called deciduous teeth, primary teeth, baby teeth or milk teeth. Animals that have two sets of teeth, one followed by the other, are said to be diphyodont. Normally the dental formula for milk teeth is the same as for adult teeth except that the molars are missing. Dental formula. Because every mammal's teeth are specialised for different functions, many mammal groups have lost the teeth that are not needed in their adaptation. Tooth form has also undergone evolutionary modification as a result of natural selection for specialised feeding or other adaptations. Over time, different mammal groups have evolved distinct dental features, both in the number and type of teeth and in the shape and size of the chewing surface. The number of teeth of each type is written as a dental formula for one side of the mouth, or quadrant, with the upper and lower teeth shown on separate rows. The number of teeth in a mouth is twice that listed, as there are two sides. In each set, incisors (I) are indicated first, canines (C) second, premolars (P) third, and finally molars (M), giving I:C:P:M. So for example, the formula 2.1.2.3 for upper teeth indicates 2 incisors, 1 canine, 2 premolars, and 3 molars on one side of the upper mouth. The deciduous dental formula is notated in lowercase lettering preceded by the letter d: for example: di:dc:dp. An animal's dentition for either deciduous or permanent teeth can thus be expressed as a dental formula, written in the form of a fraction, which can be written as I.C.P.MI.C.P.M, or I.C.P.M / I.C.P.M. 
For example, the following formulae show the deciduous and usual permanent dentition of all catarrhine primates, including humans: the deciduous dentition is formula_0 and the permanent dentition is formula_1. The greatest number of teeth in any known placental land mammal was 48, with a formula of 3.1.5.3 / 3.1.5.3. However, no living placental mammal has this number. In extant placental mammals, the maximum dental formula is 3.1.4.3 / 3.1.4.3, for pigs. Mammalian tooth counts are usually identical in the upper and lower jaws, but not always. For example, the aye-aye has a formula of 1.0.1.3 / 1.0.0.3, demonstrating the need for both upper and lower quadrant counts. Tooth naming discrepancies. Teeth are numbered starting at 1 in each group. Thus the human teeth are I1, I2, C1, P3, P4, M1, M2, and M3. (See next paragraph for premolar naming etymology.) In humans, the third molar is known as the wisdom tooth, whether or not it has erupted. Regarding premolars, there is disagreement regarding whether the third type of deciduous tooth is a premolar (the general consensus among mammalogists) or a molar (commonly held among human anatomists). There is thus some discrepancy between nomenclature in zoology and in dentistry. This is because the terms of human dentistry, which have generally prevailed over time, have not included mammalian dental evolutionary theory. There were originally four premolars in each quadrant of early mammalian jaws. However, all living primates have lost at least the first premolar. "Hence most of the prosimians and platyrrhines have three premolars. Some genera have also lost more than one. A second premolar has been lost in all catarrhines. The remaining permanent premolars are then properly identified as P2, P3 and P4 or P3 and P4; however, traditional dentistry refers to them as P1 and P2". Dental eruption sequence. The order in which teeth emerge through the gums is known as the dental eruption sequence. Rapidly developing anthropoid primates such as macaques, chimpanzees, and australopithecines have an eruption sequence of M1 I1 I2 M2 P3 P4 C M3, whereas anatomically modern humans have the sequence M1 I1 I2 C P3 P4 M2 M3. The later that tooth emergence begins, the earlier the anterior teeth (I1–P4) appear in the sequence. Dentition use in archaeology. Dentition, or the study of teeth, is an important area of study for archaeologists, especially those specializing in the study of older remains. Dentition affords many advantages over studying the rest of the skeleton itself (osteometry). The structure and arrangement of teeth are constant and, although they are inherited, do not undergo extensive change during environmental change, dietary specializations, or alterations in use patterns. The rest of the skeleton is much more likely to exhibit change because of adaptation. Teeth also preserve better than bone, and so the sample of teeth available to archaeologists is much more extensive and therefore more representative. Dentition is particularly useful in tracking ancient populations' movements, because there are differences in the shapes of incisors, the number of grooves on molars, presence/absence of wisdom teeth, and extra cusps on particular teeth. These differences can not only be associated with different populations across space, but also change over time so that the study of the characteristics of teeth could say which population one is dealing with, and at what point in that population's history they are. Dinosaurs. 
A dinosaur's dentition included all the teeth in its jawbones, which consist of the dentary, maxillary, and in some cases the premaxillary bones. The maxilla is the main bone of the upper jaw. The premaxilla is a smaller bone forming the anterior of the animal's upper jaw. The dentary is the main bone that forms the lower jaw (mandible). The predentary is a smaller bone that forms the anterior end of the lower jaw in ornithischian dinosaurs; it is always edentulous and supported a horny beak. Unlike modern lizards, dinosaur teeth grew individually in the sockets of the jawbones, which are known as the dental alveoli. This thecodont dentition is also present in crocodilians and mammals, but is not found among the non-archosaur reptiles, which instead have acrodont or pleurodont dentition. Teeth that were lost were replaced by teeth below the roots in each tooth socket. Occlusion refers to the closing of the dinosaur's mouth, where the teeth from the upper and lower parts of the jaw meet. If the occlusion causes teeth from the maxillary or premaxillary bones to cover the teeth of the dentary and predentary, the dinosaur is said to have an overbite, the most common condition in this group. The opposite condition is considered to be an underbite, which is rare in theropod dinosaurs. The majority of dinosaurs had teeth that were similarly shaped throughout their jaws but varied in size. Dinosaur tooth shapes included cylindrical, peg-like, teardrop-shaped, leaf-like, diamond-shaped and blade-like. A dinosaur that has a variety of tooth shapes is said to have heterodont dentition. An example of this are dinosaurs of the group Heterodontosauridae and the enigmatic early dinosaur, "Eoraptor". While most dinosaurs had a single row of teeth on each side of their jaws, others had dental batteries where teeth in the cheek region were fused together to form compound teeth. Individually these teeth were not suitable for grinding food, but when joined together with other teeth they would form a large surface area for the mechanical digestion of tough plant materials. This type of dental strategy is observed in ornithopod and ceratopsian dinosaurs as well as the duck-billed hadrosaurs, which had more than one hundred teeth in each dental battery. The teeth of carnivorous dinosaurs, called ziphodont, were typically blade-like or cone-shaped, curved, with serrated edges. This dentition was adapted for grasping and cutting through flesh. In some cases, as observed in the railroad-spike-sized teeth of "Tyrannosaurus rex", the teeth were designed to puncture and crush bone. Some dinosaurs had procumbent teeth, which projected forward in the mouth. See also. Dentition discussions in other articles. Some articles have helpful discussions on dentition, which will be listed as identified. Citations. <templatestyles src="Reflist/styles.css" /> General references. <templatestyles src="Refbegin/styles.css" />
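The dental formula arithmetic described earlier (the counts for one quadrant, summed over upper and lower jaws and doubled for the two sides) can be made explicit with a small helper; the formulae used below are the human ones quoted above.

```python
def tooth_count(upper, lower):
    """Total number of teeth from one-side (quadrant) counts for upper and lower jaws."""
    return 2 * (sum(upper) + sum(lower))

# Human permanent dentition 2.1.2.3 / 2.1.2.3 (incisors, canines, premolars, molars):
print(tooth_count((2, 1, 2, 3), (2, 1, 2, 3)))   # 32

# Human deciduous dentition di2.dc1.dm2 / di2.dc1.dm2 (no premolar class):
print(tooth_count((2, 1, 2), (2, 1, 2)))         # 20
```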
[ { "math_id": 0, "text": "(di^2\\text{-}dc^1\\text{-}dm^2) / (di_2\\text{-}dc_1\\text{-}dm_2) \\times 2 =20." }, { "math_id": 1, "text": "(I^2\\text{-}C^1\\text{-}P^2\\text{-}M^3) / (I_2\\text{-}C_1\\text{-}P_2\\text{-}M_3) \\times 2 =32." } ]
https://en.wikipedia.org/wiki?curid=101107
1011270
Bourbaki–Witt theorem
Fixed-point theorem In mathematics, the Bourbaki–Witt theorem in order theory, named after Nicolas Bourbaki and Ernst Witt, is a basic fixed-point theorem for partially ordered sets. It states that if "X" is a non-empty chain complete poset, and formula_0 such that formula_1 for all formula_2 then "f" has a fixed point. Such a function "f" is called "inflationary" or "progressive". Special case of a finite poset. If the poset "X" is finite then the statement of the theorem has a clear interpretation that leads to the proof. The sequence of successive iterates, formula_3 where "x"0 is any element of "X", is monotone increasing. By the finiteness of "X", it stabilizes: formula_4 for "n" sufficiently large. It follows that "x"∞ is a fixed point of "f". Proof of the theorem. Pick some formula_5. Define a function "K" recursively on the ordinals as follows: formula_6 formula_7 If formula_8 is a limit ordinal, then by construction formula_9 is a chain in "X". Define formula_10 This is now an increasing function from the ordinals into "X". It cannot be strictly increasing, as if it were we would have an injective function from the ordinals into a set, violating Hartogs' lemma. Therefore the function must be eventually constant, so for some formula_11 that is, formula_12 So letting formula_13 we have our desired fixed point. Q.E.D. Applications. The Bourbaki–Witt theorem has various important applications. One of the most common is in the proof that the axiom of choice implies Zorn's lemma. We first prove it for the case where "X" is chain complete and has no maximal element. Let "g" be a choice function on formula_14 Define a function formula_0 by formula_15 This is allowed as, by assumption, the set is non-empty. Then "f"("x") > "x", so "f" is an inflationary function with no fixed point, contradicting the theorem. This special case of Zorn's lemma is then used to prove the Hausdorff maximality principle, that every poset has a maximal chain, which is easily seen to be equivalent to Zorn's Lemma. Bourbaki–Witt has other applications. In particular in computer science, it is used in the theory of computable functions. It is also used to define recursive data types, e.g. linked lists, in domain theory.
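The finite special case described above is effectively an algorithm: starting from any element, iterate the inflationary map until it stops moving. The following sketch uses an invented toy poset (subsets of a finite set ordered by inclusion) purely for illustration.

```python
def fixed_point(f, x0, max_steps=10_000):
    """Iterate an inflationary map f from x0 until f(x) == x (finite poset case)."""
    x = x0
    for _ in range(max_steps):
        y = f(x)
        if y == x:
            return x
        x = y
    raise RuntimeError("no fixed point reached; is the poset finite and f inflationary?")

# Poset: subsets of {0,1,2,3} ordered by inclusion. f adds the smallest missing
# element, so f(S) >= S for every S, and the iteration must stabilize (at the top).
universe = frozenset(range(4))
f = lambda S: S if S == universe else S | {min(universe - S)}
print(sorted(fixed_point(f, frozenset())))   # [0, 1, 2, 3]
```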
[ { "math_id": 0, "text": "f : X \\to X" }, { "math_id": 1, "text": "f (x) \\geq x" }, { "math_id": 2, "text": "x," }, { "math_id": 3, "text": " x_{n+1}=f(x_n), n=0,1,2,\\ldots, " }, { "math_id": 4, "text": " x_n=x_{\\infty}," }, { "math_id": 5, "text": "y \\in X" }, { "math_id": 6, "text": "\\,K(0) = y" }, { "math_id": 7, "text": "\\,K( \\alpha+1 ) = f( K( \\alpha ) )." }, { "math_id": 8, "text": " \\beta " }, { "math_id": 9, "text": "\\{ K( \\alpha ) \\ : \\ \\alpha < \\beta \\}" }, { "math_id": 10, "text": "K( \\beta ) = \\sup \\{ K( \\alpha ) \\ : \\ \\alpha < \\beta \\}." }, { "math_id": 11, "text": " \\alpha , \\ \\ K( \\alpha+1 ) = K ( \\alpha ); " }, { "math_id": 12, "text": "\\,f( K( \\alpha ) ) = K ( \\alpha )." }, { "math_id": 13, "text": "\\,x = K ( \\alpha )," }, { "math_id": 14, "text": "P(X) - \\{ \\varnothing \\}." }, { "math_id": 15, "text": "f(x) = g( \\{ y \\ : \\ y > x \\} )." } ]
https://en.wikipedia.org/wiki?curid=1011270
1011332
Predicate variable
In mathematical logic, a predicate variable is a predicate letter which functions as a "placeholder" for a relation (between terms), but which has not been specifically assigned any particular relation (or meaning). Common symbols for denoting predicate variables include capital roman letters such as formula_0, formula_1 and formula_2, or lower case roman letters, e.g., formula_3. In first-order logic, they can be more properly called metalinguistic variables. In higher-order logic, predicate variables correspond to propositional variables which can stand for well-formed formulas of the same logic, and such variables can be quantified by means of (at least) second-order quantifiers. Notation. Predicate variables should be distinguished from predicate constants, which could be represented either with a different (exclusive) set of predicate letters, or by their own symbols which really do have their own specific meaning in their domain of discourse: e.g. formula_4. If letters are used for both predicate constants and predicate variables, then there must be a way of distinguishing between them. One possibility is to use letters "W", "X", "Y", "Z" to represent predicate variables and letters "A", "B", "C"..., "U", "V" to represent predicate constants. If these letters are not enough, then numerical subscripts can be appended after the letter in question (as in "X"1, "X"2, "X"3). Another option is to use Greek lower-case letters to represent such metavariable predicates. Then, such letters could be used to represent entire well-formed formulae (wff) of the predicate calculus: any free variable terms of the wff could be incorporated as terms of the Greek-letter predicate. This is the first step towards creating a higher-order logic. Usage. If the predicate variables are not defined as belonging to the vocabulary of the predicate calculus, then they are predicate metavariables, whereas the rest of the predicates are just called "predicate letters". The metavariables are thus understood to be used to code for axiom schema and theorem schemata (derived from the axiom schemata). Whether the "predicate letters" are constants or variables is a subtle point: they are not constants in the same sense that formula_5 are predicate constants, or that formula_6 are numerical constants. If "predicate variables" are only allowed to be bound to predicate letters of zero arity (which have no arguments), where such letters represent propositions, then such variables are "propositional variables", and any predicate logic which allows second-order quantifiers to be used to bind such propositional variables is a second-order predicate calculus, or second-order logic. If predicate variables are also allowed to be bound to predicate letters which are unary or have higher arity, and when such letters represent "propositional functions", such that the domain of the arguments is mapped to a range of different propositions, and when such variables can be bound by quantifiers to such sets of propositions, then the result is a higher-order predicate calculus, or higher-order logic.
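Two illustrative uses of predicate letters, added here as standard examples (they are not drawn from the article's sources): in the first line, φ is a metalinguistic variable ranging over formulas of first-order arithmetic, so the line abbreviates an axiom schema with one instance per formula; in the second, "P" is a genuine predicate variable bound by a second-order quantifier, so the line is a single axiom of second-order arithmetic.

```latex
% Induction as a first-order schema (phi is a metalinguistic predicate variable):
\big(\varphi(0) \wedge \forall n\,(\varphi(n) \to \varphi(n+1))\big) \;\to\; \forall n\,\varphi(n)
% Induction as one second-order axiom (P is a quantified predicate variable):
\forall P\,\Big( \big(P(0) \wedge \forall n\,(P(n) \to P(n+1))\big) \;\to\; \forall n\,P(n) \Big)
```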
[ { "math_id": 0, "text": "P" }, { "math_id": 1, "text": "Q" }, { "math_id": 2, "text": "R" }, { "math_id": 3, "text": "x" }, { "math_id": 4, "text": " =, \\ \\in , \\ \\le,\\ <, \\ \\sub,... " }, { "math_id": 5, "text": " =, \\ \\in , \\ \\le,\\ <, \\ \\sub, " }, { "math_id": 6, "text": " 1,\\ 2,\\ 3,\\ \\sqrt{2},\\ \\pi,\\ e\\ " } ]
https://en.wikipedia.org/wiki?curid=1011332
10113455
Cophenetic correlation
In statistics, and especially in biostatistics, cophenetic correlation (more precisely, the cophenetic correlation coefficient) is a measure of how faithfully a dendrogram preserves the pairwise distances between the original unmodeled data points. Although it has been most widely applied in the field of biostatistics (typically to assess cluster-based models of DNA sequences, or other taxonomic models), it can also be used in other fields of inquiry where raw data tend to occur in clumps, or clusters. This coefficient has also been proposed for use as a test for nested clusters. Calculating the cophenetic correlation coefficient. Suppose that the original data {"Xi"} have been modeled using a cluster method to produce a dendrogram {"Ti"}; that is, a simplified model in which data that are "close" have been grouped into a hierarchical tree. Define the following distance measures: formula_0, the ordinary Euclidean distance between the "i"th and "j"th observations, and formula_1, the dendrogrammatic distance between the model points formula_2 and formula_3, that is, the height of the node at which these two points are first joined together. Then, letting formula_4 be the average of the "x"("i", "j"), and letting formula_5 be the average of the "t"("i", "j"), the cophenetic correlation coefficient "c" is given by formula_6 Software implementation. It is possible to calculate the cophenetic correlation in R using the dendextend R package. In Python, the SciPy package also has an implementation. In MATLAB, the Statistics and Machine Learning Toolbox contains an implementation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
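As a quick illustration of the SciPy implementation mentioned above, the following example (the random data and the "average" linkage method are arbitrary choices, not prescribed by the article) builds a dendrogram and computes the cophenetic correlation coefficient:

```python
# Compute the cophenetic correlation coefficient with SciPy.
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))                 # 30 observations in 2 dimensions

original_distances = pdist(X)                # pairwise distances x(i, j)
Z = linkage(original_distances, method="average")   # build the dendrogram

c, cophenetic_distances = cophenet(Z, original_distances)  # t(i, j) and c
print(f"cophenetic correlation coefficient: {c:.3f}")
```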
[ { "math_id": 0, "text": "x(i,j) = |X_i-X_j|" }, { "math_id": 1, "text": "t(i,j)" }, { "math_id": 2, "text": "T_i" }, { "math_id": 3, "text": "T_j" }, { "math_id": 4, "text": "\\bar{x}" }, { "math_id": 5, "text": "\\bar{t}" }, { "math_id": 6, "text": "\nc = \\frac {\\sum_{i<j} [x(i,j) - \\bar{x}][t(i,j) - \\bar{t}]}{\\sqrt{\\sum_{i<j}[x(i,j)-\\bar{x}]^2 \\sum_{i<j}[t(i,j)-\\bar{t}]^2}}.\n" } ]
https://en.wikipedia.org/wiki?curid=10113455
10119238
Essential extension
Concept in mathematics In mathematics, specifically module theory, given a ring "R" and an "R"-module "M" with a submodule "N", the module "M" is said to be an essential extension of "N" (or "N" is said to be an essential submodule or large submodule of "M") if for every submodule "H" of "M", formula_0 implies that formula_1 As a special case, an essential left ideal of "R" is a left ideal that is essential as a submodule of the left module "R""R". The left ideal has non-zero intersection with any non-zero left ideal of "R". Analogously, an essential right ideal is exactly an essential submodule of the right "R" module "R""R". The usual notations for essential extensions include the following two expressions: formula_2 , and formula_3 The dual notion of an essential submodule is that of superfluous submodule (or small submodule). A submodule "N" is superfluous if for any other submodule "H", formula_4 implies that formula_5. The usual notations for superfluous submodules include: formula_6 , and formula_7 Properties. Here are some of the elementary properties of essential extensions, given in the notation introduced above. Let "M" be a module, and "K", "N" and "H" be submodules of "M" with "K" formula_8 "N" Using Zorn's Lemma it is possible to prove another useful fact: For any submodule "N" of "M", there exists a submodule "C" such that formula_14. Furthermore, a module with no proper essential extension (that is, if the module is essential in another module, then it is equal to that module) is an injective module. It is then possible to prove that every module "M" has a maximal essential extension "E"("M"), called the injective hull of "M". The injective hull is necessarily an injective module, and is unique up to isomorphism. The injective hull is also minimal in the sense that any other injective module containing "M" contains a copy of "E"("M"). Many properties dualize to superfluous submodules, but not everything. Again let "M" be a module, and "K", "N" and "H" be submodules of "M" with "K" formula_15 "N". Since every module can be mapped via a monomorphism whose image is essential in an injective module (its injective hull), one might ask if the dual statement is true, i.e. for every module "M", is there a projective module "P" and an epimorphism from "P" onto "M" whose kernel is superfluous? (Such a "P" is called a projective cover). The answer is "No" in general, and the special class of rings whose right modules all have projective covers is the class of right perfect rings. One form of Nakayama's lemma is that J("R")"M" is a superfluous submodule of "M" when "M" is a finitely-generated module over "R". Generalization. This definition can be generalized to an arbitrary abelian category C. An essential extension is a monomorphism "u" : "M" → "E" such that for every non-zero subobject "s" : "N" → "E", the fibre product "N" ×"E" M ≠ 0. In a general category, a morphism "f" : "X" → "Y" is essential if any morphism "g" : "Y" → "Z" is a monomorphism if and only if "g" ° "f" is a monomorphism . Taking "g" to be the identity morphism of "Y" shows that an essential morphism "f" must be a monomorphism. If "X" has an injective hull "Y", then "Y" is the largest essential extension of "X" . But the largest essential extension may not be an injective hull. Indeed, in the category of T1 spaces and continuous maps, every object has a unique largest essential extension, but no space with more than one element has an injective hull .
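As a concrete, finite illustration of the definitions above (not drawn from the article), the following Python sketch checks, for the Z-module Z/12Z, which cyclic submodules are essential and which are superfluous by testing the intersection and sum conditions directly:

```python
# Check essential and superfluous submodules of Z/12Z from the definitions.
n = 12

def subgroup(d):                 # the submodule of Z/nZ generated by d
    return frozenset((d * k) % n for k in range(n))

zero = subgroup(0)               # the trivial submodule {0}
whole = subgroup(1)              # the whole module
divisors = [d for d in range(1, n + 1) if n % d == 0]
subgroups = {d: subgroup(d) for d in divisors}

def is_essential(N):
    # N is essential iff it meets every non-zero submodule non-trivially.
    return all(len(N & H) > 1 for H in subgroups.values() if H != zero)

def is_superfluous(N):
    # N is superfluous iff N + H = M forces H = M.
    def add(A, B):
        return frozenset((a + b) % n for a in A for b in B)
    return all(H == whole for H in subgroups.values() if add(N, H) == whole)

for d, N in subgroups.items():
    print(f"<{d}>: essential={is_essential(N)}, superfluous={is_superfluous(N)}")
# e.g. <2> (order 6) is essential, while <6> (order 2) is superfluous.
```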
[ { "math_id": 0, "text": "H\\cap N=\\{0\\}\\," }, { "math_id": 1, "text": "H=\\{0\\}\\," }, { "math_id": 2, "text": "N\\subseteq_e M\\," }, { "math_id": 3, "text": "N\\trianglelefteq M" }, { "math_id": 4, "text": "N+H=M\\," }, { "math_id": 5, "text": "H=M\\," }, { "math_id": 6, "text": "N\\subseteq_s M\\," }, { "math_id": 7, "text": "N\\ll M" }, { "math_id": 8, "text": " \\subseteq" }, { "math_id": 9, "text": "K\\subseteq_e M" }, { "math_id": 10, "text": "K\\subseteq_e N" }, { "math_id": 11, "text": "N\\subseteq_e M" }, { "math_id": 12, "text": "K \\cap H \\subseteq_e M" }, { "math_id": 13, "text": "H\\subseteq_e M" }, { "math_id": 14, "text": "N\\oplus C \\subseteq_e M" }, { "math_id": 15, "text": "\\subseteq" }, { "math_id": 16, "text": "N\\subseteq_s M" }, { "math_id": 17, "text": "K\\subseteq_s M" }, { "math_id": 18, "text": "N/K \\subseteq_s M/K" }, { "math_id": 19, "text": "K+H\\subseteq_s M" }, { "math_id": 20, "text": "H\\subseteq_s M" } ]
https://en.wikipedia.org/wiki?curid=10119238
10121045
Pore space in soil
Volume occupied by liquid and gas phases in a soil The pore space of soil contains the liquid and gas phases of soil, i.e., everything but the solid phase that contains mainly minerals of varying sizes as well as organic compounds. In order to understand porosity better, a series of equations have been used to express the quantitative interactions between the three phases of soil. Macropores or fractures play a major role in infiltration rates in many soils as well as preferential flow patterns, hydraulic conductivity and evapotranspiration. Cracks are also very influential in gas exchange, influencing respiration within soils. Modeling cracks therefore helps understand how these processes work and what effects changes in soil cracking, such as compaction, can have on these processes. The pore space of soil may contain the habitat of plants (rhizosphere) and microorganisms. formula_0 Background. Dry bulk density. The dry bulk density of a soil greatly depends on the mineral assemblage making up the soil and on its degree of compaction. The density of quartz is around 2.65 g/cm3 but the dry bulk density of a soil can be less than half that value. Most soils have a dry bulk density between 1.0 and 1.6 g/cm3 but organic soil and some porous clays may have a dry bulk density well below 1 g/cm3. Core samples are taken by pushing a metallic cutting edge into the soil at the desired depth or soil horizon. The soil samples are then oven dried (often at 105 °C) until constant weight. formula_1 The dry bulk density of a soil is inversely proportional to its porosity. The more pore space in a soil, the lower its dry bulk density. Porosity. formula_2 or, more generally, for an unsaturated soil in which the pores are filled by two fluids, air and water: formula_3 The porosity formula_4 is a measure of the total pore space in the soil. This is defined as a fraction of volume, often given in percent. The amount of porosity in a soil depends on the minerals that make up the soil and on the amount of sorting occurring within the soil structure. For example, a sandy soil will have a larger porosity than a silty sand, because the silt will fill the gaps in between the sand particles. Pore space relations. Hydraulic conductivity. Hydraulic conductivity (K) is a property of soil that describes the ease with which water can move through pore spaces. It depends on the permeability of the material (pores, compaction) and on the degree of saturation. Saturated hydraulic conductivity, Ksat, describes water movement through saturated media. Hydraulic conductivity can be measured at any state of saturation, and it can be estimated by numerous kinds of equipment. To calculate hydraulic conductivity, Darcy's law is used. The manipulation of the law depends on the soil saturation and instrument used. Infiltration. Infiltration is the process by which water on the ground surface enters the soil. The water enters the soil through the pores by the forces of gravity and capillary action. The largest cracks and pores offer a great reservoir for the initial flush of water. This allows rapid infiltration. The smaller pores take longer to fill and rely on capillary forces as well as gravity. The smaller pores have a slower infiltration as the soil becomes more saturated. Pore types. A pore is not simply a void in the solid structure of soil. The various pore size categories have different characteristics and contribute different attributes to soils depending on the number and frequency of each type. 
A widely used classification of pore size is that of Brewer (1964): Macropore. Pores that are too large to have any significant capillary force. Unless impeded, water will drain from these pores, and they are generally air-filled at field capacity. Macropores can be caused by cracking, division of peds and aggregates, as well as plant roots, and zoological exploration. Size &gt;75 μm. Mesopore. The largest pores filled with water at field capacity. Also known as storage pores because of their ability to store water useful to plants. The capillary forces in these pores are not so great as to make the water unavailable to plants. The properties of mesopores are widely studied by soil scientists because of their impact on agriculture and irrigation. Size 30–75 μm. Micropore. These are "pores that are sufficiently small that water within these pores is considered immobile, but available for plant extraction." Because there is little movement of water in these pores, solute movement is mainly by the process of diffusion. Size 5–30 μm. Ultramicropore. These pores are suitable for habitation by microorganisms. Their distribution is determined by soil texture and soil organic matter, and they are not greatly affected by compaction. Size 0.1–5 μm. Cryptopore. Pores that are too small to be penetrated by most microorganisms. Organic matter in these pores is therefore protected from microbial decomposition. They are filled with water unless the soil is very dry, but little of this water is available to plants, and water movement is very slow. Size &lt;0.1 μm. Modeling methods. Basic crack modeling has been undertaken for many years by simple observations and measurements of crack size, distribution, continuity and depth. These observations have been made either at the surface or on profiles in pits. Hand tracing and measurement of crack patterns on paper was one method used prior to advances in modern technology. Another field method used string and a semicircle of wire. The semicircle was moved along alternating sides of a string line. The cracks within the semicircle were measured for width, length and depth using a ruler. The crack distribution was calculated using the principle of Buffon's needle. Disc permeameter. This method relies on the fact that crack sizes have a range of different water potentials. At zero water potential at the soil surface an estimate of saturated hydraulic conductivity is produced, with all pores filled with water. As the potential is decreased, progressively smaller cracks drain. By measuring the hydraulic conductivity at a range of negative potentials, the pore size distribution can be determined. While this is not a physical model of the cracks, it does give an indication of the sizes of pores within the soil. Horgan and Young model. Horgan and Young (2000) produced a computer model to create a two-dimensional prediction of surface crack formation. It used the fact that once cracks come within a certain distance of one another they tend to be attracted to each other. Cracks also tend to turn within a particular range of angles, and at some stage a surface aggregate reaches a size at which no more cracking will occur. These are often characteristic of a soil and can therefore be measured in the field and used in the model. However, it was not able to predict the points at which cracking starts, and although the model generates crack patterns randomly, cracking of soil is often not random in practice, instead following lines of weakness. 
Araldite-impregnation imaging. A large core sample is collected. This is then impregnated with araldite and a fluorescent resin. The core is then cut back using a grinding implement, very gradually (~1 mm at a time), and at every interval the surface of the core sample is digitally imaged. The images are then loaded into a computer where they can be analysed. Depth, continuity, surface area and a number of other measurements can then be made on the cracks within the soil. Electrical resistivity imaging. Because air has effectively infinite electrical resistivity, the air spaces within a soil can be mapped. A specially designed resistivity meter has improved the meter-soil contact and therefore the area of the reading. This technology can be used to produce images that can be analysed for a range of cracking properties. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
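As a small worked illustration of the bulk-density and porosity relations given earlier (the sample numbers are hypothetical, and the particle density of 2.65 g/cm3 is the standard assumption for quartz-dominated mineral soils, not a value taken from this article):

```python
# Dry bulk density and porosity for a hypothetical soil core sample.
core_volume_cm3 = 100.0        # total sample volume
oven_dry_mass_g = 130.0        # mass of oven-dry soil (made-up numbers)
particle_density = 2.65        # g/cm3, assumed typical for mineral soils

dry_bulk_density = oven_dry_mass_g / core_volume_cm3     # g/cm3
porosity = 1.0 - dry_bulk_density / particle_density     # fraction of total volume

print(f"dry bulk density: {dry_bulk_density:.2f} g/cm3")  # 1.30 g/cm3
print(f"porosity: {porosity:.0%}")                        # about 51%
```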
[ { "math_id": 0, "text": "\\rho_{dry} = \\frac{M_{solid}}{V_{total}}" }, { "math_id": 1, "text": "\\rm{Dry \\ bulk \\ density} = \\frac{\\rm{(mass \\ of \\ oven \\ dry \\ soil)}}{\\rm{(total \\ sample \\ volume)}}" }, { "math_id": 2, "text": "\\eta = \\frac{V_{pore}}{V_{total}} = \\frac{V_{fluid}}{V_{total}}" }, { "math_id": 3, "text": "\\eta = \\frac{V_{air}+V_{water}}{V_{solid}+V_{air}+V_{water}}" }, { "math_id": 4, "text": "\\eta" } ]
https://en.wikipedia.org/wiki?curid=10121045
10121788
Fuel fraction
In aerospace engineering, an aircraft's fuel fraction, fuel weight fraction, or a spacecraft's propellant fraction, is the weight of the fuel or propellant divided by the gross take-off weight of the craft (including propellant): formula_0 The fractional result of this mathematical division is often expressed as a percent. For aircraft with external drop tanks, the term internal fuel fraction is used to exclude the weight of external tanks and fuel. Fuel fraction is a key parameter in determining an aircraft's range, the distance it can fly without refueling. Breguet’s aircraft range equation describes the relationship of range with airspeed, lift-to-drag ratio, specific fuel consumption, and the part of the total fuel fraction available for cruise, also known as the cruise fuel fraction, or cruise fuel weight fraction. In this context, the Breguet range is proportional to formula_1 Fighter aircraft. At today’s state of the art for jet fighter aircraft, fuel fractions of 29 percent and below typically yield subcruisers; 33 percent provides a quasi–supercruiser; and 35 percent and above are needed for useful supercruising missions. The U.S. F-22 Raptor’s fuel fraction is 29 percent, Eurofighter is 31 percent, both similar to those of the subcruising F-4 Phantom II, F-15 Eagle and the Russian Mikoyan MiG-29 "Fulcrum". The Russian supersonic interceptor, the Mikoyan MiG-31 "Foxhound", has a fuel fraction of over 45 percent. The Panavia Tornado had a relatively low internal fuel fraction of 26 percent, and frequently carried drop tanks. Civilian Aircraft. Airliners have a fuel fraction of less than half their takeoff weight, between 26% for medium-haul to 45% for long-haul. General aviation. The Rutan Voyager took off on its 1986 around-the-world flight at 72 percent, the highest figure ever at the time. Steve Fossett's Virgin Atlantic GlobalFlyer could attain a fuel fraction of nearly 83 percent, meaning that it carried more than five times its empty weight in fuel. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
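A short numerical sketch (the weights are invented round numbers) of the fuel-fraction definition and of the Breguet proportionality noted above, showing how strongly range responds to fuel fraction:

```python
# Fuel fraction and the Breguet range factor -ln(1 - zeta).
import math

def fuel_fraction(fuel_weight, takeoff_weight):
    return fuel_weight / takeoff_weight

def breguet_range_factor(zeta):
    # Range is proportional to this factor, all else (speed, L/D, SFC) equal.
    return -math.log(1.0 - zeta)

for fuel, takeoff in [(29_000, 100_000), (35_000, 100_000)]:
    z = fuel_fraction(fuel, takeoff)
    print(f"fuel fraction {z:.0%} -> range factor {breguet_range_factor(z):.3f}")
# 29% -> 0.342, 35% -> 0.431: a 6-point gain in fuel fraction buys ~26% more range.
```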
[ { "math_id": 0, "text": "\\ \\zeta = \\frac{\\Delta W}{W_1} " }, { "math_id": 1, "text": "-\\ln(1-\\ \\zeta) " } ]
https://en.wikipedia.org/wiki?curid=10121788
10122951
Succinct data structure
Data structure which is efficient to both store in memory and query In computer science, a succinct data structure is a data structure which uses an amount of space that is "close" to the information-theoretic lower bound, but (unlike other compressed representations) still allows for efficient query operations. The concept was originally introduced by Jacobson to encode bit vectors, (unlabeled) trees, and planar graphs. Unlike general lossless data compression algorithms, succinct data structures retain the ability to use them in-place, without decompressing them first. A related notion is that of a compressed data structure, insofar as the size of the stored or encoded data similarly depends upon the specific content of the data itself. Suppose that formula_0 is the information-theoretical optimal number of bits needed to store some data. A representation of this data is called: For example, a data structure that uses formula_4 bits of storage is compact, formula_5 bits is succinct, formula_6 bits is also succinct, and formula_7 bits is implicit. Implicit structures are thus usually reduced to storing information using some permutation of the input data; the most well-known example of this is the heap. Succinct indexable dictionaries. Succinct indexable dictionaries, also called "rank/select" dictionaries, form the basis of a number of succinct representation techniques, including binary trees, formula_8-ary trees and multisets, as well as suffix trees and arrays. The basic problem is to store a subset formula_9 of a universe formula_10, usually represented as a bit array formula_11 where formula_12 iff formula_13 An indexable dictionary supports the usual methods on dictionaries (queries, and insertions/deletions in the dynamic case) as well as the following operations: for formula_16. In other words, formula_17 returns the number of elements equal to formula_18 up to position formula_19 while formula_20 returns the position of the formula_21-th occurrence of formula_22. There is a simple representation which uses formula_23 bits of storage space (the original bit array and an formula_24 auxiliary structure) and supports rank and select in constant time. It uses an idea similar to that for range-minimum queries; there are a constant number of recursions before stopping at a subproblem of a limited size. The bit array formula_25 is partitioned into "large blocks" of size formula_26 bits and "small blocks" of size formula_27 bits. For each large block, the rank of its first bit is stored in a separate table formula_28; each such entry takes formula_29 bits for a total of formula_30 bits of storage. Within a large block, another directory formula_31 stores the rank of each of the formula_32 small blocks it contains. The difference here is that it only needs formula_33 bits for each entry, since only the differences from the rank of the first bit in the containing large block need to be stored. Thus, this table takes a total of formula_34 bits. A lookup table formula_35 can then be used that stores the answer to every possible rank query on a bit string of length formula_36 for formula_37; this requires formula_38 bits of storage space. Thus, since each of these auxiliary tables take formula_24 space, this data structure supports rank queries in formula_39 time and formula_23 bits of space. 
To answer a query for formula_40 in constant time, a constant time algorithm computes: formula_41 In practice, the lookup table formula_35 can be replaced by bitwise operations and smaller tables that can be used to find the number of bits set in the small blocks. This is often beneficial, since succinct data structures find their uses in large data sets, in which case cache misses become much more frequent and the chances of the lookup table being evicted from closer CPU caches become higher. Select queries can be easily supported by doing a binary search on the same auxiliary structure used for rank; however, this takes formula_42 time in the worst case. A more complicated structure using formula_43 bits of additional storage can be used to support select in constant time. In practice, many of these solutions have hidden constants in the formula_44 notation which dominate before any asymptotic advantage becomes apparent; implementations using broadword operations and word-aligned blocks often perform better in practice. Entropy-compressed solutions. The formula_23 space approach can be improved by noting that there are formula_45 distinct formula_46-subsets of formula_47 (or binary strings of length formula_48 with exactly formula_46 1’s), and thus formula_49 is an information theoretic lower bound on the number of bits needed to store formula_25. There is a succinct (static) dictionary which attains this bound, namely using formula_50 space. This structure can be extended to support rank and select queries and takes formula_51 space. Correct rank queries in this structure are however limited to elements contained in the set, analogous to how minimal perfect hashing functions work. This bound can be reduced to a space/time tradeoff by reducing the storage space of the dictionary to formula_52 with queries taking formula_53 time. It is also possible to construct an indexable dictionary supporting rank (but not select) that uses fewer than formula_54 bits. Such a dictionary is called a "monotone minimal perfect hash function", and can be implemented using as few as formula_55 bits. Succinct hash tables. A succinct hash table, also known as a "succinct unordered dictionary," is a data structure that stores formula_46 keys from a universe formula_56 using space formula_57 bits, while supporting membership queries in constant expected time. If a succinct hash table also supports insertions and deletions in constant expected time, then it is referred to as "dynamic", and otherwise it is referred to as "static." The first dynamic succinct hash table was due to Raman and Rao in 2003. In the case where formula_58, their solution uses space formula_59 bits. Subsequently, it was shown that this space bound could be improved to formula_60 bits for any constant number of logarithms, and a little after that this bound was also shown to be optimal. The latter solution supports all operations in worst-case constant time with high probability. The first static succinct hash table was due to Pagh in 1999. In the case where formula_58, their solution uses space formula_61 bits, and supports "worst-case" constant-time queries. This bound was subsequently improved to formula_62 bits, and then to formula_63 bits. Whereas the first two solutions support worst-case constant-time queries, the final one supports constant expected-time queries. The final solution also requires access to a lookup table of size formula_64, but this lookup table is independent of the set of elements being stored. Other examples. 
A string with an arbitrary length (Pascal string) takes "Z" + log("Z") space, and is thus succinct. If there is a maximum length – which is the case in practice, since 2^32 = 4 GiB of data is a very long string, and 2^64 = 16 EiB of data is larger than any string in practice – then a string with a length is also implicit, taking "Z" + "k" space, where "k" is the number of data to represent the maximum length (e.g., 64 bits). When a sequence of variable-length items (such as strings) needs to be encoded, there are various possibilities. A direct approach is to store a length and an item in each record – these can then be placed one after another. This allows efficient next, but not finding the "k"th item. An alternative is to place the items in order with a delimiter (e.g., null-terminated string). This uses a delimiter instead of a length, and is substantially slower, since the entire sequence must be scanned for delimiters. Both of these are space-efficient. An alternative approach is out-of-band separation: the items can simply be placed one after another, with no delimiters. Item bounds can then be stored as a sequence of length, or better, offsets into this sequence. Alternatively, a separate binary string consisting of 1s in the positions where an item begins, and 0s everywhere else is encoded along with it. Given this string, the formula_65 function can quickly determine where each item begins, given its index. This is "compact" but not "succinct," as it takes 2"Z" space, which is O("Z"). Another example is the representation of a binary tree: an arbitrary binary tree on formula_48 nodes can be represented in formula_66 bits while supporting a variety of operations on any node, which includes finding its parent, its left and right child, and returning the size of its subtree, each in constant time. The number of different binary trees on formula_48 nodes is formula_67formula_68. For large formula_48, this is about formula_69; thus we need at least about formula_70 bits to encode it. A succinct binary tree therefore would occupy only formula_71 bits per node.
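The following Python sketch (illustrative only; it uses fixed small block sizes rather than the Θ(log² n)/Θ(log n) sizes of the analysis, and a naive select) mirrors the two-level rank directory described earlier: absolute ranks per large block, relative ranks per small block, and a popcount over the partial block.

```python
# A simplified rank/select directory over a bit vector.
class RankSelect:
    SMALL = 8            # bits per small block (stand-in for (lg n)/2)
    PER_LARGE = 16       # small blocks per large block (stand-in for 2 lg n)

    def __init__(self, bits):
        self.bits = bits
        self.large = []                  # rank at the start of each large block
        self.small = []                  # rank within the enclosing large block
        total = within = 0
        for i in range(0, len(bits), self.SMALL):
            block_index = i // self.SMALL
            if block_index % self.PER_LARGE == 0:
                self.large.append(total)
                within = 0
            self.small.append(within)
            ones = sum(bits[i:i + self.SMALL])
            total += ones
            within += ones

    def rank1(self, x):
        """Number of 1s in bits[0..x] (inclusive), using the two directories."""
        block = x // self.SMALL
        start = block * self.SMALL
        tail = sum(self.bits[start:x + 1])        # popcount of the partial block
        return self.large[block // self.PER_LARGE] + self.small[block] + tail

    def select1(self, k):
        """Position of the k-th 1 (k >= 1), by scanning; a real structure would
        use an extra directory or a binary search over rank instead."""
        for i, b in enumerate(self.bits):
            k -= b
            if k == 0:
                return i
        raise ValueError("fewer than k ones")

bits = [1, 0, 1, 1, 0, 0, 1, 0] * 10
rs = RankSelect(bits)
assert rs.rank1(15) == sum(bits[:16])
assert bits[rs.select1(5)] == 1 and sum(bits[:rs.select1(5) + 1]) == 5
```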
[ { "math_id": 0, "text": "Z" }, { "math_id": 1, "text": "Z + O(1)" }, { "math_id": 2, "text": "Z + o(Z)" }, { "math_id": 3, "text": "O(Z)" }, { "math_id": 4, "text": "2Z" }, { "math_id": 5, "text": "Z + \\sqrt{Z}" }, { "math_id": 6, "text": "Z + \\lg Z" }, { "math_id": 7, "text": "Z + 3" }, { "math_id": 8, "text": "k" }, { "math_id": 9, "text": "S" }, { "math_id": 10, "text": "U =\n[0 \\dots n) = \\{0, 1, \\dots, n - 1\\}" }, { "math_id": 11, "text": "B[0 \\dots n)" }, { "math_id": 12, "text": "B[i] = 1" }, { "math_id": 13, "text": "i \\in S." }, { "math_id": 14, "text": "\\mathbf{rank}_q(x) = |\\{ k \\in [0 \\dots x] : B[k] = q \\}|" }, { "math_id": 15, "text": "\\mathbf{select}_q(x)= \\min \\{k \\in [0 \\dots n) : \\mathbf{rank}_q(k) = x\\}" }, { "math_id": 16, "text": "q \\in \\{0, 1\\}" }, { "math_id": 17, "text": "\\mathbf{rank}_q(x)" }, { "math_id": 18, "text": "q" }, { "math_id": 19, "text": "x" }, { "math_id": 20, "text": "\\mathbf{select}_q(x)" }, { "math_id": 21, "text": " x" }, { "math_id": 22, "text": " q " }, { "math_id": 23, "text": "n + o(n)" }, { "math_id": 24, "text": "o(n)" }, { "math_id": 25, "text": "B" }, { "math_id": 26, "text": "l = \\lg^2 n" }, { "math_id": 27, "text": "s = \\lg n / 2" }, { "math_id": 28, "text": "R_l[0 \\dots n/l)" }, { "math_id": 29, "text": "\\lg n" }, { "math_id": 30, "text": "(n/l) \\lg n = n / \\lg n" }, { "math_id": 31, "text": "R_s[0 \\dots l/s)" }, { "math_id": 32, "text": "l/s = 2 \\lg n" }, { "math_id": 33, "text": "\\lg l = \\lg \\lg^2 n = 2 \\lg \\lg n" }, { "math_id": 34, "text": "(n/s) \\lg l = 4 n \\lg \\lg n / \\lg n" }, { "math_id": 35, "text": "R_p" }, { "math_id": 36, "text": "s" }, { "math_id": 37, "text": "i \\in [0, s)" }, { "math_id": 38, "text": "2^s s \\lg s = O(\\sqrt{n} \\lg n \\lg \\lg n)" }, { "math_id": 39, "text": "O(1)" }, { "math_id": 40, "text": "\\mathbf{rank}_1(x)" }, { "math_id": 41, "text": "\\mathbf{rank}_1(x) = R_l[\\lfloor x / l \\rfloor] + R_s[\\lfloor x / s\\rfloor] + R_p[x \\lfloor x / s\\rfloor, x \\text{ mod } s]" }, { "math_id": 42, "text": "O(\\lg n)" }, { "math_id": 43, "text": "3n/\\lg \\lg n + O(\\sqrt{n} \\lg n \\lg \\lg n) = o(n)" }, { "math_id": 44, "text": "O(\\cdot)" }, { "math_id": 45, "text": "\\textstyle \\binom{n}{m}" }, { "math_id": 46, "text": "m" }, { "math_id": 47, "text": "[n)" }, { "math_id": 48, "text": "n" }, { "math_id": 49, "text": "\\textstyle \\mathcal{B}(m,n) = \\lceil \\lg \\binom{n}{m} \\rceil" }, { "math_id": 50, "text": "\\mathcal{B}(m,n) + o(\\mathcal{B}(m,n))" }, { "math_id": 51, "text": "\\mathcal{B}(m,n) + O(m + n \\lg \\lg n / \\lg n)" }, { "math_id": 52, "text": "\\mathcal{B}(m,n) + O(n t^t / \\lg^t n + n^{3/4})" }, { "math_id": 53, "text": "O(t)" }, { "math_id": 54, "text": "\\textstyle \\mathcal{B}(m,n)" }, { "math_id": 55, "text": "O(m \\log \\log \\log n)" }, { "math_id": 56, "text": "\\{0, 1, \\dots, n - 1\\}" }, { "math_id": 57, "text": "(1 + o(1)) \\mathcal{B}(m, n)" }, { "math_id": 58, "text": "n = \\text{poly}(m)" }, { "math_id": 59, "text": "\\mathcal{B}(m, n) + O(m \\log \\log m)" }, { "math_id": 60, "text": "\\mathcal{B}(m, n) + O(m \\log \\log \\log \\cdots \\log m)" }, { "math_id": 61, "text": "\\mathcal{B}(m, n) + O(m (\\log \\log m)^2 / \\log m)" }, { "math_id": 62, "text": "\\mathcal{B}(m, n) + m / \\text{poly} \\log m" }, { "math_id": 63, "text": "\\mathcal{B}(m, n) + \\text{poly} \\log m" }, { "math_id": 64, "text": "n^\\epsilon" }, { "math_id": 65, "text": "select" }, { "math_id": 66, "text": "2n + o(n)" }, { "math_id": 67, "text": 
"{\\tbinom{2n}{n}}" }, { "math_id": 68, "text": "/(n+1)" }, { "math_id": 69, "text": "4^n" }, { "math_id": 70, "text": "\\log_2(4^n)=2n" }, { "math_id": 71, "text": "2" } ]
https://en.wikipedia.org/wiki?curid=10122951
10125391
Mapping cone (homological algebra)
Tool in homological algebra In homological algebra, the mapping cone is a construction on a map of chain complexes inspired by the analogous construction in topology. In the theory of triangulated categories it is a kind of combined kernel and cokernel: if the chain complexes take their terms in an abelian category, so that we can talk about cohomology, then the cone of a map "f" being acyclic means that the map is a quasi-isomorphism; if we pass to the derived category of complexes, this means that "f" is an isomorphism there, which recalls the familiar property of maps of groups, modules over a ring, or elements of an arbitrary abelian category that if the kernel and cokernel both vanish, then the map is an isomorphism. If we are working in a t-category, then in fact the cone furnishes both the kernel and cokernel of maps between objects of its core. Definition. The cone may be defined in the category of cochain complexes over any additive category (i.e., a category whose morphisms form abelian groups and in which we may construct a direct sum of any two objects). Let formula_0 be two complexes, with differentials formula_1 i.e., formula_2 and likewise for formula_3 For a map of complexes formula_4 we define the cone, often denoted by formula_5 or formula_6 to be the following complex: formula_7 on terms, with differential formula_8 (acting as though on column vectors). Here formula_9 is the complex with formula_10 and formula_11. Note that the differential on formula_12 is different from the natural differential on formula_13, and that some authors use a different sign convention. Thus, if for example our complexes are of abelian groups, the differential would act as formula_14 Properties. Suppose now that we are working over an abelian category, so that the homology of a complex is defined. The main use of the cone is to identify quasi-isomorphisms: if the cone is acyclic, then the map is a quasi-isomorphism. To see this, we use the existence of a triangle formula_15 where the maps formula_16 are given by the direct summands (see Homotopy category of chain complexes). Since this is a triangle, it gives rise to a long exact sequence on homology groups: formula_17 and if formula_12 is acyclic then by definition, the outer terms above are zero. Since the sequence is exact, this means that formula_18 induces an isomorphism on all homology groups, and hence (again by definition) is a quasi-isomorphism. This fact recalls the usual alternative characterization of isomorphisms in an abelian category as those maps whose kernel and cokernel both vanish. This appearance of a cone as a combined kernel and cokernel is not accidental; in fact, under certain circumstances the cone literally embodies both. Say for example that we are working over an abelian category and formula_0 have only one nonzero term in degree 0: formula_19 formula_20 and therefore formula_21 is just formula_22 (as a map of objects of the underlying abelian category). Then the cone is just formula_23 (Underset text indicates the degree of each term.) The homology of this complex is then formula_24 formula_25 formula_26 This is not an accident and in fact occurs in every t-category. Mapping cylinder. A related notion is the mapping cylinder: let formula_27 be a morphism of chain complexes, let further formula_28 be the natural map. The mapping cylinder of "f" is by definition the mapping cone of "g". Topological inspiration. 
This complex is called the cone in analogy to the mapping cone (topology) of a continuous map of topological spaces formula_29: the complex of singular chains of the topological cone formula_30 is homotopy equivalent to the cone (in the chain-complex-sense) of the induced map of singular chains of "X" to "Y". The mapping cylinder of a map of complexes is similarly related to the mapping cylinder of continuous maps.
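For a concrete check of the sign conventions above (not taken from any source; the matrices are arbitrary examples), the following snippet builds the cone differential as a block matrix for a chain map between two two-term cochain complexes of free abelian groups and verifies that consecutive differentials compose to zero:

```python
# Verify d^2 = 0 for the mapping cone of a chain map of two-term complexes.
import numpy as np

# A: A^0 -> A^1 and B: B^0 -> B^1, written as integer matrices,
# with a chain map f = (f0, f1) satisfying f1 @ dA == dB @ f0.
dA = np.array([[1],
               [2]])            # A^0 = Z,  A^1 = Z^2
dB = np.array([[1],
               [0]])            # B^0 = Z,  B^1 = Z^2
f0 = np.array([[1]])            # f in degree 0
f1 = np.array([[1, 0],
               [0, 0]])         # f in degree 1
assert np.array_equal(f1 @ dA, dB @ f0)     # f is a chain map

# Cone(f): degree -1 term A^0, degree 0 term A^1 (+) B^0, degree 1 term B^1.
d_minus1 = np.block([[-dA], [f0]])          # A^0 -> A^1 (+) B^0
d_zero   = np.block([[f1, dB]])             # A^1 (+) B^0 -> B^1

# The cone is a complex: consecutive differentials compose to zero,
# since d_zero @ d_minus1 = -f1 @ dA + dB @ f0.
assert np.array_equal(d_zero @ d_minus1, np.zeros((2, 1), dtype=int))
print(d_zero @ d_minus1)     # [[0], [0]]
```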
[ { "math_id": 0, "text": "A, B" }, { "math_id": 1, "text": "d_A, d_B;" }, { "math_id": 2, "text": "A = \\dots \\to A^{n - 1} \\xrightarrow{d_A^{n - 1}} A^n \\xrightarrow{d_A^n} A^{n + 1} \\to \\cdots" }, { "math_id": 3, "text": "B." }, { "math_id": 4, "text": "f : A \\to B," }, { "math_id": 5, "text": "\\operatorname{Cone}(f)" }, { "math_id": 6, "text": "C(f)," }, { "math_id": 7, "text": "C(f) = A[1] \\oplus B = \\dots \\to A^n \\oplus B^{n - 1} \\to A^{n + 1} \\oplus B^n \\to A^{n + 2} \\oplus B^{n + 1} \\to \\cdots" }, { "math_id": 8, "text": "d_{C(f)} = \\begin{pmatrix} d_{A[1]} & 0 \\\\ f[1] & d_B \\end{pmatrix}" }, { "math_id": 9, "text": "A[1]" }, { "math_id": 10, "text": "A[1]^n=A^{n + 1}" }, { "math_id": 11, "text": "d^n_{A[1]}=-d^{n + 1}_{A}" }, { "math_id": 12, "text": "C(f)" }, { "math_id": 13, "text": "A[1] \\oplus B" }, { "math_id": 14, "text": "\\begin{array}{ccl}\nd^n_{C(f)}(a^{n + 1}, b^n) &=& \\begin{pmatrix} d^n_{A[1]} & 0 \\\\ f[1]^n & d^n_B \\end{pmatrix} \\begin{pmatrix} a^{n + 1} \\\\ b^n \\end{pmatrix} \\\\\n &=& \\begin{pmatrix} - d^{n + 1}_A & 0 \\\\ f^{n + 1} & d^n_B \\end{pmatrix} \\begin{pmatrix} a^{n + 1} \\\\ b^n \\end{pmatrix} \\\\\n &=& \\begin{pmatrix} - d^{n + 1}_A (a^{n + 1}) \\\\ f^{n + 1}(a^{n + 1}) + d^n_B(b^n) \\end{pmatrix}\\\\\n &=& \\left(- d^{n + 1}_A (a^{n + 1}), f^{n + 1}(a^{n + 1}) + d^n_B(b^n)\\right).\n\\end{array}\n" }, { "math_id": 15, "text": "A \\xrightarrow{f} B \\to C(f) \\to A[1]" }, { "math_id": 16, "text": "B \\to C(f), C(f) \\to A[1]" }, { "math_id": 17, "text": "\\dots \\to H_{i - 1}(C(f)) \\to H_i(A) \\xrightarrow{f^*} H_i(B) \\to H_i(C(f)) \\to \\cdots" }, { "math_id": 18, "text": "f^*" }, { "math_id": 19, "text": "A = \\dots \\to 0 \\to A_0 \\to 0 \\to \\cdots," }, { "math_id": 20, "text": "B = \\dots \\to 0 \\to B_0 \\to 0 \\to \\cdots," }, { "math_id": 21, "text": "f \\colon A \\to B" }, { "math_id": 22, "text": "f_0 \\colon A_0 \\to B_0" }, { "math_id": 23, "text": "C(f) = \\dots \\to 0 \\to \\underset{[-1]}{A_0} \\xrightarrow{f_0} \\underset{[0]}{B_0} \\to 0 \\to \\cdots." }, { "math_id": 24, "text": "H_{-1}(C(f)) = \\operatorname{ker}(f_0)," }, { "math_id": 25, "text": "H_0(C(f)) = \\operatorname{coker}(f_0)," }, { "math_id": 26, "text": "H_i(C(f)) = 0 \\text{ for } i \\neq -1, 0.\\ " }, { "math_id": 27, "text": "f\\colon A \\to B" }, { "math_id": 28, "text": "g \\colon \\operatorname{Cone}(f)[-1] \\to A" }, { "math_id": 29, "text": "\\phi : X \\rightarrow Y" }, { "math_id": 30, "text": "cone(\\phi)" } ]
https://en.wikipedia.org/wiki?curid=10125391
10125619
Fundamental matrix (linear differential equation)
Matrix consisting of linearly independent solutions to a linear differential equation In mathematics, a fundamental matrix of a system of "n" homogeneous linear ordinary differential equationsformula_0is a matrix-valued function formula_1 whose columns are linearly independent solutions of the system. Then every solution to the system can be written as formula_2, for some constant vector formula_3 (written as a column vector of height n). A matrix-valued function formula_4 is a fundamental matrix of formula_5 if and only if formula_6 and formula_4 is a non-singular matrix for all formula_7. Control theory. The fundamental matrix is used to express the state-transition matrix, an essential component in the solution of a system of linear ordinary differential equations. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
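A numerical sketch of the constant-coefficient case (the matrix A below is an arbitrary example): for x′(t) = A x(t), the matrix exponential Ψ(t) = exp(At) is a fundamental matrix, and we can verify Ψ′(t) = A Ψ(t) by finite differences.

```python
# The matrix exponential as a fundamental matrix for x'(t) = A x(t).
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

def Psi(t):
    return expm(A * t)           # columns are linearly independent solutions

t, h = 0.7, 1e-6
dPsi = (Psi(t + h) - Psi(t - h)) / (2 * h)       # numerical derivative
print(np.allclose(dPsi, A @ Psi(t), atol=1e-5))  # True

# Any solution is Psi(t) @ c for a constant vector c fixed by the initial data;
# here c is chosen so that x(0) = [1, 0] (note Psi(0) is the identity matrix).
c = np.linalg.solve(Psi(0.0), np.array([1.0, 0.0]))
print(Psi(1.0) @ c)              # the solution evaluated at t = 1
```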
[ { "math_id": 0, "text": " \\dot{\\mathbf{x}}(t) = A(t) \\mathbf{x}(t) " }, { "math_id": 1, "text": " \\Psi(t) " }, { "math_id": 2, "text": "\\mathbf{x}(t) = \\Psi(t) \\mathbf{c}" }, { "math_id": 3, "text": "\\mathbf{c}" }, { "math_id": 4, "text": " \\Psi " }, { "math_id": 5, "text": " \\dot{\\mathbf{x}}(t) = A(t) \\mathbf{x}(t) " }, { "math_id": 6, "text": " \\dot{\\Psi}(t) = A(t) \\Psi(t) " }, { "math_id": 7, "text": " t " } ]
https://en.wikipedia.org/wiki?curid=10125619
10125731
Rupture field
In abstract algebra, a rupture field of a polynomial formula_0 over a given field formula_1 is a field extension of formula_1 generated by a root formula_2 of formula_0. For instance, if formula_3 and formula_4 then formula_5 is a rupture field for formula_0. The notion is interesting mainly if formula_0 is irreducible over formula_1. In that case, all rupture fields of formula_0 over formula_1 are isomorphic, non-canonically, to formula_6: if formula_7 where formula_2 is a root of formula_0, then the ring homomorphism formula_8 defined by formula_9 for all formula_10 and formula_11 is an isomorphism. Also, in this case the degree of the extension equals the degree of formula_12. A rupture field of a polynomial does not necessarily contain all the roots of that polynomial: in the above example the field formula_5 does not contain the other two (complex) roots of formula_0 (namely formula_13 and formula_14 where formula_15 is a primitive cube root of unity). For a field containing all the roots of a polynomial, see Splitting field. Examples. A rupture field of formula_16 over formula_17 is formula_18. It is also a splitting field. The rupture field of formula_16 over formula_19 is formula_20 since there is no element of formula_19 which squares to formula_21 (and all quadratic extensions of formula_19 are isomorphic to formula_20). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
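To make the second example tangible, the following self-contained sketch (written only for this illustration) implements arithmetic in F_3[X]/(X^2 + 1) by hand and confirms that the adjoined element is a root of X^2 + 1; here the rupture field happens to contain both roots, so it is also a splitting field.

```python
# F_9 built as F_3[X]/(X^2 + 1), representing a + b*x as the pair (a, b).
class F9:
    def __init__(self, a, b=0):
        self.a, self.b = a % 3, b % 3          # a + b*x with coefficients in F_3

    def __add__(self, other):
        return F9(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b x)(c + d x) = ac + (ad + bc) x + bd x^2, with x^2 = -1
        a, b, c, d = self.a, self.b, other.a, other.b
        return F9(a * c - b * d, a * d + b * c)

    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

    def __repr__(self):
        return f"{self.a} + {self.b}x"

x = F9(0, 1)                       # the adjoined root of X^2 + 1
assert x * x + F9(1) == F9(0)      # x is a root of X^2 + 1 over F_3

# The rupture field has 9 elements, and X^2 + 1 splits in it:
elements = [F9(a, b) for a in range(3) for b in range(3)]
roots = [e for e in elements if e * e + F9(1) == F9(0)]
print(len(elements), roots)        # 9 [0 + 1x, 0 + 2x]
```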
[ { "math_id": 0, "text": "P(X)" }, { "math_id": 1, "text": "K" }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "K=\\mathbb Q" }, { "math_id": 4, "text": "P(X)=X^3-2" }, { "math_id": 5, "text": "\\mathbb Q[\\sqrt[3]2]" }, { "math_id": 6, "text": "K_P=K[X]/(P(X))" }, { "math_id": 7, "text": "L=K[a]" }, { "math_id": 8, "text": "f" }, { "math_id": 9, "text": "f(k)=k" }, { "math_id": 10, "text": "k\\in K" }, { "math_id": 11, "text": "f(X\\mod P)=a" }, { "math_id": 12, "text": "P" }, { "math_id": 13, "text": "\\omega\\sqrt[3]2" }, { "math_id": 14, "text": "\\omega^2\\sqrt[3]2" }, { "math_id": 15, "text": "\\omega" }, { "math_id": 16, "text": "X^2+1" }, { "math_id": 17, "text": "\\mathbb R" }, { "math_id": 18, "text": "\\mathbb C" }, { "math_id": 19, "text": "\\mathbb F_3" }, { "math_id": 20, "text": "\\mathbb F_9" }, { "math_id": 21, "text": "-1" } ]
https://en.wikipedia.org/wiki?curid=10125731
1012633
Price dispersion
In economics, price dispersion is variation in prices across sellers of the same item, holding fixed the item's characteristics. Price dispersion can be viewed as a measure of trading frictions (or, tautologically, as a violation of the law of one price). It is often attributed to consumer search costs or unmeasured attributes (such as the reputation) of the retailing outlets involved. There is a difference between price dispersion and price discrimination. The latter concept involves a single provider charging different prices to different customers for an identical good. Price dispersion, on the other hand, is best thought of as the outcome of many firms potentially charging different prices, where customers of one firm find it difficult to patronize (or are perhaps unaware of) other firms due to the existence of search costs. Price dispersion measures include the range of prices, the percentage difference of highest and lowest price, the standard deviation of the price distribution, the variance of the price distribution, and the coefficient of variation of the price distribution. In most theoretical literature, price dispersion is argued to result from spatial differences and the existence of significant search costs. With the development of the internet and shopping agent programs, conventional wisdom suggests that price dispersion should be alleviated and may eventually disappear in the online market due to the reduced search cost for both price and product features. However, recent studies found a surprisingly high level of price dispersion online, even for standardized items such as books, CDs and DVDs. There is some evidence of a shrinking of this online price dispersion, but it remains significant. Recently, work has also been done in the area of e-commerce, specifically the Semantic Web, and its effects on price dispersion. Hal Varian, an economist at U. C. Berkeley, argued in a 1980 article that price dispersion may be an intentional marketing technique to encourage shoppers to explore their options. A related concept is that of wage dispersion. Consumer search and price dispersion. Search alone is insufficient. Even when consumers search, price dispersion is not guaranteed. Consumers may search, yet firms set the same price, negating the mere fact of searching. This is referred to as Diamond's paradox. Assume that many firms provide a homogeneous good. Consumers will randomly sample only one firm if they expect that all firms charge the same price. Consequently, each firm has an equal share of consumers. Since consumers disregard the competition, each firm acts as a monopoly on its share of consumers. Firms choose a price that maximizes profit: the monopoly price. A necessary condition. A recurrent observation is that some consumers must sample one firm and only one, while the remaining consumers must sample at least two firms. If all of them sample only one firm, then the market faces Diamond's Paradox. Firms would charge the same price, and so there would be no price dispersion. Conversely, if all consumers sample at least two firms, the most expensive firm will not get any consumers, because they know at least one other firm that is cheaper. As a result, prices must be as low as possible: equal to marginal costs of production, as in a Bertrand economy. Price dispersion in a non-sequential search model. A non-sequential search strategy consists in choosing a number of prices to compare. 
If consumers follow a non-sequential search strategy, as long as some consumers sample only one firm, then an equilibrium in price dispersion exists. There is an equilibrium in price dispersion if some consumers search once, and the remaining consumers search more than one firm. Moreover, the distribution of prices has a closed form if consumers search at most two firms: formula_0 where formula_1; with formula_2 the share of consumers who sample only one firm, formula_3 consumers' reservation price, and formula_4 firms' marginal costs of production. Such an equilibrium in price dispersion occurs when consumers minimize formula_5, with formula_6 the sample size, formula_7 a search cost, and formula_8 the smallest price sampled. Price dispersion in a sequential search model. A sequential search strategy consists in sampling prices one by one, and stopping after identifying a sufficiently low price. In sequential search models, the existence of perfectly informed consumers guarantees the equilibrium in price dispersion if the remaining consumers search once and only once. There is a continuous relationship between the share of informed consumers and the type of competition: from Bertrand competition to Diamond competition as fewer and fewer consumers are initially perfectly informed. The distribution of prices has a closed form: formula_9 on support formula_10; where formula_11 is the share of perfectly informed consumers, formula_12 the number of firms, formula_13 the revenue function that attains its maximum at formula_14, formula_4 consumers' reservation price, and formula_15
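The following illustrative code (parameter values are arbitrary, and the lower end of the price support is located numerically rather than taken from a formula) implements the closed-form price distribution of the non-sequential search model described above:

```python
# Equilibrium price CDF for the non-sequential search model, with q the share
# of consumers sampling one firm, p_star the reservation price, r marginal cost.
def price_cdf(p, q, p_star, r):
    """F(p): fraction of firms charging at most p in the mixed-strategy equilibrium."""
    if p >= p_star:
        return 1.0
    value = 1.0 - ((p_star - p) / (p - r)) * (q / (2.0 * (1.0 - q)))
    return max(value, 0.0)

def support_lower_bound(q, p_star, r, tol=1e-10):
    """Smallest price actually charged: where F(p) reaches 0, found by bisection."""
    lo, hi = r + tol, p_star
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if price_cdf(mid, q, p_star, r) > 0.0:
            hi = mid
        else:
            lo = mid
    return hi

q, p_star, r = 0.5, 10.0, 2.0
p_low = support_lower_bound(q, p_star, r)
print(f"prices are dispersed over [{p_low:.3f}, {p_star}]")   # about [4.667, 10.0]
for p in (p_low, 6.0, 8.0, p_star):
    print(f"F({p:.3f}) = {price_cdf(p, q, p_star, r):.3f}")
```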
[ { "math_id": 0, "text": "\nF\\left(x\\right) =\\begin{cases}\n 0, & \\text{if } p < \\underline{p} \\left( q \\right)\\\\\n 1 - \\left( \\frac{p^{*} - p}{p - r}\\right)\\left( \\frac{q}{2\\left( 1 - q \\right)}\\right), & \\text{if } \\underline{p} \\left( q \\right) < p \\leq p^{*}\\\\\n 1, &\\text{if } p > p^{*}\n \\end{cases}\n" }, { "math_id": 1, "text": "\\underline{p} \\left( q \\right) = \\left( p^{*} - p \\right)\\frac{q}{2\\left( 1 - q \\right)} + r" }, { "math_id": 2, "text": "q" }, { "math_id": 3, "text": "p^{*}" }, { "math_id": 4, "text": "r" }, { "math_id": 5, "text": "\\mathbb{E} \\left[ p_{n} \\right] - cn" }, { "math_id": 6, "text": "n" }, { "math_id": 7, "text": "c" }, { "math_id": 8, "text": "p_{n}" }, { "math_id": 9, "text": "F \\left( p \\right) = 1 - \\left[ \\left( \\frac{1 - \\mu}{N \\mu} \\right) \\left( \\frac{R\\left( P_{r} \\right)}{R\\left( p \\right)} - 1 \\right) \\right]^{\\frac{1}{N-1}}" }, { "math_id": 10, "text": "\\left[ 0, P_{r} \\right]" }, { "math_id": 11, "text": "\\mu" }, { "math_id": 12, "text": "N" }, { "math_id": 13, "text": "R\\left(.\\right)" }, { "math_id": 14, "text": "\\hat{p}" }, { "math_id": 15, "text": "P_{r} = \\min \\left\\lbrace r, \\hat{p}\\right\\rbrace" } ]
https://en.wikipedia.org/wiki?curid=1012633
1012687
Coefficient of variation
Statistical parameter In probability theory and statistics, the coefficient of variation (CV), also known as normalized root-mean-square deviation (NRMSD), percent RMS, and relative standard deviation (RSD), is a standardized measure of dispersion of a probability distribution or frequency distribution. It is defined as the ratio of the standard deviation formula_0 to the mean formula_1 (or its absolute value, formula_2), and often expressed as a percentage ("%RSD"). The CV or RSD is widely used in analytical chemistry to express the precision and repeatability of an assay. It is also commonly used in fields such as engineering or physics when doing quality assurance studies and ANOVA gauge R&amp;R, by economists and investors in economic models, and in psychology/neuroscience. Definition. The coefficient of variation (CV) is defined as the ratio of the standard deviation formula_3 to the mean formula_4, formula_5 It shows the extent of variability in relation to the mean of the population. The coefficient of variation should be computed only for data measured on scales that have a meaningful zero (ratio scale) and hence allow relative comparison of two measurements (i.e., division of one measurement by the other). The coefficient of variation may not have any meaning for data on an interval scale. For example, most temperature scales (e.g., Celsius, Fahrenheit etc.) are interval scales with arbitrary zeros, so the computed coefficient of variation would be different depending on the scale used. On the other hand, Kelvin temperature has a meaningful zero, the complete absence of thermal energy, and thus is a ratio scale. In plain language, it is meaningful to say that 20 Kelvin is twice as hot as 10 Kelvin, but only in this scale with a true absolute zero. While a standard deviation (SD) can be measured in Kelvin, Celsius, or Fahrenheit, the value computed is only applicable to that scale. Only the Kelvin scale can be used to compute a valid coefficient of variability. Measurements that are log-normally distributed exhibit stationary CV; in contrast, SD varies depending upon the expected value of measurements. A more robust possibility is the quartile coefficient of dispersion, half the interquartile range formula_6 divided by the average of the quartiles (the midhinge), formula_7. In most cases, a CV is computed for a single independent variable (e.g., a single factory product) with numerous, repeated measures of a dependent variable (e.g., error in the production process). However, data that are linear or even logarithmically non-linear and include a continuous range for the independent variable with sparse measurements across each value (e.g., scatter-plot) may be amenable to single CV calculation using a maximum-likelihood estimation approach. Examples. In the examples below, we will take the values given as randomly chosen from a larger population of values. In these examples, we will take the values given as the entire population of values. Estimation. When only a sample of data from a population is available, the population CV can be estimated using the ratio of the sample standard deviation formula_8 to the sample mean formula_9: formula_10 But this estimator, when applied to a small or moderately sized sample, tends to be too low: it is a biased estimator. For normally distributed data, an unbiased estimator for a sample of size n is: formula_11 Log-normal data. Many datasets follow an approximately log-normal distribution. 
In such cases, a more accurate estimate, derived from the properties of the log-normal distribution, is defined as: formula_12 where formula_13 is the sample standard deviation of the data after a natural log transformation. (In the event that measurements are recorded using any other logarithmic base, b, their standard deviation formula_14 is converted to base e using formula_15, and the formula for formula_16 remains the same.) This estimate is sometimes referred to as the "geometric CV" (GCV) in order to distinguish it from the simple estimate above. However, "geometric coefficient of variation" has also been defined by Kirkwood as: formula_17 This term was intended to be "analogous" to the coefficient of variation, for describing multiplicative variation in log-normal data, but this definition of GCV has no theoretical basis as an estimate of formula_18 itself. For many practical purposes (such as sample size determination and calculation of confidence intervals) it is formula_19 which is of most use in the context of log-normally distributed data. If necessary, this can be derived from an estimate of formula_18 or GCV by inverting the corresponding formula. Comparison to standard deviation. Advantages. The coefficient of variation is useful because the standard deviation of data must always be understood in the context of the mean of the data. In contrast, the actual value of the CV is independent of the unit in which the measurement has been taken, so it is a dimensionless number. For comparison between data sets with different units or widely different means, one should use the coefficient of variation instead of the standard deviation. Applications. The coefficient of variation is also common in applied probability fields such as renewal theory, queueing theory, and reliability theory. In these fields, the exponential distribution is often more important than the normal distribution. The standard deviation of an exponential distribution is equal to its mean, so its coefficient of variation is equal to 1. Distributions with CV &lt; 1 (such as an Erlang distribution) are considered low-variance, while those with CV &gt; 1 (such as a hyper-exponential distribution) are considered high-variance. Some formulas in these fields are expressed using the squared coefficient of variation, often abbreviated SCV. In modeling, a variation of the CV is the CV(RMSD). Essentially the CV(RMSD) replaces the standard deviation term with the Root Mean Square Deviation (RMSD). While many natural processes indeed show a correlation between the average value and the amount of variation around it, accurate sensor devices need to be designed in such a way that the coefficient of variation is close to zero, i.e., yielding a constant absolute error over their working range. In actuarial science, the CV is known as unitized risk. In industrial solids processing, CV is particularly important to measure the degree of homogeneity of a powder mixture. Comparing the calculated CV to a specification makes it possible to determine whether a sufficient degree of mixing has been reached. In fluid dynamics, the CV, also referred to as Percent RMS, %RMS, %RMS Uniformity, or Velocity RMS, is a useful determination of flow uniformity for industrial processes. The term is used widely in the design of pollution control equipment, such as electrostatic precipitators (ESPs), selective catalytic reduction (SCR), scrubbers, and similar devices. 
The Institute of Clean Air Companies (ICAC) references RMS deviation of velocity in the design of fabric filters (ICAC document F-7). The guiding principle is that many of these pollution control devices require "uniform flow" entering and through the control zone. This can be related to uniformity of velocity profile, temperature distribution, gas species (such as ammonia for an SCR, or activated carbon injection for mercury absorption), and other flow-related parameters. The Percent RMS also is used to assess flow uniformity in combustion systems, HVAC systems, ductwork, inlets to fans and filters, air handling units, etc. where performance of the equipment is influenced by the incoming flow distribution. Laboratory measures of intra-assay and inter-assay CVs. CV measures are often used as quality controls for quantitative laboratory assays. While intra-assay and inter-assay CVs might be assumed to be calculated by simply averaging CV values across CV values for multiple samples within one assay or by averaging multiple inter-assay CV estimates, it has been suggested that these practices are incorrect and that a more complex computational process is required. It has also been noted that CV values are not an ideal index of the certainty of a measurement when the number of replicates varies across samples − in this case standard error in percent is suggested to be superior. If measurements do not have a natural zero point then the CV is not a valid measurement and alternative measures such as the intraclass correlation coefficient are recommended. As a measure of economic inequality. The coefficient of variation fulfills the requirements for a measure of economic inequality. If x (with entries "x""i") is a list of the values of an economic indicator (e.g. wealth), with "x""i" being the wealth of agent "i", then the following requirements are met: "c""v" assumes its minimum value of zero for complete equality (all "x""i" are equal). Its most notable drawback is that it is not bounded from above, so it cannot be normalized to be within a fixed range (e.g. like the Gini coefficient which is constrained to be between 0 and 1). It is, however, more mathematically tractable than the Gini coefficient. As a measure of standardisation of archaeological artefacts. Archaeologists often use CV values to compare the degree of standardisation of ancient artefacts. Variation in CVs has been interpreted to indicate different cultural transmission contexts for the adoption of new technologies. Coefficients of variation have also been used to investigate pottery standardisation relating to changes in social organisation. Archaeologists also use several methods for comparing CV values, for example the modified signed-likelihood ratio (MSLR) test for equality of CVs. Examples of misuse. Comparing coefficients of variation between parameters using relative units can result in differences that may not be real. If we compare the same set of temperatures in Celsius and Fahrenheit (both relative units, where kelvin and Rankine scale are their associated absolute values): Celsius: [0, 10, 20, 30, 40] Fahrenheit: [32, 50, 68, 86, 104] The sample standard deviations are 15.81 and 28.46, respectively. The CV of the first set is 15.81/20 = 79%. For the second set (which are the same temperatures) it is 28.46/68 = 42%. 
If, for example, the data sets are temperature readings from two different sensors (a Celsius sensor and a Fahrenheit sensor) and you want to know which sensor is better by picking the one with the least variance, then you will be misled if you use CV. The problem here is that you have divided by a relative value rather than an absolute one. Comparing the same data set, now in absolute units: Kelvin: [273.15, 283.15, 293.15, 303.15, 313.15] Rankine: [491.67, 509.67, 527.67, 545.67, 563.67] The sample standard deviations are still 15.81 and 28.46, respectively, because the standard deviation is not affected by a constant offset. The coefficients of variation, however, are now both equal to 5.39%. Mathematically speaking, the coefficient of variation is not invariant under arbitrary linear transformations. That is, for a random variable formula_20, the coefficient of variation of formula_21 is equal to the coefficient of variation of formula_20 only when formula_22. In the above example, Celsius can only be converted to Fahrenheit through a linear transformation of the form formula_23 with formula_24, whereas Kelvins can be converted to Rankines through a transformation of the form formula_25. Distribution. Provided that negative and small positive values of the sample mean occur with negligible frequency, the probability distribution of the coefficient of variation for a sample of size formula_26 of i.i.d. normal random variables has been shown by Hendricks and Robey to be formula_27 where the symbol formula_28 indicates that the summation is over only even values of formula_29, i.e., if formula_26 is odd, sum over even values of formula_30 and if formula_26 is even, sum only over odd values of formula_30. This is useful, for instance, in the construction of hypothesis tests or confidence intervals. Statistical inference for the coefficient of variation in normally distributed data is often based on McKay's chi-square approximation for the coefficient of variation. Alternative methods. Liu (2012) reviews methods for the construction of a confidence interval for the coefficient of variation. Notably, Lehmann (1986) derived the sampling distribution for the coefficient of variation using a non-central t-distribution to give an exact method for the construction of the CI. Similar ratios. Standardized moments are similar ratios, formula_31 where formula_32 is the "k"th moment about the mean, which are also dimensionless and scale invariant. The variance-to-mean ratio, formula_33, is another similar ratio, but is not dimensionless, and hence not scale invariant. See Normalization (statistics) for further ratios. In signal processing, particularly image processing, the reciprocal ratio formula_34 (or its square) is referred to as the signal-to-noise ratio in general and signal-to-noise ratio (imaging) in particular. Other related ratios include: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\sigma " }, { "math_id": 1, "text": " \\mu " }, { "math_id": 2, "text": "| \\mu |" }, { "math_id": 3, "text": "\\sigma" }, { "math_id": 4, "text": "\\mu" }, { "math_id": 5, "text": "CV = \\frac{\\sigma}{\\mu}." }, { "math_id": 6, "text": " {(Q_3 - Q_1)/2} " }, { "math_id": 7, "text": " {(Q_1 + Q_3)/2} " }, { "math_id": 8, "text": "s \\," }, { "math_id": 9, "text": "\\bar{x}" }, { "math_id": 10, "text": "\\widehat{c_{\\rm v}} = \\frac{s}{\\bar{x}}" }, { "math_id": 11, "text": "\\widehat{c_{\\rm v}}^*=\\bigg(1+\\frac{1}{4n}\\bigg)\\widehat{c_{\\rm v}}" }, { "math_id": 12, "text": "\\widehat{cv}_{\\rm raw} = \\sqrt{\\mathrm{e}^{s_{\\ln}^2}-1}" }, { "math_id": 13, "text": "{s_{\\ln}} \\," }, { "math_id": 14, "text": "s_b \\," }, { "math_id": 15, "text": "s_{\\ln} = s_b \\ln(b) \\," }, { "math_id": 16, "text": "\\widehat{cv}_{\\rm raw} \\," }, { "math_id": 17, "text": "\\mathrm{GCV_K} = {\\mathrm{e}^{s_{\\ln}}\\!\\!-1}" }, { "math_id": 18, "text": "c_{\\rm v} \\," }, { "math_id": 19, "text": "s_{ln} \\," }, { "math_id": 20, "text": "X" }, { "math_id": 21, "text": "aX+b" }, { "math_id": 22, "text": "b = 0" }, { "math_id": 23, "text": "ax+b" }, { "math_id": 24, "text": "b \\neq 0" }, { "math_id": 25, "text": "ax" }, { "math_id": 26, "text": "n" }, { "math_id": 27, "text": " \\mathrm{d}F_{c_{\\rm v}} = \\frac{2}{\\pi^{1/2} \\Gamma {\\left(\\frac{n-1}{2}\\right)}} \\exp \\left( -\\frac{n}{2\\left(\\frac{\\sigma}{\\mu}\\right)^2} \\cdot \\frac{{c_{\\rm v}}^2}{1+{c_{\\rm v}}^2} \\right) \\frac{{c_{\\rm v}}^{n-2}}{(1+{c_{\\rm v}}^2)^{n/2}}\\sideset{}{^\\prime}\\sum_{i=0}^{n-1}\\frac{(n-1)! \\, \\Gamma \\left(\\frac{n-i}{2}\\right)}{(n-1-i)! \\, i! \\,} \\cdot \\frac{n^{i/2}}{2^{i/2} \\cdot \\left(\\frac{\\sigma}{\\mu}\\right)^i} \\cdot \\frac{1}{(1+{c_{\\rm v}}^2)^{i/2}} \\, \\mathrm{d}c_{\\rm v} ," }, { "math_id": 28, "text": "\\sideset{}{^\\prime}\\sum" }, { "math_id": 29, "text": "n - 1 - i" }, { "math_id": 30, "text": "i" }, { "math_id": 31, "text": "{\\mu_k}/{\\sigma^k}" }, { "math_id": 32, "text": "\\mu_k" }, { "math_id": 33, "text": "\\sigma^2/\\mu" }, { "math_id": 34, "text": "\\mu/\\sigma" }, { "math_id": 35, "text": "\\sigma^2 / \\mu^2" }, { "math_id": 36, "text": "\\mu_k/\\sigma^k" }, { "math_id": 37, "text": "\\sigma^2_W/\\mu_W" } ]
https://en.wikipedia.org/wiki?curid=1012687
10129659
Ducci sequence
Sequence of n-tuples of integers A Ducci sequence is a sequence of "n"-tuples of integers, sometimes known as "the Diffy game" because it is based on repeatedly taking differences. Given an "n"-tuple of integers formula_0, the next "n"-tuple in the sequence is formed by taking the absolute differences of neighbouring integers: formula_1 Another way of describing this is as follows. Arrange "n" integers in a circle and make a new circle by taking the difference between neighbours, ignoring any minus signs; then repeat the operation. Ducci sequences are named after Enrico Ducci (1864–1940), the Italian mathematician who discovered in the 1930s that every such sequence eventually becomes periodic. Ducci sequences are also known as the Ducci map or the n-number game. Open problems in the study of these maps still remain. Properties. From the second "n"-tuple onwards, it is clear that every integer in each "n"-tuple in a Ducci sequence is greater than or equal to 0 and is less than or equal to the difference between the maximum and minimum members of the first "n"-tuple. As there are only a finite number of possible "n"-tuples with these constraints, the sequence of n-tuples must sooner or later repeat itself. Every Ducci sequence therefore eventually becomes periodic. If "n" is a power of 2, every Ducci sequence eventually reaches the "n"-tuple (0,0,...,0) in a finite number of steps. If "n" is "not" a power of two, a Ducci sequence will either eventually reach an "n"-tuple of zeros or will settle into a periodic loop of 'binary' "n"-tuples; that is, "n"-tuples of the form formula_2, where formula_3 is a constant and formula_4. An obvious generalisation of Ducci sequences is to allow the members of the "n"-tuples to be "any" real numbers rather than just integers. For example, this 4-tuple converges to (0, 0, 0, 0) in four iterations: formula_5 formula_6 The properties presented here do not always hold for these generalisations. For example, a Ducci sequence starting with the "n"-tuple (1, "q", "q"², "q"³), where "q" is the (irrational) positive root of the cubic formula_7, does not reach (0,0,0,0) in a finite number of steps, although in the limit it converges to (0,0,0,0). Examples. Ducci sequences may be arbitrarily long before they reach a tuple of zeros or a periodic loop. The 4-tuple sequence starting with (0, 653, 1854, 4063) takes 24 iterations to reach the zeros tuple. formula_8 formula_9 This 5-tuple sequence enters a period 15 binary 'loop' after 7 iterations. formula_10 The following 6-tuple sequence shows that sequences of tuples whose length is not a power of two may still reach a tuple of zeros: formula_11 Under certain conditions, a Ducci sequence whose tuple length is a power of two reaches the zeros tuple in at most that many iterations. It is hypothesized that such sequences conform to a general rule. Modulo two form. Once a Ducci sequence enters a binary loop, it is possible to treat the sequence modulo two. That is: formula_12 This forms the basis for proving when a sequence vanishes to all zeros. Cellular automata. The linear map modulo 2 can further be identified as the cellular automaton denoted rule 102 in Wolfram code, which is related to rule 90 through the Martin–Odlyzko–Wolfram map. Rule 102 reproduces the Sierpinski triangle. Other related topics. The Ducci map is an example of a difference equation, a category that also includes non-linear dynamics, chaos theory and numerical analysis. Similarities to cyclotomic polynomials have also been pointed out. 
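To make the map and the examples above concrete, the iteration is easy to program. The following Python sketch is illustrative and not part of the original article; the function names are chosen here for readability:
def ducci_step(t):
    # One application of the Ducci map: absolute differences of neighbours, cyclically.
    return tuple(abs(t[i] - t[(i + 1) % len(t)]) for i in range(len(t)))

def ducci_history(t):
    # Iterate the Ducci map until a tuple repeats; return the list of tuples seen.
    seen, history = set(), []
    while t not in seen:
        seen.add(t)
        history.append(t)
        t = ducci_step(t)
    return history

history = ducci_history((0, 653, 1854, 4063))
# This particular starting tuple does reach the zeros tuple, so the search below succeeds.
first_zero = next(i for i, s in enumerate(history) if not any(s))
print(first_zero)  # expected to be 24, matching the example above
Because every entry after the first step is bounded by the largest difference in the starting tuple, only finitely many tuples can occur, so the loop above always terminates.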
While the Ducci map has no practical applications at present, its connection to the widely applied field of difference equations has led to the conjecture that some form of the Ducci map may find applications in the future. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(a_1,a_2,...,a_n)" }, { "math_id": 1, "text": "(a_1,a_2,...,a_n) \\rightarrow (|a_1-a_2|, |a_2-a_3|, ..., |a_n-a_1|)\\, ." }, { "math_id": 2, "text": "k(b_1, b_2, ... b_n)" }, { "math_id": 3, "text": "k" }, { "math_id": 4, "text": "b_i \\in \\{0, 1\\}" }, { "math_id": 5, "text": "\n(e, \\pi, \\sqrt2, 1) \\rightarrow\n(\\pi - e, \\pi - \\sqrt2, \\sqrt2 - 1, e - 1) \\rightarrow\n(e - \\sqrt2, \\pi - 2\\sqrt2 + 1, e - \\sqrt2, 2e - \\pi - 1) \\rightarrow\n" }, { "math_id": 6, "text": "\n(\\pi - e - \\sqrt2 + 1, \\pi - e - \\sqrt2 + 1, \\pi - e - \\sqrt2 + 1, \\pi - e - \\sqrt2 + 1)\n \\rightarrow (0, 0, 0, 0)\n" }, { "math_id": 7, "text": "x^3 - x^2 - x - 1 = 0" }, { "math_id": 8, "text": "\n(0, 653, 1854, 4063) \\rightarrow\n(653, 1201, 2209, 4063) \\rightarrow\n(548, 1008, 1854, 3410) \\rightarrow\n" }, { "math_id": 9, "text": "\n\\cdots \\rightarrow\n(0, 0, 128, 128) \\rightarrow\n(0, 128, 0, 128) \\rightarrow\n(128, 128, 128, 128) \\rightarrow\n(0, 0, 0, 0)\n" }, { "math_id": 10, "text": "\n\\begin{matrix}\n1 5 7 9 9 \\rightarrow &\n4 2 2 0 8 \\rightarrow &\n2 0 2 8 4 \\rightarrow &\n2 2 6 4 2 \\rightarrow &\n0 4 2 2 0 \\rightarrow &\n4 2 0 2 0 \\rightarrow \\\\\n2 2 2 2 4 \\rightarrow &\n0 0 0 2 2 \\rightarrow &\n0 0 2 0 2 \\rightarrow &\n0 2 2 2 2 \\rightarrow &\n2 0 0 0 2 \\rightarrow &\n2 0 0 2 0 \\rightarrow \\\\\n2 0 2 2 2 \\rightarrow &\n2 2 0 0 0 \\rightarrow &\n0 2 0 0 2 \\rightarrow &\n2 2 0 2 2 \\rightarrow &\n0 2 2 0 0 \\rightarrow &\n2 0 2 0 0 \\rightarrow \\\\\n2 2 2 0 2 \\rightarrow &\n0 0 2 2 0 \\rightarrow &\n0 2 0 2 0 \\rightarrow &\n2 2 2 2 0 \\rightarrow &\n0 0 0 2 2 \\rightarrow &\n\\cdots \\quad \\quad \\\\\n\\end{matrix}\n" }, { "math_id": 11, "text": "\n\\begin{matrix}\n1 2 1 2 1 0 \\rightarrow &\n1 1 1 1 1 1 \\rightarrow &\n0 0 0 0 0 0 \\\\\n\\end{matrix}\n" }, { "math_id": 12, "text": "(|a_1-a_2|, |a_2-a_3|, ..., |a_n-a_1|)\\ = (a_1+a_2, a_2+a_3, ..., a_n + a_1) \\bmod 2" } ]
https://en.wikipedia.org/wiki?curid=10129659
1013089
Solovay–Strassen primality test
The Solovay–Strassen primality test, developed by Robert M. Solovay and Volker Strassen in 1977, is a probabilistic test to determine if a number is composite or probably prime. The idea behind the test was discovered by M. M. Artjuhov in 1967 (see Theorem E in the paper). This test has been largely superseded by the Baillie–PSW primality test and the Miller–Rabin primality test, but has great historical importance in showing the practical feasibility of the RSA cryptosystem. The Solovay–Strassen test is essentially an Euler–Jacobi probable prime test. Concepts. Euler proved that for any odd prime number "p" and any integer "a", formula_0 where formula_1 is the Legendre symbol. The Jacobi symbol is a generalisation of the Legendre symbol to formula_2, where "n" can be any odd integer. The Jacobi symbol can be computed in time O((log "n")²) using Jacobi's generalization of the law of quadratic reciprocity. Given an odd number "n", one can ask whether or not the congruence formula_3 holds for various values of the "base" "a", given that "a" is relatively prime to "n". If "n" is prime then this congruence is true for all "a". So if we pick values of "a" at random and test the congruence, then as soon as we find an "a" which doesn't fit the congruence we know that "n" is not prime (but this does not tell us a nontrivial factorization of "n"). This base "a" is called an "Euler witness" for "n"; it is a witness for the compositeness of "n". The base "a" is called an "Euler liar" for "n" if the congruence is true while "n" is composite. For every composite odd "n", at least half of all bases formula_4 are (Euler) witnesses, because the set of Euler liars is a proper subgroup of formula_5. For example, for formula_6, the set of Euler liars formula_7 has order 8, while formula_5 has order 48. This contrasts with the Fermat primality test, for which the proportion of witnesses may be much smaller. Therefore, there are no (odd) composite "n" without many witnesses, unlike the case of Carmichael numbers for Fermat's test. Example. Suppose we wish to determine if "n" = 221 is prime. We write ("n"−1)/2 = 110. We randomly select an "a" (greater than 1 and smaller than "n"): 47. Using an efficient method for raising a number to a power (mod "n") such as binary exponentiation, we compute "a"^(("n"−1)/2) mod "n" and the Jacobi symbol formula_8 mod "n": 47^110 mod 221 = 220 = −1 mod 221, and formula_9 mod 221 = −1 mod 221 = 220. Since the two values agree, either 221 is prime, or 47 is an Euler liar for 221. We try another random "a", this time choosing "a" = 2: 2^110 mod 221 = 30, while formula_10 mod 221 = −1 mod 221 = 220. Since 30 ≠ 220, 2 is an Euler witness for the compositeness of 221, and 47 was in fact an Euler liar. Note that this tells us nothing about the prime factors of 221, which are actually 13 and 17. Algorithm and running time. The algorithm can be written in pseudocode as follows:
inputs: "n", a value to test for primality; "k", a parameter that determines the accuracy of the test
output: "composite" if "n" is composite, otherwise "probably prime"
repeat "k" times:
    choose "a" randomly in the range [2, "n" − 1]
    formula_11
    if "x" = 0 or formula_12 then return "composite"
return "probably prime"
Using fast algorithms for modular exponentiation, the running time of this algorithm is O("k"·log³ "n"), where "k" is the number of different values of "a" we test. Accuracy of the test. It is possible for the algorithm to return an incorrect answer. If the input "n" is indeed prime, then the output will always correctly be "probably prime". However, if the input "n" is composite then it is possible for the output to be incorrectly "probably prime". The number "n" is then called an Euler–Jacobi pseudoprime. 
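A concrete implementation can make the algorithm and the accuracy discussion easier to follow. The following Python sketch is an illustration rather than reference code (the function names are chosen here); the Jacobi-symbol helper uses the standard reciprocity-based algorithm mentioned in the Concepts section:
import random

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd n > 0, computed via the law of quadratic reciprocity.
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(n, k=20):
    # Returns "composite" or "probably prime"; a composite n is wrongly accepted
    # with probability at most 2**(-k).  Inputs below 2 are treated as composite.
    if n < 2 or n % 2 == 0:
        return "probably prime" if n == 2 else "composite"
    if n == 3:
        return "probably prime"
    for _ in range(k):
        a = random.randrange(2, n)              # random base in [2, n - 1]
        x = jacobi(a, n)
        if x == 0 or pow(a, (n - 1) // 2, n) != x % n:
            return "composite"                  # a is an Euler witness for n
    return "probably prime"

print(solovay_strassen(221))  # "composite", except with probability at most 2**-20
With "k" rounds, a composite input slips through only if every chosen base happens to be an Euler liar, which motivates the bound discussed next.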
When "n" is odd and composite, at least half of all "a" with gcd("a","n") = 1 are Euler witnesses. We can prove this as follows: let {"a"1, "a"2, ..., "a""m"} be the Euler liars and "a" an Euler witness. Then, for "i" = 1,2...,"m": formula_13 Because the following holds: formula_14 now we know that formula_15 This gives that each "a""i" gives a number "a"·"a""i", which is also an Euler witness. So each Euler liar gives an Euler witness and so the number of Euler witnesses is larger or equal to the number of Euler liars. Therefore, when "n" is composite, at least half of all "a" with gcd("a","n") = 1 is an Euler witness. Hence, the probability of failure is at most 2−"k" (compare this with the probability of failure for the Miller–Rabin primality test, which is at most 4−"k"). For purposes of cryptography the more bases "a" we test, i.e. if we pick a sufficiently large value of "k", the better the accuracy of test. Hence the chance of the algorithm failing in this way is so small that the (pseudo) prime is used in practice in cryptographic applications, but for applications for which it is important to have a prime, a test like ECPP or the Pocklington primality test should be used which "proves" primality. Average-case behaviour. The bound 1/2 on the error probability of a single round of the Solovay–Strassen test holds for any input "n", but those numbers "n" for which the bound is (approximately) attained are extremely rare. On the average, the error probability of the algorithm is significantly smaller: it is less than formula_16 for "k" rounds of the test, applied to uniformly random "n" ≤ "x". The same bound also applies to the related problem of what is the conditional probability of "n" being composite for a random number "n" ≤ "x" which has been declared prime in "k" rounds of the test. Complexity. The Solovay–Strassen algorithm shows that the decision problem COMPOSITE is in the complexity class RP. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "a^{(p-1)/2} \\equiv \\left(\\frac{a}{p}\\right) \\pmod p " }, { "math_id": 1, "text": "\\left(\\tfrac{a}{p}\\right)" }, { "math_id": 2, "text": "\\left(\\tfrac{a}{n}\\right)" }, { "math_id": 3, "text": " a^{(n-1)/2} \\equiv \\left(\\frac{a}{n}\\right) \\pmod n" }, { "math_id": 4, "text": "a \\in (\\mathbb{Z}/n\\mathbb{Z})^* " }, { "math_id": 5, "text": "(\\mathbb{Z}/n\\mathbb{Z})^*" }, { "math_id": 6, "text": " n =65" }, { "math_id": 7, "text": " = \\{1,8,14,18,47,51,57,64\\}" }, { "math_id": 8, "text": "(\\tfrac{a}{n})" }, { "math_id": 9, "text": "(\\tfrac{47}{221})" }, { "math_id": 10, "text": "(\\tfrac{2}{221})" }, { "math_id": 11, "text": "x \\gets \\left( \\tfrac{a}{n}\\right)" }, { "math_id": 12, "text": "a^{(n-1)/2}\\not\\equiv x\\pmod n" }, { "math_id": 13, "text": "(a\\cdot a_i)^{(n-1)/2}=a^{(n-1)/2}\\cdot a_i^{(n-1)/2}= a^{(n-1)/2}\\cdot \\left(\\frac{a_i}{n}\\right) \\not\\equiv \\left(\\frac{a}{n}\\right)\\left(\\frac{a_i}{n}\\right)\\pmod{n}." }, { "math_id": 14, "text": "\\left(\\frac{a}{n}\\right)\\left(\\frac{a_i}{n}\\right)=\\left(\\frac{a\\cdot a_i}{n}\\right)," }, { "math_id": 15, "text": "(a\\cdot a_i)^{(n-1)/2}\\not\\equiv \\left(\\frac{a\\cdot a_i}{n}\\right)\\pmod{n}." }, { "math_id": 16, "text": "2^{-k}\\exp\\left(-(1+o(1))\\frac{\\log x\\,\\log\\log\\log x}{\\log\\log x}\\right)" } ]
https://en.wikipedia.org/wiki?curid=1013089
10131478
PDIFF
Category of piecewise-smooth manifolds In geometric topology, PDIFF, for "p"iecewise "diff"erentiable, is the category of piecewise-smooth manifolds and piecewise-smooth maps between them. It properly contains DIFF (the category of smooth manifolds and smooth functions between them) and PL (the category of piecewise linear manifolds and piecewise linear maps between them), and the reason it is defined is to allow one to relate these two categories. Further, piecewise functions such as splines and polygonal chains are common in mathematics, and PDIFF provides a category for discussing them. Motivation. PDIFF is mostly a technical point: smooth maps are not piecewise linear (unless linear), and piecewise linear maps are not smooth (unless globally linear) – the intersection is linear maps, or more precisely affine maps (because not based) – so they cannot directly be related: they are separate generalizations of the notion of an affine map. However, while a smooth manifold is not a PL manifold, it carries a canonical PL structure – it is uniquely triangulable; conversely, not every PL manifold is smoothable. For a particular smooth manifold or smooth map between smooth manifolds, this can be shown by breaking up the manifold into small enough pieces, and then linearizing the manifold or map on each piece: for example, a circle in the plane can be approximated by a triangle, but not by a 2-gon, since the latter cannot be linearly embedded. This relation between Diff and PL requires choices, however, and is more naturally shown and understood by including both categories in a larger category, and then showing that the inclusion of PL is an equivalence: every smooth manifold and every PL manifold "is" a PDiff manifold. Thus, going from Diff to PDiff and PL to PDiff are natural – they are just inclusions. The map PL to PDiff, while not an equality – not every piecewise smooth function is piecewise linear – is an equivalence: one can go backwards by linearizing pieces. Thus it can for some purposes be inverted, or considered an isomorphism, which gives a map formula_0 These categories all sit inside TOP, the category of topological manifolds and continuous maps between them. In summary, PDiff is more general than Diff because it allows pieces (corners), and one cannot in general smooth corners, while PL is no less general than PDiff because one can linearize pieces (more precisely, one may need to break them up into smaller pieces and then linearize, which is allowed in PDiff). History. That every smooth (indeed, "C"^1) manifold has a unique PL structure was originally proven in . A detailed expository proof is given in . The result is elementary and rather technical to prove in detail, so it is generally only sketched in modern texts, as in the brief proof outline given in . A very brief outline is given in , while a short but detailed proof is given in . References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\text{Diff} \\to \\text{PDiff} \\to \\text{PL}." } ]
https://en.wikipedia.org/wiki?curid=10131478
10132968
Fuel mass fraction
In combustion physics, the fuel mass fraction is the ratio of the fuel mass flow to the total mass flow of a fuel mixture. If an air flow is fuel free, the fuel mass fraction is zero; in pure fuel without trapped gases, the ratio is unity. As fuel is burned in a combustion process, the fuel mass fraction is reduced. The definition reads as formula_0 where formula_1 is the mass of fuel and formula_2 is the total mass of the mixture. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
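As a small worked illustration of this definition (the numbers below are rough, illustrative values and are not taken from the article):
m_fuel = 1.0          # kg of methane
m_air = 17.2          # kg of air, roughly the stoichiometric amount for 1 kg of methane
Y_F = m_fuel / (m_fuel + m_air)
print(round(Y_F, 3))  # about 0.055; the fraction falls toward zero as the fuel is burned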
[ { "math_id": 0, "text": "Y_F = \\frac{m_F}{m_{\\rm{tot}}}" }, { "math_id": 1, "text": "m_F" }, { "math_id": 2, "text": "m_{\\rm{tot}}" } ]
https://en.wikipedia.org/wiki?curid=10132968
10133123
Splitting storm
Meteorological process associated with developing thunderstorms A splitting storm is a phenomenon when a convective thunderstorm will separate into two supercells, with one propagating towards the left (the left mover) and the other to the right (the right mover) of the mean wind shear direction across a deep layer of the troposphere. In most cases, this mean wind shear direction is roughly coincident with the direction of the mean wind. Each resulting cell bears an updraft that rotates opposite of the updraft in the other cell, with the left mover exhibiting a clockwise-rotating updraft and the right mover exhibiting a counterclockwise-rotating updraft. Storm splitting, if it occurs, tends to occur within an hour of the storm's formation. Storm splitting in the presence of large amounts of ambient crosswise vorticity, as characterized by a straight hodograph, produces similarly strong left and right movers. Storm splits also occur in environments where streamwise vorticity is present, as characterized by a more curved hodograph. However in this situation one updraft is highly favored over the other, with the weaker split quickly dissipating; in this case, the lesser favored split may be so weak that the process is not noticeable on radar imagery. In the Northern Hemisphere, where hodograph curvature tends to be clockwise, right-moving cells tend to be stronger and more persistent; the opposite is true in the Southern Hemisphere where hodograph curvature tends to be counterclockwise. Characteristics. Storm splitting was discovered via weather radar in the 1960s. Storm splitting is most favored when the direction of wind shear is aligned with the motion of the storm, a condition known as "crosswise vorticity" (via the right hand rule, the direction of ambient rotation associated with this vorticity would be perpendicular to the storm motion). Such conditions can be quantified by having low storm-relative helicity and can be associated with straight hodographs. In these cases, wind shear is largely unidirectional in the lower to middle troposphere. Splitting tends to occur within roughly 30–60 minutes after the formation of the parent thunderstorm and can occur repeatedly so long as sufficient crosswise vorticity is present. The presence of ambient vorticity produces rolls of horizontal rotation that a developing storm may encounter. As the formative updraft associated with the storm pulls this rotation up and into the storm, the rotation is tilted into the vertical on opposing flanks of the updraft. On one side, this results in clockwise rotation, while the other rotates counterclockwise. These areas of rotation are located at right angles to the wind shear direction, with one occurring left of this direction and the other to the right. In the Northern Hemisphere, the left rotation is anticyclonic while the right rotation is cyclonic. The rotation leads to the development of a new, rotating updraft beneath each area of rotation, which separate from one another to produce two separated storms; as both cells have rotating updrafts, both are supercells. This process can be accelerated if precipitation occurs between the two updrafts, cooling the air and producing downwards drag that eliminates the original updraft and further separates the split cells. Once a storm splits into two, the left-splitting storm tends to move in a direction left of the mean wind shear direction, while the right-splitting moves right of the mean wind shear direction. 
The split storms are known as "left movers" and "right movers" due to this behavior. The resulting left or right motion taken by the split storms may be more or less aligned with the direction of the ambient wind shear. This increases or decreases the ambient crosswise vorticity ingested into the split updrafts, respectively, in the frame of reference of the split storms. The storm that moves in a direction increasingly askew from the wind shear direction draws in increasingly "streamwise" vorticity. This tends to be the right mover in the Northern Hemisphere and the left mover in the Southern Hemisphere; in either case, this is the storm with cyclonic rotation. While storm splitting can reduce crosswise vorticity, crosswise vorticity may still be present. Thus, the left and right moving storms can repeatedly undergo further storm splitting if significant crosswise vorticity remains. Physical processes within supercells and interactions with their environment complicate prediction of the motion of supercells, including left and right movers. Commonly used methods for approximating the motion of splitting storms tend to estimate motion based on empirically observed deviations away from the mean wind shear vector. If the environmental vorticity is fully crosswise, storm splitting produces two oppositely rotating cells of similar intensity. In this case, both storms symmetrically deviate away from the mean wind shear direction. The left mover acquires increasingly clockwise vorticity, while the right mover acquires increasingly counterclockwise vorticity. In the absence of the Coriolis force, both cells are mirror images of one another. However, the Coriolis force causes the cyclonic cell to be slightly stronger. Because of turbulent friction, the direction of wind shear commonly varies near the surface such that hodographs are rarely ever straight in the lower troposphere. If the direction of the wind shear changes with height, such that some streamwise vorticity is present, the ingestion of vorticity by the split updrafts leads to one updraft being enhanced and the other being suppressed. If the hodograph turns clockwise with height, the right mover is enhanced, and if the hodograph turns counterclockwise, the left mover is enhanced. Most of the difference in the strengths of the split cells arises from this directional wind shear, rather than the Coriolis force. Storm splitting becomes less pronounced as hodograph curvature increases, resulting in shorter-lived anticyclonic cells. In extreme cases, where there is a strongly curving hodograph, the suppressed updraft will be so weak from the start, the splitting process will not be evident on radar, and a dominant cell will immediately be present shortly after convective initiation. While cyclonic splitting supercells (right movers in the Northern Hemisphere) have been more widely studied due to their typically longer duration and production of severe weather, anticyclonic supercells can also produce severe weather. When multiple thunderstorms develop, splitting storms can interact with other splitting storms. If a line of storms develop along a boundary, the storms at the ends of the line are typically the most isolated and free from interacting with splitting cells. Dynamics. The movement of air parcels in the atmosphere can cause a localized increase in air pressure ahead of the air parcel and a decrease in air pressure in the wake of the parcel as the parcel interacts with the ambient air. 
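One concrete example of such an empirical scheme is the Bunkers internal-dynamics method, which displaces the estimated motion of each member of the split pair by a fixed amount (7.5 m/s) at right angles to the deep-layer shear vector. The Python sketch below is illustrative only; the method itself is not described in this article, and the layer choice and deviation magnitude are that scheme's empirical parameters:
import numpy as np

def bunkers_split_motion(mean_wind, shear, deviation=7.5):
    # mean_wind and shear are (u, v) vectors in m/s over a deep layer (commonly 0-6 km);
    # the shear vector is assumed to be nonzero.  The right (left) mover is displaced
    # 90 degrees to the right (left) of the shear vector.
    u, v = shear
    unit_right = np.array([v, -u]) / np.hypot(u, v)   # unit vector 90 deg clockwise from the shear
    right_mover = np.asarray(mean_wind, dtype=float) + deviation * unit_right
    left_mover = np.asarray(mean_wind, dtype=float) - deviation * unit_right
    return right_mover, left_mover

# Westerly mean wind of 15 m/s with a purely westerly shear vector of 20 m/s:
rm, lm = bunkers_split_motion((15.0, 0.0), (20.0, 0.0))
print(rm, lm)  # right mover deflected to the south of the mean wind, left mover to the north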
Such variations in pressure are known as "dynamic pressure perturbations". Within a rotating updraft, the variation of this dynamic pressure perturbation formula_0 with height formula_1 may be approximated as the combination of a linear and a nonlinear term: formula_2 where formula_3 represents the mean vertical wind shear vector, formula_4 represents the horizontal gradient of the vertical wind associated with the updraft, and formula_5 is the vorticity within the updraft. When horizontal rotation is first lifted in an updraft, it produces a cyclonic and an anticyclonic vortex on opposing sides of the updraft, with the strength of those vortices typically maximized in the mid-troposphere. Each vortex is associated with a minimum in air pressure aloft at the center of the vortex, with the surrounding air in cyclostrophic balance. Regardless of the sign of the vorticity (i.e. the direction of rotation), the quantity formula_6 at the location of the vortices tends to increase from the surface up to the mid-troposphere, where the vortices are most pronounced. Thus, formula_0 decreases with height, resulting in an upward-directed vertical pressure gradient force that favors upward motion beneath the two vortices. This produces two new updrafts on opposing sides of the original updraft. The generation of updrafts on the flanks of the original updraft induces horizontal updraft-shear propagation, such that the left-splitting cell continues to move towards the left relative to the shear vector, while the right-splitting cell moves towards the right. This implies that the initial splitting of a thunderstorm is governed by "nonlinear dynamics". Because the tilting of horizontal vorticity into the vertical is most pronounced along the flanks of an updraft, the split storms continue to move away from the mean wind shear direction. In cases where the vorticity is predominantly streamwise, as characterized by a strongly curved hodograph, the linear term is a stronger influence on vertical dynamic pressure perturbations. Thus, storm splitting is less favored when the ambient vorticity is streamwise. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "p'_d" }, { "math_id": 1, "text": "z" }, { "math_id": 2, "text": "\\frac{\\partial p_d'}{\\partial z} \\propto 2 \\frac{\\partial}{\\partial z}\\mathbf{S} \\cdot \\nabla_h w' - \\frac{1}{2}\\frac{\\partial \\zeta'^2}{\\partial z}" }, { "math_id": 3, "text": "\\mathbf{S}" }, { "math_id": 4, "text": "\\nabla_h w'" }, { "math_id": 5, "text": "\\zeta" }, { "math_id": 6, "text": "\\zeta'^2" } ]
https://en.wikipedia.org/wiki?curid=10133123
101334
Euler–Jacobi pseudoprime
In number theory, an odd integer "n" is called an Euler–Jacobi probable prime (or, more commonly, an Euler probable prime) to base "a", if "a" and "n" are coprime, and formula_0 where formula_1 is the Jacobi symbol. If "n" is an odd composite integer that satisfies the above congruence, then "n" is called an Euler–Jacobi pseudoprime (or, more commonly, an Euler pseudoprime) to base "a". Properties. The motivation for this definition is the fact that all prime numbers "n" satisfy the above equation, as explained in the Euler's criterion article. The congruence can be tested rather quickly, which can be used for probabilistic primality testing. These tests are over twice as strong as tests based on Fermat's little theorem. Every Euler–Jacobi pseudoprime is also a Fermat pseudoprime and an Euler pseudoprime. There are no numbers which are Euler–Jacobi pseudoprimes to all coprime bases, in the way that Carmichael numbers are Fermat pseudoprimes to all coprime bases. Solovay and Strassen showed that for every composite "n", for at least "n"/2 bases less than "n", "n" is not an Euler–Jacobi pseudoprime. The smallest Euler–Jacobi pseudoprime base 2 is 561. There are 11347 Euler–Jacobi pseudoprimes base 2 that are less than 25·10⁹ (see OEIS: ) (page 1005 of ). In the literature (for example,), an Euler–Jacobi pseudoprime as defined above is often called simply an Euler pseudoprime. Implementation in Lua. The following function tests the congruence to the single base "a" = 2; it assumes that helper functions modPow(base, exponent, modulus) and Jacobi(a, n) are available.
function EulerJacobiTest(k)
  local a = 2
  if k == 1 then
    return false
  elseif k == 2 then
    return true
  elseif k % 2 == 0 then
    return false  -- the congruence is only defined for odd k
  else
    -- Jacobi returns -1, 0 or 1; reduce it mod k before comparing with the modular power
    return modPow(a, (k - 1) / 2, k) == Jacobi(a, k) % k
  end
end
References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "a^{(n-1)/2} \\equiv \\left(\\frac{a}{n}\\right)\\pmod{n}" }, { "math_id": 1, "text": "\\left(\\frac{a}{n}\\right)" } ]
https://en.wikipedia.org/wiki?curid=101334
10134
Electromagnetic spectrum
Range of frequencies or wavelengths of electromagnetic radiation The electromagnetic spectrum is the full range of electromagnetic radiation, organized by frequency or wavelength. The spectrum is divided into separate bands, with different names for the electromagnetic waves within each band. From low to high frequency these are: radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays. The electromagnetic waves in each of these bands have different characteristics, such as how they are produced, how they interact with matter, and their practical applications. Radio waves, at the low-frequency end of the spectrum, have the lowest photon energy and the longest wavelengths—thousands of kilometers, or more. They can be emitted and received by antennas, and pass through the atmosphere, foliage, and most building materials. Gamma rays, at the high-frequency end of the spectrum, have the highest photon energies and the shortest wavelengths—much smaller than an atomic nucleus. Gamma rays, X-rays, and extreme ultraviolet rays are called ionizing radiation because their high photon energy is able to ionize atoms, causing chemical reactions. Longer-wavelength radiation such as visible light is nonionizing; the photons do not have sufficient energy to ionize atoms. Throughout most of the electromagnetic spectrum, spectroscopy can be used to separate waves of different frequencies, so that the intensity of the radiation can be measured as a function of frequency or wavelength. Spectroscopy is used to study the interactions of electromagnetic waves with matter. History and discovery. Humans have always been aware of visible light and radiant heat but for most of history it was not known that these phenomena were connected or were representatives of a more extensive principle. The ancient Greeks recognized that light traveled in straight lines and studied some of its properties, including reflection and refraction. Light was intensively studied from the beginning of the 17th century leading to the invention of important instruments like the telescope and microscope. Isaac Newton was the first to use the term "spectrum" for the range of colours that white light could be split into with a prism. Starting in 1666, Newton showed that these colours were intrinsic to light and could be recombined into white light. A debate arose over whether light had a wave nature or a particle nature with René Descartes, Robert Hooke and Christiaan Huygens favouring a wave description and Newton favouring a particle description. Huygens in particular had a well developed theory from which he was able to derive the laws of reflection and refraction. Around 1801, Thomas Young measured the wavelength of a light beam with his two-slit experiment thus conclusively demonstrating that light was a wave. In 1800, William Herschel discovered infrared radiation. He was studying the temperature of different colours by moving a thermometer through light split by a prism. He noticed that the highest temperature was beyond red. He theorized that this temperature change was due to "calorific rays", a type of light ray that could not be seen. The next year, Johann Ritter, working at the other end of the spectrum, noticed what he called "chemical rays" (invisible light rays that induced certain chemical reactions). These behaved similarly to visible violet light rays, but were beyond them in the spectrum. They were later renamed ultraviolet radiation. 
The study of electromagnetism began in 1820 when Hans Christian Ørsted discovered that electric currents produce magnetic fields (Oersted's law). Light was first linked to electromagnetism in 1845, when Michael Faraday noticed that the polarization of light traveling through a transparent material responded to a magnetic field (see Faraday effect). During the 1860s, James Clerk Maxwell developed four partial differential equations (Maxwell's equations) for the electromagnetic field. Two of these equations predicted the possibility and behavior of waves in the field. Analyzing the speed of these theoretical waves, Maxwell realized that they must travel at a speed that was about the known speed of light. This startling coincidence in value led Maxwell to make the inference that light itself is a type of electromagnetic wave. Maxwell's equations predicted an infinite range of frequencies of electromagnetic waves, all traveling at the speed of light. This was the first indication of the existence of the entire electromagnetic spectrum. Maxwell's predicted waves included waves at very low frequencies compared to infrared, which in theory might be created by oscillating charges in an ordinary electrical circuit of a certain type. Attempting to prove Maxwell's equations and detect such low frequency electromagnetic radiation, in 1886, the physicist Heinrich Hertz built an apparatus to generate and detect what are now called radio waves. Hertz found the waves and was able to infer (by measuring their wavelength and multiplying it by their frequency) that they traveled at the speed of light. Hertz also demonstrated that the new radiation could be both reflected and refracted by various dielectric media, in the same manner as light. For example, Hertz was able to focus the waves using a lens made of tree resin. In a later experiment, Hertz similarly produced and measured the properties of microwaves. These new types of waves paved the way for inventions such as the wireless telegraph and the radio. In 1895, Wilhelm Röntgen noticed a new type of radiation emitted during an experiment with an evacuated tube subjected to a high voltage. He called this radiation "x-rays" and found that they were able to travel through parts of the human body but were reflected or stopped by denser matter such as bones. Before long, many uses were found for this radiography. The last portion of the electromagnetic spectrum was filled in with the discovery of gamma rays. In 1900, Paul Villard was studying the radioactive emissions of radium when he identified a new type of radiation that he at first thought consisted of particles similar to known alpha and beta particles, but with the power of being far more penetrating than either. However, in 1910, British physicist William Henry Bragg demonstrated that gamma rays are electromagnetic radiation, not particles, and in 1914, Ernest Rutherford (who had named them gamma rays in 1903 when he realized that they were fundamentally different from charged alpha and beta particles) and Edward Andrade measured their wavelengths, and found that gamma rays were similar to X-rays, but with shorter wavelengths. The wave-particle debate was rekindled in 1901 when Max Planck discovered that light is absorbed only in discrete "quanta", now called photons, implying that light has a particle nature. This idea was made explicit by Albert Einstein in 1905, but never accepted by Planck and many other contemporaries. 
The modern position of science is that electromagnetic radiation has both a wave and a particle nature, the wave-particle duality. The contradictions arising from this position are still being debated by scientists and philosophers. Range. Electromagnetic waves are typically described by any of the following three physical properties: the frequency "f", wavelength λ, or photon energy "E". Frequencies observed in astronomy range from (1 GeV gamma rays) down to the local plasma frequency of the ionized interstellar medium (~1 kHz). Wavelength is inversely proportional to the wave frequency, so gamma rays have very short wavelengths that are fractions of the size of atoms, whereas wavelengths on the opposite end of the spectrum can be indefinitely long. Photon energy is directly proportional to the wave frequency, so gamma ray photons have the highest energy (around a billion electron volts), while radio wave photons have very low energy (around a femtoelectronvolt). These relations are illustrated by the following equations: formula_0 where: Whenever electromagnetic waves travel in a medium with matter, their wavelength is decreased. Wavelengths of electromagnetic radiation, whatever medium they are traveling through, are usually quoted in terms of the "vacuum wavelength", although this is not always explicitly stated. Generally, electromagnetic radiation is classified by wavelength into radio wave, microwave, infrared, visible light, ultraviolet, X-rays and gamma rays. The behavior of EM radiation depends on its wavelength. When EM radiation interacts with single atoms and molecules, its behavior also depends on the amount of energy per quantum (photon) it carries. Spectroscopy can detect a much wider region of the EM spectrum than the visible wavelength range of 400 nm to 700 nm in a vacuum. A common laboratory spectroscope can detect wavelengths from 2 nm to 2500 nm. Detailed information about the physical properties of objects, gases, or even stars can be obtained from this type of device. Spectroscopes are widely used in astrophysics. For example, many hydrogen atoms emit a radio wave photon that has a wavelength of 21.12 cm. Also, frequencies of 30 Hz and below can be produced by and are important in the study of certain stellar nebulae and frequencies as high as have been detected from astrophysical sources. Regions. The types of electromagnetic radiation are broadly classified into the following classes (regions, bands or types): This classification goes in the increasing order of wavelength, which is characteristic of the type of radiation. There are no precisely defined boundaries between the bands of the electromagnetic spectrum; rather they fade into each other like the bands in a rainbow (which is the sub-spectrum of visible light). Radiation of each frequency and wavelength (or in each band) has a mix of properties of the two regions of the spectrum that bound it. For example, red light resembles infrared radiation in that it can excite and add energy to some chemical bonds and indeed must do so to power the chemical mechanisms responsible for photosynthesis and the working of the visual system. The distinction between X-rays and gamma rays is partly based on sources: the photons generated from nuclear decay or other nuclear and subnuclear/particle process are always termed gamma rays, whereas X-rays are generated by electronic transitions involving highly energetic inner atomic electrons. 
In general, nuclear transitions are much more energetic than electronic transitions, so gamma rays are more energetic than X-rays, but exceptions exist. By analogy to electronic transitions, muonic atom transitions are also said to produce X-rays, even though their energy may exceed , whereas there are many (77 known to be less than ) low-energy nuclear transitions ("e.g.", the nuclear transition of thorium-229m), and, despite being one million-fold less energetic than some muonic X-rays, the emitted photons are still called gamma rays due to their nuclear origin. The convention that EM radiation that is known to come from the nucleus is always called "gamma ray" radiation is the only convention that is universally respected, however. Many astronomical gamma ray sources (such as gamma ray bursts) are known to be too energetic (in both intensity and wavelength) to be of nuclear origin. Quite often, in high-energy physics and in medical radiotherapy, very high energy EMR (in the &gt; 10 MeV region)—which is of higher energy than any nuclear gamma ray—is not called X-ray or gamma ray, but instead by the generic term of "high-energy photons". The region of the spectrum where a particular observed electromagnetic radiation falls is reference frame-dependent (due to the Doppler shift for light), so EM radiation that one observer would say is in one region of the spectrum could appear to an observer moving at a substantial fraction of the speed of light with respect to the first to be in another part of the spectrum. For example, consider the cosmic microwave background. It was produced when matter and radiation decoupled, by the de-excitation of hydrogen atoms to the ground state. These photons were from Lyman series transitions, putting them in the ultraviolet (UV) part of the electromagnetic spectrum. Now this radiation has undergone enough cosmological red shift to put it into the microwave region of the spectrum for observers moving slowly (compared to the speed of light) with respect to the cosmos. Rationale for names. Electromagnetic radiation interacts with matter in different ways across the spectrum. These types of interaction are so different that historically different names have been applied to different parts of the spectrum, as though these were different types of radiation. Thus, although these "different kinds" of electromagnetic radiation form a quantitatively continuous spectrum of frequencies and wavelengths, the spectrum remains divided for practical reasons arising from these qualitative interaction differences. Types of radiation. Radio waves. Radio waves are emitted and received by antennas, which consist of conductors such as metal rod resonators. In artificial generation of radio waves, an electronic device called a transmitter generates an alternating electric current which is applied to an antenna. The oscillating electrons in the antenna generate oscillating electric and magnetic fields that radiate away from the antenna as radio waves. In reception of radio waves, the oscillating electric and magnetic fields of a radio wave couple to the electrons in an antenna, pushing them back and forth, creating oscillating currents which are applied to a radio receiver. Earth's atmosphere is mainly transparent to radio waves, except for layers of charged particles in the ionosphere which can reflect certain frequencies. 
Radio waves are extremely widely used to transmit information across distances in radio communication systems such as radio broadcasting, television, two way radios, mobile phones, communication satellites, and wireless networking. In a radio communication system, a radio frequency current is modulated with an information-bearing signal in a transmitter by varying either the amplitude, frequency or phase, and applied to an antenna. The radio waves carry the information across space to a receiver, where they are received by an antenna and the information extracted by demodulation in the receiver. Radio waves are also used for navigation in systems like Global Positioning System (GPS) and navigational beacons, and locating distant objects in radiolocation and radar. They are also used for remote control, and for industrial heating. The use of the radio spectrum is strictly regulated by governments, coordinated by the International Telecommunication Union (ITU) which allocates frequencies to different users for different uses. Microwaves. Microwaves are radio waves of short wavelength, from about 10 centimeters to one millimeter, in the SHF and EHF frequency bands. Microwave energy is produced with klystron and magnetron tubes, and with solid state devices such as Gunn and IMPATT diodes. Although they are emitted and absorbed by short antennas, they are also absorbed by polar molecules, coupling to vibrational and rotational modes, resulting in bulk heating. Unlike higher frequency waves such as infrared and visible light which are absorbed mainly at surfaces, microwaves can penetrate into materials and deposit their energy below the surface. This effect is used to heat food in microwave ovens, and for industrial heating and medical diathermy. Microwaves are the main wavelengths used in radar, and are used for satellite communication, and wireless networking technologies such as Wi-Fi. The copper cables (transmission lines) which are used to carry lower-frequency radio waves to antennas have excessive power losses at microwave frequencies, and metal pipes called waveguides are used to carry them. Although at the low end of the band the atmosphere is mainly transparent, at the upper end of the band absorption of microwaves by atmospheric gases limits practical propagation distances to a few kilometers. Terahertz radiation or sub-millimeter radiation is a region of the spectrum from about 100 GHz to 30 terahertz (THz) between microwaves and far infrared which can be regarded as belonging to either band. Until recently, the range was rarely studied and few sources existed for microwave energy in the so-called "terahertz gap", but applications such as imaging and communications are now appearing. Scientists are also looking to apply terahertz technology in the armed forces, where high-frequency waves might be directed at enemy troops to incapacitate their electronic equipment. Terahertz radiation is strongly absorbed by atmospheric gases, making this frequency range useless for long-distance communication. Infrared radiation. The infrared part of the electromagnetic spectrum covers the range from roughly 300 GHz to 400 THz (1 mm – 750 nm). It can be divided into three parts: Visible light. Above infrared in frequency comes visible light. The Sun emits its peak power in the visible region, although integrating the entire emission power spectrum through all wavelengths shows that the Sun emits slightly more infrared than visible light. 
These are the far-infrared (from roughly 300 GHz to 30 THz, or 1 mm – 10 μm), the mid-infrared (roughly 30 THz to 120 THz, or 10 – 2.5 μm), and the near-infrared (roughly 120 THz to 400 THz, or 2,500 – 750 nm, bordering visible light). 
By definition, visible light is the part of the EM spectrum the human eye is the most sensitive to. Visible light (and near-infrared light) is typically absorbed and emitted by electrons in molecules and atoms that move from one energy level to another. This action allows the chemical mechanisms that underlie human vision and plant photosynthesis. The light that excites the human visual system is a very small portion of the electromagnetic spectrum. A rainbow shows the optical (visible) part of the electromagnetic spectrum; infrared (if it could be seen) would be located just beyond the red side of the rainbow whilst ultraviolet would appear just beyond the opposite violet end. Electromagnetic radiation with a wavelength between 380 nm and 760 nm (400–790 terahertz) is detected by the human eye and perceived as visible light. Other wavelengths, especially near infrared (longer than 760 nm) and ultraviolet (shorter than 380 nm) are also sometimes referred to as light, especially when the visibility to humans is not relevant. White light is a combination of lights of different wavelengths in the visible spectrum. Passing white light through a prism splits it up into the several colours of light observed in the visible spectrum between 400 nm and 780 nm. If radiation having a frequency in the visible region of the EM spectrum reflects off an object, say, a bowl of fruit, and then strikes the eyes, this results in visual perception of the scene. The brain's visual system processes the multitude of reflected frequencies into different shades and hues, and through this insufficiently understood psychophysical phenomenon, most people perceive a bowl of fruit. At most wavelengths, however, the information carried by electromagnetic radiation is not directly detected by human senses. Natural sources produce EM radiation across the spectrum, and technology can also manipulate a broad range of wavelengths. Optical fiber transmits light that, although not necessarily in the visible part of the spectrum (it is usually infrared), can carry information. The modulation is similar to that used with radio waves. Ultraviolet radiation. Next in frequency comes ultraviolet (UV). In frequency (and thus energy), UV rays sit between the violet end of the visible spectrum and the X-ray range. The UV wavelength spectrum ranges from 399 nm to 10 nm and is divided into 3 sections: UVA, UVB, and UVC. UV is the lowest energy range energetic enough to ionize atoms, separating electrons from them, and thus causing chemical reactions. UV, X-rays, and gamma rays are thus collectively called "ionizing radiation"; exposure to them can damage living tissue. UV can also cause substances to glow with visible light; this is called "fluorescence". UV fluorescence is used by forensics to detect any evidence like blood and urine, that is produced by a crime scene. Also UV fluorescence is used to detect counterfeit money and IDs, as they are laced with material that can glow under UV. At the middle range of UV, UV rays cannot ionize but can break chemical bonds, making molecules unusually reactive. Sunburn, for example, is caused by the disruptive effects of middle range UV radiation on skin cells, which is the main cause of skin cancer. UV rays in the middle range can irreparably damage the complex DNA molecules in the cells producing thymine dimers making it a very potent mutagen. Due to skin cancer caused by UV, the sunscreen industry was invented to combat UV damage. 
Mid UV wavelengths are called UVB and UVB lights such as germicidal lamps are used to kill germs and also to sterilize water. The Sun emits UV radiation (about 10% of its total power), including extremely short wavelength UV that could potentially destroy most life on land (ocean water would provide some protection for life there). However, most of the Sun's damaging UV wavelengths are absorbed by the atmosphere before they reach the surface. The higher energy (shortest wavelength) ranges of UV (called "vacuum UV") are absorbed by nitrogen and, at longer wavelengths, by simple diatomic oxygen in the air. Most of the UV in the mid-range of energy is blocked by the ozone layer, which absorbs strongly in the important 200–315 nm range, the lower energy part of which is too long for ordinary dioxygen in air to absorb. This leaves less than 3% of sunlight at sea level in UV, with all of this remainder at the lower energies. The remainder is UV-A, along with some UV-B. The very lowest energy range of UV between 315 nm and visible light (called UV-A) is not blocked well by the atmosphere, but does not cause sunburn and does less biological damage. However, it is not harmless and does create oxygen radicals, mutations and skin damage. X-rays. After UV come X-rays, which, like the upper ranges of UV are also ionizing. However, due to their higher energies, X-rays can also interact with matter by means of the Compton effect. Hard X-rays have shorter wavelengths than soft X-rays and as they can pass through many substances with little absorption, they can be used to 'see through' objects with 'thicknesses' less than that equivalent to a few meters of water. One notable use is diagnostic X-ray imaging in medicine (a process known as radiography). X-rays are useful as probes in high-energy physics. In astronomy, the accretion disks around neutron stars and black holes emit X-rays, enabling studies of these phenomena. X-rays are also emitted by stellar corona and are strongly emitted by some types of nebulae. However, X-ray telescopes must be placed outside the Earth's atmosphere to see astronomical X-rays, since the great depth of the atmosphere of Earth is opaque to X-rays (with areal density of 1000 g/cm2), equivalent to 10 meters thickness of water. This is an amount sufficient to block almost all astronomical X-rays (and also astronomical gamma rays—see below). Gamma rays. After hard X-rays come gamma rays, which were discovered by Paul Ulrich Villard in 1900. These are the most energetic photons, having no defined lower limit to their wavelength. In astronomy they are valuable for studying high-energy objects or regions, however as with X-rays this can only be done with telescopes outside the Earth's atmosphere. Gamma rays are used experimentally by physicists for their penetrating ability and are produced by a number of radioisotopes. They are used for irradiation of foods and seeds for sterilization, and in medicine they are occasionally used in radiation cancer therapy. More commonly, gamma rays are used for diagnostic imaging in nuclear medicine, an example being PET scans. The wavelength of gamma rays can be measured with high accuracy through the effects of Compton scattering. Notes and references. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "f = \\frac{c}{\\lambda}, \\quad\\text{or}\\quad f = \\frac{E}{h}, \\quad\\text{or}\\quad E=\\frac{hc}{\\lambda}," } ]
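The relations in formula_0 connect frequency, wavelength and photon energy, and they can be used to check the figures quoted above for visible light. The following Python lines are only a quick numerical illustration; the constants and variable names are illustrative choices, not part of any cited source.

# Convert the visible-light wavelength limits into frequency and photon energy
# using f = c / wavelength and E = h * f (SI values of c and h).
C = 2.998e8        # speed of light, m/s
H = 6.626e-34      # Planck constant, J*s
EV = 1.602e-19     # joules per electronvolt

for wavelength_nm in (380, 760):
    wavelength_m = wavelength_nm * 1e-9
    frequency_hz = C / wavelength_m
    energy_ev = H * frequency_hz / EV
    print(f"{wavelength_nm} nm -> {frequency_hz / 1e12:.0f} THz, {energy_ev:.2f} eV")

# Prints roughly 789 THz / 3.26 eV for 380 nm and 394 THz / 1.63 eV for 760 nm,
# consistent with the 400-790 THz range quoted for visible light.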
https://en.wikipedia.org/wiki?curid=10134
1013550
Quantum channel
Foundational object in quantum communication theory In quantum information theory, a quantum channel is a communication channel which can transmit quantum information, as well as classical information. An example of quantum information is the general dynamics of a qubit. An example of classical information is a text document transmitted over the Internet. More formally, quantum channels are completely positive (CP) trace-preserving maps between spaces of operators. In other words, a quantum channel is just a quantum operation viewed not merely as the reduced dynamics of a system but as a pipeline intended to carry quantum information. (Some authors use the term "quantum operation" to also include trace-decreasing maps while reserving "quantum channel" for strictly trace-preserving maps.) Memoryless quantum channel. We will assume for the moment that all state spaces of the systems considered, classical or quantum, are finite-dimensional. "Memoryless" in the section title carries the same meaning as in classical information theory: the output of a channel at a given time depends only upon the corresponding input and not any previous ones. Schrödinger picture. Consider quantum channels that transmit only quantum information. This is precisely a quantum operation, whose properties we now summarize. Let formula_0 and formula_1 be the state spaces (finite-dimensional Hilbert spaces) of the sending and receiving ends, respectively, of a channel. formula_2 will denote the family of operators on formula_3 In the Schrödinger picture, a purely quantum channel is a map formula_4 between density matrices acting on formula_0 and formula_1 with the following properties: first, as required by the postulates of quantum mechanics, formula_4 must be linear; second, since density matrices are positive, formula_4 must preserve the cone of positive elements, i.e. it must be a positive map; third, if an ancilla of arbitrary finite dimension "n" is coupled to the system, then the induced map formula_5 where "I""n" is the identity map on the ancilla, must also be positive, so formula_6 is required to be positive for all "n" (such maps are called completely positive); and fourth, density matrices have trace 1, so formula_4 must preserve the trace. The adjectives completely positive and trace preserving used to describe a map are sometimes abbreviated CPTP. In the literature, sometimes the fourth property is weakened so that formula_4 is only required to be not trace-increasing. In this article, it will be assumed that all channels are CPTP. Heisenberg picture. Density matrices acting on "HA" only constitute a proper subset of the operators on "HA", and the same can be said for system "B". However, once a linear map formula_4 between the density matrices is specified, a standard linearity argument, together with the finite-dimensional assumption, allows us to extend formula_4 uniquely to the full space of operators. This leads to the adjoint map formula_7, which describes the action of formula_4 in the Heisenberg picture: The spaces of operators "L"("H""A") and "L"("H""B") are Hilbert spaces with the Hilbert–Schmidt inner product. Therefore, viewing formula_8 as a map between Hilbert spaces, we obtain its adjoint formula_4* given by formula_9 While formula_4 takes states on "A" to those on "B", formula_7 maps observables on system "B" to observables on "A". This relationship is the same as that between the Schrödinger and Heisenberg descriptions of dynamics. The measurement statistics remain unchanged whether the observables are considered fixed while the states undergo the operation, or vice versa. It can be directly checked that if formula_4 is assumed to be trace preserving, formula_7 is unital, that is, formula_10. Physically speaking, this means that, in the Heisenberg picture, the trivial observable remains trivial after applying the channel. Classical information. So far we have only defined channels that transmit quantum information. As stated in the introduction, the input and output of a channel can include classical information as well. 
To describe this, the formulation given so far needs to be generalized somewhat. A purely quantum channel, in the Heisenberg picture, is a linear map Ψ between spaces of operators: formula_11 that is unital and completely positive (CP). The operator spaces can be viewed as finite-dimensional C*-algebras. Therefore, we can say a channel is a unital CP map between C*-algebras: formula_12 Classical information can then be included in this formulation. The observables of a classical system can be assumed to be a commutative C*-algebra, i.e. the space of continuous functions formula_13 on some set formula_14. We assume formula_14 is finite, so formula_13 can be identified with the "n"-dimensional Euclidean space formula_15 with entry-wise multiplication. Therefore, in the Heisenberg picture, if the classical information is part of, say, the input, we would define formula_16 to include the relevant classical observables. An example of this would be a channel formula_17 Notice that formula_18 is still a C*-algebra. An element formula_19 of a C*-algebra formula_20 is called positive if formula_21 for some formula_22. Positivity of a map is defined accordingly. This characterization is not universally accepted; the quantum instrument is sometimes given as the generalized mathematical framework for conveying both quantum and classical information. In axiomatizations of quantum mechanics, the classical information is carried in a Frobenius algebra or Frobenius category. Examples. Time evolution. For a purely quantum system, the time evolution, at a given time "t", is given by formula_23 where formula_24, "H" is the Hamiltonian, and "t" is the time. Clearly this gives a CPTP map in the Schrödinger picture and is therefore a channel. The dual map in the Heisenberg picture is formula_25 Restriction. Consider a composite quantum system with state space formula_26 For a state formula_27 the reduced state of "ρ" on system "A", "ρ""A", is obtained by taking the partial trace of "ρ" with respect to the "B" system: formula_28 The partial trace operation is a CPTP map, therefore a quantum channel in the Schrödinger picture. In the Heisenberg picture, the dual map of this channel is formula_29 where "A" is an observable of system "A". Observable. An observable associates a numerical value formula_30 to a quantum mechanical "effect" formula_31. The formula_31's are assumed to be positive operators acting on the appropriate state space, with formula_32. (Such a collection is called a POVM.) In the Heisenberg picture, the corresponding "observable map" formula_33 maps a classical observable formula_34 to the quantum mechanical one formula_35 In other words, one integrates "f" against the POVM to obtain the quantum mechanical observable. It can be easily checked that formula_33 is CP and unital. The corresponding Schrödinger map formula_36 takes density matrices to classical states: formula_37 where the inner product is the Hilbert–Schmidt inner product. Furthermore, viewing states as normalized functionals, and invoking the Riesz representation theorem, we can put formula_38 Instrument. The observable map, in the Schrödinger picture, has a purely classical output algebra and therefore only describes measurement statistics. To take the state change into account as well, we define what is called a quantum instrument. Let formula_39 be the effects (POVM) associated with an observable. 
In the Schrödinger picture, an instrument is a map formula_40 with pure quantum input formula_41 and with output space formula_42: formula_43 Let formula_44 The dual map in the Heisenberg picture is formula_45 where formula_46 is defined in the following way: Factor formula_47 (this can always be done since elements of a POVM are positive) then formula_48. We see that formula_33 is CP and unital. Notice that formula_49 gives precisely the observable map. The map formula_50 describes the overall state change. Measure-and-prepare channel. Suppose two parties "A" and "B" wish to communicate in the following manner: "A" performs the measurement of an observable and communicates the measurement outcome to "B" classically. According to the message he receives, "B" prepares his (quantum) system in a specific state. In the Schrödinger picture, the first part of the channel formula_41 simply consists of "A" making a measurement, i.e. it is the observable map: formula_51 If, in the event of the "i"-th measurement outcome, "B" prepares his system in state "Ri", the second part of the channel formula_42 takes the above classical state to the density matrix formula_52 The total operation is the composition formula_53 Channels of this form are called "measure-and-prepare" or in Holevo form. In the Heisenberg picture, the dual map formula_54 is defined by formula_55 A measure-and-prepare channel can not be the identity map. This is precisely the statement of the no teleportation theorem, which says classical teleportation (not to be confused with entanglement-assisted teleportation) is impossible. In other words, a quantum state can not be measured reliably. In the channel-state duality, a channel is measure-and-prepare if and only if the corresponding state is separable. Actually, all the states that result from the partial action of a measure-and-prepare channel are separable, and for this reason measure-and-prepare channels are also known as entanglement-breaking channels. Pure channel. Consider the case of a purely quantum channel formula_33 in the Heisenberg picture. With the assumption that everything is finite-dimensional, formula_33 is a unital CP map between spaces of matrices formula_56 By Choi's theorem on completely positive maps, formula_33 must take the form formula_57 where "N" ≤ "nm". The matrices "K""i" are called Kraus operators of formula_33 (after the German physicist Karl Kraus, who introduced them). The minimum number of Kraus operators is called the Kraus rank of formula_33. A channel with Kraus rank 1 is called pure. The time evolution is one example of a pure channel. This terminology again comes from the channel-state duality. A channel is pure if and only if its dual state is a pure state. Teleportation. In quantum teleportation, a sender wishes to transmit an arbitrary quantum state of a particle to a possibly distant receiver. Consequently, the teleportation process is a quantum channel. The apparatus for the process itself requires a quantum channel for the transmission of one particle of an entangled-state to the receiver. Teleportation occurs by a joint measurement of the sent particle and the remaining entangled particle. This measurement results in classical information which must be sent to the receiver to complete the teleportation. Importantly, the classical information can be sent after the quantum channel has ceased to exist. In the experimental setting. 
Experimentally, a simple implementation of a quantum channel is fiber optic (or free-space, for that matter) transmission of single photons. Single photons can be transmitted up to 100 km in standard fiber optics before losses dominate. The photon's time-of-arrival ("time-bin entanglement") or polarization is used as a basis to encode quantum information for purposes such as quantum cryptography. The channel is capable of transmitting not only basis states (e.g. formula_58, formula_59) but also superpositions of them (e.g. formula_60). The coherence of the state is maintained during transmission through the channel. Contrast this with the transmission of electrical pulses through wires (a classical channel), where only classical information (e.g. 0s and 1s) can be sent. Channel capacity. The cb-norm of a channel. Before giving the definition of channel capacity, the preliminary notion of the norm of complete boundedness, or cb-norm, of a channel needs to be discussed. When considering the capacity of a channel formula_40, we need to compare it with an "ideal channel" formula_61. For instance, when the input and output algebras are identical, we can choose formula_61 to be the identity map. Such a comparison requires a metric between channels. Since a channel can be viewed as a linear operator, it is tempting to use the natural operator norm. In other words, the closeness of formula_40 to the ideal channel formula_61 can be defined by formula_62 However, the operator norm may increase when we tensor formula_40 with the identity map on some ancilla. To make the operator norm an even less desirable candidate, the quantity formula_63 may increase without bound as formula_64 The solution is to introduce, for any linear map formula_40 between C*-algebras, the cb-norm formula_65 Definition of channel capacity. The mathematical model of a channel used here is the same as the classical one. Let formula_66 be a channel in the Heisenberg picture and formula_67 be a chosen ideal channel. To make the comparison possible, one needs to encode and decode Φ via appropriate devices, i.e. we consider the composition formula_68 where "E" is an encoder and "D" is a decoder. In this context, "E" and "D" are unital CP maps with appropriate domains. The quantity of interest is the "best case scenario": formula_69 with the infimum being taken over all possible encoders and decoders. To transmit words of length "n", the ideal channel is to be applied "n" times, so we consider the tensor power formula_70 The formula_71 operation describes "n" inputs undergoing the operation formula_72 independently and is the quantum mechanical counterpart of concatenation. Similarly, "m" invocations of the channel correspond to formula_73. The quantity formula_74 is therefore a measure of the ability of the channel to transmit words of length "n" faithfully by being invoked "m" times. This leads to the following definition: A non-negative real number "r" is an achievable rate of formula_33 with respect to formula_72 if, for all sequences formula_75 where formula_76 and formula_77, we have formula_78 A sequence formula_79 can be viewed as representing a message consisting of a possibly infinite number of words. The limit supremum condition in the definition says that, in the limit, faithful transmission can be achieved by invoking the channel no more than "r" times the length of a word. One can also say that "r" is the number of letters per invocation of the channel that can be sent without error. 
The channel capacity of formula_33 with respect to formula_72, denoted by formula_80, is the supremum of all achievable rates. From the definition, it is vacuously true that 0 is an achievable rate for any channel. Important examples. As stated before, for a system with observable algebra formula_16, the ideal channel formula_72 is by definition the identity map formula_81. Thus for a purely quantum "n"-dimensional system, the ideal channel is the identity map on the space of "n" × "n" matrices formula_82. As a slight abuse of notation, this ideal quantum channel will also be denoted by formula_82. Similarly, a classical system with output algebra formula_83 will have an ideal channel denoted by the same symbol. We can now state some fundamental channel capacities. The channel capacity of the classical ideal channel formula_83 with respect to a quantum ideal channel formula_82 is formula_84 This is equivalent to the no-teleportation theorem: it is impossible to transmit quantum information via a classical channel. Moreover, the following equalities hold: formula_85 The above says, for instance, that an ideal quantum channel is no more efficient at transmitting classical information than an ideal classical channel. When "n" = "m", the best one can achieve is "one bit per qubit". It is relevant to note here that both of the above bounds on capacities can be broken, with the aid of entanglement. The entanglement-assisted teleportation scheme allows one to transmit quantum information using a classical channel. Superdense coding achieves "two bits per qubit". These results indicate the significant role played by entanglement in quantum communication. Classical and quantum channel capacities. Using the same notation as the previous subsection, the classical capacity of a channel Ψ is formula_86 that is, it is the capacity of Ψ with respect to the ideal channel on the classical one-bit system formula_87. Similarly, the quantum capacity of Ψ is formula_88 where the reference system is now the one-qubit system formula_89. Channel fidelity. Another measure of how well a quantum channel preserves information is called channel fidelity, and it arises from the fidelity of quantum states. Bistochastic quantum channel. A bistochastic quantum channel is a quantum channel formula_90 which is unital, i.e. formula_91. References.
[ { "math_id": 0, "text": "H_A" }, { "math_id": 1, "text": "H_B" }, { "math_id": 2, "text": "L(H_A)" }, { "math_id": 3, "text": "H_A." }, { "math_id": 4, "text": " \\Phi" }, { "math_id": 5, "text": "I_n \\otimes \\Phi," }, { "math_id": 6, "text": "I_n \\otimes \\Phi" }, { "math_id": 7, "text": " \\Phi^*" }, { "math_id": 8, "text": "\\Phi : L(H_A) \\rightarrow L(H_B)" }, { "math_id": 9, "text": "\\langle A , \\Phi(\\rho) \\rangle = \\langle \\Phi^*(A) , \\rho \\rangle ." }, { "math_id": 10, "text": " \\Phi^*(I) = I" }, { "math_id": 11, "text": "\\Psi : L(H_B) \\rightarrow L(H_A)" }, { "math_id": 12, "text": "\\Psi : \\mathcal{B} \\rightarrow \\mathcal{A}." }, { "math_id": 13, "text": "C(X)" }, { "math_id": 14, "text": "X" }, { "math_id": 15, "text": "\\mathbb{R}^n" }, { "math_id": 16, "text": "\\mathcal{B}" }, { "math_id": 17, "text": "\\Psi : L(H_B) \\otimes C(X) \\rightarrow L(H_A)." }, { "math_id": 18, "text": "L(H_B) \\otimes C(X)" }, { "math_id": 19, "text": "a" }, { "math_id": 20, "text": "\\mathcal{A}" }, { "math_id": 21, "text": "a = x^{*} x" }, { "math_id": 22, "text": "x" }, { "math_id": 23, "text": "\\rho \\rightarrow U \\rho \\;U^*," }, { "math_id": 24, "text": "U = e^{-iH t/\\hbar}" }, { "math_id": 25, "text": "A \\rightarrow U^* A U." }, { "math_id": 26, "text": "H_A \\otimes H_B." }, { "math_id": 27, "text": "\\rho \\in H_A \\otimes H_B," }, { "math_id": 28, "text": " \\rho ^A = \\operatorname{Tr}_B \\; \\rho." }, { "math_id": 29, "text": " A \\rightarrow A \\otimes I_B," }, { "math_id": 30, "text": "f_i \\in \\mathbb{C}" }, { "math_id": 31, "text": "F_i" }, { "math_id": 32, "text": "\\sum_i F_i = I" }, { "math_id": 33, "text": "\\Psi" }, { "math_id": 34, "text": "f = \\begin{bmatrix} f_1 \\\\ \\vdots \\\\ f_n \\end{bmatrix} \\in C(X)" }, { "math_id": 35, "text": "\\; \\Psi (f) = \\sum_i f_i F_i." }, { "math_id": 36, "text": "\\Psi^*" }, { "math_id": 37, "text": "\n\\Psi (\\rho) = \\begin{bmatrix} \\langle F_1, \\rho \\rangle \\\\ \\vdots \\\\ \\langle F_n, \\rho \\rangle \\end{bmatrix}, \n" }, { "math_id": 38, "text": "\n\\Psi (\\rho) = \\begin{bmatrix} \\rho (F_1) \\\\ \\vdots \\\\ \\rho (F_n) \\end{bmatrix}.\n" }, { "math_id": 39, "text": "\\{ F_1, \\dots, F_n \\}" }, { "math_id": 40, "text": "\\Phi" }, { "math_id": 41, "text": "\\rho \\in L(H)" }, { "math_id": 42, "text": "C(X) \\otimes L(H)" }, { "math_id": 43, "text": "\n\\Phi (\\rho) = \\begin{bmatrix} \\rho(F_1) \\cdot F_1 \\\\ \\vdots \\\\ \\rho(F_n) \\cdot F_n \\end{bmatrix}.\n" }, { "math_id": 44, "text": "\nf = \\begin{bmatrix} f_1 \\\\ \\vdots \\\\ f_n \\end{bmatrix} \\in C(X).\n" }, { "math_id": 45, "text": "\n\\Psi (f \\otimes A) = \\begin{bmatrix} f_1 \\Psi_1(A) \\\\ \\vdots \\\\ f_n \\Psi_n(A)\\end{bmatrix}\n" }, { "math_id": 46, "text": "\\Psi_i" }, { "math_id": 47, "text": "F_i = M_i ^2" }, { "math_id": 48, "text": "\\; \\Psi_i (A) = M_i A M_i" }, { "math_id": 49, "text": "\\Psi (f \\otimes I)" }, { "math_id": 50, "text": "{\\tilde \\Psi}(A)= \\sum_i \\Psi_i (A) = \\sum _i M_i A M_i" }, { "math_id": 51, "text": "\\; \\Phi_1 (\\rho) = \\begin{bmatrix} \\rho(F_1) \\\\ \\vdots \\\\ \\rho(F_n)\\end{bmatrix}." }, { "math_id": 52, "text": "\n\\Phi_2 \\left(\\begin{bmatrix} \\rho(F_1) \\\\ \\vdots \\\\ \\rho(F_n)\\end{bmatrix}\\right) = \\sum _i \\rho (F_i) R_i.\n" }, { "math_id": 53, "text": "\\Phi (\\rho)= \\Phi_2 \\circ \\Phi_1 (\\rho) = \\sum _i \\rho (F_i) R_i." }, { "math_id": 54, "text": "\\Phi^* = \\Phi_1^* \\circ \\Phi_2 ^*" }, { "math_id": 55, "text": "\\; \\Phi^* (A) = \\sum_i R_i(A) F_i." 
}, { "math_id": 56, "text": "\\Psi : \\mathbb{C}^{n \\times n} \\rightarrow \\mathbb{C}^{m \\times m}." }, { "math_id": 57, "text": "\\Psi (A) = \\sum_{i = 1}^N K_i A K_i^*" }, { "math_id": 58, "text": "|0\\rangle" }, { "math_id": 59, "text": "|1\\rangle" }, { "math_id": 60, "text": "|0\\rangle+|1\\rangle" }, { "math_id": 61, "text": "\\Lambda" }, { "math_id": 62, "text": "\\| \\Phi - \\Lambda \\| = \\sup \\{ \\| (\\Phi - \\Lambda)(A)\\| \\;|\\; \\|A\\| \\leq 1 \\}." }, { "math_id": 63, "text": "\\| \\Phi \\otimes I_n \\|" }, { "math_id": 64, "text": "n \\rightarrow \\infty." }, { "math_id": 65, "text": "\\| \\Phi \\|_{cb} = \\sup _n \\| \\Phi \\otimes I_n \\|." }, { "math_id": 66, "text": "\\Psi :\\mathcal{B}_1 \\rightarrow \\mathcal{A}_1" }, { "math_id": 67, "text": "\\Psi_{id} : \\mathcal{B}_2 \\rightarrow \\mathcal{A}_2" }, { "math_id": 68, "text": "{\\hat \\Psi} = D \\circ \\Phi \\circ E : \\mathcal{B}_2 \\rightarrow \\mathcal{A}_2 " }, { "math_id": 69, "text": "\\Delta ({\\hat \\Psi}, \\Psi_{id}) = \\inf_{E,D} \\| {\\hat \\Psi} - \\Psi_{id} \\|_{cb}" }, { "math_id": 70, "text": "\\Psi_{id}^{\\otimes n} = \\Psi_{id} \\otimes \\cdots \\otimes \\Psi_{id}." }, { "math_id": 71, "text": "\\otimes" }, { "math_id": 72, "text": "\\Psi_{id}" }, { "math_id": 73, "text": "{\\hat \\Psi} ^{\\otimes m}" }, { "math_id": 74, "text": "\\Delta ( {\\hat \\Psi}^{\\otimes m}, \\Psi_{id}^{\\otimes n} )" }, { "math_id": 75, "text": "\\{ n_{\\alpha} \\}, \\{ m_{\\alpha} \\} \\subset \\mathbb{N}" }, { "math_id": 76, "text": "m_{\\alpha}\\rightarrow \\infty" }, { "math_id": 77, "text": "\\lim \\sup _{\\alpha} (n_{\\alpha}/m_{\\alpha}) < r" }, { "math_id": 78, "text": "\\lim_{\\alpha} \\Delta ( {\\hat \\Psi}^{\\otimes m_{\\alpha}}, \\Psi_{id}^{\\otimes n_{\\alpha}} ) = 0." }, { "math_id": 79, "text": "\\{ n_{\\alpha} \\}" }, { "math_id": 80, "text": "\\;C(\\Psi, \\Psi_{id})" }, { "math_id": 81, "text": "I_{\\mathcal{B}}" }, { "math_id": 82, "text": "\\mathbb{C}^{n \\times n}" }, { "math_id": 83, "text": "\\mathbb{C}^m" }, { "math_id": 84, "text": "C(\\mathbb{C}^m, \\mathbb{C}^{n \\times n}) = 0." }, { "math_id": 85, "text": "\nC(\\mathbb{C}^m, \\mathbb{C}^n) = C(\\mathbb{C}^{m \\times m}, \\mathbb{C}^{n \\times n}) \n= C( \\mathbb{C}^{m \\times m}, \\mathbb{C}^{n} ) = \\frac{\\log n}{\\log m}.\n" }, { "math_id": 86, "text": "C(\\Psi, \\mathbb{C}^2)," }, { "math_id": 87, "text": "\\mathbb{C}^2" }, { "math_id": 88, "text": "C(\\Psi, \\mathbb{C}^{2 \\times 2})," }, { "math_id": 89, "text": "\\mathbb{C}^{2 \\times 2}" }, { "math_id": 90, "text": "\\Phi(\\rho)" }, { "math_id": 91, "text": "\\Phi(I) = I" } ]
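As a concrete numerical illustration of the Kraus form formula_57 and of the Schrödinger/Heisenberg duality formula_9 discussed above, the following Python sketch applies a single-qubit dephasing channel to a density matrix and checks trace preservation and unitality of the adjoint. The choice of Kraus operators, the variable names and the example matrices are illustrative assumptions rather than anything taken from the text; it is a minimal sketch, not a definitive implementation.

import numpy as np

# Single-qubit dephasing channel in Kraus form (an illustrative choice of Kraus operators):
#   Schrodinger picture:  Phi(rho) = sum_i K_i rho K_i^dagger
#   Heisenberg picture :  Phi*(A)  = sum_i K_i^dagger A K_i
p = 0.3
I = np.eye(2)
Z = np.diag([1.0, -1.0])
kraus = [np.sqrt(1 - p) * I, np.sqrt(p) * Z]

def schrodinger(rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

def heisenberg(A):
    return sum(K.conj().T @ A @ K for K in kraus)

rho = np.array([[0.6, 0.2], [0.2, 0.4]])   # a valid density matrix (trace 1, positive)
A = np.array([[1.0, 0.5], [0.5, -1.0]])    # a Hermitian observable

print(np.trace(schrodinger(rho)))          # 1.0: the channel is trace-preserving
print(np.allclose(heisenberg(I), I))       # True: the adjoint is unital
# Hilbert-Schmidt duality <A, Phi(rho)> = <Phi*(A), rho>
print(np.isclose(np.trace(A.conj().T @ schrodinger(rho)),
                 np.trace(heisenberg(A).conj().T @ rho)))   # True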
https://en.wikipedia.org/wiki?curid=1013550
1013588
Newmark-beta method
The Newmark-beta method is a method of numerical integration used to solve certain differential equations. It is widely used in the numerical evaluation of the dynamic response of structures and solids, for example in finite element analysis of dynamic systems. The method is named after Nathan M. Newmark, former Professor of Civil Engineering at the University of Illinois at Urbana–Champaign, who developed it in 1959 for use in structural dynamics. The semi-discretized structural equation is a second-order ordinary differential equation system, formula_0 where formula_1 is the mass matrix, formula_2 is the damping matrix, and formula_3 and formula_4 are the internal and external forces, respectively. Using the extended mean value theorem, the Newmark-formula_5 method states that the first time derivative (velocity in the equation of motion) can be solved as formula_6 where formula_7 and therefore formula_8 Because acceleration also varies with time, however, the extended mean value theorem must also be extended to the second time derivative to obtain the correct displacement. Thus, formula_9 where again formula_10 The discretized structural equation becomes formula_11 The explicit central difference scheme is obtained by setting formula_12 and formula_13 The average constant acceleration method (midpoint rule) is obtained by setting formula_12 and formula_14 Stability analysis. A time-integration scheme is said to be stable if there exists an integration time-step formula_15 so that for any formula_16, a finite variation of the state vector formula_17 at time formula_18 induces only a non-increasing variation of the state vector formula_19 calculated at a subsequent time formula_20. Assume the time-integration scheme is formula_21 Linear stability is equivalent to formula_22, where formula_23 is the spectral radius of the update matrix formula_24. For the linear structural equation formula_25 where formula_26 is the stiffness matrix, let formula_27; the update matrix is then formula_28, with formula_29 For the undamped case (formula_30), the update matrix can be decoupled by introducing the eigenmodes formula_31 of the structural system, which are solved by the generalized eigenvalue problem formula_32 For each eigenmode, the update matrix becomes formula_33 The characteristic equation of the update matrix is formula_34 As for stability, the explicit central difference scheme (formula_12 and formula_13) is stable when formula_35, while the average constant acceleration method (midpoint rule; formula_12 and formula_14) is unconditionally stable. References.
[ { "math_id": 0, "text": "M\\ddot{u} + C\\dot{u} + f^{\\textrm{int}}(u) = f^{\\textrm{ext}} \\," }, { "math_id": 1, "text": "M" }, { "math_id": 2, "text": "C" }, { "math_id": 3, "text": "f^{\\textrm{int}}" }, { "math_id": 4, "text": "f^{\\textrm{ext}}" }, { "math_id": 5, "text": "\\beta" }, { "math_id": 6, "text": "\\dot{u}_{n+1}=\\dot{u}_n+ \\Delta t~\\ddot{u}_\\gamma \\," }, { "math_id": 7, "text": "\\ddot{u}_\\gamma = (1 - \\gamma)\\ddot{u}_n + \\gamma \\ddot{u}_{n+1}~~~~0\\leq \\gamma \\leq 1" }, { "math_id": 8, "text": "\\dot{u}_{n+1}=\\dot{u}_n + (1 - \\gamma) \\Delta t~\\ddot{u}_n + \\gamma \\Delta t~\\ddot{u}_{n+1}." }, { "math_id": 9, "text": "u_{n+1}=u_n + \\Delta t~\\dot{u}_n+\\begin{matrix} \\frac 1 2 \\end{matrix} \\Delta t^2~\\ddot{u}_\\beta " }, { "math_id": 10, "text": "\\ddot{u}_\\beta = (1 - 2\\beta)\\ddot{u}_n + 2\\beta\\ddot{u}_{n+1}~~~~0\\leq 2\\beta\\leq 1" }, { "math_id": 11, "text": "\\begin{aligned}\n&\\dot{u}_{n+1}=\\dot{u}_n + (1 - \\gamma) \\Delta t~\\ddot{u}_n + \\gamma \\Delta t~\\ddot{u}_{n+1}\\\\\n&u_{n+1}=u_n + \\Delta t~\\dot{u}_n + \\frac{\\Delta t^2}{2}\\left((1 - 2\\beta)\\ddot{u}_n + 2\\beta\\ddot{u}_{n+1}\\right)\\\\\n&M\\ddot{u}_{n+1} + C\\dot{u}_{n+1} + f^{\\textrm{int}}(u_{n+1}) = f_{n+1}^{\\textrm{ext}} \\,\n\\end{aligned}" }, { "math_id": 12, "text": "\\gamma=0.5 " }, { "math_id": 13, "text": "\\beta=0 " }, { "math_id": 14, "text": "\\beta=0.25 " }, { "math_id": 15, "text": "\\Delta t_0 > 0" }, { "math_id": 16, "text": "\\Delta t \\in (0, \\Delta t_0]" }, { "math_id": 17, "text": "q_n" }, { "math_id": 18, "text": "t_n" }, { "math_id": 19, "text": "q_{n+1}" }, { "math_id": 20, "text": "t_{n+1}" }, { "math_id": 21, "text": "q_{n+1} = A(\\Delta t) q_n + g_{n+1}(\\Delta t)" }, { "math_id": 22, "text": "\\rho(A(\\Delta t)) \\leq 1" }, { "math_id": 23, "text": "\\rho(A(\\Delta t))" }, { "math_id": 24, "text": "A(\\Delta t)" }, { "math_id": 25, "text": "M\\ddot{u} + C\\dot{u} + K u = f^{\\textrm{ext}} \\," }, { "math_id": 26, "text": "K" }, { "math_id": 27, "text": "q_n = [\\dot{u}_n, u_n]" }, { "math_id": 28, "text": "A = H_1^{-1}H_0" }, { "math_id": 29, "text": "\\begin{aligned}\nH_1 = \\begin{bmatrix}\nM + \\gamma\\Delta tC & \\gamma \\Delta t K\\\\\n\\beta \\Delta t^2 C & M + \\beta\\Delta t^2 K\n\\end{bmatrix}\\qquad\nH_0 = \\begin{bmatrix}\nM - (1-\\gamma)\\Delta tC & -(1 -\\gamma) \\Delta t K\\\\\n-(\\frac{1}{2} - \\beta) \\Delta t^2 C +\\Delta t M & M - (\\frac{1}{2} - \\beta)\\Delta t^2 K\n\\end{bmatrix}\n\\end{aligned}" }, { "math_id": 30, "text": "C = 0" }, { "math_id": 31, "text": "u = e^{i \\omega_i t} x_i" }, { "math_id": 32, "text": "\\omega^2 M x = K x \\," }, { "math_id": 33, "text": "\\begin{aligned}\nH_1 = \\begin{bmatrix}\n1 & \\gamma \\Delta t \\omega_i^2\\\\\n0 & 1 + \\beta\\Delta t^2 \\omega_i^2\n\\end{bmatrix}\\qquad\nH_0 = \\begin{bmatrix}\n1 & -(1 -\\gamma) \\Delta t \\omega_i^2\\\\\n \\Delta t & 1 - (\\frac{1}{2} - \\beta)\\Delta t^2 \\omega_i^2\n\\end{bmatrix}\n\\end{aligned}" }, { "math_id": 34, "text": "\\lambda^2 - \\left(2 - (\\gamma + \\frac{1}{2})\\eta_i^2\\right)\\lambda + 1 - (\\gamma - \\frac{1}{2})\\eta_i^2 = 0 \\,\\qquad \\eta_i^2 = \\frac{\\omega_i^2\\Delta t^2}{1 + \\beta\\omega_i^2\\Delta t^2}" }, { "math_id": 35, "text": "\\omega \\Delta t \\leq 2" } ]
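As a concrete illustration of the update equations above, the following Python sketch advances a scalar version of the structural equation formula_0 using the Newmark-formula_5 update; with γ = 1/2 and β = 1/4 it is the unconditionally stable average-acceleration variant, and β = 0 recovers the explicit central difference scheme. The problem data at the bottom are made-up illustrative values and the function name is an arbitrary choice; this is a minimal sketch, not a general finite element implementation.

import numpy as np

def newmark_beta(M, C, K, f, u0, v0, dt, n_steps, gamma=0.5, beta=0.25):
    # Newmark-beta time stepping for the scalar linear system M*a + C*v + K*u = f(t).
    # gamma=0.5, beta=0.25 gives the average constant acceleration (midpoint) rule;
    # gamma=0.5, beta=0.0 gives the explicit central difference scheme.
    u, v = u0, v0
    a = (f(0.0) - C * v - K * u) / M           # initial acceleration from the equation of motion
    m_eff = M + gamma * dt * C + beta * dt**2 * K
    history = [u]
    for n in range(1, n_steps + 1):
        t = n * dt
        # Predictors: the parts of the displacement/velocity updates that use known values only
        u_pred = u + dt * v + 0.5 * dt**2 * (1.0 - 2.0 * beta) * a
        v_pred = v + (1.0 - gamma) * dt * a
        # Solve the discretized structural equation for the new acceleration
        a = (f(t) - C * v_pred - K * u_pred) / m_eff
        u = u_pred + beta * dt**2 * a
        v = v_pred + gamma * dt * a
        history.append(u)
    return np.array(history)

# Free vibration of an undamped oscillator (made-up data); the exact solution is u(t) = cos(2t)
traj = newmark_beta(M=1.0, C=0.0, K=4.0, f=lambda t: 0.0, u0=1.0, v0=0.0, dt=0.01, n_steps=100)
print(traj[-1], np.cos(2.0))   # numerical vs. exact displacement at t = 1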
https://en.wikipedia.org/wiki?curid=1013588
10136
Expert system
Computer system emulating the decision-making ability of a human expert In artificial intelligence (AI), an expert system is a computer system emulating the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural programming code. Expert systems were among the first truly successful forms of AI software. They were created in the 1970s and proliferated in the 1980s, when they were widely regarded as the future of AI, before the advent of successful artificial neural networks. An expert system is divided into two subsystems: 1) a "knowledge base", which represents facts and rules; and 2) an "inference engine", which applies the rules to the known facts to deduce new facts, and can include explaining and debugging abilities. History. Early development. Soon after the dawn of modern computers in the late 1940s and early 1950s, researchers started realizing the immense potential these machines had for modern society. One of the first challenges was to make such machines able to “think” like humans – in particular, to make important decisions the way humans do. The medical–healthcare field presented the tantalizing challenge of enabling these machines to make medical diagnostic decisions. Thus, in the late 1950s, as the information age arrived, researchers started experimenting with the prospect of using computer technology to emulate human decision-making. For example, biomedical researchers started creating computer-aided systems for diagnostic applications in medicine and biology. These early diagnostic systems used patients’ symptoms and laboratory test results as inputs to generate a diagnostic outcome. These systems were often described as the early forms of expert systems. However, researchers realized that there were significant limits to traditional methods such as flow charts, statistical pattern matching, or probability theory. Formal introduction and later developments. This situation gradually led to the development of expert systems, which used knowledge-based approaches. Early expert systems in medicine included MYCIN, Internist-I and, later, in the mid-1980s, CADUCEUS. Expert systems were formally introduced around 1965 by the Stanford Heuristic Programming Project led by Edward Feigenbaum, who is sometimes termed the "father of expert systems"; other key early contributors were Bruce Buchanan and Randall Davis. The Stanford researchers tried to identify domains where expertise was highly valued and complex, such as diagnosing infectious diseases (Mycin) and identifying unknown organic molecules (Dendral). The idea that "intelligent systems derive their power from the knowledge they possess rather than from the specific formalisms and inference schemes they use" – as Feigenbaum said – was at the time a significant step forward, since past research had focused on heuristic computational methods, culminating in attempts to develop very general-purpose problem solvers (foremost among them the joint work of Allen Newell and Herbert Simon). Expert systems became some of the first truly successful forms of artificial intelligence (AI) software. Research on expert systems was also active in Europe. 
In the US, the focus tended to be on the use of production rule systems, first on systems hard-coded on top of Lisp programming environments and then on expert system shells developed by vendors such as Intellicorp. In Europe, research focused more on systems and expert system shells developed in Prolog. The advantage of Prolog systems was that they employed a form of rule-based programming that was based on formal logic. One such early expert system shell based on Prolog was APES. One of the first use cases of Prolog and APES was in the legal area, namely the encoding of a large portion of the British Nationality Act. Lance Elliot wrote: "The British Nationality Act was passed in 1981 and shortly thereafter was used as a means of showcasing the efficacy of using Artificial Intelligence (AI) techniques and technologies, doing so to explore how the at-the-time newly enacted statutory law might be encoded into a computerized logic-based formalization. A now oft-cited research paper entitled “The British Nationality Act as a Logic Program” was published in 1986 and subsequently became a hallmark for subsequent work in AI and the law." In the 1980s, expert systems proliferated. Universities offered expert system courses and two-thirds of the Fortune 500 companies applied the technology in daily business activities. Interest was international, with the Fifth Generation Computer Systems project in Japan and increased research funding in Europe. In 1981, the first IBM PC, with the PC DOS operating system, was introduced. The imbalance between the affordability of the relatively powerful chips in the PC and the much higher cost of processing power in the mainframes that dominated the corporate IT world at the time created a new type of architecture for corporate computing, termed the client–server model. Calculations and reasoning could be performed at a fraction of the price of a mainframe using a PC. This model also enabled business units to bypass corporate IT departments and directly build their own applications. As a result, client-server computing had a tremendous impact on the expert systems market. Expert systems were already outliers in much of the business world, requiring new skills that many IT departments did not have and were not eager to develop. They were a natural fit for new PC-based shells that promised to put application development into the hands of end users and experts. Until then, the main development environment for expert systems had been high-end Lisp machines from Xerox, Symbolics, and Texas Instruments. With the rise of the PC and client-server computing, vendors such as Intellicorp and Inference Corporation shifted their priorities to developing PC-based tools. Also, new vendors, often financed by venture capital (such as Aion Corporation, Neuron Data, Exsys, and many others), started appearing regularly. The first expert system to be used in a design capacity for a large-scale product was the Synthesis of Integral Design (SID) software program, developed in 1982. Written in Lisp, SID generated 93% of the VAX 9000 CPU logic gates. Input to the software was a set of rules created by several expert logic designers. SID expanded the rules and generated software logic synthesis routines many times the size of the rules themselves. Surprisingly, the combination of these rules resulted in an overall design that exceeded the capabilities of the experts themselves, and in many cases outperformed the human counterparts. 
While some rules contradicted others, top-level control parameters for speed and area provided the tie-breaker. The program was highly controversial but used nevertheless due to project budget constraints. It was terminated by logic designers after the completion of the VAX 9000 project. In the years before the mid-1970s, expectations of what expert systems could accomplish in many fields tended to be extremely optimistic. At the start of these early studies, researchers were hoping to develop entirely automatic (i.e., completely computerized) expert systems. People's expectations of what computers could do were frequently too idealistic. This situation changed radically after Richard M. Karp published his breakthrough paper “Reducibility among Combinatorial Problems” in the early 1970s. Thanks to Karp's work, together with that of other scholars such as Hubert L. Dreyfus, it became clear that there are certain limits and possibilities when one designs computer algorithms. His findings describe what computers can do and what they cannot do. Many of the computational problems related to this type of expert system have certain pragmatic limits. These findings laid the groundwork for the next developments in the field. In the 1990s and beyond, the term "expert system" and the idea of a standalone AI system mostly dropped from the IT lexicon. There are two interpretations of this. One is that "expert systems failed": the IT world moved on because expert systems did not deliver on their overhyped promise. The other is the mirror opposite, that expert systems were simply victims of their success: as IT professionals grasped concepts such as rule engines, such tools migrated from being standalone tools for developing special-purpose "expert" systems to being one of many standard tools. Other researchers suggest that expert systems caused inter-company power struggles when the IT organization lost its exclusivity in software modifications to users or knowledge engineers. In the first decade of the 2000s, the technology enjoyed a "resurrection" under the term "rule-based systems", with significant success stories and adoption. Many of the leading business application suite vendors (such as SAP, Siebel, and Oracle) integrated expert system abilities into their suite of products as a way to specify business logic. Rule engines are no longer simply for defining the rules an expert would use but for any type of complex, volatile, and critical business logic; they often go hand in hand with business process automation and integration environments. Current approaches to expert systems. The limits of the earlier types of expert systems prompted researchers to develop new approaches. They have developed more efficient, flexible, and powerful methods to simulate the human decision-making process. Some of the approaches that researchers have developed are based on new methods of artificial intelligence (AI), and in particular on machine learning and data mining approaches with a feedback mechanism. Recurrent neural networks often take advantage of such mechanisms. Related is the discussion in the disadvantages section. Modern systems can incorporate new knowledge more easily and thus update themselves easily. Such systems can generalize from existing knowledge better and deal with vast amounts of complex data. Related is the subject of big data here. Sometimes these types of expert systems are called "intelligent systems." 
More recently, it can be argued that expert systems have moved into the area of business rules and business rules management systems. Software architecture. An expert system is an example of a knowledge-based system. Expert systems were the first commercial systems to use a knowledge-based architecture. In general, an expert system includes the following components: a knowledge base, an inference engine, an explanation facility, a knowledge acquisition facility, and a user interface. The knowledge base represents facts about the world. In early expert systems such as Mycin and Dendral, these facts were represented mainly as flat assertions about variables. In later expert systems developed with commercial shells, the knowledge base took on more structure and used concepts from object-oriented programming. The world was represented as classes, subclasses, and instances, and assertions were replaced by values of object instances. The rules worked by querying and asserting values of the objects. The inference engine is an automated reasoning system that evaluates the current state of the knowledge base, applies relevant rules, and then asserts new knowledge into the knowledge base. The inference engine may also include abilities for explanation, so that it can explain to a user the chain of reasoning used to arrive at a particular conclusion by tracing back over the firing of rules that resulted in the assertion. There are mainly two modes for an inference engine: forward chaining and backward chaining. The different approaches are dictated by whether the inference engine is being driven by the antecedent (left hand side) or the consequent (right hand side) of the rule. In forward chaining, a rule whose antecedent is satisfied fires and asserts its consequent. For example, consider the following rule: formula_0 A simple example of forward chaining would be to assert Man(Socrates) to the system and then trigger the inference engine. It would match R1 and assert Mortal(Socrates) into the knowledge base. Backward chaining is a bit less straightforward. In backward chaining the system looks at possible conclusions and works backward to see if they might be true. So if the system were trying to determine whether Mortal(Socrates) is true, it would find R1 and query the knowledge base to see if Man(Socrates) is true. One of the early innovations of expert systems shells was to integrate inference engines with a user interface. This could be especially powerful with backward chaining. If the system needs to know a particular fact but does not, then it can simply generate an input screen and ask the user if the information is known. So in this example, it could use R1 to ask the user if Socrates was a Man and then use that new information accordingly. The use of rules to explicitly represent knowledge also enabled explanation abilities. In the simple example above, if the system had used R1 to assert that Socrates was Mortal and a user wished to understand why Socrates was mortal, they could query the system and the system would look back at the rules which fired to cause the assertion and present those rules to the user as an explanation. In English, if the user asked "Why is Socrates Mortal?" the system would reply "Because all men are mortal and Socrates is a man". A significant area for research was the generation of explanations from the knowledge base in natural English rather than simply by showing the more formal but less intuitive rules. 
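The forward-chaining behaviour and the explanation facility described above can be illustrated with a toy rule engine. The following Python sketch encodes only the single rule R1 and the fact Man(Socrates); the data structures and function names are illustrative assumptions, not a reproduction of any particular expert system shell.

# A toy rule engine for the rule R1: Man(x) => Mortal(x).
# Facts are ("predicate", "individual") pairs; each rule maps an antecedent
# predicate to a consequent predicate.
rules = [{"name": "R1", "if": "Man", "then": "Mortal"}]
facts = {("Man", "Socrates")}
explanations = {}          # derived fact -> (rule name, antecedent fact)

def forward_chain(facts, rules):
    # Fire every rule whose antecedent matches a known fact until nothing new is asserted.
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for pred, who in list(facts):
                if pred == rule["if"] and (rule["then"], who) not in facts:
                    facts.add((rule["then"], who))
                    explanations[(rule["then"], who)] = (rule["name"], (pred, who))
                    changed = True
    return facts

def explain(goal):
    # Trace back over the rule firing that produced a fact (a crude explanation facility).
    if goal not in explanations:
        return f"{goal} is a given fact."
    rule_name, antecedent = explanations[goal]
    return f"{goal} follows from {antecedent} by rule {rule_name}."

forward_chain(facts, rules)
print(("Mortal", "Socrates") in facts)    # True
print(explain(("Mortal", "Socrates")))    # cites R1 and ('Man', 'Socrates')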
As expert systems evolved, many new techniques were incorporated into various types of inference engines. Some of the most important of these were: Advantages. The goal of knowledge-based systems is to make the critical information required for the system to work explicit rather than implicit. In a traditional computer program, the logic is embedded in code that can typically only be reviewed by an IT specialist. With an expert system, the goal was to specify the rules in a format that was intuitive and easily understood, reviewed, and even edited by domain experts rather than IT experts. The benefits of this explicit knowledge representation were rapid development and ease of maintenance. Ease of maintenance is the most obvious benefit. This was achieved in two ways. First, by removing the need to write conventional code, many of the normal problems that can be caused by even small changes to a system could be avoided with expert systems. Essentially, the logical flow of the program (at least at the highest level) was simply a given for the system: simply invoke the inference engine. This also was a reason for the second benefit: rapid prototyping. With an expert system shell it was possible to enter a few rules and have a prototype developed in days rather than the months or years typically associated with complex IT projects. It was often claimed that expert system shells removed the need for trained programmers and that experts could develop systems themselves. In reality, this was seldom if ever true. While the rules for an expert system were more comprehensible than typical computer code, they still had a formal syntax where a misplaced comma or other character could cause havoc, as with any other computer language. Also, as expert systems moved from prototypes in the lab to deployment in the business world, issues of integration and maintenance became far more critical. Inevitably, demands to integrate with, and take advantage of, large legacy databases and systems arose. Accomplishing this integration required the same skills as any other type of system. Summing up the benefits of using expert systems, the following can be highlighted: Disadvantages. The most common disadvantage cited for expert systems in the academic literature is the knowledge acquisition problem. Obtaining the time of domain experts for any software application is always difficult, but for expert systems it was especially difficult because the experts were by definition highly valued and in constant demand by the organization. As a result of this problem, a great deal of research in the later years of expert systems was focused on tools for knowledge acquisition, to help automate the process of designing, debugging, and maintaining rules defined by experts. However, when looking at the life-cycle of expert systems in actual use, other problems – essentially the same problems as those of any other large system – seem at least as critical as knowledge acquisition: integration, access to large databases, and performance. Performance could be especially problematic because early expert systems were built using tools (such as earlier Lisp versions) that interpreted code expressions without first compiling them. This provided a powerful development environment, but with the drawback that it was virtually impossible to match the efficiency of the fastest compiled languages (such as C). 
System and database integration were difficult for early expert systems because the tools were mostly in languages and platforms that were neither familiar to nor welcome in most corporate IT environments – programming languages such as Lisp and Prolog, and hardware platforms such as Lisp machines and personal computers. As a result, much effort in the later stages of expert system tool development was focused on integrating with legacy environments such as COBOL and large database systems, and on porting to more standard platforms. These issues were resolved mainly by the client–server paradigm shift, as PCs were gradually accepted in the IT environment as a legitimate platform for serious business system development and as affordable minicomputer servers provided the processing power needed for AI applications. Another major challenge of expert systems emerges when the size of the knowledge base increases. This causes the processing complexity to increase. For instance, when an expert system with 100 million rules was envisioned as the ultimate expert system, it became obvious that such a system would be too complex and would face too many computational problems. An inference engine would have to be able to process huge numbers of rules to reach a decision. How to verify that decision rules are consistent with each other is also a challenge when there are too many rules. Usually such a problem leads to a satisfiability (SAT) formulation; this is the well-known NP-complete Boolean satisfiability problem. If we assume only binary variables, say "n" of them, then the corresponding search space is of size 2formula_1. Thus, the search space can grow exponentially. There are also questions on how to prioritize the use of the rules to operate more efficiently, or how to resolve ambiguities (for instance, if there are too many else-if sub-structures within one rule) and so on. Other problems are related to the overfitting and overgeneralization effects when using known facts and trying to generalize to other cases not described explicitly in the knowledge base. Such problems exist with methods that employ machine learning approaches too. Another problem related to the knowledge base is how to make updates of its knowledge quickly and effectively. Also, how to add a new piece of knowledge (i.e., where to place it among many rules) is challenging. Modern approaches that rely on machine learning methods are easier in this regard. Because of the above challenges, it became clear that new approaches to AI were required instead of rule-based technologies. These new approaches are based on the use of machine learning techniques, along with the use of feedback mechanisms. The key challenges faced by expert systems in medicine (if one considers computer-aided diagnostic systems as modern expert systems), and perhaps in other application domains, include issues related to big data, existing regulations, healthcare practice, various algorithmic issues, and system assessment. Finally, the following disadvantages of using expert systems can be summarized: Applications. Hayes-Roth divides expert systems applications into 10 categories illustrated in the following table. The example applications were not in the original Hayes-Roth table, and some of them arose well afterward. Any application that is not footnoted is described in the Hayes-Roth book. 
Also, while these categories provide an intuitive framework to describe the space of expert systems applications, they are not rigid categories, and in some cases an application may show traits of more than one category. Hearsay was an early attempt at solving voice recognition through an expert systems approach. For the most part, this category of expert systems was not all that successful. Hearsay and all interpretation systems are essentially pattern recognition systems, looking for patterns in noisy data; in the case of Hearsay, this meant recognizing phonemes in an audio stream. Other early examples included analyzing sonar data to detect Russian submarines. These kinds of systems proved much more amenable to a neural network AI solution than a rule-based approach. CADUCEUS and MYCIN were medical diagnosis systems. The user describes their symptoms to the computer as they would to a doctor and the computer returns a medical diagnosis. Dendral was a tool to study hypothesis formation in the identification of organic molecules. The general problem it solved, designing a solution given a set of constraints, was one of the most successful areas for early expert systems applied to business domains, such as salespeople configuring Digital Equipment Corporation (DEC) VAX computers and mortgage loan application development. SMH.PAL is an expert system for the assessment of students with multiple disabilities. GARVAN-ES1 was a medical expert system, developed at the Garvan Institute of Medical Research, that provided automated clinical diagnostic comments on endocrine reports from a pathology laboratory. It was one of the first medical expert systems to go into routine clinical use internationally and the first expert system to be used for diagnosis daily in Australia. The system was written in "C" and ran on a PDP-11 in 64K of memory. It had 661 rules that were compiled, not interpreted. Mistral is an expert system to monitor dam safety, developed in the 1990s by Ismes (Italy). It gets data from an automatic monitoring system and performs a diagnosis of the state of the dam. Its first copy, installed in 1992 on the Ridracoli Dam (Italy), is still operational 24/7/365. It has been installed on several dams in Italy and abroad (e.g., Itaipu Dam in Brazil), and on landslide sites under the name of Eydenet, and on monuments under the name of Kaleidos. Mistral is a registered trademark of CESI. References. Works cited.
[ { "math_id": 0, "text": "R1: \\mathit{Man}(x) \\implies \\mathit{Mortal}(x)" }, { "math_id": 1, "text": "^{n}" } ]
https://en.wikipedia.org/wiki?curid=10136
10137513
Drag-divergence Mach number
The drag-divergence Mach number (not to be confused with critical Mach number) is the Mach number at which the aerodynamic drag on an airfoil or airframe begins to increase rapidly as the Mach number continues to increase. This increase can cause the drag coefficient to rise to more than ten times its low-speed value. The value of the drag-divergence Mach number is typically greater than 0.6; therefore it is a transonic effect. The drag-divergence Mach number is usually close to, and always greater than, the critical Mach number. Generally, the drag coefficient peaks at Mach 1.0 and begins to decrease again after the transition into the supersonic regime above approximately Mach 1.2. The large increase in drag is caused by the formation of a shock wave on the upper surface of the airfoil, which can induce flow separation and adverse pressure gradients on the aft portion of the wing. This effect requires that aircraft intended to fly at supersonic speeds have a large amount of thrust. In the early development of transonic and supersonic aircraft, a steep dive was often used to provide extra acceleration through the high-drag region around Mach 1.0. This steep increase in drag gave rise to the popular false notion of an unbreakable sound barrier, because it seemed that no aircraft technology in the foreseeable future would have enough propulsive force or control authority to overcome it. Indeed, one of the popular analytical methods for calculating drag at high speeds, the Prandtl–Glauert rule, predicts an infinite amount of drag at Mach 1.0. Two of the important technological advancements that arose out of attempts to conquer the sound barrier were the Whitcomb area rule and the supercritical airfoil. A supercritical airfoil is shaped specifically to make the drag-divergence Mach number as high as possible, allowing aircraft to fly with relatively lower drag at high subsonic and low transonic speeds. These, along with other advancements including computational fluid dynamics, have been able to reduce the factor of increase in drag to two or three for modern aircraft designs. Drag-divergence Mach numbers "M"dd for a given family of propeller airfoils can be approximated by Korn's relation: formula_0 where formula_1 is the drag-divergence Mach number, formula_2 is the coefficient of lift of a specific section of the airfoil, "t" is the airfoil thickness at a given section, "c" is the chord length at a given section, and formula_3 is a factor established through CFD analysis: "K" = 0.87 for conventional airfoils (6 series), "K" = 0.95 for supercritical airfoils.
[ { "math_id": 0, "text": "M_\\text{dd} + \\frac{1}{10}c_{l,\\text{design}} + \\frac{t}{c} = K," }, { "math_id": 1, "text": "M_\\text{dd}" }, { "math_id": 2, "text": "c_{l,\\text{design}}" }, { "math_id": 3, "text": "K" } ]
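Rearranging Korn's relation gives "M"dd = "K" - "c"l,design/10 - "t"/"c", which can be evaluated directly. The following Python sketch uses made-up section values (12% thickness, design lift coefficient 0.4) purely for illustration; the function name is an arbitrary choice, not an established tool.

def drag_divergence_mach(cl_design, thickness_ratio, supercritical=False):
    # Korn's relation rearranged: M_dd = K - cl_design/10 - t/c,
    # with the technology factors K quoted in the text above.
    K = 0.95 if supercritical else 0.87
    return K - cl_design / 10.0 - thickness_ratio

# Illustrative 12%-thick section with a design lift coefficient of 0.4
print(drag_divergence_mach(0.4, 0.12))                      # 0.71 for a conventional airfoil
print(drag_divergence_mach(0.4, 0.12, supercritical=True))  # 0.79 for a supercritical airfoil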
https://en.wikipedia.org/wiki?curid=10137513
1013769
Systemic risk
Risk of collapse of an entire financial system or entire market In finance, systemic risk is the risk of collapse of an entire financial system or entire market, as opposed to the risk associated with any one individual entity, group or component of a system, that can be contained therein without harming the entire system. It can be defined as "financial "system" instability, potentially catastrophic, caused or exacerbated by idiosyncratic events or conditions in financial intermediaries". It refers to the risks imposed by "interlinkages" and "interdependencies" in a system or market, where the failure of a single entity or cluster of entities can cause a cascading failure, which could potentially bankrupt or bring down the entire system or market. It is also sometimes erroneously referred to as "systematic risk". Explanation. Systemic risk has been associated with a bank run which has a cascading effect on other banks which are owed money by the first bank in trouble, causing a cascading failure. As depositors sense the ripple effects of default, and liquidity concerns cascade through money markets, a panic can spread through a market, with a sudden flight to quality, creating many sellers but few buyers for illiquid assets. These interlinkages and the potential "clustering" of bank runs are the issues which policy makers consider when addressing the issue of protecting a system against systemic risk. Governments and market monitoring institutions (such as the U.S. Securities and Exchange Commission (SEC), and central banks) often try to put policies and rules in place with the justification of safeguarding the interests of the market as a whole, claiming that the trading participants in financial markets are entangled in a web of dependencies arising from their interlinkage. In simple English, this means that some companies are viewed as too big and too interconnected to fail. Policy makers frequently claim that they are concerned about protecting the resiliency of the system, rather than any one individual in that system. Systemic risk arises because of the interaction of market participants, and therefore can be seen as a form of endogenous risk. The risk management literature offers an alternative perspective to notions from economics and finance by distinguishing between the nature of systemic failure, its causes and effects, and the risk of its occurrence. It takes an "operational behaviour" approach to defining systemic risk of failure as: "A measure of the overall probability at a current time of the system entering an operational state of systemic failure by a specified time in the future, in which the supply of financial services no longer satisfies demand according to regulatory criteria, qualified by a measure of uncertainty about the system's future behaviour, in the absence of new mitigation efforts." This definition lends itself to practical risk mitigation applications, as demonstrated in recent research by a simulation of the collapse of the Icelandic financial system in circa 2008. Systemic risk should not be confused with market or price risk as the latter is specific to the item being bought or sold and the effects of market risk are isolated to the entities dealing in that specific item. This kind of risk can be mitigated by hedging an investment by entering into a mirror trade. 
Insurance is often easy to obtain against "systemic risks" because a party issuing that insurance can pocket the premiums, issue dividends to shareholders, enter insolvency proceedings if a catastrophic event ever takes place, and hide behind limited liability. Such insurance, however, is not effective for the insured entity. One argument that was used by financial institutions to obtain special advantages in bankruptcy for derivative contracts was a claim that the market is both critical and fragile. Systemic risk can also be defined as the likelihood and degree of negative consequences to the larger body. With respect to federal financial regulation, the systemic risk of a financial institution is the likelihood and the degree that the institution's activities will negatively affect the larger economy such that unusual and extreme federal intervention would be required to ameliorate the effects. A general definition of systemic risk which is not limited by its mathematical approaches, model assumptions or focus on one institution, and which is also the first operationalizable definition of systemic risk encompassing the systemic character of financial, political, environmental, and many other risks, was put forth in 2010. The Systemic Risk Centre at the London School of Economics is focused on the study of systemic risk. It finds that systemic risk is a form of endogenous risk, hence frustrating empirical measurements of systemic risk. Measurement. TBTF/TCTF. According to the Property Casualty Insurers Association of America, there are two key assessments for measuring systemic risk, the "too big to fail" (TBTF) and the "too (inter)connected to fail" (TCTF or TICTF) tests. First, the TBTF test is the traditional analysis for assessing the risk of required government intervention. TBTF can be measured in terms of an institution's size relative to the national and international marketplace, market share concentration, and competitive barriers to entry or how easily a product can be substituted. Second, the TCTF test is a measure of the likelihood and amount of medium-term net negative impact to the larger economy of an institution's failure to be able to conduct its ongoing business. The impact is measured beyond the institution's products and activities to include the economic multiplier of all other commercial activities dependent specifically on that institution. The impact is also dependent on how correlated an institution's business is with other systemic risks. Too big to fail. The traditional analysis for assessing the risk of required government intervention is the "too big to fail" test (TBTF). TBTF can be measured in terms of an institution's size relative to the national and international marketplace, market share concentration (using the Herfindahl-Hirschman Index for example), and competitive barriers to entry or how easily a product can be substituted. While there are large companies in most financial marketplace segments, the national insurance marketplace is spread among thousands of companies, and the barriers to entry in a business where capital is the primary input are relatively minor. The policies of one homeowners insurer can be relatively easily substituted for another or picked up by a state residual market provider, with limits on the underwriting fluidity primarily stemming from state-by-state regulatory impediments, such as limits on pricing and capital mobility. 
During the 2007–2008 financial crisis, the collapse of the American International Group (AIG) posed a significant systemic risk to the financial system. There are arguably either no or extremely few insurers that are TBTF in the U.S. marketplace. Too connected to fail. A more useful systemic risk measure than a traditional TBTF test is a "too connected to fail" (TCTF) assessment. An intuitive TCTF analysis has been at the heart of most recent federal financial emergency relief decisions. TCTF is a measure of the likelihood and amount of medium-term net negative impact to the larger economy of an institution's failure to be able to conduct its ongoing business. Network models have been proposed as a method for quantifying the impact of interconnectedness on systemic risk. The impact is measured not just on the institution's products and activities, but also on the economic multiplier of all other commercial activities dependent specifically on that institution. It is also dependent on how correlated an institution's business is with other systemic risks. Criticisms of systemic risk measurements. Danielsson et al. express concerns about systemic risk measurements, such as SRISK and CoVaR, because they are based on market outcomes that happen multiple times a year, so that the probability of systemic risk as measured does not correspond to the actual systemic risk in the financial system. Systemic financial crises happen once every 43 years for a typical OECD country and measurements of systemic risk should target that probability. SRISK. A financial institution represents a systemic risk if it becomes undercapitalized when the financial system as a whole is undercapitalized. In a single risk factor model, Brownlees and Engle build a systemic risk measure named SRISK. SRISK can be interpreted as the amount of capital that needs to be injected into a financial firm so as to restore a certain form of minimal capital requirement. SRISK has several nice properties: SRISK is expressed in monetary terms and is, therefore, easy to interpret. SRISK can be easily aggregated across firms to provide industry and even country specific aggregates. Last, the computation of SRISK involves variables which may be viewed on their own as risk measures. These are the size of the financial firm, the leverage (ratio of assets to market capitalization), and a measure of how the return of the firm evolves with the market (some sort of time varying conditional beta but with emphasis on the tail of the distribution). Whereas the initial Brownlees and Engle model is tailored to the US market, the extension by Engle, Jondeau, and Rockinger is more suitable for the European markets. One factor captures worldwide variations of financial markets, another one the variations of European markets. This extension allows for a country-specific factor. By accounting for different factors, one captures the notion that shocks to the US or Asian markets may affect Europe, but also that bad news within Europe (such as the news about a potential default of one of the countries) matters for Europe. Also, there may be country specific news that does not affect Europe or the US, but matters for a given country. Empirically the last factor is less relevant than the worldwide or European factor. Since SRISK is measured in terms of currency, the industry aggregates may also be related to Gross Domestic Product. As such one obtains a measure of domestic, systemically important banks. 
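The capital-shortfall calculation behind SRISK can be sketched in a few lines. The formulation below follows the commonly cited Brownlees–Engle expression SRISK = k·D − (1 − k)·W·(1 − LRMES), where k is a prudential capital ratio, D the book value of debt, W the market capitalization and LRMES the long-run marginal expected shortfall; this formula is quoted here as an assumption rather than taken from the text above, and the balance-sheet figures are invented purely for illustration.

```python
def srisk(debt, market_cap, lrmes, k=0.08):
    """Capital shortfall of a firm in a crisis, in the usual Brownlees-Engle
    formulation: SRISK = k*D - (1-k)*W*(1-LRMES).
    A negative value means the firm would have a capital surplus."""
    return k * debt - (1.0 - k) * market_cap * (1.0 - lrmes)

# Illustrative (made-up) figures, in billions of a currency unit.
firms = {
    "Bank A": dict(debt=900.0, market_cap=80.0, lrmes=0.60),
    "Bank B": dict(debt=400.0, market_cap=120.0, lrmes=0.35),
}
for name, f in firms.items():
    print(f"{name}: SRISK = {srisk(**f):+.1f} (positive = expected capital shortfall)")

# Industry aggregate: only positive shortfalls are usually summed.
total = sum(max(srisk(**f), 0.0) for f in firms.values())
print("Aggregate SRISK:", round(total, 1))
```

Because the result is expressed in currency units, aggregates of this kind are what the text above relates to Gross Domestic Product.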
The SRISK Systemic Risk Indicator is computed automatically on a weekly basis and made available to the community. For the US model, SRISK and other statistics may be found under the Volatility Lab of NYU Stern School website and for the European model under the Center of Risk Management (CRML) website of HEC Lausanne. Pair/vine copulas. A vine copula can be used to model systemic risk across a portfolio of financial assets. One methodology is to apply the Clayton Canonical Vine Copula to model asset pairs in the vine structure framework. As a Clayton copula is used, the greater the degree of asymmetric (i.e., left tail) dependence, the higher the Clayton copula parameter. Therefore, one can sum up all the Clayton Copula parameters, and the higher the sum of these parameters, the greater the impending likelihood of systemic risk. This methodology has been found to detect spikes in the US equities markets over the last four decades, capturing the Oil Crisis and Energy Crisis of the 1970s, Black Monday and the Gulf War in the 1980s, the Russian Default/LTCM crisis of the 1990s, and the Technology Bubble and Lehman Default in the 2000s. Manzo and Picca introduce the t-Student Distress Insurance Premium (tDIP), a copula-based method that measures systemic risk as the expected tail loss on a credit portfolio of entities, in order to quantify sovereign as well as financial systemic risk in Europe. Valuation of assets and derivatives under systemic risk. Inadequacy of classic valuation models. One problem when it comes to the valuation of derivatives, debt, or equity under systemic risk is that financial interconnectedness has to be modelled. One particular problem is posed by closed valuation chains, as exemplified here for four firms A, B, C, and D: B might hold shares of A, C holds some debt of B, D owns a derivative issued by C, and A owns some debt of D. For instance, the share price of A could influence all other asset values, including itself. The Merton (1974) model. Situations such as the one explained earlier, which are present in mature financial markets, cannot be modelled within the single-firm Merton model, nor by its straightforward extensions to multiple firms with potentially correlated assets. To demonstrate this, consider two financial firms, formula_0, with limited liability, which both own system-exogenous assets of a value formula_1 at a maturity formula_2, and which both owe a single amount of zero coupon debt formula_3, due at time formula_4. "System-exogenous" here refers to the assumption that the business asset formula_5 is not influenced by the firms in the considered financial system. In the classic single-firm Merton model, the equity formula_6 and the recovery value formula_7 of the debt at maturity are given by formula_8 and formula_9 Equity and debt recovery value, formula_10 and formula_11, are thus uniquely and immediately determined by the value formula_5 of the exogenous business assets. Assuming that the formula_5 are, for instance, defined by a Black-Scholes dynamic (with or without correlations), risk-neutral no-arbitrage pricing of debt and equity is straightforward. Non-trivial asset value equations. Consider now again two such firms, but assume that firm 1 owns 5% of firm 2's equity and 20% of its debt. Similarly, assume that firm 2 owns 3% of firm 1's equity and 10% of its debt. 
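The single-firm payoffs just given, and the cross-ownership example introduced above (whose equilibrium equations are written out in the next passage), can be evaluated numerically. The following is a minimal sketch using fixed-point iteration; the exogenous asset values and debt face values are arbitrary illustrative assumptions.

```python
def merton_single(a, d):
    """Single-firm Merton payoffs at maturity: equity s = (a - d)^+, recovery r = min(d, a)."""
    return max(a - d, 0.0), min(d, a)

def cross_ownership_equilibrium(a1, a2, d1, d2, tol=1e-12, max_iter=10_000):
    """Fixed-point iteration for the two-firm example in which firm 1 holds
    5% of firm 2's equity and 20% of its debt, and firm 2 holds 3% of firm 1's
    equity and 10% of its debt (the percentages stated in the text)."""
    r1 = r2 = s1 = s2 = 0.0
    for _ in range(max_iter):
        v1 = a1 + 0.05 * s2 + 0.20 * r2   # value available to firm 1 at maturity
        v2 = a2 + 0.03 * s1 + 0.10 * r1   # value available to firm 2 at maturity
        new = (min(d1, v1), min(d2, v2), max(v1 - d1, 0.0), max(v2 - d2, 0.0))
        if max(abs(x - y) for x, y in zip(new, (r1, r2, s1, s2))) < tol:
            break
        r1, r2, s1, s2 = new
    return dict(r1=r1, r2=r2, s1=s1, s2=s2)

# Illustrative numbers only: firm 1 is distressed on a stand-alone basis, firm 2 is healthy.
print(merton_single(a=80.0, d=100.0))                                   # (0.0, 80.0)
print(cross_ownership_equilibrium(a1=80.0, a2=150.0, d1=100.0, d2=90.0))
```

In this illustrative case the cross-holdings lift firm 1 back above its debt level, which is exactly the kind of feedback the non-trivial equation system below captures.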
The equilibrium price equations, or liquidation value equations, at maturity are now given by formula_12 formula_13 formula_14 formula_15 This example demonstrates that systemic risk in the form of financial interconnectedness can already lead to a non-trivial, non-linear equation system for the asset values if only two firms are involved. Over- and underestimation of default probabilities. It is known that modelling credit risk while ignoring cross-holdings of debt or equity can lead to either an under- or an over-estimation of default probabilities. The need for proper structural models of financial interconnectedness in quantitative risk management – be it in research or practice – is therefore obvious. Structural models under financial interconnectedness. The first authors to consider structural models for financial systems where each firm could own the debt of other firms were Eisenberg and Noe in 2001. Suzuki (2002) extended the analysis of interconnectedness by modeling the cross ownership of both debt and equity claims. Building on Eisenberg and Noe (2001), Cifuentes, Ferrucci, and Shin (2005) considered the effect of costs of default on network stability. Elsinger further developed the Eisenberg and Noe (2001) model by incorporating financial claims of differing priority. Acemoglu, Ozdaglar, and Tahbaz-Salehi (2015) developed a structural systemic risk model incorporating both distress costs and debt claims with varying priorities and used this model to examine the effects of network interconnectedness on financial stability. They showed that, up to a certain point, interconnectedness enhances financial stability. However, once a critical threshold density of connectedness is exceeded, further increases in the density of the financial network propagate risk. Glasserman and Young (2015) applied the Eisenberg and Noe (2001) model to modelling the effect of shocks to banking networks. They developed general bounds for the effects of network connectivity on default probabilities. In contrast to most of the structural systemic risk literature, their results are quite general and do not require assuming a specific network architecture or specific shock distributions. Risk-neutral valuation: price indeterminacy and open problems. Generally speaking, risk-neutral pricing in structural models of financial interconnectedness requires unique equilibrium prices at maturity in dependence of the exogenous asset price vector, which can be random. While financially interconnected systems with debt and equity cross-ownership without derivatives are fairly well understood in the sense that relatively weak conditions on the ownership structures in the form of ownership matrices are required to warrant uniquely determined price equilibria, the Fischer (2014) model needs very strong conditions on derivatives – which are defined in dependence on any other liability of the considered financial system – to be able to guarantee uniquely determined prices of all system-endogenous liabilities. Furthermore, it is known that there exist examples with no solutions at all, finitely many solutions (more than one), and infinitely many solutions. At present, it is unclear how weak conditions on derivatives can be chosen to still be able to apply risk-neutral pricing in financial networks with systemic risk. It is noteworthy that the price indeterminacy that evolves from multiple price equilibria is fundamentally different from price indeterminacy that stems from market incompleteness. Factors. 
Factors that are found to support systemic risks are: Diversification. Risks can be reduced in four main ways: avoidance, diversification, hedging and insurance by transferring risk. Systematic risk, also called market risk or un-diversifiable risk, is a risk of a security that cannot be reduced through diversification. Participants in the market, like hedge funds, can be the source of an increase in systemic risk and the transfer of risk to them may, paradoxically, increase the exposure to systemic risk. Until recently, many theoretical models of finance pointed towards the stabilizing effects of a diversified (i.e., dense) financial system. Nevertheless, some recent work has started to challenge this view, investigating conditions under which diversification may have ambiguous effects on systemic risk. Within a certain range, financial interconnections serve as a shock-absorber (i.e., connectivity engenders robustness and risk-sharing prevails). But beyond the tipping point, interconnections might serve as a shock-amplifier (i.e., connectivity engenders fragility and risk-spreading prevails). Regulation. One of the main reasons for regulation in the marketplace is to reduce systemic risk. However, regulation arbitrage – the transfer of commerce from a regulated sector to a less regulated or unregulated sector – brings markets a full circle and restores systemic risk. For example, the banking sector was brought under regulations in order to reduce systemic risks. Since the banks themselves could not give credit where the risk (and therefore returns) were high, it was primarily the insurance sector which took over such deals. Thus the systemic risk migrated from one sector to another and proves that regulation of only one industry cannot be the sole protection against systemic risks. Project risks. In the fields of project management and cost engineering, systemic risks include those risks that are not unique to a particular project and are not readily manageable by a project team at a given point in time. They are caused by micro or internal factors i.e. uncertainty resulting from attributes of the project system/culture. Some use the term inherent risk. These systemic risks are called individual project risks e.g. in PMI PMBOK(R) Guide. These risks may be driven by the nature of a company's project system (e.g., funding projects before the scope is defined), capabilities, or culture. They may also be driven by the level of technology in a project or the complexity of a project's scope or execution strategy. One recent example of systemic risk is the collapse of Lehman Brothers in 2008, which sent shockwaves throughout the financial system and the economy. In contrast, those risks that are unique to a particular project are called overall project risks aka systematic risks in finance terminology. They are project-specific risks which are sometimes called contingent risks, or risk events. These systematic risks are caused by uncertainty in macro or external factors of the external environment. "The Great Recession" of the late 2000s is an example of systematic risk. Overall project risks are determined using PESTLE, VUCA, etc. 
PMI PMBOK(R) Guide defines individual project risk as "an uncertain event or condition that, if it occurs, has a positive or negative effect on one or more project objectives," whereas overall project risk is defined as "the effect of uncertainty on the project as a whole … more than the sum of individual risks within a project, since it includes all sources of project uncertainty … represents the exposure of stakeholders to the implications of variations in project outcome, both positive and negative." Systemic risk and insurance. In February 2010, international insurance economics think tank, The Geneva Association, published a 110-page analysis of the role of insurers in systemic risk. In the report, the differing roles of insurers and banks in the global financial system and their impact on the crisis are examined (See also CEA report, "Why Insurers Differ from Banks"). A key conclusion of the analysis is that the core activities of insurers and reinsurers do not pose systemic risks due to the specific features of the industry: Applying the most commonly cited definition of systemic risk, that of the Financial Stability Board (FSB), to the core activities of insurers and reinsurers, the report concludes that none are systemically relevant for at least one of the following reasons: The report underlines that supervisors and policymakers should focus on activities rather than financial institutions when introducing new regulation and that upcoming insurance regulatory regimes, such as Solvency II in the European Union, already adequately address insurance activities. However, during the financial crisis, a small number of quasi-banking activities conducted by insurers either caused failure or triggered significant difficulties. The report therefore identifies two activities which, when conducted on a widespread scale without proper risk control frameworks, have the potential for systemic relevance. The industry has put forward five recommendations to address these particular activities and strengthen financial stability: Since the publication of The Geneva Association statement, in June 2010, the International Association of Insurance Supervisors (IAIS) issued its position statement on key financial stability issues. A key conclusion of the statement was that, "The insurance sector is susceptible to systemic risks generated in other parts of the financial sector. For most classes of insurance, however, there is little evidence of insurance either generating or amplifying systemic risk, within the financial system itself or in the real economy." Other organisations such as the CEA and the Property Casualty Insurers Association of America (PCI) have issued reports on the same subject. Discussion. Systemic risk evaluates the likelihood and degree of negative consequences to the larger body. The term "systemic risk" is frequently used in recent discussions related to the economic crisis, such as the Subprime mortgage crisis. The systemic risk of a financial institution is the likelihood and the degree that the institution's activities will negatively affect the larger economy such that unusual and extreme federal intervention would be required to ameliorate the effects. The failing of financial firms in 2008 caused systemic risk to the larger economy. Chairman Barney Frank has expressed concerns regarding the vulnerability of highly leveraged financial systems to systemic risk and the US government has debated how to address financial services regulatory reform and systemic risk. 
A series of empirical studies published between the 1990s and 2000s showed that deregulation and increasingly fierce competition lower banks' profit margins and encourage moral hazard, leading banks to take excessive credit risks to increase profits. On the other hand, the same effect was measured in the presence of a banking oligopoly in which the banking sector was dominated by a restricted number of market operators, encouraged by their market share and contractual power to set higher average loan rates. An excessive number of market operators was sometimes deliberately introduced, with sales below market value used to trigger a price war and a wave of massive bank failures, which subsequently degenerated into the creation of a market cartel: those two phases were seen as expressions of the same interest in colluding at generally lower prices (and then higher ones), made possible by a lack of regulation designed to prevent both. Banks are the entities most likely to be exposed to valuation risk as a result of their massive holdings of financial instruments classified as Level 2 or 3 of the fair value hierarchy, i.e. instruments whose actual market value is uncertain. In February 2020 the European Systemic Risk Board warned in a report that substantial amounts of financial instruments with complex features and limited liquidity that sit in banks' balance sheets are a source of risk for the stability of the global financial system. In Europe, at the end of 2020 the banks under the direct supervision of the European Central Bank (ECB) held financial instruments subject to fair value accounting in an amount of €8.7 trillion, of which €6.6 trillion were classified as Level 2 or 3. Level 2 and Level 3 instruments respectively amounted to 495% and 23% of the banks' highest-quality capital (so-called Tier 1 Capital). As an implication, even small errors in such financial instruments' valuations may have significant impacts on banks' capital. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "i = 1, 2" }, { "math_id": 1, "text": "a_i \\geq 0" }, { "math_id": 2, "text": "T \\geq 0" }, { "math_id": 3, "text": "d_i \\geq 0" }, { "math_id": 4, "text": "T" }, { "math_id": 5, "text": "a_i" }, { "math_id": 6, "text": "s_i \\geq 0" }, { "math_id": 7, "text": "r_i \\geq 0" }, { "math_id": 8, "text": "r_i = \\min\\{d_i, a_i\\}" }, { "math_id": 9, "text": "s_i = (a_i - d_i)^+." }, { "math_id": 10, "text": "s_i" }, { "math_id": 11, "text": "r_i" }, { "math_id": 12, "text": "r_1 = \\min\\{d_1, a_1 + 0.05s_2 + 0.2r_2\\}" }, { "math_id": 13, "text": "r_2 = \\min\\{d_2, a_2 + 0.03s_1 + 0.1r_1\\}" }, { "math_id": 14, "text": "s_1 = (a_1 + 0.05s_2 + 0.2r_2 - d_1)^+" }, { "math_id": 15, "text": "s_2 = (a_2 + 0.03s_1 + 0.1r_1 - d_2)^+." } ]
https://en.wikipedia.org/wiki?curid=1013769
10137896
Tensor product of quadratic forms
In mathematics, the tensor product of quadratic forms is most easily understood when one views the quadratic forms as "quadratic spaces". If "R" is a commutative ring where 2 is invertible, and if formula_0 and formula_1 are two quadratic spaces over "R", then their tensor product formula_2 is the quadratic space whose underlying "R"-module is the tensor product formula_3 of "R"-modules and whose quadratic form is the quadratic form associated to the tensor product of the bilinear forms associated to formula_4 and formula_5. In particular, the form formula_6 satisfies formula_7 (which does not uniquely characterize it, however). It follows from this that if the quadratic forms are diagonalizable (which is always possible if 2 is invertible in "R"), i.e., formula_8 formula_9 then the tensor product has diagonalization formula_10 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
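In matrix terms, the Gram matrix of the tensor product form is the Kronecker product of the Gram matrices of the two factors, so the diagonalization rule above can be checked numerically. A minimal sketch (the particular diagonal entries are arbitrary):

```python
import numpy as np

# Diagonal quadratic forms <1, -2, 3> and <5, 7>, represented by the
# Gram matrices of their associated bilinear forms.
q1 = np.diag([1, -2, 3])
q2 = np.diag([5, 7])

# The bilinear form of the tensor product corresponds to the Kronecker product.
q_tensor = np.kron(q1, q2)
print(np.diag(q_tensor))      # [  5   7 -10 -14  15  21 ], i.e. all products a_i * b_j

# The stated property on pure tensors: q(v1 (x) v2) = q1(v1) * q2(v2).
v1 = np.array([2, 1, -1]); v2 = np.array([1, 3])
lhs = np.kron(v1, v2) @ q_tensor @ np.kron(v1, v2)
rhs = (v1 @ q1 @ v1) * (v2 @ q2 @ v2)
print(lhs, rhs)               # both 340
```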
[ { "math_id": 0, "text": "(V_1, q_1)" }, { "math_id": 1, "text": "(V_2,q_2)" }, { "math_id": 2, "text": "(V_1 \\otimes V_2, q_1 \\otimes q_2)" }, { "math_id": 3, "text": "V_1 \\otimes V_2" }, { "math_id": 4, "text": "q_1" }, { "math_id": 5, "text": "q_2" }, { "math_id": 6, "text": "q_1 \\otimes q_2" }, { "math_id": 7, "text": " (q_1\\otimes q_2)(v_1 \\otimes v_2) = q_1(v_1) q_2(v_2) \\quad \\forall v_1 \\in V_1,\\ v_2 \\in V_2" }, { "math_id": 8, "text": "q_1 \\cong \\langle a_1, ... , a_n \\rangle" }, { "math_id": 9, "text": "q_2 \\cong \\langle b_1, ... , b_m \\rangle" }, { "math_id": 10, "text": "q_1 \\otimes q_2 \\cong \\langle a_1b_1, a_1b_2, ... a_1b_m, a_2b_1, ... , a_2b_m , ... , a_nb_1, ... a_nb_m \\rangle." } ]
https://en.wikipedia.org/wiki?curid=10137896
10138003
Fourier integral operator
Class of differential and integral operators In mathematical analysis, Fourier integral operators have become an important tool in the theory of partial differential equations. The class of Fourier integral operators contains differential operators as well as classical integral operators as special cases. A Fourier integral operator formula_0 is given by: formula_1 where formula_2 denotes the Fourier transform of formula_3, formula_4 is a standard symbol which is compactly supported in formula_5 and formula_6 is real valued and homogeneous of degree formula_7 in formula_8. It is also necessary to require that formula_9 on the support of "a." Under these conditions, if "a" is of order zero, it is possible to show that formula_0 defines a bounded operator from formula_10 to formula_10. Examples. One motivation for the study of Fourier integral operators is the solution operator for the initial value problem for the wave operator. Indeed, consider the following problem: formula_11 and formula_12 The solution to this problem is given by formula_13 These need to be interpreted as oscillatory integrals since they do not in general converge. This formally looks like a sum of two Fourier integral operators, however the coefficients in each of the integrals are not smooth at the origin, and so not standard symbols. If we cut out this singularity with a cutoff function, then the so obtained operators still provide solutions to the initial value problem modulo smooth functions. Thus, if we are only interested in the propagation of singularities of the initial data, it is sufficient to consider such operators. In fact, if we allow the sound speed c in the wave equation to vary with position we can still find a Fourier integral operator that provides a solution modulo smooth functions, and Fourier integral operators thus provide a useful tool for studying the propagation of singularities of solutions to variable speed wave equations, and more generally for other hyperbolic equations. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
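In one space dimension the two oscillatory integrals above combine into the Fourier multiplier sin(ct|ξ|)/(c|ξ|) acting on the transform of f, and its action can be imitated on a periodic grid with the FFT. The sketch below is only a discrete caricature of the solution operator (the grid size, speed and initial data are arbitrary choices), not a statement about symbol classes:

```python
import numpy as np

def wave_solution(f, dx, t, c=1.0):
    """Solution operator of u_tt = c^2 u_xx with u(0,.) = 0, u_t(0,.) = f,
    realized as the Fourier multiplier sin(c*t*|xi|) / (c*|xi|) on a periodic grid."""
    xi = 2.0 * np.pi * np.fft.fftfreq(len(f), d=dx)   # angular frequencies
    mult = np.full_like(xi, t)                        # limit value of the multiplier at xi = 0
    nz = xi != 0.0
    mult[nz] = np.sin(c * t * np.abs(xi[nz])) / (c * np.abs(xi[nz]))
    return np.real(np.fft.ifft(mult * np.fft.fft(f)))

# A narrow bump as initial velocity: the two sharp wavefronts (the singular
# features discussed above) travel left and right with speed c.
x = np.linspace(-np.pi, np.pi, 1024, endpoint=False)
f = np.exp(-200.0 * x**2)
u = wave_solution(f, dx=x[1] - x[0], t=1.5)

slope = np.abs(np.gradient(u, x))
left = x[x < 0][np.argmax(slope[x < 0])]
right = x[x > 0][np.argmax(slope[x > 0])]
print(f"wavefronts near x = {left:.2f} and x = {right:.2f}")   # roughly -1.5 and +1.5
```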
[ { "math_id": 0, "text": "T" }, { "math_id": 1, "text": "(Tf)(x)=\\int_{\\mathbb{R}^n} e^{2\\pi i \\Phi(x,\\xi)}a(x,\\xi)\\hat{f}(\\xi) \\, d\\xi " }, { "math_id": 2, "text": "\\hat f" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "a(x,\\xi)" }, { "math_id": 5, "text": "x" }, { "math_id": 6, "text": "\\Phi" }, { "math_id": 7, "text": "1" }, { "math_id": 8, "text": "\\xi" }, { "math_id": 9, "text": "\\det \\left(\\frac{\\partial^2 \\Phi}{\\partial x_i \\, \\partial \\xi_j}\\right)\\neq 0" }, { "math_id": 10, "text": "L^{2}" }, { "math_id": 11, "text": " \\frac{1}{c^2}\\frac{\\partial^2 u}{\\partial t^2}(t,x) = \\Delta u(t,x) \\quad \\mathrm{for} \\quad (t,x) \\in \\mathbb{R}^+ \\times \\mathbb{R}^n," }, { "math_id": 12, "text": " u(0,x) = 0, \\quad \\frac{\\partial u}{\\partial t}(0,x) = f(x), \\quad \\mathrm{for} \\quad f \\in \\mathcal{S}'(\\mathbb{R}^n)." }, { "math_id": 13, "text": " u(t,x) = \\frac{1}{(2 \\pi)^n} \\int \\frac{e^{i (\\langle x,\\xi \\rangle + c t | \\xi |)}}{2 i c |\\xi |} \\hat f (\\xi) \\, d \\xi - \\frac{1}{(2 \\pi)^n} \\int \\frac{e^{i (\\langle x,\\xi \\rangle - c t | \\xi |)}}{2 i c |\\xi |} \\hat f (\\xi) \\, d \\xi ." } ]
https://en.wikipedia.org/wiki?curid=10138003
1013834
Beverage antenna
Type of radio antenna The Beverage antenna or "wave antenna" is a long-wire receiving antenna mainly used in the low frequency and medium frequency radio bands, invented by Harold H. Beverage in 1921. It is used by amateur radio operators, shortwave listeners, longwave radio DXers and for military applications. A Beverage antenna consists of a horizontal wire from one-half to several wavelengths long (tens to hundreds of metres at HF to several kilometres for longwave) suspended above the ground, with the feedline to the receiver attached to one end, and the other end of the wire terminated through a resistor to ground. The antenna has a unidirectional radiation pattern with the main lobe of the pattern at a shallow angle into the sky off the resistor-terminated end, making it ideal for reception of long distance skywave (skip) transmissions from stations over the horizon which reflect off the ionosphere. However the antenna must be built so the wire points in the direction of the transmitter(s) to be received. The advantages of the Beverage are excellent directivity, a wider bandwidth than resonant antennas, and a strong ability to receive distant and overseas transmitters. Its disadvantages are its physical size, requiring considerable land area, and inability to rotate to change the direction of reception. Installations often use multiple Beverage antennas to provide wide azimuth coverage. History. Harold Beverage experimented with receiving antennas similar to the Beverage antenna in 1919 at the Otter Cliffs Radio Station. He discovered in 1920 that an otherwise nearly bidirectional long-wire antenna becomes unidirectional by placing it close to the lossy earth and by terminating one end of the wire with a resistor. In 1921, Beverage was granted a patent for his antenna. That year, Beverage long-wave receiving antennas had been installed at RCA's Riverhead, New York, Belfast, Maine, Belmar, New Jersey, and Chatham, Massachusetts receiver stations for transatlantic radiotelegraphy traffic. Perhaps the largest Beverage antenna—an array of four phased Beverages—was built by AT&amp;T in Houlton, Maine, for the first transatlantic telephone system opened in 1927. Description. The Beverage antenna consists of a horizontal wire one-half to several wavelengths long, suspended close to the ground, pointed in the direction of the signal source. At the end toward the signal source it is terminated by a resistor to ground approximately equal in value to the characteristic impedance of the antenna considered as a transmission line, usually 400 to 800 ohms. At the other end it is connected to the receiver with a transmission line, through a balun to match the line to the antenna's characteristic impedance. Operation. Unlike other wire antennas such as dipole or monopole antennas which act as resonators, with the radio currents traveling in both directions along the element, bouncing back and forth between the ends as standing waves, the Beverage antenna is a traveling wave antenna; the radio frequency current travels in one direction along the wire, in the same direction as the radio waves. The lack of resonance gives it a wider bandwidth than resonant antennas. It receives vertically polarized radio waves, but unlike other vertically polarized antennas it is suspended close to the ground, and requires some resistance in the ground to work. The Beverage antenna relies on "wave tilt" for its operation. 
At low and medium frequencies, a vertically polarized radio frequency electromagnetic wave traveling close to the surface of the earth with finite ground conductivity sustains a loss that causes the wavefront to "tilt over" at an angle. The electric field is not perpendicular to the ground but at an angle, producing an electric field component parallel to the Earth's surface. If a horizontal wire is suspended close to the Earth and approximately parallel to the wave's direction, the electric field generates an oscillating RF current wave traveling along the wire, propagating in the same direction as the wavefront. The RF currents traveling along the wire add in phase and amplitude throughout the length of the wire, producing maximum signal strength at the far end of the antenna where the receiver is connected. The antenna wire and the ground under it together can be thought of as a "leaky" transmission line which absorbs energy from the radio waves. The velocity of the current waves in the antenna is less than the speed of light due to the ground. The velocity of the wavefront along the wire is also less than the speed of light due to its angle. At a certain angle "θ"max the two velocities are equal. At this angle the gain of the antenna is maximum, so the radiation pattern has a main lobe at this angle. The angle of the main lobe is formula_0 where formula_1 is the length of the antenna wire and formula_2 is the wavelength. The antenna has a unidirectional reception pattern, because RF signals arriving from the other direction, from the receiver end of the wire, induce currents propagating toward the terminated end, where they are absorbed by the terminating resistor. Gain. While Beverage antennas have excellent directivity, because they are close to lossy Earth, they do not produce absolute gain; their gain is typically from −20 to −10 dBi. This is rarely a problem, because the antenna is used at frequencies where there are high levels of atmospheric radio noise. At these frequencies the atmospheric noise, and not receiver noise, determines the signal-to-noise ratio, so an inefficient antenna can be used. The weak signal from the antenna can be amplified in the receiver without introducing significant noise. The antenna is not used as a transmitting antenna since to do so would mean that a large portion of the drive power is wasted in the terminating resistor. Directivity increases with the length of the antenna. While directivity begins to develop at a length of only 0.25 wavelength, directivity becomes more significant at one wavelength and improves steadily until the antenna reaches a length of about two wavelengths. In Beverages longer than two wavelengths, directivity does not increase because the currents in the antenna cannot remain in phase with the radio wave. Implementation. A single-wire Beverage antenna is typically a single straight copper wire, between one-half and two wavelengths long, run parallel to the Earth's surface in the direction of the desired signal. The wire is suspended by insulated supports above the ground. A non-inductive resistor approximately equal to the characteristic impedance of the wire, about 400 to 600 ohms, is connected from the far end of the wire to a ground rod. The other end of the wire is connected to the feedline to the receiver. A dual-wire variant is sometimes utilized for rearward null steering or for bidirectional switching. 
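The main-lobe formula can be evaluated directly. The sketch below assumes an operating frequency of 1.8 MHz (the 160 m amateur band) and a few illustrative wire lengths, none of which come from the text; it simply shows that longer wires give the shallower main-lobe angles described above.

```python
import math

def main_lobe_angle_deg(length_m, wavelength_m):
    """Angle of the Beverage antenna's main lobe: arccos(1 - lambda / (2 L))."""
    return math.degrees(math.acos(1.0 - wavelength_m / (2.0 * length_m)))

c = 299_792_458.0            # speed of light, m/s
freq_hz = 1.8e6              # assumed operating frequency (160 m band)
wavelength = c / freq_hz     # about 167 m

for length in (170.0, 330.0, 500.0):   # roughly one-, two- and three-wavelength wires
    angle = main_lobe_angle_deg(length, wavelength)
    print(f"L = {length:5.0f} m -> main lobe at about {angle:4.1f} degrees")
```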
The antenna can also be implemented as an array of 2 to 128 or more elements in broadside, endfire, and staggered configurations, offering significantly improved directivity otherwise very difficult to attain at these frequencies. A four-element broadside/staggered Beverage array was used by AT&amp;T at their longwave telephone receiver site in Houlton, Maine. Very large phased Beverage arrays of 64 elements or more have been implemented for receiving antennas for over-the-horizon radar systems. The driving impedance of the antenna is equal to the characteristic impedance of the wire with respect to ground, somewhere between 400 and 800 ohms, depending on the height of the wire. Typically a length of 50-ohm or 75-ohm coaxial cable would be used for connecting the receiver to the antenna endpoint. A matching transformer should be inserted between any such low-impedance transmission line and the higher 470-ohm impedance of the antenna.
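For the impedance match mentioned above, a transformer's impedance ratio is the square of its turns ratio, so the required turns ratio is the square root of the impedance ratio; this relationship is standard transformer practice rather than something stated in the text. A small sketch using the 470-ohm antenna impedance quoted above:

```python
import math

def turns_ratio(z_antenna, z_line):
    """Turns ratio n of a matching transformer; impedance transforms as n**2."""
    return math.sqrt(z_antenna / z_line)

for z_line in (50.0, 75.0):
    n = turns_ratio(470.0, z_line)
    print(f"{z_line:.0f} ohm line -> 470 ohm antenna: turns ratio about {n:.2f}:1 "
          f"(impedance ratio {470.0 / z_line:.1f}:1)")
```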
[ { "math_id": 0, "text": "\\theta_\\text{max} = \\arccos \\biggl(1 - \\frac{\\lambda}{2 L} \\biggr)," }, { "math_id": 1, "text": "L" }, { "math_id": 2, "text": "\\lambda" } ]
https://en.wikipedia.org/wiki?curid=1013834
10138549
Morton number
In fluid dynamics, the Morton number (Mo) is a dimensionless number used together with the Eötvös number or Bond number to characterize the shape of bubbles or drops moving in a surrounding fluid or continuous phase, "c". It is named after Rose Morton, who described it with W. L. Haberman in 1953. Definition. The Morton number is defined as formula_0 where "g" is the acceleration of gravity, formula_1 is the viscosity of the surrounding fluid, formula_2 the density of the surrounding fluid, formula_3 the difference in density of the phases, and formula_4 is the surface tension coefficient. For the case of a bubble with a negligible inner density the Morton number can be simplified to formula_5 Relation to other parameters. The Morton number can also be expressed by using a combination of the Weber number, Froude number and Reynolds number, formula_6 The Froude number in the above expression is defined as formula_7 where "V" is a reference velocity and "d" is the equivalent diameter of the drop or bubble. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
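As a worked example, the simplified formula reproduces the familiar order of magnitude for an air bubble rising in clean water; the property values used below are common textbook figures and serve only as an illustration.

```python
def morton_number(mu_c, rho_c, sigma, g=9.81):
    """Simplified Morton number for a bubble of negligible inner density: g*mu^4 / (rho*sigma^3)."""
    return g * mu_c**4 / (rho_c * sigma**3)

# Air bubble in water at roughly 20 degrees C (approximate property values).
mo = morton_number(mu_c=1.0e-3,   # dynamic viscosity of water, Pa*s
                   rho_c=998.0,   # density of water, kg/m^3
                   sigma=0.0728)  # air-water surface tension, N/m
print(f"Mo = {mo:.2e}")           # roughly 2.5e-11
```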
[ { "math_id": 0, "text": "\\mathrm{Mo} = \\frac{g \\mu_c^4 \\, \\Delta \\rho}{\\rho_c^2 \\sigma^3}, " }, { "math_id": 1, "text": "\\mu_c" }, { "math_id": 2, "text": "\\rho_c" }, { "math_id": 3, "text": " \\Delta \\rho" }, { "math_id": 4, "text": "\\sigma" }, { "math_id": 5, "text": "\\mathrm{Mo} = \\frac{g\\mu_c^4}{\\rho_c \\sigma^3}." }, { "math_id": 6, "text": "\\mathrm{Mo} = \\frac{\\mathrm{We}^3}{\\mathrm{Fr}^2\\, \\mathrm{Re}^4}." }, { "math_id": 7, "text": "\\mathrm{Fr^2} = \\frac{V^2}{g d}" } ]
https://en.wikipedia.org/wiki?curid=10138549
1013950
Heilbronn triangle problem
On point sets with no small-area triangles &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in mathematics: What is the asymptotic growth rate of the area of the smallest triangle determined by three out of formula_0 points in a square, when the points are chosen to maximize this area? In discrete geometry and discrepancy theory, the Heilbronn triangle problem is a problem of placing points in the plane, avoiding triangles of small area. It is named after Hans Heilbronn, who conjectured that, no matter how points are placed in a given area, the smallest triangle area will be at most inversely proportional to the square of the number of points. His conjecture was proven false, but the asymptotic growth rate of the minimum triangle area remains unknown. Definition. The Heilbronn triangle problem concerns the placement of formula_0 points within a shape in the plane, such as the unit square or the unit disk, for a given number formula_0. Each triple of points form the three vertices of a triangle, and among these triangles, the problem concerns the smallest triangle, as measured by area. Different placements of points will have different smallest triangles, and the problem asks: how should formula_0 points be placed to maximize the area of the smallest triangle? More formally, the shape may be assumed to be a compact set formula_1 in the plane, meaning that it stays within a bounded distance from the origin and that points are allowed to be placed on its boundary. In most work on this problem, formula_1 is additionally a convex set of nonzero area. When three of the placed points lie on a line, they are considered as forming a degenerate triangle whose area is defined to be zero, so placements that maximize the smallest triangle will not have collinear triples of points. The assumption that the shape is compact implies that there exists an optimal placement of formula_0 points, rather than only a sequence of placements approaching optimality. The number formula_2 may be defined as the area of the smallest triangle in this optimal placement. An example is shown in the figure, with six points in a unit square. These six points form formula_3 different triangles, four of which are shaded in the figure. Six of these 20 triangles, with two of the shaded shapes, have area 1/8; the remaining 14 triangles have larger areas. This is the optimal placement of six points in a unit square: all other placements form at least one triangle with area 1/8 or smaller. Therefore, formula_4. Although researchers have studied the value of formula_2 for specific shapes and specific small numbers of points, Heilbronn was concerned instead about its asymptotic behavior: if the shape formula_1 is held fixed, but formula_0 varies, how does the area of the smallest triangle vary with formula_0? That is, Heilbronn's question concerns the growth rate of formula_2, as a function of formula_0. For any two shapes formula_1 and formula_5, the numbers formula_2 and formula_6 differ only by a constant factor, as any placement of formula_0 points within formula_1 can be scaled by an affine transformation to fit within formula_5, changing the minimum triangle area only by a constant. Therefore, in bounds on the growth rate of formula_2 that omit the constant of proportionality of that growth, the choice of formula_1 is irrelevant and the subscript may be omitted. Heilbronn's conjecture and its disproof. 
Heilbronn conjectured prior to 1951 that the minimum triangle area always shrinks rapidly as a function of formula_0—more specifically, inversely proportional to the square of formula_0. In terms of big O notation, this can be expressed as the bound formula_7 In the other direction, Paul Erdős found examples of point sets with minimum triangle area proportional to formula_8, demonstrating that, if true, Heilbronn's conjectured bound could not be strengthened. Erdős formulated the no-three-in-line problem, on large sets of grid points with no three in a line, to describe these examples. As Erdős observed, when formula_0 is a prime number, the set of formula_0 points formula_9 on an formula_10 integer grid (for formula_11) has no three collinear points, and therefore by Pick's formula each of the triangles they form has area at least formula_12. When these grid points are scaled to fit within a unit square, their smallest triangle area is proportional to formula_8, matching Heilbronn's conjectured upper bound. If formula_0 is not prime, then a similar construction using a prime number close to formula_0 achieves the same asymptotic lower bound. Komlós, Pintz, and Szemerédi eventually disproved Heilbronn's conjecture, using the probabilistic method to find sets of points whose smallest triangle area is larger than the ones found by Erdős. Their construction involves the following steps: The area resulting from their construction grows asymptotically as formula_15 The proof can be derandomized, leading to a polynomial-time algorithm for constructing placements with this triangle area. Upper bounds. Every set of formula_0 points in the unit square forms a triangle of area at most inversely proportional to formula_0. One way to see this is to triangulate the convex hull of the given point set formula_16, and choose the smallest of the triangles in the triangulation. Another is to sort the points by their formula_17-coordinates, and to choose the three consecutive points in this ordering whose formula_17-coordinates are the closest together. In the first paper published on the Heilbronn triangle problem, in 1951, Klaus Roth proved a stronger upper bound on formula_18, of the form formula_19 The best bound known to date is of the form formula_20 for some constant formula_21, proven by Komlós, Pintz, and Szemerédi. A new upper bound equal to formula_22 was proven by Cohen, Pohoata, and Zakharov. Specific shapes and numbers. Goldberg has investigated the optimal arrangements of formula_0 points in a square, for formula_0 up to 16. Goldberg's constructions for up to six points lie on the boundary of the square, and are placed to form an affine transformation of the vertices of a regular polygon. For larger values of formula_0, later authors improved Goldberg's bounds, and for these values the solutions include points interior to the square. These constructions have been proven optimal for up to seven points. The proof used a computer search to subdivide the configuration space of possible arrangements of the points into 226 different subproblems, and used nonlinear programming techniques to show that in 225 of those cases, the best arrangement was not as good as the known bound. In the remaining case, including the eventual optimal solution, its optimality was proven using symbolic computation techniques. The following are the best known solutions for 7–12 points in a unit square, found through simulated annealing; the arrangement for seven points is known to be optimal. Instead of looking for optimal placements for a given shape, one may look for an optimal shape for a given number of points. 
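Before turning to optimal shapes, the grid construction and the minimum triangle area itself are easy to check by brute force for small instances; the sketch below evaluates the scaled Erdős construction for an arbitrarily chosen small prime, where the minimum is at least 1/(2n²).

```python
from itertools import combinations

def min_triangle_area(points):
    """Smallest (possibly zero) triangle area over all triples of points."""
    def area(p, q, r):
        return abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])) / 2.0
    return min(area(p, q, r) for p, q, r in combinations(points, 3))

n = 13  # a small prime, chosen arbitrarily
erdos_points = [(i / n, (i * i % n) / n) for i in range(n)]
print("minimum triangle area:", min_triangle_area(erdos_points))
print("lower bound 1/(2 n^2):", 1.0 / (2 * n * n))
```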
Among convex shapes formula_1 with area one, the regular hexagon is the one that maximizes formula_23; for this shape, formula_24, with six points optimally placed at the hexagon vertices. The convex shapes of unit area that maximize formula_25 have formula_26. Variations. There have been many variations of this problem including the case of a uniformly random set of points, for which arguments based on either Kolmogorov complexity or Poisson approximation show that the expected value of the minimum area is inversely proportional to the cube of the number of points. Variations involving the volume of higher-dimensional simplices have also been studied. Rather than considering simplices, another higher-dimensional version adds another parameter formula_27, and asks for placements of formula_0 points in the unit hypercube that maximize the minimum volume of the convex hull of any subset of formula_27 points. For formula_28 these subsets form simplices but for larger values of formula_27, relative to formula_29, they can form more complicated shapes. When formula_27 is sufficiently large relative to formula_30, randomly placed point sets have minimum formula_27-point convex hull volume formula_31. No better bound is possible; any placement has formula_27 points with volume formula_32, obtained by choosing some formula_27 consecutive points in coordinate order. This result has applications in range searching data structures. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "D" }, { "math_id": 2, "text": "\\Delta_D(n)" }, { "math_id": 3, "text": "\\tbinom63=20" }, { "math_id": 4, "text": "\\Delta_D(6)=\\tfrac18" }, { "math_id": 5, "text": "D'" }, { "math_id": 6, "text": "\\Delta_{D'}(n)" }, { "math_id": 7, "text": "\\Delta(n)=O\\left(\\frac{1}{n^2}\\right)." }, { "math_id": 8, "text": "1/n^2" }, { "math_id": 9, "text": "(i,i^2\\bmod n)" }, { "math_id": 10, "text": "n\\times n" }, { "math_id": 11, "text": "0\\le i<n" }, { "math_id": 12, "text": "\\tfrac12" }, { "math_id": 13, "text": "n^{1+\\varepsilon}" }, { "math_id": 14, "text": "\\varepsilon>0" }, { "math_id": 15, "text": "\\Delta(n)=\\Omega\\left(\\frac{\\log n}{n^2}\\right)." }, { "math_id": 16, "text": "S" }, { "math_id": 17, "text": "x" }, { "math_id": 18, "text": "\\Delta(n)" }, { "math_id": 19, "text": "\\Delta(n)=O\\left(\\frac{1}{n\\sqrt{\\log\\log n}}\\right)." }, { "math_id": 20, "text": "\\Delta(n)\\leq\\frac{\\exp{\\left(c\\sqrt{\\log n}\\right)}}{n^{8/7}}," }, { "math_id": 21, "text": "c" }, { "math_id": 22, "text": "n^{-\\frac{8}{7}-\\frac{1}{2000}}" }, { "math_id": 23, "text": "\\Delta_D(6)" }, { "math_id": 24, "text": "\\Delta_D(6)=\\tfrac16" }, { "math_id": 25, "text": "\\Delta_D(7)" }, { "math_id": 26, "text": "\\Delta_D(7)=\\tfrac19" }, { "math_id": 27, "text": "k" }, { "math_id": 28, "text": "k=d+1" }, { "math_id": 29, "text": "d" }, { "math_id": 30, "text": "\\log n" }, { "math_id": 31, "text": "\\Omega(k/n)" }, { "math_id": 32, "text": "O(k/n)" } ]
https://en.wikipedia.org/wiki?curid=1013950
10144353
Fraser filter
A Fraser filter, named after Douglas Fraser, is typically used in geophysics when displaying VLF data. It is effectively the first derivative of the data. If formula_0 represents the collected data, then formula_1 is the average of the first two values. Consider this value to be plotted between point 1 and point 2, and do the same with points 3 and 4: formula_2 If formula_3 represents the space between each station along the line then formula_4 is the Fraser filter of those four values. Since formula_5 is constant, it can be ignored and the Fraser filter considered to be formula_6.
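The filter, in the (f1 + f2) − (f3 + f4) form given above, can be applied to a profile of readings as in the following sketch; the sample data are invented for illustration.

```python
import numpy as np

def fraser_filter(f):
    """Fraser filter of a 1-D profile: output[i] = (f[i] + f[i+1]) - (f[i+2] + f[i+3]).
    The constant divisor 4*dx is omitted, as in the text."""
    f = np.asarray(f, dtype=float)
    return (f[:-3] + f[1:-2]) - (f[2:-1] + f[3:])

# Invented tilt-angle readings along a survey line, with a zero crossover in the middle.
data = [0, 2, 5, 10, 4, -4, -10, -5, -2, 0]
print(fraser_filter(data))   # [-13.  -7.  15.  28.  15.  -7. -13.]
# The crossover in the raw profile appears as the peak (28) in the filtered output,
# which is why the filtered data are easier to contour and display.
```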
[ { "math_id": 0, "text": "f(i) = f_i" }, { "math_id": 1, "text": "average_{12}=\\frac{f_1 + f_2}{2}" }, { "math_id": 2, "text": "average_{34}=\\frac{f_3 + f_4}{2}" }, { "math_id": 3, "text": "\\Delta x" }, { "math_id": 4, "text": "\\frac{average_{12}-average_{34}}{2 \\Delta x}=\\frac{(f_1 + f_2)-(f_3 + f_4)}{4 \\Delta x}" }, { "math_id": 5, "text": "4 \\Delta x" }, { "math_id": 6, "text": "(f_1 + f_2)-(f_3 + f_4)" } ]
https://en.wikipedia.org/wiki?curid=10144353
10144855
Kneser's theorem (differential equations)
Mathematical theorem In mathematics, the Kneser theorem can refer to two distinct theorems in the field of ordinary differential equations: Statement of the theorem due to A. Kneser. Consider an ordinary linear homogeneous differential equation of the form formula_0 with formula_1 continuous. We say this equation is "oscillating" if it has a solution "y" with infinitely many zeros, and "non-oscillating" otherwise. The theorem states that the equation is non-oscillating if formula_2 and oscillating if formula_3 Example. To illustrate the theorem, consider formula_4 where formula_5 is real and non-zero. According to the theorem, solutions will be oscillating or not depending on whether formula_5 is positive (non-oscillating) or negative (oscillating) because formula_6 To find the solutions for this choice of formula_7, and verify the theorem for this example, substitute the 'Ansatz' formula_8 which gives formula_9 This means that (for non-zero formula_5) the general solution is formula_10 where formula_11 and formula_12 are arbitrary constants. It is not hard to see that for positive formula_5 the solutions do not oscillate while for negative formula_13 the identity formula_14 shows that they do. The general result follows from this example by the Sturm–Picone comparison theorem. Extensions. There are many extensions to this result, such as the Gesztesy–Ünal criterion. Statement of the theorem due to H. Kneser. While Peano's existence theorem guarantees the existence of solutions of certain initial value problems with continuous right-hand side, H. Kneser's theorem deals with the topology of the set of those solutions. Precisely, H. Kneser's theorem states the following: Let formula_15 be a continuous function on the region formula_16 such that formula_17 for all formula_18. Given a real number formula_19 satisfying formula_20, define the set formula_21 as the set of points formula_22 for which there is a solution formula_23 of formula_24 such that formula_25 and formula_26. Then formula_21 is a closed and connected set. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
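The dichotomy in A. Kneser's example can be seen numerically from the closed-form general solution, by sampling one solution for a positive and one for a negative value of formula_5 and counting sign changes; the particular values and the interval used below are arbitrary.

```python
import numpy as np

x = np.linspace(1.0, 200.0, 200_000)

# Non-oscillating case, a > 0: y = x**(1/2 + sqrt(a)), here with a = 1/4.
y_pos = x ** (0.5 + np.sqrt(0.25))

# Oscillating case, a = -omega**2 < 0: y = sqrt(x) * cos(omega * ln x), here omega = 1.
y_neg = np.sqrt(x) * np.cos(np.log(x))

def sign_changes(y):
    return int(np.sum(np.diff(np.sign(y)) != 0))

print("a = +1/4:", sign_changes(y_pos), "zeros on [1, 200]")   # 0
print("a = -1  :", sign_changes(y_neg), "zeros on [1, 200]")   # 2 here, infinitely many as x grows
```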
[ { "math_id": 0, "text": "y'' + q(x)y = 0" }, { "math_id": 1, "text": "q: [0,+\\infty) \\to \\mathbb{R}" }, { "math_id": 2, "text": "\\limsup_{x \\to +\\infty} x^2 q(x) < \\tfrac{1}{4}" }, { "math_id": 3, "text": "\\liminf_{x \\to +\\infty} x^2 q(x) > \\tfrac{1}{4}." }, { "math_id": 4, "text": "q(x) = \\left(\\frac{1}{4} - a\\right) x^{-2} \\quad\\text{for}\\quad x > 0" }, { "math_id": 5, "text": "a" }, { "math_id": 6, "text": "\\limsup_{x \\to +\\infty} x^2 q(x) = \\liminf_{x \\to +\\infty} x^2 q(x) = \\frac{1}{4} - a" }, { "math_id": 7, "text": "q(x)" }, { "math_id": 8, "text": "y(x) = x^n " }, { "math_id": 9, "text": "n(n-1) + \\frac{1}{4} - a = \\left(n-\\frac{1}{2}\\right)^2 - a = 0" }, { "math_id": 10, "text": "y(x) = A x^{\\frac{1}{2} + \\sqrt{a}} + B x^{\\frac{1}{2} - \\sqrt{a}}" }, { "math_id": 11, "text": "A" }, { "math_id": 12, "text": "B" }, { "math_id": 13, "text": "a = -\\omega^2" }, { "math_id": 14, "text": "x^{\\frac{1}{2} \\pm i \\omega} = \\sqrt{x}\\ e^{\\pm (i\\omega) \\ln{x}} = \\sqrt{x}\\ (\\cos{(\\omega \\ln x)} \\pm i \\sin{(\\omega \\ln x)})" }, { "math_id": 15, "text": "f\\colon \\R\\times \\R^n \\rightarrow \\R^n" }, { "math_id": 16, "text": "\\mathcal{R}:=[t_0, t_0+a] \\times \\{x \\in \\mathbb{R}^n: \\Vert x-x_0\\Vert \\le b\\}" }, { "math_id": 17, "text": "|f(t, x)| \\le M" }, { "math_id": 18, "text": "(t,x) \\in \\mathcal{R}" }, { "math_id": 19, "text": "c" }, { "math_id": 20, "text": "t_0<c \\le t_0+\\min(a, b/M)" }, { "math_id": 21, "text": "S_c" }, { "math_id": 22, "text": "x_c" }, { "math_id": 23, "text": "x = x(t)" }, { "math_id": 24, "text": "\\dot{x} = f(t, x)" }, { "math_id": 25, "text": "x(t_0)=x_0" }, { "math_id": 26, "text": "x(c) = x_c" } ]
https://en.wikipedia.org/wiki?curid=10144855
10144971
Oscillation theory
In mathematics, in the field of ordinary differential equations, a nontrivial solution to an ordinary differential equation formula_0 is called oscillating if it has an infinite number of roots; otherwise it is called non-oscillating. The differential equation is called oscillating if it has an oscillating solution. The number of roots carries also information on the spectrum of associated boundary value problems. Examples. The differential equation formula_1 is oscillating as sin("x") is a solution. Connection with spectral theory. Oscillation theory was initiated by Jacques Charles François Sturm in his investigations of Sturm–Liouville problems from 1836. There he showed that the n'th eigenfunction of a Sturm–Liouville problem has precisely n-1 roots. For the one-dimensional Schrödinger equation the question about oscillation/non-oscillation answers the question whether the eigenvalues accumulate at the bottom of the continuous spectrum. Relative oscillation theory. In 1996 Gesztesy–Simon–Teschl showed that the number of roots of the Wronski determinant of two eigenfunctions of a Sturm–Liouville problem gives the number of eigenvalues between the corresponding eigenvalues. It was later on generalized by Krüger–Teschl to the case of two eigenfunctions of two different Sturm–Liouville problems. The investigation of the number of roots of the Wronski determinant of two solutions is known as relative oscillation theory. See also. Classical results in oscillation theory are:
[ { "math_id": 0, "text": "F(x,y,y',\\ \\dots,\\ y^{(n-1)})=y^{(n)} \\quad x \\in [0,+\\infty)" }, { "math_id": 1, "text": "y'' + y = 0" } ]
https://en.wikipedia.org/wiki?curid=10144971
101453
Dirichlet's theorem on arithmetic progressions
Theorem on the number of primes in arithmetic sequences In number theory, Dirichlet's theorem, also called the Dirichlet prime number theorem, states that for any two positive coprime integers "a" and "d", there are infinitely many primes of the form "a" + "nd", where "n" is also a positive integer. In other words, there are infinitely many primes that are congruent to "a" modulo "d". The numbers of the form "a" + "nd" form an arithmetic progression formula_0 and Dirichlet's theorem states that this sequence contains infinitely many prime numbers. The theorem, named after Peter Gustav Lejeune Dirichlet, extends Euclid's theorem that there are infinitely many prime numbers. Stronger forms of Dirichlet's theorem state that for any such arithmetic progression, the sum of the reciprocals of the prime numbers in the progression diverges and that different such arithmetic progressions with the same modulus have approximately the same proportions of primes. Equivalently, the primes are evenly distributed (asymptotically) among the congruence classes modulo "d" containing "a"'s coprime to "d". Examples. The primes of the form 4"n" + 3 are (sequence in the OEIS) 3, 7, 11, 19, 23, 31, 43, 47, 59, 67, 71, 79, 83, 103, 107, 127, 131, 139, 151, 163, 167, 179, 191, 199, 211, 223, 227, 239, 251, 263, 271, 283, ... They correspond to the following values of "n": (sequence in the OEIS) 0, 1, 2, 4, 5, 7, 10, 11, 14, 16, 17, 19, 20, 25, 26, 31, 32, 34, 37, 40, 41, 44, 47, 49, 52, 55, 56, 59, 62, 65, 67, 70, 76, 77, 82, 86, 89, 91, 94, 95, ... The strong form of Dirichlet's theorem implies that formula_1 is a divergent series. Sequences "dn" + a with odd "d" are often ignored because half the numbers are even and the other half is the same numbers as a sequence with 2"d", if we start with "n" = 0. For example, 6"n" + 1 produces the same primes as 3"n" + 1, while 6"n" + 5 produces the same as 3"n" + 2 except for the only even prime 2. The following table lists several arithmetic progressions with infinitely many primes and the first few ones in each of them. Distribution. Since the primes thin out, on average, in accordance with the prime number theorem, the same must be true for the primes in arithmetic progressions. It is natural to ask about the way the primes are shared between the various arithmetic progressions for a given value of "d" (there are "d" of those, essentially, if we do not distinguish two progressions sharing almost all their terms). The answer is given in this form: the number of feasible progressions "modulo" "d" — those where "a" and "d" do not have a common factor &gt; 1 — is given by Euler's totient function formula_2 Further, the proportion of primes in each of those is formula_3 For example, if "d" is a prime number "q", each of the "q" − 1 progressions formula_4 formula_5 formula_6 formula_7 contains a proportion 1/("q" − 1) of the primes. When compared to each other, progressions with a quadratic nonresidue remainder have typically slightly more elements than those with a quadratic residue remainder (Chebyshev's bias). History. In 1737, Euler related the study of prime numbers to what is known now as the Riemann zeta function: he showed that the value formula_9 reduces to a ratio of two infinite products, Π "p" / Π ("p"–1), for all primes "p", and that the ratio is infinite. In 1775, Euler stated the theorem for the cases of a + nd, where a = 1. This special case of Dirichlet's theorem can be proven using cyclotomic polynomials. 
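The equidistribution described in the Distribution section can be checked empirically by counting primes in each feasible residue class; the sketch below does this for an arbitrarily chosen modulus and bound, and the observed shares come out close to 1/φ("d").

```python
from math import gcd

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, is_p in enumerate(sieve) if is_p]

d, limit = 12, 10**6                                   # arbitrary choices
counts = {a: 0 for a in range(d) if gcd(a, d) == 1}    # the phi(d) feasible classes
for p in primes_up_to(limit):
    if p % d in counts:
        counts[p % d] += 1

total = sum(counts.values())
for a, c in counts.items():
    print(f"p = {a} (mod {d}): {c} primes  (share {c / total:.3f}, expected {1 / len(counts):.3f})")
```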
The general form of the theorem was first conjectured by Legendre in his unsuccessful attempts to prove quadratic reciprocity (as Gauss noted in his "Disquisitiones Arithmeticae"), but it was proved by Dirichlet (1837) with Dirichlet "L"-series. The proof is modeled on Euler's earlier work relating the Riemann zeta function to the distribution of primes. The theorem represents the beginning of rigorous analytic number theory. Atle Selberg (1949) gave an elementary proof. Proof. Dirichlet's theorem is proved by showing that the value of the Dirichlet L-function (of a non-trivial character) at 1 is nonzero. The proof of this statement requires some calculus and analytic number theory. The particular case "a" = 1 (i.e., concerning the primes that are congruent to 1 modulo some "n") can be proven by analyzing the splitting behavior of primes in cyclotomic extensions, without making use of calculus. Generalizations. The Bunyakovsky conjecture generalizes Dirichlet's theorem to higher-degree polynomials. Whether or not even simple quadratic polynomials such as "x"2 + 1 (known from Landau's fourth problem) attain infinitely many prime values is an important open problem. Dickson's conjecture generalizes Dirichlet's theorem to more than one linear polynomial. Schinzel's hypothesis H generalizes these two conjectures: it allows more than one polynomial, each possibly of degree larger than one. In algebraic number theory, Dirichlet's theorem generalizes to Chebotarev's density theorem. Linnik's theorem (1944) concerns the size of the smallest prime in a given arithmetic progression. Linnik proved that the progression "a" + "nd" (as "n" ranges through the positive integers) contains a prime of magnitude at most "cd""L" (that is, "c" times "d" raised to the power "L") for absolute constants "c" and "L". Subsequent researchers have reduced "L" to 5. An analogue of Dirichlet's theorem holds in the framework of dynamical systems (T. Sunada and A. Katsuda, 1990). Shiu showed that any arithmetic progression satisfying the hypothesis of Dirichlet's theorem will in fact contain arbitrarily long runs of "consecutive" prime numbers. Notes.
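To make the equidistribution statement above concrete, the short sketch below (an illustration added here, not part of the original article; the modulus, the bound and all function names are arbitrary choices) counts the primes in each feasible residue class modulo a chosen "d" and compares the observed proportions with 1/φ("d").

from math import gcd

def primes_up_to(limit):
    """Simple sieve of Eratosthenes; returns all primes <= limit."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [n for n, is_prime in enumerate(sieve) if is_prime]

def residue_class_counts(d, limit):
    """Count primes <= limit in each class a (mod d) with gcd(a, d) = 1."""
    feasible = [a for a in range(1, d) if gcd(a, d) == 1]
    counts = {a: 0 for a in feasible}
    for p in primes_up_to(limit):
        r = p % d
        if r in counts:          # skips the finitely many primes dividing d
            counts[r] += 1
    return counts

if __name__ == "__main__":
    d, limit = 4, 1_000_000
    counts = residue_class_counts(d, limit)
    total = sum(counts.values())
    for a, c in sorted(counts.items()):
        # The strong form of Dirichlet's theorem says each proportion tends to 1/phi(d).
        print(f"primes = {a} (mod {d}): {c}  proportion = {c / total:.4f}")

For "d" = 4 the two feasible classes 1 and 3 each receive close to half of the primes, with the small excess in the class 3 reflecting Chebyshev's bias mentioned above.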
[ { "math_id": 0, "text": "a,\\ a+d,\\ a+2d,\\ a+3d,\\ \\dots,\\ " }, { "math_id": 1, "text": "\\frac{1}{3}+\\frac{1}{7}+\\frac{1}{11}+\\frac{1}{19}+\\frac{1}{23}+\\frac{1}{31}+\\frac{1}{43}+\\frac{1}{47}+\\frac{1}{59}+\\frac{1}{67}+\\cdots" }, { "math_id": 2, "text": "\\varphi(d).\\ " }, { "math_id": 3, "text": "\\frac {1}{\\varphi(d)}.\\ " }, { "math_id": 4, "text": "q+1, 2 q+1, 3 q+1,\\dots\\ " }, { "math_id": 5, "text": "q+2, 2 q+2, 3 q+2,\\dots\\ " }, { "math_id": 6, "text": "\\dots\\ " }, { "math_id": 7, "text": "q+q-1, 2 q+q-1, 3 q+q-1,\\dots\\ " }, { "math_id": 8, "text": "q, 2q, 3q, \\dots\\ " }, { "math_id": 9, "text": "\\zeta(1)" } ]
https://en.wikipedia.org/wiki?curid=101453
1014534
Projectively extended real line
Real numbers with an added point at infinity In real analysis, the projectively extended real line (also called the one-point compactification of the real line) is the extension of the set of the real numbers, formula_0, by a point denoted ∞. It is thus the set formula_1 with the standard arithmetic operations extended where possible, and is sometimes denoted by formula_2 or formula_3 The added point is called the point at infinity, because it is considered as a neighbour of both ends of the real line. More precisely, the point at infinity is the limit of every sequence of real numbers whose absolute values are increasing and unbounded. The projectively extended real line may be identified with a real projective line in which three points have been assigned the specific values 0, 1 and ∞. The projectively extended real number line is distinct from the affinely extended real number line, in which +∞ and −∞ are distinct. Dividing by zero. Unlike most mathematical models of numbers, this structure allows division by zero: formula_4 for nonzero "a". In particular, 1 / 0 = ∞ and 1 / ∞ = 0, making the reciprocal function 1 / "x" a total function in this structure. The structure, however, is not a field, and none of the binary arithmetic operations are total – for example, 0 ⋅ ∞ is undefined, even though the reciprocal is total. It has usable interpretations, however – for example, in geometry, the slope of a vertical line is ∞. Extensions of the real line. The projectively extended real line extends the field of real numbers in the same way that the Riemann sphere extends the field of complex numbers, by adding a single point called conventionally ∞. In contrast, the affinely extended real number line (also called the two-point compactification of the real line) distinguishes between +∞ and −∞. Order. The order relation cannot be extended to formula_5 in a meaningful way. Given a number "a" ≠ ∞, there is no convincing argument to define either "a" > ∞ or "a" < ∞. Since ∞ cannot be compared with any of the other elements, there is no point in retaining this relation on formula_5. However, the order on formula_0 is used in definitions in formula_5. Geometry. Fundamental to the idea that ∞ is a point "no different from any other" is the way the real projective line is a homogeneous space, in fact homeomorphic to a circle. For example, the general linear group of 2 × 2 real invertible matrices has a transitive action on it. The group action may be expressed by Möbius transformations (also called linear fractional transformations), with the understanding that when the denominator of the linear fractional transformation is 0, the image is ∞. The detailed analysis of the action shows that for any three distinct points "P", "Q" and "R", there is a linear fractional transformation taking "P" to 0, "Q" to 1, and "R" to ∞; that is, the group of linear fractional transformations is triply transitive on the real projective line. This cannot be extended to 4-tuples of points, because the cross-ratio is invariant. The terminology projective line is appropriate, because the points are in 1-to-1 correspondence with one-dimensional linear subspaces of formula_6. Arithmetic operations. Motivation for arithmetic operations. The arithmetic operations on this space are an extension of the same operations on reals. A motivation for the new definitions is the limits of functions of real numbers. Arithmetic operations that are defined.
In addition to the standard operations on the subset formula_0 of formula_5, the following operations are defined for formula_7, with exceptions as indicated: formula_8 Arithmetic operations that are left undefined. The following expressions cannot be motivated by considering limits of real functions, and no definition of them allows the statement of the standard algebraic properties to be retained unchanged in form for all defined cases. Consequently, they are left undefined: formula_9 The exponential function formula_10 cannot be extended to formula_5. Algebraic properties. The following equalities mean: "Either both sides are undefined, or both sides are defined and equal." This is true for any formula_11 formula_12 The following is true whenever expressions involved are defined, for any formula_11 formula_13 In general, all laws of arithmetic that are valid for formula_0 are also valid for formula_5 whenever all the occurring expressions are defined. Intervals and topology. The concept of an interval can be extended to formula_5. However, since it is not an ordered set, the interval has a slightly different meaning. The definitions for closed intervals are as follows (it is assumed that formula_14): formula_15 With the exception of when the end-points are equal, the corresponding open and half-open intervals are defined by removing the respective endpoints. This redefinition is useful in interval arithmetic when dividing by an interval containing 0. formula_5 and the empty set are also intervals, as is formula_5 excluding any single point. The open intervals as a base define a topology on formula_5. Sufficient for a base are the bounded open intervals in formula_0 and the intervals formula_17 for all formula_18 such that formula_19 As said, the topology is homeomorphic to a circle. Thus it is metrizable corresponding (for a given homeomorphism) to the ordinary metric on this circle (either measured straight or along the circle). There is no metric which is an extension of the ordinary metric on formula_20 Interval arithmetic. Interval arithmetic extends to formula_5 from formula_0. The result of an arithmetic operation on intervals is always an interval, except when the intervals with a binary operation contain incompatible values leading to an undefined result. In particular, we have, for every formula_16: formula_21 irrespective of whether either interval includes 0 and ∞. Calculus. The tools of calculus can be used to analyze functions of formula_5. The definitions are motivated by the topology of this space. Neighbourhoods. Let formula_22 and formula_23. Limits. Basic definitions of limits. Let formula_29 formula_30 and formula_31. The limit of "f"&amp;hairsp;("x") as "x" approaches "p" is "L", denoted formula_32 if and only if for every neighbourhood "A" of "L", there is a punctured neighbourhood "B" of "p", such that formula_33 implies formula_34. The one-sided limit of "f"&amp;hairsp;("x") as "x" approaches "p" from the right (left) is "L", denoted formula_35 if and only if for every neighbourhood "A" of "L", there is a right-sided (left-sided) punctured neighbourhood "B" of "p", such that formula_33 implies formula_36 It can be shown that formula_32 if and only if both formula_37 and formula_38. Comparison with limits in formula_0. The definitions given above can be compared with the usual definitions of limits of real functions. 
In the following statements, formula_39 the first limit is as defined above, and the second limit is in the usual sense: Extended definition of limits. Let formula_23. Then "p" is a limit point of "A" if and only if every neighbourhood of "p" includes a point formula_50 such that formula_51 Let formula_52, "p" a limit point of "A". The limit of "f"&amp;hairsp;("x") as "x" approaches "p" through "A" is "L", if and only if for every neighbourhood "B" of "L", there is a punctured neighbourhood "C" of "p", such that formula_53 implies formula_54 This corresponds to the regular topological definition of continuity, applied to the subspace topology on formula_55 and the restriction of "f" to formula_56 Continuity. The function formula_57 is continuous at "p" if and only if "f" is defined at "p" and formula_58 If formula_59 the function formula_60 is continuous in "A" if and only if, for every formula_61, "f" is defined at "p" and the limit of formula_62 as "x" tends to "p" through "A" is formula_63 Every rational function "P"("x")/"Q"("x"), where "P" and "Q" are polynomials, can be prolongated, in a unique way, to a function from formula_5 to formula_5 that is continuous in formula_3 In particular, this is the case of polynomial functions, which take the value formula_64 at formula_65 if they are not constant. Also, if the tangent function formula_66 is extended so that formula_67 then formula_66 is continuous in formula_68 but cannot be prolongated further to a function that is continuous in formula_3 Many elementary functions that are continuous in formula_69 cannot be prolongated to functions that are continuous in formula_70 This is the case, for example, of the exponential function and all trigonometric functions. For example, the sine function is continuous in formula_68 but it cannot be made continuous at formula_71 As seen above, the tangent function can be prolongated to a function that is continuous in formula_68 but this function cannot be made continuous at formula_71 Many discontinuous functions that become continuous when the codomain is extended to formula_5 remain discontinuous if the codomain is extended to the affinely extended real number system formula_72 This is the case of the function formula_73 On the other hand, some functions that are continuous in formula_69 and discontinuous at formula_74 become continuous if the domain is extended to formula_72 This is the case for the arctangent. As a projective range. When the real projective line is considered in the context of the real projective plane, then the consequences of Desargues' theorem are implicit. In particular, the construction of the projective harmonic conjugate relation between points is part of the structure of the real projective line. For instance, given any pair of points, the point at infinity is the projective harmonic conjugate of their midpoint. As projectivities preserve the harmonic relation, they form the automorphisms of the real projective line. The projectivities are described algebraically as homographies, since the real numbers form a ring, according to the general construction of a projective line over a ring. Collectively they form the group PGL(2, R). The projectivities which are their own inverses are called involutions. A hyperbolic involution has two fixed points. Two of these correspond to elementary, arithmetic operations on the real projective line: negation and reciprocation. Indeed, 0 and ∞ are fixed under negation, while 1 and −1 are fixed under reciprocation. Notes. 
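As a concrete companion to the arithmetic rules listed above, here is a small sketch (added for illustration; the sentinel value and function names are arbitrary choices, and subtraction would follow the same pattern as addition). Division by zero yields ∞, the reciprocal is total, and the combinations listed as undefined raise an error.

INF = "∞"   # sentinel for the single, unsigned point at infinity

def p_add(a, b):
    """a + b in R ∪ {∞}; ∞ + ∞ is undefined."""
    if a is INF and b is INF:
        raise ArithmeticError("∞ + ∞ is undefined")
    return INF if (a is INF or b is INF) else a + b

def p_mul(a, b):
    """a · b; 0 · ∞ and ∞ · 0 are undefined."""
    if (a is INF and b == 0) or (a == 0 and b is INF):
        raise ArithmeticError("0 · ∞ is undefined")
    return INF if (a is INF or b is INF) else a * b

def p_div(a, b):
    """a / b; ∞ / ∞ and 0 / 0 are undefined, a / 0 = ∞ and a / ∞ = 0 otherwise."""
    if a is INF and b is INF:
        raise ArithmeticError("∞ / ∞ is undefined")
    if (a is not INF) and (b is not INF) and a == 0 and b == 0:
        raise ArithmeticError("0 / 0 is undefined")
    if b is INF:
        return 0.0
    if b == 0 or a is INF:
        return INF
    return a / b

def reciprocal(x):
    """1 / x is total: 1 / 0 = ∞ and 1 / ∞ = 0."""
    return p_div(1.0, x)

print(p_div(1.0, 0.0))    # ∞   (the slope of a vertical line)
print(reciprocal(INF))    # 0.0
try:
    p_mul(0.0, INF)
except ArithmeticError as err:
    print(err)            # 0 · ∞ is undefined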
References.
[ { "math_id": 0, "text": "\\mathbb{R}" }, { "math_id": 1, "text": "\\mathbb{R}\\cup\\{\\infty\\}" }, { "math_id": 2, "text": "\\mathbb{R}^*" }, { "math_id": 3, "text": "\\widehat{\\mathbb{R}}." }, { "math_id": 4, "text": "\\frac{a}{0} = \\infty" }, { "math_id": 5, "text": "\\widehat{\\mathbb{R}}" }, { "math_id": 6, "text": "\\mathbb{R}^2" }, { "math_id": 7, "text": "a \\in \\widehat{\\mathbb{R}}" }, { "math_id": 8, "text": "\\begin{align}\na + \\infty = \\infty + a & = \\infty, & a \\neq \\infty \\\\\na - \\infty = \\infty - a & = \\infty, & a \\neq \\infty \\\\\na / \\infty = a \\cdot 0 = 0 \\cdot a & = 0, & a \\neq \\infty \\\\\n\\infty / a & = \\infty, & a \\neq \\infty \\\\\na / 0 = a \\cdot \\infty = \\infty \\cdot a & = \\infty, & a \\neq 0 \\\\\n0 / a & = 0, & a \\neq 0\n\\end{align}" }, { "math_id": 9, "text": "\\begin{align}\n& \\infty + \\infty \\\\\n& \\infty - \\infty \\\\\n& \\infty \\cdot 0 \\\\\n& 0 \\cdot \\infty \\\\\n& \\infty / \\infty \\\\\n& 0 / 0\n\\end{align}" }, { "math_id": 10, "text": "e^x" }, { "math_id": 11, "text": "a, b, c \\in \\widehat{\\mathbb{R}}." }, { "math_id": 12, "text": "\\begin{align}\n(a + b) + c & = a + (b + c) \\\\\na + b & = b + a \\\\\n(a \\cdot b) \\cdot c & = a \\cdot (b \\cdot c) \\\\\na \\cdot b & = b \\cdot a \\\\\na \\cdot \\infty & = \\frac{a}{0} \\\\\n\\end{align}" }, { "math_id": 13, "text": "\n\\begin{align}\na \\cdot (b + c) & = a \\cdot b + a \\cdot c \\\\\na & = \\left(\\frac{a}{b}\\right) \\cdot b & = \\,\\,& \\frac{(a \\cdot b)}{b} \\\\\na & = (a + b) - b & = \\,\\,& (a - b) + b\n\\end{align}\n" }, { "math_id": 14, "text": "a, b \\in \\mathbb{R}, a < b" }, { "math_id": 15, "text": "\\begin{align}\n\\left[a, b\\right] & = \\lbrace x \\mid x \\in \\mathbb{R}, a \\leq x \\leq b \\rbrace \\\\\n\\left[a, \\infty\\right] & = \\lbrace x \\mid x \\in \\mathbb{R}, a \\leq x \\rbrace \\cup \\lbrace \\infty \\rbrace \\\\\n\\left[b, a\\right] & = \\lbrace x \\mid x \\in \\mathbb{R}, b \\leq x \\rbrace \\cup \\lbrace \\infty \\rbrace \\cup \\lbrace x \\mid x \\in \\mathbb{R}, x \\leq a \\rbrace \\\\\n\\left[\\infty, a\\right] & = \\lbrace \\infty \\rbrace \\cup \\lbrace x \\mid x \\in \\mathbb{R}, x \\leq a \\rbrace \\\\\n\\left[a, a\\right] & = \\{ a \\} \\\\\n\\left[\\infty, \\infty\\right] & = \\lbrace \\infty \\rbrace \n\\end{align}" }, { "math_id": 16, "text": "a, b \\in \\widehat{\\mathbb{R}}" }, { "math_id": 17, "text": "(b, a) = \\{x \\mid x \\in \\mathbb{R}, b < x\\} \\cup \\{\\infty\\} \\cup \\{x \\mid x \\in \\mathbb{R}, x < a\\}" }, { "math_id": 18, "text": "a, b \\in \\mathbb{R}" }, { "math_id": 19, "text": "a < b." }, { "math_id": 20, "text": "\\mathbb{R}." 
}, { "math_id": 21, "text": "x \\in [a, b] \\iff \\frac{1}{x} \\in \\left[ \\frac{1}{b}, \\frac{1}{a} \\right] \\!," }, { "math_id": 22, "text": "x \\in \\widehat{\\mathbb{R}}" }, { "math_id": 23, "text": "A \\subseteq \\widehat{\\mathbb{R}}" }, { "math_id": 24, "text": "y \\neq x " }, { "math_id": 25, "text": "[x, y)" }, { "math_id": 26, "text": "(y, x]" }, { "math_id": 27, "text": "x\\not\\in A," }, { "math_id": 28, "text": "A\\cup\\{x\\}" }, { "math_id": 29, "text": "f : \\widehat{\\mathbb{R}} \\to \\widehat{\\mathbb{R}}," }, { "math_id": 30, "text": "p \\in \\widehat{\\mathbb{R}}," }, { "math_id": 31, "text": "L \\in \\widehat{\\mathbb{R}}" }, { "math_id": 32, "text": "\\lim_{x \\to p}{f(x)} = L" }, { "math_id": 33, "text": "x \\in B" }, { "math_id": 34, "text": "f(x) \\in A" }, { "math_id": 35, "text": "\\lim_{x \\to p^{+}}{f(x)} = L \\qquad \\left( \\lim_{x \\to p^{-}}{f(x)} = L \\right)," }, { "math_id": 36, "text": "f(x) \\in A." }, { "math_id": 37, "text": "\\lim_{x \\to p^+}{f(x)} = L" }, { "math_id": 38, "text": "\\lim_{x \\to p^-}{f(x)} = L" }, { "math_id": 39, "text": "p, L \\in \\mathbb{R}," }, { "math_id": 40, "text": "\\lim_{x \\to \\infty^{+}}{f(x)} = L" }, { "math_id": 41, "text": "\\lim_{x \\to -\\infty}{f(x)} = L" }, { "math_id": 42, "text": "\\lim_{x \\to \\infty^{-}}{f(x)} = L" }, { "math_id": 43, "text": "\\lim_{x \\to +\\infty}{f(x)} = L" }, { "math_id": 44, "text": "\\lim_{x \\to p}{f(x)} = \\infty" }, { "math_id": 45, "text": "\\lim_{x \\to p}{|f(x)|} = +\\infty" }, { "math_id": 46, "text": "\\lim_{x \\to \\infty^{+}}{f(x)} = \\infty" }, { "math_id": 47, "text": "\\lim_{x \\to -\\infty}{|f(x)|} = +\\infty" }, { "math_id": 48, "text": "\\lim_{x \\to \\infty^{-}}{f(x)} = \\infty" }, { "math_id": 49, "text": "\\lim_{x \\to +\\infty}{|f(x)|} = +\\infty" }, { "math_id": 50, "text": "y \\in A" }, { "math_id": 51, "text": "y \\neq p." }, { "math_id": 52, "text": "f : \\widehat{\\mathbb{R}} \\to \\widehat{\\mathbb{R}}, A \\subseteq \\widehat{\\mathbb{R}}, L \\in \\widehat{\\mathbb{R}}, p \\in \\widehat{\\mathbb{R}}" }, { "math_id": 53, "text": "x \\in A \\cap C" }, { "math_id": 54, "text": "f(x) \\in B." }, { "math_id": 55, "text": "A\\cup \\lbrace p \\rbrace," }, { "math_id": 56, "text": "A \\cup \\lbrace p \\rbrace." }, { "math_id": 57, "text": "f : \\widehat{\\mathbb{R}} \\to \\widehat{\\mathbb{R}},\\quad p \\in \\widehat{\\mathbb{R}}." }, { "math_id": 58, "text": "\\lim_{x \\to p}{f(x)} = f(p)." }, { "math_id": 59, "text": "A \\subseteq \\widehat\\mathbb R," }, { "math_id": 60, "text": "f : A \\to \\widehat{\\mathbb{R}}" }, { "math_id": 61, "text": "p \\in A" }, { "math_id": 62, "text": "f(x)" }, { "math_id": 63, "text": "f(p)." }, { "math_id": 64, "text": "\\infty" }, { "math_id": 65, "text": "\\infty," }, { "math_id": 66, "text": "\\tan" }, { "math_id": 67, "text": "\\tan\\left(\\frac{\\pi}{2} + n\\pi\\right) = \\infty\\text{ for }n \\in \\mathbb{Z}," }, { "math_id": 68, "text": "\\mathbb{R}," }, { "math_id": 69, "text": "\\mathbb R" }, { "math_id": 70, "text": "\\widehat\\mathbb{R}." }, { "math_id": 71, "text": "\\infty." }, { "math_id": 72, "text": "\\overline{\\mathbb{R}}." }, { "math_id": 73, "text": "x\\mapsto \\frac 1x." }, { "math_id": 74, "text": "\\infty \\in \\widehat{\\mathbb{R}}" } ]
https://en.wikipedia.org/wiki?curid=1014534
10145406
Difference-map algorithm
The difference-map algorithm is a search algorithm for general constraint satisfaction problems. It is a meta-algorithm in the sense that it is built from more basic algorithms that perform projections onto constraint sets. From a mathematical perspective, the difference-map algorithm is a dynamical system based on a mapping of Euclidean space. Solutions are encoded as fixed points of the mapping. Although originally conceived as a general method for solving the phase problem, the difference-map algorithm has been used for the boolean satisfiability problem, protein structure prediction, Ramsey numbers, diophantine equations, and "Sudoku", as well as sphere- and disk-packing problems. Since these applications include NP-complete problems, the scope of the difference map is that of an incomplete algorithm. Whereas incomplete algorithms can efficiently verify solutions (once a candidate is found), they cannot prove that a solution does not exist. The difference-map algorithm is a generalization of two iterative methods: Fienup's Hybrid input output (HIO) algorithm for phase retrieval and the Douglas-Rachford algorithm for convex optimization. Iterative methods, in general, have a long history in phase retrieval and convex optimization. The use of this style of algorithm for hard, non-convex problems is a more recent development. Algorithm. The problem to be solved must first be formulated as a set intersection problem in Euclidean space: find an formula_0 in the intersection of sets formula_1 and formula_2. Another prerequisite is an implementation of the projections formula_3 and formula_4 that, given an arbitrary input point formula_0, return a point in the constraint set formula_1 or formula_2 that is nearest to formula_0. One iteration of the algorithm is given by the mapping: formula_5 The real parameter formula_6 should not be equal to 0 but can have either sign; optimal values depend on the application and are determined through experimentation. As a first guess, the choice formula_7 (or formula_8) is recommended because it reduces the number of projection computations per iteration: formula_9 A point formula_0 is a fixed point of the map formula_10 precisely when formula_11. Since the left-hand side is an element of formula_1 and the RHS is an element of formula_2, the equality implies that we have found a common element to the two constraint sets. Note that the fixed point formula_0 itself need not belong to either formula_1 or formula_2. The set of fixed points will typically have much higher dimension than the set of solutions. The progress of the algorithm can be monitored by inspecting the norm of the difference of the two projections: formula_12. When this vanishes, a point common to both constraint sets has been found and the algorithm can be terminated. Example: logical satisfiability. Incomplete algorithms, such as stochastic local search, are widely used for finding satisfying truth assignments to boolean formulas. As an example of solving an instance of 2-SAT with the difference-map algorithm, consider the following formula (~ indicates NOT): ("q"1 or "q"2) and (~"q"1 or "q"3) and (~"q"2 or ~"q"3) and ("q"1 or ~"q"2) To each of the eight literals in this formula we assign one real variable in an eight-dimensional Euclidean space. 
The structure of the 2-SAT formula can be recovered when these variables are arranged in a table: Rows are the clauses in the 2-SAT formula and literals corresponding to the same boolean variable are arranged in columns, with negation indicated by parentheses. For example, the real variables "x"11, "x"21 and "x"41 correspond to the same boolean variable ("q"1) or its negation, and are called replicas. It is convenient to associate the values 1 and -1 with "TRUE" and "FALSE" rather than the traditional 1 and 0. With this convention, the compatibility between the replicas takes the form of the following linear equations: "x"11 = -"x"21 = "x"41 "x"12 = -"x"31 = -"x"42 "x"22 = -"x"32 The linear subspace where these equations are satisfied is one of the constraint spaces, say "A", used by the difference map. To project to this constraint we replace each replica by the signed replica average, or its negative: "a"1 = ("x"11 - "x"21 + "x"41) / 3 "x"11 → "a"1   "x"21 → -"a"1   "x"41 → "a"1 The second difference-map constraint applies to the rows of the table, the clauses. In a satisfying assignment, the two variables in each row must be assigned the values (1, 1), (1, -1), or (-1, 1). The corresponding constraint set, "B", is thus a set of 34 = 81 points. In projecting to this constraint the following operation is applied to each row. First, the two real values are rounded to 1 or -1; then, if the outcome is (-1, -1), the larger of the two original values is replaced by 1. Examples: (-.2, 1.2) → (-1, 1) (-.2, -.8) → (1, -1) It is a straightforward exercise to check that both of the projection operations described minimize the Euclidean distance between input and output values. Moreover, if the algorithm succeeds in finding a point "x" that lies in both constraint sets, then we know that (i) the clauses associated with "x" are all "TRUE", and (ii) the assignments to the replicas are consistent with a truth assignment to the original boolean variables. To run the algorithm one first generates an initial point "x"0, say Using β = 1, the next step is to compute "P"B("x"0) : This is followed by 2"P"B("x"0) - "x"0, and then projected onto the other constraint, "P"A(2"P"B("x"0) - "x"0) : Incrementing "x"0 by the difference of the two projections gives the first iteration of the difference map, "D"("x"0) = "x"1 : Here is the second iteration, "D"("x"1) = "x"2 : This is a fixed point: "D"("x"2) = "x"2. The iterate is unchanged because the two projections agree. From "P""B"("x"2), we can read off the satisfying truth assignment: "q"1 = "TRUE", "q"2 = "FALSE", "q"3 = "TRUE". Chaotic dynamics. In the simple 2-SAT example above, the norm of the difference-map increment "Δ" decreased monotonically to zero in three iterations. This contrasts with the behavior of "Δ" when the difference map is given a hard instance of 3-SAT, where it fluctuates strongly prior to the discovery of the fixed point. As a dynamical system the difference map is believed to be chaotic, and that the space being searched is a strange attractor. Phase retrieval. In phase retrieval a signal or image is reconstructed from the modulus (absolute value, magnitude) of its discrete Fourier transform. For example, the source of the modulus data may be the Fraunhofer diffraction pattern formed when an object is illuminated with coherent light. 
The projection to the Fourier modulus constraint, say "P""A", is accomplished by first computing the discrete Fourier transform of the signal or image, rescaling the moduli to agree with the data, and then inverse transforming the result. This is a projection, in the sense that the Euclidean distance to the constraint is minimized, because (i) the discrete Fourier transform, as a unitary transformation, preserves distance, and (ii) rescaling the modulus (without modifying the phase) is the smallest change that realizes the modulus constraint. To recover the unknown phases of the Fourier transform the difference map relies on the projection to another constraint, "P""B". This may take several forms, as the object being reconstructed may be known to be positive, have a bounded support, etc. In the reconstruction of the surface image, for example, the effect of the projection "P""B" was to nullify all values outside a rectangular support, and also to nullify all negative values within the support. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
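The update rule is easy to state in code. The sketch below is an illustrative toy added here, not taken from the article: it implements both the general map and the cheaper β = 1 form D(x) = x + P_A(2P_B(x) − x) − P_B(x) for two assumed example constraint sets in the plane, a circle A and a line B, and stops when the monitored difference between the two projections becomes small.

import numpy as np

def project_circle(x, center=np.array([0.0, 0.0]), radius=2.0):
    """Nearest point on the circle |x - center| = radius (constraint set A)."""
    v = x - center
    n = np.linalg.norm(v)
    if n == 0.0:                          # any boundary point is equally near
        return center + np.array([radius, 0.0])
    return center + radius * v / n

def project_line(x, point=np.array([1.0, 1.0]), direction=np.array([1.0, 2.0])):
    """Orthogonal projection onto the line through `point` along `direction` (constraint set B)."""
    d = direction / np.linalg.norm(direction)
    return point + np.dot(x - point, d) * d

def difference_map(x, p_a, p_b, beta=1.0, tol=1e-10, max_iter=10_000):
    """Iterate x -> D(x) until the two projections agree; returns the common point."""
    for _ in range(max_iter):
        if beta == 1.0:                   # simplified form with two projections per iteration
            pb = p_b(x)
            pa = p_a(2.0 * pb - x)
        else:
            f_b = p_b(x) + (p_b(x) - x) / beta
            f_a = p_a(x) - (p_a(x) - x) / beta
            pa, pb = p_a(f_b), p_b(f_a)
        if np.linalg.norm(pa - pb) < tol:
            return pb                     # a point (numerically) in both A and B
        x = x + beta * (pa - pb)
    raise RuntimeError("no fixed point found within max_iter iterations")

solution = difference_map(np.array([3.0, -1.0]), project_circle, project_line)
print(solution)   # lies (numerically) on both the circle and the line

For harder, non-convex problems such as the 2-SAT example above, only the two projection routines change; the iteration itself is identical.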
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "B" }, { "math_id": 3, "text": "P_A" }, { "math_id": 4, "text": "P_B" }, { "math_id": 5, "text": "\n\\begin{align}\nx \\mapsto D(x) &= x + \\beta \\left[ P_A \\left( f_B(x)\\right) - P_B \\left( f_A(x)\\right)\\right], \\\\\nf_A(x) &= P_A(x) - \\frac{1}{\\beta}\\left( P_A(x) - x\\right), \\\\\nf_B(x) &= P_B(x) + \\frac{1}{\\beta}\\left( P_B(x) - x\\right)\n\\end{align}\n" }, { "math_id": 6, "text": "\\beta" }, { "math_id": 7, "text": "\\beta = 1" }, { "math_id": 8, "text": "\\beta = -1" }, { "math_id": 9, "text": "D(x) = x + P_A\\left( 2P_B(x) - x\\right)-P_B(x)" }, { "math_id": 10, "text": "x \\mapsto D(x)" }, { "math_id": 11, "text": "P_A\\left(f_B(x)\\right) = P_B\\left(f_A(x)\\right)" }, { "math_id": 12, "text": "\\Delta = \\left| P_A \\left( f_B(x)\\right) - P_B \\left( f_A(x)\\right)\\right|" } ]
https://en.wikipedia.org/wiki?curid=10145406
1014694
Real projective space
Type of topological space In mathematics, real projective space, denoted RP"n" or P"n"(R), is the topological space of lines passing through the origin 0 in the real space R"n"+1. It is a compact, smooth manifold of dimension "n", and is a special case, Gr(1, R"n"+1), of a Grassmannian space. Basic properties. Construction. As with all projective spaces, RP"n" is formed by taking the quotient of R"n"+1 ∖ {0} under the equivalence relation "x" ∼ "λx" for all real numbers "λ" ≠ 0. For all "x" in R"n"+1 ∖ {0} one can always find a "λ" such that "λx" has norm 1. There are precisely two such "λ" differing by sign. Thus RP"n" can also be formed by identifying antipodal points of the unit "n"-sphere, "S""n", in R"n"+1. One can further restrict to the upper hemisphere of "S""n" and merely identify antipodal points on the bounding equator. This shows that RP"n" is also equivalent to the closed "n"-dimensional disk, "D""n", with antipodal points on the boundary, ∂"D""n" = "S""n"−1, identified. Topology. The antipodal map on the "n"-sphere (the map sending "x" to −"x") generates a Z2 group action on "S""n". As mentioned above, the orbit space for this action is RP"n". This action is actually a covering space action giving "S""n" as a double cover of RP"n". Since "S""n" is simply connected for "n" ≥ 2, it also serves as the universal cover in these cases. It follows that the fundamental group of RP"n" is Z2 when "n" > 1. (When "n" = 1 the fundamental group is Z due to the homeomorphism with "S"1). A generator for the fundamental group is the closed curve obtained by projecting any curve connecting antipodal points in "S""n" down to RP"n". The projective "n"-space is compact, connected, and has a fundamental group isomorphic to the cyclic group of order 2: its universal covering space is given by the antipody quotient map from the "n"-sphere, a simply connected space. It is a double cover. The antipode map on R"p" has sign formula_0, so it is orientation-preserving if and only if "p" is even. The orientation character is thus: the non-trivial loop in formula_1 acts as formula_2 on orientation, so RP"n" is orientable if and only if "n" + 1 is even, i.e., "n" is odd. The projective "n"-space is in fact diffeomorphic to the submanifold of R("n"+1)2 consisting of all symmetric ("n" + 1) × ("n" + 1) matrices of trace 1 that are also idempotent linear transformations. Geometry of real projective spaces. Real projective space admits a constant positive scalar curvature metric, coming from the double cover by the standard round sphere (the antipodal map is locally an isometry). For the standard round metric, this has sectional curvature identically 1. In the standard round metric, the measure of projective space is exactly half the measure of the sphere. Smooth structure. Real projective spaces are smooth manifolds. On "Sn", in homogeneous coordinates, ("x"1, ..., "x""n"+1), consider the subset "Ui" with "xi" ≠ 0. Each "Ui" is homeomorphic to the disjoint union of two open unit balls in R"n" that map to the same subset of RP"n" and the coordinate transition functions are smooth. This gives RP"n" a smooth structure. Structure as a CW complex. Real projective space RP"n" admits the structure of a CW complex with 1 cell in every dimension. In homogeneous coordinates ("x"1 ... "x""n"+1) on "Sn", the coordinate neighborhood "U"1 = {("x"1 ... "x""n"+1) | "x"1 ≠ 0} can be identified with the interior of "n"-disk "Dn".
When "xi" = 0, one has RP"n"−1. Therefore the "n"−1 skeleton of RP"n" is RP"n"−1, and the attaching map "f" : "S""n"−1 → RP"n"−1 is the 2-to-1 covering map. One can put formula_3 Induction shows that RP"n" is a CW complex with 1 cell in every dimension up to "n". The cells are Schubert cells, as on the flag manifold. That is, take a complete flag (say the standard flag) 0 = "V"0 &lt; "V"1 &lt;...&lt; "Vn"; then the closed "k"-cell is lines that lie in "Vk". Also the open "k"-cell (the interior of the "k"-cell) is lines in "Vk" \ "V""k"−1 (lines in "Vk" but not "V""k"−1). In homogeneous coordinates (with respect to the flag), the cells are formula_4 This is not a regular CW structure, as the attaching maps are 2-to-1. However, its cover is a regular CW structure on the sphere, with 2 cells in every dimension; indeed, the minimal regular CW structure on the sphere. In light of the smooth structure, the existence of a Morse function would show RP"n" is a CW complex. One such function is given by, in homogeneous coordinates, formula_5 On each neighborhood "Ui", "g" has nondegenerate critical point (0...,1...,0) where 1 occurs in the "i"-th position with Morse index "i". This shows RP"n" is a CW complex with 1 cell in every dimension. Tautological bundles. Real projective space has a natural line bundle over it, called the tautological bundle. More precisely, this is called the tautological subbundle, and there is also a dual "n"-dimensional bundle called the tautological quotient bundle. Algebraic topology of real projective spaces. Homotopy groups. The higher homotopy groups of RP"n" are exactly the higher homotopy groups of "Sn", via the long exact sequence on homotopy associated to a fibration. Explicitly, the fiber bundle is: formula_6 You might also write this as formula_7 or formula_8 by analogy with complex projective space. The homotopy groups are: formula_9 Homology. The cellular chain complex associated to the above CW structure has 1 cell in each dimension 0, ..., "n". For each dimensional "k", the boundary maps "dk" : δ"Dk" → RP"k"−1/RP"k"−2 is the map that collapses the equator on "S""k"−1 and then identifies antipodal points. In odd (resp. even) dimensions, this has degree 0 (resp. 2): formula_10 Thus the integral homology is formula_11 RP"n" is orientable if and only if "n" is odd, as the above homology calculation shows. Infinite real projective space. The infinite real projective space is constructed as the direct limit or union of the finite projective spaces: formula_12 This space is classifying space of "O"(1), the first orthogonal group. The double cover of this space is the infinite sphere formula_13, which is contractible. The infinite projective space is therefore the Eilenberg–MacLane space "K"(Z2, 1). For each nonnegative integer "q", the modulo 2 homology group formula_14. Its cohomology ring modulo 2 is formula_15 where formula_16 is the first Stiefel–Whitney class: it is the free formula_17-algebra on formula_16, which has degree 1.
[ { "math_id": 0, "text": "(-1)^p" }, { "math_id": 1, "text": "\\pi_1(\\mathbf{RP}^n)" }, { "math_id": 2, "text": "(-1)^{n+1}" }, { "math_id": 3, "text": "\\mathbf{RP}^n = \\mathbf{RP}^{n-1} \\cup_f D^n." }, { "math_id": 4, "text": "\n\\begin{array}{c}\n[*:0:0:\\dots:0] \\\\\n{[}*:*:0:\\dots:0] \\\\\n\\vdots \\\\\n{[}*:*:*:\\dots:*].\n\\end{array}" }, { "math_id": 5, "text": "g(x_1, \\ldots, x_{n+1}) = \\sum_{i=1} ^{n+1} i \\cdot |x_i|^2." }, { "math_id": 6, "text": "\\mathbf{Z}_2 \\to S^n \\to \\mathbf{RP}^n." }, { "math_id": 7, "text": "S^0 \\to S^n \\to \\mathbf{RP}^n" }, { "math_id": 8, "text": "O(1) \\to S^n \\to \\mathbf{RP}^n" }, { "math_id": 9, "text": "\\pi_i (\\mathbf{RP}^n) = \\begin{cases}\n0 & i = 0\\\\\n\\mathbf{Z} & i = 1, n = 1\\\\\n\\mathbf{Z}/2\\mathbf{Z} & i = 1, n > 1\\\\\n\\pi_i (S^n) & i > 1, n > 0.\n\\end{cases}" }, { "math_id": 10, "text": "\\deg(d_k) = 1 + (-1)^k." }, { "math_id": 11, "text": "H_i(\\mathbf{RP}^n) = \\begin{cases}\n\\mathbf{Z} & i = 0 \\text{ or } i = n \\text{ odd,}\\\\\n\\mathbf{Z}/2\\mathbf{Z} & 0<i<n,\\ i\\ \\text{odd,}\\\\\n0 & \\text{else.}\n\\end{cases}" }, { "math_id": 12, "text": "\\mathbf{RP}^\\infty := \\lim_n \\mathbf{RP}^n." }, { "math_id": 13, "text": "S^\\infty" }, { "math_id": 14, "text": "H_q(\\mathbf{RP}^\\infty; \\mathbf{Z}/2) = \\mathbf{Z}/2" }, { "math_id": 15, "text": "H^*(\\mathbf{RP}^\\infty; \\mathbf{Z}/2\\mathbf{Z}) = \\mathbf{Z}/2\\mathbf{Z}[w_1]," }, { "math_id": 16, "text": "w_1" }, { "math_id": 17, "text": "\\mathbf{Z}/2\\mathbf{Z}" } ]
https://en.wikipedia.org/wiki?curid=1014694
10148835
Membrane analogy
The elastic membrane analogy, also known as the soap-film analogy, was first published by pioneering aerodynamicist Ludwig Prandtl in 1903. It describes the stress distribution on a long bar in torsion. The cross section of the bar is constant along its length, and need not be circular. The differential equation that governs the stress distribution on the bar in torsion is of the same form as the equation governing the shape of a membrane under differential pressure. Therefore, in order to discover the stress distribution on the bar, all one has to do is cut the shape of the cross section out of a piece of wood, cover it with a soap film, and apply a differential pressure across it. Then the slope of the soap film at any area of the cross section is directly proportional to the stress in the bar at the same point on its cross section. Application to thin-walled, open cross sections. While the membrane analogy allows the stress distribution on any cross section to be determined experimentally, it also allows the stress distribution on thin-walled, open cross sections to be determined by the same theoretical approach that describes the behavior of rectangular sections. Using the membrane analogy, any thin-walled cross section can be "stretched out" into a rectangle without affecting the stress distribution under torsion. The maximum shear stress, therefore, occurs at the edge of the midpoint of the stretched cross section, and is equal to formula_0, where T is the torque applied, b is the length of the stretched cross section, and t is the thickness of the cross section. It can be shown that the differential equation for the deflection surface of a homogeneous membrane, subjected to uniform lateral pressure and with uniform surface tension and with the same outline as that of the cross section of a bar under torsion, has the same form as that governing the stress distribution over the cross section of a bar under torsion. This analogy was originally proposed by Ludwig Prandtl in 1903. Other applications. Prandtl's stretched-membrane concept was used extensively in the field of electron tube ("vacuum tube") design (1930's to 1960's) to model the trajectory of electrons within a device. The model is constructed by uniformly stretching a thin rubber sheet over a frame, and deforming the sheet upwards with physical models of electrodes, impressed into the sheet from below. The entire assembly is tilted, and steel balls (as electron analogs) rolled down the assembly and the trajectories noted. The curved surface surrounding the "electrodes" represents the complex increase in field strength as the electron-analog approaches the "electrode"; the upward distortion in the sheet is a close analogy to field strength. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
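The analogy can be explored numerically. The sketch below is an added illustration, not part of Prandtl's original treatment: it assumes a unit pressure-to-tension ratio and an arbitrary grid, relaxes the membrane equation ∇²w = −1 over a long, thin rectangular cross section with w = 0 on the boundary, and reports where the boundary slope is largest. By the analogy that slope is proportional to the shear stress (the constant of proportionality is left unspecified here), and the maximum appears at the midpoint of the long edge, consistent with the thin-wall result quoted above.

import numpy as np

def membrane_deflection(width, thickness, h, iters=20_000):
    """Jacobi relaxation of ∇²w = -1 (assumed unit pressure/tension) on a
    width x thickness rectangle with w = 0 on the boundary."""
    nx, ny = int(round(width / h)) + 1, int(round(thickness / h)) + 1
    w = np.zeros((ny, nx))
    for _ in range(iters):
        w[1:-1, 1:-1] = 0.25 * (w[:-2, 1:-1] + w[2:, 1:-1]
                                + w[1:-1, :-2] + w[1:-1, 2:] + h * h)
    return w

h = 0.05
w = membrane_deflection(width=6.0, thickness=1.0, h=h)
# Slope of the film normal to one long edge; by the analogy it tracks the shear stress.
edge_slope = w[1, :] / h
print("steepest point (grid column):", int(edge_slope.argmax()), "of", w.shape[1] - 1)
print("membrane slope there:", float(edge_slope.max()))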
[ { "math_id": 0, "text": "3T/bt^2" } ]
https://en.wikipedia.org/wiki?curid=10148835
1014906
Cyclomatic complexity
Measure of the structural complexity of a software program Cyclomatic complexity is a software metric used to indicate the complexity of a program. It is a quantitative measure of the number of linearly independent paths through a program's source code. It was developed by Thomas J. McCabe, Sr. in 1976. Cyclomatic complexity is computed using the control-flow graph of the program. The nodes of the graph correspond to indivisible groups of commands of a program, and a directed edge connects two nodes if the second command might be executed immediately after the first command. Cyclomatic complexity may also be applied to individual functions, modules, methods, or classes within a program. One testing strategy, called basis path testing by McCabe who first proposed it, is to test each linearly independent path through the program. In this case, the number of test cases will equal the cyclomatic complexity of the program. Description. Definition. There are multiple ways to define cyclomatic complexity of a section of source code. One common way is the number of linearly independent paths within it. A set formula_0 of paths is linearly independent if the edge set of any path formula_1 in formula_0 is not the union of edge sets of the paths in some subset of formula_2. If the source code contained no control flow statements (conditionals or decision points) the complexity would be 1, since there would be only a single path through the code. If the code had one single-condition IF statement, there would be two paths through the code: one where the IF statement is TRUE and another one where it is FALSE. Here, the complexity would be 2. Two nested single-condition IFs, or one IF with two conditions, would produce a complexity of 3. Another way to define the cyclomatic complexity of a program is to look at its control-flow graph, a directed graph containing the basic blocks of the program, with an edge between two basic blocks if control may pass from the first to the second. The complexity M is then defined as formula_3 where An alternative formulation of this, as originally proposed, is to use a graph in which each exit point is connected back to the entry point. In this case, the graph is strongly connected. Here, the cyclomatic complexity of the program is equal to the cyclomatic number of its graph (also known as the ), which is defined as formula_4 This may be seen as calculating the number of linearly independent cycles that exist in the graph: those cycles that do not contain other cycles within themselves. Because each exit point loops back to the entry point, there is at least one such cycle for each exit point. For a single program (or subroutine or method), P always equals 1; a simpler formula for a single subroutine is formula_5 Cyclomatic complexity may be applied to several such programs or subprograms at the same time (to all of the methods in a class, for example). In these cases, P will equal the number of programs in question, and each subprogram will appear as a disconnected subset of the graph. McCabe showed that the cyclomatic complexity of a structured program with only one entry point and one exit point is equal to the number of decision points ("if" statements or conditional loops) contained in that program plus one. This is true only for decision points counted at the lowest, machine-level instructions. Decisions involving compound predicates like those found in high-level languages like codice_0 should be counted in terms of predicate variables involved. 
In this example, one should count two decision points because at machine level it is equivalent to codice_1. Cyclomatic complexity may be extended to a program with multiple exit points. In this case, it is equal to formula_6 where formula_7 is the number of decision points in the program and s is the number of exit points. Algebraic topology. An even subgraph of a graph (also known as an Eulerian subgraph) is one in which every vertex is incident with an even number of edges. Such subgraphs are unions of cycles and isolated vertices. Subgraphs will be identified with their edge sets, which is equivalent to only considering those even subgraphs which contain all vertices of the full graph. The set of all even subgraphs of a graph is closed under symmetric difference, and may thus be viewed as a vector space over GF(2). This vector space is called the cycle space of the graph. The cyclomatic number of the graph is defined as the dimension of this space. Since GF(2) has two elements and the cycle space is necessarily finite, the cyclomatic number is also equal to the 2-logarithm of the number of elements in the cycle space. A basis for the cycle space is easily constructed by first fixing a spanning forest of the graph, and then considering the cycles formed by one edge not in the forest and the path in the forest connecting the endpoints of that edge. These cycles form a basis for the cycle space. The cyclomatic number also equals the number of edges not in a maximal spanning forest of a graph. Since the number of edges in a maximal spanning forest of a graph is equal to the number of vertices minus the number of components, the formula formula_8 defines the cyclomatic number. Cyclomatic complexity can also be defined as a relative Betti number, the size of a relative homology group: formula_9 which is read as "the rank of the first homology group of the graph "G" relative to the terminal nodes "t"". This is a technical way of saying "the number of linearly independent paths through the flow graph from an entry to an exit", where: This cyclomatic complexity can be calculated. It may also be computed via absolute Betti number by identifying the terminal nodes on a given component, or drawing paths connecting the exits to the entrance. The new, augmented graph formula_10 obtains formula_11 It can also be computed via homotopy. If a (connected) control-flow graph is considered a one-dimensional CW complex called formula_12, the fundamental group of formula_12 will be formula_13. The value of formula_14 is the cyclomatic complexity. The fundamental group counts how many loops there are through the graph up to homotopy, aligning as expected. Interpretation. In his presentation "Software Quality Metrics to Identify Risk" for the Department of Homeland Security, Tom McCabe introduced the following categorization of cyclomatic complexity: Applications. Limiting complexity during development. One of McCabe's original applications was to limit the complexity of routines during program development. He recommended that programmers should count the complexity of the modules they are developing, and split them into smaller modules whenever the cyclomatic complexity of the module exceeded 10. This practice was adopted by the NIST Structured Testing methodology, which observed that since McCabe's original publication, the figure of 10 had received substantial corroborating evidence. 
However, it also noted that in some circumstances it may be appropriate to relax the restriction and permit modules with a complexity as high as 15. As the methodology acknowledged that there were occasional reasons for going beyond the agreed-upon limit, it phrased its recommendation as "For each module, either limit cyclomatic complexity to [the agreed-upon limit] or provide a written explanation of why the limit was exceeded." Measuring the "structuredness" of a program. Section VI of McCabe's 1976 paper is concerned with determining what the control-flow graphs (CFGs) of non-structured programs look like in terms of their subgraphs, which McCabe identified. (For details, see structured program theorem.) McCabe concluded that section by proposing a numerical measure of how close to the structured programming ideal a given program is, i.e. its "structuredness". McCabe called the measure he devised for this purpose essential complexity. To calculate this measure, the original CFG is iteratively reduced by identifying subgraphs that have a single-entry and a single-exit point, which are then replaced by a single node. This reduction corresponds to what a human would do if they extracted a subroutine from the larger piece of code. (Nowadays such a process would fall under the umbrella term of refactoring.) McCabe's reduction method was later called "condensation" in some textbooks, because it was seen as a generalization of the condensation to components used in graph theory. If a program is structured, then McCabe's reduction/condensation process reduces it to a single CFG node. In contrast, if the program is not structured, the iterative process will identify the irreducible part. The essential complexity measure defined by McCabe is simply the cyclomatic complexity of this irreducible graph, so it will be precisely 1 for all structured programs, but greater than one for non-structured programs. Implications for software testing. Another application of cyclomatic complexity is in determining the number of test cases that are necessary to achieve thorough test coverage of a particular module. It is useful because of two properties of the cyclomatic complexity, M, for a specific module: All three of the above numbers may be equal: branch coverage formula_15 cyclomatic complexity formula_15 number of paths. For example, consider a program that consists of two sequential if-then-else statements. if (c1()) f1(); else f2(); if (c2()) f3(); else f4(); In this example, two test cases are sufficient to achieve a complete branch coverage, while four are necessary for complete path coverage. The cyclomatic complexity of the program is 3 (as the strongly connected graph for the program contains 9 edges, 7 nodes, and 1 connected component) (9 − 7 + 1). In general, in order to fully test a module, all execution paths through the module should be exercised. This implies a module with a high complexity number requires more testing effort than a module with a lower value since the higher complexity number indicates more pathways through the code. This also implies that a module with higher complexity is more difficult to understand since the programmer must understand the different pathways and the results of those pathways. Unfortunately, it is not always practical to test all possible paths through a program. Considering the example above, each time an additional if-then-else statement is added, the number of possible paths grows by a factor of 2. 
As the program grows in this fashion, it quickly reaches the point where testing all of the paths becomes impractical. One common testing strategy, espoused for example by the NIST Structured Testing methodology, is to use the cyclomatic complexity of a module to determine the number of white-box tests that are required to obtain sufficient coverage of the module. In almost all cases, according to such a methodology, a module should have at least as many tests as its cyclomatic complexity. In most cases, this number of tests is adequate to exercise all the relevant paths of the function. As an example of a function that requires more than mere branch coverage to test accurately, reconsider the above function. However, assume that to avoid a bug occurring, any code that calls either codice_2 or codice_3 must also call the other. Assuming that the results of codice_4 and codice_5 are independent, the function as presented above contains a bug. Branch coverage allows the method to be tested with just two tests, such as the following test cases: Neither of these cases exposes the bug. If, however, we use cyclomatic complexity to indicate the number of tests we require, the number increases to 3. We must therefore test one of the following paths: Either of these tests will expose the bug. Correlation to number of defects. Multiple studies have investigated the correlation between McCabe's cyclomatic complexity number with the frequency of defects occurring in a function or method. Some studies find a positive correlation between cyclomatic complexity and defects; functions and methods that have the highest complexity tend to also contain the most defects. However, the correlation between cyclomatic complexity and program size (typically measured in lines of code) has been demonstrated many times. Les Hatton has claimed that complexity has the same predictive ability as lines of code. Studies that controlled for program size (i.e., comparing modules that have different complexities but similar size) are generally less conclusive, with many finding no significant correlation, while others do find correlation. Some researchers question the validity of the methods used by the studies finding no correlation. Although this relation likely exists, it is not easily used in practice. Since program size is not a controllable feature of commercial software, the usefulness of McCabe's number has been questioned. The essence of this observation is that larger programs tend to be more complex and to have more defects. Reducing the cyclomatic complexity of code is not proven to reduce the number of errors or bugs in that code. International safety standards like ISO 26262, however, mandate coding guidelines that enforce low code complexity. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
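As a concrete companion to the definitions above, the sketch below (illustrative only; the graph encoding and node names are arbitrary choices) computes M = E − N + 2P from a node list and directed edge list, counting connected components with a small union-find pass. Applied to a control-flow graph of the two sequential if-then-else statements discussed above, it returns 3, in agreement with the 9 − 7 + 1 computation on the strongly connected form of the same graph.

def cyclomatic_complexity(nodes, edges):
    """M = E - N + 2P for a control-flow graph given as nodes and directed edges."""
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]   # path halving
            n = parent[n]
        return n

    for a, b in edges:                      # connectivity ignores edge direction
        parent[find(a)] = find(b)
    components = len({find(n) for n in nodes})
    return len(edges) - len(nodes) + 2 * components

# CFG of:  if (c1()) f1(); else f2();  if (c2()) f3(); else f4();
nodes = ["c1", "f1", "f2", "c2", "f3", "f4", "exit"]
edges = [("c1", "f1"), ("c1", "f2"), ("f1", "c2"), ("f2", "c2"),
         ("c2", "f3"), ("c2", "f4"), ("f3", "exit"), ("f4", "exit")]
print(cyclomatic_complexity(nodes, edges))  # 3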
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "P" }, { "math_id": 2, "text": "S/P" }, { "math_id": 3, "text": "M = E - N + 2P," }, { "math_id": 4, "text": "M = E - N + P." }, { "math_id": 5, "text": "M = E - N + 2." }, { "math_id": 6, "text": "\\pi - s + 2," }, { "math_id": 7, "text": "\\pi" }, { "math_id": 8, "text": "E-N+P" }, { "math_id": 9, "text": "M := b_1(G,t) := \\operatorname{rank}H_1(G,t)," }, { "math_id": 10, "text": "\\tilde G" }, { "math_id": 11, "text": "M = b_1(\\tilde G) = \\operatorname{rank}H_1(\\tilde G)." }, { "math_id": 12, "text": "X" }, { "math_id": 13, "text": "\\pi_1(X) \\cong \\Z^{*n}" }, { "math_id": 14, "text": "n+1" }, { "math_id": 15, "text": "\\leq" } ]
https://en.wikipedia.org/wiki?curid=1014906
10151043
Moni Naor
Israeli computer scientist (born 1961) Moni Naor is an Israeli computer scientist, currently a professor at the Weizmann Institute of Science. Naor received his Ph.D. in 1989 at the University of California, Berkeley. His advisor was Manuel Blum. He works in various fields of computer science, mainly the foundations of cryptography. He is notable for initiating research on public key systems secure against chosen ciphertext attack, for creating non-malleable cryptography and visual cryptography (with Adi Shamir), and for suggesting various methods for verifying that users of a computer system are human (leading to the notion of CAPTCHA). His research on small-bias sample spaces gives a general framework for combining small k-wise independent spaces with small formula_0-biased spaces to obtain formula_1-almost k-wise independent spaces of small size. In 1994 he was the first, with Amos Fiat, to formally study the problem of practical broadcast encryption. Along with Benny Chor, Amos Fiat, and Benny Pinkas, he made a contribution to the development of traitor tracing, a copyright infringement detection system which works by tracing the source of leaked files rather than by direct copy protection. References.
[ { "math_id": 0, "text": "\\epsilon" }, { "math_id": 1, "text": "\\delta" } ]
https://en.wikipedia.org/wiki?curid=10151043
10151726
Atmospheric tide
Global-scale periodic oscillations of the atmosphere Atmospheric tides are global-scale periodic oscillations of the atmosphere. In many ways they are analogous to ocean tides. They can be excited by: General characteristics. The largest-amplitude atmospheric tides are mostly generated in the troposphere and stratosphere when the atmosphere is periodically heated, as water vapor and ozone absorb solar radiation during the day. These tides propagate away from the source regions and ascend into the mesosphere and thermosphere. Atmospheric tides can be measured as regular fluctuations in wind, temperature, density and pressure. Although atmospheric tides share much in common with ocean tides they have two key distinguishing features: At ground level, atmospheric tides can be detected as regular but small oscillations in surface pressure with periods of 24 and 12 hours. However, at greater heights, the amplitudes of the tides can become very large. In the mesosphere (heights of about ) atmospheric tides can reach amplitudes of more than 50 m/s and are often the most significant part of the motion of the atmosphere. The reason for this dramatic growth in amplitude from tiny fluctuations near the ground to oscillations that dominate the motion of the mesosphere lies in the fact that the density of the atmosphere decreases with increasing height. As tides or waves propagate upwards, they move into regions of lower and lower density. If the tide or wave is not dissipating, then its kinetic energy density must be conserved. Since the density is decreasing, the amplitude of the tide or wave increases correspondingly so that energy is conserved. Following this growth with height atmospheric tides have much larger amplitudes in the middle and upper atmosphere than they do at ground level. Solar atmospheric tides. The largest amplitude atmospheric tides are generated by the periodic heating of the atmosphere by the Sun – the atmosphere is heated during the day and not heated at night. This regular diurnal (daily) cycle in heating generates thermal tides that have periods related to the solar day. It might initially be expected that this diurnal heating would give rise to tides with a period of 24 hours, corresponding to the heating's periodicity. However, observations reveal that large amplitude tides are generated with periods of 24 and 12 hours. Tides have also been observed with periods of 8 and 6 hours, although these latter tides generally have smaller amplitudes. This set of periods occurs because the solar heating of the atmosphere occurs in an approximate square wave profile and so is rich in harmonics. When this pattern is decomposed into separate frequency components using a Fourier transform, as well as the mean and daily (24-hour) variation, significant oscillations with periods of 12, 8 and 6 hours are produced. Tides generated by the gravitational effect of the Sun are very much smaller than those generated by solar heating. Solar tides will refer to only thermal solar tides from this point. Solar energy is absorbed throughout the atmosphere some of the most significant in this context are water vapor at about 0–15 km in the troposphere, ozone at about 30–60 km in the stratosphere and molecular oxygen and molecular nitrogen at about 120–170 km) in the thermosphere. Variations in the global distribution and density of these species result in changes in the amplitude of the solar tides. The tides are also affected by the environment through which they travel. 
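The statement that the roughly square-wave daily heating cycle is rich in harmonics can be checked directly. The sketch below (an added illustration; the heating profile, its 10-hour daytime window and the units are assumptions, not observational data) applies a discrete Fourier transform to an idealised day/night heating cycle and prints the amplitudes of the 24-, 12-, 8- and 6-hour components relative to the diurnal one.

import numpy as np

hours = np.arange(0.0, 24.0, 0.25)        # one solar day, sampled every 15 minutes
# Idealised heating: switched on during a 10-hour daytime window, off at night.
# (An exactly 12-hour-on square wave is a special case whose even harmonics cancel,
#  so a slightly shorter window is assumed for this toy illustration.)
heating = np.where((hours >= 7.0) & (hours < 17.0), 1.0, 0.0)

coeffs = np.fft.rfft(heating) / len(hours)
for k, period in [(1, 24), (2, 12), (3, 8), (4, 6)]:
    rel = abs(coeffs[k]) / abs(coeffs[1])   # harmonic k has period 24/k hours
    print(f"{period:>2}-hour harmonic: amplitude {rel:.2f} relative to the 24-hour harmonic")

The non-sinusoidal shape of the forcing is what puts appreciable amplitude into the semidiurnal and shorter-period harmonics, matching the observed set of tidal periods described above.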
Solar tides can be separated into two components: migrating and non-migrating. Migrating solar tides. Migrating tides are Sun synchronous – from the point of view of a stationary observer on the ground they propagate westwards with the apparent motion of the Sun. As the migrating tides stay fixed relative to the Sun a pattern of excitation is formed that is also fixed relative to the Sun. Changes in the tide observed from a stationary viewpoint on the Earth's surface are caused by the rotation of the Earth with respect to this fixed pattern. Seasonal variations of the tides also occur as the Earth tilts relative to the Sun and so relative to the pattern of excitation. The migrating solar tides have been extensively studied both through observations and mechanistic models. Non-migrating solar tides. Non-migrating tides can be thought of as global-scale waves with the same periods as the migrating tides. However, non-migrating tides do not follow the apparent motion of the Sun. Either they do not propagate horizontally, they propagate eastwards or they propagate westwards at a different speed to the Sun. These non-migrating tides may be generated by differences in topography with longitude, land-sea contrast, and surface interactions. An important source is latent heat release due to deep convection in the tropics. The primary source for the 24-hr tide is in the lower atmosphere where surface effects are important. This is reflected in a relatively large non-migrating component seen in longitudinal differences in tidal amplitudes. Largest amplitudes have been observed over South America, Africa and Australia. Lunar atmospheric tides. Atmospheric tides are also produced through the gravitational effects of the Moon. Lunar (gravitational) tides are much weaker than solar thermal tides and are generated by the motion of the Earth's oceans (caused by the Moon) and to a lesser extent the effect of the Moon's gravitational attraction on the atmosphere. Classical tidal theory. The basic characteristics of the atmospheric tides are described by the "classical tidal theory". By neglecting mechanical forcing and dissipation, the classical tidal theory assumes that atmospheric wave motions can be considered as linear perturbations of an initially motionless zonal mean state that is horizontally stratified and isothermal. The two major results of the classical theory are Basic equations. The primitive equations lead to the linearized equations for perturbations (primed variables) in a spherical isothermal atmosphere: with the definitions Separation of variables. The set of equations can be solved for "atmospheric tides", i.e., longitudinally propagating waves of zonal wavenumber formula_17 and frequency formula_18. Zonal wavenumber formula_17 is a positive integer so that positive values for formula_18 correspond to eastward propagating tides and negative values to westward propagating tides. A separation approach of the form formula_19 and doing some manipulations yields expressions for the latitudinal and vertical structure of the tides. Laplace's tidal equation. The latitudinal structure of the tides is described by the "horizontal structure equation" which is also called "Laplace's tidal equation": formula_20 with "Laplace operator" formula_21 using formula_22, formula_23 and "eigenvalue" formula_24 Hence, atmospheric tides are eigenoscillations (eigenmodes)of Earth's atmosphere with eigenfunctions formula_25, called Hough functions, and eigenvalues formula_26. 
The latter define the "equivalent depth" formula_27 which couples the latitudinal structure of the tides with their vertical structure. General solution of Laplace's equation. Longuet-Higgins has completely solved Laplace's tidal equation and has discovered tidal modes with negative eigenvalues (Figure 2). There exist two kinds of waves: class 1 waves (sometimes called gravity waves), labelled by positive n, and class 2 waves (sometimes called rotational waves), labelled by negative n. Class 2 waves owe their existence to the Coriolis force and can only exist for periods greater than 12 hours (or, equivalently, for frequencies |σ| ≤ 2Ω, where Ω is Earth's rotation rate). Tidal waves can be either internal (travelling waves) with positive eigenvalues (or equivalent depth), which have finite vertical wavelengths and can transport wave energy upward, or external (evanescent waves) with negative eigenvalues and infinitely large vertical wavelengths, meaning that their phases remain constant with altitude. These external wave modes cannot transport wave energy, and their amplitudes decrease exponentially with height outside their source regions. Even numbers of n correspond to waves symmetric with respect to the equator, and odd numbers to antisymmetric waves. The transition from internal to external waves appears at , or at the vertical wavenumber , and , respectively. The fundamental solar diurnal tidal mode that optimally matches the solar heat input configuration, and thus is most strongly excited, is the Hough mode (1, −2) (Figure 3). It depends on local time and travels westward with the Sun. It is an external mode of class 2 and has an eigenvalue of −12.56. Its maximum pressure amplitude on the ground is about 60 Pa. The largest solar semidiurnal wave is mode (2, 2) with maximum pressure amplitudes at the ground of 120 Pa. It is an internal class 1 wave. Its amplitude increases exponentially with altitude. Although its solar excitation is half of that of mode (1, −2), its amplitude on the ground is larger by a factor of two. This indicates the suppression of external waves, in this case by a factor of four. Vertical structure equation. For bounded solutions and at altitudes above the forcing region, the "vertical structure equation" in its canonical form is: formula_28 with solution formula_29 using the definitions formula_30 Propagating solutions. Therefore, each wavenumber/frequency pair (a tidal "component") is a superposition of associated Hough functions (often called tidal "modes" in the literature) of index "n". The nomenclature is such that a negative value of "n" refers to evanescent modes (no vertical propagation) and a positive value to propagating modes. The equivalent depth formula_27 is linked to the vertical wavelength formula_31, since formula_32 is the vertical wavenumber: formula_33 For propagating solutions formula_34, the vertical group velocity formula_35 becomes positive (upward energy propagation) only if formula_36 for westward formula_37 or if formula_38 for eastward formula_39 propagating waves. At a given height formula_40, the wave maximizes for formula_41 For a fixed longitude formula_10, this in turn always results in downward phase progression as time progresses, independent of the propagation direction. This is an important result for the interpretation of observations: downward phase progression in time means an upward propagation of energy and therefore a tidal forcing lower in the atmosphere. Amplitude increases with height formula_42, as density decreases. Dissipation. 
Damping of the tides occurs primarily in the lower thermosphere region, and may be caused by turbulence from breaking gravity waves. In a phenomenon similar to ocean waves breaking on a beach, the energy dissipates into the background atmosphere. Molecular diffusion also becomes increasingly important at higher levels in the lower thermosphere as the mean free path increases in the rarefied atmosphere. At thermospheric heights, attenuation of atmospheric waves, mainly due to collisions between the neutral gas and the ionospheric plasma, becomes significant, so that above about 150 km altitude all wave modes gradually become external waves, and the Hough functions degenerate to spherical functions; e.g., mode (1, −2) develops to the spherical function , mode (2, 2) becomes , with θ the co-latitude, etc. Within the thermosphere, mode (1, −2) is the predominant mode, reaching diurnal temperature amplitudes at the exosphere of at least 140 K and horizontal winds of the order of 100 m/s and more, increasing with geomagnetic activity. It is responsible for the electric Sq currents within the ionospheric dynamo region between about 100 and 200 km altitude. Both diurnal and semidiurnal tides can be observed across the ionospheric dynamo region with incoherent scatter radars by tracking the tidal motion of ionospheric plasma. Effects of atmospheric tide. The tides form an important mechanism for transporting energy from the lower atmosphere into the upper atmosphere, while dominating the dynamics of the mesosphere and lower thermosphere. Therefore, understanding the atmospheric tides is essential in understanding the atmosphere as a whole. Modeling and observations of atmospheric tides are needed in order to monitor and predict changes in the Earth's atmosphere.
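As a small numerical illustration of the vertical structure relations given in the "Vertical structure equation" and "Propagating solutions" subsections, the following C sketch computes an equivalent depth from an eigenvalue of Laplace's tidal equation, evaluates the quantity formula_30, and reports whether the mode is propagating or evanescent (and, if propagating, its vertical wavelength from formula_33). The scale height, κ, Earth's radius, rotation rate and gravity are assumed illustrative values; the eigenvalue −12.56 is the one quoted above for mode (1, −2), while the positive eigenvalue is made up purely for contrast.

#include <stdio.h>
#include <math.h>

static const double PI = 3.14159265358979323846;

int main(void)
{
    const double Omega = 7.292e-5;   /* Earth's rotation rate in rad/s (assumed) */
    const double a_e   = 6.371e6;    /* Earth's radius in m (assumed)            */
    const double g     = 9.81;       /* gravitational acceleration (assumed)     */
    const double H     = 7000.0;     /* pressure scale height in m (assumed)     */
    const double kappa = 2.0 / 7.0;  /* kappa = R/c_p for dry air (assumed)      */

    /* -12.56 is the eigenvalue quoted above for the (1, -2) mode;
       +20.0 is a made-up positive eigenvalue used only for comparison. */
    const double eps[2] = { -12.56, 20.0 };

    for (int i = 0; i < 2; i++) {
        double h_n    = (2.0 * Omega * a_e) * (2.0 * Omega * a_e) / (g * eps[i]);
        double alpha2 = kappa * H / h_n - 0.25;   /* sign decides the wave type */
        printf("eigenvalue %7.2f: equivalent depth %8.1f m, ", eps[i], h_n);
        if (alpha2 > 0.0)
            printf("propagating, vertical wavelength %.0f km\n",
                   2.0 * PI * H / sqrt(alpha2) / 1000.0);
        else
            printf("evanescent (external mode)\n");
    }
    return 0;
}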
[ { "math_id": 0, "text": "u" }, { "math_id": 1, "text": "v" }, { "math_id": 2, "text": "w" }, { "math_id": 3, "text": "\\Phi" }, { "math_id": 4, "text": "\\int g(z,\\varphi) \\, dz" }, { "math_id": 5, "text": "N^2" }, { "math_id": 6, "text": "\\Omega" }, { "math_id": 7, "text": "\\varrho_o" }, { "math_id": 8, "text": "\\propto \\exp(-z/H)" }, { "math_id": 9, "text": "z" }, { "math_id": 10, "text": "\\lambda" }, { "math_id": 11, "text": "\\varphi" }, { "math_id": 12, "text": "J" }, { "math_id": 13, "text": "a" }, { "math_id": 14, "text": "g" }, { "math_id": 15, "text": "H" }, { "math_id": 16, "text": "t" }, { "math_id": 17, "text": "s" }, { "math_id": 18, "text": "\\sigma" }, { "math_id": 19, "text": "\\begin{align}\n\\Phi'(\\varphi, \\lambda, z, t) &= \\hat{\\Phi}(\\varphi,z) \\, e^{i(s\\lambda - \\sigma t)} \\\\\n\\hat{\\Phi}(\\varphi,z) &= \\sum_n \\Theta_n (\\varphi) \\, G_n(z)\n\\end{align}\n" }, { "math_id": 20, "text": "\n{L} {\\Theta}_n + \\varepsilon_n {\\Theta}_n = 0\n" }, { "math_id": 21, "text": "\n{L}=\\frac{\\partial}{\\partial \\mu} \\left[ \\frac{(1-\\mu^2)}{(\\eta^2 - \\mu^2)} \\, \n \\frac{\\partial}{\\partial \\mu} \\right] - \\frac{1}{\\eta^2 - \\mu^2} \\,\n \\left[ -\\frac{s}{\\eta} \\, \\frac{(\\eta^2 + \\mu^2)}{(\\eta^2 - \\mu^2)} + \n \\frac{s^2}{1-\\mu^2} \\right]\n" }, { "math_id": 22, "text": " \\mu = \\sin \\varphi " }, { "math_id": 23, "text": "\\eta= \\sigma / (2 \\Omega)" }, { "math_id": 24, "text": "\n\\varepsilon_n = (2 \\Omega a)^2 / gh_n.\n" }, { "math_id": 25, "text": "\\Theta_n" }, { "math_id": 26, "text": "\\varepsilon_n" }, { "math_id": 27, "text": "h_n" }, { "math_id": 28, "text": "\n\\frac{\\partial^2 G^{\\star}_n}{\\partial x^2} \\, + \\, \\alpha_n^2 \\, G^{\\star}_n = F_n(x)\n" }, { "math_id": 29, "text": "\nG^{\\star}_n (x) \\sim \\begin{cases}\n e^{-|\\alpha_n| x} & \\text{:} \\, \\alpha_n^2 < 0, \\, \\text{ evanescent or trapped} \\\\\n e^{i \\alpha_n x} & \\text{:} \\, \\alpha_n^2 > 0, \\, \\text{ propagating}\\\\\n e^{\\left( \\kappa - \\frac{1}{2} \\right) x} & \\text{:} \\, h_n = H / (1- \\kappa), F_n(x)=0 \\, \\forall x, \\, \\text{ Lamb waves (free solutions)}\n\\end{cases}\n" }, { "math_id": 30, "text": "\\begin{align}\n \\alpha_n^2 &= \\frac{\\kappa H}{h_n} - \\frac{1}{4} \\\\\n x &= \\frac{z}{H} \\\\\n G^{\\star}_n &= G_n \\, \\varrho_o^{\\frac{1}{2}} \\, N^{-1} \\\\\n F_n(x) & = - \\frac{\\varrho_o^{-\\frac{1}{2}}}{i \\sigma N} \\, \\frac{\\partial}{\\partial x} (\\varrho_o J_n).\n\\end{align}" }, { "math_id": 31, "text": "\\lambda_{z,n}" }, { "math_id": 32, "text": "\\alpha_n / H" }, { "math_id": 33, "text": "\n \\lambda_{z,n} = \\frac{2 \\pi \\, H}{\\alpha_n} = \n \\frac{2 \\pi \\, H}{ \\sqrt{\\frac{\\kappa H}{h_n} - \\frac{1}{4}}}.\n" }, { "math_id": 34, "text": "(\\alpha_n^2 > 0)" }, { "math_id": 35, "text": "\nc_{gz,n}=H \\frac{\\partial \\sigma}{\\partial \\alpha_n}\n" }, { "math_id": 36, "text": "\\alpha_n > 0" }, { "math_id": 37, "text": "(\\sigma < 0)" }, { "math_id": 38, "text": "\\alpha_n < 0" }, { "math_id": 39, "text": "(\\sigma >0)" }, { "math_id": 40, "text": "x=z/H" }, { "math_id": 41, "text": "\nK_n = s\\lambda + \\alpha_n x - \\sigma t = 0.\n" }, { "math_id": 42, "text": "\\propto e^{z/2H}" } ]
https://en.wikipedia.org/wiki?curid=10151726
10151936
Random regular graph
A random "r"-regular graph is a graph selected from formula_0, which denotes the probability space of all "r"-regular graphs on formula_1 vertices, where formula_2 and formula_3 is even. It is therefore a particular kind of random graph, but the regularity restriction significantly alters the properties that will hold, since most graphs are not regular. Properties of random regular graphs. As with more general random graphs, it is possible to prove that certain properties of random formula_4–regular graphs hold asymptotically almost surely. In particular, for formula_5, a random "r"-regular graph of large size is asymptotically almost surely "r"-connected. In other words, although formula_6–regular graphs with connectivity less than formula_6 exist, the probability of selecting such a graph tends to 0 as formula_1 increases. If formula_7 is a positive constant, and formula_8 is the least integer satisfying formula_9 then, asymptotically almost surely, a random "r"-regular graph has diameter at most "d". There is also a (more complex) lower bound on the diameter of "r"-regular graphs, so that almost all "r"-regular graphs (of the same size) have almost the same diameter. The distribution of the number of short cycles is also known: for fixed formula_10, let formula_11 be the number of cycles of lengths up to formula_4. Then the formula_12are asymptotically independent Poisson random variables with means formula_13 Algorithms for random regular graphs. It is non-trivial to implement the random selection of "r"-regular graphs efficiently and in an unbiased way, since most graphs are not regular. The "pairing model" (also "configuration model") is a method which takes "nr" points, and partitions them into "n" buckets with "r" points in each of them. Taking a random matching of the "nr" points, and then contracting the "r" points in each bucket into a single vertex, yields an "r"-regular graph or multigraph. If this object has no multiple edges or loops (i.e. it is a graph), then it is the required result. If not, a restart is required. A refinement of this method was developed by Brendan McKay and Nicholas Wormald. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathcal{G}_{n,r}" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "3 \\le r < n" }, { "math_id": 3, "text": "nr" }, { "math_id": 4, "text": "m" }, { "math_id": 5, "text": " r \\ge 3 " }, { "math_id": 6, "text": "r" }, { "math_id": 7, "text": "\\epsilon > 0" }, { "math_id": 8, "text": "d" }, { "math_id": 9, "text": "(r-1)^{d-1} \\ge (2 + \\epsilon)rn \\ln n" }, { "math_id": 10, "text": "m \\ge 3" }, { "math_id": 11, "text": "Y_3,Y_4,...Y_m" }, { "math_id": 12, "text": "Y_i" }, { "math_id": 13, "text": "\\lambda_i=\\frac{(r-1)^i}{2i}" } ]
https://en.wikipedia.org/wiki?curid=10151936
1015276
Shoe size
Measurement scale indicating the fitting size of a shoe A shoe size is an indication of the fitting size of a shoe for a person. There are a number of different shoe-size systems used worldwide. While all shoe sizes use a number to indicate the length of the shoe, they differ in exactly what they measure, what unit of measurement they use, and where the size 0 (or 1) is positioned. Some systems also indicate the shoe width, sometimes also as a number, but in many cases by one or more letters. Some regions use different shoe-size systems for different types of shoes (e.g. men's, women's, children's, sport, and safety shoes). This article sets out several complexities in the definition of shoe sizes. In practice, shoes are often tried on for both size and fit before they are purchased. Deriving the shoe size. Foot versus shoe and last. The length of a person's foot is commonly defined as the distance between two parallel lines that are perpendicular to the foot and in contact with the most prominent toe and the most prominent part of the heel. Foot length is measured with the subject standing barefoot and the weight of the body equally distributed between both feet. The sizes of the left and right feet are often slightly different. In this case, both feet are measured, and purchasers of mass-produced shoes are advised to purchase a shoe size based on the larger foot, as most retailers do not sell pairs of shoes in non-matching sizes. Each size of shoe is considered suitable for a small interval of foot lengths, typically limited by half-point of the shoe size system. A shoe-size system can refer to three characteristic lengths: All these measures differ substantially from one another for the same shoe. For example, the inner cavity of a shoe must typically be 15 mm longer than the foot, and the shoe last would be 2 size points larger than the foot, but this varies between different types of shoes and the shoe size system used. The typical range lies between for the UK/US size system and for the European size system, but may extend to and . Length. Sizing systems also differ in the units of measurement they use. This also results in different increments between shoe sizes, because usually only "full" or "half" sizes are made. The following length units are commonly used today to define shoe-size systems: Since the early 2000s, labels on sports shoes typically include sizes measured in all four systems: EU, UK, US, and Mondopoint. Zero point. The sizing systems also place size 0 (or 1) at different locations: Width. Some systems also include the width of a foot (or the girth of a shoe last), but do so in a variety of ways: The width for which these sizes are suitable can vary significantly between manufacturers. The A–E width indicators used by most American, Canadian, and some British shoe manufacturers are typically based on the width of the foot, and common step sizes are &lt;templatestyles src="Fraction/styles.css" /&gt;3⁄16 inch (4.8 mm). Difficulties. There could be differences between various shoe size tables from shoemakers and shoe stores. They are usually due to the following factors: Conversion tables available on the Web often contain obvious errors, not taking into account different zero points or wiggle room. Although shoe size systems are not fully standardised, the ISO/TC 137 had released a technical specification ISO/TS 19407:2015 for converting shoe sizes across various local sizing systems. 
Even though the problem of converting shoe sizes accurately has yet to be fully resolved, this standard serves as "a good compromise solution" for shoe-buyers. Common sizing systems. United Kingdom. Shoe size in the United Kingdom, Ireland, India, Pakistan and South Africa is based on the length of the last used to make the shoes, measured in barleycorns (1⁄3 inch) starting from the smallest size deemed practical, which is called size zero. It is not formally standardised. The last is typically longer than the foot heel to toe length by 1⁄2 to 2⁄3 in or 1+1⁄2 to 2 barleycorns, so to determine the shoe size based on actual foot length one must add 2 barleycorns. A child's size zero is equivalent to 4 inches (a hand = 12 barleycorns = 10.16 cm), and the sizes go up to size 13+1⁄2 (measuring 25+1⁄2 barleycorns, or ). Thus, the calculation for a children's shoe size in the UK is: child shoe size (barleycorns) = 3 × last length (in) − 12 equivalent to: child shoe size (barleycorns) ≈ 3 × foot length (in) − 10. An adult size one is then the next size up (26 barleycorns, or ) and each size up continues the progression in barleycorns. The calculation for an adult shoe size in the UK is thus: adult shoe size (barleycorns) = 3 × last length (in) − 25 equivalent to: adult shoe size (barleycorns) ≈ 3 × foot length (in) − 23. Although this sizing standard is nominally for both men and women, some manufacturers use different numbering for women's UK sizing. In Australia and New Zealand, the UK system is followed for men's and children's footwear. Women's footwear follows the US sizings. In Mexico, shoes are sized either according to the foot length they are intended to fit, in cm, or alternatively to another variation of the barleycorn system, with sizes calculated approximately as: adult shoe size (barleycorns) = 3 × last length (in) − 25+1⁄2 equivalent to: adult shoe size (barleycorns) ≈ 3 × foot length (in) − 23+1⁄2. United States. In the United States and Canada, the traditional system is similar to British but there are different zero points for children's, men's, and women's shoe sizes. The most common is the customary system where men's shoes are one size longer than the UK equivalent, making a men's 13 in the US the same size as a men's 12 in the UK. Customary. The customary system is offset by 1⁄4 barleycorn, or , compared to the UK sizes. The men's range starts at size 1, with zero point corresponding to the children's size 13 which equals 24+3⁄4 barleycorns or . However, most US manufacturers use greater offsets, such as 1⁄2 and 1 barleycorns. Therefore, in current practice, US men's size 1 equals 25 barleycorns, or , so the calculation for a male shoe size in the United States is: male shoe size (barleycorns) = 3 × last length (in) − 24 equivalent to: male shoe size (barleycorns) ≈ 3 × foot length (in) − 22. 
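As a quick illustration of the barleycorn formulas above, the following C sketch converts a foot length in inches to the corresponding UK adult size and current-practice US men's size. The example foot length and the rounding to the nearest half size are assumptions for demonstration only.

#include <stdio.h>
#include <math.h>

/* round a computed size to the nearest half size, since shoes are normally
   made only in full and half sizes */
static double half_size(double s)
{
    return round(s * 2.0) / 2.0;
}

int main(void)
{
    double foot_in  = 10.5;                  /* example foot length in inches (assumed) */
    double uk_adult = 3.0 * foot_in - 23.0;  /* adult UK size ~ 3 x foot length - 23    */
    double us_male  = 3.0 * foot_in - 22.0;  /* US men's size ~ 3 x foot length - 22    */

    printf("foot length %.1f in: UK adult %.1f, US men's %.1f\n",
           foot_in, half_size(uk_adult), half_size(us_male));
    return 0;
}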
In the "standard" or "FIA" (Footwear Industries of America) scale, women's sizes are men's sizes plus 1 (so a men's &lt;templatestyles src="Fraction/styles.css" /&gt;10+1⁄2 is a women's &lt;templatestyles src="Fraction/styles.css" /&gt;11+1⁄2): female shoe size (barleycorns) = 3 × last length (in) − 23 equivalent to: female shoe size (barleycorns) ≈ 3 × foot length (in) − 21. There is also the "common" scale, where women's sizes are equal to men's sizes plus &lt;templatestyles src="Fraction/styles.css" /&gt;1+1⁄2. Children's shoes start from size zero, which is equivalent to &lt;templatestyles src="Fraction/styles.css" /&gt;3+11⁄12 inches (&lt;templatestyles src="Fraction/styles.css" /&gt;11+3⁄4 barleycorns = 99.48 mm), and end at &lt;templatestyles src="Fraction/styles.css" /&gt;13+1⁄2. Thus the formula for children's sizes in the US is child shoe size (barleycorns) = 3 × last length (in) − 11&lt;templatestyles src="Fraction/styles.css" /&gt;3⁄4 equivalent to: child shoe size (barleycorns) ≈ 3 × foot length (in) − 9&lt;templatestyles src="Fraction/styles.css" /&gt;3⁄4. Alternatively, a Mondopoint-based scale running from K4 to K13 and then 1 to 7 is in use. K4 to K9 are toddler sizes, K10 to K13 are pre-school and 1 to 7 are grade school sizes. Brannock Device. The Brannock Device is a measuring instrument invented by Charles F. Brannock in 1925 and now found in many shoe stores. The recent formula used by the Brannock device assumes a foot length of 2 barleycorns less than the length of the last; thus, men's size 1 is equivalent to a last's length of and foot's length of , and children's size 1 is equivalent to last's length and foot's length. The device also measures the length of the arch, or the distance between the heel and the ball (metatarsal head) of the foot. For this measurement, the device has a shorter scale at the instep of the foot with an indicator that slides into position. If this scale indicates a larger size, it is taken in place of the foot's length to ensure proper fitting. For children's sizes, additional wiggle room is added to allow for growth. The device also measures the width of the foot and assigns it designations of AAA, AA, A, B, C, D, E, EE, or EEE. The widths are &lt;templatestyles src="Fraction/styles.css" /&gt;3⁄16 inches apart and differ by shoe length. Some shoe stores and medical professionals use optical 3D surface scanners to precisely measure the length and width of both feet and recommend the appropriate shoe model and size. Continental Europe. In the Continental European system, the shoe size is the length of the last, expressed in Paris points or , for both sexes and for adults and children alike. The last is typically longer than the foot heel to toe length by to , or 2 to &lt;templatestyles src="Fraction/styles.css" /&gt;2+1⁄2 Paris points, so to determine the shoe size based on actual foot length one must add 2 Paris points. 
Because a Paris point is 2⁄3 of a centimetre, a centimetre is 3⁄2 Paris points, and the formula is as follows: shoe size (Paris points) = 3⁄2 × last length (cm) equivalent to: shoe size (Paris points) ≈ (3⁄2 × foot length (cm)) + 2 The Continental European system is used in Austria, Belgium, Denmark, France, Germany, Italy, Luxembourg, the Netherlands, Norway, Poland, Portugal, Spain, Sweden, Switzerland, and most other continental European countries. It is also used in Middle Eastern countries (such as Iran), Brazil—which uses the same method but subtracts 2 from the final result, in effect measuring foot size instead of last size—and, commonly, Hong Kong. The system is sometimes described as Stich size (from "Pariser Stich", the German name for the Paris point), or "Stichmaß" size (from a German name of a micrometer for internal measurements). Mondopoint. The Mondopoint shoe length system is widely used in the sports industry to size athletic shoes, ski boots, skates, and pointe ballet shoes; it was also adopted as the primary shoe sizing system in the Soviet Union, Russia, East Germany, China, Japan, Taiwan, and South Korea, and as an optional system in the United Kingdom, India, Mexico, and European countries. The Mondopoint system is also used by NATO and other military services. The Mondopoint system was introduced in the 1970s by International Standard ISO 2816:1973 "Fundamental characteristics of a system of shoe sizing to be known as Mondopoint" and ISO 3355:1975 "Shoe sizes – System of length grading (for use in the Mondopoint system)". ISO 9407:2019, "Shoe sizes—Mondopoint system of sizing and marking", is the current version of the standard. The Mondopoint system is based on average foot length and foot width for which the shoe is suitable, measured in millimetres. The length of the foot is measured as horizontal distance between the perpendiculars in contact with the end of the most prominent toe and the most prominent part of the heel. The width of the foot is measured as horizontal distance between vertical lines in contact with the first and fifth metatarsophalangeal joints. The perimeter of the foot is the length of the foot circumference, measured with a flexible tape at the same points as foot width. The origin of the grade is zero. The labeling typically includes foot length, followed by an optional foot width: a shoe size of 280/110 indicates a foot length of 280 mm and width of 110 mm. Other customary markings, such as EU, UK and US sizes, may also be used. Because Mondopoint takes the foot width into account, it allows for better fitting than most other systems. A given shoe size shall fit every foot with indicated average measurements, and those differing by no more than a half-step of the corresponding interval grid. Standard foot lengths are defined with interval steps of 5 mm for casual footwear and steps of 7.5 mm for specialty (protective) footwear. The standard is maintained by ISO Technical Committee 137 "Footwear sizing designations and marking systems." East Asia. In Japan, mainland China, Taiwan, and South Korea, the Mondopoint system is used as defined by national standard Japanese Industrial Standards (JIS) S 5037:1998 and its counterparts Guobiao (GB/T) 3293.1-1998, Chinese National Standard (CNS) 4800-S1093:2000 and Korean Standards Association (KS) M 6681:2007. 
Foot length and girth (foot circumference) are taken into account. The foot length is indicated in centimetres; an increment of 5 mm is used. The length is followed by designators for girth (A, B, C, D, E, EE, EEE, EEEE, F, G), which are specified in an indexed table as foot circumference in millimetres for each given foot length; foot width is also included as supplemental information. There are different tables for men's, women's, and children's (less than 12 years of age) shoes. Not all designators are used for all genders and in all countries. For example, the largest girth for women in Taiwan is EEEE, whereas in Japan, it is F. The foot length and width can also be indicated in millimetres, separated by a slash or a hyphen. Soviet Union (Russia, Commonwealth of Independent States). Historically the Soviet Union used the European (Paris point) system, but the Mondopoint metric system was introduced in the 1980s by GOST 24382-80 "Sizes of Sport Shoes" (based on ISO 2816:1973) and GOST 11373-88 "Shoe Sizes" (based on ISO 3355:1975), and more recently by GOST R 58149-2018 (based on ISO 9407:1991). Standard metric foot sizes can be converted to the nearest Paris point (2⁄3 cm) sizes using approximate conversion tables; shoes are marked with both foot length in millimetres, as for pointe ballet shoe sizes, and last length in European Paris point sizes (although such converted "Stichmaß" sizes may come 1⁄2 to 1 size smaller than comparable European-made adult footwear, and up to 1+1⁄2 sizes smaller for children's footwear, according to ISO 19407 shoe size definitions). Foot lengths are aligned to 5 mm intervals for sports and casual shoes, and 7.5 mm for protective/safety shoes. Optional foot width designations include narrow, normal (medium or regular), and wide grades. Infant sizes start at 16 (95 mm) and pre-school kids at 23 (140 mm); schoolchildren sizes span 32 (202.5 mm) to 40 (255 mm) for girls and 32 to 44 (285 mm) for boys. Adult sizes span 33 (210 mm) to 44 for women and 38 (245 mm) to 48 (310 mm) for men. ISO 19407 and shoe size conversion. ISO/TS 19407:2023 "Footwear - Sizing - Conversion of sizing systems" is a technical specification from the International Organization for Standardization. It contains basic description and conversion tables for major shoe sizing systems including Mondopoint with length steps of 5 mm and 7.5 mm, European Paris point system, and UK 1⁄3-inch system. The standard has also been adopted as Russian GOST R 57425-2017. The standard is maintained by ISO/TC 137, which also developed ISO/TS 19408:2015 "Footwear - Sizing - Vocabulary and terminology"; in development are companion standards ISO/TS 19409 "Footwear - Sizing - Measurement of last dimensions" and ISO/TS 19410 "Footwear - Sizing - Inshoe measurement". Shoe sizing. The adult shoe sizes are calculated from typical last length, which is converted from foot length in millimetres by adding an allowance of two shoe sizes: formula_0 where "L" is foot length in millimetres. Direct conversion between adult UK, Continental European and Mondopoint shoe size systems is derived as follows: formula_1 Using these formulas, the standard derives shoe size tables for adults and children, based on actual foot length measurement (insole) in millimetres. 
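The adult conversion formulas just given can be applied directly in a small C sketch; the 270 mm foot length below is an arbitrary example value chosen only for illustration.

#include <stdio.h>

int main(void)
{
    double L   = 270.0;                    /* foot length in mm (assumed example) */
    double eur = 3.0 / 20.0 * L + 2.0;     /* EUR size = 3L/20 + 2                */
    double uk  = 3.0 / 25.4 * L - 23.0;    /* UK size  = 3L/25.4 - 23             */
    /* direct EUR <-> UK conversion, as derived above */
    double eur_from_uk = 1.27 * (uk + 23.0) + 2.0;

    printf("foot %.0f mm -> EUR %.1f, UK %.1f (EUR recovered from UK: %.1f)\n",
           L, eur, uk, eur_from_uk);
    return 0;
}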
Typical last length ranges are also included (13 to 25 mm over foot length for adults, 8% greater than foot length plus 6 mm for children). Exact foot lengths may contain repeating decimals because the formulas include division by 3; in practice, approximate interval steps of 6.67 mm and 8.47 mm are used, and sizes are rounded to either the nearest half size or closest matching Mondopoint size. Size marking. It is recommended to include size marking in each of the four sizing systems on the shoe label and on the package. The principal system used for manufacturing the shoe needs to be placed first and emphasized with a boldface. The standard includes quick conversion tables for adult shoe size marking; they provide matching sizes for shoes marked in Mondopoint, European, and UK systems. Converted values are rounded to a larger shoe size to increase comfort. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\begin{align}\n \\text{EUR shoe size} &= \\frac{L + 2\\times{6.66\\bar{6}} } {6.6\\bar{6}} = \\frac{3}{20}\\times{L} + 2 \\\\[3pt]\n \\text{UK shoe size} &= \\frac{L + 2\\times{8.4\\bar{6}} } {8.4\\bar{6}} - 25= \\frac{3}{25.4}\\times{L} - 23\n\\end{align}" }, { "math_id": 1, "text": "\\begin{align}\n L &= \\frac{20}{3} \\times\\left(\\text{EUR shoe size} - 2 \\right) = \\frac{25.4}{3} \\times\\left(\\text{UK shoe size} + 23 \\right) \\\\[3pt]\n \\text{EUR shoe size} &= {1.27 \\times\\left(\\text{UK shoe size} + 23\\right)} + 2 \\\\[3pt]\n \\text{UK shoe size} &= { \\frac{\\text{EUR shoe size} - 2}{1.27} } - 23\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=1015276
1015425
Stolper–Samuelson theorem
Macroeconomic trade theorem The Stolper–Samuelson theorem is a theorem in Heckscher–Ohlin trade theory. It describes the relationship between relative prices of output and relative factor returns—specifically, real wages and real returns to capital. The theorem states that—under specific economic assumptions (constant returns to scale, perfect competition, equality of the number of factors to the number of products)—a rise in the relative price of a good will lead to a rise in the real return to that factor which is used most intensively in the production of the good, and conversely, to a fall in the real return to the other factor. History. It was derived in 1941 from within the framework of the Heckscher–Ohlin model by Wolfgang Stolper and Paul Samuelson, but has subsequently been derived in less restricted models. As a term, it is applied to all cases where the effect is seen. Ronald W. Jones and José Scheinkman show that under very general conditions the factor returns change with output prices as predicted by the theorem. If considering the change in real returns under increased international trade a robust finding of the theorem is that returns to the scarce factor will go down, ceteris paribus. An additional robust corollary of the theorem is that a compensation to the scarce factor exists which will overcome this effect and make increased trade Pareto optimal. The original Heckscher–Ohlin model was a two-factor model with a labor market specified by a single number. Therefore, the early versions of the theorem could make no predictions about the effect on the unskilled labor force in a high-income country under trade liberalization. However, more sophisticated models with multiple classes of worker productivity have been shown to produce the Stolper–Samuelson effect within each class of labor: Unskilled workers producing traded goods in a high-skill country will be worse off as international trade increases, because, relative to the world market in the good they produce, an unskilled first world production-line worker is a less abundant factor of production than capital. The Stolper–Samuelson theorem is closely linked to the factor price equalization theorem, which states that, regardless of international factor mobility, factor prices will tend to equalize across countries that do not differ in technology. Derivation. Considering a two-good economy that produces only wheat and cloth, with labor and land being the only factors of production, wheat a land-intensive industry and cloth a labor-intensive one, and assuming that the price of each product equals its marginal cost, the theorem can be derived. The price of cloth should be: (1) formula_0 with "P"("C") standing for the price of cloth, "r" standing for rent paid to landowners, "w" for wage levels and "a" and "b" respectively standing for the amount of land and labor used, and do not change with the prices of goods. Similarly, the price of wheat would be: (2) formula_1 with "P"("W") standing for the price of wheat, "r" and "w" for rent and wages, and "c" and "d" for the respective amount of land and labor used, and also considered to be constant. If, then, cloth experiences a rise in its price, at least one of its factors must also become more expensive, for equation 1 to hold true, since the relative amounts of labor and land are not affected by changing prices. It can be assumed that it would be labor—the factor that is intensively used in the production of cloth—that would rise. 
When wages rise, rent must fall, in order for equation 2 to hold true. But a fall in rent also affects equation 1. For it to still hold true, then, the rise in wages must be more than proportional to the rise in cloth prices. A rise in the price of a product, then, will more than proportionally raise the return to the most intensively used factor, and decrease the return to the less intensively used factor. Criticism. The validity of the Heckscher–Ohlin model has been questioned since the classical Leontief paradox. Indeed, Feenstra called the Heckscher–Ohlin model "hopelessly inadequate as an explanation for historical and modern trade patterns". As for the Stolper–Samuelson theorem itself, Davis and Mishra recently stated, "It is time to declare Stolper–Samuelson dead". They argue that the Stolper–Samuelson theorem is "dead" because following trade liberalization in some developing countries (particularly in Latin America), wage inequality rose, and, under the assumption that these countries are labor-abundant, the SS theorem predicts that wage inequality should have fallen. Aside from the declining trend in wage inequality in Latin America that has followed trade liberalization in the longer run (see Lopez-Calva and Lustig), an alternative view would be to recognize that technically the SS theorem predicts a relationship between output prices and relative wages. Papers that compare output prices with changes in relative wages find moderate-to-strong support for the Stolper–Samuelson theorem for Chile, Mexico, and Brazil. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
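The magnification effect in the derivation above can be checked numerically. In the following C sketch the unit input requirements and the initial prices are made-up numbers, chosen only so that cloth is labor-intensive relative to wheat; they are not estimates from any real economy.

#include <stdio.h>

/* solve  a*r + b*w = pc,  c*r + d*w = pw  for the factor returns (Cramer's rule) */
static void factor_prices(double a, double b, double c, double d,
                          double pc, double pw, double *r, double *w)
{
    double det = a * d - b * c;
    *r = (pc * d - b * pw) / det;
    *w = (a * pw - pc * c) / det;
}

int main(void)
{
    /* land and labour per unit of cloth (a, b) and wheat (c, d) -- assumed values */
    double a = 1.0, b = 2.0, c = 2.0, d = 1.0;
    double r0, w0, r1, w1;

    factor_prices(a, b, c, d, 4.0, 5.0, &r0, &w0);   /* initial prices          */
    factor_prices(a, b, c, d, 4.4, 5.0, &r1, &w1);   /* cloth price raised 10%  */

    printf("wage: %.3f -> %.3f  (%+.1f%%)\n", w0, w1, 100.0 * (w1 / w0 - 1.0));
    printf("rent: %.3f -> %.3f  (%+.1f%%)\n", r0, r1, 100.0 * (r1 / r0 - 1.0));
    /* the wage rises by more than the 10% rise in the cloth price, while the
       rent falls -- the magnification effect described in the derivation */
    return 0;
}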
[ { "math_id": 0, "text": "P(C)= ar + bw, \\, " }, { "math_id": 1, "text": "P(W) = cr + dw \\, " } ]
https://en.wikipedia.org/wiki?curid=1015425
10159772
Percus–Yevick approximation
In statistical mechanics the Percus–Yevick approximation is a closure relation to solve the Ornstein–Zernike equation. It is also referred to as the Percus–Yevick equation. It is commonly used in fluid theory to obtain e.g. expressions for the radial distribution function. The approximation is named after Jerome K. Percus and George J. Yevick. Derivation. The direct correlation function represents the direct correlation between two particles in a system containing "N" − 2 other particles. It can be represented by formula_0 where formula_1 is the radial distribution function, i.e. formula_2 (with "w"("r") the potential of mean force) and formula_3 is the radial distribution function without the direct interaction between pairs formula_4 included; i.e. we write formula_5. Thus we "approximate" "c"("r") by formula_6 If we introduce the function formula_7 into the approximation for "c"("r"), one obtains formula_8 This is the essence of the Percus–Yevick approximation: if we substitute this result in the Ornstein–Zernike equation, one obtains the Percus–Yevick equation: formula_9 The approximation was defined by Percus and Yevick in 1958. Hard spheres. For hard spheres, the potential "u(r)" is either zero or infinite, and therefore the Boltzmann factor formula_10 is either one or zero, regardless of temperature "T". Therefore, the structure of a hard-spheres fluid is temperature independent. This leaves just two parameters: the hard-core radius "R" (which can be eliminated by rescaling distances or wavenumbers), and the packing fraction η (which has a maximum value of 0.64 for random close packing). Under these conditions, the Percus–Yevick equation has an analytical solution, obtained by Wertheim in 1963. Solution as C code. The static structure factor of the hard-spheres fluid in Percus–Yevick approximation can be computed using the following C function (it requires the standard math header for sin, cos and pow):

double py(double qr, double eta)
{
    /* coefficients of Wertheim's analytical solution */
    const double a = pow(1+2*eta, 2)/pow(1-eta, 4);
    const double b = -6*eta*pow(1+eta/2, 2)/pow(1-eta, 4);
    const double c = eta/2*pow(1+2*eta, 2)/pow(1-eta, 4);
    const double A = 2*qr;
    const double A2 = A*A;
    const double G = a/A2*(sin(A)-A*cos(A))
                   + b/A/A2*(2*A*sin(A)+(2-A2)*cos(A)-2)
                   + c/pow(A,5)*(-pow(A,4)*cos(A)+4*((3*A2-6)*cos(A)+A*(A2-6)*sin(A)+6));
    return 1/(1+24*eta*G/A);
}

Hard spheres in shear flow. For hard spheres in shear flow, the function "u(r)" arises from the solution to the steady-state two-body Smoluchowski convection–diffusion equation or two-body Smoluchowski equation with shear flow. An approximate analytical solution to the Smoluchowski convection-diffusion equation was found using the method of matched asymptotic expansions by Banetta and Zaccone in Ref. This analytical solution can then be used together with the Percus–Yevick approximation in the Ornstein-Zernike equation. Approximate solutions for the pair distribution function in the extensional and compressional sectors of shear flow, and hence the angular-averaged radial distribution function, can be obtained, as shown in Ref., which are in good parameter-free agreement with numerical data up to packing fractions formula_11. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
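A possible usage sketch for the py() function above: the driver below, the packing fraction and the q-range are assumptions for illustration only, and the program must be compiled in one file with the function and linked against the math library.

#include <stdio.h>

double py(double qr, double eta);   /* the Percus-Yevick structure factor defined above */

int main(void)
{
    double eta = 0.4;                           /* packing fraction (assumed) */
    for (double qr = 0.5; qr <= 10.0; qr += 0.5)
        printf("qR = %4.1f   S(q) = %.4f\n", qr, py(qr, eta));
    return 0;
}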
[ { "math_id": 0, "text": " c(r)=g_{\\rm total}(r) - g_{\\rm indirect}(r) \\, " }, { "math_id": 1, "text": "g_{\\rm total}(r)" }, { "math_id": 2, "text": "g(r)=\\exp[-\\beta w(r)]" }, { "math_id": 3, "text": "g_{\\rm indirect}(r)" }, { "math_id": 4, "text": "u(r)" }, { "math_id": 5, "text": "g_{\\rm indirect}(r)=\\exp[-\\beta(w(r)-u(r))]" }, { "math_id": 6, "text": " c(r)=e^{-\\beta w(r)}- e^{-\\beta[w(r)-u(r)]}. \\, " }, { "math_id": 7, "text": "y(r)=e^{\\beta u(r)}g(r)" }, { "math_id": 8, "text": " c(r)=g(r)-y(r)=e^{-\\beta u}y(r)-y(r)=f(r)y(r). \\, " }, { "math_id": 9, "text": " y(r_{12})=1+\\rho \\int f(r_{13})y(r_{13})h(r_{23}) d \\mathbf{r_{3}}. \\, " }, { "math_id": 10, "text": "\\text{e}^{-u/k_\\text{B}T}" }, { "math_id": 11, "text": " \\eta \\approx 0.5 " } ]
https://en.wikipedia.org/wiki?curid=10159772
10159868
Spinors in three dimensions
Spin representations of the SO(3) group In mathematics, the spinor concept as specialised to three dimensions can be treated by means of the traditional notions of dot product and cross product. This is part of the detailed algebraic discussion of the rotation group SO(3). Formulation. The association of a spinor with a 2×2 complex traceless Hermitian matrix was formulated by Élie Cartan. In detail, given a vector x = ("x"1, "x"2, "x"3) of real (or complex) numbers, one can associate the complex matrix formula_0 In physics, this is often written as a dot product formula_1, where formula_2 is the vector form of Pauli matrices. Matrices of this form have the following properties, which relate them intrinsically to the geometry of 3-space: The last property can be used to simplify rotational operations. It is an elementary fact from linear algebra that any rotation in 3-space factors as a composition of two reflections. (More generally, any orientation-reversing orthogonal transformation is either a reflection or the product of three reflections.) Thus if "R" is a rotation which decomposes as the reflection in the plane perpendicular to a unit vector formula_12 followed by the reflection in the plane perpendicular to formula_13, then the matrix formula_14 represents the rotation of the vector formula_11 through "R". Having effectively encoded all the rotational linear geometry of 3-space into a set of complex 2×2 matrices, it is natural to ask what role, if any, the 2×1 matrices (i.e., the column vectors) play. Provisionally, a spinor is a column vector formula_15 with complex entries "ξ"1 and "ξ"2. The space of spinors is evidently acted upon by complex 2×2 matrices. As shown above, the product of two reflections in a pair of unit vectors defines a 2×2 matrix whose action on euclidean vectors is a rotation. So there is an action of rotations on spinors. However, there is one important caveat: the factorization of a rotation is not unique. Clearly, if formula_16 is a representation of a rotation, then replacing "R" by −"R" will yield the same rotation. In fact, one can easily show that this is the only ambiguity which arises. Thus the action of a rotation on a spinor is always "double-valued". History. There were some precursors to Cartan's work with 2×2 complex matrices: Wolfgang Pauli had used these matrices so intensively that elements of a certain basis of a four-dimensional subspace are called Pauli matrices σi, so that the Hermitian matrix is written as a Pauli vector formula_17 In the mid 19th century the algebraic operations of this algebra of four complex dimensions were studied as biquaternions. Michael Stone and Paul Goldbar, in "Mathematics for Physics", contest this, saying, "The spin representations were discovered by ´Elie Cartan in 1913, some years before they were needed in physics." Formulation using isotropic vectors. Spinors can be constructed directly from isotropic vectors in 3-space without using the quaternionic construction. To motivate this introduction of spinors, suppose that "X" is a matrix representing a vector x in complex 3-space. Suppose further that x is isotropic: i.e., formula_18 Then since the determinant of "X" is zero there is a proportionality between its rows or columns. 
Thus the matrix may be written as an outer product of two complex 2-vectors: formula_19 This factorization yields an overdetermined system of equations in the coordinates of the vector x: subject to the constraint This system admits the solutions Either choice of sign solves the system (1). Thus a spinor may be viewed as an isotropic vector, along with a choice of sign. Note that because of the logarithmic branching, it is impossible to choose a sign consistently so that (3) varies continuously along a full rotation among the coordinates x. In spite of this ambiguity of the representation of a rotation on a spinor, the rotations do act unambiguously by a fractional linear transformation on the ratio "ξ"1:"ξ"2 since one choice of sign in the solution (3) forces the choice of the second sign. In particular, the space of spinors is a projective representation of the orthogonal group. As a consequence of this point of view, spinors may be regarded as a kind of "square root" of isotropic vectors. Specifically, introducing the matrix formula_20 the system (1) is equivalent to solving "X" = 2 "ξ" t"ξ" "C" for the undetermined spinor "ξ". "A fortiori", if the roles of "ξ" and x are now reversed, the form "Q"("ξ") = x defines, for each spinor "ξ", a vector x quadratically in the components of "ξ". If this quadratic form is polarized, it determines a bilinear vector-valued form on spinors "Q"("μ", "ξ"). This bilinear form then transforms tensorially under a reflection or a rotation. Reality. The above considerations apply equally well whether the original euclidean space under consideration is real or complex. When the space is real, however, spinors possess some additional structure which in turn facilitates a complete description of the representation of the rotation group. Suppose, for simplicity, that the inner product on 3-space has positive-definite signature: With this convention, real vectors correspond to Hermitian matrices. Furthermore, real rotations preserving the form (4) correspond (in the double-valued sense) to unitary matrices of determinant one. In modern terms, this presents the special unitary group SU(2) as a double cover of SO(3). As a consequence, the spinor Hermitian product is preserved by all rotations, and therefore is canonical. If, however, the signature of the inner product on 3-space is indefinite (i.e., non-degenerate, but also not positive definite), then the foregoing analysis must be adjusted to reflect this. Suppose then that the length form on 3-space is given by: Then the construction of spinors of the preceding sections proceeds, but with formula_21 replacing formula_22 formula_21 in all the formulas. With this new convention, the matrix associated to a real vector formula_23 is itself real: formula_24. The form (5) is no longer invariant under a real rotation (or reversal), since the group stabilizing (4′) is now a Lorentz group O(2,1). Instead, the anti-Hermitian form formula_25 defines the appropriate notion of inner product for spinors in this metric signature. This form is invariant under transformations in the connected component of the identity of O(2,1). In either case, the quartic form formula_26 is fully invariant under O(3) (or O(2,1), respectively), where "Q" is the vector-valued bilinear form described in the previous section. The fact that this is a quartic invariant, rather than quadratic, has an important consequence. 
If one confines attention to the group of special orthogonal transformations, then it is possible unambiguously to take the square root of this form and obtain an identification of spinors with their duals. In the language of representation theory, this implies that there is only one irreducible spin representation of SO(3) (or SO(2,1)) up to isomorphism. If, however, reversals (e.g., reflections in a plane) are also allowed, then it is no longer possible to identify spinors with their duals owing to a change of sign on the application of a reflection. Thus there are two irreducible spin representations of O(3) (or O(2,1)), sometimes called the pin representations. Reality structures. The differences between these two signatures can be codified by the notion of a "reality structure" on the space of spinors. Informally, this is a prescription for taking a complex conjugate of a spinor, but in such a way that this may not correspond to the usual conjugate per the components of a spinor. Specifically, a reality structure is specified by a Hermitian 2 × 2 matrix "K" whose product with itself is the identity matrix: "K"2 = "Id". The conjugate of a spinor with respect to a reality structure "K" is defined by formula_27 The particular form of the inner product on vectors (e.g., (4) or (4′)) determines a reality structure (up to a factor of -1) by requiring formula_28, whenever "X" is a matrix associated to a real vector. Thus "K" = "i C" is the reality structure in Euclidean signature (4), and "K" = "Id" is that for signature (4′). With a reality structure in hand, one has the following results: Examples in physics. Spinors of the Pauli spin matrices. Often, the first example of spinors that a student of physics encounters are the 2×1 spinors used in Pauli's theory of electron spin. The Pauli matrices are a vector of three 2×2 matrices that are used as spin operators. Given a unit vector in 3 dimensions, for example ("a", "b", "c"), one takes a dot product with the Pauli spin matrices to obtain a spin matrix for spin in the direction of the unit vector. The eigenvectors of that spin matrix are the spinors for spin-1/2 oriented in the direction given by the vector. Example: "u" = (0.8, -0.6, 0) is a unit vector. Dotting this with the Pauli spin matrices gives the matrix: formula_31 The eigenvectors may be found by the usual methods of linear algebra, but a convenient trick is to note that a Pauli spin matrix is an involutory matrix, that is, the square of the above matrix is the identity matrix. Thus a (matrix) solution to the eigenvector problem with eigenvalues of ±1 is simply 1 ± "Su". That is, formula_32 One can then choose either of the columns of the eigenvector matrix as the vector solution, provided that the column chosen is not zero. Taking the first column of the above, eigenvector solutions for the two eigenvalues are: formula_33 The trick used to find the eigenvectors is related to the concept of ideals, that is, the matrix eigenvectors (1 ± "Su")/2 are projection operators or idempotents and therefore each generates an ideal in the Pauli algebra. The same trick works in any Clifford algebra, in particular the Dirac algebra that is discussed below. These projection operators are also seen in density matrix theory where they are examples of pure density matrices. More generally, the projection operator for spin in the ("a", "b", "c") direction is given by formula_34 and any non zero column can be taken as the projection operator. 
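The eigenvector trick used in the example above can be checked numerically. The following C program is an illustrative sketch only (the use of C99 complex arithmetic and the printing format are incidental choices): it applies S_u to the first column of 1 ± S_u for u = (0.8, −0.6, 0) and confirms the eigenvalues ±1.

#include <stdio.h>
#include <complex.h>

int main(void)
{
    /* S_u = 0.8*sigma_1 - 0.6*sigma_2 for the unit vector u = (0.8, -0.6, 0) */
    double complex S[2][2] = {
        { 0.0,            0.8 + 0.6 * I },
        { 0.8 - 0.6 * I,  0.0           }
    };

    for (int sign = +1; sign >= -1; sign -= 2) {
        /* first column of 1 + sign*S_u */
        double complex v0 = 1.0 + sign * S[0][0];
        double complex v1 = 0.0 + sign * S[1][0];
        /* apply S_u to the column */
        double complex w0 = S[0][0] * v0 + S[0][1] * v1;
        double complex w1 = S[1][0] * v0 + S[1][1] * v1;
        printf("sign %+d: S_u v = (%.2f%+.2fi, %.2f%+.2fi), expected %+d * v = (%.2f%+.2fi, %.2f%+.2fi)\n",
               sign,
               creal(w0), cimag(w0), creal(w1), cimag(w1),
               sign,
               creal(sign * v0), cimag(sign * v0),
               creal(sign * v1), cimag(sign * v1));
    }
    return 0;
}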
While the two columns appear different, one can use "a"2 + "b"2 + "c"2 = 1 to show that they are multiples (possibly zero) of the same spinor. General remarks. In atomic physics and quantum mechanics, the property of "spin" plays a major role. In addition to their other properties, all particles possess a non-classical property, one which has no correspondence at all in conventional physics: the spin, which is a kind of "intrinsic angular momentum". In the position representation, instead of a wavefunction without spin, "ψ" = "ψ"(r), one has with spin: "ψ" = "ψ"(r, "σ"), where "σ" takes the following discrete set of values: formula_35. The "total angular momentum" operator, formula_36, of a particle is the "sum" of the "orbital angular momentum" (for which only integer values are allowed) and the "intrinsic part", the "spin". One distinguishes bosons (S = 0, ±1, ±2, ...) and fermions (S = ±1/2, ±3/2, ±5/2, ...). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\vec{x} \\rightarrow X \\ =\\left(\\begin{matrix}x_3&x_1-ix_2\\\\x_1+ix_2&-x_3\\end{matrix}\\right)." }, { "math_id": 1, "text": " X\\equiv {\\vec \\sigma}\\cdot{\\vec x} " }, { "math_id": 2, "text": " {\\vec \\sigma}\\equiv (\\sigma_1, \\sigma_2, \\sigma_3)" }, { "math_id": 3, "text": "\\det X = -|{\\vec x}|^2 " }, { "math_id": 4, "text": "\\det" }, { "math_id": 5, "text": " X^2 = |{\\vec x}|^2I " }, { "math_id": 6, "text": "\\frac{1}{2}(XY+YX)=({\\vec x}\\cdot{\\vec y})I" }, { "math_id": 7, "text": "\\frac{1}{2}(XY-YX)=iZ" }, { "math_id": 8, "text": "{\\vec z} = {\\vec x} \\times {\\vec y} " }, { "math_id": 9, "text": "{\\vec u}" }, { "math_id": 10, "text": "-UXU" }, { "math_id": 11, "text": "{\\vec x}" }, { "math_id": 12, "text": "{\\vec u}_1" }, { "math_id": 13, "text": "{\\vec u}_2" }, { "math_id": 14, "text": "U_2U_1XU_1U_2" }, { "math_id": 15, "text": "\\xi=\\left[\\begin{matrix}\\xi_1\\\\\\xi_2\\end{matrix}\\right]," }, { "math_id": 16, "text": "X \\mapsto RXR^{-1}" }, { "math_id": 17, "text": "{\\vec x} \\cdot {\\vec \\sigma}." }, { "math_id": 18, "text": "{\\mathbf x}\\cdot{\\mathbf x} = x_1^2+x_2^2+x_3^2=0." }, { "math_id": 19, "text": "X=2\\left[\\begin{matrix}\\xi_1\\\\\\xi_2\\end{matrix}\\right]\\left[\\begin{matrix}-\\xi_2&\\xi_1\\end{matrix}\\right]." }, { "math_id": 20, "text": "C=\\left(\\begin{matrix}0&1\\\\-1&0\\end{matrix}\\right)," }, { "math_id": 21, "text": "x_2" }, { "math_id": 22, "text": "i" }, { "math_id": 23, "text": "(x_1,x_2,x_3)" }, { "math_id": 24, "text": "\\left(\\begin{matrix}x_3&x_1-x_2\\\\x_1+x_2&-x_3\\end{matrix}\\right)" }, { "math_id": 25, "text": "\\langle\\mu|\\xi\\rangle = \\bar{\\mu}_1\\xi_2-\\bar{\\mu}_2\\xi_1" }, { "math_id": 26, "text": "\\langle\\mu|\\xi\\rangle^2 = \\hbox{length}\\left(Q(\\bar{\\mu},\\xi)\\right)^2" }, { "math_id": 27, "text": "\\xi^* = K\\bar{\\xi}." }, { "math_id": 28, "text": "\\bar{X}=KXK\\," }, { "math_id": 29, "text": "\\bar{X} = K X K\\,." }, { "math_id": 30, "text": "\\langle\\mu|\\xi\\rangle = i\\,^t\\mu^* C \\xi" }, { "math_id": 31, "text": "\n S_u = (0.8,-0.6,0.0)\\cdot \\vec{\\sigma}=0.8 \\sigma_{1}-0.6\\sigma_{2}+0.0\\sigma_{3} = \\begin{bmatrix}\n 0.0 & 0.8+0.6i \\\\\n 0.8-0.6i & 0.0\n \\end{bmatrix}\n" }, { "math_id": 32, "text": "\nS_u (1\\pm S_u) = \\pm 1 (1 \\pm S_u)\n" }, { "math_id": 33, "text": "\n\\begin{bmatrix}\n1.0+ (0.0)\\\\\n0.0 +(0.8-0.6i)\n\\end{bmatrix},\n\\begin{bmatrix}\n1.0- (0.0)\\\\\n0.0-(0.8-0.6i)\n\\end{bmatrix}\n" }, { "math_id": 34, "text": "\\frac{1}{2}\\begin{bmatrix}1+c&a-ib\\\\a+ib&1-c\\end{bmatrix}" }, { "math_id": 35, "text": "\\sigma =-S\\cdot\\hbar , -(S-1)\\cdot\\hbar , ... ,+(S-1)\\cdot\\hbar ,+S\\cdot\\hbar " }, { "math_id": 36, "text": "\\vec{\\mathbb J}" } ]
https://en.wikipedia.org/wiki?curid=10159868
10160091
Spin representation
Particular projective representations of the orthogonal or special orthogonal groups In mathematics, the spin representations are particular projective representations of the orthogonal or special orthogonal groups in arbitrary dimension and signature (i.e., including indefinite orthogonal groups). More precisely, they are two equivalent representations of the spin groups, which are double covers of the special orthogonal groups. They are usually studied over the real or complex numbers, but they can be defined over other fields. Elements of a spin representation are called spinors. They play an important role in the physical description of fermions such as the electron. The spin representations may be constructed in several ways, but typically the construction involves (perhaps only implicitly) the choice of a maximal isotropic subspace in the vector representation of the group. Over the real numbers, this usually requires using a complexification of the vector representation. For this reason, it is convenient to define the spin representations over the complex numbers first, and derive real representations by introducing real structures. The properties of the spin representations depend, in a subtle way, on the dimension and signature of the orthogonal group. In particular, spin representations often admit invariant bilinear forms, which can be used to embed the spin groups into classical Lie groups. In low dimensions, these embeddings are surjective and determine special isomorphisms between the spin groups and more familiar Lie groups; this elucidates the properties of spinors in these dimensions. Set-up. Let "V" be a finite-dimensional real or complex vector space with a nondegenerate quadratic form "Q". The (real or complex) linear maps preserving "Q" form the orthogonal group O("V", "Q"). The identity component of the group is called the special orthogonal group SO("V", "Q"). (For "V" real with an indefinite quadratic form, this terminology is not standard: the special orthogonal group is usually defined to be a subgroup with two components in this case.) Up to group isomorphism, SO("V", "Q") has a unique connected double cover, the spin group Spin("V", "Q"). There is thus a group homomorphism "h": Spin("V", "Q") → SO("V", "Q") whose kernel has two elements denoted {1, −1}, where 1 is the identity element. Thus, the group elements "g" and "−g" of Spin("V", "Q") are equivalent after the homomorphism to SO("V", "Q"); that is, "h"("g") = "h"("−g") for any "g" in Spin("V", "Q"). The groups O("V", "Q"), SO("V", "Q") and Spin("V", "Q") are all Lie groups, and for fixed ("V", "Q") they have the same Lie algebra, so("V", "Q"). If "V" is real, then "V" is a real vector subspace of its complexification "V"C "V" ⊗R C, and the quadratic form "Q" extends naturally to a quadratic form "Q"C on "V"C. This embeds SO("V", "Q") as a subgroup of SO("V"C, "Q"C), and hence we may realise Spin("V", "Q") as a subgroup of Spin("V"C, "Q"C). Furthermore, so("V"C, "Q"C) is the complexification of so("V", "Q"). In the complex case, quadratic forms are determined uniquely up to isomorphism by the dimension "n" of "V". Concretely, we may assume "V" C"n" and formula_0 The corresponding Lie groups are denoted O("n", C), SO("n", C), Spin("n", C) and their Lie algebra as so("n", C). In the real case, quadratic forms are determined up to isomorphism by a pair of nonnegative integers ("p", "q") where "n" "p" + "q" is the dimension of "V", and "p" − "q" is the signature. 
Concretely, we may assume "V" = R"n" and formula_1 The corresponding Lie groups and Lie algebra are denoted O("p", "q"), SO("p", "q"), Spin("p", "q") and so("p", "q"). We write R"p","q" in place of R"n" to make the signature explicit. The spin representations are, in a sense, the simplest representations of Spin("n", C) and Spin("p", "q") that do not come from representations of SO("n", C) and SO("p", "q"). A spin representation is, therefore, a real or complex vector space "S" together with a group homomorphism "ρ" from Spin("n", C) or Spin("p", "q") to the general linear group GL("S") such that the element −1 is "not" in the kernel of "ρ". If "S" is such a representation, then according to the relation between Lie groups and Lie algebras, it induces a Lie algebra representation, i.e., a Lie algebra homomorphism from so("n", "C") or so("p", "q") to the Lie algebra gl("S") of endomorphisms of "S" with the commutator bracket. Spin representations can be analysed according to the following strategy: if "S" is a real spin representation of Spin("p", "q"), then its complexification is a complex spin representation of Spin("p", "q"); as a representation of so("p", "q"), it therefore extends to a complex representation of so("n", C). Proceeding in reverse, we therefore "first" construct complex spin representations of Spin("n", C) and so("n", C), then restrict them to complex spin representations of so("p", "q") and Spin("p", "q"), then finally analyse possible reductions to real spin representations. Complex spin representations. Let "V" = C"n" with the standard quadratic form "Q" so that formula_2 The symmetric bilinear form on "V" associated to "Q" by polarization is denoted ⟨..⟩. Isotropic subspaces and root systems. A standard construction of the spin representations of so("n", C) begins with a choice of a pair ("W", "W"∗) of maximal totally isotropic subspaces (with respect to "Q") of "V" with "W" ∩ "W"∗ = 0. Let us make such a choice. If "n" = 2"m" or "n" = 2"m" + 1, then "W" and "W"∗ both have dimension "m". If "n" = 2"m", then "V" = "W" ⊕ "W"∗, whereas if "n" = 2"m" + 1, then "V" = "W" ⊕ "U" ⊕ "W"∗, where "U" is the 1-dimensional orthogonal complement to "W" ⊕ "W"∗. The bilinear form ⟨..⟩ associated to "Q" induces a pairing between "W" and "W"∗, which must be nondegenerate, because "W" and "W"∗ are totally isotropic subspaces and "Q" is nondegenerate. Hence "W" and "W"∗ are dual vector spaces. More concretely, let "a"1, ... "a""m" be a basis for "W". Then there is a unique basis "α"1, ... "α""m" of "W"∗ such that formula_3 If "A" is an "m" × "m" matrix, then "A" induces an endomorphism of "W" with respect to this basis and the transpose "A"T induces a transformation of "W"∗ with formula_4 for all "w" in "W" and "w"∗ in "W"∗. It follows that the endomorphism "ρ""A" of "V", equal to "A" on "W", −"A"T on "W"∗ and zero on "U" (if "n" is odd), is skew, formula_5 for all "u", "v" in "V", and hence (see classical group) an element of so("n", C) ⊂ End("V"). Using the diagonal matrices in this construction defines a Cartan subalgebra h of so("n", C): the rank of so("n", C) is "m", and the diagonal "n" × "n" matrices determine an "m"-dimensional abelian subalgebra. Let "ε"1, ... "ε""m" be the basis of h∗ such that, for a diagonal matrix "A", "ε""k"("ρ""A") is the "k"th diagonal entry of "A". Clearly this is a basis for h∗. Since the bilinear form identifies so("n", C) with formula_6, explicitly, formula_7 it is now easy to construct the root system associated to h. 
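The skewness property formula_5 of the endomorphism "ρ""A" can be checked numerically. The following minimal sketch (not part of the article) uses NumPy; the even case "n" = 2"m" and the basis ordering "a"1, ..., "a""m", "α"1, ..., "α""m" are assumptions made only for the example.

```python
# Illustrative sketch: check that rho_A (A on W, -A^T on W*) is skew with respect to
# the bilinear form pairing W and W*, and hence lies in so(n, C).  n = 2m is assumed.
import numpy as np

m = 3                                   # dim W = dim W* = m, so n = 2m
rng = np.random.default_rng(0)
A = rng.standard_normal((m, m))         # an arbitrary m x m matrix

# Matrix of <.,.> in the basis (a_1..a_m, alpha_1..alpha_m): W and W* are totally
# isotropic and <a_i, alpha_j> = delta_ij, with the form symmetric.
B = np.block([[np.zeros((m, m)), np.eye(m)],
              [np.eye(m),        np.zeros((m, m))]])

# rho_A acts as A on W and as -A^T on W*.
rho_A = np.block([[A,                np.zeros((m, m))],
                  [np.zeros((m, m)), -A.T]])

# Skewness <rho_A u, v> = -<u, rho_A v> is equivalent to rho_A^T B + B rho_A = 0.
print(np.allclose(rho_A.T @ B + B @ rho_A, 0))   # True
```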
The root spaces (simultaneous eigenspaces for the action of h) are spanned by the following elements: formula_8 with root (simultaneous eigenvalue) formula_9 formula_10 (which is in h if "i" = "j") with root formula_11 formula_12 with root formula_13 and, if "n" is odd, and "u" is a nonzero element of "U", formula_14 with root formula_15 formula_16 with root formula_17 Thus, with respect to the basis "ε"1, ... "ε""m", the roots are the vectors in h∗ that are permutations of formula_18 together with the permutations of formula_19 if "n" = 2"m" + 1 is odd. A system of positive roots is given by "ε""i" + "ε""j" ("i" ≠ "j"), "ε""i" − "ε""j" ("i" &lt; "j") and (for "n" odd) "ε""i". The corresponding simple roots are formula_20 The positive roots are nonnegative integer linear combinations of the simple roots. Spin representations and their weights. One construction of the spin representations of so("n", C) uses the exterior algebra(s) formula_21 and/or formula_22 There is an action of "V" on "S" such that for any element "v" = "w" + "w"∗ in "W" ⊕ "W"∗ and any "ψ" in "S" the action is given by: formula_23 where the second term is a contraction (interior multiplication) defined using the bilinear form, which pairs "W" and "W"∗. This action respects the Clifford relations "v"2 = "Q"("v")1, and so induces a homomorphism from the Clifford algebra Cl"n"C of "V" to End("S"). A similar action can be defined on "S"′, so that both "S" and "S"′ are Clifford modules. The Lie algebra so("n", C) is isomorphic to the complexified Lie algebra spin"n"C in Cl"n"C via the mapping induced by the covering Spin("n") → SO("n") formula_24 It follows that both "S" and "S"′ are representations of so("n", C). They are actually equivalent representations, so we focus on "S". The explicit description shows that the elements "α""i" ∧ "a""i" of the Cartan subalgebra h act on "S" by formula_25 A basis for "S" is given by elements of the form formula_26 for 0 ≤ "k" ≤ "m" and "i"1 &lt; ... &lt; "i""k". These clearly span weight spaces for the action of h: "α""i" ∧ "a""i" has eigenvalue −1/2 on the given basis vector if "i" = "i""j" for some "j", and has eigenvalue 1/2 otherwise. It follows that the weights of "S" are all possible combinations of formula_27 and each weight space is one-dimensional. Elements of "S" are called Dirac spinors. When "n" is even, "S" is not an irreducible representation: formula_28 and formula_29 are invariant subspaces. The weights divide into those with an even number of minus signs, and those with an odd number of minus signs. Both "S"+ and "S"− are irreducible representations of dimension 2"m"−1 whose elements are called Weyl spinors. They are also known as chiral spin representations or half-spin representations. With respect to the positive root system above, the highest weights of "S"+ and "S"− are formula_30 and formula_31 respectively. The Clifford action identifies Cl"n"C with End("S") and the even subalgebra is identified with the endomorphisms preserving "S"+ and "S"−. The other Clifford module "S"′ is isomorphic to "S" in this case. When "n" is odd, "S" is an irreducible representation of so("n",C) of dimension 2"m": the Clifford action of a unit vector "u" ∈ "U" is given by formula_32 and so elements of so("n",C) of the form "u"∧"w" or "u"∧"w"∗ do not preserve the even and odd parts of the exterior algebra of "W". 
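The weight combinatorics just described can be reproduced directly. The following minimal sketch (not part of the article) enumerates, for a small "m", the weights of "S" indexed by subsets of {1, ..., "m"}, checks that each weight space is one-dimensional, and, in the even case "n" = 2"m", splits them into "S"+ and "S"− of dimension 2"m"−1 each; the subset encoding is an implementation choice.

```python
# Illustrative sketch: weights of the Dirac spinor space S = Lambda(W) for small m.
# The basis element indexed by a subset I of {1..m} has weight entry -1/2 at positions
# in I and +1/2 elsewhere, as stated in the text.
from itertools import combinations
from fractions import Fraction

m = 4
half = Fraction(1, 2)
weights = []
for k in range(m + 1):
    for I in combinations(range(m), k):
        weights.append(tuple(-half if i in I else half for i in range(m)))

# Each of the 2^m weights occurs exactly once (one-dimensional weight spaces).
assert len(weights) == 2 ** m and len(set(weights)) == 2 ** m

# Even case n = 2m: S+ has an even number of minus signs, S- an odd number.
S_plus = [w for w in weights if sum(x < 0 for x in w) % 2 == 0]
S_minus = [w for w in weights if sum(x < 0 for x in w) % 2 == 1]
assert len(S_plus) == len(S_minus) == 2 ** (m - 1)

print(max(S_plus))    # (1/2, 1/2, ..., 1/2)
print(max(S_minus))   # (1/2, ..., 1/2, -1/2): lexicographic max, matching the stated highest weight
```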
The highest weight of "S" is formula_33 The Clifford action is not faithful on "S": Cl"n"C can be identified with End("S") ⊕ End("S"′), where "u" acts with the opposite sign on "S"′. More precisely, the two representations are related by the parity involution "α" of Cl"n"C (also known as the principal automorphism), which is the identity on the even subalgebra, and minus the identity on the odd part of Cl"n"C. In other words, there is a linear isomorphism from "S" to "S"′, which identifies the action of "A" in Cl"n"C on "S" with the action of "α"("A") on "S"′. Bilinear forms. if "λ" is a weight of "S", so is −"λ". It follows that "S" is isomorphic to the dual representation "S"∗. When "n" = 2"m" + 1 is odd, the isomorphism "B": "S" → "S"∗ is unique up to scale by Schur's lemma, since "S" is irreducible, and it defines a nondegenerate invariant bilinear form "β" on "S" via formula_34 Here invariance means that formula_35 for all "ξ" in so("n",C) and "φ", "ψ" in "S" — in other words the action of "ξ" is skew with respect to "β". In fact, more is true: "S"∗ is a representation of the opposite Clifford algebra, and therefore, since Cl"n"C only has two nontrivial simple modules "S" and "S"′, related by the parity involution "α", there is an antiautomorphism "τ" of Cl"n"C such that formula_36 for any "A" in Cl"n"C. In fact "τ" is reversion (the antiautomorphism induced by the identity on "V") for "m" even, and conjugation (the antiautomorphism induced by minus the identity on "V") for "m" odd. These two antiautomorphisms are related by parity involution "α", which is the automorphism induced by minus the identity on "V". Both satisfy "τ"("ξ") = −"ξ" for "ξ" in so("n",C). When "n" = 2"m", the situation depends more sensitively upon the parity of "m". For "m" even, a weight "λ" has an even number of minus signs if and only if −"λ" does; it follows that there are separate isomorphisms "B"±: "S"± → "S"±∗ of each half-spin representation with its dual, each determined uniquely up to scale. These may be combined into an isomorphism "B": "S" → "S"∗. For "m" odd, "λ" is a weight of "S"+ if and only if −"λ" is a weight of "S"−; thus there is an isomorphism from "S"+ to "S"−∗, again unique up to scale, and its transpose provides an isomorphism from "S"− to "S"+∗. These may again be combined into an isomorphism "B": "S" → "S"∗. For both "m" even and "m" odd, the freedom in the choice of "B" may be restricted to an overall scale by insisting that the bilinear form "β" corresponding to "B" satisfies (1), where "τ" is a fixed antiautomorphism (either reversion or conjugation). Symmetry and the tensor square. The symmetry properties of "β": "S" ⊗ "S" → C can be determined using Clifford algebras or representation theory. In fact much more can be said: the tensor square "S" ⊗ "S" must decompose into a direct sum of "k"-forms on "V" for various "k", because its weights are all elements in h∗ whose components belong to {−1,0,1}. Now equivariant linear maps "S" ⊗ "S" → ∧"k""V"∗ correspond bijectively to invariant maps ∧"k""V" ⊗ "S" ⊗ "S" → C and nonzero such maps can be constructed via the inclusion of ∧"k""V" into the Clifford algebra. Furthermore, if "β"("φ","ψ") = "ε" "β"("ψ","φ") and "τ" has sign "ε""k" on ∧"k""V" then formula_37 for "A" in ∧"k""V". If "n" = 2"m"+1 is odd then it follows from Schur's Lemma that formula_38 (both sides have dimension 22"m" and the representations on the right are inequivalent). 
Because the symmetries are governed by an involution "τ" that is either conjugation or reversion, the symmetry of the ∧"2j""V"∗ component alternates with "j". Elementary combinatorics gives formula_39 and the sign determines which representations occur in S2"S" and which occur in ∧2"S". In particular formula_40 and formula_41 for "v" ∈ "V" (which is isomorphic to ∧2"m""V"), confirming that "τ" is reversion for "m" even, and conjugation for "m" odd. If "n" = 2"m" is even, then the analysis is more involved, but the result is a more refined decomposition: S2"S"±, ∧2"S"± and "S"+ ⊗ "S"− can each be decomposed as a direct sum of "k"-forms (where for "k" = "m" there is a further decomposition into selfdual and antiselfdual "m"-forms). The main outcome is a realisation of so("n",C) as a subalgebra of a classical Lie algebra on "S", depending upon "n" modulo 8, according to the following table: For "n" ≤ 6, these embeddings are isomorphisms (onto sl rather than gl for "n" = 6): formula_42 formula_43 formula_44 formula_45 formula_46 Real representations. The complex spin representations of so("n",C) yield real representations "S" of so("p","q") by restricting the action to the real subalgebras. However, there are additional "reality" structures that are invariant under the action of the real Lie algebras. These come in three types. The type of structure invariant under so("p","q") depends only on the signature "p" − "q" modulo 8, and is given by the following table. Here R, C and H denote real, hermitian and quaternionic structures respectively, and R + R and H + H indicate that the half-spin representations both admit real or quaternionic structures respectively. Description and tables. To complete the description of real representation, we must describe how these structures interact with the invariant bilinear forms. Since "n" = "p" + "q" ≅ "p" − "q" mod 2, there are two cases: the dimension and signature are both even, and the dimension and signature are both odd. The odd case is simpler, there is only one complex spin representation "S", and hermitian structures do not occur. Apart from the trivial case "n" = 1, "S" is always even-dimensional, say dim "S" = 2"N". The real forms of so(2"N",C) are so("K","L") with "K" + "L" = 2"N" and so∗("N",H), while the real forms of sp(2"N",C) are sp(2"N",R) and sp("K","L") with "K" + "L" = "N". The presence of a Clifford action of "V" on "S" forces "K" = "L" in both cases unless "pq" = 0, in which case "KL"=0, which is denoted simply so(2"N") or sp("N"). Hence the odd spin representations may be summarized in the following table. (†) "N" is even for "n" &gt; 3 and for "n" = 3, this is sp(1). The even-dimensional case is similar. For "n" &gt; 2, the complex half-spin representations are even-dimensional. We have additionally to deal with hermitian structures and the real forms of sl(2"N", C), which are sl(2"N", R), su("K", "L") with "K" + "L" = 2"N", and sl("N", H). The resulting even spin representations are summarized as follows. The low-dimensional isomorphisms in the complex case have the following real forms. The only special isomorphisms of real Lie algebras missing from this table are formula_47 and formula_48 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
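As a quick numerical check of the combinatorial identity formula_39 quoted in the discussion of symmetry above, the following short sketch (not part of the article) verifies it for small values of "m".

```python
# Illustrative check: sum_{j=0}^{m} (-1)^j * C(2m+1, 2j) = (-1)^{m(m+1)/2} * 2^m.
from math import comb

for m in range(1, 10):
    lhs = sum((-1) ** j * comb(2 * m + 1, 2 * j) for j in range(m + 1))
    rhs = (-1) ** (m * (m + 1) // 2) * 2 ** m
    assert lhs == rhs
print("identity verified for m = 1..9")
```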
[ { "math_id": 0, "text": "Q(z_1,\\ldots, z_n) = z_1^2+ z_2^2+\\cdots+z_n^2." }, { "math_id": 1, "text": "Q(x_1,\\ldots, x_n) = x_1^2+ x_2^2+\\cdots+x_p^2-(x_{p+1}^2+\\cdots +x_{p+q}^2)." }, { "math_id": 2, "text": "\\mathfrak{so}(V,Q) = \\mathfrak{so}(n,\\mathbb C)." }, { "math_id": 3, "text": " \\langle \\alpha_i,a_j\\rangle = \\delta_{ij}." }, { "math_id": 4, "text": " \\langle Aw, w^* \\rangle = \\langle w,A^\\mathrm{T} w^*\\rangle" }, { "math_id": 5, "text": " \\langle \\rho_A u, v \\rangle = -\\langle u,\\rho_A v\\rangle" }, { "math_id": 6, "text": "\\wedge^2 V" }, { "math_id": 7, "text": "x \\wedge y \\mapsto \\varphi_{x \\wedge y}, \\quad \\varphi_{x \\wedge y}(v) = \\langle y, v\\rangle x - \\langle x, v\\rangle y,\\quad x \\wedge y \\in \\wedge^2V,\\quad x,y,v \\in V, \\quad \\varphi_{x \\wedge y} \\in \\mathfrak{so}(n, \\mathbb{C})," }, { "math_id": 8, "text": " a_i\\wedge a_j,\\; i\\neq j," }, { "math_id": 9, "text": "\\varepsilon_i + \\varepsilon_j" }, { "math_id": 10, "text": " a_i\\wedge \\alpha_j" }, { "math_id": 11, "text": " \\varepsilon_i - \\varepsilon_j" }, { "math_id": 12, "text": " \\alpha_i\\wedge \\alpha_j,\\; i\\neq j," }, { "math_id": 13, "text": " -\\varepsilon_i - \\varepsilon_j," }, { "math_id": 14, "text": " a_i\\wedge u," }, { "math_id": 15, "text": " \\varepsilon_i " }, { "math_id": 16, "text": " \\alpha_i\\wedge u," }, { "math_id": 17, "text": " -\\varepsilon_i." }, { "math_id": 18, "text": "(\\pm 1,\\pm 1, 0, 0, \\dots, 0)" }, { "math_id": 19, "text": "(\\pm 1, 0, 0, \\dots, 0)" }, { "math_id": 20, "text": "\\varepsilon_1-\\varepsilon_2, \\varepsilon_2-\\varepsilon_3, \\ldots, \\varepsilon_{m-1}-\\varepsilon_m, \\left\\{\\begin{matrix}\n\\varepsilon_{m-1}+\\varepsilon_m& n=2m\\\\\n\\varepsilon_m & n=2m+1.\n\\end{matrix}\\right." }, { "math_id": 21, "text": "S=\\wedge^\\bullet W" }, { "math_id": 22, "text": "S'=\\wedge^\\bullet W^*." }, { "math_id": 23, "text": " v\\cdot \\psi = 2^{\\frac{1}{2}}(w\\wedge\\psi+\\iota(w^*)\\psi), " }, { "math_id": 24, "text": " v \\wedge w \\mapsto \\tfrac14[v,w]." }, { "math_id": 25, "text": " (\\alpha_i\\wedge a_i) \\cdot \\psi = \\tfrac14 (2^{\\tfrac12})^{2} ( \\iota(\\alpha_i)(a_i\\wedge\\psi)-a_i\\wedge(\\iota(\\alpha_i)\\psi))\n= \\tfrac12 \\psi - a_i\\wedge(\\iota(\\alpha_i)\\psi)." }, { "math_id": 26, "text": " a_{i_1}\\wedge a_{i_2}\\wedge\\cdots\\wedge a_{i_k}" }, { "math_id": 27, "text": "\\bigl(\\pm \\tfrac12,\\pm \\tfrac12, \\ldots \\pm\\tfrac12\\bigr)" }, { "math_id": 28, "text": "S_+=\\wedge^{\\mathrm{even}} W" }, { "math_id": 29, "text": "S_-=\\wedge^{\\mathrm{odd}} W" }, { "math_id": 30, "text": "\\bigl(\\tfrac12,\\tfrac12, \\ldots\\tfrac12, \\tfrac12\\bigr)" }, { "math_id": 31, "text": "\\bigl(\\tfrac12,\\tfrac12, \\ldots\\tfrac12, -\\tfrac12\\bigr)" }, { "math_id": 32, "text": " u\\cdot \\psi = \\left\\{\\begin{matrix}\n\\psi&\\hbox{if } \\psi\\in \\wedge^{\\mathrm{even}} W\\\\\n-\\psi&\\hbox{if } \\psi\\in \\wedge^{\\mathrm{odd}} W\n\\end{matrix}\\right." }, { "math_id": 33, "text": "\\bigl(\\tfrac12,\\tfrac12, \\ldots \\tfrac12\\bigr)." }, { "math_id": 34, "text": "\\beta(\\varphi,\\psi) = B(\\varphi)(\\psi)." 
}, { "math_id": 35, "text": "\\beta(\\xi\\cdot\\varphi,\\psi) + \\beta(\\varphi,\\xi\\cdot\\psi) = 0" }, { "math_id": 36, "text": "\\quad\\beta(A\\cdot\\varphi,\\psi) = \\beta(\\varphi,\\tau(A)\\cdot\\psi)\\qquad (1)" }, { "math_id": 37, "text": "\\beta(A\\cdot\\varphi,\\psi) = \\varepsilon\\varepsilon_k \\beta(A\\cdot\\psi,\\varphi)" }, { "math_id": 38, "text": " S\\otimes S \\cong \\bigoplus_{j=0}^{m} \\wedge^{2j} V^*" }, { "math_id": 39, "text": " \\sum_{j=0}^m (-1)^j \\dim \\wedge^{2j} \\Complex^{2m+1} = (-1)^{\\frac12 m(m+1)} 2^m = (-1)^{\\frac12 m(m+1)}(\\dim \\mathrm S^2S-\\dim \\wedge^2 S)" }, { "math_id": 40, "text": " \\beta(\\phi,\\psi)=(-1)^{\\frac12 m(m+1)}\\beta(\\psi,\\phi)," }, { "math_id": 41, "text": " \\beta(v\\cdot\\phi,\\psi) = (-1)^m(-1)^{\\frac12 m(m+1)}\\beta(v\\cdot\\psi,\\phi) = (-1)^m \\beta(\\phi,v\\cdot\\psi)" }, { "math_id": 42, "text": " \\mathfrak{so}(2,\\mathbb C) \\cong \\mathfrak{gl}(1,\\mathbb C)\\qquad(=\\mathbb C)" }, { "math_id": 43, "text": " \\mathfrak{so}(3,\\mathbb C) \\cong \\mathfrak{sp}(2,\\mathbb C)\\qquad(=\\mathfrak{sl}(2,\\mathbb C))" }, { "math_id": 44, "text": " \\mathfrak{so}(4,\\mathbb C) \\cong \\mathfrak{sp}(2,\\mathbb C)\\oplus\\mathfrak{sp}(2,\\mathbb C)" }, { "math_id": 45, "text": " \\mathfrak{so}(5,\\mathbb C) \\cong \\mathfrak{sp}(4,\\mathbb C)" }, { "math_id": 46, "text": " \\mathfrak{so}(6,\\mathbb C) \\cong \\mathfrak{sl}(4,\\mathbb C)." }, { "math_id": 47, "text": "\\mathfrak{so}^*(3,\\mathbb H) \\cong \\mathfrak{su}(3,1)" }, { "math_id": 48, "text": "\\mathfrak{so}^*(4,\\mathbb H)\\cong\\mathfrak{so}(6,2)." } ]
https://en.wikipedia.org/wiki?curid=10160091
1016017
Thiocyanate
Ion (S=C=N, charge –1) &lt;templatestyles src="Chembox/styles.css"/&gt; Chemical compound Thiocyanates are salts containing the thiocyanate anion, [SCN]− (also known as rhodanide or rhodanate). [SCN]− is the conjugate base of thiocyanic acid. Common salts include the colourless potassium thiocyanate and sodium thiocyanate. Mercury(II) thiocyanate was formerly used in pyrotechnics. Thiocyanate is analogous to the cyanate ion, [OCN]−, wherein oxygen is replaced by sulfur. [SCN]− is one of the pseudohalides, due to the similarity of its reactions to those of halide ions. Thiocyanate used to be known as rhodanide (from a Greek word for rose) because of the red colour of its complexes with iron. Thiocyanate is produced by the reaction of elemental sulfur or thiosulfate with cyanide: &lt;chem display=block&gt;8 CN- + S8 -&gt; 8 SCN-&lt;/chem&gt;&lt;chem display=block&gt;CN- + S2O3^2- -&gt; SCN- + SO3^2-&lt;/chem&gt; The second reaction is catalyzed by thiosulfate sulfurtransferase, a hepatic mitochondrial enzyme, and by other sulfur transferases, which together are responsible for around 80% of cyanide metabolism in the body. Oxidation of thiocyanate inevitably produces hydrogen sulfate. The other product depends on pH: in acid, it is hydrogen cyanide, presumably via HOSCN and with a sulfur dicyanide side-product; but in base and neutral solutions, it is cyanate. Biology. Occurrences. Thiocyanate occurs widely in nature, albeit often in low concentrations. It is a component of some sulfur cycles. Biochemistry. Thiocyanate hydrolases catalyze the conversion of thiocyanate to carbonyl sulfide and to cyanate. Medicine. Thiocyanate is known to play an important part in the biosynthesis of hypothiocyanite by lactoperoxidase. Thus the complete absence or reduction of thiocyanate in the human body (e.g., in cystic fibrosis) is damaging to the human host defense system. Thiocyanate is a potent competitive inhibitor of the thyroid sodium-iodide symporter. Iodine is an essential component of thyroxine. Since thiocyanates will decrease iodide transport into the thyroid follicular cell, they will decrease the amount of thyroxine produced by the thyroid gland. As such, foodstuffs containing thiocyanate are best avoided by iodide-deficient hypothyroid patients. In the early 20th century, thiocyanate was used in the treatment of hypertension, but it is no longer used because of associated toxicity. Sodium nitroprusside, a metabolite of which is thiocyanate, is however still used for the treatment of a hypertensive emergency. Rhodanese catalyzes the reaction of sodium nitroprusside (like other cyanides) with thiosulfate to form the metabolite thiocyanate. Coordination chemistry. formula_0 Resonance structures of the thiocyanate ion. Thiocyanate shares its negative charge approximately equally between sulfur and nitrogen. As a consequence, thiocyanate can act as a nucleophile at either sulfur or nitrogen—it is an ambidentate ligand. [SCN]− can also bridge two (M−SCN−M) or even three metals (&gt;SCN− or −SCN&lt;). Experimental evidence leads to the general conclusion that class A metals (hard acids) tend to form "N"-bonded thiocyanate complexes, whereas class B metals (soft acids) tend to form "S"-bonded thiocyanate complexes. Other factors, e.g. kinetics and solubility, are sometimes involved, and linkage isomerism can occur, for example [Co(NH3)5(NCS)]Cl2 and [Co(NH3)5(SCN)]Cl2. [SCN] is considered a weak ligand ([NCS] is a strong ligand). Test for iron(III) and cobalt(II). 
If [SCN]− is added to a solution with iron(III) ions, a blood-red solution forms mainly due to the formation of [Fe(SCN)(H2O)5]2+, i.e. pentaaqua(thiocyanato-"N")iron(III). Lesser amounts of other hydrated compounds also form: e.g. Fe(SCN)3 and [Fe(SCN)4]−. Similarly, Co2+ gives a blue complex with thiocyanate. Both the iron and cobalt complexes can be extracted into organic solvents like diethyl ether or amyl alcohol. This allows the determination of these ions even in strongly coloured solutions. The determination of Co(II) in the presence of Fe(III) is possible by adding KF to the solution, which forms uncoloured, very stable complexes with Fe(III), which no longer react with SCN−. Phospholipids or some detergents aid the transfer of thiocyanatoiron into chlorinated solvents like chloroform and can be determined in this fashion. References. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\ce{S=C=N^\\ominus <-> {^{\\ominus}S}-C}\\ce{#N}" } ]
https://en.wikipedia.org/wiki?curid=1016017
10160606
Overdetermined system
More equations than unknowns (mathematics) In mathematics, a system of equations is considered overdetermined if there are more equations than unknowns. An overdetermined system is almost always inconsistent (it has no solution) when constructed with random coefficients. However, an overdetermined system will have solutions in some cases, for example if some equation occurs several times in the system, or if some equations are linear combinations of the others. The terminology can be described in terms of the concept of constraint counting. Each unknown can be seen as an available degree of freedom. Each equation introduced into the system can be viewed as a constraint that restricts one degree of freedom. Therefore, the critical case occurs when the number of equations and the number of free variables are equal. For every variable giving a degree of freedom, there exists a corresponding constraint. The "overdetermined" case occurs when the system has been overconstrained — that is, when the equations outnumber the unknowns. In contrast, the "underdetermined" case occurs when the system has been underconstrained — that is, when the number of equations is fewer than the number of unknowns. Such systems usually have an infinite number of solutions. Overdetermined linear systems of equations. An example in two dimensions. Consider the system of 3 equations and 2 unknowns ("X" and "Y"), which is overdetermined because 3 &gt; 2, and which corresponds to Diagram #1: formula_0 There is one solution for each pair of linear equations: for the first and second equations (0.2, −1.4), for the first and third (−2/3, 1/3), and for the second and third (1.5, 2.5). However, there is no solution that satisfies all three simultaneously. Diagrams #2 and 3 show other configurations that are inconsistent because no point is on all of the lines. Systems of this variety are deemed inconsistent. The only cases where the overdetermined system does in fact have a solution are demonstrated in Diagrams #4, 5, and 6. These exceptions can occur only when the overdetermined system contains enough linearly dependent equations that the number of independent equations does not exceed the number of unknowns. Linear dependence means that some equations can be obtained from linearly combining other equations. For example, "Y" = "X" + 1 and 2"Y" = 2"X" + 2 are linearly dependent equations because the second one can be obtained by taking twice the first one. Matrix form. Any system of linear equations can be written as a matrix equation. The previous system of equations (in Diagram #1) can be written as follows: formula_1 Notice that the rows of the coefficient matrix (corresponding to equations) outnumber the columns (corresponding to unknowns), meaning that the system is overdetermined. The rank of this matrix is 2, which corresponds to the number of dependent variables in the system. A linear system is consistent if and only if the coefficient matrix has the same rank as its augmented matrix (the coefficient matrix with an extra column added, that column being the column vector of constants). The augmented matrix has rank 3, so the system is inconsistent. The nullity is 0, which means that the null space contains only the zero vector and thus has no basis. In linear algebra the concepts of row space, column space and null space are important for determining the properties of matrices. The informal discussion of constraints and degrees of freedom above relates directly to these more formal concepts. Homogeneous case. 
The homogeneous case (in which all constant terms are zero) is always consistent (because there is a trivial, all-zero solution). There are two cases, depending on the number of linearly dependent equations: either there is just the trivial solution, or there is the trivial solution plus an infinite set of other solutions. Consider the system of linear equations: "L""i" = 0 for 1 ≤ "i" ≤ "M", and variables "X"1, "X"2, ..., "X""N", where each "L""i" is a weighted sum of the "X""i"s. Then "X"1 = "X"2 = ⋯ = "X""N" = 0 is always a solution. When "M" &lt; "N" the system is "underdetermined" and there are always an infinitude of further solutions. In fact the dimension of the space of solutions is always at least "N" − "M". For "M" ≥ "N", there may be no solution other than all values being 0. There will be an infinitude of other solutions only when the system of equations has enough dependencies (linearly dependent equations) that the number of independent equations is at most "N" − 1. But with "M" ≥ "N" the number of independent equations could be as high as "N", in which case the trivial solution is the only one. Non-homogeneous case. In systems of linear equations, "L""i"="c""i" for 1 ≤ "i" ≤ "M", in variables "X"1, "X"2, ..., "X""N" the equations are sometimes linearly dependent; in fact the number of linearly independent equations cannot exceed "N"+1. We have the following possible cases for an overdetermined system with "N" unknowns and "M" equations ("M"&gt;"N"). These results may be easier to understand by putting the augmented matrix of the coefficients of the system in row echelon form by using Gaussian elimination. This row echelon form is the augmented matrix of a system of equations that is equivalent to the given system (it has exactly the same solutions). The number of independent equations in the original system is the number of non-zero rows in the echelon form. The system is inconsistent (no solution) if and only if the last non-zero row in echelon form has only one non-zero entry that is in the last column (giving an equation 0 = c where c is a non-zero constant). Otherwise, there is exactly one solution when the number of non-zero rows in echelon form is equal to the number of unknowns, and there are infinitely many solutions when the number of non-zero rows is lower than the number of variables. Putting it another way, according to the Rouché–Capelli theorem, any system of equations (overdetermined or otherwise) is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has "k" free parameters where "k" is the difference between the number of variables and the rank; hence in such a case there are an infinitude of solutions. Exact solutions. All exact solutions can be obtained, or it can be shown that none exist, using matrix algebra. See System of linear equations#Matrix solution. Approximate solutions. The method of ordinary least squares can be used to find an approximate solution to overdetermined systems. For the system formula_2 the least squares formula is obtained from the problem formula_3 the solution of which can be written with the normal equations, formula_4 where formula_5 indicates a matrix transpose, "provided" formula_6 exists (that is, provided "A" has full column rank). 
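The following minimal sketch (not part of the article) illustrates the preceding discussion on the three-equation example from the beginning of the article, using NumPy: the rank comparison implements the Rouché–Capelli criterion, and the last lines evaluate the normal-equations formula for the least-squares solution just given (np.linalg.lstsq is the library routine that solves the same problem).

```python
# Illustrative sketch: consistency check and least-squares solution of the 3x2 example.
import numpy as np

A = np.array([[2.0, 1.0],
              [-3.0, 1.0],
              [-1.0, 1.0]])
b = np.array([-1.0, -2.0, 1.0])

rank_A = np.linalg.matrix_rank(A)                           # 2
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))    # 3, so the system is inconsistent
print(rank_A, rank_Ab, "consistent" if rank_A == rank_Ab else "inconsistent")

# Least-squares solution via the normal equations (A has full column rank here).
x_ls = np.linalg.solve(A.T @ A, A.T @ b)
print(x_ls)

# np.linalg.lstsq gives the same answer and is numerically preferable in general.
print(np.linalg.lstsq(A, b, rcond=None)[0])
```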
With this formula an approximate solution is found when no exact solution exists, and it gives an exact solution when one does exist. However, to achieve good numerical accuracy, using the QR factorization of "A" to solve the least squares problem is preferred. Using QR factorization. The QR decomposition of a (tall) matrix formula_7 is the representation of the matrix in the product form, formula_8 where formula_9 is a (tall) semi-orthonormal matrix that spans the range of the matrix formula_7, and where formula_10 is a (small) square right-triangular matrix. The solution to the problem of minimizing the norm formula_11 is then given as formula_12 where in practice instead of calculating formula_13 one should do a run of backsubstitution on the right-triangular system formula_14 Using Singular Value Decomposition. The Singular Value Decomposition (SVD) of a (tall) matrix formula_7 is the representation of the matrix in the product form, formula_15 where formula_16 is a (tall) semi-orthonormal matrix that spans the range of the matrix formula_7, formula_17 is a (small) square diagonal matrix with non-negative singular values along the diagonal, and where formula_18 is a (small) square orthonormal matrix. The solution to the problem of minimizing the norm formula_11 is then given as formula_19 Overdetermined nonlinear systems of equations. In finite dimensional spaces, a system of equations can be written or represented in the form of formula_20 or in the form of formula_21 with formula_22 where formula_23 is a point in formula_24 or formula_25 and formula_26 are real or complex functions. The system is overdetermined if formula_27. In contrast, the system is an underdetermined system if formula_28. As an effective method for solving overdetermined systems, the Gauss-Newton iteration locally quadratically converges to solutions at which the Jacobian matrices of formula_29 are injective. In general use. The concept can also be applied to more general systems of equations, such as systems of polynomial equations or partial differential equations. In the case of the systems of polynomial equations, it may happen that an overdetermined system has a solution, but that no one equation is a consequence of the others and that, when removing any equation, the new system has more solutions. For example, formula_30 has the single solution formula_31 but each equation by itself has two solutions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
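As an illustration of the QR- and SVD-based formulas above, the following sketch (not part of the article) applies both to the same 3 × 2 example; in practice the triangular system "R""x" = "Q"T"b" would be solved by back-substitution (for example with scipy.linalg.solve_triangular), and np.linalg.solve is used here only to keep the example short.

```python
# Illustrative sketch: least-squares via QR and SVD, following the formulas in the text.
import numpy as np

A = np.array([[2.0, 1.0], [-3.0, 1.0], [-1.0, 1.0]])
b = np.array([-1.0, -2.0, 1.0])

# QR: A = Q R with Q semi-orthonormal (reduced QR) and R square upper triangular.
Q, R = np.linalg.qr(A)                       # 'reduced' mode by default
x_qr = np.linalg.solve(R, Q.T @ b)

# SVD: A = U S V^T with S the diagonal matrix of singular values.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / s)

print(x_qr, x_svd)                           # both equal the least-squares solution
```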
[ { "math_id": 0, "text": "\\begin{align}\nY&=-2X-1\\\\\nY&=3X-2\\\\\nY&=X+1.\n\\end{align}" }, { "math_id": 1, "text": "\n\\begin{bmatrix}\n 2 & 1 \\\\\n-3 & 1 \\\\\n-1 & 1 \\\\\n\\end{bmatrix}\n\\begin{bmatrix} X \\\\ Y \\end{bmatrix}\n = \n\\begin{bmatrix} -1 \\\\ -2 \\\\ 1 \\end{bmatrix}\n" }, { "math_id": 2, "text": "A \\mathbf x = \\mathbf b," }, { "math_id": 3, "text": "\\min_{\\mathbf x}\\lVert A \\mathbf x - \\mathbf b \\rVert," }, { "math_id": 4, "text": "\\mathbf x = \\left(A^{\\mathsf{T}}A\\right)^{-1}A^{\\mathsf{T}}\\mathbf b," }, { "math_id": 5, "text": "\\mathsf{T}" }, { "math_id": 6, "text": "\\left(A^{\\mathsf{T}}A\\right)^{-1}" }, { "math_id": 7, "text": "A" }, { "math_id": 8, "text": "\nA=QR ,\n" }, { "math_id": 9, "text": "Q" }, { "math_id": 10, "text": "R" }, { "math_id": 11, "text": "\\| A x - b \\|^2" }, { "math_id": 12, "text": "\nx = R^{-1}Q^T b ,\n" }, { "math_id": 13, "text": "R^{-1}" }, { "math_id": 14, "text": "\nR x = Q^T b .\n" }, { "math_id": 15, "text": "\nA=USV^T ,\n" }, { "math_id": 16, "text": "U" }, { "math_id": 17, "text": "S" }, { "math_id": 18, "text": "V" }, { "math_id": 19, "text": "\nx = VS^{-1}U^T b .\n" }, { "math_id": 20, "text": " \\left\\{\\begin{array}{ccc} f_1(x_1,\\ldots,x_n) & = & 0 \\\\\n\\vdots & \\vdots & \\vdots \\\\ f_m(x_1,\\ldots,x_n) & = & 0 \\end{array}\\right." }, { "math_id": 21, "text": "\\mathbf{f}(\\mathbf{x}) = \\mathbf{0} " }, { "math_id": 22, "text": "\\mathbf{f}(\\mathbf{x}) = \\left[\\begin{array}{c} f_1(x_1,\\ldots,x_n) \\\\ \\vdots \\\\ f_m(x_1,\\ldots,x_n) \\end{array}\\right] \\;\\;\\; \\mbox{and}\\;\\;\\;\n\\mathbf{0} = \\left[\\begin{array}{c} 0 \\\\ \\vdots \\\\ 0 \\end{array}\\right]\n" }, { "math_id": 23, "text": "\\mathbf{x}=(x_1,\\ldots,x_n)" }, { "math_id": 24, "text": "R^n" }, { "math_id": 25, "text": "C^n" }, { "math_id": 26, "text": "f_1,\\ldots,f_m" }, { "math_id": 27, "text": "m>n" }, { "math_id": 28, "text": "m<n" }, { "math_id": 29, "text": "\\mathbf{f}(\\mathbf{x})" }, { "math_id": 30, "text": "(x-1)(x-2)=0, (x-1)(x-3)=0" }, { "math_id": 31, "text": "x=1," } ]
https://en.wikipedia.org/wiki?curid=10160606
10161645
Kazhdan–Lusztig polynomial
In the mathematical field of representation theory, a Kazhdan–Lusztig polynomial formula_0 is a member of a family of integral polynomials introduced by David Kazhdan and George Lusztig (1979). They are indexed by pairs of elements "y", "w" of a Coxeter group "W", which can in particular be the Weyl group of a Lie group. Motivation and history. In the spring of 1978 Kazhdan and Lusztig were studying Springer representations of the Weyl group of an algebraic group on formula_1-adic cohomology groups related to conjugacy classes which are unipotent. They found a new construction of these representations over the complex numbers . The representation had two natural bases, and the transition matrix between these two bases is essentially given by the Kazhdan–Lusztig polynomials. The actual Kazhdan–Lusztig construction of their polynomials is more elementary. Kazhdan and Lusztig used this to construct a canonical basis in the Hecke algebra of the Coxeter group and its representations. In their first paper Kazhdan and Lusztig mentioned that their polynomials were related to the failure of local Poincaré duality for Schubert varieties. In they reinterpreted this in terms of the intersection cohomology of Mark Goresky and Robert MacPherson, and gave another definition of such a basis in terms of the dimensions of certain intersection cohomology groups. The two bases for the Springer representation reminded Kazhdan and Lusztig of the two bases for the Grothendieck group of certain infinite dimensional representations of semisimple Lie algebras, given by Verma modules and simple modules. This analogy, and the work of Jens Carsten Jantzen and Anthony Joseph relating primitive ideals of enveloping algebras to representations of Weyl groups, led to the Kazhdan–Lusztig conjectures. Definition. Fix a Coxeter group "W" with generating set "S", and write formula_2 for the length of an element "w" (the smallest length of an expression for "w" as a product of elements of "S"). The Hecke algebra of "W" has a basis of elements formula_3 for formula_4 over the ring formula_5, with multiplication defined by formula_6 The quadratic second relation implies that each generator Ts is invertible in the Hecke algebra, with inverse "T""s"−1 "q"−1"T""s" + "q"−1 − 1. These inverses satisfy the relation ("T""s"−1 + 1)("T""s"−1 − "q"−1) 0 (obtained by multiplying the quadratic relation for Ts by −Ts−2"q"−1), and also the braid relations. From this it follows that the Hecke algebra has an automorphism "D" that sends "q"1/2 to "q"−1/2 and each Ts to Ts−1. More generally one has formula_7; also "D" can be seen to be an involution. The Kazhdan–Lusztig polynomials "P""yw"("q") are indexed by a pair of elements "y", "w" of "W", and uniquely determined by the following properties. formula_8 are invariant under the involution "D" of the Hecke algebra. The elements formula_9 form a basis of the Hecke algebra as a formula_5-module, called the Kazhdan–Lusztig basis. To establish existence of the Kazhdan–Lusztig polynomials, Kazhdan and Lusztig gave a simple recursive procedure for computing the polynomials "Pyw"("q") in terms of more elementary polynomials denoted "R""yw"("q"). defined by formula_10 They can be computed using the recursion relations formula_11 The Kazhdan–Lusztig polynomials can then be computed recursively using the relation formula_12 using the fact that the two terms on the left are polynomials in "q"1/2 and "q"−1/2 without constant terms. 
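As an illustration (not part of the article), the recursion for the "R"-polynomials can be run mechanically on a small example. The sketch below computes "R""x","w" for the symmetric group "S"3 using SymPy; the one-line notation for permutations, the use of left descents only, and the helper names are implementation choices.

```python
# Illustrative sketch: R-polynomials of S_3 via the recursion stated above.
from itertools import permutations
import sympy as sp

q = sp.symbols('q')
n = 3
gens = range(n - 1)                     # adjacent transpositions (swap the values k, k+1)

def length(w):                          # Coxeter length = number of inversions
    return sum(w[i] > w[j] for i in range(n) for j in range(i + 1, n))

def left_mult(k, w):                    # s_k * w: swap the values k and k+1 in w
    swap = {k: k + 1, k + 1: k}
    return tuple(swap.get(v, v) for v in w)

def R(x, y):
    if x == y:
        return sp.Integer(1)
    if length(x) >= length(y):          # x <= y in Bruhat order would force l(x) < l(y)
        return sp.Integer(0)
    s = next(k for k in gens if length(left_mult(k, y)) < length(y))   # a left descent of y
    sx, sy = left_mult(s, x), left_mult(s, y)
    if length(sx) < length(x):
        return R(sx, sy)
    return sp.expand((q - 1) * R(sx, y) + q * R(sx, sy))

w0 = tuple(reversed(range(n)))          # longest element of S_3
for x in permutations(range(n)):
    print(x, R(x, w0))                  # degree of R_{x,w0} is l(w0) - l(x); R(1) = 0 for x != w0
```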
These formulas are tiresome to use by hand for rank greater than about 3, but are well adapted for computers, and the only limit on computing Kazhdan–Lusztig polynomials with them is that for large rank the number of such polynomials exceeds the storage capacity of computers. formula_13 Kazhdan–Lusztig conjectures. The Kazhdan–Lusztig polynomials arise as transition coefficients between their canonical basis and the natural basis of the Hecke algebra. The "Inventiones" paper also put forth two equivalent conjectures, known now as Kazhdan–Lusztig conjectures, which related the values of their polynomials at 1 with representations of complex semisimple Lie groups and Lie algebras, addressing a long-standing problem in representation theory. Let "W" be a finite Weyl group. For each w ∈ "W" denote by Mw be the Verma module of highest weight −"w"("ρ") − "ρ" where ρ is the half-sum of positive roots (or Weyl vector), and let Lw be its irreducible quotient, the simple highest weight module of highest weight −"w"("ρ") − "ρ". Both Mw and Lw are locally-finite weight modules over the complex semisimple Lie algebra "g" with the Weyl group "W", and therefore admit an algebraic character. Let us write ch("X") for the character of a "g"-module "X". The Kazhdan–Lusztig conjectures state: formula_14 formula_15 where "w"0 is the element of maximal length of the Weyl group. These conjectures were proved over characteristic 0 algebraically closed fields independently by Alexander Beilinson and Joseph Bernstein (1981) and by Jean-Luc Brylinski and Masaki Kashiwara (1981). The methods introduced in the course of the proof have guided development of representation theory throughout the 1980s and 1990s, under the name "geometric representation theory". Remarks. 1. The two conjectures are known to be equivalent. Moreover, Borho–Jantzen's translation principle implies that "w"("ρ") − "ρ" can be replaced by "w"("λ" + "ρ") − "ρ" for any dominant integral weight λ. Thus, the Kazhdan–Lusztig conjectures describe the Jordan–Hölder multiplicities of Verma modules in any regular integral block of Bernstein–Gelfand–Gelfand category O. 2. A similar interpretation of "all" coefficients of Kazhdan–Lusztig polynomials follows from the "Jantzen conjecture", which roughly says that individual coefficients of Py,w are multiplicities of Ly in certain subquotient of the Verma module determined by a canonical filtration, the Jantzen filtration. The Jantzen conjecture in regular integral case was proved in a later paper of Beilinson and Bernstein (1993). 3. David Vogan showed as a consequence of the conjectures that formula_16 and that Ext"j"("My", "Lw") vanishes if "j" + "ℓ"("w") + "ℓ"("y") is odd, so the dimensions of all such Ext groups in category "O" are determined in terms of coefficients of Kazhdan–Lusztig polynomials. This result demonstrates that all coefficients of the Kazhdan–Lusztig polynomials of a finite Weyl group are non-negative integers. However, positivity for the case of a finite Weyl group "W" was already known from the interpretation of coefficients of the Kazhdan–Lusztig polynomials as the dimensions of intersection cohomology groups, irrespective of the conjectures. Conversely, the relation between Kazhdan–Lusztig polynomials and the Ext groups theoretically can be used to prove the conjectures, although this approach to proving them turned out to be more difficult to carry out. 4. Some special cases of the Kazhdan–Lusztig conjectures are easy to verify. 
For example, "M"1 is the antidominant Verma module, which is known to be simple. This means that "M"1 = "L"1, establishing the second conjecture for "w" = 1, since the sum reduces to a single term. On the other hand, the first conjecture for "w" = "w"0 follows from the Weyl character formula and the formula for the character of a Verma module, together with the fact that all Kazhdan–Lusztig polynomials formula_17 are equal to 1. 5. Kashiwara (1990) proved a generalization of the Kazhdan–Lusztig conjectures to symmetrizable Kac–Moody algebras. Relation to intersection cohomology of Schubert varieties. By the Bruhat decomposition the space "G"/"B" of the algebraic group "G" with Weyl group "W" is a disjoint union of affine spaces "X""w" parameterized by elements "w" of "W". The closures of these spaces Xw are called Schubert varieties, and Kazhdan and Lusztig, following a suggestion of Deligne, showed how to express Kazhdan–Lusztig polynomials in terms of intersection cohomology groups of Schubert varieties. More precisely, the Kazhdan–Lusztig polynomial "P""y","w"("q") is equal to formula_18 where each term on the right means: take the complex IC of sheaves whose hyperhomology is the intersection homology of the Schubert variety of "w" (the closure of the cell Xw), take its cohomology of degree 2"i", and then take the dimension of the stalk of this sheaf at any point of the cell Xy whose closure is the Schubert variety of "y". The odd-dimensional cohomology groups do not appear in the sum because they are all zero. This gave the first proof that all coefficients of Kazhdan–Lusztig polynomials for finite Weyl groups are non-negative integers. Generalization to real groups. Lusztig–Vogan polynomials (also called Kazhdan–Lusztig polynomials or Kazhdan–Lusztig–Vogan polynomials) were introduced in . They are analogous to Kazhdan–Lusztig polynomials, but are tailored to representations of "real" semisimple Lie groups, and play major role in the conjectural description of their unitary duals. Their definition is more complicated, reflecting relative complexity of representations of real groups compared to complex groups. The distinction, in the cases directly connection to representation theory, is explained on the level of double cosets; or in other terms of actions on analogues of complex flag manifolds "G"/"B" where "G" is a complex Lie group and "B" a Borel subgroup. The original (K-L) case is then about the details of decomposing formula_19, a classical theme of the Bruhat decomposition, and before that of Schubert cells in a Grassmannian. The L-V case takes a real form GR of "G", a maximal compact subgroup KR in that semisimple group GR, and makes the complexification "K" of KR. Then the relevant object of study is formula_20. In March 2007, a collaborative project, the "Atlas of Lie groups and representations", announced that the L–V polynomials had been calculated for the split form of "E"8. Generalization to other objects in representation theory. The second paper of Kazhdan and Lusztig established a geometric setting for definition of Kazhdan–Lusztig polynomials, namely, the geometry of singularities of Schubert varieties in the flag variety. Much of the later work of Lusztig explored analogues of Kazhdan–Lusztig polynomials in the context of other natural singular algebraic varieties arising in representation theory, in particular, closures of nilpotent orbits and quiver varieties. 
It turned out that the representation theory of quantum groups, modular Lie algebras and affine Hecke algebras are all tightly controlled by appropriate analogues of Kazhdan–Lusztig polynomials. They admit an elementary description, but the deeper properties of these polynomials necessary for representation theory follow from sophisticated techniques of modern algebraic geometry and homological algebra, such as the use of intersection cohomology, perverse sheaves and Beilinson–Bernstein–Deligne decomposition. The coefficients of the Kazhdan–Lusztig polynomials are conjectured to be the dimensions of some homomorphism spaces in Soergel's bimodule category. This is the only known positive interpretation of these coefficients for arbitrary Coxeter groups. Combinatorial theory. Combinatorial properties of Kazhdan–Lusztig polynomials and their generalizations are a topic of active current research. Given their significance in representation theory and algebraic geometry, attempts have been undertaken to develop the theory of Kazhdan–Lusztig polynomials in purely combinatorial fashion, relying to some extent on geometry, but without reference to intersection cohomology and other advanced techniques. This has led to exciting developments in algebraic combinatorics, such as "pattern-avoidance phenomenon". Some references are given in the textbook of . A research monograph on the subject is . Inequality. Kobayashi (2013) proved that values of Kazhdan–Lusztig polynomials at formula_21 for crystallographic Coxeter groups satisfy certain strict inequality: Let formula_22 be a crystallographic Coxeter system and formula_23 its Kazhdan–Lusztig polynomials. If formula_24 and formula_25, then there exists a reflection formula_26 such that formula_27. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "P_{y,w}(q)" }, { "math_id": 1, "text": "\\ell" }, { "math_id": 2, "text": "\\ell(w)" }, { "math_id": 3, "text": "T_w" }, { "math_id": 4, "text": "w\\in W" }, { "math_id": 5, "text": "\\mathbb{Z}[q^{1/2}, q^{-1/2}]" }, { "math_id": 6, "text": "\\begin{align}\nT_y T_w &= T_{yw}, && \\mbox{if }\\ell(yw) = \\ell(y) + \\ell(w) \\\\\n(T_s + 1)(T_s - q) &= 0, && \\mbox{if }s \\in S.\n\\end{align}" }, { "math_id": 7, "text": "D(T_w)=T_{w^{-1}}^{-1}" }, { "math_id": 8, "text": "C'_w=q^{-\\frac{\\ell(w)}{2}}\\sum_{y\\le w} P_{y,w}T_y" }, { "math_id": 9, "text": "C'_w" }, { "math_id": 10, "text": "T_{y^{-1}}^{-1} = \\sum_xD(R_{x,y})q^{-\\ell(x)}T_x." }, { "math_id": 11, "text": "R_{x,y} =\n\\begin{cases}\n 0, & \\mbox{if } x \\not\\le y \\\\\n 1, & \\mbox{if } x = y \\\\\n R_{sx,sy}, & \\mbox{if } sx < x \\mbox{ and } sy < y \\\\\n R_{xs,ys}, & \\mbox{if } xs < x \\mbox{ and } ys < y \\\\\n (q-1)R_{sx,y} + qR_{sx,sy}, & \\mbox{if } sx > x \\mbox{ and } sy < y\n\\end{cases}" }, { "math_id": 12, "text": "q^{\\frac{1}{2}(\\ell(w)-\\ell(x))}D(P_{x,w}) - q^{\\frac{1}{2}(\\ell(x)-\\ell(w))}P_{x,w} = \\sum_{x<y\\le w}(-1)^{\\ell(x)+\\ell(y)}q^{\\frac{1}{2}(-\\ell(x)+2\\ell(y)-\\ell(w))}D(R_{x,y})P_{y,w}" }, { "math_id": 13, "text": "\\begin{align}\n152 q^{22} &+ 3,472 q^{21} + 38,791 q^{20} + 293,021 q^{19} + 1,370,892 q^{18} + 4,067,059 q^{17} + 7,964,012 q^{16}\\\\\n&+ 11,159,003 q^{15} + 11,808,808 q^{14} + 9,859,915 q^{13} + 6,778,956 q^{12} + 3,964,369 q^{11} + 2,015,441 q^{10}\\\\\n&+ 906,567 q^9 + 363,611 q^8 + 129,820 q^7 + 41,239 q^6 + 11,426 q^5 + 2,677 q^4 + 492 q^3 + 61 q^2 + 3 q\n\\end{align}" }, { "math_id": 14, "text": "\\operatorname{ch}(L_w)=\\sum_{y\\le w}(-1)^{\\ell(w)-\\ell(y)}P_{y,w}(1)\\operatorname{ch}(M_y)" }, { "math_id": 15, "text": "\\operatorname{ch}(M_w)=\\sum_{y\\le w}P_{w_0w,w_0y}(1)\\operatorname{ch}(L_y)" }, { "math_id": 16, "text": "P_{y,w}(q) = \\sum_{i} q^i \\dim(\\operatorname{Ext}^{\\ell(w)-\\ell(y)-2i}(M_y,L_w))" }, { "math_id": 17, "text": "P_{y,w_0}" }, { "math_id": 18, "text": "P_{y,w}(q) = \\sum_iq^i\\dim IH^{2i}_{X_y}(\\overline{X_w})" }, { "math_id": 19, "text": "B\\backslash G/ B" }, { "math_id": 20, "text": "K\\backslash G/ B" }, { "math_id": 21, "text": "q=1" }, { "math_id": 22, "text": "(W, S)" }, { "math_id": 23, "text": "{P_{uw}(q)}" }, { "math_id": 24, "text": "u<w" }, { "math_id": 25, "text": "P_{uw}(1)>1" }, { "math_id": 26, "text": "t" }, { "math_id": 27, "text": "P_{uw}(1)>P_{tu, w}(1)>0" } ]
https://en.wikipedia.org/wiki?curid=10161645
10162277
Sturm–Picone comparison theorem
In mathematics, in the field of ordinary differential equations, the Sturm–Picone comparison theorem, named after Jacques Charles François Sturm and Mauro Picone, is a classical theorem which provides criteria for the oscillation and non-oscillation of solutions of certain linear differential equations in the real domain. Let pi, qi for "i" = 1, 2 be real-valued continuous functions on the interval ["a", "b"] and let formula_0 and formula_1 be two homogeneous linear second order differential equations in self-adjoint form, labelled (1) and (2) respectively, with formula_2 and formula_3 Let u be a non-trivial solution of (1) with successive roots at z1 and "z"2 and let v be a non-trivial solution of (2). Then one of the following properties holds: there exists an "x" in ("z"1, "z"2) such that "v"("x") = 0; or there exists a "λ" in R such that "v"("x") = "λ" "u"("x") for all "x". The first part of the conclusion is due to Sturm (1836), while the second (alternative) part of the theorem is due to Picone (1910) whose simple proof was given using his now famous Picone identity. In the special case where both equations are identical one obtains the Sturm separation theorem. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
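A minimal numerical illustration (not part of the article): taking "p"1 = "p"2 = 1, "q"1 = 1 and "q"2 = 4 satisfies the hypotheses, "u"("x") = sin "x" solves (1) with consecutive zeros "z"1 = 0 and "z"2 = π, and the theorem predicts that every non-trivial solution "v" of (2) vanishes somewhere in (0, π). The explicit solutions of (2) and the sampling grid below are choices made only for the example.

```python
# Illustrative check: with p1 = p2 = 1, q1 = 1, q2 = 4, u(x) = sin(x) solves (1) with
# zeros 0 and pi, and every non-trivial v(x) = A cos(2x) + B sin(2x) solving (2) is
# observed to change sign (hence vanish) inside (0, pi).
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, np.pi, 10_001)[1:-1]        # points of the open interval (0, pi)

for _ in range(5):
    A, B = rng.standard_normal(2)                # a random non-trivial solution of (2)
    v = A * np.cos(2 * x) + B * np.sin(2 * x)
    has_zero = np.any(np.sign(v[:-1]) != np.sign(v[1:])) or np.any(v == 0)
    print(f"A={A:+.3f}, B={B:+.3f}: zero in (0, pi)? {has_zero}")
```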
[ { "math_id": 0, "text": "(p_1(x) y^\\prime)^\\prime + q_1(x) y = 0 " }, { "math_id": 1, "text": "(p_2(x) y^\\prime)^\\prime + q_2(x) y = 0 " }, { "math_id": 2, "text": "0 < p_2(x) \\le p_1(x)" }, { "math_id": 3, "text": "q_1(x) \\le q_2(x)." } ]
https://en.wikipedia.org/wiki?curid=10162277
10162448
Volatility swap
Financial derivative instrument In finance, a volatility swap is a forward contract on the future realised volatility of a given underlying asset. Volatility swaps allow investors to trade the volatility of an asset directly, much as they would trade a price index. Its payoff at expiration is equal to formula_0 where formula_1 is the annualised realised volatility of the underlying over the life of the contract, formula_2 is the volatility strike (the delivery price fixed at inception), and formula_3 is the notional amount of the swap per annualised volatility point; that is, the holder of a volatility swap receives formula_3 for every point by which the underlying's annualised realised volatility formula_1 exceeded the delivery price of formula_2, and conversely, pays formula_3 for every point the realised volatility falls short of the strike. The underlying is usually a financial instrument with an active or liquid options market, such as foreign exchange, stock indices, or single stocks. Unlike an investment in options, whose volatility exposure is contaminated by its price dependence, these swaps provide pure exposure to volatility alone. This is truly the case only for forward starting volatility swaps; once the swap has its asset fixings, its mark-to-market value also depends on the current asset price. One can use these instruments to speculate on future volatility levels, to trade the spread between realized and implied volatility, or to hedge the volatility exposure of other positions or businesses. Volatility swaps are more commonly quoted and traded than the very similar but simpler variance swaps, which can be replicated with a linear combination of options and a dynamic position in futures. The difference between the two is convexity: the payoff of a variance swap is linear in variance but convex in volatility. That means, inevitably, a static replication (a buy-and-hold strategy) of a volatility swap is impossible. However, using the variance swap (formula_4) as a hedging instrument and targeting volatility (formula_5), volatility can be written as a function of variance: formula_6 with formula_7 and formula_8 chosen to minimise the expected squared deviation of the two sides: formula_9 Then, if the probability of negative realised volatilities is negligible, future volatilities can be assumed to be normal with mean formula_10 and standard deviation formula_11: formula_12 and the hedging coefficients are: formula_13 formula_14 Definition of the realized volatility. The definition of the annualized realized volatility depends on the trader's viewpoint on the underlying price observation, which can be made either discretely or continuously in time. For the former, with a construction analogous to that of the variance swap, suppose there are formula_15 sampling points of the observed underlying prices, say formula_16 where formula_17 for formula_18 to formula_19. Define formula_20, the natural log returns. Then the discrete-sampling annualized realized volatility is defined by formula_21 which is basically the square root of the annualized realized variance. Here, formula_22 denotes an annualization factor, commonly selected to be the number of price observations in a year, i.e. formula_23 if the price is monitored daily or formula_24 if it is done weekly. formula_25 is the expiry date of the volatility swap, defined by formula_26. The continuous version of the annualized realized volatility is defined by means of the square root of the quadratic variation of the underlying price log-return: formula_27 where formula_28 is the instantaneous volatility of the underlying asset. As the number of price observations increases to infinity, one finds that formula_1 converges in probability to formula_29 i.e. 
formula_30 representing the interconnection and consistency between the two approaches. Pricing and valuation. In general, for a specified underlying asset, the main aim of pricing swaps is to find a fair strike price, since there is no cost to enter the contract. One of the most popular approaches to such fairness is exploiting the Martingale pricing method, i.e. finding the expected present value of a given derivative security with respect to some risk-neutral probability measure (or Martingale measure). How such a measure is chosen depends on the model used to describe the price evolution. Mathematically speaking, if we suppose that the price process formula_31 follows the Black-Scholes model under the martingale measure formula_32, then it solves the following SDE: formula_33 where formula_34 is the risk-free interest rate, formula_35 is the instantaneous volatility of the price, and formula_36 is a standard Brownian motion defined on the filtered probability space formula_37, with formula_38 the natural filtration of formula_39. Since we know that formula_40 is the volatility swap payoff at expiry in the discretely sampled case (which is switched to formula_29 for the continuous case), its expected value at time formula_41, denoted by formula_42, is formula_43 which gives formula_44 due to the zero price of the swap, defining the value of a fair volatility strike. The solution can be discovered in various ways. For instance, we obtain a closed-form pricing formula once the probability distribution function of formula_45 or formula_29 is known, or compute it numerically by means of the Monte Carlo method. Alternatively, under certain restrictions, one can utilize the values of European options to approximate the solution. Pricing volatility swap with continuous-sampling. Following the argument of Carr and Lee (2009), in the case of the continuous-sampling realized volatility, suppose that the contract begins at time formula_46, that formula_47 is deterministic, and that formula_48 is arbitrary (deterministic or a stochastic process) but independent of the price's movement, i.e. there is no correlation between formula_48 and formula_49. Denote by formula_50 the Black-Scholes formula for a European call option written on formula_49 with strike price formula_51 at time formula_52 and expiry date formula_25. Then, with the auxiliary call option chosen to be at-the-money, i.e. formula_53, the volatility strike formula_2 can be approximated by formula_54 which results from applying a Taylor series to the normal distribution parts of the Black-Scholes formula. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
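The following minimal sketch (not part of the article) puts the pieces together under a constant-volatility Black-Scholes model with zero interest rate: it computes the discretely sampled annualised realised volatility and the corresponding swap payoff, estimates the fair strike by Monte Carlo, and compares it with the at-the-money approximation formula_54 (whose interest-rate term vanishes here). All parameter values are made-up inputs for the example.

```python
# Illustrative sketch: realised volatility, swap payoff, and the ATM approximation of
# the fair strike under Black-Scholes with constant sigma and r = 0.
import numpy as np
from math import sqrt, pi, erf

rng = np.random.default_rng(42)
sigma, T, A = 0.20, 1.0, 252            # true volatility, maturity in years, annualisation factor
n = int(A * T)                          # daily sampling: n returns from n + 1 prices
S0, K_vol, N_vol = 100.0, 0.19, 1000.0  # spot, volatility strike, notional per volatility point

def realised_vol(path):
    R = np.diff(np.log(path))           # natural log returns
    return sqrt(A / n * np.sum(R ** 2))

def simulate_path():
    dt = T / n
    z = rng.standard_normal(n)
    log_ret = -0.5 * sigma ** 2 * dt + sigma * sqrt(dt) * z    # r = 0
    return S0 * np.exp(np.concatenate(([0.0], np.cumsum(log_ret))))

vols = np.array([realised_vol(simulate_path()) for _ in range(2000)])
print("one payoff:", (vols[0] - K_vol) * N_vol)                # (sigma_realised - K_vol) * N_vol
print("Monte Carlo fair strike:", vols.mean())                 # estimate of E[sigma_realised]

# ATM approximation: K_vol ~ sqrt(2*pi/T) * C0(S0, T) / S0 when r = 0.
def N(x):                                                      # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))
d1 = 0.5 * sigma * sqrt(T)
C0 = S0 * (N(d1) - N(-d1))                                     # at-the-money Black-Scholes call, r = 0
print("ATM approximation:", sqrt(2 * pi / T) * C0 / S0)
```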
[ { "math_id": 0, "text": "(\\sigma_{\\text{realised}}-K_{\\text{vol}})N_{\\text{vol}}" }, { "math_id": 1, "text": "\\sigma_{\\text{realised}}" }, { "math_id": 2, "text": "K_{\\text{vol}}" }, { "math_id": 3, "text": "N_{\\text{vol}}" }, { "math_id": 4, "text": "\\Sigma_{T}^{2}" }, { "math_id": 5, "text": "\\Sigma_{T}" }, { "math_id": 6, "text": "\\Sigma_{T} = a\\Sigma_{T}^{2}+b" }, { "math_id": 7, "text": "a" }, { "math_id": 8, "text": "b" }, { "math_id": 9, "text": "\\text{min} E[(\\Sigma_{T} - a\\Sigma_{T}^{2}-b)^{2}]" }, { "math_id": 10, "text": "\\bar\\Sigma" }, { "math_id": 11, "text": "\\sigma_{\\Sigma}" }, { "math_id": 12, "text": "\\Sigma_{T} \\sim N(\\bar\\Sigma, \\sigma_{\\Sigma})" }, { "math_id": 13, "text": "a=\\frac{1}{2 \\bar\\Sigma + \\frac{\\sigma_{\\Sigma}^{2}}{\\bar\\Sigma}}" }, { "math_id": 14, "text": "b=\\frac{\\bar\\Sigma}{2+\\frac{\\sigma_{\\Sigma}^{2}}{\\bar\\Sigma^2}}" }, { "math_id": 15, "text": "n+1" }, { "math_id": 16, "text": "S_{t_0},S_{t_1}, ..., S_{t_n} " }, { "math_id": 17, "text": "0\\leq t_{i-1}<t_{i}\\leq T" }, { "math_id": 18, "text": "i=1" }, { "math_id": 19, "text": "n" }, { "math_id": 20, "text": "R_{i} = \\ln(S_{t_{i}}/S_{t_{i-1}})," }, { "math_id": 21, "text": "\\sigma_{\\text{realised}} := \\sqrt{\\frac{A}{n} \\sum_{i=1}^{n} R_{i}^2 }," }, { "math_id": 22, "text": "A" }, { "math_id": 23, "text": "A=252" }, { "math_id": 24, "text": "A=52" }, { "math_id": 25, "text": "T" }, { "math_id": 26, "text": "n/A" }, { "math_id": 27, "text": "\\tilde{\\sigma}_{\\text{realized}}:=\\sqrt{\\frac{1}{T}\\int_{0}^{T}\\sigma^2(s)ds}," }, { "math_id": 28, "text": "\\sigma(s)" }, { "math_id": 29, "text": "\\tilde{\\sigma}_{\\text{realized}}" }, { "math_id": 30, "text": "\\lim_{n\\to\\infty}\\sqrt{\\frac{A}{n} \\sum_{i=1}^{n} R_{i}^2 } = \\sqrt{\\frac{1}{T}\\int_{0}^{T}\\sigma^2(s)ds}," }, { "math_id": 31, "text": "S=(S_t)_{0\\leq t \\leq T}" }, { "math_id": 32, "text": "\\mathbb{Q}" }, { "math_id": 33, "text": "\\frac{dS_t}{S_t}=r(t)dt+\\sigma(t)dW_t, \\;\\; S_0>0" }, { "math_id": 34, "text": "r(t)\\in\\mathbb{R}" }, { "math_id": 35, "text": "\\sigma(t)>0" }, { "math_id": 36, "text": "W=(W_t)_{0\\leq t \\leq T}" }, { "math_id": 37, "text": "(\\Omega,\\mathcal{F},\\mathbb{F},\\mathbb{Q})" }, { "math_id": 38, "text": "\\mathbb{F}=(\\mathcal{F}_t)_{0\\leq t \\leq T}" }, { "math_id": 39, "text": "W" }, { "math_id": 40, "text": "(\\sigma_{\\text{realised}}-K_{\\text{vol}})\\times N_{\\text{vol}}" }, { "math_id": 41, "text": "t_0" }, { "math_id": 42, "text": "V_{t_0}" }, { "math_id": 43, "text": "V_{t_0}=e^{\\int^{T}_{t_0}r(s)ds}\\mathbb{E}^{\\mathbb{Q}}[\\sigma_{\\text{realised}}-K_{\\text{vol}}|\\mathcal{F}_ {t_0}]\\times N_{\\text{vol}}," }, { "math_id": 44, "text": "K_{\\text{vol}} = \\mathbb{E}^{\\mathbb{Q}}[\\sigma_{\\text{realised}}|\\mathcal{F}_ {t_0}]" }, { "math_id": 45, "text": "\\sigma}_{\\text{realized}" }, { "math_id": 46, "text": "t_0=0" }, { "math_id": 47, "text": "r(t)" }, { "math_id": 48, "text": "\\sigma(t)" }, { "math_id": 49, "text": "S_t" }, { "math_id": 50, "text": "C_t(K,T)" }, { "math_id": 51, "text": "K" }, { "math_id": 52, "text": "t,\\;0\\leq t \\leq T" }, { "math_id": 53, "text": "K=S_0" }, { "math_id": 54, "text": "K_{\\text{vol}}=\\mathbb{E}^{\\mathbb{Q}}[\\tilde{\\sigma}_{\\text{realised}}|\\mathcal{F}_ {t_0}]\\approx \\sqrt{\\frac{2\\pi}{T}}\\frac{C_0(S_0,T)}{S_0}-2r(T)" } ]
https://en.wikipedia.org/wiki?curid=10162448
10163003
Bruhat order
Partial order on a Coxeter group In mathematics, the Bruhat order (also called the strong order, strong Bruhat order, Chevalley order, Bruhat–Chevalley order, or Chevalley–Bruhat order) is a partial order on the elements of a Coxeter group, that corresponds to the inclusion order on Schubert varieties. History. The Bruhat order on the Schubert varieties of a flag manifold or a Grassmannian was first studied by , and the analogue for more general semisimple algebraic groups was studied by . started the combinatorial study of the Bruhat order on the Weyl group, and introduced the name "Bruhat order" because of the relation to the Bruhat decomposition introduced by François Bruhat. The left and right weak Bruhat orderings were studied by Björner (1984). Definition. If ("W", "S") is a Coxeter system with generators "S", then the Bruhat order is a partial order on the group "W". Recall that a reduced word for an element "w" of "W" is a minimal length expression of "w" as a product of elements of "S", and the length "ℓ"("w") of "w" is the length of a reduced word. For more on the weak orders, see the article weak order of permutations. Bruhat graph. The Bruhat graph is a directed graph related to the (strong) Bruhat order. The vertex set is the set of elements of the Coxeter group and the edge set consists of directed edges ("u", "v") whenever "u" = "tv" for some reflection "t" and "ℓ"("u") &lt; "ℓ"("v"). One may view the graph as an edge-labeled directed graph with edge labels coming from the set of reflections. (One could also define the Bruhat graph using multiplication on the right; as graphs, the resulting objects are isomorphic, but the edge labelings are different.) The strong Bruhat order on the symmetric group (permutations) has Möbius function given by formula_0, and thus this poset is Eulerian, meaning its Möbius function is produced by the rank function on the poset.
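For the symmetric group, the order-theoretic data can be enumerated directly: reflections are the transpositions and the length of a permutation is its number of inversions. The following Python sketch lists the labelled directed edges of the Bruhat graph of S_3 exactly as defined above; it is an illustration for small n only.

    from itertools import permutations, combinations

    def inversions(w):
        """Coxeter length of a permutation = number of inversions."""
        return sum(1 for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j])

    def apply_reflection(t, w):
        """Left multiplication by the transposition t = (a b): swap the values a and b."""
        a, b = t
        return tuple(b if x == a else a if x == b else x for x in w)

    n = 3
    group = list(permutations(range(1, n + 1)))
    reflections = list(combinations(range(1, n + 1), 2))

    edges = []
    for v in group:
        for t in reflections:
            u = apply_reflection(t, v)
            if inversions(u) < inversions(v):
                edges.append((u, v, t))       # directed edge u -> v, labelled by t

    for u, v, t in sorted(edges):
        print(u, "->", v, " label", t)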
[ { "math_id": 0, "text": "\\mu(\\pi,\\sigma)=(-1)^{\\ell(\\sigma)-\\ell(\\pi)}" } ]
https://en.wikipedia.org/wiki?curid=10163003
10163132
Heston model
Model in finance In finance, the Heston model, named after Steven L. Heston, is a mathematical model that describes the evolution of the volatility of an underlying asset. It is a stochastic volatility model: such a model assumes that the volatility of the asset is not constant, nor even deterministic, but follows a random process. Basic Heston model. The basic Heston model assumes that "St", the price of the asset, is determined by a stochastic process, formula_0 where the volatility formula_1 follows an Ornstein-Uhlenbeck process formula_2 Itô's lemma then shows that formula_3, the instantaneous variance, is given by a Feller square-root or CIR process, formula_4 and formula_5 are Wiener processes (i.e., continuous random walks) with correlation ρ. The model has five parameters: If the parameters obey the following condition (known as the Feller condition) then the process formula_3 is strictly positive formula_11 "See Risk-neutral measure for the complete article" Risk-neutral measure. A fundamental concept in derivatives pricing is the risk-neutral measure; this is explained in further depth in the above article. For our purposes, it is sufficient to note the following: Consider a general situation where we have formula_12 underlying assets and a linearly independent set of formula_13 Wiener processes. The set of equivalent measures is isomorphic to Rm, the space of possible drifts. Consider the set of equivalent martingale measures to be isomorphic to a manifold formula_14 embedded in Rm; initially, consider the situation where we have no assets and formula_14 is isomorphic to Rm. Now consider each of the underlying assets as providing a constraint on the set of equivalent measures, as its expected discount process must be equal to a constant (namely, its initial value). By adding one asset at a time, we may consider each additional constraint as reducing the dimension of formula_14 by one dimension. Hence we can see that in the general situation described above, the dimension of the set of equivalent martingale measures is formula_15. In the Black-Scholes model, we have one asset and one Wiener process. The dimension of the set of equivalent martingale measures is zero; hence it can be shown that there is a single value for the drift, and thus a single risk-neutral measure, under which the discounted asset formula_16 will be a martingale. In the Heston model, we still have one asset (volatility is not considered to be directly observable or tradeable in the market) but we now have two Wiener processes - the first in the Stochastic Differential Equation (SDE) for the stock price and the second in the SDE for the variance of the stock price. Here, the dimension of the set of equivalent martingale measures is one; there is no unique risk-free measure. This is of course problematic; while any of the risk-free measures may theoretically be used to price a derivative, it is likely that each of them will give a different price. In theory, however, only one of these risk-free measures would be compatible with the market prices of volatility-dependent options (for example, European calls, or more explicitly, variance swaps). Hence we could add a volatility-dependent asset; by doing so, we add an additional constraint, and thus choose a single risk-free measure which is compatible with the market. This measure may be used for pricing. Calibration. 
The calibration of the Heston model is often formulated as a least squares problem, with the objective function minimizing the squared difference between the prices observed in the market and those calculated from the model. The prices are typically those of vanilla options. Sometimes the model is also calibrated to the variance swap term-structure, as in Guillaume and Schoutens. Yet another approach is to include forward start options, or barrier options as well, in order to capture the forward smile. Under the Heston model, the price of vanilla options is given analytically, but requires a numerical method to compute the integral. Le Floc'h summarized the various quadratures applied and proposed an efficient adaptive Filon quadrature. Calibration usually requires the gradient of the objective function with respect to the model parameters. Historically this gradient was computed with a finite difference approximation, which is less accurate, less efficient and less elegant than an analytical gradient, because an insightful expression of the latter became available only when a new representation of the characteristic function was introduced by Cui et al. in 2017. Another possibility is to resort to automatic differentiation. For example, the tangent mode of algorithmic differentiation may be applied using dual numbers in a straightforward manner. References.
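As an illustration of the dynamics defined earlier in this article, the following Python sketch simulates the basic Heston model with a full-truncation Euler scheme, using the standard readings of the five parameters formula_6 to formula_10 (initial variance, long-run variance, correlation of the two Wiener processes, mean-reversion rate and volatility of volatility). The numerical values are arbitrary but chosen to satisfy the Feller condition, and this discretisation is only one common choice among several.

    import numpy as np

    def simulate_heston(S0, v0, mu, kappa, theta, xi, rho, T, n_steps, n_paths, seed=0):
        """Euler scheme with full truncation of the variance process."""
        rng = np.random.default_rng(seed)
        dt = T / n_steps
        S = np.full(n_paths, float(S0))
        v = np.full(n_paths, float(v0))
        for _ in range(n_steps):
            z1 = rng.standard_normal(n_paths)
            z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(n_paths)
            v_pos = np.maximum(v, 0.0)                      # truncate negative variance
            S *= np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
            v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
        return S, np.maximum(v, 0.0)

    kappa, theta, xi = 2.0, 0.04, 0.3
    assert 2 * kappa * theta > xi ** 2                      # Feller condition
    S_T, v_T = simulate_heston(S0=100.0, v0=0.04, mu=0.0, kappa=kappa,
                               theta=theta, xi=xi, rho=-0.7,
                               T=1.0, n_steps=252, n_paths=10000)
    print(S_T.mean(), v_T.mean())

A least-squares calibration of the kind described above can be sketched as follows. Here heston_price stands for a user-supplied vanilla pricer (for example via the characteristic-function integral) and is deliberately left unimplemented; the starting point and parameter bounds are illustrative assumptions only.

    import numpy as np
    from scipy.optimize import least_squares

    def residuals(params, market_quotes, S0, r, heston_price):
        """Differences between model prices and observed market prices."""
        v0, kappa, theta, xi, rho = params
        return np.array([
            heston_price(S0, K, T, r, v0, kappa, theta, xi, rho) - price
            for K, T, price in market_quotes          # (strike, maturity, price) tuples
        ])

    def calibrate(market_quotes, S0, r, heston_price,
                  x0=(0.04, 1.5, 0.04, 0.3, -0.6)):
        bounds = ([1e-4, 1e-3, 1e-4, 1e-3, -0.999],
                  [2.0, 20.0, 2.0, 5.0, 0.999])
        return least_squares(residuals, x0, bounds=bounds,
                             args=(market_quotes, S0, r, heston_price))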
[ { "math_id": 0, "text": "\ndS_t = \\mu S_t\\,dt + \\sqrt{\\nu_t} S_t\\,dW^S_t,\n" }, { "math_id": 1, "text": "\n\\sqrt{\\nu_t}\n" }, { "math_id": 2, "text": "\nd \\sqrt{\\nu_t} = -\\theta \\sqrt{\\nu_t} \\,dt + \\delta\\,dW^\\nu_t.\n" }, { "math_id": 3, "text": "\\nu_t" }, { "math_id": 4, "text": "\nd\\nu_t = \\kappa(\\theta - \\nu_t)\\,dt + \\xi \\sqrt{\\nu_t}\\,dW^{\\nu}_t,\n" }, { "math_id": 5, "text": "W^S_t, W^{\\nu}_t" }, { "math_id": 6, "text": "\\nu_0" }, { "math_id": 7, "text": "\\theta" }, { "math_id": 8, "text": "\\rho" }, { "math_id": 9, "text": "\\kappa" }, { "math_id": 10, "text": "\\xi" }, { "math_id": 11, "text": "\n2 \\kappa \\theta > \\xi^2.\n" }, { "math_id": 12, "text": "n" }, { "math_id": 13, "text": "m" }, { "math_id": 14, "text": "M" }, { "math_id": 15, "text": "m-n" }, { "math_id": 16, "text": "e^{-\\rho t}S_t" } ]
https://en.wikipedia.org/wiki?curid=10163132
10163390
Picone identity
In the field of ordinary differential equations, the Picone identity, named after Mauro Picone, is a classical result about homogeneous linear second order differential equations. Since its inception in 1910 it has been used with tremendous success to give an almost immediate proof of the Sturm comparison theorem, a theorem whose proof took up many pages in Sturm's original memoir of 1836. It is also useful in studying the oscillation of such equations and has been generalized to other types of differential equations and difference equations. The Picone identity is used to prove the Sturm–Picone comparison theorem. Picone identity. Suppose that "u" and "v" are solutions of the two homogeneous linear second order differential equations in self-adjoint form formula_0 and formula_1 Then, for all "x" with "v"("x") ≠ 0, the following identity holds: formula_2 Proof. formula_3 formula_4 formula_5 References.
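The identity can also be checked mechanically. The following Python sketch uses sympy to expand the left-hand side, substitute the two self-adjoint equations for the second derivatives of u and v, and verify that the difference from the right-hand side simplifies to zero.

    import sympy as sp

    x = sp.symbols('x')
    u, v, p1, p2, q1, q2 = [sp.Function(n)(x) for n in ('u', 'v', 'p1', 'p2', 'q1', 'q2')]

    lhs = sp.diff((u / v) * (p1 * sp.diff(u, x) * v - p2 * u * sp.diff(v, x)), x)
    rhs = (q2 - q1) * u**2 + (p1 - p2) * sp.diff(u, x)**2 \
          + p2 * (sp.diff(u, x) - sp.diff(v, x) * u / v)**2

    # Impose the two self-adjoint ODEs: (p1 u')' = -q1 u and (p2 v')' = -q2 v,
    # i.e. p1*u'' = -q1*u - p1'*u' and p2*v'' = -q2*v - p2'*v'.
    subs = {
        sp.diff(u, x, 2): (-q1 * u - sp.diff(p1, x) * sp.diff(u, x)) / p1,
        sp.diff(v, x, 2): (-q2 * v - sp.diff(p2, x) * sp.diff(v, x)) / p2,
    }
    print(sp.simplify(sp.expand((lhs - rhs).subs(subs))))   # expected output: 0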
[ { "math_id": 0, "text": "(p_1(x) u')' + q_1(x) u = 0 " }, { "math_id": 1, "text": "(p_2(x) v')' + q_2(x) v = 0. " }, { "math_id": 2, "text": "\\left(\\frac{u}{v}(p_1 u' v - p_2 u v')\\right)' = \\left(q_2 - q_1\\right) u^2 + \\left(p_1 - p_2\\right)u'^2 + p_2\\left(u'-v'\\frac{u}{v}\\right)^2." }, { "math_id": 3, "text": "\\left(\\frac{u}{v}(p_1 u' v - p_2 u v')\\right)'=\\left(u p_1 u' -p_2v'u^2 \\frac 1 v \\right)'\n=u'p_1u' +u(p_1 u')' -(p_2v')'u^2 \\frac 1 v-p_2 v' 2u u' \\frac 1 v +p_2 v' u^2 \\frac{v'}{v^2}=\n " }, { "math_id": 4, "text": "= p_1u'^2-2p_2\\frac{ u u' v'} v+p_2\\frac{u^2 v'^2}{v^2}+u (p_1u')'- (p_2 v')'\\frac{u^2}{v}=" }, { "math_id": 5, "text": "= p_1u'^2-p_2u'^2+p_2u'^2-2p_2u'\\frac {u v'} v+p_2\\left(\\frac{u v'}{v}\\right)^2-u (q_1u)+ (q_2 v)\\frac {u^2} v \n= \\left(p_1 - p_2\\right)u'^2 + p_2\\left(u'-v'\\frac{u}{v}\\right)^2 + \\left(q_2 - q_1\\right) u^2" } ]
https://en.wikipedia.org/wiki?curid=10163390
1016345
Cayley–Purser algorithm
1999 public-key cryptography algorithm The Cayley–Purser algorithm was a public-key cryptography algorithm published in early 1999 by 16-year-old Irishwoman Sarah Flannery, based on an unpublished work by Michael Purser, founder of Baltimore Technologies, a Dublin data security company. Flannery named it for mathematician Arthur Cayley. It has since been found to be flawed as a public-key algorithm, but was the subject of considerable media attention. History. During a work-experience placement with Baltimore Technologies, Flannery was shown an unpublished paper by Michael Purser which outlined a new public-key cryptographic scheme using non-commutative multiplication. She was asked to write an implementation of this scheme in Mathematica. Before this placement, Flannery had attended the 1998 ESAT Young Scientist and Technology Exhibition with a project describing already existing cryptographic techniques from the Caesar cipher to RSA. This had won her the Intel Student Award which included the opportunity to compete in the 1998 Intel International Science and Engineering Fair in the United States. Feeling that she needed some original work to add to her exhibition project, Flannery asked Michael Purser for permission to include work based on his cryptographic scheme. On advice from her mathematician father, Flannery decided to use matrices to implement Purser's scheme as matrix multiplication has the necessary property of being non-commutative. As the resulting algorithm would depend on multiplication it would be a great deal faster than the RSA algorithm which uses an exponential step. For her Intel Science Fair project Flannery prepared a demonstration where the same plaintext was enciphered using both RSA and her new Cayley–Purser algorithm and it did indeed show a significant time improvement. Returning to the ESAT Young Scientist and Technology Exhibition in 1999, Flannery formalised Cayley-Purser's runtime and analyzed a variety of known attacks, none of which were determined to be effective. Flannery did not make any claims that the Cayley–Purser algorithm would replace RSA, knowing that any new cryptographic system would need to stand the test of time before it could be acknowledged as a secure system. The media were not so circumspect however and when she received first prize at the ESAT exhibition, newspapers around the world reported the story that a young girl genius had revolutionised cryptography. In fact an attack on the algorithm was discovered shortly afterwards but she analyzed it and included it as an appendix in later competitions, including a Europe-wide competition in which she won a major award. Overview. Notation used in this discussion is as in Flannery's original paper. Key generation. Like RSA, Cayley-Purser begins by generating two large primes "p" and "q" and their product "n", a semiprime. Next, consider GL(2,"n"), the general linear group of 2×2 matrices with integer elements and modular arithmetic mod "n". For example, if "n"=5, we could write: formula_0 formula_1 This group is chosen because it has large order (for large semiprime "n"), equal to ("p"2−1)("p"2−"p")("q"2−1)("q"2−"q"). Let formula_2 and formula_3 be two such matrices from GL(2,"n") chosen such that formula_4. Choose some natural number "r" and compute: formula_5 formula_6 The public key is formula_7, formula_3, formula_8, and formula_9. The private key is formula_2. Encryption. 
The sender begins by generating a random natural number "s" and computing: formula_10 formula_11 formula_12 Then, to encrypt a message, each message block is encoded as a number (as in RSA) and they are placed four at a time as elements of a plaintext matrix formula_13. Each formula_13 is encrypted using: formula_14 Then formula_15 and formula_16 are sent to the receiver. Decryption. The receiver recovers the original plaintext matrix formula_13 via: formula_17 formula_18 Security. Recovering the private key formula_2 from formula_9 is computationally infeasible, at least as hard as finding square roots mod "n" (see quadratic residue). It could be recovered from formula_3 and formula_8 if the system formula_19 could be solved, but the number of solutions to this system is large as long as elements in the group have a large order, which can be guaranteed for almost every element. However, the system can be broken by finding a multiple formula_20 of formula_2 by solving for formula_21 in the following congruence: formula_22 Observe that a solution exists if for some formula_23 and formula_24 formula_25 If formula_21 is known, formula_26 — a multiple of formula_2. Any multiple of formula_2 yields formula_27. This presents a fatal weakness for the system, which has not yet been reconciled. This flaw does not preclude the algorithm's use as a mixed private-key/public-key algorithm, if the sender transmits formula_16 secretly, but this approach presents no advantage over the common approach of transmitting a symmetric encryption key using a public-key encryption scheme and then switching to symmetric encryption, which is faster than Cayley-Purser.
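To make the scheme concrete, here is a toy Python implementation using sympy, with deliberately tiny parameters (n = 5·7, hand-picked non-commuting matrices, small exponents r and s). These choices are illustrative only and offer no security; a real instance would use large primes and randomly generated matrices.

    from sympy import Matrix

    n = 5 * 7                                   # toy semiprime

    def mat_mod(M, n):
        return M.applyfunc(lambda e: e % n)

    def mat_pow(M, k, n):
        R = Matrix.eye(2)
        for _ in range(k):
            R = mat_mod(R * M, n)
        return R

    # --- key generation (receiver) ---
    chi = Matrix([[2, 1], [1, 1]])
    alpha = Matrix([[1, 2], [3, 4]])
    assert mat_mod(chi * alpha, n) != mat_mod(alpha * chi, n)   # non-commuting
    chi_inv = chi.inv_mod(n)
    beta = mat_mod(chi_inv * alpha.inv_mod(n) * chi, n)
    gamma = mat_pow(chi, 3, n)                  # r = 3
    # public key: (n, alpha, beta, gamma); private key: chi

    # --- encryption (sender) ---
    s = 2
    delta = mat_pow(gamma, s, n)
    delta_inv = delta.inv_mod(n)
    epsilon = mat_mod(delta_inv * alpha * delta, n)
    kappa = mat_mod(delta_inv * beta * delta, n)
    mu = Matrix([[7, 2], [9, 1]])               # plaintext block encoded as a matrix
    mu_cipher = mat_mod(kappa * mu * kappa, n)  # send (mu_cipher, epsilon)

    # --- decryption (receiver) ---
    lam = mat_mod(chi_inv * epsilon * chi, n)   # equals kappa^{-1} because delta is a power of chi
    recovered = mat_mod(lam * mu_cipher * lam, n)
    assert recovered == mu
    print(recovered)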
[ { "math_id": 0, "text": "\\begin{bmatrix}0 & 1 \\\\ 2 & 3\\end{bmatrix} +\n\\begin{bmatrix}1 & 2 \\\\ 3 & 4\\end{bmatrix} =\n\\begin{bmatrix}1 & 3 \\\\ 5 & 7\\end{bmatrix} \\equiv\n\\begin{bmatrix}1 & 3 \\\\ 0 & 2\\end{bmatrix}" }, { "math_id": 1, "text": "\\begin{bmatrix}0 & 1 \\\\ 2 & 3 \\end{bmatrix} \\begin{bmatrix}1 & 2 \\\\ 3 & 4\\end{bmatrix} =\n\\begin{bmatrix}3 & 4 \\\\ 11 & 16\\end{bmatrix} \\equiv\n\\begin{bmatrix}3 & 4 \\\\ 1 & 1\\end{bmatrix}" }, { "math_id": 2, "text": "\\chi" }, { "math_id": 3, "text": "\\alpha" }, { "math_id": 4, "text": "\\chi\\alpha \\not= \\alpha\\chi" }, { "math_id": 5, "text": "\\beta = \\chi^{-1}\\alpha^{-1}\\chi," }, { "math_id": 6, "text": "\\gamma = \\chi^r." }, { "math_id": 7, "text": "n" }, { "math_id": 8, "text": "\\beta" }, { "math_id": 9, "text": "\\gamma" }, { "math_id": 10, "text": "\\delta = \\gamma^s" }, { "math_id": 11, "text": "\\epsilon = \\delta^{-1}\\alpha\\delta" }, { "math_id": 12, "text": "\\kappa = \\delta^{-1}\\beta\\delta" }, { "math_id": 13, "text": "\\mu" }, { "math_id": 14, "text": "\\mu' = \\kappa\\mu\\kappa." }, { "math_id": 15, "text": "\\mu'" }, { "math_id": 16, "text": "\\epsilon" }, { "math_id": 17, "text": "\\lambda = \\chi^{-1}\\epsilon\\chi," }, { "math_id": 18, "text": "\\mu = \\lambda\\mu'\\lambda." }, { "math_id": 19, "text": "\\chi\\beta = \\alpha^{-1}\\chi" }, { "math_id": 20, "text": "\\chi'" }, { "math_id": 21, "text": "d" }, { "math_id": 22, "text": "d \\left(\\beta - \\alpha^{-1}\\right) \\equiv \\left(\\alpha^{-1}\\gamma - \\gamma\\beta\\right) \\pmod n" }, { "math_id": 23, "text": "i, j \\in \\left|\\gamma\\right|" }, { "math_id": 24, "text": "x, y \\in \\mathbb{Z}_n" }, { "math_id": 25, "text": "x\\left(\\beta_{ij}^{-1} - \\alpha_{ij}\\right) \\equiv y \\pmod n." }, { "math_id": 26, "text": "d \\mathrm{I} + \\gamma = \\chi'" }, { "math_id": 27, "text": "\\lambda = \\kappa^{-1} = v^{-1}\\chi^{-1} \\epsilon v\\chi" } ]
https://en.wikipedia.org/wiki?curid=1016345
1016556
Induced seismicity
Minor earthquakes and tremors caused by human activity Induced seismicity is typically earthquakes and tremors that are caused by human activity that alters the stresses and strains on Earth's crust. Most induced seismicity is of a low magnitude. A few sites regularly have larger quakes, such as The Geysers geothermal plant in California which averaged two M4 events and 15 M3 events every year from 2004 to 2009. The Human-Induced Earthquake Database ("HiQuake") documents all reported cases of induced seismicity proposed on scientific grounds and is the most complete compilation of its kind. Results of ongoing multi-year research on induced earthquakes by the United States Geological Survey (USGS) published in 2015 suggested that most of the significant earthquakes in Oklahoma, such as the 1952 magnitude 5.7 El Reno earthquake may have been induced by deep injection of wastewater by the oil industry. A huge number of seismic events in oil and gas extraction states like Oklahoma is caused by increasing the volume of wastewater injection that is generated as part of the extraction process. "Earthquake rates have recently increased markedly in multiple areas of the Central and Eastern United States (CEUS), especially since 2010, and scientific studies have linked the majority of this increased activity to wastewater injection in deep disposal wells." Induced seismicity can also be caused by the injection of carbon dioxide as the storage step of carbon capture and storage, which aims to sequester carbon dioxide captured from fossil fuel production or other sources in Earth's crust as a means of climate change mitigation. This effect has been observed in Oklahoma and Saskatchewan. Though safe practices and existing technologies can be utilized to reduce the risk of induced seismicity due to injection of carbon dioxide, the risk is still significant if the storage is large in scale. The consequences of the induced seismicity could disrupt pre-existing faults in the Earth's crust as well as compromise the seal integrity of the storage locations. The seismic hazard from induced seismicity can be assessed using similar techniques as for natural seismicity, although accounting for non-stationary seismicity. It appears that earthquake shaking from induced earthquakes may be similar to that observed in natural tectonic earthquakes, or may have higher shaking at shorter distances. This means that ground-motion models derived from recordings of natural earthquakes, which are often more numerous in strong-motion databases than data from induced earthquakes, may be used with minor adjustments. Subsequently, a risk assessment can be performed, taking into account the increased seismic hazard and the vulnerability of the exposed elements at risk (e.g. local population and the building stock). Finally, the risk can, theoretically at least, be mitigated, either through reductions to the hazard or a reduction to the exposure or the vulnerability. Causes. There are many ways in which induced seismicity has been seen to occur. In the 2010s, some energy technologies that inject or extract fluid from the Earth, such as oil and gas extraction and geothermal energy development, have been found or suspected to cause seismic events. Some energy technologies also produce wastes that may be managed through disposal or storage by injection deep into the ground. For example, waste water from oil and gas production and carbon dioxide from a variety of industrial processes may be managed through underground injection. 
Artificial lakes. The column of water in a large and deep artificial lake alters in-situ stress along an existing fault or fracture. In these reservoirs, the weight of the water column can significantly change the stress on an underlying fault or fracture by increasing the total stress through direct loading, or decreasing the effective stress through the increased pore water pressure. This significant change in stress can lead to sudden movement along the fault or fracture, resulting in an earthquake. Reservoir-induced seismic events can be relatively large compared to other forms of induced seismicity. Though understanding of reservoir-induced seismic activity is very limited, it has been noted that seismicity appears to occur on dams with heights larger than . The extra water pressure created by large reservoirs is the most accepted explanation for the seismic activity. When the reservoirs are filled or drained, induced seismicity can occur immediately or with a small time lag. The first case of reservoir-induced seismicity occurred in 1932 in Algeria's Oued Fodda Dam. The 6.3 magnitude 1967 Koynanagar earthquake occurred in Maharashtra, India with its epicenter, fore- and aftershocks all located near or under the Koyna Dam reservoir. 180 people died and 1,500 were left injured. The effects of the earthquake were felt away in Bombay with tremors and power outages. During the beginnings of the Vajont Dam in Italy, there were seismic shocks recorded during its initial fill. After a landslide almost filled the reservoir in 1963, causing a massive flooding and around 2,000 deaths, it was drained and consequently seismic activity was almost non-existent. On August 1, 1975, a magnitude 6.1 earthquake at Oroville, California, was attributed to seismicity from a large earth-fill dam and reservoir recently constructed and filled. The filling of the Katse Dam in Lesotho, and the Nurek Dam in Tajikistan is an example. In Zambia, Kariba Lake may have provoked similar effects. The 2008 Sichuan earthquake, which caused approximately 68,000 deaths, is another possible example. An article in "Science" suggested that the construction and filling of the Zipingpu Dam may have triggered the earthquake. Some experts worry that the Three Gorges Dam in China may cause an increase in the frequency and intensity of earthquakes. Mining. Mining affects the stress state of the surrounding rock mass, often causing observable deformation and seismic activity. A small portion of mining-induced events are associated with damage to mine workings and pose a risk to mine workers. These events are known as rock bursts in hard rock mining, or as bumps in underground coal mining. A mine's propensity to burst or bump depends primarily on depth, mining method, extraction sequence and geometry, and the material properties of the surrounding rock. Many underground hardrock mines operate seismic monitoring networks in order to manage bursting risks, and guide mining practices. Seismic networks have recorded a variety of mining-related seismic sources including: Waste disposal wells. Injecting liquids into waste disposal wells, most commonly in disposing of produced water from oil and natural gas wells, has been known to cause earthquakes. This high-saline water is usually pumped into salt water disposal (SWD) wells. The resulting increase in subsurface pore pressure can trigger movement along faults, resulting in earthquakes. One of the first known examples was from the Rocky Mountain Arsenal, northeast of Denver. 
In 1961, waste water was injected into deep strata, and this was later found to have caused a series of earthquakes. The 2011 Oklahoma earthquake near Prague, of magnitude 5.7, occurred after 20 years of injecting waste water into porous deep formations at increasing pressures and saturation. On September 3, 2016, an even stronger earthquake with a magnitude of 5.8 occurred near Pawnee, Oklahoma, followed by nine aftershocks between magnitudes 2.6 and 3.6 within 3½ hours. Tremors were felt as far away as Memphis, Tennessee, and Gilbert, Arizona. Mary Fallin, the Oklahoma governor, declared a local emergency, and the Oklahoma Corporation Commission ordered the shutdown of local disposal wells. Results of ongoing multi-year research on induced earthquakes by the United States Geological Survey (USGS) published in 2015 suggested that most of the significant earthquakes in Oklahoma, such as the 1952 magnitude 5.5 El Reno earthquake, may have been induced by deep injection of waste water by the oil industry. Prior to April 2015 however, the Oklahoma Geological Survey's position was that the quake was most likely due to natural causes and was not triggered by waste injection. This was one of many earthquakes which have affected the Oklahoma region. Since 2009, earthquakes have become hundreds of times more common in Oklahoma, with magnitude 3 events increasing from 1 or 2 per year to 1 or 2 per day. On April 21, 2015, the Oklahoma Geological Survey released a statement reversing its stance on induced earthquakes in Oklahoma: "The OGS considers it very likely that the majority of recent earthquakes, particularly those in central and north-central Oklahoma, are triggered by the injection of produced water in disposal wells." Hydrocarbon extraction and storage. Large-scale fossil fuel extraction can generate earthquakes. Induced seismicity can also be related to underground gas storage operations. The September–October 2013 seismic sequence that occurred 21 km off the coast of the Valencia Gulf (Spain) is probably the best known case of induced seismicity related to Underground Gas Storage operations (the Castor Project). In September 2013, after the injection operations started, the Spanish seismic network recorded a sudden increase of seismicity. More than 1,000 events with magnitudes between 0.7 and 4.3 (the largest earthquake ever associated with gas storage operations), located close to the injection platform, were recorded in about 40 days. Due to significant public concern the Spanish Government halted the operations. By the end of 2014, the Spanish government definitively terminated the concession of the UGS plant. Since January 2015 about 20 people who took part in the transaction and approval of the Castor Project were indicted. Groundwater extraction. The changes in crustal stress patterns caused by the large scale extraction of groundwater have been shown to trigger earthquakes, as in the case of the 2011 Lorca earthquake. Geothermal energy. Enhanced geothermal systems (EGS), a new type of geothermal power technology that does not require natural convective hydrothermal resources, are known to be associated with induced seismicity. EGS involves pumping fluids at pressure to enhance or create permeability through the use of hydraulic fracturing techniques. Hot dry rock (HDR) EGS actively creates geothermal resources through hydraulic stimulation.
Depending on the rock properties, and on injection pressures and fluid volume, the reservoir rock may respond with tensile failure, as is common in the oil and gas industry, or with shear failure of the rock's existing joint set, as is thought to be the main mechanism of reservoir growth in EGS efforts. HDR and EGS systems are currently being developed and tested in Soultz-sous-Forêts (France), Desert Peak and the Geysers (U.S.), Landau (Germany), and Paralana and Cooper Basin (Australia). Induced seismicity events at the Geysers geothermal field in California have been strongly correlated with injection data. The test site at Basel, Switzerland, has been shut down due to induced seismic events. In November 2017 a Mw 5.5 earthquake struck the city of Pohang (South Korea), injuring several people and causing extensive damage. The proximity of the seismic sequence to an EGS site, where stimulation operations had taken place only a few months before the earthquake, raised the possibility that this earthquake had been anthropogenic. According to two different studies it seems plausible that the Pohang earthquake was induced by EGS operations. Researchers at MIT believe that seismicity associated with hydraulic stimulation can be mitigated and controlled through predictive siting and other techniques. With appropriate management, the number and magnitude of induced seismic events can be decreased, significantly reducing the probability of a damaging seismic event. Induced seismicity in Basel led to suspension of its HDR project. A seismic hazard evaluation was then conducted, which resulted in the cancellation of the project in December 2009. Hydraulic fracturing. Hydraulic fracturing is a technique in which high-pressure fluid is injected into low-permeability reservoir rocks in order to induce fractures to increase hydrocarbon production. This process is generally associated with seismic events that are too small to be felt at the surface (with moment magnitudes ranging from −3 to 1), although larger magnitude events are not excluded. For example, several cases of larger magnitude events (M > 4) have been recorded in Canada in the unconventional resources of Alberta and British Columbia. Carbon capture and storage. Risk analysis. Operation of technologies involving long-term geologic storage of waste fluids has been shown to induce seismic activity in nearby areas, and correlation of periods of seismic dormancy with minima in injection volumes and pressures has even been demonstrated for fracking wastewater injection in Youngstown, Ohio. Of particular concern to the viability of carbon dioxide storage from coal-fired power plants and similar endeavors is that the scale of intended CCS projects is much larger in both injection rate and total injection volume than any current or past operation that has already been shown to induce seismicity. As such, extensive modeling must be done of future injection sites in order to assess the risk potential of CCS operations, particularly in relation to the effect of long-term carbon dioxide storage on shale caprock integrity, as the potential for fluid leaks to the surface might be quite high for moderate earthquakes. However, the potential of CCS to induce large earthquakes and CO2 leakage remains a controversial issue. Monitoring.
Since geological sequestration of carbon dioxide has the potential to induce seismicity, researchers have developed methods to monitor and model the risk of injection-induced seismicity in order to better manage the risks associated with this phenomenon. Monitoring can be conducted with measurements from an instrument such as a geophone to measure the movement of the ground. Generally a network of instruments is used around the site of injection, although many current carbon dioxide injection sites use no monitoring devices. Modelling is an important technique for assessing the potential for induced seismicity, and two primary models are used: physical and numerical. A physical model uses measurements from the early stages of a project to forecast how the project will behave once more carbon dioxide is injected. A numerical model, on the other hand, uses numerical methods to simulate the physics of what is happening within the reservoir. Both modelling and monitoring are useful tools for quantifying, better understanding and mitigating the risks associated with injection-induced seismicity. Failure mechanisms due to fluid injection. To assess induced seismicity risks associated with carbon storage, one must understand the mechanisms behind rock failure. The Mohr-Coulomb failure criteria describe shear failure on a fault plane. Most generally, failure will happen on existing faults due to several mechanisms: an increase in shear stress, a decrease in normal stress or a pore pressure increase. The injection of supercritical CO2 will change the stresses in the reservoir as it expands, causing potential failure on nearby faults. Injection of fluids also increases the pore pressures in the reservoir, triggering slip on existing rock weakness planes. The latter is the most common cause of induced seismicity due to fluid injection. The Mohr-Coulomb failure criteria state that formula_0 with formula_1 the critical shear stress leading to failure on a fault, formula_2 the cohesive strength along the fault, formula_3 the normal stress, formula_4 the friction coefficient on the fault plane and formula_5 the pore pressure within the fault. When formula_1 is attained, shear failure occurs and an earthquake can be felt. This process can be represented graphically on a Mohr's circle (a small numerical illustration is given at the end of this article). Comparison of risks due to CCS versus other injection methods. While there is risk of induced seismicity associated with carbon capture and storage underground on a large scale, it is currently a much less serious risk than other injection types. Wastewater injection, hydraulic fracturing, and secondary recovery after oil extraction have all contributed significantly more to induced seismic events than carbon capture and storage in the last several years. There have actually not been any major seismic events associated with carbon injection at this point, whereas there have been recorded seismic occurrences caused by the other injection methods. One such example is massively increased induced seismicity in Oklahoma, USA, caused by injection of huge volumes of wastewater into the Arbuckle Group sedimentary rock. Electromagnetic pulses. It has been shown that high-energy electromagnetic pulses can trigger the release of energy stored by tectonic movements by increasing the rate of local earthquakes, within 2–6 days after the emission by the EMP generators. The energy released is approximately six orders of magnitude larger than the energy of the EM pulses.
The release of tectonic stress by these relatively small triggered earthquakes equals 1–17% of the stress released by a strong earthquake in the area. It has been proposed that strong EM impacts could control seismicity, since during the periods of the experiments, and for a long time afterwards, the seismicity dynamics were much more regular than usual. Risk analysis. Risk factors. Risk is defined as the probability of being impacted by an event in the future. Seismic risk is generally estimated by combining the seismic hazard with the exposure and vulnerability at a site or over a region. The hazard from earthquakes depends on the proximity to potential earthquake sources, the rates of occurrence of earthquakes of different magnitudes for those sources, and the propagation of seismic waves from the sources to the site of interest. Hazard is then represented in terms of the probability of exceeding some level of ground shaking at a site. Earthquake hazards can include ground shaking, liquefaction, surface fault displacement, landslides, tsunamis, and uplift/subsidence for very large events (ML > 6.0). Because induced seismic events, in general, are smaller than ML 5.0 with short durations, the primary concern is ground shaking. Ground shaking. Ground shaking can result in both structural and nonstructural damage to buildings and other structures. It is commonly accepted that structural damage to modern engineered structures happens only in earthquakes larger than ML 5.0. In seismology and earthquake engineering, ground shaking can be measured as peak ground velocity (PGV), peak ground acceleration (PGA) or spectral acceleration (SA) at a building's period of excitation. In regions of historical seismicity where buildings are engineered to withstand seismic forces, moderate structural damage is possible, and very strong shaking can be perceived when PGA is greater than 18–34% of g (the acceleration of gravity). In rare cases, nonstructural damage has been reported in earthquakes as small as ML 3.0. For critical facilities like dams and nuclear plants, the acceptable levels of ground shaking are lower than those for buildings. Probabilistic seismic hazard analysis. "Extended reading – An Introduction to Probabilistic Seismic Hazard Analysis (PSHA)" Probabilistic Seismic Hazard Analysis (PSHA) is a probabilistic framework that accounts for probabilities in earthquake occurrence and the probabilities in ground motion propagation. Using the framework, the probability of exceeding a certain level of ground shaking at a site can be quantified, taking into account all the possible earthquakes (both natural and induced). PSHA methodology is used to determine seismic loads for building codes in both the United States and Canada, and increasingly in other parts of the world, as well as in protecting dams and nuclear plants from the damage of seismic events. Calculating Seismic Risk. Earthquake source characterization. Understanding the geological background of the site is a prerequisite for seismic hazard estimation. Formations of the rocks, subsurface structures, locations of faults, state of stresses and other parameters that contribute to possible seismic events are considered. Records of past earthquakes of the site are also taken into account. Recurrence pattern.
The magnitudes of earthquakes occurring at a source generally follow the Gutenberg-Richter relation, which states that the number of earthquakes decreases exponentially with increasing magnitude, as shown below, formula_6 where formula_7 is the magnitude of seismic events, formula_8 is the number of events with magnitudes greater than formula_7, formula_9 is the rate parameter and formula_10 is the slope. formula_9 and formula_10 vary for different sources. In the case of natural earthquakes, historical seismicity is used to determine these parameters. Using this relationship, the number and probability of earthquakes exceeding a certain magnitude can be predicted under the assumption that earthquakes follow a Poisson process (a short numerical sketch is given at the end of this article). The goal of this analysis, however, is to determine the likelihood of future earthquakes. For induced seismicity, in contrast to natural seismicity, the earthquake rates change over time as a result of changes in human activity, and hence are quantified as non-stationary processes with varying seismicity rates over time. Ground motions. At a given site, the ground motion describes the seismic waves that would have been observed at that site with a seismometer. In order to simplify the representation of an entire seismogram, PGV (peak ground velocity), PGA (peak ground acceleration), spectral acceleration (SA) at different periods, earthquake duration and Arias intensity (IA) are some of the parameters that are used to represent ground shaking. Ground motion propagation from the source to a site for an earthquake of a given magnitude is estimated using ground motion prediction equations (GMPE) that have been developed based on historical records. Since historical records are scarce for induced seismicity, researchers have provided modifications to GMPEs for natural earthquakes in order to apply them to induced earthquakes. Seismic hazard. The PSHA framework uses the distributions of earthquake magnitudes and ground motion propagation to estimate the seismic hazard – the probability of exceeding a certain level of ground shaking (PGA, PGV, SA, IA, etc.) in the future. Depending on the complexity of the probability distributions, either numerical methods or simulations (such as the Monte Carlo method) may be used to estimate seismic hazard. In the case of induced seismicity, the seismic hazard is not constant, but varies with time due to changes in the underlying seismicity rates. Exposure and vulnerability. In order to estimate seismic risk, the hazard is combined with the exposure and vulnerability at a site or in a region. For example, if an earthquake occurs where there are no humans or structures, there would be no human impacts despite any level of seismic hazard. Exposure is defined as the set of entities (such as buildings and people) that exist at a given site or in a region. Vulnerability is defined as the potential of impact to those entities, for example, structural or non-structural damage to a building, and loss of well-being and life for people. Vulnerability can also be represented probabilistically using vulnerability or fragility functions. A vulnerability or fragility function specifies the probability of impact at different levels of ground shaking. In regions like Oklahoma with little historical natural seismicity, structures are not engineered to withstand seismic forces, and as a result are more vulnerable even at low levels of ground shaking, as compared to structures in tectonic regions like California and Japan. Seismic risk.
Seismic risk is defined as the probability of exceeding a certain level of impact in the future. For example, it may estimate the exceedance probability of moderate or more damage to a building in the future. Seismic hazard is combined with the exposure and vulnerability to estimate seismic risk. While numerical methods may be used to estimate risk at one site, simulation-based methods are better suited to estimate seismic risk for a region with a portfolio of entities, in order to correctly account for the correlations in ground shaking, and impacts. In the case of induced seismicity, the seismic risk varies over time due to changes in the seismic hazard. Risk Mitigation. Induced seismicity can cause damage to infrastructure and has been documented to damage buildings in Oklahoma. It can also lead to brine and CO2 leakages. It is easier to predict and mitigate seismicity caused by explosions. Common mitigation strategies include constraining the amount of dynamite used in one single explosion and the locations of the explosions. For injection-related induced seismicity, however, it is still difficult to predict when and where induced seismic events will occur, as well as the magnitudes. Since induced seismic events related to fluid injection are unpredictable, it has garnered more attention from the public. Induced seismicity is only part of the chain reaction from industrial activities that worry the public. Impressions toward induced seismicity are very different between different groups of people. The public tends to feel more negatively towards earthquakes caused by human activities than natural earthquakes. Two major parts of public concern are related to the damages to infrastructure and the well-being of humans. Most induced seismic events are below M 2 and are not able to cause any physical damage. Nevertheless, when the seismic events are felt and cause damages or injuries, questions arise from the public whether it is appropriate to conduct oil and gas operations in those areas. Public perceptions may vary based on the population and tolerance of local people. For example, in the seismically active Geysers geothermal area in Northern California, which is a rural area with a relatively small population, the local population tolerates earthquakes up to M 4.5. Actions have been taken by regulators, industry and researchers. On October 6, 2015, people from industry, government, academia, and the public gathered together to discuss how effective it was to implement a traffic light system or protocol in Canada to help manage risks from induced seismicity. Risk assessment and tolerance for induced seismicity, however, is subjective and shaped by different factors like politics, economics, and understanding from the public. Policymakers have to often balance the interests of industry with the interests of the population. In these situations, seismic risk estimation serves as a critical tool for quantifying future risk, and can be used to regulate earthquake-inducing activities until the seismic risk reaches a maximum acceptable level to the population. Traffic Light System. One of the methods suggested to mitigate seismic risk is a Traffic Light System (TLS), also referred to as Traffic Light Protocol (TLP), which is a calibrated control system that provides continuous and real-time monitoring and management of ground shaking of induced seismicity for specific sites. TLS was first implemented in 2005 in an enhanced geothermal plant in Central America. 
For oil and gas operations, the most widely implemented one is adapted from the system used in the UK. Normally there are two types of TLS – the first one sets several thresholds, usually earthquake local magnitudes (ML) or ground motions, ranging from small to large. If the induced seismicity reaches the smaller thresholds, modifications of the operations are implemented by the operators and the regulators are informed. If the induced seismicity reaches the larger thresholds, operations are shut down immediately. The second type of traffic light system sets only one threshold. If this threshold is reached, the operations are halted. This is also called a "stop light system". Thresholds for the traffic light system vary between and within countries, depending on the area. However, the traffic light system is not able to account for future changes in seismicity. It may take time for changes in human activities to mitigate the seismic activity, and it has been observed that some of the largest induced earthquakes have occurred after stopping fluid injection. Nuclear explosions. Nuclear explosions can cause seismic activity, but according to USGS, the resulting seismic activity is less energetic than the original nuclear blast, and generally does not produce large aftershocks. Nuclear explosions may instead release the elastic strain energy that was stored in the rock, strengthening the initial blast shockwave. U.S. National Research Council report. A 2013 report from the U.S. National Research Council examined the potential for energy technologies—including shale gas recovery, carbon capture and storage, geothermal energy production, and conventional oil and gas development—to cause earthquakes. The report found that only a very small fraction of injection and extraction activities among the hundreds of thousands of energy development sites in the United States have induced seismicity at levels noticeable to the public. However, although scientists understand the general mechanisms that induce seismic events, they are unable to accurately predict the magnitude or occurrence of these earthquakes due to insufficient information about the natural rock systems and a lack of validated predictive models at specific energy development sites. The report noted that hydraulic fracturing has a low risk for inducing earthquakes that can be felt by people, but underground injection of wastewater produced by hydraulic fracturing and other energy technologies has a higher risk of causing such earthquakes. In addition, carbon capture and storage—a technology for storing excess carbon dioxide underground—may have the potential for inducing seismic events, because significant volumes of fluids are injected underground over long periods of time. References.
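To make some of the quantitative ideas in this article concrete, the following Python sketches illustrate, with purely invented numbers, (i) the Mohr-Coulomb slip criterion from the "Failure mechanisms due to fluid injection" subsection, (ii) the Gutenberg-Richter recurrence relation and a Poisson exceedance probability from the "Calculating Seismic Risk" section, combined with an illustrative fragility curve, and (iii) a two-threshold traffic light protocol. None of the numbers are calibrated to any real site, building class or regulation.

    def critical_shear_stress(tau0, mu, sigma_n, pore_pressure):
        """Mohr-Coulomb critical shear stress on a fault plane (MPa)."""
        return tau0 + mu * (sigma_n - pore_pressure)

    # cohesion 1 MPa, friction 0.6, normal stress 50 MPa, applied shear 22 MPa
    tau0, mu, sigma_n, tau_applied = 1.0, 0.6, 50.0, 22.0
    for P in (10.0, 20.0, 30.0):               # rising pore pressure from injection
        tau_c = critical_shear_stress(tau0, mu, sigma_n, P)
        print(P, tau_c, "slip" if tau_applied >= tau_c else "stable")

    import numpy as np

    def gutenberg_richter_rate(M, a, b):
        """Annual rate of events with magnitude >= M: N = 10**(a - b*M)."""
        return 10.0 ** (a - b * M)

    def poisson_exceedance_probability(M, a, b, years=1.0):
        """Probability of at least one event of magnitude >= M in the window,
        assuming earthquake occurrence follows a Poisson process."""
        return 1.0 - np.exp(-gutenberg_richter_rate(M, a, b) * years)

    a, b = 3.0, 1.0                            # must be fitted to a catalogue in practice
    for M in (3.0, 4.0, 5.0):
        print(M, gutenberg_richter_rate(M, a, b),
              poisson_exceedance_probability(M, a, b))

    # Combine a hazard estimate with a fragility curve to get an annual
    # probability of at least moderate damage for one exposed building.
    pga_bins       = np.array([0.05, 0.10, 0.20, 0.40])    # g
    p_shaking      = np.array([0.10, 0.03, 0.008, 0.001])  # annual probability per bin
    p_damage_given = np.array([0.00, 0.02, 0.15, 0.60])    # fragility of the building class
    print(np.sum(p_shaking * p_damage_given))

    def traffic_light_action(magnitude, amber=2.0, red=4.0):
        """Two-threshold traffic light protocol; thresholds vary by jurisdiction."""
        if magnitude >= red:
            return "red: stop operations immediately"
        if magnitude >= amber:
            return "amber: modify operations and inform the regulator"
        return "green: continue normal operations"

    for m in (1.2, 2.6, 4.1):
        print(m, "->", traffic_light_action(m))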
[ { "math_id": 0, "text": "\\tau_c =\\tau_0 +\\mu(\\sigma_n -P)" }, { "math_id": 1, "text": "\\tau_c " }, { "math_id": 2, "text": "\\tau_0 " }, { "math_id": 3, "text": "\\sigma_n" }, { "math_id": 4, "text": "\\mu" }, { "math_id": 5, "text": "P" }, { "math_id": 6, "text": "\\log N(\\geq M)=a-bM" }, { "math_id": 7, "text": "M" }, { "math_id": 8, "text": "N" }, { "math_id": 9, "text": "a" }, { "math_id": 10, "text": "b" } ]
https://en.wikipedia.org/wiki?curid=1016556
10165595
Sturm separation theorem
In mathematics, in the field of ordinary differential equations, the Sturm separation theorem, named after Jacques Charles François Sturm, describes the location of roots of solutions of homogeneous second order linear differential equations. The theorem states that, given two linearly independent solutions of such an equation, the zeros of the two solutions are alternating. Sturm separation theorem. If "u"("x") and "v"("x") are two non-trivial continuous linearly independent solutions to a homogeneous second order linear differential equation with "x"0 and "x"1 being successive roots of "u"("x"), then "v"("x") has exactly one root in the open interval ("x"0, "x"1). It is a special case of the Sturm-Picone comparison theorem. Proof. Since formula_0 and formula_1 are linearly independent, it follows that the Wronskian formula_2 must satisfy formula_3 for all formula_4 where the differential equation is defined, say formula_5. Without loss of generality, suppose that formula_6. Then formula_7 So at formula_8, formula_9, and either formula_10 and formula_11 are both positive or both negative. Without loss of generality, suppose that they are both positive. Now, at formula_12, formula_13, and since formula_8 and formula_12 are successive zeros of formula_14, it follows that formula_15. Thus, to keep formula_16, we must have formula_17. We see this by observing that if formula_18 then formula_14 would be increasing (away from the formula_4-axis), which would never lead to a zero at formula_12. So for a zero to occur at formula_12 we need at most formula_19, i.e., formula_20; and by our result from the Wronskian, the inequality is in fact strict. So somewhere in the interval formula_21 the sign of formula_22 changes. By the intermediate value theorem there exists formula_23 such that formula_24. On the other hand, there can be only one zero in formula_21, because otherwise formula_25 would have two zeros and there would be no zeros of formula_26 in between, and it was just proved that this is impossible.
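The interlacing of zeros is easy to observe numerically. The sketch below solves u'' + q(x)u = 0 for an arbitrarily chosen positive q with two linearly independent sets of initial conditions and counts the zeros of v between consecutive zeros of u; each interval should contain exactly one. The choice of q is purely illustrative.

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(x, y):
        q = 1.0 + 0.3 * np.sin(x)              # any continuous coefficient works
        return [y[1], -q * y[0]]

    xs = np.linspace(0.0, 30.0, 20001)
    u = solve_ivp(rhs, (0, 30), [1.0, 0.0], t_eval=xs, rtol=1e-10, atol=1e-12).y[0]
    v = solve_ivp(rhs, (0, 30), [0.0, 1.0], t_eval=xs, rtol=1e-10, atol=1e-12).y[0]

    def zeros(f):
        s = np.where(np.sign(f[:-1]) * np.sign(f[1:]) < 0)[0]
        return xs[s] - f[s] * (xs[s + 1] - xs[s]) / (f[s + 1] - f[s])   # linear interpolation

    zu, zv = zeros(u), zeros(v)
    for x0, x1 in zip(zu[:-1], zu[1:]):
        count = np.sum((zv > x0) & (zv < x1))
        print(f"({x0:.3f}, {x1:.3f}): {count} zero(s) of v")   # expect 1 each time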
[ { "math_id": 0, "text": "\\displaystyle u" }, { "math_id": 1, "text": "\\displaystyle v" }, { "math_id": 2, "text": "\\displaystyle W[u,v]" }, { "math_id": 3, "text": "W[u,v](x)\\equiv W(x)\\neq 0" }, { "math_id": 4, "text": "\\displaystyle x" }, { "math_id": 5, "text": "\\displaystyle I" }, { "math_id": 6, "text": "W(x)<0\\mbox{ }\\forall\\mbox{ }x\\in I" }, { "math_id": 7, "text": "u(x)v'(x)-u'(x)v(x)\\neq 0." }, { "math_id": 8, "text": "\\displaystyle x=x_0" }, { "math_id": 9, "text": "W(x_0)=-u'\\left(x_0\\right)v\\left(x_0\\right)" }, { "math_id": 10, "text": "u'\\left(x_0\\right)" }, { "math_id": 11, "text": "v\\left(x_0\\right)" }, { "math_id": 12, "text": "\\displaystyle x=x_1" }, { "math_id": 13, "text": "W(x_1)=-u'\\left(x_1\\right)v\\left(x_1\\right)" }, { "math_id": 14, "text": "\\displaystyle u(x)" }, { "math_id": 15, "text": "u'\\left(x_1\\right)<0" }, { "math_id": 16, "text": "\\displaystyle W(x)<0" }, { "math_id": 17, "text": "v\\left(x_1\\right)<0" }, { "math_id": 18, "text": "\\displaystyle u'(x)>0\\mbox{ }\\forall\\mbox{ }x\\in \\left(x_0,x_1\\right]" }, { "math_id": 19, "text": "u'\\left(x_1\\right)=0" }, { "math_id": 20, "text": "u'\\left(x_1\\right)\\leq 0" }, { "math_id": 21, "text": "\\left(x_0,x_1\\right)" }, { "math_id": 22, "text": "\\displaystyle v(x)" }, { "math_id": 23, "text": "x^*\\in\\left(x_0,x_1\\right)" }, { "math_id": 24, "text": "v\\left(x^*\\right)=0" }, { "math_id": 25, "text": "v" }, { "math_id": 26, "text": "u" } ]
https://en.wikipedia.org/wiki?curid=10165595
10167616
Quantile function
Statistical function that defines the quantiles of a probability distribution In probability and statistics, the quantile function outputs the value of a random variable such that its probability is less than or equal to an input probability value. Intuitively, the quantile function associates with a range at and below a probability input the likelihood that a random variable is realized in that range for some probability distribution. It is also called the percentile function (after the percentile), percent-point function, inverse cumulative distribution function (after the cumulative distribution function or c.d.f.) or inverse distribution function. Definition. Strictly monotonic distribution function. With reference to a continuous and strictly monotonic cumulative distribution function formula_0 of a random variable "X", the quantile function formula_1 maps its input "p" to a threshold value "x" so that the probability of "X" being less or equal than "x" is "p". In terms of the distribution function "F", the quantile function "Q" returns the value "x" such that formula_2 which can be written as inverse of the c.d.f. formula_3 General distribution function. In the general case of distribution functions that are not strictly monotonic and therefore do not permit an inverse c.d.f., the quantile is a (potentially) set valued functional of a distribution function F, given by the interval formula_4 It is often standard to choose the lowest value, which can equivalently be written as (using right-continuity of F) formula_5 Here we capture the fact that the quantile function returns the minimum value of x from amongst all those values whose c.d.f value exceeds p, which is equivalent to the previous probability statement in the special case that the distribution is continuous. Note that the infimum function can be replaced by the minimum function, since the distribution function is right-continuous and weakly monotonically increasing. The quantile is the unique function satisfying the Galois inequalities formula_6 if and only if formula_7 If the function F is continuous and strictly monotonically increasing, then the inequalities can be replaced by equalities, and we have: formula_8 In general, even though the distribution function F may fail to possess a left or right inverse, the quantile function Q behaves as an "almost sure left inverse" for the distribution function, in the sense that formula_9 almost surely. Simple example. For example, the cumulative distribution function of Exponential("λ") (i.e. intensity "λ" and expected value (mean) 1/"λ") is formula_10 The quantile function for Exponential("λ") is derived by finding the value of Q for which formula_11: formula_12 for 0 ≤ "p" &lt; 1. The quartiles are therefore: Applications. Quantile functions are used in both statistical applications and Monte Carlo methods. The quantile function is one way of prescribing a probability distribution, and it is an alternative to the probability density function (pdf) or probability mass function, the cumulative distribution function (cdf) and the characteristic function. The quantile function, "Q", of a probability distribution is the inverse of its cumulative distribution function "F". The derivative of the quantile function, namely the quantile density function, is yet another way of prescribing a probability distribution. It is the reciprocal of the pdf composed with the quantile function. Consider a statistical application where a user needs to know key percentage points of a given distribution. 
For example, they require the median and 25% and 75% quartiles as in the example above or 5%, 95%, 2.5%, 97.5% levels for other applications such as assessing the statistical significance of an observation whose distribution is known; see the quantile entry. Before the popularization of computers, it was not uncommon for books to have appendices with statistical tables sampling the quantile function. Statistical applications of quantile functions are discussed extensively by Gilchrist. Monte-Carlo simulations employ quantile functions to produce non-uniform random or pseudorandom numbers for use in diverse types of simulation calculations. A sample from a given distribution may be obtained in principle by applying its quantile function to a sample from a uniform distribution. The demands of simulation methods, for example in modern computational finance, are focusing increasing attention on methods based on quantile functions, as they work well with multivariate techniques based on either copula or quasi-Monte-Carlo methods and Monte Carlo methods in finance. Calculation. The evaluation of quantile functions often involves numerical methods, such as the exponential distribution above, which is one of the few distributions where a closed-form expression can be found (others include the uniform, the Weibull, the Tukey lambda (which includes the logistic) and the log-logistic). When the cdf itself has a closed-form expression, one can always use a numerical root-finding algorithm such as the bisection method to invert the cdf. Other methods rely on an approximation of the inverse via interpolation techniques. Further algorithms to evaluate quantile functions are given in the Numerical Recipes series of books. Algorithms for common distributions are built into many statistical software packages. General methods to numerically compute the quantile functions for general classes of distributions can be found in the following libraries: Quantile functions may also be characterized as solutions of non-linear ordinary and partial differential equations. The ordinary differential equations for the cases of the normal, Student, beta and gamma distributions have been given and solved. Normal distribution. The normal distribution is perhaps the most important case. Because the normal distribution is a location-scale family, its quantile function for arbitrary parameters can be derived from a simple transformation of the quantile function of the standard normal distribution, known as the probit function. Unfortunately, this function has no closed-form representation using basic algebraic functions; as a result, approximate representations are usually used. Thorough composite rational and polynomial approximations have been given by Wichura and Acklam. Non-composite rational approximations have been developed by Shaw. Ordinary differential equation for the normal quantile. A non-linear ordinary differential equation for the normal quantile, "w"("p"), may be given. It is formula_16 with the centre (initial) conditions formula_17 formula_18 This equation may be solved by several methods, including the classical power series approach. From this solutions of arbitrarily high accuracy may be developed (see Steinbrecher and Shaw, 2008). Student's "t"-distribution. This has historically been one of the more intractable cases, as the presence of a parameter, ν, the degrees of freedom, makes the use of rational and other approximations awkward. 
Simple formulas exist when the ν = 1, 2, 4 and the problem may be reduced to the solution of a polynomial when ν is even. In other cases the quantile functions may be developed as power series. The simple cases are as follows: formula_19 formula_20 formula_21 where formula_22 and formula_23 In the above the "sign" function is +1 for positive arguments, −1 for negative arguments and zero at zero. It should not be confused with the trigonometric sine function. Quantile mixtures. Analogously to the mixtures of densities, distributions can be defined as quantile mixtures formula_24, where formula_25, formula_26 are quantile functions and formula_27, formula_26 are the model parameters. The parameters formula_27 must be selected so that formula_28 is a quantile function. Two four-parametric quantile mixtures, the normal-polynomial quantile mixture and the Cauchy-polynomial quantile mixture, are presented by Karvanen. Non-linear differential equations for quantile functions. The non-linear ordinary differential equation given for normal distribution is a special case of that available for any quantile function whose second derivative exists. In general the equation for a quantile, "Q"("p"), may be given. It is formula_29 augmented by suitable boundary conditions, where formula_30 and "ƒ"("x") is the probability density function. The forms of this equation, and its classical analysis by series and asymptotic solutions, for the cases of the normal, Student, gamma and beta distributions has been elucidated by Steinbrecher and Shaw (2008). Such solutions provide accurate benchmarks, and in the case of the Student, suitable series for live Monte Carlo use. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
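As a concrete illustration of the exponential example and of the Monte Carlo use of quantile functions discussed above, the following minimal Python sketch (not part of the original article; the rate λ = 0.5 and all function names are illustrative choices) evaluates Q(p; λ) = −ln(1 − p)/λ at the quartiles and then feeds uniform pseudorandom numbers through Q, which is inverse transform sampling.

import math
import random

def exponential_quantile(p, lam):
    # Quantile function of Exponential(lam); valid for 0 <= p < 1.
    if not 0.0 <= p < 1.0:
        raise ValueError("p must lie in [0, 1)")
    return -math.log(1.0 - p) / lam

lam = 0.5  # rate parameter (assumed value); the mean is 1/lam = 2

# Key percentage points: lower quartile, median, upper quartile.
for p in (0.25, 0.5, 0.75):
    print(f"Q({p}) = {exponential_quantile(p, lam):.4f}")

# Inverse transform sampling: applying Q to Uniform(0, 1) draws gives
# Exponential(lam) draws; the sample mean should be close to 1/lam.
random.seed(0)
samples = [exponential_quantile(random.random(), lam) for _ in range(100_000)]
print("sample mean:", sum(samples) / len(samples))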
[ { "math_id": 0, "text": " F_X\\colon \\mathbb{R} \\to [0,1]" }, { "math_id": 1, "text": "Q\\colon [0, 1] \\to \\mathbb{R}" }, { "math_id": 2, "text": "F_X(x) := \\Pr(X \\le x) = p\\,," }, { "math_id": 3, "text": "Q(p) =F_X^{-1}(p)\\,." }, { "math_id": 4, "text": "Q(p)\\ =\\ \\boldsymbol{\\biggl[}\\ \\sup\\left\\{x \\colon F(x) < p\\right\\}\\ \\boldsymbol{,}\\ \\sup\\left\\{x \\colon F(x) \\le p \\right\\}\\ \\boldsymbol{\\biggr]} ~." }, { "math_id": 5, "text": "Q(p)\\ =\\ \\inf \\left\\{\\ x\\in \\mathbb{R}\\ :\\ p \\le F(x)\\ \\right\\} ~." }, { "math_id": 6, "text": " Q(p) \\le x \\quad" }, { "math_id": 7, "text": "\\quad p \\le F(x) ~." }, { "math_id": 8, "text": " Q = F^{-1} " }, { "math_id": 9, "text": " Q\\bigl( \\ F(X)\\ \\bigr) = X \\quad" }, { "math_id": 10, "text": "F(x;\\lambda) = \\begin{cases}\n1-e^{-\\lambda x} & x \\ge 0, \\\\\n0 & x < 0.\n\\end{cases}" }, { "math_id": 11, "text": "1-e^{-\\lambda Q} =p " }, { "math_id": 12, "text": "Q(p;\\lambda) = \\frac{-\\ln(1-p)}{\\lambda}, \\!" }, { "math_id": 13, "text": "-\\ln(3/4)/\\lambda\\," }, { "math_id": 14, "text": "-\\ln(1/2)/\\lambda\\," }, { "math_id": 15, "text": "-\\ln(1/4)/\\lambda.\\," }, { "math_id": 16, "text": "\\frac{d^2 w}{d p^2} = w \\left(\\frac{d w}{d p}\\right)^2 " }, { "math_id": 17, "text": "w\\left(1/2\\right) = 0,\\, " }, { "math_id": 18, "text": "w'\\left(1/2\\right) = \\sqrt{2\\pi}.\\, " }, { "math_id": 19, "text": "Q(p) = \\tan (\\pi(p-1/2)) \\!" }, { "math_id": 20, "text": "Q(p) = 2(p-1/2)\\sqrt{\\frac{2}{\\alpha}}\\!" }, { "math_id": 21, "text": "Q(p) = \\operatorname{sign}(p-1/2)\\,2\\,\\sqrt{q-1}\\!" }, { "math_id": 22, "text": "q = \\frac{\\cos \\left( \\frac{1}{3} \\arccos \\left( \\sqrt{\\alpha} \\, \\right) \\right)}{\\sqrt{\\alpha}}\\!" }, { "math_id": 23, "text": "\\alpha = 4p(1-p).\\!" }, { "math_id": 24, "text": "Q(p)=\\sum_{i=1}^{m}a_i Q_i(p)" }, { "math_id": 25, "text": "Q_i(p)" }, { "math_id": 26, "text": "i=1,\\ldots,m" }, { "math_id": 27, "text": "a_i" }, { "math_id": 28, "text": "Q(p)" }, { "math_id": 29, "text": "\\frac{d^2 Q}{d p^2} = H(Q) \\left(\\frac{d Q}{d p}\\right)^2 " }, { "math_id": 30, "text": " H(x) = -\\frac{f'(x)}{f(x)} = -\\frac{d}{d x} \\ln f(x) " } ]
https://en.wikipedia.org/wiki?curid=10167616
10167815
Shape moiré
Type of moiré patterns Shape moiré is one type of moiré pattern demonstrating the phenomenon of moiré magnification. 1D shape moiré is the particular simplified case of 2D shape moiré. One-dimensional patterns may appear when superimposing an opaque layer containing tiny horizontal transparent lines on top of a layer containing a complex shape which is periodically repeating along the vertical axis. Description. Shape moiré is sometimes referred to as band moiré. The opaque layer with transparent lines is called the revealing layer. The layer containing the periodically repeating shapes is called the base layer. The period of shapes in the base layer is denoted as "p"b. The period of transparent lines in the revealing layer is denoted as "p"r. The periods of both layers must be sufficiently close. The superimposition image reveals the shapes of the base layer stretched along the vertical axis. The magnified shapes appear periodically along the vertical axis. The dimensions along the horizontal axis are not changed. If the complex shape of the base layer is a sequence of symbols (e.g. a horizontal text) compressed along the vertical axis, then the superimposition of the revealing layer can restore the original proportions of these symbols. The size along the vertical axis, "p"m, of the magnified optical shape is expressed by the following formula: formula_0 Negative values of "p"m signify a mirrored appearance of the stretched shapes (the magnified shapes are inverted along the vertical axis). When the revealing layer is moved along the vertical axis, the magnified shapes move along the vertical axis at a faster speed. The speedup factor is expressed by the following formula: formula_1 Negative values of "v"m / "v"r signify movement of the optical shapes in the reverse direction. Examples. When "p"r &gt; "p"b, the magnified shapes appear normally, but they move in the reverse direction compared to the movement of the revealing layer. See the figure below: When "p"r &lt; "p"b, the magnified shapes appear inverted along the vertical axis, but they move in the same direction as the revealing layer. See the figure below: Line moiré. Line moiré can be considered a particular case of shape moiré where the shape embedded in the base layer is simply a straight or curved line.
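As a concrete illustration, the short Python sketch below simply evaluates the two formulas above for a pair of example periods (the numerical values are arbitrary choices, not taken from the article); the outputs are to be read according to the sign conventions just described.

def magnified_period(p_b, p_r):
    # p_m = -(p_b * p_r) / (p_b - p_r), vertical period of the magnified shape.
    return -(p_b * p_r) / (p_b - p_r)

def speedup_factor(p_b, p_r):
    # v_m / v_r = -p_b / (p_b - p_r), ratio of moire speed to layer speed.
    return -p_b / (p_b - p_r)

# Example periods (arbitrary units): revealing-layer period slightly above,
# then slightly below, the base-layer period.
for p_b, p_r in [(1.00, 1.05), (1.00, 0.95)]:
    print(f"p_b = {p_b}, p_r = {p_r}: "
          f"p_m = {magnified_period(p_b, p_r):+.1f}, "
          f"v_m/v_r = {speedup_factor(p_b, p_r):+.1f}")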
[ { "math_id": 0, "text": "p_m=-\\frac{p_b \\cdot p_r}{p_b-p_r}" }, { "math_id": 1, "text": "\\frac{v_m}{v_r}=-\\frac{p_b}{p_b-p_r}" } ]
https://en.wikipedia.org/wiki?curid=10167815
101700
Diophantine set
Solution of some Diophantine equation In mathematics, a Diophantine equation is an equation of the form "P"("x"1, ..., "x""j", "y"1, ..., "y""k") = 0 (usually abbreviated "P"("x", "y") = 0) where "P"("x", "y") is a polynomial with integer coefficients, where "x"1, ..., "x""j" indicate parameters and "y"1, ..., "y""k" indicate unknowns. A Diophantine set is a subset "S" of formula_0, the set of all "j"-tuples of natural numbers, so that for some Diophantine equation "P"("x", "y") = 0, formula_1 That is, a parameter value is in the Diophantine set "S" if and only if the associated Diophantine equation is satisfiable under that parameter value. The use of natural numbers both in "S" and the existential quantification merely reflects the usual applications in computability theory and model theory. It does not matter whether natural numbers refer to the set of nonnegative integers or positive integers since the two definitions for Diophantine sets are equivalent. We can also equally well speak of Diophantine sets of integers and freely replace quantification over natural numbers with quantification over the integers. Also it is sufficient to assume "P" is a polynomial over formula_2 and multiply "P" by the appropriate denominators to yield integer coefficients. However, whether quantification over rationals can also be substituted for quantification over the integers is a notoriously hard open problem. The MRDP theorem (so named for the initials of the four principal contributors to its solution) states that a set of integers is Diophantine if and only if it is computably enumerable. A set of integers "S" is computably enumerable if and only if there is an algorithm that, when given an integer, halts if that integer is a member of "S" and runs forever otherwise. This means that the concept of general Diophantine set, apparently belonging to number theory, can be taken rather in logical or computability-theoretic terms. This is far from obvious, however, and represented the culmination of some decades of work. Matiyasevich's completion of the MRDP theorem settled Hilbert's tenth problem. Hilbert's tenth problem was to find a general algorithm that can decide whether a given Diophantine equation has a solution among the integers. While Hilbert's tenth problem is not a formal mathematical statement as such, the nearly universal acceptance of the (philosophical) identification of a decision algorithm with a total computable predicate allows us to use the MRDP theorem to conclude that the tenth problem is unsolvable. Examples. In the following examples, the natural numbers refer to the set of positive integers. The equation formula_3 is an example of a Diophantine equation with a parameter "x" and unknowns "y"1 and "y"2. The equation has a solution in "y"1 and "y"2 precisely when "x" can be expressed as a product of two integers greater than 1, in other words "x" is a composite number. Namely, this equation provides a Diophantine definition of the set {4, 6, 8, 9, 10, 12, 14, 15, 16, 18, ...} consisting of the composite numbers. Other examples of Diophantine definitions are as follows: Matiyasevich's theorem. Matiyasevich's theorem, also called the Matiyasevich–Robinson–Davis–Putnam or MRDP theorem, says: Every computably enumerable set is Diophantine, and the converse. A set "S" of integers is computably enumerable if there is an algorithm such that: For each integer input "n", if "n" is a member of "S", then the algorithm eventually halts; otherwise it runs forever. 
That is equivalent to saying there is an algorithm that runs forever and lists the members of "S". A set "S" is Diophantine precisely if there is some polynomial with integer coefficients "f"("n", "x"1, ..., "x""k") such that an integer "n" is in "S" if and only if there exist some integers "x"1, ..., "x""k" such that "f"("n", "x"1, ..., "x""k") = 0. Conversely, every Diophantine set is computably enumerable: consider a Diophantine equation "f"("n", "x"1, ..., "x""k") = 0. Now we make an algorithm that simply tries all possible values for "n", "x"1, ..., "x""k" (in, say, some simple order consistent with the increasing order of the sum of their absolute values), and prints "n" every time "f"("n", "x"1, ..., "x""k") = 0. This algorithm will obviously run forever and will list exactly the "n" for which "f"("n", "x"1, ..., "x""k") = 0 has a solution in "x"1, ..., "x""k". Proof technique. Yuri Matiyasevich utilized a method involving Fibonacci numbers, which grow exponentially, in order to show that solutions to Diophantine equations may grow exponentially. Earlier work by Julia Robinson, Martin Davis and Hilary Putnam – hence, MRDP – had shown that this suffices to show that every computably enumerable set is Diophantine. Application to Hilbert's tenth problem. Hilbert's tenth problem asks for a general algorithm deciding the solvability of Diophantine equations. The conjunction of Matiyasevich's result with the fact that most recursively enumerable languages are not decidable implies that a solution to Hilbert's tenth problem is impossible. Refinements. Later work has shown that the question of solvability of a Diophantine equation is undecidable even if the equation only has 9 natural number variables (Matiyasevich, 1977) or 11 integer variables (Zhi Wei Sun, 1992). Further applications. Matiyasevich's theorem has since been used to prove that many problems from calculus and differential equations are unsolvable. One can also derive the following stronger form of Gödel's first incompleteness theorem from Matiyasevich's result: "Corresponding to any given consistent axiomatization of number theory, one can explicitly construct a Diophantine equation that has no solutions, but such that this fact cannot be proved within the given axiomatization." According to the incompleteness theorems, a powerful-enough consistent axiomatic theory is incomplete, meaning the truth of some of its propositions cannot be established within its formalism. The statement above says that this incompleteness must include the solvability of a diophantine equation, assuming that the theory in question is a number theory.
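The direction "Diophantine implies computably enumerable" can be made concrete with a short brute-force search. The Python sketch below (an illustration, not from the article) enumerates parameter values of the example polynomial x − (y1 + 1)(y2 + 1) by increasing sum of the variables, and therefore prints composite numbers; the search bound exists only to keep the run finite.

def P(x, y1, y2):
    # The example polynomial: P(x, y1, y2) = x - (y1 + 1)(y2 + 1).
    return x - (y1 + 1) * (y2 + 1)

def enumerate_parameters(bound):
    # List the parameters x (positive integers) for which P(x, y1, y2) = 0 has
    # a solution in positive integers, searching tuples (x, y1, y2) in order of
    # increasing sum x + y1 + y2, as described in the text.
    found = set()
    for total in range(3, bound):
        for x in range(1, total - 1):
            for y1 in range(1, total - x):
                y2 = total - x - y1
                if P(x, y1, y2) == 0:
                    found.add(x)
    return sorted(found)

print(enumerate_parameters(30))
# [4, 6, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21] -- the composite numbers that
# this bounded search can reach; a larger bound reveals more of the set.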
[ { "math_id": 0, "text": "\\mathbb{N}^j" }, { "math_id": 1, "text": "\\bar{x} \\in S \\iff (\\exists \\bar{y} \\in \\mathbb{N}^{k})(P(\\bar{x},\\bar{y})=0) ." }, { "math_id": 2, "text": "\\mathbb{Q}" }, { "math_id": 3, "text": "x = (y_1 + 1)(y_2 + 1)" }, { "math_id": 4, "text": "x = y_1^2 + y_2^2" }, { "math_id": 5, "text": "\\mathbb{N}" }, { "math_id": 6, "text": "y_1^2 - xy_2^2 = 1" }, { "math_id": 7, "text": "x_1 + y = x_2" } ]
https://en.wikipedia.org/wiki?curid=101700
1017002
Upper topology
In mathematics, the upper topology on a partially ordered set "X" is the coarsest topology in which the closure of a singleton formula_0 is the order section formula_1 for each formula_2 If formula_3 is a partial order, the upper topology is the least order consistent topology in which all open sets are up-sets. However, not all up-sets must necessarily be open sets. The lower topology induced by the preorder is defined similarly in terms of the down-sets. The preorder inducing the upper topology is its specialization preorder, but the specialization preorder of the lower topology is opposite to the inducing preorder. The real upper topology is most naturally defined on the upper-extended real line formula_4 by the system formula_5 of open sets. Similarly, the real lower topology formula_6 is naturally defined on the lower real line formula_7 A real function on a topological space is upper semi-continuous if and only if it is lower-continuous, i.e. is continuous with respect to the lower topology on the lower-extended line formula_8 Similarly, a function into the upper real line is lower semi-continuous if and only if it is upper-continuous, i.e. is continuous with respect to the upper topology on formula_9 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
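Since the definition is abstract, a small finite computation may help. The Python sketch below (my own illustration; the six-element poset, the divisibility order, and all names are assumptions) generates the upper topology from the complements of the principal down-sets and checks the defining property that the closure of each singleton {a} is the down-set of a.

from itertools import combinations

points = [1, 2, 3, 4, 6, 12]
leq = lambda x, y: y % x == 0          # x <= y  iff  x divides y

down = {a: frozenset(x for x in points if leq(x, a)) for a in points}
subbasis = {frozenset(points) - d for d in down.values()}

# Generate the topology: close the subbasis under finite intersections,
# then take all unions (including the empty union).
basis = {frozenset(points)}
for r in range(1, len(subbasis) + 1):
    for combo in combinations(subbasis, r):
        basis.add(frozenset.intersection(*combo))
opens = set()
for r in range(0, len(basis) + 1):
    for combo in combinations(basis, r):
        opens.add(frozenset().union(*combo) if combo else frozenset())

def closure(s):
    # Smallest closed set (complement of an open set) containing s.
    closed = [frozenset(points) - u for u in opens]
    return frozenset.intersection(*[c for c in closed if s <= c])

for a in points:
    assert closure(frozenset({a})) == down[a]
print("closure of {a} equals the down-set of a for every point: OK")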
[ { "math_id": 0, "text": "\\{a\\}" }, { "math_id": 1, "text": "a] = \\{x \\leq a\\}" }, { "math_id": 2, "text": "a\\in X." }, { "math_id": 3, "text": "\\leq" }, { "math_id": 4, "text": "(-\\infty, +\\infty] = \\R \\cup \\{+\\infty\\}" }, { "math_id": 5, "text": "\\{(a, +\\infty] : a \\in \\R \\cup \\{\\pm\\infty\\}\\}" }, { "math_id": 6, "text": "\\{[-\\infty, a) : a \\in \\R \\cup \\{\\pm\\infty\\}\\}" }, { "math_id": 7, "text": "[-\\infty, +\\infty) = \\R \\cup \\{-\\infty\\}." }, { "math_id": 8, "text": "{[-\\infty, +\\infty)}." }, { "math_id": 9, "text": "{(-\\infty, +\\infty]}." } ]
https://en.wikipedia.org/wiki?curid=1017002
10171509
Hirschberg's algorithm
Algorithm for aligning two sequences In computer science, Hirschberg's algorithm, named after its inventor, Dan Hirschberg, is a dynamic programming algorithm that finds the optimal sequence alignment between two strings. Optimality is measured with the Levenshtein distance, defined to be the sum of the costs of insertions, replacements, deletions, and null actions needed to change one string into the other. Hirschberg's algorithm is simply described as a more space-efficient version of the Needleman–Wunsch algorithm that uses divide and conquer. Hirschberg's algorithm is commonly used in computational biology to find maximal global alignments of DNA and protein sequences. Algorithm information. Hirschberg's algorithm is a generally applicable algorithm for optimal sequence alignment. BLAST and FASTA are suboptimal heuristics. If formula_0 and formula_1 are strings, where formula_2 and formula_3, the Needleman–Wunsch algorithm finds an optimal alignment in formula_4 time, using formula_4 space. Hirschberg's algorithm is a clever modification of the Needleman–Wunsch Algorithm, which still takes formula_4 time, but needs only formula_5 space and is much faster in practice. One application of the algorithm is finding sequence alignments of DNA or protein sequences. It is also a space-efficient way to calculate the longest common subsequence between two sets of data such as with the common diff tool. The Hirschberg algorithm can be derived from the Needleman–Wunsch algorithm by observing that: Algorithm description. formula_11 denotes the "i"-th character of formula_0, where formula_12. formula_13 denotes a substring of size formula_14, ranging from the "i"-th to the "j"-th character of formula_0. formula_15 is the reversed version of formula_0. formula_0 and formula_1 are sequences to be aligned. Let formula_16 be a character from formula_0, and formula_17 be a character from formula_1. We assume that formula_18, formula_19 and formula_20 are well defined integer-valued functions. These functions represent the cost of deleting formula_16, inserting formula_17, and replacing formula_16 with formula_17, respectively. We define formula_21, which returns the last line of the Needleman–Wunsch score matrix formula_22: function NWScore(X, Y) Score(0, 0) = 0 // 2 * (length(Y) + 1) array for j = 1 to length(Y) Score(0, j) = Score(0, j - 1) + Ins(Yj) for i = 1 to length(X) // Init array Score(1, 0) = Score(0, 0) + Del(Xi) for j = 1 to length(Y) scoreSub = Score(0, j - 1) + Sub(Xi, Yj) scoreDel = Score(0, j) + Del(Xi) scoreIns = Score(1, j - 1) + Ins(Yj) Score(1, j) = max(scoreSub, scoreDel, scoreIns) end // Copy Score[1] to Score[0] Score(0, :) = Score(1, :) end for j = 0 to length(Y) LastLine(j) = Score(1, j) return LastLine Note that at any point, formula_23 only requires the two most recent rows of the score matrix. Thus, formula_23 is implemented in formula_24 space. The Hirschberg algorithm follows: function Hirschberg(X, Y) Z = " W = " if length(X) == 0 for i = 1 to length(Y) Z = Z + '-' W = W + Yi end else if length(Y) == 0 for i = 1 to length(X) Z = Z + Xi W = W + '-' end else if length(X) == 1 or length(Y) == 1 (Z, W) = NeedlemanWunsch(X, Y) else xlen = length(X) xmid = length(X) / 2 ylen = length(Y) ScoreL = NWScore(X1:xmid, Y) ScoreR = NWScore(rev(Xxmid+1:xlen), rev(Y)) ymid = arg max ScoreL + rev(ScoreR) (Z,W) = Hirschberg(X1:xmid, y1:ymid) + Hirschberg(Xxmid+1:xlen, Yymid+1:ylen) end return (Z, W) In the context of observation (2), assume that formula_25 is a partition of formula_0. 
Index formula_26 is computed such that formula_27 and formula_28. Example. Let formula_29 The optimal alignment is given by W = AGTACGCA Z = --TATGC- Indeed, this can be verified by backtracking its corresponding Needleman–Wunsch matrix: T A T G C 0 -2 -4 -6 -8 -10 A -2 -1 0 -2 -4 -6 G -4 -3 -2 -1 0 -2 T -6 -2 -4 0 -2 -1 A -8 -4 0 -2 -1 -3 C -10 -6 -2 -1 -3 1 G -12 -8 -4 -3 1 -1 C -14 -10 -6 -5 -1 3 A -16 -12 -8 -7 -3 1 One starts with the top level call to formula_30, which splits the first argument in half: formula_31. The call to formula_32 produces the following matrix: T A T G C 0 -2 -4 -6 -8 -10 A -2 -1 0 -2 -4 -6 G -4 -3 -2 -1 0 -2 T -6 -2 -4 0 -2 -1 A -8 -4 0 -2 -1 -3 Likewise, formula_33 generates the following matrix: C G T A T 0 -2 -4 -6 -8 -10 A -2 -1 -3 -5 -4 -6 C -4 0 -2 -4 -6 -5 G -6 -2 2 0 -2 -4 C -8 -4 0 1 -1 -3 Their last lines (after reversing the latter) and sum of those are respectively ScoreL = [ -8 -4 0 -2 -1 -3 ] rev(ScoreR) = [ -3 -1 1 0 -4 -8 ] Sum = [-11 -5 1 -2 -5 -11] The maximum (shown in bold) appears at codice_0, producing the partition formula_34. The entire Hirschberg recursion (which we omit for brevity) produces the following tree: (AGTACGCA,TATGC) (AGTA,TA) (CGCA,TGC) (AG, ) (TA,TA) (CG,TG) (CA,C) (T,T) (A,A) (C,T) (G,G) The leaves of the tree contain the optimal alignment. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
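The pseudocode above translates directly into Python. The sketch below is such a transcription (a rough illustration, not a reference implementation); it uses the scoring of the worked example (+2 match, −1 mismatch, −2 gap), the function names are my own, and the quadratic-space Needleman–Wunsch routine is used only for the trivial base cases.

MATCH, MISMATCH, GAP = 2, -1, -2

def sub(a, b):
    return MATCH if a == b else MISMATCH

def nw_score(x, y):
    # Last row of the Needleman-Wunsch score matrix, kept in linear space.
    prev = [j * GAP for j in range(len(y) + 1)]
    for i in range(1, len(x) + 1):
        curr = [prev[0] + GAP]
        for j in range(1, len(y) + 1):
            curr.append(max(prev[j - 1] + sub(x[i - 1], y[j - 1]),
                            prev[j] + GAP,          # delete x[i-1]
                            curr[j - 1] + GAP))     # insert y[j-1]
        prev = curr
    return prev

def needleman_wunsch(x, y):
    # Full quadratic-space alignment, used only for the trivial base cases.
    score = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i in range(len(x) + 1):
        score[i][0] = i * GAP
    for j in range(len(y) + 1):
        score[0][j] = j * GAP
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            score[i][j] = max(score[i - 1][j - 1] + sub(x[i - 1], y[j - 1]),
                              score[i - 1][j] + GAP,
                              score[i][j - 1] + GAP)
    # Trace back from the bottom-right corner.
    z, w, i, j = "", "", len(x), len(y)
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                score[i][j] == score[i - 1][j - 1] + sub(x[i - 1], y[j - 1])):
            z, w, i, j = x[i - 1] + z, y[j - 1] + w, i - 1, j - 1
        elif i > 0 and score[i][j] == score[i - 1][j] + GAP:
            z, w, i = x[i - 1] + z, "-" + w, i - 1
        else:
            z, w, j = "-" + z, y[j - 1] + w, j - 1
    return z, w

def hirschberg(x, y):
    if len(x) == 0:
        return "-" * len(y), y
    if len(y) == 0:
        return x, "-" * len(x)
    if len(x) == 1 or len(y) == 1:
        return needleman_wunsch(x, y)
    xmid = len(x) // 2
    score_l = nw_score(x[:xmid], y)
    score_r = nw_score(x[xmid:][::-1], y[::-1])
    ymid = max(range(len(y) + 1),
               key=lambda j: score_l[j] + score_r[len(y) - j])
    zl, wl = hirschberg(x[:xmid], y[:ymid])
    zr, wr = hirschberg(x[xmid:], y[ymid:])
    return zl + zr, wl + wr

z, w = hirschberg("AGTACGCA", "TATGC")
print(z)
print(w)
# Prints one optimal alignment; with this scoring, e.g. AGTACGCA over --TATGC-
# as in the worked example (ties may select another equally good alignment).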
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "Y" }, { "math_id": 2, "text": "\\operatorname{length}(X) = n" }, { "math_id": 3, "text": "\\operatorname{length}(Y) = m" }, { "math_id": 4, "text": "O(nm)" }, { "math_id": 5, "text": "O(\\min\\{n,m\\})" }, { "math_id": 6, "text": "(Z, W) = \\operatorname{NW}(X, Y)" }, { "math_id": 7, "text": "(X, Y)" }, { "math_id": 8, "text": "X = X^l + X^r" }, { "math_id": 9, "text": "Y^l + Y^r" }, { "math_id": 10, "text": "\\operatorname{NW}(X, Y) = \\operatorname{NW}(X^l, Y^l) + \\operatorname{NW}(X^r, Y^r)" }, { "math_id": 11, "text": "X_i" }, { "math_id": 12, "text": "1 \\leqslant i \\leqslant \\operatorname{length}(X)" }, { "math_id": 13, "text": "X_{i:j}" }, { "math_id": 14, "text": "j - i + 1" }, { "math_id": 15, "text": "\\operatorname{rev}(X)" }, { "math_id": 16, "text": "x" }, { "math_id": 17, "text": "y" }, { "math_id": 18, "text": "\\operatorname{Del}(x)" }, { "math_id": 19, "text": "\\operatorname{Ins}(y)" }, { "math_id": 20, "text": "\\operatorname{Sub}(x,y)" }, { "math_id": 21, "text": "\\operatorname{NWScore}(X,Y)" }, { "math_id": 22, "text": "\\mathrm{Score}(i, j)" }, { "math_id": 23, "text": "\\operatorname{NWScore}" }, { "math_id": 24, "text": "O(\\min\\{\\operatorname{length}(X),\\operatorname{length}(Y)\\})" }, { "math_id": 25, "text": "X^l + X^r" }, { "math_id": 26, "text": "\\mathrm{ymid}" }, { "math_id": 27, "text": "Y^l = Y_{1:\\mathrm{ymid}}" }, { "math_id": 28, "text": "Y^r = Y_{\\mathrm{ymid}+1:\\operatorname{length}(Y)}" }, { "math_id": 29, "text": "\n \\begin{align}\n X &= \\text{AGTACGCA},\\\\\n Y &= \\text{TATGC},\\\\\n \\operatorname{Del}(x) &= -2,\\\\\n \\operatorname{Ins}(y) &= -2,\\\\\n \\operatorname{Sub}(x,y) &= \\begin{cases} +2, & \\text{if } x = y \\\\ -1, & \\text{if } x \\neq y.\\end{cases}\n \\end{align}\n" }, { "math_id": 30, "text": "\\operatorname{Hirschberg}(\\text{AGTACGCA}, \\text{TATGC})" }, { "math_id": 31, "text": "X = \\text{AGTA} + \\text{CGCA}" }, { "math_id": 32, "text": "\\operatorname{NWScore}(\\text{AGTA}, Y)" }, { "math_id": 33, "text": "\\operatorname{NWScore}(\\operatorname{rev}(\\text{CGCA}), \\operatorname{rev}(Y))" }, { "math_id": 34, "text": "Y = \\text{TA} + \\text{TGC}" } ]
https://en.wikipedia.org/wiki?curid=10171509
10171965
Joel Bowman
American chemist Joel Mark Bowman is an American physical chemist and educator. He is the Samuel Candler Dobbs Professor of Theoretical Chemistry at Emory University. Publications, honors and awards. Bowman is the author or co-author of more than 600 publications and is a member of the International Academy of Quantum Molecular Sciences. He received the Herschbach Medal. He is a fellow of the American Physical Society and of the American Association for the Advancement of Science. Research interests. His research interests are in basic theories of chemical reactivity. His AAAS fellow citation cited him “for distinguished contributions to reduced dimensionality quantum approaches to reaction rates and to the formulation and application of self-consistent field approaches to molecular vibrations.” Bowman is well known for his contributions to simulating potential energy surfaces for polyatomic molecules and clusters. Approximately fifty potential energy surfaces for molecules and clusters have been simulated employing his permutationally invariant polynomial method. Permutationally invariant polynomial (PIP) method. Simulating potential energy surfaces (PESs) for reactive and non-reactive systems is of broad utility in theoretical and computational chemistry. Development of global PESs, or surfaces spanning a broad range of nuclear coordinates, is particularly necessary for certain applications, including molecular dynamics and Monte Carlo simulations and quantum reactive scattering calculations. Rather than utilizing all of the internuclear distances, theoretical chemists often construct analytical equations for PESs using a set of internal coordinates. For systems containing more than four atoms, the number of internuclear distances exceeds 3"N" − 6 (the number of internal degrees of freedom of a nonlinear molecule with "N" atoms). As an example, Collins and his team developed a method employing different sets of 3"N" − 6 internal coordinates, which they applied to analyze the H + CH4 reaction. They addressed permutational symmetry by replicating data for permutations of the H atoms. In contrast to this approach, the PIP method uses linear least-squares fitting to accurately reproduce tens of thousands of electronic energies for both reactive and non-reactive systems. Methodology. Generally, the functions used in fitting potential energy surfaces to experimental and/or electronic structure theory data are based on the choice of coordinates. Most of the chosen coordinates are bond stretches, valence and dihedral angles, or other curvilinear coordinates such as the Jacobi coordinates or polyspherical coordinates. There are advantages to each of these choices. In the PIP approach, the N(N − 1)/2 internuclear distances are utilized. This number of variables is equal to 3"N" − 6 (or 3"N" − 5 = 1 for diatomic molecules) for "N" = 3, 4 and differs for "N" ≥ 5. Thus, "N" = 5 is an important boundary that affects the choice of coordinates. An advantage of employing this variable set is its inherent closure under all permutations of atoms. This implies that regardless of the order in which atoms are permuted, the resulting set of variables remains unchanged. However, the main focus pertains to permutations involving identical atoms, as the PES must be invariant under such transformations.
PIPs built from Morse variables of the form formula_0 (where formula_1 is the distance between atoms formula_2 and formula_3, and formula_4 is a range parameter) offer a method for mathematically characterizing high-dimensional PESs. With the range parameter in the Morse variable fixed, the PES is determined through linear least-squares fitting of computed electronic energies for the system at various structural arrangements. The adoption of a permutationally invariant fitting basis, whether in the form of all internuclear distances or transformed variables such as Morse variables, makes it possible to obtain accurate fits for molecules and clusters. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
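The idea behind the PIP basis can be illustrated with a toy Python sketch (my own illustration, not Bowman's production codes; the range parameter, geometry, and all names are assumed values) for an A2B molecule such as water: Morse variables are formed from the three internuclear distances, and monomials are symmetrized over the permutation of the two identical A atoms, so the basis does not change when those atoms are swapped.

import numpy as np

A_RANGE = 2.0  # Morse range parameter (assumed value, same unit as coordinates)

def morse_variables(coords):
    # coords: (3, 3) array with rows = positions of atoms (A1, A2, B).
    r12 = np.linalg.norm(coords[0] - coords[1])
    r13 = np.linalg.norm(coords[0] - coords[2])
    r23 = np.linalg.norm(coords[1] - coords[2])
    return np.exp(-np.array([r12, r13, r23]) / A_RANGE)

def pip_basis(y):
    # A few low-order monomials symmetrized over the A1 <-> A2 swap,
    # which exchanges y[1] and y[2] and leaves y[0] fixed.
    y12, y13, y23 = y
    return np.array([
        y12,
        y13 + y23,
        y12 ** 2,
        y12 * (y13 + y23),
        y13 * y23,
        y13 ** 2 + y23 ** 2,
    ])

# Invariance check: swapping the two identical atoms leaves the basis unchanged.
geom = np.array([[0.0, 0.76, -0.59],   # A1 (rough water-like geometry)
                 [0.0, -0.76, -0.59],  # A2
                 [0.0, 0.0, 0.0]])     # B
swapped = geom[[1, 0, 2]]
assert np.allclose(pip_basis(morse_variables(geom)),
                   pip_basis(morse_variables(swapped)))
print("PIP basis is invariant under permutation of identical atoms")

# A PES fit would then take the form E ~ sum_k c_k * basis_k, with the
# coefficients c_k obtained by linear least squares against ab initio energies,
# e.g. via np.linalg.lstsq with one pip_basis row per computed geometry.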
[ { "math_id": 0, "text": "y_{ij}=exp(-r_{ij}/a)" }, { "math_id": 1, "text": "r_{ij}" }, { "math_id": 2, "text": "i" }, { "math_id": 3, "text": "j" }, { "math_id": 4, "text": "a" } ]
https://en.wikipedia.org/wiki?curid=10171965
10172238
Nanofluidics
Dynamics of fluids confined in nanoscale structures Nanofluidics is the study of the behavior, manipulation, and control of fluids that are confined to structures of nanometer (typically 1–100 nm) characteristic dimensions (1 nm = 10−9 m). Fluids confined in these structures exhibit physical behaviors not observed in larger structures, such as those of micrometer dimensions and above, because the characteristic physical scaling lengths of the fluid, ("e.g." Debye length, hydrodynamic radius) very closely coincide with the dimensions of the nanostructure itself. When structures approach the size regime corresponding to molecular scaling lengths, new physical constraints are placed on the behavior of the fluid. For example, these physical constraints induce regions of the fluid to exhibit new properties not observed in bulk, "e.g." vastly increased viscosity near the pore wall; they may effect changes in thermodynamic properties and may also alter the chemical reactivity of species at the fluid-solid interface. A particularly relevant and useful example is displayed by electrolyte solutions confined in nanopores that contain surface charges, "i.e." at electrified interfaces, as shown in the nanocapillary array membrane (NCAM) in the accompanying figure. All electrified interfaces induce an organized charge distribution near the surface known as the electrical double layer. In pores of nanometer dimensions the electrical double layer may completely span the width of the nanopore, resulting in dramatic changes in the composition of the fluid and the related properties of fluid motion in the structure. For example, the drastically enhanced surface-to-volume ratio of the pore results in a preponderance of counter-ions ("i.e." ions charged oppositely to the static wall charges) over co-ions (possessing the same sign as the wall charges), in many cases to the near-complete exclusion of co-ions, such that only one ionic species exists in the pore. This can be used for manipulation of species with selective polarity along the pore length to achieve unusual fluidic manipulation schemes not possible in micrometer and larger structures. Theory. In 1965, Rice and Whitehead published the seminal contribution to the theory of the transport of electrolyte solutions in long (ideally infinite) nanometer-diameter capillaries. Briefly, the potential, "ϕ", at a radial distance, "r", is given by the Poisson-Boltzmann equation, formula_0 where "κ" is the inverse Debye length, formula_1 determined by the ion number density, "n", the dielectric constant, "ε", the Boltzmann constant, "k", and the temperature, "T". Knowing the potential, "φ(r)", the charge density can then be recovered from the Poisson equation, whose solution may be expressed as a modified Bessel function of the first kind, "I0", and scaled to the capillary radius, "a". An equation of motion under combined pressure and electrically-driven flow can then be written, formula_2 where "η" is the viscosity, "dp/dz" is the pressure gradient, and "Fz" is the body force driven by the action of the applied electric field, "Ez", on the net charge density in the double layer. When there is no applied pressure, the radial distribution of the velocity is given by, formula_3 From the equation above, it follows that fluid flow in nanocapillaries is governed by the "κa" product, that is, the relative sizes of the Debye length and the pore radius. 
By adjusting these two parameters and the surface charge density of the nanopores, fluid flow can be manipulated as desired. Fabrication. Nanostructures can be fabricated as single cylindrical channels, nanoslits, or nanochannel arrays from materials such as silicon, glass, polymers (e.g. PMMA, PDMS, PCTE) and synthetic vesicles. Standard photolithography, bulk or surface micromachining, replication techniques (embossing, printing, casting and injection molding), and nuclear track or chemical etching are commonly used to fabricate structures which exhibit characteristic nanofluidic behavior. Applications. Because of the small size of the fluidic conduits, nanofluidic structures are naturally applied in situations demanding that samples be handled in exceedingly small quantities, including Coulter counting, analytical separations and determinations of biomolecules, such as proteins and DNA, and facile handling of mass-limited samples. One of the more promising areas of nanofluidics is its potential for integration into microfluidic systems, i.e. micro-total analytical systems or lab-on-a-chip structures. For instance, NCAMs, when incorporated into microfluidic devices, can reproducibly perform digital switching, allowing transfer of fluid from one microfluidic channel to another, selectively separate and transfer analytes by size and mass, mix reactants efficiently, and separate fluids with disparate characteristics. In addition, there is a natural analogy between the fluid handling capabilities of nanofluidic structures and the ability of electronic components to control the flow of electrons and holes. This analogy has been used to realize active electronic functions such as rectification and field-effect and bipolar transistor action with ionic currents. Nanofluidics has also been applied to nano-optics for producing tuneable microlens arrays. Nanofluidics has had a significant impact in biotechnology, medicine and clinical diagnostics with the development of lab-on-a-chip devices for PCR and related techniques. Attempts have been made to understand the behaviour of flowfields around nanoparticles in terms of fluid forces as a function of Reynolds and Knudsen number using computational fluid dynamics. The relationship between lift, drag and Reynolds number has been shown to differ dramatically at the nanoscale compared with macroscale fluid dynamics. Challenges. There are a variety of challenges associated with the flow of liquids through carbon nanotubes and nanopipes. A common occurrence is channel blocking due to large macromolecules in the liquid. Also, any insoluble debris in the liquid can easily clog the tube. One solution researchers hope to find is a low-friction coating or channel materials that help reduce the blocking of the tubes. Also, large polymers, including biologically relevant molecules such as DNA, often fold "in vivo," causing blockages. Typical DNA molecules from a virus have lengths of approx. 100–200 kilobases and will form a random coil with a radius of some 700 nm in aqueous solution at 20%. This is also several times greater than the pore diameter of even large carbon pipes and two orders of magnitude greater than the diameter of a single-walled carbon nanotube. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
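Returning to the Theory section above, the Python sketch below (all numerical values are arbitrary choices, and the prefactor of the velocity profile is lumped into a single constant v0) evaluates the Rice–Whitehead profile v_z(r) = v0 [1 − I0(κr)/I0(κa)] for several values of the product κa, showing the transition from overlapping double layers (small κa) to plug-like electroosmotic flow (large κa).

import numpy as np
from scipy.special import i0   # modified Bessel function of the first kind, order 0

def velocity_profile(r, a, kappa, v0=1.0):
    # Electroosmotic velocity at radial position r in a pore of radius a,
    # with the constant prefactor eps*phi0*E_z/(4*pi*eta) lumped into v0.
    return v0 * (1.0 - i0(kappa * r) / i0(kappa * a))

a = 1.0                        # pore radius (arbitrary units)
r = np.linspace(0.0, a, 6)     # radial positions from the axis to the wall
for kappa_a in (1.0, 10.0, 100.0):
    v = velocity_profile(r, a, kappa_a / a)
    print(f"kappa*a = {kappa_a:>5}: v(r)/v0 =", np.round(v, 3))
# Small kappa*a (double layer spans the pore): flow suppressed everywhere;
# large kappa*a: nearly flat, plug-like profile except near the wall.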
[ { "math_id": 0, "text": "\\frac{1}{r}\\frac{d}{dr}\\left (r \\frac{d\\phi}{dr} \\right )= \\kappa^2 \\phi," }, { "math_id": 1, "text": "\\kappa = \\sqrt{\\frac{8\\pi n e^2}{\\epsilon k T}}, " }, { "math_id": 2, "text": "\\frac{1}{r} \\frac{d}{dr} \\left (r \\frac{d v_z}{dr} \\right )= \\frac{1}{\\eta} \\frac{dp}{dz} - \\frac{F_z}{\\eta}," }, { "math_id": 3, "text": "v_z\\left (r \\right) = \\frac{\\epsilon \\phi_0}{4 \\pi \\eta} E_z \\left [ 1 - \\frac {I_0 \\left ( \\kappa r \\right )} {I_0 \\left ( \\kappa a \\right )} \\right ]." } ]
https://en.wikipedia.org/wiki?curid=10172238
10172878
Classical group
In mathematics, the classical groups are defined as the special linear groups over the reals formula_0, the complex numbers formula_1 and the quaternions formula_2 together with special automorphism groups of symmetric or skew-symmetric bilinear forms and Hermitian or skew-Hermitian sesquilinear forms defined on real, complex and quaternionic finite-dimensional vector spaces. Of these, the complex classical Lie groups are four infinite families of Lie groups that together with the exceptional groups exhaust the classification of simple Lie groups. The compact classical groups are compact real forms of the complex classical groups. The finite analogues of the classical groups are the classical groups of Lie type. The term "classical group" was coined by Hermann Weyl, it being the title of his 1939 monograph "The Classical Groups". The classical groups form the deepest and most useful part of the subject of linear Lie groups. Most types of classical groups find application in classical and modern physics. A few examples are the following. The rotation group SO(3) is a symmetry of Euclidean space and all fundamental laws of physics, the Lorentz group O(3,1) is a symmetry group of spacetime of special relativity. The special unitary group SU(3) is the symmetry group of quantum chromodynamics and the symplectic group Sp("m") finds application in Hamiltonian mechanics and quantum mechanical versions of it. The classical groups. The classical groups are exactly the general linear groups over formula_0, formula_1 and formula_2 together with the automorphism groups of non-degenerate forms discussed below. These groups are usually additionally restricted to the subgroups whose elements have determinant 1, so that their centers are discrete. The classical groups, with the determinant 1 condition, are listed in the table below. In the sequel, the determinant 1 condition is "not" used consistently in the interest of greater generality. The complex classical groups are SL("n", formula_1), SO("n", formula_1) and Sp("n", formula_1). A group is complex according to whether its Lie algebra is complex. The real classical groups refers to all of the classical groups since any Lie algebra is a real algebra. The compact classical groups are the compact real forms of the complex classical groups. These are, in turn, SU("n"), SO("n") and Sp("n"). One characterization of the compact real form is in terms of the Lie algebra g. If g u + "i"u, the complexification of u, and if the connected group "K" generated by {exp("X"): "X" ∈ u} is compact, then "K" is a compact real form. The classical groups can uniformly be characterized in a different way using real forms. The classical groups (here with the determinant 1 condition, but this is not necessary) are the following: The complex linear algebraic groups SL("n", formula_1), SO("n", formula_1), and Sp("n", formula_1) together with their real forms. For instance, SO∗(2"n") is a real form of SO(2"n", formula_1), SU("p", "q") is a real form of SL("n", formula_1), and SL("n", formula_2) is a real form of SL(2"n", formula_1). Without the determinant 1 condition, replace the special linear groups with the corresponding general linear groups in the characterization. The algebraic groups in question are Lie groups, but the "algebraic" qualifier is needed to get the right notion of "real form". Bilinear and sesquilinear forms. The classical groups are defined in terms of forms defined on R"n", C"n", and H"n", where R and C are the fields of the real and complex numbers. 
The quaternions, H, do not constitute a field because multiplication does not commute; they form a division ring or a skew field or non-commutative field. However, it is still possible to define matrix quaternionic groups. For this reason, a vector space "V" is allowed to be defined over R, C, as well as H below. In the case of H, "V" is a "right" vector space to make possible the representation of the group action as matrix multiplication from the "left", just as for R and C. A form "φ": "V" × "V" → "F" on some finite-dimensional right vector space over "F" R, C, or H is bilinear if formula_3 and if formula_4 It is called sesquilinear if formula_5 and if formula_6 These conventions are chosen because they work in all cases considered. An automorphism of "φ" is a map "Α" in the set of linear operators on "V" such that The set of all automorphisms of "φ" form a group, it is called the automorphism group of "φ", denoted Aut("φ"). This leads to a preliminary definition of a classical group: A classical group is a group that preserves a bilinear or sesquilinear form on finite-dimensional vector spaces over R, C or H. This definition has some redundancy. In the case of "F" R, bilinear is equivalent to sesquilinear. In the case of "F" H, there are no non-zero bilinear forms. Symmetric, skew-symmetric, Hermitian, and skew-Hermitian forms. A form is symmetric if formula_7 It is skew-symmetric if formula_8 It is Hermitian if formula_9 Finally, it is skew-Hermitian if formula_10 A bilinear form "φ" is uniquely a sum of a symmetric form and a skew-symmetric form. A transformation preserving "φ" preserves both parts separately. The groups preserving symmetric and skew-symmetric forms can thus be studied separately. The same applies, mutatis mutandis, to Hermitian and skew-Hermitian forms. For this reason, for the purposes of classification, only purely symmetric, skew-symmetric, Hermitian, or skew-Hermitian forms are considered. The normal forms of the forms correspond to specific suitable choices of bases. These are bases giving the following normal forms in coordinates: formula_11 The j in the skew-Hermitian form is the third basis element in the basis (1, i, j, k) for H. Proof of existence of these bases and Sylvester's law of inertia, the independence of the number of plus- and minus-signs, "p" and "q", in the symmetric and Hermitian forms, as well as the presence or absence of the fields in each expression, can be found in or . The pair ("p", "q"), and sometimes "p" − "q", is called the signature of the form. Explanation of occurrence of the fields R, C, H: There are no nontrivial bilinear forms over H. In the symmetric bilinear case, only forms over R have a signature. In other words, a complex bilinear form with "signature" ("p", "q") can, by a change of basis, be reduced to a form where all signs are "+" in the above expression, whereas this is impossible in the real case, in which "p" − "q" is independent of the basis when put into this form. However, Hermitian forms have basis-independent signature in both the complex and the quaternionic case. (The real case reduces to the symmetric case.) A skew-Hermitian form on a complex vector space is rendered Hermitian by multiplication by i, so in this case, only H is interesting. Automorphism groups. The first section presents the general framework. The other sections exhaust the qualitatively different cases that arise as automorphism groups of bilinear and sesquilinear forms on finite-dimensional vector spaces over R, C and H. 
Aut("φ") – the automorphism group. Assume that "φ" is a non-degenerate form on a finite-dimensional vector space "V" over R, C or H. The automorphism group is defined, based on condition (1), as formula_12 Every "A" ∈ "M""n"("V") has an adjoint "A""φ" with respect to "φ" defined by Using this definition in condition (1), the automorphism group is seen to be given by Fix a basis for "V". In terms of this basis, put formula_13 where ξ"i", η"j" are the components of "x", "y". This is appropriate for the bilinear forms. Sesquilinear forms have similar expressions and are treated separately later. In matrix notation one finds formula_14 and from (2) where Φ is the matrix ("φij"). The non-degeneracy condition means precisely that Φ is invertible, so the adjoint always exists. Aut("φ") expressed with this becomes formula_15 The Lie algebra aut("φ") of the automorphism groups can be written down immediately. Abstractly, "X" ∈ aut("φ") if and only if formula_16 for all "t", corresponding to the condition in (3) under the exponential mapping of Lie algebras, so that formula_17 or in a basis as is seen using the power series expansion of the exponential mapping and the linearity of the involved operations. Conversely, suppose that "X" ∈ aut("φ"). Then, using the above result, "φ"("Xx", "y") φ("x", "X""φ""y") −φ("x", "Xy"). Thus the Lie algebra can be characterized without reference to a basis, or the adjoint, as formula_18 The normal form for "φ" will be given for each classical group below. From that normal form, the matrix Φ can be read off directly. Consequently, expressions for the adjoint and the Lie algebras can be obtained using formulas (4) and (5). This is demonstrated below in most of the non-trivial cases. Bilinear case. When the form is symmetric, Aut("φ") is called O("φ"). When it is skew-symmetric then Aut("φ") is called Sp("φ"). This applies to the real and the complex cases. The quaternionic case is empty since no nonzero bilinear forms exists on quaternionic vector spaces. Real case. The real case breaks up into two cases, the symmetric and the antisymmetric forms that should be treated separately. O("p", "q") and O("n") – the orthogonal groups. If "φ" is symmetric and the vector space is real, a basis may be chosen so that formula_19 The number of plus and minus-signs is independent of the particular basis. In the case "V" R"n" one writes O("φ") O("p", "q") where "p" is the number of plus signs and "q" is the number of minus-signs, "p" + "q" "n". If "q" 0 the notation is O("n"). The matrix Φ is in this case formula_20 after reordering the basis if necessary. The adjoint operation (4) then becomes formula_21 which reduces to the usual transpose when "p" or "q" is 0. The Lie algebra is found using equation (5) and a suitable ansatz (this is detailed for the case of Sp("m", R) below), formula_22 and the group according to (3) is given by formula_23 The groups O("p", "q") and O("q", "p") are isomorphic through the map formula_24 For example, the Lie algebra of the Lorentz group could be written as formula_25 Naturally, it is possible to rearrange so that the "q"-block is the upper left (or any other block). Here the "time component" end up as the fourth coordinate in a physical interpretation, and not the first as may be more common. Sp("m", R) – the real symplectic group. If "φ" is skew-symmetric and the vector space is real, there is a basis giving formula_26 where "n" 2"m". For Aut("φ") one writes Sp("φ") Sp("V") In case "V" R"n" R2"m" one writes Sp("m", R) or Sp(2"m", R). 
From the normal form one reads off formula_27 By making the ansatz formula_28 where "X", "Y", "Z", "W" are "m"-dimensional matrices and considering (5), formula_29 one finds the Lie algebra of Sp("m", R), formula_30 and the group is given by formula_31 Complex case. Like in the real case, there are two cases, the symmetric and the antisymmetric case that each yield a family of classical groups. O("n", C) – the complex orthogonal group. If case "φ" is symmetric and the vector space is complex, a basis formula_32 with only plus-signs can be used. The automorphism group is in the case of "V" C"n" called O(n, C). The lie algebra is simply a special case of that for o("p", "q"), formula_33 and the group is given by formula_34 In terms of classification of simple Lie algebras, the so("n") are split into two classes, those with "n" odd with root system "B""n" and "n" even with root system "D""n". Sp("m", C) – the complex symplectic group. For "φ" skew-symmetric and the vector space complex, the same formula, formula_26 applies as in the real case. For Aut("φ") one writes Sp("φ") Sp("V"). In the case formula_35 one writes Sp("m", formula_1) or Sp(2"m", formula_1). The Lie algebra parallels that of sp("m", formula_0), formula_36 and the group is given by formula_37 Sesquilinear case. In the sesquilinear case, one makes a slightly different approach for the form in terms of a basis, formula_38 The other expressions that get modified are formula_39 formula_40 The real case, of course, provides nothing new. The complex and the quaternionic case will be considered below. Complex case. From a qualitative point of view, consideration of skew-Hermitian forms (up to isomorphism) provide no new groups; multiplication by "i" renders a skew-Hermitian form Hermitian, and vice versa. Thus only the Hermitian case needs to be considered. U("p", "q") and U("n") – the unitary groups. A non-degenerate hermitian form has the normal form formula_41 As in the bilinear case, the signature ("p", "q") is independent of the basis. The automorphism group is denoted U("V"), or, in the case of "V" C"n", U("p", "q"). If "q" 0 the notation is U("n"). In this case, Φ takes the form formula_42 and the Lie algebra is given by formula_43 The group is given by formula_44 where g is a general n x n complex matrix and formula_45 is defined as the conjugate transpose of g, what physicists call formula_46. As a comparison, a Unitary matrix U(n) is defined as formula_47 We note that formula_48 is the same as formula_49 Quaternionic case. The space H"n" is considered as a "right" vector space over H. This way, "A"("vh") ("Av")"h" for a quaternion "h", a quaternion column vector "v" and quaternion matrix "A". If H"n" was a "left" vector space over H, then matrix multiplication from the "right" on row vectors would be required to maintain linearity. This does not correspond to the usual linear operation of a group on a vector space when a basis is given, which is matrix multiplication from the "left" on column vectors. Thus "V" is henceforth a right vector space over H. Even so, care must be taken due to the non-commutative nature of H. The (mostly obvious) details are skipped because complex representations will be used. When dealing with quaternionic groups it is convenient to represent quaternions using complex 2×2-matrices, With this representation, quaternionic multiplication becomes matrix multiplication and quaternionic conjugation becomes taking the Hermitian adjoint. 
Moreover, if a quaternion according to the complex encoding "q" "x" + j"y" is given as a column vector ("x", "y")T, then multiplication from the left by a matrix representation of a quaternion produces a new column vector representing the correct quaternion. This representation differs slightly from a more common representation found in the quaternion article. The more common convention would force multiplication from the right on a row matrix to achieve the same thing. Incidentally, the representation above makes it clear that the group of unit quaternions () is isomorphic to SU(2). Quaternionic "n"×"n"-matrices can, by obvious extension, be represented by 2"n"×2"n" block-matrices of complex numbers. If one agrees to represent a quaternionic "n"×1 column vector by a 2"n"×1 column vector with complex numbers according to the encoding of above, with the upper "n" numbers being the α"i" and the lower "n" the β"i", then a quaternionic "n"×"n"-matrix becomes a complex 2"n"×2"n"-matrix exactly of the form given above, but now with α and β "n"×"n"-matrices. More formally A matrix "T" ∈ GL(2"n", C) has the form displayed in (8) if and only if "J""n""T" "TJ""n". With these identifications, formula_50 The space "M""n"(H) ⊂ "M"2"n"(C) is a real algebra, but it is not a complex subspace of "M"2"n"(C). Multiplication (from the left) by i in "M""n"(H) using entry-wise quaternionic multiplication and then mapping to the image in "M"2"n"(C) yields a different result than multiplying entry-wise by "i" directly in "M"2"n"(C). The quaternionic multiplication rules give i("X" + j"Y") (iX") + j(−iY") where the new "X" and "Y" are inside the parentheses. The action of the quaternionic matrices on quaternionic vectors is now represented by complex quantities, but otherwise it is the same as for "ordinary" matrices and vectors. The quaternionic groups are thus embedded in M2"n"("C") where "n" is the dimension of the quaternionic matrices. The determinant of a quaternionic matrix is defined in this representation as being the ordinary complex determinant of its representative matrix. The non-commutative nature of quaternionic multiplication would, in the quaternionic representation of matrices, be ambiguous. The way "M""n"(H) is embedded in "M"2"n"(C) is not unique, but all such embeddings are related through "g" ↦ "AgA"−1, "g" ∈ GL(2"n", C) for "A" ∈ O(2"n", C), leaving the determinant unaffected. The name of SL("n", H) in this complex guise is SU∗(2"n"). As opposed to in the case of C, both the Hermitian and the skew-Hermitian case bring in something new when H is considered, so these cases are considered separately. GL("n", H) and SL("n", H). Under the identification above, formula_51 Its Lie algebra gl("n", H) is the set of all matrices in the image of the mapping "M""n"(H) ↔ "M"2"n"(C) of above, formula_52 The quaternionic special linear group is given by formula_53 where the determinant is taken on the matrices in C2"n". Alternatively, one can define this as the kernel of the Dieudonné determinant formula_54. The Lie algebra is formula_55 Sp("p", "q") – the quaternionic unitary group. As above in the complex case, the normal form is formula_56 and the number of plus-signs is independent of basis. When "V" H"n" with this form, Sp("φ") Sp("p", "q"). The reason for the notation is that the group can be represented, using the above prescription, as a subgroup of Sp("n", C) preserving a complex-hermitian form of signature (2"p", 2"q") If "p" or "q" 0 the group is denoted U("n", H). 
It is sometimes called the hyperunitary group. In quaternionic notation, formula_57 meaning that "quaternionic" matrices of the form will satisfy formula_58 see the section about u("p", "q"). Caution needs to be exercised when dealing with quaternionic matrix multiplication, but here only "I" and -"I" are involved and these commute with every quaternion matrix. Now apply prescription (8) to each block, formula_59 and the relations in (9) will be satisfied if formula_60 The Lie algebra becomes formula_61 The group is given by formula_62 Returning to the normal form of "φ"("w", "z") for Sp("p", "q"), make the substitutions "w" → "u" + "jv" and "z" → "x" + "jy" with u, v, x, y ∈ C"n". Then formula_63 viewed as a H-valued form on C2"n". Thus the elements of Sp("p", "q"), viewed as linear transformations of C2"n", preserve both a Hermitian form of signature (2"p", 2"q") and a non-degenerate skew-symmetric form. Both forms take purely complex values and due to the prefactor of j of the second form, they are separately conserved. This means that formula_64 and this explains both the name of the group and the notation. O∗(2"n") = O("n", H)- quaternionic orthogonal group. The normal form for a skew-hermitian form is given by formula_65 where j is the third basis quaternion in the ordered listing (1, i, j, k). In this case, Aut("φ") O∗(2"n") may be realized, using the complex matrix encoding of above, as a subgroup of O(2"n", C) which preserves a non-degenerate complex skew-hermitian form of signature ("n", "n"). From the normal form one sees that in quaternionic notation formula_66 and from (6) follows that for "V" ∈ o(2"n"). Now put formula_67 according to prescription (8). The same prescription yields for Φ, formula_68 Now the last condition in (9) in complex notation reads formula_69 The Lie algebra becomes formula_70 and the group is given by formula_71 The group SO∗(2"n") can be characterized as formula_72 where the map "θ": GL(2"n", C) → GL(2"n", C) is defined by "g" ↦ −"J"2"n""gJ"2"n". Also, the form determining the group can be viewed as a H-valued form on C2"n". Make the substitutions "x" → "w"1 + "iw"2 and "y" → "z"1 + "iz"2 in the expression for the form. Then formula_73 The form "φ"1 is Hermitian (while the first form on the left hand side is skew-Hermitian) of signature ("n", "n"). The signature is made evident by a change of basis from (e, f) to ((e + "if)/√2, (e − "if)/√2) where e, f are the first and last "n" basis vectors respectively. The second form, "φ"2 is symmetric positive definite. Thus, due to the factor j, O∗(2"n") preserves both separately and it may be concluded that formula_74 and the notation "O" is explained. Classical groups over general fields or algebras. Classical groups, more broadly considered in algebra, provide particularly interesting matrix groups. When the field "F" of coefficients of the matrix group is either real number or complex numbers, these groups are just the classical Lie groups. When the ground field is a finite field, then the classical groups are groups of Lie type. These groups play an important role in the classification of finite simple groups. Also, one may consider classical groups over a unital associative algebra "R" over "F"; where "R" = H (an algebra over reals) represents an important case. For the sake of generality the article will refer to groups over "R", where "R" may be the ground field "F" itself. 
Considering their abstract group theory, many linear groups have a "special" subgroup, usually consisting of the elements of determinant 1 over the ground field, and most of them have associated "projective" quotients, which are the quotients by the center of the group. For orthogonal groups in characteristic 2 "S" has a different meaning. The word "general" in front of a group name usually means that the group is allowed to multiply some sort of form by a constant, rather than leaving it fixed. The subscript "n" usually indicates the dimension of the module on which the group is acting; it is a vector space if "R" = "F". Caveat: this notation clashes somewhat with the "n" of Dynkin diagrams, which is the rank. General and special linear groups. The general linear group GL"n"("R") is the group of all "R"-linear automorphisms of "R""n". There is a subgroup: the special linear group SL"n"("R"), and their quotients: the projective general linear group PGL"n"("R") = GL"n"("R")/Z(GL"n"("R")) and the projective special linear group PSL"n"("R") = SL"n"("R")/Z(SL"n"("R")). The projective special linear group PSL"n"("F") over a field "F" is simple for "n" ≥ 2, except for the two cases when "n" = 2 and the field has order 2 or 3. Unitary groups. The unitary group U"n"("R") is a group preserving a sesquilinear form on a module. There is a subgroup, the special unitary group SU"n"("R") and their quotients the projective unitary group PU"n"("R") = U"n"("R")/Z(U"n"("R")) and the projective special unitary group PSU"n"("R") = SU"n"("R")/Z(SU"n"("R")) Symplectic groups. The symplectic group Sp2"n"("R") preserves a skew symmetric form on a module. It has a quotient, the projective symplectic group PSp2"n"("R"). The general symplectic group GSp2"n"("R") consists of the automorphisms of a module multiplying a skew symmetric form by some invertible scalar. The projective symplectic group PSp2"n"(F"q") over a finite field is simple for "n" ≥ 1, except for the cases of PSp2 over the fields of two and three elements. Orthogonal groups. The orthogonal group O"n"("R") preserves a non-degenerate quadratic form on a module. There is a subgroup, the special orthogonal group SO"n"("R") and quotients, the projective orthogonal group PO"n"("R"), and the projective special orthogonal group PSO"n"("R"). In characteristic 2 the determinant is always 1, so the special orthogonal group is often defined as the subgroup of elements of Dickson invariant 1. There is a nameless group often denoted by Ω"n"("R") consisting of the elements of the orthogonal group of elements of spinor norm 1, with corresponding subgroup and quotient groups SΩ"n"("R"), PΩ"n"("R"), PSΩ"n"("R"). (For positive definite quadratic forms over the reals, the group Ω happens to be the same as the orthogonal group, but in general it is smaller.) There is also a double cover of Ω"n"("R"), called the pin group Pin"n"("R"), and it has a subgroup called the spin group Spin"n"("R"). The general orthogonal group GO"n"("R") consists of the automorphisms of a module multiplying a quadratic form by some invertible scalar. Contrast with exceptional Lie groups. Contrasting with the classical Lie groups are the exceptional Lie groups, G2, F4, E6, E7, E8, which share their abstract properties, but not their familiarity. These were only discovered around 1890 in the classification of the simple Lie algebras over the complex numbers by Wilhelm Killing and Élie Cartan. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{R}" }, { "math_id": 1, "text": "\\mathbb{C}" }, { "math_id": 2, "text": "\\mathbb{H}" }, { "math_id": 3, "text": "\\varphi(x\\alpha, y\\beta) = \\alpha\\varphi(x, y)\\beta, \\quad \\forall x,y \\in V, \\forall \\alpha,\\beta \\in F." }, { "math_id": 4, "text": "\\varphi(x_1+x_2,y_1+y_2)=\\varphi(x_1,y_1)+\\varphi(x_1,y_2)+\\varphi(x_2,y_1)+\\varphi(x_2,y_2),\\quad \\forall x_1, x_2, y_1, y_2 \\in V. " }, { "math_id": 5, "text": "\\varphi(x\\alpha, y\\beta) = \\bar{\\alpha}\\varphi(x, y)\\beta, \\quad \\forall x,y \\in V, \\forall \\alpha,\\beta \\in F." }, { "math_id": 6, "text": "\\varphi(x_1+x_2,y_1+y_2)=\\varphi(x_1,y_1)+\\varphi(x_1,y_2)+\\varphi(x_2,y_1)+\\varphi(x_2,y_2), \\quad \\forall x_1, x_2, y_1, y_2 \\in V. " }, { "math_id": 7, "text": "\\varphi(x, y) = \\varphi(y, x)." }, { "math_id": 8, "text": "\\varphi(x, y) = -\\varphi(y, x)." }, { "math_id": 9, "text": "\\varphi(x, y) = \\overline{\\varphi(y, x)}" }, { "math_id": 10, "text": "\\varphi(x, y) = -\\overline{\\varphi(y, x)}." }, { "math_id": 11, "text": "\\begin{align}\n \\text{Bilinear symmetric form in (pseudo-)orthonormal basis:} \\quad\n \\varphi(x, y) ={} &{\\pm}\\xi_1\\eta_1 \\pm \\xi_2\\eta_2 \\pm \\cdots \\pm \\xi_n\\eta_n, & &(\\mathbf R)\\\\\n \\text{Bilinear symmetric form in orthonormal basis:} \\quad\n \\varphi(x, y) ={} &\\xi_1\\eta_1 + \\xi_2\\eta_2 + \\cdots + \\xi_n\\eta_n, & &(\\mathbf C)\\\\\n \\text{Bilinear skew-symmetric in symplectic basis:} \\quad\n \\varphi(x, y) ={} &\\xi_1\\eta_{m + 1} + \\xi_2\\eta_{m + 2} + \\cdots + \\xi_m\\eta_{2m = n} \\\\\n &-\\xi_{m + 1}\\eta_1 - \\xi_{m + 2}\\eta_2 - \\cdots - \\xi_{2m = n}\\eta_m, & &(\\mathbf R, \\mathbf C)\\\\\n \\text{Sesquilinear Hermitian:} \\quad\n \\varphi(x, y) ={} &{\\pm}\\bar{\\xi_1}\\eta_1 \\pm \\bar{\\xi_2}\\eta_2 \\pm \\cdots \\pm \\bar{\\xi_n}\\eta_n, & &(\\mathbf C, \\mathbf H)\\\\\n \\text{Sesquilinear skew-Hermitian:} \\quad\n \\varphi(x, y) ={} &\\bar{\\xi_1}\\mathbf{j}\\eta_1 + \\bar{\\xi_2}\\mathbf{j}\\eta_2 + \\cdots + \\bar{\\xi_n}\\mathbf{j}\\eta_n, & &(\\mathbf H)\n\\end{align}" }, { "math_id": 12, "text": "\\mathrm{Aut}(\\varphi) = \\{A \\in \\mathrm{GL}(V) : \\varphi(Ax, Ay) = \\varphi(x, y), \\quad \\forall x,y \\in V\\}." }, { "math_id": 13, "text": "\\varphi(x, y) = \\sum \\xi_i\\varphi_{ij}\\eta_j" }, { "math_id": 14, "text": "\\varphi(x, y) = x^{\\mathrm T}\\Phi y" }, { "math_id": 15, "text": "\\operatorname{Aut}(\\varphi) = \\left\\{A \\in \\operatorname{GL}(V): \\Phi^{-1}A^\\mathrm{T}\\Phi A = 1\\right\\}." }, { "math_id": 16, "text": "(e^{tX})^\\varphi e^{tX} = 1" }, { "math_id": 17, "text": "\\mathfrak{aut}(\\varphi) = \\left\\{X \\in M_n(V): X^\\varphi = -X\\right\\}," }, { "math_id": 18, "text": "\\mathfrak{aut}(\\varphi) = \\{X \\in M_n(V): \\varphi(Xx, y) = -\\varphi(x, Xy),\\quad \\forall x,y \\in V\\}." }, { "math_id": 19, "text": "\\varphi(x, y) = \\pm \\xi_1\\eta_1 \\pm \\xi_2\\eta_2 \\cdots \\pm \\xi_n\\eta_n." 
}, { "math_id": 20, "text": "\\Phi = \\left(\\begin{matrix}I_p & 0 \\\\0 & -I_q\\end{matrix}\\right) \\equiv I_{p,q}" }, { "math_id": 21, "text": "A^\\varphi = \\left(\\begin{matrix}I_p & 0 \\\\0 & -I_q\\end{matrix}\\right) \\left(\\begin{matrix}A_{11} & \\cdots \\\\\\cdots & A_{nn}\\end{matrix}\\right)^{\\mathrm{T}} \\left(\\begin{matrix}I_p & 0 \\\\0 & -I_q\\end{matrix}\\right)," }, { "math_id": 22, "text": "\\mathfrak{o}(p, q) = \\left\\{\\left .\\left(\\begin{matrix}X_{p \\times p} & Y_{p \\times q} \\\\ Y^{\\mathrm{T}} & W_{q \\times q}\\end{matrix}\\right)\\right| X^{\\mathrm T} = -X,\\quad W^{\\mathrm T} = -W\\right\\}," }, { "math_id": 23, "text": "\\mathrm{O}(p, q) = \\{g \\in \\mathrm{GL}(n, \\mathbb{R})|I_{p,q}^{-1}g^{\\mathrm{T}}I_{p,q}g = I\\}." }, { "math_id": 24, "text": "\\mathrm{O}(p, q) \\rightarrow \\mathrm{O}(q, p), \\quad g \\rightarrow \\sigma g \\sigma^{-1}, \\quad \\sigma = \\left[\\begin{smallmatrix}0 & 0 & \\cdots & 1\\\\ \\vdots & \\vdots & \\ddots & \\vdots\\\\0 & 1 & \\cdots & 0\\\\1 & 0 & \\cdots & 0 \\end{smallmatrix}\\right]." }, { "math_id": 25, "text": "\\mathfrak{o}(3, 1) = \\mathrm{span} \\left\\{ \n\\left( \\begin{smallmatrix}0&1&0&0\\\\-1&0&0&0\\\\0&0&0&0\\\\0&0&0&0 \\end{smallmatrix} \\right),\n\\left( \\begin{smallmatrix}0&0&-1&0\\\\0&0&0&0\\\\1&0&0&0\\\\0&0&0&0 \\end{smallmatrix} \\right),\n\\left( \\begin{smallmatrix}0&0&0&0\\\\0&0&1&0\\\\0&-1&0&0\\\\0&0&0&0 \\end{smallmatrix} \\right),\n\\left( \\begin{smallmatrix}0&0&0&1\\\\0&0&0&0\\\\0&0&0&0\\\\1&0&0&0 \\end{smallmatrix} \\right),\n\\left( \\begin{smallmatrix}0&0&0&0\\\\0&0&0&1\\\\0&0&0&0\\\\0&1&0&0 \\end{smallmatrix} \\right),\n\\left( \\begin{smallmatrix}0&0&0&0\\\\0&0&0&0\\\\0&0&0&1\\\\0&0&1&0 \\end{smallmatrix} \\right)\n \\right\\}." }, { "math_id": 26, "text": "\\varphi(x, y) = \\xi_1\\eta_{m + 1} + \\xi_2\\eta_{m + 2} \\cdots + \\xi_m\\eta_{2m = n} - \\xi_{m + 1}\\eta_1 - \\xi_{m + 2}\\eta_2 \\cdots - \\xi_{2m = n}\\eta_m," }, { "math_id": 27, "text": "\\Phi = \\left(\\begin{matrix}0_m & I_m \\\\ -I_m & 0_m\\end{matrix}\\right) = J_m." }, { "math_id": 28, "text": "V = \\left(\\begin{matrix}X & Y \\\\ Z & W\\end{matrix}\\right)," }, { "math_id": 29, "text": "\\left(\\begin{matrix}0_m & -I_m \\\\ I_m & 0_m\\end{matrix}\\right)\\left(\\begin{matrix}X & Y \\\\ Z & W\\end{matrix}\\right)^{\\mathrm T}\\left(\\begin{matrix}0_m & I_m \\\\ -I_m & 0_m\\end{matrix}\\right) = -\\left(\\begin{matrix}X & Y \\\\ Z & W\\end{matrix}\\right)" }, { "math_id": 30, "text": "\\mathfrak{sp}(m, \\mathbb{R}) = \\{X \\in M_n(\\mathbb{R}): J_mX + X^{\\mathrm T}J_m = 0\\} = \\left\\{\\left .\\left(\\begin{matrix}X & Y \\\\ Z & -X^{\\mathrm T}\\end{matrix}\\right)\\right| Y^{\\mathrm T} = Y, Z^{\\mathrm T} = Z\\right\\}," }, { "math_id": 31, "text": "\\mathrm{Sp}(m, \\mathbb{R}) = \\{g \\in M_n(\\mathbb{R})|g^{\\mathrm{T}}J_mg = J_m\\}." }, { "math_id": 32, "text": "\\varphi(x, y) = \\xi_1\\eta_1 + \\xi_1\\eta_1 \\cdots + \\xi_n\\eta_n" }, { "math_id": 33, "text": "\\mathfrak{o}(n, \\mathbb{C}) = \\mathfrak{so}(n, \\mathbb{C}) = \\{X|X^{\\mathrm{T}} = -X\\}," }, { "math_id": 34, "text": "\\mathrm{O}(n, \\mathbb{C}) = \\{g|g^{\\mathrm{T}}g = I_n\\}." 
}, { "math_id": 35, "text": "V = \\mathbb{C}^n = \\mathbb{C}^{2m}" }, { "math_id": 36, "text": "\\mathfrak{sp}(m, \\mathbb{C}) = \\{X \\in M_n(\\mathbb{C}): J_mX + X^{\\mathrm T}J_m = 0\\} =\\left\\{\\left .\\left(\\begin{matrix}X & Y \\\\ Z & -X^{\\mathrm T}\\end{matrix}\\right)\\right| Y^{\\mathrm T} = Y, Z^{\\mathrm T} = Z\\right\\}," }, { "math_id": 37, "text": "\\mathrm{Sp}(m, \\mathbb{C}) = \\{g \\in M_n(\\mathbb{C})|g^{\\mathrm{T}}J_mg = J_m\\}." }, { "math_id": 38, "text": "\\varphi(x, y) = \\sum \\bar{\\xi}_i\\varphi_{ij}\\eta_j." }, { "math_id": 39, "text": "\\varphi(x, y) = x^*\\Phi y, \\qquad A^\\varphi = \\Phi^{-1}A^*\\Phi," }, { "math_id": 40, "text": "\\operatorname{Aut}(\\varphi) = \\{A \\in \\operatorname{GL}(V): \\Phi^{-1}A^*\\Phi A = 1\\}," }, { "math_id": 41, "text": "\\varphi(x, y) = \\pm \\bar{\\xi_1}\\eta_1 \\pm \\bar{\\xi_2}\\eta_2 \\cdots \\pm \\bar{\\xi_n}\\eta_n." }, { "math_id": 42, "text": "\\Phi = \\left(\\begin{matrix}1_p & 0\\\\0 & -1_q\\end{matrix}\\right) = I_{p,q}," }, { "math_id": 43, "text": "\\mathfrak{u}(p, q) = \\left\\{ \\left. \\left( \\begin{matrix} X_{p \\times p} & Z_{p \\times q} \\\\ {\\overline{Z_{p \\times q}}}^{\\mathrm{T}} & Y_{q \\times q} \\end{matrix}\\right) \\right| {\\overline{X}}^{\\mathrm T} = -X , \\quad {\\overline{Y}}^{\\mathrm T} = -Y \\right\\} ." }, { "math_id": 44, "text": "\\mathrm{U}(p, q) = \\{g|I_{p,q}^{-1}g^*I_{p,q}g = I\\}." }, { "math_id": 45, "text": "g^{*}" }, { "math_id": 46, "text": "g^{\\dagger}" }, { "math_id": 47, "text": "\\mathrm{U}(n) = \\{g|g^*g = I\\}." }, { "math_id": 48, "text": "\\mathrm{U}(n)" }, { "math_id": 49, "text": "\\mathrm{U}(n,0)" }, { "math_id": 50, "text": "\\mathbb{H}^n \\approx \\mathbb{C}^{2n}, M_n(\\mathbb{H}) \\approx \\left\\{\\left .T \\in M_{2n}(\\mathbb{C})\\right|J_nT = \\overline{T}J_n, \\quad J_n = \\left(\\begin{matrix}0 & I_n\\\\-I_n & 0\\end{matrix}\\right) \\right\\}." }, { "math_id": 51, "text": "\\mathrm{GL}(n, \\mathbb{H}) = \\{g \\in \\mathrm{GL}(2n, \\mathbb{C})|Jg = \\overline{g}J, \\mathrm{det}\\quad g \\ne 0\\} \\equiv \\mathrm{U}^*(2n)." }, { "math_id": 52, "text": "\\mathfrak{gl}(n, \\mathbb{H}) = \\left\\{\\left .\\left(\\begin{matrix}X & -\\overline{Y}\\\\Y & \\overline{X}\\end{matrix}\\right)\\right|X, Y \\in \\mathfrak{gl}(n, \\mathbb{C})\\right\\} \\equiv \\mathfrak{u}^*(2n)." }, { "math_id": 53, "text": "\\mathrm{SL}(n, \\mathbb{H}) = \\{g \\in \\mathrm{GL}(n, \\mathbb{H})|\\mathrm{det}\\ g = 1\\} \\equiv \\mathrm{SU}^*(2n)," }, { "math_id": 54, "text": "\\mathrm{GL}(n, \\mathbb{H}) \\rightarrow \\mathbb H^*/[\\mathbb H^*, \\mathbb H^*] \\simeq \\mathbb{R}_{> 0}^*" }, { "math_id": 55, "text": "\\mathfrak{sl}(n, \\mathbb{H}) = \\left\\{\\left .\\left(\\begin{matrix}X & -\\overline{Y}\\\\Y & \\overline{X}\\end{matrix}\\right)\\right|Re(\\operatorname{Tr}X) = 0\\right\\} \\equiv \\mathfrak{su}^*(2n)." 
}, { "math_id": 56, "text": "\\varphi(x, y) = \\pm \\bar{\\xi_1}\\eta_1 \\pm \\bar{\\xi_2}\\eta_2 \\cdots \\pm \\bar{\\xi_n}\\eta_n" }, { "math_id": 57, "text": "\\Phi = \\begin{pmatrix} I_p & 0 \\\\ 0 & -I_q \\end{pmatrix} = I_{p,q}" }, { "math_id": 58, "text": "\\Phi^{-1}\\mathcal{Q}^*\\Phi = -\\mathcal{Q}," }, { "math_id": 59, "text": "\n \\mathcal{X} = \\begin{pmatrix} X_{1 (p \\times p)} & -\\overline{X}_2 \\\\ X_2 & \\overline{X}_1 \\end{pmatrix}, \\quad\n \\mathcal{Y} = \\begin{pmatrix} Y_{1 (q \\times q)} & -\\overline{Y}_2 \\\\ Y_2 & \\overline{Y}_1 \\end{pmatrix}, \\quad\n \\mathcal{Z} = \\begin{pmatrix} Z_{1 (p \\times q)} & -\\overline{Z}_2 \\\\ Z_2 & \\overline{Z}_1 \\end{pmatrix},\n" }, { "math_id": 60, "text": "X_1^* = -X_1, \\quad Y_1^* = -Y_1." }, { "math_id": 61, "text": "\n \\mathfrak{sp}(p, q) = \\left\\{\\left.\n \\begin{pmatrix}\n \\begin{bmatrix} X_{1 (p \\times p)} & -\\overline{X}_2 \\\\ X_2 & \\overline{X}_1 \\end{bmatrix} &\n \\begin{bmatrix} Z_{1 (p \\times q)} & -\\overline{Z}_2 \\\\ Z_2 & \\overline{Z}_1 \\end{bmatrix} \\\\\n \\begin{bmatrix} Z_{1 (p \\times q)} & -\\overline{Z}_2 \\\\ Z_2 & \\overline{Z}_1 \\end{bmatrix}^* &\n \\begin{bmatrix} Y_{1 (q \\times q)} & -\\overline{Y}_2 \\\\ Y_2 & \\overline{Y}_1 \\end{bmatrix} \n \\end{pmatrix}\n \\right|\n X_1^* = -X_1,\\quad Y_1^* = -Y_1\n \\right\\}.\n" }, { "math_id": 62, "text": "\n \\mathrm{Sp}(p, q) =\n \\left\\{g \\in \\mathrm{GL}( n, \\mathbb{H}) \\mid I_{p,q}^{-1} g^* I_{p,q}g = I_{p + q}\\right\\} =\n \\left\\{g \\in \\mathrm{GL}(2n, \\mathbb{C}) \\mid K_{p,q}^{-1} g^* K_{p,q}g = I_{2(p + q)},\\quad\n K = \\operatorname{diag}\\left(I_{p,q}, I_{p,q}\\right)\\right\\}.\n" }, { "math_id": 63, "text": "\n \\varphi(w, z) =\n \\begin{bmatrix} u^* & v^* \\end{bmatrix}K_{p, q}\\begin{bmatrix} x \\\\ y \\end{bmatrix} +\n j\\begin{bmatrix} u & -v \\end{bmatrix}K_{p, q}\\begin{bmatrix} y \\\\ x \\end{bmatrix} =\n \\varphi_1(w, z) + \\mathbf{j}\\varphi_2(w, z), \\quad\n K_{p, q} = \\mathrm{diag}\\left(I_{p, q}, I_{p, q}\\right)\n" }, { "math_id": 64, "text": "\\mathrm{Sp}(p, q) = \\mathrm{U}\\left(\\mathbb{C}^{2n}, \\varphi_1\\right) \\cap \\mathrm{Sp}\\left(\\mathbb{C}^{2n}, \\varphi_2\\right)" }, { "math_id": 65, "text": "\\varphi(x, y) = \\bar{\\xi_1}\\mathbf{j}\\eta_1 + \\bar{\\xi_2}\\mathbf{j}\\eta_2 \\cdots + \\bar{\\xi_n}\\mathbf{j}\\eta_n," }, { "math_id": 66, "text": "\\Phi =\n \\left(\\begin{smallmatrix}\n \\mathbf{j} & 0 & \\cdots & 0 \\\\ 0 & \\mathbf{j} & \\cdots & \\vdots \\\\\n \\vdots & & \\ddots & & \\\\ 0 & \\cdots & 0 & \\mathbf{j}\n \\end{smallmatrix}\\right) \\equiv\n \\mathrm{j}_n\n" }, { "math_id": 67, "text": "V = X + \\mathbf{j}Y \\leftrightarrow \\left(\\begin{matrix} X & -\\overline{Y}\\\\Y & \\overline{X} \\end{matrix}\\right)" }, { "math_id": 68, "text": "\\Phi \\leftrightarrow \\left(\\begin{matrix} 0 & -I_n \\\\ I_n & 0 \\end{matrix}\\right) \\equiv J_{n}." }, { "math_id": 69, "text": "\n \\left(\\begin{matrix} X & -\\overline{Y} \\\\ Y & \\overline{X} \\end{matrix}\\right)^* =\n \\left(\\begin{matrix} 0 & -I_n \\\\ I_n & 0 \\end{matrix}\\right)\n \\left(\\begin{matrix} X & -\\overline{Y} \\\\ Y & \\overline{X} \\end{matrix}\\right)\n \\left(\\begin{matrix} 0 & -I_n \\\\ I_n & 0 \\end{matrix}\\right) \\Leftrightarrow\n X^\\mathrm{T} = -X, \\quad \\overline{Y}^\\mathrm{T} = Y.\n" }, { "math_id": 70, "text": "\\mathfrak{o}^*(2n) = \\left\\{\\left. 
\\left(\\begin{matrix} X & -\\overline{Y} \\\\ Y & \\overline{X} \\end{matrix}\\right)\\right| X^\\mathrm{T} = -X, \\quad \\overline{Y}^\\mathrm{T} = Y\\right\\}," }, { "math_id": 71, "text": "\n \\mathrm{O}^*(2n) =\n \\left\\{g \\in \\mathrm{GL}(n, \\mathbb{H}) \\mid \\mathrm{j}_n^{-1}g^*\\mathrm{j}_n g = I_n\\right\\} =\n \\left\\{g \\in \\mathrm{GL}(2n, \\mathbb{C}) \\mid J_{n}^{-1}g^* J_n g = I_{2n}\\right\\}.\n" }, { "math_id": 72, "text": "\\mathrm{O}^*(2n) = \\left\\{g \\in \\mathrm{O}(2n, \\mathbb{C}) \\mid \\theta\\left(\\overline{g}\\right) = g\\right\\}," }, { "math_id": 73, "text": "\\varphi(x, y) = \\overline{w}_2 I_n z_1 - \\overline{w}_1 I_n z_2 + \\mathbf{j}(w_1 I_n z_1 + w_2 I_n z_2) = \\overline{\\varphi_1(w, z)} + \\mathbf{j}\\varphi_2(w, z)." }, { "math_id": 74, "text": "\\mathrm{O}^*(2n) = \\mathrm{O}(2n, \\mathbb{C}) \\cap \\mathrm{U}\\left(\\mathbb{C}^{2n}, \\varphi_1\\right)," } ]
https://en.wikipedia.org/wiki?curid=10172878
10175953
Zero dagger
In set theory, 0† (zero dagger) is a particular subset of the natural numbers, first defined by Robert M. Solovay in unpublished work in the 1960s. (The superscript † should be a dagger, but it appears as a plus sign on some browsers.) The definition is a bit awkward, because there might be "no" set of natural numbers satisfying the conditions. Specifically, if ZFC is consistent, then ZFC + "0† does not exist" is consistent. ZFC + "0† exists" is not known to be inconsistent (and most set theorists believe that it is consistent). In other words, it is believed to be independent (see large cardinal for a discussion). It is usually formulated as follows: 0† exists if and only if there exists a non-trivial elementary embedding  "j" : "L[U]" → "L[U]" for the relativized Gödel constructible universe "L[U]", where "U" is an ultrafilter witnessing that some cardinal κ is measurable. If 0† exists, then a careful analysis of the embeddings of "L[U]" into itself reveals that there is a closed unbounded subset of κ, and a closed unbounded proper class of ordinals greater than κ, which together are "indiscernible" for the structure formula_0, and 0† is defined to be the set of Gödel numbers of the true formulas about the indiscernibles in "L[U]". Solovay showed that the existence of 0† follows from the existence of two measurable cardinals. It is traditionally considered a large cardinal axiom, although it is not a large cardinal, nor indeed a cardinal at all.
[ { "math_id": 0, "text": "(L,\\in,U)" } ]
https://en.wikipedia.org/wiki?curid=10175953
10176565
Complete homogeneous symmetric polynomial
In mathematics, specifically in algebraic combinatorics and commutative algebra, the complete homogeneous symmetric polynomials are a specific kind of symmetric polynomials. Every symmetric polynomial can be expressed as a polynomial expression in complete homogeneous symmetric polynomials. Definition. The complete homogeneous symmetric polynomial of degree "k" in "n" variables "X"1, ..., "X""n", written "h""k" for "k" = 0, 1, 2, ..., is the sum of all monomials of total degree "k" in the variables. Formally, formula_0 The formula can also be written as: formula_1 Indeed, "lp" is just the multiplicity of "p" in the sequence "ik". The first few of these polynomials are formula_2 Thus, for each nonnegative integer "k", there exists exactly one complete homogeneous symmetric polynomial of degree "k" in "n" variables. Another way of rewriting the definition is to take summation over all sequences "ik", without condition of ordering "ip" ≤ "i""p" + 1: formula_3 here "mp" is the multiplicity of number "p" in the sequence "ik". For example formula_4 The polynomial ring formed by taking all integral linear combinations of products of the complete homogeneous symmetric polynomials is a commutative ring. Examples. The following lists the "n" basic (as explained below) complete homogeneous symmetric polynomials for the first three positive values of "n". For "n" = 1: formula_5 For "n" = 2: formula_6 For "n" = 3: formula_7 Properties. Generating function. The complete homogeneous symmetric polynomials are characterized by the following identity of formal power series in "t": formula_8 (this is called the generating function, or generating series, for the complete homogeneous symmetric polynomials). Here each fraction in the final expression is the usual way to represent the formal geometric series that is a factor in the middle expression. The identity can be justified by considering how the product of those geometric series is formed: each factor in the product is obtained by multiplying together one term chosen from each geometric series, and every monomial in the variables "X""i" is obtained for exactly one such choice of terms, and comes multiplied by a power of "t" equal to the degree of the monomial. The formula above can be seen as a special case of the MacMahon master theorem. The right hand side can be interpreted as formula_9 where formula_10 and formula_11. On the left hand side, one can identify the complete homogeneous symmetric polynomials as special cases of the multinomial coefficient that appears in the MacMahon expression. Performing some standard computations, we can also write the generating function as formula_12, which is the power series expansion of the plethystic exponential of formula_13 (and note that formula_14 is precisely the "j"-th power sum symmetric polynomial). Relation with the elementary symmetric polynomials. There is a fundamental relation between the elementary symmetric polynomials and the complete homogeneous ones: formula_15 which is valid for all "m" &gt; 0, and any number of variables "n". The easiest way to see that it holds is from an identity of formal power series in "t" for the elementary symmetric polynomials, analogous to the one given above for the complete homogeneous ones, which can also be written in terms of plethystic exponentials as: formula_16 (this is actually an identity of polynomials in "t", because the elementary symmetric polynomials "e""k"("X"1, ..., "X""n") vanish for "k" &gt; "n"). 
Multiplying this by the generating function for the complete homogeneous symmetric polynomials, one obtains the constant series 1 (equivalently, plethystic exponentials satisfy the usual properties of an exponential), and the relation between the elementary and complete homogeneous polynomials follows from comparing coefficients of "t""m". A somewhat more direct way to understand that relation is to consider the contributions in the summation involving a fixed monomial "X""α" of degree "m". For any subset "S" of the variables appearing with nonzero exponent in the monomial, there is a contribution involving the product "X""S" of those variables as term from "e""s"("X"1, ..., "X""n"), where "s" = #"S", and the monomial from "h""m" − "s"("X"1, ..., "X""n"); this contribution has coefficient (−1)"s". The relation then follows from the fact that formula_17 by the binomial formula, where "l" ≤ "m" denotes the number of distinct variables occurring (with nonzero exponent) in "X""α". Since "e"0("X"1, ..., "X""n") and "h"0("X"1, ..., "X""n") are both equal to 1, one can isolate from the relation either the first or the last terms of the summation. The former gives a sequence of equations: formula_18 and so on, that allows one to recursively express the successive complete homogeneous symmetric polynomials in terms of the elementary symmetric polynomials; the latter gives a set of equations formula_19 and so forth, that allows one to do the inverse. The first "n" elementary and complete homogeneous symmetric polynomials play perfectly similar roles in these relations, even though the former polynomials vanish in degrees above "n", whereas the latter do not. This phenomenon can be understood in the setting of the ring of symmetric functions. It has a ring automorphism that interchanges the sequences of the "n" elementary and first "n" complete homogeneous symmetric functions. The set of complete homogeneous symmetric polynomials of degree 1 to "n" in "n" variables generates the ring of symmetric polynomials in "n" variables. More specifically, the ring of symmetric polynomials with integer coefficients equals the integral polynomial ring formula_20 This can be formulated by saying that formula_21 form a transcendence basis of the ring of symmetric polynomials in "X"1, ..., "X""n" with integral coefficients (as is also true for the elementary symmetric polynomials). The same is true with the ring formula_22 of integers replaced by any other commutative ring. These statements follow from analogous statements for the elementary symmetric polynomials, due to the indicated possibility of expressing either kind of symmetric polynomials in terms of the other kind. Relation with the Stirling numbers. The evaluation at integers of complete homogeneous polynomials and elementary symmetric polynomials is related to Stirling numbers: formula_23 Relation with the monomial symmetric polynomials. The polynomial "h""k"("X"1, ..., "X""n") is also the sum of "all" distinct monomial symmetric polynomials of degree "k" in "X"1, ..., "X""n", for instance formula_24 Relation with power sums. Newton's identities for homogeneous symmetric polynomials give the simple recursive formula formula_25 where formula_26 and "p""k" is the "k"-th power sum symmetric polynomial: formula_27, as above. For small formula_28 we have formula_29 Relation with symmetric tensors. Consider an "n"-dimensional vector space "V" and a linear operator "M" : "V" → "V" with eigenvalues "X"1, "X"2, ..., "X""n". 
Denote by Sym"k"("V") its "k"th symmetric tensor power and "M"Sym("k") the induced operator Sym"k"("V") → Sym"k"("V"). Proposition: formula_30 The proof is easy: consider an eigenbasis "ei" for "M". The basis in Sym"k"("V") can be indexed by sequences "i"1 ≤ "i"2 ≤ ... ≤ "i""k", indeed, consider the symmetrizations of formula_31. All such vectors are eigenvectors for "M"Sym("k") with eigenvalues formula_32 hence this proposition is true. Similarly one can express elementary symmetric polynomials via traces over antisymmetric tensor powers. Both expressions are subsumed in expressions of Schur polynomials as traces over Schur functors, which can be seen as the Weyl character formula for GL("V"). Complete homogeneous symmetric polynomial with variables shifted by 1. If we replace the variables formula_33 for formula_34, the symmetric polynomial formula_35 can be written as a linear combination of the formula_36, for formula_37, formula_38 The proof, as found in Lemma 3.5 of, relies on the combinatorial properties of increasing formula_28-tuples formula_39 where formula_40.
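Several of the identities above are easy to spot-check with a computer algebra system. The following sketch (Python with SymPy; the helper functions are ad hoc, built directly from the definitions as sums of monomials) verifies, for three variables and small degrees, the alternating relation with the elementary symmetric polynomials, the Newton-type recursion with the power sums, and the evaluation at 1, 2, ..., "n" in terms of Stirling numbers of the second kind:

from itertools import combinations, combinations_with_replacement
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

n = 3
X = sp.symbols(f"x1:{n + 1}")          # the variables x1, x2, x3

def h(k):  # complete homogeneous: one monomial per weakly increasing index sequence
    return sp.expand(sp.Add(*[sp.Mul(*c) for c in combinations_with_replacement(X, k)]))

def e(k):  # elementary: one monomial per strictly increasing index sequence
    return sp.expand(sp.Add(*[sp.Mul(*c) for c in combinations(X, k)]))

def p(k):  # power sum
    return sum(x**k for x in X)

# sum_{i=0}^m (-1)^i e_i h_{m-i} = 0 for every m > 0
for m in range(1, 6):
    assert sp.expand(sp.Add(*[(-1)**i * e(i) * h(m - i) for i in range(m + 1)])) == 0

# Newton-type recursion: k h_k = sum_{i=1}^k h_{k-i} p_i
for k in range(1, 6):
    assert sp.expand(k * h(k) - sp.Add(*[h(k - i) * p(i) for i in range(1, k + 1)])) == 0

# h_m(1, 2, ..., n) equals the Stirling number of the second kind S(m + n, n)
for m in range(0, 5):
    value = h(m).subs({X[i]: i + 1 for i in range(n)})
    assert value == stirling(m + n, n, kind=2)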
[ { "math_id": 0, "text": "h_k (X_1, X_2, \\dots,X_n) = \\sum_{1 \\leq i_1 \\leq i_2 \\leq \\cdots \\leq i_k \\leq n} X_{i_1} X_{i_2} \\cdots X_{i_k}." }, { "math_id": 1, "text": "h_k (X_1, X_2, \\dots,X_n) = \\sum_{l_1+l_2+ \\cdots + l_n=k \\atop l_i \\geq 0 } X_{1}^{l_1} X_{2}^{l_2} \\cdots X_{n}^{l_n}." }, { "math_id": 2, "text": "\\begin{align}\nh_0 (X_1, X_2, \\dots,X_n) &= 1, \\\\[10px]\nh_1 (X_1, X_2, \\dots,X_n) &= \\sum_{1 \\leq j \\leq n} X_j, \\\\\nh_2 (X_1, X_2, \\dots,X_n) &= \\sum_{1 \\leq j \\leq k \\leq n} X_j X_k, \\\\\nh_3 (X_1, X_2, \\dots,X_n) &= \\sum_{1 \\leq j \\leq k \\leq l \\leq n} X_j X_k X_l.\n\\end{align}" }, { "math_id": 3, "text": "h_k (X_1, X_2, \\dots, X_n) = \\sum_{1 \\leq i_1, i_2 , \\cdots , i_k \\leq n} \\frac{m_1! m_2 !\\cdots m_n!}{k!} X_{i_1} X_{i_2} \\cdots X_{i_k}," }, { "math_id": 4, "text": "h_2 (X_1, X_2) = \\frac{2!0!}{2!}X_1^2 +\\frac{1!1!}{2!}X_1X_2 +\\frac{1!1!}{2!}X_2X_1 + \\frac{0!2!}{2!}X_2^2 = X_1^2+X_1X_2+X_2^2." }, { "math_id": 5, "text": "h_1(X_1) = X_1\\,." }, { "math_id": 6, "text": "\\begin{align}\n h_1(X_1,X_2)&= X_1 + X_2\\\\\n h_2(X_1,X_2)&= X_1^2 + X_1X_2 + X_2^2.\n\\end{align}" }, { "math_id": 7, "text": "\\begin{align}\n h_1(X_1,X_2,X_3) &= X_1 + X_2 + X_3\\\\\n h_2(X_1,X_2,X_3) &= X_1^2 + X_2^2 + X_3^2 + X_1X_2 + X_1X_3 + X_2X_3\\\\\n h_3(X_1,X_2,X_3) &= X_1^3+X_2^3+X_3^3 + X_1^2X_2+X_1^2X_3+X_2^2X_1+X_2^2X_3+X_3^2X_1+X_3^2X_2 + X_1X_2X_3.\n\\end{align}" }, { "math_id": 8, "text": "\\sum_{k=0}^\\infty h_k(X_1,\\ldots,X_n)t^k = \\prod_{i=1}^n\\sum_{j=0}^\\infty(X_it)^j = \\prod_{i=1}^n\\frac1{1-X_it}" }, { "math_id": 9, "text": "1/\\!\\det(1-tM)" }, { "math_id": 10, "text": "t \\in \\mathbb{R}" }, { "math_id": 11, "text": "M = \\text{diag}(X_1, \\ldots, X_N)" }, { "math_id": 12, "text": "\\sum_{k=0}^\\infty h_k(X_1,\\ldots,X_n)\\, t^k = \\exp \\left( \\sum_{j=1}^\\infty (X_1^j+\\cdots+X_n^j) \\frac{t^j}j \\right)" }, { "math_id": 13, "text": "(X_1+\\cdots +X_n)t" }, { "math_id": 14, "text": "p_j:=X_1^j+\\cdots+X_n^j" }, { "math_id": 15, "text": "\\sum_{i=0}^m(-1)^ie_i(X_1,\\ldots,X_n)h_{m-i}(X_1,\\ldots,X_n)=0," }, { "math_id": 16, "text": "\\sum_{k=0}^\\infty e_k(X_1,\\ldots,X_n)(-t)^k = \\prod_{i=1}^n(1-X_it) = PE[-(X_1+\\cdots+X_n)t]" }, { "math_id": 17, "text": "\\sum_{s=0}^l\\binom{l}{s}(-1)^s=(1-1)^l=0\\quad\\mbox{for }l>0," }, { "math_id": 18, "text": "\\begin{align}\n h_1(X_1,\\ldots,X_n)&=e_1(X_1,\\ldots,X_n),\\\\\n h_2(X_1,\\ldots,X_n)&=h_1(X_1,\\ldots,X_n)e_1(X_1,\\ldots,X_n)-e_2(X_1,\\ldots,X_n),\\\\\n h_3(X_1,\\ldots,X_n)&=h_2(X_1,\\ldots,X_n)e_1(X_1,\\ldots,X_n)-h_1(X_1,\\ldots,X_n)e_2(X_1,\\ldots,X_n)+e_3(X_1,\\ldots,X_n),\\\\\n\\end{align}" }, { "math_id": 19, "text": "\\begin{align}\n e_1(X_1,\\ldots,X_n)&=h_1(X_1,\\ldots,X_n),\\\\\n e_2(X_1,\\ldots,X_n)&=h_1(X_1,\\ldots,X_n)e_1(X_1,\\ldots,X_n)-h_2(X_1,\\ldots,X_n),\\\\\n e_3(X_1,\\ldots,X_n)&=h_1(X_1,\\ldots,X_n)e_2(X_1,\\ldots,X_n)-h_2(X_1,\\ldots,X_n)e_1(X_1,\\ldots,X_n)+h_3(X_1,\\ldots,X_n),\\\\\n\\end{align}" }, { "math_id": 20, "text": "\\mathbb Z\\big[h_1(X_1,\\ldots,X_n),\\ldots,h_n(X_1,\\ldots,X_n)\\big]." 
}, { "math_id": 21, "text": " h_1(X_1,\\ldots,X_n),\\ldots,h_n(X_1,\\ldots,X_n) " }, { "math_id": 22, "text": "\\mathbb{Z}" }, { "math_id": 23, "text": "\\begin{align}\nh_n(1,2,\\ldots,k)&= \\left\\{\\begin{matrix} n+k \\\\ k \\end{matrix}\\right\\}\\\\\ne_n(1,2,\\ldots,k)&=\\left[{k+1 \\atop k+1-n}\\right]\\\\\n\\end{align}" }, { "math_id": 24, "text": "\\begin{align}\n h_3(X_1,X_2,X_3)&=m_{(3)}(X_1,X_2,X_3)+m_{(2,1)}(X_1,X_2,X_3)+m_{(1,1,1)}(X_1,X_2,X_3)\\\\\n &=\\left(X_1^3+X_2^3+X_3^3\\right)+\\left(X_1^2X_2+X_1^2X_3+X_1X_2^2+X_1X_3^2+X_2^2X_3+X_2X_3^2\\right)+(X_1X_2X_3).\\\\\n\\end{align}" }, { "math_id": 25, "text": "kh_k = \\sum_{i=1}^kh_{k-i}p_i," }, { "math_id": 26, "text": "h_k=h_k(X_1, \\dots, X_n)" }, { "math_id": 27, "text": "p_k(X_1,\\ldots,X_n)=\\sum\\nolimits_{i=1}^nx_i^k = X_1^k+\\cdots+X_n^k" }, { "math_id": 28, "text": "k" }, { "math_id": 29, "text": "\\begin{align}\n h_1 &= p_1,\\\\\n 2h_2 &= h_1p_1 + p_2,\\\\\n 3h_3 &= h_2p_1 + h_1p_2 + p_3.\\\\ \n\\end{align}" }, { "math_id": 30, "text": " \\operatorname{Trace}_{\\operatorname{Sym}^k(V)} \\left(M^{\\operatorname{Sym}(k)}\\right) = h_{k}(X_1,X_2,\\ldots,X_n)." }, { "math_id": 31, "text": "e_{i_1} \\otimes\\, e_{i_2} \\otimes \\ldots \\otimes\\, e_{i_k}" }, { "math_id": 32, "text": "X_{i_1}X_{i_2}\\cdots X_{i_k}," }, { "math_id": 33, "text": "X_i" }, { "math_id": 34, "text": "1+X_i" }, { "math_id": 35, "text": "h_k(1+X_1, \\ldots, 1+X_n)" }, { "math_id": 36, "text": "h_j(X_1, \\ldots, X_n)" }, { "math_id": 37, "text": "0 \\le j \\le k" }, { "math_id": 38, "text": "h_k(1+X_1, \\ldots, 1+X_n) = \n\\sum_{j=0}^k \\binom{n+k-1}{k-j} h_j(X_1, \\ldots, X_n)." }, { "math_id": 39, "text": "(i_1, \\ldots,i_k)" }, { "math_id": 40, "text": "1 \\le i_1 \\le \\cdots \\le i_k \\le n" } ]
https://en.wikipedia.org/wiki?curid=10176565
1018
Algebraically closed field
Algebraic structure where all polynomials have roots In mathematics, a field "F" is algebraically closed if every non-constant polynomial in "F"["x"] (the univariate polynomial ring with coefficients in "F") has a root in "F". Examples. As an example, the field of real numbers is not algebraically closed, because the polynomial equation formula_0 has no solution in real numbers, even though all its coefficients (1 and 0) are real. The same argument proves that no subfield of the real field is algebraically closed; in particular, the field of rational numbers is not algebraically closed. By contrast, the fundamental theorem of algebra states that the field of complex numbers is algebraically closed. Another example of an algebraically closed field is the field of (complex) algebraic numbers. No finite field "F" is algebraically closed, because if "a"1, "a"2, ..., "an" are the elements of "F", then the polynomial ("x" − "a"1)("x" − "a"2) ⋯ ("x" − "a""n") + 1 has no zero in "F". However, the union of all finite fields of a fixed characteristic "p" is an algebraically closed field, which is, in fact, the algebraic closure of the field formula_1 with "p" elements. Equivalent properties. Given a field "F", the assertion ""F" is algebraically closed" is equivalent to other assertions: The only irreducible polynomials are those of degree one. The field "F" is algebraically closed if and only if the only irreducible polynomials in the polynomial ring "F"["x"] are those of degree one. The assertion "the polynomials of degree one are irreducible" is trivially true for any field. If "F" is algebraically closed and "p"("x") is an irreducible polynomial of "F"["x"], then it has some root "a" and therefore "p"("x") is a multiple of "x" − "a". Since "p"("x") is irreducible, this means that "p"("x") = "k"("x" − "a"), for some "k" ∈ "F" \ {0}. On the other hand, if "F" is not algebraically closed, then there is some non-constant polynomial "p"("x") in "F"["x"] without roots in "F". Let "q"("x") be some irreducible factor of "p"("x"). Since "p"("x") has no roots in "F", "q"("x") also has no roots in "F". Therefore, "q"("x") has degree greater than one, since every first degree polynomial has one root in "F". Every polynomial is a product of first degree polynomials. The field "F" is algebraically closed if and only if every polynomial "p"("x") of degree "n" ≥ 1, with coefficients in "F", splits into linear factors. In other words, there are elements "k", "x"1, "x"2, ..., "xn" of the field "F" such that "p"("x") = "k"("x" − "x"1)("x" − "x"2) ⋯ ("x" − "xn"). If "F" has this property, then clearly every non-constant polynomial in "F"["x"] has some root in "F"; in other words, "F" is algebraically closed. On the other hand, that the property stated here holds for "F" if "F" is algebraically closed follows from the previous property together with the fact that, for any field "K", any polynomial in "K"["x"] can be written as a product of irreducible polynomials. Polynomials of prime degree have roots. If every polynomial over "F" of prime degree has a root in "F", then every non-constant polynomial has a root in "F". It follows that a field is algebraically closed if and only if every polynomial over "F" of prime degree has a root in "F". The field has no proper algebraic extension. The field "F" is algebraically closed if and only if it has no proper algebraic extension. If "F" has no proper algebraic extension, let "p"("x") be some irreducible polynomial in "F"["x"]. 
Then the quotient of "F"["x"] modulo the ideal generated by "p"("x") is an algebraic extension of "F" whose degree is equal to the degree of "p"("x"). Since it is not a proper extension, its degree is 1 and therefore the degree of "p"("x") is 1. On the other hand, if "F" has some proper algebraic extension "K", then the minimal polynomial of an element in "K" \ "F" is irreducible and its degree is greater than 1. The field has no proper finite extension. The field "F" is algebraically closed if and only if it has no proper finite extension because if, within the previous proof, the term "algebraic extension" is replaced by the term "finite extension", then the proof is still valid. (Finite extensions are necessarily algebraic.) Every endomorphism of "Fn" has some eigenvector. The field "F" is algebraically closed if and only if, for each natural number "n", every linear map from "Fn" into itself has some eigenvector. An endomorphism of "Fn" has an eigenvector if and only if its characteristic polynomial has some root. Therefore, when "F" is algebraically closed, every endomorphism of "Fn" has some eigenvector. On the other hand, if every endomorphism of "Fn" has an eigenvector, let "p"("x") be an element of "F"["x"]. Dividing by its leading coefficient, we get another polynomial "q"("x") which has roots if and only if "p"("x") has roots. But if "q"("x") = "xn" + "a""n" − 1 "x""n" − 1 + ⋯ + "a"0, then "q"("x") is the characteristic polynomial of the "n×n" companion matrix formula_2 Decomposition of rational expressions. The field "F" is algebraically closed if and only if every rational function in one variable "x", with coefficients in "F", can be written as the sum of a polynomial function with rational functions of the form "a"/("x" − "b")"n", where "n" is a natural number, and "a" and "b" are elements of "F". If "F" is algebraically closed then, since the irreducible polynomials in "F"["x"] are all of degree 1, the property stated above holds by the theorem on partial fraction decomposition. On the other hand, suppose that the property stated above holds for the field "F". Let "p"("x") be an irreducible element in "F"["x"]. Then the rational function 1/"p" can be written as the sum of a polynomial function "q" with rational functions of the form "a"/("x" – "b")"n". Therefore, the rational expression formula_3 can be written as a quotient of two polynomials in which the denominator is a product of first degree polynomials. Since "p"("x") is irreducible, it must divide this product and, therefore, it must also be a first degree polynomial. Relatively prime polynomials and roots. For any field "F", if two polynomials "p"("x"), "q"("x") ∈ "F"["x"] are relatively prime then they do not have a common root, for if "a" ∈ "F" was a common root, then "p"("x") and  "q"("x") would both be multiples of "x" − "a" and therefore they would not be relatively prime. The fields for which the reverse implication holds (that is, the fields such that whenever two polynomials have no common root then they are relatively prime) are precisely the algebraically closed fields. If the field "F" is algebraically closed, let "p"("x") and "q"("x") be two polynomials which are not relatively prime and let "r"("x") be their greatest common divisor. Then, since "r"("x") is not constant, it will have some root "a", which will be then a common root of "p"("x") and "q"("x"). If "F" is not algebraically closed, let "p"("x") be a polynomial whose degree is at least 1 without roots. 
Then "p"("x") and "p"("x") are not relatively prime, but they have no common roots (since none of them has roots). Other properties. If "F" is an algebraically closed field and "n" is a natural number, then "F" contains all "n"th roots of unity, because these are (by definition) the "n" (not necessarily distinct) zeroes of the polynomial "xn" − 1. A field extension that is contained in an extension generated by the roots of unity is a "cyclotomic extension", and the extension of a field generated by all roots of unity is sometimes called its "cyclotomic closure". Thus algebraically closed fields are cyclotomically closed. The converse is not true. Even assuming that every polynomial of the form "xn" − "a" splits into linear factors is not enough to assure that the field is algebraically closed. If a proposition which can be expressed in the language of first-order logic is true for an algebraically closed field, then it is true for every algebraically closed field with the same characteristic. Furthermore, if such a proposition is valid for an algebraically closed field with characteristic 0, then not only is it valid for all other algebraically closed fields with characteristic 0, but there is some natural number "N" such that the proposition is valid for every algebraically closed field with characteristic "p" when "p" &gt; "N". Every field "F" has some extension which is algebraically closed. Such an extension is called an algebraically closed extension. Among all such extensions there is one and only one (up to isomorphism, but not unique isomorphism) which is an algebraic extension of "F"; it is called the algebraic closure of "F". The theory of algebraically closed fields has quantifier elimination.
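The companion-matrix construction used in the eigenvector criterion above is straightforward to check by machine. The sketch below (Python with SymPy; the cubic chosen is an arbitrary illustration, not an example from the text) builds the companion matrix of a monic polynomial and confirms that its characteristic polynomial is the polynomial itself, so that over an algebraically closed field every root supplies an eigenvalue:

import sympy as sp

x = sp.symbols("x")

# An arbitrary monic cubic q(x) = x^3 + a2 x^2 + a1 x + a0.
a0, a1, a2 = 5, -1, 2
q = x**3 + a2 * x**2 + a1 * x + a0

# Companion matrix in the form displayed above (last column holds -a0, -a1, -a2).
C = sp.Matrix([
    [0, 0, -a0],
    [1, 0, -a1],
    [0, 1, -a2],
])

# Its characteristic polynomial recovers q ...
assert sp.expand(C.charpoly(x).as_expr() - q) == 0

# ... so over C the eigenvalues of C are exactly the roots of q.
print(sp.Poly(q, x).nroots())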
[ { "math_id": 0, "text": "x^2+1=0" }, { "math_id": 1, "text": "\\mathbb F_p" }, { "math_id": 2, "text": "\\begin{pmatrix}\n 0 & 0 & \\cdots & 0 & -a_0\\\\\n 1 & 0 & \\cdots & 0 & -a_1\\\\\n 0 & 1 & \\cdots & 0 & -a_2\\\\\n \\vdots & \\vdots & \\ddots & \\vdots & \\vdots\\\\\n 0 & 0 & \\cdots & 1 & -a_{n-1}\n\\end{pmatrix}." }, { "math_id": 3, "text": "\\frac1{p(x)}-q(x)=\\frac{1-p(x)q(x)}{p(x)}" } ]
https://en.wikipedia.org/wiki?curid=1018
1018020
Evaporative cooling (atomic physics)
Atomic physics technique to achieve high phase space densities Evaporative cooling is an atomic physics technique to achieve high phase space densities which optical cooling techniques alone typically can not reach. Atoms trapped in optical or magnetic traps can be evaporatively cooled via two primary mechanisms, usually specific to the type of trap in question: in magnetic traps, radiofrequency (RF) fields are used to selectively drive warm atoms from the trap by inducing transitions between trapping and non-trapping spin states; or, in optical traps, the depth of the trap itself is gradually decreased, allowing the most energetic atoms in the trap to escape over the edges of the optical barrier. In the case of a Maxwell-Boltzmann distribution for the velocities of the atoms in the trap, these atoms which escape/are driven out of the trap lie in the highest velocity tail of the distribution, meaning that their kinetic energy (and therefore temperature) is much higher than the average for the trap. The net result is that while the total trap population decreases, so does the mean energy of the remaining population. This decrease in the mean kinetic energy of the atom cloud translates into a progressive decrease in the trap temperature, cooling the trap. The process is analogous to blowing on a cup of coffee to cool it: those molecules at the highest end of the energy distribution for the coffee form a vapor above the surface and are then removed from the system by blowing them away, decreasing the average energy, and therefore temperature, of the remaining coffee molecules. Radiofrequency induced evaporation. Radiofrequency (RF) induced evaporative cooling is the most common method for evaporatively cooling atoms in a magneto-optical trap (MOT). Consider trapped atoms laser cooled on a |F=0⟩ → |F=1⟩ transition. The magnetic sublevels of the |F=1⟩ state (|mF= -1,0,1⟩) are degenerate for zero external field. The confining magnetic quadrupole field, which is zero at the center of the trap and nonzero everywhere else, causes a Zeeman shift in atoms which stray from the trap center, lifting the degeneracy of the three magnetic sublevels. The interaction energy between the total spin angular momentum of the trapped atom and the external magnetic field depends on the projection of the spin angular momentum onto the z-axis, and is proportional toformula_0From this relation it can be seen that only the |mF=-1⟩ magnetic sublevel will have a positive interaction energy with the field, that is to say, the energy of atoms in this state increases as they migrate from the trap center, making the trap center a point of minimum energy, the definition of a trap. Conversely, the energy of the |mF=0⟩ state is unchanged by the field (no trapping), and the |mF=1⟩ state actually decreases in energy as it strays from the trap center, making the center a point of maximum energy. For this reason |mF=-1⟩ is referred to as the trapping state, and |mF=0,1⟩ the non-trapping states. From the equation for the magnetic field interaction energy, it can also be seen that the energies of the |mF=1,-1⟩ states shift in opposite directions, changing the total energy difference between these two states. The |mF=-1⟩→|mF=1⟩ transition frequency therefore experiences a Zeeman shift. With this in mind, the RF evaporative cooling scheme works as follows: the size of the Zeeman shift of the -1→+1 transition depends on the strength of the magnetic field, which increases radially outward from the trap center. 
Those atoms which are coldest move within a small region around the trap center, where they experience only a small Zeeman shift in the -1→+1 transition frequency. Warm atoms, however, spend time in regions of the trap much further from the center, where the magnetic field is stronger and the Zeeman shift therefore larger. The shift induced by magnetic fields on the scale used in typical MOTs is on the order of MHz, so that a radiofrequency source can be used to drive the -1→+1 transition. The choice of frequency for the RF source corresponds to a point on the trapping potential curve at which atoms experience a Zeeman shift equal to the frequency of the RF source, which then drives the atoms to the anti-trapping |mF=1⟩ magnetic sublevel, from which they immediately exit the trap. Lowering the RF frequency therefore effectively reduces the depth of the potential well. For this reason the RF source used to remove these energetic atoms is often referred to as an "RF knife," as it effectively lowers the height of the trapping potential to remove the most energetic atoms from the trap, "cutting" away the high energy tail of the trap's energy distribution. This method was famously used to cool a cloud of rubidium atoms below the condensation critical temperature to form the first experimentally observed Bose-Einstein condensate (BEC). Optical evaporation. While the first observation of Bose-Einstein condensation was made in a magnetic atom trap using RF driven evaporative cooling, optical dipole traps are now much more common platforms for achieving condensation. Beginning in a MOT, cold, trapped atoms are transferred to the focal point of a high power, tightly focused, off-resonant laser beam. The electric field of the laser at its focus is sufficiently strong to induce dipole moments in the atoms, which are then attracted to the electric field maximum at the laser focus, effectively creating a trapping potential to hold them at the beam focus. The depth of the optical trapping potential in an optical dipole trap (ODT) is proportional to the intensity of the trapping laser light. Decreasing the power in the trapping laser beam therefore decreases the depth of the trapping potential. In the case of RF-driven evaporation, the actual height of the potential barrier confining the atoms is fixed during the evaporation sequence, but the RF knife effectively decreases the depth of this barrier, as previously discussed. For an optical trap, however, evaporation is facilitated by decreasing the laser power and thus lowering the depth of the trapping potential. As a result, the warmest atoms in the trap will have sufficient kinetic energy to be able to make it over the barrier walls and escape the trap, reducing the average energy of the remaining atoms as previously described. While trap depths for ODTs can be shallow (on the order of mK, in terms of temperature), the simplicity of this optical evaporation procedure has helped to make it increasingly popular for BEC experiments since its first demonstrations shortly after magnetic BEC production.
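The basic energetics can be illustrated with a short Monte Carlo sketch (Python with NumPy; the atom number, species mass, initial temperature and cut-off are arbitrary illustrative choices, and rethermalizing collisions and the trap potential energy are not modelled). Velocities are drawn from a Maxwell-Boltzmann distribution, atoms with kinetic energy above the lowered trap depth are discarded, and the mean kinetic energy of the remaining atoms is converted back into an effective temperature:

import numpy as np

rng = np.random.default_rng(0)
kB = 1.380649e-23      # Boltzmann constant, J/K
m = 1.44e-25           # mass of a rubidium-87 atom, kg (illustrative choice)
T0 = 100e-6            # initial temperature: 100 microkelvin
N = 200_000            # number of simulated atoms

# Maxwell-Boltzmann: each velocity component is Gaussian with variance kB*T0/m.
v = rng.normal(0.0, np.sqrt(kB * T0 / m), size=(N, 3))
E = 0.5 * m * np.sum(v**2, axis=1)        # kinetic energy of each atom

eta = 2.0                                 # trap depth in units of kB*T0
kept = E < eta * kB * T0                  # atoms below the lowered barrier stay

T_eff = E[kept].mean() / (1.5 * kB)       # <E> = (3/2) kB T for the survivors
print(f"kept {kept.mean():.0%} of the atoms")
print(f"effective temperature {T_eff * 1e6:.0f} microkelvin (down from 100)")
# Removing the high-energy tail lowers the mean energy per atom; elastic
# collisions (not simulated here) then rethermalize the cloud at this lower
# temperature, after which the cut can be lowered again.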
[ { "math_id": 0, "text": "\\Delta E\\propto-m_{F}B_{Z}" } ]
https://en.wikipedia.org/wiki?curid=1018020
10180397
Hapke parameters
The Hapke parameters are a set of parameters for an empirical model that is commonly used to describe the directional reflectance properties of the airless regolith surfaces of bodies in the Solar System. The model has been developed by astronomer Bruce Hapke at the University of Pittsburgh. The parameters are: formula_0, the single-scattering albedo, given by formula_1, where formula_2 is the scattering coefficient and formula_3 is the absorption coefficient; formula_4, the width of the opposition surge; formula_5, the strength of the opposition surge, which involves formula_6, the amplitude of the opposition surge, and formula_7, the value of the particle phase function at zero phase angle; and formula_8, the macroscopic roughness angle. The Hapke parameters can be used to derive other albedo and scattering properties, such as the geometric albedo, the phase integral, and the Bond albedo.
[ { "math_id": 0, "text": "\\bar{\\omega}_0" }, { "math_id": 1, "text": "K_s/(K_s+K_a)" }, { "math_id": 2, "text": "K_s" }, { "math_id": 3, "text": "K_a" }, { "math_id": 4, "text": "h" }, { "math_id": 5, "text": "B_0" }, { "math_id": 6, "text": "S_0" }, { "math_id": 7, "text": "P_0" }, { "math_id": 8, "text": "\\theta" } ]
https://en.wikipedia.org/wiki?curid=10180397
1018257
3-manifold
Mathematical space In mathematics, a 3-manifold is a topological space that locally looks like a three-dimensional Euclidean space. A 3-manifold can be thought of as a possible shape of the universe. Just as a sphere looks like a plane (a tangent plane) to a small and close enough observer, all 3-manifolds look like our universe does to a small enough observer. This is made more precise in the definition below. Principles. Definition. A topological space formula_0 is a 3-manifold if it is a second-countable Hausdorff space and if every point in formula_0 has a neighbourhood that is homeomorphic to Euclidean 3-space. Mathematical theory of 3-manifolds. The topological, piecewise-linear, and smooth categories are all equivalent in three dimensions, so little distinction is made in whether we are dealing with say, topological 3-manifolds, or smooth 3-manifolds. Phenomena in three dimensions can be strikingly different from phenomena in other dimensions, and so there is a prevalence of very specialized techniques that do not generalize to dimensions greater than three. This special role has led to the discovery of close connections to a diversity of other fields, such as knot theory, geometric group theory, hyperbolic geometry, number theory, Teichmüller theory, topological quantum field theory, gauge theory, Floer homology, and partial differential equations. 3-manifold theory is considered a part of low-dimensional topology or geometric topology. A key idea in the theory is to study a 3-manifold by considering special surfaces embedded in it. One can choose the surface to be nicely placed in the 3-manifold, which leads to the idea of an incompressible surface and the theory of Haken manifolds, or one can choose the complementary pieces to be as nice as possible, leading to structures such as Heegaard splittings, which are useful even in the non-Haken case. Thurston's contributions to the theory allow one to also consider, in many cases, the additional structure given by a particular Thurston model geometry (of which there are eight). The most prevalent geometry is hyperbolic geometry. Using a geometry in addition to special surfaces is often fruitful. The fundamental groups of 3-manifolds strongly reflect the geometric and topological information belonging to a 3-manifold. Thus, there is an interplay between group theory and topological methods. Invariants describing 3-manifolds. 3-manifolds are an interesting special case of low-dimensional topology because their topological invariants give a lot of information about their structure in general. If we let formula_0 be a 3-manifold and formula_1 be its fundamental group, then a lot of information can be derived from them. For example, using Poincare duality and the Hurewicz theorem, we have the following homology groups: formula_2where the last two groups are isomorphic to the group homology and cohomology of formula_3, respectively; that is,formula_4From this information a basic homotopy theoretic classification of 3-manifolds can be found. Note from the Postnikov tower there is a canonical mapformula_5If we take the pushforward of the fundamental class formula_6 into formula_7 we get an element formula_8. It turns out the group formula_3 together with the group homology class formula_9 gives a complete algebraic description of the homotopy type of formula_0. Connected sums. One important topological operation is the connected sum of two 3-manifolds formula_10. 
In fact, from general theorems in topology, we find that for a 3-manifold with a connected sum decomposition formula_11 the invariants above for formula_0 can be computed from the formula_12. In particular, formula_13 Moreover, a 3-manifold formula_0 which cannot be described as a connected sum of two 3-manifolds is called prime. Second homotopy groups. For the case of a 3-manifold given by a connected sum of prime 3-manifolds, it turns out there is a nice description of the second homotopy group as a formula_14-module. For the special case where each formula_15 is infinite but not cyclic, if we take based embeddings of a 2-sphere formula_16, where formula_17, then the second homotopy group has the presentation formula_18 giving a straightforward computation of this group. Important examples of 3-manifolds. Euclidean 3-space. Euclidean 3-space is the most important example of a 3-manifold, as all others are defined in relation to it. This is just the standard 3-dimensional vector space over the real numbers. 3-sphere. A 3-sphere is a higher-dimensional analogue of a sphere. It consists of the set of points equidistant from a fixed central point in 4-dimensional Euclidean space. Just as an ordinary sphere (or 2-sphere) is a two-dimensional surface that forms the boundary of a ball in three dimensions, a 3-sphere is an object with three dimensions that forms the boundary of a ball in four dimensions. Many examples of 3-manifolds can be constructed by taking quotients of the 3-sphere by a finite group formula_3 acting freely on formula_19 via a map formula_20, so formula_21. 
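A concrete instance of this quotient construction is the action of the eight unit quaternions ±1, ±i, ±j, ±k (the quaternion group) on the 3-sphere of unit quaternions by left multiplication. The sketch below (Python with NumPy; the Hamilton product is written out explicitly, and the random sample is only an illustration, not a proof) checks on sample points that no non-identity element has a fixed point, so the action is free and the quotient is a closed 3-manifold whose fundamental group is the quaternion group:

import numpy as np

def hamilton(a, b):
    """Hamilton product of quaternions written as (real, i, j, k) components."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([
        a0*b0 - a1*b1 - a2*b2 - a3*b3,
        a0*b1 + a1*b0 + a2*b3 - a3*b2,
        a0*b2 - a1*b3 + a2*b0 + a3*b1,
        a0*b3 + a1*b2 - a2*b1 + a3*b0,
    ])

# The quaternion group {±1, ±i, ±j, ±k} as unit vectors in R^4.
basis = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
Q8 = [s * np.array(u, dtype=float) for u in basis for s in (1.0, -1.0)]

rng = np.random.default_rng(1)
pts = rng.normal(size=(100, 4))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # 100 random points of S^3

identity = np.array([1.0, 0.0, 0.0, 0.0])
for g in Q8:
    if np.allclose(g, identity):
        continue                                    # the identity fixes every point
    displacement = [np.linalg.norm(hamilton(g, p) - p) for p in pts]
    assert min(displacement) > 1e-9                 # no fixed points: the action is free

# In fact |g*p - p| = |g - 1| for unit quaternions, so freeness also follows
# directly from the fact that the quaternions form a division ring.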
The Poincaré homology sphere (also known as Poincaré dodecahedral space) is a particular example of a homology sphere. Being a spherical 3-manifold, it is the only homology 3-sphere (besides the 3-sphere itself) with a finite fundamental group. Its fundamental group is known as the binary icosahedral group and has order 120. This shows the Poincaré conjecture cannot be stated in homology terms alone. In 2003, lack of structure on the largest scales (above 60 degrees) in the cosmic microwave background as observed for one year by the WMAP spacecraft led to the suggestion, by Jean-Pierre Luminet of the Observatoire de Paris and colleagues, that the shape of the universe is a Poincaré sphere. In 2008, astronomers found the best orientation on the sky for the model and confirmed some of the predictions of the model, using three years of observations by the WMAP spacecraft. However, there is no strong support for the correctness of the model, as yet. Seifert–Weber space. In mathematics, Seifert–Weber space (introduced by Herbert Seifert and Constantin Weber) is a closed hyperbolic 3-manifold. It is also known as Seifert–Weber dodecahedral space and hyperbolic dodecahedral space. It is one of the first discovered examples of closed hyperbolic 3-manifolds. It is constructed by gluing each face of a dodecahedron to its opposite in a way that produces a closed 3-manifold. There are three ways to do this gluing consistently. Opposite faces are misaligned by 1/10 of a turn, so to match them they must be rotated by 1/10, 3/10 or 5/10 turn; a rotation of 3/10 gives the Seifert–Weber space. Rotation of 1/10 gives the Poincaré homology sphere, and rotation by 5/10 gives 3-dimensional real projective space. With the 3/10-turn gluing pattern, the edges of the original dodecahedron are glued to each other in groups of five. Thus, in the Seifert–Weber space, each edge is surrounded by five pentagonal faces, and the dihedral angle between these pentagons is 72°. This does not match the 117° dihedral angle of a regular dodecahedron in Euclidean space, but in hyperbolic space there exist regular dodecahedra with any dihedral angle between 60° and 117°, and the hyperbolic dodecahedron with dihedral angle 72° may be used to give the Seifert–Weber space a geometric structure as a hyperbolic manifold. It is a quotient space of the order-5 dodecahedral honeycomb, a regular tessellation of hyperbolic 3-space by dodecahedra with this dihedral angle. Gieseking manifold. In mathematics, the Gieseking manifold is a cusped hyperbolic 3-manifold of finite volume. It is non-orientable and has the smallest volume among non-compact hyperbolic manifolds, having volume approximately 1.01494161. It was discovered by Hugo Gieseking (1912). The Gieseking manifold can be constructed by removing the vertices from a tetrahedron, then gluing the faces together in pairs using affine-linear maps. Label the vertices 0, 1, 2, 3. Glue the face with vertices 0,1,2 to the face with vertices 3,1,0 in that order. Glue the face 0,2,3 to the face 3,2,1 in that order. In the hyperbolic structure of the Gieseking manifold, this ideal tetrahedron is the canonical polyhedral decomposition of David B. A. Epstein and Robert C. Penner. Moreover, the angle made by the faces is formula_23. The triangulation has one tetrahedron, two faces, one edge and no vertices, so all the edges of the original tetrahedron are glued together. Some important classes of 3-manifolds. Hyperbolic link complements. 
A hyperbolic link is a link in the 3-sphere whose complement has a complete Riemannian metric of constant negative curvature, i.e. has a hyperbolic geometry. A hyperbolic knot is a hyperbolic link with one component. Particularly well-known and well-studied examples include the complements of the figure-eight knot, the Whitehead link, and the Borromean rings. The classes are not necessarily mutually exclusive. Some important structures on 3-manifolds. Contact geometry. Contact geometry is the study of a geometric structure on smooth manifolds given by a hyperplane distribution in the tangent bundle and specified by a one-form, both of which satisfy a 'maximum non-degeneracy' condition called 'complete non-integrability'. From the Frobenius theorem, one recognizes the condition as the opposite of the condition that the distribution be determined by a codimension one foliation on the manifold ('complete integrability'). Contact geometry is in many ways an odd-dimensional counterpart of symplectic geometry, which belongs to the even-dimensional world. Both contact and symplectic geometry are motivated by the mathematical formalism of classical mechanics, where one can consider either the even-dimensional phase space of a mechanical system or the odd-dimensional extended phase space that includes the time variable. Haken manifold. A Haken manifold is a compact, P²-irreducible 3-manifold that is sufficiently large, meaning that it contains a properly embedded two-sided incompressible surface. Sometimes one considers only orientable Haken manifolds, in which case a Haken manifold is a compact, orientable, irreducible 3-manifold that contains an orientable, incompressible surface. A 3-manifold finitely covered by a Haken manifold is said to be virtually Haken. The Virtually Haken conjecture asserts that every compact, irreducible 3-manifold with infinite fundamental group is virtually Haken. Haken manifolds were introduced by Wolfgang Haken. Haken proved that Haken manifolds have a hierarchy, where they can be split up into 3-balls along incompressible surfaces. Haken also showed that there was a finite procedure to find an incompressible surface if the 3-manifold had one. Jaco and Oertel gave an algorithm to determine if a 3-manifold was Haken. Essential lamination. An essential lamination is a lamination in which every leaf is incompressible and end-incompressible, the complementary regions of the lamination are irreducible, and there are no spherical leaves. Essential laminations generalize the incompressible surfaces found in Haken manifolds. Heegaard splitting. A Heegaard splitting is a decomposition of a compact oriented 3-manifold that results from dividing it into two handlebodies. Every closed, orientable three-manifold may be so obtained; this follows from deep results on the triangulability of three-manifolds due to Moise. This contrasts strongly with higher-dimensional manifolds, which need not admit smooth or piecewise linear structures. Assuming smoothness, the existence of a Heegaard splitting also follows from the work of Smale on handle decompositions from Morse theory. Taut foliation. A taut foliation is a codimension 1 foliation of a 3-manifold with the property that there is a single transverse circle intersecting every leaf. By a transverse circle is meant a closed loop that is always transverse to the tangent field of the foliation. Equivalently, by a result of Dennis Sullivan, a codimension 1 foliation is taut if there exists a Riemannian metric that makes each leaf a minimal surface. 
Taut foliations were brought to prominence by the work of William Thurston and David Gabai. Foundational results. Some results are still referred to as conjectures for historical reasons. We begin with the purely topological: Moise's theorem. In geometric topology, Moise's theorem, proved by Edwin E. Moise, states that any topological 3-manifold has an essentially unique piecewise-linear structure and smooth structure. As a corollary, every compact 3-manifold has a Heegaard splitting. Prime decomposition theorem. The prime decomposition theorem for 3-manifolds states that every compact, orientable 3-manifold is the connected sum of a unique (up to homeomorphism) collection of prime 3-manifolds. A manifold is "prime" if it cannot be presented as a connected sum of more than one manifold, none of which is the sphere of the same dimension. Kneser–Haken finiteness. Kneser–Haken finiteness says that for each compact 3-manifold, there is a constant C such that any collection of disjoint incompressible embedded surfaces of cardinality greater than C must contain parallel elements. Loop and Sphere theorems. The loop theorem is a generalization of Dehn's lemma and should more properly be called the "disk theorem". It was first proven by Christos Papakyriakopoulos in 1956, along with Dehn's lemma and the Sphere theorem. A simple and useful version of the loop theorem states that if there is a map formula_24 with formula_25 not nullhomotopic in formula_26, then there is an embedding with the same property. The sphere theorem of Papakyriakopoulos (1957) gives conditions for elements of the second homotopy group of a 3-manifold to be represented by embedded spheres. One example is the following: Let formula_0 be an orientable 3-manifold such that formula_27 is not the trivial group. Then there exists a non-zero element of formula_27 having a representative that is an embedding formula_28. Annulus and Torus theorems. The annulus theorem states that if a pair of disjoint simple closed curves on the boundary of a three-manifold are freely homotopic then they cobound a properly embedded annulus. This should not be confused with the high-dimensional theorem of the same name. The torus theorem is as follows: Let M be a compact, irreducible 3-manifold with nonempty boundary. If M admits an essential map of a torus, then M admits an essential embedding of either a torus or an annulus. JSJ decomposition. The JSJ decomposition, also known as the toral decomposition, is a topological construct given by the following theorem: Irreducible orientable closed (i.e., compact and without boundary) 3-manifolds have a unique (up to isotopy) minimal collection of disjointly embedded incompressible tori such that each component of the 3-manifold obtained by cutting along the tori is either atoroidal or Seifert-fibered. The acronym JSJ is for William Jaco, Peter Shalen, and Klaus Johannson. The first two worked together, and the third worked independently. Scott core theorem. The Scott core theorem is a theorem about the finite presentability of fundamental groups of 3-manifolds due to G. Peter Scott. The precise statement is as follows: Given a 3-manifold (not necessarily compact) with finitely generated fundamental group, there is a compact three-dimensional submanifold, called the compact core or Scott core, such that its inclusion map induces an isomorphism on fundamental groups. In particular, this means a finitely generated 3-manifold group is finitely presentable. 
A simplified proof has since been given, along with a stronger uniqueness statement. Lickorish–Wallace theorem. The Lickorish–Wallace theorem states that any closed, orientable, connected 3-manifold may be obtained by performing Dehn surgery on a framed link in the 3-sphere with formula_29 surgery coefficients. Furthermore, each component of the link can be assumed to be unknotted. Waldhausen's theorems on topological rigidity. Friedhelm Waldhausen's theorems on topological rigidity say that certain 3-manifolds (such as those with an incompressible surface) are homeomorphic if there is an isomorphism of fundamental groups which respects the boundary. Waldhausen conjecture on Heegaard splittings. Waldhausen conjectured that every closed orientable 3-manifold has only finitely many Heegaard splittings (up to homeomorphism) of any given genus. Smith conjecture. The Smith conjecture (now proven) states that if "f" is a diffeomorphism of the 3-sphere of finite order, then the fixed point set of "f" cannot be a nontrivial knot. Cyclic surgery theorem. The cyclic surgery theorem states that, for a compact, connected, orientable, irreducible three-manifold "M" whose boundary is a torus "T", if "M" is not a Seifert-fibered space and "r,s" are slopes on "T" such that their Dehn fillings have cyclic fundamental group, then the distance between "r" and "s" (the minimal number of times that two simple closed curves in "T" representing "r" and "s" must intersect) is at most 1. Consequently, there are at most three Dehn fillings of "M" with cyclic fundamental group. Thurston's hyperbolic Dehn surgery theorem and the Jørgensen–Thurston theorem. Thurston's hyperbolic Dehn surgery theorem states: formula_30 is hyperbolic as long as a finite set of "exceptional slopes" formula_31 is avoided for the "i"-th cusp for each "i". In addition, formula_30 converges to "M" in "H" as all formula_32 for all formula_33 corresponding to non-empty Dehn fillings formula_34. This theorem is due to William Thurston and fundamental to the theory of hyperbolic 3-manifolds. It shows that nontrivial limits exist in "H". Troels Jørgensen's study of the geometric topology further shows that all nontrivial limits arise by Dehn filling as in the theorem. Another important result by Thurston is that volume decreases under hyperbolic Dehn filling. In fact, the theorem states that volume decreases under topological Dehn filling, assuming of course that the Dehn-filled manifold is hyperbolic. The proof relies on basic properties of the Gromov norm. Jørgensen also showed that the volume function on this space is a continuous, proper function. Thus by the previous results, nontrivial limits in "H" are taken to nontrivial limits in the set of volumes. In fact, one can further conclude, as did Thurston, that the set of volumes of finite-volume hyperbolic 3-manifolds has ordinal type formula_35. This result is known as the Thurston–Jørgensen theorem. Further work characterizing this set was done by Gromov. Also, Gabai, Meyerhoff and Milley showed that the Weeks manifold has the smallest volume of any closed orientable hyperbolic 3-manifold. Thurston's hyperbolization theorem for Haken manifolds. One form of Thurston's geometrization theorem states: If "M" is a compact irreducible atoroidal Haken manifold whose boundary has zero Euler characteristic, then the interior of "M" has a complete hyperbolic structure of finite volume. 
The Mostow rigidity theorem implies that if a manifold of dimension at least 3 has a hyperbolic structure of finite volume, then it is essentially unique. The conditions that the manifold "M" should be irreducible and atoroidal are necessary, as hyperbolic manifolds have these properties. However the condition that the manifold be Haken is unnecessarily strong. Thurston's hyperbolization conjecture states that a closed irreducible atoroidal 3-manifold with infinite fundamental group is hyperbolic, and this follows from Perelman's proof of the Thurston geometrization conjecture. Tameness conjecture, also called the Marden conjecture or tame ends conjecture. The tameness theorem states that every complete hyperbolic 3-manifold with finitely generated fundamental group is topologically tame, in other words homeomorphic to the interior of a compact 3-manifold. The tameness theorem was conjectured by Marden. It was proved by Agol and, independently, by Danny Calegari and David Gabai. It is one of the fundamental properties of geometrically infinite hyperbolic 3-manifolds, together with the density theorem for Kleinian groups and the ending lamination theorem. It also implies the Ahlfors measure conjecture. Ending lamination conjecture. The ending lamination theorem, originally conjectured by William Thurston and later proven by Jeffrey Brock, Richard Canary, and Yair Minsky, states that hyperbolic 3-manifolds with finitely generated fundamental groups are determined by their topology together with certain "end invariants", which are geodesic laminations on some surfaces in the boundary of the manifold. Poincaré conjecture. The 3-sphere is an especially important 3-manifold because of the now-proven Poincaré conjecture. Originally conjectured by Henri Poincaré, the theorem concerns a space that locally looks like ordinary three-dimensional space but is connected, finite in size, and lacks any boundary (a closed 3-manifold). The Poincaré conjecture claims that if such a space has the additional property that each loop in the space can be continuously tightened to a point, then it is necessarily a three-dimensional sphere. An analogous result has been known in higher dimensions for some time. After nearly a century of effort by mathematicians, Grigori Perelman presented a proof of the conjecture in three papers made available in 2002 and 2003 on arXiv. The proof followed on from the program of Richard S. Hamilton to use the Ricci flow to attack the problem. Perelman introduced a modification of the standard Ricci flow, called "Ricci flow with surgery" to systematically excise singular regions as they develop, in a controlled way. Several teams of mathematicians have verified that Perelman's proof is correct. Thurston's geometrization conjecture. Thurston's geometrization conjecture states that certain three-dimensional topological spaces each have a unique geometric structure that can be associated with them. It is an analogue of the uniformization theorem for two-dimensional surfaces, which states that every simply connected Riemann surface can be given one of three geometries (Euclidean, spherical, or hyperbolic). In three dimensions, it is not always possible to assign a single geometry to a whole topological space. Instead, the geometrization conjecture states that every closed 3-manifold can be decomposed in a canonical way into pieces that each have one of eight types of geometric structure. 
The conjecture was proposed by William Thurston, and implies several other conjectures, such as the Poincaré conjecture and Thurston's elliptization conjecture. Thurston's hyperbolization theorem implies that Haken manifolds satisfy the geometrization conjecture. Thurston announced a proof in the 1980s and since then several complete proofs have appeared in print. Grigori Perelman sketched a proof of the full geometrization conjecture in 2003 using Ricci flow with surgery. There are now several different manuscripts (see below) with details of the proof. The Poincaré conjecture and the spherical space form conjecture are corollaries of the geometrization conjecture, although there are shorter proofs of the former that do not lead to the geometrization conjecture. Virtually fibered conjecture and Virtually Haken conjecture. The virtually fibered conjecture, formulated by American mathematician William Thurston, states that every closed, irreducible, atoroidal 3-manifold with infinite fundamental group has a finite cover which is a surface bundle over the circle. The virtually Haken conjecture states that every compact, orientable, irreducible three-dimensional manifold with infinite fundamental group is "virtually Haken". That is, it has a finite cover (a covering space with a finite-to-one covering map) that is a Haken manifold. In a posting on the arXiv on 25 Aug 2009, Daniel Wise implied (by referring to a then-unpublished longer manuscript) that he had proven the virtually fibered conjecture for the case where the 3-manifold is closed, hyperbolic, and Haken. This was followed by a survey article in Electronic Research Announcements in Mathematical Sciences. Several more preprints have followed, including the aforementioned longer manuscript by Wise. In March 2012, during a conference at the Institut Henri Poincaré in Paris, Ian Agol announced he could prove the virtually Haken conjecture for closed hyperbolic 3-manifolds. The proof built on results of Kahn and Markovic in their proof of the surface subgroup conjecture, results of Wise in proving the Malnormal Special Quotient Theorem, and results of Bergeron and Wise on the cubulation of groups. Taken together with Wise's results, this implies the virtually fibered conjecture for all closed hyperbolic 3-manifolds. Simple loop conjecture. If formula_36 is a map of closed connected surfaces such that formula_37 is not injective, then there exists a non-contractible simple closed curve formula_38 such that formula_39 is homotopically trivial. This conjecture was proven by David Gabai. Surface subgroup conjecture. The surface subgroup conjecture of Friedhelm Waldhausen states that the fundamental group of every closed, irreducible 3-manifold with infinite fundamental group has a surface subgroup. By "surface subgroup" we mean the fundamental group of a closed surface other than the 2-sphere. This problem is listed as Problem 3.75 in Robion Kirby's problem list. Assuming the geometrization conjecture, the only open case was that of closed hyperbolic 3-manifolds. A proof of this case was announced in the summer of 2009 by Jeremy Kahn and Vladimir Markovic and outlined in a talk on August 4, 2009 at the FRG (Focused Research Group) Conference hosted by the University of Utah. A preprint appeared on the arXiv in October 2009. Their paper was published in the Annals of Mathematics in 2012. In June 2012, Kahn and Markovic were given the Clay Research Award by the Clay Mathematics Institute at a ceremony in Oxford. Important conjectures. 
Cabling conjecture. The cabling conjecture states that if Dehn surgery on a knot in the 3-sphere yields a reducible 3-manifold, then that knot is a formula_40-cable on some other knot, and the surgery must have been performed using the slope formula_41. Lubotzky–Sarnak conjecture. The fundamental group of any finite volume hyperbolic "n"-manifold does not have Property τ.
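The quotient description of the 3-torus given earlier (T3 as R3 modulo the integer lattice Z3, with coordinate-wise group structure) can be made concrete with a minimal code sketch. The following Python fragment is only an illustration; the function names are ours, and the sample points are arbitrary dyadic rationals chosen so that the floating-point arithmetic is exact.

# Minimal sketch: the 3-torus T^3 = R^3 / Z^3.
# A point of T^3 is represented by its fractional parts, i.e. a point of [0, 1)^3.
# Function names are illustrative only.

def project_to_torus(p):
    """Send a point of R^3 to its representative in the fundamental cube [0, 1)^3."""
    return tuple(coordinate % 1.0 for coordinate in p)

def torus_add(p, q):
    """Group law on T^3: coordinate-wise addition of representatives, modulo 1
    (equivalently, coordinate-wise multiplication of unit complex numbers)."""
    return tuple((a + b) % 1.0 for a, b in zip(p, q))

# Points of R^3 that differ by a vector of the integer lattice Z^3
# project to the same point of the torus:
x = (0.25, 1.75, -0.5)
y = (3.25, -0.25, 0.5)      # x shifted by the lattice vector (3, -2, 1)
assert project_to_torus(x) == project_to_torus(y)

# The image of the origin is the identity element, and every element has an inverse:
e = project_to_torus((0.0, 0.0, 0.0))
p = (0.125, 0.625, 0.875)
inverse_p = tuple((-c) % 1.0 for c in p)
assert torus_add(p, e) == p
assert torus_add(p, inverse_p) == e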
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "\\pi = \\pi_1(M)" }, { "math_id": 2, "text": "\\begin{align}\nH_0(M) &= H^3(M) =& \\mathbb{Z} \\\\\nH_1(M) &= H^2(M) =& \\pi/[\\pi,\\pi] \\\\\nH_2(M) &= H^1(M) =& \\text{Hom}(\\pi,\\mathbb{Z}) \\\\\nH_3(M) &= H^0(M) = & \\mathbb{Z}\n\\end{align}" }, { "math_id": 3, "text": "\\pi" }, { "math_id": 4, "text": "\\begin{align}\nH_1(\\pi;\\mathbb{Z}) &\\cong \\pi/[\\pi,\\pi] \\\\\nH^1(\\pi;\\mathbb{Z}) &\\cong \\text{Hom}(\\pi,\\mathbb{Z})\n\\end{align}" }, { "math_id": 5, "text": "q: M \\to B\\pi" }, { "math_id": 6, "text": "[M] \\in H_3(M)" }, { "math_id": 7, "text": "H_3(B\\pi)" }, { "math_id": 8, "text": "\\zeta_M = q_*([M])" }, { "math_id": 9, "text": "\\zeta_M \\in H_3(\\pi,\\mathbb{Z})" }, { "math_id": 10, "text": "M_1\\# M_2" }, { "math_id": 11, "text": "M = M_1\\# \\cdots \\# M_n" }, { "math_id": 12, "text": "M_i" }, { "math_id": 13, "text": "\\begin{align}\nH_1(M) &= H_1(M_1)\\oplus \\cdots \\oplus H_1(M_n) \\\\\nH_2(M) &= H_2(M_1)\\oplus \\cdots \\oplus H_2(M_n) \\\\\n\\pi_1(M) &= \\pi_1(M_1) * \\cdots * \\pi_1(M_n)\n\\end{align}" }, { "math_id": 14, "text": "\\mathbb{Z}[\\pi]" }, { "math_id": 15, "text": "\\pi_1(M_i)" }, { "math_id": 16, "text": "\\sigma_i:S^2 \\to M" }, { "math_id": 17, "text": "\\sigma_i(S^2) \\subset M_i - \\{B^3\\} \\subset M" }, { "math_id": 18, "text": "\\pi_2(M) = \\frac{\\mathbb{Z}[\\pi]\\{ \\sigma_1,\\ldots,\\sigma_n\\}}{(\\sigma_1 + \\cdots + \\sigma_n)}" }, { "math_id": 19, "text": "S^3" }, { "math_id": 20, "text": "\\pi \\to \\text{SO}(4)" }, { "math_id": 21, "text": "M = S^3/\\pi" }, { "math_id": 22, "text": "\\mathbf{T}^3 = S^1 \\times S^1 \\times S^1." }, { "math_id": 23, "text": "\\pi/3" }, { "math_id": 24, "text": "f\\colon (D^2,\\partial D^2)\\to (M,\\partial M) \\, " }, { "math_id": 25, "text": "f|\\partial D^2" }, { "math_id": 26, "text": "\\partial M" }, { "math_id": 27, "text": "\\pi_2(M)" }, { "math_id": 28, "text": "S^2\\to M" }, { "math_id": 29, "text": "\\pm 1" }, { "math_id": 30, "text": "M(u_1, u_2, \\dots, u_n)" }, { "math_id": 31, "text": "E_i" }, { "math_id": 32, "text": "p_i^2+q_i^2 \\rightarrow \\infty" }, { "math_id": 33, "text": "p_i/q_i" }, { "math_id": 34, "text": "u_i" }, { "math_id": 35, "text": "\\omega^\\omega" }, { "math_id": 36, "text": "f\\colon S \\rightarrow T" }, { "math_id": 37, "text": "f_\\star \\colon \\pi_1(S) \\rightarrow \\pi_1(T)" }, { "math_id": 38, "text": "\\alpha \\subset S " }, { "math_id": 39, "text": "f|_a" }, { "math_id": 40, "text": "(p,q)" }, { "math_id": 41, "text": "pq" } ]
https://en.wikipedia.org/wiki?curid=1018257
10182648
Monetary inflation
Sustained increase in a state's money supply (not prices) Monetary inflation is a sustained increase in the money supply of a country (or currency area). Depending on many factors, especially public expectations, the fundamental state and development of the economy, and the transmission mechanism, it is likely to result in price inflation, usually just called "inflation": a rise in the general level of prices of goods and services. There is general agreement among economists that there is a causal relationship between monetary inflation and price inflation. But there is no common view about the exact theoretical mechanisms and relationships, nor about how to measure it accurately. This relationship is also constantly changing within a larger, complex economic system. So there is a great deal of debate on the issues involved, such as how to measure the monetary base and price inflation, how to measure the effect of public expectations, how to judge the effect of financial innovations on the transmission mechanisms, and how much factors like the velocity of money affect the relationship. Thus, there are different views on what could be the best targets and tools in monetary policy. However, there is a general consensus on the importance and responsibility of central banks and monetary authorities in setting public expectations of price inflation and in trying to control it. Currently, most central banks follow a monetarist or Keynesian approach, or more often a mix of both. There is a trend among central banks towards the use of inflation targeting. Quantity theory. The monetarist explanation of inflation operates through the Quantity Theory of Money, formula_0 where "M" is the money supply, "V" is the velocity of circulation, "P" is the price level and "T" is total transactions or output. As monetarists assume that "V" and "T" are determined, in the long run, by real variables, such as the productive capacity of the economy, there is a direct relationship between the growth of the money supply and inflation. (A small numerical sketch of this identity is given below, after the discussion of Modern Monetary Theory.) The mechanisms by which excess money might be translated into inflation are examined below. Individuals can also spend their excess money balances directly on goods and services. This has a direct impact on inflation by raising aggregate demand. Also, the increase in the demand for labour resulting from higher demands for goods and services will cause a rise in money wages and unit labour costs. The more inelastic the aggregate supply in the economy, the greater the impact on inflation. The increase in demand for goods and services may cause a rise in imports. Although this leakage from the domestic economy reduces the money supply, it also increases the supply of money on the foreign exchange market, thus applying downward pressure on the exchange rate. This may cause imported inflation. Modern Monetary Theory. Modern Monetary Theory, like all derivatives of the Chartalist school, emphasizes that in nations with monetary sovereignty, a country is always able to repay debts that are denominated in its own currency. However, under modern-day monetary systems, the supply of money is largely determined endogenously. But exogenous factors like government surpluses and deficits play a role and allow the government to set inflation targets. Yet adherents of this school note that monetary inflation and price inflation are distinct, and that when there is idle capacity, monetary inflation can cause a boost in aggregate demand which can, up to a point, offset price inflation. 
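Returning to the quantity theory identity MV = PT above: the following short Python sketch simply solves the identity for the price level and shows that, with V and T held fixed, a given percentage increase in the money supply translates into the same percentage increase in prices. All the numbers are invented for illustration and do not describe any actual economy.

# Illustrative sketch of the equation of exchange M*V = P*T.
# The figures below are made up for illustration; they are not real data.

def price_level(money_supply, velocity, transactions):
    """Solve M*V = P*T for the price level P."""
    return money_supply * velocity / transactions

V = 5.0       # velocity of circulation, assumed fixed in the long run
T = 1000.0    # total transactions / real output, assumed fixed

M0 = 200.0
M1 = 220.0    # a 10% increase in the money supply

P0 = price_level(M0, V, T)
P1 = price_level(M1, V, T)

print(f"price level before: {P0:.3f}, after: {P1:.3f}")
print(f"money growth: {(M1 - M0) / M0:.1%}, implied price inflation: {(P1 - P0) / P0:.1%}")
# With V and T fixed, a 10% rise in M implies a 10% rise in P.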
Austrian view. The Austrian School maintains that inflation is any increase of the money supply (i.e. units of currency or means of exchange) that is not matched by an increase in demand for money, or as Ludwig von Mises put it: In theoretical investigation there is only one meaning that can rationally be attached to the expression Inflation: an increase in the quantity of money (in the broader sense of the term, so as to include fiduciary media as well), that is not offset by a corresponding increase in the need for money (again in the broader sense of the term), so that a fall in the objective exchange-value of money must occur. Given that all major economies currently have a central bank supporting the private banking system, money can be supplied into these economies by means of bank credit (or debt). Austrian economists believe that credit growth propagates business cycles ("see" Austrian Business Cycle Theory).
[ { "math_id": 0, "text": "MV = PT" } ]
https://en.wikipedia.org/wiki?curid=10182648
1018336
Lawson criterion
Criterion for igniting a nuclear fusion chain reaction The Lawson criterion is a figure of merit used in nuclear fusion research. It compares the rate of energy being generated by fusion reactions within the fusion fuel to the rate of energy losses to the environment. When the rate of production is higher than the rate of loss, the system will produce net energy. If enough of that energy is captured by the fuel, the system will become self-sustaining and is said to be ignited. The concept was first developed by John D. Lawson in a classified 1955 paper that was declassified and published in 1957. As originally formulated, the Lawson criterion gives a minimum required value for the product of the plasma (electron) density "n"e and the "energy confinement time" formula_0 that leads to net energy output. Later analysis suggested that a more useful figure of merit is the triple product of density, confinement time, and plasma temperature "T". The triple product also has a minimum required value, and the name "Lawson criterion" may refer to this value. On August 8, 2021, researchers at Lawrence Livermore National Laboratory's National Ignition Facility in California confirmed that they had produced the first-ever successful ignition of a nuclear fusion reaction, surpassing the Lawson criterion in the experiment. Energy balance. The central concept of the Lawson criterion is an examination of the energy balance for any fusion power plant using a hot plasma. This is shown below: Net power = Efficiency × (Fusion − Radiation loss − Conduction loss) Lawson calculated the fusion rate by assuming that the fusion reactor contains a hot plasma cloud whose individual particle energies follow a Maxwell–Boltzmann distribution characterized by the plasma's temperature. Based on that assumption, he estimated the first term, the fusion energy being produced, using the volumetric fusion equation. Fusion = Number density of fuel A × Number density of fuel B × Cross section(Temperature) × Energy per reaction This equation is typically averaged over a population of ions with a Maxwellian velocity distribution. The result is the amount of energy being created by the plasma at any instant in time. Lawson then estimated the radiation losses using the following equation: formula_1 where "N" is the number density of the cloud and "T" is the temperature. For his analysis, Lawson ignores conduction losses. In reality this is nearly impossible to achieve; practically all systems lose energy through mass leaving the plasma and carrying away part of its energy. By equating radiation losses and the volumetric fusion rates, Lawson estimated the minimum temperature for fusion for the deuterium–tritium (D-T) reaction formula_2 to be 30 million degrees (2.6 keV), and for the deuterium–deuterium (D-D) reaction formula_3 to be 150 million degrees (12.9 keV). Extensions into "nτ""E". The confinement time formula_0 measures the rate at which a system loses energy to its environment. The faster the rate of loss of energy, formula_4, the shorter the energy confinement time. It is the energy density formula_5 (energy content per unit volume) divided by the power loss density formula_4 (rate of energy loss per unit volume): formula_6 For a fusion reactor to operate in steady state, the fusion plasma must be maintained at a constant temperature. Thermal energy must therefore be added at the same rate the plasma loses energy in order to maintain the fusion conditions. 
This energy can be supplied by the fusion reactions themselves, depending on the reaction type, or by supplying additional heating through a variety of methods. For illustration, the Lawson criterion for the D-T reaction will be derived here, but the same principle can be applied to other fusion fuels. It will also be assumed that all species have the same temperature, that there are no ions present other than fuel ions (no impurities and no helium ash), and that D and T are present in the optimal 50-50 mixture.a Ion density then equals electron density and the energy density of both electrons and ions together is given by formula_7 where formula_8 is the temperature in electronvolts (eV) and formula_9 is the particle density. The volume rate formula_10 (reactions per volume per time) of fusion reactions is formula_11 where formula_12 is the fusion cross section, formula_13 is the relative velocity, and formula_14 denotes an average over the Maxwellian velocity distribution at the temperature formula_8. The volume rate of heating by fusion is formula_10 times formula_15, the energy of the charged fusion products (the neutrons cannot help to heat the plasma). In the case of the D-T reaction, formula_16. The Lawson criterion requires that fusion heating exceeds the losses: formula_17 Substituting in known quantities yields: formula_18 Rearranging the equation produces the relationship n τE ≥ 12 T / (Ech ⟨σv⟩); this is referred to below as relationship (1). The quantity formula_19 is a function of temperature with an absolute minimum. Replacing the function with its minimum value provides an absolute lower limit for the product formula_20. This is the Lawson criterion. For the deuterium–tritium reaction, the physical value is at least formula_21. The minimum of the product occurs near formula_22. Extension into the "triple product". A still more useful figure of merit is the "triple product" of density, temperature, and confinement time, "nTτ""E". For most confinement concepts, whether inertial, mirror, or toroidal confinement, the density and temperature can be varied over a fairly wide range, but the maximum attainable pressure "p" is a constant. When such is the case, the fusion power density is proportional to p^2⟨σv⟩/T^2. The maximum fusion power available from a given machine is therefore reached at the temperature "T" where ⟨σv⟩/T^2 is a maximum. By continuation of the above derivation, the following inequality is readily obtained: formula_23 The quantity formula_24 is also a function of temperature with an absolute minimum at a slightly lower temperature than formula_25. For the D-T reaction, the minimum occurs at "T" = 14 keV. The average ⟨σv⟩ in this temperature region can be approximated as formula_26 so the minimum value of the triple product at "T" = 14 keV is about formula_27 (A short numerical check of this value is given at the end of this article.) This number has not yet been achieved in any reactor, although the latest generations of machines have come close. JT-60 reported 1.53×10^21 keV·s/m^3. For instance, the TFTR has achieved the densities and energy lifetimes needed to achieve Lawson at the temperatures it can create, but it cannot create those temperatures at the same time. ITER aims to do both. As for tokamaks, there is a special motivation for using the triple product. Empirically, the energy confinement time τ"E" is found to be nearly proportional to n^(1/3)/P^(2/3). In an ignited plasma near the optimum temperature, the heating power "P" equals fusion power and therefore is proportional to n^2 T^2. 
The triple product scales as formula_28 The triple product is only weakly dependent on temperature as T^(−1/3). This makes the triple product an adequate measure of the efficiency of the confinement scheme. Inertial confinement. The Lawson criterion applies to inertial confinement fusion (ICF) as well as to magnetic confinement fusion (MCF) but in the inertial case it is more usefully expressed in a different form. A good approximation for the inertial confinement time formula_0 is the time that it takes an ion to travel over a distance "R" at its thermal speed formula_29 where m_i denotes the mean ionic mass. The inertial confinement time formula_0 can thus be approximated as formula_30 By substitution of the above expression into relationship (1), we obtain formula_31 This product must be greater than a value related to the minimum of T^(3/2)/⟨σv⟩. The same requirement is traditionally expressed in terms of mass density ρ = ⟨n m_i⟩: formula_32 Satisfaction of this criterion at the density of solid D-T (0.2 g/cm^3) would require a laser pulse of implausibly large energy. Assuming the energy required scales with the mass of the fusion plasma (E_laser ~ ρR^3 ~ ρ^(−2)), compressing the fuel to 10^3 or 10^4 times solid density would reduce the energy required by a factor of 10^6 or 10^8, bringing it into a realistic range. With a compression by 10^3, the compressed density will be 200 g/cm^3, and the compressed radius can be as small as 0.05 mm. The radius of the fuel before compression would be 0.5 mm. The initial pellet will be perhaps twice as large since most of the mass will be ablated during the compression. The fusion power times density is a good figure of merit to determine the optimum temperature for magnetic confinement, but for inertial confinement the fractional burn-up of the fuel is probably more useful. The burn-up should be proportional to the specific reaction rate (n^2⟨σv⟩) times the confinement time (which scales as T^(−1/2)) divided by the particle density n: formula_33 Thus the optimum temperature for inertial confinement fusion maximises ⟨σv⟩/T^(3/2), which is slightly higher than the optimum temperature for magnetic confinement. Non-thermal systems. Lawson's analysis is based on the rate of fusion and loss of energy in a thermalized plasma. There is a class of fusion machines that do not use thermalized plasmas but instead directly accelerate individual ions to the required energies. The best-known examples are the migma, fusor and polywell. When applied to the fusor, Lawson's analysis is used as an argument that conduction and radiation losses are the key impediments to reaching net power. Fusors use a voltage drop to accelerate and collide ions, resulting in fusion. The voltage drop is generated by wire cages, and these cages conduct away particles. Polywells are improvements on this design, designed to reduce conduction losses by removing the wire cages which cause them. Regardless, it is argued that radiation is still a major impediment. Notes. ^a It is straightforward to relax these assumptions. The most difficult question is how to define formula_9 when the ions and electrons differ in density and temperature. 
Considering that this is a calculation of energy production and loss by ions, and that any plasma confinement concept must contain the pressure forces of the plasma, it seems appropriate to define the effective (electron) density formula_9 through the (total) pressure formula_34 as formula_35. The factor of formula_36 is included because formula_9 usually refers to the density of the electrons alone, but formula_34 here refers to the total pressure. Given two species with ion densities formula_37, atomic numbers formula_38, ion temperature formula_39, and electron temperature formula_40, it is easy to show that the fusion power is maximized by a fuel mix given by formula_41. The values for formula_42, formula_43, and the power density must be multiplied by the factor formula_44. For example, with protons and boron (formula_45) as fuel, another factor of formula_46 must be included in the formulas. On the other hand, for cold electrons, the formulas must all be divided by formula_47 (with no additional factor for formula_48).
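The arithmetic behind the minimum D-T triple product quoted earlier (about 3×10^21 keV·s/m^3 at T = 14 keV) can be re-checked in a few lines of Python, using only numbers already given in the text: the approximation ⟨σv⟩ ≈ 1.1×10^−24 T^2 m^3/s and Ech = 3.5 MeV. The keV-to-kelvin conversion factor (about 1.16×10^7 K per keV) is a standard physical constant and is our addition, not a figure from the text.

# Numerical check of the minimum D-T triple product quoted in the text:
#   n*T*tau_E >= 12*T^2 / (E_ch * <sigma v>), with <sigma v> ~ 1.1e-24 * T^2 m^3/s,
# evaluated near T = 14 keV, E_ch = 3.5 MeV = 3500 keV.

T_keV = 14.0
E_ch_keV = 3500.0                    # energy of the charged fusion product (alpha particle)
sigma_v = 1.1e-24 * T_keV ** 2       # m^3/s, approximation quoted in the text

triple_product = 12.0 * T_keV ** 2 / (E_ch_keV * sigma_v)    # keV * s / m^3
print(f"minimum triple product ~ {triple_product:.2e} keV s/m^3")
# prints ~3.1e21 keV s/m^3, matching the ~3e21 value quoted above

# Conversion to kelvin (1 keV ~ 1.16e7 K; standard constant, not from the text):
KEV_TO_KELVIN = 1.16e7
print(f"equivalently ~ {triple_product * KEV_TO_KELVIN:.1e} K s/m^3")
# prints ~3.6e28 K s/m^3, consistent with the 3.5e28 K s/m^3 quoted above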
[ { "math_id": 0, "text": "\\tau_E" }, { "math_id": 1, "text": "P_B = 1.4 \\cdot 10^{-34} \\cdot N^2 \\cdot T^{1/2} \\frac{\\mathrm{W}}{\\mathrm{cm}^3}" }, { "math_id": 2, "text": "^2_1\\mathrm{D} +\\, ^3_1\\mathrm{T} \\rightarrow\\, ^4_2\\mathrm{He} \\left(3.5\\, \\mathrm{MeV}\\right)+\\, ^1_0\\mathrm{n} \\left(14.1\\, \\mathrm{MeV}\\right)" }, { "math_id": 3, "text": "^2_1\\mathrm{D} +\\, ^2_1\\mathrm{D} \\rightarrow\\, ^3_1\\mathrm{T} \\left(1.0\\, \\mathrm{MeV}\\right)+\\, ^1_1\\mathrm{p} \\left(3.0\\, \\mathrm{MeV}\\right)" }, { "math_id": 4, "text": "P_{\\mathrm{loss}}" }, { "math_id": 5, "text": "W" }, { "math_id": 6, "text": "\\tau_E = \\frac{W}{P_{\\mathrm{loss}}}" }, { "math_id": 7, "text": "W = 3nT" }, { "math_id": 8, "text": "T" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "f" }, { "math_id": 11, "text": "f = n_{\\mathrm{d}} n_{\\mathrm{t}} \\langle \\sigma v \\rangle = \\frac{1}{4}n^2 \\langle \\sigma v\\rangle " }, { "math_id": 12, "text": "\\sigma" }, { "math_id": 13, "text": "v" }, { "math_id": 14, "text": "\\langle \\rangle" }, { "math_id": 15, "text": "E_{\\mathrm{ch}}" }, { "math_id": 16, "text": "E_{\\mathrm{ch}} = 3.5\\,\\mathrm{MeV}" }, { "math_id": 17, "text": "f E_{\\rm ch} \\ge P_{\\rm loss}" }, { "math_id": 18, "text": "\\frac{1}{4}n^2 \\langle\\sigma v\\rangle E_{\\rm ch} \\ge \\frac{3nT}{\\tau_E}" }, { "math_id": 19, "text": "T/\\langle\\sigma v\\rangle" }, { "math_id": 20, "text": "n\\tau_E" }, { "math_id": 21, "text": "n \\tau_E \\ge 1.5 \\cdot 10^{20} \\frac{\\mathrm{s}}{\\mathrm{m}^3}" }, { "math_id": 22, "text": "T = 26\\,\\mathrm{keV}" }, { "math_id": 23, "text": "n T \\tau_{\\rm E} \\ge \\frac{12}{E_{\\rm ch}}\\,\\frac{T^2}{\\langle\\sigma v\\rangle} " }, { "math_id": 24, "text": "\\frac{T^2}{\\langle\\sigma v\\rangle}" }, { "math_id": 25, "text": "\\frac{T}{\\langle\\sigma v\\rangle}" }, { "math_id": 26, "text": "\\left \\langle \\sigma v \\right \\rangle = 1.1 \\cdot 10^{-24} T^2 \\; \\frac{{\\rm m}^3}{\\rm s} \\, {\\rm ,} \\quad {\\rm T \\, in \\, keV} {\\rm ,}" }, { "math_id": 27, "text": "\n\\begin{matrix}\nn T \\tau_E & \\ge & \\frac{12\\cdot 14^2 \\cdot {\\rm keV}^2}{1.1\\cdot 10^{-24} \\frac{{\\rm m}^3}{\\rm s} 14^2 \\cdot 3500 \\cdot{\\rm keV}} \\approx 3 \\cdot 10^{21} \\mbox{keV s}/\\mbox{m}^3 \\\\\n\\end{matrix}\n(3.5 \\cdot 10^{28} \\mbox{K s}/\\mbox{m}^3)" }, { "math_id": 28, "text": "\n\\begin{matrix}n T \\tau_E & \\propto & n T \\left(n^{1/3}/P^{2/3}\\right) \\\\\n& \\propto & n T \\left(n^{1/3}/\\left(n^2 T^2\\right)^{2/3}\\right) \\\\\n& \\propto & T^{-1/3} \\\\\n\\end{matrix}\n" }, { "math_id": 29, "text": "v_{th} = \\sqrt{\\frac{k_{\\rm B} T}{m_i}}" }, { "math_id": 30, "text": "\n\\begin{matrix}\n\\tau_E & \\approx & \\frac{R}{v_{th}} \\\\\n\\\\\n & = & \\frac{R}{\\sqrt{\\frac{k_{\\rm B} T}{m_i}}} \\\\\n\\\\\n & = & R \\cdot \\sqrt{\\frac{m_i}{k_{\\rm B} T}} \\mbox{ .} \\\\\n\\end{matrix}\n" }, { "math_id": 31, "text": "\n\\begin{matrix}\nn \\tau_E & \\approx & n \\cdot R \\cdot \\sqrt{\\frac{m_i}{k_B T}} \\geq \\frac{12}{E_{\\rm ch}}\\,\\frac{k_{\\rm B}T}{\\langle\\sigma v\\rangle} \\\\\n\\\\\nn \\cdot R & \\gtrapprox & \\frac{12}{E_{\\rm ch}}\\,\\frac{\\left(k_{\\rm B}T\\right)^{3/2}}{\\langle\\sigma v\\rangle\\cdot m_i^{1/2}}\\\\\n\\\\\nn \\cdot R & \\gtrapprox & \\frac{\\left(k_{\\rm B}T\\right)^{3/2}}{\\langle\\sigma v\\rangle}\\mbox{ .} \\\\\n\\end{matrix}\n" }, { "math_id": 32, "text": "\n\\rho \\cdot R \\geq 1 \\mathrm{g}/\\mathrm{cm}^2\n" }, { "math_id": 33, "text": "\n\\begin{matrix}\n\\mbox{burn-up fraction } & 
\\propto & n^2\\langle\\sigma v\\rangle T^{-1/2}/n \\\\\n& \\propto & \\left( n T\\right)\\langle\\sigma v\\rangle /T^{3/2}\\\\\n\n\\end{matrix}\n" }, { "math_id": 34, "text": "p" }, { "math_id": 35, "text": "n = p/2 T_{\\mathrm{i}}" }, { "math_id": 36, "text": "2" }, { "math_id": 37, "text": "n_{1,2}" }, { "math_id": 38, "text": "Z_{1,2}" }, { "math_id": 39, "text": "T_{\\mathrm{i}}" }, { "math_id": 40, "text": "T_{\\mathrm{e}}" }, { "math_id": 41, "text": "n_{1}/n_{2} = (1 + Z_{2}T_{\\mathrm{e}}/T_{\\mathrm{i}})/(1 + Z_{1}T_{\\mathrm{e}}/T_{\\mathrm{i}})" }, { "math_id": 42, "text": "n\\tau" }, { "math_id": 43, "text": "nT\\tau" }, { "math_id": 44, "text": "(1 + Z_{1}T_{\\mathrm{e}}/T_{\\mathrm{i}}) \\cdot (1 + Z_{2}T_{\\mathrm{e}}/T_{\\mathrm{i}})/4" }, { "math_id": 45, "text": "Z = 5" }, { "math_id": 46, "text": "3" }, { "math_id": 47, "text": "4" }, { "math_id": 48, "text": "Z > 1" } ]
https://en.wikipedia.org/wiki?curid=1018336
1018347
Utility maximization problem
Problem of allocation of money by consumers in order to most benefit themselves Utility maximization was first developed by utilitarian philosophers Jeremy Bentham and John Stuart Mill. In microeconomics, the utility maximization problem is the problem consumers face: "How should I spend my money in order to maximize my utility?" It is a type of optimal decision problem. It consists of choosing how much of each available good or service to consume, taking into account a constraint on total spending (income), the prices of the goods, and the consumer's preferences. Utility maximization is an important concept in consumer theory as it shows how consumers decide to allocate their income. Because consumers are modelled as being rational, they seek to extract the most benefit for themselves. However, due to bounded rationality and other biases, consumers sometimes pick bundles that do not necessarily maximize their utility. The utility-maximizing bundle of the consumer is also not fixed and can change over time depending on their individual preferences for goods, price changes and increases or decreases in income. Basic setup. For utility maximization there is a four-step process to derive consumer demand and find the utility-maximizing bundle of the consumer given prices, income, and preferences: 1) Check if Walras's law is satisfied 2) 'Bang for buck' 3) the budget constraint 4) Check for negativity 1) Walras's Law. Walras's law states that if a consumer's preferences are complete, monotone and transitive then the optimal demand will lie on the budget line. Preferences of the consumer. For a utility representation to exist, the preferences of the consumer must be complete and transitive (necessary conditions). Complete. Completeness of preferences indicates that all bundles in the consumption set can be compared by the consumer. For example, if the consumer has 3 bundles A, B and C then: A formula_0 B, A formula_0 C, B formula_0 A, B formula_0 C, C formula_0 B, C formula_0 A, A formula_0 A, B formula_0 B, C formula_0 C. Therefore, the consumer has complete preferences as they can compare every bundle. Transitive. Transitivity states that individuals' preferences are consistent across the bundles. Therefore, if the consumer weakly prefers A over B (A formula_0 B) and B formula_0 C, this means that A formula_0 C (A is weakly preferred to C). Monotone. For a preference relation to be monotone, increasing the quantity of both goods should make the consumer strictly better off (increase their utility), and increasing the quantity of one good holding the other quantity constant should not make the consumer worse off (same utility). The preference formula_0 is monotone if and only if: 1) formula_1 2) formula_2 3) formula_3 where formula_4 > 0 2) 'Bang for buck'. Bang for buck is a concept in utility maximization which refers to the consumer's desire to get the best value for their money. If Walras's law has been satisfied, the optimal solution of the consumer lies at the point where the budget line and the optimal indifference curve intersect; this is called the tangency condition. To find this point, differentiate the utility function with respect to x and y to find the marginal utilities, then divide by the respective prices of the goods: formula_5 This can be solved to find the optimal amount of good x or good y. 3) Budget constraint. 
The basic setup of the budget constraint of the consumer is: formula_6 Due to Walras's law being satisfied: formula_7 The tangency condition is then substituted into this to solve for the optimal amount of the other good. 4) Check for negativity. Negativity must be checked for, as the utility maximization problem can give an answer where the optimal demand for a good is negative, which in reality is not possible as this is outside the domain. If the demand for one good is negative, the optimal consumption bundle will be where 0 of this good is consumed and all income is spent on the other good (a corner solution). See figure 1 for an example when the demand for good x is negative. (A numerical illustration of the full four-step procedure, under an assumed Cobb–Douglas utility, is given at the end of this article.) A technical representation. Suppose the consumer's consumption set, or the enumeration of all possible consumption bundles that could be selected if there were no budget constraint, is formula_8 (a set of positive real numbers; the consumer cannot consume a negative amount of commodities); a bundle is then a vector formula_9 Suppose also that the price vector ("p") of the n commodities is positive, formula_10 and that the consumer's income is formula_11; then the set of all affordable packages, the budget set, is formula_12 The consumer would like to buy the best affordable package of commodities. It is assumed that the consumer has an ordinal utility function, called "u". It is a real-valued function with domain being the set of all commodity bundles, or formula_13 Then the consumer's optimal choice formula_14 is the utility-maximizing bundle among all bundles formula_15 in the budget set; that is, the consumer's optimal demand is given by formula_16 Finding formula_14 is the utility maximization problem. If "u" is continuous and no commodities are free of charge, then formula_14 exists, but it is not necessarily unique. If the preferences of the consumer are complete, transitive and strictly convex then the demand of the consumer contains a unique maximiser for all values of the price and wealth parameters. If this is satisfied then formula_14 is called the Marshallian demand function. Otherwise, formula_14 is set-valued and it is called the Marshallian demand correspondence. Utility maximisation of perfect complements. U = min {x, y} For a minimum function with goods that are perfect complements, the same steps cannot be taken to find the utility-maximising bundle as it is a non-differentiable function. Therefore, intuition must be used. The consumer will maximise their utility at the kink point in the highest indifference curve that intersects the budget line, where x = y. This is intuitive: as the consumer is rational, there is no point in consuming more of one good and not the other, since their utility is taken at the minimum of the two (they gain no utility from this and would be wasting their income). See figure 3. Utility maximisation of perfect substitutes. U = x + y For a utility function with perfect substitutes, the utility-maximising bundle can be found by differentiation or simply by inspection. Suppose a consumer finds listening to Australian rock bands AC/DC and Tame Impala perfect substitutes. This means that they are happy to spend all afternoon listening to only AC/DC, or only Tame Impala, or three-quarters AC/DC and one-quarter Tame Impala, or any combination of the two bands in any amount. Therefore, the consumer's optimal choice is determined entirely by the relative prices of listening to the two artists. 
If attending a Tame Impala concert is cheaper than attending the AC/DC concert, the consumer chooses to attend the Tame Impala concert, and vice versa. If the two concert prices are the same, the consumer is completely indifferent and may flip a coin to decide. To see this mathematically, differentiate the utility function to find that the MRS is constant; this is the technical meaning of perfect substitutes. As a result of this, the solution to the consumer's constrained maximization problem will not (generally) be an interior solution, and as such one must check the utility level in the boundary cases (spend entire budget on good x, spend entire budget on good y) to see which is the solution. The special case is when the (constant) MRS equals the price ratio (for example, both goods have the same price, and the same coefficients in the utility function). In this case, any combination of the two goods is a solution to the consumer problem. Reaction to changes in prices. For a given level of real wealth, only relative prices matter to consumers, not absolute prices. If consumers reacted to changes in nominal prices and nominal wealth even if relative prices and real wealth remained unchanged, this would be an effect called money illusion. The mathematical first order conditions for a maximum of the consumer problem guarantee that the demand for each good is homogeneous of degree zero jointly in nominal prices and nominal wealth, so there is no money illusion. When the prices of goods change, the optimal consumption of these goods will depend on the substitution and income effects. The substitution effect says that if the demand for both goods is homogeneous, when the price of one good decreases (holding the price of the other good constant) the consumer will consume more of this good and less of the other as it becomes relatively cheaper. The same goes if the price of one good increases: consumers will buy less of that good and more of the other. The income effect occurs when the change in prices of goods causes a change in real income. If the price of one good rises, then real income is decreased (it is more costly than before to consume the same bundle); the same goes in reverse if the price of a good falls: real income is increased (it is cheaper to consume the same bundle, so the consumer can consume more of their desired combination of goods). Reaction to changes in income. If the consumer's income is increased, their budget line is shifted outwards and they now have more income to spend on either good x, good y, or both, depending on their preferences for each good. If both goods x and y were normal goods then consumption of both goods would increase and the optimal bundle would move from A to C (see figure 5). If either x or y were inferior goods, then demand for these would decrease as income rises (the optimal bundle would be at point B or C). Bounded rationality. For further information, see Bounded rationality. In practice, a consumer may not always pick an optimal bundle. For example, it may require too much thought or too much time. Bounded rationality is a theory that explains this behaviour. Examples of alternatives to utility maximisation due to bounded rationality are: satisficing, elimination by aspects and the mental accounting heuristic. Related concepts. The relationship between the utility function and Marshallian demand in the utility maximisation problem mirrors the relationship between the expenditure function and Hicksian demand in the expenditure minimisation problem. 
In expenditure minimisation the utility level is given, as well as the prices of goods; the role of the consumer is to find the minimum level of expenditure required to reach this utility level. The utilitarian social choice rule is a rule that says that society should choose the alternative that maximizes the "sum" of utilities. While utility-maximization is done by individuals, utility-sum maximization is done by society.
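The four-step procedure described earlier can be illustrated with a small numerical example. The sketch below assumes a Cobb–Douglas utility function U(x, y) = x^0.5 · y^0.5 and arbitrary prices and income; neither the functional form nor the numbers come from the text. For this utility, the 'bang for buck' (tangency) condition combined with the budget constraint gives the closed-form solution x* = I/(2px), y* = I/(2py), and the code cross-checks this against a brute-force search along the budget line.

# Illustrative utility maximization with an assumed Cobb-Douglas utility
# U(x, y) = x**0.5 * y**0.5; the prices and income below are invented.

px, py, income = 2.0, 4.0, 100.0

def utility(x, y):
    return x ** 0.5 * y ** 0.5

# Steps 2-3: the tangency ("bang for buck") condition MUx/px = MUy/py together
# with the budget constraint px*x + py*y = income gives, for this utility,
# x* = income/(2*px) and y* = income/(2*py).
x_star = income / (2.0 * px)
y_star = income / (2.0 * py)

# Step 4: check for negativity (here the solution is interior, not a corner).
assert x_star >= 0.0 and y_star >= 0.0

# Step 1 (Walras's law) is used implicitly: all income is spent, so the
# brute-force cross-check below only searches along the budget line.
best_x, best_u = 0.0, -1.0
steps = 1000
for i in range(steps + 1):
    x = (income / px) * i / steps            # spend between 0% and 100% of income on x
    y = (income - px * x) / py               # the remaining income buys y
    u = utility(x, y)
    if u > best_u:
        best_x, best_u = x, u

print("closed form :", x_star, y_star, round(utility(x_star, y_star), 3))
print("brute force :", best_x, round((income - px * best_x) / py, 3), round(best_u, 3))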
[ { "math_id": 0, "text": "\\succcurlyeq" }, { "math_id": 1, "text": "(x+\\epsilon, y)\\succcurlyeq(x,y)" }, { "math_id": 2, "text": "(x,y+\\epsilon)\\succcurlyeq(x,y)" }, { "math_id": 3, "text": "(x+\\epsilon, y+\\epsilon)\\succ(x,y)" }, { "math_id": 4, "text": "\\epsilon" }, { "math_id": 5, "text": " MU_x/p_x = MU_y/p_y" }, { "math_id": 6, "text": " p_xx + p_yy \\leq I" }, { "math_id": 7, "text": " p_xx + p_yy = I" }, { "math_id": 8, "text": " \\mathbb{R}^n_+ \\ ." }, { "math_id": 9, "text": "x \\in \\mathbb{R}^n_+ \\ ." }, { "math_id": 10, "text": "p \\in \\mathbb{R}^n_+ \\ ," }, { "math_id": 11, "text": "I" }, { "math_id": 12, "text": "B(p, I) = \\{x \\in \\mathbb{R}^n_+ | \\mathbb{\\Sigma}^n_{i=1} p_i x_i \\leq I\\} \\ ," }, { "math_id": 13, "text": "u : \\mathbb{R}^n_+ \\rightarrow \\mathbb{R}_+ \\ ." }, { "math_id": 14, "text": "x(p,I)" }, { "math_id": 15, "text": "x\\in B(p,I)" }, { "math_id": 16, "text": "x(p, I) = \\{x \\in B(p,I)| U(x) \\geq U(y) \\forall y \\in B(p,I)\\}" } ]
https://en.wikipedia.org/wiki?curid=1018347
101843
Degenerate distribution
The probability distribution of a random variable which only takes a single value In mathematics, a degenerate distribution (sometimes also Dirac distribution) is, according to some, a probability distribution in a space with support only on a manifold of lower dimension, and according to others a distribution with support only at a single point. By the latter definition, it is a deterministic distribution and takes only a single value. Examples include a two-headed coin and rolling a die whose sides all show the same number. This distribution satisfies the definition of "random variable" even though it does not appear random in the everyday sense of the word; hence it is considered degenerate. In the case of a real-valued random variable, the degenerate distribution is a one-point distribution, localized at a point "k"0 on the real line. The probability mass function equals 1 at this point and 0 elsewhere. The degenerate univariate distribution can be viewed as the limiting case of a continuous distribution whose variance goes to 0, causing the probability density function to be a delta function at "k"0, with infinite height there but area equal to 1. The cumulative distribution function of the univariate degenerate distribution is: formula_0 Constant random variable. In probability theory, a constant random variable is a discrete random variable that takes a constant value, regardless of any event that occurs. This is technically different from an almost surely constant random variable, which may take other values, but only on events with probability zero. Constant and almost surely constant random variables, which have a degenerate distribution, provide a way to deal with constant values in a probabilistic framework. Let "X": Ω → R be a random variable defined on a probability space (Ω, "P"). Then "X" is an "almost surely constant random variable" if there exists formula_1 such that formula_2 and is furthermore a "constant random variable" if formula_3 A constant random variable is almost surely constant, but not necessarily "vice versa", since if "X" is almost surely constant then there may exist γ ∈ Ω such that "X"(γ) ≠ "k"0 (but then necessarily Pr({γ}) = 0, in fact Pr(X ≠ "k"0) = 0). For practical purposes, the distinction between "X" being constant or almost surely constant is unimportant, since the cumulative distribution function "F"("x") of "X" does not depend on whether "X" is constant or 'merely' almost surely constant. In either case, formula_4 The function "F"("x") is a step function; in particular it is a translation of the Heaviside step function. Higher dimensions. Degeneracy of a multivariate distribution in "n" random variables arises when the support lies in a space of dimension less than "n". This occurs when at least one of the variables is a deterministic function of the others. For example, in the 2-variable case suppose that "Y" = "aX + b" for scalar random variables "X" and "Y" and scalar constants "a" ≠ 0 and "b"; here knowing the value of one of "X" or "Y" gives exact knowledge of the value of the other. All the possible points ("x", "y") fall on the one-dimensional line "y = ax + b". In general, when one or more of the "n" random variables are exactly linearly determined by the others, if the covariance matrix exists, its rank is less than "n" and its determinant is 0, so it is positive semi-definite but not positive definite, and the joint probability distribution is degenerate. Degeneracy can also occur even with non-zero covariance. 
For example, when scalar "X" is symmetrically distributed about 0 and "Y" is exactly given by "Y" = "X" 2, all possible points ("x", "y") fall on the parabola "y = x" 2, which is a one-dimensional subset of the two-dimensional space.
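The two bivariate cases above can be illustrated with a short simulation: for Y = aX + b the sample covariance matrix is singular up to floating-point error, while for Y = X^2 with X symmetric about 0 the sample covariance of X and Y is approximately zero even though every sample point lies on the parabola by construction. This is only a sketch; the sample size, random seed and the choice a = 2, b = 1 are arbitrary.

import numpy as np

rng = np.random.default_rng(0)          # arbitrary seed
x = rng.standard_normal(100_000)

# Case 1: Y = a*X + b (a, b arbitrary). The support lies on a line, so the
# 2x2 covariance matrix has rank 1 and determinant 0.
a, b = 2.0, 1.0
cov_linear = np.cov(x, a * x + b)
print("determinant of covariance matrix, linear case:", np.linalg.det(cov_linear))
# prints a value that is zero up to floating-point rounding error

# Case 2: Y = X**2 with X symmetric about 0. Then Cov(X, Y) = E[X^3] = 0,
# yet the joint distribution is still degenerate: every sampled point (x, y)
# satisfies y = x**2, so all the probability mass lies on the parabola.
y = x ** 2
print("sample Cov(X, X^2):", np.cov(x, y)[0, 1])    # approximately zero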
[ { "math_id": 0, "text": "F_{k_0}(x)=\\left\\{\\begin{matrix} 1, & \\mbox{if }x\\ge k_0 \\\\ 0, & \\mbox{if }x<k_0 \\end{matrix}\\right." }, { "math_id": 1, "text": " k_0 \\in \\mathbb{R} " }, { "math_id": 2, "text": "\\Pr(X = k_0) = 1," }, { "math_id": 3, "text": "X(\\omega) = k_0, \\quad \\forall\\omega \\in \\Omega." }, { "math_id": 4, "text": "F(x) = \\begin{cases}1, &x \\geq k_0,\\\\0, &x < k_0.\\end{cases}" } ]
https://en.wikipedia.org/wiki?curid=101843
10184674
Large deviations of Gaussian random functions
A random function – of either one variable (a random process), or two or more variables (a random field) – is called Gaussian if every finite-dimensional distribution is a multivariate normal distribution. Gaussian random fields on the sphere are useful (for example) when analysing data observed over a spherical domain, such as the cosmic microwave background. Sometimes, a value of a Gaussian random function deviates from its expected value by several standard deviations. This is a large deviation. Though rare in a small domain (of space and/or time), large deviations may be quite usual in a large domain. Basic statement. Let formula_0 be the maximal value of a Gaussian random function formula_1 on the (two-dimensional) sphere. Assume that the expected value of formula_1 is formula_2 (at every point of the sphere), and the standard deviation of formula_1 is formula_3 (at every point of the sphere). Then, for large formula_4, formula_5 is close to formula_6, where formula_7 is distributed formula_8 (the standard normal distribution), and formula_9 is a constant; it does not depend on formula_10, but depends on the correlation function of formula_1 (see below). The relative error of the approximation decays exponentially for large formula_10. The constant formula_9 is easy to determine in the important special case described in terms of the directional derivative of formula_1 at a given point (of the sphere) in a given direction (tangential to the sphere). The derivative is random, with zero expectation and some standard deviation. The latter may depend on the point and the direction. However, if it does not depend on them, then it is equal to formula_11 (for the sphere of radius formula_3). The coefficient formula_12 before formula_13 is in fact the Euler characteristic of the sphere (for the torus it vanishes). It is assumed that formula_1 is twice continuously differentiable (almost surely), and reaches its maximum at a single point (almost surely). The clue: mean Euler characteristic. The clue to the theory sketched above is the Euler characteristic formula_14 of the set formula_15 of all points formula_16 (of the sphere) such that formula_17. Its expected value (in other words, mean value) formula_18 can be calculated explicitly: formula_19 (which is far from being trivial, and involves the Poincaré–Hopf theorem, the Gauss–Bonnet theorem, Rice's formula, etc.). The set formula_15 is the empty set whenever formula_20; in this case formula_21. In the other case, when formula_22, the set formula_15 is non-empty; its Euler characteristic may take various values, depending on the topology of the set (the number of connected components, and possible holes in these components). However, if formula_10 is large and formula_22 then the set formula_15 is usually a small, slightly deformed disk or ellipse (which is easy to guess, but quite difficult to prove). Thus, its Euler characteristic formula_14 is usually equal to formula_3 (given that formula_22). This is why formula_23 is close to formula_5. Further reading. The basic statement given above is a simple special case of a much more general (and difficult) theory stated by Adler. For a detailed presentation of this special case see Tsirelson's lectures.
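As a small numerical sketch of the basic statement, the code below evaluates the approximation C·a·exp(−a²/2) + 2·P(ξ > a) for a few thresholds a. The value C = 1 is an assumed placeholder; in the statement above, C is determined by the correlation function of the random function.

```python
import numpy as np
from scipy.stats import norm

C = 1.0  # assumed value; in reality C depends on the correlation function of X

def tail_approximation(a, C=C):
    """Approximate P(M > a) by C*a*exp(-a**2/2) + 2*P(xi > a), xi ~ N(0, 1).

    This is the same expression as the mean Euler characteristic E(chi_a);
    for large a the first term dominates."""
    a = np.asarray(a, dtype=float)
    return C * a * np.exp(-a**2 / 2) + 2 * norm.sf(a)

for a in (2.0, 3.0, 4.0, 5.0):
    print(a, tail_approximation(a))
```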
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "0" }, { "math_id": 3, "text": "1" }, { "math_id": 4, "text": "a>0" }, { "math_id": 5, "text": "P(M>a)" }, { "math_id": 6, "text": "C a \\exp(-a^2/2) + 2P(\\xi>a)" }, { "math_id": 7, "text": "\\xi" }, { "math_id": 8, "text": "N(0,1)" }, { "math_id": 9, "text": "C" }, { "math_id": 10, "text": "a" }, { "math_id": 11, "text": "(\\pi/2)^{1/4} C^{1/2}" }, { "math_id": 12, "text": "2" }, { "math_id": 13, "text": "P(\\xi>a)" }, { "math_id": 14, "text": "\\chi_a" }, { "math_id": 15, "text": "\\{X>a\\}" }, { "math_id": 16, "text": "t" }, { "math_id": 17, "text": "X(t)>a" }, { "math_id": 18, "text": "E(\\chi_a)" }, { "math_id": 19, "text": " E(\\chi_a) = C a \\exp(-a^2/2) + 2 P(\\xi>a) " }, { "math_id": 20, "text": "M<a" }, { "math_id": 21, "text": "\\chi_a=0" }, { "math_id": 22, "text": "M>a" }, { "math_id": 23, "text": " E(\\chi_a)" } ]
https://en.wikipedia.org/wiki?curid=10184674
101848
ALOHAnet
Computer networking system ALOHAnet, also known as the ALOHA System, or simply ALOHA, was a pioneering computer networking system developed at the University of Hawaii. ALOHAnet became operational in June 1971, providing the first public demonstration of a wireless packet data network. The ALOHAnet used a new method of medium access, called "ALOHA random access", and experimental ultra high frequency (UHF) for its operation. In its simplest form, later known as Pure ALOHA, remote units communicated with a base station (Menehune) over two separate radio frequencies (for inbound and outbound respectively). Nodes did not wait for the channel to be clear before sending, but instead waited for acknowledgement of successful receipt of a message, and re-sent it if this was not received. Nodes would also stop and re-transmit data if they detected any other messages while transmitting. While simple to implement, this results in an efficiency of only 18.4%. A later advancement, Slotted ALOHA, improved the efficiency of the protocol by reducing the chance of collision, improving throughput to 36.8%. ALOHA was subsequently employed in the Ethernet cable based network in the 1970s, and following regulatory developments in the early 1980s it became possible to use the ALOHA random-access techniques in both Wi-Fi and in mobile telephone networks. ALOHA channels were used in a limited way in the 1980s in 1G mobile phones for signaling and control purposes. In the late 1980s, the European standardization group GSM who worked on the Pan-European Digital mobile communication system GSM greatly expanded the use of ALOHA channels for access to radio channels in mobile telephony. In the early 2000s additional ALOHA channels were added to 2.5G and 3G mobile phones with the widespread introduction of General Packet Radio Service (GPRS), using a slotted-ALOHA random-access channel combined with a version of the Reservation ALOHA scheme first analyzed by a group at BBN Technologies. History. One of the early computer networking designs, development of the ALOHA network was begun in September 1968 at the University of Hawaii under the leadership of Norman Abramson and Franklin Kuo, along with Thomas Gaarder, Shu Lin, Wesley Peterson and Edward ("Ned") Weldon. The goal was to use low-cost commercial radio equipment to connect users on Oahu and the other Hawaiian islands with a central time-sharing computer on the main Oahu campus. The first packet broadcasting unit went into operation in June 1971. Terminals were connected to a special purpose "terminal connection unit" using RS-232 at 9600 bit/s. ALOHA was originally a contrived acronym standing for Additive Links On-line Hawaii Area. The original version of ALOHA used two distinct frequencies in a hub configuration, with the hub machine broadcasting packets to everyone on the "outbound" channel, and the various client machines sending data packets to the hub on the "inbound" channel. If data was received correctly at the hub, a short acknowledgment packet was sent to the client; if an acknowledgment was not received by a client machine after a short wait time, it would automatically retransmit the data packet after waiting a randomly selected time interval. This acknowledgment mechanism was used to detect and correct for collisions created when two client machines both attempted to send a packet at the same time. ALOHAnet's primary importance was its use of a shared medium for client transmissions. 
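The acknowledgment-and-random-retransmission behaviour just described can be sketched schematically in Python. This is not the original ALOHAnet implementation; the timeout, the backoff bound and the send_packet/wait_for_ack helpers are hypothetical stand-ins used only to illustrate the retransmission logic.

```python
import random
import time

ACK_TIMEOUT = 0.2   # seconds to wait for an acknowledgment (assumed value)
MAX_BACKOFF = 1.0   # upper bound of the random retransmission delay (assumed)

def send_with_aloha_retransmission(packet, send_packet, wait_for_ack, max_tries=8):
    """Pure-ALOHA style client logic: transmit immediately, and if no
    acknowledgment arrives within the timeout, retransmit after a randomly
    chosen delay. send_packet and wait_for_ack are hypothetical I/O hooks."""
    for attempt in range(max_tries):
        send_packet(packet)                    # transmit without sensing the channel
        if wait_for_ack(timeout=ACK_TIMEOUT):  # hub returns a short acknowledgment
            return True                        # success: packet received by the hub
        # Presumed collision or loss: back off for a random interval, then retry.
        time.sleep(random.uniform(0, MAX_BACKOFF))
    return False                               # give up after max_tries attempts
```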
Unlike the ARPANET where each node could only talk to a single node at the other end of a wire or satellite circuit, in ALOHAnet all client nodes communicated with the hub on the same frequency. This meant that some sort of mechanism was needed to control who could talk at what time. The ALOHAnet solution was to allow each client to send its data without controlling when it was sent, and to implement an acknowledgment/retransmission scheme to deal with collisions. This approach radically reduced the complexity of the protocol and the networking hardware, since nodes did not need to negotiate "who" is allowed to speak. This solution became known as a pure ALOHA, or random-access channel, and was the basis for subsequent Ethernet development and later Wi-Fi networks. Various versions of the ALOHA protocol (such as Slotted ALOHA) also appeared later in satellite communications, and were used in wireless data networks such as ARDIS, Mobitex, CDPD, and GSM. The ALOHA network introduced the mechanism of randomized multiple access, which resolved device transmission collisions as follows: a device transmitted a packet immediately, and if no acknowledgment was received within the expected time, the transmission was repeated after a random waiting time. The probability distribution of this random waiting time for retransmission of a packet that has not been acknowledged as received is critically important for the stability of ALOHA-type communication systems. The average waiting time for retransmission is typically shorter than the average time for generation of a new packet from the same client node, but it should not be allowed to be so short as to compromise the stability of the network, causing a collapse in its overall throughput. Also important was ALOHAnet's use of the outgoing hub channel to broadcast packets directly to all clients on a second shared frequency and using an address in each packet to allow selective receipt at each client node. Separate frequencies were used for incoming and outgoing communications to the hub so that devices could receive acknowledgments regardless of their own transmissions. Protocol. Pure ALOHA. The original version of the protocol (now called Pure ALOHA, and the one implemented in ALOHAnet) was quite simple: Pure ALOHA does not check whether the channel is busy before transmitting. Since collisions can occur and data may have to be sent again, ALOHA cannot efficiently use 100% of the capacity of the communications channel. How long a station waits until it retransmits, and the likelihood a collision occurs are interrelated, and both affect how efficiently the channel can be used. This means that the concept of "retransmit later" is a critical aspect: the quality of the backoff scheme chosen significantly influences the efficiency of the protocol, the ultimate channel capacity, and the predictability of its behavior. To assess Pure ALOHA, there is a need to predict its throughput, the rate of (successful) transmission of frames. First make a few simplifying assumptions: Let T refer to the time needed to transmit one frame on the channel, and define "frame-time" as a unit of time equal to T. Let G refer to the mean used in the Poisson distribution over transmission-attempt amounts. That is, on average, there are G transmission attempts per "frame-time". Consider what needs to happen for a frame to be transmitted successfully. Let t refer to the time at which it is intended to send a frame. 
It is preferable to use the channel for one frame-time beginning at t, and all other stations to refrain from transmitting during this time. For any frame-time, the probability of there being k transmission-attempts during that frame-time is: formula_0 The average number of transmission-attempts for two consecutive frame-times is 2G. Hence, for any pair of consecutive frame-times, the probability of there being k transmission attempts during those two frame-times is: formula_1 Therefore, the probability (formula_2) of there being zero transmission-attempts between t-T and t+T (and thus of a successful transmission for us) is: formula_3 The throughput can be calculated as the rate of transmission attempts multiplied by the probability of success, and it can be concluded that the throughput (formula_4) is: formula_5 The maximum throughput is 0.5/e frames per frame-time (reached when formula_6), which is approximately 0.184 frames per frame-time. This means that, in Pure ALOHA, only about 18.4% of the time is used for successful transmissions. Slotted ALOHA. An improvement to the original ALOHA protocol was Slotted ALOHA, which introduced discrete time slots and increased the maximum throughput. A station can start a transmission only at the beginning of a time slot, and thus collisions are reduced. In this case, only transmission-attempts within 1 frame-time and not 2 consecutive frame-times need to be considered, since collisions can only occur during each time slot. Thus, the probability of there being zero transmission attempts by other stations in a single time slot is: formula_7 the probability of a transmission requiring exactly k attempts is (k-1 collisions and 1 success): formula_8 The throughput is: formula_9 The maximum throughput is "1/e" frames per frame-time (reached when "G" = 1), which is approximately 0.368 frames per frame-time, or 36.8%. Slotted ALOHA is used in low-data-rate tactical satellite communications networks by military forces, in subscriber-based satellite communications networks, mobile telephony call setup, set-top box communications and in the contactless RFID technologies. Reservation ALOHA. Reservation ALOHA, or R-ALOHA, is an effort to improve the efficiency of Slotted ALOHA. The improvements with Reservation ALOHA are markedly shorter delays and ability to efficiently support higher levels of utilization. As a contrast of efficiency, simulations have shown that Reservation ALOHA exhibits less delay at 80% utilization than Slotted ALOHA at 20–36% utilization. The chief difference between Slotted and Reservation ALOHA is that with Slotted ALOHA, any slot is available for utilization without regards to prior usage. Under Reservation ALOHA's contention-based reservation schema, the slot is temporarily considered "owned" by the station that successfully used it. Additionally, Reservation ALOHA simply stops sending data once the station has completed its transmission. As a rule, idle slots are considered available to all stations that may then implicitly reserve (utilize) the slot on a contention basis. Other protocols. The use of a random-access channel in ALOHAnet led to the development of carrier-sense multiple access (CSMA), a "listen before send" random-access protocol that can be used when all nodes send and receive on the same channel. CSMA in radio channels was extensively modeled. The AX.25 packet radio protocol is based on the CSMA approach with collision recovery, based on the experience gained from ALOHAnet. 
A variation of CSMA, CSMA/CD is used in early versions of Ethernet. ALOHA and the other random-access protocols have an inherent variability in their throughput and delay performance characteristics. For this reason, applications that need highly deterministic load behavior may use master/slave or token-passing schemes (such as Token Ring or ARCNET) instead of contention systems. Hardware. The central node communications processor was an HP 2100 minicomputer called the Menehune, which is the Hawaiian language word for dwarf people, and was named for its similar role to the original ARPANET Interface Message Processor (IMP) which was being deployed at about the same time. In the original system, the Menehune forwarded correctly received user data to the UH central computer, an IBM System 360/65 time-sharing system. Outgoing messages from the 360 were converted into packets by the Menehune, which were queued and broadcast to the remote users at a data rate of 9600 bit/s. Unlike the half-duplex radios at the user TCUs, the Menehune was interfaced to the radio channels with full-duplex radio equipment. The original user interface developed for the system was an all-hardware unit called an ALOHAnet Terminal Control Unit (TCU) and was the sole piece of equipment necessary to connect a terminal into the ALOHA channel. The TCU was composed of a UHF antenna, transceiver, modem, buffer and control unit. The buffer was designed for a full line length of 80 characters, which allowed handling of both the 40- and 80-character fixed-length packets defined for the system. The typical user terminal in the original system consisted of a Teletype Model 33 or a dumb CRT user terminal connected to the TCU using a standard RS-232 interface. Shortly after the original ALOHA network went into operation, the TCU was redesigned with one of the first Intel microprocessors, and the resulting upgrade was called a Programmable Control Unit (PCU). Additional basic functions performed by the TCUs and PCUs were generation of a cyclic-parity-check code vector and decoding of received packets for packet error detection purposes, and generation of packet retransmissions using a simple random interval generator. If an acknowledgment was not received from the Menehune after the prescribed number of automatic retransmissions, a flashing light was used as an indicator to the human user. Also, since the TCUs and PCUs did not send acknowledgments to the Menehune, a steady warning light was displayed to the human user when an error was detected in a received packet. Considerable simplification was incorporated into the initial design of the TCU as well as the PCU for interfacing a human user into the network. In later versions of the system, simple radio relays were placed in operation to connect the main network on the island of Oahu to other islands in Hawaii, and Menehune routing capabilities were expanded to allow user nodes to exchange packets with other user nodes, the ARPANET, and an experimental satellite network. Network architecture. Two fundamental choices which dictated much of the ALOHAnet design were the two-channel star configuration of the network and the use of random access for user transmissions. The two-channel configuration was primarily chosen to allow for efficient transmission of the relatively dense total traffic stream being returned to users by the central time-sharing computer. 
An additional reason for the star configuration was the desire to centralize as many communication functions as possible at the central network node (the Menehune) to minimize the cost of the original all-hardware terminal control unit (TCU) at each user node. The random-access channel for communication between users and the Menehune was designed specifically for the traffic characteristics of interactive computing. In a conventional communication system, a user might be assigned a portion of the channel on either a frequency-division multiple access or time-division multiple access basis. Since it was well known that in time-sharing systems (circa 1970), computer and user data are bursty, such fixed assignments are generally wasteful of bandwidth because of the high peak-to-average data rates that characterize the traffic. To achieve a more efficient use of bandwidth for bursty traffic, ALOHAnet developed the random-access packet switching method that has come to be known as a "pure ALOHA" channel. This approach effectively dynamically allocates bandwidth immediately to a user who has data to send, using the acknowledgment and retransmission mechanism described earlier to deal with occasional access collisions. While the average channel loading must be kept below about 10% to maintain a low collision rate, this still results in better bandwidth efficiency than when fixed allocations are used in a bursty traffic context. Two 100 kHz channels in the experimental UHF band were used in the implemented system, one for the user-to-computer random-access channel and one for the computer-to-user broadcast channel. The system was configured as a star network, allowing only the central node to receive transmissions in the random-access channel. All user TCUs received each transmission made by the central node in the broadcast channel. All transmissions were made in bursts at , with data and control information encapsulated in packets. Each packet consisted of a 32-bit header and a 16-bit header parity check word, followed by up to 80 bytes of data and a 16-bit parity check word for the data. The header contained address information identifying a particular user so that when the Menehune broadcast a packet, only the intended user's node would accept it. Legacy. In the 1970s ALOHA random access was employed in the nascent Ethernet cable based network and then in the Marisat (now Inmarsat) satellite network. In the early 1980s frequencies for mobile networks became available, and in 1985 frequencies suitable for what became known as Wi-Fi were allocated in the US. These regulatory developments made it possible to use the ALOHA random-access techniques in both Wi-Fi and in mobile telephone networks. ALOHA channels were used in a limited way in the 1980s in 1G mobile phones for signaling and control purposes. In the late 1980s, the European standardization group GSM who worked on the Pan-European Digital mobile communication system GSM greatly expanded the use of ALOHA channels for access to radio channels in mobile telephony. In addition, SMS message texting was implemented in 2G mobile phones. In the early 2000s additional ALOHA channels were added to 2.5G and 3G mobile phones with the widespread introduction of General Packet Radio Service (GPRS), using a slotted-ALOHA random-access channel combined with a version of the Reservation ALOHA scheme first analyzed by a group at BBN Technologies. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
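The throughput formulas quoted in the Protocol section, S = G·e^(−2G) for Pure ALOHA and S = G·e^(−G) for Slotted ALOHA, are easy to check numerically. The sketch below evaluates both at their optimal offered loads and runs a small Monte-Carlo slotted-ALOHA experiment; the number of stations and slots are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def pure_aloha_throughput(G):
    return G * np.exp(-2 * G)   # maximised at G = 0.5, value 1/(2e) ~ 0.184

def slotted_aloha_throughput(G):
    return G * np.exp(-G)       # maximised at G = 1, value 1/e ~ 0.368

print(pure_aloha_throughput(0.5), slotted_aloha_throughput(1.0))

# Monte-Carlo check of slotted ALOHA: n stations each transmit in a slot with
# probability p, so the offered load is G = n*p attempts per slot; a slot is
# successful only when exactly one station transmits.
n_stations, n_slots, G = 200, 200_000, 1.0
p = G / n_stations
transmitters_per_slot = rng.binomial(n_stations, p, size=n_slots)
empirical = (transmitters_per_slot == 1).mean()
print(empirical, slotted_aloha_throughput(G))   # both close to 1/e ~ 0.368
```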
[ { "math_id": 0, "text": "\\frac{G^k e^{-G}}{k!}" }, { "math_id": 1, "text": "\\frac{(2G)^k e^{-2G}}{k!}" }, { "math_id": 2, "text": "Prob_{pure}" }, { "math_id": 3, "text": "Prob_{pure}=e^{-2G}" }, { "math_id": 4, "text": "S_{pure}" }, { "math_id": 5, "text": "S_{pure}=Ge^{-2G}" }, { "math_id": 6, "text": "G=0.5" }, { "math_id": 7, "text": "Prob_{slotted} = e^{-G}" }, { "math_id": 8, "text": "Prob_{slotted} k = e^{-G} ( 1 - e^{-G} )^{k-1}" }, { "math_id": 9, "text": "S_{slotted}=Ge^{-G}" } ]
https://en.wikipedia.org/wiki?curid=101848
101851
Hilbert's tenth problem
On solvability of Diophantine equations Hilbert's tenth problem is the tenth on the list of mathematical problems that the German mathematician David Hilbert posed in 1900. It is the challenge to provide a general algorithm that, for any given Diophantine equation (a polynomial equation with integer coefficients and a finite number of unknowns), can decide whether the equation has a solution with all unknowns taking integer values. For example, the Diophantine equation formula_0 has an integer solution: formula_1. By contrast, the Diophantine equation formula_2 has no such solution. Hilbert's tenth problem has been solved, and it has a negative answer: such a general algorithm cannot exist. This is the result of combined work of Martin Davis, Yuri Matiyasevich, Hilary Putnam and Julia Robinson that spans 21 years, with Matiyasevich completing the theorem in 1970. The theorem is now known as Matiyasevich's theorem or the MRDP theorem (an initialism for the surnames of the four principal contributors to its solution). When all coefficients and variables are restricted to be "positive" integers, the related problem of polynomial identity testing becomes a decidable (exponentiation-free) variation of Tarski's high school algebra problem, sometimes denoted formula_3 Background. Original formulation. Hilbert formulated the problem as follows: "Given a Diophantine equation with any number of unknown quantities and with rational integral numerical coefficients:" "To devise a process according to which it can be determined in a finite number of operations whether the equation is solvable in rational integers." The words "process" and "finite number of operations" have been taken to mean that Hilbert was asking for an algorithm. The term "rational integral" simply refers to the integers, positive, negative or zero: 0, ±1, ±2, ... . So Hilbert was asking for a general algorithm to decide whether a given polynomial Diophantine equation with integer coefficients has a solution in integers. Hilbert's problem is not concerned with finding the solutions. It only asks whether, in general, we can decide whether one or more solutions exist. The answer to this question is negative, in the sense that no "process can be devised" for answering that question. In modern terms, Hilbert's 10th problem is an undecidable problem. Diophantine sets. In a Diophantine equation, there are two kinds of variables: the parameters and the unknowns. The Diophantine set consists of the parameter assignments for which the Diophantine equation is solvable. A typical example is the linear Diophantine equation in two unknowns, formula_4, where the equation is solvable if and only if the greatest common divisor formula_5 evenly divides formula_6. The set of all ordered triples formula_7 satisfying this restriction is called the "Diophantine set" defined by formula_4. In these terms, Hilbert's tenth problem asks whether there is an algorithm to determine if the Diophantine set corresponding to an arbitrary polynomial is non-empty. The problem is generally understood in terms of the natural numbers (that is, the non-negative integers) rather than arbitrary integers. However, the two problems are equivalent: any general algorithm that can decide whether a given Diophantine equation has an integer solution could be modified into an algorithm that decides whether a given Diophantine equation has a natural-number solution, and vice versa. 
By Lagrange's four-square theorem, every natural number is the sum of the squares of four integers, so we could rewrite every natural-valued parameter in terms of the sum of the squares of four new integer-valued parameters. Similarly, since every integer is the difference of two natural numbers, we could rewrite every integer parameter as the difference of two natural parameters. Furthermore, we can always rewrite a system of simultaneous equations formula_8 (where each formula_9 is a polynomial) as a single equation formula_10. Recursively enumerable sets. A recursively enumerable set can be characterized as one for which there exists an algorithm that will ultimately halt when a member of the set is provided as input, but may continue indefinitely when the input is a non-member. It was the development of computability theory (also known as recursion theory) that provided a precise explication of the intuitive notion of algorithmic computability, thus making the notion of recursive enumerability perfectly rigorous. It is evident that Diophantine sets are recursively enumerable (also known as semi-decidable). This is because one can arrange all possible tuples of values of the unknowns in a sequence and then, for a given value of the parameter(s), test these tuples, one after another, to see whether they are solutions of the corresponding equation. The unsolvability of Hilbert's tenth problem is a consequence of the surprising fact that the converse is true: "Every recursively enumerable set is Diophantine." This result is variously known as Matiyasevich's theorem (because he provided the crucial step that completed the proof) and the MRDP theorem (for Yuri Matiyasevich, Julia Robinson, Martin Davis, and Hilary Putnam). Because "there exists a recursively enumerable set that is not computable," the unsolvability of Hilbert's tenth problem is an immediate consequence. In fact, more can be said: there is a polynomial formula_11 with integer coefficients such that the set of values of formula_12 for which the equation formula_13 has solutions in natural numbers is not computable. So, not only is there no general algorithm for testing Diophantine equations for solvability, even for this one parameter family of equations, there is no algorithm. Applications. The Matiyasevich/MRDP theorem relates two notions—one from computability theory, the other from number theory—and has some surprising consequences. Perhaps the most surprising is the existence of a "universal" Diophantine equation: "There exists a polynomial" formula_14 "such that, given any Diophantine set" formula_15 "there is a number" formula_16 "such that" formula_17 This is true simply because Diophantine sets, being equal to recursively enumerable sets, are also equal to Turing machines. It is a well known property of Turing machines that there exist universal Turing machines, capable of executing any algorithm. Hilary Putnam has pointed out that for any Diophantine set formula_15 of positive integers, there is a polynomial formula_18 such that formula_15 consists of exactly the positive numbers among the values assumed by formula_19 as the variables formula_20 range over all natural numbers. This can be seen as follows: If formula_21 provides a Diophantine definition of formula_15, then it suffices to set formula_22 So, for example, there is a polynomial for which the positive part of its range is exactly the prime numbers. (On the other hand, no polynomial can only take on prime values.) 
The same holds for other recursively enumerable sets of natural numbers: the factorial, the binomial coefficients, the fibonacci numbers, etc. Other applications concern what logicians refer to as formula_23 propositions, sometimes also called propositions of "Goldbach type". These are like Goldbach's conjecture, in stating that all natural numbers possess a certain property that is algorithmically checkable for each particular number. The Matiyasevich/MRDP theorem implies that each such proposition is equivalent to a statement that asserts that some particular Diophantine equation has no solutions in natural numbers. A number of important and celebrated problems are of this form: in particular, Fermat's Last Theorem, the Riemann hypothesis, and the four color theorem. In addition the assertion that particular formal systems such as Peano arithmetic or ZFC are consistent can be expressed as formula_23 sentences. The idea is to follow Kurt Gödel in coding proofs by natural numbers in such a way that the property of being the number representing a proof is algorithmically checkable. formula_23 sentences have the special property that if they are false, that fact will be provable in any of the usual formal systems. This is because the falsity amounts to the existence of a counter-example that can be verified by simple arithmetic. So if a formula_23 sentence is such that neither it nor its negation is provable in one of these systems, that sentence must be true. A particularly striking form of Gödel's incompleteness theorem is also a consequence of the Matiyasevich/MRDP theorem: Let formula_25 provide a Diophantine definition of a non-computable set. Let formula_26 be an algorithm that outputs a sequence of natural numbers formula_24 such that the corresponding equation formula_27 has no solutions in natural numbers. Then there is a number formula_16 that is not output by formula_26 while in fact the equation formula_28 has no solutions in natural numbers. To see that the theorem is true, it suffices to notice that if there were no such number formula_16, one could algorithmically test membership of a number formula_24 in this non-computable set by simultaneously running the algorithm formula_26 to see whether formula_24 is output while also checking all possible formula_29-tuples of natural numbers seeking a solution of the equation formula_27 and we may associate an algorithm formula_26 with any of the usual formal systems such as Peano arithmetic or ZFC by letting it systematically generate consequences of the axioms and then output a number formula_24 whenever a sentence of the form formula_30 is generated. Then the theorem tells us that either a false statement of this form is proved or a true one remains unproved in the system in question. Further results. We may speak of the "degree" of a Diophantine set as being the least degree of a polynomial in an equation defining that set. Similarly, we can call the "dimension" of such a set the fewest unknowns in a defining equation. Because of the existence of a universal Diophantine equation, it is clear that there are absolute upper bounds to both of these quantities, and there has been much interest in determining these bounds. Already in the 1920s Thoralf Skolem showed that any Diophantine equation is equivalent to one of degree 4 or less. His trick was to introduce new unknowns by equations setting them equal to the square of an unknown or the product of two unknowns. 
Repetition of this process results in a system of second degree equations; then an equation of degree 4 is obtained by summing the squares. So every Diophantine set is trivially of degree 4 or less. It is not known whether this result is best possible. Julia Robinson and Yuri Matiyasevich showed that every Diophantine set has dimension no greater than 13. Later, Matiyasevich sharpened their methods to show that 9 unknowns suffice. Although it may well be that this result is not the best possible, there has been no further progress. So, in particular, there is no algorithm for testing Diophantine equations with 9 or fewer unknowns for solvability in natural numbers. For the case of rational integer solutions (as Hilbert had originally posed it), the 4-squares trick shows that there is no algorithm for equations with no more than 36 unknowns. But Zhi Wei Sun showed that the problem for integers is unsolvable even for equations with no more than 11 unknowns. Martin Davis studied algorithmic questions involving the number of solutions of a Diophantine equation. Hilbert's tenth problem asks whether or not that number is 0. Let formula_31 and let formula_32 be a proper non-empty subset of formula_26. Davis proved that there is no algorithm to test a given Diophantine equation to determine whether the number of its solutions is a member of the set formula_32. Thus there is no algorithm to determine whether the number of solutions of a Diophantine equation is finite, odd, a perfect square, a prime, etc. The proof of the MRDP theorem has been formalized in Coq. Extensions of Hilbert's tenth problem. Although Hilbert posed the problem for the rational integers, it can be just as well asked for many rings (in particular, for any ring whose number of elements is countable). Obvious examples are the rings of integers of algebraic number fields as well as the rational numbers. There has been much work on Hilbert's tenth problem for the rings of integers of algebraic number fields. Basing themselves on earlier work by Jan Denef and Leonard Lipschitz and using class field theory, Harold N. Shapiro and Alexandra Shlapentokh were able to prove: "Hilbert's tenth problem is unsolvable for the ring of integers of any algebraic number field whose Galois group over the rationals is abelian." Shlapentokh and Thanases Pheidas (independently of one another) obtained the same result for algebraic number fields admitting exactly one pair of complex conjugate embeddings. The problem for the ring of integers of algebraic number fields other than those covered by the results above remains open. Likewise, despite much interest, the problem for equations over the rationals remains open. Barry Mazur has conjectured that for any variety over the rationals, the topological closure over the reals of the set of solutions has only finitely many components. This conjecture implies that the integers are not Diophantine over the rationals and so if this conjecture is true a negative answer to Hilbert's Tenth Problem would require a different approach than that used for other rings. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
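Although no general decision procedure exists, any particular Diophantine equation can be searched for solutions, which is the recursive-enumerability observation used above. The Python sketch below performs such a brute-force search, capped at an arbitrary height bound for demonstration, on the two example equations from the introduction.

```python
from itertools import product

def search_solution(p, n_vars, max_height):
    """Bounded brute-force search for an integer zero of p.
    This is a semi-decision sketch: with an unbounded height it halts exactly
    on solvable equations; here the height is capped for demonstration."""
    for height in range(max_height + 1):
        values = range(-height, height + 1)
        for xs in product(values, repeat=n_vars):
            if max(abs(v) for v in xs) == height and p(*xs) == 0:
                return xs
    return None  # no solution of height <= max_height (proves nothing in general)

# 3x^2 - 2xy - y^2 z - 7 = 0 is solvable (for example x=1, y=2, z=-2);
# the search returns the first solution it encounters.
print(search_solution(lambda x, y, z: 3*x*x - 2*x*y - y*y*z - 7, 3, 4))

# x^2 + y^2 + 1 = 0 has no integer solution; the bounded search finds none.
print(search_solution(lambda x, y: x*x + y*y + 1, 2, 10))
```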
[ { "math_id": 0, "text": "3x^2-2xy-y^2z-7=0" }, { "math_id": 1, "text": "x=1,\\ y=2,\\ z=-2" }, { "math_id": 2, "text": "x^2+y^2+1=0" }, { "math_id": 3, "text": "\\overline{HSI}." }, { "math_id": 4, "text": "a_1x + a_2y = a_3" }, { "math_id": 5, "text": "\\gcd(a_1, a_2)" }, { "math_id": 6, "text": "a_3" }, { "math_id": 7, "text": "(a_1, a_2, a_3)" }, { "math_id": 8, "text": "p_1=0,\\ldots,p_k=0" }, { "math_id": 9, "text": "p_i" }, { "math_id": 10, "text": "p_1^{\\,2}+\\cdots+p_k^{\\,2}=0" }, { "math_id": 11, "text": "p(a,x_1,\\ldots,x_n)" }, { "math_id": 12, "text": "a" }, { "math_id": 13, "text": "p(a,x_1,\\ldots,x_n)=0" }, { "math_id": 14, "text": "p(a,n,x_1,\\ldots,x_k)" }, { "math_id": 15, "text": "S" }, { "math_id": 16, "text": "n_0" }, { "math_id": 17, "text": " S = \\{\\,a \\mid \\exists x_1, \\ldots, x_k[p(a,n_0,x_1,\\ldots,x_k)=0]\\,\\}." }, { "math_id": 18, "text": "q(x_0,x_1,\\ldots,x_n)" }, { "math_id": 19, "text": "q" }, { "math_id": 20, "text": "x_0,x_1,\\ldots,x_n" }, { "math_id": 21, "text": "p(a,y_1,\\ldots,y_n)=0" }, { "math_id": 22, "text": "q(x_0,x_1,\\ldots,x_n)= x_0[1- p(x_0,x_1,\\ldots,x_n)^2]." }, { "math_id": 23, "text": "\\Pi^{0}_1" }, { "math_id": 24, "text": "n" }, { "math_id": 25, "text": "p(a,x_1,\\ldots,x_k)=0" }, { "math_id": 26, "text": "A" }, { "math_id": 27, "text": "p(n,x_1,\\ldots,x_k)=0" }, { "math_id": 28, "text": "p(n_0,x_1,\\ldots,x_k)=0" }, { "math_id": 29, "text": "k" }, { "math_id": 30, "text": "\\neg \\exists x_1,\\ldots , x_k [p(n,x_1,\\ldots,x_k)=0]" }, { "math_id": 31, "text": "A=\\{0,1,2,3,\\ldots,\\aleph_0\\}" }, { "math_id": 32, "text": "C" } ]
https://en.wikipedia.org/wiki?curid=101851
101863
Linear independence
Vectors whose linear combinations are nonzero In the theory of vector spaces, a set of vectors is said to be linearly independent if there exists no nontrivial linear combination of the vectors that equals the zero vector. If such a linear combination exists, then the vectors are said to be linearly dependent. These concepts are central to the definition of dimension. A vector space can be of finite dimension or infinite dimension depending on the maximum number of linearly independent vectors. The definition of linear dependence and the ability to determine whether a subset of vectors in a vector space is linearly dependent are central to determining the dimension of a vector space. Definition. A sequence of vectors formula_0 from a vector space V is said to be "linearly dependent" if there exist scalars formula_1, not all zero, such that formula_2 where formula_3 denotes the zero vector. This implies that at least one of the scalars is nonzero, say formula_4, and the above equation can be written as formula_5 if formula_6 and formula_7 if formula_8 Thus, a set of vectors is linearly dependent if and only if one of them is zero or a linear combination of the others. A sequence of vectors formula_9 is said to be "linearly independent" if it is not linearly dependent, that is, if the equation formula_10 can only be satisfied by formula_11 for formula_12 This implies that no vector in the sequence can be represented as a linear combination of the remaining vectors in the sequence. In other words, a sequence of vectors is linearly independent if the only representation of formula_13 as a linear combination of its vectors is the trivial representation in which all the scalars formula_14 are zero. Even more concisely, a sequence of vectors is linearly independent if and only if formula_13 can be represented as a linear combination of its vectors in a unique way. If a sequence of vectors contains the same vector twice, it is necessarily dependent. The linear dependency of a sequence of vectors does not depend on the order of the terms in the sequence. This allows defining linear independence for a finite set of vectors: A finite set of vectors is "linearly independent" if the sequence obtained by ordering them is linearly independent. In other words, one has the following result that is often useful. A sequence of vectors is linearly independent if and only if it does not contain the same vector twice and the set of its vectors is linearly independent. Infinite case. An infinite set of vectors is "linearly independent" if every nonempty finite subset is linearly independent. Conversely, an infinite set of vectors is "linearly dependent" if it contains a finite subset that is linearly dependent, or equivalently, if some vector in the set is a linear combination of other vectors in the set. An indexed family of vectors is "linearly independent" if it does not contain the same vector twice, and if the set of its vectors is linearly independent. Otherwise, the family is said to be "linearly dependent". A set of vectors which is linearly independent and spans some vector space forms a basis for that vector space. For example, the vector space of all polynomials in x over the reals has the (infinite) subset {1, "x", "x"2, ...} as a basis. Geometric examples. Geographic location. 
A person describing the location of a certain place might say, "It is 3 miles north and 4 miles east of here." This is sufficient information to describe the location, because the geographic coordinate system may be considered as a 2-dimensional vector space (ignoring altitude and the curvature of the Earth's surface). The person might add, "The place is 5 miles northeast of here." This last statement is "true", but it is not necessary to find the location. In this example the "3 miles north" vector and the "4 miles east" vector are linearly independent. That is to say, the north vector cannot be described in terms of the east vector, and vice versa. The third "5 miles northeast" vector is a linear combination of the other two vectors, and it makes the set of vectors "linearly dependent", that is, one of the three vectors is unnecessary to define a specific location on a plane. Also note that if altitude is not ignored, it becomes necessary to add a third vector to the linearly independent set. In general, n linearly independent vectors are required to describe all locations in n-dimensional space. Evaluating linear independence. The zero vector. If one or more vectors from a given sequence of vectors formula_22 is the zero vector formula_3 then the vector formula_22 are necessarily linearly dependent (and consequently, they are not linearly independent). To see why, suppose that formula_23 is an index (i.e. an element of formula_24) such that formula_25 Then let formula_26 (alternatively, letting formula_27 be equal any other non-zero scalar will also work) and then let all other scalars be formula_28 (explicitly, this means that for any index formula_29 other than formula_23 (i.e. for formula_30), let formula_31 so that consequently formula_32). Simplifying formula_33 gives: formula_34 Because not all scalars are zero (in particular, formula_35), this proves that the vectors formula_22 are linearly dependent. As a consequence, the zero vector can not possibly belong to any collection of vectors that is linearly "in"dependent. Now consider the special case where the sequence of formula_22 has length formula_36 (i.e. the case where formula_37). A collection of vectors that consists of exactly one vector is linearly dependent if and only if that vector is zero. Explicitly, if formula_38 is any vector then the sequence formula_38 (which is a sequence of length formula_36) is linearly dependent if and only if formula_7; alternatively, the collection formula_38 is linearly independent if and only if formula_39 Linear dependence and independence of two vectors. This example considers the special case where there are exactly two vector formula_40 and formula_41 from some real or complex vector space. The vectors formula_40 and formula_41 are linearly dependent if and only if at least one of the following is true: If formula_45 then by setting formula_46 we have formula_47 (this equality holds no matter what the value of formula_41 is), which shows that (1) is true in this particular case. Similarly, if formula_48 then (2) is true because formula_49 If formula_50 (for instance, if they are both equal to the zero vector formula_3) then "both" (1) and (2) are true (by using formula_51 for both). 
If formula_43 then formula_52 is only possible if formula_53 "and" formula_54; in this case, it is possible to multiply both sides by formula_55 to conclude formula_56 This shows that if formula_52 and formula_54 then (1) is true if and only if (2) is true; that is, in this particular case either both (1) and (2) are true (and the vectors are linearly dependent) or else both (1) and (2) are false (and the vectors are linearly "in"dependent). If formula_43 but instead formula_45 then at least one of formula_42 and formula_41 must be zero. Moreover, if exactly one of formula_40 and formula_41 is formula_3 (while the other is non-zero) then exactly one of (1) and (2) is true (with the other being false). The vectors formula_40 and formula_41 are linearly "in"dependent if and only if formula_40 is not a scalar multiple of formula_41 "and" formula_41 is not a scalar multiple of formula_40. Vectors in R2. Three vectors: Consider the set of vectors formula_57 formula_58 and formula_59 then the condition for linear dependence seeks a set of non-zero scalars, such that formula_60 or formula_61 Row reduce this matrix equation by subtracting the first row from the second to obtain, formula_62 Continue the row reduction by (i) dividing the second row by 5, and then (ii) multiplying by 3 and adding to the first row, that is formula_63 Rearranging this equation allows us to obtain formula_64 which shows that non-zero "a""i" exist such that formula_65 can be defined in terms of formula_66 and formula_67 Thus, the three vectors are linearly dependent. Two vectors: Now consider the linear dependence of the two vectors formula_66 and formula_58 and check, formula_68 or formula_69 The same row reduction presented above yields, formula_70 This shows that formula_71 which means that the vectors formula_66 and formula_72 are linearly independent. Vectors in R4. In order to determine if the three vectors in formula_73 formula_74 are linearly dependent, form the matrix equation, formula_75 Row reduce this equation to obtain, formula_76 Rearrange to solve for v3 and obtain, formula_77 This equation is easily solved to define non-zero "a"i, formula_78 where formula_79 can be chosen arbitrarily. Thus, the vectors formula_80 and formula_81 are linearly dependent. Alternative method using determinants. An alternative method relies on the fact that formula_82 vectors in formula_83 are linearly independent if and only if the determinant of the matrix formed by taking the vectors as its columns is non-zero. In this case, the matrix formed by the vectors is formula_84 We may write a linear combination of the columns as formula_85 We are interested in whether "A"Λ = 0 for some nonzero vector Λ. This depends on the determinant of formula_86, which is formula_87 Since the determinant is non-zero, the vectors formula_88 and formula_89 are linearly independent. Otherwise, suppose we have formula_90 vectors of formula_82 coordinates, with formula_91 Then "A" is an "n"×"m" matrix and Λ is a column vector with formula_90 entries, and we are again interested in "A"Λ = 0. As we saw previously, this is equivalent to a list of formula_82 equations. Consider the first formula_90 rows of formula_86, the first formula_90 equations; any solution of the full list of equations must also be true of the reduced list. In fact, if ⟨"i"1...,"i""m"⟩ is any list of formula_90 rows, then the equation must be true for those rows. formula_92 Furthermore, the reverse is true. 
That is, we can test whether the formula_90 vectors are linearly dependent by testing whether formula_93 for all possible lists of formula_90 rows. (In case formula_94, this requires only one determinant, as above. If formula_95, then it is a theorem that the vectors must be linearly dependent.) This fact is valuable for theory; in practical calculations more efficient methods are available. More vectors than dimensions. If there are more vectors than dimensions, the vectors are linearly dependent. This is illustrated in the example above of three vectors in formula_96 Natural basis vectors. Let formula_97 and consider the following elements in formula_98, known as the natural basis vectors: formula_99 Then formula_100 are linearly independent. &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof Suppose that formula_101 are real numbers such that formula_102 Since formula_103 then formula_104 for all formula_105 Linear independence of functions. Let formula_98 be the vector space of all differentiable functions of a real variable formula_106. Then the functions formula_107 and formula_108 in formula_98 are linearly independent. Proof. Suppose formula_109 and formula_110 are two real numbers such that formula_111 Take the first derivative of the above equation: formula_112 for all values of formula_113 We need to show that formula_114 and formula_115 In order to do this, we subtract the first equation from the second, giving formula_116. Since formula_108 is not zero for some formula_106, formula_117 It follows that formula_114 too. Therefore, according to the definition of linear independence, formula_118 and formula_108 are linearly independent. Space of linear dependencies. A linear dependency or linear relation among vectors v1, ..., v"n" is a tuple ("a"1, ..., "a""n") with n scalar components such that formula_119 If such a linear dependence exists with at least a nonzero component, then the n vectors are linearly dependent. Linear dependencies among v1, ..., v"n" form a vector space. If the vectors are expressed by their coordinates, then the linear dependencies are the solutions of a homogeneous system of linear equations, with the coordinates of the vectors as coefficients. A basis of the vector space of linear dependencies can therefore be computed by Gaussian elimination. Generalizations. Affine independence. A set of vectors is said to be affinely dependent if at least one of the vectors in the set can be defined as an affine combination of the others. Otherwise, the set is called affinely independent. Any affine combination is a linear combination; therefore every affinely dependent set is linearly dependent. Conversely, every linearly independent set is affinely independent. Consider a set of formula_90 vectors formula_120 of size formula_82 each, and consider the set of formula_90 augmented vectors formula_121 of size formula_122 each. The original vectors are affinely independent if and only if the augmented vectors are linearly independent. Linearly independent vector subspaces. Two vector subspaces formula_123 and formula_124 of a vector space formula_125 are said to be linearly independent if formula_126 More generally, a collection formula_127 of subspaces of formula_125 are said to be linearly independent if formula_128 for every index formula_129 where formula_130 The vector space formula_125 is said to be a direct sum of formula_127 if these subspaces are linearly independent and formula_131 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
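The worked examples above can be reproduced numerically: linear independence of column vectors is equivalent to the matrix they form having rank equal to the number of vectors, and, for n vectors in n dimensions, to a non-zero determinant. The sketch below (using numpy as an implementation choice) checks the R2 and R4 examples given earlier.

```python
import numpy as np

def linearly_independent(*vectors):
    """Column vectors are linearly independent iff the matrix they form
    has rank equal to the number of vectors."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

# R^2 examples from the text.
v1, v2, v3 = [1, 1], [-3, 2], [2, 4]
print(linearly_independent(v1, v2, v3))            # False: three vectors in R^2
print(linearly_independent(v1, v2))                # True
print(np.linalg.det(np.column_stack([v1, v2])))    # 5 (up to rounding): non-zero

# R^4 example from the text: the three vectors are linearly dependent.
w1 = [1, 4, 2, -3]
w2 = [7, 10, -4, -1]
w3 = [-2, 1, 5, -4]
print(linearly_independent(w1, w2, w3))            # False
```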
[ { "math_id": 0, "text": "\\mathbf{v}_1, \\mathbf{v}_2, \\dots, \\mathbf{v}_k" }, { "math_id": 1, "text": "a_1, a_2, \\dots, a_k," }, { "math_id": 2, "text": "a_1\\mathbf{v}_1 + a_2\\mathbf{v}_2 + \\cdots + a_k\\mathbf{v}_k = \\mathbf{0}," }, { "math_id": 3, "text": "\\mathbf{0}" }, { "math_id": 4, "text": "a_1\\ne 0" }, { "math_id": 5, "text": "\\mathbf{v}_1 = \\frac{-a_2}{a_1}\\mathbf{v}_2 + \\cdots + \\frac{-a_k}{a_1} \\mathbf{v}_k," }, { "math_id": 6, "text": "k>1," }, { "math_id": 7, "text": "\\mathbf{v}_1 = \\mathbf{0}" }, { "math_id": 8, "text": "k=1." }, { "math_id": 9, "text": "\\mathbf{v}_1, \\mathbf{v}_2, \\dots, \\mathbf{v}_n" }, { "math_id": 10, "text": "a_1\\mathbf{v}_1 + a_2 \\mathbf{v}_2 + \\cdots + a_n\\mathbf{v}_n = \\mathbf{0}," }, { "math_id": 11, "text": "a_i=0" }, { "math_id": 12, "text": "i=1,\\dots,n." }, { "math_id": 13, "text": "\\mathbf 0" }, { "math_id": 14, "text": "a_i" }, { "math_id": 15, "text": "\\vec u" }, { "math_id": 16, "text": "\\vec v" }, { "math_id": 17, "text": "\\vec w" }, { "math_id": 18, "text": "\\vec j" }, { "math_id": 19, "text": "\\vec k" }, { "math_id": 20, "text": "\\vec o" }, { "math_id": 21, "text": "\\vec o = 0 \\vec k" }, { "math_id": 22, "text": "\\mathbf{v}_1, \\dots, \\mathbf{v}_k" }, { "math_id": 23, "text": "i" }, { "math_id": 24, "text": "\\{ 1, \\ldots, k \\}" }, { "math_id": 25, "text": "\\mathbf{v}_i = \\mathbf{0}." }, { "math_id": 26, "text": "a_{i} := 1" }, { "math_id": 27, "text": "a_{i}" }, { "math_id": 28, "text": "0" }, { "math_id": 29, "text": "j" }, { "math_id": 30, "text": "j \\neq i" }, { "math_id": 31, "text": "a_{j} := 0" }, { "math_id": 32, "text": "a_{j} \\mathbf{v}_j = 0 \\mathbf{v}_j = \\mathbf{0}" }, { "math_id": 33, "text": "a_1 \\mathbf{v}_1 + \\cdots + a_k\\mathbf{v}_k" }, { "math_id": 34, "text": "a_1 \\mathbf{v}_1 + \\cdots + a_k\\mathbf{v}_k = \\mathbf{0} + \\cdots + \\mathbf{0} + a_i \\mathbf{v}_i + \\mathbf{0} + \\cdots + \\mathbf{0} = a_i \\mathbf{v}_i = a_i \\mathbf{0} = \\mathbf{0}." }, { "math_id": 35, "text": "a_{i} \\neq 0" }, { "math_id": 36, "text": "1" }, { "math_id": 37, "text": "k = 1" }, { "math_id": 38, "text": "\\mathbf{v}_1" }, { "math_id": 39, "text": "\\mathbf{v}_1 \\neq \\mathbf{0}." }, { "math_id": 40, "text": "\\mathbf{u}" }, { "math_id": 41, "text": "\\mathbf{v}" }, { "math_id": 42, "text": "c" }, { "math_id": 43, "text": "\\mathbf{u} = c \\mathbf{v}" }, { "math_id": 44, "text": "\\mathbf{v} = c \\mathbf{u}" }, { "math_id": 45, "text": "\\mathbf{u} = \\mathbf{0}" }, { "math_id": 46, "text": "c := 0" }, { "math_id": 47, "text": "c \\mathbf{v} = 0 \\mathbf{v} = \\mathbf{0} = \\mathbf{u}" }, { "math_id": 48, "text": "\\mathbf{v} = \\mathbf{0}" }, { "math_id": 49, "text": "\\mathbf{v} = 0 \\mathbf{u}." }, { "math_id": 50, "text": "\\mathbf{u} = \\mathbf{v}" }, { "math_id": 51, "text": "c := 1" }, { "math_id": 52, "text": "\\mathbf{u} \\neq \\mathbf{0}" }, { "math_id": 53, "text": "c \\neq 0" }, { "math_id": 54, "text": "\\mathbf{v} \\neq \\mathbf{0}" }, { "math_id": 55, "text": "\\frac{1}{c}" }, { "math_id": 56, "text": "\\mathbf{v} = \\frac{1}{c} \\mathbf{u}." 
}, { "math_id": 57, "text": "\\mathbf{v}_1 = (1, 1)," }, { "math_id": 58, "text": "\\mathbf{v}_2 = (-3, 2)," }, { "math_id": 59, "text": "\\mathbf{v}_3 = (2, 4)," }, { "math_id": 60, "text": "a_1 \\begin{bmatrix} 1\\\\1\\end{bmatrix} + a_2 \\begin{bmatrix} -3\\\\2\\end{bmatrix} + a_3 \\begin{bmatrix} 2\\\\4\\end{bmatrix} =\\begin{bmatrix} 0\\\\0\\end{bmatrix}," }, { "math_id": 61, "text": "\\begin{bmatrix} 1 & -3 & 2 \\\\ 1 & 2 & 4 \\end{bmatrix}\\begin{bmatrix} a_1\\\\ a_2 \\\\ a_3 \\end{bmatrix}= \\begin{bmatrix} 0\\\\0\\end{bmatrix}." }, { "math_id": 62, "text": "\\begin{bmatrix} 1 & -3 & 2 \\\\ 0 & 5 & 2 \\end{bmatrix}\\begin{bmatrix} a_1\\\\ a_2 \\\\ a_3 \\end{bmatrix}= \\begin{bmatrix} 0\\\\0\\end{bmatrix}." }, { "math_id": 63, "text": "\\begin{bmatrix} 1 & 0 & 16/5 \\\\ 0 & 1 & 2/5 \\end{bmatrix}\\begin{bmatrix} a_1\\\\ a_2 \\\\ a_3 \\end{bmatrix}= \\begin{bmatrix} 0\\\\0\\end{bmatrix}." }, { "math_id": 64, "text": "\\begin{bmatrix} 1 & 0 \\\\ 0 & 1 \\end{bmatrix}\\begin{bmatrix} a_1\\\\ a_2 \\end{bmatrix}= \\begin{bmatrix} a_1\\\\ a_2 \\end{bmatrix}=-a_3\\begin{bmatrix} 16/5\\\\2/5\\end{bmatrix}." }, { "math_id": 65, "text": "\\mathbf{v}_3 = (2, 4)" }, { "math_id": 66, "text": "\\mathbf{v}_1 = (1, 1)" }, { "math_id": 67, "text": "\\mathbf{v}_2 = (-3, 2)." }, { "math_id": 68, "text": "a_1 \\begin{bmatrix} 1\\\\1\\end{bmatrix} + a_2 \\begin{bmatrix} -3\\\\2\\end{bmatrix} =\\begin{bmatrix} 0\\\\0\\end{bmatrix}," }, { "math_id": 69, "text": "\\begin{bmatrix} 1 & -3 \\\\ 1 & 2 \\end{bmatrix}\\begin{bmatrix} a_1\\\\ a_2 \\end{bmatrix}= \\begin{bmatrix} 0\\\\0\\end{bmatrix}." }, { "math_id": 70, "text": "\\begin{bmatrix} 1 & 0 \\\\ 0 & 1 \\end{bmatrix}\\begin{bmatrix} a_1\\\\ a_2 \\end{bmatrix}= \\begin{bmatrix} 0\\\\0\\end{bmatrix}." }, { "math_id": 71, "text": "a_i = 0," }, { "math_id": 72, "text": "\\mathbf{v}_2 = (-3, 2)" }, { "math_id": 73, "text": "\\mathbb{R}^4," }, { "math_id": 74, "text": "\\mathbf{v}_1= \\begin{bmatrix}1\\\\4\\\\2\\\\-3\\end{bmatrix}, \\mathbf{v}_2=\\begin{bmatrix}7\\\\10\\\\-4\\\\-1\\end{bmatrix}, \\mathbf{v}_3=\\begin{bmatrix}-2\\\\1\\\\5\\\\-4\\end{bmatrix}." }, { "math_id": 75, "text": "\\begin{bmatrix}1&7&-2\\\\4& 10& 1\\\\2&-4&5\\\\-3&-1&-4\\end{bmatrix}\\begin{bmatrix} a_1\\\\ a_2 \\\\ a_3 \\end{bmatrix} = \\begin{bmatrix}0\\\\0\\\\0\\\\0\\end{bmatrix}." }, { "math_id": 76, "text": "\\begin{bmatrix} 1& 7 & -2 \\\\ 0& -18& 9\\\\ 0 & 0 & 0\\\\ 0& 0& 0\\end{bmatrix} \\begin{bmatrix} a_1\\\\ a_2 \\\\ a_3 \\end{bmatrix} = \\begin{bmatrix}0\\\\0\\\\0\\\\0\\end{bmatrix}." }, { "math_id": 77, "text": "\\begin{bmatrix} 1& 7 \\\\ 0& -18 \\end{bmatrix} \\begin{bmatrix} a_1\\\\ a_2 \\end{bmatrix} = -a_3\\begin{bmatrix}-2\\\\9\\end{bmatrix}." }, { "math_id": 78, "text": "a_1 = -3 a_3 /2, a_2 = a_3/2," }, { "math_id": 79, "text": "a_3" }, { "math_id": 80, "text": "\\mathbf{v}_1, \\mathbf{v}_2," }, { "math_id": 81, "text": "\\mathbf{v}_3" }, { "math_id": 82, "text": "n" }, { "math_id": 83, "text": "\\mathbb{R}^n" }, { "math_id": 84, "text": "A = \\begin{bmatrix}1&-3\\\\1&2\\end{bmatrix} ." }, { "math_id": 85, "text": "A \\Lambda = \\begin{bmatrix}1&-3\\\\1&2\\end{bmatrix} \\begin{bmatrix}\\lambda_1 \\\\ \\lambda_2 \\end{bmatrix} ." }, { "math_id": 86, "text": "A" }, { "math_id": 87, "text": "\\det A = 1\\cdot2 - 1\\cdot(-3) = 5 \\ne 0." }, { "math_id": 88, "text": "(1, 1)" }, { "math_id": 89, "text": "(-3, 2)" }, { "math_id": 90, "text": "m" }, { "math_id": 91, "text": "m < n." }, { "math_id": 92, "text": "A_{\\lang i_1,\\dots,i_m \\rang} \\Lambda = \\mathbf{0} ." 
}, { "math_id": 93, "text": "\\det A_{\\lang i_1,\\dots,i_m \\rang} = 0" }, { "math_id": 94, "text": "m = n" }, { "math_id": 95, "text": "m > n" }, { "math_id": 96, "text": "\\R^2." }, { "math_id": 97, "text": "V = \\R^n" }, { "math_id": 98, "text": "V" }, { "math_id": 99, "text": "\\begin{matrix}\n\\mathbf{e}_1 & = & (1,0,0,\\ldots,0) \\\\\n\\mathbf{e}_2 & = & (0,1,0,\\ldots,0) \\\\\n& \\vdots \\\\\n\\mathbf{e}_n & = & (0,0,0,\\ldots,1).\\end{matrix}" }, { "math_id": 100, "text": "\\mathbf{e}_1, \\mathbf{e}_2, \\ldots, \\mathbf{e}_n" }, { "math_id": 101, "text": "a_1, a_2, \\ldots, a_n" }, { "math_id": 102, "text": "a_1 \\mathbf{e}_1 + a_2 \\mathbf{e}_2 + \\cdots + a_n \\mathbf{e}_n = \\mathbf{0}." }, { "math_id": 103, "text": "a_1 \\mathbf{e}_1 + a_2 \\mathbf{e}_2 + \\cdots + a_n \\mathbf{e}_n = \\left( a_1 ,a_2 ,\\ldots, a_n \\right)," }, { "math_id": 104, "text": "a_i = 0" }, { "math_id": 105, "text": "i = 1, \\ldots, n." }, { "math_id": 106, "text": "t" }, { "math_id": 107, "text": "e^t" }, { "math_id": 108, "text": "e^{2t}" }, { "math_id": 109, "text": "a" }, { "math_id": 110, "text": "b" }, { "math_id": 111, "text": "ae ^ t + be ^ {2t} = 0" }, { "math_id": 112, "text": "ae ^ t + 2be ^ {2t} = 0" }, { "math_id": 113, "text": "t." }, { "math_id": 114, "text": "a = 0" }, { "math_id": 115, "text": "b = 0." }, { "math_id": 116, "text": "be^{2t} = 0" }, { "math_id": 117, "text": "b=0." }, { "math_id": 118, "text": "e^{t}" }, { "math_id": 119, "text": "a_1 \\mathbf{v}_1 + \\cdots + a_n \\mathbf{v}_n= \\mathbf{0}." }, { "math_id": 120, "text": "\\mathbf{v}_1, \\ldots, \\mathbf{v}_m" }, { "math_id": 121, "text": "\\left(\\left[\\begin{smallmatrix} 1 \\\\ \\mathbf{v}_1\\end{smallmatrix}\\right], \\ldots, \\left[\\begin{smallmatrix}1 \\\\ \\mathbf{v}_m\\end{smallmatrix}\\right]\\right)" }, { "math_id": 122, "text": "n + 1" }, { "math_id": 123, "text": "M" }, { "math_id": 124, "text": "N" }, { "math_id": 125, "text": "X" }, { "math_id": 126, "text": "M \\cap N = \\{0\\}." }, { "math_id": 127, "text": "M_1, \\ldots, M_d" }, { "math_id": 128, "text": "M_i \\cap \\sum_{k \\neq i} M_k = \\{0\\}" }, { "math_id": 129, "text": "i," }, { "math_id": 130, "text": "\\sum_{k \\neq i} M_k = \\Big\\{m_1 + \\cdots + m_{i-1} + m_{i+1} + \\cdots + m_d : m_k \\in M_k \\text{ for all } k\\Big\\} = \\operatorname{span} \\bigcup_{k \\in \\{1,\\ldots,i-1,i+1,\\ldots,d\\}} M_k." }, { "math_id": 131, "text": "M_1 + \\cdots + M_d = X." } ]
https://en.wikipedia.org/wiki?curid=101863
10186385
Quadrature domains
In the branch of mathematics called potential theory, a quadrature domain in two-dimensional real Euclidean space is a domain D (an open connected set) together with a finite subset {"z"1, …, "z""k"} of D such that, for every function "u" harmonic and integrable over D with respect to area measure, the integral of "u" with respect to this measure is given by a "quadrature formula"; that is, formula_0 where the "c""j" are nonzero complex constants independent of "u". The most obvious example is when D is a circular disk: here "k" = 1, "z"1 is the center of the circle, and "c"1 equals the area of D. That quadrature formula expresses the mean value property of harmonic functions with respect to disks. It is known that quadrature domains exist for all values of "k". There is an analogous definition of quadrature domains in Euclidean space of dimension "d" larger than 2. There is also an alternative, electrostatic interpretation of quadrature domains: a domain D is a quadrature domain if a uniform distribution of electric charge on D creates the same electrostatic field outside D as does a "k"-tuple of point charges at the points "z"1, …, "z""k". Quadrature domains and numerous generalizations thereof (e.g., replace area measure by length measure on the boundary of D) have in recent years been encountered in various connections such as inverse problems of Newtonian gravitation, Hele-Shaw flows of viscous fluids, and purely mathematical isoperimetric problems, and interest in them seems to be steadily growing. They were the subject of an international conference at the University of California at Santa Barbara in 2003, and the state of the art as of that date can be seen in the proceedings of that conference, published by Birkhäuser Verlag.
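As a small numerical illustration of the disk case (written for this article and not taken from the literature cited here; the harmonic test function and the Monte Carlo scheme are arbitrary choices), the quadrature identity for the unit disk with "k" = 1, "z"1 = 0 and "c"1 = π reduces to the mean value property and can be checked approximately as follows.

    import math
    import random

    # Monte Carlo check of the quadrature identity for the unit disk D:
    # the integral of u over D should be close to c1 * u(z1) = pi * u(0, 0).
    random.seed(0)

    def u(x, y):
        # u = Re(exp(x + iy)) = e^x * cos(y) is harmonic on the whole plane.
        return math.exp(x) * math.cos(y)

    samples, inside, total = 200_000, 0, 0.0
    for _ in range(samples):
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1.0:       # keep only points that land in the disk
            inside += 1
            total += u(x, y)

    area = math.pi                      # c1 = area of the unit disk
    integral = area * total / inside    # estimate of the area integral of u over D
    print(integral, area * u(0.0, 0.0))  # both values should be close to pi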
[ { "math_id": 0, "text": "\n\\iint_D u\\, dx dy = \\sum_{j=1}^k c_j u(z_j),\n" } ]
https://en.wikipedia.org/wiki?curid=10186385
1018676
Field of sets
Algebraic concept in measure theory, also referred to as an algebra of sets In mathematics, a field of sets is a mathematical structure consisting of a pair formula_0 made up of a set formula_1 and a family formula_2 of subsets of formula_1 called an algebra over formula_1 that contains the empty set as an element, and is closed under the operations of taking complements in formula_3 finite unions, and finite intersections. Fields of sets should not be confused with fields in ring theory nor with fields in physics. Similarly, the term "algebra over formula_1" is used in the sense of a Boolean algebra and should not be confused with algebras over fields or rings in ring theory. Fields of sets play an essential role in the representation theory of Boolean algebras. Every Boolean algebra can be represented as a field of sets. Definitions. A field of sets is a pair formula_0 consisting of a set formula_1 and a family formula_2 of subsets of formula_3 called an algebra over formula_3 that has the following properties: (1) formula_2 contains the empty set (equivalently, the whole set formula_1) as an element; (2) formula_2 is closed under complementation in formula_1; and (3) formula_2 is closed under finite unions and finite intersections. In other words, formula_2 forms a subalgebra of the power set Boolean algebra of formula_4 (with the same identity element formula_5). Many authors refer to formula_2 itself as a field of sets. Elements of formula_1 are called points while elements of formula_2 are called complexes and are said to be the admissible sets of formula_6 A field of sets formula_0 is called a σ-field of sets and the algebra formula_2 is called a σ-algebra if the following additional condition (4) is satisfied: (4) formula_2 is closed under countable unions (and hence also under countable intersections). Fields of sets in the representation theory of Boolean algebras. Stone representation. For an arbitrary set formula_7 its power set formula_8 (or, somewhat pedantically, the pair formula_9 of this set and its power set) is a field of sets. If formula_10 is finite (namely, formula_11-element), then formula_8 is finite (namely, formula_12-element). It turns out that every finite field of sets (that is, formula_0 with formula_2 finite, while formula_1 may be infinite) admits a representation of the form formula_9 with finite formula_10; that is, there is a function formula_13 that establishes a one-to-one correspondence between formula_2 and formula_8 via inverse image: formula_14 where formula_15 and formula_16 (that is, formula_17). One notable consequence: the number of complexes, if finite, is always of the form formula_18 To this end one chooses formula_10 to be the set of all atoms of the given field of sets, and defines formula_19 by formula_20 whenever formula_21 for a point formula_22 and a complex formula_23 that is an atom; the latter means that a nonempty subset of formula_24 different from formula_24 cannot be a complex. In other words: the atoms are a partition of formula_1; formula_10 is the corresponding quotient set; and formula_19 is the corresponding canonical surjection. Similarly, every finite Boolean algebra can be represented as a power set – the power set of its set of atoms; each element of the Boolean algebra corresponds to the set of atoms below it (the join of which is the element). This power set representation can be constructed more generally for any complete atomic Boolean algebra. In the case of Boolean algebras which are not complete and atomic we can still generalize the power set representation by considering fields of sets instead of whole power sets. (The finite case of this representation can also be checked computationally, as in the sketch below.)
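As an illustration (written for this article; the ground set and the generating sets are arbitrary choices), the finite case can be verified directly: closing a few generators under complement and union yields a finite field of sets, its atoms form a partition, and the complexes correspond exactly to the subsets of the set of atoms.

    # Build the finite field of sets on X generated by a few subsets, then
    # verify the power-set representation through its atoms.
    X = frozenset(range(1, 7))
    generators = [frozenset({1, 2}), frozenset({3}), frozenset({4, 5, 6})]

    complexes = {frozenset(), X}
    complexes.update(generators)
    changed = True
    while changed:                       # close under complement and finite union
        changed = False
        current = list(complexes)
        for A in current:
            if X - A not in complexes:
                complexes.add(X - A)
                changed = True
            for B in current:
                if A | B not in complexes:
                    complexes.add(A | B)
                    changed = True

    # Atoms are the minimal nonempty complexes.
    nonempty = [A for A in complexes if A]
    atoms = [A for A in nonempty if not any(B < A for B in nonempty)]

    # Every complex is the union of the atoms it contains, and |F| = 2**(number of atoms).
    assert all(A == frozenset().union(*[a for a in atoms if a <= A]) for A in complexes)
    assert len(complexes) == 2 ** len(atoms)
    print(len(atoms), "atoms;", len(complexes), "complexes")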
To do this we first observe that the atoms of a finite Boolean algebra correspond to its ultrafilters and that an atom is below an element of a finite Boolean algebra if and only if that element is contained in the ultrafilter corresponding to the atom. This leads us to construct a representation of a Boolean algebra by taking its set of ultrafilters and forming complexes by associating with each element of the Boolean algebra the set of ultrafilters containing that element. This construction does indeed produce a representation of the Boolean algebra as a field of sets and is known as the Stone representation. It is the basis of Stone's representation theorem for Boolean algebras and an example of a completion procedure in order theory based on ideals or filters, similar to Dedekind cuts. Alternatively one can consider the set of homomorphisms onto the two element Boolean algebra and form complexes by associating each element of the Boolean algebra with the set of such homomorphisms that map it to the top element. (The approach is equivalent as the ultrafilters of a Boolean algebra are precisely the pre-images of the top elements under these homomorphisms.) With this approach one sees that Stone representation can also be regarded as a generalization of the representation of finite Boolean algebras by truth tables. Separative and compact fields of sets: towards Stone duality. A field of sets is called separative (or differentiated) if and only if for every pair of distinct points there is a complex containing one and not the other; it is called compact if and only if for every proper filter over the set of points the intersection of all the complexes contained in the filter is non-empty. These definitions arise from considering the topology generated by the complexes of a field of sets. (It is just one of the notable topologies on the given set of points; it often happens that another topology is given, with quite different properties, in particular, not zero-dimensional). Given a field of sets formula_25 the complexes form a base for a topology. We denote by formula_26 the corresponding topological space, formula_27 where formula_28 is the topology formed by taking arbitrary unions of complexes. Then formula_26 is always a zero-dimensional space; it is a Hausdorff space if and only if formula_29 is separative; it is a compact space with compact open sets formula_2 if and only if formula_29 is compact; and it is a Boolean space with clopen sets formula_2 if and only if formula_29 is both separative and compact. The Stone representation of a Boolean algebra is always separative and compact; the corresponding Boolean space is known as the Stone space of the Boolean algebra. The clopen sets of the Stone space are then precisely the complexes of the Stone representation. The area of mathematics known as Stone duality is founded on the fact that the Stone representation of a Boolean algebra can be recovered purely from the corresponding Stone space whence a duality exists between Boolean algebras and Boolean spaces. Fields of sets with additional structure. Sigma algebras and measure spaces. If an algebra over a set is closed under countable unions (hence also under countable intersections), it is called a sigma algebra and the corresponding field of sets is called a measurable space. The complexes of a measurable space are called measurable sets. The Loomis–Sikorski theorem provides a Stone-type duality between countably complete Boolean algebras (which may be called abstract sigma algebras) and measurable spaces. A measure space is a triple formula_30 where formula_0 is a measurable space and formula_31 is a measure defined on it. If formula_31 is in fact a probability measure, we speak of a probability space and call its underlying measurable space a sample space. The points of a sample space are called sample points and represent potential outcomes while the measurable sets (complexes) are called events and represent properties of outcomes for which we wish to assign probabilities.
(Many use the term sample space simply for the underlying set of a probability space, particularly in the case where every subset is an event.) Measure spaces and probability spaces play a foundational role in measure theory and probability theory respectively. In applications to Physics we often deal with measure spaces and probability spaces derived from rich mathematical structures such as inner product spaces or topological groups which already have a topology associated with them - this should not be confused with the topology generated by taking arbitrary unions of complexes. Topological fields of sets. A topological field of sets is a triple formula_32 where formula_27 is a topological space and formula_0 is a field of sets which is closed under the closure operator of formula_28 or equivalently under the interior operator i.e. the closure and interior of every complex is also a complex. In other words, formula_2 forms a subalgebra of the power set interior algebra on formula_33 Topological fields of sets play a fundamental role in the representation theory of interior algebras and Heyting algebras. These two classes of algebraic structures provide the algebraic semantics for the modal logic "S4" (a formal mathematical abstraction of epistemic logic) and intuitionistic logic respectively. Topological fields of sets representing these algebraic structures provide a related topological semantics for these logics. Every interior algebra can be represented as a topological field of sets with the underlying Boolean algebra of the interior algebra corresponding to the complexes of the topological field of sets and the interior and closure operators of the interior algebra corresponding to those of the topology. Every Heyting algebra can be represented by a topological field of sets with the underlying lattice of the Heyting algebra corresponding to the lattice of complexes of the topological field of sets that are open in the topology. Moreover the topological field of sets representing a Heyting algebra may be chosen so that the open complexes generate all the complexes as a Boolean algebra. These related representations provide a well defined mathematical apparatus for studying the relationship between truth modalities (possibly true vs necessarily true, studied in modal logic) and notions of provability and refutability (studied in intuitionistic logic) and is thus deeply connected to the theory of modal companions of intermediate logics. Given a topological space the clopen sets trivially form a topological field of sets as each clopen set is its own interior and closure. The Stone representation of a Boolean algebra can be regarded as such a topological field of sets, however in general the topology of a topological field of sets can differ from the topology generated by taking arbitrary unions of complexes and in general the complexes of a topological field of sets need not be open or closed in the topology. Algebraic fields of sets and Stone fields. A topological field of sets is called algebraic if and only if there is a base for its topology consisting of complexes. If a topological field of sets is both compact and algebraic then its topology is compact and its compact open sets are precisely the open complexes. Moreover, the open complexes form a base for the topology. Topological fields of sets that are separative, compact and algebraic are called Stone fields and provide a generalization of the Stone representation of Boolean algebras. 
Given an interior algebra we can form the Stone representation of its underlying Boolean algebra and then extend this to a topological field of sets by taking the topology generated by the complexes corresponding to the open elements of the interior algebra (which form a base for a topology). These complexes are then precisely the open complexes and the construction produces a Stone field representing the interior algebra - the Stone representation. (The topology of the Stone representation is also known as the McKinsey–Tarski Stone topology after the mathematicians who first generalized Stone's result for Boolean algebras to interior algebras and should not be confused with the Stone topology of the underlying Boolean algebra of the interior algebra which will be a finer topology). Preorder fields. A preorder field is a triple formula_34 where formula_35 is a preordered set and formula_0 is a field of sets. Like the topological fields of sets, preorder fields play an important role in the representation theory of interior algebras. Every interior algebra can be represented as a preorder field with its interior and closure operators corresponding to those of the Alexandrov topology induced by the preorder. In other words, for all formula_36: formula_37 and formula_38 Similarly to topological fields of sets, preorder fields arise naturally in modal logic where the points represent the "possible worlds" in the Kripke semantics of a theory in the modal logic "S4", the preorder represents the accessibility relation on these possible worlds in this semantics, and the complexes represent sets of possible worlds in which individual sentences in the theory hold, providing a representation of the Lindenbaum–Tarski algebra of the theory. They are a special case of the general modal frames which are fields of sets with an additional accessibility relation providing representations of modal algebras. Algebraic and canonical preorder fields. A preorder field is called algebraic (or tight) if and only if it has a set of complexes formula_39 which determines the preorder in the following manner: formula_40 if and only if for every complex formula_41, formula_42 implies formula_43. The preorder fields obtained from "S4" theories are always algebraic, the complexes determining the preorder being the sets of possible worlds in which the sentences of the theory closed under necessity hold. A separative compact algebraic preorder field is said to be canonical. Given an interior algebra, by replacing the topology of its Stone representation with the corresponding canonical preorder (specialization preorder) we obtain a representation of the interior algebra as a canonical preorder field. By replacing the preorder by its corresponding Alexandrov topology we obtain an alternative representation of the interior algebra as a topological field of sets. (The topology of this "Alexandrov representation" is just the Alexandrov bi-coreflection of the topology of the Stone representation.) While representation of modal algebras by general modal frames is possible for any normal modal algebra, it is only in the case of interior algebras (which correspond to the modal logic "S4") that the general modal frame corresponds to topological field of sets in this manner. Complex algebras and fields of sets on relational structures. The representation of interior algebras by preorder fields can be generalized to a representation theorem for arbitrary (normal) Boolean algebras with operators. 
For this we consider structures formula_44 where formula_45 is a relational structure, i.e., a set with an indexed family of relations defined on it, and formula_46 is a field of sets. The complex algebra (or algebra of complexes) determined by a field of sets formula_47 on a relational structure is the Boolean algebra with operators formula_48 where for all formula_49 if formula_50 is a relation of arity formula_51 then formula_52 is an operator of arity formula_11 and for all formula_53 formula_54 This construction can be generalized to fields of sets on arbitrary algebraic structures having both operators and relations, since operators can be viewed as a special case of relations. If formula_2 is the whole power set of formula_1 then formula_55 is called a full complex algebra or power algebra. Every (normal) Boolean algebra with operators can be represented as a field of sets on a relational structure in the sense that it is isomorphic to the complex algebra corresponding to the field.
[ { "math_id": 0, "text": "( X, \\mathcal{F} )" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "\\mathcal{F}" }, { "math_id": 3, "text": "X," }, { "math_id": 4, "text": "X " }, { "math_id": 5, "text": "X \\in \\mathcal{F}" }, { "math_id": 6, "text": "X." }, { "math_id": 7, "text": "Y," }, { "math_id": 8, "text": "2^Y" }, { "math_id": 9, "text": "( Y, 2^Y )" }, { "math_id": 10, "text": "Y" }, { "math_id": 11, "text": "n" }, { "math_id": 12, "text": "2^n" }, { "math_id": 13, "text": "f: X \\to Y" }, { "math_id": 14, "text": "S = f^{-1}[B] = \\{x \\in X \\mid f(x)\\in B \\}" }, { "math_id": 15, "text": "S\\in\\mathcal{F}" }, { "math_id": 16, "text": "B \\in 2^Y" }, { "math_id": 17, "text": "B\\subset Y" }, { "math_id": 18, "text": "2^n." }, { "math_id": 19, "text": "f" }, { "math_id": 20, "text": "f(x) = A" }, { "math_id": 21, "text": "x \\in A" }, { "math_id": 22, "text": "x \\in X" }, { "math_id": 23, "text": "A \\in \\mathcal{F}" }, { "math_id": 24, "text": "A" }, { "math_id": 25, "text": "\\mathbf{X} = ( X, \\mathcal{F} )" }, { "math_id": 26, "text": "T(\\mathbf{X})" }, { "math_id": 27, "text": "( X, \\mathcal{T} )" }, { "math_id": 28, "text": "\\mathcal{T}" }, { "math_id": 29, "text": "\\mathbf{X}" }, { "math_id": 30, "text": "( X, \\mathcal{F}, \\mu )" }, { "math_id": 31, "text": "\\mu" }, { "math_id": 32, "text": "( X, \\mathcal{T}, \\mathcal{F} )" }, { "math_id": 33, "text": "( X, \\mathcal{T} )." }, { "math_id": 34, "text": "( X, \\leq , \\mathcal{F} )" }, { "math_id": 35, "text": "( X, \\leq )" }, { "math_id": 36, "text": "S \\in \\mathcal{F}" }, { "math_id": 37, "text": "\\mathrm{Int}(S) = \\{ x \\in X : \\text{ there exists a } y \\in S \\text{ with } y \\leq x \\}" }, { "math_id": 38, "text": "\\mathrm{Cl}(S) = \\{ x \\in X : \\text{ there exists a } y \\in S \\text{ with } x \\leq y \\}" }, { "math_id": 39, "text": "\\mathcal{A}" }, { "math_id": 40, "text": "x \\leq y" }, { "math_id": 41, "text": "S \\in \\mathcal{A}" }, { "math_id": 42, "text": "x \\in S" }, { "math_id": 43, "text": "y \\in S" }, { "math_id": 44, "text": "( X, (R_i)_I, \\mathcal{F} ) " }, { "math_id": 45, "text": "( X,(R_i)_I ) " }, { "math_id": 46, "text": "( X, \\mathcal{F} ) " }, { "math_id": 47, "text": "\\mathbf{X} = ( X, \\left(R_i\\right)_I, \\mathcal{F} ) " }, { "math_id": 48, "text": "\\mathcal{C}(\\mathbf{X}) = ( \\mathcal{F}, \\cap, \\cup, \\prime, \\empty, X, (f_i)_I )" }, { "math_id": 49, "text": "i \\in I," }, { "math_id": 50, "text": "R_i" }, { "math_id": 51, "text": "n + 1," }, { "math_id": 52, "text": "f_i" }, { "math_id": 53, "text": "S_1, \\ldots, S_n \\in \\mathcal{F}" }, { "math_id": 54, "text": "f_i(S_1, \\ldots, S_n) = \\left\\{ x \\in X : \\text{ there exist } x_1 \\in S_1, \\ldots, x_n \\in S_n \\text{ such that } R_i(x_1, \\ldots, x_n, x) \\right\\}" }, { "math_id": 55, "text": "\\mathcal{C}(\\mathbf{X})" } ]
https://en.wikipedia.org/wiki?curid=1018676
1018783
Marshallian demand function
Microeconomic function In microeconomics, a consumer's Marshallian demand function (named after Alfred Marshall) is the quantity they demand of a particular good as a function of its price, their income, and the prices of other goods, a more technical exposition of the standard demand function. It is a solution to the utility maximization problem of how the consumer can maximize their utility for given income and prices. A synonymous term is uncompensated demand function, because when the price rises the consumer is not compensated with higher nominal income for the fall in their real income, unlike in the Hicksian demand function. Thus the change in quantity demanded is a combination of a substitution effect and a wealth effect. Although Marshallian demand is in the context of partial equilibrium theory, it is sometimes called Walrasian demand as used in general equilibrium theory (named after Léon Walras). According to the utility maximization problem, there are formula_0 commodities with price vector formula_1 and choosable quantity vector formula_2. The consumer has income formula_3, and hence a budget set of affordable packages formula_4 where formula_5 is the dot product of the price and quantity vectors. The consumer has a utility function formula_6 The consumer's Marshallian demand correspondence is defined to be formula_7 Revealed preference. Marshall's theory suggests that the pursuit of utility is a motivating factor for a consumer, and that utility is attained through the consumption of goods or services. The amount of utility a consumer derives depends on the level of consumption of the good in question and, reflecting a fundamental tendency of human nature, is subject to the law of diminishing marginal utility. As a utility maximum always exists, the Marshallian demand correspondence is nonempty at every value that corresponds to the standard budget set. Uniqueness. formula_8 is called a "correspondence" because in general it may be set-valued - there may be several different bundles that attain the same maximum utility. In some cases, there is a "unique" utility-maximizing bundle for each price and income situation; then, formula_8 is a function and it is called the Marshallian demand function. If the consumer has strictly convex preferences and the prices of all goods are strictly positive, then there is a unique utility-maximizing bundle. To prove this, suppose, by contradiction, that there are two different bundles, formula_9 and formula_10, that maximize the utility. Then formula_9 and formula_10 are equally preferred. By definition of strict convexity, the mixed bundle formula_11 is strictly better than formula_12. But this contradicts the optimality of formula_12. Continuity. The maximum theorem implies that if formula_13 is continuous in formula_14, and the budget correspondence formula_15 is non-empty, compact-valued, and continuous in formula_16, then formula_8 is an upper-semicontinuous correspondence. Moreover, if formula_8 is unique, then it is a continuous function of formula_17 and formula_18. Combining with the previous subsection, if the consumer has strictly convex preferences, then the Marshallian demand is unique and continuous. In contrast, if the preferences are not convex, then the Marshallian demand may be non-unique and non-continuous. Homogeneity. The optimal Marshallian demand correspondence of a continuous utility function is homogeneous of degree zero. This means that for every constant formula_19 formula_20 This is intuitively clear. Suppose formula_17 and formula_18 are measured in dollars. 
When formula_21, formula_22 and formula_23 are exactly the same quantities measured in cents. When prices and wealth go up by a factor a, the purchasing pattern of an economic agent remains constant. Obviously, expressing prices and income in different units of measurement should not affect the demand. Demand curve. Marshall's theory exploits the fact that the demand curve represents an individual's diminishing marginal valuation of the good. The theory holds that the consumer's purchasing decision depends on the utility obtainable from a good or service compared with its price, since the additional utility that the consumer gains must be at least as great as the price. In this view, the demand price equals the maximum price that the consumer would pay for an extra unit of the good or service. Hence, the utility is held constant along the demand curve. When the marginal utility of income is constant, or its value is the same across the individuals making up a market demand curve, the net benefit of the purchased units, that is, the consumer surplus, can be obtained by adding up the demand prices. Examples. In the following examples, there are two commodities, 1 and 2. 1. The utility function has the Cobb–Douglas form: formula_24 The constrained optimization leads to the Marshallian demand function: formula_25 2. The utility function is a CES utility function: formula_26 Then formula_27 In both cases, the preferences are strictly convex, the demand is unique and the demand function is continuous. 3. The utility function has the linear form: formula_28 The preferences are only weakly convex, and indeed the demand is not unique: when formula_29, the consumer may divide his income in arbitrary ratios between product types 1 and 2 and get the same utility. 4. The utility function exhibits a non-diminishing marginal rate of substitution: formula_30 The preferences are not convex, and indeed the demand is not continuous: when formula_31, the consumer demands only product 1, and when formula_32, the consumer demands only product 2 (when formula_29 the demand correspondence contains two distinct bundles: either buy only product 1 or buy only product 2).
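The Cobb–Douglas case in Example 1 is simple enough to check directly. The sketch below (an illustrative script written for this article; the function name, parameter values and test prices are arbitrary) evaluates the closed-form demand, confirms that it exhausts the budget, and confirms homogeneity of degree zero by scaling all prices and income together.

    def marshallian_cobb_douglas(p1, p2, income, alpha=0.6, beta=0.4):
        """Closed-form Marshallian demand for u(x1, x2) = x1**alpha * x2**beta."""
        x1 = alpha * income / ((alpha + beta) * p1)
        x2 = beta * income / ((alpha + beta) * p2)
        return x1, x2

    p1, p2, income = 2.0, 5.0, 100.0
    x1, x2 = marshallian_cobb_douglas(p1, p2, income)

    # The demand exhausts the budget.
    assert abs(p1 * x1 + p2 * x2 - income) < 1e-9

    # Homogeneity of degree zero: scaling all prices and income leaves demand unchanged.
    a = 100.0   # for example, switching from dollars to cents
    y1, y2 = marshallian_cobb_douglas(a * p1, a * p2, a * income)
    assert abs(y1 - x1) < 1e-9 and abs(y2 - x2) < 1e-9
    print(x1, x2)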
[ { "math_id": 0, "text": " L " }, { "math_id": 1, "text": " p " }, { "math_id": 2, "text": " x " }, { "math_id": 3, "text": " I " }, { "math_id": 4, "text": "B(p, I) = \\{x : p \\cdot x \\leq I\\}," }, { "math_id": 5, "text": " p \\cdot x = \\sum_i^L p_i x_i " }, { "math_id": 6, "text": "u : \\mathbb R^L_+ \\rightarrow \\mathbb R." }, { "math_id": 7, "text": "x^*(p, I) = \\operatorname{argmax}_{x \\in B(p, I)} u(x) " }, { "math_id": 8, "text": "x^*(p, I)" }, { "math_id": 9, "text": "x_1" }, { "math_id": 10, "text": "x_2" }, { "math_id": 11, "text": "0.5 x_1 + 0.5 x_2" }, { "math_id": 12, "text": "x_1 , x_2" }, { "math_id": 13, "text": "u(x)" }, { "math_id": 14, "text": "x" }, { "math_id": 15, "text": "B(p, I)" }, { "math_id": 16, "text": "p,I" }, { "math_id": 17, "text": "p" }, { "math_id": 18, "text": "I" }, { "math_id": 19, "text": "a>0," }, { "math_id": 20, "text": "x^*(a\\cdot p, a\\cdot I) = x^*(p, I)." }, { "math_id": 21, "text": "a=100" }, { "math_id": 22, "text": "ap" }, { "math_id": 23, "text": "aI" }, { "math_id": 24, "text": "u(x_1,x_2) = x_1^{\\alpha}x_2^{\\beta}." }, { "math_id": 25, "text": "x^*(p_1,p_2,I) = \\left(\\frac{\\alpha I}{(\\alpha+\\beta)p_1}, \\frac{\\beta I}{(\\alpha+\\beta)p_2}\\right)." }, { "math_id": 26, "text": "u(x_1,x_2) = \\left[ \\frac{x_1^{\\delta}}{\\delta} + \\frac{x_2^{\\delta}}{\\delta} \\right]^{\\frac{1}{\\delta}}." }, { "math_id": 27, "text": "x^*(p_1,p_2,I) = \\left(\\frac{I p_1^{\\epsilon-1}}{p_1^{\\epsilon} + p_2^{\\epsilon}}, \\frac{I p_2^{\\epsilon-1}}{p_1^{\\epsilon} + p_2^{\\epsilon}}\\right), \\quad \\text{with} \\quad \\epsilon = \\frac{\\delta}{\\delta-1}." }, { "math_id": 28, "text": "u(x_1,x_2) = x_1 + x_2." }, { "math_id": 29, "text": "p_1=p_2" }, { "math_id": 30, "text": "u(x_1,x_2) = (x_1^{\\alpha} + x_2^{\\alpha}), \\quad \\text{with} \\quad \\alpha > 1." }, { "math_id": 31, "text": "p_1<p_2" }, { "math_id": 32, "text": "p_2<p_1" } ]
https://en.wikipedia.org/wiki?curid=1018783
1018951
List of convexity topics
This is a list of convexity topics, by Wikipedia page.
[ { "math_id": 0, "text": "\\mathbb{R}^n" } ]
https://en.wikipedia.org/wiki?curid=1018951
1019002
Dirac measure
Measure that is 1 if and only if a specified element is in the set In mathematics, a Dirac measure assigns a size to a set based solely on whether it contains a fixed element "x" or not. It is one way of formalizing the idea of the Dirac delta function, an important tool in physics and other technical fields. Definition. A Dirac measure is a measure "δ""x" on a set "X" (with any "σ"-algebra of subsets of "X") defined for a given "x" ∈ "X" and any (measurable) set "A" ⊆ "X" by formula_0 where 1"A" is the indicator function of "A". The Dirac measure is a probability measure, and in terms of probability it represents the almost sure outcome "x" in the sample space "X". We can also say that the measure is a single atom at "x"; however, treating the Dirac measure as an atomic measure is not correct when we consider the sequential definition of Dirac delta, as the limit of a delta sequence. The Dirac measures are the extreme points of the convex set of probability measures on "X". The name is a back-formation from the Dirac delta function; considered as a Schwartz distribution, for example on the real line, measures can be taken to be a special kind of distribution. The identity formula_1 which, in the form formula_2 is often taken to be part of the definition of the "delta function", holds as a theorem of Lebesgue integration. Properties of the Dirac measure. Let "δ""x" denote the Dirac measure centred on some fixed point "x" in some measurable space ("X", Σ). Then "δ""x" is a probability measure, and hence a finite measure. Suppose that ("X", "T") is a topological space and that Σ is at least as fine as the Borel "σ"-algebra "σ"("T") on "X". Assuming that the topology "T" is fine enough that the singleton {"x"} is closed, as is the case in most applications, the support of "δ""x" is {"x"}. If "X" is Euclidean space R"n" with its Borel "σ"-algebra, then "δ""x" is singular with respect to the "n"-dimensional Lebesgue measure "λ""n": indeed, "δ""x"({"x"}) = 1 while "λ""n"({"x"}) = 0, and "δ""x"("B") = 0 for every Borel set "B" not containing "x", no matter how large its Lebesgue measure. Generalizations. A discrete measure is similar to the Dirac measure, except that it is concentrated at countably many points instead of a single point. More formally, a measure on the real line is called a discrete measure (with respect to the Lebesgue measure) if its support is at most a countable set.
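As a small illustration (written for this article; the function names and sample values are ad hoc), the defining identity and the integration identity above can be mirrored directly in code: the measure of a set is the indicator of membership, and integrating a function against "δ""x" simply evaluates it at "x".

    def dirac_measure(x, A):
        """delta_x(A) = 1 if x is in A, else 0 (the indicator of membership)."""
        return 1 if x in A else 0

    def integrate_against_dirac(f, x):
        """The Lebesgue integral of f with respect to delta_x is just f(x)."""
        return f(x)

    x = 2.5
    print(dirac_measure(x, {1.0, 2.5, 7.0}))               # 1: x lies in the set
    print(dirac_measure(x, {0.0, 1.0, 2.0}))               # 0: x does not lie in the set
    print(integrate_against_dirac(lambda t: t**2 + 1, x))  # f(x) = 7.25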
[ { "math_id": 0, "text": "\\delta_x (A) = 1_A(x)= \\begin{cases} 0, & x \\not \\in A; \\\\ 1, & x \\in A. \\end{cases}" }, { "math_id": 1, "text": "\\int_{X} f(y) \\, \\mathrm{d} \\delta_x (y) = f(x)," }, { "math_id": 2, "text": "\\int_X f(y) \\delta_x (y) \\, \\mathrm{d} y = f(x)," } ]
https://en.wikipedia.org/wiki?curid=1019002
1019142
Expenditure function
In microeconomics, the expenditure function gives the minimum amount of money an individual needs to spend to achieve some level of utility, given a utility function and the prices of the available goods. Formally, if there is a utility function formula_0 that describes preferences over "n" commodities, the expenditure function formula_1 says what amount of money is needed to achieve a utility formula_2 if the "n" prices are given by the price vector formula_3. This function is defined by formula_4 where formula_5 is the set of all bundles that give utility at least as good as formula_2. Expressed equivalently, the individual minimizes expenditure formula_6 subject to the minimal utility constraint that formula_7 giving optimal quantities to consume of the various goods as formula_8 as functions of formula_2 and the prices; then the expenditure function is formula_9 Features of Expenditure Functions. Suppose formula_0 is a continuous utility function representing a locally non-satiated preference relation on R"n"+. Then e(p, u) is 1. Homogeneous of degree one in p: for all formula_12 and formula_10, formula_11 2. Continuous in formula_12 and formula_13 3. Nondecreasing in formula_12 and strictly increasing in formula_14 provided formula_15 4. Concave in formula_16 5. If the utility function is strictly quasi-concave, Shephard's lemma holds. Proof. (1) As in the above proposition, note that formula_17 formula_18 formula_19 (2) Continuity holds on the domain of formula_20: formula_21 (3) Let formula_22 and suppose formula_23. Then formula_24, and formula_25. It follows immediately that formula_26. For the second statement, suppose to the contrary that for some formula_27, formula_28 Then, for some formula_29, formula_30, which contradicts the "no excess utility" conclusion of the previous proposition. (4) Let formula_31 and suppose formula_32. Then formula_33 and formula_34, so formula_35 formula_36. (5) formula_37 Expenditure and indirect utility. The expenditure function is the inverse of the indirect utility function when the prices are kept constant. That is, for every price vector formula_3 and income level formula_38: formula_39 There is a duality relationship between the expenditure function and the utility function: a regular quasi-concave utility function generates an expenditure function that is homogeneous of degree one in prices and monotonically increasing in utility, and conversely, an expenditure function that is homogeneous of degree one in prices and monotonically increasing in utility generates a regular quasi-concave utility function. In addition to being homogeneous of degree one in prices and monotonically increasing in utility, the expenditure function is usually assumed to be (1) non-negative, i.e., formula_40 (2) non-decreasing in prices, i.e., formula_41; and (3) concave in prices, i.e., formula_42 formula_43 The expenditure function is an important theoretical tool for studying consumer behavior and is closely analogous to the cost function in production theory: dual to the utility maximization problem is the expenditure (cost) minimization problem. Example. Suppose the utility function is the Cobb-Douglas function formula_44 which generates the demand functions formula_45 where formula_38 is the consumer's income. One way to find the expenditure function is to first find the indirect utility function and then invert it. 
The indirect utility function formula_46 is found by replacing the quantities in the utility function with the demand functions thus: formula_47 where formula_48 Then since formula_49 when the consumer optimizes, we can invert the indirect utility function to find the expenditure function: formula_50 Alternatively, the expenditure function can be found by solving the problem of minimizing formula_51 subject to the constraint formula_52 This yields conditional demand functions formula_53 and formula_54 and the expenditure function is then formula_55
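The Cobb–Douglas example can be checked numerically. The short script below (illustrative only; the exponents 0.6 and 0.4 follow the example, while the variable names and the sample prices are arbitrary choices made here) implements the closed-form indirect utility and expenditure functions and verifies that they invert each other, i.e. that e(p1, p2, v(p1, p2, I)) = I, as well as homogeneity of degree one in prices.

    ALPHA, BETA = 0.6, 0.4
    K = ALPHA**ALPHA * BETA**BETA

    def indirect_utility(p1, p2, income):
        """v(p1, p2, I) = K * p1**(-0.6) * p2**(-0.4) * I for u = x1**0.6 * x2**0.4."""
        return K * p1**(-ALPHA) * p2**(-BETA) * income

    def expenditure(p1, p2, u):
        """e(p1, p2, u) = (1/K) * p1**0.6 * p2**0.4 * u, the inverse in income."""
        return (1.0 / K) * p1**ALPHA * p2**BETA * u

    p1, p2, income = 2.0, 5.0, 100.0
    u_star = indirect_utility(p1, p2, income)

    # Inverting the indirect utility function recovers the original income.
    assert abs(expenditure(p1, p2, u_star) - income) < 1e-9

    # The expenditure function is homogeneous of degree one in prices.
    assert abs(expenditure(3 * p1, 3 * p2, u_star) - 3 * expenditure(p1, p2, u_star)) < 1e-9
    print(u_star, expenditure(p1, p2, u_star))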
[ { "math_id": 0, "text": "u" }, { "math_id": 1, "text": "e(p, u^*) : \\textbf R^n_+ \\times \\textbf R\n \\rightarrow \\textbf R" }, { "math_id": 2, "text": "u^*" }, { "math_id": 3, "text": "p" }, { "math_id": 4, "text": "e(p, u^*) = \\min_{x \\in \\geq(u^*)} p \\cdot x" }, { "math_id": 5, "text": "\\geq(u^*) = \\{x \\in \\textbf R^n_+ : u(x) \\geq u^*\\}" }, { "math_id": 6, "text": " x_1p_1+\\dots +x_n p_n" }, { "math_id": 7, "text": "u(x_1, \\dots , x_n) \\ge u^*," }, { "math_id": 8, "text": " x_1^*, \\dots x_n^*" }, { "math_id": 9, "text": "e(p_1, \\dots , p_n ; u^*)=p_1 x_1^*+\\dots + p_n x_n^*." }, { "math_id": 10, "text": " \\lambda > 0 " }, { "math_id": 11, "text": " e(\\lambda p,u)=\\lambda e(p,u); " }, { "math_id": 12, "text": " p" }, { "math_id": 13, "text": " u;" }, { "math_id": 14, "text": " u" }, { "math_id": 15, "text": " p \\gg 0 ; " }, { "math_id": 16, "text": " p " }, { "math_id": 17, "text": "e(\\lambda p,u)=\\min_{x\\in\\mathbb{R}^n_+ :u(x)\\geq u}" }, { "math_id": 18, "text": "\\lambda p\\cdot x=\\lambda \\min_{x\\in\\mathbb{R}^n_+ :u(x)\\geq u}" }, { "math_id": 19, "text": "p\\cdot x=\\lambda e(p,u)" }, { "math_id": 20, "text": "e" }, { "math_id": 21, "text": "\\textbf R_{++}^N*\\textbf R\\rightarrow \\textbf R\n" }, { "math_id": 22, "text": "p^\\prime>p" }, { "math_id": 23, "text": "x \\in h(p^\\prime,u)" }, { "math_id": 24, "text": "u(h)\\geq u" }, { "math_id": 25, "text": "e(p^\\prime,u)=p^\\prime\\cdot x\\geq p \\cdot x" }, { "math_id": 26, "text": "e(p,u)\\leq e(p^\\prime,u)" }, { "math_id": 27, "text": "u^\\prime > u" }, { "math_id": 28, "text": "e(p,u^\\prime)\\leq e(p,u)" }, { "math_id": 29, "text": "x \\in h(p,u)" }, { "math_id": 30, "text": "u(x)=u^\\prime>u" }, { "math_id": 31, "text": "t \\in(0,1)" }, { "math_id": 32, "text": "x \\in h(tp+(1-t)p^\\prime)" }, { "math_id": 33, "text": "p \\cdot x\\geq e(p,u)" }, { "math_id": 34, "text": "p^\\prime \\cdot x\\geq e(p^\\prime,u)" }, { "math_id": 35, "text": "e(tp+(1-t)p^\\prime,u)=(tp+(1-t)p^\\prime)\\cdot x\\geq" }, { "math_id": 36, "text": "te(p,u)+(1-t)e(p^\\prime,u)" }, { "math_id": 37, "text": "\\frac{\\delta(p^0,u^0)}{\\delta p_i}=x^h_i(p^0,u^0)\n" }, { "math_id": 38, "text": "I" }, { "math_id": 39, "text": "e(p, v(p,I)) \\equiv I" }, { "math_id": 40, "text": " E(P \\cdot u)>O; " }, { "math_id": 41, "text": " E(p^1 u)> E(p^2 u),u> Op^l>p^2> O_N " }, { "math_id": 42, "text": " e(np^l+(1-n)p^2)u )>\\lambda E(p^1u)(1-n)E(p^2u)y>0 " }, { "math_id": 43, "text": " O<\\lambda<1p^l\\geq O_Np^2 \\geq O_N " }, { "math_id": 44, "text": "u(x_1, x_2) = x_1^{.6}x_2^{.4}," }, { "math_id": 45, "text": " x_1(p_1, p_2, I) = \\frac{ .6I}{p_1} \\;\\;\\;\\; {\\rm and}\\;\\;\\; x_2(p_1, p_2, I) = \\frac{ .4I}{p_2}, " }, { "math_id": 46, "text": "v(p_1, p_2, I) " }, { "math_id": 47, "text": " v(p_1, p_2,I) = u(x_1^*, x_2^*) = (x_1^*)^{.6}(x_2^*)^{.4} = \\left( \\frac{ .6I}{p_1}\\right)^{.6} \\left( \\frac{ .4I}{p_2}\\right)^{.4} = (.6^{.6} \\times .4^{.4})I^{.6+.4}p_1^{-.6} p_2^{-.4} = K p_1^{-.6} p_2^{-.4}I, " }, { "math_id": 48, "text": "K = (.6^{.6} \\times .4^{.4}). " }, { "math_id": 49, "text": "e(p_1, p_2, u) = e(p_1, p_2, v(p_1, p_2, I)) =I" }, { "math_id": 50, "text": " e(p_1, p_2, u) = (1/K) p_1^{.6} p_2^{.4}u, " }, { "math_id": 51, "text": "(p_1x_1+ p_2x_2)" }, { "math_id": 52, "text": "u(x_1, x_2) \\geq u^*." }, { "math_id": 53, "text": "x_1^*(p_1, p_2, u^*)" }, { "math_id": 54, "text": "x_2^*(p_1, p_2, u^*)" }, { "math_id": 55, "text": "e(p_1, p_2, u^*) = p_1x_1^*+ p_2x_2^*" } ]
https://en.wikipedia.org/wiki?curid=1019142
1019195
Expenditure minimization problem
In microeconomics, the expenditure minimization problem is the dual of the utility maximization problem: "how much money do I need to reach a certain level of happiness?". This question comes in two parts. Given a consumer's utility function, prices, and a utility target: how much money would the consumer need? This is answered by the expenditure function. What could the consumer buy to reach the utility target while spending as little as possible? This is answered by the Hicksian demand function.
[ { "math_id": 0, "text": "u" }, { "math_id": 1, "text": "L" }, { "math_id": 2, "text": "p" }, { "math_id": 3, "text": "u^*" }, { "math_id": 4, "text": "e(p, u^*) = \\min_{x \\in \\geq{u^*}} p \\cdot x" }, { "math_id": 5, "text": "\\geq{u^*} = \\{x \\in \\mathbb{R}^L_+ : u(x) \\geq u^*\\}" }, { "math_id": 6, "text": "h : \\mathbb{R}^L_+ \\times \\mathbb{R}_+ \\to P(\\mathbb{R}^L_+)" }, { "math_id": 7, "text": "h(p, u^*) = \\underset{x \\in \\geq u^* }{\\operatorname{argmin}}\\ p \\cdot x" }, { "math_id": 8, "text": "h(p, u^*) = x(p, e(p, u^*)). \\," } ]
https://en.wikipedia.org/wiki?curid=1019195
1019406
Cuthill–McKee algorithm
In numerical linear algebra, the Cuthill–McKee algorithm (CM), named after Elizabeth Cuthill and James McKee, is an algorithm to permute a sparse matrix that has a symmetric sparsity pattern into a band matrix form with a small bandwidth. The reverse Cuthill–McKee algorithm (RCM) due to Alan George and Joseph Liu is the same algorithm but with the resulting index numbers reversed. In practice this generally results in less fill-in than the CM ordering when Gaussian elimination is applied. The Cuthill–McKee algorithm is a variant of the standard breadth-first search algorithm used in graph algorithms. It starts with a peripheral node and then generates levels formula_0 for formula_1 until all nodes are exhausted. The set formula_2 is created from set formula_3 by listing all vertices adjacent to all nodes in formula_4. These nodes are ordered according to predecessors and degree. Algorithm. Given a symmetric formula_5 matrix we visualize the matrix as the adjacency matrix of a graph. The Cuthill–McKee algorithm is then a relabeling of the vertices of the graph to reduce the bandwidth of the adjacency matrix. The algorithm produces an ordered "n"-tuple formula_6 of vertices which is the new order of the vertices. First we choose a peripheral vertex (the vertex with the lowest degree) formula_7 and set formula_8. Then for formula_9 we iterate the following steps while formula_10: construct the adjacency set formula_11 of formula_0 (with formula_0 the "i"-th component of formula_6) and exclude the vertices already in formula_6, that is, formula_12 then sort formula_11 in ascending order by minimum predecessor and, where predecessors coincide, by vertex degree, and finally append formula_11 to the result set formula_6. In other words, number the vertices according to a particular level structure (computed by breadth-first search) where the vertices in each level are visited in order of their predecessor's numbering from lowest to highest. Where the predecessors are the same, vertices are distinguished by degree (again ordered from lowest to highest).
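The procedure can be made concrete with a short pure-Python sketch (written for illustration; the example graph, variable names and tie-breaking details are choices made here, not part of the original algorithm description). It computes a Cuthill–McKee ordering by breadth-first search with neighbours appended in order of increasing degree, reverses it for RCM, and reports the bandwidth before and after.

    from collections import deque

    def cuthill_mckee(adj):
        """Cuthill-McKee ordering of an undirected graph given as {vertex: set of neighbours}."""
        degree = {v: len(adj[v]) for v in adj}
        order, visited = [], set()
        for start in sorted(adj, key=lambda v: degree[v]):   # also handles disconnected graphs
            if start in visited:
                continue
            visited.add(start)
            queue = deque([start])
            while queue:
                v = queue.popleft()
                order.append(v)
                # Unvisited neighbours are appended in order of increasing degree.
                for w in sorted(adj[v] - visited, key=lambda u: degree[u]):
                    visited.add(w)
                    queue.append(w)
        return order

    def bandwidth(adj, order):
        pos = {v: i for i, v in enumerate(order)}
        return max((abs(pos[u] - pos[v]) for u in adj for v in adj[u]), default=0)

    # A small symmetric sparsity pattern, written as an adjacency structure.
    adj = {0: {4, 5}, 1: {2, 6}, 2: {1, 3}, 3: {2, 6}, 4: {0}, 5: {0, 6}, 6: {1, 3, 5}}

    cm = cuthill_mckee(adj)
    rcm = list(reversed(cm))        # reverse Cuthill-McKee (RCM) ordering
    print("original bandwidth:", bandwidth(adj, sorted(adj)))
    print("RCM order:", rcm, "bandwidth:", bandwidth(adj, rcm))

For large sparse matrices, SciPy provides a ready-made routine, scipy.sparse.csgraph.reverse_cuthill_mckee, which operates directly on a sparse matrix in CSR or CSC format.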
[ { "math_id": 0, "text": "R_i" }, { "math_id": 1, "text": "i=1, 2,.." }, { "math_id": 2, "text": " R_{i+1} " }, { "math_id": 3, "text": " R_i" }, { "math_id": 4, "text": " R_i " }, { "math_id": 5, "text": "n\\times n" }, { "math_id": 6, "text": "R" }, { "math_id": 7, "text": "x" }, { "math_id": 8, "text": "R := ( \\{ x \\})" }, { "math_id": 9, "text": "i = 1,2,\\dots" }, { "math_id": 10, "text": "|R| < n" }, { "math_id": 11, "text": "A_i" }, { "math_id": 12, "text": "A_i := \\operatorname{Adj}(R_i) \\setminus R" } ]
https://en.wikipedia.org/wiki?curid=1019406
10195749
Power sum symmetric polynomial
In mathematics, specifically in commutative algebra, the power sum symmetric polynomials are a type of basic building block for symmetric polynomials, in the sense that every symmetric polynomial with rational coefficients can be expressed as a sum and difference of products of power sum symmetric polynomials with rational coefficients. However, not every symmetric polynomial with integral coefficients is generated by integral combinations of products of power-sum polynomials: they are a generating set over the "rationals," but not over the "integers." Definition. The power sum symmetric polynomial of degree "k" in formula_0 variables "x"1, ..., "x""n", written "p""k" for "k" = 0, 1, 2, ..., is the sum of all "k"th powers of the variables. Formally, formula_1 The first few of these polynomials are formula_2 formula_3 formula_4 formula_5 Thus, for each nonnegative integer formula_6, there exists exactly one power sum symmetric polynomial of degree formula_6 in formula_0 variables. The polynomial ring formed by taking all integral linear combinations of products of the power sum symmetric polynomials is a commutative ring. Examples. The following lists the formula_0 power sum symmetric polynomials of positive degrees up to "n" for the first three positive values of formula_7 In every case, formula_8 is one of the polynomials. The list goes up to degree "n" because the power sum symmetric polynomials of degrees 1 to "n" are basic in the sense of the theorem stated below. For "n" = 1: formula_9 For "n" = 2: formula_10 formula_11 For "n" = 3: formula_12 formula_13 formula_14 Properties. The set of power sum symmetric polynomials of degrees 1, 2, ..., "n" in "n" variables generates the ring of symmetric polynomials in "n" variables. More specifically: Theorem. The ring of symmetric polynomials with rational coefficients equals the rational polynomial ring formula_15 The same is true if the coefficients are taken in any field of characteristic 0. However, this is not true if the coefficients must be integers. For example, for "n" = 2, the symmetric polynomial formula_16 has the expression formula_17 which involves fractions. According to the theorem this is the only way to represent formula_18 in terms of "p"1 and "p"2. Therefore, "P" does not belong to the integral polynomial ring formula_19 For another example, the elementary symmetric polynomials "e""k", expressed as polynomials in the power sum polynomials, do not all have integral coefficients. For instance, formula_20 The theorem is also untrue if the field has characteristic different from 0. For example, if the field "F" has characteristic 2, then formula_21, so "p"1 and "p"2 cannot generate "e"2 = "x"1"x"2. "Sketch of a partial proof of the theorem": By Newton's identities the power sums are functions of the elementary symmetric polynomials; this is implied by the following recurrence relation, though the explicit function that gives the power sums in terms of the "e""j" is complicated: formula_22 Rewriting the same recurrence, one has the elementary symmetric polynomials in terms of the power sums (also implicitly, the explicit formula being complicated): formula_23 This implies that the elementary polynomials are rational, though not integral, linear combinations of the power sum polynomials of degrees 1, ..., "n". 
Since the elementary symmetric polynomials are an algebraic basis for all symmetric polynomials with coefficients in a field, it follows that every symmetric polynomial in "n" variables is a polynomial function formula_24 of the power sum symmetric polynomials "p"1, ..., "p""n". That is, the ring of symmetric polynomials is contained in the ring generated by the power sums, formula_15 Because every power sum polynomial is symmetric, the two rings are equal. For another system of symmetric polynomials with similar properties see complete homogeneous symmetric polynomials.
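Both identities used in the proof sketch are easy to spot-check numerically. The following snippet (an ad hoc check written for this article; the sample values are arbitrary) verifies, for three variables, that formula_20 holds and that Newton's recurrence gives p3 = e1*p2 - e2*p1 + 3*e3.

    from itertools import combinations
    from fractions import Fraction
    from math import prod

    x = [Fraction(2), Fraction(-3), Fraction(5)]   # arbitrary test values, n = 3 variables

    def p(k):
        """Power sum symmetric polynomial p_k evaluated at x."""
        return sum(xi**k for xi in x)

    def e(k):
        """Elementary symmetric polynomial e_k evaluated at x."""
        return sum(prod(c) for c in combinations(x, k))

    # e_2 = (p_1**2 - p_2) / 2: an integral symmetric polynomial needing rational coefficients.
    assert e(2) == (p(1)**2 - p(2)) / 2

    # Newton's recurrence for n = 3, k = 3: p_3 = e_1*p_2 - e_2*p_1 + 3*e_3.
    assert p(3) == e(1)*p(2) - e(2)*p(1) + 3*e(3)
    print(p(1), p(2), p(3), e(1), e(2), e(3))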
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": " p_k (x_1, x_2, \\dots,x_n) = \\sum_{i=1}^n x_i^k \\, ." }, { "math_id": 2, "text": "p_0 (x_1, x_2, \\dots,x_n) = 1 + 1 + \\cdots + 1 = n \\, ," }, { "math_id": 3, "text": "p_1 (x_1, x_2, \\dots,x_n) = x_1 + x_2 + \\cdots + x_n \\, ," }, { "math_id": 4, "text": "p_2 (x_1, x_2, \\dots,x_n) = x_1^2 + x_2^2 + \\cdots + x_n^2 \\, ," }, { "math_id": 5, "text": "p_3 (x_1, x_2, \\dots,x_n) = x_1^3 + x_2^3 + \\cdots + x_n^3 \\, ." }, { "math_id": 6, "text": "k" }, { "math_id": 7, "text": "n." }, { "math_id": 8, "text": "p_0 = n" }, { "math_id": 9, "text": "p_1 = x_1\\,." }, { "math_id": 10, "text": "p_1 = x_1 + x_2\\,," }, { "math_id": 11, "text": "p_2 = x_1^2 + x_2^2\\,." }, { "math_id": 12, "text": "p_1 = x_1 + x_2 + x_3\\,," }, { "math_id": 13, "text": "p_2 = x_1^2 + x_2^2 + x_3^2\\,," }, { "math_id": 14, "text": "p_3 = x_1^3+x_2^3+x_3^3\\,," }, { "math_id": 15, "text": "\\mathbb Q[p_1,\\ldots,p_n]." }, { "math_id": 16, "text": "P(x_1,x_2) = x_1^2 x_2 + x_1 x_2^2 + x_1x_2" }, { "math_id": 17, "text": "P(x_1,x_2) = \\frac{p_1^3-p_1p_2}{2} + \\frac{p_1^2-p_2}{2} \\,," }, { "math_id": 18, "text": "P(x_1,x_2)" }, { "math_id": 19, "text": "\\mathbb Z[p_1,\\ldots,p_n]." }, { "math_id": 20, "text": "e_2 := \\sum_{1 \\leq i<j \\leq n} x_ix_j = \\frac{p_1^2-p_2}{2} \\, ." }, { "math_id": 21, "text": "p_2 = p_1^2" }, { "math_id": 22, "text": "p_n = \\sum_{j=1}^n (-1)^{j-1} e_j p_{n-j} \\,." }, { "math_id": 23, "text": " e_n = \\frac{1}{n} \\sum_{j=1}^n (-1)^{j-1} e_{n-j} p_j \\,." }, { "math_id": 24, "text": "f(p_1,\\ldots,p_n)" } ]
https://en.wikipedia.org/wiki?curid=10195749
1019627
Howard T. Odum
American ecologist (1924–2002) Howard Thomas Odum (September 1, 1924 – September 11, 2002), usually cited as H. T. Odum, was an American ecologist. He is known for his pioneering work on ecosystem ecology, and for his provocative proposals for additional laws of thermodynamics, informed by his work on general systems theory. Biography. Odum was the third child of Howard W. Odum, an American sociologist, and his wife, Anna Louise (née Kranz) Odum (1888–1965). He was the younger brother of Eugene Odum. Their father "encouraged his sons to go into science and to develop new techniques to contribute to social progress". Howard learned his early scientific lessons about (a) birds from his brother, (b) fish and the philosophy of biology while working after school for marine zoologist Robert Coker, and (c) electrical circuits from "The Boy Electrician" (1929) by Alfred Powell Morgan. Howard Thomas studied biology at the University of North Carolina at Chapel Hill, where he published his first paper while still an undergraduate. His education was interrupted for three years by his World War II service with the Army Air Force in Puerto Rico and the Panama Canal Zone, where he worked as a tropical meteorologist. After the war, he returned to the University of North Carolina and completed his B.S. in zoology (Phi Beta Kappa) in 1947. In 1947, Odum married Virginia Wood, and they later had two children. After Wood's death in 1973, he married Elisabeth C. Odum (who had four children from her previous marriage) in 1974. Odum's advice on how to manage a blended family was ; Elisabeth's was to hold back on discipline and new rules. In 1950, Odum earned his Ph.D. in zoology at Yale University, under the guidance of G. Evelyn Hutchinson. His dissertation was titled "The Biogeochemistry of Strontium: With Discussion on the Ecological Integration of Elements", which brought him into the emerging field of systems ecology. He made a meteorological "analysis of the global circulation of strontium, [and] anticipated in the late 1940s the view of the earth as one great ecosystem". While at Yale, Howard began his lifelong collaborations with his brother Eugene. In 1953, they published the first English-language textbook on systems ecology, "Fundamentals of Ecology". Howard wrote the chapter on energetics, which introduced his energy circuit language. They continued to collaborate in research as well as writing for the rest of their lives. For Howard, his energy systems language (which he called "energese") was itself a collaborative tool. From 1956 to 1963, Odum worked as the director of the Marine Institute of the University of Texas. During this time, he became aware of the interplay of ecological-energetic and economic forces. He taught in the Department of Zoology at the University of North Carolina at Chapel Hill, and was one of the professors in the new curriculum of Marine Sciences until 1970. That year he moved to the University of Florida, where he taught in the Environmental Engineering Sciences Department, founded and directed the Center for Environmental Policy, and founded the university's Center for Wetlands in 1973; it was the first center of its kind in the world that is still in operation today. Odum continued this work for 26 years until his retirement in 1996. In the 1960s and 1970s, Odum was also chairman of the International Biological Program's Tropical Biome planning committee. 
He was supported by large contracts with the United States Atomic Energy Commission, resulting in participation by nearly 100 scientists, who conducted radiation studies of a tropical rainforest. His featured project at the University of Florida in the 1970s was on recycling treated sewage into cypress swamps. This was one of the first projects to explore the now widespread approach of using wetlands as water quality improvement ecosystems. This is one of his most important contributions to the beginnings of the field of ecological engineering. In his last years, Odum was Graduate Research Professor Emeritus and Director of the Center for Environmental Policy. He was an avid birdwatcher in both his professional and personal life. The Ecological Society of America awarded Odum its Mercer Award to recognize his contributions to the study of the coral reef on Eniwetok Atoll. Odum also received the French Prix de Vie, and the Crafoord Prize of the Royal Swedish Academy of Sciences, considered the Nobel equivalent for bioscience. Charles A. S. Hall described Odum as one of the most innovative and important thinkers of the time. Hall noted that Odum, either alone or with his brother Eugene, received essentially all international prizes awarded to ecologists. The only higher education institution to award honorary degrees to both Odum brothers was Ohio State University, which honored Howard in 1995 and Eugene in 1999. Odum's contributions to ecosystems ecology have been recognized by the Mars Society, which named its experimental station the "H. T. Odum Greenhouse" at the suggestion of his former student Patrick Kangas. Kangas and his student, David Blersch, made significant contributions to the design of the waste water recycling system on the station. Odum's students have furthered his work at institutions around the world, most notably Mark Brown at the University of Florida, David Tilley and Patrick Kangas at the University of Maryland, Daniel Campbell at the United States Environmental Protection Agency, Enrique Ortega at the UNICAMP in Brazil, and Sergio Ulgiati at the University of Siena. Work done at these institutions continues to evolve and propagate Odum's concept of emergy. His former students Bill Mitsch, Robert Costanza, and Karin Limburg have been recognized internationally for their contributions to ecological engineering, ecological economics, ecosystem science, wetland ecology, estuarine ecology, ecological modeling, and related fields. Work: an overview. Odum left a large legacy in many fields associated with ecology, systems, and energetics. He studied ecosystems all over the world, and pioneered the study of several areas, some of which are now distinct fields of research. According to Hall (1995, p.ix), Odum published one of the first significant papers in each of several such areas. Odum's contributions to these and other areas are summarized below. Odum also wrote on radiation ecology, systems ecology, and unified science, among other topics. He was one of the first to discuss the use of ecosystems for life-support function in space travel. Some have suggested that Odum was technocratic in orientation, while others believe that he sided with those calling for "new values". Ecological modeling. A new integrative approach in ecology. In his 1950 Ph.D. thesis, Odum gave a novel definition of ecology as the study of large entities (ecosystems) at the "natural level of integration". 
In the traditional role of an ecologist, one of Odum's doctoral aims was to recognize and classify large cyclic entities (ecosystems). However, another of his aims was to make predictive generalizations about ecosystems, up to and including the world as a whole. For Odum, as a large entity, the world constituted a revolving cycle with high stability. It was this stability that, Odum believed, enabled him to talk about the teleology of such systems. While he was writing his thesis, Odum felt that the principle of natural selection was more than empirical, because it had a teleological, "stability over time" component. As an ecologist interested in the behavior and function of large entities over time, Odum sought to give a more general statement of natural selection so that it was equally applicable to large entities as to the small entities traditionally studied in biology. Odum also wanted to extend the scope and generality of natural selection to include large entities such as the world. This extension relied on the definition of an entity as a combination of properties that have some stability with time. Odum's approach was motivated by Lotka's ideas on the energetics of evolution. Ecosystem simulation. Odum used an analog of electrical energy networks to model the energy flow pathways of ecosystems. His analog electrical models had a significant role in the development of his approach to systems and have been recognized as one of the earliest instances of systems ecology. Electron flow in the electrical network represented the flow of material (e.g. carbon) in the ecosystem, charge in a capacitor was analogous to storage of a material, and the model was scaled to the ecosystem of interest by adjusting the sizes of the electrical components. Ecological analog of Ohm's Law. In the 1950s Odum introduced his electrical circuit diagrams of ecosystems to the Ecological Society of America. He claimed that energy was driven through ecological systems by an "ecoforce" analogous to the role of voltage in electrical circuits. Odum developed an analog of Ohm's Law intended to represent energy flows through ecosystems. In terms of steady state thermodynamics, Ohm's Law can be considered a special case of a more general flux law, where the flux (formula_0) "is proportional to the driving thermodynamic force (formula_1) with conductivity (formula_2)", or formula_3. Kangas states that Odum concluded that, as thermodynamic systems, ecosystems should also obey the force-flux law, and that Ohm's law and passive electrical analog circuits can be used to simulate ecosystems. In this simulation, Odum attempted to derive an ecological analog for electrical voltage. Voltage, or driving force, is related to the biomass in pounds per acre; the analogous concept required is the biomass activity, that is, the thermodynamic thrust, which may be linear. Exactly what this corresponds to in nature is still uncertain, as it is a new concept. Such considerations led Odum to ask important methodological questions: for example, what is a diode in nature? One needs a diode to allow biomass to accumulate after the voltage of the sun has gone down, or else the circuit reverses. Higher organisms such as fish act as diodes. The Silver Springs study. Silver Springs is a common type of spring-fed stream in Florida with a constant temperature and chemical composition. The study Odum conducted here was the first complete analysis of a natural ecosystem.
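The force-flux relation formula_3 behind Odum's passive analog circuits (see the "Ecological analog of Ohm's Law" passage above) can be illustrated with a short numerical sketch. The Python fragment below is not taken from Odum's own models; the single-storage structure, the function name, and all parameter values are illustrative assumptions. It simulates one storage ("tank") whose outflow obeys the linear force-flux law, the ecological counterpart of a capacitor discharging through a resistor.

```python
# Minimal sketch of an Odum-style passive-analog simulation: a single storage
# ("tank") with an outflow J = C * X, where the stored quantity X plays the
# role of the driving force and C is a conductivity. All values are
# illustrative, not taken from any of Odum's studies.

def simulate_tank(inflow, conductivity, x0, dt=0.1, steps=1000):
    """Euler integration of dX/dt = inflow - conductivity * X."""
    x = x0
    history = []
    for _ in range(steps):
        outflow = conductivity * x      # force-flux law: J = C * X
        x += (inflow - outflow) * dt    # net accumulation in the storage
        history.append(x)
    return history

# Example: constant energy inflow of 10 units/day, conductivity 0.2 per day.
# The storage approaches the steady state X* = inflow / conductivity = 50,
# just as a capacitor charges toward a fixed voltage through a resistor.
trajectory = simulate_tank(inflow=10.0, conductivity=0.2, x0=0.0)
print(round(trajectory[-1], 2))  # ~50.0 after enough steps
```

At steady state the inflow balances the outflow, the circuit-level counterpart of the balance between energy input and output that Odum measured in whole ecosystems such as Silver Springs.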
Odum started with an overall model and in his early work used a diagramming methodology very similar to the Sankey diagrams used in chemical process engineering. Starting from that overall model, Odum "mapped in detail all the flow routes to and from the stream. He measured the energy input of sun and rain, and of all organic matter - even that of the bread the tourists threw to the ducks and fish - and then measured the energy that gradually left the spring. In this way he was able to establish the stream's energy budget." Ecological and biological energetics. Around 1955 Odum directed studies into radioecology, which included the effects of radiation on the tropical rainforest in El Verde, Puerto Rico (Odum and Pidgeon), and the coral reefs and ocean ecology at Eniwetok Atoll. The Odum brothers were approached by the Atomic Energy Commission to undertake a detailed study of the atoll after nuclear testing; the atoll was sufficiently radioactive that upon their arrival the Odums were able to produce an autoradiographic image of a coral head by placing it on photographic paper. These studies were early applications of energy concepts to ecological systems, and explored the implications of the laws of thermodynamics when used in these new settings. From this view, biogeochemical cycles are driven by radiant energy. Odum expressed the balance between energy input and output as the ratio of production ("P") to respiration ("R"): "P/R". He classified water bodies based on their "P/R" ratios, which separated autotrophic from heterotrophic ecosystems: "[Odum's] measurements of flowing water metabolism were measurements of whole systems. Odum was measuring the community as a system, not adding up the metabolism of the components as Lindeman and many others had done". This reasoning appears to have followed that of Odum's doctoral supervisor, G. E. Hutchinson, who thought that if a community were an organism then it must have a form of metabolism. However, Golley notes that Odum attempted to go beyond the reporting of mere ratios, a move which resulted in the first serious disagreement in systems energetics. Maximum power theory and the proposal for additional laws of thermodynamics/energetics. In a controversial move, Odum and Richard Pinkerton (at the time a physicist at the University of Florida) were motivated by Alfred J. Lotka's articles on the energetics of evolution, and subsequently proposed the theory that natural systems tend to operate at an efficiency that produces the maximum power output, not the maximum efficiency. Energy Systems Language. By the end of the 1960s, Odum's electronic circuit ecological simulation models were replaced by a more general set of energy symbols. When combined to form systems diagrams, these symbols were considered by Odum and others to be the language of the macroscope, which could portray generalized patterns of energy flow: "Describing such patterns and reducing ecosystem complexities to flows of energy, Odum believed, would permit discovery of general ecosystem principles." Some have attempted to link it with the universal scientific language projects which have appeared throughout the history of natural philosophy. Kitching claimed that the language was a direct result of working with analogue computers, and reflected an electrical engineer's approach to the problem of system representation: "Because of its electrical analogy, the Odum system is relatively easy to turn into mathematical equations ...
If one is building a model of energy flow then certainly the Odum system should be given serious consideration... " Emergy. In the latter part of his career, in the 1990s, Odum and David M. Scienceman developed the idea of emergy as a specific use of the term embodied energy. Some consider the concept of "emergy", sometimes briefly defined as "energy memory", to be one of Odum's more significant contributions, but the concept is not without controversy and has its critics. Odum looked at natural systems as being formed by the use of various forms of energy in the past: "emergy is a measure of energy used in the past and thus is different from a measure of energy now. The unit of emergy (past available energy use) is the emjoule, as distinguished from joules used for available energy remaining now." From this he developed a principle of maximum empower, which might explain the evolution of self-organising open systems. However, the principle has only been demonstrated in a few experiments and is not widely recognized in the scientific community. Ecosystem ecology and systems ecology. For J. B. Hagen, the maximum power principle and the stability principle could be easily translated into the language of homeostasis and cybernetic systems. Hagen claims that the feedback loops in ecosystems were, for Odum, analogous to the kinds of feedback loops diagrammed in electronic circuits and cybernetic systems. This approach represented the migration of cybernetic ideas into ecology and led to the formulation of systems ecology. In Odum's work these concepts form part of what Hagen called an "ambitious and idiosyncratic attempt to create a universal science of systems". Macroscope. Hagen identified the systems thinking of Odum as a form of holistic thinking. Odum contrasted the holistic thinking of systems science with reductionistic microscopic thinking, and used the term "macroscope" to refer to the holistic view, which was a kind of "detail eliminator" allowing a simple diagram to be created. Microcosms. Odum was a pioneer in his use of small closed and open ecosystems in classroom teaching, which were often constructed from fish tanks or bottles and have been called microcosms. His microcosm studies influenced the design of Biosphere 2. Ecological economics. Ecological economics is an active field between economics and ecology, with annual conferences, international societies, and an international journal. While directing the Marine Institute of the University of Texas from 1956 to 1963, Odum became aware of the interplay of ecological-energetic and economic forces. He therefore funded research into the use of conventional economic approaches to quantify dollar values of ecological resources for recreational, treatment and other uses. This research calculated the potential value of primary production per bay surface area. For Hall, the importance of Odum's work came through his integration of systems, ecology, and energy with economics, together with Odum's view that economics can be evaluated in objective terms such as energy, rather than on a subjective willingness-to-pay basis. Ecological engineering. Ecological engineering is an emerging field of study between ecology and engineering, concerned with the design, monitoring and construction of ecosystems. The term ecological engineering was coined by Odum in 1962, before he worked at the University of Florida.
Ecological engineering, he wrote, is "those cases where the energy supplied by man is small relative to the natural sources but sufficient to produce large effects in the resulting patterns and processes." Ecological engineering as a practical field was developed by his former graduate student Bill Mitsch, who started and continues to edit the standard journal in the field, helped to start both international and U.S. societies devoted to ecological engineering, and has written two textbooks on the subject. One of Odum's last papers, an assessment of ecological engineering, was published in the journal "Ecological Engineering" in 2003, a year after his death. General systems theory. In 1991, Odum was elected the 30th president of the International Society for the Systems Sciences, formerly named the International Society for General Systems Research. He presented many papers on general systems theory at its annual conferences, and edited the last published General Systems Yearbook. The second, revised edition of his major lifework was retitled "Ecological and General Systems: An Introduction to Systems Ecology" (1994). Some of his energy models and simulations contained general systems components. Odum has been described as a "technocratic optimist", and his approach was significantly influenced by his father, who was also an advocate of viewing the social world through the various lenses of physical science. Within the processes on earth, Odum (1989) believed humans play a central role: he said that the "human is the biosphere's programmatic and pragmatic information processor for maximum performance". Publications. Odum wrote around 15 books and 300 papers, and a "Festschrift" volume ("Maximum Power: The Ideas and Applications of H. T. Odum", 1995) was published in honor of his work. Odum was also honored by the journal "Ecological Engineering" for his contributions to the field of ecological engineering and general ecology on the occasion of his 70th birthday. The publication included over 25 letters from distinguished scientists from all over the world, including Mitsch (lead editorial), John Allen, Robert Ulanowicz, Robert Beyers, Ariel Lugo, Martha Gilliland, Sandra Brown, Ramon Margalef, Paul Risser, Eugene Odum, Kathy Ewel, Kenneth Watt, Pat Kangas, Sven Jørgensen, Bob Knight, Rusong Wang, John Teal, Frank Golley, AnnMari and Bengt-Owe Jansson, Joan Browder, Carl Folke, Richard Wiegert, Scott Nixon, Gene Turner, John Todd, and James Zuchetto. Books. Articles (selection). References. Further reading.
[ { "math_id": 0, "text": "J" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "C" }, { "math_id": 3, "text": "J = CX" } ]
https://en.wikipedia.org/wiki?curid=1019627
10196392
Gaisser–Hillas function
The Gaisser–Hillas function is used in astroparticle physics. It parameterizes the longitudinal particle density in a cosmic ray air shower. The function was proposed in 1977 by Thomas K. Gaisser and Anthony Michael Hillas. The number of particles formula_0 as a function of traversed atmospheric depth formula_1 is expressed as formula_2 where formula_3 is the maximum number of particles, observed at depth formula_4, and formula_5 and formula_6 are parameters that depend on the mass and energy of the primary particle. Using the substitutions formula_7, formula_8 and formula_9, the function can be written in an alternative one-parametric ("m") form as formula_10 References.
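A brief numerical check of the equivalence of the two forms above can be written directly from the definitions. The following Python sketch is only an illustration: the values chosen for formula_5, formula_4 and formula_6 are typical-looking round numbers (in g/cm²) and are not tied to any particular measured shower.

```python
import math

def gaisser_hillas(X, N_max, X_max, X_0, lam):
    """Longitudinal shower profile N(X) in the original parameterization."""
    ratio = (X - X_0) / (X_max - X_0)
    return N_max * ratio ** ((X_max - X_0) / lam) * math.exp((X_max - X) / lam)

def gaisser_hillas_reduced(x, m):
    """One-parametric form n(x) = (x/m)^m * exp(m - x)."""
    return (x / m) ** m * math.exp(m - x)

# Illustrative parameters: X_0 = 0, X_max = 750 g/cm^2, lambda = 70 g/cm^2.
N_max, X_max, X_0, lam = 1e9, 750.0, 0.0, 70.0
for X in (200.0, 500.0, 750.0, 1000.0):
    n_full = gaisser_hillas(X, N_max, X_max, X_0, lam) / N_max
    n_red = gaisser_hillas_reduced((X - X_0) / lam, (X_max - X_0) / lam)
    assert abs(n_full - n_red) < 1e-9  # both forms agree numerically
print("profile peaks at X = X_max, where N(X_max) = N_max")
```

The check confirms that the substitutions leave the shape of the profile unchanged and that the function reaches its maximum at formula_4, where the normalized profile equals one.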
[ { "math_id": 0, "text": "N(X)" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "N(X)= N_\\text{max}\\left(\\frac{X-X_0}{X_\\text{max}-X_0}\\right)^{\\frac{X_\\text{max}-X_{0}}{\\lambda}}\\exp\\left(\\frac{X_\\text{max}-X}{\\lambda}\\right)," }, { "math_id": 3, "text": "N_\\text{max}" }, { "math_id": 4, "text": "X_\\text{max}" }, { "math_id": 5, "text": "X_0" }, { "math_id": 6, "text": "\\lambda" }, { "math_id": 7, "text": "n=\\frac{N}{N_\\text{max}}" }, { "math_id": 8, "text": "x=\\frac{X-X_0}{\\lambda}" }, { "math_id": 9, "text": "m=\\frac{X_\\text{max}-X_0}{\\lambda}" }, { "math_id": 10, "text": "n(x)=\\left(\\frac{x}{m}\\right)^m\\exp(m-x)=\\frac{x^m \\, e^{-x}}{m^m \\, e^{-m}}=\\exp\\left(m\\,(\\ln x-\\ln m)-(x-m)\\right)\\, ." } ]
https://en.wikipedia.org/wiki?curid=10196392
1019760
Weight transfer
Weight transfer and load transfer are two expressions used somewhat confusingly to describe two distinct effects: In the automobile industry, weight transfer customarily refers to the change in load borne by different wheels during acceleration. This would be more properly referred to as load transfer, and that is the expression used in the motorcycle industry, while weight transfer on motorcycles, to a lesser extent on automobiles, and cargo movement on either is due to a change in the CoM location relative to the wheels. This article uses this latter pair of definitions. Load transfer. In wheeled vehicles, load transfer is the measurable change of load borne by different wheels during acceleration (both longitudinal and lateral). This includes braking and deceleration (which is an acceleration at a negative rate). No motion of the center of mass relative to the wheels is necessary, and so load transfer may be experienced by vehicles with no suspension at all. Load transfer is a crucial concept in understanding vehicle dynamics. The same is true in bikes, though only longitudinally. Cause. The major forces that accelerate a vehicle occur at the tires' contact patches. Since these forces are not directed through the vehicle's CoM, one or more moments are generated. The forces of these moments are the tires' traction forces at pavement level and the equal but opposed inertial force acting at the CoM, and the moment arm is the distance from the pavement surface to the CoM. It is these moments that cause variation in the load distributed between the tires. Often this is interpreted by the casual observer as a pitching or rolling motion of the vehicle's body. A perfectly rigid vehicle without suspension, which would not exhibit pitching or rolling of the body, still undergoes load transfer. However, the pitching and rolling of the body of a non-rigid vehicle adds some (small) weight transfer, due to the (small) horizontal displacement of the CoM with respect to the wheel axles as the suspension travels vertically, and also due to deformation of the tires, i.e. displacement of the contact patch relative to the wheel. Lowering the CoM towards the ground is one method of reducing load transfer. As a result, load transfer is reduced in both the longitudinal and lateral directions. Another method of reducing load transfer is by increasing the wheel spacings. Increasing the vehicle's wheelbase (length) reduces longitudinal load transfer, while increasing the vehicle's track (width) reduces lateral load transfer. Most high performance automobiles are designed to sit as low as possible and usually have an extended wheelbase and track. One way to calculate the effect of load transfer, keeping in mind that this article uses "load transfer" to mean the phenomenon commonly referred to as "weight transfer" in the automotive world, is with the so-called "weight transfer equation": formula_0 or formula_1 where formula_2 is the change in load borne by the front wheels, formula_3 is the longitudinal acceleration, formula_4 is the acceleration of gravity, formula_5 is the center of mass height, formula_6 is the wheelbase, formula_7 is the total vehicle mass, and formula_8 is the total vehicle weight. Weight transfer involves the "actual" (relatively small) movement of the vehicle CoM relative to the wheel axes due to displacement of the chassis as the suspension complies, or of cargo or liquids within the vehicle, which results in a redistribution of the total vehicle load between the individual tires.
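As a quick illustration of the weight transfer equation above, the following sketch evaluates formula_1 for a braking vehicle. The vehicle figures are made-up round numbers rather than data for any particular car.

```python
# Longitudinal load transfer under braking, using Delta_W_front = (a/g)*(h/b)*w.
# All vehicle figures below are illustrative round numbers.
g = 9.81          # m/s^2, acceleration of gravity
mass = 1500.0     # kg, total vehicle mass
h = 0.55          # m, height of the center of mass
wheelbase = 2.70  # m
a = 0.8 * g       # braking deceleration of 0.8 g

weight = mass * g                                  # total vehicle weight, N
delta_front = (a / g) * (h / wheelbase) * weight   # load gained by the front axle
print(f"load transferred to the front axle: {delta_front:.0f} N "
      f"({100 * delta_front / weight:.1f}% of vehicle weight)")
# -> roughly 2400 N, i.e. about 16% of the vehicle weight moves from rear to front
```

The same relation applies laterally during cornering if the wheelbase is replaced by the track width, which is why a low CoM and a wide track both reduce load transfer.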
Center of mass. Weight transfer occurs as the vehicle's CoM shifts during automotive maneuvers. Acceleration causes the sprung mass to rotate about a geometric axis, resulting in relocation of the CoM. Front-back weight transfer is proportional to the ratio of the change in the longitudinal location of the CoM to the vehicle's wheelbase, and side-to-side weight transfer (summed over front and rear) is proportional to the ratio of the change in the CoM's lateral location to the vehicle's track. Liquids, such as fuel, readily flow within their containers, causing changes in the vehicle's CoM. As fuel is consumed, not only does the position of the CoM change, but the total weight of the vehicle is also reduced. By way of example, when a vehicle accelerates, a weight transfer toward the rear wheels can occur. An outside observer might witness this as the vehicle visibly leaning toward the back, or squatting. Conversely, under braking, weight transfer toward the front of the car can occur. Under hard braking it might be clearly visible even from inside the vehicle as the nose dives toward the ground (most of this will be due to load transfer). Similarly, during changes in direction (lateral acceleration), weight transfer to the outside of the direction of the turn can occur. Weight transfer is generally of far less practical importance than load transfer, for cars and SUVs at least. For instance, in a 0.9 g turn, a car with a track of 1650 mm and a CoM height of 550 mm will see a load transfer of 30% of the vehicle weight; that is, the outer wheels will see 60% more load than before, and the inner wheels 60% less. Total available grip will drop by around 6% as a result of this load transfer. At the same time, the CoM of the vehicle will typically move laterally and vertically relative to the contact patch by no more than 30 mm, leading to a weight transfer of less than 2%, and a corresponding reduction in grip of 0.01%. Traction. Load transfer causes the available traction at all four wheels to vary as the car brakes, accelerates, or turns. This bias toward one pair of tires doing more "work" than the other pair results in a net loss of total available traction. The net loss can be attributed to the phenomenon known as tire load sensitivity. An exception is during positive acceleration when the engine power is driving two or fewer wheels. In this situation, where not all the tires are being used to accelerate the vehicle, load transfer can be advantageous. As such, the most powerful cars are almost never front wheel drive, as the acceleration itself causes the front wheels' traction to decrease. This is why sports cars usually have either rear wheel drive or all wheel drive (and in the all wheel drive case, the power tends to be biased toward the rear wheels under normal conditions). Rollover. If (lateral) load transfer reaches the tire loading on one end of a vehicle, the inside wheel on that end will lift, causing a change in handling characteristics. If it reaches half the weight of the vehicle, it will start to roll over. Some large trucks will roll over before skidding, while passenger vehicles and small trucks usually roll over only when they leave the road. Fitting racing tires to a tall or narrow vehicle and then driving it hard may lead to rollover. References.
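The figures in the 0.9 g cornering example above follow from the same load transfer relation with the track substituted for the wheelbase. The short check below reproduces the 30% and 60% numbers; the quoted 6% grip loss depends on tire load sensitivity and is not modeled here.

```python
# Reproducing the lateral load-transfer figures from the 0.9 g cornering example.
a_over_g = 0.9    # lateral acceleration, in units of g
h = 0.550         # m, CoM height
track = 1.650     # m, track width

transfer_fraction = a_over_g * h / track   # fraction of vehicle weight transferred
outer_share = 0.5 + transfer_fraction      # outer wheels' share of total weight
inner_share = 0.5 - transfer_fraction      # inner wheels' share of total weight
print(f"load transfer: {100 * transfer_fraction:.0f}% of vehicle weight")    # 30%
print(f"outer wheels carry {100 * (outer_share / 0.5 - 1):.0f}% more load")  # 60%
print(f"inner wheels carry {100 * (1 - inner_share / 0.5):.0f}% less load")  # 60%
```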
[ { "math_id": 0, "text": "\\Delta \\mathrm{Weight}_\\mathrm{front} = a\\frac{h}{b}m" }, { "math_id": 1, "text": "\\Delta \\mathrm{Weight}_\\mathrm{front} = \\frac{a}{g} \\frac{h}{b}w" }, { "math_id": 2, "text": "\\Delta \\mathrm{Weight}_\\mathrm{front}" }, { "math_id": 3, "text": "a" }, { "math_id": 4, "text": "g" }, { "math_id": 5, "text": "h" }, { "math_id": 6, "text": "b" }, { "math_id": 7, "text": "m" }, { "math_id": 8, "text": "w" } ]
https://en.wikipedia.org/wiki?curid=1019760