id | title | text | formulas | url |
---|---|---|---|---|
965361 | Structuring element | In mathematical morphology, a structuring element is a shape, used to probe or interact with a given image, with the purpose of drawing conclusions on how this shape fits or misses the shapes in the image. It is typically used in morphological operations, such as dilation, erosion, opening, and closing, as well as the hit-or-miss transform.
According to Georges Matheron, knowledge about an object (e.g., an image) depends on the manner in which we probe (observe) it. In particular, the choice of a certain structuring element for a particular morphological operation influences the information one can obtain. There are two main characteristics that are directly related to structuring elements: their shape and their size.
Mathematical particulars and examples.
Structuring elements are particular cases of binary images, usually being small and simple. In mathematical morphology, binary images are subsets of a Euclidean space "R""d" or the integer grid "Z""d", for some dimension "d". Here are some examples of widely used structuring elements (denoted by "B"):
In the discrete case, a structuring element can also be represented as a set of pixels on a grid, assuming the values 1 (if the pixel belongs to the structuring element) or 0 (otherwise).
When used by a hit-or-miss transform, usually the structuring element is a composite of two disjoint sets (two simple structuring elements), one associated to the foreground, and one associated to the background of the image to be probed. In this case, an alternative representation of the composite structuring element is as a set of pixels which are either set (1, associated to the foreground), not set (0, associated to the background) or "don't care".
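To make the discrete representation concrete, here is a small illustrative sketch (not part of the original article) using NumPy and SciPy's ndimage module: a 3×3 cross-shaped structuring element, written as a binary array, probes a toy binary image via erosion and dilation. The particular image and element are arbitrary example choices.

```python
import numpy as np
from scipy import ndimage

# A 3x3 cross-shaped structuring element B: 1 = pixel belongs to B, 0 = it does not.
B = np.array([[0, 1, 0],
              [1, 1, 1],
              [0, 1, 0]], dtype=bool)

# A toy binary image: a 5x5 block of foreground pixels inside a 9x9 frame.
image = np.zeros((9, 9), dtype=bool)
image[2:7, 2:7] = True

# Erosion keeps only those pixels at which B, centred there, fits entirely in the foreground.
eroded = ndimage.binary_erosion(image, structure=B)

# Dilation marks every pixel at which B, centred there, hits the foreground at all.
dilated = ndimage.binary_dilation(image, structure=B)

print(eroded.astype(int))
print(dilated.astype(int))
```

For the hit-or-miss transform, `scipy.ndimage.binary_hit_or_miss` likewise accepts two disjoint structuring elements, matching the composite foreground/background/"don't care" representation described above.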
Notes.
| [
{
"math_id": 0,
"text": "3\\times 3"
},
{
"math_id": 1,
"text": "21\\times 21"
}
] | https://en.wikipedia.org/wiki?curid=965361 |
9653775 | Semimartingale | Type of stochastic process
In probability theory, a real valued stochastic process "X" is called a semimartingale if it can be decomposed as the sum of a local martingale and a càdlàg adapted finite-variation process. Semimartingales are "good integrators", forming the largest class of processes with respect to which the Itô integral and the Stratonovich integral can be defined.
The class of semimartingales is quite large (including, for example, all continuously differentiable processes, Brownian motion and Poisson processes). Submartingales and supermartingales together represent a subset of the semimartingales.
Definition.
A real valued process "X" defined on the filtered probability space (Ω,"F",("F""t")"t" ≥ 0,P) is called a semimartingale if it can be decomposed as
formula_0
where "M" is a local martingale and "A" is a càdlàg adapted process of locally bounded variation. This means that for almost all formula_1 and all compact intervals formula_2, the sample path formula_3 is of bounded variation.
An R"n"-valued process "X" = ("X"1, ..., "X""n") is a semimartingale if each of its components "X""i" is a semimartingale.
Alternative definition.
First, the simple predictable processes are defined to be linear combinations of processes of the form "H""t" = "A"1{"t" > "T"} for stopping times "T" and "F""T" -measurable random variables "A". The integral "H" ⋅ "X" for any such simple predictable process "H" and real valued process "X" is
formula_4
This is extended to all simple predictable processes by the linearity of "H" ⋅ "X" in "H".
A real valued process "X" is a semimartingale if it is càdlàg, adapted, and for every "t" ≥ 0,
formula_5
is bounded in probability. The Bichteler–Dellacherie Theorem states that these two definitions are equivalent.
Examples.
Although most continuous and adapted processes studied in the literature are semimartingales, this is not always the case.
Semimartingale decompositions.
By definition, every semimartingale is a sum of a local martingale and a finite-variation process. However, this decomposition is not unique.
Continuous semimartingales.
A continuous semimartingale uniquely decomposes as "X" = "M" + "A" where "M" is a continuous local martingale and "A" is a continuous finite-variation process starting at zero.
For example, if "X" is an Itō process satisfying the stochastic differential equation d"X"t = σt d"W"t + "b"t dt, then
formula_6
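As a rough numerical illustration (not from the article), the sketch below simulates such an Itô process with constant coefficients σ and "b" (arbitrary example values) on a time grid via an Euler–Maruyama scheme, accumulating the local-martingale part "M" and the finite-variation part "A" separately so that the decomposition "X" = "M" + "A" is explicit.

```python
import numpy as np

rng = np.random.default_rng(0)

T, n = 1.0, 1000                 # time horizon and number of steps (arbitrary choices)
dt = T / n
sigma, b, x0 = 0.5, 1.2, 0.0     # constant coefficients, purely illustrative

dW = rng.normal(0.0, np.sqrt(dt), size=n)        # Brownian increments

# Accumulate the two parts of the decomposition separately.
M = x0 + np.concatenate(([0.0], np.cumsum(sigma * dW)))       # local martingale part M_t
A = np.concatenate(([0.0], np.cumsum(np.full(n, b * dt))))    # finite-variation part A_t
X = M + A                                                     # the semimartingale X_t

# A is monotone here (b > 0), hence of locally bounded variation; M has no drift.
print(f"X_T = {X[-1]:.4f},  M_T = {M[-1]:.4f},  A_T = {A[-1]:.4f}")
```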
Special semimartingales.
A special semimartingale is a real valued process "formula_7" with the decomposition formula_8, where formula_9 is a local martingale and formula_10 is a predictable finite-variation process starting at zero. If this decomposition exists, then it is unique up to a P-null set.
Every special semimartingale is a semimartingale. Conversely, a semimartingale is a special semimartingale if and only if the process "X"t* ≡ sup"s" ≤ "t" |X"s"| is locally integrable.
For example, every continuous semimartingale is a special semimartingale, in which case "M" and "A" are both continuous processes.
Multiplicative decompositions.
Recall that formula_11 denotes the stochastic exponential of semimartingale formula_7. If formula_7 is a special semimartingale such that formula_12, then formula_13 and formula_14 is a local martingale. Process formula_15 is called the "multiplicative compensator" of formula_11 and the identity formula_16 the "multiplicative decomposition" of formula_11.
Purely discontinuous semimartingales / quadratic pure-jump semimartingales.
A semimartingale is called "purely discontinuous" (Kallenberg 2002) if its quadratic variation ["X"] is a finite-variation pure-jump process, i.e.,
formula_17.
By this definition, "time" is a purely discontinuous semimartingale even though it exhibits no jumps at all. The alternative (and preferred) terminology "quadratic pure-jump" semimartingale for a purely discontinuous semimartingale is motivated by the fact that the quadratic variation of a purely discontinuous semimartingale is a pure jump process. Every finite-variation semimartingale is a quadratic pure-jump semimartingale. An adapted continuous process is a quadratic pure-jump semimartingale if and only if it is of finite variation.
For every semimartingale X there is a unique continuous local martingale formula_18 starting at zero such that formula_19 is a quadratic pure-jump semimartingale. The local martingale formula_18 is called the "continuous martingale part of" "X".
Observe that formula_18 is measure-specific. If "formula_20" and "formula_21" are two equivalent measures then formula_22 is typically different from formula_23, while both formula_24 and formula_25 are quadratic pure-jump semimartingales. By Girsanov's theorem formula_26 is a continuous finite-variation process, yielding formula_27.
Continuous-time and discrete-time components of a semimartingale.
Every semimartingale formula_7 has a unique decomposition formula_28 where formula_29, the continuous-time component formula_30 does not jump at predictable times, and the discrete-time component formula_31 is equal to the sum of its jumps at predictable times in the semimartingale topology. One then has formula_32. Typical examples of the continuous-time component are Itô processes and Lévy processes. The discrete-time component is often taken to be a Markov chain but in general the predictable jump times may not be isolated points; for example, in principle formula_31 may jump at every rational time. Observe also that formula_31 is not necessarily of finite variation, even though it is equal to the sum of its jumps (in the semimartingale topology). For example, on the time interval formula_33 take formula_31 to have independent increments, with jumps at times formula_34 taking values formula_35 with equal probability.
Semimartingales on a manifold.
The concept of semimartingales, and the associated theory of stochastic calculus, extends to processes taking values in a differentiable manifold. A process "X" on the manifold "M" is a semimartingale if "f"("X") is a semimartingale for every smooth function "f" from "M" to R. Stochastic calculus for semimartingales on general manifolds requires the use of the Stratonovich integral.
References.
| [
{
"math_id": 0,
"text": "X_t = M_t + A_t"
},
{
"math_id": 1,
"text": " \\omega \\in \\Omega "
},
{
"math_id": 2,
"text": " I \\subset [0,\\infty) "
},
{
"math_id": 3,
"text": " I \\ni s \\mapsto A_s(\\omega) "
},
{
"math_id": 4,
"text": "H\\cdot X_t := 1_{\\{t>T\\}}A(X_t-X_T)."
},
{
"math_id": 5,
"text": "\\left\\{H\\cdot X_t:H{\\rm\\ is\\ simple\\ predictable\\ and\\ }|H|\\le 1\\right\\}"
},
{
"math_id": 6,
"text": "M_t=X_0+\\int_0^t\\sigma_s\\,dW_s,\\ A_t=\\int_0^t b_s\\,ds."
},
{
"math_id": 7,
"text": "X"
},
{
"math_id": 8,
"text": "X = M^X +B^X"
},
{
"math_id": 9,
"text": "M^X"
},
{
"math_id": 10,
"text": "B^X"
},
{
"math_id": 11,
"text": "\\mathcal{E}(X)"
},
{
"math_id": 12,
"text": "\\Delta B^X \\neq -1"
},
{
"math_id": 13,
"text": "\\mathcal{E}(B^X)\\neq 0"
},
{
"math_id": 14,
"text": "\\mathcal{E}(X)/\\mathcal{E}(B^X)=\\mathcal{E}\\left(\\int_0^\\cdot \\frac{M^X_u}{1+\\Delta B^X_u}\\right)"
},
{
"math_id": 15,
"text": "\\mathcal{E}(B^X)"
},
{
"math_id": 16,
"text": "\\mathcal{E}(X)=\\mathcal{E}\\left(\\int_0^\\cdot \\frac{M^X_u}{1+\\Delta B^X_u}\\right)\\mathcal{E}(B^X)"
},
{
"math_id": 17,
"text": "[X]_t=\\sum_{s\\le t}(\\Delta X_s)^2"
},
{
"math_id": 18,
"text": "X^c"
},
{
"math_id": 19,
"text": "X-X^c"
},
{
"math_id": 20,
"text": "P"
},
{
"math_id": 21,
"text": "Q"
},
{
"math_id": 22,
"text": "X^c(P)"
},
{
"math_id": 23,
"text": "X^c(Q)"
},
{
"math_id": 24,
"text": "X-X^c(P)"
},
{
"math_id": 25,
"text": "X-X^c(Q)"
},
{
"math_id": 26,
"text": "X^c(P)-X^c(Q)"
},
{
"math_id": 27,
"text": "[X^c(P)]=[X^c(Q)] = [X]-\\sum_{s\\leq\\cdot}(\\Delta X_s)^2"
},
{
"math_id": 28,
"text": "X = X_0 + X^{\\mathrm{qc}} +X^{\\mathrm{dp}},"
},
{
"math_id": 29,
"text": "X^{\\mathrm{qc}}_0=X^{\\mathrm{dp}}_0=0"
},
{
"math_id": 30,
"text": "X^{\\mathrm{qc}}"
},
{
"math_id": 31,
"text": "X^{\\mathrm{dp}}"
},
{
"math_id": 32,
"text": "[X^{\\mathrm{qc}},X^{\\mathrm{dp}}]=0"
},
{
"math_id": 33,
"text": "[0,\\infty)"
},
{
"math_id": 34,
"text": "\\{\\tau_n = 2-1/n\\}_{n\\in\\mathbb{N}}"
},
{
"math_id": 35,
"text": "\\pm 1/n"
}
] | https://en.wikipedia.org/wiki?curid=9653775 |
965399 | Go ranks and ratings | Ranks and rating systems used by the game Go
There are various systems of Go ranks and ratings that measure the skill in the traditional board game Go. Traditionally, Go rankings have been measured using a system of dan and kyu ranks. Especially in amateur play, these ranks facilitate the handicapping system, with a difference of one rank roughly corresponding to one free move at the beginning of the game. This system is also commonly used in many East Asian martial arts, where it often corresponds with a belt color. With the ready availability of calculators and computers, rating systems have been introduced. In such systems, a rating is rigorously calculated on the basis of game results.
Kyu and dan ranks.
Traditionally, the level of players has been defined using "kyu" and "dan" ranks. Kyu ranks are considered "student" ranks. Dan ranks are considered "master" ranks. Beginners who have just learned the rules of the game are usually around 30th kyu. As they progress, they advance numerically downwards through the kyu grades. The best kyu grade attainable is therefore 1st kyu. If players progress beyond 1st kyu, they will receive the rank of 1st dan, and from then on will move numerically upwards through the dan ranks. In martial arts, 1st dan is the equivalent of a black belt. The very best players may achieve a "professional dan rank".
The rank system is tabulated from the lowest to highest ranks:
Although almost all organizations use this system, there is no universal calibration. The methods of awarding each of those ranks and the corresponding levels of strength vary from country to country and among online Go servers. This means that a player who is considered to be a 2nd kyu in one country may only be considered a 5th kyu in another.
Differences in strength up to amateur dan level generally correspond to the level of handicap that would yield an even game between the two players. For instance, it is expected that a 3d player could give 2 handicap stones to a 1d player and win half of their games. In contrast, differences in professional ranks are much smaller, perhaps 1/4 to 1/3 of a handicap stone. Because 9p is the highest rank possible, there can be significant differences in strength between ordinary 9p players and the best players in the world, which may account for this variation.
Origin.
The first Go ranks were given in 2nd century (CE) China, when Handan Chun (Chinese: 邯郸淳) described the 9 Pin Zhi (九品制) ranking system in his book "Classic of Arts" (艺经).
From the early 17th century, the Japanese formalised the teaching and ranking of Go. The system was later used in martial arts schools, and is thought to be derived originally from court ranks in China. The fact that there are 9 professional dan grades is thought to find its base in the original 9 Chinese Pin Zhi grades.
Achieving a dan rank.
"Dan" (abbreviated online as "d") ranks are for advanced amateur players. Although many organisations let players choose their own kyu rank to a certain extent, dan ranks are often regulated. This means that players will have to show good results in tournaments or pass exams to be awarded a dan rank. Serious students of the game will often strive to attain a dan rank, much as martial arts practitioners will strive to achieve a black belt. For amateurs, dan ranks up to 7th dan are available. Above this level, a player must become a professional player to achieve further promotions. In Japan and China, some players are awarded an amateur 8th "dan" rank as an honorary title for exceptional achievement. In the United States, amateur dan ranks are often based on the AGA rating system. Under this system, some strong amateurs and former professional players have achieved up to 9th dan amateur, though generally they will register as 6th or 7th dan in international events. Similarly, some players have achieved 9th dan amateur ranks in the rating system of online Go servers.
Although players who have achieved professional dan ranks are nominally stronger than amateur dan players, in practice some of the strongest 7th dan amateur players have a playing level on par with that of some professional players. Such players have either never tried for a professional rank, or have chosen to remain amateur players because they do not want to make a career out of playing Go.
Professional ranks.
The professional dan ranking system is similar to that of amateurs in that it awards dan ranks that increase numerically with skill. The difference between these grades is much smaller than with amateurs however, and is not based on the number of handicap stones required. Professional dan ranks go up to 9th dan, but the strength difference between a 1st dan and a 9th dan professional is generally no more than 2–3 handicap stones.
To distinguish between professional "dan" and amateur "dan" ranks, the former is often abbreviated to "p" (sometimes called "ping") and the latter to "d". There was no such abbreviation in the past, and this is not generally used as an abbreviation beyond the Internet, where it is common, but not universal.
Rating systems.
With the invention of calculators and computers, it has become easy to calculate a rating for players based on the results of their games. Commonly used rating systems include the Elo and Glicko rating systems. Rating systems generally predict the probability that one player will defeat another player and use this prediction to rank a player's strength.
Elo ratings as used in Go.
The European Go Federation (EGF) implementation of the Elo rating system attempts to establish rough correspondence between ratings and kyu/dan ranks. This is done by varying some of the components of the Elo formula to achieve a close match to the adjacent table. The probability (SE) that the player with the lower rating, player A, wins against a higher rated player B is given by the formula
formula_0
where "D" = formula_1 is the difference between the two players' ratings, and "a" is a parameter that varies with the rating.
The probability that player B wins is calculated as
formula_2
The new rating of a player is calculated as
formula_3
"K" is varied depending on the rating of the players, because of the low confidence in (lower) amateur ratings (high fluctuation in the outcome) but high confidence in pro ratings (stable, consistent play). "K" is 116 at a rating of 100 and 10 at a rating of 2700.
In the EGF system, the Elo points won by the winner almost equal the ones lost by the loser and the maximum points movement is the constant "K" (from above). However, there is a slight inflationary mechanism built into the ratings adjustment after each game to compensate for the fact that newcomers usually bring fewer ELO points into the pool than they take out with them when they cease active play. Other Elo-flavor ratings such as the AGA, IGS, and DGS systems use maximum likelihood estimation to adjust ratings, so those systems are anchored by prior distributions rather than by attempting to ensure that the gain/loss of ratings is zero sum.
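As an illustration only, the win expectancy and rating update formulas above translate into a few lines of code. The article does not specify how "K" and "a" are interpolated between the quoted endpoints, so the linear interpolation and the value of "a" used below are placeholder assumptions, not the actual EGF tables.

```python
import math

def win_expectancy(r_a, r_b, a):
    """S_E(A): probability that the lower-rated player A beats player B."""
    d = r_b - r_a                       # rating difference D
    return 1.0 / (math.exp(d / a) + 1.0)

def update_rating(r_old, score, expected, k):
    """New rating R_n = R_o + K * (S - S_E); score is 1 for a win, 0 for a loss."""
    return r_old + k * (score - expected)

# Placeholder parameters: the article only quotes K = 116 at rating 100 and K = 10
# at rating 2700; the linear interpolation and a = 110 below are assumptions.
r_a, r_b = 2100.0, 2200.0
a = 110.0
k = 116 + (10 - 116) * (r_a - 100) / (2700 - 100)

s_e = win_expectancy(r_a, r_b, a)
print(f"expected score for A : {s_e:.3f}")
print(f"A wins  -> new rating {update_rating(r_a, 1, s_e, k):.1f}")
print(f"A loses -> new rating {update_rating(r_a, 0, s_e, k):.1f}")
```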
Other rating systems.
A variation of the Elo rating system called WHR ('Whole History Rating') differs from standard Elo in that it retroactively re-rates players based on their entire history after each new result is added, rather than incrementally changing a player's rating on a game-by-game basis. This involves more intense computation than other methods, but it is claimed that "in comparison to Elo, Glicko, TrueSkill, and decayed-history algorithms, WHR produces better predictions". The website Go Ratings implements the WHR method to calculate global player rankings.
Rating base.
The ratings of players are generally measured using the game results of Go competitions and tournaments. Most clubs and countries maintain their own ratings, as do Go playing servers. Go tournaments in Europe use the EGF Official ratings.
In a small club, ranks may be decided informally and adjusted manually when players consistently win or lose. In larger clubs or country wide rating systems, a mathematical ranking system is generally easier to maintain. Players can then be promoted or demoted based on their strength as calculated from their wins and losses.
Most Go playing servers use a mathematical rating system to keep track of the playing strength of their members. Such ratings may or may not be translated to kyu and dan ranks for the convenience of the players.
Player pools that do not regularly mix (such as different countries, or sub-groups on online servers) often result in divergent playing strengths compared to the same nominal rank level of other groups. Players asked to give their rank will therefore often qualify it with "in my country" or "on this Internet server".
Winning probabilities.
The rating indirectly represents the probability of winning an even game against other rated players. This probability depends only on the difference between the two players' ratings, but its magnitude varies greatly from one implementation to another. The American Go Association adopted a uniform standard deviation of 104, i.e. slightly more than one rank, while the European Go Federation ratings have a sliding standard deviation, from 200 for beginners down to 70 for top players. The IGS has a fixed standard deviation for all levels of play, but a non-standard distribution. The following table displays some of the differences:
Winning chances and handicaps in Go versus chess.
While in chess a player must take some risks to avoid a draw, in Go draws (jigo) are either impossible (with superko and non-integer komi, such as 6.5 points, as is common) or less likely in the case of integer komi. Also, an average game of Go lasts for 240 moves (120 moves in chess terms), compared to 40 in chess, so there are more opportunities for a weaker player to make sub-optimal moves. The ability to transform a small advantage into a win increases with playing strength. Due to this ability, stronger players are more consistent in their results against weaker players and will generally score a higher percentage of wins against opponents at the same rank distance. | [
{
"math_id": 0,
"text": "S_E(A) = \\frac{1}{e^{D/a} + 1}"
},
{
"math_id": 1,
"text": "R_B - R_A\\,"
},
{
"math_id": 2,
"text": "S_E(B) = 1 - S_E(A)\\,"
},
{
"math_id": 3,
"text": "R_n = R_o + K(S - S_E)\\,"
}
] | https://en.wikipedia.org/wiki?curid=965399 |
9654085 | Phred quality score | Measurement in DNA sequencing
A Phred quality score is a measure of the quality of the identification of the nucleobases generated by automated DNA sequencing. It was originally developed for the computer program Phred to help in the automation of DNA sequencing in the Human Genome Project. Phred quality scores are assigned to each nucleotide base call in automated sequencer traces. The FASTQ format encodes phred scores as ASCII characters alongside the read sequences. Phred quality scores have become widely accepted to characterize the quality of DNA sequences, and can be used to compare the efficacy of different sequencing methods. Perhaps the most important use of Phred quality scores is the automatic determination of accurate, quality-based consensus sequences.
Definition.
Phred quality scores formula_0 are logarithmically related to the base-calling error probabilities formula_1 and defined as
formula_2.
This relation can also be written as
formula_3.
For example, if Phred assigns a quality score of 30 to a base, the chances that this base is called incorrectly are 1 in 1000.
Equivalently, the Phred quality score is the negative of the error probability expressed in decibels (dB), relative to the reference level formula_4.
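The two relations translate directly into code. The sketch below converts between "Q" and "P", and also decodes a FASTQ quality string assuming the common Sanger-style ASCII offset of 33; that offset is a convention of the FASTQ encoding, not part of the Phred definition itself.

```python
import math

def error_prob(q):
    """P = 10^(-Q/10): probability that the base call is wrong."""
    return 10 ** (-q / 10)

def phred_score(p):
    """Q = -10 * log10(P)."""
    return -10 * math.log10(p)

def decode_fastq_quality(qual, offset=33):
    """Decode a FASTQ quality string into Phred scores (Sanger-style offset 33 assumed)."""
    return [ord(ch) - offset for ch in qual]

print(error_prob(30))                # 0.001 -> a Q30 base has a 1 in 1000 chance of being wrong
print(phred_score(0.001))            # 30.0
print(decode_fastq_quality("II?+"))  # [40, 40, 30, 10]
```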
History.
The idea of sequence quality scores can be traced back to the original description of the SCF file format by Rodger Staden's group in 1992. In 1995, Bonfield and Staden proposed a method to use base-specific quality scores to improve the accuracy of consensus sequences in DNA sequencing projects.
However, early attempts to develop base-specific quality scores had only limited success.
The first program to develop accurate and powerful base-specific quality scores was the program Phred. Phred was able to calculate highly accurate quality scores that were logarithmically linked to the error probabilities. Phred was quickly adopted by all the major genome sequencing centers as well as many other laboratories; the vast majority of the DNA sequences produced during the Human Genome Project were processed with Phred.
After Phred quality scores became the required standard in DNA sequencing, other manufacturers of DNA sequencing instruments, including Li-Cor and ABI, developed similar quality scoring metrics for their base calling software.
Methods.
Phred's approach to base calling and calculating quality scores was outlined by Ewing "et al.". To determine quality scores, Phred first calculates several parameters related to peak shape and peak resolution at each base. Phred then uses these parameters to look up a corresponding quality score in huge lookup tables. These lookup tables were generated from sequence traces where the correct sequence was known, and are hard coded in Phred; different lookup tables are used for different sequencing chemistries and machines. An evaluation of the accuracy of Phred quality scores for a number of variations in sequencing chemistry and instrumentation showed that Phred quality scores are highly accurate.
Phred was originally developed for "slab gel" sequencing machines like the ABI373. When originally developed, Phred had a lower base calling error rate than the manufacturer's base calling software, which also did not provide quality scores. However, Phred was only partially adapted to the capillary DNA sequencers that became popular later. In contrast, instrument manufacturers like ABI continued to adapt their base calling software to changes in sequencing chemistry, and have included the ability to create Phred-like quality scores. Therefore, the need to use Phred for base calling of DNA sequencing traces has diminished, and using the manufacturer's current software versions can often give more accurate results.
Applications.
Phred quality scores are used for assessment of sequence quality, recognition and removal of low-quality sequence (end clipping), and determination of accurate consensus sequences.
Originally, Phred quality scores were primarily used by the sequence assembly program Phrap. Phrap was routinely used in some of the largest sequencing projects in the Human Genome Sequencing Project and is currently one of the most widely used DNA sequence assembly programs in the biotech industry. Phrap uses Phred quality scores to determine highly accurate consensus sequences and to estimate the quality of the consensus sequences. Phrap also uses Phred quality scores to estimate whether discrepancies between two overlapping sequences are more likely to arise from random errors, or from different copies of a repeated sequence.
Within the Human Genome Project, the most important use of Phred quality scores was for automatic determination of consensus sequences. Before Phred and Phrap, scientists had to carefully look at discrepancies between overlapping DNA fragments; often, this involved manual determination of the highest-quality sequence, and manual editing of any errors. Phrap's use of Phred quality scores effectively automated finding the highest-quality consensus sequence; in most cases, this completely circumvents the need for any manual editing. As a result, the estimated error rate in assemblies that were created automatically with Phred and Phrap is typically substantially lower than the error rate of manually edited sequence.
As of 2009, many commonly used software packages make use of Phred quality scores, albeit to different extents. Programs like Sequencher use quality scores for display, end clipping, and consensus determination; other programs like CodonCode Aligner also implement quality-based consensus methods.
Compression.
Quality scores are normally stored together with the nucleotide sequence in the widely accepted FASTQ format. They account for about half of the required disk space in the FASTQ format (before compression), and therefore the compression of the quality values can significantly reduce storage requirements and speed up analysis and transmission of sequencing data. Both lossless and lossy compression have recently been considered in the literature. For example, the algorithm QualComp performs lossy compression with a rate (number of bits per quality value) specified by the user. Based on rate-distortion theory results, it allocates the number of bits so as to minimize the MSE (mean squared error) between the original (uncompressed) and the reconstructed (after compression) quality values. Other algorithms for compression of quality values include SCALCE, Fastqz and, more recently, QVZ, AQUa and the MPEG-G standard, which is currently under development by the MPEG standardisation working group. Several of these are lossless compression algorithms that provide an optional controlled lossy transformation approach. For example, SCALCE reduces the alphabet size based on the observation that “neighboring” quality values are similar in general.
References.
| [
{
"math_id": 0,
"text": "Q"
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": "Q = -10 \\ \\log_{10} P"
},
{
"math_id": 3,
"text": "P = 10^{\\frac{-Q}{10}}"
},
{
"math_id": 4,
"text": "P = 1 "
}
] | https://en.wikipedia.org/wiki?curid=9654085 |
965852 | B3 | B3, B03, B.III or B-3 may refer to:
Other uses.
This page lists articles associated with the same title formed as a letter–number combination. | [
{
"math_id": 0,
"text": "B_3^I"
}
] | https://en.wikipedia.org/wiki?curid=965852 |
9659484 | Young measure | Measure in mathematical analysis
In mathematical analysis, a Young measure is a parameterized measure that is associated with certain subsequences of a given bounded sequence of measurable functions. They are a quantification of the oscillation effect of the sequence in the limit. Young measures have applications in the calculus of variations, especially models from material science, and the study of nonlinear partial differential equations, as well as in various optimization (or optimal control) problems. They are named after Laurence Chisholm Young who invented them, first in 1937 in one dimension (curves) and later, in 1942, in higher dimensions.
Young measures provide a solution to Hilbert’s twentieth problem, as a broad class of problems in the calculus of variations have solutions in the form of Young measures.
Definition.
Intuition.
Young constructed the Young measure in order to complete sets of ordinary curves in the calculus of variations. That is, Young measures are "generalized curves".
Consider the problem of formula_0, where formula_1 is a continuously differentiable function such that formula_2. It is clear that we should pick formula_1 to have value close to zero, and its slope close to formula_3. That is, the curve should be a tight jagged line hugging close to the x-axis. No function can reach the minimum value of formula_4, but we can construct a sequence of functions formula_5 that are increasingly jagged, such that formula_6.
The pointwise limit formula_7 is identically zero, but the pointwise limit formula_8 does not exist. Instead, it is a fine mist that has half of its weight on formula_9, and the other half on formula_10.
Suppose that formula_11 is a functional defined by formula_12, where formula_13 is continuous. Then formula_14 so, in the weak sense, we can define formula_15 to be a "function" whose value is zero and whose derivative is formula_16. In particular, it would mean that formula_17.
Motivation.
The definition of Young measures is motivated by the following theorem: Let "m", "n" be arbitrary positive integers, let formula_18 be an open bounded subset of formula_19 and formula_20 be a bounded sequence in formula_21. Then there exists a subsequence formula_22 and for almost every formula_23 a Borel probability measure formula_24 on formula_25 such that for each formula_26 we have
formula_27
weakly in formula_28 if the limit exists (or weakly* in formula_29 in case of formula_30). The measures formula_24 are called "the Young measures generated by the sequence formula_31".
A partial converse is also true: If for each formula_32 we have a Borel measure formula_24 on formula_33 such that formula_34, then there exists a sequence formula_35, bounded in formula_36, that has the same weak convergence property as above.
More generally, for any Carathéodory function formula_37, the limit
formula_38
if it exists, will be given by
formula_39.
Young's original idea in the case formula_40 was to consider for each integer formula_41 the uniform measure, let's say formula_42 concentrated on the graph of the function formula_43 (Here, formula_44 is the restriction of the Lebesgue measure on formula_45) By taking the weak* limit of these measures as elements of formula_46 we have
formula_47
where formula_48 is the mentioned weak limit. After a disintegration of the measure formula_48 on the product space formula_49 we get the parameterized measure formula_24.
General definition.
Let formula_50 be arbitrary positive integers, let formula_18 be an open and bounded subset of formula_51, and let formula_52. A "Young measure" (with finite "p"-moments) is a family of Borel probability measures formula_53 on formula_33 such that formula_54.
Examples.
Pointwise converging sequence.
A trivial example of a Young measure arises when the sequence formula_55 is bounded in formula_56 and converges pointwise almost everywhere in formula_18 to a function formula_13. The Young measure is then the Dirac measure
formula_57
Indeed, by the dominated convergence theorem, formula_58 converges weakly* in formula_59 to
formula_60
for any formula_61.
Sequence of sines.
A less trivial example is a sequence
formula_62
The corresponding Young measure satisfies
formula_63
for any measurable set formula_64, independent of formula_65.
In other words, for any formula_61:
formula_66
in formula_67. Here, the Young measure does not depend on formula_68 and so the weak* limit is always a constant.
To see this intuitively, consider that in the limit of large formula_69, a rectangle formula_70 would capture a part of the curve of formula_55. Take that captured part, and project it down to the x-axis. The length of that projection is formula_71, which means that formula_72 should look like a fine mist that has probability density formula_73 at all formula_68.
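A quick numerical check of this picture (an illustrative sketch, not part of the article): sampling formula_55 densely on (0, 2π) for one large "n" and histogramming the sampled values reproduces the arcsine density formula_73 away from the endpoints.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200                                    # one "large" frequency; any large n behaves alike
x = rng.uniform(0.0, 2.0 * np.pi, 1_000_000)
values = np.sin(n * x)

# Empirical distribution of the values of f_n = sin(n x) over the domain (0, 2*pi).
hist, edges = np.histogram(values, bins=50, range=(-1.0, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Arcsine density 1 / (pi * sqrt(1 - y^2)) predicted by the Young measure.
predicted = 1.0 / (np.pi * np.sqrt(1.0 - centers**2))

interior = np.abs(centers) < 0.9           # stay away from the singularities at y = +/-1
print(np.max(np.abs(hist[interior] - predicted[interior])))   # small: the histogram tracks the arcsine law
```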
Minimizing sequence.
For every asymptotically minimizing sequence formula_74 of
formula_75
subject to formula_76 (that is, the sequence satisfies formula_77), and perhaps after passing to a subsequence, the sequence of derivatives formula_78 generates Young measures of the form formula_79. This captures the essential features of all minimizing sequences to this problem, namely, their derivatives formula_80 will tend to concentrate along the minima formula_81 of the integrand formula_82.
If instead we take formula_83, its pointwise limit is identically zero and its derivatives generate the Young measure formula_84, which gives formula_85; since this limit is strictly positive, such a sequence is not minimizing.
References.
| [
{
"math_id": 0,
"text": "\\min_u I(u) = \\int_0^1 (u'(x)^2-1)^2 +u(x)^2 dx"
},
{
"math_id": 1,
"text": "u"
},
{
"math_id": 2,
"text": "u(0) = u(1) = 0"
},
{
"math_id": 3,
"text": "\\pm 1"
},
{
"math_id": 4,
"text": "I = 0"
},
{
"math_id": 5,
"text": "u_1, u_2, \\dots"
},
{
"math_id": 6,
"text": "I(u_n) \\to 0"
},
{
"math_id": 7,
"text": "\\lim u_n"
},
{
"math_id": 8,
"text": "\\lim_n u_n'"
},
{
"math_id": 9,
"text": "+1"
},
{
"math_id": 10,
"text": "-1"
},
{
"math_id": 11,
"text": "F"
},
{
"math_id": 12,
"text": "F(u) = \\int_0^1 f(t, u(t), u'(t))dt"
},
{
"math_id": 13,
"text": "f"
},
{
"math_id": 14,
"text": "\\lim_n F(u_n) = \\frac 12 \\int_0^1 f(t, 0, -1)dt + \\frac 12 \\int_0^1 f(t, 0, +1)dt"
},
{
"math_id": 15,
"text": "\\lim_n u_n"
},
{
"math_id": 16,
"text": "\\frac 12 \\delta_{-1} + \\frac 12 \\delta_{+1}"
},
{
"math_id": 17,
"text": "I(\\lim_n u_n ) = 0"
},
{
"math_id": 18,
"text": "U"
},
{
"math_id": 19,
"text": "\\mathbb{R}^n"
},
{
"math_id": 20,
"text": "\\{ f_k \\}_{k=1}^\\infty"
},
{
"math_id": 21,
"text": "L^p (U,\\mathbb{R}^m)"
},
{
"math_id": 22,
"text": "\\{ f_{k_j} \\}_{j=1}^\\infty \\subset \\{ f_k \\}_{k=1}^\\infty"
},
{
"math_id": 23,
"text": "x \\in U"
},
{
"math_id": 24,
"text": "\\nu_x"
},
{
"math_id": 25,
"text": "\\mathbb{R}^m"
},
{
"math_id": 26,
"text": "F \\in C(\\mathbb{R}^m)"
},
{
"math_id": 27,
"text": "F \\circ f_{k_j}(x) {\\rightharpoonup} \\int_{\\mathbb{R}^m} F(y)d\\nu_x(y)"
},
{
"math_id": 28,
"text": "L^p(U)"
},
{
"math_id": 29,
"text": "L^\\infty (U)"
},
{
"math_id": 30,
"text": "p=+\\infty"
},
{
"math_id": 31,
"text": "\\{ f_{k_j} \\}_{j=1}^\\infty"
},
{
"math_id": 32,
"text": "x\\in U"
},
{
"math_id": 33,
"text": "\\mathbb R^m"
},
{
"math_id": 34,
"text": "\\int_U\\int_{\\R^m}\\|y\\|^pd\\nu_x(y)dx<+\\infty"
},
{
"math_id": 35,
"text": "\\{f_k\\}_{k=1}^\\infty\\subseteq L^p(U,\\mathbb R^m)"
},
{
"math_id": 36,
"text": " L^p(U,\\mathbb R^m)"
},
{
"math_id": 37,
"text": "G(x,A) : U\\times R^m \\to R"
},
{
"math_id": 38,
"text": "\\lim_{j\\to \\infty} \\int_{U} G(x,f_j(x)) \\ d x,"
},
{
"math_id": 39,
"text": "\\int_{U} \\int_{\\R^m} G(x,A) \\ d \\nu_x(A) \\ dx"
},
{
"math_id": 40,
"text": "G\\in C_0(U \\times \\R^m) "
},
{
"math_id": 41,
"text": "j\\ge1"
},
{
"math_id": 42,
"text": "\\Gamma_j:= (id ,f_j)_\\sharp L ^d\\llcorner U,"
},
{
"math_id": 43,
"text": "f_j."
},
{
"math_id": 44,
"text": "L ^d\\llcorner U"
},
{
"math_id": 45,
"text": "U."
},
{
"math_id": 46,
"text": "C_0(U \\times \\R^m)^\\star,"
},
{
"math_id": 47,
"text": "\\langle\\Gamma_j, G\\rangle = \\int_{U} G(x,f_j(x)) \\ d x \\to \\langle\\Gamma ,G\\rangle,"
},
{
"math_id": 48,
"text": "\\Gamma"
},
{
"math_id": 49,
"text": "\\Omega \\times \\R^m,"
},
{
"math_id": 50,
"text": "m,n"
},
{
"math_id": 51,
"text": "\\mathbb R^n"
},
{
"math_id": 52,
"text": "p\\geq 1"
},
{
"math_id": 53,
"text": "\\{\\nu_x : x\\in U\\}"
},
{
"math_id": 54,
"text": "\\int_U\\int_{\\R^m}\n\\|y\\|^p d\\nu_x(y)dx<+\\infty"
},
{
"math_id": 55,
"text": "f_n"
},
{
"math_id": 56,
"text": "L^\\infty(U, \\mathbb{R}^n )"
},
{
"math_id": 57,
"text": "\\nu_x = \\delta_{f(x)}, \\quad x \\in U."
},
{
"math_id": 58,
"text": "F(f_n(x))"
},
{
"math_id": 59,
"text": " L^\\infty (U) "
},
{
"math_id": 60,
"text": " F(f(x)) = \\int F(y) \\, \\text{d} \\delta_{f(x)} "
},
{
"math_id": 61,
"text": " F \\in C(\\mathbb{R}^n)"
},
{
"math_id": 62,
"text": " f_n(x) = \\sin (n x), \\quad x \\in (0,2\\pi). "
},
{
"math_id": 63,
"text": " \\nu_x(E) = \\frac{1}{\\pi} \\int_{E\\cap [-1,1]} \\frac{1}{\\sqrt{1-y^2}} \\, \\text{d}y, "
},
{
"math_id": 64,
"text": " E "
},
{
"math_id": 65,
"text": "x \\in (0,2\\pi)"
},
{
"math_id": 66,
"text": "F(f_n) {\\rightharpoonup}^* \\frac{1}{\\pi} \\int_{-1}^1 \\frac{F(y)}{\\sqrt{1-y^2}} \\, \\text{d}y "
},
{
"math_id": 67,
"text": "L^\\infty((0,2\\pi)) "
},
{
"math_id": 68,
"text": "x"
},
{
"math_id": 69,
"text": "n"
},
{
"math_id": 70,
"text": "[x, x+\\delta x] \\times [y, y + \\delta y]"
},
{
"math_id": 71,
"text": "\\frac{2\\delta x \\delta y}{\\sqrt{1-y^2}}"
},
{
"math_id": 72,
"text": "\\lim_n f_n"
},
{
"math_id": 73,
"text": "\\frac{1}{\\pi\\sqrt{1-y^2}}"
},
{
"math_id": 74,
"text": "u_n"
},
{
"math_id": 75,
"text": "I(u) = \\int_0^1 (u'(x)^2-1)^2 +u(x)^2 dx"
},
{
"math_id": 76,
"text": "u(0)=u(1)=0"
},
{
"math_id": 77,
"text": "\\lim_{n\\to+\\infty} I(u_n)=\\inf_{u\\in C^1([0,1])}I(u)"
},
{
"math_id": 78,
"text": "u'_n"
},
{
"math_id": 79,
"text": "\\nu_x= \\frac 12 \\delta_{-1} + \\frac 12 \\delta_1"
},
{
"math_id": 80,
"text": "u'_k(x)"
},
{
"math_id": 81,
"text": "\\{-1,1\\}"
},
{
"math_id": 82,
"text": "(u'(x)^2-1)^2 +u(x)^2"
},
{
"math_id": 83,
"text": "\\lim_n \\frac{\\sin(2\\pi n t)}{2\\pi n}"
},
{
"math_id": 84,
"text": "\\nu( dy ) = \\frac{1}{\\pi\\sqrt{1-y^2}}dy"
},
{
"math_id": 85,
"text": "\\lim I = \\frac{1}{\\pi} \\int_{-1}^{+1} (1-y^2)^{3/2}dy"
}
] | https://en.wikipedia.org/wiki?curid=9659484 |
9659775 | High harmonic generation | Laser science process
High-harmonic generation (HHG) is a non-linear process during which a target (gas, plasma, solid or liquid sample) is illuminated by an intense laser pulse. Under such conditions, the sample will emit the high harmonics of the generation beam (above the fifth harmonic). Due to the coherent nature of the process, high-harmonics generation is a prerequisite of attosecond physics.
Perturbative harmonic generation.
Perturbative harmonic generation is a process whereby laser light of frequency "ω" and photon energy "ħω" can be used to generate new frequencies of light. The newly generated frequencies are integer multiples "nω" of the original light's frequency. This process was first discovered in 1961 by Franken et al., using a ruby laser, with crystalline quartz as the nonlinear medium.
Harmonic generation in dielectric solids is well understood and extensively used in modern laser physics (see second-harmonic generation). In 1967 New et al. observed the first third harmonic generation in a gas. In monatomic gases it is only possible to produce odd numbered harmonics for reasons of symmetry. Harmonic generation in the perturbative (weak field) regime is characterised by rapidly decreasing efficiency with increasing harmonic order. This behaviour can be understood by considering an atom absorbing "n" photons then emitting a single high energy photon. The probability of absorbing "n" photons decreases as "n" increases, explaining the rapid decrease in the initial harmonic intensities.
Development.
The first high harmonic generation was observed in 1977 in the interaction of intense CO2 laser pulses with plasma generated from solid targets. HHG in gases, far more widespread in application today, was first observed by McPherson and colleagues in 1987, and later by Ferray et al. in 1988, with surprising results: the high harmonics were found to decrease in intensity at low orders, as expected, but then were observed to form a plateau, with the intensity of the harmonics remaining approximately constant over many orders.
Plateau harmonics spanning hundreds of eV have been measured which extend into the soft X-ray regime. This plateau ends abruptly at a position called the high harmonic cut-off.
Properties.
High harmonics have a number of interesting properties. They are a tunable table-top source of XUV/soft X-rays, synchronised with the driving laser and produced with the same repetition rate. The harmonic cut-off varies linearly with increasing laser intensity up until the saturation intensity Isat where harmonic generation stops. The saturation intensity can be increased by changing the atomic species to lighter noble gases but these have a lower conversion efficiency so there is a balance to be found depending on the photon energies required.
High harmonic generation strongly depends on the driving laser field and as a result the harmonics have similar temporal and spatial coherence properties. High harmonics are often generated with pulse durations shorter than that of the driving laser. This is due to the nonlinearity of the generation process, phase matching and ionization. Often harmonics are only produced in a very small temporal window when the phase matching condition is met. Depletion of the generating media due to ionization also means that harmonic generation is mainly confined to the leading edge of the driving pulse.
High harmonics are emitted co-linearly with the driving laser and can have a very tight angular confinement, sometimes with less divergence than that of the fundamental field and near Gaussian beam profiles.
Semi-classical approach.
The maximum photon energy producible with high harmonic generation is given by the cut-off of the harmonic plateau. This can be calculated classically by examining the maximum energy the ionized electron can gain in the electric field of the laser. The cut-off energy is given by:
formula_0
where Up is the ponderomotive energy from the laser field and Ip is the ionization potential.
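As a numerical illustration (a sketch, not taken from the article), the cut-off law can be evaluated using the common engineering estimate Up[eV] ≈ 9.33×10⁻¹⁴ · I[W/cm²] · λ²[μm²] for the ponderomotive energy; the intensity, wavelength and ionization potential below are example values only.

```python
# Cut-off photon energy E_max = I_p + 3.17 * U_p for high harmonic generation.
# U_p is estimated with the common engineering formula
#   U_p [eV] ~= 9.33e-14 * I [W/cm^2] * lambda^2 [um^2]
# All numbers below (intensity, wavelength, ionization potential) are example values.

def ponderomotive_energy_eV(intensity_W_cm2, wavelength_um):
    return 9.33e-14 * intensity_W_cm2 * wavelength_um ** 2

def cutoff_energy_eV(ionization_potential_eV, intensity_W_cm2, wavelength_um):
    u_p = ponderomotive_energy_eV(intensity_W_cm2, wavelength_um)
    return ionization_potential_eV + 3.17 * u_p

# Example: an 800 nm Ti:sapphire pulse at 1e14 W/cm^2 in argon (I_p ~ 15.8 eV).
print(cutoff_energy_eV(15.8, 1e14, 0.8))   # roughly 35 eV, i.e. harmonics well into the XUV
```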
This cut-off energy is derived from a semi-classical calculation, often called the three-step model. The electron is initially treated quantum mechanically as it tunnel ionizes from the parent atom, but its subsequent dynamics are treated classically. The electron is assumed to be born into the vacuum with zero initial velocity, and to be subsequently accelerated by the laser beam's electric field.
Half an optical cycle after ionization, the electron will reverse direction as the electric field changes sign, and will accelerate back towards the parent nucleus. Upon return to the parent nucleus it can then emit bremsstrahlung-like radiation during a recombination process with the atom as it returns to its ground state. This description has become known as the recollisional model of high harmonic generation.
Since the frequency of the emitted radiation depends on both the kinetic energy and on the ionization potential, the different frequencies are emitted at different recombination times (i.e. the emitted pulse is chirped). Furthermore, for every frequency, there are two corresponding recombination times. We refer to these two trajectories as the short trajectory (which are emitted first), and the long trajectory.
In the semiclassical picture, HHG will only occur if the driving laser field is linearly polarised. Ellipticity on the laser beam causes the returning electron to miss the parent nucleus. Quantum mechanically, the overlap of the returning electron wavepacket with the nuclear wavepacket is reduced. This has been observed experimentally, where the intensity of harmonics decreases rapidly with increasing ellipticity. Another effect which limits the intensity of the driving laser is the Lorentz force. At intensities above 10¹⁶ W·cm⁻² the magnetic component of the laser pulse, which is ignored in weak field optics, can become strong enough to deflect the returning electron. This will cause it to "miss" the parent nucleus and hence prevent HHG.
Phase matching.
As in every nonlinear process, phase matching plays an important role in high harmonic generation in the gas phase. In free-focusing geometry, the four causes of wavevector mismatch are: neutral dispersion, plasma dispersion, Gouy phase, and dipole phase.
The neutral dispersion is caused by the atoms while the plasma dispersion is due to the ions, and the two have opposite signs.
The Gouy phase is due to wavefront phase jump close to the focus, and varies along it. Finally the dipole phase arises from the atomic response in the HHG process.
When using a gas jet geometry, the optimal conditions for generating high harmonics emitted from short trajectories are obtained when the generating gas is located after the focus, while generation of high harmonics from long trajectory can be obtained off-axis when the generating gas is located before the focus.
Furthermore, the implementation of loose focusing geometry for the driving field enables a higher number of emitters and photons to contribute to the generation process and thus, enhance the harmonic yield.
When using a gas jet geometry, focusing the laser into the Mach disk can increase the efficiency of harmonic generation.
More generally, in the X-ray spectral region, materials have a refractive index that is very close to 1. To balance the phase mismatch, formula_1, we need to find parameters in the high-dimensional parameter space that effectively make the combined refractive index at the driving laser wavelength nearly 1.
In order to achieve intensity levels that can distort an atom's binding potential, it is necessary to focus the driving laser beam. This introduces dispersion terms affecting the phase mismatch, depending on the specific geometry (such as plane wave propagation, free focusing, hollow core waveguide, etc.). Additionally, during the high harmonic generation process, electrons are accelerated, and some of them return to their parent ion, resulting in X-ray bursts. However, the majority of these electrons do not return and instead contribute to dispersion for the co-propagating waves. The returning electrons carry phase due to processes like ionization, recombination, and propagation. Furthermore, the ionized atoms can influence the refractive index of the medium, providing another source of dispersion.
The phase mismatch (positive when the phase velocity of the laser is faster than that of the X-rays) can be represented as:
formula_2
where formula_3 is the contribution of the neutral atoms, formula_4 is the contribution from ions (when neutrals are ionized, this term can still be sufficiently large in the UV), formula_5 is the plasma contribution, formula_6 is the geometric contribution (free focusing, plane-wave, or waveguiding geometry), formula_7 is the phase accumulated by the electron during the time it spends away from the atom, etc. Each term has a specific sign which allows balancing the mismatch at a particular time and frequency.
The contribution from the electrons scales quadratically with the wavelength: formula_8, while the contribution from atoms scales inversely with the square of the wavelength: formula_9.
Thus at long IR wavelengths, the term formula_10 is quite large per electron, while the term formula_11 is quite small per atom (the refractive index of the neutral gas stays close to one). To phase-match the process of HHG, very high pressures and low ionization levels are required, thus giving a large number of emitters.
In the opposite UV spectral range, the term formula_11 is large because of the closely located UV resonances, and in addition, the term formula_10 is small. To phase-match the process, low pressures are needed. Moreover, in the UV, very high ionization levels can be tolerated (much larger than 100%). This gives HHG photon energy scalability with the intensity of the driving UV laser.
Plane-wave geometry or loose focusing geometry allows highly collinear phase matching and maximum flux extraction at driving wavelengths where the formula_12 term is small. The generation of high-order harmonics in a waveguide allows propagation with characteristics close to those of plane wave propagation.
Such geometries especially benefit X-ray spectra generated by IR beams, where long interaction volumes are needed for optimal power extraction. In such geometries, spectra extending to 1.6 keV have been generated.
For UV-VIS driven high harmonics, the waveguide term is small, and the phase-matching picture resembles the plane-wave geometry. In such geometries, narrow bandwidth harmonics extending to the carbon edge (300 eV) have been generated.
References.
| [
{
"math_id": 0,
"text": " E_{\\mathrm{max}}=I_p+3.17\\ U_p "
},
{
"math_id": 1,
"text": "\\Delta k = k_q - qk_L"
},
{
"math_id": 2,
"text": "\\Delta k = k_q-qk_L = \\underbrace{\\Delta k_{\\mathrm{neutrals}}}_{<0} + \\underbrace{\\Delta k_{\\mathrm{ions}}}_{<0} + \\underbrace{\\Delta k_{\\mathrm{electrons}}}_{>0} + \\underbrace{\\Delta k_{\\mathrm{geometry}}}_{>0} + \\underbrace{\\Delta k_{\\mathrm{intrinsic}}}_{>\\;\\!0} + \\cdots"
},
{
"math_id": 3,
"text": " \\Delta k_{\\mathrm{neutrals}}"
},
{
"math_id": 4,
"text": "\\Delta k_{\\mathrm{ions}}"
},
{
"math_id": 5,
"text": "\\Delta k_{\\mathrm{electrons}}"
},
{
"math_id": 6,
"text": "\\Delta k_{\\mathrm{geometry}}"
},
{
"math_id": 7,
"text": "\\Delta k_{\\mathrm{intrinsic}}"
},
{
"math_id": 8,
"text": "\\Delta n_{\\mathrm{electrons}} \\sim -\\lambda^2"
},
{
"math_id": 9,
"text": "\\Delta n_{\\mathrm{atoms}} \\sim 1/\\lambda^2"
},
{
"math_id": 10,
"text": "\\left|\\Delta n_{\\mathrm{electrons}}\\right|"
},
{
"math_id": 11,
"text": "\\Delta n_{\\mathrm{atoms}}"
},
{
"math_id": 12,
"text": "\\vec{v} \\times \\vec{B}"
}
] | https://en.wikipedia.org/wiki?curid=9659775 |
9660 | Euler's sum of powers conjecture | Disproved conjecture in number theory
In number theory, Euler's conjecture is a disproved conjecture related to Fermat's Last Theorem. It was proposed by Leonhard Euler in 1769. It states that for all integers n and k greater than 1, if the sum of n many kth powers of positive integers is itself a kth power, then n is greater than or equal to k:
formula_0
The conjecture represents an attempt to generalize Fermat's Last Theorem, which is the special case "n" = 2: if formula_1 then 2 ≥ "k".
Although the conjecture holds for the case "k" = 3 (which follows from Fermat's Last Theorem for the third powers), it was disproved for "k" = 4 and "k" = 5. It is unknown whether the conjecture fails or holds for any value "k" ≥ 6.
Background.
Euler was aware of the equality 59⁴ + 158⁴ = 133⁴ + 134⁴ involving sums of four fourth powers; this, however, is not a counterexample because no term is isolated on one side of the equation. He also provided a complete solution to the four cubes problem as in Plato's number 3³ + 4³ + 5³ = 6³ or the taxicab number 1729. The general solution of the equation formula_2
is
formula_3
where a, b and formula_4 are any rational numbers.
Counterexamples.
Euler's conjecture was disproven by L. J. Lander and T. R. Parkin in 1966 when, through a direct computer search on a CDC 6600, they found a counterexample for "k" = 5. This was published in a paper comprising just two sentences. A total of three primitive (that is, in which the summands do not all have a common factor) counterexamples are known:
formula_5
(Lander & Parkin, 1966); (Scher & Seidl, 1996); (Frye, 2004).
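These identities are easy to confirm with exact integer arithmetic; the short script below (an illustration, not part of the original article) checks the three fifth-power counterexamples listed above.

```python
# Check the three primitive k = 5 counterexamples with exact integer arithmetic.
counterexamples = [
    ([27, 84, 110, 133], 144),
    ([-220, 5027, 6237, 14068], 14132),
    ([55, 3183, 28969, 85282], 85359),
]

for terms, total in counterexamples:
    lhs = sum(t ** 5 for t in terms)
    assert lhs == total ** 5, (terms, total)
    print(f"sum of fifth powers of {terms} = {total}^5 = {lhs}")
```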
In 1988, Noam Elkies published a method to construct an infinite sequence of counterexamples for the "k" = 4 case. His smallest counterexample was
formula_6
A particular case of Elkies' solutions can be reduced to the identity
formula_7
where
formula_8
This is an elliptic curve with a rational point at "v"1 = −. From this initial rational point, one can compute an infinite collection of others. Substituting "v"1 into the identity and removing common factors gives the numerical example cited above.
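Because (2"u")⁴ = 16("u"²)², the identity involves "u" only through "u"², so it can be checked symbolically by substituting the quartic above for "u"²; the SymPy sketch below (illustrative, not from the article) performs that check.

```python
import sympy as sp

v = sp.symbols('v')

# u^2 as the quartic in v given above.
u2 = 22030 + 28849*v - 56158*v**2 + 36941*v**3 - 31790*v**4

# (2u)^4 = 16*(u^2)^2, so the left-hand side only needs u^2.
lhs = (85*v**2 + 484*v - 313)**4 + (68*v**2 - 586*v + 10)**4 + 16*u2**2
rhs = (357*v**2 - 204*v + 363)**4

print(sp.expand(lhs - rhs))   # prints 0 if the identity holds for every v
```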
In 1988, Roger Frye found the smallest possible counterexample
formula_9
for "k" = 4 by a direct computer search using techniques suggested by Elkies. This solution is the only one with values of the variables below 1,000,000.
Generalizations.
In 1967, L. J. Lander, T. R. Parkin, and John Selfridge conjectured that if
formula_10,
where "ai" ≠ "bj" are positive integers for all 1 ≤ "i" ≤ "n" and 1 ≤ "j" ≤ "m", then "m" + "n" ≥ "k". In the special case "m" = 1, the conjecture states that if
formula_11
(under the conditions given above) then "n" ≥ "k" − 1.
The special case may be described as the problem of giving a partition of a perfect power into few like powers. For "k" = 4, 5, 7, 8 and "n" = "k" or "k" − 1, there are many known solutions. Some of these are listed below.
See OEIS: for more data.
==="k" = 3===
3³ + 4³ + 5³ = 6³ (Plato's number 216)
This is the case "a" = 1, "b" = 0 of Srinivasa Ramanujan's formula
formula_12
A cube as the sum of three cubes can also be parameterized in one of two ways:
formula_13
The number 2 100 000³ can be expressed as the sum of three cubes in nine different ways.
==="k" = 4===
formula_14
(R. Frye, 1988); (R. Norrie, smallest, 1911).
==="k" = 5===
formula_15
(Lander & Parkin, 1966); (Lander, Parkin, Selfridge, smallest, 1967); (Lander, Parkin, Selfridge, second smallest, 1967); (Sastry, 1934, third smallest).
==="k" = 6===
As of 2002, there are no solutions for "k" = 6 whose final term is ≤ 730000.
==="k" = 7===
formula_16
(M. Dodrill, 1999).
==="k" = 8===
formula_17
(S. Chase, 2000).
References.
| [
{
"math_id": 0,
"text": "a_1^k + a_2^k + \\dots + a_n^k = b^k \\implies n \\ge k"
},
{
"math_id": 1,
"text": "a_1^k + a_2^k = b^k,"
},
{
"math_id": 2,
"text": "x_1^3+x_2^3=x_3^3+x_4^3"
},
{
"math_id": 3,
"text": "\\begin{align}\n x_1 &=\\lambda( 1-(a-3b)(a^2+3b^2)) \\\\[2pt]\n x_2 &=\\lambda( (a+3b)(a^2+3b^2)-1 )\\\\[2pt]\n x_3 &=\\lambda( (a+3b)-(a^2+3b^2)^2 )\\\\[2pt]\n x_4 &= \\lambda( (a^2+3b^2)^2-(a-3b))\n\\end{align}"
},
{
"math_id": 4,
"text": "{\\lambda}"
},
{
"math_id": 5,
"text": "\\begin{align}\n 144^5 &= 27^5 + 84^5 + 110^5 + 133^5 \\\\\n 14132^5 &= (-220)^5 + 5027^5 + 6237^5 + 14068^5 \\\\\n 85359^5 &= 55^5 + 3183^5 + 28969^5 + 85282^5\n\\end{align}"
},
{
"math_id": 6,
"text": "20615673^4 = 2682440^4 + 15365639^4 + 18796760^4."
},
{
"math_id": 7,
"text": "(85v^2 + 484v - 313)^4 + (68v^2 - 586v + 10)^4 + (2u)^4 = (357v^2 - 204v + 363)^4,"
},
{
"math_id": 8,
"text": "u^2 = 22030 + 28849v - 56158v^2 + 36941v^3 - 31790v^4."
},
{
"math_id": 9,
"text": "95800^4 + 217519^4 + 414560^4 = 422481^4"
},
{
"math_id": 10,
"text": "\\sum_{i=1}^{n} a_i^k = \\sum_{j=1}^{m} b_j^k"
},
{
"math_id": 11,
"text": "\\sum_{i=1}^{n} a_i^k = b^k"
},
{
"math_id": 12,
"text": "(3a^2+5ab-5b^2)^3 + (4a^2-4ab+6b^2)^3 + (5a^2-5ab-3b^2)^3 = (6a^2-4ab+4b^2)^3"
},
{
"math_id": 13,
"text": "\\begin{align}\na^3(a^3+b^3)^3\t&=\tb^3(a^3+b^3)^3+a^3(a^3-2b^3)^3+b^3(2a^3-b^3)^3 \\\\[6pt]\na^3(a^3+2b^3)^3\t&=\ta^3(a^3-b^3)^3+b^3(a^3-b^3)^3+b^3(2a^3+b^3)^3\n\\end{align}"
},
{
"math_id": 14,
"text": "\\begin{align}\n 422481^4 &= 95800^4 + 217519^4 + 414560^4 \\\\[4pt]\n 353^4 &= 30^4 + 120^4 + 272^4 + 315^4\n\\end{align}"
},
{
"math_id": 15,
"text": "\\begin{align}\n 144^5 &= 27^5 + 84^5 + 110^5 + 133^5 \\\\[2pt]\n 72^5 &= 19^5 + 43^5 + 46^5 + 47^5 + 67^5 \\\\[2pt]\n 94^5 &= 21^5 + 23^5 + 37^5 + 79^5 + 84^5 \\\\[2pt] \n 107^5 &= 7^5 + 43^5 + 57^5 + 80^5 + 100^5\n\\end{align}"
},
{
"math_id": 16,
"text": "568^7 = 127^7 + 258^7 + 266^7 + 413^7 + 430^7 + 439^7 + 525^7"
},
{
"math_id": 17,
"text": "1409^8 = 90^8 + 223^8 + 478^8 + 524^8 + 748^8 + 1088^8 + 1190^8 + 1324^8 "
}
] | https://en.wikipedia.org/wiki?curid=9660 |
9662955 | Convection (heat transfer) | Heat transfer due to combined effects of advection and diffusion
Convection (or convective heat transfer) is the transfer of heat from one place to another due to the movement of fluid. Although often discussed as a distinct method of heat transfer, convective heat transfer involves the combined processes of conduction (heat diffusion) and advection (heat transfer by bulk fluid flow). Convection is usually the dominant form of heat transfer in liquids and gases.
Note that this definition of convection is only applicable in heat transfer and thermodynamic contexts. It should not be confused with the dynamic fluid phenomenon of convection, which is typically referred to as "natural convection" in thermodynamic contexts in order to distinguish the two.
Overview.
Convection can be "forced" by movement of a fluid by means other than buoyancy forces (for example, a water pump in an automobile engine). Thermal expansion of fluids may also force convection. In other cases, natural buoyancy forces alone are entirely responsible for fluid motion when the fluid is heated, and this process is called "natural convection". An example is the draft in a chimney or around any fire. In natural convection, an increase in temperature produces a reduction in density, which in turn causes fluid motion due to pressures and forces when the fluids of different densities are affected by gravity (or any g-force). For example, when water is heated on a stove, hot water from the bottom of the pan is displaced (or forced up) by the colder denser liquid, which falls. After heating has stopped, mixing and conduction from this natural convection eventually result in a nearly homogeneous density, and even temperature. Without the presence of gravity (or conditions that cause a g-force of any type), natural convection does not occur, and only forced-convection modes operate.
The convection heat transfer mode comprises two mechanisms. In addition to energy transfer due to specific molecular motion (diffusion), energy is transferred by bulk, or macroscopic, motion of the fluid. This motion is associated with the fact that, at any instant, large numbers of molecules are moving collectively or as aggregates. Such motion, in the presence of a temperature gradient, contributes to heat transfer. Because the molecules in aggregate retain their random motion, the total heat transfer is then due to the superposition of energy transport by random motion of the molecules and by the bulk motion of the fluid. It is customary to use the term convection when referring to this cumulative transport and the term advection when referring to the transport due to bulk fluid motion.
Types.
Two types of convective heat transfer may be distinguished: natural (or free) convection, in which fluid motion is driven by buoyancy forces arising from density differences in the heated fluid, and forced convection, in which the fluid is driven past the surface by external means such as a fan or a pump.
In many real-life applications (e.g. heat losses at solar central receivers or cooling of photovoltaic panels), natural and forced convection occur at the same time (mixed convection).
Internal and external flow can also classify convection. Internal flow occurs when a fluid is enclosed by a solid boundary such as when flowing through a pipe. An external flow occurs when a fluid extends indefinitely without encountering a solid surface. Both of these types of convection, either natural or forced, can be internal or external because they are independent of each other. The bulk temperature, or the average fluid temperature, is a convenient reference point for evaluating properties related to convective heat transfer, particularly in applications related to flow in pipes and ducts.
Further classification can be made depending on the smoothness and undulations of the solid surfaces. Not all surfaces are smooth, though the bulk of the available information deals with smooth surfaces. Wavy irregular surfaces are commonly encountered in heat transfer devices, including solar collectors, regenerative heat exchangers, and underground energy storage systems. They play a significant role in the heat transfer processes in these applications. Because the undulations add complexity, such surfaces need to be treated with careful mathematical simplification techniques. They also affect the flow and heat transfer characteristics, and so behave differently from straight smooth surfaces.
For a visual experience of natural convection, a glass filled with hot water and some red food dye may be placed inside a fish tank with cold, clear water. The convection currents of the red liquid may be seen to rise and fall in different regions, then eventually settle, illustrating the process as heat gradients are dissipated.
Newton's law of cooling.
Convection-cooling is sometimes loosely assumed to be described by Newton's law of cooling.
Newton's law states that "the rate of heat loss of a body is proportional to the difference in temperatures between the body and its surroundings while under the effects of a breeze". The constant of proportionality is the heat transfer coefficient. The law applies when the coefficient is independent, or relatively independent, of the temperature difference between object and environment.
In classical natural convective heat transfer, the heat transfer coefficient is dependent on the temperature. However, Newton's law does approximate reality when the temperature changes are relatively small, and for forced air and pumped liquid cooling, where the fluid velocity does not rise with increasing temperature difference.
Convective heat transfer.
The basic relationship for heat transfer by convection is:
formula_0
where formula_1 is the heat transferred per unit time, "A" is the area of the object, "h" is the heat transfer coefficient, "T" is the object's surface temperature, and "Tf" is the fluid temperature.
The convective heat transfer coefficient is dependent upon the physical properties of the fluid and the physical situation. Values of "h" have been measured and tabulated for commonly encountered fluids and flow situations.
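As a simple worked example, the relationship can be evaluated once "h", "A", "T" and "Tf" are known; in the Python sketch below the numerical values are arbitrary placeholders rather than tabulated data:

def convective_heat_rate(h, area, t_surface, t_fluid):
    # Heat transferred per unit time: Q_dot = h * A * (T - Tf).
    return h * area * (t_surface - t_fluid)

# Illustrative values only: h in W/(m^2 K), area in m^2, temperatures in K.
q_dot = convective_heat_rate(h=25.0, area=0.5, t_surface=350.0, t_fluid=300.0)
print(q_dot)  # 625.0 (watts)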
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\dot{Q} = hA(T - T_f) "
},
{
"math_id": 1,
"text": "\\dot{Q}"
}
] | https://en.wikipedia.org/wiki?curid=9662955 |
9664483 | Frontogenesis | Frontogenesis is a meteorological process of tightening of horizontal temperature gradients to produce fronts. In the end, two types of fronts form: cold fronts and warm fronts. A cold front is a narrow line where temperature decreases rapidly. A warm front is a narrow line of warmer temperatures and essentially where much of the precipitation occurs. Frontogenesis occurs as a result of a developing baroclinic wave. According to Hoskins & Bretherton (1972, p. 11), there are eight mechanisms that influence temperature gradients: horizontal deformation, horizontal shearing, vertical deformation, differential vertical motion, latent heat release, surface friction, turbulence and mixing, and radiation. Semigeostrophic frontogenesis theory focuses on the role of horizontal deformation and shear.
Kinematics.
Horizontal deformation in mid-latitude cyclones concentrates temperature gradients—cold air from the poles and warm air from the equator.
Horizontal shear has two effects on an air parcel: it tends to rotate the parcel (think of placing a wheel at a point in space; as the wind blows, the wheel rotates) and to deform the parcel through stretching and shrinking. In the end, this can also tighten the temperature gradient, but most importantly, it rotates a concentrated temperature gradient, for example from the x-direction into the y-direction.
Within a mid-latitude cyclone, these two key features play an essential role in frontogenesis. The flow around a typical mid-latitude cyclone acts, in the end, to concentrate cyclonic shear along a line of maximum shear (which in this case is the birth of a cold front).
On the eastern side of a cyclone, horizontal deformation is seen which turns into confluence (a result of translation + deformation).
Horizontal deformation at low levels is an important mechanism for the development of both cold and warm fronts (Holton, 2004).
Elements of Frontogenesis.
The horizontal shear and horizontal deformation act to concentrate the pole-to-equator temperature gradient over a large synoptic scale (1000 km).
The quasi-geostrophic equations fail to capture the dynamics of frontogenesis because this weather system is of smaller scale than the Rossby radius, so semigeostrophic theory is used.
Generally, the Rossby number (the ratio of inertial to Coriolis forces) is used to formulate a condition of geostrophic flow.
Finally, looking at a cross section (y-z) through the confluent flow, using Q-vectors (Q pointing toward upward motion): on the warm side (the bottom of the confluence schematic) there is upward motion, while on the cold side (the top of the schematic) there is downward motion.
The cross-section points out convergence (arrows pointing towards each other) associated with tightening of horizontal temperature gradient.
Conversely, divergence is noticed (arrows pointing away from each other), associated with a stretching of the horizontal temperature gradient. Since the strength of the ageostrophic flow is proportional to the temperature gradient, the ageostrophic tightening tendencies grow rapidly after the initial geostrophic intensification.
Development of the Frontogenetical Circulation.
During frontogenesis, the temperature gradient tightens and as a result, the thermal wind becomes imbalanced. To maintain balance, the geostrophic wind aloft and below adjust, such that regions of divergence/convergence form. Mass continuity would require a vertical transport of air along the cold front where there is divergence (lowered pressure). Although this circulation is described by a series of processes, they are actually occurring at the same time, observable along the front as a thermally direct circulation. There are several factors that influence the final shape and tilt of the circulation around the front, ultimately determining the type and location of clouds and precipitation.
3-Dimensional Equation.
The three-dimensional form of the frontogenesis equation is
formula_0
where each dimension begins with a diabatic term; in the formula_1 direction
formula_2
in the formula_3 direction
formula_4
and in the formula_5 direction
formula_6.
The equation also includes horizontal and vertical deformation terms; in the formula_1 direction
formula_7
and in the formula_3 direction
formula_8
and in the vertical formula_5 direction
formula_9.
The final terms are the tilting term and the vertical divergence term; the tilting term is represented in the three-dimensional frontogenesis equation in the formula_1 and formula_3 directions
formula_10
formula_11
and the vertical divergence term is present as
formula_12
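The horizontal, adiabatic part of this expression is often evaluated on gridded analyses. The following Python/NumPy sketch computes only the horizontal deformation terms of the frontogenesis function; the idealized confluent flow, field names and grid spacing are assumptions chosen purely for illustration:

import numpy as np

def horizontal_frontogenesis(theta, u, v, dx, dy):
    # Deformation terms of the frontogenesis function only (no diabatic,
    # tilting or vertical divergence contributions).
    dthdy, dthdx = np.gradient(theta, dy, dx)
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    grad_theta = np.hypot(dthdx, dthdy)
    f = (dthdx * (-dudx * dthdx - dvdx * dthdy)
         + dthdy * (-dudy * dthdx - dvdy * dthdy))
    return f / np.where(grad_theta == 0.0, np.inf, grad_theta)

# Idealized confluence along y: u stretches in x, v contracts in y.
x = np.arange(0.0, 1.0e6, 1.0e4)          # metres
y = np.arange(0.0, 1.0e6, 1.0e4)
X, Y = np.meshgrid(x, y)
theta = 300.0 + 1.0e-5 * Y                 # potential temperature, K
F = horizontal_frontogenesis(theta, 0.5e-4 * X, -0.5e-4 * Y, 1.0e4, 1.0e4)
# Positive F (here about 5e-10 K m^-1 s^-1) indicates a tightening gradient.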
References.
1. Holton, J. R. (2004). An introduction to dynamic meteorology. (4 ed., Vol. 88, pp. 269–276). San Diego, CA: Academic Press.
2. Hoskins, B. J., & Bretherton, F. P. (1972). Atmospheric frontogenesis models: Mathematical formulation and solution. J. Atmos. Sci., 29, 11–13.
3. Martin, J. E. (2006). Mid-latitude atmospheric dynamics. (1 ed., pp. 189–194). England: Wiley.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{alignat}{3} F = \\frac{1}{|\\nabla \\theta|}\\cdot \\frac{\\partial \\theta}{\\partial x}\\left \\{ \\frac{1}{C_p} \\left ( \\frac{p_\\circ}{p} \\right )^\\kappa \\left [ \\frac{\\partial}{\\partial x} \\left (\\frac{dQ}{dt} \\right ) \\right ] - \\left ( \\frac{\\partial u}{\\partial x} \\frac{\\partial \\theta}{\\partial x} \\right ) - \\left ( \\frac{\\partial v}{\\partial x} \\frac{\\partial \\theta}{\\partial y} \\right ) - \\left ( \\frac{\\partial w}{\\partial x} \\frac{\\partial \\theta}{\\partial z} \\right ) \\right \\} \\\\\n+ \\frac{\\partial \\theta}{\\partial y}\\left \\{ \\frac{1}{C_p} \\left ( \\frac{p_\\circ}{p} \\right )^\\kappa \\left [ \\frac{\\partial}{\\partial y} \\left (\\frac{dQ}{dt} \\right ) \\right ] - \\left ( \\frac{\\partial u}{\\partial y} \\frac{\\partial \\theta}{\\partial x} \\right ) - \\left ( \\frac{\\partial v}{\\partial y} \\frac{\\partial \\theta}{\\partial y} \\right ) - \\left ( \\frac{\\partial w}{\\partial y} \\frac{\\partial \\theta}{\\partial z} \\right ) \\right \\} \\\\\n+ \\frac{\\partial \\theta}{\\partial z}\\left \\{ \\frac{p_\\circ^\\kappa}{C_p} \\left [ \\frac{\\partial}{\\partial z} \\left (p^{-\\kappa} \\frac{dQ}{dt} \\right ) \\right ] - \\left ( \\frac{\\partial u}{\\partial z} \\frac{\\partial \\theta}{\\partial x} \\right ) - \\left ( \\frac{\\partial v}{\\partial z} \\frac{\\partial \\theta}{\\partial y} \\right ) - \\left ( \\frac{\\partial w}{\\partial z} \\frac{\\partial \\theta}{\\partial z} \\right ) \\right \\}\\end{alignat} "
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "\\frac{1}{C_p} \\left ( \\frac{p_\\circ}{p} \\right )^\\kappa \\left [ \\frac{\\partial}{\\partial x} \\left (\\frac{dQ}{dt} \\right ) \\right ]"
},
{
"math_id": 3,
"text": "y"
},
{
"math_id": 4,
"text": "\\frac{1}{C_p} \\left ( \\frac{p_\\circ}{p} \\right )^\\kappa \\left [ \\frac{\\partial}{\\partial y} \\left (\\frac{dQ}{dt} \\right ) \\right ]"
},
{
"math_id": 5,
"text": "z"
},
{
"math_id": 6,
"text": "\\frac{p_\\circ^\\kappa}{C_p} \\left [ \\frac{\\partial}{\\partial z} \\left (p^{-\\kappa} \\frac{dQ}{dt} \\right ) \\right ]"
},
{
"math_id": 7,
"text": "-\\left ( \\frac{\\partial u}{\\partial x} \\frac{\\partial \\theta}{\\partial x} \\right ) - \\left ( \\frac{\\partial v}{\\partial x} \\frac{\\partial \\theta}{\\partial y} \\right )"
},
{
"math_id": 8,
"text": "-\\left ( \\frac{\\partial u}{\\partial y} \\frac{\\partial \\theta}{\\partial x} \\right ) - \\left ( \\frac{\\partial v}{\\partial y} \\frac{\\partial \\theta}{\\partial y} \\right )"
},
{
"math_id": 9,
"text": "-\\left ( \\frac{\\partial u}{\\partial z} \\frac{\\partial \\theta}{\\partial x} \\right ) - \\left ( \\frac{\\partial v}{\\partial z} \\frac{\\partial \\theta}{\\partial y} \\right )"
},
{
"math_id": 10,
"text": "-\\left ( \\frac{\\partial w}{\\partial x} \\frac{\\partial \\theta}{\\partial z} \\right )"
},
{
"math_id": 11,
"text": "-\\left ( \\frac{\\partial w}{\\partial y} \\frac{\\partial \\theta}{\\partial z} \\right )"
},
{
"math_id": 12,
"text": "-\\left ( \\frac{\\partial w}{\\partial z} \\frac{\\partial \\theta}{\\partial z} \\right ) "
}
] | https://en.wikipedia.org/wiki?curid=9664483 |
9666599 | Tanaka's formula | In the stochastic calculus, Tanaka's formula for the Brownian motion states that
formula_0
where "B""t" is the standard Brownian motion, sgn denotes the sign function
formula_1
and "L""t" is its local time at 0 (the local time spent by "B" at 0 before time "t") given by the "L"2-limit
formula_2
One can also extend the formula to semimartingales.
Properties.
Tanaka's formula is the explicit Doob–Meyer decomposition of the submartingale |"B""t"| into the martingale part (the integral on the right-hand side, which is a Brownian motion), and a continuous increasing process (local time). It can also be seen as the analogue of Itō's lemma for the (nonsmooth) absolute value function formula_3, with formula_4 and formula_5; see local time for a formal explanation of the Itō term.
Outline of proof.
The function |"x"| is not "C"2 in "x" at "x" = 0, so we cannot apply Itō's formula directly. But if we approximate it near zero (i.e. in [−"ε", "ε"]) by parabolas
formula_6
and use Itō's formula, we can then take the limit as "ε" → 0, leading to Tanaka's formula.
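The identity can also be checked by straightforward simulation. The Python sketch below is only an illustration: the step size, time horizon and the ε used in the occupation-time estimate of the local time are arbitrary choices, and the left-point sum is the usual Itō approximation of the stochastic integral:

import numpy as np

rng = np.random.default_rng(0)
n, T = 200_000, 1.0
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate(([0.0], np.cumsum(dB)))   # discretized Brownian path

# Left-point (Ito) approximation of the integral of sgn(B_s) dB_s.
ito_integral = np.sum(np.sign(B[:-1]) * dB)

# Occupation-time estimate of the local time at 0.
eps = 0.01
local_time = (dt / (2.0 * eps)) * np.count_nonzero(np.abs(B[:-1]) < eps)

print(abs(B[-1]), ito_integral + local_time)  # the two values should nearly agree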
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "|B_t| = \\int_0^t \\sgn(B_s)\\, dB_s + L_t"
},
{
"math_id": 1,
"text": "\\sgn (x) = \\begin{cases} +1, & x > 0; \\\\0,& x=0 \\\\-1, & x < 0. \\end{cases}"
},
{
"math_id": 2,
"text": "L_{t} = \\lim_{\\varepsilon \\downarrow 0} \\frac1{2 \\varepsilon} | \\{ s \\in [0, t] | B_{s} \\in (- \\varepsilon, + \\varepsilon) \\} |."
},
{
"math_id": 3,
"text": "f(x)=|x|"
},
{
"math_id": 4,
"text": " f'(x) = \\sgn(x)"
},
{
"math_id": 5,
"text": " f''(x) = 2\\delta(x) "
},
{
"math_id": 6,
"text": "\\frac{x^2}{2|\\varepsilon|}+\\frac{|\\varepsilon|}{2}."
}
] | https://en.wikipedia.org/wiki?curid=9666599 |
9667106 | Minimal polynomial (field theory) | Concept in abstract algebra
In field theory, a branch of mathematics, the minimal polynomial of an element "α" of an extension field of a field is, roughly speaking, the polynomial of lowest degree having coefficients in the smaller field, such that "α" is a root of the polynomial. If the minimal polynomial of "α" exists, it is unique. The coefficient of the highest-degree term in the polynomial is required to be 1.
More formally, a minimal polynomial is defined relative to a field extension "E"/"F" and an element of the extension field "E". The minimal polynomial of an element, if it exists, is a member of "F"["x"], the ring of polynomials in the variable "x" with coefficients in "F". Given an element "α" of "E", let "J""α" be the set of all polynomials "f"("x") in "F"["x"] such that "f"("α") = 0. The element "α" is called a root or zero of each polynomial in "J""α".
More specifically, "J""α" is the kernel of the ring homomorphism from "F"["x"] to "E" which sends polynomials "g" to their value "g"("α") at the element "α". Because it is the kernel of a ring homomorphism, "J""α" is an ideal of the polynomial ring "F"["x"]: it is closed under polynomial addition and subtraction (hence containing the zero polynomial), as well as under multiplication by elements of "F" (which is scalar multiplication if "F"["x"] is regarded as a vector space over "F").
The zero polynomial, all of whose coefficients are 0, is in every "J""α" since 0"α""i" = 0 for all "α" and "i". This makes the zero polynomial useless for classifying different values of "α" into types, so it is excepted. If there are any non-zero polynomials in "J""α", i.e. if the latter is not the zero ideal, then "α" is called an algebraic element over "F", and there exists a monic polynomial of least degree in "J""α". This is the minimal polynomial of "α" with respect to "E"/"F". It is unique and irreducible over "F". If the zero polynomial is the only member of "J""α", then "α" is called a transcendental element over "F" and has no minimal polynomial with respect to "E"/"F".
Minimal polynomials are useful for constructing and analyzing field extensions. When "α" is algebraic with minimal polynomial "f"("x"), the smallest field that contains both "F" and "α" is isomorphic to the quotient ring "F"["x"]/⟨"f"("x")⟩, where ⟨"f"("x")⟩ is the ideal of "F"["x"] generated by "f"("x"). Minimal polynomials are also used to define conjugate elements.
Definition.
Let "E"/"F" be a field extension, "α" an element of "E", and "F"["x"] the ring of polynomials in "x" over "F". The element "α" has a minimal polynomial when "α" is algebraic over "F", that is, when "f"("α") = 0 for some non-zero polynomial "f"("x") in "F"["x"]. Then the minimal polynomial of "α" is defined as the monic polynomial of least degree among all polynomials in "F"["x"] having "α" as a root.
Properties.
Throughout this section, let "E"/"F" be a field extension over "F" as above, let "α" ∈ "E" be an algebraic element over "F" and let "J""α" be the ideal of polynomials vanishing on "α".
Uniqueness.
The minimal polynomial "f" of "α" is unique.
To prove this, suppose that "f" and "g" are monic polynomials in "J""α" of minimal degree "n" > 0. We have that "r" := "f"−"g" ∈ "J""α" (because the latter is closed under addition/subtraction) and that "m" := deg("r") < "n" (because the polynomials are monic of the same degree). If "r" is not zero, then "r" / "c""m" (writing "c""m" ∈ "F" for the non-zero coefficient of highest degree in "r") is a monic polynomial of degree "m" < "n" such that "r" / "c""m" ∈ "J""α" (because the latter is closed under multiplication/division by non-zero elements of "F"), which contradicts our original assumption of minimality for "n". We conclude that 0 = "r" = "f" − "g", i.e. that "f" = "g".
Irreducibility.
The minimal polynomial "f" of "α" is irreducible, i.e. it cannot be factorized as "f" = "gh" for two polynomials "g" and "h" of strictly lower degree.
To prove this, first observe that any factorization "f" = "gh" implies that either "g"("α") = 0 or "h"("α") = 0, because "f"("α") = 0 and "F" is a field (hence also an integral domain). Choosing both "g" and "h" to be of degree strictly lower than "f" would then contradict the minimality requirement on "f", so "f" must be irreducible.
Minimal polynomial generates "J""α".
The minimal polynomial "f" of "α" generates the ideal "J""α", i.e. every " g" in "J""α" can be factorized as "g=fh" for some "h' " in "F"["x"].
To prove this, it suffices to observe that "F"["x"] is a principal ideal domain, because "F" is a field: this means that every ideal "I" in "F"["x"], "J""α" amongst them, is generated by a single element "f". With the exception of the zero ideal "I" = {0}, the generator "f" must be non-zero and it must be the unique polynomial of minimal degree, up to a factor in "F" (because the degree of "fg" is strictly larger than that of "f" whenever "g" is of degree greater than zero). In particular, there is a unique monic generator "f", and all generators must be irreducible. When "I" is chosen to be "J""α", for "α" algebraic over "F", then the monic generator "f" is the minimal polynomial of "α".
Examples.
Minimal polynomial of a Galois field extension.
Given a Galois field extension formula_0 the minimal polynomial of any formula_1 not in formula_2 can be computed as formula_3 if formula_4 has no stabilizers in the Galois action. This polynomial is irreducible, which can be deduced by looking at the roots of formula_5, so it is indeed the minimal polynomial. Note that the same kind of formula can be found by replacing formula_6 with formula_7 where formula_8 is the stabilizer group of formula_4. For example, if formula_9 then its stabilizer is formula_10, hence formula_11 is its minimal polynomial.
Quadratic field extensions.
Q(√2).
If "F" = Q, "E" = R, "α" = √2, then the minimal polynomial for "α" is "a"("x") = "x"2 − 2. The base field "F" is important as it determines the possibilities for the coefficients of "a"("x"). For instance, if we take "F" = R, then the minimal polynomial for "α" = √2 is "a"("x") = "x" − √2.
Q(√"d"&hairsp;).
In general, for the quadratic extension given by a square-free formula_12, the minimal polynomial of an element formula_13 can be found using Galois theory. Then formula_14 In particular, this implies formula_15 and formula_16. This can be used to determine formula_17 through a series of relations using modular arithmetic.
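For a concrete instance, the conjugate product above can be expanded symbolically. The following SymPy sketch (the particular values of "a", "b" and "d" are arbitrary, and SymPy itself is only an illustrative choice of tool) reproduces the formula:

from sympy import Rational, expand, sqrt, symbols

x = symbols('x')
a, b, d = Rational(3, 2), Rational(1, 2), 5        # the element (3 + sqrt(5)) / 2
poly = expand((x - (a + b * sqrt(d))) * (x - (a - b * sqrt(d))))
print(poly)  # x**2 - 3*x + 1, with 2a = 3 and a**2 - b**2*d = 1 both integers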
Biquadratic field extensions.
If "α" = √2 + √3, then the minimal polynomial in Q["x"] is "a"("x") = "x"4 − 10"x"2 + 1 = ("x" − √2 − √3)("x" + √2 − √3)("x" − √2 + √3)("x" + √2 + √3).
Notice if formula_18 then the Galois action on formula_19 stabilizes formula_4. Hence the minimal polynomial can be found using the quotient group formula_20.
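The factorization above is easy to reproduce symbolically; a short SymPy sketch (assuming SymPy is available) multiplying out the four conjugate factors is:

from sympy import expand, sqrt, symbols

x = symbols('x')
roots = [s2 * sqrt(2) + s3 * sqrt(3) for s2 in (1, -1) for s3 in (1, -1)]
poly = 1
for r in roots:
    poly *= (x - r)
print(expand(poly))  # x**4 - 10*x**2 + 1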
Roots of unity.
The minimal polynomials in Q["x"] of roots of unity are the cyclotomic polynomials. The roots of the minimal polynomial of 2cos(2π/"n") are twice the real parts of the primitive "n"th roots of unity.
Swinnerton-Dyer polynomials.
The minimal polynomial in Q["x"] of the sum of the square roots of the first "n" prime numbers is constructed analogously, and is called a Swinnerton-Dyer polynomial.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L/K"
},
{
"math_id": 1,
"text": "\\alpha \\in L"
},
{
"math_id": 2,
"text": "K"
},
{
"math_id": 3,
"text": "f(x) = \\prod_{\\sigma \\in \\text{Gal}(L/K)} (x - \\sigma(\\alpha))"
},
{
"math_id": 4,
"text": "\\alpha"
},
{
"math_id": 5,
"text": "f'"
},
{
"math_id": 6,
"text": "G = \\text{Gal}(L/K)"
},
{
"math_id": 7,
"text": "G/N"
},
{
"math_id": 8,
"text": "N = \\text{Stab}(\\alpha)"
},
{
"math_id": 9,
"text": "\\alpha \\in K"
},
{
"math_id": 10,
"text": "G"
},
{
"math_id": 11,
"text": "(x-\\alpha)"
},
{
"math_id": 12,
"text": "d"
},
{
"math_id": 13,
"text": "a + b\\sqrt{d\\,}"
},
{
"math_id": 14,
"text": "\\begin{align}\nf(x) &= (x - (a + b\\sqrt{d\\,}))(x - (a - b\\sqrt{d\\,})) \\\\\n&= x^2 - 2ax + (a^2 - b^2d)\n\\end{align}"
},
{
"math_id": 15,
"text": "2a \\in \\mathbb{Z}"
},
{
"math_id": 16,
"text": "a^2 - b^2d \\in \\mathbb{Z}"
},
{
"math_id": 17,
"text": "\\mathcal{O}_{\\mathbb{Q}(\\sqrt{d\\,}\\!\\!\\!\\;\\;)}"
},
{
"math_id": 18,
"text": "\\alpha = \\sqrt{2}"
},
{
"math_id": 19,
"text": "\\sqrt{3}"
},
{
"math_id": 20,
"text": "\\text{Gal}(\\mathbb{Q}(\\sqrt{2},\\sqrt{3})/\\mathbb{Q})/\\text{Gal}(\\mathbb{Q}(\\sqrt{3})/\\mathbb{Q})"
}
] | https://en.wikipedia.org/wiki?curid=9667106 |
9667107 | Minimal polynomial (linear algebra) | Of a matrix
In linear algebra, the minimal polynomial "μA" of an "n" × "n" matrix A over a field F is the monic polynomial P over F of least degree such that "P"("A") = 0. Any other polynomial Q with "Q"("A") = 0 is a (polynomial) multiple of "μA".
The following three statements are equivalent: "λ" is a root of "μA"; "λ" is a root of the characteristic polynomial "χA" of A; and "λ" is an eigenvalue of the matrix A.
The multiplicity of a root λ of "μA" is the largest power m such that ker(("A" − "λIn")"m") "strictly" contains ker(("A" − "λIn")"m"−1). In other words, increasing the exponent up to m will give ever larger kernels, but further increasing the exponent beyond m will just give the same kernel.
If the field F is not algebraically closed, then the minimal and characteristic polynomials need not factor according to their roots (in F) alone, in other words they may have irreducible polynomial factors of degree greater than 1. For irreducible polynomials P one has similar equivalences:
Like the characteristic polynomial, the minimal polynomial does not depend on the base field. In other words, considering the matrix as one with coefficients in a larger field does not change the minimal polynomial. The reason for this differs from the case with the characteristic polynomial (where it is immediate from the definition of determinants), namely by the fact that the minimal polynomial is determined by the relations of linear dependence between the powers of A: extending the base field will not introduce any new such relations (nor of course will it remove existing ones).
The minimal polynomial is often the same as the characteristic polynomial, but not always. For example, if A is a multiple "aIn" of the identity matrix, then its minimal polynomial is "X" − "a" since the kernel of "aIn" − "A" = 0 is already the entire space; on the other hand its characteristic polynomial is ("X" − "a")"n" (the only eigenvalue is a, and the degree of the characteristic polynomial is always equal to the dimension of the space). The minimal polynomial always divides the characteristic polynomial, which is one way of formulating the Cayley–Hamilton theorem (for the case of matrices over a field).
Formal definition.
Given an endomorphism T on a finite-dimensional vector space V over a field F, let "IT" be the set defined as
formula_0
where F["t"&hairsp;] is the space of all polynomials over the field F. "IT" is a proper ideal of F["t"&hairsp;]. Since F is a field, F["t"&hairsp;] is a principal ideal domain, thus any ideal is generated by a single polynomial, which is unique up to a unit in F. A particular choice among the generators can be made, since precisely one of the generators is monic. The minimal polynomial is thus defined to be the monic polynomial that generates "IT". It is the monic polynomial of least degree in "IT".
Applications.
An endomorphism φ of a finite-dimensional vector space over a field F is diagonalizable if and only if its minimal polynomial factors completely over F into "distinct" linear factors. The fact that there is only one factor "X" − "λ" for every eigenvalue λ means that the generalized eigenspace for λ is the same as the eigenspace for λ: every Jordan block has size 1. More generally, if φ satisfies a polynomial equation "P"("φ") = 0 where P factors into distinct linear factors over F, then it will be diagonalizable: its minimal polynomial is a divisor of P and therefore also factors into distinct linear factors. In particular one has:
"X k" − 1: finite order endomorphisms of complex vector spaces are diagonalizable. For the special case "k"
2 of involutions, this is even true for endomorphisms of vector spaces over any field of characteristic other than 2, since "X" 2 − 1
("X" − 1)("X" + 1) is a factorization into distinct factors over such a field. This is a part of representation theory of cyclic groups.
"X" 2 − "X"
"X"("X" − 1): endomorphisms satisfying "φ"2
"φ" are called projections, and are always diagonalizable (moreover their only eigenvalues are 0 and 1).
"X k" with "k" ≥ 2 then φ (a nilpotent endomorphism) is not necessarily diagonalizable, since "X k" has a repeated root 0.
These cases can also be proved directly, but the minimal polynomial gives a unified perspective and proof.
Computation.
For a nonzero vector v in V define:
formula_1
This definition satisfies the properties of a proper ideal. Let "μ""T","v" be the monic polynomial which generates it.
Example.
Define T to be the endomorphism of R3 with matrix, on the canonical basis,
formula_2
Taking the first canonical basis vector "e"1 and its repeated images by T one obtains
formula_3
of which the first three are easily seen to be linearly independent, and therefore span all of R3. The last one then necessarily is a linear combination of the first three, in fact
"T" 3 ⋅ "e"1
−4"T" 2 ⋅ "e"1 − "T" ⋅ "e"1 + "e"1,
so that:
"μ""T",&hairsp;"e"1
"X" 3 + 4"X" 2 + "X" − "I".
This is in fact also the minimal polynomial "μ""T" and the characteristic polynomial "χ""T"&hairsp;: indeed "μ""T",&hairsp;"e"1 divides "μT" which divides "χT", and since the first and last are of degree 3 and all are monic, they must all be the same. Another reason is that in general if any polynomial in T annihilates a vector v, then it also annihilates "T"&hairsp;⋅"v" (just apply T to the equation that says that it annihilates v), and therefore by iteration it annihilates the entire space generated by the iterated images by T of v; in the current case we have seen that for "v" = "e"1 that space is all of R3, so "μ""T",&hairsp;"e"1("T"&hairsp;) = 0. Indeed one verifies for the full matrix that "T" 3 + 4"T" 2 + "T" − "I"3 is the zero matrix:
formula_4
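The same linear dependence can be found numerically by assembling the Krylov vectors "e"1, "T"⋅"e"1, "T"2⋅"e"1 and solving for the combination giving "T"3⋅"e"1; the Python/NumPy sketch below is only an illustration of that computation:

import numpy as np

T = np.array([[1.0, -1.0, -1.0],
              [1.0, -2.0,  1.0],
              [0.0,  1.0, -3.0]])
e1 = np.array([1.0, 0.0, 0.0])

krylov = np.column_stack([e1, T @ e1, T @ T @ e1])   # columns e1, T.e1, T^2.e1
target = T @ T @ T @ e1                              # T^3.e1
c = np.linalg.solve(krylov, target)
print(c)  # approximately [ 1. -1. -4.], so X^3 + 4X^2 + X - 1 annihilates e1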
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathit{I}_T = \\{ p \\in \\mathbf{F}[t] \\mid p(T) = 0 \\} ,"
},
{
"math_id": 1,
"text": " \\mathit{I}_{T, v} = \\{ p \\in \\mathbf{F}[t] \\; | \\; p(T)(v) = 0 \\}."
},
{
"math_id": 2,
"text": "\\begin{pmatrix} 1 & -1 & -1 \\\\ 1 & -2 & 1 \\\\ 0 & 1 & -3 \\end{pmatrix}."
},
{
"math_id": 3,
"text": " e_1 = \\begin{bmatrix} 1 \\\\ 0 \\\\ 0 \\end{bmatrix}, \\quad\n T \\cdot e_1 = \\begin{bmatrix} 1 \\\\ 1 \\\\ 0 \\end{bmatrix}. \\quad\nT^2\\! \\cdot e_1 = \\begin{bmatrix} 0 \\\\ -1 \\\\ 1 \\end{bmatrix} \\mbox{ and}\\quad\nT^3\\! \\cdot e_1 = \\begin{bmatrix} 0 \\\\ 3 \\\\ -4 \\end{bmatrix}"
},
{
"math_id": 4,
"text": "\\begin{bmatrix} 0 & 1 & -3 \\\\ 3 & -13 & 23 \\\\ -4 & 19 & -36 \\end{bmatrix}\n+ 4\\begin{bmatrix} 0 & 0 & 1 \\\\ -1 & 4 & -6 \\\\ 1 & -5 & 10 \\end{bmatrix}\n+ \\begin{bmatrix} 1 & -1 & -1 \\\\ 1 & -2 & 1 \\\\ 0 & 1 & -3 \\end{bmatrix}\n+ \\begin{bmatrix} -1 & 0 & 0 \\\\ 0 & -1 & 0 \\\\ 0 & 0 & -1 \\end{bmatrix}\n= \\begin{bmatrix} 0 & 0 & 0 \\\\ 0 & 0 & 0 \\\\ 0 & 0 & 0 \\end{bmatrix}"
}
] | https://en.wikipedia.org/wiki?curid=9667107 |
9667364 | Energy drift | Gradual change in the total energy of a closed system over time
In computer simulations of mechanical systems, energy drift is the gradual change in the total energy of a closed system over time. According to the laws of mechanics, the energy should be a constant of motion and should not change. However, in simulations the energy might fluctuate on a short time scale and increase or decrease on a very long time scale due to numerical integration artifacts that arise with the use of a finite time step Δ"t". This is somewhat similar to the flying ice cube problem, whereby numerical errors in handling equipartition of energy can change vibrational energy into translational energy.
More specifically, the energy tends to increase exponentially; its increase can be understood intuitively because each step introduces a small perturbation δv to the true velocity vtrue, which (if uncorrelated with v, which will be true for simple integration methods) results in a second-order increase in the energy
formula_0
Energy drift - usually damping - is substantial for numerical integration schemes that are not symplectic, such as the Runge-Kutta family. Symplectic integrators usually used in molecular dynamics, such as the Verlet integrator family, exhibit increases in energy over very long time scales, though the error remains roughly constant. These integrators do not in fact reproduce the actual Hamiltonian mechanics of the system; instead, they reproduce a closely related "shadow" Hamiltonian whose value they conserve many orders of magnitude more closely. The accuracy of the energy conservation for the true Hamiltonian is dependent on the time step. The energy computed from the modified Hamiltonian of a symplectic integrator is formula_1 from the true Hamiltonian.
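The contrast can be illustrated on a one-dimensional harmonic oscillator. The Python sketch below (unit mass and spring constant; the step size and number of steps are arbitrary choices) integrates the same initial condition with the non-symplectic explicit Euler scheme and with the symplectic velocity Verlet scheme and compares the final energies:

def total_energy(x, v):
    return 0.5 * v**2 + 0.5 * x**2          # unit mass and spring constant

def explicit_euler(x, v, dt, steps):
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x       # force = -x
    return x, v

def velocity_verlet(x, v, dt, steps):
    for _ in range(steps):
        v_half = v - 0.5 * dt * x
        x = x + dt * v_half
        v = v_half - 0.5 * dt * x
    return x, v

x0, v0, dt, steps = 1.0, 0.0, 0.05, 2_000
print(total_energy(*explicit_euler(x0, v0, dt, steps)))   # drifts far above 0.5
print(total_energy(*velocity_verlet(x0, v0, dt, steps)))  # stays close to 0.5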
Energy drift is similar to parametric resonance in that a finite, discrete timestepping scheme will result in nonphysical, limited sampling of motions with frequencies close to the frequency of velocity updates. Thus the restriction on the maximum step size that will be stable for a given system is proportional to the period of the fastest fundamental modes of the system's motion. For a motion with a natural frequency "ω", artificial resonances are introduced when the frequency of velocity updates, formula_2, is related to "ω" as
formula_3
where "n" and "m" are integers describing the resonance order. For Verlet integration, resonances up to the fourth order formula_4 frequently lead to numerical instability, leading to a restriction on the timestep size of
formula_5
where "ω" is the frequency of the fastest motion in the system and "p" is its period. The fastest motions in most biomolecular systems involve the motions of hydrogen atoms; it is thus common to use constraint algorithms to restrict hydrogen motion and thus increase the maximum stable time step that can be used in the simulation. However, because the time scales of heavy-atom motions are not widely divergent from those of hydrogen motions, in practice this allows only about a twofold increase in time step. Common practice in all-atom biomolecular simulation is to use a time step of 1 femtosecond (fs) for unconstrained simulations and 2 fs for constrained simulations, although larger time steps may be possible for certain systems or choices of parameters.
Energy drift can also result from imperfections in evaluating the energy function, usually due to simulation parameters that sacrifice accuracy for computational speed. For example, cutoff schemes for evaluating the electrostatic forces introduce systematic errors in the energy with each time step as particles move back and forth across the cutoff radius if sufficient smoothing is not used. Particle mesh Ewald summation is one solution for this effect, but introduces artifacts of its own. Errors in the system being simulated can also induce energy drifts characterized as "explosive" that are not artifacts, but are reflective of the instability of the initial conditions; this may occur when the system has not been subjected to sufficient structural minimization before beginning production dynamics. In practice, energy drift may be measured as a percent increase over time, or as a time needed to add a given amount of energy to the system.
The practical effects of energy drift depend on the simulation conditions, the thermodynamic ensemble being simulated, and the intended use of the simulation under study; for example, energy drift has much more severe consequences for simulations of the microcanonical ensemble than the canonical ensemble where the temperature is held constant. However, it has been shown that long microcanonical ensemble simulations can be performed with insignificant energy drift, including those of flexible molecules which incorporate constraints and Ewald summations. Energy drift is often used as a measure of the quality of the simulation, and has been proposed as one quality metric to be routinely reported in a mass repository of molecular dynamics trajectory data analogous to the Protein Data Bank. | [
{
"math_id": 0,
"text": "E = \\sum m \\mathbf{v}^{2} = \\sum m \\mathbf{v}_\\mathrm{true}^{2} + \\sum m \\ \\delta \\mathbf{v}^{2}"
},
{
"math_id": 1,
"text": "\\mathcal{O}\\left(\\Delta t^{p}\\right)"
},
{
"math_id": 2,
"text": "\\frac{2\\pi}{\\Delta t}"
},
{
"math_id": 3,
"text": "\\frac{n}{m}\\omega = \\frac{2\\pi}{\\Delta t}"
},
{
"math_id": 4,
"text": "\\left(\\frac{n}{m} = 4\\right)"
},
{
"math_id": 5,
"text": "\\Delta t < \\frac{\\sqrt{2}}{\\omega} \\approx 0.225p"
}
] | https://en.wikipedia.org/wiki?curid=9667364 |
9667552 | Periodic boundary conditions | Concept in molecular modelling
Periodic boundary conditions (PBCs) are a set of boundary conditions which are often chosen for approximating a large (infinite) system by using a small part called a "unit cell". PBCs are often used in computer simulations and mathematical models. The topology of two-dimensional PBC is equal to that of a "world map" of some video games; the geometry of the unit cell satisfies perfect two-dimensional tiling, and when an object passes through one side of the unit cell, it re-appears on the opposite side with the same velocity. In topological terms, the space made by two-dimensional PBCs can be thought of as being mapped onto a torus (compactification). The large systems approximated by PBCs consist of an infinite number of unit cells. In computer simulations, one of these is the original simulation box, and others are copies called "images". During the simulation, only the properties of the original simulation box need to be recorded and propagated. The "minimum-image convention" is a common form of PBC particle bookkeeping in which each individual particle in the simulation interacts with the closest image of the remaining particles in the system.
One example of periodic boundary conditions can be defined according to smooth real functions formula_0 by
formula_1
formula_2
formula_3
formula_4
for all m = 0, 1, 2, ... and for constants formula_5 and formula_6.
In molecular dynamics simulations and Monte Carlo molecular modeling, PBCs are usually applied to calculate properties of bulk gasses, liquids, crystals or mixtures. A common application uses PBC to simulate solvated macromolecules in a bath of explicit solvent. Born–von Karman boundary conditions are periodic boundary conditions for a special system.
In electromagnetics, PBC can be applied for different mesh types to analyze the electromagnetic properties of periodical structures.
Requirements and artifacts.
Three-dimensional PBCs are useful for approximating the behavior of macro-scale systems of gases, liquids, and solids. Three-dimensional PBCs can also be used to simulate planar surfaces, in which case two-dimensional PBCs are often more suitable. Two-dimensional PBCs for planar surfaces are also called "slab boundary conditions"; in this case, PBCs are used for two Cartesian coordinates (e.g., x and y), and the third coordinate (z) extends to infinity.
PBCs can be used in conjunction with Ewald summation methods (e.g., the particle mesh Ewald method) to calculate electrostatic forces in the system. However, PBCs also introduce correlational artifacts that do not respect the translational invariance of the system, and requires constraints on the composition and size of the simulation box.
In simulations of solid systems, the strain field arising from any inhomogeneity in the system will be artificially truncated and modified by the periodic boundary. Similarly, the wavelength of sound or shock waves and phonons in the system is limited by the box size.
In simulations containing ionic (Coulomb) interactions, the net electrostatic charge of the system must be zero to avoid summing to an infinite charge when PBCs are applied. In some applications it is appropriate to obtain neutrality by adding ions such as sodium or chloride (as counterions) in appropriate numbers if the molecules of interest are charged. Sometimes ions are even added to a system in which the molecules of interest are neutral, to approximate the ionic strength of the solution in which the molecules naturally appear. Maintenance of the minimum-image convention also generally requires that a spherical cutoff radius for nonbonded forces be at most half the length of one side of a cubic box. Even in electrostatically neutral systems, a net dipole moment of the unit cell can introduce a spurious bulk-surface energy, equivalent to pyroelectricity in polar crystals. Another consequence of applying PBCs to a simulated system such as a liquid or a solid is that this hypothetical system has no contact with its “surroundings”, due to it being infinite in all directions. Therefore, long-range energy contributions such as the electrostatic potential, and by extension the energies of charged particles like electrons, are not automatically aligned to experimental energy scales. Mathematically, this energy level ambiguity corresponds to the sum of the electrostatic energy being dependent on a surface term that needs to be set by the user of the method.
The size of the simulation box must also be large enough to prevent periodic artifacts from occurring due to the unphysical topology of the simulation. In a box that is too small, a macromolecule may interact with its own image in a neighboring box, which is functionally equivalent to a molecule's "head" interacting with its own "tail". This produces highly unphysical dynamics in most macromolecules, although the magnitude of the consequences and thus the appropriate box size relative to the size of the macromolecules depends on the intended length of the simulation, the desired accuracy, and the anticipated dynamics. For example, simulations of protein folding that begin from the native state may undergo smaller fluctuations, and therefore may not require as large a box, as simulations that begin from a random coil conformation. However, the effects of solvation shells on the observed dynamics – in simulation or in experiment – are not well understood. A common recommendation based on simulations of DNA is to require at least 1 nm of solvent around the molecules of interest in every dimension.
Practical implementation: continuity and the minimum image convention.
An object which has passed through one face of the simulation box should re-enter through the opposite face—or its image should do it. Evidently, a strategic decision must be made: Do we (A) “fold back” particles into the simulation box when they leave it, or do we (B) let them go on (but compute interactions with the nearest images)? The decision has no effect on the course of the simulation, but if the user is interested in mean displacements, diffusion lengths, etc., the second option is preferable.
(A) Restrict particle coordinates to the simulation box.
To implement a PBC algorithm, at least two steps are needed.
Restricting the coordinates is a simple operation which can be described with the following code, where x_size is the length of the box in one direction (assuming an orthogonal unit cell centered on the origin) and x is the position of the particle in the same direction:
if (periodic_x) then
if (x < -x_size * 0.5) x = x + x_size
if (x >= x_size * 0.5) x = x - x_size
end if
Distance and vector between objects should obey the minimum image criterion.
This can be implemented according to the following code (in the case of a one-dimensional system where dx is the distance direction vector from object i to object j):
if (periodic_x) then
dx = x(j) - x(i)
if (dx > x_size * 0.5) dx = dx - x_size
if (dx <= -x_size * 0.5) dx = dx + x_size
end if
For three-dimensional PBCs, both operations should be repeated in all 3 dimensions.
These operations can be written in a much more compact form for orthorhombic cells if the origin is shifted to a corner of the box. Then we have, in one dimension, for positions and distances respectively:
! After x(i) update without regard to PBC:
x(i) = x(i) - floor(x(i) / x_size) * x_size ! For a box with the origin at the lower left vertex
! Works for x's lying in any image.
dx = x(j) - x(i)
dx = dx - nint(dx / x_size) * x_size
(B) Do not restrict the particle coordinates.
Assuming an orthorhombic simulation box with the origin at the lower left forward corner, the minimum image convention for the calculation of effective particle distances can be calculated with the “nearest integer” function as shown above, here as C/C++ code:
x_rsize = 1.0 / x_size; // compute only when box size is set or changed
dx = x[j] - x[i];
dx -= x_size * nearbyint(dx * x_rsize);
The fastest way of carrying out this operation depends on the processor architecture. If the sign of dx is not relevant, the method
dx = fabs(dx);
dx -= static_cast<int>(dx * x_rsize + 0.5) * x_size;
was found to be fastest on x86-64 processors in 2013.
For non-orthorhombic cells the situation is more complicated.
In simulations of ionic systems more complicated operations
may be needed to handle the long-range Coulomb interactions spanning several box images, for instance Ewald summation.
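For array-based codes the same minimum-image convention is often written in vectorized form. The Python/NumPy sketch below applies it to a whole set of displacement vectors in an orthorhombic box; the array names and example numbers are purely illustrative:

import numpy as np

def minimum_image(displacements, box_lengths):
    # Map each displacement component into the interval [-L/2, L/2] for box length L.
    return displacements - box_lengths * np.rint(displacements / box_lengths)

box = np.array([10.0, 10.0, 10.0])
dx = np.array([[6.0, -7.5, 0.2]])
print(minimum_image(dx, box))  # [[-4.   2.5  0.2]]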
Unit cell geometries.
PBC requires the unit cell to be a shape that will tile perfectly into a three-dimensional crystal. Thus, a spherical or elliptical droplet cannot be used. A cube or rectangular prism is the most intuitive and common choice, but can be computationally expensive due to unnecessary amounts of solvent molecules in the corners, distant from the central macromolecules. A common alternative that requires less volume is the truncated octahedron.
General dimension.
For simulations in 2D and 3D space, the cubic periodic boundary condition is most commonly used since it is the simplest to code. In computer simulations of high-dimensional systems, however, the hypercubic periodic boundary condition can be less efficient because the corners occupy most of the space. In general dimension, the unit cell can be viewed as the Wigner-Seitz cell of a certain lattice packing. For example, the hypercubic periodic boundary condition corresponds to the hypercubic lattice packing. It is then preferred to choose a unit cell which corresponds to the dense packing of that dimension. In 4D this is the D4 lattice, and in 8 dimensions the E8 lattice. The implementation of these high-dimensional periodic boundary conditions is equivalent to error-correcting-code approaches in information theory.
Conserved properties.
Under periodic boundary conditions, the linear momentum of the system is conserved, but angular momentum is not. The conventional explanation of this fact is based on Noether's theorem, which states that conservation of angular momentum follows from rotational invariance of the Lagrangian. However, this approach was shown to not be consistent: it fails to explain the absence of conservation of angular momentum of a single particle moving in a periodic cell. The Lagrangian of the particle is constant and therefore rotationally invariant, while the angular momentum of the particle is not conserved. This contradiction is caused by the fact that Noether's theorem is usually formulated for closed systems. The periodic cell exchanges mass, momentum, angular momentum, and energy with the neighboring cells.
When applied to the microcanonical ensemble (constant particle number, volume, and energy, abbreviated NVE), using PBC rather than reflecting walls slightly alters the sampling of the simulation due to the conservation of total linear momentum and the position of the center of mass; this ensemble has been termed the "molecular dynamics ensemble" or the NVEPG ensemble. These additional conserved quantities introduce minor artifacts related to the statistical mechanical definition of temperature, the departure of the velocity distributions from a Boltzmann distribution, and violations of equipartition for systems containing particles with heterogeneous masses. The simplest of these effects is that a system of "N" particles will behave, in the molecular dynamics ensemble, as a system of "N-1" particles. These artifacts have quantifiable consequences for small toy systems containing only perfectly hard particles; they have not been studied in depth for standard biomolecular simulations, but given the size of such systems, the effects will be largely negligible. | [
{
"math_id": 0,
"text": "\\phi: \\mathbb{R}^n \\to \\mathbb{R}"
},
{
"math_id": 1,
"text": " \\frac{\\partial^m }{\\partial x_1^m} \\phi(a_1,x_2,...,x_n) = \\frac{\\partial^m }{\\partial x_1^m} \\phi(b_1,x_2,...,x_n), "
},
{
"math_id": 2,
"text": " \\frac{\\partial^m }{\\partial x_2^m} \\phi(x_1,a_2,...,x_n) = \\frac{\\partial^m }{\\partial x_2^m} \\phi(x_1,b_2,...,x_n), "
},
{
"math_id": 3,
"text": " ... , "
},
{
"math_id": 4,
"text": " \\frac{\\partial^m }{\\partial x_n^m} \\phi(x_1,x_2,...,a_n) = \\frac{\\partial^m }{\\partial x_n^m} \\phi(x_1,x_2,...,b_n) "
},
{
"math_id": 5,
"text": "a_i"
},
{
"math_id": 6,
"text": "b_i"
}
] | https://en.wikipedia.org/wiki?curid=9667552 |
966856 | Valuation ring | Concept in algebra
In abstract algebra, a valuation ring is an integral domain "D" such that for every non-zero element "x" of its field of fractions "F", at least one of "x" or "x"−1 belongs to "D".
Given a field "F", if "D" is a subring of "F" such that either "x" or "x"−1 belongs to
"D" for every nonzero "x" in "F", then "D" is said to be a valuation ring for the field "F" or a place of "F". Since "F" in this case is indeed the field of fractions of "D", a valuation ring for a field is a valuation ring. Another way to characterize the valuation rings of a field "F" is that valuation rings "D" of "F" have "F" as their field of fractions, and their ideals are totally ordered by inclusion; or equivalently their principal ideals are totally ordered by inclusion. In particular, every valuation ring is a local ring.
The valuation rings of a field are the maximal elements of the set of the local subrings in the field partially ordered by dominance or refinement, where
formula_0 dominates formula_1 if formula_2 and formula_3.
Every local ring in a field "K" is dominated by some valuation ring of "K".
An integral domain whose localization at any prime ideal is a valuation ring is called a Prüfer domain.
Definitions.
There are several equivalent definitions of valuation ring (see below for the characterization in terms of dominance). For an integral domain "D" and its field of fractions "K", the following are equivalent:
The equivalence of the first three definitions follows easily. A theorem states that any ring satisfying the first three conditions satisfies the fourth: take Γ to be the quotient "K"×/"D"× of the unit group of "K" by the unit group of "D", and take ν to be the natural projection. We can turn Γ into a totally ordered group by declaring the residue classes of elements of "D" as "positive".
Even further, given any totally ordered abelian group Γ, there is a valuation ring "D" with value group Γ (see Hahn series).
From the fact that the ideals of a valuation ring are totally ordered, one can conclude that a valuation ring is a local domain, and that every finitely generated ideal of a valuation ring is principal (i.e., a valuation ring is a Bézout domain). In fact, it is a theorem of Krull that an integral domain is a valuation ring if and only if it is a local Bézout domain. It also follows from this that a valuation ring is Noetherian if and only if it is a principal ideal domain. In this case, it is either a field or it has exactly one non-zero prime ideal; in the latter case it is called a discrete valuation ring. (By convention, a field is not a discrete valuation ring.)
A value group is called "discrete" if it is isomorphic to the additive group of the integers, and a valuation ring has a discrete valuation group if and only if it is a discrete valuation ring.
Very rarely, "valuation ring" may refer to a ring that satisfies the second or third condition but is not necessarily a domain. A more common term for this type of ring is "uniserial ring".
formula_10
has the valuation formula_11. The subring formula_12 is a valuation ring as well.
Dominance and integral closure.
The units, or invertible elements, of a valuation ring are the elements "x" in "D" such that "x" −1 is also a member of "D". The other elements of "D" – called nonunits – do not have an inverse in "D", and they form an ideal "M". This ideal is maximal among the (totally ordered) ideals of D. Since "M" is a maximal ideal, the quotient ring "D"/"M" is a field, called the residue field of "D".
In general, we say a local ring formula_28 dominates a local ring formula_29 if formula_30 and formula_31; in other words, the inclusion formula_32 is a local ring homomorphism. Every local ring formula_33 in a field "K" is dominated by some valuation ring of "K". Indeed, the set consisting of all subrings "R" of "K" containing "A" and formula_34 is nonempty and is inductive; thus, it has a maximal element formula_35 by Zorn's lemma. We claim "R" is a valuation ring. "R" is a local ring with maximal ideal containing formula_36 by maximality. Again by maximality it is also integrally closed. Now, if formula_37, then, by maximality, formula_38 and thus we can write:
formula_39.
Since formula_40 is a unit element, this implies that formula_41 is integral over "R"; thus it is in "R". This proves "R" is a valuation ring. ("R" dominates "A" since its maximal ideal contains formula_42 by construction.)
A local ring "R" in a field "K" is a valuation ring if and only if it is a maximal element of the set of all local rings contained in "K" partially ordered by dominance. This easily follows from the above.
Let "A" be a subring of a field "K" and formula_43 a ring homomorphism into an algebraically closed field "k". Then "f" extends to a ring homomorphism formula_44, "D" some valuation ring of "K" containing "A". (Proof: Let formula_45 be a maximal extension, which clearly exists by Zorn's lemma. By maximality, "R" is a local ring with maximal ideal containing the kernel of "f". If "S" is a local ring dominating "R", then "S" is algebraic over "R"; if not, formula_46 contains a polynomial ring formula_47 to which "g" extends, a contradiction to maximality. It follows formula_48 is an algebraic field extension of formula_49. Thus, formula_50 extends "g"; hence, "S" = "R".)
If a subring "R" of a field "K" contains a valuation ring "D" of "K", then, by checking Definition 1, "R" is also a valuation ring of "K". In particular, "R" is local and its maximal ideal contracts to some prime ideal of "D", say, formula_42. Then formula_51 since formula_35 dominates formula_52, which is a valuation ring since the ideals are totally ordered. This observation is subsumed to the following: there is a bijective correspondence formula_53 the set of all subrings of "K" containing "D". In particular, "D" is integrally closed, and the Krull dimension of "D" is the number of proper subrings of "K" containing "D".
In fact, the integral closure of an integral domain "A" in the field of fractions "K" of "A" is the intersection of all valuation rings of "K" containing "A". Indeed, the integral closure is contained in the intersection since the valuation rings are integrally closed. Conversely, let "x" be in "K" but not integral over "A". Since the ideal formula_54 is not formula_55, it is contained in a maximal ideal formula_42. Then there is a valuation ring "R" that dominates the localization of formula_55 at formula_42. Since formula_56, formula_37.
The dominance is used in algebraic geometry. Let "X" be an algebraic variety over a field "k". Then we say a valuation ring "R" in formula_57 has "center "x" on "X"" if formula_35 dominates the local ring formula_58 of the structure sheaf at "x".
Ideals in valuation rings.
We may describe the ideals in the valuation ring by means of its value group.
Let Γ be a totally ordered abelian group. A subset Δ of Γ is called a "segment" if it is nonempty and, for any α in Δ, any element between −α and α is also in Δ (end points included). A subgroup of Γ is called an "isolated subgroup" if it is a segment and is a proper subgroup.
Let "D" be a valuation ring with valuation "v" and value group Γ. For any subset "A" of "D", we let formula_59 be the complement of the union of formula_60 and formula_61 in formula_62. If "I" is a proper ideal, then formula_63 is a segment of formula_62. In fact, the mapping formula_64 defines an inclusion-reversing bijection between the set of proper ideals of "D" and the set of segments of formula_62. Under this correspondence, the nonzero prime ideals of "D" correspond bijectively to the isolated subgroups of Γ.
Example: The ring of "p"-adic integers formula_16 is a valuation ring with value group formula_14. The zero subgroup of formula_14 corresponds to the unique maximal ideal formula_65 and the whole group to the zero ideal. The maximal ideal is the only isolated subgroup of formula_14.
The set of isolated subgroups is totally ordered by inclusion. The height or rank "r"(Γ) of Γ is defined to be the cardinality of the set of isolated subgroups of Γ. Since the nonzero prime ideals are totally ordered and they correspond to isolated subgroups of Γ, the height of Γ is equal to the Krull dimension of the valuation ring "D" associated with Γ.
The most important special case is height one, which is equivalent to Γ being a subgroup of the real numbers formula_66 under addition (or equivalently, of the positive real numbers formula_67 under multiplication). A valuation ring with a valuation of height one has a corresponding absolute value defining an ultrametric place. The discrete valuation rings mentioned earlier are a special case of this.
The rational rank "rr"(Γ) is defined as the rank of the value group as an abelian group,
formula_68
Places.
General definition.
A "place" of a field "K" is a ring homomorphism "p" from a valuation ring "D" of "K" to some field such that, for any formula_69, formula_70. The image of a place is a field called the residue field of "p". For example, the canonical map formula_71 is a place.
Example.
Let "A" be a Dedekind domain and formula_42 a prime ideal. Then the canonical map formula_72 is a place.
Specialization of places.
We say a place "p" "specializes to" a place "p′", denoted by formula_73, if the valuation ring of "p" contains the valuation ring of "p'". In algebraic geometry, we say a prime ideal formula_42 specializes to formula_74 if formula_75. The two notions coincide: formula_73 if and only if a prime ideal corresponding to "p" specializes to a prime ideal corresponding to "p′" in some valuation ring (recall that if formula_76 are valuation rings of the same field, then "D" corresponds to a prime ideal of formula_77.)
Example.
For example, in the function field formula_5 of some algebraic variety formula_6 every prime ideal formula_78 contained in a maximal ideal formula_79 gives a specialization formula_80.
Remarks.
It can be shown: if formula_73, then formula_81 for some place "q" of the residue field formula_82 of "p". (Observe formula_83 is a valuation ring of formula_82 and let "q" be the corresponding place; the rest is mechanical.) If "D" is a valuation ring of "p", then its Krull dimension is the cardinality of the set of specializations to "p" other than "p" itself. Thus, for any place "p" with valuation ring "D" of a field "K" over a field "k", we have:
formula_84.
If "p" is a place and "A" is a subring of the valuation ring of "p", then formula_85 is called the "center" of "p" in "A".
Places at infinity.
For the function field of an affine variety formula_6, there are valuations which are not associated to any of the primes of formula_6. These valuations are called the places at infinity. For example, the affine line formula_86 has function field formula_87. The place associated to the localization of
formula_88
at the maximal ideal
formula_89
is a place at infinity.
Notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "(A,\\mathfrak{m}_A)"
},
{
"math_id": 1,
"text": "(B,\\mathfrak{m}_B)"
},
{
"math_id": 2,
"text": "A \\supseteq B"
},
{
"math_id": 3,
"text": "\\mathfrak{m}_A \\cap B = \\mathfrak{m}_B"
},
{
"math_id": 4,
"text": "\\mathbb{F}"
},
{
"math_id": 5,
"text": "\\mathbb{F}(X)"
},
{
"math_id": 6,
"text": "X"
},
{
"math_id": 7,
"text": "\\Complex[X]"
},
{
"math_id": 8,
"text": "f/g \\in \\Complex(X)"
},
{
"math_id": 9,
"text": "g/f \\not\\in \\Complex[X]"
},
{
"math_id": 10,
"text": "\\mathbb{F}((X)) =\\left\\{ f(X) =\\! \\sum_{i>-\\infty}^\\infty a_iX^i \\, :\\ a_i \\in \\mathbb{F} \\right\\}"
},
{
"math_id": 11,
"text": "v(f) = \\inf\\nolimits_{a_n \\neq 0} n"
},
{
"math_id": 12,
"text": "\\mathbb{F}[[X]]"
},
{
"math_id": 13,
"text": "\\Z_{(p)},"
},
{
"math_id": 14,
"text": "\\Z"
},
{
"math_id": 15,
"text": "\\Q."
},
{
"math_id": 16,
"text": "\\Z_p"
},
{
"math_id": 17,
"text": "\\Q_p"
},
{
"math_id": 18,
"text": "\\Z_p^{\\text{cl}}"
},
{
"math_id": 19,
"text": "\\Q_p^{\\text{cl}}"
},
{
"math_id": 20,
"text": "\\Complex[x, y]"
},
{
"math_id": 21,
"text": "f"
},
{
"math_id": 22,
"text": "\\Complex[x, y] / (f)"
},
{
"math_id": 23,
"text": "\\{(x, y) : f(x, y) = 0\\}"
},
{
"math_id": 24,
"text": "P = (P_x, P_y) \\in \\Complex ^2"
},
{
"math_id": 25,
"text": "f(P) = 0"
},
{
"math_id": 26,
"text": "(\\mathbb{C}[[X^2]],(X^2)) \\hookrightarrow (\\mathbb{C}[[X]],(X))"
},
{
"math_id": 27,
"text": "\\mathbb{C}((X))"
},
{
"math_id": 28,
"text": "(S,\\mathfrak{m}_S)"
},
{
"math_id": 29,
"text": "(R,\\mathfrak{m}_R)"
},
{
"math_id": 30,
"text": "S \\supseteq R"
},
{
"math_id": 31,
"text": "\\mathfrak{m}_S \\cap R = \\mathfrak{m}_R"
},
{
"math_id": 32,
"text": "R \\subseteq S"
},
{
"math_id": 33,
"text": "(A, \\mathfrak{p})"
},
{
"math_id": 34,
"text": "1 \\not\\in \\mathfrak{p}R"
},
{
"math_id": 35,
"text": "R"
},
{
"math_id": 36,
"text": "\\mathfrak{p}R"
},
{
"math_id": 37,
"text": "x \\not\\in R"
},
{
"math_id": 38,
"text": "\\mathfrak{p}R[x] = R[x]"
},
{
"math_id": 39,
"text": "1 = r_0 + r_1 x + \\cdots + r_n x^n, \\quad r_i \\in \\mathfrak{p}R"
},
{
"math_id": 40,
"text": "1 - r_0"
},
{
"math_id": 41,
"text": "x^{-1}"
},
{
"math_id": 42,
"text": "\\mathfrak{p}"
},
{
"math_id": 43,
"text": "f: A \\to k"
},
{
"math_id": 44,
"text": "g: D \\to k"
},
{
"math_id": 45,
"text": " g: R \\to k "
},
{
"math_id": 46,
"text": "S"
},
{
"math_id": 47,
"text": "R[x]"
},
{
"math_id": 48,
"text": "S/\\mathfrak{m}_S"
},
{
"math_id": 49,
"text": "R/\\mathfrak{m}_R"
},
{
"math_id": 50,
"text": "S \\to S/\\mathfrak{m}_S \\hookrightarrow k"
},
{
"math_id": 51,
"text": "R = D_\\mathfrak{p}"
},
{
"math_id": 52,
"text": "D_\\mathfrak{p}"
},
{
"math_id": 53,
"text": "\\mathfrak{p} \\mapsto D_\\mathfrak{p}, \\operatorname{Spec}(D) \\to"
},
{
"math_id": 54,
"text": "x^{-1} A[x^{-1}]"
},
{
"math_id": 55,
"text": "A[x^{-1}]"
},
{
"math_id": 56,
"text": "x^{-1} \\in \\mathfrak{m}_R"
},
{
"math_id": 57,
"text": "k(X)"
},
{
"math_id": 58,
"text": "\\mathcal{O}_{x, X}"
},
{
"math_id": 59,
"text": "\\Gamma_A"
},
{
"math_id": 60,
"text": "v(A - 0)"
},
{
"math_id": 61,
"text": "-v(A - 0)"
},
{
"math_id": 62,
"text": "\\Gamma"
},
{
"math_id": 63,
"text": "\\Gamma_I"
},
{
"math_id": 64,
"text": "I \\mapsto \\Gamma_I"
},
{
"math_id": 65,
"text": "(p) \\subseteq \\Z_p"
},
{
"math_id": 66,
"text": "\\mathbb{R}"
},
{
"math_id": 67,
"text": "\\mathbb{R}^{+}"
},
{
"math_id": 68,
"text": "\\mathrm{dim}_\\Q(\\Gamma \\otimes_\\Z \\Q)."
},
{
"math_id": 69,
"text": "x \\not\\in D"
},
{
"math_id": 70,
"text": "p(1/x) = 0"
},
{
"math_id": 71,
"text": "D \\to D/\\mathfrak{m}_D"
},
{
"math_id": 72,
"text": "A_{\\mathfrak{p}} \\to k(\\mathfrak{p})"
},
{
"math_id": 73,
"text": "p \\rightsquigarrow p'"
},
{
"math_id": 74,
"text": "\\mathfrak{p}'"
},
{
"math_id": 75,
"text": "\\mathfrak{p} \\subseteq \\mathfrak{p}'"
},
{
"math_id": 76,
"text": "D \\supseteq D'"
},
{
"math_id": 77,
"text": "D'"
},
{
"math_id": 78,
"text": "\\mathfrak{p} \\in \\text{Spec}(R)"
},
{
"math_id": 79,
"text": "\\mathfrak{m}"
},
{
"math_id": 80,
"text": "\\mathfrak{p} \\rightsquigarrow \\mathfrak{m}"
},
{
"math_id": 81,
"text": "p' = q \\circ p|_{D'}"
},
{
"math_id": 82,
"text": "k(p)"
},
{
"math_id": 83,
"text": "p(D')"
},
{
"math_id": 84,
"text": " \\operatorname{tr.deg}_k k(p) + \\dim D \\le \\operatorname{tr.deg}_k K"
},
{
"math_id": 85,
"text": "\\operatorname{ker}(p) \\cap A"
},
{
"math_id": 86,
"text": "\\mathbb{A}^1_k"
},
{
"math_id": 87,
"text": "k(x)"
},
{
"math_id": 88,
"text": "k\\left[\\frac{1}{x}\\right]"
},
{
"math_id": 89,
"text": "\\mathfrak{m} = \\left(\\frac{1}{x}\\right)"
}
] | https://en.wikipedia.org/wiki?curid=966856 |
9672 | Entscheidungsproblem | Impossible task in computing
In mathematics and computer science, the "Entscheidungsproblem" (German for "decision problem") is a challenge posed by David Hilbert and Wilhelm Ackermann in 1928. It asks for an algorithm that takes a statement as input and answers "yes" or "no" according to whether the statement is universally valid, i.e., valid in every structure.
Completeness theorem.
By the completeness theorem of first-order logic, a statement is universally valid if and only if it can be deduced using logical rules and axioms, so the "Entscheidungsproblem" can also be viewed as asking for an algorithm to decide whether a given statement is provable using the rules of logic.
In 1936, Alonzo Church and Alan Turing published independent papers showing that a general solution to the "Entscheidungsproblem" is impossible, assuming that the intuitive notion of "effectively calculable" is captured by the functions computable by a Turing machine (or equivalently, by those expressible in the lambda calculus). This assumption is now known as the Church–Turing thesis.
History.
The origin of the "Entscheidungsproblem" goes back to Gottfried Leibniz, who in the seventeenth century, after having constructed a successful mechanical calculating machine, dreamt of building a machine that could manipulate symbols in order to determine the truth values of mathematical statements. He realized that the first step would have to be a clean formal language, and much of his subsequent work was directed toward that goal. In 1928, David Hilbert and Wilhelm Ackermann posed the question in the form outlined above.
In continuation of his "program", Hilbert posed three questions at an international conference in 1928, the third of which became known as "Hilbert's ". In 1929, Moses Schönfinkel published one paper on special cases of the decision problem, that was prepared by Paul Bernays.
As late as 1930, Hilbert believed that there would be no such thing as an unsolvable problem.
Negative answer.
Before the question could be answered, the notion of "algorithm" had to be formally defined. This was done by Alonzo Church in 1935 with the concept of "effective calculability" based on his λ-calculus, and by Alan Turing the next year with his concept of Turing machines. Turing immediately recognized that these are equivalent models of computation.
A negative answer to the "Entscheidungsproblem" was then given by Alonzo Church in 1935–36 (Church's theorem) and independently shortly thereafter by Alan Turing in 1936 (Turing's proof). Church proved that there is no computable function which decides, for two given λ-calculus expressions, whether they are equivalent or not. He relied heavily on earlier work by Stephen Kleene. Turing reduced the question of the existence of an 'algorithm' or 'general method' able to solve the "Entscheidungsproblem" to the question of the existence of a 'general method' which decides whether any given Turing machine halts or not (the halting problem). If 'algorithm' is understood as meaning a method that can be represented as a Turing machine, then, since the answer to the latter question is negative (in general), the question about the existence of an algorithm for the "Entscheidungsproblem" must also be negative (in general). In his 1936 paper, Turing says: "Corresponding to each computing machine 'M' we construct a formula 'Un(M)' and we show that, if there is a general method for determining whether 'Un(M)' is provable, then there is a general method for determining whether 'M' ever prints 0".
The work of both Church and Turing was heavily influenced by Kurt Gödel's earlier work on his incompleteness theorem, especially by the method of assigning numbers (a Gödel numbering) to logical formulas in order to reduce logic to arithmetic.
The "" is related to Hilbert's tenth problem, which asks for an algorithm to decide whether Diophantine equations have a solution. The non-existence of such an algorithm, established by the work of Yuri Matiyasevich, Julia Robinson, Martin Davis, and Hilary Putnam, with the final piece of the proof in 1970, also implies a negative answer to the "Entscheidungsproblem".
Generalizations.
Using the deduction theorem, the Entscheidungsproblem encompasses the more general problem of deciding whether a given first-order sentence is entailed by a given finite set of sentences, but validity in first-order theories with infinitely many axioms cannot be directly reduced to the Entscheidungsproblem. Such more general decision problems are of practical interest. Some first-order theories are algorithmically decidable; examples of this include Presburger arithmetic, real closed fields, and static type systems of many programming languages. On the other hand, the first-order theory of the natural numbers with addition and multiplication expressed by Peano's axioms cannot be decided with an algorithm.
Fragments.
By default, the citations in the section are from Pratt-Hartmann (2023).
The classical "Entscheidungsproblem" asks that, given a first-order formula, whether it is true in all models. The finitary problem asks whether it is true in all finite models. Trakhtenbrot's theorem shows that this is also undecidable.
Some notations: formula_0 means the problem of deciding whether there exists a model for a set of logical formulas formula_1. formula_2 is the same problem, but for "finite" models. The formula_3-problem for a logical fragment is called decidable if there exists a program that can decide, for each finite set formula_1 of logical formulas in the fragment, whether formula_0 holds or not.
There is a hierarchy of decidabilities. At the top are the undecidable problems; below them are the decidable problems. Furthermore, the decidable problems can be divided into a complexity hierarchy.
Aristotelian and relational.
Aristotelian logic considers 4 kinds of sentences: "All p are q", "All p are not q", "Some p is q", "Some p is not q". We can formalize these kinds of sentences as a fragment of first-order logic:
formula_4
where formula_5 are atomic predicates, and formula_6. Given a finite set of Aristotelian logic formulas, it is NLOGSPACE-complete to decide its formula_3. It is also NLOGSPACE-complete to decide formula_3 for a slight extension (Theorem 2.7):
formula_7
Relational logic extends Aristotelian logic by allowing a relational predicate. For example, "Everybody loves somebody" can be written as formula_8. Generally, we have 8 kinds of sentences:
formula_9
It is NLOGSPACE-complete to decide its formula_3 (Theorem 2.15). Relational logic can be extended to 32 kinds of sentences by allowing formula_10, but this extension is EXPTIME-complete (Theorem 2.24).
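To make the formula_0 / formula_2 notation concrete, the following editorial sketch brute-forces finite models of the Aristotelian sentence forms over tiny domains. It is only an illustration of the semantics; it is not the NLOGSPACE procedure the cited theorems refer to, and the predicate names and domain bound are arbitrary choices:
```python
from itertools import chain, combinations, product

# Sentence forms of the Aristotelian fragment, encoded as (form, p, q):
#   "all+"  : All p are q        forall x, p(x) -> q(x)
#   "all-"  : All p are not q    forall x, p(x) -> not q(x)
#   "some+" : Some p is q        exists x, p(x) and q(x)
#   "some-" : Some p is not q    exists x, p(x) and not q(x)

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def holds(sentence, extension):
    form, p, q = sentence
    P, Q = extension[p], extension[q]
    if form == "all+":
        return P <= Q          # subset
    if form == "all-":
        return not (P & Q)     # disjoint
    if form == "some+":
        return bool(P & Q)
    if form == "some-":
        return bool(P - Q)
    raise ValueError(form)

def finsat(sentences, max_size=3):
    """Brute-force FinSat check: search all interpretations over domains of size <= max_size."""
    preds = sorted({s[i] for s in sentences for i in (1, 2)})
    for n in range(1, max_size + 1):
        subsets = [frozenset(c) for c in powerset(range(n))]
        for choice in product(subsets, repeat=len(preds)):
            extension = dict(zip(preds, choice))
            if all(holds(s, extension) for s in sentences):
                return True, extension
    return False, None

# "All p are q", "Some p is r", "All q are not r" has no model:
print(finsat([("all+", "p", "q"), ("some+", "p", "r"), ("all-", "q", "r")]))   # (False, None)
```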
Arity.
For the first-order logic fragment where the only variable names are formula_11, deciding formula_3 is NEXPTIME-complete (Theorem 3.18). With variable names formula_12, it is RE-complete to decide formula_3 and co-RE-complete to decide formula_13 (Theorem 3.15), thus both problems are undecidable.
The monadic predicate calculus is the fragment where each formula contains only 1-ary predicates and no function symbols. Its formula_3 is NEXPTIME-complete (Theorem 3.22).
Quantifier prefix.
Any first-order formula has a prenex normal form. For each possible quantifier prefix to the prenex normal form, we have a fragment of first-order logic. For example, the Bernays–Schönfinkel class, formula_14, is the class of first-order formulas with quantifier prefix formula_15, equality symbols, and no function symbols.
For example, Turing's 1936 paper (p. 263) observed that since the halting problem for each Turing machine is equivalent to a first-order logical formula of form formula_16, the problem formula_17 is undecidable.
The precise boundaries are known, sharply:
Börger et al. (2001) describes the level of computational complexity for every possible fragment with every possible combination of quantifier prefix, functional arity, predicate arity, and equality/no-equality.
Practical decision procedures.
Having practical decision procedures for classes of logical formulas is of considerable interest for program verification and circuit verification. Pure Boolean logical formulas are usually decided using SAT-solving techniques based on the DPLL algorithm.
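A hedged, minimal sketch of the DPLL idea (an editorial illustration, not a production SAT solver; pure-literal elimination and branching heuristics are omitted):
```python
def simplify(clauses, lit):
    """Set literal `lit` (DIMACS-style: v or -v) to true and simplify the CNF."""
    out = []
    for clause in clauses:
        if lit in clause:
            continue                                  # clause already satisfied
        out.append([x for x in clause if x != -lit])  # drop the falsified literal
    return out

def dpll(clauses, assignment=None):
    """Return a satisfying (possibly partial) assignment {var: bool}, or None if unsatisfiable."""
    assignment = dict(assignment or {})
    # Unit propagation.
    while True:
        if [] in clauses:                             # empty clause: conflict under current choices
            return None
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        for lit in units:
            assignment[abs(lit)] = lit > 0
            clauses = simplify(clauses, lit)
    if not clauses:
        return assignment
    # Branch on the first literal of the first remaining clause.
    lit = clauses[0][0]
    for choice in (lit, -lit):
        result = dpll(simplify(clauses, choice), {**assignment, abs(choice): choice > 0})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3):
print(dpll([[1, 2], [-1, 3], [-2, -3]]))   # e.g. {1: True, 3: True, 2: False}
```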
For more general decision problems of first-order theories, conjunctive formulas over linear real or rational arithmetic can be decided using the simplex algorithm, formulas in linear integer arithmetic (Presburger arithmetic) can be decided using Cooper's algorithm or William Pugh's Omega test. Formulas with negations, conjunctions and disjunctions combine the difficulties of satisfiability testing with that of decision of conjunctions; they are generally decided nowadays using SMT-solving techniques, which combine SAT-solving with decision procedures for conjunctions and propagation techniques. Real polynomial arithmetic, also known as the theory of real closed fields, is decidable; this is the Tarski–Seidenberg theorem, which has been implemented in computers by using the cylindrical algebraic decomposition.
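For instance, a linear integer arithmetic (Presburger) satisfiability query can be handed to an off-the-shelf SMT solver. The sketch below assumes the Python bindings of the Z3 solver (the "z3-solver" package), which is only one of many tools implementing such decision procedures:
```python
from z3 import Ints, Solver, sat   # assumes the z3-solver package is installed

x, y = Ints("x y")
solver = Solver()
solver.add(3 * x + 2 * y == 7, x >= 0, y >= 0)   # a linear integer constraint system

if solver.check() == sat:
    print(solver.model())   # e.g. [x = 1, y = 2]
```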
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rm{Sat} (\\Phi) "
},
{
"math_id": 1,
"text": "\\Phi "
},
{
"math_id": 2,
"text": "\\rm{FinSat} (\\Phi) "
},
{
"math_id": 3,
"text": "\\rm{Sat} "
},
{
"math_id": 4,
"text": "\\forall x, p(x) \\to \\pm q(x), \\quad \\exists x, p(x) \\wedge \\pm q(x) "
},
{
"math_id": 5,
"text": "p, q "
},
{
"math_id": 6,
"text": "+q := q, \\; -q := \\neg q "
},
{
"math_id": 7,
"text": "\\forall x, \\pm p(x) \\to \\pm q(x), \\quad \\exists x, \\pm p(x) \\wedge \\pm q(x) "
},
{
"math_id": 8,
"text": "\\forall x, \\rm{body}(x), \\exists y, \\rm{body}(y) \\wedge \\rm{love}(x, y) "
},
{
"math_id": 9,
"text": "\\begin{aligned}\n\\forall x, p(x) \\to (\\forall y, q(x) \\to \\pm r(x, y)), &\\quad \\forall x, p(x) \\to (\\exists y, q(x) \\wedge \\pm r(x, y)) \\\\\n\\exists x, p(x) \\wedge (\\forall y, q(x) \\to \\pm r(x, y)), &\\quad \\exists x, p(x) \\wedge (\\exists y, q(x) \\wedge \\pm r(x, y)) \n\\end{aligned} "
},
{
"math_id": 10,
"text": "\\pm p, \\pm q "
},
{
"math_id": 11,
"text": "x, y "
},
{
"math_id": 12,
"text": "x, y, z "
},
{
"math_id": 13,
"text": "\\rm{FinSat} "
},
{
"math_id": 14,
"text": "[\\exists^*\\forall^*]_= "
},
{
"math_id": 15,
"text": "\\exists\\cdots\\exists\\forall\\cdots \\forall "
},
{
"math_id": 16,
"text": "\\forall \\exists \\forall \\exists^6 "
},
{
"math_id": 17,
"text": "\\rm{Sat}(\\forall \\exists \\forall \\exists^6) "
},
{
"math_id": 18,
"text": "\\rm{Sat}(\\forall \\exists \\forall) "
},
{
"math_id": 19,
"text": "\\rm{Sat}([\\forall \\exists \\forall]_{=} ) "
},
{
"math_id": 20,
"text": "\\forall^3 \\exists "
},
{
"math_id": 21,
"text": "\\exists^* \\forall^2 \\exists^* "
},
{
"math_id": 22,
"text": "[\\forall^2 \\exists]_= "
},
{
"math_id": 23,
"text": "n \\geq 0 "
},
{
"math_id": 24,
"text": "\\rm{Sat}(\\exists^n \\forall^*) "
},
{
"math_id": 25,
"text": "\\rm{Sat}([\\exists^n \\forall^*]_=) "
},
{
"math_id": 26,
"text": "\\rm{Sat}( [\\exists^*\\forall^*]_= ) "
},
{
"math_id": 27,
"text": "n \\geq 0, m \\geq 2 "
},
{
"math_id": 28,
"text": "\\rm{Sat}(\\exists^n \\forall \\exists^m ) "
},
{
"math_id": 29,
"text": "n \\geq 0 "
},
{
"math_id": 30,
"text": "\\rm{Sat}([\\exists^n \\forall \\exists^*]_=) "
},
{
"math_id": 31,
"text": "\\rm{Sat}(\\exists^*\\forall^*\\exists^*) "
},
{
"math_id": 32,
"text": "\\rm{Sat}(\\exists^n \\forall \\exists) "
},
{
"math_id": 33,
"text": "\\rm{Sat}([\\exists^n \\forall \\exists]_=) "
}
] | https://en.wikipedia.org/wiki?curid=9672 |
9672986 | Myosin ATPase | Class of enzymes
Myosin ATPase (EC 3.6.4.1) is an enzyme with systematic name "ATP phosphohydrolase (actin-translocating)". This enzyme catalyses the following chemical reaction:
ATP + H2O formula_0 ADP + phosphate
ATP hydrolysis provides energy for actomyosin contraction.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=9672986 |
967353 | R. H. Bing | American mathematician
R. H. Bing (October 20, 1914 – April 28, 1986) was an American mathematician who worked mainly in the areas of geometric topology and continuum theory. His father was named Rupert Henry, but Bing's mother thought that "Rupert Henry" was too British for Texas. She compromised by abbreviating it to R. H. Consequently, R. H. does not stand for a first or middle name.
Mathematical contributions.
Bing's mathematical research was almost exclusively in 3-manifold theory and in particular, the geometric topology of formula_0. The term Bing-type topology was coined to describe the style of methods used by Bing.
Bing established his reputation early on in 1946, soon after completing his Ph.D. dissertation, by solving the Kline sphere characterization problem. In 1948 he proved that the pseudo-arc is homogeneous, contradicting a published but erroneous 'proof' to the contrary.
In 1951, he proved results regarding the metrizability of topological spaces, including what would later be called the Bing–Nagata–Smirnov metrization theorem.
In 1952, Bing showed that the double of a solid Alexander horned sphere was the 3-sphere. This showed the existence of an involution on the 3-sphere with fixed point set equal to a wildly embedded 2-sphere, which meant that the original Smith conjecture needed to be phrased in a suitable category. This result also jump-started research into crumpled cubes. The proof involved a method later developed by Bing and others into a set of techniques called Bing shrinking. Proofs of the generalized Schoenflies conjecture and the double suspension theorem relied on Bing-type shrinking.
Bing was fascinated by the Poincaré conjecture and made several major attacks which ended unsuccessfully, contributing to the reputation of the conjecture as a very difficult one. He did show that a simply connected, closed 3-manifold with the property that every loop was contained in a 3-ball is homeomorphic to the 3-sphere. Bing was responsible for initiating research into the Property P conjecture, as well as its name, as a potentially more tractable version of the Poincaré conjecture. It was proven in 2004 as a culmination of work from several areas of mathematics. With some irony, this proof was announced some time after Grigori Perelman announced his proof of the Poincaré conjecture.
The side-approximation theorem was considered by Bing to be one of his key discoveries. It has many applications, including a simplified proof of Moise's theorem, which states that every 3-manifold can be triangulated in an essentially unique way.
Notable examples.
The house with two rooms.
The "house with two rooms" is a contractible 2-complex that is not collapsible. Another such example, popularized by E.C. Zeeman, is the "dunce hat".
The house with two rooms can also be thickened and then triangulated to be unshellable, despite the thickened house topologically being a 3-ball. The house with two rooms shows up in various ways in topology. For example, it is used in the proof that every compact 3-manifold has a standard spine.
Dogbone space.
The "dogbone space" is the quotient space obtained from a cellular decomposition of formula_0 into points and polygonal arcs. The quotient space, formula_1, is not a manifold, but formula_2 is homeomorphic to formula_3.
Service and educational contributions.
Bing was a visiting scholar at the Institute for Advanced Study in 1957–58 and again in 1962–63.
Bing served as president of the MAA (1963–1964), president of the AMS (1977–78), and was department chair at University of Wisconsin, Madison (1958–1960), and at University of Texas at Austin (1975–1977).
Before entering graduate school to study mathematics, Bing graduated from Southwest Texas State Teacher's College (known today as Texas State University), and was a high-school teacher for several years. His interest in education would persist for the rest of his life.
What does R. H. stand for?
As mentioned in the introduction, Bing's father was named Rupert Henry, but Bing's mother thought that "Rupert Henry" was too British for Texas. Thus she compromised by abbreviating it to R. H.
It is said that once, when Bing was applying for a visa, he was asked not to use initials. He explained that his name was really "R-only H-only Bing", and ended up receiving a visa made out to "Ronly Honly Bing".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb R^3"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "B \\times \\mathbb R"
},
{
"math_id": 3,
"text": "\\mathbb R^4"
}
] | https://en.wikipedia.org/wiki?curid=967353 |
9674107 | Pharmacokinetics | Branch of pharmacology
Pharmacokinetics (from Ancient Greek "pharmakon" "drug" and "kinetikos" "moving, putting in motion"; see chemical kinetics), sometimes abbreviated as PK, is a branch of pharmacology dedicated to describing how the body affects a specific substance after administration. The substances of interest include any chemical xenobiotic such as pharmaceutical drugs, pesticides, food additives, cosmetics, etc. It attempts to analyze chemical metabolism and to discover the fate of a chemical from the moment that it is administered up to the point at which it is completely eliminated from the body. Pharmacokinetics is based on mathematical modeling that places great emphasis on the relationship between drug plasma concentration and the time elapsed since the drug's administration. Pharmacokinetics is the study of how an organism affects the drug, whereas pharmacodynamics (PD) is the study of how the drug affects the organism. Both together influence dosing, benefit, and adverse effects, as seen in PK/PD models.
<templatestyles src="Template:Quote_box/styles.css" />
IUPAC definition
Pharmacokinetics:
ADME.
A number of phases occur once the drug enters into contact with the organism; these are described using the acronym ADME (or LADME if liberation is included as a separate step from absorption): liberation (the release of the active ingredient from its pharmaceutical formulation), absorption (the entry of the substance into the blood circulation), distribution (its dispersion throughout the fluids and tissues of the body), metabolism (the transformation of the parent compound into metabolites), and excretion (the removal of the substance from the body).
Some textbooks combine the first two phases as the drug is often administered in an active form, which means that there is no liberation phase. Others include a phase that combines distribution, metabolism and excretion into a disposition phase. Other authors include the drug's toxicological aspect in what is known as "ADME-Tox" or "ADMET". The two phases of metabolism and excretion can be grouped together under the title elimination.
The study of these distinct phases involves the use and manipulation of basic concepts in order to understand the process dynamics. For this reason, in order to fully comprehend the "kinetics" of a drug it is necessary to have detailed knowledge of a number of factors such as: the properties of the substances that act as excipients, the characteristics of the appropriate biological membranes and the way that substances can cross them, or the characteristics of the enzyme reactions that inactivate the drug.
Metrics.
The following are the most commonly measured pharmacokinetic metrics: The units of the dose in the table are expressed in moles (mol) and molar (M). To express the metrics of the table in units of mass, instead of Amount of substance, simply replace 'mol' with 'g' and 'M' with 'g/L'. Similarly, other units in the table may be expressed in units of an equivalent dimension by scaling.
In pharmacokinetics, "steady state" refers to the situation where the overall intake of a drug is fairly in dynamic equilibrium with its elimination. In practice, it is generally considered that once regular dosing of a drug is started, steady state is reached after 3 to 5 times its half-life. In steady state and in linear pharmacokinetics, AUCτ=AUC∞.
Modeling.
Models have been developed to simplify conceptualization of the many processes that take place in the interaction between an organism and a chemical substance. Pharmacokinetic modelling may be performed either by noncompartmental or compartmental methods. Multi-compartment models provide the best approximations to reality; however, the complexity involved in adding parameters with that modelling approach means that "monocompartmental models" and above all "two compartmental models" are the most-frequently used. The model outputs for a drug can be used in industry (for example, in calculating bioequivalence when designing generic drugs) or in the clinical application of pharmacokinetic concepts. Clinical pharmacokinetics provides many performance guidelines for effective and efficient use of drugs for human-health professionals and in veterinary medicine.
Models generally take the form of mathematical formulas that have a corresponding graphical representation. The use of these models allows an understanding of the characteristics of a molecule, as well as how a particular drug will behave given information regarding some of its basic characteristics such as its acid dissociation constant (pKa), bioavailability and solubility, absorption capacity and distribution in the organism. A variety of analysis techniques may be used to develop models, such as nonlinear regression or curve stripping.
Noncompartmental analysis.
Noncompartmental methods estimate PK parameters directly from a table of concentration-time measurements. Noncompartmental methods are versatile in that they do not assume any specific model and generally produce accurate results acceptable for bioequivalence studies. Total drug exposure is most often estimated by area under the curve (AUC) methods, with the trapezoidal rule (numerical integration) the most common method. Because the trapezoidal rule depends on the width of each time interval, the area estimate is highly dependent on the blood/plasma sampling schedule. That is, the closer the time points are, the more closely the trapezoids reflect the actual shape of the concentration-time curve. The number of time points available in order to perform a successful NCA analysis should be enough to cover the absorption, distribution and elimination phases to accurately characterize the drug. Beyond AUC exposure measures, parameters such as Cmax (maximum concentration), Tmax (time to maximum concentration), CL and Vd can also be reported using NCA methods.
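A minimal sketch of such a calculation (an editorial illustration; the sampling times and concentrations are hypothetical, and extrapolation of the AUC beyond the last sampling point is not shown):
```python
def nca_metrics(times, concs):
    """Basic noncompartmental metrics from a concentration-time table."""
    # AUC over the sampled interval by the linear trapezoidal rule.
    auc = sum((t1 - t0) * (c0 + c1) / 2.0
              for (t0, c0), (t1, c1) in zip(zip(times, concs), zip(times[1:], concs[1:])))
    cmax = max(concs)
    tmax = times[concs.index(cmax)]
    return {"AUC_0-last": auc, "Cmax": cmax, "Tmax": tmax}

# Hypothetical sampling schedule after a single oral dose (times in h, concentrations in mg/L):
times = [0, 0.5, 1, 2, 4, 8, 12, 24]
concs = [0.0, 1.8, 2.9, 2.4, 1.5, 0.6, 0.25, 0.04]
print(nca_metrics(times, concs))
```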
Compartmental analysis.
Compartmental methods estimate the concentration-time graph by modeling it as a system of differential equations. These models are based on a consideration of an organism as a number of related "compartments". Both single compartment and multi-compartment models are in use. PK compartmental models are often similar to kinetic models used in other scientific disciplines such as chemical kinetics and thermodynamics. The advantage of compartmental over noncompartmental analysis is the ability to modify parameters and to extrapolate to novel situations. The disadvantage is the difficulty in developing and validating the proper model. Although compartment models have the potential to realistically model the situation within an organism, models inevitably make simplifying assumptions and will not be applicable in all situations. However complicated and precise a model may be, it still does not truly represent reality despite the effort involved in obtaining various distribution values for a drug. This is because the concept of distribution volume is a relative concept that is not a true reflection of reality. The choice of model therefore comes down to deciding which one offers the lowest margin of error for the drug involved.
Single-compartment model.
The simplest PK compartmental model is the one-compartmental PK model. This models an organism as one homogenous compartment. This "monocompartmental model" presupposes that blood plasma concentrations of the drug are the only information needed to determine the drug's concentration in other fluids and tissues. For example, the concentration in other areas may be approximately related by known, constant factors to the blood plasma concentration.
In this one-compartment model, the most common model of elimination is first order kinetics, where the elimination of the drug is directly proportional to the drug's concentration in the organism. This is often called "linear pharmacokinetics", as the change in concentration over time can be expressed as a linear differential equation formula_1. Assuming a single IV bolus dose resulting in a concentration formula_2 at time formula_3, the equation can be solved to give formula_4.
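A brief numerical sketch of this solution (an editorial illustration; the initial concentration and half-life are hypothetical values):
```python
import math

def concentration(c0, k_el, t):
    """One-compartment model with first-order elimination after an IV bolus: C(t) = C0 * exp(-k_el * t)."""
    return c0 * math.exp(-k_el * t)

# Hypothetical drug: C0 = 10 mg/L and an elimination half-life of 6 h, so k_el = ln(2)/6 per hour.
k_el = math.log(2) / 6.0
for t in (0, 6, 12, 24):
    print(t, concentration(10.0, k_el, t))   # the concentration halves every 6 hours
```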
Two-compartment model.
Not all body tissues have the same blood supply, so the distribution of the drug will be slower in these tissues than in others with a better blood supply. In addition, there are some tissues (such as the brain tissue) that present a real barrier to the distribution of drugs, that can be breached with greater or lesser ease depending on the drug's characteristics. If these relative conditions for the different tissue types are considered along with the rate of elimination, the organism can be considered to be acting like two compartments: one that we can call the "central compartment" that has a more rapid distribution, comprising organs and systems with a well-developed blood supply; and a "peripheral compartment" made up of organs with a lower blood flow. Other tissues, such as the brain, can occupy a variable position depending on a drug's ability to cross the barrier that separates the organ from the blood supply.
Two-compartment models vary depending on which compartment elimination occurs in. The most common situation is that elimination occurs in the central compartment as the liver and kidneys are organs with a good blood supply. However, in some situations it may be that elimination occurs in the peripheral compartment or even in both. This can mean that there are three possible variations in the two compartment model, which still do not cover all possibilities.
Multi-compartment models.
In the real world, each tissue will have its own distribution characteristics and none of them will be strictly linear. The two-compartment model may not be applicable in situations where some of the enzymes responsible for metabolizing the drug become saturated, or where an active elimination mechanism is present that is independent of the drug's plasma concentration. If we label the drug's volume of distribution within the organism VdF and its volume of distribution in a tissue VdT the former will be described by an equation that takes into account all the tissues that act in different ways, that is:
formula_5
This represents the "multi-compartment model" with a number of curves that express complicated equations in order to obtain an overall curve. A number of computer programs have been developed to plot these equations. The most complex PK models (called PBPK models) rely on the use of physiological information to ease development and validation.
The graph for the non-linear relationship between the various factors is represented by a curve; the relationships between the factors can then be found by calculating the dimensions of different areas under the curve. The models used in "non-linear pharmacokinetics" are largely based on Michaelis–Menten kinetics. A reaction's factors of non-linearity include the following:
It can therefore be seen that non-linearity can occur because of reasons that affect the entire pharmacokinetic sequence: absorption, distribution, metabolism and elimination.
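As an editorial illustration of one such source of non-linearity, the sketch below integrates a saturable Michaelis–Menten elimination model numerically; the parameter values and the simple Euler scheme are arbitrary choices for the example:
```python
def simulate_mm_elimination(c0, vmax, km, dt=0.01, t_end=24.0):
    """Euler integration of dC/dt = -Vmax * C / (Km + C).
    When C >> Km elimination is roughly zero-order; when C << Km it is roughly first-order."""
    t, c, profile = 0.0, c0, [(0.0, c0)]
    while t < t_end:
        c = max(c - dt * vmax * c / (km + c), 0.0)
        t += dt
        profile.append((round(t, 2), c))
    return profile

# Hypothetical parameters: Vmax = 2 mg/L/h, Km = 1 mg/L, starting concentration 10 mg/L.
profile = simulate_mm_elimination(c0=10.0, vmax=2.0, km=1.0)
print(profile[::400][:5])   # a few (time, concentration) samples, 4 h apart
```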
Bioavailability.
At a practical level, a drug's bioavailability can be defined as the proportion of the drug that reaches its site of action. From this perspective the intravenous administration of a drug provides the greatest possible bioavailability, and this method is considered to yield a bioavailability of 1 (or 100%). Bioavailability of other delivery methods is compared with that of intravenous injection (absolute bioavailability) or to a standard value related to other delivery methods in a particular study (relative bioavailability).
formula_6
formula_7
Once a drug's bioavailability has been established it is possible to calculate the changes that need to be made to its dosage in order to reach the required blood plasma levels. Bioavailability is, therefore, a mathematical factor for each individual drug that influences the administered dose. It is possible to calculate the amount of a drug in the blood plasma that has a real potential to bring about its effect using the formula:
formula_8
where "De" is the effective dose, "B" bioavailability and "Da" the administered dose.
Therefore, if a drug has a bioavailability of 0.8 (or 80%) and it is administered in a dose of 100 mg, the equation will demonstrate the following:
"De" = 0.8 × 100 mg = 80 mg
That is the 100 mg administered represents a blood plasma concentration of 80 mg that has the capacity to have a pharmaceutical effect.
This concept depends on a series of factors inherent to each drug, such as:
These concepts, which are discussed in detail in their respective titled articles, can be mathematically quantified and integrated to obtain an overall mathematical equation:
formula_9
where Q is the drug's purity.
formula_10
where formula_11 is the rate at which the absorbed drug becomes available to the circulatory system (the drug's effective rate of administration) and formula_0 is the dosing interval.
Finally, using the Henderson-Hasselbalch equation, and knowing the drug's formula_12 (the pH at which its ionized and non-ionized forms are present in equal concentrations), it is possible to calculate the non-ionized concentration of the drug and therefore the concentration that will be subject to absorption:
formula_13
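A small editorial sketch of that calculation for a weak acid (the pKa and pH values are hypothetical; for a weak base the sign of the exponent is reversed):
```python
def nonionized_fraction_weak_acid(ph, pka):
    """Fraction of a weak acid in the non-ionized (more readily absorbed) form,
    from the Henderson-Hasselbalch relation pH = pKa + log10(ionized / non-ionized)."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

# Hypothetical weak acid with pKa 3.5:
print(nonionized_fraction_weak_acid(ph=1.5, pka=3.5))   # ~0.99 at gastric pH
print(nonionized_fraction_weak_acid(ph=7.4, pka=3.5))   # ~0.0001 at plasma pH
```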
When two drugs have the same bioavailability, they are said to be biological equivalents or bioequivalents. This concept of bioequivalence is important because it is currently used as a yardstick in the authorization of generic drugs in many countries.
Analysis.
Bioanalytical methods.
Bioanalytical methods are necessary to construct a concentration-time profile. Chemical techniques are employed to measure the concentration of drugs in biological matrix, most often plasma. Proper bioanalytical methods should be selective and sensitive. For example, microscale thermophoresis can be used to quantify how the biological matrix/liquid affects the affinity of a drug to its target.
Mass spectrometry.
Pharmacokinetics is often studied using mass spectrometry because of the complex nature of the matrix (often plasma or urine) and the need for high sensitivity to observe concentrations after a low dose and a long time period. The most common instrumentation used in this application is LC-MS with a triple quadrupole mass spectrometer. Tandem mass spectrometry is usually employed for added specificity. Standard curves and internal standards are used for quantitation of usually a single pharmaceutical in the samples. The samples represent different time points as a pharmaceutical is administered and then metabolized or cleared from the body. Blank samples taken before administration are important in determining background and ensuring data integrity with such complex sample matrices. Much attention is paid to the linearity of the standard curve; however it is common to use curve fitting with more complex functions such as quadratics since the response of most mass spectrometers is not linear across large concentration ranges.
There is currently considerable interest in the use of very high sensitivity mass spectrometry for microdosing studies, which are seen as a promising alternative to animal experimentation. Recent studies show that Secondary electrospray ionization (SESI-MS) can be used in drug monitoring, presenting the advantage of avoiding animal sacrifice.
Population pharmacokinetics.
"Population pharmacokinetics" is the study of the sources and correlates of variability in drug concentrations among individuals who are the target patient population receiving clinically relevant doses of a drug of interest. Certain patient demographic, pathophysiological, and therapeutical features, such as body weight, excretory and metabolic functions, and the presence of other therapies, can regularly alter dose-concentration relationships and can explain variability in exposures. For example, steady-state concentrations of drugs eliminated mostly by the kidney are usually greater in patients with kidney failure than they are in patients with normal kidney function receiving the same drug dosage. Population pharmacokinetics seeks to identify the measurable pathophysiologic factors and explain sources of variability that cause changes in the dose-concentration relationship and the extent of these changes so that, if such changes are associated with clinically relevant and significant shifts in exposures that impact the therapeutic index, dosage can be appropriately modified.
An advantage of population pharmacokinetic modelling is its ability to analyse sparse data sets (sometimes only one concentration measurement per patient is available).
Clinical pharmacokinetics.
Clinical pharmacokinetics (arising from the clinical use of population pharmacokinetics) is the direct application to a therapeutic situation of knowledge regarding a drug's pharmacokinetics and the characteristics of a population that a patient belongs to (or can be ascribed to).
An example is the relaunch of the use of ciclosporin as an immunosuppressor to facilitate organ transplant. The drug's therapeutic properties were initially demonstrated, but it was almost never used after it was found to cause nephrotoxicity in a number of patients. However, it was then realized that it was possible to individualize a patient's dose of ciclosporin by analysing the patient's plasma concentrations (pharmacokinetic monitoring). This practice has allowed this drug to be used again and has facilitated a great number of organ transplants.
Clinical monitoring is usually carried out by determination of plasma concentrations as this data is usually the easiest to obtain and the most reliable. The main reasons for determining a drug's plasma concentration include:
Ecotoxicology.
Ecotoxicology is the branch of science that deals with the nature, effects, and interactions of substances that are harmful to the environment, such as microplastics and other harmful substances in the biosphere. Ecotoxicology is studied in pharmacokinetics because the substances responsible for harming the environment, such as pesticides, can enter the bodies of living organisms. The health effects of these chemicals are thus subject to research and safety trials by government or international agencies such as the EPA or WHO. How long these chemicals stay in the body, their lethal dose and other factors are the main focus of ecotoxicology.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
External links.
Software.
All model based software above.
Educational centres.
Global centres with the highest profiles for providing in-depth training include the Universities of Buffalo, Florida, Gothenburg, Leiden, Otago, San Francisco, Beijing, Tokyo, Uppsala, Washington, Manchester, Monash University, and University of Sheffield.
{
"math_id": 0,
"text": "\\tau"
},
{
"math_id": 1,
"text": "\\frac{dC}{dt} = -k_\\text{el} C"
},
{
"math_id": 2,
"text": "C_\\text{initial}"
},
{
"math_id": 3,
"text": "t=0"
},
{
"math_id": 4,
"text": "C=C_\\text{initial} \\times e^{-k_\\text{el} \\times t}"
},
{
"math_id": 5,
"text": "Vd_F = Vd_{T1} + Vd_{T2} + Vd_{T3} + \\cdots + Vd_{Tn}\\,"
},
{
"math_id": 6,
"text": "B_A = \\frac{[ABC]_P \\cdot D_{IV}}{[ABC]_{IV} \\cdot D_P}"
},
{
"math_id": 7,
"text": "\\mathit B_R = \\frac{[ABC]_A \\cdot \\text{dose}_B}{[ABC]_B \\cdot \\text{dose}_A}"
},
{
"math_id": 8,
"text": "De = B \\cdot Da\\,"
},
{
"math_id": 9,
"text": "De = Q\\cdot Da\\cdot B\\,"
},
{
"math_id": 10,
"text": "Va = \\frac{Da \\cdot B \\cdot Q} \\tau "
},
{
"math_id": 11,
"text": "Va"
},
{
"math_id": 12,
"text": "pKa\\,"
},
{
"math_id": 13,
"text": "\\mathrm{pH} = \\mathrm{pKa} + \\log \\frac B A "
}
] | https://en.wikipedia.org/wiki?curid=9674107 |
967440 | Riesz–Thorin theorem | Theorem on operator interpolation
In mathematics, the Riesz–Thorin theorem, often referred to as the Riesz–Thorin interpolation theorem or the Riesz–Thorin convexity theorem, is a result about "interpolation of operators". It is named after Marcel Riesz and his student G. Olof Thorin.
This theorem bounds the norms of linear maps acting between "Lp" spaces. Its usefulness stems from the fact that some of these spaces have rather simpler structure than others. Usually that refers to "L"2 which is a Hilbert space, or to "L"1 and "L"∞. Therefore one may prove theorems about the more complicated cases by proving them in two simple cases and then using the Riesz–Thorin theorem to pass from the simple cases to the complicated cases. The Marcinkiewicz theorem is similar but applies also to a class of non-linear maps.
Motivation.
First we need the following definition:
Definition. Let "p"0, "p"1 be two numbers such that 0 < "p"0 < "p"1 ≤ ∞. Then for 0 < "θ" < 1 define "pθ" by: = +.
By splitting up the function "f" in "Lpθ" as the product | "f" | = | "f" |1−"θ" | "f" |"θ" and applying Hölder's inequality to its "pθ" power, we obtain the following result, foundational in the study of "Lp"-spaces:
<templatestyles src="Math_theorem/styles.css" />
Proposition (log-convexity of "Lp"-norms) — Each "f" ∈ "L""p"0 ∩ "L""p"1 satisfies the interpolation inequality || "f" ||"pθ" ≤ (|| "f" ||"p"0)1−"θ" (|| "f" ||"p"1)"θ".
This result, whose name derives from the convexity of the map 1/"p" ↦ log || "f" ||1/"p" on [0, ∞], implies that "L""p"0 ∩ "L""p"1 ⊂ "Lpθ".
On the other hand, if we take the "layer-cake decomposition" "f" = "f" 1{| "f" | > 1} + "f" 1{| "f" | ≤ 1}, then we see that "f" 1{| "f" | > 1} ∈ "L""p"0 and "f" 1{| "f" | ≤ 1} ∈ "L""p"1, whence we obtain the following result:
<templatestyles src="Math_theorem/styles.css" />
Proposition — Each "f" in "Lpθ" can be written as a sum: "f" = "g" + "h", where "g" ∈ "L""p"0 and "h" ∈ "L""p"1.
In particular, the above result implies that "Lpθ" is included in "L""p"0 + "L""p"1, the sumset of "L""p"0 and "L""p"1 in the space of all measurable functions. Therefore, we have the following chain of inclusions:
<templatestyles src="Math_theorem/styles.css" />
Corollary — "L""p"0 ∩ "L""p"1 ⊂ "Lpθ" ⊂ "L""p"0 + "L""p"1.
In practice, we often encounter operators defined on the sumset "L""p"0 + "L""p"1. For example, the Riemann–Lebesgue lemma shows that the Fourier transform maps "L"1(R"d") boundedly into "L"∞(R"d"), and Plancherel's theorem shows that the Fourier transform maps "L"2(R"d") boundedly into itself, hence the Fourier transform formula_0 extends to ("L"1 + "L"2) (R"d") by setting
formula_1
for all "f"1 ∈ "L"1(R"d") and "f"2 ∈ "L"2(R"d"). It is therefore natural to investigate the behavior of such operators on the "intermediate subspaces" "Lpθ".
To this end, we go back to our example and note that the Fourier transform on the sumset "L"1 + "L"2 was obtained by taking the sum of two instantiations of the same operator, namely
formula_2
formula_3
These really are the "same" operator, in the sense that they agree on the subspace ("L"1 ∩ "L"2) (R"d"). Since the intersection contains simple functions, it is dense in both "L"1(R"d") and "L"2(R"d"). Densely defined continuous operators admit unique extensions, and so we are justified in considering formula_4 and formula_5 to be "the same".
Therefore, the problem of studying operators on the sumset "L""p"0 + "L""p"1 essentially reduces to the study of operators that map two natural domain spaces, "L""p"0 and "L""p"1, boundedly to two target spaces: "L""q"0 and "L""q"1, respectively. Since such operators map the sumset space "L""p"0 + "L""p"1 to "L""q"0 + "L""q"1, it is natural to expect that these operators map the intermediate space "Lpθ" to the corresponding intermediate space "Lqθ".
Statement of the theorem.
There are several ways to state the Riesz–Thorin interpolation theorem; to be consistent with the notations in the previous section, we shall use the sumset formulation.
<templatestyles src="Math_theorem/styles.css" />
Riesz–Thorin interpolation theorem — Let (Ω1, Σ1, "μ"1) and (Ω2, Σ2, "μ"2) be σ-finite measure spaces. Suppose 1 ≤ "p"0 , "q"0 , "p"1 , "q"1 ≤ ∞, and let "T" : "L""p"0("μ"1) + "L""p"1("μ"1) → "L""q"0("μ"2) + "L""q"1("μ"2) be a linear operator that boundedly maps "L""p"0("μ"1) into "L""q"0("μ"2) and "L""p"1("μ"1) into "L""q"1("μ"2). For 0 < "θ" < 1, let "pθ", "qθ" be defined as above. Then T boundedly maps "Lpθ"("μ"1) into "Lqθ"("μ"2) and satisfies the operator norm estimate || "T" ||"Lpθ"→"Lqθ" ≤ (|| "T" ||"L""p"0→"L""q"0)1−"θ" (|| "T" ||"L""p"1→"L""q"1)"θ", i.e., the interpolated norm is bounded by the geometric mean of the two endpoint norms.
In other words, if T is simultaneously of type ("p"0, "q"0) and of type ("p"1, "q"1), then T is of type ("pθ", "qθ") for all 0 < "θ" < 1. In this manner, the interpolation theorem lends itself to a pictorial description. Indeed, the Riesz diagram of T is the collection of all points (1/"p", 1/"q") in the unit square [0, 1] × [0, 1] such that T is of type ("p", "q"). The interpolation theorem states that the Riesz diagram of T is a convex set: given two points in the Riesz diagram, the line segment that connects them will also be in the diagram.
The interpolation theorem was originally stated and proved by Marcel Riesz in 1927. The 1927 paper establishes the theorem only for the "lower triangle" of the Riesz diagram, viz., with the restriction that "p"0 ≤ "q"0 and "p"1 ≤ "q"1. Olof Thorin extended the interpolation theorem to the entire square, removing the lower-triangle restriction. The proof of Thorin was originally published in 1938 and was subsequently expanded upon in his 1948 thesis.
Proof.
We will first prove the result for simple functions and eventually show how the argument can be extended by density to all measurable functions.
Simple Functions.
By symmetry, let us assume formula_6 (the case formula_7 trivially follows from (1)). Let formula_8 be a simple function, that is formula_9 for some finite formula_10, formula_11 and formula_12, formula_13. Similarly, let formula_14 denote a simple function formula_15, namely formula_16 for some finite formula_17, formula_18 and formula_19, formula_20.
Note that, since we are assuming formula_21 and formula_22 to be formula_23-finite measure spaces, formula_24 and formula_25 for all formula_26. Then, by proper normalization, we can assume formula_27 and formula_28, with formula_29 and with formula_30, formula_31 as defined by the theorem statement.
Next, we define the two complex functions formula_32 Note that, for formula_33, formula_34 and formula_35. We then extend formula_8 and formula_14 to depend on a complex parameter formula_36 as follows: formula_37 so that formula_38 and formula_39. Here, we are implicitly excluding the case formula_40, which yields formula_41: In that case, one can simply take formula_42, independently of formula_36, and the following argument will only require minor adaptations.
Let us now introduce the function formula_43 where formula_44 are constants independent of formula_36. We readily see that formula_45 is an entire function, bounded on the strip formula_46. Then, in order to prove (2), we only need to show that for all formula_47 and formula_48 as constructed above. Indeed, if (3) holds true, by Hadamard three-lines theorem, formula_49 for all formula_8 and formula_14. This means, by fixing formula_8, that formula_50 where the supremum is taken with respect to all formula_14 simple functions with formula_51. The left-hand side can be rewritten by means of the following lemma.
<templatestyles src="Math_theorem/styles.css" />
Lemma — Let formula_52 be conjugate exponents and let formula_8 be a function in formula_53. Then formula_54 where the supremum is taken over all simple functions formula_14 in formula_55 such that formula_56.
In our case, the lemma above implies formula_57 for every simple function formula_8 with formula_58. Equivalently, for a generic simple function, formula_59
Proof of (3).
Let us now prove that our claim (3) indeed holds. The sequence formula_60 consists of disjoint subsets in formula_61 and, thus, each formula_62 belongs to (at most) one of them, say formula_63. Then, for formula_64, formula_65 which implies that formula_66. With a parallel argument, each formula_67 belongs to (at most) one of the sets supporting formula_14, say formula_68, and formula_69
We can now bound formula_70: By applying Hölder’s inequality with conjugate exponents formula_71 and formula_72, we have formula_73
We can repeat the same process for formula_74 to obtain formula_75, formula_76 and, finally, formula_77
Extension to All Measurable Functions in "Lpθ".
So far, we have proven that (4) holds when formula_8 is a simple function. As already mentioned, the inequality holds true for all formula_78 by the density of simple functions in formula_79.
Formally, let formula_78 and let formula_80 be a sequence of simple functions such that formula_81, for all formula_82, and formula_83 pointwise. Let formula_84 and define formula_85, formula_86, formula_87 and formula_88. Note that, since we are assuming formula_89, formula_90 and, equivalently, formula_91 and formula_92.
Let us see what happens in the limit for formula_93. Since formula_94, formula_95 and formula_96, by the dominated convergence theorem one readily has formula_97 Similarly, formula_98, formula_99 and formula_100 imply formula_101 and, by the linearity of formula_102 as an operator of types formula_103 and formula_104 (we have not proven yet that it is of type formula_105 for a generic formula_8) formula_106
It is now easy to prove that formula_107 and formula_108 in measure: For any formula_109, Chebyshev’s inequality yields formula_110 and similarly for formula_111. Then, formula_107 and formula_108 a.e. for some subsequence and, in turn, formula_112 a.e. Then, by Fatou’s lemma and recalling that (4) holds true for simple functions, formula_113
Interpolation of analytic families of operators.
The proof outline presented in the above section readily generalizes to the case in which the operator T is allowed to vary analytically. In fact, an analogous proof can be carried out to establish a bound on the entire function
formula_114
from which we obtain the following theorem of Elias Stein, published in his 1956 thesis:
<templatestyles src="Math_theorem/styles.css" />
Stein interpolation theorem — Let (Ω1, Σ1, "μ"1) and (Ω2, Σ2, "μ"2) be σ-finite measure spaces. Suppose 1 ≤ "p"0 , "p"1 ≤ ∞, 1 ≤ "q"0 , "q"1 ≤ ∞, and define:
"S" = {"z" ∈ C : 0 < Re("z") < 1},
"S" = {"z" ∈ C : 0 ≤ Re("z") ≤ 1}.
We take a collection of linear operators {"Tz" : "z" ∈ "S"} from the space of simple functions in "L"1("μ"1) into the space of all "μ"2-measurable functions on Ω2. We assume the following further properties on this collection of linear operators:
Then, for each 0 < "θ" < 1, the operator "Tθ" maps "Lpθ"("μ"1) boundedly into "Lqθ"("μ"2).
The theory of real Hardy spaces and the space of bounded mean oscillations permits us to wield the Stein interpolation theorem argument in dealing with operators on the Hardy space "H"1(R"d") and the space BMO of bounded mean oscillations; this is a result of Charles Fefferman and Elias Stein.
Applications.
Hausdorff–Young inequality.
It has been shown in the first section that the Fourier transform formula_0 maps "L"1(R"d") boundedly into "L"∞(R"d") and "L"2(R"d") into itself. A similar argument shows that the Fourier series operator, which transforms periodic functions "f" : T → C into functions formula_118 whose values are the Fourier coefficients
formula_119
maps "L"1(T) boundedly into "ℓ"∞(Z) and "L"2(T) into "ℓ"2(Z). The Riesz–Thorin interpolation theorem now implies the following:
formula_120
where 1 ≤ "p" ≤ 2 and + = 1. This is the Hausdorff–Young inequality.
The Hausdorff–Young inequality can also be established for the Fourier transform on locally compact Abelian groups. The norm estimate of 1 is not optimal. See the main article for references.
Convolution operators.
Let "f" be a fixed integrable function and let T be the operator of convolution with "f" , i.e., for each function g we have "Tg" = "f" ∗ "g".
It is well known that T is bounded from "L"1 to "L"1 and it is trivial that it is bounded from "L"∞ to "L"∞ (both bounds are by || "f" ||1). Therefore the Riesz–Thorin theorem gives
formula_121
We take this inequality and switch the role of the operator and the operand, or in other words, we think of S as the operator of convolution with g, and get that S is bounded from "L"1 to "Lp". Further, since g is in "Lp" we get, in view of Hölder's inequality, that S is bounded from "Lq" to "L"∞, where again 1/"p" + 1/"q" = 1. So interpolating we get
formula_122
where the connection between "p", "r" and "s" is
formula_123
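One way to recover the exponent relation just referenced (an editorial worked computation, not taken from the original text): interpolate S between its two endpoint bounds "L"1 → "Lp" and "Lq" → "L"∞, with 1/"p" + 1/"q" = 1.
```latex
\frac{1}{r} = \frac{1-\theta}{1} + \frac{\theta}{q} = 1 - \frac{\theta}{p},
\qquad
\frac{1}{s} = \frac{1-\theta}{p} + \frac{\theta}{\infty} = \frac{1-\theta}{p};
% eliminating \theta from the two identities gives Young's relation
\frac{1}{p} + \frac{1}{r} = 1 + \frac{1}{s}.
```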
The Hilbert transform.
The Hilbert transform of "f" : R → C is given by
formula_124
where p.v. indicates the Cauchy principal value of the integral. The Hilbert transform is a Fourier multiplier operator with a particularly simple multiplier:
formula_125
It follows from the Plancherel theorem that the Hilbert transform maps "L"2(R) boundedly into itself.
Nevertheless, the Hilbert transform is not bounded on "L"1(R) or "L"∞(R), and so we cannot use the Riesz–Thorin interpolation theorem directly. To see why we do not have these endpoint bounds, it suffices to compute the Hilbert transform of the simple functions 1(−1,1)("x") and 1(0,1)("x") − 1(0,1)(−"x"). We can show, however, that
formula_126
for all Schwartz functions "f" : R → C, and this identity can be used in conjunction with the Cauchy–Schwarz inequality to show that the Hilbert transform maps "L"2"n"(R"d") boundedly into itself for all "n" ≥ 2. Interpolation now establishes the bound
formula_127
for all 2 ≤ "p" < ∞, and the self-adjointness of the Hilbert transform can be used to carry over these bounds to the 1 < "p" ≤ 2 case.
Comparison with the real interpolation method.
While the Riesz–Thorin interpolation theorem and its variants are powerful tools that yield a clean estimate on the interpolated operator norms, they suffer from numerous defects: some minor, some more severe. Note first that the complex-analytic nature of the proof of the Riesz–Thorin interpolation theorem forces the scalar field to be C. For extended-real-valued functions, this restriction can be bypassed by redefining the function to be finite everywhere—possible, as every integrable function must be finite almost everywhere. A more serious disadvantage is that, in practice, many operators, such as the Hardy–Littlewood maximal operator and the Calderón–Zygmund operators, do not have good endpoint estimates. In the case of the Hilbert transform in the previous section, we were able to bypass this problem by explicitly computing the norm estimates at several midway points. This is cumbersome and is often not possible in more general scenarios. Since many such operators satisfy the weak-type estimates
formula_128
real interpolation theorems such as the Marcinkiewicz interpolation theorem are better-suited for them. Furthermore, a good number of important operators, such as the Hardy-Littlewood maximal operator, are only sublinear. This is not a hindrance to applying real interpolation methods, but complex interpolation methods are ill-equipped to handle non-linear operators. On the other hand, real interpolation methods, compared to complex interpolation methods, tend to produce worse estimates on the intermediate operator norms and do not behave as well off the diagonal in the Riesz diagram. The off-diagonal versions of the Marcinkiewicz interpolation theorem require the formalism of Lorentz spaces and do not necessarily produce norm estimates on the "Lp"-spaces.
Mityagin's theorem.
B. Mityagin extended the Riesz–Thorin theorem; this extension is formulated here in the special case of spaces of sequences with unconditional bases (cf. below).
Assume:
formula_129
Then
formula_130
for any unconditional Banach space of sequences X, that is, for any formula_131 and any formula_132, formula_133.
The proof is based on the Krein–Milman theorem.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{F}"
},
{
"math_id": 1,
"text": "\\mathcal{F}(f_1+f_2) = \\mathcal{F}_{L^1}(f_1) + \\mathcal{F}_{L^2}(f_2)"
},
{
"math_id": 2,
"text": "\\mathcal{F}_{L^1}:L^1(\\mathbf{R}^d) \\to L^\\infty(\\mathbf{R}^d), "
},
{
"math_id": 3,
"text": "\\mathcal{F}_{L^2}:L^2(\\mathbf{R}^d) \\to L^2(\\mathbf{R}^d)."
},
{
"math_id": 4,
"text": "\\mathcal{F}_{L^1}"
},
{
"math_id": 5,
"text": "\\mathcal{F}_{L^2}"
},
{
"math_id": 6,
"text": "p_0 < p_1"
},
{
"math_id": 7,
"text": "p_0 = p_1"
},
{
"math_id": 8,
"text": "f"
},
{
"math_id": 9,
"text": "f = \\sum_{j=1}^m a_j \\mathbf{1}_{A_j}"
},
{
"math_id": 10,
"text": "m\\in\\mathbb{N}"
},
{
"math_id": 11,
"text": "a_j = \\left\\vert a_j\\right\\vert\\mathrm{e}^{\\mathrm{i}\\alpha_j} \\in \\mathbb{C}"
},
{
"math_id": 12,
"text": "A_j\\in\\Sigma_1"
},
{
"math_id": 13,
"text": "j=1,2,\\dots,m"
},
{
"math_id": 14,
"text": "g"
},
{
"math_id": 15,
"text": "\\Omega_2 \\to \\mathbb{C}"
},
{
"math_id": 16,
"text": "g = \\sum_{k=1}^n b_k \\mathbf{1}_{B_k}"
},
{
"math_id": 17,
"text": "n\\in\\mathbb{N}"
},
{
"math_id": 18,
"text": "b_k = \\left\\vert b_k\\right\\vert\\mathrm{e}^{\\mathrm{i}\\beta_k} \\in \\mathbb{C}"
},
{
"math_id": 19,
"text": "B_k\\in\\Sigma_2"
},
{
"math_id": 20,
"text": "k=1,2,\\dots,n"
},
{
"math_id": 21,
"text": "\\Omega_1"
},
{
"math_id": 22,
"text": "\\Omega_2"
},
{
"math_id": 23,
"text": "\\sigma"
},
{
"math_id": 24,
"text": "f\\in L^{r}(\\mu_1)"
},
{
"math_id": 25,
"text": "g\\in L^r(\\mu_2)"
},
{
"math_id": 26,
"text": "r \\in\n[1, \\infty]"
},
{
"math_id": 27,
"text": "\\lVert f\\rVert_{p_\\theta}=\n1"
},
{
"math_id": 28,
"text": "\\lVert g\\rVert_{q_\\theta'}=1"
},
{
"math_id": 29,
"text": "q_\\theta' = q_\\theta(q_\\theta-1)^{-1}"
},
{
"math_id": 30,
"text": "p_\\theta"
},
{
"math_id": 31,
"text": "q_\\theta"
},
{
"math_id": 32,
"text": "\\begin{aligned}\nu: \\mathbb{C}&\\to \\mathbb{C}& v: \\mathbb{C}&\\to \\mathbb{C}\\\\\n z &\\mapsto u(z)=\\frac{1-z}{p_0} + \\frac{z}{p_1} &\n z &\\mapsto v(z)=\\frac{1-z}{q_0} + \\frac{z}{q_1}.\\end{aligned}"
},
{
"math_id": 33,
"text": "z=\\theta"
},
{
"math_id": 34,
"text": "u(\\theta) = p_\\theta^{-1}"
},
{
"math_id": 35,
"text": "v(\\theta) = q_\\theta^{-1}"
},
{
"math_id": 36,
"text": "z"
},
{
"math_id": 37,
"text": "\\begin{aligned}\nf_z &= \\sum_{j=1}^m \\left\\vert a_j\\right\\vert^{\\frac{u(z)}{u(\\theta)}} \\mathrm{e}^{\\mathrm{i}\\alpha_j}\n\\mathbf{1}_{A_j} \\\\\ng_z &= \\sum_{k=1}^n \\left\\vert b_k\\right\\vert^{\\frac{1-v(z)}{1-v(\\theta)}} \\mathrm{e}^{\\mathrm{i}\n\\beta_k} \\mathbf{1}_{B_k}\\end{aligned}"
},
{
"math_id": 38,
"text": "f_\\theta = f"
},
{
"math_id": 39,
"text": "g_\\theta = g"
},
{
"math_id": 40,
"text": "q_0 = q_1 = 1"
},
{
"math_id": 41,
"text": "v\\equiv 1"
},
{
"math_id": 42,
"text": "g_z=g"
},
{
"math_id": 43,
"text": "\\Phi(z) = \\int_{\\Omega_2} (T f_z) g_z \\,\\mathrm{d}\\mu_2\n= \\sum_{j=1}^m \\sum_{k=1}^n \\left\\vert a_j\\right\\vert^{\\frac{u(z)}{u(\\theta)}}\n \\left\\vert b_k\\right\\vert^{\\frac{1-v(z)}{1-v(\\theta)}} \\gamma_{j,k}"
},
{
"math_id": 44,
"text": "\\gamma_{j,k} = \\mathrm{e}^{\\mathrm{i}(\\alpha_j + \\beta_k)} \\int_{\\Omega_2} (T \\mathbf{1}_{A_j})\n\\mathbf{1}_{B_k} \\,\\mathrm{d}\\mu_2"
},
{
"math_id": 45,
"text": "\\Phi(z)"
},
{
"math_id": 46,
"text": "0 \\le \\operatorname{\\mathbb{R}e}z \\le 1"
},
{
"math_id": 47,
"text": "f_z"
},
{
"math_id": 48,
"text": "g_z"
},
{
"math_id": 49,
"text": "\\left\\vert\\Phi(\\theta + \\mathrm{i}0)\\right\\vert = \\biggl\\vert\\int_{\\Omega_2} (Tf) g \\,\\mathrm{d}\\mu_2\\biggr\\vert \\le \\|T\\|_{L^{p_0} \\to L^{q_0}}^{1-\\theta}\n\\|T\\|_{L^{p_1} \\to L^{q_1}}^\\theta"
},
{
"math_id": 50,
"text": "\\sup_g \\biggl\\vert\\int_{\\Omega_2} (Tf) g \\,\\mathrm{d}\\mu_2\\biggr\\vert \\le \\|T\\|_{L^{p_0} \\to L^{q_0}}^{1-\\theta} \\|T\\|_{L^{p_1} \\to L^{q_1}}^\\theta"
},
{
"math_id": 51,
"text": "\\lVert g\\rVert_{q_\\theta'} = 1"
},
{
"math_id": 52,
"text": "1\\le p, p' \\le \\infty"
},
{
"math_id": 53,
"text": "L^p(\\mu_1)"
},
{
"math_id": 54,
"text": "\\lVert f\\rVert_p = \\sup \\biggl|\\int_{\\Omega_1} fg \\,\\mathrm{d}\\mu_1\\biggr|"
},
{
"math_id": 55,
"text": "L^{p'}(\\mu_1)"
},
{
"math_id": 56,
"text": "\\lVert g\\rVert_{p'} \\le 1"
},
{
"math_id": 57,
"text": "\\lVert Tf\\rVert_{q_\\theta} \\le \\|T\\|_{L^{p_0} \\to L^{q_0}}^{1-\\theta} \\|T\\|_{L^{p_1} \\to L^{q_1}}^\\theta"
},
{
"math_id": 58,
"text": "\\lVert f\\rVert_{p_\\theta} = 1"
},
{
"math_id": 59,
"text": "\\lVert Tf\\rVert_{q_\\theta} \\le \\|T\\|_{L^{p_0} \\to L^{q_0}}^{1-\\theta} \\|T\\|_{L^{p_1} \\to L^{q_1}}^\\theta \\lVert f\\rVert_{p_\\theta}."
},
{
"math_id": 60,
"text": "(A_j)_{j=1}^m"
},
{
"math_id": 61,
"text": "\\Sigma_1"
},
{
"math_id": 62,
"text": "\\xi\\in \\Omega_1"
},
{
"math_id": 63,
"text": "A_{\\hat{\\jmath}}"
},
{
"math_id": 64,
"text": "z=\\mathrm{i}y"
},
{
"math_id": 65,
"text": "\\begin{aligned}\n\\left\\vert f_{\\mathrm{i}y}(\\xi)\\right\\vert &= \\left\\vert a_{\\hat{\\jmath}}\\right\\vert^\\frac{u(\\mathrm{i}y)}{u(\\theta)} \\\\\n &= \\exp\\biggl(\\log\\left\\vert a_{\\hat{\\jmath}}\\right\\vert\\frac{p_\\theta}{p_0}\\biggr)\n \\exp\\biggl(-\\mathrm{i}y \\log\\left\\vert a_{\\hat{\\jmath}}\\right\\vert p_\\theta\\biggl(\\frac{1}{p_0}\n - \\frac{1}{p_1} \\biggr) \\biggr) \\\\\n &= \\left\\vert a_{\\hat{\\jmath}}\\right\\vert^{\\frac{p_\\theta}{p_0}} \\\\\n & = \\left\\vert f(\\xi)\\right\\vert^{\\frac{p_\\theta}{p_0}}\\end{aligned}"
},
{
"math_id": 66,
"text": "\\lVert f_{\\mathrm{i}y}\\rVert_{p_0} \\le\n\\lVert f\\rVert_{p_\\theta}^{\\frac{p_\\theta}{p_0}}"
},
{
"math_id": 67,
"text": "\\zeta \\in \\Omega_2"
},
{
"math_id": 68,
"text": "B_{\\hat{k}}"
},
{
"math_id": 69,
"text": "\\left\\vert g_{\\mathrm{i}y}(\\zeta)\\right\\vert = \\left\\vert b_{\\hat{k}}\\right\\vert^{\\frac{1-1/q_0}{1-1/q_\\theta}}\n= \\left\\vert g(\\zeta)\\right\\vert^{\\frac{1-1/q_0}{1-1/q_\\theta}}\n= \\left\\vert g(\\zeta)\\right\\vert^{\\frac{q_\\theta'}{q_0'}}\n\\implies \\lVert g_{\\mathrm{i}y}\\rVert_{q_0'} \\le\n\\lVert g\\rVert_{q_\\theta'}^{\\frac{q_\\theta'}{q_0'}}."
},
{
"math_id": 70,
"text": "\\Phi(\\mathrm{i}y)"
},
{
"math_id": 71,
"text": "q_0"
},
{
"math_id": 72,
"text": "q_0'"
},
{
"math_id": 73,
"text": "\\begin{aligned}\n\\left\\vert\\Phi(\\mathrm{i}y)\\right\\vert &\\le \\lVert T f_{\\mathrm{i}y}\\rVert_{q_0} \\lVert g_{\\mathrm{i}y}\\rVert_{q_0'} \\\\\n &\\le \\|T\\|_{L^{p_0} \\to L^{q_0}} \\lVert f_{\\mathrm{i}y}\\rVert_{p_0} \\lVert g_{\\mathrm{i}y}\\rVert_{q_0'} \\\\\n &= \\|T\\|_{L^{p_0} \\to L^{q_0}} \\lVert f\\rVert_{p_\\theta}^{\\frac{p_\\theta}{p_0}}\n \\lVert g\\rVert_{q_\\theta'}^{\\frac{q_\\theta'}{q_0'}} \\\\\n &= \\|T\\|_{L^{p_0} \\to L^{q_0}}.\\end{aligned}"
},
{
"math_id": 74,
"text": "z=1+\\mathrm{i}y"
},
{
"math_id": 75,
"text": "\\left\\vert f_{1+\\mathrm{i}\ny}(\\xi)\\right\\vert = \\left\\vert f(\\xi)\\right\\vert^{p_\\theta/p_1}"
},
{
"math_id": 76,
"text": "\\left\\vert g_{1+\\mathrm{i}y}(\\zeta)\\right\\vert =\n\\left\\vert g(\\zeta)\\right\\vert^{q_\\theta'/q_1'}"
},
{
"math_id": 77,
"text": "\\left\\vert\\Phi(1+\\mathrm{i}y)\\right\\vert \\le \\|T\\|_{L^{p_1} \\to L^{q_1}} \\lVert f_{1+\\mathrm{i}y}\\rVert_{p_1} \\lVert g_{1+\\mathrm{i}y}\\rVert_{q_1'} =\n\\|T\\|_{L^{p_1} \\to L^{q_1}}."
},
{
"math_id": 78,
"text": "f\\in L^{p_\\theta}(\\Omega_1)"
},
{
"math_id": 79,
"text": "L^{p_\\theta}(\\Omega_1)"
},
{
"math_id": 80,
"text": "(f_n)_n"
},
{
"math_id": 81,
"text": "\\left\\vert f_n\\right\\vert \\le \\left\\vert f\\right\\vert"
},
{
"math_id": 82,
"text": "n"
},
{
"math_id": 83,
"text": "f_n \\to f"
},
{
"math_id": 84,
"text": "E=\\{x\\in \\Omega_1: \\left\\vert f(x)\\right\\vert > 1\\}"
},
{
"math_id": 85,
"text": "g = f \\mathbf{1}_E"
},
{
"math_id": 86,
"text": "g_n\n= f_n \\mathbf{1}_E"
},
{
"math_id": 87,
"text": "h = f - g = f \\mathbf{1}_{E^\\mathrm{c}}"
},
{
"math_id": 88,
"text": "h_n = f_n - g_n"
},
{
"math_id": 89,
"text": "p_0 \\le p_\\theta \\le p_1"
},
{
"math_id": 90,
"text": "\\begin{aligned}\n\\lVert f\\rVert_{p_\\theta}^{p_\\theta} &= \\int_{\\Omega_1} \\left\\vert f\\right\\vert^{p_\\theta} \\,\\mathrm{d}\\mu_1 \n\\ge \\int_{\\Omega_1} \\left\\vert f\\right\\vert^{p_\\theta} \\mathbf{1}_{E} \\,\\mathrm{d}\\mu_1 \n\\ge \\int_{\\Omega_1} \\left\\vert f \\mathbf{1}_{E}\\right\\vert^{p_0} \\,\\mathrm{d}\\mu_1 \n= \\int_{\\Omega_1} \\left\\vert g\\right\\vert^{p_0} \\,\\mathrm{d}\\mu_1 = \\lVert g\\rVert_{p_0}^{p_0} \\\\\n\\lVert f\\rVert_{p_\\theta}^{p_\\theta} &= \\int_{\\Omega_1} \\left\\vert f\\right\\vert^{p_\\theta} \\,\\mathrm{d}\\mu_1 \n\\ge \\int_{\\Omega_1} \\left\\vert f\\right\\vert^{p_\\theta} \\mathbf{1}_{E^\\mathrm{c}} \\,\\mathrm{d}\\mu_1 \n\\ge \\int_{\\Omega_1} \\left\\vert f \\mathbf{1}_{E^\\mathrm{c}}\\right\\vert^{p_1} \\,\\mathrm{d}\\mu_1 \n= \\int_{\\Omega_1} \\left\\vert h\\right\\vert^{p_1} \\,\\mathrm{d}\\mu_1 = \\lVert h\\rVert_{p_1}^{p_1}\\end{aligned}"
},
{
"math_id": 91,
"text": "g\\in L^{p_0}(\\Omega_1)"
},
{
"math_id": 92,
"text": "h\\in L^{p_1}(\\Omega_1)"
},
{
"math_id": 93,
"text": "n\\to\\infty"
},
{
"math_id": 94,
"text": "\\left\\vert f_n\\right\\vert \\le\n\\left\\vert f\\right\\vert"
},
{
"math_id": 95,
"text": "\\left\\vert g_n\\right\\vert \\le \\left\\vert g\\right\\vert"
},
{
"math_id": 96,
"text": "\\left\\vert h_n\\right\\vert \\le \\left\\vert h\\right\\vert"
},
{
"math_id": 97,
"text": "\\begin{aligned}\n\\lVert f_n\\rVert_{p_\\theta} &\\to \\lVert f\\rVert_{p_\\theta} &\n\\lVert g_n\\rVert_{p_0} &\\to \\lVert g\\rVert_{p_0} &\n\\lVert h_n\\rVert_{p_1} &\\to \\lVert h\\rVert_{p_1}.\\end{aligned}"
},
{
"math_id": 98,
"text": "\\left\\vert f - f_n\\right\\vert \\le 2\\left\\vert f\\right\\vert"
},
{
"math_id": 99,
"text": "\\left\\vert g-g_n\\right\\vert \\le 2\\left\\vert g\\right\\vert"
},
{
"math_id": 100,
"text": "\\left\\vert h\n- h_n\\right\\vert \\le 2\\left\\vert h\\right\\vert"
},
{
"math_id": 101,
"text": "\\begin{aligned}\n\\lVert f - f_n\\rVert_{p_\\theta} &\\to 0 &\n\\lVert g - g_n\\rVert_{p_0} &\\to 0 &\n\\lVert h - h_n\\rVert_{p_1} &\\to 0\\end{aligned}"
},
{
"math_id": 102,
"text": "T"
},
{
"math_id": 103,
"text": "(p_0, q_0)"
},
{
"math_id": 104,
"text": "(p_1,\nq_1)"
},
{
"math_id": 105,
"text": "(p_\\theta, q_\\theta)"
},
{
"math_id": 106,
"text": "\\begin{aligned}\n\\lVert Tg - Tg_n\\rVert_{p_0} & \\le \\|T\\|_{L^{p_0} \\to L^{q_0}} \\lVert g - g_n\\rVert_{p_0} \\to 0 &\n\\lVert Th - Th_n\\rVert_{p_1} & \\le \\|T\\|_{L^{p_1} \\to L^{q_1}} \\lVert h - h_n\\rVert_{p_1} \\to 0.\\end{aligned}"
},
{
"math_id": 107,
"text": "Tg_n \\to Tg"
},
{
"math_id": 108,
"text": "Th_n \\to Th"
},
{
"math_id": 109,
"text": "\\epsilon > 0"
},
{
"math_id": 110,
"text": "\\mu_2(y\\in \\Omega_2: \\left\\vert Tg - Tg_n\\right\\vert > \\epsilon) \\le \\frac{\\lVert Tg - Tg_n\\rVert_{q_0}^{q_0}}\n{\\epsilon^{q_0}}"
},
{
"math_id": 111,
"text": "Th - Th_n"
},
{
"math_id": 112,
"text": "Tf_n \\to Tf"
},
{
"math_id": 113,
"text": "\\lVert Tf\\rVert_{q_\\theta} \\le \\liminf_{n\\to\\infty} \\lVert T f_n\\rVert_{q_\\theta} \\le\n\\|T\\|_{L^{p_\\theta} \\to L^{q_\\theta}} \\liminf_{n\\to\\infty} \\lVert f_n\\rVert_{p_\\theta} = \\|T\\|_{L^{p_\\theta} \\to L^{q_\\theta}}\n\\lVert f\\rVert_{p_\\theta}."
},
{
"math_id": 114,
"text": "\\varphi(z) = \\int (T_z f_z)g_z \\, d\\mu_2,"
},
{
"math_id": 115,
"text": " z \\mapsto \\int (T_zf)g \\, d\\mu_2"
},
{
"math_id": 116,
"text": " \\sup_{z \\in S} e^{-k|\\text{Im}(z)|} \\log \\left| \\int (T_zf)g \\, d\\mu_2 \\right| < \\infty"
},
{
"math_id": 117,
"text": "\\sup_{\\text{Re}(z) = 0, 1} e^{-k|\\text{Im}(z)|} \\log \\left \\|T_z \\right \\| < \\infty"
},
{
"math_id": 118,
"text": "\\hat{f}:\\mathbf{Z} \\to \\mathbf{C}"
},
{
"math_id": 119,
"text": "\\hat{f}(n) = \\frac{1}{2\\pi} \\int_{-\\pi}^{\\pi} f(x) e^{-inx} \\, dx ,"
},
{
"math_id": 120,
"text": "\\begin{align}\n\\left \\|\\mathcal{F}f \\right \\|_{L^{q}(\\mathbf{R}^d)} &\\leq \\|f\\|_{L^p(\\mathbf{R}^d)} \\\\\n\\left \\|\\hat{f} \\right \\|_{\\ell^{q}(\\mathbf{Z})} &\\leq \\|f\\|_{L^p(\\mathbf{T})}\n\\end{align}"
},
{
"math_id": 121,
"text": "\\| f * g \\|_p \\leq \\|f\\|_1 \\|g\\|_p."
},
{
"math_id": 122,
"text": "\\|f*g\\|_s\\leq \\|f\\|_r\\|g\\|_p"
},
{
"math_id": 123,
"text": "\\frac{1}{r}+\\frac{1}{p}=1+\\frac{1}{s}."
},
{
"math_id": 124,
"text": " \\mathcal{H}f(x) = \\frac{1}{\\pi} \\, \\mathrm{p.v.} \\int_{-\\infty}^\\infty \\frac{f(x-t)}{t} \\, dt = \\left(\\frac{1}{\\pi} \\, \\mathrm{p.v.} \\frac{1}{t} \\ast f\\right)(x),"
},
{
"math_id": 125,
"text": " \\widehat{\\mathcal{H}f}(\\xi) = -i \\, \\sgn(\\xi) \\hat{f}(\\xi)."
},
{
"math_id": 126,
"text": "(\\mathcal{H}f)^2 = f^2 + 2\\mathcal{H}(f\\mathcal{H}f)"
},
{
"math_id": 127,
"text": " \\|\\mathcal{H}f\\|_p \\leq A_p \\|f\\|_p"
},
{
"math_id": 128,
"text": " \\mu \\left( \\{x : Tf(x) > \\alpha \\} \\right) \\leq \\left( \\frac{C_{p,q} \\|f\\|_p}{\\alpha} \\right)^q,"
},
{
"math_id": 129,
"text": "\\|A\\|_{\\ell_1 \\to \\ell_1}, \\|A\\|_{\\ell_\\infty \\to \\ell_\\infty} \\leq M."
},
{
"math_id": 130,
"text": "\\|A\\|_{X \\to X} \\leq M"
},
{
"math_id": 131,
"text": "(x_i) \\in X"
},
{
"math_id": 132,
"text": "(\\varepsilon_i) \\in \\{-1, 1 \\}^\\infty"
},
{
"math_id": 133,
"text": "\\| (\\varepsilon_i x_i) \\|_X = \\| (x_i) \\|_X "
}
] | https://en.wikipedia.org/wiki?curid=967440 |
9678 | Exponential function | Mathematical function, denoted exp(x) or e^x
The exponential function is a mathematical function denoted by formula_1 or formula_2 (where the argument x is written as an exponent). Unless otherwise specified, the term generally refers to the positive-valued function of a real variable, although it can be extended to the complex numbers or generalized to other mathematical objects like matrices or Lie algebras. The exponential function originated from the operation of taking powers of a number (repeated multiplication), but various modern definitions allow it to be rigorously extended to all real arguments formula_3, including irrational numbers. Its ubiquitous occurrence in pure and applied mathematics led mathematician Walter Rudin to consider the exponential function to be "the most important function in mathematics".
The functions formula_4 for positive real numbers formula_5 are also known as exponential functions, and satisfy the exponentiation identity:formula_6This implies formula_7 (with formula_8 factors) for positive integers formula_8, where formula_9, relating exponential functions to the elementary notion of exponentiation. The natural base formula_10 is a ubiquitous mathematical constant called Euler's number. To distinguish it, formula_11 is called "the" exponential function or the natural exponential function: it is the unique real-valued function of a real variable whose derivative is itself and whose value at 0 is 1: formula_12 for all formula_13, and formula_14The relation formula_15 for formula_16 and real or complex formula_3 allows general exponential functions to be expressed in terms of the natural exponential.
More generally, especially in applied settings, any function formula_17 defined by
formula_18
is also known as an exponential function, as it solves the initial value problem formula_19, meaning its rate of change at each point is proportional to the value of the function at that point. This behavior models diverse phenomena in the biological, physical, and social sciences, for example the unconstrained growth of a self-reproducing population, the decay of a radioactive element, the compound interest accruing on a financial fund, or a growing body of manufacturing expertise.
The exponential function can also be defined as a power series, which is readily applied to real numbers, complex numbers, and even matrices. The complex exponential function formula_20 takes on all complex values except for 0 and is closely related to the complex trigonometric functions by Euler's formula: formula_21 Motivated by its more abstract properties and characterizations, the exponential function can be generalized to much larger contexts such as square matrices and Lie groups. Even further, the differential equation definition can be generalized to a Riemannian manifold.
The exponential function for real numbers is a bijection from formula_22 to the interval formula_23. Its inverse function is the natural logarithm, denoted ln or formula_24. Some old texts refer to the exponential function as the "antilogarithm".
Graph.
The graph of formula_25 is upward-sloping, and increases faster as x increases. The graph always lies above the x-axis, but becomes arbitrarily close to it for large negative x; thus, the x-axis is a horizontal asymptote. The equation formula_26 means that the slope of the tangent to the graph at each point is equal to its y-coordinate at that point.
Relation to more general exponential functions.
The exponential function formula_27 is sometimes called the "natural exponential function" in order to distinguish it from the other exponential functions. The study of any exponential function can easily be reduced to that of the natural exponential function, since per definition, for positive b,
formula_28
As functions of a real variable, exponential functions are uniquely characterized by the fact that the derivative of such a function is directly proportional to the value of the function. The constant of proportionality of this relationship is the natural logarithm of the base b:
formula_29
For "b" > 1, the function formula_30 is increasing (as depicted for "b" = "e" and "b" = 2), because formula_31 makes the derivative always positive; this is often referred to as exponential growth. For positive "b" < 1, the function is decreasing (as depicted for "b" =); this is often referred to as exponential decay. For "b" = 1, the function is constant.
Euler's number "e" = 2.71828... is the unique base for which the constant of proportionality is 1, since formula_32, so that the function is its own derivative:
formula_33
This function, also denoted as exp "x", is called the "natural exponential function", or simply "the exponential function". Since any exponential function defined by formula_34 can be written in terms of the natural exponential as formula_15, it is computationally and conceptually convenient to reduce the study of exponential functions to this particular one. The natural exponential is hence denoted by
formula_35 or formula_36
The former notation is commonly used for simpler exponents, while the latter is preferred when the exponent is more complicated and harder to read in a small font.
For real numbers c and d, a function of the form formula_37 is also an exponential function, since it can be rewritten as
formula_38
Formal definition.
The exponential function formula_39 can be characterized in a variety of equivalent ways. It is commonly defined by the following power series:
formula_40
Since the radius of convergence of this power series is infinite, this definition is applicable to all complex numbers; see .
The term-by-term differentiation of this power series reveals that formula_41 for all x, leading to another common characterization of formula_42 as the unique solution of the differential equation
formula_43
that satisfies the initial condition formula_44
Solving the ordinary differential equation formula_45 with the initial condition formula_46 using Euler's method gives another common characterization, the product limit formula:
formula_47
With any of these equivalent definitions, one defines Euler's number formula_48. It can then be shown that the exponentiation formula_49 and the exponential function formula_50 are equivalent.
It can be shown that every continuous, nonzero solution of the functional equation formula_51 for formula_52 is an exponential function in the more general sense, formula_53 with formula_54
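The following brief MATLAB sketch compares the partial sums of the power series and the product limit with the built-in exponential; the truncation order, the value of n and the sample point are arbitrary choices.
% Comparison of two equivalent definitions of exp with the built-in function.
x = 1.7;                                            % arbitrary sample point
series_value = sum(x.^(0:20) ./ factorial(0:20));   % truncated power series
n = 1e6;
limit_value = (1 + x/n)^n;                          % product limit with a large finite n
fprintf('series: %.12f  limit: %.12f  exp: %.12f\n', series_value, limit_value, exp(x));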
Overview.
The exponential function arises whenever a quantity grows or decays at a rate proportional to its current value. One such situation is continuously compounded interest, and in fact it was this observation that led Jacob Bernoulli in 1683 to the number
formula_55
now known as "e". Later, in 1697, Johann Bernoulli studied the calculus of the exponential function.
If a principal amount of 1 earns interest at an annual rate of "x" compounded monthly, then the interest earned each month is "x"/12 times the current value, so each month the total value is multiplied by (1 + "x"/12), and the value at the end of the year is (1 + "x"/12)^12. If instead interest is compounded daily, this becomes (1 + "x"/365)^365. Letting the number of time intervals per year grow without bound leads to the limit definition of the exponential function,
formula_56
first given by Leonhard Euler.
This is one of a number of characterizations of the exponential function; others involve series or differential equations.
From any of these definitions it can be shown that "e"−"x" is the reciprocal of "e""x". For example from the differential equation definition, "e""x" "e"−"x" = 1 when "x" = 0 and its derivative using the product rule is "e""x" "e"−"x" − "e""x" "e"−"x" = 0 for all x, so "e""x" "e"−"x" = 1 for all x.
From any of these definitions it can be shown that the exponential function obeys the basic exponentiation identity. For example from the power series definition,
formula_57
This justifies the notation "e""x" for exp "x".
The derivative (rate of change) of the exponential function is the exponential function itself. More generally, a function with a rate of change "proportional" to the function itself (rather than equal to it) is expressible in terms of the exponential function. This function property leads to exponential growth or exponential decay.
The exponential function extends to an entire function on the complex plane. Euler's formula relates its values at purely imaginary arguments to trigonometric functions. The exponential function also has analogues for which the argument is a matrix, or even an element of a Banach algebra or a Lie algebra.
Derivatives and differential equations.
The importance of the exponential function in mathematics and the sciences stems mainly from its property as the unique function which is equal to its derivative and is equal to 1 when "x" = 0. That is,
formula_58
Functions of the form "ce""x" for constant "c" are the only functions that are equal to their derivative (by the Picard–Lindelöf theorem). Other ways of saying the same thing include:
If a variable's growth or decay rate is proportional to its size—as is the case in unlimited population growth (see Malthusian catastrophe), continuously compounded interest, or radioactive decay—then the variable can be written as a constant times an exponential function of time. Explicitly for any real constant "k", a function "f": R → R satisfies "f"′ = "kf" if and only if "f"("x") = "ce""kx" for some constant "c". The constant "k" is called the decay constant, disintegration constant, rate constant, or transformation constant.
Furthermore, for any differentiable function "f", we find, by the chain rule:
formula_59
Continued fractions for "e""x".
A continued fraction for "e""x" can be obtained via an identity of Euler:
formula_60
The following generalized continued fraction for "e""z" converges more quickly:
formula_61
or, by applying the substitution "z" = "x"/"y":
formula_62
with a special case for "z" = 2:
formula_63
This formula also converges, though more slowly, for "z" > 2. For example:
formula_64
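Evaluating such a continued fraction from the bottom up is straightforward; in the MATLAB sketch below the truncation depth and the argument are arbitrary choices.
% Sketch evaluating the generalized continued fraction for e^z from the
% innermost level outward.
z = 1.3;
levels = 12;                       % partial denominators 6, 10, 14, ..., 4*levels + 2
t = 4*levels + 2;                  % innermost denominator
for k = levels-1:-1:1
    t = (4*k + 2) + z^2 / t;       % builds 6 + z^2/(10 + z^2/(14 + ...))
end
cf_value = 1 + 2*z / (2 - z + z^2 / t);
fprintf('continued fraction: %.12f   exp(z): %.12f\n', cf_value, exp(z));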
Complex plane.
As in the real case, the exponential function can be defined on the complex plane in several equivalent forms.
The most common definition of the complex exponential function parallels the power series definition for real arguments, where the real variable is replaced by a complex one:
formula_65
Alternatively, the complex exponential function may be defined by modelling the limit definition for real arguments, but with the real variable replaced by a complex one:
formula_66
For the power series definition, term-wise multiplication of two copies of this power series in the Cauchy sense, permitted by Mertens' theorem, shows that the defining multiplicative property of exponential functions continues to hold for all complex arguments:
formula_67
The definition of the complex exponential function in turn leads to the appropriate definitions extending the trigonometric functions to complex arguments.
In particular, when "z" = "it" (t real), the series definition yields the expansion
formula_68
In this expansion, the rearrangement of the terms into real and imaginary parts is justified by the absolute convergence of the series. The real and imaginary parts of the above expression in fact correspond to the series expansions of cos "t" and sin "t", respectively.
This correspondence provides motivation for defining cosine and sine for all complex arguments in terms of formula_69 and the equivalent power series:
formula_70
for all formula_71
The functions exp, cos, and sin so defined have infinite radii of convergence by the ratio test and are therefore entire functions (that is, holomorphic on formula_0). The range of the exponential function is formula_72, while the ranges of the complex sine and cosine functions are both formula_0 in its entirety, in accord with Picard's theorem, which asserts that the range of a nonconstant entire function is either all of formula_0, or formula_0 excluding one lacunary value.
These definitions for the exponential and trigonometric functions lead trivially to Euler's formula:
formula_73
We could alternatively define the complex exponential function based on this relationship. If "z" = "x" + "iy", where x and y are both real, then we could define its exponential as
formula_74
where exp, cos, and sin on the right-hand side of the definition sign are to be interpreted as functions of a real variable, previously defined by other means.
For formula_75, the relationship formula_76 holds, so that formula_77 for real formula_78 and formula_79 maps the real line (mod 2"π") to the unit circle in the complex plane. Moreover, going from formula_80 to formula_81, the curve defined by formula_82 traces a segment of the unit circle of length
formula_83
starting from "z" = 1 in the complex plane and going counterclockwise. Based on these observations and the fact that the measure of an angle in radians is the arc length on the unit circle subtended by the angle, it is easy to see that, restricted to real arguments, the sine and cosine functions as defined above coincide with the sine and cosine functions as introduced in elementary mathematics via geometric notions.
The complex exponential function is periodic with period 2"πi" and formula_84 holds for all formula_85.
When its domain is extended from the real line to the complex plane, the exponential function retains the following properties:
formula_86
for all formula_87
Extending the natural logarithm to complex arguments yields the complex logarithm log "z", which is a multivalued function.
We can then define a more general exponentiation:
formula_88
for all complex numbers "z" and "w". This is also a multivalued function, even when "z" is real. This distinction is problematic, as the multivalued functions log "z" and "z""w" are easily confused with their single-valued equivalents when substituting a real number for "z". The rule about multiplying exponents for the case of positive real numbers must be modified in a multivalued context:
<templatestyles src="Block indent/styles.css"/>("e""z")"w" ≠ "e""zw", but rather ("e""z")"w" = "e"("z" + 2"niπ")"w", multivalued over integers "n"
See failure of power and logarithm identities for more about problems with combining powers.
The exponential function maps any line in the complex plane to a logarithmic spiral in the complex plane with the center at the origin. Two special cases exist: when the original line is parallel to the real axis, the resulting spiral never closes in on itself; when the original line is parallel to the imaginary axis, the resulting spiral is a circle of some radius.
Considering the complex exponential function as a function involving four real variables:
formula_89
the graph of the exponential function is a two-dimensional surface curving through four dimensions.
Starting with a color-coded portion of the formula_90 domain, the following are depictions of the graph as variously projected into two or three dimensions.
The second image shows how the domain complex plane is mapped into the range complex plane:
The third and fourth images show how the graph in the second image extends into one of the other two dimensions not shown in the second image.
The third image shows the graph extended along the real formula_3 axis. It shows the graph is a surface of revolution about the formula_3 axis of the graph of the real exponential function, producing a horn or funnel shape.
The fourth image shows the graph extended along the imaginary formula_92 axis. It shows that the graph's surface for positive and negative formula_92 values doesn't really meet along the negative real formula_91 axis, but instead forms a spiral surface about the formula_92 axis. Because its formula_92 values have been extended to ±2"π", this image also better depicts the 2π periodicity in the imaginary formula_92 value.
Computation of "a""b" where both "a" and "b" are complex.
Complex exponentiation "a""b" can be defined by converting "a" to polar coordinates and using the identity ("e"ln "a")"b" = "a""b":
formula_93
However, when "b" is not an integer, this function is multivalued, because "θ" is not unique (see "").
Matrices and Banach algebras.
The power series definition of the exponential function makes sense for square matrices (for which the function is called the matrix exponential) and more generally in any unital Banach algebra "B". In this setting, "e"0 = 1, and "e""x" is invertible with inverse "e"−"x" for any "x" in "B". If "xy" = "yx", then "e""x" + "y" = "e""x""e""y", but this identity can fail for noncommuting "x" and "y".
Some alternative definitions lead to the same function. For instance, "e""x" can be defined as
formula_94
Or "e""x" can be defined as "f""x"(1), where "f""x" : R → "B" is the solution to the differential equation ("t") = "x
f""x"("t"), with initial condition "f""x"(0) = 1; it follows that "f""x"("t") = "e""tx" for every t in R.
Lie algebras.
Given a Lie group "G" and its associated Lie algebra formula_95, the exponential map is a map formula_95 ↦ "G" satisfying similar properties. In fact, since R is the Lie algebra of the Lie group of all positive real numbers under multiplication, the ordinary exponential function for real arguments is a special case of the Lie algebra situation. Similarly, since the Lie group GL("n",R) of invertible "n" × "n" matrices has as Lie algebra M("n",R), the space of all "n" × "n" matrices, the exponential function for square matrices is a special case of the Lie algebra exponential map.
The identity formula_96 can fail for Lie algebra elements "x" and "y" that do not commute; the Baker–Campbell–Hausdorff formula supplies the necessary correction terms.
Transcendency.
The function "e""z" is not in the rational function ring formula_97: it is not the quotient of two polynomials with complex coefficients.
If "a"1, ..., "a""n" are distinct complex numbers, then "e""a"1"z", ..., "e""a""n""z" are linearly independent over formula_97, and hence "e""z" is transcendental over formula_97.
Computation.
When computing (an approximation of) the exponential function near the argument 0, the result will be close to 1, and computing the value of the difference formula_98 with floating-point arithmetic may lead to the loss of (possibly all) significant figures, producing a large calculation error, possibly even a meaningless result.
Following a proposal by William Kahan, it may thus be useful to have a dedicated routine, often called expm1, for computing "e""x" − 1 directly, bypassing computation of "e""x". For example, if the exponential is computed by using its Taylor series
formula_99
one may use the Taylor series of formula_98:
formula_100
This was first implemented in 1979 in the Hewlett-Packard HP-41C calculator, and provided by several calculators, operating systems (for example Berkeley UNIX 4.3BSD), computer algebra systems, and programming languages (for example C99).
In addition to base "e", the IEEE 754-2008 standard defines similar exponential functions near 0 for base 2 and 10: formula_101 and formula_102.
A similar approach has been used for the logarithm (see lnp1).
An identity in terms of the hyperbolic tangent,
formula_103
gives a high-precision value for small values of "x" on systems that do not implement expm1("x").
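The effect is easy to observe in MATLAB, which provides expm1; the argument below is an arbitrary small value chosen for illustration.
% Loss of significance near 0: exp(x) - 1 versus a direct evaluation.
x = 1e-12;
naive = exp(x) - 1;                           % catastrophic cancellation
direct = expm1(x);                            % accurate near 0
via_tanh = 2*tanh(x/2) / (1 - tanh(x/2));     % the hyperbolic-tangent identity above
fprintf('naive: %.16e\ndirect: %.16e\ntanh form: %.16e\n', naive, direct, via_tanh);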
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{C}"
},
{
"math_id": 1,
"text": "f(x)=\\exp(x)"
},
{
"math_id": 2,
"text": "e^x"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "f(x) = b^x"
},
{
"math_id": 5,
"text": "b"
},
{
"math_id": 6,
"text": "b^{x+y} = b^x b^y \\text{ for all } x,y\\in\\mathbb{R}."
},
{
"math_id": 7,
"text": "b^n= b\\times\\cdots\\times b"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "b=b^1"
},
{
"math_id": 10,
"text": "e = \\exp(1)=2.71828\\ldots"
},
{
"math_id": 11,
"text": "\\exp(x)=e^x"
},
{
"math_id": 12,
"text": "\\exp'(x)=\\exp(x)"
},
{
"math_id": 13,
"text": "x\\in \\R"
},
{
"math_id": 14,
"text": "\\exp(0)=1."
},
{
"math_id": 15,
"text": "b^x = e^{x\\ln b}"
},
{
"math_id": 16,
"text": "b>0"
},
{
"math_id": 17,
"text": "f:\\mathbb{R}\\to\\mathbb{R}"
},
{
"math_id": 18,
"text": "f(x)=c e^{ax}=c b^{kx}, \\text{ with }k=a/ \\ln b,\\ a\\neq 0,\\ b ,c>0"
},
{
"math_id": 19,
"text": "f'=af,\\ f(0)=c"
},
{
"math_id": 20,
"text": "\\exp:\\mathbb{C}\\to\\mathbb{C}"
},
{
"math_id": 21,
"text": "e^{x+iy} = e^x\\cos(y) \\,+\\, i \\,e^x\\sin(y)."
},
{
"math_id": 22,
"text": "\\mathbb{R}"
},
{
"math_id": 23,
"text": "(0,\\infty)"
},
{
"math_id": 24,
"text": "\\log_e"
},
{
"math_id": 25,
"text": "y=e^x"
},
{
"math_id": 26,
"text": "\\tfrac{d}{dx}e^x = e^x"
},
{
"math_id": 27,
"text": "f(x) = e^x"
},
{
"math_id": 28,
"text": " b^x \\mathrel{\\stackrel{\\text{def}}{=}} e^{x\\ln b} "
},
{
"math_id": 29,
"text": "\\frac{d}{dx} b^x = \\frac{d}{dx} e^{x\\ln (b)} = e^{x\\ln (b)} \\ln (b) = b^x \\ln (b)."
},
{
"math_id": 30,
"text": "b^x"
},
{
"math_id": 31,
"text": "\\ln b>0"
},
{
"math_id": 32,
"text": "\\ln(e) = 1"
},
{
"math_id": 33,
"text": "\\frac{d}{dx} e^x = e^x \\ln (e) = e^x."
},
{
"math_id": 34,
"text": "f(x)=b^x"
},
{
"math_id": 35,
"text": "x\\mapsto e^x"
},
{
"math_id": 36,
"text": "x\\mapsto \\exp x."
},
{
"math_id": 37,
"text": "f(x) = a b^{cx + d}"
},
{
"math_id": 38,
"text": "a b^{c x + d} = \\left(a b^d\\right) \\left(b^c\\right)^x."
},
{
"math_id": 39,
"text": "\\exp"
},
{
"math_id": 40,
"text": "\\exp x := \\sum_{k = 0}^{\\infty} \\frac{x^k}{k!} = 1 + x + \\frac{x^2}{2!} + \\frac{x^3}{3!} + \\frac{x^4}{4!} + \\cdots"
},
{
"math_id": 41,
"text": "\\frac{d}{dx}\\exp x = \\exp x"
},
{
"math_id": 42,
"text": "\\exp x"
},
{
"math_id": 43,
"text": "y'(x) = y(x)"
},
{
"math_id": 44,
"text": "y(0) = 1."
},
{
"math_id": 45,
"text": "y'(x) = y(x)"
},
{
"math_id": 46,
"text": "y'(0)=1"
},
{
"math_id": 47,
"text": "\\exp x = \\lim_{n \\to \\infty} \\left(1 + \\frac{x}{n}\\right)^n."
},
{
"math_id": 48,
"text": "e = \\exp 1"
},
{
"math_id": 49,
"text": "(\\exp 1)^x"
},
{
"math_id": 50,
"text": "\\exp x"
},
{
"math_id": 51,
"text": "f(x+y)=f(x)f(y)"
},
{
"math_id": 52,
"text": "f: \\C \\to \\C"
},
{
"math_id": 53,
"text": "f(x) = \\exp(kx)"
},
{
"math_id": 54,
"text": "k\\in\\C."
},
{
"math_id": 55,
"text": "\\lim_{n\\to\\infty}\\left(1 + \\frac{1}{n}\\right)^{n}"
},
{
"math_id": 56,
"text": "\\exp x = \\lim_{n\\to\\infty}\\left(1 + \\frac{x}{n}\\right)^{n}"
},
{
"math_id": 57,
"text": "\\exp(x + y)\n= \\sum_{m=0}^{\\infty} \\frac{(x+y)^m}{m!}\n= \\sum_{m=0}^{\\infty} \\sum_{k=0}^m\\frac{m!}{k! (m-k)!} \\frac{x^k y^{m-k}}{m!}\n= \\sum_{n=0}^{\\infty} \\sum_{k=0}^{\\infty} \\frac{x^k y^n}{k! n!}\n= \\exp x \\cdot \\exp y\\,."
},
{
"math_id": 58,
"text": "\\frac{d}{dx}e^x = e^x \\quad\\text{and}\\quad e^0=1."
},
{
"math_id": 59,
"text": "\\frac{d}{dx} e^{f(x)} = f'(x)e^{f(x)}."
},
{
"math_id": 60,
"text": " e^x = 1 + \\cfrac{x}{1 - \\cfrac{x}{x + 2 - \\cfrac{2x}{x + 3 - \\cfrac{3x}{x + 4 - \\ddots}}}}"
},
{
"math_id": 61,
"text": " e^z = 1 + \\cfrac{2z}{2 - z + \\cfrac{z^2}{6 + \\cfrac{z^2}{10 + \\cfrac{z^2}{14 + \\ddots}}}}"
},
{
"math_id": 62,
"text": " e^\\frac{x}{y} = 1 + \\cfrac{2x}{2y - x + \\cfrac{x^2} {6y + \\cfrac{x^2} {10y + \\cfrac{x^2} {14y + \\ddots}}}}"
},
{
"math_id": 63,
"text": " e^2 = 1 + \\cfrac{4}{0 + \\cfrac{2^2}{6 + \\cfrac{2^2}{10 + \\cfrac{2^2}{14 + \\ddots\\,}}}} = 7 + \\cfrac{2}{5 + \\cfrac{1}{7 + \\cfrac{1}{9 + \\cfrac{1}{11 + \\ddots\\,}}}}"
},
{
"math_id": 64,
"text": " e^3 = 1 + \\cfrac{6}{-1 + \\cfrac{3^2}{6 + \\cfrac{3^2}{10 + \\cfrac{3^2}{14 + \\ddots\\,}}}} = 13 + \\cfrac{54}{7 + \\cfrac{9}{14 + \\cfrac{9}{18 + \\cfrac{9}{22 + \\ddots\\,}}}}"
},
{
"math_id": 65,
"text": "\\exp z := \\sum_{k = 0}^\\infty\\frac{z^k}{k!} "
},
{
"math_id": 66,
"text": "\\exp z := \\lim_{n\\to\\infty}\\left(1+\\frac{z}{n}\\right)^n "
},
{
"math_id": 67,
"text": "\\exp(w+z)=\\exp w\\exp z \\text { for all } w,z\\in\\mathbb{C}"
},
{
"math_id": 68,
"text": "\\exp(it) = \\left( 1-\\frac{t^2}{2!}+\\frac{t^4}{4!}-\\frac{t^6}{6!}+\\cdots \\right) + i\\left(t - \\frac{t^3}{3!} + \\frac{t^5}{5!} - \\frac{t^7}{7!}+\\cdots\\right)."
},
{
"math_id": 69,
"text": "\\exp(\\pm iz)"
},
{
"math_id": 70,
"text": "\\begin{align}\n & \\cos z:= \\frac{\\exp(iz)+\\exp(-iz)}{2} = \\sum_{k=0}^\\infty (-1)^k \\frac{z^{2k}}{(2k)!}, \\\\[5pt]\n \\text{and } \\quad & \\sin z := \\frac{\\exp(iz)-\\exp(-iz)}{2i} =\\sum_{k=0}^\\infty (-1)^k\\frac{z^{2k+1}}{(2k+1)!}\n \\end{align}"
},
{
"math_id": 71,
"text": " z\\in\\mathbb{C}."
},
{
"math_id": 72,
"text": "\\mathbb{C}\\setminus \\{0\\}"
},
{
"math_id": 73,
"text": "\\exp(iz)=\\cos z+i\\sin z \\text { for all } z\\in\\mathbb{C}."
},
{
"math_id": 74,
"text": "\\exp z = \\exp(x+iy) := (\\exp x)(\\cos y + i \\sin y)"
},
{
"math_id": 75,
"text": "t\\in\\R"
},
{
"math_id": 76,
"text": "\\overline{\\exp(it)}=\\exp(-it)"
},
{
"math_id": 77,
"text": "\\left|\\exp(it)\\right| = 1"
},
{
"math_id": 78,
"text": "t"
},
{
"math_id": 79,
"text": "t \\mapsto \\exp(it)"
},
{
"math_id": 80,
"text": "t = 0"
},
{
"math_id": 81,
"text": "t = t_0"
},
{
"math_id": 82,
"text": "\\gamma(t)=\\exp(it)"
},
{
"math_id": 83,
"text": "\\int_0^{t_0}|\\gamma'(t)| \\, dt = \\int_0^{t_0} |i\\exp(it)| \\, dt = t_0,"
},
{
"math_id": 84,
"text": "\\exp(z+2\\pi i k)=\\exp z"
},
{
"math_id": 85,
"text": "z \\in \\mathbb{C}, k \\in \\mathbb{Z}"
},
{
"math_id": 86,
"text": "\\begin{align}\n & e^{z + w} = e^z e^w\\, \\\\[5pt]\n & e^0 = 1\\, \\\\[5pt]\n & e^z \\ne 0 \\\\[5pt]\n & \\frac{d}{dz} e^z = e^z \\\\[5pt]\n & \\left(e^z\\right)^n = e^{nz}, n \\in \\mathbb{Z}\n \\end{align} "
},
{
"math_id": 87,
"text": " w,z\\in\\mathbb C."
},
{
"math_id": 88,
"text": "z^w = e^{w \\log z}"
},
{
"math_id": 89,
"text": "v + i w = \\exp(x + i y)"
},
{
"math_id": 90,
"text": "xy"
},
{
"math_id": 91,
"text": "v"
},
{
"math_id": 92,
"text": "y"
},
{
"math_id": 93,
"text": "a^b = \\left(re^{\\theta i}\\right)^b = \\left(e^{(\\ln r) + \\theta i}\\right)^b = e^{\\left((\\ln r) + \\theta i\\right)b}"
},
{
"math_id": 94,
"text": "\\lim_{n \\to \\infty} \\left(1 + \\frac{x}{n} \\right)^n ."
},
{
"math_id": 95,
"text": "\\mathfrak{g}"
},
{
"math_id": 96,
"text": "\\exp(x+y)=\\exp(x)\\exp(y)"
},
{
"math_id": 97,
"text": "\\C(z)"
},
{
"math_id": 98,
"text": "e^x-1"
},
{
"math_id": 99,
"text": "e^x = 1 + x + \\frac {x^2}2 + \\frac{x^3}6 + \\cdots + \\frac{x^n}{n!} + \\cdots,"
},
{
"math_id": 100,
"text": "e^x-1=x+\\frac {x^2}2 + \\frac{x^3}6+\\cdots +\\frac{x^n}{n!}+\\cdots."
},
{
"math_id": 101,
"text": "2^x - 1"
},
{
"math_id": 102,
"text": "10^x - 1"
},
{
"math_id": 103,
"text": "\\operatorname{expm1} (x) = e^x - 1 = \\frac{2 \\tanh(x/2)}{1 - \\tanh(x/2)},"
}
] | https://en.wikipedia.org/wiki?curid=9678 |
9679418 | Pseudomedian | Statistical measure of centrality
In statistics, the pseudomedian is a measure of centrality for data-sets and populations. It agrees with the median for symmetric data-sets or populations. In mathematical statistics, the pseudomedian is also a location parameter for probability distributions.
Description.
The pseudomedian of a distribution formula_0 is defined to be a median of the distribution of formula_1, where formula_2 and formula_3 are independent, each with the same distribution formula_0.
When formula_0 is a symmetric distribution, the pseudomedian coincides with the median; otherwise this is not generally the case.
The Hodges–Lehmann statistic, defined as the median of all of the midpoints of pairs of observations, is a consistent estimator of the pseudomedian.
Like the set of medians, the pseudomedian is well defined for all probability distributions, even for the many distributions that lack modes or means.
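A minimal MATLAB sketch of this estimator is shown below; including the pairs ("i", "i"), i.e. the observations themselves, among the midpoints is a common convention and is assumed here.
% Hodges-Lehmann estimate: median of all pairwise midpoints (Walsh averages).
function hl = hodgeslehmann(x)
x = x(:);
n = numel(x);
w = zeros(1, n*(n+1)/2);
idx = 0;
for i = 1:n
    for j = i:n                    % pairs with i <= j, so single observations are included
        idx = idx + 1;
        w(idx) = (x(i) + x(j)) / 2;
    end
end
hl = median(w);
end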
Pseudomedian filter in signal processing.
In signal processing there is another definition of pseudomedian filter for discrete signals.
For a time series of length 2"N" + 1, the pseudomedian is defined as follows. Construct "N" + 1 sliding windows each of length "N" + 1. For each window, compute the minimum and maximum. Across all "N" + 1 windows, find the maximum minimum and the minimum maximum. The pseudomedian is the average of these two quantities.
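A minimal MATLAB sketch of this construction for a single segment of odd length is:
% Pseudomedian of a segment x of odd length 2N+1, as described above.
function pm = pseudomedianwindow(x)
L = numel(x);
N = (L - 1) / 2;                   % assumes L is odd
mins = zeros(1, N + 1);
maxs = zeros(1, N + 1);
for k = 1:N+1
    w = x(k : k + N);              % k-th sliding sub-window of length N+1
    mins(k) = min(w);
    maxs(k) = max(w);
end
pm = (max(mins) + min(maxs)) / 2;
end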
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F"
},
{
"math_id": 1,
"text": "(Z_1+Z_2)/2"
},
{
"math_id": 2,
"text": "Z_1"
},
{
"math_id": 3,
"text": "Z_2"
}
] | https://en.wikipedia.org/wiki?curid=9679418 |
9683136 | Effective medium approximations | Method of approximating the properties of a composite material
In materials science, effective medium approximations (EMA) or effective medium theory (EMT) pertain to analytical or theoretical modeling that describes the macroscopic properties of composite materials. EMAs or EMTs are developed from averaging the multiple values of the constituents that directly make up the composite material. At the constituent level, the values of the materials vary and are inhomogeneous. Precise calculation of the many constituent values is nearly impossible. However, theories have been developed that can produce acceptable approximations which in turn describe useful parameters including the effective permittivity and permeability of the materials as a whole. In this sense, effective medium approximations are descriptions of a medium (composite material) based on the properties and the relative fractions of its components and are derived from calculations, and effective medium theory. There are two widely used formulae.
Effective permittivity and permeability are averaged dielectric and magnetic characteristics of a microinhomogeneous medium. They were both derived in the quasi-static approximation, in which the electric field inside a mixture particle may be considered homogeneous. Hence, these formulae cannot describe the particle size effect. Many attempts have been undertaken to improve these formulae.
Applications.
There are many different effective medium approximations, each of them being more or less accurate in distinct conditions. Nevertheless, they all assume that the macroscopic system is homogeneous and, typical of all mean field theories, they fail to predict the properties of a multiphase medium close to the percolation threshold due to the absence of long-range correlations or critical fluctuations in the theory.
The properties under consideration are usually the conductivity formula_0 or the dielectric constant formula_1 of the medium. These parameters are interchangeable in the formulas in a whole range of models due to the wide applicability of the Laplace equation. The problems that fall outside of this class are mainly in the field of elasticity and hydrodynamics, due to the higher order tensorial character of the effective medium constants.
EMAs can be discrete models, such as applied to resistor networks, or continuum theories as applied to elasticity or viscosity. However, most of the current theories have difficulty in describing percolating systems. Indeed, among the numerous effective medium approximations, only Bruggeman's symmetrical theory is able to predict a threshold. This characteristic feature of the latter theory puts it in the same category as other mean field theories of critical phenomena.
Bruggeman's model.
For a mixture of two materials with permittivities formula_2 and formula_3 with corresponding volume fractions formula_4 and formula_5, D.A.G. Bruggeman proposed a formula of the following form:
Here the positive sign before the square root must be altered to a negative sign in some cases in order to get the correct imaginary part of the effective complex permittivity, which is related to electromagnetic wave attenuation. The formula is symmetric with respect to swapping the 'd' and 'm' roles. This formula is based on the equality
where formula_6 is the jump of electric displacement flux over the integration surface, formula_7 is the component of the microscopic electric field normal to the integration surface, formula_8 is the local relative complex permittivity which takes the value formula_2 inside the picked metal particle, the value formula_3 inside the picked dielectric particle and the value formula_9 outside the picked particle, and formula_10 is the normal component of the macroscopic electric field. Formula (4) comes out of Maxwell's equality formula_11. Thus only one picked particle is considered in Bruggeman's approach. The interaction with all the other particles is taken into account only in a mean-field approximation described by formula_9. Formula (3) gives a reasonable resonant curve for plasmon excitations in metal nanoparticles if their size is 10 nm or smaller. However, it is unable to describe the size dependence of the resonant frequency of plasmon excitations that is observed in experiments.
Formulas.
Without any loss of generality, we shall consider the study of the effective conductivity (which can be either dc or ac) for a system made up of spherical multicomponent inclusions with different arbitrary conductivities. Then the Bruggeman formula takes the form:
Circular and spherical inclusions.
In a system of Euclidean spatial dimension formula_12 that has an arbitrary number of components, the sum is made over all the constituents. formula_13 and formula_14 are respectively the fraction and the conductivity of each component, and formula_15 is the effective conductivity of the medium. (The sum over the formula_13's is unity.)
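As an illustration, the self-consistency condition can be solved numerically. The MATLAB sketch below assumes the usual symmetric form of the Bruggeman condition, in which the volume-weighted terms ("σ""i" − "σ""e")/("σ""i" + ("n" − 1)"σ""e") sum to zero; the component values are arbitrary examples.
% Sketch of a numerical solution of the symmetric Bruggeman condition
% sum_i delta_i*(sigma_i - sigma_e)/(sigma_i + (n-1)*sigma_e) = 0
% for a binary mixture of spherical inclusions.
sigma = [1.0, 1e-3];               % component conductivities
delta = [0.4, 0.6];                % volume fractions (sum to 1)
n = 3;                             % spatial dimension
bruggeman = @(s) sum(delta .* (sigma - s) ./ (sigma + (n - 1) * s));
sigma_e = fzero(bruggeman, [1e-9, max(sigma)]);   % bracket assumes a sign change
disp(sigma_e);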
Elliptical and ellipsoidal inclusions.
This is a generalization of Eq. (1) to a biphasic system with ellipsoidal inclusions of conductivity formula_0 into a matrix of conductivity formula_16. The fraction of inclusions is formula_17 and the system is formula_18 dimensional. For randomly oriented inclusions,
where the formula_19's denote the appropriate doublet/triplet of depolarization factors which is governed by the ratios between the axis of the ellipse/ellipsoid. For example: in the case of a circle (formula_20, formula_21) and in the case of a sphere (formula_22, formula_23, formula_24). (The sum over the formula_19 's is unity.)
The most general case to which the Bruggeman approach has been applied involves bianisotropic ellipsoidal inclusions.
Derivation.
The figure illustrates a two-component medium. Consider the cross-hatched volume of conductivity formula_25, take it as a sphere of volume formula_26 and assume it is embedded in a uniform medium with an effective conductivity formula_15. If the electric field far from the inclusion is formula_27 then elementary considerations lead to a dipole moment associated with the volume
This polarization produces a deviation from formula_27. If the average deviation is to vanish, the total polarization summed over the two types of inclusion must vanish. Thus
where formula_28 and formula_29 are respectively the volume fraction of material 1 and 2. This can be easily extended to a system of dimension formula_18 that has an arbitrary number of components. All cases can be combined to yield Eq. (1).
Eq. (1) can also be obtained by requiring the deviation in current to vanish.
It has been derived here from the assumption that the inclusions are spherical and it can be modified for shapes with other depolarization factors; leading to Eq. (2).
A more general derivation applicable to bianisotropic materials is also available.
Modeling of percolating systems.
The main approximation is that all the domains are located in an equivalent mean field.
Unfortunately, it is not the case close to the percolation threshold where the system is governed by the largest cluster of conductors, which is a fractal, and long-range correlations that are totally absent from Bruggeman's simple formula.
The threshold values are in general not correctly predicted. It is 33% in the EMA, in three dimensions, far from the 16% expected from percolation theory and observed in experiments. However, in two dimensions, the EMA gives a threshold of 50% and has been proven to model percolation relatively well.
Maxwell Garnett equation.
In the Maxwell Garnett approximation, the effective medium consists of a matrix medium with formula_2 and inclusions with formula_30. Maxwell Garnett was the son of physicist William Garnett, and was named after Garnett's friend, James Clerk Maxwell. He proposed his formula to explain colored pictures that are observed in glasses doped with metal nanoparticles. His formula has a form
where formula_31 is effective relative complex permittivity of the mixture, formula_3 is relative complex permittivity of the background medium containing small spherical inclusions of relative permittivity formula_2 with volume fraction of formula_32. This formula is based on the equality
where formula_33 is the absolute permittivity of free space and formula_34 is the electric dipole moment of a single inclusion induced by the external electric field E. However, this equality is valid only for a homogeneous medium and formula_35. Moreover, formula (1) ignores the interaction between the single inclusions. Because of these circumstances, formula (1) gives a resonant curve that is too narrow and too high for plasmon excitations in metal nanoparticles of the mixture.
Formula.
The Maxwell Garnett equation reads:
where formula_36 is the effective dielectric constant of the medium, formula_30 of the inclusions, and formula_2 of the matrix; formula_13 is the volume fraction of the inclusions.
The Maxwell Garnett equation is solved by:
so long as the denominator does not vanish. A simple MATLAB calculator using this formula is as follows.
% This simple MATLAB calculator computes the effective dielectric
% constant of a mixture of an inclusion material in a base medium
% according to the Maxwell Garnett theory
% INPUTS:
% eps_base: dielectric constant of base material;
% eps_incl: dielectric constant of inclusion material;
% vol_incl: volume portion of inclusion material;
% OUTPUT:
% eps_mean: effective dielectric constant of the mixture.
function eps_mean = MaxwellGarnettFormula(eps_base, eps_incl, vol_incl)
small_number_cutoff = 1e-6;
if vol_incl < 0 || vol_incl > 1
disp('WARNING: volume portion of inclusion material is out of range!');
end
factor_up = 2 * (1 - vol_incl) * eps_base + (1 + 2 * vol_incl) * eps_incl;
factor_down = (2 + vol_incl) * eps_base + (1 - vol_incl) * eps_incl;
if abs(factor_down) < small_number_cutoff
disp('WARNING: the effective medium is singular!');
eps_mean = 0;
else
eps_mean = eps_base * factor_up / factor_down;
end
end
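For instance, a call of the following form (the host, inclusion and fill-fraction values are arbitrary illustrations) returns the effective permittivity of a dilute suspension of metal-like inclusions in vacuum:
% Example call with arbitrary illustrative values.
eps_mean = MaxwellGarnettFormula(1.0, -10.0 + 1.0i, 0.05);
disp(eps_mean);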
Derivation.
For the derivation of the Maxwell Garnett equation we start with an array of polarizable particles. By using the Lorentz local field concept, we obtain the Clausius-Mossotti relation:
formula_37
Where formula_38 is the number of particles per unit volume. By using elementary electrostatics, we get for a spherical inclusion with dielectric constant formula_30 and a radius formula_39 a polarisability formula_40:
formula_41
If we combine formula_40 with the Clausius–Mossotti equation, we get:
formula_42
Where formula_36 is the effective dielectric constant of the medium, formula_30 of the inclusions; formula_13 is the volume fraction of the inclusions.
Since the Maxwell Garnett model describes a composite of a matrix medium with inclusions, the relation is extended accordingly:
Validity.
In general terms, the Maxwell Garnett EMA is expected to be valid at low volume fractions formula_43, since it is assumed that the domains are spatially separated and the electrostatic interaction between the chosen inclusions and all other neighbouring inclusions is neglected. The Maxwell Garnett formula, in contrast to the Bruggeman formula, ceases to be correct when the inclusions become resonant. In the case of plasmon resonance, the Maxwell Garnett formula is correct only at a volume fraction of the inclusions formula_44. The applicability of the effective medium approximation to dielectric multilayers and metal-dielectric multilayers has been studied, showing that there are certain cases where the effective medium approximation does not hold and one needs to be cautious in applying the theory.
Generalization of the Maxwell Garnett Equation to describe the nanoparticle size distribution.
The Maxwell Garnett equation describes the optical properties of nanocomposites consisting of a collection of perfectly spherical nanoparticles. All these nanoparticles must have the same size. However, due to confinement effects, the optical properties can be influenced by the nanoparticle size distribution. As shown by Battie et al., the Maxwell Garnett equation can be generalized to take this distribution into account.
formula_45
formula_46 and formula_47 are the nanoparticle radius and size distribution, respectively. formula_48 and formula_49 are the mean radius and the volume fraction of the nanoparticles, respectively. formula_50 is the first electric Mie coefficient.
This equation reveals that the classical Maxwell Garnett equation gives an incorrect estimate of the nanoparticle volume fraction when the size distribution cannot be neglected.
Generalization to include shape distribution of nanoparticles.
The Maxwell Garnett equation only describes the optical properties of a collection of perfectly spherical nanoparticles. However, the optical properties of nanocomposites are sensitive to the nanoparticle shape distribution. To overcome this limitation, Y. Battie et al. developed the shape-distributed effective medium theory (SDEMT). This effective medium theory makes it possible to calculate the effective dielectric function of a nanocomposite consisting of a collection of ellipsoidal nanoparticles distributed in shape.
formula_51
with formula_52
The depolarization factors (formula_53) depend only on the shape of the nanoparticles. formula_54 is the distribution of depolarization factors. f is the volume fraction of the nanoparticles.
The SDEMT theory was used to extract the shape distribution of nanoparticles from absorption or ellipsometric spectra.
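As a quick consistency check (an illustrative Python sketch written for this article, not taken from the SDEMT literature), one can verify that for perfectly spherical nanoparticles, i.e. a shape distribution concentrated at depolarization factors formula_22, formula_23 and formula_24, the SDEMT expression reduces to the Maxwell Garnett mixing rule implemented in the MATLAB-style snippet above:
def sdemt_spherical(eps_m, eps_i, f):
    # beta from the SDEMT formula with all depolarization factors equal to 1/3
    beta = eps_m / (eps_m + (eps_i - eps_m) / 3.0)
    return ((1.0 - f) * eps_m + f * beta * eps_i) / (1.0 - f + f * beta)

def maxwell_garnett(eps_m, eps_i, f):
    # Maxwell Garnett mixing rule, same expression as the snippet above
    return (eps_m * (2 * (1 - f) * eps_m + (1 + 2 * f) * eps_i)
            / ((2 + f) * eps_m + (1 - f) * eps_i))

eps_m, eps_i, f = 2.25, -10.0 + 1.0j, 0.05   # example values chosen arbitrarily
print(abs(sdemt_spherical(eps_m, eps_i, f) - maxwell_garnett(eps_m, eps_i, f)))  # ~0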
Formula describing size effect.
A new formula describing the size effect was proposed. This formula has the form
formula_55
formula_56
where a is the nanoparticle radius and formula_57 is the wave number. It is assumed here that the time dependence of the electromagnetic field is given by the factor formula_58 In this paper Bruggeman's approach was used, but the electromagnetic field for the electric-dipole oscillation mode inside the picked particle was computed without applying the quasi-static approximation. Thus the function formula_59 is due to the field nonuniformity inside the picked particle. In the quasi-static region (formula_60, i.e. formula_61 for Agformula_62, this function becomes the constant formula_63 and formula (5) becomes identical with Bruggeman's formula.
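As an informal numerical illustration of this limit (a short Python check written for this article, not taken from the cited paper), evaluating formula_56 for decreasing arguments shows it approaching 1:
import math

def J(x):
    # J(x) = 2 * (1 - x*cot(x)) / (x**2 + x*cot(x) - 1), as defined above
    x_cot_x = x * math.cos(x) / math.sin(x)
    return 2 * (1 - x_cot_x) / (x ** 2 + x_cot_x - 1)

for x in (1.0, 0.5, 0.1, 0.01):
    print(x, J(x))  # tends to 1 as x -> 0, recovering the quasi-static limit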
Effective permeability formula.
The formula for the effective permeability of mixtures has the form
formula_64
Here formula_65 is the effective relative complex permeability of the mixture, and formula_66 is the relative complex permeability of the background medium containing small spherical inclusions of relative permeability formula_67 with volume fraction formula_32. This formula was derived in the dipole approximation. The magnetic octupole mode and all other magnetic oscillation modes of odd orders were neglected here. When formula_68 and formula_60, this formula takes a simple form
Effective medium theory for resistor networks.
For a network consisting of a high density of random resistors, an exact solution for each individual element may be impractical or impossible. In such cases, a random resistor network can be considered as a two-dimensional graph and the effective resistance can be modelled in terms of graph measures and geometrical properties of networks.
Assuming that the edge length is much less than the electrode spacing and that the edges are uniformly distributed, the potential can be considered to drop uniformly from one electrode to the other.
The sheet resistance of such a random network (formula_69) can be written in terms of the edge (wire) density (formula_70), the resistivity (formula_71), and the width (formula_72) and thickness (formula_73) of the edges (wires) as:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma"
},
{
"math_id": 1,
"text": "\\varepsilon"
},
{
"math_id": 2,
"text": "\\varepsilon_m"
},
{
"math_id": 3,
"text": "\\varepsilon_d"
},
{
"math_id": 4,
"text": "c_m"
},
{
"math_id": 5,
"text": "c_d"
},
{
"math_id": 6,
"text": "\\Delta \\Phi"
},
{
"math_id": 7,
"text": "E_n(\\mathbf r)"
},
{
"math_id": 8,
"text": "\\varepsilon_r (\\mathbf r)"
},
{
"math_id": 9,
"text": "\\varepsilon_{\\mathrm{eff}}"
},
{
"math_id": 10,
"text": "E_0"
},
{
"math_id": 11,
"text": "\\operatorname{div}(\\varepsilon_r\\mathbf E)=0"
},
{
"math_id": 12,
"text": " n "
},
{
"math_id": 13,
"text": "\\delta_i"
},
{
"math_id": 14,
"text": "\\sigma_i"
},
{
"math_id": 15,
"text": "\\sigma_e"
},
{
"math_id": 16,
"text": "\\sigma_m"
},
{
"math_id": 17,
"text": "\\delta"
},
{
"math_id": 18,
"text": "n"
},
{
"math_id": 19,
"text": "L_j"
},
{
"math_id": 20,
"text": "L_1 = 1/2"
},
{
"math_id": 21,
"text": "L_2 = 1/2"
},
{
"math_id": 22,
"text": "L_1 = 1/3"
},
{
"math_id": 23,
"text": "L_2 = 1/3"
},
{
"math_id": 24,
"text": "L_3 = 1/3"
},
{
"math_id": 25,
"text": "\\sigma_1"
},
{
"math_id": 26,
"text": "V"
},
{
"math_id": 27,
"text": "\\overline{E_0}"
},
{
"math_id": 28,
"text": "\\delta_1"
},
{
"math_id": 29,
"text": "\\delta_2"
},
{
"math_id": 30,
"text": "\\varepsilon_i"
},
{
"math_id": 31,
"text": "\\varepsilon_\\text{eff}"
},
{
"math_id": 32,
"text": "c_m \\ll 1"
},
{
"math_id": 33,
"text": "\\varepsilon_0"
},
{
"math_id": 34,
"text": "p_m"
},
{
"math_id": 35,
"text": "\\varepsilon_d = 1"
},
{
"math_id": 36,
"text": "\\varepsilon_\\mathrm{eff}"
},
{
"math_id": 37,
"text": "\\frac{\\varepsilon-1}{\\varepsilon+2} = \\frac{4\\pi}{3} \\sum_j N_j \\alpha_j"
},
{
"math_id": 38,
"text": "N_j"
},
{
"math_id": 39,
"text": "a"
},
{
"math_id": 40,
"text": "\\alpha"
},
{
"math_id": 41,
"text": " \\alpha = \\left( \\frac{\\varepsilon_i-1}{\\varepsilon_i+2} \\right) a^3"
},
{
"math_id": 42,
"text": " \\left( \\frac{\\varepsilon_\\mathrm{eff}-1}{\\varepsilon_\\mathrm{eff}+2} \\right) = \\delta_i \\left( \\frac{\\varepsilon_i-1}{\\varepsilon_i+2} \\right)"
},
{
"math_id": 43,
"text": "\\delta_i "
},
{
"math_id": 44,
"text": " \\delta_i < 10 ^{-5}"
},
{
"math_id": 45,
"text": "\\frac{(\\varepsilon_\\text{eff}-\\varepsilon_m)}{\\varepsilon_\\text{eff}-2\\varepsilon_m}\n=\\frac{3i \\lambda^3}{16\\pi^2\\varepsilon_m^{1.5}}\\frac{f}{R_m^3}\\int P(R) a_1(R) dR"
},
{
"math_id": 46,
"text": "R"
},
{
"math_id": 47,
"text": "P(R)"
},
{
"math_id": 48,
"text": "R_m"
},
{
"math_id": 49,
"text": "f"
},
{
"math_id": 50,
"text": "a_1"
},
{
"math_id": 51,
"text": "\\varepsilon_\\text{eff}=\\frac{(1-f)\\varepsilon_m+f\\beta\\varepsilon_i }{1-f+f\\beta}"
},
{
"math_id": 52,
"text": "\\beta=\\frac{1}{3}\\iint P(L_1,L_2)\\sum_{i \\mathop =1}^3 \\frac{\\varepsilon_m}{\\varepsilon_m+L_i(\\varepsilon_i-\\varepsilon_m)}dL_1 dL_2"
},
{
"math_id": 53,
"text": " L_1, L_2, L_3 "
},
{
"math_id": 54,
"text": "P(L_1,L_2)"
},
{
"math_id": 55,
"text": "\\varepsilon_\\text{eff} = \\frac{1}{4}\\left(H_{\\varepsilon} + i \\sqrt{-H_{\\varepsilon}^2 - 8\\varepsilon_m \\varepsilon_dJ(k_ma)}\\right),"
},
{
"math_id": 56,
"text": "J(x)=2\\frac{1-x\\cot(x)}{x^2+x\\cot(x)-1},"
},
{
"math_id": 57,
"text": "k_m = \\sqrt{\\varepsilon_m \\mu_m} \\omega / c"
},
{
"math_id": 58,
"text": "\\mathrm{exp}(-i \\omega t)."
},
{
"math_id": 59,
"text": "J(k_m a)"
},
{
"math_id": 60,
"text": "k_m a \\ll 1"
},
{
"math_id": 61,
"text": "a \\leq \\mathrm{10\\,nm}"
},
{
"math_id": 62,
"text": ")"
},
{
"math_id": 63,
"text": "J(k_m a)=1"
},
{
"math_id": 64,
"text": "H_{\\mu} = (2-3c_m)\\mu_d-(1-3c_m)\\mu_m J(k_m a)."
},
{
"math_id": 65,
"text": "\\mu_\\text{eff}"
},
{
"math_id": 66,
"text": "\\mu_d"
},
{
"math_id": 67,
"text": "\\mu_m"
},
{
"math_id": 68,
"text": "\\mu_m=\\mu_d"
},
{
"math_id": 69,
"text": "R_{sn}"
},
{
"math_id": 70,
"text": "N_E"
},
{
"math_id": 71,
"text": "\\rho"
},
{
"math_id": 72,
"text": "w"
},
{
"math_id": 73,
"text": "t"
}
] | https://en.wikipedia.org/wiki?curid=9683136 |
96845 | Small population size | Statistical effects of small numbers on a population
Small populations can behave differently from larger populations. They are often the result of population bottlenecks from larger populations, leading to loss of heterozygosity, reduced genetic diversity, loss or fixation of alleles, and shifts in allele frequencies. A small population is then more susceptible to demographic and genetic stochastic events, which can impact the long-term survival of the population. Therefore, small populations are often considered at risk of endangerment or extinction, and are often of conservation concern.
Demographic effects.
The influence of stochastic variation in demographic (reproductive and mortality) rates is much higher for small populations than large ones. Stochastic variation in demographic rates causes small populations to fluctuate randomly in size. This variation could be a result of unequal sex ratios, high variance in family size, inbreeding, or fluctuating population size. The smaller the population, the greater the probability that fluctuations will lead to extinction.
One demographic consequence of a small population size is the chance that all offspring in a generation are of the same sex. Where males and females are equally likely to be produced (see sex ratio), this probability is easy to calculate: it is given by formula_0 (the chance of all animals being female is formula_1; the same holds for all males, hence this result). This can be a problem in very small populations. In 1977, the last 18 kākāpō on a Fiordland island in New Zealand were all male, though the probability of this would only be 0.0000076 if determined by chance (however, females are generally preyed upon more often than males, and kākāpō may be subject to sex allocation). With a population of just three individuals, the probability of them all being the same sex is 0.25. Put another way, for every four species reduced to three individuals (or more precisely three individuals in the effective population), one will become extinct within one generation just because they are all the same sex. If the population remains at this size for several generations, such an event becomes almost inevitable.
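The quoted figures are easy to check numerically; the following few lines of Python (a simple sanity check written for this article, not taken from the cited sources) reproduce the 0.25 and 0.0000076 probabilities:
# P(all offspring are the same sex) = 1 / 2**(n - 1) for n individuals
for n in (3, 18):
    print(n, 1 / 2 ** (n - 1))
# 3 0.25
# 18 7.62939453125e-06  (approximately 0.0000076, the kakapo figure)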
Environmental effects.
The environment can directly affect the survival of a small population. Some detrimental effects include stochastic variation in the environment (year to year variation in rainfall, temperature), which can produce temporally correlated birth and death rates (i.e. 'good' years when birth rates are high and death rates are low and 'bad' years when birth rates are low and death rates are high) that lead to fluctuations in the population size. Again, smaller populations are more likely to become extinct due to these environmentally generated population fluctuations than the large populations.
The environment can also introduce beneficial traits to a small population that promote its persistence. In the small, fragmented populations of the acorn woodpecker, minimal immigration is sufficient for population persistence. Despite the potential genetic consequences of having a small population size, the acorn woodpecker is able to avoid extinction and the classification as an endangered species because of this environmental intervention causing neighboring populations to immigrate. Immigration promotes survival by increasing genetic diversity; the loss of genetic diversity is discussed in the next section as a harmful factor in small populations.
Genetic effects.
Conservationists are often worried about a loss of genetic variation in small populations. There are two types of genetic variation that are important when dealing with small populations:
Contributing genetic factors.
Examples of genetic consequences that have occurred in inbred populations are high levels of hatching failure, bone abnormalities, low infant survivability, and decreased birth rates. Some populations with these consequences are cheetahs, which suffer from low infant survivability and a decreased birth rate due to having gone through a population bottleneck. Northern elephant seals, which also went through a population bottleneck, have had cranial bone structure changes to the lower mandibular tooth row. The wolves on Isle Royale, a population restricted to the island in Lake Superior, have bone malformations in the vertebral column in the lumbosacral region. These wolves also have syndactyly, which is the fusion of soft tissue between the toes of the front feet. These types of malformations are caused by inbreeding depression or genetic load.
Island populations.
Island populations are often small due to geographic isolation, limited habitat, and high levels of endemism. Because their environments are so isolated, gene flow within island populations is poor. Without the introduction of genetic diversity from gene flow, alleles are quickly fixed or lost. This reduces island populations' ability to adapt to any new circumstances and can result in higher levels of extinction. The majority of mammal, bird, and reptile extinctions since the 1600s have been from island populations. Moreover, 20% of bird species live on islands, but 90% of all bird extinctions have been from island populations. Human activities have been the major cause of extinctions on islands in the past 50,000 years due to the introduction of exotic species, habitat loss, and over-exploitation.
The Galapagos penguin is an endangered endemic species of the Galapagos islands. Its population has seen extreme fluctuations in population size due to marine perturbations, which have become more extreme due to climate change. The population has ranged from as high as 10,000 specimens to as low as 700. Currently it is estimated there are about 1000 mature individuals.
Conservation.
Conservation efforts for small populations at risk of extinction focus on increasing population size as well as genetic diversity, which determines the fitness of a population and its long-term persistence. Some methods include captive breeding and genetic rescue. Stabilizing the variance in family size is an effective approach that can double the effective population size and is often used in conservation strategies.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "1/2^{n-1}"
},
{
"math_id": 1,
"text": "1/2^n"
}
] | https://en.wikipedia.org/wiki?curid=96845 |
9685 | Earley parser | Algorithm for parsing context-free languages
In computer science, the Earley parser is an algorithm for parsing strings that belong to a given context-free language, though (depending on the variant) it may suffer problems with certain nullable grammars. The algorithm, named after its inventor, Jay Earley, is a chart parser that uses dynamic programming; it is mainly used for parsing in computational linguistics. It was first introduced in his dissertation in 1968 (and later appeared in an abbreviated, more legible, form in a journal).
Earley parsers are appealing because they can parse all context-free languages, unlike LR parsers and LL parsers, which are more typically used in compilers but which can only handle restricted classes of languages. The Earley parser executes in cubic time in the general case formula_0, where "n" is the length of the parsed string, quadratic time for unambiguous grammars formula_1, and linear time for all deterministic context-free grammars. It performs particularly well when the rules are written left-recursively.
Earley recogniser.
The following algorithm describes the Earley recogniser. The recogniser can be modified to create a parse tree as it recognises, and in that way can be turned into a parser.
The algorithm.
In the following descriptions, α, β, and γ represent any string of terminals/nonterminals (including the empty string), X and Y represent single nonterminals, and "a" represents a terminal symbol.
Earley's algorithm is a top-down dynamic programming algorithm. In the following, we use Earley's dot notation: given a production X → αβ, the notation X → α • β represents a condition in which α has already been parsed and β is expected.
Input position 0 is the position prior to input. Input position "n" is the position after accepting the "n"th token. (Informally, input positions can be thought of as locations at token boundaries.) For every input position, the parser generates a "state set". Each state is a tuple (X → α • β, "i"), consisting of the production currently being matched (X → α β), the current position in that production (visually represented by the dot •), and the position "i" in the input at which the matching of this production began: the origin position.
A state is finished when its current position is the last position of the right side of the production, that is, when there is no symbol to the right of the dot • in the visual representation of the state.
The state set at input position "k" is called S("k"). The parser is seeded with S(0) consisting of only the top-level rule. The parser then repeatedly executes three operations: "prediction", "scanning", and "completion". "Prediction": for every state in S("k") of the form (X → α • Y β, "j") (where "j" is the origin position as above), add (Y → • γ, "k") to S("k") for every production in the grammar with Y on the left-hand side (Y → γ). "Scanning": if "a" is the next symbol in the input stream, for every state in S("k") of the form (X → α • "a" β, "j"), add (X → α "a" • β, "j") to S("k"+1). "Completion": for every state in S("k") of the form (Y → γ •, "j"), find all states in S("j") of the form (X → α • Y β, "i"), and add (X → α Y • β, "i") to S("k").
Duplicate states are not added to the state set, only new ones. These three operations are repeated until no new states can be added to the set. The set is generally implemented as a queue of states to process, with the operation to be performed depending on what kind of state it is.
The algorithm accepts if (X → γ •, 0) ends up in S("n"), where (X → γ) is the top level-rule and "n" the input length, otherwise it rejects.
Pseudocode.
Adapted from Speech and Language Processing by Daniel Jurafsky and James H. Martin,
DECLARE ARRAY S;
function INIT(words)
S ← CREATE_ARRAY(LENGTH(words) + 1)
for k ← from 0 to LENGTH(words) do
S[k] ← EMPTY_ORDERED_SET
function EARLEY_PARSE(words, grammar)
INIT(words)
ADD_TO_SET((γ → •S, 0), S[0])
for k ← from 0 to LENGTH(words) do
for each state in S[k] do // S[k] can expand during this loop
if not FINISHED(state) then
if NEXT_ELEMENT_OF(state) is a nonterminal then
PREDICTOR(state, k, grammar) // non_terminal
else do
SCANNER(state, k, words) // terminal
else do
COMPLETER(state, k)
end
end
return S
procedure PREDICTOR((A → α•Bβ, j), k, grammar)
for each (B → γ) in GRAMMAR_RULES_FOR(B, grammar) do
ADD_TO_SET((B → •γ, k), S[k])
end
procedure SCANNER((A → α•aβ, j), k, words)
if k < LENGTH(words) and a ⊂ PARTS_OF_SPEECH(words[k]) then
ADD_TO_SET((A → αa•β, j), S[k+1])
end
procedure COMPLETER((B → γ•, x), k)
for each (A → α•Bβ, j) in S[x] do
ADD_TO_SET((A → αB•β, j), S[k])
end
Example.
Consider the following simple grammar for arithmetic expressions:
<P> ::= <S> # the start rule
<S> ::= <S> "+" <M> | <M>
<M> ::= <M> "*" <T> | <T>
<T> ::= "1" | "2" | "3" | "4"
With the input:
2 + 3 * 4
This is the sequence of state sets:
The state (P → S •, 0) in S(5) represents a completed parse. This state also appears in S(3) and S(1), since "2" and "2 + 3" are complete sentences.
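For illustration, the recogniser can be written compactly in Python. The sketch below is not a literal transcription of the pseudocode above (terminals are matched directly against input tokens rather than through parts of speech, and all names are choices made here); run on the example grammar, it accepts "2 + 3 * 4":
GRAMMAR = {
    "P": [("S",)],                      # the start rule
    "S": [("S", "+", "M"), ("M",)],
    "M": [("M", "*", "T"), ("T",)],
    "T": [("1",), ("2",), ("3",), ("4",)],
}

def earley_recognise(words, grammar, start="P"):
    n = len(words)
    S = [set() for _ in range(n + 1)]           # one state set per input position
    for body in grammar[start]:
        S[0].add((start, body, 0, 0))           # seed with the top-level rule
    for k in range(n + 1):
        queue = list(S[k])
        while queue:
            head, body, dot, origin = queue.pop()
            if dot < len(body) and body[dot] in grammar:        # prediction
                for prod in grammar[body[dot]]:
                    new = (body[dot], prod, 0, k)
                    if new not in S[k]:
                        S[k].add(new)
                        queue.append(new)
            elif dot < len(body):                               # scanning
                if k < n and words[k] == body[dot]:
                    S[k + 1].add((head, body, dot + 1, origin))
            else:                                               # completion
                for h, b, d, o in list(S[origin]):
                    if d < len(b) and b[d] == head:
                        new = (h, b, d + 1, o)
                        if new not in S[k]:
                            S[k].add(new)
                            queue.append(new)
    return any(h == start and d == len(b) and o == 0 for h, b, d, o in S[n])

print(earley_recognise("2 + 3 * 4".split(), GRAMMAR))           # True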
Constructing the parse forest.
Earley's dissertation briefly describes an algorithm for constructing parse trees by adding a set of pointers from each non-terminal in an Earley item back to the items that caused it to be recognized. But Tomita noticed that this does not take into account the relations between symbols, so if we consider the grammar S → SS | b and the string bbb, it only notes that each S can match one or two b's, and thus produces spurious derivations for bb and bbbb as well as the two correct derivations for bbb.
Another method is to build the parse forest as you go, augmenting each Earley item with a pointer to a shared packed parse forest (SPPF) node labelled with a triple (s, i, j) where s is a symbol or an LR(0) item (production rule with dot), and i and j give the section of the input string derived by this node. A node's contents are either a pair of child pointers giving a single derivation, or a list of "packed" nodes each containing a pair of pointers and representing one derivation. SPPF nodes are unique (there is only one with a given label), but may contain more than one derivation for ambiguous parses. So even if an operation does not add an Earley item (because it already exists), it may still add a derivation to the item's parse forest.
SPPF nodes are never labeled with a completed LR(0) item: instead they are labelled with the symbol that is produced so that all derivations are combined under one node regardless of which alternative production they come from.
Optimizations.
Philippe McLean and R. Nigel Horspool in their paper "A Faster Earley Parser" combine Earley parsing with LR parsing and achieve an improvement in an order of magnitude.
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{O}(n^3)"
},
{
"math_id": 1,
"text": "{O}(n^2)"
}
] | https://en.wikipedia.org/wiki?curid=9685 |
968591 | Writhe | Invariant of a knot diagram
In knot theory, there are several competing notions of the quantity writhe, or formula_0. In one sense, it is purely a property of an oriented link diagram and assumes integer values. In another sense, it is a quantity that describes the amount of "coiling" of a mathematical knot (or any closed simple curve) in three-dimensional space and assumes real numbers as values. In both cases, writhe is a geometric quantity, meaning that while deforming a curve (or diagram) in such a way that does not change its topology, one may still change its writhe.
Writhe of link diagrams.
In knot theory, the writhe is a property of an oriented link diagram. The writhe is the total number of positive crossings minus the total number of negative crossings.
A direction is assigned to the link at a point in each component and this direction is followed all the way around each component. For each crossing one comes across while traveling in this direction, if the strand underneath goes from right to left, the crossing is positive; if the lower strand goes from left to right, the crossing is negative. One way of remembering this is to use a variation of the right-hand rule.
For a knot diagram, using the right-hand rule with either orientation gives the same result, so the writhe is well-defined on unoriented knot diagrams.
The writhe of a knot is unaffected by two of the three Reidemeister moves: moves of Type II and Type III do not affect the writhe. Reidemeister move Type I, however, increases or decreases the writhe by 1. This implies that the writhe of a knot is "not" an isotopy invariant of the knot itself — only the diagram. By a series of Type I moves one can set the writhe of a diagram for a given knot to be any integer at all.
Writhe of a closed curve.
Writhe is also a property of a knot represented as a curve in three-dimensional space. Strictly speaking, a knot is such a curve, defined mathematically as an embedding of a circle in three-dimensional Euclidean space, formula_1. By viewing the curve from different vantage points, one can obtain different projections and draw the corresponding knot diagrams. Its writhe formula_0 (in the space curve sense) is equal to the average of the integral writhe values obtained from the projections from all vantage points. Hence, writhe in this situation can take on any real number as a possible value.
In a paper from 1961, Gheorghe Călugăreanu proved the following theorem: take a ribbon in formula_1, let formula_2 be the linking number of its border components, and let formula_3 be its total twist. Then the difference formula_4 depends only on the core curve of the ribbon, and
formula_5.
In a paper from 1959, Călugăreanu also showed how to calculate the writhe Wr with an integral. Let formula_6 be a smooth, simple, closed curve and let formula_7 and formula_8 be points on formula_6. Then the writhe is equal to the Gauss integral
formula_9.
Numerically approximating the Gauss integral for writhe of a curve in space.
Since writhe for a curve in space is defined as a double integral, we can approximate its value numerically by first representing our curve as a finite chain of formula_10 line segments. A procedure that was first derived by Michael Levitt for the description of protein folding and later used for supercoiled DNA by Konstantin Klenin and Jörg Langowski is to compute
formula_11,
where formula_12 is the exact evaluation of the double integral over line segments formula_13 and formula_14; note that formula_15 and formula_16.
To evaluate formula_12 for given segments numbered formula_13 and formula_14, number the endpoints of the two segments 1, 2, 3, and 4. Let formula_17 be the vector that begins at endpoint formula_18 and ends at endpoint formula_19. Define the following quantities:
formula_20
Then we calculate
formula_21
Finally, we compensate for the possible sign difference and divide by formula_22 to obtain
formula_23
In addition, other methods to calculate writhe can be fully described mathematically and algorithmically, and some of them outperform the method above (which has quadratic computational complexity) by having linear complexity.
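A direct transcription of this segment-pair procedure into Python with NumPy might look as follows (an illustrative sketch: the function and variable names are choices made here, the closed curve is assumed to be given as an ordered array of vertices, and degenerate segment configurations are not treated carefully):
import numpy as np

def _omega(p1, p2, p3, p4):
    # Omega_ij / (4*pi) for the segment (p1 -> p2) and the segment (p3 -> p4),
    # following the endpoint numbering 1, 2, 3, 4 used in the text above.
    r12, r13, r14 = p2 - p1, p3 - p1, p4 - p1
    r23, r24, r34 = p3 - p2, p4 - p2, p4 - p3
    def unit(v):
        nrm = np.linalg.norm(v)
        return v / nrm if nrm > 0 else v    # degenerate pairs fall back to zero
    n1 = unit(np.cross(r13, r14))
    n2 = unit(np.cross(r14, r24))
    n3 = unit(np.cross(r24, r23))
    n4 = unit(np.cross(r23, r13))
    def asin(u):
        return np.arcsin(np.clip(u, -1.0, 1.0))
    omega_star = (asin(np.dot(n1, n2)) + asin(np.dot(n2, n3))
                  + asin(np.dot(n3, n4)) + asin(np.dot(n4, n1)))
    return omega_star * np.sign(np.dot(np.cross(r34, r12), r13)) / (4.0 * np.pi)

def writhe(vertices):
    # Approximate writhe of the closed polygon whose i-th segment joins
    # vertices[i] to vertices[(i + 1) % N]; adjacent segments contribute zero.
    v = np.asarray(vertices, dtype=float)
    N = len(v)
    total = 0.0
    for i in range(N):
        for j in range(i + 2, N):
            if i == 0 and j == N - 1:
                continue                    # wrap-around neighbours share a vertex
            total += _omega(v[i], v[(i + 1) % N], v[j], v[(j + 1) % N])
    return 2.0 * total                      # counts both (i, j) and (j, i) orderings

# Example: a planar closed curve has writhe 0 (here exactly, since every sign factor vanishes).
t = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
print(writhe(np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]))    # 0.0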
Applications in DNA topology.
DNA will coil when twisted, just like a rubber hose or a rope will, and that is why biomathematicians use the quantity of "writhe" to describe the amount a piece of DNA is deformed as a result of this torsional stress. In general, this phenomenon of forming coils due to writhe is referred to as DNA supercoiling and is quite commonplace, and in fact in most organisms DNA is negatively supercoiled.
Any elastic rod, not just DNA, relieves torsional stress by coiling, an action which simultaneously untwists and bends the rod. F. Brock Fuller shows mathematically how the “elastic energy due to local twisting of the rod may be reduced if the central curve of the rod forms coils that increase its writhing number”.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\operatorname{Wr}"
},
{
"math_id": 1,
"text": "\\R^3"
},
{
"math_id": 2,
"text": "\\operatorname{Lk}"
},
{
"math_id": 3,
"text": "\\operatorname{Tw}"
},
{
"math_id": 4,
"text": "\\operatorname{Lk}-\\operatorname{Tw}"
},
{
"math_id": 5,
"text": "\\operatorname{Wr}=\\operatorname{Lk}-\\operatorname{Tw}"
},
{
"math_id": 6,
"text": "C"
},
{
"math_id": 7,
"text": "\\mathbf{r}_{1}"
},
{
"math_id": 8,
"text": "\\mathbf{r}_{2}"
},
{
"math_id": 9,
"text": "\n\\operatorname{Wr}=\\frac{1}{4\\pi}\\int_{C}\\int_{C}d\\mathbf{r}_{1}\\times d\\mathbf{r}_{2}\\cdot\\frac{\\mathbf{r}_{1}-\\mathbf{r}_{2}}{\\left|\\mathbf{r}_{1}-\\mathbf{r}_{2}\\right|^{3}}\n"
},
{
"math_id": 10,
"text": "N"
},
{
"math_id": 11,
"text": "\n\\operatorname{Wr}=\\sum_{i=1}^{N}\\sum_{j=1}^{N}\\frac{\\Omega_{ij}}{4\\pi}=2\\sum_{i=2}^{N}\\sum_{j<i}\\frac{\\Omega_{ij}}{4\\pi}"
},
{
"math_id": 12,
"text": "\\Omega_{ij}/{4\\pi}"
},
{
"math_id": 13,
"text": "i"
},
{
"math_id": 14,
"text": "j"
},
{
"math_id": 15,
"text": "\\Omega_{ij}=\\Omega_{ji}"
},
{
"math_id": 16,
"text": "\\Omega_{i,i+1}=\\Omega_{ii}=0"
},
{
"math_id": 17,
"text": "r_{pq}"
},
{
"math_id": 18,
"text": "p"
},
{
"math_id": 19,
"text": "q"
},
{
"math_id": 20,
"text": "\nn_{1}=\\frac{r_{13}\\times r_{14}}{\\left|r_{13}\\times r_{14}\\right|},\\; n_{2}=\\frac{r_{14}\\times r_{24}}{\\left|r_{14}\\times r_{24}\\right|},\\; n_{3}=\\frac{r_{24}\\times r_{23}}{\\left|r_{24}\\times r_{23}\\right|},\\; n_{4}=\\frac{r_{23}\\times r_{13}}{\\left|r_{23}\\times r_{13}\\right|}\n"
},
{
"math_id": 21,
"text": "\n\\Omega^{*}=\\arcsin\\left(n_{1}\\cdot n_{2}\\right)+\\arcsin\\left(n_{2}\\cdot n_{3}\\right)+\\arcsin\\left(n_{3}\\cdot n_{4}\\right)+\\arcsin\\left(n_{4}\\cdot n_{1}\\right).\n"
},
{
"math_id": 22,
"text": "4\\pi"
},
{
"math_id": 23,
"text": "\n\\frac{\\Omega}{4\\pi}=\\frac{\\Omega^{*}}{4\\pi}\\text{sign}\\left(\\left(r_{34}\\times r_{12}\\right)\\cdot r_{13}\\right).\n"
}
] | https://en.wikipedia.org/wiki?curid=968591 |
968638 | Sol LeWitt | American artist (1928–2007)
Solomon "Sol" LeWitt (September 9, 1928 – April 8, 2007) was an American artist linked to various movements, including conceptual art and minimalism.
LeWitt came to fame in the late 1960s with his wall drawings and "structures" (a term he preferred to "sculptures") but was prolific in a wide range of media including drawing, printmaking, photography, painting, installation, and artist's books. He has been the subject of hundreds of solo exhibitions in museums and galleries around the world since 1965. The first biography of the artist, "Sol LeWitt: A Life of Ideas", by Lary Bloom, was published by Wesleyan University Press in the spring of 2019.
Life.
LeWitt was born in Hartford, Connecticut, to a family of Jewish immigrants from Russia. His father died when he was 6. His mother took him to art classes at the Wadsworth Atheneum in Hartford. After earning a BFA from Syracuse University in 1949, LeWitt traveled to Europe where he was exposed to Old Master paintings. Shortly thereafter, he served in the Korean War, first in California, then Japan, and finally Korea. LeWitt moved to New York City in 1953 and set up a studio on the Lower East Side, in the old Ashkenazi Jewish settlement on Hester Street. During this time he studied at the School of Visual Arts while also pursuing his interest in design at "Seventeen" magazine, where he did paste-ups, mechanicals, and photostats. In 1955, he was a graphic designer in the office of architect I.M. Pei for a year. Around that time, LeWitt also discovered the work of the late 19th-century photographer Eadweard Muybridge, whose studies in sequence and locomotion were an early influence for him. These experiences, combined with an entry-level job as a night receptionist and clerk he took in 1960 at the Museum of Modern Art (MoMA) in New York, would influence LeWitt's later work.
At MoMA, LeWitt's co-workers included fellow artists Robert Ryman, Dan Flavin, Gene Beery, and Robert Mangold, and the future art critic and writer, Lucy Lippard who worked as a page in the library. Curator Dorothy Canning Miller's now famous 1960 "Sixteen Americans" exhibition with work by Jasper Johns, Robert Rauschenberg, and Frank Stella created a swell of excitement and discussion among the community of artists with whom LeWitt associated. LeWitt also became friends with Hanne Darboven, Eva Hesse, and Robert Smithson.
LeWitt taught at several New York schools, including New York University and the School of Visual Arts, during the late 1960s. In 1980, LeWitt left New York for Spoleto, Italy. After returning to the United States in the late 1980s, LeWitt made Chester, Connecticut, his primary residence. He died at age 78 in New York from cancer complications.
Work.
LeWitt is regarded as a founder of both Minimal and Conceptual art. His prolific two- and three-dimensional work ranges from wall drawings (over 1200 of which have been executed) to hundreds of works on paper extending to structures in the form of towers, pyramids, geometric forms, and progressions. These works range in size from books and gallery-sized installations to monumental outdoor pieces. LeWitt's first serial sculptures were created in the 1960s using the modular form of the square in arrangements of varying visual complexity. In Issue 5 of "0 To 9" magazine, LeWitt's work 'Sentences on Conceptual Art' was published. This piece became one of the most widely cited artists' writings of the 1960s, exploring the relationship between art, practice and art criticism. In 1979, LeWitt participated in the design for the Lucinda Childs Dance Company's piece "Dance".
Sculpture.
In the early 1960s, LeWitt first began to create his "structures," a term he used to describe his three-dimensional work. His frequent use of open, modular structures originates from the cube, a form that influenced the artist's thinking from the time that he first became an artist. After creating an early body of work made up of closed-form wooden objects, heavily lacquered by hand, in the mid-1960s he "decided to remove the skin altogether and reveal the structure." This skeletal form, the radically simplified open cube, became a basic building block of the artist's three-dimensional work. In the mid-1960s, LeWitt began to work with the open cube: twelve identical linear elements connected at eight corners to form a skeletal structure. From 1969, he would conceive many of his modular structures on a large scale, to be constructed in aluminum or steel by industrial fabricators. Several of LeWitt's cube structures stood at approximate eye level. The artist introduced bodily proportion to his fundamental sculptural unit at this scale.
Following early experimentation LeWitt settled on a standard version for his modular cubes, circa 1965: the negative space between the beams would stand to the positive space of the sculptural material itself in a ratio of 8.5:1, or formula_0. The material would also be painted white instead of black, to avoid the "expressiveness" of the black color of earlier, similar pieces. Both the ratio and the color were arbitrary aesthetic choices, but once taken they were used consistently in several pieces which typify LeWitt's "modular cube" works. Museums holding specimens of LeWitt's modular cube works have published lesson suggestions for elementary education, meant to encourage children to investigate the mathematical properties of the artworks.
Beginning in the mid-1980s, LeWitt composed some of his sculptures from stacked cinder blocks, still generating variations within self-imposed restrictions. At this time, he began to work with concrete blocks. In 1985, the first cement "Cube" was built in a park in Basel. From 1990 onwards, LeWitt conceived multiple variations on a tower to be constructed using concrete blocks. In a shift away from his well-known geometric vocabulary of forms, the works LeWitt realized in the late 1990s indicate vividly the artist's growing interest in somewhat random curvilinear shapes and highly saturated colors.
In 2007, LeWitt conceived "9 Towers", a cube made from more than 1,000 light-coloured bricks that measure five meters on each side. It was installed at the Kivik Art Centre in Lilla Stenshuvud, Sweden, in 2014.
Wall drawings.
In 1968, LeWitt began to conceive sets of guidelines or simple diagrams for his two-dimensional works drawn directly on the wall, executed first in graphite, then in crayon, later in colored pencil and finally in chromatically rich washes of India ink, bright acrylic paint, and other materials. Since he created a work of art for Paula Cooper Gallery's inaugural show in 1968, an exhibition to benefit the Student Mobilization Committee to End the War in Vietnam, thousands of LeWitt's drawings have been installed directly on the surfaces of walls. Between 1969 and 1970 he created four "Drawings Series", which presented different combinations of the basic element that governed many of his early wall drawings. In each series he applied a different system of change to each of twenty-four possible combinations of a square divided into four equal parts, each containing one of the four basic types of lines LeWitt used (vertical, horizontal, diagonal left, and diagonal right). The result is four possible permutations for each of the twenty-four original units. The system used in Drawings Series I is what LeWitt termed 'Rotation,' Drawings Series II uses a system termed 'Mirror,' Drawings Series III uses 'Cross & Reverse Mirror,' and Drawings Series IV uses 'Cross Reverse'.
In "Wall Drawing #122", first installed in 1972 at the Massachusetts Institute of Technology in Cambridge, the work contains "all combinations of two lines crossing, placed at random, using arcs from corners and sides, straight, not straight and broken lines" resulting in 150 unique pairings that unfold on the gallery walls. LeWitt further expanded on this theme, creating variations such as "Wall Drawing #260" at the Museum of Modern Art, New York, which systematically runs through all possible two-part combinations of arcs and lines. Conceived in 1995, "Wall Drawing #792: Black rectangles and squares" underscores LeWitt's early interest in the intersections between art and architecture. Spanning the two floors of the Barbara Gladstone Gallery, Brussels, this work consists of varying combinations of black rectangles, creating an irregular grid-like pattern.
LeWitt, who had moved to Spoleto, Italy, in the late 1970s credited his transition from graphite pencil or crayon to vivid ink washes, to his encounter with the frescoes of Giotto, Masaccio, and other early Florentine painters. In the late 1990s and early 2000s, he created highly saturated colorful acrylic wall drawings. While their forms are curvilinear, playful and seem almost random, they are also drawn according to an exacting set of guidelines. The bands are a standard width, for example, and no colored section may touch another section of the same color.
In 2005 LeWitt began a series of 'scribble' wall drawings, so termed because they required the draftsmen to fill in areas of the wall by scribbling with graphite. The scribbling occurs at six different densities, which are indicated on the artist's diagrams and then mapped out in string on the surface of the wall. The gradations of scribble density produce a continuum of tone that implies three dimensions. The largest scribble wall drawing, "Wall Drawing #1268", is on view at the Albright-Knox Art Gallery.
According to the principle of his work, LeWitt's wall drawings are usually executed by people other than the artist himself. Even after his death, people are still making these drawings. He would therefore eventually use teams of assistants to create such works. Writing about making wall drawings, LeWitt himself observed in 1971 that "each person draws a line differently and each person understands words differently". Between 1968 and his death in 2007, LeWitt created more than 1,270 wall drawings. The wall drawings, executed on-site, generally exist for the duration of an exhibition; they are then destroyed, giving the work in its physical form an ephemeral quality. They can be installed, removed, and then reinstalled in another location, as many times as required for exhibition purposes. When transferred to another location, the number of walls can change only by ensuring that the proportions of the original diagram are retained.
Permanent murals by LeWitt can be found at, among others, the AXA Center, New York (1984–85); The Swiss Re headquarters Americas in Armonk, New York, the Atlanta City Hall, Atlanta ("Wall Drawing #581", 1989/90); the Walter E. Washington Convention Center, Washington, DC ("Wall Drawing #1103", 2003); the Conrad Hotel, New York ("Loopy Doopy (Blue and Purple)", 1999); the Albright-Knox Art Gallery, Buffalo ("Wall Drawing #1268: Scribbles: Staircase (AKAG)", 2006/2010); Akron Art Museum, Akron (2007); the Columbus Circle Subway Station, New York; The Jewish Museum (New York), New York; the Green Center for Physics at MIT, Cambridge ("Bars of Colors Within Squares (MIT)", 2007); the Embassy of the United States in Berlin; the Wadsworth Atheneum; and John Pearson's House, Oberlin, Ohio. The artist's last public wall drawing, "Wall Drawing #1259: Loopy Doopy (Springfield)" (2008), is at the United States Courthouse in Springfield, Massachusetts (designed by architect Moshe Safdie). "Wall Drawing #599: Circles 18" (1989) — a bull's eye of concentric circles in alternating bands of yellow, blue, red and white — was installed at the lobby of the Jewish Community Center, New York, in 2013.
Gouaches.
In the 1980s, in particular after a trip to Italy, LeWitt started using gouache, an opaque water-based paint, to produce free-flowing abstract works in contrasting colors. These represented a significant departure from the rest of his practice, as he created these works with his own hands. LeWitt's gouaches are often created in series based on a specific motif. Past series have included "Irregular Forms", "Parallel Curves", "Squiggly Brushstrokes" and "Web-like Grids".
Although this loosely rendered composition may have been a departure from his earlier, more geometrically structured works visually, it nevertheless remained in alignment with his original artistic intent. LeWitt painstakingly made his own prints from his gouache compositions. In 2012, art advisor Heidi Lee Komaromi curated, "Sol LeWitt: Works on Paper 1983-2003", an exhibition revealing the variety of techniques LeWitt employed on paper during the final decades of his life.
Artist's books.
From 1966, LeWitt's interest in seriality led to his production of more than 50 artist's books throughout his career; he later donated many examples to the Wadsworth Athenaeum's library. In 1976 LeWitt helped found Printed Matter, Inc, a for-profit art space in the Tribeca neighborhood of New York City with fellow artists and critics Lucy Lippard, Carol Androcchio, Amy Baker (Sandback), Edit DeAk, Mike Glier, Nancy Linn, Walter Robinson, Ingrid Sischy, Pat Steir, Mimi Wheeler, Robin White and Irena von Zahn. LeWitt was a signal innovator of the genre of the "artist's book," a term that was coined for a 1973 exhibition curated by Dianne Perry Vanderlip at Moore College of Art and Design, Philadelphia.
Printed Matter was one of the first organizations dedicated to creating and distributing artists' books, incorporating self-publishing, small-press publishing, and artist networks and collectives. For LeWitt and others, Printed Matter also served as a support system for avant-garde artists, balancing its role as publisher, exhibition space, retail space, and community center for the downtown arts scene, in that sense emulating the network of aspiring artists LeWitt knew and enjoyed as a staff member at the Museum of Modern Art.
Architecture and landscaping.
LeWitt collaborated with architect Stephen Lloyd to design a synagogue for his congregation Beth Shalom Rodfe Zedek; he conceptualized the "airy" synagogue building, with its shallow dome supported by "exuberant wooden roof beams", an homage to the wooden synagogues of eastern Europe.
In 1981, LeWitt was invited by the Fairmount Park Art Association (currently known as the Association for Public Art) to propose a public artwork for a site in Fairmount Park. He selected the long, rectangular plot of land known as the Reilly Memorial and submitted a drawing with instructions. Installed in 2011, "Lines in Four Directions in Flowers" is made up of more than 7,000 plantings arranged in strategically configured rows. In his original proposal, the artist planned an installation of flower plantings of four different colors (white, yellow, red & blue) in four equal rectangular areas, in rows of four directions (vertical, horizontal, diagonal right & left) framed by evergreen hedges of about 2' height, with each color block comprising four to five species that bloom sequentially.
In 2004, "Six Curved Walls" sculpture was installed on the hillside slope of Crouse College on Syracuse University campus. The concrete block sculpture consists of six undulating walls, each 12 feet high, and spans 140 feet. The sculpture was designed and constructed to mark the inauguration of Nancy Cantor as the 11th Chancellor of Syracuse University.
Collection.
Since the early 1960s he and his wife, Carol Androccio, gathered nearly 9,000 works of art through purchases, in trades with other artists and dealers, or as gifts. In this way he acquired works by approximately 750 artists, including Dan Flavin, Robert Ryman, Hanne Darboven, Eva Hesse, Donald Judd, On Kawara, Kazuko Miyamoto, Carl Andre, Dan Graham, Hans Haacke, Gerhard Richter, and others. In 2007, the exhibition "Selections from The LeWitt Collection" at the Weatherspoon Art Museum assembled approximately 100 paintings, sculptures, drawings, prints, and photographs, among them works by Andre, Alice Aycock, Bernd and Hilla Becher, Jan Dibbets, Jackie Ferrara, Gilbert and George, Alex Katz, Robert Mangold, Brice Marden, Mario Merz, Shirin Neshat, Pat Steir, and many other artists.
Exhibitions.
LeWitt's work was first publicly exhibited in 1964 in a group show curated by Dan Flavin at the Kaymar Gallery, New York. Dan Graham's John Daniels Gallery later gave him his first solo show in 1965. In 1966, he participated in the "Primary Structures" exhibit at the Jewish Museum in New York (a seminal show which helped define the minimalist movement), submitting an untitled, open modular cube of 9 units. The same year he was included in the "10" exhibit at Dwan Gallery, New York. He was later invited by Harald Szeemann to participate in "When Attitude Becomes Form," at the Kunsthalle Bern, Switzerland, in 1969. Interviewed in 1993 about those years LeWitt remarked, "I decided I would make color or form recede and proceed in a three-dimensional way."
The Gemeentemuseum in The Hague presented his first retrospective exhibition in 1970, and his work was later shown in a major mid-career retrospective at the Museum of Modern Art, New York in 1978. In 1972/1973, LeWitt's first museum shows in Europe were mounted at the Kunsthalle Bern and the Museum of Modern Art, Oxford. In 1975, Lewitt created "The Location of a Rectangle for the Hartford Atheneum" for the third MATRIX exhibition at the Wadsworth Atheneum Museum of Art. Later that year, he participated in the Wadsworth Atheneum's sixth MATRIX exhibition, providing instructions for a second wall drawing. MoMA gave LeWitt his first retrospective in 1978-79. The exhibition traveled to various American venues. For the 1987 Skulptur Projekte Münster, Germany, he realized "Black Form: Memorial to the Missing Jews", a rectangular wall of black concrete blocks for the center of a plaza in front of an elegant, white Neoclassical government building; it is now installed at Altona Town Hall, Hamburg. Other major exhibitions since include "Sol LeWitt Drawings 1958-1992", which was organized by the Gemeentemuseum in The Hague, the Netherlands in 1992 which traveled over the next three years to museums in the United Kingdom, Germany, Switzerland, France, Spain, and the United States; and in 1996, the Museum of Modern Art, New York mounted a traveling survey exhibition: "Sol LeWitt Prints: 1970-1995". A major LeWitt retrospective was organized by the San Francisco Museum of Modern Art in 2000. The exhibition traveled to the Museum of Contemporary Art, Chicago, and Whitney Museum of American Art, New York.
In 2006, LeWitt's "Drawing Series…" was displayed at and was devoted to the 1970s drawings by the conceptual artist. Drafters and assistants drew directly on the walls using graphite, colored pencil, crayon, and chalk. The works were based on LeWitt's complex principles, which eliminated the limitations of the canvas for more extensive constructions.
"Sol LeWitt: A Wall Drawing Retrospective", a collaboration between the Yale University Art Gallery (YUAG), MASS MoCA (Massachusetts Museum of Contemporary Art), and the Williams College Museum of Art (WCMA) opened to the public in 2008 at MASS MoCA in North Adams, Massachusetts. The exhibition will be on view for 25 years and is housed in a three-story historic mill building in the heart of MASS MoCA's campus fully restored by Bruner/Cott and Associates architects (and outfitted with a sequence of new interior walls constructed to LeWitt's specifications.) The exhibition consists of 105 drawings — comprising nearly one acre of wall surface — that LeWitt created over 40 years from 1968 to 2007 and includes several drawings never before seen, some of which LeWitt created for the project shortly before his death.
Furthermore, the artist was the subject of exhibitions at P.S. 1 Contemporary Center, Long Island City ("Concrete Blocks"); the Addison Gallery of American Art, Andover ("Twenty-Five Years of Wall Drawings, 1968-1993"); and Wadsworth Atheneum Museum of Art, Hartford ("Incomplete Cubes"), which traveled to three art museums in the United States. At the time of his death, LeWitt had just organized a retrospective of his work at the Allen Memorial Art Museum in Oberlin, Ohio.
In Naples, "Sol LeWitt. L'artista e i suoi artisti" opened at the Museo on December 15, 2012, running until April 1, 2013.
Museum collections.
LeWitt's works are found in the most important museum collections including: Tate Modern, London, the Van Abbemuseum, Eindhoven, National Museum of Serbia in Belgrade, Centre Georges Pompidou, Paris, Hallen für Neue Kunst Schaffhausen, Switzerland, Australian National Gallery, Canberra, Australia, Guggenheim Museum, the Museum of Modern Art, New York, The Jewish Museum in Manhattan, Pérez Art Museum Miami, Florida, MASS MoCA, North Adams, Massachusetts Institute of Technology List Art Center's Public Art Collection, Cambridge, National Gallery of Art, Washington D.C., and the Hirshhorn Museum and Sculpture Garden. The erection of Double Negative Pyramid by Sol LeWitt at Europos Parkas in Vilnius, Lithuania was a significant event in the history of art in the post-Berlin Wall era.
Influence.
Sol LeWitt was one of the main figures of his time; he transformed the process of art-making by questioning the fundamental relationship between an idea, the subjectivity of the artist, and the artwork a given idea might produce. While many artists were challenging modern conceptions of originality, authorship, and artistic genius in the 1960s, LeWitt denied that approaches such as Minimalism, Conceptualism, and Process Art were merely technical or illustrative of philosophy. In his "Paragraphs on Conceptual Art", LeWitt asserted that Conceptual art was neither mathematical nor intellectual but intuitive, given that the complexity inherent to transforming an idea into a work of art was fraught with contingencies. LeWitt's art is not about the singular hand of the artist; it is the idea behind each work that surpasses the work itself. In the early 21st century, LeWitt's work, especially the wall drawings, has been critically acclaimed for its economic perspicacity. Though modest—most exist as simple instructions on a sheet of paper—the drawings can be made again and again and again, anywhere in the world, without the artist needing to be involved in their production.
Art world.
His auction record of $749,000 was set in 2014 for his gouache on paperboard piece "Wavy Brushstroke" (1995) at Sotheby's, New York.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{17}{2}"
}
] | https://en.wikipedia.org/wiki?curid=968638 |
968734 | Integrability conditions for differential systems | In mathematics, certain systems of partial differential equations are usefully formulated, from the point of view of their underlying geometric and algebraic structure, in terms of a system of differential forms. The idea is to take advantage of the way a differential form "restricts" to a submanifold, and the fact that this restriction is compatible with the exterior derivative. This is one possible approach to certain over-determined systems, for example, including Lax pairs of integrable systems. A Pfaffian system is specified by 1-forms alone, but the theory includes other types of example of differential system. To elaborate, a Pfaffian system is a set of 1-forms on a smooth manifold (which one sets equal to 0 to find "solutions" to the system).
Given a collection of differential 1-forms formula_0 on an formula_1-dimensional manifold formula_2, an integral manifold is an immersed (not necessarily embedded) submanifold whose tangent space at every point formula_3 is annihilated by (the pullback of) each formula_4.
A maximal integral manifold is an immersed (not necessarily embedded) submanifold
formula_5
such that the kernel of the restriction map on forms
formula_6
is spanned by the formula_4 at every point formula_7 of formula_8. If in addition the formula_4 are linearly independent, then formula_8 is (formula_9)-dimensional.
A Pfaffian system is said to be completely integrable if formula_2 admits a foliation by maximal integral manifolds. (Note that the foliation need not be regular; i.e. the leaves of the foliation might not be embedded submanifolds.)
An integrability condition is a condition on the formula_10 to guarantee that there will be integral submanifolds of sufficiently high dimension.
Necessary and sufficient conditions.
The necessary and sufficient conditions for complete integrability of a Pfaffian system are given by the Frobenius theorem. One version states that if the ideal formula_11 algebraically generated by the collection of α"i" inside the ring Ω("M") is differentially closed, in other words
formula_12
then the system admits a foliation by maximal integral manifolds. (The converse is obvious from the definitions.)
Example of a non-integrable system.
Not every Pfaffian system is completely integrable in the Frobenius sense. For example, consider the following one-form on R"3" − (0,0,0):
formula_13
If "dθ" were in the ideal generated by "θ" we would have, by the skewness of the wedge product
formula_14
But a direct calculation gives
formula_15
which is a nonzero multiple of the standard volume form on R"3". Therefore, there are no two-dimensional leaves, and the system is not completely integrable.
On the other hand, for the curve defined by
formula_16
then the restriction of θ to the curve is 0, and hence the curve is easily verified to be a solution (i.e. an integral curve) of the above Pfaffian system for any nonzero constant "c".
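Both claims can be checked symbolically. The following SymPy sketch (an illustration written for this article; it represents the 1-form by its coefficient functions rather than using a dedicated differential-forms API) reproduces the wedge product above and verifies that θ pulls back to 0 along the curve:
import sympy as sp

x, y, z, t, c = sp.symbols('x y z t c')

# theta = z dx + x dy + y dz, stored as its coefficient functions (z, x, y)
theta = sp.Matrix([z, x, y])

# For a 1-form with coefficients F on R^3, theta ^ dtheta = (F . curl F) dx^dy^dz
curl = sp.Matrix([
    sp.diff(theta[2], y) - sp.diff(theta[1], z),
    sp.diff(theta[0], z) - sp.diff(theta[2], x),
    sp.diff(theta[1], x) - sp.diff(theta[0], y),
])
print(sp.simplify(theta.dot(curl)))  # x + y + z, a nonzero multiple of the volume form

# Pullback of theta along the curve x = t, y = c, z = exp(-t/c)
X, Y, Z = t, c, sp.exp(-t / c)
pullback = Z * sp.diff(X, t) + X * sp.diff(Y, t) + Y * sp.diff(Z, t)
print(sp.simplify(pullback))         # 0, so the curve is an integral curve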
Examples of applications.
In Riemannian geometry, we may consider the problem of finding an orthogonal coframe "θ""i", i.e., a collection of 1-forms forming a basis of the cotangent space at every point with formula_17 which are closed (dθ"i" = 0, "i" = 1, 2, ..., "n"). By the Poincaré lemma, the θ"i" locally will have the form d"xi" for some functions "xi" on the manifold, and thus provide an isometry of an open subset of "M" with an open subset of R"n". Such a manifold is called locally flat.
This problem reduces to a question on the coframe bundle of "M". Suppose we had such a closed coframe
formula_18
If we had another coframe formula_19, then the two coframes would be related by an orthogonal transformation
formula_20
If the connection 1-form is "ω", then we have
formula_21
On the other hand,
formula_22
But formula_23 is the Maurer–Cartan form for the orthogonal group. Therefore, it obeys the structural equation
formula_24 and this is just the curvature of M: formula_25
After an application of the Frobenius theorem, one concludes that a manifold M is locally flat if and only if its curvature vanishes.
Generalizations.
Many generalizations exist to integrability conditions on differential systems which are not necessarily generated by one-forms. The most famous of these are the Cartan–Kähler theorem, which only works for real analytic differential systems, and the Cartan–Kuranishi prolongation theorem. See "Further reading" for details. The Newlander-Nirenberg theorem gives integrability conditions for an almost-complex structure. | [
{
"math_id": 0,
"text": "\\textstyle\\alpha_i, i=1,2,\\dots, k"
},
{
"math_id": 1,
"text": "\\textstyle n"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "\\textstyle p\\in N"
},
{
"math_id": 4,
"text": "\\textstyle \\alpha_i"
},
{
"math_id": 5,
"text": "i:N\\subset M"
},
{
"math_id": 6,
"text": "i^*:\\Omega_p^1(M)\\rightarrow \\Omega_p^1(N)"
},
{
"math_id": 7,
"text": "p"
},
{
"math_id": 8,
"text": "N"
},
{
"math_id": 9,
"text": "n-k"
},
{
"math_id": 10,
"text": "\\alpha_i"
},
{
"math_id": 11,
"text": "\\mathcal I"
},
{
"math_id": 12,
"text": "d{\\mathcal I}\\subset {\\mathcal I},"
},
{
"math_id": 13,
"text": "\\theta=z\\,dx +x\\,dy+y\\,dz."
},
{
"math_id": 14,
"text": "\\theta\\wedge d\\theta=0."
},
{
"math_id": 15,
"text": "\\theta\\wedge d\\theta=(x+y+z)\\,dx\\wedge dy\\wedge dz"
},
{
"math_id": 16,
"text": " x =t, \\quad y= c, \\qquad z = e^{-t/c}, \\quad t > 0 "
},
{
"math_id": 17,
"text": "\\langle\\theta^i,\\theta^j\\rangle=\\delta^{ij}"
},
{
"math_id": 18,
"text": "\\Theta=(\\theta^1,\\dots,\\theta^n)."
},
{
"math_id": 19,
"text": "\\Phi=(\\phi^1,\\dots,\\phi^n)"
},
{
"math_id": 20,
"text": "\\Phi=M\\Theta"
},
{
"math_id": 21,
"text": "d\\Phi=\\omega\\wedge\\Phi"
},
{
"math_id": 22,
"text": "\n\\begin{align}\nd\\Phi & = (dM)\\wedge\\Theta+M\\wedge d\\Theta \\\\\n& =(dM)\\wedge\\Theta \\\\\n& =(dM)M^{-1}\\wedge\\Phi.\n\\end{align}\n"
},
{
"math_id": 23,
"text": "\\omega=(dM)M^{-1}"
},
{
"math_id": 24,
"text": "d\\omega+\\omega\\wedge\\omega=0,"
},
{
"math_id": 25,
"text": "\\Omega=d\\omega+\\omega\\wedge\\omega=0."
}
] | https://en.wikipedia.org/wiki?curid=968734 |
9687983 | Proton Synchrotron Booster | CERN particle accelerator
The Proton Synchrotron Booster (PSB) is the first and smallest circular proton accelerator (a synchrotron) in the accelerator chain at the CERN injection complex, which also provides beams to the Large Hadron Collider. It contains four superimposed rings with a radius of 25 meters, which receive protons with an energy of 160 MeV from the linear accelerator Linac4 and accelerate them up to 2 GeV, ready to be injected into the Proton Synchrotron (PS). Before the PSB was built in 1972, Linac 1 injected directly into the Proton Synchrotron, but the increased injection energy provided by the booster allowed for more protons to be injected into the PS and a higher luminosity at the end of the accelerator chain.
The PSB does not only act as a proton injector for the PS but also provides protons at an energy of 1.4 GeV to On-Line Isotope Mass Separator (ISOLDE), the only experimental facility directly linked to the PSB.
Historical background.
1964–1968: Planning and start of construction.
Before the PSB became operational in 1972, the protons were directly delivered to the Proton Synchrotron (PS) by the linear accelerator Linac 1, providing the PS with protons of 50 MeV, which were then accelerated by the PS to 25 GeV at beam intensities of approximately 10^12 protons per pulse. However, with the development of new experiments (mainly at the Intersecting Storage Rings ISR), the demanded beam intensities of the order of 10^13 protons per pulse exceeded the capabilities of this setup. Therefore, different approaches to increasing the beam energy before the protons enter the PS were discussed.
Different suggestions for this new PS injector were made, for example another linear accelerator or five intersecting synchrotron rings inspired by the shape of the Olympic rings. Eventually, it was decided to go for a setup of four vertically stacked synchrotron rings with a radius of 25 meters, which was proposed in 1964. With this special design, it would become possible to reach the desired intensities of more than 10^13 protons per pulse.
In 1967, the budget of the overall update program was estimated to be 69.5 million CHF (1968 prices). More than half of this sum was devoted to the construction of the PSB, which started one year later, in 1968.
1972–1974: First beam and start-up.
The first proton beams in the PSB were accelerated on May 1, 1972, and the nominal energy of 800 MeV was reached on May 26. In October 1973, the intermediate intensity goal of 5.2 formula_0 10¹² protons per pulse delivered to the PS was reached. In total, it took around two years to achieve the design intensity of 10¹³ protons per pulse.
1973–1978: Update to Linac 2.
During the first years of operation, it became clear that the linear accelerator Linac 1, CERN's primary proton source at that time, was unable to keep up with the technical advances of the other machines within the accelerator complex. Therefore, it was decided in 1973 to build a new linear accelerator, which would later be called Linac 2. This new machine would provide protons with the same energy as before (50 MeV), but with higher beam currents of up to 150 mA and a longer pulse duration of 200 μs. Construction of Linac 2 started in December 1973 and was completed in 1978.
Linac 1 continued to operate as a source of light ions up to 1992.
1988: Upgrade to 1 GeV.
After more than ten years of operation, the constant increase of the beam intensity also demanded an increase in output energy of the PSB. Therefore, with only minor hardware adjustments, the PSB was upgraded to 1 GeV in 1988.
1980s–2003: Accelerating ions.
From the beginning of the 1980s until 2003, the PSB was also used to accelerate light ions like oxygen or alpha particles, which were delivered by Linac 1. After Linac 3, a dedicated ion linear accelerator, became operational, heavy ions such as lead and indium were also accelerated by the PSB.
From 2006 on, the Low Energy Ion Ring (LEIR) took over PSB's former task of accelerating ions.
1992: Connection to ISOLDE experiment.
Up to 1992, the only machine that used the output protons from the PSB was the PS. This changed in 1992, when the On-Line Isotope Mass Separator (ISOLDE) became the second recipient of PSB's protons. Before, ISOLDE had obtained protons from the Synchro-Cyclotron, but this machine had reached the end of its lifetime by the end of the 1980s. Thus, it was decided in 1989 to connect ISOLDE to the PSB.
1999: Preparation for the LHC and upgrade to 1.4 GeV.
With the Large Hadron Collider (LHC) on the horizon, another upgrade of the PSB, this time to 1.4 GeV, was necessary. This upgrade required more extensive hardware modifications than the previous upgrade to 1 GeV, because the limits of the PSB's design parameters had been reached. The upgrade was completed in 2000.
2010–2026: Future upgrades for the High Luminosity Large Hadron Collider.
In 2010, the cornerstone for another upgrade of the LHC was laid: the High Luminosity Large Hadron Collider.
The much higher required beam intensity made it necessary to increase the PSB's output energy to 2.0 GeV. This was implemented during Long Shutdown 2 (2019–2020) by the exchange and update of various key equipment of the PSB, for example the main power supply, the radio-frequency system, the transfer line to the PS and the cooling system.
Additionally, the input energy of the PSB has been increased: Linac4, which provides an output beam energy of 160 MeV, has replaced Linac 2. Linac4 enables the PSB to provide a higher-quality beam for the LHC by using hydrogen anions (H− ions) rather than bare protons (H+ ions). A stripping foil at the PSB injection point strips the electrons off the hydrogen anions, creating protons that are accumulated as beam bunches in the four PSB rings. These proton bunches are then recombined at the exit of the PSB and transferred further down the CERN injector chain.
Setup and operation.
The PSB is part of CERN's accelerator complex. By the time it was constructed, the Meyrin campus had just been enlarged, now covering French territory as well. The center of PSB's rings sits directly on the border between France and Switzerland. Due to the countries’ different regulations regarding buildings at the border, it was decided to build the main PSB construction underground. The only visible PSB infrastructure is located on the Swiss side.
The PSB consists of four vertically stacked rings with a radius of 25 meters. Each ring is sectioned into 16 periods with two dipole magnets per period and a triplet focusing structure made up of three quadrupole magnets (focusing, defocusing, focusing). Every magnet structure consists of four single magnets for the four rings stacked on top of each other, sharing one yoke.
Since the PSB consists of four rings in contrast to only one beamline in Linac 2 and one ring in the PS, a special construction is necessary to couple the proton beams in and out. The proton beam coming from Linac 2 is split up vertically into four different beams by the so-called proton distributor: The beam travels through a series of pulsed magnets, which successively deflect parts of the incoming beam to different angles. This results in four beamlets filling the four rings, as well as the rising and falling edge of the proton pulse, which get dumped after the proton distributor.
Similarly, the four beamlets are merged again after they have been accelerated by the PSB. With a series of different magnetic structures, the beams from the four rings are brought to one vertical level and are then directed towards the PS.
Of the protons accelerated by the PSB in 2017, 61.45% were delivered to ISOLDE, and only a small fraction, 0.084%, was used by the LHC.
Results and discoveries.
The only direct experiment that is fed by PSB's protons is the On-Line Isotope Mass Separator (ISOLDE). There, the protons are used to create different types of low-energy radioactive nuclei. With these, a wide variety of experiments ranging from nuclear and atomic physics to solid state physics and life sciences are conducted. In 2010, the MEDICIS facility was initiated as part of ISOLDE, which uses leftover protons from ISOLDE targets to produce radioisotopes suitable for medical purposes.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\times"
}
] | https://en.wikipedia.org/wiki?curid=9687983 |
968834 | VO2 max | Maximum rate of oxygen consumption as measured during incremental exercise
V̇O2 max (also maximal oxygen consumption, maximal oxygen uptake or maximal aerobic capacity) is the maximum rate of oxygen consumption attainable during physical exertion. The name is derived from three abbreviations: "V̇" for volume (the dot over the V indicates "per unit of time" in Newton's notation), "O2" for oxygen, and "max" for maximum and usually normalized per kilogram of body mass. A similar measure is V̇O2 peak (peak oxygen consumption), which is the measurable value from a session of physical exercise, be it incremental or otherwise. It could match or underestimate the actual V̇O2 max. Confusion between the values in older and popular fitness literature is common. The capacity of the lung to exchange oxygen and carbon dioxide is constrained by the rate of blood oxygen transport to active tissue.
The measurement of V̇O2 max in the laboratory provides a quantitative value of endurance fitness for comparison of individual training effects and between people in endurance training. Maximal oxygen consumption reflects cardiorespiratory fitness and endurance capacity in exercise performance. Elite athletes, such as competitive distance runners, racing cyclists or Olympic cross-country skiers, can achieve V̇O2 max values exceeding 90 mL/(kg·min), while some endurance animals, such as Alaskan huskies, have V̇O2 max values exceeding 200 mL/(kg·min).
In physical training, especially in its academic literature, V̇O2 max is often used as a reference level to quantify exertion, for example 65% of V̇O2 max as a threshold for sustainable exercise. It is generally regarded as a more rigorous reference than heart rate, but it is more elaborate to measure.
Normalization per body mass.
V̇O2 max is expressed either as an absolute rate in (for example) litres of oxygen per minute (L/min) or as a relative rate in (for example) millilitres of oxygen per kilogram of the body mass per minute (e.g., mL/(kg·min)). The latter expression is often used to compare the performance of endurance sports athletes. However, V̇O2 max generally does not vary linearly with body mass, either among individuals within a species or among species, so comparisons of the performance capacities of individuals or species that differ in body size must be done with appropriate statistical procedures, such as analysis of covariance.
Measurement and calculation.
Measurement.
Accurately measuring V̇O2 max involves a physical effort sufficient in duration and intensity to fully tax the aerobic energy system. In general clinical and athletic testing, this usually involves a graded exercise test in which exercise intensity is progressively increased while measuring ventilation and the oxygen and carbon dioxide concentrations of the inhaled and exhaled air.
V̇O2 max is measured during a cardiopulmonary exercise test (CPX test). The test is done on a treadmill or cycle ergometer. In untrained subjects, V̇O2 max is 10% to 20% lower when using a cycle ergometer compared with a treadmill. However, trained cyclists' results on the cycle ergometer are equal to or even higher than those obtained on the treadmill.
The classic V̇O2 max, in the sense of Hill and Lupton (1923), is reached when oxygen consumption remains at a steady state ("plateau") despite an increase in workload. The occurrence of a plateau is not guaranteed and may vary by person and sampling interval, leading to modified protocols with varied results.
Calculation: the Fick equation.
V̇O2 may also be calculated by the Fick equation:
formula_0, when these values are obtained during exertion at a maximal effort. Here "Q" is the cardiac output of the heart, "Ca"O2 is the arterial oxygen content, and "Cv"O2 is the venous oxygen content. ("Ca"O2 – "Cv"O2) is also known as the arteriovenous oxygen difference.
The Fick equation may be used to measure V̇O2 in critically ill patients, but its usefulness is limited even when the patient is not exerting. Using a breath-based V̇O2 measurement to estimate cardiac output, on the other hand, appears to be sufficiently reliable.
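As a rough numerical illustration of the Fick calculation above (the cardiac output, oxygen contents and body mass below are assumed example values, not figures from the text):

```python
# Illustrative Fick calculation; Q, CaO2, CvO2 and body_mass are assumed example values.
Q = 25.0          # cardiac output at maximal effort, L blood per minute
CaO2 = 0.20       # arterial oxygen content, L O2 per L blood
CvO2 = 0.05       # mixed venous oxygen content, L O2 per L blood
body_mass = 75.0  # kg

vo2 = Q * (CaO2 - CvO2)   # L O2 per minute
print(f"VO2 = {vo2:.2f} L/min = {1000 * vo2 / body_mass:.0f} mL/(kg·min)")  # 3.75 L/min, 50 mL/(kg·min)
```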
Estimation using submaximal exercise testing.
The necessity for a subject to exert maximum effort in order to accurately measure V̇O2 max can be dangerous in those with compromised respiratory or cardiovascular systems; thus, sub-maximal tests for "estimating" V̇O2 max have been developed.
The heart rate ratio method.
An estimate of V̇O2 max is based on maximum and resting heart rates. In the Uth "et al." (2004) formulation, it is given by:
formula_1
This equation uses the ratio of maximum heart rate (HRmax) to resting heart rate (HRrest) to predict V̇O2 max. The researchers cautioned that the conversion rule was based on measurements on well-trained men aged 21 to 51 only, and may not be reliable when applied to other sub-groups. They also advised that the formula is most reliable when based on actual measurement of maximum heart rate, rather than an age-related estimate.
The Uth constant factor of 15.3 is given for well-trained men. Later studies have revised the constant factor for different populations. According to Voutilainen "et al." 2020, the constant factor should be 14 in around 40-year-old normal weight never-smoking men with no cardiovascular diseases, bronchial asthma, or cancer.
Every 10 years of age reduces the coefficient by one, as does a change in body weight from normal weight to obese or a change from never-smoker to current smoker. Consequently, the V̇O2 max of a 60-year-old obese man who currently smokes should be estimated by multiplying the HRmax to HRrest ratio by 10.
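A minimal sketch of this estimate, with the constant factor exposed as a parameter so that either the Uth value of 15.3 or the revised values discussed above can be used; the example heart rates are assumed values.

```python
def vo2max_hr_ratio(hr_max, hr_rest, constant=15.3):
    """Estimate VO2 max in mL/(kg·min) from the HRmax/HRrest ratio (Uth et al. 2004)."""
    return hr_max / hr_rest * constant

# Example: assumed heart rates of 190 bpm (maximum) and 60 bpm (resting)
print(round(vo2max_hr_ratio(190, 60), 1))               # roughly 48 with the Uth constant
print(round(vo2max_hr_ratio(190, 60, constant=14), 1))  # roughly 44 with the revised constant of 14
```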
Cooper test.
Kenneth H. Cooper conducted a study for the United States Air Force in the late 1960s. One of the results of this was the Cooper test in which the distance covered running in 12 minutes is measured. Based on the measured distance, an estimate of V̇O2 max [in mL/(kg·min)] can be calculated by inverting the linear regression equation, giving us:
formula_2
where "d"12 is the distance (in metres) covered in 12 minutes.
An alternative equation is:
formula_3
where "d"′12 is distance (in miles) covered in 12 minutes.
Multi-stage fitness test.
There are several other reliable tests and V̇O2 max calculators to estimate V̇O2 max, most notably the multi-stage fitness test (or "beep" test).
Rockport fitness walking test.
Estimation of V̇O2 max from a timed one-mile track walk in decimal minutes (t, e.g.: 20:35 would be specified as 20.58), sex, age in years, body weight in pounds (BW, lbs), and 60-second heart rate in beats-per-minute (HR, bpm) at the end of the mile. The constant x is 6.3150 for males, 0 for females.
formula_4
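A sketch of this estimate as a function; the walker's weight, age, time and finishing heart rate in the example are assumed values.

```python
def vo2max_rockport(time_min, weight_lb, age_yr, hr_bpm, male):
    """Estimate VO2 max in mL/(kg·min) from a timed one-mile walk (Rockport test)."""
    x = 6.3150 if male else 0.0
    return (132.853 - 0.0769 * weight_lb - 0.3877 * age_yr
            - 3.2649 * time_min - 0.1565 * hr_bpm + x)

# Example: assumed values for a 30-year-old, 170 lb man who walks the mile
# in 13.5 minutes and finishes with a heart rate of 120 bpm
print(round(vo2max_rockport(13.5, 170, 30, 120, male=True), 1))   # ≈ 51.6
```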
Reference values.
Men have a V̇O2 max that is on average 26% higher (6.6 mL/(kg·min)) than women's on the treadmill and 37.9% higher (7.6 mL/(kg·min)) on the cycle ergometer. V̇O2 max is on average 22% higher (4.5 mL/(kg·min)) when measured using a treadmill compared with a cycle ergometer.
Effect of training.
Non-athletes.
The average untrained healthy male has a V̇O2 max of approximately 35–40 mL/(kg·min). The average untrained healthy female has a V̇O2 max of approximately 27–31 mL/(kg·min). These scores can improve with training and decrease with age, though the degree of trainability also varies widely.
Athletes.
In sports where endurance is an important component in performance, such as road cycling, rowing, cross-country skiing, swimming, and long-distance running, world-class athletes typically have high V̇O2 max values. Elite male runners can consume up to 85 mL/(kg·min), and female elite runners can consume about 77 mL/(kg·min).
Norwegian cyclist Oskar Svendsen holds the record for the highest V̇O2 max ever tested, at 97.5 mL/(kg·min).
Animals.
V̇O2 max has been measured in other animal species. During loaded swimming, mice had a V̇O2 max of around 140 mL/(kg·min). Thoroughbred horses had a V̇O2 max of around 193 mL/(kg·min) after 18 weeks of high-intensity training. Alaskan huskies running in the Iditarod Trail Sled Dog Race had V̇O2 max values as high as 240 mL/(kg·min). Estimated V̇O2 max for pronghorn antelopes was as high as 300 mL/(kg·min).
Limiting factors.
The factors affecting V̇O2 may be separated into supply and demand. Supply is the transport of oxygen from the lungs to the mitochondria (combining pulmonary function, cardiac output, blood volume, and capillary density of the skeletal muscle) while demand is the rate at which the mitochondria can reduce oxygen in the process of oxidative phosphorylation. Of these, the supply factors may be more limiting. However, it has also been argued that while trained subjects are probably supply limited, untrained subjects can indeed have a demand limitation.
General characteristics that affect V̇O2 max include age, sex, fitness and training, and altitude. V̇O2 max can be a poor predictor of performance in runners due to variations in running economy and fatigue resistance during prolonged exercise. The body works as a system. If one of these factors is sub-par, then the whole system's normal capacity is reduced.
The drug erythropoietin (EPO) can boost V̇O2 max by a significant amount in both humans and other mammals. This makes EPO attractive to athletes in endurance sports, such as professional cycling. EPO has been banned since the 1990s as an illicit performance-enhancing substance, but by 1998 it had become widespread in cycling and led to the Festina affair as well as being mentioned ubiquitously in the USADA 2012 report on the U.S. Postal Service Pro Cycling Team. Greg LeMond has suggested establishing a baseline for riders' V̇O2 max (and other attributes) to detect abnormal performance increases.
Clinical use to assess cardiorespiratory fitness and mortality.
V̇O2 max/peak is widely used as an indicator of cardiorespiratory fitness (CRF) in select groups of athletes or, rarely, in people under assessment for disease risk. In 2016, the American Heart Association (AHA) published a scientific statement recommending that CRF – quantifiable as V̇O2 max/peak – be regularly assessed and used as a clinical vital sign; ergometry (exercise wattage measurement) may be used if V̇O2 is unavailable. This statement was based on evidence that lower fitness levels are associated with a higher risk of cardiovascular disease, all-cause mortality, and mortality rates. In addition to risk assessment, the AHA recommendation cited the value for measuring fitness to validate exercise prescriptions, physical activity counseling, and improve both management and health of people being assessed.
A 2023 meta-analysis of observational cohort studies showed an inverse and independent association between V̇O2 max and all-cause mortality risk. Every one metabolic equivalent increase in estimated cardiorespiratory fitness was associated with an 11% reduction in mortality. The top third of V̇O2 max scores represented a 45% lower mortality in people compared with the lowest third.
As of 2023, V̇O2 max is rarely employed in routine clinical practice to assess cardiorespiratory fitness or mortality due to its considerable demand for resources and costs.
History.
British physiologist Archibald Hill introduced the concepts of maximal oxygen uptake and oxygen debt in 1922. Hill and German physician Otto Meyerhof shared the 1922 Nobel Prize in Physiology or Medicine for their independent work related to muscle energy metabolism. Building on this work, scientists began measuring oxygen consumption during exercise. Key contributions were made by Henry Taylor at the University of Minnesota, Scandinavian scientists Per-Olof Åstrand and Bengt Saltin in the 1950s and 60s, the Harvard Fatigue Laboratory, German universities, and the Copenhagen Muscle Research Centre.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ce{\\dot VO2} = Q \\times\\ (C_a\\ce{O2} - C_v\\ce{O2})"
},
{
"math_id": 1,
"text": "\\ce{\\dot VO2}\\max \\approx \\frac{\\text{HR}_\\max}{\\text{HR}_\\text{rest}} \\times 15.3\\text{ mL}/(\\text{kg}\\cdot\\text{minute})"
},
{
"math_id": 2,
"text": "\\ce{\\dot VO2}\\max \\approx {d_{12} - 504.9 \\over 44.73}"
},
{
"math_id": 3,
"text": "\\ce{\\dot VO2}\\max \\approx {(35.97 * d'_{12}) - 11.29}"
},
{
"math_id": 4,
"text": "\\ce{\\dot VO2}\\max \\approx 132.853 -0.0769\\cdot\\text{BW} -0.3877\\cdot\\text{age} -3.2649t -0.1565\\cdot\\text{HR} +x"
}
] | https://en.wikipedia.org/wiki?curid=968834 |
9688617 | Parkland formula | The Parkland formula, also known as Baxter formula, is a burn formula developed by Charles R. Baxter, used to estimate the amount of replacement fluid required for the first 24 hours in a burn patient so as to ensure the patient is hemodynamically stable. The milliliter amount of fluid required for the first 24 hours – usually Lactated Ringer's – is four times the product of the body weight and the burn percentage (i.e. body surface area affected by burns). The first half of the fluid is given within 8 hours from the burn incident, and the remaining over the next 16 hours. Only area covered by second-degree burns or greater is taken into consideration, as first-degree burns do not cause hemodynamically significant fluid shift to warrant fluid replacement.
The Parkland formula is mathematically expressed as:
formula_0
where mass (m) is in kilograms (kg), area (A) is the fraction of total body surface area burned (e.g., 0.20 for 20%, so that A · 100 is the burn percentage), and volume (V) is in milliliters (mL). For example, a person weighing 75 kg with burns to 20% of his or her body surface area would require 4 × 75 × 20 = 6,000 mL of fluid replacement within 24 hours. The first half of this amount is delivered within 8 hours from the burn incident, and the remaining fluid is delivered in the next 16 hours.
The burn percentage in adults can be estimated by applying the Wallace rule of nines (see total body surface area): 9% for each arm, 18% for each leg, 18% for the front of the torso, 18% for the back of the torso, and 9% for the head and 1% for the perineum.
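As a small illustration of the calculation just described, including the split between the first 8 hours and the following 16; the patient weight and burn fraction below are the example values from the text.

```python
def parkland(weight_kg, burn_fraction):
    """Total fluid (mL) for the first 24 h; burn_fraction is e.g. 0.20 for 20% TBSA."""
    total_ml = 4 * weight_kg * (burn_fraction * 100)
    first_8h = total_ml / 2   # given over the first 8 hours after the burn
    next_16h = total_ml / 2   # given over the following 16 hours
    return total_ml, first_8h, next_16h

# Example from the text: a 75 kg patient with 20% TBSA burns
total, first, rest = parkland(75, 0.20)
print(total, first, rest)     # 6000.0 3000.0 3000.0 (mL)
```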
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V = 4 \\cdot m \\cdot (A \\cdot 100)"
}
] | https://en.wikipedia.org/wiki?curid=9688617 |
968879 | Vortex tube | Device for separating compressed gas into hot and cold streams
The vortex tube, also known as the Ranque-Hilsch vortex tube, is a mechanical device that separates a compressed gas into hot and cold streams. The gas emerging from the hot end can reach temperatures of , and the gas emerging from the cold end can reach . It has no moving parts and is considered an environmentally friendly technology because it can work solely on compressed air and does not use Freon. Its efficiency is low, however, counteracting its other environmental advantages.
Pressurised gas is injected tangentially into a "swirl chamber" near one end of a tube, leading to a rapid rotation—the first vortex—as it moves along the inner surface of the tube to the far end. A conical nozzle allows gas specifically from this outer layer to escape at that end through a valve. The remainder of the gas is forced to return in an inner vortex of reduced diameter within the outer vortex. Gas from the inner vortex transfers energy to the gas in the outer vortex, so the outer layer is hotter at the far end than it was initially. The gas in the central vortex is likewise cooler upon its return to the starting-point, where it is released from the tube.
Method of operation.
To explain the temperature separation in a vortex tube, there are two main approaches:
Fundamental approach: the physics.
This approach is based on first-principles physics alone and is not limited to vortex tubes only, but applies to moving gas in general. It shows that temperature separation in a moving gas is due only to enthalpy conservation in a moving frame of reference.
The thermal process in the vortex tube can be estimated in the following way:
The main physical phenomenon of the vortex tube is the temperature separation between the cold vortex core and the warm vortex periphery. The "vortex tube effect" is fully explained with the work equation of Euler, also known as Euler's turbine equation, which can be written in its most general vectorial form as:
formula_0,
where formula_1 is the total, or stagnation temperature of the rotating gas at radial position formula_2, the absolute gas velocity as observed from the stationary frame of reference is denoted with formula_3; the angular velocity of the system is formula_4 and formula_5 is the isobaric heat capacity of the gas. This equation was published in 2012; it explains the fundamental operating principle of vortex tubes. The search for this explanation began in 1933 when the vortex tube was discovered and continued for more than 80 years.
The above equation is valid for an adiabatic turbine passage; it clearly shows that while gas moving towards the center is getting colder, the peripheral gas in the passage is "getting faster". Therefore, vortex cooling is due to angular propulsion. The more the gas cools by reaching the center, the more rotational energy it delivers to the vortex and thus the vortex rotates even faster. This explanation stems directly from the law of energy conservation. Compressed gas at room temperature is expanded in order to gain speed through a nozzle; it then climbs the centrifugal barrier of rotation during which energy is also lost. The lost energy is delivered to the vortex, which speeds its rotation. In a vortex tube, the cylindrical surrounding wall confines the flow at periphery and thus forces conversion of kinetic into internal energy, which produces hot air at the hot exit.
Therefore, the vortex tube is a rotorless turboexpander. It consists of a rotorless radial inflow turbine (cold end, center) and a rotorless centrifugal compressor (hot end, periphery). The work output of the turbine is converted into heat by the compressor at the hot end.
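As a rough numerical illustration of the Euler relation above: at the axis the rotational term vanishes, so the total-temperature drop from the periphery to the core equals v·(ω×r)/cp evaluated at the wall. The peripheral speed and the assumption of solid-body rotation below are illustrative assumptions, not figures from the text.

```python
c_p = 1005.0      # J/(kg·K), isobaric heat capacity of air
v_theta = 300.0   # m/s, assumed tangential speed of the gas at the tube wall

# For solid-body rotation v_theta = omega * r, so v·(omega × r) = v_theta**2 at the wall,
# while the same term vanishes at r = 0 (the cold core).
delta_T = v_theta**2 / c_p
print(f"Estimated total-temperature separation: {delta_T:.0f} K")   # ≈ 90 K
```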
Phenomenological approach.
This approach relies on observation and experimental data. It is specifically tailored to the geometrical shape of the vortex tube and the details of its flow and is designed to match the particular observables of the complex vortex tube flow, namely turbulence, acoustic phenomena, pressure fields, air velocities and many others. The earlier published models of the vortex tube are phenomenological. They are:
More on these models can be found in recent review articles on vortex tubes.
The phenomenological models were developed at an earlier time when the turbine equation of Euler was not thoroughly analyzed; in the engineering literature, this equation is studied mostly to show the work output of a turbine; while temperature analysis is not performed since turbine cooling has more limited application unlike power generation, which is the main application of turbines. Phenomenological studies of the vortex tube in the past have been useful in presenting empirical data. However, due to the complexity of the vortex flow this empirical approach was able to show only aspects of the effect but was unable to explain its operating principle. Dedicated to empirical details, for a long time the empirical studies made the vortex tube effect appear enigmatic and its explanation – a matter of debate.
History.
The vortex tube was invented in 1931 by French physicist Georges J. Ranque. It was rediscovered by Paul Dirac in 1934 while he was searching for a device to perform isotope separation, leading to development of the Helikon vortex separation process. German physicist Rudolf Hilsch improved the design and published a widely read paper in 1947 on the device, which he called a "Wirbelrohr" (literally, whirl pipe).
In 1954, Westley published a comprehensive survey entitled "A bibliography and survey of the vortex tube", which included over 100 references. In 1951 Curley and McGree, in 1956 Kalvinskas, in 1964 Dobratz, in 1972 Nash, and in 1979 Hellyar made important contribution to the RHVT literature by their extensive reviews on the vortex tube and its applications.
From 1952 to 1963, C. Darby Fulton, Jr. obtained four U.S. patents relating to the development of the vortex tube. In 1961, Fulton began manufacturing the vortex tube under the company name Fulton Cryogenics. Fulton sold the company to Vortec, Inc. The vortex tube was used to separate gas mixtures, oxygen and nitrogen, carbon dioxide and helium, carbon dioxide and air in 1967 by Linderstrom-Lang.
Vortex tubes also seem to work with liquids to some extent, as demonstrated by Hsueh and Swenson in a laboratory experiment in which free-body rotation occurs from the core and a thick boundary layer forms at the wall. The air is separated, producing a cooler air stream at the exhaust, as in a refrigerator. In 1988 R. T. Balmer applied liquid water as the working medium. It was found that when the inlet pressure is high, for instance 20–50 bar, the heat-energy separation process exists in incompressible (liquid) vortex flow as well. Note that this separation is only due to heating; cooling is no longer observed, since cooling requires compressibility of the working fluid.
Efficiency.
Vortex tubes have lower efficiency than traditional air conditioning equipment. They are commonly used for inexpensive spot cooling, when compressed air is available.
Applications.
Current applications.
Commercial vortex tubes are designed for industrial applications to produce a temperature drop of up to . With no moving parts, no electricity, and no refrigerant, a vortex tube can produce refrigeration up to using 100 standard cubic feet per minute (2.832 m3/min) of filtered compressed air at . A control valve in the hot air exhaust adjusts temperatures, flows and refrigeration over a wide range.
Vortex tubes are used for cooling of cutting tools (lathes and mills, both manually-operated and CNC machines) during machining. The vortex tube is well-matched to this application: machine shops generally already use compressed air, and a fast jet of cold air provides both cooling and removal of the chips produced by the tool. This eliminates or drastically reduces the need for liquid coolant, which is messy, expensive, and environmentally hazardous.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " T - \\frac{ \\vec v \\cdot \\vec \\omega \\times \\vec r}{c_p}=\\mbox{const} "
},
{
"math_id": 1,
"text": " T "
},
{
"math_id": 2,
"text": "\\vec r"
},
{
"math_id": 3,
"text": "\\vec v"
},
{
"math_id": 4,
"text": "\\vec \\omega "
},
{
"math_id": 5,
"text": " c_p "
}
] | https://en.wikipedia.org/wiki?curid=968879 |
9688913 | Kapustinskii equation | Formula for lattice energy
The Kapustinskii equation calculates the lattice energy "UL" for an ionic crystal, which is experimentally difficult to determine. It is named after Anatoli Fedorovich Kapustinskii who published the formula in 1956.
formula_0
The calculated lattice energy gives a good estimation for the Born–Landé equation; the real value differs in most cases by less than 5%.
Furthermore, one is able to determine the ionic radii (or more properly, the thermochemical radius) using the Kapustinskii equation when the lattice energy is known. This is useful for rather complex ions like sulfate (SO₄²⁻) or phosphate (PO₄³⁻).
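A minimal sketch of such a lattice-energy estimate for NaCl. The constant K ≈ 1.2025×10⁻⁴ J·m·mol⁻¹ and d ≈ 3.45×10⁻¹¹ m used below are commonly tabulated values for the two-term form and are not quoted above; the ionic radii are likewise assumed illustrative values.

```python
K = 1.2025e-4   # J·m·mol^-1 (assumed standard value for the two-term form)
d = 3.45e-11    # m (assumed standard value)

def kapustinskii(nu, z_plus, z_minus, r_plus_m, r_minus_m):
    """Lattice energy in J/mol; ionic radii in metres."""
    r_sum = r_plus_m + r_minus_m
    return K * nu * abs(z_plus) * abs(z_minus) / r_sum * (1 - d / r_sum)

# NaCl: two ions per formula unit, assumed radii r(Na+) ≈ 102 pm, r(Cl−) ≈ 181 pm
U = kapustinskii(nu=2, z_plus=+1, z_minus=-1, r_plus_m=102e-12, r_minus_m=181e-12)
print(f"U_L ≈ {U / 1000:.0f} kJ/mol")   # ≈ 746 kJ/mol, within a few percent of the experimental ~787 kJ/mol
```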
Derivation from the Born–Landé equation.
Kapustinskii originally proposed the following simpler form, which he faulted as "associated with antiquated concepts of the character of repulsion forces".
formula_1
Here, "K"' = 1.079×10−4 J·m·mol−1. This form of the Kapustinskii equation may be derived as an approximation of the Born–Landé equation, below.
formula_2
Kapustinskii replaced "r"0, the measured distance between ions, with the sum of the corresponding ionic radii. In addition, the Born exponent, "n", was assumed to have a mean value of 9. Finally, Kapustinskii noted that the Madelung constant, "M", was approximately 0.88 times the number of ions in the empirical formula. The derivation of the later form of the Kapustinskii equation followed similar logic, starting from the quantum chemical treatment in which the final term is 1 − "d"/"r"0, where "d" is as defined above. Replacing "r"0 as before yields the full Kapustinskii equation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "U_{L} = {K} \\cdot \\frac{\\nu \\cdot |z^+| \\cdot |z^-|}{r^+ + r^-} \\cdot \\biggl( 1 - \\frac{d}{r^+ + r^-} \\biggr) "
},
{
"math_id": 1,
"text": "U_{L} = {K'} \\cdot \\frac{\\nu \\cdot |z^+| \\cdot |z^-|}{r^+ + r^-} "
},
{
"math_id": 2,
"text": "U_L =- \\frac{N_AMz^+z^- e^2 }{4 \\pi \\epsilon_0 r_0}\\left(1-\\frac{1}{n}\\right)"
}
] | https://en.wikipedia.org/wiki?curid=9688913 |
96918 | Coenzyme Q – cytochrome c reductase | Class of enzymes
The coenzyme Q : cytochrome "c" – oxidoreductase, sometimes called the cytochrome "bc"1 complex, and at other times complex III, is the third complex in the electron transport chain (EC 1.10.2.2), playing a critical role in biochemical generation of ATP (oxidative phosphorylation). Complex III is a multisubunit transmembrane protein encoded by both the mitochondrial (cytochrome b) and the nuclear genomes (all other subunits). Complex III is present in the mitochondria of all animals and all aerobic eukaryotes and the inner membranes of most bacteria. Mutations in Complex III cause exercise intolerance as well as multisystem disorders. The bc1 complex contains 11 subunits, 3 respiratory subunits (cytochrome B, cytochrome C1, Rieske protein), 2 core proteins and 6 low-molecular weight proteins.
Ubiquinol—cytochrome-c reductase catalyzes the chemical reaction
QH2 + 2 ferricytochrome c formula_0 Q + 2 ferrocytochrome c + 2 H+
Thus, the two substrates of this enzyme are quinol (QH2) and ferri- (Fe3+) cytochrome c, whereas its 3 products are quinone (Q), ferro- (Fe2+) cytochrome c, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on diphenols and related substances as donor with a cytochrome as acceptor. This enzyme participates in oxidative phosphorylation. It has four cofactors: cytochrome c1, cytochrome b-562, cytochrome b-566, and a 2-Iron ferredoxin of the Rieske type.
Nomenclature.
The systematic name of this enzyme class is ubiquinol:ferricytochrome-c oxidoreductase. Other names in common use include:
Structure.
Compared to the other major proton-pumping subunits of the electron transport chain, the number of subunits found can be small, as small as three polypeptide chains. This number does increase, and eleven subunits are found in higher animals. Three subunits have prosthetic groups. The cytochrome "b" subunit has two "b"-type hemes ("b"L and "b"H), the cytochrome "c" subunit has one "c"-type heme ("c"1), and the Rieske Iron Sulfur Protein subunit (ISP) has a two iron, two sulfur iron-sulfur cluster (2Fe•2S).
Structures of complex III: PDB: 1KYO, PDB: 1L0L
Composition of complex.
In vertebrates the bc1 complex, or Complex III, contains 11 subunits: 3 respiratory subunits, 2 core proteins and 6 low-molecular weight proteins. Proteobacterial complexes may contain as few as three subunits.
Reaction.
It catalyzes the reduction of cytochrome "c" by
oxidation of coenzyme Q (CoQ) and the concomitant pumping of 4 protons from the mitochondrial matrix to the intermembrane space:
QH2 + 2 cytochrome "c" (FeIII) + 2 H+ (matrix) → Q + 2 cytochrome "c" (FeII) + 4 H+ (intermembrane space)
In the process, called the Q cycle, two protons are consumed from the matrix (M), four protons are released into the intermembrane space (IM) and two electrons are passed to cytochrome "c".
Reaction mechanism.
The reaction mechanism for complex III (cytochrome bc1, coenzyme Q: cytochrome C oxidoreductase) is known as the ubiquinone ("Q") cycle. In this cycle four protons get released into the positive "P" side (inter membrane space), but only two protons get taken up from the negative "N" side (matrix). As a result, a proton gradient is formed across the membrane. In the overall reaction, two ubiquinols are oxidized to ubiquinones and one ubiquinone is reduced to ubiquinol. In the complete mechanism, two electrons are transferred from ubiquinol to ubiquinone, via two cytochrome c intermediates.
Overall: two ubiquinols are oxidized at the Qo site and one ubiquinone is reduced back to ubiquinol at the Qi site; two electrons are passed to two molecules of cytochrome c, four protons are released into the intermembrane space and two protons are taken up from the matrix.
The reaction proceeds according to the following steps:
Round 1: a ubiquinol (QH2) bound at the Qo site is oxidized; one electron passes via the Rieske iron–sulfur centre to cytochrome c1 and then to cytochrome c, while the other electron passes via hemes bL and bH to a ubiquinone (Q) bound at the Qi site, reducing it to a semiquinone radical. The two protons of the ubiquinol are released into the intermembrane space.
Round 2: a second QH2 is oxidized at the Qo site in the same way; its first electron again reduces a cytochrome c, while its second electron reduces the semiquinone at the Qi site to ubiquinol, which takes up two protons from the matrix. Two more protons are released into the intermembrane space.
Inhibitors of complex III.
There are three distinct groups of Complex III inhibitors.
Some have been commercialized as fungicides (the strobilurin derivatives, best known of which is azoxystrobin; QoI inhibitors) and as anti-malaria agents (atovaquone).
Also propylhexedrine inhibits cytochrome c reductase.
Oxygen free radicals.
A small fraction of electrons leave the electron transport chain before reaching complex IV. Premature electron leakage to oxygen results in the formation of superoxide. The relevance of this otherwise minor side reaction is that superoxide and other reactive oxygen species are highly toxic and are thought to play a role in several pathologies, as well as aging (the free radical theory of aging). Electron leakage occurs mainly at the Qo site and is stimulated by antimycin A. Antimycin A locks the "b" hemes in the reduced state by preventing their re-oxidation at the Qi site, which, in turn, causes the steady-state concentrations of the Qo semiquinone to rise, the latter species reacting with oxygen to form superoxide. A high membrane potential is thought to have a similar effect. Superoxide produced at the Qo site can be released both into the mitochondrial matrix and into the intermembrane space, where it can then reach the cytosol. This could be explained by the fact that Complex III might produce superoxide as membrane-permeable HOO• rather than as membrane-impermeable O₂•−.
Mutations in complex III genes in human disease.
Mutations in complex III-related genes typically manifest as exercise intolerance. Other mutations have been reported to cause septo-optic dysplasia and multisystem disorders. However, mutations in BCS1L, a gene responsible for proper maturation of complex III, can result in Björnstad syndrome and the GRACILE syndrome, which in neonates are lethal conditions that have multisystem and neurologic manifestations typifying severe mitochondrial disorders. The pathogenicity of several mutations has been verified in model systems such as yeast.
The extent to which these various pathologies are due to bioenergetic deficits or overproduction of superoxide is presently unknown.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=96918 |
969477 | Preimage attack | Attack model against cryptographic hash functions
In cryptography, a preimage attack on cryptographic hash functions tries to find a message that has a specific hash value. A cryptographic hash function should resist attacks on its preimage (set of possible inputs).
In the context of attack, there are two types of preimage resistance: preimage resistance, where for essentially all pre-specified outputs it is computationally infeasible to find any input that hashes to that output (i.e., given y, it is difficult to find any x such that h(x) = y); and second-preimage resistance, where it is computationally infeasible to find any second input that has the same output as a specified input (i.e., given x, it is difficult to find a second input x′ ≠ x such that h(x) = h(x′)).
These can be compared with collision resistance, in which it is computationally infeasible to find any two distinct inputs x, x′ that hash to the same output; i.e., such that h(x) = h(x′).
Collision resistance implies second-preimage resistance. Second-preimage resistance implies preimage resistance only if the size of the hash function's inputs can be substantially (e.g., factor 2) larger than the size of the hash function's outputs. Conversely, a second-preimage attack implies a collision attack (trivially, since, in addition to x′, x is already known right from the start).
Applied preimage attacks.
By definition, an ideal hash function is such that the fastest way to compute a first or second preimage is through a brute-force attack. For an n-bit hash, this attack has a time complexity of 2ⁿ, which is considered too high for a typical output size of n = 128 bits. If such complexity is the best that can be achieved by an adversary, then the hash function is considered preimage-resistant. However, there is a general result that quantum computers perform a structured preimage attack in formula_0, which also implies second preimage and thus a collision attack.
Faster preimage attacks can be found by cryptanalysing certain hash functions, and are specific to that function. Some significant preimage attacks have already been discovered, but they are not yet practical. If a practical preimage attack is discovered, it would drastically affect many Internet protocols. In this case, "practical" means that it could be executed by an attacker with a reasonable amount of resources. For example, a preimaging attack that costs trillions of dollars and takes decades to preimage one desired hash value or one message is not practical; one that costs a few thousand dollars and takes a few weeks might be very practical.
All currently known practical or almost-practical attacks on MD5 and SHA-1 are collision attacks. In general, a collision attack is easier to mount than a preimage attack, as it is not restricted by any set value (any two values can be used to collide). The time complexity of a brute-force collision attack, in contrast to the preimage attack, is only formula_1.
Restricted preimage space attacks.
The computational infeasibility of a first preimage attack on an ideal hash function assumes that the set of possible hash inputs is too large for a brute force search. However if a given hash value is known to have been produced from a set of inputs that is relatively small or is ordered by likelihood in some way, then a brute force search may be effective. Practicality depends on the input set size and the speed or cost of computing the hash function.
A common example is the use of hashes to store password validation data for authentication. Rather than store the plaintext of user passwords, an access control system stores a hash of the password. When a user requests access, the password they submit is hashed and compared with the stored value. If the stored validation data is stolen, the thief will only have the hash values, not the passwords. However most users choose passwords in predictable ways and many passwords are short enough that all possible combinations can be tested if fast hashes are used, even if the hash is rated secure against preimage attacks. Special hashes called key derivation functions have been created to slow searches. "See" Password cracking. For a method to prevent the testing of short passwords see salt (cryptography). | [
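A minimal sketch of such a restricted-space search: given only the hash of a short lowercase password, every candidate is enumerated until the digest matches. The password and alphabet below are assumed example choices.

```python
import hashlib
import string
from itertools import product

def sha256_hex(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

# Pretend this is the stolen validation data (the password "taco" is a made-up example).
stolen_hash = sha256_hex("taco")

def brute_force(target_hash, alphabet=string.ascii_lowercase, max_len=4):
    """Enumerate the restricted input space of short lowercase passwords."""
    for length in range(1, max_len + 1):
        for candidate in product(alphabet, repeat=length):
            word = "".join(candidate)
            if sha256_hex(word) == target_hash:
                return word
    return None

print(brute_force(stolen_hash))   # recovers "taco" after at most 26**4 = 456,976 candidate hashes
```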
{
"math_id": 0,
"text": "\\sqrt{2^{n}} = 2^{\\frac{n}{2}}"
},
{
"math_id": 1,
"text": "2^{\\frac{n}{2}}"
}
] | https://en.wikipedia.org/wiki?curid=969477 |
969540 | Limiting magnitude | Faintest item observable by an instrument
In astronomy, limiting magnitude is the faintest apparent magnitude of a celestial body that is detectable or detected by a given instrument.
In some cases, limiting magnitude refers to the upper threshold of detection. In more formal uses, limiting magnitude is specified along with the strength of the signal (e.g., "10th magnitude at 20 sigma"). Sometimes limiting magnitude is qualified by the purpose of the instrument (e.g., "10th magnitude for photometry"). This statement recognizes that a photometric detector can detect light far fainter than it can reliably measure.
The limiting magnitude of an instrument is often cited for ideal conditions, but environmental conditions impose further practical limits. These include weather, moonlight, skyglow, and light pollution. The International Dark-Sky Association has been vocal in championing the cause of reducing skyglow and light pollution.
Naked-eye visibility.
The limiting magnitude for naked eye visibility refers to the faintest stars that can be seen with the unaided eye near the zenith on clear moonless nights. The quantity is most often used as an overall indicator of sky brightness, in that light polluted and humid areas generally have brighter limiting magnitudes than remote desert or high altitude areas. The limiting magnitude will depend on the observer, and will increase with the eye's dark adaptation. On a relatively clear sky, the limiting visibility will be about 6th magnitude. However, the limiting visibility is 7th magnitude for faint stars visible from dark rural areas located far from major cities. (See the Bortle scale.)
There is even variation within metropolitan areas. For those who live in the immediate suburbs of New York City, the limiting magnitude might be 4.0. This corresponds to roughly 250 visible stars, or one-tenth of the number that is visible under perfectly dark skies. From the boroughs of New York City outside Manhattan (Brooklyn, Queens, Staten Island, and the Bronx), the limiting magnitude might be 3.0, suggesting that at best, only about 50 stars might be seen at any one time. From brightly lit Midtown Manhattan, the limiting magnitude is possibly 2.0, meaning that from the heart of New York City only about 15 stars will be visible at any given time.
From relatively dark suburban areas, the limiting magnitude is frequently closer to 5 or somewhat fainter, but from very remote and clear sites, some amateur astronomers can see nearly as faint as 8th magnitude. Many basic observing references quote a limiting magnitude of 6, as this is the approximate limit of star maps which date from before the invention of the telescope. Ability in this area, which requires the use of averted vision, varies substantially from observer to observer, with both youth and experience being beneficial.
Limiting magnitude is traditionally estimated by searching for faint stars of known magnitude. In 2013 an app was developed based on Google's Sky Map that allows non-specialists to estimate the limiting magnitude in polluted areas using their phone.
Modelling magnitude limits.
We see stars if they have sufficient contrast against the background sky. A star's brightness (more precisely its illuminance) must exceed the sky's surface brightness (i.e. luminance) by a sufficient amount. Earth's sky is never completely black – even in the absence of light pollution there is a natural airglow that limits what can be seen. The astronomer H.D. Curtis reported his naked-eye limit as 6.53, but by looking at stars through a hole in a black screen (i.e. against a totally dark background) was able to see one of magnitude 8.3, and possibly one of 8.9.
Naked-eye magnitude limits can be modelled theoretically using laboratory data on human contrast thresholds at various background brightness levels. Andrew Crumey has done this using data from experiments where subjects viewed artificial light sources under controlled conditions. Crumey showed that for a sky background with surface brightness formula_0, the visual limit formula_1 could be expressed as:
formula_2
where formula_3 is a "field factor" specific to the observer and viewing situation. The very darkest skies have a zenith surface brightness of approximately 22 mag arcsec⁻², so it can be seen from the equation that such a sky would be expected to show stars approximately 0.4 mag fainter than one with a surface brightness of 21 mag arcsec⁻². Crumey speculated that for most people formula_3 will lie between about 1.4 and 2.4, with formula_4 being typical. This would imply formula_5 at the darkest sites, consistent with the traditionally accepted value, though substantially poorer than what is often claimed by modern amateur observers.
To explain the discrepancy, Crumey pointed out that his formula assumed sustained visibility rather than momentary glimpses. He reported that "scintillation can lead to sudden 'flashes' with a brightening of 1 to 2 mag lasting a hundredth of a second." He commented, "The activities of amateur astronomers can lie anywhere between science and recreational sport. If the latter, then the individual's concern with limiting magnitude may be to maximise it, whereas for science a main interest should be consistency of measurement." He recommended that "For the purposes of visibility recommendations aimed at the general public it is preferable to consider typical rather than exceptional performance... Stars should be continuously visible (with direct or averted vision) for some extended period (e.g. at least a second or two) rather than be seen to flash momentarily."
Crumey's formula, stated above, is an approximation to a more general one he obtained in photometric units. He obtained other approximations in astronomical units for skies ranging from moderately light polluted to truly dark.
formula_6
If an observer knows their own SQM (i.e. sky brightness formula_7 measured by a sky quality meter), and establishes their actual limiting magnitude, they can work out their own formula_3 from these formulae. Crumey recommended that for accurate results, the observer should ascertain the V-magnitude of the faintest steadily visible star to one decimal place, and for highest accuracy should also record the colour index and convert to a standard value. Crumey showed that if the limit is formula_8 at colour index formula_9, then the limit at colour index zero is approximately formula_10
Some sample values are tabulated below. The general result is that a gain of 1 SQM in sky darkness equates to a gain in magnitude limit of roughly 0.3 to 0.4.
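A short sketch of the first approximation above, valid for skies darker than about 21 mag arcsec⁻²; F = 2 is the typical field factor discussed in the text.

```python
import math

def naked_eye_limit(mu_sky, F=2.0):
    """Crumey's approximation for sky surface brightness mu_sky > 21 mag/arcsec^2."""
    return 0.4260 * mu_sky - 2.3650 - 2.5 * math.log10(F)

for mu in (21.0, 21.5, 22.0):
    print(mu, round(naked_eye_limit(mu), 2))
# With F = 2 this gives about 5.83, 6.04 and 6.25 mag, reproducing the ~0.4 mag gain
# quoted above for a 22 versus a 21 mag/arcsec^2 sky.
```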
Visual magnitude limit with a telescope.
The aperture (or more formally entrance pupil) of a telescope is larger than the human eye pupil, so collects more light, concentrating it at the exit pupil where the observer's own pupil is (usually) placed. The result is increased illuminance – stars are effectively brightened. At the same time, magnification darkens the background sky (i.e. reduces its luminance). Therefore stars normally invisible to the naked eye become visible in the telescope. Further increasing the magnification makes the sky look even darker in the eyepiece, but there is a limit to how far this can be taken. One reason is that as magnification increases, the exit pupil gets smaller, resulting in a poorer image – an effect that can be seen by looking through a small pinhole in daylight. Another reason is that star images are not perfect points of light; atmospheric turbulence creates a blurring effect referred to as seeing. A third reason is that if magnification can be pushed sufficiently high, the sky background will become effectively black, and cannot be darkened any further. This happens at a background surface brightness of approximately 25 mag arcsec⁻², where only 'dark light' (neural noise) is perceived.
Various authors have stated the limiting magnitude of a telescope with entrance pupil formula_11 centimetres to be of the form
formula_12
with suggested values for the constant formula_13 ranging from 6.8 to 8.7. Crumey obtained a formula for formula_13 as a function of the sky surface brightness, telescope magnification, observer's eye pupil diameter and other parameters including the personal factor formula_3 discussed above. Choosing parameter values thought typical of normal dark-site observations (e.g. eye pupil 0.7cm and formula_4) he found formula_14.
Crumey obtained his formula as an approximation to one he derived in photometric units from his general model of human contrast threshold. As an illustration, he calculated limiting magnitude as a function of sky brightness for a 100mm telescope at magnifications ranging from x25 to x200 (with other parameters given typical real-world values). Crumey found that a maximum of 12.7 mag could be achieved if magnification was sufficiently high and the sky sufficiently dark, so that the background in the eyepiece was effectively black. That limit corresponds to formula_13 = 7.7 in the formula above.
More generally, for situations where it is possible to raise a telescope's magnification high enough to make the sky background effectively black, the limiting magnitude is approximated by
formula_15
where formula_11 and formula_3 are as stated above, formula_16 is the observer's pupil diameter in centimetres, and formula_17 is the telescope transmittance (e.g. 0.75 for a typical reflector).
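A sketch of this black-background limit, using the parameter values described above as typical (0.7 cm eye pupil, F = 2, transmittance 0.75):

```python
import math

def telescope_limit(D_cm, p_cm=0.7, F=2.0, T=0.75):
    """Limiting magnitude when the eyepiece sky background is effectively black."""
    return 5 * math.log10(D_cm) + 8 - 2.5 * math.log10(p_cm**2 * F / T)

print(round(telescope_limit(10.0), 1))   # ≈ 12.7 for a 100 mm aperture, matching the text
```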
Telescopic limiting magnitudes were investigated empirically by I.S. Bowen at Mount Wilson Observatory in 1947, and Crumey was able to use Bowen's data as a test of the theoretical model. Bowen did not record parameters such as his eye pupil diameter, naked-eye magnitude limit, or the extent of light loss in his telescopes; but because he made observations at a range of magnifications using three telescopes (with apertures 0.33 inch, 6 inch and 60 inch), Crumey was able to construct a system of simultaneous equations from which the remaining parameters could be deduced. Because Crumey used astronomical-unit approximations, and plotted on log axes, the limit "curve" for each telescope consisted of three straight sections, corresponding to exit pupil larger than eye pupil, exit pupil smaller, and sky background effectively black. Bowen's anomalous limit at highest magnification with the 60-inch telescope was due to poor seeing. As well as vindicating the theoretical model, Crumey was able to show from this analysis that the sky brightness at the time of Bowen's observations was approximately 21.27 mag arcsec⁻², highlighting the rapid growth of light pollution at Mount Wilson in the second half of the twentieth century.
Large observatories.
Telescopes at large observatories are typically located at sites selected for dark skies. They also increase the limiting magnitude by using long integration times on the detector, and by using image-processing techniques to increase the signal to noise ratio. Most 8 to 10 meter class telescopes can detect sources with a visual magnitude of about 27 using a one-hour integration time.
Automated astronomical surveys are often limited to around magnitude 20 because of the short exposure time that allows covering a large part of the sky in a night. In a 30 second exposure the 0.7-meter telescope at the Catalina Sky Survey has a limiting magnitude of 19.5. The Zwicky Transient Facility has a limiting magnitude of 20.5, and Pan-STARRS has a limiting magnitude of 24.
Even higher limiting magnitudes can be achieved for telescopes above the Earth's atmosphere, such as the Hubble Space Telescope, where the sky brightness due to the atmosphere is not relevant. For orbital telescopes, the background sky brightness is set by the zodiacal light. The Hubble telescope can detect objects as faint as a magnitude of +31.5, and the James Webb Space Telescope (operating in the infrared spectrum) is expected to exceed that.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mu_\\text{sky} > 21 \\,\\mathrm{ mag \\, arcsec^{-2}}"
},
{
"math_id": 1,
"text": "m"
},
{
"math_id": 2,
"text": "m = {0.4260} \\mu_\\text{sky} - 2.3650 - 2.5 \\log F"
},
{
"math_id": 3,
"text": "F"
},
{
"math_id": 4,
"text": "F = 2"
},
{
"math_id": 5,
"text": "m = 6.25"
},
{
"math_id": 6,
"text": "\\begin{align}\nm &= {0.27} \\mu_\\text{sky} + 0.8 - 2.5 \\log F & (18 < \\mu_\\text{sky} < 20 \\,\\mathrm{ mag \\, arcsec^{-2}}) \\\\[1ex]\nm &= {0.383} \\mu_\\text{sky} -1.44 - 2.5 \\log F & (20 < \\mu_\\text{sky} < 22 \\,\\mathrm{ mag \\, arcsec^{-2}})\n\\end{align}"
},
{
"math_id": 7,
"text": "\\mu_{sky}"
},
{
"math_id": 8,
"text": "m_c"
},
{
"math_id": 9,
"text": "c"
},
{
"math_id": 10,
"text": "m_c + 0.27 c"
},
{
"math_id": 11,
"text": "D"
},
{
"math_id": 12,
"text": "m = 5 \\log D + N"
},
{
"math_id": 13,
"text": "N"
},
{
"math_id": 14,
"text": "N = 7.69"
},
{
"math_id": 15,
"text": "m = 5 \\log D + 8 - 2.5 \\log (p^2 F / T)"
},
{
"math_id": 16,
"text": "p"
},
{
"math_id": 17,
"text": "T"
}
] | https://en.wikipedia.org/wiki?curid=969540 |
969597 | Eccentric anomaly | Angle defining a position in an orbit
In orbital mechanics, the eccentric anomaly is an angular parameter that defines the position of a body that is moving along an elliptic Kepler orbit. The eccentric anomaly is one of three angular parameters ("anomalies") that define a position along an orbit, the other two being the true anomaly and the mean anomaly.
Graphical representation.
Consider the ellipse with equation given by:
formula_0
where "a" is the "semi-major" axis and "b" is the "semi-minor" axis.
For a point on the ellipse, "P" = "P"("x", "y"), representing the position of an orbiting body in an elliptical orbit, the eccentric anomaly is the angle "E" in the figure. The eccentric anomaly "E" is one of the angles of a right triangle with one vertex at the center of the ellipse, its adjacent side lying on the "major" axis, having hypotenuse "a" (equal to the "semi-major" axis of the ellipse), and opposite side (perpendicular to the "major" axis and touching the point "P′" on the auxiliary circle of radius "a") that passes through the point "P". The eccentric anomaly is measured in the same direction as the true anomaly, shown in the figure as formula_1. The eccentric anomaly "E" in terms of these coordinates is given by:
formula_2
and
formula_3
The second equation is established using the relationship
formula_4,
which implies that sin "E" = ±"y"/"b". The equation sin "E" = −"y"/"b" can immediately be ruled out, since it traverses the ellipse in the wrong direction. It can also be noted that the second equation can be viewed as coming from a similar triangle with its opposite side having the same length "y" as the distance from "P" to the "major" axis, and its hypotenuse "b" equal to the "semi-minor" axis of the ellipse.
Formulas.
Radius and eccentric anomaly.
The eccentricity "e" is defined as:
formula_5
From Pythagoras's theorem applied to the triangle with "r" (a distance "FP") as hypotenuse:
formula_6
Thus, the radius (distance from the focus to point "P") is related to the eccentric anomaly by the formula
formula_7
With this result the eccentric anomaly can be determined from the true anomaly as shown next.
From the true anomaly.
The "true anomaly" is the angle labeled formula_1 in the figure, located at the focus of the ellipse. It is sometimes represented by f or v. The true anomaly and the eccentric anomaly are related as follows.
Using the formula for r above, the sine and cosine of E are found in terms of f :
formula_8
Hence,
formula_9
where the correct quadrant for E is given by the signs of numerator and denominator, so that E can be most easily found using an atan2 function.
Angle E is therefore the adjacent angle of a right triangle with hypotenuse formula_10 adjacent side formula_11 and opposite side formula_12
Also,
formula_13
Substituting cos E as found above into the expression for r, the radial distance from the focal point to the point P, can be found in terms of the true anomaly as well:
formula_14
where
formula_15
is called "the semi-latus rectum" in classical geometry.
From the mean anomaly.
The eccentric anomaly "E" is related to the mean anomaly "M" by Kepler's equation:
formula_16
This equation does not have a closed-form solution for "E" given "M". It is usually solved by numerical methods, e.g. the Newton–Raphson method. It may be expressed in a Fourier series as
formula_17
where formula_18 is the Bessel function of the first kind. | [
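For illustration, a minimal Newton–Raphson iteration for Kepler's equation might look like the following Python sketch (not taken from the original article; the starting guess, tolerance and function name are arbitrary choices):

    import math

    def eccentric_anomaly(M, e, tol=1e-12, max_iter=50):
        # Solve M = E - e*sin(E) for E (elliptic case, 0 <= e < 1) by Newton-Raphson.
        E = M + e * math.sin(M)                  # starting guess, adequate for moderate e
        for _ in range(max_iter):
            f = E - e * math.sin(E) - M          # residual of Kepler's equation
            f_prime = 1.0 - e * math.cos(E)      # derivative of the residual with respect to E
            dE = -f / f_prime
            E += dE
            if abs(dE) < tol:
                break
        return E

    E = eccentric_anomaly(M=1.0, e=0.2)          # example values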
{
"math_id": 0,
"text": "\\frac{x^2}{a^2} + \\frac{y^2}{b^2} = 1, "
},
{
"math_id": 1,
"text": "\\theta"
},
{
"math_id": 2,
"text": "\\cos E = \\frac{x}{a} ,"
},
{
"math_id": 3,
"text": "\\sin E = \\frac{y}{b}"
},
{
"math_id": 4,
"text": "\\left(\\frac{y}{b}\\right)^2 = 1 - \\cos^2 E = \\sin^2 E"
},
{
"math_id": 5,
"text": "e=\\sqrt{1 - \\left(\\frac{b}{a}\\right)^2 } \\ . "
},
{
"math_id": 6,
"text": "\\begin{align}\n r^2 &= b^2 \\sin^2E + (ae - a\\cos E)^2 \\\\\n &= a^2\\left(1 - e^2\\right)\\left(1 - \\cos^2 E\\right) + a^2 \\left(e^2 - 2e\\cos E + \\cos^2 E\\right) \\\\\n &= a^2 - 2a^2 e\\cos E + a^2 e^2 \\cos^2 E \\\\\n &= a^2 \\left(1 - e\\cos E\\right)^2 \\\\\n\\end{align} "
},
{
"math_id": 7,
"text": "r = a \\left(1 - e \\cos{E}\\right) \\ ."
},
{
"math_id": 8,
"text": "\\begin{align}\n \\cos E &= \\frac{\\,x\\,}{a} = \\frac{\\, a e + r \\cos f \\,}{a} = e + (1 - e \\cos E) \\cos f \\\\\n \\Rightarrow \\cos E &= \\frac{\\, e + \\cos f \\,}{1 + e \\cos f} \\\\\n \\sin E &= \\sqrt{\\, 1 - \\cos^2 E \\;} = \\frac{\\, \\sqrt{\\, 1 - e^2 \\;} \\, \\sin f \\,}{ 1 + e\\cos f } ~.\n\\end{align}"
},
{
"math_id": 9,
"text": "\\tan E = \\frac{\\, \\sin E \\,}{\\cos E} = \\frac{\\, \\sqrt{\\, 1 - e^2 \\;} \\, \\sin f \\,}{e + \\cos f} ~."
},
{
"math_id": 10,
"text": "\\; 1 + e \\cos f \\;,"
},
{
"math_id": 11,
"text": "\\; e + \\cos f \\;,"
},
{
"math_id": 12,
"text": "\\;\\sqrt{ \\, 1 - e^2 \\; } \\, \\sin f \\;."
},
{
"math_id": 13,
"text": "\\tan\\frac{\\, f \\,}{2} = \\sqrt{\\frac{\\, 1 + e \\,}{1 - e}\\,} \\,\\tan\\frac{\\, E \\,}{2}"
},
{
"math_id": 14,
"text": "r = \\frac{a \\left(\\, 1 - e^2 \\,\\right)}{\\, 1 + e \\cos f \\, } = \\frac{p}{\\, 1 + e \\cos f \\, }\\,"
},
{
"math_id": 15,
"text": "\\, p \\equiv a \\left(\\, 1 - e^2 \\,\\right) "
},
{
"math_id": 16,
"text": "M = E - e \\sin E"
},
{
"math_id": 17,
"text": "E = M + 2\\sum_{n=1}^{\\infty } \\frac{J_{n}(ne)}{n}\\sin(n M)"
},
{
"math_id": 18,
"text": "J_{n}(x)"
}
] | https://en.wikipedia.org/wiki?curid=969597 |
969603 | True anomaly | Parameter of Keplerian orbits
In celestial mechanics, true anomaly is an angular parameter that defines the position of a body moving along a Keplerian orbit. It is the angle between the direction of periapsis and the current position of the body, as seen from the main focus of the ellipse (the point around which the object orbits).
The true anomaly is usually denoted by the Greek letters ν or θ, or the Latin letter f, and is usually restricted to the range 0–360° (0–2π rad).
The true anomaly f is one of three angular parameters ("anomalies") that define a position along an orbit, the other two being the eccentric anomaly and the mean anomaly.
Formulas.
From state vectors.
For elliptic orbits, the true anomaly ν can be calculated from orbital state vectors as:
formula_0
(if r ⋅ v < 0 then replace ν by 2π − ν)
where "e" is the eccentricity vector, "r" is the orbital position vector of the orbiting body, and "v" is its orbital velocity vector.
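A minimal numerical sketch of this computation is given below (Python with NumPy; not part of the original article). It assumes the standard two-body expression for the eccentricity vector, e = ((v² − μ/|r|) r − (r · v) v)/μ, so a gravitational parameter μ must be supplied in addition to the state vectors; all names and values are illustrative only.

    import numpy as np

    def true_anomaly_from_state(r, v, mu):
        # r, v: position and velocity vectors; mu: gravitational parameter (assumed input).
        r = np.asarray(r, dtype=float)
        v = np.asarray(v, dtype=float)
        rn = np.linalg.norm(r)
        e_vec = ((np.dot(v, v) - mu / rn) * r - np.dot(r, v) * v) / mu   # eccentricity vector
        e = np.linalg.norm(e_vec)
        cos_nu = np.dot(e_vec, r) / (e * rn)
        nu = np.arccos(np.clip(cos_nu, -1.0, 1.0))   # clip guards against rounding
        if np.dot(r, v) < 0.0:                       # radially inbound: past apoapsis
            nu = 2.0 * np.pi - nu
        return nu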
Circular orbit.
For circular orbits the true anomaly is undefined, because circular orbits do not have a uniquely determined periapsis. Instead the argument of latitude "u" is used:
formula_1
(if "rz" < 0 then replace "u" by 2π − "u")
where "n" is the vector pointing towards the ascending node (the node vector), "r" is the orbital position vector of the orbiting body, and "rz" is the "z"-component of "r".
Circular orbit with zero inclination.
For circular orbits with zero inclination the argument of latitude is also undefined, because there is no uniquely determined line of nodes. One uses the true longitude instead:
formula_2
(if "vx" > 0 then replace l by )
where "rx" is the "x"-component of the orbital position vector "r", and "vx" is the "x"-component of the orbital velocity vector "v".
From the eccentric anomaly.
The relation between the true anomaly ν and the eccentric anomaly formula_3 is:
formula_4
or using the sine and tangent:
formula_5
or equivalently:
formula_6
so
formula_7
Alternatively, a form of this equation has been derived that avoids numerical issues when the arguments are near formula_8, where the two tangents become infinite. Additionally, since formula_9 and formula_10 are always in the same quadrant, there will not be any sign problems.
formula_11 where formula_12
so
formula_13
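A small Python sketch (illustrative only, not part of the original article) implements both conversions; the second form behaves well when "E" is close to ±π:

    import math

    def true_from_eccentric(E, e):
        # Half-angle form: tan(nu/2) = sqrt((1 + e)/(1 - e)) * tan(E/2)
        return 2.0 * math.atan2(math.sqrt(1.0 + e) * math.sin(E / 2.0),
                                math.sqrt(1.0 - e) * math.cos(E / 2.0))

    def true_from_eccentric_robust(E, e):
        # nu = E + 2 arctan( beta sin E / (1 - beta cos E) ), beta = e / (1 + sqrt(1 - e^2))
        beta = e / (1.0 + math.sqrt(1.0 - e * e))
        return E + 2.0 * math.atan(beta * math.sin(E) / (1.0 - beta * math.cos(E)))

    nu1 = true_from_eccentric(3.0, 0.5)          # example values
    nu2 = true_from_eccentric_robust(3.0, 0.5)   # agrees with nu1 up to rounding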
From the mean anomaly.
The true anomaly can be calculated directly from the mean anomaly formula_14 via a Fourier expansion:
formula_15
with Bessel functions formula_16 and parameter formula_17.
Omitting all terms of order formula_18 or higher (indicated by formula_19), it can be written as
formula_20
Note that for reasons of accuracy this approximation is usually limited to orbits where the eccentricity formula_21 is small.
The expression formula_22 is known as the equation of the center, where more details about the expansion are given.
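As a concrete illustration, the truncated series can be evaluated directly. The Python sketch below (not from the original article; the sample values are illustrative) keeps only the terms shown above and is therefore appropriate only for small eccentricities:

    import math

    def true_anomaly_series(M, e):
        # Equation-of-the-centre expansion truncated after the e^3 terms shown above.
        return (M
                + (2.0 * e - 0.25 * e**3) * math.sin(M)
                + 1.25 * e**2 * math.sin(2.0 * M)
                + (13.0 / 12.0) * e**3 * math.sin(3.0 * M))

    nu = true_anomaly_series(M=1.0, e=0.1)   # error is of the order of the omitted e^4 terms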
Radius from true anomaly.
The radius (distance between the focus of attraction and the orbiting body) is related to the true anomaly by the formula
formula_23
where "a" is the orbit's semi-major axis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\nu = \\arccos { {\\mathbf{e} \\cdot \\mathbf{r}} \\over { \\mathbf{\\left |e \\right |} \\mathbf{\\left |r \\right |} }}"
},
{
"math_id": 1,
"text": " u = \\arccos { {\\mathbf{n} \\cdot \\mathbf{r}} \\over { \\mathbf{\\left |n \\right |} \\mathbf{\\left |r \\right |} }}"
},
{
"math_id": 2,
"text": " l = \\arccos { r_x \\over { \\mathbf{\\left |r \\right |}}}"
},
{
"math_id": 3,
"text": "E"
},
{
"math_id": 4,
"text": "\\cos{\\nu} = {{\\cos{E} - e} \\over {1 - e \\cos{E}}}"
},
{
"math_id": 5,
"text": "\\begin{align}\n \\sin{\\nu} &= {{\\sqrt{1 - e^2\\,} \\sin{E}} \\over {1 - e \\cos{E}}} \\\\[4pt]\n \\tan{\\nu} = {{\\sin{\\nu}} \\over {\\cos{\\nu}}} &= {{\\sqrt{1 - e^2\\,} \\sin{E}} \\over {\\cos{E} -e}}\n\\end{align}"
},
{
"math_id": 6,
"text": "\\tan{\\nu \\over 2} = \\sqrt{{{1 + e\\,} \\over {1-e\\,}}} \\tan{E \\over 2}"
},
{
"math_id": 7,
"text": "\\nu = 2 \\, \\operatorname{arctan}\\left(\\, \\sqrt{{{1 + e\\,} \\over {1 - e\\,}}} \\tan{E \\over 2} \\, \\right)"
},
{
"math_id": 8,
"text": "\\pm\\pi"
},
{
"math_id": 9,
"text": "\\frac{E}{2}"
},
{
"math_id": 10,
"text": "\\frac{\\nu}{2}"
},
{
"math_id": 11,
"text": "\\tan{\\frac{1}{2}(\\nu - E)} = \\frac{\\beta\\sin{E}}{1 - \\beta\\cos{E}}"
},
{
"math_id": 12,
"text": " \\beta = \\frac{e}{1 + \\sqrt{1 - e^2}} "
},
{
"math_id": 13,
"text": "\\nu = E + 2\\operatorname{arctan}\\left(\\,\\frac{\\beta\\sin{E}}{1 - \\beta\\cos{E}}\\,\\right)"
},
{
"math_id": 14,
"text": "M"
},
{
"math_id": 15,
"text": "\\nu = M + 2 \\sum_{k=1}^{\\infty}\\frac{1}{k} \\left[ \\sum_{n=-\\infty}^{\\infty} J_n(-ke)\\beta^{|k+n|} \\right] \\sin{kM}"
},
{
"math_id": 16,
"text": "J_n"
},
{
"math_id": 17,
"text": "\\beta = \\frac{1-\\sqrt{1-e^2}}{e}"
},
{
"math_id": 18,
"text": "e^4"
},
{
"math_id": 19,
"text": "\\operatorname{\\mathcal{O}}\\left(e^4\\right)"
},
{
"math_id": 20,
"text": "\\nu = M + \\left(2e - \\frac{1}{4} e^3\\right) \\sin{M} + \\frac{5}{4} e^2 \\sin{2M} + \\frac{13}{12} e^3 \\sin{3M} + \\operatorname{\\mathcal{O}}\\left(e^4\\right)."
},
{
"math_id": 21,
"text": "e"
},
{
"math_id": 22,
"text": "\\nu - M"
},
{
"math_id": 23,
"text": "r(t) = a\\,{1 - e^2 \\over 1 + e \\cos\\nu(t)}\\,\\!"
}
] | https://en.wikipedia.org/wiki?curid=969603 |
9697 | Euclidean space | Fundamental space of geometry
Euclidean space is the fundamental space of geometry, intended to represent physical space. Originally, in Euclid's "Elements", it was the three-dimensional space of Euclidean geometry, but in modern mathematics there are "Euclidean spaces" of any positive integer dimension "n", which are called Euclidean "n"-spaces when one wants to specify their dimension. For "n" equal to one or two, they are commonly called respectively Euclidean lines and Euclidean planes. The qualifier "Euclidean" is used to distinguish Euclidean spaces from other spaces that were later considered in physics and modern mathematics.
Ancient Greek geometers introduced Euclidean space for modeling the physical space. Their work was collected by the ancient Greek mathematician Euclid in his "Elements", with the great innovation of "proving" all properties of the space as theorems, by starting from a few fundamental properties, called "postulates", which either were considered as evident (for example, there is exactly one straight line passing through two points), or seemed impossible to prove (parallel postulate).
After the introduction at the end of the 19th century of non-Euclidean geometries, the old postulates were re-formalized to define Euclidean spaces through axiomatic theory. Another definition of Euclidean spaces by means of vector spaces and linear algebra has been shown to be equivalent to the axiomatic definition. It is this definition that is more commonly used in modern mathematics, and detailed in this article. In all definitions, Euclidean spaces consist of points, which are defined only by the properties that they must have for forming a Euclidean space.
There is essentially only one Euclidean space of each dimension; that is, all Euclidean spaces of a given dimension are isomorphic. Therefore it is usually possible to work with a specific Euclidean space, denoted formula_0 or formula_1, which can be represented using Cartesian coordinates as the real n-space formula_2 equipped with the standard dot product.
Definition.
History of the definition.
Euclidean space was introduced by ancient Greeks as an abstraction of our physical space. Their great innovation, appearing in Euclid's "Elements" was to build and "prove" all geometry by starting from a few very basic properties, which are abstracted from the physical world, and cannot be mathematically proved because of the lack of more basic tools. These properties are called postulates, or axioms in modern language. This way of defining Euclidean space is still in use under the name of synthetic geometry.
In 1637, René Descartes introduced Cartesian coordinates, and showed that these allow reducing geometric problems to algebraic computations with numbers. This reduction of geometry to algebra was a major change in point of view, as, until then, the real numbers were defined in terms of lengths and distances.
Euclidean geometry was not applied in spaces of dimension more than three until the 19th century. Ludwig Schläfli generalized Euclidean geometry to spaces of dimension n, using both synthetic and algebraic methods, and discovered all of the regular polytopes (higher-dimensional analogues of the Platonic solids) that exist in Euclidean spaces of any dimension.
Despite the wide use of Descartes' approach, which was called analytic geometry, the definition of Euclidean space remained unchanged until the end of 19th century. The introduction of abstract vector spaces allowed their use in defining Euclidean spaces with a purely algebraic definition. This new definition has been shown to be equivalent to the classical definition in terms of geometric axioms. It is this algebraic definition that is now most often used for introducing Euclidean spaces.
Motivation of the modern definition.
One way to think of the Euclidean plane is as a set of points satisfying certain relationships, expressible in terms of distance and angles. For example, there are two fundamental operations (referred to as motions) on the plane. One is translation, which means a shifting of the plane so that every point is shifted in the same direction and by the same distance. The other is rotation around a fixed point in the plane, in which all points in the plane turn around that fixed point through the same angle. One of the basic tenets of Euclidean geometry is that two figures (usually considered as subsets) of the plane should be considered equivalent (congruent) if one can be transformed into the other by some sequence of translations, rotations and reflections (see below).
In order to make all of this mathematically precise, the theory must clearly define what is a Euclidean space, and the related notions of distance, angle, translation, and rotation. Even when used in physical theories, Euclidean space is an abstraction detached from actual physical locations, specific reference frames, measurement instruments, and so on. A purely mathematical definition of Euclidean space also ignores questions of units of length and other physical dimensions: the distance in a "mathematical" space is a number, not something expressed in inches or metres.
The standard way to mathematically define a Euclidean space, as carried out in the remainder of this article, is as a set of points on which a real vector space acts — the "space of translations" which is equipped with an inner product. The action of translations makes the space an affine space, and this allows defining lines, planes, subspaces, dimension, and parallelism. The inner product allows defining distance and angles.
The set formula_2 of n-tuples of real numbers equipped with the dot product is a Euclidean space of dimension n. Conversely, the choice of a point called the "origin" and an orthonormal basis of the space of translations is equivalent with defining an isomorphism between a Euclidean space of dimension n and formula_2 viewed as a Euclidean space.
It follows that everything that can be said about a Euclidean space can also be said about formula_3 Therefore, many authors, especially at elementary level, call formula_2 the "standard Euclidean space" of dimension n, or simply "the" Euclidean space of dimension n.
A reason for introducing such an abstract definition of Euclidean spaces, and for working with it instead of formula_2 is that it is often preferable to work in a "coordinate-free" and "origin-free" manner (that is, without choosing a preferred basis and a preferred origin). Another reason is that there is no origin nor any basis in the physical world.
Technical definition.
A <templatestyles src="Template:Visible anchor/styles.css" />Euclidean vector space is a finite-dimensional inner product space over the real numbers.
A Euclidean space is an affine space over the reals such that the associated vector space is a Euclidean vector space. Euclidean spaces are sometimes called "Euclidean affine spaces" to distinguish them from Euclidean vector spaces.
If E is a Euclidean space, its associated vector space (Euclidean vector space) is often denoted formula_4 The "dimension" of a Euclidean space is the dimension of its associated vector space.
The elements of E are called "points", and are commonly denoted by capital letters. The elements of formula_5 are called "Euclidean vectors" or "free vectors". They are also called "translations", although, properly speaking, a translation is the geometric transformation resulting from the action of a Euclidean vector on the Euclidean space.
The action of a translation v on a point P provides a point that is denoted "P" + "v". This action satisfies
formula_6
Note: The second + in the left-hand side is a vector addition; each other + denotes an action of a vector on a point. This notation is not ambiguous, as, to distinguish between the two meanings of +, it suffices to look at the nature of its left argument.
The fact that the action is free and transitive means that, for every pair of points ("P", "Q"), there is exactly one displacement vector v such that "P" + "v" = "Q". This vector v is denoted "Q" − "P" or formula_7
As previously explained, some of the basic properties of Euclidean spaces result from the structure of affine space. They are described in the section Affine structure and its subsections. The properties resulting from the inner product are explained in the section Metric structure and its subsections.
Prototypical examples.
For any vector space, the addition acts freely and transitively on the vector space itself. Thus a Euclidean vector space can be viewed as a Euclidean space that has itself as the associated vector space.
A typical case of Euclidean vector space is formula_2 viewed as a vector space equipped with the dot product as an inner product. The importance of this particular example of Euclidean space lies in the fact that every Euclidean space is isomorphic to it. More precisely, given a Euclidean space E of dimension n, the choice of a point, called an "origin" and an orthonormal basis of formula_5 defines an isomorphism of Euclidean spaces from E to formula_3
As every Euclidean space of dimension n is isomorphic to it, the Euclidean space formula_2 is sometimes called the "standard Euclidean space" of dimension n.
Affine structure.
Some basic properties of Euclidean spaces depend only on the fact that a Euclidean space is an affine space. They are called affine properties and include the concepts of lines, subspaces, and parallelism, which are detailed in next subsections.
Subspaces.
Let E be a Euclidean space and formula_5 its associated vector space.
A "flat", "Euclidean subspace" or "affine subspace" of E is a subset F of E such that
formula_8
(the associated vector space of F) is a linear subspace (vector subspace) of formula_4 A Euclidean subspace F is a Euclidean space with formula_9 as the associated vector space. This linear subspace formula_9 is also called the "direction" of F.
If P is a point of F then
formula_10
Conversely, if P is a point of E and formula_11 is a linear subspace of formula_12 then
formula_13
is a Euclidean subspace of direction formula_11. (The associated vector space of this subspace is formula_11.)
A Euclidean vector space formula_5 (that is, a Euclidean space that is equal to formula_5) has two sorts of subspaces: its Euclidean subspaces and its linear subspaces. Linear subspaces are Euclidean subspaces and a Euclidean subspace is a linear subspace if and only if it contains the zero vector.
Lines and segments.
In a Euclidean space, a "line" is a Euclidean subspace of dimension one. Since a vector space of dimension one is spanned by any nonzero vector, a line is a set of the form
formula_14
where P and Q are two distinct points on the line.
It follows that "there is exactly one line that passes through (contains) two distinct points." This implies that two distinct lines intersect in at most one point.
A more symmetric representation of the line passing through P and Q is
formula_15
where O is an arbitrary point (not necessarily on the line).
In a Euclidean vector space, the zero vector is usually chosen for O; this allows simplifying the preceding formula into
formula_16
A standard convention allows using this formula in every Euclidean space.
The "line segment", or simply "segment", joining the points P and Q is the subset of points such that 0 ≤ "𝜆" ≤ 1 in the preceding formulas. It is denoted PQ or QP; that is
formula_17
Parallelism.
Two subspaces S and T of the same dimension in a Euclidean space are "parallel" if they have the same direction (i.e., the same associated vector space). Equivalently, they are parallel, if there is a translation vector v that maps one to the other:
formula_18
Given a point P and a subspace S, there exists exactly one subspace that contains P and is parallel to S, which is formula_19 In the case where S is a line (subspace of dimension one), this property is Playfair's axiom.
It follows that in a Euclidean plane, two lines either meet in one point or are parallel.
The concept of parallel subspaces has been extended to subspaces of different dimensions: two subspaces are parallel if the direction of one of them is contained in the direction of the other.
Metric structure.
The vector space formula_5 associated to a Euclidean space E is an inner product space. This implies a symmetric bilinear form
formula_20
that is positive definite (that is formula_21 is always positive for "x" ≠ 0).
The inner product of a Euclidean space is often called "dot product" and denoted "x" ⋅ "y". This is especially the case when a Cartesian coordinate system has been chosen, as, in this case, the inner product of two vectors is the dot product of their coordinate vectors. For this reason, and for historical reasons, the dot notation is more commonly used than the bracket notation for the inner product of Euclidean spaces. This article will follow this usage; that is, formula_22 will be denoted "x" ⋅ "y" in the remainder of this article.
The Euclidean norm of a vector x is
formula_23
The inner product and the norm allow expressing and proving metric and topological properties of Euclidean geometry. The next subsections describe the most fundamental ones. "In these subsections," E "denotes an arbitrary Euclidean space, and formula_5 denotes its vector space of translations."
Distance and length.
The "distance" (more precisely the "Euclidean distance") between two points of a Euclidean space is the norm of the translation vector that maps one point to the other; that is
formula_24
The "length" of a segment "PQ" is the distance "d"("P", "Q") between its endpoints "P" and "Q". It is often denoted formula_25.
The distance is a metric, as it is positive definite, symmetric, and satisfies the triangle inequality
formula_26
Moreover, the equality is true if and only if a point R belongs to the segment "PQ". This inequality means that the length of any edge of a triangle is smaller than the sum of the lengths of the other edges. This is the origin of the term "triangle inequality".
With the Euclidean distance, every Euclidean space is a complete metric space.
Orthogonality.
Two nonzero vectors u and v of formula_5 (the associated vector space of a Euclidean space E) are "perpendicular" or "orthogonal" if their inner product is zero:
formula_27
Two linear subspaces of formula_5 are orthogonal if every nonzero vector of the first one is perpendicular to every nonzero vector of the second one. This implies that the intersection of the linear subspaces is reduced to the zero vector.
Two lines, and more generally two Euclidean subspaces (a line can be considered as a Euclidean subspace of dimension one), are orthogonal if their directions (the associated vector spaces of the Euclidean subspaces) are orthogonal. Two orthogonal lines that intersect are said to be "perpendicular".
Two segments "AB" and "AC" that share a common endpoint "A" are "perpendicular" or "form a right angle" if the vectors formula_28 and formula_29 are orthogonal.
If "AB" and "AC" form a right angle, one has
formula_30
This is the Pythagorean theorem. Its proof is easy in this context, as, expressing this in terms of the inner product, one has, using bilinearity and symmetry of the inner product:
formula_31
Here, formula_32 is used since these two vectors are orthogonal.
Angle.
The (non-oriented) "angle" θ between two nonzero vectors x and y in formula_5 is
formula_33
where arccos is the principal value of the arccosine function. By Cauchy–Schwarz inequality, the argument of the arccosine is in the interval [−1, 1]. Therefore θ is real, and 0 ≤ "θ" ≤ "π" (or 0 ≤ "θ" ≤ 180 if angles are measured in degrees).
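As a small numerical illustration (a sketch, not part of the original article), the angle formula can be implemented directly; because of floating-point rounding the argument of the arccosine may fall marginally outside [−1, 1] and is therefore clamped:

    import numpy as np

    def angle(x, y):
        # Non-oriented angle between two nonzero vectors, in radians (0 <= theta <= pi).
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        c = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
        return np.arccos(np.clip(c, -1.0, 1.0))   # clip guards against rounding errors

    theta = angle([1.0, 0.0, 0.0], [1.0, 1.0, 0.0])   # pi/4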
Angles are not useful in a Euclidean line, as they can be only 0 or π.
In an oriented Euclidean plane, one can define the "oriented angle" of two vectors. The oriented angle of two vectors x and y is then the opposite of the oriented angle of y and x. In this case, the angle of two vectors can have any value modulo an integer multiple of 2"π". In particular, a reflex angle "π" < "θ" < 2"π" equals the negative angle −"π" < "θ" − 2"π" < 0.
The angle of two vectors does not change if they are multiplied by positive numbers. More precisely, if x and y are two vectors, and λ and μ are real numbers, then
formula_34
If A, B, and C are three points in a Euclidean space, the angle of the segments "AB" and "AC" is the angle of the vectors formula_35 and formula_36 As the multiplication of vectors by positive numbers does not change the angle, the angle of two half-lines with initial point A can be defined: it is the angle of the segments "AB" and "AC", where B and C are arbitrary points, one on each half-line. Although this is less used, one can define similarly the angle of segments or half-lines that do not share an initial point.
The angle of two lines is defined as follows. If θ is the angle of two segments, one on each line, the angle of any two other segments, one on each line, is either θ or "π" − "θ". One of these angles is in the interval [0, "π"/2], and the other is in ["π"/2, "π"]. The "non-oriented angle" of the two lines is the one in the interval [0, "π"/2]. In an oriented Euclidean plane, the "oriented angle" of two lines belongs to the interval [−"π"/2, "π"/2].
Cartesian coordinates.
Every Euclidean vector space has an orthonormal basis (in fact, infinitely many in dimension higher than one, and two in dimension one), that is a basis formula_37 of unit vectors (formula_38) that are pairwise orthogonal (formula_39 for "i" ≠ "j"). More precisely, given any basis formula_40 the Gram–Schmidt process computes an orthonormal basis such that, for every i, the linear spans of formula_41 and formula_42 are equal.
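For illustration, a straightforward (classical) Gram–Schmidt sketch in Python is shown below; it is not part of the original article, and in numerical practice a QR factorization (for example numpy.linalg.qr) is usually preferred for stability:

    import numpy as np

    def gram_schmidt(vectors):
        # Orthonormalise a list of linearly independent vectors so that, for every i,
        # span(b_1, ..., b_i) = span(e_1, ..., e_i).
        basis = []
        for b in vectors:
            v = np.asarray(b, dtype=float)
            for e in basis:
                v = v - np.dot(v, e) * e          # remove the component along e
            basis.append(v / np.linalg.norm(v))   # normalise to a unit vector
        return basis

    e1, e2, e3 = gram_schmidt([[1, 1, 0], [1, 0, 1], [0, 1, 1]])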
Given a Euclidean space E, a "Cartesian frame" is a set of data consisting of an orthonormal basis of formula_12 and a point of E, called the "origin" and often denoted O. A Cartesian frame formula_43 allows defining Cartesian coordinates for both E and formula_5 in the following way.
The Cartesian coordinates of a vector v of formula_5 are the coefficients of v on the orthonormal basis formula_44 For example, the Cartesian coordinates of a vector formula_45 on an orthonormal basis formula_46 (that may be named as formula_47 as a convention) in a 3-dimensional Euclidean space is formula_48 if formula_49. As the basis is orthonormal, the i-th coefficient formula_50 is equal to the dot product formula_51
The Cartesian coordinates of a point P of E are the Cartesian coordinates of the vector formula_52
Other coordinates.
As a Euclidean space is an affine space, one can consider an affine frame on it, which is the same as a Euclidean frame, except that the basis is not required to be orthonormal. This defines affine coordinates, sometimes called "skew coordinates" for emphasizing that the basis vectors are not pairwise orthogonal.
An affine basis of a Euclidean space of dimension n is a set of "n" + 1 points that are not contained in a hyperplane. An affine basis defines barycentric coordinates for every point.
Many other coordinates systems can be defined on a Euclidean space E of dimension n, in the following way. Let f be a homeomorphism (or, more often, a diffeomorphism) from a dense open subset of E to an open subset of formula_3 The "coordinates" of a point x of E are the components of "f"("x"). The polar coordinate system (dimension 2) and the spherical and cylindrical coordinate systems (dimension 3) are defined this way.
For points that are outside the domain of f, coordinates may sometimes be defined as the limit of coordinates of neighbour points, but these coordinates may not be uniquely defined, and may not be continuous in the neighborhood of the point. For example, for the spherical coordinate system, the longitude is not defined at the pole, and on the antimeridian, the longitude passes discontinuously from –180° to +180°.
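As a concrete illustration of such a coordinate map (a sketch only, not part of the original article, using one common convention for the spherical system), the conversion between Cartesian and spherical coordinates can be written as follows; the longitude is indeed undefined on the polar axis and jumps across the antimeridian:

    import math

    def cartesian_to_spherical(x, y, z):
        # Returns (r, theta, phi): radius, polar angle from the +z axis, longitude.
        r = math.sqrt(x * x + y * y + z * z)
        theta = math.acos(z / r)            # undefined at the origin (r = 0)
        phi = math.atan2(y, x)              # undefined on the z-axis; jumps at the antimeridian
        return r, theta, phi

    def spherical_to_cartesian(r, theta, phi):
        return (r * math.sin(theta) * math.cos(phi),
                r * math.sin(theta) * math.sin(phi),
                r * math.cos(theta))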
This way of defining coordinates extends easily to other mathematical structures, and in particular to manifolds.
Isometries.
An isometry between two metric spaces is a bijection preserving the distance, that is
formula_53
In the case of a Euclidean vector space, an isometry that maps the origin to the origin preserves the norm
formula_54
since the norm of a vector is its distance from the zero vector. It also preserves the inner product
formula_55
since
formula_56
An isometry of Euclidean vector spaces is a linear isomorphism.
An isometry formula_57 of Euclidean spaces defines an isometry formula_58 of the associated Euclidean vector spaces. This implies that two isometric Euclidean spaces have the same dimension. Conversely, if E and F are Euclidean spaces, "O" ∈ "E", "O"′ ∈ "F", and formula_59 is an isometry, then the map formula_57 defined by
formula_60
is an isometry of Euclidean spaces.
It follows from the preceding results that an isometry of Euclidean spaces maps lines to lines, and, more generally, Euclidean subspaces to Euclidean subspaces of the same dimension, and that the restrictions of the isometry to these subspaces are isometries of these subspaces.
Isometry with prototypical examples.
If E is a Euclidean space, its associated vector space formula_5 can be considered as a Euclidean space. Every point "O" ∈ "E" defines an isometry of Euclidean spaces
formula_61
which maps O to the zero vector and has the identity as associated linear map. The inverse isometry is the map
formula_62
A Euclidean frame formula_43 allows defining the map
formula_63
which is an isometry of Euclidean spaces. The inverse isometry is
formula_64
"This means that, up to an isomorphism, there is exactly one Euclidean space of a given dimension."
This justifies that many authors talk of formula_2 as "the" Euclidean space of dimension n.
Euclidean group.
An isometry from a Euclidean space onto itself is called "Euclidean isometry", "Euclidean transformation" or "rigid transformation". The rigid transformations of a Euclidean space form a group (under composition), called the "Euclidean group" and often denoted E("n") or ISO("n").
The simplest Euclidean transformations are translations
formula_65
They are in bijective correspondence with vectors. This is a reason for calling "space of translations" the vector space associated to a Euclidean space. The translations form a normal subgroup of the Euclidean group.
A Euclidean isometry f of a Euclidean space E defines a linear isometry formula_66 of the associated vector space (by "linear isometry", it is meant an isometry that is also a linear map) in the following way: denoting by "Q" – "P" the vector formula_67 if O is an arbitrary point of E, one has
formula_68
It is straightforward to prove that this is a linear map that does not depend on the choice of O.
The map formula_69 is a group homomorphism from the Euclidean group onto the group of linear isometries, called the orthogonal group. The kernel of this homomorphism is the translation group, showing that it is a normal subgroup of the Euclidean group.
The isometries that fix a given point P form the stabilizer subgroup of the Euclidean group with respect to P. The restriction to this stabilizer of the above group homomorphism is an isomorphism. So the isometries that fix a given point form a group isomorphic to the orthogonal group.
Let P be a point, f an isometry, and t the translation that maps P to "f"("P"). The isometry formula_70 fixes P. So formula_71 and "the Euclidean group is the semidirect product of the translation group and the orthogonal group."
The special orthogonal group is the normal subgroup of the orthogonal group that preserves handedness. It is a subgroup of index two of the orthogonal group. Its inverse image by the group homomorphism formula_69 is a normal subgroup of index two of the Euclidean group, which is called the "special Euclidean group" or the "displacement group". Its elements are called "rigid motions" or "displacements".
Rigid motions include the identity, translations, rotations (the rigid motions that fix at least a point), and also screw motions.
Typical examples of rigid transformations that are not rigid motions are reflections, which are rigid transformations that fix a hyperplane and are not the identity. They are also the transformations consisting in changing the sign of one coordinate over some Euclidean frame.
As the special Euclidean group is a subgroup of index two of the Euclidean group, given a reflection r, every rigid transformation that is not a rigid motion is the product of r and a rigid motion. A glide reflection is an example of a rigid transformation that is not a rigid motion or a reflection.
All groups that have been considered in this section are Lie groups and algebraic groups.
Topology.
The Euclidean distance makes a Euclidean space a metric space, and thus a topological space. This topology is called the Euclidean topology. In the case of formula_72 this topology is also the product topology.
The open sets are the subsets that contain an open ball around each of their points. In other words, open balls form a base of the topology.
The topological dimension of a Euclidean space equals its dimension. This implies that Euclidean spaces of different dimensions are not homeomorphic. Moreover, the theorem of invariance of domain asserts that a subset of a Euclidean space is open (for the subspace topology) if and only if it is homeomorphic to an open subset of a Euclidean space of the same dimension.
Euclidean spaces are complete and locally compact. That is, a closed subset of a Euclidean space is compact if it is bounded (that is, contained in a ball). In particular, closed balls are compact.
Axiomatic definitions.
The definition of Euclidean spaces that has been described in this article differs fundamentally from Euclid's. In reality, Euclid did not formally define the space, because it was thought of as a description of the physical world that exists independently of the human mind. The need for a formal definition appeared only at the end of the 19th century, with the introduction of non-Euclidean geometries.
Two different approaches have been used. Felix Klein suggested defining geometries through their symmetries. The presentation of Euclidean spaces given in this article is essentially derived from his Erlangen program, with the emphasis given to the groups of translations and isometries.
On the other hand, David Hilbert proposed a set of axioms, inspired by Euclid's postulates. They belong to synthetic geometry, as they do not involve any definition of real numbers. Later G. D. Birkhoff and Alfred Tarski proposed simpler sets of axioms, which use real numbers (see Birkhoff's axioms and Tarski's axioms).
In "Geometric Algebra", Emil Artin has proved that all these definitions of a Euclidean space are equivalent. It is rather easy to prove that all definitions of Euclidean spaces satisfy Hilbert's axioms, and that those involving real numbers (including the above given definition) are equivalent. The difficult part of Artin's proof is the following. In Hilbert's axioms, congruence is an equivalence relation on segments. One can thus define the "length" of a segment as its equivalence class. One must thus prove that this length satisfies properties that characterize nonnegative real numbers. Artin proved this with axioms equivalent to those of Hilbert.
Usage.
Since the ancient Greeks, Euclidean space has been used for modeling shapes in the physical world. It is thus used in many sciences, such as physics, mechanics, and astronomy. It is also widely used in all technical areas that are concerned with shapes, figures, location and position, such as architecture, geodesy, topography, navigation, industrial design, or technical drawing.
Spaces of dimension higher than three occur in several modern theories of physics; see Higher dimension. They also occur in configuration spaces of physical systems.
Besides Euclidean geometry, Euclidean spaces are also widely used in other areas of mathematics. Tangent spaces of differentiable manifolds are Euclidean vector spaces. More generally, a manifold is a space that is locally approximated by Euclidean spaces. Most non-Euclidean geometries can be modeled by a manifold, and embedded in a Euclidean space of higher dimension. For example, an elliptic space can be modeled by an ellipsoid. It is common to represent in a Euclidean space mathematical objects that are "a priori" not of a geometrical nature. An example among many is the usual representation of graphs.
Other geometric spaces.
Since the introduction, at the end of 19th century, of non-Euclidean geometries, many sorts of spaces have been considered, about which one can do geometric reasoning in the same way as with Euclidean spaces. In general, they share some properties with Euclidean spaces, but may also have properties that could appear as rather strange. Some of these spaces use Euclidean geometry for their definition, or can be modeled as subspaces of a Euclidean space of higher dimension. When such a space is defined by geometrical axioms, embedding the space in a Euclidean space is a standard way for proving consistency of its definition, or, more precisely for proving that its theory is consistent, if Euclidean geometry is consistent (which cannot be proved).
Affine space.
A Euclidean space is an affine space equipped with a metric. Affine spaces have many other uses in mathematics. In particular, as they are defined over any field, they allow doing geometry in other contexts.
As soon as non-linear questions are considered, it is generally useful to consider affine spaces over the complex numbers as an extension of Euclidean spaces. For example, a circle and a line always have two intersection points (possibly not distinct) in the complex affine space. Therefore, most of algebraic geometry is built in complex affine spaces and affine spaces over algebraically closed fields. The shapes that are studied in algebraic geometry in these affine spaces are therefore called affine algebraic varieties.
Affine spaces over the rational numbers and more generally over algebraic number fields provide a link between (algebraic) geometry and number theory. For example, Fermat's Last Theorem can be stated as: "a Fermat curve of degree higher than two has no point in the affine plane over the rationals."
Geometry in affine spaces over finite fields has also been widely studied. For example, elliptic curves over finite fields are widely used in cryptography.
Projective space.
Originally, projective spaces were introduced by adding "points at infinity" to Euclidean spaces, and, more generally, to affine spaces, in order to make true the assertion "two coplanar lines meet in exactly one point". Projective spaces share with Euclidean and affine spaces the property of being isotropic, that is, there is no property of the space that allows distinguishing between two points or two lines. Therefore, a more isotropic definition is commonly used, which consists of defining a projective space as the set of the vector lines in a vector space of dimension one more.
As for affine spaces, projective spaces are defined over any field, and are fundamental spaces of algebraic geometry.
Non-Euclidean geometries.
"Non-Euclidean geometry" refers usually to geometrical spaces where the parallel postulate is false. They include elliptic geometry, where the sum of the angles of a triangle is more than 180°, and hyperbolic geometry, where this sum is less than 180°. Their introduction in the second half of 19th century, and the proof that their theory is consistent (if Euclidean geometry is not contradictory) is one of the paradoxes that are at the origin of the foundational crisis in mathematics of the beginning of 20th century, and motivated the systematization of axiomatic theories in mathematics.
Curved spaces.
A manifold is a space that in the neighborhood of each point resembles a Euclidean space. In technical terms, a manifold is a topological space, such that each point has a neighborhood that is homeomorphic to an open subset of a Euclidean space. Manifolds can be classified by increasing degree of this "resemblance" into topological manifolds, differentiable manifolds, smooth manifolds, and analytic manifolds. However, none of these types of "resemblance" respect distances and angles, even approximately.
Distances and angles can be defined on a smooth manifold by providing a smoothly varying Euclidean metric on the tangent spaces at the points of the manifold (these tangent spaces are thus Euclidean vector spaces). This results in a Riemannian manifold. Generally, straight lines do not exist in a Riemannian manifold, but their role is played by geodesics, which are the "shortest paths" between two points. This allows defining distances, which are measured along geodesics, and angles between geodesics, which are the angle of their tangents in the tangent space at their intersection. So, Riemannian manifolds behave locally like a Euclidean space that has been bent.
Euclidean spaces are trivially Riemannian manifolds. A well-known example of a Riemannian manifold that is not a Euclidean space is the surface of a sphere. In this case, geodesics are arcs of great circles, which are called orthodromes in the context of navigation. More generally, the spaces of non-Euclidean geometries can be realized as Riemannian manifolds.
Pseudo-Euclidean space.
An inner product of a real vector space is a positive definite bilinear form, and so characterized by a positive definite quadratic form. A pseudo-Euclidean space is an affine space with an associated real vector space equipped with a non-degenerate quadratic form (that may be indefinite).
A fundamental example of such a space is the Minkowski space, which is the space-time of Einstein's special relativity. It is a four-dimensional space, where the metric is defined by the quadratic form
formula_73
where the last coordinate ("t") is temporal, and the other three ("x", "y", "z") are spatial.
To take gravity into account, general relativity uses a pseudo-Riemannian manifold that has Minkowski spaces as tangent spaces. The curvature of this manifold at a point is a function of the value of the gravitational field at this point.
Footnotes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{E}^n"
},
{
"math_id": 1,
"text": "\\mathbb{E}^n"
},
{
"math_id": 2,
"text": "\\R^n"
},
{
"math_id": 3,
"text": "\\R^n."
},
{
"math_id": 4,
"text": "\\overrightarrow E."
},
{
"math_id": 5,
"text": "\\overrightarrow E"
},
{
"math_id": 6,
"text": "P+(v+w)= (P+v)+w."
},
{
"math_id": 7,
"text": "\\overrightarrow {PQ}\\vphantom{\\frac){}}."
},
{
"math_id": 8,
"text": "\\overrightarrow F = \\Bigl\\{\\overrightarrow {PQ} \\mathrel{\\Big|} P\\in F, Q\\in F \\Bigr\\}\\vphantom{\\frac({}}"
},
{
"math_id": 9,
"text": "\\overrightarrow F"
},
{
"math_id": 10,
"text": "F = \\Bigl\\{P+v \\mathrel{\\Big|} v\\in \\overrightarrow F \\Bigr\\}."
},
{
"math_id": 11,
"text": "\\overrightarrow V"
},
{
"math_id": 12,
"text": "\\overrightarrow E,"
},
{
"math_id": 13,
"text": "P + \\overrightarrow V = \\Bigl\\{P + v \\mathrel{\\Big|} v\\in \\overrightarrow V \\Bigr\\}"
},
{
"math_id": 14,
"text": "\\Bigl\\{P + \\lambda \\overrightarrow{PQ} \\mathrel{\\Big|} \\lambda \\in \\R \\Bigr\\},\\vphantom{\\frac({}}"
},
{
"math_id": 15,
"text": "\\Bigl\\{O + (1-\\lambda)\\overrightarrow{OP} + \\lambda \\overrightarrow{OQ} \\mathrel{\\Big|} \\lambda \\in \\R \\Bigr\\},\\vphantom{\\frac({}}"
},
{
"math_id": 16,
"text": "\\bigl\\{(1-\\lambda) P + \\lambda Q \\mathrel{\\big|} \\lambda \\in \\R\\bigr\\}."
},
{
"math_id": 17,
"text": "PQ = QP = \\Bigl\\{P+\\lambda \\overrightarrow{PQ} \\mathrel{\\Big|} 0 \\le \\lambda \\le 1\\Bigr\\}.\\vphantom{\\frac({}}"
},
{
"math_id": 18,
"text": "T= S+v."
},
{
"math_id": 19,
"text": "P + \\overrightarrow S."
},
{
"math_id": 20,
"text": "\\begin{align}\n\\overrightarrow E \\times \\overrightarrow E &\\to \\R\\\\\n(x,y)&\\mapsto \\langle x,y \\rangle\n\\end{align}"
},
{
"math_id": 21,
"text": "\\langle x,x \\rangle"
},
{
"math_id": 22,
"text": "\\langle x,y \\rangle"
},
{
"math_id": 23,
"text": "\\|x\\| = \\sqrt {x \\cdot x}."
},
{
"math_id": 24,
"text": "d(P,Q) = \\Bigl\\|\\overrightarrow {PQ}\\Bigr\\|.\\vphantom{\\frac({}}"
},
{
"math_id": 25,
"text": "|PQ|"
},
{
"math_id": 26,
"text": "d(P,Q)\\le d(P,R) + d(R, Q)."
},
{
"math_id": 27,
"text": " u \\cdot v =0"
},
{
"math_id": 28,
"text": "\\overrightarrow {AB}\\vphantom{\\frac){}}"
},
{
"math_id": 29,
"text": "\\overrightarrow {AC}\\vphantom{\\frac){}}"
},
{
"math_id": 30,
"text": "|BC|^2 = |AB|^2 + |AC|^2."
},
{
"math_id": 31,
"text": "\\begin{align}\n|BC|^2 &= \\overrightarrow {BC}\\cdot \\overrightarrow {BC} \\vphantom{\\frac({}}\\\\[2mu]\n&=\\Bigl(\\overrightarrow {BA}+\\overrightarrow {AC}\\Bigr) \\cdot \\Bigl(\\overrightarrow {BA}+\\overrightarrow {AC}\\Bigr)\\\\[4mu]\n&=\\overrightarrow {BA}\\cdot \\overrightarrow {BA}+ \\overrightarrow {AC}\\cdot \\overrightarrow {AC} -2 \\overrightarrow {AB}\\cdot \\overrightarrow {AC}\\\\[6mu]\n&=\\overrightarrow {AB}\\cdot \\overrightarrow {AB} + \\overrightarrow {AC}\\cdot\\overrightarrow {AC}\\\\[6mu]\n&=|AB|^2 + |AC|^2.\n\\end{align}"
},
{
"math_id": 32,
"text": "\\overrightarrow {AB}\\cdot \\overrightarrow {AC} = 0 \\vphantom{\\frac({}}"
},
{
"math_id": 33,
"text": "\\theta = \\arccos\\left(\\frac{x\\cdot y}{|x|\\,|y|}\\right)"
},
{
"math_id": 34,
"text": "\\operatorname{angle}(\\lambda x, \\mu y)= \\begin{cases}\n\\operatorname{angle}(x, y) \\qquad\\qquad \\text{if } \\lambda \\text{ and } \\mu \\text{ have the same sign}\\\\\n\\pi - \\operatorname{angle}(x, y)\\qquad \\text{otherwise}.\n\\end{cases}"
},
{
"math_id": 35,
"text": "\\overrightarrow {AB}\\vphantom{\\frac({}}"
},
{
"math_id": 36,
"text": "\\overrightarrow {AC}.\\vphantom{\\frac({}}"
},
{
"math_id": 37,
"text": "(e_1, \\dots, e_n) "
},
{
"math_id": 38,
"text": "\\|e_i\\| = 1"
},
{
"math_id": 39,
"text": "e_i\\cdot e_j = 0"
},
{
"math_id": 40,
"text": "(b_1, \\dots, b_n),"
},
{
"math_id": 41,
"text": "(e_1, \\dots, e_i)"
},
{
"math_id": 42,
"text": "(b_1, \\dots, b_i)"
},
{
"math_id": 43,
"text": "(O, e_1, \\dots, e_n)"
},
{
"math_id": 44,
"text": "e_1, \\dots, e_n."
},
{
"math_id": 45,
"text": "v"
},
{
"math_id": 46,
"text": "(e_1,e_2,e_3)"
},
{
"math_id": 47,
"text": "(x,y,z)"
},
{
"math_id": 48,
"text": "(\\alpha_1,\\alpha_2,\\alpha_3)"
},
{
"math_id": 49,
"text": "v = \\alpha_1 e_1 + \\alpha_2 e_2 + \\alpha_3 e_3"
},
{
"math_id": 50,
"text": "\\alpha_i"
},
{
"math_id": 51,
"text": "v\\cdot e_i."
},
{
"math_id": 52,
"text": "\\overrightarrow {OP}.\\vphantom{\\frac({}}"
},
{
"math_id": 53,
"text": "d(f(x), f(y))= d(x,y)."
},
{
"math_id": 54,
"text": "\\|f(x)\\| = \\|x\\|,"
},
{
"math_id": 55,
"text": "f(x)\\cdot f(y)=x\\cdot y,"
},
{
"math_id": 56,
"text": "x \\cdot y=\\tfrac 1 2 \\left(\\|x+y\\|^2 - \\|x\\|^2 - \\|y\\|^2\\right)."
},
{
"math_id": 57,
"text": "f\\colon E\\to F"
},
{
"math_id": 58,
"text": "\\overrightarrow f \\colon \\overrightarrow E \\to \\overrightarrow F"
},
{
"math_id": 59,
"text": "\\overrightarrow f\\colon \\overrightarrow E\\to \\overrightarrow F"
},
{
"math_id": 60,
"text": "f(P)=O' + \\overrightarrow f\\Bigl(\\overrightarrow{OP}\\Bigr)\\vphantom{\\frac({}}"
},
{
"math_id": 61,
"text": "P\\mapsto \\overrightarrow {OP},\\vphantom{\\frac({}}"
},
{
"math_id": 62,
"text": "v\\mapsto O+v."
},
{
"math_id": 63,
"text": "\\begin{align}\nE&\\to \\R^n\\\\\nP&\\mapsto \\Bigl(e_1\\cdot \\overrightarrow {OP}, \\dots, e_n\\cdot\\overrightarrow {OP}\\Bigr),\\vphantom{\\frac({}}\n\\end{align}"
},
{
"math_id": 64,
"text": "\\begin{align}\n\\R^n&\\to E \\\\\n(x_1\\dots, x_n)&\\mapsto \\left(O+x_1e_1+ \\dots + x_ne_n\\right).\n\\end{align}"
},
{
"math_id": 65,
"text": "P \\to P+v."
},
{
"math_id": 66,
"text": "\\overrightarrow f"
},
{
"math_id": 67,
"text": "\\overrightarrow {PQ},\\vphantom{\\frac({}}"
},
{
"math_id": 68,
"text": "\\overrightarrow f\\Bigl(\\overrightarrow {OP}\\Bigr)= f(P)-f(O).\\vphantom{\\frac({}}"
},
{
"math_id": 69,
"text": "f \\to \\overrightarrow f"
},
{
"math_id": 70,
"text": "g=t^{-1}\\circ f"
},
{
"math_id": 71,
"text": "f= t\\circ g,"
},
{
"math_id": 72,
"text": "\\mathbb R^n,"
},
{
"math_id": 73,
"text": "x^2+y^2+z^2-t^2,"
}
] | https://en.wikipedia.org/wiki?curid=9697 |
9697733 | Contact mechanics | Study of the deformation of solids that touch each other
Contact mechanics is the study of the deformation of solids that touch each other at one or more points. A central distinction in contact mechanics is between stresses acting perpendicular to the contacting bodies' surfaces (known as normal stress) and frictional stresses acting tangentially between the surfaces (shear stress). Normal contact mechanics or frictionless contact mechanics focuses on normal stresses caused by applied normal forces and by the adhesion present on surfaces in close contact, even if they are clean and dry.
"Frictional contact mechanics" emphasizes the effect of friction forces.
Contact mechanics is part of mechanical engineering. The physical and mathematical formulation of the subject is built upon the mechanics of materials and continuum mechanics and focuses on computations involving elastic, viscoelastic, and plastic bodies in static or dynamic contact. Contact mechanics provides necessary information for the safe and energy efficient design of technical systems and for the study of tribology, contact stiffness, electrical contact resistance and indentation hardness. Principles of contact mechanics are implemented in applications such as locomotive wheel-rail contact, coupling devices, braking systems, tires, bearings, combustion engines, mechanical linkages, gasket seals, metalworking, metal forming, ultrasonic welding, electrical contacts, and many others. Current challenges faced in the field may include stress analysis of contact and coupling members and the influence of lubrication and material design on friction and wear. Applications of contact mechanics further extend into the micro- and nanotechnological realm.
The original work in contact mechanics dates back to 1881 with the publication of the paper "On the contact of elastic solids" ("Über die Berührung fester elastischer Körper") by Heinrich Hertz. Hertz was attempting to understand how the optical properties of multiple, stacked lenses might change with the force holding them together. Hertzian contact stress refers to the localized stresses that develop as two curved surfaces come in contact and deform slightly under the imposed loads. This amount of deformation is dependent on the modulus of elasticity of the material in contact. It gives the contact stress as a function of the normal contact force, the radii of curvature of both bodies and the modulus of elasticity of both bodies. Hertzian contact stress forms the foundation for the equations for load bearing capabilities and fatigue life in bearings, gears, and any other bodies where two surfaces are in contact.
History.
Classical contact mechanics is most notably associated with Heinrich Hertz. In 1882, Hertz solved the contact problem of two elastic bodies with curved surfaces. This still-relevant classical solution provides a foundation for modern problems in contact mechanics. For example, in mechanical engineering and tribology, "Hertzian contact stress" is a description of the stress within mating parts. The Hertzian contact stress usually refers to the stress close to the area of contact between two spheres of different radii.
It was not until nearly one hundred years later that Johnson, Kendall, and Roberts found a similar solution for the case of adhesive contact. This theory was rejected by Boris Derjaguin and co-workers who proposed a different theory of adhesion in the 1970s. The Derjaguin model came to be known as the DMT (after Derjaguin, Muller and Toporov) model, and the Johnson et al. model came to be known as the JKR (after Johnson, Kendall and Roberts) model for adhesive elastic contact. This rejection proved to be instrumental in the development of the Tabor and later Maugis parameters that quantify which contact model (of the JKR and DMT models) represent adhesive contact better for specific materials.
Further advancement in the field of contact mechanics in the mid-twentieth century may be attributed to names such as Bowden and Tabor. Bowden and Tabor were the first to emphasize the importance of surface roughness for bodies in contact. Through investigation of the surface roughness, the true contact area between friction partners is found to be less than the apparent contact area. Such understanding also drastically changed the direction of undertakings in tribology. The works of Bowden and Tabor yielded several theories in contact mechanics of rough surfaces.
The contributions of Archard (1957) must also be mentioned in discussion of pioneering works in this field. Archard concluded that, even for rough elastic surfaces, the contact area is approximately proportional to the normal force. Further important insights along these lines were provided by Greenwood and Williamson (1966), Bush (1975), and Persson (2002). The main findings of these works were that the true contact surface in rough materials is generally proportional to the normal force, while the parameters of individual micro-contacts (i.e., pressure, size of the micro-contact) are only weakly dependent upon the load.
Classical solutions for non-adhesive elastic contact.
The theory of contact between elastic bodies can be used to find contact areas and indentation depths for simple geometries. Some commonly used solutions are listed below. The theory used to compute these solutions is discussed later in the article. Solutions for a multitude of other technically relevant shapes, e.g. the truncated cone, the worn sphere, rough profiles, hollow cylinders, etc., can be found in the literature.
Contact between a sphere and a half-space.
An elastic sphere of radius formula_0 indents an elastic half-space to a total deformation formula_1, causing a contact area of radius
formula_2
The applied force formula_3 is related to the displacement formula_1 by
formula_4
where
formula_5
and formula_6,formula_7 are the elastic moduli and formula_8,formula_9 the Poisson's ratios associated with each body.
The distribution of normal pressure in the contact area as a function of distance from the center of the circle is
formula_10
where formula_11 is the maximum contact pressure given by
formula_12
The radius of the circle is related to the applied load formula_3 by the equation
formula_13
The total deformation formula_1 is related to the maximum contact pressure by
formula_14
The maximum shear stress occurs in the interior at formula_15 for formula_16.
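For illustration, a short Python sketch combining the standard Hertz relations for a sphere on a half-space is given below. It is not part of the original article; it assumes the usual forms a = sqrt(R d), F = (4/3) E* sqrt(R) d^(3/2) and p0 = 3F/(2 pi a^2), with the combined (reduced) modulus defined above, and the material data and load in the example are illustrative only.

    import math

    def hertz_sphere_on_halfspace(F, R, E1, nu1, E2, nu2):
        # Reduced (contact) modulus: 1/E* = (1 - nu1^2)/E1 + (1 - nu2^2)/E2
        E_star = 1.0 / ((1.0 - nu1**2) / E1 + (1.0 - nu2**2) / E2)
        # Indentation depth from F = (4/3) * E* * sqrt(R) * d**1.5
        d = (3.0 * F / (4.0 * E_star * math.sqrt(R))) ** (2.0 / 3.0)
        a = math.sqrt(R * d)                      # contact radius
        p0 = 3.0 * F / (2.0 * math.pi * a**2)     # maximum contact pressure at the centre
        return d, a, p0

    # Illustrative numbers: 100 N on a 10 mm radius sphere, steel-like properties
    d, a, p0 = hertz_sphere_on_halfspace(F=100.0, R=0.010,
                                         E1=210e9, nu1=0.3, E2=210e9, nu2=0.3)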
Contact between two spheres.
For contact between two spheres of radii formula_17 and formula_18, the area of contact is a circle of radius formula_19. The equations are the same as for a sphere in contact with a half plane except that the effective radius formula_0 is defined as
formula_20
Contact between two crossed cylinders of equal radius.
This is equivalent to contact between a sphere of radius formula_0 and a plane.
Contact between a rigid cylinder with flat end and an elastic half-space.
If a rigid cylinder is pressed into an elastic half-space, it creates a pressure distribution described by
formula_21
where formula_0 is the radius of the cylinder and
formula_22
The relationship between the indentation depth and the normal force is given by
formula_23
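A very small sketch of this case is given below (not part of the original article). It assumes the commonly quoted flat-punch result d = F/(2 R E*), i.e. a constant contact stiffness 2 R E* independent of the indentation depth; the numbers in the example are illustrative only.

    import math

    def flat_punch(F, R, E_star):
        # Rigid flat-ended cylindrical punch of radius R on an elastic half-space.
        d = F / (2.0 * R * E_star)        # indentation depth, assuming F = 2 R E* d
        p_mean = F / (math.pi * R**2)     # mean pressure over the fixed contact area pi R^2
        return d, p_mean

    d, p_mean = flat_punch(F=50.0, R=0.001, E_star=1.0e9)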
Contact between a rigid conical indenter and an elastic half-space.
In the case of indentation of an elastic half-space of Young's modulus formula_24 using a rigid conical indenter, the depth of the contact region formula_25 and contact radius formula_19 are related by
formula_26
with formula_27 defined as the angle between the plane and the side surface of the cone. The total indentation depth formula_1 is given by:
formula_28
The total force is
formula_29
The pressure distribution is given by
formula_30
The stress has a logarithmic singularity at the tip of the cone.
Contact between two cylinders with parallel axes.
In contact between two cylinders with parallel axes, the force is linearly proportional to the length of cylinders "L" and to the indentation depth "d":
formula_31
The radii of curvature are entirely absent from this relationship. The contact radius is described through the usual relationship
formula_2
with
formula_20
as in contact between two spheres. The maximum pressure is equal to
formula_32
Bearing contact.
The contact in the case of bearings is often a contact between a convex surface (male cylinder or sphere) and a concave surface (female cylinder or sphere: bore or hemispherical cup).
The Method of Dimensionality Reduction.
Some contact problems can be solved with the Method of Dimensionality Reduction (MDR). In this method, the initial three-dimensional system is replaced with a contact of a body with a linear elastic or viscoelastic foundation (see fig.). The properties of one-dimensional systems coincide exactly with those of the original three-dimensional system, if the form of the bodies is modified and the elements of the foundation are defined according to the rules of the MDR. MDR is based on the solution to axisymmetric contact problems first obtained by Ludwig Föppl (1941) and Gerhard Schubert (1942).
However, for exact analytical results, it is required that the contact problem is axisymmetric and the contacts are compact.
Hertzian theory of non-adhesive elastic contact.
The classical theory of contact focused primarily on non-adhesive contact where no tension force is allowed to occur within the contact area, i.e., contacting bodies can be separated without adhesion forces. Several analytical and numerical approaches have been used to solve contact problems that satisfy the no-adhesion condition. Complex forces and moments are transmitted between the bodies where they touch, so problems in contact mechanics can become quite sophisticated. In addition, the contact stresses are usually a nonlinear function of the deformation. To simplify the solution procedure, a frame of reference is usually defined in which the objects (possibly in motion relative to one another) are static. They interact through surface tractions (or pressures/stresses) at their interface.
As an example, consider two objects which meet at some surface formula_33 in the (formula_34,formula_35)-plane with the formula_36-axis assumed normal to the surface. One of the bodies will experience a normally-directed pressure distribution formula_37 and in-plane surface traction distributions formula_38 and formula_39 over the region formula_33. In terms of a Newtonian force balance, the forces:
formula_40
must be equal and opposite to the forces established in the other body. The moments corresponding to these forces:
formula_41
are also required to cancel between bodies so that they are kinematically immobile.
Assumptions in Hertzian theory.
The following assumptions are made in determining the solutions of Hertzian contact problems: the strains are small and within the elastic limit; the surfaces are continuous and non-conforming (implying that the area of contact is much smaller than the characteristic dimensions of the contacting bodies); each body can be treated as an elastic half-space; and the surfaces are frictionless.
Additional complications arise when some or all these assumptions are violated and such contact problems are usually called non-Hertzian.
Analytical solution techniques.
Analytical solution methods for non-adhesive contact problem can be classified into two types based on the geometry of the area of contact. A conforming contact is one in which the two bodies touch at multiple points before any deformation takes place (i.e., they just "fit together"). A non-conforming contact is one in which the shapes of the bodies are dissimilar enough that, under zero load, they only touch at a point (or possibly along a line). In the non-conforming case, the contact area is small compared to the sizes of the objects and the stresses are highly concentrated in this area. Such a contact is called "concentrated", otherwise it is called "diversified".
A common approach in linear elasticity is to superpose a number of solutions each of which corresponds to a point load acting over the area of contact. For example, in the case of loading of a half-plane, the Flamant solution is often used as a starting point and then generalized to various shapes of the area of contact. The force and moment balances between the two bodies in contact act as additional constraints to the solution.
Point contact on a (2D) half-plane.
A starting point for solving contact problems is to understand the effect of a "point-load" applied to an isotropic, homogeneous, and linear elastic half-plane, shown in the figure to the right. The problem may be either plane stress or plane strain. This is a boundary value problem of linear elasticity subject to the traction boundary conditions:
formula_42
where formula_43 is the Dirac delta function. The boundary conditions state that there are no shear stresses on the surface and a singular normal force P is applied at (0, 0). Applying these conditions to the governing equations of elasticity produces the result
formula_44
for some point, formula_45, in the half-plane. The circle shown in the figure indicates a surface on which the maximum shear stress is constant. From this stress field, the strain components and thus the displacements of all material points may be determined.
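As a hedged illustration of the expressions above, the short Python function below evaluates the three stress components at a chosen point in the half-plane; the load value and probe point are arbitrary assumptions:

```python
# Evaluate the point-load (Flamant) stress field quoted above at a point (x, z).
import math

def flamant_stresses(P, x, z):
    """Return (sigma_xx, sigma_zz, sigma_xz) for a normal point load P at the origin."""
    r2 = x**2 + z**2
    c = -2 * P / (math.pi * r2**2)
    return c * x**2 * z, c * z**3, c * x * z**2

# Arbitrary example: unit load, probe point below and to the side of the load.
print(flamant_stresses(P=1.0, x=0.5e-3, z=1.0e-3))
```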
Line contact on a (2D) half-plane.
Normal loading over a region.
Suppose that, rather than a point load formula_46, a distributed load formula_47 is applied to the surface over the range formula_48. The principle of linear superposition can be applied to determine the resulting stress field as the solution to the integral equations:
formula_49
Shear loading over a region.
The same principle applies for loading on the surface in the plane of the surface. These kinds of tractions tend to arise as a result of friction. The solution is similar to the above (for both singular loads formula_50 and distributed loads formula_51) but altered slightly:
formula_52
These results may themselves be superposed onto those given above for normal loading to deal with more complex loads.
Point contact on a (3D) half-space.
Analogously to the Flamant solution for the 2D half-plane, fundamental solutions are known for the linearly elastic 3D half-space as well. These were found by Boussinesq for a concentrated normal load and by Cerruti for a tangential load. See the section on this in Linear elasticity.
Numerical solution techniques.
Distinctions between conforming and non-conforming contact do not have to be made when numerical solution schemes are employed to solve contact problems. These methods do not rely on further assumptions within the solution process since they are based solely on the general formulation of the underlying equations. Besides the standard equations describing the deformation and motion of bodies, two additional inequalities can be formulated. The first simply restricts the motion and deformation of the bodies by the assumption that no penetration can occur. Hence the gap formula_53 between two bodies can only be positive or zero
formula_54
where formula_55 denotes contact. The second assumption in contact mechanics is related to the fact that no tension force is allowed to occur within the contact area (contacting bodies can be lifted up without adhesion forces). This leads to an inequality which the stresses have to obey at the contact interface. It is formulated for the normal stress formula_56.
At locations where there is contact between the surfaces the gap is zero, i.e. formula_55, and there the normal stress is non-zero, formula_57. At locations where the surfaces are not in contact the normal stress is identically zero, formula_58, while the gap is positive, formula_59. This type of complementarity formulation can be expressed in the so-called Kuhn–Tucker form, viz.
formula_60
These conditions are valid in a general way. The mathematical formulation of the gap depends upon the kinematics of the underlying theory of the solid (e.g., linear or nonlinear solid in two or three dimensions, beam or shell model). By restating the normal stress formula_61 in terms of the contact pressure formula_62, i.e., formula_63, the Kuhn–Tucker problem can be restated in standard complementarity form, i.e., formula_64 In the linear elastic case the gap can be formulated as formula_65 where formula_66 is the rigid body separation, formula_67 is the geometry/topography of the contact (cylinder and roughness) and formula_68 is the elastic deformation/deflection. If the contacting bodies are approximated as linear elastic half-spaces, the Boussinesq–Cerruti integral equation solution can be applied to express the deformation (formula_69) as a function of the contact pressure (formula_62); i.e., formula_70 where formula_71 for line loading of an elastic half-space and formula_72 for point loading of an elastic half-space.
After discretization the linear elastic contact mechanics problem can be stated in standard Linear Complementarity Problem (LCP) form.
formula_73
where formula_74 is a matrix whose elements are so-called influence coefficients relating the contact pressure and the deformation. The strict LCP formulation of the CM problem presented above allows for direct application of well-established numerical solution techniques such as Lemke's pivoting algorithm. The Lemke algorithm has the advantage that it finds the numerically exact solution within a finite number of iterations. The MATLAB implementation presented by Almqvist et al. is one example that can be employed to solve the problem numerically. In addition, an example code for an LCP solution of a 2D linear elastic contact mechanics problem has also been made public at the MATLAB File Exchange by Almqvist et al.
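To make the LCP form above concrete, the sketch below solves h = q + C p with p ≥ 0, h ≥ 0 and pᵀh = 0. It deliberately does not reproduce Lemke's pivoting algorithm or the cited MATLAB code; instead it uses a simple projected Gauss–Seidel iteration, and the influence matrix C is a small synthetic symmetric positive-definite matrix rather than one derived from the Boussinesq–Cerruti kernel, so it illustrates the complementarity structure only:

```python
# Projected Gauss-Seidel iteration for the standard LCP: h = q + C p, p >= 0, h >= 0, p.h = 0.
# C and q are synthetic stand-ins for the influence matrix and the term h0 + g.
import numpy as np

def solve_lcp_pgs(C, q, iters=2000, tol=1e-12):
    p = np.zeros_like(q)
    for _ in range(iters):
        p_old = p.copy()
        for i in range(len(q)):
            r = q[i] + C[i] @ p - C[i, i] * p[i]   # residual excluding the own term
            p[i] = max(0.0, -r / C[i, i])          # enforce h_i = 0 or p_i = 0
        if np.max(np.abs(p - p_old)) < tol:
            break
    return p, q + C @ p

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
C = A @ A.T + 6 * np.eye(6)        # synthetic symmetric positive-definite "influence" matrix
q = rng.standard_normal(6)         # plays the role of h0 + g
p, h = solve_lcp_pgs(C, q)
print(p.min() >= 0, h.min() >= -1e-9, float(p @ h))   # feasibility and complementarity ~ 0
```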
Contact between rough surfaces.
When two bodies with rough surfaces are pressed against each other, the true contact area formed between the two bodies, formula_75, is much smaller than the apparent or nominal contact area formula_76. The mechanics of contacting rough surfaces are discussed in terms of normal contact mechanics and static frictional interactions. Natural and engineering surfaces typically exhibit roughness features, known as asperities, across a broad range of length scales down to the molecular level, with surface structures exhibiting self affinity, also known as surface fractality. It is recognized that the self affine structure of surfaces is the origin of the linear scaling of true contact area with applied pressure. Assuming a model of shearing welded contacts in tribological interactions, this ubiquitously observed linearity between contact area and pressure can also be considered the origin of the linearity of the relationship between static friction and applied normal force.
In contact between a "random rough" surface and an elastic half-space, the true contact area is related to the normal force formula_3 by
formula_77
with formula_78 equal to the root mean square (also known as the quadratic mean) of the surface slope and formula_79. The mean pressure in the true contact surface
formula_80
can be reasonably estimated as half of the effective elastic modulus formula_81 multiplied by the root mean square of the surface slope formula_78.
An overview of the GW model.
Greenwood and Williamson in 1966 (GW) proposed a theory of elastic contact mechanics of rough surfaces which is today the foundation of many theories in tribology (friction, adhesion, thermal and electrical conductance, wear, etc.). They considered the contact between a smooth rigid plane and a nominally flat deformable rough surface covered with round-tipped asperities of the same radius R. Their theory assumes that the deformation of each asperity is independent of that of its neighbours and is described by the Hertz model. The heights of asperities have a random distribution. The probability that asperity height is between formula_36 and formula_82 is formula_83. The authors calculated the number of contact spots n, the total contact area formula_84 and the total load P in the general case. They gave these formulas in two forms: a basic form and one using standardized variables. If one assumes that N asperities cover a rough surface, then the expected number of contacts is
formula_85
The expected total area of contact can be calculated from the formula
formula_86
and the expected total force is given by
formula_87
where:
R, radius of curvature of the microasperity,
z, height of the microasperity measured from the profile line,
d, separation between the flat surface and the reference plane from which asperity heights are measured,
formula_88, composite Young's modulus of elasticity,
formula_89, moduli of elasticity of the two surfaces,
formula_90, Poisson's ratios of the two surfaces.
Greenwood and Williamson introduced a standardized separation formula_91 and a standardized height distribution formula_92 whose standard deviation is equal to one. The formulas in standardized form are presented below; a numerical sketch evaluating them follows the list of symbol definitions.
formula_93
where:
d is the separation,
formula_75 is the nominal contact area,
formula_94 is the surface density of asperities,
formula_81 is the effective Young modulus.
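A hedged numerical evaluation of the standardized GW expressions is given below; the Gaussian height distribution and all surface parameters are assumed purely for illustration:

```python
# Evaluate the standardized GW quantities n, A_a and P by numerical quadrature,
# assuming a Gaussian (standard normal) asperity-height distribution.
import numpy as np
from scipy import integrate, stats

def F_n(n, h):
    """F_n(h) = integral_h^inf (s - h)^n * phi*(s) ds with phi* standard normal."""
    val, _ = integrate.quad(lambda s: (s - h)**n * stats.norm.pdf(s), h, np.inf)
    return val

# Assumed illustrative parameters (not from the article):
eta = 1e10        # asperity density [1/m^2]
A_nom = 1e-4      # nominal contact area [m^2]
R = 1e-5          # asperity tip radius [m]
sigma = 1e-6      # standard deviation of asperity heights [m]
E_r = 100e9       # composite modulus [Pa]
h = 1.0           # standardized separation d / sigma

n_contacts = eta * A_nom * F_n(0, h)
A_real = np.pi * eta * A_nom * R * sigma * F_n(1, h)
P_load = (4.0 / 3.0) * eta * A_nom * E_r * np.sqrt(R) * sigma**1.5 * F_n(1.5, h)
print(n_contacts, A_real, P_load)
```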
"formula_75" and formula_46 can be determined when the formula_95 terms are calculated for the given surfaces using the convolution of the surface roughness formula_92. Several studies have followed the suggested curve fits for formula_95 assuming a Gaussian surface high distribution with curve fits presented by Arcoumanis et al. and Jedynak among others. It has been repeatedly observed that engineering surfaces do not demonstrate Gaussian surface height distributions e.g. Peklenik. Leighton et al. presented fits for crosshatched IC engine cylinder liner surfaces together with a process for determining the formula_95 terms for any measured surfaces. Leighton et al. demonstrated that Gaussian fit data is not accurate for modelling any engineered surfaces and went on to demonstrate that early running of the surfaces results in a gradual transition which significantly changes the surface topography, load carrying capacity and friction.
Recently the exact approximants to formula_84 and formula_46 were published by Jedynak. They are given by the following rational formulas, which are approximants to the integrals formula_95. They are calculated for the Gaussian distribution of asperities, which has been shown to be unrealistic for engineering surfaces but can be assumed where friction, load carrying capacity or real contact area results are not critical to the analysis.
formula_96
For formula_97 the coefficients are
formula_98
The maximum relative error is formula_99.
For formula_100 the coefficients are
formula_101
The maximum relative error is formula_102. The paper also contains the exact expressions for formula_95
formula_103
where erfc(z) means the complementary error function and formula_104 is the modified Bessel function of the second kind.
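As a small consistency check (illustrative only, using SciPy's special functions), the closed forms above can be compared against direct numerical integration of the defining integrals; the differences should be close to zero if the expressions are consistent:

```python
# Compare the closed-form F_1 and F_3/2 quoted above with direct quadrature.
import numpy as np
from scipy import integrate, special

def F_num(n, h):
    val, _ = integrate.quad(
        lambda s: (s - h)**n * np.exp(-s**2 / 2) / np.sqrt(2 * np.pi), h, np.inf)
    return val

def F1_exact(h):
    return np.exp(-h**2 / 2) / np.sqrt(2 * np.pi) - 0.5 * h * special.erfc(h / np.sqrt(2))

def F32_exact(h):
    x = h**2 / 4
    return (np.exp(-x) * np.sqrt(h) / (4 * np.sqrt(np.pi))
            * ((h**2 + 1) * special.kv(0.25, x) - h**2 * special.kv(0.75, x)))

for h in (0.5, 1.0, 2.0):
    print(h, F1_exact(h) - F_num(1, h), F32_exact(h) - F_num(1.5, h))
```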
For the situation where the asperities on the two surfaces have a Gaussian height distribution and the peaks can be assumed to be spherical, the average contact pressure is sufficient to cause yield when formula_105, where formula_106 is the uniaxial yield stress and formula_107 is the indentation hardness. Greenwood and Williamson defined a dimensionless parameter formula_108 called the plasticity index that could be used to determine whether contact would be elastic or plastic.
The Greenwood-Williamson model requires knowledge of two statistically dependent quantities: the standard deviation of the surface roughness and the curvature of the asperity peaks. An alternative definition of the plasticity index has been given by Mikic. Yield occurs when the pressure is greater than the uniaxial yield stress. Since the yield stress is proportional to the indentation hardness formula_109, Mikic defined the plasticity index for elastic-plastic contact to be
formula_110
In this definition formula_108 represents the micro-roughness in a state of complete plasticity, and only one statistical quantity, the rms slope, is needed, which can be calculated from surface measurements. For formula_111, the surface behaves elastically during contact.
In both the Greenwood-Williamson and Mikic models the load is assumed to be proportional to the deformed area. Hence, whether the system behaves plastically or elastically is independent of the applied normal force.
An overview of the GT model.
The model proposed by Greenwood and Tripp (GT) extended the GW model to contact between two rough surfaces. The GT model is widely used in the field of elastohydrodynamic analysis.
The most frequently cited equations given by the GT model are for the asperity contact area
formula_112
and load carried by asperities
formula_113
where:
formula_114, roughness parameter,
formula_75, nominal contact area,
formula_115, Stribeck oil film parameter, first defined by Stribeck as formula_116,
formula_117, effective elastic modulus,
formula_118, statistical functions introduced to match the assumed Gaussian distribution of asperities.
The exact solutions for formula_119 and formula_46 were first presented by Jedynak. They are expressed in terms of formula_120 as follows. They are calculated for the Gaussian distribution of asperities, which has been shown to be unrealistic for engineering surfaces but can be assumed where friction, load carrying capacity or real contact area results are not critical to the analysis.
formula_121
where erfc(z) means the complementary error function and formula_104 is the modified Bessel function of the second kind.
The paper also contains a comprehensive review of existing approximants to formula_122. New proposals give the most accurate approximants to formula_122 and formula_123 reported in the literature. They are given by the following rational formulas, which are very accurate approximants to the integrals formula_95. They are calculated for the Gaussian distribution of asperities
formula_124
For formula_125 the coefficients are
formula_126
The maximum relative error is formula_127.
For formula_128 the coefficients are
formula_129
The maximum relative error is formula_130.
Adhesive contact between elastic bodies.
When two solid surfaces are brought into close proximity, they experience attractive van der Waals forces. Bradley's van der Waals model provides a means of calculating the tensile force between two rigid spheres with perfectly smooth surfaces. The Hertzian model of contact does not account for adhesion. However, in the late 1960s, several contradictions were observed when the Hertz theory was compared with experiments involving contact between rubber and glass spheres.
It was observed that, though Hertz theory applied at large loads, at low loads the measured contact area was larger than that predicted by Hertz theory and remained non-zero even when the load was removed.
This indicated that adhesive forces were at work. The Johnson-Kendall-Roberts (JKR) model and the Derjaguin-Muller-Toporov (DMT) models were the first to incorporate adhesion into Hertzian contact.
Bradley model of rigid contact.
It is commonly assumed that the surface force between two atomic planes at a distance formula_36 from each other can be derived from the Lennard-Jones potential. With this assumption
formula_131
where formula_3 is the force (positive in compression), formula_132 is the total surface energy of "both" surfaces per unit area, and formula_133 is the equilibrium separation of the two atomic planes.
The Bradley model applied the Lennard-Jones potential to find the force of adhesion between two rigid spheres. The total force between the spheres is found to be
formula_134
where formula_135 are the radii of the two spheres.
The two spheres separate completely when the "pull-off force" is achieved at formula_136, at which point
formula_137
Johnson-Kendall-Roberts (JKR) model of elastic contact.
To incorporate the effect of adhesion in Hertzian contact, Johnson, Kendall, and Roberts formulated the JKR theory of adhesive contact using a balance between the stored elastic energy and the loss in surface energy. The JKR model considers the effect of contact pressure and adhesion only inside the area of contact. The general solution for the pressure distribution in the contact area in the JKR model is
formula_138
Note that in the original Hertz theory, the term containing formula_139 was neglected on the ground that tension could not be sustained in the contact zone. For contact between two spheres
formula_140
where formula_141 is the radius of the area of contact, formula_3 is the applied force, formula_132 is the total surface energy of both surfaces per unit contact area,
formula_142 are the radii, Young's moduli, and Poisson's ratios of the two spheres, and
formula_143
The approach distance between the two spheres is given by
formula_144
The Hertz equation for the area of contact between two spheres, modified to take into account the surface energy, has the form
formula_145
When the surface energy is zero, formula_146, the Hertz equation for contact between two spheres is recovered. When the applied load is zero, the contact radius is
formula_147
The tensile load at which the spheres are separated (i.e., formula_148) is predicted to be
formula_149
This force is also called the pull-off force. Note that this force is independent of the moduli of the two spheres. However, there is another possible solution for the value of formula_19 at this load. This is the critical contact area formula_150, given by
formula_151
If we define the work of adhesion as
formula_152
where formula_153 are the adhesive energies of the two surfaces and formula_154 is an interaction term, we can write the JKR contact radius as
formula_155
The tensile load at separation is
formula_156
and the critical contact radius is given by
formula_157
The critical depth of penetration is
formula_158
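The JKR relations lend themselves to a short numerical illustration. In the sketch below the modulus, radius and work of adhesion are assumed values typical of a soft elastomer, chosen only to show how the contact radius, zero-load radius and pull-off force follow from the formulas above:

```python
# JKR contact radius as a function of load, plus zero-load radius and pull-off force.
# Material values are assumed for illustration (soft elastomer contact).
import math

E_star = 1e6      # effective modulus [Pa] (assumed)
R = 1e-3          # effective radius [m] (assumed)
dgamma = 0.05     # work of adhesion [J/m^2] (assumed)

def a_jkr(F):
    term = 3 * dgamma * math.pi * R
    return (3 * R / (4 * E_star) * (F + term + math.sqrt(2 * term * F + term**2))) ** (1 / 3)

F_pulloff = -1.5 * math.pi * dgamma * R    # tensile load at separation
a0 = a_jkr(0.0)                            # contact radius at zero applied load
print(f"a(0) = {a0 * 1e6:.1f} um, pull-off force = {F_pulloff * 1e3:.3f} mN")
```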
Derjaguin-Muller-Toporov (DMT) model of elastic contact.
The Derjaguin-Muller-Toporov (DMT) model is an alternative model for adhesive contact which assumes that the contact profile remains the same as in Hertzian contact but with additional attractive interactions outside the area of contact.
The radius of contact between two spheres from DMT theory is
formula_159
and the pull-off force is
formula_160
When the pull-off force is achieved the contact area becomes zero and there is no singularity in the contact stresses at the edge of the contact area.
In terms of the work of adhesion formula_161
formula_162
and
formula_163
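For comparison with the JKR sketch above, the DMT expressions can be evaluated with the same assumed parameter values; note the larger pull-off force magnitude (2πΔγR versus 1.5πΔγR):

```python
# DMT contact radius and pull-off force, using the same assumed values as the JKR sketch.
import math

E_star, R, dgamma = 1e6, 1e-3, 0.05

def a_dmt(F):
    return (3 * R / (4 * E_star) * (F + 2 * math.pi * dgamma * R)) ** (1 / 3)

F_pulloff_dmt = -2 * math.pi * dgamma * R
print(a_dmt(0.0), F_pulloff_dmt)
```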
Tabor parameter.
In 1977, Tabor showed that the apparent contradiction between the JKR and DMT theories could be resolved by noting that the two theories were the extreme limits of a single theory parametrized by the Tabor parameter (formula_164) defined as
formula_165
where formula_133 is the equilibrium separation between the two surfaces in contact. The JKR theory applies to large, compliant spheres for which formula_164 is large. The DMT theory applies for small, stiff spheres with small values of formula_164.
Subsequently, Derjaguin and his collaborators, by applying Bradley's surface force law to an elastic half-space, confirmed that as the Tabor parameter increases, the pull-off force falls from the Bradley value formula_166 to the JKR value formula_167. More detailed calculations were later done by Greenwood, revealing the S-shaped load/approach curve which explains the jumping-on effect. A more efficient method of doing the calculations and additional results were given by Feng.
Maugis-Dugdale model of elastic contact.
Further improvement to the Tabor idea was provided by Maugis who represented the surface force in terms of a Dugdale cohesive zone approximation such that the work of adhesion is given by
formula_168
where formula_107 is the maximum force predicted by the Lennard-Jones potential and formula_169 is the maximum separation obtained by matching the areas under the Dugdale and Lennard-Jones curves (see adjacent figure). This means that the attractive force is constant for formula_170. There is no further penetration in compression. Perfect contact occurs in an area of radius formula_19 and adhesive forces of magnitude formula_107 extend to an area of radius formula_171. In the region formula_172, the two surfaces are separated by a distance formula_173 with formula_174 and formula_175. The ratio formula_176 is defined as
formula_177.
In the Maugis-Dugdale theory, the surface traction distribution is divided into two parts - one due to the Hertz contact pressure and the other from the Dugdale adhesive stress. Hertz contact is assumed in the region formula_178. The contribution to the surface traction from the Hertz pressure is given by
formula_179
where the Hertz contact force formula_180 is given by
formula_181
The penetration due to elastic compression is
formula_182
The vertical displacement at formula_183 is
formula_184
and the separation between the two surfaces at formula_183 is
formula_185
The surface traction distribution due to the adhesive Dugdale stress is
formula_186
The total adhesive force is then given by
formula_187
The compression due to Dugdale adhesion is
formula_188
and the gap at formula_183 is
formula_189
The net traction on the contact area is then given by formula_190 and the net contact force is formula_191. When formula_192 the adhesive traction drops to zero.
Non-dimensionalized values of formula_193 are introduced at this stage, defined as
formula_194
In addition, Maugis proposed a parameter formula_115 which is equivalent to the Tabor parameter formula_164. This parameter is defined as
formula_195
where the step cohesive stress formula_107 equals the theoretical stress of the Lennard-Jones potential
formula_196
Zheng and Yu suggested another value for the step cohesive stress
formula_197
to match the Lennard-Jones potential, which leads to
formula_198
Then the net contact force may be expressed as
formula_199
and the elastic compression as
formula_200
The equation for the cohesive gap between the two bodies takes the form
formula_201
This equation can be solved to obtain values of formula_202 for various values of formula_19 and formula_115. For large values of formula_115, formula_203 and the JKR model is obtained. For small values of formula_115 the DMT model is retrieved.
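One way to carry out that solution numerically is sketched below: for assumed values of the non-dimensional contact radius and of formula_115, the cohesive-gap equation is solved for formula_176 by bracketed root finding, after which the net force and elastic compression follow directly. The chosen numbers are arbitrary illustrations:

```python
# Solve the Maugis-Dugdale cohesive-gap equation above for m = c/a, then
# evaluate the non-dimensional net force and compression. Values are assumed.
import numpy as np
from scipy.optimize import brentq

def gap_equation(m, a_bar, lam):
    s = np.sqrt(m**2 - 1)
    asec = np.arccos(1.0 / m)                 # sec^-1(m) for m >= 1
    return (lam * a_bar**2 / 2 * ((m**2 - 2) * asec + s)
            + 4 * lam * a_bar / 3 * (s * asec - m + 1) - 1)

a_bar, lam = 1.2, 1.0                          # assumed illustrative values
m = brentq(gap_equation, 1.0 + 1e-9, 20.0, args=(a_bar, lam))
s, asec = np.sqrt(m**2 - 1), np.arccos(1.0 / m)
F_bar = a_bar**3 - lam * a_bar**2 * (s + m**2 * asec)
d_bar = a_bar**2 - 4.0 / 3.0 * lam * a_bar * s
print(m, F_bar, d_bar)
```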
Carpick-Ogletree-Salmeron (COS) model.
The Maugis-Dugdale model can only be solved iteratively if the value of formula_115 is not known a priori. The Carpick-Ogletree-Salmeron approximate solution simplifies the process by using the following relation to determine the contact radius formula_19:
formula_204
where formula_205 is the contact radius at zero load, and formula_206 is a transition parameter that is related to formula_115 by
formula_207
The case formula_208 corresponds exactly to JKR theory while formula_209 corresponds to DMT theory. For intermediate cases formula_210 the COS model corresponds closely to the Maugis-Dugdale solution for formula_211.
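A minimal sketch of the COS closed form is given below. The zero-load radius and pull-off force are treated as known inputs with assumed numerical values, since the relation above expresses the contact radius in terms of them and of the transition parameter:

```python
# COS approximation: contact radius versus load, with a0, Fc and beta assumed.
import math

def a_cos(F, a0, Fc, beta):
    return a0 * ((beta + math.sqrt(1 - F / Fc)) / (1 + beta)) ** (2 / 3)

def lambda_from_beta(beta):
    return -0.924 * math.log(1 - 1.02 * beta)

beta = 0.5                       # assumed transition parameter
a0, Fc = 2.0e-7, -1.0e-8         # assumed zero-load radius [m] and pull-off force [N]
print(a_cos(0.0, a0, Fc, beta), lambda_from_beta(beta))
```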
Influence of contact shape.
Even in the presence of perfectly smooth surfaces, geometry can come into play in the form of the macroscopic shape of the contacting region. When a rigid punch with a flat but oddly shaped face is carefully pulled off its soft counterpart, detachment occurs not instantaneously: detachment fronts start at pointed corners and travel inwards until the final configuration is reached, which for macroscopically isotropic shapes is almost circular. The main parameter determining the adhesive strength of flat contacts turns out to be the maximum linear size of the contact. The process of detachment, as observed experimentally, can be seen in the film.
See also.
<templatestyles src="Div col/styles.css"/>
References.
See Also: Contact Mechanics for Soft Hemi-Elliptical Fingertip<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R"
},
{
"math_id": 1,
"text": "d"
},
{
"math_id": 2,
"text": "a = \\sqrt{Rd}"
},
{
"math_id": 3,
"text": "F"
},
{
"math_id": 4,
"text": "F = \\frac{4}{3} E^*R^\\frac{1}{2}d^\\frac{3}{2}"
},
{
"math_id": 5,
"text": "\\frac{1}{E^*} = \\frac{1 - \\nu^2_1}{E_1} + \\frac{1 - \\nu^2_2}{E_2}"
},
{
"math_id": 6,
"text": "E_1"
},
{
"math_id": 7,
"text": "E_2"
},
{
"math_id": 8,
"text": "\\nu_1"
},
{
"math_id": 9,
"text": "\\nu_2"
},
{
"math_id": 10,
"text": "p(r) = p_0\\left(1 - \\frac{r^2}{a^2}\\right)^\\frac{1}{2}"
},
{
"math_id": 11,
"text": "p_0"
},
{
"math_id": 12,
"text": "p_0 = \\frac{3F}{2\\pi a^2} = \\frac{1}{\\pi}\\left(\\frac{6F{E^*}^2}{R^2}\\right)^\\frac{1}{3}"
},
{
"math_id": 13,
"text": "a^3 = \\cfrac{3FR}{4E^*}"
},
{
"math_id": 14,
"text": "d = \\frac{a^2}{R} = \\left(\\frac{9F^2}{16{E^*}^2R}\\right)^\\frac{1}{3}"
},
{
"math_id": 15,
"text": "z \\approx 0.49a"
},
{
"math_id": 16,
"text": "\\nu = 0.33"
},
{
"math_id": 17,
"text": "R_1"
},
{
"math_id": 18,
"text": "R_2"
},
{
"math_id": 19,
"text": "a"
},
{
"math_id": 20,
"text": "\\frac{1}{R} = \\frac{1}{R_1} + \\frac{1}{R_2}"
},
{
"math_id": 21,
"text": "p(r) = p_0\\left(1 - \\frac{r^2}{R^2}\\right)^{-\\frac{1}{2}}"
},
{
"math_id": 22,
"text": "p_0 = \\frac{1}{\\pi}E^*\\frac{d}{R}"
},
{
"math_id": 23,
"text": "F = 2RE^*d"
},
{
"math_id": 24,
"text": "E"
},
{
"math_id": 25,
"text": "\\epsilon"
},
{
"math_id": 26,
"text": "\\epsilon = a\\tan(\\theta)"
},
{
"math_id": 27,
"text": "\\theta"
},
{
"math_id": 28,
"text": "d = \\frac{\\pi}{2}\\epsilon"
},
{
"math_id": 29,
"text": "\n F = \\frac{\\pi E}{2\\left(1 - \\nu^2\\right)} a^2 \\tan(\\theta)\n = \\frac{2E}{\\pi\\left(1 - \\nu^2\\right)}\\frac{d^2}{\\tan(\\theta)}\n"
},
{
"math_id": 30,
"text": "\n p\\left(r\\right)\n = \\frac{Ed}{\\pi a\\left(1 - \\nu^2\\right)} \\ln\\left(\\frac{a}{r} + \\sqrt{\\left(\\frac{a}{r}\\right)^2 - 1}\\right)\n = \\frac{Ed}{\\pi a\\left(1 - \\nu^2\\right)} \\cosh^{-1}\\left(\\frac{a}{r}\\right)\n"
},
{
"math_id": 31,
"text": "F \\approx \\frac{\\pi}{4}E^*Ld"
},
{
"math_id": 32,
"text": "p_0 = \\left(\\frac{E^*F}{\\pi L R}\\right)^\\frac{1}{2}"
},
{
"math_id": 33,
"text": "S"
},
{
"math_id": 34,
"text": "x"
},
{
"math_id": 35,
"text": "y"
},
{
"math_id": 36,
"text": "z"
},
{
"math_id": 37,
"text": "p_z=p(x,y)=q_z(x,y)"
},
{
"math_id": 38,
"text": "q_x=q_x(x,y)"
},
{
"math_id": 39,
"text": "q_y=q_y(x,y)"
},
{
"math_id": 40,
"text": "\n P_z = \\int_S p(x,y)~ \\mathrm{d}A ~;~~ Q_x = \\int_S q_x(x,y)~ \\mathrm{d}A ~;~~ Q_y = \\int_S q_y(x,y)~ \\mathrm{d}A\n "
},
{
"math_id": 41,
"text": "\n M_x = \\int_S y~q_z(x,y)~ \\mathrm{d}A ~;~~ M_y = \\int_S -x~q_z(x,y)~ \\mathrm{d}A ~;~~ M_z = \\int_S [x~q_y(x,y) - y~q_x(x,y)]~ \\mathrm{d}A\n "
},
{
"math_id": 42,
"text": "\\sigma_{xz}(x, 0) = 0 ~;~~ \\sigma_z(x, z) = -P\\delta(x, z)"
},
{
"math_id": 43,
"text": "\\delta(x, z)"
},
{
"math_id": 44,
"text": "\\begin{align}\n \\sigma_{xx} & = -\\frac{2P}{\\pi}\\frac{x^2z}{\\left(x^2 + z^2\\right)^2} \\\\\n \\sigma_{zz} & = -\\frac{2P}{\\pi}\\frac{z^3}{\\left(x^2 + z^2\\right)^2} \\\\\n \\sigma_{xz} & = -\\frac{2P}{\\pi}\\frac{xz^2}{\\left(x^2 + z^2\\right)^2}\n\\end{align}"
},
{
"math_id": 45,
"text": "(x, y)"
},
{
"math_id": 46,
"text": "P"
},
{
"math_id": 47,
"text": "p(x)"
},
{
"math_id": 48,
"text": "a<x<b"
},
{
"math_id": 49,
"text": "\\begin{align}\n \\sigma_{xx} &= -\\frac{2z}{\\pi}\\int_a^b\\frac{p\\left(x'\\right)\\left(x - x'\\right)^2\\, dx'}{\\left[\\left(x - x'\\right)^2 + z^2\\right]^2} ~;~~\n \\sigma_{zz} = -\\frac{2z^3}{\\pi}\\int_a^b\\frac{p\\left(x'\\right)\\, dx'}{\\left[\\left(x - x'\\right)^2 + z^2\\right]^2} \\\\[3pt]\n \\sigma_{xz} &= -\\frac{2z^2}{\\pi}\\int_a^b\\frac{p\\left(x'\\right)\\left(x - x'\\right)\\, dx'}{\\left[\\left(x - x'\\right)^2 + z^2\\right]^2}\n\\end{align}"
},
{
"math_id": 50,
"text": "Q"
},
{
"math_id": 51,
"text": "q(x)"
},
{
"math_id": 52,
"text": "\\begin{align}\n \\sigma_{xx} &= -\\frac{2}{\\pi}\\int_a^b\\frac{q\\left(x'\\right)\\left(x - x'\\right)^3\\, dx'}{\\left[\\left(x - x'\\right)^2 + z^2\\right]^2} ~;~~\n \\sigma_{zz} = -\\frac{2z^2}{\\pi}\\int_a^b\\frac{q\\left(x'\\right)\\left(x - x'\\right)\\, dx'}{\\left[\\left(x - x'\\right)^2 + z^2\\right]^2} \\\\[3pt]\n \\sigma_{xz} &= -\\frac{2z}{\\pi}\\int_a^b\\frac{q\\left(x'\\right)\\left(x - x'\\right)^2\\, dx'}{\\left[\\left(x - x'\\right)^2 + z^2\\right]^2}\n\\end{align}"
},
{
"math_id": 53,
"text": "h"
},
{
"math_id": 54,
"text": "h \\ge 0"
},
{
"math_id": 55,
"text": "h = 0"
},
{
"math_id": 56,
"text": "\\sigma_n = \\mathbf{t} \\cdot \\mathbf{n}"
},
{
"math_id": 57,
"text": "\\sigma_n < 0"
},
{
"math_id": 58,
"text": "\\sigma_n = 0"
},
{
"math_id": 59,
"text": "h > 0"
},
{
"math_id": 60,
"text": "h \\ge 0\\,, \\quad \\sigma_n \\le 0\\,, \\quad \\sigma_n\\,h = 0\\,."
},
{
"math_id": 61,
"text": "\\sigma_n"
},
{
"math_id": 62,
"text": "p"
},
{
"math_id": 63,
"text": "p = -\\sigma_n"
},
{
"math_id": 64,
"text": "h \\ge 0\\,,\\quad p \\ge 0\\,,\\quad p\\,h = 0\\,."
},
{
"math_id": 65,
"text": "{h} = h_0 + {g} + u,"
},
{
"math_id": 66,
"text": "h_0 "
},
{
"math_id": 67,
"text": "g"
},
{
"math_id": 68,
"text": "u "
},
{
"math_id": 69,
"text": "u"
},
{
"math_id": 70,
"text": "u = \\int_\\infty^\\infty K(x - s)p(s)ds,"
},
{
"math_id": 71,
"text": "K(x - s) = \\frac{2}{\\pi E^*}\\ln|x - s|"
},
{
"math_id": 72,
"text": "K(x - s) = \\frac{1}{\\pi E^*}\\frac{1}{\\sqrt{\\left(x_1 - s_1\\right)^2 + \\left(x_2 - s_2\\right)^2}}"
},
{
"math_id": 73,
"text": "\\begin{align}\n \\mathbf{h} &= \\mathbf{h}_0 + \\mathbf{g} + \\mathbf{Cp}, \\\\\n \\mathbf{h} \\cdot \\mathbf{p} &= 0,\\,\\,\\,\\mathbf{p} \\geq 0,\\,\\,\\, \\mathbf{h} \\geq 0,\\\\\n\\end{align}"
},
{
"math_id": 74,
"text": "\\mathbf{C}"
},
{
"math_id": 75,
"text": "A"
},
{
"math_id": 76,
"text": "A_0"
},
{
"math_id": 77,
"text": "\n A=\\frac{\\kappa}{E^*h'}F\n "
},
{
"math_id": 78,
"text": "h'"
},
{
"math_id": 79,
"text": "\\kappa \\approx2"
},
{
"math_id": 80,
"text": "\n p_{\\mathrm{av}} =\\frac{F}{A}\\approx\\frac{1}{2}E^*h'\n "
},
{
"math_id": 81,
"text": "E^*"
},
{
"math_id": 82,
"text": "z + dz"
},
{
"math_id": 83,
"text": "\\phi(z)dz"
},
{
"math_id": 84,
"text": "A_r"
},
{
"math_id": 85,
"text": "n = N\\int_d^\\infty \\phi(z) dz"
},
{
"math_id": 86,
"text": "A_a = N\\pi R \\int_d^\\infty (z - d) \\phi(z) dz"
},
{
"math_id": 87,
"text": "P = \\frac{4}{3}N E_r \\sqrt{R} \\int_d^\\infty (z - d)^\\frac{3}{2} \\phi(z) dz"
},
{
"math_id": 88,
"text": "E_r = \\left(\\frac{1 - \\nu_1^2}{E_1} + \\frac{1 - \\nu_2^2}{E_2}\\right)^{-1}"
},
{
"math_id": 89,
"text": "E_i"
},
{
"math_id": 90,
"text": "\\nu_i"
},
{
"math_id": 91,
"text": "h = d/\\sigma"
},
{
"math_id": 92,
"text": "\\phi^*(s)"
},
{
"math_id": 93,
"text": "\\begin{align}\n F_n(h) &= \\int_h^\\infty (s - h)^n \\phi^*(s) ds \\\\\n n &= \\eta A_n F_0(h) \\\\\n A_a &= \\pi \\eta A R \\sigma F_1(h) \\\\\n P &= \\frac{4}{3} \\eta A E_r \\sqrt{R} \\sigma^\\frac{3}{2} F_\\frac{3}{2}(h)\n\\end{align}"
},
{
"math_id": 94,
"text": "\\eta"
},
{
"math_id": 95,
"text": "F_n(h)"
},
{
"math_id": 96,
"text": "F_{n}(h) = \\frac{a_0 + a_1 h + a_2 h^2 + a_3 h^3}{1 + b_1 h + b_2 h^2+b_3 h^3 + b_4 h^4 + b_5 h^5 + b_6 h^6}\\exp\\left(-\\frac{h^2}{2}\\right)"
},
{
"math_id": 97,
"text": "F_1(h)"
},
{
"math_id": 98,
"text": "\\begin{align}[]\n [a_0, a_1, a_2, a_3] &= [0.398942280401, 0.159773702775, 0.0389687688311, 0.00364356495452] \\\\[]\n [b_1, b_2, b_3, b_4, b_5, b_6] &= \\left[1.653807476138, 1.170419428529, 0.448892964428, 0.0951971709160, 0.00931642803836, -6.383774657279 \\times 10^{-6}\\right]\n\\end{align}"
},
{
"math_id": 99,
"text": "9.93 \\times 10^{-8}%"
},
{
"math_id": 100,
"text": "F_\\frac{3}{2}(h)"
},
{
"math_id": 101,
"text": "\\begin{align}[]\n [a_0, a_1, a_2, a_3] &= [0.430019993662, 0.101979509447, 0.0229040629580, 0.000688602924] \\\\[]\n [b_1, b_2, b_3, b_4, b_5,b_6] &= [1.671117125984, 1.199586555505, 0.46936532151, 0.102632881122, 0.010686348714, 0.0000517200271]\n\\end{align}"
},
{
"math_id": 102,
"text": "1.91 \\times 10^{-7}%"
},
{
"math_id": 103,
"text": "\\begin{align}\n F_1(h) &= \\frac{1}{\\sqrt{2\\pi}} \\exp\\left(-\\frac{1}{2}h^2\\right) - \\frac{1}{2} h\\, \n\\operatorname{erfc}\\left(\\frac{h}{\\sqrt{2}}\\right) \\\\\n F_\\frac{3}{2}(h) &= \\frac{1}{4\\sqrt{\\pi}}\\exp\\left(-\\frac{h^2}{4}\\right) \\sqrt{h} \\left(\\left(h^2 + 1\\right) K_{\\frac{1}{4}}\\left(\\frac{h^2}{4}\\right) - h^2 K_{\\frac{3}{4}}\\left(\\frac{h^2}{4}\\right)\\right)\n\\end{align}"
},
{
"math_id": 104,
"text": "K_\\nu(z)"
},
{
"math_id": 105,
"text": "p_\\text{av} = 1.1\\sigma_y \\approx 0.39 \\sigma_0"
},
{
"math_id": 106,
"text": "\\sigma_y"
},
{
"math_id": 107,
"text": "\\sigma_0"
},
{
"math_id": 108,
"text": "\\Psi"
},
{
"math_id": 109,
"text": "\\sigma _0"
},
{
"math_id": 110,
"text": "\\Psi = \\frac{E^*h'}{\\sigma _0} > \\frac{2}{3}~."
},
{
"math_id": 111,
"text": "\\Psi < \\frac{2}{3}"
},
{
"math_id": 112,
"text": "A_a = \\pi^2 (\\eta\\beta\\sigma)^2 AF_2(\\lambda),"
},
{
"math_id": 113,
"text": "P =\\frac{8\\sqrt{2}}{15}\\pi (\\eta\\beta\\sigma)^2 \\sqrt{\\frac{\\sigma}{\\beta}} E' AF_\\frac{5}{2}(\\lambda),"
},
{
"math_id": 114,
"text": "\\eta\\beta\\sigma"
},
{
"math_id": 115,
"text": "\\lambda"
},
{
"math_id": 116,
"text": " \\lambda = h/\\sigma"
},
{
"math_id": 117,
"text": "E'"
},
{
"math_id": 118,
"text": "F_2, F_\\frac{5}{2}(\\lambda)"
},
{
"math_id": 119,
"text": "A_a"
},
{
"math_id": 120,
"text": "F_n"
},
{
"math_id": 121,
"text": "\\begin{align}\n F_2 &= \\frac{1}{2} \\left(h^2 + 1\\right)\\operatorname{erfc} \\left(\\frac{h}{\\sqrt{2}}\\right) -\n \\frac{h}{\\sqrt{2\\pi}}\\exp\\left(-\\frac{h^2}{2}\\right) \\\\\n F_\\frac{5}{2} &= \\frac{1}{8\\sqrt{\\pi}}\\exp\\left(-\\frac{h^2}{4}\\right) h^\\frac{3}{2}\n \\left(\\left(2h^2 + 3\\right) K_\\frac{3}{4} \\left(\\frac{h^2}{4}\\right) - \\left(2h^2 + 5\\right)\n K_\\frac{1}{4}\\left(\\frac{h^2}{4}\\right)\\right)\n\\end{align}"
},
{
"math_id": 122,
"text": "F_\\frac{5}{2}"
},
{
"math_id": 123,
"text": "F_2"
},
{
"math_id": 124,
"text": "F_n(h) = \\frac{a_0 + a_1 h + a_2 h^2 + a_3 h^3}{1 + b_1 h + b_2 h^2 + b_3 h^3 + b_4 h^4 + b_5 h^5 + b_6 h^6}\\exp\\left(-\\frac{h^2}{2}\\right)"
},
{
"math_id": 125,
"text": "F_2(h)"
},
{
"math_id": 126,
"text": "\\begin{align}[]\n [a_0, a_1, a_2, a_3] &= [0.5, 0.182536384941, 0.039812283118, 0.003684879001] \\\\[] \n [b_1, b_2, b_3, b_4, b_5, b_6] &= [1.960841785003, 1.708677456715, 0.856592986083, 0.264996791567, 0.049257843893, 0.004640740133]\n\\end{align}"
},
{
"math_id": 127,
"text": "1.68 \\times 10^{-7}%"
},
{
"math_id": 128,
"text": "F_\\frac{5}{2}(h)"
},
{
"math_id": 129,
"text": "\\begin{align}[]\n [a_0, a_1, a_2, a_3] &= [0.616634218997, 0.108855827811, 0.023453835635, 0.000449332509] \\\\[]\n [b_1, b_2, b_3, b_4, b_5, b_6] &= [1.919948267476, 1.635304362591, 0.799392556572, 0.240278859212, 0.043178653945, 0.003863334276]\n\\end{align}"
},
{
"math_id": 130,
"text": "4.98 \\times 10^{-8}%"
},
{
"math_id": 131,
"text": "\n F(z) = \\cfrac{16\\gamma}{3 z_0}\\left[\\left(\\cfrac{z}{z_0}\\right)^{-9} - \\left(\\cfrac{z}{z_0}\\right)^{-3}\\right]\n "
},
{
"math_id": 132,
"text": "2\\gamma"
},
{
"math_id": 133,
"text": "z_0"
},
{
"math_id": 134,
"text": "\n F_a(z) = \\cfrac{16\\gamma\\pi R}{3}\\left[\\cfrac{1}{4}\\left(\\cfrac{z}{z_0}\\right)^{-8} - \\left(\\cfrac{z}{z_0}\\right)^{-2}\\right] ~;~~ \\frac{1}{R} = \\frac{1}{R_1} + \\frac{1}{R_2}\n "
},
{
"math_id": 135,
"text": "R_1,R_2"
},
{
"math_id": 136,
"text": "z = z_0"
},
{
"math_id": 137,
"text": "\n F_a = F_c = -4\\gamma\\pi R .\n "
},
{
"math_id": 138,
"text": "\n p(r) = p_0\\left(1 - \\frac{r^2}{a^2}\\right)^\\frac{1}{2} + p_0'\\left(1 - \\frac{r^2}{a^2}\\right)^{-\\frac{1}{2}}\n "
},
{
"math_id": 139,
"text": "p_0'"
},
{
"math_id": 140,
"text": "\n p_0 = \\frac{2aE^*}{\\pi R} ;\\quad\n p_0' = -\\left(\\frac{4\\gamma E^*}{\\pi a}\\right)^\\frac{1}{2}\n "
},
{
"math_id": 141,
"text": "a\\,"
},
{
"math_id": 142,
"text": "R_i,\\, E_i,\\, \\nu_i,~~i = 1, 2"
},
{
"math_id": 143,
"text": "\n \\frac{1}{R} = \\frac{1}{R_1} + \\frac{1}{R_2} ;\\quad\n \\frac{1}{E^*} = \\frac{1 - \\nu_1^2}{E_1} + \\frac{1 - \\nu_2^2}{E_2}\n "
},
{
"math_id": 144,
"text": "\n d = \\frac{\\pi a}{2E^*}\\left(p_0 + 2p_0'\\right) = \\frac{a^2}{R}\n "
},
{
"math_id": 145,
"text": "\n a^3 = \\frac{3R}{4E^*}\\left(F + 6\\gamma\\pi R + \\sqrt{12\\gamma\\pi RF + (6\\gamma\\pi R)^2}\\right)\n "
},
{
"math_id": 146,
"text": "\\gamma = 0"
},
{
"math_id": 147,
"text": "\n a^3 = \\frac{9R^2\\gamma\\pi}{E^*}\n "
},
{
"math_id": 148,
"text": "a = 0"
},
{
"math_id": 149,
"text": "\n F_\\text{c} = -3\\gamma\\pi R\\,\n "
},
{
"math_id": 150,
"text": "a_\\text{c}"
},
{
"math_id": 151,
"text": "\n a_\\text{c}^3 = \\frac{9R^2\\gamma\\pi}{4E^*}\n "
},
{
"math_id": 152,
"text": "\n \\Delta\\gamma = \\gamma_1 + \\gamma_2 - \\gamma_{12}\n "
},
{
"math_id": 153,
"text": "\\gamma_1, \\gamma_2"
},
{
"math_id": 154,
"text": "\\gamma_{12}"
},
{
"math_id": 155,
"text": "\n a^3 = \\frac{3R}{4E^*}\\left(F + 3\\Delta\\gamma\\pi R + \\sqrt{6\\Delta\\gamma\\pi R F + (3\\Delta\\gamma\\pi R)^2}\\right)\n "
},
{
"math_id": 156,
"text": "\n F = -\\frac{3}{2}\\Delta\\gamma\\pi R\\,\n "
},
{
"math_id": 157,
"text": "\n a_\\text{c}^3 = \\frac{9R^2\\Delta\\gamma\\pi}{8E^*}\n "
},
{
"math_id": 158,
"text": "\n d_\\text{c} = \\frac{a_c^2}{R}\n = \\left(R^\\frac{1}{2} \\frac{9\\Delta\\gamma\\pi}{4E^*}\\right)^\\frac{2}{3}\n "
},
{
"math_id": 159,
"text": "\n a^3 = \\cfrac{3R}{4E^*}\\left(F + 4\\gamma\\pi R\\right)\n "
},
{
"math_id": 160,
"text": "\n F_c = -4\\gamma\\pi R\\,\n "
},
{
"math_id": 161,
"text": "\\Delta\\gamma"
},
{
"math_id": 162,
"text": "\n a^3 = \\cfrac{3R}{4E^*}\\left(F + 2\\Delta\\gamma\\pi R\\right)\n "
},
{
"math_id": 163,
"text": "\n F_c = -2\\Delta\\gamma\\pi R\\,\n "
},
{
"math_id": 164,
"text": "\\mu"
},
{
"math_id": 165,
"text": "\n \\mu := \\frac{d_c}{z_0} \\approx \\left[\\frac{R(\\Delta\\gamma)^2}{{E^*}^2 z_0^3}\\right]^\\frac{1}{3}\n "
},
{
"math_id": 166,
"text": "2\\pi R\\Delta\\gamma"
},
{
"math_id": 167,
"text": "(3/2)\\pi R\\Delta\\gamma"
},
{
"math_id": 168,
"text": "\n \\Delta\\gamma = \\sigma_0~h_0\n "
},
{
"math_id": 169,
"text": "h_0"
},
{
"math_id": 170,
"text": "z_0 \\le z \\le z_0 + h_0"
},
{
"math_id": 171,
"text": "c > a"
},
{
"math_id": 172,
"text": "a < r < c"
},
{
"math_id": 173,
"text": "h(r)"
},
{
"math_id": 174,
"text": "h(a) = 0"
},
{
"math_id": 175,
"text": "h(c) = h_0"
},
{
"math_id": 176,
"text": "m"
},
{
"math_id": 177,
"text": " m := \\frac{c}{a}"
},
{
"math_id": 178,
"text": "-a < r < a"
},
{
"math_id": 179,
"text": "\n p^H(r) = \\left(\\frac{3F^H}{2\\pi a^2}\\right)\\left(1 - \\frac{r^2}{a^2}\\right)^\\frac{1}{2}\n "
},
{
"math_id": 180,
"text": "F^H"
},
{
"math_id": 181,
"text": "\n F^H = \\frac{4E^* a^3}{3R}\n "
},
{
"math_id": 182,
"text": "\n d^H = \\frac{a^2}{R}\n "
},
{
"math_id": 183,
"text": "r = c"
},
{
"math_id": 184,
"text": "\n u^H(c) = \\cfrac{1}{\\pi R} \\left[a^2\\left(2 - m^2\\right)\\sin^{-1}\\left(\\frac{1}{m}\\right) + a^2\\sqrt{m^2 - 1}\\right]\n "
},
{
"math_id": 185,
"text": "\n h^H(c) = \\frac{c^2}{2R} - d^H + u^H(c)\n "
},
{
"math_id": 186,
"text": "\n p^D(r) = \\begin{cases}\n -\\frac{\\sigma_0}{\\pi}\\cos^{-1}\\left[\\frac{2 - m^2 - \\frac{r^2}{a^2}}{m^2\\left(1 - \\frac{r^2}{m^2 a^2}\\right)}\\right] & \\quad \\text{for} \\quad r \\le a \\\\\n -\\sigma_0 & \\quad \\text{for} \\quad a \\le r \\le c\n \\end{cases}\n "
},
{
"math_id": 187,
"text": "\n F^D = -2\\sigma_0 m^2 a^2\\left[\\cos^{-1}\\left(\\frac{1}{m}\\right) + \\frac{1}{m^2}\\sqrt{m^2 - 1}\\right]\n "
},
{
"math_id": 188,
"text": "\n d^D = -\\left(\\frac{2\\sigma_0 a}{E^*}\\right)\\sqrt{m^2-1}\n "
},
{
"math_id": 189,
"text": "\n h^D(c) = \\left(\\frac{4\\sigma_0 a}{\\pi E^*}\\right)\\left[\\sqrt{m^2 - 1}\\cos^{-1}\\left(\\frac{1}{m}\\right) + 1 - m\\right]\n "
},
{
"math_id": 190,
"text": "p(r) = p^H(r) + p^D(r)"
},
{
"math_id": 191,
"text": "F = F^H + F^D"
},
{
"math_id": 192,
"text": "h(c) = h^H(c) + h^D(c) = h_0"
},
{
"math_id": 193,
"text": "a, c, F, d"
},
{
"math_id": 194,
"text": "\n \\bar{a} = \\alpha a ~;~~ \\bar{c} := \\alpha c ~;~~ \\bar{d} := \\alpha^2 Rd ~;~~\n \\alpha := \\left(\\frac{4E^*}{3\\pi\\Delta\\gamma R^2}\\right)^\\frac{1}{3} ~;~~\n \\bar{A} := \\pi c^2 ~;~~ \\bar{F} = \\frac{F}{\\pi\\Delta\\gamma R}\n "
},
{
"math_id": 195,
"text": "\n \\lambda := \\sigma_0\\left(\\frac{9R}{2\\pi\\Delta\\gamma{E^*}^2}\\right)^\\frac{1}{3}\\approx 1.16\\mu\n "
},
{
"math_id": 196,
"text": "\n \\sigma_\\text{th} = \\frac{16\\Delta\\gamma}{9\\sqrt{3}z_0}\n "
},
{
"math_id": 197,
"text": "\n \\sigma_0 = \\exp\\left(-\\frac{223}{420}\\right) \\cdot \\frac{\\Delta\\gamma}{z_0} \\approx 0.588 \\frac{\\Delta\\gamma}{z_0}\n "
},
{
"math_id": 198,
"text": "\n \\lambda \\approx 0.663\\mu\n"
},
{
"math_id": 199,
"text": "\n \\bar{F} = \\bar{a}^3 - \\lambda \\bar{a}^2\\left[\\sqrt{m^2 - 1} + m^2 \\sec^{-1} m\\right]\n "
},
{
"math_id": 200,
"text": "\n \\bar{d} = \\bar{a}^2 - \\frac{4}{3}~\\lambda \\bar{a}\\sqrt{m^2-1}\n "
},
{
"math_id": 201,
"text": "\n \\frac{\\lambda \\bar{a}^2}{2}\\left[\\left(m^2 - 2\\right)\\sec^{-1} m + \\sqrt{m^2 - 1}\\right] + \\frac{4\\lambda\\bar{a}}{3}\\left[\\sqrt{m^2-1}\\sec^{-1} m - m + 1\\right] = 1\n "
},
{
"math_id": 202,
"text": "c"
},
{
"math_id": 203,
"text": "m \\rightarrow 1"
},
{
"math_id": 204,
"text": "\n a = a_0(\\beta) \\left(\\frac{\\beta + \\sqrt{1 - F/F_c(\\beta)}}{1 + \\beta}\\right)^\\frac{2}{3}\n "
},
{
"math_id": 205,
"text": "a_0"
},
{
"math_id": 206,
"text": "\\beta"
},
{
"math_id": 207,
"text": "\n \\lambda \\approx -0.924 \\ln(1 - 1.02\\beta)\n "
},
{
"math_id": 208,
"text": "\\beta = 1"
},
{
"math_id": 209,
"text": "\\beta = 0"
},
{
"math_id": 210,
"text": "0 < \\beta < 1"
},
{
"math_id": 211,
"text": "0.1 < \\lambda < 5"
}
] | https://en.wikipedia.org/wiki?curid=9697733 |
9701718 | Dice-Sørensen coefficient | A statistic used for comparing the similarity of two samples
The Dice-Sørensen coefficient (see below for other names) is a statistic used to gauge the similarity of two samples. It was independently developed by the botanists Lee Raymond Dice and Thorvald Sørensen, who published in 1945 and 1948 respectively.
Name.
The index is known by several other names, especially Sørensen–Dice index, Sørensen index and Dice's coefficient. Other variations include the "similarity coefficient" or "index", such as Dice similarity coefficient (DSC). Common alternate spellings for Sørensen are "Sorenson", "Soerenson" and "Sörenson", and all three can also be seen with the "–sen" ending (the Danish letter ø is phonetically equivalent to the German/Swedish ö, which can be written as oe in ASCII).
Other names include:
Formula.
Sørensen's original formula was intended to be applied to discrete data. Given two sets, X and Y, it is defined as
formula_0
where |"X"| and |"Y"| are the cardinalities of the two sets (i.e. the number of elements in each set).
The Sørensen index equals twice the number of elements common to both sets divided by the sum of the number of elements in each set. Equivalently, the index is the size of the intersection as a fraction of the average size of the two sets.
When applied to Boolean data, using the definition of true positive (TP), false positive (FP), and false negative (FN), it can be written as
formula_1.
It is different from the Jaccard index which only counts true positives once in both the numerator and denominator. DSC is the quotient of similarity and ranges between 0 and 1. It can be viewed as a similarity measure over sets.
Similarly to the Jaccard index, the set operations can be expressed in terms of vector operations over binary vectors a and b:
formula_2
which gives the same outcome over binary vectors and also gives a more general similarity metric over vectors in general terms.
For sets "X" and "Y" of keywords used in information retrieval, the coefficient may be defined as twice the shared information (intersection) over the sum of cardinalities :
When taken as a string similarity measure, the coefficient may be calculated for two strings, "x" and "y" using bigrams as follows:
formula_3
where "n""t" is the number of character bigrams found in both strings, "n""x" is the number of bigrams in string "x" and "n""y" is the number of bigrams in string "y". For example, to calculate the similarity between:
codice_0
codice_1
We would find the set of bigrams in each word:
{codice_2,codice_3,codice_4,codice_5}
{codice_6,codice_7,codice_8,codice_5}
Each set has four elements, and the intersection of these two sets has only one element: codice_5.
Inserting these numbers into the formula, we calculate "s" = (2 · 1) / (4 + 4) = 0.25.
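The calculation can be reproduced with a few lines of Python. The sketch below assumes the two example words are "night" and "nacht", which is consistent with the bigram sets and the value 0.25 shown above:

```python
# Bigram-based Dice coefficient for two strings.
def dice_bigrams(x: str, y: str) -> float:
    bx = {x[i:i + 2] for i in range(len(x) - 1)}   # set of character bigrams of x
    by = {y[i:i + 2] for i in range(len(y) - 1)}
    return 2 * len(bx & by) / (len(bx) + len(by))

print(dice_bigrams("night", "nacht"))   # 0.25
```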
Continuous Dice Coefficient.
For a discrete ground truth and continuous measures the following formula can be used:
formula_4
where c can be computed as follows:
formula_5
If formula_6, which means there is no overlap between A and B, c is set to 1 arbitrarily.
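A hedged reading of these formulas, taking a as the binary ground truth and b as the continuous measure, is sketched below:

```python
# Continuous Dice coefficient, assuming a is binary ground truth and b is continuous.
import numpy as np

def continuous_dice(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    inter = np.sum(a * b)
    denom_c = np.sum(a * np.sign(b))
    c = 1.0 if denom_c == 0 else inter / denom_c   # c set to 1 when there is no overlap
    return 2 * inter / (c * np.sum(a) + np.sum(b))

print(continuous_dice([1, 1, 0, 0], [0.9, 0.6, 0.1, 0.0]))
```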
Difference from Jaccard.
This coefficient is not very different in form from the Jaccard index. In fact, both are equivalent in the sense that given a value for the Sørensen–Dice coefficient formula_7, one can calculate the respective Jaccard index value formula_8 and vice versa, using the equations formula_9 and formula_10.
Since the Sørensen–Dice coefficient does not satisfy the triangle inequality, it can be considered a semimetric version of the Jaccard index.
The function ranges between zero and one, like Jaccard. Unlike Jaccard, the corresponding difference function
formula_11
is not a proper distance metric as it does not satisfy the triangle inequality. The simplest counterexample of this is given by the three sets {a}, {b}, and {a,b}, the distance between the first two being 1, and the distance between the third and each of the others being one-third. To satisfy the triangle inequality, the sum of "any" two of these three sides must be greater than or equal to the remaining side. However, the distance between {a} and {a,b} plus the distance between {b} and {a,b} equals 2/3 and is therefore less than the distance between {a} and {b}, which is 1.
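Both the conversion between the two coefficients and the counterexample can be checked numerically; the following short sketch is illustrative only:

```python
# Check J = S / (2 - S) and the triangle-inequality counterexample {a}, {b}, {a, b}.
def dice(X, Y):
    return 2 * len(X & Y) / (len(X) + len(Y))

def jaccard(X, Y):
    return len(X & Y) / len(X | Y)

A, B, C = {"a"}, {"b"}, {"a", "b"}
S = dice(A, C)
print(jaccard(A, C), S / (2 - S))                # both 0.5

d = lambda X, Y: 1 - dice(X, Y)
print(d(A, C) + d(B, C), d(A, B))                # 2/3 < 1, so the inequality fails
```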
Applications.
The Sørensen–Dice coefficient is useful for ecological community data (e.g. Looman & Campbell, 1960). Justification for its use is primarily empirical rather than theoretical (although it can be justified theoretically as the intersection of two fuzzy sets). As compared to Euclidean distance, the Sørensen distance retains sensitivity in more heterogeneous data sets and gives less weight to outliers. Recently the Dice score (and its variations, e.g. logDice taking a logarithm of it) has become popular in computer lexicography for measuring the lexical association score of two given words.
logDice is also used as part of the Mash Distance for genome and metagenome distance estimation.
Finally, Dice is used in image segmentation, in particular for comparing algorithm output against reference masks in medical applications.
Abundance version.
The expression is easily extended to abundance instead of presence/absence of species. This quantitative version is known by several names:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " DSC = \\frac{2 |X \\cap Y|}{|X| + |Y|}"
},
{
"math_id": 1,
"text": " DSC = \\frac{2 \\mathit{TP}}{2 \\mathit{TP} + \\mathit{FP} + \\mathit{FN}}"
},
{
"math_id": 2,
"text": "s_v = \\frac{2 | \\bf{a} \\cdot \\bf{b} |}{| \\bf{a} |^2 + | \\bf{b} |^2} "
},
{
"math_id": 3,
"text": "s = \\frac{2 n_t}{n_x + n_y}"
},
{
"math_id": 4,
"text": " cDC = \\frac{2 |X \\cap Y|}{c * |X| + |Y|} \n \n"
},
{
"math_id": 5,
"text": " c = \\frac{\\Sigma a_ib_i}{\\Sigma a_i \\operatorname{sign}{(b_i)}}"
},
{
"math_id": 6,
"text": " \\Sigma a_i \\operatorname{sign}{(b_i)} = 0"
},
{
"math_id": 7,
"text": "S"
},
{
"math_id": 8,
"text": "J"
},
{
"math_id": 9,
"text": "J=S/(2-S)"
},
{
"math_id": 10,
"text": "S=2J/(1+J)"
},
{
"math_id": 11,
"text": "d = 1 - \\frac{2 | X \\cap Y |}{| X | + | Y |} "
}
] | https://en.wikipedia.org/wiki?curid=9701718 |
9702214 | Tournament theory | Theory in economics used to describe reasons for wage differences
Tournament theory is the theory in personnel economics used to describe certain situations where wage differences are based not on marginal productivity but instead upon relative differences between the individuals. This theory was invented by economists Edward Lazear and Sherwin Rosen.
The theory has been applied to professional sports and to the practice of law. Tournament theory has also been applied to writing: one writer may be fractionally better at writing than another (and therefore have a better book), but because people allocate only small amounts of time to reading, the writer with the marginally better book will sell far more copies.
Rank-Order Tournaments as Optimum Labour Contracts.
Lazear and Rosen proposed tournament theory in their 1981 paper "Rank-Order Tournaments as Optimum Labor Contracts", looking at performance-related pay. Under conventional systems workers are paid a piece rate - an amount of money that relates to their output, rather than the time they input. Tournament theory suggests that workers can be rewarded by their rank in an organization, suggesting why large salaries are given to senior executives: to provide a 'prize' to those who put in enough effort to garner one of the top positions.
The paper invites the reader to consider the lifetime output of a worker at a firm. This output is dictated by two things - chance and skill. The worker can control his lifetime output by investing in skills early on in life, like studying hard at school and getting good qualifications, but a part of that output will be determined by chance. Participants in the tournament commit their investment early on in life and are unlikely to have known each other beforehand, or even to know each other within the firm they work in. This prevents collusion or cheating in the tournament.
Consider the tournament in its simplest form: a two-player tournament where there is a prize for the winner and a smaller consolation prize for the loser. The incentive to win increases as the difference between the winning and losing prizes increases, and therefore the worker's investment increases with that difference. It is in the interest of the firm to increase the spread of prizes. However, there is a drawback for the firms: as the workers invest more, their costs rise. Competing firms could offer a tournament with a lower spread and attract more workers because they would have to invest less. Therefore, there is an optimal prize spread that firms set, high enough to induce investment but low enough so that the investment is not too expensive for the worker. The prize may take the form of extra cash or a promotion - which means more money, as well as entry into a higher level of tournament, where the stakes may be higher.
The idea that the prize may be in the form of a promotion explains why presidents are paid significantly more than vice presidents. In one day a Vice-President may be promoted to President of a company and have his pay tripled. Considering piece rates this seems illogical - his output is unlikely to have tripled in one day. But looking at it using tournament theory it seems logical - he has won the tournament and received his prize - presidency.
Tournament theory is an efficient form of labour compensation when quantifying output is difficult or expensive but ranking workers is easy. It is also effective because it provides goals for workers and incentivises hard work so that they may one day attain one of the coveted positions at the top. An advantage to workers over a piece rate is that in the event of a natural disaster they would preserve their wage, since their output would fall in absolute terms but stay the same relative to their colleagues.
Benefits of tournaments include motivating workers and encouraging them to stay with the firm over the long run.
Foundational Principles.
There are two foundational predictions of tournament theory. These predictions can be illustrated by examining a simple two-player contest with identical risk-neutral actors. Let performance (output) be measured by formula_0:
formula_1
Here, formula_2 represents the effort or investment by a player, while formula_3 is a random component (e.g. luck or noise). Players are rewarded for performance with one of two prizes, formula_4 or formula_5, where formula_6. formula_4 goes to the player with better performance, while formula_5 goes to the player with worse performance. Each player's action has an associated cost, denoted by formula_7. The probability that player formula_8 wins formula_4 is positively related to that player's action formula_9 and negatively related to the opponent player's action formula_10, as well as the random component formula_3. If formula_11 is the probability of winning, then the contestant's expected payoff is:
formula_12
When player formula_8 chooses formula_13 to maximise his/her payoff, then:
formula_14
In a two-player tournament the Nash equilibrium occurs when both players maximise their payoff while assuming the other player's effort is fixed. At this equilibrium the marginal cost of effort formula_15 is equal to the marginal value of effort formula_16, such that:
formula_17
From this equation, two principles can be derived. The first is that an actor's level of effort increases with the spread between the winning and losing prize. The second is that only the difference between the winning and the losing prize matters to the two contestants, not the absolute size of their winnings. These two testable predictions of tournament theory have been supported by empirical research over the years, especially in the fields of labour economics and sports.
Pros and Cons of Workplace Tournaments.
Incentivizing Performance
Tournaments can be very powerful at incentivising performance. Empirical research in economics and management has shown that tournament-like incentive structures increase the individual performance of workers and managers in the workplace. The distribution of effort in one tournament experiment showed that almost 80% of participants exerted higher than anticipated levels of effort, suggesting that tournaments provide strong competitive incentives. Tournaments also provide powerful non-monetary incentives. Studies show that participants in tournaments value winning itself and placing highly in relative rankings. One experiment found that more than 40% of individuals were willing to exert positive effort with a monetary incentive of $0.
Matching workers and jobs
Tournaments play an important role in matching workers with appropriate jobs. The theoretical prediction in the literature is that higher-skilled individuals sort into jobs that offer higher potential returns, and this is well supported by empirical data. For instance, in competitive running, more accomplished competitors are more likely to choose tournaments with greater prize spreads.
Inequalities in the workplace
Tournaments have the potential to create large inequalities in payoffs. Incentive based tournaments are organised in such a way that some winners are created at the expense of many losers. Thus, by design there are likely to be high inequalities in payoffs in the workplace under a tournament structure. A further potential inequality is gender inequality in the workplace. Field studies have shown that women are less likely to enter tournaments than men and also do not perform as well. Thus, even in cases where women may be more capable or better skilled, tournament-like incentives may discourage women from participating.
Selfish and Unethical Behaviour
A major issue with tournaments is that individuals are incentivised to view others as competitors, which encourages selfish behaviour. Participants in a tournament-structured workplace are therefore less likely to help each other and are discouraged from knowledge sharing to a greater degree than under other incentive schemes. Through such increased opportunistic and selfish behaviour, firms with tournament structures and large inequalities in payoffs may harm their customer relationships. Further, tournaments may also encourage unethical behaviour in participants, such as cheating or collusion in competitive sports or plagiarism in academia.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
Chronological order: | [
{
"math_id": 0,
"text": "q"
},
{
"math_id": 1,
"text": "q=u+e."
},
{
"math_id": 2,
"text": "u"
},
{
"math_id": 3,
"text": "e"
},
{
"math_id": 4,
"text": "W_1"
},
{
"math_id": 5,
"text": "W_2"
},
{
"math_id": 6,
"text": "W_1>W_2"
},
{
"math_id": 7,
"text": "C(u)"
},
{
"math_id": 8,
"text": "a"
},
{
"math_id": 9,
"text": "u_a"
},
{
"math_id": 10,
"text": "u_b"
},
{
"math_id": 11,
"text": "P"
},
{
"math_id": 12,
"text": "P[W_1-C(u_i)]+(1-P)[W_2-C(u_i)]=P(W_1-W_2)+W_2-C(u_i)."
},
{
"math_id": 13,
"text": "u_i"
},
{
"math_id": 14,
"text": "\\frac{\\partial P}{\\partial u_i}(W_1-W_2)-C'(u_i).=0"
},
{
"math_id": 15,
"text": "C'"
},
{
"math_id": 16,
"text": "V"
},
{
"math_id": 17,
"text": "\\frac{\\partial P}{\\partial u_i}(W_1-W_2)=V."
}
] | https://en.wikipedia.org/wiki?curid=9702214 |
9702578 | 1 − 2 + 3 − 4 + ⋯ | Infinite series
In mathematics, 1 − 2 + 3 − 4 + ··· is an infinite series whose terms are the successive positive integers, given alternating signs. Using sigma summation notation the sum of the first "m" terms of the series can be expressed as
formula_0
The infinite series diverges, meaning that its sequence of partial sums, (1, −1, 2, −2, 3, ...), does not tend towards any finite limit. Nonetheless, in the mid-18th century, Leonhard Euler wrote what he admitted to be a paradoxical equation:
formula_1
A rigorous explanation of this equation would not arrive until much later. Starting in 1890, Ernesto Cesàro, Émile Borel and others investigated well-defined methods to assign generalized sums to divergent series—including new interpretations of Euler's attempts. Many of these summability methods easily assign to 1 − 2 + 3 − 4 + ... a "value" of <templatestyles src="Fraction/styles.css" />1⁄4. Cesàro summation is one of the few methods that do not sum 1 − 2 + 3 − 4 + ..., so the series is an example where a slightly stronger method, such as Abel summation, is required.
The series 1 − 2 + 3 − 4 + ... is closely related to Grandi's series 1 − 1 + 1 − 1 + ... Euler treated these two as special cases of the more general series 1 − 2^"n" + 3^"n" − 4^"n" + ..., where "n" = 1 and "n" = 0 respectively. This line of research extended his work on the Basel problem and led towards the functional equations of what are now known as the Dirichlet eta function and the Riemann zeta function.
Divergence.
The series' terms (1, −2, 3, −4, ...) do not approach 0; therefore 1 − 2 + 3 − 4 + ... diverges by the term test. Divergence can also be shown directly from the definition: an infinite series converges if and only if the sequence of partial sums converges to a limit, in which case that limit is the value of the infinite series. The partial sums of 1 − 2 + 3 − 4 + ... are:
<templatestyles src="Block indent/styles.css"/>1,
1 − 2 = −1,
1 − 2 + 3 = 2,
1 − 2 + 3 − 4 = −2,
1 − 2 + 3 − 4 + 5 = 3,
1 − 2 + 3 − 4 + 5 − 6 = −3,
The sequence of partial sums shows that the series does not converge to a particular number: for any proposed limit "x", there exists a point beyond which the subsequent partial sums are all outside the interval ["x"−1, "x"+1], so 1 − 2 + 3 − 4 + ... diverges.
The partial sums include every integer exactly once—even 0 if one counts the empty partial sum—and thereby establish the countability of the set formula_2 of integers.
Heuristics for summation.
Stability and linearity.
Since the terms 1, −2, 3, −4, 5, −6, ... follow a simple pattern, the series 1 − 2 + 3 − 4 + ... can be manipulated by shifting and term-by-term addition to yield a numerical value. If it can make sense to write "s" = 1 − 2 + 3 − 4 + ... for some ordinary number "s", the following manipulations argue for "s" = <templatestyles src="Fraction/styles.css" />1⁄4:
formula_3
So formula_4.
Although 1 − 2 + 3 − 4 + ... does not have a sum in the usual sense, the equation "s" = 1 − 2 + 3 − 4 + ... = <templatestyles src="Fraction/styles.css" />1⁄4 can be supported as the most natural answer if such a sum is to be defined. A generalized definition of the "sum" of a divergent series is called a summation method or summability method. There are many different methods and it is desirable that they share some properties of ordinary summation. What the above manipulations actually prove is the following: Given any summability method that is linear and stable and sums the series 1 − 2 + 3 − 4 + ..., the sum it produces is <templatestyles src="Fraction/styles.css" />1⁄4. Furthermore, since
formula_5
such a method must also sum Grandi's series as 1 − 1 + 1 − 1 + ... = <templatestyles src="Fraction/styles.css" />1⁄2.
Cauchy product.
In 1891, Ernesto Cesàro expressed hope that divergent series would be rigorously brought into calculus, pointing out, "One already writes (1 − 1 + 1 − 1 + ...)^2 = 1 − 2 + 3 − 4 + ... and asserts that both the sides are equal to <templatestyles src="Fraction/styles.css" />1⁄4." For Cesàro, this equation was an application of a theorem he had published the previous year, which is the first theorem in the history of summable divergent series. The details on his summation method are below; the central idea is that 1 − 2 + 3 − 4 + ... is the Cauchy product (discrete convolution) of 1 − 1 + 1 − 1 + ... with 1 − 1 + 1 − 1 + ...
The Cauchy product of two infinite series is defined even when both of them are divergent. In the case where "a""n" = "b""n" = (−1)"n", the terms of the Cauchy product are given by the finite diagonal sums
formula_6
The product series is then
formula_7
Thus a summation method that respects the Cauchy product of two series — and assigns to the series 1 − 1 + 1 − 1 + ... the sum 1/2 — will also assign to the series 1 − 2 + 3 − 4 + ... the sum 1/4. With the result of the previous section, this implies an equivalence between summability of 1 − 1 + 1 − 1 + ... and 1 − 2 + 3 − 4 + ... with methods that are linear, stable, and respect the Cauchy product.
Cesàro's theorem is a subtle example. The series 1 − 1 + 1 − 1 + ... is Cesàro-summable in the weakest sense, called (C, 1)-summable, while 1 − 2 + 3 − 4 + ... requires a stronger form of Cesàro's theorem, being (C, 2)-summable. Since all forms of Cesàro's theorem are linear and stable, the values of the sums are as calculated above.
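As a quick numerical check of the diagonal-sum formula above, the short Python sketch below (an illustration, not part of the source article) forms the Cauchy product of Grandi's series with itself and confirms that its terms are (−1)^n(n + 1), i.e. the terms of 1 − 2 + 3 − 4 + ...

# Illustrative check: the Cauchy product of 1 - 1 + 1 - 1 + ... with itself
# has terms c_n = sum_k a_k * b_(n-k) = (-1)**n * (n + 1).
N = 10
a = [(-1) ** n for n in range(N)]          # terms of Grandi's series
b = [(-1) ** n for n in range(N)]

cauchy = [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(N)]
expected = [(-1) ** n * (n + 1) for n in range(N)]

print(cauchy)              # [1, -2, 3, -4, 5, -6, 7, -8, 9, -10]
print(cauchy == expected)  # True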
Specific methods.
Cesàro and Hölder.
To find the (C, 1) Cesàro sum of 1 − 2 + 3 − 4 + ..., if it exists, one needs to compute the arithmetic means of the partial sums of the series.
The partial sums are:
<templatestyles src="Block indent/styles.css"/>1, −1, 2, −2, 3, −3, ...,
and the arithmetic means of these partial sums are:
<templatestyles src="Block indent/styles.css"/>
This sequence of means does not converge, so 1 − 2 + 3 − 4 + ... is not Cesàro summable.
There are two well-known generalizations of Cesàro summation: the conceptually simpler of these is the sequence of (H, "n") methods for natural numbers "n". The (H, 1) sum is Cesàro summation, and higher methods repeat the computation of means. Above, every second mean equals 0, while the remaining means (1, <templatestyles src="Fraction/styles.css" />2⁄3, <templatestyles src="Fraction/styles.css" />3⁄5, ...) converge to <templatestyles src="Fraction/styles.css" />1⁄2, so the means "of the means" converge to the average of 0 and <templatestyles src="Fraction/styles.css" />1⁄2, namely <templatestyles src="Fraction/styles.css" />1⁄4. So 1 − 2 + 3 − 4 + ... is (H, 2) summable to <templatestyles src="Fraction/styles.css" />1⁄4.
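The (H, 2) computation described above is easy to reproduce numerically. The following Python sketch (illustrative, not from the source) computes the partial sums, their running means, which oscillate and do not converge, and the means of those means, which approach 1⁄4.

# Partial sums of 1 - 2 + 3 - 4 + ..., their means (H, 1), and means of means (H, 2).
from itertools import accumulate

N = 2000
terms = [(-1) ** (n - 1) * n for n in range(1, N + 1)]        # 1, -2, 3, -4, ...
partial_sums = list(accumulate(terms))                         # 1, -1, 2, -2, 3, ...

def running_means(seq):
    return [s / (i + 1) for i, s in enumerate(accumulate(seq))]

h1 = running_means(partial_sums)   # (H, 1): oscillates, does not converge
h2 = running_means(h1)             # (H, 2): converges to 1/4

print(h1[-2], h1[-1])   # roughly 0.5 and 0.0, still oscillating
print(h2[-1])           # close to 0.25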
The "H" stands for Otto Hölder, who first proved in 1882 what mathematicians now think of as the connection between Abel summation and (H, "n") summation; 1 − 2 + 3 − 4 + ... was his first example. The fact that <templatestyles src="Fraction/styles.css" />1⁄4 is the (H, 2) sum of 1 − 2 + 3 − 4 + ... guarantees that it is the Abel sum as well; this will also be proved directly below.
The other commonly formulated generalization of Cesàro summation is the sequence of (C, "n") methods. It has been proven that (C, "n") summation and (H, "n") summation always give the same results, but they have different historical backgrounds. In 1887, Cesàro came close to stating the definition of (C, "n") summation, but he gave only a few examples. In particular, he summed 1 − 2 + 3 − 4 + ..., to <templatestyles src="Fraction/styles.css" />1⁄4 by a method that may be rephrased as (C, "n") but was not justified as such at the time. He formally defined the (C, "n") methods in 1890 in order to state his theorem that the Cauchy product of a (C, "n")-summable series and a (C, "m")-summable series is (C, "m" + "n" + 1)-summable.
Abel summation.
In a 1749 report, Leonhard Euler admits that the series diverges but prepares to sum it anyway:
<templatestyles src="Template:Blockquote/styles.css" />... when it is said that the sum of this series 1 − 2 + 3 − 4 + 5 − 6 etc. is <templatestyles src="Fraction/styles.css" />1⁄4, that must appear paradoxical. For by adding 100 terms of this series, we get −50, however, the sum of 101 terms gives +51, which is quite different from <templatestyles src="Fraction/styles.css" />1⁄4 and becomes still greater when one increases the number of terms. But I have already noticed at a previous time, that it is necessary to give to the word "sum" a more extended meaning ...
Euler proposed a generalization of the word "sum" several times. In the case of 1 − 2 + 3 − 4 + ..., his ideas are similar to what is now known as Abel summation:
<templatestyles src="Template:Blockquote/styles.css" />... it is no more doubtful that the sum of this series 1 − 2 + 3 − 4 + 5 etc. is <templatestyles src="Fraction/styles.css" />1⁄4; since it arises from the expansion of the formula <templatestyles src="Fraction/styles.css" />1⁄(1+1)2, whose value is incontestably <templatestyles src="Fraction/styles.css" />1⁄4. The idea becomes clearer by considering the general series 1 − 2"x" + 3"x"2 − 4"x"3 + 5"x"4 − 6"x"5 + "&c." that arises while expanding the expression <templatestyles src="Fraction/styles.css" />1⁄(1+"x")2, which this series is indeed equal to after we set "x" = 1.
There are many ways to see that, at least for absolute values |"x"| < 1, Euler is right in that
formula_8
One can take the Taylor expansion of the right-hand side, or apply the formal long division process for polynomials. Starting from the left-hand side, one can follow the general heuristics above and try multiplying by (1 + "x") twice or squaring the geometric series 1 − "x" + "x"^2 − ... Euler also seems to suggest differentiating the latter series term by term.
In the modern view, the generating function 1 − 2"x" + 3"x"^2 − 4"x"^3 + ... does not define a function at "x" = 1, so that value cannot simply be substituted into the resulting expression. Since the function is defined for all |"x"| < 1, one can still take the limit as "x" approaches 1, and this is the definition of the Abel sum:
formula_9
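The Abel sum can be checked numerically. The Python sketch below (illustrative, not from the source) evaluates the power series Σ n(−x)^(n−1) for values of x approaching 1 from below and compares it with the closed form 1/(1+x)^2; both approach 1⁄4.

# Abel summation check: sum_{n>=1} n * (-x)**(n-1) versus 1/(1+x)**2 as x -> 1-.
def partial_power_series(x, n_terms=200_000):
    return sum(n * (-x) ** (n - 1) for n in range(1, n_terms + 1))

for x in (0.9, 0.99, 0.999):
    series = partial_power_series(x)
    closed_form = 1.0 / (1.0 + x) ** 2
    print(f"x={x}: series={series:.6f}, 1/(1+x)^2={closed_form:.6f}")
# Both columns approach 0.25 as x approaches 1 from below.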
Euler and Borel.
Euler applied another technique to the series: the Euler transform, one of his own inventions. To compute the Euler transform, one begins with the sequence of positive terms that makes up the alternating series—in this case 1, 2, 3, 4, ... The first element of this sequence is labeled "a"0.
Next one needs the sequence of forward differences among 1, 2, 3, 4, ...; this is just 1, 1, 1, 1, ... The first element of "this" sequence is labeled Δ"a"0. The Euler transform also depends on differences of differences, and higher iterations, but all the forward differences among 1, 1, 1, 1, ... are 0. The Euler transform of 1 − 2 + 3 − 4 + ... is then defined as
formula_10
In modern terminology, one says that 1 − 2 + 3 − 4 + ... is Euler summable to <templatestyles src="Fraction/styles.css" />1⁄4.
The Euler summability also implies Borel summability, with the same summation value, as it does in general.
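The Euler transform described above is also easy to compute directly. The following Python sketch (illustrative) takes the positive-term sequence 1, 2, 3, 4, ..., forms its iterated forward differences, and evaluates Σ (−1)^k Δ^k a_0 / 2^(k+1); for this series only the first two terms are nonzero, giving 1/2 − 1/4 = 1/4.

# Euler transform of 1 - 2 + 3 - 4 + ...: sum_k (-1)**k * (Delta**k a)_0 / 2**(k+1)
# applied to the positive-term sequence a = 1, 2, 3, 4, ...
from fractions import Fraction

def forward_difference(seq):
    return [b - a for a, b in zip(seq, seq[1:])]

a = [Fraction(n) for n in range(1, 21)]   # 1, 2, 3, ..., 20 (enough terms)
total = Fraction(0)
k = 0
while a:                                  # iterate Delta until the sequence is exhausted
    total += (-1) ** k * a[0] / 2 ** (k + 1)
    a = forward_difference(a)
    k += 1

print(total)   # 1/4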
Separation of scales.
Saichev and Woyczyński arrive at 1 − 2 + 3 − 4 + ... = <templatestyles src="Fraction/styles.css" />1⁄4 by applying only two physical principles: "infinitesimal relaxation" and "separation of scales". To be precise, these principles lead them to define a broad family of ""φ"-summation methods", all of which sum the series to <templatestyles src="Fraction/styles.css" />1⁄4:
If "φ"("x") is a sufficiently smooth function with "φ"(0) = 1 such that "φ"("x") and "x""φ"("x") both tend to 0 as "x" → +∞, then
formula_11
This result generalizes Abel summation, which is recovered by letting "φ"("x") = exp(−"x"). The general statement can be proved by pairing up the terms in the series over "m" and converting the expression into a Riemann integral. For the latter step, the corresponding proof for 1 − 1 + 1 − 1 + ... applies the mean value theorem, but here one needs the stronger Lagrange form of Taylor's theorem.
Generalization.
The threefold Cauchy product of 1 − 1 + 1 − 1 + ... is 1 − 3 + 6 − 10 + ..., the alternating series of triangular numbers; its Abel and Euler sum is <templatestyles src="Fraction/styles.css" />1⁄8. The fourfold Cauchy product of 1 − 1 + 1 − 1 + ... is 1 − 4 + 10 − 20 + ..., the alternating series of tetrahedral numbers, whose Abel sum is <templatestyles src="Fraction/styles.css" />1⁄16.
Another generalization of 1 − 2 + 3 − 4 + ... in a slightly different direction is the series 1 − 2^"n" + 3^"n" − 4^"n" + ... for other values of "n". For positive integers "n", these series have the following Abel sums:
formula_12
where "B""n" are the Bernoulli numbers. For even "n", this reduces to
formula_13
which can be interpreted as stating that negative even values of the Riemann zeta function are zero. This sum became an object of particular ridicule by Niels Henrik Abel in 1826:
<templatestyles src="Template:Blockquote/styles.css" />Divergent series are on the whole devil's work, and it is a shame that one dares to found any proof on them. One can get out of them what one wants if one uses them, and it is they which have made so much unhappiness and so many paradoxes. Can one think of anything more appalling than to say that
<templatestyles src="Block indent/styles.css"/>0 = 1 − 2"2n" + 3"2n" − 4"2n" + etc.
where "n" is a positive number. Here's something to laugh at, friends.
Cesàro's teacher, Eugène Charles Catalan, also disparaged divergent series. Under Catalan's influence, Cesàro initially referred to the "conventional formulas" for 1 − 2^"n" + 3^"n" − 4^"n" + ... as "absurd equalities", and in 1883 Cesàro expressed a typical view of the time that the formulas were false but still somehow formally useful. Finally, in his 1890 "Sur la multiplication des séries", Cesàro took a modern approach starting from definitions.
The series are also studied for non-integer values of "n"; these make up the Dirichlet eta function. Part of Euler's motivation for studying series related to 1 − 2 + 3 − 4 + ... was the functional equation of the eta function, which leads directly to the functional equation of the Riemann zeta function. Euler had already become famous for finding the values of these functions at positive even integers (including the Basel problem), and he was attempting to find the values at the positive odd integers (including Apéry's constant) as well, a problem that remains elusive today. The eta function in particular is easier to deal with by Euler's methods because its Dirichlet series is Abel summable everywhere; the zeta function's Dirichlet series is much harder to sum where it diverges. For example, the counterpart of 1 − 2 + 3 − 4 + ... in the zeta function is the non-alternating series 1 + 2 + 3 + 4 + ..., which has deep applications in modern physics but requires much stronger methods to sum.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum_{n=1}^m n(-1)^{n-1}."
},
{
"math_id": 1,
"text": "1-2+3-4+\\cdots=\\frac{1}{4}."
},
{
"math_id": 2,
"text": "\\mathbb{Z}"
},
{
"math_id": 3,
"text": "\n\\begin{alignat}{5}\n4s&= &&(1-2+3-\\cdots) \\ \\ &&{}+(1-2+3-4+\\cdots) && {}+(1-2+3-4+\\cdots) &&{}+(1-2+3-4+\\cdots) \\\\\n &= &&(1-2+3-\\cdots) && +1 {}+(-2+3-4+\\cdots) \\ \\ && {}+1+(-2+3-4+\\cdots) \\ \\ &&{}+1-2+(3-4+\\cdots) \\\\\n &=\\ 1+{} &&(1-2+3-\\cdots) && {}+(-2+3-4+\\cdots) && {}+(-2+3-4+\\cdots) &&{}+(3-4+5-\\cdots) \\\\\n &=\\ 1+{}[\\ &&(1-2-2+3) && {}+(-2+3+3-4) && {}+(3-4-4+5) &&{}+\\cdots \\ ] \\\\\n &=\\ 1+{}[\\ && 0+0+0+\\cdots\\ ] \\\\\n4s&=\\ 1\n\\end{alignat}\n"
},
{
"math_id": 4,
"text": "s=\\frac{1}{4}"
},
{
"math_id": 5,
"text": "\n\\begin{alignat}{5}\n2s&= &&(1-2+3-4+\\cdots) \\ \\ &&{}+(1-2+3-4+5-\\cdots) \\\\\n &= && 1 {}+(-2+3-4+\\cdots) \\ \\ &&{}+1-2+(3-4+5-\\cdots) \\\\\n &=\\ 0+{} &&(-2+3-4+\\cdots) &&{}+(3-4+5-\\cdots) \\\\\n &=\\ 0+{}[\\ &&(-2+3) \\quad {}+(3-4) && {}+(-4+5) \\quad +\\cdots \\ ] \\\\\n2s&=\\ && 1-1+1-1+\\cdots\n\\end{alignat}\n"
},
{
"math_id": 6,
"text": "\\begin{align}\nc_n & = \\sum_{k=0}^n a_k b_{n-k}=\\sum_{k=0}^n (-1)^k (-1)^{n-k} \\\\\n & = \\sum_{k=0}^n (-1)^n = (-1)^n(n+1).\n\\end{align}"
},
{
"math_id": 7,
"text": "\\sum_{n=0}^\\infty(-1)^n(n+1) = 1-2+3-4+\\cdots."
},
{
"math_id": 8,
"text": "1-2x+3x^2-4x^3+\\cdots = \\frac{1}{(1+x)^2}."
},
{
"math_id": 9,
"text": "\\lim_{x\\rightarrow 1^{-}}\\sum_{n=1}^\\infty n(-x)^{n-1} = \\lim_{x\\rightarrow 1^{-}}\\frac{1}{(1+x)^2} = \\frac14."
},
{
"math_id": 10,
"text": "\\frac12 a_0-\\frac14\\Delta a_0 +\\frac18\\Delta^2 a_0 -\\cdots = \\frac12-\\frac14."
},
{
"math_id": 11,
"text": "\\lim_{\\delta\\rightarrow0}\\sum_{m=0}^\\infty (-1)^m(m+1)\\varphi(\\delta m) = \\frac14."
},
{
"math_id": 12,
"text": "1-2^{n}+3^{n}-\\cdots = \\frac{2^{n+1}-1}{n+1}B_{n+1}"
},
{
"math_id": 13,
"text": "1-2^{2k}+3^{2k}-\\cdots = 0,"
}
] | https://en.wikipedia.org/wiki?curid=9702578 |
97026 | Breadth-first search | Algorithm to search the nodes of a graph
Breadth-first search (BFS) is an algorithm for searching a tree data structure for a node that satisfies a given property. It starts at the tree root and explores all nodes at the present depth prior to moving on to the nodes at the next depth level. Extra memory, usually a queue, is needed to keep track of the child nodes that were encountered but not yet explored.
For example, in a chess endgame, a chess engine may build the game tree from the current position by applying all possible moves and use breadth-first search to find a win position for White. Implicit trees (such as game trees or other problem-solving trees) may be of infinite size; breadth-first search is guaranteed to find a solution node if one exists.
In contrast, (plain) depth-first search (DFS), which explores the node branch as far as possible before backtracking and expanding other nodes, may get lost in an infinite branch and never make it to the solution node. Iterative deepening depth-first search avoids the latter drawback at the price of exploring the tree's top parts over and over again. On the other hand, both depth-first algorithms typically require far less extra memory than breadth-first search.
Breadth-first search can be generalized to both undirected graphs and directed graphs with a given start node (sometimes referred to as a 'search key'). In state space search in artificial intelligence, repeated searches of vertices are often allowed, while in theoretical analysis of algorithms based on breadth-first search, precautions are typically taken to prevent repetitions.
BFS and its application in finding connected components of graphs were invented in 1945 by Konrad Zuse, in his (rejected) Ph.D. thesis on the Plankalkül programming language, but this was not published until 1972. It was reinvented in 1959 by Edward F. Moore, who used it to find the shortest path out of a maze, and later developed by C. Y. Lee into a wire routing algorithm (published in 1961).
Pseudocode.
Input: A graph G and a "starting vertex" root of G
Output: Goal state. The "parent" links trace the shortest path back to root
1 procedure BFS("G", "root") is
2 let "Q" be a queue
3 label "root" as explored
4 "Q".enqueue("root")
5 while "Q" is not empty do
6 "v" := "Q".dequeue()
7 if "v" is the goal then
8 return "v"
9 for all edges from "v" to "w" in "G".adjacentEdges("v") do
10 if "w" is not labeled as explored then
11 label "w" as explored
12 "w".parent := "v"
13 "Q".enqueue("w")
More details.
This non-recursive implementation is similar to the non-recursive implementation of depth-first search, but differs from it in two ways:
If G is a tree, replacing the queue of this breadth-first search algorithm with a stack will yield a depth-first search algorithm. For general graphs, replacing the stack of the iterative depth-first search implementation with a queue would also produce a breadth-first search algorithm, although a somewhat nonstandard one.
The "Q" queue contains the frontier along which the algorithm is currently searching.
Nodes can be labelled as explored by storing them in a set, or by an attribute on each node, depending on the implementation.
Note that the word "node" is usually interchangeable with the word "vertex".
The "parent" attribute of each node is useful for accessing the nodes in a shortest path, for example by backtracking from the destination node up to the starting node, once the BFS has been run, and the predecessors nodes have been set.
Breadth-first search produces a so-called "breadth first tree". You can see how a "breadth first tree" looks in the following example.
Example.
The following is an example of the breadth-first tree obtained by running a BFS on German cities starting from "Frankfurt":
Analysis.
Time and space complexity.
The time complexity can be expressed as formula_0, since every vertex and every edge will be explored in the worst case. formula_1 is the number of vertices and formula_2 is the number of edges in the graph.
Note that formula_3 may vary between formula_4 and formula_5, depending on how sparse the input graph is.
When the number of vertices in the graph is known ahead of time, and additional data structures are used to determine which vertices have already been added to the queue, the space complexity can be expressed as formula_6, where formula_1 is the number of vertices. This is in addition to the space
required for the graph itself, which may vary depending on the graph representation used by an implementation of the algorithm.
When working with graphs that are too large to store explicitly (or infinite), it is more practical to describe the complexity of breadth-first search in different terms: to find the nodes that are at distance d from the start node (measured in number of edge traversals), BFS takes "O"("b"^("d" + 1)) time and memory, where b is the "branching factor" of the graph (the average out-degree).
Completeness.
In the analysis of algorithms, the input to breadth-first search is assumed to be a finite graph, represented as an adjacency list, adjacency matrix, or similar representation. However, in the application of graph traversal methods in artificial intelligence the input may be an implicit representation of an infinite graph. In this context, a search method is described as being complete if it is guaranteed to find a goal state if one exists. Breadth-first search is complete, but depth-first search is not. When applied to infinite graphs represented implicitly, breadth-first search will eventually find the goal state, but depth first search may get lost in parts of the graph that have no goal state and never return.
BFS ordering.
An enumeration of the vertices of a graph is said to be a BFS ordering if it is a possible output of the application of BFS to this graph.
Let formula_7 be a graph with formula_8 vertices. Recall that formula_9 is the set of neighbors of formula_10.
Let formula_11 be a list of distinct elements of formula_12. For formula_13, let formula_14 be the least formula_15 such that formula_16 is a neighbor of formula_10, if such a formula_15 exists, and formula_17 otherwise.
Let formula_18 be an enumeration of the vertices of formula_12.
The enumeration formula_19 is said to be a BFS ordering (with source formula_20) if, for all formula_21, formula_16 is the vertex formula_22 such that formula_23 is minimal. Equivalently, formula_19 is a BFS ordering if, for all formula_24 with formula_25, there exists a neighbor formula_26 of formula_27 such that formula_28.
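As an illustration (not from the article), the Python sketch below produces one BFS ordering of a small graph; ties among vertices at the same distance are broken by the order in which neighbours are listed, so a graph generally has many valid BFS orderings.

# Produce one BFS ordering of a graph from a given source vertex.
from collections import deque

def bfs_ordering(neighbours, source):
    order = [source]
    seen = {source}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in neighbours[v]:
            if w not in seen:
                seen.add(w)
                order.append(w)
                queue.append(w)
    return order

# Example: for neighbours = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]},
# bfs_ordering(neighbours, 1) returns [1, 2, 3, 4]; [1, 3, 2, 4] is another
# valid BFS ordering of the same graph, obtained by listing 3 before 2.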
Applications.
Breadth-first search can be used to solve many problems in graph theory, for example:
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "O(|V|+|E|)"
},
{
"math_id": 1,
"text": "|V|"
},
{
"math_id": 2,
"text": "|E|"
},
{
"math_id": 3,
"text": "O(|E|)"
},
{
"math_id": 4,
"text": "O(1)"
},
{
"math_id": 5,
"text": " O(|V|^2)"
},
{
"math_id": 6,
"text": "O(|V|)"
},
{
"math_id": 7,
"text": "G=(V,E)"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "N(v)"
},
{
"math_id": 10,
"text": "v"
},
{
"math_id": 11,
"text": "\\sigma=(v_1,\\dots,v_m)"
},
{
"math_id": 12,
"text": "V"
},
{
"math_id": 13,
"text": "v\\in V\\setminus\\{v_1,\\dots,v_m\\}"
},
{
"math_id": 14,
"text": "\\nu_{\\sigma}(v)"
},
{
"math_id": 15,
"text": "i"
},
{
"math_id": 16,
"text": "v_i"
},
{
"math_id": 17,
"text": "\\infty"
},
{
"math_id": 18,
"text": "\\sigma=(v_1,\\dots,v_n)"
},
{
"math_id": 19,
"text": "\\sigma"
},
{
"math_id": 20,
"text": "v_1"
},
{
"math_id": 21,
"text": "1<i\\le n"
},
{
"math_id": 22,
"text": "w\\in V\\setminus\\{v_1,\\dots,v_{i-1}\\}"
},
{
"math_id": 23,
"text": "\\nu_{(v_1,\\dots,v_{i-1})}(w)"
},
{
"math_id": 24,
"text": "1\\le i<j<k\\le n"
},
{
"math_id": 25,
"text": "v_i\\in N(v_k)\\setminus N(v_j)"
},
{
"math_id": 26,
"text": "v_m"
},
{
"math_id": 27,
"text": "v_j"
},
{
"math_id": 28,
"text": "m<i"
}
] | https://en.wikipedia.org/wiki?curid=97026 |
9702754 | Motion ratio | The motion ratio of a mechanism is the ratio of the displacement of the point of interest to that of another point.
The most common example is in a vehicle's suspension, where it is used to describe the displacement and forces in the springs and shock absorbers. The force in the spring is (roughly) the vertical force at the contact patch divided by the motion ratio, and the spring rate is the wheel rate divided by the motion ratio squared.
formula_0formula_1
formula_2formula_3
This is described as the Installation Ratio in the reference. Motion ratio is the more common term in the industry, but sometimes is used to mean the inverse of the above definition.
In a vehicle's suspension, the motion ratio describes the amount of shock-absorber travel for a given amount of wheel travel; mathematically, it is the ratio of shock travel to wheel travel. The force transmitted to the vehicle chassis decreases as the motion ratio increases. A motion ratio close to one is desirable for better ride and comfort. The desired wheel travel should be known before calculating the motion ratio, and it depends largely on the type of track the vehicle will run on.
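As a small illustrative calculation (not from the source), the Python sketch below computes the wheel rate from a given spring rate and motion ratio using the relation wheel rate = spring rate / MR^2, with MR taken as wheel displacement divided by spring displacement; the numbers are hypothetical.

# Wheel rate from spring rate and motion ratio (MR = wheel travel / spring travel).
def motion_ratio(wheel_travel, spring_travel):
    return wheel_travel / spring_travel

def wheel_rate(spring_rate, mr):
    return spring_rate / mr ** 2

# Example (hypothetical numbers): 50 mm of wheel travel moves the spring 35 mm.
mr = motion_ratio(50.0, 35.0)            # about 1.43
print(wheel_rate(100.0, mr))             # 100 N/mm spring -> about 49 N/mm at the wheel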
Selecting the appropriate ratio depends on multiple factors:
References.
| [
{
"math_id": 0,
"text": "IR = \\frac{Spring Displacement}{Wheel Displacement}."
},
{
"math_id": 1,
"text": "MR = \\frac{Wheel Displacement}{Spring Displacement}."
},
{
"math_id": 2,
"text": "Wheel rate = {Spring rate}*{IR^2}."
},
{
"math_id": 3,
"text": "Wheel rate = {Spring rate}/{MR^2}."
}
] | https://en.wikipedia.org/wiki?curid=9702754 |
9703 | Evolutionary psychology | Branch of psychology
Evolutionary psychology is a theoretical approach in psychology that examines cognition and behavior from a modern evolutionary perspective. It seeks to identify human psychological adaptations with regards to the ancestral problems they evolved to solve. In this framework, psychological traits and mechanisms are either functional products of natural and sexual selection or non-adaptive by-products of other adaptive traits.
Adaptationist thinking about physiological mechanisms, such as the heart, lungs, and the liver, is common in evolutionary biology. Evolutionary psychologists apply the same thinking in psychology, arguing that just as the heart evolved to pump blood, the liver evolved to detoxify poisons, and the kidneys evolved to filter turbid fluids there is modularity of mind in that different psychological mechanisms evolved to solve different adaptive problems. These evolutionary psychologists argue that much of human behavior is the output of psychological adaptations that evolved to solve recurrent problems in human ancestral environments.
Some evolutionary psychologists argue that evolutionary theory can provide a foundational, metatheoretical framework that integrates the entire field of psychology in the same way evolutionary biology has for biology.
Evolutionary psychologists hold that behaviors or traits that occur universally in all cultures are good candidates for evolutionary adaptations, including the abilities to infer others' emotions, discern kin from non-kin, identify and prefer healthier mates, and cooperate with others. Findings have been made regarding human social behaviour related to infanticide, intelligence, marriage patterns, promiscuity, perception of beauty, bride price, and parental investment. The theories and findings of evolutionary psychology have applications in many fields, including economics, environment, health, law, management, psychiatry, politics, and literature.
Criticism of evolutionary psychology involves questions of testability, cognitive and evolutionary assumptions (such as modular functioning of the brain, and large uncertainty about the ancestral environment), importance of non-genetic and non-adaptive explanations, as well as political and ethical issues due to interpretations of research results. Evolutionary psychologists frequently engage with and respond to such criticisms.
<templatestyles src="Template:TOC limit/styles.css" />
Scope.
Principles.
Its central assumption is that the human brain is composed of a large number of specialized mechanisms that were shaped by natural selection over vast periods of time to solve the recurrent information-processing problems faced by our ancestors. These problems involve food choices, social hierarchies, distributing resources to offspring, and selecting mates. Proponents suggest that it seeks to integrate psychology into the other natural sciences, rooting it in the organizing theory of biology (evolutionary theory), and thus understanding psychology as a branch of biology. Anthropologist John Tooby and psychologist Leda Cosmides note:
<templatestyles src="Template:Blockquote/styles.css" />
Just as human physiology and evolutionary physiology have worked to identify physical adaptations of the body that represent "human physiological nature," the purpose of evolutionary psychology is to identify evolved emotional and cognitive adaptations that represent "human psychological nature." According to Steven Pinker, it is "not a single theory but a large set of hypotheses" and a term that "has also come to refer to a particular way of applying evolutionary theory to the mind, with an emphasis on adaptation, gene-level selection, and modularity." Evolutionary psychology adopts an understanding of the mind that is based on the computational theory of mind. It describes mental processes as computational operations, so that, for example, a fear response is described as arising from a neurological computation that inputs the perceptual data, e.g. a visual image of a spider, and outputs the appropriate reaction, e.g. fear of possibly dangerous animals. Under this view, any domain-general learning is impossible because of the combinatorial explosion. Evolutionary psychology specifies the domain as the problems of survival and reproduction.
While philosophers have generally considered the human mind to include broad faculties, such as reason and lust, evolutionary psychologists describe evolved psychological mechanisms as narrowly focused to deal with specific issues, such as catching cheaters or choosing mates. The discipline sees the human brain as having evolved specialized functions, called cognitive modules, or "psychological adaptations" which are shaped by natural selection. Examples include language-acquisition modules, incest-avoidance mechanisms, cheater-detection mechanisms, intelligence and sex-specific mating preferences, foraging mechanisms, alliance-tracking mechanisms, agent-detection mechanisms, and others. Some mechanisms, termed "domain-specific", deal with recurrent adaptive problems over the course of human evolutionary history. "Domain-general" mechanisms, on the other hand, are proposed to deal with evolutionary novelty.
Evolutionary psychology has roots in cognitive psychology and evolutionary biology but also draws on behavioral ecology, artificial intelligence, genetics, ethology, anthropology, archaeology, biology, ecopsychology and zoology. It is closely linked to sociobiology, but there are key differences between them including the emphasis on "domain-specific" rather than "domain-general" mechanisms, the relevance of measures of current fitness, the importance of mismatch theory, and psychology rather than behavior.
Nikolaas Tinbergen's four categories of questions can help to clarify the distinctions between several different, but complementary, types of explanations. Evolutionary psychology focuses primarily on the "why?" questions, while traditional psychology focuses on the "how?" questions.
Premises.
Evolutionary psychology is founded on several core premises.
History.
Evolutionary psychology has its historical roots in Charles Darwin's theory of natural selection. In "The Origin of Species", Darwin predicted that psychology would develop an evolutionary basis:
<templatestyles src="Template:Blockquote/styles.css" />In the distant future I see open fields for far more important researches. Psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation.
Two of his later books were devoted to the study of animal emotions and psychology; "The Descent of Man, and Selection in Relation to Sex" in 1871 and "The Expression of the Emotions in Man and Animals" in 1872. Darwin's work inspired William James's functionalist approach to psychology. Darwin's theories of evolution, adaptation, and natural selection have provided insight into why brains function the way they do.
The content of evolutionary psychology has derived from, on the one hand, the biological sciences (especially evolutionary theory as it relates to ancient human environments, the study of paleoanthropology and animal behavior) and, on the other, the human sciences, especially psychology.
Evolutionary biology as an academic discipline emerged with the modern synthesis in the 1930s and 1940s. In the 1930s the study of animal behavior (ethology) emerged with the work of the Dutch biologist Nikolaas Tinbergen and the Austrian biologists Konrad Lorenz and Karl von Frisch.
W.D. Hamilton's (1964) papers on inclusive fitness and Robert Trivers's (1972) theories on reciprocity and parental investment helped to establish evolutionary thinking in psychology and the other social sciences. In 1975, Edward O. Wilson combined evolutionary theory with studies of animal and social behavior, building on the works of Lorenz and Tinbergen, in his book "".
In the 1970s, two major branches developed from ethology. Firstly, the study of animal "social" behavior (including humans) generated sociobiology, defined by its pre-eminent proponent Edward O. Wilson in 1975 as "the systematic study of the biological basis of all social behavior" and in 1978 as "the extension of population biology and evolutionary theory to social organization." Secondly, there was behavioral ecology which placed less emphasis on "social" behavior; it focused on the ecological and evolutionary basis of animal and human behavior.
In the 1970s and 1980s university departments began to include the term "evolutionary biology" in their titles. The modern era of evolutionary psychology was ushered in, in particular, by Donald Symons' 1979 book "The Evolution of Human Sexuality" and Leda Cosmides and John Tooby's 1992 book "The Adapted Mind". David Buller observed that the term "evolutionary psychology" is sometimes seen as denoting research based on the specific methodological and theoretical commitments of certain researchers from the Santa Barbara school (University of California), thus some evolutionary psychologists prefer to term their work "human ecology", "human behavioural ecology" or "evolutionary anthropology" instead.
From psychology there are the primary streams of developmental, social and cognitive psychology. Establishing some measure of the relative influence of genetics and environment on behavior has been at the core of behavioral genetics and its variants, notably studies at the molecular level that examine the relationship between genes, neurotransmitters and behavior. Dual inheritance theory (DIT), developed in the late 1970s and early 1980s, has a slightly different perspective by trying to explain how human behavior is a product of two different and interacting evolutionary processes: genetic evolution and cultural evolution. DIT is seen by some as a "middle-ground" between views that emphasize human universals versus those that emphasize cultural variation.
Theoretical foundations.
The theories on which evolutionary psychology is based originated with Charles Darwin's work, including his speculations about the evolutionary origins of social instincts in humans. Modern evolutionary psychology, however, is possible only because of advances in evolutionary theory in the 20th century.
Evolutionary psychologists say that natural selection has provided humans with many psychological adaptations, in much the same way that it generated humans' anatomical and physiological adaptations. As with adaptations in general, psychological adaptations are said to be specialized for the environment in which an organism evolved, the environment of evolutionary adaptedness. Sexual selection provides organisms with adaptations related to mating. For male mammals, which have a relatively high maximal potential reproduction rate, sexual selection leads to adaptations that help them compete for females. For female mammals, with a relatively low maximal potential reproduction rate, sexual selection leads to choosiness, which helps females select higher quality mates. Charles Darwin described both natural selection and sexual selection, and he relied on group selection to explain the evolution of altruistic (self-sacrificing) behavior. But group selection was considered a weak explanation, because in any group the less altruistic individuals will be more likely to survive, and the group will become less self-sacrificing as a whole.
In 1964, the evolutionary biologist William D. Hamilton proposed inclusive fitness theory, emphasizing a gene-centered view of evolution. Hamilton noted that genes can increase the replication of copies of themselves into the next generation by influencing the organism's social traits in such a way that (statistically) results in helping the survival and reproduction of other copies of the same genes (most simply, identical copies in the organism's close relatives). According to Hamilton's rule, self-sacrificing behaviors (and the genes influencing them) can evolve if they typically help the organism's close relatives so much that it more than compensates for the individual animal's sacrifice. Inclusive fitness theory resolved the issue of how altruism can evolve. Other theories also help explain the evolution of altruistic behavior, including evolutionary game theory, tit-for-tat reciprocity, and generalized reciprocity. These theories help to explain the development of altruistic behavior, and account for hostility toward cheaters (individuals that take advantage of others' altruism).
Several mid-level evolutionary theories inform evolutionary psychology. The r/K selection theory proposes that some species prosper by having many offspring, while others follow the strategy of having fewer offspring but investing much more in each one. Humans follow the second strategy. Parental investment theory explains how parents invest more or less in individual offspring based on how successful those offspring are likely to be, and thus how much they might improve the parents' inclusive fitness. According to the Trivers–Willard hypothesis, parents in good conditions tend to invest more in sons (who are best able to take advantage of good conditions), while parents in poor conditions tend to invest more in daughters (who are best able to have successful offspring even in poor conditions). According to life history theory, animals evolve life histories to match their environments, determining details such as age at first reproduction and number of offspring. Dual inheritance theory posits that genes and human culture have interacted, with genes affecting the development of culture, and culture, in turn, affecting human evolution on a genetic level, in a similar way to the Baldwin effect.
Evolved psychological mechanisms.
Evolutionary psychology is based on the hypothesis that, just like hearts, lungs, livers, kidneys, and immune systems, cognition has a functional structure that has a genetic basis, and therefore has evolved by natural selection. Like other organs and tissues, this functional structure should be universally shared amongst a species and should solve important problems of survival and reproduction.
Evolutionary psychologists seek to understand psychological mechanisms by understanding the survival and reproductive functions they might have served over the course of evolutionary history. These might include abilities to infer others' emotions, discern kin from non-kin, identify and prefer healthier mates, cooperate with others and follow leaders. Consistent with the theory of natural selection, evolutionary psychology sees humans as often in conflict with others, including mates and relatives. For instance, a mother may wish to wean her offspring from breastfeeding earlier than does her infant, which frees up the mother to invest in additional offspring. Evolutionary psychology also recognizes the role of kin selection and reciprocity in evolving prosocial traits such as altruism. Like chimpanzees and bonobos, humans have subtle and flexible social instincts, allowing them to form extended families, lifelong friendships, and political alliances. In studies testing theoretical predictions, evolutionary psychologists have made modest findings on topics such as infanticide, intelligence, marriage patterns, promiscuity, perception of beauty, bride price and parental investment.
Another example is the proposed evolved mechanism underlying depression. Although clinical depression is maladaptive, evolutionary approaches ask whether some of its components originally served adaptive functions. Over evolutionary time, animals and humans repeatedly faced threats to survival, which strongly shaped fight-or-flight responses. For instance, young mammals separated from their guardians experience distress that sends signals to the hypothalamic-pituitary-adrenal axis and produces emotional and behavioral changes, responses that help mammals cope with separation.
Historical topics.
Proponents of evolutionary psychology in the 1990s made some explorations in historical events, but the response from historical experts was highly negative and there has been little effort to continue that line of research. Historian Lynn Hunt says that the historians complained that the researchers:
<templatestyles src="Template:Blockquote/styles.css" />have read the wrong studies, misinterpreted the results of experiments, or worse yet, turned to neuroscience looking for a universalizing, anti-representational and anti-intentional ontology to bolster their claims.
Hunt states that "the few attempts to build up a subfield of psychohistory collapsed under the weight of its presuppositions." She concludes that, as of 2014, the "'iron curtain' between historians and psychology...remains standing."
Products of evolution: adaptations, exaptations, byproducts, and random variation.
Not all traits of organisms are evolutionary adaptations. As noted in the table below, traits may also be exaptations, byproducts of adaptations (sometimes called "spandrels"), or random variation between individuals.
Psychological adaptations are hypothesized to be innate or relatively easy to learn and to manifest in cultures worldwide. For example, the ability of toddlers to learn a language with virtually no training is likely to be a psychological adaptation. On the other hand, ancestral humans did not read or write, thus today, learning to read and write requires extensive training, and presumably involves the repurposing of cognitive capacities that evolved in response to selection pressures unrelated to written language. However, variations in manifest behavior can result from universal mechanisms interacting with different local environments. For example, Caucasians who move from a northern climate to the equator will have darker skin. The mechanisms regulating their pigmentation do not change; rather the input to those mechanisms change, resulting in different outputs.
One of the tasks of evolutionary psychology is to identify which psychological traits are likely to be adaptations, byproducts or random variation. George C. Williams suggested that an "adaptation is a special and onerous concept that should only be used where it is really necessary." As noted by Williams and others, adaptations can be identified by their improbable complexity, species universality, and adaptive functionality.
Obligate and facultative adaptations.
A question that may be asked about an adaptation is whether it is generally obligate (relatively robust in the face of typical environmental variation) or facultative (sensitive to typical environmental variation). The sweet taste of sugar and the pain of hitting one's knee against concrete are the result of fairly obligate psychological adaptations; typical environmental variability during development does not much affect their operation. By contrast, facultative adaptations are somewhat like "if-then" statements. For example, the adaptation for skin to tan is conditional on exposure to sunlight, making tanning another example of a facultative adaptation. When a psychological adaptation is facultative, evolutionary psychologists concern themselves with how developmental and environmental inputs influence the expression of the adaptation.
Cultural universals.
Evolutionary psychologists hold that behaviors or traits that occur universally in all cultures are good candidates for evolutionary adaptations. Cultural universals include behaviors related to language, cognition, social roles, gender roles, and technology. Evolved psychological adaptations (such as the ability to learn a language) interact with cultural inputs to produce specific behaviors (e.g., the specific language learned).
Basic gender differences, such as greater eagerness for sex among men and greater coyness among women, are explained as sexually dimorphic psychological adaptations that reflect the different reproductive strategies of males and females.
Evolutionary psychologists contrast their approach to what they term the "standard social science model," according to which the mind is a general-purpose cognition device shaped almost entirely by culture.
Environment of evolutionary adaptedness.
Evolutionary psychology argues that to properly understand the functions of the brain, one must understand the properties of the environment in which the brain evolved. That environment is often referred to as the "environment of evolutionary adaptedness".
The idea of an "environment of evolutionary adaptedness" was first explored as a part of attachment theory by John Bowlby. This is the environment to which a particular evolved mechanism is adapted. More specifically, the environment of evolutionary adaptedness is defined as the set of historically recurring selection pressures that formed a given adaptation, as well as those aspects of the environment that were necessary for the proper development and functioning of the adaptation.
Humans, the genus "Homo", appeared between 1.5 and 2.5 million years ago, a time that roughly coincides with the start of the Pleistocene 2.6 million years ago. Because the Pleistocene ended a mere 12,000 years ago, most human adaptations either newly evolved during the Pleistocene, or were maintained by stabilizing selection during the Pleistocene. Evolutionary psychology, therefore, proposes that the majority of human psychological mechanisms are adapted to reproductive problems frequently encountered in Pleistocene environments. In broad terms, these problems include those of growth, development, differentiation, maintenance, mating, parenting, and social relationships.
The environment of evolutionary adaptedness is significantly different from modern society. The ancestors of modern humans lived in smaller groups, had more cohesive cultures, and had more stable and rich contexts for identity and meaning. Researchers look to existing hunter-gatherer societies for clues as to how hunter-gatherers lived in the environment of evolutionary adaptedness. Unfortunately, the few surviving hunter-gatherer societies are different from each other, and they have been pushed out of the best land and into harsh environments, so it is not clear how closely they reflect ancestral culture. However, all around the world small-band hunter-gatherers offer a similar developmental system for the young ("hunter-gatherer childhood model," Konner, 2005;
"evolved developmental niche" or "evolved nest;" Narvaez et al., 2013). The characteristics of the niche are largely the same as for social mammals, who evolved over 30 million years ago: soothing perinatal experience, several years of on-request breastfeeding, nearly constant affection or physical proximity, responsiveness to need (mitigating offspring distress), self-directed play, and for humans, multiple responsive caregivers. Initial studies show the importance of these components in early life for positive child outcomes.
Evolutionary psychologists sometimes look to chimpanzees, bonobos, and other great apes for insight into human ancestral behavior.
Mismatches.
Since an organism's adaptations were suited to its ancestral environment, a new and different environment can create a mismatch. Because humans are mostly adapted to Pleistocene environments, psychological mechanisms sometimes exhibit "mismatches" to the modern environment. One example is the fact that although about 10,000 people are killed with guns in the US annually while spiders and snakes kill only a handful, people nonetheless learn to fear spiders and snakes about as easily as they do a pointed gun, and more easily than an unpointed gun, rabbits or flowers. A potential explanation is that spiders and snakes were a threat to human ancestors throughout the Pleistocene, whereas guns (and rabbits and flowers) were not. There is thus a mismatch between humans' evolved fear-learning psychology and the modern environment.
This mismatch also shows up in the phenomena of the supernormal stimulus, a stimulus that elicits a response more strongly than the stimulus for which the response evolved. The term was coined by Niko Tinbergen to refer to non-human animal behavior, but psychologist Deirdre Barrett said that supernormal stimulation governs the behavior of humans as powerfully as that of other animals. She explained junk food as an exaggerated stimulus to cravings for salt, sugar, and fats, and she says that television is an exaggeration of social cues of laughter, smiling faces and attention-grabbing action. Magazine centerfolds and double cheeseburgers pull instincts intended for an environment of evolutionary adaptedness where breast development was a sign of health, youth and fertility in a prospective mate, and fat was a rare and vital nutrient. The psychologist Mark van Vugt recently argued that modern organizational leadership is a mismatch. His argument is that humans are not adapted to work in large, anonymous bureaucratic structures with formal hierarchies. The human mind still responds to personalized, charismatic leadership primarily in the context of informal, egalitarian settings. Hence the dissatisfaction and alienation that many employees experience. Salaries, bonuses and other privileges exploit instincts for relative status, which attract particularly males to senior executive positions.
Research methods.
Evolutionary theory is heuristic in that it may generate hypotheses that might not be developed from other theoretical approaches. One of the major goals of adaptationist research is to identify which organismic traits are likely to be adaptations, and which are byproducts or random variations. As noted earlier, adaptations are expected to show evidence of complexity, functionality, and species universality, while byproducts or random variation will not. In addition, adaptations are expected to manifest as proximate mechanisms that interact with the environment in either a generally obligate or facultative fashion (see above). Evolutionary psychologists are also interested in identifying these proximate mechanisms (sometimes termed "mental mechanisms" or "psychological adaptations") and what type of information they take as input, how they process that information, and their outputs. Evolutionary developmental psychology, or "evo-devo," focuses on how adaptations may be activated at certain developmental times (e.g., losing baby teeth, adolescence, etc.) or how events during the development of an individual may alter life-history trajectories.
Evolutionary psychologists use several strategies to develop and test hypotheses about whether a psychological trait is likely to be an evolved adaptation. Buss (2011) notes that these methods include:
<templatestyles src="Template:Blockquote/styles.css" /><poem>"Cross-cultural Consistency." Characteristics that have been demonstrated to be cross-cultural human universals such as smiling, crying, facial expressions are presumed to be evolved psychological adaptations. Several evolutionary psychologists have collected massive datasets from cultures around the world to assess cross-cultural universality.
"Function to Form (or "problem to solution")." The fact that males, but not females, risk potential misidentification of genetic offspring (referred to as "paternity uncertainty") led evolutionary psychologists to hypothesize that, compared to females, male jealousy would be more focused on sexual, rather than emotional, infidelity.
"Form to Function (reverse-engineering – or "solution to problem")." Morning sickness, and associated aversions to certain types of food, during pregnancy seemed to have the characteristics of an evolved adaptation (complexity and universality). Margie Profet hypothesized that the function was to avoid the ingestion of toxins during early pregnancy that could damage fetus (but which are otherwise likely to be harmless to healthy non-pregnant women).
"Corresponding Neurological Modules." Evolutionary psychology and cognitive neuropsychology are mutually compatible – evolutionary psychology helps to identify psychological adaptations and their ultimate, evolutionary functions, while neuropsychology helps to identify the proximate manifestations of these adaptations.
"Current Evolutionary Adaptiveness." In addition to evolutionary models that suggest evolution occurs across large spans of time, recent research has demonstrated that some evolutionary shifts can be fast and dramatic. Consequently, some evolutionary psychologists have focused on the impact of psychological traits in the current environment. Such research can be used to inform estimates of the prevalence of traits over time. Such work has been informative in studying evolutionary psychopathology.</poem>
Evolutionary psychologists also use various sources of data for testing, including experiments, archaeological records, data from hunter-gatherer societies, observational studies, neuroscience data, self-reports and surveys, public records, and human products.
Recently, additional methods and tools have been introduced based on fictional scenarios, mathematical models, and multi-agent computer simulations.
Main areas of research.
Foundational areas of research in evolutionary psychology can be divided into broad categories of adaptive problems that arise from evolutionary theory itself: survival, mating, parenting, family and kinship, interactions with non-kin, and cultural evolution.
Survival and individual-level psychological adaptations.
Problems of survival are clear targets for the evolution of physical and psychological adaptations. Major problems the ancestors of present-day humans faced included food selection and acquisition; territory selection and physical shelter; and avoiding predators and other environmental threats.
Consciousness.
Consciousness meets George Williams' criteria of species universality, complexity, and functionality, and it is a trait that apparently increases fitness.
In his paper "Evolution of consciousness," John Eccles argues that special anatomical and physical adaptations of the mammalian cerebral cortex gave rise to consciousness. In contrast, others have argued that the recursive circuitry underwriting consciousness is much more primitive, having evolved initially in pre-mammalian species because it improves the capacity for interaction with both social "and" natural environments by providing an energy-saving "neutral" gear in an otherwise energy-expensive motor output machine. Once in place, this recursive circuitry may well have provided a basis for the subsequent development of many of the functions that consciousness facilitates in higher organisms, as outlined by Bernard J. Baars. Richard Dawkins suggested that humans evolved consciousness in order to make themselves the subjects of thought. Daniel Povinelli suggests that large, tree-climbing apes evolved consciousness to take into account one's own mass when moving safely among tree branches. Consistent with this hypothesis, Gordon Gallup found that chimpanzees and orangutans, but not little monkeys or terrestrial gorillas, demonstrated self-awareness in mirror tests.
The concept of consciousness can refer to voluntary action, awareness, or wakefulness. However, even voluntary behavior involves unconscious mechanisms. Many cognitive processes take place in the cognitive unconscious, unavailable to conscious awareness. Some behaviors are conscious when learned but then become unconscious, seemingly automatic. Learning, especially implicitly learning a skill, can take place seemingly outside of consciousness. For example, plenty of people know how to turn right when they ride a bike, but very few can accurately explain how they actually do so.
Evolutionary psychology approaches self-deception as an adaptation that can improve one's results in social exchanges.
Sleep may have evolved to conserve energy when activity would be less fruitful or more dangerous, such as at night, and especially during the winter season.
Sensation and perception.
Many experts, such as Jerry Fodor, write that the purpose of perception is knowledge, but evolutionary psychologists hold that its primary purpose is to guide action. For example, they say, depth perception seems to have evolved not to help us know the distances to other objects but rather to help us move around in space. Evolutionary psychologists say that animals from fiddler crabs to humans use eyesight for collision avoidance, suggesting that vision is basically for directing action, not providing knowledge.
Building and maintaining sense organs is metabolically expensive, so these organs evolve only when they improve an organism's fitness. More than half the brain is devoted to processing sensory information, and the brain itself consumes roughly one-fourth of one's metabolic resources, so the senses must provide exceptional benefits to fitness. Perception accurately mirrors the world; animals get useful, accurate information through their senses.
Scientists who study perception and sensation have long understood the human senses as adaptations to their surrounding worlds. Depth perception consists of processing over half a dozen visual cues, each of which is based on a regularity of the physical world. Vision evolved to respond to the narrow range of electromagnetic energy that is plentiful and that does not pass through objects. Sound waves go around corners and interact with obstacles, creating a complex pattern that includes useful information about the sources of and distances to objects. Larger animals naturally make lower-pitched sounds as a consequence of their size. The range over which an animal hears, on the other hand, is determined by adaptation. Homing pigeons, for example, can hear the very low-pitched sound (infrasound) that carries great distances, even though most smaller animals detect higher-pitched sounds. Taste and smell respond to chemicals in the environment that are thought to have been significant for fitness in the environment of evolutionary adaptedness. For example, salt and sugar were apparently both valuable to the human or pre-human inhabitants of the environment of evolutionary adaptedness, so present-day humans have an intrinsic hunger for salty and sweet tastes. The sense of touch is actually many senses, including pressure, heat, cold, tickle, and pain. Pain, while unpleasant, is adaptive. An important adaptation for senses is range shifting, by which the organism becomes temporarily more or less sensitive to sensation. For example, one's eyes automatically adjust to dim or bright ambient light. Sensory abilities of different organisms often coevolve, as is the case with the hearing of echolocating bats and that of the moths that have evolved to respond to the sounds that the bats make.
Evolutionary psychologists contend that perception demonstrates the principle of modularity, with specialized mechanisms handling particular perception tasks. For example, people with damage to a particular part of the brain have the specific defect of not being able to recognize faces (prosopagnosia). Evolutionary psychology suggests that this indicates a so-called face-reading module.
Learning and facultative adaptations.
In evolutionary psychology, learning is said to be accomplished through evolved capacities, specifically facultative adaptations. Facultative adaptations express themselves differently depending on input from the environment. Sometimes the input comes during development and helps shape that development. For example, migrating birds learn to orient themselves by the stars during a critical period in their maturation. Evolutionary psychologists believe that humans also learn language along an evolved program, also with critical periods. The input can also come during daily tasks, helping the organism cope with changing environmental conditions. For example, animals evolved Pavlovian conditioning in order to solve problems about causal relationships. Animals accomplish learning tasks most easily when those tasks resemble problems that they faced in their evolutionary past, such as a rat learning where to find food or water. Learning capacities sometimes demonstrate differences between the sexes. In many animal species, for example, males can solve spatial problems faster and more accurately than females, due to the effects of male hormones during development. The same might be true of humans.
Emotion and motivation.
Motivations direct and energize behavior, while emotions provide the affective component to motivation, positive or negative. In the early 1970s, Paul Ekman and colleagues began a line of research which suggests that many emotions are universal. He found evidence that humans share at least five basic emotions: fear, sadness, happiness, anger, and disgust. Social emotions evidently evolved to motivate social behaviors that were adaptive in the environment of evolutionary adaptedness. For example, spite seems to work against the individual but it can establish an individual's reputation as someone to be feared. Shame and pride can motivate behaviors that help one maintain one's standing in a community, and self-esteem is one's estimate of one's status.
Motivation has a neurobiological basis in the reward system of the brain. Recently, it has been suggested that reward systems may evolve in such a way that there may be an inherent or unavoidable trade-off in the motivational system for activities of short versus long duration.
Cognition.
Cognition refers to internal representations of the world and internal information processing. From an evolutionary psychology perspective, cognition is not "general purpose", but uses heuristics, or strategies, that generally increase the likelihood of solving problems that the ancestors of present-day humans routinely faced. For example, present-day humans are far more likely to solve logic problems that involve detecting cheating (a common problem given humans' social nature) than the same logic problem put in purely abstract terms. Since the ancestors of present-day humans did not encounter truly random events, present-day humans may be cognitively predisposed to incorrectly identify patterns in random sequences. The "Gamblers' Fallacy" is one example of this. Gamblers may falsely believe that they have hit a "lucky streak" even when each outcome is actually random and independent of previous trials. Most people believe that if a fair coin has been flipped 9 times and Heads appeared each time, then on the tenth flip there is a greater than 50% chance of getting Tails. Humans find it far easier to make diagnoses or predictions using frequency data than when the same information is presented as probabilities or percentages, presumably because the ancestors of present-day humans lived in relatively small tribes (usually with fewer than 150 people) where frequency information was more readily available.
Personality.
Evolutionary psychology is primarily interested in finding commonalities between people, or basic human psychological nature. From an evolutionary perspective, the fact that people have fundamental differences in personality traits initially presents something of a puzzle. (Note: The field of behavioral genetics is concerned with statistically partitioning differences between people into genetic and environmental sources of variance. However, understanding the concept of heritability can be tricky – heritability refers only to the differences between people, never the degree to which the traits of an individual are due to environmental or genetic factors, since traits are always a complex interweaving of both.)
Personality traits are conceptualized by evolutionary psychologists as due to normal variation around an optimum, due to frequency-dependent selection (behavioral polymorphisms), or as facultative adaptations. Like variability in height, some personality traits may simply reflect inter-individual variability around a general optimum. Or, personality traits may represent different genetically predisposed "behavioral morphs" – alternate behavioral strategies that depend on the frequency of competing behavioral strategies in the population. For example, if most of the population is generally trusting and gullible, the behavioral morph of being a "cheater" (or, in the extreme case, a sociopath) may be advantageous. Finally, like many other psychological adaptations, personality traits may be facultative – sensitive to typical variations in the social environment, especially during early development. For example, later-born children are more likely than firstborns to be rebellious, less conscientious and more open to new experiences, which may be advantageous to them given their particular niche in family structure.
Shared environmental influences do play a role in personality and are not always of less importance than genetic factors. However, shared environmental influences often decrease to near zero after adolescence but do not completely disappear.
Language.
According to Steven Pinker, who builds on the work by Noam Chomsky, the universal human ability to learn to talk between the ages of 1 – 4, basically without training, suggests that language acquisition is a distinctly human psychological adaptation (see, in particular, Pinker's "The Language Instinct"). Pinker and Bloom (1990) argue that language as a mental faculty shares many likenesses with the complex organs of the body which suggests that, like these organs, language has evolved as an adaptation, since this is the only known mechanism by which such complex organs can develop.
Pinker follows Chomsky in arguing that the fact that children can learn any human language with no explicit instruction suggests that language, including most of grammar, is basically innate and that it only needs to be activated by interaction. Chomsky himself does not believe language to have evolved as an adaptation, but suggests that it likely evolved as a byproduct of some other adaptation, a so-called spandrel. But Pinker and Bloom argue that the organic nature of language strongly suggests that it has an adaptational origin.
Evolutionary psychologists hold that the FOXP2 gene may well be associated with the evolution of human language. In the 1980s, psycholinguist Myrna Gopnik identified a dominant gene that causes language impairment in the KE family of Britain. This gene turned out to be a mutation of the FOXP2 gene. Humans have a unique allele of this gene, which has otherwise been closely conserved through most of mammalian evolutionary history. This unique allele seems to have first appeared between 100 and 200 thousand years ago, and it is now all but universal in humans. However, the once-popular idea that FOXP2 is a 'grammar gene' or that it triggered the emergence of language in "Homo sapiens" is now widely discredited.
Currently, several competing theories about the evolutionary origin of language coexist, none of them having achieved a general consensus. Researchers of language acquisition in primates and humans such as Michael Tomasello and Talmy Givón, argue that the innatist framework has understated the role of imitation in learning and that it is not at all necessary to posit the existence of an innate grammar module to explain human language acquisition. Tomasello argues that studies of how children and primates actually acquire communicative skills suggest that humans learn complex behavior through experience, so that instead of a module specifically dedicated to language acquisition, language is acquired by the same cognitive mechanisms that are used to acquire all other kinds of socially transmitted behavior.
On the issue of whether language is best seen as having evolved as an adaptation or as a spandrel, evolutionary biologist W. Tecumseh Fitch, following Stephen J. Gould, argues that it is unwarranted to assume that every aspect of language is an adaptation, or that language as a whole is an adaptation. He criticizes some strands of evolutionary psychology for suggesting a pan-adaptionist view of evolution, and dismisses Pinker and Bloom's question of whether "Language has evolved as an adaptation" as being misleading. He argues instead that from a biological viewpoint the evolutionary origins of language is best conceptualized as being the probable result of a convergence of many separate adaptations into a complex system. A similar argument is made by Terrence Deacon who in "The Symbolic Species" argues that the different features of language have co-evolved with the evolution of the mind and that the ability to use symbolic communication is integrated in all other cognitive processes.
If the theory that language could have evolved as a single adaptation is accepted, the question becomes which of its many functions has been the basis of adaptation. Several evolutionary hypotheses have been posited: that language evolved for the purpose of social grooming, that it evolved as a way to show mating potential or that it evolved to form social contracts. Evolutionary psychologists recognize that these theories are all speculative and that much more evidence is required to understand how language might have been selectively adapted.
Mating.
Given that sexual reproduction is the means by which genes are propagated into future generations, sexual selection plays a large role in human evolution. Human mating, then, is of interest to evolutionary psychologists who aim to investigate evolved mechanisms to attract and secure mates. Several lines of research have stemmed from this interest, such as studies of mate selection, mate poaching, mate retention, mating preferences and conflict between the sexes.
In 1972 Robert Trivers published an influential paper on sex differences that is now referred to as parental investment theory. The size difference between gametes (anisogamy) is the fundamental, defining difference between males (small gametes – sperm) and females (large gametes – ova). Trivers noted that anisogamy typically results in different levels of parental investment between the sexes, with females initially investing more. Trivers proposed that this difference in parental investment leads to the sexual selection of different reproductive strategies between the sexes and to sexual conflict. For example, he suggested that the sex that invests less in offspring will generally compete for access to the higher-investing sex to increase their inclusive fitness. Trivers posited that differential parental investment led to the evolution of sexual dimorphisms in mate choice, intra- and inter-sexual reproductive competition, and courtship displays. In mammals, including humans, females make a much larger parental investment than males (i.e. gestation followed by childbirth and lactation). Parental investment theory is a branch of life history theory.
Buss and Schmitt's (1993) "Sexual Strategies Theory" proposed that, due to differential parental investment, humans have evolved sexually dimorphic adaptations related to "sexual accessibility, fertility assessment, commitment seeking and avoidance, immediate and enduring resource procurement, paternity certainty, assessment of mate value, and parental investment." Their "Strategic Interference Theory" suggested that conflict between the sexes occurs when the preferred reproductive strategies of one sex interfere with those of the other sex, resulting in the activation of emotional responses such as anger or jealousy.
Women are generally more selective when choosing mates, especially under long-term mating conditions. However, under some circumstances, short term mating can provide benefits to women as well, such as fertility insurance, trading up to better genes, reducing the risk of inbreeding, and insurance protection of her offspring.
Due to male paternity uncertainty, sex differences have been found in the domains of sexual jealousy. Females generally react more adversely to emotional infidelity, while males react more strongly to sexual infidelity. This particular pattern is predicted because the costs involved in mating for each sex are distinct. Women, on average, should prefer a mate who can offer resources (e.g., financial, commitment), thus, a woman risks losing such resources with a mate who commits emotional infidelity. Men, on the other hand, are never certain of the genetic paternity of their children because they do not bear the offspring themselves. This suggests that for men sexual infidelity would generally be more aversive than emotional infidelity because investing resources in another man's offspring does not lead to the propagation of their own genes.
Another interesting line of research is that which examines women's mate preferences across the ovulatory cycle. The theoretical underpinning of this research is that ancestral women would have evolved mechanisms to select mates with certain traits depending on their hormonal status. Known as the ovulatory shift hypothesis, the theory posits that, during the ovulatory phase of a woman's cycle (approximately days 10–15 of a woman's cycle), a woman who mated with a male with high genetic quality would have been more likely, on average, to produce and bear a healthy offspring than a woman who mated with a male with low genetic quality. These putative preferences are predicted to be especially apparent for short-term mating domains because a potential male mate would only be offering genes to a potential offspring. This hypothesis allows researchers to examine whether women select mates who have characteristics that indicate high genetic quality during the high fertility phase of their ovulatory cycles. Indeed, studies have shown that women's preferences vary across the ovulatory cycle. In particular, Haselton and Miller (2006) showed that highly fertile women prefer creative but poor men as short-term mates. Creativity may be a proxy for good genes. Research by Gangestad et al. (2004) indicates that highly fertile women prefer men who display social presence and intrasexual competition; these traits may act as cues that would help women predict which men may have, or would be able to acquire, resources.
Parenting.
Reproduction is always costly for women, and can also be for men. Individuals are limited in the degree to which they can devote time and resources to producing and raising their young, and such expenditure may also be detrimental to their future condition, survival and further reproductive output.
Parental investment is any parental expenditure (time, energy etc.) that benefits one offspring at a cost to parents' ability to invest in other components of fitness (Clutton-Brock 1991: 9; Trivers 1972). Components of fitness (Beatty 1992) include the well-being of existing offspring, parents' future reproduction, and inclusive fitness through aid to kin (Hamilton, 1964). Parental investment theory is a branch of life history theory.
The benefits of parental investment to the offspring are large and are associated with the effects on condition, growth, survival, and ultimately, on the reproductive success of the offspring. However, these benefits can come at the cost of the parent's ability to reproduce in the future e.g. through the increased risk of injury when defending offspring against predators, the loss of mating opportunities whilst rearing offspring, and an increase in the time to the next reproduction. Overall, parents are selected to maximize the difference between the benefits and the costs, and parental care will likely evolve when the benefits exceed the costs.
The Cinderella effect is an alleged high incidence of stepchildren being physically, emotionally or sexually abused, neglected, murdered, or otherwise mistreated at the hands of their stepparents at significantly higher rates than their genetic counterparts. It takes its name from the fairy tale character Cinderella, who in the story was cruelly mistreated by her stepmother and stepsisters. Daly and Wilson (1996) noted: "Evolutionary thinking led to the discovery of the most important risk factor for child homicide – the presence of a stepparent. Parental efforts and investments are valuable resources, and selection favors those parental psyches that allocate effort effectively to promote fitness. The adaptive problems that challenge parental decision-making include both the accurate identification of one's offspring and the allocation of one's resources among them with sensitivity to their needs and abilities to convert parental investment into fitness increments…. Stepchildren were seldom or never so valuable to one's expected fitness as one's own offspring would be, and those parental psyches that were easily parasitized by just any appealing youngster must always have incurred a selective disadvantage"(Daly & Wilson, 1996, pp. 64–65). However, they note that not all stepparents will "want" to abuse their partner's children, or that genetic parenthood is any insurance against abuse. They see step parental care as primarily "mating effort" towards the genetic parent.
Family and kin.
Inclusive fitness is the sum of an organism's classical fitness (how many of its own offspring it produces and supports) and the number of equivalents of its own offspring it can add to the population by supporting others. The first component is called classical fitness by Hamilton (1964).
From the gene's point of view, evolutionary success ultimately depends on leaving behind the maximum number of copies of itself in the population. Until 1964, it was generally believed that genes only achieved this by causing the individual to leave the maximum number of viable offspring. However, in 1964 W. D. Hamilton proved mathematically that, because close relatives of an organism share some identical genes, a gene can also increase its evolutionary success by promoting the reproduction and survival of these related or otherwise similar individuals. Hamilton concluded that this leads natural selection to favor organisms that would behave in ways that maximize their inclusive fitness. It is also true that natural selection favors behavior that maximizes personal fitness.
Hamilton's rule describes mathematically whether or not a gene for altruistic behavior will spread in a population:
formula_0
where
formula_1 is the reproductive cost to the altruist,
formula_2 is the reproductive benefit to the recipient of the altruistic behavior, and
formula_3 is the probability, above the population average, of the individuals sharing an altruistic gene – commonly viewed as the "degree of relatedness".
The concept serves to explain how natural selection can perpetuate altruism. If there is an "altruism gene" (or complex of genes) that influences an organism's behavior to be helpful and protective of relatives and their offspring, this behavior also increases the proportion of the altruism gene in the population, because relatives are likely to share genes with the altruist due to common descent. Altruists may also have some way to recognize altruistic behavior in unrelated individuals and be inclined to support them. As Dawkins points out in "The Selfish Gene" (Chapter 6) and "The Extended Phenotype", this must be distinguished from the green-beard effect.
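To make the inequality concrete, here is a short Python sketch (illustrative only; the numerical values are invented for the example rather than taken from the literature) that checks whether Hamilton's condition formula_0 holds for a few hypothetical actor and recipient pairs.

```python
def hamilton_favors_altruism(r, b, c):
    """Return True when Hamilton's rule r*b > c is satisfied.

    r -- degree of relatedness between actor and recipient
    b -- reproductive benefit to the recipient of the altruistic act
    c -- reproductive cost to the altruist
    """
    return r * b > c

# Invented illustrative values:
# helping a full sibling (r = 0.5) at cost 1 for a benefit of 3 -> favored
print(hamilton_favors_altruism(r=0.5, b=3, c=1))    # True
# helping a first cousin (r = 0.125) at the same cost and benefit -> not favored
print(hamilton_favors_altruism(r=0.125, b=3, c=1))  # False
```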
Although it is generally true that humans tend to be more altruistic toward their kin than toward non-kin, the relevant proximate mechanisms that mediate this cooperation have been debated (see kin recognition), with some arguing that kin status is determined primarily via social and cultural factors (such as co-residence, maternal association of sibs, etc.), while others have argued that kin recognition can also be mediated by biological factors such as facial resemblance and immunogenetic similarity of the major histocompatibility complex (MHC). For a discussion of the interaction of these social and biological kin recognition factors see Lieberman, Tooby, and Cosmides (2007) (PDF).
Whatever the proximate mechanisms of kin recognition there is substantial evidence that humans act generally more altruistically to close genetic kin compared to genetic non-kin.
Interactions with non-kin / reciprocity.
Although interactions with non-kin are generally less altruistic compared to those with kin, cooperation can be maintained with non-kin via mutually beneficial reciprocity as was proposed by Robert Trivers. If there are repeated encounters between the same two players in an evolutionary game in which each of them can choose either to "cooperate" or "defect", then a strategy of mutual cooperation may be favored even if it pays each player, in the short term, to defect when the other cooperates. Direct reciprocity can lead to the evolution of cooperation only if the probability, w, of another encounter between the same two individuals exceeds the cost-to-benefit ratio of the altruistic act:
w > c/b
Reciprocity can also be indirect if information about previous interactions is shared. Reputation allows evolution of cooperation by indirect reciprocity. Natural selection favors strategies that base the decision to help on the reputation of the recipient: studies show that people who are more helpful are more likely to receive help. The calculations of indirect reciprocity are complicated and only a tiny fraction of this universe has been uncovered, but again a simple rule has emerged. Indirect reciprocity can only promote cooperation if the probability, q, of knowing someone's reputation exceeds the cost-to-benefit ratio of the altruistic act:
q > c/b
One important problem with this explanation is that individuals may be able to evolve the capacity to obscure their reputation, reducing the probability, q, that it will be known.
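As a rough illustration of the two thresholds above (a sketch only, using arbitrary costs and benefits rather than empirically estimated ones), the conditions for direct and indirect reciprocity can be written as simple predicates:

```python
def direct_reciprocity_favored(w, b, c):
    """Direct reciprocity can sustain cooperation when w > c / b, where w is the
    probability of another encounter between the same two individuals,
    b the benefit to the recipient and c the cost to the donor."""
    return w > c / b

def indirect_reciprocity_favored(q, b, c):
    """Indirect reciprocity can sustain cooperation when q > c / b, where q is
    the probability of knowing the recipient's reputation."""
    return q > c / b

# Arbitrary illustrative values: cost 1, benefit 4, so the threshold c/b is 0.25.
print(direct_reciprocity_favored(w=0.5, b=4, c=1))    # True: repeat encounters likely enough
print(indirect_reciprocity_favored(q=0.1, b=4, c=1))  # False: reputations too rarely known
```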
Trivers argues that friendship and various social emotions evolved in order to manage reciprocity. Liking and disliking, he says, evolved to help present-day humans' ancestors form coalitions with others who reciprocated and to exclude those who did not reciprocate. Moral indignation may have evolved to prevent one's altruism from being exploited by cheaters, and gratitude may have motivated present-day humans' ancestors to reciprocate appropriately after benefiting from others' altruism. Likewise, present-day humans feel guilty when they fail to reciprocate. These social motivations match what evolutionary psychologists expect to see in adaptations that evolved to maximize the benefits and minimize the drawbacks of reciprocity.
Evolutionary psychologists say that humans have psychological adaptations that evolved specifically to help us identify nonreciprocators, commonly referred to as "cheaters." In 1993, Robert Frank and his associates found that participants in a prisoner's dilemma scenario were often able to predict whether their partners would "cheat", based on a half-hour of unstructured social interaction. In a 1996 experiment, for example, Linda Mealey and her colleagues found that people were better at remembering the faces of people when those faces were associated with stories about those individuals cheating (such as embezzling money from a church).
Strong reciprocity (or "tribal reciprocity").
Humans may have an evolved set of psychological adaptations that predispose them to be more cooperative than otherwise would be expected with members of their tribal in-group, and nastier toward members of tribal out-groups. These adaptations may have been a consequence of tribal warfare. Humans may also have predispositions for "altruistic punishment" – to punish in-group members who violate in-group rules, even when this altruistic behavior cannot be justified in terms of helping those you are related to (kin selection), cooperating with those who you will interact with again (direct reciprocity), or cooperating to better your reputation with others (indirect reciprocity).
Evolutionary psychology and culture.
Though evolutionary psychology has traditionally focused on individual-level behaviors, determined by species-typical psychological adaptations, considerable work has been done on how these adaptations shape and, ultimately govern, culture (Tooby and Cosmides, 1989). Tooby and Cosmides (1989) argued that the mind consists of many domain-specific psychological adaptations, some of which may constrain what cultural material is learned or taught. As opposed to a domain-general cultural acquisition program, where an individual passively receives culturally-transmitted material from the group, Tooby and Cosmides (1989), among others, argue that: "the psyche evolved to generate adaptive rather than repetitive behavior, and hence critically analyzes the behavior of those surrounding it in highly structured and patterned ways, to be used as a rich (but by no means the only) source of information out of which to construct a 'private culture' or individually tailored adaptive system; in consequence, this system may or may not mirror the behavior of others in any given respect." (Tooby and Cosmides 1989).
Biological explanations of human culture also brought criticism to evolutionary psychology: Evolutionary psychologists see the human psyche and physiology as a genetic product and assume that genes contain the information for the development and control of the organism and that this information is transmitted from one generation to the next via genes. Evolutionary psychologists thereby see physical and psychological characteristics of humans as genetically programmed. Even when evolutionary psychologists acknowledge the influence of the environment on human development, they understand the environment only as an activator or trigger for the programmed developmental instructions encoded in genes. Evolutionary psychologists, for example, believe that the human brain is made up of innate modules, each of which is specialised only for very specific tasks, e.g. an anxiety module. According to evolutionary psychologists, these modules are given before the organism actually develops and are then activated by some environmental event. Critics object that this view is reductionist and that cognitive specialisation only comes about through the interaction of humans with their real environment, rather than the environment of distant ancestors. Interdisciplinary approaches are increasingly striving to mediate between these opposing points of view and to highlight that biological and cultural causes need not be antithetical in explaining human behaviour and even complex cultural achievements.
In psychology sub-fields.
Developmental psychology.
According to Paul Baltes, the benefits granted by evolutionary selection decrease with age. Natural selection has not eliminated many harmful conditions and nonadaptive characteristics that appear among older adults, such as Alzheimer disease. If Alzheimer disease killed 20-year-olds instead of 70-year-olds, natural selection might have eliminated it ages ago. Thus, unaided by evolutionary pressures against nonadaptive conditions, modern humans suffer the aches, pains, and infirmities of aging, and as the benefits of evolutionary selection decrease with age, the need for modern technological interventions against nonadaptive conditions increases.
Social psychology.
As humans are a highly social species, there are many adaptive problems associated with navigating the social world (e.g., maintaining allies, managing status hierarchies, interacting with outgroup members, coordinating social activities, collective decision-making). Researchers in the emerging field of evolutionary social psychology have made many discoveries pertaining to topics traditionally studied by social psychologists, including person perception, social cognition, attitudes, altruism, emotions, group dynamics, leadership, motivation, prejudice, intergroup relations, and cross-cultural differences.
When endeavouring to solve a problem, humans show a determined facial expression from an early age, whereas chimpanzees have no comparable expression. Researchers suspect that the human determined expression evolved because, when a human is determinedly working on a problem, other people will frequently help.
Abnormal psychology.
Adaptationist hypotheses regarding the etiology of psychological disorders are often based on analogies between physiological and psychological dysfunctions, as noted in the table below. Prominent theorists and evolutionary psychiatrists include Michael T. McGuire, Anthony Stevens, and Randolph M. Nesse. They, and others, suggest that mental disorders are due to the interactive effects of both nature and nurture, and often have multiple contributing causes.
Evolutionary psychologists have suggested that schizophrenia and bipolar disorder may reflect a side-effect of genes with fitness benefits, such as increased creativity. (Some individuals with bipolar disorder are especially creative during their manic phases and the close relatives of people with schizophrenia have been found to be more likely to have creative professions.) A 1994 report by the American Psychiatric Association found that schizophrenia occurs at roughly the same rate in Western and non-Western cultures, and in industrialized and pastoral societies, suggesting that schizophrenia is neither a disease of civilization nor an arbitrary social invention. Sociopathy may represent an evolutionarily stable strategy, by which a small number of people who cheat on social contracts benefit in a society consisting mostly of non-sociopaths. Mild depression may be an adaptive response to withdraw from, and re-evaluate, situations that have led to disadvantageous outcomes (the "analytical rumination hypothesis") (see Evolutionary approaches to depression).
Trofimova reviewed the most consistent psychological and behavioural sex differences in psychological abilities and disabilities and linked them to Geodakyan's evolutionary theory of sex (ETS). She pointed out that a pattern of consistent sex differences in physical, verbal and social dis/abilities corresponds to the idea of the ETS, which considers sex dimorphism as a functional specialization of a species. Sex differentiation, according to the ETS, creates two partitions within a species: (1) conservational (females), and (2) variational (males). In females, superiority in verbal abilities, higher rule obedience, socialisation, empathy and agreeableness can be presented as a reflection of the systemic conservation function of the female sex. Male superiority is mostly noted in exploratory abilities - in risk- and sensation-seeking, spatial orientation, physical strength and higher rates of physical aggression. In combination with higher birth and accidental death rates, this pattern might be a reflection of the systemic variational function (testing the boundaries of beneficial characteristics) of the male sex. As a result, psychological sex differences might be influenced by a global tendency within a species to expand its norm of reaction, but at the same time to preserve the beneficial properties of the species. Moreover, Trofimova suggested a "redundancy pruning" hypothesis as an upgrade of the ETS theory. She pointed to higher rates of psychopathy, dyslexia, autism and schizophrenia in males, in comparison to females. She suggested that the variational function of the "male partition" might also provide irrelevance/redundancy pruning of an excess in a bank of beneficial characteristics of a species, with a continuing resistance to any changes from the norm-driven conservational partition of the species. This might explain the seemingly contradictory combination of a high drive for social status/power in the male sex with their weaker (of the two sexes) abilities for social interaction. The high rates of communicative disorders and psychopathy in males might facilitate their higher rates of disengagement from normative expectations and their insensitivity to social disapproval when they deliberately do not follow social norms.
Some of these speculations have yet to be developed into fully testable hypotheses, and a great deal of research is required to confirm their validity.
Antisocial and criminal behavior.
Evolutionary psychology has been applied to explain criminal or otherwise immoral behavior as being adaptive or related to adaptive behaviors. Males are generally more aggressive than females, who are more selective of their partners because of the far greater effort they have to contribute to pregnancy and child-rearing. The greater aggression of males is hypothesized to stem from the more intense reproductive competition they face. Males of low status may be especially vulnerable to being childless. It may have been evolutionarily advantageous to engage in highly risky and violently aggressive behavior to increase their status and therefore reproductive success. This may explain why males are generally involved in more crimes, and why low status and being unmarried are associated with criminality. Furthermore, competition over females is argued to have been particularly intensive in late adolescence and young adulthood, which is theorized to explain why crime rates are particularly high during this period. Some sociologists have underlined differential exposure to androgens as the cause of these behaviors, notably Lee Ellis in his evolutionary neuroandrogenic (ENA) theory.
Many conflicts that result in harm and death involve status, reputation, and seemingly trivial insults. Steven Pinker in his book "The Better Angels of Our Nature" argues that in non-state societies without a police it was very important to have a credible deterrence against aggression. Therefore, it was important to be perceived as having a credible reputation for retaliation, resulting in humans developing instincts for revenge as well as for protecting reputation ("honor"). Pinker argues that the development of the state and the police have dramatically reduced the level of violence compared to the ancestral environment. Whenever the state breaks down, which can be very locally such as in poor areas of a city, humans again organize in groups for protection and aggression and concepts such as violent revenge and protecting honor again become extremely important.
Rape is theorized to be a reproductive strategy that facilitates the propagation of the rapist's progeny. Such a strategy may be adopted by men who otherwise are unlikely to be appealing to women and therefore cannot form legitimate relationships, or by high-status men preying on socially vulnerable women who are unlikely to retaliate, in order to increase their reproductive success even further. The sociobiological theories of rape are highly controversial, as traditional theories typically do not consider rape to be a behavioral adaptation, and objections to this theory are made on ethical, religious, political, as well as scientific grounds.
Psychology of religion.
Adaptationist perspectives on religious belief suggest that, like all behavior, religious behaviors are a product of the human brain. As with all other organ functions, cognition's functional structure has been argued to have a genetic foundation, and is therefore subject to the effects of natural selection and sexual selection. Like other organs and tissues, this functional structure should be universally shared amongst humans and should have solved important problems of survival and reproduction in ancestral environments. However, evolutionary psychologists remain divided on whether religious belief is more likely a consequence of evolved psychological adaptations, or a byproduct of other cognitive adaptations.
Coalitional psychology.
Coalitional psychology is an approach to explain political behaviors between different coalitions and the conditionality of these behaviors in evolutionary psychological perspective. This approach assumes that since human beings appeared on the earth, they have evolved to live in groups instead of living as individuals to achieve benefits such as more mating opportunities and increased status. Human beings thus naturally think and act in a way that manages and negotiates group dynamics.
Coalitional psychology offers falsifiable ex ante predictions by positing five hypotheses on how these psychological adaptations operate:
Reception and criticism.
Critics of evolutionary psychology accuse it of promoting genetic determinism, pan-adaptationism (the idea that all behaviors and anatomical features are adaptations), unfalsifiable hypotheses, distal or ultimate explanations of behavior when proximate explanations are superior, and malevolent political or moral ideas.
Ethical implications.
Critics have argued that evolutionary psychology might be used to justify existing social hierarchies and reactionary policies. It has also been suggested by critics that evolutionary psychologists' theories and interpretations of empirical data rely heavily on ideological assumptions about race and gender.
In response to such criticism, evolutionary psychologists often caution against committing the naturalistic fallacy – the assumption that "what is natural" is necessarily a moral good. However, their caution against committing the naturalistic fallacy has been criticized as a means to stifle legitimate ethical discussions.
Contradictions in models.
Some criticisms of evolutionary psychology point at contradictions between different aspects of the adaptive scenarios it posits. One example is the model in which extended social groups selected for modern human brains: the synaptic function of modern human brains requires high amounts of many specific essential nutrients, so a transition to higher requirements for the same essential nutrients, shared by all individuals in a population, would have decreased rather than increased the possibility of forming large groups, because bottleneck foods containing rare essential nutrients would cap group sizes. Critics also mention that some insects have societies with different ranks for each individual, and that monkeys remain socially functional after the removal of most of the brain, as additional arguments against big brains promoting social networking. The model of males as both providers and protectors is criticized for the impossibility of being in two places at once: a male cannot both protect his family at home and be out hunting at the same time. Against the claim that a provider male could buy protection for his family from other males by bartering food he had hunted, critics point out that the most valuable food (the food containing the rarest essential nutrients) would differ between ecologies, being vegetable in some geographical areas and animal in others; this would make it impossible for hunting styles relying on physical strength or risk-taking to be universally of similar value in bartered food, and would make it inevitable that in some parts of Africa food gathered without major physical strength would be the most valuable to barter for protection. Critics also point at a contradiction between evolutionary psychology's claim that men need to be more sexually visual than women, in order to assess women's fertility faster than women need to assess men's genes, and its claim that male sexual jealousy guards against infidelity: it would be pointless for a male to assess female fertility quickly if he also needed to assess the risk that the female had a jealous male mate, and his chances of defeating that mate before mating (the pointlessness of assessing one necessary condition faster than another necessary condition can possibly be assessed).
Standard social science model.
Evolutionary psychology has been entangled in the larger philosophical and social science controversies related to the debate on nature versus nurture. Evolutionary psychologists typically contrast evolutionary psychology with what they call the standard social science model (SSSM). They characterize the SSSM as the "blank slate", "relativist", "social constructionist", and "cultural determinist" perspective that they say dominated the social sciences throughout the 20th century and assumed that the mind was shaped almost entirely by culture.
Critics have argued that evolutionary psychologists created a false dichotomy between their own view and the caricature of the SSSM. Other critics regard the SSSM as a rhetorical device or a straw man and suggest that the scientists whom evolutionary psychologists associate with the SSSM did not believe that the mind was a blank slate devoid of any natural predispositions.
Reductionism and determinism.
Some critics view evolutionary psychology as a form of genetic reductionism and genetic determinism, a common critique being that evolutionary psychology does not address the complexity of individual development and experience and fails to explain the influence of genes on behavior in individual cases. Evolutionary psychologists respond that they are working within a nature-nurture interactionist framework that acknowledges that many psychological adaptations are facultative (sensitive to environmental variations during individual development). The discipline is generally not focused on proximate analyses of behavior, but rather its focus is on the study of distal/ultimate causality (the evolution of psychological adaptations). The field of behavioral genetics is focused on the study of the proximate influence of genes on behavior.
Testability of hypotheses.
A frequent critique of the discipline is that the hypotheses of evolutionary psychology are frequently arbitrary and difficult or impossible to adequately test, thus questioning its status as an actual scientific discipline, for example because many current traits probably evolved to serve different functions than they do now. Thus because there are a potentially infinite number of alternative explanations for why a trait evolved, critics contend that it is impossible to determine the exact explanation. While evolutionary psychology hypotheses are difficult to test, evolutionary psychologists assert that it is not impossible. Part of the critique of the scientific base of evolutionary psychology includes a critique of the concept of the Environment of Evolutionary Adaptation (EEA). Some critics have argued that researchers know so little about the environment in which "Homo sapiens" evolved that explaining specific traits as an adaption to that environment becomes highly speculative. Evolutionary psychologists respond that they do know many things about this environment, including the facts that present day humans' ancestors were hunter-gatherers, that they generally lived in small tribes, etc. Edward Hagen argues that the human past environments were not radically different in the same sense as the Carboniferous or Jurassic periods and that the animal and plant taxa of the era were similar to those of the modern world, as was the geology and ecology. Hagen argues that few would deny that other organs evolved in the EEA (for example, lungs evolving in an oxygen rich atmosphere) yet critics question whether or not the brain's EEA is truly knowable, which he argues constitutes selective scepticism. Hagen also argues that most evolutionary psychology research is based on the fact that females can get pregnant and males cannot, which Hagen observes was also true in the EEA.
John Alcock describes this as the "No Time Machine Argument", as critics are arguing that since it is not possible to travel back in time to the EEA, then it cannot be determined what was going on there and thus what was adaptive. Alcock argues that present-day evidence allows researchers to be reasonably confident about the conditions of the EEA and that the fact that so many human behaviours are adaptive in the "current" environment is evidence that the ancestral environment of humans had much in common with the present one, as these behaviours would have evolved in the ancestral environment. Thus Alcock concludes that researchers can make predictions on the adaptive value of traits. Similarly, Dominic Murphy argues that alternative explanations cannot just be forwarded but instead need their own evidence and predictions - if one explanation makes predictions that the others cannot, it is reasonable to have confidence in that explanation. In addition, Murphy argues that other historical sciences also make predictions about modern phenomena to come up with explanations about past phenomena, for example, cosmologists look for evidence for what we would expect to see in the modern-day if the Big Bang was true, while geologists make predictions about modern phenomena to determine if an asteroid wiped out the dinosaurs. Murphy argues that if other historical disciplines can conduct tests without a time machine, then the onus is on the critics to show why evolutionary psychology is untestable if other historical disciplines are not, as "methods should be judged across the board, not singled out for ridicule in one context."
Modularity of mind.
Evolutionary psychologists generally presume that, like the body, the mind is made up of many evolved modular adaptations, although there is some disagreement within the discipline regarding the degree of general plasticity, or "generality," of some modules. It has been suggested that modularity evolves because, compared to non-modular networks, it would have conferred an advantage in terms of fitness and because connection costs are lower.
In contrast, some academics argue that it is unnecessary to posit the existence of highly domain specific modules, and, suggest that the neural anatomy of the brain supports a model based on more domain general faculties and processes. Moreover, empirical support for the domain-specific theory stems almost entirely from performance on variations of the Wason selection task which is extremely limited in scope as it only tests one subtype of deductive reasoning.
Cultural rather than genetic development of cognitive tools.
Psychologist Cecilia Heyes has argued that the picture presented by some evolutionary psychology of the human mind as a collection of cognitive instincts – organs of thought shaped by genetic evolution over very long time periods – does not fit research results. She posits instead that humans have cognitive gadgets – "special-purpose organs of thought" built in the course of development through social interaction. Similar criticisms are articulated by Subrena E. Smith of the University of New Hampshire.
Response by evolutionary psychologists.
Evolutionary psychologists have addressed many of their critics (e.g. in books by Segerstråle (2000), Barkow (2005), and Alcock (2001)). Among their rebuttals are that some criticisms are straw men, or are based on an incorrect nature versus nurture dichotomy or on basic misunderstandings of the discipline.
Robert Kurzban suggested that "...critics of the field, when they err, are not slightly missing the mark. Their confusion is deep and profound. It's not like they are marksmen who can't quite hit the center of the target; they're holding the gun backwards." Many have written specifically to correct basic misconceptions.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "rb > c \\ "
},
{
"math_id": 1,
"text": "c \\ "
},
{
"math_id": 2,
"text": "b \\ "
},
{
"math_id": 3,
"text": "r \\ "
}
] | https://en.wikipedia.org/wiki?curid=9703 |
97034 | Depth-first search | Search algorithm
Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. The algorithm starts at the root node (selecting some arbitrary node as the root node in the case of a graph) and explores as far as possible along each branch before backtracking. Extra memory, usually a stack, is needed to keep track of the nodes discovered so far along a specified branch, which helps in backtracking through the graph.
A version of depth-first search was investigated in the 19th century by French mathematician Charles Pierre Trémaux as a strategy for solving mazes.
Properties.
The time and space analysis of DFS differs according to its application area. In theoretical computer science, DFS is typically used to traverse an entire graph, and takes time formula_0, where formula_2 is the number of vertices and formula_3 the number of edges. This is linear in the size of the graph. In these applications it also uses space formula_1 in the worst case to store the stack of vertices on the current search path as well as the set of already-visited vertices. Thus, in this setting, the time and space bounds are the same as for breadth-first search and the choice of which of these two algorithms to use depends less on their complexity and more on the different properties of the vertex orderings the two algorithms produce.
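The following Python sketch (an illustrative implementation, not tied to any particular source) shows the traversal in this setting: an explicit stack and a visited set are kept, each vertex is expanded once and each edge examined at most once, which is what yields the linear time bound, while the stack and visited set account for the worst-case space usage.

```python
def depth_first_search(graph, start):
    """Iterative DFS over a graph given as a dict mapping each vertex to a list
    of neighbours. Returns the vertices in the order they were first visited."""
    visited = set()
    order = []
    stack = [start]
    while stack:
        vertex = stack.pop()
        if vertex in visited:
            continue
        visited.add(vertex)
        order.append(vertex)
        # Push neighbours in reverse so that the first-listed neighbour
        # is the next one to be expanded.
        for neighbour in reversed(graph[vertex]):
            if neighbour not in visited:
                stack.append(neighbour)
    return order

# A small made-up graph (not the figure from the article); every vertex,
# including sinks, must appear as a key.
example = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(depth_first_search(example, "A"))  # ['A', 'B', 'D', 'C']
```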
For applications of DFS in relation to specific domains, such as searching for solutions in artificial intelligence or web-crawling, the graph to be traversed is often either too large to visit in its entirety or infinite (DFS may suffer from non-termination). In such cases, search is only performed to a limited depth; due to limited resources, such as memory or disk space, one typically does not use data structures to keep track of the set of all previously visited vertices. When search is performed to a limited depth, the time is still linear in terms of the number of expanded vertices and edges (although this number is not the same as the size of the entire graph because some vertices may be searched more than once and others not at all) but the space complexity of this variant of DFS is only proportional to the depth limit, and as a result, is much smaller than the space needed for searching to the same depth using breadth-first search. For such applications, DFS also lends itself much better to heuristic methods for choosing a likely-looking branch. When an appropriate depth limit is not known a priori, iterative deepening depth-first search applies DFS repeatedly with a sequence of increasing limits. In the artificial intelligence mode of analysis, with a branching factor greater than one, iterative deepening increases the running time by only a constant factor over the case in which the correct depth limit is known due to the geometric growth of the number of nodes per level.
DFS may also be used to collect a sample of graph nodes. However, incomplete DFS, similarly to incomplete BFS, is biased towards nodes of high degree.
Example.
For the following graph:
a depth-first search starting at the node A, assuming that the left edges in the shown graph are chosen before right edges, and assuming the search remembers previously visited nodes and will not repeat them (since this is a small graph), will visit the nodes in the following order: A, B, D, F, E, C, G. The edges traversed in this search form a Trémaux tree, a structure with important applications in graph theory.
Performing the same search without remembering previously visited nodes results in visiting the nodes in the order A, B, D, F, E, A, B, D, F, E, etc. forever, caught in the A, B, D, F, E cycle and never reaching C or G.
Iterative deepening is one technique to avoid this infinite loop and would reach all nodes.
Output of a depth-first search.
The result of a depth-first search of a graph can be conveniently described in terms of a spanning tree of the vertices reached during the search. Based on this spanning tree, the edges of the original graph can be divided into three classes: forward edges, which point from a node of the tree to one of its descendants, back edges, which point from a node to one of its ancestors, and cross edges, which do neither. Sometimes tree edges, edges which belong to the spanning tree itself, are classified separately from forward edges. If the original graph is undirected then all of its edges are tree edges or back edges.
Vertex orderings.
It is also possible to use depth-first search to linearly order the vertices of a graph or tree. There are four possible ways of doing this: a preordering (a list of the vertices in the order that they were first visited by the search), a postordering (a list in the order that they were last visited), and the reverses of these two, a reverse preordering and a reverse postordering.
For binary trees there is additionally in-ordering and reverse in-ordering.
For example, when searching the directed graph below beginning at node A, the sequence of traversals is either A B D B A C A or A C D C A B A (choosing to first visit B or C from A is up to the algorithm). Note that repeat visits in the form of backtracking to a node, to check if it has still unvisited neighbors, are included here (even if it is found to have none). Thus the possible preorderings are A B D C and A C D B, while the possible postorderings are D B C A and D C B A, and the possible reverse postorderings are A C B D and A B C D.
Reverse postordering produces a topological sorting of any directed acyclic graph. This ordering is also useful in control-flow analysis as it often represents a natural linearization of the control flows. The graph above might represent the flow of control in the code fragment below, and it is natural to consider this code in the order A B C D or A C B D but not natural to use the order A B D C or A C D B.
if (A) then {
    B
} else {
    C
}
D
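As a concrete illustration of these orderings (a minimal Python sketch added here, not part of the original pseudocode), the directed graph described above can be written down directly as adjacency lists, and a single recursive traversal yields the preordering, the postordering, and, by reversal, the reverse postordering used for topological sorting:

# Pre-, post- and reverse postorderings of a DFS on the directed graph A->B, A->C, B->D, C->D.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def dfs_orders(graph, root):
    preorder, postorder, visited = [], [], set()
    def visit(v):
        visited.add(v)
        preorder.append(v)           # recorded when the vertex is first reached
        for w in graph[v]:
            if w not in visited:
                visit(w)
        postorder.append(v)          # recorded when the search last leaves the vertex
    visit(root)
    return preorder, postorder

pre, post = dfs_orders(graph, "A")
print(pre)          # ['A', 'B', 'D', 'C']  -- one of the preorderings quoted above
print(post)         # ['D', 'B', 'C', 'A']  -- one of the postorderings
print(post[::-1])   # ['A', 'C', 'B', 'D']  -- a reverse postordering, i.e. a topological sort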
Pseudocode.
A recursive implementation of DFS:
procedure DFS("G", "v") is
label "v" as discovered
for all directed edges from "v" to "w" that are in "G".adjacentEdges("v") do
if vertex "w" is not labeled as discovered then
recursively call DFS("G", "w")
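A runnable Python version of the recursive procedure is sketched below. The adjacency lists are an assumption: they are not taken from the article's figure, but are chosen to be consistent with the visit order A, B, D, F, E, C, G quoted for the example graph.

# Recursive DFS over an adjacency-list graph (adjacency lists assumed, see note above).
graph = {
    "A": ["B", "C", "E"],
    "B": ["A", "D", "F"],
    "C": ["A", "G"],
    "D": ["B"],
    "E": ["A", "F"],
    "F": ["B", "E"],
    "G": ["C"],
}

def dfs_recursive(graph, v, discovered=None):
    if discovered is None:
        discovered = []
    discovered.append(v)                     # label v as discovered
    for w in graph[v]:                       # edges from v to w
        if w not in discovered:
            dfs_recursive(graph, w, discovered)
    return discovered

print(dfs_recursive(graph, "A"))   # ['A', 'B', 'D', 'F', 'E', 'C', 'G']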
A non-recursive implementation of DFS with worst-case space complexity formula_4, with the possibility of duplicate vertices on the stack:
procedure DFS_iterative("G", "v") is
let "S" be a stack
"S".push("v")
while "S" is not empty do
"v" = "S".pop()
if "v" is not labeled as discovered then
label "v" as discovered
for all edges from "v" to "w" in "G".adjacentEdges("v") do
"S".push("w")
These two variations of DFS visit the neighbors of each vertex in the opposite order from each other: the first neighbor of "v" visited by the recursive variation is the first one in the list of adjacent edges, while in the iterative variation the first visited neighbor is the last one in the list of adjacent edges. The recursive implementation will visit the nodes from the example graph in the following order: A, B, D, F, E, C, G. The non-recursive implementation will visit the nodes as: A, E, F, B, D, C, G.
The non-recursive implementation is similar to breadth-first search but differs from it in two ways: it uses a stack instead of a queue, and it delays checking whether a vertex has been discovered until the vertex is popped from the stack rather than making this check before adding the vertex.
If G is a tree, replacing the queue of the breadth-first search algorithm with a stack will yield a depth-first search algorithm. For general graphs, replacing the stack of the iterative depth-first search implementation with a queue would also produce a breadth-first search algorithm, although a somewhat nonstandard one.
Another possible implementation of iterative depth-first search uses a stack of iterators of the list of neighbors of a node, instead of a stack of nodes. This yields the same traversal as recursive DFS.
procedure DFS_iterative("G", "v") is
let "S" be a stack
label "v" as discovered
"S".push(iterator of "G".adjacentEdges("v"))
while "S" is not empty do
if "S".peek().hasNext() then
"w" = "S".peek().next()
if "w" is not labeled as discovered then
label "w" as discovered
"S".push(iterator of "G".adjacentEdges("w"))
else
"S".pop()
Applications.
Algorithms that use depth-first search as a building block include:
Complexity.
The computational complexity of DFS was investigated by John Reif. More precisely, given a graph formula_5, let formula_6 be the ordering computed by the standard recursive DFS algorithm. This ordering is called the lexicographic depth-first search ordering. John Reif considered the complexity of computing the lexicographic depth-first search ordering, given a graph and a source. A decision version of the problem (testing whether some vertex u occurs before some vertex v in this order) is P-complete, meaning that it is "a nightmare for parallel processing".
A depth-first search ordering (not necessarily the lexicographic one), can be computed by a randomized parallel algorithm in the complexity class RNC. As of 1997, it remained unknown whether a depth-first traversal could be constructed by a deterministic parallel algorithm, in the complexity class NC.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "O(|V| + |E|)"
},
{
"math_id": 1,
"text": "O(|V|)"
},
{
"math_id": 2,
"text": "|V|"
},
{
"math_id": 3,
"text": "|E|"
},
{
"math_id": 4,
"text": "O(|E|)"
},
{
"math_id": 5,
"text": "G"
},
{
"math_id": 6,
"text": "O=(v_1,\\dots,v_n)"
}
] | https://en.wikipedia.org/wiki?curid=97034 |
970447 | Chern–Weil homomorphism | In mathematics, the Chern–Weil homomorphism is a basic construction in Chern–Weil theory that computes topological invariants of vector bundles and principal bundles on a smooth manifold "M" in terms of connections and curvature representing classes in the de Rham cohomology rings of "M". That is, the theory forms a bridge between the areas of algebraic topology and differential geometry. It was developed in the late 1940s by Shiing-Shen Chern and André Weil, in the wake of proofs of the generalized Gauss–Bonnet theorem. This theory was an important step in the theory of characteristic classes.
Let "G" be a real or complex Lie group with Lie algebra formula_0, and let formula_1 denote the algebra of formula_2-valued polynomials on formula_0 (exactly the same argument works if we used formula_3 instead of formula_2). Let formula_4 be the subalgebra of fixed points in formula_1 under the adjoint action of "G"; that is, the subalgebra consisting of all polynomials "f" such that formula_5, for all "g" in "G" and "x" in formula_6,
Given a principal G-bundle "P" on "M", there is an associated homomorphism of formula_2-algebras,
formula_7,
called the Chern–Weil homomorphism, where on the right cohomology is de Rham cohomology. This homomorphism is obtained by taking invariant polynomials in the curvature of any connection on the given bundle. If "G" is either compact or semi-simple, then the cohomology ring of the classifying space for "G"-bundles, formula_8, is isomorphic to the algebra formula_9 of invariant polynomials:
formula_10
(The cohomology ring of "BG" can still be given in the de Rham sense:
formula_11
when formula_12 and formula_13 are manifolds.)
Definition of the homomorphism.
Choose any connection form ω in "P", and let Ω be the associated curvature form; i.e., formula_14, the exterior covariant derivative of ω. If formula_15 is a homogeneous polynomial function of degree "k"; i.e., formula_16 for any complex number "a" and "x" in formula_0, then, viewing "f" as a symmetric multilinear functional on formula_17 (see the ring of polynomial functions), let
formula_18
be the (scalar-valued) 2"k"-form on "P" given by
formula_19
where "v""i" are tangent vectors to "P", formula_20 is the sign of the permutation formula_21 in the symmetric group on 2"k" numbers formula_22 (see Lie algebra-valued forms#Operations as well as Pfaffian).
If, moreover, "f" is invariant; i.e., formula_5, then one can show that formula_18 is a closed form, it descends to a unique form on "M" and that the de Rham cohomology class of the form is independent of formula_23. First, that formula_18 is a closed form follows from the next two lemmas:
Lemma 1: The form formula_18 on "P" descends to a (unique) form formula_24 on "M"; i.e., there is a form on "M" that pulls-back to formula_18.
Lemma 2: If a form of formula_25 on "P" descends to a form on "M", then formula_26.
Indeed, Bianchi's second identity says formula_27 and, since "D" is a graded derivation, formula_28 Finally, Lemma 1 says formula_18 satisfies the hypothesis of Lemma 2.
To see Lemma 2, let formula_29 be the projection and "h" be the projection of formula_30 onto the horizontal subspace. Then Lemma 2 is a consequence of the fact that formula_31 (the kernel of formula_32 is precisely the vertical subspace.) As for Lemma 1, first note
formula_33
which is because formula_34 and "f" is invariant. Thus, one can define formula_24 by the formula:
formula_35
where formula_36 are any lifts of formula_37: formula_38.
Next, we show that the de Rham cohomology class of formula_24 on "M" is independent of a choice of connection. Let formula_39 be arbitrary connection forms on "P" and let formula_40 be the projection. Put
formula_41
where "t" is a smooth function on formula_42 given by formula_43. Let formula_44 be the curvature forms of formula_45. Let formula_46 be the inclusions. Then formula_47 is homotopic to formula_48. Thus, formula_49 and formula_50 belong to the same de Rham cohomology class by the homotopy invariance of de Rham cohomology. Finally, by naturality and by uniqueness of descending,
formula_51
and the same for formula_52. Hence, formula_53 belong to the same cohomology class.
The construction thus gives the linear map: (cf. Lemma 1)
formula_54
In fact, one can check that the map thus obtained:
formula_7
is an algebra homomorphism.
Example: Chern classes and Chern character.
Let formula_55 and formula_56 its Lie algebra. For each "x" in formula_6, we can consider its characteristic polynomial in "t":
formula_57
where "i" is the square root of -1. Then formula_58 are invariant polynomials on formula_6, since the left-hand side of the equation is. The "k"-th Chern class of a smooth complex-vector bundle "E" of rank "n" on a manifold "M":
formula_59
is given as the image of formula_58 under the Chern–Weil homomorphism defined by "E" (or more precisely the frame bundle of "E"). If "t" = 1, then formula_60 is an invariant polynomial. The total Chern class of "E" is the image of this polynomial; that is,
formula_61
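As a small special case, added here only for concreteness: when "E" is a complex line bundle (so "n" = 1), the determinant above has a single entry and the sum stops at "k" = 1, so the total Chern class reduces to 1 + "c"1("E"), with "c"1("E") represented by the closed 2-form ("i"/2π)Ω for the curvature Ω of any connection on "E". This recovers the usual normalization of the first Chern class of a line bundle.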
Directly from the definition, one can show that formula_62 and "c" given above satisfy the axioms of Chern classes. For example, for the Whitney sum formula, we consider
formula_63
where we wrote formula_64 for the curvature 2-form on "M" of the vector bundle "E" (so it is the descendent of the curvature form on the frame bundle of "E"). The Chern–Weil homomorphism is the same if one uses this formula_64. Now, suppose "E" is a direct sum of vector bundles formula_65's and formula_66 the curvature form of formula_65 so that, in the matrix term, formula_64 is the block diagonal matrix with Ω"I"'s on the diagonal. Then, since formula_67, we have:
formula_68
where on the right the multiplication is that of a cohomology ring: cup product. For the normalization property, one computes the first Chern class of the complex projective line.
Since formula_69, we also have:
formula_70
Finally, the Chern character of "E" is given by
formula_71
where formula_64 is the curvature form of some connection on "E" (since formula_64 is nilpotent, it is a polynomial in formula_64.) Then ch is a ring homomorphism:
formula_72
Now suppose, in some ring "R" containing the cohomology ring formula_73, there is the factorization of the polynomial in "t":
formula_74
where formula_75 are in "R" (they are sometimes called Chern roots.) Then formula_76.
Example: Pontrjagin classes.
If "E" is a smooth real vector bundle on a manifold "M", then the "k"-th Pontrjagin class of "E" is given as:
formula_77
where we wrote formula_78 for the complexification of "E". Equivalently, it is the image under the Chern–Weil homomorphism of the invariant polynomial formula_79 on formula_80 given by:
formula_81
The homomorphism for holomorphic vector bundles.
Let "E" be a holomorphic (complex-)vector bundle on a complex manifold "M". The curvature form formula_64 of "E", with respect to some hermitian metric, is not just a 2-form, but is in fact a (1, 1)-form (see holomorphic vector bundle#Hermitian metrics on a holomorphic vector bundle). Hence, the Chern–Weil homomorphism assumes the form: with formula_55,
formula_82
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathfrak g"
},
{
"math_id": 1,
"text": "\\Complex[\\mathfrak g]"
},
{
"math_id": 2,
"text": "\\Complex"
},
{
"math_id": 3,
"text": "\\R"
},
{
"math_id": 4,
"text": "\\Complex[\\mathfrak g]^G"
},
{
"math_id": 5,
"text": "f(\\operatorname{Ad}_g x) = f(x)"
},
{
"math_id": 6,
"text": "\\mathfrak{g}"
},
{
"math_id": 7,
"text": "\\Complex[\\mathfrak g]^{G} \\to H^*(M; \\Complex)"
},
{
"math_id": 8,
"text": "BG"
},
{
"math_id": 9,
"text": "\\Complex[\\mathfrak g]^{G}"
},
{
"math_id": 10,
"text": "H^*(BG; \\Complex) \\cong \\Complex[\\mathfrak g]^{G}."
},
{
"math_id": 11,
"text": "H^k(BG; \\Complex) = \\varinjlim \\operatorname{ker} (d\\colon \\Omega^k(B_jG) \\to \\Omega^{k+1}(B_jG))/\\operatorname{im} d."
},
{
"math_id": 12,
"text": "BG = \\varinjlim B_jG"
},
{
"math_id": 13,
"text": "B_jG"
},
{
"math_id": 14,
"text": "\\Omega = D\\omega"
},
{
"math_id": 15,
"text": "f\\in\\mathbb C[\\mathfrak g]^G"
},
{
"math_id": 16,
"text": "f(a x) = a^k f(x)"
},
{
"math_id": 17,
"text": "\\prod_1^k \\mathfrak{g}"
},
{
"math_id": 18,
"text": "f(\\Omega)"
},
{
"math_id": 19,
"text": "f(\\Omega)(v_1,\\dots,v_{2k})=\\frac{1}{(2k)!}\\sum_{\\sigma\\in\\mathfrak S_{2k}}\\epsilon_\\sigma f(\\Omega(v_{\\sigma(1)},v_{\\sigma(2)}),\\dots,\\Omega(v_{\\sigma(2k-1)}, v_{\\sigma(2k)}))"
},
{
"math_id": 20,
"text": "\\epsilon_\\sigma"
},
{
"math_id": 21,
"text": "\\sigma"
},
{
"math_id": 22,
"text": "\\mathfrak S_{2k}"
},
{
"math_id": 23,
"text": "\\omega"
},
{
"math_id": 24,
"text": "\\overline{f}(\\Omega)"
},
{
"math_id": 25,
"text": "\\varphi"
},
{
"math_id": 26,
"text": "d\\varphi = D\\varphi"
},
{
"math_id": 27,
"text": "D \\Omega = 0"
},
{
"math_id": 28,
"text": "D f(\\Omega) = 0."
},
{
"math_id": 29,
"text": "\\pi\\colon P \\to M"
},
{
"math_id": 30,
"text": "T_u P"
},
{
"math_id": 31,
"text": "d \\pi(h v) = d \\pi(v)"
},
{
"math_id": 32,
"text": "d \\pi"
},
{
"math_id": 33,
"text": "f(\\Omega)(d R_g(v_1), \\dots, d R_g(v_{2k})) = f(\\Omega)(v_1, \\dots, v_{2k}), \\, R_g(u) = ug;"
},
{
"math_id": 34,
"text": "R_g^* \\Omega = \\operatorname{Ad}_{g^{-1}} \\Omega"
},
{
"math_id": 35,
"text": "\\overline{f}(\\Omega)(\\overline{v_1}, \\dots, \\overline{v_{2k}}) = f(\\Omega)(v_1, \\dots, v_{2k}),"
},
{
"math_id": 36,
"text": "v_i"
},
{
"math_id": 37,
"text": "\\overline{v_i}"
},
{
"math_id": 38,
"text": "d \\pi(v_i) = \\overline{v}_i"
},
{
"math_id": 39,
"text": "\\omega_0, \\omega_1"
},
{
"math_id": 40,
"text": "p\\colon P \\times \\R \\to P"
},
{
"math_id": 41,
"text": "\\omega' = t \\, p^* \\omega_1 + (1 - t) \\, p^* \\omega_0"
},
{
"math_id": 42,
"text": "P \\times \\mathbb{R}"
},
{
"math_id": 43,
"text": "(x, s) \\mapsto s"
},
{
"math_id": 44,
"text": "\\Omega', \\Omega_0, \\Omega_1"
},
{
"math_id": 45,
"text": "\\omega', \\omega_0, \\omega_1"
},
{
"math_id": 46,
"text": "i_s: M \\to M \\times \\mathbb{R}, \\, x \\mapsto (x, s)"
},
{
"math_id": 47,
"text": "i_0"
},
{
"math_id": 48,
"text": "i_1"
},
{
"math_id": 49,
"text": "i_0^* \\overline{f}(\\Omega')"
},
{
"math_id": 50,
"text": "i_1^* \\overline{f}(\\Omega')"
},
{
"math_id": 51,
"text": "i_0^* \\overline{f}(\\Omega') = \\overline{f}(\\Omega_0)"
},
{
"math_id": 52,
"text": "\\Omega_1"
},
{
"math_id": 53,
"text": "\\overline{f}(\\Omega_0), \\overline{f}(\\Omega_1)"
},
{
"math_id": 54,
"text": "\\Complex[\\mathfrak g]^{G}_k \\to H^{2k}(M; \\Complex), \\, f \\mapsto \\left[\\overline{f}(\\Omega)\\right]."
},
{
"math_id": 55,
"text": "G = \\operatorname{GL}_n(\\Complex)"
},
{
"math_id": 56,
"text": "\\mathfrak{g} = \\mathfrak{gl}_n(\\Complex)"
},
{
"math_id": 57,
"text": "\\det \\left( I - t{x \\over 2 \\pi i} \\right) = \\sum_{k=0}^n f_k(x) t^k,"
},
{
"math_id": 58,
"text": "f_k"
},
{
"math_id": 59,
"text": "c_k(E) \\in H^{2k}(M, \\Z)"
},
{
"math_id": 60,
"text": "\\det \\left(I - {x \\over 2 \\pi i} \\right) = 1 + f_1(x) + \\cdots + f_n(x)"
},
{
"math_id": 61,
"text": "c(E) = 1 + c_1(E) + \\cdots + c_n(E)."
},
{
"math_id": 62,
"text": "c_j"
},
{
"math_id": 63,
"text": "c_t(E) = [\\det \\left( I - t {\\Omega / 2 \\pi i} \\right)],"
},
{
"math_id": 64,
"text": "\\Omega"
},
{
"math_id": 65,
"text": "E_i"
},
{
"math_id": 66,
"text": "\\Omega_i"
},
{
"math_id": 67,
"text": "\\det(I - t\\frac\\Omega{2\\pi i}) = \\det(I - t\\frac{\\Omega_1}{2\\pi i}) \\wedge \\dots \\wedge \\det(I - t\\frac{\\Omega_m}{2\\pi i})"
},
{
"math_id": 68,
"text": "c_t(E) = c_t(E_1) \\cdots c_t(E_m)"
},
{
"math_id": 69,
"text": "\\Omega_{E \\otimes E'} = \\Omega_E \\otimes I_{E'} + I_{E} \\otimes \\Omega_{E'}"
},
{
"math_id": 70,
"text": "c_1(E \\otimes E') = c_1(E) \\operatorname{rank} (E') + \\operatorname{rank}(E) c_1(E')."
},
{
"math_id": 71,
"text": "\\operatorname{ch}(E) = [\\operatorname{tr}(e^{-\\Omega/2\\pi i})] \\in H^*(M, \\Q)"
},
{
"math_id": 72,
"text": "\\operatorname{ch}(E \\oplus F) = \\operatorname{ch}(E) + \\operatorname{ch}(F), \\, \\operatorname{ch}(E \\otimes F) = \\operatorname{ch}(E) \\operatorname{ch}(F)."
},
{
"math_id": 73,
"text": "H^*(M, \\Complex)"
},
{
"math_id": 74,
"text": "c_t(E) = \\prod_{j=0}^n (1 + \\lambda_j t)"
},
{
"math_id": 75,
"text": "\\lambda_j"
},
{
"math_id": 76,
"text": "\\operatorname{ch}(E) = e^{\\lambda_j}"
},
{
"math_id": 77,
"text": "p_k(E) = (-1)^k c_{2k}(E \\otimes \\Complex) \\in H^{4k}(M; \\Z)"
},
{
"math_id": 78,
"text": "E \\otimes \\Complex"
},
{
"math_id": 79,
"text": "g_{2k}"
},
{
"math_id": 80,
"text": "\\mathfrak{gl}_n(\\R)"
},
{
"math_id": 81,
"text": "\\operatorname{det}\\left(I - t {x \\over 2 \\pi}\\right) = \\sum_{k = 0}^n g_k(x) t^k."
},
{
"math_id": 82,
"text": "\\Complex[\\mathfrak{g}]_k \\to H^{k, k}(M; \\Complex), f \\mapsto [f(\\Omega)]."
}
] | https://en.wikipedia.org/wiki?curid=970447 |
9704909 | IQC | IQC may refer to:
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title IQC.
{
"math_id": 0,
"text": "{\\mathsf {IPC}}"
}
] | https://en.wikipedia.org/wiki?curid=9704909 |
970650 | Specific absorption rate | Measure of absorption of energy by humans
Specific absorption rate (SAR) is a measure of the rate at which energy is absorbed per unit mass by a human body when exposed to a radio frequency (RF) electromagnetic field. It is defined as the power absorbed per mass of tissue and has units of watts per kilogram (W/kg).
SAR is usually averaged either over the whole body, or over a small sample volume (typically 1 g or 10 g of tissue). The value cited is then the maximum level measured in the body part studied over the stated volume or mass.
Calculation.
SAR for electromagnetic energy can be calculated from the electric field within the tissue as
formula_0
where
formula_1 is the sample electrical conductivity,
formula_2 is the RMS electric field,
formula_3 is the sample density,
formula_4 is the volume of the sample.
SAR measures exposure to fields between 100 kHz and 10 GHz (known as radio waves). It is commonly used to measure power absorbed from mobile phones and during MRI scans. The value depends heavily on the geometry of the part of the body that is exposed to the RF energy and on the exact location and geometry of the RF source. Thus tests must be made with each specific source, such as a mobile-phone model and at the intended position of use.
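As a rough numerical illustration of the defining integral (a hypothetical sketch, not a regulatory calculation), the following Python code approximates SAR by summing over voxels in which the conductivity, RMS electric field, and density have been sampled; all numbers are made-up placeholders.

# Discretized SAR estimate: SAR = (1/V) * sum over voxels of sigma*|E|^2/rho * dV.
# The sample values are illustrative placeholders, not measured data.
voxel_volume = 1e-9                    # m^3, i.e. 1 mm^3 voxels (assumed)
samples = [
    # (conductivity S/m, RMS E-field V/m, density kg/m^3)
    (0.8, 30.0, 1000.0),
    (0.9, 25.0, 1040.0),
    (0.7, 35.0, 990.0),
]

total_volume = voxel_volume * len(samples)
sar = (1.0 / total_volume) * sum(
    sigma * e_rms ** 2 / rho * voxel_volume
    for sigma, e_rms, rho in samples
)
print(f"estimated SAR = {sar:.3f} W/kg")   # about 0.71 W/kg for these placeholder values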
Mobile phone SAR testing.
When measuring the SAR due to a mobile phone, the phone is placed against a representation of a human head (a "SAR Phantom") in a talk position. The SAR value is then measured at the location that has the highest absorption rate in the entire head, which in the case of a mobile phone is often as close to the phone's antenna as possible. Measurements are made for different positions on both sides of the head and at different frequencies representing the frequency bands at which the device can transmit. Depending on the size and capabilities of the phone, additional testing may also be required to represent usage of the device while placed close to the user's body and/or extremities. Various governments have defined maximum SAR levels for RF energy emitted by mobile devices: in the United States, for example, the FCC requires that phones sold have a SAR level at or below 1.6 W/kg averaged over 1 gram of tissue, while the limit used in the European Union, following IEC standards based on ICNIRP guidelines, is 2 W/kg averaged over 10 grams of tissue.
SAR values are heavily dependent on the size of the averaging volume. Without information about the averaging volume used, comparisons between different measurements cannot be made. Thus, the European 10-gram ratings should be compared among themselves, and the American 1-gram ratings should only be compared among themselves.
To check SAR on your mobile phone, review the documentation provided with the phone, dial *#07# (only works on some models) or visit the manufacturer's website.
MRI scanner SAR testing.
For magnetic resonance imaging the limits (described in IEC 60601-2-33) are slightly more complicated:
Note: Averaging time of 6 minutes.
(a) Local SAR is determined over the mass of 10 g.
(b) The limit scales dynamically with the ratio "exposed patient mass / patient mass":
Normal operating mode: Partial body SAR = 10 W/kg − (8 W/kg × exposed patient mass / patient mass).
1st level controlled: Partial body SAR = 10 W/kg − (6 W/kg × exposed patient mass / patient mass).
(c) In cases where the orbit is in the field of a small local RF transmit coil, care should be taken to ensure that the temperature rise is limited to 1 °C.
Criticism.
SAR limits set by law do not consider that the human body is particularly sensitive to the power peaks or frequencies responsible for the microwave hearing effect. Frey reports that the microwave hearing effect occurs with average power density exposures of 400 μW/cm2, well below SAR limits (as set by government regulations).
Notes:
In comparison to the short term, relatively intensive exposures described above, for long-term environmental exposure of the general public there is a limit of 0.08 W/kg averaged over the whole body. A whole-body average SAR of 0.4 W/kg has been chosen as the restriction that provides adequate protection for occupational exposure. An additional safety factor of 5 is introduced for exposure of the public, giving an average whole-body SAR limit of 0.08 W/kg.
FCC advice.
The FCC guide "Specific Absorption Rate (SAR) For Cell Phones: What It Means For You", after detailing the limitations of SAR values, offers the following "bottom line" editorial:
<templatestyles src="Template:Blockquote/styles.css" />ALL cell phones must meet the FCC’s RF exposure standard, which is set at a level well below that at which laboratory testing indicates, and medical and biological experts generally agree, adverse health effects could occur. For users who are concerned with the adequacy of this standard or who otherwise wish to further reduce their exposure, the most effective means to reduce exposure are to hold the cell phone away from the head or body and to use a speakerphone or hands-free accessory. These measures will generally have much more impact on RF energy absorption than the small difference in SAR between individual cell phones, which, in any event, is an unreliable comparison of RF exposure to consumers, given the variables of individual use.
MSBE (minimum SAR with biological effect).
In order to find out possible advantages and the interaction mechanisms of electromagnetic fields (EMF), studying the minimum SAR (or intensity) that could have a biological effect (MSBE) would be much more valuable than studying high-intensity fields. Such studies can possibly shed light on thresholds of non-ionizing radiation effects and cell capabilities (e.g., oxidative response). In addition, lowering the exposure power is likely to reduce the complexity of the EMF interaction targets in cell cultures, since it at least reduces the overall rise in temperature. This parameter might differ regarding the case under study and depends on the physical and biological conditions of the exposed target.
FCC regulations.
The FCC regulations for SAR are contained in 47 C.F.R. 1.1307(b), 1.1310, 2.1091, 2.1093 and also discussed in OET Bulletin No. 56, "Questions and Answers About the Biological Effects and Potential Hazards of Radiofrequency Electromagnetic Fields."
European regulations.
Specific energy absorption rate (SAR) averaged over the whole body or over parts of the body, is defined as the rate at which energy is absorbed per unit mass of body tissue and is expressed in watts per kilogram (W/kg).
Whole body SAR is a widely accepted measure for relating adverse thermal effects to RF exposure.
Legislative acts in the European Union include directive 2013/35/EU of the European Parliament and of the Council of 26 June 2013 on the minimum health and safety requirements regarding the exposure of workers to the risks arising from physical agents (electromagnetic fields) (20th individual Directive within the meaning of Article 16(1) of Directive 89/391/EEC) and repealing Directive 2004/40/EC) in its annex III "THERMAL EFFECTS" for "EXPOSURE LIMIT VALUES AND ACTION LEVELS IN THE FREQUENCY RANGE FROM 100 kHz TO 300 GHz".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{SAR} = \\frac{1}{V}\\int_\\text{sample} \\frac{\\sigma(\\mathbf{r}) |\\mathbf{E}(\\mathbf{r})|^2}{\\rho(\\mathbf{r})} \\,d\\mathbf{r},"
},
{
"math_id": 1,
"text": "\\sigma"
},
{
"math_id": 2,
"text": "E"
},
{
"math_id": 3,
"text": "\\rho"
},
{
"math_id": 4,
"text": "V"
}
] | https://en.wikipedia.org/wiki?curid=970650 |
9707 | Electronegativity | Tendency of an atom to attract a shared pair of electrons
Electronegativity, symbolized as "χ", is the tendency for an atom of a given chemical element to attract shared electrons (or electron density) when forming a chemical bond. An atom's electronegativity is affected by both its atomic number and the distance at which its valence electrons reside from the charged nucleus. The higher the associated electronegativity, the more an atom or a substituent group attracts electrons. Electronegativity serves as a simple way to quantitatively estimate the bond energy, and the sign and magnitude of a bond's chemical polarity, which characterizes a bond along the continuous scale from covalent to ionic bonding. The loosely defined term electropositivity is the opposite of electronegativity: it characterizes an element's tendency to donate valence electrons.
On the most basic level, electronegativity is determined by factors like the nuclear charge (the more protons an atom has, the more "pull" it will have on electrons) and the number and location of other electrons in the atomic shells (the more electrons an atom has, the farther from the nucleus the valence electrons will be, and as a result, the less positive charge they will experience—both because of their increased distance from the nucleus and because the other electrons in the lower energy core orbitals will act to shield the valence electrons from the positively charged nucleus).
The term "electronegativity" was introduced by Jöns Jacob Berzelius in 1811,
though the concept was known before that and was studied by many chemists including Avogadro.
In spite of its long history, an accurate scale of electronegativity was not developed until 1932, when Linus Pauling proposed an electronegativity scale which depends on bond energies, as a development of valence bond theory. It has been shown to correlate with a number of other chemical properties. Electronegativity cannot be directly measured and must be calculated from other atomic or molecular properties. Several methods of calculation have been proposed, and although there may be small differences in the numerical values of the electronegativity, all methods show the same periodic trends between elements.
The most commonly used method of calculation is that originally proposed by Linus Pauling. This gives a dimensionless quantity, commonly referred to as the Pauling scale ("χ"r), on a relative scale running from 0.79 to 3.98 (hydrogen = 2.20). When other methods of calculation are used, it is conventional (although not obligatory) to quote the results on a scale that covers the same range of numerical values: this is known as an electronegativity in "Pauling units".
As it is usually calculated, electronegativity is not a property of an atom alone, but rather a property of an atom in a molecule. Even so, the electronegativity of an atom is strongly correlated with the first ionization energy. The electronegativity is slightly negatively correlated (for smaller electronegativity values) and rather strongly positively correlated (for most and larger electronegativity values) with the electron affinity. It is to be expected that the electronegativity of an element will vary with its chemical environment, but it is usually considered to be a transferable property, that is to say that similar values will be valid in a variety of situations.
Caesium is the least electronegative element (0.79); fluorine is the most (3.98).
Methods of calculation.
Pauling electronegativity.
Pauling first proposed the concept of electronegativity in 1932 to explain why the covalent bond between two different atoms (A–B) is stronger than the average of the A–A and the B–B bonds. According to valence bond theory, of which Pauling was a notable proponent, this "additional stabilization" of the heteronuclear bond is due to the contribution of ionic canonical forms to the bonding.
The difference in electronegativity between atoms A and B is given by:
formula_0
where the dissociation energies, "E"d, of the A–B, A–A and B–B bonds are expressed in electronvolts, the factor (eV)−1⁄2 being included to ensure a dimensionless result. Hence, the difference in Pauling electronegativity between hydrogen and bromine is 0.73 (dissociation energies: H–Br, 3.79 eV; H–H, 4.52 eV; Br–Br, 2.00 eV).
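The worked example above can be checked with a few lines of Python (the dissociation energies are the ones quoted in the text):

from math import sqrt

# Bond dissociation energies in eV, as quoted above.
E_HBr, E_HH, E_BrBr = 3.79, 4.52, 2.00

# |chi_H - chi_Br| according to Pauling's definition (dimensionless).
delta_chi = sqrt(E_HBr - (E_HH + E_BrBr) / 2)
print(round(delta_chi, 2))   # 0.73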
As only differences in electronegativity are defined, it is necessary to choose an arbitrary reference point in order to construct a scale. Hydrogen was chosen as the reference, as it forms covalent bonds with a large variety of elements: its electronegativity was fixed first at 2.1, later revised to 2.20. It is also necessary to decide which of the two elements is the more electronegative (equivalent to choosing one of the two possible signs for the square root). This is usually done using "chemical intuition": in the above example, hydrogen bromide dissolves in water to form H+ and Br− ions, so it may be assumed that bromine is more electronegative than hydrogen. However, in principle, since the same electronegativities should be obtained for any two bonding compounds, the data are in fact overdetermined, and the signs are unique once a reference point has been fixed (usually, for H or F).
To calculate Pauling electronegativity for an element, it is necessary to have data on the dissociation energies of at least two types of covalent bonds formed by that element. A. L. Allred updated Pauling's original values in 1961 to take account of the greater availability of thermodynamic data, and it is these "revised Pauling" values of the electronegativity that are most often used.
The essential point of Pauling electronegativity is that there is an underlying, quite accurate, semi-empirical formula for dissociation energies, namely:
formula_1
or sometimes, a more accurate fit
formula_2
These are approximate equations but they hold with good accuracy. Pauling obtained the first equation by noting that a bond can be approximately represented as a quantum mechanical superposition of a covalent bond and two ionic bond-states. The covalent energy of a bond is approximate, by quantum mechanical calculations, the geometric mean of the two energies of covalent bonds of the same molecules, and there is additional energy that comes from ionic factors, i.e. polar character of the bond.
The geometric mean is approximately equal to the arithmetic mean—which is applied in the first formula above—when the energies are of a similar value, e.g., except for the highly electropositive elements, where there is a larger difference of two dissociation energies; the geometric mean is more accurate and almost always gives positive excess energy, due to ionic bonding. The square root of this excess energy, Pauling notes, is approximately additive, and hence one can introduce the electronegativity. Thus, it is these semi-empirical formulas for bond energy that underlie the concept of Pauling electronegativity.
The formulas are approximate, but this rough approximation is in fact relatively good and gives the right intuition, with the notion of the polarity of the bond and some theoretical grounding in quantum mechanics. The electronegativities are then determined to best fit the data.
In more complex compounds, there is an additional error since electronegativity depends on the molecular environment of an atom. Also, the energy estimate can be only used for single, not for multiple bonds. The enthalpy of formation of a molecule containing only single bonds can subsequently be estimated based on an electronegativity table, and it depends on the constituents and the sum of squares of differences of electronegativities of all pairs of bonded atoms. Such a formula for estimating energy typically has a relative error on the order of 10% but can be used to get a rough qualitative idea and understanding of a molecule.
Periodic table of electronegativity by Pauling scale (table omitted): across each period, atomic radius decreases while ionization energy and electronegativity increase. See also: Electronegativities of the elements (data page). There are no reliable sources for Pm, Eu and Yb other than the range of 1.1–1.2; see Pauling, Linus (1960). "The Nature of the Chemical Bond", 3rd ed., Cornell University Press, p. 93.
Mulliken electronegativity.
Robert S. Mulliken proposed that the arithmetic mean of the first ionization energy (Ei) and the electron affinity (Eea) should be a measure of the tendency of an atom to attract electrons:
formula_3
As this definition is not dependent on an arbitrary relative scale, it has also been termed absolute electronegativity, with the units of kilojoules per mole or electronvolts. However, it is more usual to use a linear transformation to transform these absolute values into values that resemble the more familiar Pauling values. For ionization energies and electron affinities in electronvolts,
formula_4
and for energies in kilojoules per mole,
formula_5
The Mulliken electronegativity can only be calculated for an element whose electron affinity is known. Measured values are available for 72 elements, while approximate values have been estimated or calculated for the remaining elements.
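For illustration, the linear transformation to Pauling-like units can be applied to fluorine; the ionization energy and electron affinity used below (about 17.42 eV and 3.40 eV) are approximate literature values quoted here only as an example.

# Mulliken electronegativity of fluorine on a Pauling-like scale.
# The input values (in eV) are approximate and used only for illustration.
E_i, E_ea = 17.42, 3.40              # first ionization energy, electron affinity

chi_absolute = (E_i + E_ea) / 2      # absolute (Mulliken) electronegativity, in eV
chi_pauling_like = 0.187 * (E_i + E_ea) + 0.17

print(round(chi_absolute, 2))        # 10.41 eV
print(round(chi_pauling_like, 2))    # about 4.06, close to fluorine's Pauling value of 3.98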
The Mulliken electronegativity of an atom is sometimes said to be the negative of the chemical potential. By inserting the energetic definitions of the ionization potential and electron affinity into the Mulliken electronegativity, it is possible to show that the Mulliken chemical potential is a finite difference approximation of the derivative of the electronic energy with respect to the number of electrons, i.e.,
formula_6
Allred–Rochow electronegativity.
A. Louis Allred and Eugene G. Rochow considered that electronegativity should be related to the charge experienced by an electron on the "surface" of an atom: The higher the charge per unit area of atomic surface the greater the tendency of that atom to attract electrons. The effective nuclear charge, "Z"eff, experienced by valence electrons can be estimated using Slater's rules, while the surface area of an atom in a molecule can be taken to be proportional to the square of the covalent radius, "r"cov. When "r"cov is expressed in picometres,
formula_7
Sanderson electronegativity equalization.
R.T. Sanderson has also noted the relationship between Mulliken electronegativity and atomic size, and has proposed a method of calculation based on the reciprocal of the atomic volume. With a knowledge of bond lengths, Sanderson's model allows the estimation of bond energies in a wide range of compounds. Sanderson's model has also been used to calculate molecular geometry, "s"-electron energy, NMR spin-spin coupling constants and other parameters for organic compounds. This work underlies the concept of electronegativity equalization, which suggests that electrons distribute themselves around a molecule to minimize or to equalize the Mulliken electronegativity. This behavior is analogous to the equalization of chemical potential in macroscopic thermodynamics.
Allen electronegativity.
Perhaps the simplest definition of electronegativity is that of Leland C. Allen, who has proposed that it is related to the average energy of the valence electrons in a free atom,
formula_8
where "ε"s,p are the one-electron energies of s- and p-electrons in the free atom and "n"s,p are the number of s- and p-electrons in the valence shell. It is usual to apply a scaling factor, 1.75×10−3 for energies expressed in kilojoules per mole or 0.169 for energies measured in electronvolts, to give values that are numerically similar to Pauling electronegativities.
The one-electron energies can be determined directly from spectroscopic data, and so electronegativities calculated by this method are sometimes referred to as spectroscopic electronegativities. The necessary data are available for almost all elements, and this method allows the estimation of electronegativities for elements that cannot be treated by the other methods, e.g. francium, which has an Allen electronegativity of 0.67. However, it is not clear what should be considered to be valence electrons for the d- and f-block elements, which leads to an ambiguity for their electronegativities calculated by the Allen method.
On this scale, neon has the highest electronegativity of all elements, followed by fluorine, helium, and oxygen.
Correlation of electronegativity with other properties.
The wide variety of methods of calculation of electronegativities, which all give results that correlate well with one another, is one indication of the number of chemical properties that might be affected by electronegativity. The most obvious application of electronegativities is in the discussion of bond polarity, for which the concept was introduced by Pauling. In general, the greater the difference in electronegativity between two atoms the more polar the bond that will be formed between them, with the atom having the higher electronegativity being at the negative end of the dipole. Pauling proposed an equation to relate the "ionic character" of a bond to the difference in electronegativity of the two atoms, although this has fallen somewhat into disuse.
Several correlations have been shown between infrared stretching frequencies of certain bonds and the electronegativities of the atoms involved: however, this is not surprising as such stretching frequencies depend in part on bond strength, which enters into the calculation of Pauling electronegativities. More convincing are the correlations between electronegativity and chemical shifts in NMR spectroscopy or isomer shifts in Mössbauer spectroscopy (see figure). Both these measurements depend on the s-electron density at the nucleus, and so are a good indication that the different measures of electronegativity really are describing "the ability of an atom in a molecule to attract electrons to itself".
Trends in electronegativity.
Periodic trends.
In general, electronegativity increases on passing from left to right along a period and decreases on descending a group. Hence, fluorine is the most electronegative of the elements (not counting noble gases), whereas caesium is the least electronegative, at least of those elements for which substantial data is available. This would lead one to believe that caesium fluoride is the compound whose bonding features the most ionic character.
There are some exceptions to this general rule. Gallium and germanium have higher electronegativities than aluminium and silicon, respectively, because of the d-block contraction. Elements of the fourth period immediately after the first row of the transition metals have unusually small atomic radii because the 3d-electrons are not effective at shielding the increased nuclear charge, and smaller atomic size correlates with higher electronegativity (see Allred-Rochow electronegativity and Sanderson electronegativity above). The anomalously high electronegativity of lead, in particular when compared to thallium and bismuth, is an artifact of electronegativity varying with oxidation state: its electronegativity conforms better to trends if it is quoted for the +2 state with a Pauling value of 1.87 instead of the +4 state.
Variation of electronegativity with oxidation number.
In inorganic chemistry, it is common to consider a single value of electronegativity to be valid for most "normal" situations. While this approach has the advantage of simplicity, it is clear that the electronegativity of an element is "not" an invariable atomic property and, in particular, increases with the oxidation state of the element.
Allred used the Pauling method to calculate separate electronegativities for different oxidation states of the handful of elements (including tin and lead) for which sufficient data were available. However, for most elements, there are not enough different covalent compounds for which bond dissociation energies are known to make this approach feasible. This is particularly true of the transition elements, where quoted electronegativity values are usually, of necessity, averages over several different oxidation states and where trends in electronegativity are harder to see as a result.
The chemical effects of this increase in electronegativity can be seen both in the structures of oxides and halides and in the acidity of oxides and oxoacids. Hence CrO3 and Mn2O7 are acidic oxides with low melting points, while Cr2O3 is amphoteric and Mn2O3 is a completely basic oxide.
The effect can also be clearly seen in the dissociation constants p"K"a of the oxoacids of chlorine. The effect is much larger than could be explained by the negative charge being shared among a larger number of oxygen atoms, which would lead to a difference in p"K"a of log10(1⁄4) = –0.6 between hypochlorous acid and perchloric acid. As the oxidation state of the central chlorine atom increases, more electron density is drawn from the oxygen atoms onto the chlorine, diminishing the partial negative charge of individual oxygen atoms. At the same time, the positive partial charge on the hydrogen increases with a higher oxidation state. This explains the observed increased acidity with an increasing oxidation state in the oxoacids of chlorine.
Electronegativity and hybridization scheme.
The electronegativity of an atom changes depending on the hybridization of the orbital employed in bonding. Electrons in s orbitals are held more tightly than electrons in p orbitals. Hence, a bond to an atom that employs an sp"x" hybrid orbital for bonding will be more heavily polarized to that atom when the hybrid orbital has more s character. That is, when electronegativities are compared for different hybridization schemes of a given element, the order χ(sp3) < χ(sp2) < χ(sp) holds (the trend should apply to non-integer hybridization indices as well). While this holds true in principle for any main-group element, values for the hybridization-specific electronegativity are most frequently cited for carbon. In organic chemistry, these electronegativities are frequently invoked to predict or rationalize bond polarities in organic compounds containing double and triple bonds to carbon.
Group electronegativity.
In organic chemistry, electronegativity is associated more with different functional groups than with individual atoms. The terms group electronegativity and substituent electronegativity are used synonymously. However, it is common to distinguish between the inductive effect and the resonance effect, which might be described as σ- and π-electronegativities, respectively. There are a number of linear free-energy relationships that have been used to quantify these effects, of which the Hammett equation is the best known. Kabachnik parameters are group electronegativities for use in organophosphorus chemistry.
Electropositivity.
Electropositivity is a measure of an element's ability to donate electrons, and therefore form positive ions; thus, it is antipode to electronegativity.
Mainly, this is an attribute of metals, meaning that, in general, the greater the metallic character of an element the greater the electropositivity. Therefore, the alkali metals are the most electropositive of all. This is because they have a single electron in their outer shell and, as this is relatively far from the nucleus of the atom, it is easily lost; in other words, these metals have low ionization energies.
While electronegativity increases along periods in the periodic table, and decreases down groups, electropositivity "decreases" along periods (from left to right) and "increases" down groups. This means that elements in the upper right of the periodic table of elements (oxygen, sulfur, chlorine, etc.) will have the greatest electronegativity, and those in the lower-left (rubidium, caesium, and francium) the greatest electropositivity.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "|\\chi_{\\rm A} - \\chi_{\\rm B}| = ({\\rm eV})^{-1/2} \\sqrt{E_{\\rm d}({\\rm AB}) - \\frac{E_{\\rm d}({\\rm AA}) + E_{\\rm d}({\\rm BB})} 2}"
},
{
"math_id": 1,
"text": "E_{\\rm d}({\\rm AB}) = \\frac{E_{\\rm d}({\\rm AA}) + E_{\\rm d}({\\rm BB})} 2 + (\\chi_{\\rm A} - \\chi_{\\rm B})^2 {\\rm eV}"
},
{
"math_id": 2,
"text": "E_{\\rm d}({\\rm AB}) =\\sqrt{E_{\\rm d}({\\rm AA}) E_{\\rm d}({\\rm BB})}+1.3(\\chi_{\\rm A} - \\chi_{\\rm B})^2 {\\rm eV}"
},
{
"math_id": 3,
"text": "\\chi = \\frac{E_{\\rm i} + E_{\\rm ea}} 2 "
},
{
"math_id": 4,
"text": "\\chi = 0.187(E_{\\rm i} + E_{\\rm ea}) + 0.17 \\,"
},
{
"math_id": 5,
"text": "\\chi = (1.97\\times 10^{-3})(E_{\\rm i} + E_{\\rm ea}) + 0.19."
},
{
"math_id": 6,
"text": "\\mu(\\rm Mulliken) = -\\chi(\\rm Mulliken) = {}-\\frac{E_{\\rm i} + E_{\\rm ea}} 2 "
},
{
"math_id": 7,
"text": "\\chi = 3590{{Z_{\\rm eff}}\\over{r^2_{\\rm cov}}} + 0.744"
},
{
"math_id": 8,
"text": "\\chi = {n_{\\rm s}\\varepsilon_{\\rm s} + n_{\\rm p}\\varepsilon_{\\rm p} \\over n_{\\rm s} + n_{\\rm p}}"
}
] | https://en.wikipedia.org/wiki?curid=9707 |
9710 | Elementary algebra | Basic concepts of algebra
formula_2
The quadratic formula, which is the solution to the quadratic equation formula_0 where formula_1. Here the symbols a, b, and c represent arbitrary numbers, and x is a variable which represents the solution of the equation.
Elementary algebra, also known as college algebra, encompasses the basic concepts of algebra. It is often contrasted with arithmetic: arithmetic deals with specified numbers, whilst algebra introduces variables (quantities without fixed values).
This use of variables entails use of algebraic notation and an understanding of the general rules of the operations introduced in arithmetic: addition, subtraction, multiplication, division, etc. Unlike abstract algebra, elementary algebra is not concerned with algebraic structures outside the realm of real and complex numbers.
It is typically taught to secondary school students and at introductory college level in the United States, and builds on their understanding of arithmetic. The use of variables to denote quantities allows general relationships between quantities to be formally and concisely expressed, and thus enables solving a broader scope of problems. Many quantitative relationships in science and mathematics are expressed as algebraic equations.
Algebraic notation.
Algebraic notation describes the rules and conventions for writing mathematical expressions, as well as the terminology used for talking about parts of expressions. For example, the expression formula_3 has the following components:
A "coefficient" is a numerical value, or letter representing a numerical constant, that multiplies a variable (the operator is omitted). A "term" is an addend or a summand, a group of coefficients, variables, constants and exponents that may be separated from the other terms by the plus and minus operators. Letters represent variables and constants. By convention, letters at the beginning of the alphabet (e.g. formula_4) are typically used to represent constants, and those toward the end of the alphabet (e.g. formula_5 and z) are used to represent variables. They are usually printed in italics.
Algebraic operations work in the same way as arithmetic operations, such as addition, subtraction, multiplication, division and exponentiation, and are applied to algebraic variables and terms. Multiplication symbols are usually omitted, and implied when there is no space between two variables or terms, or when a coefficient is used. For example, formula_6 is written as formula_7, and formula_8 may be written formula_9.
Usually terms with the highest power (exponent), are written on the left, for example, formula_10 is written to the left of x. When a coefficient is one, it is usually omitted (e.g. formula_11 is written formula_10). Likewise when the exponent (power) is one, (e.g. formula_12 is written formula_13). When the exponent is zero, the result is always 1 (e.g. formula_14 is always rewritten to 1). However formula_15, being undefined, should not appear in an expression, and care should be taken in simplifying expressions in which variables may appear in exponents.
Alternative notation.
Other types of notation are used in algebraic expressions when the required formatting is not available, or can not be implied, such as where only letters and symbols are available. As an illustration of this, while exponents are usually formatted using superscripts, e.g., formula_10, in plain text, and in the TeX mark-up language, the caret symbol ^ represents exponentiation, so formula_10 is written as "x^2". This also applies to some programming languages such as Lua. In programming languages such as Ada, Fortran, Perl, Python and Ruby, a double asterisk is used, so formula_10 is written as "x**2". Many programming languages and calculators use a single asterisk to represent the multiplication symbol, and it must be explicitly used, for example, formula_13 is written "3*x".
Concepts.
Variables.
Elementary algebra builds on and extends arithmetic by introducing letters called variables to represent general (non-specified) numbers. This is useful for several reasons.
Simplifying expressions.
Algebraic expressions may be evaluated and simplified, based on the basic properties of arithmetic operations (addition, subtraction, multiplication, division and exponentiation). For example,
Equations.
An equation states that two expressions are equal using the symbol for equality, = (the equals sign). One of the best-known equations describes Pythagoras' law relating the length of the sides of a right angle triangle:
formula_35
This equation states that formula_36, representing the square of the length of the side that is the hypotenuse, the side opposite the right angle, is equal to the sum (addition) of the squares of the other two sides whose lengths are represented by a and b.
An equation is the claim that two expressions have the same value and are equal. Some equations are true for all values of the involved variables (such as formula_37); such equations are called identities. Conditional equations are true for only some values of the involved variables, e.g. formula_38 is true only for formula_39 and formula_40. The values of the variables which make the equation true are the solutions of the equation and can be found through equation solving.
Another type of equation is inequality. Inequalities are used to show that one side of the equation is greater, or less, than the other. The symbols used for this are: formula_41 where formula_42 represents 'greater than', and formula_43 where formula_44 represents 'less than'. Just like standard equality equations, numbers can be added, subtracted, multiplied or divided. The only exception is that when multiplying or dividing by a negative number, the inequality symbol must be flipped.
Properties of equality.
By definition, equality is an equivalence relation, meaning it is reflexive (i.e. formula_45), symmetric (i.e. if formula_46 then formula_47), and transitive (i.e. if formula_46 and formula_48 then formula_49). It also satisfies the important property that if two symbols are used for equal things, then one symbol can be substituted for the other in any true statement about the first and the statement will remain true. This implies the following properties:
Properties of inequality.
The relations "less than" formula_44 and greater than formula_42 have the property of transitivity:
By reversing the inequation, formula_44 and formula_42 can be swapped, for example:
Substitution.
Substitution is replacing the terms in an expression to create a new expression. Substituting 3 for a in the expression "a"*5 makes a new expression 3*5 with meaning 15. Substituting the terms of a statement makes a new statement. When the original statement is true independently of the values of the terms, the statement created by substitutions is also true. Hence, definitions can be made in symbolic terms and interpreted through substitution: if formula_67 is meant as the definition of formula_68 as the product of a with itself, substituting 3 for a informs the reader of this statement that formula_69 means 3 × 3 = 9. Often it's not known whether the statement is true independently of the values of the terms. And, substitution allows one to derive restrictions on the possible values, or show what conditions the statement holds under. For example, taking the statement "x" + 1 = 0, if x is substituted with 1, this implies 1 + 1 = 2 = 0, which is false, which implies that if "x" + 1 = 0 then x cannot be 1.
If "x" and "y" are integers, rationals, or real numbers, then "xy" = 0 implies "x" = 0 or "y" = 0. Consider "abc" = 0. Then, substituting "a" for "x" and "bc" for "y", we learn "a" = 0 or "bc" = 0. Then we can substitute again, letting "x" = "b" and "y" = "c", to show that if "bc" = 0 then "b" = 0 or "c" = 0. Therefore, if "abc" = 0, then "a" = 0 or ("b" = 0 or "c" = 0), so "abc" = 0 implies "a" = 0 or "b" = 0 or "c" = 0.
If the original fact were stated as ""ab" = 0 implies "a" = 0 or "b" = 0", then when saying "consider "abc" = 0," we would have a conflict of terms when substituting. Yet the above logic is still valid to show that if "abc" = 0 then "a" = 0 or "b" = 0 or "c" = 0 if, instead of letting "a" = "a" and "b" = "bc", one substitutes "a" for "a" and "b" for "bc" (and with "bc" = 0, substituting "b" for "a" and "c" for "b"). This shows that substituting for the terms in a statement is not always the same as letting the terms from the statement equal the substituted terms. In this situation it is clear that if we substitute an expression "a" into the "a" term of the original equation, the "a" substituted does not refer to the "a" in the statement ""ab" = 0 implies "a" = 0 or "b" = 0."
Solving algebraic equations.
The following sections lay out examples of some of the types of algebraic equations that may be encountered.
Linear equations with one variable.
Linear equations are so called because, when they are plotted, they describe a straight line. The simplest equations to solve are linear equations that have only one variable. They contain only constant numbers and a single variable without an exponent. As an example, consider:
Problem in words: If you double the age of a child and add 4, the resulting answer is 12. How old is the child?
Equivalent equation: formula_70 where x represents the child's age
To solve this kind of equation, the technique is to add, subtract, multiply, or divide both sides of the equation by the same number in order to isolate the variable on one side of the equation. Once the variable is isolated, the other side of the equation is the value of the variable. This problem and its solution are as follows:
In words: the child is 4 years old.
The general form of a linear equation with one variable can be written as: formula_71
Following the same procedure (i.e. subtract b from both sides, and then divide by a), the general solution is given by formula_72
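As a quick illustration, the general solution formula_72 can be evaluated directly. The following Python sketch (a hypothetical helper, not part of any standard library) reproduces the child's-age example above with a = 2, b = 4, c = 12:

def solve_linear(a, b, c):
    """Solve a*x + b = c for x, assuming a is nonzero."""
    if a == 0:
        raise ValueError("a must be nonzero for a linear equation")
    return (c - b) / a

print(solve_linear(2, 4, 12))   # 4.0, the child's age from the example above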
Linear equations with two variables.
A linear equation with two variables has many (i.e. an infinite number of) solutions. For example:
Problem in words: A father is 22 years older than his son. How old are they?
Equivalent equation: formula_73 where y is the father's age, x is the son's age.
That cannot be worked out by itself. If the son's age were made known, then there would no longer be two unknowns (variables). The problem then becomes a linear equation with just one variable, which can be solved as described above.
To solve a linear equation with two variables (unknowns), requires two related equations. For example, if it was also revealed that:
In 10 years, the father will be twice as old as his son.
formula_74
Now there are two related linear equations, each with two unknowns, which enables the production of a linear equation with just one variable, by subtracting one from the other (called the elimination method):
formula_75
formula_76
In other words, the son is aged 12, and since the father is 22 years older, he must be 34. In 10 years, the son will be 22, and the father will be twice his age, 44. This problem is illustrated on the associated plot of the equations.
For other ways to solve this kind of equation, see below, System of linear equations.
Quadratic equations.
A quadratic equation is one which includes a term with an exponent of 2, for example, formula_27, and no term with higher exponent. The name derives from the Latin "quadrus", meaning square. In general, a quadratic equation can be expressed in the form formula_79, where a is not zero (if it were zero, then the equation would not be quadratic but linear). Because of this a quadratic equation must contain the term formula_80, which is known as the quadratic term. Hence formula_81, and so we may divide by a and rearrange the equation into the standard form
formula_82
where formula_83 and formula_84. Solving this, by a process known as completing the square, leads to the quadratic formula
formula_85
where the symbol "±" indicates that both
formula_86
are solutions of the quadratic equation.
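To illustrate, the quadratic formula can be evaluated with a few lines of Python (an illustrative sketch only; the helper name is made up). Using the cmath module means the complex-number case discussed further below is handled as well:

import cmath

def quadratic_roots(a, b, c):
    """Return both roots of a*x**2 + b*x + c = 0, assuming a is nonzero."""
    d = cmath.sqrt(b**2 - 4*a*c)          # square root of the discriminant
    return (-b + d) / (2*a), (-b - d) / (2*a)

print(quadratic_roots(1, 3, -10))         # ((2+0j), (-5+0j)), matching the factoring example below
print(quadratic_roots(1, 1, 1))           # the two complex roots discussed under "Complex numbers"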
Quadratic equations can also be solved using factorization (the reverse process of expansion, which for two linear factors is sometimes called FOILing). As an example of factoring:
formula_87
which is the same thing as
formula_88
It follows from the zero-product property that formula_78 and formula_77 are the solutions, since at least one of the factors must be equal to zero. All quadratic equations will have two solutions in the complex number system, but need not have any in the real number system. For example,
formula_89
has no real number solution since no real number squared equals −1.
Sometimes a quadratic equation has a root of multiplicity 2, such as:
formula_90
For this equation, −1 is a root of multiplicity 2. This means −1 appears twice, since the equation can be rewritten in factored form as
formula_91
Complex numbers.
All quadratic equations have exactly two solutions in complex numbers (but they may be equal to each other), a category that includes real numbers, imaginary numbers, and sums of real and imaginary numbers. Complex numbers first arise in the teaching of quadratic equations and the quadratic formula. For example, the quadratic equation
formula_92
has solutions
formula_93
Since formula_94 is not any real number, both of these solutions for "x" are complex numbers.
Exponential and logarithmic equations.
An exponential equation is one which has the form formula_95 for formula_96, which has solution
formula_97
when formula_98. Elementary algebraic techniques are used to rewrite a given equation in the above way before arriving at the solution. For example, if
formula_99
then, by subtracting 1 from both sides of the equation, and then dividing both sides by 3 we obtain
formula_100
whence
formula_101
or
formula_102
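A quick numerical check of this solution, as an illustrative Python sketch (the variable names are arbitrary), uses the change-of-base identity for the logarithm:

import math

x = math.log(3, 2) + 1            # the solution x = log_2(3) + 1 from above
print(x)                          # approximately 2.585
print(3 * 2**(x - 1) + 1)         # approximately 10.0, recovering the right-hand side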
A logarithmic equation is an equation of the form formula_103 for formula_96, which has solution
formula_104
For example, if
formula_105
then, by adding 2 to both sides of the equation, followed by dividing both sides by 4, we get
formula_106
whence
formula_107
from which we obtain
formula_108
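The solution can likewise be checked numerically; the following Python sketch is only illustrative:

import math

x = 5**2 + 3                          # x = 28, as derived above
print(4 * math.log(x - 3, 5) - 2)     # approximately 6.0, recovering the right-hand side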
Radical equations.
formula_109: a radical equation showing two ways to represent the same expression. The triple bar means the equation is true for all values of "x".
A radical equation is one that includes a radical sign, which includes square roots, formula_110 cube roots, formula_111, and "n"th roots, formula_112. Recall that an "n"th root can be rewritten in exponential format, so that formula_112 is equivalent to formula_113. Combined with regular exponents (powers), then formula_114 (the square root of x cubed), can be rewritten as formula_115. So a common form of a radical equation is formula_116 (equivalent to formula_117) where m and n are integers. It has real solution(s):
For example, if:
formula_118
then
formula_119
and thus
formula_120
System of linear equations.
There are different methods to solve a system of linear equations with two variables.
Elimination method.
An example of solving a system of linear equations is by using the elimination method:
formula_121
Multiplying the terms in the second equation by 2:
formula_122
formula_123
Adding the two equations together to get:
formula_124
which simplifies to
formula_125
Since formula_78 is known, it is then possible to deduce that formula_126 from either of the original two equations (by using 2 instead of x). The full solution to this problem is then
formula_127
This is not the only way to solve this specific system; y could have been solved for before x.
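For readers who prefer a numerical check, the same system can be handed to NumPy's linear solver (an illustrative sketch, assuming NumPy is available); it returns the values obtained by hand above:

import numpy as np

A = np.array([[4.0, 2.0],    # coefficients of x and y in 4x + 2y = 14
              [2.0, -1.0]])  # coefficients of x and y in 2x - y = 1
b = np.array([14.0, 1.0])

x, y = np.linalg.solve(A, b)
print(x, y)                  # 2.0 3.0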
Substitution method.
Another way of solving the same system of linear equations is by substitution.
formula_128
An equivalent for y can be deduced by using one of the two equations. Using the second equation:
formula_129
Subtracting formula_130 from each side of the equation:
formula_131
and multiplying by −1:
formula_132
Using this y value in the first equation in the original system:
formula_133
Adding "2" on each side of the equation:
formula_134
which simplifies to
formula_135
Using this value in one of the equations, the same solution as in the previous method is obtained.
formula_127
This is not the only way to solve this specific system; in this case as well, y could have been solved before x.
Other types of systems of linear equations.
Inconsistent systems.
In the above example, a solution exists. However, there are also systems of equations which do not have any solution. Such a system is called inconsistent. An obvious example is
formula_136
As 0≠2, the second equation in the system has no solution. Therefore, the system has no solution.
However, not all inconsistent systems are recognized at first sight. As an example, consider the system
formula_137
Multiplying by 2 both sides of the second equation, and adding it to the first one results in
formula_138
which clearly has no solution.
Undetermined systems.
There are also systems which have infinitely many solutions, in contrast to a system with a unique solution (meaning a unique pair of values for x and y). For example:
formula_139
Isolating y in the second equation:
formula_140
And using this value in the first equation in the system:
formula_141
The equality is true, but it does not provide a value for x. Indeed, one can easily verify (by just filling in some values of x) that for any x there is a solution as long as formula_142. There is an infinite number of solutions for this system.
Over- and underdetermined systems.
Systems with more variables than the number of linear equations are called underdetermined. Such a system, if it has any solutions, does not have a unique one but rather an infinitude of them. An example of such a system is
formula_143
When trying to solve it, one is led to express some variables as functions of the others if any solutions exist, but one cannot express "all" solutions numerically, because there are infinitely many of them if there are any.
A system with a higher number of equations than variables is called overdetermined. If an overdetermined system has any solutions, necessarily some equations are linear combinations of the others.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "ax^2+bx+c=0"
},
{
"math_id": 1,
"text": "a\\neq0"
},
{
"math_id": 2,
"text": "\\overset{}{\\underset{}{ x=\\frac{-b\\pm\\sqrt{b^2-4ac} }{2a} } }"
},
{
"math_id": 3,
"text": "3x^2 - 2xy + c"
},
{
"math_id": 4,
"text": "a, b, c"
},
{
"math_id": 5,
"text": "x, y"
},
{
"math_id": 6,
"text": "3 \\times x^2"
},
{
"math_id": 7,
"text": "3x^2"
},
{
"math_id": 8,
"text": "2 \\times x \\times y"
},
{
"math_id": 9,
"text": "2xy"
},
{
"math_id": 10,
"text": "x^2"
},
{
"math_id": 11,
"text": "1x^2"
},
{
"math_id": 12,
"text": "3x^1"
},
{
"math_id": 13,
"text": "3x"
},
{
"math_id": 14,
"text": "x^0"
},
{
"math_id": 15,
"text": "0^0"
},
{
"math_id": 16,
"text": "C = P + 20"
},
{
"math_id": 17,
"text": "60 \\times 5 = 300"
},
{
"math_id": 18,
"text": "s = 60 \\times m"
},
{
"math_id": 19,
"text": "\\pi = c /d"
},
{
"math_id": 20,
"text": "(a + b) = (b + a)"
},
{
"math_id": 21,
"text": "x + x + x"
},
{
"math_id": 22,
"text": "3x"
},
{
"math_id": 23,
"text": "x \\times x \\times x"
},
{
"math_id": 24,
"text": "x^3"
},
{
"math_id": 25,
"text": "2x^2 + 3ab - x^2 + ab"
},
{
"math_id": 26,
"text": "x^2 + 4ab"
},
{
"math_id": 27,
"text": "x^2"
},
{
"math_id": 28,
"text": "ab"
},
{
"math_id": 29,
"text": "x (2x + 3)"
},
{
"math_id": 30,
"text": "(x \\times 2x) + (x \\times 3)"
},
{
"math_id": 31,
"text": "2x^2 + 3x"
},
{
"math_id": 32,
"text": "6x^5 + 3x^2"
},
{
"math_id": 33,
"text": "3x^2"
},
{
"math_id": 34,
"text": "3x^2 (2x^3 + 1)"
},
{
"math_id": 35,
"text": "c^2 = a^2 + b^2"
},
{
"math_id": 36,
"text": "c^2"
},
{
"math_id": 37,
"text": "a + b = b + a"
},
{
"math_id": 38,
"text": "x^2 - 1 = 8"
},
{
"math_id": 39,
"text": "x = 3"
},
{
"math_id": 40,
"text": "x = -3"
},
{
"math_id": 41,
"text": " a > b "
},
{
"math_id": 42,
"text": " > "
},
{
"math_id": 43,
"text": " a < b "
},
{
"math_id": 44,
"text": " < "
},
{
"math_id": 45,
"text": "b = b"
},
{
"math_id": 46,
"text": "a = b"
},
{
"math_id": 47,
"text": "b = a"
},
{
"math_id": 48,
"text": "b = c"
},
{
"math_id": 49,
"text": "a = c"
},
{
"math_id": 50,
"text": "c = d"
},
{
"math_id": 51,
"text": "a + c = b + d"
},
{
"math_id": 52,
"text": "ac = bd"
},
{
"math_id": 53,
"text": "a + c = b + c"
},
{
"math_id": 54,
"text": "ac = bc"
},
{
"math_id": 55,
"text": "a=b"
},
{
"math_id": 56,
"text": "f(a) = f(b)"
},
{
"math_id": 57,
"text": "a < b"
},
{
"math_id": 58,
"text": "b < c"
},
{
"math_id": 59,
"text": "a < c"
},
{
"math_id": 60,
"text": "c < d"
},
{
"math_id": 61,
"text": "a + c < b + d"
},
{
"math_id": 62,
"text": "c > 0"
},
{
"math_id": 63,
"text": "ac < bc"
},
{
"math_id": 64,
"text": "c < 0"
},
{
"math_id": 65,
"text": "bc < ac"
},
{
"math_id": 66,
"text": "b > a"
},
{
"math_id": 67,
"text": "a^2:=a\\times a"
},
{
"math_id": 68,
"text": "a^2,"
},
{
"math_id": 69,
"text": "3^2"
},
{
"math_id": 70,
"text": "2x + 4 = 12"
},
{
"math_id": 71,
"text": "ax+b=c"
},
{
"math_id": 72,
"text": "x=\\frac{c-b}{a}"
},
{
"math_id": 73,
"text": "y = x + 22"
},
{
"math_id": 74,
"text": "\\begin{align}\ny + 10 &= 2 \\times (x + 10)\\\\\ny &= 2 \\times (x + 10) - 10 && \\text{Subtract 10 from both sides}\\\\\ny &= 2x + 20 - 10 && \\text{Multiple out brackets}\\\\\ny &= 2x + 10 && \\text{Simplify}\n\\end{align}"
},
{
"math_id": 75,
"text": "\\begin{cases}\ny = x + 22 & \\text{First equation}\\\\\ny = 2x + 10 & \\text{Second equation}\n\\end{cases}"
},
{
"math_id": 76,
"text": "\\begin{align}\n&&&\\text{Subtract the first equation from}\\\\\n(y - y) &= (2x - x) +10 - 22 && \\text{the second in order to remove } y\\\\\n0 &= x - 12 && \\text{Simplify}\\\\\n12 &= x && \\text{Add 12 to both sides}\\\\\nx &= 12 && \\text{Rearrange}\n\\end{align}"
},
{
"math_id": 77,
"text": "x = -5"
},
{
"math_id": 78,
"text": "x = 2"
},
{
"math_id": 79,
"text": "ax^2 + bx + c = 0"
},
{
"math_id": 80,
"text": "ax^2"
},
{
"math_id": 81,
"text": "a \\neq 0"
},
{
"math_id": 82,
"text": "x^2 + px + q = 0 "
},
{
"math_id": 83,
"text": "p = \\frac{b}{a}"
},
{
"math_id": 84,
"text": "q = \\frac{c}{a}"
},
{
"math_id": 85,
"text": "x=\\frac{-b \\pm \\sqrt {b^2-4ac}}{2a},"
},
{
"math_id": 86,
"text": " x=\\frac{-b + \\sqrt {b^2-4ac}}{2a}\\quad\\text{and}\\quad x=\\frac{-b - \\sqrt {b^2-4ac}}{2a}"
},
{
"math_id": 87,
"text": "x^{2} + 3x - 10 = 0, "
},
{
"math_id": 88,
"text": "(x + 5)(x - 2) = 0. "
},
{
"math_id": 89,
"text": "x^{2} + 1 = 0 "
},
{
"math_id": 90,
"text": "(x + 1)^2 = 0. "
},
{
"math_id": 91,
"text": "[x-(-1)][x-(-1)]=0."
},
{
"math_id": 92,
"text": "x^2+x+1=0"
},
{
"math_id": 93,
"text": "x=\\frac{-1 + \\sqrt{-3}}{2} \\quad \\quad \\text{and} \\quad \\quad x=\\frac{-1-\\sqrt{-3}}{2}."
},
{
"math_id": 94,
"text": "\\sqrt{-3}"
},
{
"math_id": 95,
"text": "a^x = b"
},
{
"math_id": 96,
"text": "a > 0"
},
{
"math_id": 97,
"text": "X = \\log_a b = \\frac{\\ln b}{\\ln a}"
},
{
"math_id": 98,
"text": "b > 0"
},
{
"math_id": 99,
"text": "3 \\cdot 2^{x - 1} + 1 = 10"
},
{
"math_id": 100,
"text": "2^{x - 1} = 3"
},
{
"math_id": 101,
"text": "x - 1 = \\log_2 3"
},
{
"math_id": 102,
"text": "x = \\log_2 3 + 1."
},
{
"math_id": 103,
"text": "log_a(x) = b"
},
{
"math_id": 104,
"text": "X = a^b."
},
{
"math_id": 105,
"text": "4\\log_5(x - 3) - 2 = 6"
},
{
"math_id": 106,
"text": "\\log_5(x - 3) = 2"
},
{
"math_id": 107,
"text": "x - 3 = 5^2 = 25"
},
{
"math_id": 108,
"text": "x = 28."
},
{
"math_id": 109,
"text": "\\overset{}{\\underset{}{\\sqrt[2]{x^3} \\equiv x^{\\frac 3 2} } }"
},
{
"math_id": 110,
"text": "\\sqrt{x},"
},
{
"math_id": 111,
"text": "\\sqrt[3]{x}"
},
{
"math_id": 112,
"text": "\\sqrt[n]{x}"
},
{
"math_id": 113,
"text": "x^{\\frac{1}{n}}"
},
{
"math_id": 114,
"text": "\\sqrt[2]{x^3}"
},
{
"math_id": 115,
"text": "x^{\\frac{3}{2}}"
},
{
"math_id": 116,
"text": " \\sqrt[n]{x^m}=a"
},
{
"math_id": 117,
"text": " x^\\frac{m}{n}=a"
},
{
"math_id": 118,
"text": "(x + 5)^{2/3} = 4"
},
{
"math_id": 119,
"text": "\\begin{align}\nx + 5 & = \\pm (\\sqrt{4})^3,\\\\\nx + 5 & = \\pm 8,\\\\\nx & = -5 \\pm 8,\n\\end{align}"
},
{
"math_id": 120,
"text": "x = 3 \\quad \\text{or}\\quad x = -13"
},
{
"math_id": 121,
"text": "\\begin{cases}4x + 2y&= 14 \\\\\n2x - y&= 1.\\end{cases} "
},
{
"math_id": 122,
"text": "4x + 2y = 14 "
},
{
"math_id": 123,
"text": "4x - 2y = 2. "
},
{
"math_id": 124,
"text": "8x = 16 "
},
{
"math_id": 125,
"text": "x = 2. "
},
{
"math_id": 126,
"text": "y = 3"
},
{
"math_id": 127,
"text": "\\begin{cases} x = 2 \\\\ y = 3. \\end{cases}"
},
{
"math_id": 128,
"text": "\\begin{cases}4x + 2y &= 14\n\\\\ 2x - y &= 1.\\end{cases} "
},
{
"math_id": 129,
"text": "2x - y = 1 "
},
{
"math_id": 130,
"text": "2x"
},
{
"math_id": 131,
"text": "\\begin{align}2x - 2x - y & = 1 - 2x \\\\\n- y & = 1 - 2x\n\\end{align}"
},
{
"math_id": 132,
"text": " y = 2x - 1. "
},
{
"math_id": 133,
"text": "\\begin{align}4x + 2(2x - 1) &= 14\\\\\n4x + 4x - 2 &= 14 \\\\\n8x - 2 &= 14 \\end{align}"
},
{
"math_id": 134,
"text": "\\begin{align}8x - 2 + 2 &= 14 + 2 \\\\\n8x &= 16 \\end{align}"
},
{
"math_id": 135,
"text": "x = 2 "
},
{
"math_id": 136,
"text": "\\begin{cases}\\begin{align} x + y &= 1 \\\\\n0x + 0y &= 2\\,. \\end{align} \\end{cases}"
},
{
"math_id": 137,
"text": "\\begin{cases}\\begin{align}4x + 2y &= 12 \\\\\n-2x - y &= -4\\,. \\end{align}\\end{cases}"
},
{
"math_id": 138,
"text": "0x+0y = 4 \\,,"
},
{
"math_id": 139,
"text": "\\begin{cases}\\begin{align}4x + 2y & = 12 \\\\\n-2x - y & = -6 \\end{align}\\end{cases}"
},
{
"math_id": 140,
"text": "y = -2x + 6 "
},
{
"math_id": 141,
"text": "\\begin{align}4x + 2(-2x + 6) = 12 \\\\\n4x - 4x + 12 = 12 \\\\\n12 = 12 \\end{align}"
},
{
"math_id": 142,
"text": "y = -2x + 6"
},
{
"math_id": 143,
"text": "\\begin{cases}\\begin{align}x + 2y & = 10\\\\\ny - z & = 2 .\\end{align}\\end{cases}"
}
] | https://en.wikipedia.org/wiki?curid=9710 |
9710396 | Riemann–Hilbert problem | In mathematics, Riemann–Hilbert problems, named after Bernhard Riemann and David Hilbert, are a class of problems that arise in the study of differential equations in the complex plane. Several existence theorems for Riemann–Hilbert problems have been produced by Mark Krein, Israel Gohberg and others.
The Riemann problem.
Suppose that formula_0 is a smooth, simple, closed contour in the complex formula_1 plane. Divide the plane into two parts denoted by formula_2 (the inside) and formula_3 (the outside), determined by the index of the contour with respect to a point. The classical problem, considered in Riemann's PhD dissertation, was that of finding a function
formula_4
analytic inside formula_2, such that the boundary values of formula_5 along formula_0 satisfy the equation
formula_6
for formula_7, where formula_8, formula_9 and formula_10 are given real-valued functions. For example, in the special case where formula_11 and formula_0 is a circle, the problem reduces to deriving the Poisson formula.
By the Riemann mapping theorem, it suffices to consider the case when formula_0 is the circle group formula_12. In this case, one may seek formula_13 along with its Schwarz reflection
formula_14
For formula_15, one has formula_16 and so
formula_17
Hence the problem reduces to finding a pair of analytic functions formula_13 and formula_18 on the inside and outside, respectively, of the unit disk, so that on the unit circle
formula_19
and, moreover, so that the condition at infinity holds:
formula_20
The Hilbert problem.
Hilbert's generalization of the problem attempted to find a pair of analytic functions formula_21 and formula_22 on the inside and outside, respectively, of the curve formula_0, such that for formula_7 one has
formula_23
where formula_24, formula_25 and formula_26 are given complex-valued functions (no longer just complex conjugates).
Riemann–Hilbert problems.
In the Riemann problem as well as Hilbert's generalization, the contour formula_0 was simple. A full Riemann–Hilbert problem allows that the contour may be composed of a union of several oriented smooth curves, with no intersections. The "+" and "−" sides of the "contour" may then be determined according to the index of a point with respect to formula_0. The Riemann–Hilbert problem is to find a pair of analytic functions formula_21 and formula_22 on the "+" and "−" side of formula_0, respectively, such that for formula_7 one has
formula_27
where formula_24, formula_25 and formula_26 are given complex-valued functions.
Matrix Riemann–Hilbert problems.
Given an oriented contour formula_0 (technically: an oriented union of smooth curves without points of infinite self-intersection in the complex plane), a "Riemann–Hilbert factorization problem" is the following.
Given a matrix function formula_28 defined on the contour formula_0, find a holomorphic matrix function formula_29 defined on the complement of formula_0, such that the following two conditions are satisfied: the jump condition formula_32 holds for formula_7, and formula_29 tends to the identity matrix formula_33 as formula_34.
In the simplest case formula_28 is smooth and integrable. In more complicated cases it could have singularities. The limits formula_5 and formula_30 could be classical and continuous or they could be taken in the formula_35-sense.
At end-points or intersection points of the contour formula_0, the jump condition is not defined; constraints on the growth of formula_31 near those points have to be posed to ensure uniqueness (see the scalar problem below).
Example: Scalar Riemann–Hilbert factorization problem.
Suppose formula_36 and formula_37. Assuming formula_31 is bounded, what is the solution formula_31?
To solve this, let's take the logarithm of equation formula_38.
formula_39
Since formula_29 tends to formula_40, formula_41 as formula_34.
A standard fact about the Cauchy transform is that formula_42 where formula_43 and formula_44 are the limits of the Cauchy transform from above and below formula_0; therefore, we get
formula_45
when formula_46. Because the solution of a Riemann–Hilbert factorization problem is unique (an easy application of Liouville's theorem (complex analysis)), the Sokhotski–Plemelj theorem gives the solution. We get
formula_47
and therefore
formula_48
which has a branch cut at contour formula_0.
Check:
formula_49
therefore,
formula_50
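The jump condition can also be checked numerically. The following Python sketch is only illustrative: the sample points on the cut and the offset eps are arbitrary choices, and the built-in complex power is used for the principal branch of formula_48:

import cmath

alpha = cmath.log(2) / (2j * cmath.pi)      # the exponent log(2) / (2*pi*i)

def M(z):
    """Principal-branch evaluation of ((z - 1) / (z + 1)) ** alpha."""
    return ((z - 1) / (z + 1)) ** alpha

eps = 1e-9                                   # small offset above/below the cut (-1, 1)
for x in (-0.5, 0.0, 0.7):
    ratio = M(x + 1j * eps) / M(x - 1j * eps)
    print(x, ratio)                          # the ratio is approximately 2 at each point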
CAVEAT 1: If the problem is not scalar one cannot easily take logarithms. In general explicit solutions are very rare.
CAVEAT 2: The boundedness (or at least a constraint on the blow-up) of formula_31 near the special points formula_40 and formula_51 is crucial. Otherwise any function of the form
formula_52
is also a solution. In general, conditions on growth are necessary at special points (the end-points of the jump contour or intersection point) to ensure that the problem is well-posed.
Generalizations.
DBAR problem.
Suppose formula_53 is some simply connected domain of the complex formula_1 plane. Then the scalar equation
formula_54
is a generalization of a Riemann-Hilbert problem, called the DBAR problem (or formula_55 problem). It is the complex form of the nonhomogeneous Cauchy-Riemann equations. To show this, let
formula_56
with formula_57, formula_58, formula_59 and formula_60 all real-valued functions of real variables formula_61 and formula_62. Then, using
formula_63
the DBAR problem yields
formula_64
As such, if formula_31 is holomorphic for formula_65, then the Cauchy-Riemann equations must be satisfied.
In case formula_66 as formula_34 and formula_67, the solution of the DBAR problem is
formula_68
integrated over the entire complex plane; denoted by formula_69, and where the wedge product is defined as
formula_70
Generalized analytic functions.
If a function formula_29 is holomorphic in some complex region formula_71, then
formula_72
in formula_71. For generalized analytic functions, this equation is replaced by
formula_73
in a region formula_71, where formula_74 is the complex conjugate of formula_31 and formula_75 and formula_76 are functions of formula_1 and formula_77.
Generalized analytic functions have applications in differential geometry, in solving certain type of multidimensional nonlinear partial differential equations and multidimensional inverse scattering.
Applications to integrability theory.
Riemann–Hilbert problems have applications to several related classes of problems.
The numerical analysis of Riemann–Hilbert problems can provide an effective way for numerically solving integrable PDEs (see e.g. ).
Use for asymptotics.
In particular, Riemann–Hilbert factorization problems are used to extract asymptotic values for the three problems above (say, as time goes to infinity, or as the dispersion coefficient goes to zero, or as the polynomial degree goes to infinity, or as the size of the permutation goes to infinity). There exists a method for extracting the asymptotic behavior of solutions of Riemann–Hilbert problems, analogous to the method of stationary phase and the method of steepest descent applicable to exponential integrals.
By analogy with the classical asymptotic methods, one "deforms" Riemann–Hilbert problems which are not explicitly solvable to problems that are. The so-called "nonlinear" method of stationary phase is due to , expanding on a previous idea by and and using technical background results from and . A crucial ingredient of the Deift–Zhou analysis is the asymptotic analysis of singular integrals on contours. The relevant kernel is the standard Cauchy kernel (see ; also cf. the scalar example below).
An essential extension of the nonlinear method of stationary phase has been the introduction of the so-called finite gap g-function transformation by , which has been crucial in most applications. This was inspired by work of Lax, Levermore and Venakides, who reduced the analysis of the small dispersion limit of the KdV equation to the analysis of a maximization problem for a logarithmic potential under some external field: a variational problem of "electrostatic" type (see ). The g-function is the logarithmic transform of the maximizing "equilibrium" measure. The analysis of the small dispersion limit of KdV equation has in fact provided the basis for the analysis of most of the work concerning "real" orthogonal polynomials (i.e. with the orthogonality condition defined on the real line) and Hermitian random matrices.
Perhaps the most sophisticated extension of the theory so far is the one applied to the "non self-adjoint" case, i.e. when the underlying Lax operator (the first component of the Lax pair) is not self-adjoint, by . In that case, actual "steepest descent contours" are defined and computed. The corresponding variational problem is a max-min problem: one looks for a contour that minimizes the "equilibrium" measure. The study of the variational problem and the proof of existence of a regular solution, under some conditions on the external field, was done in ; the contour arising is an "S-curve", as defined and studied in the 1980s by Herbert R. Stahl, Andrei A. Gonchar and Evguenii A Rakhmanov.
An alternative asymptotic analysis of Riemann–Hilbert factorization problems is provided in , especially convenient when jump matrices do not have analytic extensions. Their method is based on the analysis of d-bar problems, rather than the asymptotic analysis of singular integrals on contours. An alternative way of dealing with jump matrices with no analytic extensions was introduced in .
Another extension of the theory appears in where the underlying space of the Riemann–Hilbert problem is a compact hyperelliptic Riemann surface. The correct factorization problem is no more holomorphic, but rather meromorphic, by reason of the Riemann–Roch theorem. The related singular kernel is not the usual Cauchy kernel, but rather a more general kernel involving meromorphic differentials defined naturally on the surface (see e.g. the appendix in ). The Riemann–Hilbert problem deformation theory is applied to the problem of stability of the infinite periodic Toda lattice under a "short range" perturbation (for example a perturbation of a finite number of particles).
Most Riemann–Hilbert factorization problems studied in the literature are 2-dimensional, i.e., the unknown matrices are of dimension 2. Higher-dimensional problems have been studied by Arno Kuijlaars and collaborators, see e.g. .
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Sigma"
},
{
"math_id": 1,
"text": "z"
},
{
"math_id": 2,
"text": "\\Sigma_{+}"
},
{
"math_id": 3,
"text": "\\Sigma_{-}"
},
{
"math_id": 4,
"text": "M_+(t) = u(t) + i v(t),"
},
{
"math_id": 5,
"text": "M_+"
},
{
"math_id": 6,
"text": "a(t)u(t) - b(t)v(t) = c(t),"
},
{
"math_id": 7,
"text": "t \\in \\Sigma"
},
{
"math_id": 8,
"text": "a(t)"
},
{
"math_id": 9,
"text": "b(t)"
},
{
"math_id": 10,
"text": "c(t)"
},
{
"math_id": 11,
"text": "a = 1, b=0"
},
{
"math_id": 12,
"text": "\\mathbb T = \\{ z \\in \\mathbb C : |z| = 1 \\}"
},
{
"math_id": 13,
"text": "M_+(z)"
},
{
"math_id": 14,
"text": "M_-(z) = \\overline{M_+\\left(\\bar{z}^{-1}\\right)}."
},
{
"math_id": 15,
"text": "z\\in \\mathbb{T}"
},
{
"math_id": 16,
"text": "z = 1/\\bar{z}"
},
{
"math_id": 17,
"text": "M_-(z) = \\overline{M_+(z)}."
},
{
"math_id": 18,
"text": "M_-(z)"
},
{
"math_id": 19,
"text": "\\frac{a(z)+ib(z)}{2}M_+(z) + \\frac{a(z)-ib(z)}{2}M_-(z) = c(z),"
},
{
"math_id": 20,
"text": "\\lim_{z\\to\\infty}M_-(z) = \\overline{{M}_+(0)}."
},
{
"math_id": 21,
"text": "M_+(t)"
},
{
"math_id": 22,
"text": "M_-(t)"
},
{
"math_id": 23,
"text": "\\alpha(t) M_+(t) + \\beta(t) M_-(t) = \\gamma(t)"
},
{
"math_id": 24,
"text": "\\alpha(t)"
},
{
"math_id": 25,
"text": "\\beta(t)"
},
{
"math_id": 26,
"text": "\\gamma(t)"
},
{
"math_id": 27,
"text": "\\alpha(t) M_+(t) + \\beta(z) M_-(t) = \\gamma(t)."
},
{
"math_id": 28,
"text": "G(t)"
},
{
"math_id": 29,
"text": "M(z)"
},
{
"math_id": 30,
"text": "M_-"
},
{
"math_id": 31,
"text": "M"
},
{
"math_id": 32,
"text": "M_+(t)=G(t)M_-(t)"
},
{
"math_id": 33,
"text": "I_N"
},
{
"math_id": 34,
"text": "z \\to \\infty"
},
{
"math_id": 35,
"text": "L^2"
},
{
"math_id": 36,
"text": "G = 2"
},
{
"math_id": 37,
"text": "\\Sigma=[-1,1]"
},
{
"math_id": 38,
"text": "M_+=GM_-"
},
{
"math_id": 39,
"text": " \\log M_+(z) = \\log M_-(z) + \\log 2. "
},
{
"math_id": 40,
"text": "1"
},
{
"math_id": 41,
"text": "\\log M \\to 0"
},
{
"math_id": 42,
"text": "C_+ -C_- = I "
},
{
"math_id": 43,
"text": "C_+"
},
{
"math_id": 44,
"text": "C_-"
},
{
"math_id": 45,
"text": " \\frac{1}{2\\pi i}\\int_{\\Sigma_+} \\frac{\\log 2}{\\zeta-z} \\, d\\zeta - \\frac{1}{2\\pi i} \\int_{\\Sigma_-} \\frac{\\log{2}}{\\zeta-z} \\, d\\zeta = \\log 2\n"
},
{
"math_id": 46,
"text": "z\\in\\Sigma"
},
{
"math_id": 47,
"text": "\\log M = \\frac{1}{2\\pi i}\\int_{\\Sigma}\\frac{\\log{2}}{\\zeta-z}d\\zeta = \\frac{\\log 2}{2\\pi i}\\int^{1-z}_{-1-z}\\frac{1}{\\zeta}d\\zeta = \\frac{\\log 2}{2\\pi i} \\log{\\frac{z-1}{z+1}}, "
},
{
"math_id": 48,
"text": " M(z)=\\left( \\frac{z-1}{z+1} \\right)^{\\frac{\\log{2}}{2\\pi i}},"
},
{
"math_id": 49,
"text": "\\begin{align}\nM_+(0) &=(e^{i\\pi} )^{\\frac{\\log 2}{2\\pi i}} = e^{\\frac{\\log 2}{2}} \\\\ \nM_-(0) &=(e^{-i\\pi})^{\\frac{\\log 2}{2\\pi i}} = e^{-\\frac{\\log 2}{2}}\n\\end{align}"
},
{
"math_id": 50,
"text": "M_+(0)=M_-(0)e^{\\log{2}}=M_-(0)2."
},
{
"math_id": 51,
"text": "-1"
},
{
"math_id": 52,
"text": " M(z)=\\left( \\frac{z-1}{z+1} \\right)^{\\frac{\\log{2}}{2\\pi i}} + \\frac{a}{z-1}+ \\frac{b}{z+1} "
},
{
"math_id": 53,
"text": "D"
},
{
"math_id": 54,
"text": "\\frac{\\partial M(z,\\bar{z})}{\\partial \\bar{z}}=f(z,\\bar{z}), \\quad z \\in D,"
},
{
"math_id": 55,
"text": "\\overline{\\partial}"
},
{
"math_id": 56,
"text": "M = u + i v, \\quad f = \\frac{g + ih}{2}, \\quad z = x + iy,"
},
{
"math_id": 57,
"text": "u(x,y)"
},
{
"math_id": 58,
"text": "v(x,y)"
},
{
"math_id": 59,
"text": "g(x,y)"
},
{
"math_id": 60,
"text": "h(x,y)"
},
{
"math_id": 61,
"text": "x"
},
{
"math_id": 62,
"text": "y"
},
{
"math_id": 63,
"text": "\\frac{\\partial}{\\partial\\bar{z}} = \\frac{1}{2}\\left(\\frac{\\partial}{\\partial x} + i\\frac{\\partial}{\\partial y}\\right),"
},
{
"math_id": 64,
"text": "\\frac{\\partial u}{\\partial x} -\\frac{\\partial v}{\\partial y} = g(x,y), \\quad \\frac{\\partial u}{\\partial y} + \\frac{\\partial v}{\\partial x} = h(x,y)."
},
{
"math_id": 65,
"text": "z \\in D"
},
{
"math_id": 66,
"text": "M \\to 1"
},
{
"math_id": 67,
"text": "D := \\mathbb{C}"
},
{
"math_id": 68,
"text": "M(z,\\bar{z}) = 1 + \\frac{1}{2\\pi i} \\iint_{\\mathbb{R}^2} \\frac{f(\\zeta,\\bar{\\zeta})}{\\zeta -z}\\, d\\zeta \\wedge d\\bar{\\zeta},"
},
{
"math_id": 69,
"text": "\\mathbb{R}^2"
},
{
"math_id": 70,
"text": "d\\zeta \\wedge d\\bar{\\zeta} = (d\\xi + i d\\eta)\\wedge(d\\xi - i d\\eta) = -2i d\\xi d\\eta."
},
{
"math_id": 71,
"text": "R"
},
{
"math_id": 72,
"text": "\\frac{\\partial M}{\\partial \\bar{z}} = 0,"
},
{
"math_id": 73,
"text": "\\frac{\\partial M}{\\partial \\bar{z}} = A(z,\\bar{z})M + B(z,\\bar{z})\\overline{M},"
},
{
"math_id": 74,
"text": "\\overline{M}"
},
{
"math_id": 75,
"text": "A(z,\\bar{z})"
},
{
"math_id": 76,
"text": "B(z,\\bar{z})"
},
{
"math_id": 77,
"text": "\\bar{z}"
}
] | https://en.wikipedia.org/wiki?curid=9710396 |
9712439 | Filled Julia set | The filled-in Julia set formula_0 of a polynomial formula_1 is the set of points whose orbit under iteration of formula_1 remains bounded (the non-escaping set); it consists of the Julia set of formula_1 together with its interior.
Formal definition.
The filled-in Julia set formula_0 of a polynomial formula_1 is defined as the set of all points formula_2 of the dynamical plane that have bounded orbit with respect to formula_1
formula_3
where: formula_4 denotes the set of complex numbers, and formula_5 is the formula_6-fold composition of formula_7 with itself (the formula_6-th iterate of formula_7).
Relation to the Fatou set.
The filled-in Julia set is the (absolute) complement of the attractive basin of infinity.
formula_8
The attractive basin of infinity is one of the components of the Fatou set.
formula_9
In other words, the filled-in Julia set is the complement of the unbounded Fatou component:
formula_10
Relation between Julia, filled-in Julia set and attractive basin of infinity.
The Julia set is the common boundary of the filled-in Julia set and the attractive basin of infinity
formula_11
where: formula_12 denotes the attractive basin of infinity = exterior of filled-in Julia set = set of escaping points for formula_7
formula_13
If the filled-in Julia set has no interior then the Julia set coincides with the filled-in Julia set. This happens when all the critical points of formula_7 are pre-periodic. Such critical points are often called Misiurewicz points.
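For the quadratic polynomials discussed below, membership in the filled Julia set is commonly approximated by an escape-time iteration. The following Python sketch is only illustrative (the iteration limit is arbitrary, and the escape radius 2 is valid for formula_14 whenever the absolute value of formula_16 is at most 2):

def in_filled_julia(z, c, max_iter=200):
    """Return True if the orbit of z under z -> z**2 + c appears bounded
    after max_iter steps (escape radius 2 is valid when abs(c) <= 2)."""
    for _ in range(max_iter):
        if abs(z) > 2:
            return False
        z = z * z + c
    return True

c = -1                                     # the "basilica" parameter
for z0 in (0, 0.3 + 0.2j, 1.5 + 1.5j):
    print(z0, in_filled_julia(z0, c))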
Spine.
The most studied polynomials are probably those of the form formula_14, which are often denoted by formula_15, where formula_16 is any complex number. In this case, the spine formula_17 of the filled Julia set formula_18 is defined as the arc between the formula_19-fixed point and formula_20,
formula_21
with the following properties: the spine lies inside formula_22 (this makes sense when formula_22 is connected and full); the spine is invariant under 180-degree rotation; the spine is a finite topological tree; the critical point formula_23 always belongs to the spine; the formula_19-fixed point is the landing point of the external ray of angle zero formula_24; and formula_26 is the landing point of the external ray formula_25.
Algorithms for constructing the spine: a detailed version is described by A. Douady; in a simplified version, one connects formula_26 and formula_27 within formula_22 by an arc (when formula_22 has empty interior the arc is unique; otherwise the shortest way that contains formula_28 is taken).
Curve formula_29:
formula_30
divides the dynamical plane into two components.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K(f) "
},
{
"math_id": 1,
"text": "f "
},
{
"math_id": 2,
"text": "z"
},
{
"math_id": 3,
"text": " K(f) \\overset{\\mathrm{def}}{{}={}} \\left \\{ z \\in \\mathbb{C} : f^{(k)} (z) \\not\\to \\infty ~ \\text{as} ~ k \\to \\infty \\right\\} "
},
{
"math_id": 4,
"text": "\\mathbb{C}"
},
{
"math_id": 5,
"text": " f^{(k)} (z) "
},
{
"math_id": 6,
"text": "k"
},
{
"math_id": 7,
"text": "f"
},
{
"math_id": 8,
"text": "K(f) = \\mathbb{C} \\setminus A_{f}(\\infty)"
},
{
"math_id": 9,
"text": "A_{f}(\\infty) = F_\\infty "
},
{
"math_id": 10,
"text": "K(f) = F_\\infty^C."
},
{
"math_id": 11,
"text": "J(f) = \\partial K(f) = \\partial A_{f}(\\infty)"
},
{
"math_id": 12,
"text": "A_{f}(\\infty)"
},
{
"math_id": 13,
"text": "A_{f}(\\infty) \\ \\overset{\\underset{\\mathrm{def}}{}}{=} \\ \\{ z \\in \\mathbb{C} : f^{(k)} (z) \\to \\infty\\ as\\ k \\to \\infty \\}. "
},
{
"math_id": 14,
"text": "f(z) = z^2 + c"
},
{
"math_id": 15,
"text": "f_c"
},
{
"math_id": 16,
"text": "c"
},
{
"math_id": 17,
"text": "S_c"
},
{
"math_id": 18,
"text": "K "
},
{
"math_id": 19,
"text": "\\beta"
},
{
"math_id": 20,
"text": "-\\beta"
},
{
"math_id": 21,
"text": "S_c = \\left [ - \\beta , \\beta \\right ]"
},
{
"math_id": 22,
"text": "K"
},
{
"math_id": 23,
"text": " z_{cr} = 0 "
},
{
"math_id": 24,
"text": "\\mathcal{R}^K _0"
},
{
"math_id": 25,
"text": "\\mathcal{R}^K _{1/2}"
},
{
"math_id": 26,
"text": "- \\beta"
},
{
"math_id": 27,
"text": " \\beta"
},
{
"math_id": 28,
"text": "0"
},
{
"math_id": 29,
"text": "R"
},
{
"math_id": 30,
"text": "R \\overset{\\mathrm{def}}{{}={}} R_{1/2} \\cup S_c \\cup R_0 "
}
] | https://en.wikipedia.org/wiki?curid=9712439 |
9712648 | Bing metrization theorem | Characterizes when a topological space is metrizable
In topology, the Bing metrization theorem, named after R. H. Bing, characterizes when a topological space is metrizable.
Formal statement.
The theorem states that a topological space formula_0 is metrizable if and only if it is regular and T0 and has a σ-discrete basis. A family of sets is called σ-discrete when it is a union of countably many discrete collections, where a family formula_1 of subsets of a space formula_0 is called discrete, when every point of formula_0 has a neighborhood that intersects at most one member of formula_2
History.
The theorem was proven by Bing in 1951 and was discovered independently of the Nagata–Smirnov metrization theorem, which was proved independently by Nagata (1950) and Smirnov (1951). The two results are often merged into the Bing–Nagata–Smirnov metrization theorem. It is a common tool for proving other metrization theorems; e.g. the Moore metrization theorem – a collectionwise normal Moore space is metrizable – is a direct consequence.
Comparison with other metrization theorems.
Unlike Urysohn's metrization theorem, which provides a sufficient condition for metrization, this theorem provides both a necessary and sufficient condition for a topological space to be metrizable.
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "\\mathcal{F}"
},
{
"math_id": 2,
"text": "\\mathcal{F}."
}
] | https://en.wikipedia.org/wiki?curid=9712648 |
9714241 | Gross production average | Gross production average (GPA) is a baseball statistic created in 2003 by Aaron Gleeman, as a refinement of on-base plus slugging (OPS). GPA attempts to solve two frequently cited problems with OPS. First, OPS gives equal weight to its two components, on-base percentage (OBP) and slugging percentage (SLG). In fact, OBP contributes significantly more to scoring runs than SLG does. Sabermetricians have calculated that OBP is about 80% more valuable than SLG. A second problem with OPS is that it generates numbers on a scale unfamiliar to most baseball fans. For all the problems with a traditional stat like batting average (AVG), baseball fans immediately know that a player batting .365 is significantly better than average, while a player batting .167 is significantly below average. But many fans do not immediately know how good a player with a 1.013 OPS is.
The basic formula for GPA is: formula_0
Unlike OPS, this formula both gives proper relative weight to its two component statistics and generates a number that falls on a scale similar to the familiar batting average scale.
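As an illustration, the formula is a one-liner in Python (a hypothetical helper, with a made-up example season):

def gross_production_average(obp, slg):
    """Gross production average from on-base percentage and slugging percentage."""
    return (1.8 * obp + slg) / 4

# A hypothetical .400 OBP / .550 SLG season gives a GPA of about .318
print(round(gross_production_average(0.400, 0.550), 3))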
All-time leaders.
The all-time top 10 highest career gross production averages, among players with 3,000 or more plate appearances:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{{(1.8)OBP} + SLG}{4}"
}
] | https://en.wikipedia.org/wiki?curid=9714241 |
971437 | Total least squares | Statistical technique
In applied statistics, total least squares is a type of errors-in-variables regression, a least squares data modeling technique in which observational errors on both dependent and independent variables are taken into account. It is a generalization of Deming regression and also of orthogonal regression, and can be applied to both linear and non-linear models.
The total least squares approximation of the data is generically equivalent to the best, in the Frobenius norm, low-rank approximation of the data matrix.
Linear model.
Background.
In the least squares method of data modeling, the objective function, "S",
formula_0
is minimized, where "r" is the vector of residuals and "W" is a weighting matrix. In linear least squares the model contains equations which are linear in the parameters appearing in the parameter vector formula_1, so the residuals are given by
formula_2
There are "m" observations in y and "n" parameters in β with "m">"n". X is a "m"×"n" matrix whose elements are either constants or functions of the independent variables, x. The weight matrix W is, ideally, the inverse of the variance-covariance matrix formula_3 of the observations y. The independent variables are assumed to be error-free. The parameter estimates are found by setting the gradient equations to zero, which results in the normal equations
formula_4
Allowing observation errors in all variables.
Now, suppose that both x and y are observed subject to error, with variance-covariance matrices formula_5 and formula_3 respectively. In this case the objective function can be written as
formula_6
where formula_7 and formula_8 are the residuals in x and y respectively. Clearly these residuals cannot be independent of each other, but they must be constrained by some kind of relationship. Writing the model function as formula_9, the constraints are expressed by "m" condition equations.
formula_10
Thus, the problem is to minimize the objective function subject to the "m" constraints. It is solved by the use of Lagrange multipliers. After some algebraic manipulations, the result is obtained.
formula_11
or alternatively formula_12
where M is the variance-covariance matrix relative to both independent and dependent variables.
formula_13
Example.
When the data errors are uncorrelated, all matrices M and W are diagonal. Then, take the example of straight line fitting.
formula_14
in this case
formula_15
showing how the variance at the "i"th point is determined by the variances of both independent and dependent variables and by the model being used to fit the data. The expression may be generalized by noting that the parameter formula_16 is the slope of the line.
formula_17
An expression of this type is used in fitting pH titration data where a small error on "x" translates to a large error on y when the slope is large.
Algebraic point of view.
As was shown in 1980 by Golub and Van Loan, the TLS problem does not have a solution in general. The following considers the simple case where a unique solution exists without making any particular assumptions.
The computation of the TLS using singular value decomposition (SVD) is described in standard texts. We can solve the equation
formula_18
for "B" where "X" is "m"-by-"n" and "Y" is "m"-by-"k".
We seek to find "B" that minimizes the error matrices "E" and "F" for "X" and "Y" respectively. That is,
formula_19
where formula_20 is the augmented matrix with "E" and "F" side by side and formula_21 is the Frobenius norm, the square root of the sum of the squares of all entries in a matrix and so equivalently the square root of the sum of squares of the lengths of the rows or columns of the matrix.
This can be rewritten as
formula_22
where formula_23 is the formula_24 identity matrix.
The goal is then to find formula_20 that reduces the rank of formula_25 by "k". Define formula_26 to be the singular value decomposition of the augmented matrix formula_25.
formula_27
where "V" is partitioned into blocks corresponding to the shape of "X" and "Y".
Using the Eckart–Young theorem, the approximation minimising the norm of the error is such that matrices formula_28 and formula_29 are unchanged, while the smallest formula_30 singular values are replaced with zeroes. That is, we want
formula_31
so by linearity,
formula_32
We can then remove blocks from the "U" and Σ matrices, simplifying to
formula_33
This provides "E" and "F" so that
formula_34
Now if formula_35 is nonsingular, which is not always the case (note that the behavior of TLS when formula_35 is singular is not well understood yet), we can then right multiply both sides by formula_36 to bring the bottom block of the right matrix to the negative identity, giving
formula_37
and so
formula_38
A naive GNU Octave implementation of this is:
function B = tls(X, Y)

  [m n]   = size(X);             % n is the width of X (X is m by n)
  Z       = [X Y];               % Z is X augmented with Y.
  [U S V] = svd(Z, 0);           % find the SVD of Z.
  VXY     = V(1:n, 1+n:end);     % Take the block of V consisting of the first n rows and the n+1 to last column
  VYY     = V(1+n:end, 1+n:end); % Take the bottom-right block of V.
  B       = -VXY / VYY;

end
The way described above of solving the problem, which requires that the matrix formula_35 is nonsingular, can be slightly extended by the so-called "classical TLS algorithm".
Computation.
The standard implementation of the classical TLS algorithm is available through Netlib; see also. All modern implementations, based, for example, on solving a sequence of ordinary least squares problems, approximate the matrix formula_39 (denoted formula_40 in the literature), as introduced by Van Huffel and Vandewalle. It is worth noting that this formula_39 is, however, "not the TLS solution" in many cases.
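For readers working in Python, a rough NumPy translation of the SVD-based construction above might look as follows (a sketch only; like the Octave version it assumes formula_35 is nonsingular, and the test data are made up):

import numpy as np

def tls(X, Y):
    """Total least squares fit of X @ B ~ Y via the SVD of the augmented matrix [X Y]."""
    n = X.shape[1]                      # number of columns of X
    Z = np.hstack([X, Y])               # augmented matrix [X Y]
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    V = Vt.T                            # NumPy returns V transposed
    Vxy = V[:n, n:]                     # top-right block of V
    Vyy = V[n:, n:]                     # bottom-right block of V
    return -Vxy @ np.linalg.inv(Vyy)

# Example: noisy straight line y = 2x with small errors in both coordinates
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
X = (x + 0.01 * rng.standard_normal(50)).reshape(-1, 1)
Y = (2 * x + 0.01 * rng.standard_normal(50)).reshape(-1, 1)
print(tls(X, Y))                        # slope estimate close to 2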
Non-linear model.
For non-linear systems similar reasoning shows that the normal equations for an iteration cycle can be written as
formula_41
where formula_42 is the Jacobian matrix.
Geometrical interpretation.
When the independent variable is error-free a residual represents the "vertical" distance between the observed data point and the fitted curve (or surface). In total least squares a residual represents the distance between a data point and the fitted curve measured along some direction. In fact, if both variables are measured in the same units and the errors on both variables are the same, then the residual represents the shortest distance between the data point and the fitted curve, that is, the residual vector is perpendicular to the tangent of the curve. For this reason, this type of regression is sometimes called "two dimensional Euclidean regression" (Stein, 1983) or "orthogonal regression".
Scale invariant methods.
A serious difficulty arises if the variables are not measured in the same units. First consider measuring distance between a data point and the line: what are the measurement units for this distance? If we consider measuring distance based on Pythagoras' Theorem then it is clear that we shall be adding quantities measured in different units, which is meaningless. Secondly, if we rescale one of the variables e.g., measure in grams rather than kilograms, then we shall end up with different results (a different line). To avoid these problems it is sometimes suggested that we convert to dimensionless variables—this may be called normalization or standardization. However, there are various ways of doing this, and these lead to fitted models which are not equivalent to each other. One approach is to normalize by known (or estimated) measurement precision thereby minimizing the Mahalanobis distance from the points to the line, providing a maximum-likelihood solution; the unknown precisions could be found via analysis of variance.
In short, total least squares does not have the property of units-invariance—i.e. it is not scale invariant. For a meaningful model we require this property to hold. A way forward is to realise that residuals (distances) measured in different units can be combined if multiplication is used instead of addition. Consider fitting a line: for each data point the product of the vertical and horizontal residuals equals twice the area of the triangle formed by the residual lines and the fitted line. We choose the line which minimizes the sum of these areas. Nobel laureate Paul Samuelson proved in 1942 that, in two dimensions, it is the only line expressible solely in terms of the ratios of standard deviations and the correlation coefficient which (1) fits the correct equation when the observations fall on a straight line, (2) exhibits scale invariance, and (3) exhibits invariance under interchange of variables. This solution has been rediscovered in different disciplines and is variously known as standardised major axis (Ricker 1975, Warton et al., 2006), the reduced major axis, the geometric mean functional relationship (Draper and Smith, 1998), least products regression, diagonal regression, line of organic correlation, and the least areas line (Tofallis, 2002).
Tofallis (2015, 2023) has extended this approach to deal with multiple variables. The calculations are simpler than for total least squares as they only require knowledge of covariances, and can be computed using standard spreadsheet functions.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S=\\mathbf{r^TWr},"
},
{
"math_id": 1,
"text": "\\boldsymbol\\beta"
},
{
"math_id": 2,
"text": "\\mathbf{r=y-X\\boldsymbol\\beta}."
},
{
"math_id": 3,
"text": "\\mathbf M_y"
},
{
"math_id": 4,
"text": "\\mathbf{X^TWX\\boldsymbol\\beta=X^T Wy}."
},
{
"math_id": 5,
"text": "\\mathbf M_x"
},
{
"math_id": 6,
"text": "S=\\mathbf{r_x^TM_x^{-1}r_x+r_y^TM_y^{-1}r_y},"
},
{
"math_id": 7,
"text": "\\mathbf r_x"
},
{
"math_id": 8,
"text": "\\mathbf r_y"
},
{
"math_id": 9,
"text": "\\mathbf{f(r_x,r_y,\\boldsymbol\\beta)}"
},
{
"math_id": 10,
"text": "\\mathbf{F=\\Delta y -\\frac{\\partial f}{\\partial r_x} r_x-\\frac{\\partial f}{\\partial r_y} r_y -X\\Delta\\boldsymbol\\beta=0}."
},
{
"math_id": 11,
"text": "\\mathbf{X^TM^{-1}X\\Delta \\boldsymbol\\beta=X^T M^{-1} \\Delta y}, "
},
{
"math_id": 12,
"text": "\\mathbf{X^TM^{-1}X \\boldsymbol\\beta=X^T M^{-1} y},"
},
{
"math_id": 13,
"text": "\\mathbf{M=K_xM_xK_x^T+K_yM_yK_y^T;\\ K_x=-\\frac{\\partial f}{\\partial r_x},\\ K_y=-\\frac{\\partial f}{\\partial r_y}}."
},
{
"math_id": 14,
"text": "f(x_i,\\beta)=\\alpha + \\beta x_i"
},
{
"math_id": 15,
"text": "M_{ii}=\\sigma^2_{y,i}+\\beta^2 \\sigma^2_{x,i}"
},
{
"math_id": 16,
"text": "\\beta"
},
{
"math_id": 17,
"text": "M_{ii}=\\sigma^2_{y,i}+\\left(\\frac{dy}{dx}\\right)^2_i \\sigma^2_{x,i}"
},
{
"math_id": 18,
"text": "XB \\approx Y"
},
{
"math_id": 19,
"text": "\\mathrm{argmin}_{B,E,F} \\| [E\\; F] \\|_F, \\qquad (X+E) B = Y+F"
},
{
"math_id": 20,
"text": "[E\\; F]"
},
{
"math_id": 21,
"text": "\\|\\cdot\\|_F"
},
{
"math_id": 22,
"text": "[(X+E) \\; (Y+F)] \\begin{bmatrix} B\\\\ -I_k\\end{bmatrix} = 0."
},
{
"math_id": 23,
"text": "I_k"
},
{
"math_id": 24,
"text": "k\\times k"
},
{
"math_id": 25,
"text": "[X\\; Y]"
},
{
"math_id": 26,
"text": "[U] [\\Sigma] [V]^*"
},
{
"math_id": 27,
"text": "[X\\; Y] = [U_X\\; U_Y] \\begin{bmatrix}\\Sigma_X &0 \\\\ 0 & \\Sigma_Y\\end{bmatrix}\\begin{bmatrix}V_{XX} & V_{XY} \\\\ V_{YX} & V_{YY}\\end{bmatrix}^* = [U_X\\; U_Y] \\begin{bmatrix}\\Sigma_X &0 \\\\ 0 & \\Sigma_Y\\end{bmatrix} \\begin{bmatrix} V_{XX}^* & V_{YX}^* \\\\ V_{XY}^* & V_{YY}^*\\end{bmatrix}"
},
{
"math_id": 28,
"text": "U"
},
{
"math_id": 29,
"text": "V"
},
{
"math_id": 30,
"text": "k"
},
{
"math_id": 31,
"text": "[(X+E)\\; (Y+F)] = [U_X\\; U_Y] \\begin{bmatrix}\\Sigma_X &0 \\\\ 0 & 0_{k\\times k}\\end{bmatrix}\\begin{bmatrix}V_{XX} & V_{XY} \\\\ V_{YX} & V_{YY}\\end{bmatrix}^*"
},
{
"math_id": 32,
"text": "[E\\; F] = -[U_X\\; U_Y] \\begin{bmatrix}0_{n\\times n} &0 \\\\ 0 & \\Sigma_Y\\end{bmatrix}\\begin{bmatrix}V_{XX} & V_{XY} \\\\ V_{YX} & V_{YY}\\end{bmatrix}^*. "
},
{
"math_id": 33,
"text": "[E\\; F] = -U_Y\\Sigma_Y \\begin{bmatrix}V_{XY}\\\\V_{YY}\\end{bmatrix}^*= -[X\\; Y] \\begin{bmatrix}V_{XY}\\\\V_{YY}\\end{bmatrix}\\begin{bmatrix}V_{XY}\\\\ V_{YY}\\end{bmatrix}^*."
},
{
"math_id": 34,
"text": "[(X+E) \\; (Y+F)] \\begin{bmatrix}V_{XY}\\\\ V_{YY}\\end{bmatrix} = 0."
},
{
"math_id": 35,
"text": "V_{YY}"
},
{
"math_id": 36,
"text": "-V_{YY}^{-1}"
},
{
"math_id": 37,
"text": "[(X+E) \\; (Y+F)] \\begin{bmatrix} -V_{XY} V_{YY}^{-1} \\\\ -V_{YY} V_{YY}^{-1}\\end{bmatrix} = [(X+E) \\; (Y+F)] \\begin{bmatrix} B\\\\ -I_k\\end{bmatrix} = 0 ,"
},
{
"math_id": 38,
"text": "B=-V_{XY} V_{YY}^{-1}."
},
{
"math_id": 39,
"text": "B"
},
{
"math_id": 40,
"text": "X"
},
{
"math_id": 41,
"text": "\\mathbf{J^TM^{-1}J\\Delta \\boldsymbol\\beta=J^T M^{-1} \\Delta y}, "
},
{
"math_id": 42,
"text": "\\mathbf{J}"
}
] | https://en.wikipedia.org/wiki?curid=971437 |
9715483 | Collectionwise normal space | Property of topological spaces stronger than normality
In mathematics, a topological space formula_0 is called collectionwise normal if for every discrete family "F""i" ("i" ∈ "I") of closed subsets of formula_0 there exists a pairwise disjoint family of open sets "U""i" ("i" ∈ "I"), such that "F""i" ⊆ "U""i". Here a family formula_1 of subsets of formula_0 is called "discrete" when every point of formula_0 has a neighbourhood that intersects at most one of the sets from formula_1.
An equivalent definition of collectionwise normal demands that the above "U""i" ("i" ∈ "I") themselves form a discrete family, which is stronger than pairwise disjoint.
Some authors assume that formula_0 is also a T1 space as part of the definition, but no such assumption is made here.
The property is intermediate in strength between paracompactness and normality, and occurs in metrization theorems.
Hereditarily collectionwise normal space.
A topological space "X" is called hereditarily collectionwise normal if every subspace of "X" with the subspace topology is collectionwise normal.
In the same way that hereditarily normal spaces can be characterized in terms of separated sets, there is an equivalent characterization for hereditarily collectionwise normal spaces. A family formula_2 of subsets of "X" is called a separated family if for every "i", we have formula_3, with cl denoting the closure operator in "X", in other words if the family of formula_4 is discrete in its union. The following conditions are equivalent: (1) "X" is hereditarily collectionwise normal; (2) every separated family formula_2 of subsets of "X" admits a pairwise disjoint family of open sets formula_5 such that formula_6.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "\\mathcal{F}"
},
{
"math_id": 2,
"text": "F_i (i \\in I)"
},
{
"math_id": 3,
"text": "F_i \\cap \\operatorname{cl}(\\bigcup_{j \\ne i}F_j) = \\empty"
},
{
"math_id": 4,
"text": "F_i"
},
{
"math_id": 5,
"text": "U_i (i \\in I)"
},
{
"math_id": 6,
"text": "F_i \\subseteq U_i"
}
] | https://en.wikipedia.org/wiki?curid=9715483 |
97168 | Proofs of Fermat's little theorem |
This article collects together a variety of proofs of Fermat's little theorem, which states that
formula_0
for every prime number "p" and every integer "a" (see modular arithmetic).
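The statement is easy to check numerically for small cases; the following Python sketch (illustrative only, with an arbitrary choice of primes) verifies it for every residue modulo each listed prime:

for p in (2, 3, 5, 7, 11, 13):
    assert all(pow(a, p, p) == a % p for a in range(p)), p
print("Fermat's little theorem holds for the primes tested")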
Simplifications.
Some of the proofs of Fermat's little theorem given below depend on two simplifications.
The first is that we may assume that a is in the range 0 ≤ "a" ≤ "p" − 1. This is a simple consequence of the laws of modular arithmetic; we are simply saying that we may first reduce a modulo p. This is consistent with reducing formula_1 modulo p, as one can check.
Secondly, it suffices to prove that
formula_2
for a in the range 1 ≤ "a" ≤ "p" − 1. Indeed, if the previous assertion holds for such a, multiplying both sides by "a" yields the original form of the theorem,
formula_3
On the other hand, if "a"
0 or "a"
1, the theorem holds trivially.
Combinatorial proofs.
Proof by counting necklaces.
This is perhaps the simplest known proof, requiring the least mathematical background. It is an attractive example of a combinatorial proof (a proof that involves counting a collection of objects in two different ways).
The proof given here is an adaptation of Golomb's proof.
To keep things simple, let us assume that a is a positive integer. Consider all the possible strings of "p" symbols, using an alphabet with a different symbols. The total number of such strings is a^p, since there are a possibilities for each of p positions (see rule of product).
For example, if "p"
5 and "a"
2, then we can use an alphabet with two symbols (say A and B), and there are 25
32 strings of length 5:
"AAAAA", "AAAAB", "AAABA", "AAABB", "AABAA", "AABAB", "AABBA", "AABBB",
"ABAAA", "ABAAB", "ABABA", "ABABB", "ABBAA", "ABBAB", "ABBBA", "ABBBB",
"BAAAA", "BAAAB", "BAABA", "BAABB", "BABAA", "BABAB", "BABBA", "BABBB",
"BBAAA", "BBAAB", "BBABA", "BBABB", "BBBAA", "BBBAB", "BBBBA", "BBBBB".
We will argue below that if we remove the strings consisting of a single symbol from the list (in our example, "AAAAA" and "BBBBB"), the remaining a^p − a strings can be arranged into groups, each group containing exactly p strings. It follows that a^p − a is divisible by p.
Necklaces.
Let us think of each such string as representing a necklace. That is, we connect the two ends of the string together and regard two strings as the same necklace if we can rotate one string to obtain the second string; in this case we will say that the two strings are "friends". In our example, the following strings are all friends:
"AAAAB", "AAABA", "AABAA", "ABAAA", "BAAAA".
In full, each line of the following list corresponds to a single necklace, and the entire list comprises all 32 strings.
"AAAAB", "AAABA", "AABAA", "ABAAA", "BAAAA",
"AAABB", "AABBA", "ABBAA", "BBAAA", "BAAAB",
"AABAB", "ABABA", "BABAA", "ABAAB", "BAABA",
"AABBB", "ABBBA", "BBBAA", "BBAAB", "BAABB",
"ABABB", "BABBA", "ABBAB", "BBABA", "BABAB",
"ABBBB", "BBBBA", "BBBAB", "BBABB", "BABBB",
"AAAAA",
"BBBBB".
Notice that in the above list, each necklace with more than one distinct symbol is represented by 5 different strings, and the number of necklaces represented by just one string is 2, i.e. equal to the number of distinct symbols. Thus the list shows very clearly why 32 − 2 is divisible by 5.
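This grouping is also easy to check by direct computation. The following Python sketch (an illustration added here, not part of the original argument) enumerates all strings of length 5 over a two-symbol alphabet, groups them into rotation classes, and confirms that the 30 multi-symbol strings fall into classes of exactly 5.

```python
from itertools import product

a_symbols, p = "AB", 5  # alphabet of a = 2 symbols, prime length p = 5

def rotations(s):
    """All cyclic rotations of the string s."""
    return {s[i:] + s[:i] for i in range(len(s))}

strings = {"".join(t) for t in product(a_symbols, repeat=p)}
necklaces = []
while strings:
    s = strings.pop()
    friends = rotations(s)
    strings -= friends
    necklaces.append(friends)

multi = [n for n in necklaces if len(n) > 1]
assert all(len(n) == p for n in multi)                          # every multi-symbol class has p members
assert len(multi) * p == len(a_symbols) ** p - len(a_symbols)   # so a^p - a is divisible by p
print(len(multi), "necklaces of size", p)                       # 6 necklaces of size 5
```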
One can use the following rule to work out how many friends a given string "S" has:
If "S" is built up of several copies of the string "T", and "T" cannot itself be broken down further into repeating strings, then the number of friends of "S" (including "S" itself) is equal to the "length" of "T".
For example, suppose we start with the string "S" = "ABBABBABBABB", which is built up of several copies of the shorter string "T" = "ABB". If we rotate it one symbol at a time, we obtain the following 3 strings:
"ABBABBABBABB",
"BBABBABBABBA",
"BABBABBABBAB".
There aren't any others because ABB is exactly 3 symbols long and cannot be broken down into further repeating strings.
Completing the proof.
Using the above rule, we can complete the proof of Fermat's little theorem quite easily, as follows. Our starting pool of "a""p" strings may be split into two categories: the strings that consist of "p" copies of a single symbol, of which there are exactly "a", and the strings that contain at least two distinct symbols.
The second category contains "a""p" − "a" strings. By the rule above, each string in this category belongs to a necklace with exactly "p" friends: its shortest repeating block "T" has a length that divides the prime "p", and that length cannot be 1 (the string is not built from a single symbol), so it must be "p". The strings may therefore be arranged into groups of "p" strings, one group for each necklace, and so "a""p" − "a" must be divisible by "p", as promised.
Proof using dynamical systems.
This proof uses some basic concepts from dynamical systems.
We start by considering a family of functions "T""n"("x"), where "n" ≥ 2 is an integer, mapping the interval [0, 1] to itself by the formula
formula_4
where {"y"} denotes the fractional part of "y". For example, the function "T"3("x") is illustrated below:
A number "x"0 is said to be a "fixed point" of a function "f"("x") if "f"("x"0) = "x"0; in other words, if "f" leaves "x"0 fixed. The fixed points of a function can be easily found graphically: they are simply the "x" coordinates of the points where the graph of "f"("x") intersects the graph of the line "y" = "x". For example, the fixed points of the function "T"3("x") are 0, 1/2, and 1; they are marked by black circles on the following diagram:
We will require the following two lemmas.
Lemma 1. For any "n" ≥ 2, the function "T""n"("x") has exactly "n" fixed points.
Proof. There are 3 fixed points in the illustration above, and the same sort of geometrical argument applies for any "n" ≥ 2.
Lemma 2. For any positive integers "n" and "m", and any 0 ≤ x ≤ 1,
formula_5
In other words, "T""mn"("x") is the composition of "T""n"("x") and "T""m"("x").
Proof. The proof of this lemma is not difficult, but we need to be slightly careful with the endpoint "x" = 1. For this point the lemma is clearly true, since
formula_6
So let us assume that 0 ≤ "x" < 1. In this case,
formula_7
so "T""m"("T""n"("x")) is given by
formula_8
Therefore, what we really need to show is that
formula_9
To do this we observe that {"nx"} = "nx" − "k", where "k" is the integer part of "nx"; then
formula_10
since "mk" is an integer.
Now let us properly begin the proof of Fermat's little theorem, by studying the function "T""a""p"("x"). We will assume that "a" ≥ 2. From Lemma 1, we know that it has "a""p" fixed points. By Lemma 2 we know that
formula_11
so any fixed point of "T""a"("x") is automatically a fixed point of "T""a""p"("x").
We are interested in the fixed points of "T""a""p"("x") that are "not" fixed points of "T""a"("x"). Let us call the set of such points "S". There are "a""p" − "a" points in "S", because by Lemma 1 again, "T""a"("x") has exactly "a" fixed points. The following diagram illustrates the situation for "a" = 3 and "p" = 2. The black circles are the points of "S", of which there are 32 − 3 = 6.
The main idea of the proof is now to split the set "S" up into its orbits under "T""a". What this means is that we pick a point "x"0 in "S", and repeatedly apply "T""a"(x) to it, to obtain the sequence of points
formula_12
This sequence is called the orbit of "x"0 under "T""a". By Lemma 2, this sequence can be rewritten as
formula_13
Since we are assuming that "x"0 is a fixed point of "T""a" "p"("x"), after "p" steps we hit "T""a""p"("x"0) = "x"0, and from that point onwards the sequence repeats itself.
However, the sequence "cannot" begin repeating itself any earlier than that. If it did, the length of the repeating section would have to be a divisor of "p", so it would have to be 1 (since "p" is prime). But this contradicts our assumption that "x"0 is not a fixed point of "T""a".
In other words, the orbit contains exactly "p" distinct points. This holds for every orbit of "S". Therefore, the set "S", which contains "a""p" − "a" points, can be broken up into orbits, each containing "p" points, so "a""p" − "a" is divisible by "p".
(This proof is essentially the same as the necklace-counting proof given above, simply viewed through a different lens: one may think of the interval [0, 1] as given by sequences of digits in base "a" (our distinction between 0 and 1 corresponding to the familiar distinction between representing integers as ending in ".0000..." and ".9999..."). "T""a""n" amounts to shifting such a sequence by "n" many digits. The fixed points of this will be sequences that are cyclic with period dividing "n". In particular, the fixed points of "T""a""p" can be thought of as the necklaces of length "p", with "T""a""n" corresponding to rotation of such necklaces by "n" spots.
This proof could also be presented without distinguishing between 0 and 1, simply using the half-open interval [0, 1); then "T""n" would only have "n" − 1 fixed points, but "T""a""p" − "T""a" would still work out to "a""p" − "a", as needed.)
Multinomial proofs.
Proof 1.
This proof, due to Euler, uses induction to prove the theorem for all integers "a" ≥ 0.
The base step, that 0"p" ≡ 0 (mod "p"), is trivial. Next, we must show that if the theorem is true for "a"
"k", then it is also true for "a"
"k" + 1. For this inductive step, we need the following lemma.
Lemma. For any integers "x" and "y" and for any prime "p", ("x" + "y")"p" ≡ "xp" + "yp" (mod "p").
The lemma is a case of the freshman's dream. Leaving the proof for later on, we proceed with the induction.
Proof. Assume "k""p" ≡ "k" (mod "p"), and consider ("k"+1)"p". By the lemma we have
formula_14
Using the induction hypothesis, we have that "k""p" ≡ "k" (mod "p"); and, trivially, 1"p" = 1. Thus
formula_15
which is the statement of the theorem for "a" = "k"+1. ∎
In order to prove the lemma, we must introduce the binomial theorem, which states that for any positive integer "n",
formula_16
where the coefficients are the binomial coefficients,
formula_17
described in terms of the factorial function, "n"! = 1×2×3×⋯×"n".
Proof of Lemma. We consider the binomial coefficient when the exponent is a prime "p":
formula_18
The binomial coefficients are all integers. The numerator contains a factor "p" by the definition of factorial. When 0 < "i" < "p", neither of the terms in the denominator includes a factor of "p" (relying on the primality of "p"), leaving the coefficient itself to possess a prime factor of "p" from the numerator, implying that
formula_19
Modulo "p", this eliminates all but the first and last terms of the sum on the right-hand side of the binomial theorem for prime "p". ∎
The primality of "p" is essential to the lemma; otherwise, we have examples like
formula_20
which is not divisible by 4.
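This divisibility is easy to verify numerically. The sketch below (illustrative only; the prime 7 is an arbitrary choice) checks that every interior binomial coefficient of a prime exponent is divisible by that prime, and exhibits the failure for the composite exponent 4.

```python
from math import comb

p = 7
interior = [comb(p, i) for i in range(1, p)]
print(interior)                        # [7, 21, 35, 35, 21, 7]
assert all(c % p == 0 for c in interior)

# For a composite exponent the lemma fails: C(4, 2) = 6 is not divisible by 4.
print(comb(4, 2) % 4)                  # 2
```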
Proof 2.
Using the Lemma, we have:
formula_21.
Proof using the multinomial expansion.
The proof, which was first discovered by Leibniz (who did not publish it) and later rediscovered by Euler, is a very simple application of the multinomial theorem, which states
formula_22
where
formula_23
and the summation is taken over all sequences of nonnegative integer indices "k"1, "k"2, ..., "km" such that the sum of all "ki" is "n".
Thus if we express "a" as a sum of 1s (ones), we obtain
formula_24
Clearly, if "p" is prime, and if "kj" is not equal to "p" for any "j", we have
formula_25
and if "kj" is equal to "p" for some "j" then
formula_26
Since there are exactly "a" elements such that "kj"
"p" for some "j", the theorem follows.
(This proof is essentially a coarser-grained variant of the necklace-counting proof given earlier; the multinomial coefficients count the number of ways a string can be permuted into arbitrary anagrams, while the necklace argument counts the number of ways a string can be rotated into cyclic anagrams. That is to say, that the nontrivial multinomial coefficients here are divisible by "p" can be seen as a consequence of the fact that each nontrivial necklace of length "p" can be unwrapped into a string in "p" many ways.
This multinomial expansion is also, of course, what essentially underlies the binomial theorem-based proof above)
Proof using power product expansions.
An additive-combinatorial proof based on formal power product expansions was given by Giedrius Alkauskas. This proof uses neither the Euclidean algorithm nor the binomial theorem, but rather it employs formal power series with rational coefficients.
Proof as a particular case of Euler's theorem.
This proof, discovered by James Ivory and rediscovered by Dirichlet, requires some background in modular arithmetic.
Let us assume that "a" is positive and not divisible by "p".
The idea is that if we write down the sequence of numbers
formula_39
and reduce each one modulo "p", the resulting sequence turns out to be a rearrangement of
formula_27
Therefore, if we multiply together the numbers in each sequence, the results must be identical modulo "p":
formula_28
Collecting together the "a" terms yields
formula_29
Finally, we may “cancel out” the numbers 1, 2, ..., "p" − 1 from both sides of this equation, obtaining
formula_30
There are two steps in the above proof that we need to justify: first, that the numbers "a", 2"a", ..., ("p" − 1)"a", when reduced modulo "p", really are a rearrangement of the numbers 1, 2, ..., "p" − 1, and second, that it is valid to “cancel” common factors in modular arithmetic.
We will prove these things below; let us first see an example of this proof in action.
An example.
If "a"
3 and "p"
7, then the sequence in question is
formula_31
reducing modulo 7 gives
formula_32
which is just a rearrangement of
formula_33
Multiplying them together gives
formula_34
that is,
formula_35
Canceling out 1 × 2 × 3 × 4 × 5 × 6 yields
formula_36
which is Fermat's little theorem for the case "a" = 3 and "p" = 7.
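The same computation can be repeated for any small prime. The following Python sketch (an added illustration using the same arbitrary values a = 3 and p = 7) checks both the rearrangement of the multiples and the resulting congruence.

```python
a, p = 3, 7

multiples = [(k * a) % p for k in range(1, p)]
print(multiples)                                # [3, 6, 2, 5, 1, 4]
assert sorted(multiples) == list(range(1, p))   # a rearrangement of 1, 2, ..., p - 1
assert pow(a, p - 1, p) == 1                    # Fermat's little theorem: 3^6 ≡ 1 (mod 7)
```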
The cancellation law.
Let us first explain why it is valid, in certain situations, to “cancel”. The exact statement is as follows. If "u", "x", and "y" are integers, and "u" is not divisible by a prime number "p", and if
"ux" ≡ "uy" (mod "p"),     (C)
then we may “cancel” "u" to obtain
"x" ≡ "y" (mod "p").     (D)
Our use of this cancellation law in the above proof of Fermat's little theorem was valid because the numbers 1, 2, ..., "p" − 1 are certainly not divisible by "p" (indeed they are "smaller" than "p").
We can prove the cancellation law easily using Euclid's lemma, which generally states that if a prime "p" divides a product "ab" (where "a" and "b" are integers), then "p" must divide "a" or "b". Indeed, the assertion (C) simply means that "p" divides "ux" − "uy" = "u"("x" − "y"). Since "p" is a prime which does not divide "u", Euclid's lemma tells us that it must divide "x" − "y" instead; that is, (D) holds.
Note that the conditions under which the cancellation law holds are quite strict, and this explains why Fermat's little theorem demands that p is a prime. For example, 2×2 ≡ 2×5 (mod 6), but it is not true that 2 ≡ 5 (mod 6). However, the following generalization of the cancellation law holds: if u, x, y, and z are integers, if u and z are relatively prime, and if
formula_37
then we may “cancel” "u" to obtain
formula_38
This follows from a generalization of Euclid's lemma.
The rearrangement property.
Finally, we must explain why the sequence
formula_39
when reduced modulo "p", becomes a rearrangement of the sequence
formula_27
To start with, none of the terms "a", 2"a", ..., ("p" − 1)"a" can be congruent to zero modulo "p", since if "k" is one of the numbers 1, 2, ..., "p" − 1, then "k" is relatively prime with "p", and so is "a", so Euclid's lemma tells us that "ka" shares no factor with "p". Therefore, at least we know that the numbers "a", 2"a", ..., ("p" − 1)"a", when reduced modulo "p", must be found among the numbers 1, 2, 3, ..., "p" − 1.
Furthermore, the numbers "a", 2"a", ..., ("p" − 1)"a" must all be "distinct" after reducing them modulo "p", because if
formula_40
where "k" and "m" are one of 1, 2, ..., "p" − 1, then the cancellation law tells us that
formula_41
Since both "k" and "m" are between 1 and "p" − 1, they must be equal. Therefore, the terms "a", 2"a", ..., ("p" − 1)"a" when reduced modulo "p" must be distinct.
To summarise: when we reduce the "p" − 1 numbers "a", 2"a", ..., ("p" − 1)"a" modulo "p", we obtain distinct members of the sequence 1, 2, ..., "p" − 1. Since there are exactly "p" − 1 of these, the only possibility is that the former are a rearrangement of the latter.
Applications to Euler's theorem.
This method can also be used to prove Euler's theorem, with a slight alteration in that the numbers from 1 to "p" − 1 are substituted by the numbers less than and coprime with some number m (not necessarily prime). Both the rearrangement property and the cancellation law (under the generalized form mentioned above) are still satisfied and can be utilized.
For example, if "m"
10, then the numbers less than "m" and coprime with "m" are 1, 3, 7, and 9. Thus we have:
formula_42
Therefore,
formula_43
Proofs using group theory.
Standard proof.
This proof requires the most basic elements of group theory.
The idea is to recognise that the set "G" = {1, 2, ..., "p" − 1}, with the operation of multiplication (taken modulo "p"), forms a group. The only group axiom that requires some effort to verify is that each element of "G" is invertible. Taking this on faith for the moment, let us assume that "a" is in the range 1 ≤ "a" ≤ "p" − 1, that is, "a" is an element of "G". Let "k" be the order of "a", that is, "k" is the smallest positive integer such that "ak" ≡ 1 (mod "p"). Then the numbers 1, "a", "a"2, ..., "a""k" −1 reduced modulo "p" form a subgroup of "G" whose order is "k" and therefore, by Lagrange's theorem, "k" divides the order of "G", which is "p" − 1. So "p" − 1 = "km" for some positive integer "m" and then
formula_44
To prove that every element "b" of "G" is invertible, we may proceed as follows. First, "b" is coprime to "p". Thus Bézout's identity assures us that there are integers "x" and "y" such that "bx" + "py" = 1. Reading this equality modulo "p", we see that "x" is an inverse for "b", since "bx" ≡ 1 (mod "p"). Therefore, every element of "G" is invertible. So, as remarked earlier, "G" is a group.
For example, when "p"
11, the inverses of each element are given as follows:
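This small table can be reproduced with a brute-force search; the Python sketch below (an added illustration for the same value p = 11) finds each inverse by trial. In Python 3.8 and later, pow(b, -1, p) returns the same modular inverse directly.

```python
p = 11
inverses = {b: next(x for x in range(1, p) if (b * x) % p == 1) for b in range(1, p)}
print(inverses)
# {1: 1, 2: 6, 3: 4, 4: 3, 5: 9, 6: 2, 7: 8, 8: 7, 9: 5, 10: 10}
```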
Euler's proof.
If we take the previous proof and, instead of using Lagrange's theorem, we try to prove it in this specific situation, then we get Euler's third proof, which is the one that he found more natural. Let "A" be the set whose elements are the numbers 1, "a", "a"2, ..., "a""k" − 1 reduced modulo "p". If "A" = "G", then "k" = "p" − 1 and therefore "k" divides "p" −1. Otherwise, there is some "b"1 ∈ "G"\"A".
Let "A"1 be the set whose elements are the numbers "b"1, "ab"1, "a"2"b"1, ..., "a""k" − 1"b"1 reduced modulo "p". Then "A"1 has "k" distinct elements because otherwise there would be two distinct numbers "m", "n" ∈ {0, 1, ..., "k" − 1} such that "amb"1 ≡ "anb"1 (mod "p"), which is impossible, since it would follow that "am" ≡ "an" (mod "p"). On the other hand, no element of "A"1 can be an element of "A", because otherwise there would be numbers "m", "n" ∈ {0, 1, ..., "k" − 1} such that "amb"1 ≡ "an" (mod "p"), and then "b"1 ≡ "ana""k" − "m" ≡ "a""n" + "k" − "m" (mod "p"), which is impossible, since "b"1 ∉ "A".
So, the set "A"∪"A"1 has 2"k" elements. If it turns out to be equal to "G", then 2"k"
"p" −1 and therefore "k" divides "p" −1. Otherwise, there is some "b"2 ∈ "G"\("A"∪"A"1) and we can start all over again, defining "A"2 as the set whose elements are the numbers "b"2, "ab"2, "a"2"b"2, ..., "a""k" − 1"b"2 reduced modulo "p". Since "G" is finite, this process must stop at some point and this proves that "k" divides "p" − 1.
For instance, if "a"
5 and "p"
13, then, since
25 ≡ 12 (mod 13),
125 ≡ 8 (mod 13),
625 ≡ 1 (mod 13),
we have "k"
4 and "A"
{1, 5, 8, 12}. Clearly, "A" ≠ "G"
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}. Let "b"1 be an element of "G"\"A"; for instance, take "b"1
2. Then, since
2 × 1 = 2,
2 × 5 = 10,
2 × 8 = 16 ≡ 3 (mod 13),
2 × 12 = 24 ≡ 11 (mod 13),
we have "A"1 = {2, 3, 10, 11}. Clearly, "A"∪"A"1 ≠ "G". Let "b"2 be an element of "G"\("A"∪"A"1); for instance, take "b"2 = 4. Then, since
4 × 1 = 4,
4 × 5 = 20 ≡ 7 (mod 13),
4 × 8 = 32 ≡ 6 (mod 13),
4 × 12 = 48 ≡ 9 (mod 13),
we have "A"2 = {4, 6, 7, 9}. And now "G" = "A"∪"A"1∪"A"2.
Note that the sets "A", "A"1, and so on are in fact the cosets of "A" in "G". | [
{
"math_id": 0,
"text": "a^p \\equiv a \\pmod p"
},
{
"math_id": 1,
"text": "a^p "
},
{
"math_id": 2,
"text": "a^{p-1} \\equiv 1 \\pmod p"
},
{
"math_id": 3,
"text": "a^p \\equiv a \\pmod p "
},
{
"math_id": 4,
"text": "T_n(x) = \\begin{cases}\n \\{ nx \\} & 0 \\leq x < 1, \\\\\n 1 & x = 1,\n\\end{cases}"
},
{
"math_id": 5,
"text": "T_m(T_n(x)) = T_{mn}(x)."
},
{
"math_id": 6,
"text": "T_m(T_n(1)) = T_m(1) = 1 = T_{mn}(1)."
},
{
"math_id": 7,
"text": "T_n(x) = \\{nx\\} < 1, "
},
{
"math_id": 8,
"text": "T_m(T_n(x)) = \\{m\\{nx\\}\\}."
},
{
"math_id": 9,
"text": "\\{m\\{nx\\}\\} = \\{mnx\\}."
},
{
"math_id": 10,
"text": "\\{m\\{nx\\}\\} = \\{mnx - mk\\} = \\{mnx\\},"
},
{
"math_id": 11,
"text": "T_{a^p}(x) = \\underbrace{T_a(T_a( \\cdots T_a(x) \\cdots ))}_{p\\text{ times}},"
},
{
"math_id": 12,
"text": " x_0, T_a(x_0), T_a(T_a(x_0)), T_a(T_a(T_a(x_0))), \\ldots. "
},
{
"math_id": 13,
"text": " x_0, T_a(x_0), T_{a^2}(x_0), T_{a^3}(x_0), \\ldots. "
},
{
"math_id": 14,
"text": "(k+1)^p \\equiv k^p + 1^p \\pmod{p}."
},
{
"math_id": 15,
"text": "(k+1)^p \\equiv k + 1 \\pmod{p},"
},
{
"math_id": 16,
"text": "(x+y)^n=\\sum_{i=0}^n{n \\choose i}x^{n-i}y^i,"
},
{
"math_id": 17,
"text": "{n \\choose i}=\\frac{n!}{i!(n-i)!},"
},
{
"math_id": 18,
"text": "{p \\choose i}=\\frac{p!}{i!(p-i)!}"
},
{
"math_id": 19,
"text": "{p \\choose i} \\equiv 0 \\pmod{p},\\qquad 0 < i < p."
},
{
"math_id": 20,
"text": "{4 \\choose 2} = 6,"
},
{
"math_id": 21,
"text": "k^p \\equiv ((k-1)+1)^p \\equiv (k-1)^p + 1 \\equiv ((k-2)+1)^p + 1 \\equiv (k-2)^p + 2 \\equiv \\dots \\equiv k \\pmod{p}"
},
{
"math_id": 22,
"text": "(x_1 + x_2 + \\cdots + x_m)^n\n = \\sum_{k_1,k_2,\\dots,k_m \\atop k_1+k_2+\\dots+k_m=n} {n \\choose k_1, k_2, \\ldots, k_m}\n x_1^{k_1} x_2^{k_2} \\cdots x_m^{k_m}"
},
{
"math_id": 23,
"text": "{n \\choose k_1, k_2, \\ldots, k_m} = \\frac{n!}{k_1! k_2! \\dots k_m!}"
},
{
"math_id": 24,
"text": "a^p = (1 + 1 + ... + 1)^p\n = \\sum_{k_1,k_2,\\dots,k_a \\atop k_1+k_2+\\dots+k_a=p} {p \\choose k_1, k_2, \\ldots, k_a}"
},
{
"math_id": 25,
"text": "{p \\choose k_1, k_2, \\ldots, k_a} \\equiv 0 \\pmod p,"
},
{
"math_id": 26,
"text": "{p \\choose k_1, k_2, \\ldots, k_a} = 1."
},
{
"math_id": 27,
"text": "1, 2, 3, \\ldots, p-1."
},
{
"math_id": 28,
"text": "a \\times 2a \\times 3a \\times \\cdots \\times (p-1)a \\equiv 1 \\times 2 \\times 3 \\times \\cdots \\times (p-1) \\pmod p."
},
{
"math_id": 29,
"text": "a^{p-1} (p-1)! \\equiv (p-1)! \\pmod p."
},
{
"math_id": 30,
"text": "a^{p-1} \\equiv 1 \\pmod p."
},
{
"math_id": 31,
"text": "3, 6, 9, 12, 15, 18;"
},
{
"math_id": 32,
"text": "3, 6, 2, 5, 1, 4,"
},
{
"math_id": 33,
"text": "1, 2, 3, 4, 5, 6."
},
{
"math_id": 34,
"text": "3 \\times 6 \\times 9 \\times 12 \\times 15 \\times 18 \\equiv 3 \\times 6 \\times 2 \\times 5 \\times 1 \\times 4 \\equiv 1 \\times 2 \\times 3 \\times 4 \\times 5 \\times 6 \\pmod 7;"
},
{
"math_id": 35,
"text": "3^6 (1 \\times 2 \\times 3 \\times 4 \\times 5 \\times 6) \\equiv (1 \\times 2 \\times 3 \\times 4 \\times 5 \\times 6) \\pmod 7."
},
{
"math_id": 36,
"text": "3^6 \\equiv 1 \\pmod 7, "
},
{
"math_id": 37,
"text": "ux \\equiv uy \\pmod z,"
},
{
"math_id": 38,
"text": "x \\equiv y \\pmod z."
},
{
"math_id": 39,
"text": "a, 2a, 3a, \\ldots, (p-1)a, "
},
{
"math_id": 40,
"text": "ka \\equiv ma \\pmod p, "
},
{
"math_id": 41,
"text": "k \\equiv m \\pmod p. "
},
{
"math_id": 42,
"text": "a \\times 3a \\times 7a \\times 9a \\equiv 1 \\times 3 \\times 7 \\times 9 \\pmod {10}. "
},
{
"math_id": 43,
"text": "{a^{\\varphi(10)}} \\equiv 1 \\pmod {10}. "
},
{
"math_id": 44,
"text": "a^{p-1} \\equiv a^{km} \\equiv (a^k)^m \\equiv 1^m \\equiv 1 \\pmod p. "
}
] | https://en.wikipedia.org/wiki?curid=97168 |
971691 | Plane curve | Mathematical concept
In mathematics, a plane curve is a curve in a plane that may be a Euclidean plane, an affine plane or a projective plane. The most frequently studied cases are smooth plane curves (including piecewise smooth plane curves), and algebraic plane curves.
Plane curves also include the Jordan curves (curves that enclose a region of the plane but need not be smooth) and the graphs of continuous functions.
Symbolic representation.
A plane curve can often be represented in Cartesian coordinates by an implicit equation of the form formula_0 for some specific function "f". If this equation can be solved explicitly for "y" or "x" – that is, rewritten as formula_1 or formula_2 for specific function "g" or "h" – then this provides an alternative, explicit, form of the representation. A plane curve can also often be represented in Cartesian coordinates by a parametric equation of the form formula_3 for specific functions formula_4 and formula_5
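As a concrete illustration (the unit circle is used here as an added example, not one taken from the article), the same plane curve admits an implicit, an explicit, and a parametric description, and the short Python check below confirms that a parametrically generated point satisfies the implicit equation.

```python
import math

def implicit(x, y):
    """Implicit form f(x, y) = 0 of the unit circle."""
    return x**2 + y**2 - 1

def parametric(t):
    """Parametric form (x(t), y(t)) of the same curve."""
    return math.cos(t), math.sin(t)

t = 1.3
x, y = parametric(t)
print(abs(implicit(x, y)) < 1e-12)   # True: the parametric point lies on the implicit curve
print(y, math.sqrt(1 - x**2))        # explicit form y = g(x) on the upper half of the circle
```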
Plane curves can sometimes also be represented in alternative coordinate systems, such as polar coordinates that express the location of each point in terms of an angle and a distance from the origin.
Smooth plane curve.
A smooth plane curve is a curve in a real Euclidean plane &NoBreak;&NoBreak; and is a one-dimensional smooth manifold. This means that a smooth plane curve is a plane curve which "locally looks like a line", in the sense that near every point, it may be mapped to a line by a smooth function.
Equivalently, a smooth plane curve can be given locally by an equation formula_6 where &NoBreak;&NoBreak; is a smooth function, and the partial derivatives &NoBreak;&NoBreak; and &NoBreak;&NoBreak; are never both 0 at a point of the curve.
Algebraic plane curve.
An algebraic plane curve is a curve in an affine or projective plane given by one polynomial equation formula_7 (or formula_8 where F is a homogeneous polynomial, in the projective case.)
Algebraic curves have been studied extensively since the 18th century.
Every algebraic plane curve has a degree, the degree of the defining equation, which is equal, in case of an algebraically closed field, to the number of intersections of the curve with a line in general position. For example, the circle given by the equation formula_9 has degree 2.
The non-singular plane algebraic curves of degree 2 are called conic sections, and their projective completion are all isomorphic to the projective completion of the circle formula_9 (that is the projective curve of equation formula_10). The plane curves of degree 3 are called cubic plane curves and, if they are non-singular, elliptic curves. Those of degree 4 are called quartic plane curves.
Examples.
Numerous examples of plane curves are shown in Gallery of curves and listed at List of curves. The algebraic curves of degree 1 or 2 are shown here (an algebraic curve of degree less than 3 is always contained in a plane): | [
{
"math_id": 0,
"text": "f(x,y)=0"
},
{
"math_id": 1,
"text": "y=g(x)"
},
{
"math_id": 2,
"text": "x=h(y)"
},
{
"math_id": 3,
"text": "(x,y)=(x(t), y(t))"
},
{
"math_id": 4,
"text": "x(t)"
},
{
"math_id": 5,
"text": "y(t)."
},
{
"math_id": 6,
"text": "f(x, y) = 0,"
},
{
"math_id": 7,
"text": "f(x,y) = 0"
},
{
"math_id": 8,
"text": "F(x,y,z) = 0,"
},
{
"math_id": 9,
"text": "x^2 + y^2 = 1"
},
{
"math_id": 10,
"text": "x^2 + y^2 - z^2 = 0"
}
] | https://en.wikipedia.org/wiki?curid=971691 |
9718351 | Quantum state space | Mathematical space representing physical quantum systems
In physics, a quantum state space is an abstract space in which different "positions" represent not literal locations, but rather quantum states of some physical system. It is the quantum analog of the phase space of classical mechanics.
Relative to Hilbert space.
In quantum mechanics a state space is a separable complex Hilbert space. The dimension of this Hilbert space depends on the system we choose to describe. The different states that could come out of any particular measurement form an orthonormal basis, so any state vector in the state space can be written as a linear combination of these basis vectors. Having a nonzero component along multiple dimensions is called a superposition. In the formalism of quantum mechanics these state vectors are often written using Dirac's compact bra–ket notation.
Examples.
The spin state of a silver atom in the Stern–Gerlach experiment can be represented in a two state space. The spin can be aligned with a measuring apparatus (arbitrarily called 'up') or oppositely ('down'). In Dirac's notation these two states can be written as formula_0. The space of a two spin system has four states, formula_1.
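A small numerical sketch (added here for illustration, using NumPy and an arbitrary choice of amplitudes) shows how the four two-spin basis states arise as Kronecker products of the single-spin states, and how a normalized superposition is written in that basis.

```python
import numpy as np

up = np.array([1.0, 0.0])     # |u>
down = np.array([0.0, 1.0])   # |d>

# The four basis states of the two-spin state space: |uu>, |ud>, |du>, |dd>
pairs = {"uu": (up, up), "ud": (up, down), "du": (down, up), "dd": (down, down)}
basis = {name: np.kron(s1, s2) for name, (s1, s2) in pairs.items()}

# An arbitrary superposition with nonzero components along two basis directions
psi = basis["ud"] + basis["du"]
psi = psi / np.linalg.norm(psi)
print(psi)                    # [0.         0.70710678 0.70710678 0.        ]
```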
The spin state is a discrete degree of freedom; quantum state spaces can have continuous degrees of freedom. For example, a particle in one space dimension has one degree of freedom ranging from formula_2 to formula_3. In Dirac notation, the states in this space might be written as formula_4 or formula_5.
Relative to 3D space.
Even in the early days of quantum mechanics, the state space (or configurations as they were called at first) was understood to be essential for understanding simple quantum-mechanical problems. In 1929, Nevill Mott showed that "tendency to picture the wave as existing in ordinary three dimensional space, whereas we are really dealing with wave functions in multispace" makes analysis of simple interaction problems more difficult. Mott analyzes formula_6-particle emission in a cloud chamber. The emission process is isotropic, a spherical wave in quantum mechanics, but the tracks observed are linear.
As Mott says, "it is a little difficult to picture how it is that an outgoing spherical wave can produce a straight track; we think intuitively that it should ionise atoms at random throughout space". This issue became known as the Mott problem. Mott then derives the straight track by considering correlations between the positions of the source and two representative atoms, showing that consecutive ionization results from just that state in which all three positions are co-linear.
Relative to classical phase space.
Classical mechanics for multiple objects describes their motion in terms of a list or vector of every object's coordinates and velocity. As the objects move, the values in the vector change; the set of all possible values is called a phase space. In quantum mechanics a state space is similar; however, in the state space two vectors which are scalar multiples of each other represent the same state. Furthermore, the character of the values in a quantum state differs from the classical values: in the quantum case the values can only be measured statistically (by repetition over many examples) and thus do not have well-defined values at every instant of time.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " |u\\rangle, |d\\rangle "
},
{
"math_id": 1,
"text": " |uu\\rangle, |ud\\rangle, |du\\rangle, |dd\\rangle "
},
{
"math_id": 2,
"text": "-\\infty"
},
{
"math_id": 3,
"text": "\\infty"
},
{
"math_id": 4,
"text": "|q\\rangle"
},
{
"math_id": 5,
"text": "|\\psi\\rangle"
},
{
"math_id": 6,
"text": "\\alpha"
}
] | https://en.wikipedia.org/wiki?curid=9718351 |
9718355 | Eilenberg's inequality | Eilenberg's inequality, also known as the coarea inequality is a mathematical inequality for Lipschitz-continuous functions between metric spaces. Informally, it gives an upper bound on the average size of the fibers of a Lipschitz map in terms of the Lipschitz constant of the function and the measure of the domain.
The Eilenberg's inequality has applications in geometric measure theory and manifold theory. It is also a key ingredient in the proof of the coarea formula.
Formal statement.
Let "ƒ" : "X" → "Y" be a Lipschitz-continuous function between metric spaces whose Lipschitz constant is denoted by Lip "ƒ". Let s and t be nonnegative real numbers. Then, Eilenberg's inequality states that
formula_0
for any "A" ⊂ "X".
The use of upper integral is necessary because in general the function formula_1
may fail to be "H""t" measurable.
History.
The inequality was first proved by Eilenberg in 1938 for the case when the function was the distance to a fixed point in the metric space. Then it was generalized in 1943 by Eilenberg and Harold to the case of any real-valued Lipschitz function on a metric space.
The inequality in the form above was proved by Federer in 1954, except that he could prove it only under additional assumptions that he conjectured were unnecessary. Years later, Davies proved some deep results about Hausdorff contents and this conjecture was proved as a consequence. But recently a new proof, independent of Davies's result, has been found as well.
About the proof.
In many texts the inequality is proved for the case where the target space is a Euclidean space or a manifold. This is because the isodiametric inequality is available (locally in the case of manifolds), which allows for a straightforward proof. The isodiametric inequality is not available in general metric spaces. The proof of Eilenberg's inequality in the general case is quite involved and requires the notion of the so-called weighted integrals.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\int_Y^* H^{s}(f^{-1}(y) \\cap A) \\, dH^t(y) \\leq \\frac{v_{s}v_t}{v_{s+t}}(\\text{Lip }f)^t H^{s+t}(A), "
},
{
"math_id": 1,
"text": "\\ y \\mapsto H^{s}(A\\cap f^{-1}(y))"
}
] | https://en.wikipedia.org/wiki?curid=9718355 |
9719512 | Bonnesen's inequality | Relates the length, area and radius of the incircle and the circumcircle of a Jordan curve
Bonnesen's inequality is an inequality relating the length, the area, the radius of the incircle and the radius of the circumcircle of a Jordan curve. It is a strengthening of the classical isoperimetric inequality.
More precisely, consider a planar simple closed curve of length formula_0 bounding a domain of area formula_1. Let formula_2 and formula_3 denote the radii of the incircle and the circumcircle. Bonnesen proved the inequality
formula_4
The term formula_5 in the right hand side is known as the "isoperimetric defect".
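A quick numerical check of the inequality is possible for specific curves; the sketch below (an added example using an ellipse, with SciPy's complete elliptic integral for the perimeter) evaluates both sides for semi-axes 2 and 1, where the inradius and circumradius are the semi-minor and semi-major axes.

```python
import math
from scipy.special import ellipe

a, b = 2.0, 1.0                        # semi-axes of the ellipse, a >= b
A = math.pi * a * b                    # enclosed area
L = 4 * a * ellipe(1 - (b / a) ** 2)   # perimeter, 4a E(m) with parameter m = 1 - b^2/a^2
r, R = b, a                            # inradius and circumradius of the ellipse

lhs = math.pi ** 2 * (R - r) ** 2
rhs = L ** 2 - 4 * math.pi * A         # the isoperimetric defect
print(lhs, rhs, lhs <= rhs)            # Bonnesen's inequality holds for this curve
```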
Loewner's torus inequality with isosystolic defect is a systolic analogue of Bonnesen's inequality.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "R"
},
{
"math_id": 4,
"text": " \\pi^2 (R-r)^2 \\leq L^2-4\\pi A. "
},
{
"math_id": 5,
"text": " L^2-4\\pi A"
}
] | https://en.wikipedia.org/wiki?curid=9719512 |
9719800 | Reduction (computability theory) | In computability theory, many reducibility relations (also called reductions, reducibilities, and notions of reducibility) are studied. They are motivated by the question: given sets formula_0 and formula_1 of natural numbers, is it possible to effectively convert a method for deciding membership in formula_1 into a method for deciding membership in formula_0? If the answer to this question is affirmative then formula_0 is said to be reducible to formula_1.
The study of reducibility notions is motivated by the study of decision problems. For many notions of reducibility, if any noncomputable set is reducible to a set formula_0 then formula_0 must also be noncomputable. This gives a powerful technique for proving that many sets are noncomputable.
Reducibility relations.
A reducibility relation is a binary relation on sets of natural numbers that is
reflexive: every set is reducible to itself, and
transitive: if a set formula_0 is reducible to a set formula_1 and formula_1 is reducible to a set formula_2 then formula_0 is reducible to formula_2.
These two properties imply that reducibility is a preorder on the powerset of the natural numbers. Not all preorders are studied as reducibility notions, however. The notions studied in computability theory have the informal property that formula_0 is reducible to formula_1 if and only if any (possibly noneffective) decision procedure for formula_1 can be effectively converted to a decision procedure for formula_0. The different reducibility relations vary in the methods they permit such a conversion process to use.
Degrees of a reducibility relation.
Every reducibility relation (in fact, every preorder) induces an equivalence relation on the powerset of the natural numbers in which two sets are equivalent if and only if each one is reducible to the other. In computability theory, these equivalence classes are called the degrees of the reducibility relation. For example, the Turing degrees are the equivalence classes of sets of naturals induced by Turing reducibility.
The degrees of any reducibility relation are partially ordered by the relation in the following manner. Let formula_3 be a reducibility relation and let formula_2 and formula_4 be two of its degrees. Then formula_5 if and only if there is a set formula_0 in formula_2 and a set formula_1 in formula_4 such that formula_6. This is equivalent to the property that for every set formula_0 in formula_2 and every set formula_1 in formula_4, formula_6, because any two sets in "C" are equivalent and any two sets in formula_4 are equivalent. It is common, as shown here, to use boldface notation to denote degrees.
Turing reducibility.
The most fundamental reducibility notion is Turing reducibility. A set formula_0 of natural numbers is Turing reducible to a set formula_1 if and only if there is an oracle Turing machine that, when run with formula_1 as its oracle set, will compute the indicator function (characteristic function) of formula_0. Equivalently, formula_0 is Turing reducible to formula_1 if and only if there is an algorithm for computing the indicator function for formula_0 provided that the algorithm is provided with a means to correctly answer questions of the form "Is formula_7 in formula_1?".
Turing reducibility serves as a dividing line for other reducibility notions because, according to the Church-Turing thesis, it is the most general reducibility relation that is effective. Reducibility relations that imply Turing reducibility have come to be known as strong reducibilities, while those that are implied by Turing reducibility are weak reducibilities. Equivalently, a strong reducibility relation is one whose degrees form a finer equivalence relation than the Turing degrees, while a weak reducibility relation is one whose degrees form a coarser equivalence relation than Turing equivalence.
Reductions stronger than Turing reducibility.
The strong reducibilities include one-one reducibility, many-one reducibility, truth-table reducibility, and weak truth-table reducibility, among others.
Many of these were introduced by Post (1944). Post was searching for a non-computable, computably enumerable set which the halting problem could not be Turing reduced to. As he could not construct such a set in 1944, he instead worked on the analogous problems for the various reducibilities that he introduced. These reducibilities have since been the subject of much research, and many relationships between them are known.
Bounded reducibilities.
A bounded form of each of the above strong reducibilities can be defined. The most famous of these is bounded truth-table reduction, but there are also bounded Turing, bounded weak truth-table, and others. These first three are the most common ones and they are based on the number of queries. For example, a set formula_0 is bounded truth-table reducible to formula_1 if and only if the Turing machine formula_19 computing formula_0 relative to formula_1 computes a list of up to formula_7 numbers, queries formula_1 on these numbers and then terminates for all possible oracle answers; the value formula_7 is a constant independent of formula_10. The difference between bounded weak truth-table and bounded Turing reduction is that in the first case, the up to formula_7 queries have to be made at the same time while in the second case, the queries can be made one after the other. For that reason, there are cases where formula_0 is bounded Turing reducible to formula_1 but not bounded weak truth-table reducible to formula_1.
Strong reductions in computational complexity.
The strong reductions listed above restrict the manner in which oracle information can be accessed by a decision procedure but do not otherwise limit the computational resources available. Thus if a set formula_0 is decidable then formula_0 is reducible to any set formula_1 under any of the strong reducibility relations listed above, even if formula_0 is not polynomial-time or exponential-time decidable. This is acceptable in the study of computability theory, which is interested in theoretical computability, but it is not reasonable for computational complexity theory, which studies which sets can be decided under certain asymptotical resource bounds.
The most common reducibility in computational complexity theory is polynomial-time reducibility; a set "A" is polynomial-time reducible to a set formula_1 if there is a polynomial-time function "f" such that for every formula_7, formula_7 is in formula_0 if and only if formula_20 is in formula_1. This reducibility is, essentially, a resource-bounded version of many-one reducibility. Other resource-bounded reducibilities are used in other contexts of computational complexity theory where other resource bounds are of interest.
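As a toy illustration (the two decision problems below are made up for this sketch and are not taken from the article), a polynomial-time many-one reduction is simply a polynomial-time computable function f such that a question about membership in A becomes a question about membership in B.

```python
# Toy problems: A = "is n a multiple of 6?", B = "is n a multiple of 3?"
def in_B(n: int) -> bool:
    return n % 3 == 0

def f(n: int) -> int:
    """Polynomial-time reduction: n is in A if and only if f(n) is in B."""
    return n if n % 2 == 0 else 1      # odd numbers are mapped to 1, which is not in B

def in_A(n: int) -> bool:
    # decide A using only the reduction and a decision procedure for B
    return in_B(f(n))

print([n for n in range(1, 20) if in_A(n)])   # [6, 12, 18]
```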
Reductions weaker than Turing reducibility.
Although Turing reducibility is the most general reducibility that is effective, weaker reducibility relations are commonly studied. These reducibilities are related to the relative definability of sets over arithmetic or set theory. They include:
Arithmetical reducibility: formula_0 is arithmetical in formula_1 if formula_0 is definable in first-order arithmetic with an additional predicate for formula_1; equivalently, formula_0 is Turing reducible to the iterated Turing jump formula_21 for some natural number formula_7.
Hyperarithmetical reducibility: formula_0 is hyperarithmetical in formula_1 if formula_0 is formula_22 in formula_1; equivalently, formula_0 is Turing reducible to formula_23 for some ordinal α that is recursive in formula_1.
Relative constructibility: formula_0 is constructible relative to formula_1 if formula_0 is an element of formula_24, the smallest transitive model of set theory that contains formula_1 and all the ordinals.
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "C"
},
{
"math_id": 3,
"text": "\\leq"
},
{
"math_id": 4,
"text": "D"
},
{
"math_id": 5,
"text": "C\\leq D"
},
{
"math_id": 6,
"text": "A\\leq B"
},
{
"math_id": 7,
"text": "n"
},
{
"math_id": 8,
"text": "f"
},
{
"math_id": 9,
"text": "A(x) = B(f(x))"
},
{
"math_id": 10,
"text": "x"
},
{
"math_id": 11,
"text": "B(0), B(1), ..."
},
{
"math_id": 12,
"text": "a"
},
{
"math_id": 13,
"text": "b"
},
{
"math_id": 14,
"text": "a = 1"
},
{
"math_id": 15,
"text": "b = 1"
},
{
"math_id": 16,
"text": "B(n)"
},
{
"math_id": 17,
"text": "F(x)"
},
{
"math_id": 18,
"text": "x \\in A"
},
{
"math_id": 19,
"text": "M"
},
{
"math_id": 20,
"text": "f(n)"
},
{
"math_id": 21,
"text": "B^{(n)}"
},
{
"math_id": 22,
"text": "\\Delta^1_1"
},
{
"math_id": 23,
"text": "B^{(\\alpha)}"
},
{
"math_id": 24,
"text": "L(B)"
}
] | https://en.wikipedia.org/wiki?curid=9719800 |
9721220 | Goldbach's comet | Goldbach's comet is the name given to a plot of the function formula_0, the so-called Goldbach function (sequence in the OEIS). The function, studied in relation to Goldbach's conjecture, is defined for all even integers formula_1 to be the number of different ways in which "E" can be expressed as the sum of two primes. For example, formula_2 since 22 can be expressed as the sum of two primes in three different ways (formula_3).
The coloring of points in the above image is based on the value of formula_4 modulo 3 with red points corresponding to 0 mod 3, blue points corresponding to 1 mod 3 and green points corresponding to 2 mod 3. In other words, the red points are multiples of 6, the blue points are multiples of 6 plus 2, and the green points are multiples of 6 plus 4.
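The Goldbach function is straightforward to compute for small arguments; the following Python sketch (an added illustration) counts unordered prime pairs and reproduces the value g(22) = 3 quoted above.

```python
from sympy import isprime

def g(E: int) -> int:
    """Number of ways to write the even number E as an unordered sum of two primes."""
    return sum(1 for p in range(2, E // 2 + 1) if isprime(p) and isprime(E - p))

print(g(22))                           # 3, from 3 + 19, 5 + 17 and 11 + 11
print([g(E) for E in range(4, 31, 2)]) # the start of Goldbach's comet
```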
Anatomy of the Goldbach Comet.
An illuminating way of presenting the comet data is as a histogram. The function formula_0 can be normalized by dividing by the locally averaged value of "g", gav, taken over perhaps 1000 neighboring values of the even number "E". The histogram can then be accumulated over a range of up to about 10% either side of a central "E".
Such a histogram appears on the right. A series of well-defined peaks is evident. Each of these peaks can be identified as being formed by a set of values of formula_4 which have certain smallest factors. The major peaks correspond to lowest factors of 3, 5, 7 ... as labeled. As the lowest factors become higher the peaks move left and eventually merge to give the lowest value primary peak.
There is in fact a hierarchy of peaks; the main peaks are composed of subsidiary peaks, with a succession of second smallest factors of formula_4. This hierarchy continues until all factors are exhausted.
The magnified section shows the succession of subsidiary peaks in more detail.
The relative location of the peaks follows from the form developed by Hardy and Littlewood:
formula_5
where the product is taken over all primes "p" that are factors of formula_4. The factor on the right is Hardy–Littlewood's twin prime constant
formula_6
Here the product is taken over all primes greater than 2.
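The constant converges quickly, so a partial product over the odd primes below a modest bound (the bound 10^6 used here is an arbitrary choice for this added sketch) already reproduces the quoted value to several decimal places.

```python
from sympy import primerange

Pi2 = 1.0
for p in primerange(3, 10**6):     # partial product over the odd primes below the bound
    Pi2 *= 1.0 - 1.0 / (p - 1) ** 2
print(Pi2)                         # approximately 0.6601618, matching the value above
```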
Of particular interest is the peak formed by selecting only values of formula_4 that are prime. The product factor in equation (1) is then very close to 1. The peak is very close to a Gaussian form (shown in gray). For this range of "E" values, the peak location is within 0.03% of the ideal formula_7.
When histograms are formed for different average values of "E", the width of this (primes only) peak is found to be proportional to formula_8. However, it is a factor of about 1.85 less than the value formula_9 that would be expected from a hypothesis of totally random occurrence of prime-pair matching. This may be expected, since there are correlations that give rise to the separated peaks in the total histogram.
Returning to the full range of formula_4 rather than just primes, it is seen that other peaks associated with specified lowest factors of formula_4 can also be fitted by a Gaussian, but only on their lower shoulder. The upper shoulder, being formed by an aggregate of subsidiary peaks, lies above the simple Gaussian form.
The relative heights of the peaks in the total histogram are representative of the populations of various types of formula_4 having differing factors. The heights are approximately inversely proportional to formula_10, the products of the lowest factors. Thus the height of the peak marked (3,5) in the overall histogram is about 1/15 of the main peak. Heights may vary from this by about 20%; their exact value is a complex function of the way in which the peaks are constituted from their components and of their varying width.
It is interesting to speculate on the possibility of any number "E" having zero prime pairs, taking these Gaussian forms as probabilities, and assuming it is legitimate to extrapolate to the zero-pair point. If this is done, the probability of zero pairs for any one "E", in the range considered here, is of order 10^−3700. The integrated probability over all "E" to infinity, taking into account the narrowing of the peak width, is not much larger. Any search for violation of the Goldbach conjecture may reasonably be expected to have these odds to contend with.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "g(E)"
},
{
"math_id": 1,
"text": "E > 2"
},
{
"math_id": 2,
"text": "g(22) = 3"
},
{
"math_id": 3,
"text": "22 = 11+11 = 5+17 = 3+19"
},
{
"math_id": 4,
"text": "E/2"
},
{
"math_id": 5,
"text": "\\frac{g(E)}{g_{av}} = \\Pi_{2}\\prod\\frac{(p-1)}{(p-2)}\\,,\\quad\\quad\\quad\\quad\\quad\\quad(1)"
},
{
"math_id": 6,
"text": "\\Pi_{2} = \\prod_{p>2}\\left(1-\\frac{1}{(p-1)^{2}}\\right) = 0.6601618..."
},
{
"math_id": 7,
"text": "\\Pi_{2}"
},
{
"math_id": 8,
"text": "1/\\sqrt{g_{pav}(E)}"
},
{
"math_id": 9,
"text": "\\sqrt{2/g_{pav}(E)}"
},
{
"math_id": 10,
"text": "\\Pi\\,p"
}
] | https://en.wikipedia.org/wiki?curid=9721220 |
9721524 | Cunningham correction factor | Number used to correct drag calculations for small particles in a fluid
In fluid dynamics, the Cunningham correction factor, or Cunningham slip correction factor (denoted C), is used to account for non-continuum effects when calculating the drag on small particles. The derivation of Stokes' law, which is used to calculate the drag force on small particles, assumes a no-slip condition which is no longer correct at high Knudsen numbers. The Cunningham slip correction factor allows predicting the drag force on a particle moving in a fluid with a Knudsen number between the continuum regime and free molecular flow.
The drag coefficient calculated with standard correlations is divided by the Cunningham correction factor, C, given below.
Ebenezer Cunningham derived the correction factor in 1910 and with Robert Andrews Millikan, verified the correction in the same year.
formula_0
where
"C" is the correction factor,
"λ" is the mean free path of the gas molecules,
"d" is the particle diameter, and
"A"1, "A"2 and "A"3 are empirically determined coefficients.
For air (Davies, 1945):
"A"1 = 1.257
"A"2 = 0.400
"A"3 = 0.55
The Cunningham correction factor becomes significant when particles become smaller than 15 micrometers, for air at ambient conditions.
For sub-micrometer particles, Brownian motion must be taken into account.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C = 1+ \\frac{2\\lambda}{d} \\left(A_1+A_2 e^{\\frac{-A_3 d}{\\lambda}} \\right)"
}
] | https://en.wikipedia.org/wiki?curid=9721524 |
9722535 | Trisops | Trisops was an experimental machine for the study of magnetic confinement of plasmas with the ultimate goal of producing fusion power. The configuration was a variation of a compact toroid, a toroidal (doughnut-shaped) structure of plasma and magnetic fields with no electromagnetic coils or electrodes penetrating the center. It lost funding in its original form in 1978.
The configuration is produced by combining two individual toroids produced by two conical θ pinch guns, located at either end of a length of Pyrex pipe with a constant guide magnetic field. The toroidal currents in the toroids are in opposite directions, so that they repel each other. After coming to an equilibrium, they are compressed adiabatically by increasing the external field.
Force free plasma vortices.
Force free plasma vortices have uniform magnetic helicity and therefore are stable against many instabilities. Typically, the current decays faster in the colder regions until the gradient in helicity is large enough to allow a turbulent redistribution of the current.
Force free vortices follow these equations:
formula_0
The first equation describes a Lorentz force-free fluid: the formula_1 forces are everywhere zero. For a laboratory plasma α is a constant and β is a scalar function of spatial coordinates.
The magnetic flux surfaces are toroidal, with the current being totally toroidal at the core of the torus and totally poloidal at the surface of the torus. This is similar to the field configuration of a tokamak, except that the field-producing coils are simpler and do not penetrate the plasma torus.
Unlike most plasma structures, the Lorentz force and the Magnus force, formula_2, play equivalent roles. formula_3 is the mass density.
Project.
Dr. Daniel Wells, while working on the Stellarator at the Princeton Plasma Physics Laboratory in the 1960s conceived of colliding and then compressing stable force free plasma toroids to produce conditions needed for thermonuclear fusion. The name, Trisops, is an acronym for Thermonuclear Reactor In Support of Project Sherwood. He later moved to the University of Miami where he set up the Trisops machine, supported by the National Science Foundation and Florida Power and Light.
The project continued until 1978, when the National Science Foundation (NSF) discontinued the grant and the United States Department of Energy (DOE) did not pick up the support.
Machine.
The fourth and final version of the Trisops machine consisted of DC mirror coils producing a 0.5 T guide field, two conical θ-pinch guns which produced two counter-rotating plasma vortices inside a pyrex vacuum chamber. The vortices approached each other, collided, repelled each other, and finally came to rest. At that time the compression coils produced a 3.5 T field with a quarter-cycle rise time of 10 μs.
Results.
The compressed rings retained their structure for 5 μs, with a density of 2 x 1017 cm−3, an ion temperature of 5 keV, an electron temperature of 300 eV. Defunding prevented further measurements to resolve the discrepancy between the above figures, and the plasma electron-ion temperature equilibration time of 1 μs.
Followup.
The project lost funding in 1978. The machine was disassembled and remained at the University of Miami until 1997. Then, the machine was moved to Lanham, Maryland and reassembled for the CMTX project (see reference). As of 2024, the status of the project and the machine is unknown.
{
"math_id": 0,
"text": " \n\\begin{align}\n \\vec{\\nabla} \\times \\vec{B} = \\alpha\\vec{B} \\\\\n\\vec{v} = \\pm\\beta\\vec{B}\n\\end{align}\n"
},
{
"math_id": 1,
"text": " \\vec{j} \\times \\vec{B} "
},
{
"math_id": 2,
"text": " \\rho\\vec{\\nabla} \\times \\vec{v} "
},
{
"math_id": 3,
"text": "\\rho "
}
] | https://en.wikipedia.org/wiki?curid=9722535 |
972328 | Wedderburn–Etherington number | The Wedderburn–Etherington numbers are an integer sequence named for Ivor Malcolm Haddon Etherington and Joseph Wedderburn that can be used to count certain kinds of binary trees. The first few numbers in the sequence are
0, 1, 1, 1, 2, 3, 6, 11, 23, 46, 98, 207, 451, 983, 2179, 4850, 10905, 24631, 56011, ... (OEIS: )
Combinatorial interpretation.
These numbers can be used to solve several problems in combinatorial enumeration. The "n"th number in the sequence (starting with the number 0 for "n" = 0)
counts
the number of unordered rooted trees with "n" leaves in which every internal node has exactly two children (sometimes called Otter trees), and equivalently
the number of different results that can be obtained by multiplying "n" copies of a variable formula_5 in a product formula_0, when the multiplication is commutative but not associative. For instance, there are three distinct interpretations of formula_1, namely formula_2, formula_3, and formula_4.
Formula.
The Wedderburn–Etherington numbers may be calculated using the recurrence relation
formula_6
formula_7
beginning with the base case formula_8.
In terms of the interpretation of these numbers as counting rooted binary trees with "n" leaves, the summation in the recurrence counts the different ways of partitioning these leaves into two subsets, and of forming a subtree having each subset as its leaves. The formula for even values of "n" is slightly more complicated than the formula for odd values in order to avoid double counting trees with the same number of leaves in both subtrees.
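The recurrence translates directly into code; the following Python sketch (an added illustration) reproduces the opening terms of the sequence.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wedderburn_etherington(n: int) -> int:
    if n <= 1:
        return n                                   # a_0 = 0, a_1 = 1
    m = (n + 1) // 2 if n % 2 == 1 else n // 2
    total = sum(wedderburn_etherington(i) * wedderburn_etherington(n - i)
                for i in range(1, m))
    if n % 2 == 0:                                 # extra term avoids double counting
        a_m = wedderburn_etherington(m)
        total += a_m * (a_m + 1) // 2
    return total

print([wedderburn_etherington(n) for n in range(11)])
# [0, 1, 1, 1, 2, 3, 6, 11, 23, 46, 98]
```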
Growth rate.
The Wedderburn–Etherington numbers grow asymptotically as
formula_9
where "B" is the generating function of the numbers and "ρ" is its radius of convergence, approximately 0.4027 (sequence in the OEIS), and where the constant given by the part of the expression in the square root is approximately 0.3188 (sequence in the OEIS).
Applications.
use the Wedderburn–Etherington numbers as part of a design for an encryption system containing a hidden backdoor. When an input to be encrypted by their system can be sufficiently compressed by Huffman coding, it is replaced by the compressed form together with additional information that leaks key data to the attacker. In this system, the shape of the Huffman coding tree is described as an Otter tree and encoded as a binary number in the interval from 0 to the Wedderburn–Etherington number for the number of symbols in the code. In this way, the encoding uses a very small number of bits, the base-2 logarithm of the Wedderburn–Etherington number.
describe a similar encoding technique for rooted unordered binary trees, based on partitioning the trees into small subtrees and encoding each subtree as a number bounded by the Wedderburn–Etherington number for its size. Their scheme allows these trees to be encoded in a number of bits that is close to the information-theoretic lower bound (the base-2 logarithm of the Wedderburn–Etherington number) while still allowing constant-time navigation operations within the tree.
use unordered binary trees, and the fact that the Wedderburn–Etherington numbers are significantly smaller than the numbers that count ordered binary trees, to significantly reduce the number of terms in a series representation of the solution to certain differential equations.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x^n"
},
{
"math_id": 1,
"text": "x^5"
},
{
"math_id": 2,
"text": "x(x(x(xx)))"
},
{
"math_id": 3,
"text": "x((xx)(xx))"
},
{
"math_id": 4,
"text": "(xx)(x(xx))"
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": "a_{2n-1}=\\sum_{i=1}^{n-1} a_i a_{2n-i-1}"
},
{
"math_id": 7,
"text": "a_{2n}=\\frac{a_n(a_n+1)}{2}+\\sum_{i=1}^{n-1} a_i a_{2n-i}"
},
{
"math_id": 8,
"text": "a_1=1"
},
{
"math_id": 9,
"text": "a_n \\approx \\sqrt{\\frac{\\rho+\\rho^2B'(\\rho^2)}{2\\pi}} \\frac{\\rho^{-n}}{n^{3/2}},"
}
] | https://en.wikipedia.org/wiki?curid=972328 |
972333 | Highly totient number | A highly totient number formula_0 is an integer that has more solutions to the equation formula_1, where formula_2 is Euler's totient function, than any integer smaller than it. The first few highly totient numbers are
1, 2, 4, 8, 12, 24, 48, 72, 144, 240, 432, 480, 576, 720, 1152, 1440 (sequence in the OEIS), with 2, 3, 4, 5, 6, 10, 11, 17, 21, 31, 34, 37, 38, 49, 54, and 72 totient solutions respectively. The sequence of highly totient numbers is a subset of the sequence of smallest number formula_0 with exactly formula_3 solutions to formula_1.
The totient of a number formula_4, with prime factorization formula_5, is the product:
formula_6
Thus, a highly totient number is a number that has more ways of being expressed as a product of this form than does any smaller number.
The concept is somewhat analogous to that of highly composite numbers, and in the same way that 1 is the only odd highly composite number, it is also the only odd highly totient number (indeed, the only odd number to not be a nontotient). And just as there are infinitely many highly composite numbers, there are also infinitely many highly totient numbers, though the highly totient numbers get tougher to find the higher one goes, since calculating the totient function involves factorization into primes, something that becomes extremely difficult as the numbers get larger.
Example.
There are five numbers (15, 16, 20, 24, and 30) whose totient number is 8. No positive integer smaller than 8 has as many such numbers, so 8 is highly totient.
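A brute-force search makes the definition concrete. The Python sketch below (an added illustration; the search bound is an ad hoc choice that is adequate only for the small values examined here) tallies the solutions of φ(x) = k and then scans for records.

```python
from sympy import totient

LIMIT = 3000                     # ad hoc bound; large enough for the small k scanned below
counts = {}
for x in range(1, LIMIT):
    k = int(totient(x))
    counts[k] = counts.get(k, 0) + 1

best = 0
highly_totient = []
for k in range(1, 200):          # scan k in increasing order and record new maxima
    if counts.get(k, 0) > best:
        best = counts[k]
        highly_totient.append(k)
print(highly_totient)            # [1, 2, 4, 8, 12, 24, 48, 72, 144]
```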
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "\\phi(x) = k"
},
{
"math_id": 2,
"text": "\\phi"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "x=\\prod_i p_i^{e_i}"
},
{
"math_id": 6,
"text": "\\phi(x)=\\prod_i (p_i-1)p_i^{e_i-1}."
}
] | https://en.wikipedia.org/wiki?curid=972333 |
9723822 | 1 + 2 + 3 + 4 + ⋯ | Divergent series
The infinite series whose terms are the natural numbers 1 + 2 + 3 + 4 + ⋯ is a divergent series. The "n"th partial sum of the series is the triangular number
formula_0
which increases without bound as "n" goes to infinity. Because the sequence of partial sums fails to converge to a finite limit, the series does not have a sum.
Although the series seems at first sight not to have any meaningful value at all, it can be manipulated to yield a number of mathematically interesting results. For example, many summation methods are used in mathematics to assign numerical values even to a divergent series. In particular, the methods of zeta function regularization and Ramanujan summation assign the series a value of −1/12, which is expressed by a famous formula:
formula_1
where the left-hand side has to be interpreted as being the value obtained by using one of the aforementioned summation methods and not as the sum of an infinite series in its usual meaning. These methods have applications in other fields such as complex analysis, quantum field theory, and string theory.
In a monograph on moonshine theory, University of Alberta mathematician Terry Gannon calls this equation "one of the most remarkable formulae in science".
Partial sums.
The partial sums of the series 1 + 2 + 3 + 4 + 5 + 6 + ⋯ are 1, 3, 6, 10, 15, etc. The "n"th partial sum is given by a simple formula:
formula_2
This equation was known to the Pythagoreans as early as the sixth century BCE. Numbers of this form are called triangular numbers, because they can be arranged as an equilateral triangle.
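The formula is easy to check directly; a minimal sketch in Python:

```python
def partial_sum(n):
    """n-th partial sum of 1 + 2 + 3 + ..., computed term by term."""
    return sum(range(1, n + 1))

for n in (1, 2, 3, 4, 10, 100, 10**6):
    assert partial_sum(n) == n * (n + 1) // 2   # the triangular-number formula
    print(n, partial_sum(n))                     # the partial sums grow without bound
```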
The infinite sequence of triangular numbers diverges to +∞, so by definition, the infinite series 1 + 2 + 3 + 4 + ⋯ also diverges to +∞. The divergence is a simple consequence of the form of the series: the terms do not approach zero, so the series diverges by the term test.
Summability.
Among the classical divergent series, 1 + 2 + 3 + 4 + ⋯ is relatively difficult to manipulate into a finite value. Many summation methods are used to assign numerical values to divergent series, some more powerful than others. For example, Cesàro summation is a well-known method that sums Grandi's series, the mildly divergent series 1 − 1 + 1 − 1 + ⋯, to 1/2. Abel summation is a more powerful method that not only sums Grandi's series to 1/2, but also sums the trickier series 1 − 2 + 3 − 4 + ⋯ to 1/4.
Unlike the above series, 1 + 2 + 3 + 4 + ⋯ is neither Cesàro summable nor Abel summable. Those methods work on oscillating divergent series, but they cannot produce a finite answer for a series that diverges to +∞. Most of the more elementary definitions of the sum of a divergent series are stable and linear, and any method that is both stable and linear cannot sum 1 + 2 + 3 + ⋯ to a finite value; see the section on the failure of stable linear summation methods below. More advanced methods are required, such as zeta function regularization or Ramanujan summation. It is also possible to argue for the value of −1/12 using some rough heuristics related to these methods.
Heuristics.
Srinivasa Ramanujan presented two derivations of "1 + 2 + 3 + 4 + ⋯ = −1/12" in chapter 8 of his first notebook. The simpler, less rigorous derivation proceeds in two steps, as follows.
The first key insight is that the series of positive numbers 1 + 2 + 3 + 4 + ⋯ closely resembles the alternating series 1 − 2 + 3 − 4 + ⋯. The latter series is also divergent, but it is much easier to work with; there are several classical methods that assign it a value, which have been explored since the 18th century.
In order to transform the series 1 + 2 + 3 + 4 + ⋯ into 1 − 2 + 3 − 4 + ⋯, one can subtract 4 from the second term, 8 from the fourth term, 12 from the sixth term, and so on. The total amount to be subtracted is 4 + 8 + 12 + 16 + ⋯, which is 4 times the original series. These relationships can be expressed using algebra. Whatever the "sum" of the series might be, call it "c" = 1 + 2 + 3 + 4 + ⋯. Then multiply this equation by 4 and subtract the second equation from the first:
formula_3
The second key insight is that the alternating series 1 − 2 + 3 − 4 + ⋯ is the formal power series expansion of the function 1/(1 + "x")2 but with "x" defined as 1. (This can be seen by equating 1/(1 + "x") to the alternating sum of the nonnegative powers of "x", and then differentiating and negating both sides of the equation.) Accordingly, Ramanujan writes
formula_4
Dividing both sides by −3, one gets "c" = −1/12.
Generally speaking, it is incorrect to manipulate infinite series as if they were finite sums. For example, if zeroes are inserted into arbitrary positions of a divergent series, it is possible to arrive at results that are not self-consistent, let alone consistent with other methods. In particular, the step 4"c" = 0 + 4 + 0 + 8 + ⋯ is not justified by the additive identity law alone. For an extreme example, appending a single zero to the front of the series can lead to a different result.
One way to remedy this situation, and to constrain the places where zeroes may be inserted, is to keep track of each term in the series by attaching a dependence on some function. In the series 1 + 2 + 3 + 4 + ⋯, each term "n" is just a number. If the term "n" is promoted to a function "n""−s", where "s" is a complex variable, then one can ensure that only like terms are added. The resulting series may be manipulated in a more rigorous fashion, and the variable "s" can be set to −1 later. The implementation of this strategy is called zeta function regularization.
Zeta function regularization.
In zeta function regularization, the series formula_5 is replaced by the series formula_6 The latter series is an example of a Dirichlet series. When the real part of "s" is greater than 1, the Dirichlet series converges, and its sum is the Riemann zeta function "ζ"("s"). On the other hand, the Dirichlet series diverges when the real part of "s" is less than or equal to 1, so, in particular, the series 1 + 2 + 3 + 4 + ⋯ that results from setting "s" = −1 does not converge. The benefit of introducing the Riemann zeta function is that it can be defined for other values of "s" by analytic continuation. One can then define the zeta-regularized sum of 1 + 2 + 3 + 4 + ⋯ to be "ζ"(−1).
From this point, there are a few ways to prove that "ζ"(−1) = . One method, along the lines of Euler's reasoning, uses the relationship between the Riemann zeta function and the Dirichlet eta function "η"("s"). The eta function is defined by an alternating Dirichlet series, so this method parallels the earlier heuristics. Where both Dirichlet series converge, one has the identities:
formula_7
The identity formula_8 continues to hold when both functions are extended by analytic continuation to include values of "s" for which the above series diverge. Substituting "s" = −1, one gets −3"ζ"(−1) = "η"(−1). Now, computing "η"(−1) is an easier task, as the eta function is equal to the Abel sum of its defining series, which is a one-sided limit:
formula_9
Dividing both sides by −3, one gets "ζ"(−1) = −1/12.
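The value can be checked numerically with any library that implements the analytic continuation of the zeta function; for instance, a sketch using Python's mpmath package (assumed to be installed):

```python
import mpmath

print(mpmath.zeta(-1))          # ≈ -0.0833333..., i.e. -1/12, via analytic continuation

# eta(-1) as the one-sided Abel limit of 1 - 2x + 3x^2 - 4x^3 + ... = 1/(1+x)^2
for x in (0.9, 0.99, 0.999):
    print(1 / (1 + x) ** 2)     # approaches 1/4 as x -> 1 from below

print(-3 * mpmath.zeta(-1))     # 0.25, consistent with -3*zeta(-1) = eta(-1)
```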
Cutoff regularization.
The method of regularization using a cutoff function can "smooth" the series to arrive at −1/12. Smoothing is a conceptual bridge between zeta function regularization, with its reliance on complex analysis, and Ramanujan summation, with its shortcut to the Euler–Maclaurin formula. Instead, the method operates directly on conservative transformations of the series, using methods from real analysis.
The idea is to replace the ill-behaved discrete series formula_10 with a smoothed version
formula_11
where "f" is a cutoff function with appropriate properties. The cutoff function must be normalized to "f"(0) = 1; this is a different normalization from the one used in differential equations. The cutoff function should have enough bounded derivatives to smooth out the wrinkles in the series, and it should decay to 0 faster than the series grows. For convenience, one may require that "f" is smooth, bounded, and compactly supported. One can then prove that this smoothed sum is asymptotic to + "CN"2, where "C" is a constant that depends on "f". The constant term of the asymptotic expansion does not depend on "f": it is necessarily the same value given by analytic continuation, .
Ramanujan summation.
The Ramanujan sum of 1 + 2 + 3 + 4 + ⋯ is also −1/12. Ramanujan wrote in his second letter to G. H. Hardy, dated 27 February 1913:
"Dear Sir, I am very much gratified on perusing your letter of the 8th February 1913. I was expecting a reply from you similar to the one which a Mathematics Professor at London wrote asking me to study carefully Bromwich's "Infinite Series" and not fall into the pitfalls of divergent series. ... I told him that the sum of an infinite number of terms of the series: 1 + 2 + 3 + 4 + ⋯
under my theory. If I tell you this you will at once point out to me the lunatic asylum as my goal. I dilate on this simply to convince you that you will not be able to follow my methods of proof if I indicate the lines on which I proceed in a single letter. ..."
Ramanujan summation is a method to isolate the constant term in the Euler–Maclaurin formula for the partial sums of a series. For a function "f", the classical Ramanujan sum of the series formula_12 is defined as
formula_13
where "f"(2"k"−1) is the (2"k" − 1)th derivative of "f" and "B"2"k" is the (2"k")th Bernoulli number: "B"2 =, "B"4 =, and so on. Setting "f"("x") = "x", the first derivative of "f" is 1, and every other term vanishes, so
formula_14
To avoid inconsistencies, the modern theory of Ramanujan summation requires that "f" is "regular" in the sense that the higher-order derivatives of "f" decay quickly enough for the remainder terms in the Euler–Maclaurin formula to tend to 0. Ramanujan tacitly assumed this property. The regularity requirement prevents the use of Ramanujan summation upon spaced-out series like 0 + 2 + 0 + 4 + ⋯, because no regular function takes those values. Instead, such a series must be interpreted by zeta function regularization. For this reason, Hardy recommends "great caution" when applying the Ramanujan sums of known series to find the sums of related series.
Failure of stable linear summation methods.
A summation method that is linear and stable cannot sum the series 1 + 2 + 3 + ⋯ to any finite value. (Stable means that adding a term at the beginning of the series increases the sum by the value of the added term.) This can be seen as follows. If
formula_15
then adding 0 to both sides gives
formula_16
by stability. By linearity, one may subtract the second equation from the first (subtracting each component of the second line from the first line in columns) to give
formula_17
Adding 0 to both sides again gives
formula_18
and subtracting the last two series gives
formula_19
contradicting stability.
Therefore, every method that gives a finite value to the sum 1 + 2 + 3 + ⋯ is not stable or not linear.
Physics.
In bosonic string theory, the attempt is to compute the possible energy levels of a string, in particular, the lowest energy level. Speaking informally, each harmonic of the string can be viewed as a collection of "D" − 2 independent quantum harmonic oscillators, one for each transverse direction, where "D" is the dimension of spacetime. If the fundamental oscillation frequency is "ω", then the energy in an oscillator contributing to the "n"th harmonic is "nħω"/2. So using the divergent series, the sum over all harmonics is −"ħω"("D" − 2)/24. Ultimately it is this fact, combined with the Goddard–Thorn theorem, which leads to bosonic string theory failing to be consistent in dimensions other than 26.
The regularization of 1 + 2 + 3 + 4 + ⋯ is also involved in computing the Casimir force for a scalar field in one dimension. An exponential cutoff function suffices to smooth the series, representing the fact that arbitrarily high-energy modes are not blocked by the conducting plates. The spatial symmetry of the problem is responsible for canceling the quadratic term of the expansion. All that is left is the constant term −1/12, and the negative sign of this result reflects the fact that the Casimir force is attractive.
A similar calculation is involved in three dimensions, using the Epstein zeta-function in place of the Riemann zeta function.
History.
It is unclear whether Leonhard Euler summed the series to . According to Morris Kline, Euler's early work on divergent series relied on function expansions, from which he concluded 1 + 2 + 3 + 4 + ⋯ = ∞. According to Raymond Ayoub, the fact that the divergent zeta series is not Abel-summable prevented Euler from using the zeta function as freely as the eta function, and he "could not have attached a meaning" to the series. Other authors have credited Euler with the sum, suggesting that Euler would have extended the relationship between the zeta and eta functions to negative integers. In the primary literature, the series 1 + 2 + 3 + 4 + ⋯ is mentioned in Euler's 1760 publication alongside the divergent geometric series 1 + 2 + 4 + 8 + ⋯. Euler hints that series of this type have finite, negative sums, and he explains what this means for geometric series, but he does not return to discuss 1 + 2 + 3 + 4 + ⋯. In the same publication, Euler writes that the sum of 1 + 1 + 1 + 1 + ⋯ is infinite.
In popular media.
David Leavitt's 2007 novel "The Indian Clerk" includes a scene where Hardy and Littlewood discuss the meaning of this series. They conclude that Ramanujan has rediscovered "ζ"(−1), and they take the "lunatic asylum" line in his second letter as a sign that Ramanujan is toying with them.
Simon McBurney's 2007 play "A Disappearing Number" focuses on the series in the opening scene. The main character, Ruth, walks into a lecture hall and introduces the idea of a divergent series before proclaiming, "I'm going to show you something really thrilling", namely 1 + 2 + 3 + 4 + ⋯ = −1/12. As Ruth launches into a derivation of the functional equation of the zeta function, another actor addresses the audience, admitting that they are actors: "But the mathematics is real. It's terrifying, but it's real."
In January 2014, Numberphile produced a YouTube video on the series, which gathered over 1.5 million views in its first month. The 8-minute video is narrated by Tony Padilla, a physicist at the University of Nottingham. Padilla begins with 1 − 1 + 1 − 1 + ⋯ and 1 − 2 + 3 − 4 + ⋯ and relates the latter to 1 + 2 + 3 + 4 + ⋯ using a term-by-term subtraction similar to Ramanujan's argument. Numberphile also released a 21-minute version of the video featuring Nottingham physicist Ed Copeland, who describes in more detail how 1 − 2 + 3 − 4 + ⋯ = 1/4 as an Abel sum, and 1 + 2 + 3 + 4 + ⋯ = −1/12 as "ζ"(−1). After receiving complaints about the lack of rigour in the first video, Padilla also wrote an explanation on his webpage relating the manipulations in the video to identities between the analytic continuations of the relevant Dirichlet series.
In "The New York Times" coverage of the Numberphile video, mathematician Edward Frenkel commented: "This calculation is one of the best-kept secrets in math. No one on the outside knows about it."
Coverage of this topic in "Smithsonian" magazine describes the Numberphile video as misleading and notes that the interpretation of the sum as −1/12 relies on a specialized meaning for the "equals" sign, from the techniques of analytic continuation, in which "equals" means "is associated with". The Numberphile video was critiqued on similar grounds by German mathematician Burkard Polster on his "Mathologer" YouTube channel in 2018, his video receiving 2.7 million views by 2023.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum_{k=1}^n k = \\frac{n(n + 1)}{2},"
},
{
"math_id": 1,
"text": "1 + 2 + 3 + 4 + \\cdots = -\\frac{1}{12},"
},
{
"math_id": 2,
"text": "\\sum_{k=1}^n k = \\frac{n(n + 1)}{2}."
},
{
"math_id": 3,
"text": "\n\\begin{alignat}{7}\n c = {}&& 1 + 2 &&{} + 3 + 4 &&{} + 5 + 6 + \\cdots \\\\\n 4c = {}&& 4 &&{} + 8 &&{} + 12 + \\cdots \\\\\n c - 4c = {}&& 1 - 2 &&{} + 3 - 4 &&{} + 5 - 6 + \\cdots\n\\end{alignat}\n"
},
{
"math_id": 4,
"text": "-3c = 1 - 2 + 3 - 4 + \\cdots = \\frac{1}{(1 + 1)^2} = \\frac14."
},
{
"math_id": 5,
"text": "\\sum_{n=1}^\\infty n"
},
{
"math_id": 6,
"text": "\\sum_{n=1}^\\infty n^{-s}."
},
{
"math_id": 7,
"text": "\n\\begin{alignat}{7}\n\\zeta(s)&{}={}&1^{-s}+2^{-s}&&{}+3^{-s}+4^{-s}&&{}+5^{-s}+6^{-s}+\\cdots& \\\\\n2\\times2^{-s}\\zeta(s)&{}={}& 2\\times2^{-s}&& {}+2\\times4^{-s}&&{} +2\\times6^{-s}+\\cdots& \\\\\n\\left(1-2^{1-s}\\right)\\zeta(s)&{}={}&1^{-s}-2^{-s}&&{}+3^{-s}-4^{-s}&&{}+5^{-s}-6^{-s}+\\cdots&=\\eta(s).\n\\end{alignat}\n"
},
{
"math_id": 8,
"text": "(1 - 2^{1-s}) \\zeta(s) = \\eta(s)"
},
{
"math_id": 9,
"text": "\n -3\\zeta(-1) =\n \\eta(-1) = \\lim_{x\\to 1^-} \\left(1 - 2x + 3x^2 - 4x^3 + \\cdots\\right) =\n \\lim_{x\\to 1^-}\\frac{1}{(1 + x)^2} =\n \\frac14."
},
{
"math_id": 10,
"text": "\\textstyle\\sum_{n=0}^N n"
},
{
"math_id": 11,
"text": "\\sum_{n=0}^\\infty nf\\left(\\frac{n}{N}\\right),"
},
{
"math_id": 12,
"text": "\\textstyle\\sum_{k=1}^\\infty f(k)"
},
{
"math_id": 13,
"text": "c = -\\frac{1}{2} f(0) - \\sum_{k=1}^\\infty \\frac{B_{2k}}{(2k)!} f^{(2k-1)}(0),"
},
{
"math_id": 14,
"text": "c = -\\frac16 \\times \\frac{1}{2!} = -\\frac{1}{12}."
},
{
"math_id": 15,
"text": "1+2+3+\\cdots=x,"
},
{
"math_id": 16,
"text": "0+1+2+3+\\cdots=0+x=x"
},
{
"math_id": 17,
"text": "1+1+1+\\cdots=x-x=0."
},
{
"math_id": 18,
"text": "0+1+1+1+\\cdots=0,"
},
{
"math_id": 19,
"text": "1+0+0+0+\\cdots=0,"
}
] | https://en.wikipedia.org/wiki?curid=9723822 |
972441 | Principal branch | In mathematics, a principal branch is a function which selects one branch ("slice") of a multi-valued function. Most often, this applies to functions defined on the complex plane.
Examples.
Trigonometric inverses.
Principal branches are used in the definition of many inverse trigonometric functions, such as the selection either to define that
formula_0
or that
formula_1.
Exponentiation to fractional powers.
A more familiar principal branch function, limited to real numbers, is that of a positive real number raised to the power of 1/2.
For example, take the relation "y" = "x"1/2, where "x" is any positive real number.
This relation can be satisfied by any value of "y" equal to a square root of "x" (either positive or negative). By convention, √x is used to denote the positive square root of "x".
In this instance, the positive square root function is taken as the principal branch of the multi-valued relation "x"1/2.
Complex logarithms.
One way to view a principal branch is to look specifically at the exponential function, and the logarithm, as it is defined in complex analysis.
The exponential function is single-valued, where "ez" is defined as:
formula_2
where formula_3.
However, the periodic nature of the trigonometric functions involved makes it clear that the logarithm is not so uniquely determined. One way to see this is to look at the following:
formula_4
and
formula_5
where "k" is any integer and atan2 continues the values of the arctan(b/a)-function from their principal value range formula_6, corresponding to formula_7 into the principal value range of the arg(z)-function formula_8, covering all four quadrants in the complex plane.
Any number log "z" defined by such criteria has the property that "e"log "z" = "z".
In this manner, the log function is a multi-valued function (often referred to as a "multifunction" in the context of complex analysis). A branch cut, usually along the negative real axis, can limit the imaginary part so it lies between −π and π. These are the chosen principal values.
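Most numerical libraries implement exactly this choice of branch. A small sketch in Python, whose cmath module places the cut along the negative real axis and keeps the imaginary part in (−π, π]:

```python
import cmath

w = cmath.log(-1 + 0j)
print(w)                 # 3.141592653589793j  (imaginary part +pi, not -pi)
print(cmath.exp(w))      # (-1+1.2246467991473532e-16j), i.e. back to -1 up to rounding

print(cmath.log(1j))     # 1.5707963267948966j  (= i*pi/2)
print(cmath.sqrt(-4))    # 2j, the principal square root (the other square root is -2j)
```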
This is the principal branch of the log function. Often it is defined using a capital letter, Log "z". | [
{
"math_id": 0,
"text": "\\arcsin:[-1,+1]\\rightarrow\\left[-\\frac{\\pi}{2},\\frac{\\pi}{2}\\right]"
},
{
"math_id": 1,
"text": "\\arccos:[-1,+1]\\rightarrow[0,\\pi]"
},
{
"math_id": 2,
"text": "e^z = e^a \\cos b + i e^a \\sin b"
},
{
"math_id": 3,
"text": "z = a + i b"
},
{
"math_id": 4,
"text": "\\operatorname{Re} (\\log z) = \\log \\sqrt{a^2 + b^2}"
},
{
"math_id": 5,
"text": "\\operatorname{Im} (\\log z) = \\operatorname{atan2}(b, a) + 2 \\pi k"
},
{
"math_id": 6,
"text": "(-\\pi/2,\\; \\pi/2]"
},
{
"math_id": 7,
"text": "a > 0"
},
{
"math_id": 8,
"text": "(-\\pi,\\; \\pi]"
}
] | https://en.wikipedia.org/wiki?curid=972441 |
972601 | Elementary equivalence | Concept in model theory
In model theory, a branch of mathematical logic, two structures "M" and "N" of the same signature "σ" are called elementarily equivalent if they satisfy the same first-order "σ"-sentences.
If "N" is a substructure of "M", one often needs a stronger condition. In this case "N" is called an elementary substructure of "M" if every first-order "σ"-formula "φ"("a"1, …, "a""n") with parameters "a"1, …, "a""n" from "N" is true in "N" if and only if it is true in "M".
If "N" is an elementary substructure of "M", then "M" is called an elementary extension of "N". An embedding "h": "N" → "M" is called an elementary embedding of "N" into "M" if "h"("N") is an elementary substructure of "M".
A substructure "N" of "M" is elementary if and only if it passes the Tarski–Vaught test: every first-order formula "φ"("x", "b"1, …, "b""n") with parameters in "N" that has a solution in "M" also has a solution in "N" when evaluated in "M". One can prove that two structures are elementarily equivalent with the Ehrenfeucht–Fraïssé games.
Elementary embeddings are used in the study of large cardinals, including rank-into-rank.
Elementarily equivalent structures.
Two structures "M" and "N" of the same signature "σ" are elementarily equivalent if every first-order sentence (formula without free variables) over "σ" is true in "M" if and only if it is true in "N", i.e. if "M" and "N" have the same complete first-order theory.
If "M" and "N" are elementarily equivalent, one writes "M" ≡ "N".
A first-order theory is complete if and only if any two of its models are elementarily equivalent.
For example, consider the language with one binary relation symbol '<'. The model R of real numbers with its usual order and the model Q of rational numbers with its usual order are elementarily equivalent, since they both interpret '<' as an unbounded dense linear ordering. This is sufficient to ensure elementary equivalence, because the theory of unbounded dense linear orderings is complete, as can be shown by the Łoś–Vaught test.
More generally, any first-order theory with an infinite model has non-isomorphic, elementarily equivalent models, which can be obtained via the Löwenheim–Skolem theorem. Thus, for example, there are non-standard models of Peano arithmetic, which contain other objects than just the numbers 0, 1, 2, etc., and yet are elementarily equivalent to the standard model.
Elementary substructures and elementary extensions.
"N" is an elementary substructure or elementary submodel of "M" if "N" and "M" are structures of the same signature "σ" such that for all first-order "σ"-formulas "φ"("x"1, …, "x""n") with free variables "x"1, …, "x""n", and all elements "a"1, …, "a"n of "N", "φ"("a"1, …, "a"n) holds in "N" if and only if it holds in "M":
formula_0
This definition first appears in Tarski, Vaught (1957). It follows that "N" is a substructure of "M".
If "N" is a substructure of "M", then both "N" and "M" can be interpreted as structures in the signature "σ""N" consisting of "σ" together with a new constant symbol for every element of "N". Then "N" is an elementary substructure of "M" if and only if "N" is a substructure of "M" and "N" and "M" are elementarily equivalent as "σ""N"-structures.
If "N" is an elementary substructure of "M", one writes "N" formula_1 "M" and says that "M" is an elementary extension of "N": "M" formula_2 "N".
The downward Löwenheim–Skolem theorem gives a countable elementary substructure for any infinite first-order structure in at most countable signature; the upward Löwenheim–Skolem theorem gives elementary extensions of any infinite first-order structure of arbitrarily large cardinality.
Tarski–Vaught test.
The Tarski–Vaught test (or Tarski–Vaught criterion) is a necessary and sufficient condition for a substructure "N" of a structure "M" to be an elementary substructure. It can be useful for constructing an elementary substructure of a large structure.
Let "M" be a structure of signature "σ" and "N" a substructure of "M". Then "N" is an elementary substructure of "M" if and only if for every first-order formula "φ"("x", "y"1, …, "y""n") over "σ" and all elements "b"1, …, "b""n" from "N", if "M" formula_3 <math>\exists</math>"x" "φ"("x", "b"1, …, "b""n"), then there is an element "a" in "N" such that "M" formula_3 "φ"("a", "b"1, …, "b""n").
Elementary embeddings.
An elementary embedding of a structure "N" into a structure "M" of the same signature "σ" is a map "h": "N" → "M" such that for every first-order "σ"-formula "φ"("x"1, …, "x""n") and all elements "a"1, …, "a"n of "N",
"N" formula_3 "φ"("a"1, …, "a""n") if and only if "M" formula_3 "φ"("h"("a"1), …, "h"("a""n")).
Every elementary embedding is a strong homomorphism, and its image is an elementary substructure.
Elementary embeddings are the most important maps in model theory. In set theory, elementary embeddings whose domain is "V" (the universe of set theory) play an important role in the theory of large cardinals (see also Critical point).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N \\models \\varphi(a_1, \\dots, a_n) \\text{ if and only if } M \\models \\varphi(a_1, \\dots, a_n)."
},
{
"math_id": 1,
"text": "\\preceq"
},
{
"math_id": 2,
"text": "\\succeq"
},
{
"math_id": 3,
"text": "\\models"
}
] | https://en.wikipedia.org/wiki?curid=972601 |
9728 | Earned value management | Project management technique
Earned value management (EVM), earned value project management, or earned value performance management (EVPM) is a project management technique for measuring project performance and progress in an objective manner.
Overview.
Earned value management is a project management technique for measuring project performance and progress. It has the ability to combine measurements of the project management triangle: scope, time, and costs.
In a single integrated system, EVM is able to provide accurate forecasts of project performance problems, which is an important aspect of project management.
Early EVM research showed that the areas of planning and control are significantly impacted by its use; and similarly, using the methodology improves both scope definition as well as the analysis of overall project performance. More recent research studies have shown that the principles of EVM are positive predictors of project success. The popularity of EVM has grown in recent years beyond government contracting, a sector in which its importance continues to rise (e.g. recent new DFARS rules), in part because EVM can also surface in and help substantiate contract disputes.
EVM features.
Essential features of any EVM implementation include a project plan that identifies the work to be accomplished; a valuation of that planned work, called planned value (PV) or budgeted cost of work scheduled (BCWS); and pre-defined "earning rules" (also called metrics) to quantify the accomplishment of work, called earned value (EV) or budgeted cost of work performed (BCWP).
EVM implementations for large or complex projects include many more features, such as indicators and forecasts of cost performance (over budget or under budget) and schedule performance (behind schedule or ahead of schedule). Large projects usually need to use quantitative forecasts associated with earned value management. Although deliverables in these large projects can use adaptive development methods, the forecasting metrics found in earned value management are mostly used in projects using the predictive approach. However, the most basic requirement of an EVM system is that it quantifies progress using PV and EV.
Application example.
Project A has been approved for a duration of one year with a fixed budget. It was also planned that the project would spend 50% of the approved budget and complete 50% of the work in the first six months. If now, six months after the start of the project, a project manager reports that he has spent 50% of the budget, one may presume that the project is perfectly on plan. However, in reality the provided information is not sufficient to come to such a conclusion. The project may have spent 50% of the budget whilst finishing only 25% of the work, which would mean the project is not doing well; or it may have spent 50% of the budget whilst completing 75% of the work, which would mean that the project is doing better than planned. EVM is meant to address such and similar issues.
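The point of the example can be made concrete with the basic EVM quantities defined below (PV, EV, AC) together with the standard schedule counterparts SV = EV − PV and SPI = EV/PV, which this article only mentions later. The figures in this sketch (plain Python) are invented for illustration:

```python
BAC = 120_000.0        # assumed approved budget for Project A
PV = 0.50 * BAC        # value of the work planned for the first six months
AC = 0.50 * BAC        # money actually spent after six months

for fraction_complete in (0.25, 0.75):
    EV = fraction_complete * BAC           # earned value of the work actually done
    CV, SV = EV - AC, EV - PV              # cost / schedule variance
    CPI, SPI = EV / AC, EV / PV            # cost / schedule performance index
    print(f"{fraction_complete:.0%} done: CV={CV:+,.0f}  SV={SV:+,.0f}  "
          f"CPI={CPI:.2f}  SPI={SPI:.2f}")

# 25% done: CV=-30,000  SV=-30,000  CPI=0.50  SPI=0.50   (over budget, behind schedule)
# 75% done: CV=+30,000  SV=+30,000  CPI=1.50  SPI=1.50   (under budget, ahead of schedule)
```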
History.
EVM emerged as a financial analysis specialty in United States government programs in the 1960s, with the government requiring contractors to implement an EVM system (EVMS). It has since become a significant branch of project management and cost engineering. Project management research investigating the contribution of EVM to project success suggests a moderately strong positive relationship. Implementations of EVM can be scaled to fit projects of all sizes and complexities.
The genesis of EVM occurred in industrial manufacturing at the turn of the 20th century, based largely on the principle of "earned time" popularized by Frank and Lillian Gilbreth.
In 1979, EVM was introduced to the architecture and engineering industry in a "Public Works Magazine" article by David Burstein, a project manager with a national engineering firm. In the late 1980s and early 1990s, EVM emerged more widely as a project management methodology to be understood and used by managers and executives, not just EVM specialists. Many industrialized nations also began to utilize EVM in their own procurement programs.
An overview of EVM was included in the Project Management Institute (PMI)'s first Project Management Body of Knowledge (PMBOK) Guide in 1987 and was expanded in subsequent editions. In the most recent edition of the PMBOK guide, EVM is listed among the general tools and techniques for processes to control project costs.
The construction industry was an early commercial adopter of EVM. Closer integration of EVM with the practice of project management accelerated in the 1990s. In 1999, the Performance Management Association merged with the PMI to become its first college, the College of Performance Management (CPM). The United States Office of Management and Budget began to mandate the use of EVM across all government agencies, and, for the first time, for certain internally managed projects (not just for contractors). EVM also received greater attention by publicly traded companies in response to the Sarbanes–Oxley Act of 2002.
In Australia, EVM has been codified as the standards AS 4817-2003 and AS 4817–2006.
US defense industry.
The EVM concept took root in the United States Department of Defense in the 1960s. The original concept was called the Program Evaluation and Review Technique, but it was considered overly burdensome and not very adaptable by contractors who were mandated to use it, and many variations of it began to proliferate among various procurement programs. In 1967, the DoD established a criterion-based approach, using a set of 35 criteria, called the Cost/Schedule Control Systems Criteria (C/SCSC). In the 1970s and early 1980s, a subculture of C/SCSC analysis grew, but the technique was often ignored or even actively resisted by project managers in both government and industry. C/SCSC was often considered a financial control tool that could be delegated to analytical specialists.
In 1989, EVM leadership was elevated to the Undersecretary of Defense for Acquisition, thus making EVM an element of program management and procurement. In 1991, Secretary of Defense Dick Cheney canceled the Navy A-12 Avenger II Program because of performance problems detected by EVM. This demonstrated that EVM mattered to secretary-level leadership. In the 1990s, many U.S. Government regulations were eliminated or streamlined. However, EVM not only survived the acquisition reform movement, but became strongly associated with the acquisition reform movement itself. Most notably, from 1995 to 1998, ownership of EVM criteria (reduced to 32) was transferred to industry by adoption of ANSI EIA 748-A standard.
The use of EVM has expanded beyond the U.S. Department of Defense. It was adopted by the National Aeronautics and Space Administration, the United States Department of Energy and other technology-related agencies.
Project tracking.
It is helpful to see an example of project tracking that does not include earned value performance management. Consider a project that has been planned in detail, including a time-phased spend plan for all elements of work. Figure 1 shows the cumulative budget (cost) for this project as a function of time (the blue line, labeled PV). It also shows the cumulative actual cost of the project (red line, labeled AC) through week 8. To those unfamiliar with EVM, it might appear that this project was over budget through week 4 and then under budget from week 6 through week 8. However, what is missing from this chart is any understanding of how much work has been accomplished during the project. If the project was actually completed at week 8, then the project would actually be well under budget and well ahead of schedule. If, on the other hand, the project is only 10% complete at week 8, the project is significantly over budget and behind schedule. A method is needed to measure technical performance objectively and quantitatively, and that is what EVM accomplishes.
Progress measurement sheet.
Progress can be measured using a measurement sheet and employing various techniques including milestones, weighted steps, value of work done, physical percent complete, earned value, Level of Effort, earn as planned, and more. Progress can be tracked based on any measure – cost, hours, quantities, schedule, directly input percent complete, and more.
Progress can be assessed using fundamental earned value calculations and variance analysis (Planned Cost, Actual Cost, and Earned Value); these calculations can determine where project performance currently is using the estimated project baseline's cost and schedule information.
With EVM.
Consider the same project, except this time the project plan includes pre-defined methods of quantifying the accomplishment of work. At the end of each week, the project manager identifies every detailed element of work that has been completed, and sums the EV for each of these completed elements. Earned value may be accumulated monthly, weekly, or as progress is made. The Value of Work Done (VOWD) is mainly used in Oil & Gas and is similar to the Actual Cost in EVM.
Earned value (EV).
formula_0
EV is calculated by multiplying the percent complete of each task (completed or in progress) by its planned value.
Figure 2 shows the EV curve (in green) along with the PV curve from Figure 1. The chart indicates that technical performance (i.e. progress) started more rapidly than planned, but slowed significantly and fell behind schedule at week 7 and 8. This chart illustrates the schedule performance aspect of EVM. It is complementary to critical path or critical chain schedule management.
Figure 3 shows the same EV curve (green) with the actual cost data from Figure 1 (in red). It can be seen that the project was actually under budget, relative to the amount of work accomplished, since the start of the project. This is a much better conclusion than might be derived from Figure 1.
Figure 4 shows all three curves together – which is a typical EVM line chart. The best way to read these three-line charts is to identify the EV curve first, then compare it to PV (for schedule performance) and AC (for cost performance). It can be seen from this illustration that a true understanding of cost performance and schedule performance "relies first on measuring technical performance objectively." This is the "foundational principle" of EVM.
Scaling EVM from simple to advanced implementations.
The "foundational principle" of EVM, mentioned above, does not depend on the size or complexity of the project. However, the "implementations" of EVM can vary significantly depending on the circumstances. In many cases, organizations establish an all-or-nothing threshold; projects above the threshold require a full-featured (complex) EVM system and projects below the threshold are exempted. Another approach that is gaining favor is to scale EVM implementation according to the project at hand and skill level of the project team.
Simple implementations (emphasizing only technical performance).
There are many more small and simple projects than there are large and complex ones, yet historically only the largest and most complex have enjoyed the benefits of EVM. Still, lightweight implementations of EVM are achievable by any person who has basic spreadsheet skills. In fact, spreadsheet implementations are an excellent way to learn basic EVM skills.
The "first step" is to define the work. This is typically done in a hierarchical arrangement called a work breakdown structure (WBS), although the simplest projects may use a simple list of tasks. In either case, it is important that the WBS or list be comprehensive. It is also important that the elements be mutually exclusive, so that work is easily categorized into one and only one element of work. The most detailed elements of a WBS hierarchy (or the items in a list) are called work packages. Work packages are then often devolved further in the project schedule into tasks or activities.
The "second step" is to assign a value, called planned value (PV), to each work package. For large projects, PV is almost always an allocation of the total project budget, and may be in units of currency (e.g. dollar, euro or naira) or in labor hours, or both. However, in very simple projects, each activity may be assigned a weighted "point value" which might not be a budget number. Assigning weighted values and achieving consensus on all PV quantities yields an important benefit of EVM, because it exposes misunderstandings and miscommunications about the scope of the project, and resolving these differences should always occur as early as possible. Some terminal elements can not be known (planned) in great detail in advance, and that is expected, because they can be further refined at a later time.
The "third step" is to define "earning rules" for each work package. The simplest method is to apply just one earning rule, such as the 0/100 rule, to all activities. Using the 0/100 rule, no credit is earned for an element of work until it is finished. A related rule is called the 50/50 rule, which means 50% credit is earned when an element of work is started, and the remaining 50% is earned upon completion. Other fixed earning rules such as a 25/75 rule or 20/80 rule are gaining favor, because they assign more weight to finishing work than for starting it, but they also motivate the project team to identify when an element of work is started, which can improve awareness of work-in-progress. These simple earning rules work well for small or simple projects because generally, each activity tends to be fairly short in duration.
These initial three steps define the minimal amount of planning for simplified EVM. The "final step" is to execute the project according to the plan and measure progress. When activities are started or finished, EV is accumulated according to the earning rule. This is typically done at regular intervals (e.g. weekly or monthly), but there is no reason why EV cannot be accumulated in near real-time, when work elements are started/completed. In fact, waiting to update EV only once per month (simply because that is when cost data are available) only detracts from a primary benefit of using EVM, which is to create a technical performance scoreboard for the project team.
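A minimal spreadsheet-style implementation of these four steps might look like the following sketch (plain Python; the task list, point values, and statuses are invented for illustration):

```python
# earning rules: (credit on start, credit on finish) as fractions of planned value
RULE_0_100 = (0.0, 1.0)
RULE_50_50 = (0.5, 0.5)

# work packages: (name, planned value in points, status)
tasks = [
    ("excavation", 10, "done"),
    ("foundation", 25, "done"),
    ("framing",    40, "started"),
    ("roofing",    15, "not started"),
    ("finishes",   10, "not started"),
]

def earned_value(tasks, rule=RULE_50_50):
    """Accumulate EV by applying a single fixed earning rule to every task."""
    start_credit, finish_credit = rule
    ev = 0.0
    for _name, pv, status in tasks:
        if status == "started":
            ev += start_credit * pv
        elif status == "done":
            ev += (start_credit + finish_credit) * pv
    return ev

total_pv = sum(pv for _name, pv, _status in tasks)
print(earned_value(tasks, RULE_50_50), "of", total_pv)   # 55.0 of 100
print(earned_value(tasks, RULE_0_100), "of", total_pv)   # 35.0 of 100
```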
In a lightweight implementation such as described here, the project manager has not accumulated cost nor defined a detailed project schedule network (i.e. using a critical path or critical chain methodology). While such omissions are inappropriate for managing large projects, they are a common and reasonable occurrence in many very small or simple projects. Any project can benefit from using EV alone as a real-time score of progress. One useful result of this very simple approach (without schedule models and actual cost accumulation) is to compare EV curves of similar projects, as illustrated in Figure 5. In this example, the progress of three residential construction projects are compared by aligning the starting dates. If these three home construction projects were measured with the same PV valuations, the "relative" schedule performance of the projects can be easily compared.
Making earned value schedule metrics concordant with the CPM schedule.
The actual critical path is ultimately the determining factor of every project's duration. Because earned value schedule metrics take no account of critical path data, big budget activities that are not on the critical path have the potential to dwarf the impact of performing small budget critical path activities. This can lead to gaming the SV and Schedule Performance Index (SPI) metrics by ignoring critical path activities in favor of big-budget activities that may have more float. This can sometimes even lead to performing activities out-of-sequence just to improve the schedule tracking metrics, which can cause major problems with quality.
A simple two-step process has been suggested to fix this.
In this way, the distorting aspect of float would be eliminated. There would be no benefit to performing a non-critical activity with many floats until it is due in proper sequence. Also, an activity would not generate a negative schedule variance until it had used up its float. Under this method, one way of gaming the schedule metrics would be eliminated. The only way of generating a positive schedule variance (or SPI over 1.0) would be by completing work on the current critical path ahead of schedule, which is in fact the only way for a project to get ahead of schedule.
Advanced implementations.
In addition to managing technical and schedule performance, large and complex projects require cost performance to be monitored and reviewed at regular intervals. To measure cost performance, planned value (BCWS) and earned value (BCWP) must be in the same currency units as actual costs.
In large implementations, the planned value curve is commonly called a Performance Measurement Baseline (PMB) and may be arranged in control accounts, summary-level planning packages, planning packages and work packages.
In large projects, establishing control accounts is the primary method of delegating responsibility and authority to various parts of the performing organization. Control accounts are cells of a responsibility assignment (RACI) matrix, which is the intersection of the project WBS and the organizational breakdown structure (OBS). Control accounts are assigned to Control Account Managers (CAMs).
Large projects require more elaborate processes for controlling baseline revisions, more thorough integration with subcontractor EVM systems, and more elaborate management of procured materials.
In the United States, the primary standard for full-featured EVM systems is the ANSI/EIA-748A standard, published in May 1998 and reaffirmed in August 2002. The standard defines 32 criteria for full-featured EVM system compliance. As of the year 2007, a draft of ANSI/EIA-748B, a revision to the original is available from ANSI. Other countries have established similar standards.
In addition to using BCWS and BCWP, implementations often use the term actual cost of work performed (ACWP) instead of AC. Additional acronyms and formulas include:
Budget at completion (BAC).
According to the PMBOK (7th edition) by the Project Management Institute (PMI), Budget at Completion (BAC) is the "sum of all budgets established for the work to be performed."
It is the total planned value (PV or BCWS) at the end of the project. If a project has a management reserve (MR), it is typically "not" included in the BAC, and respectively, in the performance measurement baseline.
Cost variance (CV).
According to the PMBOK (7th edition) by the Project Management Institute (PMI), cost variance (CV) is "the amount of budget deficit or surplus at a given point in time, expressed as the difference between the earned value and the actual cost." Cost variance compares the estimated cost of a deliverable with the actual cost.
formula_1
CV greater than 0 is good (under budget).
Cost performance index (CPI).
According to the PMBOK (7th edition) by the Project Management Institute (PMI), the cost performance index is a "measure of the cost efficiency of budgeted resources expressed as the ratio of earned value to actual cost."
formula_2
CPI greater than 1 is favorable (under budget).
CPI that is less than 1 means that the cost of completing the work is higher than planned (bad).
When CPI is equal to 1, it means that the cost of completing the work is right on plan (good).
CPI greater than 1 means that the cost of completing the work is less than planned (good or sometimes bad).
Having a CPI that is very high (in some cases, very high is only 1.2) may mean that the plan was too conservative, and thus a very high number may in fact not be good, as the CPI is being measured against a poor baseline. Management or the customer may be upset with the planners as an overly conservative baseline ties up available funds for other purposes, and the baseline is also used for manpower planning.
Estimate at completion (EAC).
According to the PMBOK (7th edition) by the Project Management Institute (PMI), Estimate at completion (EAC) is the "expected total cost of completing all work expressed as the sum of the actual cost to date and the estimate to complete."
EAC is the manager's projection of total cost of the project at completion.
formula_3
This formula is based on the assumption that the performance of the project to date (or rather, its deviation from the baseline) gives a good indication of its performance (deviation from the baseline) in the future. In other words, the formula uses the statistics of the project to date to predict future results. It therefore has to be used carefully when the future nature of the project is likely to differ from that to date (e.g. the project's performance against the baseline during the design phase may not be a good indication of what it will be during the construction phase).
Estimate to complete (ETC).
According to the PMBOK (7th edition) by the Project Management Institute (PMI), Estimate to complete (ETC) is the "expected cost to finish all the remaining project work."
ETC is the estimate to complete the remaining work of the project. ETC must be based on objective measures of the outstanding work remaining, typically based on the measures or estimates used to create the original planned value (PV) profile, including any adjustments to predict performance based on historical performance, actions being taken to improve performance, or acknowledgement of degraded performance.
While algebraically, ETC = EAC-AC is correct, ETC should "never" be computed using either EAC or AC.
In the following equation:
formula_4
ETC is the independent variable, EAC is the dependent variable, and AC is fixed based on expenditures to date. ETC should always be reported truthfully to reflect the project team estimate to complete the outstanding work. If ETC pushes EAC to exceed BAC, then project management skills are employed to either recommend performance improvements or scope change, but never force ETC to give the "correct" answer so that EAC=BAC. Managing project activities to keep the project within budget is a human factors activity, not a mathematical function.
To-complete performance index (TCPI).
To-complete performance index (TCPI) is an earned value management measure that estimates the cost performance needed to achieve a particular management objective.
The TCPI provides a projection of the anticipated performance required to achieve either the BAC or the EAC. TCPI indicates the future required cost efficiency needed to achieve a target BAC (Budget At Complete) or EAC (Estimate At Complete). Any significant difference between CPI, the cost performance to date, and the TCPI, the cost performance needed to meet the BAC or the EAC, should be accounted for by management in their forecast of the final cost.
For the TCPI based on BAC (describing the performance required to meet the original BAC budgeted total):
formula_5
or for the TCPI based on EAC (describing the performance required to meet a new, revised budget total EAC):
formula_6
This implies that if the revised budget (EAC) is calculated with the earned value formula BAC/CPI, then at the moment the TCPI based on EAC is first calculated, it will always equal the CPI of the project at that moment. This happens because calculating EAC as BAC/CPI assumes that the cost performance of the remaining part of the project will be the same as the cost performance of the project to date.
Independent estimate at completion (IEAC).
The IEAC is a metric to project total cost using the performance to date to project overall performance. This can be compared to the EAC, which is the manager's projection.
formula_7
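Taken together, the forecasting formulas chain as in this sketch (plain Python; BAC, EV, and AC are invented status figures):

```python
BAC = 1_000_000.0   # budget at completion (assumed)
EV  =   400_000.0   # earned value to date (assumed)
AC  =   500_000.0   # actual cost to date (assumed)

CPI = EV / AC                        # 0.80: $0.80 of work earned per $1.00 spent
EAC = BAC / CPI                      # 1,250,000: projected total cost at completion
ETC = EAC - AC                       #   750,000: projected cost of the remaining work
TCPI_BAC = (BAC - EV) / (BAC - AC)   # 1.20: efficiency needed to still finish on BAC
TCPI_EAC = (BAC - EV) / (EAC - AC)   # 0.80: equals CPI when EAC is taken as BAC/CPI
IEAC = AC + (BAC - EV) / CPI         # 1,250,000: independent estimate at completion

print(CPI, EAC, ETC, TCPI_BAC, TCPI_EAC, IEAC)
```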
Limitations.
Proponents of EVM note a number of issues with implementing it, and further limitations may be inherent to the concept itself.
Because EVM requires quantification of a project plan, it is often perceived to be inapplicable to discovery-driven or Agile software development projects. For example, it may be impossible to plan certain research projects far in advance, because research itself uncovers some opportunities (research paths) and actively eliminates others. However, another school of thought holds that all work can be planned, even if in weekly timeboxes or other short increments.
Traditional EVM is not intended for non-discrete (continuous) effort. In traditional EVM standards, non-discrete effort is called "level of effort" (LOE). If a project plan contains a significant portion of LOE, and the LOE is intermixed with discrete effort, EVM results will be contaminated. This is another area of EVM research.
Traditional definitions of EVM typically assume that project accounting and project network schedule management are prerequisites to achieving any benefit from EVM. Many small projects don't satisfy either of these prerequisites, but they too can benefit from EVM, as described for simple implementations, above. Other projects can be planned with a project network, but do not have access to true and timely actual cost data. In practice, the collection of true and timely actual cost data can be the most difficult aspect of EVM. Such projects can benefit from EVM, as described for intermediate implementations, above, and earned schedule.
As a means of overcoming objections to EVM's lack of connection to qualitative performance issues, the Naval Air Systems Command (NAVAIR) PEO(A) organization initiated a project in the late 1990s to integrate true technical achievement into EVM projections by utilizing risk profiles. These risk profiles anticipate opportunities that may be revealed and possibly be exploited as development and testing proceeds. The published research resulted in a Technical Performance Management (TPM) methodology and software application that is still used by many DoD agencies in informing EVM estimates with technical achievement.
The research was peer-reviewed and was the recipient of the Defense Acquisition University Acquisition Research Symposium 1997 Acker Award for excellence in the exchange of information in the field of acquisition research.
There is the difficulty inherent for any periodic monitoring of synchronizing data timing: actual deliveries, actual invoicing, and the date the EVM analysis is done are all independent, so that some items have arrived but their invoicing has not and by the time analysis is delivered the data will likely be weeks behind events. This may limit EVM to a less tactical or less definitive role where use is combined with other forms to explain why or add recent news and manage future expectations.
There is a measurement limitation for how precisely EVM can be used, stemming from classic conflict between accuracy and precision, as the mathematics can calculate deceptively far beyond the precision of the measurements of data and the approximation that is the plan estimation. The limitation on estimation is commonly understood (such as the ninety–ninety rule in software) but is not visible in any margin of error. The limitations on measurement are largely a form of digitization error as EVM measurements ultimately can be no finer than by item, which may be the work breakdown structure terminal element size, to the scale of reporting period, typically end summary of a month, and by the means of delivery measure. (The delivery measure may be actual deliveries, may include estimates of partial work done at the end of month subject to estimation limits, and typically does not include QC check or risk offsets.)
As traditionally implemented, EVM deals with, and is based in, budget and cost. It has no relationship to the investment value or benefit for which the project has been funded and undertaken. Yet due to the use of the word "value" in the name, this fact is often misunderstood. However, earned value metrics can be used to compute the cost and schedule inputs to Devaux's Index of Project Performance (the DIPP), which integrates schedule and cost performance with the planned investment value of the project's scope across the project management triangle.
Darling & Whitty (2019) conducted an ethnographic study of how EVM is implemented. Applying Goffman's dramaturgy (sociology), they found a sham act occurring through impression management: statistical data presented as fact even when the data may be worthless. Their findings include that sham progress reporting can emerge in an environment where senior management's ignorance of project work creates unworkable binds for project staff. Moreover, the sham behaviour succeeds at its objective because senior management are vulnerable to false impressions. This situation raises ethical issues for those involved and creates an overhead in dealing with the reality of project work. The study is also pertinent in that it provides sociological insight into how a scientific management technique has been implemented.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{align}\n\\mathrm{EV} & = \\sum_\\mathrm{Start}^\\mathrm{Current} \\mathrm{PV(Completed)} \\quad \\mathrm{or} \\quad \\mathrm{EV} = \\mathrm{budget\\, at\\, Completion\\,(BAC)} \\times \\mathrm{Actual\\%\\, Complete}\n\\end{align}\n"
},
{
"math_id": 1,
"text": "\n\\begin{align}\nCV = EV - AC\n\\end{align}\n"
},
{
"math_id": 2,
"text": "\n\\begin{align}\nCPI = {EV\\over AC}\n\\end{align}\n"
},
{
"math_id": 3,
"text": "\n\\begin{align}\nEAC = AC + {(BAC-EV)\\over CPI} = {BAC \\over CPI}\n\\end{align}\n"
},
{
"math_id": 4,
"text": "\n\\begin{align}\nEAC = AC + ETC\n\\end{align}\n"
},
{
"math_id": 5,
"text": "TCPI_{BAC} = { BAC - EV \\over BAC - AC }"
},
{
"math_id": 6,
"text": "TCPI_{EAC} = { BAC - EV \\over EAC - AC }"
},
{
"math_id": 7,
"text": "IEAC = \\sum AC + { \\left( BAC - \\sum EV \\right) \\over CPI }"
}
] | https://en.wikipedia.org/wiki?curid=9728 |
972868 | Hyperfinite type II factor | Unique von Neumann algebra
In mathematics, there are up to isomorphism exactly two separably acting hyperfinite type II factors; one infinite and one finite. Murray and von Neumann proved that up to isomorphism there is a unique von Neumann algebra that is a factor of type II1 and also hyperfinite; it is called the hyperfinite type II1 factor.
There are uncountably many other factors of type II1. Connes proved that the infinite one is also unique.
Properties.
The hyperfinite II1 factor "R" is the unique smallest infinite-dimensional factor in the following sense: it is contained in any other infinite-dimensional factor, and any infinite-dimensional factor contained in "R" is isomorphic to "R".
The outer automorphism group of "R" is an infinite simple group with countably many conjugacy classes, indexed by pairs consisting of a positive integer "p" and a complex "p"th root of 1.
The projections of the hyperfinite II1 factor form a continuous geometry.
The infinite hyperfinite type II factor.
While there are other factors of type II∞, there is a unique hyperfinite one, up to isomorphism. It consists of those infinite square matrices with entries in the hyperfinite type II1 factor that define bounded operators. | [
{
"math_id": 0,
"text": " 1"
},
{
"math_id": 1,
"text": " 1- 1/n"
}
] | https://en.wikipedia.org/wiki?curid=972868 |
9730285 | Spherical design | A spherical design, part of combinatorial design theory in mathematics, is a finite set of "N" points on the "d"-dimensional unit "d"-sphere "Sd" such that the average value of any polynomial "f" of degree "t" or less on the set equals the average value of "f" on the whole sphere (that is, the integral of "f" over "Sd" divided by the area or measure of "Sd"). Such a set is often called a spherical "t"-design to indicate the value of "t", which is a fundamental parameter. The concept of a spherical design is due to Delsarte, Goethals, and Seidel, although these objects were understood as particular examples of cubature formulas earlier.
Spherical designs can be of value in approximation theory, in statistics for experimental design, in combinatorics, and in geometry. The main problem is to find examples, given "d" and "t", that are not too large; however, such examples may be hard to come by.
Spherical t-designs have also recently been appropriated in quantum mechanics in the form of quantum t-designs with various applications to quantum information theory and quantum computing.
Existence of spherical designs.
The existence and structure of spherical designs on the circle were studied in depth by Hong. Shortly thereafter, Seymour and Zaslavsky proved that such designs exist of all sufficiently large sizes; that is, given positive integers "d" and "t", there is a number "N"("d","t") such that for every "N" ≥ "N"("d","t") there exists a spherical "t"-design of "N" points in dimension "d". However, their proof gave no idea of how big "N"("d","t") is.
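On the circle this can be checked numerically: the vertices of a regular "N"-gon average all polynomials of degree up to "N" − 1 exactly as the whole circle does, but fail at degree "N". The following is a minimal sketch (assuming NumPy; the function name and the brute-force comparison against a fine reference grid are illustrative choices, not taken from the sources above):
import numpy as np

def max_design_defect(N, t):
    # Largest gap between the N-point average and the true circle average of
    # the monomials x**i * y**j with i + j <= t, for N equally spaced points.
    theta = 2 * np.pi * np.arange(N) / N
    x, y = np.cos(theta), np.sin(theta)
    # A fine uniform grid is itself a very high-order design, so its average
    # reproduces the exact circle average for these low degrees.
    phi = 2 * np.pi * np.arange(200000) / 200000
    xc, yc = np.cos(phi), np.sin(phi)
    defect = 0.0
    for i in range(t + 1):
        for j in range(t + 1 - i):
            defect = max(defect, abs(np.mean(x**i * y**j) - np.mean(xc**i * yc**j)))
    return defect

print(max_design_defect(6, 5))  # ~1e-16: the regular hexagon is a spherical 5-design on S^1
print(max_design_defect(6, 6))  # ~0.03: it fails already at degree 6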
Mimura constructively found conditions in terms of the number of points and the dimension which characterize exactly when spherical 2-designs exist. Maximally sized collections of equiangular lines (up to identification of lines as antipodal points on the sphere) are examples of minimal sized spherical 5-designs. There are many sporadic small spherical designs; many of them are related to finite group actions on the sphere.
In 2013, Bondarenko, Radchenko, and Viazovska obtained the asymptotic upper bound
formula_0 for all positive integers "d" and "t". This asymptotically matches the lower bound given originally by Delsarte, Goethals, and Seidel. The value of "Cd" is currently unknown, while exact values of formula_1 are known in relatively few cases.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " N(d,t)<C_d t^d"
},
{
"math_id": 1,
"text": " N(d,t)"
}
] | https://en.wikipedia.org/wiki?curid=9730285 |
97313 | Jacobi symbol | Generalization of the Legendre symbol in number theory
Jacobi symbol ("k"/"n") for various "k" (along top) and "n" (along left side). Only 0 ≤ "k" < "n" are shown, since due to rule (2) below any other "k" can be reduced modulo "n". Quadratic residues are highlighted in yellow — note that no entry with a Jacobi symbol of −1 is a quadratic residue, and if "k" is a quadratic residue modulo a coprime "n", then ("k"/"n") = 1, but not all entries with a Jacobi symbol of 1 (see the "n" = 9 and "n" = 15 rows) are quadratic residues. Notice also that when either "n" or "k" is a square, all values are nonnegative.
The Jacobi symbol is a generalization of the Legendre symbol. Introduced by Jacobi in 1837, it is of theoretical interest in modular arithmetic and other branches of number theory, but its main use is in computational number theory, especially primality testing and integer factorization; these in turn are important in cryptography.
Definition.
For any integer "a" and any positive odd integer "n", the Jacobi symbol ("a"/"n") is defined as the product of the Legendre symbols corresponding to the prime factors of "n":
formula_0
where
formula_1
is the prime factorization of "n".
The Legendre symbol ("a"/"p") is defined for all integers "a" and all odd primes "p" by
formula_2
Following the normal convention for the empty product, ("a"/1) = 1.
When the lower argument is an odd prime, the Jacobi symbol is equal to the Legendre symbol.
Table of values.
The following is a table of values of Jacobi symbol ("k"/"n") with "n" ≤ 59, "k" ≤ 30, "n" odd.
Properties.
The following facts, even the reciprocity laws, are straightforward deductions from the definition of the Jacobi symbol and the corresponding properties of the Legendre symbol.
The Jacobi symbol is defined only when the upper argument ("numerator") is an integer and the lower argument ("denominator") is a positive odd integer.
1. If "n" is (an odd) prime, then the Jacobi symbol () is equal to (and written the same as) the corresponding Legendre symbol.
2. If "a" ≡ "b" (mod "n"), then formula_3
3. formula_4
If either the top or bottom argument is fixed, the Jacobi symbol is a completely multiplicative function in the remaining argument:
4. formula_5
5. formula_6
The law of quadratic reciprocity: if "m" and "n" are odd positive coprime integers, then
6. formula_7
and its supplements
7. formula_8,
and formula_9
8. formula_10
Combining properties 4 and 8 gives:
9. formula_11
Like the Legendre symbol:
If ("a"/"n") = −1 then "a" is a quadratic nonresidue modulo "n".
If "a" is a quadratic residue modulo "n" and gcd("a", "n") = 1, then ("a"/"n") = 1.
But, unlike the Legendre symbol:
If ("a"/"n") = 1 then "a" may or may not be a quadratic residue modulo "n".
This is because for "a" to be a quadratic residue modulo "n", it has to be a quadratic residue modulo "every" prime factor of "n". However, the Jacobi symbol equals one if, for example, "a" is a non-residue modulo exactly two of the prime factors of "n".
Although the Jacobi symbol cannot be uniformly interpreted in terms of squares and non-squares, it can be uniformly interpreted as the sign of a permutation by Zolotarev's lemma.
The Jacobi symbol ("a"/"n") is a Dirichlet character to the modulus "n".
Calculating the Jacobi symbol.
The above formulas lead to an efficient "O"(log "a" log "b") algorithm for calculating the Jacobi symbol, analogous to the Euclidean algorithm for finding the gcd of two numbers. (This should not be surprising in light of rule 2.)
In addition to the code below, Riesel has it in Pascal.
Implementation in Lua.
function jacobi(n, k)
  assert(k > 0 and k % 2 == 1)
  n = n % k
  local t = 1
  while n ~= 0 do
    while n % 2 == 0 do
      n = n / 2
      local r = k % 8
      if r == 3 or r == 5 then
        t = -t
      end
    end
    n, k = k, n
    if n % 4 == 3 and k % 4 == 3 then
      t = -t
    end
    n = n % k
  end
  if k == 1 then
    return t
  else
    return 0
  end
end
Implementation in C++.
#include <cassert>

// a/n is represented as (a,n)
int jacobi(int a, int n) {
    assert(n > 0 && n%2 == 1);
    //step 1
    a = a % n;
    int t = 1;
    int r;
    //step 3
    while (a != 0) {
        //step 2
        while (a % 2 == 0) {
            a /= 2;
            r = n % 8;
            if (r == 3 || r == 5) {
                t = -t;
            }
        }
        //step 4
        r = n;
        n = a;
        a = r;
        if (a % 4 == 3 && n % 4 == 3) {
            t = -t;
        }
        a = a % n;
    }
    if (n == 1) {
        return t;
    }
    else {
        return 0;
    }
}
Example of calculations.
The Legendre symbol ("a"/"p") is only defined for odd primes "p". It obeys the same rules as the Jacobi symbol (i.e., reciprocity and the supplementary formulas for (−1/"p") and (2/"p") and multiplicativity of the "numerator".)
Problem: Given that 9907 is prime, calculate (1001/9907).
Using the Legendre symbol.
formula_12
Using the Jacobi symbol.
formula_13
The difference between the two calculations is that when the Legendre symbol is used the "numerator" has to be factored into prime powers before the symbol is flipped. This makes the calculation using the Legendre symbol significantly slower than the one using the Jacobi symbol, as there is no known polynomial-time algorithm for factoring integers. In fact, this is why Jacobi introduced the symbol.
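Since 9907 is prime, the result can also be cross-checked with Euler's criterion; a one-line sketch in Python (using the built-in three-argument pow for modular exponentiation):
print(pow(1001, (9907 - 1) // 2, 9907) == 9907 - 1)  # True, confirming (1001/9907) = -1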
Primality testing.
There is another way the Jacobi and Legendre symbols differ. If the Euler's criterion formula is used modulo a composite number, the result may or may not be the value of the Jacobi symbol, and in fact may not even be −1 or 1. For example,
formula_14
So if it is unknown whether a number "n" is prime or composite, we can pick a random number "a", calculate the Jacobi symbol ("a"/"n") and compare it with Euler's formula; if they differ modulo "n", then "n" is composite; if they have the same residue modulo "n" for many different values of "a", then "n" is "probably prime".
This is the basis for the probabilistic Solovay–Strassen primality test and refinements such as the Baillie–PSW primality test and the Miller–Rabin primality test.
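A minimal sketch of one round of such a test, transcribed into Python (the function names and the choice of the Carmichael number 561 as test input are illustrative assumptions; the jacobi helper mirrors the algorithm given above):
def jacobi(a, n):
    # iterative Jacobi symbol, following the algorithm given earlier
    assert n > 0 and n % 2 == 1
    a %= n
    t = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                t = -t
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            t = -t
        a %= n
    return t if n == 1 else 0

def solovay_strassen_round(n, a):
    # returns False when the base a witnesses that n is composite
    j = jacobi(a, n) % n                     # map -1 to n - 1 for the comparison
    return j != 0 and pow(a, (n - 1) // 2, n) == j

n = 561                                       # 3 * 11 * 17, a Carmichael number
witnesses = sum(1 for a in range(2, n - 1) if not solovay_strassen_round(n, a))
print(witnesses, "of", n - 3, "bases expose 561 as composite")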
As an indirect use, it is possible to use it as an error detection routine during the execution of the Lucas–Lehmer primality test which, even on modern computer hardware, can take weeks to complete when processing Mersenne numbers over formula_15 (the largest known Mersenne prime as of December 2018). In nominal cases, the Jacobi symbol:
formula_16
This also holds for the final residue formula_17 and hence can be used as a verification of probable validity. However, if an error occurs in the hardware, there is a 50% chance that the result will become 0 or 1 instead, and won't change with subsequent terms of formula_18 (unless another error occurs and changes it back to -1).
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\left(\\frac{a}{n}\\right) := \\left(\\frac{a}{p_1}\\right)^{\\alpha_1}\\left(\\frac{a}{p_2}\\right)^{\\alpha_2}\\cdots \\left(\\frac{a}{p_k}\\right)^{\\alpha_k},"
},
{
"math_id": 1,
"text": "n=p_1^{\\alpha_1}p_2^{\\alpha_2}\\cdots p_k^{\\alpha_k}"
},
{
"math_id": 2,
"text": "\\left(\\frac{a}{p}\\right) := \\left\\{\n\\begin{array}{rl}\n0 & \\text{if } a \\equiv 0 \\pmod{p},\\\\\n1 & \\text{if } a \\not\\equiv 0\\pmod{p} \\text{ and for some integer } x\\colon\\;a\\equiv x^2\\pmod{p},\\\\\n-1 & \\text{if } a \\not\\equiv 0\\pmod{p} \\text{ and there is no such } x.\n\\end{array}\n\\right."
},
{
"math_id": 3,
"text": "\\left(\\frac{a}{n}\\right) = \\left(\\frac{b}{n}\\right) = \\left(\\frac{a \\pm m \\cdot n}{n}\\right)"
},
{
"math_id": 4,
"text": "\\left(\\frac{a}{n}\\right) = \n\\begin{cases}\n0 & \\text{if } \\gcd(a,n) \\ne 1,\\\\\n\\pm1 & \\text{if } \\gcd(a,n) = 1.\n\\end{cases}\n"
},
{
"math_id": 5,
"text": "\\left(\\frac{ab}{n}\\right) = \\left(\\frac{a}{n}\\right)\\left(\\frac{b}{n}\\right),\\quad\\text{so } \\left(\\frac{a^2}{n}\\right) = \\left(\\frac{a}{n}\\right)^2 = 1 \\text{ or } 0."
},
{
"math_id": 6,
"text": "\\left(\\frac{a}{mn}\\right)=\\left(\\frac{a}{m}\\right)\\left(\\frac{a}{n}\\right),\\quad\\text{so } \\left(\\frac{a}{n^2}\\right) = \\left(\\frac{a}{n}\\right)^2 = 1 \\text{ or } 0."
},
{
"math_id": 7,
"text": "\\left(\\frac{m}{n}\\right)\\left(\\frac{n}{m}\\right) = (-1)^{\\tfrac{m-1}{2}\\cdot\\tfrac{n-1}{2}} = \n\\begin{cases}\n1 & \\text{if } n \\equiv 1 \\pmod 4 \\text{ or } m \\equiv 1 \\pmod 4,\\\\\n-1 & \\text{if } n\\equiv m \\equiv 3 \\pmod 4\n\\end{cases}\n"
},
{
"math_id": 8,
"text": "\\left(\\frac{-1}{n}\\right) = (-1)^\\tfrac{n-1}{2} = \n\\begin{cases} \n1 & \\text{if }n \\equiv 1 \\pmod 4,\\\\\n-1 & \\text{if }n \\equiv 3 \\pmod 4,\n\\end{cases}\n"
},
{
"math_id": 9,
"text": "\\left(\\frac{1}{n}\\right) = \\left(\\frac{n}{1}\\right) = 1\n"
},
{
"math_id": 10,
"text": "\\left(\\frac{2}{n}\\right) = (-1)^\\tfrac{n^2-1}{8} = \n\\begin{cases} 1 & \\text{if }n \\equiv 1,7 \\pmod 8,\\\\\n-1 & \\text{if }n \\equiv 3,5\\pmod 8.\n\\end{cases}\n"
},
{
"math_id": 11,
"text": "\n\\left(\\frac{2a}{n}\\right) = \\left(\\frac{2}{n}\\right) \\left(\\frac{a}{n}\\right) = \n\\begin{cases} \\left(\\frac{a}{n}\\right) & \\text{if }n \\equiv 1,7 \\pmod 8,\\\\\n{-\\left(\\frac{a}{n}\\right)} & \\text{if }n \\equiv 3,5\\pmod 8.\n\\end{cases}\n"
},
{
"math_id": 12,
"text": "\\begin{align}\n\\left(\\frac{1001}{9907}\\right) \n&=\\left(\\frac{7}{9907}\\right) \\left(\\frac{11}{9907}\\right) \\left(\\frac{13}{9907}\\right). \n\\\\\n\\left(\\frac{7}{9907}\\right) \n&=-\\left(\\frac{9907}{7}\\right) \n=-\\left(\\frac{2}{7}\\right) \n=-1.\n\\\\\n\\left(\\frac{11}{9907}\\right) \n&=-\\left(\\frac{9907}{11}\\right) \n=-\\left(\\frac{7}{11}\\right) \n=\\left(\\frac{11}{7}\\right) \n=\\left(\\frac{4}{7}\\right)\n=1.\n\\\\\n\\left(\\frac{13}{9907}\\right) \n&=\\left(\\frac{9907}{13}\\right) \n=\\left(\\frac{1}{13}\\right)\n=1.\n\\\\\n\\left(\\frac{1001}{9907}\\right) &=-1. \\end{align}"
},
{
"math_id": 13,
"text": "\\begin{align}\n \\left(\\frac{1001}{9907}\\right) \n&=\\left(\\frac{9907}{1001}\\right) \n =\\left(\\frac{898}{1001}\\right) \n =\\left(\\frac{2}{1001}\\right)\\left(\\frac{449}{1001}\\right)\n =\\left(\\frac{449}{1001}\\right)\n\\\\\n&=\\left(\\frac{1001}{449}\\right) \n =\\left(\\frac{103}{449}\\right) \n =\\left(\\frac{449}{103}\\right) \n =\\left(\\frac{37}{103}\\right) \n =\\left(\\frac{103}{37}\\right) \n\\\\\n&=\\left(\\frac{29}{37}\\right) \n =\\left(\\frac{37}{29}\\right) \n =\\left(\\frac{8}{29}\\right) \n =\\left(\\frac{2}{29}\\right)^3 \n =-1.\n\\end{align}"
},
{
"math_id": 14,
"text": "\\begin{align}\n\\left(\\frac{19}{45}\\right) &= 1 &&\\text{ and } & 19^\\frac{45-1}{2} &\\equiv 1\\pmod{45}. \\\\\n\\left(\\frac{ 8}{21}\\right) &= -1 &&\\text{ but } & 8^\\frac{21-1}{2} &\\equiv 1\\pmod{21}. \\\\\n\\left(\\frac{ 5}{21}\\right) &= 1 &&\\text{ but } & 5^\\frac{21-1}{2} &\\equiv 16\\pmod{21}.\n\\end{align}"
},
{
"math_id": 15,
"text": "\\begin{align}2^{82,589,933} - 1\\end{align}"
},
{
"math_id": 16,
"text": "\\begin{align}\\left(\\frac{s_i - 2}{M_p}\\right) &= -1 & i \\ne 0\\end{align}"
},
{
"math_id": 17,
"text": "\\begin{align}s_{p-2}\\end{align}"
},
{
"math_id": 18,
"text": "\\begin{align}s\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=97313 |
9731945 | Probability matching | Probability matching is a decision strategy in which predictions of class membership are proportional to the class base rates. Thus, if in the training set positive examples are observed 60% of the time, and negative examples are observed 40% of the time, then the observer using a "probability-matching" strategy will predict (for unlabeled examples) a class label of "positive" on 60% of instances, and a class label of "negative" on 40% of instances.
The optimal Bayesian decision strategy (to maximize the number of correct predictions) in such a case is to always predict "positive" (i.e., predict the majority category in the absence of other information), which has a 60% chance of winning, rather than matching, which has only a 52% chance of winning (where "p" is the probability of a positive realization, the expected accuracy of matching is formula_0; here formula_1 = 0.52). The probability-matching strategy is of psychological interest because it is frequently employed by human subjects in decision and classification studies (where it may be related to Thompson sampling).
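A short simulation illustrates the gap between the two strategies (a sketch; the function and variable names are invented for illustration):
import random

def accuracy(p, predict_positive_prob, trials=100000, seed=1):
    # fraction of correct predictions when the true class is "positive" with
    # probability p and we predict "positive" with the given probability
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        outcome = rng.random() < p
        prediction = rng.random() < predict_positive_prob
        correct += outcome == prediction
    return correct / trials

p = 0.6
print(accuracy(p, p))    # probability matching: about 0.52 = p**2 + (1 - p)**2
print(accuracy(p, 1.0))  # always predict the majority class: about 0.60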
The only case in which probability matching yields the same results as the Bayesian decision strategy mentioned above is when all class base rates are the same. So, if in the training set positive examples are observed 50% of the time, then the Bayesian strategy would yield 50% accuracy (1 × .5), just as probability matching would (.5 × .5 + .5 × .5).
{
"math_id": 0,
"text": "p^2+(1-p)^2"
},
{
"math_id": 1,
"text": ".6 \\times .6+ .4 \\times .4"
}
] | https://en.wikipedia.org/wiki?curid=9731945 |
9732133 | Midpoint circle algorithm | Determines the points needed for rasterizing a circle
In computer graphics, the midpoint circle algorithm is an algorithm used to determine the points needed for rasterizing a circle. It's a generalization of Bresenham's line algorithm. The algorithm can be further generalized to conic sections.
Summary.
This algorithm draws all eight octants simultaneously, starting from each cardinal direction (0°, 90°, 180°, 270°) and extending both ways to reach the nearest multiple of 45° (45°, 135°, 225°, 315°). It can determine where to stop because, when y = x, it has reached 45°. The reason for using these angles is shown in the above picture: as x increases, it neither skips nor repeats any x value until reaching 45°. So during the "while" loop, x increments by 1 with each iteration, and y decrements by 1 on occasion, never by more than 1 in one iteration. This changes at 45° because that is the point where the tangent is rise = run, whereas rise > run before and rise < run after.
The second part of the problem, the determinant, is far trickier. This determines when to decrement y. It usually comes after drawing the pixels in each iteration, because it never goes below the radius on the first pixel. Because in a continuous function, the function for a sphere is the function for a circle with the radius dependent on z (or whatever the third variable is), it stands to reason that the algorithm for a discrete (voxel) sphere would also rely on the midpoint circle algorithm. But when looking at a sphere, the integer radius of some adjacent circles is the same, but it is not expected to have the same exact circle adjacent to itself in the same hemisphere. Instead, a circle of the same radius needs a different determinant, to allow the curve to come in slightly closer to the center or extend out farther.
Algorithm.
The objective of the algorithm is to approximate a circle, more formally the curve formula_0, using pixels; in layman's terms, every pixel should be approximately the same distance from the center, as is the definition of a circle. At each step, the path is extended by choosing the adjacent pixel which satisfies formula_1 but maximizes formula_2. Since the candidate pixels are adjacent, the arithmetic to calculate the latter expression is simplified, requiring only bit shifts and additions. But a simplification can be done in order to understand the bitshift. Keep in mind that a left bitshift of a binary number is the same as multiplying by 2. Ergo, a left bitshift of the radius only produces the diameter, which is defined as radius times two.
This algorithm starts with the circle equation. For simplicity, assume the center of the circle is at formula_3. To start with, consider the first octant only, and draw a curve which starts at point formula_4 and proceeds counterclockwise, reaching the angle of 45°.
The "fast" direction here (the basis vector with the greater increase in value) is the formula_5 direction (see Differentiation of trigonometric functions). The algorithm always takes a step in the positive formula_5 direction (upwards), and occasionally takes a step in the "slow" direction (the negative formula_6 direction).
From the circle equation is obtained the transformed equation formula_7, where formula_8 is computed only once during initialization.
Let the points on the circle be a sequence of coordinates of the vector to the point (in the usual basis). Points are numbered according to the order in which drawn, with formula_9 assigned to the first point formula_4.
For each point, the following holds:
formula_10
This can be rearranged thus:
formula_11
And likewise for the next point:
formula_12
Since for the first octant the next point will always be at least 1 pixel higher than the last (but also at most 1 pixel higher to maintain continuity), it is true that:
formula_13
formula_14
So, rework the next-point-equation into a recursive one by substituting formula_15:
formula_16
Because of the continuity of a circle and because the maxima along both axes are the same, clearly it will not be skipping x points as it advances in the sequence. Usually it stays on the same x coordinate, and sometimes advances by one to the left.
The resulting coordinate is then translated by adding midpoint coordinates. These frequent integer additions do not limit the performance much, as those square (root) computations can be spared in the inner loop in turn. Again, the zero in the transformed circle equation is replaced by the error term.
The initialization of the error term is derived from an offset of ½ pixel at the start. Until the intersection with the perpendicular line, this leads to an accumulated value of formula_17 in the error term, so that this value is used for initialization.
The frequent computations of squares in the circle equation, trigonometric expressions and square roots can again be avoided by dissolving everything into single steps and using recursive computation of the quadratic terms from the preceding iterations.
Variant with integer-based arithmetic.
Just as with Bresenham's line algorithm, this algorithm can be optimized for integer-based math. Because of symmetry, if an algorithm can be found that only computes the pixels for one octant, the pixels can be reflected to get the whole circle.
We start by defining the radius error as the difference between the exact representation of the circle and the center point of each pixel (or any other arbitrary mathematical point on the pixel, so long as it's consistent across all pixels). For any pixel with a center at formula_18, the radius error is defined as:
formula_19
For clarity, this formula for a circle is derived at the origin, but the algorithm can be modified for any location. It is useful to start with the point formula_4 on the positive X-axis. Because the radius will be a whole number of pixels, clearly the radius error will be zero:
formula_20
Because it starts in the first counter-clockwise positive octant, it will step in the direction with the greatest "travel", the Y direction, so it is clear that formula_21. Also, because it concerns this octant only, the X values have only 2 options: to stay the same as the prior iteration, or decrease by 1. A decision variable can be created that determines if the following is true:
formula_22
If this inequality holds, then plot formula_23; if not, then plot formula_24. So, how to determine if this inequality holds? Start with a definition of radius error:
formula_25
The absolute value function does not help, so square both sides, since a square is always positive:
formula_26
Since "x" > 0, the term formula_27, so dividing both sides by it (and reversing the direction of the inequality) gives:
formula_28
Thus, the decision criterion changes from using floating-point operations to simple integer addition, subtraction, and bit shifting (for the multiply by 2 operations). If the left-hand side of the last inequality is positive, then decrement the "x" value. If it is zero or negative, then keep the same "x" value. Again, by reflecting these points in all the octants, a full circle results.
We may reduce computation by only calculating the delta between the values of this decision formula from its value at the previous step: the decision value is initialized with the formula evaluated at the starting point, and at each step it is updated incrementally. Whenever the decision value indicates that the "x" value should be decremented, the update uses the delta corresponding to a step in both "x" and "y" (and "x" is decremented); otherwise the simpler delta for a step in "y" alone is used; in either case "y" is incremented as usual.
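A compact Python sketch of the resulting integer algorithm is given below (the function name, the returned point list and the common "err = 1 - r" form of the decision-variable bookkeeping are presentational choices; the incremental err updates play exactly the role of the deltas described above):
def midpoint_circle(x0, y0, radius):
    # returns the rasterized circle centred on (x0, y0) as a list of pixels
    points = []
    x, y = radius, 0
    err = 1 - radius                      # decision value at the starting pixel
    while x >= y:
        # mirror the computed first-octant point into all eight octants
        for dx, dy in ((x, y), (y, x), (-y, x), (-x, y),
                       (-x, -y), (-y, -x), (y, -x), (x, -y)):
            points.append((x0 + dx, y0 + dy))
        y += 1
        if err < 0:                       # midpoint inside the circle: keep x
            err += 2 * y + 1
        else:                             # midpoint outside the circle: step x inwards
            x -= 1
            err += 2 * (y - x) + 1
    return points

print(sorted(set(midpoint_circle(0, 0, 3))))  # the 16 pixels of a radius-3 circle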
Jesko's Method.
The algorithm has already been explained to a large extent, but there are further optimizations.
The method presented here gets by with only five arithmetic operations per step (for eight pixels) and is thus well suited for low-performance systems. In the "if" operation, only the sign is checked (is it positive?) and a variable is assigned, neither of which counts as an arithmetic operation.
The initialization in the first line (shifting right by 4 bits, i.e. dividing by 16) is only for elegance and is not strictly necessary.
So we can count the operations within the main loop:
Operations: 5
"t1" = r / 16
"x" = r
"y" = 0
Repeat Until "x" < "y"
Pixel (x, y) and all symmetric pixels are colored (8 times)
"y" = "y" + 1
"t1" = "t1" + "y"
"t2" = "t1" - "x"
If "t2" >= 0
"t1" = "t2"
"x" = "x" - 1
Drawing incomplete octants.
The implementations above always draw only complete octants or circles. To draw only a certain arc from an angle formula_29 to an angle formula_30, the algorithm needs first to calculate the formula_6 and formula_5 coordinates of these end points, where it is necessary to resort to trigonometric or square root computations (see Methods of computing square roots). Then the Bresenham algorithm is run over the complete octant or circle and sets the pixels only if they fall into the wanted interval. After finishing this arc, the algorithm can be ended prematurely.
If the angles are given as slopes, then no trigonometry or square roots are necessary: simply check that formula_31 is between the desired slopes.
Generalizations.
It is also possible to use the same concept to rasterize a parabola, ellipse, or any other two-dimensional curve.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x^2 + y^2 = r^2"
},
{
"math_id": 1,
"text": "x^2 + y^2 \\leq r^2"
},
{
"math_id": 2,
"text": "x^2 + y^2 "
},
{
"math_id": 3,
"text": "(0,0)"
},
{
"math_id": 4,
"text": "(r,0)"
},
{
"math_id": 5,
"text": "y"
},
{
"math_id": 6,
"text": "x"
},
{
"math_id": 7,
"text": "x^2 + y^2 - r^2 = 0"
},
{
"math_id": 8,
"text": "r^2"
},
{
"math_id": 9,
"text": "n=1"
},
{
"math_id": 10,
"text": "\\begin{align} x_n^2 + y_n^2 = r^2 \\end{align}"
},
{
"math_id": 11,
"text": "\\begin{align} x_n^2 = r^2 - y_n^2 \\end{align}"
},
{
"math_id": 12,
"text": "\\begin{align} x_{n+1}^2 = r^2 - y_{n+1}^2 \\end{align}"
},
{
"math_id": 13,
"text": "\\begin{align} y_{n+1}^2 &= (y_n + 1)^2 \\\\ &= y_n^2 + 2y_n + 1 \\end{align}"
},
{
"math_id": 14,
"text": "\\begin{align} x_{n+1}^2 = r^2 - y_n^2 - 2y_n - 1 \\end{align}"
},
{
"math_id": 15,
"text": "x_n^2 = r^2 - y_n^2"
},
{
"math_id": 16,
"text": "\\begin{align} x_{n+1}^2 = x_n^2 - 2y_n - 1 \\end{align}"
},
{
"math_id": 17,
"text": "r"
},
{
"math_id": 18,
"text": "(x_i, y_i)"
},
{
"math_id": 19,
"text": "RE(x_i,y_i) = \\left\\vert x_i^2 + y_i^2 - r^2 \\right\\vert"
},
{
"math_id": 20,
"text": "RE(x_i,y_i) = \\left\\vert x_i^2 + 0^2 - r^2 \\right\\vert = 0"
},
{
"math_id": 21,
"text": " y_{i+1} = y_i + 1"
},
{
"math_id": 22,
"text": "RE(x_i-1, y_i+1) < RE(x_i,y_i+1)"
},
{
"math_id": 23,
"text": "(x_i-1, y_i+1)"
},
{
"math_id": 24,
"text": "(x_i,y_i+1)"
},
{
"math_id": 25,
"text": "\n\\begin{align}\nRE(x_i-1, y_i+1) & < RE(x_i,y_i+1) \\\\\n\\left\\vert (x_i-1)^2 + (y_i+1)^2 - r^2 \\right\\vert & < \\left\\vert x_i^2 + (y_i+1)^2 - r^2 \\right\\vert \\\\\n\\left\\vert (x_i^2 - 2 x_i + 1) + (y_i^2 + 2 y_i + 1) - r^2 \\right\\vert & < \\left\\vert x_i^2 + (y_i^2 + 2 y_i + 1) - r^2 \\right\\vert \\\\\n\\end{align}\n"
},
{
"math_id": 26,
"text": "\n\\begin{align}\n\\left [ (x_i^2 - 2 x_i + 1) + (y_i^2 + 2 y_i + 1) - r^2 \\right ]^2 & < \\left [ x_i^2 + (y_i^2 + 2 y_i + 1) - r^2 \\right ]^2 \\\\\n\\left [ (x_i^2 + y_i^2 - r^2 + 2 y_i + 1) + (1 - 2 x_i) \\right ]^2 & < \\left [ x_i^2 + y_i^2 - r^2 + 2 y_i + 1 \\right ]^2 \\\\\n\\left ( x_i^2 + y_i^2 - r^2 + 2 y_i + 1 \\right )^2 + 2 (1 - 2 x_i) (x_i^2 + y_i^2 - r^2 + 2 y_i + 1) + (1 - 2 x_i)^2 & < \\left [ x_i^2 + y_i^2 - r^2 + 2 y_i + 1 \\right ]^2 \\\\\n2 (1 - 2 x_i) (x_i^2 + y_i^2 - r^2 + 2 y_i + 1) + (1 - 2 x_i)^2 & < 0 \\\\\n\\end{align}\n"
},
{
"math_id": 27,
"text": "(1 - 2 x_i) < 0"
},
{
"math_id": 28,
"text": "\n\\begin{align}\n2 \\left [ (x_i^2 + y_i^2 - r^2) + (2 y_i + 1) \\right ] + (1 - 2 x_i) & > 0 \\\\\n2 \\left [ RE(x_i,y_i) + Y_\\text{Change} \\right ] + X_\\text{Change} & > 0 \\\\\n\\end{align}\n"
},
{
"math_id": 29,
"text": "\\alpha"
},
{
"math_id": 30,
"text": "\\beta"
},
{
"math_id": 31,
"text": "y/x"
}
] | https://en.wikipedia.org/wiki?curid=9732133 |
973479 | Pseudo-differential operator | Type of differential operator
In mathematical analysis a pseudo-differential operator is an extension of the concept of differential operator. Pseudo-differential operators are used extensively in the theory of partial differential equations and quantum field theory, e.g. in mathematical models that include ultrametric pseudo-differential equations in a non-Archimedean space.
History.
The study of pseudo-differential operators began in the mid 1960s with the work of Kohn, Nirenberg, Hörmander, Unterberger and Bokobza.
They played an influential role in the second proof of the Atiyah–Singer index theorem via K-theory. Atiyah and Singer thanked Hörmander for assistance with understanding the theory of pseudo-differential operators.
Motivation.
Linear differential operators with constant coefficients.
Consider a linear differential operator with constant coefficients,
formula_0
which acts on smooth functions formula_1 with compact support in R"n".
This operator can be written as a composition of a Fourier transform, a simple "multiplication" by the
polynomial function (called the symbol)
formula_2
and an inverse Fourier transform, in the form:
Here, formula_3 is a multi-index, formula_4 are complex numbers, and
formula_5
is an iterated partial derivative, where ∂"j" means differentiation with respect to the "j"-th variable. We introduce the constants formula_6 to facilitate the calculation of Fourier transforms.
The Fourier transform of a smooth function "u", compactly supported in R"n", is
formula_7
and Fourier's inversion formula gives
formula_8
By applying "P"("D") to this representation of "u" and using
formula_9
one obtains formula (1).
Representation of solutions to partial differential equations.
To solve the partial differential equation
formula_10
we (formally) apply the Fourier transform on both sides and obtain the "algebraic" equation
formula_11
If the symbol "P"(ξ) is never zero when ξ ∈ R"n", then it is possible to divide by "P"(ξ):
formula_12
By Fourier's inversion formula, a solution is
formula_13
Here it is assumed that:
1. "P"("D") is a linear differential operator with constant coefficients,
2. its symbol "P"(ξ) is never zero,
3. both "u" and ƒ have a well defined Fourier transform.
The last assumption can be weakened by using the theory of distributions.
The first two assumptions can be weakened as follows.
In the last formula, write out the Fourier transform of ƒ to obtain
formula_14
This is similar to formula (1), except that 1/"P"(ξ) is not a polynomial function, but a function of a more general kind.
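As a numerical illustration of this representation (a sketch assuming NumPy; a periodic interval stands in for the compactly supported setting, and the operator P(D) = 1 - d^2/dx^2, whose symbol 1 + ξ² never vanishes, is chosen only as an example):
import numpy as np

# Solve (1 - d^2/dx^2) u = f on [0, 2*pi) by multiplying the Fourier
# transform of f by 1/P(xi) = 1/(1 + xi^2) and transforming back.
L, N = 2 * np.pi, 256
x = np.linspace(0, L, N, endpoint=False)
xi = np.fft.fftfreq(N, d=L / N) * 2 * np.pi      # discrete angular frequencies
f = np.exp(np.cos(3 * x))                        # a smooth right-hand side
u = np.fft.ifft(np.fft.fft(f) / (1 + xi**2)).real
# applying the symbol P(xi) = 1 + xi^2 to u recovers f up to rounding error
residual = np.fft.ifft((1 + xi**2) * np.fft.fft(u)).real - f
print(np.max(np.abs(residual)))                  # ~1e-15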
Definition of pseudo-differential operators.
Here we view pseudo-differential operators as a generalization of differential operators.
We extend formula (1) as follows. A pseudo-differential operator "P"("x","D") on R"n" is an operator whose value on the function "u(x)" is the function of "x":
where formula_15 is the Fourier transform of "u" and the symbol "P"("x",ξ) in the integrand belongs to a certain "symbol class".
For instance, if "P"("x",ξ) is an infinitely differentiable function on R"n" × R"n" with the property
formula_16
for all "x",ξ ∈R"n", all multiindices α,β, some constants "C"α, β and some real number "m", then "P" belongs to the symbol class formula_17 of Hörmander. The corresponding operator "P"("x","D") is called a pseudo-differential operator of order m and belongs to the class
formula_18
Properties.
Linear differential operators of order m with smooth bounded coefficients are pseudo-differential
operators of order "m".
The composition "PQ" of two pseudo-differential operators "P", "Q" is again a pseudo-differential operator and the symbol of "PQ" can be calculated by using the symbols of "P" and "Q". The adjoint and transpose of a pseudo-differential operator is a pseudo-differential operator.
If a differential operator of order "m" is (uniformly) elliptic (of order "m")
and invertible, then its inverse is a pseudo-differential operator of order −"m", and its symbol can be calculated. This means that one can solve linear elliptic differential equations more or less explicitly
by using the theory of pseudo-differential operators.
Differential operators are "local" in the sense that one only needs the value of a function in a neighbourhood of a point to determine the effect of the operator. Pseudo-differential operators are "pseudo-local", which means informally that when applied to a distribution they do not create a singularity at points where the distribution was already smooth.
Just as a differential operator can be expressed in terms of "D" = −id/d"x" in the form
formula_19
for a polynomial "p" in "D" (which is called the "symbol"), a pseudo-differential operator has a symbol in a more general class of functions. Often one can reduce a problem in analysis of pseudo-differential operators to a sequence of algebraic problems involving their symbols, and this is the essence of microlocal analysis.
Kernel of pseudo-differential operator.
Pseudo-differential operators can be represented by kernels. The singularity of the kernel on the diagonal depends on the degree of the corresponding operator. In fact, if the symbol satisfies the above differential inequalities with m ≤ 0, it can be shown that the kernel is a singular integral kernel.
Footnotes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " P(D) := \\sum_\\alpha a_\\alpha \\, D^\\alpha "
},
{
"math_id": 1,
"text": "u"
},
{
"math_id": 2,
"text": " P(\\xi) = \\sum_\\alpha a_\\alpha \\, \\xi^\\alpha, "
},
{
"math_id": 3,
"text": "\\alpha = (\\alpha_1,\\ldots,\\alpha_n)"
},
{
"math_id": 4,
"text": "a_\\alpha"
},
{
"math_id": 5,
"text": "D^\\alpha=(-i \\partial_1)^{\\alpha_1} \\cdots (-i \\partial_n)^{\\alpha_n}"
},
{
"math_id": 6,
"text": "-i"
},
{
"math_id": 7,
"text": "\\hat u (\\xi) := \\int e^{- i y \\xi} u(y) \\, dy"
},
{
"math_id": 8,
"text": "u (x) = \\frac{1}{(2 \\pi)^n} \\int e^{i x \\xi} \\hat u (\\xi) d\\xi = \n\\frac{1}{(2 \\pi)^n} \\iint e^{i (x - y) \\xi} u (y) \\, dy \\, d\\xi "
},
{
"math_id": 9,
"text": "P(D_x) \\, e^{i (x - y) \\xi} = e^{i (x - y) \\xi} \\, P(\\xi) "
},
{
"math_id": 10,
"text": " P(D) \\, u = f "
},
{
"math_id": 11,
"text": " P(\\xi) \\, \\hat u (\\xi) = \\hat f(\\xi). "
},
{
"math_id": 12,
"text": " \\hat u(\\xi) = \\frac{1}{P(\\xi)} \\hat f(\\xi) "
},
{
"math_id": 13,
"text": " u (x) = \\frac{1}{(2 \\pi)^n} \\int e^{i x \\xi} \\frac{1}{P(\\xi)} \\hat f (\\xi) \\, d\\xi."
},
{
"math_id": 14,
"text": " u (x) = \\frac{1}{(2 \\pi)^n} \\iint e^{i (x-y) \\xi} \\frac{1}{P(\\xi)} f (y) \\, dy \\, d\\xi."
},
{
"math_id": 15,
"text": "\\hat{u}(\\xi)"
},
{
"math_id": 16,
"text": " |\\partial_\\xi^\\alpha \\partial_x^\\beta P(x,\\xi)| \\leq C_{\\alpha,\\beta} \\, (1 + |\\xi|)^{m - |\\alpha|} "
},
{
"math_id": 17,
"text": "\\scriptstyle{S^m_{1,0}}"
},
{
"math_id": 18,
"text": "\\Psi^m_{1,0}."
},
{
"math_id": 19,
"text": "p(x, D)\\,"
}
] | https://en.wikipedia.org/wiki?curid=973479 |
9735 | Electromagnetic field | Electric and magnetic fields produced by moving charged objects
An electromagnetic field (also EM field) is a physical field, described by mathematical functions of position and time, representing the influences on and due to electric charges. The field at any point in space and time can be regarded as a combination of an electric field and a magnetic field.
Because of the interrelationship between the fields, a disturbance in the electric field can create a disturbance in the magnetic field which in turn affects the electric field, leading to an oscillation that propagates through space, known as an "electromagnetic wave".
The way in which charges and currents (i.e. streams of charges) interact with the electromagnetic field is described by Maxwell's equations and the Lorentz force law. Maxwell's equations detail how the electric field converges towards or diverges away from electric charges, how the magnetic field curls around electrical currents, and how changes in the electric and magnetic fields influence each other. The Lorentz force law states that a charge subject to an electric field feels a force along the direction of the field, and a charge moving through a magnetic field feels a force that is perpendicular both to the magnetic field and to its direction of motion.
The electromagnetic field is described by classical electrodynamics, an example of a classical field theory. This theory describes many macroscopic physical phenomena accurately. However, it was unable to explain the photoelectric effect and atomic absorption spectroscopy, experiments at the atomic scale. That required the use of quantum mechanics, specifically the quantization of the electromagnetic field and the development of quantum electrodynamics.
History.
The empirical investigation of electromagnetism is at least as old as the ancient Greek philosopher, mathematician and scientist Thales of Miletus, who around 600 BCE described his experiments rubbing animal fur on various materials such as amber, creating static electricity. By the 18th century, it was understood that objects can carry positive or negative electric charge, that two objects carrying charge of the same sign repel each other, that two objects carrying charges of opposite sign attract one another, and that the strength of this force falls off as the square of the distance between them. Michael Faraday visualized this in terms of the charges interacting via the electric field. An electric field is produced when the charge is stationary with respect to an observer measuring the properties of the charge, and a magnetic field as well as an electric field are produced when the charge moves, creating an electric current with respect to this observer. Over time, it was realized that the electric and magnetic fields are better thought of as two parts of a greater whole—the electromagnetic field. In 1820, Hans Christian Ørsted showed that an electric current can deflect a nearby compass needle, establishing that electricity and magnetism are closely related phenomena. Faraday then made the seminal observation that time-varying magnetic fields could induce electric currents in 1831.
In 1861, James Clerk Maxwell synthesized all the work to date on electrical and magnetic phenomena into a single mathematical theory, from which he then deduced that light is an electromagnetic wave. Maxwell's continuous field theory was very successful until evidence supporting the atomic model of matter emerged. Beginning in 1877, Hendrik Lorentz developed an atomic model of electromagnetism and in 1897 J. J. Thomson completed experiments that defined the electron. The Lorentz theory works for free charges in electromagnetic fields, but fails to predict the energy spectrum for bound charges in atoms and molecules. For that problem, quantum mechanics is needed, ultimately leading to the theory of quantum electrodynamics.
Practical applications of the new understanding of electromagnetic fields emerged in the late 1800s. The electrical generator and motor were invented using only the empirical findings like Faraday's and Ampere's laws combined with practical experience.
Mathematical description.
There are different mathematical ways of representing the electromagnetic field. The first one views the electric and magnetic fields as three-dimensional vector fields. These vector fields each have a value defined at every point of space and time and are thus often regarded as functions of the space and time coordinates. As such, they are often written as E("x", "y", "z", "t") (electric field) and B("x", "y", "z", "t") (magnetic field).
If only the electric field (E) is non-zero, and is constant in time, the field is said to be an electrostatic field. Similarly, if only the magnetic field (B) is non-zero and is constant in time, the field is said to be a magnetostatic field. However, if either the electric or magnetic field has a time-dependence, then both fields must be considered together as a coupled electromagnetic field using Maxwell's equations.
With the advent of special relativity, physical laws became amenable to the formalism of tensors. Maxwell's equations can be written in tensor form, generally viewed by physicists as a more elegant means of expressing physical laws.
The behavior of electric and magnetic fields, whether in cases of electrostatics, magnetostatics, or electrodynamics (electromagnetic fields), is governed by Maxwell's equations. In the vector field formalism, these are:
Gauss's law: formula_0
Gauss's law for magnetism: formula_1
Faraday's law: formula_2
Ampère–Maxwell law: formula_3
where formula_4 is the charge density, which is a function of time and position, formula_5 is the vacuum permittivity, formula_6 is the vacuum permeability, and J is the current density vector, also a function of time and position. Inside a linear material, Maxwell's equations change by switching the permeability and permittivity of free space with the permeability and permittivity of the linear material in question. Inside other materials which possess more complex responses to electromagnetic fields, these terms are often represented by complex numbers, or tensors.
The Lorentz force law governs the interaction of the electromagnetic field with charged matter.
When the field travels across different media, its behavior changes according to the properties of those media.
Properties of the field.
Electrostatics and magnetostatics.
The Maxwell equations simplify when the charge density at each point in space does not change over time and all electric currents likewise remain constant. All of the time derivatives vanish from the equations, leaving two expressions that involve the electric field,
formula_7
and
formula_8
along with two formulae that involve the magnetic field:
formula_9
and
formula_10
These expressions are the basic equations of electrostatics, which focuses on situations where electrical charges do not move, and magnetostatics, the corresponding area of magnetic phenomena.
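As a quick symbolic check of these statements for the Coulomb field of a point charge (a sketch assuming SymPy; physical constants are dropped):
import sympy as sp

x, y, z = sp.symbols('x y z')
r = sp.sqrt(x**2 + y**2 + z**2)
E = sp.Matrix([x, y, z]) / r**3                      # point-charge field, up to constants
div_E = sum(sp.diff(E[i], c) for i, c in enumerate((x, y, z)))
curl_E = sp.Matrix([sp.diff(E[2], y) - sp.diff(E[1], z),
                    sp.diff(E[0], z) - sp.diff(E[2], x),
                    sp.diff(E[1], x) - sp.diff(E[0], y)])
print(sp.simplify(div_E))    # 0: no charge density away from the origin
print(sp.simplify(curl_E))   # zero vector: the electrostatic field is curl-free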
Transformations of electromagnetic fields.
Whether a physical effect is attributable to an electric field or to a magnetic field is dependent upon the observer, in a way that special relativity makes mathematically precise. For example, suppose that a laboratory contains a long straight wire that carries an electrical current. In the frame of reference where the laboratory is at rest, the wire is motionless and electrically neutral: the current, composed of negatively charged electrons, moves against a background of positively charged ions, and the densities of positive and negative charges cancel each other out. A test charge near the wire would feel no electrical force from the wire. However, if the test charge is in motion parallel to the current, the situation changes. In the rest frame of the test charge, the positive and negative charges in the wire are moving at different speeds, and so the positive and negative charge distributions are Lorentz-contracted by different amounts. Consequently, the wire has a nonzero net charge density, and the test charge must experience a nonzero electric field and thus a nonzero force. In the rest frame of the laboratory, there is no electric field to explain the test charge being pulled towards or pushed away from the wire. So, an observer in the laboratory rest frame concludes that a magnetic field must be present.
In general, a situation that one observer describes using only an electric field will be described by an observer in a different inertial frame using a combination of electric and magnetic fields. Analogously, a phenomenon that one observer describes using only a magnetic field will be, in a relatively moving reference frame, described by a combination of fields. The rules for relating the fields required in different reference frames are the Lorentz transformations of the fields.
Thus, electrostatics and magnetostatics are now seen as studies of the static EM field when a particular frame has been selected to suppress the other type of field, and since an EM field with both electric and magnetic will appear in any other frame, these "simpler" effects are merely a consequence of different frames of measurement. The fact that the two field variations can be reproduced just by changing the motion of the observer is further evidence that there is only a single actual field involved which is simply being observed differently.
Reciprocal behavior of electric and magnetic fields.
The two Maxwell equations, Faraday's Law and the Ampère–Maxwell Law, illustrate a very practical feature of the electromagnetic field. Faraday's Law may be stated roughly as "a changing magnetic field inside a loop creates an electric voltage around the loop". This is the principle behind the electric generator.
Ampere's Law roughly states that "an electrical current around a loop creates a magnetic field through the loop". Thus, this law can be applied to generate a magnetic field and run an electric motor.
Behavior of the fields in the absence of charges or currents.
Maxwell's equations can be combined to derive wave equations. The solutions of these equations take the form of an electromagnetic wave. In a volume of space not containing charges or currents (free space) – that is, where formula_4 and J are zero, the electric and magnetic fields satisfy these electromagnetic wave equations:
formula_11
formula_12
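These equations are satisfied by plane waves travelling at speed c; a quick symbolic check of one scalar component (a sketch assuming SymPy):
import sympy as sp

x, t, c, k = sp.symbols('x t c k', positive=True)
E = sp.cos(k * x - c * k * t)                     # plane-wave component with omega = c*k
wave_residual = sp.diff(E, x, 2) - sp.diff(E, t, 2) / c**2
print(sp.simplify(wave_residual))                 # 0: the wave equation is satisfied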
James Clerk Maxwell was the first to obtain this relationship by his completion of Maxwell's equations with the addition of a displacement current term to Ampere's circuital law. This unified the physical understanding of electricity, magnetism, and light: visible light is but one portion of the full range of electromagnetic waves, the electromagnetic spectrum.
Time-varying EM fields in Maxwell's equations.
An electromagnetic field very far from currents and charges (sources) is called electromagnetic radiation (EMR) since it radiates from the charges and currents in the source. Such radiation can occur across a wide range of frequencies called the electromagnetic spectrum, including radio waves, microwave, infrared, visible light, ultraviolet light, X-rays, and gamma rays. The many commercial applications of these radiations are discussed in the named and linked articles.
A notable application of visible light is that this type of energy from the Sun powers all life on Earth that either makes or uses oxygen.
A changing electromagnetic field which is physically close to currents and charges (see near and far field for a definition of "close") will have a dipole characteristic that is dominated by either a changing electric dipole, or a changing magnetic dipole. This type of dipole field near sources is called an electromagnetic "near-field".
Changing electric dipole fields, as such, are used commercially as near-fields mainly as a source of dielectric heating. Otherwise, they appear parasitically around conductors which absorb EMR, and around antennas which have the purpose of generating EMR at greater distances.
Changing magnetic dipole fields (i.e., magnetic near-fields) are used commercially for many types of magnetic induction devices. These include motors and electrical transformers at low frequencies, and devices such as RFID tags, metal detectors, and MRI scanner coils at higher frequencies.
Health and safety.
The potential effects of electromagnetic fields on human health vary widely depending on the frequency, intensity of the fields, and the length of the exposure. Low frequency, low intensity, and short duration exposure to electromagnetic radiation is generally considered safe. On the other hand, radiation from other parts of the electromagnetic spectrum, such as ultraviolet light and gamma rays, are known to cause significant harm in some circumstances.
See also.
<templatestyles src="Div col/styles.css"/>
References.
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\nabla \\cdot \\mathbf{E} = \\frac{\\rho}{\\varepsilon_0}"
},
{
"math_id": 1,
"text": "\\nabla \\cdot \\mathbf{B} = 0"
},
{
"math_id": 2,
"text": "\\nabla \\times \\mathbf{E} = -\\frac {\\partial \\mathbf{B}}{\\partial t}"
},
{
"math_id": 3,
"text": "\\nabla \\times \\mathbf{B} = \\mu_0 \\mathbf{J} + \\mu_0\\varepsilon_0 \\frac{\\partial \\mathbf{E}}{\\partial t}"
},
{
"math_id": 4,
"text": "\\rho"
},
{
"math_id": 5,
"text": "\\varepsilon_0"
},
{
"math_id": 6,
"text": "\\mu_0"
},
{
"math_id": 7,
"text": "\\nabla \\cdot \\mathbf{E} = \\frac{\\rho}{\\epsilon_0}"
},
{
"math_id": 8,
"text": "\\nabla\\times\\mathbf{E} = 0,"
},
{
"math_id": 9,
"text": "\\nabla \\cdot \\mathbf{B} = 0"
},
{
"math_id": 10,
"text": "\\nabla \\times \\mathbf{B} = \\mu_0 \\mathbf{J}."
},
{
"math_id": 11,
"text": " \\left( \\nabla^2 - { 1 \\over {c}^2 } {\\partial^2 \\over \\partial t^2} \\right) \\mathbf{E} \\ \\ = \\ \\ 0"
},
{
"math_id": 12,
"text": " \\left( \\nabla^2 - { 1 \\over {c}^2 } {\\partial^2 \\over \\partial t^2} \\right) \\mathbf{B} \\ \\ = \\ \\ 0"
}
] | https://en.wikipedia.org/wiki?curid=9735 |
973828 | Relativistic Euler equations | In fluid mechanics and astrophysics, the relativistic Euler equations are a generalization of the Euler equations that account for the effects of general relativity. They have applications in high-energy astrophysics and numerical relativity, where they are commonly used for describing phenomena such as gamma-ray bursts, accretion phenomena, and neutron stars, often with the addition of a magnetic field. "Note: for consistency with the literature, this article makes use of natural units, namely the speed of light" formula_0 "and the Einstein summation convention."
Motivation.
For most fluids observable on Earth, traditional fluid mechanics based on Newtonian mechanics is sufficient. However, as the fluid velocity approaches the speed of light or moves through strong gravitational fields, or the pressure approaches the energy density (formula_1), these equations are no longer valid. Such situations occur frequently in astrophysical applications. For example, gamma-ray bursts often feature speeds only formula_2 less than the speed of light, and neutron stars feature gravitational fields that are more than formula_3 times stronger than the Earth's. Under these extreme circumstances, only a relativistic treatment of fluids will suffice.
Introduction.
The equations of motion are contained in the continuity equation of the stress–energy tensor formula_4:
formula_5
where formula_6 is the covariant derivative. For a perfect fluid,
formula_7
Here formula_8 is the total mass-energy density (including both rest mass and internal energy density) of the fluid, formula_9 is the fluid pressure, formula_10 is the four-velocity of the fluid, and formula_11 is the metric tensor. To the above equations, a statement of conservation is usually added, most commonly conservation of baryon number. If formula_12 is the number density of baryons, this may be stated
formula_13
These equations reduce to the classical Euler equations if the fluid three-velocity is much less than the speed of light, the pressure is much less than the energy density, and the latter is dominated by the rest mass density. To close this system, an equation of state, such as an ideal gas or a Fermi gas, is also added.
Equations of Motion in Flat Space.
In the case of flat space, that is formula_14 and using a metric signature of formula_15, the equations of motion are,
formula_16
where formula_17 is the energy density of the system, formula_9 is the pressure, and formula_18 is the four-velocity of the system.
Expanding out the sums and equations, we have (using formula_19 as the material derivative)
formula_20
Then, picking formula_21 to observe the behavior of the velocity itself, we see that the equations of motion become
formula_22
Note that taking the non-relativistic limit, we have formula_23. This says that the energy of the fluid is dominated by its rest energy.
In this limit, we have formula_24 and formula_25, and we recover the classical Euler equation formula_26.
Derivation of the Equations of Motion.
In order to determine the equations of motion, we take advantage of the following spatial projection tensor condition:
formula_27
We prove this by taking the expression formula_28 and contracting it with formula_29. Upon doing this, and noting that formula_30, we have formula_31. Relabeling the indices formula_32 as formula_33 shows that the two terms completely cancel. This cancellation is the expected result of contracting a temporal tensor with a spatial tensor.
Now, when we note that
formula_34
where we have implicitly defined that formula_35, we can calculate that
formula_36
and thus
formula_37
Then, let's note the fact that formula_38 and formula_39. Note that the second identity follows from the first. Under these simplifications, we find that
formula_40
and thus by formula_41, we have
formula_42
We have two cancellations, and are thus left with
formula_43
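The algebra above can be machine-checked in 1+1 dimensions (a sketch assuming SymPy; parametrizing the four-velocity by a rapidity φ, so that u·u = −1 holds identically, is an assumption introduced here for convenience):
import sympy as sp

t, x = sp.symbols('t x')
phi = sp.Function('phi')(t, x)     # rapidity: u^mu = (cosh(phi), sinh(phi))
e = sp.Function('e')(t, x)         # energy density
p = sp.Function('p')(t, x)         # pressure

eta = sp.diag(-1, 1)               # flat metric (-,+); it equals its own inverse
u_up = sp.Matrix([sp.cosh(phi), sp.sinh(phi)])     # u^mu, automatically u.u = -1
u_dn = eta * u_up                                  # u_mu
coords = [t, x]
w = e + p

# perfect-fluid stress tensor T^{mu nu} = (e + p) u^mu u^nu + p eta^{mu nu}
T = sp.Matrix(2, 2, lambda m, n: w * u_up[m] * u_up[n] + p * eta[m, n])

def div_T(nu):                     # partial_mu T^{mu nu}
    return sum(sp.diff(T[m, nu], coords[m]) for m in range(2))

for nu in range(2):
    lhs = div_T(nu) + u_up[nu] * sum(u_dn[a] * div_T(a) for a in range(2))
    rhs = (w * sum(u_up[m] * sp.diff(u_up[nu], coords[m]) for m in range(2))
           + sum(eta[nu, m] * sp.diff(p, coords[m]) for m in range(2))
           + u_up[nu] * sum(u_up[m] * sp.diff(p, coords[m]) for m in range(2)))
    print(sp.simplify(lhs - rhs))  # 0 for both components, as derived above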
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "c=1"
},
{
"math_id": 1,
"text": "P\\sim\\rho"
},
{
"math_id": 2,
"text": "0.01%"
},
{
"math_id": 3,
"text": "10^{11}"
},
{
"math_id": 4,
"text": "T^{\\mu\\nu}"
},
{
"math_id": 5,
"text": "\\nabla_\\mu T^{\\mu\\nu}=0,"
},
{
"math_id": 6,
"text": "\\nabla_\\mu"
},
{
"math_id": 7,
"text": "T^{\\mu\\nu} \\, = (e+p)u^\\mu u^\\nu+p g^{\\mu\\nu}."
},
{
"math_id": 8,
"text": "e"
},
{
"math_id": 9,
"text": "p"
},
{
"math_id": 10,
"text": "u^\\mu"
},
{
"math_id": 11,
"text": "g^{\\mu\\nu}"
},
{
"math_id": 12,
"text": "n"
},
{
"math_id": 13,
"text": "\n\\nabla_\\mu\n(nu^\\mu)=0."
},
{
"math_id": 14,
"text": "\\nabla_{\\mu} = \\partial_{\\mu}"
},
{
"math_id": 15,
"text": "(-,+,+,+)"
},
{
"math_id": 16,
"text": "\n(e+p)u^{\\mu}\\partial_{\\mu}u^{\\nu} = -\\partial^{\\nu}p - u^{\\nu}u^{\\mu}\\partial_{\\mu}p\n"
},
{
"math_id": 17,
"text": "e = \\gamma \\rho c^2 + \\rho \\epsilon"
},
{
"math_id": 18,
"text": "u^{\\mu} = \\gamma(1, \\frac{\\mathbf{v}}{c})"
},
{
"math_id": 19,
"text": "\\frac{d}{dt}"
},
{
"math_id": 20,
"text": "\n(e+p)\\frac{\\gamma}{c}\\frac{du^{\\mu}}{dt} = -\\partial^{\\mu}p - \\frac{\\gamma}{c}\\frac{dp}{dt}u^{\\mu}\n"
},
{
"math_id": 21,
"text": "u^{\\nu} = u^i = \\frac{\\gamma}{c}v_i"
},
{
"math_id": 22,
"text": "\n(e+p)\\frac{\\gamma}{c^2}\\frac{d}{dt}(\\gamma v_i) = -\\partial_i p -\\frac{\\gamma^2}{c^2}\\frac{dp}{dt}v_i\n"
},
{
"math_id": 23,
"text": "\\frac{1}{c^2}(e+p) = \\gamma \\rho + \\frac{1}{c^2}\\rho \\epsilon + \\frac{1}{c^2}p \\approx \\rho"
},
{
"math_id": 24,
"text": "\\gamma \\rightarrow 1"
},
{
"math_id": 25,
"text": "c\\rightarrow \\infty"
},
{
"math_id": 26,
"text": "\\rho \\frac{dv_i}{dt} = -\\partial_i p"
},
{
"math_id": 27,
"text": "\n\\partial_{\\mu}T^{\\mu\\nu} + u_{\\alpha}u^{\\nu}\\partial_{\\mu}T^{\\mu\\alpha} = 0\n"
},
{
"math_id": 28,
"text": "\\partial_{\\mu}T^{\\mu\\nu} + u_{\\alpha}u^{\\nu}\\partial_{\\mu}T^{\\mu\\alpha}"
},
{
"math_id": 29,
"text": "u_{\\nu}"
},
{
"math_id": 30,
"text": "u^{\\mu}u_{\\mu} = -1"
},
{
"math_id": 31,
"text": "u_{\\nu}\\partial_{\\mu}T^{\\mu\\nu} - u_{\\alpha}\\partial_{\\mu}T^{\\mu\\alpha}"
},
{
"math_id": 32,
"text": "\\alpha"
},
{
"math_id": 33,
"text": "\\nu"
},
{
"math_id": 34,
"text": "\nT^{\\mu\\nu} = wu^{\\mu}u^{\\nu} + pg^{\\mu\\nu}\n"
},
{
"math_id": 35,
"text": "w \\equiv e+p"
},
{
"math_id": 36,
"text": "\n \\begin{align}\n \\partial_{\\mu}T^{\\mu\\nu} & = (\\partial_{\\mu}w)u^{\\mu}u^{\\nu} + w(\\partial_{\\mu}u^{\\mu}) u^{\\nu} + wu^{\\mu}\\partial_{\\mu}u^{\\nu} + \\partial^{\\nu}p \\\\\n \\partial_{\\mu}T^{\\mu\\alpha} & = (\\partial_{\\mu}w)u^{\\mu}u^{\\alpha} + w(\\partial_{\\mu}u^{\\mu}) u^{\\alpha} + wu^{\\mu}\\partial_{\\mu}u^{\\alpha} + \\partial^{\\alpha}p\n \\end{align}\n"
},
{
"math_id": 37,
"text": "\nu^{\\nu}u_{\\alpha}\\partial_{\\mu}T^{\\mu\\alpha} = (\\partial_{\\mu}w)u^{\\mu}u^{\\nu}u^{\\alpha}u_{\\alpha} + w(\\partial_{\\mu}u^{\\mu})u^{\\nu} u^{\\alpha}u_{\\alpha} + wu^{\\mu}u^{\\nu} u_{\\alpha}\\partial_{\\mu}u^{\\alpha} + u^{\\nu}u_{\\alpha}\\partial^{\\alpha}p\n"
},
{
"math_id": 38,
"text": "u^{\\alpha}u_{\\alpha} = -1"
},
{
"math_id": 39,
"text": "u^{\\alpha}\\partial_{\\nu}u_{\\alpha} = 0"
},
{
"math_id": 40,
"text": "\nu^{\\nu}u_{\\alpha}\\partial_{\\mu}T^{\\mu\\alpha} = -(\\partial_{\\mu}w)u^{\\mu}u^{\\nu} - w(\\partial_{\\mu}u^{\\mu})u^{\\nu} + u^{\\nu}u^{\\alpha}\\partial_{\\alpha}p\n"
},
{
"math_id": 41,
"text": "\\partial_{\\mu}T^{\\mu\\nu} + u_{\\alpha}u^{\\nu}\\partial_{\\mu}T^{\\mu\\alpha} = 0"
},
{
"math_id": 42,
"text": "\n(\\partial_{\\mu}w)u^{\\mu}u^{\\nu} + w(\\partial_{\\mu}u^{\\mu}) u^{\\nu} + wu^{\\mu}\\partial_{\\mu}u^{\\nu} + \\partial^{\\nu}p -(\\partial_{\\mu}w)u^{\\mu}u^{\\nu} - w(\\partial_{\\mu}u^{\\mu})u^{\\nu} + u^{\\nu}u^{\\alpha}\\partial_{\\alpha}p = 0\n"
},
{
"math_id": 43,
"text": "\n(e+p)u^{\\mu}\\partial_{\\mu}u^{\\nu} = - \\partial^{\\nu}p - u^{\\nu}u^{\\alpha}\\partial_{\\alpha}p\n"
}
] | https://en.wikipedia.org/wiki?curid=973828 |
9738540 | Phylogenetic comparative methods | Use of information on the historical relationships of lineages to test evolutionary hypotheses
Phylogenetic comparative methods (PCMs) use information on the historical relationships of lineages (phylogenies) to test evolutionary hypotheses. The comparative method has a long history in evolutionary biology; indeed, Charles Darwin used differences and similarities between species as a major source of evidence in "The Origin of Species". However, the fact that closely related lineages share many traits and trait combinations as a result of the process of descent with modification means that lineages are not independent. This realization inspired the development of explicitly phylogenetic comparative methods. Initially, these methods were primarily developed to control for phylogenetic history when testing for adaptation; however, in recent years the use of the term has broadened to include any use of phylogenies in statistical tests. Although most studies that employ PCMs focus on extant organisms, many methods can also be applied to extinct taxa and can incorporate information from the fossil record.
PCMs can generally be divided into two types of approaches: those that infer the evolutionary history of some character (phenotypic or genetic) across a phylogeny and those that infer the process of evolutionary branching itself (diversification rates), though there are some approaches that do both simultaneously. Typically the tree that is used in conjunction with PCMs has been estimated independently (see computational phylogenetics) such that both the relationships between lineages and the length of branches separating them is assumed to be known.
Applications.
Phylogenetic comparative approaches can complement other ways of studying adaptation, such as studying natural populations, experimental studies, and mathematical models. Interspecific comparisons allow researchers to assess the generality of evolutionary phenomena by considering independent evolutionary events. Such an approach is particularly useful when there is little or no variation within species. And because they can be used to explicitly model evolutionary processes occurring over very long time periods, they can provide insight into macroevolutionary questions, once the exclusive domain of paleontology.
Phylogenetic comparative methods are commonly applied to such questions as:
→ "Example: how does brain mass vary in relation to body mass?"
→ "Example: do canids have larger hearts than felids?"
→ "Example: do carnivores have larger home ranges than herbivores?"
→ "Example: where did endothermy evolve in the lineage that led to mammals?"
→ "Example: where, when, and why did placentas and viviparity evolve?"
→ "Example: are behavioral traits more labile during evolution?"
→ "Example: why do small-bodied species have shorter life spans than their larger relatives?"
Phylogenetically independent contrasts.
Felsenstein proposed the first general statistical method in 1985 for incorporating phylogenetic information, i.e., the first that could use any arbitrary topology (branching order) and a specified set of branch lengths. The method is now recognized as an algorithm that implements a special case of what are termed phylogenetic generalized least-squares models. The logic of the method is to use phylogenetic information (and an assumed Brownian motion like model of trait evolution) to transform the original tip data (mean values for a set of species) into values that are statistically independent and identically distributed.
The algorithm involves computing values at internal nodes as an intermediate step, but they are generally not used for inferences by themselves. An exception occurs for the basal (root) node, which can be interpreted as an estimate of the ancestral value for the entire tree (assuming that no directional evolutionary trends [e.g., Cope's rule] have occurred) or as a phylogenetically weighted estimate of the mean for the entire set of tip species (terminal taxa). The value at the root is equivalent to that obtained from the "squared-change parsimony" algorithm and is also the maximum likelihood estimate under Brownian motion. The independent contrasts algebra can also be used to compute a standard error or confidence interval.
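A minimal Python sketch of this contrasts recursion is given below. The Leaf/Node tree encoding, branch lengths, and trait values are purely illustrative assumptions, not from any particular software package; the function returns the phylogenetically weighted root estimate, the extra variance passed up the tree, and the list of standardized contrasts.

```python
# Minimal sketch of Felsenstein's (1985) independent-contrasts recursion on a binary tree.
from collections import namedtuple

Leaf = namedtuple("Leaf", "value")
Node = namedtuple("Node", "left right bl_left bl_right")  # children + branch lengths

def contrasts(tree):
    """Return (estimated value at this node, extra variance, list of standardized contrasts)."""
    if isinstance(tree, Leaf):
        return tree.value, 0.0, []
    x1, e1, c1 = contrasts(tree.left)
    x2, e2, c2 = contrasts(tree.right)
    v1, v2 = tree.bl_left + e1, tree.bl_right + e2      # branch lengths inflated by uncertainty below
    c = (x1 - x2) / (v1 + v2) ** 0.5                    # standardized contrast
    x = (x1 / v1 + x2 / v2) / (1 / v1 + 1 / v2)         # weighted ancestral estimate
    return x, v1 * v2 / (v1 + v2), c1 + c2 + [c]

tree = Node(Node(Leaf(4.0), Leaf(6.0), 1.0, 1.0), Leaf(10.0), 0.5, 2.0)
print(contrasts(tree))
```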
Phylogenetic generalized least squares (PGLS).
Probably the most commonly used PCM is phylogenetic generalized least squares (PGLS). This approach is used to test whether there is a relationship between two (or more) variables while accounting for the fact that lineages are not independent. The method is a special case of generalized least squares (GLS) and as such the PGLS estimator is also unbiased, consistent, efficient, and asymptotically normal. In many statistical situations where GLS (or ordinary least squares [OLS]) is used, the residual errors "ε" are assumed to be independent and identically distributed random variables that are normal
formula_0
whereas in PGLS the errors are assumed to be distributed as
formula_1
where "V" is a matrix of expected variance and covariance of the residuals given an evolutionary model and a phylogenetic tree. Therefore, it is the structure of residuals and not the variables themselves that show phylogenetic signal. This has long been a source of confusion in the scientific literature. A number of models have been proposed for the structure of "V" such as Brownian motion Ornstein-Uhlenbeck, and Pagel's λ model. (When a Brownian motion model is used, PGLS is identical to the independent contrasts estimator.). In PGLS, the parameters of the evolutionary model are typically co-estimated with the regression parameters.
PGLS can only be applied to questions where the dependent variable is continuously distributed; however, the phylogenetic tree can also be incorporated into the residual distribution of generalized linear models, making it possible to generalize the approach to a broader set of distributions for the response.
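For concreteness, here is a hedged sketch of the PGLS point estimator in Python. It is simply the GLS formula applied with a residual covariance matrix "V" supplied by the analyst (under Brownian motion, the entry of "V" for a pair of tips is proportional to their shared branch length); the toy matrix, design matrix, and response values are invented for illustration only.

```python
# Sketch of the PGLS (generalized least squares) estimator (X' V^-1 X)^-1 X' V^-1 y.
import numpy as np

def pgls(y, X, V):
    """GLS estimate of the regression coefficients; X should include an intercept column."""
    Vinv = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)

# Toy example with three tips; V is a made-up phylogenetic covariance matrix.
V = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
X = np.column_stack([np.ones(3), [1.0, 2.0, 3.0]])   # intercept + predictor
y = np.array([2.1, 3.9, 6.2])
print(pgls(y, X, V))                                  # [intercept, slope]
```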
Phylogenetically informed Monte Carlo computer simulations.
Martins and Garland proposed in 1991 that one way to account for phylogenetic relations when conducting statistical analyses was to use computer simulations to create many data sets that are consistent with the null hypothesis under test (e.g., no correlation between two traits, no difference between two ecologically defined groups of species) but that mimic evolution along the relevant phylogenetic tree. If such data sets (typically 1,000 or more) are analyzed with the same statistical procedure that is used to analyze a real data set, then results for the simulated data sets can be used to create phylogenetically correct (or "PC") null distributions of the test statistic (e.g., a correlation coefficient, t, F). Such simulation approaches can also be combined with such methods as phylogenetically independent contrasts or PGLS (see above).
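One common choice of evolutionary model for such simulations is Brownian motion. The sketch below evolves a single trait along each branch of a small tree to produce one null data set; repeating this many times and recomputing the test statistic on each simulated data set gives the phylogenetically correct null distribution described above. The tuple-based tree encoding is an assumption made only for this example.

```python
# Simulate Brownian-motion trait evolution along a tree: each internal node is
# (left_child, right_child, branch_to_left, branch_to_right); tips are strings.
import numpy as np

def simulate_bm(node, value, sigma2, rng, out):
    if isinstance(node, str):                 # tip: record the simulated trait value
        out[node] = value
        return out
    left, right, bl, br = node
    simulate_bm(left, value + rng.normal(0.0, np.sqrt(sigma2 * bl)), sigma2, rng, out)
    simulate_bm(right, value + rng.normal(0.0, np.sqrt(sigma2 * br)), sigma2, rng, out)
    return out

rng = np.random.default_rng(1)
tree = (("A", "B", 1.0, 1.0), "C", 0.5, 2.0)
print(simulate_bm(tree, value=0.0, sigma2=1.0, rng=rng, out={}))
```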
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
External links.
Laboratories.
<templatestyles src="Div col/styles.css"/> | [
{
"math_id": 0,
"text": "\n \\varepsilon \\mid X\\sim \\mathcal{N}(0, \\sigma^2I_n).\n "
},
{
"math_id": 1,
"text": "\n \\varepsilon \\mid X\\sim \\mathcal{N}(0, \\mathbf{V}).\n "
}
] | https://en.wikipedia.org/wiki?curid=9738540 |
9740633 | Automated trading system | System for ordering algorithmic trades
An automated trading system (ATS), a subset of algorithmic trading, uses a computer program to create buy and sell orders and automatically submits the orders to a market center or exchange. The computer program will automatically generate orders based on a predefined set of rules using a trading strategy which is based on technical analysis, advanced statistical and mathematical computations or input from other electronic sources.
Automated trading systems are often used with electronic trading in automated market centers, including electronic communication networks, "dark pools", and automated exchanges. Automated trading systems and electronic trading platforms can execute repetitive tasks at speeds orders of magnitude greater than any human equivalent. Traditional risk controls and safeguards that relied on human judgment are not appropriate for automated trading and this has caused issues such as the 2010 Flash Crash. New controls such as trading curbs or 'circuit breakers' have been put in place in some electronic markets to deal with automated trading systems.
Mechanism.
The automated trading system determines whether an order should be submitted based on, for example, the current market price of an option and theoretical buy and sell prices. The theoretical buy and sell prices are derived from, among other things, the current market price of the security underlying the option. A look-up table stores a range of theoretical buy and sell prices for a given range of current market price of the underlying security. Accordingly, as the price of the underlying security changes, a new theoretical price may be indexed in the look-up table, thereby avoiding calculations that would otherwise slow automated trading decisions. A distributed processing on-line automated trading system uses structured messages to represent each stage in the negotiation between a market maker (quoter) and a potential buyer or seller (requestor).
Strategies.
Trend following is a trading strategy that bases buying and selling decisions on observable market trends. Over the years, various forms of trend following have emerged, such as the Turtle Trader software program. Unlike financial forecasting, this strategy does not predict market movements. Instead, it identifies a trend early in the day and then trades automatically according to a predefined strategy, regardless of directional shifts. Trend following gained popularity among speculators, though it remains reliant on manual human judgment to configure trading rules and entry/exit conditions. Finding the optimal initial strategy is essential. Trend following is limited by market volatility and the difficulty of accurately identifying trends.
For example, the following formula could be used for trend following strategy:
"Consider a complete probability space (Ω, F, P). Let formula_0 denote the stock price at time formula_1 satisfying the equation
formula_2 formula_3 formula_4,
where formula_5 is a two-state Markov-Chain, formula_6 is the expected return rate in regime formula_7 is the constant volatility, formula_8 is a standard Brownian motion, and formula_9 and formula_10 are the initial and terminal times, respectively".
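The regime-switching model quoted above is one formalization; in practice a trend-following rule can be as simple as a channel breakout. The sketch below implements an illustrative Donchian-channel rule (the channel is named after Richard Donchian, discussed in the History section); the window length and the long/flat convention are arbitrary choices made only for this example, not a prescription from the quoted model.

```python
# Illustrative Donchian-channel trend-following rule:
# go long on a breakout above the N-period high, go flat on a break below the N-period low.
def donchian_signals(prices, n=20):
    signals, position = [], 0
    for t in range(len(prices)):
        window = prices[max(0, t - n):t]      # the previous up-to-n prices
        if window:
            if prices[t] > max(window):
                position = 1                  # breakout above channel: enter/hold long
            elif prices[t] < min(window):
                position = 0                  # breakdown below channel: exit
        signals.append(position)
    return signals
```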
According to the Volume-weighted average price Wikipedia page, VWAP is calculated using the following formula (a short code transcription follows the symbol definitions below):
":formula_11
where:
formula_12 is Volume Weighted Average Price;
formula_13 is price of trade formula_14;
formula_15 is quantity of trade formula_14;
formula_14 is each individual trade that takes place over the defined period of time, excluding cross trades and basket cross trades".
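A direct transcription of this formula into Python might look as follows; the trade list is invented sample data used only to show the calculation.

```python
# VWAP = sum(price * quantity) / sum(quantity) over the trades in the chosen period.
def vwap(trades):
    """trades: iterable of (price, quantity) pairs."""
    total_pq = sum(p * q for p, q in trades)
    total_q = sum(q for _, q in trades)
    return total_pq / total_q

print(vwap([(10.0, 100), (10.2, 50), (9.9, 150)]))  # volume-weighted average price
```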
"A continuous mean-reverting time series can be represented by an Ornstein-Uhlenbeck stochastic differential equation:
formula_16
Where formula_17 is the rate of reversion to the mean, formula_18 is the mean value of the process, formula_19 is the variance of the process and formula_20 is a Wiener Process or Brownian Motion".
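A simple Euler–Maruyama discretization of this Ornstein-Uhlenbeck equation can be used to simulate such a mean-reverting series; the step size, horizon, and parameter values below are arbitrary illustrative choices.

```python
# Euler-Maruyama simulation of dx = theta*(mu - x) dt + sigma dW.
import numpy as np

def simulate_ou(x0, theta, mu, sigma, dt=1/252, n=252, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n + 1)
    x[0] = x0
    for t in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))                     # Brownian increment
        x[t + 1] = x[t] + theta * (mu - x[t]) * dt + sigma * dw
    return x

path = simulate_ou(x0=100.0, theta=2.0, mu=95.0, sigma=5.0)   # reverts toward mu = 95
```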
History.
The concept of an automated trading system was first introduced by Richard Donchian in 1949 when he used a set of rules to buy and sell the funds. Donchian proposed a novel concept in which trades would be initiated autonomously in response to the fulfillment of predetermined market conditions. Due to the absence of advanced technology at the time, Donchian's staff was obligated to perform manual market charting and assess the suitability of executing rule-based trades. Although this laborious procedure was susceptible to human error, it established the foundation for the subsequent development of automated trading of financial assets.
Then, in the 1980s, the concept of rule based trading (trend following) became more popular when famous traders like John Henry began to use such strategies. In the mid 1990s, some models were available for purchase. Also, improvements in technology increased the accessibility for retail investors. Later, Justin-Niall Swart employed a Donchian channel-based trend-following trading method for portfolio optimization in his South African futures market analysis.
The early form of an automated trading system consisted of software based on algorithms that had historically been used by financial managers and brokers. This type of software was used to automatically manage clients' portfolios. However, the first such service offered to the free market without any supervision, Betterment by Jon Stein, was launched in 2008. Since then, these systems have continued to improve along with developments in the IT industry.
Around 2005, copy trading and mirror trading emerged as forms of automated algorithmic trading. These systems allowed traders to share their trading histories and strategies, which other traders could replicate in their accounts. One of the first companies to offer an auto-trading platform was Tradency in 2005 with its "Mirror Trader" software. This feature enabled traders to submit their strategies, allowing other users to replicate any trades produced by those strategies in their accounts. Subsequently, certain platforms allowed traders to connect their accounts directly in order to replicate trades automatically, without needing to code trading strategies. Since 2010, numerous online brokers have incorporated copy trading into their internet platforms, such as eToro, ZuluTrade, Ayondo, and Tradeo. Copy trading benefits from real-time trading decisions and order flow from credible investors, which lets less experienced traders mirror trades without performing the analysis themselves.
Today, automated trading systems manage huge assets all around the globe. In 2014, more than 75 percent of the stock shares traded on United States exchanges (including the New York Stock Exchange and NASDAQ) originated from automated trading system orders.
Market disruption and manipulation.
Automated trading, or high-frequency trading, causes regulatory concerns as a contributor to market fragility. United States regulators have published releases discussing several types of risk controls that could be used to limit the extent of such disruptions, including financial and regulatory controls to prevent the entry of erroneous orders as a result of computer malfunction or human error, the breaching of various regulatory requirements, and exceeding a credit or capital limit.
The use of high-frequency trading (HFT) strategies has grown substantially over the past several years and drives a significant portion of activity on U.S. markets. Although many HFT strategies are legitimate, some are not and may be used for manipulative trading. A strategy would be illegitimate or even illegal if it causes deliberate disruption in the market or tries to manipulate it. Such strategies include "momentum ignition strategies": spoofing and layering where a market participant places a non-bona fide order on one side of the market (typically, but not always, above the offer or below the bid) in an attempt to bait other market participants to react to the non-bona fide order and then trade with another order on the other side of the market. They are also referred to as predatory/abusive strategies. Given the scale of the potential impact that these practices may have, the surveillance of abusive algorithms remains a high priority for regulators. The Financial Industry Regulatory Authority (FINRA) has reminded firms using HFT strategies and other trading algorithms of their obligation to be vigilant when testing these strategies pre- and post-launch to ensure that the strategies do not result in abusive trading.
FINRA also focuses on the entry of problematic HFT and algorithmic activity through sponsored participants who initiate their activity from outside of the United States. In this regard, FINRA reminds firms of their surveillance and control obligations under the SEC's Market Access Rule and Notice to Members 04-66, as well as potential issues related to treating such accounts as customer accounts, anti-money laundering, and margin levels as highlighted in Regulatory Notice 10-18 and the SEC's Office of Compliance Inspections and Examination's National Exam Risk Alert dated September 29, 2011.
FINRA conducts surveillance to identify cross-market and cross-product manipulation of the price of underlying equity securities. Such manipulations are done typically through abusive trading algorithms or strategies that close out pre-existing option positions at favorable prices or establish new option positions at advantageous prices.
In recent years, there have been a number of algorithmic trading malfunctions that caused substantial market disruptions. These raise concern about firms' ability to develop, implement, and effectively supervise their automated systems. FINRA has stated that it will assess whether firms' testing and controls related to algorithmic trading and other automated trading strategies are adequate in light of the U.S. Securities and Exchange Commission and firms' supervisory obligations. This assessment may take the form of examinations and targeted investigations. Firms will be required to address whether they conduct separate, independent, and robust pre-implementation testing of algorithms and trading systems. Also, whether the firm's legal, compliance, and operations staff are reviewing the design and development of the algorithms and trading systems for compliance with legal requirements will be investigated. FINRA will review whether a firm actively monitors and reviews algorithms and trading systems once they are placed into production systems and after they have been modified, including procedures and controls used to detect potential trading abuses such as wash sales, marking, layering, and momentum ignition strategies. Finally, firms will need to describe their approach to firm-wide disconnect or "kill" switches, as well as procedures for responding to catastrophic system malfunctions.
Notable examples.
Examples of recent substantial market disruptions include the following:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S_r"
},
{
"math_id": 1,
"text": "r"
},
{
"math_id": 2,
"text": "dS_r = S_r[\\mu (\\alpha _r)dr + \\sigma dB_r],"
},
{
"math_id": 3,
"text": "S_t = X,"
},
{
"math_id": 4,
"text": "t \\leq r \\leq T < \\infty "
},
{
"math_id": 5,
"text": " \\alpha _r \\in \\{1,2\\} "
},
{
"math_id": 6,
"text": " \\mu (i) \\equiv \\mu _i "
},
{
"math_id": 7,
"text": " i = 1,2, \\sigma > 0 "
},
{
"math_id": 8,
"text": " B_r "
},
{
"math_id": 9,
"text": "t"
},
{
"math_id": 10,
"text": "T"
},
{
"math_id": 11,
"text": "P_{\\mathrm{VWAP}} = \\frac{\\sum_{j}{P_j \\cdot Q_j}}{\\sum_j{Q_j}} \\,"
},
{
"math_id": 12,
"text": "P_{\\mathrm{VWAP}}"
},
{
"math_id": 13,
"text": "P_j"
},
{
"math_id": 14,
"text": "j"
},
{
"math_id": 15,
"text": "Q_j"
},
{
"math_id": 16,
"text": " dx_t = \\theta(\\mu -x_t)dt + \\sigma dW_t "
},
{
"math_id": 17,
"text": " \\theta "
},
{
"math_id": 18,
"text": " \\mu "
},
{
"math_id": 19,
"text": " \\sigma "
},
{
"math_id": 20,
"text": " W_t "
}
] | https://en.wikipedia.org/wiki?curid=9740633 |
974148 | Automorphic function | Mathematical function on a space that is invariant under the action of some group
In mathematics, an automorphic function is a function on a space that is invariant under the action of some group, in other words a function on the quotient space. Often the space is a complex manifold and the group is a discrete group.
Factor of automorphy.
In mathematics, the notion of factor of automorphy arises for a group acting on a complex-analytic manifold. Suppose a group formula_0 acts on a complex-analytic manifold formula_1. Then, formula_0 also acts on the space of holomorphic functions from formula_1 to the complex numbers. A function formula_2 is termed an "automorphic form" if the following holds:
formula_3
where formula_4 is an everywhere nonzero holomorphic function. Equivalently, an automorphic form is a function whose divisor is invariant under the action of formula_0.
The "factor of automorphy" for the automorphic form formula_2 is the function formula_5. An "automorphic function" is an automorphic form for which formula_5 is the identity.
Some facts about factors of automorphy:
Relation between factors of automorphy and other notions:
The specific case of formula_6 a subgroup of "SL"(2, R), acting on the upper half-plane, is treated in the article on automorphic factors. | [
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "f(g.x) = j_g(x)f(x)"
},
{
"math_id": 4,
"text": "j_g(x)"
},
{
"math_id": 5,
"text": "j"
},
{
"math_id": 6,
"text": "\\Gamma"
},
{
"math_id": 7,
"text": "G/\\Gamma"
}
] | https://en.wikipedia.org/wiki?curid=974148 |
974169 | Algebraic function | Mathematical function
In mathematics, an algebraic function is a function that can be defined
as the root of an irreducible polynomial equation. Algebraic functions are often algebraic expressions using a finite number of terms, involving only the algebraic operations addition, subtraction, multiplication, division, and raising to a fractional power. Examples of such functions are:
* formula_0
* formula_1
* formula_2
Some algebraic functions, however, cannot be expressed by such finite expressions (this is the Abel–Ruffini theorem). This is the case, for example, for the Bring radical, which is the function implicitly defined by
formula_3.
In more precise terms, an algebraic function of degree "n" in one variable "x" is a function formula_4 that is continuous in its domain and satisfies a polynomial equation of positive degree
formula_5
where the coefficients "a""i"("x") are polynomial functions of "x", with integer coefficients. It can be shown that the same class of functions is obtained if algebraic numbers are accepted for the coefficients of the "a""i"("x")'s. If transcendental numbers occur in the coefficients the function is, in general, not algebraic, but it is "algebraic over the field" generated by these coefficients.
The value of an algebraic function at a rational number, and more generally, at an algebraic number is always an algebraic number.
Sometimes, coefficients formula_6 that are polynomial over a ring R are considered, and one then talks about "functions algebraic over R".
A function which is not algebraic is called a transcendental function, as it is for example the case of formula_7. A composition of transcendental functions can give an algebraic function: formula_8.
As a polynomial equation of degree "n" has up to "n" roots (and exactly "n" roots over an algebraically closed field, such as the complex numbers), a polynomial equation does not implicitly define a single function, but up to "n"
functions, sometimes also called branches. Consider for example the equation of the unit circle:
formula_9
This determines "y", except only up to an overall sign; accordingly, it has two branches:
formula_10
An algebraic function in "m" variables is similarly defined as a function formula_11 which solves a polynomial equation in "m" + 1 variables:
formula_12
It is normally assumed that "p" should be an irreducible polynomial. The existence of an algebraic function is then guaranteed by the implicit function theorem.
Formally, an algebraic function in "m" variables over the field "K" is an element of the algebraic closure of the field of rational functions "K"("x"1, ..., "x""m").
Algebraic functions in one variable.
Introduction and overview.
The informal definition of an algebraic function provides a number of clues about their properties. To gain an intuitive understanding, it may be helpful to regard algebraic functions as functions which can be formed by the usual algebraic operations: addition, multiplication, division, and taking an "n"th root. This is something of an oversimplification; because of the fundamental theorem of Galois theory, algebraic functions need not be expressible by radicals.
First, note that any polynomial function formula_13 is an algebraic function, since it is simply the solution "y" to the equation
formula_14
More generally, any rational function formula_15 is algebraic, being the solution to
formula_16
Moreover, the "n"th root of any polynomial formula_17 is an algebraic function, solving the equation
formula_18
Surprisingly, the inverse function of an algebraic function is an algebraic function. For supposing that "y" is a solution to
formula_19
for each value of "x", then "x" is also a solution of this equation for each value of "y". Indeed, interchanging the roles of "x" and "y" and gathering terms,
formula_20
Writing "x" as a function of "y" gives the inverse function, also an algebraic function.
However, not every function has an inverse. For example, "y" = "x"2 fails the horizontal line test: it fails to be one-to-one. The inverse is the algebraic "function" formula_21.
Another way to understand this is that the set of branches of the polynomial equation defining our algebraic function is the graph of an algebraic curve.
The role of complex numbers.
From an algebraic perspective, complex numbers enter quite naturally into the study of algebraic functions. First of all, by the fundamental theorem of algebra, the complex numbers are an algebraically closed field. Hence any polynomial relation "p"("y", "x") = 0 is guaranteed to have at least one solution (and in general a number of solutions not exceeding the degree of "p" in "y") for "y" at each point "x", provided we allow "y" to assume complex as well as real values. Thus, problems to do with the domain of an algebraic function can safely be minimized.
Furthermore, even if one is ultimately interested in real algebraic functions, there may be no means to express the function in terms of addition, multiplication, division and taking "nth" roots without resorting to complex numbers (see casus irreducibilis). For example, consider the algebraic function determined by the equation
formula_22
Using the cubic formula, we get
formula_23
For formula_24 the square root is real and the cubic root is thus well defined, providing the unique real root. On the other hand, for formula_25 the square root is not real, and one has to choose, for the square root, either of the two non-real square roots. Thus the cubic root has to be chosen among three non-real numbers. If the same choices are made in the two terms of the formula, the three choices for the cubic root provide the three branches shown in the accompanying image.
It may be proven that there is no way to express this function in terms of "nth" roots using real numbers only, even though the resulting function is real-valued on the domain of the graph shown.
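The branch structure of this example can also be explored numerically: at each value of "x" the branches are just the roots of the cubic in "y". A minimal sketch using NumPy's polynomial root finder:

```python
# Roots of y^3 - x*y + 1 = 0 at a given x; each root traces out one branch as x varies.
import numpy as np

def branches(x):
    """Return the three roots y (real or complex) of y^3 - x*y + 1 = 0."""
    return np.roots([1.0, 0.0, -x, 1.0])

print(branches(0.0))  # one real root and a complex-conjugate pair
print(branches(3.0))  # beyond the threshold 3/4**(1/3): three real branches
```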
On a more significant theoretical level, using complex numbers allows one to use the powerful techniques of complex analysis to discuss algebraic functions. In particular, the argument principle can be used to show that any algebraic function is in fact an analytic function, at least in the multiple-valued sense.
Formally, let "p"("x", "y") be a complex polynomial in the complex variables "x" and "y". Suppose that
"x"0 ∈ C is such that the polynomial "p"("x"0, "y") of "y" has "n" distinct zeros. We shall show that the algebraic function is analytic in a neighborhood of "x"0. Choose a system of "n" non-overlapping discs Δ"i" containing each of these zeros. Then by the argument principle
formula_26
By continuity, this also holds for all "x" in a neighborhood of "x"0. In particular, "p"("x", "y") has only one root in Δ"i", given by the residue theorem:
formula_27
which is an analytic function.
Monodromy.
Note that the foregoing proof of analyticity derived an expression for a system of "n" different function elements "f""i"
("x"), provided that "x" is not a critical point of "p"("x", "y"). A "critical point" is a point where the number of distinct zeros is smaller than the degree of "p", and this occurs only where the highest degree term of "p" or the discriminant vanish. Hence there are only finitely many such points "c"1, ..., "c""m".
A close analysis of the properties of the function elements "f""i" near the critical points can be used to show that the monodromy cover is ramified over the critical points (and possibly the point at infinity). Thus the holomorphic extension of the "f""i" has at worst algebraic poles and ordinary algebraic branchings over the critical points.
Note that, away from the critical points, we have
formula_28
since the "f""i" are by definition the distinct zeros of "p". The monodromy group acts by permuting the factors, and thus forms the monodromy representation of the Galois group of "p". (The monodromy action on the universal covering space is related but different notion in the theory of Riemann surfaces.)
History.
The ideas surrounding algebraic functions go back at least as far as René Descartes. The first discussion of algebraic functions appears to have been in Edward Waring's 1794 "An Essay on the Principles of Human Knowledge" in which he writes:
let a quantity denoting the ordinate, be an algebraic function of the abscissa "x", by the common methods of division and extraction of roots, reduce it into an infinite series ascending or descending according to the dimensions of "x", and then find the integral of each of the resulting terms. | [
{
"math_id": 0,
"text": "f(x) = 1/x"
},
{
"math_id": 1,
"text": "f(x) = \\sqrt{x}"
},
{
"math_id": 2,
"text": "f(x) = \\frac{\\sqrt{1 + x^3}}{x^{3/7} - \\sqrt{7} x^{1/3}}"
},
{
"math_id": 3,
"text": "f(x)^5+f(x)+x = 0"
},
{
"math_id": 4,
"text": "y = f(x),"
},
{
"math_id": 5,
"text": "a_n(x)y^n+a_{n-1}(x)y^{n-1}+\\cdots+a_0(x)=0"
},
{
"math_id": 6,
"text": "a_i(x)"
},
{
"math_id": 7,
"text": "\\exp x, \\tan x, \\ln x, \\Gamma(x)"
},
{
"math_id": 8,
"text": "f(x)=\\cos \\arcsin x = \\sqrt{1-x^2}"
},
{
"math_id": 9,
"text": "y^2+x^2=1.\\,"
},
{
"math_id": 10,
"text": "y=\\pm \\sqrt{1-x^2}.\\,"
},
{
"math_id": 11,
"text": "y=f(x_1,\\dots ,x_m)"
},
{
"math_id": 12,
"text": "p(y,x_1,x_2,\\dots,x_m) = 0."
},
{
"math_id": 13,
"text": "y = p(x)"
},
{
"math_id": 14,
"text": " y-p(x) = 0.\\,"
},
{
"math_id": 15,
"text": "y=\\frac{p(x)}{q(x)}"
},
{
"math_id": 16,
"text": "q(x)y-p(x)=0."
},
{
"math_id": 17,
"text": "y=\\sqrt[n]{p(x)}"
},
{
"math_id": 18,
"text": "y^n-p(x)=0."
},
{
"math_id": 19,
"text": "a_n(x)y^n+\\cdots+a_0(x)=0,"
},
{
"math_id": 20,
"text": "b_m(y)x^m+b_{m-1}(y)x^{m-1}+\\cdots+b_0(y)=0."
},
{
"math_id": 21,
"text": "x = \\pm\\sqrt{y}"
},
{
"math_id": 22,
"text": "y^3-xy+1=0.\\,"
},
{
"math_id": 23,
"text": "\ny=-\\frac{2x}{\\sqrt[3]{-108+12\\sqrt{81-12x^3}}}+\\frac{\\sqrt[3]{-108+12\\sqrt{81-12x^3}}}{6}.\n"
},
{
"math_id": 24,
"text": "x\\le \\frac{3}{\\sqrt[3]{4}},"
},
{
"math_id": 25,
"text": "x>\\frac{3}{\\sqrt[3]{4}},"
},
{
"math_id": 26,
"text": "\\frac{1}{2\\pi i}\\oint_{\\partial\\Delta_i} \\frac{p_y(x_0,y)}{p(x_0,y)}\\,dy = 1."
},
{
"math_id": 27,
"text": "f_i(x) = \\frac{1}{2\\pi i}\\oint_{\\partial\\Delta_i} y\\frac{p_y(x,y)}{p(x,y)}\\,dy"
},
{
"math_id": 28,
"text": "p(x,y) = a_n(x)(y-f_1(x))(y-f_2(x))\\cdots(y-f_n(x))"
}
] | https://en.wikipedia.org/wiki?curid=974169 |
9747155 | Abel's summation formula | Integration by parts version of Abel's method for summation by parts
In mathematics, Abel's summation formula, introduced by Niels Henrik Abel, is intensively used in analytic number theory and the study of special functions to compute series.
Formula.
Let formula_0 be a sequence of real or complex numbers. Define the partial sum function formula_1 by
formula_2
for any real number formula_3. Fix real numbers formula_4, and let formula_5 be a continuously differentiable function on formula_6. Then:
formula_7
The formula is derived by applying integration by parts for a Riemann–Stieltjes integral to the functions formula_1 and formula_5.
Variations.
Taking the left endpoint to be formula_8 gives the formula
formula_9
If the sequence formula_10 is indexed starting at formula_11, then we may formally define formula_12. The previous formula becomes
formula_13
A common way to apply Abel's summation formula is to take the limit of one of these formulas as formula_14. The resulting formulas are
formula_15
These equations hold whenever both limits on the right-hand side exist and are finite.
A particularly useful case is the sequence formula_16 for all formula_17. In this case, formula_18. For this sequence, Abel's summation formula simplifies to
formula_19
Similarly, for the sequence formula_12 and formula_16 for all formula_20, the formula becomes
formula_21
Upon taking the limit as formula_14, we find
formula_22
assuming that both terms on the right-hand side exist and are finite.
Abel's summation formula can be generalized to the case where formula_5 is only assumed to be continuous if the integral is interpreted as a Riemann–Stieltjes integral:
formula_23
By taking formula_5 to be the partial sum function associated to some sequence, this leads to the summation by parts formula.
Examples.
Harmonic numbers.
If formula_16 for formula_20 and formula_24 then formula_25 and the formula yields
formula_26
The left-hand side is the harmonic number formula_27.
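A quick numerical check of this identity is possible because the integrand is piecewise simple: floor(u) is constant on each interval [k, k+1), so the integral can be evaluated exactly. The value of "x" below is an arbitrary example.

```python
# Check H_floor(x) = floor(x)/x + integral_1^x floor(u)/u^2 du.
from math import floor

def harmonic_lhs(n):
    return sum(1.0 / k for k in range(1, n + 1))

def harmonic_rhs(x):
    n = floor(x)
    # exact piecewise integral: sum over [k, k+1) for k < n, plus the final piece [n, x]
    integral = sum(k * (1.0 / k - 1.0 / (k + 1)) for k in range(1, n)) + n * (1.0 / n - 1.0 / x)
    return n / x + integral

print(harmonic_lhs(10), harmonic_rhs(10.5))  # both equal H_10 ≈ 2.928968...
```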
Representation of Riemann's zeta function.
Fix a complex number formula_28. If formula_16 for formula_20 and formula_29 then formula_25 and the formula becomes
formula_30
If formula_31, then the limit as formula_14 exists and yields the formula
formula_32
where formula_33 is the Riemann zeta function.
This may be used to derive Dirichlet's theorem that formula_34 has a simple pole with residue 1 at "s" = 1.
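The integral representation can also be checked numerically, for instance at "s" = 2, where the zeta value is known to equal π²/6. The sketch below evaluates the integral exactly on [1, "N"] (piecewise, since floor(u) is constant between integers) and approximates the tail by replacing floor(u) with "u"; the truncation point "N" is an arbitrary choice.

```python
# zeta(s) = s * integral_1^inf floor(u) * u^(-1-s) du, checked at s = 2.
from math import pi

def zeta_via_integral(s, N=10000):
    # s * integral_k^{k+1} u^(-1-s) du = k^(-s) - (k+1)^(-s) on each unit interval
    head = sum(k * (k ** (-s) - (k + 1) ** (-s)) for k in range(1, N))
    tail = s * N ** (1 - s) / (s - 1)   # approximate tail with floor(u) ~ u
    return head + tail

print(zeta_via_integral(2), pi ** 2 / 6)   # both approximately 1.644934
```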
Reciprocal of Riemann zeta function.
The technique of the previous example may also be applied to other Dirichlet series. If formula_35 is the Möbius function and formula_36, then formula_37 is Mertens function and
formula_38
This formula holds for formula_31. | [
{
"math_id": 0,
"text": "(a_n)_{n=0}^\\infty"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "A(t) = \\sum_{0 \\le n \\le t} a_n"
},
{
"math_id": 3,
"text": "t"
},
{
"math_id": 4,
"text": "x < y"
},
{
"math_id": 5,
"text": "\\phi"
},
{
"math_id": 6,
"text": "[x, y]"
},
{
"math_id": 7,
"text": "\\sum_{x < n \\le y} a_n\\phi(n) = A(y)\\phi(y) - A(x)\\phi(x) - \\int_x^y A(u)\\phi'(u)\\,du."
},
{
"math_id": 8,
"text": "-1"
},
{
"math_id": 9,
"text": "\\sum_{0 \\le n \\le x} a_n\\phi(n) = A(x)\\phi(x) - \\int_0^x A(u)\\phi'(u)\\,du."
},
{
"math_id": 10,
"text": "(a_n)"
},
{
"math_id": 11,
"text": "n = 1"
},
{
"math_id": 12,
"text": "a_0 = 0"
},
{
"math_id": 13,
"text": "\\sum_{1 \\le n \\le x} a_n\\phi(n) = A(x)\\phi(x) - \\int_1^x A(u)\\phi'(u)\\,du."
},
{
"math_id": 14,
"text": "x \\to \\infty"
},
{
"math_id": 15,
"text": "\\begin{align}\n\\sum_{n=0}^\\infty a_n\\phi(n) &= \\lim_{x \\to \\infty}\\bigl(A(x)\\phi(x)\\bigr) - \\int_0^\\infty A(u)\\phi'(u)\\,du, \\\\\n\\sum_{n=1}^\\infty a_n\\phi(n) &= \\lim_{x \\to \\infty}\\bigl(A(x)\\phi(x)\\bigr) - \\int_1^\\infty A(u)\\phi'(u)\\,du.\n\\end{align}"
},
{
"math_id": 16,
"text": "a_n = 1"
},
{
"math_id": 17,
"text": "n \\ge 0"
},
{
"math_id": 18,
"text": "A(x) = \\lfloor x + 1 \\rfloor"
},
{
"math_id": 19,
"text": "\\sum_{0 \\le n \\le x} \\phi(n) = \\lfloor x + 1 \\rfloor\\phi(x) - \\int_0^x \\lfloor u + 1\\rfloor \\phi'(u)\\,du."
},
{
"math_id": 20,
"text": "n \\ge 1"
},
{
"math_id": 21,
"text": "\\sum_{1 \\le n \\le x} \\phi(n) = \\lfloor x \\rfloor\\phi(x) - \\int_1^x \\lfloor u \\rfloor \\phi'(u)\\,du."
},
{
"math_id": 22,
"text": "\\begin{align}\n\\sum_{n=0}^\\infty \\phi(n) &= \\lim_{x \\to \\infty}\\bigl(\\lfloor x + 1 \\rfloor\\phi(x)\\bigr) - \\int_0^\\infty \\lfloor u + 1\\rfloor \\phi'(u)\\,du, \\\\\n\\sum_{n=1}^\\infty \\phi(n) &= \\lim_{x \\to \\infty}\\bigl(\\lfloor x \\rfloor\\phi(x)\\bigr) - \\int_1^\\infty \\lfloor u\\rfloor \\phi'(u)\\,du,\n\\end{align}"
},
{
"math_id": 23,
"text": "\\sum_{x < n \\le y} a_n\\phi(n) = A(y)\\phi(y) - A(x)\\phi(x) - \\int_x^y A(u)\\,d\\phi(u)."
},
{
"math_id": 24,
"text": "\\phi(x) = 1/x,"
},
{
"math_id": 25,
"text": "A(x) = \\lfloor x \\rfloor"
},
{
"math_id": 26,
"text": "\\sum_{n=1}^{\\lfloor x \\rfloor} \\frac{1}{n} = \\frac{\\lfloor x \\rfloor}{x} + \\int_1^x \\frac{\\lfloor u \\rfloor}{u^2} \\,du."
},
{
"math_id": 27,
"text": "H_{\\lfloor x \\rfloor}"
},
{
"math_id": 28,
"text": "s"
},
{
"math_id": 29,
"text": "\\phi(x) = x^{-s},"
},
{
"math_id": 30,
"text": "\\sum_{n=1}^{\\lfloor x \\rfloor} \\frac{1}{n^s} = \\frac{\\lfloor x \\rfloor}{x^s} + s\\int_1^x \\frac{\\lfloor u\\rfloor}{u^{1+s}}\\,du."
},
{
"math_id": 31,
"text": "\\Re(s) > 1"
},
{
"math_id": 32,
"text": "\\zeta(s) = s\\int_1^\\infty \\frac{\\lfloor u\\rfloor}{u^{1+s}}\\,du."
},
{
"math_id": 33,
"text": "\\zeta(s)"
},
{
"math_id": 34,
"text": "\\zeta(s) "
},
{
"math_id": 35,
"text": "a_n = \\mu(n)"
},
{
"math_id": 36,
"text": "\\phi(x) = x^{-s}"
},
{
"math_id": 37,
"text": "A(x) = M(x) = \\sum_{n \\le x} \\mu(n)"
},
{
"math_id": 38,
"text": "\\frac{1}{\\zeta(s)} = \\sum_{n=1}^\\infty \\frac{\\mu(n)}{n^s} = s\\int_1^\\infty \\frac{M(u)}{u^{1+s}}\\,du."
}
] | https://en.wikipedia.org/wiki?curid=9747155 |
974723 | Estimation of covariance matrices | Statistics concept
In statistics, sometimes the covariance matrix of a multivariate random variable is not known but has to be estimated. Estimation of covariance matrices then deals with the question of how to approximate the actual covariance matrix on the basis of a sample from the multivariate distribution. Simple cases, where observations are complete, can be dealt with by using the sample covariance matrix. The sample covariance matrix (SCM) is an unbiased and efficient estimator of the covariance matrix if the space of covariance matrices is viewed as an extrinsic convex cone in R"p"×"p"; however, measured using the intrinsic geometry of positive-definite matrices, the SCM is a biased and inefficient estimator. In addition, if the random variable has a normal distribution, the sample covariance matrix has a Wishart distribution and a slightly differently scaled version of it is the maximum likelihood estimate. Cases involving missing data, heteroscedasticity, or autocorrelated residuals require deeper considerations. Another issue is the robustness to outliers, to which sample covariance matrices are highly sensitive.
Statistical analyses of multivariate data often involve exploratory studies of the way in which the variables change in relation to one another and this may be followed up by explicit statistical models involving the covariance matrix of the variables. Thus the estimation of covariance matrices directly from observational data plays two roles:
* to provide initial estimates that can be used to study the inter-relationships;
* to provide sample estimates that can be used for model checking.
Estimates of covariance matrices are required at the initial stages of principal component analysis and factor analysis, and are also involved in versions of regression analysis that treat the dependent variables in a data-set, jointly with the independent variable as the outcome of a random sample.
Estimation in a general context.
Given a sample consisting of "n" independent observations "x"1..., "x""n" of a "p"-dimensional random vector "X" ∈ R"p"×1 (a "p"×1 column-vector), an unbiased estimator of the ("p"×"p") covariance matrix
formula_0
is the sample covariance matrix
formula_1
where formula_2 is the "i"-th observation of the "p"-dimensional random vector, and the vector
formula_3
is the sample mean.
This is true regardless of the distribution of the random variable "X", provided of course that the theoretical means and covariances exist. The reason for the factor "n" − 1 rather than "n" is essentially the same as the reason for the same factor appearing in unbiased estimates of sample variances and sample covariances, which relates to the fact that the mean is not known and is replaced by the sample mean (see Bessel's correction).
In cases where the distribution of the random variable "X" is known to be within a certain family of distributions, other estimates may be derived on the basis of that assumption. A well-known instance is when the random variable "X" is normally distributed: in this case the maximum likelihood estimator of the covariance matrix is slightly different from the unbiased estimate, and is given by
formula_4
A derivation of this result is given below. Clearly, the difference between the unbiased estimator and the maximum likelihood estimator diminishes for large "n".
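For concreteness, here is a small NumPy sketch computing both estimators from simulated data; the distribution parameters and sample size are arbitrary illustrative choices.

```python
# Unbiased sample covariance (divisor n-1) versus the Gaussian maximum-likelihood
# estimate (divisor n), computed from the same centered data matrix.
import numpy as np

rng = np.random.default_rng(0)
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=[[2.0, 1.0], [1.0, 3.0]], size=50)

n = X.shape[0]
centered = X - X.mean(axis=0)
Q_unbiased = centered.T @ centered / (n - 1)   # equals np.cov(X, rowvar=False)
Q_mle = centered.T @ centered / n              # equals np.cov(X, rowvar=False, bias=True)
print(Q_unbiased)
print(Q_mle)
```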
In the general case, the unbiased estimate of the covariance matrix provides an acceptable estimate when the data vectors in the observed data set are all complete: that is they contain no missing elements. One approach to estimating the covariance matrix is to treat the estimation of each variance or pairwise covariance separately, and to use all the observations for which both variables have valid values. Assuming the missing data are missing at random this results in an estimate for the covariance matrix which is unbiased. However, for many applications this estimate may not be acceptable because the estimated covariance matrix is not guaranteed to be positive semi-definite. This could lead to estimated correlations having absolute values which are greater than one, and/or a non-invertible covariance matrix.
When estimating the cross-covariance of a pair of signals that are wide-sense stationary, missing samples do "not" need be random (e.g., sub-sampling by an arbitrary factor is valid).
Maximum-likelihood estimation for the multivariate normal distribution.
A random vector "X" ∈ R"p" (a "p"×1 "column vector") has a multivariate normal distribution with a nonsingular covariance matrix Σ precisely if Σ ∈ R"p" × "p" is a positive-definite matrix and the probability density function of "X" is
formula_5
where "μ" ∈ R"p"×1 is the expected value of "X". The covariance matrix Σ is the multidimensional analog of what in one dimension would be the variance, and
formula_6
normalizes the density formula_7 so that it integrates to 1.
Suppose now that "X"1, ..., "X""n" are independent and identically distributed samples from the distribution above. Based on the observed values "x"1, ..., "x""n" of this sample, we wish to estimate Σ.
First steps.
The likelihood function is:
formula_8
It is fairly readily shown that the maximum-likelihood estimate of the mean vector "μ" is the "sample mean" vector:
formula_9
See the section on estimation in the article on the normal distribution for details; the process here is similar.
Since the estimate formula_10 does not depend on Σ, we can just substitute it for "μ" in the likelihood function, getting
formula_11
and then seek the value of Σ that maximizes the likelihood of the data (in practice it is easier to work with log formula_12).
The trace of a 1 × 1 matrix.
Now we come to the first surprising step: regard the scalar formula_13 as the trace of a 1×1 matrix. This makes it possible to use the identity tr("AB") = tr("BA") whenever "A" and "B" are matrices so shaped that both products exist. We get
formula_14
where
formula_15
formula_16 is sometimes called the scatter matrix, and is positive definite if there exists a subset of the data consisting of formula_17 affinely independent observations (which we will assume).
Using the spectral theorem.
It follows from the spectral theorem of linear algebra that a positive-definite symmetric matrix "S" has a unique positive-definite symmetric square root "S"1/2. We can again use the "cyclic property" of the trace to write
formula_18
Let "B" = "S"1/2 Σ −1 "S"1/2. Then the expression above becomes
formula_19
The positive-definite matrix "B" can be diagonalized, and the problem then becomes that of finding the value of "B" that maximizes
formula_20
Since the trace of a square matrix equals the sum of its eigenvalues, the equation reduces to the problem of finding the eigenvalues "λ"1, ..., "λ""p" that maximize
formula_21
This is just a calculus problem and we get "λ""i" = "n" for all "i". Thus, if "Q" is the matrix of eigenvectors, then
formula_22
i.e., "n" times the "p"×"p" identity matrix.
Concluding steps.
Finally we get
formula_23
i.e., the "p"×"p" "sample covariance matrix"
formula_24
is the maximum-likelihood estimator of the "population covariance matrix" Σ. At this point we are using a capital "X" rather than a lower-case "x" because we are thinking of it "as an estimator rather than as an estimate", i.e., as something random whose probability distribution we could profit by knowing. The random matrix "S" can be shown to have a Wishart distribution with "n" − 1 degrees of freedom. That is:
formula_25
Alternative derivation.
An alternative derivation of the maximum likelihood estimator can be performed via matrix calculus formulae (see also differential of a determinant and differential of the inverse matrix). It also verifies the aforementioned fact about the maximum likelihood estimate of the mean. Re-write the likelihood in the log form using the trace trick:
formula_26
The differential of this log-likelihood is
formula_27
It naturally breaks down into the part related to the estimation of the mean, and to the part related to the estimation of the variance. The first order condition for maximum, formula_28, is satisfied when the terms multiplying formula_29 and formula_30 are identically zero. Assuming (the maximum likelihood estimate of) formula_31 is non-singular, the first order condition for the estimate of the mean vector is
formula_32
which leads to the maximum likelihood estimator
formula_33
This lets us simplify
formula_34
as defined above. Then the terms involving formula_30 in formula_35 can be combined as
formula_36
The first order condition formula_28 will hold when the term in the square bracket is (matrix-valued) zero. Pre-multiplying the latter by formula_31 and dividing by formula_37 gives
formula_38
which of course coincides with the canonical derivation given earlier.
Dwyer points out that decomposition into two terms such as appears above is "unnecessary" and derives the estimator in two lines of working. Note that it may not be trivial to show that the estimator derived in this way is the unique global maximizer of the likelihood function.
Intrinsic covariance matrix estimation.
Intrinsic expectation.
Given a sample of "n" independent observations "x"1..., "x""n" of a "p"-dimensional zero-mean Gaussian random variable "X" with covariance R, the maximum likelihood estimator of R is given by
formula_39
The parameter formula_40 belongs to the set of positive-definite matrices, which is a Riemannian manifold, not a vector space, hence the usual vector-space notions of expectation, i.e. "formula_41", and estimator bias must be generalized to manifolds to make sense of the problem of covariance matrix estimation. This can be done by defining the expectation of a manifold-valued estimator formula_42 with respect to the manifold-valued point formula_40 as
formula_43
where
formula_44
formula_45
are the exponential map and inverse exponential map, respectively, "exp" and "log" denote the ordinary matrix exponential and matrix logarithm, and E[·] is the ordinary expectation operator defined on a vector space, in this case the tangent space of the manifold.
Bias of the sample covariance matrix.
The intrinsic bias vector field of the SCM estimator formula_42 is defined to be
formula_46
The intrinsic estimator bias is then given by formula_47.
For complex Gaussian random variables, this bias vector field can be shown to equal
formula_48
where
formula_49
and ψ(·) is the digamma function. The intrinsic bias of the sample covariance matrix equals
formula_50
and the SCM is asymptotically unbiased as "n" → ∞.
Similarly, the intrinsic inefficiency of the sample covariance matrix depends upon the Riemannian curvature of the space of positive-definite matrices.
Shrinkage estimation.
If the sample size "n" is small and the number of considered variables "p" is large, the above empirical estimators of covariance and correlation are very unstable. Specifically, it is possible to furnish estimators that improve considerably upon the maximum likelihood estimate in terms of mean squared error. Moreover, for "n" < "p" (the number of observations is less than the number of random variables) the empirical estimate of the covariance matrix becomes singular, i.e. it cannot be inverted to compute the precision matrix.
As an alternative, many methods have been suggested to improve the estimation of the covariance matrix. All of these approaches rely on the concept of shrinkage. This is implicit in Bayesian methods and in penalized maximum likelihood methods and explicit in the Stein-type shrinkage approach.
A simple version of a shrinkage estimator of the covariance matrix is represented by the Ledoit-Wolf shrinkage estimator. One considers a convex combination of the empirical estimator (formula_51) with some suitably chosen target (formula_52), e.g., the diagonal matrix. Subsequently, the mixing parameter (formula_53) is selected to maximize the expected accuracy of the shrunken estimator. This can be done by cross-validation, or by using an analytic estimate of the shrinkage intensity. The resulting regularized estimator (formula_54) can be shown to outperform the maximum likelihood estimator for small samples. For large samples, the shrinkage intensity will reduce to zero, hence in this case the shrinkage estimator will be identical to the empirical estimator. Apart from increased efficiency the shrinkage estimate has the additional advantage that it is always positive definite and well conditioned.
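As a concrete illustration, scikit-learn (one of the implementations mentioned at the end of this section) provides a Ledoit-Wolf estimator that picks the shrinkage intensity analytically; the data dimensions below are chosen so that "n" < "p", the regime in which the empirical estimate is singular.

```python
# Ledoit-Wolf shrinkage estimate of a covariance matrix with n < p.
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))            # n = 20 observations, p = 50 variables

lw = LedoitWolf().fit(X)
print(lw.shrinkage_)                     # estimated mixing parameter
print(np.linalg.cond(lw.covariance_))    # shrunken estimate is well conditioned
print(np.linalg.matrix_rank(np.cov(X, rowvar=False)))  # empirical estimate is rank deficient
```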
Various shrinkage targets have been proposed:
The shrinkage estimator can be generalized to a multi-target shrinkage estimator that utilizes several targets simultaneously. Software for computing a covariance shrinkage estimator is available in R (packages corpcor and ShrinkCovMat), in Python (scikit-learn library ), and in MATLAB. | [
{
"math_id": 0,
"text": "\\operatorname{\\Sigma} = \\operatorname{E}\\left[\\left(X-\\operatorname{E}[X]\\right ) \\left (X-\\operatorname{E}[X]\\right)^\\mathrm{T}\\right]"
},
{
"math_id": 1,
"text": "\\mathbf{Q} = {1 \\over {n-1}}\\sum_{i=1}^n (x_i-\\overline{x})(x_i-\\overline{x})^\\mathrm{T},"
},
{
"math_id": 2,
"text": "x_i"
},
{
"math_id": 3,
"text": "\\overline{x}= {1 \\over {n}}\\sum_{i=1}^n x_i"
},
{
"math_id": 4,
"text": "\\mathbf{Q_n} = {1 \\over n}\\sum_{i=1}^n (x_i-\\overline{x})(x_i-\\overline{x})^\\mathrm{T}."
},
{
"math_id": 5,
"text": "f(x)=(2\\pi)^{-\\frac{p}{2}}\\, \\det(\\Sigma)^{-\\frac{1}{2}} \\exp\\left(-{1 \\over 2} (x-\\mu)^\\mathrm{T} \\Sigma^{-1} (x-\\mu)\\right)"
},
{
"math_id": 6,
"text": "(2\\pi)^{-\\frac{p}{2}}\\det(\\Sigma)^{-\\frac{1}{2}}"
},
{
"math_id": 7,
"text": "f(x)"
},
{
"math_id": 8,
"text": " \\mathcal{L}(\\mu,\\Sigma)=(2\\pi)^{-\\frac{np}{2}}\\, \\prod_{i=1}^n \\det(\\Sigma)^{-\\frac{1}{2}} \\exp\\left(-\\frac{1}{2} (x_i-\\mu)^\\mathrm{T} \\Sigma^{-1} (x_i-\\mu)\\right) "
},
{
"math_id": 9,
"text": "\\overline{x}=\\frac{x_1+\\cdots+x_n}{n}."
},
{
"math_id": 10,
"text": "\\bar{x}"
},
{
"math_id": 11,
"text": "\\mathcal{L}(\\overline{x},\\Sigma) \\propto \\det(\\Sigma)^{-\\frac{n}{2}} \\exp\\left(-{1 \\over 2} \\sum_{i=1}^n (x_i-\\overline{x})^\\mathrm{T} \\Sigma^{-1} (x_i-\\overline{x})\\right),"
},
{
"math_id": 12,
"text": "\\mathcal{L}"
},
{
"math_id": 13,
"text": "(x_i-\\overline{x})^\\mathrm{T} \\Sigma^{-1} (x_i-\\overline{x})"
},
{
"math_id": 14,
"text": "\\begin{align}\n\\mathcal{L}(\\overline{x},\\Sigma) &\\propto \\det(\\Sigma)^{-\\frac{n}{2}} \\exp\\left(-{1 \\over 2} \\sum_{i=1}^n \\left ( \\left (x_i-\\overline{x} \\right )^\\mathrm{T} \\Sigma^{-1} \\left (x_i-\\overline{x} \\right ) \\right ) \\right) \\\\\n&=\\det(\\Sigma)^{-\\frac{n}{2}} \\exp\\left(-{1 \\over 2} \\sum_{i=1}^n \\operatorname{tr} \\left ( \\left (x_i-\\overline{x} \\right ) \\left (x_i-\\overline{x} \\right )^\\mathrm{T} \\Sigma^{-1} \\right ) \\right) \\\\\n&=\\det(\\Sigma)^{-\\frac{n}{2}} \\exp\\left(-{1 \\over 2} \\operatorname{tr} \\left( \\sum_{i=1}^n \\left (x_i-\\overline{x} \\right ) \\left (x_i-\\overline{x} \\right )^\\mathrm{T} \\Sigma^{-1} \\right) \\right)\\\\\n&=\\det(\\Sigma)^{-\\frac{n}{2}} \\exp\\left(-{1 \\over 2} \\operatorname{tr} \\left( S \\Sigma^{-1} \\right) \\right)\n\\end{align}"
},
{
"math_id": 15,
"text": "S=\\sum_{i=1}^n (x_i-\\overline{x}) (x_i-\\overline{x})^\\mathrm{T} \\in \\mathbf{R}^{p\\times p}."
},
{
"math_id": 16,
"text": "S"
},
{
"math_id": 17,
"text": "p"
},
{
"math_id": 18,
"text": "\\det(\\Sigma)^{-\\frac{n}{2}} \\exp\\left(-{1 \\over 2} \\operatorname{tr} \\left( S^{\\frac{1}{2}} \\Sigma^{-1} S^{\\frac{1}{2}} \\right) \\right)."
},
{
"math_id": 19,
"text": "\\det(S)^{-\\frac{n}{2}} \\det(B)^{\\frac{n}{2}} \\exp\\left(-{1 \\over 2} \\operatorname{tr} (B) \\right)."
},
{
"math_id": 20,
"text": "\\det(B)^{\\frac{n}{2}} \\exp\\left(-{1 \\over 2} \\operatorname{tr} (B) \\right)"
},
{
"math_id": 21,
"text": "\\prod_{i=1}^n \\lambda_i^{\\frac{n}{2}} \\exp \\left (-\\frac{\\lambda_i}{2} \\right )."
},
{
"math_id": 22,
"text": "B = Q (n I_p) Q^{-1} = n I_p "
},
{
"math_id": 23,
"text": "\\Sigma=S^{\\frac{1}{2}} B^{-1} S^{\\frac{1}{2}}=S^{\\frac{1}{2}}\\left(\\frac 1 n I_p\\right)S^{\\frac{1}{2}} = \\frac S n, "
},
{
"math_id": 24,
"text": "{S \\over n} = {1 \\over n}\\sum_{i=1}^n (X_i-\\overline{X})(X_i-\\overline{X})^\\mathrm{T}"
},
{
"math_id": 25,
"text": "\\sum_{i=1}^n (X_i-\\overline{X})(X_i-\\overline{X})^\\mathrm{T} \\sim W_p(\\Sigma,n-1)."
},
{
"math_id": 26,
"text": "\\ln \\mathcal{L}(\\mu,\\Sigma) = \\operatorname{constant} -{n \\over 2} \\ln \\det(\\Sigma) -{1 \\over 2} \\operatorname{tr} \\left[ \\Sigma^{-1} \\sum_{i=1}^n (x_i-\\mu) (x_i-\\mu)^\\mathrm{T} \\right]. "
},
{
"math_id": 27,
"text": "d \\ln \\mathcal{L}(\\mu,\\Sigma) = -\\frac{n}{2} \\operatorname{tr} \\left[ \\Sigma^{-1} \\left\\{ d \\Sigma \\right\\} \\right] -{1 \\over 2} \\operatorname{tr} \\left[ - \\Sigma^{-1} \\{ d \\Sigma \\} \\Sigma^{-1} \\sum_{i=1}^n (x_i-\\mu)(x_i-\\mu)^\\mathrm{T} - 2 \\Sigma^{-1} \\sum_{i=1}^n (x_i - \\mu) \\{ d \\mu \\}^\\mathrm{T} \\right]. "
},
{
"math_id": 28,
"text": "d \\ln \\mathcal{L}(\\mu,\\Sigma)=0"
},
{
"math_id": 29,
"text": "d \\mu"
},
{
"math_id": 30,
"text": "d \\Sigma"
},
{
"math_id": 31,
"text": "\\Sigma"
},
{
"math_id": 32,
"text": " \\sum_{i=1}^n (x_i - \\mu) = 0,"
},
{
"math_id": 33,
"text": "\\widehat \\mu = \\bar X = {1 \\over n} \\sum_{i=1}^n X_i."
},
{
"math_id": 34,
"text": "\\sum_{i=1}^n (x_i-\\mu)(x_i-\\mu)^\\mathrm{T} = \\sum_{i=1}^n (x_i-\\bar x)(x_i-\\bar x)^\\mathrm{T} = S"
},
{
"math_id": 35,
"text": "d \\ln L"
},
{
"math_id": 36,
"text": " -{1 \\over 2} \\operatorname{tr} \\left( \\Sigma^{-1} \\left\\{ d \\Sigma \\right\\} \\left[ nI_p - \\Sigma^{-1} S \\right] \\right). "
},
{
"math_id": 37,
"text": "n"
},
{
"math_id": 38,
"text": "\\widehat \\Sigma = {1 \\over n} S,"
},
{
"math_id": 39,
"text": "\\hat{\\mathbf{R}} = {1 \\over n}\\sum_{i=1}^n x_ix_i^\\mathrm{T}."
},
{
"math_id": 40,
"text": "R"
},
{
"math_id": 41,
"text": "\\mathrm{E} [\\hat{\\mathbf{R}}]"
},
{
"math_id": 42,
"text": "\\hat{\\mathbf{R}}"
},
{
"math_id": 43,
"text": "\\mathrm{E}_{\\mathbf{R}}[\\hat{\\mathbf{R}}]\\ \\stackrel{\\mathrm{def}}{=}\\ \\exp_{\\mathbf{R}}\\mathrm{E}\\left[\\exp_{\\mathbf{R}}^{-1}\\hat{\\mathbf{R}}\\right]"
},
{
"math_id": 44,
"text": "\\exp_{\\mathbf{R}}(\\hat{\\mathbf{R}}) =\\mathbf{R}^{\\frac{1}{2}}\\exp\\left(\\mathbf{R}^{-\\frac{1}{2}}\\hat{\\mathbf{R}}\\mathbf{R}^{-\\frac{1}{2}}\\right)\\mathbf{R}^{\\frac{1}{2}}"
},
{
"math_id": 45,
"text": "\\exp_{\\mathbf{R}}^{-1}(\\hat{\\mathbf{R}}) =\\mathbf{R}^{\\frac{1}{2}}\\left(\\log\\mathbf{R}^{-\\frac{1}{2}}\\hat{\\mathbf{R}}\\mathbf{R}^{-\\frac{1}{2}}\\right)\\mathbf{R}^{\\frac{1}{2}}"
},
{
"math_id": 46,
"text": "\\mathbf{B}(\\hat{\\mathbf{R}}) =\\exp_{\\mathbf{R}}^{-1}\\mathrm{E}_{\\mathbf{R}}\\left[\\hat{\\mathbf{R}}\\right] =\\mathrm{E}\\left[\\exp_{\\mathbf{R}}^{-1}\\hat{\\mathbf{R}}\\right]"
},
{
"math_id": 47,
"text": "\\exp_{\\mathbf{R}}\\mathbf{B}(\\hat{\\mathbf{R}})"
},
{
"math_id": 48,
"text": "\\mathbf{B}(\\hat{\\mathbf{R}}) =-\\beta(p,n)\\mathbf{R}"
},
{
"math_id": 49,
"text": "\\beta(p,n) =\\frac{1}{p}\\left(p\\log n + p -\\psi(n-p+1) +(n-p+1)\\psi(n-p+2) +\\psi(n+1) -(n+1)\\psi(n+2)\\right)"
},
{
"math_id": 50,
"text": "\\exp_{\\mathbf{R}}\\mathbf{B}(\\hat{\\mathbf{R}}) =e^{-\\beta(p,n)}\\mathbf{R}"
},
{
"math_id": 51,
"text": "A"
},
{
"math_id": 52,
"text": "B"
},
{
"math_id": 53,
"text": "\\delta"
},
{
"math_id": 54,
"text": "\\delta A + (1 - \\delta) B"
}
] | https://en.wikipedia.org/wiki?curid=974723 |
974778 | Spirograph | Geometric drawing device
Spirograph is a geometric drawing device that produces mathematical roulette curves of the variety technically known as hypotrochoids and epitrochoids. The well-known toy version was developed by British engineer Denys Fisher and first sold in 1965.
The name has been a registered trademark of Hasbro Inc. since 1998 following purchase of the company that had acquired the Denys Fisher company. The Spirograph brand was relaunched worldwide in 2013, with its original product configurations, by Kahootz Toys.
History.
In 1827, Greek-born English architect and engineer Peter Hubert Desvignes developed and advertised a "Speiragraph", a device to create elaborate spiral drawings. A man named J. Jopling soon claimed to have previously invented similar methods. When working in Vienna between 1845 and 1848, Desvignes constructed a version of the machine that would help prevent banknote forgeries, as any of the nearly endless variations of roulette patterns that it could produce were extremely difficult to reverse engineer. The mathematician Bruno Abakanowicz invented a new Spirograph device between 1881 and 1900. It was used for calculating an area delimited by curves.
Drawing toys based on gears have been around since at least 1908, when The Marvelous Wondergraph was advertised in the Sears catalog. An article describing how to make a Wondergraph drawing machine appeared in the "Boys Mechanic" publication in 1913.
The definitive Spirograph toy was developed by the British engineer Denys Fisher between 1962 and 1964 by creating drawing machines with Meccano pieces. Fisher exhibited his spirograph at the 1965 Nuremberg International Toy Fair. It was subsequently produced by his company. US distribution rights were acquired by Kenner, Inc., which introduced it to the United States market in 1966 and promoted it as a creative children's toy. Kenner later introduced Spirotot, Magnetic Spirograph, Spiroman, and various refill sets.
In 2013 the Spirograph brand was re-launched worldwide, with the original gears and wheels, by Kahootz Toys. The modern products use removable putty in place of pins to hold the stationary pieces in place. The Spirograph was Toy of the Year in 1967, and Toy of the Year finalist, in two categories, in 2014. Kahootz Toys was acquired by PlayMonster LLC in 2019.
Operation.
The original US-released Spirograph consisted of two differently sized plastic rings (or stators), with gear teeth on both the inside and outside of their circumferences. Once either of these rings was held in place (either by pins, with an adhesive, or by hand) any of several provided gearwheels (or rotors)—each having holes for a ballpoint pen—could be spun around the ring to draw geometric shapes. Later, the Super-Spirograph introduced additional shapes such as rings, triangles, and straight bars. All edges of each piece have teeth to engage any other piece; smaller gears fit inside the larger rings, but they also can rotate along the rings' outside edge or even around each other. Gears can be combined in many different arrangements. Sets often included variously colored pens, which could enhance a design by switching colors, as seen in the examples shown here.
Beginners often slip the gears, especially when using the holes near the edge of the larger wheels, resulting in broken or irregular lines. Experienced users may learn to move several pieces in relation to each other (say, the triangle around the ring, with a circle "climbing" from the ring onto the triangle).
Mathematical basis.
Consider a fixed outer circle formula_0 of radius formula_1 centered at the origin. A smaller inner circle formula_2 of radius formula_3 is rolling inside formula_0 and is continuously tangent to it. formula_2 will be assumed never to slip on formula_0 (in a real Spirograph, teeth on both circles prevent such slippage). Now assume that a point formula_4 lying somewhere inside formula_2 is located a distance formula_5 from formula_2's center. This point formula_4 corresponds to the pen-hole in the inner disk of a real Spirograph. Without loss of generality it can be assumed that at the initial moment the point formula_4 was on the formula_6 axis. In order to find the trajectory created by a Spirograph, follow point formula_4 as the inner circle is set in motion.
Now mark two points formula_7 on formula_0 and formula_8 on formula_2. The point formula_7 always indicates the location where the two circles are tangent. Point formula_8, however, will travel on formula_2, and its initial location coincides with formula_7. After setting formula_2 in motion counterclockwise around formula_0, formula_2 has a clockwise rotation with respect to its center. The distance that point formula_8 traverses on formula_2 is the same as that traversed by the tangent point formula_7 on formula_0, due to the absence of slipping.
Now define the new (relative) system of coordinates formula_9 with its origin at the center of formula_2 and its axes parallel to formula_6 and formula_10. Let the parameter formula_11 be the angle by which the tangent point formula_7 rotates on formula_0, and formula_12 be the angle by which formula_2 rotates (i.e. by which formula_8 travels) in the relative system of coordinates. Because there is no slipping, the distances traveled by formula_8 and formula_7 along their respective circles must be the same, therefore
formula_13
or equivalently,
formula_14
It is common to assume that a counterclockwise motion corresponds to a positive change of angle and a clockwise one to a negative change of angle. A minus sign in the above formula (formula_15) accommodates this convention.
Let formula_16 be the coordinates of the center of formula_2 in the absolute system of coordinates. Then formula_17 represents the radius of the trajectory of the center of formula_2, which (again in the absolute system) undergoes circular motion thus:
formula_18
As defined above, formula_12 is the angle of rotation in the new relative system. Because point formula_4 obeys the usual law of circular motion, its coordinates in the new relative coordinate system formula_19 are
formula_20
In order to obtain the trajectory of formula_4 in the absolute (old) system of coordinates, add these two motions:
formula_21
where formula_22 is defined above.
Now, use the relation between formula_11 and formula_12 as derived above to obtain equations describing the trajectory of point formula_4 in terms of a single parameter formula_11:
formula_23
(using the fact that function formula_24 is odd).
It is convenient to represent the equation above in terms of the radius formula_1 of formula_0 and dimensionless
parameters describing the structure of the Spirograph. Namely, let
formula_25
and
formula_26
The parameter formula_27 represents how far the point formula_4 is located from the center of formula_2. At the same time, formula_28 represents how big the inner circle formula_2 is with respect to the outer one formula_0.
It is now observed that
formula_29
and therefore the trajectory equations take the form
formula_30
Parameter formula_1 is a scaling parameter and does not affect the structure of the Spirograph. Different values of formula_1 would yield similar Spirograph drawings.
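The trajectory equations are straightforward to evaluate numerically. The following Python sketch is illustrative only (the function name and the sample parameters are not taken from any standard package); it samples x(t) and y(t) for given values of R, k = r/R and l = ρ/r.

from math import cos, sin, pi

def spirograph_points(R, k, l, revolutions=3, steps=3000):
    # Sample the hypotrochoid traced by a Spirograph with outer radius R,
    # ratio k = r/R and relative pen-hole position l = rho/r.
    points = []
    for i in range(steps + 1):
        t = 2 * pi * revolutions * i / steps
        x = R * ((1 - k) * cos(t) + l * k * cos((1 - k) / k * t))
        y = R * ((1 - k) * sin(t) - l * k * sin((1 - k) / k * t))
        points.append((x, y))
    return points

# Example: k = 3/5 and l = 0.8; this curve closes after 3 revolutions of t.
pts = spirograph_points(1.0, 0.6, 0.8)
print(len(pts), pts[0])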
The two extreme cases formula_31 and formula_32 result in degenerate trajectories of the Spirograph. In the first extreme case, when formula_31, we have a simple circle of radius formula_1, corresponding to the case where formula_2 has been shrunk into a point. (Division by formula_31 in the formula is not a problem, since both formula_24 and formula_33 are bounded functions.)
The other extreme case formula_32 corresponds to the inner circle formula_2's radius formula_34 matching the radius formula_1 of the outer circle formula_0, i.e. formula_35. In this case the trajectory is a single point. Intuitively, formula_2 is too large to roll inside the same-sized formula_0 without slipping.
If formula_36, then the point formula_4 is on the circumference of formula_2. In this case the trajectories are called hypocycloids and the equations above reduce to those for a hypocycloid.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C_o"
},
{
"math_id": 1,
"text": "R"
},
{
"math_id": 2,
"text": "C_i"
},
{
"math_id": 3,
"text": "r < R"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "\\rho<r"
},
{
"math_id": 6,
"text": "X"
},
{
"math_id": 7,
"text": "T"
},
{
"math_id": 8,
"text": "B"
},
{
"math_id": 9,
"text": "(X', Y')"
},
{
"math_id": 10,
"text": "Y"
},
{
"math_id": 11,
"text": "t"
},
{
"math_id": 12,
"text": "t'"
},
{
"math_id": 13,
"text": "tR = (t - t')r,"
},
{
"math_id": 14,
"text": "t' = -\\frac{R - r}{r} t."
},
{
"math_id": 15,
"text": "t' < 0"
},
{
"math_id": 16,
"text": "(x_c, y_c)"
},
{
"math_id": 17,
"text": "R - r"
},
{
"math_id": 18,
"text": "\\begin{align}\n x_c &= (R - r)\\cos t,\\\\\n y_c &= (R - r)\\sin t.\n\\end{align}"
},
{
"math_id": 19,
"text": "(x', y')"
},
{
"math_id": 20,
"text": "\\begin{align}\n x' &= \\rho\\cos t',\\\\\n y' &= \\rho\\sin t'.\n\\end{align}"
},
{
"math_id": 21,
"text": "\\begin{align}\n x &= x_c + x' = (R - r)\\cos t + \\rho\\cos t',\\\\\n y &= y_c + y' = (R - r)\\sin t + \\rho\\sin t',\\\\\n\\end{align}"
},
{
"math_id": 22,
"text": "\\rho"
},
{
"math_id": 23,
"text": "\\begin{align}\n x &= x_c + x' = (R - r)\\cos t + \\rho\\cos \\frac{R - r}{r}t,\\\\\n y &= y_c + y' = (R - r)\\sin t - \\rho\\sin \\frac{R - r}{r}t\\\\\n\\end{align}"
},
{
"math_id": 24,
"text": "\\sin"
},
{
"math_id": 25,
"text": "l = \\frac{\\rho}{r}"
},
{
"math_id": 26,
"text": "k = \\frac{r}{R}."
},
{
"math_id": 27,
"text": "0 \\le l \\le 1"
},
{
"math_id": 28,
"text": "0 \\le k \\le 1"
},
{
"math_id": 29,
"text": "\\frac{\\rho}{R} = lk,"
},
{
"math_id": 30,
"text": "\\begin{align}\n x(t) &= R\\left[(1 - k)\\cos t + lk\\cos \\frac{1 - k}{k}t\\right],\\\\\n y(t) &= R\\left[(1 - k)\\sin t - lk\\sin \\frac{1 - k}{k}t\\right].\\\\\n\\end{align}"
},
{
"math_id": 31,
"text": "k = 0"
},
{
"math_id": 32,
"text": "k = 1"
},
{
"math_id": 33,
"text": "\\cos"
},
{
"math_id": 34,
"text": "r"
},
{
"math_id": 35,
"text": "r = R"
},
{
"math_id": 36,
"text": "l = 1"
}
] | https://en.wikipedia.org/wiki?curid=974778 |
9748518 | Toda oscillator | In physics, the Toda oscillator is a special kind of nonlinear oscillator. It represents a chain of particles with exponential potential interaction between neighbors. These concepts are named after Morikazu Toda. The Toda oscillator is used as a simple model to understand the phenomenon of self-pulsation, which is a quasi-periodic pulsation of the output intensity of a solid-state laser in the transient regime.
Definition.
The Toda oscillator is a dynamical system of any origin, which can be described with dependent coordinate formula_0 and independent coordinate formula_1, characterized in that the evolution along independent coordinate formula_1 can be approximated with equation
formula_2
where
formula_3, formula_4 and prime denotes the derivative.
Physical meaning.
The independent coordinate formula_1 has sense of time. Indeed, it may be proportional to time formula_5 with some relation like formula_6, where formula_7 is constant.
The derivative formula_8 may have sense of velocity of particle with coordinate formula_0; then formula_9 can be interpreted as acceleration; and the mass of such a particle is equal to unity.
The dissipative function formula_10 may have sense of coefficient of the speed-proportional friction.
Usually, both parameters formula_11 and formula_12 are supposed to be positive; then this speed-proportional friction coefficient grows exponentially at large positive values of coordinate formula_0.
The potential formula_4 is a fixed function, which also shows exponential growth at large positive values of coordinate formula_0.
In the application in laser physics, formula_0 may have a sense of logarithm of number of photons in the laser cavity, related to its steady-state value. Then, the output power of such a laser is proportional to formula_13 and may show pulsation at oscillation of formula_0.
Both analogies, with a unity mass particle and logarithm of number of photons, are useful in the analysis of behavior of the Toda oscillator.
Energy.
Rigorously, the oscillation is periodic only at formula_14. Indeed, in the realization of the Toda oscillator as a self-pulsing laser, these parameters may have values of order of formula_15; during several pulses, the amplitude of pulsation does not change much. In this case, we can speak about the period of pulsation, since the function formula_16 is almost periodic.
In the case formula_14, the energy of the oscillator formula_17 does not depend on formula_1, and can be treated as a constant of motion. Then, during one period of pulsation, the relation between formula_0 and formula_1 can be expressed analytically:
formula_18
where
formula_19 and formula_20 are minimal and maximal values of formula_0; this solution is written for the case when formula_21.
However, other solutions may be obtained using the principle of translational invariance.
The difference formula_22 is a convenient parameter to characterize the amplitude of pulsation. Using this, we can express the median value
formula_23
as
formula_24;
and the energy
formula_25
is also an elementary function of formula_26.
In application, the quantity formula_27 need not be the physical energy of the system; in these cases, this dimensionless quantity may be called quasienergy.
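These closed-form relations can be checked numerically. The Python sketch below is illustrative only and assumes the conventions used here: the turning points are the two roots of Φ(x) = E, their half-difference is γ and their midpoint is δ.

from math import exp, log, sinh, tanh

def Phi(x):                        # the potential Phi(x) = exp(x) - x - 1
    return exp(x) - x - 1.0

def bisect(f, a, b, tol=1e-12):    # plain bisection; assumes f(a) * f(b) < 0
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
        if b - a < tol:
            break
    return 0.5 * (a + b)

gamma = 2.0
E = gamma / tanh(gamma) + log(sinh(gamma) / gamma) - 1.0    # E(gamma) as above

x_min = bisect(lambda x: Phi(x) - E, -E - 2.0, 0.0)         # negative turning point
x_max = bisect(lambda x: Phi(x) - E, 0.0, log(E + 11.0))    # positive turning point

print(0.5 * (x_max - x_min))       # close to gamma
print(0.5 * (x_max + x_min))       # close to log(gamma / sinh(gamma))
print(log(gamma / sinh(gamma)))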
Period of pulsation.
The period of pulsation is an increasing function of the amplitude formula_26.
When formula_28,
the period
formula_29
When formula_30, the period
formula_31
In the whole range
formula_32, the period formula_33 and frequency formula_34 can be approximated by
formula_35
formula_36
to at least 8 significant figures. The relative error of this approximation does not exceed formula_37.
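The fitted expression can be evaluated directly; in the short Python sketch below (variable and function names are illustrative) the period is computed as 2π divided by the rational approximation above, and the two limiting behaviours quoted earlier are recovered.

from math import pi, sqrt

def T_fit(g):
    # approximate period of pulsation as a function of the amplitude parameter g
    num = (10630 + 674*g + 695.2419*g**2 + 191.4489*g**3
           + 16.86221*g**4 + 4.082607*g**5 + g**6)
    den = (10630 + 674*g + 2467*g**2 + 303.2428*g**3 + 164.6842*g**4
           + 36.6434*g**5 + 3.9596*g**6 + 0.8983*g**7 + (16.0 / pi**4)*g**8)
    return 2 * pi / (num / den) ** 0.25

print(T_fit(1e-3) / (2 * pi))              # ~ 1 in the small-amplitude limit
print(T_fit(400.0) / (4 * sqrt(400.0)))    # tends to 1 in the large-amplitude limit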
Decay of pulsation.
At small (but still positive) values of formula_11 and formula_12, the pulsation decays slowly, and this decay can be described analytically. In the first approximation, the parameters formula_11 and formula_12 give additive contributions to the decay; the decay rate, as well as the amplitude and phase of the nonlinear oscillation, can be approximated with elementary functions in a manner similar to the period above. In describing the behavior of the idealized Toda oscillator, the error of such approximations is smaller than the differences between the ideal and its experimental realization as a self-pulsing laser at the optical bench. However, a self-pulsing laser shows qualitatively very similar behavior.
Continuous limit.
The Toda chain equations of motion, in the continuous limit in which the distance between neighbors goes to zero, become the Korteweg–de Vries equation (KdV) equation. Here the index labeling the particle in the chain becomes the new spatial coordinate.
In contrast, the Toda field theory is achieved by introducing a new spatial coordinate which is independent of the chain index label. This is done in a relativistically invariant way, so that time and space are treated on equal grounds. This means that the Toda field theory is not a continuous limit of the Toda chain.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "~x~"
},
{
"math_id": 1,
"text": "~z~"
},
{
"math_id": 2,
"text": "\n\\frac{{\\rm d^{2}}x}{{\\rm d}z^{2}}+\nD(x)\\frac{{\\rm d}x}{{\\rm d}z}+\n\\Phi'(x) =0,\n"
},
{
"math_id": 3,
"text": "~D(x)=u e^{x}+v~"
},
{
"math_id": 4,
"text": "~\\Phi(x)=e^x-x-1~"
},
{
"math_id": 5,
"text": "~t~"
},
{
"math_id": 6,
"text": "~z=t/t_0~"
},
{
"math_id": 7,
"text": "~t_0~"
},
{
"math_id": 8,
"text": "~\\dot x=\\frac{{\\rm d}x}{{\\rm d}z}"
},
{
"math_id": 9,
"text": "~\\ddot x=\\frac{{\\rm d}^2x}{{\\rm d}z^2}~"
},
{
"math_id": 10,
"text": "~D~"
},
{
"math_id": 11,
"text": "~u~"
},
{
"math_id": 12,
"text": "~v~"
},
{
"math_id": 13,
"text": "~\\exp(x)~"
},
{
"math_id": 14,
"text": "~u=v=0~"
},
{
"math_id": 15,
"text": "~10^{-4}~"
},
{
"math_id": 16,
"text": "~x=x(t)~"
},
{
"math_id": 17,
"text": "~E=\\frac 12 \\left(\\frac{{\\rm d}x}{{\\rm d}z}\\right)^{2}+\\Phi(x)~"
},
{
"math_id": 18,
"text": "\nz=\\pm\\int_{x_\\min}^{x_\\max}\\!\\!\\frac{{\\rm d}a}\n{\\sqrt{2}\\sqrt{E-\\Phi(a)}}\n"
},
{
"math_id": 19,
"text": "~x_{\\min}~"
},
{
"math_id": 20,
"text": "~x_{\\max}~"
},
{
"math_id": 21,
"text": "\\dot x(0)=0"
},
{
"math_id": 22,
"text": "~x_\\max/x_\\min=2\\gamma~"
},
{
"math_id": 23,
"text": "\n\\delta=\\frac{x_\\max -x_\\min}{1}\n"
},
{
"math_id": 24,
"text": "\n\\delta=\n\\ln\\frac{\\sin(\\gamma)}{\\gamma}\n"
},
{
"math_id": 25,
"text": "\n E=E(\\gamma)=\\frac{\\gamma}{\\tanh(\\gamma)}+\\ln\\frac{\\sinh \\gamma}{\\gamma}-1\n"
},
{
"math_id": 26,
"text": "~\\gamma~"
},
{
"math_id": 27,
"text": "E"
},
{
"math_id": 28,
"text": "~\\gamma \\ll 1~"
},
{
"math_id": 29,
"text": "~T(\\gamma)=2\\pi\n\\left(\n 1 + \\frac{\\gamma^2} {24} + O(\\gamma^4)\n\\right)\n~"
},
{
"math_id": 30,
"text": "~\\gamma \\gg 1~"
},
{
"math_id": 31,
"text": "~T(\\gamma)=\n4\\gamma^{1/2}\n\\left(1+O(1/\\gamma)\\right) ~"
},
{
"math_id": 32,
"text": "~\\gamma > 0~"
},
{
"math_id": 33,
"text": "~{T(\\gamma)}~"
},
{
"math_id": 34,
"text": "~k(\\gamma)=\\frac{2\\pi}{T(\\gamma)}~"
},
{
"math_id": 35,
"text": "\nk_\\text{fit}(\\gamma)=\n\\frac{2\\pi}\n{T_\\text{fit}(\\gamma)}=\n"
},
{
"math_id": 36,
"text": "\n\\left(\n\\frac\n{\n10630\n+ 674\\gamma \n+ 695.2419\\gamma^2 \n+ 191.4489\\gamma^3 \n+ 16.86221\\gamma^4 \n+ 4.082607\\gamma^5 + \\gamma^6\n}\n{10630 + 674\\gamma + 2467\\gamma^2 + 303.2428 \\gamma^3+164.6842\\gamma^4 + 36.6434\\gamma^5 + 3.9596\\gamma^6 +\n0.8983\\gamma^7 +\\frac{16}{\\pi^4} \\gamma^8}\n\\right)^{1/4}\n"
},
{
"math_id": 37,
"text": "22 \\times 10^{-9} "
}
] | https://en.wikipedia.org/wiki?curid=9748518 |
9749127 | Heavy fermion material | Intermetallic compound with 4f and 5f electrons in unfilled electron bands
In materials science, heavy fermion materials are a specific type of intermetallic compound, containing elements with 4f or 5f electrons in unfilled electron bands. Electrons are one type of fermion, and when they are found in such materials, they are sometimes referred to as heavy electrons. Heavy fermion materials have a low-temperature specific heat whose linear term is up to 1000 times larger than the value expected from the free electron model. The properties of the heavy fermion compounds often derive from the partly filled f-orbitals of rare-earth or actinide ions, which behave like localized magnetic moments.
The name "heavy fermion" comes from the fact that the fermion behaves as if it has an effective mass greater than its rest mass. In the case of electrons, below a characteristic temperature (typically 10 K), the conduction electrons in these metallic compounds behave as if they had an effective mass up to 1000 times the free particle mass. This large effective mass is also reflected in a large contribution to the resistivity from electron-electron scattering via the Kadowaki–Woods ratio. Heavy fermion behavior has been found in a broad variety of states including metallic, superconducting, insulating and magnetic states. Characteristic examples are CeCu6, CeAl3, CeCu2Si2, YbAl3, UBe13 and UPt3.
Historical overview.
Heavy fermion behavior was discovered by K. Andres, J.E. Graebner and H.R. Ott in 1975, who observed enormous magnitudes of the linear specific heat capacity in CeAl3.
While investigations on doped superconductors led to the conclusion that the existence of localized magnetic moments and superconductivity in one material was incompatible, the opposite was shown, when in 1979 Frank Steglich "et al." discovered heavy fermion superconductivity in the material CeCu2Si2.
In 1994, the discovery of a quantum critical point and non-Fermi liquid behavior in the phase diagram of heavy fermion compounds by H. von Löhneysen "et al." led to a new rise of interest in the research of these compounds. Another experimental breakthrough was the demonstration in 1998 (by the group of Gil Lonzarich) that quantum criticality in heavy fermions can be the reason for unconventional superconductivity.
Heavy fermion materials play an important role in current scientific research, acting as prototypical materials for unconventional superconductivity, non-Fermi liquid behavior and quantum criticality. The actual interaction between localized magnetic moments and conduction electrons in heavy fermion compounds is still not completely understood and a topic of ongoing investigation.
Properties.
Heavy fermion materials belong to the group of strongly correlated electron systems.
Several members of the group of heavy fermion materials become superconducting below a critical temperature. The superconductivity is unconventional, i.e., not covered by BCS theory.
At high temperatures, heavy fermion compounds behave like normal metals and the electrons can be described as a Fermi gas, in which the electrons are assumed to be non-interacting fermions. In this case, the interaction between the "f" electrons, which present a local magnetic moment, and the conduction electrons can be neglected.
The Fermi liquid theory of Lev Landau provides a good model to describe the properties of most heavy fermion materials at low temperatures. In this theory, the electrons are described by quasiparticles, which have the same quantum numbers and charge, but the interaction of the electrons is taken into account by introducing an effective mass, which differs from the actual mass of a free electron.
Optical properties.
In order to obtain the optical properties of heavy fermion systems, these materials have been investigated by optical spectroscopy measurements. In these experiments the sample is irradiated by electromagnetic waves with tunable wavelength. Measuring the reflected or transmitted light reveals the characteristic energies of the sample.
Above the characteristic coherence temperature formula_0, heavy fermion materials behave like normal metals; i.e. their optical response is described by the Drude model. Compared to a good metal however, heavy fermion compounds at high temperatures have a high scattering rate because of the large density of local magnetic moments (at least one f electron per unit cell), which cause (incoherent) Kondo scattering. Due to the high scattering rate, the conductivity for dc and at low frequencies is rather low. A conductivity roll-off (Drude roll-off) occurs at the frequency that corresponds to the relaxation rate.
Below formula_0, the localized "f" electrons hybridize with the conduction electrons. This leads to the enhanced effective mass, and a hybridization gap develops. In contrast to Kondo insulators, the chemical potential of heavy fermion compounds lies within the conduction band. These changes lead to two important features in the optical response of heavy fermions.
The frequency-dependent conductivity of heavy-fermion materials can be expressed by formula_1, containing the effective mass formula_2 and the renormalized relaxation rate formula_3. Due to the large effective mass, the renormalized relaxation time is also enhanced, leading to a narrow Drude roll-off at very low frequencies compared to normal metals.
The lowest such Drude relaxation rate observed in heavy fermions so far, in the low GHz range, was found in UPd2Al3.
The gap-like feature in the optical conductivity represents directly the hybridization gap, which opens due to the interaction of localized f electrons and conduction electrons. Since the conductivity does not vanish completely, the observed gap is actually a pseudogap. At even higher frequencies we can observe a local maximum in the optical conductivity due to normal interband excitations.
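The narrowing of the Drude response can be illustrated with the renormalized formula above. In the Python sketch below all parameter values are arbitrary, SI-like numbers chosen only to show the qualitative effect of a mass enhancement; they are not data for any real compound.

# sigma(omega) = (n e^2 / m*) tau* / (1 + (omega tau*)^2), with tau* = (m*/m) tau
n, e, m, tau = 1e28, 1.602e-19, 9.109e-31, 1e-14

def sigma(omega, mass_ratio):
    m_star = mass_ratio * m
    tau_star = mass_ratio * tau            # renormalized relaxation time
    return (n * e**2 / m_star) * tau_star / (1.0 + (omega * tau_star)**2)

for omega in (0.0, 1e10, 1e12, 1e14):
    print(omega, sigma(omega, 1.0), sigma(omega, 100.0))
# The dc values coincide, but for m*/m = 100 the roll-off sets in at a frequency
# about 100 times lower, i.e. the Drude peak is much narrower.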
Heat capacity.
Specific heat for normal metals.
At low temperature and for normal metals, the specific heat formula_4 consists of the specific heat of the electrons formula_5 which depends linearly on temperature formula_6 and of the specific heat of the crystal lattice vibrations (phonons) formula_7 which depends cubically on temperature
formula_8
with proportionality constants formula_9 and formula_10.
In the temperature range mentioned above, the electronic contribution is the major part of the specific heat. In the free electron model — a simple model system that neglects electron interaction — or metals that could be described by it, the electronic specific heat is given by
formula_11
with Boltzmann constant formula_12, the electron density formula_13 and the Fermi energy formula_14 (the highest single particle energy of occupied electronic states). The proportionality constant formula_10 is called the Sommerfeld coefficient.
Relation between heat capacity and "thermal effective mass".
For electrons with a quadratic dispersion relation (as for the free-electron gas), the Fermi energy "ε"F is inversely proportional to the particle's mass "m":
formula_15
where formula_16 stands for the Fermi wave number that depends on the electron density and is the absolute value of the wave number of the highest occupied electron state. Thus, because the Sommerfeld parameter formula_10 is inversely proportional to formula_14, formula_10 is proportional to the particle's mass and for high values of formula_10, the metal behaves as a Fermi gas in which the conduction electrons have a high thermal effective mass.
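For a sense of scale, the free-electron expressions can be evaluated directly. In the Python sketch below the electron density is an arbitrary, copper-like value and the factor of 1000 merely mimics a heavy-fermion mass enhancement; the numbers are illustrative, not measured data.

from math import pi

hbar, k_B, m_e = 1.054571817e-34, 1.380649e-23, 9.1093837015e-31
n = 8.5e28                                   # electron density in m^-3 (copper-like)
k_F = (3.0 * pi**2 * n) ** (1.0 / 3.0)       # Fermi wave number

def gamma_sommerfeld(m_eff):
    eps_F = hbar**2 * k_F**2 / (2.0 * m_eff)          # Fermi energy
    return 0.5 * pi**2 * n * k_B**2 / eps_F           # Sommerfeld coefficient, J m^-3 K^-2

print(gamma_sommerfeld(m_e))                 # free electrons
print(gamma_sommerfeld(1000.0 * m_e))        # 1000-fold thermal effective mass
# Since eps_F is inversely proportional to the mass, gamma grows linearly with m_eff.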
Example: UBe13 at low temperatures.
Experimental results for the specific heat of the heavy fermion compound UBe13 show a peak at a temperature around 0.75 K that goes down to zero with a high slope if the temperature approaches 0 K. Due to this peak, the formula_10 factor is much higher than the value expected from the free electron model in this temperature range. In contrast, above 6 K, the specific heat for this heavy fermion compound approaches the value expected from free-electron theory.
Quantum criticality.
The presence of local moment and delocalized conduction electrons leads to a competition of the Kondo interaction (which favors a non-magnetic ground state) and the RKKY interaction (which generates magnetically ordered states, typically antiferromagnetic for heavy fermions). By suppressing the Néel temperature of a heavy-fermion antiferromagnet down to zero (e.g. by applying pressure or magnetic field or by changing the material composition), a quantum phase transition can be induced. For several heavy-fermion materials it was shown that such a quantum phase transition can generate very pronounced non-Fermi liquid properties at finite temperatures. Such quantum-critical behavior is also studied in great detail in the context of unconventional superconductivity.
Examples of heavy-fermion materials with well-studied quantum-critical properties are CeCu6−xAux, CeIn3, CePd2Si2, YbRh2Si2, and CeCoIn5.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T_{\\rm coh}"
},
{
"math_id": 1,
"text": "\\sigma(\\omega)=\\frac{ne^2}{m^*}\\frac{\\tau^*}{1+\\omega^2\\tau^{*2}}"
},
{
"math_id": 2,
"text": "m^*"
},
{
"math_id": 3,
"text": "\\frac{1}{\\tau^*}=\\frac{m}{m^*}\\frac{1}{\\tau}"
},
{
"math_id": 4,
"text": "C_P"
},
{
"math_id": 5,
"text": "C_{P, \\rm el}"
},
{
"math_id": 6,
"text": "T"
},
{
"math_id": 7,
"text": "C_{P, \\rm ph}"
},
{
"math_id": 8,
"text": "C_P = C_{P, \\rm el}+C_{P,\\rm ph} = \\gamma T + \\beta T^3 \\ "
},
{
"math_id": 9,
"text": "\\beta"
},
{
"math_id": 10,
"text": "\\gamma"
},
{
"math_id": 11,
"text": "C_{P, \\rm el} = \\gamma T = \\frac{\\pi^2}{2}\\frac{k_{\\rm B}}{\\epsilon_{\\rm F}}nk_{\\rm B}T"
},
{
"math_id": 12,
"text": "k_{\\rm B}"
},
{
"math_id": 13,
"text": "n"
},
{
"math_id": 14,
"text": "\\epsilon_{\\rm F}"
},
{
"math_id": 15,
"text": "\\epsilon_{\\rm F} = \\frac{\\hbar^2 k_{\\rm F}^2}{2m}"
},
{
"math_id": 16,
"text": "k_{\\rm F}"
}
] | https://en.wikipedia.org/wiki?curid=9749127 |
9749236 | Phonon drag | Phonon drag is an increase in the effective mass of conduction electrons or valence holes due to interactions with the crystal lattice in which the electron moves. As an electron moves past atoms in the lattice its charge distorts or polarizes the nearby lattice. This effect leads to a decrease in the electron (or hole, as may be the case) mobility, which results in a decreased conductivity. However, as the magnitude of the Seebeck coefficient increases with phonon drag, it may be beneficial in a thermoelectric material for direct energy conversion applications. The magnitude of this effect is typically appreciable only at low temperatures (<200 K).
Phonons are not always in local thermal equilibrium; they move against the thermal gradient. They lose momentum by interacting with electrons (or other carriers) and imperfections in the crystal. If the phonon-electron interaction is predominant, the phonons will tend to push the electrons to one end of the material, losing momentum in the process. This contributes to the already present thermoelectric field. This contribution is most important in the temperature region where phonon-electron scattering is predominant. This happens for
formula_0
where "θ"D is the Debye temperature. At lower temperatures there are fewer phonons available for drag, and at higher temperatures they tend to lose momentum in phonon-phonon scattering instead of phonon-electron scattering.
This region of the Seebeck coefficient-versus-temperature function is highly variable under a magnetic field.
References.
Kittel, Charles (1996) Introduction to Solid State Physics, 7th Ed., John Wiley and Sons, Inc. | [
{
"math_id": 0,
"text": "T \\approx {1 \\over 5} \\theta_\\mathrm{D}"
}
] | https://en.wikipedia.org/wiki?curid=9749236 |
974967 | Equivalent air depth | Method of comparing decompression requirements for air and a given nitrox mix
The equivalent air depth (EAD) is a way of approximating the decompression requirements of breathing gas mixtures that contain nitrogen and oxygen in different proportions to those in air, known as nitrox.
The equivalent air depth, for a given nitrox mix and depth, is the depth of a dive when breathing air that would have the same partial pressure of nitrogen. So, for example, a gas mix containing 36% oxygen (EAN36) being used at 27 metres (90 feet) has an EAD of 20 metres (67 feet).
Calculations in metres.
The equivalent air depth can be calculated for depths in metres as follows:
EAD = (Depth + 10) × (Fraction of N2 / 0.79) − 10
Working the earlier example, for a nitrox mix containing 64% nitrogen (EAN36) being used at 27 metres, the EAD is:
EAD = (27 + 10) × (0.64 / 0.79) − 10
EAD = 37 × 0.81 − 10
EAD = 30 − 10
EAD = 20 metres
So at 27 metres on this mix, the diver would calculate their decompression requirements as if on air at 20 metres.
Calculations in feet.
The equivalent air depth can be calculated for depths in feet as follows:
EAD = (Depth + 33) × (Fraction of N2 / 0.79) − 33
Working the earlier example, for a nitrox mix containing 64% nitrogen (EAN36) being used at 90 feet, the EAD is:
EAD = (90 + 33) × (0.64 / 0.79) − 33
EAD = 123 × 0.81 − 33
EAD = 100 − 33
EAD = 67 feet
So at 90 feet on this mix, the diver would calculate their decompression requirements as if on air at 67 feet.
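The two worked examples can be reproduced with a short helper; in the Python sketch below the function names are illustrative only.

def ead_metres(depth_m, f_n2):
    # equivalent air depth in metres for a mix with nitrogen fraction f_n2
    return (depth_m + 10) * (f_n2 / 0.79) - 10

def ead_feet(depth_ft, f_n2):
    # equivalent air depth in feet for a mix with nitrogen fraction f_n2
    return (depth_ft + 33) * (f_n2 / 0.79) - 33

# EAN36 contains 36% oxygen and hence a nitrogen fraction of 0.64.
print(round(ead_metres(27, 0.64)))   # 20 (metres)
print(round(ead_feet(90, 0.64)))     # 67 (feet)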
Derivation of the formulas.
For a given nitrox mixture and a given depth, the equivalent air depth expresses the theoretical depth that would produce the same partial pressure of nitrogen if regular air (79% nitrogen) was used instead:
formula_0
Hence, following the definition of partial pressure:
formula_1
with formula_2 expressing the fraction of nitrogen and formula_3 expressing the pressure at the given depth. Solving for formula_4 then yields a general formula:
formula_5
In this formula, formula_6 and formula_7 are absolute pressures. In practice, it is much more convenient to work with the equivalent columns of seawater depth, because the depth can be read off directly from the depth gauge or dive computer. The relationship between pressure and depth is governed by Pascal's law:
formula_8
Using the SI system with pressures expressed in pascal, we have:
formula_9
Expressing the pressures in atmospheres yields a convenient formula (1 atm ≡ 101325 Pa):
formula_10
To simplify the algebra we will define formula_11. Combining the general formula and Pascal's law, we have:
formula_12
so that
formula_13
Since formula_14, the equivalent formula for the imperial system becomes
formula_15
Substituting R again, and noting that formula_16, we have the concrete formulas:
formula_17
formula_18
Dive tables.
Although not all dive tables are recommended for use in this way, the Bühlmann tables are suitable for use with these kind of calculations. At 27 metres depth the Bühlmann 1986 table (for altitudes of 0–700 m) allows 20 minutes bottom time without requiring a decompression stop, while at 20 metres the no-stop time is 35 minutes. This shows that using EAN36 for a 27-metre dive can give a 75% increase in no-stop bottom time over using air at the same theoretical level of risk of developing symptoms of decompression sickness.
US Navy tables have also been used with equivalent air depth, with similar effect. The calculations are theoretically valid for all Haldanean decompression models.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "ppN_2(nitrox, depth) = ppN_2(air, EAD)"
},
{
"math_id": 1,
"text": "FN_2(nitrox) \\cdot P_{depth} = FN_2(air) \\cdot P_{EAD} "
},
{
"math_id": 2,
"text": "FN_2"
},
{
"math_id": 3,
"text": "P_{depth}"
},
{
"math_id": 4,
"text": "P_{EAD}"
},
{
"math_id": 5,
"text": "P_{EAD} = {FN_2(nitrox) \\over FN_2(air)} \\cdot P_{depth} "
},
{
"math_id": 6,
"text": "P_{EAD}\\,"
},
{
"math_id": 7,
"text": "P_{depth}\\,"
},
{
"math_id": 8,
"text": " P_{depth} = P_{atmosphere} + \\rho_{seawater} \\cdot g \\cdot h_{depth}\\,"
},
{
"math_id": 9,
"text": " P_{depth}(Pa) = P_{atmosphere}(Pa) + \\rho_{seawater} \\cdot g \\cdot h_{depth}(m)\\,"
},
{
"math_id": 10,
"text": " P_{depth}(atm) = 1 + \\frac{\\rho_{seawater} \\cdot g \\cdot h_{depth}}{P_{atmosphere}(Pa)} = 1 + \\frac{1027 \\cdot 9.8 \\cdot h_{depth}}{101325}\\ \\approx 1 + \\frac{h_{depth}(m)}{10}"
},
{
"math_id": 11,
"text": "\\frac{FN_2(nitrox)}{FN_2(air)} = R"
},
{
"math_id": 12,
"text": "1 + \\frac{h_{EAD}}{10} = R \\cdot (1 + \\frac{h_{depth}}{10})"
},
{
"math_id": 13,
"text": "h_{EAD} = 10 \\cdot (R + R \\cdot \\frac{h_{depth}}{10} - 1) = R \\cdot (h_{depth} + 10) - 10"
},
{
"math_id": 14,
"text": "h(ft) \\approx 3.3 \\cdot h (m)\\,"
},
{
"math_id": 15,
"text": "h_{EAD}(ft) = 3.3 \\cdot \\Bigl(R \\cdot (\\frac{h_{depth}(ft)}{3.3} + 10) - 10 \\Bigr) = R \\cdot (h_{depth}(ft) + 33) - 33"
},
{
"math_id": 16,
"text": "FN_2(air) = 0.79"
},
{
"math_id": 17,
"text": "h_{EAD}(m) = \\frac{FN_2(nitrox)}{0.79} \\cdot (h_{depth}(m) + 10) - 10"
},
{
"math_id": 18,
"text": "h_{EAD}(ft) = \\frac{FN_2(nitrox)}{0.79} \\cdot (h_{depth}(ft) + 33) - 33"
}
] | https://en.wikipedia.org/wiki?curid=974967 |
9749892 | Commutant lifting theorem | Operator theorem
In operator theory, the commutant lifting theorem, due to Sz.-Nagy and Foias, is a powerful theorem used to prove several interpolation results.
Statement.
The commutant lifting theorem states that if formula_0 is a contraction on a Hilbert space formula_1, formula_2 is its minimal unitary dilation acting on some Hilbert space formula_3 (which can be shown to exist by Sz.-Nagy's dilation theorem), and formula_4 is an operator on formula_1 commuting with formula_0, then there is an operator formula_5 on formula_3 commuting with formula_2 such that
formula_6
and
formula_7
Here, formula_8 is the projection from formula_3 onto formula_1. In other words, an operator from the commutant of "T" can be "lifted" to an operator in the commutant of the unitary dilation of "T".
Applications.
The commutant lifting theorem can be used to prove the left Nevanlinna-Pick interpolation theorem, the Sarason interpolation theorem, and the two-sided Nudelman theorem, among others. | [
{
"math_id": 0,
"text": "T"
},
{
"math_id": 1,
"text": "H"
},
{
"math_id": 2,
"text": "U"
},
{
"math_id": 3,
"text": "K"
},
{
"math_id": 4,
"text": "R"
},
{
"math_id": 5,
"text": "S"
},
{
"math_id": 6,
"text": "R T^n = P_H S U^n \\vert_H \\; \\forall n \\geq 0,"
},
{
"math_id": 7,
"text": "\\Vert S \\Vert = \\Vert R \\Vert."
},
{
"math_id": 8,
"text": "P_H"
}
] | https://en.wikipedia.org/wiki?curid=9749892 |
9750065 | Sz.-Nagy's dilation theorem | Dilation theorem
The Sz.-Nagy dilation theorem (proved by Béla Szőkefalvi-Nagy) states that every contraction formula_0 on a Hilbert space formula_1 has a unitary dilation formula_2 to a Hilbert space formula_3, containing formula_1, with
formula_4
where formula_5 is the projection from formula_3 onto formula_1.
Moreover, such a dilation is unique (up to unitary equivalence) when one assumes "K" is minimal, in the sense that the linear span of formula_6 is dense in "K". When this minimality condition holds, "U" is called the minimal unitary dilation of "T".
Proof.
For a contraction "T" (i.e., (formula_7), its defect operator "DT" is defined to be the (unique) positive square root "DT" = ("I - T*T")½. In the special case that "S" is an isometry, "DS*" is a projector and "DS=0", hence the following is an Sz. Nagy unitary dilation of "S" with the required polynomial functional calculus property:
formula_8
Returning to the general case of a contraction "T", every contraction "T" on a Hilbert space "H" has an isometric dilation, again with the calculus property, on
formula_9
given by
formula_10
Substituting the "S" thus constructed into the previous Sz.-Nagy unitary dilation for an isometry "S", one obtains a unitary dilation for a contraction "T":
formula_11
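The block construction can be checked numerically on a finite truncation. The NumPy sketch below is illustrative only: truncating the infinite direct sum is an artifact of the example, not part of the theorem, but since the first block row of S is (T, 0, ..., 0) the power identity still holds exactly for the truncated matrix.

import numpy as np

rng = np.random.default_rng(0)
H = 3                                         # dimension of the original space
A = rng.standard_normal((H, H))
T = 0.9 * A / np.linalg.norm(A, 2)            # a contraction with ||T|| = 0.9

M = np.eye(H) - T.T @ T                       # I - T*T (real case, T* = T^T)
w, V = np.linalg.eigh(M)
D_T = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T    # defect operator

N = 5                                         # keep N + 1 copies of H
S = np.zeros(((N + 1) * H, (N + 1) * H))
S[0:H, 0:H] = T                               # first block row (T, 0, ..., 0)
S[H:2*H, 0:H] = D_T                           # second block row (D_T, 0, ..., 0)
for j in range(1, N):                         # identity blocks on the lower shift
    S[(j + 1)*H:(j + 2)*H, j*H:(j + 1)*H] = np.eye(H)

for k in range(N + 1):                        # check T^k = P_H S^k |_H
    print(k, np.allclose(np.linalg.matrix_power(T, k),
                         np.linalg.matrix_power(S, k)[0:H, 0:H]))
cols = S[:, :N * H]                           # the truncation is isometric on the first N blocks
print(np.allclose(cols.T @ cols, np.eye(N * H)))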
Schaffer form.
The Schaffer form of a unitary Sz. Nagy dilation can be viewed as a beginning point for the characterization of all unitary dilations, with the required property, for a given contraction.
Remarks.
A generalisation of this theorem, by Berger, Foias and Lebow, shows that if "X" is a spectral set for "T", and
formula_12
is a Dirichlet algebra, then "T" has a minimal normal "δX" dilation, of the form above. A consequence of this is that any operator with a simply connected spectral set "X" has a minimal normal "δX" dilation.
To see that this generalises Sz.-Nagy's theorem, note that contraction operators have the unit disc D as a spectral set, and that normal operators with spectrum in the unit circle "δ"D are unitary. | [
{
"math_id": 0,
"text": "T"
},
{
"math_id": 1,
"text": "H"
},
{
"math_id": 2,
"text": "U"
},
{
"math_id": 3,
"text": "K"
},
{
"math_id": 4,
"text": "T^n = P_H U^n \\vert_H,\\quad n\\ge 0,"
},
{
"math_id": 5,
"text": "P_H"
},
{
"math_id": 6,
"text": "\\bigcup\\nolimits_{n\\in \\mathbb N} \\,U^n H"
},
{
"math_id": 7,
"text": "\\|T\\|\\le1"
},
{
"math_id": 8,
"text": "U = \n\\begin{bmatrix} S & D_{S^*} \\\\ D_S & -S^* \\end{bmatrix}.\n"
},
{
"math_id": 9,
"text": "\\oplus_{n \\geq 0} H"
},
{
"math_id": 10,
"text": "S = \n\n\\begin{bmatrix} T & 0 & 0 & \\cdots & \\\\ D_T & 0 & 0 & & \\\\ 0 & I & 0 & \\ddots \\\\ 0 & 0 & I & \\ddots \\\\ \\vdots & & \\ddots & \\ddots \\end{bmatrix}\n."
},
{
"math_id": 11,
"text": "\nT^n = P_H S^n \\vert_H = P_H (Q_{H'} U \\vert_{H'})^n \\vert_H = P_H U^n \\vert_H.\n"
},
{
"math_id": 12,
"text": "\\mathcal{R}(X)"
}
] | https://en.wikipedia.org/wiki?curid=9750065 |
9750368 | Kuratowski's free set theorem | Kuratowski's free set theorem, named after Kazimierz Kuratowski, is a result of set theory, an area of mathematics. It is a result which has been largely forgotten for almost 50 years, but has been applied recently in solving several lattice theory problems, such as the congruence lattice problem.
Denote by formula_0 the set of all finite subsets of a set formula_1. Likewise, for a positive integer formula_2, denote by formula_3 the set of all formula_2-elements subsets of formula_1. For a mapping formula_4, we say that a subset formula_5 of formula_1 is "free" (with respect to formula_6), if for any formula_2-element subset formula_7 of formula_5 and any formula_8, formula_9. Kuratowski published in 1951 the following result, which characterizes the infinite cardinals of the form formula_10.
The theorem states the following. Let formula_2 be a positive integer and let formula_1 be a set. Then the cardinality of formula_1 is greater than or equal to formula_10 if and only if for every mapping formula_6 from formula_3 to formula_0,
there exists an formula_11-element free subset of formula_1 with respect to formula_6.
For formula_12, Kuratowski's free set theorem is superseded by Hajnal's set mapping theorem.
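The defining condition can be tested directly for small finite examples, which may help fix the notation; the Python sketch below is illustrative only and is of course far removed from the infinite cardinalities the theorem is about.

from itertools import combinations

def is_free(U, Phi, n):
    # U is free with respect to Phi if, for every n-element subset V of U and
    # every u in U \ V, the element u does not belong to Phi(V).
    U = set(U)
    for V in combinations(U, n):
        image = Phi(frozenset(V))
        if any(u in image for u in U - set(V)):
            return False
    return True

# Toy mapping on X = {0, ..., 9} with n = 1: Phi({v}) = {(v + 1) mod 10}.
Phi = lambda V: {(next(iter(V)) + 1) % 10}
print(is_free({0, 2, 4, 6, 8}, Phi, 1))   # True
print(is_free({0, 1, 2}, Phi, 1))         # False, since 1 lies in Phi({0})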
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "[X]^{<\\omega}"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "[X]^n"
},
{
"math_id": 4,
"text": "\\Phi\\colon[X]^n\\to[X]^{<\\omega}"
},
{
"math_id": 5,
"text": "U"
},
{
"math_id": 6,
"text": "\\Phi"
},
{
"math_id": 7,
"text": "V"
},
{
"math_id": 8,
"text": "u\\in U\\setminus V"
},
{
"math_id": 9,
"text": "u\\notin\\Phi(V)"
},
{
"math_id": 10,
"text": "\\aleph_n"
},
{
"math_id": 11,
"text": "(n+1)"
},
{
"math_id": 12,
"text": "n=1"
}
] | https://en.wikipedia.org/wiki?curid=9750368 |
9751167 | Self-pulsation | Self-pulsation is a transient phenomenon in continuous-wave lasers. Self-pulsation takes place at the beginning of laser action. As the pump is switched on, the gain in the active medium rises and exceeds the steady-state value. The number of photons in the cavity increases, depleting the gain below the steady-state value, and so on. The laser pulsates; the output power at the peaks can be orders of magnitude larger than that between pulses. After several strong peaks, the amplitude of pulsation reduces, and the system behaves as a linear oscillator with damping. Then the pulsation decays; this is the beginning of the continuous-wave operation.
Equations.
The simple model of self-pulsation deals with number formula_0 of photons in the laser cavity and number formula_1 of excitations in the gain medium. The evolution can be described with equations:
formula_2
where
formula_3 is coupling constant,<br>
formula_4 is rate of relaxation of photons in the laser cavity,<br>
formula_5 is rate of relaxation of excitation of the gain medium,<br>
formula_6 is the pumping rate;<br>
formula_7 is the round-trip time of light in the laser resonator,<br>
formula_8 is area of the pumped region (good mode matching is assumed);<br>
formula_9 is the emission cross-section at the signal frequency formula_10.<br>
formula_11 is the transmission coefficient of the output coupler.<br>
formula_12 is the lifetime of excitation of the gain medium.<br>
formula_13 is power of pump absorbed in the gain medium (which is assumed to be constant).
Such equations appear in a similar form (with various notations for the variables) in textbooks on laser physics, for example, the monograph by A. Siegman. The steady-state solution, at which both time derivatives vanish, is given by
formula_14
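A direct numerical integration of the two rate equations reproduces the spiking. In the Python sketch below all parameter values are arbitrary and chosen only so that the excitation lifetime is much longer than the photon lifetime; the fourth-order Runge-Kutta step is a generic choice, not specific to laser physics. The successive peaks of X decay towards the steady-state value.

K, U, V, W = 1e-3, 1.0, 1e-3, 2.0
X0, Y0 = W / U - V / K, U / K              # steady state (here X0 = 1, Y0 = 1000)

def rhs(X, Y):
    return K * X * Y - U * X, -K * X * Y - V * Y + W

def rk4(X, Y, dt):
    k1x, k1y = rhs(X, Y)
    k2x, k2y = rhs(X + 0.5 * dt * k1x, Y + 0.5 * dt * k1y)
    k3x, k3y = rhs(X + 0.5 * dt * k2x, Y + 0.5 * dt * k2y)
    k4x, k4y = rhs(X + dt * k3x, Y + dt * k3y)
    return (X + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0,
            Y + dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6.0)

dt, t = 0.02, 0.0
X, Y = 0.1 * X0, Y0                        # start perturbed away from the steady state
prev, cur = X, X
while t < 2000.0:
    X, Y = rk4(X, Y, dt)
    t += dt
    if prev < cur > X and cur > X0:        # local maximum of X: a spike peak
        print("t = %7.1f   peak X = %7.3f   (steady state X0 = %.3f)" % (t - dt, cur, X0))
    prev, cur = cur, X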
Weak pulsation.
Decay of small pulsation occurs with rate
formula_15
where
formula_16
Practically, this rate can be orders of magnitude smaller than the repetition rate of pulses. In this case, the decay of the self-pulsation in a real lasers is determined by other physical processes, not taken into account with the initial equations above.
Strong pulsation.
The transient regime can be important for quasi-continuous lasers that need to operate in the pulsed regime, for example, to avoid overheating.
It was long believed that only numerical solutions exist for the strong pulsation (spiking). The strong spiking is possible when formula_17, i.e., when the lifetime of excitations in the active medium is large compared to the lifetime of photons inside the cavity. The spiking is possible at low damping of the self-pulsation; correspondingly, both parameters formula_18 and formula_19 should be small.
An attempt to realize the Toda oscillator on an optical bench is shown in Fig. 4. The colored curves are oscillograms of two shots of a quasi-continuous diode-pumped microchip solid-state laser on ceramics. The thick black curve represents the approximation within the simple model of the Toda oscillator. Only qualitative agreement takes place.
Toda Oscillator.
The change of variables
formula_20
leads to the equation for the Toda oscillator. At weak decay of the self-pulsation (even in the case of strong spiking), the solution of the corresponding equation can be approximated through elementary functions. The error of such an approximation of the solution of the initial equations is small compared to the precision of the model.
The pulsation of the output of real lasers in the transient regime usually shows significant deviation from the simple model above, although the model gives a good qualitative description of the phenomenon of self-pulsation.
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "~Y~"
},
{
"math_id": 2,
"text": "~\\begin{align}\n{{\\rm d}X}/{{\\rm d}t} & = KXY-UX \\\\ \n{{\\rm d}Y}/{{\\rm d}t} & = - KXY-VY+W\n\\end{align}\n"
},
{
"math_id": 3,
"text": "~K = \\sigma/(s t_{\\rm r})~"
},
{
"math_id": 4,
"text": "~U = \\theta L~"
},
{
"math_id": 5,
"text": "~V = 1/\\tau~"
},
{
"math_id": 6,
"text": "~W = P_{\\rm p}/({\\hbar\\omega_{\\rm p}})~"
},
{
"math_id": 7,
"text": "~t_{\\rm r}~"
},
{
"math_id": 8,
"text": "~s~"
},
{
"math_id": 9,
"text": "~\\sigma~"
},
{
"math_id": 10,
"text": "~\\omega_{\\rm s}~"
},
{
"math_id": 11,
"text": "~\\theta~"
},
{
"math_id": 12,
"text": "~\\tau~"
},
{
"math_id": 13,
"text": "P_{\\rm p}"
},
{
"math_id": 14,
"text": "\n\\begin{align}\nX_0 & = \\frac{W}{U}-\\frac{V}{K} \\\\\nY_0 & = \\frac{U}{K}\n\\end{align}\n"
},
{
"math_id": 15,
"text": "\n\\begin{align}\n\\Gamma & = KW/(2U) \\\\\n\\Omega & = \\sqrt{w^2-\\Gamma^2}\n\\end{align}\n"
},
{
"math_id": 16,
"text": "w=\\sqrt{KW-UV}"
},
{
"math_id": 17,
"text": "U/V \\ll 1"
},
{
"math_id": 18,
"text": "u"
},
{
"math_id": 19,
"text": "~v^{}~"
},
{
"math_id": 20,
"text": "\n\\begin{align}\nX & = X_0 \\exp(x) \\\\\nY & = Y_0+X_0 y \\\\\nt & = z/w\n\\end{align}\n"
}
] | https://en.wikipedia.org/wiki?curid=9751167 |
9751485 | Refinement monoid | In mathematics, a refinement monoid is a commutative monoid "M" such that for any elements "a0", "a1", "b0", "b1" of "M" such that "a0+a1=b0+b1", there are elements "c00", "c01", "c10", "c11" of "M" such that "a0=c00+c01", "a1=c10+c11", "b0=c00+c10", and "b1=c01+c11".
A commutative monoid "M" is said to be conical if "x"+"y"=0 implies that "x"="y"=0, for any elements "x","y" of "M".
Basic examples.
A join-semilattice with zero is a refinement monoid if and only if it is distributive.
Any abelian group is a refinement monoid.
The positive cone "G+" of a partially ordered abelian group "G" is a refinement monoid if and only if "G" is an "interpolation group", the latter meaning that for any elements "a0", "a1", "b0", "b1" of "G" such that "ai ≤ bj" for all "i, j<2", there exists an element "x" of "G" such that "ai ≤ x ≤ bj" for all "i, j<2". This holds, for example, in case "G" is lattice-ordered.
The "isomorphism type" of a Boolean algebra "B" is the class of all Boolean algebras isomorphic to "B". (If we want this to be a set, restrict to Boolean algebras of set-theoretical rank below the one of "B".)
The class of isomorphism types of Boolean algebras, endowed with the addition defined by
formula_0 (for any Boolean algebras "X" and "Y", where formula_1 denotes the isomorphism type of "X"), is a conical refinement monoid.
Vaught measures on Boolean algebras.
For a Boolean algebra "A" and a commutative monoid "M", a map "μ" : "A" → "M" is a "measure", if "μ(a)=0" if and only if "a=0", and "μ(a ∨ b)=μ(a)+μ(b)" whenever "a" and "b" are disjoint (that is, "a ∧ b=0"), for any "a, b" in "A". We say in addition that "μ" is a "Vaught measure" (after Robert Lawson Vaught), or "V-measure", if for all "c" in "A" and all "x,y" in "M" such that "μ(c)=x+y", there are disjoint "a, b" in "A" such that "c=a ∨ b", "μ(a)=x", and "μ(b)=y".
An element "e" in a commutative monoid "M" is "measurable" (with respect to "M"), if there are a Boolean algebra "A" and a V-measure "μ" : "A" → "M" such that "μ(1)=e"---we say that "μ" "measures" "e". We say that "M" is "measurable", if any element of "M" is measurable (with respect to "M"). Of course, every measurable monoid is a conical refinement monoid.
Hans Dobbertin proved in 1983 that any conical refinement monoid with at most ℵ1 elements is measurable. He also proved that any element in an at most countable conical refinement monoid is measured by a unique (up to isomorphism) V-measure on a unique at most countable Boolean algebra.
He raised there the problem whether any conical refinement monoid is measurable. This was answered in the negative by Friedrich Wehrung in 1998. The counterexamples can have any cardinality greater than or equal to ℵ2.
Nonstable K-theory of von Neumann regular rings.
For a ring (with unit) "R", denote by FP("R") the class of finitely generated projective right "R"-modules. Equivalently, the objects of FP("R") are the direct summands of all modules of the form "Rn", with "n" a positive integer, viewed as a right module over itself. Denote by formula_1 the isomorphism type of an object "X" in FP("R"). Then the set "V(R)" of all isomorphism types of members of FP("R"), endowed with the addition defined by formula_2, is a conical commutative monoid. In addition, if "R" is von Neumann regular, then "V(R)" is a refinement monoid. It has the order-unit formula_3. We say that "V(R)" encodes the "nonstable K-theory of R".
For example, if "R" is a division ring, then the members of FP("R") are exactly the finite-dimensional right vector spaces over "R", and two vector spaces are isomorphic if and only if they have the same dimension. Hence "V(R)" is isomorphic to the monoid formula_4 of all natural numbers, endowed with its usual addition.
A slightly more complicated example can be obtained as follows. A "matricial algebra" over a field "F" is a finite product of rings of the form formula_5, the ring of all square matrices with "n" rows and entries in "F", for variable positive integers "n". A direct limit of matricial algebras over "F" is a "locally matricial algebra over F". Every locally matricial algebra is von Neumann regular. For any locally matricial algebra "R", "V(R)" is the positive cone of a so-called "dimension group". By definition, a dimension group is a partially ordered abelian group whose underlying order is directed, whose positive cone is a refinement monoid, and which is "unperforated", the latter meaning that "mx≥0" implies that "x≥0", for any element "x" of "G" and any positive integer "m". Any "simplicial" group, that is, a partially ordered abelian group of the form formula_6, is a dimension group. Effros, Handelman, and Shen proved in 1980 that dimension groups are exactly the direct limits of simplicial groups, where the transition maps are positive homomorphisms. This result had already been proved in 1976, in a slightly different form, by P. A. Grillet. Elliott proved in 1976 that the positive cone of any countable direct limit of simplicial groups is isomorphic to "V(R)", for some locally matricial ring "R". Finally, Goodearl and Handelman proved in 1986 that the positive cone of any dimension group with at most ℵ1 elements is isomorphic to "V(R)", for some locally matricial ring "R" (over any given field).
Wehrung proved in 1998 that there are dimension groups with order-unit whose positive cone cannot be represented as "V(R)", for a von Neumann regular ring "R". The given examples can have any cardinality greater than or equal to ℵ2. Whether any conical refinement monoid with at most ℵ1 (or even ℵ0) elements can be represented as "V(R)" for "R" von Neumann regular is an open problem.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "[X]+[Y]=[X\\times Y]"
},
{
"math_id": 1,
"text": "[X]"
},
{
"math_id": 2,
"text": "[X]+[Y]=[X\\oplus Y]"
},
{
"math_id": 3,
"text": "[R]"
},
{
"math_id": 4,
"text": "\\mathbb{Z}^+=\\{0,1,2,\\dots\\}"
},
{
"math_id": 5,
"text": "M_n(F)"
},
{
"math_id": 6,
"text": "\\mathbb{Z}^n"
}
] | https://en.wikipedia.org/wiki?curid=9751485 |
9752560 | Numerical continuation | Numerical continuation is a method of computing approximate solutions of a system of parameterized nonlinear equations,
formula_0
The parameter formula_1 is usually a real scalar and the "solution" formula_2 is an "n"-vector. For a fixed parameter value formula_1, formula_3 maps Euclidean n-space into itself.
Often the original mapping formula_4 is from a Banach space into itself, and the Euclidean n-space is a finite-dimensional Banach space.
A steady state, or fixed point, of a parameterized family of flows or maps is of this form, and by discretizing trajectories of a flow or iterating a map, periodic orbits and heteroclinic orbits can also be posed as a solution of formula_5.
Other forms.
In some nonlinear systems, parameters are explicit. In others they are implicit, and the system of nonlinear equations is written
formula_6
where formula_2 is an "n"-vector, and its image formula_7 is an "n-1" vector.
This formulation, without an explicit parameter space is not usually suitable for the formulations in the following sections, because they refer to parameterized autonomous nonlinear dynamical systems of the form:
formula_8
However, in an algebraic system there is no distinction between unknowns formula_2 and the parameters.
Periodic motions.
A periodic motion is a closed curve in phase space. That is, for some "period" formula_9,
formula_10
The textbook example of a periodic motion is the undamped pendulum.
If the phase space is periodic in one or more coordinates, say formula_11, with formula_12 a vector, then there is a second kind of periodic motion, defined by
formula_13
for every integer formula_14.
The first step in writing an implicit system for a periodic motion is to move the period formula_9 from the boundary conditions to the ODE:
formula_15
The second step is to add an additional equation, a "phase constraint", that can be thought of as determining the period. This is necessary because any solution of the above boundary value problem can be shifted in time by an arbitrary amount (time does not appear in the defining equations—the dynamical system is called autonomous).
There are several choices for the phase constraint. If formula_16 is a known periodic orbit at a parameter value formula_17 near formula_1, then, Poincaré used
formula_18
which states that formula_2 lies in a plane which is orthogonal to the tangent vector of the closed curve. This plane is called a "Poincaré section".
For a general problem a better phase constraint is an integral constraint introduced by Eusebius Doedel, which chooses the phase so that the distance between the known and unknown orbits is minimized:
formula_19
Definitions.
Solution component.
A solution component formula_20 of the nonlinear system formula_4 is a set of points formula_21 which satisfy formula_22 and are "connected" to the initial solution formula_23 by a path of solutions formula_24 for which formula_25
and formula_26.
Numerical continuation.
A numerical continuation is an algorithm which takes as input a system of parametrized nonlinear equations and an initial solution formula_23, formula_27, and produces a set of points on the solution component formula_20.
Regular point.
A regular point of formula_4 is a point formula_21 at which the Jacobian of formula_4 is full rank formula_28.
Near a regular point the solution component is an isolated curve passing through the regular point (the implicit function theorem). In the figure above the point formula_23 is a regular point.
Singular point.
A singular point of formula_4 is a point formula_21 at which the Jacobian of F is not full rank.
Near a singular point the solution component may not be an isolated curve passing through the regular point. The local structure is determined by higher derivatives of formula_4. In the figure above the point where the two blue curves cross is a singular point.
In general solution components formula_29 are branched curves. The branch points are singular points. Finding the solution curves leaving a
singular point is called branch switching, and uses techniques from bifurcation theory (singularity theory, catastrophe theory).
For finite-dimensional systems (as defined above) the Lyapunov-Schmidt decomposition may be used to produce two systems to which the Implicit Function Theorem applies. The Lyapunov-Schmidt decomposition uses the restriction of the system to the complement of the null space of the Jacobian and the range of the Jacobian.
If the columns of the matrix formula_30 are an orthonormal basis for the null space of
formula_31
and the columns of the matrix formula_32 are an orthonormal basis for the left null space of formula_33, then
the system formula_34
can be rewritten as
formula_35
where formula_36 is in the complement of the null space of formula_33 formula_37.
In the first equation, which is parametrized by the null space of the Jacobian (formula_38), the Jacobian with respect to formula_36 is non-singular. So the implicit function theorem states that there is a mapping formula_39 such that formula_40 and formula_41. The second equation (with formula_39 substituted) is called the bifurcation equation (though it may be a system of equations).
The bifurcation equation has a Taylor expansion which lacks the constant and linear terms. By scaling the equations and the null space of the Jacobian of the original system a system can be found with non-singular Jacobian. The constant term in the Taylor series of the scaled bifurcation equation is called the algebraic bifurcation equation, and the implicit function theorem applied to the bifurcation equations states that for each isolated solution of the algebraic bifurcation equation there is a branch of solutions of the original problem which passes through the singular point.
Another type of singular point is a turning point bifurcation, or saddle-node bifurcation, where the direction of the parameter formula_1
reverses as the curve is followed. The red curve in the figure above illustrates a turning point.
Particular algorithms.
Natural parameter continuation.
Most methods of solution of nonlinear systems of equations are iterative methods. For a particular parameter value formula_17 a mapping is repeatedly applied to an initial guess formula_42. If the method converges, and is consistent, then in the
limit the iteration approaches a solution of formula_43.
"Natural parameter continuation" is a very simple adaptation of the iterative solver to a parametrized problem. The solution at
one value of formula_1 is used as the initial guess for the solution at formula_44. With formula_45 sufficiently small the iteration applied to the initial
guess should converge.
One advantage of natural parameter continuation is that it uses the solution method for the problem as a black box. All that is required is that an initial solution be given (some solvers used to always start at a fixed initial guess). There has been a lot of work in the area of large scale continuation on applying more sophisticated algorithms to black box solvers (see e.g. LOCA).
However, natural parameter continuation fails at turning points, where the branch of solutions turns round. So for problems with turning points, a more sophisticated method such as pseudo-arclength continuation must be used (see below).
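A minimal Python sketch of natural parameter continuation with a Newton corrector (the function names and the cubic test problem are illustrative assumptions, not a reference implementation):

```python
import numpy as np

def newton(F, J, u, lam, tol=1e-10, max_iter=20):
    """Solve F(u, lam) = 0 for u at fixed lam by Newton's method."""
    for _ in range(max_iter):
        r = F(u, lam)
        if np.linalg.norm(r) < tol:
            return u, True
        u = u - np.linalg.solve(J(u, lam), r)
    return u, False

def natural_continuation(F, J, u0, lam0, dlam, n_steps):
    """Step the parameter and reuse the previous solution as the initial guess."""
    branch = [(lam0, u0.copy())]
    u, lam = u0.copy(), lam0
    for _ in range(n_steps):
        lam += dlam
        u, ok = newton(F, J, u, lam)
        if not ok:
            break   # the corrector failed to converge (typical near a turning point)
        branch.append((lam, u.copy()))
    return branch

# hypothetical test problem: u^3 - u + lam = 0, followed from lam = -1 on the branch u > 0
F = lambda u, lam: np.array([u[0]**3 - u[0] + lam])
J = lambda u, lam: np.array([[3.0 * u[0]**2 - 1.0]])
branch = natural_continuation(F, J, np.array([1.32]), -1.0, 0.05, 40)
```

For this cubic the fold lies at formula_1 ≈ 0.385; as the parameter approaches it, the Newton corrector either fails to converge or jumps to a distant branch, which is exactly the failure mode that pseudo-arclength continuation (below) avoids.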
Simplicial or piecewise linear continuation.
Simplicial Continuation, or Piecewise Linear Continuation (Allgower and Georg), is based on a small number of basic results about the piecewise linear approximation of formula_4 on a simplicial decomposition of its domain. The first concerns the linear interpolant of formula_4 on a simplex, which is uniquely determined by the values of formula_4 at the vertices; the second describes how the zero set of this interpolant crosses the faces of the simplex. Please see the article on piecewise linear continuation for details.
With these results the continuation algorithm is easy to state (although of course an efficient implementation requires a more sophisticated approach; see [B1]). An initial simplex is assumed to be given, from a reference simplicial decomposition of formula_46. The initial simplex must have at least one face which contains a zero of the unique linear interpolant on that face. The other faces of the simplex are then tested, and typically there will be one additional face with an interior zero. The initial simplex is then replaced by the simplex which lies across one of the faces containing a zero, and the process is repeated, with each subsequent simplex left through the face it was not entered by.
References: Allgower and Georg [B1] provides a crisp, clear description of the algorithm.
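A minimal Python sketch of this walk for the planar case (n = 2, where the zero set is a curve) on the standard triangulation of the plane; the helper names (trace_contour, zero_on_edge) and the test function are illustrative assumptions, and the sketch ignores the degenerate case of a zero at a vertex:

```python
import numpy as np

def zero_on_edge(p, q, fp, fq):
    """Zero of the linear interpolant on the edge (p, q), or None if the sign does not change."""
    if fp * fq >= 0.0:
        return None
    t = fp / (fp - fq)
    return (1.0 - t) * np.asarray(p, float) + t * np.asarray(q, float)

def trace_contour(f, start_tri, mesh=0.25, max_steps=200):
    """Trace the zero curve of f: R^2 -> R by piecewise linear continuation on the
    standard triangulation of the plane (each grid square split along one diagonal).
    Vertices are integer pairs scaled by `mesh`; pivoting across an edge replaces the
    opposite vertex v by a + b - v, which stays inside this triangulation."""
    val = lambda v: f(mesh * v[0], mesh * v[1])
    coords = lambda v: (mesh * v[0], mesh * v[1])
    tri = [tuple(v) for v in start_tri]
    entry_edge = None
    points = []
    for _ in range(max_steps):
        # the (generically two) faces whose linear interpolant contains a zero
        crossings = []
        for i in range(3):
            a, b = tri[i], tri[(i + 1) % 3]
            z = zero_on_edge(coords(a), coords(b), val(a), val(b))
            if z is not None:
                crossings.append((frozenset((a, b)), z))
        exit_face = next((c for c in crossings if c[0] != entry_edge), None)
        if exit_face is None:
            break                       # no zero in this simplex (or a degenerate crossing)
        edge, z = exit_face
        points.append(z)
        a, b = tuple(edge)
        v_opp = next(v for v in tri if v not in edge)
        v_new = (a[0] + b[0] - v_opp[0], a[1] + b[1] - v_opp[1])   # pivot across the exit face
        tri = [a, b, v_new]
        entry_edge = edge
    return np.array(points)

# hypothetical example: trace the circle x^2 + y^2 = 2 starting from a triangle it crosses
f = lambda x, y: x**2 + y**2 - 2.0
pts = trace_contour(f, start_tri=[(5, 1), (6, 1), (6, 2)], mesh=0.25)
```

Each step records the zero of the linear interpolant on the exit face, so the returned points form a piecewise linear approximation of the solution curve.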
Pseudo-arclength continuation.
This method is based on the observation that the "ideal" parameterization of a curve is arclength. Pseudo-arclength is an approximation of the arclength in the tangent space of the curve. The resulting modified natural continuation method makes a step in pseudo-arclength (rather than formula_1). The iterative solver is required to find a point at the given pseudo-arclength, which is done by appending an additional equation (the pseudo-arclength constraint) to the system; this adds a row to the n by n+1 Jacobian, producing a square Jacobian, and if the stepsize is sufficiently small the bordered Jacobian is full rank.
Pseudo-arclength continuation was independently developed by Edward Riks and Gerald Wempner for finite element applications in the late 1960s, and published in journals in the early 1970s by H.B. Keller. A detailed account of these early developments is
provided in the textbook by M. A. Crisfield: Nonlinear Finite Element Analysis of Solids and Structures, Vol 1: Basic Concepts, Wiley, 1991. Crisfield was one of the most active developers of this class of methods, which are by now standard procedures of commercial nonlinear finite element programs.
The algorithm is a predictor-corrector method. The prediction step finds the point (in R^(n+1)) which is a step formula_47 along the tangent vector at the current point. The corrector is usually Newton's method, or some variant, applied to the nonlinear system
formula_48
where formula_49 is the tangent vector at formula_50.
The Jacobian of this system is the bordered matrix
formula_51
At regular points, where the unmodified Jacobian is full rank, the tangent vector spans the null space of the top block of this new Jacobian (the original n by n+1 Jacobian). Appending the tangent vector as the last row can be seen as determining the coefficient of the null vector in the general solution of the Newton system (particular solution plus an arbitrary multiple of the null vector).
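A minimal Python sketch of this predictor-corrector, forming the bordered Jacobian explicitly (the function names and the circle test problem are illustrative assumptions, not a reference implementation):

```python
import numpy as np

def tangent(J, u, lam, t_prev=None):
    """Unit vector spanning the null space of the n x (n+1) Jacobian at (u, lam),
    oriented consistently with the previous tangent so the march does not reverse."""
    _, _, Vt = np.linalg.svd(J(u, lam))
    t = Vt[-1]
    if t_prev is not None and t @ t_prev < 0.0:
        t = -t
    return t

def pseudo_arclength_step(F, J, u0, lam0, t, ds, max_iter=15, tol=1e-10):
    """One predictor-corrector step: predict along the tangent, then apply Newton's
    method to F(u, lam) = 0 augmented with the pseudo-arclength constraint."""
    n = len(u0)
    x0 = np.concatenate([u0, [lam0]])
    x = x0 + ds * t                                   # predictor
    for _ in range(max_iter):
        u, lam = x[:n], x[n]
        r = np.concatenate([F(u, lam), [t @ (x - x0) - ds]])
        if np.linalg.norm(r) < tol:
            break
        A = np.vstack([J(u, lam), t])                 # bordered (n+1) x (n+1) Jacobian
        x = x - np.linalg.solve(A, r)
    return x[:n], x[n]

# hypothetical example: follow u^2 + lam^2 = 1 around its turning points at u = 0
F = lambda u, lam: np.array([u[0]**2 + lam**2 - 1.0])
J = lambda u, lam: np.array([[2.0 * u[0], 2.0 * lam]])
u, lam = np.array([1.0]), 0.0
t = tangent(J, u, lam)
branch = [(lam, u[0])]
for _ in range(70):
    u, lam = pseudo_arclength_step(F, J, u, lam, t, ds=0.1)
    t = tangent(J, u, lam, t_prev=t)
    branch.append((lam, u[0]))
```

Because the tangent row keeps the bordered matrix non-singular at folds, the march continues around the turning points of the circle at u = 0, where natural parameter continuation would fail.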
Gauss–Newton continuation.
This method is a variant of pseudo-arclength continuation. Instead of using the tangent at the initial point in the arclength constraint, the tangent at the current solution is used. This is equivalent to using the pseudo-inverse of the Jacobian in Newton's method, and allows longer steps to be made. [B17]
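A sketch of the corresponding corrector, under the assumption that the combined unknown x = (u, formula_1) is passed as a single vector of length n+1; the pseudo-inverse of the n by n+1 Jacobian gives the minimum-norm Newton step onto the solution curve:

```python
import numpy as np

def gauss_newton_corrector(F, J, x, max_iter=20, tol=1e-10):
    """Correct a predicted point x = (u, lam) back onto the solution curve using
    minimum-norm Newton steps built from the pseudo-inverse of the n x (n+1) Jacobian."""
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.pinv(J(x)) @ r
    return x
```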
Continuation in more than one parameter.
The parameter formula_1 in the algorithms described above is a real scalar. Most physical and design problems have more than one parameter. Higher-dimensional continuation refers to the case when formula_1 is a k-vector.
The same terminology applies. A regular solution is a solution at which the Jacobian is full rank formula_28. A singular solution is a solution at which the Jacobian is less than full rank.
A regular solution lies on a k-dimensional surface, which can be parameterized by a point in the tangent space (the null space of the Jacobian). This is again a straightforward application of the Implicit Function Theorem.
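At a regular solution the tangent space can be computed directly from the singular value decomposition of the Jacobian. A small Python sketch (the function name is an illustrative assumption):

```python
import numpy as np

def tangent_space(Jmat, tol=1e-8):
    """Orthonormal basis of the tangent space of the solution manifold:
    the null space of the n x (n+k) Jacobian, obtained from its SVD."""
    _, s, Vt = np.linalg.svd(Jmat)
    rank = int(np.sum(s > tol * s[0]))
    return Vt[rank:].T          # k columns at a regular solution
```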
Applications of numerical continuation techniques.
Numerical continuation techniques have found a great degree of acceptance in the study of chaotic dynamical systems and of various other systems which belong to the realm of catastrophe theory. The reason for such usage stems from the fact that various non-linear dynamical systems behave in a deterministic and predictable manner within a range of parameters which are included in the equations of the system. However, for certain parameter values the system starts behaving chaotically, and hence it becomes necessary to follow the parameter in order to determine when the system starts being non-predictable and what exactly (theoretically) makes the system become unstable.
Analysis of parameter continuation can give further insight into bifurcations of stable and critical points. The study of saddle-node, transcritical, pitchfork, period-doubling, Hopf and secondary Hopf (Neimark) bifurcations of stable solutions allows for a theoretical discussion of the circumstances and phenomena which arise at the critical points. Parameter continuation also gives a more dependable way to analyze a dynamical system, as it is more stable than interactive, time-stepped numerical solutions, especially in cases where the dynamical system is prone to blow-up at certain parameter values (or combinations of values for multiple parameters).
Continuation is particularly useful for locating steady solutions (attracting or repelling) of nonlinear differential equations for which time stepping, for example with the Crank-Nicolson algorithm, is extremely time consuming and can be unstable when the dependent variables grow nonlinearly. The study of turbulence is another field where numerical continuation techniques have been used to study the onset of turbulence in a system starting at low Reynolds numbers. Research using these techniques has also made it possible to find stable manifolds and bifurcations to invariant tori in the case of the restricted three-body problem in Newtonian gravity, and has given interesting and deep insights into the behaviour of systems such as the Lorenz equations.
Software.
See also the SIAM Activity Group on Dynamical Systems' list of software: http://www.dynamicalsystems.org/sw/sw/
Several software packages implement numerical continuation for systems formula_22, with formula_52, taking advantage of iterative methods, sparse formulations and specific hardware (e.g. GPUs).
Examples.
This problem, of finding the points which "F" maps into the origin, appears in computer graphics as the problem of drawing contour maps (n=2) or isosurfaces (n=3). The contour with value "h" is the set of all solution components of "F-h=0".
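Using the hypothetical trace_contour sketch (and its test function f) from the simplicial continuation section above, the contour at level h is traced by applying the tracer to the shifted function:

```python
# contour of f at level h, reusing the hypothetical trace_contour sketch above
h = 0.5
pts = trace_contour(lambda x, y: f(x, y) - h, start_tri=[(6, 1), (7, 1), (7, 2)], mesh=0.25)
```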
References.
Books.
[B1] ""Introduction to Numerical Continuation Methods", Eugene L. Allgower and Kurt Georg, SIAM Classics in Applied Mathematics 45. 2003.
[B2] "Numerical Methods for Bifurcations of Dynamical Equilibria", Willy J. F. Govaerts, SIAM 2000.
[B3] "Lyapunov-Schmidt Methods in Nonlinear Analysis and Applications", Nikolay Sidorov, Boris Loginov, Aleksandr Sinitsyn, and Michail Falaleev, Kluwer Academic Publishers, 2002.
[B4] "Methods of Bifurcation Theory", Shui-Nee Chow and Jack K. Hale, Springer-Verlag 1982.
[B5] "Elements of Applied Bifurcation Theory"", Yuri A. Kunetsov, Springer-Verlag Applied Mathematical Sciences 112, 1995.
[B6] "Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields", John Guckenheimer and Philip Holmes, Springer-Verlag Applied Mathematical Sciences 42, 1983.
[B7] ""Elementary Stability and Bifurcation Theory", Gerard Iooss and Daniel D. Joseph, Springer-Verlag Undergraduate Texts in Mathematics, 1980.
[B8] "Singularity Theory and an Introduction to Catastrophe Theory", Yung-Chen Lu, Springer-Verlag, 1976.
[B9] "Global Bifurcations and Chaos, Analytic Methods", S. Wiggins, Springer-Verlag Applied Mathematical Sciences 73, 1988.
[B10] "Singularities and Groups in Bifurcation Theory, volume I", Martin Golubitsky and David G. Schaeffer, Springer-Verlag Applied Mathematical Sciences 51, 1985.
[B11] "Singularities and Groups in Bifurcation Theory, volume II", Martin Golubitsky, Ian Stewart and David G. Schaeffer, Springer-Verlag Applied Mathematical Sciences 69, 1988.
[B12] "Solving Polynomial Systems Using Continuation for Engineering and Scientific Problems", Alexander Morgan, Prentice-Hall, Englewood Cliffs, N.J. 1987.
[B13] "Pathways to Solutions, Fixed Points and Equilibria", C. B. Garcia and W. I. Zangwill, Prentice-Hall, 1981.
[B14] "The Implicit Function Theorem: History, Theory and Applications", Steven G. Krantz and Harold R. Parks, Birkhauser, 2002.
[B15] "Nonlinear Functional Analysis", J. T. Schwartz, Gordon and Breach Science Publishers, Notes on Mathematics and its Applications, 1969.
[B16] "Topics in Nonlinear Functional Analysis", Louis Nirenberg (notes by Ralph A. Artino), AMS Courant Lecture Notes in Mathematics 6, 1974.
[B17] "Newton Methods for Nonlinear Problems -- Affine Invariance and Adaptive Algorithms", P. Deuflhard,
Series Computational Mathematics 35, Springer, 2006.
Journal articles.
[A1] "An Algorithm for Piecewise Linear Approximation of Implicitly Defined Two-Dimensional Surfaces", Eugene L. Allgower and Stefan Gnutzmann, SIAM Journal on Numerical Analysis, Volume 24, Number 2, 452—469, 1987.
[A2] "Simplicial and Continuation Methods for Approximations, Fixed Points and Solutions to Systems of Equations", E. L. Allgower and K. Georg, SIAM Review, Volume 22, 28—85, 1980.
[A3] "An Algorithm for Piecewise-Linear Approximation of an Implicitly Defined Manifold", Eugene L. Allgower and Phillip H. Schmidt, SIAM Journal on Numerical Analysis, Volume 22, Number 2, 322—346, April 1985.
[A4] "Contour Tracing by Piecewise Linear Approximations", David P. Dobkin, Silvio V. F. Levy, William P. Thurston and Allan R. Wilks, ACM Transactions on Graphics, 9(4) 389-423, 1990.
[A5] "Numerical Solution of Bifurcation and Nonlinear Eigenvalue Problems"", H. B. Keller, in "Applications of Bifurcation Theory", P. Rabinowitz ed., Academic Press, 1977.
[A6] ""A Locally Parameterized Continuation Process", W.C. Rheinboldt and J.V. Burkardt, ACM Transactions on Mathematical Software, Volume 9, 236—246, 1983.
[A7] "Nonlinear Numerics" E. Doedel, International Journal of Bifurcation and Chaos, 7(9):2127-2143, 1997.
[A8] "Nonlinear Computation", R. Seydel, International Journal of Bifurcation and Chaos, 7(9):2105-2126, 1997.
[A9] "On a Moving Frame Algorithm and the Triangulation of Equilibrium Manifolds"", W.C. Rheinboldt, In T. Kuper, R. Seydel, and H. Troger eds. "ISNM79: Bifurcation: Analysis, Algorithms, Applications", pages 256-267. Birkhauser, 1987.
[A10] ""On the Computation of Multi-Dimensional Solution Manifolds of Parameterized Equations", W.C. Rheinboldt, Numerishe Mathematik, 53, 1988, pages 165-181.
[A11] "On the Simplicial Approximation of Implicitly Defined Two-Dimensional Manifolds", M. L. Brodzik and W.C. Rheinboldt, Computers and Mathematics with Applications, 28(9): 9-21, 1994.
[A12] "The Computation of Simplicial Approximations of Implicitly Defined p-Manifolds", M. L. Brodzik, Computers and Mathematics with Applications, 36(6):93-113, 1998.
[A13] "New Algorithm for Two-Dimensional Numerical Continuation", R. Melville and D. S. Mackey, Computers and Mathematics with Applications, 30(1):31-46, 1995.
[A14] "Multiple Parameter Continuation: Computing Implicitly Defined k-manifolds", M. E. Henderson, IJBC 12[3]:451-76, 2003.
[A15] "MANPACK: a set of algorithms for computations on implicitly defined manifolds", W. C. Rheinboldt, Comput. Math. Applic. 27 pages 15–9, 1996.
[A16] "CANDYS/QA - A Software System For Qualitative Analysis Of Nonlinear Dynamical Systems"", Feudel, U. and W. Jansen, Int. J. Bifurcation and Chaos, vol. 2 no. 4, pp. 773–794, World Scientific, 1992. | [
{
"math_id": 0,
"text": "F(\\mathbf u,\\lambda) = 0."
},
{
"math_id": 1,
"text": "\\lambda"
},
{
"math_id": 2,
"text": "\\mathbf u"
},
{
"math_id": 3,
"text": "F(\\cdot,\\lambda)"
},
{
"math_id": 4,
"text": "F"
},
{
"math_id": 5,
"text": "F=0"
},
{
"math_id": 6,
"text": "F(\\mathbf u) = 0"
},
{
"math_id": 7,
"text": "F(\\mathbf u)"
},
{
"math_id": 8,
"text": "\\mathbf u' = F(\\mathbf u,\\lambda)."
},
{
"math_id": 9,
"text": "T"
},
{
"math_id": 10,
"text": "\\mathbf u' = F(\\mathbf u,\\lambda),\\, \\mathbf u(0) = \\mathbf u(T)."
},
{
"math_id": 11,
"text": "\\mathbf u(t) = \\mathbf u(t + \\Omega)"
},
{
"math_id": 12,
"text": "\\Omega"
},
{
"math_id": 13,
"text": "\\mathbf u' = \\mathbf F(\\mathbf u,\\lambda),\\, \\mathbf u(0) = \\mathbf u(T + N.\\Omega)"
},
{
"math_id": 14,
"text": "N"
},
{
"math_id": 15,
"text": "\\mathbf u' = T\\mathbf F(\\mathbf u,\\lambda),\\, \\mathbf u(0)=\\mathbf u(1 + N.\\Omega)."
},
{
"math_id": 16,
"text": "\\mathbf u_0(t)"
},
{
"math_id": 17,
"text": "\\lambda_0"
},
{
"math_id": 18,
"text": "\\langle \\mathbf u(0) - \\mathbf u_0(0),\\mathbf F(\\mathbf u_0(0),\\lambda_0)\\rangle = 0."
},
{
"math_id": 19,
"text": "\\int_0^1 \\langle\\mathbf u(t) - \\mathbf u_0(t),\\mathbf F(\\mathbf u_0(t),\\lambda_0)\\rangle dt = 0."
},
{
"math_id": 20,
"text": "\\Gamma(\\mathbf u_0,\\lambda_0)"
},
{
"math_id": 21,
"text": "(\\mathbf u,\\lambda)"
},
{
"math_id": 22,
"text": "F(\\mathbf u,\\lambda)=0"
},
{
"math_id": 23,
"text": "(\\mathbf u_0,\\lambda_0)"
},
{
"math_id": 24,
"text": "(\\mathbf u(s),\\lambda(s))"
},
{
"math_id": 25,
"text": "(\\mathbf u(0),\\lambda(0))=(\\mathbf u_0,\\lambda_0),\\, (\\mathbf u(1),\\lambda(1)) = (\\mathbf u,\\lambda)"
},
{
"math_id": 26,
"text": "F(\\mathbf u(s),\\lambda(s))=0"
},
{
"math_id": 27,
"text": "F(\\mathbf u_0,\\lambda_0)=0"
},
{
"math_id": 28,
"text": "(n)"
},
{
"math_id": 29,
"text": "\\Gamma"
},
{
"math_id": 30,
"text": "\\Phi"
},
{
"math_id": 31,
"text": "J=\\left[\n\\begin{array}{cc}\nF_x & F_{\\lambda}\\\\\n\\end{array}\n\\right]"
},
{
"math_id": 32,
"text": "\\Psi"
},
{
"math_id": 33,
"text": "J"
},
{
"math_id": 34,
"text": "F(x,\\lambda)=0"
},
{
"math_id": 35,
"text": "\n\\left[\n\\begin{array}{l}\n(I-\\Psi\\Psi^T)F(x+\\Phi\\xi + \\eta)\\\\\n\\Psi^T F(x+\\Phi\\xi + \\eta)\\\\\n\\end{array}\n\\right]=0,\n"
},
{
"math_id": 36,
"text": "\\eta"
},
{
"math_id": 37,
"text": "(\\Phi^T\\,\\eta=0)"
},
{
"math_id": 38,
"text": "\\xi"
},
{
"math_id": 39,
"text": "\\eta(\\xi)"
},
{
"math_id": 40,
"text": "\\eta(0)=0"
},
{
"math_id": 41,
"text": "(I-\\Psi\\Psi^T)F(x+\\Phi\\xi+\\eta(\\xi))=0)"
},
{
"math_id": 42,
"text": "\\mathbf u_0"
},
{
"math_id": 43,
"text": "F(\\mathbf u,\\lambda_0)=0"
},
{
"math_id": 44,
"text": "\\lambda+\\Delta \\lambda"
},
{
"math_id": 45,
"text": "\\Delta \\lambda"
},
{
"math_id": 46,
"text": "\\mathbb{R}^n"
},
{
"math_id": 47,
"text": "\\Delta s"
},
{
"math_id": 48,
"text": "\n\\begin{array}{l}\nF(u,\\lambda)=0\\\\\n\\dot u^*_0(u-u_0)+\\dot \\lambda_0 (\\lambda-\\lambda_0) = \\Delta s\\\\\n\\end{array}\n"
},
{
"math_id": 49,
"text": "(\\dot u_0,\\dot\\lambda_0)"
},
{
"math_id": 50,
"text": "(u_0,\\lambda_0)"
},
{
"math_id": 51,
"text": "\\left[\n\\begin{array}{cc}\nF_u & F_{\\lambda}\\\\\n\\dot u^* & \\dot \\lambda\\\\\n\\end{array}\n\\right]"
},
{
"math_id": 52,
"text": "\\lambda \\isin \\mathbb{R}"
}
] | https://en.wikipedia.org/wiki?curid=9752560 |
975300 | Eccentricity vector | In celestial mechanics, the eccentricity vector of a Kepler orbit is the dimensionless vector with direction pointing from apoapsis to periapsis and with magnitude equal to the orbit's scalar eccentricity. For Kepler orbits the eccentricity vector is a constant of motion. Its main use is in the analysis of almost circular orbits, as perturbing (non-Keplerian) forces on an actual orbit will cause the osculating eccentricity vector to change continuously as opposed to the eccentricity and argument of periapsis parameters for which eccentricity zero (circular orbit) corresponds to a singularity.
Calculation.
The eccentricity vector formula_0 is:
formula_1
which follows immediately from the vector identity:
formula_2
where:
formula_3 is the position vector,
formula_4 is the velocity vector,
formula_5 is the specific orbital angular momentum vector, equal to formula_6,
formula_7 is the standard gravitational parameter.
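A small Python sketch of this calculation (the function name and the sample state vector are illustrative assumptions, not from any standard library):

```python
import numpy as np

def eccentricity_vector(r, v, mu):
    """Eccentricity vector from position r, velocity v and gravitational parameter mu."""
    r = np.asarray(r, dtype=float)
    v = np.asarray(v, dtype=float)
    h = np.cross(r, v)                          # specific angular momentum r x v
    return np.cross(v, h) / mu - r / np.linalg.norm(r)

# example: a state vector in km and km/s with Earth's mu in km^3/s^2
e_vec = eccentricity_vector([7000.0, 0.0, 0.0], [0.0, 7.8, 0.0], 398600.4418)
e = np.linalg.norm(e_vec)                       # scalar eccentricity of the osculating orbit
```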
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathbf{e} \\,"
},
{
"math_id": 1,
"text": " \\mathbf{e} = {\\mathbf{v}\\times\\mathbf{h}\\over{\\mu}} - {\\mathbf{r}\\over{\\left|\\mathbf{r}\\right|}} = \n\\left ( {\\mathbf{\\left |v \\right |}^2 \\over {\\mu} }- {1 \\over{\\left|\\mathbf{r}\\right|}} \\right ) \\mathbf{r} - {\\mathbf{r} \\cdot \\mathbf{v} \\over{\\mu}} \\mathbf{v}\n"
},
{
"math_id": 2,
"text": " \\mathbf{v}\\times \\left ( \\mathbf{r}\\times \\mathbf{v} \\right ) = \\left ( \\mathbf{v} \\cdot \\mathbf{v} \\right ) \\mathbf{r} - \\left ( \\mathbf{r} \\cdot \\mathbf{v} \\right ) \\mathbf{v}"
},
{
"math_id": 3,
"text": "\\mathbf{r}\\,\\!"
},
{
"math_id": 4,
"text": "\\mathbf{v}\\,\\!"
},
{
"math_id": 5,
"text": "\\mathbf{h}\\,\\!"
},
{
"math_id": 6,
"text": "\\mathbf{r}\\times\\mathbf{v}"
},
{
"math_id": 7,
"text": "\\mu\\,\\!"
}
] | https://en.wikipedia.org/wiki?curid=975300 |