Ammonium perchlorate composite propellant
Solid-rocket propellant
Ammonium perchlorate composite propellant (APCP) is a solid rocket propellant. It differs from many traditional solid rocket propellants such as black powder or zinc-sulfur, not only in chemical composition and overall performance but also by being cast into shape, as opposed to powder pressing as with black powder. This provides manufacturing regularity and repeatability, which are necessary requirements for use in the aerospace industry.
Uses.
Ammonium perchlorate composite propellant is typically used for aerospace rocket propulsion where simplicity and reliability are desired and moderate specific impulses (depending on the composition and operating pressure) are adequate. Because of these performance attributes, APCP has been used in the Space Shuttle Solid Rocket Boosters, aircraft ejection seats, and specialty space exploration applications such as NASA's Mars Exploration Rover descent stage retrorockets. In addition, the high-power rocketry community regularly uses APCP in the form of commercially available propellant "reloads", as well as single-use motors. Experienced experimental and amateur rocketeers also often work with APCP, processing the APCP themselves.
Composition.
Overview.
Ammonium perchlorate composite propellant is a composite propellant, meaning that it has both fuel and oxidizer combined into a homogeneous mixture, in this case with a rubbery binder as part of the fuel. The propellant is most often composed of ammonium perchlorate (AP), an elastomer binder such as hydroxyl-terminated polybutadiene (HTPB) or polybutadiene acrylic acid acrylonitrile prepolymer (PBAN), powdered metal (typically aluminium), and various burn rate catalysts. In addition, curing additives induce elastomer binder cross-linking to solidify the propellant before use. The perchlorate serves as the oxidizer, while the binder and aluminium serve as the fuel. Burn rate catalysts determine how quickly the mixture burns. The resulting cured propellant is fairly elastic (rubbery), which also helps limit fracturing during accumulated damage (such as shipping, installing, cutting) and high acceleration applications such as hobby or military rocketry. This includes the Space Shuttle missions, in which APCP was used for the two SRBs.
The composition of APCP can vary significantly depending on the application, intended burn characteristics, and constraints such as nozzle thermal limitations or specific impulse (Isp). Rough mass proportions in high-performance configurations tend to be about 70/15/15 AP/HTPB/Al, though fairly high-performance "low-smoke" formulations can be roughly 80/18/2 AP/HTPB/Al. While metal fuel is not required in APCP, most formulations include at least a few percent: it acts as a combustion stabilizer and propellant opacifier (limiting excessive infrared preheating of the propellant) and raises the temperature of the combustion gases (increasing Isp).
Special considerations.
Though increasing the ratio of metal fuel to oxidizer up to the stoichiometric point increases the combustion temperature, the growing molar fraction of metal oxides, particularly aluminium oxide (Al2O3), precipitating from the gaseous solution creates globules of solids or liquids that slow the flow velocity as the mean molecular mass of the flow increases. In addition, the chemical composition of the gases changes, varying the effective heat capacity of the gas. Because of these phenomena, there exists an optimal non-stoichiometric composition for maximizing Isp, at roughly 16% metal fuel by mass, assuming the combustion reaction goes to completion inside the combustion chamber.
The combustion time of the aluminium particles in the hot combustion gas varies depending on aluminium particle size and shape. In small APCP motors with high aluminium content, the residence time of the combustion gases does not allow full combustion of the aluminium, and thus a substantial fraction of the aluminium is burned outside the combustion chamber, leading to decreased performance. This effect is often mitigated by reducing aluminium particle size, inducing turbulence (and therefore a longer characteristic path length and residence time), and/or by reducing the aluminium content to ensure a combustion environment with a higher net oxidizing potential, ensuring more complete aluminium combustion. Aluminium combustion inside the motor is the rate-limiting pathway, since the liquid-aluminium droplets (still liquid even at temperatures near 3000 K) confine the reaction to a heterogeneous globule interface, making the surface area to volume ratio an important factor in determining the combustion residence time and the required combustion chamber size/length.
Particle size.
The propellant particle size distribution has a profound impact on APCP rocket motor performance. Smaller AP and Al particles lead to higher combustion efficiency but also to an increased linear burn rate. The burn rate is heavily dependent on mean AP particle size, as the AP absorbs heat to decompose into a gas before it can oxidize the fuel components. This process may be a rate-limiting step in the overall combustion rate of APCP. The phenomenon can be explained by considering the heat-flux-to-mass ratio: as the particle radius increases, the volume (and, therefore, mass and heat capacity) increases as the cube of the radius, whereas the surface area, which is roughly proportional to the heat flux into the particle, increases only as the square of the radius. Therefore, a particle's rate of temperature rise is maximized when the particle size is minimized.
Common APCP formulations call for 30–400 μm AP particles (often spherical), as well as 2–50 μm Al particles (often spherical). Because of the size discrepancy between the AP and Al, Al will often take an interstitial position in a pseudo-lattice of AP particles.
Characteristics.
Geometric.
APCP deflagrates from the surface of exposed propellant in the combustion chamber. In this fashion, the geometry of the propellant inside the rocket motor plays an important role in the overall motor performance. As the surface of the propellant burns, the shape evolves (a subject of study in internal ballistics), most often changing the propellant surface area exposed to the combustion gases. The mass flux (kg/s), and therefore the pressure, of combustion gases generated is a function of the instantaneous burning surface area A_s (m2), propellant density ρ (kg/m3), and linear burn rate b_r (m/s):
\dot{m} = \rho A_\text{s} b_r
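As a quick illustration, here is a minimal Python sketch of this relation; the density, burning surface area, and burn rate are assumed round values chosen for the example, not data for any particular motor.

```python
# Illustrative sketch: mass flux of combustion gases from a burning APCP grain.
# All numbers are assumed, round figures for illustration only.

rho = 1750.0   # propellant density, kg/m^3 (typical APCP is roughly 1.6-1.8 g/cm^3)
A_s = 0.05     # instantaneous burning surface area, m^2 (assumed)
b_r = 0.008    # linear burn rate, m/s (assumed, i.e. 8 mm/s)

m_dot = rho * A_s * b_r                  # mass flux of combustion gases, kg/s
print(f"mass flux = {m_dot:.3f} kg/s")   # 0.700 kg/s with these values
```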
Several geometric configurations are often used depending on the application and desired thrust curve:
Burn rate.
While the surface area can be easily tailored by careful geometric design of the propellant, the burn rate is dependent on several subtle factors:
In summary, however, most formulations have a burn rate between 1–3 mm/s at STP and 6–12 mm/s at 68 atm. The burn characteristics (such as linear burn rate) are often determined prior to rocket motor firing using a strand burner test. This test allows the APCP manufacturer to characterize the burn rate as a function of pressure. Empirically, APCP adheres fairly well to the following power-function model:
b_r = a p^n
Typically for APCP, "n" is 0.3–0.5, indicating that APCP is sub-critically pressure sensitive: if the surface area were held constant during a burn, the combustion would not run away toward a (theoretically) infinite rate, because the chamber pressure would settle at an internal equilibrium. This is not to say that APCP cannot cause an explosion, only that it will not detonate; any explosion would be caused by the pressure surpassing the burst pressure of the container (the rocket motor).
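The pressure dependence described by this power law can be sketched in Python; the coefficient a and exponent n below are assumed example values chosen to fall within the ranges quoted above, not measurements of any real formulation.

```python
# Sketch of the empirical burn-rate power law b_r = a * p**n.
# 'a' and 'n' are assumed example coefficients, not data for a real propellant.

def burn_rate(p_atm, a=1.2, n=0.4):
    """Linear burn rate in mm/s for a chamber pressure p_atm in atmospheres."""
    return a * p_atm ** n

for p in (1, 10, 34, 68):
    print(f"{p:3d} atm -> {burn_rate(p):4.1f} mm/s")
# Because n < 1, doubling the pressure less than doubles the burn rate
# (sub-critical pressure sensitivity), so the chamber pressure self-equilibrates.
```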
Model/high-power rocketry applications.
Commercial APCP rocket engines usually come in the form of reloadable motor systems (RMS) and fully assembled single-use rocket motors. For RMS, the APCP "grains" (cylinders of propellant) are loaded into the reusable motor casing along with a sequence of insulator disks and o-rings and a (graphite or glass-filled phenolic resin) nozzle. The motor casing and closures are typically bought separately from the motor manufacturer and are often precision-machined from aluminium. The assembled RMS contains both reusable (typically metal) and disposable components.
The major APCP suppliers for hobby use are:
To achieve different visual effects and flight characteristics, hobby APCP suppliers offer a variety of different characteristic propellant types. These can range from fast-burning with little smoke and blue flame to classic white smoke and white flame. In addition, colored formulations are available to display reds, greens, blues, and even black smoke.
In the medium- and high-power rocket applications, APCP has largely replaced black powder as a rocket propellant. Compacted black powder slugs become prone to fracture in larger applications, which can result in catastrophic failure in rocket vehicles. APCP's elastic material properties make it less vulnerable to fracture from accidental shock or high-acceleration flights. Due to these attributes, widespread adoption of APCP and related propellant types in the hobby has significantly enhanced the safety of rocketry.
Environmental and other concerns.
The exhaust from APCP solid rocket motors contains mostly water, carbon dioxide, hydrogen chloride, and a metal oxide (typically aluminium oxide). The hydrogen chloride can easily dissolve in water and create corrosive hydrochloric acid. The environmental fate of hydrogen chloride is not well documented. The hydrochloric acid component of APCP exhaust causes atmospheric moisture to condense in the plume, which enhances the visible signature of the contrail. This visible signature, among other reasons, has led to research into cleaner-burning propellants with no visible signature. Minimum-signature propellants contain primarily nitrogen-rich organic molecules (e.g., ammonium dinitramide) and, depending on their oxidizer source, can burn hotter than APCP composite propellants.
Regulation and legality.
In the United States, APCP for hobby use is regulated indirectly by two non-government agencies: the National Association of Rocketry (NAR), and the Tripoli Rocketry Association (TRA). Both agencies set forth rules regarding the impulse classification of rocket motors and the level of certification required by rocketeers in order to purchase certain impulse (size) motors. The NAR and TRA require motor manufacturers to certify their motors for distribution to vendors and ultimately hobbyists. The vendor is charged with the responsibility (by the NAR and TRA) to check hobbyists for high-power rocket certification before a sale can be made. The amount of APCP that can be purchased (in the form of a rocket motor reload) correlates to the impulse classification, and therefore the quantity of APCP purchasable by a hobbyist (in any single reload kit) is regulated by the NAR and TRA.
The overarching legality concerning the implementation of APCP in rocket motors is outlined in NFPA 1125. Use of APCP outside hobby use is regulated by state and municipal fire codes. On March 16, 2009, it was ruled that APCP is not an explosive and that manufacture and use of APCP no longer requires a license or permit from the ATF.
https://en.wikipedia.org/wiki?curid=9946905
Zero-fuel weight
Total weight of an aircraft and its contents, minus usable fuel
The zero-fuel weight (ZFW) of an aircraft is the total weight of the airplane and all its contents, minus the total weight of the usable fuel on board. Unusable fuel is included in ZFW.
The takeoff weight can be broken down into its component contributions:
OEW + PL + FOB = TOW
where OEW is the operating empty weight (a characteristic of the aircraft), PL is the payload actually carried, FOB is the fuel actually on board, and TOW is the actual takeoff weight.
ZFW is also defined as OEW + PL. The previous formula becomes:
ZFW + FOB = TOW.
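A minimal worked example (Python, with assumed round weights) shows how the two formulations agree:

```python
# Sketch of the weight build-up; all figures are assumed round example values in kg.

OEW = 42_000   # operating empty weight (assumed)
PL  = 15_000   # payload (assumed)
FOB = 18_000   # usable fuel on board (assumed)

ZFW = OEW + PL        # zero-fuel weight
TOW = ZFW + FOB       # takeoff weight, identical to OEW + PL + FOB
print(ZFW, TOW)       # 57000 75000
```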
For many types of airplane, the airworthiness limitations include a maximum zero-fuel weight. This limitation is specified to ensure bending moments on the wing roots are not excessive during flight. When the aircraft is loaded before flight, the zero-fuel weight must not exceed the maximum zero-fuel weight.
Maximum zero fuel weight.
The maximum zero fuel weight (MZFW) is the maximum weight allowed before usable fuel and other specified usable agents (engine injection fluid, and other consumable propulsion agents) are loaded in defined sections of the aircraft as limited by strength and airworthiness requirements. It may include usable fuel in specified tanks when carried in lieu of payload. The addition of usable and consumable items to the zero fuel weight must be in accordance with the applicable government regulations so that airplane structure and airworthiness requirements are not exceeded.
If the limitations applicable to a transport category airplane type include a maximum zero-fuel weight it must be specified in the Airplane Flight Manual and the type certificate data sheet for the airplane type.
Maximum zero fuel weight in aircraft operations.
When an aircraft is being loaded with crew, passengers, baggage and freight it is most important to ensure that the ZFW does not exceed the MZFW. When an aircraft is being loaded with fuel it is most important to ensure that the takeoff weight will not exceed the maximum permissible takeoff weight.
MZFW: the maximum permissible weight of an aircraft with no disposable fuel or oil.
ZFW + FOB = TOW
where FOB is the fuel on board.
For any aircraft with a defined MZFW, the maximum payload (PL_max) can be calculated as the MZFW minus the OEW (operational empty weight):
PL_max = MZFW − OEW
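A small sketch of the corresponding loading check, again with assumed figures:

```python
# Hypothetical loading check against MZFW; the weights are assumed example values in kg.

MZFW = 62_000   # maximum zero-fuel weight (assumed certification limit)
OEW  = 42_000   # operating empty weight (assumed)

PL_max = MZFW - OEW               # largest payload that keeps ZFW <= MZFW
print(PL_max)                     # 20000

payload = 15_000
assert OEW + payload <= MZFW      # this loading respects the MZFW limit
```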
Maximum zero fuel weight in type certification.
The maximum zero fuel weight is an important parameter in demonstrating compliance with gust design criteria for transport category airplanes.
Wing bending relief.
In fixed-wing aircraft, fuel is usually carried in the wings. While the aircraft is in the air, weight in the wings does not contribute as significantly to the bending moment in the wing as does weight in the fuselage. This is because the lift on the wings and the weight of the fuselage bend the wing tips upwards and the wing roots downwards; but the weight of the wings, including the weight of fuel in the wings, bend the wing tips downwards, providing relief to the bending effect on the wing.
Considering the bending moment at the wing root, the capacity for extra weight in the wings is greater than the capacity for extra weight in the fuselage. Designers of airplanes can optimise the maximum takeoff weight and prevent overloading in the fuselage by specifying a MZFW. This is usually done for large airplanes with cantilever wings. (Airplanes with strut-braced wings achieve substantial wing bending relief by having the load of the fuselage applied by the strut mid-way along the wing semi-span. Extra wing bending relief cannot be achieved by particular placement of the fuel. There is usually no MZFW specified for an airplane with a strut-braced wing.)
Most small airplanes do not have an MZFW specified among their limitations. For these airplanes with cantilever wings, the loading case that must be considered when determining the maximum takeoff weight is the airplane with zero fuel and all disposable load in the fuselage. With zero fuel in the wing the only wing bending relief is due to the weight of the wing.
https://en.wikipedia.org/wiki?curid=9947735
Exponentiation
Arithmetic operation
In mathematics, exponentiation is an operation involving two numbers: the "base" and the "exponent" or "power". Exponentiation is written as "b""n", where b is the "base" and n is the "power"; this is pronounced as "b (raised) to the (power of) n". When n is a positive integer, exponentiation corresponds to repeated multiplication of the base: that is, "b""n" is the product of multiplying n bases:
formula_0
The exponent is usually shown as a superscript to the right of the base. In that case, "b""n" is called ""b" raised to the "n"th power", ""b" (raised) to the power of "n"", "the "n"th power of "b", "b" to the "n"th power", or most briefly as ""b" to the "n"(th)".
Starting from the basic fact stated above that, for any positive integer formula_1, formula_2 is formula_1 occurrences of formula_3 all multiplied by each other, several other properties of exponentiation directly follow. In particular:
formula_4
In other words, when multiplying a base raised to one exponent by the same base raised to another exponent, the exponents add. From this basic rule that exponents add, we can derive that formula_5 must be equal to 1 for any formula_6, as follows. For any formula_1, formula_7. Dividing both sides by formula_2 gives formula_8.
The fact that formula_9 can similarly be derived from the same rule. For example, formula_10. Taking the cube root of both sides gives formula_9.
The rule that multiplying makes exponents add can also be used to derive the properties of negative integer exponents. Consider the question of what formula_11 should mean. In order to respect the "exponents add" rule, it must be the case that formula_12. Dividing both sides by formula_13 gives formula_14, which can be more simply written as formula_15, using the result from above that formula_9. By a similar argument, formula_16.
The properties of fractional exponents also follow from the same rule. For example, suppose we consider formula_17 and ask if there is some suitable exponent, which we may call formula_18, such that formula_19. From the definition of the square root, we have that formula_20. Therefore, the exponent formula_18 must be such that formula_21. Using the fact that multiplying makes exponents add gives formula_22. The formula_23 on the right-hand side can also be written as formula_24, giving formula_25. Equating the exponents on both sides, we have formula_26. Therefore, formula_27, so formula_28.
The definition of exponentiation can be extended to allow any real or complex exponent. Exponentiation by integer exponents can also be defined for a wide variety of algebraic structures, including matrices.
Exponentiation is used extensively in many fields, including economics, biology, chemistry, physics, and computer science, with applications such as compound interest, population growth, chemical reaction kinetics, wave behavior, and public-key cryptography.
Etymology.
The term "exponent" originates from the Latin "exponentem", the present participle of "exponere", meaning "to put forth". The term "power" () is a mistranslation of the ancient Greek δύναμις ("dúnamis", here: "amplification") used by the Greek mathematician Euclid for the square of a line, following Hippocrates of Chios.
History.
Antiquity.
The Sand Reckoner.
In "The Sand Reckoner", Archimedes proved the law of exponents, 10"a" · 10"b" = 10"a"+"b", necessary to manipulate powers of 10. He then used powers of 10 to estimate the number of grains of sand that can be contained in the universe.
Islamic Golden Age.
"Māl" and "kaʿbah" ("square" and "cube").
In the 9th century, the Persian mathematician Al-Khwarizmi used the terms مَال ("māl", "possessions", "property") for a square—the Muslims, "like most mathematicians of those and earlier times, thought of a squared number as a depiction of an area, especially of land, hence property"—and كَعْبَة ("Kaʿbah", "cube") for a cube, which later Islamic mathematicians represented in mathematical notation as the letters "mīm" (m) and "kāf" (k), respectively, by the 15th century, as seen in the work of Abu'l-Hasan ibn Ali al-Qalasadi.
15th–18th century.
Introducing exponents.
Nicolas Chuquet used a form of exponential notation in the 15th century, for example 12² to represent 12x². This was later used by Henricus Grammateus and Michael Stifel in the 16th century. In the late 16th century, Jost Bürgi would use Roman numerals for exponents in a way similar to that of Chuquet, for example iii4 (with the Roman numeral written as a superscript over the coefficient) for 4x³.
"Exponent"; "square" and "cube".
The word "exponent" was coined in 1544 by Michael Stifel. In the 16th century, Robert Recorde used the terms square, cube, zenzizenzic (fourth power), sursolid (fifth), zenzicube (sixth), second sursolid (seventh), and zenzizenzizenzic (eighth). "Biquadrate" has been used to refer to the fourth power as well.
Modern exponential notation.
In 1636, James Hume used in essence modern notation, when in "L'algèbre de Viète" he wrote "A"iii for "A"3. Early in the 17th century, the first form of our modern exponential notation was introduced by René Descartes in his text titled "La Géométrie"; there, the notation is introduced in Book I.
I designate ... "aa", or "a"2 in multiplying "a" by itself; and "a"3 in multiplying it once more again by "a", and thus to infinity.
Some mathematicians (such as Descartes) used exponents only for powers greater than two, preferring to represent squares as repeated multiplication. Thus they would write polynomials, for example, as "ax" + "bxx" + "cx"3 + "d".
"Indices".
Samuel Jeake introduced the term "indices" in 1696. The term "involution" was used synonymously with the term "indices", but had declined in usage and should not be confused with its more common meaning.
Variable exponents, non-integer exponents.
In 1748, Leonhard Euler introduced variable exponents, and, implicitly, non-integer exponents by writing:
Terminology.
The expression "b"2 = "b" · "b" is called "the square of "b"" or ""b" squared", because the area of a square with side-length "b" is "b"2. (It is true that it could also be called ""b" to the second power", but "the square of "b"" and ""b" squared" are so ingrained by tradition and convenience that ""b" to the second power" tends to sound unusual or clumsy.)
Similarly, the expression "b"3 = "b" · "b" · "b" is called "the cube of "b"" or ""b" cubed", because the volume of a cube with side-length "b" is "b"3.
When an exponent is a positive integer, that exponent indicates how many copies of the base are multiplied together. For example, 3^5 = 3 · 3 · 3 · 3 · 3 = 243. The base 3 appears 5 times in the multiplication, because the exponent is 5. Here, 243 is the "5th power of 3", or "3 raised to the 5th power".
The word "raised" is usually omitted, and sometimes "power" as well, so 3^5 can be simply read "3 to the 5th", or "3 to the 5". Therefore, the exponentiation "b""n" can be expressed as ""b" to the power of "n", "b" to the "n"th power", ""b" to the "n"th", or most briefly as ""b" to the "n"".
Integer exponents.
The exponentiation operation with integer exponents may be defined directly from elementary arithmetic operations.
Positive exponents.
The definition of the exponentiation as an iterated multiplication can be formalized by using induction, and this definition can be used as soon one has an associative multiplication:
The base case is
formula_9
and the recurrence is
formula_29
The associativity of multiplication implies that for any positive integers m and n,
formula_30
and
formula_31
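This inductive definition translates directly into a short recursive routine; the Python sketch below is a literal transcription of the definition rather than an efficient algorithm (see § Efficient computation with integer exponents).

```python
def power(b, n):
    """b raised to the positive integer n, following the inductive definition."""
    if n == 1:
        return b                    # base case: b**1 = b
    return power(b, n - 1) * b      # recurrence: b**n = b**(n-1) * b

assert power(3, 5) == 243           # 3 * 3 * 3 * 3 * 3
```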
Zero exponent.
As mentioned earlier, a (nonzero) number raised to the 0 power is 1:
formula_32
This value is also obtained by the empty product convention, which may be used in every algebraic structure with a multiplication that has an identity. This way the formula
formula_33
also holds for formula_34.
The case of 00 is controversial. In contexts where only integer powers are considered, the value 1 is generally assigned to 00 but, otherwise, the choice of whether to assign it a value and what value to assign may depend on context.
Negative exponents.
Exponentiation with negative exponents is defined by the following identity, which holds for any integer n and nonzero b:
formula_35.
Raising 0 to a negative exponent is undefined but, in some circumstances, it may be interpreted as infinity (formula_36).
This definition of exponentiation with negative exponents is the only one that allows extending the identity formula_33 to negative exponents (consider the case formula_37).
The same definition applies to invertible elements in a multiplicative monoid, that is, an algebraic structure, with an associative multiplication and a multiplicative identity denoted 1 (for example, the square matrices of a given dimension). In particular, in such a structure, the inverse of an invertible element x is standardly denoted formula_38
Identities and properties.
The following identities, often called exponent rules, hold for all integer exponents, provided that the base is non-zero:
formula_39
Unlike addition and multiplication, exponentiation is not commutative. For example, 2^3 = 8 ≠ 3^2 = 9. Also unlike addition and multiplication, exponentiation is not associative. For example, (2^3)^2 = 8^2 = 64, whereas 2^(3^2) = 2^9 = 512. Without parentheses, the conventional order of operations for serial exponentiation in superscript notation is top-down (or "right"-associative), not bottom-up (or "left"-associative). That is,
formula_40
which, in general, is different from
formula_41
Powers of a sum.
The powers of a sum can normally be computed from the powers of the summands by the binomial formula
formula_42
However, this formula is true only if the summands commute (i.e. that "ab" = "ba"), which is implied if they belong to a structure that is commutative. Otherwise, if a and b are, say, square matrices of the same size, this formula cannot be used. It follows that in computer algebra, many algorithms involving integer exponents must be changed when the exponentiation bases do not commute. Some general purpose computer algebra systems use a different notation (sometimes ^^ instead of ^) for exponentiation with non-commuting bases, which is then called non-commutative exponentiation.
Combinatorial interpretation.
For nonnegative integers n and m, the value of "n""m" is the number of functions from a set of m elements to a set of n elements (see cardinal exponentiation). Such functions can be represented as m-tuples from an n-element set (or as m-letter words from an n-letter alphabet). Some examples for particular values of m and n are given in the following table:
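A short Python sketch makes the correspondence concrete by enumerating all such tuples for small m and n:

```python
from itertools import product

# n**m counts the functions from an m-element set to an n-element set:
# each function is an m-tuple of choices from the n-element codomain.
n, m = 3, 2
functions = list(product(range(n), repeat=m))   # all m-tuples over an n-element set
assert len(functions) == n ** m                 # 3**2 = 9 such functions
```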
Particular bases.
Powers of ten.
In the base ten (decimal) number system, integer powers of 10 are written as the digit 1 followed or preceded by a number of zeroes determined by the sign and magnitude of the exponent. For example, 10^3 = 1000 and 10^−4 = 0.0001.
Exponentiation with base 10 is used in scientific notation to denote large or small numbers. For instance, 299792458 m/s (the speed of light in vacuum, in metres per second) can be written as 2.99792458 × 10^8 m/s and then approximated as 2.998 × 10^8 m/s.
SI prefixes based on powers of 10 are also used to describe small or large quantities. For example, the prefix kilo means 10^3 = 1000, so a kilometre is 1000 m.
Powers of two.
The first negative powers of 2 are commonly used, and have special names, e.g.: "half" and "quarter".
Powers of 2 appear in set theory, since a set with "n" members has a power set, the set of all of its subsets, which has 2"n" members.
Integer powers of 2 are important in computer science. The positive integer powers 2"n" give the number of possible values for an "n"-bit integer binary number; for example, a byte may take 2^8 = 256 different values. The binary number system expresses any number as a sum of powers of 2, and denotes it as a sequence of 0 and 1, separated by a binary point, where 1 indicates a power of 2 that appears in the sum; the exponent is determined by the place of this 1: the nonnegative exponents are the rank of the 1 on the left of the point (starting from 0), and the negative exponents are determined by the rank on the right of the point.
Powers of one.
Every power of one equals: 1"n" = 1. This is true even if n is negative.
The first power of a number is the number itself: "n"1 = "n".
Powers of zero.
If the exponent n is positive ("n" > 0), the nth power of zero is zero: 0"n" = 0.
If the exponent n is negative ("n" < 0), the nth power of zero 0"n" is undefined, because it must equal formula_43 with −"n" > 0, and this would be formula_44 according to above.
The expression 00 is either defined as 1, or it is left undefined.
Powers of negative one.
If "n" is an even integer, then (−1)"n" = 1. This is because a negative number multiplied by another negative number cancels the sign, and thus gives a positive number.
If "n" is an odd integer, then (−1)"n" = −1. This is because there will be a remaining −1 after removing −1 pairs.
Because of this, powers of −1 are useful for expressing alternating sequences. For a similar discussion of powers of the complex number "i", see § Roots of unity below.
Large exponents.
The limit of a sequence of powers of a number greater than one diverges; in other words, the sequence grows without bound:
"b""n" → ∞ as "n" → ∞ when "b" > 1
This can be read as ""b" to the power of "n" tends to +∞ as "n" tends to infinity when "b" is greater than one".
Powers of a number with absolute value less than one tend to zero:
"b""n" → 0 as "n" → ∞ when
Any power of one is always one:
"b""n" = 1 for all "n" if "b" = 1
Powers of –1 alternate between 1 and –1 as "n" alternates between even and odd, and thus do not tend to any limit as "n" grows.
If "b" < –1, "b""n" alternates between larger and larger positive and negative numbers as "n" alternates between even and odd, and thus does not tend to any limit as "n" grows.
If the exponentiated number varies while tending to 1 as the exponent tends to infinity, then the limit is not necessarily one of those above. A particularly important case is
(1 + 1/"n")"n" → "e" as "n" → ∞
See " below.
Other limits, in particular those of expressions that take on an indeterminate form, are described in " below.
Power functions.
Real functions of the form formula_45, where formula_46, are sometimes called power functions. When formula_1 is an integer and formula_47, two primary families exist: for formula_1 even, and for formula_1 odd. In general for formula_48, when formula_1 is even formula_45 will tend towards positive infinity with increasing formula_49, and also towards positive infinity with decreasing formula_49. All graphs from the family of even power functions have the general shape of formula_50, flattening more in the middle as formula_1 increases. Functions with this kind of symmetry (formula_51) are called even functions.
When formula_1 is odd, formula_52's asymptotic behavior reverses from positive formula_49 to negative formula_49. For formula_48, formula_45 will also tend towards positive infinity with increasing formula_49, but towards negative infinity with decreasing formula_49. All graphs from the family of odd power functions have the general shape of formula_53, flattening more in the middle as formula_1 increases and losing all flatness there in the straight line for formula_54. Functions with this kind of symmetry (formula_55) are called odd functions.
For formula_56, the opposite asymptotic behavior is true in each case.
Rational exponents.
If x is a nonnegative real number, and n is a positive integer, formula_57 or formula_58 denotes the unique positive real nth root of x, that is, the unique positive real number y such that formula_59
If x is a positive real number, and formula_60 is a rational number, with p and q > 0 integers, then formula_61 is defined as
formula_62
The equality on the right may be derived by setting formula_63 and writing formula_64
If r is a positive rational number, 0"r" = 0, by definition.
All these definitions are required for extending the identity formula_65 to rational exponents.
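For a positive base, the expressions in this definition agree numerically, as a small floating-point check (Python) illustrates:

```python
# Sketch: for positive x, x**(p/q) equals the q-th root of x**p and also
# the p-th power of the q-th root of x (up to floating-point rounding).
x, p, q = 5.0, 3, 4
direct      = x ** (p / q)
root_first  = (x ** (1 / q)) ** p
power_first = (x ** p) ** (1 / q)
assert abs(direct - root_first) < 1e-12 and abs(direct - power_first) < 1e-12
```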
On the other hand, there are problems with the extension of these definitions to bases that are not positive real numbers. For example, a negative real number has a real nth root, which is negative, if n is odd, and no real root if n is even. In the latter case, whichever complex nth root one chooses for formula_66 the identity formula_67 cannot be satisfied. For example,
formula_68
See § Real exponents and § Non-integer powers of complex numbers for details on the way these problems may be handled.
Real exponents.
For positive real numbers, exponentiation to real powers can be defined in two equivalent ways, either by extending the rational powers to reals by continuity (§ Limits of rational exponents, below), or in terms of the logarithm of the base and the exponential function (§ Powers via logarithms, below). The result is always a positive real number, and the identities and properties shown above for integer exponents remain true with these definitions for real exponents. The second definition is more commonly used, since it generalizes straightforwardly to complex exponents.
On the other hand, exponentiation to a real power of a negative real number is much more difficult to define consistently, as it may be non-real and have several values. One may choose one of these values, called the principal value, but there is no choice of the principal value for which the identity
formula_69
is true; see "". Therefore, exponentiation with a basis that is not a positive real number is generally viewed as a multivalued function.
Limits of rational exponents.
Since any irrational number can be expressed as the limit of a sequence of rational numbers, exponentiation of a positive real number b with an arbitrary real exponent x can be defined by continuity with the rule
formula_70
where the limit is taken over rational values of r only. This limit exists for every positive b and every real x.
For example, if "x" = π, the non-terminating decimal representation "π" = 3.14159... and the monotonicity of the rational powers can be used to obtain intervals bounded by rational powers that are as small as desired, and must contain formula_71
formula_72
So, the upper bounds and the lower bounds of the intervals form two sequences that have the same limit, denoted formula_73
This defines formula_74 for every positive b and real x as a continuous function of b and x. See also "Well-defined expression".
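A small numerical sketch (Python) of this limiting process approximates 3^π through rational exponents obtained by truncating the decimal expansion of π:

```python
from fractions import Fraction
import math

# b**x for irrational x as a limit of rational powers b**r with r -> x.
# Here the rationals r are truncations of the decimal expansion of pi.
b = 3.0
for digits in range(1, 7):
    r = Fraction(str(round(math.pi, digits)))        # e.g. 31/10, 157/50, ...
    print(f"3**({r}) = {b ** (r.numerator / r.denominator):.10f}")
print(f"3**pi = {b ** math.pi:.10f}")                # the value the sequence approaches
```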
Exponential function.
The "exponential function" is often defined as formula_75 where formula_76 is Euler's number. To avoid circular reasoning, this definition cannot be used here. So, a definition of the exponential function, denoted formula_77 and of Euler's number are given, which rely only on exponentiation with positive integer exponents. Then a proof is sketched that, if one uses the definition of exponentiation given in preceding sections, one has
formula_78
There are many equivalent ways to define the exponential function, one of them being
formula_79
One has formula_80 and the "exponential identity" formula_81 holds as well, since
formula_82
and the second-order term formula_83 does not affect the limit, yielding formula_84.
Euler's number can be defined as formula_85. It follows from the preceding equations that formula_86 when x is an integer (this results from the repeated-multiplication definition of the exponentiation). If x is real, formula_86 results from the definitions given in preceding sections, by using the exponential identity if x is rational, and the continuity of the exponential function otherwise.
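The convergence of this limit can be observed numerically; the following Python sketch evaluates (1 + x/n)^n for growing n and compares the result with Euler's number for x = 1:

```python
import math

# The limit (1 + x/n)**n converges to exp(x) as n grows; here x = 1 gives e.
x = 1.0
for n in (10, 1_000, 100_000, 10_000_000):
    print(n, (1 + x / n) ** n)
print("e =", math.e)   # 2.718281828..., the value the sequence approaches
```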
The limit that defines the exponential function converges for every complex value of x, and therefore it can be used to extend the definition of formula_87, and thus formula_88 from the real numbers to any complex argument z. This extended exponential function still satisfies the exponential identity, and is commonly used for defining exponentiation for complex base and exponent.
Powers via logarithms.
The definition of "e""x" as the exponential function allows defining "b""x" for every positive real numbers b, in terms of exponential and logarithm function. Specifically, the fact that the natural logarithm ln("x") is the inverse of the exponential function "e""x" means that one has
formula_89
for every "b" > 0. For preserving the identity formula_90 one must have
formula_91
So, formula_92 can be used as an alternative definition of "b""x" for any positive real b. This agrees with the definition given above using rational exponents and continuity, with the advantage to extend straightforwardly to any complex exponent.
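A one-line numerical check (Python) of this alternative definition for a positive base:

```python
import math

# For a positive base b, defining b**x as exp(x * ln b) agrees with the usual power.
b, x = 2.0, 0.5
via_log = math.exp(x * math.log(b))
assert math.isclose(via_log, b ** x)   # both equal sqrt(2) = 1.41421356...
```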
Complex exponents with a positive real base.
If b is a positive real number, exponentiation with base b and complex exponent z is defined by means of the exponential function with complex argument (see the end of "", above) as
formula_93
where formula_94 denotes the natural logarithm of b.
This satisfies the identity
formula_95
In general,
formula_96 is not defined, since "b""z" is not a real number. If a meaning is given to the exponentiation of a complex number (see § Non-integer powers of complex numbers, below), one has, in general,
formula_97
unless z is real or t is an integer.
Euler's formula,
formula_98
allows expressing the polar form of formula_99 in terms of the real and imaginary parts of z, namely
formula_100
where the absolute value of the trigonometric factor is one. This results from
formula_101
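The following Python sketch evaluates b^z for a positive real base and a complex exponent, and confirms the polar form given by Euler's formula; the particular values of b and z are arbitrary examples:

```python
import cmath, math

# b**z for a positive real base b and complex z, via exp(z * ln b),
# and its polar form |b**z| = b**Re(z), arg(b**z) = Im(z) * ln b.
b, z = 2.0, 1 + 1j
w = cmath.exp(z * math.log(b))       # b**z
modulus = b ** z.real                # absolute value of b**z
argument = z.imag * math.log(b)      # argument of b**z
assert cmath.isclose(w, cmath.rect(modulus, argument))
```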
Non-integer powers of complex numbers.
In the preceding sections, exponentiation with non-integer exponents has been defined for positive real bases only. For other bases, difficulties appear already with the apparently simple case of nth roots, that is, of exponents formula_102 where n is a positive integer. Although the general theory of exponentiation with non-integer exponents applies to nth roots, this case deserves to be considered first, since it does not need to use complex logarithms, and is therefore easier to understand.
nth roots of a complex number.
Every nonzero complex number z may be written in polar form as
formula_103
where formula_104 is the absolute value of z, and formula_105 is its argument. The argument is defined up to an integer multiple of 2π; this means that, if formula_105 is the argument of a complex number, then formula_106 is also an argument of the same complex number for every integer formula_107.
The polar form of the product of two complex numbers is obtained by multiplying the absolute values and adding the arguments. It follows that the polar form of an nth root of a complex number can be obtained by taking the nth root of the absolute value and dividing its argument by n:
formula_108
If formula_109 is added to formula_105, the complex number is not changed, but this adds formula_110 to the argument of the nth root, and provides a new nth root. This can be done n times, and provides the n nth roots of the complex number.
It is usual to choose one of the n nth root as the principal root. The common choice is to choose the nth root for which formula_111 that is, the nth root that has the largest real part, and, if there are two, the one with positive imaginary part. This makes the principal nth root a continuous function in the whole complex plane, except for negative real values of the radicand. This function equals the usual nth root for positive real radicands. For negative real radicands, and odd exponents, the principal nth root is not real, although the usual nth root is real. Analytic continuation shows that the principal nth root is the unique complex differentiable function that extends the usual nth root to the complex plane without the nonpositive real numbers.
If the complex number is moved around zero by increasing its argument, after an increment of formula_112 the complex number comes back to its initial position, and its nth roots are permuted circularly (they are multiplied by formula_113). This shows that it is not possible to define a nth root function that is continuous in the whole complex plane.
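A short Python sketch computes all n nth roots of a complex number from its polar form, here for the three cube roots of −8:

```python
import cmath, math

def nth_roots(z, n):
    """All n n-th roots of a nonzero complex number z, from its polar form."""
    r, theta = abs(z), cmath.phase(z)     # |z| and one argument of z
    return [r ** (1 / n) * cmath.exp(1j * (theta + 2 * math.pi * k) / n)
            for k in range(n)]

roots = nth_roots(-8, 3)                  # the three cube roots of -8
for w in roots:
    assert cmath.isclose(w ** 3, -8)
# The principal root here is 1 + i*sqrt(3); the real cube root -2 is also among them.
```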
Roots of unity.
The nth roots of unity are the n complex numbers such that "w""n" = 1, where n is a positive integer. They arise in various areas of mathematics, such as in discrete Fourier transform or algebraic solutions of algebraic equations (Lagrange resolvent).
The n nth roots of unity are the n first powers of formula_114, that is formula_115 The nth roots of unity that have this generating property are called "primitive nth roots of unity"; they have the form formula_116 with k coprime with n. The unique primitive square root of unity is formula_117 the primitive fourth roots of unity are formula_118 and formula_119
The nth roots of unity allow expressing all nth roots of a complex number z as the n products of a given nth root of z with an nth root of unity.
Geometrically, the nth roots of unity lie on the unit circle of the complex plane at the vertices of a regular n-gon with one vertex on the real number 1.
As the number formula_120 is the primitive nth root of unity with the smallest positive argument, it is called the "principal primitive nth root of unity", sometimes shortened as "principal nth root of unity", although this terminology can be confused with the principal value of formula_121, which is 1.
Complex exponentiation.
Defining exponentiation with complex bases leads to difficulties that are similar to those described in the preceding section, except that there are, in general, infinitely many possible values for formula_122. So, either a principal value is defined, which is not continuous for the values of z that are real and nonpositive, or formula_122 is defined as a multivalued function.
In all cases, the complex logarithm is used to define complex exponentiation as
formula_123
where formula_124 is the variant of the complex logarithm that is used, which is, a function or a multivalued function such that
formula_125
for every z in its domain of definition.
Principal value.
The principal value of the complex logarithm is the unique continuous function, commonly denoted formula_126 such that, for every nonzero complex number z,
formula_127
and the argument of z satisfies
formula_128
The principal value of the complex logarithm is not defined for formula_129 it is discontinuous at negative real values of z, and it is holomorphic (that is, complex differentiable) elsewhere. If z is real and positive, the principal value of the complex logarithm is the natural logarithm: formula_130
The principal value of formula_131 is defined as
formula_123
where formula_124 is the principal value of the logarithm.
The function formula_132 is holomorphic except in the neighbourhood of the points where z is real and nonpositive.
If z is real and positive, the principal value of formula_131 equals its usual value defined above. If formula_133 where n is an integer, this principal value is the same as the one defined above.
Multivalued function.
In some contexts, there is a problem with the discontinuity of the principal values of formula_124 and formula_131 at the negative real values of z. In this case, it is useful to consider these functions as multivalued functions.
If formula_124 denotes one of the values of the multivalued logarithm (typically its principal value), the other values are formula_134 where k is any integer. Similarly, if formula_131 is one value of the exponentiation, then the other values are given by
formula_135
where k is any integer.
Different values of k give different values of formula_131 unless w is a rational number, that is, there is an integer d such that dw is an integer. This results from the periodicity of the exponential function, more specifically, that formula_136 if and only if formula_137 is an integer multiple of formula_138
If formula_139 is a rational number with m and n coprime integers with formula_140 then formula_131 has exactly n values. In the case formula_141 these values are the same as those described in § nth roots of a complex number. If w is an integer, there is only one value, which agrees with the definition for integer exponents given above.
The multivalued exponentiation is holomorphic for formula_142 in the sense that its graph consists of several sheets that each define a holomorphic function in the neighborhood of every point. If z varies continuously along a circle around 0, then, after a turn, the value of formula_131 has moved to a different sheet.
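The following Python sketch lists a few values of the multivalued power by varying the branch of the logarithm; the classic example i^i is used, whose values are all real (up to floating-point rounding):

```python
import cmath, math

def power_values(z, w, ks=range(-2, 3)):
    """A few values of the multivalued z**w, one per branch k of the logarithm."""
    log_z = cmath.log(z)                                # principal value Log z
    return [cmath.exp(w * (log_z + 2j * math.pi * k)) for k in ks]

# i**i: every value is real; the principal value (k = 0) is exp(-pi/2) = 0.2078...
for value in power_values(1j, 1j):
    print(value)
```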
Computation.
The "canonical form" formula_143 of formula_131 can be computed from the canonical form of z and w. Although this can be described by a single formula, it is clearer to split the computation in several steps.
Examples.
In both examples, all values of formula_131 have the same argument. More generally, this is true if and only if the real part of w is an integer.
Failure of power and logarithm identities.
Some identities for powers and logarithms for positive real numbers will fail for complex numbers, no matter how complex powers and complex logarithms are defined "as single-valued functions". For example:
Irrationality and transcendence.
If b is a positive real algebraic number, and x is a rational number, then "b""x" is an algebraic number. This results from the theory of algebraic extensions. This remains true if b is any algebraic number, in which case, all values of "b""x" (as a multivalued function) are algebraic. If x is irrational (that is, "not rational"), and both b and x are algebraic, Gelfond–Schneider theorem asserts that all values of "b""x" are transcendental (that is, not algebraic), except if b equals 0 or 1.
In other words, if x is irrational and formula_170 then at least one of b, x and "b""x" is transcendental.
Integer powers in algebra.
The definition of exponentiation with positive integer exponents as repeated multiplication may apply to any associative operation denoted as a multiplication. The definition of "x"0 requires further the existence of a multiplicative identity.
An algebraic structure consisting of a set together with an associative operation denoted multiplicatively, and a multiplicative identity denoted by 1 is a monoid. In such a monoid, exponentiation of an element x is defined inductively by x^0 = 1 and x^(n+1) = x^n · x for every nonnegative integer n.
If n is a negative integer, formula_173 is defined only if x has a multiplicative inverse. In this case, the inverse of x is denoted "x"−1, and "x""n" is defined as formula_174
Exponentiation with integer exponents obeys the following laws, for x and y in the algebraic structure, and m and n integers:
formula_175
These definitions are widely used in many areas of mathematics, notably for groups, rings, fields, square matrices (which form a ring). They apply also to functions from a set to itself, which form a monoid under function composition. This includes, as specific instances, geometric transformations, and endomorphisms of any mathematical structure.
When there are several operations that may be repeated, it is common to indicate the repeated operation by placing its symbol in the superscript, before the exponent. For example, if f is a real function whose values can be multiplied, formula_176 denotes the exponentiation with respect to multiplication, and formula_177 may denote exponentiation with respect to function composition. That is,
formula_178
and
formula_179
Commonly, formula_180 is denoted formula_181 while formula_182 is denoted formula_183
In a group.
A multiplicative group is a set with an associative operation denoted as multiplication, that has an identity element, and such that every element has an inverse.
So, if G is a group, formula_173 is defined for every formula_184 and every integer n.
The set of all powers of an element of a group form a subgroup. A group (or subgroup) that consists of all powers of a specific element x is the cyclic group generated by x. If all the powers of x are distinct, the group is isomorphic to the additive group formula_185 of the integers. Otherwise, the cyclic group is finite (it has a finite number of elements), and its number of elements is the order of x. If the order of x is n, then formula_186 and the cyclic group generated by x consists of the n first powers of x (starting indifferently from the exponent 0 or 1).
Order of elements play a fundamental role in group theory. For example, the order of an element in a finite group is always a divisor of the number of elements of the group (the "order" of the group). The possible orders of group elements are important in the study of the structure of a group (see Sylow theorems), and in the classification of finite simple groups.
Superscript notation is also used for conjugation; that is, "g""h" = "h"−1"gh", where "g" and "h" are elements of a group. This notation cannot be confused with exponentiation, since the superscript is not an integer. The motivation of this notation is that conjugation obeys some of the laws of exponentiation, namely formula_187 and formula_188
In a ring.
In a ring, it may occur that some nonzero elements satisfy formula_189 for some integer n. Such an element is said to be nilpotent. In a commutative ring, the nilpotent elements form an ideal, called the nilradical of the ring.
If the nilradical is reduced to the zero ideal (that is, if formula_190 implies formula_191 for every positive integer n), the commutative ring is said to be reduced. Reduced rings are important in algebraic geometry, since the coordinate ring of an affine algebraic set is always a reduced ring.
More generally, given an ideal I in a commutative ring R, the set of the elements of R that have a power in I is an ideal, called the radical of I. The nilradical is the radical of the zero ideal. A radical ideal is an ideal that equals its own radical. In a polynomial ring formula_192 over a field k, an ideal is radical if and only if it is the set of all polynomials that are zero on an affine algebraic set (this is a consequence of Hilbert's Nullstellensatz).
Matrices and linear operators.
If "A" is a square matrix, then the product of "A" with itself "n" times is called the matrix power. Also formula_193 is defined to be the identity matrix, and if "A" is invertible, then formula_194.
Matrix powers appear often in the context of discrete dynamical systems, where the matrix "A" expresses a transition from a state vector "x" of some system to the next state "Ax" of the system. This is the standard interpretation of a Markov chain, for example. Then formula_195 is the state of the system after two time steps, and so forth: formula_196 is the state of the system after "n" time steps. The matrix power formula_197 is the transition matrix between the state now and the state at a time "n" steps in the future. So computing matrix powers is equivalent to solving the evolution of the dynamical system. In many cases, matrix powers can be expediently computed by using eigenvalues and eigenvectors.
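A brief sketch (Python with NumPy) of this use of matrix powers, for a hypothetical two-state Markov chain; the transition probabilities are assumed example values:

```python
import numpy as np

# Powers of a (hypothetical) column-stochastic transition matrix A:
# A**n maps the state distribution now to the distribution n steps later.
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])                 # each column sums to 1
x0 = np.array([1.0, 0.0])                  # start entirely in state 0

x2  = np.linalg.matrix_power(A, 2) @ x0    # distribution after 2 steps
x50 = np.linalg.matrix_power(A, 50) @ x0   # close to the stationary distribution
print(x2, x50)                             # x50 is approximately [2/3, 1/3]
```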
Apart from matrices, more general linear operators can also be exponentiated. An example is the derivative operator of calculus, formula_198, which is a linear operator acting on functions formula_52 to give a new function formula_199. The "n"th power of the differentiation operator is the "n"th derivative:
formula_200
These examples are for discrete exponents of linear operators, but in many circumstances it is also desirable to define powers of such operators with continuous exponents. This is the starting point of the mathematical theory of semigroups. Just as computing matrix powers with discrete exponents solves discrete dynamical systems, so does computing matrix powers with continuous exponents solve systems with continuous dynamics. Examples include approaches to solving the heat equation, Schrödinger equation, wave equation, and other partial differential equations including a time evolution. The special case of exponentiating the derivative operator to a non-integer power is called the fractional derivative which, together with the fractional integral, is one of the basic operations of the fractional calculus.
Finite fields.
A field is an algebraic structure in which multiplication, addition, subtraction, and division are defined and satisfy the properties that multiplication is associative and every nonzero element has a multiplicative inverse. This implies that exponentiation with integer exponents is well-defined, except for nonpositive powers of 0. Common examples are the field of complex numbers, the real numbers and the rational numbers, considered earlier in this article, which are all infinite.
A "finite field" is a field with a finite number of elements. This number of elements is either a prime number or a prime power; that is, it has the form formula_201 where p is a prime number, and k is a positive integer. For every such q, there are fields with q elements. The fields with q elements are all isomorphic, which allows, in general, working as if there were only one field with q elements, denoted formula_202
One has
formula_203
for every formula_204
A primitive element in formula_205 is an element g such that the set of the "q" − 1 first powers of g (that is, formula_206) equals the set of the nonzero elements of formula_202 There are formula_207 primitive elements in formula_208 where formula_209 is Euler's totient function.
In formula_208 the freshman's dream identity
formula_210
is true for the exponent p. As formula_211 in formula_208, it follows that the map
formula_212
is linear over formula_208 and is a field automorphism, called the Frobenius automorphism. If formula_201 the field formula_205 has k automorphisms, which are the k first powers (under composition) of F. In other words, the Galois group of formula_205 is cyclic of order k, generated by the Frobenius automorphism.
The Diffie–Hellman key exchange is an application of exponentiation in finite fields that is widely used for secure communications. It uses the fact that exponentiation is computationally inexpensive, whereas the inverse operation, the discrete logarithm, is computationally expensive. More precisely, if g is a primitive element in formula_208 then formula_213 can be efficiently computed with exponentiation by squaring for any e, even if q is large, while there is no known computationally practical algorithm that allows retrieving e from formula_213 if q is sufficiently large.
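A toy sketch (Python) of the Diffie–Hellman idea over the very small prime field with 23 elements; real systems use far larger fields, and the private exponents here are arbitrary example values:

```python
# Toy Diffie-Hellman sketch over F_23 (far too small to be secure).
# g = 5 is a primitive element mod 23; pow(g, e, q) is fast modular exponentiation.
q, g = 23, 5

a, b = 6, 15                       # private exponents (assumed example values)
A = pow(g, a, q)                   # Alice publishes g**a mod q
B = pow(g, b, q)                   # Bob publishes g**b mod q

shared_alice = pow(B, a, q)        # (g**b)**a = g**(a*b) mod q
shared_bob   = pow(A, b, q)        # (g**a)**b = g**(a*b) mod q
assert shared_alice == shared_bob  # both parties derive the same shared secret
```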
Powers of sets.
The Cartesian product of two sets S and T is the set of the ordered pairs formula_214 such that formula_215 and formula_216 This operation is not, strictly speaking, commutative or associative, but it has these properties up to canonical isomorphisms, which allow identifying, for example, formula_217 formula_218 and formula_219
This allows defining the nth power formula_220 of a set S as the set of all n-tuples formula_221 of elements of S.
When S is endowed with some structure, it is frequent that formula_220 is naturally endowed with a similar structure. In this case, the term "direct product" is generally used instead of "Cartesian product", and exponentiation denotes product structure. For example formula_222 (where formula_223 denotes the real numbers) denotes the Cartesian product of n copies of formula_224 as well as their direct product as vector space, topological spaces, rings, etc.
Sets as exponents.
An n-tuple formula_221 of elements of S can be considered as a function from formula_225 This generalizes to the following notation.
Given two sets S and T, the set of all functions from T to S is denoted formula_226. This exponential notation is justified by the following canonical isomorphisms (for the first one, see Currying):
formula_227
formula_228
where formula_229 denotes the Cartesian product, and formula_230 the disjoint union.
One can use sets as exponents for other operations on sets, typically for direct sums of abelian groups, vector spaces, or modules. For distinguishing direct sums from direct products, the exponent of a direct sum is placed between parentheses. For example, formula_231 denotes the vector space of the infinite sequences of real numbers, and formula_232 the vector space of those sequences that have a finite number of nonzero elements. The latter has a basis consisting of the sequences with exactly one nonzero element that equals 1, while the Hamel bases of the former cannot be explicitly described (because their existence involves Zorn's lemma).
In this context, 2 can represent the set formula_233 So, formula_234 denotes the power set of S, that is, the set of the functions from S to formula_235 which can be identified with the set of the subsets of S, by mapping each function to the inverse image of 1.
This fits in with the exponentiation of cardinal numbers, in the sense that |"S""T"| = |"S"||"T"|, where |"X"| denotes the cardinality of a set "X".
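For a small finite set this identification can be made completely explicit; the following Python sketch (an illustration added here, with toy sets) lists all functions from a three-element set T to {0, 1} and maps each one to the subset of T on which it takes the value 1, checking that |S^T| = |S|^|T|.

```python
# Illustrative sketch: functions T -> {0, 1} versus subsets of T.
from itertools import product

T = ['a', 'b', 'c']
S = [0, 1]

# Every function T -> {0, 1} is a choice of a value in S for each element of T.
functions = [dict(zip(T, values)) for values in product(S, repeat=len(T))]

# Each indicator function corresponds to the subset f^{-1}(1).
subsets = [frozenset(t for t in T if f[t] == 1) for f in functions]

assert len(functions) == len(S) ** len(T) == 8   # |S^T| = |S|^|T| = 2^3
assert len(set(subsets)) == 8                    # the 8 subsets of T, all distinct
```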
In category theory.
In the category of sets, the morphisms between sets X and Y are the functions from X to Y. It follows that the set of functions from X to Y that is denoted formula_236 in the preceding section can also be denoted formula_237 The isomorphism formula_238 can be rewritten
formula_239
This means that the functor "exponentiation to the power T" is a right adjoint to the functor "direct product with T".
This generalizes to the definition of exponentiation in a category in which finite direct products exist: in such a category, the functor formula_240 is, if it exists, a right adjoint to the functor formula_241 A category is called a "Cartesian closed category", if direct products exist, and the functor formula_242 has a right adjoint for every T.
Repeated exponentiation.
Just as exponentiation of natural numbers is motivated by repeated multiplication, it is possible to define an operation based on repeated exponentiation; this operation is sometimes called hyper-4 or tetration. Iterating tetration leads to another operation, and so on, a concept named hyperoperation. This sequence of operations is expressed by the Ackermann function and Knuth's up-arrow notation. Just as exponentiation grows faster than multiplication, which is faster-growing than addition, tetration is faster-growing than exponentiation. Evaluated at (3, 3), the functions addition, multiplication, exponentiation, and tetration yield 6, 9, 27, and 7,625,597,484,987 (= 3^27 = 3^(3^3)) respectively.
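A small Python sketch (illustrative only; the function name and the loop are choices made here) makes the comparison at (3, 3) explicit.

```python
# Illustrative sketch: the hyperoperation hierarchy evaluated at (3, 3).

def tetration(base, height):
    """Right-associative iterated exponentiation: base^(base^(...^base)), `height` copies."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

print(3 + 3)            # addition        -> 6
print(3 * 3)            # multiplication  -> 9
print(3 ** 3)           # exponentiation  -> 27
print(tetration(3, 3))  # tetration       -> 7625597484987, i.e. 3**27
```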
Limits of powers.
Zero to the power of zero gives a number of examples of limits that are of the indeterminate form 00. The limits in these examples exist, but have different values, showing that the two-variable function "x""y" has no limit at the point (0, 0). One may consider at what points this function does have a limit.
More precisely, consider the function formula_243 defined on formula_244. Then "D" can be viewed as a subset of R × R (that is, the set of all pairs ("x", "y") with "x", "y" belonging to the extended real number line R = [−∞, +∞], endowed with the product topology), which will contain the points at which the function "f" has a limit.
In fact, "f" has a limit at all accumulation points of "D", except for (0, 0), (+∞, 0), (1, +∞) and (1, −∞). Accordingly, this allows one to define the powers "x""y" by continuity whenever 0 ≤ "x" ≤ +∞, −∞ ≤ y ≤ +∞, except for 00, (+∞)0, 1+∞ and 1−∞, which remain indeterminate forms.
Under this definition by continuity, we obtain:
These powers are obtained by taking limits of "x""y" for "positive" values of "x". This method does not permit a definition of "x""y" when "x" < 0, since pairs ("x", "y") with "x" < 0 are not accumulation points of "D".
On the other hand, when "n" is an integer, the power "x""n" is already meaningful for all values of "x", including negative ones. This may make the definition 0"n" = +∞ obtained above for negative "n" problematic when "n" is odd, since in this case "x""n" → +∞ as "x" tends to 0 through positive values, but not negative ones.
Efficient computation with integer exponents.
Computing "b""n" using iterated multiplication requires "n" − 1 multiplication operations, but it can be computed more efficiently than that, as illustrated by the following example. To compute 2100, apply Horner's rule to the exponent 100 written in binary:
formula_245.
Then compute the following terms in order, reading Horner's rule from right to left.
2^2 = 4; 2^3 = 2 · 2^2 = 8; 2^6 = (2^3)^2; 2^12 = (2^6)^2; 2^24 = (2^12)^2; 2^25 = 2 · 2^24; 2^50 = (2^25)^2; 2^100 = (2^50)^2.
This series of steps only requires 8 multiplications instead of 99.
In general, the number of multiplication operations required to compute "b""n" can be reduced to formula_246 by using exponentiation by squaring, where formula_247 denotes the number of 1s in the binary representation of n. For some exponents (100 is not among them), the number of multiplications can be further reduced by computing and using the minimal addition-chain exponentiation. Finding the "minimal" sequence of multiplications (the minimal-length addition chain for the exponent) for "b""n" is a difficult problem, for which no efficient algorithms are currently known (see Subset sum problem), but many reasonably efficient heuristic algorithms are available. However, in practical computations, exponentiation by squaring is efficient enough, and much easier to implement.
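The following Python sketch (added as an illustration; the helper name and the way multiplications are counted are choices made here) implements the square-and-multiply scheme and confirms the count of 8 multiplications for the exponent 100.

```python
# Illustrative sketch: exponentiation by squaring with a multiplication counter.

def power_with_count(b, n):
    """Compute b**n (n >= 1) by exponentiation by squaring; return (value, multiplications)."""
    mults = 0
    result = b                     # accounts for the leading binary digit of n
    for bit in bin(n)[3:]:         # remaining binary digits, most significant first
        result *= result           # squaring step
        mults += 1
        if bit == '1':
            result *= b            # extra multiplication for a 1 digit
            mults += 1
    return result, mults

value, mults = power_with_count(2, 100)
assert value == 2 ** 100
assert mults == 8                  # popcount(100) + floor(log2(100)) - 1 = 3 + 6 - 1
```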
Iterated functions.
Function composition is a binary operation that is defined on functions such that the codomain of the function written on the right is included in the domain of the function written on the left. It is denoted formula_248 and defined as
formula_249
for every x in the domain of f.
If the domain of a function f equals its codomain, one may compose the function with itself an arbitrary number of times, and this defines the nth power of the function under composition, commonly called the "nth iterate" of the function. Thus formula_176 generally denotes the nth iterate of f; for example, formula_250 means formula_251
When a multiplication is defined on the codomain of the function, this defines a multiplication on functions, the pointwise multiplication, which induces another exponentiation. When using functional notation, the two kinds of exponentiation are generally distinguished by placing the exponent of the functional iteration "before" the parentheses enclosing the arguments of the function, and placing the exponent of pointwise multiplication "after" the parentheses. Thus formula_252 and formula_253 When functional notation is not used, disambiguation is often done by placing the composition symbol before the exponent; for example formula_254 and formula_255 For historical reasons, the exponent of a repeated multiplication is placed before the argument for some specific functions, typically the trigonometric functions. So, formula_256 and formula_257 both mean formula_258 and not formula_259 which, in any case, is rarely considered. Historically, several variants of these notations were used by different authors.
In this context, the exponent formula_260 always denotes the inverse function, if it exists. So formula_261 For the multiplicative inverse, fractions are generally used, as in formula_262
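A short Python sketch (illustrative; the helper names are choices made here) contrasts the two notions of a power of a function for f(x) = x + 1.

```python
# Illustrative sketch: functional iteration versus pointwise power.

def iterate(f, n):
    """n-th iterate of f under composition: f(f(...f(x)...))."""
    def iterated(x):
        for _ in range(n):
            x = f(x)
        return x
    return iterated

def pointwise_power(f, n):
    """Pointwise n-th power: x -> f(x)**n."""
    return lambda x: f(x) ** n

f = lambda x: x + 1
assert iterate(f, 3)(10) == 13             # f(f(f(10)))
assert pointwise_power(f, 3)(10) == 1331   # f(10)**3 = 11**3
```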
In programming languages.
Programming languages generally express exponentiation either as an infix operator or as a function application, as they do not support superscripts. The most common operator symbol for exponentiation is the caret (codice_0). The original version of ASCII included an uparrow symbol (codice_1), intended for exponentiation, but this was replaced by the caret in 1967, so the caret became usual in programming languages.
The notations include:
In most programming languages with an infix exponentiation operator, it is right-associative, that is, codice_10 is interpreted as codice_11. This is because codice_12 is equal to codice_13 and thus not as useful. In some languages, it is left-associative, notably in Algol, MATLAB, and the Microsoft Excel formula language.
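For instance, in Python (used here only as an illustration of one such language) the infix operator is right-associative:

```python
# Illustrative sketch: ** associates to the right in Python.
assert 2 ** 3 ** 2 == 2 ** (3 ** 2) == 512
assert (2 ** 3) ** 2 == 64 == 2 ** (3 * 2)   # the left-associative reading collapses to b**(p*q)
```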
Other programming languages use functional notation:
Still others only provide exponentiation as part of standard libraries:
In some statically typed languages that prioritize type safety such as Rust, exponentiation is performed via a multitude of methods:
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "b^n = \\underbrace{b \\times b \\times \\dots \\times b \\times b}_{n \\text{ times}}."
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "b^n"
},
{
"math_id": 3,
"text": "b"
},
{
"math_id": 4,
"text": "\n\\begin{align}\nb^{n+m} & = \\underbrace{b \\times \\dots \\times b}_{n+m \\text{ times}} \\\\[1ex]\n& = \\underbrace{b \\times \\dots \\times b}_{n \\text{ times}} \\times \\underbrace{b \\times \\dots \\times b}_{m \\text{ times}} \\\\[1ex]\n& = b^n \\times b^m\n\\end{align}\n"
},
{
"math_id": 5,
"text": "b^0"
},
{
"math_id": 6,
"text": "b \\neq 0"
},
{
"math_id": 7,
"text": "b^0 \\times b^n = b^{0+n} = b^n"
},
{
"math_id": 8,
"text": "b^0 = b^n / b^n = 1"
},
{
"math_id": 9,
"text": "b^1 = b"
},
{
"math_id": 10,
"text": " (b^1)^3 = b^1 \\times b^1 \\times b^1 = b^{1+1+1} = b^3 "
},
{
"math_id": 11,
"text": "b^{-1}"
},
{
"math_id": 12,
"text": "b^{-1} \\times b^1 = b^{-1+1} = b^0 = 1 "
},
{
"math_id": 13,
"text": "b^{1}"
},
{
"math_id": 14,
"text": "b^{-1} = 1 / b^1"
},
{
"math_id": 15,
"text": "b^{-1} = 1 / b"
},
{
"math_id": 16,
"text": "b^{-n} = 1 / b^n"
},
{
"math_id": 17,
"text": "\\sqrt{b}"
},
{
"math_id": 18,
"text": "r"
},
{
"math_id": 19,
"text": " b^r = \\sqrt{b}"
},
{
"math_id": 20,
"text": " \\sqrt{b} \\times \\sqrt{b} = b "
},
{
"math_id": 21,
"text": " b^r \\times b^r = b "
},
{
"math_id": 22,
"text": " b^{r+r} = b "
},
{
"math_id": 23,
"text": " b "
},
{
"math_id": 24,
"text": " b^1 "
},
{
"math_id": 25,
"text": " b^{r+r} = b^1 "
},
{
"math_id": 26,
"text": " r+r = 1 "
},
{
"math_id": 27,
"text": " r = \\tfrac{1}{2} "
},
{
"math_id": 28,
"text": "\\sqrt{b} = b^{1/2} "
},
{
"math_id": 29,
"text": "b^{n+1} = b^n \\cdot b."
},
{
"math_id": 30,
"text": "b^{m+n} = b^m \\cdot b^n,"
},
{
"math_id": 31,
"text": "(b^m)^n=b^{mn}."
},
{
"math_id": 32,
"text": "b^0=1."
},
{
"math_id": 33,
"text": "b^{m+n}=b^m\\cdot b^n"
},
{
"math_id": 34,
"text": "n=0"
},
{
"math_id": 35,
"text": "b^{-n} = \\frac{1}{b^n}"
},
{
"math_id": 36,
"text": "\\infty"
},
{
"math_id": 37,
"text": "m=-n"
},
{
"math_id": 38,
"text": "x^{-1}."
},
{
"math_id": 39,
"text": "\\begin{align}\n b^{m + n} &= b^m \\cdot b^n \\\\\n \\left(b^m\\right)^n &= b^{m \\cdot n} \\\\\n (b \\cdot c)^n &= b^n \\cdot c^n\n\\end{align}"
},
{
"math_id": 40,
"text": "b^{p^q} = b^{\\left(p^q\\right)},"
},
{
"math_id": 41,
"text": "\\left(b^p\\right)^q = b^{p q} ."
},
{
"math_id": 42,
"text": "(a+b)^n=\\sum_{i=0}^n \\binom{n}{i}a^ib^{n-i}=\\sum_{i=0}^n \\frac{n!}{i!(n-i)!}a^ib^{n-i}."
},
{
"math_id": 43,
"text": "1/0^{-n}"
},
{
"math_id": 44,
"text": "1/0"
},
{
"math_id": 45,
"text": "f(x) = cx^n"
},
{
"math_id": 46,
"text": "c \\ne 0"
},
{
"math_id": 47,
"text": "n \\ge 1"
},
{
"math_id": 48,
"text": "c > 0"
},
{
"math_id": 49,
"text": "x"
},
{
"math_id": 50,
"text": "y=cx^2"
},
{
"math_id": 51,
"text": "f(-x)= f(x)"
},
{
"math_id": 52,
"text": "f(x)"
},
{
"math_id": 53,
"text": "y=cx^3"
},
{
"math_id": 54,
"text": "n=1"
},
{
"math_id": 55,
"text": "f(-x)= -f(x)"
},
{
"math_id": 56,
"text": "c < 0"
},
{
"math_id": 57,
"text": "x^{1/n}"
},
{
"math_id": 58,
"text": "\\sqrt[n]x"
},
{
"math_id": 59,
"text": "y^n=x."
},
{
"math_id": 60,
"text": "\\frac pq"
},
{
"math_id": 61,
"text": "x^{p/q}"
},
{
"math_id": 62,
"text": "x^\\frac pq= \\left(x^p\\right)^\\frac 1q=(x^\\frac 1q)^p."
},
{
"math_id": 63,
"text": "y=x^\\frac 1q,"
},
{
"math_id": 64,
"text": "(x^\\frac 1q)^p=y^p=\\left((y^p)^q\\right)^\\frac 1q=\\left((y^q)^p\\right)^\\frac 1q=(x^p)^\\frac 1q."
},
{
"math_id": 65,
"text": "(x^r)^s = x^{rs}"
},
{
"math_id": 66,
"text": "x^\\frac 1n,"
},
{
"math_id": 67,
"text": "(x^a)^b=x^{ab}"
},
{
"math_id": 68,
"text": "\\left((-1)^2\\right)^\\frac 12 = 1^\\frac 12= 1\\neq (-1)^{2\\cdot\\frac 12} =(-1)^1=-1."
},
{
"math_id": 69,
"text": "\\left(b^r\\right)^s = b^{r s}"
},
{
"math_id": 70,
"text": " b^x = \\lim_{r (\\in \\mathbb{Q}) \\to x} b^r \\quad (b \\in \\mathbb{R}^+,\\, x \\in \\mathbb{R}),"
},
{
"math_id": 71,
"text": "b^\\pi:"
},
{
"math_id": 72,
"text": "\\left[b^3, b^4\\right], \\left[b^{3.1}, b^{3.2}\\right], \\left[b^{3.14}, b^{3.15}\\right], \\left[b^{3.141}, b^{3.142}\\right], \\left[b^{3.1415}, b^{3.1416}\\right], \\left[b^{3.14159}, b^{3.14160}\\right], \\ldots"
},
{
"math_id": 73,
"text": "b^\\pi."
},
{
"math_id": 74,
"text": "b^x"
},
{
"math_id": 75,
"text": "x\\mapsto e^x,"
},
{
"math_id": 76,
"text": "e\\approx 2.718"
},
{
"math_id": 77,
"text": "\\exp(x),"
},
{
"math_id": 78,
"text": "\\exp(x)=e^x."
},
{
"math_id": 79,
"text": "\\exp(x) = \\lim_{n\\rightarrow\\infty} \\left(1 + \\frac{x}{n}\\right)^n."
},
{
"math_id": 80,
"text": "\\exp(0)=1,"
},
{
"math_id": 81,
"text": "\\exp(x+y)=\\exp(x)\\exp(y)"
},
{
"math_id": 82,
"text": "\\exp(x)\\exp(y) = \\lim_{n\\rightarrow\\infty} \\left(1 + \\frac{x}{n}\\right)^n\\left(1 + \\frac{y}{n}\\right)^n = \\lim_{n\\rightarrow\\infty} \\left(1 + \\frac{x+y}{n} + \\frac{xy}{n^2}\\right)^n,"
},
{
"math_id": 83,
"text": "\\frac{xy}{n^2}"
},
{
"math_id": 84,
"text": "\\exp(x)\\exp(y) = \\exp(x+y)"
},
{
"math_id": 85,
"text": "e=\\exp(1)"
},
{
"math_id": 86,
"text": "\\exp(x)=e^x"
},
{
"math_id": 87,
"text": "\\exp(z)"
},
{
"math_id": 88,
"text": "e^z,"
},
{
"math_id": 89,
"text": "b = \\exp(\\ln b)=e^{\\ln b}"
},
{
"math_id": 90,
"text": "(e^x)^y=e^{xy},"
},
{
"math_id": 91,
"text": "b^x=\\left(e^{\\ln b} \\right)^x = e^{x \\ln b}"
},
{
"math_id": 92,
"text": "e^{x \\ln b}"
},
{
"math_id": 93,
"text": "b^z = e^{(z\\ln b)},"
},
{
"math_id": 94,
"text": "\\ln b"
},
{
"math_id": 95,
"text": "b^{z+t} = b^z b^t,"
},
{
"math_id": 96,
"text": "\\left(b^z\\right)^t"
},
{
"math_id": 97,
"text": "\\left(b^z\\right)^t \\ne b^{zt},"
},
{
"math_id": 98,
"text": "e^{iy} = \\cos y + i \\sin y,"
},
{
"math_id": 99,
"text": "b^z"
},
{
"math_id": 100,
"text": "b^{x+iy}= b^x(\\cos(y\\ln b)+i\\sin(y\\ln b)),"
},
{
"math_id": 101,
"text": "b^{x+iy}=b^x b^{iy}=b^x e^{iy\\ln b} =b^x(\\cos(y\\ln b)+i\\sin(y\\ln b))."
},
{
"math_id": 102,
"text": "1/n,"
},
{
"math_id": 103,
"text": "z=\\rho e^{i\\theta}=\\rho(\\cos \\theta +i \\sin \\theta),"
},
{
"math_id": 104,
"text": "\\rho"
},
{
"math_id": 105,
"text": "\\theta"
},
{
"math_id": 106,
"text": "\\theta +2k\\pi"
},
{
"math_id": 107,
"text": "k"
},
{
"math_id": 108,
"text": "\\left(\\rho e^{i\\theta}\\right)^\\frac 1n=\\sqrt[n]\\rho \\,e^\\frac{i\\theta}n."
},
{
"math_id": 109,
"text": "2\\pi"
},
{
"math_id": 110,
"text": "2i\\pi/n"
},
{
"math_id": 111,
"text": "-\\pi<\\theta\\le \\pi,"
},
{
"math_id": 112,
"text": "2\\pi,"
},
{
"math_id": 113,
"text": "e^{2i\\pi/n}"
},
{
"math_id": 114,
"text": "\\omega =e^\\frac{2\\pi i}{n}"
},
{
"math_id": 115,
"text": "1=\\omega^0=\\omega^n, \\omega=\\omega^1, \\omega^2, \\omega^{n-1}."
},
{
"math_id": 116,
"text": "\\omega^k=e^\\frac{2k\\pi i}{n},"
},
{
"math_id": 117,
"text": "-1;"
},
{
"math_id": 118,
"text": "i"
},
{
"math_id": 119,
"text": "-i."
},
{
"math_id": 120,
"text": "e^\\frac{2k\\pi i}{n}"
},
{
"math_id": 121,
"text": "1^{1/n}"
},
{
"math_id": 122,
"text": "z^w"
},
{
"math_id": 123,
"text": "z^w=e^{w\\log z},"
},
{
"math_id": 124,
"text": "\\log z"
},
{
"math_id": 125,
"text": "e^{\\log z}=z"
},
{
"math_id": 126,
"text": "\\log,"
},
{
"math_id": 127,
"text": "e^{\\log z}=z,"
},
{
"math_id": 128,
"text": "-\\pi <\\operatorname{Arg}z \\le \\pi."
},
{
"math_id": 129,
"text": "z=0,"
},
{
"math_id": 130,
"text": "\\log z=\\ln z."
},
{
"math_id": 131,
"text": "z^w"
},
{
"math_id": 132,
"text": "(z,w)\\to z^w"
},
{
"math_id": 133,
"text": "w=1/n,"
},
{
"math_id": 134,
"text": "2ik\\pi +\\log z,"
},
{
"math_id": 135,
"text": "e^{w(2ik\\pi +\\log z)} = z^we^{2ik\\pi w},"
},
{
"math_id": 136,
"text": "e^a=e^b"
},
{
"math_id": 137,
"text": "a-b"
},
{
"math_id": 138,
"text": "2\\pi i."
},
{
"math_id": 139,
"text": "w=\\frac mn"
},
{
"math_id": 140,
"text": "n>0,"
},
{
"math_id": 141,
"text": "m=1,"
},
{
"math_id": 142,
"text": "z\\ne 0,"
},
{
"math_id": 143,
"text": "x+iy"
},
{
"math_id": 144,
"text": "z=a+ib"
},
{
"math_id": 145,
"text": "z=\\rho e^{i\\theta}= \\rho (\\cos\\theta + i \\sin\\theta),"
},
{
"math_id": 146,
"text": "\\rho=\\sqrt{a^2+b^2}"
},
{
"math_id": 147,
"text": "\\theta=\\operatorname{atan2}(a,b)"
},
{
"math_id": 148,
"text": "\\log z=\\ln \\rho+i\\theta,"
},
{
"math_id": 149,
"text": "\\ln"
},
{
"math_id": 150,
"text": "2ik\\pi"
},
{
"math_id": 151,
"text": "w\\log z."
},
{
"math_id": 152,
"text": "w=c+di"
},
{
"math_id": 153,
"text": "w\\log z"
},
{
"math_id": 154,
"text": "w\\log z = (c\\ln \\rho - d\\theta-2dk\\pi) +i (d\\ln \\rho + c\\theta+2ck\\pi),"
},
{
"math_id": 155,
"text": "k=0."
},
{
"math_id": 156,
"text": "e^{x+y}=e^xe^y"
},
{
"math_id": 157,
"text": "e^{y\\ln x} = x^y,"
},
{
"math_id": 158,
"text": "z^w=\\rho^c e^{-d(\\theta+2k\\pi)} \\left(\\cos (d\\ln \\rho + c\\theta+2ck\\pi) +i\\sin(d\\ln \\rho + c\\theta+2ck\\pi)\\right),"
},
{
"math_id": 159,
"text": "k=0"
},
{
"math_id": 160,
"text": "i^i"
},
{
"math_id": 161,
"text": "i=e^{i\\pi/2},"
},
{
"math_id": 162,
"text": "\\log i"
},
{
"math_id": 163,
"text": "\\log i=i\\left(\\frac \\pi 2 +2k\\pi\\right)."
},
{
"math_id": 164,
"text": "i^i=e^{i\\log i}=e^{-\\frac \\pi 2} e^{-2k\\pi}."
},
{
"math_id": 165,
"text": " e^{-\\frac \\pi 2} \\approx 0.2079."
},
{
"math_id": 166,
"text": "(-2)^{3+4i}"
},
{
"math_id": 167,
"text": "-2 = 2e^{i \\pi}."
},
{
"math_id": 168,
"text": "\\begin{align}\n(-2)^{3 + 4i} &= 2^3 e^{-4(\\pi+2k\\pi)} (\\cos(4\\ln 2 + 3(\\pi +2k\\pi)) +i\\sin(4\\ln 2 + 3(\\pi+2k\\pi)))\\\\\n&=-2^3 e^{-4(\\pi+2k\\pi)}(\\cos(4\\ln 2) +i\\sin(4\\ln 2)).\n\\end{align}"
},
{
"math_id": 169,
"text": "4\\ln 2,"
},
{
"math_id": 170,
"text": "b\\not\\in \\{0,1\\},"
},
{
"math_id": 171,
"text": "x^0 = 1,"
},
{
"math_id": 172,
"text": "x^{n+1} = x x^n"
},
{
"math_id": 173,
"text": "x^n"
},
{
"math_id": 174,
"text": "\\left(x^{-1}\\right)^{-n}."
},
{
"math_id": 175,
"text": "\\begin{align}\nx^0&=1\\\\\nx^{m+n}&=x^m x^n\\\\\n(x^m)^n&=x^{mn}\\\\\n(xy)^n&=x^n y^n \\quad \\text{if } xy=yx, \\text{and, in particular, if the multiplication is commutative.}\n\\end{align}"
},
{
"math_id": 176,
"text": "f^n"
},
{
"math_id": 177,
"text": "f^{\\circ n}"
},
{
"math_id": 178,
"text": "(f^n)(x)=(f(x))^n=f(x) \\,f(x) \\cdots f(x),"
},
{
"math_id": 179,
"text": "(f^{\\circ n})(x)=f(f(\\cdots f(f(x))\\cdots))."
},
{
"math_id": 180,
"text": "(f^n)(x)"
},
{
"math_id": 181,
"text": "f(x)^n,"
},
{
"math_id": 182,
"text": "(f^{\\circ n})(x)"
},
{
"math_id": 183,
"text": "f^n(x)."
},
{
"math_id": 184,
"text": "x\\in G"
},
{
"math_id": 185,
"text": "\\Z"
},
{
"math_id": 186,
"text": "x^n=x^0=1,"
},
{
"math_id": 187,
"text": "(g^h)^k=g^{hk}"
},
{
"math_id": 188,
"text": "(gh)^k=g^kh^k."
},
{
"math_id": 189,
"text": "x^n=0"
},
{
"math_id": 190,
"text": "x\\neq 0"
},
{
"math_id": 191,
"text": "x^n\\neq 0"
},
{
"math_id": 192,
"text": "k[x_1, \\ldots, x_n]"
},
{
"math_id": 193,
"text": "A^0"
},
{
"math_id": 194,
"text": "A^{-n} = \\left(A^{-1}\\right)^n"
},
{
"math_id": 195,
"text": "A^2x"
},
{
"math_id": 196,
"text": "A^nx"
},
{
"math_id": 197,
"text": "A^n"
},
{
"math_id": 198,
"text": "d/dx"
},
{
"math_id": 199,
"text": "(d/dx)f(x) = f'(x)"
},
{
"math_id": 200,
"text": "\\left(\\frac{d}{dx}\\right)^nf(x) = \\frac{d^n}{dx^n}f(x) = f^{(n)}(x)."
},
{
"math_id": 201,
"text": "q=p^k,"
},
{
"math_id": 202,
"text": "\\mathbb F_q."
},
{
"math_id": 203,
"text": "x^q=x"
},
{
"math_id": 204,
"text": "x\\in \\mathbb F_q."
},
{
"math_id": 205,
"text": "\\mathbb F_q"
},
{
"math_id": 206,
"text": "\\{g^1=g, g^2, \\ldots, g^{p-1}=g^0=1\\}"
},
{
"math_id": 207,
"text": "\\varphi (p-1)"
},
{
"math_id": 208,
"text": "\\mathbb F_q,"
},
{
"math_id": 209,
"text": "\\varphi"
},
{
"math_id": 210,
"text": "(x+y)^p = x^p+y^p"
},
{
"math_id": 211,
"text": "x^p=x"
},
{
"math_id": 212,
"text": "\\begin{align}\nF\\colon{} & \\mathbb F_q \\to \\mathbb F_q\\\\\n& x\\mapsto x^p\n\\end{align}"
},
{
"math_id": 213,
"text": "g^e"
},
{
"math_id": 214,
"text": "(x,y)"
},
{
"math_id": 215,
"text": "x\\in S"
},
{
"math_id": 216,
"text": "y\\in T."
},
{
"math_id": 217,
"text": "(x,(y,z)),"
},
{
"math_id": 218,
"text": "((x,y),z),"
},
{
"math_id": 219,
"text": "(x,y,z)."
},
{
"math_id": 220,
"text": "S^n"
},
{
"math_id": 221,
"text": "(x_1, \\ldots, x_n)"
},
{
"math_id": 222,
"text": "\\R^n"
},
{
"math_id": 223,
"text": "\\R"
},
{
"math_id": 224,
"text": "\\R,"
},
{
"math_id": 225,
"text": "\\{1,\\ldots, n\\}."
},
{
"math_id": 226,
"text": "S^T"
},
{
"math_id": 227,
"text": "(S^T)^U\\cong S^{T\\times U},"
},
{
"math_id": 228,
"text": "S^{T\\sqcup U}\\cong S^T\\times S^U,"
},
{
"math_id": 229,
"text": "\\times"
},
{
"math_id": 230,
"text": "\\sqcup"
},
{
"math_id": 231,
"text": "\\R^\\N"
},
{
"math_id": 232,
"text": "\\R^{(\\N)}"
},
{
"math_id": 233,
"text": "\\{0,1\\}."
},
{
"math_id": 234,
"text": "2^S"
},
{
"math_id": 235,
"text": "\\{0,1\\},"
},
{
"math_id": 236,
"text": "Y^X"
},
{
"math_id": 237,
"text": "\\hom(X,Y)."
},
{
"math_id": 238,
"text": "(S^T)^U\\cong S^{T\\times U}"
},
{
"math_id": 239,
"text": "\\hom(U,S^T)\\cong \\hom(T\\times U,S)."
},
{
"math_id": 240,
"text": "X\\to X^T"
},
{
"math_id": 241,
"text": "Y\\to T\\times Y."
},
{
"math_id": 242,
"text": "Y\\to X\\times Y"
},
{
"math_id": 243,
"text": "f(x,y) = x^y"
},
{
"math_id": 244,
"text": " D = \\{(x, y) \\in \\mathbf{R}^2 : x > 0 \\}"
},
{
"math_id": 245,
"text": "100 = 2^2 +2^5 + 2^6 = 2^2(1+2^3(1+2))"
},
{
"math_id": 246,
"text": "\\sharp n +\\lfloor \\log_{2} n\\rfloor -1,"
},
{
"math_id": 247,
"text": "\\sharp n"
},
{
"math_id": 248,
"text": "g\\circ f,"
},
{
"math_id": 249,
"text": "(g\\circ f)(x)=g(f(x))"
},
{
"math_id": 250,
"text": "f^3(x)"
},
{
"math_id": 251,
"text": "f(f(f(x)))."
},
{
"math_id": 252,
"text": "f^2(x)= f(f(x)),"
},
{
"math_id": 253,
"text": "f(x)^2= f(x)\\cdot f(x)."
},
{
"math_id": 254,
"text": "f^{\\circ 3}=f\\circ f \\circ f,"
},
{
"math_id": 255,
"text": "f^3=f\\cdot f\\cdot f."
},
{
"math_id": 256,
"text": "\\sin^2 x"
},
{
"math_id": 257,
"text": "\\sin^2(x)"
},
{
"math_id": 258,
"text": "\\sin(x)\\cdot\\sin(x)"
},
{
"math_id": 259,
"text": "\\sin(\\sin(x)),"
},
{
"math_id": 260,
"text": "-1"
},
{
"math_id": 261,
"text": "\\sin^{-1}x=\\sin^{-1}(x) = \\arcsin x."
},
{
"math_id": 262,
"text": "1/\\sin(x)=\\frac 1{\\sin x}."
}
] | https://en.wikipedia.org/wiki?curid=99491 |
9949339 | Two-dimensional point vortex gas | The two-dimensional point vortex gas is a discrete particle model used to study turbulence in two-dimensional ideal fluids. The two-dimensional guiding-center plasma is a completely equivalent model used in plasma physics.
General setup.
The model is a Hamiltonian system of "N" points in the two-dimensional plane executing the motion
formula_0
Interpretations.
In the point-vortex gas interpretation, the particles represent either point vortices in a two-dimensional fluid, or parallel line vortices in a three-dimensional fluid. The constant "k""i" is the circulation of the fluid around the "i"th vortex. The Hamiltonian "H" is the interaction term of the fluid's integrated kinetic energy; it may be either positive or negative. The equations of motion simply reflect the drift of each vortex's position in the velocity field of the other vortices.
In the guiding-center plasma interpretation, the particles represent long filaments of charge parallel to some external magnetic field. The constant "k""i" is the linear charge density of the "i"th filament. The Hamiltonian "H" is just the two-dimensional Coulomb potential between lines. The equations of motion reflect the guiding center drift of the charge filaments, hence the name.
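As an illustration (not part of the original article), the following Python sketch integrates the equations of motion for two equal point vortices. It assumes the standard logarithmic interaction Hamiltonian, H = −(1/4π) Σ_{i≠j} k_i k_j ln|r_i − r_j|; the sign and 2π conventions follow one common choice, and all numerical parameters are arbitrary toy values.

```python
# Illustrative sketch: forward-Euler integration of the point-vortex equations of motion.
import math

def velocities(xs, ys, ks):
    """Velocity induced on each vortex by all the others (2D Biot-Savart law)."""
    n = len(ks)
    us, vs = [0.0] * n, [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx, dy = xs[i] - xs[j], ys[i] - ys[j]
            r2 = dx * dx + dy * dy
            us[i] += -ks[j] * dy / (2 * math.pi * r2)
            vs[i] += ks[j] * dx / (2 * math.pi * r2)
    return us, vs

# Two equal vortices rotate around their midpoint at (nearly) constant separation.
xs, ys, ks = [-0.5, 0.5], [0.0, 0.0], [1.0, 1.0]
dt = 1e-3
for _ in range(10000):
    us, vs = velocities(xs, ys, ks)
    xs = [x + dt * u for x, u in zip(xs, us)]
    ys = [y + dt * v for y, v in zip(ys, vs)]

# Separation stays near 1.0; it drifts slowly because forward Euler is not symplectic.
print(math.hypot(xs[0] - xs[1], ys[0] - ys[1]))
```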
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k_i\\frac{dx_i}{dt} = \\frac{\\partial H}{\\partial y_i},\\qquad k_i\\frac{dy_i}{dt} = -\\frac{\\partial H}{\\partial x_i},"
}
] | https://en.wikipedia.org/wiki?curid=9949339 |
99494 | Presentation of a group | Specification of a mathematical group by generators and relations
In mathematics, a presentation is one method of specifying a group. A presentation of a group "G" comprises a set "S" of generators—so that every element of the group can be written as a product of powers of some of these generators—and a set "R" of relations among those generators. We then say "G" has presentation
formula_0
Informally, "G" has the above presentation if it is the "freest group" generated by "S" subject only to the relations "R". Formally, the group "G" is said to have the above presentation if it is isomorphic to the quotient of a free group on "S" by the normal subgroup generated by the relations "R".
As a simple example, the cyclic group of order "n" has the presentation
formula_1
where 1 is the group identity. This may be written equivalently as
formula_2
thanks to the convention that terms that do not include an equals sign are taken to be equal to the group identity. Such terms are called relators, distinguishing them from the relations that do include an equals sign.
Every group has a presentation, and in fact many different presentations; a presentation is often the most compact way of describing the structure of the group.
A closely related but different concept is that of an absolute presentation of a group.
Background.
A free group on a set "S" is a group where each element can be "uniquely" described as a finite length product of the form:
formula_3
where the "si" are elements of S, adjacent "si" are distinct, and "ai" are non-zero integers (but "n" may be zero). In less formal terms, the group consists of words in the generators "and their inverses", subject only to canceling a generator with an adjacent occurrence of its inverse.
If "G" is any group, and "S" is a generating subset of "G", then every element of "G" is also of the above form; but in general, these products will not "uniquely" describe an element of "G".
For example, the dihedral group D8 of order sixteen can be generated by a rotation, "r", of order 8; and a flip, "f", of order 2; and certainly any element of D8 is a product of "r"'s and "f"'s.
However, we have, for example, "rfr" = "f"−1, "r"7 = "r"−1, etc., so such products are "not unique" in D8. Each such product equivalence can be expressed as an equality to the identity, such as
"rfrf" = 1,
"r"8 = 1, or
"f"2 = 1.
Informally, we can consider these products on the left hand side as being elements of the free group "F" = ⟨"r", "f"⟩, and let "R" = ⟨"rfrf", "r"8, "f"2⟩. That is, we let "R" be the subgroup generated by the strings "rfrf", "r"8, "f"2, each of which is also equivalent to 1 when considered as products in D8.
If we then let "N" be the subgroup of "F" generated by all conjugates "x"−1"Rx" of "R", then it follows by definition that every element of "N" is a finite product "x"1−1"r"1"x"1 ... "xm"−1"rm" "xm" of members of such conjugates. It follows that each element of "N", when considered as a product in D8, will also evaluate to 1; and thus that "N" is a normal subgroup of "F". Thus D8 is isomorphic to the quotient group "F"/"N". We then say that D8 has presentation
formula_4
Here the set of generators is "S" = {"r", "f" }, and the set of relations is "R" = {"r" 8 = 1, "f" 2 = 1, ("rf" )2 = 1}. We often see "R" abbreviated, giving the presentation
formula_5
An even shorter form drops the equality and identity signs, to list just the set of relators, which is {"r" 8, "f" 2, ("rf" )2}. Doing this gives the presentation
formula_6
All three presentations are equivalent.
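The relations can be checked in a concrete realization of D8; the following Python sketch (an illustration added here) realizes "r" and "f" as permutations of the eight vertices of a regular octagon and verifies that each relator evaluates to the identity in that realization (which shows the relations hold there, not that the presentation is complete).

```python
# Illustrative sketch: checking the relators of D8 on permutations of the octagon's vertices.

def compose(p, q):
    """Permutation composition p∘q (apply q first), permutations as tuples of images."""
    return tuple(p[q[i]] for i in range(len(q)))

identity = tuple(range(8))
r = tuple((i + 1) % 8 for i in range(8))   # rotation by one step, order 8
f = tuple((-i) % 8 for i in range(8))      # a reflection, order 2

def power(p, n):
    out = identity
    for _ in range(n):
        out = compose(out, p)
    return out

rf = compose(r, f)
assert power(r, 8) == identity     # r^8 = 1
assert power(f, 2) == identity     # f^2 = 1
assert power(rf, 2) == identity    # (rf)^2 = 1, i.e. rfrf = 1
```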
Notation.
Although the notation used in this article for a presentation is now the most common, earlier writers used different variations on the same format. Such notations include the following:
Definition.
Let "S" be a set and let "FS" be the free group on "S". Let "R" be a set of words on "S", so "R" naturally gives a subset of formula_7. To form a group with presentation formula_8, take the quotient of formula_7 by the smallest normal subgroup that contains each element of "R". (This subgroup is called the normal closure "N" of "R" in formula_7.) The group formula_8 is then defined as the quotient group
formula_9
The elements of "S" are called the generators of formula_8 and the elements of "R" are called the relators. A group "G" is said to have the presentation formula_8 if "G" is isomorphic to formula_8.
It is a common practice to write relators in the form formula_10 where "x" and "y" are words on "S". What this means is that formula_11. This has the intuitive meaning that the images of "x" and "y" are supposed to be equal in the quotient group. Thus, for example, "rn" in the list of relators is equivalent with formula_12.
For a finite group "G", it is possible to build a presentation of "G" from the group multiplication table, as follows. Take "S" to be the set of elements formula_13 of "G" and "R" to be all words of the form formula_14, where formula_15 is an entry in the multiplication table.
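A small Python sketch (illustrative only; the cyclic group of order 3 and the relator formatting are choices made here) shows how the relators of this multiplication-table presentation are enumerated.

```python
# Illustrative sketch: relators g_a g_b g_c^{-1}, one per entry a*b = c of a multiplication table.
from itertools import product

elements = [0, 1, 2]
def multiply(a, b):
    return (a + b) % 3        # the cyclic group of order 3, written additively (toy example)

relators = []
for a, b in product(elements, repeat=2):
    c = multiply(a, b)
    relators.append(f"g{a} g{b} g{c}^-1")

print(len(relators))   # 9 relators, one per entry of the 3x3 table
print(relators[:3])
```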
Alternate definition.
The definition of group presentation may alternatively be recast in terms of equivalence classes of words on the alphabet formula_16. In this perspective, we declare two words to be equivalent if it is possible to get from one to the other by a sequence of moves, where each move consists of adding or removing a consecutive pair formula_17 or formula_18 for some x in S, or by adding or removing a consecutive copy of a relator. The group elements are the equivalence classes, and the group operation is concatenation.
This point of view is particularly common in the field of combinatorial group theory.
Finitely presented groups.
A presentation is said to be finitely generated if "S" is finite and finitely related if "R" is finite. If both are finite it is said to be a finite presentation. A group is finitely generated (respectively finitely related, finitely presented) if it has a presentation that is finitely generated (respectively finitely related, a finite presentation). A group which has a finite presentation with a single relation is called a one-relator group.
Recursively presented groups.
If "S" is indexed by a set "I" consisting of all the natural numbers N or a finite subset of them, then it is easy to set up a simple one to one coding (or Gödel numbering) "f" : "FS" → N from the free group on "S" to the natural numbers, such that we can find algorithms that, given "f"("w"), calculate "w", and vice versa. We can then call a subset "U" of "FS" recursive (respectively recursively enumerable) if "f"("U") is recursive (respectively recursively enumerable). If "S" is indexed as above and "R" recursively enumerable, then the presentation is a recursive presentation and the corresponding group is recursively presented. This usage may seem odd, but it is possible to prove that if a group has a presentation with "R" recursively enumerable then it has another one with "R" recursive.
Every finitely presented group is recursively presented, but there are recursively presented groups that cannot be finitely presented. However a theorem of Graham Higman states that a finitely generated group has a recursive presentation if and only if it can be embedded in a finitely presented group. From this we can deduce that there are (up to isomorphism) only countably many finitely generated recursively presented groups. Bernhard Neumann has shown that there are uncountably many non-isomorphic two generator groups. Therefore, there are finitely generated groups that cannot be recursively presented.
History.
One of the earliest presentations of a group by generators and relations was given by the Irish mathematician William Rowan Hamilton in 1856, in his icosian calculus – a presentation of the icosahedral group.
The first systematic study was given by Walther von Dyck, student of Felix Klein, in the early 1880s, laying the foundations for combinatorial group theory.
Examples.
The following table lists some examples of presentations for commonly studied groups. Note that in each case there are many other presentations that are possible. The presentation listed is not necessarily the most efficient one possible.
An example of a finitely generated group that is not finitely presented is the wreath product formula_19 of the group of integers with itself.
Some theorems.
Theorem. Every group has a presentation.
To see this, given a group "G", consider the free group "FG" on "G". By the universal property of free groups, there exists a unique group homomorphism φ : "FG" → "G" whose restriction to "G" is the identity map. Let "K" be the kernel of this homomorphism. Then "K" is normal in "FG", therefore is equal to its normal closure, so ⟨"G" | "K"⟩ = "FG"/"K". Since the identity map is surjective, "φ" is also surjective, so by the First Isomorphism Theorem, ⟨"G" | "K"⟩ ≅ im("φ") = "G". This presentation may be highly inefficient if both "G" and "K" are much larger than necessary.
Corollary. Every finite group has a finite presentation.
One may take the elements of the group for generators and the Cayley table for relations.
Novikov–Boone theorem.
The negative solution to the word problem for groups states that there is a finite presentation ⟨"S" | "R"⟩ for which there is no algorithm which, given two words "u", "v", decides whether "u" and "v" describe the same element in the group. This was shown by Pyotr Novikov in 1955 and a different proof was obtained by William Boone in 1958.
Constructions.
Suppose "G" has presentation ⟨"S" | "R"⟩ and "H" has presentation ⟨"T" | "Q"⟩ with "S" and "T" being disjoint. Then the free product "G" ∗ "H" has presentation ⟨"S", "T" | "R", "Q"⟩, and the direct product "G" × "H" has presentation ⟨"S", "T" | "R", "Q", ["S", "T"]⟩, where ["S", "T"] denotes the set of commutators "s"−1"t"−1"st" with "s" in "S" and "t" in "T".
Deficiency.
The deficiency of a finite presentation ⟨"S" | "R"⟩ is just |"S"| − |"R"|, and the "deficiency" of a finitely presented group "G", denoted def("G"), is the maximum of the deficiency over all presentations of "G". The deficiency of a finite group is non-positive. The Schur multiplicator of a finite group "G" can be generated by −def("G") generators, and "G" is "efficient" if this number is required.
Geometric group theory.
A presentation of a group determines a geometry, in the sense of geometric group theory: one has the Cayley graph, which has a metric, called the word metric. These are also two resulting orders, the "weak order" and the "Bruhat order", and corresponding Hasse diagrams. An important example is in the Coxeter groups.
Further, some properties of this graph (the coarse geometry) are intrinsic, meaning independent of choice of generators.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\langle S \\mid R\\rangle."
},
{
"math_id": 1,
"text": "\\langle a \\mid a^n = 1\\rangle,"
},
{
"math_id": 2,
"text": "\\langle a \\mid a^n\\rangle,"
},
{
"math_id": 3,
"text": "s_1^{a_1} s_2^{a_2} \\cdots s_n^{a_n}"
},
{
"math_id": 4,
"text": "\\langle r, f \\mid r^8 = 1, f^2 = 1, (rf)^2 = 1\\rangle."
},
{
"math_id": 5,
"text": "\\langle r, f \\mid r^8 = f^2 = (rf)^2 = 1\\rangle."
},
{
"math_id": 6,
"text": "\\langle r, f \\mid r^8, f^2, (rf)^2 \\rangle."
},
{
"math_id": 7,
"text": "F_S"
},
{
"math_id": 8,
"text": "\\langle S \\mid R\\rangle"
},
{
"math_id": 9,
"text": "\\langle S \\mid R \\rangle = F_S / N."
},
{
"math_id": 10,
"text": "x = y"
},
{
"math_id": 11,
"text": "y^{-1}x\\in R"
},
{
"math_id": 12,
"text": "r^n=1"
},
{
"math_id": 13,
"text": "g_i"
},
{
"math_id": 14,
"text": "g_ig_jg_k^{-1}"
},
{
"math_id": 15,
"text": "g_ig_j=g_k"
},
{
"math_id": 16,
"text": "S \\cup S^{-1}"
},
{
"math_id": 17,
"text": "x x^{-1}"
},
{
"math_id": 18,
"text": "x^{-1} x"
},
{
"math_id": 19,
"text": "\\mathbf{Z} \\wr \\mathbf{Z}"
}
] | https://en.wikipedia.org/wiki?curid=99494 |
995019 | Mass gap | Energy difference between ground state and lightest excited state(s)
In quantum field theory, the mass gap is the difference in energy between the lowest energy state, the vacuum, and the next lowest energy state. The energy of the vacuum is zero by definition, and assuming that all energy states can be thought of as particles in plane-waves, the mass gap is the mass of the lightest particle.
Since the energies of exact (i.e. nonperturbative) energy eigenstates are spread out and therefore they are not technically eigenstates, a more precise definition is that the mass gap is the greatest lower bound of the energy of any state which is orthogonal to the vacuum.
The analog of a mass gap in many-body physics on a discrete lattice arises from a gapped Hamiltonian.
Mathematical definitions.
For a given real-valued quantum field formula_0, where formula_1, we can say that the theory has a mass gap if the two-point function has the property
formula_2
with formula_3 being the lowest energy value in the spectrum of the Hamiltonian and thus the mass gap. This quantity, easy to generalize to other fields, is what is generally measured in lattice computations. It was proved in this way that Yang–Mills theory develops a mass gap on a lattice. The corresponding time-ordered value, the propagator, will have the property
formula_4
with the constant being finite. A typical example is offered by a free massive particle and, in this case, the constant has the value 1/"m"2. In the same limit, the propagator for a massless particle is singular.
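As an illustration of how the first property is used in practice (this sketch is not part of the original article; the energies and amplitudes below are synthetic toy values), the mass gap can be read off from the large-time decay of a two-point function through an effective mass.

```python
# Illustrative sketch: extracting a mass gap from the exponential decay of a correlator.
import math

delta, delta_excited = 0.5, 1.3     # assumed energies of the lowest and first excited state (toy values)
corr = [math.exp(-delta * t) + 0.2 * math.exp(-delta_excited * t) for t in range(25)]

# Effective mass m_eff(t) = ln(C(t)/C(t+1)) approaches the mass gap once the
# excited-state contamination has died away.
for t in (1, 5, 10, 20):
    print(t, round(math.log(corr[t] / corr[t + 1]), 4))   # tends to 0.5 as t grows
```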
Examples from classical theories.
An example of mass gap arising for massless theories, already at the classical level, can be seen in spontaneous breaking of symmetry or Higgs mechanism. In the former case, one has to cope with the appearance of massless excitations, Goldstone bosons, that are removed in the latter case due to gauge freedom. Quantization preserves this gauge freedom property.
A quartic massless scalar field theory develops a mass gap already at classical level. Consider the equation
formula_5
This equation has the exact solution
formula_6
—where formula_7 and formula_8 are integration constants, and sn is a Jacobi elliptic function—provided
formula_9
A mass gap thus appears already at the classical level, while at the quantum level one has a tower of excitations, and this property of the theory is preserved after quantization in the limit of momenta going to zero.
Yang–Mills theory.
While lattice computations have suggested that Yang–Mills theory indeed has a mass gap and a tower of excitations, a theoretical proof is still missing. This is one of the Clay Institute Millennium problems and it remains an open problem. Such states for Yang–Mills theory should be physical states, named glueballs, and should be observable in the laboratory.
Källén–Lehmann representation.
If the Källén–Lehmann spectral representation holds (at this stage we exclude gauge theories), the spectral density function can take a very simple form, with a discrete spectrum starting with a mass gap
formula_10
where formula_11 is the contribution from the multi-particle part of the spectrum. In this case, the propagator will take the simple form
formula_12
where formula_13 is approximately the starting point of the multi-particle sector. Now, using the fact that
formula_14
we arrive at the following conclusion for the constants in the spectral density
formula_15.
This need not be true in a gauge theory; rather, it must be proved that a Källén–Lehmann representation for the propagator also holds in this case. Absence of multi-particle contributions implies that the theory is trivial, as no bound states appear in the theory and so there is no interaction, even if the theory has a mass gap. In this case the propagator is obtained immediately by setting formula_16 in the formulas above.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\phi(x)"
},
{
"math_id": 1,
"text": "x = (\\boldsymbol{x},t)"
},
{
"math_id": 2,
"text": "\\langle\\phi(0,t)\\phi(0,0)\\rangle\\sim \\sum_nA_n\\exp\\left(-\\Delta_nt\\right)"
},
{
"math_id": 3,
"text": "\\Delta_0>0"
},
{
"math_id": 4,
"text": "\\lim_{p\\rightarrow 0}\\Delta(p)=\\mathrm{constant}"
},
{
"math_id": 5,
"text": "\\Box\\phi+\\lambda\\phi^3=0."
},
{
"math_id": 6,
"text": "\\phi(x)=\\mu\\left(\\frac{2}{\\lambda}\\right)^\\frac{1}{4}{\\rm sn}\\left(p\\cdot x+\\theta,-1\\right)"
},
{
"math_id": 7,
"text": "\\mu"
},
{
"math_id": 8,
"text": "\\theta"
},
{
"math_id": 9,
"text": "p^2=\\mu^2\\sqrt{\\frac{\\lambda}{2}}."
},
{
"math_id": 10,
"text": "\\rho(\\mu^2)=\\sum_{n=1}^NZ_n\\delta(\\mu^2-m_n^2)+\\rho_c(\\mu^2)"
},
{
"math_id": 11,
"text": "\\rho_c(\\mu^2)"
},
{
"math_id": 12,
"text": "\\Delta(p)=\\sum_{n=1}^N\\frac{Z_n}{p^2-m^2_n+i\\epsilon}+\\int_{4m_N^2}^\\infty d\\mu^2\\rho_c(\\mu^2)\\frac{1}{p^2-\\mu^2+i\\epsilon}"
},
{
"math_id": 13,
"text": "4m_N^2"
},
{
"math_id": 14,
"text": "\\int_0^\\infty d\\mu^2\\rho(\\mu^2)=1"
},
{
"math_id": 15,
"text": "1=\\sum_{n=1}^NZ_n+\\int_0^\\infty d\\mu^2\\rho_c(\\mu^2)"
},
{
"math_id": 16,
"text": "\\rho_c(\\mu^2)=0"
}
] | https://en.wikipedia.org/wiki?curid=995019 |
9950498 | Least squares inference in phylogeny | Generation of phylogenetic trees based on an observed matrix of pairwise genetic distances
Least squares inference in phylogeny generates a phylogenetic tree based on an observed matrix of pairwise genetic distances and optionally a weight matrix. The goal is to find a tree which satisfies the distance constraints as well as possible.
Ordinary and weighted least squares.
The discrepancy between the observed pairwise distances formula_0 and the distances formula_1 over a phylogenetic tree (i.e. the sum of the branch lengths in the path from leaf formula_2 to leaf formula_3) is measured by
formula_4
where the weights formula_5 depend on the least squares method used.
Least squares distance tree construction aims to find the tree (topology and branch lengths) with minimal S. This is a non-trivial problem. It involves searching the discrete space of unrooted binary tree topologies whose size is exponential in the number of leaves. For n leaves there are 1 • 3 • 5 • ... • (2n-3) different topologies. Enumerating them is already infeasible for a moderate number of leaves. Heuristic search methods are used to find a reasonably good topology. The evaluation of S for a given topology (which includes the computation of the branch lengths) is a linear least squares problem.
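As an illustration (not part of the original article; the topology, the numbering of the branches and the distance values are toy choices), the linear least squares step for one fixed four-leaf topology can be set up and solved as follows.

```python
# Illustrative sketch: ordinary least squares branch lengths for the fixed topology ((A,B),(C,D)).
# Branches are numbered 0..4: the four pendant branches A, B, C, D and the internal branch.
import numpy as np

# Rows: leaf pairs AB, AC, AD, BC, BD, CD; a 1 marks a branch lying on that path.
paths = np.array([
    [1, 1, 0, 0, 0],   # A-B
    [1, 0, 1, 0, 1],   # A-C (crosses the internal branch)
    [1, 0, 0, 1, 1],   # A-D
    [0, 1, 1, 0, 1],   # B-C
    [0, 1, 0, 1, 1],   # B-D
    [0, 0, 1, 1, 0],   # C-D
], dtype=float)

d = np.array([0.30, 0.45, 0.50, 0.49, 0.54, 0.35])   # observed distances (toy, exactly additive here)

branch_lengths = np.linalg.lstsq(paths, d, rcond=None)[0]
S = float(np.sum((paths @ branch_lengths - d) ** 2))  # the least squares score of this topology
print(branch_lengths.round(3), S)                     # S is ~0 because the toy data fit the tree exactly
```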
There are several ways to weight the squared errors formula_6, depending on the knowledge and assumptions about the variances of the observed distances. When nothing is known about the errors, or if they are assumed to be independently distributed and equal for all observed distances, then all the weights formula_5 are set to one. This leads to an ordinary least squares estimate. In the weighted least squares case the errors are assumed to be independent (or their correlations are not known). Given independent errors, a particular weight should ideally be set to the inverse of the variance of the corresponding distance estimate. Sometimes the variances may not be known, but they can be modeled as a function of the distance estimates. In the Fitch and Margoliash method for instance it is assumed that the variances are proportional to the squared distances.
Generalized least squares.
The ordinary and weighted least squares methods described above assume independent distance estimates. If the distances are derived from genomic data their estimates covary, because evolutionary events on internal branches (of the true tree) can push several distances up or down at the same time. The resulting covariances can be taken into account using the method of generalized least squares, i.e. minimizing the following quantity
formula_7
where formula_8 are the entries of the inverse of the covariance matrix of the distance estimates.
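A corresponding sketch for the generalized case (again purely illustrative, with an assumed toy covariance matrix for the six distance estimates and the same four-leaf design matrix as above) solves the weighted normal equations.

```python
# Illustrative sketch: generalized least squares branch lengths with a weight matrix W = cov^{-1}.
import numpy as np

A = np.array([[1, 1, 0, 0, 0], [1, 0, 1, 0, 1], [1, 0, 0, 1, 1],
              [0, 1, 1, 0, 1], [0, 1, 0, 1, 1], [0, 0, 1, 1, 0]], dtype=float)
d = np.array([0.31, 0.45, 0.52, 0.47, 0.56, 0.34])          # toy, slightly non-additive distances

cov = 0.002 * np.eye(6) + 0.0005 * np.ones((6, 6))          # assumed correlated errors (toy values)
W = np.linalg.inv(cov)

# Minimize (d - A b)^T W (d - A b): solve the normal equations (A^T W A) b = A^T W d.
b_gls = np.linalg.solve(A.T @ W @ A, A.T @ W @ d)
print(b_gls.round(3))
```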
Computational Complexity.
Finding the tree and branch lengths minimizing the least squares residual is an NP-complete problem. However, for a given tree, the optimal branch lengths can be determined in formula_9 time for ordinary least squares, formula_10 time for weighted least squares, and formula_11 time for generalised least squares (given the inverse of the covariance matrix).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "D_{ij}"
},
{
"math_id": 1,
"text": "T_{ij}"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "j"
},
{
"math_id": 4,
"text": " S = \\sum_{ij} w_{ij} (D_{ij}-T_{ij})^2 "
},
{
"math_id": 5,
"text": "w_{ij}"
},
{
"math_id": 6,
"text": "(D_{ij}-T_{ij})^2"
},
{
"math_id": 7,
"text": "\\sum_{ij, kl} w_{ij,kl} (D_{ij}-T_{ij}) (D_{kl}-T_{kl})"
},
{
"math_id": 8,
"text": "w_{ij,kl}"
},
{
"math_id": 9,
"text": "O(n^2)"
},
{
"math_id": 10,
"text": "O(n^3)"
},
{
"math_id": 11,
"text": "O(n^4)"
}
] | https://en.wikipedia.org/wiki?curid=9950498 |
9950644 | Schröder–Bernstein theorems for operator algebras | The Schröder–Bernstein theorem from set theory has analogs in the context operator algebras. This article discusses such operator-algebraic results.
For von Neumann algebras.
Suppose M is a von Neumann algebra and "E", "F" are projections in M. Let ~ denote the Murray-von Neumann equivalence relation on M. Define a partial order « on the family of projections by "E" « "F" if "E" ~ "F' " ≤ "F". In other words, "E" « "F" if there exists a partial isometry "U" ∈ M such that "U*U" = "E" and "UU*" ≤ "F".
For closed subspaces "M" and "N" where projections "PM" and "PN", onto "M" and "N" respectively, are elements of M, "M" « "N" if "PM" « "PN".
The Schröder–Bernstein theorem states that if "M" « "N" and "N" « "M", then "M" ~ "N".
A proof, one that is similar to a set-theoretic argument, can be sketched as follows. Colloquially, "N" « "M" means that "N" can be isometrically embedded in "M". So
formula_0
where "N"0 is an isometric copy of "N" in "M". By assumption, it is also true that "N", and therefore "N"0, contains an isometric copy "M"1 of "M". Therefore, one can write
formula_1
By induction,
formula_2
It is clear that
formula_3
Let
formula_4
So
formula_5
and
formula_6
Notice
formula_7
The theorem now follows from the countable additivity of ~.
Representations of C*-algebras.
There is also an analog of Schröder–Bernstein for representations of C*-algebras. If "A" is a C*-algebra, a representation of "A" is a *-homomorphism "φ" from "A" into "L"("H"), the bounded operators on some Hilbert space "H".
If there exists a projection "P" in "L"("H") where "P" "φ"("a") = "φ"("a") "P" for every "a" in "A", then a subrepresentation "σ" of "φ" can be defined in a natural way: "σ"("a") is "φ"("a") restricted to the range of "P". So "φ" then can be expressed as a direct sum of two subrepresentations "φ" = "φ' " ⊕ "σ".
Two representations "φ"1 and "φ"2, on "H"1 and "H"2 respectively, are said to be unitarily equivalent if there exists a unitary operator "U": "H"2 → "H"1 such that "φ"1("a")"U" = "Uφ"2("a"), for every "a".
In this setting, the Schröder–Bernstein theorem reads:
If two representations "ρ" and "σ", on Hilbert spaces "H" and "G" respectively, are each unitarily equivalent to a subrepresentation of the other, then they are unitarily equivalent.
A proof that resembles the previous argument can be outlined. The assumption implies that there exist surjective partial isometries from "H" to "G" and from "G" to "H". Fix two such partial isometries for the argument. One has
formula_8
In turn,
formula_9
By induction,
formula_10
and
formula_11
Now each additional summand in the direct sum expression is obtained using one of the two fixed partial isometries, so
formula_12
This proves the theorem.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M = M_0 \\supset N_0"
},
{
"math_id": 1,
"text": "M = M_0 \\supset N_0 \\supset M_1."
},
{
"math_id": 2,
"text": "M = M_0 \\supset N_0 \\supset M_1 \\supset N_1 \\supset M_2 \\supset N_2 \\supset \\cdots ."
},
{
"math_id": 3,
"text": "R = \\cap_{i \\geq 0} M_i = \\cap_{i \\geq 0} N_i."
},
{
"math_id": 4,
"text": "M \\ominus N \\stackrel{\\mathrm{def}}{=} M \\cap (N)^{\\perp}."
},
{
"math_id": 5,
"text": "\nM = \\oplus_{i \\geq 0} ( M_i \\ominus N_i ) \\quad \\oplus \\quad \\oplus_{j \\geq 0} ( N_j \\ominus M_{j+1}) \\quad \\oplus R\n"
},
{
"math_id": 6,
"text": "\nN_0 = \\oplus_{i \\geq 1} ( M_i \\ominus N_i ) \\quad \\oplus \\quad \\oplus_{j \\geq 0} ( N_j \\ominus M_{j+1}) \\quad \\oplus R.\n"
},
{
"math_id": 7,
"text": "M_i \\ominus N_i \\sim M \\ominus N \\quad \\mbox{for all} \\quad i."
},
{
"math_id": 8,
"text": "\\rho = \\rho_1 \\simeq \\rho_1 ' \\oplus \\sigma_1 \\quad \\mbox{where} \\quad \\sigma_1 \\simeq \\sigma."
},
{
"math_id": 9,
"text": "\\rho_1 \\simeq \\rho_1 ' \\oplus (\\sigma_1 ' \\oplus \\rho_2) \\quad \\mbox{where} \\quad \\rho_2 \\simeq \\rho ."
},
{
"math_id": 10,
"text": "\n\\rho_1 \\simeq \\rho_1 ' \\oplus \\sigma_1 ' \\oplus \\rho_2' \\oplus \\sigma_2 ' \\cdots \\simeq ( \\oplus_{i \\geq 1} \\rho_i ' ) \\oplus \n( \\oplus_{i \\geq 1} \\sigma_i '),\n"
},
{
"math_id": 11,
"text": "\n\\sigma_1 \\simeq \\sigma_1 ' \\oplus \\rho_2' \\oplus \\sigma_2 ' \\cdots \\simeq ( \\oplus_{i \\geq 2} \\rho_i ' ) \\oplus \n( \\oplus_{i \\geq 1} \\sigma_i ').\n"
},
{
"math_id": 12,
"text": "\n\\rho_i ' \\simeq \\rho_j ' \\quad \\mbox{and} \\quad \\sigma_i ' \\simeq \\sigma_j ' \\quad \\mbox{for all} \\quad i,j \\;. \n"
}
] | https://en.wikipedia.org/wiki?curid=9950644 |
995084 | John Selfridge | John Lewis Selfridge (February 17, 1927 – October 31, 2010), was an American mathematician who contributed to the fields of analytic number theory, computational number theory, and combinatorics.
Education.
Selfridge received his Ph.D. in 1958 from the University of California, Los Angeles under the supervision of Theodore Motzkin.
Career.
Selfridge served on the faculties of the University of Illinois at Urbana-Champaign and Northern Illinois University (NIU) from 1971 to 1991 (retirement), chairing the NIU Department of Mathematical Sciences 1972–1976 and 1986–1990.
He was executive editor of "Mathematical Reviews" from 1978 to 1986, overseeing the computerization of its operations. He was a founder of the Number Theory Foundation, which has named its Selfridge prize in his honour.
Research.
In 1962, he proved that 78,557 is a Sierpinski number; he showed that, when "k" = 78,557, all numbers of the form "k"2"n" + 1 have a factor in the covering set {3, 5, 7, 13, 19, 37, 73}. Five years later, he and Sierpiński conjectured that 78,557 is the smallest Sierpinski number, and thus the answer to the Sierpinski problem. A distributed computing project, Seventeen or Bust, is devoted to finding a computational proof of this statement.
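This covering-set property is easy to verify numerically for a range of exponents; the following Python sketch (an illustration added here; the bound of 5000 is arbitrary) checks it.

```python
# Illustrative sketch: every 78557*2^n + 1 (n >= 1) is divisible by a member of the covering set.
covering_set = [3, 5, 7, 13, 19, 37, 73]
k = 78557

for n in range(1, 5001):
    # work modulo each small prime instead of building the huge number itself
    assert any((k * pow(2, n, p) + 1) % p == 0 for p in covering_set), n
print("every tested 78557*2^n + 1 has a factor in the covering set")
```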
In 1964, Selfridge and Alexander Hurwitz proved that the 14th Fermat number formula_0 was composite.
However, their proof did not provide a factor. It was not until 2010 that the first factor of the 14th Fermat number was found.
In 1975 John Brillhart, Derrick Henry Lehmer, and Selfridge developed a method of proving the primality of p given only partial factorizations of "p" − 1 and "p" + 1.
Together with Samuel Wagstaff they also all participated in the Cunningham project.
Together with Paul Erdős, Selfridge solved a 150-year-old problem, proving that the product of consecutive numbers is never a power. It took them many years to find the proof, and John made extensive use of computers, but the final version of the proof requires only a modest amount of computation, namely evaluating an easily computed function f(n) for 30,000 consecutive values of "n". Selfridge suffered from writer's block and thanked "R. B. Eggleton for reorganizing and writing the paper in its final form".
Selfridge also developed the Selfridge–Conway discrete procedure for creating an envy-free cake-cutting among three people. Selfridge developed this in 1960, and John Conway independently discovered it in 1993. Neither of them ever published the result, but Richard Guy told many people Selfridge's solution in the 1960s, and it was eventually attributed to the two of them in a number of books and articles.
Selfridge's conjecture about Fermat numbers.
Selfridge made the following conjecture about the Fermat numbers "Fn" = 22"n" + 1. Let "g"("n") be the number of distinct prime factors of "Fn" (sequence in the OEIS). As of 2024, "g"("n") is known only up to "n" = 11, and it is monotonic. Selfridge conjectured that, contrary to appearances, "g"("n") is "not" monotonic. In support of his conjecture he showed that a sufficient (but not necessary) condition for its truth is the existence of another Fermat "prime" beyond the five known (3, 5, 17, 257, 65537).
Selfridge's conjecture about primality testing.
This conjecture is also called the PSW conjecture, after Selfridge, Carl Pomerance, and Samuel Wagstaff.
Let "p" be an odd number, with "p" ≡ ± 2 (mod 5). Selfridge conjectured that if both 2^("p"−1) ≡ 1 (mod "p") and "f""p"+1 ≡ 0 (mod "p") hold,
where "f""k" is the "k"th Fibonacci number, then "p" is a prime number, and he offered $500 for an example disproving this. He also offered $20 for a proof that the conjecture was true. The Number Theory Foundation will now cover this prize. An example will actually yield you $620 because Samuel Wagstaff offers $100 for an example or a proof, and Carl Pomerance offers $20 for an example and $500 for a proof. Selfridge requires that a factorization be supplied, but Pomerance does not. The related test that "f""p"−1 ≡ 0 (mod "p") for "p" ≡ ±1 (mod 5) is false and has e.g. a 6-digit counterexample. The smallest counterexample for +1 (mod 5) is 6601 = 7 × 23 × 41 and the smallest for −1 (mod 5) is 30889 = 17 × 23 × 79. It should be noted that a heuristic argument by Pomerance suggests this conjecture is false (and therefore that a counterexample should exist).
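For concreteness, the following Python sketch (added here as an illustration; the helper names, the fast-doubling Fibonacci routine and the search bound are choices made for this sketch, not part of Selfridge's statement) implements the two congruences and checks them against trial-division primality for small p.

```python
# Illustrative sketch: Selfridge's (PSW) test conditions for odd p with p ≡ ±2 (mod 5).

def fib_mod(k, m):
    """F(k) mod m by the fast-doubling method."""
    def fd(k):
        if k == 0:
            return (0, 1)                        # (F(0), F(1))
        a, b = fd(k >> 1)
        c = (a * ((2 * b - a) % m)) % m          # F(2j)
        d = (a * a + b * b) % m                  # F(2j+1)
        return (c, d) if k % 2 == 0 else (d, (c + d) % m)
    return fd(k)[0]

def selfridge_test(p):
    """Both congruences of the conjecture: 2^(p-1) ≡ 1 and F(p+1) ≡ 0 (mod p)."""
    return pow(2, p - 1, p) == 1 and fib_mod(p + 1, p) == 0

def is_prime(n):                                 # trial division; fine for small n
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for p in range(3, 2000, 2):
    if p % 5 in (2, 3):                          # p ≡ ±2 (mod 5)
        assert selfridge_test(p) == is_prime(p), p
print("the test agrees with primality for all odd p ≡ ±2 (mod 5) below 2000")
```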
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2^{2^{14}} + 1"
}
] | https://en.wikipedia.org/wiki?curid=995084 |
9951602 | Earth mass | Unit of mass equal to that of Earth
An Earth mass (denoted as "M"🜨, "M"♁ or "M"E, where 🜨 and ♁ are the astronomical symbols for Earth) is a unit of mass equal to the mass of the planet Earth. The current best estimate for the mass of Earth is 5.972 × 10^24 kg, with a relative uncertainty of 10−4. It is equivalent to an average density of 5.515 g/cm3. Using the nearest metric prefix, the Earth mass is approximately six ronnagrams, or 6.0 Rg.
The Earth mass is a standard unit of mass in astronomy that is used to indicate the masses of other planets, including rocky terrestrial planets and exoplanets. One Solar mass is close to 333,000 Earth masses. The Earth mass excludes the mass of the Moon. The mass of the Moon is about 1.2% of that of the Earth, so that the mass of the Earth–Moon system is close to 6.05 × 10^24 kg.
Most of the mass is accounted for by iron and oxygen (c. 32% each), magnesium and silicon (c. 15% each), calcium, aluminium and nickel (c. 1.5% each).
Precise measurement of the Earth mass is difficult, as it is equivalent to measuring the gravitational constant, which is the fundamental physical constant known with least accuracy, due to the relative weakness of the gravitational force. The mass of the Earth was first measured with any accuracy (within about 20% of the correct value) in the Schiehallion experiment in the 1770s, and within 1% of the modern value in the Cavendish experiment of 1798.
Unit of mass in astronomy.
The mass of Earth is estimated to be:
formula_0,
which can be expressed in terms of solar mass as:
formula_1.
The ratio of Earth mass to lunar mass has been measured to great accuracy. The current best estimate is:
formula_2
The product of ME and the universal gravitational constant ("G") is known as the geocentric gravitational constant ("G"ME) and equals 3.986004 × 10^14 m3/s2 (398,600.4 km3/s2). It is determined using laser ranging data from Earth-orbiting satellites, such as LAGEOS-1. "G"ME can also be calculated by observing the motion of the Moon or the period of a pendulum at various elevations, although these methods are less precise than observations of artificial satellites.
The relative uncertainty of "G"ME is considerably smaller than the relative uncertainty for ME itself. ME can be found only by dividing "G"ME by "G", and "G" is known only to a relative uncertainty of about 10−4, so ME will have the same uncertainty at best. For this reason and others, astronomers prefer to use "G"ME, or mass ratios (masses expressed in units of Earth mass or Solar mass) rather than mass in kilograms when referencing and comparing planetary objects.
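As an illustration of this division (not part of the original article; the numerical values of "G"ME and "G" below are the standard IERS/CODATA figures and are assumptions of this sketch), a short Python computation recovers the Earth mass.

```python
# Illustrative sketch: Earth mass from the geocentric gravitational constant and G.
GM = 3.986004418e14    # m^3 s^-2, the geocentric gravitational constant (very precisely measured)
G = 6.67430e-11        # m^3 kg^-1 s^-2 (CODATA 2018); its uncertainty dominates the result

M_earth = GM / G
print(f"M_E ≈ {M_earth:.4e} kg")   # ≈ 5.972e+24 kg
```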
Composition.
Earth's density varies considerably, between less than 2,700 kg/m3 in the upper crust to as much as about 13,000 kg/m3 in the inner core. The Earth's core accounts for 15% of Earth's volume but more than 30% of the mass, the mantle for 84% of the volume and close to 70% of the mass, while the crust accounts for less than 1% of the mass. About 90% of the mass of the Earth is composed of the iron–nickel alloy (95% iron) in the core (30%), and silicon dioxide (c. 33%) and magnesium oxide (c. 27%) in the mantle and crust. Minor contributions are from iron(II) oxide (5%), aluminium oxide (3%) and calcium oxide (2%), besides numerous trace elements (in elementary terms: iron and oxygen c. 32% each, magnesium and silicon c. 15% each, calcium, aluminium and nickel c. 1.5% each). Carbon accounts for 0.03%, water for 0.02%, and the atmosphere for about one part per million.
History of measurement.
The mass of Earth is measured indirectly by determining other quantities such as Earth's density, gravity, or gravitational constant. The first measurement in the 1770s Schiehallion experiment resulted in a value about 20% too low. The Cavendish experiment of 1798 found the correct value within 1%. Uncertainty was reduced to about 0.2% by the 1890s, to 0.1% by 1930.
The figure of the Earth has been known to better than four significant digits since the 1960s (WGS66), so that since that time, the uncertainty of the Earth mass is determined essentially by the uncertainty in measuring the gravitational constant. Relative uncertainty was cited at 0.06% in the 1970s, and at 0.01% (10−4) by the 2000s. The current relative uncertainty of 10−4 amounts to in absolute terms, of the order of the mass of a minor planet (70% of the mass of Ceres).
Early estimates.
Before the direct measurement of the gravitational constant, estimates of the Earth mass were limited to estimating Earth's mean density from observation of the crust and estimates on Earth's volume. Estimates on the volume of the Earth in the 17th century were based on a circumference estimate of to the degree of latitude, corresponding to a radius of (86% of the Earth's actual radius of about ), resulting in an estimated volume of about one third smaller than the correct value.
The average density of the Earth was not accurately known. Earth was assumed to consist either mostly of water (Neptunism) or mostly of igneous rock (Plutonism), both suggesting average densities far too low, consistent with a total mass of the order of . Isaac Newton estimated, without access to reliable measurement, that the density of Earth would be five or six times as great as the density of water, which is surprisingly accurate (the modern value is 5.515). Newton under-estimated the Earth's volume by about 30%, so that his estimate would be roughly equivalent to .
In the 18th century, knowledge of Newton's law of universal gravitation permitted indirect estimates on the mean density of the Earth, via estimates of (what in modern terminology is known as) the gravitational constant. Early estimates on the mean density of the Earth were made by observing the slight deflection of a pendulum near a mountain, as in the Schiehallion experiment. Newton considered the experiment in "Principia", but pessimistically concluded that the effect would be too small to be measurable.
An expedition from 1737 to 1740 by Pierre Bouguer and Charles Marie de La Condamine attempted to determine the density of Earth by measuring the period of a pendulum (and therefore the strength of gravity) as a function of elevation. The experiments were carried out in Ecuador and Peru, on Pichincha Volcano and Mount Chimborazo. Bouguer wrote in a 1749 paper that they had been able to detect a deflection of 8 seconds of arc; the accuracy was not enough for a definite estimate of the mean density of the Earth, but Bouguer stated that it was at least sufficient to prove that the Earth was not hollow.
Schiehallion experiment.
That a further attempt should be made on the experiment was proposed to the Royal Society in 1772 by Nevil Maskelyne, Astronomer Royal. He suggested that the experiment would "do honour to the nation where it was made" and proposed Whernside in Yorkshire, or the Blencathra-Skiddaw massif in Cumberland as suitable targets. The Royal Society formed the Committee of Attraction to consider the matter, appointing Maskelyne, Joseph Banks and Benjamin Franklin amongst its members. The Committee despatched the astronomer and surveyor Charles Mason to find a suitable mountain.
After a lengthy search over the summer of 1773, Mason reported that the best candidate was Schiehallion, a peak in the central Scottish Highlands. The mountain stood in isolation from any nearby hills, which would reduce their gravitational influence, and its symmetrical east–west ridge would simplify the calculations. Its steep northern and southern slopes would allow the experiment to be sited close to its centre of mass, maximising the deflection effect. Nevil Maskelyne, Charles Hutton and Reuben Burrow performed the experiment, completed by 1776. Hutton (1778) reported that the mean density of the Earth was estimated at that of Schiehallion mountain. This corresponds to a mean density about 4+1⁄2 times that of water (i.e., about ), about 20% below the modern value, but still significantly larger than the mean density of normal rock, suggesting for the first time that the interior of the Earth might be substantially composed of metal. Hutton estimated this metallic portion to occupy some (or 65%) of the diameter of the Earth (modern value 55%). With a value for the mean density of the Earth, Hutton was able to set some values to Jérôme Lalande's planetary tables, which had previously only been able to express the densities of the major Solar System objects in relative terms.
Cavendish experiment.
Henry Cavendish (1798) was the first to attempt to measure the gravitational attraction between two bodies directly in the laboratory. Earth's mass could then be found by combining two equations: Newton's second law and Newton's law of universal gravitation.
In modern notation, the mass of the Earth is derived from the gravitational constant and the mean Earth radius by
formula_3
where the gravity of Earth, "little g", is
formula_4.
Cavendish found a mean density of , about 1% below the modern value.
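A numerical sketch of the relation formula_3 is given below; the surface gravity, Earth radius and gravitational constant used are rounded illustrative values rather than the figures available to Cavendish.

```python
# Sketch of M_E = g * R_E^2 / G with rounded illustrative values.
import math

g = 9.81           # surface gravity, m s^-2 (assumed)
R_E = 6.371e6      # mean Earth radius, m (assumed)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2 (assumed)

M_E = g * R_E**2 / G
rho = M_E / ((4.0 / 3.0) * math.pi * R_E**3)   # implied mean density

print(f"M_E ~ {M_E:.3e} kg, mean density ~ {rho / 1000:.2f} g/cm^3")
# Roughly 5.97e24 kg and about 5.5 times the density of water.
```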
19th century.
While the mass of the Earth is implied by stating the Earth's radius and density, it was not usual to state the absolute mass explicitly prior to the introduction of scientific notation using powers of 10 in the later 19th century, because the absolute numbers would have been too awkward. Ritchie (1850) gives the mass of the Earth's atmosphere as "11,456,688,186,392,473,000 lbs". ( = , modern value is ) and states that "compared with the weight of the globe this mighty sum dwindles to insignificance".
Absolute figures for the mass of the Earth are cited only beginning in the second half of the 19th century, mostly in popular rather than expert literature. An early such figure was given as "14 septillion pounds" ("14 Quadrillionen Pfund") [] in Masius (1859). Beckett (1871) cites the "weight of the earth" as "5842 quintillion tons" []. The "mass of the earth in gravitational measure" is stated as "9.81996×63709802" in "The New Volumes of the Encyclopaedia Britannica" (Vol. 25, 1902) with a "logarithm of earth's mass" given as "14.600522" []. This is the gravitational parameter in m3·s−2 (modern value ) and not the absolute mass.
Experiments involving pendulums continued to be performed in the first half of the 19th century. By the second half of the century, these were outperformed by repetitions of the Cavendish experiment, and the modern value of "G" (and hence, of the Earth mass) is still derived from high-precision repetitions of the Cavendish experiment.
In 1821, Francesco Carlini determined a density value of "ρ" = through measurements made with pendulums in the Milan area. This value was refined in 1827 by Edward Sabine to , and then in 1841 by Carlo Ignazio Giulio to . On the other hand, George Biddell Airy sought to determine ρ by measuring the difference in the period of a pendulum between the surface and the bottom of a mine.
The first tests and experiments took place in Cornwall between 1826 and 1828. The experiment was a failure due to a fire and a flood. Finally, in 1854, Airy got the value by measurements in a coal mine in Harton, Sunderland. Airy's method assumed that the Earth had a spherical stratification. Later, in 1883, the experiments conducted by Robert von Sterneck (1839 to 1910) at different depths in mines of Saxony and Bohemia provided average density values "ρ" between 5.0 and . This led to the concept of isostasy, which limits the ability to accurately measure "ρ", by either the deviation from vertical of a plumb line or using pendulums. Despite the little chance of an accurate estimate of the average density of the Earth in this way, Thomas Corwin Mendenhall in 1880 carried out a gravimetry experiment in Tokyo and at the top of Mount Fuji. The result was "ρ" =.
Modern value.
The uncertainty in the modern value for the Earth's mass has been entirely due to the uncertainty in the gravitational constant "G" since at least the 1960s. "G" is notoriously difficult to measure, and some high-precision measurements during the 1980s to 2010s have yielded mutually exclusive results. Sagitov (1969) based on the measurement of "G" by Heyl and Chrzanowski (1942) cited a value of ME
(relative uncertainty ).
Accuracy has improved only slightly since then. Most modern measurements are repetitions of the Cavendish experiment, with reported results (within standard uncertainty) ranging between 6.672 and (relative uncertainty ) since the 1980s, although the 2014 CODATA recommended value is close to with a relative uncertainty below 10−4. The "Astronomical Almanac Online" as of 2016 recommends a standard uncertainty of for Earth mass, ME.
Variation.
Earth's mass is variable, subject to both gain, due to the accretion of in-falling material including micrometeorites and cosmic dust, and loss, due to the escape of hydrogen and helium gas. The combined effect is a net loss of material, estimated at per year. This amount is about 10−17 of the total Earth mass per year. The annual net loss is essentially the difference between roughly 100,000 tons lost to atmospheric escape and an average of 45,000 tons gained from in-falling dust and meteorites. This is well within the mass uncertainty of 0.01% (), so the estimated value of Earth's mass is unaffected by this factor.
Mass loss is due to atmospheric escape of gases. About 95,000 tons of hydrogen per year () and 1,600 tons of helium per year are lost through atmospheric escape. The main factor in mass gain is in-falling material: cosmic dust, meteors, and similar debris are the most significant contributors to Earth's increase in mass. The sum of material is estimated to be annually, although this can vary significantly; to take an extreme example, the Chicxulub impactor, with a midpoint mass estimate of , added 900 million times that annual dustfall amount to the Earth's mass in a single event.
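The figures quoted above can be combined into a rough annual mass budget. The short sketch below does that arithmetic, taking the Earth mass as roughly 5.97×10^24 kg and one ton as 1000 kg.

```python
# Rough annual mass budget using the tonnages quoted above.
tonne = 1000.0
loss = (95_000 + 1_600) * tonne     # hydrogen and helium escaping per year, kg
gain = 45_000 * tonne               # dust and meteorites falling in per year, kg
net = gain - loss                   # negative value means a net loss

M_E = 5.97e24                       # Earth mass in kg (rounded)
print(f"net change ~ {net:.2e} kg/yr, "
      f"i.e. ~{abs(net) / M_E:.0e} of the Earth mass per year")
```

The result is a net loss of a few times 10^7 kg per year, consistent with the order-of-magnitude fraction of the Earth mass stated above.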
Additional changes in mass are due to the mass–energy equivalence principle, although these changes are relatively negligible. Mass loss due to the combination of nuclear fission and natural radioactive decay is estimated to amount to 16 tons per year.
An additional loss due to spacecraft on escape trajectories has been estimated at since the mid-20th century. Earth lost about 3473 tons in the initial 53 years of the space age, but the trend is currently decreasing.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M_\\oplus=(5.9722\\;\\pm\\;0.0006)\\times10^{24}\\;\\mathrm{kg}"
},
{
"math_id": 1,
"text": "M_\\oplus=\\frac{1}{332\\;946.0487\\;\\pm\\;0.0007}\\;M_\\odot \\approx 3.003\\times10^{-6}\\;M_\\odot "
},
{
"math_id": 2,
"text": "M_\\oplus/M_L=81.3005678\\;\\pm\\;0.0000027"
},
{
"math_id": 3,
"text": " M_\\oplus =\\frac{ GM_\\oplus}{ G } = \\frac{ g R_\\oplus^2}{G}."
},
{
"math_id": 4,
"text": "g = G\\frac{M_\\oplus}{R_\\oplus^2}"
}
] | https://en.wikipedia.org/wiki?curid=9951602 |
995169 | Landau theory | Theory of continuous phase transitions
Landau theory (also known as Ginzburg–Landau theory, despite the confusing name) in physics is a theory that Lev Landau introduced in an attempt to formulate a general theory of continuous (i.e., second-order) phase transitions. It can also be adapted to systems under externally-applied fields, and used as a quantitative model for discontinuous (i.e., first-order) transitions. Although the theory has now been superseded by the renormalization group and scaling theory formulations, it remains an exceptionally broad and powerful framework for phase transitions, and the associated concept of the order parameter as a descriptor of the essential character of the transition has proven transformative.
Mean-field formulation (no long-range correlation).
Landau was motivated to suggest that the free energy of any system should obey two conditions: it should be an analytic function of the order parameter, and it should obey the symmetry of the Hamiltonian.
Given these two conditions, one can write down (in the vicinity of the critical temperature, "T""c") a phenomenological expression for the free energy as a Taylor expansion in the order parameter.
Second-order transitions.
Consider a system that breaks some symmetry below a phase transition, which is characterized by an order parameter formula_0. This order parameter is a measure of the order before and after a phase transition; the order parameter is often zero above some critical temperature and non-zero below the critical temperature. In a simple ferromagnetic system like the Ising model, the order parameter is characterized by the net magnetization formula_1, which becomes spontaneously non-zero below a critical temperature formula_2. In Landau theory, one considers a free energy functional that is an analytic function of the order parameter. In many systems with certain symmetries, the free energy will only be a function of even powers of the order parameter, for which it can be expressed as the series expansion
formula_3
In general, there are higher order terms present in the free energy, but it is a reasonable approximation to consider the series to fourth order in the order parameter, as long as the order parameter is small. For the system to be thermodynamically stable (that is, the system does not seek an infinite order parameter to minimize the energy), the coefficient of the highest even power of the order parameter must be positive, so formula_4. For simplicity, one can assume that formula_5, a constant, near the critical temperature. Furthermore, since formula_6 changes sign above and below the critical temperature, one can likewise expand formula_7, where it is assumed that formula_8 for the high-temperature phase while formula_9 for the low-temperature phase, for a transition to occur. With these assumptions, minimizing the free energy with respect to the order parameter requires
formula_10
The solution to the order parameter that satisfies this condition is either formula_11, or
formula_12
It is clear that this solution only exists for formula_13, otherwise formula_11 is the only solution. Indeed, formula_11 is the minimum solution for formula_14, but the solution formula_15 minimizes the free energy for formula_13, and thus is a stable phase. Furthermore, the order parameter follows the relation
formula_16
below the critical temperature, indicating a critical exponent formula_17 for this Landau mean-field model.
The free-energy will vary as a function of temperature given by
formula_18
From the free energy, one can compute the specific heat,
formula_19
which has a finite jump at the critical temperature of size formula_20. This finite jump is therefore not associated with a discontinuity that would occur if the system absorbed latent heat, since formula_21. It is also noteworthy that the discontinuity in the specific heat is related to the discontinuity in the "second" derivative of the free energy, which is characteristic of a "second"-order phase transition. Furthermore, the fact that the specific heat has no divergence or cusp at the critical point indicates its critical exponent for formula_22 is formula_23.
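The square-root behaviour of the order parameter can also be checked numerically. The sketch below minimises the quartic free energy on a grid for a few temperatures below the critical temperature; the coefficients a0 and b0 and the value of Tc are arbitrary illustrative choices.

```python
# Numerical check of the mean-field law eta ~ (Tc - T)^(1/2).
# The coefficients a0, b0 and the critical temperature Tc are arbitrary.
import numpy as np

a0, b0, Tc = 1.0, 1.0, 1.0
etas = np.linspace(0.0, 1.0, 200_001)        # candidate order-parameter values

for T in (0.99, 0.96, 0.84):                 # temperatures below Tc
    F = a0 * (T - Tc) * etas**2 + 0.5 * b0 * etas**4
    eta_min = etas[np.argmin(F)]             # numerical minimiser of F
    eta_sqrt = np.sqrt(a0 * (Tc - T) / b0)   # closed-form square-root law
    print(f"T = {T}: numerical minimum {eta_min:.4f}, sqrt law {eta_sqrt:.4f}")
# The two columns agree, illustrating the critical exponent beta = 1/2.
```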
Irreducible representations.
Landau expanded his theory to consider the constraints that it imposes on the symmetries before and after a second-order transition. They need to comply with a number of requirements:
In the latter case, more than one daughter structure should be reachable through a continuous transition. Good examples of this are the structure of MnP (space group Cmca) and the low-temperature structure of NbS (space group P63mc). They are both daughters of the NiAs structure, and their distortions transform according to the same irrep of that space group.
Applied fields.
In many systems, one can consider a perturbing field formula_24 that couples linearly to the order parameter. For example, in the case of a classical dipole moment formula_25, the energy of the dipole-field system is formula_26. In the general case, one can assume an energy shift of formula_27 due to the coupling of the order parameter to the applied field formula_24, and the Landau free energy will change as a result:
formula_28
In this case, the minimization condition is
formula_29
One immediate consequence of this equation and its solution is that, if the applied field is non-zero, then the magnetization is non-zero at any temperature. This implies there is no longer a spontaneous symmetry breaking that occurs at any temperature. Furthermore, some interesting thermodynamic and universal quantities can be obtained from this above condition. For example, at the critical temperature where formula_30, one can find the dependence of the order parameter on the external field:
formula_31
indicating a critical exponent formula_32.
Furthermore, from the above condition, it is possible to find the zero-field susceptibility formula_33, which must satisfy
formula_34
formula_35
In this case, recalling in the zero-field case that formula_36 at low temperatures, while formula_37 for temperatures above the critical temperature, the zero-field susceptibility therefore has the following temperature dependence:
formula_38
which is reminiscent of the Curie-Weiss law for the temperature dependence of magnetic susceptibility in magnetic materials, and yields the mean-field critical exponent formula_39.
It is noteworthy that although the critical exponents so obtained are incorrect for many models and systems, they correctly satisfy various exponent equalities such as the Rushbrooke equality: formula_40.
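The field exponent can likewise be verified numerically. In the sketch below, the free energy with an applied field is minimised at the critical temperature, where the quadratic coefficient vanishes, for several field strengths, and the slope of the resulting log-log relation is fitted; the coefficient b0 and the field values are arbitrary illustrative choices.

```python
# Numerical check of eta ~ h^(1/delta) with delta = 3 at the critical temperature,
# where F = (b0/2) eta^4 - h eta.  The coefficient and fields are arbitrary.
import numpy as np

b0 = 1.0
etas = np.linspace(0.0, 1.0, 400_001)
fields = [1e-6, 1e-4, 1e-2]

minima = []
for h in fields:
    F = 0.5 * b0 * etas**4 - h * etas        # quadratic term vanishes at T = Tc
    minima.append(etas[np.argmin(F)])

slope = np.polyfit(np.log(fields), np.log(minima), 1)[0]
print(f"log-log slope of eta(h) = {slope:.3f} (expected 1/delta = 1/3)")
```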
First-order transitions.
Landau theory can also be used to study first-order transitions. There are two different formulations, depending on whether or not the system is symmetric under a change in sign of the order parameter.
I. Symmetric Case.
Here we consider the case where the system has a symmetry and the energy is invariant when the order parameter changes sign.
A first-order transition will arise if the quartic term in formula_41 is negative. To ensure that the free energy remains positive at large formula_0, one must carry the free-energy expansion to sixth-order,
formula_42
where formula_43, and formula_44 is some temperature at which formula_45 changes sign. We denote this temperature by formula_44 and not formula_2, since it will emerge below that it is not the temperature of the first-order transition, and since there is no critical point, the notion of a "critical temperature" is misleading to begin with. formula_46 and formula_47 are positive coefficients.
We analyze this free energy functional as follows: (i) For formula_48, the formula_49 and formula_50 terms are concave upward for all formula_0, while the formula_51 term is concave downward. Thus for sufficiently high temperatures formula_41 is concave upward for all formula_0, and the equilibrium solution is formula_52. (ii) For formula_53, both the formula_49 and formula_51 terms are negative, so formula_52 is a local maximum, and the minimum of formula_41 is at some non-zero value formula_54, with
formula_55. (iii) For formula_56 just above formula_57, formula_52 turns into a local minimum, but the minimum at formula_58 continues to be the global minimum since it has a lower free energy. It follows that as the temperature is raised above formula_44, the global minimum cannot continuously evolve from
formula_58 to 0. Rather, at some intermediate temperature formula_59, the minima at formula_60 and formula_52 must become degenerate. For formula_61, the global minimum will jump discontinuously from formula_60 to 0.
To find formula_59, we demand that free energy be zero at formula_62 (just like the formula_63 solution), and furthermore that this point should be a local minimum. These two conditions yield two equations,
formula_64
formula_65
which are satisfied when formula_66. The same equations also imply that formula_67. That is,
formula_68
From this analysis both points made above can be seen explicitly. First, the order parameter suffers a discontinuous jump from formula_69 to 0. Second, the transition temperature formula_59 is not the same as the temperature formula_44 where formula_45 vanishes.
At temperatures below the transition temperature, formula_70, the order parameter is given by
formula_71
which is plotted to the right. This shows the clear discontinuity associated with the order parameter as a function of the temperature. To further demonstrate that the transition is first-order, one can show that the free energy for this order parameter is continuous at the transition temperature formula_59, but its first derivative (the entropy) suffers from a discontinuity, reflecting the existence of a non-zero latent heat.
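The discontinuous jump can be reproduced numerically from the sixth-order free energy. The sketch below locates the global minimum on a grid for temperatures around the predicted transition temperature; all coefficients and the value of T0 are arbitrary illustrative choices.

```python
# Sketch of the first-order jump in F = A0 (T - T0) eta^2 - B0 eta^4 + C0 eta^6.
# The coefficients and T0 are arbitrary illustrative values.
import numpy as np

A0, B0, C0, T0 = 1.0, 1.0, 1.0, 1.0
T_star = T0 + B0**2 / (4.0 * A0 * C0)        # predicted transition temperature
etas = np.linspace(0.0, 1.5, 150_001)

for T in (1.20, 1.24, 1.26, 1.30):
    F = A0 * (T - T0) * etas**2 - B0 * etas**4 + C0 * etas**6
    print(f"T = {T:.2f}: global minimum at eta = {etas[np.argmin(F)]:.3f}")

print(f"predicted T* = {T_star}, jump size (B0/2C0)^(1/2) = {np.sqrt(B0 / (2 * C0)):.3f}")
# Below T* = 1.25 the minimum sits near 0.71; just above T* it drops to zero.
```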
II. Nonsymmetric Case.
Next we consider the case where the system does not have a symmetry. In this case there is no reason to keep only even powers of formula_0 in the expansion of formula_41, and a cubic term must be allowed (The linear term can always be eliminated by a shift formula_72 + constant.) We thus consider a free energy functional
formula_73
Once again formula_43, and formula_74 are all positive. The sign of the cubic term can always be chosen to be negative as we have done by reversing the sign of formula_0 if necessary.
We analyze this free energy functional as follows: (i) For formula_53, we have a local maximum at formula_52, and since the free energy is bounded below, there must be two local minima at nonzero values formula_75 and formula_76. The cubic term ensures that formula_77 is the global minimum since it is deeper. (ii) For formula_78 just above formula_44, the minimum at formula_79 disappears, the maximum at formula_52 turns into a local minimum, but the minimum at formula_77 persists and continues to be the global minimum. As the temperature is further raised, formula_80 rises until it equals zero at some temperature formula_59. At formula_59 we get a discontinuous jump in the global minimum from formula_81 to 0. (The minima cannot coalesce for that would require the first three derivatives of formula_41 to vanish at formula_52.)
To find formula_59, we demand that free energy be zero at formula_82 (just like the formula_63 solution), and furthermore that this point should be a local minimum. These two conditions yield two equations,
formula_83
formula_84
which are satisfied when formula_85. The same equations also imply that formula_86. That is,
formula_87
As in the symmetric case the order parameter suffers a discontinuous jump from formula_88 to 0. Second, the transition temperature formula_59 is not the same as the temperature formula_44 where formula_45 vanishes.
Applications.
It was known experimentally that the liquid–gas coexistence curve and the ferromagnet magnetization curve both exhibited a scaling relation of the form formula_89, where formula_90 was mysteriously the same for both systems. This is the phenomenon of universality. It was also known that simple liquid–gas models are exactly mappable to simple magnetic models, which implied that the two systems possess the same symmetries. It then followed from Landau theory why these two apparently disparate systems should have the same critical exponents, despite having different microscopic parameters. It is now known that the phenomenon of universality arises for other reasons (see Renormalization group). In fact, Landau theory predicts the incorrect critical exponents for the Ising and liquid–gas systems.
The great virtue of Landau theory is that it makes specific predictions for what kind of non-analytic behavior one should see when the underlying free energy is analytic. All of the non-analyticity at the critical point, captured by the critical exponents, then arises because the "equilibrium value" of the order parameter changes non-analytically, as a square root, whenever the free energy loses its unique minimum.
The extension of Landau theory to include fluctuations in the order parameter shows that Landau theory is only strictly valid near the critical points of ordinary systems with spatial dimensions higher than 4. This is the upper critical dimension, and it can be much higher than four in more finely tuned phase transitions. In Mukamel's analysis of the isotropic Lifshitz point, the critical dimension is 8. This is because Landau theory is a mean field theory and does not include long-range correlations.
This theory does not explain non-analyticity at the critical point, but when applied to superfluid and superconductor phase transitions, Landau's theory provided inspiration for another theory, the Ginzburg–Landau theory of superconductivity.
Including long-range correlations.
Consider the Ising model free energy above. Assume that the order parameter formula_91 and external magnetic field, formula_24, may have spatial variations. Now, the free energy of the system can be assumed to take the following modified form:
formula_92
where formula_93 is the total "spatial" dimensionality. So,
formula_94
Assume that, for a "localized" external magnetic perturbation formula_95, the order parameter takes the form formula_96. Then,
formula_97
That is, the fluctuation formula_98 in the order parameter corresponds to the order-order correlation. Hence, neglecting this fluctuation (like in the earlier mean-field approach) corresponds to neglecting the order-order correlation, which diverges near the critical point.
One can also solve for formula_99, from which the scaling exponent, formula_100, for correlation length formula_101 can be deduced. From these, the Ginzburg criterion for the upper critical dimension for the validity of the Ising mean-field Landau theory (the one without long-range correlation) can be calculated as:
formula_102
In our current Ising model, mean-field Landau theory gives formula_103 and so, it (the Ising mean-field Landau theory) is valid only for spatial dimensionality greater than or equal to 4 (at the marginal values of formula_104, there are small corrections to the exponents). This modified version of mean-field Landau theory is sometimes also referred to as the Landau–Ginzburg theory of Ising phase transitions. As a clarification, there is also a Landau-Ginzburg theory specific to superconductivity phase transition, which also includes fluctuations. | [
{
"math_id": 0,
"text": "\\eta"
},
{
"math_id": 1,
"text": "m"
},
{
"math_id": 2,
"text": "T_c"
},
{
"math_id": 3,
"text": "F(T,\\eta) - F_0 = a(T) \\eta^2 + \\frac{b(T)}{2} \\eta^4 + \\cdots"
},
{
"math_id": 4,
"text": "b(T)>0"
},
{
"math_id": 5,
"text": "b(T) = b_0"
},
{
"math_id": 6,
"text": "a(T)"
},
{
"math_id": 7,
"text": "a(T) \\approx a_0 (T-T_c)"
},
{
"math_id": 8,
"text": "a>0"
},
{
"math_id": 9,
"text": "a<0"
},
{
"math_id": 10,
"text": "\\frac{\\partial F}{\\partial \\eta} = 2a(T) \\eta + 2b(T) \\eta^3 = 0"
},
{
"math_id": 11,
"text": "\\eta= 0"
},
{
"math_id": 12,
"text": "\\eta_0^2 = -\\frac{a}{b} = - \\frac{a_0}{b_0}(T-T_c)"
},
{
"math_id": 13,
"text": "T<T_c"
},
{
"math_id": 14,
"text": "T>T_c"
},
{
"math_id": 15,
"text": "\\eta_0"
},
{
"math_id": 16,
"text": "\\eta(T) \\propto \\left| T - T_c \\right|^{1/2}"
},
{
"math_id": 17,
"text": "\\beta = 1/2"
},
{
"math_id": 18,
"text": "F - F_0 = \n\\begin{cases}\n- \\dfrac{a_0^2}{2b_0} (T-T_c)^2, & T<T_c \\\\\n0, & T>T_c\n\\end{cases}\n"
},
{
"math_id": 19,
"text": "c_p = -T\\frac{\\partial^2 F}{\\partial T^2} = \n\\begin{cases}\n\\dfrac{a_0^2}{b_0} T, & T<T_c \\\\\n0, & T>T_c\n\\end{cases}\n"
},
{
"math_id": 20,
"text": "\\Delta c = a_0^2 T_c/b_0"
},
{
"math_id": 21,
"text": "T_c \\Delta S = 0"
},
{
"math_id": 22,
"text": "c\\sim |T-T_c|^{-\\alpha}"
},
{
"math_id": 23,
"text": "\\alpha=0"
},
{
"math_id": 24,
"text": "h"
},
{
"math_id": 25,
"text": "\\mu"
},
{
"math_id": 26,
"text": "-\\mu B"
},
{
"math_id": 27,
"text": "-\\eta h"
},
{
"math_id": 28,
"text": "F(T,\\eta) - F_0 = a_0 (T-T_c) \\eta^2 + \\frac{b_0}{2} \\eta^4 - \\eta h"
},
{
"math_id": 29,
"text": "\\frac{\\partial F}{\\partial \\eta} = 2 a(T) \\eta + 2b_0 \\eta^3 - h = 0 "
},
{
"math_id": 30,
"text": "a(T_c)=0"
},
{
"math_id": 31,
"text": "\\eta(T_c) = \\left( \\frac{h}{2b_0} \\right)^{1/3} \\propto h^{1/\\delta} "
},
{
"math_id": 32,
"text": "\\delta = 3"
},
{
"math_id": 33,
"text": "\\chi\\equiv \\partial \\eta/\\partial h|_{h=0}"
},
{
"math_id": 34,
"text": "0 = 2 a \\frac{\\partial \\eta}{\\partial h} + 6b \\eta^2 \\frac{\\partial \\eta}{\\partial h} - 1"
},
{
"math_id": 35,
"text": "[2 a + 6b \\eta^2] \\frac{\\partial \\eta}{\\partial h} = 1"
},
{
"math_id": 36,
"text": "\\eta^2 = -a/b"
},
{
"math_id": 37,
"text": "\\eta^2=0"
},
{
"math_id": 38,
"text": "\\chi(T,h\\to 0) = \\begin{cases} \\frac{1}{2a_0(T-T_c)}, & T>T_c \\\\ \\frac{1}{-4a_0(T-T_c)}, & T<T_c \\end{cases} \\propto |T-T_c|^{-\\gamma}"
},
{
"math_id": 39,
"text": "\\gamma=1"
},
{
"math_id": 40,
"text": " \\alpha + 2\\beta + \\gamma = 1"
},
{
"math_id": 41,
"text": "F"
},
{
"math_id": 42,
"text": "F(T,\\eta) = A(T) \\eta^2 - B_0 \\eta^4 + C_0 \\eta^6,"
},
{
"math_id": 43,
"text": "A(T)=A_0(T-T_0)"
},
{
"math_id": 44,
"text": "T_0"
},
{
"math_id": 45,
"text": "A(T)"
},
{
"math_id": 46,
"text": "A_0, B_0,"
},
{
"math_id": 47,
"text": "C_0"
},
{
"math_id": 48,
"text": " T > T_0 "
},
{
"math_id": 49,
"text": "\\eta^2"
},
{
"math_id": 50,
"text": "\\eta^6"
},
{
"math_id": 51,
"text": "\\eta^4"
},
{
"math_id": 52,
"text": "\\eta = 0"
},
{
"math_id": 53,
"text": " T < T_0 "
},
{
"math_id": 54,
"text": "\\pm\\eta_0(T)"
},
{
"math_id": 55,
"text": " F(T_0,\\eta_0(T_0)) < 0"
},
{
"math_id": 56,
"text": " T "
},
{
"math_id": 57,
"text": " T_0 "
},
{
"math_id": 58,
"text": "\\eta_0(T)"
},
{
"math_id": 59,
"text": "T_*"
},
{
"math_id": 60,
"text": "\\eta_0(T_*)"
},
{
"math_id": 61,
"text": "T > T_*"
},
{
"math_id": 62,
"text": "\\eta = \\eta_0(T_*)"
},
{
"math_id": 63,
"text": "\\eta=0"
},
{
"math_id": 64,
"text": "0=A(T) \\eta^2 - B_0 \\eta^4 + C_0 \\eta^6,"
},
{
"math_id": 65,
"text": "0=2A(T) \\eta - 4 B_0 \\eta^3 + 6 C_0 \\eta^5,"
},
{
"math_id": 66,
"text": "\\eta^2(T_*) = {B_0}/{2C_0}"
},
{
"math_id": 67,
"text": "A(T_*) = A_0(T_*-T_0) = B_0^2/4C_0"
},
{
"math_id": 68,
"text": " T_* = T_0 + \\frac{B_0^2}{4 A_0 C_0}."
},
{
"math_id": 69,
"text": "(B_0/2C_0)^{1/2}"
},
{
"math_id": 70,
"text": "T<T_*"
},
{
"math_id": 71,
"text": "\\eta_0^2 = \\frac{B_0}{3C_0} \\left[ 1 + \\sqrt{1 - \\frac{3A(T) C_0}{B_0^2}} \\right]"
},
{
"math_id": 72,
"text": " \\eta \\to \\eta"
},
{
"math_id": 73,
"text": "F(T,\\eta) = A(T) \\eta^2 - C_0 \\eta^3 + B_0 \\eta^4 + \\cdots."
},
{
"math_id": 74,
"text": "A_0, B_0, C_0"
},
{
"math_id": 75,
"text": "\\eta_-(T) < 0"
},
{
"math_id": 76,
"text": "\\eta_+(T) > 0"
},
{
"math_id": 77,
"text": "\\eta_+"
},
{
"math_id": 78,
"text": "T"
},
{
"math_id": 79,
"text": "\\eta_-"
},
{
"math_id": 80,
"text": " F(T,\\eta_+(T)) "
},
{
"math_id": 81,
"text": "\\eta_+(T_*)"
},
{
"math_id": 82,
"text": "\\eta = \\eta_+(T_*)"
},
{
"math_id": 83,
"text": "0=A(T) \\eta^2 - C_0 \\eta^3 + B_0 \\eta^4,"
},
{
"math_id": 84,
"text": "0=2A(T) \\eta - 3 C_0 \\eta^2 + 4 B_0 \\eta^3,"
},
{
"math_id": 85,
"text": "\\eta(T_*) = {C_0}/{2B_0}"
},
{
"math_id": 86,
"text": "A(T_*) = A_0(T_*-T_0) = C_0^2/4B_0"
},
{
"math_id": 87,
"text": " T_* = T_0 + \\frac{C_0^2}{4 A_0 B_0}."
},
{
"math_id": 88,
"text": "(C_0/2B_0)"
},
{
"math_id": 89,
"text": " |T - T_c|^{\\beta} "
},
{
"math_id": 90,
"text": "\\beta"
},
{
"math_id": 91,
"text": "\\Psi "
},
{
"math_id": 92,
"text": " F := \\int d^D x \\ \\left( a(T) + r(T) \\psi^2(x) + s(T) \\psi^4(x) \\ + f(T) (\\nabla \\psi(x))^2 \\ +h(x) \\psi(x)\\ \\ + \\mathcal{O}(\\psi^6 ; (\\nabla \\psi)^4) \\right) "
},
{
"math_id": 93,
"text": "D"
},
{
"math_id": 94,
"text": " \\langle \\psi(x) \\rangle := \\frac{\\text{Tr}\\ \\psi(x) {\\rm e}^{-\\beta H}}{Z} "
},
{
"math_id": 95,
"text": " h(x) \\rightarrow 0 + h_0 \\delta(x) "
},
{
"math_id": 96,
"text": " \\psi(x) \\rightarrow \\psi_0 + \\phi(x) "
},
{
"math_id": 97,
"text": " \\frac{\\delta \\langle \\psi(x) \\rangle}{\\delta h(0)} = \\frac{\\phi(x)}{h_0} = \\beta \\left ( \\langle \\psi(x) \\psi(0) \\rangle - \\langle \\psi(x) \\rangle \\langle \\psi(0) \\rangle \\right ) "
},
{
"math_id": 98,
"text": "\\phi(x)"
},
{
"math_id": 99,
"text": " \\phi(x)"
},
{
"math_id": 100,
"text": " \\nu "
},
{
"math_id": 101,
"text": " \\xi \\sim (T-T_c)^{-\\nu} "
},
{
"math_id": 102,
"text": " D \\ge 2 + 2 \\frac{\\beta}{\\nu}"
},
{
"math_id": 103,
"text": "\\beta = 1/2 = \\nu "
},
{
"math_id": 104,
"text": "D=4"
}
] | https://en.wikipedia.org/wiki?curid=995169 |
995417 | Geodetic datum | Reference frame for measuring location
A geodetic datum or geodetic system (also: geodetic reference datum, geodetic reference system, or geodetic reference frame, or terrestrial reference frame) is a global datum reference or reference frame for unambiguously representing the position of locations on Earth by means of either geodetic coordinates (and related vertical coordinates) or geocentric coordinates.
Datums are crucial to any technology or technique based on spatial location, including geodesy, navigation, surveying, geographic information systems, remote sensing, and cartography.
A horizontal datum is used to measure a horizontal position, across the Earth's surface, in latitude and longitude or another related coordinate system. A "vertical datum" is used to measure the elevation or depth relative to a standard origin, such as mean sea level (MSL). A three-dimensional datum enables the expression of both horizontal and vertical position components in a unified form.
The concept can be generalized for other celestial bodies as in "planetary datums".
Since the rise of the global positioning system (GPS), the ellipsoid and datum WGS 84 it uses has supplanted most others in many applications. The WGS 84 is intended for global use, unlike most earlier datums.
Before GPS, there was no precise way to measure the position of a location that was far from reference points used in the realization of local datums, such as from the Prime Meridian at the Greenwich Observatory for longitude, from the Equator for latitude, or from the nearest coast for sea level. Astronomical and chronological methods have limited precision and accuracy, especially over long distances. Even GPS requires a predefined framework on which to base its measurements, so WGS 84 essentially functions as a datum, even though it is different in some particulars from a traditional standard horizontal or vertical datum.
A standard datum specification (whether horizontal, vertical, or 3D) consists of several parts: a model for Earth's shape and dimensions, such as a "reference ellipsoid" or a "geoid"; an "origin" at which the ellipsoid/geoid is tied to a known (often monumented) location on or inside Earth (not necessarily at 0 latitude 0 longitude); and multiple control points or reference points that have been precisely measured from the origin and physically monumented. Then the coordinates of other places are measured from the nearest control point through surveying. Because the ellipsoid or geoid differs between datums, along with their origins and orientation in space, the relationship between coordinates referred to one datum and coordinates referred to another datum is undefined and can only be approximated. Using local datums, the disparity on the ground between a point having the same horizontal coordinates in two different datums could reach kilometers if the point is far from the origin of one or both datums. This phenomenon is called "datum shift" or, more generally, "datum transformation", as it may involve rotation and scaling, in addition to displacement.
Because Earth is an imperfect ellipsoid, local datums can give a more accurate representation of some specific area of coverage than WGS 84 can. OSGB36, for example, is a better approximation to the geoid covering the British Isles than the global WGS 84 ellipsoid. However, as the benefits of a global system outweigh the greater accuracy, the global WGS 84 datum has become widely adopted.
History.
The spherical nature of Earth was known by the ancient Greeks, who also developed the concepts of latitude and longitude, and the first astronomical methods for measuring them. These methods, preserved and further developed by Muslim and Indian astronomers, were sufficient for the global explorations of the 15th and 16th Centuries.
However, the scientific advances of the Age of Enlightenment brought a recognition of errors in these measurements, and a demand for greater precision. This led to technological innovations such as the 1735 Marine chronometer by John Harrison, but also to a reconsideration of the underlying assumptions about the shape of Earth itself. Isaac Newton postulated that the conservation of momentum should make Earth oblate (wider at the equator), while the early surveys of Jacques Cassini (1720) led him to believe Earth was prolate (wider at the poles). The subsequent French geodesic missions (1735-1739) to Lapland and Peru corroborated Newton, but also discovered variations in gravity that would eventually lead to the geoid model.
A contemporary development was the use of the trigonometric survey to accurately measure distance and location over great distances. Starting with the surveys of Jacques Cassini (1718) and the Anglo-French Survey (1784–1790), by the end of the 18th century, survey control networks covered France and the United Kingdom. More ambitious undertakings such as the Struve Geodetic Arc across Eastern Europe (1816-1855) and the Great Trigonometrical Survey of India (1802-1871) took much longer, but resulted in more accurate estimations of the shape of the Earth ellipsoid. The first triangulation across the United States was not completed until 1899.
The U.S. survey resulted in the North American Datum (horizontal) of 1927 (NAD27) and the Vertical Datum of 1929 (NAVD29), the first standard datums available for public use. This was followed by the release of national and regional datums over the next several decades. Improving measurements, including the use of early satellites, enabled more accurate datums in the later 20th century, such as NAD83 in North America, ETRS89 in Europe, and GDA94 in Australia. At this time global datums were also first developed for use in satellite navigation systems, especially the World Geodetic System (WGS 84) used in the U.S. global positioning system (GPS), and the International Terrestrial Reference System and Frame (ITRF) used in the European Galileo system.
Dimensions.
Horizontal datum.
A horizontal datum is a model used to precisely measure positions on Earth; it is thus a crucial component of any spatial reference system or map projection. A horizontal datum binds a specified reference ellipsoid, a mathematical model of the shape of the earth, to the physical earth. Thus, the geographic coordinate system on that ellipsoid can be used to measure the latitude and longitude of real-world locations. Regional horizontal datums, such as NAD27 and NAD83, usually create this binding with a series of physically monumented geodetic control points of known location. Global datums, such as WGS84 and ITRF, are typically bound to the center of mass of the Earth (making them useful for tracking satellite orbits and thus for use in satellite navigation systems).
A specific point can have substantially different coordinates, depending on the datum used to make the measurement. For example, coordinates in NAD83 can differ from NAD27 by up to several hundred feet. There are hundreds of local horizontal datums around the world, usually referenced to some convenient local reference point. Contemporary datums, based on increasingly accurate measurements of the shape of Earth, are intended to cover larger areas. The WGS 84 datum, which is almost identical to the NAD83 datum used in North America and the ETRS89 datum used in Europe, is a common standard datum.
Vertical datum.
A vertical datum is a reference surface for vertical positions, such as the elevations of Earth features including terrain, bathymetry, water level, and human-made structures.
An approximate definition of sea level is the datum WGS 84, an ellipsoid, whereas a more accurate definition is Earth Gravitational Model 2008 (EGM2008), using at least 2,159 spherical harmonics. Other datums are defined for other areas or at other times; ED50 was defined in 1950 over Europe and differs from WGS 84 by a few hundred meters depending on where in Europe you look.
Mars has no oceans and so no sea level, but at least two martian datums have been used to locate places there.
Geodetic coordinates.
In "geodetic coordinates", Earth's surface is approximated by an ellipsoid, and locations near the surface are described in terms of "geodetic latitude" (formula_0), "longitude" (formula_1), and "ellipsoidal height" (formula_2).
Earth reference ellipsoid.
Defining and derived parameters.
The ellipsoid is completely parameterised by the semi-major axis formula_3 and the flattening formula_4.
From formula_3 and formula_4 it is possible to derive the semi-minor axis formula_5, the first eccentricity formula_6 and the second eccentricity formula_7 of the ellipsoid.
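As a concrete sketch, the commonly used relations b = a(1 − f), e² = f(2 − f) and e′² = e²/(1 − e²) can be applied to the WGS 84 defining constants; the code below does so, and the two input constants are the usual published WGS 84 values.

```python
# Sketch: deriving b, e and e' from the WGS 84 defining constants a and 1/f,
# using b = a(1 - f), e^2 = f(2 - f) and e'^2 = e^2 / (1 - e^2).
import math

a = 6378137.0            # semi-major axis in metres (WGS 84 defining value)
inv_f = 298.257223563    # inverse flattening (WGS 84 defining value)
f = 1.0 / inv_f

b = a * (1.0 - f)        # semi-minor axis
e2 = f * (2.0 - f)       # first eccentricity squared
ep2 = e2 / (1.0 - e2)    # second eccentricity squared

print(f"b  = {b:.4f} m")
print(f"e  = {math.sqrt(e2):.10f}")
print(f"e' = {math.sqrt(ep2):.10f}")
# b is about 6356752.3142 m; e is about 0.0818 and e' about 0.0821.
```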
Parameters for some geodetic systems.
The two main reference ellipsoids used worldwide are the GRS80
and the WGS 84.
A more comprehensive list of geodetic systems can be found here.
World Geodetic System 1984 (WGS 84).
The Global Positioning System (GPS) uses the World Geodetic System 1984 (WGS 84) to determine the location of a point near the surface of Earth.
Datum transformation.
The difference in co-ordinates between datums is commonly referred to as "datum shift". The datum shift between two particular datums can vary from one place to another within one country or region, and can be anything from zero to hundreds of meters (or several kilometers for some remote islands). The North Pole, South Pole and Equator will be in different positions on different datums, so True North will be slightly different. Different datums use different interpolations for the precise shape and size of Earth (reference ellipsoids). For example, in Sydney there is a 200 metres (700 feet) difference between GPS coordinates configured in GDA (based on global standard WGS 84) and AGD (used for most local maps), which is an unacceptably large error for some applications, such as surveying or site location for scuba diving.
Datum conversion is the process of converting the coordinates of a point from one datum system to another. Because the survey networks upon which datums were traditionally based are irregular, and the error in early surveys is not evenly distributed, datum conversion cannot be performed using a simple parametric function. For example, converting from NAD27 to NAD83 is performed using NADCON (later improved as HARN), a raster grid covering North America, with the value of each cell being the average adjustment distance for that area in latitude and longitude. Datum conversion may frequently be accompanied by a change of map projection.
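Grid-based methods such as NADCON are specific to one datum pair, but many published datum conversions are expressed instead as a seven-parameter (Helmert) similarity transformation applied to geocentric Cartesian coordinates. The sketch below applies such a transformation; the translation, rotation and scale values are placeholders rather than the published parameters of any real datum pair, and the small-angle rotation matrix uses only one of the two sign conventions in common use.

```python
# Sketch of a seven-parameter (Helmert) transformation on geocentric XYZ
# coordinates.  All parameter values below are placeholders, not published
# figures for any particular datum pair.
import numpy as np

def helmert(xyz, tx, ty, tz, rx, ry, rz, scale_ppm):
    """Translate (m), rotate (small angles in radians) and scale (ppm) a point."""
    scale = 1.0 + scale_ppm * 1e-6
    # Small-angle rotation matrix in one common sign convention; the published
    # parameters for a datum pair specify which convention applies.
    R = np.array([[1.0,  rz, -ry],
                  [-rz, 1.0,  rx],
                  [ ry, -rx, 1.0]])
    return np.array([tx, ty, tz]) + scale * (R @ np.asarray(xyz, dtype=float))

# A point given in geocentric coordinates (metres) in the source datum:
p_source = [4027894.0, 307045.0, 4919105.0]
p_target = helmert(p_source, tx=-87.0, ty=-98.0, tz=-121.0,
                   rx=0.0, ry=0.0, rz=0.0, scale_ppm=0.0)
print(p_target)    # the same point expressed in the target datum
```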
Discussion and examples.
A geodetic reference datum is a known and constant surface which is used to describe the location of unknown points on Earth. Since reference datums can have different radii and different center points, a specific point on Earth can have substantially different coordinates depending on the datum used to make the measurement. There are hundreds of locally developed reference datums around the world, usually referenced to some convenient local reference point. Contemporary datums, based on increasingly accurate measurements of the shape of Earth, are intended to cover larger areas. The most common reference Datums in use in North America are NAD27, NAD83, and WGS 84.
The North American Datum of 1927 (NAD 27) is "the horizontal control datum for the United States that was defined by a location and azimuth on the Clarke spheroid of 1866, with origin at (the survey station) Meades Ranch (Kansas)." ... The geoidal height at Meades Ranch was assumed to be zero, as sufficient gravity data was not available, and this was needed to relate surface measurements to the datum. "Geodetic positions on the North American Datum of 1927 were derived from the (coordinates of and an azimuth at Meades Ranch) through a readjustment of the triangulation of the entire network in which Laplace azimuths were introduced, and the Bowie method was used." (http://www.ngs.noaa.gov/faq.shtml#WhatDatum ) NAD27 is a local referencing system covering North America.
The North American Datum of 1983 (NAD 83) is "The horizontal control datum for the United States, Canada, Mexico, and Central America, based on a geocentric origin and the Geodetic Reference System 1980 (GRS80). "This datum, designated as NAD 83 ...is based on the adjustment of 250,000 points including 600 satellite Doppler stations which constrain the system to a geocentric origin." NAD83 may be considered a local referencing system.
WGS 84 is the World Geodetic System of 1984. It is the reference frame used by the U.S. Department of Defense (DoD) and is defined by the National Geospatial-Intelligence Agency (NGA) (formerly the Defense Mapping Agency, then the National Imagery and Mapping Agency). WGS 84 is used by the DoD for all its mapping, charting, surveying, and navigation needs, including its GPS "broadcast" and "precise" orbits. WGS 84 was defined in January 1987 using Doppler satellite surveying techniques. It was used as the reference frame for broadcast GPS Ephemerides (orbits) beginning January 23, 1987. At 0000 GMT January 2, 1994, WGS 84 was upgraded in accuracy using GPS measurements. The formal name then became WGS 84 (G730), since the upgrade date coincided with the start of GPS Week 730. It became the reference frame for broadcast orbits on June 28, 1994. At 0000 GMT September 30, 1996 (the start of GPS Week 873), WGS 84 was redefined again and was more closely aligned with International Earth Rotation Service (IERS) frame ITRF 94. It was then formally called WGS 84 (G873). WGS 84 (G873) was adopted as the reference frame for broadcast orbits on January 29, 1997. Another update brought it to WGS 84 (G1674).
The WGS 84 datum, within two meters of the NAD83 datum used in North America, is the only world referencing system in place today. WGS 84 is the default standard datum for coordinates stored in recreational and commercial GPS units.
Users of GPS are cautioned that they must always check the datum of the maps they are using. To correctly enter, display, and store map-related coordinates, the datum of the map must be entered into the GPS map datum field.
Examples.
Examples of map datums are WGS 84, NAD27, NAD83, OSGB36, ED50, GDA94, and ETRS89.
Plate movement.
The Earth's tectonic plates move relative to one another in different directions at speeds on the order of a few centimetres per year. Therefore, locations on different plates are in motion relative to one another. For example, the longitudinal difference between a point on the equator in Uganda, on the African Plate, and a point on the equator in Ecuador, on the South American Plate, increases by about 0.0014 arcseconds per year. These tectonic movements likewise affect latitude.
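As a rough worked example, the quoted longitudinal drift of about 0.0014 arcseconds per year can be converted into a ground distance at the equator; the equatorial radius used below is the usual WGS 84 value.

```python
# Rough conversion of 0.0014 arcsec/yr of longitude drift into metres at the equator.
import math

R_eq = 6378137.0                          # equatorial radius in metres (WGS 84)
rate_arcsec = 0.0014                      # arcseconds per year (figure quoted above)
rate_rad = math.radians(rate_arcsec / 3600.0)

metres_per_year = rate_rad * R_eq
print(f"{metres_per_year * 100:.1f} cm per year, "
      f"{metres_per_year * 50:.1f} m over 50 years")
# About 4 cm/yr, i.e. a couple of metres of relative displacement over 50 years.
```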
If a global reference frame (such as WGS84) is used, the coordinates of a place on the surface generally will change from year to year. Most mapping, such as within a single country, does not span plates. To minimize coordinate changes for that case, a different reference frame can be used, one whose coordinates are fixed to that particular plate. Examples of these reference frames are "NAD83" for North America and "ETRS89" for Europe.
Footnotes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\phi"
},
{
"math_id": 1,
"text": "\\lambda"
},
{
"math_id": 2,
"text": "h"
},
{
"math_id": 3,
"text": "a"
},
{
"math_id": 4,
"text": "f"
},
{
"math_id": 5,
"text": "b"
},
{
"math_id": 6,
"text": "e"
},
{
"math_id": 7,
"text": "e'"
}
] | https://en.wikipedia.org/wiki?curid=995417 |
9954641 | Line moiré | Type of moiré pattern
Line moiré is one type of moiré pattern; a pattern that appears when superposing two transparent layers containing correlated opaque patterns. Line moiré is the case when the superposed patterns comprise straight or curved lines. When moving the layer patterns, the moiré patterns transform or move at a faster speed. This effect is called optical moiré speedup.
Superposition of layers with periodically repeating parallel lines.
Simple moiré patterns can be observed when superposing two transparent layers comprising periodically repeating opaque parallel lines as shown in Figure 1. The lines of one layer are parallel to the lines of the second layer.
The superposition image does not change if transparent layers with their opaque patterns are inverted. When considering printed samples, one of the layers is denoted as the base layer and the other one as the revealing layer. It is assumed that the revealing layer is printed on a transparency and is superimposed on top of the base layer, which can be printed either on a transparency or on an opaque paper. The periods of the two layer patterns are close. We denote the period of the base layer as "p"b and the period of the revealing layer as "p"r.
The superposition image of Figure 1 outlines periodically repeating dark parallel bands, called moiré lines. Spacing between the moiré lines is much larger than the periods of lines in the two layers.
Light bands of the superposition image correspond to the zones where the lines of both layers overlap. The dark bands of the superposition image forming the moiré lines correspond to the zones where the lines of the two layers interleave, hiding the white background. The labels of Figure 2 show the passages from light zones with overlapping layer lines to dark zones with interleaving layer lines. The light and dark zones are periodically interchanging.
Figure 3 shows a detailed diagram of the superposition image between two adjacent zones with overlapping lines of the revealing and base layers (i.e., between two light bands).
The period "p"m of moiré lines is the distance from one point where the lines of both layers overlap (at the bottom of the figure) to the next such point (at the top). Let us count the layer lines, starting from the bottom point. At the count 0 the lines of both layers overlap. Since in our case "p"r<"p"b, for the same number of counted lines, the base layer lines with a long period advance faster than the revealing layer lines with a short period. At the halfway of the distance "p"m, the base layer lines are ahead the revealing layer lines by a half a period ("p"r/2) of the revealing layer lines, due to which the lines are interleaving, forming a dark moiré band. At the full distance "p"m, the base layer lines are ahead of the revealing layer lines by a full period "p"r, so the lines of the layers again overlap. The base layer lines gain the distance "p"m with as many lines ("p"m/"p"b) as the number of the revealing layer lines ("p"m/"p"r) for the same distance minus one: "p"m/"p"r = "p"m/"p"b + 1. From here we obtain the well known formula for the period "p"m of the superposition image:
formula_0
For the case when the revealing layer period is longer than the base layer period, the distance between moiré bands is the absolute value computed by the formula. The superposition of two layers comprising parallel lines forms an optical image comprising parallel moiré lines with a magnified period. According to the formula for computing "p"m, the closer the periods of the two layers, the stronger the magnification factor is.
The thicknesses of layer lines affect the overall darkness of the superposition image and the thickness of the moiré bands, but the period "p"m does not depend on the layer lines’ thickness.
Speedup of movements with moiré.
The moiré bands of Figure 1 will move if we displace the revealing layer. When the revealing layer moves perpendicularly to layer lines, the moiré bands move along the same axis, but several times faster than the movement of the revealing layer.
The GIF animation shown in Figure 4 corresponds to a slow movement of the revealing layer. The GIF file repeatedly animates an upward movement of the revealing layer (perpendicular to layer lines) across a distance equal to "p"r. The animation demonstrates that the moiré lines of the superposition image move up at a speed, much faster than the movement speed of the revealing layer.
When the revealing layer is shifted up perpendicularly to the layer lines by one full period ("p"r) of its pattern, the superposition optical image must be the same as the initial one. It means that the moiré lines traverse a distance equal to the period of the superposition image "p"m while the revealing layer traverses the distance equal to its period "p"r. Assuming that the base layer is immobile ("v"b=0), the following equation represents the ratio of the optical speed to the revealing layer’s speed:
formula_1
By replacing "p"m with its formula, we have
formula_2
In case the period of the revealing layer is longer than the period of the base layer, the optical image moves in the opposite direction. The negative value of the ratio computed according to this formula signifies a movement in the reverse direction.
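The two relations above are easy to evaluate for concrete numbers. In the sketch below the base and revealing layer periods are arbitrary illustrative values differing by 5%.

```python
# Sketch of the moiré period p_m = p_b*p_r/(p_b - p_r) and the optical
# speedup v_m/v_r = p_b/(p_b - p_r).  The layer periods are illustrative.
p_b = 1.00     # base layer period (arbitrary units)
p_r = 0.95     # revealing layer period (5% shorter)

p_m = p_b * p_r / (p_b - p_r)
speedup = p_b / (p_b - p_r)

print(f"moire period p_m = {p_m:.1f} units ({p_m / p_r:.0f} revealing-layer periods)")
print(f"optical speedup  = {speedup:.0f}x the revealing layer's speed")
# A 5% period difference magnifies the pattern and its motion by a factor of 20.
```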
Superposition of layers with inclined lines.
Here we present patterns with inclined lines. When we are interested in optical speedup we can represent the case of inclined patterns such that the formulas for computing moiré periods and optical speedups remain valid in their current simplest form. For this purpose, the values of periods "p"r, "p"b, and "p"m correspond to the distances between the lines along the axis of movements (the vertical axis in the animated example of Figure 4). When the layer lines are perpendicular to the movement axis, the periods ("p") are equal to the distances (denoted as "T") between the lines (as in Figure 4). If the lines are inclined, the periods ("p") along the axis of the movement are not equal to the distances ("T") between the lines.
Computing moiré lines’ inclination as function of the inclination of layers’ lines.
The superposition of two layers with identically inclined lines forms moiré lines inclined at the same angle. Figure 5 is obtained from Figure 1 with a vertical shearing. In Figure 5 the layer lines and the moiré lines are inclined by 10 degrees. Since the inclination is not a rotation, during the inclination the distance ("p") between the layer lines along the vertical axis is conserved, but the true distance ("T") between the lines (along an axis perpendicular to these lines) is changed. The difference between the vertical periods "p"b, "p"r, and the distances "T"b, "T"r is shown in the diagram of Figure 8.
The inclination degree of layer lines may change along the horizontal axis forming curves. The superposition of two layers with identical inclination pattern forms moiré curves with the same inclination pattern. In Figure 6 the inclination degree of layer lines gradually changes according to the following sequence of degrees (+30, –30, +30, –30, +30). Layer periods "p"b and "p"r represent the distances between the curves along the vertical axis. The presented formulas for computing the period "p"m (the vertical distance between the moiré curves) and the optical speedup (along the vertical axis) are valid for Figure 6.
More interesting is the case when the inclination degrees of layer lines are not the same for the base and revealing layers. Figure 7 shows an animation of a superposition image where the inclination degree of the base layer lines is constant (10 degrees), but the inclination of the revealing layer lines oscillates between 5 and 15 degrees. The periods of the layers along the vertical axis "p"b and "p"r are the same all the time. Correspondingly, the period "p"m (along the vertical axis) computed with the basic formula also remains the same.
Figure 8 helps to compute the inclination degree of moiré optical lines as a function of the inclination of the revealing and the base layer lines. We draw the layer lines schematically without showing their true thicknesses. The bold lines of the diagram inclined by "α"b degrees are the base layer lines. The bold lines inclined by "α"r degrees are the revealing layer lines. The base layer lines are vertically spaced by a distance equal to "p"b, and the revealing layer lines are vertically spaced by a distance equal to "p"r. The distances "T"b and "T"r represent the true space between the base layer and revealing layer lines, correspondingly. The intersections of the lines of the base and the revealing layers (marked in the figure by two arrows) lie on a central axis of a light moiré band. The dashed line of Figure 8 corresponds to the axis of the light moiré band. The inclination degree of moiré lines is therefore the inclination "α"m of the dashed line.
From Figure 8 we deduce the following two equations:
formula_3
From these equations we deduce the equation for computing the inclination of moiré lines as a function of the inclinations of the base layer and the revealing layer lines:
formula_4
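As a numerical illustration, the following Python sketch evaluates this relation together with the period formula "p"m = "p"b·"p"r/("p"b − "p"r) quoted earlier; the layer periods and angles used here are arbitrary illustrative values, not parameters of the figures.
import math

def moire_inclination(p_b, p_r, alpha_b, alpha_r):
    # Inclination (degrees) of the moire lines for vertical layer periods
    # p_b, p_r and layer line inclinations alpha_b, alpha_r (degrees).
    tan_m = (p_b * math.tan(math.radians(alpha_r))
             - p_r * math.tan(math.radians(alpha_b))) / (p_b - p_r)
    return math.degrees(math.atan(tan_m))

def moire_period(p_b, p_r):
    # Vertical period p_m of the moire bands.
    return p_b * p_r / (p_b - p_r)

# Both layers inclined by 10 degrees, as in Figure 5: the moire lines
# are also inclined by 10 degrees, and the period is strongly magnified.
print(round(moire_inclination(6.0, 5.5, 10.0, 10.0), 3))  # 10.0
print(round(moire_period(6.0, 5.5), 3))                   # 66.0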
Deducing other known formulas.
The true pattern periods "T"b, "T"r, and "T"m (along the axes perpendicular to pattern lines) are computed as follows (see Figure 8):
formula_5
From here, using the formula for computing tan("α"m) with periods "p", we deduce a well known formula for computing the moiré angle "α"m with periods "T":
formula_6
From the formula for computing "p"m we deduce another well-known formula for computing the period "T"m of the moiré pattern (along the axis perpendicular to the moiré bands):
formula_7
In the particular case when "T"b="T"r="T", the formula for the period "T"m reduces to the well-known formula:
formula_8
And the formula for computing "α"m reduces to:
formula_9
The revealing lines inclination as a function of the superposition image’s lines inclination.
Here is the equation for computing the revealing layer line inclination "α"r for a given base layer line inclination "α"b, and a desired moiré line inclination "α"m:
formula_10
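A minimal Python sketch of this computation follows; the numeric values are illustrative assumptions only, chosen to echo the situation of Figure 9, where a small change in the revealing layer angle produces a much larger change in the moiré angle.
import math

def revealing_inclination(p_b, p_r, alpha_b, alpha_m):
    # Revealing-layer inclination (degrees) giving moire lines at alpha_m degrees
    # when the base-layer lines are inclined by alpha_b degrees.
    k = p_r / p_b
    tan_r = k * math.tan(math.radians(alpha_b)) + (1 - k) * math.tan(math.radians(alpha_m))
    return math.degrees(math.atan(tan_r))

# Base layer at -10 degrees, desired moire inclination +30 degrees:
print(round(revealing_inclination(6.0, 5.5, -10.0, 30.0), 1))
# -6.5 -> the revealing lines deviate from the base lines by only about 3.5 degrees,
# while the moire lines deviate from them by 40 degrees.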
For any given base layer line inclination, this equation permits us to obtain a desired moiré line inclination by properly choosing the revealing layer inclination. In Figure 6 we showed an example where the curves of layers follow an identical inclination pattern forming a superposition image with the same inclination pattern. The inclination degrees of the layers’ and moiré lines change along the horizontal axis according to the following sequence of alternating degree values (+30, –30, +30, –30, +30). In Figure 9 we obtain the same superposition pattern as in Figure 6, but with a base layer comprising straight lines inclined by –10 degrees. The revealing pattern of Figure 9 is computed by interpolating the curves into connected straight lines, where for each position along the horizontal axis, the revealing line’s inclination angle "α"r is computed as a function of "α"b and "α"m according to the equation above.
Figure 9 demonstrates that the difference between the inclination angles of revealing and base layer lines has to be several times smaller than the difference between inclination angles of moiré and base layer lines.
Another example forming the same superposition patterns as in Figure 6 and Figure 9 is shown in Figure 10. In Figure 10 the desired inclination pattern (+30, –30, +30, –30, +30) is obtained using a base layer with an inverted inclination pattern (–30, +30, –30, +30, –30).
Figure 11 shows an animation where we obtain a superposition image with a constant inclination pattern of moiré lines (+30, –30, +30, –30, +30) for continuously modifying pairs of base and revealing layers. The base layer inclination pattern gradually changes and the revealing layer inclination pattern correspondingly adapts such that the superposition image’s inclination pattern remains the same. | [
{
"math_id": 0,
"text": "p_m=\\frac{p_b \\cdot p_r}{p_b - p_r}."
},
{
"math_id": 1,
"text": "\\frac{v_m}{v_r}=\\frac{p_m}{p_r}."
},
{
"math_id": 2,
"text": "\\frac{v_m}{v_r}=\\frac{p_b}{p_b-p_r}."
},
{
"math_id": 3,
"text": "\n\\begin{cases}\n \\tan \\alpha_m=\\frac{p_b+l \\cdot \\tan\\alpha_b}{l} \\\\\n \\tan \\alpha_r=\\frac{p_b-p_r+l \\cdot \\tan\\alpha_b}{l}\n\\end{cases}\n"
},
{
"math_id": 4,
"text": " \\tan\\alpha_m=\\frac{p_b \\cdot \\tan\\alpha_r - p_r \\cdot \\tan\\alpha_b}{p_b-p_r} "
},
{
"math_id": 5,
"text": "T_b=p_b \\cdot \\cos\\alpha_b,\\ T_r=p_r \\cdot\\cos\\alpha_r,\\ T_m=p_m \\cdot\\cos\\alpha_m"
},
{
"math_id": 6,
"text": " \\alpha_m=\\arctan\\left(\\frac{T_b\\cdot\\sin\\alpha_r-T_r\\cdot\\sin\\alpha_b}{T_b \\cdot\\cos\\alpha_r-T_r\\cdot\\cos\\alpha_b}\\right)"
},
{
"math_id": 7,
"text": "T_m=\\frac{T_b \\cdot T_r}{\\sqrt{T_b^2+T_r^2-2 \\cdot T_b \\cdot T_r \\cdot \\cos(\\alpha_r-\\alpha_b)}}"
},
{
"math_id": 8,
"text": "T_m=\\frac{T}{2 \\cdot \\sin \\left ( \\frac{\\alpha_r-\\alpha_b}{2} \\right )}"
},
{
"math_id": 9,
"text": "\\alpha_m=90^\\circ +\\frac{\\alpha_r+\\alpha_b}{2}"
},
{
"math_id": 10,
"text": " \\tan\\alpha_r=\\frac{p_r}{p_b} \\cdot \\tan\\alpha_b+\\left(1-\\frac{p_r}{p_b}\\right)\\cdot \\tan\\alpha_m."
}
] | https://en.wikipedia.org/wiki?curid=9954641 |
9955 | Nitrox | Breathing gas, mixture of nitrogen and oxygen
Nitrox refers to any gas mixture composed (excepting trace gases) of nitrogen and oxygen. This includes atmospheric air, which is approximately 78% nitrogen, 21% oxygen, and 1% other gases, primarily argon. In the usual application, underwater diving, nitrox is normally distinguished from air and handled differently. The most common use of nitrox mixtures containing oxygen in higher proportions than atmospheric air is in scuba diving, where the reduced partial pressure of nitrogen is advantageous in reducing nitrogen uptake in the body's tissues, thereby extending the practicable underwater dive time by reducing the decompression requirement, or reducing the risk of decompression sickness (also known as "the bends").
Nitrox is used to a lesser extent in surface-supplied diving, as these advantages are reduced by the more complex logistical requirements for nitrox compared to the use of simple low-pressure compressors for breathing gas supply. Nitrox can also be used in hyperbaric treatment of decompression illness, usually at pressures where pure oxygen would be hazardous. Nitrox is not a safer gas than compressed air in all respects; although its use can reduce the risk of decompression sickness, it increases the risks of oxygen toxicity and fire.
Though not generally referred to as nitrox, an oxygen-enriched air mixture is routinely provided at normal surface ambient pressure as oxygen therapy to patients with compromised respiration and circulation.
Physiological effects under pressure.
Decompression benefits.
Reducing the proportion of nitrogen by increasing the proportion of oxygen reduces the risk of decompression sickness for the same dive profile, or allows extended dive times without increasing the need for decompression stops for the same risk. The significant aspect of extended no-stop time when using nitrox mixtures is reduced risk in a situation where breathing gas supply is compromised, as the diver can make a direct ascent to the surface with an acceptably low risk of decompression sickness. The exact values of the extended no-stop times vary depending on the decompression model used to derive the tables, but as an approximation, it is based on the partial pressure of nitrogen at the dive depth. This principle can be used to calculate an equivalent air depth (EAD) with the same partial pressure of nitrogen as the mix to be used, and this depth is less than the actual dive depth for oxygen enriched mixtures. The equivalent air depth is used with air decompression tables to calculate decompression obligation and no-stop times. The Goldman decompression model predicts a significant risk reduction by using nitrox (more so than the PADI tables suggest).
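As an illustration of the equivalent air depth idea, the following Python sketch uses the common metric approximation of 10 m of seawater per bar and assumes air contains 79% inert gas; published tables may use slightly different constants.
def equivalent_air_depth(depth_m, o2_fraction):
    # Depth at which air would give the same nitrogen partial pressure
    # as the nitrox mix at the actual depth (metres of seawater).
    return (depth_m + 10.0) * (1.0 - o2_fraction) / 0.79 - 10.0

# EAN32 at 27 m loads nitrogen roughly like air at about 22 m.
print(round(equivalent_air_depth(27.0, 0.32), 1))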
Nitrogen narcosis.
Controlled tests have not shown breathing nitrox to reduce the effects of nitrogen narcosis, as oxygen seems to have narcotic properties under pressure similar to those of nitrogen; thus one should not expect a reduction in narcotic effects due only to the use of nitrox. Nonetheless, there are people in the diving community who insist that they feel reduced narcotic effects at depths breathing nitrox. This may be due to a dissociation of the subjective and behavioural effects of narcosis. Although oxygen appears chemically more narcotic at the surface, relative narcotic effects at depth have never been studied in detail, but it is known that different gases produce different narcotic effects as depth increases. Helium has no narcotic effect, but results in HPNS when breathed at high pressures, which does not happen with gases that have greater narcotic potency. However, because of risks associated with oxygen toxicity, divers do not usually use nitrox at greater depths where more pronounced narcosis symptoms are more likely to occur. For deep diving, trimix or heliox gases are typically used; these gases contain helium to reduce the amount of narcotic gases in the mixture.
Oxygen toxicity.
Diving with and handling nitrox raise a number of potentially fatal dangers due to the high partial pressure of oxygen (ppO2). Nitrox is not a deep-diving gas mixture owing to the increased proportion of oxygen, which becomes toxic when breathed at high pressure. For example, the maximum operating depth of nitrox with 36% oxygen, a popular recreational diving mix, is about 29 metres (95 ft), to ensure a maximum ppO2 of no more than 1.4 bar. The exact value of the maximum allowed ppO2 and maximum operating depth varies depending on factors such as the training agency, the type of dive, the breathing equipment and the level of surface support, with professional divers sometimes being allowed to breathe higher ppO2 than those recommended to recreational divers.
To dive safely with nitrox, the diver must learn good buoyancy control, a vital part of scuba diving in its own right, and a disciplined approach to preparing, planning and executing a dive to ensure that the ppO2 is known, and the maximum operating depth is not exceeded. Many dive shops, dive operators, and gas blenders (individuals trained to blend gases) require the diver to present a nitrox certification card before selling nitrox to divers.
Some training agencies, such as PADI and Technical Diving International, teach the use of two depth limits to protect against oxygen toxicity. The shallower depth is called the "maximum operating depth" and is reached when the partial pressure of oxygen in the breathing gas reaches 1.4 bar. The deeper depth, called the "contingency depth", is reached when the partial pressure reaches 1.6 bar. Diving at or beyond this level exposes the diver to a greater risk of central nervous system (CNS) oxygen toxicity. This can be extremely dangerous since its onset is often without warning and can lead to drowning, as the regulator may be spat out during convulsions, which occur in conjunction with sudden unconsciousness (general seizure induced by oxygen toxicity).
Divers trained to use nitrox may memorise the acronym VENTID-C or sometimes ConVENTID, (which stands for Vision (blurriness), Ears (ringing sound), Nausea, Twitching, Irritability, Dizziness, and Convulsions). However, evidence from non-fatal oxygen convulsions indicates that most convulsions are not preceded by any warning symptoms at all. Further, many of the suggested warning signs are also symptoms of nitrogen narcosis, and so may lead to misdiagnosis by a diver. A solution to either is to ascend to a shallower depth.
Carbon dioxide retention.
Use of nitrox may cause a reduced ventilatory response, and when breathing dense gas at the deeper limits of the usable range, this may result in carbon dioxide retention when exercise levels are high, with an increased risk of loss of consciousness.
Other effects.
There is anecdotal evidence that the use of nitrox reduces post-dive fatigue, particularly in older and/or obese divers; however, a double-blind study to test this found no statistically significant reduction in reported fatigue. There was, however, some suggestion that post-dive fatigue is due to sub-clinical decompression sickness (DCS) (i.e. micro bubbles in the blood insufficient to cause symptoms of DCS); the fact that the study mentioned was conducted in a dry chamber with an ideal decompression profile may have been sufficient to reduce sub-clinical DCS and prevent fatigue in both nitrox and air divers. In 2008, a study using wet divers at the same depth was published; again, no statistically significant reduction in reported fatigue was seen.
Further studies with a number of different dive profiles, and also different levels of exertion, would be necessary to fully investigate this issue. For example, there is much better scientific evidence that breathing high-oxygen gases increases exercise tolerance, during aerobic exertion. Though even moderate exertion while breathing from the regulator is a relatively uncommon occurrence in recreational scuba, as divers usually try to minimize it in order to conserve gas, episodes of exertion while regulator-breathing do occasionally occur in recreational diving. Examples are surface-swimming a distance to a boat or beach after surfacing, where residual "safety" cylinder gas is often used freely, since the remainder will be wasted anyway when the dive is completed, and unplanned contingencies due to currents or buoyancy problems. It is possible that these so-far un-studied situations have contributed to some of the positive reputation of nitrox.
A 2010 study using critical flicker fusion frequency and perceived fatigue criteria found that diver alertness after a dive on nitrox was significantly better than after an air dive.
Uses.
Underwater diving.
Enriched Air Nitrox, nitrox with an oxygen content above 21%, is mainly used in scuba diving to reduce the proportion of nitrogen in the breathing gas mixture. The main benefit is reduced decompression risk. To a considerably lesser extent it is also used in surface supplied diving, where the logistics are relatively complex, similar to the use of other diving gas mixtures like heliox and trimix.
Training and certification.
Recreational nitrox certification (Nitrox diver) allows the diver to use a single nitrox gas mixture with 40% or less oxygen by volume on a dive without obligatory decompression. The reason for using nitrox on this type of dive profile can be to extend the no-decompression limit, and for shorter dives, to reduce the decompression stress. The course is short, with a theory module on the risks of oxygen toxicity and the calculation of maximum operating depth, and a practical module of generally two dives using nitrox. It is one of the most popular further training programmes for entry level divers as it makes longer dives possible at a large number of popular sites.
Advanced nitrox certification requires competence to carry two nitrox mixtures in separate scuba sets, and to use the richer mix for accelerated decompression at the end of the dive, switching gases underwater at the correct planned depth and selecting the new gas on the dive computer if one is carried. For the purposes of the certification any mixture from air to nominally 100% oxygen may be used, though at least one agency prefers to limit oxygen fraction to 80% as they consider this has a lower risk for oxygen toxicity.
Therapeutic recompression.
Nitrox50 is used as one of the options in the first stages of therapeutic recompression using the Comex therapeutic table CX 30 for treatment of vestibular or general decompression sickness. Nitrox is breathed at 30 msw and 24 msw, and during the ascents from these depths to the next stop. At 18 m the gas is switched to oxygen for the rest of the treatment.
Medicine, mountaineering and unpressurised aircraft.
The use of oxygen at high altitudes or as oxygen therapy may be as supplementary oxygen, added to the inspired air, which would technically be a use of nitrox, blended on site, but this is not normally referred to as such, as the gas provided for the purpose is oxygen.
Terminology.
Nitrox is known by many names: Enriched Air Nitrox, Oxygen Enriched Air, Nitrox, EANx or Safe Air. Since the word is a compound contraction or coined word and not an acronym, it should not be written in all upper case characters as "NITROX", but may be initially capitalized when referring to specific mixtures such as Nitrox32, which contains 68% nitrogen and 32% oxygen. When one figure is stated, it refers to the oxygen percentage, not the nitrogen percentage. The original convention, Nitrox68/32, became shortened to Nitrox32, as the first figure is redundant.
The term "nitrox" was originally used to refer to the breathing gas in a seafloor habitat where the oxygen has to be kept to a lower fraction than in air to avoid long term oxygen toxicity problems. It was later used by Dr Morgan Wells of NOAA for mixtures with an oxygen fraction higher than air, and has become a generic term for binary mixtures of nitrogen and oxygen with any oxygen fraction, and in the context of recreational and technical diving, now usually refers to a mixture of nitrogen and oxygen with more than 21% oxygen. "Enriched Air Nitrox" or "EAN", and "Oxygen Enriched Air" are used to emphasize richer than air mixtures. In "EANx", the "x" was originally the x of nitrox, but has come to indicate the percentage of oxygen in the mix and is replaced by a number when the percentage is known; for example, a 40% oxygen mix is called EAN40. The two most popular blends are EAN32 and EAN36, developed by NOAA for scientific diving, and also named Nitrox I and Nitrox II, respectively, or Nitrox68/32 and Nitrox64/36. These two mixtures were first utilized to the depth and oxygen limits for scientific diving designated by NOAA at the time.
The term Oxygen Enriched Air (OEN) was accepted by the (American) scientific diving community, but although it is probably the most unambiguous and simply descriptive term yet proposed, it was resisted by the recreational diving community, sometimes in favour of less appropriate terminology.
In its early days of introduction to non-technical divers, nitrox has occasionally also been known by detractors by less complimentary terms, such as "devil gas" or "voodoo gas" (a term now sometimes used with pride).
American Nitrox Divers International (ANDI) uses the term "SafeAir", which they define as any oxygen-enriched air mixture with O2 concentrations between 22% and 50% that meets their gas quality and handling specifications; they specifically claim that these mixtures are safer than normally produced breathing air for an end user who is not involved in producing the mix. Considering the complexities and hazards of mixing, handling, analyzing, and using oxygen-enriched air, this name is considered inappropriate by those who consider that it is not inherently "safe", but merely has decompression advantages.
The constituent gas percentages are what the gas blender aims for, but the final actual mix may vary from the specification, and so a small flow of gas from the cylinder must be measured with an oxygen analyzer, before the cylinder is used underwater.
MOD.
Maximum Operating Depth (MOD) is the maximum safe depth at which a given nitrox mixture can be used. MOD depends on the allowed partial pressure of oxygen, which is related to exposure time and the acceptable risk assumed for central nervous system oxygen toxicity. The acceptable maximum ppO2 varies depending on the application.
Higher values are used by commercial and military divers in special circumstances, often when the diver uses surface supplied breathing apparatus, or for treatment in a chamber, where the airway is relatively secure.
Choice of mixture.
The two most common recreational diving nitrox mixes contain 32% and 36% oxygen, which have maximum operating depths (MODs) of about 34 metres (112 ft) and 29 metres (95 ft) respectively when limited to a maximum partial pressure of oxygen of 1.4 bar. Divers may calculate an equivalent air depth to determine their decompression requirements or may use nitrox tables or a nitrox-capable dive computer.
Nitrox with more than 40% oxygen is uncommon within recreational diving. There are two main reasons for this: the first is that all pieces of diving equipment that come into contact with mixes containing higher proportions of oxygen, particularly at high pressure, need special cleaning and servicing to reduce the risk of fire. The second reason is that richer mixes extend the time the diver can stay underwater without needing decompression stops far further than the duration permitted by the capacity of typical diving cylinders. For example, based on the PADI nitrox recommendations, the maximum operating depth for EAN45 would be about 21 metres (69 ft), and the maximum dive time available at this depth even with EAN36 is nearly 1 hour 15 minutes: a diver with a breathing rate of 20 litres per minute using twin 10-litre, 230-bar (about double 85 cu. ft.) cylinders would have completely emptied the cylinders after 1 hour 14 minutes at this depth.
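The gas-endurance arithmetic behind this example can be sketched as follows in Python; the sketch assumes 10 m of seawater per bar and ignores any reserve pressure, so it is only an approximation.
def gas_endurance_minutes(cylinder_litres, fill_bar, surface_rate_lpm, depth_m):
    # Approximate time to empty the cylinders at depth.
    available_litres = cylinder_litres * fill_bar
    ambient_bar = 1.0 + depth_m / 10.0
    return available_litres / (surface_rate_lpm * ambient_bar)

# Twin 10-litre cylinders at 230 bar, 20 L/min surface rate, at about 21 m:
print(round(gas_endurance_minutes(2 * 10, 230, 20, 21)))  # about 74 minutes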
Use of nitrox mixtures containing 50% to 80% oxygen is common in technical diving as decompression gas, which by virtue of its lower partial pressure of inert gases such as nitrogen and helium, allows for more efficient (faster) elimination of these gases from the tissues than leaner oxygen mixtures.
In deep open circuit technical diving, where hypoxic gases are breathed during the bottom portion of the dive, a Nitrox mix with 50% or less oxygen called a "travel mix" is sometimes breathed during the beginning of the descent in order to avoid hypoxia. Normally, however, the most oxygen-lean of the diver's decompression gases would be used for this purpose, since descent time spent reaching a depth where bottom mix is no longer hypoxic is normally small, and the distance between this depth and the MOD of any nitrox decompression gas is likely to be very short, if it occurs at all.
Best mix.
The composition of a nitrox mix can be optimized for a given planned dive profile. This is termed "Best mix", for the dive, and provides the maximum no-decompression time compatible with acceptable oxygen exposure. An acceptable maximum partial pressure of oxygen is selected based on depth and planned bottom time, and this value is used to calculate the oxygen content of the best mix for the dive:
formula_0
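A minimal Python sketch of the best-mix and maximum operating depth calculations, assuming the metric approximation of 10 m of seawater per bar and an illustrative ppO2 limit of 1.4 bar (the acceptable limit varies with the agency and the application):
def best_mix(depth_m, max_ppo2=1.4):
    # Highest oxygen fraction usable at the planned depth.
    ambient_bar = 1.0 + depth_m / 10.0
    return max_ppo2 / ambient_bar

def mod(o2_fraction, max_ppo2=1.4):
    # Maximum operating depth in metres for a given oxygen fraction.
    return 10.0 * (max_ppo2 / o2_fraction - 1.0)

print(round(best_mix(30.0), 2))  # 0.35 -> EAN35 is the "best mix" for 30 m at 1.4 bar
print(round(mod(0.36), 1))       # 28.9 -> MOD of EAN36 at 1.4 bar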
Production.
There are several methods of production; the three most commonly applied are continuous blending, partial pressure blending, and membrane separation systems.
Cylinder markings to identify contents.
Any diving cylinder containing a blend of gases other than standard air is required by most diver training organizations, and some national governments, to be clearly marked to indicate the current gas mixture. In practice it is common to use a printed adhesive label to indicate the type of gas (in this case nitrox), and to add a temporary label to specify the analysis of the current mix.
Training standards for nitrox certification suggest the composition must be verified by the diver by using an oxygen analyzer before use.
Regional standards and conventions.
European Union.
Within the EU, valves with M26x2 outlet thread are recommended for cylinders with increased oxygen content. Regulators for use with these cylinders require compatible connectors, and are not directly connectable with cylinders for compressed air.
Germany.
A nitrox cylinder is specially cleaned and identified. According to EN 144-3 the cylinder colour is overall white with the letter N on opposite sides of the cylinder. The fraction of oxygen in the bottle is checked after filling and marked on the cylinder.
South Africa.
South African National Standard 10019:2008 specifies the colour of all scuba cylinders as Golden yellow with French gray shoulder. This applies to all underwater breathing gases except medical oxygen, which must be carried in cylinders that are Black with a White shoulder. Nitrox cylinders must be identified by a transparent, self-adhesive label with green lettering, fitted below the shoulder. In effect this is green lettering on a yellow cylinder, with a gray shoulder. The composition of the gas must also be specified on the label. In practice this is done by a small additional self-adhesive label marked with the measured oxygen fraction, which is changed when a new mix is filled.
The 2021 revision of SANS 10019 changed the colour specification to Light navy grey for the shoulder, and a different label specification which includes hazard symbols for high pressure and oxidising materials.
United States.
Every nitrox cylinder should also have a sticker stating whether or not the cylinder is "oxygen clean" and suitable for partial pressure blending. Any oxygen-clean cylinder may have any mix up to 100% oxygen inside. If by some accident an oxygen-clean cylinder is filled at a station that does not supply gas to oxygen-clean standards it is then considered contaminated and must be re-cleaned before a gas containing more than 40% oxygen may again be added. Cylinders marked as 'not oxygen clean' may only be filled with oxygen-enriched air mixtures from membrane or stick blending systems where the gas is mixed before being added to the cylinder, and to an oxygen fraction not exceeding 40% by volume.
Hazards.
Nitrox can be a hazard to the blender and to the user, for different reasons.
Fire and toxic cylinder contamination from oxygen reactions.
Partial pressure blending using pure oxygen decanted into the cylinder before topping up with air may involve very high oxygen fractions and oxygen partial pressures during the decanting process, which constitute a relatively high fire hazard. This procedure requires care and precautions by the operator, and decanting equipment and cylinders which are clean for oxygen service, but the equipment is relatively simple and inexpensive. Partial pressure blending using pure oxygen is often used to provide nitrox on live-aboard dive boats, but it is also used in some dive shops and clubs.
Any gas which contains a significantly larger percentage of oxygen than air is a fire hazard, and such gases can react with hydrocarbons or lubricants and sealing materials inside the filling system to produce toxic gases, even if a fire is not apparent. Some organisations exempt equipment from oxygen-clean standards if the oxygen fraction is limited to 40% or less.
Among recreational training agencies, only ANDI subscribes to the guideline of requiring oxygen cleaning for equipment used with more than 23% oxygen fraction. The USCG, NOAA, U.S. Navy, OSHA, and the other recreational training agencies accept the limit as 40% as no accident or incident has been known to occur when this guideline has been properly applied. Tens of thousands of recreational divers are trained each year and the overwhelming majority of these divers are taught the "over 40% rule". Most nitrox fill stations which supply pre-mixed nitrox will fill cylinders with mixtures below 40% without certification of cleanliness for oxygen service. Luxfer cylinders specify oxygen cleaning for all mixtures exceeding 23.5% oxygen.
The following references for oxygen cleaning specifically cite the "over 40%" guideline that has been in widespread use since the 1960s, and consensus at the 1992 Enriched Air Workshop was to accept that guideline and continue the status quo.
Much of the confusion appears to be a result of misapplying PVHO (pressure vessel for human occupancy) guidelines which prescribe a maximum ambient oxygen content of 25% when a human is sealed into a pressure vessel (chamber). The concern here is for a fire hazard to a living person who could be trapped in an oxygen-rich burning environment.
Of the three commonly applied methods of producing enriched air mixes – continuous blending, partial pressure blending, and membrane separation systems – only partial pressure blending would require the valve and cylinder components to be oxygen cleaned for mixtures with less than 40% oxygen. The other two methods ensure that the equipment is never subjected to greater than 40% oxygen content.
In a fire, the pressure in a gas cylinder rises in direct proportion to its absolute temperature. If the internal pressure exceeds the mechanical limitations of the cylinder and there are no means to safely vent the pressurized gas to the atmosphere, the vessel will fail mechanically. If the vessel contents are ignitable or a contaminant is present this event may result in a "fireball".
Incorrect gas mix.
Use of a gas mix that differs from the planned mix introduces an increased risk of decompression sickness or an increased risk of oxygen toxicity, depending on the error. It may be possible to simply recalculate the dive plan or set the dive computer accordingly, but in some cases the planned dive may not be practicable.
Many training agencies such as PADI, CMAS, SSI and NAUI train their divers to personally check the oxygen percentage content of each nitrox cylinder before every dive. If the oxygen percentage deviates by more than 1% from the planned mix, the diver must either recalculate the dive plan with the actual mix, or else abort the dive to avoid increased risk of oxygen toxicity or decompression sickness. Under IANTD and ANDI rules for use of nitrox, which are followed by dive resorts around the world, filled nitrox cylinders are signed out personally in a blended gas records book, which contains, for each cylinder and fill, the cylinder number, the measured oxygen fraction by percentage, the calculated maximum operating depth for that mix, and the signature of the receiving diver, who should have personally measured the oxygen fraction before taking delivery. All of these steps reduce risk but increase complexity of operations as each diver must use the specific cylinder they have checked out. In South Africa, the national standard for handling and filling portable cylinders with pressurised gases (SANS 10019) requires that the cylinder be labelled with a sticker identifying the contents as nitrox, and specifying the oxygen fraction. Similar requirements may apply in other countries.
History.
In 1874, Henry Fleuss made what was possibly the first Nitrox dive using a rebreather.
In 1911, Draeger of Germany tested an injector operated rebreather backpack for a standard diving suit. This concept was produced and marketed as the DM20 oxygen rebreather system and the DM40 nitrox rebreather system, in which air from one cylinder and oxygen from a second cylinder were mixed during injection through a nozzle which circulated the breathing gas through the scrubber and the rest of the loop. The DM40 was rated for depths up to 40m.
Christian J. Lambertsen proposed calculations for nitrogen addition to prevent oxygen toxicity in divers utilizing nitrogen–oxygen rebreather diving.
In World War II or soon after, British commando frogmen and clearance divers started occasionally diving with oxygen rebreathers adapted for semi-closed-circuit nitrox (which they called "mixture") diving by fitting larger cylinders and carefully setting the gas flow rate using a flow meter. These developments were kept secret until independently duplicated by civilians in the 1960s.
Lambertsen published a paper on nitrox in 1947.
In the 1950s, the United States Navy (USN) documented enriched oxygen gas procedures for military use of what we today call nitrox, in the US Navy Diving Manual.
In 1955, E. Lanphier described the use of nitrogen–oxygen diving mixtures, and the equivalent air depth method for calculating decompression from air tables.
In the 1960s, A. Galerne used on-line blending for commercial diving.
In 1970, Morgan Wells, who was the first director of the National Oceanographic and Atmospheric Administration (NOAA) Diving Center, began instituting diving procedures for oxygen-enriched air. He introduced the concept of Equivalent Air Depth (EAD). He also developed a process for mixing oxygen and air which he called a continuous blending system. For many years Wells' invention was the only practical alternative to partial pressure blending. In 1979 NOAA published Wells' procedures for the scientific use of nitrox in the NOAA Diving Manual.
In 1985 Dick Rutkowski, a former NOAA diving safety officer, formed IAND (International Association of Nitrox Divers) and began teaching nitrox use for recreational diving. This was considered dangerous by some, and met with heavy skepticism by the diving community.
In 1989, the Harbor Branch Oceanographic institution workshop addressed blending, oxygen limits and decompression issues.
In 1991, Bove, Bennett and "Skin Diver" magazine took a stand against nitrox use for recreational diving. "Skin Diver" editor Bill Gleason dubbed nitrox the "Voodoo Gas". The annual DEMA show (held in Houston, Texas that year) banned nitrox training providers from the show. This caused a backlash, and when DEMA relented, a number of organizations took the opportunity to present nitrox workshops outside the show.
In 1992, the Scuba Diving Resources Group organised a workshop where some guidelines were established, and some misconceptions addressed.
In 1992, BSAC banned its members from using nitrox during BSAC activities. IAND's name was changed to the International Association of Nitrox and Technical Divers (IANTD), the T being added when the European Association of Technical Divers (EATD) merged with IAND. In the early 1990s, these agencies were teaching nitrox, but the main scuba agencies were not. Additional new organizations, including the American Nitrox Divers International (ANDI) – which invented the term "Safe Air" for marketing purposes – and Technical Diving International (TDI) were begun. NAUI became the first existing major recreational diver training agency to sanction nitrox.
In 1993, the Sub-Aqua Association was the first UK recreational diving training agency to acknowledge and endorse the Nitrox training their members had undertaken with one of the tech agencies. The SAA's first recreational Nitrox qualification was issued in April 1993. The SAA's first Nitrox instructor was Vic Bonfante and he was certified in September 1993.
Meanwhile, diving stores were finding a purely economic reason to offer nitrox: not only was an entire new course and certification needed to use it, but instead of cheap or free tank fills with compressed air, dive shops found they could charge premium amounts of money for custom-gas blending of nitrox to their ordinary, moderately experienced divers. With the new dive computers which could be programmed to allow for the longer bottom-times and shorter residual nitrogen times that nitrox gave, the incentive for the sport diver to use the gas increased.
In 1993, "Skin Diver" magazine, the leading recreational diving publication at the time, published a three-part series arguing that nitrox was unsafe for sport divers. DiveRite manufactured the first nitrox-compatible dive computer, called the Bridge, the aquaCorps TEK93 conference was held in San Francisco, and a practicable oil limit of 0.1 mg/m3 for oxygen compatible air was set. The Canadian armed forces issued EAD tables with an upper PO2 of 1.5 ATA.
In 1994, John Lamb and Vandagraph launched the first oxygen analyser built specifically for Nitrox and mixed-gas divers, at the Birmingham Dive Show.
In 1994, BSAC reversed its policy on Nitrox and announced BSAC nitrox training to start in 1995.
In 1996, the Professional Association of Diving Instructors (PADI) announced full educational support for nitrox. While other mainline scuba organizations had announced their support of nitrox earlier, it was PADI's endorsement that established nitrox as a standard recreational diving option.
In 1997, ProTec started with Nitrox 1 (recreational) and Nitrox 2 (technical). A German-language ProTec Nitrox manual has been published, now in its 6th edition.
In 1999, a survey by R.W. Hamilton showed that over hundreds of thousands of nitrox dives, the DCS record is good. Nitrox had become popular with recreational divers, but not used much by commercial divers who tend to use surface supplied breathing apparatus. The OSHA accepted a petition for a variance from the commercial diving regulations for recreational scuba instructors.
The 2001 edition of the "NOAA Diving Manual" included a chapter intended for Nitrox training.
In nature.
At times in the geological past, the Earth's atmosphere contained much more than 20% oxygen: e.g. up to 35% in the Upper Carboniferous period. This let animals absorb oxygen more easily and influenced their evolutionary patterns.
| [
{
"math_id": 0,
"text": "f_{\\text{O}_2,\\text{max}} = \\frac{p_{\\text{O}_2,\\text{max}}}{p} = \\frac{\\text{Maximum acceptable partial pressure of oxygen}}{\\text{Maximum ambient pressure of the dive}}"
}
] | https://en.wikipedia.org/wiki?curid=9955 |
9956299 | Circular mean | Method for calculating average values
In mathematics and statistics, a circular mean or angular mean is a mean designed for angles and similar cyclic quantities, such as times of day, and fractional parts of real numbers.
This is necessary since most of the usual means may not be appropriate on angle-like quantities. For example, the arithmetic mean of 0° and 360° is 180°, which is misleading because 360° equals 0° modulo a full cycle. As another example, the "average time" between 11 PM and 1 AM is either midnight or noon, depending on whether the two times are part of a single night or part of a single calendar day.
The circular mean is one of the simplest examples of directional statistics and of statistics of non-Euclidean spaces.
This computation produces a different result than the arithmetic mean, with the difference being greater when the angles are widely distributed. For example, the arithmetic mean of the three angles 0°, 0°, and 90° is (0° + 0° + 90°) / 3 = 30°, but the vector mean is arctan(1/2) = 26.565°. Moreover, with the arithmetic mean the circular variance is only defined within ±180°.
Definition.
Since the arithmetic mean is not always appropriate for angles, the following method can be used to obtain both a mean value and measure for the variance of the angles:
Convert all angles to corresponding points on the unit circle, e.g., formula_0 to formula_1. That is, convert polar coordinates to Cartesian coordinates. Then compute the arithmetic mean of these points. The resulting point will lie within the unit disk but generally not on the unit circle. Convert that point back to polar coordinates. The angle is a reasonable mean of the input angles. The resulting radius will be 1 if all angles are equal. If the angles are uniformly distributed on the circle, then the resulting radius will be 0, and there is no circular mean. (In fact, it is impossible to define a continuous mean operation on the circle.) In other words, the radius measures the concentration of the angles.
Given the angles formula_2 a common formula of the mean using the atan2 variant of the arctangent function is
formula_3
Using complex arithmetic.
An equivalent definition can be formulated using complex numbers:
formula_4.
In order to match the above derivation using arithmetic means of points, the sums would have to be divided by formula_5. However, the scaling does not matter for formula_6 and formula_7, thus it can be omitted.
This may be more succinctly stated by realizing that directional data are in fact vectors of unit length. In the case of one-dimensional data, these data points can be represented conveniently as complex numbers of unit magnitude formula_8, where formula_9 is the measured angle. The mean resultant vector for the sample is then:
formula_10
The sample mean angle is then the argument of the mean resultant:
formula_11
The length of the sample mean resultant vector is:
formula_12
and will have a value between 0 and 1. Thus the sample mean resultant vector can be represented as:
formula_13
Similar calculations are also used to define the circular variance.
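As a brief illustration of this complex-number formulation, the following NumPy sketch computes the mean resultant vector, the sample mean angle, and the mean resultant length for three arbitrary sample angles; the last line uses one common definition of the circular variance, 1 − R.
import numpy as np

theta = np.radians([10, 20, 30])        # sample angles in radians
rho = np.mean(np.exp(1j * theta))       # mean resultant vector as a complex number
mean_angle = np.degrees(np.angle(rho))  # sample mean angle
R = np.abs(rho)                         # mean resultant length, between 0 and 1
circular_variance = 1 - R
print(round(mean_angle, 1), round(R, 3), round(circular_variance, 3))  # 20.0 0.99 0.01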
Properties.
The circular mean, formula_14, minimizes the following sum of circular distances to the sample angles:
formula_15
The distance formula_16 is equal to half the squared Euclidean distance between the two points on the unit circle associated with formula_17 and formula_18.
Example.
A simple way to calculate the mean of a series of angles (in the interval [0°, 360°)) is to calculate the mean of the cosines and sines of each angle, and obtain the angle by calculating the inverse tangent. Consider the following three angles as an example: 10, 20, and 30 degrees. Intuitively, calculating the mean would involve adding these three angles together and dividing by 3, in this case indeed resulting in a correct mean angle of 20 degrees. By rotating this system anticlockwise through 15 degrees the three angles become 355 degrees, 5 degrees and 15 degrees. The arithmetic mean is now 125 degrees, which is the wrong answer, as it should be 5 degrees. The vector mean formula_19 can be calculated in the following way, using the mean sine formula_20 and the mean cosine formula_21:
formula_22
formula_23
formula_24
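The rotated example can be checked numerically with the atan2 form of the definition:
import math

angles = [355, 5, 15]  # the rotated example above, in degrees
s = sum(math.sin(math.radians(a)) for a in angles)
c = sum(math.cos(math.radians(a)) for a in angles)
print(round(math.degrees(math.atan2(s, c)), 1))  # 5.0 (the arithmetic mean would give 125)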
Implementation.
The following Python code uses hours of the day to compute their circular mean:
import math
def circular_mean(hours):
    # Convert hours to radians.
    # To convert from hours to degrees, multiply the hour by 360/24 = 15.
    radians = [math.radians(hour * 15) for hour in hours]
    # Calculate the sums of the sin and cos values.
    sin_sum = sum(math.sin(rad) for rad in radians)
    cos_sum = sum(math.cos(rad) for rad in radians)
    # Calculate the circular mean using atan2.
    mean_rad = math.atan2(sin_sum, cos_sum)
    # Convert the mean back to hours.
    mean_hour = (math.degrees(mean_rad) / 15) % 24
    return mean_hour

hours = [0, 12, 18]
mean_hour = circular_mean(hours)
print("First Circular mean:", round(mean_hour, 2))
hours = [0, 12]
mean_hour = circular_mean(hours)
print("Second Circular mean:", round(mean_hour, 2))
hours = [0, 0, 12, 12, 24]
mean_hour = circular_mean(hours)
print("Third Circular mean:", round(mean_hour, 2))
Generalizations.
Weighted spherical mean.
A weighted spherical mean can be defined based on spherical linear interpolation.
| [
{
"math_id": 0,
"text": "\\alpha"
},
{
"math_id": 1,
"text": "(\\cos\\alpha,\\sin\\alpha)"
},
{
"math_id": 2,
"text": "\\alpha_1,\\dots,\\alpha_n"
},
{
"math_id": 3,
"text": "\\bar{\\alpha} = \\operatorname{atan2}\\left(\\frac{1}{n}\\sum_{j=1}^n \\sin\\alpha_j, \\frac{1}{n}\\sum_{j=1}^n \\cos\\alpha_j\\right) = \\operatorname{atan2}\\left(\\sum_{j=1}^n \\sin\\alpha_j, \\sum_{j=1}^n \\cos\\alpha_j\\right) "
},
{
"math_id": 4,
"text": "\\bar{\\alpha} = \\arg\\left(\\frac{1}{n}\\sum_{j=1}^n \\exp(i \\cdot\\alpha_j)\\right) = \\arg\\left(\\sum_{j=1}^n \\exp(i \\cdot\\alpha_j)\\right) "
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "\\operatorname{atan2}"
},
{
"math_id": 7,
"text": "\\arg"
},
{
"math_id": 8,
"text": "z=\\cos(\\theta)+i\\,\\sin(\\theta)=e^{i\\theta}"
},
{
"math_id": 9,
"text": "\\theta"
},
{
"math_id": 10,
"text": "\\overline{\\mathbf{\\rho}}=\\frac{1}{N}\\sum_{n=1}^N z_n."
},
{
"math_id": 11,
"text": "\\overline{\\theta}=\\operatorname{Arg}(\\overline{\\mathbf{\\rho}})."
},
{
"math_id": 12,
"text": "\\overline{R}=|\\overline{\\mathbf{\\rho}}|"
},
{
"math_id": 13,
"text": "\\overline{\\mathbf{\\rho}}=\\overline{R}\\,e^{i\\overline{\\theta}}."
},
{
"math_id": 14,
"text": "\\bar{\\alpha}"
},
{
"math_id": 15,
"text": "\\bar{\\alpha} = \\underset{\\beta}{\\operatorname{argmin}} \\sum_{j=1}^n d(\\alpha_j,\\beta), \\text{ where } d(\\varphi,\\beta) = 1-\\cos(\\varphi-\\beta)."
},
{
"math_id": 16,
"text": "d(\\varphi,\\beta)"
},
{
"math_id": 17,
"text": "\\varphi"
},
{
"math_id": 18,
"text": "\\beta"
},
{
"math_id": 19,
"text": "\\bar \\theta"
},
{
"math_id": 20,
"text": "\\bar s"
},
{
"math_id": 21,
"text": "\\bar c \\not = 0"
},
{
"math_id": 22,
"text": "\\bar s = \\frac{1}{3} ( \\sin (355^\\circ) + \\sin (5^\\circ) + \\sin (15^\\circ) ) = \\frac{1}{3} ( -0.087 + 0.087 + 0.259 ) \\approx 0.086"
},
{
"math_id": 23,
"text": "\\bar c = \\frac{1}{3} ( \\cos (355^\\circ) + \\cos (5^\\circ) + \\cos (15^\\circ) ) = \\frac{1}{3} ( 0.996 + 0.996 + 0.966 ) \\approx 0.986"
},
{
"math_id": 24,
"text": "\n\\bar \\theta =\n\\left.\n\\begin{cases}\n\\arctan \\left( \\frac{\\bar s}{\\bar c} \\right) & \\bar s > 0 ,\\ \\bar c > 0 \\\\\n\\arctan \\left( \\frac{\\bar s}{\\bar c} \\right) + 180^\\circ & \\bar c < 0 \\\\\n\\arctan \\left( \\frac{\\bar s}{\\bar c}\n\\right)+360^\\circ & \\bar s <0 ,\\ \\bar c >0\n\\end{cases}\n\\right\\}\n= \\arctan \\left( \\frac{0.086}{0.986} \\right) = \\arctan (0.087) = 5^\\circ.\n"
}
] | https://en.wikipedia.org/wiki?curid=9956299 |
995746 | Multiplier (Fourier analysis) | In Fourier analysis, a multiplier operator is a type of linear operator, or transformation of functions. These operators act on a function by altering its Fourier transform. Specifically they multiply the Fourier transform of a function by a specified function known as the multiplier or symbol. Occasionally, the term "multiplier operator" itself is shortened simply to "multiplier". In simple terms, the multiplier reshapes the frequencies involved in any function. This class of operators turns out to be broad: general theory shows that a translation-invariant operator on a group which obeys some (very mild) regularity conditions can be expressed as a multiplier operator, and conversely. Many familiar operators, such as translations and differentiation, are multiplier operators, although there are many more complicated examples such as the Hilbert transform.
In signal processing, a multiplier operator is called a "filter", and the multiplier is the filter's frequency response (or transfer function).
In the wider context, multiplier operators are special cases of spectral multiplier operators, which arise from the functional calculus of an operator (or family of commuting operators). They are also special cases of pseudo-differential operators, and more generally Fourier integral operators. There are natural questions in this field that are still open, such as characterizing the "Lp" bounded multiplier operators (see below).
Multiplier operators are unrelated to Lagrange multipliers, except that they both involve the multiplication operation.
"For the necessary background on the Fourier transform, see that page. Additional important background may be found on the pages operator norm and "Lp" space."
Examples.
In the setting of periodic functions defined on the unit circle, the Fourier transform of a function is simply the sequence of its Fourier coefficients. To see that differentiation can be realized as multiplier, consider the Fourier series for the derivative of a periodic function formula_0 After using integration by parts in the definition of the Fourier coefficient we have that
formula_1.
So, formally, it follows that the Fourier series for the derivative is simply the Fourier series for formula_2 multiplied by a factor formula_3. This is the same as saying that differentiation is a multiplier operator with multiplier formula_3.
An example of a multiplier operator acting on functions on the real line is the Hilbert transform. It can be shown that the Hilbert transform is a multiplier operator whose multiplier is given by the formula_4, where sgn is the signum function.
Finally another important example of a multiplier is the characteristic function of the unit cube in formula_5 which arises in the study of "partial sums" for the Fourier transform (see Convergence of Fourier series).
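These examples can be illustrated numerically: in the following Python/NumPy sketch, the discrete Fourier transform of a sampled periodic function plays the role of its Fourier coefficients, and the differentiation multiplier "in" and the Hilbert-transform multiplier −i·sgn("n") are applied by pointwise multiplication; the test function and the number of sample points are arbitrary choices.
import numpy as np

N = 256
t = 2 * np.pi * np.arange(N) / N          # sample points on the unit circle
f = np.cos(3 * t) + 0.5 * np.sin(7 * t)   # a smooth periodic test function

n = np.fft.fftfreq(N, d=1.0 / N)          # integer frequencies 0, 1, ..., -2, -1
F = np.fft.fft(f)                         # stands in for the Fourier coefficients

df = np.fft.ifft(1j * n * F).real            # multiplier m(n) = i*n (differentiation)
Hf = np.fft.ifft(-1j * np.sign(n) * F).real  # multiplier m(n) = -i*sgn(n) (Hilbert transform)

print(np.allclose(df, -3 * np.sin(3 * t) + 3.5 * np.cos(7 * t)))  # True
print(np.allclose(Hf, np.sin(3 * t) - 0.5 * np.cos(7 * t)))       # True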
Definition.
Multiplier operators can be defined on any group "G" for which the Fourier transform is also defined (in particular, on any locally compact abelian group). The general definition is as follows. If formula_6 is a sufficiently regular function, let formula_7 denote its Fourier transform (where formula_8 is the Pontryagin dual of "G"). Let formula_9 denote another function, which we shall call the "multiplier". Then the multiplier operator formula_10 associated to this symbol "m" is defined via the formula
formula_11
In other words, the Fourier transform of "Tf" at a frequency ξ is given by the Fourier transform of "f" at that frequency, multiplied by the value of the multiplier at that frequency. This explains the terminology "multiplier".
Note that the above definition only defines Tf implicitly; in order to recover "Tf" explicitly one needs to invert the Fourier transform. This can be easily done if both "f" and "m" are sufficiently smooth and integrable. One of the major problems in the subject is to determine, for any specified multiplier "m", whether the corresponding Fourier multiplier operator continues to be well-defined when "f" has very low regularity, for instance if it is only assumed to lie in an "Lp" space. See the discussion on the "boundedness problem" below. As a bare minimum, one usually requires the multiplier "m" to be bounded and measurable; this is sufficient to establish boundedness on formula_12 but is in general not strong enough to give boundedness on other spaces.
One can view the multiplier operator "T" as the composition of three operators, namely the Fourier transform, the operation of pointwise multiplication by "m", and then the inverse Fourier transform. Equivalently, "T" is the conjugation of the pointwise multiplication operator by the Fourier transform. Thus one can think of multiplier operators as operators which are diagonalized by the Fourier transform.
Multiplier operators on common groups.
We now specialize the above general definition to specific groups "G". First consider the unit circle formula_13 functions on "G" can thus be thought of as 2π-periodic functions on the real line. In this group, the Pontryagin dual is the group of integers, formula_14 The Fourier transform (for sufficiently regular functions "f") is given by
formula_15
and the inverse Fourier transform is given by
formula_16
A multiplier in this setting is simply a sequence formula_17 of numbers, and the operator formula_10 associated to this multiplier is then given by the formula
formula_18
at least for sufficiently well-behaved choices of the multiplier formula_17 and the function "f".
Now let "G" be a Euclidean space formula_19. Here the dual group is also Euclidean, formula_20 and the Fourier and inverse Fourier transforms are given by the formulae
formula_21
A multiplier in this setting is a function formula_22 and the associated multiplier operator formula_10 is defined by
formula_23
again assuming sufficiently strong regularity and boundedness assumptions on the multiplier and function.
In the sense of distributions, there is no difference between multiplier operators and convolution operators; every multiplier "T" can also be expressed in the form "Tf" = "f"∗"K" for some distribution "K", known as the "convolution kernel" of "T". In this view, translation by an amount "x"0 is convolution with a Dirac delta function δ(· − "x"0), differentiation is convolution with δ'. Further examples are given in the table below.
Further examples.
On the unit circle.
The following table shows some common examples of multiplier operators on the unit circle formula_24
On the Euclidean space.
The following table shows some common examples of multiplier operators on Euclidean space formula_19.
General considerations.
The map formula_27 is a homomorphism of C*-algebras. This follows because the sum of two multiplier operators formula_28 and formula_29 is a multiplier operator with multiplier formula_30, the composition of these two multiplier operators is a multiplier operator with multiplier formula_31, and the adjoint of a multiplier operator formula_28 is another multiplier operator with multiplier formula_32.
In particular, we see that any two multiplier operators commute with each other. It is known that multiplier operators are translation-invariant. Conversely, one can show that any translation-invariant linear operator which is bounded on "L"2("G") is a multiplier operator.
The "Lp" boundedness problem.
The "Lp" boundedness problem (for any particular "p") for a given group "G" is, stated simply, to identify the multipliers "m" such that the corresponding multiplier operator is bounded from "Lp"("G") to "Lp"("G"). Such multipliers are usually simply referred to as ""Lp" multipliers". Note that as multiplier operators are always linear, such operators are bounded if and only if they are continuous. This problem is considered to be extremely difficult in general, but many special cases can be treated. The problem depends greatly on "p", although there is a duality relationship: if formula_33 and 1 ≤ "p", "q" ≤ ∞, then a multiplier operator is bounded on "Lp" if and only if it is bounded on "Lq".
The Riesz-Thorin theorem shows that if a multiplier operator is bounded on two different "Lp" spaces, then it is also bounded on all intermediate spaces. Hence we get that the space of multipliers is smallest for "L"1 and "L"∞ and grows as one approaches "L"2, which has the largest multiplier space.
Boundedness on "L"2.
This is the easiest case. Parseval's theorem allows one to solve this problem completely and shows that a function "m" is an "L"2("G") multiplier if and only if it is bounded and measurable.
Boundedness on "L"1 or "L"∞.
This case is more complicated than the Hilbertian ("L"2) case, but is fully resolved. The following is true:
Theorem: In the Euclidean space formula_5, a function formula_25 is an "L"1 multiplier (equivalently an "L"∞ multiplier) if and only if there exists a finite Borel measure μ such that "m" is the Fourier transform of μ.
Boundedness on "L""p" for 1 < "p" < ∞.
In this general case, necessary and sufficient conditions for boundedness have not been established, even for Euclidean space or the unit circle. However, several necessary conditions and several sufficient conditions are known. For instance it is known that in order for a multiplier operator to be bounded on even a single "Lp" space, the multiplier must be bounded and measurable (this follows from the characterisation of "L"2 multipliers above and the inclusion property). However, this is not sufficient except when "p" = 2.
Results that give sufficient conditions for boundedness are known as multiplier theorems. Three such results are given below.
Marcinkiewicz multiplier theorem.
Let formula_34 be a bounded function that is continuously differentiable on every set of the form formula_35 for formula_36 and has a derivative such that
formula_37
Then "m" is an "Lp" multiplier for all 1 < "p" < ∞.
Mikhlin multiplier theorem.
Let "m" be a bounded function on formula_5 which is smooth except possibly at the origin, and such that the function formula_38 is bounded for all integers formula_39: then "m" is an "Lp" multiplier for all 1 < "p" < ∞.
This is a special case of the Hörmander-Mikhlin multiplier theorem.
The proofs of these two theorems are fairly tricky, involving techniques from Calderón–Zygmund theory and the Marcinkiewicz interpolation theorem.
Radial multipliers.
For radial multipliers, a necessary and sufficient condition for formula_40 boundedness is known for some partial range of formula_41. Let formula_42 and formula_43. Suppose that formula_44 is a radial multiplier compactly supported away from the origin. Then formula_44 is an formula_40 multiplier if and only if the Fourier transform of formula_44 belongs to formula_40.
This is a theorem of Heo, Nazarov, and Seeger. They also provided a necessary and sufficient condition which is valid without the compact support assumption on formula_44.
Examples.
Translations are bounded operators on any "Lp". Differentiation is not bounded on any "Lp". The Hilbert transform is bounded only for "p" strictly between 1 and ∞. The fact that it is unbounded on "L"∞ is easy, since it is well known that the Hilbert transform of a step function is unbounded. Duality gives the same for "p" = 1. However, both the Marcinkiewicz and Mikhlin multiplier theorems show that the Hilbert transform is bounded in "Lp" for all 1 < "p" < ∞.
Another interesting case on the unit circle is when the sequence formula_45 that is being proposed as a multiplier is constant for "n" in each of the sets formula_46 and formula_47 From the Marcinkiewicz multiplier theorem (adapted to the context of the unit circle) we see that any such sequence (also assumed to be bounded, of course) is a multiplier for every 1 < "p" < ∞.
In one dimension, the disk multiplier operator formula_26(see table above) is bounded on "Lp" for every 1 < "p" < ∞. However, in 1972, Charles Fefferman showed the surprising result that in two and higher dimensions the disk multiplier operator formula_26 is unbounded on "Lp" for every "p" ≠ 2. The corresponding problem for Bochner–Riesz multipliers is only partially solved; see also Bochner–Riesz conjecture.
| [
{
"math_id": 0,
"text": "f(t)."
},
{
"math_id": 1,
"text": "\\mathcal{F}(f')(n)=\\int_{-\\pi}^\\pi f'(t)e^{-int}\\,dt=\\int_{-\\pi}^\\pi (i n) f(t)e^{-int}\\,dt = in\\cdot\\mathcal{F}(f)(n)"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": " i n "
},
{
"math_id": 4,
"text": " m(\\xi) = -i \\operatorname{sgn}(\\xi) "
},
{
"math_id": 5,
"text": "\\R^n"
},
{
"math_id": 6,
"text": "f:G\\to\\Complex"
},
{
"math_id": 7,
"text": "\\hat f: \\hat G \\to \\Complex"
},
{
"math_id": 8,
"text": "\\hat G"
},
{
"math_id": 9,
"text": "m: \\hat G \\to \\Complex"
},
{
"math_id": 10,
"text": "T = T_m"
},
{
"math_id": 11,
"text": " \\widehat{Tf}(\\xi) := m(\\xi) \\hat{f}(\\xi)."
},
{
"math_id": 12,
"text": "L^2"
},
{
"math_id": 13,
"text": "G = \\R / 2\\pi\\Z;"
},
{
"math_id": 14,
"text": "\\hat G = \\Z."
},
{
"math_id": 15,
"text": "\\hat f(n) := \\frac{1}{2\\pi} \\int_0^{2\\pi} f(t) e^{-int} dt "
},
{
"math_id": 16,
"text": "f(t) = \\sum_{n=-\\infty}^\\infty \\hat f(n) e^{int}."
},
{
"math_id": 17,
"text": "(m_n)_{n=-\\infty}^\\infty"
},
{
"math_id": 18,
"text": "(Tf)(t) := \\sum_{n=-\\infty}^{\\infty}m_n \\hat{f}(n)e^{int},"
},
{
"math_id": 19,
"text": "G = \\R^n"
},
{
"math_id": 20,
"text": "\\hat G = \\R^n,"
},
{
"math_id": 21,
"text": "\\begin{align}\n \\hat f(\\xi) :={} &\\int_{\\R^n} f(x) e^{-2\\pi i x \\cdot \\xi} dx \\\\\n f(x) ={} &\\int_{\\R^n} \\hat f(\\xi) e^{2\\pi i x \\cdot \\xi} d\\xi.\n\\end{align}"
},
{
"math_id": 22,
"text": "m: \\R^n \\to \\Complex,"
},
{
"math_id": 23,
"text": "Tf(x) := \\int_{\\R^n} m(\\xi) \\hat f(\\xi) e^{2\\pi i x \\cdot \\xi} d\\xi,"
},
{
"math_id": 24,
"text": "G = \\R/2\\pi \\Z."
},
{
"math_id": 25,
"text": "m(\\xi)"
},
{
"math_id": 26,
"text": "S^0_R"
},
{
"math_id": 27,
"text": "m \\mapsto T_m"
},
{
"math_id": 28,
"text": "T_m"
},
{
"math_id": 29,
"text": "T_{m'}"
},
{
"math_id": 30,
"text": "m+m'"
},
{
"math_id": 31,
"text": "mm',"
},
{
"math_id": 32,
"text": "\\overline{m}"
},
{
"math_id": 33,
"text": "1/p + 1/q = 1"
},
{
"math_id": 34,
"text": "m: \\R \\to \\R"
},
{
"math_id": 35,
"text": "\\left(-2^{j+1}, -2^j\\right) \\cup \\left(2^j, 2^{j+1}\\right)"
},
{
"math_id": 36,
"text": "j \\in \\Z"
},
{
"math_id": 37,
"text": "\\sup_{j \\in \\Z} \\left( \\int_{-2^{j+1}}^{-2^j} \\left|m'(\\xi)\\right| \\, d\\xi + \\int_{2^j}^{2^{j+1}} \\left|m'(\\xi)\\right| \\, d\\xi \\right) < \\infty."
},
{
"math_id": 38,
"text": "|x|^k \\left|\\nabla^k m\\right|"
},
{
"math_id": 39,
"text": "0 \\leq k \\leq \\frac{n}{2} + 1"
},
{
"math_id": 40,
"text": "L^p\\left(\\mathbb{R}^n\\right)"
},
{
"math_id": 41,
"text": "p"
},
{
"math_id": 42,
"text": "n \\geq 4"
},
{
"math_id": 43,
"text": "1 < p < 2\\frac{n - 1}{n + 1}"
},
{
"math_id": 44,
"text": "m"
},
{
"math_id": 45,
"text": "(x_n)"
},
{
"math_id": 46,
"text": "\\left\\{2^n, \\ldots, 2^{n+1} - 1\\right\\}"
},
{
"math_id": 47,
"text": "\\left\\{-2^{n+1} + 1, \\ldots, -2^n\\right\\}."
}
] | https://en.wikipedia.org/wiki?curid=995746 |
995908 | Tomographic reconstruction | Estimate object properties from a finite number of projections
Tomographic reconstruction is a type of multidimensional inverse problem where the challenge is to yield an estimate of a specific system from a finite number of projections. The mathematical basis for tomographic imaging was laid down by Johann Radon. A notable example of its applications is the reconstruction of computed tomography (CT), where cross-sectional images of patients are obtained in a non-invasive manner. Recent developments have seen the Radon transform and its inverse used for tasks related to realistic object insertion required for testing and evaluating computed tomography use in airport security.
This article applies in general to reconstruction methods for all kinds of tomography, but some of the terms and physical descriptions refer directly to the reconstruction of X-ray computed tomography.
Introducing formula.
The projection of an object, resulting from the tomographic measurement process at a given angle formula_0, is made up of a set of line integrals (see Fig. 1). A set of many such projections under different angles organized in 2D is called a sinogram (see Fig. 3). In X-ray CT, the line integral represents the total attenuation of the beam of X-rays as it travels in a straight line through the object. As mentioned above, the resulting image is a 2D (or 3D) model of the attenuation coefficient. That is, we wish to find the image formula_1. The simplest and easiest way to visualise the method of scanning is the system of parallel projection, as used in the first scanners. For this discussion we consider the data to be collected as a series of parallel rays, at position formula_2, across a projection at angle formula_0. This is repeated for various angles. Attenuation occurs exponentially in tissue:
formula_3
where formula_1 is the attenuation coefficient as a function of position. Therefore, generally the total attenuation formula_4 of a ray at position formula_2, on the projection at angle formula_0, is given by the line integral:
formula_5
Using the coordinate system of Figure 1, the value of formula_2 onto which the point formula_6 will be projected at angle formula_0 is given by:
formula_7
So the equation above can be rewritten as
formula_8
where formula_9 represents formula_1 and formula_10 is the Dirac delta function. This function is known as the Radon transform (or "sinogram") of the 2D object.
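As a rough numerical illustration (an assumption-laden sketch, not part of the article), the projections of a discretised image can be approximated by rotating the image and summing it along one axis, a common discretisation of the line integral above. NumPy and SciPy are assumed; the `sinogram` helper and the square phantom are purely illustrative.

```python
import numpy as np
from scipy.ndimage import rotate

def sinogram(image, angles_deg):
    """Approximate parallel-beam projections p_theta(r): rotate, then sum along one axis."""
    return np.array([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

img = np.zeros((128, 128))
img[48:80, 40:88] = 1.0                         # simple rectangular phantom
sino = sinogram(img, np.arange(0.0, 180.0, 1.0))
print(sino.shape)                               # (180, 128): one row per projection angle
```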
The Fourier Transform of the projection can be written as
formula_11 where formula_12
formula_13 represents a slice of the 2D Fourier transform of formula_9 at angle formula_0. Using the inverse Fourier transform, the inverse Radon transform formula can be easily derived.
formula_14
where formula_15 is the derivative of the Hilbert transform of formula_16
In theory, the inverse Radon transformation would yield the original image. The projection-slice theorem tells us that if we had an infinite number of one-dimensional projections of an object taken at an infinite number of angles, we could perfectly reconstruct the original object, formula_9. However, there will only be a finite number of projections available in practice.
Assuming formula_9 has effective diameter formula_17 and desired resolution is formula_18, a rule of thumb for the number of projections needed for reconstruction is formula_19
Reconstruction algorithms.
Practical reconstruction algorithms have been developed to implement the process of reconstruction of a three-dimensional object from its projections. These algorithms are designed largely based on the mathematics of the X-ray transform, statistical knowledge of the data acquisition process and geometry of the data imaging system.
Fourier-domain reconstruction algorithm.
Reconstruction can be made using interpolation. Assume formula_20 projections of formula_9 are generated at equally spaced angles, each sampled at the same rate. The discrete Fourier transform (DFT) on each projection yields sampling in the frequency domain. Combining all the frequency-sampled projections generates a polar raster in the frequency domain. The polar raster is sparse, so interpolation is used to fill the unknown DFT points, and reconstruction can be done through the inverse discrete Fourier transform. Reconstruction performance may improve by designing methods to change the sparsity of the polar raster, facilitating the effectiveness of interpolation.
For instance, a concentric square raster in the frequency domain can be obtained by changing the angle between each projection as follows:
formula_21
where formula_22 is highest frequency to be evaluated.
The concentric square raster improves computational efficiency by allowing all the interpolation positions to lie on a rectangular DFT lattice. Furthermore, it reduces the interpolation error. Yet, the Fourier-transform algorithm has the disadvantage of producing inherently noisy output.
Back projection algorithm.
In practice of tomographic image reconstruction, often a stabilized and discretized version of the inverse Radon transform is used, known as the filtered back projection algorithm.
With a sampled discrete system, the inverse Radon transform is
formula_23
formula_24
where formula_25 is the angular spacing between the projections and formula_26 is a Radon kernel with frequency response formula_27.
The name "back-projection" comes from the fact that a one-dimensional projection needs to be filtered by a one-dimensional Radon kernel (back-projected) in order to obtain a two-dimensional signal. The filter used does not contain DC gain, so adding DC bias may be desirable. Reconstruction using back-projection allows better resolution than interpolation method described above. However, it induces greater noise because the filter is prone to amplify high-frequency content.
Iterative reconstruction algorithm.
The iterative algorithm is computationally intensive but it allows the inclusion of "a priori" information about the system formula_9.
Let formula_20 be the number of projections and formula_28 be the distortion operator for the formula_29th projection taken at an angle formula_30. formula_31 are a set of parameters used to optimize the convergence of the iterations.
formula_32
formula_33
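The update rule above can be illustrated with a simple Landweber-type iteration in which all relaxation parameters formula_31 are taken equal and the projection operators formula_28 are realised by rotation-based forward and back projectors. This is only a sketch under those assumptions (NumPy and SciPy assumed), not the specific algorithm of any particular scanner or paper.

```python
import numpy as np
from scipy.ndimage import rotate

def project(img, angles_deg):                     # forward operator: image -> sinogram
    return np.array([rotate(img, a, reshape=False, order=1).sum(axis=0) for a in angles_deg])

def back_project(sino, angles_deg):               # adjoint-like operator: sinogram -> image
    n = sino.shape[1]
    return sum(rotate(np.tile(p, (n, 1)), -a, reshape=False, order=1)
               for p, a in zip(sino, angles_deg))

def landweber(sino, angles_deg, n_iter=20):
    f = np.zeros((sino.shape[1], sino.shape[1]))
    lam = 1.0 / (sino.shape[0] * sino.shape[1])   # hand-tuned relaxation parameter
    for _ in range(n_iter):
        f += lam * back_project(sino - project(f, angles_deg), angles_deg)
    return f
```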
An alternative family of recursive tomographic reconstruction algorithms are the algebraic reconstruction techniques and iterative sparse asymptotic minimum variance.
Fan-beam reconstruction.
Use of a noncollimated fan beam is common since a collimated beam of radiation is difficult to obtain. Fan beams generate a series of line integrals, not parallel to each other, as projections. The fan-beam system requires a 360-degree range of angles, which imposes mechanical constraints, but it allows faster signal acquisition time, which may be advantageous in certain settings such as the field of medicine. Back projection follows a similar two-step procedure that yields the reconstruction by computing weighted sums of back-projections obtained from the filtered projections.
Deep learning reconstruction.
Deep learning methods are nowadays widely applied to image reconstruction and have achieved impressive results in various image reconstruction tasks, including low-dose denoising, sparse-view reconstruction, limited-angle tomography and metal artifact reduction. An excellent overview can be found in the special issue of IEEE Transactions on Medical Imaging. One group of deep learning reconstruction algorithms apply post-processing neural networks to achieve image-to-image reconstruction, where input images are reconstructed by conventional reconstruction methods. Artifact reduction using the U-Net in limited-angle tomography is one such example application. However, incorrect structures may occur in an image reconstructed by such a completely data-driven method, as displayed in the figure. Therefore, integration of known operators into the architecture design of neural networks appears beneficial, as described in the concept of precision learning. For example, direct image reconstruction from projection data can be learnt from the framework of filtered back-projection. Another example is to build neural networks by unrolling iterative reconstruction algorithms. Apart from precision learning, using conventional reconstruction methods with a deep learning reconstruction prior is also an alternative approach to improve the image quality of deep learning reconstruction.
Tomographic reconstruction software.
Tomographic systems have significant variability in their applications and geometries (locations of sources and detectors). This variability creates the need for very specific, tailored implementations of the processing and reconstruction algorithms. Thus, most CT manufacturers provide their own custom proprietary software. This is done not only to protect intellectual property, but may also be enforced by a government regulatory agency. Regardless, there are a number of general purpose tomographic reconstruction software packages that have been developed over the last couple decades, both commercial and open-source.
Most of the commercial software packages that are available for purchase focus on processing data for benchtop cone-beam CT systems. A few of these software packages include Volume Graphics, InstaRecon, iTomography, Livermore Tomography Tools (LTT), and Cone Beam Software Tools (CST).
Some noteworthy examples of open-source reconstruction software include: Reconstruction Toolkit (RTK), CONRAD, TomoPy, the ASTRA toolbox, PYRO-NN, ODL, TIGRE, and LEAP.
Gallery.
Shown in the gallery is the complete process for the tomography of a simple object and the subsequent tomographic reconstruction based on ART.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\theta"
},
{
"math_id": 1,
"text": "\\mu(x,y)"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "I = I_0\\exp\\left({-\\int\\mu(x,y)\\,ds}\\right)"
},
{
"math_id": 4,
"text": "p"
},
{
"math_id": 5,
"text": "p_{\\theta}(r) = \\ln \\left(\\frac{I}{I_0}\\right) = -\\int\\mu(x,y)\\,ds"
},
{
"math_id": 6,
"text": "(x,y)"
},
{
"math_id": 7,
"text": "x\\cos\\theta + y\\sin\\theta = r\\ "
},
{
"math_id": 8,
"text": "p_{\\theta}(r)=\\int^\\infty_{-\\infty}\\int^\\infty_{-\\infty}f(x,y)\\delta(x\\cos\\theta+y\\sin\\theta-r)\\,dx\\,dy"
},
{
"math_id": 9,
"text": "f(x,y)"
},
{
"math_id": 10,
"text": "\\delta()"
},
{
"math_id": 11,
"text": "P_\\theta(\\omega)=\\int^\\infty_{-\\infty}\\int^\\infty_{-\\infty}f(x,y)\\exp[-j\\omega(x\\cos\\theta+y\\sin\\theta)]\\,dx\\,dy = F(\\Omega_1,\\Omega_2)"
},
{
"math_id": 12,
"text": "\\Omega_1 =\\omega\\cos\\theta, \\Omega_2 =\\omega\\sin\\theta "
},
{
"math_id": 13,
"text": "P_\\theta(\\omega)"
},
{
"math_id": 14,
"text": "f(x,y) = \\frac{1}{2\\pi} \\int\\limits_{0}^{\\pi} g_\\theta(x\\cos\\theta+y\\sin\\theta)d\\theta"
},
{
"math_id": 15,
"text": "g_\\theta(x\\cos\\theta+y\\sin\\theta) "
},
{
"math_id": 16,
"text": "p_{\\theta}(r)"
},
{
"math_id": 17,
"text": "d"
},
{
"math_id": 18,
"text": "R_s"
},
{
"math_id": 19,
"text": "N > \\pi d / R_s"
},
{
"math_id": 20,
"text": "N"
},
{
"math_id": 21,
"text": "\\theta' = \\frac{R_0}{ \\max\\{|\\cos\\theta|, |\\sin\\theta|\\}}"
},
{
"math_id": 22,
"text": "R_0"
},
{
"math_id": 23,
"text": "f(x,y) = \\frac{1}{2\\pi} \\sum_{i=0}^{N-1}\\Delta\\theta_i g_{\\theta_i}(x\\cos\\theta_i+y\\sin\\theta_i)"
},
{
"math_id": 24,
"text": "g_\\theta(t) = p_\\theta(t) \\cdot k(t) "
},
{
"math_id": 25,
"text": "\\Delta\\theta"
},
{
"math_id": 26,
"text": "k(t)"
},
{
"math_id": 27,
"text": "|\\omega|"
},
{
"math_id": 28,
"text": "D_i"
},
{
"math_id": 29,
"text": "i"
},
{
"math_id": 30,
"text": "\\theta_i"
},
{
"math_id": 31,
"text": "\\{\\lambda_i\\}"
},
{
"math_id": 32,
"text": "f_0(x,y) = \\sum_{i=1}^N \\lambda_i p_{\\theta_i}(r) "
},
{
"math_id": 33,
"text": "f_k(x,y) = f_{k-1} (x,y) + \\sum_{i=1}^N \\lambda_i [p_{\\theta_i}(r)-D_if_{k-1}(x,y)]"
}
] | https://en.wikipedia.org/wiki?curid=995908 |
9960 | Elliptic integral | Special function defined by an integral
In integral calculus, an elliptic integral is one of a number of related functions defined as the value of certain integrals, which were first studied by Giulio Fagnano and Leonhard Euler (c. 1750). Their name originates from their originally arising in connection with the problem of finding the arc length of an ellipse.
Modern mathematics defines an "elliptic integral" as any function "f" which can be expressed in the form
formula_0
where "R" is a rational function of its two arguments, "P" is a polynomial of degree 3 or 4 with no repeated roots, and "c" is a constant.
In general, integrals in this form cannot be expressed in terms of elementary functions. Exceptions to this general rule are when "P" has repeated roots, or when "R"("x", "y") contains no odd powers of "y" or if the integral is pseudo-elliptic. However, with the appropriate reduction formula, every elliptic integral can be brought into a form that involves integrals over rational functions and the three Legendre canonical forms (i.e. the elliptic integrals of the first, second and third kind).
Besides the Legendre form given below, the elliptic integrals may also be expressed in Carlson symmetric form. Additional insight into the theory of the elliptic integral may be gained through the study of the Schwarz–Christoffel mapping. Historically, elliptic functions were discovered as inverse functions of elliptic integrals.
Argument notation.
"Incomplete elliptic integrals" are functions of two arguments; "complete elliptic integrals" are functions of a single argument. These arguments are expressed in a variety of different but equivalent ways (they give the same elliptic integral). Most texts adhere to a canonical naming scheme, using the following naming conventions.
For expressing one argument one may use the "modular angle" "α", the "elliptic modulus" or "eccentricity" "k" = sin "α", or the "parameter" "m" = "k"2 = sin2 "α".
Each of the above three quantities is completely determined by any of the others (given that they are non-negative). Thus, they can be used interchangeably.
The other argument can likewise be expressed as "φ", the "amplitude", or as "x" or "u", where "x" = sin "φ" = sn "u" and sn is one of the Jacobian elliptic functions.
Specifying the value of any one of these quantities determines the others. Note that "u" also depends on "m". Some additional relationships involving "u" include
formula_1
The latter is sometimes called the "delta amplitude" and written as Δ("φ") = dn "u". Sometimes the literature also refers to the "complementary parameter", the "complementary modulus," or the "complementary modular angle". These are further defined in the article on quarter periods.
In this notation, the use of a vertical bar as delimiter indicates that the argument following it is the "parameter" (as defined above), while the backslash indicates that it is the modular angle. The use of a semicolon implies that the argument preceding it is the sine of the amplitude:
formula_2
This potentially confusing use of different argument delimiters is traditional in elliptic integrals and much of the notation is compatible with that used in the reference book by Abramowitz and Stegun and that used in the integral tables by Gradshteyn and Ryzhik.
There are still other conventions for the notation of elliptic integrals employed in the literature. The notation with interchanged arguments, "F"("k", "φ"), is often encountered; and similarly "E"("k", "φ") for the integral of the second kind. Abramowitz and Stegun substitute the integral of the first kind, "F"("φ", "k"), for the argument φ in their definition of the integrals of the second and third kinds, unless this argument is followed by a vertical bar: i.e. "E"("F"("φ", "k") | "k"2) for "E"("φ" | "k"2). Moreover, their complete integrals employ the "parameter" "k"2 as argument in place of the modulus "k", i.e. "K"("k"2) rather than "K"("k"). And the integral of the third kind defined by Gradshteyn and Ryzhik, Π("φ", "n", "k"), puts the amplitude φ first and not the "characteristic" n.
Thus one must be careful with the notation when using these functions, because various reputable references and software packages use different conventions in the definitions of the elliptic functions. For example, Wolfram's Mathematica software and Wolfram Alpha define the complete elliptic integral of the first kind in terms of the parameter "m", instead of the elliptic modulus "k".
Incomplete elliptic integral of the first kind.
The incomplete elliptic integral of the first kind F is defined as
formula_3
This is the trigonometric form of the integral; substituting "t" = sin "θ" and "x" = sin "φ", one obtains the Legendre normal form:
formula_4
Equivalently, in terms of the amplitude and modular angle one has:
formula_5
With "x" = sn("u", "k") one has:
formula_6
demonstrating that this Jacobian elliptic function is a simple inverse of the incomplete elliptic integral of the first kind.
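For a quick numerical check (illustrative, not part of the article), the defining integral can be evaluated by direct quadrature and compared with SciPy's implementation; note that SciPy's `ellipkinc` takes the parameter "m" = "k"2 rather than the modulus "k".

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipkinc               # SciPy uses the parameter m = k**2

def F(phi, k):
    """Incomplete elliptic integral of the first kind by direct quadrature."""
    integrand = lambda theta: 1.0 / np.sqrt(1.0 - (k * np.sin(theta)) ** 2)
    value, _ = quad(integrand, 0.0, phi)
    return value

phi, k = 1.0, 0.8
print(F(phi, k), ellipkinc(phi, k ** 2))          # the two values should agree closely
```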
The incomplete elliptic integral of the first kind has the following addition theorem:
formula_7
The elliptic modulus can be transformed as follows:
formula_8
Incomplete elliptic integral of the second kind.
The incomplete elliptic integral of the second kind "E" in trigonometric form is
formula_9
Substituting "t" = sin "θ" and "x" = sin "φ", one obtains the Legendre normal form:
formula_10
Equivalently, in terms of the amplitude and modular angle:
formula_11
Relations with the Jacobi elliptic functions include
formula_12
The meridian arc length from the equator to latitude "φ" is written in terms of "E":
formula_13
where "a" is the semi-major axis, and "e" is the eccentricity.
The incomplete elliptic integral of the second kind has the following addition theorem:
formula_14
The elliptic modulus can be transformed as follows:
formula_15
Incomplete elliptic integral of the third kind.
The incomplete elliptic integral of the third kind Π is
formula_16
or
formula_17
The number "n" is called the characteristic and can take on any value, independently of the other arguments. Note though that the value Π(1; | "m") is infinite, for any "m".
A relation with the Jacobian elliptic functions is
formula_18
The meridian arc length from the equator to latitude "φ" is also related to a special case of Π:
formula_19
Complete elliptic integral of the first kind.
Elliptic integrals are said to be 'complete' when the amplitude "φ" = "π"/2 and therefore "x" = 1. The complete elliptic integral of the first kind "K" may thus be defined as
formula_20
or more compactly in terms of the incomplete integral of the first kind as
formula_21
It can be expressed as a power series
formula_22
where "P""n" is the Legendre polynomials, which is equivalent to
formula_23
where "n"!! denotes the double factorial. In terms of the Gauss hypergeometric function, the complete elliptic integral of the first kind can be expressed as
formula_24
The complete elliptic integral of the first kind is sometimes called the quarter period. It can be computed very efficiently in terms of the arithmetic–geometric mean:
formula_25
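A minimal sketch of this AGM-based evaluation, assuming only the Python standard library (the helper name `K_agm` and the fixed iteration count are illustrative choices):

```python
import math

def K_agm(k):
    """Complete elliptic integral of the first kind via the arithmetic-geometric mean."""
    a, g = 1.0, math.sqrt(1.0 - k * k)
    for _ in range(60):                           # the AGM converges quadratically
        a, g = (a + g) / 2.0, math.sqrt(a * g)
    return math.pi / (2.0 * a)

print(K_agm(0.5))                                 # ~1.6858, cf. scipy.special.ellipk(0.25)
```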
Therefore, the modulus can be transformed as follows:
formula_26
This expression is valid for all formula_27 and 0 ≤ "k" ≤ 1:
formula_28
Relation to the gamma function.
If "k"2 = "λ"("i"√"r") and formula_29 (where λ is the modular lambda function), then "K"("k") is expressible in closed form in terms of the gamma function. For example, "r" = 2, "r" = 3 and "r" = 7 give, respectively,
formula_30
and
formula_31
and
formula_32
More generally, the condition that
formula_33
be in an imaginary quadratic field is sufficient. For instance, if "k" = "e"5"πi"/6, then "iK"′/"K" = "e"2"πi"/3 and
formula_34
Asymptotic expressions.
formula_35
This approximation has a relative precision better than for "k" <. Keeping only the first two terms is correct to 0.01 precision for "k" <.
Differential equation.
The differential equation for the elliptic integral of the first kind is
formula_36
A second solution to this equation is formula_37. This solution satisfies the relation
formula_38
Continued fraction.
A continued fraction expansion is:
formula_39
where the nome is formula_40 in its definition.
Inverting the period ratio.
Here, we use the complete elliptic integral of the first kind with the "parameter" formula_41 instead, because the squaring function introduces problems when inverting in the complex plane. So let
formula_42
and let
formula_43
formula_44
be the theta functions.
The equation
formula_45
can then be solved (provided that a solution formula_41 exists) by
formula_46
which is in fact the modular lambda function.
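The inversion can be checked numerically (an illustrative sketch; SciPy is assumed for "K", and the truncation of the theta series is an arbitrary choice): starting from a parameter "m", forming the period ratio and evaluating formula_46 should recover "m".

```python
import cmath
import math
from scipy.special import ellipk                  # K as a function of the parameter m

def theta2(tau, n_terms=40):
    q = cmath.exp(1j * math.pi * tau)
    return 2 * cmath.exp(1j * math.pi * tau / 4) * sum(q ** (n * (n + 1)) for n in range(n_terms))

def theta3(tau, n_terms=40):
    q = cmath.exp(1j * math.pi * tau)
    return 1 + 2 * sum(q ** (n * n) for n in range(1, n_terms))

m = 0.3
tau = 1j * ellipk(1 - m) / ellipk(m)              # tau = i K[1 - m] / K[m]
print((theta2(tau) ** 4 / theta3(tau) ** 4).real) # should recover m = 0.3
```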
For the purposes of computation, the error analysis is given by
formula_47
formula_48
where formula_49 and formula_50.
Also
formula_51
where formula_52.
Complete elliptic integral of the second kind.
The complete elliptic integral of the second kind "E" is defined as
formula_53
or more compactly in terms of the incomplete integral of the second kind "E"("φ","k") as
formula_54
For an ellipse with semi-major axis "a" and semi-minor axis "b" and eccentricity "e" = √(1 − "b"2/"a"2), the complete elliptic integral of the second kind "E"("e") is equal to one quarter of the circumference "C" of the ellipse measured in units of the semi-major axis "a". In other words:
formula_55
The complete elliptic integral of the second kind can be expressed as a power series
formula_56
which is equivalent to
formula_57
In terms of the Gauss hypergeometric function, the complete elliptic integral of the second kind can be expressed as
formula_58
The modulus can be transformed as follows:
formula_59
Computation.
Like the integral of the first kind, the complete elliptic integral of the second kind can be computed very efficiently using the arithmetic–geometric mean.
Define sequences "a""n" and "g""n", where "a"0 = 1, "g"0 = √(1 − "k"2) = "k"′ and the recurrence relations "a""n" + 1 = ("a""n" + "g""n")/2, "g""n" + 1 = √("a""n" "g""n") hold. Furthermore, define
formula_60
By definition,
formula_61
Also
formula_62
Then
formula_63
In practice, the arithmetic–geometric mean would simply be computed up to some finite number of terms. The iteration converges quadratically for all |"k"| < 1. To speed up computation further, the relation "c""n" + 1 = ("a""n" − "g""n")/2 can be used.
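A minimal sketch of this AGM-based evaluation of "E"("k"), again assuming only the Python standard library; the iteration count and helper name are illustrative.

```python
import math

def E_agm(k, n_iter=40):
    """Complete elliptic integral of the second kind via the AGM and the c_n sums."""
    a, g = 1.0, math.sqrt(1.0 - k * k)
    c = k                                         # c_0 = sqrt(a_0**2 - g_0**2) = k
    s = 0.5 * c * c                               # running sum of 2**(n-1) * c_n**2 (n = 0 term)
    pow2 = 1.0
    for _ in range(n_iter):
        a, g, c = (a + g) / 2.0, math.sqrt(a * g), (a - g) / 2.0
        pow2 *= 2.0
        s += 0.5 * pow2 * c * c
        if c == 0.0:
            break
    return math.pi / (2.0 * a) * (1.0 - s)

print(E_agm(0.5))                                 # ~1.4675, cf. scipy.special.ellipe(0.25)
```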
Furthermore, if "k"2 = "λ"("i"√"r") and formula_29 (where λ is the modular lambda function), then "E"("k") is expressible in closed form in terms of
formula_64
and hence can be computed without the need for the infinite summation term. For example, "r" = 1, "r" = 3 and "r" = 7 give, respectively,
formula_65
and
formula_66
and
formula_67
Derivative and differential equation.
formula_68
formula_69
A second solution to this equation is "E"(√1 − "k"2) − "K"(√1 − "k"2).
Complete elliptic integral of the third kind.
The complete elliptic integral of the third kind Π can be defined as
formula_70
Note that sometimes the elliptic integral of the third kind is defined with an inverse sign for the "characteristic" "n",
formula_71
Just like the complete elliptic integrals of the first and second kind, the complete elliptic integral of the third kind can be computed very efficiently using the arithmetic-geometric mean.
Partial derivatives.
formula_72
Jacobi zeta function.
In 1829, Jacobi defined the Jacobi zeta function:
formula_73
It is periodic in formula_74 with minimal period formula_75. It is related to the Jacobi zn function by formula_76. In the literature (e.g. Whittaker and Watson (1927)), sometimes formula_77 means Wikipedia's formula_78. Some authors (e.g. King (1924)) use formula_77 for both Wikipedia's formula_77 and formula_78.
Legendre's relation.
Legendre's relation, or the "Legendre identity", relates the integrals K and E of an elliptic modulus and of its counterpart modulus in an equation of second degree:
For two moduli that are Pythagorean counterparts of each other, this relation is valid:
formula_79
For example:
formula_80
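This instance of Legendre's relation is easy to verify numerically (an illustrative check; SciPy is assumed, and recall that SciPy's `ellipk` and `ellipe` take the parameter "m" = "k"2):

```python
import math
from scipy.special import ellipk, ellipe

def K(k): return ellipk(k * k)                    # convert modulus k to SciPy's parameter m
def E(k): return ellipe(k * k)

eps = 3.0 / 5.0
eps_c = math.sqrt(1.0 - eps ** 2)                 # Pythagorean counterpart, here 4/5
lhs = K(eps) * E(eps_c) + E(eps) * K(eps_c) - K(eps) * K(eps_c)
print(lhs, math.pi / 2)                           # both ~1.5707963...
```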
And for two moduli that are tangential counterparts of each other, the following relationship is valid:
formula_81
For example:
formula_82
Legendre's relation for tangential modular counterparts follows directly from Legendre's identity for Pythagorean modular counterparts, by applying the Landen modular transformation to the Pythagorean counterpart modulus.
Special identity for the lemniscatic case.
For the lemniscatic case, the elliptic modulus or specific eccentricity ε is equal to half the square root of two. Legendre's identity for the lemniscatic case can be proved as follows:
According to the chain rule, these derivatives hold:
formula_83
formula_84
By using the Fundamental theorem of calculus these formulas can be generated:
formula_85
formula_86
A linear combination of the two integrals just mentioned leads to the following formula:
formula_87
formula_88
By forming the antiderivative with respect to x of the function just shown, using the product rule, the following formula results:
formula_89
formula_90
If the value formula_91 is inserted in this integral identity, then the following identity emerges:
formula_92
formula_93
This yields the lemniscatic special case of Legendre's identity:
formula_94
Generalization for the overall case.
Now the general modular case is worked out. For this purpose, the derivatives of the complete elliptic integrals with respect to the modulus formula_95 are derived and then combined, and the balance of Legendre's identity is then determined.
The derivative of the "circle function" is the negative product of the identity function and the reciprocal of the circle function:
formula_96
These are the derivatives of K and E shown in this article in the sections above:
formula_97
formula_98
In combination with the derivative of the circle function, these derivatives are then valid:
formula_99
formula_100
Legendre's identity involves products of any two complete elliptic integrals. For the derivation of the left-hand side of Legendre's identity, the product rule is now applied as follows:
formula_101
formula_102
formula_103
Of these three equations, adding the top two equations and subtracting the bottom equation gives this result:
formula_104
Hence this derivative is constantly equal to zero for every value of formula_95.
This result is now combined with Legendre's equation for the modulus formula_105, which was worked out in the preceding section:
formula_94
The combination of the last two formulas gives the following result:
formula_106
This follows because if the derivative of a continuous function is constantly zero, then that function is constant. This means that the expression takes the same value for every abscissa value formula_95, and the associated function graph is therefore a horizontal straight line.
See also.
<templatestyles src="Div col/styles.css"/>
References.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " f(x) = \\int_{c}^{x} R{\\left({\\textstyle t, \\sqrt{ P(t)} }\\right)} \\, dt,"
},
{
"math_id": 1,
"text": "\\cos \\varphi = \\operatorname{cn} u, \\quad \\textrm{and} \\quad \\sqrt{1 - m \\sin^2 \\varphi} = \\operatorname{dn} u."
},
{
"math_id": 2,
"text": " F(\\varphi, \\sin \\alpha) = F\\left(\\varphi \\mid \\sin^2 \\alpha\\right) = F(\\varphi \\setminus \\alpha) = F(\\sin \\varphi ; \\sin \\alpha)."
},
{
"math_id": 3,
"text": " F(\\varphi,k) = F\\left(\\varphi \\mid k^2\\right) = F(\\sin \\varphi ; k) = \\int_0^\\varphi \\frac {d\\theta}{\\sqrt{1 - k^2 \\sin^2 \\theta}}."
},
{
"math_id": 4,
"text": " F(x ; k) = \\int_{0}^{x} \\frac{dt}{\\sqrt{\\left(1 - t^2\\right)\\left(1 - k^2 t^2\\right)}}."
},
{
"math_id": 5,
"text": " F(\\varphi \\setminus \\alpha) = F(\\varphi, \\sin \\alpha) = \\int_0^\\varphi \\frac{d\\theta}{\\sqrt{1-\\left(\\sin \\theta \\sin \\alpha\\right)^2}}."
},
{
"math_id": 6,
"text": "F(x;k) = u;"
},
{
"math_id": 7,
"text": "F\\bigl[\\arctan(x),k\\bigr] + F\\bigl[\\arctan(y),k\\bigr] = F\\left[\\arctan\\left(\\frac{x\\sqrt{k'^2y^2+1}}{\\sqrt{y^2+1}}\\right) + \\arctan\\left(\\frac{y\\sqrt{k'^2x^2+1}}{\\sqrt{x^2+1}}\\right),k\\right] "
},
{
"math_id": 8,
"text": "F\\bigl[\\arcsin(x),k\\bigr] = \\frac{2}{1+\\sqrt{1-k^2}}F\\left[\\arcsin\\left(\\frac{\\left(1+\\sqrt{1-k^2}\\right)x}{1+\\sqrt{1-k^2x^2}}\\right),\\frac{1-\\sqrt{1-k^2}}{1+\\sqrt{1-k^2}}\\right] "
},
{
"math_id": 9,
"text": " E(\\varphi,k) = E\\left(\\varphi \\,|\\,k^2\\right) = E(\\sin\\varphi;k) = \\int_0^\\varphi \\sqrt{1-k^2 \\sin^2\\theta}\\, d\\theta."
},
{
"math_id": 10,
"text": " E(x;k) = \\int_0^x \\frac{\\sqrt{1-k^2 t^2} }{\\sqrt{1-t^2}}\\,dt."
},
{
"math_id": 11,
"text": " E(\\varphi \\setminus \\alpha) = E(\\varphi, \\sin \\alpha) = \\int_0^\\varphi \\sqrt{1-\\left(\\sin \\theta \\sin \\alpha\\right)^2} \\, d\\theta."
},
{
"math_id": 12,
"text": "\\begin{align}\nE{\\left(\\operatorname{sn}(u ; k) ; k\\right)}\n= \\int_0^u \\operatorname{dn}^2 (w ; k) \\, dw\n&= u - k^2 \\int_0^u \\operatorname{sn}^2 (w ; k) \\, dw \\\\[1ex]\n&= \\left(1-k^2\\right) u + k^2 \\int_0^u \\operatorname{cn}^2 (w ; k) \\,dw.\n\\end{align}"
},
{
"math_id": 13,
"text": "m(\\varphi) = a\\left(E(\\varphi,e)+\\frac{d^2}{d\\varphi^2}E(\\varphi,e)\\right),"
},
{
"math_id": 14,
"text": "E{\\left[\\arctan(x), k\\right]}\n + E{\\left[\\arctan(y), k\\right]}\n= E{\\left[\\arctan\\left(\\frac{x\\sqrt{k'^2y^2+1}}{\\sqrt{y^2+1}}\\right) + \\arctan\\left(\\frac{y\\sqrt{k'^2x^2+1}}{\\sqrt{x^2+1}}\\right),k\\right]}\n + \\frac{k^2xy}{k'^2x^2y^2+x^2+y^2+1}\\left(\\frac{x\\sqrt{k'^2y^2+1}}{\\sqrt{y^2+1}}+\\frac{y\\sqrt{k'^2x^2+1}}{\\sqrt{x^2+1}}\\right) "
},
{
"math_id": 15,
"text": "E{\\left[\\arcsin(x),k\\right]}\n= \\left(1+\\sqrt{1-k^2}\\right) E{\\left[\\arcsin\\left(\\frac{\\left(1+\\sqrt{1-k^2}\\right)x}{1+\\sqrt{1-k^2x^2}}\\right),\\frac{1-\\sqrt{1-k^2}}{1+\\sqrt{1-k^2}}\\right]}\n - \\sqrt{1-k^2} F{\\left[\\arcsin(x),k\\right]}\n + \\frac{k^2x\\sqrt{1-x^2}}{1+\\sqrt{1-k^2x^2}} "
},
{
"math_id": 16,
"text": " \\Pi(n ; \\varphi \\setminus \\alpha) = \\int_0^\\varphi \\frac{1}{1-n\\sin^2 \\theta} \\frac{d\\theta}{\\sqrt{1-\\left(\\sin\\theta\\sin \\alpha\\right)^2}}"
},
{
"math_id": 17,
"text": " \\Pi(n ; \\varphi \\,|\\,m) = \\int_{0}^{\\sin \\varphi} \\frac{1}{1-nt^2} \\frac{dt}{\\sqrt{\\left(1-m t^2\\right)\\left(1-t^2\\right) }}."
},
{
"math_id": 18,
"text": " \\Pi\\bigl(n; \\,\\operatorname{am}(u;k); \\,k\\bigr) = \\int_0^u \\frac{dw} {1 - n \\,\\operatorname{sn}^2 (w;k)}."
},
{
"math_id": 19,
"text": "m(\\varphi)=a\\left(1-e^2\\right)\\Pi\\left(e^2 ; \\varphi \\,|\\,e^2\\right)."
},
{
"math_id": 20,
"text": "K(k) = \\int_0^\\tfrac{\\pi}{2} \\frac{d\\theta}{\\sqrt{1-k^2 \\sin^2\\theta}} = \\int_0^1 \\frac{dt}{\\sqrt{\\left(1-t^2\\right)\\left(1-k^2 t^2\\right)}},"
},
{
"math_id": 21,
"text": "K(k) = F\\left(\\tfrac{\\pi}{2},k\\right) = F\\left(\\tfrac{\\pi}{2} \\,|\\, k^2\\right) = F(1;k)."
},
{
"math_id": 22,
"text": "K(k) = \\frac{\\pi}{2}\\sum_{n=0}^\\infty \\left(\\frac{(2n)!}{2^{2 n} (n!)^2}\\right)^2 k^{2n} = \\frac{\\pi}{2} \\sum_{n=0}^\\infty \\bigl(P_{2 n}(0)\\bigr)^2 k^{2n},"
},
{
"math_id": 23,
"text": "K(k) = \\frac{\\pi}{2}\\left(1+\\left(\\frac{1}{2}\\right)^2 k^2+\\left(\\frac{1\\cdot 3}{2\\cdot 4}\\right)^2 k^4+\\cdots+\\left(\\frac{\\left(2n-1\\right)!!}{\\left(2n\\right)!!}\\right)^2 k^{2n}+\\cdots\\right),"
},
{
"math_id": 24,
"text": "K(k) = \\tfrac{\\pi}{2} \\,{}_2F_1 \\left(\\tfrac{1}{2}, \\tfrac{1}{2}; 1; k^2\\right)."
},
{
"math_id": 25,
"text": "K(k) = \\frac{\\pi}{2\\operatorname{agm}\\left(1,\\sqrt{1-k^2}\\right)}."
},
{
"math_id": 26,
"text": "\\begin{align}\nK(k) &= \\frac{\\pi}{2\\operatorname{agm}\\left(1,\\sqrt{1-k^2}\\right)} \\\\[4pt]\n& = \\frac{\\pi}{2\\operatorname{agm}\\left(\\frac12+\\frac\\sqrt{1-k^2}{2},\\sqrt[4]{1-k^2}\\right)} \\\\[4pt]\n&= \\frac{\\pi}{\\left(1+\\sqrt{1-k^2}\\right)\\operatorname{agm}\\left(1,\\frac{2\\sqrt[4]{1-k^2}}{\\left(1+\\sqrt{1-k^2}\\right)}\\right)} \\\\[4pt]\n& = \\frac{2}{1+\\sqrt{1-k^2}}K\\left(\\frac{1-\\sqrt{1-k^2}}{1+\\sqrt{1-k^2}}\\right)\n\\end{align}"
},
{
"math_id": 27,
"text": "n \\isin \\mathbb{N}"
},
{
"math_id": 28,
"text": "K(k) = n\\left[\\sum_{a = 1}^{n} \\operatorname{dn}\\left(\\frac{2a}{n}K(k);k\\right)\\right]^{-1}K\\left[k^n\\prod_{a=1}^{n}\\operatorname{sn}\\left(\\frac{2a-1}{n}K(k);k\\right)^2\\right] "
},
{
"math_id": 29,
"text": "r \\isin \\mathbb{Q}^+"
},
{
"math_id": 30,
"text": "K\\left(\\sqrt{2}-1\\right)=\\frac{\\Gamma \\left(\\frac18\\right)\\Gamma \\left(\\frac38\\right)\\sqrt{\\sqrt{2}+1}}{8\\sqrt[4]{2}\\sqrt{\\pi}},"
},
{
"math_id": 31,
"text": "K\\left(\\frac{\\sqrt{3}-1}{2\\sqrt{2}}\\right)=\\frac{1}{8\\pi}\\sqrt[4]{3}\\,\\sqrt[3]{4}\\,\\Gamma\\biggl(\\frac{1}{3}\\biggr)^3"
},
{
"math_id": 32,
"text": "K\\left(\\frac{3-\\sqrt{7}}{4\\sqrt{2}}\\right)=\\frac{\\Gamma \\left(\\frac17\\right)\\Gamma \\left(\\frac27\\right)\\Gamma \\left(\\frac47\\right)}{4\\sqrt[4]{7}\\pi}."
},
{
"math_id": 33,
"text": "\\frac{iK'}{K}=\\frac{iK\\left(\\sqrt{1-k^2}\\right)}{K(k)}"
},
{
"math_id": 34,
"text": "K\\left(e^{5\\pi i/6}\\right)=\\frac{e^{-\\pi i/12}\\Gamma ^3\\left(\\frac13\\right)\\sqrt[4]{3}}{4\\sqrt[3]{2}\\pi}."
},
{
"math_id": 35,
"text": "K\\left(k\\right)\\approx\\frac{\\pi}{2}+\\frac{\\pi}{8}\\frac{k^2}{1-k^2}-\\frac{\\pi}{16}\\frac{k^4}{1-k^2}"
},
{
"math_id": 36,
"text": "\\frac{d}{dk}\\left(k\\left(1-k^2\\right)\\frac{dK(k)}{dk}\\right) = k \\, K(k)"
},
{
"math_id": 37,
"text": "K\\left(\\sqrt{1-k^2}\\right)"
},
{
"math_id": 38,
"text": "\\frac{d}{dk}K(k) = \\frac{E(k)}{k\\left(1-k^2\\right)}-\\frac{K(k)}{k}."
},
{
"math_id": 39,
"text": "\\frac{K(k)}{2\\pi} = -\\frac{1}{4} + \\sum^{\\infty}_{n=0} \\frac{q^n}{1+q^{2n}} = -\\frac{1}{4} + \\cfrac{1}{1-q+ \\cfrac{\\left(1-q\\right)^2}{1-q^3+ \\cfrac{q\\left(1-q^2\\right)^2}{1-q^5+ \\cfrac{q^2\\left(1-q^3\\right)^2}{1-q^7+\\cfrac{q^3\\left(1-q^4\\right)^2}{1-q^9+\\cdots}}}}},"
},
{
"math_id": 40,
"text": " q = q(k) = \\exp[-\\pi K'(k)/K(k)] "
},
{
"math_id": 41,
"text": "m"
},
{
"math_id": 42,
"text": "K[m]=\\int_0^{\\pi/2}\\dfrac{d\\theta}{\\sqrt{1-m\\sin^2\\theta}}"
},
{
"math_id": 43,
"text": "\\theta_2(\\tau)=2e^{\\pi i\\tau/4}\\sum_{n=0}^\\infty q^{n(n+1)},\\quad q=e^{\\pi i\\tau},\\, \\operatorname{Im}\\tau >0,"
},
{
"math_id": 44,
"text": "\\theta_3(\\tau)=1+2\\sum_{n=1}^\\infty q^{n^2},\\quad q=e^{\\pi i\\tau},\\,\\operatorname{Im}\\tau >0"
},
{
"math_id": 45,
"text": "\\tau=i\\frac{K[1-m]}{K[m]}"
},
{
"math_id": 46,
"text": "m=\\frac{\\theta_2(\\tau)^4}{\\theta_3(\\tau)^4}"
},
{
"math_id": 47,
"text": "\\left|{e}^{-\\pi i \\tau / 4} \\theta_{2}\\!\\left(\\tau\\right) - 2\\sum_{n=0}^{N - 1} {q}^{n \\left(n + 1\\right)}\\right| \\le \\begin{cases} \\frac{2 {\\left|q\\right|}^{N \\left(N + 1\\right)}}{1 - \\left|q\\right|^{2N+1}}, & \\left|q\\right|^{2N+1} < 1\\\\\\infty, & \\text{otherwise}\\\\ \\end{cases}\\;"
},
{
"math_id": 48,
"text": "\\left|\\theta_{3}\\!\\left(\\tau\\right) - \\left(1+2\\sum_{n=1}^{N - 1} {q}^{n^2}\\right)\\right| \\le \\begin{cases} \\frac{2 {\\left|q\\right|}^{N^2}}{1 - \\left|q\\right|^{2N+1}}, & \\left|q\\right|^{2N+1} < 1\\\\\\infty, & \\text{otherwise}\\\\ \\end{cases}\\;"
},
{
"math_id": 49,
"text": "N\\in\\mathbb{Z}_{\\ge 1}"
},
{
"math_id": 50,
"text": "\\operatorname{Im}\\tau >0"
},
{
"math_id": 51,
"text": "K[m]=\\frac{\\pi}{2}\\theta_3(\\tau )^2,\\quad \\tau=i\\frac{K[1-m]}{K[m]}"
},
{
"math_id": 52,
"text": "m\\in\\mathbb{C}\\setminus\\{0,1\\}"
},
{
"math_id": 53,
"text": "E(k) = \\int_0^\\tfrac{\\pi}{2} \\sqrt{1-k^2 \\sin^2\\theta} \\, d\\theta = \\int_0^1 \\frac{\\sqrt{1-k^2 t^2}}{\\sqrt{1-t^2}} \\, dt,"
},
{
"math_id": 54,
"text": "E(k) = E\\left(\\tfrac{\\pi}{2},k\\right) = E(1;k)."
},
{
"math_id": 55,
"text": "C = 4 a E(e)."
},
{
"math_id": 56,
"text": "E(k) = \\frac{\\pi}{2}\\sum_{n=0}^\\infty \\left(\\frac{(2n)!}{2^{2n} \\left(n!\\right)^2}\\right)^2 \\frac{k^{2n}}{1-2n},"
},
{
"math_id": 57,
"text": "E(k) = \\frac{\\pi}{2}\\left(1-\\left(\\frac12\\right)^2 \\frac{k^2}{1}-\\left(\\frac{1\\cdot 3}{2\\cdot 4}\\right)^2 \\frac{k^4}{3}-\\cdots-\\left(\\frac{(2n-1)!!}{(2n)!!}\\right)^2 \\frac{k^{2n}}{2n-1}-\\cdots\\right)."
},
{
"math_id": 58,
"text": "E(k) = \\tfrac{\\pi}{2} \\,{}_2F_1 \\left(\\tfrac12, -\\tfrac12; 1; k^2 \\right)."
},
{
"math_id": 59,
"text": "E(k) = \\left(1+\\sqrt{1-k^2}\\right)\\,E\\left(\\frac{1-\\sqrt{1-k^2}}{1+\\sqrt{1-k^2}}\\right) - \\sqrt{1-k^2}\\,K(k) "
},
{
"math_id": 60,
"text": "c_n=\\sqrt{\\left|a_n^2-g_n^2\\right|}."
},
{
"math_id": 61,
"text": "a_\\infty = \\lim_{n\\to\\infty} a_n = \\lim_{n\\to\\infty} g_n = \\operatorname{agm}\\left(1, \\sqrt{1-k^2}\\right)."
},
{
"math_id": 62,
"text": "\\lim_{n\\to\\infty} c_n=0."
},
{
"math_id": 63,
"text": "E(k) = \\frac{\\pi}{2a_\\infty}\\left(1-\\sum_{n=0}^{\\infty} 2^{n-1} c_n^2\\right)."
},
{
"math_id": 64,
"text": "K(k)=\\frac{\\pi}{2\\operatorname{agm}\\left(1,\\sqrt{1-k^2}\\right)}"
},
{
"math_id": 65,
"text": "E\\left(\\frac{1}{\\sqrt{2}}\\right)=\\frac{1}{2}K\\left(\\frac{1}{\\sqrt{2}}\\right)+\\frac{\\pi}{4K\\left(\\frac{1}{\\sqrt{2}}\\right)},"
},
{
"math_id": 66,
"text": "E\\left(\\frac{\\sqrt{3}-1}{2\\sqrt{2}}\\right)=\\frac{3+\\sqrt{3}}{6}K\\left(\\frac{\\sqrt{3}-1}{2\\sqrt{2}}\\right)+\\frac{\\pi\\sqrt{3}}{12K\\left(\\frac{\\sqrt{3}-1}{2\\sqrt{2}}\\right)},"
},
{
"math_id": 67,
"text": "E\\left(\\frac{3-\\sqrt{7}}{4\\sqrt{2}}\\right)=\\frac{7+2\\sqrt{7}}{14}K\\left(\\frac{3-\\sqrt{7}}{4\\sqrt{2}}\\right)+\\frac{\\pi\\sqrt{7}}{28K\\left(\\frac{3-\\sqrt{7}}{4\\sqrt{2}}\\right)}."
},
{
"math_id": 68,
"text": "\\frac{dE(k)}{dk} = \\frac{E(k)-K(k)}{k}"
},
{
"math_id": 69,
"text": "\\left(k^2-1\\right) \\frac{d}{dk} \\left( k \\;\\frac{dE(k)}{dk} \\right) = k E(k)"
},
{
"math_id": 70,
"text": "\\Pi(n,k) = \\int_0^\\frac{\\pi}{2} \\frac{d\\theta}{\\left(1-n\\sin^2\\theta\\right)\\sqrt{1-k^2 \\sin^2\\theta}}."
},
{
"math_id": 71,
"text": "\\Pi'(n,k) = \\int_0^\\frac{\\pi}{2} \\frac{d\\theta}{\\left(1+n\\sin^2\\theta\\right)\\sqrt{1-k^2 \\sin^2\\theta}}."
},
{
"math_id": 72,
"text": "\\begin{align}\n\\frac{\\partial\\Pi(n,k)}{\\partial n} &= \\frac{1}{2\\left(k^2-n\\right)(n-1)}\\left(E(k)+\\frac{1}{n}\\left(k^2-n\\right)K(k) + \\frac{1}{n} \\left(n^2-k^2\\right)\\Pi(n,k)\\right) \\\\[8pt]\n\n\\frac{\\partial\\Pi(n,k)}{\\partial k} &= \\frac{k}{n-k^2}\\left(\\frac{E(k)}{k^2-1}+\\Pi(n,k)\\right)\n\\end{align}"
},
{
"math_id": 73,
"text": "Z(\\varphi,k)=E(\\varphi,k)-\\frac{E(k)}{K(k)}F(\\varphi,k)."
},
{
"math_id": 74,
"text": "\\varphi"
},
{
"math_id": 75,
"text": "\\pi"
},
{
"math_id": 76,
"text": "Z(\\varphi,k)=\\operatorname{zn}(F(\\varphi,k),k)"
},
{
"math_id": 77,
"text": "Z"
},
{
"math_id": 78,
"text": "\\operatorname{zn}"
},
{
"math_id": 79,
"text": "K(\\varepsilon) E\\left(\\sqrt{1-\\varepsilon^2}\\right) + E(\\varepsilon) K\\left(\\sqrt{1-\\varepsilon^2}\\right) - K(\\varepsilon) K\\left(\\sqrt{1-\\varepsilon^2}\\right) = \\frac {\\pi}{2}"
},
{
"math_id": 80,
"text": "K({\\color{blueviolet}\\tfrac{3}{5}})E({\\color{blue}\\tfrac{4}{5}}) + E({\\color{blueviolet}\\tfrac{3}{5}})K({\\color{blue}\\tfrac{4}{5}}) - K({\\color{blueviolet}\\tfrac{3}{5}})K({\\color{blue}\\tfrac{4}{5}}) = \\tfrac{1}{2}\\pi"
},
{
"math_id": 81,
"text": "(1 + \\varepsilon)K(\\varepsilon)E(\\tfrac{1 - \\varepsilon}{1 + \\varepsilon}) + \\tfrac{2}{1 + \\varepsilon}E(\\varepsilon)K (\\tfrac{1 - \\varepsilon}{1 + \\varepsilon}) - 2K(\\varepsilon)K(\\tfrac{1 - \\varepsilon}{1 + \\varepsilon}) = \\tfrac{1}{2}\\pi "
},
{
"math_id": 82,
"text": "\\tfrac{4}{3}K({\\color{blue}\\tfrac{1}{3}})E({\\color{green}\\tfrac{1}{2}}) + \\tfrac{3}{2}E({\\color{blue}\\tfrac{1}{3}})K({\\color{green}\\tfrac{1}{2}}) - 2K({\\color{blue}\\tfrac{1}{3}})K({\\color{green}\\tfrac{1}{2}}) = \\tfrac{1}{2}\\pi"
},
{
"math_id": 83,
"text": "\\frac{\\mathrm{d}}{\\mathrm{d}y} \\,K\\bigl(\\frac{1}{2}\\sqrt{2}\\bigr) - F\\biggl[\\arccos (xy);\\frac{1}{2}\\sqrt{2}\\biggr] = \\frac{\\sqrt{2}\\,x}{\\sqrt{1 - x^4 y^4}}"
},
{
"math_id": 84,
"text": "\\frac{\\mathrm{d}}{\\mathrm{d}y} \\,2E\\bigl(\\frac{1}{2}\\sqrt{2}\\bigr) - K\\bigl(\\frac {1}{2}\\sqrt{2}\\bigr) - 2E\\biggl[\\arccos(xy);\\frac{1}{2}\\sqrt{2}\\biggr] + F\\biggl[\\arccos(xy );\\frac{1}{2}\\sqrt{2}\\biggr] = \\frac{\\sqrt{2}\\,x^3 y^2}{\\sqrt{1 - x^4 y^4}} "
},
{
"math_id": 85,
"text": "K\\bigl(\\frac{1}{2}\\sqrt{2}\\bigr) - F\\biggl[\\arccos (x);\\frac{1}{2}\\sqrt{2}\\biggr] = \\int_{0}^{1} \\frac{\\sqrt{2}\\,x}{\\sqrt{1 - x^4 y^4}} \\,\\mathrm{d}y "
},
{
"math_id": 86,
"text": "2E\\bigl(\\frac{1}{2}\\sqrt{2}\\bigr) - K\\bigl(\\frac {1}{2}\\sqrt{2}\\bigr) - 2E\\biggl[\\arccos(x);\\frac{1}{2}\\sqrt{2}\\biggr] + F\\biggl[\\arccos(x);\\frac{1}{2}\\sqrt{2}\\biggr] = \\int_{0}^{1} \\frac{\\sqrt{2}\\,x^3 y^2}{\\sqrt{1 - x^4 y^4}} \\,\\mathrm{d}y "
},
{
"math_id": 87,
"text": "\n\\frac{\\sqrt{2}}{\\sqrt{1 - x^4}} \\biggl\\{2E\\bigl(\\frac{1}{2}\\sqrt{2}\\bigr) - K\\bigl(\\frac{1}{2}\\sqrt{2}\\bigr) - 2E\\biggl[\\arccos(x);\\frac{1}{2}\\sqrt{2}\\biggr] + F\\biggl[\\arccos( x);\\frac{1}{2}\\sqrt{2}\\biggr]\\biggr\\} \\,+\n"
},
{
"math_id": 88,
"text": "\n+ \\,\\frac{\\sqrt{2} \\,x^2}{\\sqrt{1 - x^4}} \\biggl\\{K\\bigl(\\frac{1}{2}\\sqrt{2}\\bigr) - F\\biggl[\\arccos(x);\\frac{1}{2}\\sqrt{2}\\biggr]\\biggr\\} = \\int_{0}^{1} \\frac{2\\,x ^3 (y^2 + 1)}{\\sqrt{(1 - x^4)(1 - x^4\\,y^4)}} \\,\\mathrm{d}y\n"
},
{
"math_id": 89,
"text": " \\biggl\\{K\\bigl(\\frac{1}{2}\\sqrt{2}\\bigr) - F\\biggl[\\arccos(x);\\frac{1}{2}\\sqrt{ 2}\\biggr]\\biggr\\}\\biggl\\{2E\\bigl(\\frac{1}{2}\\sqrt{2}\\bigr) - K\\bigl(\\frac{1}{2}\\sqrt{2 }\\bigr) - 2E\\biggl[\\arccos(x);\\frac{1}{2}\\sqrt{2}\\biggr] + F\\biggl[\\arccos(x);\\frac{1}{2} \\sqrt{2}\\biggr]\\biggr\\} =\n"
},
{
"math_id": 90,
"text": " = \\int_{0}^{1} \\frac{1}{y^2}(y^2 + 1)\\biggl[\\text{artanh}(y^2) - \\text{artanh} \\bigl(\\frac{\\sqrt{1 - x^4}\\,y^2}{\\sqrt{1 - x^4 y^4}}\\bigr)\\biggr] \\mathrm{d}y "
},
{
"math_id": 91,
"text": "x = 1"
},
{
"math_id": 92,
"text": " K\\bigl(\\frac{1}{2}\\sqrt{2}\\bigr)\\biggl[2\\,E\\bigl(\\frac{1}{2}\\sqrt{2}\\bigr) - K\\bigl (\\frac{1}{2}\\sqrt{2}\\bigr)\\biggr] = \\int_{0}^{1} \\frac{1}{y^2}(y^2 + 1) \\,\\text{artanh}(y^2) \\,\\mathrm{d}y = "
},
{
"math_id": 93,
"text": " = \\biggl[2\\arctan(y) - \\frac{1}{y}(1 - y^2)\\,\\text{artanh}(y^2)\\biggr]_{y = 0}^{y = 1} = 2\\arctan(1) = \\frac{\\pi}{2} "
},
{
"math_id": 94,
"text": "2E\\bigl(\\frac{1}{2}\\sqrt{2}\\bigr)K\\bigl(\\frac{1}{2}\\sqrt{2}\\bigr) - K\\bigl(\\frac{1}{2}\\sqrt{2}\\bigr)^2 = \\frac{\\pi}{2}"
},
{
"math_id": 95,
"text": " \\varepsilon "
},
{
"math_id": 96,
"text": "\\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon}\\sqrt{1 - \\varepsilon^2} = -\\,\\frac{\\varepsilon}{\\sqrt{1 - \\varepsilon^2}}"
},
{
"math_id": 97,
"text": "\\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon} K(\\varepsilon) = \\frac{1}{\\varepsilon(1-\\varepsilon^2)} \\bigl[E( \\varepsilon) - (1-\\varepsilon^2)K(\\varepsilon)\\bigr]"
},
{
"math_id": 98,
"text": "\\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon} E(\\varepsilon) = - \\,\\frac{1}{\\varepsilon}\\bigl[K(\\varepsilon) - E (\\varepsilon)\\bigr]"
},
{
"math_id": 99,
"text": "\\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon}K(\\sqrt{1 - \\varepsilon^2}) = \\frac{1}{\\varepsilon(1-\\varepsilon^ 2)} \\bigl[\\varepsilon^2 K(\\sqrt{1 - \\varepsilon^2}) - E(\\sqrt{1 - \\varepsilon^2})\\bigr]"
},
{
"math_id": 100,
"text": "\\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon }E(\\sqrt{1 - \\varepsilon ^2}) = \\frac{\\varepsilon }{1 - \\varepsilon ^2} \\bigl[K(\\sqrt{1 - \\varepsilon^2}) - E(\\sqrt{1 - \\varepsilon^2})\\bigr]"
},
{
"math_id": 101,
"text": "\\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon}K(\\varepsilon)E(\\sqrt{1 - \\varepsilon^2}) = \\frac{1}{\\varepsilon( 1-\\varepsilon^2)} \\bigl[E(\\varepsilon)E(\\sqrt{1 - \\varepsilon^2}) - K(\\varepsilon)E(\\sqrt{1 - \\varepsilon^2}) + \\varepsilon^2 K(\\varepsilon)K(\\sqrt{1 - \\varepsilon^2})\\bigr]"
},
{
"math_id": 102,
"text": "\\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon}E(\\varepsilon)K(\\sqrt{1 - \\varepsilon^2}) = \\frac{1}{\\varepsilon( 1-\\varepsilon^2)} \\bigl[- E(\\varepsilon)E(\\sqrt{1 - \\varepsilon^2}) + E(\\varepsilon)K(\\sqrt{1 - \\varepsilon^2}) - (1 - \\varepsilon^2) K(\\varepsilon)K(\\sqrt{1 - \\varepsilon^2})\\bigr]"
},
{
"math_id": 103,
"text": "\\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon}K(\\varepsilon)K(\\sqrt{1 - \\varepsilon^2}) = \\frac{1}{\\varepsilon( 1-\\varepsilon^2)} \\bigl[E(\\varepsilon)K(\\sqrt{1 - \\varepsilon^2}) - K(\\varepsilon)E(\\sqrt{1 - \\varepsilon^2}) - ( 1 - 2\\varepsilon^2) K(\\varepsilon)K(\\sqrt{1 - \\varepsilon^2})\\bigr]"
},
{
"math_id": 104,
"text": "\\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon} \\bigl[K(\\varepsilon)E(\\sqrt{1 - \\varepsilon^2}) + E(\\varepsilon)K (\\sqrt{1 - \\varepsilon^2}) - K(\\varepsilon)K(\\sqrt{1 - \\varepsilon^2})\\bigr] = 0"
},
{
"math_id": 105,
"text": "\\varepsilon = 1/\\sqrt{2}"
},
{
"math_id": 106,
"text": "K(\\varepsilon)E(\\sqrt{1 - \\varepsilon^2}) + E(\\varepsilon)K(\\sqrt{1 - \\varepsilon^2}) - K(\\varepsilon)K(\\sqrt{1 - \\varepsilon^2}) = \\tfrac{1}{2}\\pi"
}
] | https://en.wikipedia.org/wiki?curid=9960 |
996107 | Universal coefficient theorem | Establish relationships between homology and cohomology theories
In algebraic topology, universal coefficient theorems establish relationships between homology groups (or cohomology groups) with different coefficients. For instance, for every topological space X, its "integral homology groups":
"H""i"("X"; Z)
completely determine its "homology groups with coefficients in" A, for any abelian group A:
"H""i"("X"; "A")
Here "H""i" might be the simplicial homology, or more generally the singular homology. The usual proof of this result is a pure piece of homological algebra about chain complexes of free abelian groups. The form of the result is that other coefficients A may be used, at the cost of using a Tor functor.
For example it is common to take A to be Z/2Z, so that coefficients are modulo 2. This becomes straightforward in the absence of 2-torsion in the homology. Quite generally, the result indicates the relationship that holds between the Betti numbers "bi" of X and the Betti numbers "b""i","F" with coefficients in a field F. These can differ, but only when the characteristic of F is a prime number p for which there is some p-torsion in the homology.
Statement of the homology case.
Consider the tensor product of modules "H""i"("X"; Z) ⊗ "A". The theorem states there is a short exact sequence involving the Tor functor
formula_0
Furthermore, this sequence splits, though not naturally. Here μ is the map induced by the bilinear map "Hi"("X"; Z) × "A" → "Hi"("X"; "A").
If the coefficient ring A is Z/"p"Z, this is a special case of the Bockstein spectral sequence.
Universal coefficient theorem for cohomology.
Let G be a module over a principal ideal domain R (e.g., Z or a field.)
There is also a universal coefficient theorem for cohomology involving the Ext functor, which asserts that there is a natural short exact sequence
formula_1
As in the homology case, the sequence splits, though not naturally.
In fact, suppose
formula_2
and define:
formula_3
Then h above is the canonical map:
formula_4
An alternative point-of-view can be based on representing cohomology via Eilenberg–MacLane space where the map h takes a homotopy class of maps from X to "K"("G", "i") to the corresponding homomorphism induced in homology. Thus, the Eilenberg–MacLane space is a "weak right adjoint" to the homology functor.
Example: mod 2 cohomology of the real projective space.
Let "X"
P"n"(R), the real projective space. We compute the singular cohomology of X with coefficients in "R"
Z/2Z.
Knowing that the integer homology is given by:
formula_5
We have Ext("R", "R") = "R", Ext(Z, "R") = 0, so that the above exact sequences yield
formula_6
In fact the total cohomology ring structure is
formula_7
Corollaries.
A special case of the theorem is computing integral cohomology. For a finite CW complex X, "Hi"("X"; Z) is finitely generated, and so we have the following decomposition.
formula_8
where "βi"("X") are the Betti numbers of X and formula_9 is the torsion part of formula_10. One may check that
formula_11
and
formula_12
This gives the following statement for integral cohomology:
formula_13
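This decomposition can be turned into a small bookkeeping routine. The sketch below is purely illustrative: homology groups are encoded as a Betti number together with a list of torsion orders, and the statement is applied to the standard integral homology of the real projective 3-space.

```python
# H_i(X; Z) per degree as (betti_number, [torsion orders]); example: RP^3.
homology = {0: (1, []), 1: (0, [2]), 2: (0, []), 3: (1, [])}

def integral_cohomology(homology):
    """H^i(X; Z) = Z^{beta_i} (+) T_{i-1}, following the corollary above."""
    cohomology = {}
    for i in sorted(homology):
        betti_i = homology[i][0]
        torsion_prev = homology[i - 1][1] if (i - 1) in homology else []
        cohomology[i] = (betti_i, torsion_prev)
    return cohomology

print(integral_cohomology(homology))
# {0: (1, []), 1: (0, []), 2: (0, [2]), 3: (1, [])}, i.e. H^2(RP^3; Z) = Z/2
```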
For X an orientable, closed, and connected n-manifold, this corollary coupled with Poincaré duality gives that "β""i"("X") = "β""n"−"i"("X").
Universal coefficient spectral sequence.
There is a generalization of the universal coefficient theorem for (co)homology with twisted coefficients.
For cohomology we have
formula_14
where formula_15 is a ring with unit, formula_16 is a chain complex of free modules over formula_15, formula_17 is any formula_18-bimodule for some ring with a unit formula_19, and formula_20 is the Ext group. The differential formula_21 has degree formula_22.
Similarly for homology
formula_23
where Tor is the Tor group and the differential formula_24 has degree formula_25.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " 0 \\to H_i(X; \\mathbf{Z})\\otimes A \\, \\overset{\\mu}\\to \\, H_i(X;A) \\to \\operatorname{Tor}_1(H_{i-1}(X; \\mathbf{Z}),A)\\to 0."
},
{
"math_id": 1,
"text": " 0 \\to \\operatorname{Ext}_R^1(H_{i-1}(X; R), G) \\to H^i(X; G) \\, \\overset{h} \\to \\, \\operatorname{Hom}_R(H_i(X; R), G)\\to 0."
},
{
"math_id": 2,
"text": "H_i(X;G) = \\ker \\partial_i \\otimes G / \\operatorname{im}\\partial_{i+1} \\otimes G"
},
{
"math_id": 3,
"text": "H^*(X; G) = \\ker(\\operatorname{Hom}(\\partial, G)) / \\operatorname{im}(\\operatorname{Hom}(\\partial, G))."
},
{
"math_id": 4,
"text": "h([f])([x]) = f(x)."
},
{
"math_id": 5,
"text": "H_i(X; \\mathbf{Z}) =\n\\begin{cases}\n\\mathbf{Z} & i = 0 \\text{ or } i = n \\text{ odd,}\\\\\n\\mathbf{Z}/2\\mathbf{Z} & 0<i<n,\\ i\\ \\text{odd,}\\\\\n0 & \\text{otherwise.}\n\\end{cases}"
},
{
"math_id": 6,
"text": "\\forall i = 0, \\ldots, n: \\qquad \\ H^i (X; R) = R."
},
{
"math_id": 7,
"text": "H^*(X; R) = R [w] / \\left \\langle w^{n+1} \\right \\rangle."
},
{
"math_id": 8,
"text": " H_i(X; \\mathbf{Z}) \\cong \\mathbf{Z}^{\\beta_i(X)}\\oplus T_{i},"
},
{
"math_id": 9,
"text": "T_i"
},
{
"math_id": 10,
"text": "H_i"
},
{
"math_id": 11,
"text": " \\operatorname{Hom}(H_i(X),\\mathbf{Z}) \\cong \\operatorname{Hom}(\\mathbf{Z}^{\\beta_i(X)},\\mathbf{Z}) \\oplus \\operatorname{Hom}(T_i, \\mathbf{Z}) \\cong \\mathbf{Z}^{\\beta_i(X)},"
},
{
"math_id": 12,
"text": "\\operatorname{Ext}(H_i(X),\\mathbf{Z}) \\cong \\operatorname{Ext}(\\mathbf{Z}^{\\beta_i(X)},\\mathbf{Z}) \\oplus \\operatorname{Ext}(T_i, \\mathbf{Z}) \\cong T_i."
},
{
"math_id": 13,
"text": " H^i(X;\\mathbf{Z}) \\cong \\mathbf{Z}^{\\beta_i(X)} \\oplus T_{i-1}. "
},
{
"math_id": 14,
"text": "E^{p,q}_2=Ext_{R}^q(H_p(C_*),G)\\Rightarrow H^{p+q}(C_*;G)"
},
{
"math_id": 15,
"text": "R"
},
{
"math_id": 16,
"text": "C_*"
},
{
"math_id": 17,
"text": "G"
},
{
"math_id": 18,
"text": "(R,S)"
},
{
"math_id": 19,
"text": "S"
},
{
"math_id": 20,
"text": "Ext"
},
{
"math_id": 21,
"text": "d^r"
},
{
"math_id": 22,
"text": "(1-r,r)"
},
{
"math_id": 23,
"text": "E_{p,q}^2=Tor^{R}_q(H_p(C_*),G)\\Rightarrow H_*(C_*;G)"
},
{
"math_id": 24,
"text": "d_r"
},
{
"math_id": 25,
"text": "(r-1,-r)"
}
] | https://en.wikipedia.org/wiki?curid=996107 |
996278 | Molecular geometry | Study of the 3D shapes of molecules
Molecular geometry is the three-dimensional arrangement of the atoms that constitute a molecule. It includes the general shape of the molecule as well as bond lengths, bond angles, torsional angles and any other geometrical parameters that determine the position of each atom.
Molecular geometry influences several properties of a substance including its reactivity, polarity, phase of matter, color, magnetism and biological activity. The angles between bonds that an atom forms depend only weakly on the rest of molecule, i.e. they can be understood as approximately local and hence transferable properties.
Determination.
The molecular geometry can be determined by various spectroscopic methods and diffraction methods. IR, microwave and Raman spectroscopy can give information about the molecular geometry from the details of the vibrational and rotational absorbance detected by these techniques. X-ray crystallography, neutron diffraction and electron diffraction can give molecular structure for crystalline solids based on the distance between nuclei and concentration of electron density. Gas electron diffraction can be used for small molecules in the gas phase. NMR and FRET methods can be used to determine complementary information including relative distances, dihedral angles, angles, and connectivity. Molecular geometries are best determined at low temperature because at higher temperatures the molecular structure is averaged over more accessible geometries (see next section). Larger molecules often exist in multiple stable geometries (conformational isomerism) that are close in energy on the potential energy surface. Geometries can also be computed by ab initio quantum chemistry methods to high accuracy. The molecular geometry can be different as a solid, in solution, and as a gas.
The position of each atom is determined by the nature of the chemical bonds by which it is connected to its neighboring atoms. The molecular geometry can be described by the positions of these atoms in space, evoking bond lengths of two joined atoms, bond angles of three connected atoms, and torsion angles (dihedral angles) of three consecutive bonds.
Influence of thermal excitation.
Since the motions of the atoms in a molecule are determined by quantum mechanics, "motion" must be defined in a quantum mechanical way. The overall (external) quantum mechanical motions translation and rotation hardly change the geometry of the molecule. (To some extent rotation influences the geometry via Coriolis forces and centrifugal distortion, but this is negligible for the present discussion.) In addition to translation and rotation, a third type of motion is molecular vibration, which corresponds to internal motions of the atoms such as bond stretching and bond angle variation. The molecular vibrations are harmonic (at least to good approximation), and the atoms oscillate about their equilibrium positions, even at the absolute zero of temperature. At absolute zero all atoms are in their vibrational ground state and show zero point quantum mechanical motion, so that the wavefunction of a single vibrational mode is not a sharp peak, but approximately a Gaussian function (the wavefunction for "n" = 0 depicted in the article on the quantum harmonic oscillator). At higher temperatures the vibrational modes may be thermally excited (in a classical interpretation one expresses this by stating that "the molecules will vibrate faster"), but they oscillate still around the recognizable geometry of the molecule.
To get a feeling for the probability that the vibration of a molecule may be thermally excited, we inspect the Boltzmann factor "β" ≡ exp(−Δ"E"/"kT"), where Δ"E" is the excitation energy of the vibrational mode, "k" the Boltzmann constant and "T" the absolute temperature. At 298 K (25 °C), typical values for the Boltzmann factor β are:
(The reciprocal centimeter is an energy unit that is commonly used in infrared spectroscopy; at room temperature the thermal energy "kT" corresponds to about 207 cm−1.) When an excitation energy is 500 cm−1, then about 8.9 percent of the molecules are thermally excited at room temperature. To put this in perspective: the lowest excitation vibrational energy in water is the bending mode (about 1600 cm−1). Thus, at room temperature less than 0.07 percent of all the molecules of a given amount of water will vibrate faster than at absolute zero.
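These percentages follow directly from the Boltzmann factor. A short sketch (not part of the original article; the 500 cm−1 and ~1600 cm−1 energies are simply the values quoted above) reproduces them:

```python
import math

K_CM = 0.6950348  # Boltzmann constant expressed in cm^-1 per kelvin (k/hc)

def boltzmann_factor(delta_e_cm: float, temperature_k: float = 298.0) -> float:
    """Return exp(-dE/kT) for an excitation energy given in reciprocal centimeters."""
    return math.exp(-delta_e_cm / (K_CM * temperature_k))

# 500 cm^-1 mode: about 8.9% of molecules are thermally excited at room temperature.
# ~1600 cm^-1 water bending mode: well below 0.07%.
for energy in (500.0, 1600.0):
    print(f"dE = {energy:6.0f} cm^-1 -> beta = {boltzmann_factor(energy):.4f}")
```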
As stated above, rotation hardly influences the molecular geometry. But, as a quantum mechanical motion, it is thermally excited at relatively (as compared to vibration) low temperatures. From a classical point of view it can be stated that at higher temperatures more molecules will rotate faster,
which implies that they have higher angular velocity and angular momentum. In quantum mechanical language: more eigenstates of higher angular momentum become thermally populated with rising temperatures. Typical rotational excitation energies are on the order of a few cm−1. The results of many spectroscopic experiments are broadened because they involve an averaging over rotational states. It is often difficult to extract geometries from spectra at high temperatures, because the number of rotational states probed in the experimental averaging increases with increasing temperature. Thus, many spectroscopic observations can only be expected to yield reliable molecular geometries at temperatures close to absolute zero, because at higher temperatures too many higher rotational states are thermally populated.
Bonding.
Molecules, by definition, are most often held together with covalent bonds involving single, double, and/or triple bonds, where a "bond" is a shared pair of electrons (the other method of bonding between atoms is called ionic bonding and involves a positive cation and a negative anion).
Molecular geometries can be specified in terms of 'bond lengths', 'bond angles' and 'torsional angles'. The bond length is defined to be the average distance between the nuclei of two atoms bonded together in any given molecule. A bond angle is the angle formed between three atoms across at least two bonds. For four atoms bonded together in a chain, the torsional angle is the angle between the plane formed by the first three atoms and the plane formed by the last three atoms.
There exists a mathematical relationship among the bond angles for one central atom and four peripheral atoms (labeled 1 through 4) expressed by the following determinant. This constraint removes one degree of freedom from the choices of (originally) six free bond angles to leave only five choices of bond angles. (The angles "θ"11, "θ"22, "θ"33, and "θ"44 are always zero, and this relationship can be modified for a different number of peripheral atoms by expanding/contracting the square matrix.)
formula_0
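As a quick numerical check of this constraint (a sketch, not part of the original article), one can evaluate the determinant for an ideal tetrahedral center such as methane, where every angle between two different bonds equals arccos(−1/3) ≈ 109.47°:

```python
import numpy as np

# Ideal tetrahedral geometry: every angle between two different bonds is
# arccos(-1/3) ~ 109.47 degrees, while theta_ii = 0 for a bond with itself.
tetrahedral_angle = np.arccos(-1.0 / 3.0)
theta = np.full((4, 4), tetrahedral_angle)
np.fill_diagonal(theta, 0.0)

# The constraint states that det(cos(theta_ij)) vanishes for any realizable
# set of angles around one central atom with four peripheral atoms.
print(f"det = {np.linalg.det(np.cos(theta)):.2e}")  # ~0 up to rounding error
```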
Molecular geometry is determined by the quantum mechanical behavior of the electrons. Using the valence bond approximation this can be understood by the type of bonds between the atoms that make up the molecule. When atoms interact to form a chemical bond, the atomic orbitals of each atom are said to combine in a process called orbital hybridisation. The two most common types of bonds are sigma bonds (usually formed by hybrid orbitals) and pi bonds (formed by unhybridized p orbitals for atoms of main group elements). The geometry can also be understood by molecular orbital theory where the electrons are delocalised.
An understanding of the wavelike behavior of electrons in atoms and molecules is the subject of quantum chemistry.
Isomers.
Isomers are types of molecules that share a chemical formula but have different geometries, resulting in different properties:
Types of molecular structure.
A bond angle is the geometric angle between two adjacent bonds. Some common shapes of simple molecules include:
VSEPR table.
The bond angles in the table below are ideal angles from the simple VSEPR theory (pronounced "Vesper Theory"), followed by the actual angle for the example given in the following column where this differs. For many cases, such as trigonal pyramidal and bent, the actual angle for the example differs from the ideal angle, and the size of the deviation varies between examples. For example, the angle in H2S (92°) differs from the tetrahedral angle by much more than the angle for H2O (104.48°) does.
3D representations.
The greater the number of lone pairs contained in a molecule, the smaller the angles between the atoms of that molecule. The VSEPR theory predicts that lone pairs repel each other, thus pushing the different atoms away from them.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "0 = \\begin{vmatrix}\n\n\\cos \\theta_{11} & \\cos \\theta_{12} & \\cos \\theta_{13} & \\cos \\theta_{14} \\\\\n\\cos \\theta_{21} & \\cos \\theta_{22} & \\cos \\theta_{23} & \\cos \\theta_{24} \\\\\n\\cos \\theta_{31} & \\cos \\theta_{32} & \\cos \\theta_{33} & \\cos \\theta_{34} \\\\\n\\cos \\theta_{41} & \\cos \\theta_{42} & \\cos \\theta_{43} & \\cos \\theta_{44} \\end{vmatrix}"
}
] | https://en.wikipedia.org/wiki?curid=996278 |
996315 | Downhill creep | Slow, downward progression of rock and soil down a low grade slope
Downhill creep, also known as soil creep or commonly just creep, is a type of creep characterized by the slow, downward progression of rock and soil down a low grade slope; it can also refer to slow deformation of such materials as a result of prolonged pressure and stress. Creep may appear to an observer to be continuous, but it really is the sum of numerous minute, discrete movements of slope material caused by the force of gravity. Friction, being the primary force to resist gravity, is produced when one body of material slides past another, offering a mechanical resistance between the two that acts to hold objects (or slopes) in place. As the slope of a hill increases, the component of gravitational force perpendicular to the slope decreases, which reduces the friction holding the material in place and makes the slope more likely to slide.
Overview.
Water is a very important factor when discussing soil deformation and movement. For instance, a sandcastle will only stand up when it is made with damp sand. The water offers cohesion to the sand, binding the sand particles together. However, pouring water over the sandcastle destroys it. This is because too much water fills the pores between the grains, creating a slip plane between the particles and eliminating cohesion, so that they slip and slide away. The same holds for hillsides and soil creep. The presence of water may help the hillside stay put and give it cohesion, but in a very wet environment, or during or after heavy precipitation, the pores between the grains can become saturated with water and cause the ground to slide along the slip plane it creates.
Creep can also be caused by the expansion of materials such as clay when they are exposed to water. Clay expands when wet, then contracts after drying. The expansion portion pushes downhill, then the contraction results in consolidation at the new offset.
Objects resting on top of the soil are carried by it as it descends the slope. This can be seen in churchyards, where older headstones are often situated at an angle and several meters away from where they were originally erected.
Vegetation plays a role in slope stability and creep. When a hillside is well vegetated, the plant roots create an interlocking network that can strengthen unconsolidated material. Plants also absorb excess water in the soil, helping to keep the slope stable. However, vegetation adds to the weight of the slope, giving gravity a larger driving force with which to push the slope downward. In general, though, slopes without vegetation have a greater chance of movement.
Design engineers sometimes need to guard against downhill creep during their planning to prevent building foundations from being undermined. Pilings are planted sufficiently deep into the surface material to guard against this action taking place.
Modeling regolith diffusion.
For shallow to moderate slopes, diffusional sediment flux is modeled linearly as (Culling, 1960; McKean et al., 1993)
formula_0
where formula_1 is the diffusion constant, and formula_2 is slope. For steep slopes, diffusional sediment flux is more appropriately modeled as a non-linear function of slope
formula_3
where formula_4 is the critical gradient for sliding of dry soil.
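The difference between the two flux laws can be seen by evaluating them over a range of slopes. The sketch below uses made-up values for the diffusion constant and the critical gradient (they are not from the original text) purely to show how the non-linear flux diverges as the slope approaches the critical value:

```python
# Linear vs. non-linear diffusional sediment flux (illustrative parameters only).
K_D = 0.01   # assumed diffusion constant, m^2/yr
S_C = 1.2    # assumed critical gradient for sliding of dry soil

def flux_linear(slope: float) -> float:
    return K_D * slope

def flux_nonlinear(slope: float) -> float:
    # Diverges as slope -> S_C, capturing the onset of landsliding on steep slopes.
    return K_D * slope / (1.0 - (slope / S_C) ** 2)

for s in (0.1, 0.5, 0.9, 1.1):
    print(f"S = {s:3.1f}   linear = {flux_linear(s):.5f}   non-linear = {flux_nonlinear(s):.5f}")
```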
On long timescales, diffusive creep in hillslope soils leads to a characteristic rounding of ridges in the landscape.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "q_s = k_d S \\,\\!"
},
{
"math_id": 1,
"text": "k_d\\,\\!"
},
{
"math_id": 2,
"text": "S\\,\\!"
},
{
"math_id": 3,
"text": "q_s = \\frac{k_d S}{1 - (S/S_c)^2}\\,\\!"
},
{
"math_id": 4,
"text": "S_c\\,\\!"
}
] | https://en.wikipedia.org/wiki?curid=996315 |
9965379 | Equivalent concentration | Molar concentration divided by equivalence factor
In chemistry, the equivalent concentration or normality (N) of a solution is defined as the molar concentration ci divided by an equivalence factor or n-factor "f"eq:
formula_0
<templatestyles src="Template:TOC_right/styles.css" />
Definition.
Normality is defined as the number of gram or mole equivalents of solute present in one liter of solution. The SI unit of normality is equivalents per liter (Eq/L).
formula_1
where N is normality, "m"sol is the mass of solute in grams, "EW"sol is the equivalent weight of solute, and "V"soln is the volume of the entire solution in liters.
Usage.
There are three common types of chemical reaction where normality is used as a measure of reactive species in solution: acid–base reactions, precipitation reactions, and redox reactions.
Normal concentration of an ionic solution is also related to conductivity (electrolytic) through the use of equivalent conductivity.
Medical.
Although losing favor in the medical industry, reporting of serum concentrations in units of "eq/L" (= 1 N) or "meq/L" (= 0.001 N) still occurs.
Examples.
Normality can be used for acid-base titrations. For example, sulfuric acid (H2SO4) is a diprotic acid. Since only 0.5 mol of H2SO4 are needed to neutralize 1 mol of OH−, the equivalence factor is:
"f"eq(H2SO4) = 0.5
If the concentration of a sulfuric acid solution is "c"(H2SO4) = 1 mol/L, then its normality is 2 N. It can also be called a "2 normal" solution.
Similarly, for a solution with "c"(H3PO4) = 1 mol/L, the normality is 3 N because phosphoric acid contains 3 acidic H atoms.
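The two worked examples reduce to a single division by the equivalence factor. A minimal sketch (the function name is arbitrary):

```python
def normality(molarity: float, f_eq: float) -> float:
    """N = c / f_eq, where f_eq is the equivalence factor (e.g. 0.5 for diprotic H2SO4)."""
    return molarity / f_eq

# Examples from the text: 1 mol/L sulfuric acid (2 acidic protons)
# and 1 mol/L phosphoric acid (3 acidic protons).
print(normality(1.0, 1 / 2))  # H2SO4 -> 2 N
print(normality(1.0, 1 / 3))  # H3PO4 -> 3 N
```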
Criticism of the term "normality".
Normality is an ambiguous measure of the concentration of a given reagent in solution. It needs a definition of the equivalence factor, which depends on the definition of the reaction unit (and therefore equivalents). The same solution can possess "different" normalities for "different" reactions. The definition of the equivalence factor varies depending on the type of chemical reaction that is discussed: It may refer to acids, bases, redox species, precipitating ions, or isotopes. Since a reagent solution with a definite concentration may have different normality depending on which reaction is considered, IUPAC and NIST discourage the use of the terms "normality" and "normal solution".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " N = \\frac{c_i}{f_{\\rm eq}}"
},
{
"math_id": 1,
"text": "N = \\frac{m_{\\rm sol}}{EW_{\\rm sol} \\times V_{\\rm soln}}"
}
] | https://en.wikipedia.org/wiki?curid=9965379 |
9966 | Elliptic-curve cryptography | Approach to public-key cryptography
Elliptic-curve cryptography (ECC) is an approach to public-key cryptography based on the algebraic structure of elliptic curves over finite fields. ECC allows smaller keys to provide equivalent security, compared to cryptosystems based on modular exponentiation in Galois fields, such as the RSA cryptosystem and ElGamal cryptosystem.
Elliptic curves are applicable for key agreement, digital signatures, pseudo-random generators and other tasks. Indirectly, they can be used for encryption by combining the key agreement with a symmetric encryption scheme. They are also used in several integer factorization algorithms that have applications in cryptography, such as Lenstra elliptic-curve factorization.
History.
The use of elliptic curves in cryptography was suggested independently by Neal Koblitz and Victor S. Miller in 1985. Elliptic curve cryptography algorithms entered wide use in 2004 to 2005.
In 1999, NIST recommended fifteen elliptic curves. Specifically, FIPS 186-4 has ten recommended finite fields:
The NIST recommendation thus contains a total of five prime curves and ten binary curves. The curves were chosen for optimal security and implementation efficiency.
At the RSA Conference 2005, the National Security Agency (NSA) announced Suite B, which exclusively uses ECC for digital signature generation and key exchange. The suite is intended to protect both classified and unclassified national security systems and information. National Institute of Standards and Technology (NIST) has endorsed elliptic curve cryptography in its Suite B set of recommended algorithms, specifically elliptic-curve Diffie–Hellman (ECDH) for key exchange and Elliptic Curve Digital Signature Algorithm (ECDSA) for digital signature. The NSA allows their use for protecting information classified up to top secret with 384-bit keys.
Recently, a large number of cryptographic primitives based on bilinear mappings on various elliptic curve groups, such as the Weil and Tate pairings, have been introduced. Schemes based on these primitives provide efficient identity-based encryption as well as pairing-based signatures, signcryption, key agreement, and proxy re-encryption.
Elliptic curve cryptography is used successfully in numerous popular protocols, such as Transport Layer Security and Bitcoin.
Security concerns.
In 2013, "The New York Times" stated that Dual Elliptic Curve Deterministic Random Bit Generation (or Dual_EC_DRBG) had been included as a NIST national standard due to the influence of NSA, which had included a deliberate weakness in the algorithm and the recommended elliptic curve. RSA Security in September 2013 issued an advisory recommending that its customers discontinue using any software based on Dual_EC_DRBG. In the wake of the exposure of Dual_EC_DRBG as "an NSA undercover operation", cryptography experts have also expressed concern over the security of the NIST recommended elliptic curves, suggesting a return to encryption based on non-elliptic-curve groups.
Additionally, in August 2015, the NSA announced that it plans to replace Suite B with a new cipher suite due to concerns about quantum computing attacks on ECC.
Patents.
While the RSA patent expired in 2000, there may be patents in force covering certain aspects of ECC technology, including at least one ECC scheme (ECMQV). However, RSA Laboratories and Daniel J. Bernstein have argued that the US government elliptic curve digital signature standard (ECDSA; NIST FIPS 186-3) and certain practical ECC-based key exchange schemes (including ECDH) can be implemented without infringing those patents.
Elliptic curve theory.
For the purposes of this article, an "elliptic curve" is a plane curve over a finite field (rather than the real numbers) which consists of the points satisfying the equation:
formula_2
along with a distinguished point at infinity, denoted ∞. The coordinates here are to be chosen from a fixed finite field of characteristic not equal to 2 or 3, or the curve equation would be somewhat more complicated.
This set of points, together with the group operation of elliptic curves, is an abelian group, with the point at infinity as an identity element. The structure of the group is inherited from the divisor group of the underlying algebraic variety:
formula_3
Application to cryptography.
Public-key cryptography is based on the intractability of certain mathematical problems. Early public-key systems, such as RSA's 1983 patent, based their security on the assumption that it is difficult to factor a large integer composed of two or more large prime factors which are far apart. For later elliptic-curve-based protocols, the base assumption is that finding the discrete logarithm of a random elliptic curve element with respect to a publicly known base point is infeasible (the computational Diffie–Hellman assumption): this is the "elliptic curve discrete logarithm problem" (ECDLP). The security of elliptic curve cryptography depends on the ability to compute a point multiplication and the inability to compute the multiplicand given the original point and product point. The size of the elliptic curve, measured by the total number of discrete integer pairs satisfying the curve equation, determines the difficulty of the problem.
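Point multiplication and the one-way nature of the ECDLP can be illustrated on a toy curve. The sketch below uses a deliberately tiny prime field and an arbitrary curve, nothing like the standardized curves, so it has no security value; it only shows the group law and double-and-add scalar multiplication:

```python
# Toy example of elliptic-curve point multiplication over a small prime field.
# Curve: y^2 = x^3 + ax + b over F_p; all parameters here are illustrative only.
P_MOD, A, B = 97, 2, 3

def point_add(p1, p2):
    """Add two points on the curve (None represents the point at infinity)."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None  # p1 == -p2
    if p1 == p2:
        slope = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        slope = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (slope * slope - x1 - x2) % P_MOD
    y3 = (slope * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def scalar_mult(k, point):
    """Double-and-add: computes k*point; recovering k from the result is the ECDLP."""
    result = None
    while k:
        if k & 1:
            result = point_add(result, point)
        point = point_add(point, point)
        k >>= 1
    return result

G = (3, 6)  # a point on y^2 = x^3 + 2x + 3 mod 97: 6^2 = 36 and 27 + 6 + 3 = 36
print(scalar_mult(20, G))
```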
The primary benefit promised by elliptic curve cryptography over alternatives such as RSA is a smaller key size, reducing storage and transmission requirements. For example, a 256-bit elliptic curve public key should provide comparable security to a 3072-bit RSA public key.
Cryptographic schemes.
Several discrete logarithm-based protocols have been adapted to elliptic curves, replacing the group formula_4 with an elliptic curve:
Implementation.
Some common implementation considerations include:
Domain parameters.
To use ECC, all parties must agree on all the elements defining the elliptic curve, that is, the "domain parameters" of the scheme. The size of the field used is typically either prime (and denoted as p) or is a power of two (formula_5); the latter case is called "the binary case", and this case necessitates the choice of an auxiliary curve denoted by "f". Thus the field is defined by "p" in the prime case and the pair of "m" and "f" in the binary case. The elliptic curve is defined by the constants "a" and "b" used in its defining equation. Finally, the cyclic subgroup is defined by its generator (a.k.a. "base point") "G". For cryptographic application, the order of "G", that is the smallest positive number "n" such that formula_6 (the point at infinity of the curve, and the identity element), is normally prime. Since "n" is the size of a subgroup of formula_7 it follows from Lagrange's theorem that the number formula_8 is an integer. In cryptographic applications, this number "h", called the "cofactor", must be small (formula_9) and, preferably, formula_10. To summarize: in the prime case, the domain parameters are formula_11; in the binary case, they are formula_12.
Unless there is an assurance that domain parameters were generated by a party trusted with respect to their use, the domain parameters "must" be validated before use.
The generation of domain parameters is not usually done by each participant because this involves computing the number of points on a curve which is time-consuming and troublesome to implement. As a result, several standard bodies published domain parameters of elliptic curves for several common field sizes. Such domain parameters are commonly known as "standard curves" or "named curves"; a named curve can be referenced either by name or by the unique object identifier defined in the standard documents:
SECG test vectors are also available. NIST has approved many SECG curves, so there is a significant overlap between the specifications published by NIST and SECG. EC domain parameters may be specified either by value or by name.
If, despite the preceding admonition, one decides to construct one's own domain parameters, one should select the underlying field and then use one of the following methods to find a curve with an appropriate (i.e., near prime) number of points:
Several classes of curves are weak and should be avoided:
Key sizes.
Because all the fastest known algorithms that allow one to solve the ECDLP (baby-step giant-step, Pollard's rho, etc.), need formula_19 steps, it follows that the size of the underlying field should be roughly twice the security parameter. For example, for 128-bit security one needs a curve over formula_18, where formula_20. This can be contrasted with finite-field cryptography (e.g., DSA) which requires 3072-bit public keys and 256-bit private keys, and integer factorization cryptography (e.g., RSA) which requires a 3072-bit value of "n", where the private key should be just as large. However, the public key may be smaller to accommodate efficient encryption, especially when processing power is limited.
The hardest ECC scheme (publicly) broken to date had a 112-bit key for the prime field case and a 109-bit key for the binary field case. For the prime field case, this was broken in July 2009 using a cluster of over 200 PlayStation 3 game consoles and could have been finished in 3.5 months using this cluster when running continuously. The binary field case was broken in April 2004 using 2600 computers over 17 months.
A current project is aiming at breaking the ECC2K-130 challenge by Certicom, by using a wide range of different hardware: CPUs, GPUs, FPGA.
Projective coordinates.
A close examination of the addition rules shows that in order to add two points, one needs not only several additions and multiplications in formula_18 but also an inversion operation. The inversion (for given formula_21 find formula_22 such that formula_23) is one to two orders of magnitude slower than multiplication. However, points on a curve can be represented in different coordinate systems which do not require an inversion operation to add two points. Several such systems were proposed: in the "projective" system each point is represented by three coordinates formula_24 using the following relation: formula_25, formula_26; in the "Jacobian system" a point is also represented with three coordinates formula_24, but a different relation is used: formula_27, formula_28; in the "López–Dahab system" the relation is formula_25, formula_29; in the "modified Jacobian" system the same relations are used but four coordinates are stored and used for calculations formula_30; and in the "Chudnovsky Jacobian" system five coordinates are used formula_31. Note that there may be different naming conventions, for example, IEEE P1363-2000 standard uses "projective coordinates" to refer to what is commonly called Jacobian coordinates. An additional speed-up is possible if mixed coordinates are used.
Fast reduction (NIST curves).
Reduction modulo "p" (which is needed for addition and multiplication) can be executed much faster if the prime "p" is a pseudo-Mersenne prime, that is formula_32; for example, formula_33 or formula_34 Compared to Barrett reduction, there can be an order of magnitude speed-up. The speed-up here is a practical rather than theoretical one, and derives from the fact that the moduli of numbers against numbers near powers of two can be performed efficiently by computers operating on binary numbers with bitwise operations.
The curves over formula_0 with pseudo-Mersenne "p" are recommended by NIST. Yet another advantage of the NIST curves is that they use "a" = −3, which improves addition in Jacobian coordinates.
According to Bernstein and Lange, many of the efficiency-related decisions in NIST FIPS 186-2 are suboptimal. Other curves are more secure and run just as fast.
Security.
Side-channel attacks.
Unlike most other DLP systems (where it is possible to use the same procedure for squaring and multiplication), the EC addition is significantly different for doubling ("P" = "Q") and general addition ("P" ≠ "Q") depending on the coordinate system used. Consequently, it is important to counteract side-channel attacks (e.g., timing or simple/differential power analysis attacks) using, for example, fixed pattern window (a.k.a. comb) methods (note that this does not increase computation time). Alternatively one can use an Edwards curve; this is a special family of elliptic curves for which doubling and addition can be done with the same operation. Another concern for ECC-systems is the danger of fault attacks, especially when running on smart cards.
Backdoors.
Cryptographic experts have expressed concerns that the National Security Agency has inserted a kleptographic backdoor into at least one elliptic curve-based pseudo random generator. Internal memos leaked by former NSA contractor Edward Snowden suggest that the NSA put a backdoor in the Dual EC DRBG standard. One analysis of the possible backdoor concluded that an adversary in possession of the algorithm's secret key could obtain encryption keys given only 32 bytes of PRNG output.
The SafeCurves project has been launched in order to catalog curves that are easy to implement securely and are designed in a fully publicly verifiable way to minimize the chance of a backdoor.
Quantum computing attack.
Shor's algorithm can be used to break elliptic curve cryptography by computing discrete logarithms on a hypothetical quantum computer. The latest quantum resource estimates for breaking a curve with a 256-bit modulus (128-bit security level) are 2330 qubits and 126 billion Toffoli gates. For the binary elliptic curve case, 906 qubits are necessary (to break 128 bits of security). In comparison, using Shor's algorithm to break the RSA algorithm requires 4098 qubits and 5.2 trillion Toffoli gates for a 2048-bit RSA key, suggesting that ECC is an easier target for quantum computers than RSA. All of these figures vastly exceed any quantum computer that has ever been built, and estimates place the creation of such computers at a decade or more away.
Supersingular Isogeny Diffie–Hellman Key Exchange claimed to provide a post-quantum secure form of elliptic curve cryptography by using isogenies to implement Diffie–Hellman key exchanges. This key exchange uses much of the same field arithmetic as existing elliptic curve cryptography and requires computational and transmission overhead similar to many currently used public key systems. However, new classical attacks undermined the security of this protocol.
In August 2015, the NSA announced that it planned to transition "in the not distant future" to a new cipher suite that is resistant to quantum attacks. "Unfortunately, the growth of elliptic curve use has bumped up against the fact of continued progress in the research on quantum computing, necessitating a re-evaluation of our cryptographic strategy."
Invalid curve attack.
When ECC is used in virtual machines, an attacker may use an invalid curve to get a complete ECDH private key.
Alternative representations.
Alternative representations of elliptic curves include:
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{F}_p"
},
{
"math_id": 1,
"text": "\\mathbb{F}_{2^m}"
},
{
"math_id": 2,
"text": "y^2 = x^3 + ax + b, \\, "
},
{
"math_id": 3,
"text": "\\mathrm{Div}^0 (E) \\to \\mathrm{Pic}^0 (E) \\simeq E, \\, "
},
{
"math_id": 4,
"text": "(\\mathbb{Z}_{p})^\\times"
},
{
"math_id": 5,
"text": "2^m"
},
{
"math_id": 6,
"text": "n G = \\mathcal{O}"
},
{
"math_id": 7,
"text": "E(\\mathbb{F}_p)"
},
{
"math_id": 8,
"text": "h = \\frac{1}{n}|E(\\mathbb{F}_p)|"
},
{
"math_id": 9,
"text": "h \\le 4"
},
{
"math_id": 10,
"text": "h = 1"
},
{
"math_id": 11,
"text": "(p,a,b,G,n,h)"
},
{
"math_id": 12,
"text": "(m,f,a,b,G,n,h)"
},
{
"math_id": 13,
"text": "p^B-1"
},
{
"math_id": 14,
"text": "2"
},
{
"math_id": 15,
"text": "\\mathbb{F}_{p^B}"
},
{
"math_id": 16,
"text": "E(\\mathbb{F}_q)"
},
{
"math_id": 17,
"text": "|E(\\mathbb{F}_q)| = q"
},
{
"math_id": 18,
"text": "\\mathbb{F}_q"
},
{
"math_id": 19,
"text": "O(\\sqrt{n})"
},
{
"math_id": 20,
"text": "q \\approx 2^{256}"
},
{
"math_id": 21,
"text": "x \\in \\mathbb{F}_q"
},
{
"math_id": 22,
"text": "y \\in \\mathbb{F}_q"
},
{
"math_id": 23,
"text": "x y = 1"
},
{
"math_id": 24,
"text": "(X,Y,Z)"
},
{
"math_id": 25,
"text": "x = \\frac{X}{Z}"
},
{
"math_id": 26,
"text": "y = \\frac{Y}{Z}"
},
{
"math_id": 27,
"text": "x = \\frac{X}{Z^2}"
},
{
"math_id": 28,
"text": "y = \\frac{Y}{Z^3}"
},
{
"math_id": 29,
"text": "y = \\frac{Y}{Z^2}"
},
{
"math_id": 30,
"text": "(X,Y,Z,aZ^4)"
},
{
"math_id": 31,
"text": "(X,Y,Z,Z^2,Z^3)"
},
{
"math_id": 32,
"text": "p \\approx 2^d"
},
{
"math_id": 33,
"text": "p = 2^{521} - 1"
},
{
"math_id": 34,
"text": "p = 2^{256} - 2^{32} - 2^9 - 2^8 - 2^7 - 2^6 - 2^4 - 1."
}
] | https://en.wikipedia.org/wiki?curid=9966 |
9966817 | Modes of convergence | Property of a sequence or series
In mathematics, there are many senses in which a sequence or a series is said to be convergent. This article describes various modes (senses or species) of convergence in the settings where they are defined. For a list of modes of convergence, see Modes of convergence (annotated index).
Each of the following objects is a special case of the types preceding it: sets, topological spaces, uniform spaces, TAGs (topological abelian groups), normed spaces, Euclidean spaces, and the real/complex numbers. Also, any metric space is a uniform space.
Elements of a topological space.
Convergence can be defined in terms of sequences in first-countable spaces. Nets are a generalization of sequences that are useful in spaces which are not first countable. Filters further generalize the concept of convergence.
In metric spaces, one can define Cauchy sequences. Cauchy nets and filters are generalizations to uniform spaces. Even more generally, Cauchy spaces are spaces in which Cauchy filters may be defined. Convergence implies "Cauchy-convergence", and Cauchy-convergence, together with the existence of a convergent subsequence implies convergence. The concept of completeness of metric spaces, and its generalizations is defined in terms of Cauchy sequences.
Series of elements in a topological abelian group.
In a topological abelian group, convergence of a series is defined as convergence of the sequence of partial sums. An important concept when considering series is unconditional convergence, which guarantees that the limit of the series is invariant under permutations of the summands.
In a normed vector space, one can define absolute convergence as convergence of the series of norms (formula_0). Absolute convergence implies Cauchy convergence of the sequence of partial sums (by the triangle inequality), which in turn implies absolute-convergence of some grouping (not reordering). The sequence of partial sums obtained by grouping is a subsequence of the partial sums of the original series. The norm convergence of absolutely convergent series is an equivalent condition for a normed linear space to be Banach (i.e.: complete).
Absolute convergence and convergence together imply unconditional convergence, but unconditional convergence does not imply absolute convergence in general, even if the space is Banach, although the implication holds in formula_1.
Convergence of sequence of functions on a topological space.
The most basic type of convergence for a sequence of functions (in particular, it does not assume any topological structure on the domain of the functions) is pointwise convergence. It is defined as convergence of the sequence of values of the functions at every point. If the functions take their values in a uniform space, then one can define pointwise Cauchy convergence, uniform convergence, and uniform Cauchy convergence of the sequence.
Pointwise convergence implies pointwise Cauchy-convergence, and the converse holds if the space in which the functions take their values is complete. Uniform convergence implies pointwise convergence and uniform Cauchy convergence. Uniform Cauchy convergence and pointwise convergence of a subsequence imply uniform convergence of the sequence, and if the codomain is complete, then uniform Cauchy convergence implies uniform convergence.
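The gap between pointwise and uniform convergence can be made concrete with the standard example "f""n"("x") = "x""n" on [0, 1), which converges pointwise to 0 but not uniformly. A small numerical sketch (not from the original article):

```python
# f_n(x) = x**n on [0, 1): the pointwise limit is 0 at every fixed x, yet the
# supremum of |f_n(x)| over [0, 1) equals 1 for every n, so convergence is not uniform.
for n in (1, 10, 100, 1000):
    at_fixed_point = 0.5 ** n                  # -> 0 as n grows (pointwise convergence)
    near_one = (1.0 - 1.0 / (2 * n)) ** n      # stays around exp(-1/2) ~ 0.61
    print(f"n={n:5d}  f_n(0.5)={at_fixed_point:.3e}  f_n(1 - 1/(2n))={near_one:.3f}")
```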
If the domain of the functions is a topological space, local uniform convergence (i.e. uniform convergence on a neighborhood of each point) and compact (uniform) convergence (i.e. uniform convergence on all compact subsets) may be defined. "Compact convergence" is always short for "compact uniform convergence," since "compact pointwise convergence" would mean the same thing as "pointwise convergence" (points are always compact).
Uniform convergence implies both local uniform convergence and compact convergence, since both are local notions while uniform convergence is global. If "X" is locally compact (even in the weakest sense: every point has compact neighborhood), then local uniform convergence is equivalent to compact (uniform) convergence. Roughly speaking, this is because "local" and "compact" connote the same thing.
Series of functions on a topological abelian group.
Pointwise and uniform convergence of series of functions are defined in terms of convergence of the sequence of partial sums.
For functions taking values in a normed linear space, absolute convergence refers to convergence of the series of positive, real-valued functions formula_2 . "Pointwise absolute convergence" is then simply pointwise convergence of formula_2.
Normal convergence is convergence of the series of non-negative real numbers obtained by taking the uniform (i.e. "sup") norm of each function in the series (uniform convergence of formula_2). In Banach spaces, pointwise absolute convergence implies pointwise convergence, and normal convergence implies uniform convergence.
For functions defined on a topological space, one can define (as above) local uniform convergence and compact (uniform) convergence in terms of the partial sums of the series. If, in addition, the functions take values in a normed linear space, then local normal convergence (local, uniform, absolute convergence) and compact normal convergence (absolute convergence on compact sets) can be defined.
Normal convergence implies both local normal convergence and compact normal convergence. And if the domain is locally compact (even in the weakest sense), then local normal convergence implies compact normal convergence.
Functions defined on a measure space.
If one considers sequences of measurable functions, then several modes of convergence that depend on measure-theoretic, rather than solely topological properties, arise. This includes pointwise convergence almost-everywhere, convergence in "p"-mean and convergence in measure. These are of particular interest in probability theory. | [
{
"math_id": 0,
"text": "\\Sigma|b_k|"
},
{
"math_id": 1,
"text": "\\mathbb{R}^d"
},
{
"math_id": 2,
"text": "\\Sigma|g_k|"
}
] | https://en.wikipedia.org/wiki?curid=9966817 |
996726 | Parabolic trajectory | Type of orbit
In astrodynamics or celestial mechanics a parabolic trajectory is a Kepler orbit with the eccentricity equal to 1 and is an unbound orbit that is exactly on the border between elliptical and hyperbolic. When moving away from the source it is called an escape orbit, otherwise a capture orbit. It is also sometimes referred to as a C3 = 0 orbit (see Characteristic energy).
Under standard assumptions a body traveling along an escape orbit will coast along a parabolic trajectory to infinity, with velocity relative to the central body tending to zero, and therefore will never return. Parabolic trajectories are minimum-energy escape trajectories, separating positive-energy hyperbolic trajectories from negative-energy elliptic orbits.
Velocity.
The orbital velocity (formula_0) of a body travelling along a parabolic trajectory can be computed as:
formula_1
where: formula_2 is the radial distance of the orbiting body from the central body, and formula_3 is the standard gravitational parameter.
At any position the orbiting body has the escape velocity for that position.
If a body has an escape velocity with respect to the Earth, this is not enough to escape the Solar System, so near the Earth the orbit resembles a parabola, but further away it bends into an elliptical orbit around the Sun.
This velocity (formula_0) is closely related to the orbital velocity of a body in a circular orbit of the radius equal to the radial position of orbiting body on the parabolic trajectory:
formula_4
where: formula_5 is the orbital velocity of a body in a circular orbit of radius formula_2.
Equation of motion.
For a body moving along this kind of trajectory the orbital equation is:
formula_6
where: formula_7 is the radial distance of the orbiting body from the central body, formula_8 is the specific angular momentum of the orbiting body, formula_9 is the true anomaly, and formula_10 is the standard gravitational parameter.
Energy.
Under standard assumptions, the specific orbital energy (formula_11) of a parabolic trajectory is zero, so the orbital energy conservation equation for this trajectory takes the form:
formula_12
where: formula_13 is the orbital velocity of the orbiting body, formula_2 is the radial distance of the orbiting body from the central body, and formula_3 is the standard gravitational parameter.
This is entirely equivalent to the characteristic energy (square of the speed at infinity) being 0:
formula_14
Barker's equation.
Barker's equation relates the time of flight formula_15 to the true anomaly formula_16 of a parabolic trajectory:
formula_17
where: formula_18, formula_19 is the time of periapsis passage, formula_20 is the semi-latus rectum of the trajectory (formula_21), and formula_3 is the standard gravitational parameter.
More generally, the time between any two points on an orbit is
formula_22
Alternately, the equation can be expressed in terms of periapsis distance, in a parabolic orbit formula_23:
formula_24
Unlike Kepler's equation, which is used to solve for true anomalies in elliptical and hyperbolic trajectories, Barker's equation can be solved directly for the true anomaly as a function of formula_15. If the following substitutions are made
formula_25
then
formula_26
With hyperbolic functions the solution can be also expressed as:
formula_27
where
formula_28
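Using the substitutions above, the true anomaly can be computed directly from the time of flight, and Barker's equation can be applied in the other direction as a consistency check. A sketch (the μ and periapsis distance below are arbitrary illustrative values; any consistent units work):

```python
import math

def true_anomaly_parabolic(t_minus_T: float, mu: float, r_p: float) -> float:
    """Closed-form solution of Barker's equation for the true anomaly (radians)."""
    A = 1.5 * math.sqrt(mu / (2.0 * r_p**3)) * t_minus_T
    B = (A + math.sqrt(A * A + 1.0)) ** (1.0 / 3.0)
    return 2.0 * math.atan(B - 1.0 / B)

def time_from_periapsis(nu: float, mu: float, r_p: float) -> float:
    """Barker's equation: time of flight from periapsis for true anomaly nu (radians)."""
    D = math.tan(nu / 2.0)
    return math.sqrt(2.0 * r_p**3 / mu) * (D + D**3 / 3.0)

# Round-trip check with illustrative values (a parabolic trajectory past Earth).
mu, r_p = 398600.4418, 7000.0             # km^3/s^2 and km (assumed example values)
t = 3600.0                                # one hour past periapsis
nu = true_anomaly_parabolic(t, mu, r_p)
print(math.degrees(nu), time_from_periapsis(nu, mu, r_p))  # second value ~3600
```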
Radial parabolic trajectory.
A radial parabolic trajectory is a non-periodic trajectory on a straight line where the relative velocity of the two objects is always the escape velocity. There are two cases: the bodies move away from each other or towards each other.
There is a rather simple expression for the position as function of time:
formula_29
where formula_30 corresponds to the instant at which the two bodies would coincide (formula_2 = 0), and formula_3 is the standard gravitational parameter.
At any time the average speed from formula_30 is 1.5 times the current speed, i.e. 1.5 times the local escape velocity.
To have formula_30 at the surface, apply a time shift; for the Earth (and any other spherically symmetric body with the same average density) as central body this time shift is 6 minutes and 20 seconds; seven of these periods later the height above the surface is three times the radius, etc.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "v"
},
{
"math_id": 1,
"text": "v = \\sqrt{2\\mu \\over r}"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "\\mu"
},
{
"math_id": 4,
"text": "v = \\sqrt{2}\\, v_o"
},
{
"math_id": 5,
"text": "v_o"
},
{
"math_id": 6,
"text": "r = {h^2 \\over \\mu}{1 \\over {1 + \\cos\\nu}}"
},
{
"math_id": 7,
"text": "r\\,"
},
{
"math_id": 8,
"text": "h\\,"
},
{
"math_id": 9,
"text": "\\nu\\,"
},
{
"math_id": 10,
"text": "\\mu\\,"
},
{
"math_id": 11,
"text": "\\epsilon"
},
{
"math_id": 12,
"text": "\\epsilon = {v^2 \\over 2} - {\\mu \\over r} = 0"
},
{
"math_id": 13,
"text": "v\\,"
},
{
"math_id": 14,
"text": "C_3 = 0"
},
{
"math_id": 15,
"text": "t"
},
{
"math_id": 16,
"text": "\\nu"
},
{
"math_id": 17,
"text": "t - T = \\frac{1}{2} \\sqrt{\\frac{p^3}{\\mu}} \\left(D + \\frac{1}{3} D^3 \\right)"
},
{
"math_id": 18,
"text": "D = \\tan \\frac{\\nu}{2}"
},
{
"math_id": 19,
"text": "T"
},
{
"math_id": 20,
"text": "p"
},
{
"math_id": 21,
"text": "p = h^2/\\mu"
},
{
"math_id": 22,
"text": "\n t_f - t_0 = \\frac{1}{2} \\sqrt{\\frac{p^3}{\\mu}} \\left(D_f + \\frac{1}{3} D_f^3 - D_0 - \\frac{1}{3} D_0^3\\right)\n"
},
{
"math_id": 23,
"text": "r_p = p/2"
},
{
"math_id": 24,
"text": "t - T = \\sqrt{\\frac{2 r_p^3}{\\mu}} \\left(D + \\frac{1}{3} D^3\\right)"
},
{
"math_id": 25,
"text": "\\begin{align}\n A &= \\frac{3}{2} \\sqrt{\\frac{\\mu}{2r_p^3}} (t - T) \\\\[3pt]\n B &= \\sqrt[3]{A + \\sqrt{A^{2}+1}}\n\\end{align}"
},
{
"math_id": 26,
"text": "\\nu = 2\\arctan\\left(B - \\frac{1}{B}\\right)"
},
{
"math_id": 27,
"text": "\\nu = 2\\arctan\\left(2\\sinh\\frac{\\mathrm{arcsinh} \\frac{3M}{2}}{3}\\right)"
},
{
"math_id": 28,
"text": " M = \\sqrt{\\frac{\\mu}{2r_p^3}} (t - T)"
},
{
"math_id": 29,
"text": "r = \\sqrt[3]{\\frac{9}{2} \\mu t^2}"
},
{
"math_id": 30,
"text": "t = 0\\!\\,"
}
] | https://en.wikipedia.org/wiki?curid=996726 |
996808 | Standard gravitational parameter | Concept in celestial mechanics
In celestial mechanics, the standard gravitational parameter "μ" of a celestial body is the product of the gravitational constant "G" and the total mass "M" of the bodies. For two bodies, the parameter may be expressed as "G"("m"1 + "m"2), or as "GM" when one body is much larger than the other:
formula_0
For several objects in the Solar System, the value of "μ" is known to greater accuracy than either "G" or "M". The SI unit of the standard gravitational parameter is m3⋅s−2. However, the unit km3⋅s−2 is frequently used in the scientific literature and in spacecraft navigation.
Definition.
Small body orbiting a central body.
The central body in an orbital system can be defined as the one whose mass ("M") is much larger than the mass of the orbiting body ("m"), or "M" ≫ "m". This approximation is standard for planets orbiting the Sun or most moons and greatly simplifies equations. Under Newton's law of universal gravitation, if the distance between the bodies is "r", the force exerted on the smaller body is:
formula_1
Thus only the product of "G" and "M" is needed to predict the motion of the smaller body. Conversely, measurements of the smaller body's orbit only provide information on the product, "μ", not "G" and "M" separately. The gravitational constant, "G", is difficult to measure with high accuracy, while orbits, at least in the solar system, can be measured with great precision and used to determine "μ" with similar precision.
For a circular orbit around a central body, where the centripetal force provided by gravity is "F" = "mv"2"r"−1:
formula_2
where "r" is the orbit radius, "v" is the orbital speed, "ω" is the angular speed, and "T" is the orbital period.
This can be generalized for elliptic orbits:
formula_3
where "a" is the semi-major axis, which is Kepler's third law.
For parabolic trajectories "rv"2 is constant and equal to 2"μ". For elliptic and hyperbolic orbits the magnitude of "μ" equals twice the magnitude of "a" times the magnitude of "ε", where "a" is the semi-major axis and "ε" is the specific orbital energy.
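Kepler's third law in this form lets "μ" of a central body be estimated from any one orbit around it. As an illustration (a sketch with rounded standard input values; the Earth's own mass and planetary perturbations are neglected), the heliocentric gravitational parameter follows from the Earth's semi-major axis and sidereal year:

```python
import math

# Estimate GM_sun from Earth's orbit using mu = 4*pi^2*a^3 / T^2
# (two-body approximation; rounded standard values for a and T).
a = 1.495978707e11          # semi-major axis of Earth's orbit, m
T = 365.25636 * 86400.0     # sidereal year, s

mu_sun = 4.0 * math.pi**2 * a**3 / T**2
print(f"GM_sun ~ {mu_sun:.4e} m^3/s^2")   # ~1.33e20 m^3/s^2
```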
General case.
In the more general case where the bodies need not be a large one and a small one, e.g. a binary star system, we define: the vector r as the position of one body relative to the other, with "r", "v", and, in the case of an elliptic orbit, the semi-major axis "a", defined accordingly (so that "r" is the distance between the two bodies); and "μ" = "G"("m"1 + "m"2) = "μ"1 + "μ"2, where "m"1 and "m"2 are the masses of the two bodies.
Then the relations given above hold with this combined "μ"; in particular, for circular orbits "rv"2 = "r"3"ω"2 = 4π2"r"3/"T"2 = "μ", and for elliptic orbits 4π2"a"3/"T"2 = "μ".
In a pendulum.
The standard gravitational parameter can be determined using a pendulum oscillating above the surface of a body as:
formula_4
where "r" is the radius of the gravitating body, "L" is the length of the pendulum, and "T" is the period of the pendulum (for the reason of the approximation see Pendulum in mechanics).
Solar system.
Geocentric gravitational constant.
"G"ME, the gravitational parameter for the Earth as the central body, is called the geocentric gravitational constant. It equals .
The value of this constant became important with the beginning of spaceflight in the 1950s, and great effort was expended to determine it as accurately as possible during the 1960s. Sagitov (1969) cites a range of values reported from 1960s high-precision measurements, with a relative uncertainty of the order of 10−6.
During the 1970s to 1980s, the increasing number of artificial satellites in Earth orbit further facilitated high-precision measurements,
and the relative uncertainty was decreased by another three orders of magnitude, to about 2×10−9 (1 in 500 million) as of 1992.
Measurement involves observations of the distances from the satellite to Earth stations at different times, which can be obtained to high accuracy using radar or laser ranging.
Heliocentric gravitational constant.
"G"M☉, the gravitational parameter for the Sun as the central body,
is called the heliocentric gravitational constant or "geopotential of the Sun" and equals approximately 1.32712440018×1020 m3⋅s−2.
The relative uncertainty in "G"M☉, cited at below 10−10 as of 2015, is smaller than the uncertainty in "G"ME
because "G"M☉ is derived from the ranging of interplanetary probes, and the absolute error of the distance measures to them is about the same as the earth satellite ranging measures, while the absolute distances involved are much bigger.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mu=G(M+m)\\approx GM ."
},
{
"math_id": 1,
"text": "F = \\frac{G M m}{r^2} = \\frac{\\mu m}{r^2}"
},
{
"math_id": 2,
"text": "\\mu = rv^2 = r^3\\omega^2 = \\frac{4\\pi^2r^3}{T^2} ,"
},
{
"math_id": 3,
"text": "\\mu = \\frac{4\\pi^2a^3}{T^2} ,"
},
{
"math_id": 4,
"text": "\\mu \\approx \\frac{4 \\pi^2 r^2 L}{T^2} "
}
] | https://en.wikipedia.org/wiki?curid=996808 |
996828 | Orbiting body | In astrodynamics, an orbiting body is any physical body that orbits a more massive one, called the primary body. The orbiting body is properly referred to as the secondary body (formula_0), which is less massive than the primary body (formula_1).
Thus, formula_2 or formula_3.
Under standard assumptions in astrodynamics, the barycenter of the two bodies is a focus of both orbits.
An orbiting body may be a spacecraft (i.e. an artificial satellite) or a natural satellite, such as a planet, dwarf planet, moon, moonlet, asteroid, or comet.
A system of two orbiting bodies is modeled by the Two-Body Problem and a system of three orbiting bodies is modeled by the Three-Body Problem. These problems can be generalized to an N-body problem. While there are a few analytical solutions to the n-body problem, it can be reduced to a 2-body system if the secondary body stays out of other bodies' Sphere of Influence and remains in the primary body's sphere of influence.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "m_2"
},
{
"math_id": 1,
"text": "m_1"
},
{
"math_id": 2,
"text": "m_2 < m_1"
},
{
"math_id": 3,
"text": "m_1 > m_2"
}
] | https://en.wikipedia.org/wiki?curid=996828 |
996910 | Hyperbolic trajectory | Concept in astrodynamics
In astrodynamics or celestial mechanics, a hyperbolic trajectory or hyperbolic orbit is the trajectory of any object around a central body with more than enough speed to escape the central object's gravitational pull. The name derives from the fact that according to Newtonian theory such an orbit has the shape of a hyperbola. In more technical terms this can be expressed by the condition that the orbital eccentricity is greater than one.
Under simplistic assumptions a body traveling along this trajectory will coast towards infinity, settling to a final excess velocity relative to the central body. Similarly to parabolic trajectories, all hyperbolic trajectories are also escape trajectories. The specific energy of a hyperbolic trajectory orbit is positive.
Planetary flybys, used for gravitational slingshots, can be described within the planet's sphere of influence using hyperbolic trajectories.
Parameters describing a hyperbolic trajectory.
Like an elliptical orbit, a hyperbolic trajectory for a given system can be defined (ignoring orientation) by its semi major axis and the eccentricity. However, with a hyperbolic orbit other parameters may be more useful in understanding a body's motion. The following table lists the main parameters describing the path of a body following a hyperbolic trajectory around another under standard assumptions and the formula connecting them.
Semi-major axis, energy and hyperbolic excess velocity.
The semi major axis (formula_1) is not immediately visible with a hyperbolic trajectory but can be constructed as it is the distance from periapsis to the point where the two asymptotes cross. Usually, by convention, it is negative, to keep various equations consistent with elliptical orbits.
The semi major axis is directly linked to the specific orbital energy (formula_2) or characteristic energy formula_3 of the orbit, and to the velocity the body attains as the distance tends to infinity, the hyperbolic excess velocity (formula_4).
formula_5 or formula_6
where: formula_7 is the standard gravitational parameter and formula_3 is characteristic energy, commonly used in planning interplanetary missions
Note that the total energy is positive in the case of a hyperbolic trajectory (whereas it is negative for an elliptical orbit).
Eccentricity and angle between approach and departure.
With a hyperbolic trajectory the orbital eccentricity (formula_8) is greater than 1. The eccentricity is directly related to the angle between the asymptotes. With eccentricity just over 1 the hyperbola is a sharp "v" shape. At formula_9 the asymptotes are at right angles. With formula_10 the asymptotes are more than 120° apart, and the periapsis distance is greater than the semi major axis. As eccentricity increases further the motion approaches a straight line.
The angle between the direction of periapsis and an asymptote from the central body is the true anomaly as distance tends to infinity (formula_11), so formula_12 is the external angle between approach and departure directions (between asymptotes). Then
formula_13 or formula_14
Impact parameter and the distance of closest approach.
The impact parameter is the distance by which a body, if it continued on an unperturbed path, would miss the central body at its closest approach. With bodies experiencing gravitational forces and following hyperbolic trajectories it is equal to the semi-minor axis of the hyperbola.
In the situation of a spacecraft or comet approaching a planet, the impact parameter and excess velocity will be known accurately. If the central body is known the trajectory can now be found, including how close the approaching body will be at periapsis. If this is less than planet's radius an impact should be expected. The distance of closest approach, or periapsis distance, is given by:
formula_15
So if a comet approaching Earth (effective radius ~6400 km) with a velocity of 12.5 km/s (the approximate minimum approach speed of a body coming from the outer Solar System) is to avoid a collision with Earth, the impact parameter will need to be at least 8600 km, or 34% more than the Earth's radius. A body approaching Jupiter (radius 70000 km) from the outer Solar System with a speed of 5.5 km/s, will need the impact parameter to be at least 770,000 km or 11 times Jupiter radius to avoid collision.
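The Earth example above can be checked directly from the periapsis-distance formula; with the quoted impact parameter of 8600 km and excess speed of 12.5 km/s, the closest approach comes out at roughly one Earth radius. A sketch using the standard geocentric gravitational parameter:

```python
import math

def periapsis_distance(mu: float, v_inf: float, b: float) -> float:
    """Closest approach for a hyperbolic flyby with impact parameter b and excess speed v_inf."""
    return mu / v_inf**2 * (math.sqrt(1.0 + (b * v_inf**2 / mu)**2) - 1.0)

MU_EARTH = 398600.4418          # km^3/s^2, geocentric gravitational parameter
v_inf = 12.5                    # km/s, approach speed quoted in the text
b = 8600.0                      # km, impact parameter quoted in the text

# Should come out close to Earth's effective radius of ~6400 km, i.e. the quoted
# impact parameter is roughly the collision threshold.
print(f"periapsis ~ {periapsis_distance(MU_EARTH, v_inf, b):.0f} km")
```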
If the mass of the central body is not known, its standard gravitational parameter, and hence its mass, can be determined by the deflection of the smaller body together with the impact parameter and approach speed. Because typically all these variables can be determined accurately, a spacecraft flyby will provide a good estimate of a body's mass.
formula_16 where formula_17 is the angle the smaller body is deflected from a straight line in its course.
Equations of motion.
Position.
In a hyperbolic trajectory the true anomaly formula_18 is linked to the distance between the orbiting bodies (formula_19) by the orbit equation:
formula_20
The relation between the true anomaly θ and the eccentric anomaly "E" (alternatively the hyperbolic anomaly "H") is:
formula_21 or formula_22 or formula_23
The eccentric anomaly "E" is related to the mean anomaly "M" by Kepler's equation:
formula_24
The mean anomaly is proportional to time
formula_25 where "μ" is a gravitational parameter and "a" is the semi-major axis of the orbit.
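Unlike Barker's equation for the parabolic case, Kepler's equation for a hyperbolic trajectory has no closed-form solution for "E", so it is usually solved numerically, for example by Newton's method. A minimal sketch (the eccentricity and mean anomaly values are arbitrary):

```python
import math

def hyperbolic_anomaly(M: float, e: float, tol: float = 1e-12) -> float:
    """Solve M = e*sinh(E) - E for the hyperbolic anomaly E by Newton's method."""
    E = math.asinh(M / e)                    # reasonable starting guess
    for _ in range(50):
        f = e * math.sinh(E) - E - M
        if abs(f) < tol:
            break
        E -= f / (e * math.cosh(E) - 1.0)    # derivative e*cosh(E) - 1 > 0 for e > 1
    return E

def true_anomaly(E: float, e: float) -> float:
    """Convert hyperbolic anomaly to true anomaly (radians) via tan(nu/2) = sqrt((e+1)/(e-1)) tanh(E/2)."""
    return 2.0 * math.atan(math.sqrt((e + 1.0) / (e - 1.0)) * math.tanh(E / 2.0))

e, M = 1.5, 2.0                              # illustrative eccentricity and mean anomaly
E = hyperbolic_anomaly(M, e)
print(E, math.degrees(true_anomaly(E, e)))
```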
Flight path angle.
The flight path angle (φ) is the angle between the direction of velocity and the perpendicular to the radial direction, so it is zero at periapsis and tends to 90 degrees at infinity.
formula_26
Velocity.
Under standard assumptions the orbital speed (formula_27) of a body traveling along a hyperbolic trajectory can be computed from the "vis-viva" equation as:
formula_28
where: formula_0 is the standard gravitational parameter, formula_19 is the radial distance of the orbiting body from the central body, and formula_1 is the semi-major axis.
Under standard assumptions, at any position in the orbit the following relation holds for orbital velocity (formula_27), local escape velocity (formula_29) and hyperbolic excess velocity (formula_4):
formula_30
Note that this means that a relatively small extra delta-"v" above that needed to accelerate to the escape speed results in a relatively large speed at infinity. For example, at a place where escape speed is 11.2 km/s, the addition of 0.4 km/s yields a hyperbolic excess speed of 3.02 km/s.
formula_31
This is an example of the Oberth effect. The converse is also true - a body does not need to be slowed by much compared to its hyperbolic excess speed (e.g. by atmospheric drag near periapsis) for velocity to fall below escape velocity and so for the body to be captured.
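The numerical example above follows directly from the relation between orbital, escape and excess speeds; a one-line check (values are those quoted in the text):

```python
import math

def hyperbolic_excess_speed(v: float, v_esc: float) -> float:
    """v_infinity from the local speed and the local escape speed: v^2 = v_esc^2 + v_inf^2."""
    return math.sqrt(v**2 - v_esc**2)

# Example from the text: accelerating to 11.6 km/s where escape speed is 11.2 km/s
# leaves a hyperbolic excess speed of about 3 km/s.
print(hyperbolic_excess_speed(11.6, 11.2))  # ~3.02
```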
Radial hyperbolic trajectory.
A radial hyperbolic trajectory is a non-periodic trajectory on a straight line where the relative speed of the two objects always exceeds the escape velocity. There are two cases: the bodies move away from each other or towards each other. This is a hyperbolic orbit with semi-minor axis = 0 and eccentricity = 1. Although the eccentricity is 1 this is not a parabolic orbit.
Deflection with finite sphere of influence.
A more accurate formula for the deflection angle formula_32 considering the sphere of influence radius formula_33 of the deflecting body, assuming a periapsis formula_34 is:
formula_35
Relativistic two-body problem.
In context of the two-body problem in general relativity, trajectories of objects with enough energy to escape the gravitational pull of the other no longer are shaped like a hyperbola. Nonetheless, the term "hyperbolic trajectory" is still used to describe orbits of this type.
References.
<templatestyles src="Refbegin/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mu\\,"
},
{
"math_id": 1,
"text": "a\\,\\!"
},
{
"math_id": 2,
"text": "\\epsilon\\,"
},
{
"math_id": 3,
"text": "C_3"
},
{
"math_id": 4,
"text": "v_\\infty\\,\\!"
},
{
"math_id": 5,
"text": "v_{\\infty}^2=2\\epsilon=C_3=-\\mu/a"
},
{
"math_id": 6,
"text": "a=-{\\mu/{v_\\infty^2}}"
},
{
"math_id": 7,
"text": "\\mu=Gm\\,\\!"
},
{
"math_id": 8,
"text": "e\\,"
},
{
"math_id": 9,
"text": "e=\\sqrt 2"
},
{
"math_id": 10,
"text": "e>2"
},
{
"math_id": 11,
"text": "\\theta_\\infty\\,"
},
{
"math_id": 12,
"text": "2\\theta_\\infty\\,"
},
{
"math_id": 13,
"text": "\\theta{_\\infty}=\\cos^{-1}(-1/e)\\,"
},
{
"math_id": 14,
"text": "e=-1/\\cos\\theta{_\\infty}\\,"
},
{
"math_id": 15,
"text": "r_p = -a(e-1)= \\frac{\\mu}{v_\\infty^2} \\left(\\sqrt{1 + \\left(b \\frac {v_\\infty^2}{\\mu}\\right)^2} - 1\\right)"
},
{
"math_id": 16,
"text": "\\mu=b v_\\infty^2 \\tan \\delta/2"
},
{
"math_id": 17,
"text": " \\delta = 2\\theta_\\infty - \\pi "
},
{
"math_id": 18,
"text": "\\theta"
},
{
"math_id": 19,
"text": "r\\,"
},
{
"math_id": 20,
"text": "r = \\frac{\\ell}{1 + e\\cdot\\cos\\theta}"
},
{
"math_id": 21,
"text": "\\cosh{E} = {{\\cos{\\theta} + e} \\over {1 + e \\cdot \\cos{\\theta}}} "
},
{
"math_id": 22,
"text": " \\tan \\frac{\\theta}{2} = \\sqrt{\\frac{e+1}{e-1}} \\cdot \\tanh \\frac{E}{2}"
},
{
"math_id": 23,
"text": " \\tanh \\frac{E}{2} = \\sqrt{\\frac{e-1}{e+1}} \\cdot \\tan \\frac{\\theta}{2}"
},
{
"math_id": 24,
"text": " M = e \\sinh E - E "
},
{
"math_id": 25,
"text": "M=\\sqrt{\\frac{\\mu}{-a^3}}.(t-\\tau),"
},
{
"math_id": 26,
"text": "\\tan(\\phi) = \\frac{e\\cdot\\sin\\theta}{1 + e\\cdot \\cos\\theta}"
},
{
"math_id": 27,
"text": "v\\,"
},
{
"math_id": 28,
"text": "v=\\sqrt{\\mu\\left({2\\over{r}}+{1\\over{a}}\\right)}"
},
{
"math_id": 29,
"text": "{v_{esc}}\\,"
},
{
"math_id": 30,
"text": "v^2={v_{esc}}^2+{v_\\infty}^2"
},
{
"math_id": 31,
"text": "\\sqrt{11.6^2-11.2^2}=3.02"
},
{
"math_id": 32,
"text": "\\delta "
},
{
"math_id": 33,
"text": "R_\\text{SOI}"
},
{
"math_id": 34,
"text": "p_e"
},
{
"math_id": 35,
"text": "\\delta = 2\\arcsin\\left( \\frac{\\sqrt{1 - \\frac{p_e}{R_\\text{SOI}}} \\sqrt{1 + \\frac{p_e}{R_\\text{SOI}} - \\frac{2 \\mu p_e}{v_{\\infty}^2 R_\\text{SOI}^2}}}{1 + \\frac{v_{\\infty}^2 p_e}{\\mu} - \\frac{2 p_e}{R_\\text{SOI}}} \\right)"
}
] | https://en.wikipedia.org/wiki?curid=996910 |
9969312 | Hydrogen-oxidizing bacteria | Hydrogen-oxidizing bacteria are a group of facultative autotrophs that can use hydrogen as an electron donor. They can be divided into aerobes and anaerobes. The former use hydrogen as an electron donor and oxygen as an acceptor while the latter use sulphate or nitrogen dioxide as electron acceptors. Species of both types have been isolated from a variety of environments, including fresh waters, sediments, soils, activated sludge, hot springs, hydrothermal vents and percolating water.
These bacteria are able to exploit the special properties of molecular hydrogen (for instance redox potential and diffusion coefficient) thanks to the presence of hydrogenases. The aerobic hydrogen-oxidizing bacteria are facultative autotrophs, but they can also have mixotrophic or completely heterotrophic growth. Most of them show greater growth on organic substrates. The use of hydrogen as an electron donor coupled with the ability to synthesize organic matter, through the reductive assimilation of CO2, characterize the hydrogen-oxidizing bacteria.
Among the most represented genera of these organisms are "Caminibacter", "Aquifex", "Ralstonia" and "Paracoccus".
Sources of hydrogen.
Hydrogen is the most widespread element in the universe, representing around three-quarters of all atoms. In the atmosphere, the concentration of molecular hydrogen (H2) gas is about 0.5–0.6 ppm, and so it represents the second-most-abundant trace gas after methane. H2 can be used as energy source in biological processes because it has a highly negative redox potential ("E"0′ = –0.414 V). It can be coupled with O2, in oxidative respiration (2H2 + O2 → 2H2O), or with oxidized compounds, such as carbon dioxide or sulfate.
In an ecosystem, hydrogen can be produced through abiotic and biological processes. The abiotic processes are mainly due to geothermal production and serpentinization.
In geothermal processes, hydrogen is usually present as a gas and may be obtained by different reactions:
1. Water may react with the silicon radical at high temperature:
Si· + H2O → SiOH + H·
H· + H· → H2
2. A proposed reaction between iron oxides and water may occur at temperatures higher than 800 °C:
2FeO + H2O → Fe2O3 + H2
2Fe3O4 + H2O → 3Fe2O3 + H2
Occurring at ambient temperature, serpentinization is an exothermic geochemical mechanism that takes place when ultramafic rocks from deep in the Earth rise and encounter water. This process can produce large quantities of H2, as well as methane and organic substances.
The main biotic mechanisms that lead to the formation of hydrogen are nitrogen fixation and fermentation. The first happens in bacteria, such as cyanobacteria, that have a specialized enzyme, nitrogenase, which catalyzes the reduction of N2 to NH4+. In addition, these microorganisms have another enzyme, hydrogenase, that oxidizes the H2 released as a by-product. If the nitrogen-fixing bacteria have low amounts of hydrogenase, excess H2 can be released into the environment. The amount of hydrogen released depends on the ratio between H2 production and consumption. The second mechanism, fermentation, is performed by some anaerobic heterotrophic bacteria, in particular "Clostridia", that degrade organic molecules, producing hydrogen as one of the products. This type of metabolism mainly occurs in anoxic sites, such as lake sediments, deep-sea hydrothermal vents and the animal gut.
The ocean is supersaturated with hydrogen, presumably as a result of these biotic processes. Nitrogen fixation is thought to be the major mechanism involved in the production of H2 in the oceans. Release of hydrogen in the oceans is dependent on solar radiation, with a daily peak at noon. The highest concentrations are in the first metres near the surface, decreasing to the thermocline and reaching their minimum in the deep oceans. Globally, tropical and subtropical oceans have the greatest abundance of H2.
Examples.
Hydrothermal vents.
H2 is an important electron donor in hydrothermal vents. In this environment hydrogen oxidation represents a significant origin of energy, sufficient to conduct ATP synthesis and autotrophic CO2 fixation, so hydrogen-oxidizing bacteria form an important part of the ecosystem in deep sea habitats. Among the main chemosynthetic reactions that take place in hydrothermal vents, the oxidation of sulphide and hydrogen holds a central role. In particular, for autotrophic carbon fixation, hydrogen oxidation metabolism is more favored than sulfide or thiosulfate oxidation, although less energy is released (only –237 kJ/mol compared to –797 kJ/mol). To fix a mole of carbon during the hydrogen oxidation, one-third of the energy necessary for the sulphide oxidation is used. This is because hydrogen has a more negative redox potential than NAD(P)H. Depending on the relative amounts of sulphide, hydrogen and other species, energy production by oxidation of hydrogen can be as much as 10–18 times higher than production by the oxidation of sulphide.
Knallgas bacteria.
Aerobic hydrogen-oxidizing bacteria, sometimes called knallgas bacteria, are bacteria that oxidize hydrogen with oxygen as final electron acceptor. These bacteria include "Hydrogenobacter thermophilus", "Cupriavidus necator", and "Hydrogenovibrio marinus". There are both Gram positive and Gram negative knallgas bacteria.
Most grow best under microaerobic conditions because the hydrogenase enzyme is inhibited by the presence of oxygen and yet oxygen is still needed as a terminal electron acceptor in energy metabolism.
The word "Knallgas" means "oxyhydrogen" (a mixture of hydrogen and oxygen, literally "bang-gas") in German.
Strain MH-110.
Ocean surface water is characterized by a high concentration of hydrogen. In 1989, an aerobic hydrogen-oxidizing bacterium was isolated from sea water. The MH-110 strain (also known as DSM 11271, the type strain of "Hydrogenovibrio marinus") is able to grow under normal temperature conditions and in an atmosphere (under a continuous gas flow system) characterized by an oxygen saturation of 40% (analogous characteristics are present in the surface water from which the bacteria were isolated, which is a fairly aerated medium). This differs from the usual behaviour of hydrogen-oxidizing bacteria, which in general thrive under [[Microaerophile|microaerophilic]] conditions (<10% O2 saturation).
This strain is also capable of coupling the hydrogen oxidation with the reduction of sulfur compounds such as thiosulfate and tetrathionate.
Metabolism.
Knallgas bacteria are able to fix carbon dioxide using H2 as their chemical energy source. This sets them apart from other [[hydrogen]]-oxidizing bacteria which, although they also use H2 as an energy source, are not able to fix [[Carbon dioxide|CO2]].
This aerobic hydrogen oxidation (H2 + ½O2 formula_0 H2O), also known as the Knallgas reaction, releases a considerable amount of energy, having a [[Gibbs free energy|ΔG°]] of –237 kJ/mol. The energy is captured as a [[Proton-motive force|proton motive force]] for use by the cell.
The key enzymes involved in this reaction are the [[Hydrogenase maturation protease family|hydrogenases]], which cleave molecular hydrogen and feed its electrons into the [[electron transport chain]], where they are carried to the final acceptor, O2, extracting energy in the process. The hydrogen is ultimately oxidized to water, the end product. The hydrogenases are divided into three categories according to the type of metal present in the active site. These enzymes were first found in "[[Pelomonas saccharophila|Pseudomonas saccharophila]]", "[[Alcaligenes ruhlandii]]" and "[[Alcaligenes eutrophus|Alcaligenes eutrophus]]", in which there are two types of hydrogenases: cytoplasmic and membrane-bound. While the first enzyme takes up hydrogen and reduces [[Nicotinamide adenine dinucleotide|NAD+]] to [[Nicotinamide adenine dinucleotide|NADH]] for carbon fixation, the second is involved in the generation of the proton motive force. In most knallgas bacteria only the second is found.
While these microorganisms are facultative [[autotroph]]s, some are also able to live [[heterotroph|heterotrophically]] using organic substances as electron donors; in this case, the hydrogenase activity is less important or completely absent.
However, knallgas bacteria, when growing as [[Chemolitoautotrophs|chemolithoautotrophs]], can integrate a molecule of CO2 to produce, through the [[Calvin cycle|Calvin–Benson cycle]], biomolecules necessary for the cell:
6H2 + 2O2 + CO2 formula_0 (CH2O) + 5H2O
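The stoichiometry of this overall fixation reaction can be verified with a simple element balance, as in the illustrative Python sketch below (the atom counts per molecule are written out by hand for this check):
<syntaxhighlight lang="python">
# Element balance for the knallgas CO2-fixation reaction:
#   6 H2 + 2 O2 + CO2 -> (CH2O) + 5 H2O
from collections import Counter

# Atom counts per molecule (hand-written for this illustration)
species = {
    "H2":   {"H": 2},
    "O2":   {"O": 2},
    "CO2":  {"C": 1, "O": 2},
    "CH2O": {"C": 1, "H": 2, "O": 1},
    "H2O":  {"H": 2, "O": 1},
}

def side_total(side):
    """Sum atom counts over a list of (coefficient, species) pairs."""
    total = Counter()
    for coeff, name in side:
        for atom, n in species[name].items():
            total[atom] += coeff * n
    return total

reactants = [(6, "H2"), (2, "O2"), (1, "CO2")]
products  = [(1, "CH2O"), (5, "H2O")]

print(side_total(reactants))  # Counter({'H': 12, 'O': 6, 'C': 1})
print(side_total(products))   # Counter({'H': 12, 'O': 6, 'C': 1})
print(side_total(reactants) == side_total(products))  # True: the equation is balanced
</syntaxhighlight>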
A study of "[[Alcaligenes eutrophus|Alcaligenes eutropha]]", a representative knallgas bacterium, found that at low concentrations of O2 (about 10 mol %) and consequently with a low ΔH2/ΔCO2 molar ratio (3.3), the energy efficiency of CO2 fixation increases to 50%. Once assimilated, some of the carbon may be stored as [[polyhydroxybutyrate]].
Uses.
Given enough nutrients, H2, O2 and CO2, many knallgas bacteria can be grown quickly in vats using only a small amount of land area. This makes it possible to cultivate them as an environmentally sustainable source of food and other products. For example, the [[polyhydroxybutyrate]] the bacteria produce can be used as a feedstock to produce [[Biodegradation|biodegradable]] plastics in various eco-sustainable applications.
[[Solar Foods]] is a startup that has sought to commercialize knallgas bacteria for food production, using renewable energy to split hydrogen to grow a neutral-tasting, protein-rich food source for use in products such as artificial meat. Research studies have suggested that knallgas cultivation is more environmentally friendly than traditional agriculture.
References.
<templatestyles src="Reflist/styles.css" />
[[Category:Bacteria]]
[[Category:Hydrogen]]
[[Category:Lithotrophs]] | [
{
"math_id": 0,
"text": "\\longrightarrow"
}
] | https://en.wikipedia.org/wiki?curid=9969312 |
996955 | Characteristic energy | In astrodynamics, the characteristic energy (formula_0) is a measure of the excess specific energy over that required to just barely escape from a massive body. The units are length²·time⁻², i.e. velocity squared, or energy per mass.
Every object in a 2-body ballistic trajectory has a constant specific orbital energy formula_1 equal to the sum of its specific kinetic and specific potential energy:
formula_2
where formula_3 is the standard gravitational parameter of the massive body with mass formula_4, and formula_5 is the radial distance from its center. As an object in an escape trajectory moves outward, its kinetic energy decreases as its potential energy (which is always negative) increases, maintaining a constant sum.
Note that "C"3 is "twice" the specific orbital energy formula_1 of the escaping object.
Non-escape trajectory.
A spacecraft with insufficient energy to escape will remain in a closed orbit (unless it intersects the central body), with
formula_6
where formula_7 is the semi-major axis of the orbit's ellipse.
If the orbit is circular, of radius "r", then
formula_8
Parabolic trajectory.
A spacecraft leaving the central body on a parabolic trajectory has exactly the energy needed to escape and no more:
formula_9
Hyperbolic trajectory.
A spacecraft that is leaving the central body on a hyperbolic trajectory has more than enough energy to escape:
formula_10
where formula_7 is the semi-major axis of the trajectory's hyperbola (which is negative under the sign convention used here).
Also,
formula_11
where formula_12 is the asymptotic velocity at infinite distance. The spacecraft's velocity approaches formula_12 as it moves farther from the central body's gravitational influence.
Examples.
MAVEN, a Mars-bound spacecraft, was launched into a trajectory with a characteristic energy of 12.2 km²/s² with respect to the Earth. When simplified to a two-body problem, this would mean that MAVEN escaped Earth on a hyperbolic trajectory, slowly decreasing its speed towards formula_13. However, since the Sun's gravitational field is much stronger than Earth's, the two-body solution is insufficient. The characteristic energy with respect to the Sun was negative, and MAVEN – instead of heading to infinity – entered an elliptical orbit around the Sun. The maximum velocity on the new orbit could nevertheless be approximated as 33.5 km/s, by assuming that the spacecraft reached practical "infinity" relative to Earth at 3.5 km/s and that this Earth-bound "infinity" itself moves with Earth's orbital velocity of about 30 km/s.
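The arithmetic of this example can be reproduced in a few lines of Python (an illustrative sketch; the 30 km/s figure is the rounded value for Earth's heliocentric orbital speed used above):
<syntaxhighlight lang="python">
import math

C3 = 12.2              # characteristic energy with respect to Earth, km^2/s^2
v_inf = math.sqrt(C3)  # hyperbolic excess speed v_infinity, ~3.5 km/s
v_earth = 30.0         # Earth's heliocentric orbital speed, ~30 km/s (rounded)

# Rough heliocentric speed once the spacecraft has reached "practical infinity"
# relative to Earth, assuming v_inf is directed along Earth's velocity vector.
v_helio = v_earth + v_inf

print(f"v_inf   ~ {v_inf:.1f} km/s")    # ~3.5 km/s
print(f"v_helio ~ {v_helio:.1f} km/s")  # ~33.5 km/s
</syntaxhighlight>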
The InSight mission to Mars launched with a C3 of 8.19 km²/s². The Parker Solar Probe (via Venus) plans a maximum C3 of 154 km²/s².
Typical ballistic C3 values (in km²/s²) to get from Earth to various planets are: Mars 8–16, Jupiter 80, Saturn or Uranus 147. Reaching Pluto (with its orbital inclination) needs about 160–164 km²/s².
Footnotes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C_3"
},
{
"math_id": 1,
"text": "\\epsilon"
},
{
"math_id": 2,
"text": "\\epsilon = \\frac{1}{2} v^2 - \\frac{\\mu}{r} = \\text{constant} = \\frac{1}{2} C_3,"
},
{
"math_id": 3,
"text": "\\mu = GM"
},
{
"math_id": 4,
"text": "M"
},
{
"math_id": 5,
"text": "r"
},
{
"math_id": 6,
"text": "C_3 = -\\frac{\\mu}{a} < 0"
},
{
"math_id": 7,
"text": "a"
},
{
"math_id": 8,
"text": "C_3 = -\\frac{\\mu}{r}"
},
{
"math_id": 9,
"text": "C_3 = 0"
},
{
"math_id": 10,
"text": "C_3 = \\frac{\\mu}{|a|} > 0"
},
{
"math_id": 11,
"text": "C_3 = v_\\infty^2"
},
{
"math_id": 12,
"text": "v_\\infty"
},
{
"math_id": 13,
"text": "\\sqrt{12.2}\\text{ km/s} = 3.5\\text{ km/s}"
}
] | https://en.wikipedia.org/wiki?curid=996955 |
997021 | Asymptotic gain model | The asymptotic gain model (also known as the Rosenstark method) is a representation of the gain of negative feedback amplifiers given by the asymptotic gain relation:
formula_0
where formula_1 is the return ratio with the input source disabled (equal to the negative of the loop gain in the case of a single-loop system composed of unilateral blocks), "G∞" is the asymptotic gain and "G0" is the direct transmission term. This form for the gain can provide intuitive insight into the circuit and often is easier to derive than a direct attack on the gain.
Figure 1 shows a block diagram that leads to the asymptotic gain expression. The asymptotic gain relation also can be expressed as a signal flow graph. See Figure 2. The asymptotic gain model is a special case of the extra element theorem.
As follows directly from limiting cases of the gain expression, the asymptotic gain "G∞" is simply the gain of the system when the return ratio approaches infinity:
formula_2
while the direct transmission term "G0" is the gain of the system when the return ratio is zero:
formula_3
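As a numerical illustration of this relation, the short Python sketch below evaluates the gain for a few values of the return ratio; the values of "G∞", "G0" and "T" are purely hypothetical and chosen only to show how "G" moves from "G0" towards "G∞" as "T" grows.
<syntaxhighlight lang="python">
def asymptotic_gain(G_inf, G_0, T):
    """Asymptotic gain relation: G = G_inf*T/(T+1) + G_0/(T+1)."""
    return G_inf * T / (T + 1) + G_0 / (T + 1)

G_inf = -100.0   # asymptotic gain (hypothetical value)
G_0 = 2.0        # direct transmission term (hypothetical value)

for T in (0.0, 1.0, 10.0, 100.0, 1e6):
    print(f"T = {T:>9g}  ->  G = {asymptotic_gain(G_inf, G_0, T):9.3f}")
# For T = 0 the gain equals G_0 = 2.000;
# as T grows the gain approaches G_inf = -100.000.
</syntaxhighlight>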
Implementation.
Direct application of the model involves these steps:
* Select a dependent source (a controlled source of one of the active devices) in the small-signal circuit.
* Find the return ratio "T" for that source.
* Find the asymptotic gain "G∞", the gain of the circuit when the selected source's parameter is made infinite.
* Find the direct transmission term "G0", the gain of the circuit when the selected source's parameter is set to zero.
* Combine these three terms using the asymptotic gain relation.
These steps can be implemented directly in SPICE using the small-signal circuit of hand analysis. In this approach the dependent sources of the devices are readily accessed. In contrast, for experimental measurements using real devices or SPICE simulations using numerically generated device models with inaccessible dependent sources, evaluating the return ratio requires special methods.
Connection with classical feedback theory.
Classical feedback theory neglects feedforward ("G"0). If feedforward is dropped, the gain from the asymptotic gain model becomes
formula_4
while in classical feedback theory, in terms of the open loop gain "A", the gain with feedback (closed loop gain) is:
formula_5
Comparison of the two expressions indicates the feedback factor "β"FB is:
formula_6
while the open-loop gain is:
formula_7
If the accuracy is adequate (usually it is), these formulas suggest an alternative evaluation of "T": evaluate the open-loop gain and "G∞" and use these expressions to find "T". Often these two evaluations are easier than evaluation of "T" directly.
Examples.
The steps in deriving the gain using the asymptotic gain formula are outlined below for two negative feedback amplifiers. The single transistor example shows how the method works in principle for a transconductance amplifier, while the second two-transistor example shows the approach to more complex cases using a current amplifier.
Single-stage transistor amplifier.
Consider the simple FET feedback amplifier in Figure 3. The aim is to find the low-frequency, open-circuit, transresistance gain of this circuit "G" = "v"out / "i"in using the asymptotic gain model.
The small-signal equivalent circuit is shown in Figure 4, where the transistor is replaced by its hybrid-pi model.
Return ratio.
It is most straightforward to begin by finding the return ratio "T", because "G0" and "G∞" are defined as limiting forms of the gain as "T" tends to either zero or infinity. To take these limits, it is necessary to know what parameters "T" depends upon. There is only one dependent source in this circuit, so as a starting point the return ratio related to this source is determined as outlined in the article on return ratio.
The return ratio is found using Figure 5. In Figure 5, the input current source is set to zero. By cutting the dependent source out of the output side of the circuit and short-circuiting its terminals, the output side of the circuit is isolated from the input and the feedback loop is broken. A test current "it" replaces the dependent source. Then the return current generated in the dependent source by the test current is found. The return ratio is then "T" = −"ir / it". Using this method, and noticing that "R"D is in parallel with "r"O, "T" is determined as:
formula_8
where the approximation is accurate in the common case where "r"O » "R"D. With this relationship it is clear that the limits "T" → 0, or ∞ are realized if we let transconductance "g"m → 0, or ∞.
Asymptotic gain.
Finding the asymptotic gain "G∞" provides insight, and usually can be done by inspection. To find "G∞" we let "g"m → ∞ and find the resulting gain. The drain current, "i"D = "g"m "v"GS, must be finite. Hence, as "g"m approaches infinity, "v"GS also must approach zero. As the source is grounded, "v"GS = 0 implies "v"G = 0 as well. With "v"G = 0 and the fact that all the input current flows through "R"f (as the FET has an infinite input impedance), the output voltage is simply −"i"in "R"f. Hence
formula_9
Alternatively "G∞" is the gain found by replacing the transistor by an ideal amplifier with infinite gain - a nullor.
Direct feedthrough.
To find the direct feedthrough formula_10 we simply let "gm" → 0 and compute the resulting gain. The currents through "R"f and the parallel combination of "R"D || "r"O must therefore be the same and equal to "i"in. The output voltage is therefore "i"in "(R"D "|| r"O")".
Hence
formula_11
where the approximation is accurate in the common case where "rO" » "RD".
Overall gain.
The overall transresistance gain of this amplifier is therefore:
formula_12
Examining this equation, it appears advantageous to make "RD" large in order to make the overall gain approach the asymptotic gain, which renders the gain insensitive to amplifier parameters ("gm" and "RD"). In addition, a large first term reduces the importance of the direct feedthrough factor, which degrades the amplifier. One way to increase "RD" is to replace this resistor by an active load, for example, a current mirror.
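A numerical sketch in Python can make this trade-off concrete; the component values ("gm", "RD", "rO", "Rf") below are hypothetical illustrations, not values taken from the figures:
<syntaxhighlight lang="python">
def parallel(a, b):
    """Resistance of two resistors in parallel."""
    return a * b / (a + b)

# Hypothetical small-signal values (not from the article's figures)
gm = 2e-3      # FET transconductance, S
RD = 10e3      # drain resistor, ohm
rO = 100e3     # FET output resistance, ohm
Rf = 50e3      # feedback resistor, ohm

T     = gm * parallel(RD, rO)   # return ratio
G_inf = -Rf                     # asymptotic gain (transresistance), ohm
G_0   = parallel(RD, rO)        # direct feedthrough term, ohm

# Overall transresistance gain from the asymptotic gain relation
G = G_inf * T / (T + 1) + G_0 / (T + 1)
print(f"T = {T:.2f}, G = {G/1e3:.1f} kohm")  # compare with G_inf = -50 kohm
</syntaxhighlight>
With these illustrative numbers the overall gain is already within a few percent of the asymptotic gain, showing how a larger "T" (for example, via a larger "RD") pushes "G" towards −"Rf".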
Two-stage transistor amplifier.
Figure 6 shows a two-transistor amplifier with a feedback resistor "Rf". This amplifier is often referred to as a "shunt-series feedback" amplifier, and analyzed on the basis that resistor "R2" is in series with the output and samples output current, while "Rf" is in shunt (parallel) with the input and subtracts from the input current. See the article on negative feedback amplifier and references by Meyer or Sedra. That is, the amplifier uses current feedback. It frequently is ambiguous just what type of feedback is involved in an amplifier, and the asymptotic gain approach has the advantage (and, arguably, the disadvantage) that it works whether or not the feedback topology has been correctly identified.
Figure 6 indicates the output node, but does not indicate the choice of output variable. In what follows, the output variable is selected as the short-circuit current of the amplifier, that is, the collector current of the output transistor. Other choices for output are discussed later.
To implement the asymptotic gain model, the dependent source associated with either transistor can be used. Here the first transistor is chosen.
Return ratio.
The circuit to determine the return ratio is shown in the top panel of Figure 7. Labels show the currents in the various branches as found using a combination of Ohm's law and Kirchhoff's laws. Resistor "R"1 "= R"B "// r"π1 and "R"3 "= R"C2 "// R"L. KVL from the ground of "R"1 to the ground of "R"2 provides:
formula_13
KVL provides the collector voltage at the top of "RC" as
formula_14
Finally, KCL at this collector provides
formula_15
Substituting the first equation into the second and the second into the third, the return ratio is found as
formula_16
formula_17
Gain "G0" with T = 0.
The circuit to determine "G0" is shown in the center panel of Figure 7. In Figure 7, the output variable is the output current β"iB" (the short-circuit load current), which leads to the short-circuit current gain of the amplifier, namely β"iB" / "i"S:
formula_18
Using Ohm's law, the voltage at the top of "R1" is found as
formula_19
or, rearranging terms,
formula_20
Using KCL at the top of "R2":
formula_21
Emitter voltage "vE" already is known in terms of "iB" from the diagram of Figure 7. Substituting the second equation in the first, "iB" is determined in terms of "iS" alone, and "G0" becomes:
formula_22
Gain "G0" represents feedforward through the feedback network, and commonly is negligible.
Gain "G∞" with "T" → ∞.
The circuit to determine "G∞" is shown in the bottom panel of Figure 7. The introduction of the ideal op amp (a nullor) in this circuit is explained as follows. When "T "→ ∞, the gain of the amplifier goes to infinity as well, and in such a case the differential voltage driving the amplifier (the voltage across the input transistor "rπ1") is driven to zero and (according to Ohm's law when there is no voltage) it draws no input current. On the other hand, the output current and output voltage are whatever the circuit demands. This behavior is like a nullor, so a nullor can be introduced to represent the infinite gain transistor.
The current gain is read directly off the schematic:
formula_23
Comparison with classical feedback theory.
Using the classical model, the feed-forward is neglected and the feedback factor βFB is (assuming transistor β » 1):
formula_24
and the open-loop gain "A" is:
formula_25
Overall gain.
The above expressions can be substituted into the asymptotic gain model equation to find the overall gain G. The resulting gain is the "current" gain of the amplifier with a short-circuit load.
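The derived expressions are straightforward to evaluate numerically. The Python sketch below combines "T", "G0" and "G∞" into the short-circuit current gain; all component values are hypothetical and chosen only to illustrate the procedure:
<syntaxhighlight lang="python">
# Two-stage shunt-series feedback amplifier: short-circuit current gain via the
# asymptotic gain model. All component values are hypothetical illustrations.
beta  = 100.0     # transistor current gain
gm    = 40e-3     # transconductance of the first transistor, S
r_pi2 = 2.5e3     # small-signal base-emitter resistance of the second transistor, ohm
R1    = 2e3       # R1 = RB // r_pi1, ohm
R2    = 1e3       # emitter resistor of the output stage (series feedback element), ohm
Rf    = 10e3      # feedback resistor, ohm
RC    = 5e3       # collector resistor of the first stage, ohm

# Return ratio (from the derivation above)
T = gm * RC / ((1 + Rf / R1) * (1 + (RC + r_pi2) / ((beta + 1) * R2))
               + (RC + r_pi2) / ((beta + 1) * R1))

# Direct feedthrough term and asymptotic gain
G0    = beta / ((beta + 1) * (1 + Rf / R1)
                + (r_pi2 + RC) * (1 / R1 + (1 / R2) * (1 + Rf / R1)))
G_inf = (beta / (beta + 1)) * (1 + Rf / R2)

# Overall short-circuit current gain
G = G_inf * T / (T + 1) + G0 / (T + 1)
print(f"T = {T:.1f}, G_inf = {G_inf:.2f}, G0 = {G0:.3f}, G = {G:.2f}")
# For a large return ratio, G is close to G_inf and G0 contributes little.
</syntaxhighlight>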
Gain using alternative output variables.
In the amplifier of Figure 6, "R"L and "R"C2 are in parallel.
To obtain the transresistance gain, say "A"ρ, that is, the gain using voltage as output variable, the short-circuit current gain "G" is multiplied by "RC2 // RL" in accordance with Ohm's law:
formula_26
The "open-circuit" voltage gain is found from "A"ρ by setting "R"L → ∞.
To obtain the current gain when load current "iL" in load resistor "R"L is the output variable, say "A"i, the formula for current division is used: "iL = iout × RC2 / ( RC2 + RL )" and the short-circuit current gain "G" is multiplied by this loading factor:
formula_27
Of course, the short-circuit current gain is recovered by setting "R"L = 0 Ω.
References and notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G = G_{\\infty} \\left( \\frac{T}{T + 1} \\right) + G_0 \\left( \\frac{1}{T + 1} \\right) \\ ,"
},
{
"math_id": 1,
"text": "T"
},
{
"math_id": 2,
"text": "G_{\\infty} = G\\ \\Big |_{T \\rightarrow \\infty}\\ , "
},
{
"math_id": 3,
"text": "G_{0} = G\\ \\Big |_{T \\rightarrow 0}\\ ."
},
{
"math_id": 4,
"text": "G = G_{\\infin} \\frac {T} {1+T} =\\frac {G_{\\infin}T}{1+\\frac{1} {G_{\\infin}} G_{\\infin} T} \\ , "
},
{
"math_id": 5,
"text": "A_\\mathrm{FB} = \\frac {A} {1 + {\\beta}_\\mathrm{FB} A} \\ . "
},
{
"math_id": 6,
"text": " \\beta_\\mathrm{FB} = \\frac {1} {G_{\\infin}} \\ , "
},
{
"math_id": 7,
"text": " A = G_{\\infin} \\ T \\ . "
},
{
"math_id": 8,
"text": "T = g_\\mathrm{m} \\left( R_\\mathrm{D}\\ ||r_\\mathrm{O} \\right) \\approx g_\\mathrm{m} R_\\mathrm{D} \\ , "
},
{
"math_id": 9,
"text": "G_{\\infty} = \\frac{v_\\mathrm{out}}{i_\\mathrm{in}} = -R_\\mathrm{f}\\ ."
},
{
"math_id": 10,
"text": "G_0"
},
{
"math_id": 11,
"text": "G_0 = \\frac{v_{out}}{i_{in}} = R_D\\|r_O \\approx R_D \\ ,"
},
{
"math_id": 12,
"text": "G = \\frac{v_{out}}{i_{in}} = -R_f \\frac{g_m R_D}{1+g_m R_D} + R_D \\frac{1}{1+g_m R_D} = \\frac{R_D\\left(1-g_m R_f\\right)}{1+g_m R_D}\\ ."
},
{
"math_id": 13,
"text": " i_\\mathrm{B} = -v_{ \\pi} \\frac {1+R_2/R_1+R_\\mathrm{f}/R_1} {(\\beta +1) R_2} \\ . "
},
{
"math_id": 14,
"text": "v_\\mathrm{C} = v_{ \\pi} \\left(1+ \\frac {R_\\mathrm{f}} {R_1} \\right ) -i_\\mathrm{B} r_{ \\pi 2} \\ . "
},
{
"math_id": 15,
"text": " i_\\mathrm{T} = i_\\mathrm{B} - \\frac {v_\\mathrm{C}} {R_\\mathrm{C}} \\ . "
},
{
"math_id": 16,
"text": "T = - \\frac {i_\\mathrm{R}} {i_\\mathrm{T}} = -g_\\mathrm{m} \\frac {v_{ \\pi} }{i_\\mathrm{T}} "
},
{
"math_id": 17,
"text": " = \\frac {g_\\mathrm{m} R_\\mathrm{C}} { \\left( 1 + \\frac {R_\\mathrm{f}} {R_1} \\right) \\left( 1+ \\frac {R_\\mathrm{C}+r_{ \\pi 2}}{( \\beta +1)R_2} \\right) +\\frac {R_\\mathrm{C}+r_{ \\pi 2}}{(\\beta +1)R_1} } \\ . "
},
{
"math_id": 18,
"text": " G_0 = \\frac { \\beta i_B} {i_S} \\ . "
},
{
"math_id": 19,
"text": " ( i_S - i_R ) R_1 = i_R R_f +v_E \\ \\ ,"
},
{
"math_id": 20,
"text": " i_S = i_R \\left( 1 + \\frac {R_f}{R_1} \\right) +\\frac {v_E} {R_1} \\ . "
},
{
"math_id": 21,
"text": " i_R = \\frac {v_E} {R_2} + ( \\beta +1 ) i_B \\ . "
},
{
"math_id": 22,
"text": "G_0 = \\frac { \\beta } {\n ( \\beta +1) \\left( 1 + \\frac{R_f}{R_1} \\right ) +(r_{ \\pi 2} +R_C ) \\left[ \\frac {1} {R_1} + \\frac {1} {R_2} \\left( 1 + \\frac {R_f} {R_1} \\right ) \\right]\n} "
},
{
"math_id": 23,
"text": " G_{ \\infty } = \\frac { \\beta i_B } {i_S} = \\left( \\frac {\\beta} {\\beta +1} \\right) \\left( 1 + \\frac {R_f} {R_2} \\right) \\ . "
},
{
"math_id": 24,
"text": " \\beta_{FB} = \\frac {1} {G_{\\infin}} \\approx \\frac {1} {(1+ \\frac {R_f}{R_2} )} = \\frac {R_2} {(R_f + R_2)} \\ , "
},
{
"math_id": 25,
"text": "A = G_{\\infin}T \\approx \\frac {\\left( 1+\\frac {R_f}{R_2} \\right) g_m R_C} { \\left( 1 + \\frac {R_f} {R_1} \\right) \\left( 1+ \\frac {R_C+r_{ \\pi 2}}{( \\beta +1)R_2} \\right) +\\frac {R_C+r_{ \\pi 2}}{(\\beta +1)R_1} } \\ . "
},
{
"math_id": 26,
"text": " A_{ \\rho} = G \\left( R_\\mathrm{C2} // R_\\mathrm{L} \\right) \\ . "
},
{
"math_id": 27,
"text": " A_i = G \\left( \\frac {R_{C2}} {R_{C2}+ R_{L}} \\right) \\ . "
}
] | https://en.wikipedia.org/wiki?curid=997021 |
997062 | Stability and Growth Pact | Main European Union fiscal agreement
The Stability and Growth Pact (SGP) is an agreement, among all the 27 member states of the European Union (EU), to facilitate and maintain the stability of the Economic and Monetary Union (EMU). Based primarily on Articles 121 and 126 of the Treaty on the Functioning of the European Union, it consists of fiscal monitoring of member states by the European Commission and the Council of the European Union, and the issuing of a yearly Country-Specific Recommendation for fiscal policy actions to ensure full compliance with the SGP also in the medium term. If a member state breaches the SGP's outlined maximum limit for government deficit and debt, the surveillance and request for corrective action will intensify through the declaration of an Excessive Deficit Procedure (EDP); and if these corrective actions continue to remain absent after multiple warnings, a member state of the eurozone can ultimately also be issued economic sanctions. The pact was outlined by a European Council resolution in June 1997, and two Council regulations in July 1997. The first regulation "on the strengthening of the surveillance of budgetary positions and the surveillance and coordination of economic policies", known as the "preventive arm", entered into force 1 July 1998. The second regulation "on speeding up and clarifying the implementation of the excessive deficit procedure", sometimes referred to as the "dissuasive arm" but commonly known as the "corrective arm", entered into force 1 January 1999.
The purpose of the pact was to ensure that fiscal discipline would be maintained and enforced in the EMU. All EU member states are automatically members of both the EMU and the SGP, as this is defined by paragraphs in the EU Treaty itself. Fiscal discipline is ensured by the SGP by requiring each Member State to implement a fiscal policy aiming for the country to stay within the limits on government deficit (3% of GDP) and debt (60% of GDP); a debt level above 60% of GDP must decrease each year at a satisfactory pace towards a level below that limit. As outlined by the "preventive arm" regulation, all EU member states are obliged each year to submit an SGP compliance report for the scrutiny and evaluation of the European Commission and the Council of the European Union, presenting the country's expected fiscal development for the current and subsequent three years. These reports are called "stability programmes" for eurozone Member States and "convergence programmes" for non-eurozone Member States, but despite the different titles they are identical in content. Since the reform of the SGP in 2005, these programmes have also included the Medium-Term budgetary Objective (MTO), calculated individually for each Member State as the medium-term sustainable average limit for the country's structural deficit, and the Member State is also obliged to outline the measures it intends to implement to attain its MTO. If the EU Member State does not comply with both the deficit limit and the debt limit, a so-called "Excessive Deficit Procedure" (EDP) is initiated along with a deadline to comply, which basically outlines an "adjustment path towards reaching the MTO". This procedure is outlined by the "dissuasive arm" regulation.
The SGP was initially proposed by German finance minister Theo Waigel in the mid-1990s. Germany had long maintained a low-inflation policy, which had been an important part of the German economy's robust performance since the 1950s. The German government hoped to ensure the continuation of that policy through the SGP, which would ensure the prevalence of fiscal responsibility, and limit the ability of governments to exert inflationary pressures on the European economy. As such, it was also described as a key tool for the member states adopting the euro, to ensure that they not only met the Maastricht convergence criteria at the time of adopting the euro but kept complying with the fiscal criteria in the following years. The Excessive Deficit Procedure (EDP), also known as the corrective arm of the SGP, was suspended via activation of the "general escape clause" during 2020–2023 to allow for higher deficit spending; first due to the COVID-19 pandemic arriving as an extraordinary circumstance, and later during 2022–2023 due to the Russian invasion of Ukraine having sent energy prices, defence spending and budgetary pressures up across the EU. Despite the EDP suspension in 2020–2023, Romania still experienced the opening of an EDP in April 2020; but only because a deficit limit breach had already been recorded for its 2019 fiscal year, which required corrective action across 2020–2024 to remedy a budgetary imbalance created before 2020. 16 out of 27 member states had a technical SGP criteria breach when their 2022 fiscal results and 2023 budgets were analyzed in May 2023; because those breaches were exempted due to the finding of temporary and exceptional circumstances, reflected by the activation of the general escape clause, no new EDPs were opened against those member states.
The EDP will be assessed again starting from 19 June 2024, where each country will have their usual set of a "2024 National Reform Programme" and "2024 Stability or Convergence Programme" analyzed, with a compliance check of the 2023 fiscal result and 2024 budget with the existing 2019-version of the SGP rules, although only 3% deficit breaches will be evaluated because no debt limit or debt reduction breach can trigger an EDP in 2024. The European Commission reasoned for its continued deactivation for another year of the debt limit or debt reduction rule in 2023–2024, stating "that compliance with the debt reduction benchmark could imply a too demanding frontloaded fiscal effort that would risk to jeopardise economic growth. Therefore, in the view of the Commission, compliance with the debt reduction benchmark is not warranted under the prevailing economic conditions." In February 2024, the EU approved a revised set of SGP rules, that will introduce acceptance of a slower adjustment path towards respecting the deficit and debt limit of the SGP, and extend the maximum duration of an Excessive Deficit Procedure from four to seven years if certain reform requirements are respected. The new revised rules will be finally adopted by the European Parliament and Council of Ministers before the 2024 European Parliament election; and fully applied starting from the presented drafts for 2025 budgets. The first "national medium-term fiscal-structural plans" guided by the new revised fiscal rules, will cover the four-year period 2025–2028, and need to be submitted by each member state by 20 September 2024.
Timeline.
This is a timeline of how the Stability and Growth Pact evolved over time:
Reform 2005.
In March 2005, the EU Council, under the pressure of France and Germany, relaxed the rules; the EC said it was to respond to criticisms of insufficient flexibility and to make the pact more enforceable. The Ecofin agreed on a reform of the SGP. The ceilings of 3% for budget deficit and 60% for public debt were maintained, but the decision to declare a country in excessive deficit can now rely on certain parameters: the behaviour of the cyclically adjusted budget, the level of debt, the duration of the slow growth period and the possibility that the deficit is related to productivity-enhancing procedures. The pact is part of a set of Council Regulations, decided upon the European Council Summit 22–23 March 2005.
Reforms 2011–13.
The 2010 European sovereign debt crisis proved the serious shortcomings embedded in the SGP. On one hand, fiscal wisdom was not spontaneously followed by the majority of Eurozone Members during the early-2000s expansion cycle. On the other hand, the EDP was not duly carried out, when necessary, as the cases of France and Germany clearly show.
In order to stabilise the Eurozone, Member States adopted an extensive package of reforms, aiming at strengthening both the substantive budgetary rules and the enforcement framework. The result was a complete revision of the SGP. The measures adopted soon proved highly controversial, because they implied an unprecedented curtailment of national sovereignty and the conferral upon the Union of penetrating surveillance competences. The new framework consists of a patchwork of normative acts, both within and outside the formal EU edifice. Consequently, the system is now much more complex.
Treaty on Stability, Coordination and Governance.
The Treaty on Stability, Coordination and Governance (TSCG), commonly labeled as European Fiscal Compact, was signed on 2 March 2012 by all eurozone member states and eight other EU member states and entered into force on 1 January 2013. As of today, all current 27 EU member states ratified or acceded to the treaty, while the main opponent against the TSCG (the United Kingdom) left the EU in January 2020. The TSCG was intended to promote the launch of a new intergovernmental economic cooperation, outside the formal framework of the EU treaties, because most (but not all) member states at the time of its creation were willing to be bound by extra commitments.
Despite being an intergovernmental treaty outside the EU legal framework, all treaty provisions function as an extension to pre-existing EU regulations, utilising the same reporting instruments and organisational structures already created within the EU in the three areas: budget discipline enforced by the Stability and Growth Pact (extended by Title III), coordination of economic policies (extended by Title IV), and governance within the EMU (extended by Title V). The full treaty applies for all eurozone member states. A voluntary opt-in for non-eurozone member states to be bound by the fiscal and economic provisions of the treaty (Title III+IV) has been declared by Denmark, Bulgaria and Romania, while this main part of the treaty currently does not apply for Sweden, Poland, Hungary and the Czech Republic - until such time as they either declare otherwise or adopt the euro.
Member states bound by Title III of the TSCG have to transpose these fiscal provisions (referred to as the Fiscal Compact) into their national legislation. In particular, the general government budget has to be in balance or surplus, under the treaty's definition. As a novelty, an automatic correction mechanism has to be established by written law in order to correct potential significant deviations. Establishment is also required of a national independent monitoring institution to provide fiscal surveillance (commonly referred to as a fiscal council), with a mandate to verify all statistical data and fiscal budgets of the government are in compliance with the agreed fiscal rules, and ensure the proper functioning of the automatic correction mechanism.
The treaty defines a balanced budget in exactly the same way the SGP did, as a government budget deficit not exceeding 3.0% of gross domestic product (GDP) and a structural deficit not exceeding a country-specific Medium-Term budgetary Objective (MTO). The Fiscal Compact however introduced a stricter upper MTO-limit compared to the SGP, as it can now be set at most to 0.5% of GDP for states with a debt‑to‑GDP ratio exceeding 60%, while only states with debt levels below 60% of GDP are subject to the SGP-allowed upper MTO-limit of 1.0% of GDP. The exact applicable country-specific minimum MTO is recalculated and set by the European Commission for each country every third year, and might be set at levels stricter than the greatest latitude permitted by the treaty.
In line with the existing SGP rules, the general government budget balance of a member state will be in compliance with the TSCG deficit criteria either if it is found to be within the country-specific MTO-limit, or if it is found to display "rapid progress" on its "adjustment path" towards respecting the country-specific MTO-limit. On this point the TSCG is only stricter than the SGP by using the phrase "rapid progress" (without quantifying this term), while the SGP regulation opted instead to use the phrase "sufficient progress". In line with the existing SGP rules, the European Commission will for each country set the available time-frame for the "adjustment path" until the MTO-limit shall be achieved, based on consideration of a country-specific debt sustainability risk assessment, while also respecting the requirement that the annual improvement of the structural budget balance shall be a minimum of 0.5% of GDP.
The treaty specifies that the compliance check, and the calculation of any required corrections, for the debt-limit and "debt brake" criteria shall be identical with the debt rules already operating under the Stability and Growth Pact. The outlined debt-limit and debt brake criteria establish four ways for a member state to comply with the debt rules: either by simply having a gross debt level below 60% of GDP, or, if the level is above 60% of GDP, by having it found "sufficiently diminishing" by specific calculation formulas, either over a "3‑year forward looking period", a "3‑year backward looking period", or a "3‑year backward looking period based on cyclically adjusted data".
If any of the periodic checks conducted by the national fiscal council finds the budget or estimated fiscal account of the general government to be noncompliant with the deficit or debt criteria of the treaty, the state is obliged to immediately rectify the issue by implementing sufficient counteracting fiscal measures or changes to its ongoing fiscal policy for the specific year(s) in concern. If a state is in breach at the time of the treaty's entry into force, the correction will be deemed to be sufficient if it delivers sufficiently large annual improvements to remain on a country specific predefined "adjustment path" towards the limits at a midterm horizon. Similar to the general escape clause of the SGP, a state suffering a significant recession or a temporary exceptional event outside its control with major budgetary impact, will be exempted from the requirement to deliver a fiscal automatic correction for as long as it lasts.
The treaty states that the signatories shall attempt to incorporate the treaty into EU's legal framework, on the basis of an assessment of the experience with its implementation, by 1 January 2018 at the latest. In December 2017, the European Commission proposed a new Council Directive to incorporate the main fiscal provisions of the TSCG (all articles of its Title III - except ) into EU law. The ECB proposed several clarifying amendments to this proposed Council Directive in May 2018, while noting a potential adoption of this Directive should only happen together with an amendment of the pre-existing Council Regulation 1466/97, in order to reflect the TSCG had introduced a stricter upper limit for the structural deficit (MTO) at 0.5% of GDP for member states indebted by a debt-to-GDP ratio above 60%, which was a stricter limit than the maximum 1% of GDP being allowed by Council Regulation 1466/97 for all eurozone member states regardless of their debt-to-GDP ratio. If the Council Directive is adopted, it will align the EU fiscal rules with the TSCG fiscal rules. As the content of the Directive does not cover all articles of the TSCG, it will however not replace it, but continue to coexist with TSCG. The proposed Council Directive was never adopted, but the latest 2024 reform is a new attempt to integrate the TSCG into EU law, that will likely succeed.
Secondary legislation.
Several secondary legislative acts were implemented to strengthen both the preventive and the corrective arms of the SGP. One must distinguish between the 2011 Sixpack and the 2013 Twopack.
Sixpack.
The Sixpack consists of five Regulations and one Directive, which all entered into force on 13 December 2011, although compliance with the Directive was only required by 31 December 2013.
New debt reduction rule (Regulation 1177/2011).
The corrective arm of the SGP (Regulation 1467/97) was amended by Regulation 1177/2011. By an entirely rewritten "article 2", this amendment introduced and operationalised a new "debt reduction rule", commonly referred to as the "debt brake rule", and legislatively referred to as the "1/20 numerical benchmark for debt reduction". The new debt reduction rule entered into force at the EU level on 13 December 2011.
* Backwards-checking formula for the debt reduction benchmark (bbt): bbt = 60% + 0.95·(bt-1 − 60%)/3 + 0.95²·(bt-2 − 60%)/3 + 0.95³·(bt-3 − 60%)/3, where bt-1, bt-2 and bt-3 are the debt-to-GDP ratios of the three preceding years. The bb-value is the calculated benchmark limit for year "t". The formula features three t-year-indexes for backwards-checking (a numerical illustration of this check is given at the end of this subsection).
* Forwards-checking formula for the debt reduction benchmark (bbt+2): bbt+2 = 60% + 0.95·(bt+1 − 60%)/3 + 0.95²·(bt − 60%)/3 + 0.95³·(bt-1 − 60%)/3. When checking forwards, the same formula is applied as in the backwards-checking case, just with all the t-year-indexes pushed two years forward.
* The year referred to as "t" in the backward-looking and forward-looking formula listed above, is always the latest completed fiscal year with available outturn data. For example, a backward-check conducted in 2024 will always check whether outturn data from the completed 2023 fiscal year (t) featured a debt-to-GDP ratio (bt) at a level respecting the "2023 debt reduction benchmark" (bbt) calculated on basis of outturn data for the debt-to-GDP ratio from 2020+2021+2022, while the forward-looking check conducted in 2024 will be all about whether the forecast 2025-data (bt+2) will respect the "2025 debt reduction benchmark" (bbt+2) calculated on basis of debt-to-GDP ratio data for 2022+2023+2024. It shall be noted, that whenever a "b" input-value (debt-to-GDP ratio) is recorded/forecast below 60%, its data-input shall be replaced by a fictive 60% value in the formula.
* Besides the backward-looking debt-brake compliance check (bt formula_0 bbt) and the forward-looking debt-brake compliance check (bt+2 formula_0 bbt+2), a third "cyclically adjusted backward-looking debt-brake check (b*t formula_0 bbt)" also forms part of the assessment of whether or not a member state is in compliance with the debt-criterion. This check applies the same backwards-checking formula for the debt reduction benchmark (bbt), but now checks if the "cyclically adjusted debt-to-GDP ratio" (b*t) respects this calculated benchmark-limit (bbt) by being compliant with the equation: b*t formula_0 bbt. The exact formula used to calculate the "cyclically adjusted debt-to-GDP ratio for the latest completed year t with outturn data (b*t)" is displayed by the formula box below.
If just one of the four quantitative debt-requirements (including the first one requiring the debt-to-GDP ratio to be below 60% in the latest recorded fiscal year) is complied with: bt formula_0 60% or bt formula_0 bbt or b*t formula_0 bbt or bt+2 formula_0 bbt+2, then a member state will be declared to be in compliance with the debt brake rule. Otherwise the Commission will declare the existence of an "apparent breach" of the debt-criterion by the publication of a 126(3) report, which shall investigate if the "apparent breach" was "real" after having taken a range of allowed exemptions into consideration. Provided no special "breach exemptions" can be found to exist by the 126(3) report (i.e. finding the debt breach was solely caused by "structural improving pension reforms" or "payment of bailout funds to financial stability mechanisms" or "payment of national funds to the new European Fund for Strategic Investments" or "appearance of an EU-wide recession"), then the Commission will recommend the Council to open a debt-breached EDP against the member state by the publication of a 126(6) report.
For transitional reasons, the regulation granted all 23 EU member states with an ongoing EDP in November 2011, a 3‑year exemption period to comply with the rule, which shall start in the year when the member state have its 2011-EDP abrogated. For example, Ireland will only be obliged to comply with the new debt brake rule in 2019, if they, as expected, manage to correct their EDP in fiscal year 2015 – with the formal EDP abrogation then taking place in 2016. During the years where the 23 member states are exempted from complying with the new debt brake rule, they are still obliged to comply with the old debt brake rule that requires the debt-to-GDP ratios in excess of 60% to be "sufficiently diminished", meaning that it must approach the 60% reference value at a "satisfactory pace" ensuring it will succeed to meet the debt reduction requirement of the new debt brake rule three years after its EDP is abrogated. This special transitional "satisfactory pace" is calculated by the Commission individually for each of the concerned member states, and is published to them in form of a figure for: "The annually required Minimum Linear Structural Adjustment (MLSA) of the deficit in each of the 3 years in the transition period – ensuring the compliance with the new debt brake rule by the end of the transition period."
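As a numerical illustration of the backward-looking check described above, the Python sketch below computes the benchmark from three hypothetical debt-to-GDP ratios (not data for any actual member state):
<syntaxhighlight lang="python">
def debt_reduction_benchmark(b1, b2, b3):
    """Backward-looking debt reduction benchmark bb_t (all values in % of GDP).

    b1, b2, b3 are the debt-to-GDP ratios of years t-1, t-2 and t-3.
    Input ratios below 60% are replaced by a fictive 60%, as required by the rule.
    """
    b1, b2, b3 = (max(b, 60.0) for b in (b1, b2, b3))
    return (60.0
            + 0.95 * (b1 - 60.0) / 3
            + 0.95**2 * (b2 - 60.0) / 3
            + 0.95**3 * (b3 - 60.0) / 3)

# Hypothetical example: debt ratios of 90%, 92% and 95% of GDP in years t-1, t-2, t-3
bb_t = debt_reduction_benchmark(90.0, 92.0, 95.0)
print(f"benchmark bb_t = {bb_t:.1f}% of GDP")   # ~89.1% of GDP

# The backward-looking check is satisfied if the year-t debt ratio b_t <= bb_t
b_t = 87.0
print("compliant" if b_t <= bb_t else "apparent breach")
</syntaxhighlight>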
Twopack.
The Twopack consists of two Regulations that entered into force on 30 May 2013. They are exclusively applicable to eurozone member states and introduced additional coordination and surveillance of their budgetary processes. They were deemed necessary given the higher potential for spillover effects of budgetary policies in a common currency area. The additional regulations complement the SGP's requirement for surveillance, by enhancing the frequency and scope of scrutiny of the member state's policymaking, but do not place additional requirements on the policy itself. The degree of surveillance will depend on the economic health of the member state.
Regulation 473/2013 is directed at all eurozone member states and requires a draft budgetary plan for the upcoming year to be submitted annually by 15 October, for an SGP compliance assessment conducted by the European Commission. The member state shall then await the Commission's opinion before the draft budgetary plan is debated and voted on in its national parliament. The Commission is not granted any veto right over the national parliament's potential adoption of a fiscal budget, but has the role of issuing advance warnings to the national parliaments if the proposed draft budget is found to compromise the debt and deficit rules of the SGP.
Regulation 472/2013 is concerned with the subgroup of eurozone member states experiencing or being threatened by financial instability, which is understood to be the case if the state has an ongoing Excessive Imbalance Procedure (EIP) or receives any macroeconomic financial assistance from the EFSM/EFSF/ESM/IMF or on another bilateral basis. These member states are made subject to even more in-depth and frequent "enhanced surveillance", in order to prevent a possible sovereign debt crisis from emerging.
Assessment and criticism.
In the post-crisis period, the legal debate on EMU largely focused on assessing the effects of both the Six- and the Two-Pack on the SGP. Most scholars admit that a considerable improvement occurred in the field of budgetary enforcement, especially for what concerns the imposition of dissuasive sanctions upon noncompliant Members. However, critical positions generally outnumber positive ones.
Many have criticised the growing complexity of the enforcement procedures. The reform process had to reconcile a strong tightening of the EDP with the pressure for wider escape clauses. The tension between these opposite trends fostered the developments of complicated assessment criteria, often translated in sophisticated mathematical formulas. This not only induces confusion in the overall framework, but also makes the procedural outcome hardly predictable for Member States.
Another widespread criticism concerns the democratic deficit embedded in the SGP. National policymakers are directly legitimated through democratic elections at national level, whereas the EU (in its role as central watchdog) is legitimated only indirectly. The friction between the two levels is felt all the more in periods of economic distress, when the push for budgetary consolidation becomes more compelling. Scholars generally attribute the issue of democratic deficit to the lack of a more federalised institutional framework for Eurozone economic governance. The argument goes that strongly legitimated Union institutions would avoid the need for penetrating surveillance mechanisms, as they would partially shift economic policymaking to the central level.
Bailout programs.
Because of the crisis, some Members lost access to financial markets to refinance their debt. Clearly, the SGP framework proved not enough to ensure the stability of the Eurozone. For this reason, a bailout facility was deemed necessary to face such extraordinary challenges. The first attempt was the European Financial Stability Facility (EFSF), specifically created in 2010 to help Greece, Portugal, and Ireland. However, a permanent facility was created two years later with the establishment of the European Stability Mechanism (ESM). The latter consists of an international treaty signed on 2 February 2012 by Eurozone Members only.
Ailing Members receive financial aid in the form of low-interest loans whose disbursement is tied to policy conditionalities. The latter usually consist of Macroeconomic Adjustment Programs (MAPs) whose adoption is deemed necessary to fix the imbalances which gave rise to the original instability.
Bailout programs do not constitute an enforcement procedure "stricto sensu". However, since financial support always entails compliance with several budgetary and economic conditionalities, they can be construed as a sort of "ex post" enforcement mechanism.
Reform 2024.
On 26 April 2023, the Commission presented three legislative proposals to implement a comprehensive reform of the EU fiscal framework:
* a new Regulation on the preventive arm of the SGP (replacing Regulation 1466/97);
* a Regulation amending the corrective arm of the SGP (Regulation 1467/97);
* a Directive amending the Directive on requirements for national budgetary frameworks.
The proposed reform aims to strengthen public debt sustainability, promote sustainable and inclusive growth through reforms and investments, increase national ownership for fiscal plans and fiscal corrections, simplify the legal framework, move towards a greater medium-term approach to budgetary policies, and ensure more effective and coherent enforcement of the fiscal rules.
As per the legal assessment of the ECB, the Commission's reform proposals also aim to integrate the fiscal provisions of the European Fiscal Compact (TSCG), and wherever provisions would differ this does not necessitate the subsequent amendment or repeal of the TSCG, because the TSCG itself ensures that its provisions will always apply and be interpreted in accordance with the existing economic governance framework of the European Union.
In February 2024, the trilogue negotiations between the co-legislators ended with a provisional political agreement on the Commissions proposal for a comprehensive reform of the SGP rules. The reform will introduce acceptance of a slower adjustment path towards respecting the deficit and debt limit of the SGP, and extend the maximum duration of an Excessive Deficit Procedure from four to seven years if certain reform requirements are respected. The new revised rules will be finally adopted by the European Parliament and Council of Ministers before the 2024 European Parliament election; and fully applied starting from the presented drafts for 2025 budgets. The first "national medium-term fiscal-structural plans" guided by the new revised fiscal rules, will cover the four-year period 2025-2028, and need to be submitted by each member state by 20 September 2024.
The European Parliament is expected to vote on the new Regulation on the preventive arm in April 2024. After the Parliament's approval, the Council of Ministers is expected to adopt the new Regulation, adopt the Regulation amending the corrective arm, and adopt the Directive amending the Directive on national budgetary frameworks. In the meantime, as a new legal framework is not yet in place, the current legal framework continues to apply in spring 2024.
SGP changes induced by the reform.
The reform was adopted and entered into force on 30 April 2024, and induced the following changes to the SGP:
Nominal net primary expenditure growth = yearly potential GDP growth + inflation (as measured by the GDP deflator) – (required change in the Structural Primary Balance) / (primary expenditure-to-GDP ratio)
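Read with this grouping, the last term divides the required change in the structural primary balance by the primary expenditure-to-GDP ratio. A small illustrative calculation (with hypothetical inputs, not figures for any actual member state) is sketched below:
<syntaxhighlight lang="python">
# Allowed nominal net primary expenditure growth under the reformed rules
# (hypothetical illustrative inputs, in percent or percentage points of GDP)
potential_gdp_growth = 1.3        # yearly potential GDP growth, % per year
gdp_deflator_inflation = 2.0      # inflation measured by the GDP deflator, % per year
required_spb_change = 0.5         # required annual improvement of the structural
                                  # primary balance, in percentage points of GDP
primary_expenditure_ratio = 0.45  # primary expenditure as a share of GDP

net_expenditure_growth = (potential_gdp_growth + gdp_deflator_inflation
                          - required_spb_change / primary_expenditure_ratio)
print(f"allowed net primary expenditure growth ~ {net_expenditure_growth:.1f}% per year")
# ~2.2% per year with these hypothetical inputs
</syntaxhighlight>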
Member states by SGP criteria.
Both eurozone and non-eurozone EU member states are subjected to a regular compliance check against the SGP deficit and debt criteria. At least one ordinary check per year has been conducted for all member states since 1998, and at least two ordinary checks per year for all eurozone member states since the twopack reform entered into force in 2013. If just one of the two criteria is not complied with when conducting the first numerical check, and the subsequent investigative report of the Commission concludes this "apparent breach" was non-exempted, then an Excessive Deficit Procedure (EDP) will be opened against the concerned member state - declared by the Council's adoption of a 126(6) report; and a deadline for the needed correction of the criteria breach - along with annual targets for the structural deficit and nominal budget balance - will be set by the simultaneous adoption of a 126(7) report.
The EDP - also known as the corrective arm of the SGP - was however suspended via activation of the "general escape clause" during 2020-2023 to allow for higher deficit spending; first due to the Covid-19 pandemic arriving as an extraordinary circumstance, and later during 2022-2023 due to the Russian invasion of Ukraine having sent energy prices, defence spending and budgetary pressures up across the EU. Despite the EDP suspension in 2020-2023, Romania still experienced the opening of an EDP in April 2020; but only because a deficit limit breach had already been recorded for its 2019 fiscal year, which required corrective action across 2020-2024 to remedy a budgetary imbalance created before 2020.
Compliance in 2023.
The data in the table below are from the ordinary compliance check of all EU member states in May 2023, with outturn data for the 2022 fiscal year as they were published on the Eurostat website in April 2023, and budget values for 2023-2026 as they were reported by the submitted Stability programme or Convergence programme of each member state in April 2023. 16 out of 27 member states had a technical "SGP criteria breach" when their 2022 fiscal results and 2023 budgets were analyzed in May 2023, but because those breaches were exempted due to the finding of temporary and exceptional circumstances - reflected by the activation of the general escape clause, no new EDPs were opened against those member states.
Compliance in 2024.
The EDP will be assessed again starting from 19 June 2024, where each country will have its usual set of a "2024 National Reform Programme" and "2024 Stability or Convergence Programme" analyzed, with a compliance check of the 2023 fiscal result and 2024 budget against the existing 2019-version of the SGP rules; although only 3% deficit breaches will be evaluated - because no debt limit or debt reduction breach can trigger an EDP in 2024. The European Commission justified its continued deactivation of the debt limit and debt reduction rule for another year in 2023-2024 by stating "that compliance with the debt reduction benchmark could imply a too demanding frontloaded fiscal effort that would risk to jeopardise economic growth. Therefore, in the view of the Commission, compliance with the debt reduction benchmark is not warranted under the prevailing economic conditions."
10 out of 27 member states (Belgium, Czechia, France, Hungary, Italy, Malta, Poland, Romania, Slovakia and Spain) had a technical deficit-based "SGP criteria breach" as per their 2023 fiscal results published by Eurostat in April 2024. The European Commission will await receiving the budget values for 2024-2027 via the submitted Stability programme or Convergence programme of each member state, before deciding whether or not to open an EDP for the concerned member states.
The 2024 SGP compliance table below will be updated with the latest data, as soon as the 2024 stability/convergence programme has been published for each country. The colors used to indicate compliance with the SGP criteria are only preliminarily selected, based on whether the reported fiscal data exceed the criteria limits after taking the Commission's latest fiscal policy statements into consideration (exempting all debt-related breaches), but without taking any additional factors or subcriteria into consideration when assessing compliance with the deficit criteria. The final assessment of each country's compliance with the SGP criteria will be published by the European Commission on 19 June 2024, in the form of an assessment report investigating whether the "apparent breach" was "real" (indicated by a red color) or can be "exempted" (indicated by a yellow color).
An article 126(3) assessment report can declare an apparent numerical breach "exempted" and hereby "accepted", if the breach for example was solely caused by "extra expenditures caused by implementation of structurally improving pension reforms" or "payment of bailout funds to financial stability mechanisms" or "payment of national funds to the European Fund for Strategic Investments" or "appearance of an EU-wide recession" or "other temporary and extraordinary expenses specifically allowed by the currently agreed fiscal policy of the EU". Any non-exempted breach of either the deficit or debt criteria of the Stability and Growth Pact declared by a 126(3) report will however result in the publication of a 126(6) and 126(7) report shortly afterwards, in which the Council will be recommended to open an EDP and to set a deadline for when the breach of the criteria shall have been corrected by the member state. If any EDP recommendation is issued by the Commission in June, then the EDP is expected to be formally adopted and opened by the Council in July.
Medium-Term budgetary Objective (MTO).
1999–2005.
Across the first seven years since the entry into force of the Stability and Growth Pact, all EU Member States were required to strive towards a common MTO, namely "to achieve a budgetary position of close to balance or in surplus over a complete business cycle – while providing a safety margin towards continuously respecting the government's 3% deficit limit". The first part of this MTO was interpreted by the Commission Staff Service to mean "continuous achievement each year throughout the business cycle of a "cyclically-adjusted budget balance net of one-off and temporary measures" (also referred to as the "structural balance") of at minimum 0.0%". In 2000, the second part was interpreted and operationalized into a calculation formula requiring the MTO also to respect the so-called "Minimal Benchmark" (later referred to as the "MTO Minimum Benchmark"). When assessing the annual Convergence/Stability programmes of the Member States, the Commission Staff Service checked whether the structural balance of the state complied with both the common "close to balance or surplus" criteria and the country-specific "Minimal Benchmark" criteria. The last round under this assessment scheme took place in Spring 2005, while all subsequent assessments were conducted according to a new reformed scheme – introducing the concept of a single country-specific MTO as the overall steering anchor for fiscal policy.
Calculation of a country-specific Minimum MTO (2005–present).
In order to ensure long-term compliance with the SGP deficit and debt criteria, the member states have since the SGP reform in March 2005 striven towards achieving their country-specific Medium-Term budgetary Objective (MTO). The MTO is the set limit that the structural balance relative to GDP needs to equal or exceed each year in the medium term. Each state selects its own MTO, but it needs to equal or be better than a calculated minimum requirement (Minimum MTO) ensuring sustainability of the government accounts over the long term (calculated on the basis of future potential GDP growth, future cost of government debt, and future increases in age-related costs).
The structural balance is calculated by the European Commission as the cyclically adjusted balance minus "one-off measures" (e.g. one-off payments due to reforming a pension scheme). The cyclically adjusted balance is calculated by adjusting the achieved general government balance (in % of GDP) for each year's relative position in the business cycle (referred to as the "output gap"), which is found by subtracting the potential GDP growth from the achieved GDP growth. So, if a year records average GDP growth in the business cycle (equal to the potential GDP growth rate), the output gap will be zero, meaning that the "cyclically-adjusted balance" will then be equal to the "government budget balance". In this way, because it is resistant to GDP growth changes, the structural balance is considered to be neutral and comparable across an entire business cycle (including both recession years and "overheated years"), making it well suited for use as a consistent medium-term budgetary objective.
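As a simplified sketch of this adjustment (the Commission uses country-specific budgetary semi-elasticities; the sensitivity of 0.5 and all other figures below are assumed purely for illustration):

```python
# Hypothetical illustration of the structural-balance calculation described above.
budget_balance = -3.2      # general government balance, % of GDP (assumed)
achieved_growth = -1.0     # achieved GDP growth, % (assumed)
potential_growth = 1.5     # potential GDP growth, % (assumed)
one_off_measures = 0.3     # net one-off measures, % of GDP (assumed)
sensitivity = 0.5          # assumed budgetary sensitivity to the output gap

output_gap = achieved_growth - potential_growth                          # -2.5
cyclically_adjusted_balance = budget_balance - sensitivity * output_gap  # -1.95
structural_balance = cyclically_adjusted_balance - one_off_measures      # -2.25
print(f"Structural balance: {structural_balance:.2f} % of GDP")
```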
Whenever a country does not reach its MTO, it is required in the subsequent year(s) to implement annual improvements of its structural balance equal to a minimum of 0.5% of GDP, although several sub-rules (including the "expenditure benchmark") have the potential to slightly alter this requirement. When Member States are in this process of improving their structural balance until it reaches its MTO, they are referred to as being on the "adjustment path", and they shall annually report an updated target year for when they expect to reach their MTO. It is the responsibility of each Member State, through a note in their annual Convergence/Stability report, to select their contemporary MTO at a point equal to or above the "minimum MTO" calculated every third year by the European Commission (most recently in October 2012). The "minimum MTO" that the "nationally selected MTO" needs to respect is equal to the strictest of the following three limits:
(1) MTOMB
(2) MTOILD
(3) MTOea/erm2/fc.
The third minimum limit listed above (MTOea/erm2/fc) means that EU member states which have ratified the Fiscal Compact and are bound by its fiscal provisions in Title III (which requires a special additional declaration of intent for non-eurozone member states) are obliged to select an MTO which does not exceed a structural deficit of at most 1.0% of GDP if they have a debt-to-GDP ratio significantly below 60%, and of at most 0.5% of GDP if they have a debt-to-GDP ratio above 60%. In 2013-22, the following six states were not bound by the fiscal provisions of the Fiscal Compact: "UK, Czech Republic, Croatia, Poland, Sweden, Hungary." Croatia became bound by the Fiscal Compact provisions and its stricter -0.5% limit when it adopted the euro on 1 January 2023, and was bound by the -1.0% limit while being an ERM-2 member from 10 July 2020 until 31 December 2022. Those non-eurozone member states that are neither ERM-2 members nor committed to respecting the fiscal provisions of the Fiscal Compact ("Czech Republic, Poland, Sweden, and Hungary," as of April 2024) will still be required to set a national MTO respecting the calculated "minimum MTO", equal to the strictest limit set by MTOMB and MTOILD.
The only EU member state ever exempted from complying with the MTO procedure outlined above was the former member UK, as it was exempted from complying with the SGP per a protocol to the EU treaty. In other words, while all other member states are obliged nationally to select an MTO respecting their calculated Minimum MTO, the calculated Minimum MTO for the UK was only presented by the European Commission for advice, with no obligation for the UK to set a compliant national MTO in structural terms.
The "Minimum MTOs" are recalculated every third year by the Economic and Financial Committee, based on the above-described procedure and formulas, which among other things require the prior publication of the Commission's triennial Ageing Report. However, a member state with an open "macroeconomic adjustment programme" covering the entire first year to which a recalculated "Minimum MTO" would have applied is not subject to recalculation of its MTOILD, due to the ongoing implementation of structural reforms as part of that programme, and consequently will not be bound by any "Minimum MTO" for this specific three‑year period (i.e. Greece in 2012-15 and 2015-18, while a "Minimum MTO" was calculated for 2010 because its first 2010-12 programme only started in May 2010). A Member State can also have its "Minimum MTO" updated outside the ordinary three‑year schedule, if it implements structural reforms with a major impact on the long-term sustainability of public finances (i.e. a major pension reform) – and subsequently submits a formal request for an extraordinary recalculation.
For example, after the ordinary recalculation of minimum MTOs had been conducted by the Commission staff service in autumn 2012, based partially on input values from the 2012 Ageing report published in November 2011, many extraordinary recalculations were subsequently conducted during 2013-14. The MTOILD limits for Belgium, Denmark, Hungary and the Netherlands were revised in 2013, because the impact of their 2012 pension reforms was only subsequently incorporated into updated S2COA values in the Commission's Fiscal Sustainability Report 2012 released on 18 December 2012. The MTOILD limits were also later revised in a similar fashion for Spain, Poland, Latvia, Slovakia, and Slovenia, as the data impact from their 2012 enacted pension reforms had only been assessed with the publication of revised S2COA values in graph 5.4 of the July 2014 report entitled "2014 Stability and Convergence Programmes: An Overview". The revised S2COA values due to enacted pension reforms consequently changed the calculated MTOILD limits to less strict limits for all of the concerned countries.
In March 2017, the Commission made a commitment to begin updating the MTO minimum benchmark (MTOMB) annually in March/April (based on recalculated ROG input data from its latest autumn economic forecast), as the MTOMB had become a critical part of how the recently introduced "Flexibility Clauses" (a collective term for the "Structural Reform Clause" and "Investment Clause") were applied when making the annual assessment of compliance with the deficit and debt criteria for each member state within the preventive arm of the SGP. Except for the situation where a member state requests an extraordinary recalculation of its MTOILD (due to implementation of major structural reforms), all calculated "Minimum MTOs" will however remain frozen for the entire three‑year period they cover, and will not be changed by the annually revised MTOMB.
The table below displays the input data and calculated "Minimum MTOs" from the five latest ordinary recalculations only; potential extraordinary recalculations in between are not shown.
Nationally selected MTOs (2005–present).
Whenever a "Minimum MTO" gets recalculated for a country, a "nationally selected MTO" equal to or above this recalculated "Minimum MTO" shall be announced as part of the following ordinary stability/convergence report, while only taking effect compliance-wise for the fiscal accounts of the years after the new "nationally selected MTO" has been announced. The tables below list all country-specific MTOs selected by national governments throughout 2005–2015, and color each year red/green to display whether or not the "nationally selected MTO" was achieved, according to the latest revision of the structural balance data as calculated by the "European Commission method". Some states, e.g. Denmark and Latvia, apply a national method to calculate the structural balance figures reported in their convergence report (which greatly differs from the results of the Commission's method), but for the sake of presenting comparable results for all Member States, the "MTO achieved" coloring of the tables (and, if not met, the noted forecast year of reaching it) is decided solely by the results of the Commission's calculation method.
Criticism.
The Pact has been criticised by some as being insufficiently flexible and needing to be applied over the economic cycle rather than in any one year. The problem is that countries in the EMU cannot react to economic shocks with a change of their monetary policy, since it is coordinated by the ECB and not by national central banks. Consequently, countries must use fiscal policy, i.e. government spending, to absorb the shock. Critics fear that limiting governments' ability to spend during economic slumps may intensify recessions and hamper growth. In contrast, other critics think that the Pact is too flexible; economist Antonio Martino writes: "The fiscal constraints introduced with the new currency must be criticized not because they are undesirable—in my view they are a necessary component of a liberal order—but because they are ineffective. This is amply evidenced by the "creative accounting" gimmickry used by many countries to achieve the required deficit to GDP ratio of 3 per cent, and by the immediate abandonment of fiscal prudence by some countries as soon as they were included in the euro club. Also, the Stability Pact has been watered down at the request of Germany and France."
The Maastricht criteria have been applied inconsistently: the Council failed to apply sanctions against the first two countries that broke the 3% rule, France and Germany, yet punitive proceedings were started (but fines never applied) when dealing with Portugal (2002) and Greece (2005). In 2002 the European Commission President (1999–2004) Romano Prodi described it as "stupid", but the Commission was still required by the Treaty to seek to apply its provisions.
The Pact has proved to be unenforceable against big countries that dominate the EU economically, such as France and Germany, which were its strongest promoters when it was created. These countries have run "excessive" deficits under the Pact definition for some years. The reasons that larger countries have not been punished include their influence and large number of votes on the Council, which must approve sanctions; their greater resistance to "naming and shaming" tactics, since their electorates tend to be less concerned by their perceptions in the European Union; their weaker commitment to the euro compared to smaller states; and the greater role of government spending in their larger and more closed economies. The Pact was further weakened in 2005 to waive France's and Germany's violations.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\scriptscriptstyle\\leq"
}
] | https://en.wikipedia.org/wiki?curid=997062 |
997097 | Elliptic orbit | Kepler orbit with an eccentricity of less than one
In astrodynamics or celestial mechanics, an elliptic orbit or elliptical orbit is a Kepler orbit with an eccentricity of less than 1; this includes the special case of a circular orbit, with eccentricity equal to 0. In a stricter sense, it is a Kepler orbit with the eccentricity greater than 0 and less than 1 (thus excluding the circular orbit). In a wider sense, it is a Kepler orbit with negative energy. This includes the radial elliptic orbit, with eccentricity equal to 1.
In a gravitational two-body problem with negative energy, both bodies follow similar elliptic orbits with the same orbital period around their common barycenter. Also the relative position of one body with respect to the other follows an elliptic orbit.
Examples of elliptic orbits include Hohmann transfer orbits, Molniya orbits, and tundra orbits.
Velocity.
Under standard assumptions, no other forces acting except two spherically symmetrical bodies m1 and m2, the orbital speed (formula_0) of one body traveling along an elliptic orbit can be computed from the vis-viva equation as:
formula_1
where:
The velocity equation for a hyperbolic trajectory either has + formula_5 (instead of − formula_5), or it is the same with the convention that in that case "a" is negative.
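As an illustration, the vis-viva relation above can be evaluated numerically; the state below (Earth's gravitational parameter and a hypothetical orbit) is chosen only as an example:

```python
import math

mu = 3.986004418e14   # Earth's standard gravitational parameter, m^3/s^2
a = 7.0e6             # semi-major axis, m (hypothetical orbit)
r = 6.7e6             # current distance from the focus, m (hypothetical)

v = math.sqrt(mu * (2.0 / r - 1.0 / a))       # vis-viva equation
print(f"orbital speed: {v / 1000:.2f} km/s")  # ~7.9 km/s for these values
```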
Orbital period.
Under standard assumptions the orbital period (formula_6) of a body travelling along an elliptic orbit can be computed as:
formula_7
where:
Conclusions:
Energy.
Under standard assumptions, the specific orbital energy (formula_9) of an elliptic orbit is negative and the orbital energy conservation equation (the Vis-viva equation) for this orbit can take the form:
formula_10
where:
Conclusions:
Using the virial theorem to find:
Energy in terms of semi major axis.
It can be helpful to know the energy in terms of the semi major axis (and the involved masses). The total energy of the orbit is given by
formula_12,
where a is the semi major axis.
Derivation.
Since gravity is a central force, the angular momentum is constant:
formula_13
At the closest and furthest approaches, the angular momentum is perpendicular to the distance from the mass orbited, therefore:
formula_14.
The total energy of the orbit is given by
formula_15.
Substituting for v, the equation becomes
formula_16.
This is true for r being the closest / furthest distance so two simultaneous equations are made, which when solved for E:
formula_17
Since formula_18 and formula_19, where epsilon is the eccentricity of the orbit, the stated result is reached.
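The elimination step can also be checked symbolically; the sketch below (using sympy, with L2 standing for the squared angular momentum) solves the two apsis equations for the energy and then uses the fact that the two apsis distances sum to 2a:

```python
# Symbolic check of the derivation above (a sketch, not part of the original text).
import sympy as sp

G, M, m, r1, r2, a = sp.symbols('G M m r1 r2 a', positive=True)
E, L2 = sp.symbols('E L2')

eq1 = sp.Eq(E, L2 / (2 * m * r1**2) - G * M * m / r1)   # energy at the closest distance
eq2 = sp.Eq(E, L2 / (2 * m * r2**2) - G * M * m / r2)   # energy at the furthest distance
sol = sp.solve([eq1, eq2], [E, L2], dict=True)[0]

E_expr = sp.simplify(sol[E])
print(E_expr)                                     # simplifies to -G*M*m/(r1 + r2)
print(sp.simplify(E_expr.subs(r2, 2 * a - r1)))   # simplifies to -G*M*m/(2*a)
```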
Flight path angle.
The flight path angle is the angle between the orbiting body's velocity vector (equal to the vector tangent to the instantaneous orbit) and the local horizontal. Under standard assumptions of the conservation of angular momentum the flight path angle formula_20 satisfies the equation:
formula_21
where:
formula_24 is the angle between the orbital velocity vector and the semi-major axis. formula_25 is the local true anomaly. formula_26, therefore,
formula_27
formula_28
where formula_29 is the eccentricity.
The angular momentum is related to the vector cross product of position and velocity, which is proportional to the sine of the angle between these two vectors. Here formula_20 is defined as the angle which differs by 90 degrees from this, so the cosine appears in place of the sine.
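A short numerical sketch of the last relation above (the eccentricity and true anomaly are arbitrary illustrative values):

```python
import math

e = 0.2                    # orbital eccentricity (illustrative)
nu = math.radians(90.0)    # true anomaly (illustrative)

# tan(phi) = e*sin(nu) / (1 + e*cos(nu))
phi = math.atan2(e * math.sin(nu), 1.0 + e * math.cos(nu))
print(f"flight path angle: {math.degrees(phi):.2f} degrees")  # ~11.3 degrees for these values
```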
Equation of motion.
From initial position and velocity.
An orbit equation defines the path of an orbiting body formula_30 around central body formula_31 relative to formula_31, without specifying position as a function of time. If the eccentricity is less than 1 then the equation of motion describes an elliptical orbit. Because Kepler's equation formula_32 has no general closed-form solution for the Eccentric anomaly (E) in terms of the Mean anomaly (M), equations of motion as a function of time also have no closed-form solution (although numerical solutions exist for both).
However, closed-form time-independent path equations of an elliptic orbit with respect to a central body can be determined from just an initial position (formula_33) and velocity (formula_34).
For this case it is convenient to use the following assumptions which differ somewhat from the standard assumptions above:
# The central body's position is at the origin and is the primary focus (formula_35) of the ellipse (alternatively, the center of mass may be used instead if the orbiting body has a significant mass)
# The central body's mass (m1) is known
# The orbiting body's initial position(formula_33) and velocity(formula_34) are known
# The ellipse lies within the XY-plane
The fourth assumption can be made without loss of generality because any three points (or vectors) must lie within a common plane. Under these assumptions the second focus (sometimes called the "empty" focus) must also lie within the XY-plane: formula_36 .
Using vectors.
The general equation of an ellipse under these assumptions using vectors is:
formula_37
where:
The semi-major axis length (a) can be calculated as:
formula_39
where formula_40 is the standard gravitational parameter.
The empty focus (formula_36) can be found by first determining the Eccentricity vector:
formula_41
Where formula_42 is the specific angular momentum of the orbiting body:
formula_43
Then
formula_44
Using XY Coordinates.
This can be done in cartesian coordinates using the following procedure:
The general equation of an ellipse under the assumptions above is:
formula_45
Given:
formula_46 the initial position coordinates
formula_47 the initial velocity coordinates
and
formula_48 the gravitational parameter
Then:
formula_49 specific angular momentum
formula_50 initial distance from F1 (at the origin)
formula_51 the semi-major axis length
formula_52 the Eccentricity vector coordinates
formula_53
Finally, the empty focus coordinates
formula_54
formula_55
Now the result values "fx, fy" and "a" can be applied to the general ellipse equation above.
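A sketch of this procedure in code (the initial state below is hypothetical; the assertion at the end verifies the general ellipse equation at the initial point):

```python
import math

mu = 3.986004418e14    # gravitational parameter of the central body, m^3/s^2 (Earth)
rx, ry = 7.0e6, 0.0    # initial position, m (hypothetical)
vx, vy = 0.0, 8.0e3    # initial velocity, m/s (hypothetical)

h = rx * vy - ry * vx                            # specific angular momentum
r = math.hypot(rx, ry)                           # initial distance from F1
a = mu * r / (2.0 * mu - r * (vx**2 + vy**2))    # semi-major axis length
ex = h * vy / mu - rx / r                        # eccentricity vector coordinates
ey = -h * vx / mu - ry / r
fx, fy = -2.0 * a * ex, -2.0 * a * ey            # empty focus coordinates

# check: distance to F2 plus distance to F1 equals 2a at the initial point
assert abs(math.hypot(fx - rx, fy - ry) + r - 2.0 * a) < 1e-3
print(f"a = {a / 1000:.0f} km, e = {math.hypot(ex, ey):.3f}, "
      f"F2 = ({fx / 1000:.0f}, {fy / 1000:.0f}) km")
```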
Orbital parameters.
The state of an orbiting body at any given time is defined by the orbiting body's position and velocity with respect to the central body, which can be represented by the three-dimensional Cartesian coordinates (position of the orbiting body represented by x, y, and z) and the similar Cartesian components of the orbiting body's velocity. This set of six variables, together with time, are called the orbital state vectors. Given the masses of the two bodies they determine the full orbit. The two most general cases with these 6 degrees of freedom are the elliptic and the hyperbolic orbit. Special cases with fewer degrees of freedom are the circular and parabolic orbit.
Because at least six variables are required to completely represent an elliptic orbit with this set of parameters, six variables are required to represent an orbit with any set of parameters. Another set of six parameters that are commonly used are the orbital elements.
Solar System.
In the Solar System, planets, asteroids, most comets, and some pieces of space debris have approximately elliptical orbits around the Sun. Strictly speaking, both bodies revolve around the same focus of the ellipse, the one closer to the more massive body, but when one body is significantly more massive, such as the Sun in relation to the Earth, the focus may be contained within the more massive body, and thus the smaller body is said to revolve around it. The following chart of the perihelion and aphelion of the planets, dwarf planets, and Halley's Comet demonstrates the variation of the eccentricity of their elliptical orbits. For similar distances from the Sun, wider bars denote greater eccentricity. Note the almost-zero eccentricity of Earth and Venus compared to the enormous eccentricity of Halley's Comet and Eris.
Distances of selected bodies of the Solar System from the Sun. The left and right edges of each bar correspond to the perihelion and aphelion of the body, respectively, hence long bars denote high orbital eccentricity. The radius of the Sun is 0.7 million km, and the radius of Jupiter (the largest planet) is 0.07 million km, both too small to resolve on this image.
Radial elliptic trajectory.
A radial trajectory can be a double line segment, which is a degenerate ellipse with semi-minor axis = 0 and eccentricity = 1. Although the eccentricity is 1, this is not a parabolic orbit. Most properties and formulas of elliptic orbits apply. However, the orbit cannot be closed. It is an open orbit corresponding to the part of the degenerate ellipse from the moment the bodies touch each other and move away from each other until they touch each other again. In the case of point masses one full orbit is possible, starting and ending with a singularity. The velocities at the start and end are infinite in opposite directions and the potential energy is equal to minus infinity.
The radial elliptic trajectory is the solution of a two-body problem with at some instant zero speed, as in the case of dropping an object (neglecting air resistance).
History.
The Babylonians were the first to realize that the Sun's motion along the ecliptic was not uniform, though they were unaware of why this was; it is today known that this is due to the Earth moving in an elliptic orbit around the Sun, with the Earth moving faster when it is nearer to the Sun at perihelion and moving slower when it is farther away at aphelion.
In the 17th century, Johannes Kepler discovered that the orbits along which the planets travel around the Sun are ellipses with the Sun at one focus, and described this in his first law of planetary motion. Later, Isaac Newton explained this as a corollary of his law of universal gravitation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "v\\,"
},
{
"math_id": 1,
"text": "v = \\sqrt{\\mu\\left({2\\over{r}} - {1\\over{a}}\\right)}"
},
{
"math_id": 2,
"text": "\\mu\\,"
},
{
"math_id": 3,
"text": "r\\,"
},
{
"math_id": 4,
"text": "a\\,\\!"
},
{
"math_id": 5,
"text": "{1\\over{a}}"
},
{
"math_id": 6,
"text": "T\\,\\!"
},
{
"math_id": 7,
"text": "T=2\\pi\\sqrt{a^3\\over{\\mu}}"
},
{
"math_id": 8,
"text": "\\mu"
},
{
"math_id": 9,
"text": "\\epsilon"
},
{
"math_id": 10,
"text": "{v^2\\over{2}}-{\\mu\\over{r}}=-{\\mu\\over{2a}}=\\epsilon<0"
},
{
"math_id": 11,
"text": "a\\,"
},
{
"math_id": 12,
"text": "E = - G \\frac{M m}{2a}"
},
{
"math_id": 13,
"text": "\\dot{\\mathbf{L}} = \\mathbf{r} \\times \\mathbf{F} = \\mathbf{r} \\times F(r)\\mathbf{\\hat{r}} = 0"
},
{
"math_id": 14,
"text": "L = r p = r m v"
},
{
"math_id": 15,
"text": "E = \\frac{1}{2}m v^2 - G \\frac{Mm}{r}"
},
{
"math_id": 16,
"text": "E = \\frac{1}{2}\\frac{L^2}{mr^2} - G \\frac{Mm}{r}"
},
{
"math_id": 17,
"text": "E = - G \\frac{Mm}{r_1 + r_2}"
},
{
"math_id": 18,
"text": "r_1 = a + a \\epsilon"
},
{
"math_id": 19,
"text": "r_2 = a - a \\epsilon"
},
{
"math_id": 20,
"text": "\\phi"
},
{
"math_id": 21,
"text": "h\\, = r\\, v\\, \\cos \\phi"
},
{
"math_id": 22,
"text": "h\\,"
},
{
"math_id": 23,
"text": "\\phi \\,"
},
{
"math_id": 24,
"text": "\\psi"
},
{
"math_id": 25,
"text": "\\nu"
},
{
"math_id": 26,
"text": "\\phi = \\nu + \\frac{\\pi}{2} - \\psi"
},
{
"math_id": 27,
"text": "\\cos \\phi = \\sin(\\psi - \\nu) = \\sin\\psi\\cos\\nu - \\cos\\psi\\sin\\nu = \\frac{1 + e\\cos\\nu}{\\sqrt{1 + e^2 + 2e\\cos\\nu}}"
},
{
"math_id": 28,
"text": "\\tan \\phi = \\frac{e\\sin\\nu}{1 + e\\cos\\nu}"
},
{
"math_id": 29,
"text": "e"
},
{
"math_id": 30,
"text": "m_2\\,\\!"
},
{
"math_id": 31,
"text": "m_1\\,\\!"
},
{
"math_id": 32,
"text": "M = E - e \\sin E "
},
{
"math_id": 33,
"text": "\\mathbf{r}"
},
{
"math_id": 34,
"text": "\\mathbf{v}"
},
{
"math_id": 35,
"text": "\\mathbf{F1}"
},
{
"math_id": 36,
"text": "\\mathbf{F2} = \\left(f_x,f_y\\right)"
},
{
"math_id": 37,
"text": " |\\mathbf{F2} - \\mathbf{p}| + |\\mathbf{p}| = 2a \\qquad\\mid z=0"
},
{
"math_id": 38,
"text": "\\mathbf{p} = \\left(x,y\\right)"
},
{
"math_id": 39,
"text": "a = \\frac{\\mu |\\mathbf{r}|}{2\\mu - |\\mathbf{r}| \\mathbf{v}^2}"
},
{
"math_id": 40,
"text": "\\mu\\ = Gm_1"
},
{
"math_id": 41,
"text": "\\mathbf{e} = \\frac{\\mathbf{v}\\times \\mathbf{h}}{\\mu} - \\frac{\\mathbf{r}}{|\\mathbf{r}|}"
},
{
"math_id": 42,
"text": "\\mathbf{h}"
},
{
"math_id": 43,
"text": "\\mathbf{h} = \\mathbf{r} \\times \\mathbf{v}"
},
{
"math_id": 44,
"text": "\\mathbf{F2} = -2a\\mathbf{e}"
},
{
"math_id": 45,
"text": " \\sqrt{ \\left(f_x - x\\right)^2 + \\left(f_y - y\\right)^2} + \\sqrt{ x^2 + y^2 } = 2a \\qquad\\mid z=0"
},
{
"math_id": 46,
"text": "r_x, r_y \\quad"
},
{
"math_id": 47,
"text": "v_x, v_y \\quad"
},
{
"math_id": 48,
"text": "\\mu = Gm_1 \\quad"
},
{
"math_id": 49,
"text": "h = r_x v_y - r_y v_x \\quad"
},
{
"math_id": 50,
"text": "r = \\sqrt{r_x^2 + r_y^2} \\quad"
},
{
"math_id": 51,
"text": "a = \\frac{\\mu r}{2\\mu - r \\left(v_x^2 + v_y^2 \\right)} \\quad"
},
{
"math_id": 52,
"text": "e_x = \\frac{h v_y}{\\mu} - \\frac{r_x}{r} \\quad"
},
{
"math_id": 53,
"text": "e_y = -\\frac{h v_x}{\\mu} - \\frac{r_y}{r} \\quad"
},
{
"math_id": 54,
"text": "f_x = - 2 a e_x \\quad"
},
{
"math_id": 55,
"text": "f_y = - 2 a e_y \\quad"
}
] | https://en.wikipedia.org/wiki?curid=997097 |
997141 | Areostationary orbit | Circular areosynchronous orbit in the Martian equatorial plane
An areostationary orbit, areosynchronous equatorial orbit (AEO), or Mars geostationary orbit is a circular areosynchronous orbit (ASO) approximately 17,032 km in altitude above the Mars equator and following the direction of Mars's rotation.
An object in such an orbit has an orbital period equal to Mars's rotational period, and so to ground observers it appears motionless in a fixed position in the sky. It is the Martian analog of a Geostationary orbit (GEO). The prefix "areo-" derives from Ares, the ancient Greek god of war and counterpart to the Roman god Mars, with whom the planet was identified.
Although it would allow for uninterrupted communication and observation of the Martian surface, no artificial satellites have been placed in this orbit due to the technical complexity of achieving and maintaining one.
Characteristics.
The radius of an areostationary orbit can be calculated using Kepler's Third Law.
formula_0
Where:
Substituting the mass of Mars for M and the Martian sidereal day for T and solving for the semimajor axis yields a synchronous orbit radius of approximately 20,428 km from the center of Mars. Subtracting Mars's equatorial radius gives an orbital altitude of approximately 17,032 km above the surface of the Mars equator.
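As a sketch, the calculation above can be reproduced numerically with approximate published values for Mars (the constants below are rounded):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_mars = 6.4171e23   # mass of Mars, kg (approximate)
T = 88_642.66        # Martian sidereal rotation period, s (approximate)
R_mars = 3.3962e6    # equatorial radius of Mars, m (approximate)

a = (G * M_mars * T**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)   # Kepler's third law
print(f"orbit radius:   {a / 1000:.0f} km from the centre of Mars")      # ~20,400 km
print(f"orbit altitude: {(a - R_mars) / 1000:.0f} km above the equator") # ~17,000 km
```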
Two stable longitudes exist - 17.92°W and 167.83°E. Satellites placed at any other longitude will tend to drift to these stable longitudes over time.
Feasibility.
Several factors make placing a spacecraft into an areostationary orbit more difficult than into a geostationary orbit. Since the areostationary orbit lies between Mars's two natural satellites, Phobos (semi-major axis: 9,376 km) and Deimos (semi-major axis: 23,463 km), any satellites in the orbit will suffer increased orbital station-keeping costs due to unwanted orbital resonance effects. Mars's gravity field is also much less spherical than Earth's, due to uneven volcanism (e.g. Olympus Mons). This creates additional gravitational disturbances not present around Earth, further destabilizing the orbit. Solar radiation pressure and Sun-based perturbations are also present, as with an Earth-based geostationary orbit. Actually placing a satellite into such an orbit is further complicated by the distance from Earth and related challenges shared by any attempted Mars mission.
Uses.
Satellites in an areostationary orbit would allow greater amounts of data to be relayed back from the Martian surface more easily than with current methods. Satellites in the orbit would also be advantageous for monitoring Martian weather and mapping the Martian surface.
In the early 2000s NASA explored the feasibility of placing communications satellites in an areocentric orbit as a part of the Mars Communication Network. In the concept, an areostationary relay satellite would transmit data from a network of landers and smaller satellites in lower Martian orbits back to Earth.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T = {2\\pi\\over{\\sqrt{GM}}} \\, a^{3\\over2}"
}
] | https://en.wikipedia.org/wiki?curid=997141 |
997205 | Circular orbit | Orbit with a fixed distance from the barycenter
A circular orbit is an orbit with a fixed distance around the barycenter; that is, in the shape of a circle.
In this case, not only the distance, but also the speed, angular speed, potential and kinetic energy are constant. There is no periapsis or apoapsis. This orbit has no radial version.
Listed below is a circular orbit in astrodynamics or celestial mechanics under standard assumptions. Here the centripetal force is the gravitational force, and the axis mentioned above is the line through the center of the central mass perpendicular to the orbital plane.
Circular acceleration.
Transverse acceleration (perpendicular to velocity) causes a change in direction. If it is constant in magnitude and changing in direction with the velocity, circular motion ensues. Taking two derivatives of the particle's coordinates with respect to time gives the centripetal acceleration
formula_0
where:
The formula is dimensionally consistent and holds for any system of units applied uniformly across the formula. If the numerical value formula_4 is measured in meters per second squared, then the numerical values formula_1 will be in meters per second, formula_2 in meters, and formula_3 in radians per second.
Velocity.
The speed (or the magnitude of velocity) relative to the central object is constant:
formula_5
where:
Equation of motion.
The orbit equation in polar coordinates, which in general gives "r" in terms of "θ", reduces to:
formula_10
where:
This is because formula_12
formula_13
Angular speed and orbital period.
Hence the orbital period (formula_14) can be computed as:
formula_15
Compare two proportional quantities, the free-fall time (time to fall to a point mass from rest)
formula_16 (17.7% of the orbital period in a circular orbit)
and the time to fall to a point mass in a radial parabolic orbit
formula_17 (7.5% of the orbital period in a circular orbit)
The fact that the formulas only differ by a constant factor is a priori clear from dimensional analysis.
Energy.
The specific orbital energy (formula_18) is negative, and
formula_19
formula_20
Thus the virial theorem applies even without taking a time-average:
The escape velocity from any distance is √2 times the speed in a circular orbit at that distance: the kinetic energy is twice as much, hence the total energy is zero.
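A short numerical illustration of these relations, for a hypothetical 500 km altitude orbit around Earth:

```python
import math

mu = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
r = 6.371e6 + 500.0e3   # orbital radius, m (hypothetical 500 km altitude)

v_circ = math.sqrt(mu / r)                  # circular orbital speed
T = 2.0 * math.pi * math.sqrt(r**3 / mu)    # orbital period
v_esc = math.sqrt(2.0) * v_circ             # escape speed from the same distance

print(f"v_circ = {v_circ / 1000:.2f} km/s")  # ~7.6 km/s
print(f"T      = {T / 60:.1f} min")          # ~94 min
print(f"v_esc  = {v_esc / 1000:.2f} km/s")   # ~10.8 km/s
```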
Delta-v to reach a circular orbit.
Maneuvering into a large circular orbit, e.g. a geostationary orbit, requires a larger delta-v than an escape orbit, although the latter implies getting arbitrarily far away and having more energy than needed for the orbital speed of the circular orbit. It is also a matter of maneuvering into the orbit. See also Hohmann transfer orbit.
Orbital velocity in general relativity.
In Schwarzschild metric, the orbital velocity for a circular orbit with radius formula_21 is given by the following formula:
formula_22
where formula_23 is the Schwarzschild radius of the central body.
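A sketch comparing this expression with the Newtonian circular speed, for an illustrative orbit at three Schwarzschild radii around a body of one solar mass:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
M = 1.989e30     # one solar mass, kg (illustrative central body)

r_S = 2.0 * G * M / c**2    # Schwarzschild radius, roughly 3 km for the Sun's mass
r = 3.0 * r_S               # illustrative orbital radius

v_newton = math.sqrt(G * M / r)          # Newtonian circular speed
v_schw = math.sqrt(G * M / (r - r_S))    # expression quoted above
print(f"Newtonian: {v_newton / c:.3f} c, Schwarzschild: {v_schw / c:.3f} c")  # ~0.408 c vs 0.5 c
```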
Derivation.
For the sake of convenience, the derivation will be written in units in which formula_24.
The four-velocity of a body on a circular orbit is given by:
formula_25
(formula_26 is constant on a circular orbit, and the coordinates can be chosen so that formula_27). The dot above a variable denotes derivation with respect to proper time formula_28.
For a massive particle, the components of the four-velocity satisfy the following equation:
formula_29
We use the geodesic equation:
formula_30
The only nontrivial equation is the one for formula_31. It gives:
formula_32
From this, we get:
formula_33
Substituting this into the equation for a massive particle gives:
formula_34
Hence:
formula_35
Assume we have an observer at radius formula_26, who is not moving with respect to the central body, that is, their four-velocity is proportional to the vector formula_36. The normalization condition implies that it is equal to:
formula_37
The dot product of the four-velocities of the observer and the orbiting body equals the gamma factor for the orbiting body relative to the observer, hence:
formula_38
This gives the velocity:
formula_39
Or, in SI units:
formula_22
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " a\\, = \\frac {v^2} {r} \\, = {\\omega^2} {r} "
},
{
"math_id": 1,
"text": "v\\,"
},
{
"math_id": 2,
"text": "r\\,"
},
{
"math_id": 3,
"text": " \\omega \\ "
},
{
"math_id": 4,
"text": " \\mathbf{a}"
},
{
"math_id": 5,
"text": " v = \\sqrt{ GM\\! \\over{r}} = \\sqrt{\\mu\\over{r}} "
},
{
"math_id": 6,
"text": "G"
},
{
"math_id": 7,
"text": "M"
},
{
"math_id": 8,
"text": "(M_1+M_2)"
},
{
"math_id": 9,
"text": " \\mu = GM "
},
{
"math_id": 10,
"text": "r={{h^2}\\over{\\mu}}"
},
{
"math_id": 11,
"text": "h=rv"
},
{
"math_id": 12,
"text": "\\mu=rv^2"
},
{
"math_id": 13,
"text": "\\omega^2 r^3=\\mu"
},
{
"math_id": 14,
"text": "T\\,\\!"
},
{
"math_id": 15,
"text": "T=2\\pi\\sqrt{r^3\\over{\\mu}}"
},
{
"math_id": 16,
"text": "T_\\text{ff}=\\frac{\\pi}{2\\sqrt{2}}\\sqrt{r^3\\over{\\mu}}"
},
{
"math_id": 17,
"text": "T_\\text{par}=\\frac{\\sqrt{2}}{3}\\sqrt{r^3\\over{\\mu}}"
},
{
"math_id": 18,
"text": "\\epsilon\\,"
},
{
"math_id": 19,
"text": "\\epsilon=-{v^2\\over{2}}"
},
{
"math_id": 20,
"text": "\\epsilon=-{\\mu\\over{2r}}"
},
{
"math_id": 21,
"text": "r"
},
{
"math_id": 22,
"text": "v = \\sqrt{\\frac{GM}{r-r_S}}"
},
{
"math_id": 23,
"text": "\\scriptstyle r_S = \\frac{2GM}{c^2}"
},
{
"math_id": 24,
"text": "\\scriptstyle c=G=1"
},
{
"math_id": 25,
"text": "u^\\mu = (\\dot{t}, 0, 0, \\dot{\\phi})"
},
{
"math_id": 26,
"text": "\\scriptstyle r"
},
{
"math_id": 27,
"text": "\\scriptstyle \\theta=\\frac{\\pi}{2}"
},
{
"math_id": 28,
"text": "\\scriptstyle \\tau"
},
{
"math_id": 29,
"text": "\\left(1-\\frac{2M}{r}\\right) \\dot{t}^2 - r^2 \\dot{\\phi}^2 = 1"
},
{
"math_id": 30,
"text": "\\ddot{x}^\\mu + \\Gamma^\\mu_{\\nu\\sigma}\\dot{x}^\\nu\\dot{x}^\\sigma = 0"
},
{
"math_id": 31,
"text": "\\scriptstyle \\mu = r"
},
{
"math_id": 32,
"text": "\\frac{M}{r^2}\\left(1-\\frac{2M}{r}\\right)\\dot{t}^2 - r\\left(1-\\frac{2M}{r}\\right)\\dot{\\phi}^2 = 0"
},
{
"math_id": 33,
"text": "\\dot{\\phi}^2 = \\frac{M}{r^3}\\dot{t}^2"
},
{
"math_id": 34,
"text": "\\left(1-\\frac{2M}{r}\\right) \\dot{t}^2 - \\frac{M}{r} \\dot{t}^2 = 1"
},
{
"math_id": 35,
"text": "\\dot{t}^2 = \\frac{r}{r-3M}"
},
{
"math_id": 36,
"text": "\\scriptstyle \\partial_t"
},
{
"math_id": 37,
"text": "v^\\mu = \\left(\\sqrt{\\frac{r}{r-2M}},0,0,0\\right)"
},
{
"math_id": 38,
"text": "\\gamma = g_{\\mu\\nu}u^\\mu v^\\nu = \\left(1-\\frac{2M}{r}\\right) \\sqrt{\\frac{r}{r-3M}} \\sqrt{\\frac{r}{r-2M}} = \\sqrt{\\frac{r-2M}{r-3M}}"
},
{
"math_id": 39,
"text": "v = \\sqrt{\\frac{M}{r-2M}}"
}
] | https://en.wikipedia.org/wiki?curid=997205 |
997260 | Specific angular momentum | Vector quantity in celestial mechanics
In celestial mechanics, the specific relative angular momentum (often denoted formula_0 or formula_1) of a body is the angular momentum of that body divided by its mass. In the case of two orbiting bodies it is the vector product of their relative position and relative linear momentum, divided by the mass of the body in question.
Specific relative angular momentum plays a pivotal role in the analysis of the two-body problem, as it remains constant for a given orbit under ideal conditions. "Specific" in this context indicates angular momentum per unit mass. The SI unit for specific relative angular momentum is square meter per second.
Definition.
The specific relative angular momentum is defined as the cross product of the relative position vector formula_2 and the relative velocity vector formula_3.
formula_4
where formula_5 is the angular momentum vector, defined as formula_6.
The formula_7 vector is always perpendicular to the instantaneous osculating orbital plane, which coincides with the instantaneous perturbed orbit. It is not necessarily perpendicular to the average orbital plane over time.
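A minimal numerical sketch of this definition, for an arbitrary illustrative state vector:

```python
import numpy as np

r = np.array([7.0e6, 0.0, 0.0])      # relative position, m (illustrative)
v = np.array([0.0, 7.5e3, 1.0e3])    # relative velocity, m/s (illustrative)

h = np.cross(r, v)        # specific relative angular momentum, m^2/s
print(h)                  # perpendicular to the plane spanned by r and v
print(np.linalg.norm(h))  # its magnitude, ~5.3e10 m^2/s for these values
```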
Proof of constancy in the two body case.
Under certain conditions, it can be proven that the specific angular momentum is constant. The conditions for this proof include:
Proof.
The proof starts with the two body equation of motion, derived from Newton's law of universal gravitation:
formula_10
where:
The cross product of the position vector with the equation of motion is:
formula_17
Because formula_18 the second term vanishes:
formula_19
It can also be derived that:
formula_20
Combining these two equations gives:
formula_21
Since the time derivative is equal to zero, the quantity formula_22 is constant. Using the velocity vector formula_23 in place of the rate of change of position, and formula_1 for the specific angular momentum:
formula_24 is constant.
This is different from the normal construction of angular momentum, formula_25, because it does not include the mass of the object in question.
Kepler's laws of planetary motion.
Kepler's laws of planetary motion can be proved almost directly with the above relationships.
First law.
The proof starts again with the equation of the two-body problem. This time the cross product is multiplied with the specific relative angular momentum
formula_26
The left hand side is equal to the derivative formula_27 because the angular momentum is constant.
After some steps (which include using the vector triple product and defining the scalar formula_28 to be the radial velocity, as opposed to the norm of the vector formula_29) the right hand side becomes:
formula_30
Setting these two expression equal and integrating over time leads to (with the constant of integration formula_31)
formula_32
Now this equation is multiplied (dot product) with formula_8 and rearranged
formula_33
Finally one gets the orbit equation
formula_34
which is the equation of a conic section in polar coordinates with semi-latus rectum formula_35 and eccentricity formula_36.
Second law.
The second law follows instantly from the second of the three equations to calculate the absolute value of the specific relative angular momentum.
If one connects this form of the equation formula_37 with the relationship formula_38 for the area of a sector with an infinitesimally small angle formula_39 (a triangle with one very small side), one obtains the equation
formula_40
Since the specific relative angular momentum formula_1 is constant, equal areas are swept out in equal times, which is Kepler's second law.
Third law.
Kepler's third law is a direct consequence of the second law. Integrating over one revolution gives the orbital period
formula_41
for the area formula_42 of an ellipse. Replacing the semi-minor axis with formula_43 and the specific relative angular momentum with formula_44 one gets
formula_45
There is thus a relationship between the semi-major axis and the orbital period of a satellite that can be reduced to a constant of the central body.
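A quick numerical illustration of this relation, using approximate values for Earth's orbit around the Sun:

```python
import math

mu_sun = 1.32712440018e20   # Sun's gravitational parameter, m^3/s^2 (approximate)
a = 1.495979e11             # semi-major axis of Earth's orbit, m (approximate)

T = 2.0 * math.pi * math.sqrt(a**3 / mu_sun)
print(f"T = {T / 86400:.1f} days")   # ~365 days
```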
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\vec{h}"
},
{
"math_id": 1,
"text": "\\mathbf{h}"
},
{
"math_id": 2,
"text": " \\mathbf{r}"
},
{
"math_id": 3,
"text": " \\mathbf{v} "
},
{
"math_id": 4,
"text": " \\mathbf{h} = \\mathbf{r}\\times \\mathbf{v} = \\frac{\\mathbf{L}}{m} "
},
{
"math_id": 5,
"text": "\\mathbf{L}"
},
{
"math_id": 6,
"text": " \\mathbf{r} \\times m \\mathbf{v}"
},
{
"math_id": 7,
"text": " \\mathbf{h}"
},
{
"math_id": 8,
"text": " \\mathbf{r} "
},
{
"math_id": 9,
"text": " m_1 \\gg m_2 "
},
{
"math_id": 10,
"text": " \\ddot{\\mathbf{r}} + \\frac{G m_1}{r^2}\\frac{\\mathbf{r}}{r} = 0"
},
{
"math_id": 11,
"text": "\\mathbf{r}"
},
{
"math_id": 12,
"text": "m_1"
},
{
"math_id": 13,
"text": "m_2"
},
{
"math_id": 14,
"text": "r"
},
{
"math_id": 15,
"text": "\\ddot{\\mathbf{r}}"
},
{
"math_id": 16,
"text": "G"
},
{
"math_id": 17,
"text": " \\mathbf{r} \\times \\ddot{\\mathbf{r}} + \\mathbf{r} \\times \\frac{G m_1}{r^2}\\frac{\\mathbf{r}}{r} = 0"
},
{
"math_id": 18,
"text": "\\mathbf{r} \\times \\mathbf{r} = 0"
},
{
"math_id": 19,
"text": " \\mathbf{r} \\times \\ddot{\\mathbf{r}} = 0 "
},
{
"math_id": 20,
"text": "\n\\frac{\\mathrm{d}}{\\mathrm{d}t} \\left(\\mathbf{r}\\times\\dot{\\mathbf{r}}\\right) =\n\\dot{\\mathbf{r}} \\times \\dot{\\mathbf{r}} + \\mathbf{r} \\times \\ddot{\\mathbf{r}} =\n\\mathbf{r} \\times \\ddot{\\mathbf{r}}\n"
},
{
"math_id": 21,
"text": "\\frac{\\mathrm{d}}{\\mathrm{d}t} \\left(\\mathbf{r}\\times\\dot{\\mathbf{r}}\\right) = 0"
},
{
"math_id": 22,
"text": "\\mathbf{r} \\times \\dot{\\mathbf{r}}"
},
{
"math_id": 23,
"text": "\\mathbf{v}"
},
{
"math_id": 24,
"text": " \\mathbf{h} = \\mathbf{r}\\times\\mathbf{v}"
},
{
"math_id": 25,
"text": "\\mathbf{r} \\times \\mathbf{p}"
},
{
"math_id": 26,
"text": " \\ddot{\\mathbf{r}} \\times \\mathbf{h} = - \\frac{\\mu}{r^2}\\frac{\\mathbf{r}}{r} \\times \\mathbf{h} "
},
{
"math_id": 27,
"text": " \\frac{\\mathrm{d}}{\\mathrm{d}t} \\left(\\dot{\\mathbf{r}}\\times\\mathbf{h}\\right)"
},
{
"math_id": 28,
"text": "\\dot{r}"
},
{
"math_id": 29,
"text": "\\dot{\\mathbf{r}}"
},
{
"math_id": 30,
"text": "\n -\\frac{\\mu}{r^3}\\left(\\mathbf{r} \\times \\mathbf{h}\\right) =\n -\\frac{\\mu}{r^3} \\left(\\left(\\mathbf{r}\\cdot\\mathbf{v}\\right)\\mathbf{r} - r^2\\mathbf{v}\\right) =\n -\\left(\\frac{\\mu}{r^2}\\dot{r}\\mathbf{r} - \\frac{\\mu}{r}\\mathbf{v}\\right) =\n \\mu \\frac{\\mathrm{d}}{\\mathrm{d}t}\\left(\\frac{\\mathbf{r}}{r}\\right)\n"
},
{
"math_id": 31,
"text": " \\mathbf{C} "
},
{
"math_id": 32,
"text": " \\dot{\\mathbf{r}}\\times\\mathbf{h} = \\mu\\frac{\\mathbf{r}}{r} + \\mathbf{C} "
},
{
"math_id": 33,
"text": "\\begin{align}\n \\mathbf{r} \\cdot \\left(\\dot{\\mathbf{r}}\\times\\mathbf{h}\\right) &= \\mathbf{r} \\cdot \\left(\\mu\\frac{\\mathbf{r}}{r} + \\mathbf{C}\\right) \\\\\n\\Rightarrow \\left(\\mathbf{r}\\times\\dot{\\mathbf{r}}\\right) \\cdot \\mathbf{h} &= \\mu r + r C\\cos\\theta \\\\\n\\Rightarrow h^2 &= \\mu r + r C\\cos\\theta\n\\end{align}"
},
{
"math_id": 34,
"text": " r = \\frac{\\frac{h^2}{\\mu}}{1 + \\frac{C}{\\mu}\\cos\\theta} "
},
{
"math_id": 35,
"text": " p = \\frac{h^2}{\\mu} "
},
{
"math_id": 36,
"text": " e = \\frac{C}{\\mu} "
},
{
"math_id": 37,
"text": " \\mathrm{d}t = \\frac{r^2}{h} \\, \\mathrm{d}\\theta "
},
{
"math_id": 38,
"text": " \\mathrm{d}A = \\frac{r^2}{2} \\, \\mathrm{d}\\theta "
},
{
"math_id": 39,
"text": " \\mathrm{d}\\theta "
},
{
"math_id": 40,
"text": " \\mathrm{d}t = \\frac{2}{h} \\, \\mathrm{d}A "
},
{
"math_id": 41,
"text": " T = \\frac{2\\pi ab}{h} "
},
{
"math_id": 42,
"text": " \\pi ab "
},
{
"math_id": 43,
"text": " b=\\sqrt{ap} "
},
{
"math_id": 44,
"text": " h = \\sqrt{\\mu p} "
},
{
"math_id": 45,
"text": " T = 2\\pi \\sqrt{\\frac{a^3}{\\mu}} "
}
] | https://en.wikipedia.org/wiki?curid=997260 |
997387 | Specific orbital energy | Parameter in the gravitational two-body problem
In the gravitational two-body problem, the specific orbital energy formula_0 (or vis-viva energy) of two orbiting bodies is the constant sum of their mutual potential energy (formula_1) and their total kinetic energy (formula_2), divided by the reduced mass. According to the orbital energy conservation equation (also referred to as vis-viva equation), it does not vary with time:
formula_3
where
It is typically expressed in formula_10 (megajoule per kilogram) or formula_11 (squared kilometer per squared second). For an elliptic orbit the specific orbital energy is the negative of the additional energy required to accelerate a mass of one kilogram to escape velocity (parabolic orbit). For a hyperbolic orbit, it is equal to the excess energy compared to that of a parabolic orbit. In this case the specific orbital energy is also referred to as characteristic energy.
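A minimal sketch of this definition in code, classifying an orbit by the sign of its specific orbital energy (the state below is hypothetical):

```python
mu = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
r = 7.0e6             # distance between the bodies, m (hypothetical)
v = 9.0e3             # relative orbital speed, m/s (hypothetical)

epsilon = v**2 / 2.0 - mu / r    # specific orbital energy, J/kg
if epsilon < 0:
    kind = "elliptic"
elif epsilon == 0:
    kind = "parabolic"
else:
    kind = "hyperbolic"
print(f"epsilon = {epsilon / 1e6:.1f} MJ/kg ({kind})")   # ~-16.4 MJ/kg, elliptic
```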
Equation forms for different orbits.
For an elliptic orbit, the specific orbital energy equation, when combined with conservation of specific angular momentum at one of the orbit's apsides, simplifies to:
formula_12
where
<templatestyles src="Math_proof/styles.css" />Proof
For an elliptic orbit with specific angular momentum "h" given by
formula_14
we use the general form of the specific orbital energy equation,
formula_15
with the relation that the relative velocity at periapsis is
formula_16
Thus our specific orbital energy equation becomes
formula_17
and finally with the last simplification we obtain:
formula_18
For a parabolic orbit this equation simplifies to
formula_19
For a hyperbolic trajectory this specific orbital energy is either given by
formula_20
or the same as for an ellipse, depending on the convention for the sign of "a".
In this case the specific orbital energy is also referred to as characteristic energy (or formula_21) and is equal to the excess specific energy compared to that for a parabolic orbit.
It is related to the hyperbolic excess velocity formula_22 (the orbital velocity at infinity) by
formula_23
It is relevant for interplanetary missions.
Thus, if orbital position vector (formula_24) and orbital velocity vector (formula_25) are known at one position, and formula_26 is known, then the energy can be computed and from that, for any other position, the orbital speed.
Rate of change.
For an elliptic orbit the rate of change of the specific orbital energy with respect to a change in the semi-major axis is
formula_27
where
In the case of circular orbits, this rate is one half of the gravitation at the orbit. This corresponds to the fact that for such orbits the total energy is one half of the potential energy, because the kinetic energy is minus one half of the potential energy.
Additional energy.
If the central body has radius "R", then the additional specific energy of an elliptic orbit compared to being stationary at the surface is
formula_30
The quantity formula_31 is the height the ellipse extends above the surface, plus the periapsis distance (the distance the ellipse extends beyond the center of the Earth). For the Earth, and formula_9 just a little more than formula_32, the additional specific energy is formula_33, which is the kinetic energy of the horizontal component of the velocity, i.e. formula_34, formula_35.
Examples.
ISS.
The International Space Station has an orbital period of 91.74 minutes (5,504 s), hence by Kepler's Third Law the semi-major axis of its orbit is 6,738 km.
The specific orbital energy associated with this orbit is −29.6 MJ/kg: the potential energy is −59.2 MJ/kg, and the kinetic energy 29.6 MJ/kg. Compare with the potential energy at the surface, which is −62.6 MJ/kg. The extra potential energy is 3.4 MJ/kg, the total extra energy is 33.0 MJ/kg. The average speed is 7.7 km/s, the net delta-v to reach this orbit is 8.1 km/s (the actual delta-v is typically 1.5–2.0 km/s more for atmospheric drag and gravity drag).
The increase per meter would be 4.4 J/kg; this rate corresponds to one half of the local gravity of 8.8 m/s².
For an altitude of 100 km (radius is 6,471 km):
The energy is −30.8 MJ/kg: the potential energy is −61.6 MJ/kg, and the kinetic energy 30.8 MJ/kg. Compare with the potential energy at the surface, which is −62.6 MJ/kg. The extra potential energy is 1.0 MJ/kg, the total extra energy is 31.8 MJ/kg.
The increase per meter would be 4.8 J/kg; this rate corresponds to one half of the local gravity of 9.5 m/s². The speed is 7.8 km/s, the net delta-v to reach this orbit is 8.0 km/s.
Taking into account the rotation of the Earth, the delta-v is up to 0.46 km/s less (starting at the equator and going east) or more (if going west).
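A rough cross-check of the ISS figures quoted above (all constants approximate):

```python
import math

mu = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
T = 91.74 * 60.0      # orbital period, s

a = (mu * (T / (2.0 * math.pi))**2) ** (1.0 / 3.0)   # Kepler's third law
epsilon = -mu / (2.0 * a)                            # specific orbital energy
v = math.sqrt(mu / a)                                # mean orbital speed (near-circular orbit)

print(f"a = {a / 1000:.0f} km")                # ~6,740 km
print(f"epsilon = {epsilon / 1e6:.1f} MJ/kg")  # ~-29.6 MJ/kg
print(f"v = {v / 1000:.1f} km/s")              # ~7.7 km/s
```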
"Voyager 1".
For "Voyager 1", with respect to the Sun:
Hence:
formula_37
Thus the hyperbolic excess velocity (the theoretical orbital velocity at infinity) is given by
formula_38
However, "Voyager 1" does not have enough velocity to leave the Milky Way. The computed speed applies far away from the Sun, but at such a position that the potential energy with respect to the Milky Way as a whole has changed negligibly, and only if there is no strong interaction with celestial bodies other than the Sun.
Applying thrust.
Assume:
Then the time-rate of change of the specific energy of the rocket is formula_39: an amount formula_40 for the kinetic energy and an amount formula_41 for the potential energy.
The change of the specific energy of the rocket per unit change of delta-v is
formula_42
which is |v| times the cosine of the angle between v and a.
Thus, when applying delta-v to increase specific orbital energy, this is done most efficiently if a is applied in the direction of v, and when |v| is large. If the angle between v and g is obtuse, for example in a launch and in a transfer to a higher orbit, this means applying the delta-v as early as possible and at full capacity. See also gravity drag. When passing by a celestial body it means applying thrust when nearest to the body. When gradually making an elliptic orbit larger, it means applying thrust each time when near the periapsis.
When applying delta-v to "decrease" specific orbital energy, this is done most efficiently if a is applied in the direction opposite to that of v, and again when |v| is large. If the angle between v and g is acute, for example in a landing (on a celestial body without atmosphere) and in a transfer to a circular orbit around a celestial body when arriving from outside, this means applying the delta-v as late as possible. When passing by a planet it means applying thrust when nearest to the planet. When gradually making an elliptic orbit smaller, it means applying thrust each time when near the periapsis.
If a is in the direction of v:
formula_43
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\varepsilon"
},
{
"math_id": 1,
"text": "\\varepsilon_p"
},
{
"math_id": 2,
"text": "\\varepsilon_k"
},
{
"math_id": 3,
"text": "\\begin{align}\n\\varepsilon &= \\varepsilon_k + \\varepsilon_p \\\\\n &= \\frac{v^2}{2} - \\frac{\\mu}{r}\n = -\\frac{1}{2} \\frac{\\mu^2}{h^2} \\left(1 - e^2\\right)\n = -\\frac{\\mu}{2a}\n\\end{align}"
},
{
"math_id": 4,
"text": "v"
},
{
"math_id": 5,
"text": "r"
},
{
"math_id": 6,
"text": "\\mu = {G}(m_1 + m_2)"
},
{
"math_id": 7,
"text": "h"
},
{
"math_id": 8,
"text": "e"
},
{
"math_id": 9,
"text": "a"
},
{
"math_id": 10,
"text": "\\frac{\\text{MJ}}{\\text{kg}}"
},
{
"math_id": 11,
"text": "\\frac{\\text{km}^2}{\\text{s}^2}"
},
{
"math_id": 12,
"text": "\\varepsilon = -\\frac{\\mu}{2a}"
},
{
"math_id": 13,
"text": "\\mu = G\\left(m_1 + m_2\\right)"
},
{
"math_id": 14,
"text": "h^2 = \\mu p = \\mu a \\left(1 - e^2\\right)"
},
{
"math_id": 15,
"text": "\\varepsilon = \\frac{v^2}{2} - \\frac{\\mu}{r}"
},
{
"math_id": 16,
"text": " v_p^2\n = {h^2 \\over r_p^2}\n = {h^2 \\over a^2(1 - e)^2}\n = {\\mu a \\left(1 - e^2\\right) \\over a^2(1 - e)^2}\n = {\\mu \\left(1 - e^2\\right) \\over a(1 - e)^2}\n"
},
{
"math_id": 17,
"text": " \\varepsilon\n = \\frac{\\mu}{a} {\\left[ { 1 - e^2 \\over 2(1 - e)^2} - {1 \\over 1 - e} \\right]}\n = \\frac{\\mu}{a} {\\left[ {(1 - e)(1 + e) \\over 2(1 - e)^2} - {1 \\over 1 - e} \\right]}\n = \\frac{\\mu}{a} {\\left[ { 1 + e \\over 2(1 - e)} - {2 \\over 2(1 - e)} \\right]}\n = \\frac{\\mu}{a} {\\left[ { e - 1 \\over 2(1 - e)} \\right]}\n"
},
{
"math_id": 18,
"text": "\\varepsilon = -{\\mu \\over 2a}"
},
{
"math_id": 19,
"text": "\\varepsilon = 0."
},
{
"math_id": 20,
"text": "\\varepsilon = {\\mu \\over 2a}."
},
{
"math_id": 21,
"text": "C_3"
},
{
"math_id": 22,
"text": "v_\\infty"
},
{
"math_id": 23,
"text": "2\\varepsilon = C_3 = v_\\infty^2."
},
{
"math_id": 24,
"text": "\\mathbf{r}"
},
{
"math_id": 25,
"text": "\\mathbf{v}"
},
{
"math_id": 26,
"text": "\\mu"
},
{
"math_id": 27,
"text": "\\frac{\\mu}{2a^2}"
},
{
"math_id": 28,
"text": " \\mu={G}(m_1 + m_2)"
},
{
"math_id": 29,
"text": "a\\,\\!"
},
{
"math_id": 30,
"text": " -\\frac{\\mu}{2a}+\\frac{\\mu}{R} = \\frac{\\mu(2a-R)}{2aR}"
},
{
"math_id": 31,
"text": "2a-R"
},
{
"math_id": 32,
"text": "R"
},
{
"math_id": 33,
"text": "(gR/2)"
},
{
"math_id": 34,
"text": "\\frac{1}{2}V^2 = \\frac{1}{2}gR"
},
{
"math_id": 35,
"text": "V=\\sqrt{gR}"
},
{
"math_id": 36,
"text": "\\mu = GM"
},
{
"math_id": 37,
"text": "\\varepsilon = \\varepsilon_k + \\varepsilon_p = \\frac{v^2}{2} - \\frac{\\mu}{r} = \\mathrm{146\\,km^2 s^{-2}} - \\mathrm{8\\, km^2 s^{-2}} = \\mathrm{138\\,km^2 s^{-2}}"
},
{
"math_id": 38,
"text": "v_\\infty = \\mathrm{16.6\\,km/s}"
},
{
"math_id": 39,
"text": " \\mathbf{v} \\cdot \\mathbf{a}"
},
{
"math_id": 40,
"text": "\\mathbf{v} \\cdot (\\mathbf{a}-\\mathbf{g})"
},
{
"math_id": 41,
"text": "\\mathbf{v} \\cdot \\mathbf{g}"
},
{
"math_id": 42,
"text": "\\frac{\\mathbf{v \\cdot a}}{|\\mathbf{a}|}"
},
{
"math_id": 43,
"text": "\\Delta \\varepsilon = \\int v\\, d (\\Delta v) = \\int v\\, a dt"
}
] | https://en.wikipedia.org/wiki?curid=997387 |
997464 | Orbit equation | Astrodynamic equation
In astrodynamics, an orbit equation defines the path of orbiting body formula_0 around central body formula_1 relative to formula_1, without specifying position as a function of time. Under standard assumptions, a body moving under the influence of a force, directed to a central body, with a magnitude inversely proportional to the square of the distance (such as gravity), has an orbit that is a conic section (i.e. circular orbit, elliptic orbit, parabolic trajectory, hyperbolic trajectory, or radial trajectory) with the central body located at one of the two foci, or "the" focus (Kepler's first law).
If the conic section intersects the central body, then the actual trajectory can only be the part above the surface, but for that part the orbit equation and many related formulas still apply, as long as it is a freefall (situation of weightlessness).
Central, inverse-square law force.
Consider a two-body system consisting of a central body of mass "M" and a much smaller, orbiting body of mass formula_2, and suppose the two bodies interact via a central, inverse-square law force (such as gravitation). In polar coordinates, the orbit equation can be written as
formula_3
where
The above relation between formula_4 and formula_5 describes a conic section. The value of formula_13 controls what kind of conic section the orbit is:
The minimum value of formula_4 in the equation is:
formula_20
while, if formula_16, the maximum value is:
formula_21
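These relations are easy to check numerically. The following Python sketch uses the specific angular momentum h = ℓ/m and an illustrative low-Earth-orbit case (the values of mu, a and e are assumptions, not taken from this article); it confirms that the minimum and maximum radii reduce to a(1 − e) and a(1 + e).

```python
import numpy as np

def orbit_radius(theta, h, mu, e):
    # Conic-section orbit equation r = (h^2 / mu) / (1 + e*cos(theta)),
    # written with the specific angular momentum h = l/m.
    return (h**2 / mu) / (1.0 + e * np.cos(theta))

# Illustrative values only (roughly a low Earth orbit):
mu = 3.986004418e14                    # gravitational parameter of Earth, m^3/s^2
a, e = 7.0e6, 0.05                     # semi-major axis (m) and eccentricity
h = np.sqrt(mu * a * (1.0 - e**2))     # specific angular momentum of that ellipse

r_min = orbit_radius(0.0, h, mu, e)    # periapsis, at theta = 0
r_max = orbit_radius(np.pi, h, mu, e)  # apoapsis (only meaningful for e < 1)
print(r_min, a * (1.0 - e))            # both ~ 6.65e6 m
print(r_max, a * (1.0 + e))            # both ~ 7.35e6 m
```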
If the maximum is less than the radius of the central body, then the conic section is an ellipse which is fully inside the central body and no part of it is a possible trajectory. If the maximum is more, but the minimum is less than the radius, part of the trajectory is possible:
If formula_4 becomes such that the orbiting body enters an atmosphere, then the standard assumptions no longer apply, as in atmospheric reentry.
Low-energy trajectories.
If the central body is the Earth, and the energy is only slightly larger than the potential energy at the surface of the Earth, then the orbit is elliptic with eccentricity close to 1 and one end of the ellipse just beyond the center of the Earth, and the other end just above the surface. Only a small part of the ellipse is applicable.
If the horizontal speed is formula_22, then the periapsis distance is formula_23. The energy at the surface of the Earth corresponds to that of an elliptic orbit with formula_24 (with formula_25 the radius of the Earth), which cannot actually exist because it is an ellipse fully below the surface. The energy increases with formula_26 at a rate of formula_27. The maximum height of the orbit above the surface is the length of the ellipse (its major axis), minus formula_25, minus the part "below" the center of the Earth, hence twice the increase of formula_28 minus the periapsis distance. At the top, the potential energy is formula_29 times this height, and the kinetic energy is formula_30; together these add up to the energy increase just mentioned. The width of the ellipse is 19 minutes times formula_22.
The part of the ellipse above the surface can be approximated by a part of a parabola, which is obtained in a model where gravity is assumed constant. This should be distinguished from the parabolic orbit in the sense of astrodynamics, where the velocity is the escape velocity.
See also trajectory.
Categorization of orbits.
Consider orbits which are at one point horizontal, near the surface of the Earth. For increasing speeds at this point the orbits are subsequently:
Note that in the sequence above, formula_9, formula_31 and formula_26 increase monotonically, but formula_13 first decreases from 1 to 0, then increases from 0 to infinity. The reversal is when the center of the Earth changes from being the far focus to being the near focus (the other focus starts near the surface and passes the center of the Earth). We have
formula_32
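As a rough numerical sketch of this categorization (Python; the values of mu and R below are assumed round numbers and the launch speeds are illustrative), the specific orbital energy of a horizontal launch at the surface fixes the semi-major axis, and the eccentricity then follows from the relation just given.

```python
import numpy as np

mu = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2 (assumed value)
R  = 6.371e6          # Earth radius, m; launch point, horizontal velocity

def classify(v):
    eps = v**2 / 2 - mu / R             # specific orbital energy
    if eps >= 0:
        return "parabolic" if eps == 0 else "hyperbolic"
    a = -mu / (2 * eps)                 # semi-major axis from eps = -mu/(2a)
    e = abs(R / a - 1)                  # eccentricity for an orbit horizontal at r = R
    return f"ellipse: a = {a:.3e} m, e = {e:.3f}"

for v in (6000.0, 7910.0, 10000.0, 12000.0):   # m/s, illustrative speeds
    print(v, classify(v))
```

At roughly 7.9 km/s the eccentricity passes through zero (a circular orbit), matching the reversal described above.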
Extending this to orbits which are horizontal at another height, and orbits of which the extrapolation is horizontal below the surface of the Earth, we get a categorization of all orbits, except the radial trajectories, for which, by the way, the orbit equation can not be used. In this categorization ellipses are considered twice, so for ellipses with both sides above the surface one can restrict oneself to taking the side which is lower as the reference side, while for ellipses of which only one side is above the surface, taking that side.
Notes.
References.
| [
{
"math_id": 0,
"text": "m_2\\,\\!"
},
{
"math_id": 1,
"text": "m_1\\,\\!"
},
{
"math_id": 2,
"text": "m"
},
{
"math_id": 3,
"text": "r = \\frac{\\ell^2}{m^2\\mu}\\frac{1}{1+e\\cos\\theta}"
},
{
"math_id": 4,
"text": "r"
},
{
"math_id": 5,
"text": "\\theta"
},
{
"math_id": 6,
"text": "\\mathbf{r}"
},
{
"math_id": 7,
"text": "\\ell"
},
{
"math_id": 8,
"text": "mr^2\\dot{\\theta}"
},
{
"math_id": 9,
"text": "h"
},
{
"math_id": 10,
"text": "\\mu"
},
{
"math_id": 11,
"text": "\\mu/r^2"
},
{
"math_id": 12,
"text": "-GM"
},
{
"math_id": 13,
"text": "e"
},
{
"math_id": 14,
"text": "e = \\sqrt{1+\\frac{2E\\ell^2}{m^3\\mu^2}}"
},
{
"math_id": 15,
"text": "E"
},
{
"math_id": 16,
"text": "e<1"
},
{
"math_id": 17,
"text": "e=0"
},
{
"math_id": 18,
"text": "e=1"
},
{
"math_id": 19,
"text": "e>1"
},
{
"math_id": 20,
"text": "r={{\\ell^2}\\over{m^2\\mu}}{{1}\\over{1+e}}"
},
{
"math_id": 21,
"text": "r={{\\ell^2}\\over{m^2\\mu}}{{1}\\over{1-e}}"
},
{
"math_id": 22,
"text": "v\\,\\!"
},
{
"math_id": 23,
"text": "\\frac{v^2}{2g}"
},
{
"math_id": 24,
"text": "a=R/2\\,\\!"
},
{
"math_id": 25,
"text": "R\\,\\!"
},
{
"math_id": 26,
"text": "a"
},
{
"math_id": 27,
"text": "2g\\,\\!"
},
{
"math_id": 28,
"text": "a\\,\\!"
},
{
"math_id": 29,
"text": "g"
},
{
"math_id": 30,
"text": "\\frac{v^2}{2}"
},
{
"math_id": 31,
"text": "\\epsilon"
},
{
"math_id": 32,
"text": "e=\\left | \\frac{R}{a}-1\\right |"
}
] | https://en.wikipedia.org/wiki?curid=997464 |
9975 | Linear filter | Filter that has a linear response
Linear filters process time-varying input signals to produce output signals, subject to the constraint of linearity. In most cases these linear filters are also time invariant (or shift invariant) in which case they can be analyzed exactly using LTI ("linear time-invariant") system theory revealing their transfer functions in the frequency domain and their impulse responses in the time domain. Real-time implementations of such linear signal processing filters in the time domain are inevitably causal, an additional constraint on their transfer functions. An analog electronic circuit consisting only of linear components (resistors, capacitors, inductors, and linear amplifiers) will necessarily fall in this category, as will comparable mechanical systems or digital signal processing systems containing only linear elements. Since linear time-invariant filters can be completely characterized by their response to sinusoids of different frequencies (their frequency response), they are sometimes known as frequency filters.
Non real-time implementations of linear time-invariant filters need not be causal. Filters of more than one dimension are also used such as in image processing. The general concept of linear filtering also extends into other fields and technologies such as statistics, data analysis, and mechanical engineering.
Impulse response and transfer function.
A linear time-invariant (LTI) filter can be uniquely specified by its impulse response "h", and the output of any filter is mathematically expressed as the convolution of the input with that impulse response. The frequency response, given by the filter's transfer function formula_0, is an alternative characterization of the filter. Typical filter design goals are to realize a particular frequency response, that is, the magnitude of the transfer function formula_1; the importance of the phase of the transfer function varies according to the application, inasmuch as the shape of a waveform can be distorted to a greater or lesser extent in the process of achieving a desired (amplitude) response in the frequency domain. The frequency response may be tailored to, for instance, eliminate unwanted frequency components from an input signal, or to limit an amplifier to signals within a particular band of frequencies.
The impulse response "h" of a linear time-invariant causal filter specifies the output that the filter would produce if it were to receive an input consisting of a single impulse at time 0. An "impulse" in a continuous time filter means a Dirac delta function; in a discrete time filter the Kronecker delta function would apply. The impulse response completely characterizes the response of any such filter, inasmuch as any possible input signal can be expressed as a (possibly infinite) combination of weighted delta functions. Multiplying the impulse response shifted in time according to the arrival of each of these delta functions by the amplitude of each delta function, and summing these responses together (according to the superposition principle, applicable to all linear systems) yields the output waveform.
Mathematically this is described as the convolution of a time-varying input signal "x(t)" with the filter's impulse response "h", defined as:
formula_2 or
formula_3.
The first form is the continuous-time form, which describes mechanical and analog electronic systems, for instance. The second equation is a discrete-time version used, for example, by digital filters implemented in software, so-called "digital signal processing". The impulse response "h" completely characterizes any linear time-invariant (or shift-invariant in the discrete-time case) filter. The input "x" is said to be "convolved" with the impulse response "h" having a (possibly infinite) duration of time "T" (or of "N" sampling periods).
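A minimal Python sketch of the discrete-time form of this convolution is below; the function name and the example impulse response are illustrative choices, not part of any particular library.

```python
import numpy as np

def fir_filter(x, h):
    """Direct-form discrete convolution y[k] = sum_i h[i] * x[k-i]
    (samples before the start of x are treated as zero)."""
    y = np.zeros(len(x))
    for k in range(len(x)):
        for i, hi in enumerate(h):
            if k - i >= 0:
                y[k] += hi * x[k - i]
    return y

h = np.array([0.25, 0.5, 0.25])            # a short smoothing impulse response
x = np.array([0, 0, 1, 0, 0, 0], float)    # a unit impulse at k = 2
print(fir_filter(x, h))                    # reproduces h, shifted to k = 2
print(np.convolve(x, h)[:len(x)])          # same result from numpy's convolution
```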
Filter design consists of finding a possible transfer function that can be implemented within certain practical constraints dictated by the technology or desired complexity of the system, followed by a practical design that realizes that transfer function using the chosen technology. The complexity of a filter may be specified according to the order of the filter.
Among the time-domain filters we here consider, there are two general classes of filter transfer functions that can approximate a desired frequency response. Very different mathematical treatments apply to the design of filters termed infinite impulse response (IIR) filters, characteristic of mechanical and analog electronics systems, and finite impulse response (FIR) filters, which can be implemented by discrete time systems such as computers (then termed "digital signal processing").
Implementation issues.
Classical analog filters are IIR filters, and classical filter theory centers on the determination of transfer functions given by low order rational functions, which can be synthesized using the same small number of reactive components. Using digital computers, on the other hand, both FIR and IIR filters are straightforward to implement in software.
A digital IIR filter can generally approximate a desired filter response using less computing power than a FIR filter, however this advantage is more often unneeded given the increasing power of digital processors. The ease of designing and characterizing FIR filters makes them preferable to the filter designer (programmer) when ample computing power is available. Another advantage of FIR filters is that their impulse response can be made symmetric, which implies a response in the frequency domain that has zero phase at all frequencies (not considering a finite delay), which is absolutely impossible with any IIR filter.
Frequency response.
The frequency response or transfer function formula_1 of a filter can be obtained if the impulse response is known, or directly through analysis using Laplace transforms, or in discrete-time systems the Z-transform. The frequency response also includes the phase as a function of frequency, however in many cases the phase response is of little or no interest. FIR filters can be made to have zero phase, but with IIR filters that is generally impossible. With most IIR transfer functions there are related transfer functions having a frequency response with the same magnitude but a different phase; in most cases the so-called minimum phase transfer function is preferred.
Filters in the time domain are most often requested to follow a specified frequency response. Then, a mathematical procedure finds a filter transfer function that can be realized (within some constraints), and approximates the desired response to within some criterion. Common filter response specifications are described as follows:
FIR transfer functions.
Meeting a frequency response requirement with an FIR filter uses relatively straightforward procedures. In the most basic form, the desired frequency response itself can be sampled with a resolution of formula_4 and Fourier transformed to the time domain. This obtains the filter coefficients "hi", which implements a zero phase FIR filter that matches the frequency response at the sampled frequencies used. To better match a desired response, formula_4 must be reduced. However the duration of the filter's impulse response, and the number of terms that must be summed for each output value (according to the above discrete time convolution) is given by formula_5 where "T" is the sampling period of the discrete time system (N-1 is also termed the "order" of an FIR filter). Thus the complexity of a digital filter and the computing time involved, grows inversely with formula_4, placing a higher cost on filter functions that better approximate the desired behavior. For the same reason, filter functions whose critical response is at lower frequencies (compared to the sampling frequency "1/T") require a higher order, more computationally intensive FIR filter. An IIR filter can thus be much more efficient in such cases.
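A minimal sketch of this frequency-sampling idea in Python (using NumPy's FFT; the tap count and cutoff below are arbitrary choices) is shown next. The realized response agrees with the desired one exactly at the sampled frequencies, which is all the method guarantees.

```python
import numpy as np

N = 33                                    # number of sampled frequencies / filter taps
f = np.fft.fftfreq(N)                     # normalized frequencies, cycles per sample
H_desired = (np.abs(f) <= 0.25).astype(float)   # ideal low-pass, cutoff at 0.25

h = np.real(np.fft.ifft(H_desired))       # inverse DFT of a real, even response
h = np.roll(h, N // 2)                    # center the impulse response (adds a delay,
                                          # making the zero-phase design causal)

H_realized = np.abs(np.fft.fft(h))        # matches H_desired at the N sample points
print(np.allclose(H_realized, H_desired, atol=1e-12))
```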
Elsewhere the reader may find further discussion of design methods for practical FIR filter design.
IIR transfer functions.
Since classical analog filters are IIR filters, there has been a long history of studying the range of possible transfer functions implementing various of the above desired filter responses in continuous time systems. Using transforms it is possible to convert these continuous time frequency responses to ones that are implemented in discrete time, for use in digital IIR filters. The complexity of any such filter is given by the "order" N, which describes the order of the rational function describing the frequency response. The order N is of particular importance in analog filters, because an Nth order electronic filter requires N reactive elements (capacitors and/or inductors) to implement. If a filter is implemented using, for instance, biquad stages using op-amps, N/2 stages are needed. In a digital implementation, the number of computations performed per sample is proportional to N. Thus the mathematical problem is to obtain the best approximation (in some sense) to the desired response using a smaller N, as we shall now illustrate.
Below are the frequency responses of several standard filter functions that approximate a desired response, optimized according to some criterion. These are all fifth-order low-pass filters, designed for a cutoff frequency of .5 in normalized units. Frequency responses are shown for the Butterworth, Chebyshev, inverse Chebyshev, and elliptic filters.
As is clear from the image, the elliptic filter is sharper than the others, but at the expense of ripples in both its passband and stopband. The Butterworth filter has the poorest transition but has a more even response, avoiding ripples in either the passband or stopband. A Bessel filter (not shown) has an even poorer transition in the frequency domain, but maintains the best phase fidelity of a waveform. Different applications emphasize different design requirements, leading to different choices among these (and other) optimizations, or requiring a filter of a higher order.
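The magnitude responses being compared follow standard closed forms; the sketch below (Python, with an assumed 1 dB passband ripple for the Chebyshev case) evaluates the Butterworth and Chebyshev type-I magnitudes for a fifth-order low-pass with cutoff 0.5, covering two of the filter families mentioned above.

```python
import numpy as np

def butterworth_mag(w, wc, n):
    # |H| of an nth-order Butterworth low-pass: maximally flat, no ripple
    return 1.0 / np.sqrt(1.0 + (w / wc) ** (2 * n))

def chebyshev1_mag(w, wc, n, ripple_db):
    # |H| of an nth-order Chebyshev type-I low-pass: equiripple in the passband
    eps = np.sqrt(10 ** (ripple_db / 10) - 1.0)
    x = w / wc
    Tn = np.where(x <= 1,
                  np.cos(n * np.arccos(np.clip(x, -1.0, 1.0))),
                  np.cosh(n * np.arccosh(np.maximum(x, 1.0))))
    return 1.0 / np.sqrt(1.0 + (eps * Tn) ** 2)

w = np.linspace(0.01, 2.0, 5)              # normalized frequency points
print(butterworth_mag(w, 0.5, 5))          # gentle, ripple-free roll-off
print(chebyshev1_mag(w, 0.5, 5, 1.0))      # sharper transition, passband ripple
```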
Example implementations.
A popular circuit implementing a second order active R-C filter is the Sallen-Key design, whose schematic diagram is shown here. This topology can be adapted to produce low-pass, band-pass, and high pass filters.
An Nth order FIR filter can be implemented in a discrete time system using a computer program or specialized hardware in which the input signal is subject to N delay stages. The output of the filter is formed as the weighted sum of those delayed signals, as is depicted in the accompanying signal flow diagram. The response of the filter depends on the weighting coefficients denoted "b0", "b1", ... "bN". For instance, if all of the coefficients were equal to unity, a so-called boxcar function, then it would implement a low-pass filter with a low frequency gain of N+1 and a frequency response given by the sinc function. Superior shapes for the frequency response can be obtained using coefficients derived from a more sophisticated design procedure.
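The boxcar case mentioned above can be checked directly; the short Python sketch below (order N = 7 is an arbitrary choice) sums the delayed samples with unit weights and compares the result against the periodic-sinc form of the frequency response.

```python
import numpy as np

N = 7                                      # filter order: N delay stages, N+1 taps
b = np.ones(N + 1)                         # boxcar: all weighting coefficients = 1

w = np.linspace(1e-6, np.pi, 4)            # digital frequency, rad/sample
H = np.array([np.sum(b * np.exp(-1j * wk * np.arange(N + 1))) for wk in w])

print(abs(H[0]))                           # ~ N+1 = 8 gain at low frequency
print(np.allclose(abs(H),                  # magnitude follows the periodic sinc shape
      abs(np.sin(w * (N + 1) / 2) / np.sin(w / 2))))
```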
Mathematics of filter design.
LTI system theory describes linear "time-invariant" (LTI) filters of all types. LTI filters can be completely described by their frequency response and phase response, the specification of which uniquely defines their impulse response, and "vice versa". From a mathematical viewpoint, continuous-time IIR LTI filters may be described in terms of linear differential equations, and their impulse responses considered as Green's functions of the equation. Continuous-time LTI filters may also be described in terms of the Laplace transform of their impulse response, which allows all of the characteristics of the filter to be analyzed by considering the pattern of zeros and poles of their Laplace transform in the complex plane. Similarly, discrete-time LTI filters may be analyzed via the Z-transform of their impulse response.
Before the advent of computer filter synthesis tools, graphical tools such as Bode plots and Nyquist plots were extensively used as design tools. Even today, they are invaluable tools for understanding filter behavior. Reference books had extensive plots of frequency response, phase response, group delay, and impulse response for various types of filters, of various orders. They also contained tables of values showing how to implement such filters as RLC ladders – very useful when amplifying elements were expensive compared to passive components. Such a ladder can also be designed to have minimal sensitivity to component variation, a property that is hard to evaluate without computer tools.
Many different analog filter designs have been developed, each trying to optimise some feature of the system response. For practical filters, a custom design is sometimes desirable, that can offer the best tradeoff between different design criteria, which may include component count and cost, as well as filter response characteristics.
These descriptions refer to the "mathematical" properties of the filter (that is, the frequency and phase response). These can be "implemented" as analog circuits (for instance, using a Sallen Key filter topology, a type of active filter), or as algorithms in digital signal processing systems.
Digital filters are much more flexible to synthesize and use than analog filters, where the constraints of the design permits their use. Notably, there is no need to consider component tolerances, and very high Q levels may be obtained.
FIR digital filters may be implemented by the direct convolution of the desired impulse response with the input signal.
They can easily be designed to give a matched filter for any arbitrary pulse shape.
IIR digital filters are often more difficult to design, due to problems including dynamic range issues, quantization noise and instability.
Typically digital IIR filters are designed as a series of digital biquad filters.
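A minimal sketch of one such biquad section and a two-stage cascade is below (plain Python/NumPy, direct form I; the coefficients are arbitrary stable values chosen for illustration, not a designed filter).

```python
import numpy as np

def biquad(x, b, a):
    """One second-order IIR section (biquad), direct form I:
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    b0, b1, b2 = b
    a1, a2 = a
    y = np.zeros(len(x))
    for n in range(len(x)):
        xm1 = x[n-1] if n >= 1 else 0.0
        xm2 = x[n-2] if n >= 2 else 0.0
        ym1 = y[n-1] if n >= 1 else 0.0
        ym2 = y[n-2] if n >= 2 else 0.0
        y[n] = b0*x[n] + b1*xm1 + b2*xm2 - a1*ym1 - a2*ym2
    return y

# A cascade: feed the output of one biquad into the next.
x = np.zeros(16); x[0] = 1.0                           # unit impulse
stage = biquad(x, b=(0.2, 0.4, 0.2), a=(-0.6, 0.2))    # illustrative, stable coefficients
out = biquad(stage, b=(0.2, 0.4, 0.2), a=(-0.6, 0.2))
print(out[:6])                                         # impulse response of the cascade
```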
All low-pass second-order continuous-time filters have a transfer function given by
formula_6
All band-pass second-order continuous-time filters have a transfer function given by
formula_7
where
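Taking formula_8 as the corner (resonance) frequency, Q as the quality factor and K as the gain, the two transfer functions above can be evaluated on the imaginary axis to obtain their magnitude responses; the Python sketch below uses assumed values of 1 kHz and Q = 5 purely for illustration.

```python
import numpy as np

def lowpass2_mag(f, f0, Q, K=1.0):
    # |H| of H(s) = K*w0^2 / (s^2 + (w0/Q)s + w0^2), evaluated at s = j*2*pi*f
    w0, s = 2*np.pi*f0, 1j*2*np.pi*f
    return np.abs(K * w0**2 / (s**2 + (w0/Q)*s + w0**2))

def bandpass2_mag(f, f0, Q, K=1.0):
    # |H| of H(s) = K*(w0/Q)s / (s^2 + (w0/Q)s + w0^2)
    w0, s = 2*np.pi*f0, 1j*2*np.pi*f
    return np.abs(K * (w0/Q)*s / (s**2 + (w0/Q)*s + w0**2))

f0, Q = 1000.0, 5.0                        # illustrative: 1 kHz corner, Q = 5
for f in (100.0, 1000.0, 10000.0):
    print(f, lowpass2_mag(f, f0, Q), bandpass2_mag(f, f0, Q))
# At f = f0 the low-pass peaks at K*Q and the band-pass passes with gain K.
```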
Notes and references.
Further reading.
| [
{
"math_id": 0,
"text": "H(\\omega)"
},
{
"math_id": 1,
"text": "|H(\\omega)|"
},
{
"math_id": 2,
"text": "y(t) = \\int_{0}^{T} x(t-\\tau)\\, h(\\tau)\\, d\\tau"
},
{
"math_id": 3,
"text": "y_k = \\sum_{i=0}^{N} x_{k-i}\\, h_i "
},
{
"math_id": 4,
"text": "\\Delta f"
},
{
"math_id": 5,
"text": "N=1/(\\Delta f \\, T)"
},
{
"math_id": 6,
"text": "H(s)=\\frac{K \\omega^{2}_{0}}{s^{2}+\\frac{\\omega_{0}}{Q}s+\\omega^{2}_{0}}."
},
{
"math_id": 7,
"text": "H(s)=\\frac{K \\frac{\\omega_{0}}{Q}s}{s^{2}+\\frac{\\omega_{0}}{Q}s+\\omega^{2}_{0}}."
},
{
"math_id": 8,
"text": "\\omega_{0}"
},
{
"math_id": 9,
"text": "s=\\sigma+j\\omega"
}
] | https://en.wikipedia.org/wiki?curid=9975 |
99767 | Chandra | Hindu god of the Moon
Chandra (), also known as Soma (), is the Hindu god of the Moon, and is associated with the night, plants and vegetation. He is one of the Navagraha (nine planets of Hinduism) and Dikpala (guardians of the directions).
Etymology and other names.
The word "Chandra" literally means "bright, shining or glittering" and is used for the "Moon" in Sanskrit and other Indian languages. It is also the name of various other figures in Hindu mythology, including an asura and a Suryavamsha king. It is also a common Indian name and surname. Both male and female name variations exist in many South Asian languages that originate from Sanskrit.
Some of the synonyms of Chandra include "Soma" (distill), "Indu" (bright drop), "Atrisuta" (son of Atri), "Shashin" or "Shachin" (marked by hare), "Taradhipa" (lord of stars) and "Nishakara" (the night maker), "Nakshatrapati" (lord of the Nakshatra), "Oshadhipati" (lord of herbs), "Uduraj or Udupati" (water lord), "Kumudanatha" (lord of lotuses) and "Udupa" (boat).
Soma.
Soma is one of the most common other names used for the deity; but the earliest use of the word to refer to the Moon is a subject of scholarly debate. Some scholars state that the word Soma is occasionally used for the Moon in the Vedas, while other scholars suggest that such usage emerged only in the post-Vedic literature.
In the Vedas, the word Soma is primarily used for an intoxicating and energizing/healing plant drink and the deity representing it. In post-Vedic Hindu mythology, Soma is used for Chandra, who is associated with the moon and the plant. The Hindu texts state that the Moon is lit and nourished by the Sun, and that it is Moon where the divine nectar of immortality resides. In Puranas, Soma is sometimes also used to refer to Vishnu, Shiva (as "Somanatha"), Yama and Kubera. In some Indian texts, Soma is the name of an apsara; alternatively it is the name of any medicinal concoction, or rice-water gruel, or heaven and sky, as well as the name of certain places of pilgrimage.
Inspired by his interest in Indian mysticism, Aldous Huxley named the drug used by the state to control the population in his novel Brave New World after the Vedic ritual drink Soma.
Literature.
The origin of Soma is traced back to the Hindu Vedic texts, where he is the personification of a drink made from a plant with the same name. Scholars state that the plant had an important role in Vedic civilization and thus, the deity was one of the most important gods of the pantheon. In these Vedic texts, Soma is praised as the lord of plants and forests; the king of rivers and earth; and the father of the gods. The entire Mandala 9 of the "Rigveda" is dedicated to Soma, both the plant and the deity. The identification of Soma as a lunar deity in the Vedic texts is a controversial topic among scholars. According to William J. Wilkins, "In later years the name Soma was [...] given to the moon. How and why this change took place is not known; but in the later of the Vedic hymns there is some evidence of the transition."
In post-Vedic texts like the "Ramayana", the "Mahabharata" and the "Puranas", Soma is mentioned as a lunar deity and has many epithets, including Chandra. According to most of these texts, Chandra and his brothers Dattatreya and Durvasa were the sons of the sage Atri and his wife Anasuya. The "Devi Bhagavata Purana" states Chandra to be the avatar of the creator god Brahma. Some texts contain varying accounts regarding Chandra's birth: according to one text, he is the son of Dharma, while another mentions Prabhakar as his father. Many legends about Chandra are told in the scriptures.
In one version of the puranas, Chandra and Tara—the star goddess and the wife of the devas' guru Brihaspati—fell in love with each other. He abducted her and made her his queen. Brihaspati, after multiple failed peace missions and threats, declared war against Chandra. The Devas sided with their teacher, while Shukra, an enemy of Brihaspati and the teacher of the Asuras, aided Chandra. After the intervention of Brahma stopped the war, Tara, pregnant, was returned to her husband. She later gave birth to a son named Budha, but there was a controversy over the paternity of the child, with both Chandra and Brihaspati claiming to be his father. Brahma once again intervened and questioned Tara, who eventually confirmed Chandra as the father of Budha. Budha's son was Pururavas, who established the Chandravamsha Dynasty.
Chandra married the 27 daughters of Prajapati Daksha: Ashvini, Bharani, Krittika, Rohini, Mrigashiras, Ardra, Punarvasu, Pushya, Ashlesha, Magha, Pūrvaphalguni, Uttaraphalguni, Hasta, Chitra, Svati, Vishakha, Anuradha, Jyeshtha, Mula, Purvashadha, Uttarashadha, Shravana, Dhanishta, Shatabhisha, Purvabhadrapada, Uttarabhadrapada and Revati. They each represent one of the 27 Nakshatras, or constellations near the Moon. Among all of his 27 wives, Chandra loved Rohini the most and spent most of his time with her. The 26 other wives became upset and complained to Daksha, who placed a curse on Chandra.
According to another legend, Ganesha was returning home on his mount Krauncha (a shrew) late on a full moon night after a mighty feast given by Kubera. On the journey back, a snake crossed their path and, frightened by it, his mount ran away, dislodging Ganesha in the process. An overstuffed Ganesha fell to the ground on his stomach, vomiting out all the Modaks he had eaten. On observing this, Chandra laughed at Ganesha. Ganesha lost his temper, broke off one of his tusks and flung it straight at the Moon, hurting him, and cursed him so that he would never be whole again. Therefore, it is forbidden to behold Chandra on Ganesh Chaturthi. This legend accounts for the Moon's waxing and waning, including a big crater on the Moon, a dark spot that is visible even from Earth.
Iconography.
Soma's iconography varies in Hindu texts. The most common is one where he is a white-coloured deity, holding a mace in his hand, riding a chariot with three wheels and three or more white horses (up to ten).
Soma as the Moon-deity is also found in Buddhism, and Jainism.
Zodiac and calendar.
Soma is the root of the word "Somavara" or Monday in the Hindu calendar. The word "Monday" in the Greco-Roman and other Indo-European calendars is also dedicated to the Moon. Soma is part of the Navagraha in the Hindu zodiac system. The role and importance of the Navagraha developed over time with various influences. Deifying the moon and its astrological significance occurred as early as the Vedic period and was recorded in the Vedas. The earliest work of astrology recorded in India is the Vedanga Jyotisha which began to be compiled in the 14th century BCE. The moon and various classical planets were referenced in the Atharvaveda around 1000 BCE.
The Navagraha was furthered by additional contributions from Western Asia, including Zoroastrian and Hellenistic influences. The Yavanajataka, or 'Science of the Yavanas', was written by the Indo-Greek named "Yavanesvara" ("Lord of the Greeks") under the rule of the Western Kshatrapa king Rudrakarman I. The Navagraha would further develop and culminate in the Shaka era with the Saka, or Scythian, people. Additionally the contributions by the Saka people would be the basis of the Indian national calendar, which is also called the Saka calendar.
The Hindu calendar is a lunisolar calendar which records both lunar and solar cycles. Like the Navagraha, it was developed with the successive contributions of various works.
Astronomy.
Soma was presumed to be a planet in Hindu astronomical texts. It is often discussed in various Sanskrit astronomical texts, such as the 5th century "Aryabhatiya" by Aryabhatta, the 6th century "Romaka" by Latadeva and "Panca Siddhantika" by Varahamihira, the 7th century "Khandakhadyaka" by Brahmagupta and the 8th century "Sisyadhivrddida" by Lalla. Other texts, such as the "Surya Siddhanta", dated to have been completed sometime between the 5th century and the 10th century, present their chapters on the various planets with deity mythologies. However, they show that the Hindu scholars were aware of elliptical orbits, and the texts include sophisticated formulae to calculate its past and future positions:
The longitude of Moon = formula_0
– "Surya Siddhanta" II.39.43
where "m" is the Moon's mean longitude, a is the longitude at apogee, P is epicycle of apsis, R=3438'.
Chandra temples.
Besides worship in Navagraha temples, Chandra is also worshipped in a number of temples dedicated to him.
In popular culture.
Chandra plays an important role in one of the first novel-length mystery stories in English, "The Moonstone" (1868).
The Sanskrit word "Chandrayāna" (, Moon Vehicle) is used to refer to India's lunar orbiters.
Notes.
References.
Bibliography.
External links.
| [
{
"math_id": 0,
"text": " m - \\frac{P \\times R \\sin (m - a)}{360} "
}
] | https://en.wikipedia.org/wiki?curid=99767 |
997715 | Spin isomers of hydrogen | Spin states of hydrogen
Molecular hydrogen occurs in two isomeric forms, one with its two proton nuclear spins aligned parallel (orthohydrogen), the other with its two proton spins aligned antiparallel (parahydrogen). These two forms are often referred to as spin isomers or as nuclear spin isomers.
Parahydrogen is in a lower energy state than is orthohydrogen. At room temperature and thermal equilibrium, thermal excitation causes hydrogen to consist of approximately 75% orthohydrogen and 25% parahydrogen. When hydrogen is liquified at low temperature, there is a slow spontaneous transition to a predominantly para ratio, with the released energy having implications for storage. Essentially pure parahydrogen form can be obtained at very low temperatures, but it is not possible to obtain a sample containing more than 75% orthohydrogen by heating.
A 50:50 mixture of ortho- and parahydrogen can be made in the laboratory by passing hydrogen over an iron(III) oxide catalyst at liquid nitrogen temperature (77 K) or by storing hydrogen at 77 K for 2–3 hours in the presence of activated charcoal. In the absence of a catalyst, gas phase parahydrogen takes days to relax to normal hydrogen at room temperature, while it takes hours to do so in organic solvents.
Nuclear spin states of H2.
Each hydrogen molecule (H2) consists of two hydrogen atoms linked by a covalent bond. If we neglect the small proportion of deuterium and tritium which may be present, each hydrogen atom consists of one proton and one electron. Each proton has an associated magnetic moment, which is associated with the proton's spin of <templatestyles src="Fraction/styles.css" />1⁄2. In the H2 molecule, the spins of the two hydrogen nuclei (protons) couple to form a triplet state known as orthohydrogen, and a singlet state known as parahydrogen.
The triplet orthohydrogen state has total nuclear spin "I" = 1 so that the component along a defined axis can have the three values "M""I" = 1, 0, or −1. The corresponding nuclear spin wavefunctions are formula_0, formula_1 and formula_2. This formalism uses standard bra–ket notation; the symbol ↑ represents the spin-up wavefunction and the symbol ↓ the spin-down wavefunction for a nucleus, so ↑↓ means that the first nucleus is up and the second down. Each orthohydrogen energy level then has a (nuclear) spin degeneracy of three, meaning that it corresponds to three states of the same energy (in the absence of a magnetic field). The singlet parahydrogen state has nuclear spin quantum numbers "I" = 0 and "M""I" = 0, with wavefunction formula_3. Since there is only one possibility, each parahydrogen level has a spin degeneracy of one and is said to be non-degenerate.
Allowed rotational energy levels.
Since protons have spin <templatestyles src="Fraction/styles.css" />1⁄2, they are fermions and the permutational antisymmetry of the total H2 wavefunction imposes restrictions on the possible rotational states of the two forms of H2. Orthohydrogen, with symmetric nuclear spin functions, can only have rotational wavefunctions that are antisymmetric with respect to permutation of the two protons, corresponding to odd values of the rotational quantum number "J"; conversely, parahydrogen with an antisymmetric nuclear spin function, can only have rotational wavefunctions that are symmetric with respect to permutation of the two protons, corresponding to even "J".
The para form whose lowest level is "J" = 0 is more stable by 1.455 kJ/mol than the ortho form whose lowest level is "J" = 1. The ratio between numbers of ortho and para molecules is about 3:1 at standard temperature where many rotational energy levels are populated, favoring the ortho form as a result of thermal energy. However, at low temperatures only the "J" = 0 level is appreciably populated, so that the para form dominates at low temperatures (approximately 99.8% at 20 K). The heat of vaporization is only 0.904 kJ/mol. As a result, ortho liquid hydrogen equilibrating to the para form releases enough energy to cause significant loss by boiling.
Thermal properties.
Applying the rigid rotor approximation, the energies and degeneracies of the rotational states are given by:
formula_4.
The rotational partition function is conventionally written as:
formula_5.
However, as long as the two spin isomers are not in equilibrium, it is more useful to write separate partition functions for each:
formula_6
The factor of 3 in the partition function for orthohydrogen accounts for the threefold spin degeneracy of the "I" = 1 (triplet) nuclear spin state; when equilibrium between the spin isomers is possible, a general partition function incorporating this degeneracy difference can be written as:
formula_7
The molar rotational energies and heat capacities are derived for any of these cases from:
formula_8
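A short Python sketch of these separate partition functions is below, using the characteristic rotational temperature quoted later in this section (θrot ≈ 87.5 K) and a finite cut-off in J; it reproduces the behaviour discussed in the article: almost pure parahydrogen at 20 K, roughly a 50:50 mixture at 77 K, and the 3:1 ortho:para ratio near room temperature.

```python
import numpy as np

theta_rot = 87.49   # K; half of the 174.98 K level spacing quoted below

def z_para(T, j_max=40):
    J = np.arange(0, j_max, 2)                       # even J only
    return np.sum((2*J + 1) * np.exp(-J*(J + 1)*theta_rot/T))

def z_ortho(T, j_max=41):
    J = np.arange(1, j_max, 2)                       # odd J, with threefold spin degeneracy
    return 3.0 * np.sum((2*J + 1) * np.exp(-J*(J + 1)*theta_rot/T))

def ortho_fraction_at_equilibrium(T):
    zo, zp = z_ortho(T), z_para(T)
    return zo / (zo + zp)

for T in (20.0, 77.0, 300.0):                        # kelvin
    print(T, round(ortho_fraction_at_equilibrium(T), 3))
# almost zero at 20 K, about 0.5 at 77 K, approaching 0.75 at 300 K
```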
Plots shown here are molar rotational energies and heat capacities for ortho- and parahydrogen, and the "normal" ortho:para ratio (3:1) and equilibrium mixtures:
Because of the antisymmetry-imposed restriction on possible rotational states, orthohydrogen has residual rotational energy at low temperature wherein nearly all the molecules are in the "J" = 1 state (molecules in the symmetric spin-triplet state cannot fall into the lowest, symmetric rotational state) and possesses nuclear-spin entropy due to the triplet state's threefold degeneracy. The residual energy is significant because the rotational energy levels are relatively widely spaced in H2; the gap between the first two levels when expressed in temperature units is twice the characteristic rotational temperature for H2:
formula_9.
This is the "T" = 0 intercept seen in the molar energy of orthohydrogen. Since "normal" room-temperature hydrogen is a 3:1 ortho:para mixture, its molar residual rotational energy at low temperature is (3/4) × 2"Rθ"rot ≈ 1091 J/mol, which is somewhat larger than the enthalpy of vaporization of normal hydrogen, 904 J/mol at the boiling point, "T"b ≈ 20.369 K. Notably, the boiling points of parahydrogen and normal (3:1) hydrogen are nearly equal; for parahydrogen ∆Hvap ≈ 898 J/mol at "T"b ≈ 20.277 K, and it follows that nearly all the residual rotational energy of orthohydrogen is retained in the liquid state.
However, orthohydrogen is thermodynamically unstable at low temperatures and spontaneously converts into parahydrogen. This process lacks any natural de-excitation radiation mode, so it is slow in the absence of a catalyst which can facilitate interconversion of the singlet and triplet spin states. At room temperature, hydrogen contains 75% orthohydrogen, a proportion which the liquefaction process preserves if carried out in the absence of a catalyst like ferric oxide, activated carbon, platinized asbestos, rare earth metals, uranium compounds, chromic oxide, or some nickel compounds to accelerate the conversion of the liquid hydrogen into parahydrogen. Alternatively, additional refrigeration equipment can be used to slowly absorb the heat that the orthohydrogen fraction will (more slowly) release as it spontaneously converts into parahydrogen. If orthohydrogen is not removed from rapidly liquified hydrogen, without a catalyst, the heat released during its decay can boil off as much as 50% of the original liquid.
History.
The unusual heat capacity of hydrogen was discovered in 1912 by Arnold Eucken. The two forms of molecular hydrogen were first proposed by Werner Heisenberg and Friedrich Hund in 1927. Taking into account this theoretical framework, pure parahydrogen was first synthesized by Paul Harteck and Karl Friedrich Bonhoeffer in 1929 at the Kaiser Wilhelm Institute for Physical Chemistry and Electrochemistry. When Heisenberg was awarded the 1932 Nobel prize in physics for the creation of quantum mechanics, this discovery of the "allotropic forms of hydrogen" was singled out as its most noteworthy application. Further work on the properties and chemical reactivity of parahydrogen was carried out in the following decade by Elly Schwab-Agallidis and Georg-Maria Schwab.
Modern isolation of pure parahydrogen has since been achieved using rapid in-vacuum deposition of millimeters-thick solid parahydrogen (p–H2) samples, which are notable for their excellent optical qualities.
Use in NMR and MRI.
When an excess of parahydrogen is used during hydrogenation reactions (instead of the normal mixture of orthohydrogen to parahydrogen of 3:1), the resultant product exhibits hyperpolarized signals in proton NMR spectra, an effect termed PHIP (Parahydrogen Induced Polarisation) or, equivalently, PASADENA (Parahydrogen And Synthesis Allow Dramatically Enhanced Nuclear Alignment; named for first recognition of the effect by Bowers and Weitekamp of Caltech), a phenomenon that has been used to study the mechanism of hydrogenation reactions.
Signal amplification by reversible exchange (SABRE) is a technique to hyperpolarize samples without chemically modifying them. Compared to orthohydrogen or organic molecules, a much greater fraction of the hydrogen nuclei in parahydrogen align with an applied magnetic field. In SABRE, a metal center reversibly binds to both the test molecule and a parahydrogen molecule facilitating the target molecule to pick up the polarization of the parahydrogen. This technique can be improved and utilized for a wide range of organic molecules by using an intermediate "relay" molecule like ammonia. The ammonia efficiently binds to the metal center and picks up the polarization from the parahydrogen. The ammonia then transfers the polarization to other molecules that don't bind as well to the metal catalyst. This enhanced NMR signal allows the rapid analysis of very small amounts of material and has great potential for applications in magnetic resonance imaging.
Deuterium.
Diatomic deuterium (D2) has nuclear spin isomers like diatomic hydrogen, but with different populations of the two forms because the deuterium nucleus (deuteron) is a boson with nuclear spin equal to one. There are six possible nuclear spin wave functions which are ortho or symmetric to exchange of the two nuclei, and three which are para or antisymmetric. Ortho states correspond to even rotational levels with symmetric rotational functions so that the total wavefunction is symmetric as required for the exchange of two bosons, and para states correspond to odd rotational levels. The ground state ("J" = 0) populated at low temperature is ortho, and at standard temperature the ortho:para ratio is 2:1.
Other substances with spin isomers.
Other molecules and functional groups containing two hydrogen atoms, such as water and methylene (CH2), also have ortho- and para- forms (e.g. orthowater and parawater), but this is of little significance for their thermal properties. Their ortho:para ratios differ from that of dihydrogen. The ortho and para forms of water have recently been isolated. Para water was found to be 25% more reactive for a proton-transfer reaction.
Molecular oxygen (O2) also exists in three lower-energy triplet states and one singlet state, as ground-state paramagnetic triplet oxygen and energized highly reactive diamagnetic singlet oxygen. These states arise from the spins of their unpaired electrons, not their protons or nuclei.
References.
| [
{
"math_id": 0,
"text": "\\left|\\uparrow \\uparrow \\right\\rangle"
},
{
"math_id": 1,
"text": "\\textstyle\\frac{1}{\\sqrt{2}}(\\left|\\uparrow \\downarrow \\right\\rangle + \\left|\\downarrow \\uparrow \\right\\rangle)"
},
{
"math_id": 2,
"text": "\\left|\\downarrow \\downarrow \\right\\rangle"
},
{
"math_id": 3,
"text": "\\textstyle\\frac{1}{\\sqrt{2}}(\\left|\\uparrow \\downarrow \\right\\rangle - \\left|\\downarrow \\uparrow \\right\\rangle)"
},
{
"math_id": 4,
"text": "E_J = \\frac{J(J + 1)\\hbar^2}{2I};\\quad g_J = 2J + 1"
},
{
"math_id": 5,
"text": "Z_\\text{rot} = \\sum\\limits_{J=0}^\\infty{g_J e^{-E_J/k_\\text{B} T\\;}}"
},
{
"math_id": 6,
"text": "\\begin{align}\n Z_{\\text{para}} &= \\sum\\limits_{\\text{even }J}{(2J + 1)e^{{-J(J + 1)\\hbar^2}/{2Ik_\\text{B} T}\\;}}\\\\\n Z_{\\text{ortho}} &= 3\\sum\\limits_{\\text{odd }J}{(2J + 1)e^{{-J(J + 1)\\hbar^2}/{2Ik_\\text{B} T}\\;}}\n\\end{align}"
},
{
"math_id": 7,
"text": "Z_\\text{equil} = \\sum\\limits_{J=0}^\\infty{\\left(2 - (-1)^{J}\\right)(2J + 1)e^{{-J(J + 1)\\hbar^2}/{2Ik_\\text{B} T}\\;}}"
},
{
"math_id": 8,
"text": "\\begin{align}\n U_\\text{rot} &= RT^2 \\left( \\frac{\\partial \\ln Z_\\text{rot}}{\\partial T} \\right) \\\\\n C_{v,\\text{ rot}} &= \\frac{\\partial U_\\text{rot}}{\\partial T}\n\\end{align}"
},
{
"math_id": 9,
"text": "\\frac{E_{J=1} - E_{J=0}}{k_\\text{B}} = 2\\theta_\\text{rot} = \\frac{\\hbar^2}{k_\\text{B}I} \\approx 174.98\\text{ K}"
}
] | https://en.wikipedia.org/wiki?curid=997715 |
9980598 | Nth-term test |
Test for the divergence of an infinite series
In mathematics, the "n"th-term test for divergence is a simple test for the divergence of an infinite series: if formula_0 or if the limit does not exist, then formula_1 diverges. Many authors do not name this test or give it a shorter name.
When testing if a series converges or diverges, this test is often checked first due to its ease of use.
In the case of p-adic analysis the term test is a necessary and sufficient condition for convergence due to the non-archimedean triangle inequality.
Usage.
Unlike stronger convergence tests, the term test cannot prove by itself that a series converges. In particular, the converse to the test is not true; instead all one can say is: if formula_2 then formula_1 may or may not converge. In other words, if formula_2 the test is inconclusive. The harmonic series is a classic example of a divergent series whose terms limit to zero. The more general class of "p"-series,
formula_3
exemplifies the possible results of the test:
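A quick numerical illustration of these outcomes (plain Python; the 10,000-term cut-off is an arbitrary choice) is sketched below: the harmonic series has terms tending to zero yet diverges, a "p"-series with p = 2 converges, and a series whose terms tend to 1 is caught by the test.

```python
def partial_sums(a, N):
    # running partial sums s_n = a(1) + ... + a(n)
    s, out = 0.0, []
    for n in range(1, N + 1):
        s += a(n)
        out.append(s)
    return out

harmonic = lambda n: 1.0 / n          # a_n -> 0, yet the series diverges (test inconclusive)
p_two    = lambda n: 1.0 / n**2       # a_n -> 0 and the series converges (to pi^2/6)
bad      = lambda n: n / (n + 1.0)    # a_n -> 1 != 0, so the test proves divergence

for name, a in [("harmonic", harmonic), ("1/n^2", p_two), ("n/(n+1)", bad)]:
    print(name, partial_sums(a, 10000)[-1])
```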
Proofs.
The test is typically proven in contrapositive form: if formula_1 converges, then formula_4
Limit manipulation.
If "s""n" are the partial sums of the series, then the assumption that the series
converges means that
formula_5
for some number "L". Then
formula_6
Cauchy's criterion.
The assumption that the series converges means that it passes Cauchy's convergence test: for every formula_7 there is a number "N" such that
formula_8
holds for all "n" > "N" and "p" ≥ 1. Setting "p" = 1 recovers the definition of the statement
formula_9
Scope.
The simplest version of the term test applies to infinite series of real numbers. The above two proofs, by invoking the Cauchy criterion or the linearity of the limit, also work in any other normed vector space (or any (additively written) abelian group).
Notes.
| [
{
"math_id": 0,
"text": "\\lim_{n \\to \\infty} a_n \\neq 0"
},
{
"math_id": 1,
"text": "\\sum_{n=1}^\\infty a_n"
},
{
"math_id": 2,
"text": "\\lim_{n \\to \\infty} a_n = 0,"
},
{
"math_id": 3,
"text": "\\sum_{n=1}^\\infty \\frac{1}{n^p},"
},
{
"math_id": 4,
"text": "\\lim_{n \\to \\infty} a_n = 0."
},
{
"math_id": 5,
"text": "\\lim_{n\\to\\infty} s_n = L"
},
{
"math_id": 6,
"text": "\\lim_{n\\to\\infty} a_n = \\lim_{n\\to\\infty}(s_n-s_{n-1}) = \\lim_{n\\to\\infty} s_n - \\lim_{n\\to\\infty} s_{n-1} = L-L = 0."
},
{
"math_id": 7,
"text": "\\varepsilon>0"
},
{
"math_id": 8,
"text": "\\left|a_{n+1}+a_{n+2}+\\cdots+a_{n+p}\\right|<\\varepsilon"
},
{
"math_id": 9,
"text": "\\lim_{n\\to\\infty} a_n = 0."
}
] | https://en.wikipedia.org/wiki?curid=9980598 |
998070 | Node (physics) | Point with minimum wave amplitude
A node is a point along a standing wave where the wave has minimum amplitude. For instance, in a vibrating guitar string, the ends of the string are nodes. By changing the position of the end node through frets, the guitarist changes the effective length of the vibrating string and thereby the note played. The opposite of a node is an anti-node, a point where the amplitude of the standing wave is at maximum. These occur midway between the nodes.
Explanation.
Standing waves result when two sinusoidal wave trains of the same frequency are moving in opposite directions in the same space and interfere with each other. They occur when waves are reflected at a boundary, such as sound waves reflected from a wall or electromagnetic waves reflected from the end of a transmission line, and particularly when waves are confined in a resonator at resonance, bouncing back and forth between two boundaries, such as in an organ pipe or guitar string.
In a standing wave the nodes are a series of locations at equally spaced intervals where the wave amplitude (motion) is zero (see animation above). At these points the two waves add with opposite phase and cancel each other out. They occur at intervals of half a wavelength (λ/2). Midway between each pair of nodes are locations where the amplitude is maximum. These are called the antinodes. At these points the two waves add with the same phase and reinforce each other.
In cases where the two opposite wave trains are not the same amplitude, they do not cancel perfectly, so the amplitude of the standing wave at the nodes is not zero but merely a minimum. This occurs when the reflection at the boundary is imperfect. This is indicated by a finite standing wave ratio (SWR), the ratio of the amplitude of the wave at the antinode to the amplitude at the node.
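A small Python sketch of this situation is below: two counter-propagating waves with unequal (assumed) amplitudes are superposed, the oscillation amplitude at each position is computed, and the nodes come out half a wavelength apart with a finite standing wave ratio.

```python
import numpy as np

A1, A2 = 1.0, 0.8            # assumed amplitudes of the incident and reflected waves
k = 2 * np.pi                # wavenumber for a wavelength of 1 (arbitrary units)

x = np.linspace(0.0, 2.0, 2001)
# A1*sin(kx - wt) + A2*sin(kx + wt) rearranges to
#   (A1 + A2)*sin(kx)*cos(wt) + (A2 - A1)*cos(kx)*sin(wt),
# so the oscillation amplitude at each x is:
envelope = np.hypot((A1 + A2) * np.sin(k * x), (A1 - A2) * np.cos(k * x))

is_min = (envelope[1:-1] < envelope[:-2]) & (envelope[1:-1] < envelope[2:])
print(x[1:-1][is_min])                    # nodes at 0.5, 1.0, 1.5, spaced lambda/2 apart
print(envelope.max() / envelope.min())    # standing wave ratio, (A1+A2)/(A1-A2) = 9
```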
In resonance of a two dimensional surface or membrane, such as a drumhead or vibrating metal plate, the nodes become nodal lines, lines on the surface where the surface is motionless, dividing the surface into separate regions vibrating with opposite phase. These can be made visible by sprinkling sand on the surface, and the intricate patterns of lines resulting are called Chladni figures.
In transmission lines a voltage node is a current antinode, and a voltage antinode is a current node.
Nodes are the points of zero displacement, not the points where two constituent waves intersect.
Boundary conditions.
Where the nodes occur in relation to the boundary reflecting the waves depends on the end conditions or boundary conditions. Although there are many types of end conditions, the ends of resonators are usually one of two types that cause total reflection:
Examples.
Sound.
A sound wave consists of alternating cycles of compression and expansion of the wave medium. During compression, the molecules of the medium are forced together, resulting in the increased pressure and density. During expansion the molecules are forced apart, resulting in the decreased pressure and density.
The number of nodes in a specified length is directly proportional to the frequency of the wave.
Occasionally on a guitar, violin, or other stringed instrument, nodes are used to create harmonics. When the finger is placed on top of the string at a certain point, but does not push the string all the way down to the fretboard, a third node is created (in addition to the bridge and nut) and a harmonic is sounded. During normal play when the frets are used, the harmonics are always present, although they are quieter. With the artificial node method, the overtone is louder and the fundamental tone is quieter. If the finger is placed at the midpoint of the string, the first overtone is heard, which is an octave above the fundamental note which would be played, had the harmonic not been sounded. When two additional nodes divide the string into thirds, this creates an octave and a perfect fifth (twelfth). When three additional nodes divide the string into quarters, this creates a double octave. When four additional nodes divide the string into fifths, this creates a double-octave and a major third (17th). The octave, major third and perfect fifth are the three notes present in a major chord.
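The pitches produced by these artificial nodes are integer multiples of the open string's fundamental; the tiny sketch below (Python, with an assumed 110 Hz open string) lists the cases described above.

```python
# Touching the string at a node forces the harmonic whose nodes include that point;
# dividing the string into n equal parts sounds the nth harmonic, n times the fundamental.
fundamental = 110.0   # Hz, an illustrative open-string pitch

cases = [(2, "octave"),
         (3, "octave + perfect fifth (twelfth)"),
         (4, "double octave"),
         (5, "double octave + major third (17th)")]

for divisions, name in cases:
    print(f"string divided into {divisions}: {divisions * fundamental:.1f} Hz -> {name}")
```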
The characteristic sound that allows the listener to identify a particular instrument is largely due to the relative magnitude of the harmonics created by the instrument.
Waves in two or three dimensions.
In two dimensional standing waves, nodes are curves (often straight lines or circles when displayed on simple geometries.) For example, sand collects along the nodes of a vibrating Chladni plate to indicate regions where the plate is not moving.
In chemistry, quantum mechanical waves, or "orbitals", are used to describe the wave-like properties of electrons. Many of these quantum waves have nodes and antinodes as well. The number and position of these nodes and antinodes give rise to many of the properties of an atom or covalent bond. Atomic orbitals are classified according to the number of radial and angular nodes. A radial node for the hydrogen atom is a sphere that occurs where the wavefunction for an atomic orbital is equal to zero, while the angular node is a flat plane.
Molecular orbitals are classified according to bonding character. Molecular orbitals with an antinode between nuclei are very stable, and are known as "bonding orbitals" which strengthen the bond. In contrast, molecular orbitals with a node between nuclei will not be stable due to electrostatic repulsion and are known as "anti-bonding orbitals" which weaken the bond. Another such quantum mechanical concept is the particle in a box where the number of nodes of the wavefunction can help determine the quantum energy state—zero nodes corresponds to the ground state, one node corresponds to the 1st excited state, etc. In general, "If one arranges the eigenstates in the order of increasing energies, formula_0, the eigenfunctions likewise fall in the order of increasing number of nodes; the "n"th eigenfunction has "n−1" nodes, between each of which the following eigenfunctions have at least one node". | [
{
"math_id": 0,
"text": "\\epsilon_1,\\epsilon_2, \\epsilon_3,..."
}
] | https://en.wikipedia.org/wiki?curid=998070 |
998087 | Ibn Yunus | Egyptian mathematician (c. 950–1009)
Abu al-Hasan 'Ali ibn Abi al-Said 'Abd al-Rahman ibn Ahmad ibn Yunus ibn Abd al-'Ala al-Sadafi al-Misri (Egyptian Arabic: ابن يونس; c. 950 – 1009) was an important Arab Egyptian astronomer and mathematician, whose works are noted for being ahead of their time, having been based on meticulous calculations and attention to detail. He is one of the famous Muslim astronomers who appeared after Al-Battani and Abu al-Wafa' al-Buzjani, and he was perhaps the greatest astronomer of his time. Because of his brilliance, the Fatimids gave him generous gifts and established an observatory for him on Mount Mokattam near Fustat. Al-Aziz Billah ordered him to make astronomical tables, which he completed during the reign of Al-Hakim bi-Amr Allah, son of Al-Aziz, and called it al-Zij al-Kabir al-Hakimi.
The crater Ibn Yunus on the Moon is named after him.
Life.
Information regarding his early life and education is uncertain. He was born in Egypt between 950 and 952 and came from a respected family in Fustat. His father was a historian, biographer, and scholar of hadith who wrote two volumes about the history of Egypt—one about the Egyptians and one based on traveller commentary on Egypt. A prolific writer, ibn Yunus' father has been described as "Egypt's most celebrated early historian and first known compiler of a biographical dictionary devoted exclusively to Egyptians". His grandfather was also one of the scholars who specialized in astronomy, and Ibn Yunus enjoyed great prestige among the Fatimid caliphs, who encouraged him to pursue his astronomical and mathematical research. They built an observatory for him near Fustat (Cairo), and equipped it with all the necessary machinery and tools. Sarton says of him that he was perhaps the greatest Muslim astronomer. His great-grandfather had been an associate of the noted legal scholar al-Shafi'i.
Early in the life of ibn Yunus, the Fatimid dynasty came to power and the new city of Cairo was founded. In Cairo, he worked as an astronomer for the Fatimid dynasty for twenty-six years, first for the Caliph Al-Aziz Billah and then for al-Hakim. Ibn Yunus dedicated his most famous astronomical work, "al-Zij al-Kabir al-Hakimi", to the latter.
As well as for his mathematics, Ibn Yunus was also known as an eccentric and a poet.
Works.
One of his greatest astronomical achievements was that he calculated, with great accuracy, the inclination of the ecliptic after observing solar and lunar eclipses.
Ibn Yunus excelled in trigonometry: he was the first to solve some of the trigonometric equations that are used in astronomy, and he conducted valuable research that helped advance the field. He was also the first to establish a law of spherical trigonometry that was of great importance to astronomers before the discovery of logarithms, since by means of that law multiplication operations in trigonometry could be converted into addition operations, which facilitated the solution of many long and complex problems.
Ibn Yunus showed great ingenuity in solving many difficult problems in astronomy.
Ibn Yunus observed solar and lunar eclipses in Cairo in 978 AD, and his calculations remained the most accurate available until modern observing instruments appeared.
Astrology.
In astrology, he was noted for making predictions and for having written the "Kitab bulugh al-umniyya" ("On the Attainment of Desire"), a work concerning the heliacal risings of Sirius and predictions concerning the day of the week on which the Coptic year will start.
Astronomy.
Ibn Yunus' most famous work in Islamic astronomy, "al-Zij al-Kabir al-Hakimi" (c. 1000), was a handbook of astronomical tables which contained very accurate observations, many of which may have been obtained with very large astronomical instruments. According to N. M. Swerdlow, the "Zij al-Kabir al-Hakimi" is "a work of outstanding originality of which just over half survives".
Yunus expressed the solutions in his "zij" without mathematical symbols, but Delambre noted in his 1819 translation of the Hakemite tables that two of Ibn Yunus' methods for determining the time from solar or stellar altitude were equivalent to the trigonometric identity formula_0 identified in Johannes Werner's 16th-century manuscript on conic sections. Now recognized as one of Werner's formulas, it was essential for the development of prosthaphaeresis and logarithms decades later.
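The identity can be demonstrated in a few lines of Python; the function below is only an illustration of how the formula turns a product of cosines into a sum, which is what made it useful for prosthaphaeresis.

```python
import math

def prosthaphaeresis_product(a, b):
    """Compute cos(a)*cos(b) using Werner's formula
    2*cos(a)*cos(b) = cos(a+b) + cos(a-b); in historical practice a table of
    cosines replaces the two cosine evaluations, leaving only addition and halving."""
    return 0.5 * (math.cos(a + b) + math.cos(a - b))

a, b = math.radians(37.0), math.radians(53.0)   # arbitrary example angles
print(prosthaphaeresis_product(a, b))           # same value as...
print(math.cos(a) * math.cos(b))                # ...the direct multiplication
```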
Ibn Yunus described 40 planetary conjunctions and 30 lunar eclipses. For example, he accurately describes the planetary conjunction that occurred in the year 1000 as follows:
A conjunction of Venus and Mercury in Gemini, observed in the western sky: The two planets were in conjunction after sunset on the night [of Sunday 19 May 1000]. The time was approximately eight equinoctial hours after midday on Sunday. Mercury was north of Venus and their latitude difference was a third of a degree.
Modern knowledge of the positions of the planets confirms that his description, and his calculation of the separation as one-third of a degree, is exactly correct. Ibn Yunus's observations of conjunctions and eclipses were used in Richard Dunthorne's and Simon Newcomb's respective calculations of the secular acceleration of the Moon.
Pendulum.
Recent encyclopaedias and popular accounts claim that the tenth century astronomer Ibn Yunus used a pendulum for time measurement, despite the fact that it has been known for nearly a hundred years that this is based on nothing more than an error made in 1684 by the Savilian Professor of Astronomy at Oxford Edward Bernard.
Ibn Yunus's philosophy.
In his scientific studies, he only believed in what his mind was convinced of, and he did not care what people said about him. His philosophy was summarized in three points:
Narration of Hadith.
Ibn Yunus narrated hadiths and reports from his father, Abi al-Sa'id, but the scholars rejected his narration due to his preoccupation with astrology and magic.
Books.
Ibn Yunus wrote many works; his most important book is "Al-Zij Al-Kabir Al-Hakimi", which he began writing by order of the Fatimid Caliph Al-Aziz in the year 380 AH/990 AD and completed in 1007 AD during the reign of Caliph Al-Hakim, son of Al-Aziz, after whom he named it Al-Zij Al-Hakimi.
The word "Zij" is of Persian origin (from "zik"), and in the modern sense it means a set of mathematical astronomical tables. He also wrote a book called "Zij Ibn Yunus", and the numbers he included in his two "zij" books are correct to the seventh decimal place, which indicates exceptional accuracy of calculation. Many astronomers drew on his work, especially after his "zij" spread to the East, and the Egyptians relied on Ibn Yunus' "zij" for their calendars for a long period of time.
References.
| [
{
"math_id": 0,
"text": "2\\cos(a)\\cos(b) = \\cos(a+b)+\\cos(a-b)"
}
] | https://en.wikipedia.org/wiki?curid=998087 |
9981485 | Analemmatic sundial | Analemmatic sundials are a type of horizontal sundial that has a vertical gnomon and hour markers positioned in an elliptical pattern. The gnomon is not fixed and must change position daily to accurately indicate time of day. Hence there are no hour lines on the dial and the time of day is read only on the ellipse. As with most sundials, analemmatic sundials mark solar time rather than clock time.
Description.
An analemmatic sundial is completely defined by
Analemmatic sundials are sometimes designed with a human as the gnomon. In this case the size of the hour marker ellipse is constrained by human height and the latitude of the sundial location, since the human gnomon shadow must fall on the hour marker ellipse to accurately indicate the time of day. Human gnomon analemmatic sundials are not practical at lower latitudes where a human shadow is quite short during the summer months. A 66-inch tall person casts a 4-inch shadow at 27 deg latitude on the summer solstice.
The use of the adjective "analemmatic" to describe this class of sundial can be misleading, because there is no use of the equation of time or the analemma in the design of an analemmatic sundial. Mayall refers to the analemmatic sundial as "the so-called Analemmatic Dial", implying a lack of connection to the analemma. The dial of Brou in front of the church of Brou in Bourg-en-Bresse, France is an example of the erroneous use of the analemma in the construction of an analemmatic sundial. Rohr states "The gnomon is displaced on the short axis of the ellipse and not on the meridian, whose presence here in the shape of an 8 is a mistake."
Construction.
An analemmatic sundial uses a vertical gnomon and its hour lines are the vertical projection of the hour lines of a circular equatorial sundial onto a flat plane. Therefore, the analemmatic sundial is an ellipse, where the short axis is aligned north–south and the long axis is aligned east–west. The noon hour line points true North, whereas the hour lines for 6am and 6pm point due West and East, respectively; the ratio of the short to long axes equals the sine sin(Φ) of the local geographical latitude, denoted Φ. All the hour lines converge to a single centre; the angle θ of a given hour line with the noon hour is given by the formula
formula_0
where "t" is the time (in hours) before or after noon.
However, the vertical gnomon does not always stand at the centre of the hour lines; rather, to show the correct time, the gnomon must be moved daily northwards from the centre by the distance
formula_1
where "W" is half the width of the ellipse and δ is the Sun's declination at that time of year. The declination measures how far the sun is above the celestial equator; at the equinoxes, δ=0 whereas it equals roughly ±23.5° at the summer and winter solstices.
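To make the construction concrete, the short sketch below lays out the hour-line angles and the daily gnomon offset from the two formulas above; the latitude, ellipse half-width and declination are arbitrary example values, not taken from the article:

import math

phi = math.radians(52.0)      # example latitude
W = 1.0                       # example half-width of the ellipse, in metres
delta = math.radians(23.44)   # solar declination near the summer solstice

# hour-line angle from the noon line: tan(theta) = tan(15 deg * t) / sin(phi)
for t in range(-6, 7):        # t = hours before (negative) or after noon
    theta = math.degrees(math.atan2(math.tan(math.radians(15 * t)), math.sin(phi)))
    print(f"{12 + t:2d}:00 -> {theta:6.1f} degrees from the noon line")

# daily gnomon position north of the centre: Y = W * cos(phi) * tan(delta)
Y = W * math.cos(phi) * math.tan(delta)
print(f"gnomon offset near the solstice: {Y:.3f} m")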
References.
| [
{
"math_id": 0,
"text": "\n\\tan \\theta = \\frac{\\tan (15^{\\circ} \\times t)}{\\sin \\phi }\n"
},
{
"math_id": 1,
"text": "\nY = W \\cos \\phi \\tan \\delta \\,\n"
}
] | https://en.wikipedia.org/wiki?curid=9981485 |
9981737 | Scoring algorithm | Scoring algorithm, also known as Fisher's scoring, is a form of Newton's method used in statistics to solve maximum likelihood equations numerically, named after Ronald Fisher.
Sketch of derivation.
Let formula_0 be random variables, independent and identically distributed with twice differentiable p.d.f. formula_1, and we wish to calculate the maximum likelihood estimator (M.L.E.) formula_2 of formula_3. First, suppose we have a starting point for our algorithm formula_4, and consider a Taylor expansion of the score function, formula_5, about formula_4:
formula_6
where
formula_7
is the observed information matrix at formula_4. Now, setting formula_8, using that formula_9 and rearranging gives us:
formula_10
We therefore use the algorithm
formula_11
and under certain regularity conditions, it can be shown that formula_12.
Fisher scoring.
In practice, formula_13 is usually replaced by formula_14, the Fisher information, thus giving us the Fisher Scoring Algorithm:
formula_15.
Under some regularity conditions, if formula_16 is a consistent estimator, then formula_17 (the correction after a single step) is 'optimal' in the sense that its error distribution is asymptotically identical to that of the true max-likelihood estimate.
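As an illustration (not part of the original text), the sketch below applies Fisher scoring to a logistic-regression log-likelihood, for which the score is V(beta) = X'(y - p) and the Fisher information is I(beta) = X'WX with W = diag(p_i(1 - p_i)); the simulated data, starting point and stopping tolerance are arbitrary choices. Because logistic regression uses the canonical link, observed and expected information coincide here, so Fisher scoring and Newton's method produce the same iterates.

import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, d - 1))])
beta_true = np.array([0.5, -1.0, 2.0])           # example coefficients
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

beta = np.zeros(d)                               # starting point theta_0
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    score = X.T @ (y - p)                        # V(beta)
    fisher = X.T @ (X * (p * (1 - p))[:, None])  # I(beta) = X' W X
    step = np.linalg.solve(fisher, score)        # I(beta)^{-1} V(beta)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break
print(beta)   # maximum-likelihood estimate; approaches beta_true as n grows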
References.
| [
{
"math_id": 0,
"text": "Y_1,\\ldots,Y_n"
},
{
"math_id": 1,
"text": "f(y; \\theta)"
},
{
"math_id": 2,
"text": "\\theta^*"
},
{
"math_id": 3,
"text": "\\theta"
},
{
"math_id": 4,
"text": "\\theta_0"
},
{
"math_id": 5,
"text": "V(\\theta)"
},
{
"math_id": 6,
"text": "V(\\theta) \\approx V(\\theta_0) - \\mathcal{J}(\\theta_0)(\\theta - \\theta_0), \\,"
},
{
"math_id": 7,
"text": "\\mathcal{J}(\\theta_0) = - \\sum_{i=1}^n \\left. \\nabla \\nabla^{\\top} \\right|_{\\theta=\\theta_0} \\log f(Y_i ; \\theta)"
},
{
"math_id": 8,
"text": "\\theta = \\theta^*"
},
{
"math_id": 9,
"text": "V(\\theta^*) = 0"
},
{
"math_id": 10,
"text": "\\theta^* \\approx \\theta_{0} + \\mathcal{J}^{-1}(\\theta_{0})V(\\theta_{0}). \\,"
},
{
"math_id": 11,
"text": "\\theta_{m+1} = \\theta_{m} + \\mathcal{J}^{-1}(\\theta_{m})V(\\theta_{m}), \\,"
},
{
"math_id": 12,
"text": "\\theta_m \\rightarrow \\theta^*"
},
{
"math_id": 13,
"text": "\\mathcal{J}(\\theta)"
},
{
"math_id": 14,
"text": "\\mathcal{I}(\\theta)= \\mathrm{E}[\\mathcal{J}(\\theta)]"
},
{
"math_id": 15,
"text": "\\theta_{m+1} = \\theta_{m} + \\mathcal{I}^{-1}(\\theta_{m})V(\\theta_{m})"
},
{
"math_id": 16,
"text": "\\theta_m"
},
{
"math_id": 17,
"text": "\\theta_{m+1}"
}
] | https://en.wikipedia.org/wiki?curid=9981737 |
9982439 | Graph enumeration | In combinatorics, an area of mathematics, graph enumeration describes a class of combinatorial enumeration problems in which one must count undirected or directed graphs of certain types, typically as a function of the number of vertices of the graph. These problems may be solved either exactly (as an algebraic enumeration problem) or asymptotically.
The pioneers in this area of mathematics were George Pólya, Arthur Cayley and J. Howard Redfield.
Labeled vs unlabeled problems.
In some graphical enumeration problems, the vertices of the graph are considered to be "labeled" in such a way as to be distinguishable from each other, while in other problems any permutation of the vertices is considered to form the same graph, so the vertices are considered identical or "unlabeled". In general, labeled problems tend to be easier. As with combinatorial enumeration more generally, the Pólya enumeration theorem is an important tool for reducing unlabeled problems to labeled ones: each unlabeled class is considered as a symmetry class of labeled objects.
The number of unlabelled graphs with formula_0 vertices is not known in closed form, but since almost all graphs are asymmetric, this number is asymptotic to
formula_1
Exact enumeration formulas.
Some important results in this area include the following. For example, the number "Cn" of connected graphs on "n" labeled vertices satisfies the recurrence
formula_2
from which one may easily calculate, for "n" = 1, 2, 3, ..., that the values for "Cn" are
1, 1, 4, 38, 728, 26704, 1866256, ...(sequence in the OEIS)
formula_3
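The recurrence for "Cn" can be checked directly; the following short sketch (illustrative code, not part of the article) reproduces the values listed above:

from math import comb

def connected_labeled_graph_counts(n_max):
    # C_n = 2^C(n,2) - (1/n) * sum_{k=1}^{n-1} k * C(n,k) * 2^C(n-k,2) * C_k
    C = {1: 1}
    for n in range(2, n_max + 1):
        s = sum(k * comb(n, k) * 2 ** comb(n - k, 2) * C[k] for k in range(1, n))
        C[n] = 2 ** comb(n, 2) - s // n     # the sum is always divisible by n
    return C

print(connected_labeled_graph_counts(7))
# {1: 1, 2: 1, 3: 4, 4: 38, 5: 728, 6: 26704, 7: 1866256}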
Graph database.
Various research groups have provided searchable databases that list graphs with certain properties of small sizes. For example:
References.
| [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "\\frac{2^{\\tbinom{n}{2}}}{n!}."
},
{
"math_id": 2,
"text": "C_n=2^{n\\choose 2} - \\frac{1}{n}\\sum_{k=1}^{n-1} k{n\\choose k} 2^{n-k\\choose 2} C_k."
},
{
"math_id": 3,
"text": "2^{n-4}+2^{\\lfloor (n-4)/2\\rfloor}."
}
] | https://en.wikipedia.org/wiki?curid=9982439 |
998446 | Frobenius method | Method for solving ordinary differential equations
In mathematics, the method of Frobenius, named after Ferdinand Georg Frobenius, is a way to find an infinite series solution for a linear second-order ordinary differential equation of the form
formula_0
with formula_1 and formula_2, in the vicinity of the regular singular point formula_3.
One can divide by formula_4 to obtain a differential equation of the form
formula_5
which will not be solvable with regular power series methods if either "p"("z")/"z" or "q"("z")/"z"2 is not analytic at "z" = 0. The Frobenius method enables one to create a power series solution to such a differential equation, provided that "p"("z") and "q"("z") are themselves analytic at 0 or, being analytic elsewhere, both their limits at 0 exist (and are finite).
History: Frobenius' actual contributions.
Frobenius' contribution was not so much in all the possible "forms" of the series solutions involved (see below). These forms had all been established earlier, by Fuchs. The "indicial polynomial" (see below) and its role had also been established by Fuchs.
A first contribution by Frobenius to the theory was to show that - as regards a first, linearly independent solution, which then has the form of an analytical power series multiplied by an arbitrary power "r" of the independent variable (see below) - the coefficients of the generalized power series obey a "recurrence relation", so that they can always be straightforwardly calculated.
A second contribution by Frobenius was to show that, in cases in which the roots of the indicial equation differ by an integer, the general "form" of the second linearly independent solution (see below) can be obtained by a procedure which is based on differentiation with respect to the parameter "r", mentioned above.
A large part of Frobenius' 1873 publication was devoted to proofs of convergence of all the series involved in the solutions, as well as establishing the radii of convergence of these series.
Explanation of Frobenius Method: first, linearly independent solution.
The method of Frobenius is to seek a power series solution of the form
formula_6
Differentiating:
formula_7
formula_8
Substituting the above differentiation into our original ODE:
formula_9
The expression
formula_10
is known as the "indicial polynomial", which is quadratic in "r". The general definition of the "indicial polynomial" is the coefficient of the lowest power of "z" in the infinite series. In this case it happens to be the "r"th coefficient, but it is possible for the lowest exponent to be "r" − 2, "r" − 1, or something else, depending on the given differential equation. This detail is important to keep in mind: in the process of synchronizing all the series of the differential equation to start at the same index value (which in the above expression is "k" = 1), one can end up with complicated expressions. However, in solving for the indicial roots, attention is focused only on the coefficient of the lowest power of "z".
Using this, the general expression of the coefficient of "z""k" + "r" is
formula_11
These coefficients must be zero, since they should be solutions of the differential equation, so
formula_12
The series solution with "A""k" above,
formula_13
satisfies
formula_14
If we choose one of the roots to the indicial polynomial for "r" in "U""r"("z"), we gain a solution to the differential equation. If the difference between the roots is not an integer, we get another, linearly independent solution in the other root.
Example.
Let us solve
formula_15
Divide throughout by "z"2 to give
formula_16
which has the requisite singularity at "z" = 0.
Use the series solution
formula_17
Now, substituting
formula_18
From ("r" − 1)2 = 0 we get a double root of 1. Using this root, we set the coefficient of "z""k" + "r" − 2 to be zero (for it to be a solution), which gives us:
formula_19
hence we have the recurrence relation:
formula_20
Given some initial conditions, we can either solve the recurrence entirely or obtain a solution in power series form.
Since the ratio of coefficients formula_21 is a rational function, the power series can be written as a generalized hypergeometric series.
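As a quick check of this example (an illustrative sketch using SymPy, not part of the original text), the recurrence gives "A""k" = 1/("k"!)^2, and substituting the truncated series back into the differential equation leaves only the single term produced by the truncation:

import sympy as sp

z = sp.symbols('z')
N = 8                                    # truncation order (arbitrary example choice)
A = [sp.Integer(1)]                      # A_0 = 1 (free normalisation)
for k in range(1, N + 1):
    A.append(A[-1] / k**2)               # recurrence A_k = A_{k-1} / k^2, so A_k = 1/(k!)^2
f = sum(A[k] * z**(k + 1) for k in range(N + 1))    # root r = 1, so powers start at z^1
residual = sp.expand(z**2 * sp.diff(f, z, 2) - z * sp.diff(f, z) + (1 - z) * f)
print(residual)                          # only the truncation term -A_N * z**(N + 2) remains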
"The exceptional cases": roots separated by an integer.
The previous example involved an indicial polynomial with a repeated root, which gives only one solution to the given differential equation. In general, the Frobenius method gives two independent solutions provided that the indicial equation's roots are not separated by an integer (including zero).
If the root is repeated or the roots differ by an integer, then the second solution can be found using:
formula_22
where formula_23 is the first solution (based on the larger root in the case of unequal roots), formula_24 is the smaller root, and the constant C and the coefficients formula_25 are to be determined. Once formula_26 is chosen (for example by setting it to 1) then C and the formula_25 are determined up to but not including formula_27, which can be set arbitrarily. This then determines the rest of the formula_28 In some cases the constant C must be zero.
Example: consider the following differential equation (Kummer's equation with "a" = 1 and "b" = 2):
formula_29
The roots of the indicial equation are −1 and 0. Two independent solutions are formula_30 and formula_31 so we see that the logarithm does not appear in any solution. The solution formula_32 has a power series starting with the power zero. In a power series starting with formula_33 the recurrence relation places no restriction on the coefficient for the term formula_34 which can be set arbitrarily. If it is set to zero then with this differential equation all the other coefficients will be zero and we obtain the solution 1/"z".
Tandem recurrence relations for series coefficients in the exceptional cases.
In cases in which roots of the indicial polynomial differ by an integer (including zero), the coefficients of all series involved in second linearly independent solutions can be calculated straightforwardly from "tandem recurrence relations". These tandem relations can be constructed by further developing Frobenius' original invention of differentiating with respect to the parameter "r", and using this approach to actually calculate the series coefficients in all cases. | [
{
"math_id": 0,
"text": "z^2 u'' + p(z)z u'+ q(z) u = 0"
},
{
"math_id": 1,
"text": "u' \\equiv \\frac{du}{dz}"
},
{
"math_id": 2,
"text": "u'' \\equiv \\frac{d^2 u}{dz^2}"
},
{
"math_id": 3,
"text": "z=0"
},
{
"math_id": 4,
"text": "z^2"
},
{
"math_id": 5,
"text": "u'' + \\frac{p(z)}{z}u' + \\frac{q(z)}{ z^2}u = 0"
},
{
"math_id": 6,
"text": "u(z)=z^r \\sum_{k=0}^\\infty A_k z^k, \\qquad (A_0 \\neq 0)"
},
{
"math_id": 7,
"text": "u'(z)=\\sum_{k=0}^\\infty (k+r)A_kz^{k+r-1}"
},
{
"math_id": 8,
"text": "u''(z)=\\sum_{k=0}^\\infty (k+r-1)(k+r)A_kz^{k+r-2}"
},
{
"math_id": 9,
"text": "\\begin{align}\n& z^2\\sum_{k=0}^\\infty (k+r-1)(k+r)A_kz^{k+r-2} + zp(z) \\sum_{k=0}^\\infty (k+r)A_kz^{k+r-1} + q(z)\\sum_{k=0}^\\infty A_kz^{k+r} \\\\\n= {} & \\sum_{k=0}^\\infty (k+r-1) (k+r)A_kz^{k+r} + p(z) \\sum_{k=0}^\\infty (k+r)A_kz^{k+r} + q(z) \\sum_{k=0}^\\infty A_kz^{k+r} \\\\\n= {} & \\sum_{k=0}^\\infty [(k+r-1)(k+r) A_kz^{k+r} + p(z) (k+r) A_kz^{k+r} + q(z) A_kz^{k+r}] \\\\\n= {} & \\sum_{k=0}^\\infty \\left[(k+r-1)(k+r) + p(z)(k+r) + q(z)\\right] A_kz^{k+r} \\\\\n= {} & \\left[ r(r-1)+p(z)r+q(z) \\right] A_0z^r+\\sum_{k=1}^\\infty \\left[ (k+r-1)(k+r)+p(z)(k+r)+q(z) \\right] A_kz^{k+r}=0 \n\\end{align}"
},
{
"math_id": 10,
"text": "r\\left(r-1\\right) + p\\left(0\\right)r + q\\left(0\\right) = I(r)"
},
{
"math_id": 11,
"text": "I(k+r)A_k + \\sum_{j=0}^{k-1}{(j+r)p^{(k-j)}(0)+q^{(k-j)}(0) \\over (k-j)!}A_j,"
},
{
"math_id": 12,
"text": "\\begin{align}\nI(k+r)A_k + \\sum_{j=0}^{k-1} {(j+r)p^{(k-j)}(0)+q^{(k-j)}(0) \\over (k-j)!} A_j &= 0 \\\\[4pt]\n\\sum_{j=0}^{k-1}{(j+r)p^{(k-j)}(0)+q^{(k-j)}(0) \\over (k-j)!}A_j &=-I(k+r)A_k \\\\[4pt]\n{1\\over-I(k+r)}\\sum_{j=0}^{k-1}{(j+r)p^{(k-j)}(0)+q^{(k-j)}(0) \\over (k-j)!}A_j &= A_k\n\\end{align}"
},
{
"math_id": 13,
"text": "U_r(z)= \\sum_{k=0}^{\\infty} A_kz^{k+r}"
},
{
"math_id": 14,
"text": "z^2U_r(z)'' + p(z)zU_r(z)' + q(z)U_r(z) = I(r)z^r"
},
{
"math_id": 15,
"text": "z^2f''-zf'+(1-z)f = 0"
},
{
"math_id": 16,
"text": "f''-{1\\over z}f'+{1-z \\over z^2}f=f''-{1\\over z}f'+\\left({1\\over z^2} - {1 \\over z}\\right) f = 0"
},
{
"math_id": 17,
"text": "\\begin{align}\nf &= \\sum_{k=0}^\\infty A_kz^{k+r} \\\\\nf' &= \\sum_{k=0}^\\infty (k+r)A_kz^{k+r-1} \\\\\nf'' &= \\sum_{k=0}^\\infty (k+r)(k+r-1)A_kz^{k+r-2}\n\\end{align}"
},
{
"math_id": 18,
"text": "\\begin{align}\n\\sum_{k=0}^\\infty &(k+r)(k+r-1) A_kz^{k+r-2}-\\frac{1}{z} \\sum_{k=0}^\\infty (k+r)A_kz^{k+r-1} + \\left(\\frac{1}{z^2} - \\frac{1}{z}\\right) \\sum_{k=0}^\\infty A_kz^{k+r} \\\\\n&= \\sum_{k=0}^\\infty (k+r)(k+r-1) A_kz^{k+r-2} -\\frac{1}{z} \\sum_{k=0}^\\infty (k+r) A_kz^{k+r-1} +\\frac{1}{z^2} \\sum_{k=0}^\\infty A_kz^{k+r} -\\frac{1}{z} \\sum_{k=0}^\\infty A_kz^{k+r} \\\\\n&= \\sum_{k=0}^\\infty (k+r)(k+r-1)A_kz^{k+r-2}-\\sum_{k=0}^\\infty (k+r)A_kz^{k+r-2}+\\sum_{k=0}^\\infty A_kz^{k+r-2}-\\sum_{k=0}^\\infty A_kz^{k+r-1} \\\\\n&= \\sum_{k=0}^\\infty (k+r)(k+r-1)A_kz^{k+r-2}-\\sum_{k=0}^\\infty (k+r) A_kz^{k+r-2} + \\sum_{k=0}^\\infty A_kz^{k+r-2} - \\sum_{k-1=0}^\\infty A_{k-1}z^{k-1+r-1} \\\\\n&= \\sum_{k=0}^\\infty (k+r)(k+r-1)A_kz^{k+r-2}-\\sum_{k=0}^\\infty (k+r)A_kz^{k+r-2}+\\sum_{k=0}^\\infty A_kz^{k+r-2}-\\sum_{k=1}^\\infty A_{k-1}z^{k+r-2} \\\\\n&= \\left \\{ \\sum_{k=0}^{\\infty} \\left ((k+r)(k+r-1) - (k+r) + 1\\right ) A_kz^{k+r-2} \\right \\} -\\sum_{k=1}^\\infty A_{k-1}z^{k+r-2} \\\\\n&= \\left \\{ \\left ( r(r-1) - r +1 \\right ) A_0 z^{r-2} + \\sum_{k=1}^{\\infty} \\left ((k+r)(k+r-1) - (k+r) + 1\\right ) A_kz^{k+r-2} \\right \\} - \\sum_{k=1}^\\infty A_{k-1}z^{k+r-2} \\\\\n&= (r-1)^2 A_0 z^{r-2} + \\left \\{ \\sum_{k=1}^{\\infty} (k+r-1)^2 A_kz^{k+r-2} - \\sum_{k=1}^\\infty A_{k-1}z^{k+r-2} \\right \\} \\\\\n&= (r-1)^2 A_0 z^{r-2} + \\sum_{k=1}^{\\infty} \\left ( (k+r-1)^2 A_k - A_{k-1} \\right ) z^{k+r-2}\n\\end{align}"
},
{
"math_id": 19,
"text": "(k+1-1)^2 A_k - A_{k-1} =k^2A_k-A_{k-1} = 0"
},
{
"math_id": 20,
"text": " A_k = \\frac{A_{k-1}}{k^2} "
},
{
"math_id": 21,
"text": "A_k/A_{k-1}"
},
{
"math_id": 22,
"text": " y_2 = C y_1 \\ln x + \\sum_{k=0}^\\infty B_kx^{k+r_2}"
},
{
"math_id": 23,
"text": "y_1(x)"
},
{
"math_id": 24,
"text": "r_2"
},
{
"math_id": 25,
"text": "B_k"
},
{
"math_id": 26,
"text": "B_0"
},
{
"math_id": 27,
"text": "B_{r_1-r_2}"
},
{
"math_id": 28,
"text": "B_k."
},
{
"math_id": 29,
"text": "zu''+(2-z)u'-u = 0"
},
{
"math_id": 30,
"text": "1/z"
},
{
"math_id": 31,
"text": "e^z/z,"
},
{
"math_id": 32,
"text": "(e^z-1)/z"
},
{
"math_id": 33,
"text": "z^{-1}"
},
{
"math_id": 34,
"text": "z^0,"
}
] | https://en.wikipedia.org/wiki?curid=998446 |
9986126 | Complex multiplier | "This article deals with the concept in economics. For the multiplication of complex numbers, see Complex number#Multiplication."
The complex multiplier is the multiplier principle in Keynesian economics (formulated by John Maynard Keynes). The simplistic multiplier that is the reciprocal of the marginal propensity to save is a special case used for illustrative purposes only. The multiplier applies to any change in autonomous expenditure, in other words, an externally induced change in consumption, investment, government expenditure or net exports. Each of these operates to increase or reduce the equilibrium level of income in the economy.
The size of the multiplier should take account of all leakages from the circular flow of income and expenditure occurring in all sectors. The complex multiplier can be measured by the following formula:
formula_0
where MPS = Marginal propensity to save,
MRT = Marginal rate of taxation,
MPM = Marginal propensity to import, and
MPW = Marginal propensity to withdraw.
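A small worked example (the propensity values below are purely illustrative):

MPS, MRT, MPM = 0.15, 0.20, 0.15     # illustrative marginal propensities
MPW = MPS + MRT + MPM                # marginal propensity to withdraw = 0.50
k = 1 / MPW
print(k)                             # 2.0 -- each unit of autonomous expenditure raises equilibrium income by 2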
Notes.
| [
{
"math_id": 0,
"text": "k = 1 / [MPS+MRT+MPM] = 1 / MPW\\,\\!"
}
] | https://en.wikipedia.org/wiki?curid=9986126 |
99862 | Hyperplane | Subspace of n-space whose dimension is (n-1)
In geometry, a hyperplane is a generalization of a two-dimensional plane in three-dimensional space to mathematical spaces of arbitrary dimension. Like a plane in space, a hyperplane is a flat hypersurface, a subspace whose dimension is one less than that of the ambient space. Two lower-dimensional examples of hyperplanes are one-dimensional lines in a plane and zero-dimensional points on a line.
Most commonly, the ambient space is n-dimensional Euclidean space, in which case the hyperplanes are the ("n" − 1)-dimensional flats, each of which separates the space into two half spaces. A reflection across a hyperplane is a kind of motion (geometric transformation preserving distance between points), and the group of all motions is generated by the reflections. A convex polytope is the intersection of half-spaces.
In non-Euclidean geometry, the ambient space might be the n-dimensional sphere or hyperbolic space, or more generally a pseudo-Riemannian space form, and the hyperplanes are the hypersurfaces consisting of all geodesics through a point which are perpendicular to a specific normal geodesic.
In other kinds of ambient spaces, some properties from Euclidean space are no longer relevant. For example, in affine space, there is no concept of distance, so there are no reflections or motions. In a non-orientable space such as elliptic space or projective space, there is no concept of half-planes. In greatest generality, the notion of hyperplane is meaningful in any mathematical space in which the concept of the dimension of a subspace is defined.
The difference in dimension between a subspace and its ambient space is known as its "codimension". A hyperplane has codimension 1.
Technical description.
In geometry, a hyperplane of an "n"-dimensional space "V" is a subspace of dimension "n" − 1, or equivalently, of codimension 1 in "V". The space "V" may be a Euclidean space or more generally an affine space, or a vector space or a projective space, and the notion of hyperplane varies correspondingly since the definition of subspace differs in these settings; in all cases however, any hyperplane can be given in coordinates as the solution of a single (due to the "codimension 1" constraint) algebraic equation of degree 1.
If "V" is a vector space, one distinguishes "vector hyperplanes" (which are linear subspaces, and therefore must pass through the origin) and "affine hyperplanes" (which need not pass through the origin; they can be obtained by translation of a vector hyperplane). A hyperplane in a Euclidean space separates that space into two half spaces, and defines a reflection that fixes the hyperplane and interchanges those two half spaces.
Special types of hyperplanes.
Several specific types of hyperplanes are defined with properties that are well suited for particular purposes. Some of these specializations are described here.
Affine hyperplanes.
An affine hyperplane is an affine subspace of codimension 1 in an affine space.
In Cartesian coordinates, such a hyperplane can be described with a single linear equation of the following form (where at least one of the formula_0s is non-zero and formula_1 is an arbitrary constant):
formula_2
In the case of a real affine space, in other words when the coordinates are real numbers, this affine space separates the space into two half-spaces, which are the connected components of the complement of the hyperplane, and are given by the inequalities
formula_3
and
formula_4
As an example, a point is a hyperplane in 1-dimensional space, a line is a hyperplane in 2-dimensional space, and a plane is a hyperplane in 3-dimensional space. A line in 3-dimensional space is not a hyperplane, and does not separate the space into two parts (the complement of such a line is connected).
Any hyperplane of a Euclidean space has exactly two unit normal vectors: formula_5. In particular, if we consider formula_6 equipped with the conventional inner product (dot product), then one can define the affine subspace with normal vector formula_7 and origin translation formula_8 as the set of all formula_9 such that formula_10.
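A short sketch of this description (the normal vector, base point and test points below are arbitrary illustrative values):

import numpy as np

n_hat = np.array([1.0, 2.0, -1.0])
n_hat = n_hat / np.linalg.norm(n_hat)          # unit normal vector of the hyperplane
b = np.array([0.0, 0.0, 1.0])                  # a point that the hyperplane passes through

def signed_side(x):
    # n_hat . (x - b): zero on the hyperplane, positive in one half-space, negative in the other
    return float(n_hat @ (np.asarray(x, dtype=float) - b))

print(signed_side([0.0, 0.0, 1.0]))   #  0.0  (on the hyperplane)
print(signed_side([1.0, 0.0, 1.0]))   #  positive half-space
print(signed_side([-1.0, 0.0, 1.0]))  #  negative half-space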
Affine hyperplanes are used to define decision boundaries in many machine learning algorithms such as linear-combination (oblique) decision trees, and perceptrons.
Vector hyperplanes.
In a vector space, a vector hyperplane is a linear subspace of codimension 1, and therefore passes through the origin; if it is shifted away from the origin by a vector, it is instead referred to as a flat (an affine hyperplane). A vector hyperplane is the solution set of a single homogeneous linear equation.
Projective hyperplanes.
Projective hyperplanes are used in projective geometry. A projective subspace is a set of points with the property that for any two points of the set, all the points on the line determined by the two points are contained in the set. Projective geometry can be viewed as affine geometry with vanishing points (points at infinity) added. An affine hyperplane together with the associated points at infinity forms a projective hyperplane. One special case of a projective hyperplane is the infinite or ideal hyperplane, which is defined with the set of all points at infinity.
In projective space, a hyperplane does not divide the space into two parts; rather, it takes two hyperplanes to separate points and divide up the space. The reason for this is that the space essentially "wraps around" so that both sides of a lone hyperplane are connected to each other.
Applications.
In convex geometry, two disjoint convex sets in n-dimensional Euclidean space are separated by a hyperplane, a result called the hyperplane separation theorem.
In machine learning, hyperplanes are a key tool to create support vector machines for such tasks as computer vision and natural language processing.
The graph of a linear model, which maps each data point to its predicted value, is a hyperplane.
Dihedral angles.
The dihedral angle between two non-parallel hyperplanes of a Euclidean space is the angle between the corresponding normal vectors. The product of the reflections in the two hyperplanes is a rotation whose axis is the subspace of codimension 2 obtained by intersecting the hyperplanes, and whose angle is twice the angle between the hyperplanes.
Support hyperplanes.
A hyperplane H is called a "support" hyperplane of the polyhedron P if P is contained in one of the two closed half-spaces bounded by H and formula_11. The intersection of P and H is defined to be a "face" of the polyhedron. The theory of polyhedra and the dimension of the faces are analyzed by looking at these intersections involving hyperplanes.
References.
| [
{
"math_id": 0,
"text": "a_i"
},
{
"math_id": 1,
"text": "b"
},
{
"math_id": 2,
"text": "a_1x_1 + a_2x_2 + \\cdots + a_nx_n = b.\\ "
},
{
"math_id": 3,
"text": "a_1x_1 + a_2x_2 + \\cdots + a_nx_n < b\\ "
},
{
"math_id": 4,
"text": "a_1x_1 + a_2x_2 + \\cdots + a_nx_n > b.\\ "
},
{
"math_id": 5,
"text": "\\pm\\hat{n}"
},
{
"math_id": 6,
"text": "\\mathbb{R}^{n+1}"
},
{
"math_id": 7,
"text": "\\hat{n}"
},
{
"math_id": 8,
"text": "\\tilde{b} \\in \\mathbb{R}^{n+1}"
},
{
"math_id": 9,
"text": "x \\in \\mathbb{R}^{n+1}"
},
{
"math_id": 10,
"text": "\\hat{n} \\cdot (x-\\tilde{b})=0"
},
{
"math_id": 11,
"text": "H\\cap P \\neq \\varnothing"
}
] | https://en.wikipedia.org/wiki?curid=99862 |
99864 | Counting sort | Sorting algorithm
In computer science, counting sort is an algorithm for sorting a collection of objects according to keys that are small positive integers; that is, it is an integer sorting algorithm. It operates by counting the number of objects that possess distinct key values, and applying prefix sum on those counts to determine the positions of each key value in the output sequence. Its running time is linear in the number of items and the difference between the maximum key value and the minimum key value, so it is only suitable for direct use in situations where the variation in keys is not significantly greater than the number of items. It is often used as a subroutine in radix sort, another sorting algorithm, which can handle larger keys more efficiently.
Counting sort is not a comparison sort; it uses key values as indexes into an array and the Ω("n" log "n") lower bound for comparison sorting will not apply. Bucket sort may be used in lieu of counting sort, and entails a similar time analysis. However, compared to counting sort, bucket sort requires linked lists, dynamic arrays, or a large amount of pre-allocated memory to hold the sets of items within each bucket, whereas counting sort stores a single number (the count of items) per bucket.
Input and output assumptions.
In the most general case, the input to counting sort consists of a collection of n items, each of which has a non-negative integer key whose maximum value is at most k.
In some descriptions of counting sort, the input to be sorted is assumed to be more simply a sequence of integers itself, but this simplification does not accommodate many applications of counting sort. For instance, when used as a subroutine in radix sort, the keys for each call to counting sort are individual digits of larger item keys; it would not suffice to return only a sorted list of the key digits, separated from the items.
In applications such as in radix sort, a bound on the maximum key value k will be known in advance, and can be assumed to be part of the input to the algorithm. However, if the value of k is not already known then it may be computed, as a first step, by an additional loop over the data to determine the maximum key value.
The output is an array of the elements ordered by their keys. Because of its application to radix sorting, counting sort must be a stable sort; that is, if two elements share the same key, their relative order in the output array and their relative order in the input array should match.
Pseudocode.
In pseudocode, the algorithm may be expressed as:
function CountingSort(input, "k")
count ← array of "k" + 1 zeros
output ← array of same length as input
for "i" = 0 to length(input) - 1 do
"j" = key(input["i"])
count["j"] = count["j"] + 1
for "i" = 1 to "k" do
count["i"] = count["i"] + count["i" - 1]
for "i" = length(input) - 1 down to 0 do
"j" = key(input["i"])
count["j"] = count["j"] - 1
output[count["j"]] = input["i"]
return output
Here codice_0 is the input array to be sorted, codice_1 returns the numeric key of each item in the input array, codice_2 is an auxiliary array used first to store the numbers of items with each key, and then (after the second loop) to store the positions where items with each key should be placed,
codice_3 is the maximum value of the non-negative key values and codice_4 is the sorted output array.
In summary, the algorithm loops over the items in the first loop, computing a histogram of the number of times each key occurs within the codice_0 collection. After that in the second loop, it performs a prefix sum computation on codice_2 in order to determine, for each key, the position range where the items having that key should be placed; i.e. items of key formula_0 should be placed starting in position codice_7. Finally, in the third loop, it loops over the items of codice_0 again, but in reverse order, moving each item into its sorted position in the codice_4 array.
The relative order of items with equal keys is preserved here; i.e., this is a stable sort.
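For concreteness, a direct Python transcription of the pseudocode above (a sketch; the "key" argument defaults to the item itself, and keys are assumed to be integers in the range 0 to "k"):

def counting_sort(items, k, key=lambda x: x):
    count = [0] * (k + 1)
    for item in items:                    # first loop: histogram of key occurrences
        count[key(item)] += 1
    for j in range(1, k + 1):             # second loop: prefix sums give position ranges
        count[j] += count[j - 1]
    output = [None] * len(items)
    for item in reversed(items):          # third loop: place items, scanning backwards for stability
        count[key(item)] -= 1
        output[count[key(item)]] = item
    return output

print(counting_sort([2, 5, 3, 0, 2, 3, 0, 3], k=5))   # [0, 0, 2, 2, 3, 3, 3, 5]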
Complexity analysis.
Because the algorithm uses only simple codice_10 loops, without recursion or subroutine calls, it is straightforward to analyze. The initialization of the count array, and the second for loop which performs a prefix sum on the count array, each iterate at most "k" + 1 times and therefore take "O"("k") time. The other two for loops, and the initialization of the output array, each take "O"("n") time. Therefore, the time for the whole algorithm is the sum of the times for these steps, "O"("n" + "k").
Because it uses arrays of length "k" + 1 and n, the total space usage of the algorithm is also "O"("n" + "k"). For problem instances in which the maximum key value is significantly smaller than the number of items, counting sort can be highly space-efficient, as the only storage it uses other than its input and output arrays is the Count array which uses space "O"("k").
Variant algorithms.
If each item to be sorted is itself an integer, and used as key as well, then the second and third loops of counting sort can be combined; in the second loop, instead of computing the position where items with key codice_11 should be placed in the output, simply append codice_12 copies of the number codice_11 to the output.
This algorithm may also be used to eliminate duplicate keys, by replacing the codice_14 array with a bit vector that stores a codice_15 for a key that is present in the input and a codice_16 for a key that is not present. If additionally the items are the integer keys themselves, both second and third loops can be omitted entirely and the bit vector will itself serve as output, representing the values as offsets of the non-codice_16 entries, added to the range's lowest value. Thus the keys are sorted and the duplicates are eliminated in this variant just by being placed into the bit array.
For data in which the maximum key size is significantly smaller than the number of data items, counting sort may be parallelized by splitting the input into subarrays of approximately equal size, processing each subarray in parallel to generate a separate count array for each subarray, and then merging the count arrays. When used as part of a parallel radix sort algorithm, the key size (base of the radix representation) should be chosen to match the size of the split subarrays. The simplicity of the counting sort algorithm and its use of the easily parallelizable prefix sum primitive also make it usable in more fine-grained parallel algorithms.
As described, counting sort is not an in-place algorithm; even disregarding the count array, it needs separate input and output arrays. It is possible to modify the algorithm so that it places the items into sorted order within the same array that was given to it as the input, using only the count array as auxiliary storage; however, the modified in-place version of counting sort is not stable.
History.
Although radix sorting itself dates back far longer,
counting sort, and its application to radix sorting, were both invented by Harold H. Seward in 1954.
References.
| [
{
"math_id": 0,
"text": "i"
}
] | https://en.wikipedia.org/wiki?curid=99864 |
9986646 | Soliton model in neuroscience | The soliton hypothesis in neuroscience is a model that claims to explain how action potentials are initiated and conducted along axons based on a thermodynamic theory of nerve pulse propagation. It proposes that the signals travel along the cell's membrane in the form of certain kinds of solitary sound (or density) pulses that can be modeled as solitons. The model is proposed as an alternative to the Hodgkin–Huxley model in which action potentials: voltage-gated ion channels in the membrane open and allow sodium ions to enter the cell (inward current). The resulting decrease in membrane potential opens nearby voltage-gated sodium channels, thus propagating the action potential. The transmembrane potential is restored by delayed opening of potassium channels. Soliton hypothesis proponents assert that energy is mainly conserved during propagation except dissipation losses; Measured temperature changes are completely inconsistent with the Hodgkin-Huxley model.
The soliton model (and sound waves in general) depends on adiabatic propagation in which the energy provided at the source of excitation is carried adiabatically through the medium, i.e. plasma membrane. The measurement of a temperature pulse and the claimed absence of heat release during an action potential were the basis of the proposal that nerve impulses are an adiabatic phenomenon much like sound waves. Synaptically evoked action potentials in the electric organ of the electric eel are associated with substantial positive (only) heat production followed by active cooling to ambient temperature. In the garfish olfactory nerve, the action potential is associated with a biphasic temperature change; however, there is a net production of heat. These published results are inconsistent with the Hodgkin-Huxley Model and the authors interpret their work in terms of that model: The initial sodium current releases heat as the membrane capacitance is discharged; heat is absorbed during recharge of the membrane capacitance as potassium ions move with their concentration gradient but against the membrane potential. This mechanism is called the "Condenser Theory". Additional heat may be generated by membrane configuration changes driven by the changes in membrane potential. An increase in entropy during depolarization would release heat; entropy increase during repolarization would absorb heat. However, any such entropic contributions are incompatible with Hodgkin and Huxley model
History.
Ichiji Tasaki pioneered a thermodynamic approach to the phenomenon of nerve pulse propagation which identified several phenomena that were not included in the Hodgkin–Huxley model. Along with measuring various non-electrical components of a nerve impulse, Tasaki investigated the physical chemistry of phase transitions in nerve fibers and its importance for nerve pulse propagation. Based on Tasaki's work, Konrad Kaufman proposed sound waves as a physical basis for nerve pulse propagation in an unpublished manuscript. The basic idea at the core of the soliton model is the balancing of intrinsic dispersion of the two dimensional sound waves in the membrane by nonlinear elastic properties near a phase transition. The initial impulse can acquire a stable shape under such circumstances, in general known as a solitary wave. Solitons are the simplest solution of the set of nonlinear wave equations governing such phenomenon and were applied to model nerve impulse in 2005 by Thomas Heimburg and Andrew D. Jackson, both at the Niels Bohr Institute of the University of Copenhagen. Heimburg heads the institute's Membrane Biophysics Group. The biological physics group of Matthias Schneider has studied propagation of two-dimensional sound waves in lipid interfaces and their possible role in biological signalling
Justification.
The model starts with the observation that cell membranes always have a freezing point (the temperature below which the consistency changes from fluid to gel-like) only slightly below the organism's body temperature, and this allows for the propagation of solitons. An action potential traveling along a mixed nerve results in a slight increase in temperature followed by a decrease in temperature. Soliton model proponents claim that no net heat is released during the overall pulse and that the observed temperature changes are inconsistent with the Hodgkin-Huxley model. However, this is untrue: the Hodgkin Huxley model predicts a biphasic release and absorption of heat. In addition, the action potential causes a slight local thickening of the membrane and a force acting outwards; this effect is not predicted by the Hodgkin–Huxley model but does not contradict it, either.
The soliton model attempts to explain the electrical currents associated with the action potential as follows: the traveling soliton locally changes density and thickness of the membrane, and since the membrane contains many charged and polar substances, this will result in an electrical effect, akin to piezoelectricity. Indeed, such nonlinear sound waves have now been shown to exist at lipid interfaces that show superficial similarity to action potentials (electro-opto-mechanical coupling, velocities, biphasic pulse shape, threshold for excitation etc.). Furthermore, the waves remain localized in the membrane and do not spread out in the surrounding due to an impedance mismatch.
Formalism.
The soliton representing the action potential of nerves is the solution of the partial differential equation
formula_0
where "t" is time and "x" is the position along the nerve axon. Δ"ρ" is the change in membrane density under the influence of the action potential, "c"0 is the sound velocity of the nerve membrane, "p" and "q" describe the nature of the phase transition and thereby the nonlinearity of the elastic constants of the nerve membrane. The parameters "c"0, "p" and "q" are dictated by the thermodynamic properties of the nerve membrane and cannot be adjusted freely. They have to be determined experimentally. The parameter "h" describes the frequency dependence of the sound velocity of the membrane (dispersion relation). The above equation does not contain any fit parameters. It is formally related to the Boussinesq approximation for solitons in water canals. The solutions of the above equation possess a limiting maximum amplitude and a minimum propagation velocity that is similar to the pulse velocity in myelinated nerves. Under restrictive assumptions, there exist periodic solutions that display hyperpolarization and refractory periods.
Role of ion channels.
Advocates of the soliton model claim that it explains several aspects of the action potential, which are not explained by the Hodgkin–Huxley model. Since it is of thermodynamic nature it does not address the properties of single macromolecules like ion channel proteins on a molecular scale. It is rather assumed that their properties are implicitly contained in the macroscopic thermodynamic properties of the nerve membranes. The soliton model predicts membrane current fluctuations during the action potential. These currents are of similar appearance as those reported for ion channel proteins. They are thought to be caused by lipid membrane pores spontaneously generated by the thermal fluctuations. Such thermal fluctuations explain the specific ionic selectivity or the specific time-course of the response to voltage changes on the basis of their effect on the macroscopic susceptibilities of the system.
Application to anesthesia.
The authors claim that their model explains the previously obscure mode of action of numerous anesthetics. The Meyer–Overton observation holds that the strength of a wide variety of chemically diverse anesthetics is proportional to their lipid solubility, suggesting that they do not act by binding to specific proteins such as ion channels but instead by dissolving in and changing the properties of the lipid membrane. Dissolving substances in the membrane lowers the membrane's freezing point, and the resulting larger difference between body temperature and freezing point inhibits the propagation of solitons. By increasing pressure, lowering pH or lowering temperature, this difference can be restored back to normal, which should cancel the action of anesthetics: this is indeed observed. The amount of pressure needed to cancel the action of an anesthetic of a given lipid solubility can be computed from the soliton model and agrees reasonably well with experimental observations.
Differences between model predictions and experimental observations.
The following is a list of some of the disagreements between experimental observations and the "soliton model":
Action waves.
A recent theoretical model, proposed by Ahmed El Hady and Benjamin Machta, proposes that there is a mechanical surface wave which co-propagates with the electrical action potential. These surface waves are called "action waves". In the El Hady–Machta's model, these co-propagating waves are driven by voltage changes across the membrane caused by the action potential.
References.
| [
{
"math_id": 0,
"text": " \\frac{\\partial^2 \\Delta \\rho}{\\partial t^2} = \\frac{\\partial}{\\partial x} \\left[\\left(c_0^2 + p\\Delta \\rho + q\\Delta \\rho^2\\right)\\frac{\\partial \\Delta \\rho}{\\partial x}\\right] - h\\frac{\\partial^4 \\Delta\\rho}{\\partial x^4}, "
}
] | https://en.wikipedia.org/wiki?curid=9986646 |
9991540 | Observed information | Matrix of second derivatives of the log-likelihood function
In statistics, the observed information, or observed Fisher information, is the negative of the second derivative (the Hessian matrix) of the "log-likelihood" (the logarithm of the likelihood function). It is a sample-based version of the Fisher information.
Definition.
Suppose we observe random variables formula_0, independent and identically distributed with density "f"("X"; θ), where θ is a (possibly unknown) vector. Then the log-likelihood of the parameters formula_1 given the data formula_0 is
formula_2.
We define the observed information matrix at formula_3 as
formula_4
formula_5
Since the inverse of the information matrix is the asymptotic covariance matrix of the corresponding maximum-likelihood estimator, the observed information is often evaluated at the maximum-likelihood estimate for the purpose of significance testing or confidence-interval construction. The invariance property of maximum-likelihood estimators allows the observed information matrix to be evaluated before being inverted.
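As a concrete one-parameter illustration (not from the original text): for n independent Poisson observations with mean theta, the log-likelihood has second derivative -sum(x_i)/theta^2, so the observed information is J(theta) = sum(x_i)/theta^2. The sketch below evaluates this at the maximum-likelihood estimate (the sample mean) and checks it against a finite-difference second derivative; the simulated data and step size are arbitrary choices.

import numpy as np

rng = np.random.default_rng(1)
x = rng.poisson(lam=3.0, size=500)              # simulated sample (illustrative only)

def loglik(theta):
    # the log(x_i!) terms are omitted; they do not depend on theta
    return np.sum(x * np.log(theta) - theta)

theta_hat = x.mean()                            # MLE of the Poisson mean
J_analytic = x.sum() / theta_hat**2             # observed information at the MLE
h = 1e-4
J_numeric = -(loglik(theta_hat + h) - 2 * loglik(theta_hat) + loglik(theta_hat - h)) / h**2
print(J_analytic, J_numeric)                    # the two values agree closely
print(len(x) / theta_hat)                       # Fisher information n/theta at the MLE; equals J_analytic here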
Alternative definition.
Andrew Gelman, David Dunson and Donald Rubin define observed information instead in terms of the parameters' posterior probability, formula_6:
formula_7
Fisher information.
The Fisher information formula_8 is the expected value of the observed information given a single observation formula_9 distributed according to the hypothetical model with parameter formula_1:
formula_10.
Comparison with the expected information.
The comparison between the observed information and the expected information remains an active and ongoing area of research and debate. Efron and Hinkley provided a frequentist justification for preferring the observed information to the expected information when employing normal approximations to the distribution of the maximum-likelihood estimator in one-parameter families in the presence of an ancillary statistic that affects the precision of the MLE. Lindsay and Li showed that the observed information matrix gives the minimum mean squared error as an approximation of the true information if an error term of formula_11 is ignored. In Lindsay and Li's case, the expected information matrix still requires evaluation at the obtained ML estimates, introducing randomness.
However, when the construction of confidence intervals is of primary focus, there are reported findings that the expected information outperforms the observed counterpart. Yuan and Spall showed that the expected information outperforms the observed counterpart for confidence-interval constructions of scalar parameters in the mean squared error sense. This finding was later generalized to multiparameter cases, although the claim had been weakened to the expected information matrix performing at least as well as the observed information matrix.
References.
| [
{
"math_id": 0,
"text": "X_1,\\ldots,X_n"
},
{
"math_id": 1,
"text": "\\theta"
},
{
"math_id": 2,
"text": "\\ell(\\theta | X_1,\\ldots,X_n) = \\sum_{i=1}^n \\log f(X_i| \\theta) "
},
{
"math_id": 3,
"text": "\\theta^{*}"
},
{
"math_id": 4,
"text": "\\mathcal{J}(\\theta^*) \n = - \\left. \n \\nabla \\nabla^{\\top} \n \\ell(\\theta)\n \\right|_{\\theta=\\theta^*} \n"
},
{
"math_id": 5,
"text": "= -\n\\left.\n\\left( \\begin{array}{cccc}\n \\tfrac{\\partial^2}{\\partial \\theta_1^2}\n & \\tfrac{\\partial^2}{\\partial \\theta_1 \\partial \\theta_2}\n & \\cdots\n & \\tfrac{\\partial^2}{\\partial \\theta_1 \\partial \\theta_p} \\\\\n \\tfrac{\\partial^2}{\\partial \\theta_2 \\partial \\theta_1}\n & \\tfrac{\\partial^2}{\\partial \\theta_2^2}\n & \\cdots\n & \\tfrac{\\partial^2}{\\partial \\theta_2 \\partial \\theta_p} \\\\\n \\vdots &\n \\vdots &\n \\ddots &\n \\vdots \\\\\n \\tfrac{\\partial^2}{\\partial \\theta_p \\partial \\theta_1}\n & \\tfrac{\\partial^2}{\\partial \\theta_p \\partial \\theta_2}\n & \\cdots\n & \\tfrac{\\partial^2}{\\partial \\theta_p^2} \\\\\n\\end{array} \\right) \n\\ell(\\theta)\n\\right|_{\\theta = \\theta^*}\n"
},
{
"math_id": 6,
"text": "p(\\theta|y)"
},
{
"math_id": 7,
"text": " I(\\theta) = - \\frac{d^2}{d\\theta^2} \\log p(\\theta|y)"
},
{
"math_id": 8,
"text": "\\mathcal{I}(\\theta)"
},
{
"math_id": 9,
"text": "X"
},
{
"math_id": 10,
"text": "\\mathcal{I}(\\theta) = \\mathrm{E}(\\mathcal{J}(\\theta))"
},
{
"math_id": 11,
"text": "O(n^{-3/2})"
}
] | https://en.wikipedia.org/wiki?curid=9991540 |
999303 | Handicap (golf) | Measure of a golfer's playing ability
A golf handicap is a numerical measure of a golfer's ability, or potential ability, that is used to enable players of different abilities to compete against one another. Better players are those with the lowest handicaps.
Historically, rules relating to handicaps have varied from country to country with many different systems in force around the world. Because of incompatibilities and difficulties in translating between systems, the sport's governing bodies, the USGA and The R&A, working with the various existing handicapping authorities, devised a new World Handicap System (WHS) which began to be introduced globally in 2020.
History.
The earliest record of golf handicapping is thought to be from the late 17th century, in a diary kept by Thomas Kincaid, who was a student in Edinburgh, Scotland, although the word "handicap" would not come into use in golf until the late 19th century. The number of strokes to be given and the holes on which they would be in effect was negotiated between competing golfers prior to the start of play. According to "The Golfer's Manual" by Henry Brougham Farnie, examples of agreed terms included "third-one" (one stroke every three holes), "half-one" (one stroke every two holes), "one more" (a stroke a hole) and "two more" (two strokes a hole).
During the late 19th century, taking the difference between the average of a golfer's best three scores during the year and par became the most widely used method of handicapping in England and Scotland. As the sport grew, so did discontent with the fairness of handicapping, with less proficient players being particularly unhappy as it was much less likely for them to play to the standard of their three-score average. Another issue was the lack of consideration in the system for the varying difficulties of different courses which meant the handicap was not very portable.
In an attempt to remedy the problems with a fairly basic handicap system, along with many variations of that system and other systems also being used, the authorities in Great Britain and Ireland sought to standardize. One of the first standard and equitable handicap systems was introduced by the Ladies Golf Union (LGU) in the 1890s. This was largely achieved by means of union-assigned course ratings, instead of clubs using their own. It was not until the formation of the British Golf Unions Joint Advisory Committee in 1924 that the men's game fully coordinated to create an equitable handicap system, that included a uniform course rating, throughout Great Britain and Ireland; the "Standard Scratch Score and Handicapping Scheme" was introduced in 1926.
In the United States there was a single authority governing the sport, the USGA, which made moving to a single standard handicapping scheme somewhat easier. Introduced in 1911, the first national handicap system was based on the British three-score average system. The biggest development was a "par rating" system that assessed the average good score of a scratch golfer on every course, which made the handicap more portable. It also made clear that a player's handicap was intended to reflect their potential rather than average play. Having initially allowed clubs to determine their own par ratings, the USGA quickly changed their minds and began assigning ratings. The USGA Handicap System has further developed through the years, with an increase in the number of scores used for handicap calculations, the introduction of Equitable Stroke Control, and improvements to the course rating system. However the most significant change was the creation of the slope rating system, which enables handicaps to allow for differences in difficulty between scratch and bogey golfers. USGA Course and Slope Ratings now form the basis of many other handicap systems.
As the sport grew globally, associations around the world each created or adapted their own rules relating to handicaps. By the early 21st century, there were six major recognized handicapping systems in operation around the world: USGA Handicap System, EGA Handicap System, CONGU Unified Handicap System, Golf Australia Handicap System, South African Handicap System, and Argentinian Handicap System. While these systems share some common features, e.g. most use a common course rating system, they are not easily portable because their differences create difficulties in converting handicaps between systems. In order to eliminate these problems the USGA and The R&A, working with the various existing handicapping authorities, devised a new "World Handicap System" which was phased in globally in 2020.
Overview.
Amateur golfers who are members of golf clubs are generally eligible for official handicaps on payment of the prevailing regional and national association annual fees. Official handicaps are administered by golf clubs with the associations often providing additional peer reviewing for low handicaps. Other systems, often free of charge, are available to golfers who are ineligible for official handicaps. Handicap systems are not generally used in professional golf. A golfer whose handicap is zero is referred to as a scratch golfer, and one whose handicap is approximately 18 as a bogey golfer.
While the USGA administers its own handicapping system, the administration of handicapping systems in countries affiliated to The R&A is the responsibility of the national golf associations of those countries. These bodies have different methods of producing handicaps but they are all generally based on calculating an individual player's playing ability from their recent history of rounds. Therefore, a handicap is not fixed but is regularly adjusted to increases or decreases in a player's scoring. Some systems (e.g. World Handicap System, USGA, European Golf Association) involve calculation of a playing handicap which is dependent on the course being played and set of tees that are being used, whereas others (e.g. CONGU's Unified Handicap System) just use the allocated handicap rounded to the nearest whole number.
Contrary to popular opinion, a player's handicap is intended to reflect a player's potential or "average best", not a player's overall average score. Statistically, low handicappers will play to their handicap more often because they are likely to be more consistent than higher handicappers.
Features of handicapping systems.
Scoring.
The total number of strokes taken for a hole (or round) before accounting for a golfer's handicap is called the "gross score" for that hole (or round), and the number of strokes taken after subtracting any handicap allowance is called the "net score".
Under the World Handicap System, the "adjusted gross score" is the gross score adjusted so that the score counted on any particular hole is capped at a net double bogey, i.e. par for the hole + 2 strokes + any handicap strokes received on that hole.
In handicap stroke play competitions, a golfer's playing handicap is subtracted from the total number of strokes taken to produce a net score, which is then used to determine the final results. In handicap Stableford competitions, a player's handicap is distributed according to predetermined hole ratings (stroke index) and strokes deducted accordingly from each hole score before calculating the points for that hole. In match play, the handicap difference between players (or teams) is used to determine the number of strokes the high handicap player should receive from the low handicapper during the playing of their round; each of these strokes is received on the lowest numbered stroke index holes. Stroke allowances may sometimes be reduced by a set percentage in order to maintain a level playing field; this is especially common in pairs and team competitions.
Course Rating.
Course Rating, (Standard) Scratch Score, Scratch Rating, and Standard Rating are largely equivalent ratings that are used to indicate the average "good score" by a scratch golfer for a set of tees on a golf course. For a par 72 course, the course rating is generally between 67 and 77. There are different methods of calculating the Course Rating, with the length of the course and its obstacles being the biggest factors. Some systems use only these two, or even length alone, but most modern handicapping systems now use the "USGA Course Rating" system which assesses the difficulty of all aspects of the course, e.g. altitude, wide or narrow fairways, length of any rough, the size and contours of the greens, etc.
Some handicapping systems provide for an adjustment to the course rating to account for variations in playing conditions on any given day, e.g. course setup and weather, and it is against this adjusted rating that handicaps are assessed and maintained. Examples of adjusted ratings are Playing Conditions Calculation (World Handicap System), Competition Scratch Score (CONGU Unified Handicapping System), Daily Scratch Rating (Golf Australia Handicap System), and Calculated Rating (South African Handicap System).
Analogous to course rating is the bogey rating, which is a measure of the playing difficulty of a course for a bogey golfer.
Slope Rating.
Devised by the USGA, the Slope Rating of a golf course describes the relative difficulty of a course for a bogey golfer compared to a scratch golfer. Slope Ratings are in the range 55 to 155, with a course of standard relative difficulty having a rating of 113; the higher the number, the more relatively difficult the course is.
Playing or course handicap.
In most major handicapping systems, a golfer does not use their exact handicap (or handicap index) directly, but uses it to produce their playing or course handicap. For some systems, this means simply rounding the exact handicap to the nearest whole number; however, systems that use slope ratings require a more complex calculation to produce a course handicap with some also factoring in the course rating:
formula_0
or
formula_1
The USGA and Golf Australia systems use the first calculation; the WHS, EGA, and Golf RSA systems use the second. Under CONGU's Unified Handicapping System the exact handicap is rounded to the nearest whole number to produce the playing handicap, and in the Argentinian system the exact handicap is used directly.
A playing handicap may also refer to the stroke allowance for a given competition dependent on playing format, and is generally calculated as a percentage of the course handicap.
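As an illustration of the difference between the two formulas above, the short sketch below uses an assumed handicap index, slope rating, course rating and par (none taken from a real course):

```python
# Hypothetical values for illustration only.
handicap_index = 12.4
slope_rating = 125
course_rating = 69.8
par = 72

# First form (slope only), as used by the USGA and Golf Australia systems
slope_only = handicap_index * slope_rating / 113
print(round(slope_only))                            # 14

# Second form (slope plus course rating minus par), as used by the WHS, EGA and GolfRSA
with_rating_adjustment = slope_only + (course_rating - par)
print(round(with_rating_adjustment))                # 12
```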
Stroke Index.
The Stroke Index is a number that has been assigned to each hole on a golf course, and usually printed on the scorecard, to indicate on which holes handicap strokes should be applied. On an 18-hole course, each hole is assigned a different number from 1 to 18 (1 to 9 on a 9-hole course). The lowest numbers are usually given to the holes where a higher handicapper is most likely to benefit, and the highest numbers to the holes from which they are least likely to benefit. Odd numbers will be allocated to either the first or second nine holes (and even numbers to the other) to ensure a balanced distribution of handicap strokes, and guidelines generally recommend not placing the lowest numbers at the start or end of each nine, to prevent stroke allowances being used too early in playoffs between golfers with similar handicaps, or going unused if they fall on the closing holes.
Maximum hole score.
Most of the commonly used handicap systems seek to reduce the impact of very high scores on one or more individual holes on the calculation and updating of handicaps. This is achieved by setting a maximum score on each hole, which is only used for handicapping purposes; i.e. it is not used for determining results of competitions or matches. This "maximum hole score" is either a fixed number or a net score relative to par. "Equitable Stroke Control" (ESC) and "net double bogey" (also called Stableford Points Adjustments) are the two most common mechanisms for defining a maximum hole score.
Handicap differential.
Handicap (or score) differentials are a feature of many handicapping systems. They are a standardized measure of a golfer's performance, adjusted to take account of the course being played. Normally the overall score will be adjusted prior to the calculation, e.g. by means of ESC or net double bogey. The course rating may also be adjusted to take account of conditions on the day.
For handicapping systems that use course and slope ratings, a typical calculation using the "score" (see above) is as follows:
formula_2
The differentials are used both to calculate initial handicaps and maintain existing ones, by taking a mean average of a set number of the best recent differentials (e.g. the USGA system uses the best 10 differentials from the last 20 scores).
For other handicapping systems, the differentials are simply the difference between the (adjusted) gross or net scores and a specified standard rating (e.g. course rating, standard scratch score, etc.), and they are used in different ways to maintain handicaps.
Peer review.
In golf clubs, peer review is usually managed by an elected Handicap Secretary who, supported by a small committee, conducts an Annual Review of the handicaps of all members and assesses ad hoc requests from individual members (usually when age or medium to long-term infirmity affects their playing ability). This gives uniformity to handicapping across their club for the setting and maintenance of handicaps with the objective of establishing fair competition between golfers of all abilities.
At the regional level, peer review is extended to include rigorous validation of the handicap returns of low handicap golfers. This ensures that only golfers of an appropriate standard gain entry to their elite tournaments. Occasionally, golfers are excluded from the elite game as a consequence of being found to abuse the system. To a degree, these regional bodies also monitor the performance of and provide training for Handicap Secretaries at the club level.
Nationally, the peer review is extended further to assessing golfers from external jurisdictions for their suitability for entry into their elite international events. They also play a large part in periodic reviews of the handicapping system itself to improve it for the future.
Handicapping systems.
World Handicap System.
Because of the many different handicapping systems in use around the world, and the inconsistencies within them that make it difficult to compete on an equal footing where another handicap system is in use, the sport's major governing bodies, The R&A and the USGA, began work in 2011 on the creation of a single uniform handicapping system to be used everywhere. In February 2018, they announced that the World Handicap System (WHS) would be launched in 2020. Once introduced, the World Handicap System will continue to be governed by The R&A and the USGA with the six existing major handicapping authorities (the USGA, the Council of National Golf Unions (CONGU) in Great Britain and Ireland, the European Golf Association (EGA), Golf Australia, the South African Golf Association (SAGA), and the Argentine Golf Association (AAG)) administering the system at a local level.
The WHS is based on the USGA Course and Slope Rating system, and largely follows the USGA Handicap System while also incorporating features from the six major existing handicap systems. For example, 8 differentials (like the Golf Australia system) are used after net double bogey adjustments (like the CONGU and EGA systems) for handicap calculations, and the WHS course/playing handicap includes a course rating adjustment (like the EGA system). For players with current handicaps, their handicap records in the old systems will be used to produce WHS handicaps; the expectation is that most players will at most see a difference of one or two strokes, if any.
A new WHS handicap requires several scores to be submitted; the recommendation is a minimum of 54 holes made up of any number of 9 or 18-hole rounds in order to achieve a reasonably fair and accurate result, although handicaps may be issued from a smaller sample. Handicap adjustments will be made upon submission of any 9 or 18-hole scores with updates published daily; unlike some other systems both competitive and recreational rounds may be submitted by all players (e.g. CONGU's Unified Handicapping System only allows submission of non-qualifying scores by golfers in Category 2 or above). Ongoing handicaps are based on the average of the best 8 differentials, but with an "anchor" to prevent rapid increases that would not necessarily reflect the player's true potential. There is also a hole limit of "net double bogey" for handicapping purposes in order to prevent one or two bad holes from having a disproportionate effect.
World Handicap System overview.
A WHS handicap is calculated with a specific arithmetic formula that approximates how many strokes above or below par a player might be able to play, based on the eight best scores of their last twenty rounds. The calculation has several variables, including: the player's scores from their most recent rounds, the course rating, and the slope rating.
A score differential is calculated from each of the scores after any net double bogey adjustments (an adjustment which allows for a maximum number of strokes per hole based on the player's course handicap) have been applied, using the following formula:
formula_3
formula_4
Only 18-hole differentials are used for the calculation of a handicap index. As such, 9-hole differentials need to be combined before being used, provided they remain among the 20 most recent differentials. The system also allows for situations where fewer than 18 (or 9) holes have been played, subject to a minimum of 14 (or 7) holes having been completed, by "scaling up" with net pars for any missing holes.
The score differentials are rounded to one decimal place, and the best 8 from the last 20 submitted scores are then averaged and rounded to one decimal place to produce the handicap index. Initial handicaps are calculated from a minimum of five scores using adjustments that limit each hole score to a maximum of formula_5. If there are at least 5 but fewer than 20 qualifying scores available, the handicap index is calculated using a set number of differentials according to how many scores are available, with an additional adjustment made to that average in some circumstances.
The basic formula for calculating the handicap index is as follows (where formula_6 is the number of differentials to use), with the result rounded to one decimal place:
formula_7
The handicap index is not used directly for playing purposes, but used to calculate a course handicap according to the slope rating of the set of tees being used with an adjustment based on the difference between the course rating and par. The result is rounded to the nearest whole number. For competitions, the unrounded course handicap is converted to a playing handicap by applying a handicap allowance, dependent on the format of play.
formula_8
The WHS contains measures to reduce a handicap index more quickly in the case of exceptional scoring, and also to prevent a handicap index from rising too quickly. This is done by means of "soft" and "hard" caps based on the lowest index during the previous 365 days; the soft cap reduces increases above 3.0 to 50%, and the hard cap limits increases to 5.0. Updates to a golfer's handicap index are issued daily.
Many elements of WHS have flexibility which allows for local authorities to determine their own settings, but the basic handicap index calculation remains the same. Examples include: 9-hole scores may be scaled-up rather than combined; formula_9 may be omitted from the course handicap calculation; and the rounded course handicap may be used in the playing handicap calculation.
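A minimal sketch of the score differential and handicap index calculations described above is given below; the scores and ratings are invented, and the sketch ignores initial-handicap adjustments, the soft and hard caps, and 9-hole handling:

```python
def score_differential(adjusted_gross, course_rating, slope_rating, pcc=0):
    # 18-hole score differential, rounded to one decimal place
    return round(113 / slope_rating * (adjusted_gross - course_rating - pcc), 1)

def whs_handicap_index(differentials):
    # average of the best 8 of the most recent 20 differentials, rounded to one decimal
    best_8 = sorted(differentials[-20:])[:8]
    return round(sum(best_8) / len(best_8), 1)

# 20 assumed adjusted gross scores on a course rated 71.8 with slope 125
scores = [88, 92, 85, 90, 95, 84, 89, 91, 87, 93,
          86, 90, 88, 94, 85, 89, 92, 87, 90, 86]
differentials = [score_differential(s, 71.8, 125) for s in scores]
print(whs_handicap_index(differentials))
```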
USGA Handicap System.
The first handicap system to be introduced by the USGA was largely the work of Leighton Calkins, who based it on the British "three-score average" system where the handicap was calculated as the average of the best three scores to par in the last year. The key difference was the introduction of a par rating (later known as course rating), which was based on the ability of leading amateur Jerome Travers, to account for variances in the playing difficulty of different courses. After initially allowing clubs to determine their own ratings, at the behest of Calkins the USGA quickly began assigning ratings centrally. Course ratings were rounded to the nearest whole number until 1967, when they started being given to one decimal place.
In 1947, the number of scores used to calculate handicaps was increased to the best 10 from all scores ever recorded subject to a minimum of 50. However this was not uniformly implemented, with regional associations disagreeing on the total number of rounds to be considered. In 1958, the USGA specified that the best 10 from 25 scores would be used. This was reduced to 10 from 20 in 1967, which remains to this day although a further adjustment was made with the introduction of a "Bonus of Excellence" multiplier to equalize handicaps and give better players a marginal advantage. Originally 85%, the multiplier was changed to 96% after being seen to favor better players too heavily. In 1974, Equitable Stroke Control was adopted in order to eliminate the effect of very high individual hole scores on handicap calculations.
With the system still not accounting for variances in playing difficulty for golfers of different abilities, in 1979 the USGA set to work on how to address the issue with the creation of the Handicap Research Team. The result of their work was the creation of what is now the Slope system. Slope was gradually introduced, firstly in Colorado in 1982, before being implemented nationally from 1987. The USGA then set about making further refinements to the course rating system, which at the time was still largely dependent on length, to take account of many other factors affecting scoring ability for a scratch golfer. The USGA Course and Slope Rating system is now used by most of the world's major handicapping systems.
The USGA Handicap System is used throughout the jurisdiction of the USGA (i.e. the United States and Mexico), and is also licensed for use in many other countries around the world, e.g. Canada. The USGA has often resorted to the courts to protect the integrity of its handicap system. In one such case, the California Court of Appeal (First District) summarized the system's history:
<templatestyles src="Template:Blockquote/styles.css" />The USGA was founded in 1894. One of its chief contributions to the game of golf in the United States has been its development and maintenance since 1911 of the USGA handicap system ... designed to enable individual golf players of different abilities to compete fairly with one another. Because permitting individual golfers to issue their handicaps to themselves would inevitably lead to inequities and abuse, the peer review provided by authorized golf clubs and associations has always been an essential part of the [system]. Therefore, to protect the integrity and credibility of its [handicap system], the USGA has consistently followed a policy of only permitting authorized golf associations and clubs to issue USGA handicaps ... In 1979, USGA assembled a handicap research team to investigate widespread criticisms of USGA's then-existing handicap formula. The research team invested approximately a decade and up to $2 million conducting intensive analysis and evaluation of the various factors involved in developing a more accurate and satisfactory [system]. As a result, the research team developed new handicap formulas ... designed to measure the overall difficulty of golf courses, compare individual golfers with other golfers of all abilities, take account of differences between tournament and casual play, and adjust aberrant scores on individual holes. USGA subsequently adopted and implemented these new [f]ormulas between 1987 and 1993.
USGA Handicap System overview.
A USGA handicap is calculated with a specific arithmetic formula that approximates how many strokes above or below par a player might be able to play, based on the ten best scores of their last twenty rounds. The calculation has several variables: the player's scores from their most recent rounds, and the course and slope ratings from those rounds.
A handicap differential is calculated from each of the scores after Equitable Stroke Control (ESC), an adjustment which allows for a maximum number of strokes per hole based on the player's course handicap, has been applied using the following formula:
formula_10
The handicap differentials are rounded to one decimal place, and the best 10 from the last 20 submitted scores are then averaged, before being multiplied by 0.96 (the "bonus of excellence") and truncated to one decimal place to produce the handicap index. Initial handicaps are calculated from a minimum of five scores using ESC adjustments based on the course handicap corresponding to a handicap index of 36.4 for men or 40.4 for women. If there are at least 5 but fewer than 20 qualifying scores available, the handicap index is calculated using a set number of differentials according to how many scores are available.
The basic formula for calculating the handicap index is as follows (where formula_6 is the number of differentials to use), with the result truncated to one decimal place:
formula_11
The handicap index is not used directly for playing purposes, but used to calculate a course handicap according to the slope rating of the set of tees being used. The result is rounded to the nearest whole number.
formula_12
Updates to a golfer's handicap index are issued periodically, generally once or twice per month depending on the local state and regional golf associations.
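The following is a minimal sketch of the calculation just described, with invented differentials; it ignores the initial-handicap rules and simply applies the best-10-of-20 average, the 0.96 multiplier and the truncation:

```python
import math

def usga_handicap_index(differentials):
    best_10 = sorted(differentials[-20:])[:10]
    raw = sum(best_10) / len(best_10) * 0.96     # "bonus of excellence" multiplier
    return math.floor(raw * 10) / 10             # truncated (not rounded) to one decimal

differentials = [14.2, 12.8, 15.1, 11.9, 13.4, 16.0, 12.2, 14.8, 13.1, 15.6,
                 11.5, 12.9, 14.0, 13.7, 16.3, 12.4, 15.0, 13.9, 11.8, 14.5]
print(usga_handicap_index(differentials))        # 12.0
```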
CONGU Unified Handicapping System.
Following a meeting of the four men's golf unions of Great Britain and Ireland in York arranged by The Royal and Ancient Golf Club of St Andrews in 1924, the British Golf Unions Joint Advisory Committee (later Council of National Golf Unions) was formed. The organization was tasked with creating a handicapping system that would be equitable to golfers of varying ability, and as a result the Standard Scratch Score and Handicapping Scheme was devised. The system was introduced in 1926, and used a "scratch score" system to rate courses, taking account that courses may play easier or more difficult than par.
A new system was introduced in 1983, which incorporated features of the Australian system. This was further revised in 1989 with the introduction of the "Competition Scratch Score" (CSS), an adjustment to the "Standard Scratch Score" (SSS), to take account of variances in course conditions (setup, weather, etc.) on a given day. Further significant changes came in 1993 (buffer zones) and 1997 (Stableford Points Adjustment). In 2002, the Council of National Golf Unions (CONGU) and the Ladies' Golf Union (LGU) began working together (the LGU had adopted a system similar to that of CONGU in 1998) and in February 2004 the Unified Handicapping System (UHS) came into force.
The Unified Handicapping System is used to manage handicaps for both men and women who are members of affiliated golf clubs in the United Kingdom and the Republic of Ireland. The system is published by CONGU and administered by each of the individual unions on behalf of their members, with handicaps being managed locally by someone at each club; this person normally holds the position of competitions or handicap secretary.
Unified Handicapping System overview.
Under the Unified Handicapping System, initial handicaps are allocated based on returned scores from 54 holes, usually three 18-hole rounds. The number of strokes taken on each hole is adjusted to a maximum of double the par of the hole before adding up the scores; adjustments were previously 2 over par for men and 3 over par for women. The best of the "adjusted gross differentials" (AGD) between the adjusted score and the Standard Scratch Score (SSS) is taken to calculate the initial handicap using the following formula, with the result truncated to give a whole number:
formula_13
Adjustments may be made to the initial handicap should it be deemed necessary to ensure it is reasonably fair. Handicaps are given to one decimal place and divided into categories with the lowest handicaps being in Category 1. Prior to 2018, the highest handicaps were in Category 4 for men, with a maximum of 28.0, and Category 5 for women, with a maximum of 36.0, with provision for higher "club" or "disability" handicaps up to a limit of 54.0 for those who cannot play to these lower limits. In 2018, handicap limits were standardized at 54.0 and a Category 5 was introduced for men, and a new Category 6 for all, replacing the club and disability category (see table below). The exact handicap is rounded to the nearest whole number to give the playing handicap. Many handicap competitions still have maximum limits of 28 for men and 36 for women.
For all qualifying scores that are returned, adjustments are made to a player's exact handicap based on the Competition Scratch Score (CSS). All hole scores are first adjusted to a maximum of net 2-over par with handicap strokes being used per the stroke index published on the scorecard; this is called Stableford or net double-bogey adjustment. Every stroke the adjusted net score is below the CSS triggers a reduction dependent on the player's handicap category; for Category 1 this is 0.1 per stroke, for Category 2 it is 0.2, etc. Should the adjusted net score exceed the CSS, there is a "buffer zone" equivalent to the handicap category before a 0.1 increase is applied, which is the same for all categories; for Category 1 there is a 1-stroke buffer, for Category 2 it is 2 strokes, etc. The Competition Scratch Score is an adjustment to the Standard Scratch Score computed from all scores returned and is in the range formula_14 to formula_15 with provision for "reduction only" when scoring conditions have proved especially difficult.
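The per-round adjustment just described can be sketched as follows; the category bands used here are assumptions for illustration only, and the real scheme includes further rules (exceptional score reductions, annual review, etc.):

```python
def category(exact_handicap):
    # assumed category bands for illustration; the published scheme defines the exact limits
    for cat, upper_limit in enumerate([5.4, 12.4, 20.4, 28.0, 36.0], start=1):
        if exact_handicap <= upper_limit:
            return cat
    return 6

def adjust_handicap(exact_handicap, adjusted_net_score, css):
    cat = category(exact_handicap)
    strokes_vs_css = adjusted_net_score - css
    if strokes_vs_css < 0:
        # below the CSS: cut 0.1 x category for every stroke below
        return round(exact_handicap + strokes_vs_css * 0.1 * cat, 1)
    if strokes_vs_css > cat:
        # outside the buffer zone: a 0.1 increase regardless of category
        return round(exact_handicap + 0.1, 1)
    return exact_handicap    # within the buffer zone: no change

print(adjust_handicap(12.3, 69, 72))   # Category 2, 3 below CSS: 12.3 - 0.6 = 11.7
```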
In addition to playing in qualifying competitions, golfers in Category 2 and above may also submit a number of "supplementary scores" in order to maintain their handicap; primarily a feature to accommodate golfers who play in few competitions and allow them to maintain current handicaps, it is also used by people who wish to try and get their handicap down while they are playing well. There are other mechanisms in the system to reduce or increase handicaps more quickly. Every year all handicaps are reviewed and adjusted if necessary to ensure they remain fair and accurate. In addition, any very good scores are monitored throughout the year and an "exceptional scoring reduction" may be applied if certain triggers are reached.
Historically calculating the CSS and any handicap adjustments was done manually by means of published tables, but this is now computerized with handicaps being published to a Centralised Database of Handicaps (CDH).
EGA Handicap System.
The EGA Handicap System is the European Golf Association's method of evaluating golf abilities so that players of different standards can compete in handicap events on equal terms. It is based on Stableford scoring and has some similarities to both the CONGU system, with regards to handicap categories and adjustments, and to the USGA system, with regards to the use of course and slope ratings and calculating playing handicaps. The first version of the system was introduced in 2000.
EGA Handicap System overview.
Under the EGA Handicap System, initial handicaps require just a single 9 or 18-hole score recorded using the maximum handicap of 54. The handicap is then calculated from the number of Stableford points scored.
formula_16
EGA handicaps are given to one decimal place and divided into categories, with the lowest handicaps being in Category 1 and the highest in Category 6 (see table below). The handicap is not used directly for playing purposes and a calculation must be done to determine a "playing handicap" specific to the course being played and set of tees being used. For handicaps in categories 1 to 5, the formula is as follows with the result rounded to the nearest whole number:
formula_17
And for category 6 a "playing handicap differential" is used, which is equal to the playing handicap for a handicap index of 36.0:
formula_18
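A short worked example of the formulas above, using assumed scores and ratings:

```python
# Initial handicap from a single qualifying round (assumed 40 Stableford points)
stableford_points = 40
initial_handicap = 54 - (stableford_points - 36)
print(initial_handicap)                 # 50

# Playing handicap for a category 1-5 player (assumed index and course figures)
handicap_index = 11.6
slope_rating, course_rating, par = 128, 70.9, 72
playing_handicap = round(handicap_index * slope_rating / 113 + (course_rating - par))
print(playing_handicap)                 # 12
```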
For all qualifying scores that are returned, adjustments are made to a player's handicap index. All scores are first converted into Stableford points if necessary (i.e. rounds played using another scoring method, e.g. stroke play), effectively applying a net double bogey adjustment, and then for every point scored above the "buffer zone" there is a reduction applied to the player's handicap index according to their handicap category; for Category 1 this is 0.1 per point, for Category 2 it is 0.2, etc. Should the number of points scored be below the buffer zone, a fixed increase of 0.1 is applied to the handicap index regardless of category. The EGA system also takes account of variations in playing difficulty on any given competition day by means of a "Computed Buffer Adjustment" (CBA) which adjusts the buffer zones by between −1 and +2 with provision for "reductions only" when scoring is especially difficult. The CBA replaced the previous Competition Stableford Adjustment method, which adjusted players' Stableford scores directly, in 2013.
In addition to playing in qualifying competitions, golfers in Category 2 and above may also submit a number of "extra day scores" in order to maintain their handicap. Handicaps are also reviewed annually and any necessary adjustments made.
Golf Australia Handicap System.
The Golf Australia Handicap System is maintained on GOLF Link, which was a world-first computerized handicapping system developed by Golf Australia's predecessor, the Australian Golf Union (AGU) in the 1990s. When GOLF Link was first introduced it contained two key characteristics that set it apart from other world handicapping systems at the time:
In April 2010 GA adopted the USGA calculation method using the average of the best 10 differentials of the player's past 20 total rounds, multiplied by 0.96. In September 2011 this was altered to the best 8 out of 20 rounds, multiplied by 0.93. The stated reason for these changes was to restore equity between high and low handicaps. An "anchor" so that handicaps could not increase by more than 5 in a rolling 12-month period, slope ratings, and a more sophisticated version of CCR called the Daily Scratch Rating (DSR) were implemented on January 23, 2014.
GA Handicap System overview.
The GA Handicap System is based on the Stableford scoring system, and uses slope and course rating (called "Scratch Rating"). For handicapping purposes, the scratch rating is adjusted to reflect scoring conditions ("Daily Scratch Rating"), and all scores are converted into Stableford points, called the Stableford Handicap Adjustment (SHA) and inherently applying net double bogey adjustments, regardless of the scoring system being used while playing.
Handicaps are calculated from the best 8 adjusted differentials, called "sloped played to" results, from the most recent 20 scores. Should there be 3 or more but fewer than 20 scores available, a specified number of "sloped played to" results are used, per the table below.
New handicaps require 3 18-hole scores to be submitted (or any combination of 9 and 18-hole scores totaling 54 holes played) using a "Temporary Daily Handicap" of 36 for men or 45 for women in order to calculate the necessary "sloped played to" results. "Sloped played to" results are calculated using the following formula and rounded to one decimal place:
formula_19
To calculate the GA handicap, the "sloped played to" results are averaged and multiplied by a factor of 0.93, which is intended to equalize the handicap in favor of better players. The formula for calculating a GA handicap is as follows (where formula_6 is the number of differentials to use), with the result truncated to one decimal place:
formula_20
The GA handicap is used to create a "daily handicap", specific to the course and set of tees being used, using the following formula with the result rounded to the nearest whole number:
formula_21
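The three formulas above can be illustrated with a short sketch; all figures below are invented for the example:

```python
# "Sloped played to" for one round (assumed par, daily handicap, points, DSR and slope)
par, daily_handicap, stableford_points, dsr, slope = 72, 18, 33, 73.5, 120
sloped_played_to = round((par + daily_handicap - (stableford_points - 36) - dsr) * 113 / slope, 1)
print(sloped_played_to)                        # 18.4

# GA handicap: 0.93 times the mean of the best 8 results, truncated to one decimal place
best_8 = [18.4, 17.2, 19.0, 16.8, 18.1, 17.7, 19.3, 16.5]
ga_handicap = int(0.93 * sum(best_8) / 8 * 10) / 10
print(ga_handicap)                             # 16.6

# Daily handicap for the next round on the same tees
print(round(ga_handicap * slope / 113))        # 18
```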
South African Handicap System.
Before 2018, the South African Handicap System used a proprietary course rating system without slope, called Standard Rating, which included specific calculations for length and altitude. Handicaps were calculated using the best 10 from the last 20 differentials, with differentials derived by means of a simple (Standard Rating − Adjusted Gross) formula. The system previously calculated handicaps against an adjusted Standard Rating (called Calculated Rating) but this was suspended in 2012. Playing handicaps were simply the exact handicap, rounded to the nearest whole number.
In September 2018, the renamed GolfRSA Handicap System adopted the USGA Course and Slope Rating system. This necessitated a few additional changes (e.g. playing handicap and differential calculations), but the system retained all other features (e.g. Adjusted Gross and no daily course rating adjustment). The playing handicap under the GolfRSA system includes the difference between the Course Rating and Par.
In October 2019, further changes were made which brought the GolfRSA Handicap System further into line with the upcoming World Handicap System. The changes introduced included reducing the number of differentials used in handicap calculations from 10 down to 8, net double bogey as the maximum score per hole, reducing the minimum number of valid 18-hole scores required for handicapping to three, and exceptional scoring reductions.
Argentinian Handicap System.
The Argentine Golf Association (AAG) handicapping system is a relatively simple one, using only a course rating, without slope. New handicaps require the submission of scorecards from five 18-hole rounds (or ten 9-hole rounds). An initial handicap of 25 is normally used as a starting point, which is then adjusted based on the submitted scores. Handicaps are updated once every month, with current handicaps generated from a lookup table using the average of the best eight differentials from the last 16 rounds. Golfers simply use their exact handicap for playing purposes.
Other systems.
For the handicapping of golfers who are ineligible for an official handicap, some system options are available:
Peoria System.
The "Peoria System" was designed for the handicapping of all players competing in an event such as a "charity" or "corporate" golf day. Before play commences, the organisers secretly select 6 holes (in readiness for handicapping purposes later) from the course to be played. When players have completed their rounds, they apply the "Peoria" algorithm to their scores on the selected holes to determine their handicap for that round. They then subtract that handicap from their gross score to give their net score - and the winner is determined in the usual way.
Callaway System.
The "Callaway System" was designed with the same objective as "Peoria". The "Callaway" handicapping algorithm works by totaling a variable number of "worst" scores achieved (subject to a double-par limit) according to a simple table. A couple of adjustments are then made to this total to give the player's handicap, which is then applied to their gross score as normal.
Scheid System.
The "Scheid System" is similar to the Callaway System, except a different version of the table is used.
System 36.
"System 36" is a same-day handicapping system similar in function to "Callaway System" and "Peoria System". Throughout the round, the golfer accrues points based on the following formula:
At the end of the round, points earned are tallied. The total is subtracted from 36, and the resulting number is the golfer's handicap allowance. The golfer's net score can then be computed using their System 36 handicap allowance.
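A small sketch of the System 36 computation; the per-hole point allocation used here (2 for par or better, 1 for a bogey, 0 otherwise) is the commonly quoted form of the system:

```python
def system36_net(hole_scores, hole_pars):
    points = 0
    for score, par in zip(hole_scores, hole_pars):
        if score <= par:
            points += 2            # par or better
        elif score == par + 1:
            points += 1            # bogey
    allowance = 36 - points
    return sum(hole_scores) - allowance

pars   = [4, 4, 3, 5, 4, 4, 3, 5, 4, 4, 4, 3, 5, 4, 4, 3, 5, 4]
scores = [5, 4, 4, 6, 5, 4, 3, 7, 5, 5, 4, 4, 6, 5, 4, 4, 6, 5]
print(system36_net(scores, pars))   # 86 gross, 22 points, allowance 14, net 72
```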
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mbox{Course handicap} = \\frac{ ( \\mbox{handicap index} \\times \\mbox{slope rating} ) }{ \\mbox{113}}"
},
{
"math_id": 1,
"text": " \\mbox{Course handicap} = \\frac{ ( \\mbox{handicap index} \\times \\mbox{slope rating} ) }{ \\mbox{113}} + ( \\mbox{course rating} - \\mbox{par} )"
},
{
"math_id": 2,
"text": " \\mbox{Handicap differential} = \\frac{ ( \\mbox{adjusted score} - \\mbox{course rating} ) \\times \\mbox{113}}{ \\mbox{slope rating}} "
},
{
"math_id": 3,
"text": " \\mbox{18-hole score differential} = \\frac{\\mbox{113}}{\\mbox{slope rating}} \\times ({\\mbox{adjusted gross score}} - {\\mbox{course rating}} - {\\mbox{PCC adjustment}})"
},
{
"math_id": 4,
"text": " \\mbox{9-hole score differential} = \\frac{\\mbox{113}}{\\mbox{slope rating}} \\times ({\\mbox{adjusted gross score}} - {\\mbox{course rating}} - ({\\mbox{0.5}} \\times {\\mbox{PCC adjustment}}))"
},
{
"math_id": 5,
"text": "(\\mbox{par} + \\mbox{5})"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": " \\mbox{Handicap index} = (\\frac{1}{n}\\sum_{x=1}^{n} \\mbox{Differential}_x) + \\mbox{adjustment}"
},
{
"math_id": 8,
"text": " \\mbox{Course Handicap} = \\frac{ ( \\mbox{Handicap index} \\times \\mbox{Slope Rating} )}{113} + ( \\mbox{course rating} - \\mbox{par} )"
},
{
"math_id": 9,
"text": "( \\mbox{course rating} - \\mbox{par} )"
},
{
"math_id": 10,
"text": " \\mbox{Handicap differential} = \\frac{ ( \\mbox{ESC adjusted score} - \\mbox{course rating} ) \\times \\mbox{113}}{ \\mbox{slope rating}} "
},
{
"math_id": 11,
"text": " \\mbox{Handicap index} = \\frac{0.96}{n}\\sum_{x=1}^{n} \\mbox{Differential}_x"
},
{
"math_id": 12,
"text": " \\mbox{Course Handicap} = \\frac{ ( \\mbox{Handicap index} \\times \\mbox{Slope Rating} )}{113} "
},
{
"math_id": 13,
"text": " \\mbox{Initial handicap} = \\frac{ ( \\mbox{Lowest AGD} + ( \\mbox{Lowest AGD} \\times \\mbox{0.13}) ) }{ \\mbox{1.237}} "
},
{
"math_id": 14,
"text": "(SSS - 1)"
},
{
"math_id": 15,
"text": "(SSS + 3)"
},
{
"math_id": 16,
"text": " \\mbox{Initial handicap} = \\mbox{54} - ( \\mbox{Stableford points} - \\mbox{36} )"
},
{
"math_id": 17,
"text": " \\mbox{Playing handicap} = \\frac{ ( \\mbox{handicap index} \\times \\mbox{slope rating} ) }{ \\mbox{113}} + ( \\mbox{course rating} - \\mbox{par} )"
},
{
"math_id": 18,
"text": " \\mbox{Playing handicap} = ( \\mbox{handicap index} + \\mbox{playing handicap differential} )"
},
{
"math_id": 19,
"text": " \\mbox{Sloped played to} = ( \\mbox{Par} + \\mbox{Daily handicap} - ( \\mbox{Stableford points} - \\mbox{36} ) - \\mbox{Daily scratch rating}) \\times \\frac{ \\mbox{113} }{ \\mbox{Slope Rating}} "
},
{
"math_id": 20,
"text": " \\mbox{GA handicap} = \\frac{0.93}{n}\\sum_{x=1}^{n} \\mbox{Sloped played to}_x"
},
{
"math_id": 21,
"text": " \\mbox{Daily handicap} = \\frac{ \\mbox{GA handicap} \\times \\mbox{Slope Rating} }{ \\mbox{113}} "
}
] | https://en.wikipedia.org/wiki?curid=999303 |
9994 | Ephemeris time | Time standard used in astronomical ephemerides
The term ephemeris time (often abbreviated ET) can in principle refer to time in association with any ephemeris (itinerary of the trajectory of an astronomical object). In practice it has been used more specifically to refer to:
Most of the following sections relate to the ephemeris time of the 1952 standard.
An impression has sometimes arisen that ephemeris time was in use from 1900: this probably arose because ET, though proposed and adopted in the period 1948–1952, was defined in detail using formulae that made retrospective use of the epoch date of 1900 January 0 and of Newcomb's "Tables of the Sun".
The ephemeris time of the 1952 standard leaves a continuing legacy, through its historical unit ephemeris second which became closely duplicated in the length of the current standard SI second (see below: Redefinition of the second).
History (1952 standard).
Ephemeris time (ET), adopted as standard in 1952, was originally designed as an approach to a uniform time scale, to be freed from the effects of irregularity in the rotation of the Earth, "for the convenience of astronomers and other scientists", for example for use in ephemerides of the Sun (as observed from the Earth), the Moon, and the planets. It was proposed in 1948 by G M Clemence.
From the time of John Flamsteed (1646–1719) it had been believed that the Earth's daily rotation was uniform. But in the later nineteenth and early twentieth centuries, with increasing precision of astronomical measurements, it began to be suspected, and was eventually established, that the rotation of the Earth ("i.e." the length of the day) showed irregularities on short time scales, and was slowing down on longer time scales. The evidence was compiled by W de Sitter (1927) who wrote "If we accept this hypothesis, then the 'astronomical time', given by the Earth's rotation, and used in all practical astronomical computations, differs from the 'uniform' or 'Newtonian' time, which is defined as the independent variable of the equations of celestial mechanics". De Sitter offered a correction to be applied to the mean solar time given by the Earth's rotation to get uniform time.
Other astronomers of the period also made suggestions for obtaining uniform time, including A Danjon (1929), who suggested in effect that observed positions of the Moon, Sun and planets, when compared with their well-established gravitational ephemerides, could better and more uniformly define and determine time.
Thus the aim developed, to provide a new time scale for astronomical and scientific purposes, to avoid the unpredictable irregularities of the mean solar time scale, and to replace for these purposes Universal Time (UT) and any other time scale based on the rotation of the Earth around its axis, such as sidereal time.
The American astronomer G M Clemence (1948) made a detailed proposal of this type based on the results of the English Astronomer Royal H Spencer Jones (1939). Clemence (1948) made it clear that his proposal was intended "for the convenience of astronomers and other scientists only" and that it was "logical to continue the use of mean solar time for civil purposes".
De Sitter and Clemence both referred to the proposal as 'Newtonian' or 'uniform' time. D Brouwer suggested the name 'ephemeris time'.
Following this, an astronomical conference held in Paris in 1950 recommended "that in all cases where the mean solar second is unsatisfactory as a unit of time by reason of its variability, the unit adopted should be the sidereal year at 1900.0, that the time reckoned in this unit be designated "ephemeris time"", and gave Clemence's formula (see Definition of ephemeris time (1952)) for translating mean solar time to ephemeris time.
The International Astronomical Union approved this recommendation at its 1952 general assembly. Practical introduction took some time (see Use of ephemeris time in official almanacs and ephemerides); ephemeris time (ET) remained a standard until superseded in the 1970s by further time scales (see Revision).
During the currency of ephemeris time as a standard, the details were revised a little. The unit was redefined in terms of the tropical year at 1900.0 instead of the sidereal year; and the standard second was defined first as 1/31556925.975 of the tropical year at 1900.0, and then as the slightly modified fraction 1/31556925.9747 instead, finally being redefined in 1967/8 in terms of the cesium atomic clock standard (see below).
Although ET is no longer directly in use, it leaves a continuing legacy. Its successor time scales, such as TDT, as well as the atomic time scale IAT (TAI), were designed with a relationship that "provides continuity with ephemeris time". ET was used for the calibration of atomic clocks in the 1950s. Close equality between the ET second and the later SI second (as defined with reference to the cesium atomic clock) has been verified to within 1 part in 10¹⁰.
In this way, decisions made by the original designers of ephemeris time influenced the length of today's standard SI second, and in turn, this has a continuing influence on the number of leap seconds which have been needed for insertion into current broadcast time scales, to keep them approximately in step with mean solar time.
Definition (1952).
Ephemeris time was defined in principle by the orbital motion of the Earth around the Sun (but its practical implementation was usually achieved in another way, see below). Its detailed definition was based on Simon Newcomb's "Tables of the Sun" (1895), implemented in a new way to accommodate certain observed discrepancies:
In the introduction to "Tables of the Sun," the basis of the tables (p. 9) includes a formula for the Sun's mean longitude at a time, indicated by interval T (in units of Julian centuries of 36525 mean solar days), reckoned from Greenwich Mean Noon on 0 January 1900:
Ls = 279° 41' 48".04 + 129,602,768".13T + 1".089T² . . . . . (1)
Spencer Jones' work of 1939 showed that differences between the observed positions of the Sun and the predicted positions given by Newcomb's formula demonstrated the need for the following correction to the formula:
ΔLs = + 1".00 + 2".97T + 1".23T² + 0.0748B
where "the times of observation are in Universal time, not corrected to Newtonian time," and 0.0748B represents an irregular fluctuation calculated from lunar observations.
Thus, a conventionally corrected form of Newcomb's formula, incorporating the corrections on the basis of mean solar time, would be the sum of the two preceding expressions:
Ls = 279° 41' 49".04 + 129,602,771".10T + 2".32T² + 0.0748B . . . . . (2)
Clemence's 1948 proposal, however, did not adopt such a correction of mean solar time. Instead, the same numbers were used as in Newcomb's original uncorrected formula (1), but now applied somewhat prescriptively, to define a new time and time scale implicitly, based on the real position of the Sun:
Ls = 279° 41' 48".04 + 129,602,768".13E + 1".089E² . . . . . (3)
With this reapplication, the time variable, now given as E, represents time in ephemeris centuries of 36525 ephemeris days of 86400 ephemeris seconds each. The 1961 official reference summarized the concept as follows: "The origin and rate of ephemeris time are defined to make the Sun's mean longitude agree with Newcomb's expression".
From the comparison of formulae (2) and (3), both of which express the same real solar motion in the same real time but defined on separate time scales, Clemence arrived at an explicit expression, estimating the difference in seconds of time between ephemeris time and mean solar time, in the sense (ET-UT):
formula_0 . . . . . (4)
with the 24.349 seconds of time corresponding to the 1.00" in ΔLs.
Clemence's formula (today superseded by more modern estimations) was included in the original conference decision on ephemeris time. In view of the fluctuation term, practical determination of the difference between ephemeris time and UT depended on observation. Inspection of the formulae above shows that the (ideally constant) units of ephemeris time have been, for the whole of the twentieth century, very slightly shorter than the corresponding (but not precisely constant) units of mean solar time (which, besides their irregular fluctuations, tend to lengthen gradually). This finding is consistent with the modern results of Morrison and Stephenson (see article ΔT).
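Clemence's estimate (formula above) can be evaluated directly; in the snippet below T is measured in Julian centuries of 36525 days from 1900 January 0, and the fluctuation term B has to be supplied from lunar observations (no value is assumed here):

```python
def et_minus_ut_seconds(T, B):
    # Clemence's 1948 estimate of ET - UT, in seconds of time
    return 24.349 + 72.3165 * T + 29.949 * T**2 + 1.821 * B

# Secular part only at T = 0.5 (mid-twentieth century); the observed B term
# still has to be added to obtain a real value of the difference.
print(round(et_minus_ut_seconds(0.5, 0.0), 2))   # 67.99
```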
Implementations.
Secondary realizations by lunar observations.
Although ephemeris time was defined in principle by the orbital motion of the Earth around the Sun, it was usually measured in practice by the orbital motion of the Moon around the Earth. These measurements can be considered as secondary realizations (in a metrological sense) of the primary definition of ET in terms of the solar motion, after a calibration of the mean motion of the Moon with respect to the mean motion of the Sun.
The reasons for using lunar measurements were practical: the Moon moves against the background of stars about 13 times as fast as the Sun's corresponding rate of motion, and the accuracy of time determinations from lunar measurements is correspondingly greater.
When ephemeris time was first adopted, time scales were still based on astronomical observation, as they always had been. The accuracy was limited by the accuracy of optical observation, and corrections of clocks and time signals were published in arrear.
Secondary realizations by atomic clocks.
A few years later, with the invention of the cesium atomic clock, an alternative offered itself. Increasingly, after the calibration in 1958 of the cesium atomic clock by reference to ephemeris time, cesium atomic clocks running on the basis of ephemeris seconds began to be used and kept in step with ephemeris time. The atomic clocks offered a further secondary realization of ET, on a quasi-real time basis that soon proved to be more useful than the primary ET standard: not only more convenient, but also more precisely uniform than the primary standard itself. Such secondary realizations were used and described as 'ET', with an awareness that the time scales based on the atomic clocks were not identical to that defined by the primary ephemeris time standard, but rather, an improvement over it on account of their closer approximation to uniformity. The atomic clocks gave rise to the atomic time scale, and to what was first called Terrestrial Dynamical Time and is now Terrestrial Time, defined to provide continuity with ET.
The availability of atomic clocks, together with the increasing accuracy of astronomical observations (which meant that relativistic corrections were at least in the foreseeable future no longer going to be small enough to be neglected), led to the eventual replacement of the ephemeris time standard by more refined time scales including terrestrial time and barycentric dynamical time, to which ET can be seen as an approximation.
Revision of time scales.
In 1976, the IAU resolved that the theoretical basis for its then-current (since 1952) standard of Ephemeris Time was non-relativistic, and that therefore, beginning in 1984, Ephemeris Time would be replaced by two relativistic timescales intended to constitute dynamical timescales: Terrestrial Dynamical Time (TDT) and Barycentric Dynamical Time (TDB). Difficulties were recognized, which led to these, in turn, being superseded in the 1990s by time scales Terrestrial Time (TT), Geocentric Coordinate Time GCT (TCG) and Barycentric Coordinate Time BCT (TCB).
JPL ephemeris time argument Teph.
High-precision ephemerides of sun, moon and planets were developed and calculated at the Jet Propulsion Laboratory (JPL) over a long period, and the latest available were adopted for the ephemerides in the Astronomical Almanac starting in 1984. Although not an IAU standard, the ephemeris time argument Teph has been in use at that institution since the 1960s. The time scale represented by Teph has been characterized as a relativistic coordinate time that differs from Terrestrial Time only by small periodic terms with an amplitude not exceeding 2 milliseconds of time: it is linearly related to, but distinct (by an offset and constant rate which is of the order of 0.5 s/a) from the TCB time scale adopted in 1991 as a standard by the IAU. Thus for clocks on or near the geoid, Teph (within 2 milliseconds), but not so closely TCB, can be used as approximations to Terrestrial Time, and via the standard ephemerides Teph is in widespread use.
Partly in acknowledgement of the widespread use of Teph via the JPL ephemerides, IAU resolution 3 of 2006 (re-)defined Barycentric Dynamical Time (TDB) as a current standard. As re-defined in 2006, TDB is a linear transformation of TCB. The same IAU resolution also stated (in note 4) that the "independent time argument of the JPL ephemeris DE405, which is called Teph" (a citation is given at this point in the IAU source), "is for practical purposes the same as TDB defined in this Resolution". Thus the new TDB, like Teph, is essentially a more refined continuation of the older ephemeris time ET and (apart from the < 2 ms periodic fluctuations) has the same mean rate as that established for ET in the 1950s.
Use in official almanacs and ephemerides.
Ephemeris time based on the standard adopted in 1952 was introduced into the Astronomical Ephemeris (UK) and the American Ephemeris and Nautical Almanac, replacing UT in the main ephemerides in the issues for 1960 and after. (But the ephemerides in the Nautical Almanac, by then a separate publication for the use of navigators, continued to be expressed in terms of UT.) The ephemerides continued on this basis through 1983 (with some changes due to adoption of improved values of astronomical constants), after which, for 1984 onwards, they adopted the JPL ephemerides.
Previous to the 1960 change, the 'Improved Lunar Ephemeris' had already been made available in terms of ephemeris time for the years 1952–1959 (computed by W J Eckert from Brown's theory with modifications recommended by Clemence (1948)).
Redefinition of the second.
Successive definitions of the unit of ephemeris time are mentioned above (History). The value adopted for the 1956/1960 standard second:
the fraction 1/31 556 925.9747 of the tropical year for 1900 January 0 at 12 hours ephemeris time.
was obtained from the linear time-coefficient in Newcomb's expression for the solar mean longitude (above), taken and applied with the same meaning for the time as in formula (3) above. The relation with Newcomb's coefficient can be seen from:
1/31 556 925.9747 = 129 602 768.13 / (360×60×60×36 525×86 400).
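The arithmetic behind this relation can be checked directly (the numbers are those quoted above):

```python
newcomb_linear_coefficient = 129_602_768.13        # arcseconds per Julian century
arcseconds_per_revolution = 360 * 60 * 60          # 1 296 000
seconds_per_century = 36_525 * 86_400              # mean solar days x seconds per day

tropical_year_1900_in_seconds = (arcseconds_per_revolution * seconds_per_century
                                 / newcomb_linear_coefficient)
print(tropical_year_1900_in_seconds)               # approximately 31 556 925.9747
```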
Caesium atomic clocks became operational in 1955, and quickly confirmed the evidence that the rotation of the Earth fluctuated irregularly. This confirmed the unsuitability of the mean solar second of Universal Time as a measure of time interval for the most precise purposes. After three years of comparisons with lunar observations, Markowitz et al. (1958) determined that the ephemeris second corresponded to 9 192 631 770 ± 20 cycles of the chosen cesium resonance.
Following this, in 1967/68, the General Conference on Weights and Measures (CGPM) replaced the definition of the SI second by the following:
The second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom.
Although this is an independent definition that does not refer to the older basis of ephemeris time, it uses the same quantity as the value of the ephemeris second measured by the cesium clock in 1958. This SI second referred to atomic time was later verified by Markowitz (1988) to be in agreement, within 1 part in 10¹⁰, with the second of ephemeris time as determined from lunar observations.
For practical purposes the length of the ephemeris second can be taken as equal to the length of the second of Barycentric Dynamical Time (TDB) or Terrestrial Time (TT) or its predecessor TDT.
The difference between ET and UT is called ΔT; it changes irregularly, but the long-term trend is parabolic, decreasing from ancient times until the nineteenth century, and increasing since then at a rate corresponding to an increase in the solar day length of 1.7 ms per century (see leap seconds).
International Atomic Time (TAI) was set equal to UT2 at 1 January 1958 0:00:00. At that time, ΔT was already about 32.18 seconds. The difference between Terrestrial Time (TT) (the successor to ephemeris time) and atomic time was later defined as follows:
1977 January 1.000 3725 TT = 1977 January 1.000 0000 TAI, "i.e."
TT − TAI = 32.184 seconds
This difference may be assumed constant—the rates of TT and TAI are designed to be identical. | [
{
"math_id": 0,
"text": " \\delta t = +24^s.349 + 72^s.3165T +29^s.949T^2 + 1.821B"
}
] | https://en.wikipedia.org/wiki?curid=9994 |
99945 | Generating set of a group | Abstract algebra concept
In abstract algebra, a generating set of a group is a subset of the group set such that every element of the group can be expressed as a combination (under the group operation) of finitely many elements of the subset and their inverses.
In other words, if formula_0 is a subset of a group formula_1, then formula_2, the "subgroup generated by formula_0", is the smallest subgroup of formula_1 containing every element of formula_0, which is equal to the intersection over all subgroups containing the elements of formula_0; equivalently, formula_2 is the subgroup of all elements of formula_1 that can be expressed as the finite product of elements in formula_0 and their inverses. (Note that inverses are only needed if the group is infinite; in a finite group, the inverse of an element can be expressed as a power of that element.)
If formula_3, then we say that formula_0 "generates" formula_1, and the elements in formula_0 are called "generators" or "group generators". If formula_0 is the empty set, then formula_2 is the trivial group formula_4, since we consider the empty product to be the identity.
When there is only a single element formula_5 in formula_0, formula_2 is usually written as formula_6. In this case, formula_6 is the "cyclic subgroup" of the powers of formula_5, a cyclic group, and we say this group is generated by formula_5. Equivalent to saying an element formula_5 generates a group is saying that formula_6 equals the entire group formula_1. For finite groups, it is also equivalent to saying that formula_5 has order formula_7.
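For a finite group whose multiplication is available, the subgroup formula_2 can be computed by repeatedly closing formula_0 under products, as in the following illustrative sketch (not part of the formal definition above):

```python
def generated_subgroup(generators, multiply, inverse, identity):
    # closure of the generators (and their inverses) under the group operation
    subgroup = {identity}
    frontier = set(generators) | {inverse(g) for g in generators}
    while frontier:
        subgroup |= frontier
        frontier = {multiply(a, b) for a in subgroup for b in subgroup} - subgroup
    return subgroup

# Example: in the integers modulo 12 under addition, the element 8 generates {0, 4, 8}
n = 12
print(sorted(generated_subgroup({8}, lambda a, b: (a + b) % n, lambda a: (-a) % n, 0)))
```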
A group may need an infinite number of generators. For example, the additive group of rational numbers formula_8 is not finitely generated. It is generated by the reciprocals 1/n of all the positive integers, but any finite number of these generators can be removed from the generating set without it ceasing to be a generating set. In a case like this, all the elements in a generating set are nevertheless "non-generating elements", as are in fact all the elements of the whole group; see Frattini subgroup below.
If formula_1 is a topological group then a subset formula_0 of formula_1 is called a set of "topological generators" if formula_2 is dense in formula_1, i.e. the closure of formula_2 is the whole group formula_1.
Finitely generated group.
If formula_0 is finite, then a group formula_3 is called "finitely generated". The structure of finitely generated abelian groups in particular is easily described. Many theorems that are true for finitely generated groups fail for groups in general. It has been proven that if a finite group is generated by a subset formula_0, then each group element may be expressed as a word from the alphabet formula_0 of length less than or equal to the order of the group.
Every finite group is finitely generated since formula_9. The integers under addition are an example of an infinite group which is finitely generated by both 1 and −1, but the group of rationals under addition cannot be finitely generated. No uncountable group can be finitely generated: for example, the group of real numbers under addition, formula_10, is uncountable and therefore not finitely generated.
Different subsets of the same group can be generating subsets. For example, if formula_11 and formula_12 are integers with gcd("p", "q") = 1, then formula_13 also generates the group of integers under addition by Bézout's identity.
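The Bézout argument can be made concrete with the extended Euclidean algorithm, which produces the integer combination explicitly; the particular values of formula_11 and formula_12 below are arbitrary:

```python
def bezout(p, q):
    # returns (x, y) with x*p + y*q equal to gcd(p, q)
    if q == 0:
        return 1, 0
    x, y = bezout(q, p % q)
    return y, x - (p // q) * y

p, q = 5, 7
x, y = bezout(p, q)
print(x, y, x * p + y * q)   # 3 -2 1, so 1 (and hence every integer) is a combination of 5 and 7
```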
While it is true that every quotient of a finitely generated group is finitely generated (the images of the generators in the quotient give a finite generating set), a subgroup of a finitely generated group need not be finitely generated. For example, let formula_1 be the free group in two generators, formula_5 and formula_14 (which is clearly finitely generated, since formula_15), and let formula_0 be the subset consisting of all elements of formula_1 of the form formula_16 for some natural number formula_17. formula_2 is isomorphic to the free group in countably infinitely many generators, and so cannot be finitely generated. However, every subgroup of a finitely generated abelian group is in itself finitely generated. In fact, more can be said: the class of all finitely generated groups is closed under extensions. To see this, take a generating set for the (finitely generated) normal subgroup and quotient. Then the generators for the normal subgroup, together with preimages of the generators for the quotient, generate the group.
"e" = (1 2)(1 2)
(1 2) = (1 2)
(1 3) = (1 2)(1 2 3)
(2 3) = (1 2 3)(1 2)
(1 2 3) = (1 2 3)
(1 3 2) = (1 2)(1 2 3)(1 2)
Free group.
The most general group generated by a set formula_0 is the group freely generated by formula_0. Every group generated by formula_0 is isomorphic to a quotient of this group, a feature which is utilized in the expression of a group's presentation.
Frattini subgroup.
An interesting companion topic is that of "non-generators". An element formula_5 of the group formula_1 is a non-generator if every set formula_0 containing formula_5 that generates formula_1, still generates formula_1 when formula_5 is removed from formula_0. In the integers with addition, the only non-generator is 0. The set of all non-generators forms a subgroup of formula_1, the Frattini subgroup.
Semigroups and monoids.
If formula_1 is a semigroup or a monoid, one can still use the notion of a generating set formula_0 of formula_1. formula_0 is a semigroup/monoid generating set of formula_1 if formula_1 is the smallest semigroup/monoid containing formula_0.
The definitions of generating set of a group using finite sums, given above, must be slightly modified when one deals with semigroups or monoids. Indeed, this definition should not use the notion of inverse operation anymore. The set formula_0 is said to be a semigroup generating set of formula_1 if each element of formula_1 is a finite sum of elements of formula_0. Similarly, a set formula_0 is said to be a monoid generating set of formula_1 if each non-zero element of formula_1 is a finite sum of elements of formula_0.
For example, {1} is a monoid generator of the set of natural numbers formula_21. The set {1} is also a semigroup generator of the positive natural numbers formula_22. However, the integer 0 can not be expressed as a (non-empty) sum of 1s, thus {1} is not a semigroup generator of the natural numbers.
Similarly, while {1} is a group generator of the set of integers formula_23, {1} is not a monoid generator of the set of integers. Indeed, the integer −1 cannot be expressed as a finite sum of 1s. | [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "\\langle S\\rangle"
},
{
"math_id": 3,
"text": "G=\\langle S\\rangle"
},
{
"math_id": 4,
"text": "\\{e\\}"
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": "\\langle x\\rangle"
},
{
"math_id": 7,
"text": "|G|"
},
{
"math_id": 8,
"text": "\\Q"
},
{
"math_id": 9,
"text": "\\langle G\\rangle =G"
},
{
"math_id": 10,
"text": "(\\R,+)"
},
{
"math_id": 11,
"text": "p"
},
{
"math_id": 12,
"text": "q"
},
{
"math_id": 13,
"text": "\\{p,q\\}"
},
{
"math_id": 14,
"text": "y"
},
{
"math_id": 15,
"text": "G=\\langle \\{x,y\\}\\rangle"
},
{
"math_id": 16,
"text": "y^nxy^{-n}"
},
{
"math_id": 17,
"text": "n"
},
{
"math_id": 18,
"text": "\\{7^i \\bmod{9}\\ |\\ i \\in \\mathbb{N}\\} = \\{7,4,1\\},"
},
{
"math_id": 19,
"text": "\\{2^i \\bmod{9}\\ |\\ i \\in \\mathbb{N}\\} = \\{2,4,8,7,5,1\\}."
},
{
"math_id": 20,
"text": "\\mathbb{Z}/n\\mathbb{Z}"
},
{
"math_id": 21,
"text": "\\N"
},
{
"math_id": 22,
"text": "\\N_{>0}"
},
{
"math_id": 23,
"text": "\\mathbb Z"
}
] | https://en.wikipedia.org/wiki?curid=99945 |
999491 | Sethi–Ullman algorithm | In computer science, the Sethi–Ullman algorithm is an algorithm named after Ravi Sethi and Jeffrey D. Ullman, its inventors, for translating abstract syntax trees into machine code that uses as few registers as possible.
Overview.
When generating code for arithmetic expressions, the compiler has to decide on the best way to translate the expression, both in terms of the number of instructions used and the number of registers needed to evaluate a certain subtree. Especially when free registers are scarce, the order of evaluation can be important for the length of the generated code, because different orderings may lead to larger or smaller numbers of intermediate values being spilled to memory and then restored. The Sethi–Ullman algorithm (also known as Sethi–Ullman numbering) produces code which needs the fewest instructions possible as well as the fewest storage references (under the assumption that at most commutativity and associativity apply to the operators used, but that distributive laws, i.e. formula_0, do not hold). The algorithm also succeeds if neither commutativity nor associativity holds for the expressions used, in which case arithmetic transformations cannot be applied. It does not take advantage of common subexpressions, nor does it apply directly to expressions represented as general directed acyclic graphs rather than trees.
Simple Sethi–Ullman algorithm.
The simple Sethi–Ullman algorithm works as follows (for a load/store architecture): traverse the abstract syntax tree and label every node with the number of registers needed to evaluate it. A leaf is labelled 1 if it is a left child (its value must be loaded into a register) and 0 if it is a right child or a constant (it can appear directly as an instruction operand). An inner node whose children carry different labels receives the larger of the two; an inner node whose children carry equal labels receives that common label plus one. Code is then emitted so that the subtree needing more registers is evaluated first.
Example.
For an arithmetic expression formula_1, the abstract syntax tree looks like this:
         =
        / \
       a   *
          / \
         /   \
        +     +
       / \   / \
      /   \ d   3
     +     *
    / \   / \
   b   c f   g
To continue with the algorithm, we need only to examine the arithmetic expression formula_2, i.e. we only have to look at the right subtree of the assignment '=':
        *
       / \
      /   \
     +     +
    / \   / \
   /   \ d   3
  +     *
 / \   / \
b   c f   g
Now we start traversing the tree (in preorder for now), assigning the number of registers needed to evaluate each subtree (note that the last summand in the expression formula_2 is a constant):
        *2
       /  \
      /    \
     +2     +1
    /  \   /  \
   /    \ d1   30
  +1     *1
  / \    / \
 b1  c0 f1  g0
From this tree it can be seen that we need 2 registers to compute the left subtree of the '*', but only 1 register to compute the right subtree. Nodes 'c' and 'g' do not need registers for the following reasons: If T is a tree leaf, then the number of registers to evaluate T is either 1 or 0 depending whether T is a left or a right subtree (since an operation such as add R1, A can handle the right component A directly without storing it into a register). Therefore we shall start to emit code for the left subtree first, because we might run into the situation that we only have 2 registers left to compute the whole expression. If we now computed the right subtree first (which needs only 1 register), we would then need a register to hold the result of the right subtree while computing the left subtree (which would still need 2 registers), therefore needing 3 registers concurrently. Computing the left subtree first needs 2 registers, but the result can be stored in 1, and since the right subtree needs only 1 register to compute, the evaluation of the expression can do with only 2 registers left.
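The labelling phase of the algorithm can be written down compactly. The following Python sketch is an illustration only (the class and function names are chosen here and are not part of the original algorithm description); it reproduces the numbering used in the example above: a leaf gets 1 as a left child and 0 as a right child, and an inner node gets the larger of its children's numbers, or that common number plus one when the two are equal.

class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value        # operator symbol or operand name
        self.left = left
        self.right = right
        self.need = None          # Sethi-Ullman number, filled in by label()

def label(node, is_left=True):
    """Attach to every node the number of registers needed to evaluate it."""
    if node.left is None and node.right is None:
        # Leaf: a left operand must be loaded into a register,
        # a right operand can appear directly in the instruction.
        node.need = 1 if is_left else 0
    else:
        label(node.left, True)
        label(node.right, False)
        l, r = node.left.need, node.right.need
        node.need = max(l, r) if l != r else l + 1
    return node.need

# (b + c + f*g) * (d + 3), built to match the example above
expr = Node('*',
            Node('+',
                 Node('+', Node('b'), Node('c')),
                 Node('*', Node('f'), Node('g'))),
            Node('+', Node('d'), Node('3')))
print(label(expr))    # prints 2: two registers suffice for the whole expression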
Advanced Sethi–Ullman algorithm.
In an advanced version of the Sethi–Ullman algorithm, the arithmetic expressions are first transformed, exploiting the algebraic properties of the operators used. | [
{
"math_id": 0,
"text": "a * b + a * c = a * (b + c)"
},
{
"math_id": 1,
"text": "a = (b + c + f * g)*(d+3)"
},
{
"math_id": 2,
"text": "(b + c + f * g) * (d + 3)"
}
] | https://en.wikipedia.org/wiki?curid=999491 |
9996788 | Affine manifold | In differential geometry, an affine manifold is a differentiable manifold equipped with a flat, torsion-free connection.
Equivalently, it is a manifold that is (if connected) covered by an open subset of formula_0, with monodromy acting by affine transformations. This equivalence is an easy corollary of Cartan–Ambrose–Hicks theorem.
Equivalently, it is a manifold equipped with an atlas, called the affine structure, such that all transition functions between charts are affine transformations (that is, have constant Jacobian matrix); two atlases are equivalent if the manifold admits an atlas subordinate to both, with transitions from both atlases to a smaller atlas being affine. A manifold having a distinguished affine structure is called an affine manifold, and the charts which are affinely related to those of the affine structure are called affine charts. In each affine coordinate domain the coordinate vector fields form a parallelisation of that domain, so there is an associated connection on each domain. These locally defined connections are the same on overlapping parts, so there is a unique connection associated with an affine structure. Note that there is a link between a linear connection (also called an affine connection) and a web.
Formal definition.
An affine manifold formula_1 is a real manifold with charts formula_2 such that formula_3 for all formula_4 where formula_5 denotes the Lie group of affine transformations. In fancier words it is a (G,X)-manifold where formula_6 and formula_7 is the group of affine transformations.
An affine manifold is called complete if its universal covering is homeomorphic to formula_0.
In the case of a compact affine manifold formula_8, let formula_7 be the fundamental group of formula_8 and formula_9 be its universal cover. One can show that each formula_10-dimensional affine manifold comes with a developing map formula_11, and a homomorphism formula_12, such that formula_13 is an immersion and equivariant with respect to formula_14.
A fundamental group of a compact complete flat affine manifold is called an affine crystallographic group. The classification of affine crystallographic groups is a difficult problem, far from being solved. The Riemannian crystallographic groups (also known as Bieberbach groups) were classified by Ludwig Bieberbach, answering a question posed by David Hilbert. In his work on Hilbert's 18th problem, Bieberbach proved that any Riemannian crystallographic group contains an abelian subgroup of finite index.
Important longstanding conjectures.
The geometry of affine manifolds is essentially a network of longstanding conjectures, most of which have been proven only in low dimension and in other special cases.
The most important of them are:
The Markus conjecture, stating that a compact affine manifold with parallel volume is complete.
The Auslander conjecture, stating that every affine crystallographic group is virtually solvable, that is, contains a solvable subgroup of finite index.
The Chern conjecture, stating that the Euler characteristic of a compact affine manifold vanishes.
| [
{
"math_id": 0,
"text": "{\\mathbb R}^n"
},
{
"math_id": 1,
"text": "M\\,"
},
{
"math_id": 2,
"text": "\\psi_i\\colon U_i\\to{\\mathbb R}^n"
},
{
"math_id": 3,
"text": "\\psi_i\\circ\\psi_j^{-1}\\in \\operatorname{Aff}({\\mathbb R}^n)"
},
{
"math_id": 4,
"text": "i, j\\, ,"
},
{
"math_id": 5,
"text": "\\operatorname{Aff}({\\mathbb R}^n)"
},
{
"math_id": 6,
"text": "X=\\mathbb R^n"
},
{
"math_id": 7,
"text": "G"
},
{
"math_id": 8,
"text": "M"
},
{
"math_id": 9,
"text": "\\widetilde M"
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": "D\\colon {\\widetilde M}\\to{\\mathbb R}^n"
},
{
"math_id": 12,
"text": "\\varphi\\colon G\\to \\operatorname{Aff}({\\mathbb R}^n)"
},
{
"math_id": 13,
"text": "D"
},
{
"math_id": 14,
"text": "\\varphi"
}
] | https://en.wikipedia.org/wiki?curid=9996788 |
999701 | Rate of convergence | Speed of convergence of a mathematical sequence
In numerical analysis, the order of convergence and the rate of convergence of a convergent sequence are quantities that represent how quickly the sequence approaches its limit. A sequence formula_0 that converges to formula_1 is said to have "order of convergence" formula_2 and "rate of convergence" formula_3 if
formula_4
The rate of convergence formula_3 is also called the "asymptotic error constant".
Note that this terminology is not standardized and some authors will use "rate" where this article uses "order".
In practice, the rate and order of convergence provide useful insights when using iterative methods for calculating numerical approximations. If the order of convergence is higher, then typically fewer iterations are necessary to yield a useful approximation. Strictly speaking, however, the asymptotic behavior of a sequence does not give conclusive information about any finite part of the sequence.
Similar concepts are used for discretization methods. The solution of the discretized problem converges to the solution of the continuous problem as the grid size goes to zero, and the speed of convergence is one of the factors of the efficiency of the method. However, the terminology, in this case, is different from the terminology for iterative methods.
Series acceleration is a collection of techniques for improving the rate of convergence of a series. Such acceleration is commonly accomplished with sequence transformations.
Convergence speed for iterative methods.
Convergence definitions.
Suppose that the sequence formula_5 converges to the number formula_1. The sequence is said to "converge with order formula_6 to formula_1", and with a "rate of convergence" of formula_3, if
formula_4
for some positive constant formula_7 if formula_8, and formula_9 if formula_10. It is not necessary, however, that formula_6 be an integer. For example, the secant method, when converging to a regular, simple root, has an order of φ ≈ 1.618.
Convergence with order formula_11 is called "quadratic convergence" and convergence with order formula_12 is called "cubic convergence".
Order estimation.
A practical method to calculate the order of convergence for a sequence generated by a fixed point iteration is to calculate the following sequence, which converges to formula_6:
formula_13
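As a rough illustration, the following Python sketch applies this estimate to Newton's iteration for the square root of 2 (a fixed-point iteration; the choice of iteration, the number of terms and the variable names are illustrative only). Successive differences of the iterates stand in for the unknown errors, and the printed estimates are close to 2, consistent with the quadratic convergence of Newton's method.

import math

xs = [1.0]                          # Newton's method for sqrt(2): x <- (x + 2/x)/2
for _ in range(4):
    xs.append((xs[-1] + 2.0 / xs[-1]) / 2.0)

d = [abs(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]   # successive differences

for k in range(1, len(d) - 1):
    q = math.log(d[k + 1] / d[k]) / math.log(d[k] / d[k - 1])
    print(q)                        # values close to 2 (quadratic convergence)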
For the numerical approximation of an exact value through a numerical method of order q, see the section on discretization methods below.
Q-convergence definitions.
In addition to the previously defined Q-linear convergence, a few other Q-convergence definitions exist. With the definition above, the sequence is said to "converge Q-superlinearly to formula_1" (i.e. faster than linearly) in all the cases where formula_14 and also in the case formula_15. With the same definition, the sequence is said to "converge Q-sublinearly to formula_1" (i.e. slower than linearly) if formula_16. The sequence formula_5 "converges logarithmically to formula_1" if the sequence converges sublinearly and additionally if
formula_17
Note that unlike previous definitions, logarithmic convergence is not called "Q-logarithmic."
In the definitions above, the "Q-" stands for "quotient" because the terms are defined using the quotient between two successive terms. Often, however, the "Q-" is dropped and a sequence is simply said to have "linear convergence", "quadratic convergence", etc.
R-convergence definition.
The Q-convergence definitions have a shortcoming in that they do not include some sequences, such as the sequence formula_18 below, which converge reasonably fast, but whose rate is variable. Therefore, the definition of rate of convergence is extended as follows.
Suppose that formula_5 converges to formula_1. The sequence is said to "converge R-linearly to formula_1" if there exists a sequence formula_19 such that
formula_20
and formula_21 converges Q-linearly to zero. The "R-" prefix stands for "root".
Examples.
Consider the sequence
formula_22
It can be shown that this sequence converges to formula_23. To determine the type of convergence, we plug the sequence into the definition of Q-linear convergence,
formula_24
Thus, we find that formula_25 converges Q-linearly and has a convergence rate of formula_26.
More generally, for any formula_27, the sequence formula_28 converges linearly with rate formula_29.
The sequence
formula_30
also converges linearly to 0 with rate 1/2 under the R-convergence definition, but not under the Q-convergence definition. (Note that formula_31 is the floor function, which gives the largest integer that is less than or equal to formula_32.)
The sequence
formula_33
converges superlinearly. In fact, it is quadratically convergent.
Finally, the sequence
formula_34
converges sublinearly and logarithmically.
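These behaviours can be observed numerically. In the Python sketch below (an illustration only; the names and the number of terms are chosen here), the ratio of successive errors is constantly 1/2 for the first sequence, tends to 0 for the superlinearly convergent sequence, and tends to 1 for the sublinearly convergent one.

a = [2.0 ** -k for k in range(12)]            # linear, rate 1/2
c = [2.0 ** -(2 ** k) for k in range(6)]      # superlinear (in fact quadratic)
d = [1.0 / (k + 1) for k in range(12)]        # sublinear (logarithmic)

def ratios(seq, limit=0.0):
    """Successive error ratios |x_{k+1} - L| / |x_k - L| for the limit L."""
    e = [abs(x - limit) for x in seq]
    return [e[k + 1] / e[k] for k in range(len(e) - 1)]

print(ratios(a))   # all 0.5                  -> Q-linear with rate 1/2
print(ratios(c))   # 0.5, 0.25, 0.0625, ...   -> tends to 0, Q-superlinear
print(ratios(d))   # 0.5, 0.667, 0.75, ...    -> tends to 1, Q-sublinear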
Convergence speed for discretization methods.
A similar situation exists for discretization methods designed to approximate a function formula_37, which might be an integral being approximated by numerical quadrature, or the solution of an ordinary differential equation (see example below). The discretization method generates a sequence formula_38, where each successive formula_39 is a function of formula_40 along with the grid spacing formula_36 between successive values of the independent variable formula_32. The important parameter here for the convergence speed to formula_37 is the grid spacing formula_36, inversely proportional to the number of grid points, i.e. the number of points in the sequence required to reach a given value of formula_32.
In this case, the sequence formula_41 is said to converge to the sequence formula_42 with order "q" if there exists a constant "C" such that
formula_43
This is written as formula_44 using big O notation.
This is the relevant definition when discussing methods for numerical quadrature or the solution of ordinary differential equations (ODEs).
A practical method to estimate the order of convergence for a discretization method is to pick step sizes formula_45 and formula_46 and to calculate the resulting errors formula_47 and formula_48. The order of convergence is then approximated by the following formula:
formula_49
which comes from writing the truncation error, at the old and new grid spacings, as
formula_50
The error formula_51 is, more specifically, a global truncation error (GTE), in that it represents a sum of errors accumulated over all formula_35 iterations, as opposed to a local truncation error (LTE) over just one iteration.
Example of discretization methods.
Consider the ordinary differential equation
formula_52
with initial condition formula_53. We can solve this equation using the Forward Euler scheme for numerical discretization:
formula_54
which generates the sequence
formula_55
In terms of formula_53, this sequence is as follows, from the Binomial theorem:
formula_56
The exact solution to this ODE is formula_57, corresponding to the following Taylor expansion in formula_58 for formula_59:
formula_60
In this case, the global truncation error after formula_35 steps is
formula_61
For a fixed number of steps formula_35 this is of order formula_11 in the step size. Reaching a fixed point formula_32, however, requires a number of steps formula_35 proportional to 1/formula_36, so the error accumulated up to a fixed formula_32 scales like formula_36 itself; in this sense the forward Euler scheme is a first-order method.
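The two-step-size estimate given earlier can be checked on this example. The Python sketch below (the parameter values κ = 1, y0 = 1 and the final time are illustrative choices) integrates the equation with the forward Euler scheme at step sizes h and h/2 and applies the estimation formula; the printed value is close to 1, the global order of the scheme, and halving the step size roughly halves the error.

import math

def euler_error(h, kappa=1.0, y0=1.0, T=1.0):
    """Global error |y_n - y(T)| of forward Euler for y' = -kappa*y at time T."""
    n = round(T / h)
    y = y0
    for _ in range(n):
        y *= (1.0 - h * kappa)
    return abs(y - y0 * math.exp(-kappa * n * h))

h_old, h_new = 0.01, 0.005
e_old, e_new = euler_error(h_old), euler_error(h_new)
q = math.log(e_new / e_old) / math.log(h_new / h_old)
print(q)   # approximately 1: forward Euler is first order in the step size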
Examples (continued).
The sequence formula_62 with formula_63 was introduced above. This sequence converges with order 1 according to the convention for discretization methods.
The sequence formula_25 with formula_64, which was also introduced above, converges with order "q" for every number "q". It is said to converge exponentially using the convention for discretization methods. However, it only converges linearly (that is, with order 1) using the convention for iterative methods.
Recurrent sequences and fixed points.
The case of recurrent sequences formula_65 which occurs in dynamical systems and in the context of various fixed-point theorems is of particular interest. Assuming that the relevant derivatives of "f" are continuous, one can (easily) show that for a fixed point formula_66 such that formula_67, one has at least linear convergence for any starting value formula_68 sufficiently close to "p". If formula_69 and formula_70, then one has at least quadratic convergence, and so on. If formula_71, then one has a repulsive fixed point and no starting value will produce a sequence converging to "p" (unless one directly jumps to the point "p" itself).
Acceleration of convergence.
Many methods exist to increase the rate of convergence of a given sequence, i.e., to transform a given sequence into one converging faster to the same limit. Such techniques are in general known as "series acceleration". The goal is to reduce the computational cost of approximating the limit of the transformed sequence. One example of series acceleration is Aitken's delta-squared process. These methods in general (and in particular Aitken's method) do not increase the order of convergence, and are useful only if initially the convergence is not faster than linear: If formula_0 converges linearly, one gets a sequence formula_72 that still converges linearly (except for pathologically designed special cases), but faster in the sense that formula_73. On the other hand, if the convergence is already of order ≥ 2, Aitken's method will bring no improvement.
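A short Python sketch of Aitken's delta-squared process (an illustration only; the linearly convergent iteration x <- cos(x) and all names are chosen here): the transformed sequence uses the same information as the original one, yet its error is far smaller.

import math

x = [1.0]                                # fixed-point iteration x <- cos(x), linear convergence
for _ in range(10):
    x.append(math.cos(x[-1]))

def aitken(seq):
    """Aitken's delta-squared transformation of a sequence."""
    return [seq[k] - (seq[k + 1] - seq[k]) ** 2
            / (seq[k + 2] - 2 * seq[k + 1] + seq[k])
            for k in range(len(seq) - 2)]

a = aitken(x)
L = 0.7390851332151607                   # fixed point of cos, for reference
print(abs(x[8] - L), abs(a[6] - L))      # the accelerated error is much smaller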
| [
{
"math_id": 0,
"text": "(x_n)"
},
{
"math_id": 1,
"text": "L"
},
{
"math_id": 2,
"text": "q \\geq 1"
},
{
"math_id": 3,
"text": "\\mu"
},
{
"math_id": 4,
"text": " \\lim _{n \\rightarrow \\infty} \\frac{\\left|x_{n+1}-L\\right|}{\\left|x_{n}-L\\right|^{q}}=\\mu."
},
{
"math_id": 5,
"text": "(x_k)"
},
{
"math_id": 6,
"text": "q"
},
{
"math_id": 7,
"text": "\\mu \\in (0, \\infty)"
},
{
"math_id": 8,
"text": "q > 1"
},
{
"math_id": 9,
"text": "\\mu \\in (0, 1)"
},
{
"math_id": 10,
"text": "q = 1"
},
{
"math_id": 11,
"text": "q = 2"
},
{
"math_id": 12,
"text": "q = 3"
},
{
"math_id": 13,
"text": "q \\approx \\frac{\\log \\left|\\displaystyle\\frac{x_{k+1} - x_k}{x_k - x_{k-1}}\\right|}{\\log \\left|\\displaystyle\\frac{x_k - x_{k-1}}{x_{k-1} - x_{k-2}}\\right|}."
},
{
"math_id": 14,
"text": " q > 1"
},
{
"math_id": 15,
"text": " q = 1, \\mu = 0"
},
{
"math_id": 16,
"text": " q = 1, \\mu = 1"
},
{
"math_id": 17,
"text": "\\lim_{k \\to \\infty} \\frac{|x_{k+1} - x_{k}|}{|x_{k} - x_{k-1}|} = 1."
},
{
"math_id": 18,
"text": "(b_k)"
},
{
"math_id": 19,
"text": "(\\varepsilon_k)"
},
{
"math_id": 20,
"text": "\n|x_k - L|\\le\\varepsilon_k\\quad\\text{for all }k \\,,\n"
},
{
"math_id": 21,
"text": " (\\varepsilon_k) "
},
{
"math_id": 22,
"text": "(a_k) = \\left\\{1, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{8}, \\frac{1}{16}, \\frac{1}{32}, \\ldots, \\frac{1}{2^k}, \\dots \\right\\}. "
},
{
"math_id": 23,
"text": "L = 0"
},
{
"math_id": 24,
"text": "\\lim_{k \\to \\infty} \\frac{\\left| 1/2^{k+1} - 0\\right|}{\\left| 1/ 2^k - 0 \\right|} = \\lim_{k \\to \\infty} \\frac{2^k}{2^{k+1}} = \\frac{1}{2}. "
},
{
"math_id": 25,
"text": "(a_k)"
},
{
"math_id": 26,
"text": "\\mu = 1/2"
},
{
"math_id": 27,
"text": "c \\in \\mathbb{R}, \\mu \\in (-1, 1)"
},
{
"math_id": 28,
"text": "(c\\mu^k)"
},
{
"math_id": 29,
"text": "|\\mu|"
},
{
"math_id": 30,
"text": "(b_k) = \\left\\{1, 1, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{16}, \\frac{1}{16}, \\ldots,\\frac{1}{4^{\\left\\lfloor \\frac{k}{2} \\right\\rfloor}} ,\\, \\ldots \\right\\}"
},
{
"math_id": 31,
"text": "\\lfloor x \\rfloor "
},
{
"math_id": 32,
"text": "x"
},
{
"math_id": 33,
"text": "(c_k) = \\left\\{ \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{16}, \\frac{1}{256}, \\frac{1}{65,\\!536}, \\ldots, \\frac{1}{2^{2^k}}, \\ldots \\right\\}"
},
{
"math_id": 34,
"text": "(d_k) = \\left\\{1, \\frac{1}{2}, \\frac{1}{3}, \\frac{1}{4}, \\frac{1}{5}, \\frac{1}{6}, \\ldots, \\frac{1}{k + 1}, \\ldots \\right\\}"
},
{
"math_id": 35,
"text": "n"
},
{
"math_id": 36,
"text": "h"
},
{
"math_id": 37,
"text": "y = f(x)"
},
{
"math_id": 38,
"text": "{y_0,y_1,y_2,y_3,...}"
},
{
"math_id": 39,
"text": "y_j"
},
{
"math_id": 40,
"text": "y_{j-1},y_{j-2},..."
},
{
"math_id": 41,
"text": "(y_n)"
},
{
"math_id": 42,
"text": "f(x_n)"
},
{
"math_id": 43,
"text": " |y_n - f(x_n)| < C h^{q} \\text{ for all } n. "
},
{
"math_id": 44,
"text": "|y_n - f(x_n)| = \\mathcal{O}(h^{q})"
},
{
"math_id": 45,
"text": "h_\\text{new}"
},
{
"math_id": 46,
"text": "h_\\text{old}"
},
{
"math_id": 47,
"text": "e_\\text{new}"
},
{
"math_id": 48,
"text": "e_\\text{old}"
},
{
"math_id": 49,
"text": "q \\approx \\frac{\\log(e_\\text{new}/e_\\text{old})}{\\log(h_\\text{new}/h_\\text{old})},"
},
{
"math_id": 50,
"text": " e = |y_n - f(x_n)| = \\mathcal{O}(h^{q}). "
},
{
"math_id": 51,
"text": "e"
},
{
"math_id": 52,
"text": " \\frac{dy}{dx} = -\\kappa y "
},
{
"math_id": 53,
"text": "y(0) = y_0"
},
{
"math_id": 54,
"text": " \\frac{y_{n+1} - y_n}{h} = -\\kappa y_{n}, "
},
{
"math_id": 55,
"text": " y_{n+1} = y_n(1 - h\\kappa). "
},
{
"math_id": 56,
"text": " y_{n} = y_0(1 - h\\kappa)^n = y_0\\left(1 - nh\\kappa + n(n-1)\\frac{h^2\\kappa^2}{2} + ....\\right). "
},
{
"math_id": 57,
"text": "y = f(x) = y_0\\exp(-\\kappa x)"
},
{
"math_id": 58,
"text": "h\\kappa "
},
{
"math_id": 59,
"text": "h\\kappa \\ll 1"
},
{
"math_id": 60,
"text": " f(x_n) = f(nh) = y_0\\exp(-\\kappa nh) = y_0\\left[\\exp(-\\kappa h)\\right]^n = y_0\\left(1 - h\\kappa + \\frac{h^2\\kappa^2}{2} + ....\\right)^n\n = y_0\\left(1 - nh\\kappa + \\frac{n^2h^2\\kappa^2}{2} + ...\\right). "
},
{
"math_id": 61,
"text": " e = |y_n - f(x_n)| = \\frac{nh^2\\kappa^2}{2} = \\mathcal{O}(h^{2}), "
},
{
"math_id": 62,
"text": "(d_k)"
},
{
"math_id": 63,
"text": "d_k = 1/(k+1)"
},
{
"math_id": 64,
"text": "a_k = 2^{-k}"
},
{
"math_id": 65,
"text": "x_{n+1}:=f(x_n)"
},
{
"math_id": 66,
"text": "f(p)=p"
},
{
"math_id": 67,
"text": "|f'(p)| < 1"
},
{
"math_id": 68,
"text": "x_0"
},
{
"math_id": 69,
"text": "|f'(p)| = 0"
},
{
"math_id": 70,
"text": "|f''(p)| < 1"
},
{
"math_id": 71,
"text": "|f'(p)| > 1"
},
{
"math_id": 72,
"text": "(a_n)"
},
{
"math_id": 73,
"text": "\\lim (a_n-L)/(x_n-L)= 0"
}
] | https://en.wikipedia.org/wiki?curid=999701 |
9999200 | Content (measure theory) | In mathematics, in particular in measure theory, a content formula_0 is a real-valued function defined on a collection of subsets formula_1 such that
formula_2
formula_3
formula_4
That is, a content is a generalization of a measure: while the latter must be countably additive, the former must only be finitely additive.
In many important applications the formula_1 is chosen to be a ring of sets or to be at least a semiring of sets in which case some additional properties can be deduced which are described below. For this reason some authors prefer to define contents only for the case of semirings or even rings.
If a content is additionally "σ"-additive it is called a pre-measure and if furthermore formula_1 is a "σ"-algebra, the content is called a measure. Therefore, every (real-valued) measure is a content, but not vice versa. Contents give a good notion of integrating bounded functions on a space but can behave badly when integrating unbounded functions, while measures give a good notion of integrating unbounded functions.
Examples.
A classical example is to define a content on all half open intervals formula_5 by setting their content to the length of the intervals, that is, formula_6 One can further show that this content is actually "σ"-additive and thus defines a pre-measure on the semiring of all half-open intervals. This can be used to construct the Lebesgue measure for the real number line using Carathéodory's extension theorem. For further details on the general construction see article on Lebesgue measure.
An example of a content that is not a measure on a "σ"-algebra is the content on all subsets of the positive integers that has value formula_7 on any integer formula_8 and is infinite on any infinite subset.
An example of a content on the positive integers that is always finite but is not a measure can be given as follows. Take a positive linear functional on the bounded sequences that is 0 if the sequence has only a finite number of nonzero elements and takes value 1 on the sequence formula_9 so the functional in some sense gives an "average value" of any bounded sequence. (Such a functional cannot be constructed explicitly, but exists by the Hahn–Banach theorem.) Then the content of a set of positive integers is the average value of the sequence that is 1 on this set and 0 elsewhere. Informally, one can think of the content of a subset of integers as the "chance" that a randomly chosen integer lies in this subset (though this is not compatible with the usual definitions of chance in probability theory, which assume countable additivity).
Properties.
Frequently contents are defined on collections of sets that satisfy further constraints. In this case additional properties can be deduced that fail to hold in general for contents defined on any collections of sets.
On semirings.
If formula_1 forms a semiring of sets then the following statements can be deduced:
Every content is monotone, that is, formula_10
Every content is subadditive, that is, formula_11 for formula_12 such that formula_13
On rings.
If furthermore formula_1 is a ring of sets one gets additionally:
Subtractivity: for formula_14 satisfying formula_15 it follows formula_16
formula_17
Subadditivity: formula_18
formula_19-superadditivity: for pairwise disjoint sets formula_20 satisfying formula_21 we have formula_22
If formula_0 is a finite content, that is, formula_23 then the inclusion–exclusion principle applies: formula_24 where formula_25 for all formula_26
Integration of bounded functions.
In general integration of functions with respect to a content does not behave well. However, there is a well-behaved notion of integration provided that the function is bounded and the total content of the space is finite, given as follows.
Suppose that the total content of a space is finite.
If formula_27 is a bounded function on the space such that the inverse image of any open subset of the reals has a content, then we can define the integral of formula_27 with respect to the content as
formula_28
where the formula_29 form a finite collection of disjoint half-open sets whose union covers the range of formula_30 and formula_31 is any element of formula_32 and where the limit is taken as the diameters of the sets formula_29 tend to 0.
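To make the limiting process concrete, consider the length content on half-open intervals from the example above and the bounded function f(x) = x² on [0, 1). The Python sketch below is an illustration only (the function names and the particular f are chosen here): the range of f is cut into half-open pieces, each piece is weighted by the content of its preimage, and the sums approach 1/3, the value of the integral.

import math

def content(a, b):
    """Content of the half-open interval [a, b): its length."""
    return b - a

def integral(n):
    """Sum over a partition of the range of f(x) = x**2 on [0, 1) into n
    half-open pieces A_i = [i/n, (i+1)/n), each weighted by the content
    of its preimage under f, namely [sqrt(i/n), sqrt((i+1)/n))."""
    total = 0.0
    for i in range(n):
        lo, hi = i / n, (i + 1) / n
        alpha = lo                                    # a value of f inside A_i
        total += alpha * content(math.sqrt(lo), math.sqrt(hi))
    return total

print(integral(10), integral(1000))   # approaches 1/3 as the pieces shrink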
Duals of spaces of bounded functions.
Suppose that formula_0 is a measure on some space formula_33 The bounded measurable functions on formula_34 form a Banach space with respect to the supremum norm. The positive elements of the dual of this space correspond to bounded contents formula_35 on formula_36 with the value of formula_35 on formula_27 given by the integral formula_37 Similarly one can form the space of essentially bounded functions, with the norm given by the essential supremum, and the positive elements of the dual of this space are given by bounded contents that vanish on sets of measure 0.
Construction of a measure from a content.
There are several ways to construct a measure μ from a content formula_35 on a topological space. This section gives one such method for locally compact Hausdorff spaces such that the content is defined on all compact subsets. In general the measure is not an extension of the content, as the content may fail to be countably additive, and the measure may even be identically zero even if the content is not.
First restrict the content to compact sets. This gives a function formula_35 of compact sets formula_38 with the following properties:
formula_39
formula_40
formula_41
formula_42
formula_43 whenever the two compact sets are disjoint.
There are also examples of functions formula_35 as above not constructed from contents.
An example is given by the construction of Haar measure on a locally compact group. One method of constructing such a Haar measure is to produce a left-invariant function formula_35 as above on the compact subsets of the group, which can then be extended to a left-invariant measure.
Definition on open sets.
Given λ as above, we define a function μ on all open sets by
formula_44
This has the following properties:
formula_45
formula_46
formula_47
formula_48 for any sequence of open sets
formula_49 for any sequence of pairwise disjoint open sets.
Definition on all sets.
Given μ as above, we extend the function μ to all subsets of the topological space by
formula_50
This is an outer measure, in other words it has the following properties:
formula_51
formula_46
formula_52
formula_53 for any countable family of subsets.
Construction of a measure.
The function μ above is an outer measure on the family of all subsets. Therefore, it becomes a measure when restricted to the measurable subsets for the outer measure, which are the subsets formula_54 such that formula_55 for all subsets formula_33 If the space is locally compact then every open set is measurable for this measure.
The measure formula_0 does not necessarily coincide with the content formula_35 on compact sets. However, it does if formula_35 is regular in the sense that for any compact formula_56 formula_57 is the infimum of formula_58 over compact sets formula_59 containing formula_38 in their interiors.
| [
{
"math_id": 0,
"text": "\\mu"
},
{
"math_id": 1,
"text": "\\mathcal{A}"
},
{
"math_id": 2,
"text": "\\mu(A)\\in\\ [0, \\infty] \\text{ whenever } A \\in \\mathcal{A}."
},
{
"math_id": 3,
"text": "\\mu(\\varnothing) = 0."
},
{
"math_id": 4,
"text": "\\mu\\Bigl(\\bigcup_{i=1}^n A_i\\Bigr) = \\sum_{i=1}^n \\mu(A_i) \\text{ whenever } A_1, \\dots, A_n, \\bigcup_{i=1}^n A_i \\in \\mathcal{A} \\text{ and } A_i \\cap A_j = \\varnothing \\text{ for } i \\neq j."
},
{
"math_id": 5,
"text": "[a,b) \\subseteq \\R"
},
{
"math_id": 6,
"text": "\\mu([a,b))=b-a."
},
{
"math_id": 7,
"text": "1/2^n"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "1, 1, 1, \\ldots,"
},
{
"math_id": 10,
"text": "A \\subseteq B \\Rightarrow \\mu(A) \\leq \\mu(B) \\text{ for } A, B \\in \\mathcal{A}."
},
{
"math_id": 11,
"text": "\\mu(A \\cup B) \\leq \\mu(A) + \\mu(B)"
},
{
"math_id": 12,
"text": "A, B \\in \\mathcal{A}"
},
{
"math_id": 13,
"text": "A \\cup B \\in \\mathcal{A}."
},
{
"math_id": 14,
"text": "B \\subseteq A"
},
{
"math_id": 15,
"text": "\\mu (B) < \\infty"
},
{
"math_id": 16,
"text": "\\mu (A \\setminus B) = \\mu (A) - \\mu (B)."
},
{
"math_id": 17,
"text": "A,B\\in\\mathcal{A} \\Rightarrow \\mu(A\\cup B)+\\mu(A\\cap B) = \\mu(A)+\\mu(B)."
},
{
"math_id": 18,
"text": "A_i\\in \\mathcal{A}\\; (i=1,2,\\dotsc,n) \\Rightarrow \\mu\\left(\\bigcup_{i=1}^n A_i\\right)\\leq \\sum_{i=1}^n \\mu(A_i)."
},
{
"math_id": 19,
"text": "\\sigma"
},
{
"math_id": 20,
"text": "A_i \\in \\mathcal{A}\\; (i=1,2,\\dotsc)\\ "
},
{
"math_id": 21,
"text": "\\bigcup_{i=1}^\\infty A_i\\in \\mathcal{A}"
},
{
"math_id": 22,
"text": "\\mu\\left(\\bigcup_{i=1}^\\infty A_i\\right) \\geq \\sum_{i=1}^\\infty \\mu(A_i)."
},
{
"math_id": 23,
"text": "A \\in\\mathcal{A} \\Rightarrow \\mu(A)<\\infty,"
},
{
"math_id": 24,
"text": "\\mu\\left(\\bigcup_{i=1}^nA_i\\right) = \\sum_{k=1}^n(-1)^{k+1}\\!\\!\\sum_{I\\subseteq\\{1,\\dotsc,n\\},\\atop |I|=k}\\!\\!\\!\\!\\mu\\left(\\bigcap_{i\\in I}A_i\\right)"
},
{
"math_id": 25,
"text": "A_i\\in \\mathcal{A}"
},
{
"math_id": 26,
"text": "i\\in\\{1,\\dotsc,n\\}."
},
{
"math_id": 27,
"text": "f"
},
{
"math_id": 28,
"text": "\\int f \\, d\\lambda = \\lim \\sum_{i=1}^n f(\\alpha_i)\\lambda (f^{-1}(A_i))"
},
{
"math_id": 29,
"text": "A_i"
},
{
"math_id": 30,
"text": "f,"
},
{
"math_id": 31,
"text": "\\alpha_i"
},
{
"math_id": 32,
"text": "A_i,"
},
{
"math_id": 33,
"text": "X."
},
{
"math_id": 34,
"text": "X"
},
{
"math_id": 35,
"text": "\\lambda"
},
{
"math_id": 36,
"text": "X,"
},
{
"math_id": 37,
"text": "\\int f \\, d\\lambda."
},
{
"math_id": 38,
"text": "C"
},
{
"math_id": 39,
"text": "\\lambda(C) \\in\\ [0, \\infty]"
},
{
"math_id": 40,
"text": "\\lambda(\\varnothing) = 0."
},
{
"math_id": 41,
"text": "\\lambda(C_1) \\leq \\lambda(C_2) \\text{ whenever } C_1\\subseteq C_2"
},
{
"math_id": 42,
"text": "\\lambda(C_1 \\cup C_2) \\leq \\lambda(C_1) + \\lambda(C_2)"
},
{
"math_id": 43,
"text": "\\lambda(C_1 \\cup C_2) = \\lambda(C_1) + \\lambda(C_2)"
},
{
"math_id": 44,
"text": "\\mu(U) = \\sup_{C\\subseteq U} \\lambda (C)."
},
{
"math_id": 45,
"text": "\\mu(U) \\in\\ [0, \\infty]"
},
{
"math_id": 46,
"text": "\\mu(\\varnothing) = 0"
},
{
"math_id": 47,
"text": "\\mu(U_1) \\leq \\mu(U_2) \\text{ whenever } U_1\\subseteq U_2"
},
{
"math_id": 48,
"text": "\\mu\\left(\\bigcup_nU_n\\right) \\leq \\sum_n\\lambda(U_n)"
},
{
"math_id": 49,
"text": "\\mu\\left(\\bigcup_nU_n\\right) = \\sum_n\\lambda(U_n)"
},
{
"math_id": 50,
"text": "\\mu(A) = \\inf_{A\\subseteq U}\\mu (U)."
},
{
"math_id": 51,
"text": "\\mu(A) \\in\\ [0, \\infty]"
},
{
"math_id": 52,
"text": "\\mu(A_1) \\leq \\mu(A_2) \\text{ whenever } A_1\\subseteq A_2"
},
{
"math_id": 53,
"text": "\\mu\\left(\\bigcup_nA_n\\right) \\leq \\sum_n\\lambda(A_n)"
},
{
"math_id": 54,
"text": "E"
},
{
"math_id": 55,
"text": "\\mu(X) = \\mu(X \\cap E) + \\mu(X \\setminus E)"
},
{
"math_id": 56,
"text": "C,"
},
{
"math_id": 57,
"text": "\\lambda(C)"
},
{
"math_id": 58,
"text": "\\lambda(D)"
},
{
"math_id": 59,
"text": "D"
}
] | https://en.wikipedia.org/wiki?curid=9999200 |
9999827 | Liénard–Wiechert potential | Electromagnetic effect of point charges
The Liénard–Wiechert potentials describe the classical electromagnetic effect of a moving electric point charge in terms of a vector potential and a scalar potential in the Lorenz gauge. Stemming directly from Maxwell's equations, these describe the complete, relativistically correct, time-varying electromagnetic field for a point charge in arbitrary motion, but are not corrected for quantum mechanical effects. Electromagnetic radiation in the form of waves can be obtained from these potentials. These expressions were developed in part by Alfred-Marie Liénard in 1898 and independently by Emil Wiechert in 1900.
Equations.
Definition of Liénard–Wiechert potentials.
The retarded time is defined, in the context of distributions of charges and currents, as
formula_0
where formula_1 is the observation point, and formula_2 is the observed point subject to the variations of source charges and currents.
For a moving point charge formula_3 whose given trajectory is formula_4, formula_5 is no longer fixed, but becomes a function of the retarded time itself. In other words, following the trajectory of formula_3 yields the implicit equation
formula_6
which provides the retarded time formula_7 as a function of the current time (and of the given trajectory):
formula_8.
The Liénard–Wiechert potentials formula_9 (scalar potential field) and formula_10 (vector potential field) are, for a source point charge formula_3 at position formula_2 traveling with velocity formula_11:
formula_12
and
formula_13
where:
formula_14 is the velocity of the source expressed as a fraction of the speed of light,
formula_15 is the distance from the source to the observation point,
formula_16 is the unit vector pointing from the source toward the observation point, and
formula_17 indicates that the quantity in parentheses is evaluated at the retarded time formula_7.
This can also be written in a covariant way, where the electromagnetic four-potential at formula_18 is:
formula_19
where formula_20 and formula_21 is the position of the source and formula_22 is its four velocity.
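Since the retarded time is defined only implicitly, in practice it has to be found numerically. The Python sketch below is an illustration only (the circular trajectory, all parameter values, the function names and the unit convention c = 1 with the prefactor of the potential set to 1 are chosen here): it solves the implicit equation for formula_7 by fixed-point iteration and then evaluates the scalar potential formula_9 from the expression above. The iteration converges here because the source speed stays below c, in line with the discussion of existence and uniqueness of the retarded time later in the article.

import numpy as np

c = 1.0                        # units with c = 1 and q/(4*pi*eps0) = 1

def r_s(t):
    """Source trajectory: uniform circular motion of radius 0.3 and speed 0.3*c."""
    return np.array([0.3 * np.cos(t), 0.3 * np.sin(t), 0.0])

def retarded_time(r, t, tol=1e-12):
    """Solve t_r = t - |r - r_s(t_r)| / c by fixed-point iteration."""
    t_r = t
    for _ in range(200):
        t_new = t - np.linalg.norm(r - r_s(t_r)) / c
        if abs(t_new - t_r) < tol:
            break
        t_r = t_new
    return t_r

def scalar_potential(r, t):
    """Lienard-Wiechert scalar potential, up to the constant q/(4*pi*eps0)."""
    t_r = retarded_time(r, t)
    R_vec = r - r_s(t_r)
    R = np.linalg.norm(R_vec)
    n = R_vec / R
    h = 1e-6                   # source velocity at the retarded time by finite differences
    beta = (r_s(t_r + h) - r_s(t_r - h)) / (2 * h) / c
    return 1.0 / ((1.0 - np.dot(n, beta)) * R)

print(scalar_potential(np.array([2.0, 0.0, 0.0]), 0.0))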
Field computation.
We can calculate the electric and magnetic fields directly from the potentials using the definitions:
formula_23 and formula_24
The calculation is nontrivial and requires a number of steps. The electric and magnetic fields are (in non-covariant form):
formula_25
and
formula_26
where formula_27, formula_28 and formula_29 (the Lorentz factor).
Note that the formula_30 part of the first term of the electric field updates the direction of the field toward the instantaneous position of the charge, if it continues to move with constant velocity formula_31. This term is connected with the "static" part of the electromagnetic field of the charge.
The second term, which is connected with electromagnetic radiation by the moving charge, requires charge acceleration formula_32 and if this is zero, the value of this term is zero, and the charge does not radiate (emit electromagnetic radiation). This term requires additionally that a component of the charge acceleration be in a direction transverse to the line which connects the charge formula_3 and the observer of the field formula_33. The direction of the field associated with this radiative term is toward the fully time-retarded position of the charge (i.e. where the charge was when it was accelerated).
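For a charge moving with constant velocity, the claim that the first term points toward the instantaneous position of the charge can be verified directly, since the vector formula_30 evaluated at the retarded time is then parallel to the vector from the present position of the charge to the observation point. A small Python check follows (all numerical values are arbitrary test choices, in units with c = 1).

import numpy as np

c = 1.0
v = np.array([0.6, 0.2, 0.0])            # constant velocity, |v| < c
r = np.array([3.0, -1.0, 2.0])           # observation point
t = 5.0                                  # observation time

t_r = t                                  # solve t_r = t - |r - v*t_r| / c for the trajectory r_s(t') = v*t'
for _ in range(200):
    t_r = t - np.linalg.norm(r - v * t_r) / c

n = (r - v * t_r) / np.linalg.norm(r - v * t_r)
beta = v / c

direction_term = n - beta                # direction of the velocity (first) field term
instantaneous = r - v * t                # vector from the present charge position to the observer

print(np.cross(direction_term, instantaneous))   # essentially zero: the vectors are parallel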
Derivation.
The scalar potential formula_34 and the vector potential formula_35 satisfy the nonhomogeneous electromagnetic wave equations, in which the sources are expressed in terms of the charge and current densities formula_36 and formula_37:
formula_38
and the Ampère-Maxwell law is:
formula_39
Since the potentials are not unique, but have gauge freedom, these equations can be simplified by gauge fixing. A common choice is the Lorenz gauge condition:
formula_40
Then the nonhomogeneous wave equations become uncoupled and symmetric in the potentials:
formula_41
formula_42
Generally, the retarded solutions for the scalar and vector potentials (SI units) are
formula_43
and
formula_44
where formula_45 is the retarded time and formula_46 and formula_47
satisfy the homogeneous wave equation with no sources and boundary conditions. In the case that there are no boundaries surrounding the sources then
formula_48 and formula_49.
For a moving point charge whose trajectory is given as a function of time by formula_50, the charge and current densities are as follows:
formula_51
formula_52
where formula_53 is the three-dimensional Dirac delta function and formula_54 is the velocity of the point charge.
Substituting into the expressions for the potential gives
formula_55
formula_56
These integrals are difficult to evaluate in their present form, so we will rewrite them by replacing formula_57 with formula_58 and integrating over the delta distribution formula_59:
formula_60
formula_61
We exchange the order of integration:
formula_62
formula_63
The delta function picks out formula_64 which allows us to perform the inner integration with ease. Note that formula_57 is a function of formula_65, so this integration also fixes formula_66.
formula_67
formula_68
The retarded time formula_57 is a function of the field point formula_69 and the source trajectory formula_50, and hence depends on formula_58. To evaluate this integral, therefore, we need the identity
formula_70
where each formula_71 is a zero of formula_72. Because there is only one retarded time formula_7 for any given space-time coordinates formula_69 and source trajectory formula_50, this reduces to:
formula_73
where formula_74 and formula_2 are evaluated at the retarded time formula_7, and we have used the identity formula_75 with formula_76. Notice that the retarded time formula_7 is the solution of the equation formula_77. Finally, the delta function picks out formula_78, and
formula_79
formula_80
which are the Liénard–Wiechert potentials.
Lorenz gauge, electric and magnetic fields.
In order to calculate the derivatives of formula_9 and formula_10 it is convenient to first compute the derivatives of the retarded time. Taking the derivatives of both sides of its defining equation (remembering that formula_81):
formula_82
Differentiating with respect to t,
formula_83
formula_84
formula_85
Similarly, taking the gradient with respect to formula_86 and using the multivariable chain rule gives
formula_87
formula_88
formula_89
formula_90
It follows that
formula_91
formula_92
These can be used in calculating the derivatives of the vector potential and the resulting expressions are
formula_93
formula_94
These show that the Lorenz gauge is satisfied, namely that formula_95.
Similarly one calculates:
formula_96
formula_97
By noting that for any vectors formula_98, formula_99, formula_100:
formula_101
The expression for the electric field mentioned above becomes
formula_102
which is easily seen to be equal to formula_103
Similarly formula_104 gives the expression of the magnetic field mentioned above:
formula_105
The source terms formula_2, formula_106, and formula_107 are to be evaluated at the retarded time.
Implications.
The study of classical electrodynamics was instrumental in Albert Einstein's development of the theory of relativity. Analysis of the motion and propagation of electromagnetic waves led to the special relativity description of space and time. The Liénard–Wiechert formulation is an important launchpad into a deeper analysis of relativistic moving particles.
The Liénard–Wiechert description is accurate for a large, independently moving particle (i.e. the treatment is "classical" and the acceleration of the charge is due to a force independent of the electromagnetic field). The Liénard–Wiechert formulation always provides two sets of solutions: Advanced fields are absorbed by the charges and retarded fields are emitted. Schwarzschild and Fokker considered the advanced field of a system of moving charges, and the retarded field of a system of charges having the same geometry and opposite charges. Linearity of Maxwell's equations in vacuum allows one to add both systems, so that the charges disappear: This trick allows Maxwell's equations to become linear in matter.
Multiplying the electric parameters of both problems by arbitrary real constants produces a coherent interaction of light with matter, which generalizes Einstein's theory, now considered the founding theory of lasers: it is not necessary to study a large set of identical molecules to obtain coherent amplification in the mode obtained by arbitrary multiplications of advanced and retarded fields.
To compute energy, it is necessary to use the absolute fields which include the zero point field; otherwise, an error appears, for instance in photon counting.
It is important to take into account the zero point field discovered by Planck. It replaces Einstein's "A" coefficient and explains that the classical electron is stable on Rydberg's classical orbits. Moreover, introducing the fluctuations of the zero point field produces Willis E. Lamb's correction of the levels of the hydrogen atom.
Quantum electrodynamics helped bring together the radiative behavior with the quantum constraints. It introduces quantization of normal modes of the electromagnetic field in assumed perfect optical resonators.
Universal speed limit.
The force on a particle at a given location r and time t depends in a complicated way on the position of the source particles at an earlier time tr due to the finite speed, c, at which electromagnetic information travels. A particle on Earth 'sees' a charged particle accelerate on the Moon as this acceleration happened 1.5 seconds ago, and a charged particle's acceleration on the Sun as it happened 500 seconds ago. This earlier time at which an event happens, such that a particle at location r 'sees' the event at a later time t, is called the retarded time, tr. The retarded time varies with position; for example, the retarded time at the Moon is 1.5 seconds before the current time and the retarded time on the Sun is 500 s before the current time on the Earth. The retarded time tr = tr(r, t) is defined implicitly by
formula_108
where formula_109 is the distance of the particle from the source at the retarded time. Only electromagnetic wave effects depend fully on the retarded time.
A novel feature in the Liénard–Wiechert potential is seen in the breakup of its terms into two types of field terms (see below), only one of which depends fully on the retarded time. The first of these is the static electric (or magnetic) field term that depends only on the distance to the moving charge, and does not depend on the retarded time at all, if the velocity of the source is constant. The other term is dynamic, in that it requires that the moving charge be "accelerating" with a component perpendicular to the line connecting the charge and the observer and does not appear unless the source changes velocity. This second term is connected with electromagnetic radiation.
The first term describes near field effects from the charge, and its direction in space is updated with a term that corrects for any constant-velocity motion of the charge on its distant static field, so that the distant static field appears at distance from the charge, with no aberration of light or light-time correction. This term, which corrects for time-retardation delays in the direction of the static field, is required by Lorentz invariance. A charge moving with a constant velocity must appear to a distant observer in exactly the same way as a static charge appears to a moving observer, and in the latter case, the direction of the static field must change instantaneously, with no time-delay. Thus, static fields (the first term) point exactly at the true instantaneous (non-retarded) position of the charged object if its velocity has not changed over the retarded time delay. This is true over any distance separating objects.
The second term, however, which contains information about the acceleration and other unique behavior of the charge that cannot be removed by changing the Lorentz frame (inertial reference frame of the observer), is fully dependent for direction on the time-retarded position of the source. Thus, electromagnetic radiation (described by the second term) always appears to come from the direction of the position of the emitting charge at the retarded time. Only this second term describes information transfer about the behavior of the charge, which transfer occurs (radiates from the charge) at the speed of light. At "far" distances (longer than several wavelengths of radiation), the 1/R dependence of this term makes electromagnetic field effects (the value of this field term) more powerful than "static" field effects, which are described by the 1/R² field of the first (static) term and thus decay more rapidly with distance from the charge.
Existence and uniqueness of the retarded time.
Existence.
The retarded time is not guaranteed to exist in general. For example, if, in a given frame of reference, an electron has just been created, then at this very moment another electron does not yet feel its electromagnetic force at all. However, under certain conditions, there always exists a retarded time. For example, if the source charge has existed for an unlimited amount of time, during which it has always travelled at a speed not exceeding formula_110, then there exists a valid retarded time formula_7. This can be seen by considering the function formula_111. At the present time formula_112; formula_113. The derivative formula_114 is given by
formula_115
By the mean value theorem, formula_116. By making formula_117 sufficiently large, this can become negative, "i.e.", at some point in the past, formula_118. By the intermediate value theorem, there exists an intermediate formula_7 with formula_119, the defining equation of the retarded time. Intuitively, as the source charge moves back in time, the cross section of its light cone at present time expands faster than it can recede, so eventually it must reach the point formula_86. This is not necessarily true if the source charge's speed is allowed to be arbitrarily close to formula_120, "i.e.", if for any given speed formula_121 there was some time in the past when the charge was moving at this speed. In this case the cross section of the light cone at present time approaches the point formula_86 as the observer travels back in time but does not necessarily ever reach it.
Uniqueness.
For a given point formula_69 and trajectory of the point source formula_50, there is at most one value of the retarded time formula_7, "i.e.", one value formula_7 such that formula_122. This can be realized by assuming that there are two retarded times formula_123 and formula_124, with formula_125. Then, formula_126 and formula_127. Subtracting gives formula_128 by the triangle inequality. Unless formula_129, this then implies that the average velocity of the charge between formula_123 and formula_124 is formula_130, which is impossible. The intuitive interpretation is that one can only ever "see" the point source at one location/time at once unless it travels at least at the speed of light to another location. As the source moves forward in time, the cross section of its light cone at present time contracts faster than the source can approach, so it can never intersect the point formula_86 again.
The conclusion is that, under certain conditions, the retarded time exists and is unique. | [
{
"math_id": 0,
"text": "t_r(\\mathbf{r},\\mathbf{r_s}, t) = t - \\frac{1}{c}|\\mathbf{r} - \\mathbf{r}_s|,"
},
{
"math_id": 1,
"text": " \\mathbf{r} "
},
{
"math_id": 2,
"text": "\\mathbf{r}_s"
},
{
"math_id": 3,
"text": "q"
},
{
"math_id": 4,
"text": "\\mathbf{r_s}(t)"
},
{
"math_id": 5,
"text": "\\mathbf{r_s}"
},
{
"math_id": 6,
"text": "t_r = t - \\frac{1}{c}|\\mathbf{r} - \\mathbf{r}_s(t_r)|,"
},
{
"math_id": 7,
"text": "t_r"
},
{
"math_id": 8,
"text": "t_r = t_r(\\mathbf{r},t)"
},
{
"math_id": 9,
"text": "\\varphi"
},
{
"math_id": 10,
"text": "\\mathbf{A}"
},
{
"math_id": 11,
"text": "\\mathbf{v}_s"
},
{
"math_id": 12,
"text": "\\varphi(\\mathbf{r}, t) = \\frac{1}{4 \\pi \\epsilon_0} \\left(\\frac{q}{(1 - \\mathbf{n}_s \\cdot \\boldsymbol{\\beta}_s)|\\mathbf{r} - \\mathbf{r}_s|} \\right)_{t_r}"
},
{
"math_id": 13,
"text": "\\mathbf{A}(\\mathbf{r},t) = \\frac{\\mu_0c}{4 \\pi} \\left(\\frac{q \\boldsymbol{\\beta}_s}{(1 - \\mathbf{n}_s \\cdot \\boldsymbol{\\beta}_s)|\\mathbf{r} - \\mathbf{r}_s|} \\right)_{t_r} = \\frac{\\boldsymbol{\\beta}_s(t_r)}{c} \\varphi(\\mathbf{r}, t)"
},
{
"math_id": 14,
"text": "\\boldsymbol{\\beta}_s(t) = \\frac{\\mathbf{v}_s(t)}{c}"
},
{
"math_id": 15,
"text": "{|\\mathbf{r} - \\mathbf{r}_s|}"
},
{
"math_id": 16,
"text": "\\mathbf{n}_s = \\frac{\\mathbf{r} - \\mathbf{r}_s}{|\\mathbf{r} - \\mathbf{r}_s|}"
},
{
"math_id": 17,
"text": "(\\cdots)_{t_r}"
},
{
"math_id": 18,
"text": "X^{\\mu}=(t,x,y,z)"
},
{
"math_id": 19,
"text": "A^{\\mu}(X)= -\\frac{\\mu_0 q c}{4 \\pi} \\left(\\frac{U^{\\mu}}{R_{\\nu}U^{\\nu}} \\right)_{t_r} "
},
{
"math_id": 20,
"text": "R^{\\mu}=X^{\\mu}-R_{\\rm s}^{\\mu}"
},
{
"math_id": 21,
"text": "R_{\\rm s}^{\\mu}"
},
{
"math_id": 22,
"text": "U^{\\mu}=dX^{\\mu}/d\\tau"
},
{
"math_id": 23,
"text": "\\mathbf{E} = - \\nabla \\varphi - \\dfrac{\\partial \\mathbf{A}}{\\partial t}"
},
{
"math_id": 24,
"text": "\\mathbf{B} = \\nabla \\times \\mathbf{A}"
},
{
"math_id": 25,
"text": "\\mathbf{E}(\\mathbf{r}, t) = \\frac{1}{4 \\pi \\varepsilon_0} \\left(\\frac{q(\\mathbf{n}_s - \\boldsymbol{\\beta}_s)}{\\gamma^2 (1 - \\mathbf{n}_s \\cdot \\boldsymbol{\\beta}_s)^3 |\\mathbf{r} - \\mathbf{r}_s|^2} + \\frac{q \\mathbf{n}_s \\times \\big((\\mathbf{n}_s - \\boldsymbol{\\beta}_s) \\times \\dot{\\boldsymbol{\\beta}_s}\\big)}{c(1 - \\mathbf{n}_s \\cdot \\boldsymbol{\\beta}_s)^3 |\\mathbf{r} - \\mathbf{r}_s|} \\right)_{t_r}"
},
{
"math_id": 26,
"text": "\\mathbf{B}(\\mathbf{r}, t) = \\frac{\\mu_0}{4 \\pi} \\left(\\frac{q c(\\boldsymbol{\\beta}_s \\times \\mathbf{n}_s)}{\\gamma^2 (1-\\mathbf{n}_s \\cdot \\boldsymbol{\\beta}_s)^3 |\\mathbf{r} - \\mathbf{r}_s|^2} + \\frac{q \\mathbf{n}_s \\times \\Big(\\mathbf{n}_s \\times \\big((\\mathbf{n}_s - \\boldsymbol{\\beta}_s) \\times \\dot{\\boldsymbol{\\beta}_s}\\big) \\Big)}{(1 - \\mathbf{n}_s \\cdot \\boldsymbol{\\beta}_s)^3 |\\mathbf{r} - \\mathbf{r}_s|} \\right)_{t_r} = \\frac{\\mathbf{n}_s(t_r)}{c} \\times \\mathbf{E}(\\mathbf{r}, t)"
},
{
"math_id": 27,
"text": " \\boldsymbol{\\beta}_s(t) = \\frac{\\mathbf{v}_s(t)}{c}"
},
{
"math_id": 28,
"text": "\\mathbf{n}_s(t) = \\frac{\\mathbf{r} - \\mathbf{r}_s(t)}{|\\mathbf{r} - \\mathbf{r}_s(t)|}"
},
{
"math_id": 29,
"text": "\\gamma(t) = \\frac{1}{\\sqrt{1 - |\\boldsymbol{\\beta}_s(t)|^2}}"
},
{
"math_id": 30,
"text": "\\mathbf{n}_s - \\boldsymbol{\\beta}_s"
},
{
"math_id": 31,
"text": "c \\boldsymbol{\\beta}_s"
},
{
"math_id": 32,
"text": "\\dot{\\boldsymbol{\\beta}}_s"
},
{
"math_id": 33,
"text": "\\mathbf{E}(\\mathbf{r}, t)"
},
{
"math_id": 34,
"text": "\\varphi(\\mathbf{r}, t)"
},
{
"math_id": 35,
"text": "\\mathbf{A}(\\mathbf{r}, t)"
},
{
"math_id": 36,
"text": "\\rho(\\mathbf{r}, t)"
},
{
"math_id": 37,
"text": "\\mathbf{J}(\\mathbf{r}, t)"
},
{
"math_id": 38,
"text": " \n \\nabla^2 \\varphi + {{\\partial } \\over \\partial t} \\left ( \\nabla \\cdot \\mathbf{A} \\right ) = - {\\rho \\over \\varepsilon_0} \\,, "
},
{
"math_id": 39,
"text": " \\nabla^2 \\mathbf{A} - {1 \\over c^2} {\\partial^2 \\mathbf{A} \\over \\partial t^2} - \\nabla \\left ( {1 \\over c^2} {{\\partial \\varphi } \\over {\\partial t }} + \\nabla \\cdot \\mathbf{A} \\right ) = - \\mu_0 \\mathbf{J} \\,. "
},
{
"math_id": 40,
"text": " {1 \\over c^2} {{\\partial \\varphi } \\over {\\partial t }} + \\nabla \\cdot \\mathbf{A} = 0 "
},
{
"math_id": 41,
"text": " \n \\nabla^2 \\varphi - {1 \\over c^2} {\\partial^2 \\varphi \\over \\partial t^2} = - {\\rho \\over \\varepsilon_0} \\,,"
},
{
"math_id": 42,
"text": " \n \\nabla^2 \\mathbf{A} - {1 \\over c^2} {\\partial^2 \\mathbf{A} \\over \\partial t^2} = - \\mu_0 \\mathbf{J} \\,. "
},
{
"math_id": 43,
"text": "\n\\varphi(\\mathbf{r}, t) = \\frac{1}{4\\pi \\varepsilon_0}\\int \\frac{\\rho(\\mathbf{r}', t_r')}{|\\mathbf{r} - \\mathbf{r}'|} d^3\\mathbf{r}' + \\varphi_0(\\mathbf{r}, t) \n"
},
{
"math_id": 44,
"text": "\n\\mathbf{A}(\\mathbf{r}, t) = \\frac{\\mu_0}{4\\pi} \\int \\frac{\\mathbf{J}(\\mathbf{r}', t_r')}{|\\mathbf{r} - \\mathbf{r}'|} d^3\\mathbf{r}' + \\mathbf{A}_0(\\mathbf{r}, t)\n"
},
{
"math_id": 45,
"text": "t_r' = t - \\frac{1}{c} |\\mathbf{r} - \\mathbf{r}'|"
},
{
"math_id": 46,
"text": "\\varphi_0(\\mathbf{r}, t)"
},
{
"math_id": 47,
"text": "\\mathbf{A}_0(\\mathbf{r}, t)"
},
{
"math_id": 48,
"text": "\\varphi_0(\\mathbf{r}, t) = 0"
},
{
"math_id": 49,
"text": "\\mathbf{A}_0(\\mathbf{r}, t) = 0"
},
{
"math_id": 50,
"text": "\\mathbf{r}_s(t')"
},
{
"math_id": 51,
"text": "\n\\rho(\\mathbf{r}', t') = q \\delta^3(\\mathbf{r'} - \\mathbf{r}_s(t'))\n"
},
{
"math_id": 52,
"text": "\n\\mathbf{J}(\\mathbf{r}', t') = q\\mathbf{v}_s(t') \\delta^3(\\mathbf{r'} - \\mathbf{r}_s(t'))\n"
},
{
"math_id": 53,
"text": "\\delta^3"
},
{
"math_id": 54,
"text": "\\mathbf{v}_s(t')"
},
{
"math_id": 55,
"text": "\n\\varphi(\\mathbf{r}, t) = \\frac{1}{4\\pi\\epsilon_0} \\int \\frac{q \\delta^3(\\mathbf{r'} - \\mathbf{r}_s(t_r'))}{|\\mathbf{r} - \\mathbf{r}'|} d^3\\mathbf{r}'\n"
},
{
"math_id": 56,
"text": "\n\\mathbf{A}(\\mathbf{r}, t) = \\frac{\\mu_0}{4\\pi} \\int \\frac{q\\mathbf{v}_s(t_r') \\delta^3(\\mathbf{r'} - \\mathbf{r}_s(t_r'))}{|\\mathbf{r} - \\mathbf{r}'|} d^3\\mathbf{r}'\n"
},
{
"math_id": 57,
"text": "t_r'"
},
{
"math_id": 58,
"text": "t'"
},
{
"math_id": 59,
"text": "\\delta(t' - t_r')"
},
{
"math_id": 60,
"text": "\n\\varphi(\\mathbf{r}, t) = \\frac{1}{4\\pi \\epsilon_0} \\iint \\frac{q\\delta^3(\\mathbf{r'} - \\mathbf{r}_s(t'))}{|\\mathbf{r} - \\mathbf{r}'|} \\delta(t' - t_r') \\, dt' \\, d^3\\mathbf{r}'\n"
},
{
"math_id": 61,
"text": "\n\\mathbf{A}(\\mathbf{r}, t) = \\frac{\\mu_0}{4\\pi} \\iint \\frac{q\\mathbf{v}_s(t') \\delta^3(\\mathbf{r'} - \\mathbf{r}_s(t'))}{|\\mathbf{r} - \\mathbf{r}'|} \\delta(t' - t_r') \\, dt' \\, d^3\\mathbf{r}'\n"
},
{
"math_id": 62,
"text": "\n\\varphi(\\mathbf{r}, t) = \\frac{1}{4\\pi \\epsilon_0} \\iint \\frac{\\delta(t' - t_r')}{|\\mathbf{r} - \\mathbf{r}'|} q\\delta^3(\\mathbf{r'} - \\mathbf{r}_s(t')) \\, d^3\\mathbf{r}' dt'\n"
},
{
"math_id": 63,
"text": "\n\\mathbf{A}(\\mathbf{r}, t) = \\frac{\\mu_0}{4\\pi} \\iint \\frac{\\delta(t' - t_r')}{|\\mathbf{r} - \\mathbf{r}'|} q\\mathbf{v}_s(t') \\delta^3(\\mathbf{r'} - \\mathbf{r}_s(t')) \\, d^3\\mathbf{r}' dt'\n"
},
{
"math_id": 64,
"text": "\\mathbf{r}' = \\mathbf{r}_s(t')"
},
{
"math_id": 65,
"text": "\\mathbf{r}'"
},
{
"math_id": 66,
"text": "t_r' = t - \\frac{1}{c} |\\mathbf{r} - \\mathbf{r}_s(t')|"
},
{
"math_id": 67,
"text": "\n\\varphi(\\mathbf{r}, t) = \\frac{1}{4\\pi \\epsilon_0} \\int q\\frac{\\delta(t' - t_r')}{|\\mathbf{r} - \\mathbf{r}_s(t')|} dt'\n"
},
{
"math_id": 68,
"text": "\n\\mathbf{A}(\\mathbf{r}, t) = \\frac{\\mu_0}{4\\pi} \\int q\\mathbf{v}_s(t') \\frac{\\delta(t' - t_r')}{|\\mathbf{r} - \\mathbf{r}_s(t')|} \\, dt'\n"
},
{
"math_id": 69,
"text": "(\\mathbf{r}, t)"
},
{
"math_id": 70,
"text": "\\delta(f(t')) = \\sum_i \\frac{\\delta(t' - t_i)}{|f'(t_i)|}"
},
{
"math_id": 71,
"text": "t_i"
},
{
"math_id": 72,
"text": "f"
},
{
"math_id": 73,
"text": "\\begin{align}\\delta(t' - t_r')\n=& \\frac{\\delta(t' - t_r)}{\\frac{\\partial}{\\partial t'}(t' - t_r')|_{t' = t_r}}\n= \\frac{\\delta(t' - t_r)}{\\frac{\\partial}{\\partial t'}(t' - (t - \\frac{1}{c} |\\mathbf{r} - \\mathbf{r}_s(t')|))|_{t' = t_r}} \\\\\n&= \\frac{\\delta(t' - t_r)}{1 + \\frac{1}{c} (\\mathbf{r} - \\mathbf{r}_s(t'))/|\\mathbf{r} - \\mathbf{r}_s(t')|\\cdot (-\\mathbf{v}_s(t')) |_{t' = t_r}}\\\\\n&= \\frac{\\delta(t' - t_r)}{1 - \\boldsymbol{\\beta}_s \\cdot (\\mathbf{r}-\\mathbf{r}_s)/|\\mathbf{r}-\\mathbf{r}_s|}\\end{align}"
},
{
"math_id": 74,
"text": "\\boldsymbol{\\beta}_s = \\mathbf{v}_s/c"
},
{
"math_id": 75,
"text": "|\\mathbf{x}|' = \\hat{\\mathbf{x}} \\cdot \\mathbf{v}"
},
{
"math_id": 76,
"text": "\\mathbf{v} = \\mathbf{x}'"
},
{
"math_id": 77,
"text": "t_r = t - \\frac{1}{c} |\\mathbf{r} - \\mathbf{r}_s(t_r)|"
},
{
"math_id": 78,
"text": "t' = t_r"
},
{
"math_id": 79,
"text": "\n\\varphi(\\mathbf{r}, t) = \\frac{1}{4\\pi \\epsilon_0} \\left(\\frac{q}{|\\mathbf{r}-\\mathbf{r}_s| (1 - \\boldsymbol{\\beta}_s \\cdot (\\mathbf{r}-\\mathbf{r}_s)/|\\mathbf{r}-\\mathbf{r}_s|)}\\right)_{t_r} = \\frac{1}{4\\pi \\epsilon_0} \\left(\\frac{q}{(1-\\mathbf{n}_s\\cdot \\boldsymbol{\\beta}_s)|\\mathbf{r}-\\mathbf{r}_s|}\\right)_{t_r}\n"
},
{
"math_id": 80,
"text": "\n\\mathbf{A}(\\mathbf{r}, t) = \\frac{\\mu_0}{4\\pi} \\left(\\frac{q\\mathbf{v}}{|\\mathbf{r}-\\mathbf{r}_s| (1 - \\boldsymbol{\\beta}_s \\cdot (\\mathbf{r}-\\mathbf{r}_s)/|\\mathbf{r}-\\mathbf{r}_s|)}\\right)_{t_r} = \\frac{\\mu_0 c}{4\\pi} \\left(\\frac{q\\boldsymbol{\\beta}_s}{(1-\\mathbf{n}_s\\cdot \\boldsymbol{\\beta}_s)|\\mathbf{r}-\\mathbf{r}_s|}\\right)_{t_r}\n"
},
{
"math_id": 81,
"text": "\\mathbf{r_s} = \\mathbf{r_s}(t_r)"
},
{
"math_id": 82,
"text": "t_r + \\frac{1}{c} |\\mathbf{r}-\\mathbf{r_s}|= t "
},
{
"math_id": 83,
"text": "\\frac{d t_r}{d t} + \\frac{1}{c}\\frac{d t_r}{d t}\\frac{d |\\mathbf{r}-\\mathbf{r_s}|}{d t_r}= 1 "
},
{
"math_id": 84,
"text": "\\frac{d t_r}{d t} \\left(1 - \\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s\\right) = 1 "
},
{
"math_id": 85,
"text": "\\frac{d t_r}{d t} = \\frac{1}{\\left(1 - \\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s\\right)} "
},
{
"math_id": 86,
"text": "\\mathbf{r}"
},
{
"math_id": 87,
"text": "{\\boldsymbol \\nabla} t_r + \\frac{1}{c}{\\boldsymbol \\nabla} |\\mathbf{r}-\\mathbf{r_s}| = 0 "
},
{
"math_id": 88,
"text": "{\\boldsymbol \\nabla} t_r + \\frac{1}{c} \\left({\\boldsymbol \\nabla} t_r \\frac{d |\\mathbf{r}-\\mathbf{r_s}|}{d t_r} + \\mathbf{n}_s\\right) = 0 "
},
{
"math_id": 89,
"text": "{\\boldsymbol \\nabla} t_r \\left(1 - \\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s\\right) = -\\mathbf{n}_s/c "
},
{
"math_id": 90,
"text": "{\\boldsymbol \\nabla} t_r = -\\frac{\\mathbf{n}_s/c}{\\left(1 - \\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s\\right)} "
},
{
"math_id": 91,
"text": "\\frac{d |\\mathbf{r}-\\mathbf{r_s}|}{d t} = \\frac{d t_r}{d t}\\frac{d |\\mathbf{r}-\\mathbf{r_s}|}{d t_r} = \\frac{- \\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s c}{\\left(1 - \\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s\\right)}"
},
{
"math_id": 92,
"text": "{\\boldsymbol \\nabla} |\\mathbf{r}-\\mathbf{r_s}| = {\\boldsymbol \\nabla} t_r \\frac{d |\\mathbf{r}-\\mathbf{r_s}|}{d t_r} + \\mathbf{n}_s = \\frac{\\mathbf{n}_s}{\\left(1 - \\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s\\right)}"
},
{
"math_id": 93,
"text": "\\begin{align}\n\\frac{d \\varphi}{d t} =&\n-\\frac{q}{4\\pi\\epsilon_0}\\frac{1}{|\\mathbf{r}-\\mathbf{r_s}|^2\\left(1-\\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s\\right)^2}\\frac{d}{d t}\\left[(|\\mathbf{r}-\\mathbf{r_s}|(1-\\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s)\\right]\\\\\n=& -\\frac{q}{4\\pi\\epsilon_0}\\frac{1}{|\\mathbf{r}-\\mathbf{r_s}|^2\\left(1-\\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s\\right)^2}\\frac{d}{d t}\\left[|\\mathbf{r}-\\mathbf{r_s}|-(\\mathbf{r}-\\mathbf{r_s})\\cdot{\\boldsymbol \\beta}_s\\right]\\\\\n=& -\\frac{q c}{4\\pi\\epsilon_0}\\frac{1}{|\\mathbf{r}-\\mathbf{r_s}|^2\\left(1-\\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s\\right)^3}\\left[- \\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s + {\\beta_s}^2 - (\\mathbf{r}-\\mathbf{r_s})\\cdot \\dot {\\boldsymbol \\beta}_s /c \\right]\\end{align}"
},
{
"math_id": 94,
"text": "\\begin{align}{\\boldsymbol \\nabla}\\cdot\\mathbf{A} =&\n-\\frac{q}{4\\pi\\epsilon_0 c}\\frac{1}{|\\mathbf{r}-\\mathbf{r_s}|^2\\left(1-\\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s\\right)^2} \\big({\\boldsymbol \\nabla} \\left[\\left(|\\mathbf{r}-\\mathbf{r_s}|-(\\mathbf{r}-\\mathbf{r_s})\\cdot{\\boldsymbol \\beta}_s\\right)\\right]\\cdot{\\boldsymbol \\beta}_s - \\left[\\left(|\\mathbf{r}-\\mathbf{r_s}|-(\\mathbf{r}-\\mathbf{r_s})\\cdot{\\boldsymbol \\beta}_s\\right)\\right]{\\boldsymbol \\nabla}\\cdot{\\boldsymbol \\beta}_s\\big)\\\\\n=& - \\frac{q}{4\\pi\\epsilon_0 c}\\frac{1}{|\\mathbf{r}-\\mathbf{r_s}|^2\\left(1-\\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s\\right)^3}\\cdot\\\\\n&\\left[(\\mathbf{n}_s\\cdot {\\boldsymbol \\beta}_s) - {\\beta}_s^2(1-\\mathbf{n}_s\\cdot {\\boldsymbol \\beta}_s) - {\\beta}_s^2\\mathbf{n}_s\\cdot {\\boldsymbol \\beta}_s + \\left((\\mathbf{r}-\\mathbf{r_s})\\cdot \\dot {\\boldsymbol \\beta}_s/c\\right)(\\mathbf{n}_s\\cdot {\\boldsymbol \\beta}_s) + \\big(|\\mathbf{r}-\\mathbf{r_s}|-(\\mathbf{r}-\\mathbf{r_s})\\cdot{\\boldsymbol \\beta}_s\\big)(\\mathbf{n}_s\\cdot \\dot {\\boldsymbol \\beta}_s/c)\\right]\n\\\\=&\\frac{q}{4\\pi\\epsilon_0 c}\\frac{1}{|\\mathbf{r}-\\mathbf{r_s}|^2\\left(1-\\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s\\right)^3}\\left[\\beta_s^2 - \\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s - (\\mathbf{r}-\\mathbf{r_s})\\cdot \\dot {\\boldsymbol \\beta}_s/c\\right]\\end{align}"
},
{
"math_id": 95,
"text": "\\frac{d \\varphi}{d t} + c^2 {\\boldsymbol \\nabla}\\cdot\\mathbf{A} = 0 "
},
{
"math_id": 96,
"text": "{\\boldsymbol \\nabla}\\varphi = -\\frac{q}{4\\pi\\epsilon_0}\\frac{1}{|\\mathbf{r}-\\mathbf{r_s}|^2\\left(1-\\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s\\right)^3}\\left[\\mathbf{n}_s\\left(1-{\\beta_s}^2 + (\\mathbf{r}-\\mathbf{r_s})\\cdot \\dot {\\boldsymbol \\beta}_s/c\\right) - {\\boldsymbol \\beta}_s(1-\\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s)\\right]"
},
{
"math_id": 97,
"text": "\\frac{d\\mathbf{A}}{dt} = \\frac{q}{4\\pi\\epsilon_0}\\frac{1}{|\\mathbf{r}-\\mathbf{r_s}|^2\\left(1-\\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s\\right)^3}\\left[{\\boldsymbol \\beta}_s\\left(\\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s-{\\beta_s}^2 + (\\mathbf{r}-\\mathbf{r_s})\\cdot \\dot {\\boldsymbol \\beta}_s/c\\right) + |\\mathbf{r}-\\mathbf{r_s}|\\dot {\\boldsymbol \\beta}_s (1-\\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s)/c\\right]"
},
{
"math_id": 98,
"text": "\\mathbf{u}"
},
{
"math_id": 99,
"text": "\\mathbf{v}"
},
{
"math_id": 100,
"text": "\\mathbf{w}"
},
{
"math_id": 101,
"text": "\\mathbf{u}\\times(\\mathbf{v}\\times\\mathbf{w}) = (\\mathbf{u}\\cdot\\mathbf{w})\\mathbf{v}- (\\mathbf{u}\\cdot \\mathbf{v})\\mathbf{w}"
},
{
"math_id": 102,
"text": "\\begin{align}\\mathbf{E}(\\mathbf{r}, t) =& \\frac{q}{4 \\pi \\epsilon_0} \\frac{1}{|\\mathbf{r} - \\mathbf{r}_s|^2(1 - \\mathbf{n}_s \\cdot {\\boldsymbol \\beta}_s)^3}\\cdot \\\\\n&\\left[\\left(\\mathbf{n}_s - {\\boldsymbol \\beta}_s\\right)(1-{\\beta_s}^2) + |\\mathbf{r} - \\mathbf{r}_s|(\\mathbf{n}_s \\cdot \\dot{\\boldsymbol \\beta}_s/c) (\\mathbf{n}_s - {\\boldsymbol \\beta}_s) - |\\mathbf{r} - \\mathbf{r}_s|\\big(\\mathbf{n}_s \\cdot (\\mathbf{n}_s - {\\boldsymbol \\beta}_s)\\big) \\dot{\\boldsymbol \\beta}_s/c \\right]\\end{align}\n"
},
{
"math_id": 103,
"text": "-{\\boldsymbol \\nabla}\\varphi - \\frac{d\\mathbf{A}}{dt}"
},
{
"math_id": 104,
"text": "{\\boldsymbol \\nabla}\\times\\mathbf{A}"
},
{
"math_id": 105,
"text": "\\begin{align}{\\mathbf{B}} =& {\\boldsymbol \\nabla}\\times\\mathbf{A} \\\\[1ex]\n=&\n-\\frac{q}{4\\pi\\epsilon_0 c}\\frac{1}{|\\mathbf{r}-\\mathbf{r_s}|^2\\left(1-\\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s\\right)^2}\\big({\\boldsymbol \\nabla} \\left[\\left(|\\mathbf{r}-\\mathbf{r_s}|-(\\mathbf{r}-\\mathbf{r_s})\\cdot{\\boldsymbol \\beta}_s\\right)\\right]\\times{\\boldsymbol \\beta}_s - \\left[\\left(|\\mathbf{r}-\\mathbf{r_s}|-(\\mathbf{r}-\\mathbf{r_s})\\cdot{\\boldsymbol \\beta}_s\\right)\\right]{\\boldsymbol \\nabla}\\times{\\boldsymbol \\beta}_s\\big)\\\\\n=& - \\frac{q}{4\\pi\\epsilon_0 c}\\frac{1}{|\\mathbf{r}-\\mathbf{r_s}|^2\\left(1-\\mathbf{n}_s\\cdot{\\boldsymbol \\beta}_s\\right)^3}\\cdot\\\\\n&\\qquad \\left[(\\mathbf{n}_s\\times {\\boldsymbol \\beta}_s) - ({\\boldsymbol \\beta}_s\\times {\\boldsymbol \\beta}_s)(1-\\mathbf{n}_s\\cdot {\\boldsymbol \\beta}_s) - {\\beta}_s^2\\mathbf{n}_s\\times {\\boldsymbol \\beta}_s + \\left((\\mathbf{r}-\\mathbf{r_s})\\cdot \\dot {\\boldsymbol \\beta}_s/c\\right)(\\mathbf{n}_s\\times {\\boldsymbol \\beta}_s) + \\big(|\\mathbf{r}-\\mathbf{r_s}|-(\\mathbf{r}-\\mathbf{r_s})\\cdot{\\boldsymbol \\beta}_s\\big)(\\mathbf{n}_s\\times \\dot {\\boldsymbol \\beta}_s/c)\\right]\n\\\\=&\n-\\frac{q}{4 \\pi \\epsilon_0 c} \\frac{1}{|\\mathbf{r} - \\mathbf{r}_s|^2(1 - \\mathbf{n}_s \\cdot {\\boldsymbol \\beta}_s)^3}\\cdot \\\\\n&\\qquad \\left[\\left(\\mathbf{n}_s\\times{\\boldsymbol \\beta}_s\\right)(1-{\\beta_s}^2) + |\\mathbf{r} - \\mathbf{r}_s|(\\mathbf{n}_s \\cdot \\dot{\\boldsymbol \\beta}_s/c) (\\mathbf{n}_s\\times {\\boldsymbol \\beta}_s) + |\\mathbf{r} - \\mathbf{r}_s|\\big(\\mathbf{n}_s \\cdot (\\mathbf{n}_s - {\\boldsymbol \\beta}_s)\\big) \\mathbf{n}_s\\times\\dot{\\boldsymbol \\beta}_s/c \\right] \\\\[1ex]\n=& \\frac{\\mathbf{n}_s}{c}\\times\\mathbf{E}\n\\end{align}"
},
{
"math_id": 106,
"text": "\\mathbf{n}_s"
},
{
"math_id": 107,
"text": "\\mathbf{\\beta}_s"
},
{
"math_id": 108,
"text": "t_r=t-\\frac{R(t_r)}{c}"
},
{
"math_id": 109,
"text": "R(t_r)"
},
{
"math_id": 110,
"text": "v_M < c"
},
{
"math_id": 111,
"text": "f(t') = |\\mathbf{r} - \\mathbf{r}_s(t')| - c(t - t')"
},
{
"math_id": 112,
"text": "t' = t"
},
{
"math_id": 113,
"text": "f(t') = |\\mathbf{r} - \\mathbf{r}_s(t')| - c(t - t') = |\\mathbf{r} - \\mathbf{r}_s(t')| \\geq 0"
},
{
"math_id": 114,
"text": "f'(t')"
},
{
"math_id": 115,
"text": "f'(t') = \\frac{\\mathbf{r} - \\mathbf{r}_s(t_r)}{|\\mathbf{r} - \\mathbf{r}_s(t_r)|} \\cdot (-\\mathbf{v}_s(t')) + c \\geq c - \\left|\\frac{\\mathbf{r} - \\mathbf{r}_s(t_r)}{|\\mathbf{r} - \\mathbf{r}_s(t_r)|}\\right| \\, |\\mathbf{v}_s(t')| = c - |\\mathbf{v}_s(t')| \\geq c - v_M > 0"
},
{
"math_id": 116,
"text": "f(t - \\Delta t) \\leq f(t) - f'(t)\\Delta t \\leq f(t) - (c - v_M)\\Delta t"
},
{
"math_id": 117,
"text": "\\Delta t"
},
{
"math_id": 118,
"text": "f(t') < 0"
},
{
"math_id": 119,
"text": "f(t_r) = 0"
},
{
"math_id": 120,
"text": "c"
},
{
"math_id": 121,
"text": "v < c"
},
{
"math_id": 122,
"text": "|\\mathbf{r} - \\mathbf{r}_s(t_r)| = c(t - t_r)"
},
{
"math_id": 123,
"text": "t_1"
},
{
"math_id": 124,
"text": "t_2"
},
{
"math_id": 125,
"text": "t_1 \\leq t_2"
},
{
"math_id": 126,
"text": "|\\mathbf{r} - \\mathbf{r}_s(t_1)| = c(t - t_1)"
},
{
"math_id": 127,
"text": "|\\mathbf{r} - \\mathbf{r}_s(t_2)| = c(t - t_2)"
},
{
"math_id": 128,
"text": " c(t_2 - t_1) = |\\mathbf{r} - \\mathbf{r}_s(t_1)| - |\\mathbf{r} - \\mathbf{r}_s(t_2)| \\leq |\\mathbf{r}_s(t_2) - \\mathbf{r}_s(t_1)|"
},
{
"math_id": 129,
"text": "t_2 = t_1"
},
{
"math_id": 130,
"text": "|\\mathbf{r}_s(t_2) - \\mathbf{r}_s(t_1)|/(t_2 - t_1) \\geq c"
}
] | https://en.wikipedia.org/wiki?curid=9999827 |
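A minimal numerical sketch (not part of the source entry) of how the formulas above can be evaluated: the retarded-time condition of formula 108 is solved by fixed-point iteration, which converges because the source speed stays below c (formulas 110-121), and the resulting retarded quantities are substituted into the potentials of formulas 79 and 80. The trajectory r_s(t) used here is a hypothetical example (uniform motion at 0.5c along x), chosen only for illustration.

# Sketch: evaluate the Lienard-Wiechert potentials for an assumed trajectory.
import numpy as np

c = 299_792_458.0          # speed of light, m/s
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
mu0 = 1.25663706212e-6     # vacuum permeability, H/m
q = 1.602176634e-19        # source charge, C (elementary charge, for illustration)

def r_s(t):
    """Assumed (hypothetical) source trajectory: uniform velocity 0.5c along x."""
    return np.array([0.5 * c * t, 0.0, 0.0])

def v_s(t):
    """Velocity of the assumed trajectory."""
    return np.array([0.5 * c, 0.0, 0.0])

def retarded_time(r, t, tol=1e-15, max_iter=200):
    """Solve t_r = t - |r - r_s(t_r)|/c (formula 108) by fixed-point iteration.
    A unique solution exists because |v_s| < c (formulas 110-121)."""
    t_r = t
    for _ in range(max_iter):
        t_new = t - np.linalg.norm(r - r_s(t_r)) / c
        if abs(t_new - t_r) < tol:
            return t_new
        t_r = t_new
    return t_r

def lienard_wiechert_potentials(r, t):
    """Scalar potential phi and vector potential A at (r, t), per formulas 79 and 80."""
    t_r = retarded_time(r, t)
    R_vec = r - r_s(t_r)                  # r - r_s evaluated at the retarded time
    R = np.linalg.norm(R_vec)
    n_s = R_vec / R                       # unit vector of formula 28
    beta_s = v_s(t_r) / c                 # beta_s of formula 74
    denom = (1.0 - np.dot(n_s, beta_s)) * R
    phi = q / (4.0 * np.pi * eps0 * denom)
    A = mu0 * c * q * beta_s / (4.0 * np.pi * denom)
    return phi, A

# Example usage: field point 1 m away, observation time 1 ns.
phi, A = lienard_wiechert_potentials(np.array([1.0, 0.5, 0.0]), t=1e-9)
print(phi, A)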