id | title | text | formulas | url |
---|---|---|---|---|
1049453 | Semiclassical gravity | Physical theory with matter as quantum fields but gravity as a classical field
Semiclassical gravity is an approximation to the theory of quantum gravity in which one treats matter and energy fields as being quantum and the gravitational field as being classical.
In semiclassical gravity, matter is represented by quantum matter fields that propagate according to the theory of quantum fields in curved spacetime. The spacetime in which the fields propagate is classical but dynamical. The dynamics of the theory is described by the "semiclassical Einstein equations", which relate the curvature of the spacetime that is encoded by the Einstein tensor formula_0 to the expectation value of the energy–momentum tensor formula_1 (a quantum field theory operator) of the matter fields, i.e.
formula_2
where "G" is the gravitational constant, and formula_3 indicates the quantum state of the matter fields.
Energy–momentum tensor.
Regularizing the energy–momentum tensor introduces some ambiguity, which depends upon the curvature. This ambiguity can be absorbed into the cosmological constant, the gravitational constant, and the quadratic couplings
formula_4 and formula_5
There is another quadratic term of the form
formula_6
but in four dimensions this term is a linear combination of the other two terms and a surface term. See Gauss–Bonnet gravity for more details.
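In four dimensions this is the statement that the Gauss–Bonnet combination, written here in its standard form as a reminder rather than as a quotation from the source,

\int \sqrt{-g}\,\left( R^{\mu\nu\rho\sigma} R_{\mu\nu\rho\sigma} - 4 R^{\mu\nu} R_{\mu\nu} + R^2 \right) d^4x

is topological: its variation contributes only a surface term, so the Riemann-squared coupling can be re-expressed in terms of the two quadratic couplings above.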
Since the theory of quantum gravity is not yet known, it is difficult to precisely determine the regime of validity of semiclassical gravity. However, one can formally show that semiclassical gravity could be deduced from quantum gravity by considering "N" copies of the quantum matter fields and taking the limit of "N" going to infinity while keeping the product "GN" constant. At a diagrammatic level, semiclassical gravity corresponds to summing all Feynman diagrams that do not have loops of gravitons (but have an arbitrary number of matter loops). Semiclassical gravity can also be deduced from an axiomatic approach.
Experimental status.
There are cases where semiclassical gravity breaks down. For instance, if "M" is a huge mass, then the superposition
formula_7
where the locations "A" and "B" are spatially separated, results in an expectation value of the energy–momentum tensor that is "M"/2 at "A" and "M"/2 at "B", but one would never observe the metric sourced by such a distribution. Instead, one would observe the decoherence into a state with the metric sourced at "A" and another sourced at "B" with a 50% chance each. Extensions of semiclassical gravity that incorporate decoherence have also been studied.
Applications.
The most important applications of semiclassical gravity are to understand the Hawking radiation of black holes and the generation of random Gaussian-distributed perturbations in the theory of cosmic inflation, which is thought to occur at the very beginning of the Big Bang.
| [
{
"math_id": 0,
"text": "G_{\\mu\\nu}"
},
{
"math_id": 1,
"text": "\\hat T_{\\mu\\nu}"
},
{
"math_id": 2,
"text": "G_{\\mu\\nu} = \\frac{8 \\pi G}{c^4} \\left\\langle \\hat T_{\\mu\\nu} \\right\\rangle_\\psi,"
},
{
"math_id": 3,
"text": "\\psi"
},
{
"math_id": 4,
"text": "\\int \\sqrt{-g} R^2 \\, d^dx"
},
{
"math_id": 5,
"text": "\\int \\sqrt{-g} R^{\\mu\\nu} R_{\\mu\\nu} \\, d^dx."
},
{
"math_id": 6,
"text": "\\int \\sqrt{-g} R^{\\mu\\nu\\rho\\sigma} R_{\\mu\\nu\\rho\\sigma} \\, d^dx,"
},
{
"math_id": 7,
"text": "\\frac{1}{\\sqrt{2}} \\big(|M \\text{ at } A\\rangle + |M \\text{ at } B\\rangle\\big),"
}
] | https://en.wikipedia.org/wiki?curid=1049453 |
1049596 | Proton-exchange membrane fuel cell | Power generation technology
Proton-exchange membrane fuel cells (PEMFC), also known as polymer electrolyte membrane (PEM) fuel cells, are a type of fuel cell being developed mainly for transport applications, as well as for stationary fuel-cell applications and portable fuel-cell applications. Their distinguishing features include lower temperature/pressure ranges (50 to 100 °C) and a special proton-conducting polymer electrolyte membrane. PEMFCs generate electricity and operate on the opposite principle to PEM electrolysis, which consumes electricity. They are a leading candidate to replace the aging alkaline fuel-cell technology, which was used in the Space Shuttle.
Science.
PEMFCs are built out of membrane electrode assemblies (MEA), which include the electrodes, electrolyte, catalyst, and gas diffusion layers. An ink of catalyst, carbon, and electrode material is sprayed or painted onto the solid electrolyte, and carbon paper is hot-pressed on either side to protect the inside of the cell and also act as electrodes. The pivotal part of the cell is the triple phase boundary (TPB), where the electrolyte, catalyst, and reactants mix and thus where the cell reactions actually occur. Importantly, the membrane must not be electrically conductive so the half reactions do not mix. Operating temperatures above 100 °C are desired so the water byproduct becomes steam and water management becomes less critical in cell design.
Reactions.
A proton exchange membrane fuel cell transforms the chemical energy liberated during the electrochemical reaction of hydrogen and oxygen to electrical energy, as opposed to the direct combustion of hydrogen and oxygen gases to produce thermal energy.
A stream of hydrogen is delivered to the anode side of the MEA. At the anode side it is catalytically split into protons and electrons. This oxidation half-cell reaction or hydrogen oxidation reaction (HOR) is represented by:
At the anode: H2 → 2 H+ + 2 e− (standard potential 0 V)
The newly formed protons permeate through the polymer electrolyte membrane to the cathode side. The electrons travel along an external load circuit to the cathode side of the MEA, thus creating the current output of the fuel cell.
Meanwhile, a stream of oxygen is delivered to the cathode side of the MEA. At the cathode side oxygen molecules react with the protons permeating through the polymer electrolyte membrane and the electrons arriving through the external circuit to form water molecules. This reduction half-cell reaction or oxygen reduction reaction (ORR) is represented by:
At the cathode: 1⁄2 O2 + 2 H+ + 2 e− → H2O (standard potential +1.23 V)
Overall reaction: H2 + 1⁄2 O2 → H2O (reversible cell voltage 1.23 V)
The overall reversible reaction shows the recombination of the hydrogen protons and electrons with the oxygen molecule and the formation of one water molecule. The potentials in each case are given with respect to the standard hydrogen electrode.
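As a consistency check, the reversible cell voltage follows from the Gibbs free energy of the overall reaction (ΔG = −237.13 kJ/mol, the value quoted in the Efficiency section below), with "n" = 2 electrons transferred per molecule of hydrogen:

E^\circ_\text{cell} = -\frac{\Delta G}{nF} = \frac{237.13 \times 10^{3}\ \text{J/mol}}{2 \times 96{,}485\ \text{C/mol}} \approx 1.23\ \text{V}.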
Polymer electrolyte membrane.
To function, the membrane must conduct hydrogen ions (protons) but not electrons, as this would in effect "short circuit" the fuel cell. The membrane must also not allow either gas to pass to the other side of the cell, a problem known as gas crossover. Finally, the membrane must be resistant to the reducing environment at the anode as well as the harsh oxidative environment at the cathode.
Splitting of the hydrogen molecule is relatively easy using a platinum catalyst. Unfortunately, splitting the oxygen molecule is more difficult, and this causes significant electric losses. An appropriate catalyst material for this process has not been discovered, and platinum is the best option.
Strengths.
1. Easy sealing
PEMFCs have a thin polymeric membrane as the electrolyte. This membrane is located between the anode and cathode catalysts and allows protons to pass to the cathode while restricting the passage of electrons. Compared to liquid electrolytes, a polymeric membrane has a much lower chance of leakage [2]. The proton-exchange membrane is commonly made of materials such as perfluorosulfonic acid (PFSA, sold commercially as Nafion and Aquivion), which minimize gas crossover and short-circuiting of the fuel cell. A disadvantage of fluorine-containing polymers is that PFAS products are formed during their production (and disposal). PFAS, the so-called forever chemicals, are highly toxic. Newer polymers such as the recently patented SPX3 (polymers comprising sulfonated 2,6-diphenyl-1,4-phenylene oxide repeating units, US 11434329 B2) are fluorine-free and therefore do not carry the PFAS risk.
2. Low Operating Temperature
Under extreme sub-freezing conditions, the water produced by fuel cells can freeze in porous layers and flow channels. This freezing water can block gas and fuel transport as well as cover catalyst reaction sites, resulting in a loss of output power and a start-up failure of the fuel cell.
However, the low operating temperature of a PEM fuel cell allows it to reach a suitable temperature with less heating compared to other types of fuel cells. With this approach, PEM fuel cells have been shown to be capable of cold start processes from −20°C.
3. Light mass and high power density (transport applications)
PEM fuel cells have been shown to be capable of high power densities up to 39.7 kW/kg, compared to 2.5 kW/kg for solid oxide fuel cells. Due to this high power density, much research is being done on potential applications in transportation as well as wearable technology.
Weaknesses.
Fuel Cells based on PEM still have many issues:
1. Water management
Water management is crucial to performance: if water evaporates too slowly, it will flood the membrane, and the accumulation of water inside the flow field plate will impede the flow of oxygen into the fuel cell; but if water evaporates too quickly, the membrane dries and the resistance across it increases. Both cases damage stability and power output. Water management is a very difficult subject in PEM systems, primarily because water in the membrane is attracted toward the cathode of the cell through polarization.
A wide variety of solutions for managing the water exist including integration of an electroosmotic pump.
Another innovative method to resolve the water recirculation problem is the 3D fine-mesh flow field design used in the 2014 Toyota Mirai. The conventional design of an FC stack recirculates water from the air outlet to the air inlet through a humidifier, using straight channels and porous metal flow fields[54]. The flow field is a structure made up of ribs and channels. However, the rib partially covers the gas diffusion layer (GDL), and the resulting gas-transport distance is longer than the inter-channel distance. Furthermore, the contact pressure between the GDL and the rib compresses the GDL, making its thickness non-uniform across the rib and channel[55]. The large width and non-uniform thickness of the rib increase the potential for water vapor to accumulate, compromising the oxygen supply. As a result, oxygen is impeded from diffusing into the catalyst layer, leading to non-uniform power generation in the FC.
This new design enabled the first FC stack to function without a humidifying system, while overcoming water recirculation issues and achieving high power-output stability[54]. The 3D micro-lattice provides more pathways for gas flow; therefore, it promotes airflow toward the membrane electrode and gas diffusion layer assembly (MEGA) and promotes O2 diffusion to the catalyst layer. Unlike conventional flow fields, the 3D micro-lattices act as baffles and induce frequent micro-scale interfacial flux between the GDL and the flow field[53]. Due to this repeating micro-scale convective flow, oxygen transport to the catalyst layer (CL) and liquid-water removal from the GDL are significantly enhanced. The generated water is quickly drawn out through the flow field, preventing accumulation within the pores. As a result, the power generation from this flow field is uniform across the cross-section and self-humidification is enabled.
2. Vulnerability of the Catalyst
The platinum catalyst on the membrane is easily poisoned by carbon monoxide, which is often present in product gases formed by methane reforming (no more than one part per million is usually acceptable). This generally necessitates the use of the water gas shift reaction to eliminate CO from product gases and form more hydrogen. Additionally, the membrane is sensitive to the presence of metal ions, which may impair proton conduction mechanisms and can be introduced by corrosion of metallic bipolar plates, metallic components in the fuel cell system, or contaminants in the fuel/oxidant.
PEM systems that use reformed methanol have been proposed, as in the DaimlerChrysler Necar 5; reforming methanol, i.e. making it react to obtain hydrogen, is however a very complicated process that also requires purification from the carbon monoxide the reaction produces. A platinum–ruthenium catalyst is necessary, as some carbon monoxide will unavoidably reach the membrane. The level should not exceed 10 parts per million. Furthermore, the start-up time of such a reformer reactor is about half an hour. Alternatively, methanol and some other biofuels can be fed to a PEM fuel cell directly without being reformed, making a direct methanol fuel cell (DMFC). These devices operate with limited success.
3. Limitation of Operating Temperature
The most commonly used membrane is Nafion by Chemours, which relies on liquid water humidification of the membrane to transport protons. This implies that it is not feasible to use temperatures above 80 to 90 °C, since the membrane would dry. Other, more recent membrane types, based on polybenzimidazole (PBI) or phosphoric acid, can reach up to 220 °C without using any water management (see also High Temperature Proton Exchange Membrane fuel cell, HT-PEMFC): higher temperatures allow for better efficiencies, power densities, ease of cooling (because of larger allowable temperature differences), reduced sensitivity to carbon monoxide poisoning and better controllability (because of the absence of water management issues in the membrane); however, these recent types are not as common. PBI can be doped with phosphoric or sulfuric acid, and the conductivity scales with the amount of doping and the temperature. At high temperatures it is difficult to keep Nafion hydrated, but this acid-doped material does not use water as a medium for proton conduction. It also exhibits better mechanical properties (higher strength) than Nafion and is cheaper. However, acid leaching is a considerable issue, and processing (mixing with catalyst to form an ink) has proved tricky. Aromatic polymers, such as PEEK, are far cheaper than Teflon (PTFE, the backbone of Nafion), and their polar character leads to hydration that is less temperature dependent than in Nafion. However, PEEK is far less ionically conductive than Nafion and is thus a less favorable electrolyte choice. Recently, protic ionic liquids and protic organic ionic plastic crystals have been shown to be promising alternative electrolyte materials for high-temperature (100–200 °C) PEMFCs.
Electrodes.
An electrode typically consists of a carbon support, Pt particles, Nafion ionomer, and/or Teflon binder. The carbon support functions as an electrical conductor; the Pt particles are the reaction sites; the ionomer provides paths for proton conduction; and the Teflon binder increases the hydrophobicity of the electrode to minimize potential flooding. In order to enable the electrochemical reactions at the electrodes, protons, electrons and the reactant gases (hydrogen or oxygen) must gain access to the surface of the catalyst in the electrodes, while the product water, which can be in the liquid or gaseous phase, or both, must be able to permeate from the catalyst to the gas outlet. These properties are typically realized by porous composites of polymer electrolyte binder (ionomer) and catalyst nanoparticles supported on carbon particles. Typically platinum is used as the catalyst for the electrochemical reactions at the anode and cathode, while the nanoparticle form provides a high surface-to-weight ratio (as further described below), reducing the amount of the costly platinum. The polymer electrolyte binder provides the ionic conductivity, while the carbon support of the catalyst improves the electric conductivity and enables low platinum metal loading. The electric conductivity in the composite electrodes is typically more than 40 times higher than the proton conductivity.
Gas diffusion layer.
The GDL electrically connects the catalyst and current collector. It must be porous, electrically conductive, and thin. The reactants must be able to reach the catalyst, but conductivity and porosity can act as opposing forces. Optimally, the GDL should be composed of about one third Nafion or 15% PTFE. The carbon particles used in the GDL can be larger than those employed in the catalyst because surface area is not the most important variable in this layer. The GDL should be around 15–35 μm thick to balance the needed porosity with mechanical strength. Often, an intermediate porous layer is added between the GDL and catalyst layer to ease the transition between the large pores in the GDL and the small pores in the catalyst layer. Since a primary function of the GDL is to remove water, a product of the reaction, flooding can occur when water effectively blocks the GDL. This limits the reactants' ability to access the catalyst and significantly decreases performance. Teflon can be coated onto the GDL to limit the possibility of flooding. Several microscopic variables are analyzed in GDLs, such as porosity, tortuosity and permeability. These variables influence the behavior of the fuel cell.
Efficiency.
The maximum theoretical efficiency, obtained by applying the Gibbs free energy equation with ΔG = −237.13 kJ/mol and using the heating value of hydrogen (ΔH = −285.84 kJ/mol), is 83% at 298 K.
formula_0
The practical efficiency of PEMFCs is in the range of 50–60%.
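A minimal numeric sketch of the theoretical limit above, using only the two thermodynamic values quoted in this section:

```python
# Maximum thermodynamic efficiency of a hydrogen fuel cell at 298 K.
delta_g = -237.13e3  # Gibbs free energy of formation of water, J/mol
delta_h = -285.84e3  # enthalpy of formation (heating value of hydrogen), J/mol

eta_max = delta_g / delta_h  # equivalent to 1 - T*dS/dH
print(f"Theoretical efficiency: {eta_max:.1%}")  # ~83.0%

# Practical stacks reach roughly 50-60%, well below this thermodynamic ceiling.
```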
The main factors that create losses are activation polarization at the electrodes, ohmic (resistive) losses, and mass-transport (concentration) losses, together with fuel crossover through the membrane.
Bipolar plates.
The external electrodes, often referred to as bipolar plates or backplates, serve to distribute fuel and oxygen uniformly to the catalysts, to remove water, and to collect and transmit electric current. Thus, they need to be in close contact with the catalyst. Because the plates are in contact with both the PEM and the catalyst layers, the backplate needs to be structurally tough and leakage-resistant against structural failure due to fuel cell vibration and temperature cycling. As fuel cells operate over wide ranges of temperatures and in highly reductive/oxidative environments, the plates must maintain tight surface tolerances over these temperature ranges and be chemically stable. Since the backplate accounts for more than three-quarters of the fuel cell mass, the material must also be lightweight to maximize the energy density.
Materials that fulfill all of these requirements are often very expensive. Gold has been shown to fulfill these criteria well, but is only used for small production volumes due to its high cost. Titanium nitride (TiN) is a cheaper material that is used in fuel cell backplates due to its high chemical stability, electrical conductivity, and corrosion resistance. However, defects in the TiN coating can easily lead to corrosion of the underlying material, most commonly steel.
To perform their main function of distributing gas and fuel, these plates often have straight, parallel channels across their surfaces. However, this simple approach has led to issues such as uneven pressure distribution, water droplets blocking gas flow, and output power oscillations. Innovative approaches, such as nature-inspired fractal models and computer simulations, are being explored to optimize the function of these bipolar plates.
Metal-organic frameworks.
Metal-organic frameworks (MOFs) are a relatively new class of porous, highly crystalline materials that consist of metal nodes connected by organic linkers. Due to the simplicity of manipulating or substituting the metal centers and ligands, there are a virtually limitless number of possible combinations, which is attractive from a design standpoint. MOFs exhibit many unique properties due to their tunable pore sizes, thermal stability, high volume capacities, large surface areas, and desirable electrochemical characteristics. Among their many diverse uses, MOFs are promising candidates for clean energy applications such as hydrogen storage, gas separations, supercapacitors, Li-ion batteries, solar cells, and fuel cells. Within the field of fuel cell research, MOFs are being studied as potential electrolyte materials and electrode catalysts that could someday replace traditional polymer membranes and Pt catalysts, respectively.
As electrolyte materials, the inclusion of MOFs seems at first counter-intuitive. Fuel cell membranes generally have low porosity to prevent fuel crossover and loss of voltage between the anode and cathode. Additionally, membranes tend to have low crystallinity because the transport of ions is more favorable in disordered materials. On the other hand, pores can be filled with additional ion carriers that ultimately enhance the ionic conductivity of the system and high crystallinity makes the design process less complex.
The general requirements of a good electrolyte for PEMFCs are: high proton conductivity (>10−2 S/cm for practical applications) to enable proton transport between electrodes, good chemical and thermal stability under fuel cell operating conditions (environmental humidity, variable temperatures, resistance to poisonous species, etc.), low cost, ability to be processed into thin films, and overall compatibility with other cell components. While polymeric materials are currently the preferred choice of proton-conducting membrane, they require humidification for adequate performance and can sometimes physically degrade due to hydration effects, thereby causing losses of efficiency. As mentioned, Nafion is also limited by a dehydration temperature of < 100 °C, which can lead to slower reaction kinetics, poor cost efficiency, and CO poisoning of Pt electrode catalysts. Conversely, MOFs have shown encouraging proton conductivities in both low and high temperature regimes as well as over a wide range of humidity conditions. Below 100 °C and under hydration, the presence of hydrogen bonding and solvent water molecules aids in proton transport, whereas anhydrous conditions are suitable for temperatures above 100 °C. MOFs also have the distinct advantage of exhibiting proton conductivity by the framework itself in addition to the inclusion of charge carriers (i.e., water, acids, etc.) into their pores.
A low temperature example is work by Kitagawa, et al. who used a two-dimensional oxalate-bridged anionic layer framework as the host and introduced ammonium cations and adipic acid molecules into the pores to increase proton concentration. The result was one of the first instances of a MOF showing "superprotonic" conductivity (8 × 10−3 S/cm) at 25 °C and 98% relative humidity (RH). They later found that increasing the hydrophilic nature of the cations introduced into the pores could enhance proton conductivity even more. In this low temperature regime that is dependent on degree of hydration, it has also been shown that proton conductivity is heavily dependent on humidity levels.
A high temperature anhydrous example is PCMOF2, which consists of sodium ions coordinated to a trisulfonated benzene derivative. To improve performance and allow for higher operating temperatures, water can be replaced as the proton carrier by less volatile imidazole or triazole molecules within the pores. The maximum temperature achieved was 150 °C with an optimum conductivity of 5 × 10−4 S/cm, which is lower than other current electrolyte membranes. However, this model holds promise for its temperature regime, anhydrous conditions, and ability to control the quantity of guest molecules within the pores, all of which allowed for the tunability of proton conductivity. Additionally, the triazole-loaded PCMOF2 was incorporated into a H2/air membrane-electrode assembly and achieved an open circuit voltage of 1.18 V at 100 °C that was stable for 72 hours and managed to remain gas tight throughout testing. This was the first instance that proved MOFs could actually be implemented into functioning fuel cells, and the moderate potential difference showed that fuel crossover due to porosity was not an issue.
To date, the highest proton conductivity achieved for a MOF electrolyte is 4.2 × 10−2 S/cm at 25 °C under humid conditions (98% RH), which is competitive with Nafion. Some recent experiments have even successfully produced thin-film MOF membranes instead of the traditional bulk samples or single crystals, which is crucial for their industrial applicability. Once MOFs are able to consistently achieve sufficient conductivity levels, mechanical strength, water stability, and simple processing, they have the potential to play an important role in PEMFCs in the near future.
MOFs have also been targeted as potential replacements of platinum group metal (PGM) materials for electrode catalysts, although this research is still in the early stages of development. In PEMFCs, the oxygen reduction reaction (ORR) at the Pt cathode is significantly slower than the fuel oxidation reaction at the anode, and thus non-PGM and metal-free catalysts are being investigated as alternatives. The high volumetric density, large pore surface areas, and openness of metal-ion sites in MOFs make them ideal candidates for catalyst precursors. Despite promising catalytic abilities, the durability of these proposed MOF-based catalysts is currently less than desirable and the ORR mechanism in this context is still not completely understood.
Catalyst research.
Much of the current research on catalysts for PEM fuel cells can be classified as having one of the following main objectives:
Examples of these approaches are given in the following sections.
Increasing catalytic activity.
As mentioned above, platinum is by far the most effective element used for PEM fuel cell catalysts, and nearly all current PEM fuel cells use platinum particles on porous carbon supports to catalyze both hydrogen oxidation and oxygen reduction. However, due to their high cost, current Pt/C catalysts are not feasible for commercialization. The U.S. Department of Energy estimates that platinum-based catalysts will need to use roughly one-fourth as much platinum as is used in current PEM fuel cell designs in order to represent a realistic alternative to internal combustion engines. Consequently, one main goal of catalyst design for PEM fuel cells is to increase the catalytic activity of platinum by a factor of four, so that only one-fourth as much of the precious metal is necessary to achieve similar performance.
One method of increasing the performance of platinum catalysts is to optimize the size and shape of the platinum particles. Decreasing the particles' size alone increases the total surface area of catalyst available to participate in reactions per volume of platinum used, but recent studies have demonstrated additional ways to make further improvements to catalytic performance. For example, one study reports that high-index facets of platinum nanoparticles (that is Miller indexes with large integers, such as Pt (730)) provide a greater density of reactive sites for oxygen reduction than typical platinum nanoparticles.
Since the most common and effective catalyst, platinum, is extremely expensive, alternative processing is necessary to maximize surface area and minimize loading. Deposition of nanosized Pt particles onto carbon powder (Pt/C) provides a large Pt surface area, while the carbon allows for electrical connection between the catalyst and the rest of the cell. Platinum is so effective because it has high activity and bonds to the hydrogen just strongly enough to facilitate electron transfer but not so strongly as to inhibit the hydrogen from continuing to move around the cell. However, platinum is less active in the cathode oxygen reduction reaction. This necessitates the use of more platinum, increasing the cell's cost and thus reducing its feasibility. Many potential catalyst choices are ruled out because of the extreme acidity of the cell.
The most effective ways of achieving nanoscale Pt on carbon powder, which is currently the best option, are vacuum deposition, sputtering, and electrodeposition. The platinum particles are deposited onto carbon paper that is permeated with PTFE. However, there is an optimal thinness to this catalyst layer, which sets a lower limit on cost. Below 4 nm, Pt will form islands on the paper, limiting its activity. Above this thickness, the Pt will coat the carbon and be an effective catalyst. To further complicate things, Nafion cannot be infiltrated beyond 10 μm, so using more Pt than this is an unnecessary expense. Thus the amount and shape of the catalyst are limited by the constraints of the other materials.
A second method of increasing the catalytic activity of platinum is to alloy it with other metals. For example, it was recently shown that the Pt3Ni(111) surface has a higher oxygen reduction activity than pure Pt(111) by a factor of ten. The authors attribute this dramatic performance increase to modifications to the electronic structure of the surface, reducing its tendency to bond to oxygen-containing ionic species present in PEM fuel cells and hence increasing the number of available sites for oxygen adsorption and reduction.
Further efficiencies can be realized by using an ultrasonic nozzle to apply the platinum catalyst to the electrolyte layer or to carbon paper under atmospheric conditions, resulting in a high-efficiency spray. Studies have shown that because of the uniform droplet size created by this type of spray, the high transfer efficiency of the technology, the non-clogging nature of the nozzle, and the fact that the ultrasonic energy de-agglomerates the suspension just before atomization, fuel cell MEAs manufactured this way have greater homogeneity in the final MEA, and the gas flow through the cell is more uniform, maximizing the efficiency of the platinum in the MEA.
Recent studies using inkjet printing to deposit the catalyst over the membrane have also shown high catalyst utilization due to the reduced thickness of the deposited catalyst layers.
Very recently, a new class of ORR electrocatalysts has been introduced in the case of Pt-M (M = Fe or Co) systems with an ordered intermetallic core encapsulated within a Pt-rich shell. These intermetallic core-shell (IMCS) nanocatalysts were found to exhibit enhanced activity and, most importantly, extended durability compared to many previous designs. While the observed enhancement in activity is ascribed to a strained lattice, the authors report that their findings on the degradation kinetics establish that the extended catalytic durability is attributable to sustained atomic order.
Reducing poisoning.
The other popular approach to improving catalyst performance is to reduce its sensitivity to impurities in the fuel source, especially carbon monoxide (CO). Presently, pure hydrogen gas is becoming economical to mass-produce by electrolysis. However, at the moment most hydrogen gas is produced by steam reforming light hydrocarbons, a process which produces a mixture of gases that also contains CO (1–3%), CO2 (19–25%), and N2 (25%). Even tens of parts per million of CO can poison a pure platinum catalyst, so increasing platinum's resistance to CO is an active area of research.
For example, one study reported that cube-shaped platinum nanoparticles with (100) facets displayed a fourfold increase in oxygen reduction activity compared to randomly faceted platinum nanoparticles of similar size. The authors concluded that the (111) facets of the randomly shaped nanoparticles bonded more strongly to sulfate ions than the (100) facets, reducing the number of catalytic sites open to oxygen molecules. The nanocubes they synthesized, in contrast, had almost exclusively (100) facets, which are known to interact with sulfate more weakly. As a result, a greater fraction of the surface area of those particles was available for the reduction of oxygen, boosting the catalyst's oxygen reduction activity.
In addition, researchers have been investigating ways of reducing the CO content of hydrogen fuel before it enters a fuel cell as a possible way to avoid poisoning the catalysts. One recent study revealed that ruthenium-platinum core–shell nanoparticles are particularly effective at oxidizing CO to form CO2, a much less harmful fuel contaminant. The mechanism that produces this effect is conceptually similar to that described for Pt3Ni above: the ruthenium core of the particle alters the electronic structure of the platinum surface, rendering it better able to catalyze the oxidation of CO.
Lowering cost.
The challenge for the viability of PEM fuel cells today still remains in their cost and stability. The high cost can in large part be attributed to the use of the precious metal platinum in the catalyst layer of PEM cells. The electrocatalyst currently accounts for nearly half of the fuel cell stack cost. Although the Pt loading of PEM fuel cells has been reduced by two orders of magnitude over the past decade, further reduction is necessary to make the technology economically viable for commercialization. Whereas some research efforts aim to address this issue by improving the electrocatalytic activity of Pt-based catalysts, an alternative is to eliminate the use of Pt altogether by developing a non-platinum-group-metal (non-PGM) cathode catalyst whose performance rivals that of Pt-based technologies. The U.S. Department of Energy has set milestones for the development of fuel cells, targeting a durability of 5000 hours and a non-PGM catalyst ORR volumetric activity of 300 A cm−3.
Promising alternatives to Pt-based catalysts are metal/nitrogen/carbon catalysts (M/N/C catalysts). To achieve high power density, or output of power over the surface area of the cell, a volumetric activity of at least 1/10 that of Pt-based catalysts must be met, along with good mass transport properties. While M/N/C catalysts still demonstrate poorer volumetric activities than Pt-based catalysts, the reduced cost of such catalysts allows for greater loading to compensate. However, increasing the loading of M/N/C catalysts also renders the catalytic layer thicker, impairing its mass transport properties. In other words, H2, O2, protons, and electrons have greater difficulty migrating through the catalytic layer, decreasing the voltage output of the cell. While high microporosity of the M/N/C catalytic network results in high volumetric activity, improved mass transport properties are instead associated with the macroporosity of the network. These M/N/C materials are synthesized using high-temperature pyrolysis and other high-temperature treatments of precursors containing the metal, nitrogen, and carbon.
Recently, researchers have developed a Fe/N/C catalyst derived from iron (II) acetate (FeAc), phenanthroline (Phen), and a metal-organic-framework (MOF) host. The MOF is a Zn(II) zeolitic imidazolate framework (ZIF) called ZIF-8, which demonstrates a high microporous surface area and high nitrogen content conducive to ORR activity. The power density of the FeAc/Phen/ZIF-8-catalyst was found to be 0.75 W cm−2 at 0.6 V. This value is a significant improvement over the maximal 0.37 W cm−2 power density of previous M/N/C-catalysts and is much closer to matching the typical value of 1.0–1.2 W cm−2 for Pt-based catalysts with a Pt loading of 0.3 mg cm−2. The catalyst also demonstrated a volumetric activity of 230 A·cm−3, the highest value for non-PGM catalysts to date, approaching the U.S. Department of Energy milestone.
While the power density achieved by the novel FeAc/Phen/ZIF-8-catalyst is promising, its durability remains inadequate for commercial application. It is reported that the best durability exhibited by this catalyst still had a 15% drop in current density over 100 hours in H2/air. Hence while the Fe-based non-PGM catalysts rival Pt-based catalysts in their electrocatalytic activity, there is still much work to be done in understanding their degradation mechanisms and improving their durability.
Applications.
The major application of PEM fuel cells focuses on transportation, primarily because of their potential impact on the environment, e.g. the control of the emission of greenhouse gases (GHG). Other applications include distributed/stationary and portable power generation. Most major motor companies work solely on PEM fuel cells due to their high power density and excellent dynamic characteristics as compared with other types of fuel cells. Due to their light weight, PEMFCs are most suited for transportation applications. PEMFCs for buses, which use compressed hydrogen for fuel, can operate at up to 40% efficiency. Generally PEMFCs are implemented on buses rather than smaller cars because of the available volume to house the system and store the fuel. Technical issues for transportation involve incorporation of PEMs into current vehicle technology and updating energy systems. Full fuel cell vehicles are not advantageous if hydrogen is sourced from fossil fuels; however, they become beneficial when implemented as hybrids. There is potential for PEMFCs to be used for stationary power generation, where they provide 5 kW at 30% efficiency; however, they run into competition with other types of fuel cells, mainly SOFCs and MCFCs. Whereas PEMFCs generally require high-purity hydrogen for operation, other fuel cell types can run on methane and are thus more flexible systems. Therefore, PEMFCs are best for small-scale systems until economically scalable pure hydrogen is available. Furthermore, PEMFCs have the possibility of replacing batteries for portable electronics, though integration of the hydrogen supply is a technical challenge, particularly without a convenient location to store it within the device.
History.
Before the invention of PEM fuel cells, existing fuel cell types such as solid-oxide fuel cells were only applied in extreme conditions. Such fuel cells also required very expensive materials and could only be used for stationary applications due to their size. These issues were addressed by the PEM fuel cell. The PEM fuel cell was invented in the early 1960s by Willard Thomas Grubb and Leonard Niedrach of General Electric. Initially, sulfonated polystyrene membranes were used for electrolytes, but they were replaced in 1966 by Nafion ionomer, which proved to be superior in performance and durability to sulfonated polystyrene.
PEM fuel cells were used in the NASA Gemini series of spacecraft, but they were replaced by alkaline fuel cells in the Apollo program and in the Space Shuttle. General Electric continued working on PEM cells and in the mid-1970s developed PEM water electrolysis technology for undersea life support, leading to the US Navy Oxygen Generating Plant. The British Royal Navy adopted this technology in the early 1980s for its submarine fleet. In the late 1980s and early 1990s, Los Alamos National Lab and Texas A&M University experimented with ways to reduce the amount of platinum required for PEM cells.
Parallel with Pratt and Whitney Aircraft, General Electric developed the first proton exchange membrane fuel cells (PEMFCs) for the Gemini space missions in the early 1960s. The first mission to use PEMFCs was Gemini V. However, the Apollo space missions and subsequent Apollo-Soyuz, Skylab and Space Shuttle missions used fuel cells based on Bacon's design, developed by Pratt and Whitney Aircraft.
Extremely expensive materials were used and the fuel cells required very pure hydrogen and oxygen. Early fuel cells tended to require inconveniently high operating temperatures that were a problem in many applications. However, fuel cells were seen to be desirable due to the large amounts of fuel available (hydrogen and oxygen).
Despite their success in space programs, fuel cell systems were limited to space missions and other special applications, where high cost could be tolerated. It was not until the late 1980s and early 1990s that fuel cells became a realistic option for a wider application base. Several pivotal innovations, such as low platinum catalyst loading and thin-film electrodes, drove the cost of fuel cells down, making development of PEMFC systems more realistic. However, there is significant debate as to whether hydrogen fuel cells will be a realistic technology for use in automobiles or other vehicles. (See hydrogen economy.) A large part of PEMFC production is for the Toyota Mirai. The US Department of Energy estimated a 2016 price of $53/kW if 500,000 units per year were made.
| [
{
"math_id": 0,
"text": "\\eta = \\frac{\\Delta G}{\\Delta H} = 1 - \\frac{T\\Delta S}{\\Delta H}"
}
] | https://en.wikipedia.org/wiki?curid=1049596 |
1049636 | Solid oxide fuel cell | Fuel cell that produces electricity by oxidization
A solid oxide fuel cell (or SOFC) is an electrochemical conversion device that produces electricity directly from oxidizing a fuel. Fuel cells are characterized by their electrolyte material; the SOFC has a solid oxide or ceramic electrolyte.
Advantages of this class of fuel cells include high combined heat and power efficiency, long-term stability, fuel flexibility, low emissions, and relatively low cost. The largest disadvantage is the high operating temperature which results in longer start-up times and mechanical and chemical compatibility issues.
Introduction.
Solid oxide fuel cells are a class of fuel cells characterized by the use of a solid oxide material as the electrolyte. SOFCs use a solid oxide electrolyte to conduct negative oxygen ions from the cathode to the anode. The electrochemical oxidation of the hydrogen, carbon monoxide or other organic intermediates by oxygen ions thus occurs on the anode side. More recently, proton-conducting SOFCs (PC-SOFC) are being developed which transport protons instead of oxygen ions through the electrolyte with the advantage of being able to be run at lower temperatures than traditional SOFCs.
They operate at very high temperatures, typically between 600 and 1,000 °C. At these temperatures, SOFCs do not require expensive platinum-group-metal catalysts, as are currently necessary for lower-temperature fuel cells such as PEMFCs, and are not vulnerable to carbon monoxide catalyst poisoning. However, vulnerability to sulfur poisoning has been widely observed, and the sulfur must be removed before entering the cell. For fuels that are of lower quality, such as gasified biomass, coal, or biogas, the fuel processing becomes increasingly complex and, consequently, more expensive. The gasification process, which transforms the raw material into a gaseous state suitable for fuel cells, can generate significant quantities of aromatic compounds. These compounds include smaller molecules like methane and toluene, as well as larger polyaromatic and short-chain hydrocarbon compounds. These substances can lead to carbon buildup in SOFCs. Moreover, the expenses associated with reforming and desulfurization are comparable in magnitude to the cost of the fuel cell itself. These factors become especially critical for systems with lower power output or greater portability requirements.
Solid oxide fuel cells have a wide variety of applications, from use as auxiliary power units in vehicles to stationary power generation with outputs from 100 W to 2 MW. In 2009, the Australian company Ceramic Fuel Cells successfully achieved an SOFC device efficiency of up to the previously theoretical mark of 60%. The higher operating temperature makes SOFCs suitable candidates for application with heat-engine energy-recovery devices or combined heat and power, which further increases overall fuel efficiency.
Because of these high temperatures, light hydrocarbon fuels, such as methane, propane, and butane can be internally reformed within the anode. SOFCs can also be fueled by externally reforming heavier hydrocarbons, such as gasoline, diesel, jet fuel (JP-8) or biofuels. Such reformates are mixtures of hydrogen, carbon monoxide, carbon dioxide, steam and methane, formed by reacting the hydrocarbon fuels with air or steam in a device upstream of the SOFC anode. SOFC power systems can increase efficiency by using the heat given off by the exothermic electrochemical oxidation within the fuel cell for endothermic steam reforming process. Additionally, solid fuels such as coal and biomass may be gasified to form syngas which is suitable for fueling SOFCs in integrated gasification fuel cell power cycles.
Thermal expansion demands a uniform and well-regulated heating process at startup. SOFC stacks with planar geometry require on the order of an hour to be heated to operating temperature. Micro-tubular fuel cell design geometries promise much faster start up times, typically in the order of minutes.
Unlike most other types of fuel cells, SOFCs can have multiple geometries. The planar fuel cell design geometry is the typical sandwich-type geometry employed by most types of fuel cells, where the electrolyte is sandwiched in between the electrodes. SOFCs can also be made in tubular geometries where either air or fuel is passed through the inside of the tube and the other gas is passed along the outside of the tube. The tubular design is advantageous because it is much easier to seal air from the fuel. However, the planar design currently performs better than the tubular design because it has a comparatively lower resistance. Other geometries of SOFCs include modified planar fuel cell designs (MPC or MPSOFC), where a wave-like structure replaces the traditional flat configuration of the planar cell. Such designs are highly promising because they share the advantages of both planar cells (low resistance) and tubular cells.
Operation.
A solid oxide fuel cell is made up of four layers, three of which are ceramics (hence the name). A single cell consisting of these four layers stacked together is typically only a few millimeters thick. Hundreds of these cells are then connected in series to form what most people refer to as an "SOFC stack". The ceramics used in SOFCs do not become electrically and ionically active until they reach very high temperature and as a consequence, the stacks have to run at temperatures ranging from 500 to 1,000 °C. Reduction of oxygen into oxygen ions occurs at the cathode. These ions can then diffuse through the solid oxide electrolyte to the anode where they can electrochemically oxidize the fuel. In this reaction, a water byproduct is given off as well as two electrons. These electrons then flow through an external circuit where they can do work. The cycle then repeats as those electrons enter the cathode material again.
Balance of plant.
Most of the downtime of an SOFC stems from the mechanical balance of plant (the air preheater, prereformer, afterburner, water heat exchanger, and anode tail gas oxidizer) and the electrical balance of plant (power electronics, hydrogen sulfide sensor, and fans). Internal reforming leads to a large decrease in the balance of plant costs in designing a full system.
Anode.
The ceramic anode layer must be very porous to allow the fuel to flow towards the electrolyte. Consequently, granular matter is often selected for anode fabrication procedures. Like the cathode, it must conduct electrons, with ionic conductivity a definite asset. The anode is commonly the thickest and strongest layer in each individual cell, because it has the smallest polarization losses, and is often the layer that provides the mechanical support. Electrochemically speaking, the anode's job is to use the oxygen ions that diffuse through the electrolyte to oxidize the hydrogen fuel.
The oxidation reaction between the oxygen ions and the hydrogen produces heat as well as water and electricity.
If the fuel is a light hydrocarbon, for example methane, another function of the anode is to act as a catalyst for steam reforming the fuel into hydrogen. This provides another operational benefit to the fuel cell stack because the reforming reaction is endothermic, which cools the stack internally. The most common material used is a cermet made of nickel mixed with the ceramic material that is used for the electrolyte in that particular cell, typically YSZ (yttria-stabilized zirconia). These nanomaterial-based catalysts help stop the grain growth of nickel. Larger grains of nickel would reduce the contact area through which ions can be conducted, which would lower the cell's efficiency. Perovskite materials (mixed ionic/electronic conducting ceramics) have been shown to produce a power density of 0.6 W/cm2 at 0.7 V at 800 °C, which is possible because they have the ability to overcome a larger activation energy.
Chemical reaction:
H2 + O2− → H2O + 2 e−
However, there are a few disadvantages associated with YSZ as anode material. Ni coarsening, carbon deposition, reduction-oxidation instability, and sulfur poisoning are the main obstacles limiting the long-term stability of Ni-YSZ. Ni coarsening refers to the growth in grain size of the Ni particles dispersed in the YSZ, which decreases the surface area available for the catalytic reaction. Carbon deposition occurs when carbon atoms, formed by hydrocarbon pyrolysis or CO disproportionation, deposit on the Ni catalytic surface. Carbon deposition becomes important especially when hydrocarbon fuels are used, i.e. methane or syngas. The high operating temperature of the SOFC and the oxidizing environment facilitate the oxidation of the Ni catalyst through the reaction Ni + 1⁄2 O2 = NiO. The oxidation of Ni reduces the electrocatalytic activity and conductivity. Moreover, the density difference between Ni and NiO causes a volume change on the anode surface, which could potentially lead to mechanical failure. Sulfur poisoning arises when fuels such as natural gas, gasoline, or diesel are used. Again, due to the high affinity between sulfur compounds (H2S, (CH3)2S) and the metal catalyst, even the smallest impurities of sulfur compounds in the feed stream can deactivate the Ni catalyst on the YSZ surface.
Current research is focused on reducing or replacing the Ni content in the anode to improve long-term performance. Modified Ni-YSZ containing other materials including CeO2, Y2O3, La2O3, MgO, TiO2, Ru, Co, etc. has been developed to resist sulfur poisoning, but the improvement is limited due to rapid initial degradation. Copper-based cermet anodes are considered a solution to carbon deposition because copper is inert to carbon and stable under typical SOFC oxygen partial pressures (pO2). Cu-Co bimetallic anodes in particular show strong resistance to carbon deposition after exposure to pure CH4 at 800 °C. Cu-CeO2-YSZ exhibits a higher electrochemical oxidation rate than Ni-YSZ when running on CO and syngas, and can achieve even higher performance using CO than H2 after adding a cobalt co-catalyst. Oxide anodes including zirconia-based fluorites and perovskites are also used to replace Ni-ceramic anodes for carbon resistance. Chromites, e.g. La0.8Sr0.2Cr0.5Mn0.5O3 (LSCM), have been used as anodes and exhibit performance comparable to Ni–YSZ cermet anodes. LSCM is further improved by impregnating Cu and sputtering Pt as the current collector.
Electrolyte.
The electrolyte is a dense layer of ceramic that conducts oxygen ions. Its electronic conductivity must be kept as low as possible to prevent losses from leakage currents. The high operating temperatures of SOFCs allow the kinetics of oxygen ion transport to be sufficient for good performance. However, as the operating temperature approaches the lower limit for SOFCs at around 600 °C, the electrolyte begins to have large ionic transport resistances that affect performance. Popular electrolyte materials include yttria-stabilized zirconia (YSZ) (often the 8% form, 8YSZ), scandia-stabilized zirconia (ScSZ) (usually 9 mol% Sc2O3 – 9ScSZ) and gadolinium-doped ceria (GDC). The electrolyte material has a crucial influence on cell performance. Detrimental reactions between YSZ electrolytes and modern cathodes such as lanthanum strontium cobalt ferrite (LSCF) have been found, and can be prevented by thin (<100 nm) ceria diffusion barriers.
If the conductivity for oxygen ions in SOFCs can remain high even at lower temperatures (the current target in research is ~500 °C), material choices for SOFCs will broaden and many existing problems can potentially be solved. Certain processing techniques, such as thin film deposition, can help solve this problem with existing materials, for example by reducing the electrolyte thickness and hence the ionic conduction path length.
Cathode.
The cathode, or air electrode, is a thin porous layer on the electrolyte where oxygen reduction takes place. The overall reaction is written in Kröger-Vink Notation as follows:
formula_0
Cathode materials must be, at a minimum, electrically conductive. Currently, lanthanum strontium manganite (LSM) is the cathode material of choice for commercial use because of its compatibility with doped zirconia electrolytes. Mechanically, it has a coefficient of thermal expansion similar to that of YSZ and thus limits stress buildup from CTE mismatch. Also, LSM has low levels of chemical reactivity with YSZ, which extends the lifetime of the materials. Unfortunately, LSM is a poor ionic conductor, and so the electrochemically active reaction is limited to the triple phase boundary (TPB) where the electrolyte, air and electrode meet. LSM works well as a cathode at high temperatures, but its performance quickly falls as the operating temperature is lowered below 800 °C. In order to increase the reaction zone beyond the TPB, a potential cathode material must be able to conduct both electrons and oxygen ions. Composite cathodes consisting of LSM and YSZ have been used to increase this triple phase boundary length. Mixed ionic/electronic conducting (MIEC) ceramics, such as perovskite LSCF, are also being researched for use in intermediate-temperature SOFCs, as they are more active and can make up for the increase in the activation energy of the reaction.
Interconnect.
The interconnect can be either a metallic or ceramic layer that sits between each individual cell. Its purpose is to connect each cell in series, so that the electricity each cell generates can be combined. Because the interconnect is exposed to both the oxidizing and reducing side of the cell at high temperatures, it must be extremely stable. For this reason, ceramics have been more successful in the long term than metals as interconnect materials. However, these ceramic interconnect materials are very expensive when compared to metals. Nickel- and steel-based alloys are becoming more promising as lower temperature (600–800 °C) SOFCs are developed. The material of choice for an interconnect in contact with Y8SZ is a metallic 95Cr-5Fe alloy. Ceramic-metal composites called "cermet" are also under consideration, as they have demonstrated thermal stability at high temperatures and excellent electrical conductivity.
Polarizations.
Polarizations, or overpotentials, are losses in voltage due to imperfections in materials, microstructure, and design of the fuel cell. Polarizations result from ohmic resistance of oxygen ions conducting through the electrolyte (iRΩ), electrochemical activation barriers at the anode and cathode, and finally concentration polarizations due to inability of gases to diffuse at high rates through the porous anode and cathode (shown as ηA for the anode and ηC for cathode). The cell voltage can be calculated using the following equation:
formula_1
where "V" is the cell operating voltage, "E"0 is the reversible (Nernst) potential of the cell, "i" is the cell current density, "R"Ω is the ohmic resistance of the cell, and ηA and ηC are the polarization losses at the anode and cathode, respectively.
In SOFCs, it is often important to focus on the ohmic and concentration polarizations, since the high operating temperatures result in little activation polarization. However, as the lower limit of SOFC operating temperature is approached (~600 °C), these polarizations do become important.
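A minimal numeric sketch of this voltage balance, assuming the cell voltage is simply the Nernst potential minus the ohmic drop and the two electrode polarizations described above (the numerical values are placeholders for illustration, not measured data):

```python
def cell_voltage(e_nernst, i, r_ohmic, eta_anode, eta_cathode):
    """Operating voltage = reversible potential minus the loss terms.

    e_nernst    : reversible (Nernst) potential, V
    i           : current density, A/cm^2
    r_ohmic     : area-specific ohmic resistance, ohm*cm^2
    eta_anode   : anode polarization (activation + concentration), V
    eta_cathode : cathode polarization (activation + concentration), V
    """
    return e_nernst - i * r_ohmic - eta_anode - eta_cathode

# Placeholder operating point:
print(cell_voltage(e_nernst=1.0, i=0.5, r_ohmic=0.15,
                   eta_anode=0.05, eta_cathode=0.12))  # ~0.76 V
```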
The above-mentioned equation is used for determining the SOFC voltage (in fact, for fuel cell voltage in general). This approach results in good agreement with particular experimental data (for which adequate factors were obtained) and poor agreement for working parameters other than the original experimental ones. Moreover, most of the equations used require the addition of numerous factors which are difficult or impossible to determine. This makes it very difficult to optimize the SOFC working parameters or to select the design architecture configuration. Because of these circumstances, a few other equations have been proposed:
formula_6
where:
This method was validated and found to be suitable for optimization and sensitivity studies in plant-level modelling of various systems with solid oxide fuel cells. With this mathematical description it is possible to account for different properties of the SOFC. There are many parameters which impact cell working conditions, e.g. electrolyte material, electrolyte thickness, cell temperature, inlet and outlet gas compositions at anode and cathode, and electrode porosity, just to name some. The flow in these systems is often calculated using the Navier–Stokes equations.
Ohmic polarization.
Ohmic losses in an SOFC result from the resistance to ionic conduction through the electrolyte and the electrical resistance to the flow of electrons in the external electrical circuit. This is inherently a materials property of the crystal structure and atoms involved. However, several approaches can be taken to maximize the ionic conductivity. Firstly, operating at higher temperatures can significantly decrease these ohmic losses. Substitutional doping methods to further refine the crystal structure and control defect concentrations can also play a significant role in increasing the conductivity. Another way to decrease ohmic resistance is to decrease the thickness of the electrolyte layer.
Ionic conductivity.
An ionic specific resistance of the electrolyte as a function of temperature can be described by the following relationship:
formula_13
where: formula_14 – electrolyte thickness, and formula_15 – ionic conductivity.
The ionic conductivity of the solid oxide is defined as follows:
formula_16
where: formula_17 and formula_18 – factors that depend on the electrolyte material, formula_19 – electrolyte temperature, and formula_3 – ideal gas constant.
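The two relations above can be combined into a short numerical sketch; the pre-exponential factor and activation energy below are rough, assumed YSZ-like values, not measured data.

```python
import math

# Sketch combining r1 = delta / sigma with sigma = sigma_0 * exp(-E / (R*T)).
# sigma_0 and E are rough, assumed values; delta is a 10 um electrolyte.
R = 8.314        # J/(mol*K), ideal gas constant
sigma_0 = 1.0e3  # S/cm, pre-exponential factor (assumed)
E = 9.0e4        # J/mol, activation energy (assumed)
delta = 10e-4    # cm, electrolyte thickness

for T in (873, 973, 1073):  # K
    sigma = sigma_0 * math.exp(-E / (R * T))
    r1 = delta / sigma
    print(f"T = {T} K: sigma = {sigma:.2e} S/cm, r1 = {r1:.3f} ohm*cm^2")
```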
Concentration polarization.
The concentration polarization is the result of practical limitations on mass transport within the cell and represents the voltage loss due to spatial variations in reactant concentration at the chemically active sites. This situation can be caused when the reactants are consumed by the electrochemical reaction faster than they can diffuse into the porous electrode, and can also be caused by variation in bulk flow composition. The latter is due to the fact that the consumption of reacting species in the reactant flows causes a drop in reactant concentration as it travels along the cell, which causes a drop in the local potential near the tail end of the cell.
The concentration polarization occurs in both the anode and cathode. The anode can be particularly problematic, as the oxidation of the hydrogen produces steam, which further dilutes the fuel stream as it travels along the length of the cell. This polarization can be mitigated by reducing the reactant utilization fraction or increasing the electrode porosity, but these approaches each have significant design trade-offs.
Activation polarization.
The activation polarization is the result of the kinetics involved with the electrochemical reactions. Each reaction has a certain activation barrier that must be overcome in order to proceed and this barrier leads to the polarization. The activation barrier is the result of many complex electrochemical reaction steps where typically the rate limiting step is responsible for the polarization. The polarization equation shown below is found by solving the Butler–Volmer equation in the high current density regime (where the cell typically operates), and can be used to estimate the activation polarization:
formula_20
where: formula_3 – gas constant, formula_21 – operating temperature, formula_22 – electron transfer coefficient, formula_23 – number of electrons associated with the electrochemical reaction, formula_24 – Faraday's constant, formula_25 – operating current, and formula_26 – exchange current density.
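A small numerical sketch of this Tafel-like expression is given below, with assumed values for the transfer coefficient and exchange current density.

```python
import math

# Sketch of the high-current-density activation polarization
# eta_act = R*T / (beta*z*F) * ln(i / i0). Values are illustrative assumptions.
R = 8.314   # J/(mol*K)
F = 96485   # C/mol, Faraday constant
T = 1073    # K (about 800 degC)
beta = 0.5  # electron transfer coefficient (assumed)
z = 2       # electrons per reaction (H2 oxidation)
i = 0.5     # A/cm^2, operating current density
i0 = 0.1    # A/cm^2, exchange current density (assumed)

eta_act = R * T / (beta * z * F) * math.log(i / i0)
print(f"eta_act = {eta_act * 1000:.0f} mV")  # ~149 mV with these numbers
```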
The polarization can be modified by microstructural optimization. The Triple Phase Boundary (TPB) length, which is the length where porous, ionic and electronically conducting pathways all meet, directly relates to the electrochemically active length in the cell. The larger the length, the more reactions can occur and thus the less the activation polarization. Optimization of TPB length can be done by processing conditions to affect microstructure or by materials selection to use a mixed ionic/electronic conductor to further increase TPB length.
Mechanical properties.
Current SOFC research focuses heavily on optimizing cell performance while maintaining acceptable mechanical properties, because optimized performance often compromises mechanical properties. Nevertheless, mechanical failure represents a significant problem for SOFC operation. The presence of various kinds of load and thermal stress during operation requires high mechanical strength. Additional stresses associated with changes in the gas atmosphere, leading to reduction or oxidation, also cannot be avoided in prolonged operation. When electrode layers delaminate or crack, conduction pathways are lost, leading to a redistribution of current density and local changes in temperature. These local temperature deviations, in turn, lead to increased thermal strains, which propagate cracks and delamination. Additionally, when electrolytes crack, the separation of fuel and air is no longer guaranteed, which further endangers the continuous operation of the cell.
Since SOFCs require materials with high oxygen conductivity, thermal stresses provide a significant problem. The coefficient of thermal expansion (CTE) in mixed ionic-electronic perovskites can be directly related to oxygen vacancy concentration, which is also related to ionic conductivity. Thus, thermal stresses increase in direct correlation with improved cell performance. Additionally, however, the temperature dependence of oxygen vacancy concentration means that the CTE is not a linear property, which further complicates measurements and predictions.
Just as thermal stresses increase as cell performance improves through improved ionic conductivity, the fracture toughness of the material also decreases as cell performance increases. This is because, to increase reaction sites, porous ceramics are preferable. However, as shown in the equation below, fracture toughness decreases as porosity increases.
formula_27
Where:
formula_28 = fracture toughness
formula_29 = fracture toughness of the non-porous structure
formula_30 = experimentally determined constant
formula_31 = porosity
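The exponential porosity dependence can be illustrated with the short sketch below; the dense-material toughness and the constant formula_30 are assumed, order-of-magnitude numbers.

```python
import math

# Sketch of K_IC = K_IC0 * exp(-b_k * p'). K_IC0 and b_k are assumed,
# order-of-magnitude values for a porous ceramic.
K_IC0 = 2.0  # MPa*m^0.5, toughness of the dense (non-porous) material
b_k = 4.0    # experimentally determined constant (assumed)

for p in (0.0, 0.2, 0.4):
    K_IC = K_IC0 * math.exp(-b_k * p)
    print(f"porosity {p:.1f}: K_IC = {K_IC:.2f} MPa*m^0.5")
```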
Thus, porosity must be carefully engineered to maximize reaction kinetics while maintaining an acceptable fracture toughness. Since fracture toughness characterizes the resistance of a material to the propagation of pre-existing cracks or pores, a potentially more useful metric is the failure stress of the material, as this depends on sample dimensions rather than crack diameter. Failure stresses in SOFCs can also be evaluated using a ring-on-ring biaxial stress test. This type of test is generally preferred, as sample edge quality does not significantly impact measurements. The determination of the sample's failure stress is shown in the equation below.
formula_32
Where:
formula_33 = failure stress of the small deformation
formula_34 = critical applied force
formula_35 = height of the sample
formula_36 = Poisson's ratio
formula_37 = diameter (sup = support ring, load = loading ring, s = sample)
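A hedged numerical sketch of this test evaluation is shown below; it uses the standard ring-on-ring form in which the load prefactor multiplies the bracketed geometric factor, and all geometry and load values are assumed.

```python
import math

# Sketch of the ring-on-ring biaxial failure stress, with the load prefactor
# multiplying the bracketed geometric factor (standard form).
# Geometry and load are assumed, illustrative values.
F_cr = 120.0    # N, critical applied force
h_s = 0.5e-3    # m, sample thickness
nu = 0.3        # Poisson's ratio
D_sup = 20e-3   # m, support-ring diameter
D_load = 10e-3  # m, loading-ring diameter
D_s = 25e-3     # m, sample diameter

geom = (1 - nu) * (D_sup**2 - D_load**2) / (2 * D_s**2) + (1 + nu) * math.log(D_sup / D_load)
sigma_cr = 3 * F_cr / (2 * math.pi * h_s**2) * geom
print(f"sigma_cr = {sigma_cr / 1e6:.0f} MPa")  # ~245 MPa with these assumed numbers
```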
However, this equation is not valid for deflections exceeding one half of the sample thickness, making it less applicable for thin samples, which are of great interest in SOFCs. Therefore, while this method does not require knowledge of crack or pore size, it must be used with great caution and is more applicable to support layers in SOFCs than to active layers. In addition to failure stress and fracture toughness, creep poses another great problem for modern fuel cell designs that favor mixed ionic-electronic conductors (MIECs), as MIEC electrodes often operate at temperatures exceeding half of their melting temperature. As a result, diffusion creep must also be considered.
formula_38
Where:
formula_39 = equivalent creep strain
formula_37 = Diffusion coefficient
formula_40 = temperature
formula_41 = kinetic constant
formula_42 = equivalent stress (e.g. von Mises)
formula_43 = creep stress exponential factor
formula_44 = particle size exponent (2 for Nabarro–Herring creep, 3 for Coble creep)
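The grain-size sensitivity of this expression can be checked with the short sketch below, which normalizes the prefactor to 1 so that only relative rates are meaningful.

```python
# Sketch of the grain-size dependence of diffusion creep,
# eps_dot ~ (k0 * D / T) * sigma_eq**m / d_grain**n, with n = 2 for
# Nabarro-Herring creep and n = 3 for Coble creep.

def creep_rate(sigma_eq, d_grain, m=1, n=2, prefactor=1.0):
    """Relative creep strain rate in arbitrary units."""
    return prefactor * sigma_eq**m / d_grain**n

for n, label in ((2, "Nabarro-Herring"), (3, "Coble")):
    ratio = creep_rate(20e6, 0.5e-6, n=n) / creep_rate(20e6, 1.0e-6, n=n)
    print(f"{label}: halving the grain size multiplies the creep rate by {ratio:.0f}x")
```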
To properly model creep strain rates, knowledge of the microstructure is therefore of significant importance. Due to the difficulty of mechanically testing SOFCs at high temperatures, and due to the microstructural evolution of SOFCs over their operating lifetime resulting from grain growth and coarsening, the actual creep behavior of SOFCs is currently not completely understood.
Target.
DOE target requirements are 40,000 hours of service for stationary fuel cell applications and greater than 5,000 hours for transportation systems (fuel cell vehicles) at a factory cost of $40/kW for a 10 kW coal-based system without additional requirements. Lifetime effects (phase stability, thermal expansion compatibility, element migration, conductivity and aging) must be addressed. The Solid State Energy Conversion Alliance 2008 (interim) target for overall degradation per 1,000 hours is 4.0%.
Research.
Research is now going in the direction of lower-temperature SOFCs (600 °C). Low-temperature systems can reduce costs by reducing insulation, materials, start-up and degradation-related costs. With higher operating temperatures, the temperature gradient increases the severity of thermal stresses, which affects materials cost and the life of the system. An intermediate-temperature system (650–800 °C) would enable the use of cheaper metallic materials with better mechanical properties and thermal conductivity. New developments in nano-scale electrolyte structures have been shown to bring operating temperatures down to around 350 °C, which would enable the use of even cheaper steel and elastomeric/polymeric components.
Lowering operating temperatures has the added benefit of increased efficiency. Theoretical fuel cell efficiency increases with decreasing temperature. For example, the efficiency of a SOFC using CO as fuel increases from 63% to 81% when decreasing the system temperature from 900 °C to 350 °C.
Research is also under way to improve the fuel flexibility of SOFCs. While stable operation has been achieved on a variety of hydrocarbon fuels, these cells typically rely on external fuel processing. In the case of natural gas, the fuel is either externally or internally reformed and the sulfur compounds are removed. These processes add to the cost and complexity of SOFC systems. Work is under way at a number of institutions to improve the stability of anode materials for hydrocarbon oxidation and, therefore, relax the requirements for fuel processing and decrease SOFC balance of plant costs.
Research is also under way to reduce start-up time so that SOFCs can be implemented in mobile applications. This can be partially achieved by lowering operating temperatures, as is done in proton-exchange membrane fuel cells (PEMFCs). Because of their fuel flexibility, SOFCs may run on partially reformed diesel, and this makes them interesting as auxiliary power units (APUs) in refrigerated trucks.
Specifically, Delphi Automotive Systems is developing an SOFC that will power auxiliary units in automobiles and tractor-trailers, while BMW has recently stopped a similar project. A high-temperature SOFC will generate all of the needed electricity to allow the engine to be smaller and more efficient. The SOFC would run on the same gasoline or diesel as the engine and would keep the air conditioning unit and other necessary electrical systems running while the engine shuts off when not needed (e.g., at a stop light or truck stop).
Rolls-Royce is developing solid-oxide fuel cells produced by screen printing onto inexpensive ceramic materials. Rolls-Royce Fuel Cell Systems Ltd is developing an SOFC gas turbine hybrid system fueled by natural gas for power generation applications in the order of a megawatt (e.g. Futuregen).
3D printing is being explored as a possible manufacturing technique that could be used to make SOFC manufacturing easier by the Shah Lab at Northwestern University. This manufacturing technique would allow SOFC cell structure to be more flexible, which could lead to more efficient designs. This process could work in the production of any part of the cell. The 3D printing process works by combining about 80% ceramic particles with 20% binders and solvents, and then converting that slurry into an ink that can be fed into a 3D printer. Some of the solvent is very volatile, so the ceramic ink solidifies almost immediately. Not all of the solvent evaporates, so the ink maintains some flexibility before it is fired at high temperature to densify it. This flexibility allows the cells to be fired in a circular shape that would increase the surface area over which electrochemical reactions can occur, which increases the efficiency of the cell. Also, the 3D printing technique allows the cell layers to be printed on top of each other instead of having to go through separate manufacturing and stacking steps. The thickness is easy to control, and layers can be made in the exact size and shape that is needed, so waste is minimized.
Ceres Power Ltd. has developed a low-cost and low-temperature (500–600 °C) SOFC stack using cerium gadolinium oxide (CGO) in place of the current industry-standard ceramic, yttria-stabilized zirconia (YSZ), which allows the use of stainless steel to support the ceramic.
Solid Cell Inc. has developed a unique, low-cost cell architecture that combines properties of planar and tubular designs, along with a Cr-free cermet interconnect.
The high temperature electrochemistry center (HITEC) at the University of Florida, Gainesville is focused on studying ionic transport, electrocatalytic phenomena and microstructural characterization of ion conducting materials.
SiEnergy Systems, a Harvard spin-off company, has demonstrated the first macro-scale thin-film solid-oxide fuel cell that can operate at 500 °C.
SOEC.
A solid oxide electrolyser cell (SOEC) is a solid oxide fuel cell set in regenerative mode for the electrolysis of water with a solid oxide, or ceramic, electrolyte to produce oxygen and hydrogen gas.
SOECs can also be used to do electrolysis of CO2 to produce CO and oxygen or even co-electrolysis of water and CO2 to produce syngas and oxygen.
ITSOFC.
SOFCs that operate in an intermediate temperature (IT) range, meaning between 600 and 800 °C, are named ITSOFCs. Because of the high degradation rates and materials costs incurred at temperatures in excess of 900 °C, it is economically more favorable to operate SOFCs at lower temperatures. The push for high-performance ITSOFCs is currently the topic of much research and development. One area of focus is the cathode material. It is thought that the oxygen reduction reaction is responsible for much of the loss in performance, so the catalytic activity of the cathode is being studied and enhanced through various techniques, including catalyst impregnation. Research on NdCrO3 suggests it is a potential cathode material for ITSOFCs, since it is thermochemically stable within this temperature range.
Another area of focus is electrolyte materials. To make SOFCs competitive in the market, ITSOFCs are pushing towards lower operational temperatures through the use of alternative new materials. However, the efficiency and stability of these materials limit their feasibility. One candidate class of new electrolyte materials is the ceria-salt ceramic composites (CSCs). The two-phase CSC electrolytes GDC (gadolinium-doped ceria) and SDC (samaria-doped ceria)-MCO3 (M = Li, Na, K, single or mixed carbonates) can reach power densities of 300-800 mW*cm−2.
LT-SOFC.
Low-temperature solid oxide fuel cells (LT-SOFCs), operating lower than 650 °C, are of great interest for future research because the high operating temperature is currently what restricts the development and deployment of SOFCs. A low-temperature SOFC is more reliable due to smaller thermal mismatch and easier sealing. Additionally, a lower temperature requires less insulation and therefore has a lower cost. Cost is further lowered due to wider material choices for interconnects and compressive nonglass/ceramic seals. Perhaps most importantly, at a lower temperature, SOFCs can be started more rapidly and with less energy, which lends itself to uses in portable and transportable applications.
As temperature decreases, the maximum theoretical fuel cell efficiency increases, in contrast to the Carnot cycle. For example, the maximum theoretical efficiency of an SOFC using CO as a fuel increases from 63% at 900 °C to 81% at 350 °C.
Low-temperature operation is primarily a materials issue, particularly for the electrolyte in the SOFC. YSZ is the most commonly used electrolyte because of its superior stability, despite not having the highest conductivity. Currently, the thickness of YSZ electrolytes is a minimum of ~10 μm due to deposition methods, and this requires a temperature above 700 °C. Therefore, low-temperature SOFCs are only possible with higher-conductivity electrolytes. Various alternatives that could be successful at low temperature include gadolinium-doped ceria (GDC) and erbia-stabilized bismuth oxide (ESB). They have superior ionic conductivity at lower temperatures, but this comes at the expense of lower thermodynamic stability. CeO2 electrolytes become electronically conductive and Bi2O3 electrolytes decompose to metallic Bi under the reducing fuel environment.
To combat this, researchers created a functionally graded ceria/bismuth-oxide bilayered electrolyte where the GDC layer on the anode side protects the ESB layer from decomposing, while the ESB on the cathode side blocks the leakage current through the GDC layer. This leads to near-theoretical open-circuit potential (OCP) with two highly conductive electrolytes that, by themselves, would not have been sufficiently stable for the application. This bilayer proved to be stable for 1400 hours of testing at 500 °C and showed no indication of interfacial phase formation or thermal mismatch. While this makes strides towards lowering the operating temperature of SOFCs, it also opens doors for future research to try to understand this mechanism.
Researchers at the Georgia Institute of Technology dealt with the instability of BaCeO3 differently. They replaced a desired fraction of Ce in BaCeO3 with Zr to form a solid solution that exhibits proton conductivity, but also chemical and thermal stability over the range of conditions relevant to fuel cell operation. A specific new composition, Ba(Zr0.1Ce0.7Y0.2)O3-δ (BZCY7), displays the highest ionic conductivity of all known electrolyte materials for SOFC applications. This electrolyte was fabricated by dry-pressing powders, which allowed for the production of crack-free films thinner than 15 μm. The implementation of this simple and cost-effective fabrication method may enable significant cost reductions in SOFC fabrication. However, this electrolyte operates at higher temperatures than the bilayered electrolyte model, closer to 600 °C rather than 500 °C.
Currently, given the state of the field for LT-SOFCs, progress in the electrolyte would reap the most benefits, but research into potential anode and cathode materials would also lead to useful results, and has started to be discussed more frequently in literature.
SOFC-GT.
An SOFC-GT system is one which comprises a solid oxide fuel cell combined with a gas turbine. Such systems have been evaluated by Siemens Westinghouse and Rolls-Royce as a means to achieve higher operating efficiencies by running the SOFC under pressure. SOFC-GT systems typically include anodic and/or cathodic atmosphere recirculation, thus increasing efficiency.
Theoretically, the combination of the SOFC and gas turbine can result in high overall (electrical and thermal) efficiency. Further combination of the SOFC-GT in a combined cooling, heat and power (or trigeneration) configuration (via HVAC) also has the potential to yield even higher thermal efficiencies in some cases.
Another feature of such a hybrid system is the potential for 100% CO2 capture at comparably high energy efficiency. These features, namely zero CO2 emission and high energy efficiency, make the power plant performance noteworthy.
DCFC.
For the direct use of solid coal fuel without additional gasification and reforming processes, a direct carbon fuel cell (DCFC) has been developed as a promising novel concept of a high-temperature energy conversion system. The underlying progress in the development of a coal-based DCFC has been categorized mainly according to the electrolyte materials used, such as solid oxide, molten carbonate, and molten hydroxide, as well as hybrid systems consisting of a solid oxide and molten carbonate binary electrolyte or of a liquid anode (Fe, Ag, In, Sn, Sb, Pb, Bi, their alloys, and their metal/metal oxides) with a solid oxide electrolyte. Research on a DCFC with GDC-Li/Na2CO3 as the electrolyte and Sm0.5Sr0.5CoO3 as the cathode has shown good performance: a peak power density of 48 mW*cm−2 was reached at 500 °C with O2 and CO2 as the oxidant, and the whole system was stable within the temperature range of 500 °C to 600 °C.
SOFC operated on landfill gas.
Every household produces waste/garbage on a daily basis. In 2009, Americans produced about 243 million tons of municipal solid waste, which is 4.3 pounds of waste per person per day. All that waste is sent to landfill sites. Landfill gas, which is produced from the decomposition of waste that accumulates at the landfills, has the potential to be a valuable source of energy since methane is a major constituent. Currently, the majority of landfills either burn away their gas in flares or combust it in mechanical engines to produce electricity. The issue with mechanical engines is that incomplete combustion of gases can lead to pollution of the atmosphere, and such engines are also highly inefficient.
The issue with using landfill gas to fuel an SOFC system is that landfill gas contains hydrogen sulfide. Any landfill accepting biological waste will contain about 50-60 ppm of hydrogen sulfide and around 1-2 ppm mercaptans. However, construction materials containing reducible sulfur species, principally sulfates found in gypsum-based wallboard, can cause considerably higher levels of sulfides in the hundreds of ppm. At operating temperatures of 750 °C hydrogen sulfide concentrations of around 0.05 ppm begin to affect the performance of the SOFCs.
Ni + H2S → NiS + H2
The above reaction controls the effect of sulfur on the anode.
This can be prevented by maintaining a background level of hydrogen, which is calculated below.
At 453 K the equilibrium constant is 7.39 x 10−5
ΔG calculated at 453 K was 35.833 kJ/mol
Using the standard heat of formation and entropy, ΔG at room temperature (298 K) comes out to be 45.904 kJ/mol
On extrapolation to 1023 K, ΔG is -1.229 kJ/mol
On substitution, Keq at 1023 K is 1.44 x 10−4. Hence theoretically we need 3.4% hydrogen to prevent the formation of NiS at 5 ppm H2S.
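The arithmetic quoted above can be reproduced with the short sketch below; following the text's final step, the quoted equilibrium constant at 1023 K is treated as the equilibrium ratio of H2S to H2, so the required hydrogen fraction is the H2S level divided by Keq.

```python
import math

# Sketch reproducing the arithmetic quoted above for Ni + H2S -> NiS + H2.
R = 8.314  # J/(mol*K)

# Consistency check at 453 K: dG = -R*T*ln(Keq)
K_453 = 7.39e-5
dG_453 = -R * 453 * math.log(K_453)
print(f"dG(453 K) from Keq: {dG_453 / 1000:.3f} kJ/mol (text: 35.833 kJ/mol)")

# Required background hydrogen at 1023 K, treating the quoted Keq as the
# equilibrium ratio of H2S to H2 (as the text's final step implies).
K_1023 = 1.44e-4
p_H2S = 5e-6  # 5 ppm as a mole fraction
p_H2_needed = p_H2S / K_1023
print(f"Hydrogen needed: {p_H2_needed * 100:.2f} % (text: ~3.4 %)")
```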
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\frac{1}{2}\\mathrm{O_2(g)} + 2\\mathrm{e'} + {V}^{\\bullet\\bullet}_o \\longrightarrow {O}^{\\times}_o "
},
{
"math_id": 1,
"text": " {V} = {E}_0 - {iR}_\\omega - {\\eta}_{cathode} - {\\eta}_{anode} "
},
{
"math_id": 2,
"text": "{E}_0"
},
{
"math_id": 3,
"text": "R"
},
{
"math_id": 4,
"text": "{\\eta}_{cathode}"
},
{
"math_id": 5,
"text": "{\\eta}_{anode}"
},
{
"math_id": 6,
"text": "E_{SOFC} = \\frac{E_{max}-i_{max}\\cdot\\eta_f\\cdot r_1}{\\frac{r_1}{r_2}\\cdot\\left( 1-\\eta_f \\right) + 1}\n"
},
{
"math_id": 7,
"text": "E_{SOFC}"
},
{
"math_id": 8,
"text": "E_{max}"
},
{
"math_id": 9,
"text": "i_{max}"
},
{
"math_id": 10,
"text": "\\eta_f"
},
{
"math_id": 11,
"text": "r_1"
},
{
"math_id": 12,
"text": "r_2"
},
{
"math_id": 13,
"text": "r_1 = \\frac{\\delta}{\\sigma}"
},
{
"math_id": 14,
"text": "\\delta"
},
{
"math_id": 15,
"text": "\\sigma"
},
{
"math_id": 16,
"text": "\\sigma = \\sigma_0\\cdot e^\\frac{-E}{R\\cdot T}"
},
{
"math_id": 17,
"text": "\\sigma_0"
},
{
"math_id": 18,
"text": "E"
},
{
"math_id": 19,
"text": "T"
},
{
"math_id": 20,
"text": " {\\eta}_{act} = \\frac {RT} {{\\beta}zF} \\times ln \\left(\\frac {i} {{i}_0} \\right) "
},
{
"math_id": 21,
"text": "{T}_0"
},
{
"math_id": 22,
"text": "{\\beta}"
},
{
"math_id": 23,
"text": "z"
},
{
"math_id": 24,
"text": "F"
},
{
"math_id": 25,
"text": "i"
},
{
"math_id": 26,
"text": "i_0"
},
{
"math_id": 27,
"text": " K_{IC} = K_{IC,0}\\exp{(-b_{k}p')} "
},
{
"math_id": 28,
"text": "K_{IC}"
},
{
"math_id": 29,
"text": "K_{IC,0}"
},
{
"math_id": 30,
"text": "b_k"
},
{
"math_id": 31,
"text": "p' "
},
{
"math_id": 32,
"text": " \\sigma_{cr}= \\frac{3F_{cr}}{2\\pi h_{s}^{2}}+\n\\Biggl((1-\\nu)\\frac{D_{sup}^{2}-D_{load}^{2}}{2D_{s}^{2}}+(1+\\nu)\\ln\\left ( \\frac{D_{sup}}{D_{load}} \\right )\\Biggr) "
},
{
"math_id": 33,
"text": " \\sigma_{cr} "
},
{
"math_id": 34,
"text": " F_{cr} "
},
{
"math_id": 35,
"text": " h_s "
},
{
"math_id": 36,
"text": " \\nu "
},
{
"math_id": 37,
"text": " D "
},
{
"math_id": 38,
"text": " \\dot{\\epsilon}_{eq}^{creep}=\\frac{\\tilde{k}_0D}{T}\\frac{\\sigma_{eq}^{m}}{d_{grain}^{n}} "
},
{
"math_id": 39,
"text": " \\dot{\\epsilon}_{eq}^{creep} "
},
{
"math_id": 40,
"text": " T "
},
{
"math_id": 41,
"text": " \\tilde{k}_0 "
},
{
"math_id": 42,
"text": " \\sigma_{eq} "
},
{
"math_id": 43,
"text": " m "
},
{
"math_id": 44,
"text": " n "
}
] | https://en.wikipedia.org/wiki?curid=1049636 |
1049637 | Molten carbonate fuel cell | Molten-carbonate fuel cells (MCFCs) are high-temperature fuel cells that operate at temperatures of 600 °C and above.
Molten carbonate fuel cells (MCFCs) were developed for natural gas, biogas (produced as a result of anaerobic digestion or biomass gasification), and coal-based power plants for electrical utility, industrial, and military applications. MCFCs are high-temperature fuel cells that use an electrolyte composed of a molten carbonate salt mixture suspended in a porous, chemically inert ceramic matrix of beta-alumina solid electrolyte (BASE). Since they operate at extremely high temperatures of 650 °C (roughly 1,200 °F) and above, non-precious metals can be used as catalysts at the anode and cathode, reducing costs.
Improved efficiency is another reason MCFCs offer significant cost reductions over phosphoric acid fuel cells (PAFCs). Molten carbonate fuel cells can reach efficiencies approaching 60%, considerably higher than the 37–42% efficiencies of a phosphoric acid fuel cell plant. When the waste heat is captured and used, overall fuel efficiencies can be as high as 85%.
Unlike alkaline, phosphoric acid, and polymer electrolyte membrane fuel cells, MCFCs don't require an external reformer to convert more energy-dense fuels to hydrogen. Due to the high temperatures at which MCFCs operate, these fuels are converted to hydrogen within the fuel cell itself by a process called internal reforming, which also reduces cost.
Molten carbonate fuel cells are not prone to poisoning by carbon monoxide or carbon dioxide — they can even use carbon oxides as fuel — making them more attractive for fueling with gases made from coal. Because they are more resistant to impurities than other fuel cell types, scientists believe that they could even be capable of internal reforming of coal, assuming they can be made resistant to impurities such as sulfur and particulates that result from converting coal, a dirtier fossil fuel source than many others, into hydrogen. Alternatively, because MCFCs require CO2 be delivered to the cathode along with the oxidizer, they can be used to electrochemically separate carbon dioxide from the flue gas of other fossil fuel power plants for sequestration.
The primary disadvantage of current MCFC technology is durability. The high temperatures at which these cells operate and the corrosive electrolyte used accelerate component breakdown and corrosion, decreasing cell life. Scientists are currently exploring corrosion-resistant materials for components as well as fuel cell designs that increase cell life without decreasing performance.
Operation.
Background.
Molten carbonate FCs are a recently developed type of fuel cell that targets small and large energy distribution/generation systems since their power production is in the 0.3-3 MW range. The operating pressure is between 1-8 atm while the temperatures are between 600 and 700 °C. Due to the production of CO2 during reforming of the fossil fuel (methane, natural gas), MCFCs are not a completely green technology, but are promising due to their reliability and efficiency (sufficient heat for co-generation with electricity). Current MCFC efficiencies range from 60 to 70%.
Reactions.
Internal Reformer (methane example):
formula_0
Anode (hydrogen example):
formula_1
Cathode:
formula_2
Cell:
formula_3
Nernst Equation:
formula_4
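A minimal numerical sketch of this Nernst expression is given below; the partial pressures and standard potential are assumed values, and the logarithm is taken as the natural logarithm.

```python
import math

# Sketch of the MCFC Nernst equation with assumed partial pressures (atm) and an
# assumed standard potential.
R = 8.314   # J/(mol*K)
F = 96485   # C/mol
T = 923     # K (650 degC)
E0 = 1.02   # V, standard potential at operating temperature (assumed)

P_H2, P_H2O = 0.6, 0.2                  # anode side
P_O2 = 0.15                             # cathode side
P_CO2_cathode, P_CO2_anode = 0.15, 0.2

E = (E0
     + R * T / (2 * F) * math.log(P_H2 * math.sqrt(P_O2) / P_H2O)
     + R * T / (2 * F) * math.log(P_CO2_cathode / P_CO2_anode))
print(f"Nernst potential: {E:.3f} V")  # ~1.01 V with these assumed values
```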
Materials.
Due to the high operating temperatures of MCFCs, the materials need to be very carefully selected to survive the conditions present within the cell. The following sections cover the various materials present in the fuel cell and recent developments in research.
Anode.
The anode material typically consists of a porous (3-6 μm, 45-70% material porosity) Ni-based alloy. Ni is alloyed with either chromium or aluminum in the 2-10% range. These alloying elements allow for formation of LiCrO2/LiAlO2 at the grain boundaries, which increases the materials' creep resistance and prevents sintering of the anode at the high operating temperatures of the fuel cell. Recent research has looked at using nanoscale Ni and other Ni alloys to increase the performance and decrease the operating temperature of the fuel cell. A reduction in operating temperature would extend the lifetime of the fuel cell (i.e. decrease the corrosion rate) and allow for use of cheaper component materials. At the same time, a decrease in temperature would decrease the ionic conductivity of the electrolyte and thus the anode materials need to compensate for this performance decline (e.g. by increasing power density). Other researchers have looked into enhancing creep resistance by using a Ni3Al alloy anode to reduce mass transport of Ni in the anode when in operation.
Cathode.
On the other side of the cell, the cathode material is composed of either lithium metatitanate or a porous Ni that is converted to lithiated nickel oxide (lithium is intercalated within the NiO crystal structure). The pore size within the cathode is in the range of 7-15 μm, with 60-70% of the material being porous. The primary issue with the cathode material is dissolution of NiO, since it reacts with CO2 when the cathode is in contact with the carbonate electrolyte. This dissolution leads to precipitation of Ni metal in the electrolyte and, since it is electrically conductive, the fuel cell can get short-circuited. Therefore, current studies have looked into the addition of MgO to the NiO cathode to limit this dissolution. Magnesium oxide serves to reduce the solubility of Ni2+ in the cathode and decreases precipitation in the electrolyte. Alternatively, replacement of the conventional cathode material with a LiFeO2-LiCoO2-NiO alloy has shown promising performance results and almost completely avoids the problem of Ni dissolution of the cathode.
Electrolyte.
MCFCs use a liquid electrolyte (molten carbonate), typically a mixture of lithium (Li) and potassium (K) carbonates. This electrolyte is supported by a ceramic (LiAlO2) matrix to contain the liquid between the electrodes. The high operating temperature of the fuel cell is required to produce sufficient ionic conductivity of carbonate through this electrolyte. Common MCFC electrolytes contain 62% Li2CO3 and 38% K2CO3. A greater fraction of Li carbonate is used due to its higher ionic conductivity, but it is limited to 62% due to its lower gas solubility and ionic diffusivity of oxygen. In addition, Li2CO3 is a very corrosive electrolyte and this ratio of carbonates provides the lowest corrosion rate. Due to these issues, recent studies have delved into replacing the potassium carbonate with a sodium carbonate. A Li/Na electrolyte has been shown to have better performance (higher conductivity) and to improve the stability of the cathode when compared to a Li/K electrolyte (Li/K is more basic). In addition, scientists have also looked into modifying the matrix of the electrolyte to prevent issues such as phase changes (γ-LiAlO2 to α-LiAlO2) in the material during cell operation. The phase change accompanies a volume decrease in the electrolyte, which leads to lower ionic conductivity. Through various studies, it has been found that an alumina-doped α-LiAlO2 matrix would improve the phase stability while maintaining the fuel cell's performance.
MTU fuel cell.
The German company MTU Friedrichshafen presented an MCFC at the Hannover Fair in 2006. The unit weighs 2 tonnes and can produce 240 kW of electric power from various gaseous fuels, including biogas. If fueled by carbon-containing fuels such as natural gas, the exhaust will contain CO2, but emissions will be reduced by up to 50% compared to diesel engines running on marine bunker fuel. The exhaust temperature is 400 °C, hot enough to be used for many industrial processes. Another possibility is to make more electric power via a steam turbine. Depending on feed gas type, the electric efficiency is between 12% and 19%. A steam turbine can increase the efficiency by up to 24%. The unit can be used for cogeneration.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "CH_4 + H_2O = 3H_2 + CO"
},
{
"math_id": 1,
"text": "H_2 + CO_3^{2-} = H_2O + CO_2 + 2e^-"
},
{
"math_id": 2,
"text": "\\frac{1}{2}O_2 + CO_2 +2e^- = CO_3^{2-}"
},
{
"math_id": 3,
"text": "H_2 + \\frac{1}{2}O_2 = H_2O"
},
{
"math_id": 4,
"text": "E = E^o + \\frac{RT}{2F}log\\frac{P_{H_2}P_{O_2}^{\\frac{1}{2}}}{P_{H_2O}}+\\frac{RT}{2F}log\\frac{P_{CO_2,cathode}}{P_{CO_2,anode}}"
}
] | https://en.wikipedia.org/wiki?curid=1049637 |
1049639 | Alkaline fuel cell | Type of fuel cell
The alkaline fuel cell (AFC), also known as the Bacon fuel cell after its British inventor, Francis Thomas Bacon, is one of the most developed fuel cell technologies. Alkaline fuel cells consume hydrogen and pure oxygen, to produce potable water, heat, and electricity. They are among the most efficient fuel cells, having the potential to reach 70%.
NASA has used alkaline fuel cells since the mid-1960s, in the Apollo-series missions and on the Space Shuttle.
Half Reactions.
The fuel cell produces power through a redox reaction between hydrogen and oxygen. At the anode, hydrogen is oxidized according to the reaction:
formula_0
producing water and releasing electrons. The electrons flow through an external circuit and return to the cathode, reducing oxygen in the reaction:
formula_1
producing hydroxide ions. The net reaction consumes one oxygen molecule and two hydrogen molecules in the production of two water molecules. Electricity and heat are formed as by-products of this reaction.
Electrolyte.
The two electrodes are separated by a porous matrix saturated with an aqueous alkaline solution, such as potassium hydroxide (KOH). Aqueous alkaline solutions do not reject carbon dioxide (CO2) so the fuel cell can become "poisoned" through the conversion of KOH to potassium carbonate (K2CO3). Because of this, alkaline fuel cells typically operate on pure oxygen, or at least purified air and would incorporate a 'scrubber' into the design to clean out as much of the carbon dioxide as is possible. Because the generation and storage requirements of oxygen make pure-oxygen AFCs expensive, there are few companies engaged in active development of the technology. There is, however, some debate in the research community over whether the poisoning is permanent or reversible. The main mechanisms of poisoning are blocking of the pores in the cathode with K2CO3, which is not reversible, and reduction in the ionic conductivity of the electrolyte, which may be reversible by returning the KOH to its original concentration. An alternate method involves simply replacing the KOH which returns the cell back to its original output.
When carbon dioxide reacts with the electrolyte, carbonates are formed. The carbonates can precipitate in the pores of the electrodes and eventually block them. It has been found that AFCs operating at higher temperature do not show a reduction in performance, whereas at around room temperature a significant drop in performance has been shown. The carbonate poisoning at ambient temperature is thought to be a result of the low solubility of K2CO3 around room temperature, which leads to precipitation of K2CO3 that blocks the electrode pores. Also, these precipitates gradually decrease the hydrophobicity of the electrode backing layer, leading to structural degradation and electrode flooding.
formula_2
On the other hand, the charge-carrying hydroxide ions in the electrolyte can react with carbon dioxide from organic fuel oxidation (i.e. methanol, formic acid) or air to form carbonate species.
formula_3
Carbonate formation depletes hydroxide ions from the electrolyte, which reduces electrolyte conductivity and consequently cell performance.
As well as these bulk effects, the effect on water management due to a change in vapor pressure and/or a change in electrolyte volume can be detrimental as well.
Basic designs.
Because of this poisoning effect, two main variants of AFCs exist: static electrolyte and flowing electrolyte. Static, or immobilized, electrolyte cells of the type used in the Apollo space craft and the Space Shuttle typically use an asbestos separator saturated in potassium hydroxide. Water production is controlled by evaporation from the anode, which produces pure water that may be reclaimed for other uses. These fuel cells typically use platinum catalysts to achieve maximum volumetric and specific efficiencies.
Flowing electrolyte designs use a more open matrix that allows the electrolyte to flow either between the electrodes (parallel to the electrodes) or through the electrodes in a transverse direction (the ASK-type or EloFlux fuel cell). In parallel-flow electrolyte designs, the water produced is retained in the electrolyte, and old electrolyte may be exchanged for fresh, in a manner analogous to an oil change in a car. More space is required between electrodes to enable this flow, and this translates into an increase in cell resistance, decreasing power output compared to immobilized electrolyte designs. A further challenge for the technology is how severe the problem of permanent cathode blocking by K2CO3 is; some published reports have indicated thousands of hours of operation on air.
These designs have used both platinum and non-noble metal catalysts, resulting in increased efficiencies and increased cost.
The EloFlux design, with its transverse flow of electrolyte, has the advantage of low-cost construction and replaceable electrolyte but so far has only been demonstrated using oxygen.
The electrodes consist of a double layer structure: an active electrocatalyst layer and a hydrophobic layer. The active layer consists of an organic mixture which is ground and then rolled at room temperature to form a crosslinked self-supporting sheet. The hydrophobic structure prevents the electrolyte from leaking into the reactant gas flow channels and ensures diffusion of the gases to the reaction site. The two layers are then pressed onto a conducting metal mesh, and sintering completes the process.
Further variations on the alkaline fuel cell include the metal hydride fuel cell and the direct borohydride fuel cell.
Advantages over acidic fuel cells.
Alkaline fuel cells operate between ambient temperature and 90 °C with an electrical efficiency higher than fuel cells with acidic electrolyte, such as proton-exchange membrane fuel cells (PEMFC), solid oxide fuel cells, and phosphoric acid fuel cells. Because of the alkaline chemistry, oxygen reduction reaction (ORR) kinetics at the cathode are much more facile than in acidic cells, allowing use of non-noble metals, such as iron, cobalt, nickel, manganese, or carbon-based nanomaterial at the anode (where fuel is oxidized); and cheaper catalysts such as silver at the cathode, due to the low overpotentials associated with electrochemical reactions at high pH.
An alkaline medium also accelerates oxidation of fuels like methanol, making them more attractive.
This results in less pollution compared to acidic fuel cells.
Commercial prospects.
AFCs are the cheapest of fuel cells to manufacture. The catalyst required for the electrodes can be any of a number of different chemicals that are inexpensive compared to those required for other types of fuel cells.
The commercial prospects for AFCs lie largely with the recently developed bi-polar plate version of this technology, considerably superior in performance to earlier mono-plate versions.
The world's first fuel-cell ship, the "Hydra", used an AFC system with 5 kW net output.
Another recent development is the solid-state alkaline fuel cell, utilizing a solid anion-exchange membrane instead of a liquid electrolyte. This resolves the problem of poisoning and allows the development of alkaline fuel cells capable of running on safer hydrogen-rich carriers such as liquid urea solutions or metal amine complexes.
{
"math_id": 0,
"text": "\\mathrm{H}_2 + \\mathrm{2OH}^- \\longrightarrow \\mathrm{2H}_2\\mathrm{O} + \\mathrm{2e}^-"
},
{
"math_id": 1,
"text": "\\mathrm{O}_2 + \\mathrm{2H}_2\\mathrm{O} + \\mathrm{4e}^- \\longrightarrow \\mathrm{4OH}^-"
},
{
"math_id": 2,
"text": "\\mathrm{CO}_2 + \\mathrm{2KOH}\\longrightarrow \\mathrm{K}_2\\mathrm{CO}_3 + \\mathrm{H}_2\\mathrm{O}"
},
{
"math_id": 3,
"text": "\\mathrm{2OH}^- + \\mathrm{CO}_2\\longrightarrow \\mathrm{CO}_3^{2-} + \\mathrm{H}_2\\mathrm{O}"
}
] | https://en.wikipedia.org/wiki?curid=1049639 |
1049668 | Net national income | Measure of economic activity
In national income accounting, net national income (NNI) is net national product (NNP) minus indirect taxes. Net national income encompasses the income of households, businesses, and the government. Net national income is defined as gross domestic product plus net receipts of wages, salaries and property income from abroad, minus the depreciation of fixed capital assets (dwellings, buildings, machinery, transport equipment and physical infrastructure) through wear and tear and obsolescence.
It can be expressed as
formula_0
where C denotes consumption, I denotes investment, G denotes government spending, and NX represents net exports (exports minus imports: X – M).
This formula uses the expenditure method of national income accounting.
When net national income is adjusted for natural resource depletion, it is called "Adjusted Net National Income", expressed as
formula_1
Natural resources are non-critical natural capital such as minerals. NNI* does not take critical natural capital into account. Examples are air, water, land, etc.
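A toy arithmetic illustration of these identities, using invented round numbers rather than statistics for any economy, is given below.

```python
# Toy illustration of the NNI identities with invented round numbers.
C = 600    # consumption
I = 200    # investment
G = 250    # government spending
NX = -30   # net exports (X - M)
net_foreign_factor_income = 10
indirect_taxes = 80
capital_depreciation = 90
natural_resource_depletion = 15

NNI = C + I + G + NX + net_foreign_factor_income - indirect_taxes - capital_depreciation
NNI_adjusted = NNI - natural_resource_depletion
print(f"NNI  = {NNI}")           # 860
print(f"NNI* = {NNI_adjusted}")  # 845
```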
For reference, capital (K) is divided into four categories: | [
{
"math_id": 0,
"text": "\\mathrm{NNI} = \\mathrm{C} + \\mathrm{I} + \\mathrm{G} + \\mathrm{NX} + \\left[{\\text{Net Foreign} \\atop \\text{Factor Income}}\\right] - \\left[{\\text{Indirect} \\atop \\text{Taxes}}\\right] - \\left[{\\text{Manufactured Capital} \\atop \\text{Depreciation}}\\right]"
},
{
"math_id": 1,
"text": "\\mathrm{NNI}^* = \\mathrm{NNI} - \\left[{\\text{Natural Resource} \\atop \\text{Depletion}}\\right]"
},
{
"math_id": 2,
"text": "K_m"
},
{
"math_id": 3,
"text": "K_h"
},
{
"math_id": 4,
"text": "K_n"
},
{
"math_id": 5,
"text": "K_h*"
}
] | https://en.wikipedia.org/wiki?curid=1049668 |
1049691 | Sequence logo | In bioinformatics, a sequence logo is a graphical representation of the sequence conservation of nucleotides (in a strand of DNA/RNA) or amino acids (in protein sequences).
A sequence logo is created from a collection of aligned sequences and depicts the consensus sequence and diversity of the sequences.
Sequence logos are frequently used to depict sequence characteristics such as protein-binding sites in DNA or functional units in proteins.
Overview.
A sequence logo consists of a stack of letters at each position.
The relative sizes of the letters indicate their frequency in the sequences.
The total height of the letters depicts the information content of the position, in bits.
Logo creation.
To create sequence logos, related DNA, RNA or protein sequences, or DNA sequences that have common conserved binding sites, are aligned so that the most conserved parts create good alignments. A sequence logo can then be created from the conserved multiple sequence alignment. The sequence logo will show how well residues are conserved at each position: the more strongly a position is conserved, the taller the stack of letters at that position. Different residues at the same position are scaled according to their frequency. The height of the entire stack of residues is the information measured in bits. Sequence logos can be used to represent conserved DNA binding sites, where transcription factors bind.
The information content (y-axis) of position formula_0 is given by:
for amino acids, formula_1
for nucleic acids, formula_2
where formula_3 is the uncertainty
(sometimes called the Shannon entropy) of position formula_0
formula_4
Here, formula_5 is the relative frequency of base or amino acid formula_6 at position formula_0, and formula_7 is the small-sample correction for an alignment of formula_8 letters. The height of letter formula_6 in column formula_0 is given by
formula_9
The approximation for the small-sample correction, formula_7, is given by:
formula_10
where formula_11 is 4 for nucleotides, 20 for amino acids, and formula_8 is the number of sequences in the alignment.
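A minimal sketch of these formulas for a single nucleotide column is given below; the counts are an invented alignment slice, and the number of sequences is taken as the column total.

```python
import math

# Sketch of R_i and letter heights for one nucleotide column of a sequence logo.

def column_logo(counts, s=4):
    """counts: base -> count in one alignment column; s: alphabet size (4 or 20)."""
    n = sum(counts.values())
    freqs = {b: c / n for b, c in counts.items() if c > 0}
    H = -sum(f * math.log2(f) for f in freqs.values())  # Shannon uncertainty H_i
    e_n = (1 / math.log(2)) * (s - 1) / (2 * n)         # small-sample correction
    R = math.log2(s) - (H + e_n)                        # information content in bits
    heights = {b: f * R for b, f in freqs.items()}      # letter heights f_{b,i} * R_i
    return R, heights

R, heights = column_logo({"A": 7, "C": 1, "G": 1, "T": 1})
print(f"R_i = {R:.2f} bits")
for base, h in sorted(heights.items(), key=lambda kv: -kv[1]):
    print(f"  {base}: height {h:.2f}")
```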
Consensus logo.
A consensus logo is a simplified variation of a sequence logo that can be embedded in text format.
Like a sequence logo, a consensus logo is created from a collection of aligned protein or DNA/RNA sequences and conveys information about the conservation of each position of a sequence motif or sequence alignment. However, a consensus logo displays only conservation information, and not explicitly the frequency information of each nucleotide or amino acid at each position. Instead of a stack made of several characters, denoting the relative frequency of each character, the consensus logo depicts the degree of conservation of each position using the height of the consensus character at that position.
Advantages and drawbacks.
The main, and obvious, advantage of consensus logos over sequence logos is their ability to be embedded as text in any Rich Text Format supporting editor/viewer and, therefore, in scientific manuscripts. As described above, the consensus logo is a cross between sequence logos and consensus sequences. As a result, compared to a sequence logo, the consensus logo omits information (the relative contribution of each character to the conservation of that position in the motif/alignment). Hence, a sequence logo should be used preferentially whenever possible. That being said, the need to include graphic figures in order to display sequence logos has perpetuated the use of consensus sequences in scientific manuscripts, even though they fail to convey information on both conservation and frequency. Consensus logos represent therefore an improvement over consensus sequences whenever motif/alignment information has to be constrained to text.
Extensions.
Hidden Markov models (HMMs) not only consider the information content of aligned positions in an alignment, but also of insertions and deletions. In an HMM sequence logo used by Pfam, three rows are added to indicate the frequencies of occupancy (presence) and insertion, as well as the expected insertion length.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "i"
},
{
"math_id": 1,
"text": "R_i = \\log_2(20) - (H_i + e_n)"
},
{
"math_id": 2,
"text": "R_i = \\log_2(4) - (H_i + e_n)"
},
{
"math_id": 3,
"text": "H_i"
},
{
"math_id": 4,
"text": "H_i = - \\sum_{b=1}^{t} f_{b,i} \\times \\log_2 f_{b,i} "
},
{
"math_id": 5,
"text": "f_{b,i}"
},
{
"math_id": 6,
"text": "b"
},
{
"math_id": 7,
"text": "e_n"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "\\text{height} = f_{b,i} \\times R_i"
},
{
"math_id": 10,
"text": "e_n = \\frac{1}{\\ln{2}}\\times\\frac{s-1}{2n}"
},
{
"math_id": 11,
"text": "s"
}
] | https://en.wikipedia.org/wiki?curid=1049691 |
10497038 | Yetter–Drinfeld category | In mathematics a Yetter–Drinfeld category is a special type of braided monoidal category. It consists of modules over a Hopf algebra which satisfy some additional axioms.
Definition.
Let "H" be a Hopf algebra over a field "k". Let formula_0 denote the coproduct and "S" the antipode of "H". Let "V" be a vector space over "k". Then "V" is called a (left left) Yetter–Drinfeld module over "H" if
formula_7 for all formula_8,
where, using Sweedler notation, formula_9 denotes the twofold coproduct of formula_10, and formula_11.
Examples.
Over the group algebra "H" = "kG" of a group "G", a Yetter–Drinfeld module is the same thing as a "G"-graded vector space
formula_16
equipped with a "G"-action such that formula_18; if "G" is abelian, this condition just says that each formula_17 is a "G"-submodule of "V".
Braiding.
Let "H" be a Hopf algebra with invertible antipode "S", and let "V", "W" be Yetter–Drinfeld modules over "H". Then the map formula_33,
formula_34
is invertible with inverse
formula_35
Further, for any three Yetter–Drinfeld modules "U", "V", "W" the map "c" satisfies the braid relation
formula_36
A monoidal category formula_37 consisting of Yetter–Drinfeld modules over a Hopf algebra "H" with bijective antipode is called a Yetter–Drinfeld category. It is a braided monoidal category with the braiding "c" above. The category of Yetter–Drinfeld modules over a Hopf algebra "H" with bijective antipode is denoted by formula_38.
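As a hedged illustration outside the article's numbered formulas, the braiding specializes neatly over a group algebra "H" = "kG": if "v" is homogeneous of degree "g", so that its coaction is δ("v") = "g" ⊗ "v", then

```latex
% Braiding of Yetter-Drinfeld modules over H = kG, for v homogeneous of degree g:
c_{V,W}(v \otimes w) = (g \cdot w) \otimes v,
\qquad
c_{V,W}^{-1}(w \otimes v) = v \otimes (g^{-1} \cdot w),
```

using that the antipode of "kG" sends "g" to its inverse.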
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\Delta "
},
{
"math_id": 1,
"text": " (V,\\boldsymbol{.}) "
},
{
"math_id": 2,
"text": " \\boldsymbol{.}: H\\otimes V\\to V "
},
{
"math_id": 3,
"text": " (V,\\delta\\;) "
},
{
"math_id": 4,
"text": " \\delta : V\\to H\\otimes V "
},
{
"math_id": 5,
"text": "\\boldsymbol{.}"
},
{
"math_id": 6,
"text": "\\delta"
},
{
"math_id": 7,
"text": " \\delta (h\\boldsymbol{.}v)=h_{(1)}v_{(-1)}S(h_{(3)})\n\\otimes h_{(2)}\\boldsymbol{.}v_{(0)}"
},
{
"math_id": 8,
"text": " h\\in H,v\\in V"
},
{
"math_id": 9,
"text": " (\\Delta \\otimes \\mathrm{id})\\Delta (h)=h_{(1)}\\otimes h_{(2)}\n\\otimes h_{(3)} \\in H\\otimes H\\otimes H"
},
{
"math_id": 10,
"text": " h\\in H "
},
{
"math_id": 11,
"text": " \\delta (v)=v_{(-1)}\\otimes v_{(0)} "
},
{
"math_id": 12,
"text": "\\delta (v)=1\\otimes v"
},
{
"math_id": 13,
"text": "V=k\\{v\\}"
},
{
"math_id": 14,
"text": "h\\boldsymbol{.}v=\\epsilon (h)v"
},
{
"math_id": 15,
"text": " \\delta (v)=1\\otimes v"
},
{
"math_id": 16,
"text": " V=\\bigoplus _{g\\in G}V_g"
},
{
"math_id": 17,
"text": "V_g"
},
{
"math_id": 18,
"text": "g.V_h\\subset V_{ghg^{-1}}"
},
{
"math_id": 19,
"text": "k=\\mathbb{C}\\;"
},
{
"math_id": 20,
"text": "[g]\\subset G\\;"
},
{
"math_id": 21,
"text": "\\chi,X\\;"
},
{
"math_id": 22,
"text": "Cent(g)\\;"
},
{
"math_id": 23,
"text": "g\\in[g]"
},
{
"math_id": 24,
"text": "V=\\mathcal{O}_{[g]}^\\chi=\\mathcal{O}_{[g]}^{X}\\qquad V=\\bigoplus_{h\\in[g]}V_{h}=\\bigoplus_{h\\in[g]}X"
},
{
"math_id": 25,
"text": "\\mathcal{O}_{[g]}^\\chi"
},
{
"math_id": 26,
"text": "Ind_{Cent(g)}^G(\\chi)=kG\\otimes_{kCent(g)}X"
},
{
"math_id": 27,
"text": "t\\otimes v\\in kG\\otimes_{kCent(g)}X=V"
},
{
"math_id": 28,
"text": "t\\otimes v\\in V_{tgt^{-1}}"
},
{
"math_id": 29,
"text": "V\\;"
},
{
"math_id": 30,
"text": "t_i\\;"
},
{
"math_id": 31,
"text": "h\\otimes v\\subset[g]\\times X \\;\\; \\leftrightarrow \\;\\; t_i\\otimes v\\in kG\\otimes_{kCent(g)}X \\qquad\\text{with uniquely}\\;\\;h=t_igt_i^{-1}"
},
{
"math_id": 32,
"text": "h\\otimes v\\in V_h"
},
{
"math_id": 33,
"text": " c_{V,W}:V\\otimes W\\to W\\otimes V"
},
{
"math_id": 34,
"text": "c(v\\otimes w):=v_{(-1)}\\boldsymbol{.}w\\otimes v_{(0)},"
},
{
"math_id": 35,
"text": "c_{V,W}^{-1}(w\\otimes v):=v_{(0)}\\otimes S^{-1}(v_{(-1)})\\boldsymbol{.}w."
},
{
"math_id": 36,
"text": "(c_{V,W}\\otimes \\mathrm{id}_U)(\\mathrm{id}_V\\otimes c_{U,W})(c_{U,V}\\otimes \\mathrm{id}_W)=(\\mathrm{id}_W\\otimes c_{U,V}) (c_{U,W}\\otimes \\mathrm{id}_V) (\\mathrm{id}_U\\otimes c_{V,W}):U\\otimes V\\otimes W\\to W\\otimes V\\otimes U."
},
{
"math_id": 37,
"text": " \\mathcal{C}"
},
{
"math_id": 38,
"text": " {}^H_H\\mathcal{YD}"
}
] | https://en.wikipedia.org/wiki?curid=10497038 |
10497504 | Image sensor format | Shape and size of a digital camera's image sensor
In digital photography, the image sensor format is the shape and size of the image sensor.
The image sensor format of a digital camera determines the angle of view of a particular lens when used with a particular sensor. Because the image sensors in many digital cameras are smaller than the 24 mm × 36 mm image area of full-frame 35 mm cameras, a lens of a given focal length gives a narrower field of view in such cameras.
Sensor size is often expressed as optical format in inches. Other measures are also used; see table of sensor formats and sizes below.
Lenses produced for 35 mm film cameras may mount well on the digital bodies, but the larger image circle of the 35 mm system lens allows unwanted light into the camera body, and the smaller size of the image sensor compared to 35 mm film format results in cropping of the image. This latter effect is known as field-of-view crop. The format size ratio (relative to the 35 mm film format) is known as the field-of-view crop factor, crop factor, lens factor, focal-length conversion factor, focal-length multiplier, or lens multiplier.
Sensor size and depth of field.
Three possible depth-of-field comparisons between formats are discussed, applying the formulae derived in the article on depth of field. The depths of field of the three cameras may be the same, or different in either order, depending on what is held constant in the comparison.
Considering a picture with the same subject distance and angle of view for two different formats:
formula_0
so the DOFs are in inverse proportion to the absolute aperture diameters formula_1 and formula_2.
Using the same absolute aperture diameter for both formats with the "same picture" criterion (equal angle of view, magnified to same final size) yields the same depth of field. It is equivalent to adjusting the f-number inversely in proportion to crop factor – a smaller f-number for smaller sensors (this also means that, when holding the shutter speed fixed, the exposure is changed by the adjustment of the f-number required to equalise depth of field. But the aperture area is held constant, so sensors of all sizes receive the same total amount of light energy from the subject. The smaller sensor is then operating at a lower ISO setting, by the square of the crop factor). This condition of equal field of view, equal depth of field, equal aperture diameter, and equal exposure time is known as "equivalence".
And, we might compare the depth of field of sensors receiving the same photometric exposure – the f-number is fixed instead of the aperture diameter – the sensors are operating at the same ISO setting in that case, but the smaller sensor is receiving less total light, by the area ratio. The ratio of depths of field is then
formula_3
where formula_4 and formula_5 are the characteristic dimensions of the format, and thus formula_6 is the relative crop factor between the sensors. It is this result that gives rise to the common opinion that small sensors yield greater depth of field than large ones.
An alternative is to consider the depth of field given by the same lens in conjunction with different sized sensors (changing the angle of view). The change in depth of field is brought about by the requirement for a different degree of enlargement to achieve the same final image size. In this case the ratio of depths of field becomes
formula_7.
In practice, if a lens with a fixed focal length and a fixed aperture, made with an image circle that meets the requirements of a large sensor, is adapted without changing its physical properties to smaller sensor sizes, then neither the depth of field nor the light gathering formula_8 will change.
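For the fixed-f-number comparison, the ratio of depths of field reduces to the relative crop factor; the short sketch below evaluates it for a few nominal sensor diagonals.

```python
# Sketch of the fixed-f-number comparison: with equal angle of view and final
# image size, the depth-of-field ratio is the relative crop factor l1/l2.
# Sensor diagonals below are nominal values.
full_frame_diag = 43.3  # mm
formats = {
    "APS-C": 28.8,
    "Micro Four Thirds": 21.6,
    "1-inch type": 15.9,
}

for name, diag in formats.items():
    crop = full_frame_diag / diag
    print(f"{name}: crop factor {crop:.2f} -> about {crop:.2f}x the full-frame depth of field")
```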
Sensor size, noise and dynamic range.
Discounting photo response non-uniformity (PRNU) and dark noise variation, which are not intrinsically sensor-size dependent, the noises in an image sensor are shot noise, read noise, and dark noise. The overall signal to noise ratio of a sensor (SNR), expressed as signal electrons relative to rms noise in electrons, observed at the scale of a single pixel, assuming shot noise from Poisson distribution of signal electrons and dark electrons, is
formula_9
where formula_10 is the incident photon flux (photons per second in the area of a pixel), formula_11 is the quantum efficiency, formula_12 is the exposure time, formula_13 is the pixel dark current in electrons per second and formula_14 is the pixel read noise in electrons rms.
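A small numerical sketch of this single-pixel SNR, taking the signal as PQt and the shot, dark and read noise added in quadrature, is given below with assumed illustrative values.

```python
import math

# Sketch of the single-pixel SNR: signal P*Q*t over the quadrature sum of
# shot noise, dark shot noise and read noise. All values are assumed.
P = 1.0e5  # photons/s incident on the pixel
Q = 0.5    # quantum efficiency
t = 0.01   # s, exposure time
D = 10.0   # e-/s, pixel dark current
N_r = 3.0  # e- rms, read noise

signal = P * Q * t
snr = signal / math.sqrt(signal + D * t + N_r**2)
print(f"SNR = {snr:.1f} ({20 * math.log10(snr):.1f} dB)")  # ~22 (27 dB) here
```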
Each of these noises has a different dependency on sensor size.
Exposure and photon flux.
Image sensor noise can be compared across formats for a given fixed photon flux per pixel area (the "P" in the formulas); this analysis is useful for a fixed number of pixels with pixel area proportional to sensor area, and fixed absolute aperture diameter for a fixed imaging situation in terms of depth of field, diffraction limit at the subject, etc. Or it can be compared for a fixed focal-plane illuminance, corresponding to a fixed f-number, in which case "P" is proportional to pixel area, independent of sensor area. The formulas above and below can be evaluated for either case.
Shot noise.
In the above equation, the shot noise SNR is given by
formula_15.
Apart from the quantum efficiency, it depends on the incident photon flux and the exposure time, which together are equivalent to the exposure and the sensor area, because the exposure is the integration time multiplied by the image-plane illuminance, and illuminance is the luminous flux per unit area. Thus for equal exposures, the signal to noise ratios of two different size sensors of equal quantum efficiency and pixel count will (for a given final image size) be in proportion to the square root of the sensor area (or the linear scale factor of the sensor). If the exposure is constrained by the need to achieve some required depth of field (with the same shutter speed), then the exposures will be in inverse relation to the sensor area, producing the interesting result that if depth of field is a constraint, image shot noise is not dependent on sensor area. For identical f-number lenses, the signal to noise ratio increases as the square root of the pixel area, or linearly with pixel pitch. Since typical f-numbers for cell-phone and DSLR lenses fall in the same range of about f/1.5–2, it is instructive to compare the performance of cameras with small and large sensors. A good cell-phone camera with a typical pixel size of 1.1 μm (Samsung A8) would have about 3 times worse SNR due to shot noise than a 3.7 μm pixel interchangeable-lens camera (Panasonic G85) and 5 times worse than a 6 μm full-frame camera (Sony A7 III). Taking the dynamic range into consideration makes the difference even more prominent. As such, the trend of increasing the number of "megapixels" in cell-phone cameras over the last decade was driven more by a marketing strategy to sell "more megapixels" than by attempts to improve image quality.
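The pixel-pitch scaling mentioned above can be illustrated with a short calculation; the pitches are the approximate values quoted in this paragraph, and the linear-scaling assumption holds only for identical f-numbers and exposures.
```python
# At identical f-number and exposure, shot-noise SNR scales linearly with pixel pitch.
pitches_um = {"phone, 1.1 um": 1.1, "interchangeable-lens, 3.7 um": 3.7, "full frame, 6.0 um": 6.0}
reference = pitches_um["phone, 1.1 um"]
for name, pitch in pitches_um.items():
    print(f"{name}: about {pitch / reference:.1f}x the shot-noise SNR of the 1.1 um pixel")
```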
Read noise.
The read noise is the total of all the electronic noises in the conversion chain for the pixels in the sensor array. To compare it with photon noise, it must be referred back to its equivalent in photoelectrons, which requires the division of the noise measured in volts by the conversion gain of the pixel. This is given, for an active pixel sensor, by the voltage at the input (gate) of the read transistor divided by the charge which generates that voltage, formula_16. This is the inverse of the capacitance of the read transistor gate (and the attached floating diffusion) since capacitance formula_17. Thus formula_18.
In general for a planar structure such as a pixel, capacitance is proportional to area, therefore the read noise scales down with sensor area, as long as pixel area scales with sensor area, and that scaling is performed by uniformly scaling the pixel.
Considering the signal to noise ratio due to read noise at a given exposure, the signal will scale as the sensor area along with the read noise and therefore read noise SNR will be unaffected by sensor area. In a depth of field constrained situation, the exposure of the larger sensor will be reduced in proportion to the sensor area, and therefore the read noise SNR will reduce likewise.
Dark noise.
Dark current contributes two kinds of noise: dark offset, which is only partly correlated between pixels, and the shot noise associated with dark offset, which is uncorrelated between pixels. Only the shot-noise component "Dt" is included in the formula above, since the uncorrelated part of the dark offset is hard to predict, and the correlated or mean part is relatively easy to subtract off. The mean dark current contains contributions proportional both to the area and the linear dimension of the photodiode, with the relative proportions and scale factors depending on the design of the photodiode. Thus in general the dark noise of a sensor may be expected to rise as the size of the sensor increases. However, in most sensors the mean pixel dark current at normal temperatures is small, lower than 50 e- per second, thus for typical photographic exposure times dark current and its associated noises may be discounted. At very long exposure times, however, it may be a limiting factor. And even at short or medium exposure times, a few outliers in the dark-current distribution may show up as "hot pixels". Typically, for astrophotography applications sensors are cooled to reduce dark current in situations where exposures may be measured in several hundreds of seconds.
Dynamic range.
Dynamic range is the ratio of the largest and smallest recordable signal, the smallest being typically defined by the 'noise floor'. In the image sensor literature, the noise floor is taken as the readout noise, so formula_19 (note, the read noise formula_20 is the same quantity as formula_14 referred to in the SNR calculation).
Sensor size and diffraction.
The resolution of all optical systems is limited by diffraction. One way of considering the effect that diffraction has on cameras using different sized sensors is to consider the modulation transfer function (MTF). Diffraction is one of the factors that contribute to the overall system MTF. Other factors are typically the MTFs of the lens, anti-aliasing filter and sensor sampling window. The spatial cut-off frequency due to diffraction through a lens aperture is
formula_21
where λ is the wavelength of the light passing through the system and N is the f-number of the lens. If that aperture is circular, as are (approximately) most photographic apertures, then the MTF is given by
formula_22
for formula_23 and formula_24 for formula_25
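A minimal Python sketch of this diffraction-limited MTF is given below; the wavelength and f-number are assumed example values, and the function simply evaluates the two formulas above (frequencies at or beyond the cutoff give zero).
```python
import numpy as np

def diffraction_mtf(xi, wavelength_mm, f_number):
    """Diffraction-limited MTF of a circular aperture at spatial frequency xi (cycles/mm)."""
    xi_cutoff = 1.0 / (wavelength_mm * f_number)
    s = np.clip(np.asarray(xi, dtype=float) / xi_cutoff, 0.0, 1.0)
    return (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s ** 2))

# Assumed example: green light (550 nm = 0.00055 mm) through an f/8 aperture.
print(diffraction_mtf([0, 50, 100, 200], 0.00055, 8).round(3))
```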
The diffraction based factor of the system MTF will therefore scale according to formula_26 and in turn according to formula_27 (for the same light wavelength).
In considering the effect of sensor size, and its effect on the final image, the different magnification required to obtain the same size image for viewing must be accounted for, resulting in an additional scale factor of formula_28 where formula_29 is the relative crop factor, making the overall scale factor formula_30. Considering the three cases above:
For the 'same picture' conditions, same angle of view, subject distance and depth of field, then the f-numbers are in the ratio formula_31, so the scale factor for the diffraction MTF is 1, leading to the conclusion that the diffraction MTF at a given depth of field is independent of sensor size.
In both the 'same photometric exposure' and 'same lens' conditions, the f-number is not changed, and thus the spatial cutoff and resultant MTF on the sensor is unchanged, leaving the MTF in the viewed image to be scaled as the magnification, or inversely as the crop factor.
Sensor format and lens size.
It might be expected that lenses appropriate for a range of sensor sizes could be produced by simply scaling the same designs in proportion to the crop factor. Such an exercise would in theory produce a lens with the same f-number and angle of view, with a size proportional to the sensor crop factor. In practice, simple scaling of lens designs is not always achievable, due to factors such as the non-scalability of manufacturing tolerance, structural integrity of glass lenses of different sizes and available manufacturing techniques and costs. Moreover, to maintain the same absolute amount of information in an image (which can be measured as the space-bandwidth product) the lens for a smaller sensor requires a greater resolving power. The development of the 'Tessar' lens is discussed by Nasse, and shows its transformation from an f/6.3 lens for plate cameras using the original three-group configuration through to an f/2.8 5.2 mm four-element optic with eight extremely aspheric surfaces, economically manufacturable because of its small size. Its performance is 'better than the best 35 mm lenses – but only for a very small image'.
In summary, as sensor size reduces, the accompanying lens designs will change, often quite radically, to take advantage of manufacturing techniques made available due to the reduced size. The functionality of such lenses can also take advantage of these, with extreme zoom ranges becoming possible. These lenses are often very large in relation to sensor size, but with a small sensor can be fitted into a compact package.
A small body requires a small lens, which in turn means a small sensor, so to keep smartphones slim and light, smartphone manufacturers use tiny sensors, usually smaller than the 1/2.3" type used in most bridge cameras. At one time only the Nokia 808 PureView used a 1/1.2" sensor, almost three times the size of a 1/2.3" sensor. Bigger sensors have the advantage of better image quality, but with improvements in sensor technology, smaller sensors can achieve the feats of earlier larger sensors. These improvements in sensor technology allow smartphone manufacturers to use image sensors as small as 1/4" without sacrificing too much image quality compared to budget point & shoot cameras.
Active area of the sensor.
For calculating the camera's angle of view, the size of the active area of the sensor should be used.
The active area of the sensor is the area on which the image is formed in a given mode of the camera. The active area may be smaller than the image sensor as a whole, and it can differ between different modes of operation of the same camera.
The size of the active area depends on the aspect ratio of the sensor and the aspect ratio of the output image of the camera, and it can also depend on the number of pixels used in a given mode of the camera.
The active area size and the lens focal length determine the angle of view.
Sensor size and shading effects.
Semiconductor image sensors can suffer from shading effects at large apertures and at the periphery of the image field, due to the geometry of the light cone projected from the exit pupil of the lens to a point, or pixel, on the sensor surface. The effects are discussed in detail by Catrysse and Wandell.
In the context of this discussion the most important result from the above is that to ensure a full transfer of light energy between two coupled optical systems such as the lens' exit pupil to a pixel's photoreceptor the geometrical extent (also known as etendue or light throughput) of the objective lens / pixel system must be smaller than or equal to the geometrical extent of the microlens / photoreceptor system. The geometrical extent of the objective lens / pixel system is given by
formula_32
where "w"pixel is the width of the pixel and ("f"/#)objective is the f-number of the objective lens. The geometrical extent of the microlens / photoreceptor system is given by
formula_33
where "w"photoreceptor is the width of the photoreceptor and ("f"/#)microlens is the f-number of the microlens.
In order to avoid shading, formula_34 therefore
formula_35
If "w"photoreceptor / "w"pixel = "ff", the linear fill factor of the pixel, then the condition becomes
formula_36
Thus if shading is to be avoided the f-number of the microlens must be smaller than the f-number of the taking lens by at least a factor equal to the linear fill factor of the pixel. The f-number of the microlens is determined ultimately by the width of the pixel and its height above the silicon, which determines its focal length. In turn, this is determined by the height of the metallisation layers, also known as the 'stack height'. For a given stack height, the f-number of the microlenses will increase as pixel size reduces, and thus the objective lens f-number at which shading occurs will tend to increase.
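As a small illustrative calculation of this condition, with assumed, hypothetical values for the objective f-number and fill factor:
```python
def max_microlens_f_number(objective_f_number, linear_fill_factor):
    """Largest microlens f-number that still avoids shading, from the condition
    (f/#)_microlens <= (f/#)_objective * ff."""
    return objective_f_number * linear_fill_factor

# Assumed example: an f/2 objective lens and a pixel with an 80% linear fill factor.
print(max_microlens_f_number(2.0, 0.8))   # 1.6
```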
In order to maintain pixel counts smaller sensors will tend to have smaller pixels, while at the same time smaller objective lens f-numbers are required to maximise the amount of light projected on the sensor. To combat the effect discussed above, smaller format pixels include engineering design features to allow the reduction in f-number of their microlenses. These may include simplified pixel designs which require less metallisation, 'light pipes' built within the pixel to bring its apparent surface closer to the microlens and 'back side illumination' in which the wafer is thinned to expose the rear of the photodetectors and the microlens layer is placed directly on that surface, rather than the front side with its wiring layers.
Common image sensor formats.
For interchangeable-lens cameras.
Some professional DSLRs, SLTs and mirrorless cameras use "full-frame" sensors, equivalent to the size of a frame of 35 mm film.
Most consumer-level DSLRs, SLTs and mirrorless cameras use relatively large sensors, either somewhat under the size of a frame of APS-C film, with a crop factor of 1.5–1.6; or 30% smaller than that, with a crop factor of 2.0 (this is the Four Thirds System, adopted by Olympus and Panasonic).
As of 2013, there is only one mirrorless model equipped with a very small sensor, more typical of compact cameras: the Pentax Q7, with a 1/1.7" sensor (4.55 crop factor). See the section on sensors equipping compact digital cameras and camera phones below.
Many different terms are used in marketing to describe DSLR/SLT/mirrorless sensor formats, including the following:
Obsolescent and out-of-production sensor sizes include:
When full-frame sensors were first introduced, production costs could exceed twenty times the cost of an APS-C sensor. Only twenty full-frame sensors can be produced on an 8-inch (20 cm) silicon wafer, which would fit 100 or more APS-C sensors, and there is a significant reduction in yield due to the large area for contaminants per component. Additionally, full-frame sensor fabrication originally required three separate exposures during each step of the photolithography process, which required separate masks and quality-control steps. Canon selected the intermediate APS-H size, since it was at the time the largest that could be patterned with a single mask, helping to control production costs and manage yields. Newer photolithography equipment now allows single-pass exposures for full-frame sensors, although other size-related production constraints remain much the same.
Due to the ever-changing constraints of semiconductor fabrication and processing, and because camera manufacturers often source sensors from third-party foundries, it is common for sensor dimensions to vary slightly within the same nominal format. For example, the Nikon D3 and D700 cameras' nominally full-frame sensors actually measure 36 × 23.9 mm, slightly smaller than a 36 × 24 mm frame of 35 mm film. As another example, the Pentax K200D's sensor (made by Sony) measures 23.5 × 15.7 mm, while the contemporaneous K20D's sensor (made by Samsung) measures 23.4 × 15.6 mm.
Most of these image sensor formats approximate the 3:2 aspect ratio of 35 mm film. Again, the Four Thirds System is a notable exception, with an aspect ratio of 4:3 as seen in most compact digital cameras (see below).
Smaller sensors.
Most sensors are made for camera phones, compact digital cameras, and bridge cameras. Most image sensors equipping compact cameras have an aspect ratio of 4:3. This matches the aspect ratio of the popular SVGA, XGA, and SXGA display resolutions at the time of the first digital cameras, allowing images to be displayed on usual monitors without cropping.
As of 2010, most compact digital cameras used small 1/2.3" sensors. Such cameras include Canon Powershot SX230 IS, Fuji Finepix Z90 and Nikon Coolpix S9100. Some older digital cameras (mostly from 2005–2010) used even smaller 1/2.5" sensors: these include Panasonic Lumix DMC-FS62, Canon Powershot SX120 IS, Sony Cyber-shot DSC-S700, and Casio Exilim EX-Z80.
As of 2018, high-end compact cameras using one-inch sensors that have nearly four times the area of those equipping common compacts include the Canon PowerShot G-series (G3 X to G9 X), the Sony DSC RX100 series, the Panasonic Lumix TZ100 and the Panasonic DMC-LX15. Canon uses an APS-C sensor in its top model, the PowerShot G1 X Mark III.
Finally, Sony has the DSC-RX1 and DSC-RX1R cameras in their lineup, which have a full-frame sensor usually only used in professional DSLRs, SLTs and MILCs.
Due to the size constraints of powerful zoom objectives, most current bridge cameras have 1/2.3" sensors, as small as those used in common compact cameras. As lens sizes are proportional to the image sensor size, smaller sensors enable large zoom amounts with moderate size lenses. In 2011 the high-end Fujifilm X-S1 was equipped with a much larger 2/3" sensor. In 2013–2014, both Sony (Cyber-shot DSC-RX10) and Panasonic (Lumix DMC-FZ1000) produced bridge cameras with 1" sensors.
The sensors of camera phones are typically much smaller than those of typical compact cameras, allowing greater miniaturization of the electrical and optical components. Sensor sizes of around 1/6" are common in camera phones, webcams and digital camcorders. The Nokia N8 (2010)'s 1/1.83" sensor was the largest in a phone in late 2011. The Nokia 808 (2012) surpassed compact cameras with its 41-megapixel 1/1.2" sensor.
Medium-format digital sensors.
The largest digital sensors in commercially available cameras are described as "medium format", in reference to film formats of similar dimensions. Although the most common medium format film, the 120 roll, is 61 mm (2.4 in) wide and is most commonly shot square, the most common "medium-format" digital sensors are roughly twice the area of a full-frame DSLR sensor format.
Available CCD sensors include Phase One's P65+ digital back with Dalsa's sensor containing 60.5 megapixels and Leica's "S-System" DSLR with a 37-megapixel sensor. In 2010, Pentax released the 40MP 645D medium format DSLR with a CCD sensor; later models of the 645 series kept the same sensor size but replaced the CCD with a CMOS sensor. In 2016, Hasselblad announced the X1D, a 50MP medium-format mirrorless camera, with a CMOS sensor.
In late 2016, Fujifilm also announced its new Fujifilm GFX 50S medium format, mirrorless entry into the market, with a CMOS sensor and 51.4MP.
Table of sensor formats and sizes.
Sensor sizes are expressed in inches notation because at the time of the popularization of digital image sensors they were used to replace video camera tubes. The common 1" outside diameter circular video camera tubes have a rectangular photosensitive area about 16 mm on the diagonal, so a digital sensor with a 16 mm diagonal size is a 1" video tube equivalent. The name of a 1" digital sensor should more accurately be read as "one inch video camera tube equivalent" sensor. Current digital image sensor size descriptors are the video camera tube equivalency size, not the actual size of the sensor. For example, a 1" sensor has a diagonal measurement of about 16 mm.
Sizes are often expressed as a fraction of an inch, with a one in the numerator, and a decimal number in the denominator. For example, 1/2.5 converts to 2/5 as a simple fraction, or 0.4 as a decimal number. This "inch" system gives a result approximately 1.5 times the length of the diagonal of the sensor. This "optical format" measure goes back to the way image sizes of video cameras used until the late 1980s were expressed, referring to the outside diameter of the glass envelope of the video camera tube. David Pogue of "The New York Times" states that "the actual sensor size is much smaller than what the camera companies publish – about one-third smaller." For example, a camera advertising a 1/2.7" sensor does not have a sensor with a diagonal of 9.4 mm (1/2.7 in); instead, the diagonal is closer to 6.7 mm. Instead of "formats", these sensor sizes are often called "types", as in "1/2-inch-type CCD."
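The rule of thumb described above (the nominal "type" is roughly 1.5 times the true diagonal) can be expressed as a one-line conversion; the sketch below is approximate by construction, and the example type designations are common nominal sizes used purely for illustration.
```python
def approx_diagonal_mm(nominal_type_inches):
    """Approximate true sensor diagonal for an 'optical format' type designation,
    using the rule of thumb that the nominal figure is about 1.5x the diagonal."""
    return nominal_type_inches * 25.4 / 1.5

# Assumed example designations: the 1", 1/1.7" and 1/2.3" types.
for t in (1.0, 1 / 1.7, 1 / 2.3):
    print(f'{t:.3f}" type -> about {approx_diagonal_mm(t):.1f} mm diagonal')
```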
Due to inch-based sensor formats not being standardized, their exact dimensions may vary, but those listed are typical. The listed sensor areas span more than a factor of 1000 and are proportional to the maximum possible collection of light and image resolution (same lens speed, i.e., minimum f-number), but in practice are not directly proportional to image noise or resolution due to other limitations. See comparisons. Film format sizes are also included, for comparison. The application examples of phone or camera may not show the exact sensor sizes.
<templatestyles src="Reflist/styles.css" />
Notes.
<templatestyles src="Reflist/styles.css" />
Footnotes and references.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\frac {\\mathrm{DOF}_2} {\\mathrm{DOF}_1} \\approx \\frac {d_1} {d_2}"
},
{
"math_id": 1,
"text": "d_1"
},
{
"math_id": 2,
"text": "d_2"
},
{
"math_id": 3,
"text": " \\frac {\\mathrm{DOF}_2} {\\mathrm{DOF}_1} \\approx \\frac {l_1} {l_2}"
},
{
"math_id": 4,
"text": " l_1"
},
{
"math_id": 5,
"text": "l_2"
},
{
"math_id": 6,
"text": "l_1/l_2"
},
{
"math_id": 7,
"text": " \\frac {\\mathrm{DOF}_2} {\\mathrm{DOF}_1} \\approx \\frac {l_2} {l_1} "
},
{
"math_id": 8,
"text": "\\mathrm{lx = \\, \\frac{lm}{m^2}}"
},
{
"math_id": 9,
"text": " \\mathrm{SNR} = \\frac{P Q_e t}{\\sqrt{\\left(\\sqrt{P Q_e t}\\right)^2 + \\left(\\sqrt{D t}\\right)^2 + N_r^2}} = \\frac{P Q_e t}{\\sqrt{P Q_e t + D t + N_r^2}} "
},
{
"math_id": 10,
"text": "P"
},
{
"math_id": 11,
"text": "Q_e"
},
{
"math_id": 12,
"text": "t"
},
{
"math_id": 13,
"text": "D"
},
{
"math_id": 14,
"text": "N_r"
},
{
"math_id": 15,
"text": "\\frac{P Q_e t}{\\sqrt{P Q_e t}} = \\sqrt{P Q_e t}"
},
{
"math_id": 16,
"text": "CG = V_{rt}/Q_{rt}"
},
{
"math_id": 17,
"text": "C = Q/V"
},
{
"math_id": 18,
"text": "CG = 1/C_{rt}"
},
{
"math_id": 19,
"text": " DR = Q_\\text{max} / \\sigma_\\text{readout}"
},
{
"math_id": 20,
"text": "\\sigma_{readout}"
},
{
"math_id": 21,
"text": "\\xi_\\mathrm{cutoff}=\\frac{1}{\\lambda N}"
},
{
"math_id": 22,
"text": "\\mathrm{MTF}\\left(\\frac{\\xi}{\\xi_\\mathrm{cutoff}}\\right) = \\frac{2}{\\pi} \\left\\{ \\cos^{-1}\\left(\\frac{\\xi}{\\xi_\\mathrm{cutoff}}\\right) - \\left(\\frac{\\xi}{\\xi_\\mathrm{cutoff}}\\right) \\left[ 1 - \\left(\\frac{\\xi}{\\xi_\\mathrm{cutoff}}\\right)^2 \\right]^\\frac{1}{2} \\right\\}"
},
{
"math_id": 23,
"text": " \\xi < \\xi_\\mathrm{cutoff} "
},
{
"math_id": 24,
"text": " 0 "
},
{
"math_id": 25,
"text": " \\xi \\ge \\xi_\\mathrm{cutoff} "
},
{
"math_id": 26,
"text": "\\xi_\\mathrm{cutoff}"
},
{
"math_id": 27,
"text": " 1/N "
},
{
"math_id": 28,
"text": "1/{C}"
},
{
"math_id": 29,
"text": "{C}"
},
{
"math_id": 30,
"text": "1 / (N C)"
},
{
"math_id": 31,
"text": "1/C"
},
{
"math_id": 32,
"text": " G_\\mathrm{objective} \\simeq \\frac{w_\\mathrm{pixel}}{2{(f/\\#)}_\\mathrm{objective}}\\,, "
},
{
"math_id": 33,
"text": " G_\\mathrm{pixel} \\simeq \\frac{w_\\mathrm{photoreceptor}}{2{(f/\\#)}_\\mathrm{microlens}}\\,, "
},
{
"math_id": 34,
"text": " G_\\mathrm{pixel} \\ge G_\\mathrm{objective},"
},
{
"math_id": 35,
"text": " \\frac{w_\\mathrm{photoreceptor}}{{(f/\\#)}_\\mathrm{microlens}} \\ge \\frac{w_\\mathrm{pixel}}{{(f/\\#)}_\\mathrm{objective}}."
},
{
"math_id": 36,
"text": " {(f/\\#)}_\\mathrm{microlens} \\le {(f/\\#)}_\\mathrm{objective} \\times \\mathit{ff}\\,."
}
] | https://en.wikipedia.org/wiki?curid=10497504 |
10499606 | Slater's rules | Semi-empirical rules for quantum chemistry
In quantum chemistry, Slater's rules provide numerical values for the effective nuclear charge in a many-electron atom. Each electron is said to experience less than the actual nuclear charge, because of shielding or screening by the other electrons. For each electron in an atom, Slater's rules provide a value for the screening constant, denoted by "s", "S", or "σ", which relates the effective and actual nuclear charges as
formula_0
The rules were devised semi-empirically by John C. Slater and published in 1930.
Revised values of screening constants based on computations of atomic structure by the Hartree–Fock method were obtained by Enrico Clementi et al. in the 1960s.
Rules.
Firstly, the electrons are arranged into a sequence of groups in order of increasing principal quantum number n, and for equal n in order of increasing azimuthal quantum number l, except that s- and p- orbitals are kept together.
[1s] [2s,2p] [3s,3p] [3d] [4s,4p] [4d] [4f] [5s, 5p] [5d] etc.
Each group is given a different shielding constant which depends upon the number and types of electrons in those groups preceding it.
The shielding constant for each group is formed as the "sum" of the following contributions:
In tabular form, the rules are summarized as:
Example.
An example provided in Slater's original paper is for the iron atom which has nuclear charge 26 and electronic configuration 1s22s22p63s23p63d64s2. The screening constant, and subsequently the shielded (or effective) nuclear charge for each electron is deduced as:
formula_1
Note that the effective nuclear charge is calculated by subtracting the screening constant from the atomic number, 26.
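The table above can be reproduced with a short script. The following Python sketch hard-codes the grouping and screening contributions for this particular configuration of iron, rather than implementing the rules in full generality.
```python
# Electron groups of iron, listed from innermost outwards, with the
# screening contributions used in the iron example above.
Z = 26
groups = {"1s": 2, "2s,2p": 8, "3s,3p": 8, "3d": 6, "4s": 2}   # electrons per group

def screening(group):
    """Screening constant s for one electron in the given group (iron only)."""
    if group == "1s":
        return 0.30 * (groups["1s"] - 1)
    if group == "2s,2p":
        return 0.35 * (groups["2s,2p"] - 1) + 0.85 * groups["1s"]
    if group == "3s,3p":
        return 0.35 * (groups["3s,3p"] - 1) + 0.85 * groups["2s,2p"] + 1.00 * groups["1s"]
    if group == "3d":
        return 0.35 * (groups["3d"] - 1) + 1.00 * (groups["1s"] + groups["2s,2p"] + groups["3s,3p"])
    if group == "4s":
        return (0.35 * (groups["4s"] - 1) + 0.85 * (groups["3s,3p"] + groups["3d"])
                + 1.00 * (groups["1s"] + groups["2s,2p"]))

for g in groups:
    s = screening(g)
    print(f"{g}: s = {s:.2f}, Z_eff = {Z - s:.2f}")
```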
Motivation.
The rules were developed by John C. Slater in an attempt to construct simple analytic expressions for the atomic orbital of any electron in an atom. Specifically, for each electron in an atom, Slater wished to determine shielding constants ("s") and "effective" quantum numbers ("n"*) such that
formula_2
provides a reasonable approximation to a single-electron wave function. Slater defined "n"* by the rule that for n = 1, 2, 3, 4, 5, 6 respectively, "n"* = 1, 2, 3, 3.7, 4.0 and 4.2. This was an arbitrary adjustment to fit calculated atomic energies to experimental data.
Such a form was inspired by the known wave function spectrum of hydrogen-like atoms which have the radial component
formula_3
where "n" is the (true) principal quantum number, "l" the azimuthal quantum number, and "f""nl"("r") is an oscillatory polynomial with "n" - "l" - 1 nodes. Slater argued on the basis of previous calculations by Clarence Zener that the presence of radial nodes was not required to obtain a reasonable approximation. He also noted that in the asymptotic limit (far away from the nucleus), his approximate form coincides with the exact hydrogen-like wave function in the presence of a nuclear charge of "Z"-"s" and in the state with a principal quantum number n equal to his effective quantum number "n"*.
Slater then argued, again based on the work of Zener, that the total energy of a "N"-electron atom with a wavefunction constructed from orbitals of his form should be well approximated as
formula_4
Using this expression for the total energy of an atom (or ion) as a function of the shielding constants and effective quantum numbers, Slater was able to compose rules such that spectral energies calculated agree reasonably well with experimental values for a wide range of atoms. Using the values in the iron example above, the total energy of a neutral iron atom using this method is −2497.2 Ry, while the energy of an excited Fe+ cation lacking a single 1s electron is −1964.6 Ry. The difference, 532.6 Ry, can be compared to the experimental (circa 1930) K absorption limit of 524.0 Ry.
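As a rough check of the figures quoted above, the following Python sketch evaluates Slater's energy expression for the neutral iron atom, using the shielded charges from the example above and the effective quantum numbers "n"* = 1, 2, 3, 3 and 3.7 for the respective groups.
```python
# Shielded charges Z_eff from the iron table above, effective quantum numbers n*,
# and electron counts per group; energies in Rydberg units.
shells = [
    # (Z_eff, n_star, electrons)
    (25.70, 1.0, 2),   # 1s
    (21.85, 2.0, 8),   # 2s,2p
    (14.75, 3.0, 8),   # 3s,3p
    (6.25,  3.0, 6),   # 3d
    (3.75,  3.7, 2),   # 4s
]
E = -sum(n_e * (z_eff / n_star) ** 2 for z_eff, n_star, n_e in shells)
print(f"Slater estimate for neutral Fe: {E:.1f} Ry")   # about -2497 Ry
```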
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Z_{\\mathrm{eff}}= Z - s.\\,"
},
{
"math_id": 1,
"text": "\n\\begin{matrix}\n 4s &: s = 0.35 \\times 1& + &0.85 \\times 14 &+& 1.00 \\times 10 &=& 22.25 &\\Rightarrow& Z_{\\mathrm{eff}}(4s) = 26.00 - 22.25 = 3.75\\\\\n 3d &: s = 0.35 \\times 5& & &+& 1.00 \\times 18 &=& 19.75 &\\Rightarrow& Z_{\\mathrm{eff}}(3d)= 26.00 - 19.75 =6.25\\\\\n3s,3p &: s = 0.35 \\times 7& + &0.85 \\times 8 &+& 1.00 \\times 2 &=& 11.25 &\\Rightarrow& Z_{\\mathrm{eff}}(3s,3p)= 26.00 - 11.25 =14.75\\\\\n2s,2p &: s = 0.35 \\times 7& + &0.85 \\times 2 & & &=& 4.15 &\\Rightarrow& Z_{\\mathrm{eff}}(2s,2p)= 26.00 - 4.15 =21.85\\\\\n1s &: s = 0.30 \\times 1& & & & &=& 0.30 &\\Rightarrow& Z_{\\mathrm{eff}}(1s)= 26.00 - 0.30 =25.70\n\\end{matrix}\n"
},
{
"math_id": 2,
"text": "\\psi_{n^{*}s}(r) = r^{n^{*}-1}\\exp\\left(-\\frac{(Z-s)r}{n^{*}}\\right)"
},
{
"math_id": 3,
"text": "R_{nl}(r) = r^{l}f_{nl}(r)\\exp\\left(-\\frac{Zr}{n}\\right),"
},
{
"math_id": 4,
"text": "E = -\\sum_{i=1}^{N}\\left(\\frac{Z-s_{i}}{n^{*}_{i}}\\right)^{2}."
}
] | https://en.wikipedia.org/wiki?curid=10499606 |
10499629 | Hochschild homology | Theory for associative algebras over rings
In mathematics, Hochschild homology (and cohomology) is a homology theory for associative algebras over rings. There is also a theory for Hochschild homology of certain functors. Hochschild cohomology was introduced by Gerhard Hochschild (1945) for algebras over a field, and extended to algebras over more general rings by Henri Cartan and Samuel Eilenberg (1956).
Definition of Hochschild homology of algebras.
Let "k" be a field, "A" an associative "k"-algebra, and "M" an "A"-bimodule. The enveloping algebra of "A" is the tensor product formula_0 of "A" with its opposite algebra. Bimodules over "A" are essentially the same as modules over the enveloping algebra of "A", so in particular "A" and "M" can be considered as "Ae"-modules. Hochschild (1945) defined the Hochschild homology and cohomology groups of "A" with coefficients in "M" in terms of the Tor functor and Ext functor by
formula_1
formula_2
Hochschild complex.
Let "k" be a ring, "A" an associative "k"-algebra that is a projective "k"-module, and "M" an "A"-bimodule. We will write formula_3 for the "n"-fold tensor product of "A" over "k". The chain complex that gives rise to Hochschild homology is given by
formula_4
with boundary operator formula_5 defined by
formula_6
where formula_7 is in "A" for all formula_8 and formula_9. If we let
formula_10
then formula_11, so formula_12 is a chain complex called the Hochschild complex, and its homology is the Hochschild homology of "A" with coefficients in "M". Henceforth, we will write formula_13 as simply formula_14.
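The identity formula_11 can be checked by direct computation for a small example. The following Python sketch (purely illustrative) takes the two-dimensional algebra "A" = "k"["e"]/("e"2) over "k" = Q with "M" = "A", builds the boundary matrices in the obvious monomial bases, and verifies that consecutive boundaries compose to zero.
```python
import itertools
import numpy as np

# Basis of A = k[e]/(e^2): index 0 is 1, index 1 is e.
mult = np.zeros((2, 2, 2))
mult[0, 0, 0] = 1   # 1*1 = 1
mult[0, 1, 1] = 1   # 1*e = e
mult[1, 0, 1] = 1   # e*1 = e
#                     e*e = 0, so mult[1, 1, :] stays zero

def chain_basis(n):
    """Basis of C_n(A, A) = A (x) A^{(x)n}: tuples (m, a_1, ..., a_n) of basis indices."""
    return list(itertools.product(range(2), repeat=n + 1))

def boundary(n):
    """Matrix of b = sum_i (-1)^i d_i : C_n -> C_{n-1} in these bases."""
    src, tgt = chain_basis(n), chain_basis(n - 1)
    index = {t: i for i, t in enumerate(tgt)}
    B = np.zeros((len(tgt), len(src)))
    for col, t in enumerate(src):
        for i in range(n + 1):
            if i < n:                      # d_i multiplies tensor factors i and i+1
                prod = mult[t[i], t[i + 1]]
                head, tail = t[:i], t[i + 2:]
            else:                          # d_n moves a_n to the front, acting on m
                prod = mult[t[n], t[0]]
                head, tail = (), t[1:n]
            for k in range(2):
                if prod[k]:
                    B[index[head + (k,) + tail], col] += (-1) ** i * prod[k]
    return B

for n in range(2, 5):
    assert np.allclose(boundary(n - 1) @ boundary(n), 0)
print("b o b = 0 verified on C_2, C_3, C_4 for A = k[e]/(e^2)")
```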
Remark.
The maps formula_5 are face maps making the family of modules formula_15 a simplicial object in the category of "k"-modules, i.e., a functor Δo → "k"-mod, where Δ is the simplex category and "k"-mod is the category of "k"-modules. Here Δo is the opposite category of Δ. The degeneracy maps are defined by
formula_16
Hochschild homology is the homology of this simplicial module.
Relation with the Bar complex.
There is a similar-looking complex formula_17 called the Bar complex which is formally very similar to the Hochschild complex. In fact, the Hochschild complex formula_18 can be recovered from the Bar complex as formula_19 giving an explicit isomorphism.
As a derived self-intersection.
There is another useful interpretation of the Hochschild complex in the case of commutative rings, and more generally, for sheaves of commutative rings: it is constructed from the derived self-intersection of a scheme (or even derived scheme) formula_20 over some base scheme formula_21. For example, we can form the derived fiber product formula_22 which has the sheaf of derived rings formula_23. Then, if we embed formula_20 with the diagonal map formula_24 the Hochschild complex is constructed as the pullback of the derived self-intersection of the diagonal in the diagonal product scheme formula_25 From this interpretation, it should be clear that the Hochschild homology should have some relation to the Kähler differentials formula_26, since the Kähler differentials can be defined using a self-intersection from the diagonal, or more generally, the cotangent complex formula_27, since this is the derived replacement for the Kähler differentials. We can recover the original definition of the Hochschild complex of a commutative formula_28-algebra formula_29 by setting formula_30 and formula_31 Then, the Hochschild complex is quasi-isomorphic to formula_32 If formula_29 is a flat formula_28-algebra, then there is the chain of isomorphisms formula_33 giving an alternative but equivalent presentation of the Hochschild complex.
Hochschild homology of functors.
The simplicial circle formula_34 is a simplicial object in the category formula_35 of finite pointed sets, i.e., a functor formula_36 Thus, if "F" is a functor formula_37, we get a simplicial module by composing "F" with formula_34.
formula_38
The homology of this simplicial module is the Hochschild homology of the functor "F". The above definition of Hochschild homology of commutative algebras is the special case where "F" is the Loday functor.
Loday functor.
A skeleton for the category of finite pointed sets is given by the objects
formula_39
where 0 is the basepoint, and the morphisms are the basepoint preserving set maps. Let "A" be a commutative k-algebra and "M" be a symmetric "A"-bimodule. The Loday functor formula_40 is given on objects in formula_35 by
formula_41
A morphism
formula_42
is sent to the morphism formula_43 given by
formula_44
where
formula_45
Another description of Hochschild homology of algebras.
The Hochschild homology of a commutative algebra "A" with coefficients in a symmetric "A"-bimodule "M" is the homology associated to the composition
formula_46
and this definition agrees with the one above.
Examples.
The examples of Hochschild homology computations can be stratified into a number of distinct cases with fairly general theorems describing the structure of the homology groups and the homology ring formula_47 for an associative algebra formula_29. For the case of commutative algebras, there are a number of theorems describing the computations over characteristic 0 yielding a straightforward understanding of what the homology and cohomology compute.
Commutative characteristic 0 case.
In the case of commutative algebras formula_48 where formula_49, the Hochschild homology has two main theorems concerning smooth algebras and more general non-flat algebras formula_29; the second is a direct generalization of the first. In the smooth case, i.e. for a smooth algebra formula_29, the Hochschild–Kostant–Rosenberg theorem states that there is an isomorphism formula_50 for every formula_51. This isomorphism can be described explicitly using the anti-symmetrization map. That is, a differential formula_52-form has the map formula_53
If the algebra formula_48 isn't smooth, or even flat, then there is an analogous theorem using the cotangent complex. For a simplicial resolution formula_54, we set formula_55. Then, there exists a descending formula_56-filtration formula_57 on formula_58 whose graded pieces are isomorphic to formula_59
Note this theorem makes it accessible to compute the Hochschild homology not just for smooth algebras, but also for local complete intersection algebras. In this case, given a presentation formula_60 for formula_61, the cotangent complex is the two-term complex formula_62.
Polynomial rings over the rationals.
One simple example is to compute the Hochschild homology of a polynomial ring over formula_63 with formula_52 generators. The HKR theorem gives the isomorphism formula_64 where the algebra formula_65 is the free antisymmetric algebra over formula_63 in formula_52 generators. Its product structure is given by the wedge product of vectors, so formula_66 for formula_67.
Commutative characteristic p case.
In the characteristic p case, there is a useful counter-example to the Hochschild–Kostant–Rosenberg theorem which elucidates the need for a theory beyond simplicial algebras for defining Hochschild homology. Consider the formula_68-algebra formula_69. We can compute a resolution of formula_69 as the free differential graded algebra formula_70 giving the derived intersection formula_71 where formula_72 and the differential is the zero map. This is because we just tensor the complex above by formula_69, giving a formal complex with a generator in degree formula_73 which squares to formula_74. Then, the Hochschild complex is given by formula_75 In order to compute this, we must resolve formula_69 as an formula_76-algebra. Observe that the algebra structure
formula_77
forces formula_78. This gives the degree zero term of the complex. Then, because we have to resolve the kernel formula_79, we can take a copy of formula_76 shifted in degree formula_80 and have it map to formula_79, with kernel in degree formula_81: formula_82 We can perform this recursively to get the underlying module of the divided power algebra formula_83 with formula_84, where the degree of formula_85 is formula_86, namely formula_87. Tensoring this algebra with formula_69 over formula_76 gives formula_88 since formula_89 multiplied with any element in formula_69 is zero. The algebra structure comes from the general theory of divided power algebras and differential graded algebras. Note that this computation is seen as a technical artifact because the ring formula_90 is not well behaved. For instance, formula_91. One technical response to this problem is through topological Hochschild homology, where the base ring formula_68 is replaced by the sphere spectrum formula_92.
Topological Hochschild homology.
The above construction of the Hochschild complex can be adapted to more general situations, namely by replacing the category of (complexes of) formula_28-modules by an ∞-category (equipped with a tensor product) formula_93, and formula_29 by an associative algebra in this category. Applying this to the category formula_94 of spectra, and formula_29 being the Eilenberg–MacLane spectrum associated to an ordinary ring formula_95 yields topological Hochschild homology, denoted formula_96. The (non-topological) Hochschild homology introduced above can be reinterpreted along these lines, by taking for "formula_97" the derived category of formula_98-modules (as an ∞-category).
Replacing tensor products over the sphere spectrum by tensor products over formula_98 (or the Eilenberg–MacLane-spectrum formula_99) leads to a natural comparison map formula_100. It induces an isomorphism on homotopy groups in degrees 0, 1, and 2. In general, however, they are different, and formula_101 tends to yield simpler groups than HH. For example,
formula_102
is the polynomial ring (with "x" in degree 2), whereas
formula_103
is the ring of divided powers in one variable.
Lars Hesselholt (2016) showed that the Hasse–Weil zeta function of a smooth proper variety over formula_69 can be expressed using regularized determinants involving topological Hochschild homology. | [
{
"math_id": 0,
"text": "A^e=A\\otimes A^o"
},
{
"math_id": 1,
"text": " HH_n(A,M) = \\operatorname{Tor}_n^{A^e}(A, M)"
},
{
"math_id": 2,
"text": " HH^n(A,M) = \\operatorname{Ext}^n_{A^e}(A, M)"
},
{
"math_id": 3,
"text": "A^{\\otimes n}"
},
{
"math_id": 4,
"text": " C_n(A,M) := M \\otimes A^{\\otimes n} "
},
{
"math_id": 5,
"text": "d_i"
},
{
"math_id": 6,
"text": "\\begin{align}\nd_0(m\\otimes a_1 \\otimes \\cdots \\otimes a_n) &= ma_1 \\otimes a_2 \\cdots \\otimes a_n \\\\\nd_i(m\\otimes a_1 \\otimes \\cdots \\otimes a_n) &= m\\otimes a_1 \\otimes \\cdots \\otimes a_i a_{i+1} \\otimes \\cdots \\otimes a_n \\\\\nd_n(m\\otimes a_1 \\otimes \\cdots \\otimes a_n) &= a_n m\\otimes a_1 \\otimes \\cdots \\otimes a_{n-1} \n\\end{align}"
},
{
"math_id": 7,
"text": "a_i"
},
{
"math_id": 8,
"text": "1\\le i\\le n"
},
{
"math_id": 9,
"text": "m\\in M"
},
{
"math_id": 10,
"text": " b_n=\\sum_{i=0}^n (-1)^i d_i, "
},
{
"math_id": 11,
"text": "b_{n-1} \\circ b_{n} =0"
},
{
"math_id": 12,
"text": "(C_n(A,M),b_n)"
},
{
"math_id": 13,
"text": "b_n"
},
{
"math_id": 14,
"text": "b"
},
{
"math_id": 15,
"text": "(C_n(A,M),b)"
},
{
"math_id": 16,
"text": "s_i(a_0 \\otimes \\cdots \\otimes a_n) = a_0 \\otimes \\cdots \\otimes a_i \\otimes 1 \\otimes a_{i+1} \\otimes \\cdots \\otimes a_n."
},
{
"math_id": 17,
"text": "B(A/k)"
},
{
"math_id": 18,
"text": "HH(A/k)"
},
{
"math_id": 19,
"text": "HH(A/k) \\cong A\\otimes_{A\\otimes A^{op}} B(A/k)"
},
{
"math_id": 20,
"text": "X"
},
{
"math_id": 21,
"text": "S"
},
{
"math_id": 22,
"text": "X\\times^\\mathbf{L}_SX"
},
{
"math_id": 23,
"text": "\\mathcal{O}_X\\otimes_{\\mathcal{O}_S}^\\mathbf{L}\\mathcal{O}_X"
},
{
"math_id": 24,
"text": "\\Delta: X \\to X\\times^\\mathbf{L}_SX"
},
{
"math_id": 25,
"text": "HH(X/S) := \\Delta^*(\\mathcal{O}_X\\otimes_{\\mathcal{O}_X\\otimes_{\\mathcal{O}_S}^\\mathbf{L}\\mathcal{O}_X}^\\mathbf{L}\\mathcal{O}_X)"
},
{
"math_id": 26,
"text": "\\Omega_{X/S}"
},
{
"math_id": 27,
"text": "\\mathbf{L}_{X/S}^\\bullet"
},
{
"math_id": 28,
"text": "k"
},
{
"math_id": 29,
"text": "A"
},
{
"math_id": 30,
"text": "S = \\text{Spec}(k)"
},
{
"math_id": 31,
"text": "X = \\text{Spec}(A)"
},
{
"math_id": 32,
"text": "HH(A/k) \\simeq_{qiso} A\\otimes_{A\\otimes_{k}^\\mathbf{L}A}^\\mathbf{L}A "
},
{
"math_id": 33,
"text": "A\\otimes_k^\\mathbf{L}A \\cong A\\otimes_kA \\cong A\\otimes_kA^{op}"
},
{
"math_id": 34,
"text": "S^1"
},
{
"math_id": 35,
"text": "\\operatorname{Fin}_*"
},
{
"math_id": 36,
"text": "\\Delta^o \\to \\operatorname{Fin}_*."
},
{
"math_id": 37,
"text": "F\\colon \\operatorname{Fin} \\to k-\\mathrm{mod}"
},
{
"math_id": 38,
"text": " \\Delta^o \\overset{S^1}{\\longrightarrow} \\operatorname{Fin}_* \\overset{F}{\\longrightarrow} k\\text{-mod}."
},
{
"math_id": 39,
"text": " n_+ = \\{0,1,\\ldots,n\\},"
},
{
"math_id": 40,
"text": "L(A,M)"
},
{
"math_id": 41,
"text": " n_+ \\mapsto M \\otimes A^{\\otimes n}."
},
{
"math_id": 42,
"text": "f:m_+ \\to n_+"
},
{
"math_id": 43,
"text": "f_*"
},
{
"math_id": 44,
"text": " f_*(a_0 \\otimes \\cdots \\otimes a_m) = b_0 \\otimes \\cdots \\otimes b_n "
},
{
"math_id": 45,
"text": "\\forall j \\in \\{0, \\ldots, n \\}: \\qquad b_j =\n\\begin{cases}\n\\prod_{i \\in f^{-1}(j)} a_i & f^{-1}(j) \\neq \\emptyset\\\\\n1 & f^{-1}(j) =\\emptyset\n\\end{cases}"
},
{
"math_id": 46,
"text": "\\Delta^o \\overset{S^1}{\\longrightarrow} \\operatorname{Fin}_* \\overset{\\mathcal{L}(A,M)}{\\longrightarrow} k\\text{-mod},"
},
{
"math_id": 47,
"text": "HH_*(A)"
},
{
"math_id": 48,
"text": "A/k"
},
{
"math_id": 49,
"text": "\\mathbb{Q}\\subseteq k"
},
{
"math_id": 50,
"text": "\\Omega^n_{A/k} \\cong HH_n(A/k)"
},
{
"math_id": 51,
"text": "n \\geq 0"
},
{
"math_id": 52,
"text": "n"
},
{
"math_id": 53,
"text": "a\\,db_1\\wedge \\cdots \\wedge db_n \\mapsto\n\\sum_{\\sigma \\in S_n}\\operatorname{sign}(\\sigma)\n a\\otimes b_{\\sigma(1)}\\otimes \\cdots \\otimes b_{\\sigma(n)}."
},
{
"math_id": 54,
"text": "P_\\bullet \\to A"
},
{
"math_id": 55,
"text": "\\mathbb{L}^i_{A/k} = \\Omega^i_{P_\\bullet/k}\\otimes_{P_\\bullet} A"
},
{
"math_id": 56,
"text": "\\mathbb{N}"
},
{
"math_id": 57,
"text": "F_\\bullet"
},
{
"math_id": 58,
"text": "HH_n(A/k)"
},
{
"math_id": 59,
"text": "\\frac{F_i}{F_{i+1}} \\cong \\mathbb{L}^i_{A/k}[+i]."
},
{
"math_id": 60,
"text": "A = R/I"
},
{
"math_id": 61,
"text": "R = k[x_1,\\dotsc,x_n]"
},
{
"math_id": 62,
"text": "I/I^2 \\to \\Omega^1_{R/k}\\otimes_k A"
},
{
"math_id": 63,
"text": "\\mathbb{Q}"
},
{
"math_id": 64,
"text": "HH_*(\\mathbb{Q}[x_1,\\ldots, x_n]) = \\mathbb{Q}[x_1,\\ldots, x_n]\\otimes \\Lambda(dx_1,\\dotsc, dx_n)"
},
{
"math_id": 65,
"text": "\\bigwedge(dx_1,\\ldots, dx_n)"
},
{
"math_id": 66,
"text": "\\begin{align}\ndx_i\\cdot dx_j &= -dx_j\\cdot dx_i \\\\\ndx_i\\cdot dx_i &= 0 \n\\end{align}"
},
{
"math_id": 67,
"text": "i \\neq j"
},
{
"math_id": 68,
"text": "\\mathbb{Z}"
},
{
"math_id": 69,
"text": "\\mathbb{F}_p"
},
{
"math_id": 70,
"text": "\\mathbb{Z}\\xrightarrow{\\cdot p} \\mathbb{Z}"
},
{
"math_id": 71,
"text": "\\mathbb{F}_p\\otimes^\\mathbf{L}_\\mathbb{Z}\\mathbb{F}_p \\cong \\mathbb{F}_p[\\varepsilon]/(\\varepsilon^2)"
},
{
"math_id": 72,
"text": "\\text{deg}(\\varepsilon) = 1"
},
{
"math_id": 73,
"text": "1"
},
{
"math_id": 74,
"text": "0"
},
{
"math_id": 75,
"text": "\\mathbb{F}_p\\otimes^\\mathbb{L}_{\\mathbb{F}_p\\otimes^\\mathbb{L}_\\mathbb{Z} \\mathbb{F}_p}\\mathbb{F}_p"
},
{
"math_id": 76,
"text": "\\mathbb{F}_p\\otimes^\\mathbf{L}_\\mathbb{Z}\\mathbb{F}_p"
},
{
"math_id": 77,
"text": "\\mathbb{F}_p[\\varepsilon]/(\\varepsilon^2) \\to \\mathbb{F}_p"
},
{
"math_id": 78,
"text": "\\varepsilon \\mapsto 0"
},
{
"math_id": 79,
"text": "\\varepsilon \\cdot \\mathbb{F}_p\\otimes^\\mathbf{L}_\\mathbb{Z}\\mathbb{F}_p"
},
{
"math_id": 80,
"text": "2"
},
{
"math_id": 81,
"text": "3"
},
{
"math_id": 82,
"text": "\\varepsilon \\cdot \\mathbb{F}_p\\otimes^\\mathbf{L}_\\mathbb{Z}\\mathbb{F}_p = \\text{Ker}({\\displaystyle \\mathbb {F} _{p}\\otimes _{\\mathbb {Z} }^{\\mathbf {L} }\\mathbb {F} _{p}} \\to {\\displaystyle \\varepsilon \\cdot \\mathbb {F} _{p}\\otimes _{\\mathbb {Z} }^{\\mathbf {L} }\\mathbb {F} _{p}})."
},
{
"math_id": 83,
"text": "(\\mathbb{F}_p\\otimes^\\mathbf{L}_\\mathbb{Z}\\mathbb{F}_p)\\langle x \\rangle = \n\\frac{\n(\\mathbb{F}_p\\otimes^\\mathbf{L}_\\mathbb{Z}\\mathbb{F}_p)[x_1,x_2,\\ldots]\n}{x_ix_j = \\binom{i+j}{i}x_{i+j}}"
},
{
"math_id": 84,
"text": "dx_i = \\varepsilon\\cdot x_{i-1}"
},
{
"math_id": 85,
"text": "x_i"
},
{
"math_id": 86,
"text": "2i"
},
{
"math_id": 87,
"text": "|x_i| = 2i"
},
{
"math_id": 88,
"text": "HH_*(\\mathbb{F}_p) = \\mathbb{F}_p\\langle x \\rangle"
},
{
"math_id": 89,
"text": "\\varepsilon"
},
{
"math_id": 90,
"text": "\\mathbb{F}_p\\langle x \\rangle"
},
{
"math_id": 91,
"text": "x^p = 0"
},
{
"math_id": 92,
"text": "\\mathbb{S}"
},
{
"math_id": 93,
"text": "\\mathcal{C}"
},
{
"math_id": 94,
"text": "\\mathcal{C}=\\textbf{Spectra}"
},
{
"math_id": 95,
"text": "R"
},
{
"math_id": 96,
"text": "THH(R)"
},
{
"math_id": 97,
"text": "\\mathcal{C} = D(\\mathbb{Z})"
},
{
"math_id": 98,
"text": "\\Z"
},
{
"math_id": 99,
"text": "H\\Z"
},
{
"math_id": 100,
"text": "THH(R) \\to HH(R)"
},
{
"math_id": 101,
"text": "THH"
},
{
"math_id": 102,
"text": "THH(\\mathbb{F}_p) = \\mathbb{F}_p[x],"
},
{
"math_id": 103,
"text": "HH(\\mathbb{F}_p) = \\mathbb{F}_p\\langle x \\rangle"
}
] | https://en.wikipedia.org/wiki?curid=10499629 |
10500178 | Geometric Langlands correspondence | Mathematical theory
In mathematics, the geometric Langlands correspondence relates algebraic geometry and representation theory. It is a reformulation of the Langlands correspondence obtained by replacing the number fields appearing in the original number theoretic version by function fields and applying techniques from algebraic geometry. The geometric Langlands conjecture asserts the existence of the geometric Langlands correspondence.
The existence of the geometric Langlands correspondence in the specific case of general linear groups over function fields was proven by Laurent Lafforgue in 2002, where it follows as a consequence of Lafforgue's theorem.
Background.
In mathematics, the classical Langlands correspondence is a collection of results and conjectures relating number theory and representation theory. Formulated by Robert Langlands in the late 1960s, the Langlands correspondence is related to important conjectures in number theory such as the Taniyama–Shimura conjecture, which includes Fermat's Last Theorem as a special case.
Langlands correspondences can be formulated for global fields (as well as local fields), which are classified into number fields or global function fields. Establishing the classical Langlands correspondence, for number fields, has proven extremely difficult. As a result, some mathematicians posed the geometric Langlands correspondence for global function fields, which in some sense have proven easier to deal with.
The geometric Langlands conjecture for general linear groups formula_0 over a function field formula_1 was formulated by Vladimir Drinfeld and Gérard Laumon in 1987.
Status.
The geometric Langlands conjecture was proved for formula_2 by Pierre Deligne and for formula_3 by Drinfeld in 1983.
Laurent Lafforgue proved the geometric Langlands conjecture for formula_0 over a function field formula_1 in 2002.
A claimed proof of the categorical unramified geometric Langlands conjecture was announced on May 6, 2024 by a team of mathematicians including Dennis Gaitsgory. The claimed proof is contained in more than 1,000 pages across five papers and has been called "so complex that almost no one can explain it". Even conveying the significance of the result to other mathematicians was described as "very hard, almost impossible" by Drinfeld.
Connection to physics.
In a paper from 2007, Anton Kapustin and Edward Witten described a connection between the geometric Langlands correspondence and S-duality, a property of certain quantum field theories.
In 2018, when accepting the Abel Prize, Langlands delivered a paper reformulating the geometric program using tools similar to his original Langlands correspondence.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "GL(n,K)"
},
{
"math_id": 1,
"text": "K"
},
{
"math_id": 2,
"text": "GL(1)"
},
{
"math_id": 3,
"text": "GL(2)"
}
] | https://en.wikipedia.org/wiki?curid=10500178 |
1050057 | Quasidihedral group | In mathematics, the quasi-dihedral groups, also called semi-dihedral groups, are certain non-abelian groups of order a power of 2. For every positive integer "n" greater than or equal to 4, there are exactly four isomorphism classes of non-abelian groups of order 2"n" which have a cyclic subgroup of index 2. Two are well known, the generalized quaternion group and the dihedral group. One of the remaining two groups is often considered particularly important, since it is an example of a 2-group of maximal nilpotency class. In Bertram Huppert's text "Endliche Gruppen", this group is called a "Quasidiedergruppe". In Daniel Gorenstein's text, "Finite Groups", this group is called the "semidihedral group". Dummit and Foote refer to it as the "quasidihedral group"; we adopt that name in this article. All give the same presentation for this group:
formula_0.
The other non-abelian 2-group with cyclic subgroup of index 2 is not given a special name in either text, but referred to as just "G" or M"m"(2). When this group has order 16, Dummit and Foote refer to this group as the "modular group of order 16", as its lattice of subgroups is modular. In this article this group will be called the modular maximal-cyclic group of order formula_1. Its presentation is:
formula_2.
Both these two groups and the dihedral group are semidirect products of a cyclic group <"r"> of order 2"n"−1 with a cyclic group <"s"> of order 2. Such a non-abelian semidirect product is uniquely determined by an element of order 2 in the group of units of the ring formula_3 and there are precisely three such elements, formula_4, formula_5, and formula_6, corresponding to the dihedral group, the quasidihedral, and the modular maximal-cyclic group.
The generalized quaternion group, the dihedral group, and the quasidihedral group of order 2"n" all have nilpotency class "n" − 1, and are the only isomorphism classes of groups of order 2"n" with nilpotency class "n" − 1. The groups of order "p""n" and nilpotency class "n" − 1 were the beginning of the classification of all "p"-groups via coclass. The modular maximal-cyclic group of order 2"n" always has nilpotency class 2. This makes the modular maximal-cyclic group less interesting, since most groups of order "p""n" for large "n" have nilpotency class 2 and have proven difficult to understand directly.
The generalized quaternion, the dihedral, and the quasidihedral group are the only 2-groups whose derived subgroup has index 4. The Alperin–Brauer–Gorenstein theorem classifies the simple groups, and to a degree the finite groups, with quasidihedral Sylow 2-subgroups.
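These structural facts can be verified by brute force for small orders. The following Python sketch (an illustrative computation, not a reference implementation) constructs the three semidirect products of order 2"n" described above directly from their defining relations and computes the index of the derived subgroup in each; for "n" = 4 the dihedral and quasidihedral groups give index 4, while the modular maximal-cyclic group gives index 8.
```python
from itertools import product

def make_group(n, t):
    """Group of order 2**n generated by r (order 2**(n-1)) and s (order 2),
    with s*r*s = r**t.  Elements are pairs (a, b) meaning r**a * s**b."""
    M = 2 ** (n - 1)
    def mul(x, y):
        (a1, b1), (a2, b2) = x, y
        a2 = (a2 * pow(t, b1, M)) % M          # move r**a2 past s**b1
        return ((a1 + a2) % M, (b1 + b2) % 2)
    def inv(x):
        # brute-force inverse; fine for such a small group
        return next(y for y in product(range(M), range(2)) if mul(x, y) == (0, 0))
    return mul, inv, list(product(range(M), range(2)))

def derived_subgroup(mul, inv, elems):
    closure = {mul(mul(x, y), mul(inv(x), inv(y))) for x in elems for y in elems}
    while True:                                 # close the commutators under multiplication
        new = {mul(x, y) for x in closure for y in closure} - closure
        if not new:
            return closure
        closure |= new

n = 4
for name, t in [("dihedral", 2**(n-1) - 1),
                ("quasidihedral", 2**(n-2) - 1),
                ("modular maximal-cyclic", 2**(n-2) + 1)]:
    mul, inv, elems = make_group(n, t)
    D = derived_subgroup(mul, inv, elems)
    print(name, "order", len(elems), "derived subgroup index", len(elems) // len(D))
```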
Examples.
The Sylow 2-subgroups of the following groups are quasidihedral: | [
{
"math_id": 0,
"text": "\\langle r,s \\mid r^{2^{n-1}} = s^2 = 1,\\ srs = r^{2^{n-2}-1}\\rangle\\,\\!"
},
{
"math_id": 1,
"text": "2^n"
},
{
"math_id": 2,
"text": "\\langle r,s \\mid r^{2^{n-1}} = s^2 = 1,\\ srs = r^{2^{n-2}+1}\\rangle\\,\\!"
},
{
"math_id": 3,
"text": "\\mathbb{Z}/2^{n-1}\\mathbb{Z}"
},
{
"math_id": 4,
"text": "2^{n-1}-1"
},
{
"math_id": 5,
"text": "2^{n-2}-1"
},
{
"math_id": 6,
"text": "2^{n-2}+1"
}
] | https://en.wikipedia.org/wiki?curid=1050057 |
10500944 | Stochastic game | In game theory, a stochastic game (or Markov game), introduced by Lloyd Shapley in the early 1950s, is a repeated game with probabilistic transitions played by one or more players. The game is played in a sequence of stages. At the beginning of each stage the game is in some state. The players select actions and each player receives a payoff that depends on the current state and the chosen actions. The game then moves to a new random state whose distribution depends on the previous state and the actions chosen by the players. The procedure is repeated at the new state and play continues for a finite or infinite number of stages. The total payoff to a player is often taken to be the discounted sum of the stage payoffs or the limit inferior of the averages of the stage payoffs.
Stochastic games generalize Markov decision processes to multiple interacting decision makers, as well as strategic-form games to dynamic situations in which the environment changes in response to the players’ choices.
Two-player games.
Stochastic two-player games on directed graphs are widely used for modeling and analysis of discrete systems operating in an unknown (adversarial) environment. Possible configurations of a system and its environment are represented as vertices, and the transitions correspond to actions of the system, its environment, or "nature". A run of the system then corresponds to an infinite path in the graph. Thus, a system and its environment can be seen as two players with antagonistic objectives, where one player (the system) aims at maximizing the probability of "good" runs, while the other player (the environment) aims at the opposite.
In many cases, there exists an equilibrium value of this probability, but optimal strategies for both players may not exist.
We introduce basic concepts and algorithmic questions studied in this area, and we mention some long-standing open problems. Then, we mention selected recent results.
Theory.
The ingredients of a stochastic game are: a finite set of players formula_0; a state space formula_1 (either a finite set or a measurable space formula_2); for each player formula_3, an action set formula_4
(either a finite set or a measurable space formula_5); a transition probability formula_6 from formula_7, where formula_8 is the set of action profiles, to formula_1, where formula_9 is the probability that the next state is in formula_1 given the current state formula_10 and the current action profile formula_11; and a payoff function formula_12 from formula_7 to formula_13, where the formula_14-th coordinate of formula_12, formula_15, is the payoff to player formula_14 as a function of the state formula_10 and the action profile formula_11.
The game starts at some initial state formula_16. At stage formula_17, players first observe formula_18, then simultaneously choose actions formula_19, then observe the action profile formula_20, and then nature selects formula_21 according to the probability formula_22. A play of the stochastic game, formula_23,
defines a stream of payoffs formula_24, where formula_25.
The discounted game formula_26 with discount factor formula_27 (formula_28) is the game where the payoff to player formula_14 is formula_29. The formula_30-stage game
is the game where the payoff to player formula_14 is formula_31.
The value formula_32, respectively formula_33, of a two-person zero-sum stochastic game formula_34, respectively formula_35, with finitely many states and actions exists, and Truman Bewley and Elon Kohlberg (1976) proved that formula_32 converges to a limit as formula_30 goes to infinity and that formula_33 converges to the same limit as formula_36 goes to formula_37.
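For the discounted game, Shapley's original argument also yields an algorithm: the value formula_33 is the unique fixed point of an operator that, at each state, solves the zero-sum matrix game combining the current payoff with the discounted continuation values. The following Python sketch (using scipy for the linear programs, with randomly generated game data as a stand-in for a real example) illustrates this value iteration under the normalized discounting used above.
```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(Q):
    """Value of the zero-sum matrix game Q (row player maximizes), via an LP."""
    m, n = Q.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                   # maximize v  <=>  minimize -v
    A_ub = np.hstack([-Q.T, np.ones((n, 1))])      # v <= x^T Q e_j for every column j
    b_ub = np.zeros(n)
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0                              # mixed strategy sums to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    assert res.success
    return res.x[-1]

def shapley_value_iteration(g, P, lam, tol=1e-8):
    """Discounted value v_lambda(s): fixed point of
    v(s) = val[ lam * g(s,.,.) + (1 - lam) * E_{s'} v(s') ]."""
    v = np.zeros(g.shape[0])
    while True:
        v_new = np.array([
            matrix_game_value(lam * g[s] + (1 - lam) * np.tensordot(P[s], v, axes=([2], [0])))
            for s in range(g.shape[0])
        ])
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new

# Assumed example data: a random game with 2 states and 2 actions per player.
rng = np.random.default_rng(0)
S, m, n = 2, 2, 2
g = rng.uniform(-1, 1, size=(S, m, n))             # stage payoffs to player 1
P = rng.dirichlet(np.ones(S), size=(S, m, n))      # transition probabilities
print(shapley_value_iteration(g, P, lam=0.2))
```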
The "undiscounted" game formula_38 is the game where the payoff to player formula_14 is the "limit" of the averages of the stage payoffs. Some precautions are needed in defining the value of a two-person zero-sum formula_39 and in defining equilibrium payoffs of a non-zero-sum formula_39. The uniform value formula_40 of a two-person zero-sum stochastic game formula_38 exists if for every formula_41 there is a positive integer formula_42 and a strategy pair formula_43 of player 1 and formula_44 of player 2 such that for every formula_45 and formula_46 and every formula_47 the expectation of formula_48 with respect to the probability on plays defined by formula_49 and formula_46 is at least formula_50, and the expectation of formula_48 with respect to the probability on plays defined by formula_51 and formula_44 is at most formula_52. Jean-François Mertens and Abraham Neyman (1981) proved that every two-person zero-sum stochastic game with finitely many states and actions has a uniform value.
If there is a finite number of players and the action sets and the set of states are finite, then a stochastic game with a finite number of stages always has a Nash equilibrium. The same is true for a game with infinitely many stages if the total payoff is the discounted sum.
The non-zero-sum stochastic game formula_38 has a uniform equilibrium payoff formula_40 if for every formula_41 there is a positive integer formula_42 and a strategy profile formula_45 such that for every unilateral deviation by a player formula_14, i.e., a strategy profile formula_46 with formula_53 for all formula_54, and every formula_47 the expectation of formula_48 with respect to the probability on plays defined by formula_45 is at least formula_55, and the expectation of formula_48 with respect to the probability on plays defined by formula_46 is at most formula_56. Nicolas Vieille has shown that all two-person stochastic games with finite state and action spaces have a uniform equilibrium payoff.
The non-zero-sum stochastic game formula_38 has a limiting-average equilibrium payoff formula_40 if for every formula_41 there is a strategy profile formula_45 such that for every unilateral deviation by a player formula_14, the expectation of the limit inferior of the averages of the stage payoffs with respect to the probability on plays defined by formula_45 is at least formula_55, and the expectation of the limit superior of the averages of the stage payoffs with respect to the probability on plays defined by formula_46 is at most formula_56. Jean-François Mertens and Abraham Neyman (1981) proved that every two-person zero-sum stochastic game with finitely many states and actions has a limiting-average value, and Nicolas Vieille has shown that all two-person stochastic games with finite state and action spaces have a limiting-average equilibrium payoff. In particular, these results imply that these games have a value and an approximate equilibrium payoff, called the liminf-average (respectively, the limsup-average) equilibrium payoff, when the total payoff is the limit inferior (or the limit superior) of the averages of the stage payoffs.
Whether every stochastic game with finitely many players, states, and actions, has a uniform equilibrium payoff, or a limiting-average equilibrium payoff, or even a liminf-average equilibrium payoff, is a challenging open question.
A Markov perfect equilibrium is a refinement of the concept of sub-game perfect Nash equilibrium to stochastic games.
Stochastic games have been combined with Bayesian games to model uncertainty over player strategies. The resulting stochastic Bayesian game model is solved via a recursive combination of the Bayesian Nash equilibrium equation and the Bellman optimality equation.
Applications.
Stochastic games have applications in economics, evolutionary biology and computer networks. They are generalizations of repeated games which correspond to the special case where there is only one state.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "I"
},
{
"math_id": 1,
"text": "S"
},
{
"math_id": 2,
"text": "(S,{\\mathcal S})"
},
{
"math_id": 3,
"text": "i\\in I"
},
{
"math_id": 4,
"text": "A^i"
},
{
"math_id": 5,
"text": "(A^i,{\\mathcal A}^i)"
},
{
"math_id": 6,
"text": "P"
},
{
"math_id": 7,
"text": "S\\times A"
},
{
"math_id": 8,
"text": "A=\\times_{i\\in I}A^i"
},
{
"math_id": 9,
"text": "P(S \\mid s, a)"
},
{
"math_id": 10,
"text": "s"
},
{
"math_id": 11,
"text": "a"
},
{
"math_id": 12,
"text": "g"
},
{
"math_id": 13,
"text": "R^I"
},
{
"math_id": 14,
"text": "i"
},
{
"math_id": 15,
"text": "g^i"
},
{
"math_id": 16,
"text": "s_1"
},
{
"math_id": 17,
"text": "t"
},
{
"math_id": 18,
"text": "s_t"
},
{
"math_id": 19,
"text": "a^i_t\\in A^i"
},
{
"math_id": 20,
"text": "a_t=(a^i_t)_i"
},
{
"math_id": 21,
"text": "s_{t+1}"
},
{
"math_id": 22,
"text": "P(\\cdot\\mid s_t,a_t)"
},
{
"math_id": 23,
"text": "s_1,a_1,\\ldots,s_t,a_t,\\ldots"
},
{
"math_id": 24,
"text": "g_1,g_2,\\ldots"
},
{
"math_id": 25,
"text": "g_t=g(s_t,a_t)"
},
{
"math_id": 26,
"text": "\\Gamma_\\lambda"
},
{
"math_id": 27,
"text": "\\lambda "
},
{
"math_id": 28,
"text": "0<\\lambda \\leq 1"
},
{
"math_id": 29,
"text": "\\lambda \\sum_{t=1}^{\\infty}(1-\\lambda)^{t-1}g^i_t"
},
{
"math_id": 30,
"text": "n"
},
{
"math_id": 31,
"text": "\\bar{g}^i_n:=\\frac1n\\sum_{t=1}^ng^i_t"
},
{
"math_id": 32,
"text": "v_n(s_1)"
},
{
"math_id": 33,
"text": "v_{\\lambda}(s_1)"
},
{
"math_id": 34,
"text": "\\Gamma_n"
},
{
"math_id": 35,
"text": "\\Gamma_{\\lambda}"
},
{
"math_id": 36,
"text": "\\lambda"
},
{
"math_id": 37,
"text": "0"
},
{
"math_id": 38,
"text": "\\Gamma_\\infty"
},
{
"math_id": 39,
"text": "\\Gamma_{\\infty}"
},
{
"math_id": 40,
"text": "v_{\\infty}"
},
{
"math_id": 41,
"text": "\\varepsilon>0"
},
{
"math_id": 42,
"text": "N"
},
{
"math_id": 43,
"text": "\\sigma_{\\varepsilon}"
},
{
"math_id": 44,
"text": "\\tau_{\\varepsilon}"
},
{
"math_id": 45,
"text": "\\sigma"
},
{
"math_id": 46,
"text": "\\tau"
},
{
"math_id": 47,
"text": "n\\geq N"
},
{
"math_id": 48,
"text": "\\bar{g}^i_n"
},
{
"math_id": 49,
"text": "\\sigma_{\\varepsilon} "
},
{
"math_id": 50,
"text": "v_{\\infty} -\\varepsilon "
},
{
"math_id": 51,
"text": "\\sigma "
},
{
"math_id": 52,
"text": "v_{\\infty} +\\varepsilon "
},
{
"math_id": 53,
"text": "\\sigma^j=\\tau^j"
},
{
"math_id": 54,
"text": "j\\neq i"
},
{
"math_id": 55,
"text": "v^i_{\\infty} -\\varepsilon "
},
{
"math_id": 56,
"text": "v^i_{\\infty} +\\varepsilon "
}
] | https://en.wikipedia.org/wiki?curid=10500944 |
105012 | Solvable group | Group with subnormal series where all factors are abelian
In mathematics, more specifically in the field of group theory, a solvable group or soluble group is a group that can be constructed from abelian groups using extensions. Equivalently, a solvable group is a group whose derived series terminates in the trivial subgroup.
Motivation.
Historically, the word "solvable" arose from Galois theory and the proof of the general unsolvability of quintic equations. Specifically, a polynomial equation is solvable in radicals if and only if the corresponding Galois group is solvable (note this theorem holds only in characteristic 0). This means that associated to a polynomial formula_0 there is a tower of field extensions formula_1 such that formula_2, where formula_3, so that formula_4 is a root of formula_5 for some formula_6, and such that formula_7 contains a splitting field for formula_8.
Example.
For example, the smallest Galois field extension of formula_9 containing the element formula_10 gives a solvable group. It has associated field extensions formula_11 giving a solvable group of Galois extensions containing the following composition factors:
formula_12, with group actions formula_13 and minimal polynomial formula_14; formula_15, with group actions formula_16 and minimal polynomial formula_17; formula_18, with group actions formula_19 and minimal polynomial formula_20, where formula_21 is the identity permutation; and formula_22, with group actions formula_23 and minimal polynomial formula_24. All of the defining group actions change a single extension while keeping all of the other extensions fixed. For example, an element of this group is the group action formula_25. A general element in the group can be written as formula_26 for a total of 80 elements.
It is worthwhile to note that this group is not abelian itself. For example:
formula_27
formula_28
In fact, in this group, formula_29. The solvable group is isomorphic to formula_30, defined using the semidirect product and direct product of the cyclic groups. In the solvable group, formula_31 is not a normal subgroup.
Definition.
A group "G" is called solvable if it has a subnormal series whose factor groups (quotient groups) are all abelian, that is, if there are subgroups
formula_32
meaning that "G""j"−1 is normal in "Gj", such that "Gj "/"G""j"−1 is an abelian group, for "j" = 1, 2, ..., "k".
Or equivalently, if its derived series, the descending normal series
formula_33
where every subgroup is the commutator subgroup of the previous one, eventually reaches the trivial subgroup of "G". These two definitions are equivalent, since for every group "H" and every normal subgroup "N" of "H", the quotient "H"/"N" is abelian if and only if "N" includes the commutator subgroup of "H". The least "n" such that "G"("n") = 1 is called the derived length of the solvable group "G".
For finite groups, an equivalent definition is that a solvable group is a group with a composition series all of whose factors are cyclic groups of prime order. This is equivalent because a finite group has finite composition length, and every simple abelian group is cyclic of prime order. The Jordan–Hölder theorem guarantees that if one composition series has this property, then all composition series will have this property as well. For the Galois group of a polynomial, these cyclic groups correspond to "n"th roots (radicals) over some field. The equivalence does not necessarily hold for infinite groups: for example, since every nontrivial subgroup of the group Z of integers under addition is isomorphic to Z itself, it has no composition series, but the normal series {0, Z}, with its only factor group isomorphic to Z, proves that it is in fact solvable.
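As a small computational illustration of the derived-series criterion (not part of the article), one can compute derived series of permutation groups with SymPy; the series reaches the trivial group exactly for the solvable examples.

```python
# Illustrative check of solvability via the derived series, using SymPy.
from sympy.combinatorics.named_groups import SymmetricGroup, AlternatingGroup

examples = [("S3", SymmetricGroup(3)), ("S4", SymmetricGroup(4)),
            ("A5", AlternatingGroup(5)), ("S5", SymmetricGroup(5))]

for name, G in examples:
    orders = [H.order() for H in G.derived_series()]   # orders along G > G' > G'' > ...
    print(name, orders, "solvable" if G.is_solvable else "not solvable")
# For S3 and S4 the last term has order 1 (the trivial group); for A5 and S5 the
# series stabilizes at A5 (order 60), so those groups are not solvable.
```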
Examples.
Abelian groups.
The basic examples of solvable groups are the abelian groups. They are trivially solvable since a subnormal series is formed by just the group itself and the trivial group. But non-abelian groups may or may not be solvable.
Nilpotent groups.
More generally, all nilpotent groups are solvable. In particular, finite "p"-groups are solvable, as all finite "p"-groups are nilpotent.
Quaternion groups.
In particular, the quaternion group is a solvable group given by the group extension formula_34 where the kernel formula_35 is the subgroup generated by formula_36.
Group extensions.
Group extensions form the prototypical examples of solvable groups. That is, if formula_37 and formula_38 are solvable groups, then any extension formula_39 defines a solvable group formula_40. In fact, all solvable groups can be formed from such group extensions.
Non-abelian group which is non-nilpotent.
A small example of a solvable, non-nilpotent group is the symmetric group "S"3. In fact, as the smallest simple non-abelian group is "A"5 (the alternating group of degree 5), it follows that "every" group with order less than 60 is solvable.
Finite groups of odd order.
The Feit–Thompson theorem states that every finite group of odd order is solvable. In particular this implies that if a finite group is simple, it is either cyclic of prime order or of even order.
Non-example.
The group "S"5 is not solvable — it has a composition series {E, "A"5, "S"5} (and the Jordan–Hölder theorem states that every other composition series is equivalent to that one), giving factor groups isomorphic to "A"5 and "C"2; and "A"5 is not abelian. Generalizing this argument, coupled with the fact that "A""n" is a normal, maximal, non-abelian simple subgroup of "S""n" for "n" > 4, we see that "S""n" is not solvable for "n" > 4. This is a key step in the proof that for every "n" > 4 there are polynomials of degree "n" which are not solvable by radicals (Abel–Ruffini theorem). This property is also used in complexity theory in the proof of Barrington's theorem.
Subgroups of GL2.
Consider the subgroups formula_41 of formula_42 for some field formula_43. The group quotient formula_44 can be found by taking arbitrary elements in formula_45, multiplying them together, and working out what structure this gives. So formula_46 Note that the determinant condition on formula_47 implies formula_48, hence formula_49 is a subgroup (consisting of the matrices with formula_50). For fixed formula_51, the linear equation formula_52 implies formula_53, which is an arbitrary element of formula_54 since formula_55. Since we can take any matrix in formula_56 and multiply it by the matrix formula_57 with formula_53, we can get a diagonal matrix in formula_56. This shows that the quotient group is formula_58.
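A quick symbolic check of the matrix product displayed above (an illustration only, not part of the article):

```python
# Verify the product of an upper-triangular matrix with a unipotent matrix.
from sympy import symbols, Matrix

a, b, c, d = symbols("a b c d")
B_elem = Matrix([[a, b], [0, c]])     # element of B
U_elem = Matrix([[1, d], [0, 1]])     # element of U
print(B_elem * U_elem)                # Matrix([[a, a*d + b], [0, c]])
```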
Remark.
Notice that this description gives the decomposition of formula_56 as formula_59 where formula_60 acts on formula_61 by formula_62. This implies formula_63. Also, a matrix of the formformula_64corresponds to the element formula_65 in the group.
Borel subgroups.
For a linear algebraic group formula_37, a Borel subgroup is defined as a subgroup which is closed, connected, and solvable in formula_37, and is a maximal possible subgroup with these properties (note the first two are topological properties). For example, in formula_66 and formula_67 the groups of upper-triangular, or lower-triangular matrices are two of the Borel subgroups. The example given above, the subgroup formula_68 in formula_69, is a Borel subgroup.
Borel subgroup in GL3.
In formula_70 there are the subgroups formula_71. Notice formula_72, hence the Borel group has the form formula_73
Borel subgroup in product of simple linear algebraic groups.
In the product group formula_74 the Borel subgroup can be represented by matrices of the form formula_75 where formula_76 is an formula_77 upper triangular matrix and formula_78 is an formula_79 upper triangular matrix.
Z-groups.
Any finite group whose "p"-Sylow subgroups are cyclic is a semidirect product of two cyclic groups, in particular solvable. Such groups are called Z-groups.
OEIS values.
Numbers of solvable groups with order "n" are (starting with "n" = 0)
0, 1, 1, 1, 2, 1, 2, 1, 5, 2, 2, 1, 5, 1, 2, 1, 14, 1, 5, 1, 5, 2, 2, 1, 15, 2, 2, 5, 4, 1, 4, 1, 51, 1, 2, 1, 14, 1, 2, 2, 14, 1, 6, 1, 4, 2, 2, 1, 52, 2, 5, 1, 5, 1, 15, 2, 13, 2, 2, 1, 12, 1, 2, 4, 267, 1, 4, 1, 5, 1, 4, 1, 50, ... (sequence in the OEIS)
Orders of non-solvable groups are
60, 120, 168, 180, 240, 300, 336, 360, 420, 480, 504, 540, 600, 660, 672, 720, 780, 840, 900, 960, 1008, 1020, 1080, 1092, 1140, 1176, 1200, 1260, 1320, 1344, 1380, 1440, 1500, ... (sequence in the OEIS)
Properties.
Solvability is closed under a number of operations.
Solvability is closed under group extension:
It is also closed under wreath product:
For any positive integer "N", the solvable groups of derived length at most "N" form a subvariety of the variety of groups, as they are closed under the taking of homomorphic images, subalgebras, and (direct) products. The direct product of a sequence of solvable groups with unbounded derived length is not solvable, so the class of all solvable groups is not a variety.
Burnside's theorem.
Burnside's theorem states that if "G" is a finite group of order "paqb" where "p" and "q" are prime numbers, and "a" and "b" are non-negative integers, then "G" is solvable.
Related concepts.
Supersolvable groups.
As a strengthening of solvability, a group "G" is called supersolvable (or supersoluble) if it has an "invariant" normal series whose factors are all cyclic. Since a normal series has finite length by definition, uncountable groups are not supersolvable. In fact, all supersolvable groups are finitely generated, and an abelian group is supersolvable if and only if it is finitely generated. The alternating group "A"4 is an example of a finite solvable group that is not supersolvable.
If we restrict ourselves to finitely generated groups, we can consider the following arrangement of classes of groups:
cyclic < abelian < nilpotent < supersolvable < polycyclic < solvable < finitely generated group.
Virtually solvable groups.
A group "G" is called virtually solvable if it has a solvable subgroup of finite index. This is similar to virtually abelian. Clearly all solvable groups are virtually solvable, since one can just choose the group itself, which has index 1.
Hypoabelian.
A solvable group is one whose derived series reaches the trivial subgroup at a "finite" stage. For an infinite group, the finite derived series may not stabilize, but the transfinite derived series always stabilizes. A group whose transfinite derived series reaches the trivial group is called a hypoabelian group, and every solvable group is a hypoabelian group. The first ordinal "α" such that "G"("α") = "G"("α"+1) is called the (transfinite) derived length of the group "G", and it has been shown that every ordinal is the derived length of some group.
p-solvable.
A finite group is p-solvable for some prime p if every factor in the composition series is a p-group or has order prime to p. A finite group is solvable iff it is p-solvable for every p.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f \\in F[x]"
},
{
"math_id": 1,
"text": "F = F_0 \\subseteq F_1 \\subseteq F_2 \\subseteq \\cdots \\subseteq F_m=K"
},
{
"math_id": 2,
"text": "F_i = F_{i-1}[\\alpha_i]"
},
{
"math_id": 3,
"text": "\\alpha_i^{m_i} \\in F_{i-1}"
},
{
"math_id": 4,
"text": "\\alpha_i"
},
{
"math_id": 5,
"text": "x^{m_i} - a"
},
{
"math_id": 6,
"text": "a \\in F_{i-1}"
},
{
"math_id": 7,
"text": "F_m"
},
{
"math_id": 8,
"text": "f(x)"
},
{
"math_id": 9,
"text": "\\mathbb{Q}"
},
{
"math_id": 10,
"text": "a = \\sqrt[5]{\\sqrt{2} + \\sqrt{3}}"
},
{
"math_id": 11,
"text": "\\mathbb{Q} \n\\subseteq \\mathbb{Q}(\\sqrt{2}) \n\\subseteq \\mathbb{Q}(\\sqrt{2}, \\sqrt{3}) \n\\subseteq \\mathbb{Q}(\\sqrt{2}, \\sqrt{3})\\left(e^{2i\\pi/ 5}\\right)\n\\subseteq \\mathbb{Q}(\\sqrt{2}, \\sqrt{3})\\left(e^{2i\\pi/ 5}, a\\right)"
},
{
"math_id": 12,
"text": "\\mathrm{Aut}\\left(\\mathbb{Q(\\sqrt{2})}\\right/\\mathbb{Q}) \\cong \\mathbb{Z}/2 "
},
{
"math_id": 13,
"text": "f\\left(\\pm\\sqrt{2}\\right) = \\mp\\sqrt{2}, \\ f^2 = 1"
},
{
"math_id": 14,
"text": "x^2 - 2"
},
{
"math_id": 15,
"text": "\\mathrm{Aut}\\left(\\mathbb{Q(\\sqrt{2},\\sqrt{3})}\\right/\\mathbb{Q(\\sqrt{2})}) \\cong \\mathbb{Z}/2 "
},
{
"math_id": 16,
"text": "g\\left(\\pm\\sqrt{3}\\right) = \\mp\\sqrt{3} ,\\ g^2 = 1"
},
{
"math_id": 17,
"text": "x^2 - 3"
},
{
"math_id": 18,
"text": "\\mathrm{Aut}\\left(\n\\mathbb{Q}(\\sqrt{2}, \\sqrt{3})\\left(e^{2i\\pi/ 5}\\right)/\n\\mathbb{Q}(\\sqrt{2}, \\sqrt{3})\n\\right)\n\\cong \\mathbb{Z}/4 "
},
{
"math_id": 19,
"text": "h^n\\left(e^{2im\\pi/5}\\right) = e^{2(n+1)mi\\pi/5} , \\ 0 \\leq n \\leq 3, \\ h^4 = 1"
},
{
"math_id": 20,
"text": "x^4 + x^3+x^2+x+1 = (x^5 - 1)/(x-1)"
},
{
"math_id": 21,
"text": "1"
},
{
"math_id": 22,
"text": "\\mathrm{Aut}\\left(\n\\mathbb{Q}(\\sqrt{2}, \\sqrt{3})\\left(e^{2i\\pi/ 5}, a\\right)/\n\\mathbb{Q}(\\sqrt{2}, \\sqrt{3})\\left(e^{2i\\pi/ 5}\\right)\n\\right)\n\\cong \\mathbb{Z}/5 "
},
{
"math_id": 23,
"text": "j^l(a) = e^{2li\\pi/5}a, \\ j^5 = 1"
},
{
"math_id": 24,
"text": "x^5 - \\left(\\sqrt{2} + \\sqrt{3}\\right)"
},
{
"math_id": 25,
"text": "fgh^3j^4 "
},
{
"math_id": 26,
"text": "f^ag^bh^nj^l,\\ 0 \\leq a, b \\leq 1,\\ 0 \\leq n \\leq 3,\\ 0 \\leq l \\leq 4 "
},
{
"math_id": 27,
"text": "hj(a) = h(e^{2i\\pi/5}a) = e^{4i\\pi/5}a "
},
{
"math_id": 28,
"text": "jh(a) = j(a) = e^{2i\\pi/5}a "
},
{
"math_id": 29,
"text": "jh = hj^3 "
},
{
"math_id": 30,
"text": "(\\mathbb{C}_5 \\rtimes_\\varphi \\mathbb{C}_4) \\times (\\mathbb{C}_2 \\times \\mathbb{C}_2),\\ \\mathrm{where}\\ \\varphi_h(j) = hjh^{-1} = j^2 "
},
{
"math_id": 31,
"text": "\\mathbb{C}_4 "
},
{
"math_id": 32,
"text": "1 = G_0\\triangleleft G_1 \\triangleleft \\cdots \\triangleleft G_k=G"
},
{
"math_id": 33,
"text": "G\\triangleright G^{(1)}\\triangleright G^{(2)} \\triangleright \\cdots,"
},
{
"math_id": 34,
"text": "1 \\to \\mathbb{Z}/2 \\to Q \\to \\mathbb{Z}/2 \\times \\mathbb{Z}/2 \\to 1"
},
{
"math_id": 35,
"text": "\\mathbb{Z}/2"
},
{
"math_id": 36,
"text": "-1"
},
{
"math_id": 37,
"text": "G"
},
{
"math_id": 38,
"text": "G'"
},
{
"math_id": 39,
"text": "1 \\to G \\to G'' \\to G' \\to 1"
},
{
"math_id": 40,
"text": "G''"
},
{
"math_id": 41,
"text": "B = \\left\\{ \\begin{bmatrix}\n* & * \\\\\n0 & *\n\\end{bmatrix} \\right\\} \\text{, }\nU = \\left\\{ \\begin{bmatrix}\n1 & * \\\\\n0 & 1\n\\end{bmatrix} \\right\\}"
},
{
"math_id": 42,
"text": "GL_2(\\mathbb{F})"
},
{
"math_id": 43,
"text": "\\mathbb{F}"
},
{
"math_id": 44,
"text": "B/U"
},
{
"math_id": 45,
"text": "B,U"
},
{
"math_id": 46,
"text": "\\begin{bmatrix}\na & b \\\\\n0 & c \n\\end{bmatrix} \\cdot\n\\begin{bmatrix}\n1 & d \\\\\n0 & 1 \n\\end{bmatrix} =\n\\begin{bmatrix}\na & ad + b \\\\\n0 & c\n\\end{bmatrix}\n\n"
},
{
"math_id": 47,
"text": "GL_2\n\n"
},
{
"math_id": 48,
"text": "ac \\neq 0\n\n"
},
{
"math_id": 49,
"text": "\\mathbb{F}^\\times \\times \\mathbb{F}^\\times \\subset B\n\n"
},
{
"math_id": 50,
"text": "b=0\n\n"
},
{
"math_id": 51,
"text": "a,b\n\n"
},
{
"math_id": 52,
"text": "ad + b = 0\n\n"
},
{
"math_id": 53,
"text": "d = -b/a\n\n"
},
{
"math_id": 54,
"text": "\\mathbb{F}\n\n"
},
{
"math_id": 55,
"text": "b \\in \\mathbb{F}\n\n"
},
{
"math_id": 56,
"text": "B\n\n"
},
{
"math_id": 57,
"text": "\\begin{bmatrix}\n1 & d \\\\\n0 & 1\n\\end{bmatrix}\n\n"
},
{
"math_id": 58,
"text": "B/U \\cong \\mathbb{F}^\\times \\times \\mathbb{F}^\\times"
},
{
"math_id": 59,
"text": "\\mathbb{F} \\rtimes (\\mathbb{F}^\\times \\times \\mathbb{F}^\\times)\n\n"
},
{
"math_id": 60,
"text": "(a,c)\n\n"
},
{
"math_id": 61,
"text": "b\n\n"
},
{
"math_id": 62,
"text": "(a,c)(b) = ab\n\n"
},
{
"math_id": 63,
"text": "(a,c)(b + b') = (a,c)(b) + (a,c)(b') = ab + ab'\n\n"
},
{
"math_id": 64,
"text": "\\begin{bmatrix}\na & b \\\\\n0 & c\n\\end{bmatrix}"
},
{
"math_id": 65,
"text": "(b) \\times (a,c)"
},
{
"math_id": 66,
"text": "GL_n"
},
{
"math_id": 67,
"text": "SL_n"
},
{
"math_id": 68,
"text": "B"
},
{
"math_id": 69,
"text": "GL_2"
},
{
"math_id": 70,
"text": "GL_3"
},
{
"math_id": 71,
"text": "B = \\left\\{\n\\begin{bmatrix}\n* & * & * \\\\\n0 & * & * \\\\\n0 & 0 & *\n\\end{bmatrix}\n\\right\\}, \\text{ }\nU_1 = \\left\\{\n\\begin{bmatrix}\n1 & * & * \\\\\n0 & 1 & * \\\\\n0 & 0 & 1\n\\end{bmatrix}\n\\right\\}"
},
{
"math_id": 72,
"text": "B/U_1 \\cong \\mathbb{F}^\\times \\times \\mathbb{F}^\\times \\times \\mathbb{F}^\\times"
},
{
"math_id": 73,
"text": "U\\rtimes\n(\\mathbb{F}^\\times \\times \\mathbb{F}^\\times \\times \\mathbb{F}^\\times)\n\n"
},
{
"math_id": 74,
"text": "GL_n \\times GL_m"
},
{
"math_id": 75,
"text": "\\begin{bmatrix}\nT & 0 \\\\\n0 & S\n\\end{bmatrix}"
},
{
"math_id": 76,
"text": "T"
},
{
"math_id": 77,
"text": "n\\times n"
},
{
"math_id": 78,
"text": "S"
},
{
"math_id": 79,
"text": "m\\times m"
}
] | https://en.wikipedia.org/wiki?curid=105012 |
1050125 | Laver table | Mathematical concept
In mathematics, Laver tables (named after Richard Laver, who discovered them towards the end of the 1980s in connection with his works on set theory) are tables of numbers that have certain properties of algebraic and combinatorial interest. They occur in the study of racks and quandles.
Definition.
For any nonnegative integer "n", the "n"-th "Laver table" is the 2"n" × 2"n" table whose entry in the cell at row "p" and column "q" (1 ≤ "p","q" ≤ 2"n") is defined as
formula_0
where formula_1 is the unique binary operation that satisfies the following two equations for all "p", "q", "r" in {1, ..., 2"n"}:
(1)   "p" formula_1 1 = ("p" + 1) mod 2"n"
and
(2)   "p" formula_1 ("q" formula_1 "r") = ("p" formula_1 "q") formula_1 ("p" formula_1 "r").
Note: Equation (1) uses the notation formula_2 to mean the unique member of {1...,2"n"} congruent to "x" modulo 2"n".
Equation (2) is known as the "(left) self-distributive law", and a set endowed with "any" binary operation satisfying this law is called a shelf. Thus, the "n"-th Laver table is just the multiplication table for the unique shelf ({1...,2"n"}, formula_1) that satisfies Equation (1).
Examples: Following are the first five Laver tables, i.e. the multiplication tables for the shelves ({1...,2"n"}, formula_1), "n" = 0, 1, 2, 3, 4:
There is no known closed-form expression to calculate the entries of a Laver table directly, but Patrick Dehornoy provides a simple algorithm for filling out Laver tables.
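A minimal sketch of one standard way to fill out the table (an illustration, not necessarily Dehornoy's exact presentation): row 2"n" is the identity row, and the remaining rows are filled from "p" = 2"n" − 1 down to 1 using the recurrences "p" ⋆ 1 = "p" + 1 and "p" ⋆ ("q" + 1) = ("p" ⋆ "q") ⋆ ("p" + 1), which follow from Equations (1) and (2).

```python
# Fill the n-th Laver table using p * 1 = p + 1 and p * (q+1) = (p * q) * (p+1),
# with row 2^n equal to the identity row.  (Illustrative sketch.)
def laver_table(n):
    N = 2 ** n
    T = [[0] * (N + 1) for _ in range(N + 1)]      # 1-based indexing
    for q in range(1, N + 1):
        T[N][q] = q                                # 2^n * q = q
    for p in range(N - 1, 0, -1):                  # row p only needs rows with larger index
        T[p][1] = p + 1
        for q in range(1, N):
            T[p][q + 1] = T[T[p][q]][p + 1]
    return T

T = laver_table(2)
for p in range(1, 5):
    print([T[p][q] for q in range(1, 5)])
# prints the 4 x 4 table: [2, 4, 2, 4], [3, 4, 3, 4], [4, 4, 4, 4], [1, 2, 3, 4]
```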
Are the first-row periods unbounded?
Looking at just the first row in the "n"-th Laver table, for "n" = 0, 1, 2, ..., the entries in each first row are seen to be periodic with a period that's always a power of two, as mentioned in Property 2 above. The first few periods are 1, 1, 2, 4, 4, 8, 8, 8, 8, 16, 16, ... (sequence in the OEIS). This sequence is nondecreasing, and in 1995 Richard Laver proved, "under the assumption that there exists a rank-into-rank (a large cardinal property)", that it actually increases without bound. (It is not known whether this is also provable in ZFC without the additional large-cardinal axiom.) In any case, it grows extremely slowly; Randall Dougherty showed that 32 cannot appear in this sequence (if it ever does) until "n" > A(9, A(8, A(8, 254))), where A denotes the Ackermann–Péter function.
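Continuing the sketch above, the first-row periods quoted here can be read off directly from the computed tables (illustrative code, reusing `laver_table` from the previous snippet):

```python
def first_row_period(n):
    T = laver_table(n)
    N = 2 ** n
    row = [T[1][q] for q in range(1, N + 1)]
    for k in range(n + 1):                      # the period is always a power of two
        d = 2 ** k
        if row == row[:d] * (N // d):
            return d

print([first_row_period(n) for n in range(6)])  # [1, 1, 2, 4, 4, 8]
```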
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L_n(p, q) := p \\star_n q"
},
{
"math_id": 1,
"text": "\\star_n"
},
{
"math_id": 2,
"text": "x \\bmod 2^n"
},
{
"math_id": 3,
"text": "\\ \\ 2^n \\star_n q = q;\\ \\ p \\star_n 2^n = 2^n;\\ \\ (2^n-1)\\star_n q = 2^n;\\ \\ p\\star_n 2^{n-1}=2^n\\text{ if }p\\ne 2^n"
},
{
"math_id": 4,
"text": "\\ \\ (p \\star_n q)_{q=1,2,3,...}"
},
{
"math_id": 5,
"text": "\\ \\ (p \\star_n q)_{q=1,2,3,...,\\pi_n(p)}"
},
{
"math_id": 6,
"text": "p \\star_n 1 = p+1\\ "
},
{
"math_id": 7,
"text": "\\ p \\star_n \\pi_n(p) = 2^n"
},
{
"math_id": 8,
"text": "\\ p \\star_n q = (p+1)^{(q)}, \\text{ where } x^{(1)}=x,\\ x^{(k+1)}=x^{(k)} \\star_n x."
}
] | https://en.wikipedia.org/wiki?curid=1050125 |
105035 | Heinz von Foerster | Austrian-American scientist and cybernetician (1911–2002)
Heinz von Foerster (né von Förster; November 13, 1911 – October 2, 2002) was an Austrian-American scientist combining physics and philosophy, and widely credited as the originator of second-order cybernetics. He was twice a Guggenheim fellow (1956–57 and 1963–64) and also was a fellow of the American Association for the Advancement of Science, 1980. He is well known for his 1960 Doomsday equation formula published in "Science" predicting future population growth.
As a polymath, he wrote nearly two hundred professional papers, gaining renown in fields ranging from computer science and artificial intelligence to epistemology, and researched high-speed electronics and electro-optics switching devices as a physicist, and in biophysics, the study of memory and knowledge. He worked on cognition based on neurophysiology, mathematics, and philosophy and was called "one of the most consequential thinkers in the history of cybernetics". He came to the United States, and stayed after meeting with Warren Sturgis McCulloch, where he received funding from The Pentagon to establish the Biological Computer Laboratory, which built the first parallel computer, the "Numa-Rete". Working with William Ross Ashby, one of the original Ratio Club members, and together with Warren McCulloch, Norbert Wiener, John von Neumann and Lawrence J. Fogel, Heinz von Foerster was an architect of cybernetics and one of the members of the Macy conferences, eventually becoming editor of its early proceedings alongside Hans-Lukas Teuber and Margaret Mead.
Biography.
Von Foerster was born in 1911 in Vienna, Austria-Hungary, as Heinz von Förster. His paternal grandfather was the Austrian architect Emil von Förster. His maternal grandmother was Marie Lang, an Austrian feminist, theosophist, and publisher. He studied physics at the Technical University of Vienna and at the University of Breslau, where in 1944 he received a PhD in physics. His relatives included Ludwig Wittgenstein, Erwin Lang and Hugo von Hofmannsthal. Ludwig Förster was his great-grandfather. His Jewish roots did not cause him much trouble while he worked in radar laboratories during the Nazi era, as "he hid his ancestry with the help of an employer who chose not to press him for documents on his family."
He moved to the US in 1949 and worked at the University of Illinois at Urbana–Champaign, where he was a professor of electrical engineering from 1951 to 1975. He was also professor of biophysics (1962–1975) and director of the Biological Computer Laboratory (1958–1975). Additionally, in 1956–57 and 1963–64, he was a Guggenheim Fellow and also President of the Wenner-Gren-Foundation for anthropological research from 1963 to 1965.
He knew well and was in conversation with John von Neumann, Norbert Wiener, Humberto Maturana, Francisco Varela, Gordon Pask, Gregory Bateson, Lawrence J. Fogel and Margaret Mead, among many others. He influenced generations of students as a teacher and an inclusive, enthusiastic collaborator.
He died on October 2, 2002, in Pescadero, California.
Work.
Von Foerster was influenced by the Vienna Circle and Ludwig Wittgenstein. He worked in the field of cybernetics and is known as the inventor of second-order cybernetics. He made important contributions to constructivism. He is also known for his interest in computer music and magic.
The electron tube laboratory.
In 1949, von Foerster started work at the University of Illinois at Urbana–Champaign at the electron tube laboratory of the Electrical Engineering Department, where he succeeded Joseph Tykociński-Tykociner. With his students he developed many innovative devices, including ultra-high-frequency electronics.
He also worked on mathematical models of population dynamics and in 1959 published a model now called the "von Foerster equation", which is derivable from the principles of constant aging and conservation of mass.
formula_0
where: "n" = "n"("t","a"), "t" stands for time and "a" for age. "m"("a") is the death in function of the population age; "n"("t","a") is the population density in function of age.
When "m"("a") = 0, we have:
formula_1
This expresses that the population simply ages, and that aging is the only process that changes the population density.
It is therefore a continuity equation; it can be solved using the method of characteristics. Another way is by similarity solution; and a third is a numerical approach such as finite differences.
The gross birth rate is given by the following boundary condition:
formula_2
The solution is unique only once the initial condition
formula_3
is given, which specifies the initial age distribution of the population; the distribution then evolves according to the partial differential equation.
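As an illustration of the finite-difference approach mentioned above, the following sketch integrates the equation with an upwind step along the characteristics; the mortality "m"("a"), birth rate "b"("a") and initial distribution "f"("a") are arbitrary example choices, not taken from the article.

```python
# Upwind / characteristic stepping for the von Foerster equation
#   dn/dt + dn/da = -m(a) n,   n(t, 0) = integral of b(a) n(t, a) da,   n(0, a) = f(a).
import numpy as np

A_max, T_final = 10.0, 5.0
da = dt = 0.01                         # equal steps: exact transport along characteristics
ages = np.arange(0.0, A_max, da)

m = lambda a: 0.05 + 0.01 * a                                # example mortality
b = lambda a: np.where((a > 1.0) & (a < 4.0), 1.2, 0.0)      # example fertility window
n = np.exp(-ages)                                            # example initial density f(a)

for _ in range(int(T_final / dt)):
    births = np.sum(b(ages) * n) * da                # boundary condition n(t, 0)
    n[1:] = n[:-1] * (1.0 - dt * m(ages[:-1]))       # age by da and apply mortality
    n[0] = births

print("total population at t =", T_final, ":", np.sum(n) * da)
```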
Biological Computer Laboratory.
In 1958, he formed the "Biological Computer Lab", studying similarities in cybernetic systems in biology and electronics.
Macy conferences.
He was the youngest member of the core group of the Macy conferences on Cybernetics and editor of the five volumes of "Cybernetics" (1949–1953), a series of conference transcripts that represent important foundational conversations in the field. It was von Foerster who suggested that Wiener's coinage "Cybernetics" be applied to this conference series, which had previously been called "Circular Causal and Feedback Mechanisms in Biological and Social Systems".
Doomsday equation.
A 1960 issue of "Science" magazine included an article by von Foerster and his colleagues P. M. Mora and L. W. Amiot proposing a formula representing a best fit to available historical data on world population; the authors then predicted future population growth on the basis of this formula.
The formula gave 2.7 billion as the 1960 world population and predicted that population growth would become infinite by Friday, November 13, 2026 – von Foerster's 115th birthday anniversary – a prediction that earned it the name "the Doomsday Equation."
Based on population data obtained from various sources, von Foerster and his students concluded that world population growth over the centuries was faster than an exponential. In such a situation, doubling-time decreases over time. Von Foerster's tongue-in-cheek prediction of Doomsday on November 13, 2026, was based on an extrapolation into the future of doubling-time, with the finding that doubling-time would decrease to zero on that date.
Responders to his Doomsday prediction objected on the grounds of the finite human gestation time of 9 months, and the transparent fact that biological systems rarely persist in exponential growth for any substantial length of time. Those who knew von Foerster could see in his rejoinders an evident sense of humor.
Publications.
Von Foerster authored more than 100 publications. Books, a selection:
Articles, a selection:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{\\partial n}{\\partial t} + \\frac{\\partial n}{\\partial a} = - m(a)n, "
},
{
"math_id": 1,
"text": "\\frac{\\partial n}{\\partial t} = - \\frac{\\partial n}{\\partial a} "
},
{
"math_id": 2,
"text": " n(t,0)= \\int_0^\\infty b (a)n(t,a) \\, dt ,"
},
{
"math_id": 3,
"text": " n(0,a)= f(a), \\, "
}
] | https://en.wikipedia.org/wiki?curid=105035 |
10504214 | Braided Hopf algebra | In mathematics, a braided Hopf algebra is a Hopf algebra in a braided monoidal category. The most common braided Hopf algebras are objects in a Yetter–Drinfeld category of a Hopf algebra "H", particularly the Nichols algebra of a braided vector space in that category.
"The notion should not be confused with quasitriangular Hopf algebra."
Definition.
Let "H" be a Hopf algebra over a field "k", and assume that the antipode of "H" is bijective. A Yetter–Drinfeld module "R" over "H" is called a braided bialgebra in the Yetter–Drinfeld category formula_0 if
formula_11
Here "c" is the canonical braiding in the Yetter–Drinfeld category formula_0.
A braided bialgebra in formula_0 is called a braided Hopf algebra, if there is a morphism formula_12 of Yetter–Drinfeld modules such that
formula_13 for all formula_14
where formula_15 in slightly modified Sweedler notation – a change of notation is performed in order to avoid confusion in Radford's biproduct below.
Examples.
Any ordinary Hopf algebra is a braided Hopf algebra over formula_16, and a super Hopf algebra is precisely a braided Hopf algebra over the group algebra formula_17. The tensor algebra formula_18 of a Yetter–Drinfeld module formula_19 is always a braided Hopf algebra; its coproduct is defined in such a way that the elements of "V" are primitive, that is,
formula_20
The counit formula_21 then satisfies the equation formula_22 for all formula_23 The Nichols algebra of a Yetter–Drinfeld module formula_24, mentioned in the introduction, is a braided Hopf algebra quotient of the tensor algebra formula_18.
Radford's biproduct.
For any braided Hopf algebra "R" in formula_0 there exists a natural Hopf algebra formula_25 which contains "R" as a subalgebra and "H" as a Hopf subalgebra. It is called Radford's biproduct, named after its discoverer, the Hopf algebraist David Radford. It was rediscovered by Shahn Majid, who called it bosonization.
As a vector space, formula_25 is just formula_26. The algebra structure of formula_25 is given by
formula_27
where formula_28, formula_29 (Sweedler notation) is the coproduct of formula_30, and formula_31 is the left action of "H" on "R". Further, the coproduct of formula_25 is determined by the formula
formula_32
Here formula_15 denotes the coproduct of "r" in "R", and formula_33 is the left coaction of "H" on formula_34 | [
{
"math_id": 0,
"text": " {}^H_H\\mathcal{YD}"
},
{
"math_id": 1,
"text": " (R,\\cdot ,\\eta ) "
},
{
"math_id": 2,
"text": "\\cdot :R\\times R\\to R"
},
{
"math_id": 3,
"text": " \\eta :k\\to R "
},
{
"math_id": 4,
"text": " (R,\\Delta ,\\varepsilon )"
},
{
"math_id": 5,
"text": "\\varepsilon "
},
{
"math_id": 6,
"text": " \\Delta "
},
{
"math_id": 7,
"text": "\\Delta :R\\to R\\otimes R "
},
{
"math_id": 8,
"text": " \\varepsilon :R\\to k "
},
{
"math_id": 9,
"text": " R\\otimes R "
},
{
"math_id": 10,
"text": " \\eta \\otimes \\eta(1) : k\\to R\\otimes R"
},
{
"math_id": 11,
"text": " (R\\otimes R)\\times (R\\otimes R)\\to R\\otimes R,\\quad (r\\otimes s,t\\otimes u) \\mapsto \\sum _i rt_i\\otimes s_i u, \\quad \\text{and}\\quad c(s\\otimes t)=\\sum _i t_i\\otimes s_i. "
},
{
"math_id": 12,
"text": " S:R\\to R "
},
{
"math_id": 13,
"text": " S(r^{(1)})r^{(2)}=r^{(1)}S(r^{(2)})=\\eta(\\varepsilon (r)) "
},
{
"math_id": 14,
"text": " r\\in R,"
},
{
"math_id": 15,
"text": "\\Delta _R(r)=r^{(1)}\\otimes r^{(2)}"
},
{
"math_id": 16,
"text": " H=k "
},
{
"math_id": 17,
"text": " H=k[\\mathbb{Z}/2\\mathbb{Z}] "
},
{
"math_id": 18,
"text": " TV "
},
{
"math_id": 19,
"text": " V\\in {}^H_H\\mathcal{YD}"
},
{
"math_id": 20,
"text": " \\Delta (v)=1\\otimes v+v\\otimes 1 \\quad \\text{for all}\\quad v\\in V."
},
{
"math_id": 21,
"text": "\\varepsilon :TV\\to k"
},
{
"math_id": 22,
"text": " \\varepsilon (v)=0"
},
{
"math_id": 23,
"text": " v\\in V ."
},
{
"math_id": 24,
"text": " V "
},
{
"math_id": 25,
"text": " R\\# H "
},
{
"math_id": 26,
"text": " R\\otimes H "
},
{
"math_id": 27,
"text": " (r\\# h)(r'\\#h')=r(h_{(1)}\\boldsymbol{.}r')\\#h_{(2)}h', "
},
{
"math_id": 28,
"text": " r,r'\\in R,\\quad h,h'\\in H"
},
{
"math_id": 29,
"text": " \\Delta (h)=h_{(1)}\\otimes h_{(2)} "
},
{
"math_id": 30,
"text": " h\\in H "
},
{
"math_id": 31,
"text": " \\boldsymbol{.}:H\\otimes R\\to R "
},
{
"math_id": 32,
"text": " \\Delta (r\\#h)=(r^{(1)}\\#r^{(2)}{}_{(-1)}h_{(1)})\\otimes (r^{(2)}{}_{(0)}\\#h_{(2)}), \\quad r\\in R,h\\in H."
},
{
"math_id": 33,
"text": " \\delta (r^{(2)})=r^{(2)}{}_{(-1)}\\otimes r^{(2)}{}_{(0)} "
},
{
"math_id": 34,
"text": " r^{(2)}\\in R. "
}
] | https://en.wikipedia.org/wiki?curid=10504214 |
10504376 | Spectral element method | In the numerical solution of partial differential equations, a topic in mathematics, the spectral element method (SEM) is a formulation of the finite element method (FEM) that uses high-degree piecewise polynomials as basis functions. The spectral element method was introduced in a 1984 paper by A. T. Patera. Although Patera is credited with development of the method, his work was a rediscovery of an existing method (see Development History)
Discussion.
The spectral method expands the solution in trigonometric series, a chief advantage being that the resulting method is of a very high order.
This approach relies on the fact that trigonometric polynomials are an orthonormal basis for formula_0.
The spectral element method instead chooses high-degree piecewise polynomial basis functions, also achieving a very high order of accuracy.
Such polynomials are usually orthogonal Chebyshev polynomials or very high order Lagrange polynomials over non-uniformly spaced nodes.
In SEM computational error decreases exponentially as the order of approximating polynomial increases, therefore a fast convergence of solution to the exact solution is realized with fewer degrees of freedom of the structure in comparison with FEM.
In structural health monitoring, FEM can be used for detecting large flaws in a structure, but as the size of the flaw is reduced there is a need to use a high-frequency wave. In order to simulate the propagation of a high-frequency wave, the FEM mesh required is very fine resulting in increased computational time. On the other hand, SEM provides good accuracy with fewer degrees of freedom.
Non-uniformity of nodes helps to make the mass matrix diagonal, which saves time and memory and is also useful for adopting a central difference method (CDM).
The disadvantages of SEM include difficulty in modeling complex geometry, compared to the flexibility of FEM.
Although the method can be applied with a modal piecewise orthogonal polynomial basis, it is most often implemented with a nodal tensor product Lagrange basis. The method gains its efficiency by placing the nodal points at the Legendre-Gauss-Lobatto (LGL) points and performing the Galerkin method integrations with a reduced Gauss-Lobatto quadrature using the same nodes. With this combination, simplifications result such that mass lumping occurs at all nodes and a collocation procedure results at interior points.
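The following short sketch (illustrative, not from the article) computes Legendre–Gauss–Lobatto nodes and quadrature weights and shows why mass lumping occurs: with the Lagrange basis on the LGL nodes and quadrature on those same nodes, the element mass matrix is diagonal.

```python
# Legendre-Gauss-Lobatto nodes/weights for polynomial degree p on [-1, 1].
import numpy as np
from numpy.polynomial import legendre as L

def lgl_nodes_weights(p):
    cP = np.zeros(p + 1)
    cP[p] = 1.0                                   # coefficients of P_p in the Legendre basis
    interior = np.sort(L.legroots(L.legder(cP)))  # roots of P_p'
    x = np.concatenate(([-1.0], interior, [1.0]))
    w = 2.0 / (p * (p + 1) * L.legval(x, cP) ** 2)
    return x, w

x, w = lgl_nodes_weights(4)
print(np.round(x, 4))    # [-1.     -0.6547  0.      0.6547  1.    ]
print(np.round(w, 4))    # [ 0.1     0.5445  0.7111  0.5445  0.1   ]
# With Lagrange basis functions l_i (l_i(x_k) = delta_ik) and LGL quadrature on the
# same nodes, M_ij = sum_k w_k l_i(x_k) l_j(x_k) = w_i delta_ij: a lumped mass matrix.
```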
The most popular applications of the method are in computational fluid dynamics and modeling seismic wave propagation.
A-priori error estimate.
The classic analysis of Galerkin methods and Céa's lemma holds here and it can be shown that, if formula_1 is the solution of the weak equation, formula_2 is the approximate solution and formula_3:
formula_4
where formula_5 is related to the discretization of the domain (i.e., the element length), formula_6 is independent of formula_5, and formula_7 is no larger than the degree of the piecewise polynomial basis.
formula_9
As we increase formula_5, we can also increase the degree of the basis functions. In this case, if formula_1 is an analytic function:
formula_10
where formula_11 depends only on formula_1.
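A quick numerical illustration of this exponential convergence for an analytic function (reusing `lgl_nodes_weights` from the sketch above; the test function is an arbitrary choice):

```python
# Interpolate an analytic function at LGL nodes of increasing degree and
# watch the maximum error decay roughly exponentially in the degree.
u = lambda x: np.exp(np.sin(np.pi * x))
xx = np.linspace(-1.0, 1.0, 2001)
for p in range(4, 25, 4):
    x, _ = lgl_nodes_weights(p)
    c = L.legfit(x, u(x), p)                    # interpolating polynomial, Legendre basis
    err = np.max(np.abs(L.legval(xx, c) - u(xx)))
    print(f"degree {p:2d}: max error = {err:.2e}")
```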
The Hybrid-Collocation-Galerkin possesses some superconvergence properties. The LGL form of SEM is equivalent, so it achieves the same superconvergence properties.
Development History.
Development of the most popular LGL form of the method is normally attributed to Maday and Patera. However, it was developed more than a decade earlier. First, there is the Hybrid-Collocation-Galerkin method (HCGM), which applies collocation at the interior Lobatto points and uses a Galerkin-like integral procedure at element interfaces. The Lobatto-Galerkin method described by Young is identical to SEM, while the HCGM is equivalent to these methods. This earlier work is ignored in the spectral literature. | [
{
"math_id": 0,
"text": "L^2(\\Omega)"
},
{
"math_id": 1,
"text": "u"
},
{
"math_id": 2,
"text": "u_N"
},
{
"math_id": 3,
"text": "u \\in H^{s+1}(\\Omega)"
},
{
"math_id": 4,
"text": "\\|u-u_N\\|_{H^1(\\Omega)} \\leqq C_s N^{-s} \\| u \\|_{H^{s+1}(\\Omega)}"
},
{
"math_id": 5,
"text": "N"
},
{
"math_id": 6,
"text": "C_s"
},
{
"math_id": 7,
"text": "s"
},
{
"math_id": 8,
"text": "k \\leq s+1"
},
{
"math_id": 9,
"text": "\\|u-u_N \\|_{H^k(\\Omega)} \\leq C_{s,k} N^{k-1-s} \\|u\\|_{H^{s+1}(\\Omega)}"
},
{
"math_id": 10,
"text": "\\|u-u_N\\|_{H^1(\\Omega)} \\leqq C \\exp( - \\gamma N )"
},
{
"math_id": 11,
"text": "\\gamma"
},
{
"math_id": 12,
"text": "a(\\cdot,\\cdot)"
},
{
"math_id": 13,
"text": "F"
}
] | https://en.wikipedia.org/wiki?curid=10504376 |
10504570 | Lift (mathematics) | Term in mathematics
In category theory, a branch of mathematics, given a morphism "f": "X" → "Y" and a morphism "g": "Z" → "Y", a lift or lifting of "f" to "Z" is a morphism "h": "X" → "Z" such that "f" = "g"∘"h". We say that "f" factors through "h".
Lifts are ubiquitous; for example, the definition of fibrations (see Homotopy lifting property) and the valuative criteria of separated and proper maps of schemes are formulated in terms of existence and (in the last case) uniqueness of certain lifts.
In algebraic topology and homological algebra, tensor product and the Hom functor are adjoint; however, they might not always lift to an exact sequence. This leads to the definition of the Tor functor and the Ext functor.
Covering space.
A basic example in topology is lifting a path in one topological space to a path in a covering space. For example, consider mapping opposite points on a sphere to the same point, a continuous map from the sphere covering the projective plane. A path in the projective plane is a continuous map from the unit interval [0,1]. We can lift such a path to the sphere by choosing one of the two sphere points mapping to the first point on the path, then maintain continuity. In this case, each of the two starting points forces a unique path on the sphere, the lift of the path in the projective plane. Thus in the category of topological spaces with continuous maps as morphisms, we have
formula_0
Algebraic logic.
The notations of first-order predicate logic are streamlined when quantifiers are relegated to established domains and ranges of binary relations. Gunther Schmidt and Michael Winter have illustrated the method of lifting traditional logical expressions of topology to calculus of relations in their book "Relational Topology".
They aim "to lift concepts to a relational level making them point free as well as quantifier free, thus
liberating them from the style of first order predicate logic and approaching the clarity of algebraic reasoning."
For example, a partial function "M" corresponds to the inclusion formula_1 where formula_2 denotes the identity relation on the range of "M". "The notation for quantification is hidden and stays deeply incorporated in the typing of the relational operations (here transposition and composition) and their rules."
Circle maps.
For maps of a circle, the definition of a lift to the real line is slightly different (a common application is the calculation of rotation number). Given a map on a circle, formula_3, a lift of formula_4, formula_5, is any map on the real line, formula_6, for which there exists a projection (or, covering map), formula_7, such that formula_8.
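A small numeric illustration (not from the article): a lift of the standard circle map and the rotation-number estimate obtained by iterating the lift; the parameter values are arbitrary.

```python
import math

a, b = 0.4, 0.25                                                        # example parameters
F = lambda x: x + a + (b / (2 * math.pi)) * math.sin(2 * math.pi * x)   # lift on R
proj = lambda x: x % 1.0                                                # covering map R -> R/Z

# F(x + 1) = F(x) + 1, so proj(F(x)) depends only on proj(x): proj o F = T o proj.
x0 = 0.0
x, n = x0, 10_000
for _ in range(n):
    x = F(x)
print("point on the circle after", n, "iterations:", proj(x))
print("rotation number ~", (x - x0) / n)          # lim (F^n(x) - x) / n
```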
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\n f\\colon\\, &[0,1] \\to \\mathbb{RP}^2 &&\\ \\text{ (projective plane path)} \\\\\n g\\colon\\, &S^2 \\to \\mathbb{RP}^2 &&\\ \\text{ (covering map)} \\\\\n h\\colon\\, &[0,1] \\to S^2 &&\\ \\text{ (sphere path)} \n\\end{align}"
},
{
"math_id": 1,
"text": "M^T ; M \\subseteq I"
},
{
"math_id": 2,
"text": "I"
},
{
"math_id": 3,
"text": "T:\\text{S}\\rightarrow\\text{S}"
},
{
"math_id": 4,
"text": "T"
},
{
"math_id": 5,
"text": "F_T"
},
{
"math_id": 6,
"text": "F_T:\\mathbb{R}\\rightarrow\\mathbb{R}"
},
{
"math_id": 7,
"text": "\\pi: \\mathbb{R} \\rightarrow \\text{S}"
},
{
"math_id": 8,
"text": "\\pi \\circ F_T = T \\circ \\pi"
}
] | https://en.wikipedia.org/wiki?curid=10504570 |
10505383 | Popper's experiment | Proposal to test the uncertainty principle of quantum mechanics
Popper's experiment is an experiment proposed by the philosopher Karl Popper to test aspects of the uncertainty principle in quantum mechanics.
History.
In fact, as early as 1934, Popper started criticising the increasingly more accepted Copenhagen interpretation, a popular subjectivist interpretation of quantum mechanics. Therefore, in his most famous book "Logik der Forschung" he proposed a first experiment alleged to empirically discriminate between the Copenhagen Interpretation and a realist ensemble interpretation, which he advocated. Einstein, however, wrote a letter to Popper about the experiment in which he raised some crucial objections and Popper himself declared that this first attempt was "a gross mistake for which I have been deeply sorry and ashamed of ever since".
Popper, however, came back to the foundations of quantum mechanics from 1948, when he developed his criticism of determinism in both quantum and classical physics.
As a matter of fact, Popper greatly intensified his research activities on the foundations of quantum mechanics throughout the 1950s and 1960s developing his interpretation of quantum mechanics in terms of real existing probabilities (propensities), also thanks to the support of a number of distinguished physicists (such as David Bohm).
In 1980, Popper proposed perhaps his more important, yet overlooked, contribution to QM: a "new simplified version of the EPR experiment".
The experiment was however published only two years later, in the third volume of the "Postscript" to the "Logic of Scientific Discovery".
The most widely known interpretation of quantum mechanics is the Copenhagen interpretation put forward by Niels Bohr and his school. It maintains that observations lead to a wavefunction collapse, thereby suggesting the counter-intuitive result that two well separated, non-interacting systems require action-at-a-distance. Popper argued that such non-locality conflicts with common sense, and would lead to a subjectivist interpretation of phenomena, depending on the role of the 'observer'.
While the EPR argument was always meant to be a thought experiment, put forward to shed light on the intrinsic paradoxes of QM, Popper proposed an experiment which could have been experimentally implemented and participated at a physics conference organised in Bari in 1983, to present his experiment and propose to the experimentalists to carry it out.
The actual realisation of Popper's experiment required new techniques which would make use of the phenomenon of spontaneous parametric down-conversion but had not yet been exploited at that time, so his experiment was eventually performed only in 1999, five years after Popper had died.
Description.
Popper's experiment of 1980 exploits couples of entangled particles, in order to put to the test Heisenberg's uncertainty principle.
Indeed, Popper maintains: "I wish to suggest a crucial experiment to "test" whether knowledge alone is sufficient to create 'uncertainty' and, with it, scatter (as is contended under the Copenhagen interpretation), or whether it is the physical situation that is responsible for the scatter."
Popper's proposed experiment consists of a low-intensity source of particles that can generate pairs of particles traveling to the left and to the right along the "x"-axis. The beam's low intensity is "so that the probability is high that two particles recorded at the same time on the left and on the right are those which have actually interacted before emission."
There are two slits, one each in the paths of the two particles. Behind the slits are semicircular arrays of counters which can detect the particles after they pass through the slits (see Fig. 1). "These counters are coincident counters [so] that they only detect particles that have passed at the same time through A and B."
Popper argued that because the slits localize the particles to a narrow region along the "y"-axis, from the uncertainty principle they experience large uncertainties in the "y"-components of their momenta. This larger spread in the momentum will show up as particles being detected even at positions that lie outside the regions where particles would normally reach based on their initial momentum spread.
Popper suggests that we count the particles in coincidence, i.e., we count only those particles behind slit B, whose partner has gone through slit A. Particles which are not able to pass through slit A are ignored.
The Heisenberg scatter for both the beams of particles going to the right and to the left, is tested "by making the two slits A and B wider or narrower. If the slits are narrower, then counters should come into play which are higher up and lower down, seen from the slits. The coming into play of these counters is indicative of the wider scattering angles which go with a narrower slit, according to the Heisenberg relations."
Now the slit at A is made very small and the slit at B very wide. Popper wrote that, according to the EPR argument, we have measured position "y" for both particles (the one passing through A and the one passing through B) with the precision formula_0, and not just for the particle passing through slit A. This is because from the initial entangled EPR state we can calculate the position of particle 2, once the position of particle 1 is known, with approximately the same precision. We can do this, argues Popper, even though slit B is wide open.
Therefore, Popper states that "fairly precise "knowledge"" about the y position of particle 2 is made; its "y" position is measured indirectly. And since it is, according to the Copenhagen interpretation, our "knowledge" which is described by the theory – and especially by the Heisenberg relations — it should be expected that the momentum formula_1 of particle 2 scatters as much as that of particle 1, even though the slit A is much narrower than the widely opened slit at B.
Now the scatter can, in principle, be tested with the help of the counters. If the Copenhagen interpretation is correct, then such counters on the far side of B that are indicative of a wide scatter (and of a narrow slit) should now count coincidences: counters that did not count any particles before the slit A was narrowed.
To sum up: if the Copenhagen interpretation is correct, then any increase in the precision in the measurement of our "mere knowledge" of the particles going through slit B should increase their scatter.
Popper was inclined to believe that the test would decide against the Copenhagen interpretation, as it is applied to Heisenberg's uncertainty principle.
If the test decided in favor of the Copenhagen interpretation, Popper argued, it could be interpreted as indicative of action at a distance.
The debate.
Many viewed Popper's experiment as a crucial test of quantum mechanics, and there was a debate on what result an actual realization of the experiment would yield.
In 1985, Sudbery pointed out that the EPR state, which could be written as formula_2, already contained an infinite spread in momenta (tacit in the integral over k), so no further spread could be seen by localizing one particle. Although it pointed to a crucial flaw in Popper's argument, its full implication was not understood.
Kripps theoretically analyzed Popper's experiment and predicted that narrowing slit A would lead to momentum spread increasing at slit B. Kripps also argued that his result was based just on the formalism of quantum mechanics, without any interpretational problem. Thus, if Popper was challenging anything, he was challenging the central formalism of quantum mechanics.
In 1987 there came a major objection to Popper's proposal from Collet and Loudon. They pointed out that because the particle pairs originating from the source had a zero total momentum, the source could not have a sharply defined position. They showed that once the uncertainty in the position of the source is taken into account, the blurring introduced washes out the Popper effect.
Furthermore, Redhead analyzed Popper's experiment with a broad source and concluded that it could not yield the effect that Popper was seeking.
Realizations.
Kim–Shih's experiment.
Popper's experiment was realized in 1999 by Yoon-Ho Kim & Yanhua Shih using a spontaneous parametric down-conversion photon source. They did not observe an extra spread in the momentum of particle 2 due to particle 1 passing through a narrow slit. They write: "Indeed, it is astonishing to see that the experimental results agree with Popper’s prediction. Through quantum entanglement one may learn the precise knowledge of a photon’s position and would therefore expect a greater uncertainty in its momentum under the usual Copenhagen interpretation of the uncertainty relations. However, the measurement shows that the momentum does not experience a corresponding increase in uncertainty. Is this a violation of the uncertainty principle?"
Rather, the momentum spread of particle 2 (observed in coincidence with particle 1 passing through slit A) was narrower than its momentum spread in the initial state.
They concluded that:"Popper and EPR were correct in the prediction of the physical outcomes of their experiments. However, Popper and EPR made the same error by applying the results of two-particle physics to the explanation of the behavior of an individual particle. The two-particle entangled state is not the state of two individual particles. Our experimental result is emphatically NOT a violation of the uncertainty principle which governs the behavior of an individual quantum."
This led to a renewed heated debate, with some even going to the extent of claiming that Kim and Shih's experiment had demonstrated that there is no non-locality in quantum mechanics.
Unnikrishnan (2001), discussing Kim and Shih's result, wrote that the result "is a solid proof that there is no state-reduction-at-a-distance. ... Popper's experiment and its analysis forces us to radically change the current held view on quantum non-locality."
Short criticized Kim and Shih's experiment, arguing that because of the finite size of the source, the localization of particle 2 is imperfect, which leads to a smaller momentum spread than expected. However, Short's argument implies that if the source were improved, we should see a spread in the momentum of particle 2.
Sancho carried out a theoretical analysis of Popper's experiment, using the path-integral approach, and found a similar kind of narrowing in the momentum spread of particle 2, as was observed by Kim and Shih. Although this calculation did not give them any deep insight, it indicated that the experimental result of Kim-Shih agreed with quantum mechanics. It did not say anything about what bearing it has on the Copenhagen interpretation, if any.
Ghost diffraction.
Popper's conjecture has also been tested experimentally in the so-called two-particle ghost interference experiment. This experiment was not carried out with the purpose of testing Popper's ideas, but ended up giving a conclusive result about Popper's test. In this experiment two entangled photons travel in different directions. Photon 1 goes through a slit, but there is no slit in the path of photon 2. However, photon 2, if detected in coincidence with a fixed detector behind the slit detecting photon 1, shows a diffraction pattern. The width of the diffraction pattern for photon 2 increases when the slit in the path of photon 1 is narrowed. Thus, an increase in the precision of knowledge about photon 2, obtained by detecting photon 1 behind the slit, leads to an increase in the scatter of photon 2.
Predictions according to quantum mechanics.
Tabish Qureshi has published the following analysis of Popper's argument.
The ideal EPR state is written as formula_3, where the two labels in the "ket" state represent the positions or momenta of the two particles. This implies perfect correlation, meaning that detecting particle 1 at position formula_4 will also lead to particle 2 being detected at formula_4. If particle 1 is measured to have a momentum formula_5, particle 2 will be detected to have a momentum formula_6. The particles in this state have infinite momentum spread, and are infinitely delocalized. However, in the real world, correlations are always imperfect. Consider the following entangled state
formula_7
where formula_8 represents a finite momentum spread, and formula_9 is a measure of the position spread of the particles. The uncertainties in position and momentum, for the two particles can be written as
formula_10
The action of a narrow slit on particle 1 can be thought of as reducing it to a narrow Gaussian state:
formula_11.
This will reduce the state of particle 2 to
formula_12.
The momentum uncertainty of particle 2 can now be calculated, and is given by
formula_13
If we go to the extreme limit of slit A being infinitesimally narrow (formula_14), the momentum uncertainty of particle 2 is formula_15, which is exactly what the momentum spread was to begin with. In fact, one can show that the momentum spread of particle 2, conditioned on particle 1 going through slit A, is always less than or equal to formula_16 (the initial spread), for any value of formula_17, and formula_9. Thus, particle 2 does not acquire any extra momentum spread than it already had. This is the prediction of standard quantum mechanics. So, the momentum spread of particle 2 will always be smaller than what was contained in the original beam. This is what was actually seen in the experiment of Kim and Shih. Popper's proposed experiment, if carried out in this way, is incapable of testing the Copenhagen interpretation of quantum mechanics.
On the other hand, if slit A is gradually narrowed, the momentum spread of particle 2 (conditioned on the detection of particle 1 behind slit A) will show a gradual increase (never beyond the initial spread, of course). This is what quantum mechanics predicts. Popper had said
"...if the Copenhagen interpretation is correct, then any increase in the precision in the measurement of our mere knowledge of the particles going through slit B should increase their scatter."
This particular aspect can be experimentally tested.
Faster-than-light signalling.
The expected additional momentum scatter which Popper wrongly attributed to the Copenhagen interpretation would allow faster-than-light communication, which is excluded by the no-communication theorem in quantum mechanics. Note however that both Collet and Loudon and Qureshi compute that scatter decreases with decreasing the size of slit A, contrary to the increase predicted by Popper. There was some controversy about this decrease also allowing superluminal communication. But the reduction is of the standard deviation of the conditional distribution of the position of particle 2 knowing that particle 1 did go through slit A, since we are only counting coincident detection. The reduction in conditional distribution allows for the unconditional distribution to remain the same, which is the only thing that matters to exclude superluminal communication. Also note that the conditional distribution would be different from the unconditional distribution in classical physics as well. But measuring the conditional distribution after slit B requires the information on the result at slit A, which has to be communicated classically, so that the conditional distribution cannot be known as soon as the measurement is made at slit A but is delayed by the time required to transmit that information.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta y"
},
{
"math_id": 1,
"text": "p_y"
},
{
"math_id": 2,
"text": "\\psi(y_1,y_2)\n= \\int_{-\\infty}^\\infty e^{iky_1}e^{-iky_2}\\,dk"
},
{
"math_id": 3,
"text": "|\\psi\\rangle = \\int_{-\\infty}^\\infty |y,y\\rangle \\, dy = \\int_{-\\infty}^\\infty |p,-p\\rangle \\, dp"
},
{
"math_id": 4,
"text": "x_0"
},
{
"math_id": 5,
"text": "p_0"
},
{
"math_id": 6,
"text": "-p_0"
},
{
"math_id": 7,
"text": "\\psi(y_1, y_2) =\n A\\!\\int_{-\\infty}^\\infty dp e^{-\\frac{1}{4}p^2\\sigma^2} e^{-\\frac{i}{\\hbar}py_2} e^{\\frac{i}{\\hbar}py_1} \\exp\\left[-\\frac{\\left(y_1 + y_2\\right)^2}{16\\Omega^2}\\right]\n"
},
{
"math_id": 8,
"text": "\\sigma"
},
{
"math_id": 9,
"text": "\\Omega"
},
{
"math_id": 10,
"text": "\n \\Delta p_2 = \\Delta p_1 = \\sqrt{\\sigma^2 + \\frac{\\hbar^2}{16\\Omega^2}},\\qquad\n \\Delta y_1 = \\Delta y_2 = \\sqrt{\\Omega^2 + \\frac{\\hbar^2}{16\\sigma^2}}.\n"
},
{
"math_id": 11,
"text": "\\phi_1(y_1) = \\frac{1}{\\left(2\\pi\\epsilon^2\\right)^\\frac{1}{4}} e^{-\\frac{y_1^2}{4\\epsilon^2}}"
},
{
"math_id": 12,
"text": "\\phi_2(y_2) = \\!\\int_{-\\infty}^\\infty \\psi(y_1, y_2) \\phi_1^*(y_1) dy_1"
},
{
"math_id": 13,
"text": "\\Delta p_2 =\n \\sqrt{\\frac{\\sigma^2\\left(1 + \\frac{\\epsilon^2}{\\Omega^2}\\right) + \\frac{\\hbar^2}{16\\Omega^2}}{1 + 4\\epsilon^2\\left(\\frac{\\sigma^2}{\\hbar^2} + \\frac{1}{16\\Omega^2}\\right)}}.\n"
},
{
"math_id": 14,
"text": "\\epsilon\\to 0"
},
{
"math_id": 15,
"text": "\\lim_{\\epsilon\\to 0} \\Delta p_2 = \\sqrt{\\sigma^2 + \\hbar^2/16\\Omega^2}"
},
{
"math_id": 16,
"text": "\\sqrt{\\sigma^2 + \\hbar^2/16\\Omega^2}"
},
{
"math_id": 17,
"text": "\\epsilon, \\sigma"
}
] | https://en.wikipedia.org/wiki?curid=10505383 |
1050551 | Multiple-criteria decision analysis | Operations research that evaluates multiple conflicting criteria in decision making
Multiple-criteria decision-making (MCDM) or multiple-criteria decision analysis (MCDA) is a sub-discipline of operations research that explicitly evaluates multiple conflicting criteria in decision making (both in daily life and in settings such as business, government and medicine). It is also known as multiple attribute utility theory, multiple attribute value theory, multiple attribute preference theory, and multi-objective decision analysis.
Conflicting criteria are typical in evaluating options: cost or price is usually one of the main criteria, and some measure of quality is typically another criterion, easily in conflict with the cost. In purchasing a car, cost, comfort, safety, and fuel economy may be some of the main criteria we consider – it is unusual that the cheapest car is the most comfortable and the safest one. In portfolio management, managers are interested in getting high returns while simultaneously reducing risks; however, the stocks that have the potential of bringing high returns typically carry high risk of losing money. In a service industry, customer satisfaction and the cost of providing service are fundamental conflicting criteria.
In their daily lives, people usually weigh multiple criteria implicitly and may be comfortable with the consequences of such decisions that are made based on only intuition. On the other hand, when stakes are high, it is important to properly structure the problem and explicitly evaluate multiple criteria. In making the decision of whether to build a nuclear power plant or not, and where to build it, there are not only very complex issues involving multiple criteria, but there are also multiple parties who are deeply affected by the consequences.
Structuring complex problems well and considering multiple criteria explicitly leads to more informed and better decisions. There have been important advances in this field since the start of the modern multiple-criteria decision-making discipline in the early 1960s. A variety of approaches and methods, many implemented by specialized decision-making software, have been developed for their application in an array of disciplines, ranging from politics and business to the environment and energy.
Foundations, concepts, definitions.
MCDM or MCDA are acronyms for "multiple-criteria decision-making" and "multiple-criteria decision analysis". Stanley Zionts helped popularize the acronym with his 1979 article "MCDM – If not a Roman Numeral, then What?", intended for an entrepreneurial audience.
MCDM is concerned with structuring and solving decision and planning problems involving multiple criteria. The purpose is to support decision-makers facing such problems. Typically, there does not exist a unique optimal solution for such problems and it is necessary to use decision-makers' preferences to differentiate between solutions.
"Solving" can be interpreted in different ways. It could correspond to choosing the "best" alternative from a set of available alternatives (where "best" can be interpreted as "the most preferred alternative" of a decision-maker). Another interpretation of "solving" could be choosing a small set of good alternatives, or grouping alternatives into different preference sets. An extreme interpretation could be to find all "efficient" or "nondominated" alternatives (which we will define shortly).
The difficulty of the problem originates from the presence of more than one criterion. There is no longer a unique optimal solution to an MCDM problem that can be obtained without incorporating preference information. The concept of an optimal solution is often replaced by the set of nondominated solutions. A solution is called nondominated if it is not possible to improve it in any criterion without sacrificing it in another. Therefore, it makes sense for the decision-maker to choose a solution from the nondominated set. Otherwise, they could do better in terms of some or all of the criteria, and not do worse in any of them. Generally, however, the set of nondominated solutions is too large to be presented to the decision-maker for the final choice. Hence we need tools that help the decision-maker focus on the preferred solutions (or alternatives). Normally one has to "tradeoff" certain criteria for others.
MCDM has been an active area of research since the 1970s. There are several MCDM-related organizations including the International Society on Multi-criteria Decision Making, Euro Working Group on MCDA, and INFORMS Section on MCDM. For a history see: Köksalan, Wallenius and Zionts (2011).
MCDM draws upon knowledge in many fields including:
A typology.
There are different classifications of MCDM problems and methods. A major distinction between MCDM problems is based on whether the solutions are explicitly or implicitly defined.
Whether it is an evaluation problem or a design problem, preference information of DMs is required in order to differentiate between solutions. The solution methods for MCDM problems are commonly classified based on the timing of preference information obtained from the DM.
There are methods that require the DM's preference information at the start of the process, transforming the problem into essentially a single criterion problem. These methods are said to operate by "prior articulation of preferences". Methods based on estimating a value function or using the concept of "outranking relations", analytical hierarchy process, and some rule-based decision methods try to solve multiple criteria evaluation problems utilizing prior articulation of preferences. Similarly, there are methods developed to solve multiple-criteria design problems using prior articulation of preferences by constructing a value function. Perhaps the most well-known of these methods is goal programming. Once the value function is constructed, the resulting single objective mathematical program is solved to obtain a preferred solution.
Some methods require preference information from the DM throughout the solution process. These are referred to as interactive methods or methods that require "progressive articulation of preferences". These methods have been well-developed for both the multiple criteria evaluation (see for example, Geoffrion, Dyer and Feinberg, 1972, and Köksalan and Sagala, 1995 ) and design problems (see Steuer, 1986).
Multiple-criteria design problems typically require the solution of a series of mathematical programming models in order to reveal implicitly defined solutions. For these problems, a representation or approximation of "efficient solutions" may also be of interest. This category is referred to as "posterior articulation of preferences", implying that the DM's involvement starts posterior to the explicit revelation of "interesting" solutions (see for example Karasakal and Köksalan, 2009).
When the mathematical programming models contain integer variables, the design problems become harder to solve. Multiobjective Combinatorial Optimization (MOCO) constitutes a special category of such problems posing substantial computational difficulty (see Ehrgott and Gandibleux, 2002, for a review).
Representations and definitions.
The MCDM problem can be represented in the criterion space or the decision space. Alternatively, if different criteria are combined by a weighted linear function, it is also possible to represent the problem in the weight space. Below are the demonstrations of the criterion and weight spaces as well as some formal definitions.
Criterion space representation.
Let us assume that we evaluate solutions in a specific problem situation using several criteria. Let us further assume that more is better in each criterion. Then, among all possible solutions, we are ideally interested in those solutions that perform well in all considered criteria. However, it is unlikely to have a single solution that performs well in all considered criteria. Typically, some solutions perform well in some criteria and some perform well in others. Finding a way of trading off between criteria is one of the main endeavors in the MCDM literature.
Mathematically, the MCDM problem corresponding to the above arguments can be represented as
"max" q
subject to
q ∈ Q
where q is the vector of "k" criterion functions (objective functions) and Q is the feasible set, Q ⊆ R"k".
If Q is defined explicitly (by a set of alternatives), the resulting problem is called a multiple-criteria evaluation problem.
If Q is defined implicitly (by a set of constraints), the resulting problem is called a multiple-criteria design problem.
The quotation marks are used to indicate that the maximization of a vector is not a well-defined mathematical operation. This corresponds to the argument that we will have to find a way to resolve the trade-off between criteria (typically based on the preferences of a decision maker) when a solution that performs well in all criteria does not exist.
Decision space representation.
The decision space corresponds to the set of possible decisions that are available to us. The criteria values will be consequences of the decisions we make. Hence, we can define a corresponding problem in the decision space. For example, in designing a product, we decide on the design parameters (decision variables) each of which affects the performance measures (criteria) with which we evaluate our product.
Mathematically, a multiple-criteria design problem can be represented in the decision space as follows:
formula_0
where X is the feasible set and x is the decision variable vector of size n.
A well-developed special case is obtained when X is a polyhedron defined by linear inequalities and equalities. If all the objective functions are linear in terms of the decision variables, this variation leads to multiple objective linear programming (MOLP), an important subclass of MCDM problems.
There are several definitions that are central in MCDM. Two closely related definitions are those of nondominance (defined based on the criterion space representation) and efficiency (defined based on the decision variable representation).
"Definition 1." q* ∈ Q is nondominated if there does not exist another q ∈ Q such that q ≥ q* and q ≠ q*.
Roughly speaking, a solution is nondominated so long as it is not inferior to any other available solution in all the considered criteria.
"Definition 2." x* ∈ X is efficient if there does not exist another x ∈ X such that f(x) ≥ f(x*) and f(x) ≠ f(x*).
If an MCDM problem represents a decision situation well, then the most preferred solution of a DM has to be an efficient solution in the decision space, and its image is a nondominated point in the criterion space. The following definitions are also important.
"Definition 3." q* ∈ Q is weakly nondominated if there does not exist another q ∈ Q such that q > q*.
"Definition 4." x* ∈ X is weakly efficient if there does not exist another x ∈ X such that f(x) > f(x*).
Weakly nondominated points include all nondominated points and some special dominated points. The importance of these special dominated points comes from the fact that they commonly appear in practice and special care is necessary to distinguish them from nondominated points. If, for example, we maximize a single objective, we may end up with a weakly nondominated point that is dominated. The dominated points of the weakly nondominated set are located either on vertical or horizontal planes (hyperplanes) in the criterion space.
"Ideal point": (in criterion space) represents the best (the maximum for maximization problems and the minimum for minimization problems) of each objective function and typically corresponds to an infeasible solution.
"Nadir point": (in criterion space) represents the worst (the minimum for maximization problems and the maximum for minimization problems) of each objective function among the points in the nondominated set and is typically a dominated point.
The ideal point and the nadir point are useful to the DM to get the "feel" of the range of solutions (although it is not straightforward to find the nadir point for design problems having more than two criteria).
Illustrations of the decision and criterion spaces.
The following two-variable MOLP problem in the decision variable space will help demonstrate some of the key concepts graphically.
formula_1
In Figure 1, the extreme points "e" and "b" maximize the first and second objectives, respectively. The red boundary between those two extreme points represents the efficient set. It can be seen from the figure that, for any feasible solution outside the efficient set, it is possible to improve both objectives by moving to some point on the efficient set. Conversely, for any point on the efficient set, it is not possible to improve both objectives by moving to any other feasible solution. At these solutions, one has to give up some of one objective in order to improve the other.
Due to its simplicity, the above problem can be represented in criterion space by replacing the x's with the f 's as follows:
Max f1
Max f2
subject to
"f"1 + 2"f"2 ≤ 12
2"f"1 + "f"2 ≤ 12
"f"1 + "f"2 ≤ 7
"f"1 – "f"2 ≤ 9
−"f"1 + "f"2 ≤ 9
"f"1 + 2"f"2 ≥ 0
2"f"1 + "f"2 ≥ 0
We present the criterion space graphically in Figure 2. It is easier to detect the nondominated points (corresponding to efficient solutions in the decision space) in the criterion space. The north-east boundary of the feasible region constitutes the set of nondominated points (for maximization problems).
Generating nondominated solutions.
There are several ways to generate nondominated solutions. We will discuss two of these. The first approach can generate a special class of nondominated solutions whereas the second approach can generate any nondominated solution.
If we combine the multiple criteria into a single criterion by multiplying each criterion with a positive weight and summing up the weighted criteria, then the solution to the resulting single criterion problem is a special efficient solution. These special efficient solutions appear at corner points of the set of available solutions. Efficient solutions that are not at corner points have special characteristics and this method is not capable of finding such points. Mathematically, we can represent this situation as
max wTq = wTf(x), w > 0
subject to
x ∈ X
By varying the weights, weighted sums can be used for generating efficient extreme point solutions for design problems, and supported (convex nondominated) points for evaluation problems.
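For the two-variable example of the previous section, the weighted-sum approach can be illustrated with a short script. The sketch below is an illustration only; it uses SciPy's linprog and the constraint data of that example, and maximizes the weighted objective for a few positive weight vectors to recover efficient extreme points.

```python
import numpy as np
from scipy.optimize import linprog

# Constraints of the example MOLP: A_ub @ x <= b_ub, with x >= 0.
A_ub = np.array([[1, 0], [0, 1], [1, 1], [-1, 1], [1, -1]], dtype=float)
b_ub = np.array([4, 4, 7, 3, 3], dtype=float)

# Objective rows: f1 = -x1 + 2*x2 and f2 = 2*x1 - x2.
F = np.array([[-1, 2], [2, -1]], dtype=float)

for w in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:
    c = -(w[0] * F[0] + w[1] * F[1])          # linprog minimizes, so negate the weighted sum
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    x = res.x
    print(f"weights {w}: x = {np.round(x, 3)}, (f1, f2) = {np.round(F @ x, 3)}")
# Each positive weight vector picks out an efficient extreme point of the example.
```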
Achievement scalarizing functions also combine multiple criteria into a single criterion by weighting them in a very special way. They create rectangular contours going away from a reference point towards the available efficient solutions. This special structure empowers achievement scalarizing functions to reach any efficient solution. This is a powerful property that makes these functions very useful for MCDM problems.
Mathematically, we can represent the corresponding problem as
Min "s"(g, q, w, "ρ") = Min {max"i" [("g""i" − "q""i")/"w""i" ] + "ρ" Σ"i" ("g""i" − "q""i")},
subject to
q ∈ Q
The achievement scalarizing function can be used to project any point (feasible or infeasible) on the efficient frontier. Any point (supported or not) can be reached. The second term in the objective function is required to avoid generating inefficient solutions. Figure 3 demonstrates how a feasible point, g1, and an infeasible point, g2, are projected onto the nondominated points, q1 and q2, respectively, along the direction w using an achievement scalarizing function. The dashed and solid contours correspond to the objective function contours with and without the second term of the objective function, respectively.
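For the same example, the achievement scalarizing problem can be solved as a linear program by introducing an auxiliary variable for the max term. The sketch below is an illustration only; the reference points, the weights and the small augmentation parameter ρ are arbitrary choices, and the feasible criterion region is the one given earlier.

```python
import numpy as np
from scipy.optimize import linprog

# Feasible criterion region Q of the example, written as A_q @ q <= b_q.
A_q = np.array([[1, 2], [2, 1], [1, 1], [1, -1], [-1, 1], [-1, -2], [-2, -1]], dtype=float)
b_q = np.array([12, 12, 7, 9, 9, 0, 0], dtype=float)

def project(g, w, rho=1e-3):
    """Minimize max_i[(g_i - q_i)/w_i] + rho * sum_i(g_i - q_i) over q in Q,
    via an LP with decision vector z = [t, q1, q2]."""
    c = np.array([1.0, -rho, -rho])           # the constant term rho*(g1 + g2) is dropped
    A_ub = [[-1.0, -1.0 / w[0], 0.0],         # (g1 - q1)/w1 <= t
            [-1.0, 0.0, -1.0 / w[1]]]         # (g2 - q2)/w2 <= t
    b_ub = [-g[0] / w[0], -g[1] / w[1]]
    for row, rhs in zip(A_q, b_q):            # q must lie in Q
        A_ub.append([0.0, row[0], row[1]])
        b_ub.append(rhs)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 3)
    return np.round(res.x[1:], 3)

print(project(g=(2.0, 2.0), w=(1.0, 1.0)))    # projection of a feasible reference point
print(project(g=(8.0, 8.0), w=(1.0, 1.0)))    # projection of an infeasible reference point
```

In this instance both reference points are projected onto the same nondominated face, illustrating that feasible and infeasible reference points are handled uniformly.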
Solving MCDM problems.
Different schools of thought have developed for solving MCDM problems (both of the design and evaluation type). For a bibliometric study showing their development over time, see Bragge, Korhonen, H. Wallenius and J. Wallenius [2010].
Multiple objective mathematical programming school
(1) "Vector maximization": The purpose of vector maximization is to approximate the nondominated set; originally developed for Multiple Objective Linear Programming problems (Evans and Steuer, 1973; Yu and Zeleny, 1975).
(2) "Interactive programming": Phases of computation alternate with phases of decision-making (Benayoun et al., 1971; Geoffrion, Dyer and Feinberg, 1972; Zionts and Wallenius, 1976; Korhonen and Wallenius, 1988). No explicit knowledge of the DM's value function is assumed.
Goal programming school
The purpose is to set a priori target values for goals, and to minimize weighted deviations from these goals. Both importance weights and lexicographic pre-emptive weights have been used (Charnes and Cooper, 1961).
Fuzzy-set theorists
Fuzzy sets were introduced by Zadeh (1965) as an extension of the classical notion of sets. This idea is used in many MCDM algorithms to model and solve fuzzy problems.
Ordinal data based methods
Ordinal data has a wide application in real-world situations. In this regard, some MCDM methods were designed to handle ordinal data as input data; examples include the Ordinal Priority Approach and the Qualiflex method.
Multi-attribute utility theorists
Multi-attribute utility or value functions are elicited and used to identify the most preferred alternative or to rank order the alternatives. Elaborate interview techniques, which exist for eliciting linear additive utility functions and multiplicative nonlinear utility functions, may be used (Keeney and Raiffa, 1976). Another approach is to elicit value functions indirectly by asking the decision-maker a series of pairwise ranking questions involving choosing between hypothetical alternatives (PAPRIKA method; Hansen and Ombler, 2008).
French school
The French school focuses on decision aiding, in particular the ELECTRE family of outranking methods that originated in France during the mid-1960s. The method was first proposed by Bernard Roy (Roy, 1968).
Evolutionary multiobjective optimization school (EMO)
EMO algorithms start with an initial population, and update it by using processes designed to mimic natural survival-of-the-fittest principles and genetic variation operators to improve the average population from one generation to the next. The goal is to converge to a population of solutions which represent the nondominated set (Schaffer, 1984; Srinivas and Deb, 1994). More recently, there are efforts to incorporate preference information into the solution process of EMO algorithms (see Deb and Köksalan, 2010).
Grey system theory based methods
In the 1980s, Deng Julong proposed Grey System Theory (GST) and its first multiple-attribute decision-making model, called Deng's Grey relational analysis (GRA) model. Later, the grey systems scholars proposed many GST based methods like Liu Sifeng's Absolute GRA model, Grey Target Decision Making (GTDM) and Grey Absolute Decision Analysis (GADA).
Analytic hierarchy process (AHP)
The AHP first decomposes the decision problem into a hierarchy of subproblems. Then the decision-maker evaluates the relative importance of its various elements by pairwise comparisons. The AHP converts these evaluations to numerical values (weights or priorities), which are used to calculate a score for each alternative (Saaty, 1980). A consistency index measures the extent to which the decision-maker has been consistent in her responses. AHP is one of the more controversial techniques listed here, with some researchers in the MCDA community believing it to be flawed.
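The weighting step of the AHP can be sketched in a few lines. The sketch below is a minimal illustration with a made-up 3×3 pairwise comparison matrix; the priority vector is taken as the principal eigenvector, and the random index value used is the commonly cited one for three criteria.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three criteria (1-9 scale);
# A[i, j] states how strongly criterion i is preferred to criterion j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))             # index of the principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                     # normalized priorities

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)         # consistency index
RI = 0.58                                    # random index for n = 3
print("priorities:", np.round(weights, 3))
print("consistency ratio:", round(CI / RI, 3))   # values below about 0.1 are usually accepted
```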
Several papers reviewed the application of MCDM techniques in various disciplines such as fuzzy MCDM, classic MCDM, sustainable and renewable energy, VIKOR technique, transportation systems, service quality, TOPSIS method, energy management problems, e-learning, tourism and hospitality, SWARA and WASPAS methods.
MCDM methods.
The following MCDM methods are available, many of which are implemented by specialized decision-making software:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{align}\n\\max q & = f(x) = f(x_1,\\ldots,x_n) \\\\\n\\text{subject to} \\\\\nq\\in Q & = \\{f(x) : x\\in X,\\, X\\subseteq \\mathbb R^n\\}\n\\end{align}\n"
},
{
"math_id": 1,
"text": "\n\\begin{align}\n\\max f_1(\\mathbf{x}) & = -x_1 + 2x_2 \\\\\n\\max f_2(\\mathbf{x}) & = 2x_1 - x_2 \\\\\n\\text{subject to} \\\\\nx_1 & \\le 4 \\\\\nx_2 & \\le 4 \\\\\nx_1+x_2 & \\le 7 \\\\\n-x_1+x_2 & \\le 3 \\\\\nx_1 - x_2 & \\le 3 \\\\\nx_1, x_2 & \\ge 0\n\\end{align}\n"
}
] | https://en.wikipedia.org/wiki?curid=1050551 |
10506228 | Integral of secant cubed | Commonly encountered and tricky integral
The integral of secant cubed is a frequent and challenging indefinite integral of elementary calculus:
formula_0
where formula_1 is the inverse Gudermannian function, the integral of the secant function.
There are a number of reasons why this particular antiderivative is worthy of special attention. One is that it arises when evaluating integrals of the form
formula_2
where formula_3 is a constant, via the trigonometric substitution "x" = "a" tan "θ". In particular, it appears in the problems of:
* rectifying the parabola and the Archimedean spiral
* finding the surface area of the helicoid.
Derivations.
Integration by parts.
This antiderivative may be found by integration by parts, as follows:
formula_4
where
formula_5
Then
formula_6
Next add formula_7 to both sides:
formula_8
using the integral of the secant function, formula_9
Finally, divide both sides by 2:
formula_10
which was to be derived. A possible mnemonic is: "The integral of secant cubed is the average of the derivative and integral of secant".
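The result can be checked mechanically with a computer algebra system. The short sketch below (using SymPy as an independent check, not as part of the derivation) differentiates the claimed antiderivative and verifies that the residual against the cube of the secant vanishes.

```python
import sympy as sp

x = sp.symbols('x')
F = sp.Rational(1, 2) * (sp.sec(x) * sp.tan(x) + sp.log(sp.sec(x) + sp.tan(x)))

residual = sp.diff(F, x) - sp.sec(x)**3
print(sp.simplify(residual))                      # expected: 0
# Numerical spot-check at a few points in (-pi/2, pi/2) as well.
for v in (0.1, 0.5, 1.0):
    print(float(residual.subs(x, v)))             # ~0 up to rounding error
```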
Reduction to an integral of a rational function.
formula_11
where formula_12, so that formula_13. This admits a decomposition by partial fractions:
formula_14
Antidifferentiating term-by-term, one gets
formula_15
Hyperbolic functions.
Integrals of the form: formula_16 can be reduced using the Pythagorean identity if formula_17 is even or formula_17 and formula_18 are both odd. If formula_17 is odd and formula_18 is even, hyperbolic substitutions can be used to replace the nested integration by parts with hyperbolic power-reducing formulas.
formula_19
Note that formula_20 follows directly from this substitution.
formula_21
Higher odd powers of secant.
Just as the integration by parts above reduced the integral of secant cubed to the integral of secant to the first power, so a similar process reduces the integral of higher odd powers of secant to lower ones. This is the secant reduction formula:
formula_22
Even powers of tangent can be accommodated by using the binomial expansion to form an odd polynomial in secant, applying these formulae to the term with the largest power, and combining like terms.
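A brief sketch of the reduction formula in use is given below (a hypothetical helper written with SymPy for checking purposes; the recursion bottoms out at the integrals of the first and second powers of the secant). Differentiating the result and comparing with the integrand confirms the formula for small odd powers.

```python
import sympy as sp

x = sp.symbols('x')

def integral_sec_power(n):
    """Antiderivative of sec(x)**n via the reduction formula (constant of integration omitted)."""
    if n == 0:
        return x
    if n == 1:
        return sp.log(sp.sec(x) + sp.tan(x))
    if n == 2:
        return sp.tan(x)
    return (sp.sec(x)**(n - 2) * sp.tan(x)) / (n - 1) \
        + sp.Rational(n - 2, n - 1) * integral_sec_power(n - 2)

for n in (3, 5, 7):
    residual = sp.diff(integral_sec_power(n), x) - sp.sec(x)**n
    # Numerical spot-check of the residual at a few points in (-pi/2, pi/2).
    print(n, [round(float(residual.subs(x, v)), 10) for v in (0.2, 0.7, 1.1)])
```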
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\n\\int \\sec^3 x \\, dx\n &= \\tfrac12\\sec x \\tan x + \\tfrac12 \\int \\sec x\\, dx + C \\\\[6mu]\n &= \\tfrac12(\\sec x \\tan x + \\ln \\left|\\sec x + \\tan x\\right|) + C \\\\[6mu]\n &= \\tfrac12(\\sec x \\tan x + \\operatorname{gd}^{-1} x) + C, \\qquad |x| < \\tfrac12\\pi\n\\end{align}"
},
{
"math_id": 1,
"text": "\\operatorname{gd}^{-1}"
},
{
"math_id": 2,
"text": "\\int \\sqrt{a^2+x^2}\\,dx,"
},
{
"math_id": 3,
"text": "a"
},
{
"math_id": 4,
"text": "\\int \\sec^3 x \\, dx = \\int u\\,dv = uv - \\int v \\,du"
},
{
"math_id": 5,
"text": "\nu = \\sec x,\\quad dv = \\sec^2 x\\,dx,\\quad v\n = \\tan x,\\quad du = \\sec x \\tan x\\,dx.\n"
},
{
"math_id": 6,
"text": "\\begin{align}\n\\int \\sec^3 x \\, dx\n &= \\int (\\sec x)(\\sec^2 x)\\,dx \\\\\n &= \\sec x \\tan x - \\int \\tan x\\,(\\sec x \\tan x)\\,dx \\\\\n &= \\sec x \\tan x - \\int \\sec x \\tan^2 x\\,dx \\\\\n &= \\sec x \\tan x - \\int \\sec x\\, (\\sec^2 x - 1)\\,dx \\\\\n &= \\sec x \\tan x - \\left(\\int \\sec^3 x \\, dx - \\int \\sec x\\,dx\\right) \\\\\n &= \\sec x \\tan x - \\int \\sec^3 x \\, dx + \\int \\sec x\\,dx.\n\\end{align}"
},
{
"math_id": 7,
"text": "\\int\\sec^3 x \\,dx"
},
{
"math_id": 8,
"text": "\\begin{align}\n2 \\int \\sec^3 x \\, dx\n &= \\sec x \\tan x + \\int \\sec x\\,dx \\\\\n &= \\sec x \\tan x + \\ln\\left|\\sec x + \\tan x\\right| + C,\n\\end{align}"
},
{
"math_id": 9,
"text": "\\int \\sec x \\,dx = \\ln \\left|\\sec x + \\tan x\\right| + C."
},
{
"math_id": 10,
"text": "\n\\int \\sec^3 x \\, dx\n= \\tfrac12(\\sec x \\tan x + \\ln \\left|\\sec x + \\tan x\\right|) + C,\n"
},
{
"math_id": 11,
"text": "\n\\int \\sec^3 x \\, dx\n= \\int \\frac{dx}{\\cos^3 x}\n= \\int \\frac{\\cos x\\,dx}{\\cos^4 x}\n= \\int \\frac{\\cos x\\,dx}{(1-\\sin^2 x)^2}\n= \\int \\frac{du}{(1-u^2)^2}\n"
},
{
"math_id": 12,
"text": "u = \\sin x"
},
{
"math_id": 13,
"text": "du = \\cos x\\,dx"
},
{
"math_id": 14,
"text": "\n\\frac{1}{(1-u^2)^2}\n= \\frac{1}{(1+u)^2(1-u)^2}\n= \\frac{1}{4(1+u)} + \\frac{1}{4(1+u)^2} + \\frac{1}{4(1-u)} + \\frac{1}{4(1-u)^2}.\n"
},
{
"math_id": 15,
"text": "\\begin{align}\n\\int \\sec^3 x \\, dx\n &= \\tfrac14 \\ln |1+u| - \\frac{1}{4(1+u)} - \\tfrac14 \\ln|1-u| + \\frac{1}{4(1-u)} + C \\\\[6pt]\n &= \\tfrac14 \\ln \\Biggl| \\frac{1+u}{1-u} \\Biggl| + \\frac{u}{2(1-u^2)} + C \\\\[6pt]\n &= \\tfrac14 \\ln \\Biggl|\\frac{1+\\sin x}{1-\\sin x} \\Biggl| + \\frac{\\sin x}{2\\cos^2 x} + C\\\\[6pt]\n &= \\tfrac14 \\ln \\left|\\frac{1+\\sin x}{1-\\sin x}\\right| + \\tfrac12 \\sec x \\tan x + C \\\\[6pt]\n &= \\tfrac14 \\ln \\left|\\frac{(1+\\sin x)^2}{1-\\sin^2 x}\\right| + \\tfrac12 \\sec x \\tan x + C \\\\[6pt]\n &= \\tfrac14 \\ln \\left|\\frac{(1+\\sin x)^2}{\\cos^2 x}\\right| + \\tfrac12 \\sec x \\tan x + C \\\\[6pt]\n &= \\tfrac12 \\ln \\left|\\frac{1+\\sin x}{\\cos x}\\right| + \\tfrac12 \\sec x \\tan x + C \\\\[6pt]\n &= \\tfrac12 (\\ln|\\sec x + \\tan x| + \\sec x \\tan x) + C.\n\\end{align}"
},
{
"math_id": 16,
"text": "\\int \\sec^n x \\tan^m x\\, dx"
},
{
"math_id": 17,
"text": "n"
},
{
"math_id": 18,
"text": "m"
},
{
"math_id": 19,
"text": "\\begin{align}\n\\sec x &= \\cosh u \\\\[6pt]\n\\tan x &= \\sinh u \\\\[6pt]\n\\sec^2 x \\, dx &= \\cosh u \\, du \\text{ or } \\sec x \\tan x\\, dx = \\sinh u \\, du \\\\[6pt]\n\\sec x \\, dx &= \\, du \\text{ or } dx = \\operatorname{sech} u \\, du \\\\[6pt]\nu &= \\operatorname{arcosh} (\\sec x ) = \\operatorname{arsinh} ( \\tan x ) = \\ln|\\sec x + \\tan x|\n\\end{align}"
},
{
"math_id": 20,
"text": "\\int \\sec x \\, dx = \\ln|\\sec x + \\tan x|"
},
{
"math_id": 21,
"text": "\\begin{align}\n\\int \\sec^3 x \\, dx\n &= \\int \\cosh^2 u\\,du \\\\[6pt]\n &= \\tfrac12 \\int ( \\cosh 2u +1) \\,du \\\\[6pt]\n &= \\tfrac12 \\left( \\tfrac12 \\sinh2u + u\\right) + C\\\\[6pt]\n &= \\tfrac12 ( \\sinh u \\cosh u + u ) + C \\\\[6pt]\n &= \\tfrac12 (\\sec x \\tan x + \\ln \\left|\\sec x + \\tan x\\right|) + C\n\\end{align}"
},
{
"math_id": 22,
"text": "\n\\int \\sec^n x \\, dx\n= \\frac{\\sec^{n-2} x \\tan x}{n-1} \\,+\\, \\frac{n-2}{n-1}\\int \\sec^{n-2} x \\, dx \\qquad \\text{ (for }n \\ne 1\\text{)}\\,\\!\n"
}
] | https://en.wikipedia.org/wiki?curid=10506228 |
1050741 | Pythagorean trigonometric identity | Relation between sine and cosine
The Pythagorean trigonometric identity, also called simply the Pythagorean identity, is an identity expressing the Pythagorean theorem in terms of trigonometric functions. Along with the sum-of-angles formulae, it is one of the basic relations between the sine and cosine functions.
The identity is
formula_0
As usual, formula_1 means formula_2.
Proofs and their relationships to the Pythagorean theorem.
Proof based on right-angle triangles.
Similar triangles have the property that if we select the same angle in all of them, the ratio of the two sides defining the angle is the same regardless of which similar triangle is selected and of its actual size: the ratios depend upon the three angles, not on the lengths of the sides. Thus for either of the similar right triangles in the figure, the ratio of its horizontal side to its hypotenuse is the same, namely cos "θ".
The elementary definitions of the sine and cosine functions in terms of the sides of a right triangle are:
formula_3
formula_4
The Pythagorean identity follows by squaring both definitions above, and adding; the left-hand side of the identity then becomes
formula_5
which by the Pythagorean theorem is equal to 1. This definition is valid for all angles, since one can define formula_6 and formula_7 for the unit circle, and thus formula_8 and formula_9 for a circle of radius "c", handling other quadrants by reflecting our triangle in the "y"-axis and setting formula_10 and formula_11.
Alternatively, the identities found at Trigonometric symmetry, shifts, and periodicity may be employed. By the periodicity identities we can say that if the formula is true for −π < "θ" ≤ π, then it is true for all real "θ". Next we prove the identity in the range π/2 < "θ" ≤ π. To do this we let "t" = "θ" − π/2; "t" will now be in the range 0 < "t" ≤ π/2. We can then make use of squared versions of some basic shift identities (squaring conveniently removes the minus signs):
formula_12
All that remains is to prove it for −π < "θ" < 0; this can be done by squaring the symmetry identities to get
formula_13
Related identities.
The identities
formula_14
and
formula_15
are also called Pythagorean trigonometric identities. If one leg of a right triangle has length 1, then the tangent of the angle adjacent to that leg is the length of the other leg, and the secant of the angle is the length of the hypotenuse.
formula_16
and:
formula_17
In this way, this trigonometric identity involving the tangent and the secant follows from the Pythagorean theorem. The angle opposite the leg of length 1 (this angle can be labeled φ = π/2 − θ) has cotangent equal to the length of the other leg, and cosecant equal to the length of the hypotenuse. In that way, this trigonometric identity involving the cotangent and the cosecant also follows from the Pythagorean theorem.
Each of these identities can be obtained from the main Pythagorean identity: dividing it by the square of the cosine yields the identity involving the tangent and the secant, while dividing it by the square of the sine yields the identity involving the cotangent and the cosecant.
Proof using the unit circle.
The unit circle centered at the origin in the Euclidean plane is defined by the equation:
formula_18
Given an angle "θ", there is a unique point "P" on the unit circle at an anticlockwise angle of "θ" from the "x"-axis, and the "x"- and "y"-coordinates of "P" are:
formula_19
Consequently, from the equation for the unit circle:
formula_20
the Pythagorean identity.
In the figure, the point "P" has a "negative" x-coordinate, and is appropriately given by "x" = cos "θ", which is a negative number: cos "θ" = −cos(π−"θ"). Point "P" has a positive "y"-coordinate, and sin "θ" = sin(π−"θ") > 0. As "θ" increases from zero to the full circle "θ" = 2π, the sine and cosine change signs in the various quadrants to keep "x" and "y" with the correct signs. The figure shows how the sign of the sine function varies as the angle changes quadrant.
Because the "x"- and "y"-axes are perpendicular, this Pythagorean identity is equivalent to the Pythagorean theorem for triangles with hypotenuse of length 1 (which is in turn equivalent to the full Pythagorean theorem by applying a similar-triangles argument). See Unit circle for a short explanation.
Proof using power series.
The trigonometric functions may also be defined using power series, namely (for "x" an angle measured in radians):
formula_21
Using the multiplication formula for power series at Multiplication and division of power series (suitably modified to account for the form of the series here) we obtain
formula_22
In the expression for sin2, "n" must be at least 1, while in the expression for cos2, the constant term is equal to 1. The remaining terms of their sum are (with common factors removed)
formula_23
by the binomial theorem. Consequently,
formula_24
which is the Pythagorean trigonometric identity.
When the trigonometric functions are defined in this way, the identity in combination with the Pythagorean theorem shows that these power series parameterize the unit circle, which we used in the previous section. This definition constructs the sine and cosine functions in a rigorous fashion and proves that they are differentiable, so that in fact it subsumes the previous two.
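A quick numerical illustration of this argument (a sketch only, using truncated series in place of the full power series) sums the first few terms of each series and checks that the squares add to 1 within the truncation error.

```python
from math import factorial

def sin_series(x, terms=20):
    return sum((-1)**n * x**(2 * n + 1) / factorial(2 * n + 1) for n in range(terms))

def cos_series(x, terms=20):
    return sum((-1)**n * x**(2 * n) / factorial(2 * n) for n in range(terms))

for x in (0.3, 1.0, 2.5):
    s, c = sin_series(x), cos_series(x)
    print(x, s**2 + c**2)   # each value equals 1 up to floating-point and truncation error
```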
Proof using the differential equation.
Sine and cosine can be defined as the two solutions to the differential equation:
formula_25
satisfying respectively "y"(0) = 0, "y"′(0) = 1 and "y"(0) = 1, "y"′(0) = 0. It follows from the theory of ordinary differential equations that the first solution, sine, has the second, cosine, as its derivative, and it follows from this that the derivative of cosine is the negative of the sine. The identity is equivalent to the assertion that the function
formula_26
is constant and equal to 1. Differentiating using the chain rule gives:
formula_27
so "z" is constant. A calculation confirms that "z"(0) = 1, and "z" is a constant so "z" = 1 for all "x", so the Pythagorean identity is established.
A similar proof can be completed using power series as above to establish that the sine has as its derivative the cosine, and the cosine has as its derivative the negative sine. In fact, the definitions by ordinary differential equation and by power series lead to similar derivations of most identities.
This proof of the identity has no direct connection with Euclid's demonstration of the Pythagorean theorem.
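The differential-equation argument can likewise be illustrated numerically (a sketch, not a proof): integrate the equation from the two sets of initial conditions and check that the sum of squares of the two solutions stays equal to 1 along the trajectory.

```python
import numpy as np
from scipy.integrate import solve_ivp

# y'' + y = 0 written as the first-order system u' = v, v' = -u.
def rhs(t, y):
    u, v = y
    return [v, -u]

t_eval = np.linspace(0, 10, 201)
s = solve_ivp(rhs, (0, 10), [0.0, 1.0], t_eval=t_eval, rtol=1e-10, atol=1e-12)  # "sine"
c = solve_ivp(rhs, (0, 10), [1.0, 0.0], t_eval=t_eval, rtol=1e-10, atol=1e-12)  # "cosine"

z = s.y[0]**2 + c.y[0]**2
print(z.min(), z.max())   # both equal 1 up to the integration tolerance
```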
Proof using Euler's formula.
Using Euler's formula formula_28 and factoring formula_29 as the complex difference of two squares,
formula_30 | [
{
"math_id": 0,
"text": "\\sin^2 \\theta + \\cos^2 \\theta = 1."
},
{
"math_id": 1,
"text": "\\sin^2 \\theta"
},
{
"math_id": 2,
"text": "(\\sin\\theta)^2"
},
{
"math_id": 3,
"text": "\\sin \\theta = \\frac{\\mathrm{opposite}}{\\mathrm{hypotenuse}}= \\frac{b}{c}"
},
{
"math_id": 4,
"text": "\\cos \\theta = \\frac{\\mathrm{adjacent}}{\\mathrm{hypotenuse}} = \\frac{a}{c} "
},
{
"math_id": 5,
"text": "\\frac{\\mathrm{opposite}^2 + \\mathrm{adjacent}^2}{\\mathrm{hypotenuse}^2}"
},
{
"math_id": 6,
"text": "x = \\cos \\theta"
},
{
"math_id": 7,
"text": "y = \\sin \\theta"
},
{
"math_id": 8,
"text": "x = c\\cos \\theta"
},
{
"math_id": 9,
"text": "y = c\\sin \\theta"
},
{
"math_id": 10,
"text": "a=x"
},
{
"math_id": 11,
"text": "b=y"
},
{
"math_id": 12,
"text": "\\sin^2\\theta + \\cos^2\\theta = \\sin^2\\left(t + \\tfrac{1}{2}\\pi\\right) + \\cos^2\\left(t + \\tfrac{1}{2}\\pi\\right) = \\cos^2 t + \\sin^2 t = 1."
},
{
"math_id": 13,
"text": "\\sin^2\\theta = \\sin^2(-\\theta)\\text{ and }\\cos^2\\theta = \\cos^2(-\\theta)."
},
{
"math_id": 14,
"text": "1 + \\tan^2 \\theta = \\sec^2 \\theta"
},
{
"math_id": 15,
"text": "1 + \\cot^2 \\theta = \\csc^2 \\theta"
},
{
"math_id": 16,
"text": "\\tan \\theta = \\frac{b}{a},"
},
{
"math_id": 17,
"text": "\\sec \\theta = \\frac{c}{a}."
},
{
"math_id": 18,
"text": "x^2 + y^2 = 1."
},
{
"math_id": 19,
"text": "x = \\cos\\theta \\ \\text{ and }\\ y = \\sin\\theta."
},
{
"math_id": 20,
"text": "\\cos^2 \\theta + \\sin^2 \\theta = 1,"
},
{
"math_id": 21,
"text": "\\begin{align}\n \\sin x &= \\sum_{n = 0}^\\infty \\frac{(-1)^n}{(2n + 1)!} x^{2n + 1},\\\\\n \\cos x &= \\sum_{n = 0}^\\infty \\frac{(-1)^n}{(2n)!} x^{2n}.\n\\end{align}"
},
{
"math_id": 22,
"text": "\\begin{align}\n\\sin^2 x & = \\sum_{i = 0}^\\infty \\sum_{j = 0}^\\infty \\frac{(-1)^i}{(2i + 1)!} \\frac{(-1)^j}{(2j + 1)!} x^{(2i + 1) + (2j + 1)} \\\\\n& = \\sum_{n = 1}^\\infty \\left(\\sum_{i = 0}^{n - 1} \\frac{(-1)^{n - 1}}{(2i + 1)!(2(n - i - 1) + 1)!}\\right) x^{2n} \\\\\n& = \\sum_{n = 1}^\\infty \\left( \\sum_{i = 0}^{n - 1} {2n \\choose 2i + 1} \\right) \\frac{(-1)^{n - 1}}{(2n)!} x^{2n},\\\\\n\\cos^2 x & = \\sum_{i = 0}^\\infty \\sum_{j = 0}^\\infty \\frac{(-1)^i}{(2i)!} \\frac{(-1)^j}{(2j)!} x^{(2i) + (2j)} \\\\\n& = \\sum_{n = 0}^\\infty \\left(\\sum_{i = 0}^n \\frac{(-1)^n}{(2i)!(2(n - i))!}\\right) x^{2n} \\\\\n& = \\sum_{n = 0}^\\infty \\left( \\sum_{i = 0}^n {2n \\choose 2i} \\right) \\frac{(-1)^n}{(2n)!} x^{2n}.\n\\end{align}"
},
{
"math_id": 23,
"text": "\\sum_{i = 0}^n {2n \\choose 2i} - \\sum_{i = 0}^{n - 1} {2n \\choose 2i + 1}\n= \\sum_{j = 0}^{2n} (-1)^j {2n \\choose j}\n= (1 - 1)^{2n}\n= 0"
},
{
"math_id": 24,
"text": "\\sin^2 x + \\cos^2 x = 1,"
},
{
"math_id": 25,
"text": "y'' + y = 0"
},
{
"math_id": 26,
"text": "z = \\sin^2 x + \\cos^2 x"
},
{
"math_id": 27,
"text": "\\frac{d}{dx} z = 2 \\sin x \\cos x + 2 \\cos x(-\\sin x) = 0,"
},
{
"math_id": 28,
"text": "e^{i\\theta} = \\cos\\theta + i\\sin\\theta"
},
{
"math_id": 29,
"text": "\\cos^2 \\theta + \\sin^2 \\theta"
},
{
"math_id": 30,
"text": "\\begin{align}\n1\n&= e^{i\\theta}e^{-i\\theta} \\\\[3mu]\n&= (\\cos\\theta + i\\sin\\theta)(\\cos\\theta - i\\sin\\theta) \\\\[3mu]\n&= \\cos^2 \\theta + \\sin^2 \\theta.\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=1050741 |
1051442 | Probabilistic encryption | Use of randomness in key code generation
Probabilistic encryption is the use of randomness in an encryption algorithm, so that when encrypting the same message several times it will, in general, yield different ciphertexts. The term "probabilistic encryption" is typically used in reference to public key encryption algorithms; however various symmetric key encryption algorithms achieve a similar property (e.g., block ciphers when used in a chaining mode such as CBC), and stream ciphers such as Freestyle which are inherently random. To be semantically secure, that is, to hide even partial information about the plaintext, an encryption algorithm must be probabilistic.
History.
The first provably-secure probabilistic public-key encryption scheme was proposed by Shafi Goldwasser and Silvio Micali, based on the hardness of the quadratic residuosity problem; it had a message expansion factor equal to the public key size. More efficient probabilistic encryption algorithms include Elgamal, Paillier, and various constructions under the random oracle model, including OAEP.
Security.
Probabilistic encryption is particularly important when using public key cryptography. Suppose that the adversary observes a ciphertext, and suspects that the plaintext is either "YES" or "NO", or has a hunch that the plaintext might be "ATTACK AT CALAIS". When a deterministic encryption algorithm is used, the adversary can simply try encrypting each of his guesses under the recipient's public key, and compare each result to the target ciphertext. To combat this attack, public key encryption schemes must incorporate an element of randomness, ensuring that each plaintext maps into one of a large number of possible ciphertexts.
An intuitive approach to converting a deterministic encryption scheme into a probabilistic one is to simply pad the plaintext with a random string before encrypting with the deterministic algorithm. Decryption then involves applying the deterministic algorithm and ignoring the random padding. However, early schemes which applied this naive approach were broken due to limitations in some deterministic encryption schemes. Techniques such as Optimal Asymmetric Encryption Padding (OAEP) integrate random padding in a manner that is secure using any trapdoor permutation.
Examples.
Example of probabilistic encryption using any trapdoor permutation:
formula_0
formula_1
This is inefficient because only a single bit is encrypted. In other words, the message expansion factor is equal to the public key size.
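The construction can be made concrete with a toy sketch (illustrative only: the parameters are tiny and completely insecure, textbook RSA stands in for the trapdoor permutation "f", and the least significant bit stands in for the hard-core predicate "b"):

```python
import random

# Toy RSA trapdoor permutation -- parameters far too small to be secure.
p, q, e = 61, 53, 17
N = p * q
d = pow(e, -1, (p - 1) * (q - 1))      # trapdoor (private exponent)

def f(r):                              # public permutation
    return pow(r, e, N)

def f_inv(y):                          # inversion using the trapdoor
    return pow(y, d, N)

def b(r):                              # stand-in for a hard-core bit
    return r & 1

def encrypt(bit):
    r = random.randrange(1, N)
    return f(r), bit ^ b(r)            # Enc(x) = (f(r), x XOR b(r))

def decrypt(y, z):
    return b(f_inv(y)) ^ z             # Dec(y, z) = b(f^{-1}(y)) XOR z

for bit in (0, 1):
    y, z = encrypt(bit)
    assert decrypt(y, z) == bit
print(encrypt(1), encrypt(1))          # two encryptions of the same bit differ with high probability
```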
Example of probabilistic encryption in the random oracle model:
formula_2
formula_3
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n{\\rm Enc}(x) = (f(r), x \\oplus b(r))\n"
},
{
"math_id": 1,
"text": "\n{\\rm Dec}(y, z) = b(f^{-1}(y)) \\oplus z\n"
},
{
"math_id": 2,
"text": "\n{\\rm Enc}(x) = (f(r), x \\oplus h(r))\n"
},
{
"math_id": 3,
"text": "\n{\\rm Dec}(y, z) = h(f^{-1}(y)) \\oplus z\n"
}
] | https://en.wikipedia.org/wiki?curid=1051442 |
1051627 | Szemerédi–Trotter theorem | Bound on the number of incidences between points and lines in the plane
The Szemerédi–Trotter theorem is a mathematical result in the field of Discrete geometry. It asserts that given n points and m lines in the Euclidean plane, the number of incidences ("i.e.", the number of point-line pairs, such that the point lies on the line) is
formula_0
This bound cannot be improved, except in terms of the implicit constants.
As for the implicit constants, it was shown by János Pach, Radoš Radoičić, Gábor Tardos, and Géza Tóth that the upper bound formula_1 holds. Since then better constants are known due to better crossing lemma constants; the current best is 2.44. On the other hand, Pach and Tóth showed that the statement does not hold true if one replaces the coefficient 2.44 with 0.42.
An equivalent formulation of the theorem is the following. Given n points and an integer "k" ≥ 2, the number of lines which pass through at least k of the points is
formula_2
The original proof of Endre Szemerédi and William T. Trotter was somewhat complicated, using a combinatorial technique known as "cell decomposition". Later, László Székely discovered a much simpler proof using the crossing number inequality for graphs. (See below.)
The Szemerédi–Trotter theorem has a number of consequences, including Beck's theorem in incidence geometry and the Erdős-Szemerédi sum-product problem in additive combinatorics.
Proof of the first formulation.
We may discard the lines which contain two or fewer of the points, as they can contribute at most 2"m" incidences to the total number. Thus we may assume that every line contains at least three of the points.
If a line contains k points, then it will contain "k" − 1 line segments which connect two consecutive points along the line. Because "k" ≥ 3 after discarding the two-point lines, it follows that "k" − 1 ≥ "k"/2, so the number of these line segments on each line is at least half the number of incidences on that line. Summing over all of the lines, the number of these line segments is again at least half the total number of incidences. Thus if e denotes the number of such line segments, it will suffice to show that
formula_3
Now consider the graph formed by using the n points as vertices, and the e line segments as edges. Since each line segment lies on one of m lines, and any two lines intersect in at most one point, the crossing number of this graph is at most the number of points where two lines intersect, which is at most "m"("m" − 1)/2. The crossing number inequality implies that either "e" ≤ 7.5"n", or that "m"("m" − 1)/2 ≥ "e"3 / 33.75"n"2. In either case "e" ≤ 3.24("nm")2/3 + 7.5"n", giving the desired bound
formula_4
Proof of the second formulation.
Since every pair of points can be connected by at most one line, there can be at most "n"("n" − 1)/2 lines which connect k or more points, since "k" ≥ 2. This bound will prove the theorem when k is small (e.g. if "k" ≤ "C" for some absolute constant C). Thus, we need only consider the case when k is large, say "k" ≥ "C".
Suppose that there are "m" lines that each contain at least k points. These lines generate at least mk incidences, and so by the first formulation of the Szemerédi–Trotter theorem, we have
formula_5
and so at least one of the statements formula_6, or formula_7 is true. The third possibility is ruled out since k was assumed to be large, so we are left with the first two. But in either of these two cases, some elementary algebra will give the bound formula_8 as desired.
Optimality.
Except for its constant, the Szemerédi–Trotter incidence bound cannot be improved. To see this, consider for any positive integer formula_9 a set of points on the integer lattice
formula_10
and a set of lines
formula_11
Clearly, formula_12 and formula_13. Since each line is incident to N points (i.e., once for each formula_14), the number of incidences is formula_15 which matches the upper bound.
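The construction is easy to verify by brute force for small N; the short script below (an illustration of the counting, not part of the argument) counts point–line incidences directly and confirms the total of "N"4.

```python
def incidences(N):
    points = {(a, b) for a in range(1, N + 1) for b in range(1, 2 * N**2 + 1)}
    count = 0
    for m in range(1, N + 1):              # slopes
        for b in range(1, N**2 + 1):       # intercepts
            count += sum((x, m * x + b) in points for x in range(1, N + 1))
    return count

for N in (2, 3, 4):
    print(N, incidences(N), N**4)          # the two counts agree
```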
Generalization to formula_16.
One generalization of this result to arbitrary dimension, formula_16, was found by Agarwal and Aronov. Given a set of n points, S, and the set of m hyperplanes, H, which are each spanned by S, the number of incidences between S and H is bounded above by
formula_17
provided formula_18. Equivalently, the number of hyperplanes in H containing k or more points is bounded above by
formula_19
A construction due to Edelsbrunner shows this bound to be asymptotically optimal.
József Solymosi and Terence Tao obtained near sharp upper bounds for the number of incidences between points and algebraic varieties in higher dimensions, when the points and varieties satisfy "certain pseudo-line type axioms". Their proof uses the Polynomial Ham Sandwich Theorem.
In formula_20.
Many proofs of the Szemerédi–Trotter theorem over formula_21 rely in a crucial way on the topology of Euclidean space, so they do not extend easily to other fields. For example, the original proof of Szemerédi and Trotter, the polynomial partitioning proof, and the crossing number proof do not extend to the complex plane.
Tóth successfully generalized the original proof of Szemerédi and Trotter to the complex plane formula_20 by introducing additional ideas. This result was also obtained independently and through a different method by Zahl. The implicit constant in the bound is not the same in the complex numbers: in Tóth's proof the constant can be taken to be formula_22; the constant is not explicit in Zahl's proof.
When the point set is a Cartesian product, Solymosi and Tardos show that the Szemerédi-Trotter bound holds using a much simpler argument.
In finite fields.
Let formula_23 be a field.
A Szemerédi-Trotter bound is impossible in general due to the following example, stated here in formula_24: let formula_25 be the set of all formula_26 points and let formula_27 be the set of all formula_26 lines in the plane. Since each line contains formula_28 points, there are formula_29 incidences. On the other hand, a Szemerédi-Trotter bound would give formula_30 incidences. This example shows that the trivial, combinatorial incidence bound is tight.
Bourgain, Katz and Tao show that if this example is excluded, then an incidence bound that is an improvement on the trivial bound can be attained.
Incidence bounds over finite fields are of two types: (i) when at least one of the set of points or lines is `large' in terms of the characteristic of the field; (ii) both the set of points and the set of lines are `small' in terms of the characteristic.
Large set incidence bounds.
Let formula_31 be an odd prime power. Then Vinh showed that the number of incidences between formula_32 points and formula_33 lines in formula_34 is at most
formula_35
Note that there is no implicit constant in this bound.
Small set incidence bounds.
Let formula_23 be a field of characteristic formula_36. Stevens and de Zeeuw show that the number of incidences between formula_32 points and formula_33 lines in formula_37 is
formula_38
under the condition formula_39 in positive characteristic. (In a field of characteristic zero, this condition is not necessary.) This bound is better than the trivial incidence estimate when formula_40.
If the point set is a Cartesian Product, then they show an improved incidence bound: let formula_41 be a finite set of points with formula_42 and let formula_43 be a set of lines in the plane. Suppose that formula_44 and in positive characteristic that formula_45. Then the number of incidences between formula_46 and formula_43 is
formula_47
This bound is optimal. Note that by point-line duality in the plane, this incidence bound can be rephrased for an arbitrary point set and a set of lines having a Cartesian product structure.
In both the reals and arbitrary fields, Rudnev and Shkredov show an incidence bound for when both the point set and the line set has a Cartesian product structure. This is sometimes better than the above bounds.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "O \\left ( n^{2/3} m^{2/3} + n + m \\right )."
},
{
"math_id": 1,
"text": " 2.5n^{2/3} m^{2/3} + n + m"
},
{
"math_id": 2,
"text": "O \\left( \\frac{n^2}{k^3} + \\frac{n}{k} \\right )."
},
{
"math_id": 3,
"text": "e = O \\left ( n^{2/3} m^{2/3} + n + m \\right)."
},
{
"math_id": 4,
"text": "e = O \\left ( n^{2/3} m^{2/3} + n + m \\right )."
},
{
"math_id": 5,
"text": "mk = O \\left ( n^{2/3} m^{2/3} + n + m \\right ),"
},
{
"math_id": 6,
"text": "mk = O( n^{2/3} m^{2/3} ), mk = O(n)"
},
{
"math_id": 7,
"text": "mk = O(m)"
},
{
"math_id": 8,
"text": "m = O( n^2 / k^3 + n/k )"
},
{
"math_id": 9,
"text": "N\\in \\mathbb{N}"
},
{
"math_id": 10,
"text": "P = \\left \\{ (a, b) \\in \\mathbb{Z}^2 \\ : \\ 1 \\leq a \\leq N; 1 \\leq b \\leq 2N^2 \\right \\},"
},
{
"math_id": 11,
"text": "L = \\left \\{ (x, mx + b) \\ : \\ m, b \\in \\mathbb{Z}; 1 \\leq m \\leq N; 1 \\leq b \\leq N^2 \\right \\}."
},
{
"math_id": 12,
"text": "|P| = 2N^3"
},
{
"math_id": 13,
"text": "|L| = N^3"
},
{
"math_id": 14,
"text": "x \\in \\{1, \\cdots, N\\}"
},
{
"math_id": 15,
"text": "N^4"
},
{
"math_id": 16,
"text": "\\mathbb{R}^d"
},
{
"math_id": 17,
"text": "O \\left (m^{2/3}n^{d/3}+n^{d-1} \\right ),"
},
{
"math_id": 18,
"text": " n^{d-2} < m < n^{d}"
},
{
"math_id": 19,
"text": "O\\left( \\frac{n^d}{k^3} + \\frac{n^{d-1}}{k} \\right )."
},
{
"math_id": 20,
"text": "\\mathbb{C}^2"
},
{
"math_id": 21,
"text": "\\mathbb{R}"
},
{
"math_id": 22,
"text": "10^{60}"
},
{
"math_id": 23,
"text": "\\mathbb{F}"
},
{
"math_id": 24,
"text": "\\mathbb{F}_p"
},
{
"math_id": 25,
"text": "\\mathcal{P} = \\mathbb{F}_p\\times \\mathbb{F}_p"
},
{
"math_id": 26,
"text": "p^2"
},
{
"math_id": 27,
"text": "\\mathcal{L}"
},
{
"math_id": 28,
"text": "p"
},
{
"math_id": 29,
"text": "p^3"
},
{
"math_id": 30,
"text": "O((p^2)^{2/3} (p^2)^{2/3} + p^2) = O(p^{8/3})"
},
{
"math_id": 31,
"text": "q"
},
{
"math_id": 32,
"text": "n"
},
{
"math_id": 33,
"text": "m"
},
{
"math_id": 34,
"text": "\\mathbb{F}_q^2"
},
{
"math_id": 35,
"text": "\\frac{nm}{q} + \\sqrt{qnm}."
},
{
"math_id": 36,
"text": "p\\neq 2"
},
{
"math_id": 37,
"text": "\\mathbb{F}^2"
},
{
"math_id": 38,
"text": "O\\left(m^{11/15}n^{11/15}\\right)"
},
{
"math_id": 39,
"text": "m^{-2}n^{13} \\leq p^{15}"
},
{
"math_id": 40,
"text": "m^{7/8} < n < m^{8/7}"
},
{
"math_id": 41,
"text": "\\mathcal{P} = A\\times B \\subseteq \\mathbb{F}^2"
},
{
"math_id": 42,
"text": "|A|\\leq |B|"
},
{
"math_id": 43,
"text": "\\mathcal{L} "
},
{
"math_id": 44,
"text": "|A||B|^2 \\leq |\\mathcal{L}|^3"
},
{
"math_id": 45,
"text": "|A||\\mathcal{L}|\\leq p^2"
},
{
"math_id": 46,
"text": "\\mathcal{P}"
},
{
"math_id": 47,
"text": "O\\left(|A|^{3/4}|B|^{1/2} |\\mathcal{L}|^{3/4} + |\\mathcal{L}|\n\\right)."
}
] | https://en.wikipedia.org/wiki?curid=1051627 |
10516723 | Detrended fluctuation analysis | Statistical term
In stochastic processes, chaos theory and time series analysis, detrended fluctuation analysis (DFA) is a method for determining the statistical self-affinity of a signal. It is useful for analysing time series that appear to be long-memory processes (diverging correlation time, e.g. power-law decaying autocorrelation function) or 1/f noise.
The obtained exponent is similar to the Hurst exponent, except that DFA may also be applied to signals whose underlying statistics (such as mean and variance) or dynamics are non-stationary (changing with time). It is related to measures based upon spectral techniques such as autocorrelation and Fourier transform.
Peng et al. introduced DFA in 1994 in a paper that has been cited over 3,000 times as of 2022 and represents an extension of the (ordinary) fluctuation analysis (FA), which is affected by non-stationarities.
Definition.
Algorithm.
Given: a time series formula_1.
Compute its average value formula_2.
Sum it into a process formula_3. This is the cumulative sum, or profile, of the original time series. For example, the profile of an i.i.d. white noise is a standard random walk.
Select a set formula_4 of integers, such that formula_5, the smallest formula_6, the largest formula_7, and the sequence is roughly distributed evenly in log-scale: formula_8. In other words, it is approximately a geometric progression.
For each formula_9, divide the sequence formula_10 into consecutive segments of length formula_0. Within each segment, compute the least squares straight-line fit (the local trend). Let formula_11 be the resulting piecewise-linear fit.
Compute the root-mean-square deviation from the local trend (local fluctuation): formula_12 Their root-mean-square is the total fluctuation:
formula_13
Make the log-log plot formula_15.
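A compact implementation of the steps above is sketched below (illustrative only; the segment sizes, the detrending order and the white-noise test signal are arbitrary choices). For i.i.d. white noise the fitted slope should come out close to 0.5.

```python
import numpy as np

def dfa(x, scales, order=1):
    """Return log2 of the scales and of the total fluctuations F(n) for a 1-D series x."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())               # cumulative sum of the centred series
    F = []
    for n in scales:
        n_segments = len(profile) // n
        local = []
        for i in range(n_segments):
            segment = profile[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, segment, order), t)   # local polynomial fit
            local.append(np.sqrt(np.mean((segment - trend)**2)))   # local fluctuation
        F.append(np.sqrt(np.mean(np.square(local))))               # total fluctuation F(n)
    return np.log2(scales), np.log2(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(2**14)                      # white-noise test signal
scales = 2**np.arange(4, 11)                        # roughly evenly spaced in log scale
log_n, log_F = dfa(x, scales)
alpha = np.polyfit(log_n, log_F, 1)[0]              # slope of the log-log plot
print(round(alpha, 2))                              # close to 0.5 for white noise
```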
Interpretation.
A straight line of slope formula_16 on the log-log plot indicates a statistical self-affinity of form formula_17. Since formula_18 monotonically increases with formula_0, we always have formula_19.
The scaling exponent formula_16 is a generalization of the Hurst exponent, with the precise value giving information about the series self-correlations:
Because the expected displacement in an uncorrelated random walk of length N grows like formula_26, an exponent of formula_27 would correspond to uncorrelated white noise. When the exponent is between 0 and 1, the result is fractional Gaussian noise.
Pitfalls in interpretation.
Though the DFA algorithm always produces a positive number formula_16 for any time series, it does not necessarily imply that the time series is self-similar. Self-similarity requires the log-log graph to be sufficiently linear over a wide range of formula_0. Furthermore, a combination of techniques including maximum likelihood estimation (MLE), rather than least squares, has been shown to better approximate the scaling, or power-law, exponent.
Also, there are many scaling exponent-like quantities that can be measured for a self-similar time series, including the divider dimension and Hurst exponent. Therefore, the DFA scaling exponent formula_16 is not a fractal dimension, and does not have certain desirable properties that the Hausdorff dimension has, though in certain special cases it is related to the box-counting dimension for the graph of a time series.
Generalizations.
Generalization to polynomial trends (higher order DFA).
The standard DFA algorithm given above removes a linear trend in each segment. If we remove a degree-n polynomial trend in each segment, it is called DFAn, or higher order DFA.
Since formula_10 is a cumulative sum of formula_28, a linear trend in formula_10 is a constant trend in formula_28, which is a constant trend in formula_29 (visible as short sections of "flat plateaus"). In this regard, DFA1 removes the mean from segments of the time series formula_29 before quantifying the fluctuation.
Similarly, a degree n trend in formula_10 is a degree (n-1) trend in formula_29. For example, DFA2 removes linear trends from segments of the time series formula_29 before quantifying the fluctuation, DFA3 removes parabolic trends from formula_29, and so on.
The Hurst R/S analysis removes constant trends in the original sequence, and its detrending is thus equivalent to that of DFA1.
Generalization to different moments (multifractal DFA).
DFA can be generalized by computing formula_30 and then making the log-log plot of formula_31. If there is a strong linearity in the plot of formula_31, then that slope is formula_32. DFA is the special case where formula_33.
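A sketch of how the averaging step changes relative to ordinary DFA, assuming the per-segment fluctuations F(n, i) have already been computed as above (the q = 0 case requires a logarithmic limiting procedure and is omitted):

```python
import numpy as np

def fluctuation_q(local_f, q):
    """q-th order fluctuation F_q(n) from the per-segment fluctuations F(n, i).

    For q = 2 this reduces to the ordinary DFA fluctuation F(n).
    """
    local_f = np.asarray(local_f, dtype=float)
    if q == 0:
        raise ValueError("q = 0 needs a logarithmic limiting procedure")
    return np.mean(local_f ** q) ** (1.0 / q)
```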
Multifractal systems scale as a function formula_34. Essentially, the scaling exponents need not be independent of the scale of the system. In particular, DFA measures the scaling-behavior of the second moment-fluctuations.
Kantelhardt et al. intended this scaling exponent as a generalization of the classical Hurst exponent. The classical Hurst exponent corresponds to formula_35 for stationary cases, and formula_36 for nonstationary cases.
Applications.
The DFA method has been applied to many systems, e.g. DNA sequences, neuronal oscillations, speech pathology detection, heartbeat fluctuation in different sleep stages, and animal behavior pattern analysis.
The effect of trends on DFA has been studied.
Relations to other methods, for specific types of signal.
For signals with power-law-decaying autocorrelation.
In the case of power-law decaying auto-correlations, the correlation function decays with an exponent formula_37:
formula_38.
In addition the power spectrum decays as formula_39.
The three exponents are related by:
The relations can be derived using the Wiener–Khinchin theorem. The relation of DFA to the power spectrum method has been well studied.
Thus, formula_16 is tied to the slope of the power spectrum formula_43 and is used to describe the color of noise by this relationship: formula_44.
For fractional Gaussian noise.
For fractional Gaussian noise (FGN), we have formula_45, and thus formula_46, and formula_47, where formula_48 is the Hurst exponent. formula_16 for FGN is equal to formula_48.
For fractional Brownian motion.
For fractional Brownian motion (FBM), we have formula_49, and thus formula_50, and formula_51, where formula_48 is the Hurst exponent. formula_16 for FBM is equal to formula_52. In this context, FBM is the cumulative sum or the integral of FGN, thus, the exponents of their
power spectra differ by 2.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "x_1, x_2, ..., x_N"
},
{
"math_id": 2,
"text": "\\langle x\\rangle = \\frac 1N \\sum_{t=1}^N x_t"
},
{
"math_id": 3,
"text": "X_t=\\sum_{i=1}^t (x_i-\\langle x\\rangle)"
},
{
"math_id": 4,
"text": "T = \\{n_1, ..., n_k\\}"
},
{
"math_id": 5,
"text": "n_1 < n_2 < \\cdots < n_k"
},
{
"math_id": 6,
"text": "n_1 \\approx 4"
},
{
"math_id": 7,
"text": "n_k \\approx N"
},
{
"math_id": 8,
"text": "\\log(n_2) - \\log(n_1) \\approx \\log(n_3) - \\log(n_2) \\approx \\cdots"
},
{
"math_id": 9,
"text": "n \\in T"
},
{
"math_id": 10,
"text": "X_t"
},
{
"math_id": 11,
"text": "Y_{1,n}, Y_{2,n}, ..., Y_{N,n}"
},
{
"math_id": 12,
"text": "F( n, i) = \\sqrt{\\frac{1}{n}\\sum_{t = in+1}^{in+n} \\left( X_t - Y_{t, n} \\right)^2}."
},
{
"math_id": 13,
"text": "F( n ) = \\sqrt{\\frac{1}{N/n}\\sum_{i = 1}^{N/n} F(n, i)^2}."
},
{
"math_id": 14,
"text": "N"
},
{
"math_id": 15,
"text": "\\log n - \\log F(n)"
},
{
"math_id": 16,
"text": "\\alpha"
},
{
"math_id": 17,
"text": "F(n) \\propto n^{\\alpha}"
},
{
"math_id": 18,
"text": "F(n)"
},
{
"math_id": 19,
"text": "\\alpha > 0"
},
{
"math_id": 20,
"text": "\\alpha<1/2"
},
{
"math_id": 21,
"text": "\\alpha \\simeq 1/2"
},
{
"math_id": 22,
"text": "\\alpha>1/2"
},
{
"math_id": 23,
"text": "\\alpha\\simeq 1"
},
{
"math_id": 24,
"text": "\\alpha>1"
},
{
"math_id": 25,
"text": "\\alpha\\simeq 3/2"
},
{
"math_id": 26,
"text": "\\sqrt{N}"
},
{
"math_id": 27,
"text": "\\tfrac{1}{2}"
},
{
"math_id": 28,
"text": "x_t-\\langle x\\rangle "
},
{
"math_id": 29,
"text": "x_t "
},
{
"math_id": 30,
"text": "F_q( n ) = \\left(\\frac{1}{N/n}\\sum_{i = 1}^{N/n} F(n, i)^q\\right)^{1/q}."
},
{
"math_id": 31,
"text": "\\log n - \\log F_q(n)"
},
{
"math_id": 32,
"text": "\\alpha(q)"
},
{
"math_id": 33,
"text": "q=2"
},
{
"math_id": 34,
"text": "F_q(n) \\propto n^{\\alpha(q)}"
},
{
"math_id": 35,
"text": "H=\\alpha(2)"
},
{
"math_id": 36,
"text": "H=\\alpha(2)-1"
},
{
"math_id": 37,
"text": "\\gamma"
},
{
"math_id": 38,
"text": "C(L)\\sim L^{-\\gamma}\\!\\ "
},
{
"math_id": 39,
"text": "P(f)\\sim f^{-\\beta}\\!\\ "
},
{
"math_id": 40,
"text": "\\gamma=2-2\\alpha"
},
{
"math_id": 41,
"text": "\\beta=2\\alpha-1"
},
{
"math_id": 42,
"text": "\\gamma=1-\\beta"
},
{
"math_id": 43,
"text": "\\beta"
},
{
"math_id": 44,
"text": "\\alpha = (\\beta+1)/2"
},
{
"math_id": 45,
"text": " \\beta \\in [-1,1] "
},
{
"math_id": 46,
"text": "\\alpha \\in [0,1]"
},
{
"math_id": 47,
"text": "\\beta = 2H-1"
},
{
"math_id": 48,
"text": "H"
},
{
"math_id": 49,
"text": " \\beta \\in [1,3] "
},
{
"math_id": 50,
"text": "\\alpha \\in [1,2]"
},
{
"math_id": 51,
"text": "\\beta = 2H+1"
},
{
"math_id": 52,
"text": "H+1"
}
] | https://en.wikipedia.org/wiki?curid=10516723 |
10517 | Economies of scale | Cost advantages obtained via scale of operation
In microeconomics, economies of scale are the cost advantages that enterprises obtain due to their scale of operation, and are typically measured by the amount of output produced per unit of time. A decrease in cost per unit of output enables an increase in scale, that is, increased production with lowered cost. At the basis of economies of scale there may be technical, statistical, organizational factors, or factors related to the degree of market control.
Economies of scale arise in a variety of organizational and business situations and at various levels, such as a production unit, a plant or an entire enterprise. When average costs start falling as output increases, then economies of scale occur. Some economies of scale, such as capital cost of manufacturing facilities and friction loss of transportation and industrial equipment, have a physical or engineering basis. The economic concept dates back to Adam Smith and the idea of obtaining larger production returns through the use of division of labor. Diseconomies of scale are the opposite.
Economies of scale often have limits, such as passing the optimum design point where costs per additional unit begin to increase. Common limits include exceeding the nearby raw material supply, such as wood in the lumber, pulp and paper industry. A common limit for low-cost-per-unit-weight commodities is saturating the regional market, thus having to ship products uneconomic distances. Other limits include using energy less efficiently or having a higher defect rate.
Large producers are usually efficient at long runs of a product grade (a commodity) and find it costly to switch grades frequently. They will, therefore, avoid specialty grades even though they have higher margins. Often smaller (usually older) manufacturing facilities remain viable by changing from commodity-grade production to specialty products. Economies of scale must be distinguished from economies stemming from an increase in the production of a given plant. When a plant is used below its optimal production capacity, increases in its degree of utilization bring about decreases in the total average cost of production. Nicholas Georgescu-Roegen (1966) and Nicholas Kaldor (1972) both argue that these economies should not be treated as economies of scale.
Overview.
The simple meaning of economies of scale is doing things more efficiently with increasing size. Common sources of economies of scale are purchasing (bulk buying of materials through long-term contracts), managerial (increasing the specialization of managers), financial (obtaining lower-interest charges when borrowing from banks and having access to a greater range of financial instruments), marketing (spreading the cost of advertising over a greater range of output in media markets), and technological (taking advantage of returns to scale in the production function). Each of these factors reduces the long run average costs (LRAC) of production by shifting the short-run average total cost (SRATC) curve down and to the right.
Economies of scale is a concept that may explain patterns in international trade or in the number of firms in a given market. The exploitation of economies of scale helps explain why companies grow large in some industries. It is also a justification for free trade policies, since some economies of scale may require a larger market than is possible within a particular country—for example, it would not be efficient for Liechtenstein to have its own carmaker if they only sold to their local market. A lone carmaker may be profitable, but even more so if they exported cars to global markets in addition to selling to the local market. Economies of scale also play a role in a "natural monopoly". There is a distinction between two types of economies of scale: internal and external. An industry that exhibits an internal economy of scale is one where the costs of production fall when the number of firms in the industry drops, but the remaining firms increase their production to match previous levels. Conversely, an industry exhibits an external economy of scale when costs drop due to the introduction of more firms, thus allowing for more efficient use of specialized services and machinery.
Economies of scale exist whenever the total cost of producing two quantities of a product X is lower when a single firm, instead of two separate firms, produces it. See Economies of scope#Economics.
formula_0
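For instance, with a purely illustrative cost function (not from the text) consisting of a fixed cost F > 0 and a constant marginal cost c, the condition holds because the fixed cost is incurred only once:

```latex
% Illustrative cost function: fixed cost F > 0 plus constant marginal cost c.
\[
TC(QX) = F + cQ
\quad\Longrightarrow\quad
TC\bigl((Q_1+Q_2)X\bigr) = F + c(Q_1+Q_2) \;<\; 2F + c(Q_1+Q_2) = TC(Q_1X) + TC(Q_2X).
\]
```

A single firm producing both quantities saves the duplicated fixed cost F.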
Determinants of economies of scale.
Physical and engineering basis: economies of increased dimension.
Some of the economies of scale recognized in engineering have a physical basis, such as the square–cube law, by which the surface of a vessel increases by the square of the dimensions while the volume increases by the cube. This law has a direct effect on the capital cost of such things as buildings, factories, pipelines, ships and airplanes.
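A toy calculation of this effect (illustrative only):

```python
def relative_cost_per_capacity(linear_scale):
    """Square-cube law: scale every linear dimension of a vessel by `linear_scale`.

    Surface area (a rough proxy for material cost) grows with the square,
    volume (capacity) with the cube, so cost per unit of capacity falls.
    """
    surface_factor = linear_scale ** 2
    volume_factor = linear_scale ** 3
    return surface_factor / volume_factor

print(relative_cost_per_capacity(2))  # doubling dimensions: 4x surface, 8x volume -> 0.5
```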
In structural engineering, the strength of beams increases with the cube of the thickness.
Drag loss of vehicles like aircraft or ships generally increases less than proportional with increasing cargo volume, although the physical details can be quite complicated. Therefore, making them larger usually results in less fuel consumption per ton of cargo at a given speed.
Heat loss from industrial processes varies per unit of volume for pipes, tanks and other vessels in a relationship somewhat similar to the square–cube law. In some productions, an increase in the size of the plant reduces the average variable cost, thanks to the energy savings resulting from the lower dispersion of heat.
Economies of increased dimension are often misinterpreted because of the confusion between indivisibility and three-dimensionality of space. This confusion arises from the fact that three-dimensional production elements, such as pipes and ovens, once installed and operating, are always technically indivisible. However, the economies of scale due to the increase in size do not depend on indivisibility but exclusively on the three-dimensionality of space. Indeed, indivisibility only entails the existence of economies of scale produced by the balancing of productive capacities, considered above; or of increasing returns in the utilisation of a single plant, due to its more efficient use as the quantity produced increases. However, this latter phenomenon has nothing to do with the economies of scale which, by definition, are linked to the use of a larger plant.
Economies in holding stocks and reserves.
At the base of economies of scale there are also returns to scale linked to statistical factors. In fact, the greater the number of resources involved, the smaller, in proportion, is the quantity of reserves necessary to cope with unforeseen contingencies (for instance, machine spare parts, inventories, circulating capital, etc.).
Transaction economies.
One of the reasons firms appear is to reduce transaction costs. A larger scale generally determines greater bargaining power over input prices and therefore benefits from pecuniary economies in terms of purchasing raw materials and intermediate goods compared to companies that make orders for smaller amounts. In this case, we speak of pecuniary economies, to highlight the fact that nothing changes from the "physical" point of view of the returns to scale. Furthermore, supply contracts entail fixed costs which lead to decreasing average costs if the scale of production increases. This is of important utility in the study of corporate finance.
Economies deriving from the balancing of production capacity.
Economies of productive capacity balancing derive from the possibility that a larger scale of production involves a more efficient use of the production capacities of the individual phases of the production process. If the inputs are indivisible and complementary, a small scale may be subject to idle times or to the underutilization of the productive capacity of some sub-processes. A higher production scale can make the different production capacities compatible. The reduction in machinery idle times is crucial in the case of a high cost of machinery.
Economies resulting from the division of labour and the use of superior techniques.
A larger scale allows for a more efficient division of labour. The economies of division of labour derive from the increase in production speed, from the possibility of using specialized personnel and adopting more efficient techniques. An increase in the division of labour inevitably leads to changes in the quality of inputs and outputs.
Managerial economics.
Many administrative and organizational activities are mostly cognitive and, therefore, largely independent of the scale of production. When the size of the company and the division of labour increase, there are a number of advantages due to the possibility of making organizational management more effective and perfecting accounting and control techniques. Furthermore, the procedures and routines that turned out to be the best can be reproduced by managers at different times and places.
Learning and growth economies.
Learning and growth economies are at the base of dynamic economies of scale, associated with the process of growth of the scale dimension and not to the dimension of scale per se. Learning by doing implies improvements in the ability to perform and promotes the introduction of incremental innovations with a progressive lowering of average costs. Learning economies are directly proportional to the cumulative production (experience curve).
Growth economies emerge if a company gains an added benefit by expanding its size. These economies are due to the presence of some resource or competence that is not fully utilized, or to the existence of specific market positions that create a differential advantage in expanding the size of the firms. Growth economies disappear once the scale expansion process is completed. For example, a company that owns a supermarket chain benefits from an economy of growth if, in opening a new supermarket, it obtains an increase in the price of the land it owns around the new supermarket. The sale of this land to economic operators who wish to open shops near the supermarket allows the company in question to profit from the appreciation in the value of the building land.
Capital and operating cost.
Overall costs of capital projects are known to be subject to economies of scale. A crude estimate is that if the capital cost for a given sized piece of equipment is known, changing the size will change the capital cost by the 0.6 power of the capacity ratio (the point six to the power rule).
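A minimal sketch of this rule of thumb (the 0.6 exponent is a rough engineering estimate, and the numbers are illustrative):

```python
def scaled_capital_cost(known_cost, known_capacity, new_capacity, exponent=0.6):
    """Rough capital-cost estimate using the 'point six to the power' rule.

    Cost scales with the capacity ratio raised to about 0.6; the exponent is
    an engineering rule of thumb, not an exact law.
    """
    return known_cost * (new_capacity / known_capacity) ** exponent

# Doubling capacity raises capital cost by roughly 2**0.6 ≈ 1.52,
# so cost per unit of capacity falls by about 24%.
print(scaled_capital_cost(1_000_000, 100, 200))
```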
In estimating capital cost, it typically requires an insignificant amount of labor, and possibly not much more in materials, to install electrical wire or pipe of significantly greater capacity.
The cost of a unit of capacity of many types of equipment, such as electric motors, centrifugal pumps, diesel and gasoline engines, decreases as size increases. Also, the efficiency increases with size.
Crew size and other operating costs for ships, trains and airplanes.
Operating crew size for ships, airplanes, trains, etc., does not increase in direct proportion to capacity. (Operating crew consists of pilots, co-pilots, navigators, etc. and does not include passenger service personnel.) Many aircraft models were significantly lengthened or "stretched" to increase payload.
Many manufacturing facilities, especially those making bulk materials like chemicals, refined petroleum products, cement and paper, have labor requirements that are not greatly influenced by changes in plant capacity. This is because labor requirements of automated processes tend to be based on the complexity of the operation rather than production rate, and many manufacturing facilities have nearly the same basic number of processing steps and pieces of equipment, regardless of production capacity.
Economical use of byproducts.
Karl Marx noted that large scale manufacturing allowed economical use of products that would otherwise be waste. Marx cited the chemical industry as an example, which today along with petrochemicals, remains highly dependent on turning various residual reactant streams into salable products. In the pulp and paper industry, it is economical to burn bark and fine wood particles to produce process steam and to recover the spent pulping chemicals for conversion back to a usable form.
Economies of scale and the size of exporter.
Large and more productive firms typically generate enough net revenues abroad to cover the fixed costs associated with exporting. However, in the event of trade liberalization, resources will have to be reallocated toward the more productive firm, which raises the average productivity within the industry.
Firms differ in their labor productivity and in the quality of their products, so more efficient firms are more likely to generate more net income abroad and thus become exporters of their goods or services. There is a correlation between a firm's total sales and its underlying efficiency: firms with higher productivity outperform firms with lower productivity, which consequently have lower sales. Through trade liberalization, organizations are able to lower their trade costs thanks to export growth. However, trade liberalization does not account for any tariff reduction or improvement in shipping logistics. Total economies of scale also depend on the exporter's individual trade frequency and size: large-scale companies are more likely to have a lower cost per unit than small-scale companies, and likewise, high-trade-frequency companies are able to reduce their overall cost per unit compared with low-trade-frequency companies.
Economies of scale and returns to scale.
Economies of scale is related to and can easily be confused with the theoretical economic notion of returns to scale. Where economies of scale refer to a firm's costs, returns to scale describe the relationship between inputs and outputs in a long-run (all inputs variable) production function. A production function has "constant" returns to scale if increasing all inputs by some proportion results in output increasing by that same proportion. Returns are "decreasing" if, say, doubling inputs results in less than double the output, and "increasing" if more than double the output. If a mathematical function is used to represent the production function, and if that production function is homogeneous, returns to scale are represented by the degree of homogeneity of the function. Homogeneous production functions with constant returns to scale are first degree homogeneous, increasing returns to scale are represented by degrees of homogeneity greater than one, and decreasing returns to scale by degrees of homogeneity less than one.
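As a small numerical illustration of the homogeneity-degree characterization, using a Cobb–Douglas production function purely as an assumed example:

```python
def cobb_douglas(K, L, a=0.3, b=0.7):
    """Illustrative Cobb-Douglas production function K**a * L**b.

    Its degree of homogeneity is a + b: constant returns to scale when
    a + b = 1, increasing when a + b > 1, decreasing when a + b < 1.
    """
    return (K ** a) * (L ** b)

# Scaling both inputs by t scales output by t**(a + b); here a + b = 1,
# so doubling the inputs exactly doubles the output.
print(cobb_douglas(2 * 10, 2 * 20) / cobb_douglas(10, 20))  # -> 2.0
```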
If the firm is a perfect competitor in all input markets, and thus the per-unit prices of all its inputs are unaffected by how much of the inputs the firm purchases, then it can be shown that at a particular level of output, the firm has economies of scale if and only if it has increasing returns to scale, has diseconomies of scale if and only if it has decreasing returns to scale, and has neither economies nor diseconomies of scale if it has constant returns to scale. In this case, with perfect competition in the output market the long-run equilibrium will involve all firms operating at the minimum point of their long-run average cost curves (i.e., at the borderline between economies and diseconomies of scale).
If, however, the firm is not a perfect competitor in the input markets, then the above conclusions are modified. For example, if there are increasing returns to scale in some range of output levels, but the firm is so big in one or more input markets that increasing its purchases of an input drives up the input's per-unit cost, then the firm could have diseconomies of scale in that range of output levels. Conversely, if the firm is able to get bulk discounts of an input, then it could have economies of scale in some range of output levels even if it has decreasing returns in production in that output range.
In essence, returns to scale refer to the variation in the relationship between inputs and output. This relationship is therefore expressed in "physical" terms. But when talking about economies of scale, the relation taken into consideration is that between the average production cost and the dimension of scale. Economies of scale therefore are affected by variations in input prices. If input prices remain the same as their quantities purchased by the firm increase, the notions of increasing returns to scale and economies of scale can be considered equivalent. However, if input prices vary in relation to their quantities purchased by the company, it is necessary to distinguish between returns to scale and economies of scale. The concept of economies of scale is more general than that of returns to scale since it includes the possibility of changes in the price of inputs when the quantity purchased of inputs varies with changes in the scale of production.
The literature assumed that due to the competitive nature of reverse auctions, and in order to compensate for lower prices and lower margins, suppliers seek higher volumes to maintain or increase the total revenue. Buyers, in turn, benefit from the lower transaction costs and economies of scale that result from larger volumes. In part as a result, numerous studies have indicated that the procurement volume must be sufficiently high to provide sufficient profits to attract enough suppliers, and provide buyers with enough savings to cover their additional costs.
However, Shalev and Asbjornse found, in their research based on 139 reverse auctions conducted in the public sector by public sector buyers, that the higher auction volume, or economies of scale, did not lead to better success of the auction. They found that auction volume did not correlate with competition, nor with the number of bidders, suggesting that auction volume does not promote additional competition. They noted, however, that their data included a wide range of products, and the degree of competition in each market varied significantly, and offer that further research on this issue should be conducted to determine whether these findings remain the same when purchasing the same product for both small and high volumes. Keeping competitive factors constant, increasing auction volume may further increase competition.
Economies of scale in the history of economic analysis.
Economies of scale in classical economists.
The first systematic analysis of the advantages of the division of labour capable of generating economies of scale, both in a static and dynamic sense, was that contained in the famous First Book of Wealth of Nations (1776) by Adam Smith, generally considered the founder of political economy as an autonomous discipline.
John Stuart Mill, in Chapter IX of the First Book of his Principles, referring to the work of Charles Babbage (On the economics of machines and manufactories), widely analyses the relationships between increasing returns and scale of production all inside the production unit.
Economies of scale in Marx and distributional consequences.
In "Das Kapital" (1867), Karl Marx, referring to Charles Babbage, extensively analyzed economies of scale and concluded that they are one of the factors underlying the ever-increasing concentration of capital. Marx observes that in the capitalist system the technical conditions of the work process are continuously revolutionized in order to increase the surplus by improving the productive force of work. According to Marx, the cooperation of many workers brings about an economy in the use of the means of production and an increase in productivity due to the increase in the division of labour. Furthermore, the increase in the size of the machinery allows significant savings in construction, installation and operation costs. The tendency to exploit economies of scale entails a continuous increase in the volume of production which, in turn, requires a constant expansion of the size of the market. However, if the market does not expand at the same rate as production increases, overproduction crises can occur. According to Marx, the capitalist system is therefore characterized by two tendencies, connected to economies of scale: towards a growing concentration and towards economic crises due to overproduction.
In his 1844 "Economic and Philosophic Manuscripts", Karl Marx observes that economies of scale have historically been associated with an increasing concentration of private wealth and have been used to justify such concentration. Marx points out that concentrated private ownership of large-scale economic enterprises is a historically contingent fact, and not essential to the nature of such enterprises. In the case of agriculture, for example, Marx calls attention to the sophistical nature of the arguments used to justify the system of concentrated ownership of land:
As for large landed property, its defenders have always sophistically identified the economic advantages offered by large-scale agriculture with large-scale landed property, as if it were not precisely as a result of the abolition of property that this advantage, for one thing, received its greatest possible extension, and, for another, only then would be of social benefit.
Instead of concentrated private ownership of land, Marx recommends that economies of scale should instead be realized by associations:
Association, applied to land, shares the economic advantage of large-scale landed property, and first brings to realization the original tendency inherent in land-division, namely, equality. In the same way association re-establishes, now on a rational basis, no longer mediated by serfdom, overlordship and the silly mysticism of property, the intimate ties of man with the earth, for the earth ceases to be an object of huckstering, and through free labor and free enjoyment becomes once more a true personal property of man.
Economies of scale in Marshall.
Alfred Marshall notes that Antoine Augustin Cournot and others have considered "the internal economies [...] apparently without noticing that their premises lead inevitably to the conclusion that, whatever firm first gets a good start will obtain a monopoly of the whole business of its trade … ". Marshall believes that there are factors that limit this trend toward monopoly, and in particular:
Sraffa's critique.
Piero Sraffa observes that Marshall, in order to justify the operation of the law of increasing returns without it coming into conflict with the hypothesis of free competition, tended to highlight the advantages of external economies linked to an increase in the production of an entire sector of activity. However, "those economies which are external from the point of view of the individual firm, but internal as regards the industry in its aggregate, constitute precisely the class which is most seldom to be met with." "In any case - Sraffa notes – in so far as external economies of the kind in question exist, they are not linked to be called forth by small increases in production," as required by the marginalist theory of price. Sraffa points out that, in the equilibrium theory of the individual industries, the presence of external economies cannot play an important role because this theory is based on marginal changes in the quantities produced.
Sraffa concludes that, if the hypothesis of perfect competition is maintained, economies of scale should be excluded. He then suggests the possibility of abandoning the assumption of free competition to address the study of firms that have their own particular market. This stimulated a whole series of studies on the cases of imperfect competition in Cambridge. However, in the succeeding years Sraffa followed a different path of research that led him to write and publish his main work "Production of Commodities by Means of Commodities". In this book, Sraffa determines relative prices assuming no changes in output, so that no question arises as to the variation or constancy of returns.
Economies of scale and the tendency towards monopoly: "Cournot's dilemma".
It has been noted that in many industrial sectors there are numerous companies with different sizes and organizational structures, despite the presence of significant economies of scale. This contradiction, between the empirical evidence and the logical incompatibility between economies of scale and competition, has been called the 'Cournot dilemma'. As Mario Morroni observes, Cournot's dilemma appears to be unsolvable if we only consider the effects of economies of scale on the dimension of scale. If, on the other hand, the analysis is expanded, including the aspects concerning the development of knowledge and the organization of transactions, it is possible to conclude that economies of scale do not always lead to monopoly. In fact, the competitive advantages deriving from the development of the firm's capabilities and from the management of transactions with suppliers and customers can counterbalance those provided by the scale, thus counteracting the tendency towards a monopoly inherent in economies of scale. In other words, the heterogeneity of the organizational forms and of the size of the companies operating in a sector of activity can be determined by factors regarding the quality of the products, the production flexibility, the contractual methods, the learning opportunities, the heterogeneity of preferences of customers who express a differentiated demand with respect to the quality of the product, and assistance before and after the sale. Very different organizational forms can therefore co-exist in the same sector of activity, even in the presence of economies of scale, such as, for example, flexible production on a large scale, small-scale flexible production, mass production, industrial production based on rigid technologies associated with flexible organizational systems and traditional artisan production. The considerations regarding economies of scale are therefore important, but not sufficient to explain the size of the company and the market structure. It is also necessary to take into account the factors linked to the development of capabilities and the management of transaction costs.
External economies of scale.
External economies of scale tend to be more prevalent than internal economies of scale. Through the external economies of scale, the entry of new firms benefits all existing competitors as it creates greater competition and also reduces the average cost for all firms, as opposed to internal economies of scale, which only allow benefits to the individual firm. Advantages that arise from external economies of scale include:
Sources.
Purchasing.
Firms are able to lower their average costs by buying their inputs required for the production process in bulk or from special wholesalers.
Managerial.
Firms might be able to lower their average costs by improving their management structure within the firm. This can range from hiring better skilled or more experienced managers from the industry.
Technological.
Technological advancements change production processes and subsequently reduce the overall cost per unit. Tim Hindle argues that the rollout of the internet "has completely reshaped the assumptions underlying economies of scale".
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " TC((Q_1 + Q_2)X) < TC(Q_1X) + TC(Q_2X)"
}
] | https://en.wikipedia.org/wiki?curid=10517 |
1051942 | Fusion tree | In computer science, a fusion tree is a type of tree data structure that implements an associative array on w-bit integers on a finite universe, where each of the input integers has size less than 2^w and is non-negative. When operating on a collection of n key–value pairs, it uses "O"("n") space and performs searches in "O"(log"w" "n") time, which is asymptotically faster than a traditional self-balancing binary search tree, and also better than the van Emde Boas tree for large values of w. It achieves this speed by using certain constant-time operations that can be done on a machine word. Fusion trees were invented in 1990 by Michael Fredman and Dan Willard.
Several advances have been made since Fredman and Willard's original 1990 paper. In 1999 it was shown how to implement fusion trees under a model of computation in which all of the underlying operations of the algorithm belong to AC0, a model of circuit complexity that allows addition and bitwise Boolean operations but does not allow the multiplication operations used in the original fusion tree algorithm. A dynamic version of fusion trees using hash tables was proposed in 1996 which matched the original structure's "O"(log"w" "n") runtime in expectation. Another dynamic version using exponential tree was proposed in 2007 which yields worst-case runtimes of "O"(log"w" "n" + log log "n") per operation. Finally, it was shown that dynamic fusion trees can perform each operation in "O"(log"w" "n") time deterministically.
This data structure implements add key, remove key, search key, and predecessor (next smaller value) and successor (next larger value) search operations for a given key. The partial result of locating the most significant bit in constant time has also helped further research. Fusion trees utilize word-level parallelism to be efficient, performing computation on several small integers, stored in a single machine word, simultaneously to reduce the number of total operations.
How it works.
A fusion tree is essentially a B-tree with branching factor of "w"^1/5 (any small exponent is also possible, as it will not have a great impact on the height of the tree), which gives it a height of "O"(log"w" "n"). To achieve the desired runtimes for updates and queries, the fusion tree must be able to search a node containing up to "w"^1/5 keys in constant time. This is done by compressing ("sketching") the keys so that all can fit into one machine word, which in turn allows comparisons to be done in parallel. Thus, a series of computations involving sketching, parallel comparison and most-significant-bit location yields the required solution.
Sketching.
Sketching is the method by which each w-bit key at a node containing k keys is compressed into only "k" − 1 bits. Each key x may be thought of as a path in the full binary tree of height w, starting at the root and ending at the leaf corresponding to x. This path can be traced by recursively descending to the left child if the next bit is 0 and to the right child if it is 1, until all bits are scanned. To distinguish two paths, it suffices to look at their branching point (the first bit where the two keys differ). As there are at most "k" keys, there are no more than "k" − 1 branching points, which means that no more than "k" − 1 bits are required to identify a key. Hence, no sketch has more than "k" − 1 bits.
An important property of the sketch function is that it preserves the order of the keys. That is, sketch("x") < sketch("y") for any two keys "x" < "y". So, for the entire range of keys, sketch(x0)<sketch(x1)<...<sketch(xk-1) because if the binary-tree-like path is followed, the nodes will be ordered in such a manner that x0<x1<...<xk-1.
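As an illustration of the branching points and of order preservation, a small Python sketch (a direct computation for clarity, not the constant-time word-level version the data structure actually uses):

```python
def sketch_bit_positions(keys):
    """Bit positions at which some pair of keys first differs (their branching points).

    With k distinct keys there are at most k - 1 such positions.
    """
    bits = set()
    for a in keys:
        for b in keys:
            if a != b:
                bits.add((a ^ b).bit_length() - 1)  # most significant differing bit
    return sorted(bits)

def perfect_sketch(x, bits):
    """Pack the selected bits of x (listed from low to high) into a small integer."""
    return sum(((x >> b) & 1) << i for i, b in enumerate(bits))

keys = [0b0010, 0b0110, 0b1010, 0b1111]           # already sorted
bits = sketch_bit_positions(keys)
print(bits)                                        # [2, 3] for these keys
print([perfect_sketch(x, bits) for x in keys])     # increasing: order is preserved
```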
Approximating the sketch.
If the locations of the sketch bits are "b"1 < "b"2 < ⋅⋅⋅ < "b""r", then the sketch of the key "x""w"-1⋅⋅⋅"x"1"x"0 is the "r"-bit integer formula_0.
With only standard word operations, such as those of the C programming language, it is difficult to directly compute the perfect sketch of a key in constant time. Instead, the sketch bits can be packed into a range of size at most "r"^4, using bitwise AND and multiplication, called the approximate sketch, which does have all the important bits but also some additional useless bits spread out in a predictable pattern. The bitwise AND operation serves as a mask to remove all these non-sketch bits from the key, while the multiplication shifts the sketch bits into a small range. Like the "perfect" sketch, the approximate sketch also preserves the order of the keys and means that sketch(x0)<sketch(x1)<...<sketch(xk-1).
Some preprocessing is needed to determine the correct multiplication constant. Each sketch bit in location "b""i" will get shifted to "b""i" + "m""i" via a multiplication by "m" = formula_1 2^"m""i". For the approximate sketch to work, the following three properties must hold:
An inductive argument shows how the "m""i" can be constructed. Let "m"1 = "w" − "b"1. Suppose that 1 < "t" ≤ "r" and that "m"1, "m"2... "m""t-1" have already been chosen. Then pick the smallest integer "m""t" such that both properties (1) and (2) are satisfied. Property (1) requires that "m""t" ≠ "b""i" − "b""j" + "m""l" for all 1 ≤ "i", "j" ≤ "r" and 1 ≤ "l" ≤ "t"-1. Thus, there are less than "tr"^2 ≤ "r"^3 values that "m""t" must avoid. Since "m""t" is chosen to be minimal, ("b""t" + "m""t") ≤ ("b""t"-1 + "m""t"-1) + "r"^3. This implies Property (3).
The approximate sketch is thus computed as follows:
Parallel comparison.
The purpose of the compression achieved by sketching is to allow all of the keys to be stored in one "w"-bit word. Let the "node sketch" of a node be the bit string
1codice_0("x"1)1codice_0("x"2)...1codice_0("x""k")
Here, all sketch words are concatenated into one string, with a set bit prepended to each of them. We can assume that the sketch function uses exactly "b" ≤ "r"^4 bits. Then each block uses 1 + "b" ≤ "w"^4/5 bits, and since "k" ≤ "w"^1/5, the total number of bits in the node sketch is at most "w".
A brief notational aside: for a bit string "s" and nonnegative integer "m", let "s""m" denote the concatenation of "s" to itself "m" times. If "t" is also a bit string, "st" denotes the concatenation of "t" to "s".
The node sketch makes it possible to search the keys for any "b"-bit integer "y". Let "z" = (0"y")^"k", which can be computed in constant time (multiply "y" by the constant (0^"b"1)^"k"), so that it is as long as the node sketch and each block of the node sketch can be compared with the query integer "y" in one operation, demonstrating word-level parallelism. If "y" were 5 bits long, it would be multiplied by 000001...000001 to obtain "y" repeated in every block. The difference between each block 1codice_0("x""i") and the corresponding block 0"y" leaves the leading bit of that block set to 1 if and only if "y" formula_3 codice_0("x""i"). We can thus compute the smallest index "i" such that codice_0("x""i") ≥ "y" as follows:
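The concrete steps are not reproduced above; as an illustration of the same word-level trick, the following Python sketch packs the blocks into one integer and finds, with a single subtraction, how many stored sketches fall below the query. Assuming the sketches are sorted, that count is exactly the index "i"; the original algorithm instead extracts it with a most-significant-bit computation.

```python
def parallel_compare(sketches, y, b):
    """Index of the first stored sketch that is >= y (or len(sketches) if none).

    `sketches` are assumed sorted, each fitting in b bits; every stored sketch
    occupies a (b + 1)-bit block with a leading 1, the query is replicated with
    leading 0s, and after subtracting, the leading bit of block i survives
    exactly when sketch(x_i) >= y.
    """
    k = len(sketches)
    node = 0
    for i, s in enumerate(sketches):              # block i holds 1·sketch(x_i)
        node |= ((1 << b) | s) << (i * (b + 1))
    query = 0
    for i in range(k):                            # block i holds 0·y
        query |= y << (i * (b + 1))
    diff = node - query
    surviving = sum((diff >> (i * (b + 1) + b)) & 1 for i in range(k))
    return k - surviving                          # number of sketches strictly < y

print(parallel_compare([0, 1, 2, 3], 2, 2))       # -> 2, i.e. sketches[2] is the first >= 2
```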
Desketching.
For an arbitrary query "q", parallel comparison computes the index "i" such that
codice_0("x""i"-1) ≤ codice_0("q") ≤ codice_0("x""i")
Unfortunately, this does not give the exact predecessor or successor of "q", because the location of the sketch of "q" among the sketches of all the values may not be the same as the location of "q" among all the actual values. What is true is that, among all of the keys, either "x""i"-1 or "x""i" has the longest common prefix with "q". This is because any key "y" with a longer common prefix with "q" would also have more sketch bits in common with "q", and thus codice_0("y") would be closer to codice_0("q") than any codice_0("x""j").
The length of the longest common prefix between two "w"-bit integers "a" and "b" can be computed in constant time by finding the most significant bit of the bitwise XOR between "a" and "b". This can then be used to mask out all but the longest common prefix.
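A direct Python rendering of this observation, using arbitrary-precision integers in place of fixed machine words:

```python
def common_prefix_length(a, b, w):
    """Length of the longest common prefix of two w-bit integers a and b.

    Computed from the most significant set bit of a XOR b, as described above.
    """
    x = a ^ b
    return w if x == 0 else w - x.bit_length()

print(common_prefix_length(0b10110, 0b10011, 5))  # -> 2 (common prefix "10")
```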
Note that "p" identifies exactly where "q" branches off from the set of keys. If the next bit of "q" is 0, then the successor of "q" is contained in the "p"1 subtree, and if the next bit of "q" is 1, then the predecessor of "q" is contained in the "p"0 subtree. This suggests the following algorithm for determining the exact location of "q":
Fusion hashing.
An application of fusion trees to hash tables was given by Willard, who describes a data structure for hashing in which an outer-level hash table with hash chaining is combined with a fusion tree representing each hash chain.
In hash chaining, in a hash table with a constant load factor, the average size of a chain is constant, but additionally with high probability all chains have size "O"(log "n" / log log "n"), where n is the number of hashed items.
This chain size is small enough that a fusion tree can handle searches and updates within it in constant time per operation. Therefore, the time for all operations in the data structure is constant with high probability.
More precisely, with this data structure, for every inverse-quasipolynomial probability "p"("n") = exp((log "n")"O"(1)), there is a constant C such that the probability that there exists an operation that exceeds time C is at most "p"("n").
Computational Model and Necessary Assumptions.
The computational model for the Fusion Tree algorithm is a Word RAM with a specific instruction set, including arithmetic instructions - addition, subtraction and multiplication (all performed
modulo 2^"w") and Boolean operations - bitwise AND, NOT etc. A double-precision multiplication instruction is also included. It has been shown that removal of the latter instruction makes it impossible to sort faster than "O"("n" log "n"), unless it is permitted to use memory space of nearly 2^"w" words (in contrast to linear space used by Fusion Trees), or include other instructions instead.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x_{b_r}x_{b_{r-1}}\\cdots x_{b_1}"
},
{
"math_id": 1,
"text": "\\textstyle\\sum_{i=1}^r"
},
{
"math_id": 2,
"text": "\\sum_{i=0}^{r-1} 2^{b_i}"
},
{
"math_id": 3,
"text": "\\leq\n"
}
] | https://en.wikipedia.org/wiki?curid=1051942 |
1052096 | Logarithmic form | In algebraic geometry and the theory of complex manifolds, a logarithmic differential form is a differential form with poles of a certain kind. The concept was introduced by Pierre Deligne. In short, logarithmic differentials have the mildest possible singularities needed in order to give information about an open submanifold (the complement of the divisor of poles). (This idea is made precise by several versions of de Rham's theorem discussed below.)
Let "X" be a complex manifold, "D" ⊂ "X" a reduced divisor (a sum of distinct codimension-1 complex subspaces), and ω a holomorphic "p"-form on "X"−"D". If both ω and "d"ω have a pole of order at most 1 along "D", then ω is said to have a logarithmic pole along "D". ω is also known as a logarithmic "p"-form. The "p"-forms with log poles along "D" form a subsheaf of the meromorphic "p"-forms on "X", denoted
formula_0
The name comes from the fact that in complex analysis, formula_1; here formula_2 is a typical example of a 1-form on the complex numbers C with a logarithmic pole at the origin. Differential forms such as formula_2 make sense in a purely algebraic context, where there is no analog of the logarithm function.
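As a sanity check on this basic example (a standard computation, shown here only for illustration), take X = C and D = {0}:

```latex
% X = C, D = {0}, omega = dz/z.
\[
z\,\omega = dz \quad\text{(holomorphic)}, \qquad
d\omega = d\!\left(\frac{dz}{z}\right) = -\frac{1}{z^{2}}\,dz\wedge dz = 0.
\]
```

Both ω and dω therefore have a pole of order at most 1 along D, and the residue of ω at the origin is 1.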
Logarithmic de Rham complex.
Let "X" be a complex manifold and "D" a reduced divisor on "X". By definition of formula_3 and the fact that the exterior derivative "d" satisfies "d"2 = 0, one has
formula_4
for every open subset "U" of "X". Thus the logarithmic differentials form a complex of sheaves formula_5, known as the logarithmic de Rham complex associated to the divisor "D". This is a subcomplex of the direct image formula_6, where formula_7 is the inclusion and formula_8 is the complex of sheaves of holomorphic forms on "X"−"D".
Of special interest is the case where "D" has normal crossings: that is, "D" is locally a sum of codimension-1 complex submanifolds that intersect transversely. In this case, the sheaf of logarithmic differential forms is the subalgebra of formula_9 generated by the holomorphic differential forms formula_10 together with the 1-forms formula_11 for holomorphic functions formula_12 that are nonzero outside "D". Note that
formula_13
Concretely, if "D" is a divisor with normal crossings on a complex manifold "X", then each point "x" has an open neighborhood "U" on which there are holomorphic coordinate functions formula_14 such that "x" is the origin and "D" is defined by the equation formula_15 for some formula_16. On the open set "U", sections of formula_17 are given by
formula_18
This describes the holomorphic vector bundle formula_19 on formula_20. Then, for any formula_21, the vector bundle formula_22 is the "k"th exterior power,
formula_23
The logarithmic tangent bundle formula_24 means the dual vector bundle to formula_25. Explicitly, a section of formula_24 is a holomorphic vector field on "X" that is tangent to "D" at all smooth points of "D".
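In the local coordinates used above, with D defined by the vanishing of the product of the first k coordinates, a standard local frame for the logarithmic tangent bundle, dual to the basis of formula_17 given earlier, is:

```latex
% Local frame for TX(-log D), dual to dz_1/z_1, ..., dz_k/z_k, dz_{k+1}, ..., dz_n.
\[
z_1\frac{\partial}{\partial z_1},\ \dots,\ z_k\frac{\partial}{\partial z_k},\
\frac{\partial}{\partial z_{k+1}},\ \dots,\ \frac{\partial}{\partial z_n}.
\]
```

Each of these vector fields is holomorphic and tangent to D at its smooth points, matching the description above.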
Logarithmic differentials and singular cohomology.
Let "X" be a complex manifold and "D" a divisor with normal crossings on "X". Deligne proved a holomorphic analog of de Rham's theorem in terms of logarithmic differentials. Namely,
formula_26
where the left side denotes the cohomology of "X" with coefficients in a complex of sheaves, sometimes called hypercohomology. This follows from the natural inclusion of complexes of sheaves
formula_27
being a quasi-isomorphism.
Logarithmic differentials in algebraic geometry.
In algebraic geometry, the vector bundle of logarithmic differential "p"-forms formula_3 on a smooth scheme "X" over a field, with respect to a divisor formula_28 with simple normal crossings, is defined as above: sections of formula_3 are (algebraic) differential forms ω on formula_29 such that both ω and "d"ω have a pole of order at most one along "D". Explicitly, for a closed point "x" that lies in formula_30 for formula_31 and not in formula_30 for formula_32, let formula_33 be regular functions on some open neighborhood "U" of "x" such that formula_30 is the closed subscheme defined by formula_34 inside "U" for formula_31, and "x" is the closed subscheme of "U" defined by formula_35. Then a basis of sections of formula_25 on "U" is given by:
formula_36
This describes the vector bundle formula_25 on "X", and then formula_3 is the "p"th exterior power of formula_25.
There is an exact sequence of coherent sheaves on "X":
formula_37
where formula_38 is the inclusion of an irreducible component of "D". Here β is called the residue map; so this sequence says that a 1-form with log poles along "D" is regular (that is, has no poles) if and only if its residues are zero. More generally, for any "p" ≥ 0, there is an exact sequence of coherent sheaves on "X":
formula_39
where the sums run over all irreducible components of given dimension of intersections of the divisors "D""j". Here again, β is called the residue map.
Explicitly, on an open subset of formula_20 that only meets one component formula_30 of formula_40, with formula_30 locally defined by formula_41, the residue of a logarithmic formula_42-form along formula_30 is determined by: the residue of a regular "p"-form is zero, whereas
formula_43
for any regular formula_44-form formula_45. Some authors define the residue by saying that formula_46 has residue formula_47, which differs from the definition here by the sign formula_48.
Example of the residue.
Over the complex numbers, the residue of a differential form with log poles along a divisor formula_30 can be viewed as the result of integration over loops in formula_20 around formula_30. In this context, the residue may be called the Poincaré residue.
For an explicit example, consider an elliptic curve "D" in the complex projective plane formula_49, defined in affine coordinates formula_50 by the equation formula_51 where formula_52 and formula_53 is a complex number. Then "D" is a smooth hypersurface of degree 3 in formula_54 and, in particular, a divisor with simple normal crossings. There is a meromorphic 2-form on formula_54 given in affine coordinates by
formula_55
which has log poles along "D". Because the canonical bundle formula_56 is isomorphic to the line bundle formula_57, the divisor of poles of formula_58 must have degree 3. So the divisor of poles of formula_58 consists only of "D" (in particular, formula_58 does not have a pole along the line formula_59 at infinity). The residue of ω along "D" is given by the holomorphic 1-form
formula_60
It follows that formula_61 extends to a holomorphic one-form on the projective curve "D" in formula_54, an elliptic curve.
The residue map formula_62 considered here is part of a linear map formula_63, which may be called the "Gysin map". This is part of the Gysin sequence associated to any smooth divisor "D" in a complex manifold "X":
formula_64
Historical terminology.
In the 19th-century theory of elliptic functions, 1-forms with logarithmic poles were sometimes called "integrals of the second kind" (and, with an unfortunate inconsistency, sometimes "differentials of the third kind"). For example, the Weierstrass zeta function associated to a lattice formula_65 in C was called an "integral of the second kind" to mean that it could be written
formula_66
In modern terms, it follows that formula_67 is a 1-form on C with logarithmic poles on formula_65, since formula_65 is the zero set of the Weierstrass sigma function formula_68
Mixed Hodge theory for smooth varieties.
Over the complex numbers, Deligne proved a strengthening of Alexander Grothendieck's algebraic de Rham theorem, relating coherent sheaf cohomology with singular cohomology. Namely, for any smooth scheme "X" over C with a divisor with simple normal crossings "D", there is a natural isomorphism
formula_69
for each integer "k", where the groups on the left are defined using the Zariski topology and the groups on the right use the classical (Euclidean) topology.
Moreover, when "X" is smooth and proper over C, the resulting spectral sequence
formula_70
degenerates at formula_71. So the cohomology of formula_29 with complex coefficients has a decreasing filtration, the Hodge filtration, whose associated graded vector spaces are the algebraically defined groups formula_72.
This is part of the mixed Hodge structure which Deligne defined on the cohomology of any complex algebraic variety. In particular, there is also a weight filtration on the rational cohomology of formula_29. The resulting filtration on formula_73 can be constructed using the logarithmic de Rham complex. Namely, define an increasing filtration formula_74 by
formula_75
The resulting filtration on cohomology is the weight filtration:
formula_76
Building on these results, Hélène Esnault and Eckart Viehweg generalized the Kodaira–Akizuki–Nakano vanishing theorem in terms of logarithmic differentials. Namely, let "X" be a smooth complex projective variety of dimension "n", "D" a divisor with simple normal crossings on "X", and "L" an ample line bundle on "X". Then
formula_77
and
formula_78
for all formula_79.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Omega^p_X(\\log D)."
},
{
"math_id": 1,
"text": "d(\\log z)=dz/z"
},
{
"math_id": 2,
"text": "dz/z"
},
{
"math_id": 3,
"text": "\\Omega^p_X(\\log D)"
},
{
"math_id": 4,
"text": " d\\Omega^p_X(\\log D)(U)\\subset \\Omega^{p+1}_X(\\log D)(U)"
},
{
"math_id": 5,
"text": "( \\Omega^{\\bullet}_X(\\log D), d) "
},
{
"math_id": 6,
"text": " j_*(\\Omega^{\\bullet}_{X-D}) "
},
{
"math_id": 7,
"text": " j:X-D\\rightarrow X "
},
{
"math_id": 8,
"text": " \\Omega^{\\bullet}_{X-D} "
},
{
"math_id": 9,
"text": "j_*(\\Omega^{\\bullet}_{X-D})"
},
{
"math_id": 10,
"text": "\\Omega^{\\bullet}_X"
},
{
"math_id": 11,
"text": "df/f"
},
{
"math_id": 12,
"text": "f"
},
{
"math_id": 13,
"text": "\\frac{d(fg)}{fg}=\\frac{df}{f}+\\frac{dg}{g}."
},
{
"math_id": 14,
"text": "z_1,\\ldots,z_n"
},
{
"math_id": 15,
"text": " z_1\\cdots z_k = 0 "
},
{
"math_id": 16,
"text": "0\\leq k\\leq n"
},
{
"math_id": 17,
"text": " \\Omega^1_X(\\log D) "
},
{
"math_id": 18,
"text": "\\Omega_X^1(\\log D) = \\mathcal{O}_{X}\\frac{dz_1}{z_1}\\oplus\\cdots\\oplus\\mathcal{O}_{X}\\frac{dz_k}{z_k} \\oplus \\mathcal{O}_{X}dz_{k+1} \\oplus \\cdots \\oplus \\mathcal{O}_{X}dz_n."
},
{
"math_id": 19,
"text": "\\Omega_X^1(\\log D)"
},
{
"math_id": 20,
"text": "X"
},
{
"math_id": 21,
"text": "k\\geq 0"
},
{
"math_id": 22,
"text": "\\Omega^k_X(\\log D)"
},
{
"math_id": 23,
"text": " \\Omega_X^k(\\log D) = \\bigwedge^k \\Omega_X^1(\\log D)."
},
{
"math_id": 24,
"text": "TX(-\\log D)"
},
{
"math_id": 25,
"text": "\\Omega^1_X(\\log D)"
},
{
"math_id": 26,
"text": " H^k(X, \\Omega^{\\bullet}_X(\\log D))\\cong H^k(X-D,\\mathbf{C}),"
},
{
"math_id": 27,
"text": " \\Omega^{\\bullet}_X(\\log D)\\rightarrow j_*\\Omega_{X-D}^{\\bullet} "
},
{
"math_id": 28,
"text": "D = \\sum D_j"
},
{
"math_id": 29,
"text": "X-D"
},
{
"math_id": 30,
"text": "D_j"
},
{
"math_id": 31,
"text": "1 \\le j \\le k"
},
{
"math_id": 32,
"text": "j > k"
},
{
"math_id": 33,
"text": "u_j"
},
{
"math_id": 34,
"text": "u_j=0"
},
{
"math_id": 35,
"text": "u_1=\\cdots=u_n=0"
},
{
"math_id": 36,
"text": "{du_1 \\over u_1}, \\dots, {du_k \\over u_k}, \\, du_{k+1}, \\dots, du_n."
},
{
"math_id": 37,
"text": "0 \\to \\Omega^1_X \\to \\Omega^1_X(\\log D) \\overset{\\beta}\\to \\oplus_j ({i_j})_*\\mathcal{O}_{D_j} \\to 0,"
},
{
"math_id": 38,
"text": "i_j: D_j \\to X"
},
{
"math_id": 39,
"text": "0 \\to \\Omega^p_X \\to \\Omega^p_X(\\log D) \\overset{\\beta}\\to \\oplus_j ({i_j})_*\\Omega^{p-1}_{D_j}(\\log (D-D_j)) \\to \\cdots \\to 0,"
},
{
"math_id": 40,
"text": "D"
},
{
"math_id": 41,
"text": "f=0"
},
{
"math_id": 42,
"text": "p"
},
{
"math_id": 43,
"text": "\\text{Res}_{D_j}\\bigg(\\frac{df}{f}\\wedge \\alpha\\bigg)=\\alpha|_{D_j}"
},
{
"math_id": 44,
"text": "(p-1)"
},
{
"math_id": 45,
"text": "\\alpha"
},
{
"math_id": 46,
"text": "\\alpha\\wedge(df/f)"
},
{
"math_id": 47,
"text": "\\alpha|_{D_j}"
},
{
"math_id": 48,
"text": "(-1)^{p-1}"
},
{
"math_id": 49,
"text": "\\mathbf{P}^2=\\{ [x,y,z]\\}"
},
{
"math_id": 50,
"text": "z=1"
},
{
"math_id": 51,
"text": "g(x,y) = y^2 - f(x) = 0,"
},
{
"math_id": 52,
"text": "f(x) = x(x-1)(x-\\lambda)"
},
{
"math_id": 53,
"text": "\\lambda\\neq 0,1"
},
{
"math_id": 54,
"text": "\\mathbf{P}^2"
},
{
"math_id": 55,
"text": "\\omega =\\frac{dx\\wedge dy}{g(x,y)},"
},
{
"math_id": 56,
"text": "K_{\\mathbf{P}^2}=\\Omega^2_{\\mathbf{P}^2}"
},
{
"math_id": 57,
"text": "\\mathcal{O}(-3)"
},
{
"math_id": 58,
"text": "\\omega"
},
{
"math_id": 59,
"text": "z=0"
},
{
"math_id": 60,
"text": " \\text{Res}_D(\\omega) = \\left. \\frac{dy}{\\partial g/\\partial x} \\right |_D =\\left. -\\frac{dx}{\\partial g/\\partial y} \\right |_D = \\left. -\\frac{1}{2}\\frac{dx}{y} \\right |_D. "
},
{
"math_id": 61,
"text": "dx/y|_D "
},
{
"math_id": 62,
"text": "H^0(\\mathbf{P}^2,\\Omega^2_{\\mathbf{P}^2}(\\log D))\\to H^0(D,\\Omega^1_D)"
},
{
"math_id": 63,
"text": "H^2(\\mathbf{P}^2-D,\\mathbf{C})\\to H^1(D,\\mathbf{C})"
},
{
"math_id": 64,
"text": "\\cdots \\to H^{j-2}(D)\\to H^j(X)\\to H^j(X-D)\\to H^{j-1}(D)\\to\\cdots."
},
{
"math_id": 65,
"text": "\\Lambda"
},
{
"math_id": 66,
"text": "\\zeta(z)=\\frac{\\sigma'(z)}{\\sigma(z)}."
},
{
"math_id": 67,
"text": "\\zeta(z)dz=d\\sigma/\\sigma"
},
{
"math_id": 68,
"text": "\\sigma(z)."
},
{
"math_id": 69,
"text": " H^k(X, \\Omega^{\\bullet}_X(\\log D)) \\cong H^k(X-D,\\mathbf{C})"
},
{
"math_id": 70,
"text": "E_1^{pq} = H^q(X,\\Omega^p_X(\\log D)) \\Rightarrow H^{p+q}(X-D,\\mathbf{C})"
},
{
"math_id": 71,
"text": "E_1"
},
{
"math_id": 72,
"text": "H^q(X,\\Omega^p_X(\\log D))"
},
{
"math_id": 73,
"text": "H^*(X-D,\\mathbf{C})"
},
{
"math_id": 74,
"text": "W_{\\bullet} \\Omega^p_X(\\log D) "
},
{
"math_id": 75,
"text": "W_{m}\\Omega^p_X(\\log D) = \\begin{cases}\n0 & m < 0\\\\\n\\Omega^{p-m}_X\\cdot \\Omega^m_X(\\log D) & 0\\leq m \\leq p\\\\\n\\Omega^p_X(\\log D) & m\\geq p.\n\\end{cases} "
},
{
"math_id": 76,
"text": " W_mH^k(X-D, \\mathbf{C}) = \\text{Im}(H^k(X, W_{m-k}\\Omega^{\\bullet}_X(\\log D))\\rightarrow H^k(X-D,\\mathbf{C}))."
},
{
"math_id": 77,
"text": "H^q(X,\\Omega^p_X(\\log D)\\otimes L)=0"
},
{
"math_id": 78,
"text": "H^q(X,\\Omega^p_X(\\log D)\\otimes O_X(-D)\\otimes L)=0"
},
{
"math_id": 79,
"text": "p+q>n"
}
] | https://en.wikipedia.org/wiki?curid=1052096 |
1052176 | Euler system | In mathematics, an Euler system is a collection of compatible elements of Galois cohomology groups indexed by fields. They were introduced by Kolyvagin (1990) in his work on Heegner points on modular elliptic curves, which was motivated by his earlier paper and by earlier related work. Euler systems are named after Leonhard Euler because the factors relating different elements of an Euler system resemble the Euler factors of an Euler product.
Euler systems can be used to construct annihilators of ideal class groups or Selmer groups, thus giving bounds on their orders, which in turn has led to deep theorems such as the finiteness of some Tate-Shafarevich groups. This led to Karl Rubin's new proof of the main conjecture of Iwasawa theory, considered simpler than the original proof due to Barry Mazur and Andrew Wiles.
Definition.
Although there are several definitions of special sorts of Euler system, there seems to be no published definition of an Euler system that covers all known cases. But it is possible to say roughly what an Euler system is, as follows:
formula_0
Here the "Euler factor" "P"(τ|"B";"x") is defined to be the element det(1-τ"x"|"B") considered as an element of O["x"], which when "x" happens to act on "B" is not the same as det(1-τ"x"|"B") considered as an element of O.
Kazuya Kato refers to the elements in an Euler system as "arithmetic incarnations of zeta" and describes the property of being an Euler system as "an arithmetic reflection of the fact that these incarnations are related to special values of Euler products".
Examples.
Cyclotomic units.
For every square-free positive integer "n" pick an "n"-th root ζ"n" of 1, with ζ"mn" = ζ"m"ζ"n" for "m","n" coprime. Then the cyclotomic Euler system is the set of numbers
α"n" = 1 − ζ"n". These satisfy the relations
formula_1
formula_2 modulo all primes above "l"
where "l" is a prime not dividing "n" and "F""l" is a Frobenius automorphism with "F""l"(ζ"n") = ζ.
Kolyvagin used this Euler system to give an elementary proof of the Gras conjecture.
Heegner points.
Kolyvagin constructed an Euler system from the Heegner points of an elliptic curve, and used this to show that in some cases the Tate-Shafarevich group is finite.
Kato's Euler system.
Kato's Euler system consists of certain elements occurring in the algebraic K-theory of modular curves. These elements—named Beilinson elements after Alexander Beilinson who introduced them in —were used by Kazuya Kato in to prove one divisibility in Barry Mazur's main conjecture of Iwasawa theory for elliptic curves.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " {\\rm cor}_{G/F}(c_G) = \\prod_{q\\in \\Sigma(G/F)}P(\\mathrm{Fr}_q^{-1}|{\\rm Hom}_O(T,O(1));\\mathrm{Fr}_q^{-1})c_F"
},
{
"math_id": 1,
"text": "N_{Q(\\zeta_{nl})/Q(\\zeta_l)}(\\alpha_{nl}) = \\alpha_n^{F_l-1}"
},
{
"math_id": 2,
"text": "\\alpha_{nl}\\equiv\\alpha_n "
}
] | https://en.wikipedia.org/wiki?curid=1052176 |
1052323 | Sphere of influence (astrodynamics) | Region of space gravitationally dominated by a given body
A sphere of influence (SOI) in astrodynamics and astronomy is the oblate-spheroid-shaped region where a particular celestial body exerts the main gravitational influence on an orbiting object. This is usually used to describe the areas in the Solar System where planets dominate the orbits of surrounding objects such as moons, despite the presence of the much more massive but distant Sun.
In the patched conic approximation, used in estimating the trajectories of bodies moving between the neighbourhoods of different bodies using a two-body approximation, ellipses and hyperbolae, the SOI is taken as the boundary where the trajectory switches which mass field it is influenced by. It is not to be confused with the sphere of activity which extends well beyond the sphere of influence.
Models.
The most common base models used to calculate the sphere of influence are the Hill sphere and the Laplace sphere, but updated and more dynamic models have also been described.
The general equation describing the radius formula_0 of the sphere of influence of a planet is:
formula_1
where formula_2 is the semi-major axis of the smaller object's (usually a planet's) orbit around the larger body (usually the Sun), and formula_3 and formula_4 are the masses of the smaller and the larger object (usually a planet and the Sun), respectively.
In the patched conic approximation, once an object leaves the planet's SOI, the primary/only gravitational influence is the Sun (until the object enters another body's SOI). Because the definition of "r"SOI relies on the presence of the Sun and a planet, the term is only applicable in a three-body or greater system and requires the mass of the primary body to be much greater than the mass of the secondary body. This changes the three-body problem into a restricted two-body problem.
Table of selected SOI radii.
The table shows the values of the sphere of gravity of the bodies of the solar system in relation to the Sun (with the exception of the Moon, which is reported relative to Earth):
An important point to draw from this table is that the spheres of influence listed here are taken with respect to the primary (the Sun). For example, although Jupiter is much more massive than, say, Neptune, its SOI is much smaller because Jupiter is much closer to the Sun.
Increased accuracy on the SOI.
The Sphere of influence is, in fact, not quite a sphere. The distance to the SOI depends on the angular distance formula_5 from the massive body. A more accurate formula is given by
formula_6
Averaging over all possible directions we get:
formula_7
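The formulas above are straightforward to evaluate numerically. The following Python sketch is purely illustrative (the orbital radius and masses are rounded, approximate values inserted for the example, not figures quoted from this article); it estimates the Earth's SOI radius with respect to the Sun, its variation with the angular distance formula_5, and the directional average.

```python
# Illustrative sketch of r_SOI ~ a*(m/M)**(2/5) and its angular variation.
# The numerical constants below are approximate, rounded values.
import math

a_earth = 1.496e8     # semi-major axis of Earth's orbit around the Sun, in km (approximate)
m_earth = 5.972e24    # mass of Earth, in kg (approximate)
M_sun = 1.989e30      # mass of the Sun, in kg (approximate)

r_soi = a_earth * (m_earth / M_sun) ** (2.0 / 5.0)
print(f"Mean-model Earth SOI radius: {r_soi:.3e} km")  # on the order of 1e6 km

def r_soi_theta(theta):
    """Direction-dependent SOI radius, using the (1 + 3*cos^2(theta))**(1/10) correction."""
    return r_soi / (1.0 + 3.0 * math.cos(theta) ** 2) ** 0.1

print(f"Along the Sun-Earth line (theta = 0): {r_soi_theta(0.0):.3e} km")
print(f"Perpendicular direction (theta = pi/2): {r_soi_theta(math.pi / 2):.3e} km")
print(f"Average over directions (0.9431 factor): {0.9431 * r_soi:.3e} km")
```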
Derivation.
Consider two point masses formula_8 and formula_9 at locations formula_10 and formula_11, with mass formula_12 and formula_13 respectively. The distance formula_14 separates the two objects. Given a massless third point formula_15 at location formula_16, one can ask whether to use a frame centered on formula_8 or on formula_9 to analyse the dynamics of formula_15.
Consider a frame centered on formula_17. The gravity of formula_18 is denoted as formula_19 and will be treated as a perturbation to the dynamics of formula_15 due to the gravity formula_20 of body formula_17. Due to their gravitational interactions, point formula_17 is attracted to point formula_18 with acceleration formula_21; this frame is therefore non-inertial. To quantify the effects of the perturbations in this frame, one should consider the ratio of the perturbations to the main body gravity, i.e. formula_22. The perturbation formula_23 is also known as the tidal forces due to body formula_18. It is possible to construct the perturbation ratio formula_24 for the frame centered on formula_18 by interchanging formula_25.
As formula_15 gets close to formula_17, formula_26 and formula_27, and vice versa. The frame to choose is the one that has the smallest perturbation ratio. The surface for which formula_28 separates the two regions of influence. In general this region is rather complicated, but in the case that one mass dominates the other, say formula_29, it is possible to approximate the separating surface. In such a case this surface must be close to the mass formula_17; denote by formula_30 the distance from formula_17 to the separating surface.
Setting the two perturbation ratios equal to each other and keeping only the leading-order behaviour for formula_30 much smaller than formula_14, the distance to the sphere of influence must satisfy formula_31, and so formula_32 is the radius of the sphere of influence of body formula_17.
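Spelling out the last step (a brief sketch in LaTeX notation, using the same symbols as in the derivation above), the equality of the two leading-order ratios rearranges directly to the stated radius:

```latex
\frac{m_B}{m_A}\,\frac{r^3}{R^3} = \frac{m_A}{m_B}\,\frac{R^2}{r^2}
\quad\Longrightarrow\quad
\frac{r^5}{R^5} = \left(\frac{m_A}{m_B}\right)^{2}
\quad\Longrightarrow\quad
r = R\left(\frac{m_A}{m_B}\right)^{2/5}.
```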
Gravity well.
Gravity well is a metaphorical name for the sphere of influence, highlighting the gravitational potential that shapes the region and that must be accounted for in order to escape from it or to remain within it.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "r_\\text{SOI}"
},
{
"math_id": 1,
"text": "r_\\text{SOI} \\approx a\\left(\\frac{m}{M}\\right)^{2/5}"
},
{
"math_id": 2,
"text": "a"
},
{
"math_id": 3,
"text": "m"
},
{
"math_id": 4,
"text": "M"
},
{
"math_id": 5,
"text": "\\theta"
},
{
"math_id": 6,
"text": "r_\\text{SOI}(\\theta) \\approx a\\left(\\frac{m}{M}\\right)^{2/5}\\frac{1}{\\sqrt[10]{1+3\\cos^2(\\theta)}}"
},
{
"math_id": 7,
"text": "\\overline{r_\\text{SOI}} = 0.9431 a\\left(\\frac{m}{M}\\right)^{2/5}"
},
{
"math_id": 8,
"text": " A"
},
{
"math_id": 9,
"text": " B"
},
{
"math_id": 10,
"text": " r_A"
},
{
"math_id": 11,
"text": " r_B"
},
{
"math_id": 12,
"text": " m_A"
},
{
"math_id": 13,
"text": " m_B"
},
{
"math_id": 14,
"text": " R=|r_B-r_A|"
},
{
"math_id": 15,
"text": " C "
},
{
"math_id": 16,
"text": " r_C "
},
{
"math_id": 17,
"text": " A "
},
{
"math_id": 18,
"text": " B "
},
{
"math_id": 19,
"text": " g_B "
},
{
"math_id": 20,
"text": " g_A "
},
{
"math_id": 21,
"text": " a_A = \\frac{Gm_B}{R^3} (r_B-r_A) "
},
{
"math_id": 22,
"text": " \\chi_A = \\frac{|g_B-a_A|}{|g_A|} "
},
{
"math_id": 23,
"text": " g_B-a_A "
},
{
"math_id": 24,
"text": " \\chi_B "
},
{
"math_id": 25,
"text": " A \\leftrightarrow B "
},
{
"math_id": 26,
"text": " \\chi_A \\rightarrow 0 "
},
{
"math_id": 27,
"text": " \\chi_B \\rightarrow \\infty "
},
{
"math_id": 28,
"text": " \\chi_A = \\chi_B "
},
{
"math_id": 29,
"text": " m_A \\ll m_B "
},
{
"math_id": 30,
"text": " r "
},
{
"math_id": 31,
"text": " \\frac{m_B}{m_A} \\frac{r^3}{R^3} = \\frac{m_A}{m_B} \\frac{R^2}{r^2}"
},
{
"math_id": 32,
"text": " r = R\\left(\\frac{m_A}{m_B}\\right)^{2/5} "
}
] | https://en.wikipedia.org/wiki?curid=1052323 |
1052632 | Sylvester–Gallai theorem | Existence of a line through two points
The Sylvester–Gallai theorem in geometry states that every finite set of points in the Euclidean plane has a line that passes through exactly two of the points or a line that passes through all of them. It is named after James Joseph Sylvester, who posed it as a problem in 1893, and Tibor Gallai, who published one of the first proofs of this theorem in 1944.
A line that contains exactly two of a set of points is known as an "ordinary line". Another way of stating the theorem is that every finite set of points that is not collinear has an ordinary line. According to a strengthening of the theorem, every finite point set (not all on one line) has at least a linear number of ordinary lines. An algorithm can find an ordinary line in a set of formula_0 points in time formula_1.
History.
The Sylvester–Gallai theorem was posed as a problem by J. J. Sylvester (1893). Kelly (1986) suggests that Sylvester may have been motivated by a related phenomenon in algebraic geometry, in which the inflection points of a cubic curve in the complex projective plane form a configuration of nine points and twelve lines (the Hesse configuration) in which each line determined by two of the points contains a third point. The Sylvester–Gallai theorem implies that it is impossible for all nine of these points to have real coordinates.
H. J. Woodall (1893a, 1893b) claimed to have a short proof of the Sylvester–Gallai theorem, but it was already noted to be incomplete at the time of publication. Eberhard Melchior (1941) proved the theorem (and actually a slightly stronger result) in an equivalent formulation, its projective dual. Unaware of Melchior's proof, Paul Erdős (1943) again stated the conjecture, which was subsequently proved by Tibor Gallai, and soon afterwards by other authors.
In a 1951 review, Erdős called the result "Gallai's theorem", but it was already called the Sylvester–Gallai theorem in a 1954 review by Leonard Blumenthal. It is one of many mathematical topics named after Sylvester.
Equivalent versions.
The question of the existence of an ordinary line can also be posed for points in the real projective plane RP2 instead of the Euclidean plane. The projective plane can be formed from the Euclidean plane by adding extra points "at infinity" where lines that are parallel in the Euclidean plane intersect each other, and by adding a single line "at infinity" containing all the added points. However, the additional points of the projective plane cannot help create non-Euclidean finite point sets with no ordinary line, as any finite point set in the projective plane can be transformed into a Euclidean point set with the same combinatorial pattern of point-line incidences. Therefore, any pattern of finitely many intersecting points and lines that exists in one of these two types of plane also exists in the other. Nevertheless, the projective viewpoint allows certain configurations to be described more easily. In particular, it allows the use of projective duality, in which the roles of points and lines in statements of projective geometry can be exchanged for each other. Under projective duality, the existence of an ordinary line for a set of non-collinear points in RP2 is equivalent to the existence of an "ordinary point" in a nontrivial arrangement of finitely many lines. An arrangement is said to be trivial when all its lines pass through a common point, and nontrivial otherwise; an ordinary point is a point that belongs to exactly two lines.
Arrangements of lines have a combinatorial structure closely connected to zonohedra, polyhedra formed as the Minkowski sum of a finite set of line segments, called generators. In this connection, each pair of opposite faces of a zonohedron corresponds to a crossing point of an arrangement of lines in the projective plane, with one line for each generator. The number of sides of each face is twice the number of lines that cross in the arrangement. For instance, the elongated dodecahedron shown is a zonohedron with five generators, two pairs of opposite hexagon faces, and four pairs of opposite parallelogram faces.
In the corresponding five-line arrangement, two triples of lines cross (corresponding to the two pairs of opposite hexagons) and the remaining four pairs of lines cross at ordinary points (corresponding to the four pairs of opposite parallelograms). An equivalent statement of the Sylvester–Gallai theorem, in terms of zonohedra, is that every zonohedron has at least one parallelogram face (counting rectangles, rhombuses, and squares as special cases of parallelograms). More strongly, whenever sets of formula_0 points in the plane can be guaranteed to have at least formula_2 ordinary lines, zonohedra with formula_0 generators can be guaranteed to have at least formula_3 parallelogram faces.
Proofs.
The Sylvester–Gallai theorem has been proved in many different ways. Gallai's 1944 proof switches back and forth between Euclidean and projective geometry, in order to transform the points into an equivalent configuration in which an ordinary line can be found as a line of slope closest to zero. The 1941 proof by Melchior uses projective duality to convert the problem into an equivalent question about arrangements of lines, which can be answered using Euler's polyhedral formula. Another proof by Leroy Milton Kelly shows by contradiction that the connecting line with the smallest nonzero distance to another point must be ordinary. And, following an earlier proof by Steinberg, H. S. M. Coxeter showed that the metric concepts of slope and distance appearing in Gallai's and Kelly's proofs are unnecessarily powerful, instead proving the theorem using only the axioms of ordered geometry.
Kelly's proof.
This proof is by Leroy Milton Kelly; it has been called "simply the best" of the many proofs of this theorem.
Suppose that a finite set formula_4 of points is not all collinear. Define a connecting line to be a line that contains at least two points in the collection. By finiteness, formula_4 must have a point formula_5 and a connecting line formula_6 that are a positive distance apart but are closer than all other point-line pairs. Kelly proved that formula_6 is ordinary, by contradiction.
Assume that formula_6 is not ordinary. Then it goes through at least three points of formula_4. At least two of these are on the same side of formula_7, the perpendicular projection of formula_5 on formula_6. Call them formula_8 and formula_9, with formula_8 being closest to formula_7 (and possibly coinciding with it). Draw the connecting line formula_10 passing through formula_5 and formula_9, and the perpendicular from formula_8 to its foot formula_11 on formula_10. Then formula_12 is shorter than formula_13. This follows from the fact that formula_14 and formula_15 are similar triangles, one contained inside the other.
However, this contradicts the original definition of formula_5 and formula_6 as the point-line pair with the smallest positive distance. So the assumption that formula_6 is not ordinary cannot be true, QED.
Melchior's proof.
In 1941 (thus, prior to Erdős publishing the question and Gallai's subsequent proof) Melchior showed that any nontrivial finite arrangement of lines in the projective plane has at least three ordinary points. By duality, this result also says that any finite nontrivial set of points on the plane has at least three ordinary lines.
Melchior observed that, for any graph embedded in the real projective plane, the formula formula_16 must equal formula_17, the Euler characteristic of the projective plane. Here formula_18, formula_19, and formula_20 are the number of vertices, edges, and faces of the graph, respectively. Any nontrivial line arrangement on the projective plane defines a graph in which each face is bounded by at least three edges, and each edge bounds two faces; so, double counting gives the additional inequality formula_21. Using this inequality to eliminate formula_20 from the Euler characteristic leads to the inequality formula_22. But if every vertex in the arrangement were the crossing point of three or more lines, then every vertex would have degree at least six, and (since the number of edges is half the sum of the vertex degrees) the total number of edges would be at least formula_23, contradicting this inequality. Therefore, some vertices must be the crossing point of only two lines, and as Melchior's more careful analysis shows, at least three ordinary vertices are needed in order to satisfy the inequality formula_22.
The same argument for the existence of an ordinary vertex was also given in 1944 by Norman Steenrod, who explicitly applied it to the dual ordinary line problem.
Melchior's inequality.
By a similar argument, Melchior was able to prove a more general result. For every formula_24, let formula_25 be the number of points to which formula_26 lines are incident. Then
formula_27
or equivalently,
formula_28
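In the dual setting of a finite point configuration, where formula_25 counts the connecting lines that contain exactly formula_26 of the points, the same inequality holds. The following Python sketch is only an illustration (the choice of a 3 × 3 grid of integer points is arbitrary); it computes these counts exactly and checks Melchior's inequality for them.

```python
# Illustrative check of Melchior's inequality  t_2 >= 3 + sum_{k>=4} (k-3)*t_k
# for a small set of integer points that are not all collinear.
from itertools import combinations
from math import gcd
from collections import Counter

def line_through(p, q):
    """Canonical integer form (a, b, c) of the line a*x + b*y + c = 0 through p and q."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    g = gcd(gcd(abs(a), abs(b)), abs(c)) or 1
    a, b, c = a // g, b // g, c // g
    if (a, b, c) < (0, 0, 0):   # fix the sign so each line has a unique representative
        a, b, c = -a, -b, -c
    return (a, b, c)

def t_counts(points):
    """t[k] = number of connecting lines containing exactly k of the given points."""
    lines = {line_through(p, q) for p, q in combinations(points, 2)}
    t = Counter()
    for a, b, c in lines:
        k = sum(1 for x, y in points if a * x + b * y + c == 0)
        t[k] += 1
    return t

points = [(x, y) for x in range(3) for y in range(3)]   # a 3x3 grid of points
t = t_counts(points)
lhs = t[2]
rhs = 3 + sum((k - 3) * t[k] for k in t if k >= 4)
print(dict(t), "; Melchior's inequality:", lhs, ">=", rhs)
```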
Axiomatics.
H. S. M. Coxeter (1948, 1969) writes of Kelly's proof that its use of Euclidean distance is unnecessarily powerful, "like using a sledge hammer to crack an almond". Instead, Coxeter gave another proof of the Sylvester–Gallai theorem within ordered geometry, an axiomatization of geometry in terms of betweenness that includes not only Euclidean geometry but several other related geometries. Coxeter's proof is a variation of an earlier proof given by Steinberg in 1944. The problem of finding a minimal set of axioms needed to prove the theorem belongs to reverse mathematics and has been studied in that setting.
The usual statement of the Sylvester–Gallai theorem is not valid in constructive analysis, as it implies the lesser limited principle of omniscience, a weakened form of the law of excluded middle that is rejected as an axiom of constructive mathematics. Nevertheless, it is possible to formulate a version of the Sylvester–Gallai theorem that is valid within the axioms of constructive analysis, and to adapt Kelly's proof of the theorem to be a valid proof under these axioms.
Finding an ordinary line.
Kelly's proof of the existence of an ordinary line can be turned into an algorithm that finds an ordinary line by searching for the closest pair of a point and a line through two other points. The time for this closest-pair search has been reported as formula_29, based on a brute-force search of all triples of points, but an algorithm to find the closest given point to each line through two given points, in time formula_30, was given earlier as a subroutine for finding the minimum-area triangle determined by three of a given set of points. The same work also shows how to construct the dual arrangement of lines to the given points (as used in Melchior and Steenrod's proof) in the same time, formula_30, from which it is possible to identify all ordinary vertices and all ordinary lines. It was first shown how to find a single ordinary line (not necessarily the one from Kelly's proof) in time formula_1, and a simpler algorithm with the same time bound was described later.
The latter algorithm is based on Coxeter's proof using ordered geometry. It performs the following steps:
As the authors prove, the line returned by this algorithm must be ordinary. The proof is either by construction if it is returned by step 4, or by contradiction if it is returned by step 7: if the line returned in step 7 were not ordinary, then the authors prove that there would exist an ordinary line between one of its points and formula_31, but this line should have already been found and returned in step 4.
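The brute-force version of the closest-pair search from Kelly's proof is simple to implement directly. The following Python sketch is an illustration only (it is not any of the published algorithms, and it trusts floating-point distances to distinguish collinear from non-collinear triples); it returns a pair of points spanning the connecting line that minimizes the point-to-line distance, which by Kelly's argument is an ordinary line.

```python
# Illustrative O(n^3) search for an ordinary line via Kelly's closest
# point-line pair argument.  Assumes the points are not all collinear and
# that floating-point distances are reliable for the given coordinates.
from itertools import combinations
from math import hypot, inf

def ordinary_line(points):
    best_pair, best_dist = None, inf
    for a, b in combinations(points, 2):
        (x1, y1), (x2, y2) = a, b
        dx, dy = x2 - x1, y2 - y1
        length = hypot(dx, dy)
        for p in points:
            if p == a or p == b:
                continue
            d = abs(dy * (p[0] - x1) - dx * (p[1] - y1)) / length  # distance from p to line ab
            if 0 < d < best_dist:
                best_pair, best_dist = (a, b), d
    return best_pair   # spans the ordinary line found by Kelly's argument

print(ordinary_line([(0, 0), (1, 0), (2, 0), (0, 1), (1, 2)]))
```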
The number of ordinary lines.
While the Sylvester–Gallai theorem states that an arrangement of points, not all collinear, must determine an ordinary line, it does not say how many must be determined. Let formula_2 be the minimum number of ordinary lines determined over every set of formula_0 non-collinear points. Melchior's proof showed that formula_35. de Bruijn and Erdős (1948) raised the question of whether formula_2 approaches infinity with formula_0. Theodore Motzkin (1951) confirmed that it does by proving that formula_36. Gabriel Dirac (1951) conjectured that formula_37, for all values of formula_0. This is often referred to as the Dirac–Motzkin conjecture. Kelly and Moser (1958) proved that formula_38.
Dirac's conjectured lower bound is asymptotically the best possible, as the even numbers formula_0 greater than four have a matching upper bound formula_39. The construction, due to Károly Böröczky, that achieves this bound consists of the vertices of a regular formula_10-gon in the real projective plane and another formula_10 points (thus, formula_40) on the line at infinity corresponding to each of the directions determined by pairs of vertices. Although there are formula_41 pairs of these points, they determine only formula_10 distinct directions. This arrangement has only formula_10 ordinary lines, the lines that connect a vertex formula_42 with the point at infinity collinear with the two neighbors of formula_42. As with any finite configuration in the real projective plane, this construction can be perturbed so that all points are finite, without changing the number of ordinary lines.
For odd formula_0, only two examples are known that match Dirac's lower bound conjecture, that is, with formula_43 One example, due to Kelly and Moser, consists of the vertices, edge midpoints, and centroid of an equilateral triangle; these seven points determine only three ordinary lines. The configuration in which these three ordinary lines are replaced by a single line cannot be realized in the Euclidean plane, but forms a finite projective space known as the Fano plane. Because of this connection, the Kelly–Moser example has also been called the non-Fano configuration. The other counterexample, due to McKee, consists of two regular pentagons joined edge-to-edge together with the midpoint of the shared edge and four points on the line at infinity in the projective plane; these 13 points have among them 6 ordinary lines. Modifications of Böröczky's construction lead to sets of odd numbers of points with formula_44 ordinary lines.
Csima and Sawyer proved that formula_45 except when formula_0 is seven. Asymptotically, this formula is already formula_46 of the proven formula_34 upper bound. The formula_47 case is an exception because otherwise the Kelly–Moser construction would be a counterexample; their construction shows that formula_48. However, were the Csima–Sawyer bound valid for formula_47, it would claim that formula_49.
A closely related result is Beck's theorem, stating a tradeoff between the number of lines with few points and the number of points on a single line.
Ben Green and Terence Tao showed that for all sufficiently large point sets (that is, formula_50 for some suitable choice of formula_51), the number of ordinary lines is indeed at least formula_34. Furthermore, when formula_0 is odd, the number of ordinary lines is at least formula_52, for some constant formula_9. Thus, the constructions of Böröczky for even and odd (discussed above) are best possible. Minimizing the number of ordinary lines is closely related to the orchard-planting problem of maximizing the number of three-point lines, which Green and Tao also solved for all sufficiently large point sets. In the dual setting, where one is looking for ordinary points, one can consider the minimum number of ordinary points in an arrangement of pseudolines. In this context, the Csima-Sawyer formula_53 lower bound is still valid, though it is not known whether the Green and Tao asymptotic formula_34 bound still holds.
The number of connecting lines.
As Paul Erdős observed, the Sylvester–Gallai theorem immediately implies that any set of formula_0 points that are not collinear determines at least formula_0 different lines. This result is known as the De Bruijn–Erdős theorem. As a base case, the result is clearly true for formula_54. For any larger value of formula_0, the result can be reduced from formula_0 points to formula_55 points, by deleting an ordinary line and one of the two points on it (taking care not to delete a point for which the remaining subset would lie on a single line). Thus, it follows by mathematical induction. The example of a near-pencil, a set of formula_55 collinear points together with one additional point that is not on the same line as the other points, shows that this bound is tight.
Generalizations.
The Sylvester–Gallai theorem has been generalized to colored point sets in the Euclidean plane, and to systems of points and lines defined algebraically or by distances in a metric space. In general, these variations of the theorem consider only finite sets of points, to avoid examples like the set of all points in the Euclidean plane, which does not have an ordinary line.
Colored points.
A variation of Sylvester's problem, posed in the mid-1960s by Ronald Graham and popularized by Donald J. Newman, considers finite planar sets of points (not all in a line) that are given two colors, and asks whether every such set has a line through two or more points that are all the same color. In the language of sets and families of sets, an equivalent statement is that the family of the collinear subsets of a finite point set (not all on one line) cannot have Property B. A proof of this variation was announced by Theodore Motzkin but never published; the first published proof appeared only later.
Non-real coordinates.
Just as the Euclidean plane or projective plane can be defined by using real numbers for the coordinates of their points (Cartesian coordinates for the Euclidean plane and homogeneous coordinates for the projective plane), analogous abstract systems of points and lines can be defined by using other number systems as coordinates. The Sylvester–Gallai theorem does not hold for geometries defined in this way over finite fields: for some finite geometries defined in this way, such as the Fano plane, the set of all points in the geometry has no ordinary lines.
The Sylvester–Gallai theorem also does not directly apply to geometries in which points have coordinates that are pairs of complex numbers or quaternions, but these geometries have more complicated analogues of the theorem. For instance, in the complex projective plane there exists a configuration of nine points, Hesse's configuration (the inflection points of a cubic curve), in which every line is non-ordinary, violating the Sylvester–Gallai theorem. Such a configuration is known as a Sylvester–Gallai configuration, and it cannot be realized by points and lines of the Euclidean plane. Another way of stating the Sylvester–Gallai theorem is that whenever the points of a Sylvester–Gallai configuration are embedded into a Euclidean space, preserving collinearities, the points must all lie on a single line, and the example of the Hesse configuration shows that this is false for the complex projective plane. However, Kelly proved a complex-number analogue of the Sylvester–Gallai theorem: whenever the points of a Sylvester–Gallai configuration are embedded into a complex projective space, the points must all lie in a two-dimensional subspace. Equivalently, a set of points in three-dimensional complex space whose affine hull is the whole space must have an ordinary line, and in fact must have a linear number of ordinary lines. Similarly, it has been shown that whenever a Sylvester–Gallai configuration is embedded into a space defined over the quaternions, its points must lie in a three-dimensional subspace.
Matroids.
Every set of points in the Euclidean plane, and the lines connecting them, may be abstracted as the elements and flats of a rank-3 oriented matroid. The points and lines of geometries defined using other number systems than the real numbers also form matroids, but not necessarily oriented matroids. In this context, the result of lower-bounding the number of ordinary lines can be generalized to oriented matroids: every rank-3 oriented matroid with formula_0 elements has at least formula_56 two-point lines, or equivalently every rank-3 matroid with fewer two-point lines must be non-orientable. A matroid without any two-point lines is called a Sylvester matroid. Relatedly, the Kelly–Moser configuration with seven points and only three ordinary lines forms one of the forbidden minors for GF(4)-representable matroids.
Distance geometry.
Another generalization of the Sylvester–Gallai theorem to arbitrary metric spaces was conjectured by Chvátal and proved by Chen. In this generalization, a triple of points in a metric space is defined to be collinear when the triangle inequality for these points is an equality, and a line is defined from any pair of points by repeatedly including additional points that are collinear with points already added to the line, until no more such points can be added. The generalization of Chvátal and Chen states that every finite metric space has a line that contains either all points or exactly two of the points.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "O(n\\log n)"
},
{
"math_id": 2,
"text": "t_2(n)"
},
{
"math_id": 3,
"text": "2t_2(n)"
},
{
"math_id": 4,
"text": "S"
},
{
"math_id": 5,
"text": "P"
},
{
"math_id": 6,
"text": "\\ell"
},
{
"math_id": 7,
"text": "P'"
},
{
"math_id": 8,
"text": "B"
},
{
"math_id": 9,
"text": "C"
},
{
"math_id": 10,
"text": "m"
},
{
"math_id": 11,
"text": "B'"
},
{
"math_id": 12,
"text": "BB'"
},
{
"math_id": 13,
"text": "PP'"
},
{
"math_id": 14,
"text": "PP'C"
},
{
"math_id": 15,
"text": "BB'C"
},
{
"math_id": 16,
"text": "V-E+F"
},
{
"math_id": 17,
"text": "1"
},
{
"math_id": 18,
"text": "V"
},
{
"math_id": 19,
"text": "E"
},
{
"math_id": 20,
"text": "F"
},
{
"math_id": 21,
"text": "F\\le 2E/3"
},
{
"math_id": 22,
"text": "E\\le 3V-3"
},
{
"math_id": 23,
"text": "3V"
},
{
"math_id": 24,
"text": "k\\ge 2"
},
{
"math_id": 25,
"text": "t_k"
},
{
"math_id": 26,
"text": "k"
},
{
"math_id": 27,
"text": "\\displaystyle \\sum_{k\\geq2} (k-3) t_k \\leq -3."
},
{
"math_id": 28,
"text": "\\displaystyle t_2 \\geqslant 3 + \\sum_{k\\geq4} (k-3) t_k."
},
{
"math_id": 29,
"text": "O(n^3)"
},
{
"math_id": 30,
"text": "O(n^2)"
},
{
"math_id": 31,
"text": "p_0"
},
{
"math_id": 32,
"text": "\\ell_0"
},
{
"math_id": 33,
"text": "\\ell_i"
},
{
"math_id": 34,
"text": "n/2"
},
{
"math_id": 35,
"text": "t_2(n)\\ge 3"
},
{
"math_id": 36,
"text": "t_2(n)\\ge\\sqrt n"
},
{
"math_id": 37,
"text": "t_2\\ge\\lfloor n/2\\rfloor"
},
{
"math_id": 38,
"text": "t_2(n)\\ge 3n/7"
},
{
"math_id": 39,
"text": "t_2(n)\\le n/2"
},
{
"math_id": 40,
"text": "n=2m"
},
{
"math_id": 41,
"text": "m(m-1)/2"
},
{
"math_id": 42,
"text": "v"
},
{
"math_id": 43,
"text": "t_2(n)=(n-1)/2"
},
{
"math_id": 44,
"text": "3\\lfloor n/4\\rfloor"
},
{
"math_id": 45,
"text": "t_2(n)\\ge\\lceil 6n/13\\rceil"
},
{
"math_id": 46,
"text": "12/13 \\approx 92.3\\%"
},
{
"math_id": 47,
"text": "n=7"
},
{
"math_id": 48,
"text": "t(7)\\le 3"
},
{
"math_id": 49,
"text": "t_2(7)\\ge 4"
},
{
"math_id": 50,
"text": "n > n_0"
},
{
"math_id": 51,
"text": "n_0"
},
{
"math_id": 52,
"text": "3n/4-C"
},
{
"math_id": 53,
"text": "\\lceil 6n/13\\rceil"
},
{
"math_id": 54,
"text": "n=3"
},
{
"math_id": 55,
"text": "n-1"
},
{
"math_id": 56,
"text": "3n/7"
}
] | https://en.wikipedia.org/wiki?curid=1052632 |
10527390 | UTP—glucose-1-phosphate uridylyltransferase | Class of enzymes
UTP—glucose-1-phosphate uridylyltransferase also known as glucose-1-phosphate uridylyltransferase (or UDP–glucose pyrophosphorylase) is an enzyme involved in carbohydrate metabolism. It synthesizes UDP-glucose from glucose-1-phosphate and UTP; i.e.,
glucose-1-phosphate + UTP formula_0 UDP-glucose + pyrophosphate
UTP—glucose-1-phosphate uridylyltransferase is an enzyme found in all three domains (bacteria, eukarya, and archaea) as it is a key player in glycogenesis and cell wall synthesis. Its role in sugar metabolism has been studied extensively in plants in order to understand plant growth and increase agricultural production. Recently, human UTP—glucose-1-phosphate uridylyltransferase has been studied and crystallized, revealing a different type of regulation than other organisms previously studied. Its significance is derived from the many uses of UDP-glucose including galactose metabolism, glycogen synthesis, glycoprotein synthesis, and glycolipid synthesis.
Structure.
The structure of UTP—glucose-1-phosphate uridylyltransferase is significantly different between prokaryotes and eukaryotes, but within eukaryotes, the primary, secondary, and tertiary structures of the enzyme are quite conserved. In many species, UTP—glucose-1-phosphate uridylyltransferase is found as a homopolymer consisting of identical subunits in a symmetrical quaternary structure. The number of subunits varies across species: for instance, in Escherichia coli, the enzyme is found as a tetramer, whereas in Burkholderia xenovorans, the enzyme is dimeric. In humans and in yeast, the enzyme is active as an octamer consisting of two tetramers stacked onto one another with conserved hydrophobic residues at the interfaces between the subunits. In contrast, the enzyme in plants has conserved charged residues forming the interface between subunits.
In humans, each enzyme subunit contains several residues (L113, N251, and N328) that are highly conserved in eukaryotes. A Rossman fold motif participates in binding of the UTP nucleotide and a sugar-binding domain (residues T286–G293) coordinates with the glucose ring. A missense mutation (G115D) in the region of the enzyme containing the active site (which is conserved in eukaryotes) causes a dramatic decrease in enzymatic activity in vitro.
Examples.
Human genes encoding proteins with UTP—glucose-1-phosphate uridylyltransferase activity give rise to two protein isoforms, with molecular weights of 56.9 and 55.7 kDa, respectively.
Function.
UTP—glucose-1-phosphate uridylyltransferase is ubiquitous in nature due to its important role in the generation of UDP-glucose, a central compound in carbohydrate metabolism. In plant leaves, UTP—glucose-1-phosphate uridylyltransferase is a key part of the sucrose biosynthesis pathway, supplying Uridine diphosphate glucose to Sucrose-phosphate synthase which converts UDP-glucose and D-fructose 6-phosphate into sucrose-6-phosphate. It may also be partially responsible for the breakdown of sucrose in other tissues using UDP-glucose.
In higher animals, the enzyme is highly active in tissues involved in glycogenesis, including the liver and the muscles. An exception is the brain, which has high levels of glycogen but low specific activity of UTP—glucose-1-phosphate uridylyltransferase. In animal cells, UTP—glucose-1-phosphate uridylyltransferase is found predominantly in the cytoplasm.
UTP—glucose-1-phosphate uridylyltransferase is also required for galactose metabolism in animals and microorganisms. In galactose metabolism, the enzyme galactose 1-phosphate uridylyltransferase transfers a phosphate from UDP-glucose to galactose 1-phosphate to produce UDP-galactose, which is then converted to UDP-glucose. Bacteria with defective UTP—glucose-1-phosphate uridylyltransferase are unable to incorporate galactose into their cell walls.
Mechanism.
In this enzyme's primary reaction, the phosphate group on glucose-1-phosphate replaces the phosphoanhydride bond on UTP. This reaction is readily reversible and its Gibbs free energy change is close to zero. However, under typical cellular conditions, inorganic pyrophosphatase quickly hydrolyzes the pyrophosphate product and drives the reaction forward by Le Chatelier's principle.
UTP—glucose-1-phosphate uridylyltransferase uses an ordered sequential Bi Bi mechanism for both the forward and reverse reactions. In yeast, the enzyme follows simple Michaelis-Menten kinetics and does not exhibit cooperativity between the subunits in the octamer.
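As a purely illustrative aside, the absence of cooperativity means the initial rate follows the hyperbolic Michaelis–Menten form rather than a sigmoidal one. The short Python sketch below compares the two curve shapes; the kinetic constants and the Hill coefficient are placeholder values chosen for the example, not measured parameters of this enzyme.

```python
# Placeholder comparison of a non-cooperative (Michaelis-Menten) rate law with
# a cooperative (Hill) rate law.  All constants are arbitrary example values.
def michaelis_menten(s, vmax=1.0, km=0.5):
    return vmax * s / (km + s)

def hill(s, vmax=1.0, k_half=0.5, n=3):
    return vmax * s ** n / (k_half ** n + s ** n)

for s in (0.1, 0.25, 0.5, 1.0, 2.0):
    print(f"[S] = {s:4}: MM rate = {michaelis_menten(s):.3f}, Hill (n=3) rate = {hill(s):.3f}")
```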
Similar to other sugar nucleotidyltransferases, UTP—glucose-1-phosphate uridylyltransferase activity requires two divalent cations to stabilize the binding of negatively charged phosphate groups. Magnesium typically serves in this role, but other ions such as manganese(II), cobalt(II), and nickel(II) can also substitute with a ~75% reduction in the optimal activity. X-ray crystallography experiments have shown that one Mg2+ ion is coordinated by a phosphoryl oxygen on glucose 1-phosphate and by an α-phosphoryl oxygen on UTP. In addition to stabilizing the negatively charged phosphates, Mg2+ is thought to orient the glucose 1-phosphate for nucleophilic attack of the α-phosphorus of UTP.
Regulation.
Although functionally similar across species, UDP-glucose pyrophosphorylase has different structures and regulation mechanisms in different organisms.
Microorganisms.
In yeast, UTP—glucose-1-phosphate uridylyltransferase is regulated by phosphorylation by PAS kinase. This phosphorylation is reversible and controls the partition of sugar flux towards glycogen and cell wall synthesis.
Plants.
UTP—glucose-1-phosphate uridylyltransferase in plants is regulated through oligomerization and possibly phosphorylation. In barley, it has been shown that UDP-glucose pyrophosphorylase is only active in monomeric form but readily forms oligomers, suggesting that oligomerization may be a form of regulation of the enzyme. In rice, cold stress decreases N-glycosylation of the enzyme, which is thought to alter the enzyme's activity in response to cold.
In Arabidopsis, there are two isozymes of UTP—glucose-1-phosphate uridylyltransferase: UGP1 and UGP2. These two isozymes have almost identical activities and differ in only 32 amino acids, all of which are located on the outer surface of the protein away from the active site. These minor differences may allow for differential allosteric regulation of isozyme activity. UGP1 and UGP2 are differentially expressed in different parts of the plant. UGP1 is widely expressed in the majority of tissues while UGP2 is expressed primarily in flowers, suggesting that UGP1 is the major form of the enzyme and UGP2 serves an auxiliary function. Indeed, UGP2 expression is increased in response to stressors such as phosphate deficiency, indicating that UGP2 probably functions as a backup to UGP1 when the plant is under environmental stress.
Animals.
The control of UTP—glucose-1-phosphate uridylyltransferase activity is primarily achieved by genetic means (i.e. regulation of transcription and translation). Similar to most enzymes, UTP—glucose-1-phosphate uridylyltransferase is inhibited by its product, UDP-glucose. However, the enzyme is not subject to significant allosteric regulation, which is logical given the widespread use of UDP-glucose in a variety of metabolic pathways.
Humans.
In humans, UDP-glucose pyrophosphorylase is active as an octamer. The enzyme's activity is also modified by O-glycosylation. Similar to other mammalian species, there are two different isoforms in humans that are produced by alternative splicing of the gene. The isoforms differ by only 11 amino acids at the N-terminus, and no significant differences in their functional activity have been identified.
Disease relevance.
In humans, galactosemia is a disorder that affects the development of newborns and children as they cannot metabolize the sugar galactose properly. It is speculated that overexpression of UDP-glucose pyrophosphorylase may relieve symptoms in humans with galactosemia.
In cancer cells, which typically have high rates of glycolysis and decreased glycogen content, the activity of UTP—glucose-1-phosphate uridylyltransferase is often downregulated by up to 50-60% compared to normal cells. The abnormally low activity of UTP—glucose-1-phosphate uridylyltransferase is due to decreased levels of the enzyme and the downregulation of other enzymes in the glycogenic pathway including glycogen synthase and phosphoglucomutase.
UTP—glucose-1-phosphate uridylyltransferase has been found to be an important virulence factor in a variety of pathogens including bacteria and protozoa. For example, the enzyme has been found to be required for the biosynthesis of capsular polysaccharide, an important virulence factor of "Streptococcus pneumoniae", a bacterial cause of pneumonia, bronchitis, and other breathing issues. As a result, the enzyme has attracted attention as a potential target for pharmaceuticals. However, in order to achieve specificity, the drugs must be designed to specifically target allosteric sites on the surface of the protein because the active site is highly conserved across species.
UDP-glucose pyrophosphorylase ("UGP2") was recently found to be implicated in novel neurodevelopmental disorder in humans, known as also referred to as Barakat-Perenthaler syndrome. This disorder was first described in 22 individuals from 15 families, presenting with a severe epileptic encephalopathy, neurodevelopmental delay with absence of virtually all developmental milestones, intractable seizures, progressive microcephaly, visual disturbance and similar minor dysmorphisms. Barakat and colleagues identified a recurrent homozygous mutation in all affected individuals (chr2:64083454A > G), which mutates the translational start site of the shorter protein isoform of UGP2. Therefore, the shorter protein isoform can no longer be produced in patients harboring the homozygous mutation. Functional studies from the same group showed that the short protein isoform is normally predominantly expressed in human brain. Therefore, the recurrent mutation leads to a tissue-specific absence of UGP2 in brain, which leads to altered glycogen metabolism, upregulated unfolded protein response and premature neuronal differentiation. Other bi-allelic loss-of-function mutations in "UGP2" are likely lethal, as human embryonic stem cells depleted of both short and long isoforms of UGP2 fail to differentiate in cardiomyocytes and blood cells. Hence, the identification of this new disease also shows that isoform-specific start-loss mutations causing expression loss of a tissue-relevant isoform of an essential protein can cause a genetic disease, even when an organism-wide protein absence is incompatible with life. A therapy for Barakat-Perenthaler syndrome does currently not exist.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=10527390 |
10529910 | Phosphopentose epimerase | Class of enzymes
Phosphopentose epimerase (also known as ribulose-phosphate 3-epimerase and ribulose 5-phosphate 3-epimerase, EC 5.1.3.1) encoded in humans by the "RPE" gene is a metalloprotein that catalyzes the interconversion between D-ribulose 5-phosphate and D-xylulose 5-phosphate.
D-ribulose 5-phosphate formula_0 D-xylulose 5-phosphate
This reversible conversion is required for carbon fixation in plants – through the Calvin cycle – and for the nonoxidative phase of the pentose phosphate pathway. This enzyme has also been implicated in additional pentose and glucuronate interconversions.
In "Cupriavidus metallidurans" two copies of the gene coding for PPE are known, one is chromosomally encoded P40117, the other one is on a plasmid Q04539. PPE has been found in a wide range of bacteria, archaebacteria, fungi and plants. All the proteins have from 209 to 241 amino acid residues. The enzyme has a TIM barrel structure.
Nomenclature.
The systematic name of this enzyme class is D-ribulose-5-phosphate 3-epimerase. Other names in common use include
<templatestyles src="Div col/styles.css"/>
This enzyme participates in 3 metabolic pathways: pentose phosphate pathway, pentose and glucuronate interconversions, and carbon fixation.
The human protein containing this domain is the RPE (gene).
Family.
Phosphopentose epimerase belongs to two protein families of increasing hierarchy. This enzyme belongs to the isomerase family, specifically those racemases and epimerases which act on carbohydrates and their derivatives. In addition, the Structural Classification of Proteins database has defined the “ribulose phosphate binding” superfamily for which this epimerase is a member. Other proteins included in this superfamily are 5‘-monophosphate decarboxylase (OMPDC), and 3-keto-l-gulonate 6-phosphate decarboxylase (KGPDC).
Structure.
As of late 2007, 4 structures have been solved for this class of enzymes, with PDB accession codes 1H1Y, 1H1Z, 1RPX, and 1TQJ.
Overall.
Crystallographic studies have helped elucidate the apoenzyme structure of phosphopentose epimerase. Results of these studies have shown that this enzyme exists as a homodimer in solution. Furthermore, Phosphopentose epimerase folds into a (β/α)8 triosephosphate isomerase (TIM) barrel that includes loops. The core barrel is composed of 8 parallel strands that make up the central beta sheet, with helices located in between consecutive strands. The loops in this structure have been known to regulate substrate specificities. Specifically, the loop that connects helix α6 with strand β6 caps the active site upon binding of the substrate.
As previously mentioned, Phosphopentose epimerase is a metalloenzyme. It requires a cofactor for functionality and binds one divalent metal cation per subunit. This enzyme has been shown to use Zn2+ predominantly for catalysis, along with Co2+ and Mn2+. However, human phosphopentose epimerase – which is encoded by the RPE gene - differs in that it binds Fe2+ predominantly in catalysis. Fe2+ is octahedrally coordinated and stabilizes the 2,3-enediolate reaction intermediate observed in the figure.
Active site.
The β6/α6 loop region interacts with the substrate and regulates access to the active site. Phe147, Gly148, and Ala149 of this region cap the active site once binding has occurred. In addition, the Fe2+ ion is coordinated to His35, His70, Asp37, Asp175, and oxygens O2 and O3 of the substrate. The binding of substrate atoms to the iron cation helps stabilize the complex during catalysis. Mutagenesis studies have also indicated that two aspartic acids are located within the active site and help mediate catalysis through a 1,1-proton transfer reaction. The aspartic acids are the acid/base catalysts. Lastly, once the ligand is attached to the active site, a series of methionines (Met39, Met72, and Met141) restrict further movement through constriction.
Mechanism.
Phosphopentose epimerase utilizes an acid/base type of catalytic mechanism. The reaction proceeds in such a way that trans-2,3-enediol phosphate is the intermediate. The two aspartic acids mentioned above act as proton donors and acceptors. Asp37 and Asp175 are both hydrogen bonded to the iron cation in the active site. When Asp37 is deprotonated, it attacks a proton on the third carbon of D-ribulose 5-phosphate, which forms the intermediate. In a concerted step, as Asp37 grabs a proton, the carbonyl bond on the substrate grabs a second proton from Asp175 to form a hydroxyl group. The iron complex helps stabilize any additional charges. It is C3 of D-ribulose 5-phosphate which undergoes this epimerization, forming D-xylulose 5-phosphate. The mechanism is clearly demonstrated in the figure.
Function.
Calvin cycle.
Electron microscopy experiments in plants have shown that phosphopentose epimerase localizes to the thylakoid membrane of chloroplasts. This epimerase participates in the third phase of the Calvin cycle, which involves the regeneration of ribulose 1,5-bisphosphate. RuBP is the acceptor of the carbon dioxide (CO2) in the first step of the pathway, which suggests that phosphopentose epimerase regulates flux through the Calvin cycle. Without the regeneration of ribulose 1,5-bisphosphate, the cycle will be unable to continue. Therefore, xylulose 5-phosphate is reversibly converted into ribulose 5-phosphate by this epimerase. Subsequently, phosphoribulose kinase converts ribulose 5-phosphate into ribulose 1,5-bisphosphate.
Pentose phosphate pathway.
The reactions of the pentose phosphate pathway (PPP) take place in the cytoplasm. Phosphopentose epimerase specifically affects the nonoxidative portion of the pathway, which involves the production of various sugars and precursors. This enzyme converts ribulose 5-phosphate into the appropriate epimer for the transketolase reaction, xylulose 5-phosphate. Therefore, the reaction that occurs in the pentose phosphate pathway is exactly the reverse of the reaction which occurs in the Calvin cycle. The mechanism remains the same and involves the formation of an enediolate intermediate.
Due to its involvement in this pathway, phosphopentose epimerase is an important enzyme for the cellular response to oxidative stress. The generation of NADPH by the pentose phosphate pathway helps protect cells against reactive oxygen species. NADPH is able to reduce glutathione, which detoxifies the body by producing water from hydrogen peroxide (H2O2). Therefore, not only does phosphopentose epimerase alter flux through the PPP, but it also prevents buildup of peroxides.
Evolution.
The structures of many phosphopentose epimerase analogs have been discovered through crystallographic studies. Due to its role in the Calvin cycle and the pentose phosphate pathway, the overall structure is conserved. When the sequences of evolutionarily-distant organisms were compared, greater than 50% similarity was observed. However, amino acids positioned at the dimer interface – which are involved in many intermolecular interactions – are not necessarily conserved. It is important to note that the members of the “ribulose phosphate binding” superfamily resulted from divergent evolution from a (β/α)8 - barrel ancestor.
Drug targeting and malaria.
The protozoan organism "Plasmodium falciparum" is a major causative agent of malaria. Phosphopentose epimerase has been implicated in the shikimate pathway, an essential pathway for the propagation of malaria. As the enzyme converts ribulose 5-phosphate into xylulose 5-phosphate, the latter is further metabolized into erythrose 4-phosphate. The shikimate pathway then converts erythrose 4-phosphate into chorismate. It is phosphopentose epimerase which allows "Plasmodium falciparum" to use erythrose 4-phosphate as a substrate. Due to this enzyme's involvement in the shikimate pathway, phosphopentose epimerase is a potential drug target for developing antimalarials.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=10529910 |
10530074 | Monodromy theorem | Mathematical Sentence
In complex analysis, the monodromy theorem is an important result about analytic continuation of a complex-analytic function to a larger set. The idea is that one can extend a complex-analytic function (from here on called simply "analytic function") along curves starting in the original domain of the function and ending in the larger set. A potential problem of this analytic continuation along a curve strategy is that there are usually many curves which end up at the same point in the larger set. The monodromy theorem gives sufficient conditions for analytic continuation to give the same value at a given point regardless of the curve used to get there, so that the resulting extended analytic function is well-defined and single-valued.
Before stating this theorem it is necessary to define analytic continuation along a curve and study its properties.
Analytic continuation along a curve.
The definition of analytic continuation along a curve is a bit technical, but the basic idea is that one starts with an analytic function defined around a point, and one extends that function along a curve via analytic functions defined on small overlapping disks covering that curve.
Formally, consider a curve (a continuous function) formula_1 Let formula_2 be an analytic function defined on an open disk formula_3 centered at formula_4 An "analytic continuation" of the pair formula_5 along formula_6 is a collection of pairs formula_7 for formula_8 such that
formula_9 and formula_10
For every formula_11 is an open disk centered at formula_12 and formula_13 is an analytic function.
For every formula_14 there exists formula_15 such that for all formula_16 with formula_17 one has formula_18 (and hence formula_0 and formula_19 have a non-empty intersection) and the functions formula_20 and formula_21 coincide on the intersection formula_22
Properties of analytic continuation along a curve.
Analytic continuation along a curve is essentially unique, in the sense that given two analytic continuations formula_7 and formula_23 formula_24 of formula_5 along formula_25 the functions formula_26 and formula_27 coincide on formula_28 Informally, this says that any two analytic continuations of formula_5 along formula_6 will end up with the same values in a neighborhood of formula_29
If the curve formula_6 is closed (that is, formula_30), one need not have formula_31 equal formula_26 in a neighborhood of formula_4 For example, if one starts at a point formula_32 with formula_33 and the complex logarithm defined in a neighborhood of this point, and one lets formula_6 be the circle of radius formula_34 centered at the origin (traveled counterclockwise from formula_32), then by doing an analytic continuation along this curve one will end up with a value of the logarithm at formula_32 which is formula_35 plus the original value (see the second illustration on the right).
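This winding behaviour can also be observed numerically. The following Python sketch is a rough illustration (the step-by-step branch tracking is a crude stand-in for continuation along overlapping disks, and the radius value is arbitrary): after one counterclockwise loop around the origin, the tracked value of the logarithm at the starting point exceeds its original value by formula_35.

```python
# Rough numerical illustration: tracking a branch of log z around the circle
# |z| = a and observing the 2*pi*i shift after a full counterclockwise loop.
import cmath, math

a, steps = 2.0, 1000                      # arbitrary radius and step count
value = cmath.log(a)                      # starting value at the point (a, 0)
for k in range(1, steps + 1):
    z = a * cmath.exp(2j * math.pi * k / steps)
    w = cmath.log(z)                      # principal value; may jump by 2*pi*i
    n = round((value - w).imag / (2 * math.pi))
    value = w + 2j * math.pi * n          # choose the branch closest to the previous value
print(value - cmath.log(a))               # approximately 2*pi*i = 6.283...j
```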
Monodromy theorem.
As noted earlier, two analytic continuations along the same curve yield the same result at the curve's endpoint. However, given two different curves branching out from the same point around which an analytic function is defined, with the curves reconnecting at the end, it is not true in general that the analytic continuations of that function along the two curves will yield the same value at their common endpoint.
Indeed, one can consider, as in the previous section, the complex logarithm defined in a neighborhood of a point formula_32 and the circle centered at the origin and radius formula_36 Then, it is possible to travel from formula_32 to formula_37 in two ways, counterclockwise, on the upper half-plane arc of this circle, and clockwise, on the lower half-plane arc. The values of the logarithm at formula_37 obtained by analytic continuation along these two arcs will differ by formula_38
If, however, one can continuously deform one of the curves into another while keeping the starting points and ending points fixed, and analytic continuation is possible on each of the intermediate curves, then the analytic continuations along the two curves will yield the same results at their common endpoint. This is called the monodromy theorem and its statement is made precise below.
Let formula_3 be an open disk in the complex plane centered at a point formula_39 and formula_40 be a complex-analytic function. Let formula_41 be another point in the complex plane. If there exists a family of curves formula_42 with formula_43 such that formula_44 and formula_45 for all formula_46 the function formula_47 is continuous, and for each formula_43 it is possible to do an analytic continuation of formula_2 along formula_48 then the analytic continuations of formula_2 along formula_49 and formula_50 will yield the same values at formula_51
The monodromy theorem makes it possible to extend an analytic function to a larger set via curves connecting a point in the original domain of the function to points in the larger set. The theorem below, which makes this precise, is also called the monodromy theorem.
Let formula_3 be an open disk in the complex plane centered at a point formula_39 and formula_52 be a complex-analytic function. If formula_53 is an open simply-connected set containing formula_54 and it is possible to perform an analytic continuation of formula_2 on any curve contained in formula_53 which starts at formula_55 then formula_2 admits a "direct analytic continuation" to formula_56 meaning that there exists a complex-analytic function formula_57 whose restriction to formula_3 is formula_58 | [
{
"math_id": 0,
"text": "U_t"
},
{
"math_id": 1,
"text": "\\gamma:[0, 1]\\to \\Complex."
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "U"
},
{
"math_id": 4,
"text": "\\gamma(0)."
},
{
"math_id": 5,
"text": "(f, U)"
},
{
"math_id": 6,
"text": "\\gamma"
},
{
"math_id": 7,
"text": "(f_t, U_t)"
},
{
"math_id": 8,
"text": "0\\le t\\le 1"
},
{
"math_id": 9,
"text": "f_0=f"
},
{
"math_id": 10,
"text": "U_0=U."
},
{
"math_id": 11,
"text": "t\\in [0, 1], U_t"
},
{
"math_id": 12,
"text": "\\gamma(t)"
},
{
"math_id": 13,
"text": "f_t:U_t\\to\\Complex"
},
{
"math_id": 14,
"text": "t\\in [0, 1]"
},
{
"math_id": 15,
"text": "\\varepsilon >0"
},
{
"math_id": 16,
"text": "t'\\in [0, 1]"
},
{
"math_id": 17,
"text": "|t-t'|<\\varepsilon"
},
{
"math_id": 18,
"text": "\\gamma(t')\\in U_t"
},
{
"math_id": 19,
"text": "U_{t'}"
},
{
"math_id": 20,
"text": "f_t"
},
{
"math_id": 21,
"text": "f_{t'}"
},
{
"math_id": 22,
"text": "U_t\\cap U_{t'}."
},
{
"math_id": 23,
"text": "(g_t, V_t)"
},
{
"math_id": 24,
"text": "(0\\le t\\le 1)"
},
{
"math_id": 25,
"text": "\\gamma,"
},
{
"math_id": 26,
"text": "f_1"
},
{
"math_id": 27,
"text": "g_1"
},
{
"math_id": 28,
"text": "U_1\\cap V_1."
},
{
"math_id": 29,
"text": "\\gamma(1)."
},
{
"math_id": 30,
"text": "\\gamma(0)=\\gamma(1)"
},
{
"math_id": 31,
"text": "f_0"
},
{
"math_id": 32,
"text": "(a, 0)"
},
{
"math_id": 33,
"text": "a>0"
},
{
"math_id": 34,
"text": "a"
},
{
"math_id": 35,
"text": "2\\pi i"
},
{
"math_id": 36,
"text": "a."
},
{
"math_id": 37,
"text": "(-a, 0)"
},
{
"math_id": 38,
"text": "2\\pi i."
},
{
"math_id": 39,
"text": "P"
},
{
"math_id": 40,
"text": "f:U\\to \\Complex"
},
{
"math_id": 41,
"text": "Q"
},
{
"math_id": 42,
"text": "\\gamma_s:[0, 1]\\to \\Complex"
},
{
"math_id": 43,
"text": "s\\in [0, 1]"
},
{
"math_id": 44,
"text": "\\gamma_s(0)=P"
},
{
"math_id": 45,
"text": "\\gamma_s(1)=Q"
},
{
"math_id": 46,
"text": "s\\in [0, 1],"
},
{
"math_id": 47,
"text": "(s, t)\\in [0, 1]\\times[0, 1]\\to \\gamma_s(t)\\in \\mathbb C"
},
{
"math_id": 48,
"text": "\\gamma_s,"
},
{
"math_id": 49,
"text": "\\gamma_0"
},
{
"math_id": 50,
"text": "\\gamma_1"
},
{
"math_id": 51,
"text": "Q."
},
{
"math_id": 52,
"text": "f:U\\to\\Complex"
},
{
"math_id": 53,
"text": "W"
},
{
"math_id": 54,
"text": "U,"
},
{
"math_id": 55,
"text": "P,"
},
{
"math_id": 56,
"text": "W,"
},
{
"math_id": 57,
"text": "g:W\\to\\Complex"
},
{
"math_id": 58,
"text": "f."
}
] | https://en.wikipedia.org/wiki?curid=10530074 |
10531718 | Graph cuts in computer vision | As applied in the field of computer vision, graph cut optimization can be employed to efficiently solve a wide variety of low-level computer vision problems ("early vision"), such as image smoothing, the stereo correspondence problem, image segmentation, object co-segmentation, and many other computer vision problems that can be formulated in terms of energy minimization. Many of these energy minimization problems can be approximated by solving a maximum flow problem in a graph (and thus, by the max-flow min-cut theorem, define a minimal cut of the graph). Under most formulations of such problems in computer vision, the minimum energy solution corresponds to the maximum a posteriori estimate of a solution. Although many computer vision algorithms involve cutting a graph (e.g., normalized cuts), the term "graph cuts" is applied specifically to those models which employ a max-flow/min-cut optimization (other graph cutting algorithms may be considered as graph partitioning algorithms).
"Binary" problems (such as denoising a binary image) can be solved exactly using this approach; problems where pixels can be labeled with more than two different labels (such as stereo correspondence, or denoising of a grayscale image) cannot be solved exactly, but solutions produced are usually near the global optimum.
History.
The theory of graph cuts used as an optimization method was first applied in computer vision in the seminal paper by Greig, Porteous and Seheult of Durham University. Allan Seheult and Bruce Porteous were members of Durham's lauded statistics group of the time, led by Julian Besag and Peter Green, with the optimisation expert Margaret Greig notable as the first ever female member of staff of the Durham Mathematical Sciences Department.
In the Bayesian statistical context of smoothing noisy (or corrupted) images, they showed how the maximum a posteriori estimate of a binary image can be obtained "exactly" by maximizing the flow through an associated image network, involving the introduction of a "source" and "sink". The problem was therefore shown to be efficiently solvable. Prior to this result, "approximate" techniques such as simulated annealing (as proposed by the Geman brothers), or iterated conditional modes (a type of greedy algorithm suggested by Julian Besag) were used to solve such image smoothing problems.
Although the general formula_0-colour problem remains unsolved for formula_1 the approach of Greig, Porteous and Seheult has turned out to have wide applicability in general computer vision problems. Greig, Porteous and Seheult's approaches are often applied iteratively to a sequence of binary problems, usually yielding near optimal solutions.
In 2011, C. Couprie "et al." proposed a general image segmentation framework, called the "Power Watershed", that minimized a real-valued indicator function from [0,1] over a graph, constrained by user seeds (or unary terms) set to 0 or 1, in which the minimization of the indicator function over the graph is optimized with respect to an exponent formula_2. When formula_3, the Power Watershed is optimized by graph cuts, when formula_4 the Power Watershed is optimized by shortest paths, formula_5 is optimized by the random walker algorithm and formula_6 is optimized by the watershed algorithm. In this way, the Power Watershed may be viewed as a generalization of graph cuts that provides a straightforward connection with other energy optimization segmentation/clustering algorithms.
In iterated graph cut approaches, the color model is first re-estimated from the current segmentation and the graph cut is then recomputed with the updated model; these two steps are repeated recursively until convergence. The energy-based formulation corresponds to a probabilistic model in which the likelihood of the image given a segmentation is
formula_13
Binary segmentation of images.
Energy function.
The image is represented as formula_7 and the output segmentation as formula_8 (a soft segmentation) or formula_9 (a hard segmentation). The segmentation is estimated as a global minimum of an energy, formula_12 where the energy formula_10 decomposes as formula_11, i.e. it is composed of two different models (formula_15 and formula_16):
Likelihood / Color model / Regional term.
formula_15 — unary term describing the likelihood of each color.
Prior / Coherence model / Boundary term.
formula_16 — binary term describing the coherence between neighborhood pixels.
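To make the regional and boundary terms concrete, the following Python sketch is an illustrative toy example, not the Boykov-Kolmogorov implementation; it assumes the NetworkX library and a hand-picked one-dimensional "image" and coherence weight. It builds the usual s-t graph with unary terminal edges and pairwise Potts edges, and reads a binary labeling off the minimum cut:
<syntaxhighlight lang="python">
import networkx as nx

# Toy 1-D "image": values near 1 suggest foreground, values near 0 background.
obs = [0.9, 0.8, 0.2, 0.85, 0.95, 0.1, 0.05, 0.15]
lam = 0.6   # coherence (smoothness) weight

G = nx.DiGraph()
for p, y in enumerate(obs):
    # Terminal edges: cutting s->p is paid when p receives label 1, so its
    # capacity is the data cost of label 1; cutting p->t is paid for label 0.
    G.add_edge("s", p, capacity=1.0 - y)   # cost of labeling p as foreground (1)
    G.add_edge(p, "t", capacity=y)         # cost of labeling p as background (0)
for p in range(len(obs) - 1):
    # Pairwise (Potts) edges: paid only when neighbors receive different labels.
    G.add_edge(p, p + 1, capacity=lam)
    G.add_edge(p + 1, p, capacity=lam)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
labels = [0 if p in source_side else 1 for p in range(len(obs))]
print(cut_value, labels)   # the isolated low value at pixel 2 is smoothed away
</syntaxhighlight>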
Criticism.
Graph cuts methods have become popular alternatives to the level set-based approaches for optimizing the location of a contour (see the references for an extensive comparison). However, graph cut approaches have been criticized in the literature for several issues.
Algorithm.
Implementation (exact).
The Boykov-Kolmogorov algorithm is an efficient way to compute the max-flow for computer vision-related graphs.
Implementation (approximation).
The Sim Cut algorithm approximates the minimum graph cut. The algorithm implements a solution by simulation of an electrical network. This is the approach suggested by Cederbaum's maximum flow theorem. Acceleration of the algorithm is possible through parallel computing.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "k > 2,"
},
{
"math_id": 2,
"text": "p"
},
{
"math_id": 3,
"text": "p=1"
},
{
"math_id": 4,
"text": "p=0"
},
{
"math_id": 5,
"text": "p=2"
},
{
"math_id": 6,
"text": "p=\\infty"
},
{
"math_id": 7,
"text": "x \\in \\{R,G,B\\}^N"
},
{
"math_id": 8,
"text": "S \\in R^N"
},
{
"math_id": 9,
"text": "S \\in \\{0 \\text{ for background}, 1 \\text{ for foreground/object to be detected}\\}^N"
},
{
"math_id": 10,
"text": "E(x, S, C, \\lambda)"
},
{
"math_id": 11,
"text": "E(x,S,C,\\lambda)=E_{\\rm color} + E_{\\rm coherence}"
},
{
"math_id": 12,
"text": "{\\arg\\min}_S E(x, S, C, \\lambda)"
},
{
"math_id": 13,
"text": "\\Pr(x\\mid S) = K^{-E}"
},
{
"math_id": 14,
"text": "E"
},
{
"math_id": 15,
"text": "E_{\\rm color}"
},
{
"math_id": 16,
"text": "E_{\\rm coherence}"
},
{
"math_id": 17,
"text": "24n+14m"
},
{
"math_id": 18,
"text": "n"
},
{
"math_id": 19,
"text": "m"
}
] | https://en.wikipedia.org/wiki?curid=10531718 |
1053303 | Statistical learning theory | Framework for machine learning
<templatestyles src="Machine learning/styles.css"/>
Statistical learning theory is a framework for machine learning drawing from the fields of statistics and functional analysis. Statistical learning theory deals with the statistical inference problem of finding a predictive function based on data. Statistical learning theory has led to successful applications in fields such as computer vision, speech recognition, and bioinformatics.
Introduction.
The goals of learning are understanding and prediction. Learning falls into many categories, including supervised learning, unsupervised learning, online learning, and reinforcement learning. From the perspective of statistical learning theory, supervised learning is best understood. Supervised learning involves learning from a training set of data. Every point in the training set is an input–output pair, where the input maps to an output. The learning problem consists of inferring the function that maps between the input and the output, such that the learned function can be used to predict the output from future input.
Depending on the type of output, supervised learning problems are either problems of regression or problems of classification. If the output takes a continuous range of values, it is a regression problem. Using Ohm's law as an example, a regression could be performed with voltage as input and current as an output. The regression would find the functional relationship between voltage and current to be formula_0, such that
formula_1
Classification problems are those for which the output will be an element from a discrete set of labels. Classification is very common for machine learning applications. In facial recognition, for instance, a picture of a person's face would be the input, and the output label would be that person's name. The input would be represented by a large multidimensional vector whose elements represent pixels in the picture.
After learning a function based on the training set data, that function is validated on a test set of data, data that did not appear in the training set.
Formal description.
Take formula_2 to be the vector space of all possible inputs, and formula_3 to be the vector space of all possible outputs. Statistical learning theory takes the perspective that there is some unknown probability distribution over the product space formula_4, i.e. there exists some unknown formula_5. The training set is made up of formula_6 samples from this probability distribution, and is notated
formula_7
Every formula_8 is an input vector from the training data, and formula_9 is the output that corresponds to it.
In this formalism, the inference problem consists of finding a function formula_10 such that formula_11. Let formula_12 be a space of functions formula_10 called the hypothesis space. The hypothesis space is the space of functions the algorithm will search through. Let formula_13 be the loss function, a metric for the difference between the predicted value formula_14 and the actual value formula_15. The expected risk is defined to be
formula_16
The target function, the best possible function formula_17 that can be chosen, is given by the formula_17 that satisfies
formula_18
Because the probability distribution formula_19 is unknown, a proxy measure for the expected risk must be used. This measure is based on the training set, a sample from this unknown probability distribution. It is called the empirical risk
formula_20
A learning algorithm that chooses the function formula_21 that minimizes the empirical risk is called empirical risk minimization.
Loss functions.
The choice of loss function is a determining factor on the function formula_21 that will be chosen by the learning algorithm. The loss function also affects the convergence rate for an algorithm. It is important for the loss function to be convex.
Different loss functions are used depending on whether the problem is one of regression or one of classification.
Regression.
The most common loss function for regression is the square loss function (also known as the L2-norm). This familiar loss function is used in Ordinary Least Squares regression. The form is:
formula_22
The absolute value loss (also known as the L1-norm) is also sometimes used:
formula_23
Classification.
In some sense the 0-1 indicator function is the most natural loss function for classification. It takes the value 0 if the predicted output is the same as the actual output, and it takes the value 1 if the predicted output is different from the actual output. For binary classification with formula_24, this is:
formula_25
where formula_26 is the Heaviside step function.
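As a small illustration of empirical risk minimization with the 0-1 loss, the following Python sketch picks the classifier with the lowest empirical risk from a finite hypothesis space; the synthetic data, the grid of thresholds, and the choice of threshold classifiers are all assumptions of this example:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D classification data with labels in {-1, +1}.
x = rng.uniform(-1.0, 1.0, size=200)
y = np.where(x + 0.1 * rng.normal(size=200) > 0.2, 1, -1)

# Hypothesis space: threshold classifiers f_b(x) = sign(x - b) on a grid of b.
thresholds = np.linspace(-1.0, 1.0, 201)

def empirical_risk(b):
    predictions = np.where(x - b > 0, 1, -1)
    return np.mean(predictions != y)          # empirical 0-1 risk on the sample

risks = [empirical_risk(b) for b in thresholds]
b_hat = thresholds[int(np.argmin(risks))]     # empirical risk minimizer
print(b_hat, min(risks))                      # threshold close to 0.2
</syntaxhighlight>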
Regularization.
In machine learning problems, a major problem that arises is that of overfitting. Because learning is a prediction problem, the goal is not to find a function that most closely fits the (previously observed) data, but to find one that will most accurately predict output from future input. Empirical risk minimization runs this risk of overfitting: finding a function that matches the data exactly but does not predict future output well.
Overfitting is symptomatic of unstable solutions; a small perturbation in the training set data would cause a large variation in the learned function. It can be shown that if the stability for the solution can be guaranteed, generalization and consistency are guaranteed as well. Regularization can solve the overfitting problem and give the problem stability.
Regularization can be accomplished by restricting the hypothesis space formula_12. A common example would be restricting formula_12 to linear functions: this can be seen as a reduction to the standard problem of linear regression. formula_12 could also be restricted to polynomials of degree formula_27, exponentials, or bounded functions on L1. Restriction of the hypothesis space avoids overfitting because the form of the potential functions is limited, and so it does not allow for the choice of a function that gives empirical risk arbitrarily close to zero.
One example of regularization is Tikhonov regularization. This consists of minimizing
formula_28
where formula_29 is a fixed and positive parameter, the regularization parameter. Tikhonov regularization ensures existence, uniqueness, and stability of the solution.
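As a concrete special case, if the square loss is used, formula_12 is restricted to linear functions, and the penalty is taken to be the squared Euclidean norm of the weight vector (an assumption of this sketch; the general formulation uses a hypothesis-space norm), Tikhonov regularization reduces to ridge regression, which has a closed-form solution:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Noisy linear data: y_i = <w_true, x_i> + noise.
n, d = 50, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

gamma = 0.1   # regularization parameter

# Minimize (1/n) * sum_i (y_i - <w, x_i>)^2 + gamma * ||w||^2.
# Setting the gradient to zero gives (X^T X / n + gamma I) w = X^T y / n.
w_hat = np.linalg.solve(X.T @ X / n + gamma * np.eye(d), X.T @ y / n)
print(w_hat)   # close to w_true, shrunk slightly toward zero
</syntaxhighlight>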
Bounding empirical risk.
Consider a binary classifier formula_30. Since the 0-1 loss is bounded, Hoeffding's inequality yields a sub-Gaussian bound on the probability that the empirical risk deviates from the true risk:
formula_31
But generally, when we do empirical risk minimization, we are not given a classifier; we must choose it. Therefore, a more useful result is to bound the probability of the supremum of the difference over the whole class.
formula_32
where formula_33 is the shattering number and formula_6 is the number of samples in the dataset. The exponential term comes from Hoeffding's inequality, but taking the supremum over the whole class incurs an extra cost, namely the shattering number.
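The single-classifier Hoeffding bound can be illustrated by simulation. In the following sketch the classifier, the data distribution, and the sample size are made up for illustration; the observed deviation frequency is compared with the bound, which holds but is typically loose:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# Fixed classifier on X = [0, 1]: predict 1 when x > 0.5.
# True labels are 1 when x > 0.3, so the true risk is P(0.3 < X <= 0.5) = 0.2.
def f(x):
    return (x > 0.5).astype(int)

n, eps, trials = 200, 0.05, 20_000
true_risk = 0.2

exceed = 0
for _ in range(trials):
    x = rng.uniform(size=n)
    y = (x > 0.3).astype(int)
    empirical_risk = np.mean(f(x) != y)
    exceed += abs(empirical_risk - true_risk) >= eps

print(exceed / trials, 2 * np.exp(-2 * n * eps**2))   # observed frequency vs. bound
</syntaxhighlight>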
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R"
},
{
"math_id": 1,
"text": "V = I R"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "Y"
},
{
"math_id": 4,
"text": "Z = X \\times Y"
},
{
"math_id": 5,
"text": "p(z) = p(\\mathbf{x},y)"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": "S = \\{(\\mathbf{x}_1,y_1), \\dots ,(\\mathbf{x}_n,y_n)\\} = \\{\\mathbf{z}_1, \\dots ,\\mathbf{z}_n\\}"
},
{
"math_id": 8,
"text": "\\mathbf{x}_i"
},
{
"math_id": 9,
"text": "y_i"
},
{
"math_id": 10,
"text": "f: X \\to Y"
},
{
"math_id": 11,
"text": "f(\\mathbf{x}) \\sim y"
},
{
"math_id": 12,
"text": "\\mathcal{H}"
},
{
"math_id": 13,
"text": "V(f(\\mathbf{x}),y)"
},
{
"math_id": 14,
"text": "f(\\mathbf{x})"
},
{
"math_id": 15,
"text": "y"
},
{
"math_id": 16,
"text": "I[f] = \\int_{X \\times Y} V(f(\\mathbf{x}),y)\\, p(\\mathbf{x},y) \\,d\\mathbf{x} \\,dy"
},
{
"math_id": 17,
"text": "f"
},
{
"math_id": 18,
"text": "f = \\mathop{\\operatorname{argmin}}_{h \\in \\mathcal{H}} I[h]"
},
{
"math_id": 19,
"text": "p(\\mathbf{x},y)"
},
{
"math_id": 20,
"text": "I_S[f] = \\frac{1}{n} \\sum_{i=1}^n V( f(\\mathbf{x}_i),y_i)"
},
{
"math_id": 21,
"text": "f_S"
},
{
"math_id": 22,
"text": "V(f(\\mathbf{x}),y) = (y - f(\\mathbf{x}))^2"
},
{
"math_id": 23,
"text": "V(f(\\mathbf{x}),y) = |y - f(\\mathbf{x})|"
},
{
"math_id": 24,
"text": "Y = \\{-1, 1\\}"
},
{
"math_id": 25,
"text": "V(f(\\mathbf{x}),y) = \\theta(- y f(\\mathbf{x}))"
},
{
"math_id": 26,
"text": "\\theta"
},
{
"math_id": 27,
"text": "p"
},
{
"math_id": 28,
"text": "\\frac{1}{n} \\sum_{i=1}^n V(f(\\mathbf{x}_i),y_i) + \\gamma \\left\\|f\\right\\|_{\\mathcal{H}}^2"
},
{
"math_id": 29,
"text": "\\gamma"
},
{
"math_id": 30,
"text": "f: \\mathcal{X} \\to \\{0, 1\\}"
},
{
"math_id": 31,
"text": "\\mathbb{P} (|\\hat{R} (f) - R(f)| \\geq \\epsilon) \\leq 2e^{- 2 n \\epsilon^2}"
},
{
"math_id": 32,
"text": "\\mathbb{P} \\bigg( \\sup_{f \\in \\mathcal{F}} | \\hat{R} (f) - R(f) | \\geq \\epsilon \\bigg) \\leq 2 S(\\mathcal{F}, n) e^{-n \\epsilon^2 / 8} \\approx n^d e^{-n \\epsilon^2 / 8}"
},
{
"math_id": 33,
"text": "S(\\mathcal{F},n)"
}
] | https://en.wikipedia.org/wiki?curid=1053303 |
10533337 | Skewb Ultimate | The Skewb Ultimate, originally marketed as the Pyraminx Ball, is a twelve-sided puzzle derivation of the Skewb, produced by German toy-maker Uwe Mèffert. Most versions of this puzzle are sold with six different colors of stickers attached, with opposite sides of the puzzle having the same color; however, some early versions of the puzzle have a full set of 12 colors.
Description.
The Skewb Ultimate is made in the shape of a dodecahedron, like the Megaminx, but cut differently. Each face is cut into four parts, two equal and two unequal. Each cut is a deep cut: it bisects the puzzle. This results in eight smaller corner pieces and six larger "edge" pieces.
The object of the puzzle is to scramble the colors, and then restore them to the original configuration.
Solutions.
At first glance, the Skewb Ultimate appears to be much more difficult to solve than the other Skewb puzzles, because of its uneven cuts which cause the pieces to move in a way that may seem irregular or strange.
Mathematically speaking, however, the Skewb Ultimate has exactly the same structure as the Skewb Diamond. The solution for the Skewb Diamond can be used to solve this puzzle, by identifying the Diamond's face pieces with the Ultimate's corner pieces, and the Diamond's corner pieces with the Ultimate's edge pieces. The only additional trick here is that the Ultimate's corner pieces (equivalent to the Diamond's face pieces) are sensitive to orientation, and so may require an additional algorithm for orienting them after being correctly placed.
Similarly, the Skewb Ultimate is mathematically identical to the Skewb, by identifying corners with corners, and the Skewb's face centers with the Ultimate's edges. The solution of the Skewb can be used directly to solve the Skewb Ultimate. The only addition is that the edge pieces of the Skewb Ultimate are sensitive to orientation, and may require an additional algorithm to orient them after being placed correctly.
Number of combinations.
The Skewb Ultimate has six large "edge" pieces and eight smaller corner pieces. Only even permutations of the larger pieces are possible, giving 6!/2 possible arrangements. Each of them has two possible orientations, although the orientation of the last piece is determined by the orientations of the other pieces, hence giving a total of 2^5 possible orientations.
The positions of four of the smaller corner pieces depend on the positions of the other four corner pieces, and only even permutations of these positions are possible. Hence the number of arrangements of corner pieces is 4!/2. Each corner piece has three possible orientations, although the orientation of the last corner is determined by the orientations of the other corners, so the number of possible corner orientations is 3^7. However, the orientations of four of the corners plus the position of one of the other corners determines the positions of the remaining three, so the total number of possible combinations of corners is only formula_0.
Therefore, the number of possible combinations is:
formula_1 | [
{
"math_id": 0,
"text": "\\frac{4!\\times 3^6}{2}"
},
{
"math_id": 1,
"text": "\\frac{6!\\times 2^5\\times 4!\\times 3^6}{4} = 100,776,960."
}
] | https://en.wikipedia.org/wiki?curid=10533337 |
10533603 | Atkinson's theorem | In operator theory, Atkinson's theorem (named for Frederick Valentine Atkinson) gives a characterization of Fredholm operators.
The theorem.
Let "H" be a Hilbert space and "L"("H") the set of bounded operators on "H". The following is the classical definition of a Fredholm operator: an operator "T" ∈ "L"("H") is said to be a Fredholm operator if the kernel Ker("T") is finite-dimensional, Ker("T*") is finite-dimensional (where "T*" denotes the adjoint of "T"), and the range Ran("T") is closed.
Atkinson's theorem states:
A "T" ∈ "L"("H") is a Fredholm operator if and only if "T" is invertible modulo compact perturbation, i.e. "TS" = "I" + "C"1 and "ST" = "I" + "C"2 for some bounded operator "S" and compact operators "C"1 and "C"2.
In other words, an operator "T" ∈ "L"("H") is Fredholm, in the classical sense, if and only if its projection in the Calkin algebra is invertible.
Sketch of proof.
The outline of a proof is as follows. For the ⇒ implication, express "H" as the orthogonal direct sum
formula_0
The restriction "T" : Ker("T")⊥ → Ran("T") is a bijection, and therefore invertible by the open mapping theorem. Extend this inverse by 0 on Ran("T")⊥ = Ker("T*") to an operator "S" defined on all of "H". Then "I" − "TS" is the finite-rank projection onto Ker("T*"), and "I" − "ST" is the projection onto Ker("T"). This proves the only if part of the theorem.
For the converse, suppose now that "ST" = "I" + "C"2 for some compact operator "C"2. If "x" ∈ Ker("T"), then "STx" = "x" + "C"2"x" = 0. So Ker("T") is contained in an eigenspace of "C"2, which is finite-dimensional (see spectral theory of compact operators). Therefore, Ker("T") is also finite-dimensional. The same argument shows that Ker("T*") is also finite-dimensional.
To prove that Ran("T") is closed, we make use of the approximation property: let "F" be a finite-rank operator such that ||"F" − "C"2|| < "r". Then for every "x" in Ker("F"),
||"S"||⋅||"Tx"|| ≥ ||"STx"|| = ||"x" + "C"2"x"|| = ||"x" + "Fx" +"C"2"x" − "Fx"|| ≥ ||x|| − ||"C"2 − "F"||⋅||x|| ≥ (1 − "r")||"x"||.
Thus "T" is bounded below on Ker("F"), which implies that "T"(Ker("F")) is closed. On the other hand, "T"(Ker("F")⊥) is finite-dimensional, since Ker("F")⊥ = Ran("F*") is finite-dimensional. Therefore, Ran("T") = "T"(Ker("F")) + "T"(Ker("F")⊥) is closed, and this proves the theorem.
A more complete treatment of Atkinson's Theorem is in the reference by Arveson: it shows that if B is a Banach space, an operator is Fredholm iff it is invertible modulo a finite rank operator (and that the latter is equivalent to being invertible modulo a compact operator, which is significant in view of Enflo's example of a separable, reflexive Banach space with compact operators that are not norm-limits of finite rank operators). For Banach spaces, a Fredholm operator is one with finite dimensional kernel and range of finite codimension (equivalent to the kernel of its adjoint being finite dimensional). Note that the hypothesis that Ran("T") is closed is redundant since a space of finite codimension that is also the range of a bounded operator is always closed (see Arveson reference below); this is a consequence of the open-mapping theorem (and is not true if the space is not the range of a bounded operator, for example the kernel of a discontinuous linear functional).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " H = \n\\operatorname{Ker}(T)^\\perp \\oplus \\operatorname{Ker} (T). \n"
}
] | https://en.wikipedia.org/wiki?curid=10533603 |
10535463 | Persymmetric matrix | Square matrix symmetric about its anti-diagonal
In mathematics, persymmetric matrix may refer to either of the following: a square matrix which is symmetric with respect to the northeast-to-southwest (anti-)diagonal, or a square matrix whose values are constant along each line perpendicular to the main diagonal.
The first definition is the most common in the recent literature. The designation "Hankel matrix" is often used for matrices satisfying the property in the second definition.
Definition 1.
Let "A" = ("aij") be an "n" × "n" matrix. The first definition of "persymmetric" requires that
formula_0 for all "i", "j".
For example, 5 × 5 persymmetric matrices are of the form
formula_1
This can be equivalently expressed as "AJ" = "JA"T where J is the exchange matrix.
A third way to express this is seen by post-multiplying "AJ" = "JA"T with J on both sides, showing that "A"T rotated 180 degrees is identical to A:
formula_2
A symmetric matrix is a matrix whose values are symmetric about the northwest-to-southeast (main) diagonal. If a symmetric matrix is rotated by 90°, it becomes a persymmetric matrix. Symmetric persymmetric matrices are sometimes called bisymmetric matrices.
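The identity formula_2 gives a direct numerical test. The following Python sketch (using NumPy; the example matrix is arbitrary) builds the exchange matrix, manufactures a persymmetric matrix by symmetrizing a random matrix about the anti-diagonal, and checks the property:
<syntaxhighlight lang="python">
import numpy as np

def is_persymmetric(A, tol=1e-12):
    # A is persymmetric iff J A^T J == A, with J the exchange matrix.
    n = A.shape[0]
    J = np.fliplr(np.eye(n))
    return np.allclose(J @ A.T @ J, A, atol=tol)

rng = np.random.default_rng(0)
B = rng.integers(0, 10, size=(5, 5)).astype(float)

# Symmetrize B about the anti-diagonal to manufacture a persymmetric matrix.
J = np.fliplr(np.eye(5))
A = (B + J @ B.T @ J) / 2

print(is_persymmetric(A), is_persymmetric(B))   # True, (almost surely) False
</syntaxhighlight>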
Definition 2.
The second definition is due to Thomas Muir. It says that the square matrix "A" = ("a""ij") is persymmetric if "a""ij" depends only on "i" + "j". Persymmetric matrices in this sense, or Hankel matrices as they are often called, are of the form
formula_3
A persymmetric determinant is the determinant of a persymmetric matrix.
A matrix for which the values on each line parallel to the main diagonal are constant is called a Toeplitz matrix.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a_{ij} = a_{n-j+1,\\,n-i+1}"
},
{
"math_id": 1,
"text": "A = \\begin{bmatrix}\na_{11} & a_{12} & a_{13} & a_{14} & a_{15} \\\\\na_{21} & a_{22} & a_{23} & a_{24} & a_{14} \\\\\na_{31} & a_{32} & a_{33} & a_{23} & a_{13} \\\\\na_{41} & a_{42} & a_{32} & a_{22} & a_{12} \\\\\na_{51} & a_{41} & a_{31} & a_{21} & a_{11}\n\\end{bmatrix}."
},
{
"math_id": 2,
"text": "A = J A^\\mathsf{T} J."
},
{
"math_id": 3,
"text": "A = \\begin{bmatrix}\nr_1 & r_2 & r_3 & \\cdots & r_n \\\\\nr_2 & r_3 & r_4 & \\cdots & r_{n+1} \\\\\nr_3 & r_4 & r_5 & \\cdots & r_{n+2} \\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\nr_n & r_{n+1} & r_{n+2} & \\cdots & r_{2n-1}\n\\end{bmatrix}."
}
] | https://en.wikipedia.org/wiki?curid=10535463 |
105375 | Student's t-distribution | Probability distribution
In probability and statistics, Student's t distribution (or simply the t distribution) formula_2 is
a continuous probability distribution that generalizes the standard normal distribution. Like the latter, it is symmetric around zero and bell-shaped.
However, formula_2 has heavier tails and the amount of probability mass in the tails is controlled by the parameter formula_3 For formula_4 the Student's t distribution formula_5 becomes the standard Cauchy distribution, which has very "fat" tails; whereas for formula_6 it becomes the standard normal distribution formula_7 which has very "thin" tails.
The Student's t distribution plays a role in a number of widely used statistical analyses, including Student's t test for assessing the statistical significance of the difference between two sample means, the construction of confidence intervals for the difference between two population means, and in linear regression analysis.
In the form of the "location-scale t distribution" formula_8 it generalizes the normal distribution and also arises in the Bayesian analysis of data from a normal family as a compound distribution when marginalizing over the variance parameter.
History and etymology.
In statistics, the t distribution was first derived as a posterior distribution in 1876 by Helmert and Lüroth. As such, Student's t-distribution is an example of Stigler's Law of Eponymy. The t distribution also appeared in a more general form as Pearson type IV distribution in Karl Pearson's 1895 paper.
In the English-language literature, the distribution takes its name from William Sealy Gosset's 1908 paper in "Biometrika" under the pseudonym "Student". One version of the origin of the pseudonym is that Gosset's employer preferred staff to use pen names when publishing scientific papers instead of their real name, so he used the name "Student" to hide his identity. Another version is that Guinness did not want their competitors to know that they were using the t test to determine the quality of raw material.
Gosset worked at the Guinness Brewery in Dublin, Ireland, and was interested in the problems of small samples – for example, the chemical properties of barley where sample sizes might be as few as 3. Gosset's paper refers to the distribution as the "frequency distribution of standard deviations of samples drawn from a normal population". It became well known through the work of Ronald Fisher, who called the distribution "Student's distribution" and represented the test value with the letter t.
Definition.
Probability density function.
Student's t distribution has the probability density function (PDF) given by
formula_9
where formula_10 is the number of "degrees of freedom" and formula_11 is the gamma function. This may also be written as
formula_12
where formula_13is the Beta function. In particular for integer valued degrees of freedom formula_10 we have:
For formula_14 and even,
formula_15
For formula_14 and odd,
formula_16
The probability density function is symmetric, and its overall shape resembles the bell shape of a normally distributed variable with mean 0 and variance 1, except that it is a bit lower and wider. As the number of degrees of freedom grows, the t distribution approaches the normal distribution with mean 0 and variance 1. For this reason formula_17 is also known as the normality parameter.
The following images show the density of the t distribution for increasing values of formula_18 The normal distribution is shown as a blue line for comparison. Note that the t distribution (red line) becomes closer to the normal distribution as formula_10 increases.
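The probability density function can be evaluated directly from the expression above. The following Python sketch (NumPy and SciPy are assumed; the evaluation points and degrees of freedom are arbitrary choices) computes the density via the log-gamma function and checks it against SciPy's reference implementation:
<syntaxhighlight lang="python">
import numpy as np
from scipy import stats
from scipy.special import gammaln

def student_t_pdf(t, nu):
    # log of the normalizing constant Gamma((nu+1)/2) / (sqrt(pi*nu) * Gamma(nu/2))
    log_c = gammaln((nu + 1) / 2) - gammaln(nu / 2) - 0.5 * np.log(np.pi * nu)
    return np.exp(log_c - (nu + 1) / 2 * np.log1p(t**2 / nu))

t = np.linspace(-4, 4, 9)
for nu in (1, 2, 5, 30):
    print(nu, np.allclose(student_t_pdf(t, nu), stats.t.pdf(t, nu)))   # all True
</syntaxhighlight>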
Cumulative distribution function.
The cumulative distribution function (CDF) can be written in terms of I, the regularized
incomplete beta function. For "t" > 0,
formula_19
where
formula_20
Other values would be obtained by symmetry. An alternative formula, valid for formula_21 is
formula_22
where formula_23 is a particular instance of the hypergeometric function.
For information on its inverse cumulative distribution function, see the quantile function.
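The expression for the CDF in terms of the regularized incomplete beta function translates directly into code. The following sketch (SciPy is assumed; the test values are arbitrary) applies the CDF formula above for t ≥ 0 and extends it to negative t by symmetry:
<syntaxhighlight lang="python">
import numpy as np
from scipy import stats
from scipy.special import betainc   # regularized incomplete beta function

def student_t_cdf(t, nu):
    # F(t) = 1 - (1/2) I_x(nu/2, 1/2) with x = nu / (t^2 + nu), valid for t >= 0;
    # values for t < 0 follow from the symmetry of the density.
    x = nu / (t**2 + nu)
    tail = 0.5 * betainc(nu / 2, 0.5, x)
    return np.where(t >= 0, 1 - tail, tail)

t = np.linspace(-3, 3, 13)
print(np.allclose(student_t_cdf(t, 5), stats.t.cdf(t, 5)))   # True
</syntaxhighlight>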
Special cases.
Certain values of formula_10 give a simple form for Student's t-distribution.
Moments.
For formula_24 the raw moments of the t distribution are
formula_25
Moments of order formula_10 or higher do not exist.
For formula_26 with "k" even, the term may be simplified using the properties of the gamma function to
formula_27
For a t distribution with formula_10 degrees of freedom, the expected value is formula_0 if formula_1 and its variance is formula_28 if formula_29 The skewness is 0 if formula_30 and the excess kurtosis is formula_31 if formula_32
Location-scale t distribution.
Location-scale transformation.
Student's t distribution generalizes to the three parameter "location-scale t distribution" formula_33 by introducing a location parameter formula_34 and a scale parameter formula_35 With
formula_36
and location-scale family transformation
formula_37
we get
formula_38
The resulting distribution is also called the "non-standardized Student's t distribution".
Density and first two moments.
The location-scale t distribution has a density defined by:
formula_39
Equivalently, the density can be written in terms of formula_40:
formula_41
Other properties of this version of the distribution are:
formula_42
How the t distribution arises (characterization).
As the distribution of a test statistic.
Student's "t"-distribution with formula_55 degrees of freedom can be defined as the distribution of the random variable "T" with
formula_56
where "Z" is a standard normal random variable with expected value 0 and variance 1, "V" is a chi-squared-distributed random variable with formula_55 degrees of freedom, and "Z" and "V" are independent.
A different distribution is defined as that of the random variable defined, for a given constant "μ", by
formula_57
This random variable has a noncentral "t"-distribution with noncentrality parameter "μ". This distribution is important in studies of the power of Student's "t"-test.
Derivation.
Suppose "X"1, ..., "X""n" are independent realizations of the normally-distributed, random variable "X", which has an expected value "μ" and variance "σ"2. Let
formula_58
be the sample mean, and
formula_59
be an unbiased estimate of the variance from the sample. It can be shown that the random variable
formula_60
has a chi-squared distribution with formula_61 degrees of freedom (by Cochran's theorem). It is readily shown that the quantity
formula_62
is normally distributed with mean 0 and variance 1, since the sample mean formula_63 is normally distributed with mean "μ" and variance "σ"2/"n". Moreover, it is possible to show that these two random variables (the normally distributed one "Z" and the chi-squared-distributed one "V") are independent. Consequently the pivotal quantity
formula_64
which differs from "Z" in that the exact standard deviation "σ" is replaced by the random variable "S""n", has a Student's "t"-distribution as defined above. Notice that the unknown population variance "σ"2 does not appear in "T", since it was in both the numerator and the denominator, so it canceled. Gosset intuitively obtained the probability density function stated above, with formula_55 equal to "n" − 1, and Fisher proved it in 1925.
The distribution of the test statistic "T" depends on formula_55, but not "μ" or "σ"; the lack of dependence on "μ" and "σ" is what makes the "t"-distribution important in both theory and practice.
Sampling distribution of t-statistic.
The t distribution arises as the sampling distribution of the t statistic. Below, the one-sample t statistic is discussed; for the corresponding two-sample t statistic, see Student's t-test.
Unbiased variance estimate.
Let formula_65 be independent and identically distributed samples from a normal distribution with mean formula_46 and variance formula_66 The sample mean and unbiased sample variance are given by:
formula_67
The resulting (one sample) t statistic is given by
formula_68
and is distributed according to a Student's t distribution with formula_69 degrees of freedom.
Thus for inference purposes the t statistic is a useful "pivotal quantity" in the case when the mean and variance formula_70 are unknown population parameters, in the sense that the t statistic has then a probability distribution that depends on neither formula_46 nor formula_66
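This can be verified by simulation. The following Python sketch (the mean, standard deviation, and sample size are arbitrary choices) draws many normal samples, forms the t statistic for each, and compares empirical quantiles with those of the t distribution with formula_69 degrees of freedom:
<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, n, reps = 5.0, 2.0, 8, 100_000

samples = rng.normal(mu, sigma, size=(reps, n))
xbar = samples.mean(axis=1)
s = samples.std(axis=1, ddof=1)              # unbiased sample standard deviation
t_stats = (xbar - mu) / (s / np.sqrt(n))

# Empirical quantiles of the simulated statistic vs. quantiles of t with n-1 df.
for q in (0.05, 0.5, 0.95):
    print(q, np.quantile(t_stats, q), stats.t.ppf(q, df=n - 1))
</syntaxhighlight>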
ML variance estimate.
Instead of the unbiased estimate formula_71 we may also use the maximum likelihood estimate
formula_72
yielding the statistic
formula_73
This is distributed according to the location-scale t distribution:
formula_74
Compound distribution of normal with inverse gamma distribution.
The location-scale t distribution results from compounding a Gaussian distribution (normal distribution) with mean formula_34 and unknown variance, with an inverse gamma distribution placed over the variance with parameters formula_75 and formula_76 In other words, the random variable "X" is assumed to have a Gaussian distribution with an unknown variance distributed as inverse gamma, and then the variance is marginalized out (integrated out).
Equivalently, this distribution results from compounding a Gaussian distribution with a scaled-inverse-chi-squared distribution with parameters formula_55 and formula_47 The scaled-inverse-chi-squared distribution is exactly the same distribution as the inverse gamma distribution, but with a different parameterization, i.e. formula_77
The reason for the usefulness of this characterization is that in Bayesian statistics the inverse gamma distribution is the conjugate prior distribution of the variance of a Gaussian distribution. As a result, the location-scale t distribution arises naturally in many Bayesian inference problems.
Maximum entropy distribution.
Student's t distribution is the maximum entropy probability distribution for a random variate "X" for which formula_78 is fixed.
Further properties.
Monte Carlo sampling.
There are various approaches to constructing random samples from the Student's t distribution. The matter depends on whether the samples are required on a stand-alone basis, or are to be constructed by application of a quantile function to uniform samples, e.g. in multi-dimensional applications based on copula dependency. In the case of stand-alone sampling, an extension of the Box–Muller method and its polar form is easily deployed. It has the merit that it applies equally well to all real positive degrees of freedom, ν, while many other candidate methods fail if ν is close to zero.
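One simple stand-alone construction, used in the sketch below instead of the Box–Muller-type methods mentioned above (an illustrative choice, not the method referenced), follows the defining ratio directly: draw a standard normal variate and an independent chi-squared variate and form their quotient. A Kolmogorov-Smirnov test then checks the agreement with the t distribution:
<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
nu, size = 3, 200_000

z = rng.standard_normal(size)                # standard normal samples
v = rng.chisquare(nu, size)                  # independent chi-squared samples
t_samples = z / np.sqrt(v / nu)              # T = Z / sqrt(V / nu)

# Kolmogorov-Smirnov test against the t distribution with nu degrees of freedom.
print(stats.kstest(t_samples, "t", args=(nu,)))
</syntaxhighlight>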
Integral of Student's probability density function and p-value.
The function "A"("t" | "ν") is the integral of Student's probability density function, "f"("t"), between -"t" and "t", for "t" ≥ 0. It thus gives the probability that a value of "t" less than that calculated from observed data would occur by chance. Therefore, the function "A"("t" | "ν") can be used when testing whether the difference between the means of two sets of data is statistically significant, by calculating the corresponding value of "t" and the probability of its occurrence if the two sets of data were drawn from the same population. This is used in a variety of situations, particularly in t tests. For the statistic "t", with "ν" degrees of freedom, "A"("t" | "ν") is the probability that "t" would be less than the observed value if the two means were the same (provided that the smaller mean is subtracted from the larger, so that "t" ≥ 0). It can be easily calculated from the cumulative distribution function "F""ν"("t") of the t distribution:
formula_79
where "I" is the regularized incomplete beta function.
For statistical hypothesis testing this function is used to construct the "p"-value.
Uses.
In frequentist statistical inference.
Student's t distribution arises in a variety of statistical estimation problems where the goal is to estimate an unknown parameter, such as a mean value, in a setting where the data are observed with additive errors. If (as in nearly all practical statistical work) the population standard deviation of these errors is unknown and has to be estimated from the data, the t distribution is often used to account for the extra uncertainty that results from this estimation. In most such problems, if the standard deviation of the errors were known, a normal distribution would be used instead of the t distribution.
Confidence intervals and hypothesis tests are two statistical procedures in which the quantiles of the sampling distribution of a particular statistic (e.g. the standard score) are required. In any situation where this statistic is a linear function of the data, divided by the usual estimate of the standard deviation, the resulting quantity can be rescaled and centered to follow Student's t distribution. Statistical analyses involving means, weighted means, and regression coefficients all lead to statistics having this form.
Quite often, textbook problems will treat the population standard deviation as if it were known and thereby avoid the need to use the Student's t distribution. These problems are generally of two kinds: (1) those in which the sample size is so large that one may treat a data-based estimate of the variance as if it were certain, and (2) those that illustrate mathematical reasoning, in which the problem of estimating the standard deviation is temporarily ignored because that is not the point that the author or instructor is then explaining.
Hypothesis testing.
A number of statistics can be shown to have t distributions for samples of moderate size under null hypotheses that are of interest, so that the t distribution forms the basis for significance tests. For example, the distribution of Spearman's rank correlation coefficient ρ, in the null case (zero correlation) is well approximated by the t distribution for sample sizes above about 20.
Confidence intervals.
Suppose the number "A" is so chosen that
formula_81
when T has a t distribution with n − 1 degrees of freedom. By symmetry, this is the same as saying that A satisfies
formula_82
so "A" is the "95th percentile" of this probability distribution, or formula_83 Then
formula_84
and this is equivalent to
formula_85
Therefore, the interval whose endpoints are
formula_86
is a 90% confidence interval for μ. Therefore, if we find the mean of a set of observations that we can reasonably expect to have a normal distribution, we can use the t distribution to examine whether the confidence limits on that mean include some theoretically predicted value – such as the value predicted on a null hypothesis.
It is this result that is used in the Student's t tests: since the difference between the means of samples from two normal distributions is itself distributed normally, the t distribution can be used to examine whether that difference can reasonably be supposed to be zero.
If the data are normally distributed, the one-sided upper confidence limit (UCL) of the mean can be calculated using the following equation:
formula_87
The resulting UCL will be the greatest average value that will occur for a given confidence interval and population size. In other words, formula_63 being the mean of the set of observations, the probability that the mean of the distribution is less than the UCL is equal to the confidence level 1 − α.
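The UCL formula only requires the inverse CDF (quantile function) of the t distribution. The following Python sketch (the data values are hypothetical and SciPy is assumed) computes a one-sided 95% upper confidence limit:
<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

x = np.array([9.2, 10.1, 9.8, 10.4, 9.9, 10.7, 9.5, 10.0])   # hypothetical data
n = len(x)
alpha = 0.05

xbar = x.mean()
s = x.std(ddof=1)
t_crit = stats.t.ppf(1 - alpha, df=n - 1)   # one-sided critical value t_{alpha, n-1}

ucl = xbar + t_crit * s / np.sqrt(n)
print(xbar, ucl)                            # 95% upper confidence limit for the mean
</syntaxhighlight>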
Prediction intervals.
The t distribution can be used to construct a prediction interval for an unobserved sample from a normal distribution with unknown mean and variance.
In Bayesian statistics.
The Student's t distribution, especially in its three-parameter (location-scale) version, arises frequently in Bayesian statistics as a result of its connection with the normal distribution. Whenever the variance of a normally distributed random variable is unknown and a conjugate prior placed over it that follows an inverse gamma distribution, the resulting marginal distribution of the variable will follow a Student's t distribution. Equivalent constructions with the same results involve a conjugate scaled-inverse-chi-squared distribution over the variance, or a conjugate gamma distribution over the precision. If an improper prior proportional to 1/"σ"2 is placed over the variance, the t distribution also arises. This is the case regardless of whether the mean of the normally distributed variable is known, is unknown distributed according to a conjugate normally distributed prior, or is unknown distributed according to an improper constant prior.
Related situations that also produce a t distribution are:
Robust parametric modeling.
The t distribution is often used as an alternative to the normal distribution as a model for data, which often has heavier tails than the normal distribution allows for; see e.g. Lange et al. The classical approach was to identify outliers (e.g., using Grubbs's test) and exclude or downweight them in some way. However, it is not always easy to identify outliers (especially in high dimensions), and the t distribution is a natural choice of model for such data and provides a parametric approach to robust statistics.
A Bayesian account can be found in Gelman et al. The degrees of freedom parameter controls the kurtosis of the distribution and is correlated with the scale parameter. The likelihood can have multiple local maxima and, as such, it is often necessary to fix the degrees of freedom at a fairly low value and estimate the other parameters taking this as given. Some authors report that values between 3 and 9 are often good choices. Venables and Ripley suggest that a value of 5 is often a good choice.
Student's t process.
For practical regression and prediction needs, Student's t processes were introduced, that are generalisations of the Student t distributions for functions. A Student's t process is constructed from the Student t distributions like a Gaussian process is constructed from the Gaussian distributions. For a Gaussian process, all sets of values have a multidimensional Gaussian distribution. Analogously, formula_88 is a Student t process on an interval formula_89 if the correspondent values of the process formula_90 (formula_91) have a joint multivariate Student t distribution. These processes are used for regression, prediction, Bayesian optimization and related problems. For multivariate regression and multi-output prediction, the multivariate Student t processes are introduced and used.
Table of selected values.
The following table lists values for t distributions with ν degrees of freedom for a range of one-sided or two-sided critical regions. The first column is ν, the percentages along the top are confidence levels formula_92 and the numbers in the body of the table are the formula_93 factors described in the section on confidence intervals.
The last row with infinite ν gives critical points for a normal distribution since a t distribution with infinitely many degrees of freedom is a normal distribution. (See Related distributions above).
Let's say we have a sample with size 11, sample mean 10, and sample variance 2. For 90% confidence with 10 degrees of freedom, the one-sided t value from the table is 1.372. Then, with the confidence interval calculated from
formula_94
we determine that with 90% confidence we have a true mean lying below
formula_95
In other words, 90% of the times that an upper threshold is calculated by this method from particular samples, this upper threshold exceeds the true mean.
And with 90% confidence we have a true mean lying above
formula_96
In other words, 90% of the times that a lower threshold is calculated by this method from particular samples, this lower threshold lies below the true mean.
Thus, at 80% confidence (calculated from 100% − 2 × (1 − 90%) = 80%), we have a true mean lying within the interval
formula_97
Saying that 80% of the times that upper and lower thresholds are calculated by this method from a given sample, the true mean is both below the upper threshold and above the lower threshold is not the same as saying that there is an 80% probability that the true mean lies between a particular pair of upper and lower thresholds that have been calculated by this method; see confidence interval and prosecutor's fallacy.
Nowadays, statistical software, such as the R programming language, and functions available in many spreadsheet programs compute values of the t distribution and its inverse without tables.
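For instance, the worked example above (sample size 11, sample mean 10, sample variance 2) can be reproduced with SciPy, recovering the one-sided critical value of about 1.372 and approximately the interval quoted above:
<syntaxhighlight lang="python">
from scipy import stats

n, xbar, s2 = 11, 10.0, 2.0
df = n - 1

t_90 = stats.t.ppf(0.90, df)                   # one-sided 90% critical value, ~1.372
half_width = t_90 * (s2 / n) ** 0.5
print(t_90, xbar - half_width, xbar + half_width)   # compare with the values above
</syntaxhighlight>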
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ 0\\ "
},
{
"math_id": 1,
"text": "\\ \\nu > 1\\ ,"
},
{
"math_id": 2,
"text": "\\ t_\\nu\\ "
},
{
"math_id": 3,
"text": "\\ \\nu ~ ~."
},
{
"math_id": 4,
"text": "\\ \\nu = 1\\ "
},
{
"math_id": 5,
"text": "t_\\nu"
},
{
"math_id": 6,
"text": "\\ \\nu \\rightarrow \\infty\\ "
},
{
"math_id": 7,
"text": "\\ \\mathcal{N}(0,1) \\,"
},
{
"math_id": 8,
"text": "lst(\\mu, \\tau^2, \\nu)"
},
{
"math_id": 9,
"text": " f(t)\\ =\\ \\frac{\\ \\Gamma\\!\\left(\\frac{\\ \\nu+1\\ }{ 2 }\\right)\\ }{\\ \\sqrt{\\pi\\ \\nu\\ }\\; \\Gamma\\!\\left(\\frac{\\nu}{2}\\right)} \\; \\left(\\ 1 + \\frac{~ t^2\\ }{ \\nu }\\ \\right)^{-(\\nu+1)/2}\\ ,"
},
{
"math_id": 10,
"text": "\\ \\nu\\ "
},
{
"math_id": 11,
"text": "\\ \\Gamma\\ "
},
{
"math_id": 12,
"text": " f(t)\\ =\\ \\frac{ 1 }{\\ \\sqrt{\\nu\\ }\\ {\\mathrm B}\\!\\left( \\frac{\\ 1\\ }{ 2 },\\ \\frac{\\ \\nu\\ }{ 2 }\\right)\\ } \\; \\left(\\ 1 + \\frac{\\ t^2\\ }{ \\nu }\\ \\right)^{-(\\nu+1)/2}\\ ,"
},
{
"math_id": 13,
"text": "\\ {\\mathrm B}\\ "
},
{
"math_id": 14,
"text": "\\ \\nu > 1\\ "
},
{
"math_id": 15,
"text": "\\ \\frac{\\ \\Gamma\\!\\left( \\frac{\\ \\nu+1\\ }{ 2 }\\right)\\ }{\\ \\sqrt{\\pi\\ \\nu\\ }\\; \\Gamma\\!\\left( \\frac{\\ \\nu\\ }{ 2 }\\right)\\ }\\ =\\ \\frac{ 1 }{\\ 2 \\sqrt{\\nu\\ }\\ }\\ \\cdot\\ \\frac{\\ (\\nu - 1)\\cdot(\\nu -3)\\cdots 5 \\cdot 3\\ }{\\ (\\nu -2)\\cdot(\\nu -4) \\cdots 4 \\cdot 2\\ } ~."
},
{
"math_id": 16,
"text": "\\ \\frac{\\ \\Gamma\\!\\left( \\frac{\\ \\nu+1\\ }{ 2 } \\right)\\ }{\\ \\sqrt{\\pi\\ \\nu\\ }\\ \\Gamma\\!\\left( \\frac{\\ \\nu\\ }{ 2 } \\right)}\\ =\\ \\frac{ 1 }{\\ \\pi \\sqrt{\\nu\\ }\\ }\\ \\cdot\\ \\frac{(\\nu -1)\\cdot(\\nu -3)\\cdots 4 \\cdot 2\\ }{\\ (\\nu -2)\\cdot(\\nu -4)\\cdots 5 \\cdot 3\\ } ~."
},
{
"math_id": 17,
"text": "{\\ \\nu\\ }"
},
{
"math_id": 18,
"text": "\\ \\nu ~."
},
{
"math_id": 19,
"text": "F(t) = \\int_{-\\infty}^t\\ f(u)\\ \\operatorname{d}u ~=~ 1 - \\frac{1}{2} I_{x(t)}\\!\\left( \\frac{\\ \\nu\\ }{ 2 },\\ \\frac{\\ 1\\ }{ 2 } \\right)\\ ,"
},
{
"math_id": 20,
"text": "x(t) = \\frac{ \\nu }{\\ t^2+\\nu\\ } ~."
},
{
"math_id": 21,
"text": "\\ t^2 < \\nu\\ ,"
},
{
"math_id": 22,
"text": "\\int_{-\\infty}^t f(u)\\ \\operatorname{d}u ~=~ \\frac{1}{2} + t\\ \\frac{\\ \\Gamma\\!\\left( \\frac{\\ \\nu+1\\ }{ 2 } \\right)\\ }{\\ \\sqrt{\\pi\\ \\nu\\ }\\ \\Gamma\\!\\left( \\frac{ \\nu }{\\ 2\\ }\\right)\\ } \\ {}_{2}F_1\\!\\left(\\ \\frac{1}{2}, \\frac{\\ \\nu+1\\ }{2}\\ ; \\frac{ 3 }{\\ 2\\ }\\ ;\\ -\\frac{~ t^2\\ }{ \\nu }\\ \\right)\\ ,"
},
{
"math_id": 23,
"text": "\\ {}_{2}F_1(\\ ,\\ ;\\ ;\\ )\\ "
},
{
"math_id": 24,
"text": "\\nu > 1\\ ,"
},
{
"math_id": 25,
"text": "\\operatorname{\\mathbb E}\\left\\{\\ T^k\\ \\right\\} = \\begin{cases}\n\\quad 0 & k \\text{ odd }, \\quad 0 < k < \\nu\\ , \\\\ {} \\\\\n\\frac{1}{\\ \\sqrt{\\pi\\ }\\ \\Gamma\\left(\\frac{\\ \\nu\\ }{ 2 }\\right)}\\ \\left[\\ \\Gamma\\!\\left(\\frac{\\ k + 1\\ }{ 2 }\\right)\\ \\Gamma\\!\\left(\\frac{\\ \\nu - k\\ }{ 2 }\\right)\\ \\nu^{\\frac{\\ k\\ }{ 2 }}\\ \\right] & k \\text{ even }, \\quad 0 < k < \\nu ~.\\\\\n\\end{cases}"
},
{
"math_id": 26,
"text": "\\ 0 < k < \\nu\\ ,"
},
{
"math_id": 27,
"text": "\\operatorname{\\mathbb E}\\left\\{\\ T^k\\ \\right\\} = \\nu^{ \\frac{\\ k\\ }{ 2 } }\\ \\prod_{j=1}^{k/2}\\ \\frac{~ 2j - 1 ~}{ \\nu - 2j } \\qquad k \\text{ even}, \\quad 0 < k < \\nu ~."
},
{
"math_id": 28,
"text": "\\ \\frac{ \\nu }{\\ \\nu-2\\ }\\ "
},
{
"math_id": 29,
"text": "\\ \\nu > 2 ~."
},
{
"math_id": 30,
"text": "\\ \\nu > 3\\ "
},
{
"math_id": 31,
"text": "\\ \\frac{ 6 }{\\ \\nu - 4\\ }\\ "
},
{
"math_id": 32,
"text": "\\ \\nu > 4 ~."
},
{
"math_id": 33,
"text": "\\ \\mathcal{lst}(\\mu,\\ \\tau^2,\\ \\nu)\\ "
},
{
"math_id": 34,
"text": "\\ \\mu\\ "
},
{
"math_id": 35,
"text": "\\ \\tau ~."
},
{
"math_id": 36,
"text": "\\ T \\sim t_\\nu\\ "
},
{
"math_id": 37,
"text": "\\ X = \\mu + \\tau\\ T\\ "
},
{
"math_id": 38,
"text": "\\ X \\sim \\mathcal{lst}(\\mu,\\ \\tau^2,\\ \\nu) ~."
},
{
"math_id": 39,
"text": "p(x\\mid \\nu,\\mu,\\tau) = \\frac{\\ \\Gamma\\left(\\frac{\\ \\nu + 1\\ }{ 2 } \\right)\\ }{\\ \\Gamma\\left( \\frac{\\ \\nu\\ }{ 2 }\\right)\\ \\sqrt{\\pi\\ \\nu\\ }\\ \\tau\\ }\\ \\left( 1 + \\frac{\\ 1\\ }{ \\nu }\\ \\left(\\ \\frac{\\ x-\\mu\\ }{ \\tau }\\ \\right)^2\\ \\right)^{-(\\nu+1)/2}\\ "
},
{
"math_id": 40,
"text": "\\tau^2"
},
{
"math_id": 41,
"text": "\\ p(x\\ \\mid\\ \\nu,\\ \\mu,\\ \\tau^2) = \\frac{\\ \\Gamma( \\frac{\\nu + 1}{2})\\ }{\\ \\Gamma\\left(\\frac{\\ \\nu\\ }{ 2 }\\right)\\ \\sqrt{\\pi\\ \\nu\\ \\tau^2}\\ }\\ \\left(\\ 1 + \\frac{\\ 1\\ }{ \\nu }\\ \\frac{\\ (x - \\mu)^2\\ }{\\ \\tau^2\\ }\\ \\right)^{-(\\nu+1)/2}\\ "
},
{
"math_id": 42,
"text": "\\begin{align}\n\\operatorname{\\mathbb E}\\{\\ X\\ \\} &= \\mu & \\text{ for } \\nu > 1\\ ,\\\\\n\\operatorname{var}\\{\\ X\\ \\} &= \\tau^2\\frac{\\nu}{\\nu-2} & \\text{ for } \\nu > 2\\ ,\\\\\n\\operatorname{mode}\\{\\ X\\ \\} &= \\mu ~.\n\\end{align} "
},
{
"math_id": 43,
"text": "\\ X\\ "
},
{
"math_id": 44,
"text": "\\ X \\sim \\mathcal{lst}\\left(\\mu,\\ \\tau^2,\\ \\nu\\right)\\ "
},
{
"math_id": 45,
"text": "X \\sim \\mathrm{N}\\left(\\mu, \\tau^2\\right)"
},
{
"math_id": 46,
"text": "\\mu"
},
{
"math_id": 47,
"text": "\\ \\tau^2 ~."
},
{
"math_id": 48,
"text": "\\ \\mathcal{lst}\\left(\\mu,\\ \\tau^2,\\ \\nu=1 \\right)\\ "
},
{
"math_id": 49,
"text": "\\nu=1"
},
{
"math_id": 50,
"text": "\\mathrm{Cau}\\left(\\mu, \\tau\\right) ~."
},
{
"math_id": 51,
"text": "\\ \\mathcal{lst}\\left(\\mu=0,\\ \\tau^2=1,\\ \\nu\\right)\\ "
},
{
"math_id": 52,
"text": "\\mu=0"
},
{
"math_id": 53,
"text": "\\ \\tau^2=1\\ "
},
{
"math_id": 54,
"text": "\\ t_\\nu ~."
},
{
"math_id": 55,
"text": "\\nu"
},
{
"math_id": 56,
"text": " T=\\frac{Z}{\\sqrt{V/\\nu}} = Z \\sqrt{\\frac{\\nu}{V}},"
},
{
"math_id": 57,
"text": "(Z+\\mu)\\sqrt{\\frac{\\nu}{V}}."
},
{
"math_id": 58,
"text": "\\overline{X}_n = \\frac{1}{n}(X_1+\\cdots+X_n)"
},
{
"math_id": 59,
"text": "S_n^2 = \\frac{1}{n-1} \\sum_{i=1}^n \\left(X_i - \\overline{X}_n\\right)^2"
},
{
"math_id": 60,
"text": "V = (n-1)\\frac{S_n^2}{\\sigma^2} "
},
{
"math_id": 61,
"text": "\\nu = n - 1"
},
{
"math_id": 62,
"text": "Z = \\left(\\overline{X}_n - \\mu\\right) \\frac{\\sqrt{n}}{\\sigma}"
},
{
"math_id": 63,
"text": "\\overline{X}_n"
},
{
"math_id": 64,
"text": "T \\equiv \\frac{Z}{\\sqrt{V/\\nu}} = \\left(\\overline{X}_n - \\mu\\right) \\frac{\\sqrt{n}}{S_n},"
},
{
"math_id": 65,
"text": "\\ x_1, \\ldots, x_n \\sim {\\mathcal N}(\\mu, \\sigma^2)\\ "
},
{
"math_id": 66,
"text": "\\ \\sigma^2 ~."
},
{
"math_id": 67,
"text": "\n\\begin{align}\n \\bar{x} &= \\frac{\\ x_1+\\cdots+x_n\\ }{ n }\\ , \\\\[5pt]\n s^2 &= \\frac{ 1 }{\\ n-1\\ }\\ \\sum_{i=1}^n (x_i - \\bar{x})^2 ~.\n\\end{align}\n"
},
{
"math_id": 68,
"text": " t = \\frac{\\bar{x} - \\mu}{\\ \\sqrt{s^2/n\\ }\\ } \\sim t_{n-1} ~."
},
{
"math_id": 69,
"text": "\\ n - 1\\ "
},
{
"math_id": 70,
"text": "(\\mu, \\sigma^2)"
},
{
"math_id": 71,
"text": "\\ s^2\\ "
},
{
"math_id": 72,
"text": "\\ s^2_\\mathsf{ML} = \\frac{\\ 1\\ }{ n }\\ \\sum_{i=1}^n (x_i - \\bar{x})^2\\ "
},
{
"math_id": 73,
"text": "\\ t_\\mathsf{ML} = \\frac{\\bar{x} - \\mu}{\\sqrt{s^2_\\mathsf{ML}/n\\ }} = \\sqrt{\\frac{n}{n-1}\\ }\\ t ~."
},
{
"math_id": 74,
"text": " t_\\mathsf{ML} \\sim \\mathcal{lst}(0,\\ \\tau^2=n/(n-1),\\ n-1) ~."
},
{
"math_id": 75,
"text": "\\ a = \\frac{\\ \\nu\\ }{ 2 }\\ "
},
{
"math_id": 76,
"text": "b = \\frac{\\ \\nu\\ \\tau^2\\ }{ 2 } ~."
},
{
"math_id": 77,
"text": "\\ \\nu = 2\\ a, \\; {\\tau}^2 = \\frac{\\ b\\ }{ a } ~."
},
{
"math_id": 78,
"text": "\\ \\operatorname{\\mathbb E}\\left\\{\\ \\ln(\\nu+X^2)\\ \\right\\}\\ "
},
{
"math_id": 79,
"text": " A( t \\mid \\nu) = F_\\nu(t) - F_\\nu(-t) = 1 - I_{ \\frac{\\nu}{\\nu +t^2} }\\!\\left(\\frac{\\nu}{2},\\frac{1}{2}\\right),"
},
{
"math_id": 80,
"text": " \\prod_{j=1}^k \\frac{1}{(r+j+a)^2+b^2} \\quad \\quad r=\\ldots, -1, 0, 1, \\ldots ~."
},
{
"math_id": 81,
"text": "\\ \\operatorname{\\mathbb P}\\left\\{\\ -A < T < A\\ \\right\\} = 0.9\\ ,"
},
{
"math_id": 82,
"text": "\\ \\operatorname{\\mathbb P}\\left\\{\\ T < A\\ \\right\\} = 0.95\\ ,"
},
{
"math_id": 83,
"text": "\\ A = t_{(0.05,n-1)} ~."
},
{
"math_id": 84,
"text": "\\ \\operatorname{\\mathbb P}\\left\\{\\ -A < \\frac{\\ \\overline{X}_n - \\mu\\ }{ S_n/\\sqrt{n\\ } } < A\\ \\right\\} = 0.9\\ ,"
},
{
"math_id": 85,
"text": "\\ \\operatorname{\\mathbb P}\\left\\{\\ \\overline{X}_n - A \\frac{ S_n }{\\ \\sqrt{n\\ }\\ } < \\mu < \\overline{X}_n + A\\ \\frac{ S_n }{\\ \\sqrt{n\\ }\\ }\\ \\right\\} = 0.9."
},
{
"math_id": 86,
"text": "\\ \\overline{X}_n\\ \\pm A\\ \\frac{ S_n }{\\ \\sqrt{n\\ }\\ }\\ "
},
{
"math_id": 87,
"text": "\\mathsf{UCL}_{1-\\alpha} = \\overline{X}_n + t_{\\alpha,n-1}\\ \\frac{ S_n }{\\ \\sqrt{n\\ }\\ } ~."
},
{
"math_id": 88,
"text": "X(t)"
},
{
"math_id": 89,
"text": "I=[a,b]"
},
{
"math_id": 90,
"text": "\\ X(t_1),\\ \\ldots\\ , X(t_n)\\ "
},
{
"math_id": 91,
"text": "t_i \\in I"
},
{
"math_id": 92,
"text": "\\ \\alpha\\ ,"
},
{
"math_id": 93,
"text": "t_{\\alpha,n-1}"
},
{
"math_id": 94,
"text": "\\ \\overline{X}_n \\pm t_{\\alpha,\\nu}\\ \\frac{S_n}{\\ \\sqrt{n\\ }\\ }\\ ,"
},
{
"math_id": 95,
"text": "\\ 10 + 1.372\\ \\frac{ \\sqrt{2\\ } }{\\ \\sqrt{11\\ }\\ } = 10.585 ~."
},
{
"math_id": 96,
"text": "\\ 10 - 1.372\\ \\frac{ \\sqrt{2\\ } }{\\ \\sqrt{11\\ }\\ } = 9.414 ~."
},
{
"math_id": 97,
"text": "\\left(\\ 10 - 1.372\\ \\frac{ \\sqrt{2\\ } }{\\ \\sqrt{11\\ }\\ },\\ 10 + 1.372\\ \\frac{ \\sqrt{2\\ } }{\\ \\sqrt{11\\ }\\ }\\ \\right) = (\\ 9.414,\\ 10.585\\ ) ~."
},
{
"math_id": 98,
"text": "(0, \\infty)"
},
{
"math_id": 99,
"text": " f(x)= \\frac{2\\beta^{\\frac{\\alpha}{2}} x^{\\alpha-1} \\exp(-\\beta x^2+ \\gamma x )}{\\Psi{\\left(\\frac{\\alpha}{2}, \\frac{ \\gamma}{\\sqrt{\\beta}}\\right)}}\\ ,"
},
{
"math_id": 100,
"text": "\\Psi(\\alpha,z)={}_1\\Psi_1\\left(\\begin{matrix}\\left(\\alpha,\\frac{1}{2}\\right)\\\\(1,0)\\end{matrix};z \\right)"
}
] | https://en.wikipedia.org/wiki?curid=105375 |
10538204 | Parabolic Lie algebra | In algebra, a parabolic Lie algebra formula_0 is a subalgebra of a semisimple Lie algebra formula_1 satisfying one of the following two conditions:
These conditions are equivalent over an algebraically closed field of characteristic zero, such as the complex numbers. If the field formula_2 is not algebraically closed, then the first condition is replaced by the assumption that formula_3 contains a Borel subalgebra of formula_4,
where formula_5 is the algebraic closure of formula_2.
Examples.
For the general linear Lie algebra formula_6, a parabolic subalgebra is the stabilizer of a partial flag of formula_7, i.e. a sequence of nested linear subspaces. For a complete flag, the stabilizer gives a Borel subalgebra. For a single linear subspace formula_8, one gets a maximal parabolic subalgebra formula_0, and the space of possible choices is the Grassmannian formula_9.
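Concretely, in a basis of formula_7 whose first "k" vectors span the chosen subspace, this maximal parabolic subalgebra is the set of block upper-triangular matrices; the following is a sketch of that standard coordinate description (the block notation is ours, not taken from the source):
```latex
% Stabilizer of the span of the first k basis vectors inside F^n:
% the lower-left (n-k) x k block must vanish.
\mathfrak{p} \;=\;
\left\{
\begin{pmatrix} A & B \\ 0 & D \end{pmatrix}
\;\middle|\;
A \in \operatorname{Mat}_{k \times k}(\mathbb{F}),\;
B \in \operatorname{Mat}_{k \times (n-k)}(\mathbb{F}),\;
D \in \operatorname{Mat}_{(n-k) \times (n-k)}(\mathbb{F})
\right\}
\subset \mathfrak{gl}_n(\mathbb{F}).
```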
In general, for a complex simple Lie algebra formula_1, parabolic subalgebras are in bijection with subsets of simple roots, i.e. subsets of the nodes of the Dynkin diagram. | [
{
"math_id": 0,
"text": "\\mathfrak p"
},
{
"math_id": 1,
"text": "\\mathfrak g"
},
{
"math_id": 2,
"text": "\\mathbb F"
},
{
"math_id": 3,
"text": "\\mathfrak p\\otimes_{\\mathbb F}\\overline{\\mathbb F}"
},
{
"math_id": 4,
"text": " \\mathfrak g\\otimes_{\\mathbb F}\\overline{\\mathbb F}"
},
{
"math_id": 5,
"text": "\\overline{\\mathbb F}"
},
{
"math_id": 6,
"text": "\\mathfrak{g}=\\mathfrak{gl}_n(\\mathbb F)"
},
{
"math_id": 7,
"text": "\\mathbb F^n"
},
{
"math_id": 8,
"text": "\\mathbb F^k\\subset \\mathbb F^n"
},
{
"math_id": 9,
"text": "\\mathrm{Gr}(k,n)"
}
] | https://en.wikipedia.org/wiki?curid=10538204 |
1053909 | Stein manifold | In mathematics, in the theory of several complex variables and complex manifolds, a Stein manifold is a complex submanifold of the vector space of "n" complex dimensions. They were introduced by and named after Karl Stein (1951). A Stein space is similar to a Stein manifold but is allowed to have singularities. Stein spaces are the analogues of affine varieties or affine schemes in algebraic geometry.
Definition.
Suppose formula_0 is a complex manifold of complex dimension formula_1 and let formula_2 denote the ring of holomorphic functions on formula_3 We call formula_0 a Stein manifold if the following conditions hold:
formula_0 is holomorphically convex, i.e. for every compact subset formula_4, the so-called "holomorphically convex hull",
formula_5
is also a "compact" subset of formula_0.
formula_0 is holomorphically separable, i.e. if formula_6 are two points in formula_0, then there exists formula_7 such that formula_8
Non-compact Riemann surfaces are Stein manifolds.
Let "X" be a connected, non-compact Riemann surface. A deep theorem of Heinrich Behnke and Stein (1948) asserts that "X" is a Stein manifold.
Another result, attributed to Hans Grauert and Helmut Röhrl (1956), states moreover that every holomorphic vector bundle on "X" is trivial. In particular, every line bundle is trivial, so formula_9. The exponential sheaf sequence leads to the following exact sequence:
formula_10
Now Cartan's theorem B shows that formula_11, therefore formula_12.
This is related to the solution of the second Cousin problem.
Properties and examples of Stein manifolds.
These facts imply that a Stein manifold is a closed complex submanifold of complex space, whose complex structure is that of the ambient space (because the embedding is biholomorphic).
Numerous further characterizations of such manifolds exist, in particular capturing the property of their having "many" holomorphic functions taking values in the complex numbers. See for example Cartan's theorems A and B, relating to sheaf cohomology. The initial impetus was to have a description of the properties of the domain of definition of the (maximal) analytic continuation of an analytic function.
In the GAGA set of analogies, Stein manifolds correspond to affine varieties.
Stein manifolds are in some sense dual to the elliptic manifolds in complex analysis which admit "many" holomorphic functions from the complex numbers into themselves. It is known that a Stein manifold is elliptic if and only if it is fibrant in the sense of so-called "holomorphic homotopy theory".
Relation to smooth manifolds.
Every compact smooth manifold of dimension 2"n", which has only handles of index ≤ "n", has a Stein structure provided "n" > 2, and when "n" = 2 the same holds provided the 2-handles are attached with certain framings (framing less than the Thurston–Bennequin framing). Every closed smooth 4-manifold is a union of two Stein 4-manifolds glued along their common boundary. | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "\\mathcal O(X)"
},
{
"math_id": 3,
"text": "X."
},
{
"math_id": 4,
"text": "K \\subset X"
},
{
"math_id": 5,
"text": "\\bar K = \\left \\{z \\in X \\,\\left|\\, |f(z)| \\leq \\sup_{w \\in K} |f(w)| \\ \\forall f \\in \\mathcal O(X) \\right. \\right \\},"
},
{
"math_id": 6,
"text": "x \\neq y"
},
{
"math_id": 7,
"text": "f \\in \\mathcal O(X)"
},
{
"math_id": 8,
"text": "f(x) \\neq f(y)."
},
{
"math_id": 9,
"text": "H^1(X, \\mathcal O_X^*) =0 "
},
{
"math_id": 10,
"text": "H^1(X, \\mathcal O_X) \\longrightarrow H^1(X, \\mathcal O_X^*) \\longrightarrow H^2(X, \\Z) \\longrightarrow H^2(X, \\mathcal O_X) "
},
{
"math_id": 11,
"text": "H^1(X,\\mathcal{O}_X)= H^2(X,\\mathcal{O}_X)=0 "
},
{
"math_id": 12,
"text": "H^2(X,\\Z) =0"
},
{
"math_id": 13,
"text": "\\Complex^n"
},
{
"math_id": 14,
"text": "\\Complex^{2 n+1}"
},
{
"math_id": 15,
"text": "x \\in X"
},
{
"math_id": 16,
"text": "x"
},
{
"math_id": 17,
"text": "\\psi"
},
{
"math_id": 18,
"text": "i \\partial \\bar \\partial \\psi >0"
},
{
"math_id": 19,
"text": "\\{z \\in X \\mid \\psi (z)\\leq c \\}"
},
{
"math_id": 20,
"text": "c"
},
{
"math_id": 21,
"text": "\\{z \\mid -\\infty\\leq\\psi(z)\\leq c\\}"
},
{
"math_id": 22,
"text": "X_c=f^{-1}(c)"
},
{
"math_id": 23,
"text": "f^{-1}(-\\infty, c)."
},
{
"math_id": 24,
"text": "f^{-1}(-\\infty, c)"
}
] | https://en.wikipedia.org/wiki?curid=1053909 |
1053994 | Yield management | Operational business strategy
Yield management is a variable pricing strategy, based on understanding, anticipating and influencing consumer behavior in order to maximize revenue or profits from a fixed, time-limited resource (such as airline seats, hotel room reservations or advertising inventory). As a specific, inventory-focused branch of revenue management, yield management involves strategic control of inventory to sell the right product to the right customer at the right time for the right price. This process can result in price discrimination, in which customers consuming identical goods or services are charged different prices. Yield management is a large revenue generator for several major industries; Robert Crandall, former Chairman and CEO of American Airlines, gave yield management its name and has called it "the single most important technical development in transportation management since we entered deregulation."
Definition.
Yield management has become part of mainstream business theory and practice over the last fifteen to twenty years. Whether an emerging discipline or a new management science (it has been called both), yield management is a set of yield maximization strategies and tactics to improve the profitability of certain businesses. It is complex because it involves several aspects of management control, including rate management, revenue streams management, and distribution channel management. Yield management is multidisciplinary because it blends elements of marketing, operations, and financial management into a highly successful new approach. Yield management strategists must frequently work with one or more other departments when designing and implementing yield management strategies.
History.
Deregulation is generally regarded as the catalyst for yield management in the airline industry, but this tends to overlook the role of global distribution systems (GDSs). It is arguable that the fixed pricing paradigm occurs as a result of decentralized consumption. With mass production, pricing became a centralized management activity and customer contact staff focused on customer service exclusively. Electronic commerce, of which the GDSs were the first wave, created an environment where large volumes of sales could be managed without large numbers of customer service staff. They also gave management staff direct access to price at time of consumption and rich data capture for future decision-making.
On January 17, 1985, American Airlines launched Ultimate Super Saver fares in an effort to compete with low cost carrier People Express Airlines. Donald Burr, the CEO of People Express, is quoted as saying "We were a vibrant, profitable company from 1981 to 1985, and then we tipped right over into losing $50 million a month... We had been profitable from the day we started until American came at us with Ultimate Super Savers." in the book "Revenue Management" by Robert G. Cross, Chairman and CEO of Revenue Analytics. The yield management systems developed at American Airlines were recognized by the Edelman Prize committee of INFORMS for contributing $1.4 billion in a three-year period at the airline.
Yield management spread to other travel and transportation companies in the early 1990s. Notable was the implementation of yield management at National Car Rental. In 1993, General Motors was forced to take a $744 million charge against earnings related to its ownership of National Car Rental. In response, National's program expanded the definition of yield management to include capacity management, pricing and reservations control. As a result of this program, General Motors was able to sell National Car Rental for an estimated $1.2 billion. Yield management gave way to the more general practice of revenue management. Whereas revenue management involves predicting consumer behavior by segmenting markets, forecasting demand, and optimizing prices for several different types of products, yield management refers specifically to maximizing revenue through inventory control. Some notable revenue management implementations include the system at NBC, which is credited with $200 million in improved ad sales from 1996 to 2000, the target pricing initiative at UPS, and revenue management at Texas Children's Hospital. Since 2000, much of the dynamic pricing, promotions management and dynamic packaging that underlie e-commerce sites leverages revenue management techniques. In 2002 GMAC launched an early implementation of web-based revenue management in the financial services industry.
There have also been high-profile failures and faux pas. Amazon.com was criticized for irrational price changes that resulted from a revenue management software bug. The Coca-Cola Company's plans for a dynamic pricing vending machine were put on hold as a result of negative consumer reactions. Revenue management is also blamed for much of the financial difficulty currently experienced by legacy carriers. The reliance of the major carriers on high fares in captive markets arguably created the conditions for low-cost carriers to thrive.
Use by industry.
There are three essential conditions for yield management to be applicable: there is a fixed amount of resources available for sale; the resources sold are perishable, so there is a time limit on selling them, after which they cease to be of value; and different customers are willing to pay a different price for using the same amount of resources.
If the resources available are not fixed or not perishable, the problem is limited to logistics, i.e. inventory or production management. If all customers were willing to pay the same price for using the same amount of resources, the challenge would perhaps be limited to selling as quickly as possible, e.g. if there are costs for holding inventory.
Yield management is especially relevant in cases where fixed costs are high relative to variable costs. The less variable cost there is, the more the additional revenue earned will contribute to the overall profit. This is because yield management focuses on maximizing expected marginal revenue for a given operation and planning horizon. It optimizes resource utilization by ensuring inventory availability to customers with the highest expected net revenue contribution and extracting the greatest level of ‘willingness to pay’ from the entire customer base. Yield management practitioners typically claim incremental revenue gains of 3% to 7%. In many industries this can equate to an increase in profits of over 100%.
Yield management has significantly altered the travel and hospitality industry since its inception in the mid-1980s. It requires analysts with detailed market knowledge and advanced computing systems who implement sophisticated mathematical techniques to analyze market behavior and capture revenue opportunities. It has evolved from the system airlines invented as a response to deregulation and quickly spread to hotels, car rental firms, cruise lines, media, telecommunications and energy to name a few. Its effectiveness in generating incremental revenues from an existing operation and customer base has made it particularly attractive to business leaders that prefer to generate return from revenue growth and enhanced capability rather than downsizing and cost cutting.
Airlines.
In the passenger airline case, capacity is regarded as fixed because changing what aircraft flies a certain service based on the demand is the exception rather than the rule. When the aircraft departs, the unsold seats cannot generate any revenue and thus can be said to have perished, or have spoiled. Airlines use specialized software to monitor how seats are reserved and react accordingly. There are various inventory controls such as a nested inventory system. For example, airlines can offer discounts on low-demand flights, where the flight will likely not sell out. When there is excess demand, the seats can be sold at a higher price.
Another way of capturing varying willingness to pay is market segmentation. A firm may repackage its basic inventory into different products to this end. In the passenger airline case this means implementing purchase restrictions, length of stay requirements and requiring fees for changing or canceling tickets.
The airline needs to keep a specific number of seats in reserve to cater to the probable demand for high-fare seats. This process can be managed by inventory controls or by managing the fare rules, such as advance-purchase (AP) restrictions (30-day, 21-day, 14-day and 7-day advance purchase, and day-of-departure/walk-up fares). The price of each seat varies directly with the number of seats reserved: the fewer seats that are reserved for a particular category, the lower the price of each seat. This will continue until the price of a seat in the premium class equals the price of a seat in the concession class. Depending on this, a floor price (lower price) for the next seat to be sold is set.
Hotels.
Hotels use this system in largely the same way, to calculate the rates. Yield management is one of the most common pricing strategies used in the hotel industry to increase reservations and boost revenue.
Rental.
In the rental car industry, yield management deals with the sale of optional insurance, damage waivers and vehicle upgrades. It accounts for a major portion of the rental company's profitability, and is monitored on a daily basis. In the equipment rental industry, yield management is a method to manage rental rates against capacity (available fleet) and demand.
Intercity buses.
Yield management has moved into the bus industry with companies such as Megabus (United Kingdom), Megabus (North America), BoltBus, and easyBus, which run low-cost networks in the United Kingdom and parts of the United States, and more recently, nakedbus.com and Intercape, which have networks in New Zealand and South Africa. In Chile, yield and revenue management systems for this industry are developed and operated by SARCAN, a Chilean company, with the bus operator Turbus as its principal customer. The Finnish low-cost inter-city bus service OnniBus, as well as the Polish PolskiBus, bases its revenue flow on yield management.
Multifamily housing.
In the multi-family residential industry, yield optimization is focused on producing supply and demand forecasts to determine rent recommendations for profit optimization. However, the use of yield optimization systems is fairly new to the industry, dating to the late 1990s, with Archstone Smith pioneering its use. The yield management systems include the LRO (Lease Rent Options) Revenue Management System from Rainmaker, the YieldStar Asset Optimization System from RealPage and the PricingPortal product from Property Solutions International.
Insurance.
Insurance companies use price (premium) optimization to improve profitability on policy sales. The method is widely used by property & casualty insurers and brokers in the UK, Spain and, to a lesser extent, in the US. Several vendors, such as Earnix, Willis Towers Watson, EMB, ODG, provide specialized pricing optimization software for the industry.
Telecommunications.
Communications service providers use, on average, just 35 to 40 percent of available network capacity. Recently, telecommunications software vendors such as Telcordia and Ericsson have promoted yield management as a strategy for communications service providers to generate additional revenue and reduce capital expenditures by maximizing the subscriber use of available network bandwidth. Approaches include basing a strategy on innovative services explicitly designed to use only spare capacity and borrowing proven methods from the airline industry. The approach can be more difficult to implement in the telecommunications industry than in the airline sector because of the difficulty of controlling, and sometimes refusing, network access to customers.
Similarities that exist between the airline and telecom industries include a large sunk cost combined with low marginal cost, perishable inventory, reservations, pricing flexibility and the opportunity to upsell. Differences that present challenges for communications service providers include low-value transactions and overall network complexity. Suggested approaches to executing a successful yield management strategy include accurate network information collection, bandwidth capacity allocation that does not impact service quality, the deployment of service management software such as real time policy and real-time charging, and using new marketing channels to target consumers with innovative services.
Online advertising.
Yield management in online ad sales is in essence the same as in the other industries mentioned above: managing the publisher's supply/inventory (banner impressions) against market demand, at the best price (CPM/RPM), while assuring the highest possible fill rates.
Railways.
While railways traditionally sold fully flexible tickets that were valid on all trains on a given day, or even on trains on several days, deregulation and (partial) privatization introduced yield management in the United Kingdom as well as for high-speed services in Germany and France. Tickets for the same route can be as cheap as €19 but can also go into the triple digits, depending on departure time, demand, and the time the ticket is booked.
Skiing.
Yield management has shown increasing popularity in the ski industry, especially in the North American markets. This ranges from non-physical rate fences, including age and validity differentiation, to fully dynamic prices. Determinants of such variable prices include date-specific expected demand factors (institutional and public holidays, weekends, weather, size and accessibility of the resort, etc.).
Pet boarding.
With predictable demand far exceeding fixed supply in the professional pet boarding industry, yield management has become an ever more popular practice for this segment of businesses. Much as in the hotel industry, yield management systems help gauge which restrictions to implement, such as length of stay, non-refundable rates, or close-to-arrival restrictions, and also ensure that rooms and services are sold at the right price to the right person at the right time.
Econometrics.
Yield management and econometrics center on detailed forecasting and mathematical optimization of marginal revenue opportunities. The opportunities arise from segmentation of consumer willingness to pay. If the market for a particular good follows a simple straight-line price/demand relationship, then at a single fixed price of $50 there is enough demand to sell 50 units of inventory. This results in $2,500 in revenue. However, the same price/demand relationship yields $4,000 if consumers are presented with multiple prices.
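A worked version of this comparison is sketched below; the linear demand curve D(p) = 100 − p and the four price tiers are assumptions chosen to reproduce the quoted figures, not data from the source.
```python
# Single-price vs. multi-price revenue under an assumed linear demand curve.
def units_demanded(price):
    return max(0, 100 - price)            # assumed demand curve D(p) = 100 - p

single_price_revenue = 50 * units_demanded(50)       # 2500

tiers = [80, 60, 40, 20]                              # highest price first (assumed tiers)
revenue, already_sold = 0, 0
for p in tiers:
    sold_at_p = units_demanded(p) - already_sold      # incremental buyers at this tier
    revenue += p * sold_at_p
    already_sold += sold_at_p

print(single_price_revenue, revenue)                  # 2500 4000
```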
In practice, the segmentation approach relies on adequate fences between consumers so that not everyone buys at the lowest price offered. The airlines use time of purchase to create this segmentation, with later-booking customers paying the higher fares. The fashion industry uses time in the opposite direction, discounting later in the selling season once the item is out of fashion or inappropriate for the time of year. Other approaches to fences involve attributes that create substantial value to the consumer at little or no cost to the seller. A backstage pass at a concert is a good example of this. Initially, yield management avoided the complexity caused by the interaction of absolute price and price position by using surrogates for price, such as booking class. By the mid-1990s, most implementations incorporated some measure of price elasticity. The airlines were exceptional in this case, preferring to focus on more detailed segmentation by implementing O&D (origin and destination) systems.
At the heart of yield management decision-making process is the trade-off of marginal yields from segments that are competing for the same inventory. In capacity-constrained cases, there is a bird-in-the-hand decision that forces the seller to reject lower revenue generating customers in the hopes that the inventory can be sold in a higher valued segment. The tradeoff is sometimes mistakenly identified as occurring at the intersection of the marginal revenue curves for the competing segments. While this is accurate when it supports marketing decisions where access to both segments is equivalent, it is wrong for inventory control decisions. In these cases the intersection of the marginal revenue curve of the higher valued segment with the actual value of the lower segment is the point of interest.
Consider, for example, a car rental company that must set up protection levels for its higher valued segments. By estimating where the marginal revenue curve of the luxury segment crosses the actual rental value of the midsize car segment, the company can decide how many luxury cars to make available to midsize car renters. Where the vertical line from this intersection point crosses the demand (horizontal) axis determines how many luxury cars should be protected for genuine luxury car renters. The need to calculate protection levels has led to a number of heuristic solutions, most notably EMSRa and EMSRb, which stand for Expected Marginal Seat Revenue version a and b respectively. The balancing point of interest is found using Littlewood's rule, which states that demand for formula_0 should be accepted as long as
formula_0 ≥ formula_5 × Prob(formula_6 > formula_7),
where
formula_0 is the value of the lower valued segment
formula_5 is the value of the higher valued segment
formula_6 is the demand for the higher valued segment and
formula_7 is the capacity left
This equation is re-arranged to compute protection levels as follows:
Prob(formula_6 > formula_8) = formula_0 / formula_5
In words, the seller wants to protect formula_8 units of inventory for the higher valued segment, where formula_8 is the level of demand for the higher valued segment that is exceeded with probability equal to the revenue ratio of the lower valued segment to the higher valued segment. This equation defines the EMSRa algorithm, which handles the two-segment case. EMSRb is smarter and handles multiple segments by comparing the revenue of the lower segment to a demand-weighted average of the revenues of the higher segments. Neither of these heuristics produces exactly the right answer, and increasingly implementations make use of Monte Carlo simulation to find optimal protection levels.
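A minimal numerical sketch of Littlewood's rule for the two-segment car rental example is given below, assuming normally distributed demand for the higher valued (luxury) segment; the fares and demand parameters are illustrative only.
```python
# Littlewood's rule / EMSRa protection level for two segments.
# Assumption: demand for the higher-valued segment is normally distributed.
from scipy.stats import norm

def protection_level(r_high, r_low, mean_demand, sd_demand):
    """Units to protect for the higher-valued segment: the level y with
    Prob(D_high > y) = r_low / r_high."""
    return norm.isf(r_low / r_high, loc=mean_demand, scale=sd_demand)

# Example: luxury cars rent for $120/day, midsize for $80/day,
# and luxury demand is roughly N(30, 10^2).
print(round(protection_level(120.0, 80.0, 30.0, 10.0), 1))
```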
Since the mid-1990s, increasingly sophisticated mathematical models have been developed, such as the dynamic programming formulation pioneered by Talluri and Van Ryzin, which has led to more accurate estimates of bid prices. Bid prices represent the minimum price a seller should accept for a single piece of inventory and are popular control mechanisms for hotels and car rental firms. Models derived from developments in financial engineering are intriguing, but they have been unstable and their parameters have proven difficult to estimate in practice. Yield management tends to focus on environments that are less rational than the financial markets.
Yield management system.
Firms that engage in yield management usually use computer yield management systems to do so. The Internet has greatly facilitated this process.
Enterprises that use yield management periodically review transactions for goods or services already supplied and for goods or services to be supplied in the future. They may also review information (including statistics) about events (known future events such as holidays, or unexpected past events such as terrorist attacks), competitive information (including prices), seasonal patterns, and other pertinent factors that affect sales. The models attempt to forecast total demand for all products/services they provide, by market segment and price point. Since total demand normally exceeds what the particular firm can produce in that period, the models attempt to optimize the firm's outputs to maximize revenue.
The optimization attempts to answer the question: "Given our operating constraints, what is the best mix of products and/or services for us to produce and sell in the period, and at what prices, to generate the highest expected revenue?"
Optimization can help the firm adjust prices and allocate capacity among market segments to maximize expected revenues. This can be done at different levels of detail:
Yield management is particularly suitable when selling perishable products, i.e. goods that become unsellable at a point in time (for example air tickets just after a flight takes off). Industries that use yield management include airlines, hotels, stadiums and other venues with a fixed number of seats, and advertising. With an advance forecast of demand and pricing flexibility, buyers will self-sort based on their price sensitivity (using more power in off-peak hours or going to the theater mid-week), their demand sensitivity (must have the higher cost early morning flight or must go to the Saturday night opera) or their time of purchase (usually paying a premium for booking late).
In this way, yield management's overall aim is to provide an optimal mix of goods at a variety of price points at different points in time or for different baskets of features. The system will try to maintain a distribution of purchases over time that is balanced as well as high.
Good yield management maximizes (or at least significantly increases) revenue production for the same number of units by taking advantage of forecasts of high-demand and low-demand periods, effectively shifting demand from high-demand periods to low-demand periods and charging a premium for late bookings. While yield management systems tend to generate higher revenues, the revenue streams tend to arrive later in the booking horizon as more capacity is held for late sale at premium prices.
Firms faced with lack of pricing power sometimes turn to yield management as a last resort. After a year or two using yield management, many of them are surprised to discover they have actually lowered prices for the majority of their opera seats or hotel rooms or other products. That is, they offer far higher discounts more frequently for off-peak times, while raising prices only marginally for peak times, resulting in higher revenue overall.
By doing this, they have actually increased quantity demanded by selectively introducing many more price points, as they learn about and react to the diversity of interests and purchase drivers of their customers.
Ethical issues and questions of efficacy.
Some consumers are concerned that yield management could penalize them for conditions which cannot be helped and are unethical to penalize. For example, the formulas, algorithms, and neural networks that determine airline ticket prices could feasibly consider frequent flyer information, which includes a wealth of socio-economic information such as age and home address. The airline then could charge higher prices to consumers who are between certain ages or who live in neighborhoods with higher average wealth, even if those neighborhoods also include poor households. Very few (if any) airlines using yield management are able to employ this level of price discrimination because prices are not set based on characteristics of the purchaser, which are in any case often not known at the time of purchase.
Some consumers may object that it is impossible for them to boycott yield management when buying some goods, such as airline tickets.
Yield management also includes many noncontroversial and more prevalent practices, such as varying prices over time to reflect demand. This level of yield management makes up the majority of yield management in the airline industry. For example, airlines may price a ticket on the Sunday after Thanksgiving at a higher fare than the Sunday a week later. Alternatively, they may make tickets more expensive when bought at the last minute than when bought six months in advance. The goal of this level of yield management is essentially trying to force demand to equal or exceed supply.
When yield management was introduced in the early 1990s, primarily in the airline industry, many suggested that despite the obvious immediate increase in revenues, it might harm customer satisfaction and loyalty, interfere with relationship marketing, and drive customers from firms that used yield management to firms that do not. Frequent flier programs were developed as a response to regain customer loyalty and reward frequent and high yield passengers. Today, yield management is nearly universal in many industries, including airlines.
Despite optimizing revenue in theory, introduction of yield management does not always achieve that in practice because of corporate image problems. In 2002, Deutsche Bahn, the German national railway company, experimented with yield management for frequent loyalty card passengers. The fixed pricing model that had existed for decades was replaced with a more demand-responsive pricing model, but that reform proved highly unpopular with passengers and led to widespread protests and a decline in passenger numbers.
Experimental studies of yield management decisions.
Recently, people working in the area of behavioral operations research have begun to study the yield management decisions of actual human decision makers. One question that this research addresses is how much might revenues increase if managers relied on yield management systems rather than their own judgment when making pricing decisions. Using methods from experimental economics, this work has revealed that yield management systems are likely to increase revenues significantly. Further, this research reveals that "errors" in yield management decisions tend to be quite systematic. For instance, Bearden, Murphy, and Rappaport showed that with respect to expected revenue maximizing policies, people tend to price too high when they have high levels of inventory and too low when their inventory levels are low.
{
"math_id": 0,
"text": "R_2"
},
{
"math_id": 1,
"text": "R"
},
{
"math_id": 2,
"text": "\\ge R"
},
{
"math_id": 3,
"text": " * Prob( D"
},
{
"math_id": 4,
"text": ">x ) "
},
{
"math_id": 5,
"text": "R_1"
},
{
"math_id": 6,
"text": "D_1"
},
{
"math_id": 7,
"text": "x"
},
{
"math_id": 8,
"text": "y"
},
{
"math_id": 9,
"text": " = Prob"
},
{
"math_id": 10,
"text": "( R"
},
{
"math_id": 11,
"text": "/R"
},
{
"math_id": 12,
"text": " )"
}
] | https://en.wikipedia.org/wiki?curid=1053994 |
1053995 | Stevens's power law | Empirical relationship between actual and perceived changed intensity of stimulus
Stevens' power law is an empirical relationship in psychophysics between an increased intensity or strength in a physical stimulus and the perceived magnitude increase in the sensation created by the stimulus. It is often considered to supersede the Weber–Fechner law, which is based on a logarithmic relationship between stimulus and sensation, because the power law describes a wider range of sensory comparisons, down to zero intensity.
The theory is named after psychophysicist Stanley Smith Stevens (1906–1973). Although the idea of a power law had been suggested by 19th-century researchers, Stevens is credited with reviving the law and publishing a body of psychophysical data to support it in 1957.
The general form of the law is
formula_0
where "I" is the intensity or strength of the stimulus in physical units (energy, weight, pressure, mixture proportions, etc.), ψ("I") is the magnitude of the sensation evoked by the stimulus, "a" is an exponent that depends on the type of stimulation or sensory modality, and "k" is a proportionality constant that depends on the units used.
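A minimal numerical sketch of the law is given below; the exponent 0.67 is the value Stevens reported for the loudness of a 3000 Hz tone, and "k" is left as an arbitrary unit-dependent constant.
```python
# Stevens' power law, psi(I) = k * I**a.
def perceived_magnitude(intensity, a, k=1.0):
    return k * intensity ** a

a_loudness = 0.67   # Stevens' reported exponent for loudness of a 3000 Hz tone
# Doubling the stimulus intensity multiplies the sensation by 2**a,
# independently of the starting intensity:
print(perceived_magnitude(2.0, a_loudness) / perceived_magnitude(1.0, a_loudness))  # about 1.59
```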
A distinction has been made between local psychophysics, where stimuli can only be discriminated with a probability around 50%, and global psychophysics, where the stimuli can be discriminated correctly with near certainty (Luce & Krumhansl, 1988). The Weber–Fechner law and methods described by L. L. Thurstone are generally applied in local psychophysics, whereas Stevens' methods are usually applied in global psychophysics.
Stevens reported exponents for a wide range of stimulus types and sensory modalities.
Methods.
The principal methods used by Stevens to measure the perceived intensity of a stimulus were "magnitude estimation" and "magnitude production". In magnitude estimation with a standard, the experimenter presents a stimulus called a "standard" and assigns it a number called the "modulus". For subsequent stimuli, subjects report numerically their perceived intensity relative to the standard so as to preserve the ratio between the sensations and the numerical estimates (e.g., a sound perceived twice as loud as the standard should be given a number twice the modulus). In magnitude estimation without a standard (usually just "magnitude estimation"), subjects are free to choose their own standard, assigning any number to the first stimulus and all subsequent ones with the only requirement being that the ratio between sensations and numbers is preserved. In magnitude production a number and a reference stimulus is given and subjects produce a stimulus that is perceived as that number times the reference. Also used is "cross-modality matching", which generally involves subjects altering the magnitude of one physical quantity, such as the brightness of a light, so that its perceived intensity is equal to the perceived intensity of another type of quantity, such as warmth or pressure.
Criticisms.
Stevens generally collected magnitude estimation data from multiple observers, averaged the data across subjects, and then fitted a power function to the data. Because the fit was generally reasonable, he concluded the power law was correct.
A principal criticism has been that Stevens' approach provides neither a direct test of the power law itself nor of the underlying assumptions of the magnitude estimation/production method: it simply fits curves to data points. In addition, the power law can be deduced mathematically from the Weber–Fechner logarithmic function (Mackay, 1963), and the relation makes predictions consistent with data (Staddon, 1978). As with all psychometric studies, Stevens' approach ignores individual differences in the stimulus–sensation relationship, and there are generally large individual differences in this relationship that averaging the data will obscure.
Stevens' main assertion was that, using magnitude estimations/productions, respondents were able to make judgements on a ratio scale (i.e., if "x" and "y" are values on a given ratio scale, then there exists a constant "k" such that "x" = "ky"). In the context of axiomatic psychophysics, a testable property was formulated capturing the implicit underlying assumption this assertion entailed. Specifically, for two proportions "p" and "q", and three stimuli, "x", "y", "z", if "y" is judged "p" times "x" and "z" is judged "q" times "y", then "t" = "pq" times "x" should be equal to "z". This amounts to assuming that respondents interpret numbers in a veridical way. This property was unambiguously rejected. Without assuming veridical interpretation of numbers, another property was formulated that, if sustained, meant that respondents could make ratio-scaled judgments, namely: if "y" is judged "p" times "x", "z" is judged "q" times "y", and if "y"' is judged "q" times "x" and "z"' is judged "p" times "y"', then "z" should equal "z"'. This property has been sustained in a variety of situations.
Critics of the power law also point out that the validity of the law is contingent on the measurement of perceived stimulus intensity that is employed in the relevant experiments. Under the condition that respondents' numerical distortion function and the psychophysical functions could be separated, a behavioral condition was formulated that is equivalent to the psychophysical function being a power function. This condition was confirmed for just over half the respondents, and the power form was found to be a reasonable approximation for the rest.
It has also been questioned, particularly in terms of signal detection theory, whether any given stimulus is actually associated with a particular and "absolute" perceived intensity; i.e. one that is independent of contextual factors and conditions. Consistent with this, Luce (1990, p. 73) observed that "by introducing contexts such as background noise in loudness judgements, the shape of the magnitude estimation functions certainly deviates sharply from a power function". Indeed, nearly all sensory judgments can be changed by the context in which a stimulus is perceived.
{
"math_id": 0,
"text": "\\psi(I) = k I ^a,"
}
] | https://en.wikipedia.org/wiki?curid=1053995 |
10543268 | Change detection | Statistical analysis
In statistical analysis, change detection or change point detection tries to identify times when the probability distribution of a stochastic process or time series changes. In general the problem concerns both detecting whether or not a change has occurred, or whether several changes might have occurred, and identifying the times of any such changes.
Specific applications, like step detection and edge detection, may be concerned with changes in the mean, variance, correlation, or spectral density of the process. More generally, change detection also includes the detection of anomalous behavior: anomaly detection.
In "offline" change point detection, it is assumed that a sequence of length formula_0 is available, and the goal is to identify whether any change point(s) occurred in the series. This is an example of post hoc analysis and is often approached using hypothesis testing methods. By contrast, "online" change point detection is concerned with detecting change points in an incoming data stream.
Background.
A time series measures the progression of one or more quantities over time. For instance, the figure above shows the level of water in the Nile river between 1870 and 1970. Change point detection is concerned with identifying whether, and if so "when", the behavior of the series changes significantly. In the Nile river example, the volume of water changes significantly after a dam was built in the river. Importantly, anomalous observations that differ from the ongoing behavior of the time series are not generally considered change points as long as the series returns to its previous behavior afterwards.
Mathematically, we can describe a time series as an ordered sequence of observations formula_1. We can write the joint distribution of a subset formula_2 of the time series as formula_3. If the goal is to determine whether a change point occurred at a time formula_4 in a finite time series of length formula_0, then we really ask whether formula_5 equals formula_6. This problem can be generalized to the case of more than one change point.
Algorithms.
Online change detection.
Using the sequential analysis ("online") approach, any change test must make a trade-off between common performance metrics such as the false alarm rate, the misdetection rate, and the detection delay.
In a Bayes change-detection problem, a prior distribution is available for the change time.
Online change detection is also done using streaming algorithms.
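A minimal sketch of one such online scheme, a one-sided CUSUM test for an upward shift in mean, is given below; the reference value "k" and threshold "h" are illustrative tuning parameters.
```python
# One-sided CUSUM test for an upward shift in mean, applied to a data stream.
def cusum_alarm(stream, target_mean, k=0.5, h=5.0):
    """Return the index at which the cumulative statistic first exceeds h,
    or None if no alarm is raised."""
    s = 0.0
    for i, x in enumerate(stream):
        s = max(0.0, s + (x - target_mean - k))   # accumulate positive drift only
        if s > h:
            return i
    return None

data = [0.1, -0.2, 0.3, 0.0, 2.1, 1.9, 2.3, 2.0, 2.2]   # mean shifts upward at index 4
print(cusum_alarm(data, target_mean=0.0))                # alarms a few samples after the shift
```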
Offline change detection.
Basseville (1993, Section 2.6) discusses offline change-in-mean detection with hypothesis testing based on the works of Page and Picard and maximum-likelihood estimation of the change time, related to two-phase regression.
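A minimal sketch of maximum-likelihood estimation of a single change in mean (equivalently, least squares under Gaussian noise with constant variance) is given below; the simulated series is illustrative only.
```python
# Maximum-likelihood (least-squares) estimate of a single change in mean.
import numpy as np

def single_changepoint(x):
    """Return the split index tau (start of the second segment) that minimizes
    the total within-segment sum of squared deviations."""
    x = np.asarray(x, dtype=float)
    best_tau, best_cost = None, np.inf
    for tau in range(1, len(x)):
        left, right = x[:tau], x[tau:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_tau, best_cost = tau, cost
    return best_tau

rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(2.0, 1.0, 100)])
print(single_changepoint(series))   # close to the true change point at 100
```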
Other approaches employ clustering based on maximum likelihood estimation, use optimization to infer the number and times of changes, via spectral analysis, or singular spectrum analysis.
Statistically speaking, change detection is often considered as a model selection problem. Models with more changepoints fit data better but with more parameters. The best trade-off can be found by optimizing a model selection criterion such as Akaike information criterion and Bayesian information criterion. Bayesian model selection has also been used. Bayesian methods often quantify uncertainties of all sorts and answer questions hard to tackle by classical methods, such as what is the probability of having a change at a given time and what is the probability of the data having a certain number of changepoints.
"Offline" approaches cannot be used on streaming data because they need to compare to statistics of the complete time series, and cannot react to changes in real-time but often provide a more accurate estimation of the change time and magnitude.
Applications.
Change detection tests are often used in manufacturing for quality control, intrusion detection, spam filtering, website tracking, and medical diagnostics.
Linguistic change detection.
Linguistic change detection refers to the ability to detect word-level changes across multiple presentations of the same sentence. Researchers have found that the amount of semantic overlap (i.e., relatedness) between the changed word and the new word influences the ease with which such a detection is made (Sturt, Sanford, Stewart, & Dawydiak, 2004).
Additional research has found that focusing one's attention on the word that will be changed during the initial reading of the original sentence can improve detection. This was shown using italicized text to focus attention, whereby the word that will be changing is italicized in the original sentence (Sanford, Sanford, Molle, & Emmott, 2006), as well as using clefting constructions such as "It was the tree that needed water" (Kennette, Wurm, & Van Havermaet, 2010). These change-detection phenomena appear to be robust, even occurring cross-linguistically when bilinguals read the original sentence in their native language and the changed sentence in their second language (Kennette, Wurm & Van Havermaet, 2010). Recently, researchers have detected word-level changes in semantics across time by computationally analyzing temporal corpora (for example, the word "gay" has acquired a new meaning over time) using change point detection. This is also applicable to reading non-words, such as music. Even though music is not a language, it is still written, and people comprehend its meaning, which involves perception and attention, allowing change detection to be present.
Visual change detection.
Visual change detection is one's ability to detect differences between two or more images or scenes. This is essential in many everyday tasks. One example is detecting changes on the road to drive safely and successfully. Change detection is crucial in operating motor vehicles to detect other vehicles, traffic control signals, pedestrians, and more. Another example of utilizing visual change detection is facial recognition. When noticing one's appearance, change detection is vital, as faces are "dynamic" and can change in appearance due to different factors such as "lighting conditions, facial expressions, aging, and occlusion". Change detection algorithms use various techniques, such as "feature tracking, alignment, and normalization," to capture and compare different facial features and patterns across individuals in order to correctly identify people. Visual change detection involves the integration of "multiple sensors inputs, cognitive processes, and attentional mechanisms," often focusing on multiple stimuli at once. The brain processes visual information from the eyes, compares it with previous knowledge stored in memory, and identifies differences between the two stimuli. This process occurs rapidly and unconsciously, allowing individuals to respond to changing environments and make necessary adjustments to their behavior.
Cognitive change detection.
There have been several studies conducted to analyze the cognitive functions of change detection. With cognitive change detection, researchers have found that most people overestimate their change detection, when in reality, they are more susceptible to change blindness than they think. Cognitive change detection has many complexities based on external factors, and sensory pathways play a key role in determining one's success in detecting changes. One study proposes and provides evidence that a multi-sensory pathway network, which consists of three sensory pathways, significantly increases the effectiveness of change detection. Sensory pathway one fuses the stimuli together, sensory pathway two involves using the middle concatenation strategy to learn the changed behavior, and sensory pathway three involves using the middle difference strategy to learn the changed behavior. With all three of these working together, change detection has a significantly increased success rate. It was previously believed that the posterior parietal cortex (PPC) played a role in enhancing change detection due to its focus on "sensory and task-related activity". However, studies have also disproven that the PPC is necessary for change detection; although the two show a high functional correlation, the PPC's mechanistic involvement in change detection is insignificant. Moreover, top-down processing plays an important role in change detection because it enables people to draw on background knowledge, which then influences perception; this is also common in children. Researchers have conducted a longitudinal study of children's development and change detection from infancy to adulthood. It was found that change detection is stronger in young infants than in older children, with top-down processing being a main contributor to this outcome.
{
"math_id": 0,
"text": "T"
},
{
"math_id": 1,
"text": "(x_1, x_2, \\ldots)"
},
{
"math_id": 2,
"text": "x_{a:b} = (x_a, x_{a+1}, \\ldots, x_{b})"
},
{
"math_id": 3,
"text": "p(x_{a:b})"
},
{
"math_id": 4,
"text": "\\tau"
},
{
"math_id": 5,
"text": "p(x_{1:\\tau})"
},
{
"math_id": 6,
"text": "p(x_{\\tau+1:T})"
}
] | https://en.wikipedia.org/wiki?curid=10543268 |
10544951 | Enoch calendar | Solar calendar described in the Book of Enoch
The Enoch calendar is an ancient calendar described in the pseudepigraphal Book of Enoch. It divided the year into four seasons of exactly 13 weeks. Each season consisted of two 30-day months followed by one 31-day month, with the 31st day ending the season, so that Enoch's year consisted of exactly 364 days.
The Enoch calendar was purportedly given to Enoch by the angel Uriel. Four days, inserted as the 31st day of every third month, were named instead of numbered, which "placed them outside the numbering". The Book of Enoch gives a count of 2,912 days for 8 years, which divides out to exactly 364 days per year. This specifically excludes any periodic intercalations.
Evaluation.
Calendar expert John Pratt wrote that
"The Enoch calendar has been criticized as hopelessly primitive because, with only 364 days, it would get out of sync with the seasons so quickly: In only 25 years the seasons would arrive an entire month early. Such a gross discrepancy, however, merely indicates that the method of intercalation has been omitted."
Pratt pointed out that by adding an extra week at the end of every seventh year (or Sabbatical year), and then adding an additional extra week to every fourth Sabbatical year (or every 28 years), the calendar could be as accurate as the Julian calendar:
formula_0
formula_1
formula_2
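A quick arithmetic check of this intercalation scheme, as described by Pratt above, can be written as follows.
```python
# Arithmetic check: 52 weeks every year, one extra week every 7th year,
# and a second extra week every 28th year.
from fractions import Fraction

weeks_per_year = Fraction(52) + Fraction(1, 7) + Fraction(1, 28)
days_per_year = 7 * weeks_per_year
print(days_per_year)          # 1461/4
print(float(days_per_year))   # 365.25, the Julian-calendar average year
```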
There is some evidence that the group whose writings were found at Qumran used a variation of the Enoch calendar (see Qumran calendar).
Further reading.
See the various writings of Julian Morgenstern, James C. VanderKam and others. | [
{
"math_id": 0,
"text": "\\left( \\frac{\\ 52\\ \\mathsf{weeks}\\ }{\\ 1\\ \\mathsf{year}\\ } + \\frac{\\ 1\\ \\mathsf{week}\\ }{\\ 7\\ \\mathsf{years}\\ } + \\frac{\\ 1\\ \\mathsf{week}\\ }{\\ 28\\ \\mathsf{years}\\ } \\right)\\ \\times\\ \\frac{\\ 7\\ \\mathsf{days}\\ }{\\ 1\\ \\mathsf{week}\\ } "
},
{
"math_id": 1,
"text": " = \\left(\\ 52 + \\frac{\\ 5\\ }{ 28 }\\ \\right)\\ \\frac{\\ \\mathsf{week}\\ }{ \\text{year} }\\ \\times\\ \\frac{\\ 7\\ \\mathsf{days}\\ }{\\ 1\\ \\mathsf{week}\\ }\n= \\frac{\\ \\left( 365 + \\tfrac{\\ 1\\ }{ 4 } \\right) \\mathsf{days}\\ }{\\ 1\\ \\mathsf{year}\\ } "
},
{
"math_id": 2,
"text": " = \\frac{\\ 365\\ \\mathsf{days}\\ }{\\ 1\\ \\mathsf{year}\\ } + \\frac{\\ 1\\ \\mathsf{day}\\ }{\\ 4\\ \\mathsf{years}\\ } ~."
}
] | https://en.wikipedia.org/wiki?curid=10544951 |
10545671 | Bapat–Beg theorem | In probability theory, the Bapat–Beg theorem gives the joint probability distribution of order statistics of independent but not necessarily identically distributed random variables in terms of the cumulative distribution functions of the random variables. Ravindra Bapat and M.I. Beg published the theorem in 1989, though they did not offer a proof. A simple proof was offered by Hande in 1994.
Often, all elements of the sample are obtained from the same population and thus have the same probability distribution. The Bapat–Beg theorem describes the order statistics when each element of the sample is obtained from a different statistical population and therefore has its own probability distribution.
Statement.
Let formula_0 be independent real valued random variables with cumulative distribution functions respectively formula_1. Write formula_2 for the order statistics. Then the joint probability distribution of the formula_3 order statistics (with formula_4 and formula_5) is
formula_6
where
formula_7
is the permanent of the given block matrix. (The figures under the braces show the number of columns.)
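For a single order statistic (the special case formula_12 = 1), the event that the "r"-th order statistic is at most "x" is simply the event that at least "r" of the variables are at most "x", so the Bapat–Beg expression reduces to a Poisson-binomial tail probability. A minimal numerical sketch of this special case is given below; the uniform distributions in the example are illustrative only.
```python
# Special case k = 1: P(X_(r) <= x) for independent, non-identically
# distributed X_i equals the probability that at least r of them are <= x.
from itertools import combinations
from math import prod

def cdf_order_statistic(r, x, cdfs):
    """P(X_(r) <= x) for independent X_i with marginal CDFs given in `cdfs`."""
    n = len(cdfs)
    p = [F(x) for F in cdfs]                     # p_i = F_i(x)
    total = 0.0
    for m in range(r, n + 1):                    # at least r of the n variables <= x
        for chosen in combinations(range(n), m):
            total += prod(p[i] if i in chosen else 1.0 - p[i] for i in range(n))
    return total

# Three independent uniforms on [0,1], [0,2], [0,3]; median (r = 2) at x = 1:
cdfs = [lambda t, b=b: min(max(t / b, 0.0), 1.0) for b in (1.0, 2.0, 3.0)]
print(cdf_order_statistic(2, 1.0, cdfs))         # 2/3
```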
Independent identically distributed case.
In the case when the variables formula_0 are independent and identically distributed with cumulative probability distribution function formula_8 for all "i" the theorem reduces to
formula_9
Complexity.
Glueck et al. note that the Bapat‒Beg formula is computationally intractable, because it involves an exponential number of permanents of the size of the number of random variables. However, when the random variables have only two possible distributions, the complexity can be reduced to formula_10. Thus, in the case of two populations, the complexity is polynomial in formula_11 for any fixed number of statistics formula_12. | [
{
"math_id": 0,
"text": "X_1,X_2,\\ldots, X_n"
},
{
"math_id": 1,
"text": "F_1(x),F_2(x),\\ldots,F_n(x)"
},
{
"math_id": 2,
"text": "X_{(1)},X_{(2)},\\ldots, X_{(n)}"
},
{
"math_id": 3,
"text": "n_1, n_2\\ldots, n_k"
},
{
"math_id": 4,
"text": "n_1<n_2<\\cdots < n_k"
},
{
"math_id": 5,
"text": "x_1<x_2<\\cdots< x_k"
},
{
"math_id": 6,
"text": "\\begin{align} \nF_{X_{(n_1)},\\ldots, X_{(n_k)}}(x_1,\\ldots,x_k)\n& = \\Pr ( X_{(n_1)}\\leq x_1 \\land X_{(n_2)}\\leq x_2 \\land\\cdots\\land X_{(n_k)} \\leq x_k) \\\\\n& = \\sum_{i_k=n_k}^n \\cdots\\sum_{i_2=n_2}^{i_3} \\sum _{i_1=n_1}^{i_2}\\frac{P_{i_1,\\ldots,i_k} (x_1,\\ldots ,x_k)}{i_1! (i_2-i_1)! \\cdots (n-i_k)!}, \\end{align}"
},
{
"math_id": 7,
"text": "\n\\begin{align}\nP_{i_1,\\ldots,i_k}(x_1,\\ldots,x_k) = \n\\operatorname{per}\n\\begin{bmatrix}\nF_1(x_1) \\cdots F_1(x_1) & \nF_1(x_2)-F_1(x_1) \\cdots F_1(x_2)-F_1(x_1) & \\cdots & \n1-F_1(x_k) \\cdots 1-F_1(x_k) \\\\\n\nF_2(x_1) \\cdots F_2(x_1) & \nF_2(x_2)-F_2(x_1) \\cdots F_2(x_2)-F_2(x_1) & \\cdots & \n1-F_2(x_k) \\cdots 1-F_1(x_k )\\\\\n\\vdots & \n\\vdots & & \n\\vdots \\\\\n\\underbrace{F_n(x_1) \\cdots F_n(x_1) }_{i_1} & \n\\underbrace{F_n(x_2)-F_n(x_1) \\cdots F_n(x_2)-F_n(x_1)}_{i_2-i_1} & \\cdots & \n\\underbrace{1-F_n(x_k) \\cdots 1-F_n(x_k) }_{n-i_k}\n\\end{bmatrix}\n\\end{align}\n"
},
{
"math_id": 8,
"text": "F_i=F"
},
{
"math_id": 9,
"text": "\n\\begin{align}\nF_{X_{(n_1)},\\ldots, X_{(n_k)}}(x_1,\\ldots,x_k)\n= \\sum_{i_k=n_k}^n \\cdots \\sum_{i_2=n_2}^{i_3} \\sum_{i_1=n_1}^{i_2} n! \\frac{F(x_1)^{i_1}}{i_1!} \\frac{(1-F(x_k))^{n-i_k}}{(n-i_k)!} \\prod\\limits_{j=2}^k \\frac{\\left[ F(x_j) -F(x_{j-1}) \\right]^{i_j-i_{j-1}} }{(i_j-i_{j-1})!}.\n\\end{align}\n"
},
{
"math_id": 10,
"text": "O(m^{2k})"
},
{
"math_id": 11,
"text": "m"
},
{
"math_id": 12,
"text": "k"
}
] | https://en.wikipedia.org/wiki?curid=10545671 |
10547121 | Gain (laser) | In laser physics, gain or amplification is a process where the medium transfers part of its energy to the emitted electromagnetic radiation, resulting in an increase in optical power. This is the basic principle of all lasers.
Quantitatively, "gain" is a measure of the ability of a laser medium to increase optical power. However, overall a laser consumes energy.
Definition.
The gain can be defined as the derivative of the logarithm of the power formula_0
as it passes through the medium; it is represented by G. The overall factor by which an input beam is amplified by a medium is the amplification coefficient, defined below.
formula_1
where formula_2 is the coordinate in the direction of propagation.
This equation neglects the effects of the transversal profile of the beam.
In the quasi-monochromatic paraxial approximation, the gain can be taken into account with the following equation
formula_3,
where
formula_4 is the variation of the index of refraction (which is assumed to be small),
formula_5 is the complex field, related to the physical electric field
formula_6 by the relation
formula_7, where
formula_8 is the polarization vector,
formula_9 is the wavenumber,
formula_10 is the frequency,
formula_11
is the transverse Laplacian;
formula_12 denotes the real part.
Gain in quasi two-level system.
In the simple quasi two-level system,
the gain can be expressed in terms of populations
formula_13 and
formula_14 of lower and excited states:
formula_15
where
formula_16 and
formula_17
are the effective emission and absorption cross-sections. In the case of a non-pumped medium, the gain is negative.
Round-trip gain means the gain multiplied by the length of propagation of the laser emission during a single round trip.
In the case of gain varying along the length, the round-trip gain can be expressed as the integral
formula_18.
This definition assumes either a flat-top profile of the laser beam inside the laser or
some effective gain averaged across the beam cross-section.
The amplification coefficient formula_19 can be defined as the ratio of the
output power formula_20 to the
input power formula_21:
formula_22.
It is related to the gain by
formula_23.
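As a rough numerical illustration of these relations (all numbers below are assumed, order-of-magnitude values, not data from this article):
import math

# Assumed, order-of-magnitude values for a solid-state gain medium.
sigma_e = 2.8e-23   # effective emission cross-section, m^2
sigma_a = 0.0       # effective absorption cross-section at the lasing wavelength, m^2
N2 = 1.0e24         # population density of the excited state, m^-3
N1 = 0.0            # population density of the lower state, m^-3
L = 0.1             # length of the gain medium, m

G = sigma_e * N2 - sigma_a * N1   # gain per unit length, m^-1
K = math.exp(G * L)               # amplification coefficient for uniform gain, K = exp(G*L)
print(G, K)                       # about 28 per metre and 16.4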
The gain and the amplification coefficient should not be confused with the magnification coefficient.
The magnification characterizes the scale of enlargement of an image; such enlargement
can be realized with passive elements, without a gain medium.
Alternative terminology and notations.
There is no universally established terminology for gain and absorption.
Authors are free to use their own notations, and it is not possible to
cover all the systems of notation in this article.
In radiophysics, gain may mean the logarithm of the amplification coefficient.
In many articles on laser physics that do not use the amplification coefficient formula_19 defined above,
the gain is called the "amplification coefficient", in analogy with the "absorption coefficient", which is actually not a coefficient at all;
one has to multiply it by the length of propagation (thickness), change the sign, and exponentiate
to obtain the attenuation factor of the sample.
Some publications use the term "increment" instead of gain and "decrement" instead of absorption coefficient to avoid the ambiguity,
exploiting the analogy between paraxial propagation of quasi-monochromatic waves and time evolution of a dynamic system.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "~P~"
},
{
"math_id": 1,
"text": "G = \\frac{{\\rm d}}{{\\rm d}z}\\ln(P)=\\frac{ {\\rm d}P /{\\rm d} z}{P}"
},
{
"math_id": 2,
"text": "~z~"
},
{
"math_id": 3,
"text": "\n2ik\\frac{\\partial E}{\\partial z}=\n\\Delta_{\\perp}E + 2 \\nu E + i G E"
},
{
"math_id": 4,
"text": "~\\nu~"
},
{
"math_id": 5,
"text": "~E~"
},
{
"math_id": 6,
"text": "~E_{\\rm phys}~"
},
{
"math_id": 7,
"text": "~E_{\\rm phys}={\\rm Re}\\left( \\vec e E \\exp(ikz-i\\omega t)\\right)~"
},
{
"math_id": 8,
"text": "~\\vec e~"
},
{
"math_id": 9,
"text": "~k~"
},
{
"math_id": 10,
"text": "~\\omega~"
},
{
"math_id": 11,
"text": "~\\Delta_{\\rm \\perp}=\\left(\n\\frac{\\partial ^2}{\\partial x^2}+\n\\frac{\\partial ^2}{\\partial y^2}\n\\right)\n~"
},
{
"math_id": 12,
"text": "~ \\rm Re ~"
},
{
"math_id": 13,
"text": "~N_1~"
},
{
"math_id": 14,
"text": "~N_2~"
},
{
"math_id": 15,
"text": "~ \nG = \\sigma_{\\rm e}N_2 - \\sigma_{\\rm a}N_1\n~"
},
{
"math_id": 16,
"text": "~ \\sigma_{\\rm e}~"
},
{
"math_id": 17,
"text": "~ \\sigma_{\\rm a}~"
},
{
"math_id": 18,
"text": "\ng=\\int G {\\rm d} z\n"
},
{
"math_id": 19,
"text": "~K~"
},
{
"math_id": 20,
"text": "~ P_{\\rm out}"
},
{
"math_id": 21,
"text": "~P_{\\rm in}"
},
{
"math_id": 22,
"text": "~ K=P_{\\rm out}/P_{\\rm in}"
},
{
"math_id": 23,
"text": "~K=\\exp\\left(\\int G {\\rm d} z\\right)~"
}
] | https://en.wikipedia.org/wiki?curid=10547121 |
105499 | Whittaker–Shannon interpolation formula | Signal (re-)construction algorithm
The Whittaker–Shannon interpolation formula or sinc interpolation is a method to construct a continuous-time bandlimited function from a sequence of real numbers. The formula dates back to the works of E. Borel in 1898, and E. T. Whittaker in 1915, and was cited from works of J. M. Whittaker in 1935, and in the formulation of the Nyquist–Shannon sampling theorem by Claude Shannon in 1949. It is also commonly called Shannon's interpolation formula and Whittaker's interpolation formula. E. T. Whittaker, who published it in 1915, called it the Cardinal series.
Definition.
Given a sequence of real numbers, "x"["n"], and a sampling period "T", the continuous function
formula_0
satisfies "x"("nT") = "x"["n"] for every integer "n" and, whenever the sum converges (for example, for square-summable samples), is bandlimited to frequencies of magnitude at most 1/(2"T").
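A minimal numerical sketch of the formula applied to a finite block of samples (the function name is ours; np.sinc is NumPy's normalized sinc, sin(π"x")/(π"x"), matching the convention above; truncating the infinite sum to finitely many samples leaves a small residual error):
import numpy as np

def sinc_interp(x, T, t):
    """Whittaker-Shannon reconstruction of x(t) from samples x[n] taken at times n*T."""
    n = np.arange(len(x))
    # each reconstructed value is a sinc-weighted sum over all available samples
    return np.array([np.sum(x * np.sinc((ti - n * T) / T)) for ti in t])

fs = 8.0                                 # sample rate, well above twice the 1 Hz signal frequency
T = 1.0 / fs
n = np.arange(64)
x = np.sin(2 * np.pi * 1.0 * n * T)      # samples of a 1 Hz sine
t = np.linspace(2.0, 6.0, 101)           # reconstruction times away from the block edges
err = np.max(np.abs(sinc_interp(x, T, t) - np.sin(2 * np.pi * 1.0 * t)))
print(err)                               # small, limited by truncating the sum to 64 samples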
Equivalent formulation: convolution/lowpass filter.
The interpolation formula is derived in the Nyquist–Shannon sampling theorem article, which points out that it can also be expressed as the convolution of an infinite impulse train with a sinc function:
formula_1
This is equivalent to filtering the impulse train with an ideal ("brick-wall") low-pass filter with gain of 1 (or 0 dB) in the passband. If the sample rate is sufficiently high, this means that the baseband image (the original signal before sampling) is passed unchanged and the other images are removed by the brick-wall filter.
Convergence.
The interpolation formula always converges absolutely and locally uniformly as long as
formula_2
By the Hölder inequality this is satisfied if the sequence formula_3 belongs to any of the formula_4 spaces with 1 ≤ "p" < ∞, that is
formula_5
This condition is sufficient, but not necessary. For example, the sum will generally converge if the sample sequence comes from sampling almost any stationary process, in which case the sample sequence is not square summable, and is not in any formula_4 space.
Stationary random processes.
If "x"["n"] is an infinite sequence of samples of a sample function of a wide-sense stationary process, then it is not a member of any formula_6 or Lp space, with probability 1; that is, the infinite sum of samples raised to a power "p" does not have a finite expected value. Nevertheless, the interpolation formula converges with probability 1. Convergence can readily be shown by computing the variances of truncated terms of the summation, and showing that the variance can be made arbitrarily small by choosing a sufficient number of terms. If the process mean is nonzero, then pairs of terms need to be considered to also show that the expected value of the truncated terms converges to zero.
Since a random process does not have a Fourier transform, the condition under which the sum converges to the original function must also be different. A stationary random process does have an autocorrelation function and hence a spectral density according to the Wiener–Khinchin theorem. A suitable condition for convergence to a sample function from the process is that the spectral density of the process be zero at all frequencies equal to and above half the sample rate.
See also.
<templatestyles src="Div col/styles.css"/> | [
{
"math_id": 0,
"text": "x(t) = \\sum_{n=-\\infty}^{\\infty} x[n] \\, {\\rm sinc}\\left(\\frac{t - nT}{T}\\right)\\,"
},
{
"math_id": 1,
"text": " x(t) = \\left( \\sum_{n=-\\infty}^{\\infty} T\\cdot \\underbrace{x(nT)}_{x[n]}\\cdot \\delta \\left( t - nT \\right) \\right) \\circledast \n\\left( \\frac{1}{T}{\\rm sinc}\\left(\\frac{t}{T}\\right) \\right). "
},
{
"math_id": 2,
"text": "\\sum_{n\\in\\Z,\\,n\\ne 0}\\left|\\frac{x[n]}n\\right|<\\infty."
},
{
"math_id": 3,
"text": "(x[n])_{n\\in\\Z}"
},
{
"math_id": 4,
"text": "\\ell^p(\\Z,\\mathbb C)"
},
{
"math_id": 5,
"text": "\\sum_{n\\in\\Z}\\left|x[n]\\right|^p<\\infty."
},
{
"math_id": 6,
"text": "\\ell^p"
}
] | https://en.wikipedia.org/wiki?curid=105499 |
10553199 | Jamshid al-Kashi | Persian astronomer and mathematician
Ghiyāth al-Dīn Jamshīd Masʿūd al-Kāshī (or al-Kāshānī) ( "Ghiyās-ud-dīn Jamshīd Kāshānī") (c. 1380 Kashan, Iran – 22 June 1429 Samarkand, Transoxania) was an astronomer and mathematician during the reign of Tamerlane.
Much of al-Kāshī's work was not brought to Europe, and much of it, even the extant work, remains unpublished in any form.
Biography.
Al-Kashi was born in 1380, in Kashan, in central Iran, to a Persian family. This region was controlled by Tamerlane, better known as Timur.
The situation changed for the better when Timur died in 1405, and his son, Shah Rokh, ascended into power. Shah Rokh and his wife, Goharshad, a Turkish princess, were very interested in the sciences, and they encouraged their court to study the various fields in great depth. Consequently, the period of their power became one of many scholarly accomplishments. This was the perfect environment for al-Kashi to begin his career as one of the world's greatest mathematicians.
Eight years after he came into power in 1409, their son, Ulugh Beg, founded an institute in Samarkand which soon became a prominent university. Students from all over the Middle East and beyond, flocked to this academy in the capital city of Ulugh Beg's empire. Consequently, Ulugh Beg gathered many great mathematicians and scientists of the Middle East. In 1414, al-Kashi took this opportunity to contribute vast amounts of knowledge to his people. His best work was done in the court of Ulugh Beg.
Al-Kashi was still working on his book, called “Risala al-watar wa’l-jaib” meaning “The Treatise on the Chord and Sine”, when he died, in 1429. Some state that he was murdered and say that Ulugh Beg probably ordered this, whereas others suggest he died a natural death. Regardless, after his death, Ulugh Beg described him as "a remarkable scientist" who "could solve the most difficult problems".
Astronomy.
"Khaqani Zij".
Al-Kashi produced a "Zij" entitled the "Khaqani Zij", which was based on Nasir al-Din al-Tusi's earlier "Zij-i Ilkhani". In his "Khaqani Zij", al-Kashi thanks the Timurid sultan and mathematician-astronomer Ulugh Beg, who invited al-Kashi to work at his observatory (see Islamic astronomy) and his university (see Madrasah) which taught theology. Al-Kashi produced sine tables to four sexagesimal digits (equivalent to eight decimal places) of accuracy for each degree and includes differences for each minute. He also produced tables dealing with transformations between coordinate systems on the celestial sphere, such as the transformation from the ecliptic coordinate system to the equatorial coordinate system.
"Astronomical Treatise on the size and distance of heavenly bodies".
He wrote the book Sullam al-Sama on the resolution of difficulties met by predecessors in the determination of distances and sizes of heavenly bodies, such as the Earth, the Moon, the Sun, and the Stars.
"Treatise on Astronomical Observational Instruments".
In 1416, al-Kashi wrote the "Treatise on Astronomical Observational Instruments", which described a variety of different instruments, including the triquetrum and armillary sphere, the equinoctial armillary and solsticial armillary of Mo'ayyeduddin Urdi, the sine and versine instrument of Urdi, the sextant of al-Khujandi, the Fakhri sextant at the Samarqand observatory, a double quadrant Azimuth-altitude instrument he invented, and a small armillary sphere incorporating an alhidade which he invented.
Plate of Conjunctions.
Al-Kashi invented the Plate of Conjunctions, an analog computing instrument used to determine the time of day at which planetary conjunctions will occur, and for performing linear interpolation.
Planetary computer.
Al-Kashi also invented a mechanical planetary computer which he called the Plate of Zones, which could graphically solve a number of planetary problems, including the prediction of the true positions in longitude of the Sun and Moon, and the planets in terms of elliptical orbits; the latitudes of the Sun, Moon, and planets; and the ecliptic of the Sun. The instrument also incorporated an alhidade and ruler.
Mathematics.
Law of cosines.
In French, the law of cosines is named "" (Theorem of Al-Kashi), as al-Kashi was the first to provide an explicit statement of the law of cosines in a form suitable for triangulation. His other work is al-"Risāla al"-"muhītīyya" or "The Treatise on the Circumference".
"The Treatise of Chord and Sine".
In "The Treatise on the Chord and Sine", al-Kashi computed sin 1° to nearly as much accuracy as his value for π, which was the most accurate approximation of sin 1° in his time and was not surpassed until Taqi al-Din in the sixteenth century. In algebra and numerical analysis, he developed an iterative method for solving cubic equations, which was not discovered in Europe until centuries later.
A method algebraically equivalent to Newton's method was known to his predecessor Sharaf al-Dīn al-Tūsī. Al-Kāshī improved on this by using a form of Newton's method to solve formula_0, that is, to extract the "P"th root of "N". In western Europe, a similar method was later described by Henry Briggs in his "Trigonometria Britannica", published in 1633.
In order to determine sin 1°, al-Kashi discovered the following formula, often attributed to François Viète in the sixteenth century:
formula_1
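Setting φ = 1° turns the identity into the cubic equation sin 3° = 3 sin 1° − 4 sin³ 1°, which can be solved for sin 1° by iteration once sin 3° is known. The following sketch is a modern floating-point illustration of such a fixed-point iteration, not a reproduction of al-Kashi's sexagesimal procedure:
import math

sin3 = math.sin(math.radians(3))   # sin 3 degrees is a constructible value; here we simply take the library value

# Solve sin(3 deg) = 3*x - 4*x**3 for x = sin(1 deg) by fixed-point iteration:
# x <- (sin(3 deg) + 4*x**3) / 3, which converges quickly because x is small.
x = 0.0
for _ in range(6):
    x = (sin3 + 4 * x**3) / 3

print(x)                            # 0.0174524064372...
print(math.sin(math.radians(1)))    # agrees to roughly machine precision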
"The Key to Arithmetic".
Computation of 2π.
In his numerical approximation, he correctly computed 2π to 9 sexagesimal digits in 1424, and he converted this estimate of 2π to 16 decimal places of accuracy. This was far more accurate than the estimates earlier given in Greek mathematics (3 decimal places by Ptolemy, AD 150), Chinese mathematics (7 decimal places by Zu Chongzhi, AD 480) or Indian mathematics (11 decimal places by Madhava of Kerala School, "c." 14th Century ). The accuracy of al-Kashi's estimate was not surpassed until Ludolph van Ceulen computed 20 decimal places of π 180 years later. Al-Kashi's goal was to compute the circle constant so precisely that the circumference of the largest possible circle (ecliptica) could be computed with the highest desirable precision (the diameter of a hair).
Decimal fractions.
In discussing decimal fractions, Struik states that (p. 7):
"The introduction of decimal fractions as a common computational practice can be dated back to the Flemish pamphlet "De Thiende", published at Leyden in 1585, together with a French translation, "La Disme", by the Flemish mathematician Simon Stevin (1548-1620), then settled in the Northern Netherlands. It is true that decimal fractions were used by the Chinese many centuries before Stevin and that the Persian astronomer Al-Kāshī used both decimal and sexagesimal fractions with great ease in his "Key to arithmetic" (Samarkand, early fifteenth century)."
Khayyam's triangle.
In considering Pascal's triangle, known in Persia as "Khayyam's triangle" (named after Omar Khayyám), Struik notes that (p. 21):
"The Pascal triangle appears for the first time (so far as we know at present) in a book of 1261 written by Yang Hui, one of the mathematicians of the Song dynasty in China. The properties of binomial coefficients were discussed by the Persian mathematician Jamshid Al-Kāshī in his "Key to arithmetic" of c. 1425. Both in China and Persia the knowledge of these properties may be much older. This knowledge was shared by some of the Renaissance mathematicians, and we see Pascal's triangle on the title page of Peter Apian's German arithmetic of 1527. After this, we find the triangle and the properties of binomial coefficients in several other authors."
Biographical film.
In 2009, IRIB produced and broadcast (through Channel 1 of IRIB) a biographical-historical film series on the life and times of Jamshid Al-Kāshi, with the title "The Ladder of the Sky" ("Nardebām-e Āsmān"). The series, which consists of 15 parts, with each part being 45 minutes long, is directed by Mohammad Hossein Latifi and produced by Mohsen Ali-Akbari. In this production, the role of the adult Jamshid Al-Kāshi is played by Vahid Jalilvand.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x^P - N = 0"
},
{
"math_id": 1,
"text": "\\sin 3 \\phi = 3 \\sin \\phi - 4 \\sin^3 \\phi\\,\\!"
}
] | https://en.wikipedia.org/wiki?curid=10553199 |
1055357 | Sheaf cohomology | Tool in algebraic topology
In mathematics, sheaf cohomology is the application of homological algebra to analyze the global sections of a sheaf on a topological space. Broadly speaking, sheaf cohomology describes the obstructions to solving a geometric problem globally when it can be solved locally. The central work for the study of sheaf cohomology is Grothendieck's 1957 Tôhoku paper.
Sheaves, sheaf cohomology, and spectral sequences were introduced by Jean Leray at the prisoner-of-war camp Oflag XVII-A in Austria. From 1940 to 1945, Leray and other prisoners organized a "université en captivité" in the camp.
Leray's definitions were simplified and clarified in the 1950s. It became clear that sheaf cohomology was not only a new approach to cohomology in algebraic topology, but also a powerful method in complex analytic geometry and algebraic geometry. These subjects often involve constructing global functions with specified local properties, and sheaf cohomology is ideally suited to such problems. Many earlier results such as the Riemann–Roch theorem and the Hodge theorem have been generalized or understood better using sheaf cohomology.
Definition.
The category of sheaves of abelian groups on a topological space "X" is an abelian category, and so it makes sense to ask when a morphism "f": "B" → "C" of sheaves is injective (a monomorphism) or surjective (an epimorphism). One answer is that "f" is injective (respectively surjective) if and only if the associated homomorphism on stalks "B""x" → "C""x" is injective (respectively surjective) for every point "x" in "X". It follows that "f" is injective if and only if the homomorphism "B"("U") → "C"("U") of sections over "U" is injective for every open set "U" in "X". Surjectivity is more subtle, however: the morphism "f" is surjective if and only if for every open set "U" in "X", every section "s" of "C" over "U", and every point "x" in "U", there is an open neighborhood "V" of "x" in "U" such that "s" restricted to "V" is the image of some section of "B" over "V". (In words: every section of "C" lifts "locally" to sections of "B".)
As a result, the question arises: given a surjection "B" → "C" of sheaves and a section "s" of "C" over "X", when is "s" the image of a section of "B" over "X"? This is a model for all kinds of local-vs.-global questions in geometry. Sheaf cohomology gives a satisfactory general answer. Namely, let "A" be the kernel of the surjection "B" → "C", giving a short exact sequence
formula_0
of sheaves on "X". Then there is a long exact sequence of abelian groups, called sheaf cohomology groups:
formula_1
where "H"0("X","A") is the group "A"("X") of global sections of "A" on "X". For example, if the group "H"1("X","A") is zero, then this exact sequence implies that every global section of "C" lifts to a global section of "B". More broadly, the exact sequence makes knowledge of higher cohomology groups a fundamental tool in aiming to understand sections of sheaves.
Grothendieck's definition of sheaf cohomology, now standard, uses the language of homological algebra. The essential point is to fix a topological space "X" and think of cohomology as a functor from sheaves of abelian groups on "X" to abelian groups. In more detail, start with the functor "E" ↦ "E"("X") from sheaves of abelian groups on "X" to abelian groups. This is left exact, but in general not right exact. Then the groups "H""i"("X","E") for integers "i" are defined as the right derived functors of the functor "E" ↦ "E"("X"). This makes it automatic that "H""i"("X","E") is zero for "i" < 0, and that "H"0("X","E") is the group "E"("X") of global sections. The long exact sequence above is also straightforward from this definition.
The definition of derived functors uses that the category of sheaves of abelian groups on any topological space "X" has enough injectives; that is, for every sheaf "E" there is an injective sheaf "I" with an injection "E" → "I". It follows that every sheaf "E" has an injective resolution:
formula_2
Then the sheaf cohomology groups "H""i"("X","E") are the cohomology groups (the kernel of one homomorphism modulo the image of the previous one) of the chain complex of abelian groups:
formula_3
Standard arguments in homological algebra imply that these cohomology groups are independent of the choice of injective resolution of "E".
The definition is rarely used directly to compute sheaf cohomology. It is nonetheless powerful, because it works in great generality (any sheaf of abelian groups on any topological space), and it easily implies the formal properties of sheaf cohomology, such as the long exact sequence above. For specific classes of spaces or sheaves, there are many tools for computing sheaf cohomology, some discussed below.
Functoriality.
For any continuous map "f": "X" → "Y" of topological spaces, and any sheaf "E" of abelian groups on "Y", there is a pullback homomorphism
formula_4
for every integer "j", where "f"*("E") denotes the inverse image sheaf or pullback sheaf. If "f" is the inclusion of a subspace "X" of "Y", "f"*("E") is the restriction of "E" to "X", often just called "E" again, and the pullback of a section "s" from "Y" to "X" is called the restriction "s"|"X".
Pullback homomorphisms are used in the Mayer–Vietoris sequence, an important computational result. Namely, let "X" be a topological space which is a union of two open subsets "U" and "V", and let "E" be a sheaf on "X". Then there is a long exact sequence of abelian groups:
formula_5
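As a standard illustration (a routine worked example, written here in LaTeX notation), the Mayer–Vietoris sequence computes the cohomology of the circle "S"1 with coefficients in the constant sheaf Z. Cover "S"1 by two open arcs "U" and "V" whose intersection "U" ∩ "V" is a disjoint union of two arcs. Since "U", "V", and the two components of "U" ∩ "V" are contractible, their sheaf cohomology with coefficients in Z vanishes in positive degrees (for example by the comparison with singular cohomology discussed in the next section), and the sequence reduces to
0 \to H^0(S^1,\mathbf{Z}) \to H^0(U,\mathbf{Z})\oplus H^0(V,\mathbf{Z}) \to H^0(U\cap V,\mathbf{Z}) \to H^1(S^1,\mathbf{Z}) \to 0,
that is, 0 → Z → Z ⊕ Z → Z ⊕ Z → "H"1("S"1,Z) → 0. The middle map sends ("a","b") to ("a" − "b", "a" − "b"), one difference for each component of "U" ∩ "V", so its kernel and cokernel are both isomorphic to Z. Hence "H"0("S"1,Z) ≅ Z, "H"1("S"1,Z) ≅ Z, and "H""j"("S"1,Z) = 0 for "j" ≥ 2, in agreement with singular cohomology.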
Sheaf cohomology with constant coefficients.
For a topological space formula_6 and an abelian group formula_7, the constant sheaf formula_8 means the sheaf of locally constant functions with values in formula_7. The sheaf cohomology groups formula_9 with constant coefficients are often written simply as formula_10, unless this could cause confusion with another version of cohomology such as singular cohomology.
For a continuous map "f": "X" → "Y" and an abelian group "A", the pullback sheaf "f"*("A""Y") is isomorphic to "A""X". As a result, the pullback homomorphism makes sheaf cohomology with constant coefficients into a contravariant functor from topological spaces to abelian groups.
For any spaces "X" and "Y" and any abelian group "A", two homotopic maps "f" and "g" from "X" to "Y" induce the "same" homomorphism on sheaf cohomology:
formula_11
It follows that two homotopy equivalent spaces have isomorphic sheaf cohomology with constant coefficients.
Let "X" be a paracompact Hausdorff space which is locally contractible, even in the weak sense that every open neighborhood "U" of a point "x" contains an open neighborhood "V" of "x" such that the inclusion "V" → "U" is homotopic to a constant map. Then the singular cohomology groups of "X" with coefficients in an abelian group "A" are isomorphic to sheaf cohomology with constant coefficients, "H"*("X","A""X"). For example, this holds for "X" a topological manifold or a CW complex.
As a result, many of the basic calculations of sheaf cohomology with constant coefficients are the same as calculations of singular cohomology. See the article on cohomology for the cohomology of spheres, projective spaces, tori, and surfaces.
For arbitrary topological spaces, singular cohomology and sheaf cohomology (with constant coefficients) can be different. This happens even for "H"0. The singular cohomology "H"0("X",Z) is the group of all functions from the set of path components of "X" to the integers Z, whereas sheaf cohomology "H"0("X",Z"X") is the group of locally constant functions from "X" to Z. These are different, for example, when "X" is the Cantor set. Indeed, the sheaf cohomology "H"0("X",Z"X") is a countable abelian group in that case, whereas the singular cohomology "H"0("X",Z) is the group of "all" functions from "X" to Z, which has cardinality
formula_12
For a paracompact Hausdorff space "X" and any sheaf "E" of abelian groups on "X", the cohomology groups "H""j"("X","E") are zero for "j" greater than the covering dimension of "X". (This does not hold in the same generality for singular cohomology: for example, there is a compact subset of Euclidean space R3 that has nonzero singular cohomology in infinitely many degrees.) The covering dimension agrees with the usual notion of dimension for a topological manifold or a CW complex.
Flabby and soft sheaves.
A sheaf "E" of abelian groups on a topological space "X" is called acyclic if "H""j"("X","E") = 0 for all "j" > 0. By the long exact sequence of sheaf cohomology, the cohomology of any sheaf can be computed from any acyclic resolution of "E" (rather than an injective resolution). Injective sheaves are acyclic, but for computations it is useful to have other examples of acyclic sheaves.
A sheaf "E" on "X" is called flabby (French: "flasque") if every section of "E" on an open subset of "X" extends to a section of "E" on all of "X". Flabby sheaves are acyclic. Godement defined sheaf cohomology via a canonical flabby resolution of any sheaf; since flabby sheaves are acyclic, Godement's definition agrees with the definition of sheaf cohomology above.
A sheaf "E" on a paracompact Hausdorff space "X" is called soft if every section of the restriction of "E" to a closed subset of "X" extends to a section of "E" on all of "X". Every soft sheaf is acyclic.
Some examples of soft sheaves are the sheaf of real-valued continuous functions on any paracompact Hausdorff space, or the sheaf of smooth ("C"∞) functions on any smooth manifold. More generally, any sheaf of modules over a soft sheaf of commutative rings is soft; for example, the sheaf of smooth sections of a vector bundle over a smooth manifold is soft.
For example, these results form part of the proof of de Rham's theorem. For a smooth manifold "X", the Poincaré lemma says that the de Rham complex is a resolution of the constant sheaf R"X":
formula_13
where Ω"X""j" is the sheaf of smooth "j"-forms and the map Ω"X""j" → Ω"X""j"+1 is the exterior derivative "d". By the results above, the sheaves Ω"X""j" are soft and therefore acyclic. It follows that the sheaf cohomology of "X" with real coefficients is isomorphic to the de Rham cohomology of "X", defined as the cohomology of the complex of real vector spaces:
formula_14
The other part of de Rham's theorem is to identify sheaf cohomology and singular cohomology of "X" with real coefficients; that holds in greater generality, as discussed above.
Čech cohomology.
Čech cohomology is an approximation to sheaf cohomology that is often useful for computations. Namely, let formula_15 be an open cover of a topological space "X", and let "E" be a sheaf of abelian groups on "X". Write the open sets in the cover as "U""i" for elements "i" of a set "I", and fix an ordering of "I". Then Čech cohomology formula_16 is defined as the cohomology of an explicit complex of abelian groups with "j"th group
formula_17
There is a natural homomorphism formula_18. Thus Čech cohomology is an approximation to sheaf cohomology using only the sections of "E" on finite intersections of the open sets "U""i".
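Concretely, the differential of this complex is the alternating sum of restrictions: a "j"-cochain α is sent to the ("j"+1)-cochain δα given (a standard formula, spelled out here in LaTeX notation for convenience) by
(\delta\alpha)_{i_0,\ldots,i_{j+1}} = \sum_{k=0}^{j+1} (-1)^k \, \alpha_{i_0,\ldots,\widehat{i_k},\ldots,i_{j+1}} \big|_{U_{i_0}\cap\cdots\cap U_{i_{j+1}}},
where the hat indicates that the index "i""k" is omitted; one checks that δ ∘ δ = 0, so the cohomology groups are defined.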
If every finite intersection "V" of the open sets in formula_15 has no higher cohomology with coefficients in "E", meaning that "H""j"("V","E") = 0 for all "j" > 0, then the homomorphism from Čech cohomology formula_16 to sheaf cohomology is an isomorphism.
Another approach to relating Čech cohomology to sheaf cohomology is as follows. The Čech cohomology groups formula_19 are defined as the direct limit of formula_16 over all open covers formula_15 of "X" (where open covers are ordered by refinement). There is a homomorphism formula_20 from Čech cohomology to sheaf cohomology, which is an isomorphism for "j" ≤ 1. For arbitrary topological spaces, Čech cohomology can differ from sheaf cohomology in higher degrees. Conveniently, however, Čech cohomology is isomorphic to sheaf cohomology for any sheaf on a paracompact Hausdorff space.
The isomorphism formula_21 implies a description of "H"1("X","E") for any sheaf "E" of abelian groups on a topological space "X": this group classifies the "E"-torsors (also called principal "E"-bundles) over "X", up to isomorphism. (This statement generalizes to any sheaf of groups "G", not necessarily abelian, using the non-abelian cohomology set "H"1("X","G").) By definition, an "E"-torsor over "X" is a sheaf "S" of sets together with an action of "E" on "S" such that every point in "X" has an open neighborhood on which "S" is isomorphic to "E", with "E" acting on itself by translation. For example, on a ringed space ("X","O""X"), it follows that the Picard group of invertible sheaves on "X" is isomorphic to the sheaf cohomology group "H"1("X","O""X"*), where "O""X"* is the sheaf of units in "O""X".
Relative cohomology.
For a subset "Y" of a topological space "X" and a sheaf "E" of abelian groups on "X", one can define relative cohomology groups:
formula_22
for integers "j". Other names are the cohomology of "X" with support in "Y", or (when "Y" is closed in "X") local cohomology. A long exact sequence relates relative cohomology to sheaf cohomology in the usual sense:
formula_23
When "Y" is closed in "X", cohomology with support in "Y" can be defined as the derived functors of the functor
formula_24
the group of sections of "E" that are supported on "Y".
There are several isomorphisms known as excision. For example, if "X" is a topological space with subspaces "Y" and "U" such that the closure of "Y" is contained in the interior of "U", and "E" is a sheaf on "X", then the restriction
formula_25
is an isomorphism. (So cohomology with support in a closed subset "Y" only depends on the behavior of the space "X" and the sheaf "E" near "Y".) Also, if "X" is a paracompact Hausdorff space that is the union of closed subsets "A" and "B", and "E" is a sheaf on "X", then the restriction
formula_26
is an isomorphism.
Cohomology with compact support.
Let "X" be a locally compact topological space. (In this article, a locally compact space is understood to be Hausdorff.) For a sheaf "E" of abelian groups on "X", one can define cohomology with compact support "H"c"j"("X","E"). These groups are defined as the derived functors of the functor of compactly supported sections:
formula_27
There is a natural homomorphism "H"c"j"("X","E") →
"H""j"("X","E"), which is an isomorphism for "X" compact.
For a sheaf "E" on a locally compact space "X", the compactly supported cohomology of "X" × R with coefficients in the pullback of "E" is a shift of the compactly supported cohomology of "X":
formula_28
It follows, for example, that "H""c""j"(R"n",Z) is isomorphic to Z if "j" = "n" and is zero otherwise.
Compactly supported cohomology is not functorial with respect to arbitrary continuous maps. For a proper map "f": "Y" → "X" of locally compact spaces and a sheaf "E" on "X", however, there is a pullback homomorphism
formula_29
on compactly supported cohomology. Also, for an open subset "U" of a locally compact space "X" and a sheaf "E" on "X", there is a pushforward homomorphism known as extension by zero:
formula_30
Both homomorphisms occur in the long exact localization sequence for compactly supported cohomology, for a locally compact space "X" and a closed subset "Y":
formula_31
Cup product.
For any sheaves "A" and "B" of abelian groups on a topological space "X", there is a bilinear map, the cup product
formula_32
for all "i" and "j". Here "A"⊗"B" denotes the tensor product over Z, but if "A" and "B" are sheaves of modules over some sheaf "O""X" of commutative rings, then one can map further from "H""i"+"j"(X,"A"⊗Z"B") to "H""i"+"j"(X,"A"⊗"O""X""B"). In particular, for a sheaf "O""X" of commutative rings, the cup product makes the direct sum
formula_33
into a graded-commutative ring, meaning that
formula_34
for all "u" in "H""i" and "v" in "H""j".
Complexes of sheaves.
The definition of sheaf cohomology as a derived functor extends to define cohomology of a topological space "X" with coefficients in any complex "E" of sheaves:
formula_35
In particular, if the complex "E" is bounded below (the sheaf "E""j" is zero for "j" sufficiently negative), then "E" has an injective resolution "I" just as a single sheaf does. (By definition, "I" is a bounded below complex of injective sheaves with a chain map "E" → "I" that is a quasi-isomorphism.) Then the cohomology groups "H""j"("X","E") are defined as the cohomology of the complex of abelian groups
formula_36
The cohomology of a space with coefficients in a complex of sheaves was earlier called hypercohomology, but usually now just "cohomology".
More generally, for any complex of sheaves "E" (not necessarily bounded below) on a space "X", the cohomology group "H""j"("X","E") is defined as a group of morphisms in the derived category of sheaves on "X":
formula_37
where Z"X" is the constant sheaf associated to the integers, and "E"["j"] means the complex "E" shifted "j" steps to the left.
Poincaré duality and generalizations.
A central result in topology is the Poincaré duality theorem: for a closed oriented connected topological manifold "X" of dimension "n" and a field "k", the group "H""n"("X","k") is isomorphic to "k", and the cup product
formula_38
is a perfect pairing for all integers "j". That is, the resulting map from "H""j"("X","k") to the dual space "H""n"−"j"("X","k")* is an isomorphism. In particular, the vector spaces "H""j"("X","k") and "H""n"−"j"("X","k")* have the same (finite) dimension.
Many generalizations are possible using the language of sheaf cohomology. If "X" is an oriented "n"-manifold, not necessarily compact or connected, and "k" is a field, then cohomology is the dual of cohomology with compact support:
formula_39
For any manifold "X" and field "k", there is a sheaf "o""X" on "X", the orientation sheaf, which is locally (but perhaps not globally) isomorphic to the constant sheaf "k". One version of Poincaré duality for an arbitrary "n"-manifold "X" is the isomorphism:
formula_40
More generally, if "E" is a locally constant sheaf of "k"-vector spaces on an "n"-manifold "X" and the stalks of "E" have finite dimension, then there is an isomorphism
formula_41
With coefficients in an arbitrary commutative ring rather than a field, Poincaré duality is naturally formulated as an isomorphism from cohomology to Borel–Moore homology.
Verdier duality is a vast generalization. For any locally compact space "X" of finite dimension and any field "k", there is an object "D""X" in the derived category "D"("X") of sheaves on "X" called the dualizing complex (with coefficients in "k"). One case of Verdier duality is the isomorphism:
formula_42
For an "n"-manifold "X", the dualizing complex "D""X" is isomorphic to the shift "o""X"["n"] of the orientation sheaf. As a result, Verdier duality includes Poincaré duality as a special case.
Alexander duality is another useful generalization of Poincaré duality. For any closed subset "X" of an oriented "n"-manifold "M" and any field "k", there is an isomorphism:
formula_43
This is interesting already for "X" a compact subset of "M" = R"n", where it says (roughly speaking) that the cohomology of R"n"−"X" is the dual of the sheaf cohomology of "X". In this statement, it is essential to consider sheaf cohomology rather than singular cohomology, unless one makes extra assumptions on "X" such as local contractibility.
Higher direct images and the Leray spectral sequence.
Let "f": "X" → "Y" be a continuous map of topological spaces, and let "E" be a sheaf of abelian groups on "X". The direct image sheaf "f"*"E" is the sheaf on "Y" defined by
formula_44
for any open subset "U" of "Y". For example, if "f" is the map from "X" to a point, then "f"*"E" is the sheaf on a point corresponding to the group "E"("X") of global sections of "E".
The functor "f"* from sheaves on "X" to sheaves on "Y" is left exact, but in general not right exact. The higher direct image sheaves R"i""f"*"E" on "Y" are defined as the right derived functors of the functor "f"*. Another description is that R"i""f"*"E" is the sheaf associated to the presheaf
formula_45
on "Y". Thus, the higher direct image sheaves describe the cohomology of inverse images of small open sets in "Y", roughly speaking.
The Leray spectral sequence relates cohomology on "X" to cohomology on "Y". Namely, for any continuous map "f": "X" → "Y" and any sheaf "E" on "X", there is a spectral sequence
formula_46
This is a very general result. The special case where "f" is a fibration and "E" is a constant sheaf plays an important role in homotopy theory under the name of the Serre spectral sequence. In that case, the higher direct image sheaves are locally constant, with stalks the cohomology groups of the fibers "F" of "f", and so the Serre spectral sequence can be written as
formula_47
for an abelian group "A".
A simple but useful case of the Leray spectral sequence is that for any closed subset "X" of a topological space "Y" and any sheaf "E" on "X", writing "f": "X" → "Y" for the inclusion, there is an isomorphism
formula_48
As a result, any question about sheaf cohomology on a closed subspace can be translated to a question about the direct image sheaf on the ambient space.
Finiteness of cohomology.
There is a strong finiteness result on sheaf cohomology. Let "X" be a compact Hausdorff space, and let "R" be a principal ideal domain, for example a field or the ring Z of integers. Let "E" be a sheaf of "R"-modules on "X", and assume that "E" has "locally finitely generated cohomology", meaning that for each point "x" in "X", each integer "j", and each open neighborhood "U" of "x", there is an open neighborhood "V" ⊂ "U" of "x" such that the image of "H""j"("U","E") → "H""j"("V","E") is a finitely generated "R"-module. Then the cohomology groups "H""j"("X","E") are finitely generated "R"-modules.
For example, for a compact Hausdorff space "X" that is locally contractible (in the weak sense discussed above), the sheaf cohomology group "H""j"("X",Z) is finitely generated for every integer "j".
One case where the finiteness result applies is that of a constructible sheaf. Let "X" be a topologically stratified space. In particular, "X" comes with a sequence of closed subsets
formula_49
such that each difference "X""i"−"X""i"−1 is a topological manifold of dimension "i". A sheaf "E" of "R"-modules on "X" is constructible with respect to the given stratification if the restriction of "E" to each stratum "X""i"−"X""i"−1 is locally constant, with stalk a finitely generated "R"-module. A sheaf "E" on "X" that is constructible with respect to the given stratification has locally finitely generated cohomology. If "X" is compact, it follows that the cohomology groups "H""j"("X","E") of "X" with coefficients in a constructible sheaf are finitely generated.
More generally, suppose that "X" is compactifiable, meaning that there is a compact stratified space "W" containing "X" as an open subset, with "W"–"X" a union of connected components of strata. Then, for any constructible sheaf "E" of "R"-modules on "X", the "R"-modules "H""j"("X","E") and "H""c""j"("X","E") are finitely generated. For example, any complex algebraic variety "X", with its classical (Euclidean) topology, is compactifiable in this sense.
Cohomology of coherent sheaves.
In algebraic geometry and complex analytic geometry, coherent sheaves are a class of sheaves of particular geometric importance. For example, an algebraic vector bundle (on a locally Noetherian scheme) or a holomorphic vector bundle (on a complex analytic space) can be viewed as a coherent sheaf, but coherent sheaves have the advantage over vector bundles that they form an abelian category. On a scheme, it is also useful to consider the quasi-coherent sheaves, which include the locally free sheaves of infinite rank.
A great deal is known about the cohomology groups of a scheme or complex analytic space with coefficients in a coherent sheaf. This theory is a key technical tool in algebraic geometry. Among the main theorems are results on the vanishing of cohomology in various situations, results on finite-dimensionality of cohomology, comparisons between coherent sheaf cohomology and singular cohomology such as Hodge theory, and formulas on Euler characteristics in coherent sheaf cohomology such as the Riemann–Roch theorem.
Sheaves on a site.
In the 1960s, Grothendieck defined the notion of a site, meaning a category equipped with a Grothendieck topology. A site "C" axiomatizes the notion of a set of morphisms "V"α → "U" in "C" being a "covering" of "U". A topological space "X" determines a site in a natural way: the category "C" has objects the open subsets of "X", with morphisms being inclusions, and with a set of morphisms "V"α → "U" being called a covering of "U" if and only if "U" is the union of the open subsets "V"α. The motivating example of a Grothendieck topology beyond that case was the étale topology on schemes. Since then, many other Grothendieck topologies have been used in algebraic geometry: the fpqc topology, the Nisnevich topology, and so on.
The definition of a sheaf works on any site. So one can talk about a sheaf of sets on a site, a sheaf of abelian groups on a site, and so on. The definition of sheaf cohomology as a derived functor also works on a site. So one has sheaf cohomology groups "H""j"("X", "E") for any object "X" of a site and any sheaf "E" of abelian groups. For the étale topology, this gives the notion of étale cohomology, which led to the proof of the Weil conjectures. Crystalline cohomology and many other cohomology theories in algebraic geometry are also defined as sheaf cohomology on an appropriate site.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " 0\\to A\\to B\\to C\\to 0"
},
{
"math_id": 1,
"text": " 0\\to H^0(X,A) \\to H^0(X,B) \\to H^0(X,C) \\to H^1(X,A) \\to \\cdots,"
},
{
"math_id": 2,
"text": "0\\to E\\to I_0\\to I_1\\to I_2\\to \\cdots."
},
{
"math_id": 3,
"text": " 0\\to I_0(X) \\to I_1(X) \\to I_2(X)\\to \\cdots."
},
{
"math_id": 4,
"text": "f^*\\colon H^j(Y,E) \\to H^j(X,f^*(E))"
},
{
"math_id": 5,
"text": " 0\\to H^0(X,E) \\to H^0(U,E)\\oplus H^0(V,E) \\to H^0(U\\cap V, E) \\to H^1(X,E) \\to \\cdots."
},
{
"math_id": 6,
"text": "X"
},
{
"math_id": 7,
"text": "A"
},
{
"math_id": 8,
"text": "A_X"
},
{
"math_id": 9,
"text": "H^j(X,A_X)"
},
{
"math_id": 10,
"text": "H^j(X,A)"
},
{
"math_id": 11,
"text": "f^*=g^*: H^j(Y,A)\\to H^j(X,A)."
},
{
"math_id": 12,
"text": "2^{2^{\\aleph_0}}."
},
{
"math_id": 13,
"text": "0\\to\\mathbf{R}_X\\to\\Omega^0_X\\to\\Omega^1_X\\to\\cdots,"
},
{
"math_id": 14,
"text": "0\\to \\Omega^0_X(X)\\to\\Omega^1_X(X)\\to\\cdots."
},
{
"math_id": 15,
"text": "\\mathcal{U}"
},
{
"math_id": 16,
"text": "H^j(\\mathcal{U},E)"
},
{
"math_id": 17,
"text": "C^j(\\mathcal{U},E)=\\prod_{i_0<\\cdots<i_j}E(U_{i_0}\\cap\\cdots\\cap U_{i_j})."
},
{
"math_id": 18,
"text": "H^j(\\mathcal{U},E)\\to H^j(X,E)"
},
{
"math_id": 19,
"text": "\\check{H}^j(X,E)"
},
{
"math_id": 20,
"text": "\\check{H}^j(X,E)\\to H^j(X,E)"
},
{
"math_id": 21,
"text": "\\check{H}^1(X,E)\\cong H^1(X,E)"
},
{
"math_id": 22,
"text": "H^j_Y(X,E)=H^j(X,X-Y;E)"
},
{
"math_id": 23,
"text": "\\cdots \\to H^j_Y(X,E)\\to H^j(X,E)\\to H^j(X-Y,E)\\to H^{j+1}_Y(X,E)\\to\\cdots."
},
{
"math_id": 24,
"text": "H^0_Y(X,E):=\\{s\\in E(X): s|_{X-Y}=0\\},"
},
{
"math_id": 25,
"text": "H^j_Y(X,E)\\to H^j_Y(U,E)"
},
{
"math_id": 26,
"text": "H^j(X,B;E)\\to H^j(A,A\\cap B;E)"
},
{
"math_id": 27,
"text": "H^0_c(X,E)=\\{s\\in E(X): \\text{there is a compact subset }K\\text{ of }X\\text{ with }s|_{X-K}=0\\}."
},
{
"math_id": 28,
"text": "H^{j+1}_c(X\\times\\mathbf{R},E)\\cong H^j_c(X,E)."
},
{
"math_id": 29,
"text": "f^*\\colon H^j_c(X,E)\\to H^j_c(Y,f^*(E))"
},
{
"math_id": 30,
"text": "H^j_c(U,E)\\to H^j_c(X,E)."
},
{
"math_id": 31,
"text": "\\cdots\\to H^j_c(X-Y,E)\\to H^j_c(X,E)\\to H^j_c(Y,E)\\to H^{j+1}_c(X-Y,E)\\to\\cdots."
},
{
"math_id": 32,
"text": "H^i(X,A)\\times H^j(X,B)\\to H^{i+j}(X,A\\otimes B),"
},
{
"math_id": 33,
"text": "H^*(X,O_X) = \\bigoplus_j H^j(X,O_X)"
},
{
"math_id": 34,
"text": "vu=(-1)^{ij}uv"
},
{
"math_id": 35,
"text": "\\cdots\\to E_j\\to E_{j+1}\\to E_{j+2}\\to \\cdots"
},
{
"math_id": 36,
"text": "\\cdots \\to I_j(X)\\to I_{j+1}(X)\\to I_{j+2}(X)\\to\\cdots."
},
{
"math_id": 37,
"text": "H^j(X,E)=\\operatorname{Hom}_{D(X)}(\\mathbf{Z}_X,E[j]),"
},
{
"math_id": 38,
"text": "H^j(X,k)\\times H^{n-j}(X,k)\\to H^n(X,k)\\cong k"
},
{
"math_id": 39,
"text": "H^j(X,k)\\cong H^{n-j}_c(X,k)^*."
},
{
"math_id": 40,
"text": "H^j(X,o_X)\\cong H^{n-j}_c(X,k)^*."
},
{
"math_id": 41,
"text": "H^j(X,E^*\\otimes o_X)\\cong H^{n-j}_c(X,E)^*."
},
{
"math_id": 42,
"text": "H^j(X,D_X)\\cong H^{-j}_c(X,k)^*."
},
{
"math_id": 43,
"text": "H^j_X(M,k)\\cong H^{n-j}_c(X,k)^*."
},
{
"math_id": 44,
"text": "(f_*E)(U) = E(f^{-1}(U))"
},
{
"math_id": 45,
"text": "U \\mapsto H^i(f^{-1}(U),E)"
},
{
"math_id": 46,
"text": " E_2^{ij} = H^i(Y,R^jf_*E) \\Rightarrow H^{i+j}(X,E)."
},
{
"math_id": 47,
"text": " E_2^{ij} = H^i(Y,H^j(F,A)) \\Rightarrow H^{i+j}(X,A)"
},
{
"math_id": 48,
"text": "H^i(Y,f_*E)\\cong H^i(X,E)."
},
{
"math_id": 49,
"text": "X=X_n\\supset X_{n-1}\\supset\\cdots\\supset X_{-1}=\\emptyset"
}
] | https://en.wikipedia.org/wiki?curid=1055357 |
1055365 | Sangaku | Wooden tablets inscribed with geometrical theorems in Edo Japan
Sangaku or san gaku (算額) are Japanese geometrical problems or theorems on wooden tablets which were placed as offerings at Shinto shrines or Buddhist temples during the Edo period by members of all social classes.
History.
The sangaku were painted in color on wooden tablets (ema) and hung in the precincts of Buddhist temples and Shinto shrines as offerings to the kami and buddhas, as challenges to the congregants, or as displays of the solutions to questions. Many of these tablets were lost during the period of modernization that followed the Edo period, but around nine hundred are known to remain.
Fujita Kagen (1765–1821), a Japanese mathematician of prominence, published the first collection of "sangaku" problems, his "Shimpeki Sampo" (Mathematical problems Suspended from the Temple) in 1790, and in 1806 a sequel, the "Zoku Shimpeki Sampo".
During this period, Japan applied strict regulations to commerce and foreign relations with western countries, so the tablets were created using Japanese mathematics, developed in parallel to western mathematics. For example, the connection between an integral and its derivative (the fundamental theorem of calculus) was unknown, so sangaku problems on areas and volumes were solved by expansions in infinite series and term-by-term calculation.
A typical problem concerns three circles that are mutually tangent and share a common tangent line: given the radii of the two outer circles, the radius of the small circle between them satisfies
formula_0
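A short numerical check of this relation (illustrative modern code, not the content of any particular tablet): for two circles of radii "a" and "b" tangent to a common line and to each other, the horizontal distance between their points of tangency with the line is 2√("ab"); requiring the middle circle of radius "r" to be tangent to the line and to both outer circles gives 2√("ar") + 2√("rb") = 2√("ab"), which is the relation above.
import math

def middle_radius(a, b):
    """Radius of the small circle tangent to the common tangent line and to the
    two outer circles of radii a and b (all three mutually tangent)."""
    return (1.0 / (1.0 / math.sqrt(a) + 1.0 / math.sqrt(b))) ** 2

a, b = 4.0, 9.0
r = middle_radius(a, b)
# tangency check: the horizontal gaps between the three points of tangency must add up
assert abs(2 * math.sqrt(a * r) + 2 * math.sqrt(r * b) - 2 * math.sqrt(a * b)) < 1e-12
print(r)   # 1.44 for outer radii 4 and 9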
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{1}{\\sqrt{r_\\text{middle}}} = \\frac{1}{\\sqrt{r_\\text{left}}} + \\frac{1}{\\sqrt{r_\\text{right}}}."
}
] | https://en.wikipedia.org/wiki?curid=1055365 |
1055369 | Japanese mathematics | Independent development of mathematics in Japan during the isolation of the Edo period
Japanese mathematics ("wasan") denotes a distinct kind of mathematics which was developed in Japan during the Edo period (1603–1867). The term "wasan", from "wa" ("Japanese") and "san" ("calculation"), was coined in the 1870s and employed to distinguish native Japanese mathematical theory from Western mathematics (洋算 "yōsan").
In the history of mathematics, the development of "wasan" falls outside the Western realm. At the beginning of the Meiji period (1868–1912), Japan and its people opened themselves to the West. Japanese scholars adopted Western mathematical technique, and this led to a decline of interest in the ideas used in "wasan".
History.
The Japanese mathematical schema evolved during a period when Japan's people were isolated from European influences, but instead borrowed from ancient mathematical texts written in China, including those from the Yuan dynasty and earlier. The Japanese mathematicians Yoshida Shichibei Kōyū, Imamura Chishō, and Takahara Kisshu are among the earliest known Japanese mathematicians. They came to be known to their contemporaries as "the Three Arithmeticians".
Yoshida was the author of the oldest extant Japanese mathematical text, the 1627 work called "Jinkōki". The work dealt with the subject of soroban arithmetic, including square and cube root operations. Yoshida's book significantly inspired a new generation of mathematicians, and redefined the Japanese perception of educational enlightenment, which was defined in the Seventeen Article Constitution as "the product of earnest meditation".
Seki Takakazu founded "enri" (円理: circle principles), a mathematical system with the same purpose as calculus, at a similar time to the development of calculus in Europe. However, Seki's investigations did not proceed from the same foundations as those used in Newton's studies in Europe.
Mathematicians like Takebe Katahiro played an important role in developing "enri" ("circle principle"), a crude analog to the Western calculus. He obtained the power series expansion of formula_0 in 1722, 15 years earlier than Euler. He used Richardson extrapolation in 1695, about 200 years earlier than Richardson. He also computed 41 digits of π, based on polygon approximation and Richardson extrapolation.
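As an illustration of the two techniques just mentioned (modern Python, not Takebe's actual procedure): approximate π by the semiperimeters of inscribed regular polygons obtained by repeatedly doubling the number of sides, then accelerate the convergence with one step of Richardson extrapolation.
import math

# side length of an inscribed regular n-gon in the unit circle: s_n = 2*sin(pi/n);
# doubling the number of sides gives s_2n = sqrt(2 - sqrt(4 - s_n**2)).
s = 1.0          # start with the regular hexagon, whose side is 1
n = 6
approx = []      # semiperimeters n*s/2, which converge to pi from below
for _ in range(10):
    approx.append(n * s / 2)
    s = math.sqrt(2 - math.sqrt(4 - s * s))
    n *= 2

# one step of Richardson extrapolation: the error of the semiperimeter is O(1/n^2),
# so (4*P(2n) - P(n)) / 3 removes the leading error term.
richardson = [(4 * approx[i + 1] - approx[i]) / 3 for i in range(len(approx) - 1)]
print(approx[-1], richardson[-1], math.pi)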
Select mathematicians.
The following list encompasses mathematicians whose work was derived from "wasan."
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(\\arcsin(x))^2"
}
] | https://en.wikipedia.org/wiki?curid=1055369 |
1055370 | Lucas pseudoprime | Probabilistic test for the primality of an integer
Lucas pseudoprimes and Fibonacci pseudoprimes are composite integers that pass certain tests which all primes and very few composite numbers pass: in this case, criteria relative to some Lucas sequence.
Baillie-Wagstaff-Lucas pseudoprimes.
Baillie and Wagstaff define Lucas pseudoprimes as follows: Given integers "P" and "Q", where "P" > 0 and formula_0,
let "Uk"("P", "Q") and "Vk"("P", "Q") be the corresponding Lucas sequences.
Let "n" be a positive integer and let formula_1 be the Jacobi symbol. We define
formula_2
If "n" is a prime that does not divide "Q", then the following congruence condition holds:
If this congruence does "not" hold, then "n" is "not" prime.
If "n" is "composite", then this congruence "usually" does not hold. These are the key facts that make Lucas sequences useful in primality testing.
The congruence (1) represents one of two congruences defining a Frobenius pseudoprime. Hence, every Frobenius pseudoprime is also a Baillie-Wagstaff-Lucas pseudoprime, but the converse does not always hold.
Some good references are chapter 8 of the book by Bressoud and Wagon (with Mathematica code), pages 142–152 of the book by Crandall and Pomerance, and pages 53–74 of the book by Ribenboim.
Lucas probable primes and pseudoprimes.
A Lucas probable prime for a given ("P, Q") pair is "any" positive integer "n" for which equation (1) above is true (see, page 1398).
A Lucas pseudoprime for a given ("P, Q") pair is a positive "composite" integer "n" for which equation (1) is true (see, page 1391).
A Lucas probable prime test is most useful if "D" is chosen such that the Jacobi symbol formula_1 is −1
(see pages 1401–1409 of, page 1024 of, or pages 266–269 of
). This is especially important when combining a Lucas test with a strong pseudoprime test, such as the Baillie–PSW primality test. Typically implementations will use a parameter selection method that ensures this condition (e.g. the Selfridge method recommended in and described below).
If formula_3 then equation (1) becomes
"Un+1" ≡ 0 (mod "n")     (2)
If congruence (2) is false, this constitutes a proof that "n" is composite.
If congruence (2) is true, then "n" is a Lucas probable prime.
In this case, either "n" is prime or it is a Lucas pseudoprime.
If congruence (2) is true, then "n" is "likely" to be prime (this justifies the term probable prime), but this does not "prove" that "n" is prime.
As is the case with any other probabilistic primality test, if we perform additional Lucas tests with different "D", "P" and "Q", then unless one of the tests proves that "n" is composite, we gain more confidence that "n" is prime.
Examples: If "P" = 3, "Q" = −1, and "D" = 13, the sequence of "U"'s is OEIS: : "U0" = 0, "U1" = 1, "U2" = 3, "U3" = 10, etc.
First, let "n" = 19. The Jacobi symbol formula_4 is −1, so δ("n") = 20, "U20" = 6616217487 = 19·348221973 and we have
formula_5
Therefore, 19 is a Lucas probable prime for this ("P, Q") pair. In this case 19 is prime, so it is "not" a Lucas pseudoprime.
For the next example, let "n" = 119. We have formula_6 = −1, and we can compute
formula_7
However, 119 = 7·17 is not prime, so 119 is a Lucas "pseudoprime" for this ("P, Q") pair.
In fact, 119 is the smallest pseudoprime for "P" = 3, "Q" = −1.
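For illustration, congruence (1) can be checked directly from the basic recurrence "U""k" = "P"·"U""k"−1 − "Q"·"U""k"−2, reducing each term modulo "n". The following deliberately naive Python sketch (the helper names are ours) takes on the order of "n" steps, whereas the method described below needs only on the order of log "n" steps:
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def lucas_congruence_naive(n, P, Q):
    """Check congruence (1), U_delta(n) = 0 (mod n), by the plain recurrence.
    Takes O(n) arithmetic steps -- for illustration only."""
    D = P * P - 4 * Q
    delta = n - jacobi(D, n)
    u_prev, u = 0, 1                       # U_0, U_1
    for _ in range(delta - 1):
        u_prev, u = u, (P * u - Q * u_prev) % n
    return u == 0

print(lucas_congruence_naive(19, 3, -1))   # True: 19 is prime
print(lucas_congruence_naive(119, 3, -1))  # True: 119 = 7*17 is a Lucas pseudoprime
print(lucas_congruence_naive(21, 3, -1))   # False: 21 is composite and is caught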
We will see below that, in order to check equation (2) for a given "n", we do "not" need to compute all of the first "n" + 1 terms in the "U" sequence.
Let "Q" = −1, the smallest Lucas pseudoprime to "P" = 1, 2, 3, ... are
323, 35, 119, 9, 9, 143, 25, 33, 9, 15, 123, 35, 9, 9, 15, 129, 51, 9, 33, 15, 21, 9, 9, 49, 15, 39, 9, 35, 49, 15, 9, 9, 33, 51, 15, 9, 35, 85, 39, 9, 9, 21, 25, 51, 9, 143, 33, 119, 9, 9, 51, 33, 95, 9, 15, 301, 25, 9, 9, 15, 49, 155, 9, 399, 15, 33, 9, 9, 49, 15, 119, 9, ...
Strong Lucas pseudoprimes.
Now, factor formula_8 into the form formula_9 where formula_10 is odd.
A strong Lucas pseudoprime for a given ("P, Q") pair is an odd composite number "n" with GCD("n, D") = 1, satisfying one of the conditions
formula_11
or
formula_12
for some 0 ≤ "r" < "s"; see page 1396 of. A strong Lucas pseudoprime is also a Lucas pseudoprime (for the same ("P, Q") pair), but the converse is not necessarily true.
Therefore, the strong test is a more stringent primality test than equation (1).
There are infinitely many strong Lucas pseudoprimes, and therefore, infinitely many Lucas pseudoprimes.
Theorem 7 in states: Let formula_13 and formula_14 be relatively prime positive integers for which formula_15 is positive but not a square. Then there is a positive constant formula_16 (depending on formula_13 and formula_14) such that the number of strong Lucas pseudoprimes not exceeding formula_17 is greater than formula_18, for formula_17 sufficiently large.
We can set "Q" = −1, then formula_19 and formula_20 are "P"-Fibonacci sequence and "P"-Lucas sequence, the pseudoprimes can be called strong Lucas pseudoprime in base "P", for example, the least strong Lucas pseudoprime with "P" = 1, 2, 3, ... are 4181, 169, 119, ...
An extra strong Lucas pseudoprime
is a strong Lucas pseudoprime for a set of parameters ("P", "Q") where "Q" = 1, satisfying one of the conditions
formula_21
or
formula_12
for some formula_22. An extra strong Lucas pseudoprime is also a strong Lucas pseudoprime for the same formula_23 pair.
Implementing a Lucas probable prime test.
Before embarking on a probable prime test, one usually verifies that "n", the number to be tested for primality, is odd, is not a perfect square, and is not divisible by any small prime less than some convenient limit. Perfect squares are easy to detect using Newton's method for square roots.
We choose a Lucas sequence where the Jacobi symbol formula_24, so that δ("n") = "n" + 1.
Given "n", one technique for choosing "D" is to use trial and error to find the first "D" in the sequence 5, −7, 9, −11, ... such that formula_24. Note that formula_25.
(If "D" and "n" have a prime factor in common, then formula_26).
With this sequence of "D" values, the average number of "D" values that must be tried before we encounter one whose Jacobi symbol is −1 is about 1.79; see, p. 1416.
Once we have "D", we set formula_27 and formula_28.
It is a good idea to check that "n" has no prime factors in common with "P" or "Q".
This method of choosing "D", "P", and "Q" was suggested by John Selfridge.
Given "D", "P", and "Q", there are recurrence relations that enable us to quickly compute formula_29 and formula_30 in formula_31 steps; see . To start off,
formula_32
formula_33
First, we can double the subscript from formula_34 to formula_35 in one step using the recurrence relations
formula_36
formula_37.
Next, we can increase the subscript by 1 using the recurrences
formula_38
formula_39.
If formula_40 is odd, replace it with formula_41; this is even, so it can now be divided by 2. The numerator of formula_42 is handled in the same way. (Adding "n" does not change the result modulo "n".)
Observe that, for each term that we compute in the "U" sequence, we compute the corresponding term in the "V" sequence. As we proceed, we also compute the same, corresponding powers of "Q".
At each stage, we reduce formula_43, formula_44, and the power of formula_14, mod "n".
We use the bits of the binary expansion of "n" to determine "which" terms in the "U" sequence to compute. For example, if "n"+1 = 44 (= 101100 in binary), then, taking the bits one at a time from left to right, we obtain the sequence of indices to compute: 1₂ = 1, 10₂ = 2, 100₂ = 4, 101₂ = 5, 1010₂ = 10, 1011₂ = 11, 10110₂ = 22, 101100₂ = 44. Therefore, we compute "U"1, "U"2, "U"4, "U"5, "U"10, "U"11, "U"22, and "U"44. We also compute the same-numbered terms in the "V" sequence, along with "Q"1, "Q"2, "Q"4, "Q"5, "Q"10, "Q"11, "Q"22, and "Q"44.
By the end of the calculation, we will have computed "Un+1", "Vn+1", and "Qn+1" (mod "n").
We then check congruence (2) using our known value of "Un+1".
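A compact Python sketch of the ladder just described follows. It is illustrative rather than authoritative: the function names are invented, the division by 2 modulo "n" uses the add-"n"-then-halve trick from the text, and the gcd checks on "P", "Q", and "D" are assumed to have been done already.

```python
def lucas_uv(P, Q, k, n):
    # Return (U_k, V_k, Q^k) modulo n for the Lucas sequences with parameters P, Q,
    # using the doubling and increment recurrences given above; k >= 1, n odd.
    D = P * P - 4 * Q
    U, V, Qk = 1, P % n, Q % n              # U_1, V_1, Q^1

    def half(x):                            # division by 2 modulo the odd number n
        x %= n
        if x % 2:
            x += n
        return (x // 2) % n

    for bit in bin(k)[3:]:                  # bits of k after the leading 1
        # double the subscript: j -> 2j
        U, V = (U * V) % n, (V * V - 2 * Qk) % n
        Qk = (Qk * Qk) % n
        if bit == '1':                      # increment the subscript: 2j -> 2j + 1
            U, V = half(P * U + V), half(D * U + P * V)
            Qk = (Qk * Q) % n
    return U, V, Qk

def lucas_probable_prime(n, P, Q):
    # Congruence (2): U_{n+1} == 0 (mod n), for parameters with (D/n) = -1.
    U, V, _ = lucas_uv(P, Q, n + 1, n)
    return U == 0
```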
When "D", "P", and "Q" are chosen as described above, the first 10 Lucas pseudoprimes are (see page 1401 of ):
323, 377, 1159, 1829, 3827, 5459, 5777, 9071, 9179, and 10877 (sequence in the OEIS)
The strong versions of the Lucas test can be implemented in a similar way.
When "D", "P", and "Q" are chosen as described above, the first 10 "strong" Lucas pseudoprimes are: 5459, 5777, 10877, 16109, 18971, 22499, 24569, 25199, 40309, and 58519
To calculate a list of "extra strong" Lucas pseudoprimes, set formula_45.
Then try "P" = 3, 4, 5, 6, ..., until a value of formula_0 is found so that the Jacobi symbol formula_24.
With this method for selecting "D", "P", and "Q", the first 10 "extra strong" Lucas pseudoprimes are
989, 3239, 5777, 10877, 27971, 29681, 30739, 31631, 39059, and 72389
Checking additional congruence conditions.
If we have checked that congruence (2) is true, there are additional congruence conditions we can check that have almost no additional computational cost.
If "n" happens to be composite, these additional conditions may help discover that fact.
If "n" is an odd prime and formula_24, then we have the following (see equation 2 on page 1392 of ):
Although this congruence condition is not, by definition, part of the Lucas probable prime test, it is almost free to check this condition because, as noted above, the value of "Vn+1" was computed in the process of computing "Un+1".
If either congruence (2) or (3) is false, this constitutes a proof that "n" is not prime.
If "both" of these congruences are true, then it is even more likely that "n" is prime than if we had checked only congruence (2).
If Selfridge's method (above) for choosing "D", "P", and "Q" happened to set "Q" = −1, then we can adjust "P" and "Q" so that "D" and formula_1 remain unchanged and "P" = "Q" = 5 (see Lucas sequence-Algebraic relations).
If we use this enhanced method for choosing "P" and "Q", then 913 = 11·83 is the "only" composite less than 10^8 for which congruence (3) is true (see page 1409 and Table 6 of ). More extensive calculations show that, with this method of choosing "D", "P", and "Q", there are only five odd, composite numbers less than 10^15 for which congruence (3) is true.
If formula_46, then a further congruence condition that involves very little additional computation can be implemented.
Recall that formula_47 is computed during the calculation of formula_29, and we can easily save the previously computed power of formula_14, namely, formula_48.
If "n" is prime, then, by Euler's criterion,
formula_49 .
(Here, formula_50 is the Legendre symbol; if "n" is prime, this is the same as the Jacobi symbol).
Therefore, if "n" is prime, we must have,
The Jacobi symbol on the right side is easy to compute, so this congruence is easy to check.
If this congruence does not hold, then "n" cannot be prime. Provided GCD("n, Q") = 1 then testing for congruence (4) is equivalent to augmenting our Lucas test with a "base Q" Solovay–Strassen primality test.
Additional congruence conditions that must be satisfied if "n" is prime are described in Section 6 of. If "any" of these conditions fails to hold, then we have proved that "n" is not prime.
Comparison with the Miller–Rabin primality test.
"k" applications of the Miller–Rabin primality test declare a composite "n" to be probably prime with a probability at most (1/4)"k".
There is a similar probability estimate for the strong Lucas probable prime test.
Aside from two trivial exceptions (see below), the fraction of ("P","Q") pairs (modulo "n") that declare a composite "n" to be probably prime is at most (4/15).
Therefore, "k" applications of the strong Lucas test would declare a composite "n" to be probably prime with a probability at most (4/15)k.
There are two trivial exceptions. One is "n" = 9. The other is when "n" = "p"("p"+2) is the product of two twin primes. Such an "n" is easy to factor, because in this case, "n"+1 = ("p"+1)2 is a perfect square. One can quickly detect perfect squares using Newton's method for square roots.
By combining a Lucas pseudoprime test with a Fermat primality test, say, to base 2, one can obtain very powerful probabilistic tests for primality, such as the Baillie–PSW primality test.
Fibonacci pseudoprimes.
When "P" = 1 and "Q" = −1, the "Un"("P","Q") sequence represents the Fibonacci numbers.
A Fibonacci pseudoprime is often
defined as a composite number "n" not divisible by 5 for which congruence (1) holds with "P" = 1 and "Q" = −1. By this definition, the Fibonacci pseudoprimes form a sequence:
323, 377, 1891, 3827, 4181, 5777, 6601, 6721, 8149, 10877, ... (sequence in the OEIS).
The references of Anderson and Jacobsen below use this definition.
If "n" is congruent to 2 or 3 modulo 5, then Bressoud, and Crandall and Pomerance point out that it is rare for a Fibonacci pseudoprime to also be a Fermat pseudoprime base 2. However, when "n" is congruent to 1 or 4 modulo 5, the opposite is true, with over 12% of Fibonacci pseudoprimes under 1011 also being base-2 Fermat pseudoprimes.
If "n" is prime and GCD("n", "Q") = 1, then we also have
This leads to an alternative definition of Fibonacci pseudoprime:
a Fibonacci pseudoprime is a composite number "n" for which congruence (5) holds with "P" = 1 and "Q" = −1.
This definition leads to the Fibonacci pseudoprimes forming the sequence:
705, 2465, 2737, 3745, 4181, 5777, 6721, 10877, 13201, 15251, ... (sequence in the OEIS),
which are also referred to as "Bruckman-Lucas" pseudoprimes.
Hoggatt and Bicknell studied properties of these pseudoprimes in 1974. Singmaster computed these pseudoprimes up to 100000. Jacobsen lists all 111443 of these pseudoprimes less than 10^13.
It has been shown that there are no even Fibonacci pseudoprimes as defined by equation (5). However, even Fibonacci pseudoprimes do exist (sequence in the OEIS) under the first definition given by (1).
A strong Fibonacci pseudoprime is a composite number "n" for which congruence (5) holds for "Q" = −1 and all "P". It follows that an odd composite integer "n" is a strong Fibonacci pseudoprime if and only if "n" is a Carmichael number and, for every prime "p" dividing "n", either 2("p" + 1) | ("n" − 1) or 2("p" + 1) | ("n" − "p").
The smallest example of a strong Fibonacci pseudoprime is 443372888629441 = 17·31·41·43·89·97·167·331.
Pell pseudoprimes.
A Pell pseudoprime may be defined as a composite number "n" for which equation (1) above is true with "P" = 2 and "Q" = −1; the sequence "Un" then being the Pell sequence. The first pseudoprimes are then 35, 169, 385, 779, 899, 961, 1121, 1189, 2419, ...
This differs from the definition in the OEIS, which may be written as:
formula_51
with ("P", "Q") = (2, −1) again defining "Un" as the Pell sequence. The first pseudoprimes are then 169, 385, 741, 961, 1121, 2001, 3827, 4879, 5719, 6215 ...
A third definition uses equation (5) with ("P", "Q") = (2, −1), leading to the pseudoprimes 169, 385, 961, 1105, 1121, 3827, 4901, 6265, 6441, 6601, 7107, 7801, 8119, ...
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "D=P^2-4Q"
},
{
"math_id": 1,
"text": "\\left(\\tfrac{D}{n}\\right)"
},
{
"math_id": 2,
"text": "\\delta(n)=n-\\left(\\tfrac{D}{n}\\right)."
},
{
"math_id": 3,
"text": "\\left(\\tfrac{D}{n}\\right)=-1,"
},
{
"math_id": 4,
"text": "\\left(\\tfrac{13}{19}\\right)"
},
{
"math_id": 5,
"text": " U_{20} = 6616217487 \\equiv 0 \\pmod {19} . "
},
{
"math_id": 6,
"text": "\\left(\\tfrac{13}{119}\\right)"
},
{
"math_id": 7,
"text": " U_{120} \\equiv 0 \\pmod {119}. "
},
{
"math_id": 8,
"text": "\\delta(n)=n-\\left(\\tfrac{D}{n}\\right)"
},
{
"math_id": 9,
"text": "d\\cdot2^s"
},
{
"math_id": 10,
"text": "d"
},
{
"math_id": 11,
"text": " U_d \\equiv 0 \\pmod {n} "
},
{
"math_id": 12,
"text": " V_{d \\cdot 2^r} \\equiv 0 \\pmod {n} "
},
{
"math_id": 13,
"text": "P"
},
{
"math_id": 14,
"text": "Q"
},
{
"math_id": 15,
"text": "P^2 - 4Q"
},
{
"math_id": 16,
"text": "c"
},
{
"math_id": 17,
"text": "x"
},
{
"math_id": 18,
"text": "c \\cdot \\log x"
},
{
"math_id": 19,
"text": "U_n"
},
{
"math_id": 20,
"text": "V_n"
},
{
"math_id": 21,
"text": " U_d \\equiv 0 \\pmod {n} \\text{ and } V_d \\equiv \\pm 2 \\pmod {n} "
},
{
"math_id": 22,
"text": " 0 \\le r<s-1"
},
{
"math_id": 23,
"text": "(P,Q)"
},
{
"math_id": 24,
"text": "\\left(\\tfrac{D}{n}\\right)=-1"
},
{
"math_id": 25,
"text": "\\left(\\tfrac{k}{n}\\right)\\left(\\tfrac{-k}{n}\\right)=-1"
},
{
"math_id": 26,
"text": "\\left(\\tfrac{D}{n}\\right)=0"
},
{
"math_id": 27,
"text": "P=1"
},
{
"math_id": 28,
"text": "Q=(1-D)/4"
},
{
"math_id": 29,
"text": "U_{n+1}"
},
{
"math_id": 30,
"text": "V_{n+1}"
},
{
"math_id": 31,
"text": "O(\\log_2 n)"
},
{
"math_id": 32,
"text": "U_{1}=1"
},
{
"math_id": 33,
"text": "V_{1}=P=1"
},
{
"math_id": 34,
"text": "k"
},
{
"math_id": 35,
"text": "2k"
},
{
"math_id": 36,
"text": "U_{2k}=U_k\\cdot V_k"
},
{
"math_id": 37,
"text": "V_{2k}=V_k^2-2Q^k=\\frac{V_k^2+DU_k^2}{2}"
},
{
"math_id": 38,
"text": "U_{2k+1}=(P\\cdot U_{2k}+V_{2k})/2"
},
{
"math_id": 39,
"text": "V_{2k+1}=(D\\cdot U_{2k}+P\\cdot V_{2k})/2"
},
{
"math_id": 40,
"text": "P\\cdot U_{2k}+V_{2k}"
},
{
"math_id": 41,
"text": "P\\cdot U_{2k}+V_{2k}+n"
},
{
"math_id": 42,
"text": "V_{2k+1}"
},
{
"math_id": 43,
"text": "U"
},
{
"math_id": 44,
"text": "V"
},
{
"math_id": 45,
"text": "Q = 1"
},
{
"math_id": 46,
"text": "Q \\neq \\pm 1 "
},
{
"math_id": 47,
"text": "Q^{n+1}"
},
{
"math_id": 48,
"text": "Q^{(n+1)/2}"
},
{
"math_id": 49,
"text": " Q^{(n-1)/2} \\equiv \\left(\\tfrac{Q}{n}\\right) \\pmod {n} "
},
{
"math_id": 50,
"text": "\\left(\\tfrac{Q}{n}\\right)"
},
{
"math_id": 51,
"text": " \\text{ } U_n \\equiv \\left(\\tfrac{2}{n}\\right) \\pmod {n}"
}
] | https://en.wikipedia.org/wiki?curid=1055370 |
10557570 | Parity-check matrix | In coding theory, a parity-check matrix of a linear block code "C" is a matrix which describes the linear relations that the components of a codeword must satisfy. It can be used to decide whether a particular vector is a codeword and is also used in decoding algorithms.
Definition.
Formally, a parity check matrix "H" of a linear code "C" is a generator matrix of the dual code, "C"⊥. This means that a codeword c is in "C" if and only if the matrix-vector product "H"c⊤ = 0 (some authors would write this in an equivalent form, c"H"⊤ = 0).
The rows of a parity check matrix are the coefficients of the parity check equations. That is, they show how linear combinations of certain digits (components) of each codeword equal zero. For example, the parity check matrix
formula_0,
compactly represents the parity check equations,
formula_1,
that must be satisfied for the vector formula_2 to be a codeword of "C".
From the definition of the parity-check matrix it directly follows that the minimum distance of the code is the minimum number "d" such that every "d" − 1 columns of a parity-check matrix "H" are linearly independent while there exist "d" columns of "H" that are linearly dependent.
Creating a parity check matrix.
The parity check matrix for a given code can be derived from its generator matrix (and vice versa). If the generator matrix for an ["n","k"]-code is in standard form
formula_3,
then the parity check matrix is given by
formula_4,
because
formula_5.
Negation is performed in the finite field F"q". Note that if the characteristic of the underlying field is 2 (i.e., 1 + 1 = 0 in that field), as in binary codes, then -"P" = "P", so the negation is unnecessary.
For example, if a binary code has the generator matrix
formula_6,
then its parity check matrix is
formula_7.
It can be verified that G is a formula_8 matrix, while H is a formula_9 matrix.
Syndromes.
For any (row) vector x of the ambient vector space, s = "H"x⊤ is called the syndrome of x. The vector x is a codeword if and only if s = 0. The calculation of syndromes is the basis for the syndrome decoding algorithm.
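The whole construction is easy to exercise numerically. The following Python/NumPy sketch (the function name is invented here) rebuilds the parity check matrix of the binary example above from its generator matrix and uses it to compute syndromes:

```python
import numpy as np

def parity_check_from_generator(G, k):
    # G is assumed to be in standard form [I_k | P]; over GF(2) we have -P = P.
    n = G.shape[1]
    P = G[:, k:]
    return np.hstack([P.T, np.eye(n - k, dtype=int)]) % 2

G = np.array([[1, 0, 1, 0, 1],
              [0, 1, 1, 1, 0]])
H = parity_check_from_generator(G, k=2)

print(H)                        # matches the parity check matrix given above
print((G @ H.T) % 2)            # zero matrix, i.e. G H^T = 0 over GF(2)

c = (np.array([1, 1]) @ G) % 2  # encode the message (1, 1)
print((H @ c) % 2)              # syndrome 0: c is a codeword
c[0] ^= 1                       # introduce a single bit error
print((H @ c) % 2)              # nonzero syndrome: the error is detected
```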
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H =\n\n\\left[\n\\begin{array}{cccc}\n 0&0&1&1\\\\\n 1&1&0&0\n\\end{array}\n\\right]\n"
},
{
"math_id": 1,
"text": "\\begin{align} c_3 + c_4 &= 0 \\\\ c_1 + c_2 &= 0 \\end{align}"
},
{
"math_id": 2,
"text": "(c_1, c_2, c_3, c_4)"
},
{
"math_id": 3,
"text": "G = \\begin{bmatrix} I_k | P \\end{bmatrix}"
},
{
"math_id": 4,
"text": "H = \\begin{bmatrix} -P^{\\top} | I_{n-k} \\end{bmatrix}"
},
{
"math_id": 5,
"text": "G H^{\\top} = P-P = 0"
},
{
"math_id": 6,
"text": "G = \n\\left[\n\\begin{array}{cc|ccc}\n1&0&1&0&1 \\\\\n0&1&1&1&0 \\\\\n\\end{array}\n\\right]"
},
{
"math_id": 7,
"text": "H =\n\\left[\n\\begin{array}{cc|ccc}\n1&1&1&0&0 \\\\\n0&1&0&1&0 \\\\\n1&0&0&0&1 \\\\\n\\end{array}\n\\right]"
},
{
"math_id": 8,
"text": "k \\times n"
},
{
"math_id": 9,
"text": "(n-k) \\times n"
}
] | https://en.wikipedia.org/wiki?curid=10557570 |
1055940 | Completely multiplicative function | Arithmetic function
In number theory, functions of positive integers which respect products are important and are called completely multiplicative functions or totally multiplicative functions. A weaker condition is also important, respecting only products of coprime numbers, and such functions are called multiplicative functions. Outside of number theory, the term "multiplicative function" is often taken to be synonymous with "completely multiplicative function" as defined in this article.
Definition.
A completely multiplicative function (or totally multiplicative function) is an arithmetic function (that is, a function whose domain is the natural numbers), such that "f"(1) = 1 and "f"("ab") = "f"("a")"f"("b") holds "for all" positive integers "a" and "b".
In logic notation: formula_0 and formula_1.
Without the requirement that "f"(1) = 1, one could still have "f"(1) = 0, but then "f"("a") = 0 for all positive integers "a", so this is not a very strong restriction. If one did not fix formula_0, one can see that both formula_2 and formula_3 are possibilities for the value of formula_4 in the following way:
formula_5
The definition above can be rephrased using the language of algebra: A completely multiplicative function is a homomorphism from the monoid formula_6 (that is, the positive integers under multiplication) to some other monoid.
Examples.
The easiest example of a completely multiplicative function is a monomial with leading coefficient 1: For any particular positive integer "n", define "f"("a") = "a""n". Then "f"("bc") = ("bc")"n" = "b""n""c""n" = "f"("b")"f"("c"), and "f"(1) = 1"n" = 1.
The Liouville function is a non-trivial example of a completely multiplicative function as are Dirichlet characters, the Jacobi symbol and the Legendre symbol.
Properties.
A completely multiplicative function is completely determined by its values at the prime numbers, a consequence of the fundamental theorem of arithmetic. Thus, if "n" is a product of powers of distinct primes, say "n" = "p"^"a" "q"^"b" ..., then "f"("n") = "f"("p")^"a" "f"("q")^"b" ...
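As an illustration, the following Python sketch (the helper name is ours, not standard) evaluates one such function, the Liouville function λ("n") = (−1)^Ω("n"), which takes the value −1 at every prime, and checks complete multiplicativity on a small range:

```python
def liouville(n):
    # Liouville's function: completely multiplicative with value -1 at every prime,
    # so liouville(n) = (-1) ** Omega(n), counting prime factors with multiplicity.
    omega, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            omega += 1
        d += 1
    if n > 1:
        omega += 1
    return (-1) ** omega

# Complete multiplicativity: f(ab) = f(a) f(b) for all a and b, not only coprime ones.
assert all(liouville(a * b) == liouville(a) * liouville(b)
           for a in range(1, 60) for b in range(1, 60))
```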
While the Dirichlet convolution of two multiplicative functions is multiplicative, the Dirichlet convolution of two completely multiplicative functions need not be completely multiplicative. Arithmetic functions which can be written as the Dirichlet convolution of two completely multiplicative functions are said to be quadratics or specially multiplicative functions. They are rational arithmetic functions of order (2, 0) and obey the Busche–Ramanujan identity.
There are a variety of statements about a function which are equivalent to it being completely multiplicative. For example, if a function "f" is multiplicative then it is completely multiplicative if and only if its Dirichlet inverse is formula_7 where formula_8 is the Möbius function.
Completely multiplicative functions also satisfy a distributive law. If "f" is completely multiplicative then
formula_9
where "*" represents the Dirichlet product and formula_10 represents pointwise multiplication. One consequence of this is that for any completely multiplicative function "f" one has
formula_11
which can be deduced from the above by putting both formula_12, where formula_13 is the constant function.
Here formula_14 is the divisor function.
formula_15
Dirichlet series.
The Dirichlet series L-function of a completely (or totally) multiplicative function formula_16 satisfies
formula_17
which means that the sum over all the natural numbers is equal to the product over all the prime numbers.
{
"math_id": 0,
"text": "f(1) = 1"
},
{
"math_id": 1,
"text": "\\forall a, b \\in \\text{domain}(f), f(ab) = f(a)f(b)"
},
{
"math_id": 2,
"text": "0"
},
{
"math_id": 3,
"text": "1"
},
{
"math_id": 4,
"text": "f(1)"
},
{
"math_id": 5,
"text": "\n \\begin{align}\n f(1) = f(1 \\cdot 1) &\\iff f(1) = f(1)f(1) \\\\\n &\\iff f(1) = f(1)^2 \\\\\n &\\iff f(1)^2 - f(1) = 0 \\\\\n &\\iff f(1)\\left(f(1) - 1\\right) = 0 \\\\\n &\\iff f(1) = 0 \\lor f(1) = 1.\n\\end{align} "
},
{
"math_id": 6,
"text": "(\\mathbb Z^+,\\cdot)"
},
{
"math_id": 7,
"text": "\\mu\\cdot f"
},
{
"math_id": 8,
"text": "\\mu"
},
{
"math_id": 9,
"text": "f \\cdot (g*h)=(f \\cdot g)*(f \\cdot h)"
},
{
"math_id": 10,
"text": "\\cdot"
},
{
"math_id": 11,
"text": "f*f = \\tau \\cdot f"
},
{
"math_id": 12,
"text": "g = h = 1"
},
{
"math_id": 13,
"text": "1(n) = 1"
},
{
"math_id": 14,
"text": " \\tau"
},
{
"math_id": 15,
"text": "\n\\begin{align}\nf \\cdot \\left(g*h \\right)(n) &= f(n) \\cdot \\sum_{d|n} g(d) h \\left( \\frac{n}{d} \\right) = \\\\\n&= \\sum_{d|n} f(n) \\cdot (g(d) h \\left( \\frac{n}{d} \\right)) = \\\\\n&= \\sum_{d|n} (f(d) f \\left( \\frac{n}{d} \\right)) \\cdot (g(d) h \\left( \\frac{n}{d} \\right)) \\text{ (since } f \\text{ is completely multiplicative) } = \\\\\n&= \\sum_{d|n} (f(d) g(d)) \\cdot (f \\left( \\frac{n}{d} \\right) h \\left( \\frac{n}{d} \\right)) \\\\\n&= (f \\cdot g)*(f \\cdot h).\n\\end{align}\n"
},
{
"math_id": 16,
"text": "a(n)"
},
{
"math_id": 17,
"text": "L(s,a)=\\sum^\\infty_{n=1}\\frac{a(n)}{n^s}=\\prod_p\\biggl(1-\\frac{a(p)}{p^s}\\biggr)^{-1},"
}
] | https://en.wikipedia.org/wiki?curid=1055940 |
1056003 | Fundamental theorem of curves | Regular 3-D curves are shape and size determined by their curvature and torsion
In differential geometry, the fundamental theorem of space curves states that every regular curve in three-dimensional space, with non-zero curvature, has its shape (and size or scale) completely determined by its curvature and torsion.
Use.
A curve can be described, and thereby defined, by a pair of scalar fields: curvature formula_0 and torsion formula_1, both of which depend on some parameter which parametrizes the curve and which is ideally the arc length of the curve. From just the curvature and torsion, the vector fields for the tangent, normal, and binormal vectors can be derived using the Frenet–Serret formulas. Then, integration of the tangent field (done numerically, if not analytically) yields the curve.
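A rough numerical illustration of this reconstruction is sketched below in Python/NumPy; the explicit Euler stepping, the step size, and the choice of constant curvature and torsion (which gives a circular helix) are illustrative assumptions rather than part of the theorem.

```python
import numpy as np

def curve_from_curvature_torsion(kappa, tau, length, ds=1e-3):
    # Integrate the Frenet-Serret equations T' = kN, N' = -kT + tB, B' = -tN,
    # then r' = T, starting from the standard frame at the origin.
    r = np.zeros(3)
    T, N, B = np.eye(3)
    points = [r.copy()]
    for i in range(int(length / ds)):
        s = i * ds
        k, t = kappa(s), tau(s)
        T, N, B = (T + ds * k * N,
                   N + ds * (-k * T + t * B),
                   B + ds * (-t * N))
        # re-orthonormalize the frame to keep numerical drift under control
        T /= np.linalg.norm(T)
        N -= np.dot(N, T) * T
        N /= np.linalg.norm(N)
        B = np.cross(T, N)
        r = r + ds * T
        points.append(r.copy())
    return np.array(points)

# Constant curvature and torsion reproduce a circular helix.
helix = curve_from_curvature_torsion(lambda s: 1.0, lambda s: 0.5, length=10.0)
```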
Congruence.
If a pair of curves are in different positions but have the same curvature and torsion, then they are congruent to each other.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\kappa"
},
{
"math_id": 1,
"text": "\\tau"
}
] | https://en.wikipedia.org/wiki?curid=1056003 |
105620 | Orthogonal matrix | Real square matrix whose columns and rows are orthogonal unit vectors
In linear algebra, an orthogonal matrix, or orthonormal matrix, is a real square matrix whose columns and rows are orthonormal vectors.
One way to express this is
formula_0
where "Q"T is the transpose of Q and I is the identity matrix.
This leads to the equivalent characterization: a matrix Q is orthogonal if its transpose is equal to its inverse:
formula_1
where "Q"−1 is the inverse of Q.
An orthogonal matrix Q is necessarily invertible (with inverse "Q"−1 = "Q"T), unitary ("Q"−1 = "Q"∗), where "Q"∗ is the Hermitian adjoint (conjugate transpose) of Q, and therefore normal ("Q"∗"Q" = "QQ"∗) over the real numbers. The determinant of any orthogonal matrix is either +1 or −1. As a linear transformation, an orthogonal matrix preserves the inner product of vectors, and therefore acts as an isometry of Euclidean space, such as a rotation, reflection or rotoreflection. In other words, it is a unitary transformation.
The set of "n" × "n" orthogonal matrices, under multiplication, forms the group O("n"), known as the orthogonal group. The subgroup SO("n") consisting of orthogonal matrices with determinant +1 is called the special orthogonal group, and each of its elements is a special orthogonal matrix. As a linear transformation, every special orthogonal matrix acts as a rotation.
Overview.
An orthogonal matrix is the real specialization of a unitary matrix, and thus always a normal matrix. Although we consider only real matrices here, the definition can be used for matrices with entries from any field. However, orthogonal matrices arise naturally from dot products, and for matrices of complex numbers that leads instead to the unitary requirement. Orthogonal matrices preserve the dot product, so, for vectors u and v in an n-dimensional real Euclidean space
formula_2
where Q is an orthogonal matrix. To see the inner product connection, consider a vector v in an n-dimensional real Euclidean space. Written with respect to an orthonormal basis, the squared length of v is vTv. If a linear transformation, in matrix form "Q"v, preserves vector lengths, then
formula_3
Thus finite-dimensional linear isometries—rotations, reflections, and their combinations—produce orthogonal matrices. The converse is also true: orthogonal matrices imply orthogonal transformations. However, linear algebra includes orthogonal transformations between spaces which may be neither finite-dimensional nor of the same dimension, and these have no orthogonal matrix equivalent.
Orthogonal matrices are important for a number of reasons, both theoretical and practical. The "n" × "n" orthogonal matrices form a group under matrix multiplication, the orthogonal group denoted by O("n"), which—with its subgroups—is widely used in mathematics and the physical sciences. For example, the point group of a molecule is a subgroup of O(3). Because floating point versions of orthogonal matrices have advantageous properties, they are key to many algorithms in numerical linear algebra, such as QR decomposition. As another example, with appropriate normalization the discrete cosine transform (used in MP3 compression) is represented by an orthogonal matrix.
Examples.
Below are a few examples of small orthogonal matrices and possible interpretations.
Elementary constructions.
Lower dimensions.
The simplest orthogonal matrices are the 1 × 1 matrices [1] and [−1], which we can interpret as the identity and a reflection of the real line across the origin.
The 2 × 2 matrices have the form
formula_8
which orthogonality demands satisfy the three equations
formula_9
In consideration of the first equation, without loss of generality let "p" = cos "θ", "q" = sin "θ"; then either "t" = −"q", "u" = "p" or "t" = "q", "u" = −"p". We can interpret the first case as a rotation by θ (where "θ" = 0 is the identity), and the second as a reflection across a line at an angle of "θ"/2.
formula_10
The special case of the reflection matrix with "θ" = 90° generates a reflection about the line at 45° given by "y" = "x" and therefore exchanges x and y; it is a permutation matrix, with a single 1 in each column and row (and otherwise 0):
formula_11
The identity is also a permutation matrix.
A reflection is its own inverse, which implies that a reflection matrix is symmetric (equal to its transpose) as well as orthogonal. The product of two rotation matrices is a rotation matrix, and the product of two reflection matrices is also a rotation matrix.
Higher dimensions.
Regardless of the dimension, it is always possible to classify orthogonal matrices as purely rotational or not, but for 3 × 3 matrices and larger the non-rotational matrices can be more complicated than reflections. For example,
formula_12
represent an "inversion" through the origin and a "rotoinversion", respectively, about the z-axis.
Rotations become more complicated in higher dimensions; they can no longer be completely characterized by one angle, and may affect more than one planar subspace. It is common to describe a 3 × 3 rotation matrix in terms of an axis and angle, but this only works in three dimensions. Above three dimensions two or more angles are needed, each associated with a plane of rotation.
However, we have elementary building blocks for permutations, reflections, and rotations that apply in general.
Primitives.
The most elementary permutation is a transposition, obtained from the identity matrix by exchanging two rows. Any "n" × "n" permutation matrix can be constructed as a product of no more than "n" − 1 transpositions.
A Householder reflection is constructed from a non-null vector v as
formula_13
Here the numerator is a symmetric matrix while the denominator is a number, the squared magnitude of v. This is a reflection in the hyperplane perpendicular to v (negating any vector component parallel to v). If v is a unit vector, then "Q" = "I" − 2vvT suffices. A Householder reflection is typically used to simultaneously zero the lower part of a column. Any orthogonal matrix of size "n" × "n" can be constructed as a product of at most n such reflections.
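A short NumPy sketch (illustrative only) of such a reflection, checking that it is orthogonal and that a suitable choice of v zeroes the lower part of a column:

```python
import numpy as np

def householder(v):
    # Q = I - 2 v v^T / (v^T v): reflection in the hyperplane perpendicular to v.
    v = np.asarray(v, dtype=float).reshape(-1, 1)
    return np.eye(len(v)) - 2.0 * (v @ v.T) / (v.T @ v)

x = np.array([3.0, 1.0, 5.0, 1.0])
v = x.copy()
v[0] += np.sign(x[0]) * np.linalg.norm(x)   # choose v so that Qx lands on the first axis
Q = householder(v)

print(np.allclose(Q.T @ Q, np.eye(4)))      # True: Q is orthogonal
print(np.round(Q @ x, 10))                  # [-6.  0.  0.  0.]
```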
A Givens rotation acts on a two-dimensional (planar) subspace spanned by two coordinate axes, rotating by a chosen angle. It is typically used to zero a single subdiagonal entry. Any rotation matrix of size "n" × "n" can be constructed as a product of at most "n"("n" − 1)/2 such rotations. In the case of 3 × 3 matrices, three such rotations suffice; and by fixing the sequence we can thus describe all 3 × 3 rotation matrices (though not uniquely) in terms of the three angles used, often called Euler angles.
A Jacobi rotation has the same form as a Givens rotation, but is used to zero both off-diagonal entries of a 2 × 2 symmetric submatrix.
Properties.
Matrix properties.
A real square matrix is orthogonal if and only if its columns form an orthonormal basis of the Euclidean space R"n" with the ordinary Euclidean dot product, which is the case if and only if its rows form an orthonormal basis of R"n". It might be tempting to suppose a matrix with orthogonal (not orthonormal) columns would be called an orthogonal matrix, but such matrices have no special interest and no special name; they only satisfy "M"T"M" = "D", with D a diagonal matrix.
The determinant of any orthogonal matrix is +1 or −1. This follows from basic facts about determinants, as follows:
formula_14
The converse is not true; having a determinant of ±1 is no guarantee of orthogonality, even with orthogonal columns, as shown by the following counterexample.
formula_15
With permutation matrices the determinant matches the signature, being +1 or −1 as the parity of the permutation is even or odd, for the determinant is an alternating function of the rows.
Stronger than the determinant restriction is the fact that an orthogonal matrix can always be diagonalized over the complex numbers to exhibit a full set of eigenvalues, all of which must have (complex) modulus 1.
Group properties.
The inverse of every orthogonal matrix is again orthogonal, as is the matrix product of two orthogonal matrices. In fact, the set of all "n" × "n" orthogonal matrices satisfies all the axioms of a group. It is a compact Lie group of dimension "n"("n" − 1)/2, called the orthogonal group and denoted by O("n").
The orthogonal matrices whose determinant is +1 form a path-connected normal subgroup of O("n") of index 2, the special orthogonal group SO("n") of rotations. The quotient group O("n")/SO("n") is isomorphic to O(1), with the projection map choosing [+1] or [−1] according to the determinant. Orthogonal matrices with determinant −1 do not include the identity, and so do not form a subgroup but only a coset; it is also (separately) connected. Thus each orthogonal group falls into two pieces; and because the projection map splits, O("n") is a semidirect product of SO("n") by O(1). In practical terms, a comparable statement is that any orthogonal matrix can be produced by taking a rotation matrix and possibly negating one of its columns, as we saw with 2 × 2 matrices. If n is odd, then the semidirect product is in fact a direct product, and any orthogonal matrix can be produced by taking a rotation matrix and possibly negating all of its columns. This follows from the property of determinants that negating a column negates the determinant, and thus negating an odd (but not even) number of columns negates the determinant.
Now consider ("n" + 1) × ("n" + 1) orthogonal matrices with bottom right entry equal to 1. The remainder of the last column (and last row) must be zeros, and the product of any two such matrices has the same form. The rest of the matrix is an "n" × "n" orthogonal matrix; thus O("n") is a subgroup of O("n" + 1) (and of all higher groups).
formula_16
Since an elementary reflection in the form of a Householder matrix can reduce any orthogonal matrix to this constrained form, a series of such reflections can bring any orthogonal matrix to the identity; thus an orthogonal group is a reflection group. The last column can be fixed to any unit vector, and each choice gives a different copy of O("n") in O("n" + 1); in this way O("n" + 1) is a bundle over the unit sphere "S""n" with fiber O("n").
Similarly, SO("n") is a subgroup of SO("n" + 1); and any special orthogonal matrix can be generated by Givens plane rotations using an analogous procedure. The bundle structure persists: SO("n") ↪ SO("n" + 1) → "S""n". A single rotation can produce a zero in the first row of the last column, and series of "n" − 1 rotations will zero all but the last row of the last column of an "n" × "n" rotation matrix. Since the planes are fixed, each rotation has only one degree of freedom, its angle. By induction, SO("n") therefore has
formula_17
degrees of freedom, and so does O("n").
Permutation matrices are simpler still; they form, not a Lie group, but only a finite group, the order "n"! symmetric group S"n". By the same kind of argument, S"n" is a subgroup of S"n" + 1. The even permutations produce the subgroup of permutation matrices of determinant +1, the order "n"!/2 alternating group.
Canonical form.
More broadly, the effect of any orthogonal matrix separates into independent actions on orthogonal two-dimensional subspaces. That is, if Q is special orthogonal then one can always find an orthogonal matrix P, a (rotational) change of basis, that brings Q into block diagonal form:
formula_18
where the matrices "R"1, ..., "R""k" are 2 × 2 rotation matrices, and with the remaining entries zero. Exceptionally, a rotation block may be diagonal, ±"I". Thus, negating one column if necessary, and noting that a 2 × 2 reflection diagonalizes to a +1 and −1, any orthogonal matrix can be brought to the form
formula_19
The matrices "R"1, ..., "R""k" give conjugate pairs of eigenvalues lying on the unit circle in the complex plane; so this decomposition confirms that all eigenvalues have absolute value 1. If n is odd, there is at least one real eigenvalue, +1 or −1; for a 3 × 3 rotation, the eigenvector associated with +1 is the rotation axis.
Lie algebra.
Suppose the entries of Q are differentiable functions of t, and that "t" = 0 gives "Q" = "I". Differentiating the orthogonality condition
formula_20
yields
formula_21
Evaluation at "t" = 0 ("Q" = "I") then implies
formula_22
In Lie group terms, this means that the Lie algebra of an orthogonal matrix group consists of skew-symmetric matrices. Going the other direction, the matrix exponential of any skew-symmetric matrix is an orthogonal matrix (in fact, special orthogonal).
For example, the three-dimensional object physics calls angular velocity is a differential rotation, thus a vector in the Lie algebra formula_23 tangent to SO(3). Given ω = ("xθ", "yθ", "zθ"), with v = ("x", "y", "z") being a unit vector, the correct skew-symmetric matrix form of ω is
formula_24
The exponential of this is the orthogonal matrix for rotation around axis v by angle θ; setting "c" = cos "θ"/2, "s" = sin "θ"/2,
formula_25
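The closed form can be checked numerically; the following NumPy sketch (illustrative, with an arbitrarily chosen axis and angle) builds the matrix above and verifies that it is special orthogonal and fixes the axis:

```python
import numpy as np

def rotation_from_axis_angle(v, theta):
    # The closed form above, with c = cos(theta/2) and s = sin(theta/2); v is a unit vector.
    x, y, z = v
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([
        [1 - 2*s*s + 2*x*x*s*s, 2*x*y*s*s - 2*z*s*c,   2*x*z*s*s + 2*y*s*c],
        [2*x*y*s*s + 2*z*s*c,   1 - 2*s*s + 2*y*y*s*s, 2*y*z*s*s - 2*x*s*c],
        [2*x*z*s*s - 2*y*s*c,   2*y*z*s*s + 2*x*s*c,   1 - 2*s*s + 2*z*z*s*s],
    ])

v = np.array([1.0, 2.0, 2.0]) / 3.0          # unit axis
R = rotation_from_axis_angle(v, theta=0.7)

print(np.allclose(R.T @ R, np.eye(3)))       # True: orthogonal
print(np.isclose(np.linalg.det(R), 1.0))     # True: determinant +1
print(np.allclose(R @ v, v))                 # True: the rotation axis is fixed
```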
Numerical linear algebra.
Benefits.
Numerical analysis takes advantage of many of the properties of orthogonal matrices for numerical linear algebra, and they arise naturally. For example, it is often desirable to compute an orthonormal basis for a space, or an orthogonal change of bases; both take the form of orthogonal matrices. Having determinant ±1 and all eigenvalues of magnitude 1 is of great benefit for numeric stability. One implication is that the condition number is 1 (which is the minimum), so errors are not magnified when multiplying with an orthogonal matrix. Many algorithms use orthogonal matrices like Householder reflections and Givens rotations for this reason. It is also helpful that, not only is an orthogonal matrix invertible, but its inverse is available essentially free, by exchanging indices.
Permutations are essential to the success of many algorithms, including the workhorse Gaussian elimination with partial pivoting (where permutations do the pivoting). However, they rarely appear explicitly as matrices; their special form allows more efficient representation, such as a list of n indices.
Likewise, algorithms using Householder and Givens matrices typically use specialized methods of multiplication and storage. For example, a Givens rotation affects only two rows of a matrix it multiplies, changing a full multiplication of order "n"^3 to a much more efficient order n. When uses of these reflections and rotations introduce zeros in a matrix, the space vacated is enough to store sufficient data to reproduce the transform, and to do so robustly. (Following , we do "not" store a rotation angle, which is both expensive and badly behaved.)
Decompositions.
A number of important matrix decompositions involve orthogonal matrices, including especially:
Examples.
Consider an overdetermined system of linear equations, as might occur with repeated measurements of a physical phenomenon to compensate for experimental errors. Write "A"x = b, where A is "m" × "n", "m" > "n".
A QR decomposition reduces A to upper triangular R. For example, if A is 5 × 3 then R has the form
formula_26
The linear least squares problem is to find the x that minimizes ‖"A"x − b‖, which is equivalent to projecting b to the subspace spanned by the columns of A. Assuming the columns of A (and hence R) are independent, the projection solution is found from "A"T"A"x = "A"Tb. Now "A"T"A" is square ("n" × "n") and invertible, and also equal to "R"T"R". But the lower rows of zeros in R are superfluous in the product, which is thus already in lower-triangular upper-triangular factored form, as in Gaussian elimination (Cholesky decomposition). Here orthogonality is important not only for reducing "A"T"A" = ("R"T"Q"T)"QR" to "R"T"R", but also for allowing solution without magnifying numerical problems.
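A minimal NumPy illustration of this use of QR for least squares (the data are arbitrary, and the comparison against a library solver is only a sanity check):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))                  # overdetermined: m = 5 equations, n = 3 unknowns
b = rng.normal(size=5)

Q, R = np.linalg.qr(A)                       # reduced QR: Q is 5 x 3, R is 3 x 3
x = np.linalg.solve(R, Q.T @ b)              # back-substitution on R x = Q^T b

# Agrees with the library least-squares solution of A x = b.
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))
```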
In the case of a linear system which is underdetermined, or an otherwise non-invertible matrix, singular value decomposition (SVD) is equally useful. With A factored as "U"Σ"V"T, a satisfactory solution uses the Moore-Penrose pseudoinverse, "V"Σ+"U"T, where Σ+ merely replaces each non-zero diagonal entry with its reciprocal. Set x to "V"Σ+"U"Tb.
The case of a square invertible matrix also holds interest. Suppose, for example, that A is a 3 × 3 rotation matrix which has been computed as the composition of numerous twists and turns. Floating point does not match the mathematical ideal of real numbers, so A has gradually lost its true orthogonality. A Gram–Schmidt process could orthogonalize the columns, but it is not the most reliable, nor the most efficient, nor the most invariant method. The polar decomposition factors a matrix into a pair, one of which is the unique "closest" orthogonal matrix to the given matrix, or one of the closest if the given matrix is singular. (Closeness can be measured by any matrix norm invariant under an orthogonal change of basis, such as the spectral norm or the Frobenius norm.) For a near-orthogonal matrix, rapid convergence to the orthogonal factor can be achieved by a "Newton's method" approach due to (1990), repeatedly averaging the matrix with its inverse transpose. has published an accelerated method with a convenient convergence test.
For example, consider a non-orthogonal matrix for which the simple averaging algorithm takes seven steps
formula_27
and which acceleration trims to two steps (with γ = 0.353553, 0.565685).
formula_28
Gram-Schmidt yields an inferior solution, shown by a Frobenius distance of 8.28659 instead of the minimum 8.12404.
formula_29
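The averaging iteration itself is only a few lines; the following Python/NumPy sketch (with an illustrative tolerance and iteration cap) reproduces the 2 × 2 example above:

```python
import numpy as np

def orthogonalize(M, tol=1e-12, max_iter=100):
    # Repeatedly average the matrix with its inverse transpose; for nonsingular M
    # this converges to the orthogonal factor of the polar decomposition,
    # i.e. the nearest orthogonal matrix.
    Q = np.asarray(M, dtype=float)
    for _ in range(max_iter):
        Q_next = 0.5 * (Q + np.linalg.inv(Q).T)
        if np.linalg.norm(Q_next - Q) < tol:
            return Q_next
        Q = Q_next
    return Q

print(np.round(orthogonalize([[3.0, 1.0], [7.0, 5.0]]), 6))
# [[ 0.8 -0.6]
#  [ 0.6  0.8]]
```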
Randomization.
Some numerical applications, such as Monte Carlo methods and exploration of high-dimensional data spaces, require generation of uniformly distributed random orthogonal matrices. In this context, "uniform" is defined in terms of Haar measure, which essentially requires that the distribution not change if multiplied by any freely chosen orthogonal matrix. Orthogonalizing matrices with independent uniformly distributed random entries does not result in uniformly distributed orthogonal matrices, but the QR decomposition of independent normally distributed random entries does, as long as the diagonal of R contains only positive entries . replaced this with a more efficient idea that later generalized as the "subgroup algorithm" (in which form it works just as well for permutations and rotations). To generate an ("n" + 1) × ("n" + 1) orthogonal matrix, take an "n" × "n" one and a uniformly distributed unit vector of dimension "n" + 1. Construct a Householder reflection from the vector, then apply it to the smaller matrix (embedded in the larger size with a 1 at the bottom right corner).
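A sketch of the QR-based recipe in Python/NumPy (the function name is ours; the sign correction enforces the positive diagonal of R mentioned above):

```python
import numpy as np

def haar_orthogonal(n, rng=None):
    # QR of a matrix of independent standard normal entries, with the columns of Q
    # rescaled so that the diagonal of R is positive, gives a Haar-distributed Q.
    if rng is None:
        rng = np.random.default_rng()
    Z = rng.normal(size=(n, n))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))

Q = haar_orthogonal(4)
print(np.allclose(Q.T @ Q, np.eye(4)))   # True
```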
Nearest orthogonal matrix.
The problem of finding the orthogonal matrix Q nearest a given matrix M is related to the Orthogonal Procrustes problem. There are several different ways to get the unique solution, the simplest of which is taking the singular value decomposition of M and replacing the singular values with ones. Another method expresses the R explicitly but requires the use of a matrix square root:
formula_30
This may be combined with the Babylonian method for extracting the square root of a matrix to give a recurrence which converges to an orthogonal matrix quadratically:
formula_31
where "Q"0 = "M".
These iterations are stable provided the condition number of M is less than three.
Using a first-order approximation of the inverse and the same initialization results in the modified iteration:
formula_32
formula_33
formula_34
Spin and pin.
A subtle technical problem afflicts some uses of orthogonal matrices. Not only are the group components with determinant +1 and −1 not connected to each other, even the +1 component, SO("n"), is not simply connected (except for SO(1), which is trivial). Thus it is sometimes advantageous, or even necessary, to work with a covering group of SO("n"), the spin group, Spin("n"). Likewise, O("n") has covering groups, the pin groups, Pin("n"). For "n" > 2, Spin("n") is simply connected and thus the universal covering group for SO("n"). By far the most famous example of a spin group is Spin(3), which is nothing but SU(2), or the group of unit quaternions.
The Pin and Spin groups are found within Clifford algebras, which themselves can be built from orthogonal matrices.
Rectangular matrices.
If Q is not a square matrix, then the conditions "Q"T"Q" = "I" and "QQ"T = "I" are not equivalent. The condition "Q"T"Q" = "I" says that the columns of "Q" are orthonormal. This can only happen if Q is an "m" × "n" matrix with "n" ≤ "m" (due to linear dependence). Similarly, "QQ"T = "I" says that the rows of Q are orthonormal, which requires "n" ≥ "m".
There is no standard terminology for these matrices. They are variously called "semi-orthogonal matrices", "orthonormal matrices", "orthogonal matrices", and sometimes simply "matrices with orthonormal rows/columns".
For the case "n" ≤ "m", matrices with orthonormal columns may be referred to as orthogonal k-frames and they are elements of the Stiefel manifold.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Q^\\mathrm{T} Q = Q Q^\\mathrm{T} = I,"
},
{
"math_id": 1,
"text": "Q^\\mathrm{T}=Q^{-1},"
},
{
"math_id": 2,
"text": "{\\mathbf u} \\cdot {\\mathbf v} = \\left(Q {\\mathbf u}\\right) \\cdot \\left(Q {\\mathbf v}\\right) "
},
{
"math_id": 3,
"text": "{\\mathbf v}^\\mathrm{T}{\\mathbf v} = (Q{\\mathbf v})^\\mathrm{T}(Q{\\mathbf v}) = {\\mathbf v}^\\mathrm{T} Q^\\mathrm{T} Q {\\mathbf v} ."
},
{
"math_id": 4,
"text": "\n\\begin{bmatrix}\n1 & 0 \\\\\n0 & 1 \\\\\n\\end{bmatrix}"
},
{
"math_id": 5,
"text": "\n\\begin{bmatrix}\n\\cos \\theta & -\\sin \\theta \\\\\n\\sin \\theta & \\cos \\theta \\\\\n\\end{bmatrix}"
},
{
"math_id": 6,
"text": "\n\\begin{bmatrix}\n1 & 0 \\\\\n0 & -1 \\\\\n\\end{bmatrix}"
},
{
"math_id": 7,
"text": "\n\\begin{bmatrix}\n0 & 0 & 0 & 1 \\\\\n0 & 0 & 1 & 0 \\\\\n1 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0\n\\end{bmatrix}"
},
{
"math_id": 8,
"text": "\\begin{bmatrix}\np & t\\\\\nq & u\n\\end{bmatrix},"
},
{
"math_id": 9,
"text": "\\begin{align}\n1 & = p^2+t^2, \\\\\n1 & = q^2+u^2, \\\\\n0 & = pq+tu.\n\\end{align}"
},
{
"math_id": 10,
"text": "\n\\begin{bmatrix}\n\\cos \\theta & -\\sin \\theta \\\\\n\\sin \\theta & \\cos \\theta \\\\\n\\end{bmatrix}\\text{ (rotation), }\\qquad\n\\begin{bmatrix}\n\\cos \\theta & \\sin \\theta \\\\\n\\sin \\theta & -\\cos \\theta \\\\\n\\end{bmatrix}\\text{ (reflection)}\n"
},
{
"math_id": 11,
"text": "\\begin{bmatrix}\n0 & 1\\\\\n1 & 0\n\\end{bmatrix}."
},
{
"math_id": 12,
"text": "\n\\begin{bmatrix}\n-1 & 0 & 0\\\\\n0 & -1 & 0\\\\\n0 & 0 & -1\n\\end{bmatrix}\\text{ and }\n\\begin{bmatrix}\n0 & -1 & 0\\\\\n1 & 0 & 0\\\\\n0 & 0 & -1\n\\end{bmatrix}"
},
{
"math_id": 13,
"text": "Q = I - 2 \\frac{{\\mathbf v}{\\mathbf v}^\\mathrm{T}}{{\\mathbf v}^\\mathrm{T}{\\mathbf v}} ."
},
{
"math_id": 14,
"text": "1=\\det(I)=\\det\\left(Q^\\mathrm{T}Q\\right)=\\det\\left(Q^\\mathrm{T}\\right)\\det(Q)=\\bigl(\\det(Q)\\bigr)^2 ."
},
{
"math_id": 15,
"text": "\\begin{bmatrix}\n2 & 0 \\\\\n0 & \\frac{1}{2}\n\\end{bmatrix}"
},
{
"math_id": 16,
"text": "\\begin{bmatrix}\n & & & 0\\\\\n & \\mathrm{O}(n) & & \\vdots\\\\\n & & & 0\\\\\n 0 & \\cdots & 0 & 1\n\\end{bmatrix}"
},
{
"math_id": 17,
"text": "(n-1) + (n-2) + \\cdots + 1 = \\frac{n(n-1)}{2}"
},
{
"math_id": 18,
"text": "P^\\mathrm{T}QP = \\begin{bmatrix}\nR_1 & & \\\\ & \\ddots & \\\\ & & R_k\n\\end{bmatrix}\\ (n\\text{ even}),\n\\ P^\\mathrm{T}QP = \\begin{bmatrix}\nR_1 & & & \\\\ & \\ddots & & \\\\ & & R_k & \\\\ & & & 1\n\\end{bmatrix}\\ (n\\text{ odd})."
},
{
"math_id": 19,
"text": "P^\\mathrm{T}QP = \\begin{bmatrix}\n\\begin{matrix}R_1 & & \\\\ & \\ddots & \\\\ & & R_k\\end{matrix} & 0 \\\\\n0 & \\begin{matrix}\\pm 1 & & \\\\ & \\ddots & \\\\ & & \\pm 1\\end{matrix} \\\\\n\\end{bmatrix},"
},
{
"math_id": 20,
"text": "Q^\\mathrm{T} Q = I "
},
{
"math_id": 21,
"text": "\\dot{Q}^\\mathrm{T} Q + Q^\\mathrm{T} \\dot{Q} = 0"
},
{
"math_id": 22,
"text": "\\dot{Q}^\\mathrm{T} = -\\dot{Q} ."
},
{
"math_id": 23,
"text": "\\mathfrak{so}(3)"
},
{
"math_id": 24,
"text": "\n\\Omega = \\begin{bmatrix}\n0 & -z\\theta & y\\theta \\\\\nz\\theta & 0 & -x\\theta \\\\\n-y\\theta & x\\theta & 0\n\\end{bmatrix} ."
},
{
"math_id": 25,
"text": "\\exp(\\Omega) = \\begin{bmatrix}\n1 - 2s^2 + 2x^2 s^2 & 2xy s^2 - 2z sc & 2xz s^2 + 2y sc\\\\\n2xy s^2 + 2z sc & 1 - 2s^2 + 2y^2 s^2 & 2yz s^2 - 2x sc\\\\\n2xz s^2 - 2y sc & 2yz s^2 + 2x sc & 1 - 2s^2 + 2z^2 s^2 \n\\end{bmatrix}."
},
{
"math_id": 26,
"text": "R = \\begin{bmatrix}\n\\cdot & \\cdot & \\cdot \\\\\n0 & \\cdot & \\cdot \\\\\n0 & 0 & \\cdot \\\\\n0 & 0 & 0 \\\\\n0 & 0 & 0\n\\end{bmatrix}."
},
{
"math_id": 27,
"text": "\\begin{bmatrix}3 & 1\\\\7 & 5\\end{bmatrix}\n\\rightarrow\n\\begin{bmatrix}1.8125 & 0.0625\\\\3.4375 & 2.6875\\end{bmatrix}\n\\rightarrow \\cdots \\rightarrow\n\\begin{bmatrix}0.8 & -0.6\\\\0.6 & 0.8\\end{bmatrix}"
},
{
"math_id": 28,
"text": "\\begin{bmatrix}3 & 1\\\\7 & 5\\end{bmatrix}\n\\rightarrow\n\\begin{bmatrix}1.41421 & -1.06066\\\\1.06066 & 1.41421\\end{bmatrix}\n\\rightarrow\n\\begin{bmatrix}0.8 & -0.6\\\\0.6 & 0.8\\end{bmatrix}"
},
{
"math_id": 29,
"text": "\\begin{bmatrix}3 & 1\\\\7 & 5\\end{bmatrix}\n\\rightarrow\n\\begin{bmatrix}0.393919 & -0.919145\\\\0.919145 & 0.393919\\end{bmatrix}"
},
{
"math_id": 30,
"text": "Q = M \\left(M^\\mathrm{T} M\\right)^{-\\frac 1 2}"
},
{
"math_id": 31,
"text": "Q_{n + 1} = 2 M \\left(Q_n^{-1} M + M^\\mathrm{T} Q_n\\right)^{-1}"
},
{
"math_id": 32,
"text": "N_{n} = Q_n^\\mathrm{T} Q_n"
},
{
"math_id": 33,
"text": "P_{n} = \\frac 1 2 Q_n N_{n}"
},
{
"math_id": 34,
"text": "Q_{n + 1} = 2 Q_n + P_n N_n - 3 P_n"
}
] | https://en.wikipedia.org/wiki?curid=105620 |
105622 | Euler's criterion | In number theory concerning primes
In number theory, Euler's criterion is a formula for determining whether an integer is a quadratic residue modulo a prime. Precisely,
Let "p" be an odd prime and "a" be an integer coprime to "p". Then
formula_0
Euler's criterion can be concisely reformulated using the Legendre symbol:
formula_1
The criterion dates from a 1748 paper by Leonhard Euler.
Proof.
The proof uses the fact that the residue classes modulo a prime number are a field. See the article prime field for more details.
Because the modulus is prime, Lagrange's theorem applies: a polynomial of degree k can only have at most k roots. In particular, "x"^2 ≡ "a" (mod "p") has at most 2 solutions for each a. This immediately implies that besides 0 there are at least ("p" − 1)/2 distinct quadratic residues modulo p: each of the "p" − 1 possible values of x can only be accompanied by one other to give the same residue.
In fact, formula_2 This is because formula_3
So, the formula_4 distinct quadratic residues are:
formula_5
As a is coprime to p, Fermat's little theorem says that
formula_6
which can be written as
formula_7
Since the integers mod p form a field, for each a, one or the other of these factors must be zero. Therefore,
formula_8 or
formula_9
Now if a is a quadratic residue, "a" ≡ "x"^2,
formula_10
So every quadratic residue (mod p) makes the first factor zero.
Applying Lagrange's theorem again, we note that there can be no more than ("p" − 1)/2 values of a that make the first factor zero. But as we noted at the beginning, there are at least ("p" − 1)/2 distinct quadratic residues (mod p) (besides 0). Therefore, they are precisely the residue classes that make the first factor zero. The other residue classes, the nonresidues, must make the second factor zero, or they would not satisfy Fermat's little theorem. This is Euler's criterion.
Alternative proof.
This proof only uses the fact that any congruence formula_11 has a unique (modulo formula_12) solution formula_13 provided formula_12 does not divide formula_14. (This is true because as formula_13 runs through all nonzero remainders modulo formula_12 without repetitions, so does formula_15—if we have formula_16, then formula_17, hence formula_18, but formula_19 and formula_20 aren't congruent modulo formula_12.) It follows from this fact that all nonzero remainders modulo formula_12 the square of which isn't congruent to formula_21 can be grouped into unordered pairs formula_22 according to the rule that the product of the members of each pair is congruent to formula_21 modulo formula_12 (since by this fact for every formula_23 we can find such an formula_13, uniquely, and vice versa, and they will differ from each other if formula_24 is not congruent to formula_21). If formula_21 is a quadratic nonresidue, this is simply a regrouping of all formula_25 nonzero residues into formula_26 pairs, hence we conclude that formula_27. If formula_21 is a quadratic residue, exactly two remainders were not among those paired, formula_28 and formula_29 such that formula_30. If we pair those two absent remainders together, their product will be formula_31 rather than formula_21, whence in this case formula_32. In summary, considering these two cases we have demonstrated that for formula_33 we have formula_34. It remains to substitute formula_35 (which is obviously a square) into this formula to obtain at once Wilson's theorem, Euler's criterion, and (by squaring both sides of Euler's criterion) Fermat's little theorem.
Examples.
Example 1: Finding primes for which "a" is a residue
Let "a" = 17. For which primes "p" is 17 a quadratic residue?
We can test prime "p"'s manually given the formula above.
In one case, testing "p" = 3, we have 17^((3 − 1)/2) = 17^1 ≡ 2 ≡ −1 (mod 3), therefore 17 is not a quadratic residue modulo 3.
In another case, testing "p" = 13, we have 17^((13 − 1)/2) = 17^6 ≡ 1 (mod 13), therefore 17 is a quadratic residue modulo 13. As confirmation, note that 17 ≡ 4 (mod 13), and 2^2 = 4.
We can do these calculations faster by using various modular arithmetic and Legendre symbol properties.
If we keep calculating the values, we find:
(17/"p") = +1 for "p" = {13, 19, ...} (17 is a quadratic residue modulo these values)
(17/"p") = −1 for "p" = {3, 5, 7, 11, 23, ...} (17 is not a quadratic residue modulo these values).
Example 2: Finding residues given a prime modulus "p"
Which numbers are squares modulo 17 (quadratic residues modulo 17)?
We can manually calculate it as:
1^2 = 1
2^2 = 4
3^2 = 9
4^2 = 16
5^2 = 25 ≡ 8 (mod 17)
6^2 = 36 ≡ 2 (mod 17)
7^2 = 49 ≡ 15 (mod 17)
8^2 = 64 ≡ 13 (mod 17).
So the set of the quadratic residues modulo 17 is {1,2,4,8,9,13,15,16}. Note that we did not need to calculate squares for the values 9 through 16, as they are all negatives of the previously squared values (e.g. 9 ≡ −8 (mod 17), so 9^2 ≡ (−8)^2 = 64 ≡ 13 (mod 17)).
We can find quadratic residues or verify them using the above formula. To test if 2 is a quadratic residue modulo 17, we calculate 2^((17 − 1)/2) = 2^8 ≡ 1 (mod 17), so it is a quadratic residue. To test if 3 is a quadratic residue modulo 17, we calculate 3^((17 − 1)/2) = 3^8 ≡ 16 ≡ −1 (mod 17), so it is not a quadratic residue.
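These calculations reduce to modular exponentiation; an illustrative Python check of the two examples above:

```python
p = 17
# The quadratic residues modulo 17, computed directly from the squares:
print(sorted({pow(x, 2, p) for x in range(1, p)}))   # [1, 2, 4, 8, 9, 13, 15, 16]

# Euler's criterion: a^((p-1)/2) mod p is 1 for residues and p - 1 (i.e. -1) otherwise.
print(pow(2, (p - 1) // 2, p))   # 1  -> 2 is a quadratic residue modulo 17
print(pow(3, (p - 1) // 2, p))   # 16 -> 3 is not a quadratic residue modulo 17
```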
Euler's criterion is related to the law of quadratic reciprocity.
Applications.
In practice, it is more efficient to use an extended variant of Euclid's algorithm to calculate the Jacobi symbol formula_36. If formula_37 is an odd prime, this is equal to the Legendre symbol, and decides whether formula_21 is a quadratic residue modulo formula_37.
On the other hand, since the equivalence of formula_38 to the Jacobi symbol holds for all odd primes, but not necessarily for composite numbers, calculating both and comparing them can be used as a primality test, specifically the Solovay–Strassen primality test. Composite numbers for which the congruence holds for a given formula_21 are called Euler–Jacobi pseudoprimes to base formula_21.
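A sketch of one round of such a test in Python follows; the `jacobi` helper is the standard binary Jacobi-symbol algorithm, and the chosen test values are only examples:

```python
def jacobi(a, n):
    # Jacobi symbol (a/n) for odd n > 0, by the standard binary algorithm.
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def euler_jacobi_round(n, a):
    # One round of Solovay-Strassen: compare a^((n-1)/2) with the Jacobi symbol (a/n).
    j = jacobi(a, n)
    return j != 0 and pow(a, (n - 1) // 2, n) == j % n

print(euler_jacobi_round(341, 2))   # False: the Fermat pseudoprime 341 = 11 * 31 is caught
print(euler_jacobi_round(561, 2))   # True: 561 passes, an Euler-Jacobi pseudoprime to base 2
```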
Notes.
<templatestyles src="Reflist/styles.css" />
References.
The "Disquisitiones Arithmeticae" has been translated from Gauss's Ciceronian Latin into English and German. The German edition includes all of his papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes. | [
{
"math_id": 0,
"text": "\na^{\\tfrac{p-1}{2}} \\equiv\n\\begin{cases}\n\\;\\;\\,1\\pmod{p}& \\text{ if there is an integer }x \\text{ such that }x^2\\equiv a \\pmod{p},\\\\\n -1\\pmod{p}& \\text{ if there is no such integer.}\n\\end{cases}\n"
},
{
"math_id": 1,
"text": "\n\\left(\\frac{a}{p}\\right) \\equiv a^{\\tfrac{p-1}{2}} \\pmod p.\n"
},
{
"math_id": 2,
"text": " (p-x)^{2}\\equiv x^{2} \\pmod p."
},
{
"math_id": 3,
"text": " (p-x)^{2} \\equiv p^{2}-{2}{x}{p}+x^{2} \\equiv x^{2} \\pmod p."
},
{
"math_id": 4,
"text": " \\tfrac{p-1}{2}"
},
{
"math_id": 5,
"text": "1^{2}, 2^{2}, ... , (\\tfrac{p-1}{2})^{2} \\pmod p. "
},
{
"math_id": 6,
"text": "\na^{p-1}\\equiv 1 \\pmod p,\n"
},
{
"math_id": 7,
"text": "\n\\left( a^{\\tfrac{p-1}{2}}-1 \\right)\\left( a^{\\tfrac{p-1}{2}}+1 \\right) \\equiv 0 \\pmod p.\n"
},
{
"math_id": 8,
"text": "\na^{\\tfrac{p-1}{2}}\\equiv 1\\pmod p\n"
},
{
"math_id": 9,
"text": "\na^{\\tfrac{p-1}{2}} \\equiv {-1}\\pmod p.\n"
},
{
"math_id": 10,
"text": "\na^{\\tfrac{p-1}{2}}\\equiv {(x^2)}^{\\tfrac{p-1}{2}} \\equiv x^{p-1}\\equiv1\\pmod p.\n"
},
{
"math_id": 11,
"text": "kx\\equiv l\\!\\!\\! \\pmod p"
},
{
"math_id": 12,
"text": "p"
},
{
"math_id": 13,
"text": "x"
},
{
"math_id": 14,
"text": "k"
},
{
"math_id": 15,
"text": "kx"
},
{
"math_id": 16,
"text": "kx_1\\equiv kx_2 \\pmod p"
},
{
"math_id": 17,
"text": "p\\mid k(x_1-x_2)"
},
{
"math_id": 18,
"text": "p\\mid (x_1-x_2)"
},
{
"math_id": 19,
"text": "x_1"
},
{
"math_id": 20,
"text": "x_2"
},
{
"math_id": 21,
"text": "a"
},
{
"math_id": 22,
"text": "(x,y)"
},
{
"math_id": 23,
"text": "y"
},
{
"math_id": 24,
"text": "y^2"
},
{
"math_id": 25,
"text": "p-1"
},
{
"math_id": 26,
"text": "(p-1)/2"
},
{
"math_id": 27,
"text": "1\\cdot2\\cdot ... \\cdot (p-1)\\equiv a^{\\frac{p-1}{2}} \\!\\!\\! \\pmod p"
},
{
"math_id": 28,
"text": "r"
},
{
"math_id": 29,
"text": "-r"
},
{
"math_id": 30,
"text": "r^2\\equiv a\\!\\!\\! \\pmod p"
},
{
"math_id": 31,
"text": "-a"
},
{
"math_id": 32,
"text": "1\\cdot2\\cdot ... \\cdot (p-1)\\equiv -a^{\\frac{p-1}{2}} \\!\\!\\! \\pmod p"
},
{
"math_id": 33,
"text": "a\\not\\equiv 0 \\!\\!\\! \\pmod p"
},
{
"math_id": 34,
"text": "1\\cdot2\\cdot ... \\cdot (p-1)\\equiv -\\left(\\frac{a}{p}\\right)a^{\\frac{p-1}{2}} \\!\\!\\! \\pmod p"
},
{
"math_id": 35,
"text": "a=1"
},
{
"math_id": 36,
"text": "\\left(\\frac{a}{n}\\right)"
},
{
"math_id": 37,
"text": "n"
},
{
"math_id": 38,
"text": "a^\\frac{n-1}{2}"
}
] | https://en.wikipedia.org/wiki?curid=105622 |
1056496 | Leaky bucket | Network traffic shaping and policing algorithm
The leaky bucket is an algorithm based on an analogy of how a bucket with a constant leak will overflow if either the average rate at which water is poured in exceeds the rate at which the bucket leaks or if more water than the capacity of the bucket is poured in all at once. It can be used to determine whether some sequence of discrete events conforms to defined limits on their average and peak rates or frequencies, e.g. to limit the actions associated with these events to these rates or delay them until they do conform to the rates. It may also be used to check conformance or limit to an average rate alone, i.e. remove any variation from the average.
It is used in packet-switched computer networks and telecommunications networks in traffic policing, traffic shaping and the scheduling of data transmissions, in the form of packets, to defined limits on bandwidth and burstiness (a measure of the variations in the traffic flow).
A version of the leaky bucket, the generic cell rate algorithm, is recommended for Asynchronous Transfer Mode (ATM) networks in UPC and NPC at user–network interfaces or inter-network interfaces or network-to-network interfaces to protect a network from excessive traffic levels on connections routed through it. The generic cell rate algorithm, or an equivalent, may also be used to shape transmissions by a network interface card onto an ATM network.
At least some implementations of the leaky bucket are a mirror image of the token bucket algorithm and will, given equivalent parameters, determine exactly the same sequence of events to conform or not conform to the same limits.
Overview.
Two different methods of applying this leaky bucket analogy are described in the literature. These give what appear to be two different algorithms, both of which are referred to as the leaky bucket algorithm and generally without reference to the other method. This has resulted in confusion about what the leaky bucket algorithm is and what its properties are.
In one version the bucket is a counter or variable separate from the flow of traffic or schedule of events. This counter is used only to check that the traffic or events conform to the limits: The counter is incremented as each packet arrives at the point where the check is being made or an event occurs, which is equivalent to the way water is added intermittently to the bucket. The counter is also decremented at a fixed rate, equivalent to the way the water leaks out of the bucket. As a result, the value in the counter represents the level of the water in the bucket. If the counter remains below a specified limit value when a packet arrives or an event occurs, i.e. the bucket does not overflow, that indicates its conformance to the bandwidth and burstiness limits or the average and peak rate event limits. This version is referred to here as " the leaky bucket as a meter".
In the second version, the bucket is a queue in the flow of traffic. This queue is used to directly control that flow: Packets are entered into the queue as they arrive, equivalent to the water being added to the bucket. These packets are then removed from the queue (first come, first served), usually at a fixed rate, e.g. for onward transmission, equivalent to water leaking from the bucket. This configuration imposes conformance rather than checking it, and where the output is serviced at a fixed rate (and where the packets are all the same length), the resulting traffic stream is necessarily devoid of burstiness or jitter. So in this version, the traffic itself is the analogue of the water passing through the bucket. This version is referred to here as "leaky bucket as a queue".
The leaky bucket as a meter is exactly equivalent to (a mirror image of) the token bucket algorithm, i.e. the process of adding water to the leaky bucket exactly mirrors that of removing tokens from the token bucket when a conforming packet arrives, the process of leaking of water from the leaky bucket exactly mirrors that of regularly adding tokens to the token bucket, and the test that the leaky bucket will not overflow is a mirror of the test that the token bucket contains enough tokens and will not "underflow". Thus, given equivalent parameters, the two algorithms will see the same traffic as conforming or nonconforming. The leaky bucket as a queue can be seen as a special case of the leaky bucket as a meter.
As a meter.
Jonathan S. Turner is credited with the original description of the leaky bucket algorithm and describes it as follows: "A counter associated with each user transmitting on a connection is incremented whenever the user sends a packet and is decremented periodically. If the counter exceeds a threshold upon being incremented, the network discards the packet. The user specifies the rate at which the counter is decremented (this determines the average bandwidth) and the value of the threshold (a measure of burstiness)". The bucket (analogous to the counter) is, in this case, used as a meter to test the conformance of packets, rather than as a queue to directly control them.
Another description of what is essentially the same meter version of the algorithm, the generic cell rate algorithm, is given by the ITU-T in recommendation I.371 and in the ATM Forum's UNI Specification. The description, in which the term "cell" is equivalent to "packet" in Turner's description is given by the ITU-T as follows: "The continuous-state leaky bucket can be viewed as a finite capacity bucket whose real-valued content drains out at a continuous rate of 1 unit of content per time unit and whose content is increased by the increment "T" for each conforming cell... If at a cell arrival the content of the bucket is less than or equal to the limit value "τ", then the cell is conforming; otherwise, the cell is non-conforming. The capacity of the bucket (the upper bound of the counter) is ("T" + "τ")". These specifications also state that, due to its finite capacity, if the contents of the bucket at the time the conformance is tested is greater than the limit value, and hence the cell is non-conforming, then the bucket is left unchanged; that is, the water is simply not added if it would make the bucket overflow.
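The continuous-state description above translates almost directly into code. The sketch below is only illustrative (the class and method names are assumptions of this example, not part of the recommendation): it drains the bucket at 1 unit of content per unit time, adds "T" for each conforming cell, and declares a cell conforming when the content found on arrival is at most the limit value "τ".

```python
# A minimal sketch of the continuous-state leaky bucket described above.
# The bucket drains at 1 unit of content per unit time, is incremented by T
# for each conforming cell, and a cell conforms if the content found on
# arrival is less than or equal to the limit value tau; a non-conforming
# cell leaves the bucket unchanged.
class ContinuousStateLeakyBucket:
    def __init__(self, T, tau):
        self.T = T                # increment per conforming cell (emission interval)
        self.tau = tau            # limit value (delay variation tolerance)
        self.content = 0.0        # current bucket content
        self.last_time = 0.0      # time of the last arrival processed

    def conforms(self, arrival_time):
        # Drain at 1 unit per time unit since the last arrival, never below empty.
        self.content = max(0.0, self.content - (arrival_time - self.last_time))
        self.last_time = arrival_time
        if self.content <= self.tau:      # conforming: add this cell's water
            self.content += self.T
            return True
        return False                      # non-conforming: bucket unchanged

# Cells arriving exactly T apart always conform; one arriving tau early still does.
meter = ContinuousStateLeakyBucket(T=10.0, tau=2.0)
print([meter.conforms(t) for t in (0.0, 10.0, 18.0, 19.0)])  # [True, True, True, False]
```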
David E. McDysan and Darrel L. Spohn provide a commentary on the description given by the ITU-T/ATM Forum. In this, they state, "In the leaky bucket analogy, the [ATM] cells do not actually flow through the bucket; only the check for conforming admission does". However, uncommonly in the descriptions in the literature, McDysan and Spohn also refer to the leaky bucket algorithm as a queue, going on, "Note that one implementation of traffic shaping is to actually have the cells flow through the bucket".
In describing the operation of the ITU-T's version of the algorithm, McDysan and Spohn invoke a "notion commonly employed in queueing theory of a fictional "gremlin"". This gremlin inspects the level in the bucket and takes action if the level is above the limit value "τ": in "traffic policing", it pulls open a trap door, which causes the arriving packet to be dropped and stops its water from entering the bucket; in "traffic shaping", it pushes up a flap, which delays the arriving packet and prevents it from delivering its water, until the water level in the bucket falls below "τ".
The difference between the descriptions given by Turner and the ITU-T/ATM Forum is that Turner's is specific to traffic policing, whereas the ITU-T/ATM Forum description is applicable to both traffic policing and traffic shaping. Also, Turner does not state that the contents of the counter should only be affected by conforming packets, and should only be incremented when this would not cause it to exceed a limit, i.e. Turner does not explicitly state that the bucket's capacity or counter's maximum value is finite.
Concept of operation.
The concept of operation of the leaky bucket algorithm as a meter, which can be used in either traffic policing or traffic shaping, may be described as follows: a fixed capacity bucket, associated with each virtual connection or user, leaks at a fixed rate. If the bucket is empty, it stops leaking. For a packet to conform, it has to be possible to add a specific amount of water to the bucket: the specific amount added by a conforming packet can be the same for all packets or can be proportional to the length of the packet. If this amount of water would cause the bucket to overflow then the packet does not conform and the water in the bucket is left unchanged.
Uses.
The leaky bucket as a meter can be used in either traffic shaping or traffic policing. For example, in ATM networks, in the form of the generic cell rate algorithm, it is used to compare the bandwidth and burstiness of traffic on a virtual channel (VC) or virtual path (VP) against the specified limits on the rate at which cells may arrive and the maximum jitter, or variation in inter-arrival intervals, for the VC or VP. In traffic policing, cells that do not conform to these limits (nonconforming cells) may be dropped or may be reduced in priority for downstream traffic management functions to drop if there is congestion. In traffic shaping, cells are delayed until they conform. Traffic policing and traffic shaping are commonly used in UPC and NPC to protect the network against excess or excessively bursty traffic. (See bandwidth management and congestion avoidance.) Traffic shaping is commonly used in the network interfaces in hosts to prevent transmissions from exceeding the bandwidth or jitter limits and being discarded by traffic management functions in the network. (See scheduling (computing) and network scheduler.)
The leaky bucket algorithm as a meter can also be used in a leaky bucket counter to measure the rate of random (stochastic) processes. A leaky bucket counter can be used to detect, by its overflowing, when the average or peak rate of events increases above some acceptable background level. For example, such a leaky bucket counter can be used to detect when there is a sudden burst of correctable memory errors or when there has been a gradual, but significant, increase in the average rate, which may indicate an impending correction failure.
The use of the leaky bucket algorithm in a leaky bucket counter is similar to that in traffic management, in that it is used as a meter. Essentially, the events replace the packets in the description, with each event causing a quantity of water to be added to the bucket. If the bucket would overflow, as a result of the event, then the event should trigger the action associated with an out-of-limits event. Some implementations seem to parallel Turner's description, in that there is no explicit limit on the maximum value that the counter may take, implying that once the counter has exceeded the threshold, it may not return to its previous state until a period significantly greater than the equivalent of the emission interval has passed, which may be increased by what would otherwise be conforming events. However, other implementations may not increment the counter while it is overflowed, allowing it to correctly determine whether the following events conform or not.
Parameters.
In the case of the leaky bucket algorithm as a meter, the limits on the traffic can be bandwidth and a burstiness of the output. The bandwidth limit and burstiness limit for the connection may be specified in a traffic contract. A bandwidth limit may be specified as a packet rate, a bit rate, or as an emission interval between the packets. A limit on burstiness may be specified as a delay variation tolerance, or as a maximum burst size (MBS).
Multiple sets of contract parameters can be applied concurrently to a connection using multiple instances of the leaky bucket algorithm, each of which may take a bandwidth and a burstiness limit.
Emission interval.
The rate at which the bucket leaks will determine the bandwidth limit, which is referred to as the average rate by Turner and the inverse of which is referred to as the emission interval by the ITU-T. It is easiest to explain what this interval is where packets have a fixed length. Hence, the first part of this description assumes this, and the implications of variable packet lengths are considered separately.
Consider a bucket that is exactly filled to the top by preceding traffic, i.e. when the maximum permitted burstiness has already occurred, i.e. the maximum number of packets or cells have just arrived in the minimum amount of time for them to still conform to the bandwidth and jitter limits. The minimum interval before the next packet can conform is then the time it takes for the bucket to leak exactly the amount of water delivered by a packet, and if a packet is tested and conforms at that time, this will exactly fill the bucket once more. Thus, once the bucket is filled, the maximum rate that packets can conform is with this interval between each packet.
Turner refers to this rate as the average, implying that its inverse is the average interval. There is, however, some ambiguity in what the average rate and interval are. Since packets can arrive at any lower rate, this is an upper bound, rather than a fixed value, so it could at best be called the maximum for the average rate. Also, during the time the maximum burstiness occurs, packets can arrive at smaller intervals and thus at a higher rate than this. So, for any period less than infinity, the actual average rate can be (but is not necessarily) greater than this and the average interval can be (but is not necessarily) less than the emission interval. Hence, because of this ambiguity, the term emission interval is used hereafter. However, it is still true that the minimum value that the long-term average interval can take tends to be the emission interval.
For variable-length packets, where the amount added to the bucket is proportional to the packet length, the maximum rate at which they can conform varies according to their length: the amount that the bucket must have leaked from full for a packet to conform is the amount the packet will add, and if this is proportional to the packet length, so is the interval between it and the preceding packet that filled the bucket. Hence, it is not possible to specify a specific emission interval for variable-length packets, and the bandwidth limit has to be specified explicitly, in bits or bytes per second.
Delay variation tolerance.
It is easiest to explain delay variation tolerance for the case where packets have a fixed length. Hence, the first part of this description assumes this, and the implications of variable packet lengths are considered separately.
The ITU-T defines a limit value, "τ", that is less than the capacity of the bucket by "T" (the amount by which the bucket contents is incremented for each conforming cell), so that the capacity of the bucket is "T" + "τ". This limit value specifies how much earlier a packet can arrive than it would normally be expected if the packets were arriving with exactly the emission interval between them.
Imagine the following situation: A bucket leaks at 1 unit of water per second, so the limit value, "τ" and the amount of water added by a packet, "T", are effectively in units of seconds. This bucket starts off empty, so when a packet arrives at the bucket, it does not quite fill the bucket by adding its water "T", and the bucket is now "τ" below its capacity. So when the next packet arrives the bucket only has to have drained by "T" – "τ" for this to conform. So the interval between these two packets can be as much as "τ" less than "T".
This extends to multiple packets in a sequence: Imagine the following: The bucket again starts off empty, so the first packet to arrive clearly conforms. The bucket then becomes exactly full after a number of conforming packets, "N", have arrived in the minimum possible time for them to conform. For the last (the "N"th) packet to conform, the bucket must have leaked enough of the water from the preceding "N" – 1 packets (("N" – 1)×"T" seconds' worth) for it to be exactly at the limit value "τ" at this time. Hence, the water leaked is ("N" – 1)×"T" – "τ", which because the leak is one unit per second, took exactly ("N" – 1)×"T" – "τ" seconds to leak. Thus the shortest time in which all "N" packets can arrive and conform is ("N" – 1)×"T" – "τ" seconds, which is exactly "τ" less than the time it would have taken if the packets had been arriving at exactly the emission interval.
However, packets can only arrive with intervals less than "T" when the bucket is not filled by the previous packet. If it is filled, then the bucket must have drained by the full amount "T" before the next packet conforms. So once this bucket space has been used up by packets arriving at less than "T", subsequent packets must arrive at intervals no less than "T". They may, however, arrive at greater intervals, when the bucket will not be filled by them. Since the bucket stops leaking when it is empty, there is always a limit ("τ") to how much tolerance can be accrued by these intervals greater than "T".
Since the limit value "τ" defines how much earlier a packet can arrive than would be expected, it is the limit on the difference between the maximum and minimum delays from the source to the point where the conformance test is being made (assuming the packets are generated with no jitter). Hence, the use of the term cell delay variation tolerance (CDVt) for this parameter in ATM.
As an example, a possible source of delay variation is where a number of streams of packets are multiplexed together at the output of a switch. Assuming that the sum of the bandwidths of these connections is less than the capacity of the output, all of the packets that arrive can be transmitted eventually. However, if their arrivals are independent, e.g. because they arrive at different inputs of the switch, then several may arrive at or nearly at the same time. Since the output can only transmit one packet at a time, the others must be queued in a buffer until it is their turn to be transmitted. This buffer then introduces an additional delay between a packet arriving at an input and being transmitted by the output, and this delay varies, depending on how many other packets are already queued in the buffer. A similar situation can occur at the output of a host (in the network interface controller) when multiple packets have the same or similar release times, and this delay can usually be modelled as a delay in a virtual output buffer.
For variable length packets, where the amount of water added by a given packet is proportional to its length, "τ" can not be seen as a limit on how full the bucket can be when a packet arrives, as this varies depending on the packet size. However, the time it takes to drain from this level to empty is still how much earlier a packet can arrive than is expected when packets are transmitted at the bandwidth limit. Thus, it is still the maximum variation in transfer delay to the point where the conformance test is being applied that can be tolerated, and thus the tolerance on maximum delay variation.
Maximum burst size.
The limit value or delay variation tolerance also controls how many packets can arrive in a burst, determined by the excess depth of the bucket over the capacity required for a single packet. Hence MBS is also a measure of burstiness or jitter, and it is possible to specify the burstiness as an MBS and derive the limit value "τ" from this or to specify it as a jitter or delay variation tolerance or limit value, and derive the MBS from this.
A burst or clump of packets can arrive at a higher rate than determined by the emission interval "T". This may be the line rate of the physical layer connection when the packets in the burst will arrive back-to-back. However, as in ATM, the tolerance may be applied to a lower rate, in that case the Sustainable Cell Rate (SCR), and the burst of packets (cells) can arrive at a higher rate, but less than the line rate of the physical layer, in that case, the Peak Cell Rate (PCR). The MBS may then be the number of cells needed to transport a higher layer packet (see segmentation and reassembly), where the packets are transmitted with a maximum bandwidth determined by the SCR and cells within the packets are transmitted at the PCR; thus allowing the last cell of the packet, and the packet itself, to arrive significantly earlier than it would if the cells were sent at the SCR: transmission duration = (MBS-1)/PCR rather than (MBS-1)/SCR. This bursting at the PCR puts a significantly higher load on shared resources, e.g. switch output buffers, than does transmission at the SCR, and is thus more likely to result in buffer overflows and network congestion. However, it puts a lesser load on these resources than would transmitting at the SCR with a limit value, "τSCR", that allows MBS cells to be transmitted and arrive back-to-back at the line rate.
If the limit value is large enough, then several packets can arrive in a burst and still conform: if the bucket starts from empty, the first packet to arrive will add "T", but if, by the time the next packet arrives, the content is below "τ", this will also conform. Assuming that each packet takes "δ" to arrive, then if "τ" (expressed as the time it takes the bucket to empty from the limit value) is equal to or greater than the emission interval less the minimum interarrival time, "T" – "δ", the second packet will conform even if it arrives as a burst with the first. Similarly, if "τ" is equal to or greater than ("T" – "δ") × 2, then 3 packets can arrive in a burst, etc.
The maximum size of this burst, "M", can be calculated from the emission interval, "T"; the maximum jitter tolerance, "τ"; and the time taken to transmit/receive a packet, "δ", as follows:
formula_0
Equally, the minimum value of jitter tolerance "τ" that gives a specific MBS can be calculated from the MBS as follows:
formula_1
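As a small numerical illustration of these two formulae (the values of "T", "δ" and "τ" below are arbitrary):

```python
from math import floor

# Illustrative values only: emission interval T, per-packet arrival time delta,
# and jitter tolerance tau, all in the same arbitrary time units.
T, delta, tau = 10.0, 1.0, 18.0

M = floor(1 + tau / (T - delta))     # maximum burst size for this tolerance
print(M)                             # 3

tau_min = (M - 1) * (T - delta)      # smallest tolerance giving that burst size
print(tau_min)                       # 18.0
```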
In the case of ATM, where technically MBS only relates to the SCR tolerance, in the above equation the time it takes each packet to arrive, "δ", is the emission interval for cells at the PCR "TPCR", and the emission interval, "T", is the emission interval for the SCR "TSCR". Where MBS is to be the number of cells required to transport a segmented packet, the limit value in the above, "τ", should be that for the SCR "τSCR". However, at the UNI or an NNI, where cells at the PCR will have been subjected to delay variation, it should be the limit value for the SCR plus that for the PCR "τSCR" + "τPCR".
For variable length packets, the maximum burst size will depend on the lengths of the packets in the burst and there is no single value for the maximum burst size. However, it is possible to specify the total burst length in bytes, from the byte rate of the input stream, the equivalent byte rate of the leak, and the bucket depth.
Comparison with the token bucket algorithm.
The leaky bucket algorithm is sometimes contrasted with the token bucket algorithm. However, the above concept of operation for the leaky bucket as a meter may be directly compared with the token bucket algorithm, the description of which is given in that article as the following:
*A token is added to the bucket every 1/"r" seconds.
*The bucket can hold at the most "b" tokens. If a token arrives when the bucket is full, it is discarded.
*When a packet (network layer PDU) ["sic"] of "n" bytes arrives, "n" tokens are removed from the bucket, and the packet is sent to the network.
*If fewer than "n" tokens are available, no tokens are removed from the bucket, and the packet is considered to be non-conformant.
This can be compared with the concept of operation, repeated from above:
*A fixed capacity bucket, associated with each virtual connection or user, leaks at a fixed rate.
* If the bucket is empty, it stops leaking.
*For a packet to conform, it has to be possible to add a specific amount of water to the bucket: The specific amount added by a conforming packet can be the same for all packets, or can be proportional to the length of the packet.
*If this amount of water would cause the bucket to exceed its capacity then the packet does not conform and the water in the bucket is left unchanged.
As can be seen, these two descriptions are essentially mirror images of one another: one adds something to the bucket on a regular basis and takes something away for conforming packets down to a limit of zero; the other takes away regularly and adds for conforming packets up to a limit of the bucket's capacity. So, is an implementation that adds tokens for a conforming packet and removes them at a fixed rate an implementation of the leaky bucket or of the token bucket? Similarly, which algorithm is used in an implementation that removes water for a conforming packet and adds water at a fixed rate? In fact both are effectively the same, i.e. implementations of both the leaky bucket and token bucket, as these are the same basic algorithm described differently. This explains why, given equivalent parameters, the two algorithms will see exactly the same packets as conforming or nonconforming. The differences in the properties and performance of implementations of the leaky and token bucket algorithms thus result entirely from the differences in the implementations, i.e. they do not stem from differences in the underlying algorithms.
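This mirror-image relationship can be made concrete with a short sketch. Both meters below are fed the same arrival times and, given equivalent parameters, return identical conformance decisions; the function names, the continuous leak/fill between arrivals and the choice of testing conformance only on arrival are assumptions of this example rather than part of any standard.

```python
# Two meters with equivalent parameters: the leaky bucket adds 'increment' per
# conforming packet and leaks at 'rate', with capacity 'depth'; the token bucket
# removes 'increment' per conforming packet and fills at 'rate' up to 'depth'.
# Being mirror images, they make identical decisions packet for packet.
def leaky_bucket_meter(arrivals, rate, depth, increment):
    level, last, out = 0.0, 0.0, []
    for t in arrivals:
        level = max(0.0, level - rate * (t - last))
        last = t
        ok = level + increment <= depth     # would adding this packet overflow the bucket?
        if ok:
            level += increment
        out.append(ok)
    return out

def token_bucket_meter(arrivals, rate, depth, increment):
    tokens, last, out = depth, 0.0, []
    for t in arrivals:
        tokens = min(depth, tokens + rate * (t - last))
        last = t
        ok = tokens >= increment            # are there enough tokens for this packet?
        if ok:
            tokens -= increment
        out.append(ok)
    return out

arrivals = [0.0, 0.5, 1.0, 4.0, 4.1, 4.2, 9.0]
a = leaky_bucket_meter(arrivals, rate=1.0, depth=2.0, increment=1.0)
b = token_bucket_meter(arrivals, rate=1.0, depth=2.0, increment=1.0)
print(a == b, a)  # True [True, True, True, True, True, False, True]
```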
The points to note are that the leaky bucket algorithm, when used as a meter, can allow a conforming output packet stream with jitter or burstiness, can be used in traffic policing as well as shaping, and can be implemented for variable-length packets.
As a queue.
The leaky bucket as a queue is essentially a way of describing a simple FIFO buffer or queue that is serviced at a fixed rate to remove burstiness or jitter. A description of it is given by Andrew S. Tanenbaum, in (an older version of) his book "Computer Networks" as "The leaky bucket consists of a finite queue. When a packet arrives, if there is room on the queue it is appended to the queue; otherwise it is discarded. At every clock tick one packet is transmitted (unless the queue is empty)". An implementation of the leaky bucket as a queue is therefore always a form of traffic shaping function.
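Tanenbaum's description corresponds to a simple fixed-rate service loop. The sketch below is a discrete simulation for fixed-length packets in which packets arriving during a tick are enqueued (if there is room) before that tick's single transmission; that ordering, the names and the example data are illustrative choices, not part of the original description.

```python
from collections import deque

# Leaky bucket as a queue for fixed-length packets: arrivals are appended if
# there is room on the queue, otherwise discarded, and exactly one packet is
# transmitted per clock tick unless the queue is empty.
def leaky_bucket_queue(arrivals_per_tick, queue_size):
    queue, output = deque(), []
    for tick, arrivals in enumerate(arrivals_per_tick):
        for packet in arrivals:
            if len(queue) < queue_size:
                queue.append(packet)                 # room on the queue: enqueue
            # otherwise the packet is discarded
        if queue:
            output.append((tick, queue.popleft()))   # one packet per tick
    return output

# A bursty input (three packets in tick 0) leaves as a smooth one-per-tick stream;
# packet "c" is discarded because the queue only holds two packets.
print(leaky_bucket_queue([["a", "b", "c"], [], ["d"], []], queue_size=2))
# [(0, 'a'), (1, 'b'), (2, 'd')]
```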
As can be seen this implementation is restricted in that the packets are only ever transmitted at a fixed rate. To underline this, Tanenbaum also states that "The leaky bucket algorithm enforces a rigid output pattern at the average rate, no matter how bursty the [input] traffic is". However, this assertion is only strictly true as long as the queue does not become empty: if the average arrival rate is less than the rate of clock ticks, or if the input is sufficiently bursty that the losses bring the rate of the remainder below the clock tick rate (i.e. gaps in the input stream are long enough and the queue is small enough that it can become empty), there will be gaps in the output stream.
A further restriction is that the leaky bucket as a queue traffic shaping function only transmits packets on the ticks; hence, if it is used within a network, equivalent to UPC and NPC, it also imposes a fixed phase on the onward transmission of packets. Whereas, when using a leaky bucket meter to control onward transmission, a packet is transmitted as soon as it conforms, i.e. relative to the previous one or, if it already conforms, its arrival time; not according to some arbitrary local clock. Perversely, in the context of the transfer delay, this imposition of a fixed phase that may, over time, differ from that of an otherwise conforming input packet stream, constitutes a delay variation and hence a jitter. Jitter caused in this particular way could only be observed where the delay is measured as the transit time between two separate measurement points, one either side of the leaky bucket as a queue shaping function. However, in the context of real-time data transmissions, it is this end-to-end delay that determines the latency of received data. Hence, the leaky bucket as a queue is unsuitable for traffic shaping real-time transmissions.
Limiting variable length packets using the leaky bucket algorithm as a queue is significantly more complicated than it is for fixed-length packets. Tanenbaum gives a description of a "byte-counting" leaky bucket for variable length packets as follows: "At each tick, a counter is initialized to n. If the first packet on the queue has fewer bytes than the current value of the counter, it is transmitted, and the counter is decremented by that number of bytes. Additional packets may also be sent, as long as the counter is high enough. When the counter drops below the length of the next packet on the queue, transmission stops until the next tick, at which time the residual byte count is reset [to n] and the flow can continue". As with the version for fixed length packets, this implementation has a strong effect on the phase of transmissions, resulting in variable end-to-end delays, and unsuitability for real-time traffic shaping.
Uses.
The leaky bucket as a queue can only be used in shaping traffic to a specified bandwidth with no jitter in the output. It may be used within the network, e.g. as part of bandwidth management, but is more appropriate to traffic shaping in the network interfaces of hosts. The leaky bucket algorithm is used in Nginx's "ngx_http_limit_conn_module" module for limiting the number of concurrent connections originating from a single IP address.
Parameters.
In the case of the leaky bucket algorithm as a queue, the only defined limit for this algorithm is the bandwidth of its output.
The bandwidth limit for the connection may be specified in a traffic contract. A bandwidth limit may be specified as a packet or frame rate, a byte or bit rate, or as an emission interval between the packets.
Inefficiency.
The implementation of the leaky bucket as a queue does not use available network resources efficiently. Because it transmits packets only at fixed intervals, there will be many instances when the traffic volume is very low and large portions of network resources (bandwidth in particular) are not being used. Therefore no mechanism exists in the leaky-bucket implementation as a queue to allow individual flows to burst up to port speed, effectively consuming network resources at times when there would not be resource contention in the network. Implementations of the token bucket and leaky bucket as a meter do, however, allow output traffic flows to have bursty characteristics.
Comparison between the two versions.
Analysis of the two versions of the leaky bucket algorithm shows that the version as a queue is a special case of the version as a meter.
Imagine a traffic shaping function for fixed-length packets that is implemented using a fixed-length queue, forming a delay element, which is serviced using a leaky bucket as a meter. Imagine also that the bucket in this meter has a depth equal to the amount added by a packet, i.e. has a limit value, "τ", of zero. However, the conformance test is only performed at intervals of the emission interval, when the packet at the head of the queue is transmitted and its water is added. This water then leaks away during the next emission interval (or is removed just prior to performing the next conformance test), allowing the next packet to conform then or at some subsequent emission interval. The service function can also be viewed in terms of a token bucket with the same depth, where enough tokens for one packet are added (if the bucket is not full) at the emission intervals. This implementation will then receive packets with a bursty arrival pattern (limited by the queue depth) and transmit them on at intervals that are always exact (integral) multiples of the emission interval.
However, the implementation of the leaky bucket as a meter (or token bucket) in a traffic shaping function described above is an exact equivalent to the description of the leaky bucket as a queue: the delay element of the meter version is the bucket of the queue version; the bucket of the meter version is the process that services the queue, and the leak is such that the emission interval is the same as the tick interval. Therefore, for fixed-length packets, the implementation of the leaky bucket as a queue is a special case of a traffic shaping function using a leaky bucket (or token bucket) as a meter in which the limit value, "τ", is zero and the process of testing conformance is performed at the lowest possible rate.
The leaky bucket as a queue for variable packet lengths can also be described as equivalent to a special case of the leaky bucket as a meter. The suggested implementation can, like the fixed length implementation, be seen as traffic shaping function in which the queue is a delay element, rather than the bucket, and the function that services the queue is, in this case, explicitly given as a token bucket: it is decremented for conforming packets and incremented at a fixed rate. Hence, as the leaky bucket as a meter and token bucket are equivalent, the leaky bucket as a queue for variable packet lengths is also a special case of a traffic shaping function using a leaky bucket (or token bucket) as a meter.
There is an interesting consequence of seeing the leaky bucket as a queue for variable packet lengths as a specific implementation of the token bucket or leaky bucket as a meter in traffic shaping. This is that the bucket of the meter has a depth, n, and, as is always the case with the token bucket, this depth determines the burstiness of the output traffic (perhaps in relation to the average or minimum number of tokens required by the packets). Hence, it is possible to quantify the burstiness of the output of this "byte counting" leaky bucket as a meter, unless all packets are of the maximum length, in which case the measure becomes meaningless. However, this ability to define a burstiness for the output is in direct contradiction to the statement that the leaky bucket (as a queue) necessarily gives an output with a rigid rate, no matter how bursty the input.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " M = \\left \\lfloor 1 + \\frac{\\tau}{T - \\delta} \\right \\rfloor "
},
{
"math_id": 1,
"text": " \\tau = \\left (M - 1 \\right )\\left (T - \\delta \\right ) "
}
] | https://en.wikipedia.org/wiki?curid=1056496 |
105651 | William Sealy Gosset | British statistician
William Sealy Gosset (13 June 1876 – 16 October 1937) was an English statistician, chemist and brewer who served as Head Brewer and Head Experimental Brewer of Guinness and was a pioneer of modern statistics. He pioneered small-sample experimental design and analysis with an economic approach to the logic of uncertainty. Gosset published under the pen name Student and most famously developed Student's t-distribution – originally called Student's "z" – and "Student's test of statistical significance".
Life and career.
Born in Canterbury, England, the eldest son of Agnes Sealy Vidal and Colonel Frederic Gosset of the Royal Engineers, Gosset attended Winchester College before matriculating as Winchester Scholar in natural sciences and mathematics at New College, Oxford. Upon graduating in 1899, he joined the brewery of Arthur Guinness & Son in Dublin, Ireland; he spent the rest of his 38-year career at Guinness.
Gosset had three children with Marjory Gosset (née Phillpotts). Harry Gosset (1907–1965) was a consultant paediatrician; Bertha Marian Gosset (1909–2004) was a geographer and nurse; the youngest, Ruth Gosset (1911–1953) married the Oxford mathematician Douglas Roaf and had five children.
In his job as Head Experimental Brewer at Guinness, the self-trained Gosset developed new statistical methods – both in the brewery and on the farm – now central to the design of experiments, to proper use of significance testing on repeated trials, and to analysis of economic significance (an early instance of decision theory interpretation of statistics) and more, such as his small-sample, stratified, and repeated balanced experiments on barley for proving the best yielding varieties. Gosset acquired that knowledge by study, by trial and error, by cooperating with others, and by spending two terms in 1906–1907 in the Biometrics laboratory of Karl Pearson. Gosset and Pearson had a good relationship. Pearson helped Gosset with the mathematics of his papers, including the 1908 papers, but had little appreciation of their importance. The papers addressed the brewer's concern with small samples; biometricians like Pearson, on the other hand, typically had hundreds of observations and saw no urgency in developing small-sample methods.
Gosset's first publication came in 1907, "On the Error of Counting with a Haemacytometer," in which – unbeknownst to Gosset aka "Student" – he rediscovered the Poisson distribution. Another researcher at Guinness had previously published a paper containing trade secrets of the Guinness brewery. The economic historian Stephen Ziliak discovered in the Guinness Archives that to prevent further disclosure of confidential information, the Guinness Board of Directors allowed its scientists to publish research on condition that they do not mention "1) beer, 2) Guinness, or 3) their own surname". To Ziliak, Gosset seems to have gotten his pen name "Student" from his 1906–1907 notebook on counting yeast cells with a haemacytometer, "The Student's Science Notebook". Thus his most noteworthy achievement is now called Student's, rather than Gosset's, t-distribution and test of statistical significance.
Gosset published most of his 21 academic papers, including "The probable error of a mean," in Pearson's journal "Biometrika" under the pseudonym "Student". It was, however, not Pearson but Ronald A. Fisher who appreciated the understudied importance of Gosset's small-sample work. Fisher wrote to Gosset in 1912 explaining that Student's z-distribution should be divided by degrees of freedom, not total sample size. From 1912 to 1934 Gosset and Fisher would exchange more than 150 letters. In 1924, Gosset wrote in a letter to Fisher, "I am sending you a copy of Student's Tables as you are the only man that's ever likely to use them!" Fisher believed that Gosset had effected a "logical revolution". In a special issue of "Metron" in 1925 Student published the corrected tables, now called Student's t formula_0. In the same volume Fisher contributed applications of Student's "t"-distribution to regression analysis.
Although introduced by others, Studentized residuals are named in Student's honour because, like the problem that led to Student's t-distribution, the idea of adjusting for estimated standard deviations is central to that concept.
Gosset's interest in the cultivation of barley led him to speculate that the design of experiments should aim not only at improving the average yield but also at breeding varieties whose yield was insensitive to variation in soil and climate (that is, "robust"). Gosset called his innovation "balanced layout", because treatments and controls are allocated in a balanced fashion to stratified growing conditions, such as differential soil fertility. Gosset's balanced principle was challenged by Ronald Fisher, who preferred randomized designs. The Bayesian Harold Jeffreys, and Gosset's close associates Jerzy Neyman and Egon S. Pearson sided with Gosset's balanced designs of experiments; however, as Ziliak (2014) has shown, Gosset and Fisher would strongly disagree for the rest of their lives about the meaning and interpretation of balanced versus randomized experiments, as they had earlier clashed on the role of bright-line rules of statistical significance.
In 1935, at the age of 59, Gosset left Dublin to take up the position of Head Brewer at a new (and second) Guinness brewery at Park Royal in northwestern London. In September 1937 Gosset was promoted to Head Brewer of all Guinness. He died one month later, aged 61, in Beaconsfield, England, of a heart attack.
Gosset was a friend of both Pearson and Fisher, a noteworthy achievement, for each had a massive ego and a loathing for the other. He was a modest man who once cut short an admirer with this comment: "Fisher would have discovered it all anyway."
Bibliography.
Gosset:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "z=\\frac{t}{\\sqrt{n-1}}"
}
] | https://en.wikipedia.org/wiki?curid=105651 |
10565476 | Domino tiling | Geometric construct
In geometry, a domino tiling of a region in the Euclidean plane is a tessellation of the region by dominoes, shapes formed by the union of two unit squares meeting edge-to-edge. Equivalently, it is a perfect matching in the grid graph formed by placing a vertex at the center of each square of the region and connecting two vertices when they correspond to adjacent squares.
Height functions.
For some classes of tilings on a regular grid in two dimensions, it is possible to define a height function associating an integer to the vertices of the grid. For instance, draw a chessboard, fix a node formula_0 with height 0, then for any node there is a path from formula_0 to it. On this path define the height of each node formula_1 (i.e. corners of the squares) to be the height of the previous node formula_2 plus one if the square on the right of the path from formula_2 to formula_1 is black, and minus one otherwise.
Thurston's height condition.
William Thurston (1990) describes a test for determining whether a simply-connected region, formed as the union of unit squares in the plane, has a domino tiling. He forms an undirected graph that has as its vertices the points ("x","y","z") in the three-dimensional integer lattice, where each such point is connected to four neighbors: if "x" + "y" is even, then ("x","y","z") is connected to ("x" + 1,"y","z" + 1), ("x" − 1,"y","z" + 1), ("x","y" + 1,"z" − 1), and ("x","y" − 1,"z" − 1), while if "x" + "y" is odd, then ("x","y","z") is connected to ("x" + 1,"y","z" − 1), ("x" − 1,"y","z" − 1), ("x","y" + 1,"z" + 1), and ("x","y" − 1,"z" + 1). The boundary of the region, viewed as a sequence of integer points in the ("x","y") plane, lifts uniquely (once a starting height is chosen) to a path in this three-dimensional graph. A necessary condition for this region to be tileable is that this path must close up to form a simple closed curve in three dimensions; however, this condition is not sufficient. Using more careful analysis of the boundary path, Thurston gave a criterion for tileability of a region that was sufficient as well as necessary.
Counting tilings of regions.
The number of ways to cover an formula_3 rectangle with formula_4 dominoes, calculated independently by and , is given by
formula_5
When both "m" and "n" are odd, the formula correctly reduces to zero possible domino tilings.
A special case occurs when tiling the formula_6 rectangle with "n" dominoes: the sequence reduces to the Fibonacci sequence.
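The product formula lends itself to a quick numerical check. The sketch below is only illustrative: it evaluates the closed-form product in floating point and rounds to the nearest integer, which is adequate for small rectangles (an exact computation would use integer methods such as the Pfaffian approach mentioned below); it reproduces the Fibonacci numbers for the formula_6 case and the classical count for the 8×8 board.

```python
from math import cos, pi, ceil

def domino_tilings(m, n):
    """Number of domino tilings of an m-by-n rectangle via the product formula."""
    product = 1.0
    for j in range(1, ceil(m / 2) + 1):
        for k in range(1, ceil(n / 2) + 1):
            product *= 4 * cos(pi * j / (m + 1)) ** 2 + 4 * cos(pi * k / (n + 1)) ** 2
    return round(product)

print([domino_tilings(2, n) for n in range(1, 8)])  # [1, 2, 3, 5, 8, 13, 21] (Fibonacci)
print(domino_tilings(8, 8))                         # 12988816 (the 8x8 chessboard)
print(domino_tilings(3, 3))                         # 0 (odd area cannot be tiled)
```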
Another special case occurs for squares, with "m" = "n" = 0, 2, 4, 6, 8, 10, 12, ...; the numbers of tilings of these squares grow rapidly with the side length.
These numbers can be found by writing them as the Pfaffian of an formula_7 skew-symmetric matrix whose eigenvalues can be found explicitly. This technique may be applied in many mathematics-related subjects, for example, in the classical, 2-dimensional computation of the dimer-dimer correlator function in statistical mechanics.
The number of tilings of a region is very sensitive to boundary conditions, and can change dramatically with apparently insignificant changes in the shape of the region. This is illustrated by the number of tilings of an Aztec diamond of order "n", where the number of tilings is 2^("n"("n" + 1)/2). If this is replaced by the "augmented Aztec diamond" of order "n" with 3 long rows in the middle rather than 2, the number of tilings drops to the much smaller number D("n","n"), a Delannoy number, which has only exponential rather than super-exponential growth in "n". For the "reduced Aztec diamond" of order "n" with only one long middle row, there is only one tiling.
Tatami.
Tatami are Japanese floor mats in the shape of a domino (1x2 rectangle). They are used to tile rooms, but with additional rules about how they may be placed. In particular, typically, junctions where three tatami meet are considered auspicious, while junctions where four meet are inauspicious, so a proper tatami tiling is one where only three tatami meet at any corner. The problem of tiling an irregular room by tatami that meet three to a corner is NP-complete.
Applications in statistical physics.
There is a one-to-one correspondence between a periodic domino tiling and a ground state configuration of the fully-frustrated Ising model on a two-dimensional periodic lattice. To see that, we note that at the ground state, each plaquette of the spin model must contain exactly one frustrated interaction. Therefore, viewing from the dual lattice, each frustrated edge must be "covered" by a "1x2" rectangle, such that the rectangles span the entire lattice and do not overlap, or a domino tiling of the dual lattice.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "A_0"
},
{
"math_id": 1,
"text": "A_{n+1}"
},
{
"math_id": 2,
"text": "A_n"
},
{
"math_id": 3,
"text": " m \\times n "
},
{
"math_id": 4,
"text": " \\frac{mn}{2} "
},
{
"math_id": 5,
"text": " \\prod_{j=1}^{\\lceil\\frac{m}{2}\\rceil} \\prod_{k=1}^{\\lceil\\frac{n}{2}\\rceil} \\left ( 4\\cos^2 \\frac{\\pi j}{m + 1} + 4\\cos^2 \\frac{\\pi k}{n + 1} \\right )."
},
{
"math_id": 6,
"text": "2\\times n"
},
{
"math_id": 7,
"text": "mn \\times mn"
}
] | https://en.wikipedia.org/wiki?curid=10565476 |
10566228 | Fox derivative | Concept in mathematics
In mathematics, the Fox derivative is an algebraic construction in the theory of free groups which bears many similarities to the conventional derivative of calculus. The Fox derivative and related concepts are often referred to as the Fox calculus, or (Fox's original term) the free differential calculus. The Fox derivative was developed in a series of five papers by mathematician Ralph Fox, published in Annals of Mathematics beginning in 1953.
Definition.
If "G" is a free group with identity element "e" and generators "gi", then the Fox derivative with respect to "gi" is a function from "G" into the integral group ring formula_0 which is denoted formula_1, and obeys the following axioms:
The first two axioms are identical to similar properties of the partial derivative of calculus, and the third is a modified version of the product rule. As a consequence of the axioms, we have the following formula for inverses
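Because the axioms determine the derivative of any word once the derivatives of the generators are fixed, the Fox derivative can be computed mechanically by scanning a word from left to right. The following sketch is only illustrative: words are represented as tuples of (generator, exponent) pairs with exponent ±1, elements of the group ring are dictionaries mapping reduced words to integer coefficients, and all names are choices of this example rather than standard notation.

```python
# Fox derivative of a word in a free group, computed from the product rule:
# d(x1 x2 ... xn) = sum over k of (x1 ... x_{k-1}) * d(xk), with d(g) = 1 and
# d(g^-1) = -g^-1 for the generator g being differentiated.
def reduce_word(word):
    """Freely reduce a word by cancelling adjacent inverse pairs."""
    out = []
    for g, e in word:
        if out and out[-1][0] == g and out[-1][1] == -e:
            out.pop()
        else:
            out.append((g, e))
    return tuple(out)

def fox_derivative(word, gen):
    """Fox derivative d(word)/d(gen) as a dict from words to integer coefficients."""
    result = {}
    prefix = ()                                   # the product x1 ... x_{k-1}
    for g, e in word:
        if g == gen:
            if e == 1:                            # d(g)/dg = 1 (the empty word)
                term, coeff = reduce_word(prefix), 1
            else:                                 # d(g^-1)/dg = -g^-1
                term, coeff = reduce_word(prefix + ((g, -1),)), -1
            result[term] = result.get(term, 0) + coeff
            if result[term] == 0:
                del result[term]
        prefix = prefix + ((g, e),)
    return result

# Example: for w = x y x^-1, dw/dx = 1 - x y x^-1 and dw/dy = x.
w = (("x", 1), ("y", 1), ("x", -1))
print(fox_derivative(w, "x"))  # {(): 1, (('x', 1), ('y', 1), ('x', -1)): -1}
print(fox_derivative(w, "y"))  # {(('x', 1),): 1}
```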
Applications.
The Fox derivative has applications in group cohomology, knot theory, and covering space theory, among other areas of mathematics. | [
{
"math_id": 0,
"text": "\\Z G"
},
{
"math_id": 1,
"text": "\\frac{\\partial}{\\partial g_i}"
},
{
"math_id": 2,
"text": "\\frac{\\partial}{\\partial g_i}(g_j) = \\delta_{ij}"
},
{
"math_id": 3,
"text": "\\delta_{ij}"
},
{
"math_id": 4,
"text": "\\frac{\\partial}{\\partial g_i}(e) = 0"
},
{
"math_id": 5,
"text": "\\frac{\\partial}{\\partial g_i}(uv) = \\frac{\\partial}{\\partial g_i}(u) + u\\frac{\\partial}{\\partial g_i}(v)"
},
{
"math_id": 6,
"text": "\\frac{\\partial}{\\partial g_i}(u^{-1}) = -u^{-1}\\frac{\\partial}{\\partial g_i}(u)"
}
] | https://en.wikipedia.org/wiki?curid=10566228 |
10567426 | Underwater acoustics | Study of the propagation of sound in water
Underwater acoustics (also known as hydroacoustics) is the study of the propagation of sound in water and the interaction of the mechanical waves that constitute sound with the water, its contents and its boundaries. The water may be in the ocean, a lake, a river or a tank. Typical frequencies associated with underwater acoustics are between 10 Hz and 1 MHz. The propagation of sound in the ocean at frequencies lower than 10 Hz is usually not possible without penetrating deep into the seabed, whereas frequencies above 1 MHz are rarely used because they are absorbed very quickly.
Hydroacoustics, using sonar technology, is most commonly used for monitoring of underwater physical and biological characteristics. Hydroacoustics can be used to detect the depth of a water body (bathymetry), as well as the presence or absence, abundance, distribution, size, and behavior of underwater plants and animals. Hydroacoustic sensing involves "passive acoustics" (listening for sounds) or "active acoustics" making a sound and listening for the echo, hence the common name for the device, echo sounder or echosounder.
There are a number of different causes of noise from shipping. These can be subdivided into those caused by the propeller, those caused by machinery, and those caused by the movement of the hull through the water. The relative importance of these three different categories will depend, amongst other things, on the ship type.
One of the main causes of hydro acoustic noise from fully submerged lifting surfaces is the unsteady separated turbulent flow near the surface's trailing edge that produces pressure fluctuations on the surface and unsteady oscillatory flow in the near wake. The relative motion between the surface and the ocean creates a turbulent boundary layer (TBL) that surrounds the surface. The noise is generated by the fluctuating velocity and pressure fields within this TBL.
The field of underwater acoustics is closely related to a number of other fields of acoustic study, including sonar, transduction, signal processing, acoustical oceanography, bioacoustics, and physical acoustics.
History.
Underwater sound has probably been used by marine animals for millions of years. The science of underwater acoustics began in 1490, when Leonardo da Vinci wrote the following,
"If you cause your ship to stop and place the head of a long tube in the water and place the outer extremity to your ear, you will hear ships at a great distance from you."
In 1687 Isaac Newton wrote his "Mathematical Principles of Natural Philosophy" which included the first mathematical treatment of sound. The next major step in the development of underwater acoustics was made by Daniel Colladon, a Swiss physicist, and Charles Sturm, a French mathematician. In 1826, on Lake Geneva, they measured the elapsed time between a flash of light and the sound of a submerged ship's bell heard using an underwater listening horn. They measured a sound speed of 1435 metres per second over a 17 kilometre (km) distance, providing the first quantitative measurement of sound speed in water. The result they obtained was within about 2% of currently accepted values. In 1877 Lord Rayleigh wrote the "Theory of Sound" and established modern acoustic theory.
The sinking of "Titanic" in 1912 and the start of World War I provided the impetus for the next wave of progress in underwater acoustics. Systems for detecting icebergs and U-boats were developed. Between 1912 and 1914, a number of echolocation patents were granted in Europe and the U.S., culminating in Reginald A. Fessenden's echo-ranger in 1914. Pioneering work was carried out during this time in France by Paul Langevin and in Britain by A B Wood and associates. The development of both active ASDIC and passive sonar (SOund Navigation And Ranging) proceeded apace during the war, driven by the first large scale deployments of submarines. Other advances in underwater acoustics included the development of acoustic mines.
In 1919, the first scientific paper on underwater acoustics was published, theoretically describing the refraction of sound waves produced by temperature and salinity gradients in the ocean. The range predictions of the paper were experimentally validated by propagation loss measurements.
The next two decades saw the development of several applications of underwater acoustics. The fathometer, or depth sounder, was developed commercially during the 1920s. Originally natural materials were used for the transducers, but by the 1930s sonar systems incorporating piezoelectric transducers made from synthetic materials were being used for passive listening systems and for active echo-ranging systems. These systems were used to good effect during World War II by both submarines and anti-submarine vessels. Many advances in underwater acoustics were made which were summarised later in the series "Physics of Sound in the Sea", published in 1946.
After World War II, the development of sonar systems was driven largely by the Cold War, resulting in advances in the theoretical and practical understanding of underwater acoustics, aided by computer-based techniques.
Theory.
Sound waves in water, bottom of sea.
A sound wave propagating underwater consists of alternating compressions and rarefactions of the water. These compressions and rarefactions are detected by a receiver, such as the human ear or a hydrophone, as changes in pressure. These waves may be man-made or naturally generated.
Speed of sound, density and impedance.
The speed of sound formula_0 (i.e., the longitudinal motion of wavefronts) is related to frequency formula_1 and wavelength formula_2 of a wave by formula_3.
This is different from the particle velocity formula_4, which refers to the motion of molecules in the medium due to the sound, and is related to the plane wave pressure formula_5, the fluid density formula_6 and the sound speed formula_0 by formula_7.
The product of formula_8 and formula_6 from the above formula is known as the characteristic acoustic impedance. The acoustic power (energy per second) crossing unit area is known as the intensity of the wave and for a plane wave the average intensity is given by formula_9, where formula_10 is the root mean square acoustic pressure.
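As a rough numerical illustration (the density and sound speed below are typical round values for seawater near the surface, not exact constants):

```python
# Typical, approximate values for seawater near the surface.
rho = 1025.0     # density in kg/m^3
c = 1500.0       # sound speed in m/s

Z = rho * c                   # characteristic acoustic impedance, about 1.5e6 Pa·s/m (rayl)
p_rms = 1.0                   # an example rms acoustic pressure of 1 Pa
I = p_rms ** 2 / (rho * c)    # plane-wave intensity, about 6.5e-7 W/m^2

print(f"Z = {Z:.3g} rayl, I = {I:.3g} W/m^2")
```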
Sometimes the term "sound velocity" is used but this is incorrect as the quantity is a scalar.
The large impedance contrast between air and water (the ratio is about 3600) and the scale of surface roughness means that the sea surface behaves as an almost perfect reflector of sound at frequencies below 1 kHz. Sound speed in water exceeds that in air by a factor of 4.4 and the density ratio is about 820.
Absorption of sound.
Absorption of low frequency sound is weak. (see Technical Guides – Calculation of absorption of sound in seawater for an on-line calculator). The main cause of sound attenuation in fresh water, and at high frequency in sea water (above 100 kHz) is viscosity. Important additional contributions at lower frequency in seawater are associated with the ionic relaxation of boric acid (up to c. 10 kHz) and magnesium sulfate (c. 10 kHz-100 kHz).
Sound may be absorbed by losses at the fluid boundaries. Near the surface of the sea losses can occur in a bubble layer or in ice, while at the bottom sound can penetrate into the sediment and be absorbed.
Sound reflection and scattering.
Boundary interactions.
Both the water surface and bottom are reflecting and scattering boundaries.
Surface.
For many purposes the sea-air surface can be thought of as a perfect reflector. The impedance contrast is so great that little energy is able to cross this boundary. Acoustic pressure waves reflected from the sea surface experience a reversal in phase, often stated as either a "pi phase change" or a "180 deg phase change". This is represented mathematically by assigning a reflection coefficient of minus one instead of plus one to the sea surface.
At high frequency (above about 1 kHz) or when the sea is rough, some of the incident sound is scattered, and this is taken into account by assigning a reflection coefficient whose magnitude is less than one. For example, close to normal incidence, the reflection coefficient becomes formula_11, where "h" is the rms wave height.
A further complication is the presence of wind-generated bubbles or fish close to the sea surface. The bubbles can also form plumes that absorb some of the incident and scattered sound, and scatter some of the sound themselves.
Seabed.
The acoustic impedance mismatch between water and the bottom is generally much less than at the surface and is more complex. It depends on the bottom material types and depth of the layers. Theories have been developed for predicting the sound propagation in the bottom in this case, for example by Biot and by Buckingham.
At target.
The reflection of sound at a target whose dimensions are large compared with the acoustic wavelength depends on its size and shape as well as the impedance of the target relative to that of water. Formulae have been developed for the target strength of various simple shapes as a function of angle of sound incidence. More complex shapes may be approximated by combining these simple ones.
Propagation of sound.
Underwater acoustic propagation depends on many factors. The direction of sound propagation is determined by the sound speed gradients in the water. These speed gradients transform the sound wave through refraction, reflection, and dispersion. In the sea the vertical gradients are generally much larger than the horizontal ones. Combining this with a tendency towards increasing sound speed at increasing depth, due to the increasing pressure in the deep sea, causes a reversal of the sound speed gradient in the thermocline, creating an efficient waveguide at the depth corresponding to the minimum sound speed. The sound speed profile may cause regions of low sound intensity called "Shadow Zones", and regions of high intensity called "Caustics". These may be found by ray tracing methods.
At the equator and temperate latitudes in the ocean, the surface temperature is high enough to reverse the pressure effect, such that a sound speed minimum occurs at depth of a few hundred meters. The presence of this minimum creates a special channel known as deep sound channel, or SOFAR (sound fixing and ranging) channel, permitting guided propagation of underwater sound for thousands of kilometers without interaction with the sea surface or the seabed. Another phenomenon in the deep sea is the formation of sound focusing areas, known as convergence zones. In this case sound is refracted downward from a near-surface source and then back up again. The horizontal distance from the source at which this occurs depends on the positive and negative sound speed gradients. A surface duct can also occur in both deep and moderately shallow water when there is upward refraction, for example due to cold surface temperatures. Propagation is by repeated sound bounces off the surface.
In general, as sound propagates underwater there is a reduction in the sound intensity over increasing ranges, though in some circumstances a gain can be obtained due to focusing. "Propagation loss" (sometimes referred to as "transmission loss") is a quantitative measure of the reduction in sound intensity between two points, normally the sound source and a distant receiver. If formula_12 is the far field intensity of the source referred to a point 1 m from its acoustic center and formula_13 is the intensity at the receiver, then the propagation loss is given by formula_14.
In this equation formula_13 is not the true acoustic intensity at the receiver, which is a vector quantity, but a scalar equal to the equivalent plane wave intensity (EPWI) of the sound field. The EPWI is defined as the magnitude of the intensity of a plane wave of the same RMS pressure as the true acoustic field. At short range the propagation loss is dominated by spreading while at long range it is dominated by absorption and/or scattering losses.
An alternative definition is possible in terms of pressure instead of intensity, giving formula_15, where formula_16 is the RMS acoustic pressure in the far-field of the projector, scaled to a standard distance of 1 m, and formula_17 is the RMS pressure at the receiver position.
These two definitions are not exactly equivalent because the characteristic impedance at the receiver may be different from that at the source. Because of this, the use of the intensity definition leads to a different sonar equation to the definition based on a pressure ratio. If the source and receiver are both in water, the difference is small.
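For a rough estimate, propagation loss is often approximated as a spreading term plus an absorption term. The sketch below assumes spherical spreading from the 1 m reference distance and takes the absorption coefficient as an input (in practice it would come from an absorption model such as those referred to above); it is an engineering approximation, not one of the propagation models described in the next section.

```python
from math import log10

def propagation_loss_db(range_m, alpha_db_per_km):
    """Approximate propagation loss in dB: spherical spreading plus absorption."""
    spreading = 20 * log10(range_m)                  # spherical spreading from 1 m
    absorption = alpha_db_per_km * range_m / 1000.0  # path absorption
    return spreading + absorption

# Example: 10 km range with an assumed absorption coefficient of 1 dB/km.
print(propagation_loss_db(10_000, 1.0))  # 90.0 (80 dB spreading + 10 dB absorption)
```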
Propagation modelling.
The propagation of sound through water is described by the wave equation, with appropriate boundary conditions. A number of models have been developed to simplify propagation calculations. These models include ray theory, normal mode solutions, and parabolic equation simplifications of the wave equation. Each set of solutions is generally valid and computationally efficient in a limited frequency and range regime, and may involve other limits as well. Ray theory is more appropriate at short range and high frequency, while the other solutions function better at long range and low frequency. Various empirical and analytical formulae have also been derived from measurements that are useful approximations.
Reverberation.
Transient sounds result in a decaying background that can be of much larger duration than the original transient signal. The cause of this background, known as reverberation, is partly due to scattering from rough boundaries and partly due to scattering from fish and other biota. For an acoustic signal to be detected easily, it must exceed the reverberation level as well as the background noise level.
Doppler shift.
If an underwater object is moving relative to an underwater receiver, the frequency of the received sound is different from that of the sound radiated (or reflected) by the object. This change in frequency is known as a Doppler shift. The shift can be easily observed in active sonar systems, particularly narrow-band ones, because the transmitter frequency is known, and the relative motion between sonar and object can be calculated. Sometimes the frequency of the radiated noise (a tonal) may also be known, in which case the same calculation can be done for passive sonar. For active systems the change in frequency is 0.69 Hz per knot per kHz and half this for passive systems as propagation is only one way. The shift corresponds to an increase in frequency for an approaching target.
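The figure of 0.69 Hz per knot per kHz follows from the basic Doppler relation with a nominal sound speed of 1500 m/s. The following short Python sketch (illustrative only; the sound speed and the knot conversion are assumed nominal values) reproduces it for the active (two-way) and passive (one-way) cases.
KNOT_MS = 0.5144      # one knot in metres per second
SOUND_SPEED = 1500.0  # nominal sound speed in sea water, m/s
def doppler_shift_hz(speed_knots, frequency_khz, active=True):
    """Approximate Doppler shift in Hz; factor 2 for the two-way (active sonar) path."""
    v = speed_knots * KNOT_MS
    f = frequency_khz * 1000.0
    factor = 2.0 if active else 1.0
    return factor * v / SOUND_SPEED * f
# 1 knot at 1 kHz: about 0.69 Hz (active) and 0.34 Hz (passive)
print(round(doppler_shift_hz(1.0, 1.0, active=True), 2))
print(round(doppler_shift_hz(1.0, 1.0, active=False), 2))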
Intensity fluctuations.
Though acoustic propagation modelling generally predicts a constant received sound level, in practice there are both temporal and spatial fluctuations. These may be due to both small and large scale environmental phenomena. These can include sound speed profile fine structure and frontal zones as well as internal waves. Because in general there are multiple propagation paths between a source and receiver, small phase changes in the interference pattern between these paths can lead to large fluctuations in sound intensity.
Non-linearity.
In water, especially with air bubbles, the change in density due to a change in pressure is not exactly linearly proportional. As a consequence, for a sinusoidal wave input, additional harmonic and subharmonic frequencies are generated. When two sinusoidal waves are input, sum and difference frequencies are generated. The conversion process is greater at high source levels than at low ones. Because of the non-linearity there is a dependence of sound speed on the pressure amplitude, so that large-amplitude parts of the wave travel faster than small ones. Thus a sinusoidal waveform gradually becomes a sawtooth one with a steep rise and a gradual tail. Use is made of this phenomenon in parametric sonar, and theories have been developed to account for it, e.g. by Westervelt.
Measurements.
Sound in water is measured using a hydrophone, which is the underwater equivalent of a microphone. A hydrophone measures pressure fluctuations, and these are usually converted to sound pressure level (SPL), which is a logarithmic measure of the mean square acoustic pressure.
Measurements are usually reported in one of two forms:
The scale for acoustic pressure in water differs from that used for sound in air. In air the reference pressure is 20 μPa rather than 1 μPa. For the same numerical value of SPL, the intensity of a plane wave (power per unit area, proportional to mean square sound pressure divided by acoustic impedance) in air is about 20² × 3600 = 1 440 000 times higher than in water. Similarly, the intensity is about the same if the SPL is 61.6 dB higher in the water.
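The factor of 1 440 000 and the 61.6 dB offset follow from the ratio of reference pressures (20 μPa versus 1 μPa) and a characteristic impedance ratio of roughly 3600. A short Python sketch of the arithmetic, using that rounded impedance ratio as an assumption, is:
import math
pressure_ratio = 20.0 / 1.0       # reference pressure in air divided by reference pressure in water
impedance_ratio = 3600.0          # approximate (rho * c) of water divided by that of air
intensity_ratio = pressure_ratio ** 2 * impedance_ratio
offset_db = 10.0 * math.log10(intensity_ratio)
print(intensity_ratio)            # 1440000.0
print(round(offset_db, 1))        # 61.6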
The 2017 standard ISO 18405 defines terms and expressions used in the field of underwater acoustics, including the calculation of underwater sound pressure levels.
Sound speed.
Approximate values for fresh water and seawater, respectively, at atmospheric pressure are 1450 and 1500 m/s for the sound speed, and 1000 and 1030 kg/m3 for the density. The speed of sound in water increases with increasing pressure, temperature and salinity. The maximum speed in pure water under atmospheric pressure is attained at about 74 °C; sound travels slower in hotter water after that point; the maximum increases with pressure.
Absorption.
Many measurements have been made of sound absorption in lakes and the ocean (see Technical Guides – Calculation of absorption of sound in seawater for an on-line calculator).
Ambient noise.
Measurement of acoustic signals are possible if their amplitude exceeds a minimum threshold, determined partly by the signal processing used and partly by the level of background noise. Ambient noise is that part of the received noise that is independent of the source, receiver and platform characteristics. Thus it excludes reverberation and towing noise for example.
The background noise present in the ocean, or ambient noise, has many different sources and varies with location and frequency. At the lowest frequencies, from about 0.1 Hz to 10 Hz, ocean turbulence and microseisms are the primary contributors to the noise background. Typical noise spectrum levels decrease with increasing frequency from about 140 dB re 1 μPa²/Hz at 1 Hz to about 30 dB re 1 μPa²/Hz at 100 kHz. Distant ship traffic is one of the dominant noise sources in most areas for frequencies of around 100 Hz, while wind-induced surface noise is the main source between 1 kHz and 30 kHz. At very high frequencies, above 100 kHz, thermal noise of water molecules begins to dominate. The thermal noise spectral level at 100 kHz is 25 dB re 1 μPa²/Hz. The spectral density of thermal noise increases by 20 dB per decade (approximately 6 dB per octave).
Transient sound sources also contribute to ambient noise. These can include intermittent geological activity, such as earthquakes and underwater volcanoes, rainfall on the surface, and biological activity. Biological sources include cetaceans (especially blue, fin and sperm whales), certain types of fish, and snapping shrimp.
Rain can produce high levels of ambient noise. However the numerical relationship between rain rate and ambient noise level is difficult to determine because measurement of rain rate is problematic at sea.
Reverberation.
Many measurements have been made of sea surface, bottom and volume reverberation. Empirical models have sometimes been derived from these. A commonly used expression for the band 0.4 to 6.4 kHz is that by Chapman and Harris. It is found that a sinusoidal waveform is spread in frequency due to the surface motion. For bottom reverberation Lambert's law is often found to apply approximately, for example see Mackenzie. Volume reverberation is usually found to occur mainly in layers, which change depth with the time of day, e.g., see Marshall and Chapman. The under-surface of ice can produce strong reverberation when it is rough, see for example Milne.
Bottom loss.
Bottom loss has been measured as a function of grazing angle for many frequencies in various locations, for example those by the US Marine Geophysical Survey. The loss depends on the sound speed in the bottom (which is affected by gradients and layering) and by roughness. Graphs have been produced for the loss to be expected in particular circumstances. In shallow water bottom loss often has the dominant impact on long range propagation. At low frequencies sound can propagate through the sediment then back into the water.
Underwater hearing.
Comparison with airborne sound levels.
As with airborne sound, sound pressure level underwater is usually reported in units of decibels, but there are some important differences that make it difficult (and often inappropriate) to compare SPL in water with SPL in air. These differences include:
Human hearing.
Hearing sensitivity.
The lowest audible SPL for a human diver with normal hearing is about 67 dB re 1 μPa, with greatest sensitivity occurring at frequencies around 1 kHz. This corresponds to a sound intensity 5.4 dB, or 3.5 times, higher than the threshold in air (see Measurements above).
Safety thresholds.
High levels of underwater sound create a potential hazard to human divers. Guidelines for exposure of human divers to underwater sound are reported by the SOLMAR project of the NATO Undersea Research Centre. Human divers exposed to SPL above 154 dB re 1 μPa in the frequency range 0.6 to 2.5 kHz are reported to experience changes in their heart rate or breathing frequency. Diver aversion to low frequency sound is dependent upon sound pressure level and center frequency.
Other species.
Aquatic mammals.
Dolphins and other toothed whales are known for their acute hearing sensitivity, especially in the frequency range 5 to 50 kHz. Several species have hearing thresholds between 30 and 50 dB re 1 μPa in this frequency range. For example, the hearing threshold of the killer whale occurs at an RMS acoustic pressure of 0.02 mPa (and frequency 15 kHz), corresponding to an SPL threshold of 26 dB re 1 μPa.
High levels of underwater sound create a potential hazard to marine and amphibious animals. The effects of exposure to underwater noise are reviewed by Southall et al.
Fish.
The hearing sensitivity of fish is reviewed by Ladich and Fay.
The hearing threshold of the soldier fish is 0.32 mPa (50 dB re 1 μPa) at 1.3 kHz, whereas the lobster has a hearing threshold of 1.3 Pa at 70 Hz (122 dB re 1 μPa). The effects of exposure to underwater noise are reviewed by Popper et al.
Aquatic birds.
Several aquatic bird species have been observed to react to underwater sound in the 1-4 kHz range, which follows the frequency range of best hearing sensitivities of birds in air. Seaducks and cormorants have been trained to respond to sounds of 1-4 kHz with lowest hearing threshold (highest sensitivity) of 71 dB re 1 μPa (cormorants) and 105 dB re 1 μPa (seaducks). Diving species have several morphological differences in the ear relative to terrestrial species, suggesting some adaptations of the ear in diving birds to aquatic conditions.
Applications of underwater acoustics.
Sonar.
Sonar is the name given to the acoustic equivalent of radar. Pulses of sound are used to probe the sea, and the echoes are then processed to extract information about the sea, its boundaries and submerged objects. An alternative use, known as "passive sonar", attempts to do the same by listening to the sounds radiated by underwater objects.
Underwater communication.
The need for underwater acoustic telemetry exists in applications such as data harvesting for environmental monitoring, communication with and between crewed and uncrewed underwater vehicles, transmission of diver speech, etc. A related application is underwater remote control, in which acoustic telemetry is used to remotely actuate a switch or trigger an event. A prominent example of underwater remote control is the acoustic release, a device used to return sea floor deployed instrument packages or other payloads to the surface per remote command at the end of a deployment. Acoustic communications form an active field of research with significant challenges to overcome, especially in horizontal, shallow-water channels. Compared with radio telecommunications, the available bandwidth is reduced by several orders of magnitude. Moreover, the low speed of sound causes multipath propagation to stretch over time delay intervals of tens or hundreds of milliseconds, as well as significant Doppler shifts and spreading. Often acoustic communication systems are not limited by noise, but by reverberation and time variability beyond the capability of receiver algorithms. The fidelity of underwater communication links can be greatly improved by the use of hydrophone arrays, which allow processing techniques such as adaptive beamforming and diversity combining.
Underwater navigation and tracking.
Underwater navigation and tracking is a common requirement for exploration and work by divers, ROV, autonomous underwater vehicles (AUV), crewed submersibles and submarines alike. Unlike most radio signals which are quickly absorbed, sound propagates far underwater and at a rate that can be precisely measured or estimated. It can thus be used to measure distances between a tracked target and one or multiple reference or "baseline" stations precisely, and triangulate the position of the target, sometimes with centimeter accuracy. Starting in the 1960s, this has given rise to underwater acoustic positioning systems which are now widely used.
Seismic exploration.
Seismic exploration involves the use of low frequency sound (< 100 Hz) to probe deep into the seabed. Despite the relatively poor resolution due to their long wavelength, low frequency sounds are preferred because high frequencies are heavily attenuated when they travel through the seabed. Sound sources used include airguns, vibroseis and explosives.
Weather and climate observation.
Acoustic sensors can be used to monitor the sound made by wind and precipitation. For example, an acoustic rain gauge is described by Nystuen. Lightning strikes can also be detected. Acoustic thermometry of ocean climate (ATOC) uses low frequency sound to measure the global ocean temperature.
Acoustical oceanography.
Acoustical oceanography is the use of underwater sound to study the sea, its boundaries and its contents.
History.
Interest in developing echo ranging systems began in earnest following the sinking of the RMS Titanic in 1912. By sending a sound wave ahead of a ship, the theory went, a return echo bouncing off the submerged portion of an iceberg should give early warning of collisions. By directing the same type of beam downwards, the depth to the bottom of the ocean could be calculated.
The first practical deep-ocean echo sounder was invented by Harvey C. Hayes, a U.S. Navy physicist. For the first time, it was possible to create a quasi-continuous profile of the ocean floor along the course of a ship. The first such profile was made by Hayes on board the U.S.S. Stewart, a Navy destroyer that sailed from Newport to Gibraltar between June 22 and 29, 1922. During that week, 900 deep-ocean soundings were made.
Using a refined echo sounder, the German survey ship Meteor made several passes across the South Atlantic from the equator to Antarctica between 1925 and 1927, taking soundings every 5 to 20 miles. Their work created the first detailed map of the Mid-Atlantic Ridge. It showed that the Ridge was a rugged mountain range, and not the smooth plateau that some scientists had envisioned. Since that time, both naval and research vessels have operated echo sounders almost continuously while at sea.
Important contributions to acoustical oceanography have been made by:
Equipment used.
The earliest and most widespread use of sound and sonar technology to study the properties of the sea is the use of the echo sounder to measure water depth. Such sounders were the devices used to map many miles of the Santa Barbara Harbor ocean floor until 1993.
Fathometers measure the depth of the water. They work by electronically sending sound pulses from a ship and receiving the sound waves that bounce back from the bottom of the ocean. A paper chart moves through the fathometer and is calibrated to record the depth.
As technology advanced, the development of high-resolution sonars in the second half of the 20th century made it possible not just to detect underwater objects but to classify and even image them. Electronic sensors are now attached to remotely operated vehicles (ROVs), which are carried by modern ships and robot submarines. Cameras attached to these devices provide accurate images, allowing oceanographers to obtain clear and precise pictures. 'Pictures' can also be produced from sonars by reflecting sound off the ocean surroundings. Sound waves often reflect off animals, giving information which can be documented in deeper studies of animal behaviour.
Marine biology.
Due to its excellent propagation properties, underwater sound is used as a tool to aid the study of marine life, from microplankton to the blue whale. Echo sounders are often used to provide data on marine life abundance, distribution, and behavior. Echo sounding, also referred to as hydroacoustics, is also used to determine fish location, quantity, size, and biomass.
Acoustic telemetry is also used for monitoring fish and marine wildlife. An acoustic transmitter is attached to the fish (sometimes internally) while an array of receivers listens to the information conveyed by the sound wave. This enables the researchers to track the movements of individuals at a small to medium scale.
Pistol shrimp create sonoluminescent cavitation bubbles that reach temperatures of up to about 5,000 K (4,700 °C).
Particle physics.
A neutrino is a fundamental particle that interacts very weakly with other matter. For this reason, it requires detection apparatus on a very large scale, and the ocean is sometimes used for this purpose. In particular, it is thought that ultra-high energy neutrinos in seawater can be detected acoustically.
Other applications.
Other applications include:
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "c \\,"
},
{
"math_id": 1,
"text": "f \\,"
},
{
"math_id": 2,
"text": "\\lambda \\,"
},
{
"math_id": 3,
"text": "c = f \\cdot \\lambda"
},
{
"math_id": 4,
"text": "u \\,"
},
{
"math_id": 5,
"text": "p \\,"
},
{
"math_id": 6,
"text": "\\rho \\,"
},
{
"math_id": 7,
"text": "p = c \\cdot u \\cdot \\rho"
},
{
"math_id": 8,
"text": "c"
},
{
"math_id": 9,
"text": "I = q^2/(\\rho c) \\,"
},
{
"math_id": 10,
"text": "q \\,"
},
{
"math_id": 11,
"text": "R=-e^{-2 k^{2} h^{2} \\sin^2A}"
},
{
"math_id": 12,
"text": "I_s"
},
{
"math_id": 13,
"text": "I_r"
},
{
"math_id": 14,
"text": "\\mathit{PL}=10\\log (I_s/I_r)"
},
{
"math_id": 15,
"text": "\\mathit{PL}=20 \\log (p_s/p_r)"
},
{
"math_id": 16,
"text": "p_s"
},
{
"math_id": 17,
"text": "p_r"
}
] | https://en.wikipedia.org/wiki?curid=10567426 |
10567854 | State of charge | Value of the charge level of an energy storage system relative to its capacity
State of charge (SoC) quantifies the remaining capacity available in a battery at a given time and in relation to a given state of ageing. It is usually expressed as percentage (0% = empty; 100% = full). An alternative form of the same measure is the depth of discharge (DoD), calculated as 1 − SoC (100% = empty; 0% = full). It refers to the amount of charge that may be used up if the cell is fully discharged. State of charge is normally used when discussing the current state of a battery in use, while depth of discharge is most often used to discuss a constant variation of state of charge during repeated cycles.
In electric vehicles.
In a battery electric vehicle (BEV), the state of charge indicates the remaining energy in the battery pack. It is the equivalent of a fuel gauge.
The state of charge can help to reduce the range anxiety of electric car owners while they wait for the car to charge, whether in a queue or at home, since it reflects the progress of charging and lets owners know when the battery will be ready. However, on any vehicle dashboard, especially in plug-in hybrid vehicles, the state of charge presented as a gauge or percentage value may not be representative of the real level of charge. A noticeable amount of energy may be reserved for hybrid-work operations. Examples of such cars are the Mitsubishi Outlander PHEV (all versions/years of production), where a charge level of zero is indicated to the driver when the real charge level is 20–22%, and the BMW i3 REX (Range Extender version), where about 6% of SoC is reserved for PHEV-like operations.
State of charge is also known to impact battery aging. To extend battery lifetime, extremes of state of charge should be avoided and reduced variations windows are also preferable.
Determining SoC.
Usually, SoC cannot be measured directly, but it can be estimated from directly measured variables in two ways: offline and online. In offline techniques, the battery needs to be charged and discharged at a constant rate, as in Coulomb counting. Such methods give a precise estimation of battery SoC, but they are protracted, costly, and interrupt main battery performance. Therefore, researchers are looking for online techniques. In general there are five methods to determine SoC indirectly:
Chemical method.
This method works only with batteries that offer access to their liquid electrolyte, such as non-sealed lead acid batteries. The specific gravity of the electrolyte can be used to indicate the SoC of the battery.
Hydrometers are used to calculate the specific gravity of a battery. To find specific gravity, it is necessary to measure out a volume of the electrolyte and to weigh it. The specific gravity is then given by (mass of electrolyte [g] / volume of electrolyte [ml]) / (density of water, i.e. 1 g/1 ml). To find SoC from specific gravity, a look-up table of SG vs SoC is needed.
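A minimal Python sketch of this procedure is shown below; the specific-gravity-to-SoC look-up table is purely illustrative (flooded lead-acid cells typically span roughly 1.12 when discharged to about 1.28 when full) and should be replaced by the manufacturer's data.
def specific_gravity(mass_g, volume_ml):
    """Specific gravity relative to water (1 g/ml)."""
    return (mass_g / volume_ml) / 1.0
# Illustrative look-up table: (specific gravity, SoC in percent)
SG_TABLE = [(1.120, 0), (1.160, 25), (1.200, 50), (1.240, 75), (1.280, 100)]
def soc_from_sg(sg):
    """Linear interpolation in the look-up table; clamps outside the table range."""
    if sg <= SG_TABLE[0][0]:
        return SG_TABLE[0][1]
    if sg >= SG_TABLE[-1][0]:
        return SG_TABLE[-1][1]
    for (sg_lo, soc_lo), (sg_hi, soc_hi) in zip(SG_TABLE, SG_TABLE[1:]):
        if sg_lo <= sg <= sg_hi:
            return soc_lo + (soc_hi - soc_lo) * (sg - sg_lo) / (sg_hi - sg_lo)
print(soc_from_sg(specific_gravity(126.0, 100.0)))  # SG 1.26 gives 87.5 % with this example table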
Refractometry has been shown to be a viable method for continuous monitoring of the state of charge. The refractive index of the battery electrolyte is directly relatable to the specific gravity or density of the electrolyte of the cell.
Notably, analysis of electrolyte does not provide information about the state-of-charge in the case of lithium-ion batteries and other batteries, that do not produce or consume solvent or dissolved species during their operation. The method works for lead-acid batteries, because the concentration of sulfuric acid changes with the battery's state-of-charge according to the following reaction:
Pb(s) + PbO2(s) + 2H2SO4(aq) → 2PbSO4(s) + 2H2O(l) formula_0
Voltage method.
This method converts a reading of the battery voltage to SoC, using the known discharge curve (voltage vs. SoC) of the battery. However, the voltage is significantly affected by the battery current (due to the battery's electrochemical kinetics) and by temperature. This method can be made more accurate by compensating the voltage reading with a correction term proportional to the battery current, and by using a look-up table of the battery's open-circuit voltage vs. temperature.
In fact, it is a stated goal of battery design to provide a voltage as constant as possible no matter the SoC, which makes this method difficult to apply. For batteries whose voltage is nearly independent of their state of charge (such as the lithium iron phosphate battery), open-circuit voltage measurements cannot provide a reliable estimate of the SoC. On the other hand, batteries with sloping voltage–charge curves (such as nickel-cobalt-manganese batteries) are more amenable to SoC estimation from open-circuit voltage measurements.
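A minimal Python sketch of the voltage method is given below. The open-circuit-voltage table and the internal resistance are hypothetical values used only for illustration; in practice they come from characterisation of the particular cell, possibly with separate tables per temperature.
# Hypothetical open-circuit voltage (V) vs SoC (%) table for a single cell
OCV_TABLE = [(3.00, 0), (3.45, 20), (3.60, 50), (3.80, 80), (4.20, 100)]
def soc_from_voltage(v_measured, current_a=0.0, r_internal=0.05):
    """Estimate SoC from a voltage reading, compensating for the IR drop
    (current positive on discharge); r_internal is an assumed cell resistance."""
    v_oc = v_measured + current_a * r_internal   # back out the open-circuit voltage
    if v_oc <= OCV_TABLE[0][0]:
        return OCV_TABLE[0][1]
    if v_oc >= OCV_TABLE[-1][0]:
        return OCV_TABLE[-1][1]
    for (v_lo, s_lo), (v_hi, s_hi) in zip(OCV_TABLE, OCV_TABLE[1:]):
        if v_lo <= v_oc <= v_hi:
            return s_lo + (s_hi - s_lo) * (v_oc - v_lo) / (v_hi - v_lo)
print(round(soc_from_voltage(3.70, current_a=2.0), 1))  # example reading under load: 80.0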
Current integration method.
This method, also known as "coulomb counting", calculates the SoC by measuring the battery current and integrating it in time.
Since no measurement can be perfect, this method suffers from long-term drift and lack of a reference point: therefore, the SoC must be re-calibrated on a regular basis, such as by resetting the SoC to 100% when a charger determines that the battery is fully charged (using one of the other methods described here).
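A minimal Python sketch of coulomb counting is shown below (illustrative only; a real battery-management system would also correct for coulombic efficiency, temperature and self-discharge, and would re-calibrate when the charger reports a full battery, as described above).
class CoulombCounter:
    """Tracks SoC by integrating current over time (current in A, positive = discharge)."""
    def __init__(self, capacity_ah, soc_percent=100.0):
        self.capacity_as = capacity_ah * 3600.0   # capacity in ampere-seconds
        self.charge_as = self.capacity_as * soc_percent / 100.0
    def update(self, current_a, dt_s):
        """Integrate one sample of current over dt_s seconds."""
        self.charge_as -= current_a * dt_s
        self.charge_as = min(max(self.charge_as, 0.0), self.capacity_as)
    def soc(self):
        return 100.0 * self.charge_as / self.capacity_as
    def recalibrate_full(self):
        """Reset to 100 % when an external charger reports the battery as full."""
        self.charge_as = self.capacity_as
counter = CoulombCounter(capacity_ah=50.0)
for _ in range(3600):           # one hour of 10 A discharge, sampled every second
    counter.update(10.0, 1.0)
print(round(counter.soc(), 1))  # 80.0 for a 50 Ah battery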
Combined approaches.
Maxim Integrated touts a combined voltage and charge approach that is claimed superior to either method alone; it is implemented in their ModelGauge m3 series of chips, such as MAX17050, which is used in the Nexus 6 and Nexus 9 Android devices, for example.
Kalman filtering.
To overcome the shortcomings of the voltage method and the current integration method, a Kalman filter can be used. The battery can be described with an electrical model which the Kalman filter will use to predict the over-voltage given the observed current. In combination with coulomb counting, it can make an accurate estimation of the state of charge. The strength of this technique is that a Kalman filter adjusts its relative trust of the battery voltage and coulomb counting in real time.
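A minimal one-state Python sketch of this idea follows. It assumes a deliberately simplified linear model in which coulomb counting provides the prediction step and a hypothetical, locally linear open-circuit-voltage curve provides the measurement update; practical implementations use more detailed equivalent-circuit models and extended or unscented Kalman filters.
class SocKalman1D:
    """Toy 1-D Kalman filter: state = SoC (0..1), predicted by coulomb counting,
    corrected by a voltage measurement through an assumed linear OCV model
    ocv = v0 + k * soc."""
    def __init__(self, capacity_ah, soc=0.9, p=0.05,
                 q=1e-7, r=1e-3, v0=3.0, k=1.2, r_internal=0.05):
        self.capacity_as = capacity_ah * 3600.0
        self.soc, self.p = soc, p        # state estimate and its variance
        self.q, self.r = q, r            # process and measurement noise variances
        self.v0, self.k = v0, k          # assumed linear OCV model parameters
        self.r_internal = r_internal     # assumed internal resistance (ohm)
    def step(self, current_a, dt_s, v_measured):
        # Predict: coulomb counting (current positive on discharge)
        self.soc -= current_a * dt_s / self.capacity_as
        self.p += self.q
        # Update: compare predicted terminal voltage with the measurement
        v_predicted = self.v0 + self.k * self.soc - current_a * self.r_internal
        h = self.k                               # d(voltage)/d(soc)
        gain = self.p * h / (h * self.p * h + self.r)
        self.soc += gain * (v_measured - v_predicted)
        self.p *= (1.0 - gain * h)
        return self.soc
kf = SocKalman1D(capacity_ah=50.0)
# the estimate moves from the coulomb-counted 0.9 towards the voltage-implied value (about 0.85)
print(round(kf.step(current_a=10.0, dt_s=1.0, v_measured=3.52), 4))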
Pressure method.
This method can be used with certain NiMH batteries, whose internal pressure increases rapidly when the battery is charged. More commonly, a pressure switch indicates if the battery is fully charged. This method may be improved by taking into account Peukert's law which is a function of charge/discharge current.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E_{cell}^\\circ = 2.05\\text{ V}"
}
] | https://en.wikipedia.org/wiki?curid=10567854 |
10568006 | All-electric range | Driving range of a vehicle using only power from its electric battery pack
All-electric range (AER) is the maximum driving range of an electric vehicle using only power from its on-board battery pack to traverse a given driving cycle. In the case of a Battery electric vehicle (BEV), it means the maximum range per recharge, typically between 150 and 400 miles. For a plug-in hybrid electric vehicle (PHEV), it means the maximum range in charge-depleting mode, typically between 20 and 40 miles. PHEVs can travel considerably further in charge-sustaining mode which utilizes both fuel combustion and the on-board battery pack like a conventional hybrid electric vehicle (HEV).
Calculating AER is made more complicated in PHEVs because of variations in drivetrain design. A vehicle like the Fisker Karma that uses a serial hybrid design has a clear AER. Similarly a vehicle like the Chevrolet Volt, which has a parallel design, disengages the internal combustion engine (ICE) from the drivetrain while in electric mode and has a clear AER. However, blended-mode PHEVs, which use the ICE and electric motor in series, do not have a clear AER because they use both gasoline and electricity at the same time: in this design only the battery powers the drivetrain, while the internal combustion engine generates electricity for the battery. "Equivalent AER" is the AER of vehicles following this architecture. One example of this calculation can be found in the Argonne National Laboratory report titled "TEST PROCEDURES AND BENCHMARKING Blended-Type and EV-Capable Plug-In Hybrid Electric Vehicles."
This procedure uses the formula below to calculate an equivalent AER for vehicles that operate in blended mode:
formula_0
where "GPM"CD designates efficiency in charge-depleting mode, "GPM"CS efficiency in charge-sustaining mode, and "d"CD is the distance travelled in charge-depleting mode.
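A short Python sketch of this formula (with made-up numbers, and assuming GPM denotes fuel use per unit distance) is:
def equivalent_aer(gpm_cd, gpm_cs, distance_cd):
    """Equivalent all-electric range for a blended-mode PHEV.
    gpm_cd, gpm_cs: fuel use per mile in charge-depleting / charge-sustaining mode.
    distance_cd: distance driven in charge-depleting mode (miles)."""
    return (1.0 - gpm_cd / gpm_cs) * distance_cd
# Example: 0.01 gal/mi while depleting vs 0.025 gal/mi while sustaining, over 40 miles
print(equivalent_aer(0.01, 0.025, 40.0))  # 24.0 miles of "equivalent" electric range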
A plug-in hybrid's all-electric range is designated by PHEV-"(miles)" or PHEV-"(kilometers)" km representing the distance the vehicle can travel on battery power alone. For example, a PHEV-20 can travel 20 miles without using its internal combustion engine, or about 32 kilometers, so it may also be designated as PHEV32km.
The all-electric range for BEVs has steadily increased in the last decade. In model year 2010 BEVs, the average AER is 127 km (78.9 mi), while in model year 2021 BEVs the average AER is 349 km (216.9 mi). In model year 2021 BEVs, the median driving range was 60% of the median range of internal combustion engine (ICE) cars, a gap which will continue to narrow as more long-range BEVs are produced.
The All-electric range has also increased for PHEVs over the past decade, increasing from an average PHEV range of 33 km (20.5 mi) in model year 2012 to 62 km (38.5 mi) in model year 2021.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{AER}_\\text{Equivalent} = \\left (1-\\frac{GPM_{CD} }{GPM_{CS} }\\right ) d^{CD}"
}
] | https://en.wikipedia.org/wiki?curid=10568006 |
1056866 | GC-content | Percentage of guanine and cytosine in DNA or RNA molecules
In molecular biology and genetics, GC-content (or guanine-cytosine content) is the percentage of nitrogenous bases in a DNA or RNA molecule that are either guanine (G) or cytosine (C). This measure indicates the proportion of G and C bases out of an implied four total bases, also including adenine and thymine in DNA and adenine and uracil in RNA.
GC-content may be given for a certain fragment of DNA or RNA or for an entire genome. When it refers to a fragment, it may denote the GC-content of an individual gene or section of a gene (domain), a group of genes or gene clusters, a non-coding region, or a synthetic oligonucleotide such as a primer.
Structure.
Qualitatively, guanine (G) and cytosine (C) undergo a specific hydrogen bonding with each other, whereas adenine (A) bonds specifically with thymine (T) in DNA and with uracil (U) in RNA. Quantitatively, each GC base pair is held together by three hydrogen bonds, while AT and AU base pairs are held together by two hydrogen bonds. To emphasize this difference, the base pairings are often represented as "G≡C" versus "A=T" or "A=U".
DNA with low GC-content is less stable than DNA with high GC-content; however, the hydrogen bonds themselves do not have a particularly significant impact on molecular stability, which is instead caused mainly by molecular interactions of base stacking. In spite of the higher thermostability conferred to a nucleic acid with high GC-content, it has been observed that at least some species of bacteria with DNA of high GC-content undergo autolysis more readily, thereby reducing the longevity of the cell "per se". Because of the thermostability of GC pairs, it was once presumed that high GC-content was a necessary adaptation to high temperatures, but this hypothesis was refuted in 2001. Even so, it has been shown that there is a strong correlation between the optimal growth of prokaryotes at higher temperatures and the GC-content of structural RNAs such as ribosomal RNA, transfer RNA, and many other non-coding RNAs. The AU base pairs are less stable than the GC base pairs, making high-GC-content RNA structures more resistant to the effects of high temperatures.
More recently, it has been demonstrated that the most important factor contributing to the thermal stability of double-stranded nucleic acids is actually due to the base stackings of adjacent bases rather than the number of hydrogen bonds between the bases. There is more favorable stacking energy for GC pairs than for AT or AU pairs because of the relative positions of exocyclic groups. Additionally, there is a correlation between the order in which the bases stack and the thermal stability of the molecule as a whole.
Determination.
GC-content is usually expressed as a percentage value, but sometimes as a ratio (called G+C ratio or GC-ratio). GC-content percentage is calculated as
formula_0
whereas the AT/GC ratio is calculated as
formula_1 .
The GC-content percentages as well as GC-ratio can be measured by several means, but one of the simplest methods is to measure the melting temperature of the DNA double helix using spectrophotometry. The absorbance of DNA at a wavelength of 260 nm increases fairly sharply when the double-stranded DNA molecule separates into two single strands when sufficiently heated. The most commonly used protocol for determining GC-ratios uses flow cytometry for large numbers of samples.
In an alternative manner, if the DNA or RNA molecule under investigation has been reliably sequenced, then GC-content can be accurately calculated by simple arithmetic or by using a variety of publicly available software tools, such as the free online GC calculator.
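For a sequenced molecule the calculation reduces to simple counting, as in the following Python sketch, which also reports the AT/GC ratio defined above; U is counted together with T so that RNA sequences can be handled.
def gc_content_percent(sequence):
    """GC-content of a DNA or RNA sequence as a percentage."""
    seq = sequence.upper()
    gc = seq.count("G") + seq.count("C")
    at = seq.count("A") + seq.count("T") + seq.count("U")
    total = gc + at
    return 100.0 * gc / total if total else 0.0
def at_gc_ratio(sequence):
    """AT/GC (or AU/GC) ratio of a sequence."""
    seq = sequence.upper()
    gc = seq.count("G") + seq.count("C")
    at = seq.count("A") + seq.count("T") + seq.count("U")
    return at / gc if gc else float("inf")
print(round(gc_content_percent("ATGCATGC"), 1))  # 50.0
print(round(at_gc_ratio("ATGCATGC"), 2))         # 1.0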
Genomic content.
Within-genome variation.
The GC-ratio within a genome is found to be markedly variable. These variations in GC-ratio within the genomes of more complex organisms result in a mosaic-like formation with islet regions called isochores. This results in the variations in staining intensity in chromosomes. GC-rich isochores typically include many protein-coding genes within them, and thus determination of GC-ratios of these specific regions contributes to mapping gene-rich regions of the genome.
Coding sequences.
Within a long region of genomic sequence, genes are often characterised by having a higher GC-content in contrast to the background GC-content for the entire genome. There is evidence that the length of the coding region of a gene is directly proportional to higher G+C content. This has been attributed to the fact that the stop codon has a bias towards A and T nucleotides, and, thus, the shorter the sequence the higher the AT bias.
Comparison of more than 1,000 orthologous genes in mammals showed marked within-genome variations of the third-codon position GC content, with a range from less than 30% to more than 80%.
Among-genome variation.
GC content is found to be variable with different organisms, the process of which is envisaged to be contributed to by variation in selection, mutational bias, and biased recombination-associated DNA repair.
The average GC-content in human genomes ranges from 35% to 60% across 100-Kb fragments, with a mean of 41%. The GC-content of Yeast ("Saccharomyces cerevisiae") is 38%, and that of another common model organism, thale cress ("Arabidopsis thaliana"), is 36%. Because of the nature of the genetic code, it is virtually impossible for an organism to have a genome with a GC-content approaching either 0% or 100%. However, a species with an extremely low GC-content is "Plasmodium falciparum" (GC% = ~20%), and it is usually common to refer to such examples as being AT-rich instead of GC-poor.
Several mammalian species (e.g., shrew, microbat, tenrec, rabbit) have independently undergone a marked increase in the GC-content of their genes. These GC-content changes are correlated with species life-history traits (e.g., body mass or longevity) and genome size, and might be linked to a molecular phenomenon called the GC-biased gene conversion.
Applications.
Molecular biology.
In polymerase chain reaction (PCR) experiments, the GC-content of short oligonucleotides known as primers is often used to predict their annealing temperature to the template DNA. A higher GC-content level indicates a relatively higher melting temperature.
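A very rough illustration is the classical Wallace rule for short oligonucleotides, which assigns about 2 °C per A or T and 4 °C per G or C; it is only an approximation (nearest-neighbour models are normally preferred), but it shows how a higher GC-content raises the predicted melting temperature. A Python sketch:
def wallace_tm(primer):
    """Rough melting temperature (deg C) of a short primer by the Wallace rule:
    2 deg C for each A or T, 4 deg C for each G or C. Only indicative for oligos shorter than about 14 nt."""
    seq = primer.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc
print(wallace_tm("ATATATATATAT"))  # 24: AT-rich primer, low predicted Tm
print(wallace_tm("GCGCGCGCGCGC"))  # 48: GC-rich primer, high predicted Tm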
Many sequencing technologies, such as Illumina sequencing, have trouble reading high-GC-content sequences. Bird genomes are known to have many such parts, causing the problem of "missing genes" expected to be present from evolution and phenotype but never sequenced — until improved methods were used.
Systematics.
The species problem in non-eukaryotic taxonomy has led to various suggestions in classifying bacteria, and the "ad hoc committee on reconciliation of approaches to bacterial systematics" of 1987 has recommended use of GC-ratios in higher-level hierarchical classification. For example, the Actinomycetota are characterised as "high GC-content bacteria". In "Streptomyces coelicolor" A3(2), GC-content is 72%. With the use of more reliable, modern methods of molecular systematics, the GC-content definition of Actinomycetota has been abolished and low-GC bacteria of this clade have been found.
Software tools.
GCSpeciesSorter and TopSort are software tools for classifying species based on their GC-contents.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\cfrac{G+C}{A+T+G+C}\\times100%"
},
{
"math_id": 1,
"text": "\\cfrac{A+T}{G+C}"
}
] | https://en.wikipedia.org/wiki?curid=1056866 |
1056980 | Bid–ask spread | Financial markets concept
The bid–ask spread (also bid–offer or bid/ask and buy/sell in the case of a market maker) is the difference between the prices quoted (either by a single market maker or in a limit order book) for an immediate sale (ask) and an immediate purchase (bid) for stocks, futures contracts, options, or currency pairs in some auction scenario. The size of the bid–ask spread in a security is one measure of the liquidity of the market and of the size of the transaction cost. If the spread is 0 then it is a frictionless asset.
Liquidity.
The trader initiating the transaction is said to demand liquidity, and the other party (counterparty) to the transaction supplies liquidity. Liquidity demanders place market orders and liquidity suppliers place limit orders. For a round trip (a purchase and sale together) the liquidity demander pays the spread and the liquidity supplier earns the spread. All limit orders outstanding at a given time (i.e. limit orders that have not been executed) are together called the Limit Order Book. In some markets such as NASDAQ, dealers supply liquidity. However, on most exchanges, such as the Australian Securities Exchange, there are no designated liquidity suppliers, and liquidity is supplied by other traders. On these exchanges, and even on NASDAQ, institutions and individuals can supply liquidity by placing limit orders.
The bid–ask spread is an accepted measure of liquidity costs in exchange traded securities and commodities. On any standardized exchange, two elements comprise almost all of the transaction cost—brokerage fees and bid–ask spreads. Under competitive conditions, the bid–ask spread measures the cost of making transactions without delay. The difference in price paid by an urgent buyer and received by an urgent seller is the liquidity cost. Since brokerage commissions do not vary with the time taken to complete a transaction, differences in bid–ask spread indicate differences in the liquidity cost.
Types of spreads.
Quoted spread.
The simplest type of bid-ask spread is the quoted spread. This spread is taken directly from quotes, that is, posted prices. Using quotes, this spread is the difference between the lowest asking price (the lowest price at which someone will sell) and the highest bid price (the highest price at which someone will buy). This spread is often expressed as a percent of the midpoint, that is, the average between the lowest ask and highest bid: formula_0.
Effective spread.
Quoted spreads often over-state the spreads finally paid by traders, due to "price improvement", that is, a dealer offering a better price than the quotes, also known as "trading inside the spread". Effective spreads account for this issue by using trade prices, and are typically defined as: formula_1. The effective spread is more difficult to measure than the quoted spread, since one needs to match trades with quotes and account for reporting delays (at least pre-electronic trading). Moreover, this definition embeds the assumption that trades above the midpoint are buys and trades below the midpoint are sales.
Realized spread.
Quoted and effective spreads represent costs incurred by traders. This cost includes both a cost of asymmetric information, that is, a loss to traders that are more informed, as well as a cost of immediacy, that is, a cost for having a trade being executed by an intermediary. The realized spread isolates the cost of immediacy, also known as the "real cost". This spread is defined as: formula_2 where the subscript k represents the kth trade. The intuition for why this spread measures the cost of immediacy is that, after each trade, the dealer adjusts quotes to reflect the information in the trade (and inventory effects).
Inner price moves are moves of the bid-ask price where the spread has been deducted.
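The three definitions can be illustrated with a short Python sketch (the prices are made-up examples):
def quoted_spread_pct(bid, ask):
    """Quoted spread as a percentage of the midpoint."""
    mid = (bid + ask) / 2.0
    return (ask - bid) / mid * 100.0
def effective_spread_pct(trade_price, mid_at_trade):
    """Effective spread as a percentage of the midpoint prevailing at the trade."""
    return 2.0 * abs(trade_price - mid_at_trade) / mid_at_trade * 100.0
def realized_spread_pct(trade_price, mid_at_trade, mid_later):
    """Realized spread, using the midpoint some time after the k-th trade."""
    return 2.0 * abs(mid_later - trade_price) / mid_at_trade * 100.0
bid, ask = 99.95, 100.05
print(round(quoted_spread_pct(bid, ask), 3))                  # 0.1 %
print(round(effective_spread_pct(100.03, 100.00), 3))         # 0.06 % (trade inside the spread)
print(round(realized_spread_pct(100.03, 100.00, 100.02), 3))  # 0.02 %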
Example: Currency spread.
If the current bid price for the EUR/USD currency pair is 1.5760 and the current offer price is 1.5763, this means that currently you can sell the EUR/USD at 1.5760 and buy at 1.5763. The difference between those prices (3 pips) is the spread.
If the USD/JPY currency pair is currently trading at 101.89/101.92, that is another way of saying that the bid for the USD/JPY is 101.89 and the offer is 101.92. This means that currently, holders of USD can sell US$1 for 101.89 JPY and investors who wish to buy dollars can do so at a cost of 101.92 JPY per US$1.
Example: Metals.
Gold and silver are known for having the tightest bid-ask spreads, making them useful as money, while other metals may have wider bid-ask spreads due to lower trading volumes, less liquidity, or large fluctuations in supply and demand. For example, rare metals like platinum, palladium, and rhodium have lower trading volumes compared to gold or silver, which can result in larger bid-ask spreads.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{Quoted Spread} = \\frac{\\hbox{ask}-\\hbox{bid}}{\\hbox{midpoint}}\\times100"
},
{
"math_id": 1,
"text": "\\text{Effective Spread} = 2 \\times \\frac{|\\hbox{Trade Price}-\\hbox{Midpoint}|}{\\hbox{Midpoint}}\\times 100"
},
{
"math_id": 2,
"text": "\\text{Realized Spread}_k = 2 \\times \\frac{|\\hbox{Midpoint}_{k+1}-\\hbox{Traded Price}_k|}{\\hbox{Midpoint}_k}\\times 100"
}
] | https://en.wikipedia.org/wiki?curid=1056980 |
1056981 | Organ flue pipe scaling | Scaling is the ratio of an organ pipe's diameter to its length. The scaling of a pipe is a major influence on its timbre. Reed pipes are scaled according to different formulas than for flue pipes. In general, the larger the diameter of a given pipe at a given pitch, the fuller and more fundamental the sound becomes.
The effect of the scale of a pipe on its timbre.
The sound of an organ pipe is made up of a set of harmonics formed by acoustic resonance, with wavelengths that are fractions of the length of the pipe. There are nodes of stationary air, and antinodes of moving air, two of which will be the two ends of an open-ended organ-pipe (the mouth, and the open end at the top). The actual position of the antinodes is not exactly at the end of the pipe; rather it is slightly outside the end. The difference is called an end correction. The difference is larger for wider pipes. For example, at low frequencies, the additional effective length at the open pipe is about formula_0, where formula_1 is the radius of the pipe. However, the end correction is also smaller at higher frequencies. This shorter effective length raises the pitch of the resonance, so the higher resonant frequencies of the pipe are 'too high', sharp of where they should be, as natural harmonics of the fundamental note.
This effect suppresses the higher harmonics. The wider the pipe, the greater the suppression. Thus, other factors being equal, wide pipes are poor in harmonics, and narrow pipes are rich in harmonics. The scale of a pipe refers to its width compared to its length, and an organ builder will refer to a flute as a wide-scaled stop, and a string-toned gamba as a narrow-scaled stop.
Dom Bédos de Celles and the problem of scaling across a rank of pipes.
The lowest pipes in a rank are long, and the highest are short. The progression of the length of pipes is dictated by physics alone, and the length must halve for each octave. Since there are twelve semitones in an octave, each pipe differs from its neighbours by a factor of formula_2. If the diameters of the pipes are scaled in the same way, so each pipe has exactly the same proportions, it is found that the perceived timbre and volume vary greatly between the low notes and the high, and the result is not musically satisfactory. This effect has been known since antiquity, and part of the organ builder's art is to scale pipes such that the timbre and volume of a rank vary little, or only according to the wishes of the builder. One of the first authors to publish data on the scaling of organ pipes was Dom Bédos de Celles. The basis of his scale was unknown until Mahrenholz discovered that the scale was based on one in which the width halved for each octave, but with addition of a constant. This constant compensates for the inappropriate narrowing of the highest pipes, and if chosen with care, can match modern scalings to within the difference of diameter that one would expect from pipes sounding notes about two semi-tones apart.
Töpfer's "Normalmensur".
The system most commonly used to fully document and describe scaling was devised by Johann Gottlob Töpfer. Since varying the diameter of a pipe in direct proportion to its length (which means it varies by a factor of 1:2 per octave) caused the pipes to narrow too rapidly, and keeping the diameter constant (a factor of 1:1 per octave) was too little, the correct change in scale must be between these values. Töpfer reasoned that the cross-sectional area of the pipe was the critical factor, and he chose to vary this by the geometric mean of the ratios 1:2 and 1:4 per octave. This meant that the cross-sectional area varied as formula_3. In consequence, the diameter of the pipe halved after 16 semitone intervals, i.e. on the 17th note (musicians count the starting-note as the first, so if C is the first note, C# is the second, differing by one semitone). Töpfer was able to confirm that if the diameter of the pipes in a rank halved on the 17th note, its volume and timbre remained adequately constant across the entire organ keyboard. He established this as a standard scale, or in German, Normalmensur, with the additional stipulation that the internal diameter be 155.5 mm (6.12 inches) at 8′ C (the lowest note of the modern organ compass) and the mouth width one-quarter of the circumference of such a pipe.
Töpfer's system provides a reference scale, from which the scale of other pipe ranks can be described by means of "half-tone" deviations larger or smaller (indicated by the abbreviation "ht"). A rank that also halves in diameter at the 17th note but is somewhat wider could be described as "+ 2 ht" meaning that the pipe corresponding to the note "D" has the width expected for a pipe of the note "C", two semitones below (and therefore two semitone intervals wider). If a rank does not halve exactly at the 17th note, then its relationship to the Normalmensur will vary across the keyboard. The system can therefore be used to produce Normalmensur variation tables or line graphs for the analysis of existing ranks or the design of new ranks.
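As a numerical illustration of Töpfer's scheme, the Python sketch below computes diameters that halve every 16 semitones from an assumed Normalmensur diameter at 8′ C (the commonly cited 155.5 mm), and expresses the deviation of a measured pipe from that reference in half-tones (ht):
import math
NORMALMENSUR_8FT_C_MM = 155.5   # commonly cited Normalmensur inside diameter at 8' C
def normalmensur_diameter_mm(semitones_above_8ft_c):
    """Diameter halves every 16 semitones (i.e. on the 17th note)."""
    return NORMALMENSUR_8FT_C_MM * 2.0 ** (-semitones_above_8ft_c / 16.0)
def halftone_deviation(measured_mm, semitones_above_8ft_c):
    """Deviation from Normalmensur in half-tones: +2 ht means the pipe is as wide
    as the Normalmensur pipe two semitones lower."""
    reference = normalmensur_diameter_mm(semitones_above_8ft_c)
    return 16.0 * math.log2(measured_mm / reference)
print(round(normalmensur_diameter_mm(0), 1))    # 155.5 at 8' C
print(round(normalmensur_diameter_mm(16), 1))   # 77.8, i.e. halved on the 17th note
print(round(halftone_deviation(90.0, 12), 1))   # a 90 mm pipe one octave above 8' C is about -0.6 ht (slightly narrow)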
The following is a list of representative 8′ stops in order of increasing diameter (and, therefore, of increasingly fundamental tone) at middle C with respect to Normalmensur, which is listed in the middle. Deviations from Normalmensur are provided after the pipe measurement in brackets.
Normalmensur scaling table, 17th halving ratio:
From Organ Supply Industries catalog
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "e = 0.6r"
},
{
"math_id": 1,
"text": "r"
},
{
"math_id": 2,
"text": " \\sqrt[12]{2} "
},
{
"math_id": 3,
"text": "1 : \\sqrt{8}"
}
] | https://en.wikipedia.org/wiki?curid=1056981 |
10570113 | Layer cake representation | In mathematics, the layer cake representation of a non-negative, real-valued measurable function formula_0 defined on a measure space formula_1 is the formula
formula_2
for all formula_3, where formula_4 denotes the indicator function of a subset formula_5 and formula_6 denotes the super-level set
formula_7
The layer cake representation follows easily from observing that
formula_8
and then using the formula
formula_9
The layer cake representation takes its name from the representation of the value formula_10 as the sum of contributions from the "layers" formula_6: "layers"/values formula_11 below formula_10 contribute to the integral, while values formula_11 above formula_10 do not.
It is a generalization of Cavalieri's principle and is also known under this name.
An important consequence of the layer cake representation is the identity
formula_12
which follows from it by applying the Fubini-Tonelli theorem.
An important application is that formula_13 for formula_14 can be written as follows
formula_15
which follows immediately from the change of variables formula_16 in the layer cake representation of formula_17.
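The identity can be checked numerically. The Python sketch below is illustrative only: it uses the empirical measure given by uniform weights on random sample points and compares both sides for p = 2.
import bisect
import random
random.seed(0)
p = 2.0
samples = sorted(random.uniform(0.0, 3.0) for _ in range(10000))  # values of |f| at sample points
n = len(samples)
# Left side: integral of |f|^p with respect to the empirical (uniform-weight) measure
lhs = sum(x ** p for x in samples) / n
# Right side: p * integral over s of s^(p-1) * mu(|f| > s), by a midpoint Riemann sum
ds = 0.001
rhs = 0.0
s = ds / 2.0
while s < 3.0:
    tail = (n - bisect.bisect_right(samples, s)) / n   # empirical measure of the set {|f| > s}
    rhs += p * s ** (p - 1) * tail * ds
    s += ds
print(round(lhs, 3), round(rhs, 3))   # both close to 3.0, the exact value of E[X^2] for X uniform on (0, 3)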
This representation can be used to prove Markov's inequality and Chebyshev's inequality.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "(\\Omega,\\mathcal{A},\\mu)"
},
{
"math_id": 2,
"text": "f(x) = \\int_0^\\infty 1_{L(f, t)} (x) \\, \\mathrm{d}t,"
},
{
"math_id": 3,
"text": "x \\in \\Omega"
},
{
"math_id": 4,
"text": "1_E"
},
{
"math_id": 5,
"text": "E\\subseteq \\Omega"
},
{
"math_id": 6,
"text": "L(f,t)"
},
{
"math_id": 7,
"text": "L(f, t) = \\{ y \\in \\Omega \\mid f(y) \\geq t \\}."
},
{
"math_id": 8,
"text": " 1_{L(f, t)}(x) = 1_{[0, f(x)]}(t)"
},
{
"math_id": 9,
"text": "f(x) = \\int_0^{f(x)} \\,\\mathrm{d}t."
},
{
"math_id": 10,
"text": "f(x)"
},
{
"math_id": 11,
"text": "t"
},
{
"math_id": 12,
"text": "\\int_\\Omega f(x) \\, \\mathrm{d}\\mu(x) = \\int_0^{\\infty} \\mu(\\{ x \\in \\Omega \\mid f(x) > t \\})\\,\\mathrm{d}t,"
},
{
"math_id": 13,
"text": "L^p"
},
{
"math_id": 14,
"text": "1\\leq p<+\\infty"
},
{
"math_id": 15,
"text": "\\int_\\Omega |f(x)|^p \\, \\mathrm{d}\\mu(x) = p\\int_0^{\\infty} s^{p-1}\\mu(\\{ x \\in \\Omega \\mid\\, |f(x)| > s \\}) \\mathrm{d}s,"
},
{
"math_id": 16,
"text": "t=s^{p}"
},
{
"math_id": 17,
"text": "|f(x)|^p"
}
] | https://en.wikipedia.org/wiki?curid=10570113 |
10570298 | Prékopa–Leindler inequality | In mathematics, the Prékopa–Leindler inequality is an integral inequality closely related to the reverse Young's inequality, the Brunn–Minkowski inequality and a number of other important and classical inequalities in analysis. The result is named after the Hungarian mathematicians András Prékopa and László Leindler.
Statement of the inequality.
Let 0 < "λ" < 1 and let "f", "g", "h" : R"n" → [0, +∞) be non-negative real-valued measurable functions defined on "n"-dimensional Euclidean space R"n". Suppose that these functions satisfy
for all "x" and "y" in R"n". Then
formula_0
Essential form of the inequality.
Recall that the essential supremum of a measurable function "f" : R"n" → R is defined by
formula_1
This notation allows the following "essential form" of the Prékopa–Leindler inequality: let 0 < "λ" < 1 and let "f", "g" ∈ "L"1(R"n"; [0, +∞)) be non-negative absolutely integrable functions. Let
formula_2
Then "s" is measurable and
formula_3
The essential supremum form was given by Herm Brascamp and Elliott Lieb. Its use can change the left side of the inequality. For example, a function "g" that takes the value 1 at exactly one point will not usually yield a zero left side in the "non-essential sup" form but it will always yield a zero left side in the "essential sup" form.
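The essential form can be explored numerically in one dimension. The Python sketch below is illustrative only: the essential supremum is approximated by a maximum over a finite grid, λ = 1/2, and f and g are Gaussian-shaped functions; it checks that the integral of s is at least the geometric mean of the integrals of f and g.
import math
lam = 0.5
xs = [i * 0.05 for i in range(-200, 201)]              # grid on [-10, 10]
dx = 0.05
def f(x):
    return math.exp(-(x - 1.0) ** 2)                    # a Gaussian bump centred at 1
def g(x):
    return math.exp(-2.0 * (x + 1.0) ** 2)              # a narrower bump centred at -1
def s(x):
    # grid approximation of sup over y of f((x - y)/(1 - lam))^(1 - lam) * g(y/lam)^lam
    return max(f((x - y) / (1.0 - lam)) ** (1.0 - lam) * g(y / lam) ** lam for y in xs)
int_f = sum(f(x) for x in xs) * dx
int_g = sum(g(x) for x in xs) * dx
int_s = sum(s(x) for x in xs) * dx
# prints approximately 1.535 >= 1.49, confirming the inequality for this example
print(round(int_s, 4), ">=", round(int_f ** (1.0 - lam) * int_g ** lam, 4))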
Relationship to the Brunn–Minkowski inequality.
It can be shown that the usual Prékopa–Leindler inequality implies the Brunn–Minkowski inequality in the following form: if 0 < "λ" < 1 and "A" and "B" are bounded, measurable subsets of R"n" such that the Minkowski sum (1 − "λ")"A" + λ"B" is also measurable, then
formula_4
where "μ" denotes "n"-dimensional Lebesgue measure. Hence, the Prékopa–Leindler inequality can also be used to prove the Brunn–Minkowski inequality in its more familiar form: if 0 < "λ" < 1 and "A" and "B" are non-empty, bounded, measurable subsets of R"n" such that (1 − "λ")"A" + λ"B" is also measurable, then
formula_5
Applications in probability and statistics.
Log-concave distributions.
The Prékopa–Leindler inequality is useful in the theory of log-concave distributions, as it can be used to show that log-concavity is preserved by marginalization and independent summation of log-concave distributed random variables. Since, if formula_6 have pdf formula_7, and formula_6 are independent, then formula_8 is the pdf of formula_9, we also have that the convolution of two log-concave functions is log-concave.
Suppose that "H"("x","y") is a log-concave distribution for ("x","y") ∈ R"m" × R"n", so that by definition we have
and let "M"("y") denote the marginal distribution obtained by integrating over "x":
formula_10
Let "y"1, "y"2 ∈ R"n" and 0 < "λ" < 1 be given. Then equation (2) satisfies condition (1) with "h"("x") = "H"("x",(1 − "λ")y1 + "λy"2), "f"("x") = "H"("x","y"1) and "g"("x") = "H"("x","y"2), so the Prékopa–Leindler inequality applies. It can be written in terms of "M" as
formula_11
which is the definition of log-concavity for "M".
To see how this implies the preservation of log-convexity by independent sums, suppose that "X" and "Y" are independent random variables with log-concave distribution. Since the product of two log-concave functions is log-concave, the joint distribution of ("X","Y") is also log-concave. Log-concavity is preserved by affine changes of coordinates, so the distribution of ("X" + "Y", "X" − "Y") is log-concave as well. Since the distribution of "X+Y" is a marginal over the joint distribution of ("X" + "Y", "X" − "Y"), we conclude that "X" + "Y" has a log-concave distribution.
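This preservation property can be checked numerically for particular examples. The Python sketch below (illustrative only, on a discrete grid) convolves two log-concave densities and verifies that the logarithm of the result is concave by inspecting its second differences.
import math
dx = 0.05
xs = [-10.0 + i * dx for i in range(401)]
def f(x):                      # Gaussian density shape (log-concave)
    return math.exp(-x * x)
def g(x):                      # Laplace density shape (log-concave)
    return math.exp(-abs(x))
def h(t):                      # numerical convolution (f * g)(t)
    return sum(f(x) * g(t - x) for x in xs) * dx
ts = [-3.0 + 0.1 * k for k in range(61)]
log_h = [math.log(h(t)) for t in ts]
# Log-concavity is equivalent to the second differences of log h being non-positive
second_diffs = [log_h[k - 1] - 2.0 * log_h[k] + log_h[k + 1] for k in range(1, len(log_h) - 1)]
print(max(second_diffs) <= 1e-9)   # True, up to numerical error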
Applications to concentration of measure.
The Prékopa–Leindler inequality can be used to prove results about concentration of measure.
Theorem. Let formula_12, and set formula_13. Let formula_14 denote the standard Gaussian pdf, and formula_15 its associated measure. Then formula_16.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\| h\\|_{1} := \\int_{\\mathbb{R}^n} h(x) \\, \\mathrm{d} x \\geq \\left( \\int_{\\mathbb{R}^n} f(x) \\, \\mathrm{d} x \\right)^{1 -\\lambda} \\left( \\int_{\\mathbb{R}^n} g(x) \\, \\mathrm{d} x \\right)^\\lambda =: \\| f\\|_1^{1 -\\lambda} \\| g\\|_1^\\lambda. "
},
{
"math_id": 1,
"text": "\\mathop{\\mathrm{ess\\,sup}}_{x \\in \\mathbb{R}^{n}} f(x) = \\inf \\left\\{ t \\in [- \\infty, + \\infty] \\mid f(x) \\leq t \\text{ for almost all } x \\in \\mathbb{R}^{n} \\right\\}."
},
{
"math_id": 2,
"text": "s(x) = \\mathop{\\mathrm{ess\\,sup}}_{y \\in \\mathbb{R}^n} f \\left( \\frac{x - y}{1 - \\lambda} \\right)^{1 - \\lambda} g \\left( \\frac{y}{\\lambda} \\right)^\\lambda."
},
{
"math_id": 3,
"text": "\\| s \\|_1 \\geq \\| f \\|_1^{1 - \\lambda} \\| g \\|_1^\\lambda."
},
{
"math_id": 4,
"text": "\\mu \\left( (1 - \\lambda) A + \\lambda B \\right) \\geq \\mu (A)^{1 - \\lambda} \\mu (B)^{\\lambda},"
},
{
"math_id": 5,
"text": "\\mu \\left( (1 - \\lambda) A + \\lambda B \\right)^{1 / n} \\geq (1 - \\lambda) \\mu (A)^{1 / n} + \\lambda \\mu (B)^{1 / n}."
},
{
"math_id": 6,
"text": "X, Y"
},
{
"math_id": 7,
"text": "f, g"
},
{
"math_id": 8,
"text": "f\\star g"
},
{
"math_id": 9,
"text": "X+Y"
},
{
"math_id": 10,
"text": "M(y) = \\int_{\\mathbb{R}^m} H(x,y) \\, dx."
},
{
"math_id": 11,
"text": "M((1-\\lambda) y_1 + \\lambda y_2) \\geq M(y_1)^{1-\\lambda} M(y_2)^\\lambda,"
},
{
"math_id": 12,
"text": " A \\subseteq \\mathbb{R}^n "
},
{
"math_id": 13,
"text": " A_{\\epsilon} = \\{ x : d(x,A) < \\epsilon \\} "
},
{
"math_id": 14,
"text": " \\gamma(x) "
},
{
"math_id": 15,
"text": " \\mu "
},
{
"math_id": 16,
"text": " \\mu(A_{\\epsilon}) \\geq 1 - \\frac{ e^{ - \\epsilon^2/4}}{\\mu(A)} "
}
] | https://en.wikipedia.org/wiki?curid=10570298 |
1057085 | Lévy C curve | In mathematics, the Lévy C curve is a self-similar fractal curve that was first described and whose differentiability properties were analysed by Ernesto Cesàro in 1906 and Georg Faber in 1910, but now bears the name of French mathematician Paul Lévy, who was the first to describe its self-similarity properties as well as to provide a geometrical construction showing it as a representative curve in the same class as the Koch curve. It is a special case of a period-doubling curve, a de Rham curve.
L-system construction.
If using a Lindenmayer system then the construction of the C curve starts with a straight line. An isosceles triangle with angles of 45°, 90° and 45° is built using this line as its hypotenuse. The original line is then replaced by the other two sides of this triangle.
At the second stage, the two new lines each form the base for another right-angled isosceles triangle, and are replaced by the other two sides of their respective triangle. So, after two stages, the curve takes the appearance of three sides of a rectangle with the same length as the original line, but only half as wide.
At each subsequent stage, each straight line segment in the curve is replaced by the other two sides of a right-angled isosceles triangle built on it. After "n" stages the curve consists of 2"n" line segments, each of which is smaller than the original line by a factor of 2"n"/2.
This L-system can be described as follows:
variables: F
constants: + −
start: F
rule: F → +F−−F+
angle: 45°
where "F" means "draw forward", "+" means "turn clockwise 45°", and "−" means "turn anticlockwise 45°".
The fractal curve that is the limit of this "infinite" process is the Lévy C curve. It takes its name from its resemblance to a highly ornamented version of the letter "C". The curve resembles the finer details of the Pythagoras tree.
The Hausdorff dimension of the C curve equals 2 (it contains open sets), whereas the boundary has dimension about 1.9340.
Variations.
The standard C curve is built using 45° isosceles triangles. Variations of the C curve can be constructed by using isosceles triangles with angles other than 45°. As long as the angle is less than 60°, the new lines introduced at each stage are each shorter than the lines that they replace, so the construction process tends towards a limit curve. Angles less than 45° produce a fractal that is less tightly "curled".
IFS construction.
If using an iterated function system (IFS, here via the chaos game method), then the construction of the C curve is a bit easier. It needs a set of two "rules": two points in a plane (the translators), each associated with a scale factor of 1/√2. The first rule is a rotation of 45° and the second of −45°. This set iterates a point ["x", "y"] by randomly choosing one of the two rules and using the parameters associated with that rule to scale/rotate and translate the point with a 2D-transform function.
Put into formulae:
formula_0
formula_1
from the initial set of points formula_2.
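A minimal Python sketch of this chaos-game iteration, treating the two rules as the complex maps formula_0 and formula_1 (plotting of the visited points is omitted, and the function name is illustrative):
import random

def f1(z):
    return (1 - 1j) * z / 2  # contraction by 1/√2 toward fixed point 0, with a 45° rotation

def f2(z):
    return 1 + (1 + 1j) * (z - 1) / 2  # contraction by 1/√2 toward fixed point 1, rotated the other way

def chaos_game(n_points, z=0):
    """Repeatedly apply a randomly chosen map; the visited points approximate the attractor."""
    points = []
    for _ in range(n_points):
        z = random.choice((f1, f2))(z)
        points.append(z)
    return points

pts = chaos_game(100000)  # plotting real vs. imaginary parts of pts reveals the C curve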
Sample Implementation of Levy C Curve.
// Java Sample Implementation of Levy C Curve
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Graphics2D;
import javax.swing.JFrame;
import javax.swing.JPanel;
import java.util.concurrent.ThreadLocalRandom;
public class C_curve extends JPanel {
    public float x, y, len, alpha_angle;
    public int iteration_n;

    public void paint(Graphics g) {
        Graphics2D g2d = (Graphics2D) g;
        c_curve(x, y, len, alpha_angle, iteration_n, g2d);
    }

    public void c_curve(double x, double y, double len, double alpha_angle, int iteration_n, Graphics2D g) {
        double fx = x;
        double fy = y;
        double length = len;
        double alpha = alpha_angle;
        int it_n = iteration_n;
        if (it_n > 0) {
            length = (length / Math.sqrt(2));
            c_curve(fx, fy, length, (alpha + 45), (it_n - 1), g); // Recursive Call
            fx = (fx + (length * Math.cos(Math.toRadians(alpha + 45))));
            fy = (fy + (length * Math.sin(Math.toRadians(alpha + 45))));
            c_curve(fx, fy, length, (alpha - 45), (it_n - 1), g); // Recursive Call
        } else {
            Color[] A = {Color.RED, Color.ORANGE, Color.BLUE, Color.DARK_GRAY};
            g.setColor(A[ThreadLocalRandom.current().nextInt(0, A.length)]); // For choosing different color values
            g.drawLine((int) fx, (int) fy, (int) (fx + (length * Math.cos(Math.toRadians(alpha)))), (int) (fy + (length * Math.sin(Math.toRadians(alpha)))));
        }
    }

    public static void main(String[] args) {
        C_curve points = new C_curve();
        points.x = 200; // Starting x value
        points.y = 100; // Starting y value
        points.len = 150; // Starting length value
        points.alpha_angle = 90; // Starting angle value
        points.iteration_n = 15; // Starting iteration value
        JFrame frame = new JFrame("Points");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(points);
        frame.setSize(500, 500);
        frame.setLocationRelativeTo(null);
        frame.setVisible(true);
    }
}
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f_1(z)=\\frac{(1-i)z}{2}"
},
{
"math_id": 1,
"text": "f_2(z)=1+\\frac{(1+i)(z-1)}{2}"
},
{
"math_id": 2,
"text": "S_0 = \\{0,1\\}"
}
] | https://en.wikipedia.org/wiki?curid=1057085 |
10572379 | Calendrical calculation | A calendrical calculation is a calculation concerning calendar dates. Calendrical calculations can be considered an area of applied mathematics.
Some examples of calendrical calculations:
Calendrical calculation is one of the five major Savant syndrome characteristics.
Examples.
Numerical methods were described in the "Journal of the Department of Mathematics," Open University, Milton Keynes, Buckinghamshire "(M500)" in 1997 and 1998. The following algorithm gives the number of days ("d") in month "m" of year "y". The value of "m" is given on the right of the month in the following list:
January 11, February 12, March 1, April 2, May 3, June 4, July 5, August 6, September 7, October 8, November 9, December 10.
The algorithm enables a computer to print calendar and diary pages for past or future sequences of any desired length from the reform of the calendar, which in England was 3/14 September 1752. The article Date of Easter gives algorithms for calculating the date of Easter. Combining the two enables the page headers to show any fixed or movable festival observed on the day, and whether it is a bank holiday.
The algorithm utilises the integral or floor function: thus formula_0 is that part of the number "x" which lies to the left of the decimal point. It is only necessary to work through the complete function when calculating the length of February in a year which is divisible by 100 without remainder. When calculating the length of February in any other year it is only necessary to evaluate the terms to the left of the fifth + sign. When calculating the length of any other month it is only necessary to evaluate the terms to the left of the third - sign.
formula_1
formula_2
To find the length of, for example, February 2000 the calculation is
formula_3
formula_4
formula_5
formula_6
formula_7
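The formula can be checked by machine with a direct transcription. The sketch below (the function name is illustrative) uses exact rational arithmetic so that the floor terms are not disturbed by floating-point rounding, and it follows the shortcut rules stated above: the leap-year term is evaluated only for February, and the century expression only for February of years divisible by 100 (for such years the term involving the fractional part of "y"/100 vanishes and is therefore omitted).
from fractions import Fraction
from math import floor

def days_in_month(m, y):
    # m uses the article's numbering: March = 1, ..., December = 10,
    # January = 11, February = 12.
    d = 30 + floor(Fraction(6 * m + 4, 10)) - floor(Fraction(6 * m - 2, 10)) - 2 * (m // 12)
    if m == 12:  # February: add the leap-year term
        d += floor(Fraction(y - 1, 4) - (y - 1) // 4 + Fraction(1, 4))
        if y % 100 == 0:  # century year: evaluate the remaining term
            q = Fraction(2 * (y // 100 - 3), 9)  # (floor(y/100) - 3) / 4.5
            inner = floor(Fraction(3, 10) + q - floor(q))
            d += floor(Fraction(inner + 99, 100)) - 1
    return d

print(days_in_month(12, 2000))  # 29, as in the worked example above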
See also.
"Calendrical Calculations"
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\left\\lfloor{x}\\right\\rfloor"
},
{
"math_id": 1,
"text": "d=30+\\left\\lfloor{0.6m+0.4}\\right\\rfloor-\\left\\lfloor{0.6m-0.2}\\right\\rfloor-2\\left\\lfloor{m/12}\\right\\rfloor+\\left\\lfloor{m/12}\\right\\rfloor\\left\\lfloor{\\frac{y-1}{4}-\\left\\lfloor{\\frac{y-1}{4}}\\right\\rfloor+0.25}\\right\\rfloor"
},
{
"math_id": 2,
"text": "+\\left\\lfloor{m/12}\\right\\rfloor\\left\\lfloor{\\left\\lfloor{\\cfrac{\\left\\lfloor{0.3+\\cfrac{\\left\\lfloor{y/100}\\right\\rfloor-3}{4.5}-\\left\\lfloor{\\cfrac{\\left\\lfloor{y/100}\\right\\rfloor-3}{4.5}}\\right\\rfloor}\\right\\rfloor+99+100\\left\\lfloor{y/100-\\left\\lfloor{y/100}\\right\\rfloor}\\right\\rfloor}{100}}\\right\\rfloor-1}\\right\\rfloor"
},
{
"math_id": 3,
"text": "d=30+\\left\\lfloor{7.2+0.4}\\right\\rfloor-\\left\\lfloor{7.2-0.2}\\right\\rfloor-2+\\left\\lfloor{1999/4-\\left\\lfloor{1999/4}\\right\\rfloor+0.25}\\right\\rfloor"
},
{
"math_id": 4,
"text": "+\\left\\lfloor{\\cfrac{\\left\\lfloor{0.3+\\cfrac{20-3}{4.5}-\\left\\lfloor{\\cfrac{20-3}{4.5}}\\right\\rfloor}\\right\\rfloor+99}{100}}\\right\\rfloor-1"
},
{
"math_id": 5,
"text": "=30+7-7-2+\\left\\lfloor{499.75-499+0.25}\\right\\rfloor+\\left\\lfloor{\\frac{\\left\\lfloor{0.3+3.77-3}\\right\\rfloor+99}{100}}\\right\\rfloor-1"
},
{
"math_id": 6,
"text": "=28+1+1-1"
},
{
"math_id": 7,
"text": "=29."
}
] | https://en.wikipedia.org/wiki?curid=10572379 |
10573305 | K-mer | Substrings of length k contained in a biological sequence
In bioinformatics, "k"-mers are substrings of length formula_0 contained within a biological sequence. Primarily used within the context of computational genomics and sequence analysis, in which "k"-mers are composed of nucleotides ("i.e." A, T, G, and C), "k"-mers are used to assemble DNA sequences, improve heterologous gene expression, identify species in metagenomic samples, and create attenuated vaccines. Usually, the term "k"-mer refers to all of a sequence's subsequences of length formula_0, such that the sequence AGAT would have four monomers (A, G, A, and T), three 2-mers (AG, GA, AT), two 3-mers (AGA and GAT) and one 4-mer (AGAT). More generally, a sequence of length formula_1 will have formula_2 "k"-mers and formula_3 total possible "k"-mers, where formula_4 is the number of possible monomers (e.g. four in the case of DNA).
Introduction.
"k"-mers are simply length formula_0 subsequences. For example, all the possible "k"-mers of a DNA sequence are shown below:
A method of visualizing "k"-mers, the "k"-mer spectrum, shows the multiplicity of each "k"-mer in a sequence versus the number of "k"-mers with that multiplicity. The number of modes in a "k"-mer spectrum for a species's genome varies, with most species having a unimodal distribution. However, all mammals have a multimodal distribution. The number of modes within a "k"-mer spectrum can vary between regions of genomes as well: humans have unimodal "k"-mer spectra in 5' UTRs and exons but multimodal spectra in 3' UTRs and introns.
Forces affecting DNA "k"-mer frequency.
The frequency of "k"-mer usage is affected by numerous forces, working at multiple levels, which are often in conflict. It is important to note that "k"-mers for higher values of "k" are affected by the forces affecting lower values of "k" as well. For example, if the 1-mer A does not occur in a sequence, none of the 2-mers containing A (AA, AT, AG, and AC) will occur either, thereby linking the effects of the different forces.
"k" = 1.
When "k" = 1, there are four DNA "k"-mers, "i.e.", A, T, G, and C. At the molecular level, there are three hydrogen bonds between G and C, whereas there are only two between A and T. GC bonds, as a result of the extra hydrogen bond (and stronger stacking interactions), are more thermally stable than AT bonds. Mammals and birds have a higher ratio of Gs and Cs to As and Ts (GC-content), which led to the hypothesis that thermal stability was a driving factor of GC-content variation. However, while promising, this hypothesis did not hold up under scrutiny: analysis among a variety of prokaryotes showed no evidence of GC-content correlating with temperature as the thermal adaptation hypothesis would predict. Indeed, if natural selection were to be the driving force behind GC-content variation, that would require that single nucleotide changes, which are often silent, to alter the fitness of an organism.
Rather, current evidence suggests that GC‐biased gene conversion (gBGC) is a driving factor behind variation in GC content. gBGC is a process that occurs during recombination which replaces As and Ts with Gs and Cs. This process, though distinct from natural selection, can nevertheless exert selective pressure on DNA biased towards GC replacements being fixed in the genome. gBGC can therefore be seen as an "impostor" of natural selection. As would be expected, GC content is greater at sites experiencing greater recombination. Furthermore, organisms with higher rates of recombination exhibit higher GC content, in keeping with the gBGC hypothesis's predicted effects. Interestingly, gBGC does not appear to be limited to eukaryotes. Asexual organisms such as bacteria and archaea also experience recombination by means of gene conversion, a process of homologous sequence replacement resulting in multiple identical sequences throughout the genome. That recombination is able to drive up GC content in all domains of life suggests that gBGC is universally conserved. Whether gBGC is a (mostly) neutral byproduct of the molecular machinery of life or is itself under selection remains to be determined. The exact mechanism and evolutionary advantage or disadvantage of gBGC is currently unknown.
"k" = 2.
Despite the comparatively large body of literature discussing GC-content biases, relatively little has been written about dinucleotide biases. What is known is that these dinucleotide biases are relatively constant throughout the genome, unlike GC-content, which, as seen above, can vary considerably. This is an important insight that must not be overlooked. If dinucleotide bias were subject to pressures resulting from translation, then there would be differing patterns of dinucleotide bias in coding and noncoding regions, driven by some dinucleotides' reduced translational efficiency. Because no such differences are observed, it can be inferred that the forces modulating dinucleotide bias are independent of translation. Further evidence against translational pressures affecting dinucleotide bias is the fact that the dinucleotide biases of viruses, which rely heavily on translational efficiency, are shaped by their viral family more than by their hosts, whose translational machinery the viruses hijack.
Counter to gBGC's increasing GC-content is CG suppression, which reduces the frequency of CG 2-mers due to deamination of methylated CG dinucleotides, resulting in substitutions of CGs with TGs, thereby reducing the GC-content. This interaction highlights the interrelationship between the forces affecting "k"-mers for varying values of "k."
One interesting fact about dinucleotide bias is that it can serve as a "distance" measurement between phylogenetically similar genomes. The genomes of pairs of organisms that are closely related share more similar dinucleotide biases than between pairs of more distantly related organisms.
"k" = 3.
There are twenty natural amino acids that are used to build the proteins that DNA encodes. However, there are only four nucleotides. Therefore, there cannot be a one-to-one correspondence between nucleotides and amino acids. Similarly, there are 16 2-mers, which is also not enough to unambiguously represent every amino acid. However, there are 64 distinct 3-mers in DNA, which is enough to uniquely represent each amino acid. These non-overlapping 3-mers are called codons. While each codon only maps to one amino acid, each amino acid can be represented by multiple codons. Thus, the same amino acid sequence can have multiple DNA representations. Interestingly, the codons for a given amino acid are not used in equal proportions. This is called codon-usage bias (CUB). When "k" = 3, a distinction must be made between true 3-mer frequency and CUB. For example, the sequence ATGGCA has four 3-mer words within it (ATG, TGG, GGC, and GCA) while only containing two codons (ATG and GCA). However, CUB is a major driving factor of 3-mer usage bias (accounting for up to ⅓ of it, since ⅓ of the "k"-mers in a coding region are codons) and will be the main focus of this section.
The exact cause of variation between the frequencies of various codons is not fully understood. It is known that codon preference is correlated with tRNA abundances, with codons matching more abundant tRNAs being correspondingly more frequent and that more highly expressed proteins exhibit greater CUB. This suggests that selection for translational efficiency or accuracy is the driving force behind CUB variation.
"k" = 4.
Similar to the effect seen in dinucleotide bias, the tetranucleotide biases of phylogenetically similar organisms are more similar than between less closely related organisms. The exact cause of variation in tetranucleotide bias is not well understood, but it has been hypothesized to be the result of the maintenance of genetic stability at the molecular level.
Applications.
The frequency of a set of "k"-mers in a species's genome, in a genomic region, or in a class of sequences can be used as a "signature" of the underlying sequence. Comparing these frequencies is computationally easier than sequence alignment and is an important method in alignment-free sequence analysis. It can also be used as a first stage analysis before an alignment.
Sequence assembly.
In sequence assembly, "k"-mers are used during the construction of De Bruijn graphs. In order to create a De Bruijn Graph, the "k"-mers stored in each edge with length formula_5 must overlap another string in another edge by formula_6 in order to create a vertex. Reads generated from next-generation sequencing will typically have different read lengths being generated. For example, reads by Illumina's sequencing technology capture reads of 100-mers. However, the problem with the sequencing is that only small fractions out of all the possible 100-mers that are present in the genome are actually generated. This is due to read errors, but more importantly, just simple coverage holes that occur during sequencing. The problem is that these small fractions of the possible "k"-mers violate the key assumption of De Bruijn graphs that all the "k"-mer reads must overlap its adjoining "k"-mer in the genome by formula_7 (which cannot occur when all the possible "k"-mers are not present).
The solution to this problem is to break these "k"-mer sized reads into smaller "k"-mers, such that the resulting smaller "k"-mers will represent all the possible "k"-mers of that smaller size that are present in the genome. Furthermore, splitting the "k"-mers into smaller sizes also helps alleviate the problem of different initial read lengths. In this example, the five reads do not account for all the possible 7-mers of the genome, and as such, a De Bruijn graph cannot be created. But, when they are split into 4-mers, the resultant subsequences are enough to reconstruct the genome using a De Bruijn graph.
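A minimal Python sketch of this splitting-and-linking step (the function name is illustrative; real assemblers must also handle read errors, reverse complements and coverage, none of which is modelled here): each "k"-mer taken from a read contributes an edge from its ("k"−1)-prefix to its ("k"−1)-suffix.
from collections import defaultdict

def de_bruijn_edges(reads, k):
    """Split reads into k-mers and link each k-mer's (k-1)-prefix to its (k-1)-suffix."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

reads = ["AGATTCT", "GATTCTA", "TTCTAGG"]
print(dict(de_bruijn_edges(reads, 4)))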
Beyond being used directly for sequence assembly, "k"-mers can also be used to detect genome mis-assembly by identifying "k"-mers that are overrepresented which suggest the presence of repeated DNA sequences that have been combined. In addition, "k"-mers are also used to detect bacterial contamination during eukaryotic genome assembly, an approach borrowed from the field of metagenomics.
Choice of "k"-mer size.
The choice of the "k"-mer size has many different effects on the sequence assembly. These effects vary greatly between lower sized and larger sized "k"-mers. Therefore, an understanding of the different "k"-mer sizes must be achieved in order to choose a suitable size that balances the effects. The effects of the sizes are outlined below.
Genetics and Genomics.
With respect to disease, dinucleotide bias has been applied to the detection of genetic islands associated with pathogenicity. Prior work has also shown that tetranucleotide biases are able to effectively detect horizontal gene transfer in both prokaryotes and eukaryotes.
Another application of "k"-mers is in genomics-based taxonomy. For example, GC-content has been used to distinguish between species of "Erwinia" with moderate success. Similar to the direct use of GC-content for taxonomic purposes is the use of Tm, the melting temperature of DNA. Because GC bonds are more thermally stable, sequences with higher GC content exhibit a higher Tm. In 1987, the Ad Hoc Committee on Reconciliation of Approaches to Bacterial Systematics proposed the use of ΔTm as a factor in determining species boundaries as part of the phylogenetic species concept, though this proposal does not appear to have gained traction within the scientific community.
Other applications within genetics and genomics include:
Metagenomics.
"k"-mer frequency and spectrum variation is heavily used in metagenomics for both analysis and binning. In binning, the challenge is to separate sequencing reads into "bins" of reads for each organism (or operational taxonomic unit), which will then be assembled. TETRA is a notable tool that takes metagenomic samples and bins them into organisms based on their tetranucleotide ("k" = 4) frequencies. Other tools that similarly rely on "k"-mer frequency for metagenomic binning are CompostBin ("k" = 6), PCAHIER, PhyloPythia (5 ≤ "k" ≤ 6), CLARK ("k" ≥ 20), and TACOA (2 ≤ "k" ≤ 6). Recent developments have also applied deep learning to metagenomic binning using "k"-mers.
Other applications within metagenomics include:
Biotechnology.
Modifying "k"-mer frequencies in DNA sequences has been used extensively in biotechnological applications to control translational efficiency. Specifically, it has been used to both up- and down-regulate protein production rates.
With respect to increasing protein production, reducing unfavorable dinucleotide frequency has been used to yield higher rates of protein synthesis. In addition, codon usage bias has been modified to create synonymous sequences with greater protein expression rates. Similarly, codon pair optimization, a combination of dinucleotide and codon optimization, has also been successfully used to increase expression.
The most studied application of "k"-mers for decreasing translational efficiency is codon-pair manipulation for attenuating viruses in order to create vaccines. Researchers were able to recode dengue virus, the virus that causes dengue fever, such that its codon-pair bias was more different from mammalian codon-usage preference than the wild type. Though containing an identical amino-acid sequence, the recoded virus demonstrated significantly weakened pathogenicity while eliciting a strong immune response. This approach has also been used effectively to create an influenza vaccine as well as a vaccine for Marek's disease herpesvirus (MDV). Notably, the codon-pair bias manipulation employed to attenuate MDV did not effectively reduce the oncogenicity of the virus, highlighting a potential weakness in the biotechnology applications of this approach. To date, no codon-pair deoptimized vaccine has been approved for use.
Two later articles help explain the actual mechanism underlying codon-pair deoptimization: codon-pair bias is the result of dinucleotide bias. By studying viruses and their hosts, both sets of authors were able to conclude that the molecular mechanism that results in the attenuation of viruses is an increase in dinucleotides poorly suited for translation.
GC-content, due to its effect on DNA melting point, is used to predict annealing temperature in PCR, another important biotechnology tool.
Implementation.
Pseudocode.
Determining the possible "k"-mers of a read can be done by simply cycling over the string length by one and taking out each substring of length formula_0. The pseudocode to achieve this is as follows:
procedure k-mers(string seq, integer k) is
    L ← length(seq)
    arr ← new array of L − k + 1 empty strings
    // iterate over the number of k-mers in seq,
    // storing the nth k-mer in the output array
    for n ← 0 to L − k + 1 exclusive do
        arr[n] ← subsequence of seq from letter n inclusive to letter n + k exclusive
    return arr
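An equivalent sketch in Python, as a direct transcription of the pseudocode above:
def kmers(seq, k):
    """Return all overlapping substrings of length k, in order."""
    return [seq[n:n + k] for n in range(len(seq) - k + 1)]

print(kmers("AGAT", 2))  # ['AG', 'GA', 'AT'], matching the example in the introduction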
In bioinformatics pipelines.
Because the number of "k"-mers grows exponentially for values of "k", counting "k"-mers for large values of "k" (usually >10) is a computationally difficult task. While simple implementations such as the above pseudocode work for small values of "k", they need to be adapted for high-throughput applications or when "k" is large. To solve this problem, various tools have been developed:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "L"
},
{
"math_id": 2,
"text": "L - k + 1"
},
{
"math_id": 3,
"text": "n^{k}"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": " L"
},
{
"math_id": 6,
"text": "L-1"
},
{
"math_id": 7,
"text": "k-1"
}
] | https://en.wikipedia.org/wiki?curid=10573305 |
10573340 | Supnick matrix | A Supnick matrix or Supnick array – named after Fred Supnick of the City College of New York, who introduced the notion in 1957 – is a Monge array which is also a symmetric matrix.
Mathematical definition.
A Supnick matrix is a square Monge array that is symmetric around the main diagonal.
An "n"-by-"n" matrix is a Supnick matrix if, for all "i", "j", "k", "l" such that if
formula_0 and formula_1
then
formula_2
and also
formula_3
A logically equivalent definition is given by Rudolf & Woeginger who in 1995 proved that
"A matrix is a Supnick matrix iff it can be written as the sum of a sum matrix "S" and a non-negative linear combination of LL-UR block matrices."
The "sum matrix" is defined in terms of a sequence of "n" real numbers {α"i"}:
formula_4
and an "LL-UR block matrix" consists of two symmetrically placed rectangles in the lower-left and upper right corners for which "aij" = 1, with all the rest of the matrix elements equal to zero.
Properties.
Adding two Supnick matrices together will result in a new Supnick matrix (Deineko and Woeginger 2006).
Multiplying a Supnick matrix by a non-negative real number produces a new Supnick matrix (Deineko and Woeginger 2006).
If the distance matrix in a traveling salesman problem can be written as a Supnick matrix, that particular instance of the problem admits an easy solution (even though the problem is, in general, NP hard). | [
{
"math_id": 0,
"text": "1\\le i < k\\le n"
},
{
"math_id": 1,
"text": "1\\le j < l\\le n"
},
{
"math_id": 2,
"text": "a_{ij} + a_{kl} \\le a_{il} + a_{kj}\\,"
},
{
"math_id": 3,
"text": "a_{ij} = a_{ji}. \\,"
},
{
"math_id": 4,
"text": "\nS = [s_{ij}] = [\\alpha_i + \\alpha_j]; \\,\n"
}
] | https://en.wikipedia.org/wiki?curid=10573340 |
10574794 | Deligne–Lusztig theory | Technique in mathematical group theory
In mathematics, Deligne–Lusztig theory is a way of constructing linear representations of finite groups of Lie type using ℓ-adic cohomology with compact support, introduced by Pierre Deligne and George Lusztig (1976).
Lusztig later used these representations to find all representations of all finite simple groups of Lie type.
Motivation.
Suppose that "G" is a reductive group defined over a finite field, with Frobenius map "F".
Ian G. Macdonald conjectured that there should be a map from "general position" characters of "F"-stable maximal tori to irreducible representations of formula_0 (the fixed points of "F"). For general linear groups this was already known by the work of J. A. Green (1955). This was the main result proved by Pierre Deligne and George Lusztig; they found a virtual representation for all characters of an "F"-stable maximal torus, which is irreducible (up to sign) when the character is in general position.
When the maximal torus is split, these representations were well known and are given by parabolic induction of characters of the torus (extend the character to a Borel subgroup, then induce it up to "G"). The representations of parabolic induction can be constructed using functions on a space, which can be thought of as elements of a suitable zeroth cohomology group. Deligne and Lusztig's construction is a generalization of parabolic induction to non-split tori using higher cohomology groups. (Parabolic induction can also be done with tori of "G" replaced by Levi subgroups of "G", and there is a generalization of Deligne–Lusztig theory to this case too.)
Vladimir Drinfeld proved that the discrete series representations of SL2(F"q") can be found in the ℓ-adic cohomology groups
formula_1
of the affine curve "X" defined by
formula_2.
The polynomial formula_3 is a determinant used in the construction of the Dickson invariant of the general linear group, and is an invariant of the special linear group.
The construction of Deligne and Lusztig is a generalization of this fundamental example to other groups. The affine curve "X" is generalized to a formula_4 bundle over a "Deligne–Lusztig variety" where "T" is a maximal torus of "G", and instead of using just the first cohomology group they use an alternating sum of ℓ-adic cohomology groups with compact support to construct virtual representations.
The Deligne-Lusztig construction is formally similar to Hermann Weyl's construction of the representations of a compact group from the characters of a maximal torus. The case of compact groups is easier partly because there is only one conjugacy class of maximal tori. The Borel–Weil–Bott construction of representations of algebraic groups using coherent sheaf cohomology is also similar.
For real semisimple groups there is an analogue of the construction of Deligne and Lusztig, using Zuckerman functors to construct representations.
Deligne–Lusztig varieties.
The construction of Deligne-Lusztig characters uses a family of auxiliary algebraic varieties "X""T" called Deligne–Lusztig varieties, constructed from a reductive linear algebraic group "G" defined over a finite field F"q".
If "B" is a Borel subgroup of "G" and "T" a maximal torus of "B" then we write
"W""T","B"
for the Weyl group (normalizer mod centralizer)
"N""G"("T")/"T"
of "G" with respect to "T", together with the simple roots corresponding to "B". If "B"1 is another Borel subgroup with maximal torus "T"1 then there is a canonical isomorphism from "T" to "T"1 that identifies the two Weyl groups. So we can identify all these Weyl groups, and call it 'the' Weyl group "W" of "G". Similarly there is a canonical isomorphism between any two maximal tori with given choice of positive roots, so we can identify all these and call it 'the' maximal torus "T" of "G".
By the Bruhat decomposition
"G" = "BWB",
the subgroup "B"1 can be written as the conjugate of "B" by "bw" for some "b"∈"B" and "w"∈"W" (identified with "W""T","B") where "w" is uniquely determined. In this case we say that "B" and "B"1 are in relative position "w".
Suppose that "w" is in the Weyl group of "G", and write "X" for the smooth projective variety of all Borel subgroups of "G".
The Deligne-Lusztig variety "X"("w") consists of all Borel subgroups "B" of "G" such that "B" and "F"("B") are in relative position "w" [recall that "F" is the Frobenius map]. In other words, it is the inverse image of the "G"-homogeneous space of pairs of Borel subgroups in relative position "w", under the Lang isogeny with formula
"g"."F"("g")−1.
For example, if "w"=1 then "X"("w") is 0-dimensional and its points are the rational Borel subgroups of "G".
We let "T"("w") be the torus "T", with the rational structure for which the Frobenius is "wF".
The "G""F" conjugacy classes of "F"-stable maximal tori of "G" can be identified with the "F"-conjugacy classes of "W", where we say "w"∈"W" is "F"-conjugate to elements of the form "vwF"("v")−1 for "v"∈"W". If the group "G" is split, so that "F" acts trivially on "W", this is the same as ordinary conjugacy, but in general for non-split groups "G", "F" may act on "W" via a non-trivial diagram automorphism. The "F"-stable conjugacy classes can be identified with elements of the non-abelian Galois cohomology group of torsors
formula_5.
Fix a maximal torus "T" of "G" and a Borel subgroup "B" containing it, both invariant under the Frobenius map "F", and write "U" for the unipotent radical of "B".
If we choose a representative "w"′ of the normalizer "N"("T") representing "w", then we define "X"′("w"′) to be the elements of "G"/"U" with "F"("u")="uw"′.
This is acted on freely by "T"("w")"F", and the quotient is isomorphic to "X"("w"). So
for each character θ of "T"("w")"F" we get a corresponding local system "F"θ on "X"("w"). The
Deligne-Lusztig virtual representation
"R"θ("w")
of "G""F" is defined by the alternating sum
formula_6
of "l"-adic compactly supported cohomology groups of "X"("w") with coefficients in the "l"-adic local system "F"θ.
If "T" is a maximal "F"-invariant torus of "G" contained in a Borel subgroup "B" such that
"B" and "FB" are in relative position "w" then "R"θ("w") is also
denoted by "R"θ"T"⊂"B", or by "R"θ"T" as up to isomorphism it does not depend on the choice of "B".
formula_7
where "U""F" is a Sylow "p"-subgroup of "G""F", of order the largest power of "p" dividing |"G""F"|.
formula_8
where "x"="su" is the Jordan–Chevalley decomposition of "x" as the product of commuting semisimple and unipotent elements "s" and "u", and "G""s" is the identity component of the centralizer of "s" in "G". In particular the character value vanishes unless the semisimple part of "x" is conjugate under "G""F" to something in the torus "T".
Lusztig's classification of irreducible characters.
Lusztig classified all the irreducible characters of "G""F" by decomposing such a character into a semisimple character and a unipotent character (of another group) and separately classifying the semisimple and unipotent characters.
The dual group.
The representations of "G""F" are classified using conjugacy classes of the dual group of "G".
A reductive group over a finite field determines a root datum (with choice of Weyl chamber) together with an action of the Frobenius element on it.
The dual group "G"* of a reductive algebraic group "G" defined over a finite field is the one with dual root datum (and adjoint Frobenius action).
This is similar to the Langlands dual group (or L-group), except here the dual group is defined over a finite field rather than over the complex numbers. The dual group has the same root system, except that root systems of type B and C get exchanged.
The local Langlands conjectures state (very roughly) that representations of an algebraic group over a local field should be closely related to conjugacy classes in the Langlands dual group. Lusztig's classification of representations of reductive groups over finite fields can be thought of as a verification of an analogue of this conjecture for finite fields (though Langlands never stated his conjecture for this case).
Jordan decomposition.
In this section "G" will be a reductive group with connected center.
An irreducible character is called unipotent if it occurs in some "R"1"T", and is called semisimple if its average value on regular unipotent elements is non-zero (in which case the average value is 1 or −1). If "p" is a good prime for "G" (meaning that it does not divide any of the coefficients of roots expressed as linear combinations of simple roots) then an irreducible character is semisimple if and only if its order is not divisible by "p".
An arbitrary irreducible character has a "Jordan decomposition": to it one can associate a semisimple character (corresponding to some semisimple element "s" of the dual group), and a unipotent representation of the centralizer of "s". The dimension of the irreducible character is the product of the dimensions of its semisimple and unipotent components.
This (more or less) reduces the classification of irreducible characters to the problem of finding the semisimple and the unipotent characters.
Geometric conjugacy.
Two pairs ("T",θ), ("T"′,θ′) of a maximal torus "T" of "G" fixed by "F" and a character θ of "T""F" are called geometrically conjugate if they are conjugate under some element of "G"("k"), where "k" is the algebraic closure of F"q". If an irreducible representation occurs in both "R""T"θ and "R""T"′θ′ then ("T",θ), ("T"′,θ′) need not be conjugate under "G""F", but are always geometrically conjugate. For example, if θ = θ′ = 1 and "T" and "T"′ are not conjugate, then the identity representation occurs in both Deligne–Lusztig characters, and the corresponding pairs ("T",1), ("T"′,1) are geometrically conjugate but not conjugate.
The geometric conjugacy classes of pairs ("T",θ) are parameterized by geometric conjugacy classes of semisimple elements "s" of the group "G"*"F" of elements of the dual group "G"* fixed by "F". Two elements of "G"*"F" are called geometrically conjugate if they are conjugate over the algebraic closure of the finite field; if the center of "G" is connected this is equivalent to conjugacy in "G"*"F". The number of geometric conjugacy classes of pairs ("T",θ) is |"Z"0"F"|"q""l" where "Z"0 is the identity component of the center "Z" of "G" and "l" is the semisimple rank of "G".
Classification of semisimple characters.
In this subsection "G" will be a reductive group with connected center "Z". (The case when the center is not connected has some extra complications.)
The semisimple characters of "G" correspond to geometric conjugacy classes of pairs ("T",θ) (where "T" is a maximal torus invariant under "F" and θ is a character of "T""F"); in fact among the irreducible characters occurring in the Deligne–Lusztig characters of a geometric conjugacy class there is exactly one semisimple character. If
the center of "G" is connected there are |"Z""F"|"q""l" semisimple characters. If κ is a geometric conjugacy class of pairs ("T",θ) then the character of the corresponding semisimple representation is given up to sign by
formula_9
and its dimension is the "p"′ part of the index of the centralizer of the element "s" of the dual group corresponding to it.
The semisimple characters are (up to sign) exactly the duals of the regular characters, under Alvis–Curtis duality, a duality operation on generalized characters.
An irreducible character is called regular if it occurs in the Gelfand–Graev representation
"G""F", which is the representation induced from a certain "non-degenerate" 1-dimensional character of the Sylow "p"-subgroup. It is reducible, and any irreducible character of "G""F" occurs at most once in it. If κ is a geometric conjugacy class of pairs ("T",θ) then the character of the corresponding regular representation is given by
formula_10
and its dimension is the "p"′ part of the index of the centralizer of the element "s" of the dual group corresponding to it times the "p"-part of the order of the centralizer.
Classification of unipotent characters.
These can be found from the cuspidal unipotent characters: those that cannot be obtained from decomposition of parabolically induced characters of smaller rank groups. The unipotent cuspidal characters were listed by Lusztig using rather complicated arguments. The number of them depends only on the type of the group and not on the underlying field; and is given as follows:
The unipotent characters can be found by decomposing the characters induced from the cuspidal ones, using results of Howlett and Lehrer. The number of unipotent characters depends only on the root system of the group and not on the field (or the center). The dimension of the unipotent characters can be given by universal polynomials in the order of the ground field depending only on the root system; for example the Steinberg representation has dimension "q""r", where "r" is the number of positive roots of the root system.
Lusztig discovered that the unipotent characters of a group "G""F" (with irreducible Weyl group) fall into families of size 4"n" ("n" ≥ 0), 8, 21, or 39. The characters of each family are indexed by conjugacy classes of pairs ("x",σ) where "x" is in one of the groups Z/2Z"n", "S"3, "S"4, "S"5 respectively, and σ is a representation of its centralizer. (The family of size 39 only occurs for groups of type "E"8, and the family of size 21 only occurs for groups of type "F"4.) The families are in turn indexed by special representations of the Weyl group, or equivalently by 2-sided cells of the Weyl group.
For example, the group "E"8(F"q") has 46 families of unipotent characters corresponding to the 46 special representations of the Weyl group of "E"8. There are 23 families with 1 character, 18 families with 4 characters, 4 families with 8 characters, and one family with 39 characters (which includes the 13 cuspidal unipotent characters).
Examples.
Suppose that "q" is an odd prime power, and "G" is the algebraic group "SL"2.
We describe the Deligne–Lusztig representations of the group "SL"2(F"q"). (The representation theory of these groups was well known long before Deligne–Lusztig theory.)
The irreducible representations are:
There are two classes of tori associated to the two elements (or conjugacy classes) of the
Weyl group, denoted by "T"(1) (cyclic of order "q"−1) and "T"("w") (cyclic of order "q" + 1). The non-trivial element of the Weyl group acts on the characters of these tori by changing each character to its inverse. So the Weyl group fixes a character if and only if it has order 1 or 2. By the orthogonality formula,
"R"θ("w") is (up to sign) irreducible if θ does not have order 1 or 2, and a sum of two irreducible representations if it has order 1 or 2.
The Deligne-Lusztig variety "X"(1) for the split torus is 0-dimensional with "q"+1 points, and can be identified with the points of 1-dimensional projective space defined over F"q".
The representations "R"θ(1) are given as follows:
The Deligne-Lusztig variety "X"("w") for the non-split torus is 1-dimensional, and can be identified with the complement of "X"(1) in 1-dimensional projective space. So it is the set of points ("x":"y") of projective space not fixed by the Frobenius map ("x":"y")→ ("x""q":"y""q"), in other words the points with
formula_11
Drinfeld's variety of points ("x","y") of affine space with
formula_12
maps to "X"("w") in the obvious way, and is acted on freely by the group of "q"+1th roots
λ of 1 (which can be identified with the elements of the non-split torus that are defined over F"q"), with λ taking ("x","y") to (λ"x",λ"y"). The Deligne Lusztig variety is the quotient of Drinfeld's variety by this group action.
The representations −"R"θ("w") are given as follows:
The unipotent representations are the trivial representation and the Steinberg representation, and the semisimple representations are all the representations other than the Steinberg representation. (In this case the semisimple representations do not correspond exactly to geometric conjugacy classes of the dual group, as the center of "G" is not connected.)
Intersection cohomology and character sheaves.
Lusztig replaced the ℓ-adic cohomology used to define the Deligne-Lusztig representations with intersection ℓ-adic cohomology, and introduced certain perverse sheaves called character sheaves. The representations defined using intersection cohomology are related to those defined using ordinary cohomology by Kazhdan–Lusztig polynomials. The "F"-invariant irreducible character sheaves are closely related to the irreducible characters of the group "G""F".
{
"math_id": 0,
"text": "G^F"
},
{
"math_id": 1,
"text": "H^1_c(X, \\Q_{\\ell})"
},
{
"math_id": 2,
"text": "xy^q-yx^q = 1"
},
{
"math_id": 3,
"text": "xy^q-yx^q"
},
{
"math_id": 4,
"text": "T^F"
},
{
"math_id": 5,
"text": "H^1(F,W)"
},
{
"math_id": 6,
"text": "R^\\theta(w) = \\sum_i(-1)^iH_c^i(X(w),F_\\theta)"
},
{
"math_id": 7,
"text": "\\dim(R^\\theta_{T})= {\\pm|G^F| \\over |T^F||U^F|}"
},
{
"math_id": 8,
"text": "R^\\theta_T(x) = {1\\over |G_s^F|}\\sum_{g\\in G^F, g^{-1}sg\\in T} Q_{gTg^{-1},G_s}(u)\\theta(g^{-1}sg)"
},
{
"math_id": 9,
"text": "\\sum_{(T,\\theta)\\in \\kappa, \\bmod G^F} {R_T^\\theta\\over (R_T^\\theta,R_T^\\theta)}"
},
{
"math_id": 10,
"text": "\\sum_{(T,\\theta)\\in \\kappa, \\bmod G^F} {\\epsilon_G\\epsilon_TR_T^\\theta\\over (R_T^\\theta,R_T^\\theta)}"
},
{
"math_id": 11,
"text": "xy^q-yx^q\\ne 0"
},
{
"math_id": 12,
"text": "xy^q-yx^q=1"
}
] | https://en.wikipedia.org/wiki?curid=10574794 |
10575463 | 65,536 | Natural number
65536 is the natural number following 65535 and preceding 65537.
65536 is a power of two: formula_0 (2 to the 16th power).
65536 is the smallest number with exactly 17 divisors (but there are smaller numbers with more than 17 divisors; e.g., 180 has 18 divisors) (sequence in the OEIS).
In mathematics.
65536 is formula_1, so in tetration notation 65536 is ⁴2.
When expressed using Knuth's up-arrow notation, 65536 is
formula_2,
which is equal to
formula_3,
which is equivalent to
formula_4
or
formula_5.
As formula_6 is also equal to 4, or formula_7,
formula_8 can thus be written as formula_9,
or formula_10, or as the pentation, formula_11 (hyperoperation notation).
65536 is a superperfect number – a number such that σ(σ("n")) = 2"n".
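This property is easy to verify directly; a small Python sketch (the helper name sigma is illustrative, and the naive divisor sum is fast enough at this size):
def sigma(n):
    """Sum of the positive divisors of n."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

n = 65536
print(sigma(sigma(n)) == 2 * n)  # True: sigma(65536) = 131071 and sigma(131071) = 131072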
A 16-bit number can distinguish 65536 different possibilities. For example, unsigned binary notation exhausts all possible 16-bit codes in uniquely identifying the numbers 0 to 65535. In this scheme, 65536 is the least natural number that can "not" be represented with 16 bits. Conversely, it is the "first" or smallest positive integer that requires 17 bits.
65536 is the only power of 2 less than 2^31000 that does not contain the digits 1, 2, 4, or 8 in its decimal representation.
The sum of the unitary divisors of 65536 is prime (1 + 65536 = 65537, which is prime).
65536 is an untouchable number.
In computing.
65,536 (2^16) is the number of different values representable in a number of 16 binary digits (or "bits"), also known as an unsigned short integer in many computer programming systems.
This number is a limit in many common hardware and software implementations, some examples of which are:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2^{16}"
},
{
"math_id": 1,
"text": "2^{2^{2^2}}"
},
{
"math_id": 2,
"text": "\n2 \\uparrow 16\n"
},
{
"math_id": 3,
"text": "\n2 \\uparrow 2 \\uparrow 2 \\uparrow 2\n"
},
{
"math_id": 4,
"text": "\n2 \\uparrow\\uparrow 4\n"
},
{
"math_id": 5,
"text": "\n2 \\uparrow\\uparrow\\uparrow 3\n"
},
{
"math_id": 6,
"text": "^{2}2"
},
{
"math_id": 7,
"text": "2 \\uparrow \\uparrow 2 = 4"
},
{
"math_id": 8,
"text": "^{4}2"
},
{
"math_id": 9,
"text": "^{^{2}2}2"
},
{
"math_id": 10,
"text": "2 \\uparrow \\uparrow (2 \\uparrow \\uparrow 2) "
},
{
"math_id": 11,
"text": "2[5]3"
}
] | https://en.wikipedia.org/wiki?curid=10575463 |
10575678 | Maximum subarray problem | Problem in computer science
In computer science, the maximum sum subarray problem, also known as the maximum segment sum problem, is the task of finding a contiguous subarray with the largest sum, within a given one-dimensional array A[1...n] of numbers. It can be solved in formula_0 time and formula_1 space.
Formally, the task is to find indices formula_2 and formula_3 with formula_4, such that the sum
formula_5
is as large as possible. (Some formulations of the problem also allow the empty subarray to be considered; by convention, the sum of all values of the empty subarray is zero.) Each number in the input array A could be positive, negative, or zero.
For example, for the array of values [−2, 1, −3, 4, −1, 2, 1, −5, 4], the contiguous subarray with the largest sum is [4, −1, 2, 1], with sum 6.
Some properties of this problem are:
Although this problem can be solved using several different algorithmic techniques, including brute force, divide and conquer, dynamic programming, and reduction to shortest paths, a simple single-pass algorithm known as Kadane's algorithm solves it efficiently.
History.
The maximum subarray problem was proposed by Ulf Grenander in 1977 as a simplified model for maximum likelihood estimation of patterns in digitized images.
Grenander was looking to find a rectangular subarray with maximum sum, in a two-dimensional array of real numbers. A brute-force algorithm for the two-dimensional problem runs in "O"("n"^6) time; because this was prohibitively slow, Grenander proposed the one-dimensional problem to gain insight into its structure. Grenander derived an algorithm that solves the one-dimensional problem in "O"("n"^2) time,
improving the brute force running time of "O"("n"^3). When Michael Shamos heard about the problem, he overnight devised an "O"("n" log "n") divide-and-conquer algorithm for it.
Soon after, Shamos described the one-dimensional problem and its history at a Carnegie Mellon University seminar attended by Jay Kadane, who designed within a minute an "O"("n")-time algorithm, which is as fast as possible. In 1982, David Gries obtained the same "O"("n")-time algorithm by applying Dijkstra's "standard strategy"; in 1989, Richard Bird derived it by purely algebraic manipulation of the brute-force algorithm using the Bird–Meertens formalism.
Grenander's two-dimensional generalization can be solved in O("n"^3) time either by using Kadane's algorithm as a subroutine, or through a divide-and-conquer approach. Slightly faster algorithms based on distance matrix multiplication have been proposed. There is some evidence that no significantly faster algorithm exists; an algorithm that solves the two-dimensional maximum subarray problem in O("n"^(3−ε)) time, for any ε>0, would imply a similarly fast algorithm for the all-pairs shortest paths problem.
Applications.
Maximum subarray problems arise in many fields, such as genomic sequence analysis and computer vision.
Genomic sequence analysis employs maximum subarray algorithms to identify important biological segments of protein sequences that have unusual properties, by assigning scores to points within the sequence that are positive when a motif to be recognized is present, and negative when it is not, and then seeking the maximum subarray among these scores. These problems include conserved segments, GC-rich regions, tandem repeats, low-complexity filter, DNA binding domains, and regions of high charge.
In computer vision, bitmap images generally consist only of positive values, for which the maximum subarray problem is trivial: the result is always the whole array. However, after subtracting a threshold value (such as the average pixel value) from each pixel, so that above-average pixels will be positive and below-average pixels will be negative, the maximum subarray problem can be applied to the modified image to detect bright areas within it.
Kadane's algorithm.
No empty subarrays admitted.
Kadane's algorithm scans the given array formula_6 from left to right.
In the formula_3th step, it computes the subarray with the largest sum ending at formula_3; this sum is maintained in variable current_sum.
Moreover, it computes the subarray with the largest sum anywhere in formula_7, maintained in variable best_sum,
and easily obtained as the maximum of all values of current_sum seen so far, cf. line 7 of the algorithm.
As a loop invariant, in the formula_3th step, the old value of current_sum holds the maximum over all formula_8 of the sum formula_9.
Therefore, current_sum formula_10
is the maximum over all formula_8 of the sum formula_11. To extend the latter maximum to cover also the case formula_12, it is sufficient to consider also the singleton subarray formula_13. This is done in line 6 by assigning formula_14 current_sum formula_15 as the new value of current_sum, which after that holds the maximum over all formula_16 of the sum formula_11.
Thus, the problem can be solved with the following code, expressed in Python.
def max_subarray(numbers):
    """Find the largest sum of any contiguous subarray."""
    best_sum = float('-inf')
    current_sum = 0
    for x in numbers:
        current_sum = max(x, current_sum + x)
        best_sum = max(best_sum, current_sum)
    return best_sum
If the input contains no positive element, the returned value is that of the largest element (i.e., the value closest to 0), or negative infinity if the input was empty. For correctness, an exception should be raised when the input array is empty, since an empty array has no maximum nonempty subarray. If the array is nonempty, its first element could be used in place of negative infinity, if needed to avoid mixing numeric and non-numeric values.
The algorithm can be adapted to the case which allows empty subarrays or to keep track of the starting and ending indices of the maximum subarray.
This algorithm calculates the maximum subarray ending at each position from the maximum subarray ending at the previous position, so it can be viewed as a trivial case of dynamic programming.
Empty subarrays admitted.
Kadane's original algorithm solves the problem variant when empty subarrays are admitted.
This variant will return 0 if the input contains no positive elements (including when the input is empty).
It is obtained by two changes in code: in line 3, best_sum should be initialized to 0 to account for the empty subarray formula_17
best_sum = 0;
and in line 6, the update of current_sum inside the for loop should be changed to
current_sum = max(0, current_sum + x)
As a loop invariant, in the formula_3th step, the old value of current_sum holds the maximum over all formula_18 of the sum formula_9.
Therefore, current_sum formula_10
is the maximum over all formula_18 of the sum formula_11. To extend the latter maximum to cover also the case formula_19, it is sufficient to consider also the empty subarray formula_20. This is done in line 6 by assigning formula_21 current_sum formula_15 as the new value of current_sum, which after that holds the maximum over all formula_22 of the sum formula_11. Machine-verified C / Frama-C code of both variants can be found.
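Putting the two changes together gives the following variant of the function above (the name is chosen here for illustration):
def max_subarray_allow_empty(numbers):
    """Find the largest sum of any contiguous subarray, allowing the empty subarray."""
    best_sum = 0
    current_sum = 0
    for x in numbers:
        current_sum = max(0, current_sum + x)
        best_sum = max(best_sum, current_sum)
    return best_sum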
Computing the best subarray's position.
The algorithm can be modified to keep track of the starting and ending indices of the maximum subarray as well.
Because of the way this algorithm uses optimal substructures (the maximum subarray ending at each position is calculated in a simple way from a related but smaller and overlapping subproblem: the maximum subarray ending at the previous position) this algorithm can be viewed as a simple/trivial example of dynamic programming.
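One way to carry the indices along is sketched below (names are illustrative; the empty-input behaviour matches the first variant above, returning negative infinity). The returned indices satisfy sum(numbers[start:end]) == best_sum.
def max_subarray_with_indices(numbers):
    """Return (best_sum, start, end) with sum(numbers[start:end]) == best_sum."""
    best_sum, best_start, best_end = float('-inf'), 0, 0
    current_sum, current_start = 0, 0
    for i, x in enumerate(numbers):
        if current_sum <= 0:
            current_start = i   # start a new subarray at position i
            current_sum = x
        else:
            current_sum += x
        if current_sum > best_sum:
            best_sum = current_sum
            best_start, best_end = current_start, i + 1
    return best_sum, best_start, best_end

print(max_subarray_with_indices([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # (6, 3, 7)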
Complexity.
The runtime complexity of Kadane's algorithm is formula_0 and its space complexity is formula_1.
Generalizations.
Similar problems may be posed for higher-dimensional arrays, but their solutions are more complicated. It has been shown how to find the "k" largest subarray sums in a one-dimensional array in the optimal time bound formula_23.
The maximum-sum "k"-disjoint subarrays can also be computed in the optimal time bound formula_23.
Notes.
<templatestyles src="Reflist/styles.css" />
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "O(n)"
},
{
"math_id": 1,
"text": "O(1)"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "j"
},
{
"math_id": 4,
"text": "1 \\leq i \\leq j \\leq n "
},
{
"math_id": 5,
"text": "\\sum_{x=i}^j A[x] "
},
{
"math_id": 6,
"text": "A[1\\ldots n]"
},
{
"math_id": 7,
"text": "A[1 \\ldots j]"
},
{
"math_id": 8,
"text": "i \\in \\{ 1,\\ldots, j-1 \\}"
},
{
"math_id": 9,
"text": "A[i]+\\cdots+A[j-1]"
},
{
"math_id": 10,
"text": "+A[j]"
},
{
"math_id": 11,
"text": "A[i]+\\cdots+A[j]"
},
{
"math_id": 12,
"text": "i=j"
},
{
"math_id": 13,
"text": "A[j \\; \\ldots \\; j]"
},
{
"math_id": 14,
"text": "\\max(A[j],"
},
{
"math_id": 15,
"text": "+A[j])"
},
{
"math_id": 16,
"text": "i \\in \\{ 1, \\ldots, j \\}"
},
{
"math_id": 17,
"text": "A[0 \\ldots -1]"
},
{
"math_id": 18,
"text": "i \\in \\{ 1,\\ldots, j \\}"
},
{
"math_id": 19,
"text": "i=j+1"
},
{
"math_id": 20,
"text": "A[j+1 \\; \\ldots \\; j]"
},
{
"math_id": 21,
"text": "\\max(0,"
},
{
"math_id": 22,
"text": "i \\in \\{ 1, \\ldots, j+1 \\}"
},
{
"math_id": 23,
"text": "O(n + k)"
}
] | https://en.wikipedia.org/wiki?curid=10575678 |
10575807 | Expectancy-value theory | Behavioral theory
Expectancy–value theory has been developed in many different fields including education, health, communications, marketing and economics. Although the model differs in its meaning and implications for each field, the general idea is that there are expectations as well as values or beliefs that affect subsequent behavior.
Education model.
History and model overview.
John William Atkinson developed the expectancy–value theory in the 1950s and 1960s in an effort to understand the achievement motivation of individuals. In the 1980s, Jacquelynne Eccles expanded this research into the field of education. According to expectancy–value theory, students' achievement and achievement related choices are most proximally determined by two factors: expectancies for success, and subjective task values. Expectancies refer to how confident an individual is in his or her ability to succeed in a task whereas task values refer to how important, useful, or enjoyable the individual perceives the task. Theoretical and empirical work suggests that expectancies and values interact to predict important outcomes such as engagement, continuing interest, and academic achievement. Other factors, including demographic characteristics, stereotypes, prior experiences, and perceptions of others' beliefs and behaviors affect achievement related outcomes indirectly through these expectancies and values. This model has most widely been applied and used in research in the field of education.
Expectancies.
Expectancies are specific beliefs individuals have regarding their success on certain tasks they will carry out in the short-term future or long-term future. An individual's expectancies are related to their behaviors as well as the choices they make. Expectancies are related to ability-beliefs such as self-concept and self-efficacy. Self-concept is a domain specific concept that involves one's beliefs about their own abilities based on their past experiences in the specific domain. Self-efficacy is the belief that an individual has the ability to successfully engage in a future specific task or series of related tasks
Subjective task values.
According to Eccles and colleagues subjective task value can be thought of as the motivation that allows an individual to answer the question "Do I Want to do This Activity and Why?" Subjective task values can be broken into four subcategories: Attainment Value (Importance for identity or self), Intrinsic Value (Enjoyment or Interest), Utility Value (Usefulness or Relevance), and Cost (loss of time, overly-high effort demands, loss of valued alternatives, or negative psychological experiences such as stress). Traditionally, attainment value and intrinsic value are more highly correlated. What's more, these two constructs tend to be related to intrinsic motivation, interest, and task persistence. Alternatively, utility value has both intrinsic and extrinsic components, and has been related to both intrinsic and extrinsic outcomes such as course performance and interest. Other research shows that utility value has time-dependent characteristics as well. Cost has been relatively neglected in the empirical research; however, the construct has received some attention more recently. Feather combined subjective task values with more universal human values and suggested that the former are just one type of general human motives that help to direct behavior.
Applications.
Developmental trajectories.
Researchers have found that expectancies and values can be distinguished as separate types of motivation as early as 6 years old. Similarly, types of value (e.g., attainment vs. utility) can be distinguished within an academic domain as early as fifth grade. Generally speaking, Eccles and colleagues implicate a wide array of different factors that determine an individual's expectancies and values, including:
Experts agree that student motivation tends to decline throughout their time in school. Longitudinal research has confirmed this general trend of motivational decline and also demonstrated that motivation is domain specific. Researchers have also demonstrated that there are gender differences in motivation. Motivation decline is particularly steep for Math achievement, but less so for reading or sports domains among both boys and girls. Researchers offer two general explanations for these declines in motivation. The first is that students' conceptualizations of different domains become more complex and nuanced—they differentiate between subdomains, which results in an appearance of mean-level decrease. In fact, children as young as 11 years old have demonstrated that they can differentiate between academic domains. The second is that the focus of their environment changes as they age. As students reach higher grades, the focus shifts from learning to achievement. In fact, a large body of research exists showing that shifts from learning to performance as an educational focus can be detrimental to student motivation.
Interventions.
Expectancy–value theory constructs can be and have been applied to intervention programs that strive to change motivational beliefs. These interventions are able to increase expectancy and value or decrease cost. Such interventions not only target motivation, but also ultimately increase general student achievement and help to close traditionally problematic achievement gaps. For example, value-focused interventions have been developed to help teachers design their curriculum in ways that allow students to see the connections between the material they learn in the classroom and their own lives. This intervention is able to boost students' performance and interest, particularly for students who have low initial expectancy. According to the expectancy–value theory, this intervention is effective because it increases students' interest in the material.
Psychology, health, communications, marketing, and economics model.
Expectancy–value theory was originally created in order to explain and predict individuals' attitudes toward objects and actions. Originally the work of psychologist Martin Fishbein, the theory states that attitudes are developed and modified based on assessments about beliefs and values. Primarily, the theory attempts to determine the mental calculations that take place in attitude development. Expectancy–value theory has been used to develop other theories and is still utilized today in numerous fields of study.
History.
Dr. Martin Fishbein is credited with developing the expectancy–value theory (EVT) in the early to mid-1970s. It is sometimes referred to as Fishbein's expectancy–value theory or simply expectancy–value model. The primary work typically cited by scholars referring to EVT is Martin Fishbein and Icek Ajzen's 1975 book called "Belief, Attitude, Intention, and Behavior: An Introduction to Theory and Research". The seed work of EVT can be seen in Fishbein's doctoral dissertation, "A Theoretical and Empirical Investigation of the Interrelation between Belief about an Object and the Attitude toward that Object" (1961, UCLA) and two subsequent articles in 1962 and 1963 in the journal "Human Relations". Fishbein's work drew on the writings of researchers such as Ward Edwards, Milton J. Rosenberg, Edward Tolman, and John B. Watson.
Concepts.
EVT has three basic components. First, individuals respond to novel information about an item or action by developing a belief about the item or action. If a belief already exists, it can and most likely will be modified by new information. Second, individuals assign a value to each attribute that a belief is based on. Third, an expectation is created or modified based on the result of a calculation based on beliefs and values. For example, a student finds out that a professor has a reputation for being humorous. The student assigns a positive value to humor in the classroom, so the student has the expectation that their experience with the professor will be positive. When the student attends class and finds the professor humorous, the student calculates that it is a good class. EVT also states that the result of the calculation, often called the "attitude", stems from complex equations that contain many belief/values pairs. Fishbein and Ajzen (1975) represented the theory with the following equation where attitudes (a) are a factorial function of beliefs (b) and values (v).
Theory of reasoned action:
Formula
In its simplest form, the TRA can be expressed as the following equation:
formula_0
where:
formula_1 = behavioral intention
formula_2 = one's attitude toward performing the behavior
formula_3 = empirically derived weights
formula_4 = one's subjective norm related to performing the behavior
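As a rough illustration of how such a calculation might look in practice, the sketch below evaluates the equation with invented numbers. The attitude score, subjective-norm score, and the empirically derived weights are hypothetical values chosen only for this example, and the attitude itself is sketched as a simple sum of belief × value products, one reading of the belief/value pairing described above.
<syntaxhighlight lang="python">
# Minimal sketch of the TRA calculation BI = (AB)W_1 + (SN)W_2.
# All numeric values below are hypothetical illustration values.

def attitude_from_beliefs(belief_value_pairs):
    """Fishbein-style attitude estimate: sum of belief-strength x value products."""
    return sum(b * v for b, v in belief_value_pairs)

def behavioral_intention(attitude, subjective_norm, w1, w2):
    """Behavioral intention BI = (AB)*W_1 + (SN)*W_2."""
    return attitude * w1 + subjective_norm * w2

# Hypothetical beliefs about attending a professor's class, each paired with a value.
ab = attitude_from_beliefs([(0.8, 3.0), (0.6, 2.0)])   # e.g. "humorous", "well organized"
bi = behavioral_intention(attitude=ab, subjective_norm=2.5, w1=0.6, w2=0.4)
print(round(bi, 2))
</syntaxhighlight>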
Current usage.
In the late 1970s and early 1980s, Fishbein and Ajzen expanded expectancy–value theory into the theory of reasoned action (TRA). Later Ajzen posited the theory of planned behavior (TPB) in his book "Attitudes, Personality, and Behavior" (1988). Both TRA and TPB address predictive and explanatory weaknesses with EVT and are still prominent theories in areas such as health communication research, marketing, and economics. Although not used as much since the early 1980s, EVT is still utilized in research within fields as diverse as audience research (Palmgreen & Rayburn, 1985) advertising (Shoham, Rose, & Kahle 1998; Smith & Vogt, 1995), child development (Watkinson, Dwyer, & Nielsen, 2005), education (Eklof, 2006; Ping, McBride, & Breune, 2006), health communication (Purvis Cooper, Burgoon, & Roter, 2001; Ludman & Curry, 1999), and organization communication (Westaby, 2002). | [
{
"math_id": 0,
"text": "BI {{=}} (AB)W_1 + (SN)W_2"
},
{
"math_id": 1,
"text": "BI"
},
{
"math_id": 2,
"text": "AB"
},
{
"math_id": 3,
"text": "W"
},
{
"math_id": 4,
"text": "SN"
}
] | https://en.wikipedia.org/wiki?curid=10575807 |
10575816 | Tropical cyclone forecasting | Science of forecasting how a tropical cyclone moves and its effects
Tropical cyclone forecasting is the science of forecasting where a tropical cyclone's center, and its effects, are expected to be at some point in the future. There are several elements to tropical cyclone forecasting: track forecasting, intensity forecasting, rainfall forecasting, storm surge, tornado, and seasonal forecasting. While skill is increasing in regard to track forecasting, intensity forecasting skill remains unchanged over the past several years. Seasonal forecasting began in the 1980s in the Atlantic basin and has spread into other basins in the years since.
History.
Short term.
The methods through which tropical cyclones are forecast have changed with the passage of time. The first known forecasts in the Western Hemisphere were made by Lt. Col. William Reed of the Corps of Royal Engineers at Barbados in 1847. Reed mostly utilized barometric pressure measurements as the basis of his forecasts. Benito Vines introduced a forecast and warning system based on cloud cover changes in Havana during the 1870s. Before the early 1900s, though, most forecasts were done by direct observations at weather stations, which were then relayed to forecast centres via telegraph. It wasn't until the advent of radio in the early twentieth century that observations from ships at sea were available to forecasters. The 1930s saw the usage of radiosondes in tropical cyclone forecasting. The next decade saw the advent of aircraft-based reconnaissance by the military, starting with the first dedicated flight into a hurricane in 1943, and the establishment of the Hurricane Hunters in 1944. In the 1950s, coastal weather radars began to be used in the United States, and research reconnaissance flights by the precursor of the Hurricane Research Division began in 1954.
The launch of the first weather satellite, TIROS-I, in 1960, introduced new forecasting techniques that remain important to tropical cyclone forecasting to the present. In the 1970s, buoys were introduced to improve the resolution of surface measurements, which until that point were not available at all over ocean surfaces.
Long term.
In the late 1970s, William Gray noticed a trend of low hurricane activity in the North Atlantic basin during El Niño years. He was the first researcher to make a connection between such events, and positive early results led him to pursue further research. He found that numerous factors across the globe influence tropical cyclone activity, such as connecting wet periods over the African Sahel to an increase in major hurricane landfalls along the United States East Coast. However, his findings also showed inconsistencies when only looking at a single factor as a primary influence.
Utilizing his findings, Gray developed an objective, statistical forecast for seasonal hurricane activity; he predicted only the number of tropical storms, hurricanes, and major hurricanes, foregoing specifics on tracks and potential landfalls due to the aforementioned inconsistencies. Gray issued his first seasonal forecast ahead of the 1984 season, which used the statistical relationships between tropical cyclone activity, the El Niño–Southern Oscillation (ENSO), Quasi-biennial oscillation (QBO), and Caribbean basin sea-level pressures. The endeavour proved modestly successful. He subsequently issued forecasts ahead of the start of the Atlantic hurricane season in May and before the peak of the season in August. Students and colleagues joined his forecast team in the following years, including Christopher Landsea, Paul W. Mielke Jr., and Kenneth J. Berry.
Track.
The large-scale synoptic flow determines 70 to 90 percent of a tropical cyclone's motion. The deep-layer mean flow is the best tool in determining track direction and speed. If storms are significantly sheared, use of a lower-level wind is a better predictor. Knowledge of the beta effect can be used to anticipate a tropical cyclone's drift, since the effect leads to a more northwestward heading for tropical cyclones in the Northern Hemisphere. It is also best to smooth out short-term wobbles of the storm centre in order to determine a more accurate trajectory.
Because of the forces that affect tropical cyclone tracks, accurate track predictions depend on determining the position and strength of high- and low-pressure areas and predicting how those areas will change during the life of a tropical system. Combining forecast models with increased understanding of the forces that act on tropical cyclones, and a wealth of data from Earth-orbiting satellites and other sensors, scientists have increased the accuracy of track forecasts over recent decades. An accurate track forecast is important, because if the track forecast is incorrect, forecasts for intensity, rainfall, storm surge, and tornado threat will also be incorrect.
1-2-3 rule.
The 1-2-3 rule (mariner's 1-2-3 rule or danger area) is a guideline commonly taught to mariners for severe storm (specifically hurricane and tropical storm) tracking and prediction. The guideline has two parts: the 34-knot rule, which defines the danger area to be avoided, and the 1-2-3 rule itself, which refers to the rounded long-term NHC/TPC forecast errors of 100-200-300 nautical miles at 24-48-72 hours, respectively. These numbers were close to the 10-year average for the 1982–1991 time frame; however, these errors have since decreased to near 50-100-150 nautical miles as NHC forecasts have become more accurate. The "danger area" to be avoided is constructed by expanding the forecast path by a radius equal to the respective hundreds of miles plus the forecast 34-knot wind field radii.
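A minimal sketch of how the danger-area radius grows with forecast time under this rule is given below; the 34-knot wind radius used here is a hypothetical value, and the classic 100-200-300 nautical mile errors are taken from the description above.
<syntaxhighlight lang="python">
# Sketch of the mariner's 1-2-3 rule: the danger-area radius at each forecast
# time is the rounded track-forecast error plus the forecast 34-knot wind radius.
# The 34-knot wind radius below (120 nm) is a hypothetical illustration value.

FORECAST_ERROR_NM = {24: 100, 48: 200, 72: 300}   # classic 1-2-3 rule errors

def danger_radius_nm(forecast_hour, wind_radius_34kt_nm):
    """Radius (nautical miles) around the forecast position to avoid."""
    return FORECAST_ERROR_NM[forecast_hour] + wind_radius_34kt_nm

for hour in (24, 48, 72):
    print(f"{hour} h: avoid within {danger_radius_nm(hour, 120)} nm of the forecast position")
</syntaxhighlight>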
Intensity.
Forecasters say they are less skilful at predicting the intensity of tropical cyclones than at predicting their track. Available computing power limits forecasters' ability to accurately model a large number of complex factors, such as exact topology and atmospheric conditions, though with increased experience and understanding, even models with the same resolution can be tuned to more accurately reflect real-world behaviour. Another weakness is the lack of frequent wind speed measurements in the eye of the storm. The Cyclone Global Navigation Satellite System, launched by NASA in 2016, is expected to provide much more data compared to sporadic measurements by weather buoys and hurricane-penetrating aircraft.
An accurate track forecast is essential to creating accurate intensity forecasts, particularly in an area with large islands such as the western north Pacific and the Caribbean Sea, as proximity to land is an inhibiting factor to developing tropical cyclones. A strong hurricane/typhoon/cyclone can weaken if an outer eye wall forms around the inner eye wall, choking off the convection within the inner eye wall. Such weakening is called an eyewall replacement cycle, and is usually temporary.
Maximum potential intensity.
Dr. Kerry Emanuel created a mathematical model around 1988, called the "maximum potential intensity" or MPI, to compute the upper limit of tropical cyclone intensity based on sea surface temperature and atmospheric profiles from the latest global model runs. Maps created from this equation show values of the maximum achievable intensity due to the thermodynamics of the atmosphere at the time of the last model run (either 0000 or 1200 UTC). However, MPI does not take vertical wind shear into account. MPI is computed using the following formula:
formula_0
Where formula_1 is the maximum potential velocity in meters per second; formula_2 is the sea surface temperature underneath the center of the tropical cyclone, formula_3 is a reference temperature (30 °C) and formula_4, formula_5 and formula_6 are curve-fit constants. When formula_7, formula_8, and formula_9, the graph generated by this function corresponds to the 99th percentile of empirical tropical cyclone intensity data.
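Evaluating this curve fit numerically is straightforward; the sketch below uses the constants quoted above, and the 29 °C sea surface temperature is an arbitrary illustrative input.
<syntaxhighlight lang="python">
import math

# Empirical maximum potential intensity curve fit V = A + B*exp(C*(T - T0)),
# using the constants quoted in the text and the 30 degree C reference temperature.
A, B, C, T0 = 28.2, 55.8, 0.1813, 30.0

def max_potential_intensity(sst_celsius):
    """Upper-limit wind speed (m/s) for a given sea surface temperature."""
    return A + B * math.exp(C * (sst_celsius - T0))

print(round(max_potential_intensity(29.0), 1), "m/s")   # roughly 75 m/s
</syntaxhighlight>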
Rainfall.
Tropical cyclone rainfall forecasting is important, since between 1970 and 2004, inland flooding from tropical cyclones caused most of the fatalities from tropical cyclones in the United States. While flooding is common to tropical cyclones near a landmass, there are a few factors which lead to excessive rainfall from tropical cyclones. Slow motion, as was seen during Hurricane Danny and Hurricane Wilma, can lead to high amounts. The presence of topography near the coast, as is the case across much of Mexico, Haiti, the Dominican Republic, much of Central America, Madagascar, Réunion, China, and Japan acts to magnify amounts due to upslope flow into the mountains. Strong upper level forcing from a trough moving through the Westerlies, as was the case during Hurricane Floyd, can lead to excessive amounts even from systems moving at an average forward motion. A combination of two of these factors could be especially crippling, as was seen during Hurricane Mitch in Central America. Therefore, an accurate track forecast is essential in order to produce an accurate tropical cyclone rainfall forecast.
However, as a result of global warming, the heat that has built up at the ocean's surface has allowed storms and hurricanes to capture more water vapour and, given the increased temperatures in the atmosphere, to retain that moisture for longer. This can result in extreme amounts of rainfall upon striking land, which is often the most damaging aspect of a hurricane.
Operational methods.
Historically, tropical cyclone tracking charts were used to include the past track and prepare future forecasts at Regional Specialized Meteorological Centers and Tropical Cyclone Warning Centers. The need for a more modernized method for forecasting tropical cyclones had become apparent to operational weather forecasters by the mid-1980s. At that time the United States Department of Defense was using paper maps, acetate, grease pencils, and disparate computer programs to forecast tropical cyclones. The Automated Tropical Cyclone Forecasting System (ATCF) software was developed by the Naval Research Laboratory for the Joint Typhoon Warning Center (JTWC) beginning in 1986, and used since 1988. During 1990 the system was adapted by the National Hurricane Center (NHC) for use at the NHC, National Centers for Environmental Prediction and the Central Pacific Hurricane Center. This provided the NHC with a multitasking software environment which allowed them to improve efficiency and cut the time required to make a forecast by 25% or 1 hour. ATCF was originally developed for use within DOS, before later being adapted to Unix and Linux.
Storm surge.
The main storm surge forecast model in the Atlantic basin is SLOSH, which stands for Sea, Lake, Overland, Surge from Hurricanes. It uses the size of a storm, its intensity, its forward motion, and the topography of the coastal plain to estimate the depth of a storm surge at any individual grid point across the United States. An accurate forecast track is required in order to produce accurate storm surge forecasts. However, if the landfall point is uncertain, a maximum envelope of water (MEOW) map can be generated based on the direction of approach. If the forecast track itself is also uncertain, a maximum of maximums (MoM) map can be generated which will show the worst possible scenario for a hurricane of a specific strength.
Tornado.
The location of most tropical cyclone-related tornadoes is their northeast quadrant in the Northern Hemisphere and southeast quadrant in the Southern Hemisphere. Like most of the other forecasts for tropical cyclone effects, an accurate track forecast is required in order to produce an accurate tornado threat forecast.
Seasonal forecast.
By looking at annual variations in various climate parameters, forecasters can make predictions about the overall number and intensity of tropical cyclones that will occur in a given season. For example, when constructing its seasonal outlooks, the Climate Prediction Center in the United States considers the effects of the El Niño-Southern Oscillation, 25–40 year tropical cycle, wind shear over the oceans, and ocean surface temperature.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V = A + B \\cdot e^{C(T-T_0)}"
},
{
"math_id": 1,
"text": "V"
},
{
"math_id": 2,
"text": "T"
},
{
"math_id": 3,
"text": "T_0"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "B"
},
{
"math_id": 6,
"text": "C"
},
{
"math_id": 7,
"text": "A = 28.2"
},
{
"math_id": 8,
"text": "B = 55.8"
},
{
"math_id": 9,
"text": "C = 0.1813"
}
] | https://en.wikipedia.org/wiki?curid=10575816 |
1057638 | Einstein tensor | Tensor used in general relativity
In differential geometry, the Einstein tensor (named after Albert Einstein; also known as the trace-reversed Ricci tensor) is used to express the curvature of a pseudo-Riemannian manifold. In general relativity, it occurs in the Einstein field equations for gravitation that describe spacetime curvature in a manner that is consistent with conservation of energy and momentum.
Definition.
The Einstein tensor formula_0 is a tensor of order 2 defined over pseudo-Riemannian manifolds. In index-free notation it is defined as
formula_1
where formula_2 is the Ricci tensor, formula_3 is the metric tensor and formula_4 is the scalar curvature, which is computed as the trace of the Ricci tensor formula_5 by formula_6. In component form, the previous equation reads as
formula_7
The Einstein tensor is symmetric
formula_8
and, like the on shell stress–energy tensor, has zero divergence:
formula_9
Explicit form.
The Ricci tensor depends only on the metric tensor, so the Einstein tensor can be defined directly with just the metric tensor. However, this expression is complex and rarely quoted in textbooks. The complexity of this expression can be shown using the formula for the Ricci tensor in terms of Christoffel symbols:
formula_10
where formula_11 is the Kronecker tensor and the Christoffel symbol formula_12 is defined as
formula_13
and terms of the form formula_14 represent its partial derivative in the μ-direction, i.e.:
formula_15
Before cancellations, this formula results in formula_16 individual terms. Cancellations bring this number down somewhat.
In the special case of a locally inertial reference frame near a point, the first derivatives of the metric tensor vanish and the component form of the Einstein tensor is considerably simplified:
formula_17
where square brackets conventionally denote antisymmetrization over bracketed indices, i.e.
formula_18
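As a concrete check of the pipeline above (metric, then Christoffel symbols, then Ricci tensor, then Einstein tensor), the sketch below carries out the computation symbolically for the round 2-sphere of radius "a". The example metric and the use of the sympy library are choices made for this illustration; the expected outcome is that the Einstein tensor vanishes, as it does for every two-dimensional metric.
<syntaxhighlight lang="python">
import sympy as sp

# Build the Einstein tensor from a metric via Christoffel symbols, following
# the component formulas above.  Test case: the round 2-sphere of radius a.
theta, phi, a = sp.symbols('theta phi a', positive=True)
x = [theta, phi]
g = sp.diag(a**2, a**2 * sp.sin(theta)**2)        # metric g_{mu nu}
ginv = g.inv()
n = len(x)

# Christoffel symbols Gamma^d_{bc} = (1/2) g^{de} (g_{be,c} + g_{ce,b} - g_{bc,e})
Gamma = [[[sum(ginv[d, e] * (sp.diff(g[b, e], x[c]) + sp.diff(g[c, e], x[b])
                             - sp.diff(g[b, c], x[e])) for e in range(n)) / 2
           for c in range(n)] for b in range(n)] for d in range(n)]

# Ricci tensor R_{bc} = Gamma^e_{bc,e} - Gamma^e_{be,c}
#                       + Gamma^e_{es} Gamma^s_{bc} - Gamma^e_{cs} Gamma^s_{eb}
def ricci(b, c):
    return sp.simplify(sum(sp.diff(Gamma[e][b][c], x[e]) - sp.diff(Gamma[e][b][e], x[c])
                           + sum(Gamma[e][e][s] * Gamma[s][b][c]
                                 - Gamma[e][c][s] * Gamma[s][e][b] for s in range(n))
                           for e in range(n)))

Ric = sp.Matrix(n, n, ricci)
R = sp.simplify(sum(ginv[b, c] * Ric[b, c] for b in range(n) for c in range(n)))
G = (Ric - sp.Rational(1, 2) * g * R).applyfunc(sp.simplify)

print(Ric)   # expected: Matrix([[1, 0], [0, sin(theta)**2]]), i.e. (1/a**2) g
print(R)     # expected: 2/a**2
print(G)     # expected: the zero matrix (the Einstein tensor vanishes in 2D)
</syntaxhighlight>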
Trace.
The trace of the Einstein tensor can be computed by contracting the equation in the definition with the metric tensor formula_19. In formula_20 dimensions (of arbitrary signature):
formula_21
Therefore, in the special case of "n" = 4 dimensions, formula_22. That is, the trace of the Einstein tensor is the negative of the Ricci tensor's trace. Thus, another name for the Einstein tensor is the "trace-reversed Ricci tensor". This formula_23 case is especially relevant in the theory of general relativity.
Use in general relativity.
The Einstein tensor allows the Einstein field equations to be written in the concise form:
formula_24
where formula_25 is the cosmological constant and formula_26 is the Einstein gravitational constant.
From the explicit form of the Einstein tensor, the Einstein tensor is a nonlinear function of the metric tensor, but is linear in the second partial derivatives of the metric. As a symmetric order-2 tensor, the Einstein tensor has 10 independent components in a 4-dimensional space. It follows that the Einstein field equations are a set of 10 quasilinear second-order partial differential equations for the metric tensor.
The contracted Bianchi identities can also be easily expressed with the aid of the Einstein tensor:
formula_27
The (contracted) Bianchi identities automatically ensure the covariant conservation of the stress–energy tensor in curved spacetimes:
formula_28
The physical significance of the Einstein tensor is highlighted by this identity. In terms of the densitized stress tensor contracted on a Killing vector formula_29, an ordinary conservation law holds:
formula_30
Uniqueness.
David Lovelock has shown that, in a four-dimensional differentiable manifold, the Einstein tensor is the only tensorial and divergence-free function of the formula_31 and at most their first and second partial derivatives.
However, the Einstein field equation is not the only equation which satisfies these three conditions: many alternative theories have been proposed, such as the Einstein–Cartan theory, that also satisfy them.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{G}"
},
{
"math_id": 1,
"text": "\\mathbf{G}=\\mathbf{R}-\\frac{1}{2}\\mathbf{g}R,"
},
{
"math_id": 2,
"text": "\\mathbf{R}"
},
{
"math_id": 3,
"text": "\\mathbf{g}"
},
{
"math_id": 4,
"text": "R"
},
{
"math_id": 5,
"text": "R_{\\mu \\nu}"
},
{
"math_id": 6,
"text": "R = g^{\\mu \\nu}R_{\\mu \\nu } = R_\\mu^\\mu"
},
{
"math_id": 7,
"text": "G_{\\mu\\nu} = R_{\\mu\\nu} - {1\\over2} g_{\\mu\\nu}R ."
},
{
"math_id": 8,
"text": "G_{\\mu\\nu} = G_{\\nu\\mu}"
},
{
"math_id": 9,
"text": "\\nabla_\\mu G^{\\mu\\nu} = 0\\,."
},
{
"math_id": 10,
"text": "\\begin{align}\n G_{\\alpha\\beta}\n &= R_{\\alpha\\beta} - \\frac{1}{2} g_{\\alpha\\beta} R \\\\\n &= R_{\\alpha\\beta} - \\frac{1}{2} g_{\\alpha\\beta} g^{\\gamma\\zeta} R_{\\gamma\\zeta} \\\\\n &= \\left(\\delta^\\gamma_\\alpha \\delta^\\zeta_\\beta - \\frac{1}{2} g_{\\alpha\\beta}g^{\\gamma\\zeta}\\right) R_{\\gamma\\zeta} \\\\\n &= \\left(\\delta^\\gamma_\\alpha \\delta^\\zeta_\\beta - \\frac{1}{2} g_{\\alpha\\beta}g^{\\gamma\\zeta}\\right)\\left(\\Gamma^\\epsilon{}_{\\gamma\\zeta,\\epsilon} - \\Gamma^\\epsilon{}_{\\gamma\\epsilon,\\zeta} + \\Gamma^\\epsilon{}_{\\epsilon\\sigma} \\Gamma^\\sigma{}_{\\gamma\\zeta} - \\Gamma^\\epsilon{}_{\\zeta\\sigma} \\Gamma^\\sigma{}_{\\epsilon\\gamma}\\right), \\\\[2pt]\n G^{\\alpha\\beta}\n &= \\left(g^{\\alpha\\gamma} g^{\\beta\\zeta} - \\frac{1}{2} g^{\\alpha\\beta}g^{\\gamma\\zeta}\\right)\\left(\\Gamma^\\epsilon{}_{\\gamma\\zeta,\\epsilon} - \\Gamma^\\epsilon{}_{\\gamma\\epsilon,\\zeta} + \\Gamma^\\epsilon{}_{\\epsilon\\sigma} \\Gamma^\\sigma{}_{\\gamma\\zeta} - \\Gamma^\\epsilon{}_{\\zeta\\sigma} \\Gamma^\\sigma{}_{\\epsilon\\gamma}\\right),\n\\end{align}"
},
{
"math_id": 11,
"text": "\\delta^\\alpha_\\beta"
},
{
"math_id": 12,
"text": "\\Gamma^\\alpha{}_{\\beta\\gamma}"
},
{
"math_id": 13,
"text": "\\Gamma^\\alpha{}_{\\beta\\gamma} = \\frac{1}{2} g^{\\alpha\\epsilon}\\left(g_{\\beta\\epsilon,\\gamma} + g_{\\gamma\\epsilon,\\beta} - g_{\\beta\\gamma,\\epsilon}\\right)."
},
{
"math_id": 14,
"text": "\\Gamma ^\\alpha _{\\beta \\gamma, \\mu} "
},
{
"math_id": 15,
"text": "\\Gamma^\\alpha{}_{\\beta\\gamma, \\mu} = \\partial _\\mu \\Gamma^\\alpha{}_{\\beta\\gamma} = \n\\frac{\\partial}{\\partial x^\\mu}\n\\Gamma^\\alpha{}_{\\beta\\gamma}"
},
{
"math_id": 16,
"text": "2 \\times (6 + 6 + 9 + 9) = 60"
},
{
"math_id": 17,
"text": "\\begin{align}\n G_{\\alpha\\beta}\n & = g^{\\gamma\\mu}\\left[ g_{\\gamma[\\beta,\\mu]\\alpha} + g_{\\alpha[\\mu,\\beta]\\gamma} - \\frac{1}{2} g_{\\alpha\\beta} g^{\\epsilon\\sigma} \\left(g_{\\epsilon[\\mu,\\sigma]\\gamma} + g_{\\gamma[\\sigma,\\mu]\\epsilon}\\right)\\right] \\\\\n & = g^{\\gamma\\mu} \\left(\\delta^\\epsilon_\\alpha \\delta^\\sigma_\\beta - \\frac{1}{2} g^{\\epsilon\\sigma}g_{\\alpha\\beta}\\right)\\left(g_{\\epsilon[\\mu,\\sigma]\\gamma} + g_{\\gamma[\\sigma,\\mu]\\epsilon}\\right),\n\\end{align}"
},
{
"math_id": 18,
"text": "g_{\\alpha[\\beta,\\gamma]\\epsilon} \\, = \\frac{1}{2} \\left(g_{\\alpha\\beta,\\gamma\\epsilon} - g_{\\alpha\\gamma,\\beta\\epsilon}\\right)."
},
{
"math_id": 19,
"text": "g^{\\mu\\nu}"
},
{
"math_id": 20,
"text": "n"
},
{
"math_id": 21,
"text": "\\begin{align}\ng^{\\mu\\nu}G_{\\mu\\nu} &= g^{\\mu\\nu}R_{\\mu\\nu} - {1\\over2} g^{\\mu\\nu}g_{\\mu\\nu}R \\\\\nG &= R - {1\\over2} (nR) = {{2-n}\\over2}R\n\\end{align}"
},
{
"math_id": 22,
"text": "G\\ = -R"
},
{
"math_id": 23,
"text": "n=4"
},
{
"math_id": 24,
"text": "G_{\\mu\\nu} + \\Lambda g_{\\mu \\nu} = \\kappa T_{\\mu\\nu} ,"
},
{
"math_id": 25,
"text": "\\Lambda"
},
{
"math_id": 26,
"text": "\\kappa"
},
{
"math_id": 27,
"text": "\\nabla_{\\mu} G^{\\mu\\nu} = 0."
},
{
"math_id": 28,
"text": "\\nabla_{\\mu} T^{\\mu\\nu} = 0."
},
{
"math_id": 29,
"text": "\\xi^\\mu"
},
{
"math_id": 30,
"text": "\\partial_{\\mu}\\left(\\sqrt{-g} T^\\mu{}_\\nu \\xi^\\nu\\right) = 0."
},
{
"math_id": 31,
"text": "g_{\\mu\\nu}"
}
] | https://en.wikipedia.org/wiki?curid=1057638 |
1057955 | Super-Poincaré algebra | Supersymmetric generalization of the Poincaré algebra
In theoretical physics, a super-Poincaré algebra is an extension of the Poincaré algebra to incorporate supersymmetry, a relation between bosons and fermions. They are examples of supersymmetry algebras (without central charges or internal symmetries), and are Lie superalgebras. Thus a super-Poincaré algebra is a Z2-graded vector space with a graded Lie bracket such that the even part is a Lie algebra containing the Poincaré algebra, and the odd part is built from spinors on which there is an anticommutation relation with values in the even part.
Informal sketch.
The Poincaré algebra describes the isometries of Minkowski spacetime. From the representation theory of the Lorentz group, it is known that the Lorentz group admits two inequivalent complex spinor representations, dubbed formula_0 and formula_1. Taking their tensor product, one obtains formula_2; such decompositions of tensor products of representations into direct sums is given by the Littlewood–Richardson rule.
Normally, one treats such a decomposition as relating to specific particles: so, for example, the pion, which is a chiral vector particle, is composed of a quark-anti-quark pair. However, one could also identify formula_3 with Minkowski spacetime itself. This leads to a natural question: if Minkowski space-time belongs to the adjoint representation, then can Poincaré symmetry be extended to the fundamental representation? Well, it can: this is exactly the super-Poincaré algebra. There is a corresponding experimental question: if we live in the adjoint representation, then where is the fundamental representation hiding? This is the program of supersymmetry, which has not been found experimentally.
History.
The super-Poincaré algebra was first proposed in the context of the Haag–Łopuszański–Sohnius theorem, as a means of avoiding the conclusions of the Coleman–Mandula theorem. That is, the Coleman–Mandula theorem is a no-go theorem that states that the Poincaré algebra cannot be extended with additional symmetries that might describe the internal symmetries of the observed physical particle spectrum. However, the Coleman–Mandula theorem assumed that the algebra extension would be by means of a commutator; this assumption, and thus the theorem, can be avoided by considering the anti-commutator, that is, by employing anti-commuting Grassmann numbers. The proposal was to consider a supersymmetry algebra, defined as the semidirect product of a central extension of the super-Poincaré algebra by a compact Lie algebra of internal symmetries.
Definition.
The simplest supersymmetric extension of the Poincaré algebra contains two Weyl spinors with the following anti-commutation relation:
formula_4
and all other anti-commutation relations between the "Q"s and "P"s vanish. The operators formula_5 are known as supercharges. In the above expression formula_6 are the generators of translation and formula_7 are the Pauli matrices. The index formula_8 runs over the values formula_9 A dot is used over the index formula_10 to remind that this index transforms according to the inequivalent conjugate spinor representation; one must never accidentally contract these two types of indexes. The Pauli matrices can be considered to be a direct manifestation of the Littlewood–Richardson rule mentioned before: they indicate how the tensor product formula_11 of the two spinors can be re-expressed as a vector. The index formula_12 of course ranges over the space-time dimensions formula_13
It is convenient to work with Dirac spinors instead of Weyl spinors; a Dirac spinor can be thought of as an element of formula_14; it has four components. The Dirac matrices are thus also four-dimensional, and can be expressed as direct sums of the Pauli matrices. The tensor product then gives an algebraic relation to the Minkowski metric formula_15 which is expressed as:
formula_16
and
formula_17
This then gives the full algebra
formula_18
which are to be combined with the normal Poincaré algebra. It is a closed algebra, since all Jacobi identities are satisfied, and it admits explicit matrix representations. Following this line of reasoning will lead to supergravity.
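A quick numerical check of the Clifford-algebra relation formula_16 is easy to carry out by assembling the Dirac matrices from the Pauli matrices. The sketch below assumes the Weyl (chiral) representation and the signature diag(+1, −1, −1, −1); neither convention is fixed by the text above, so both are assumptions of this illustration.
<syntaxhighlight lang="python">
import numpy as np

# Numerically verify {gamma^mu, gamma^nu} = 2 g^{mu nu} I, assuming the Weyl
# (chiral) representation and the metric signature diag(+1, -1, -1, -1).
I2 = np.eye(2)
zero = np.zeros((2, 2), dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),          # Pauli matrices
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

gamma = [np.block([[zero, I2], [I2, zero]])]                  # gamma^0
gamma += [np.block([[zero, s], [-s, zero]]) for s in sigma]   # gamma^1..gamma^3

eta = np.diag([1.0, -1.0, -1.0, -1.0])                        # Minkowski metric g^{mu nu}

for mu in range(4):
    for nu in range(4):
        anticommutator = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anticommutator, 2 * eta[mu, nu] * np.eye(4))
print("Clifford algebra relation verified in the Weyl representation.")
</syntaxhighlight>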
Extended supersymmetry.
It is possible to add more supercharges. That is, we fix a number which by convention is labelled formula_19, and define supercharges
formula_20 with formula_21
These can be thought of as many copies of the original supercharges, and hence satisfy
formula_22
formula_23
and
formula_24
but can also satisfy
formula_25
and
formula_26
where formula_27 is the "central charge".
Super-Poincaré group and superspace.
Just as the Poincaré algebra generates the Poincaré group of isometries of Minkowski space, the super-Poincaré algebra, an example of a Lie super-algebra, generates what is known as a supergroup. This can be used to define superspace with formula_19 supercharges: these are the right cosets of the Lorentz group within the formula_19 super-Poincaré group.
Just as formula_6 has the interpretation as being the generator of spacetime translations, the charges formula_20, with formula_28, have the interpretation as generators of superspace translations in the 'spin coordinates' of superspace. That is, we can view superspace as the direct sum of Minkowski space with 'spin dimensions' labelled by coordinates formula_29. The supercharge formula_30 generates translations in the direction labelled by the coordinate formula_31 By counting, there are formula_32 spin dimensions.
Notation for superspace.
The superspace consisting of Minkowski space with formula_19 supercharges is therefore labelled formula_33 or sometimes simply formula_34.
SUSY in 3 + 1 Minkowski spacetime.
In (3 + 1) Minkowski spacetime, the Haag–Łopuszański–Sohnius theorem states that the SUSY algebra with N spinor generators is as follows.
The even part of the star Lie superalgebra is the direct sum of the Poincaré algebra and a reductive Lie algebra "B" (such that its self-adjoint part is the tangent space of a real compact Lie group). The odd part of the algebra would be
formula_35
where formula_36 and formula_37 are specific representations of the Poincaré algebra. (Compared to the notation used earlier in the article, these correspond to formula_38 and formula_39, respectively; see also the footnote where the previous notation was introduced.) Both components are conjugate to each other under the * conjugation. "V" is an "N"-dimensional complex representation of "B" and "V"* is its dual representation. The Lie bracket for the odd part is given by a symmetric equivariant pairing {..} on the odd part with values in the even part. In particular, its reduced intertwiner from formula_40 to the ideal of the Poincaré algebra generated by translations is given as the product of a nonzero intertwiner from formula_41 to (1/2,1/2) by the "contraction intertwiner" from formula_42 to the trivial representation. On the other hand, its reduced intertwiner from formula_43 is the product of an (antisymmetric) intertwiner from formula_44 to (0,0) and an antisymmetric intertwiner "A" from formula_45 to "B". Conjugate it to get the corresponding case for the other half.
"N" = 1.
"B" is now formula_46 (called R-symmetry) and "V" is the 1D representation of formula_46 with charge 1. "A" (the intertwiner defined above) would have to be zero since it is antisymmetric.
Actually, there are two versions of "N=1" SUSY, one without the formula_46 (i.e. "B" is zero-dimensional) and the other with formula_46.
"N" = 2.
"B" is now formula_47 and "V" is the 2D doublet representation of formula_48 with a zero formula_46 charge. Now, "A" is a nonzero intertwiner to the formula_46 part of "B".
Alternatively, "V" could be a 2D doublet with a nonzero formula_46 charge. In this case, "A" would have to be zero.
Yet another possibility would be to let "B" be formula_49. "V" is invariant under formula_50 and formula_51 and decomposes into a 1D rep with formula_52 charge 1 and another 1D rep with charge -1. The intertwiner "A" would be complex with the real part mapping to formula_50 and the imaginary part mapping to formula_51.
Or we could have "B" being formula_53 with "V" being the doublet rep of formula_48 with zero formula_46 charges and "A" being a complex intertwiner with the real part mapping to formula_52 and the imaginary part to formula_50.
This doesn't even exhaust all the possibilities. We see that there is more than one "N" = 2 supersymmetry; likewise, the SUSYs for "N" > 2 are also not unique (in fact, it only gets worse).
"N" = 3.
It is theoretically allowed, but the multiplet structure automatically becomes the same as that of an "N"=4 supersymmetric theory. So it is less often discussed compared to the "N"=1,2,4 versions.
"N" = 4.
This is the maximal number of supersymmetries in a theory without gravity.
"N" = 8.
This is the maximal number of supersymmetries in any supersymmetric theory. Beyond formula_54, any massless supermultiplet contains a sector with helicity formula_55 such that formula_56. Such theories on Minkowski space must be free (non-interacting).
SUSY in various dimensions.
In 0 + 1, 2 + 1, 3 + 1, 4 + 1, 6 + 1, 7 + 1, 8 + 1, and 10 + 1 dimensions, a SUSY algebra is classified by a positive integer "N".
In 1 + 1, 5 + 1 and 9 + 1 dimensions, a SUSY algebra is classified by two nonnegative integers ("M", "N"), at least one of which is nonzero. "M" represents the number of left-handed SUSYs and "N" represents the number of right-handed SUSYs.
The reason of this has to do with the reality conditions of the spinors.
Hereafter "d" = 9 means "d" = 8 + 1 in Minkowski signature, etc. The structure of supersymmetry algebra is mainly determined by the number of the fermionic generators, that is the number "N" times the real dimension of the spinor in "d" dimensions. It is because one can obtain a supersymmetry algebra of lower dimension easily from that of higher dimensionality by the use of dimensional reduction.
Upper bound on dimension of supersymmetric theories.
The maximum allowed dimension of theories with supersymmetry is formula_57, which admits a unique theory called eleven-dimensional supergravity which is the low-energy limit of M-theory. This incorporates supergravity: without supergravity, the maximum allowed dimension is formula_58.
"d" = 11.
The only example is the "N" = 1 supersymmetry with 32 supercharges.
"d" = 10.
From "d" = 11, "N" = 1 SUSY, one obtains "N" = (1, 1) nonchiral SUSY algebra, which is also called the type IIA supersymmetry. There is also "N" = (2, 0) SUSY algebra, which is called the type IIB supersymmetry. Both of them have 32 supercharges.
"N" = (1, 0) SUSY algebra with 16 supercharges is the minimal susy algebra in 10 dimensions. It is also called the type I supersymmetry. Type IIA / IIB / I superstring theory has the SUSY algebra of the corresponding name. The supersymmetry algebra for the heterotic superstrings is that of type I.
Remarks.
<templatestyles src="Reflist/styles.css" />
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2"
},
{
"math_id": 1,
"text": "\\overline{2}"
},
{
"math_id": 2,
"text": "2\\otimes \\overline{2}=3\\oplus 1"
},
{
"math_id": 3,
"text": "3\\oplus 1"
},
{
"math_id": 4,
"text": "\\{Q_{\\alpha}, \\bar Q_{\\dot{\\beta}}\\} = 2{\\sigma^\\mu}_{\\alpha\\dot{\\beta}}P_\\mu "
},
{
"math_id": 5,
"text": "Q_\\alpha, \\bar Q_\\dot\\alpha"
},
{
"math_id": 6,
"text": "P_\\mu"
},
{
"math_id": 7,
"text": "\\sigma^\\mu"
},
{
"math_id": 8,
"text": "\\alpha"
},
{
"math_id": 9,
"text": "\\alpha=1,2."
},
{
"math_id": 10,
"text": "\\dot{\\beta}"
},
{
"math_id": 11,
"text": "2\\otimes\\overline{2}"
},
{
"math_id": 12,
"text": "\\mu"
},
{
"math_id": 13,
"text": "\\mu=0,1,2,3."
},
{
"math_id": 14,
"text": "2\\oplus\\overline{2}"
},
{
"math_id": 15,
"text": " g^{\\mu \\nu} "
},
{
"math_id": 16,
"text": " \\{ \\gamma^\\mu,\\gamma^\\nu \\} = 2g^{\\mu \\nu} "
},
{
"math_id": 17,
"text": " \\sigma^{\\mu \\nu}=\\frac{i}{2}\\left[ \\gamma^\\mu,\\gamma^\\nu \\right] "
},
{
"math_id": 18,
"text": "\\begin{align}\n \\left[ M^{\\mu \\nu} , Q_\\alpha \\right] &= \\frac{1}{2} ( \\sigma^{\\mu \\nu})_\\alpha^{\\;\\;\\beta} Q_\\beta \\\\\n\n \\left[ Q_\\alpha , P^\\mu \\right] &= 0 \\\\\n\n \\{ Q_\\alpha , \\bar{Q}_{\\dot{\\beta}} \\} &= 2 ( \\sigma^\\mu )_{\\alpha \\dot{\\beta}} P_\\mu \\\\\n\\end{align}"
},
{
"math_id": 19,
"text": "\\mathcal{N}"
},
{
"math_id": 20,
"text": "Q^I_\\alpha, \\bar Q^I_\\dot\\alpha"
},
{
"math_id": 21,
"text": "I = 1, \\cdots, \\mathcal{N}."
},
{
"math_id": 22,
"text": "[M^{\\mu\\nu}, Q^I_\\alpha] = (\\sigma^{\\mu\\nu})_\\alpha{}^\\beta Q^I_\\beta"
},
{
"math_id": 23,
"text": "[P^\\mu, Q^I_\\alpha] = 0"
},
{
"math_id": 24,
"text": "\\{Q^I_\\alpha, \\bar Q^J_\\dot\\alpha\\} = 2\\sigma^\\mu_{\\alpha\\dot\\alpha}P_\\mu\\delta^{IJ}"
},
{
"math_id": 25,
"text": "\\{Q^I_\\alpha, Q^J_\\beta\\} = \\epsilon_{\\alpha\\beta}Z^{IJ}"
},
{
"math_id": 26,
"text": "\\{\\bar Q^I_\\dot\\alpha, \\bar Q^J_\\dot\\beta\\} = \\epsilon_{\\dot\\alpha\\dot\\beta}Z^{\\dagger IJ}"
},
{
"math_id": 27,
"text": "Z^{IJ} = -Z^{JI}"
},
{
"math_id": 28,
"text": "I = 1, \\cdots, \\mathcal{N}"
},
{
"math_id": 29,
"text": "\\theta^I_\\alpha, \\bar\\theta^{I\\dot\\alpha}"
},
{
"math_id": 30,
"text": "Q^I_\\alpha"
},
{
"math_id": 31,
"text": "\\theta^I_\\alpha."
},
{
"math_id": 32,
"text": "4\\mathcal{N}"
},
{
"math_id": 33,
"text": "\\mathbb{R}^{1,3|4\\mathcal{N}}"
},
{
"math_id": 34,
"text": "\\mathbb{R}^{4|4\\mathcal{N}}"
},
{
"math_id": 35,
"text": "\\left(\\frac{1}{2},0\\right)\\otimes V\\oplus\\left(0,\\frac{1}{2}\\right)\\otimes V^*"
},
{
"math_id": 36,
"text": "(1/2,0)"
},
{
"math_id": 37,
"text": "(0,1/2)"
},
{
"math_id": 38,
"text": "\\overline{2}\\oplus 1"
},
{
"math_id": 39,
"text": "1\\oplus 2"
},
{
"math_id": 40,
"text": "\\left[\\left(\\frac{1}{2},0\\right)\\otimes V\\right]\\otimes\\left[\\left(0,\\frac{1}{2}\\right)\\otimes V^*\\right]"
},
{
"math_id": 41,
"text": "\\left(\\frac{1}{2},0\\right)\\otimes\\left(0,\\frac{1}{2}\\right)"
},
{
"math_id": 42,
"text": "V\\otimes V^*"
},
{
"math_id": 43,
"text": "\\left[\\left(\\frac{1}{2},0\\right)\\otimes V\\right]\\otimes \\left[\\left(\\frac{1}{2},0\\right)\\otimes V\\right]"
},
{
"math_id": 44,
"text": "\\left(\\frac{1}{2},0\\right)\\otimes\\left(\\frac{1}{2},0\\right)"
},
{
"math_id": 45,
"text": "N^2"
},
{
"math_id": 46,
"text": "\\mathfrak{u}(1)"
},
{
"math_id": 47,
"text": "\\mathfrak{su}(2)\\oplus \\mathfrak{u}(1)"
},
{
"math_id": 48,
"text": "\\mathfrak{su}(2)"
},
{
"math_id": 49,
"text": "\\mathfrak{u}(1)_A\\oplus \\mathfrak{u}(1)_B \\oplus \\mathfrak{u}(1)_C"
},
{
"math_id": 50,
"text": "\\mathfrak{u}(1)_B"
},
{
"math_id": 51,
"text": "\\mathfrak{u}(1)_C"
},
{
"math_id": 52,
"text": "\\mathfrak{u}(1)_A"
},
{
"math_id": 53,
"text": "\\mathfrak{su}(2)\\oplus \\mathfrak{u}(1)_A\\oplus \\mathfrak{u}(1)_B"
},
{
"math_id": 54,
"text": "\\mathcal{N} = 8"
},
{
"math_id": 55,
"text": "\\lambda"
},
{
"math_id": 56,
"text": "|\\lambda| > 2"
},
{
"math_id": 57,
"text": "d = 11 = 10 + 1"
},
{
"math_id": 58,
"text": "d = 10 = 9 + 1"
}
] | https://en.wikipedia.org/wiki?curid=1057955 |
10580078 | Navigation function | Navigation function usually refers to a function of position, velocity, acceleration and time which is used to plan robot trajectories through the environment. Generally, the goal of a navigation function is to create feasible, safe paths that avoid obstacles while allowing a robot to move from its starting configuration to its goal configuration.
Potential functions as navigation functions.
Potential functions assume that the environment or work space is known. Obstacles are assigned a high potential value, and the goal position is assigned a low potential. To reach the goal position, a robot only needs to follow the negative gradient of the surface.
We can formalize this concept mathematically as follows: Let formula_0 be the state space of all possible configurations of a robot. Let formula_1 denote the goal region of the state space.
Then a potential function formula_2 is called a (feasible) navigation function if formula_3, if formula_4 exactly when no point in formula_5 is reachable from formula_6, and if, for every reachable state formula_7, there is an action leading to a state formula_8 for which formula_9.
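As an illustration of the gradient-following idea above, the sketch below runs plain gradient descent on a hand-built potential: a quadratic attraction to the goal plus a short-range repulsion around one circular obstacle. The geometry, gains, and step size are illustrative assumptions, and a potential of this simple form is not guaranteed to satisfy the navigation-function conditions; in general it can trap the robot in a local minimum, which is exactly what a true navigation function rules out.
<syntaxhighlight lang="python">
import numpy as np

# Gradient descent on an attractive-plus-repulsive potential field in the plane.
# Goal, obstacle geometry, gains, and step size are illustrative assumptions.
goal = np.array([5.0, 5.0])
obstacle, obs_radius, influence = np.array([2.5, 3.5]), 0.5, 1.0
k_att, k_rep, step = 1.0, 0.1, 0.01

def grad_potential(q):
    grad = k_att * (q - goal)                          # attractive quadratic bowl
    to_obs = q - obstacle
    d = np.linalg.norm(to_obs) - obs_radius            # distance to obstacle surface
    if 0 < d < influence:                              # repulsion only in a finite band
        grad += k_rep * (1.0 / influence - 1.0 / d) / d**2 * to_obs / np.linalg.norm(to_obs)
    return grad

q = np.array([0.0, 0.0])
for _ in range(5000):                                  # follow the negative gradient
    q = q - step * grad_potential(q)
    if np.linalg.norm(q - goal) < 1e-2:
        break
print("final position:", np.round(q, 3))
</syntaxhighlight>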
Probabilistic navigation function.
Probabilistic navigation function is an extension of the classical navigation function for static stochastic scenarios. The function is defined by a permitted collision probability, which limits the risk during motion. The Minkowski sum used in the classical definition is replaced with a convolution of the geometries and the probability density functions of locations. Denoting the target position by formula_10, the probabilistic navigation function is defined as:
formula_11
where formula_12 is a predefined constant like in the classical navigation function, which ensures the Morse nature of the function. formula_13 is the distance to the target position formula_14, and formula_15 takes into account all obstacles, defined as
formula_16
where formula_17 is based on the probability for a collision at location formula_18. The probability for a collision is limited by a predetermined value formula_19, meaning:
formula_20
and,
formula_21
where formula_22 is the probability to collide with the i-th obstacle.
A map formula_23 is said to be a probabilistic navigation function if it satisfies the following conditions:
Navigation Function in Optimal Control.
While for certain applications, it suffices to have a feasible navigation function, in many cases it is desirable to have an optimal navigation function with respect to a given cost functional formula_24. Formalized as an optimal control problem, we can write
formula_25
formula_26
whereby formula_18 is the state, formula_27 is the control to apply, formula_28 is a cost at a certain state formula_18 if we apply a control formula_27, and formula_29 models the transition dynamics of the system.
Applying Bellman's principle of optimality the optimal cost-to-go function is defined as
formula_30
Together with the above defined axioms, we can define the optimal navigation function as a function that vanishes on the goal region formula_31 and satisfies the Bellman equation above for every state formula_32.
Even though a navigation function is an example of reactive control, it can be utilized for optimal control problems as well, which include planning capabilities.
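A small sketch of this dynamic-programming construction is given below: value iteration on a 4-connected grid with unit step cost and a wall of blocked cells, which yields exactly an optimal cost-to-go of the kind defined above. The grid size, obstacle layout, and the choice of formula_28 = 1 per move are illustrative assumptions.
<syntaxhighlight lang="python">
import math

# Optimal cost-to-go phi on a small grid via Bellman backups (value iteration):
# phi(x) = min_u { L(x, u) + phi(f(x, u)) } with unit step cost L = 1.
# Grid size and obstacle cells are illustrative assumptions.
W, H = 6, 5
obstacles = {(2, 1), (2, 2), (2, 3)}
goal = (5, 4)
moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

phi = {(x, y): (0.0 if (x, y) == goal else math.inf)
       for x in range(W) for y in range(H) if (x, y) not in obstacles}

changed = True
while changed:                          # sweep until the Bellman equation holds
    changed = False
    for s in phi:
        if s == goal:
            continue
        best = min((1.0 + phi[(s[0] + dx, s[1] + dy)]
                    for dx, dy in moves if (s[0] + dx, s[1] + dy) in phi),
                   default=math.inf)
        if best < phi[s]:
            phi[s] = best
            changed = True

for y in reversed(range(H)):            # print the resulting navigation function
    print(" ".join(f"{phi[(x, y)]:4.0f}" if (x, y) in phi else "  ##"
                   for x in range(W)))
</syntaxhighlight>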
Stochastic Navigation Function.
If we assume that the transition dynamics of the system or the cost function are subject to noise, we obtain a stochastic optimal control problem with a cost formula_33 and dynamics formula_29. In the field of reinforcement learning the cost is replaced by a reward function formula_34 and the dynamics by the transition probabilities formula_35.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "X_g \\subset X"
},
{
"math_id": 2,
"text": "\\phi(x)"
},
{
"math_id": 3,
"text": "\\phi(x)=0\\ \\forall x \\in X_g"
},
{
"math_id": 4,
"text": " \\phi(x) = \\infty"
},
{
"math_id": 5,
"text": " {X_{g}}"
},
{
"math_id": 6,
"text": " x"
},
{
"math_id": 7,
"text": " x \\in X \\setminus {X_{g}}"
},
{
"math_id": 8,
"text": " x'"
},
{
"math_id": 9,
"text": " \\phi(x') < \\phi(x)"
},
{
"math_id": 10,
"text": "x_d"
},
{
"math_id": 11,
"text": "\n{\\varphi}(x) = \\frac{\\gamma_d (x)}{{{{\\left[ {\\gamma_d^{K}(x) + \\beta \\left( x \\right)} \\right]}^{\\frac{1}{K}}}}}\n"
},
{
"math_id": 12,
"text": "K"
},
{
"math_id": 13,
"text": "\\gamma_d(x)"
},
{
"math_id": 14,
"text": "{{||x - {x_d}|{|^2}}} "
},
{
"math_id": 15,
"text": "\\beta \\left( x \\right)"
},
{
"math_id": 16,
"text": "\n\\beta \\left( x \\right) = \\prod\\limits_{i = 0}^{{N_{o}}} {{\\beta _i}\\left( x \\right)}\n"
},
{
"math_id": 17,
"text": "\\beta _i(x)"
},
{
"math_id": 18,
"text": "x"
},
{
"math_id": 19,
"text": "\\Delta"
},
{
"math_id": 20,
"text": "\n\\beta _i (x) = \\Delta- p^i\\left( x \\right)\n"
},
{
"math_id": 21,
"text": "\n\\beta _0 (x)= - \\Delta + p^0\\left( x \\right) \n"
},
{
"math_id": 22,
"text": "p^i(x)"
},
{
"math_id": 23,
"text": "\\varphi"
},
{
"math_id": 24,
"text": "J"
},
{
"math_id": 25,
"text": "\\text{minimize } J(x_{1:T},u_{1:T})=\\int\\limits_T L(x_t,u_t,t) dt"
},
{
"math_id": 26,
"text": "\\text{subject to } \\dot{x_t} = f(x_t,u_t)"
},
{
"math_id": 27,
"text": "u"
},
{
"math_id": 28,
"text": "L"
},
{
"math_id": 29,
"text": "f"
},
{
"math_id": 30,
"text": "\\displaystyle \\phi(x_t) = \\min_{u_t \\in U(x_t)} \\Big\\{ L(x_t,u_t) + \\phi(f(x_t,u_t)) \\Big\\} "
},
{
"math_id": 31,
"text": " {X_{G}}"
},
{
"math_id": 32,
"text": " x \\in X \\setminus {X_{G}}"
},
{
"math_id": 33,
"text": "J(x_t,u_t)"
},
{
"math_id": 34,
"text": "R(x_t,u_t)"
},
{
"math_id": 35,
"text": "P(x_{t+1}|x_t,u_t)"
}
] | https://en.wikipedia.org/wiki?curid=10580078 |
1058218 | Semigroup action | An action or act of a semigroup on a set
In algebra and theoretical computer science, an action or act of a semigroup on a set is a rule which associates to each element of the semigroup a transformation of the set in such a way that the product of two elements of the semigroup (using the semigroup operation) is associated with the composite of the two corresponding transformations. The terminology conveys the idea that the elements of the semigroup are "acting" as transformations of the set. From an algebraic perspective, a semigroup action is a generalization of the notion of a group action in group theory. From the computer science point of view, semigroup actions are closely related to automata: the set models the state of the automaton and the action models transformations of that state in response to inputs.
An important special case is a monoid action or act, in which the semigroup is a monoid and the identity element of the monoid acts as the identity transformation of a set. From a category theoretic point of view, a monoid is a category with one object, and an act is a functor from that category to the category of sets. This immediately provides a generalization to monoid acts on objects in categories other than the category of sets.
Another important special case is a transformation semigroup. This is a semigroup of transformations of a set, and hence it has a tautological action on that set. This concept is linked to the more general notion of a semigroup by an analogue of Cayley's theorem.
"(A note on terminology: the terminology used in this area varies, sometimes significantly, from one author to another. See the article for details.)"
Formal definitions.
Let "S" be a semigroup. Then a (left) semigroup action (or act) of "S" is a set "X" together with an operation • : "S" × "X" → "X" which is compatible with the semigroup operation ∗ as follows:
This is the analogue in semigroup theory of a (left) group action, and is equivalent to a semigroup homomorphism into the set of functions on "X". Right semigroup actions are defined in a similar way using an operation • : "X" × "S" → "X" satisfying ("x" • "a") • "b" = "x" • ("a" ∗ "b").
If "M" is a monoid, then a (left) monoid action (or act) of "M" is a (left) semigroup action of "M" with the additional property that
where "e" is the identity element of "M". This correspondingly gives a monoid homomorphism. Right monoid actions are defined in a similar way. A monoid "M" with an action on a set is also called an operator monoid.
A semigroup action of "S" on "X" can be made into monoid act by adjoining an identity to the semigroup and requiring that it acts as the identity transformation on "X".
Terminology and notation.
If "S" is a semigroup or monoid, then a set "X" on which "S" acts as above (on the left, say) is also known as a (left) S"-act, S"-set, S"-action, S"-operand, or left act over "S". Some authors do not distinguish between semigroup and monoid actions, by regarding the identity axiom ("e" • "x" = "x") as empty when there is no identity element, or by using the term unitary "S"-act for an "S"-act with an identity.
The defining property of an act is analogous to the associativity of the semigroup operation, and means that all parentheses can be omitted. It is common practice, especially in computer science, to omit the operations as well so that both the semigroup operation and the action are indicated by juxtaposition. In this way strings of letters from "S" act on "X", as in the expression "stx" for "s", "t" in "S" and "x" in "X".
It is also quite common to work with right acts rather than left acts. However, every right S-act can be interpreted as a left act over the opposite semigroup, which has the same elements as S, but where multiplication is defined by reversing the factors, "s" • "t" = "t" • "s", so the two notions are essentially equivalent. Here we primarily adopt the point of view of left acts.
Acts and transformations.
It is often convenient (for instance if there is more than one act under consideration) to use a letter, such as formula_0, to denote the function
formula_1
defining the formula_2-action and hence write formula_3 in place of formula_4. Then for any formula_5 in formula_2, we denote by
formula_6
the transformation of formula_7 defined by
formula_8
By the defining property of an formula_2-act, formula_0 satisfies
formula_9
Further, consider a function formula_10. It is the same as formula_11 (see "Currying"). Because formula_12 is a bijection, semigroup actions can be defined as functions formula_13 which satisfy
formula_14
That is, formula_0 is a semigroup action of formula_2 on formula_7 if and only if formula_15 is a semigroup homomorphism from formula_2 to the full transformation monoid of formula_7.
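A small finite check of this curried point of view: the additive monoid of nonnegative integers acting on a five-element set by cyclic rotation, verifying that the map formula_10 respects the operation. The particular act is an illustrative choice, not one drawn from the text.
<syntaxhighlight lang="python">
# The additive monoid of nonnegative integers acting on X = {0, ..., 4} by
# cyclic rotation, n . x = (x + n) mod 5.  The curried map n -> T_n is then a
# monoid homomorphism into the full transformation monoid of X.
X = range(5)

def act(n, x):
    return (x + n) % 5

def T(n):
    """The transformation T_n of X induced by n (the curried action)."""
    return {x: act(n, x) for x in X}

def compose(f, g):
    """Function composition (f o g)(x) = f(g(x))."""
    return {x: f[g[x]] for x in X}

for m in range(6):
    for n in range(6):
        assert T(m + n) == compose(T(m), T(n))   # T_{m*n} = T_m o T_n
assert T(0) == {x: x for x in X}                 # the identity acts as the identity
print("monoid action verified")
</syntaxhighlight>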
"S"-homomorphisms.
Let "X" and "X"′ be "S"-acts. Then an "S"-homomorphism from "X" to "X"′ is a map
formula_16
such that
formula_17 for all formula_18 and formula_19.
The set of all such "S"-homomorphisms is commonly written as formula_20.
"M"-homomorphisms of "M"-acts, for "M" a monoid, are defined in exactly the same way.
"S"-Act and "M"-Act.
For a fixed semigroup "S", the left "S"-acts are the objects of a category, denoted "S"-Act, whose morphisms are the "S"-homomorphisms. The corresponding category of right "S"-acts is sometimes denoted by Act-"S". (This is analogous to the categories "R"-Mod and Mod-"R" of left and right modules over a ring.)
For a monoid "M", the categories "M"-Act and Act-"M" are defined in the same way.
Transformation semigroups.
A correspondence between transformation semigroups and semigroup actions is described below. If we restrict it to faithful semigroup actions, it has nice properties.
Any transformation semigroup can be turned into a semigroup action by the following construction. For any transformation semigroup formula_2 of formula_7, define a semigroup action formula_0 of formula_2 on formula_7 as formula_33 for formula_34. This action is faithful, which is equivalent to formula_35 being injective.
Conversely, for any semigroup action formula_0 of formula_2 on formula_7, define a transformation semigroup formula_36. In this construction we "forget" the set formula_2. formula_37 is equal to the image of formula_35. Let us denote formula_35 as formula_38 for brevity. If formula_38 is injective, then it is a semigroup isomorphism from formula_2 to formula_37. In other words, if formula_0 is faithful, then we forget nothing important. This claim is made precise by the following observation: if we turn formula_37 back into a semigroup action formula_39 of formula_37 on formula_7, then formula_40 for all formula_41. formula_0 and formula_39 are "isomorphic" via formula_38, i.e., we essentially recovered formula_0. Thus, some authors see no distinction between faithful semigroup actions and transformation semigroups.
Applications to computer science.
Semiautomata.
Transformation semigroups are of essential importance for the structure theory of finite state machines in automata theory. In particular, a "semiautomaton" is a triple (Σ,"X","T"), where Σ is a non-empty set called the "input alphabet", "X" is a non-empty set called the "set of states" and "T" is a function
formula_42
called the "transition function". Semiautomata arise from deterministic automata by ignoring the initial state and the set of accept states.
Given a semiautomaton, let "T""a": "X" → "X", for "a" ∈ Σ, denote the transformation of "X" defined by "T""a"("x") = "T"("a","x"). Then the semigroup of transformations of "X" generated by {"T""a" : "a" ∈ Σ} is called the "characteristic semigroup" or "transition system" of (Σ,"X","T"). This semigroup is a monoid, so this monoid is called the "characteristic" or "transition monoid". It is also sometimes viewed as a Σ∗-act on "X", where Σ∗ is the free monoid of strings generated by the alphabet Σ, and the action of strings extends the action of Σ via the property
formula_43
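The sketch below computes the transition monoid of a small three-state semiautomaton by closing the generator transformations under composition; the alphabet and transition table are illustrative assumptions.
<syntaxhighlight lang="python">
from itertools import product

# Transition monoid of a small semiautomaton (Sigma, X, T).  The states,
# alphabet, and transition table below are illustrative assumptions.
states = (0, 1, 2)
alphabet = "ab"
T = {("a", 0): 1, ("a", 1): 2, ("a", 2): 0,     # 'a' cycles the states
     ("b", 0): 0, ("b", 1): 0, ("b", 2): 2}     # 'b' collapses state 1 onto 0

def transform(letter):
    """T_a as a tuple listing the image of each state."""
    return tuple(T[(letter, x)] for x in states)

def compose(f, g):
    """Function composition (f o g)(x) = f(g(x))."""
    return tuple(f[g[x]] for x in states)

identity = tuple(states)                        # action of the empty string
monoid = {identity}
frontier = {transform(a) for a in alphabet}
while frontier:                                 # close under composition
    monoid |= frontier
    frontier = {compose(f, g) for f, g in product(monoid, repeat=2)} - monoid

print(f"transition monoid has {len(monoid)} elements")
</syntaxhighlight>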
Krohn–Rhodes theory.
Krohn–Rhodes theory, sometimes also called "algebraic automata theory", gives powerful decomposition results for finite transformation semigroups by cascading simpler components.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T"
},
{
"math_id": 1,
"text": " T\\colon S\\times X \\to X"
},
{
"math_id": 2,
"text": "S"
},
{
"math_id": 3,
"text": "T(s, x)"
},
{
"math_id": 4,
"text": "s\\cdot x"
},
{
"math_id": 5,
"text": "s"
},
{
"math_id": 6,
"text": " T_s\\colon X \\to X"
},
{
"math_id": 7,
"text": "X"
},
{
"math_id": 8,
"text": " T_s(x) = T(s,x)."
},
{
"math_id": 9,
"text": " T_{s*t} = T_s\\circ T_t."
},
{
"math_id": 10,
"text": "s\\mapsto T_s"
},
{
"math_id": 11,
"text": "\\operatorname{curry}(T):S\\to(X\\to X)"
},
{
"math_id": 12,
"text": "\\operatorname{curry}"
},
{
"math_id": 13,
"text": "S\\to(X\\to X)"
},
{
"math_id": 14,
"text": " \\operatorname{curry}(T)(s*t) = \\operatorname{curry}(T)(s)\\circ \\operatorname{curry}(T)(t)."
},
{
"math_id": 15,
"text": "\\operatorname{curry}(T)"
},
{
"math_id": 16,
"text": "F\\colon X\\to X'"
},
{
"math_id": 17,
"text": "F(sx) =s F(x)"
},
{
"math_id": 18,
"text": "s\\in S"
},
{
"math_id": 19,
"text": "x\\in X"
},
{
"math_id": 20,
"text": "\\mathrm{Hom}_S(X,X')"
},
{
"math_id": 21,
"text": "(S, *)"
},
{
"math_id": 22,
"text": "\\cdot = *"
},
{
"math_id": 23,
"text": "*"
},
{
"math_id": 24,
"text": "F\\colon (S, *) \\rightarrow (T, \\oplus)"
},
{
"math_id": 25,
"text": "s \\cdot t = F(s) \\oplus t"
},
{
"math_id": 26,
"text": "X^*"
},
{
"math_id": 27,
"text": "(\\mathbb{N}, \\times)"
},
{
"math_id": 28,
"text": "n \\cdot s = s^n"
},
{
"math_id": 29,
"text": "s^n"
},
{
"math_id": 30,
"text": "n"
},
{
"math_id": 31,
"text": "(\\mathbb{N}, \\cdot)"
},
{
"math_id": 32,
"text": "x \\cdot y = x^y"
},
{
"math_id": 33,
"text": "T(s, x) = s(x)"
},
{
"math_id": 34,
"text": " s\\in S, x\\in X"
},
{
"math_id": 35,
"text": "curry(T)"
},
{
"math_id": 36,
"text": "S' = \\{T_s \\mid s \\in S\\}"
},
{
"math_id": 37,
"text": "S'"
},
{
"math_id": 38,
"text": "f"
},
{
"math_id": 39,
"text": "T'"
},
{
"math_id": 40,
"text": "T'(f(s), x) = T(s, x)"
},
{
"math_id": 41,
"text": "s \\in S, x \\in X"
},
{
"math_id": 42,
"text": "T\\colon \\Sigma\\times X \\to X"
},
{
"math_id": 43,
"text": "T_{vw} = T_w \\circ T_v."
}
] | https://en.wikipedia.org/wiki?curid=1058218 |
10582276 | Frequency addition source of optical radiation | Frequency addition source of optical radiation (acronym FASOR) is used for a certain type of guide star laser deployed at US Air Force Research Laboratory facilities SOR and AMOS. The laser light is produced in a sum-frequency generation process from two solid-state laser sources that operate at different wavelengths. The "frequencies" of the sources add directly to a summed frequency. Thus, if the source wavelengths are formula_0 and formula_1, the resulting wavelength is
formula_2
Application.
The FASOR was initially used for many laser guide star experiments, ranging from mapping the photon return versus wavelength, power, and pointing location in the sky. Two FASORs were used to show the advantages of 'back pumping', i.e. pumping at both the D2a and D2b lines. Later a FASOR was used to measure the Earth's magnetic field. It has also been used for its intended application of generating a laser guide star for adaptive optics (see first reference). It is tuned to the D2a hyperfine component of the sodium D line and used to excite sodium atoms in the mesospheric upper atmosphere. The FASOR consists of two single-frequency injection-locked lasers close to 1064 and 1319 nm that are both resonant in a cavity containing a lithium triborate (LBO) crystal, which sums the frequencies, yielding 589.159 nm light.
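A minimal sketch of the wavelength arithmetic described above, using the two source wavelengths quoted in the text; the function name is an illustrative choice.

```python
def summed_wavelength(lambda_1_nm, lambda_2_nm):
    """Wavelength of the sum-frequency output: 1/lambda = 1/lambda_1 + 1/lambda_2."""
    return 1.0 / (1.0 / lambda_1_nm + 1.0 / lambda_2_nm)

print(summed_wavelength(1064.0, 1319.0))   # roughly 589 nm, near the sodium D2 line
```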
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\lambda_1"
},
{
"math_id": 1,
"text": "\\lambda_2"
},
{
"math_id": 2,
"text": " \\lambda = \\left(\\frac{1}{\\lambda_1} + \\frac{1}{\\lambda_2} \\right)^{-1}."
}
] | https://en.wikipedia.org/wiki?curid=10582276 |
1058299 | Particle image velocimetry | Method to measure velocities in fluid
Particle image velocimetry (PIV) is an optical method of flow visualization used in education and research. It is used to obtain instantaneous velocity measurements and related properties in fluids. The fluid is seeded with tracer particles which, for sufficiently small particles, are assumed to faithfully follow the flow dynamics (the degree to which the particles faithfully follow the flow is represented by the Stokes number). The fluid with entrained particles is illuminated so that particles are visible. The motion of the seeding particles is used to calculate speed and direction (the velocity field) of the flow being studied.
Other techniques used to measure flows are laser Doppler velocimetry and hot-wire anemometry. The main difference between PIV and those techniques is that PIV produces two-dimensional or even three-dimensional vector fields, while the other techniques measure the velocity at a point. During PIV, the particle concentration is such that it is possible to identify individual particles in an image, but not with certainty to track it between images. When the particle concentration is so low that it is possible to follow an individual particle it is called particle tracking velocimetry, while laser speckle velocimetry is used for cases where the particle concentration is so high that it is difficult to observe individual particles in an image.
Typical PIV apparatus consists of a camera (normally a digital camera with a charge-coupled device (CCD) chip in modern systems), a strobe or laser with an optical arrangement to limit the physical region illuminated (normally a cylindrical lens to convert a light beam to a line), a synchronizer to act as an external trigger for control of the camera and laser, the seeding particles and the fluid under investigation. A fiber-optic cable or liquid light guide may connect the laser to the lens setup. PIV software is used to post-process the optical images.
History.
Particle image velocimetry (PIV) is a non-intrusive optical flow measurement technique used to study fluid flow patterns and velocities. PIV has found widespread applications in various fields of science and engineering, including aerodynamics, combustion, oceanography, and biofluids. The development of PIV can be traced back to the early 20th century when researchers started exploring different methods to visualize and measure fluid flow.
The early days of PIV can be credited to the pioneering work of Ludwig Prandtl, a German physicist and engineer, who is often regarded as the father of modern aerodynamics. In the 1920s, Prandtl and his colleagues used shadowgraph and schlieren techniques to visualize and measure flow patterns in wind tunnels. These methods relied on the refractive index differences between the fluid regions of interest and the surrounding medium to generate contrast in the images. However, these methods were limited to qualitative observations and did not provide quantitative velocity measurements.
The early PIV setups were relatively simple and used photographic film as the image recording medium. A laser was used to illuminate particles, such as oil droplets or smoke, added to the flow, and the resulting particle motion was captured on film. The films were then developed and analyzed to obtain flow velocity information. These early PIV systems had limited spatial resolution and were labor-intensive, but they provided valuable insights into fluid flow behavior.
The advent of lasers in the 1960s revolutionized the field of flow visualization and measurement. Lasers provided a coherent and monochromatic light source that could be easily focused and directed, making them ideal for optical flow diagnostics. In the late 1960s and early 1970s, researchers such as Arthur L. Lavoie, Hervé L. J. H. Scohier, and Adrian Fouriaux independently proposed the concept of particle image velocimetry (PIV). PIV was initially used for studying air flows and measuring wind velocities, but its applications soon extended to other areas of fluid dynamics.
In the 1980s, the development of charge-coupled devices (CCDs) and digital image processing techniques revolutionized PIV. CCD cameras replaced photographic film as the image recording medium, providing higher spatial resolution, faster data acquisition, and real-time processing capabilities. Digital image processing techniques allowed for accurate and automated analysis of the PIV images, greatly reducing the time and effort required for data analysis.
The advent of digital imaging and computer processing capabilities in the 1980s and 1990s revolutionized PIV, leading to the development of advanced PIV techniques, such as multi-frame PIV, stereo-PIV, and time-resolved PIV. These techniques allowed for higher accuracy, higher spatial and temporal resolution, and three-dimensional measurements, expanding the capabilities of PIV and enabling its application in more complex flow systems.
In the following decades, PIV continued to evolve and advance in several key areas. One significant advancement was the use of dual or multiple exposures in PIV, which allowed for the measurement of both instantaneous and time-averaged velocity fields. Another was stereoscopic PIV (stereo-PIV), which uses two cameras viewing the measurement plane from different angles, allowing for the measurement of three-component velocity vectors in that plane. This provided a more complete picture of the flow field and enabled the study of complex flows, such as turbulence and vortices.
In the 2000s and beyond, PIV continued to evolve with the development of high-power lasers, high-speed cameras, and advanced image analysis algorithms. These advancements have enabled PIV to be used in extreme conditions, such as high-speed flows, combustion systems, and microscale flows, opening up new frontiers for PIV research. PIV has also been integrated with other measurement techniques, such as temperature and concentration measurements, and has been used in emerging fields, such as microscale and nanoscale flows, granular flows, and additive manufacturing.
The advancement of PIV has been driven by the development of new laser sources, cameras, and image analysis techniques. Advances in laser technology have led to the use of high-power lasers, such as Nd:YAG and diode lasers, which provide increased illumination intensity and allow for measurements in more challenging environments, such as high-speed flows and combustion systems. High-speed cameras with improved sensitivity and frame rates have also been developed, enabling the capture of transient flow phenomena with high temporal resolution. Furthermore, advanced image analysis techniques, such as correlation-based algorithms, phase-based methods, and machine learning algorithms, have been developed to enhance the accuracy and efficiency of PIV measurements.
Another major advancement in PIV was the development of digital correlation algorithms for image analysis. These algorithms allowed for more accurate and efficient processing of PIV images, enabling higher spatial resolution and faster data acquisition rates. Various correlation algorithms, such as cross-correlation, Fourier-transform-based correlation, and adaptive correlation, were developed and widely used in PIV research.
PIV has also benefited from the development of computational fluid dynamics (CFD) simulations, which have become powerful tools for predicting and analyzing fluid flow behavior. PIV data can be used to validate and calibrate CFD simulations, and in turn, CFD simulations can provide insights into the interpretation and analysis of PIV data. The combination of experimental PIV measurements and numerical simulations has enabled researchers to gain a deeper understanding of fluid flow phenomena and has led to new discoveries and advancements in various scientific and engineering fields.
In addition to the technical advancements, PIV has also been integrated with other measurement techniques, such as temperature and concentration measurements, to provide more comprehensive and multi-parameter flow measurements. For example, combining PIV with thermographic phosphors or laser-induced fluorescence allows for simultaneous measurement of velocity and temperature or concentration fields, providing valuable data for studying heat transfer, mixing, and chemical reactions in fluid flows.
Applications.
The historical development of PIV has been driven by the need for accurate and non-intrusive flow measurements in various fields of science and engineering. The early years of PIV were marked by the development of basic PIV techniques, such as two-frame PIV, and the application of PIV in fundamental fluid dynamics research, primarily in academic settings. As PIV gained popularity, researchers started using it in more practical applications, such as aerodynamics, combustion, and oceanography.
As PIV continues to advance and evolve, it is expected to find further applications in a wide range of fields, from fundamental research in fluid dynamics to practical applications in engineering, environmental science, and medicine. The continued development of PIV techniques, including advancements in lasers, cameras, image analysis algorithms, and integration with other measurement techniques, will further enhance its capabilities and broaden its applications.
In aerodynamics, PIV has been used to study the flow over aircraft wings, rotor blades, and other aerodynamic surfaces, providing insights into the flow behavior and aerodynamic performance of these systems.
As PIV gained popularity, it found applications in a wide range of fields beyond aerodynamics, including combustion, oceanography, biofluids, and microscale flows. In combustion research, PIV has been used to study the details of combustion processes, such as flame propagation, ignition, and fuel spray dynamics, providing valuable insights into the complex interactions between fuel and air in combustion systems. In oceanography, PIV has been used to study the motion of water currents, waves, and turbulence, aiding in the understanding of ocean circulation patterns and coastal erosion. In biofluids research, PIV has been applied to study blood flow in arteries and veins, respiratory flow, and the motion of cilia and flagella in microorganisms, providing important information for understanding physiological processes and disease mechanisms.
PIV has also been used in new and emerging fields, such as microscale and nanoscale flows, granular flows, and multiphase flows. Micro-PIV and nano-PIV have been used to study flows in microchannels, nanopores, and biological systems at the microscale and nanoscale, providing insights into the unique behaviors of fluids at these length scales. PIV has been applied to study the motion of particles in granular flows, such as avalanches and landslides, and to investigate multiphase flows, such as bubbly flows and oil-water flows, which are important in environmental and industrial processes. In microscale flows, conventional measurement techniques are challenging to apply due to the small length scales involved. Micro-PIV has been used to study flows in microfluidic devices, such as lab-on-a-chip systems, and to investigate phenomena such as droplet formation, mixing, and cell motion, with applications in drug delivery, biomedical diagnostics, and microscale engineering.
PIV has also found applications in advanced manufacturing processes, such as additive manufacturing, where understanding and optimizing fluid flow behavior is critical for achieving high-quality and high-precision products. PIV has been used to study the flow dynamics of gases, liquids, and powders in additive manufacturing processes, providing insights into the process parameters that affect the quality and properties of the manufactured products.
PIV has also been used in environmental science to study the dispersion of pollutants in air and water, sediment transport in rivers and coastal areas, and the behavior of pollutants in natural and engineered systems. In energy research, PIV has been used to study the flow behavior in wind turbines, hydroelectric power plants, and combustion processes in engines and turbines, aiding in the development of more efficient and environmentally friendly energy systems.
Equipment and apparatus.
Seeding particles.
The seeding particles are an inherently critical component of the PIV system. Depending on the fluid under investigation, the particles must be able to match the fluid properties reasonably well. Otherwise they will not follow the flow satisfactorily enough for the PIV analysis to be considered accurate. Ideal particles will have the same density as the fluid system being used, and are spherical (these particles are called microspheres). While the actual particle choice is dependent on the nature of the fluid, generally for macro PIV investigations they are glass beads, polystyrene, polyethylene, aluminum flakes or oil droplets (if the fluid under investigation is a gas). Refractive index for the seeding particles should be different from the fluid which they are seeding, so that the laser sheet incident on the fluid flow will reflect off of the particles and be scattered towards the camera.
The particles are typically of a diameter in the order of 10 to 100 micrometers. As for sizing, the particles should be small enough so that response time of the particles to the motion of the fluid is reasonably short to accurately follow the flow, yet large enough to scatter a significant quantity of the incident laser light. For some experiments involving combustion, seeding particle size may be smaller, in the order of 1 micrometer, to avoid the quenching effect that the inert particles may have on flames. Due to the small size of the particles, the particles' motion is dominated by Stokes' drag and settling or rising effects. In a model where particles are modeled as spherical (microspheres) at a very low Reynolds number, the ability of the particles to follow the fluid's flow is inversely proportional to the difference in density between the particles and the fluid, and also inversely proportional to the square of their diameter. The scattered light from the particles is dominated by Mie scattering and so is also proportional to the square of the particles' diameters. Thus the particle size needs to be balanced to scatter enough light to accurately visualize all particles within the laser sheet plane, but small enough to accurately follow the flow.
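As a rough illustration of this sizing trade-off, the following sketch estimates the Stokes relaxation time of a small spherical tracer and the resulting Stokes number; the particle and flow values in the example are assumptions chosen only for illustration.

```python
def stokes_response_time(d_p, rho_p, mu_f):
    """Stokes-drag relaxation time of a small sphere: tau_p = rho_p * d_p**2 / (18 * mu_f)."""
    return rho_p * d_p**2 / (18.0 * mu_f)

def stokes_number(tau_p, flow_speed, length_scale):
    """Stk = tau_p * U / L; values well below 1 indicate faithful flow tracking."""
    return tau_p * flow_speed / length_scale

# Assumed example: a 10 micrometre oil droplet (900 kg/m^3) seeding air (mu ~ 1.8e-5 Pa s).
tau = stokes_response_time(d_p=10e-6, rho_p=900.0, mu_f=1.8e-5)
print(tau, stokes_number(tau, flow_speed=10.0, length_scale=0.01))
```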
The seeding mechanism needs to also be designed so as to seed the flow to a sufficient degree without overly disturbing the flow.
Camera.
To perform PIV analysis on the flow, the camera must record two exposures of laser light scattered from the flow. Originally, when cameras could not capture multiple frames at high speed, both exposures were captured on the same frame and this single frame was used to determine the flow. A process called autocorrelation was used for this analysis. However, as a result of autocorrelation the direction of the flow becomes unclear, as it is not clear which particle spots are from the first pulse and which are from the second pulse. Faster digital cameras using CCD or CMOS chips have since been developed that can capture two frames in quick succession, with a few hundred ns between the frames. This has allowed each exposure to be isolated on its own frame for more accurate cross-correlation analysis. The limitation of typical cameras is that this fast rate is limited to a pair of shots, because each pair of shots must be transferred to the computer before another pair can be taken; between pairs, typical cameras operate at a much slower rate. High-speed CCD or CMOS cameras are available but are much more expensive.
Laser and optics.
For macro PIV setups, lasers are predominant due to their ability to produce high-power light beams with short pulse durations. This yields short exposure times for each frame. Nd:YAG lasers, commonly used in PIV setups, emit primarily at the 1064 nm wavelength and its harmonics (532 nm, 266 nm, etc.). For safety reasons, the laser emission is typically bandpass filtered to isolate the 532 nm harmonic (this is green light, the only harmonic visible to the naked eye). A fiber-optic cable or liquid light guide might be used to direct the laser light to the experimental setup.
The optics consist of a spherical lens and cylindrical lens combination. The cylindrical lens expands the laser into a plane while the spherical lens compresses the plane into a thin sheet. This is critical as the PIV technique cannot generally measure motion normal to the laser sheet and so ideally this is eliminated by maintaining an entirely 2-dimensional laser sheet. The spherical lens cannot compress the laser sheet into an actual 2-dimensional plane. The minimum thickness is on the order of the wavelength of the laser light and occurs at a finite distance from the optics setup (the focal point of the spherical lens). This is the ideal location to place the analysis area of the experiment.
The correct lens for the camera should also be selected to properly focus on and visualize the particles within the investigation area.
Synchronizer.
The synchronizer acts as an external trigger for both the camera(s) and the laser. While analogue systems in the form of a photosensor, rotating aperture and a light source have been used in the past, most systems in use today are digital. Controlled by a computer, the synchronizer can dictate the timing of each frame of the CCD camera's sequence in conjunction with the firing of the laser to within 1 ns precision. Thus the time between each pulse of the laser and the placement of the laser shot in reference to the camera's timing can be accurately controlled. Knowledge of this timing is critical as it is needed to determine the velocity of the fluid in the PIV analysis. Stand-alone electronic synchronizers, called digital delay generators, offer variable resolution timing from as low as 250 ps to as high as several ms. With up to eight channels of synchronized timing, they offer the means to control several flash lamps and Q-switches as well as provide for multiple camera exposures.
Analysis.
The frames are split into a large number of interrogation areas, or windows. It is then possible to calculate a displacement vector for each window with help of signal processing and autocorrelation or cross-correlation techniques. This is converted to a velocity using the time between laser shots and the physical size of each pixel on the camera. The size of the interrogation window should be chosen to have at least 6 particles per window on average.
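A minimal sketch of the correlation step for a single interrogation window is shown below. It assumes two grayscale frames stored as NumPy arrays and returns only the integer-pixel displacement; practical PIV codes add sub-pixel peak fitting, window weighting and outlier validation on top of this.

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Integer-pixel displacement of the particle pattern from win_a to win_b,
    taken as the peak of the FFT-based cross-correlation (no sub-pixel fit)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices beyond half the window size correspond to negative shifts (FFT wrap-around).
    dy, dx = [p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape)]
    return dx, dy

# Usage sketch (frame1, frame2 are grayscale images, dt is the laser pulse separation):
# dx, dy = window_displacement(frame1[:32, :32].astype(float), frame2[:32, :32].astype(float))
# u, v = dx * pixel_size / dt, dy * pixel_size / dt
```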
The synchronizer controls the timing between image exposures and also permits image pairs to be acquired at various times along the flow. For accurate PIV analysis, it is ideal that the region of the flow that is of interest should display an average particle displacement of about 8 pixels. This is a compromise between a longer time spacing which would allow the particles to travel further between frames, making it harder to identify which interrogation window traveled to which point, and a shorter time spacing, which could make it overly difficult to identify any displacement within the flow.
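A back-of-the-envelope sketch of choosing the pulse separation for the roughly 8 pixel target displacement mentioned above; the pixel pitch, magnification and flow speed are assumed example values.

```python
def pulse_separation(target_px, pixel_pitch_m, magnification, flow_speed):
    """Pulse separation giving roughly target_px pixels of mean particle displacement."""
    displacement_in_object_plane = target_px * pixel_pitch_m / magnification
    return displacement_in_object_plane / flow_speed

# Assumed example: 6.5 micrometre pixels, 0.2x magnification, 5 m/s flow.
print(pulse_separation(8, 6.5e-6, 0.2, 5.0))   # about 5.2e-5 s, i.e. roughly 52 microseconds
```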
The scattered light from each particle should be in the region of 2 to 4 pixels across on the image. If too large an area is recorded, particle image size drops and peak locking might occur with loss of sub pixel precision. There are methods to overcome the peak locking effect, but they require some additional work.
If there is in-house PIV expertise and time to develop a system, even though it is not trivial, it is possible to build a custom PIV system. Research-grade PIV systems do, however, have high-power lasers and high-end camera specifications in order to take measurements across the broadest spectrum of experiments required in research.
PIV is closely related to digital image correlation, an optical displacement measurement technique that uses correlation techniques to study the deformation of solid materials.
Pros and cons.
Advantages.
The method is, to a large degree, nonintrusive. The added tracers (if they are properly chosen) generally cause negligible distortion of the fluid flow.
Optical measurement avoids the need for Pitot tubes, hotwire anemometers or other intrusive flow measurement probes. The method is capable of measuring an entire two-dimensional cross-section (geometry) of the flow field simultaneously.
High-speed data processing allows the generation of large numbers of image pairs which, on a personal computer, may be analysed in real time or at a later time, so a large quantity of near-continuous information may be gained.
Sub pixel displacement values allow a high degree of accuracy, since each vector is the statistical average for many particles within a particular tile. Displacement can typically be accurate down to 10% of one pixel on the image plane.
Drawbacks.
In some cases the particles will, due to their higher density, not perfectly follow the motion of the fluid (gas/liquid). If experiments are done in water, for instance, it is easily possible to find very cheap particles (e.g. plastic powder with a diameter of ~60 μm) with the same density as water. If the density still does not fit, the density of the fluid can be tuned by increasing/decreasing its temperature. This leads to slight changes in the Reynolds number, so the fluid velocity or the size of the experimental object has to be changed to account for this.
Particle image velocimetry methods will in general not be able to measure components along the z-axis (towards or away from the camera). These components might not only be missed; they might also introduce interference in the data for the x/y-components caused by parallax. These problems do not exist in stereoscopic PIV, which uses two cameras to measure all three velocity components.
Since the resulting velocity vectors are based on cross-correlating the intensity distributions over small areas of the flow, the resulting velocity field is a spatially averaged representation of the actual velocity field. This obviously has consequences for the accuracy of spatial derivatives of the velocity field, vorticity, and spatial correlation functions that are often derived from PIV velocity fields.
PIV systems used in research often use class IV lasers and high-resolution, high-speed cameras, which bring cost and safety constraints.
More complex PIV setups.
Stereoscopic PIV.
Stereoscopic PIV utilises two cameras with separate viewing angles to extract the z-axis displacement. Both cameras must be focused on the same spot in the flow and must be properly calibrated to have the same point in focus.
In fundamental fluid mechanics, displacement within a unit time in the X, Y and Z directions are commonly defined by the variables U, V and W. As was previously described, basic PIV extracts the U and V displacements as functions of the in-plane X and Y directions. This enables calculations of the formula_0, formula_1, formula_2 and formula_3 velocity gradients. However, the other 5 terms of the velocity gradient tensor are unable to be found from this information. The stereoscopic PIV analysis also grants the Z-axis displacement component, W, within that plane. Not only does this grant the Z-axis velocity of the fluid at the plane of interest, but two more velocity gradient terms can be determined: formula_4 and formula_5. The velocity gradient components formula_6, formula_7, and formula_8 can not be determined.
The velocity gradient components form the tensor:
formula_9
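For planar or stereoscopic PIV data on a regular grid, the measurable gradient terms can be estimated by finite differences over the vector field, for example as in the sketch below. It assumes U and V are 2-D NumPy arrays with rows running along y; the W-component terms available from stereoscopic data could be added in the same way.

```python
import numpy as np

def in_plane_gradients(U, V, dx, dy):
    """In-plane velocity-gradient terms from a planar PIV vector field."""
    dUdy, dUdx = np.gradient(U, dy, dx)   # np.gradient differentiates along axis 0, then axis 1
    dVdy, dVdx = np.gradient(V, dy, dx)
    omega_z = dVdx - dUdy                 # out-of-plane vorticity component
    return dUdx, dUdy, dVdx, dVdy, omega_z
```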
Dual plane stereoscopic PIV.
This is an expansion of stereoscopic PIV by adding a second plane of investigation directly offset from the first one. Four cameras are required for this analysis. The two planes of laser light are created by splitting the laser emission with a beam splitter into two beams. Each beam is then polarized orthogonally with respect to one another. Next, they are transmitted through a set of optics and used to illuminate one of the two planes simultaneously.
The four cameras are paired into groups of two. Each pair focuses on one of the laser sheets in the same manner as single-plane stereoscopic PIV. Each of the four cameras has a polarizing filter designed to only let pass the polarized scattered light from the respective planes of interest. This essentially creates a system by which two separate stereoscopic PIV analysis setups are run simultaneously with only a minimal separation distance between the planes of interest.
This technique allows the determination of the three velocity gradient components single-plane stereoscopic PIV could not calculate: formula_6, formula_7, and formula_8. With this technique, the entire velocity gradient tensor of the fluid at the 2-dimensional plane of interest can be quantified. A difficulty arises in that the laser sheets should be maintained close enough together so as to approximate a two-dimensional plane, yet offset enough that meaningful velocity gradients can be found in the z-direction.
Multi-plane stereoscopic PIV.
There are several extensions of the dual-plane stereoscopic PIV idea available. There is an option to create several parallel laser sheets using a set of beamsplitters and quarter-wave plates, providing three or more planes, using a single laser unit and stereoscopic PIV setup, called XPIV.
Micro PIV.
With the use of an epifluorescent microscope, microscopic flows can be analyzed. MicroPIV makes use of fluorescing particles that excite at a specific wavelength and emit at another wavelength. Laser light is reflected through a dichroic mirror, travels through an objective lens that focuses on the point of interest, and illuminates a regional volume. The emission from the particles, along with reflected laser light, shines back through the objective, the dichroic mirror and through an emission filter that blocks the laser light. Where PIV draws its 2-dimensional analysis properties from the planar nature of the laser sheet, microPIV utilizes the ability of the objective lens to focus on only one plane at a time, thus creating a 2-dimensional plane of viewable particles.
MicroPIV particles are on the order of several hundred nm in diameter, meaning they are extremely susceptible to Brownian motion. Thus, a special ensemble averaging analysis technique must be utilized for this technique. The cross-correlation of a series of basic PIV analyses are averaged together to determine the actual velocity field. Thus, only steady flows can be investigated. Special preprocessing techniques must also be utilized since the images tend to have a zero-displacement bias from background noise and low signal-noise ratios. Usually, high numerical aperture objectives are also used to capture the maximum emission light possible. Optic choice is also critical for the same reasons.
Holographic PIV.
Holographic PIV (HPIV) encompasses a variety of experimental techniques which use the interference of coherent light scattered by a particle and a reference beam to encode information of the amplitude and phase of the scattered light incident on a sensor plane. This encoded information, known as a hologram, can then be used to reconstruct the original intensity field by illuminating the hologram with the original reference beam via optical methods or digital approximations. The intensity field is interrogated using 3-D cross-correlation techniques to yield a velocity field.
Off-axis HPIV uses separate beams to provide the object and reference waves. This setup is used to prevent speckle noise from being generated by interference of the two waves within the scattering medium, which would occur if they were both propagated through the medium. An off-axis experiment is a highly complex optical system comprising numerous optical elements, and the reader is referred to an example schematic in Sheng et al. for a more complete presentation.
In-line holography is another approach that provides some unique advantages for particle imaging. Perhaps the largest of these is the use of forward scattered light, which is orders of magnitude brighter than scattering oriented normal to the beam direction. Additionally, the optical setup of such systems is much simpler because the residual light does not need to be separated and recombined at a different location. The in-line configuration also provides a relatively easy extension to apply CCD sensors, creating a separate class of experiments known as digital in-line holography. The complexity of such setups shifts from the optical setup to image post-processing, which involves the use of simulated reference beams. Further discussion of these topics is beyond the scope of this article and is treated in Arroyo and Hinsch.
A variety of issues degrade the quality of HPIV results. The first class of issues involves the reconstruction itself. In holography, the object wave of a particle is typically assumed to be spherical; however, due to Mie scattering theory, this wave is a complex shape which can distort the reconstructed particle. Another issue is the presence of substantial speckle noise which lowers the overall signal-to-noise ratio of particle images. This effect is of greater concern for in-line holographic systems because the reference beam is propagated through the volume along with the scattered object beam. Noise can also be introduced through impurities in the scattering medium, such as temperature variations and window blemishes. Because holography requires coherent imaging, these effects are much more severe than traditional imaging conditions. The combination of these factors increases the complexity of the correlation process. In particular, the speckle noise in an HPIV recording often prevents traditional image-based correlation methods from being used. Instead, single particle identification and correlation are implemented, which set limits on particle number density. A more comprehensive outline of these error sources is given in Meng et al.
In light of these issues, it may seem that HPIV is too complicated and error-prone to be used for flow measurements. However, many impressive results have been obtained with all holographic approaches. Svizher and Cohen used a hybrid HPIV system to study the physics of hairpin vortices. Tao et al. investigated the alignment of vorticity and strain rate tensors in high Reynolds number turbulence. As a final example, Sheng et al. used holographic microscopy to perform near-wall measurements of turbulent shear stress and velocity in turbulent boundary layers.
Scanning PIV.
By using a rotating mirror, a high-speed camera and correcting for geometric changes, PIV can be performed nearly instantly on a set of planes throughout the flow field. Fluid properties between the planes can then be interpolated. Thus, a quasi-volumetric analysis can be performed on a target volume. Scanning PIV can be performed in conjunction with the other 2-dimensional PIV methods described to approximate a 3-dimensional volumetric analysis.
Tomographic PIV.
Tomographic PIV is based on the illumination, recording, and reconstruction of tracer particles within a 3-D measurement volume. The technique uses several cameras to record simultaneous views of the illuminated volume, which is then reconstructed to yield a discretized 3-D intensity field. A pair of intensity fields are analyzed using 3-D cross-correlation algorithms to calculate the 3-D, 3-C velocity field within the volume. The technique was originally developed
by Elsinga et al. in 2006.
The reconstruction procedure is a complex under-determined inverse problem. The primary complication is that a single set of views can result from a large number of 3-D volumes. Procedures to properly determine the unique volume from a set of views are the foundation for the field of tomography. In most Tomo-PIV experiments, the multiplicative algebraic reconstruction technique (MART) is used. The advantage of this pixel-by-pixel reconstruction technique is that it avoids the need to identify individual particles. Reconstructing the discretized 3-D intensity field is computationally intensive and, beyond MART, several developments have sought to significantly reduce this computational expense, for example the multiple line-of-sight simultaneous multiplicative algebraic reconstruction technique (MLOS-SMART)
which takes advantage of the sparsity of the 3-D intensity field to reduce memory storage and calculation requirements.
As a rule of thumb, at least four cameras are needed for acceptable reconstruction accuracy, and best results are obtained when the cameras are placed at approximately 30 degrees normal to the measurement volume. Many additional factors are necessary to consider for a successful experiment.
Tomo-PIV has been applied to a broad range of flows. Examples include the structure of a turbulent boundary layer/shock wave interaction, the vorticity of a cylinder wake or pitching airfoil,
rod-airfoil aeroacoustic experiments, and to measure small-scale, micro flows. More recently, Tomo-PIV has been used together with 3-D particle tracking velocimetry to understand predator-prey interactions, and portable version of Tomo-PIV has been used to study unique swimming organisms in Antarctica.
Thermographic PIV.
Thermographic PIV is based on the use of thermographic phosphors as seeding particles. The use of these thermographic phosphors permits simultaneous measurement of velocity and temperature in a flow.
Thermographic phosphors consist of ceramic host materials doped with rare-earth or transition metal ions, which exhibit phosphorescence when they are illuminated with UV-light. The decay time and the spectra of this phosphorescence are temperature sensitive and offer two different methods to measure temperature. The decay time method consists on the fitting of the phosphorescence decay to an exponential function and is normally used in point measurements, although it has been demonstrated in surface measurements. The intensity ratio between two different spectral lines of the phosphorescence emission, tracked using spectral filters, is also temperature-dependent and can be employed for surface measurements.
The micrometre-sized phosphor particles used in thermographic PIV are seeded into the flow as a tracer and, after illumination with a thin laser light sheet, the temperature of the particles can be measured from the phosphorescence, normally using an intensity ratio technique. It is important that the particles are of small size so that they not only follow the flow satisfactorily but also rapidly assume its temperature. For a diameter of 2 μm, the thermal slip between particle and gas is as small as the velocity slip.
Illumination of the phosphor is achieved using UV light. Most thermographic phosphors absorb light in a broad band in the UV and therefore can be excited using an Nd:YAG laser. Theoretically, the same light can be used both for PIV and temperature measurements, but this would mean that UV-sensitive cameras are needed. In practice, two different beams originating from separate lasers are overlapped. While one of the beams is used for velocity measurements, the other is used to measure the temperature.
The use of thermographic phosphors offers some advantageous features including ability to survive in reactive and high temperature environments, chemical stability and insensitivity of their phosphorescence emission to pressure and gas composition. In addition, thermographic phosphors emit light at different wavelengths, allowing spectral discrimination against excitation light and background.
Thermographic PIV has been demonstrated for time-averaged and single-shot measurements. Recently, time-resolved high-speed (3 kHz) measurements have also been successfully performed.
Artificial Intelligence PIV.
With the development of artificial intelligence, there have been scientific publications and commercial software proposing PIV calculations based on deep learning and convolutional neural networks. The methodology used stems mainly from optical flow neural networks popular in machine vision. A data set that includes particle images is generated to train the parameters of the networks. The result is a deep neural network for PIV which can provide estimation of dense motion, down to a maximum of one vector for one pixel if the recorded images allow. AI PIV promises a dense velocity field, not limited by the size of the interrogation window, which limits traditional PIV to one vector per 16 x 16 pixels.
Real time processing and applications of PIV.
With the advance of digital technologies, real-time processing and applications of PIV became possible. For instance, GPUs can be used to substantially speed up the direct or Fourier-transform-based correlations of single interrogation windows. Similarly, multi-processing, parallel or multi-threading processes on several CPUs or multi-core CPUs are beneficial for the distributed processing of multiple interrogation windows or multiple images. Some applications use real-time image processing methods, such as FPGA-based on-the-fly image compression or image processing. More recently, real-time PIV measurement and processing capabilities have been implemented for future use in active flow control with flow-based feedback.
Applications.
PIV has been applied to a wide range of flow problems, varying from the flow over an aircraft wing in a wind tunnel to vortex formation in prosthetic heart valves. 3-dimensional techniques have been sought to analyze turbulent flow and jets.
Rudimentary PIV algorithms based on cross-correlation can be implemented in a matter of hours, while more sophisticated algorithms may require a significant investment of time. Several open source implementations are available. Application of PIV in the US education system has been limited due to high price and safety concerns of industrial research grade PIV systems.
Granular PIV: velocity measurement in granular flows and avalanches.
PIV can also be used to measure the velocity field of the free surface and basal boundary in granular flows such as those in shaken containers, tumblers and avalanches.
This analysis is particularly well-suited for nontransparent media such as sand, gravel, quartz, or other granular materials that are common in geophysics. This PIV approach is called "granular PIV". The set-up for granular PIV differs from the usual PIV setup in that the optical surface structure which is produced by illumination of the surface of the granular flow is already sufficient to detect the motion. This means one does not need to add tracer particles in the bulk material.
Notes.
<templatestyles src="Reflist/styles.css" />
External links.
Test and Measurement at Curlie
PIV research at the Laboratory for Experimental Fluid Dynamics (J. Katz lab) | [
{
"math_id": 0,
"text": "U_x"
},
{
"math_id": 1,
"text": "V_y"
},
{
"math_id": 2,
"text": "U_y"
},
{
"math_id": 3,
"text": "V_x"
},
{
"math_id": 4,
"text": "W_x"
},
{
"math_id": 5,
"text": "W_y"
},
{
"math_id": 6,
"text": "U_z"
},
{
"math_id": 7,
"text": "V_z"
},
{
"math_id": 8,
"text": "W_z"
},
{
"math_id": 9,
"text": "\n\\begin{bmatrix}\n U_x & U_y & U_z \\\\\n V_x & V_y & V_z \\\\ \n W_x & W_y & W_z \\\\\n\\end{bmatrix}\n"
}
] | https://en.wikipedia.org/wiki?curid=1058299 |
105836 | Oncotic pressure | Measure of pressure exerted by large dissolved molecules in biological fluids
Oncotic pressure, or colloid osmotic pressure, is a type of osmotic pressure induced by the plasma proteins, notably albumin, in a blood vessel's plasma (or in another body fluid such as lymph) that tends to pull fluid back into the capillary. Participating colloids displace water molecules, creating a relative water molecule deficit, with water molecules moving back into the circulatory system at the lower-pressure venous end of capillaries.
It has an effect opposing both the hydrostatic blood pressure, which pushes water and small molecules out of the blood into the interstitial spaces at the arterial end of capillaries, and the interstitial colloidal osmotic pressure. These interacting factors determine the partitioning of extracellular water between the blood plasma and the extravascular space.
Oncotic pressure strongly affects the physiological function of the circulatory system. It is suspected to have a major effect on the pressure across the glomerular filter. However, this concept has been strongly criticised and attention has shifted to the impact of the intravascular glycocalyx layer as the major player.
Etymology.
The word 'oncotic' by definition is termed as 'pertaining to swelling', indicating the effect of oncotic imbalance on the swelling of tissues.
The word itself is derived from onco- and -ic; 'onco-' meaning 'pertaining to mass or tumors' and '-ic', which forms an adjective.
Description.
Throughout the body, dissolved compounds have an osmotic pressure. Because large plasma proteins cannot easily cross through the capillary walls, their effect on the osmotic pressure of the capillary interiors will, to some extent, balance out the tendency for fluid to leak out of the capillaries. In other words, the oncotic pressure tends to pull fluid into the capillaries. In conditions where plasma proteins are reduced, e.g. from being lost in the urine (proteinuria), there will be a reduction in oncotic pressure and an increase in filtration across the capillary, resulting in excess fluid buildup in the tissues (edema).
The large majority of oncotic pressure in capillaries is generated by the presence of high quantities of albumin, a protein that contributes approximately 80% of the total oncotic pressure exerted by blood plasma on interstitial fluid. The total oncotic pressure of an average capillary is about 28 mmHg, with albumin contributing approximately 22 mmHg, despite representing only about 50% of all protein in blood plasma at 35–50 g/L. Because blood proteins cannot escape through the capillary endothelium, the oncotic pressure of capillary beds tends to draw water into the vessels. It is necessary to understand the oncotic pressure as a balance; because the blood proteins are retained within the vessel, less plasma fluid can exit it.
Oncotic pressure is represented by the symbol Π or π in the Starling equation and elsewhere. The Starling equation in particular describes filtration in volume/s (formula_0) by relating oncotic pressure (formula_1) to capillary hydrostatic pressure (formula_2), interstitial fluid hydrostatic pressure (formula_3), and interstitial fluid oncotic pressure (formula_4), as well as several descriptive coefficients, as shown below:
formula_5
At the arteriolar end of the capillary, blood pressure starts at about 36 mmHg and decreases to around 15 mmHg at the venous end, while oncotic pressure remains stable at 25–28 mmHg. Within the capillary, reabsorption driven by this pressure reversal at the venous end is estimated to return around 90% of the filtered fluid, with the remaining 10% being returned via the lymphatics in order to maintain a stable blood volume.
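A minimal sketch of the Starling balance using the pressures quoted above; the interstitial pressures, the reflection coefficient and the filtration coefficient are placeholder assumptions chosen only to illustrate the change of sign along the capillary.

```python
def starling_filtration(Lp_S, P_c, P_i, pi_p, pi_i, sigma=1.0):
    """J_v = Lp*S * ((P_c - P_i) - sigma*(pi_p - pi_i)); positive means net filtration out."""
    return Lp_S * ((P_c - P_i) - sigma * (pi_p - pi_i))

# Pressures in mmHg; Lp_S, P_i, pi_i and sigma are illustrative placeholders.
arteriolar_end = starling_filtration(Lp_S=1.0, P_c=36, P_i=0, pi_p=28, pi_i=3)
venous_end     = starling_filtration(Lp_S=1.0, P_c=15, P_i=0, pi_p=28, pi_i=3)
print(arteriolar_end, venous_end)   # positive: fluid filtered out; negative: fluid reabsorbed
```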
Physiological impact.
In tissues, physiological disruption can arise with decreased oncotic pressure, which can be determined using blood tests for protein concentration.
Decreased colloidal osmotic pressure, most notably seen in hypoalbuminemia, can cause edema and decrease in blood volume as fluid is not reabsorbed into the bloodstream. Colloid pressure in these cases can be lost due to a number of different factors, but primarily decreased colloid production or increased loss of colloids through glomerular filtration. This low pressure often correlates with poor surgical outcomes.
In the clinical setting, there are two types of fluids that are used for intravenous drips: crystalloids and colloids. Crystalloids are aqueous solutions of mineral salts or other water-soluble molecules. Colloids contain larger insoluble molecules, such as gelatin. There is some debate concerning the advantages and disadvantages of using biological vs. synthetic colloid solutions. Oncotic pressure values are approximately 290 mOsm per kg of water, which differs slightly from the osmotic pressure of the blood, which has values approximating 300 mOsm/L. These colloidal solutions are typically used to remedy low colloid concentration, such as in hypoalbuminemia, but are also suspected to assist in injuries that typically increase fluid loss, such as burns.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "J_\\mathrm{v}"
},
{
"math_id": 1,
"text": "\\pi_\\mathrm{p}"
},
{
"math_id": 2,
"text": "P_\\mathrm{c}"
},
{
"math_id": 3,
"text": "P_\\mathrm{i}"
},
{
"math_id": 4,
"text": "\\pi_\\mathrm{i}"
},
{
"math_id": 5,
"text": "\\ J_\\mathrm{v} = L_\\mathrm{p} S ( [P_\\mathrm{c} - P_\\mathrm{i}] - \\sigma[\\pi_\\mathrm{p} - \\pi_\\mathrm{i}] )"
}
] | https://en.wikipedia.org/wiki?curid=105836 |
10584297 | State–action–reward–state–action | Machine learning algorithm
<templatestyles src="Machine learning/styles.css"/>
State–action–reward–state–action (SARSA) is an algorithm for learning a Markov decision process policy, used in the reinforcement learning area of machine learning. It was proposed by Rummery and Niranjan in a technical note with the name "Modified Connectionist Q-Learning" (MCQ-L). The alternative name SARSA, proposed by Rich Sutton, was only mentioned as a footnote.
This name reflects the fact that the main function for updating the Q-value depends on the current state of the agent "S1", the action the agent chooses "A1", the reward "R2" the agent gets for choosing this action, the state "S2" that the agent enters after taking that action, and finally the next action "A2" the agent chooses in its new state. The acronym for the quintuple (St, At, Rt+1, St+1, At+1) is SARSA. Some authors use a slightly different convention and write the quintuple (St, At, Rt, St+1, At+1), depending on which time step the reward is formally assigned. The rest of the article uses the former convention.
formula_0
Algorithm.
A SARSA agent interacts with the environment and updates the policy based on actions taken, hence this is known as an "on-policy learning algorithm". The Q value for a state-action is updated by an error, adjusted by the learning rate α. Q values represent the possible reward received in the next time step for taking action "a" in state "s", plus the discounted future reward received from the next state-action observation.
Watkins's Q-learning updates an estimate of the optimal state-action value function formula_1 based on the maximum reward of available actions. While SARSA learns the Q values associated with taking the policy it follows itself, Watkins's Q-learning learns the Q values associated with taking the optimal policy while following an exploration/exploitation policy.
Some optimizations of Watkins's Q-learning may be applied to SARSA.
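A minimal tabular sketch of the update rule above; the env.reset()/env.step() interface and the epsilon-greedy action selection are assumptions of this example rather than part of the algorithm's definition.

```python
import random
from collections import defaultdict

def epsilon_greedy(Q, state, actions, eps):
    """Pick a random action with probability eps, otherwise the greedy one."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def sarsa(env, actions, episodes, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular SARSA: Q(S,A) <- Q(S,A) + alpha * (R + gamma * Q(S',A') - Q(S,A))."""
    Q = defaultdict(float)                    # initial conditions: all Q values start at 0
    for _ in range(episodes):
        s = env.reset()                       # assumed: returns the initial state
        a = epsilon_greedy(Q, s, actions, eps)
        done = False
        while not done:
            s_next, r, done = env.step(a)     # assumed: returns (next state, reward, done)
            a_next = epsilon_greedy(Q, s_next, actions, eps)
            target = r if done else r + gamma * Q[(s_next, a_next)]
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s, a = s_next, a_next
    return Q
```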
Hyperparameters.
Learning rate (alpha).
The learning rate determines to what extent newly acquired information overrides old information. A factor of 0 will make the agent not learn anything, while a factor of 1 would make the agent consider only the most recent information.
Discount factor (gamma).
The discount factor determines the importance of future rewards. A discount factor of 0 makes the agent "opportunistic", or "myopic", e.g., by only considering current rewards, while a factor approaching 1 will make it strive for a long-term high reward. If the discount factor meets or exceeds 1, the formula_2 values may diverge.
Initial conditions ("Q"("S"0, "A"0)).
Since SARSA is an iterative algorithm, it implicitly assumes an initial condition before the first update occurs. A high (infinite) initial value, also known as "optimistic initial conditions", can encourage exploration: no matter what action takes place, the update rule causes the chosen action to have lower values than the other alternatives, thus increasing their choice probability. In 2013 it was suggested that the first reward formula_3 could be used to reset the initial conditions. According to this idea, the first time an action is taken the reward is used to set the value of formula_2. This allows immediate learning in case of fixed deterministic rewards. This resetting-of-initial-conditions (RIC) approach seems to be consistent with human behavior in repeated binary choice experiments.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Q^{new}(S_t, A_t) \\leftarrow (1 - \\alpha) Q(S_t,A_t) + \\alpha \\, [R_{t+1} + \\gamma \\, Q(S_{t+1}, A_{t+1})]"
},
{
"math_id": 1,
"text": "Q^*"
},
{
"math_id": 2,
"text": "Q"
},
{
"math_id": 3,
"text": "r"
}
] | https://en.wikipedia.org/wiki?curid=10584297 |
10586076 | Triple product property | In abstract algebra, the triple product property is an identity satisfied in some groups.
Let formula_0 be a non-trivial group. Three nonempty subsets formula_1 are said to have the "triple product property" in formula_0 if for all elements formula_2, formula_3, formula_4 it is the case that
formula_5
where formula_6 is the identity of formula_0.
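The definition can be checked directly by brute force for small subsets. The sketch below does this for subsets of the cyclic group Z_n written additively, which is an arbitrary illustrative choice of group rather than anything from the text.

```python
from itertools import product

def has_triple_product_property(n, S, T, U):
    """Brute-force check of the triple product property for subsets of Z_n
    (written additively, so the identity is 0 and the inverse of s is -s)."""
    for s, s2, t, t2, u, u2 in product(S, S, T, T, U, U):
        # s' s^-1 t' t^-1 u' u^-1 = 1 becomes (s'-s) + (t'-t) + (u'-u) = 0 mod n.
        if (s2 - s + t2 - t + u2 - u) % n == 0:
            if not (s2 == s and t2 == t and u2 == u):
                return False
    return True

# Small illustrative check in Z_7; the subsets are arbitrary example choices.
print(has_triple_product_property(7, {0, 1}, {0, 3}, {0}))   # True
```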
It plays a role in research of fast matrix multiplication algorithms. | [
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "S, T, U \\subset G"
},
{
"math_id": 2,
"text": "s, s' \\in S"
},
{
"math_id": 3,
"text": "t, t' \\in T"
},
{
"math_id": 4,
"text": "u, u' \\in U"
},
{
"math_id": 5,
"text": "\ns's^{-1}t't^{-1}u'u^{-1} = 1 \\Rightarrow s' = s, t' = t, u' = u\n"
},
{
"math_id": 6,
"text": "1"
}
] | https://en.wikipedia.org/wiki?curid=10586076 |
10586673 | Optical properties of water and ice | The refractive index of water at 20 °C for visible light is 1.33. The refractive index of normal ice is 1.31 (from List of refractive indices). In general, an index of refraction is a complex number with real and imaginary parts, where the latter indicates the strength of absorption loss at a particular wavelength. In the visible part of the electromagnetic spectrum, the imaginary part of the refractive index is very small. However, water and ice absorb in infrared and close the infrared atmospheric window, thereby contributing to the greenhouse effect.
The absorption spectrum of pure water is used in numerous applications, including light scattering and absorption by ice crystals and cloud water droplets, theories of the rainbow, determination of the single-scattering albedo, ocean color, and many others.
Quantitative description of the refraction index.
Over the wavelengths from 0.2 μm to 1.2 μm, and over temperatures from −12 °C to 500 °C, the real part of the index of refraction of water can be calculated by the following empirical expression:
formula_0
Where:
formula_1,
formula_2, and
formula_3
and the appropriate constants are
formula_4 = 0.244257733, formula_5 = 0.00974634476, formula_6 = −0.00373234996, formula_7 = 0.000268678472, formula_8 = 0.0015892057, formula_9 = 0.00245934259, formula_10 = 0.90070492, formula_11 = −0.0166626219, formula_12 = 273.15 K, formula_13 = 1000 kg/m3, formula_14 = 589 nm, formula_15 = 5.432937, and formula_16 = 0.229202.
In the above expression, T is the absolute temperature of water (in K), formula_17 is the wavelength of light in nm, formula_18 is the density of the water in kg/m3, and n is the real part of the index of refraction of water.
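A direct transcription of the empirical expression into code, using the constants listed above, might look like the following sketch; the example call reproduces the familiar value of about 1.33 for visible light near room temperature (the density used for 20 °C is an assumed value).

```python
import math

# Constants of the empirical fit quoted above.
a = [0.244257733, 0.00974634476, -0.00373234996, 0.000268678472,
     0.0015892057, 0.00245934259, 0.90070492, -0.0166626219]
T_REF, RHO_REF, LAMBDA_REF = 273.15, 1000.0, 589.0   # K, kg/m^3, nm
LAMBDA_UV, LAMBDA_IR = 0.229202, 5.432937            # dimensionless

def water_refractive_index(T_kelvin, wavelength_nm, density_kg_m3):
    """Real part n of the refractive index of water from the empirical expression."""
    Tb = T_kelvin / T_REF
    rb = density_kg_m3 / RHO_REF
    lb = wavelength_nm / LAMBDA_REF
    rhs = (a[0] + a[1]*rb + a[2]*Tb + a[3]*lb**2*Tb + a[4]/lb**2
           + a[5]/(lb**2 - LAMBDA_UV**2) + a[6]/(lb**2 - LAMBDA_IR**2) + a[7]*rb**2)
    f = rhs * rb                                      # f = (n^2 - 1) / (n^2 + 2)
    return math.sqrt((1 + 2*f) / (1 - f))

# Sodium D-line light at 20 degrees C, assuming a density of 998.2 kg/m^3.
print(water_refractive_index(293.15, 589.0, 998.2))   # approximately 1.33
```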
Volumic mass of water.
In the above formula, the density of water also varies with temperature and is defined by:
formula_19
with:
Refractive index (real and imaginary parts) for liquid water.
The total refractive index of water is given as "m = n + ik". The absorption coefficient α' is used in the Beer–Lambert law with the prime here signifying base e convention. Values are for water at 25 °C, and were obtained through various sources in the cited literature review.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{n^{2}-1}{n^{2}+2}(1/\\overline{\\rho })=a_{0}+a_{1}\\overline{\\rho}+a_{2}\\overline{T}+a_{3}{\\overline{\\lambda}}^{2}\\overline{T}+\\frac{a_{4}}{{\\overline{\\lambda}}^{2}}+\\frac{a_{5}}{{\\overline{\\lambda }}^{2}-{\\overline{\\lambda}}_{\\mathit{UV}}^{2}}+\\frac{a_{6}}{{\\overline{\\lambda}}^{2}-{\\overline{\\lambda }}_{\\mathit{IR}}^{2}}+a_{7}{\\overline{\\rho}}^{2}"
},
{
"math_id": 1,
"text": "\\overline T = \\frac{T}{T^{\\text{*}}}"
},
{
"math_id": 2,
"text": "\\overline \\rho = \\frac{\\rho}{\\rho^{\\text{*}}}"
},
{
"math_id": 3,
"text": "\\overline \\lambda = \\frac{\\lambda}{\\lambda^{\\text{*}}}"
},
{
"math_id": 4,
"text": "a_0"
},
{
"math_id": 5,
"text": "a_1"
},
{
"math_id": 6,
"text": "a_2"
},
{
"math_id": 7,
"text": "a_3"
},
{
"math_id": 8,
"text": "a_4"
},
{
"math_id": 9,
"text": "a_5"
},
{
"math_id": 10,
"text": "a_6"
},
{
"math_id": 11,
"text": "a_7"
},
{
"math_id": 12,
"text": "T^{*}"
},
{
"math_id": 13,
"text": "\\rho^{*}"
},
{
"math_id": 14,
"text": "\\lambda^{*}"
},
{
"math_id": 15,
"text": "\\overline\\lambda_{\\text{IR}}"
},
{
"math_id": 16,
"text": "\\overline\\lambda_{\\text{UV}}"
},
{
"math_id": 17,
"text": "\\lambda"
},
{
"math_id": 18,
"text": "\\rho"
},
{
"math_id": 19,
"text": "\\rho(t) = a_5 \\left( 1-\\frac{(t+a_1)^2(t+a_2)}{a_3(t+a_4)} \\right)"
}
] | https://en.wikipedia.org/wiki?curid=10586673 |
1058719 | Harmonic spectrum | A harmonic spectrum is a spectrum containing only frequency components whose frequencies are whole number multiples of the fundamental frequency; such frequencies are known as harmonics. "The individual partials are not heard separately but are blended together by the ear into a single tone."
In other words, if formula_0 is the fundamental frequency, then a harmonic spectrum has the form
formula_1
A standard result of Fourier analysis is that a function has a harmonic spectrum if and only if it is periodic.
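A small numerical illustration of the definition: a periodic signal is built from a few harmonics and its spectrum is confirmed to contain only whole-number multiples of the fundamental. The sample rate, fundamental frequency and amplitudes are arbitrary example choices.

```python
import numpy as np

fs = 8000                                   # sample rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
f0 = 100.0                                  # fundamental frequency in Hz
# A periodic signal built from the first three harmonics of f0.
x = (np.sin(2 * np.pi * f0 * t)
     + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
     + 0.25 * np.sin(2 * np.pi * 3 * f0 * t))

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
print(freqs[spectrum > 0.1 * spectrum.max()])   # [100. 200. 300.], whole-number multiples of f0
```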
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\omega"
},
{
"math_id": 1,
"text": "\\{\\dots, -2\\omega, -\\omega, 0, \\omega, 2\\omega, \\dots\\}."
}
] | https://en.wikipedia.org/wiki?curid=1058719 |
1058777 | Imperiali quota | Formula in proportional-representation voting
The Imperiali quota or pseudoquota is an inadmissible electoral quota named after Belgian senator Pierre Imperiali. Some election laws have mandated it as the number of votes needed to earn a seat in single transferable vote or largest remainder elections.
The Czech Republic is the only country that currently uses the quota, while Italy and Ecuador have used it in the past. The pseudoquota is unpopular because of its logically incoherent nature: it is possible for elections using the Imperiali quota to produce more winners than seats. In this case, the result must be recalculated using a different method. If this does not happen, Imperiali distributes seats in a way that is a hybrid between majoritarian and proportional representation, rather than providing actual proportional representation.
Formula.
The Imperiali quota may be given as:
formula_0
However, Imperiali violates the inequality for a valid fixed quota:
formula_1
As a result, it can lead to impossible allocations that assign parties more seats than actually exist.
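The over-allocation problem can be seen with a few lines of arithmetic. The sketch below counts only the seats awarded from full Imperiali quotas in a hypothetical two-party race and already exceeds the number of seats available; the party names and vote totals are invented for illustration.

```python
from math import floor

def imperiali_quota(total_votes, seats):
    return total_votes / (seats + 2)

def full_quota_seats(votes_by_party, seats):
    """Seats awarded from whole quotas alone, before any remainders are considered."""
    quota = imperiali_quota(sum(votes_by_party.values()), seats)
    return {party: floor(v / quota) for party, v in votes_by_party.items()}

# Hypothetical two-party race for 10 seats with an even split of 120,000 votes.
votes = {"Party A": 60_000, "Party B": 60_000}
allocation = full_quota_seats(votes, seats=10)
print(allocation, sum(allocation.values()))   # 6 seats each, i.e. 12 seats for only 10 available
```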
An example of use in STV.
To see how the Imperiali quota works in an STV election imagine an election in which there are two seats to be filled and three candidates: Andrea, Carter and Brad. There are 100 voters as follows:
There are 100 voters and 2 seats. The Imperiali quota is therefore:
formula_2
To begin the count the first preferences cast for each candidate are tallied and are as follows:
Andrea has more than 25 votes. She therefore has reached the quota and is declared elected. She has 40 votes more than the quota so these votes are transferred to Carter. The tallies therefore become:
Carter has now reached the quota so he is declared elected. The winners are therefore Andrea and Carter.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{\\mbox{votes}}{\\mbox{seats}+2}"
},
{
"math_id": 1,
"text": "\\frac{\\mbox{votes}}{\\mbox{seats}+1} \\leq \\mbox{electoral quota} \\leq \\frac{\\mbox{votes}}{\\mbox{seats}-1}"
},
{
"math_id": 2,
"text": " \\frac{100}{2+2} = 25 "
}
] | https://en.wikipedia.org/wiki?curid=1058777 |
1058812 | Hare quota | Electoral system quota formula
In the study of apportionment, the Hare quota (sometimes called the simple, ideal, or Hamilton quota) is the number of voters represented by each legislator under an idealized system of proportional representation, where every legislator represents an equal number of voters. The Hare quota is the total number of votes divided by the number of seats to be filled. The Hare quota was used in the original proposal for a single transferable vote system, and is still occasionally used, although it has since been largely supplanted by the Droop quota.
The quota continues to be used in setting electoral thresholds, as well as for calculating apportionments by the largest remainder method (LR-Hare) or other quota-based methods of proportional representation. In such use cases, the Hare quota gives unbiased apportionments that favor neither large nor small parties.
Formula.
The Hare quota may be given as:
formula_0
where
Use in STV.
In an STV election, a candidate who reaches the quota is elected, while any votes a candidate receives above the quota may in many cases be transferred to another candidate in accordance with the voter's next usable marked preference. The quota is thus used both to determine who is elected and to determine the number of surplus votes when a candidate is elected with quota. When the Droop quota is used, often about one quota's worth of votes is not used to elect anyone (a much lower proportion than under the first-past-the-post voting system), so the quota is a cue to the number of votes that are actually used to elect someone.
The Hare quota was devised by Thomas Hare, one of the earliest proponents of STV. In 1868, Henry Richmond Droop (1831–1884) invented the Droop quota as an alternative to the Hare quota. The Hare quota is rarely used with STV today because the Droop quota is considered fairer to both large and small parties.
The number of votes making up the quota is determined by the district magnitude in conjunction with the number of valid votes cast.
Example.
To see how the Hare quota works in an STV election, imagine an election in which there are two seats to be filled and three candidates: Andrea, Brad, and Carter. One hundred voters voted, each casting one vote and marking a back-up preference, to be used only in case the first preference candidate is un-electable or elected with surplus. There are 100 ballots showing preferences as follows:
Because there are 100 voters and 2 seats, the Hare quota is:
formula_1
To begin the count the first preferences cast for each candidate are tallied and are as follows:
Andrea has reached the quota and is declared elected. She has 10 votes more than the quota so these votes are transferred to Carter, as specified on the ballots. The tallies of the remaining candidates therefore now become:
At this stage, there are only two candidates remaining and one seat open. The most popular candidate is declared elected and the other is declared defeated.
Although Brad has not reached the quota, he is declared elected since he has more votes than Carter.
The winners are therefore Andrea and Brad.
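The count above can be reproduced with a short script. The sketch below hard-codes the ballot groups: Andrea's 60 first preferences follow from the stated surplus of 10 over the quota of 50, while the 26/14 split between Brad and Carter is an assumed figure consistent with the narrative (the original tallies are not reproduced here). A real STV count would also handle fractional transfers, ties, and eliminations.

```python
from collections import Counter

# Ballot groups: (first preference, optional back-up preference).
ballots = [("Andrea", "Carter")] * 60 + [("Brad",)] * 26 + [("Carter",)] * 14
seats = 2
quota = len(ballots) // seats                 # Hare quota: 100 / 2 = 50

tallies = Counter(b[0] for b in ballots)
elected = [c for c, v in tallies.items() if v >= quota]   # ['Andrea']

# Transfer Andrea's surplus of 10 votes to the next marked preference.
surplus = tallies["Andrea"] - quota
tallies["Carter"] += surplus                  # all 60 of her ballots name Carter

# One seat and two candidates remain; the more popular candidate takes it.
remaining = sorted((c for c in tallies if c not in elected),
                   key=tallies.get, reverse=True)
elected.append(remaining[0])
print(elected)                                # ['Andrea', 'Brad']
```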
Use in party-list PR.
Hong Kong and Brazil use the Hare quota in largest-remainder systems.
In Brazil's largest remainder system the Hare quota is used to set the minimum number of seats allocated to each party or coalition. Remaining seats are allocated according to the D'Hondt method. This procedure is used for the Federal Chamber of Deputies, State Assemblies, Municipal and Federal District Chambers.
In Hong Kong.
For geographical constituencies, the SAR government adopted weakly-proportional representation using the largest remainder method with the Hare quota in 1997. Typically, largest remainders paired with the Hare quota produce unbiased results that are difficult to manipulate. However, the combination of extremely small districts and no electoral thresholds led to a system that parties could manipulate using careful vote management.
By running candidates on separate tickets, Hong Kong parties aimed to ensure they received no seats in the first step of apportionment, but still received enough votes to take several of the remainder seats when running against a divided opposition. The Democratic Party, for example, fielded three separate tickets in the 8-seat New Territories West constituency in the 2008 Legislative Council elections. In the 2012 election, no candidate list won more than one seat in any of the six PR constituencies (a total of 40 seats). The Hare quota has thus effectively created a multi-member single-vote system in the territory.
Mathematical properties.
In situations where parties' total share of the vote varies randomly, the Hare quota is the unique unbiased quota for an electoral system based on vote-transfers or quotas. When the number of seats in each constituency is large and vote management becomes impractical (as it requires an unrealistic degree of coordination of voters), the quota becomes fully unbiased.
However, if the quota is used in small constituencies with no electoral threshold, it is possible to manipulate the system by running several candidates on separate lists, allowing each to win a remainder seat with less than a full quota. This can transform the method into a de facto single nontransferable vote system, a situation that has been seen in Hong Kong. By contrast, Droop quota cannot be manipulated in the same way, as it is never possible for a party to gain seats by splitting.
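The splitting incentive can be illustrated with a toy largest-remainder computation (the vote totals are invented and are not Hong Kong data): in a three-seat district, a party that puts its 150 supporters on two separate tickets converts one full-quota seat plus a losing remainder into two winning remainders.

```python
from fractions import Fraction

def largest_remainder_hare(votes, seats):
    """Largest-remainder apportionment with the Hare quota, in exact arithmetic."""
    quota = Fraction(sum(votes.values()), seats)
    full = {p: int(Fraction(v) / quota) for p, v in votes.items()}
    rem = {p: Fraction(v) / quota - full[p] for p, v in votes.items()}
    leftover = seats - sum(full.values())
    for p in sorted(rem, key=rem.get, reverse=True)[:leftover]:
        full[p] += 1
    return full

seats = 3
united = {"A": 150, "B": 90, "C": 60}            # party A runs a single list
split = {"A1": 75, "A2": 75, "B": 90, "C": 60}   # party A runs two tickets

print(largest_remainder_hare(united, seats))  # {'A': 1, 'B': 1, 'C': 1}
print(largest_remainder_hare(split, seats))   # {'A1': 1, 'A2': 1, 'B': 1, 'C': 0}
```

For comparison, repeating the computation with a Droop-sized quota of about 75 votes gives party A two seats whether or not it splits, consistent with the claim above that splitting cannot gain seats under the Droop quota.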
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{\\mbox{total} \\; \\mbox{votes}}{\\mbox{total} \\; \\mbox{seats}}"
},
{
"math_id": 1,
"text": " \\frac{100}{2} = 50 "
}
] | https://en.wikipedia.org/wiki?curid=1058812 |
1058833 | Orthogonal complement | Concept in linear algebra
In the mathematical fields of linear algebra and functional analysis, the orthogonal complement of a subspace formula_0 of a vector space formula_1 equipped with a bilinear form formula_2 is the set formula_3 of all vectors in formula_1 that are orthogonal to every vector in formula_0. Informally, it is called the perp, short for perpendicular complement. It is a subspace of formula_1.
Example.
Let formula_4 be the vector space equipped with the usual dot product formula_5 (thus making it an inner product space), and let formula_6 with
formula_7
then its orthogonal complement formula_8 can also be defined as formula_9 being
formula_10
The fact that every column vector in formula_11 is orthogonal to every column vector in formula_12 can be checked by direct computation. The fact that the spans of these vectors are orthogonal then follows by bilinearity of the dot product. Finally, the fact that these spaces are orthogonal complements follows from the dimension relationships given below.
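The direct computation mentioned here is easy to carry out numerically; the following NumPy sketch (added for illustration) checks that every column of the first matrix is orthogonal to every column of the second, and that the two subspaces have complementary dimensions.

```python
import numpy as np

A = np.array([[1, 0],
              [0, 1],
              [2, 6],
              [3, 9],
              [5, 3]])
A_tilde = np.array([[-2, -3, -5],
                    [-6, -9, -3],
                    [ 1,  0,  0],
                    [ 0,  1,  0],
                    [ 0,  0,  1]])

# Every dot product between a column of A and a column of A_tilde vanishes.
print(np.all(A.T @ A_tilde == 0))                                   # True

# dim W + dim W_perp = 2 + 3 = 5 = dim V
print(np.linalg.matrix_rank(A) + np.linalg.matrix_rank(A_tilde))    # 5
```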
General bilinear forms.
Let formula_1 be a vector space over a field formula_13 equipped with a bilinear form formula_14 We define formula_15 to be left-orthogonal to formula_16, and formula_16 to be right-orthogonal to formula_15, when formula_17 For a subset formula_0 of formula_18 define the left-orthogonal complement formula_3 to be
formula_19
There is a corresponding definition of the right-orthogonal complement. For a reflexive bilinear form, where formula_20, the left and right complements coincide. This will be the case if formula_2 is a symmetric or an alternating form.
The definition extends to a bilinear form on a free module over a commutative ring, and to a sesquilinear form extended to include any free module over a commutative ring with conjugation.
Inner product spaces.
This section considers orthogonal complements in an inner product space formula_29.
Two vectors formula_30 and formula_31 are called orthogonal if formula_32, which happens if and only if formula_33 scalars formula_34.
If formula_35 is any subset of an inner product space formula_29 then its orthogonal complement is the vector subspace
formula_36
which is always a closed subset (hence, a closed vector subspace) of formula_29 that satisfies:
If formula_35 is a vector subspace of an inner product space formula_29 then
formula_42
If formula_35 is a closed vector subspace of a Hilbert space formula_29 then
formula_43
where formula_44 is called the orthogonal decomposition of formula_29 into formula_35 and formula_45 and it indicates that formula_35 is a complemented subspace of formula_29 with complement formula_46
Properties.
The orthogonal complement is always closed in the metric topology. In finite-dimensional spaces, that is merely an instance of the fact that all subspaces of a vector space are closed. In infinite-dimensional Hilbert spaces, some subspaces are not closed, but all orthogonal complements are closed. If formula_0 is a vector subspace of an inner product space, the orthogonal complement of the orthogonal complement of formula_0 is the closure of formula_47 that is,
formula_48
Some other useful properties that always hold are the following. Let formula_29 be a Hilbert space and let formula_49 and formula_50 be linear subspaces. Then:
The orthogonal complement generalizes to the annihilator, and gives a Galois connection on subsets of the inner product space, with associated closure operator the topological closure of the span.
Finite dimensions.
For a finite-dimensional inner product space of dimension formula_58, the orthogonal complement of a formula_59-dimensional subspace is an formula_60-dimensional subspace, and the double orthogonal complement is the original subspace:
formula_61
If formula_62, where formula_63, formula_64, and formula_65 refer to the row space, column space, and null space of formula_11 (respectively), then
formula_66
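These identities can be checked numerically for any concrete matrix; the sketch below uses an arbitrary 2×3 example and extracts a null-space basis from the singular value decomposition.

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])         # rank 2, so its null space is 1-dimensional

# Right singular vectors beyond the rank span the null space of A.
_, s, Vt = np.linalg.svd(A)
rank = np.sum(s > 1e-10)
null_basis = Vt[rank:]               # shape (1, 3)

# Rows of A span the row space; each is orthogonal to the null space.
print(np.allclose(A @ null_basis.T, 0))            # True
print(A.shape[1] == rank + null_basis.shape[0])    # True: 3 == 2 + 1
```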
Banach spaces.
There is a natural analog of this notion in general Banach spaces. In this case one defines the orthogonal complement of formula_0 to be a subspace of the dual of formula_1 defined similarly as the annihilator
formula_67
It is always a closed subspace of formula_68. There is also an analog of the double complement property. formula_69 is now a subspace of formula_70 (which is not identical to formula_1). However, reflexive spaces have a natural isomorphism formula_71 between formula_1 and formula_70. In this case we have formula_72
This is a rather straightforward consequence of the Hahn–Banach theorem.
Applications.
In special relativity the orthogonal complement is used to determine the simultaneous hyperplane at a point of a world line. The bilinear form formula_73 used in Minkowski space determines a pseudo-Euclidean space of events. The origin and all events on the light cone are self-orthogonal. When a time event and a space event evaluate to zero under the bilinear form, then they are hyperbolic-orthogonal. This terminology stems from the use of conjugate hyperbolas in the pseudo-Euclidean plane: conjugate diameters of these hyperbolas are hyperbolic-orthogonal.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "W"
},
{
"math_id": 1,
"text": "V"
},
{
"math_id": 2,
"text": "B"
},
{
"math_id": 3,
"text": "W^\\perp"
},
{
"math_id": 4,
"text": "V = (\\R^5, \\langle \\cdot, \\cdot \\rangle)"
},
{
"math_id": 5,
"text": "\\langle \\cdot, \\cdot \\rangle"
},
{
"math_id": 6,
"text": "W = \\{\\mathbf{u} \\in V: \\mathbf{A}x = \\mathbf{u},\\ x\\in \\R^2\\},"
},
{
"math_id": 7,
"text": "\\mathbf{A} = \\begin{pmatrix}\n1 & 0\\\\\n0 & 1\\\\\n2 & 6\\\\\n3 & 9\\\\\n5 & 3\\\\\n\\end{pmatrix}."
},
{
"math_id": 8,
"text": "W^\\perp = \\{\\mathbf{v}\\in V:\\langle \\mathbf{u},\\mathbf{v}\\rangle = 0 \\ \\ \\forall \\ \\mathbf{u} \\in W\\}"
},
{
"math_id": 9,
"text": "W^\\perp = \\{\\mathbf{v} \\in V: \\mathbf{\\tilde{A}}y = \\mathbf{v},\\ y \\in \\R^3\\},"
},
{
"math_id": 10,
"text": "\\mathbf{\\tilde{A}} =\n\\begin{pmatrix}\n-2 & -3 & -5 \\\\\n-6 & -9 & -3 \\\\\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 1 \n\\end{pmatrix}."
},
{
"math_id": 11,
"text": "\\mathbf{A}"
},
{
"math_id": 12,
"text": "\\mathbf{\\tilde{A}}"
},
{
"math_id": 13,
"text": "\\mathbb{F}"
},
{
"math_id": 14,
"text": "B."
},
{
"math_id": 15,
"text": "\\mathbf{u}"
},
{
"math_id": 16,
"text": "\\mathbf{v}"
},
{
"math_id": 17,
"text": "B(\\mathbf{u},\\mathbf{v}) = 0."
},
{
"math_id": 18,
"text": "V,"
},
{
"math_id": 19,
"text": "W^\\perp = \\left\\{ \\mathbf{x} \\in V : B(\\mathbf{x}, \\mathbf{y}) = 0 \\ \\ \\forall \\ \\mathbf{y} \\in W \\right\\}."
},
{
"math_id": 20,
"text": "B(\\mathbf{u},\\mathbf{v}) = 0 \\implies B(\\mathbf{v},\\mathbf{u}) = 0 \\ \\ \\forall \\ \\mathbf{u} , \\mathbf{v} \\in V"
},
{
"math_id": 21,
"text": "X \\subseteq Y"
},
{
"math_id": 22,
"text": "X^\\perp \\supseteq Y^\\perp"
},
{
"math_id": 23,
"text": "V^\\perp"
},
{
"math_id": 24,
"text": "W \\subseteq (W^\\perp)^\\perp"
},
{
"math_id": 25,
"text": "\\dim(W)+\\dim (W^\\perp)=\\dim (V)"
},
{
"math_id": 26,
"text": "L_1, \\ldots, L_r"
},
{
"math_id": 27,
"text": "L_* = L_1 \\cap \\cdots \\cap L_r,"
},
{
"math_id": 28,
"text": "L_*^\\perp = L_1^\\perp + \\cdots + L_r^\\perp"
},
{
"math_id": 29,
"text": "H"
},
{
"math_id": 30,
"text": "\\mathbf{x}"
},
{
"math_id": 31,
"text": "\\mathbf{y}"
},
{
"math_id": 32,
"text": "\\langle \\mathbf{x}, \\mathbf{y} \\rangle = 0"
},
{
"math_id": 33,
"text": "\\| \\mathbf{x} \\| \\le \\| \\mathbf{x} + s\\mathbf{y} \\| \\ \\forall"
},
{
"math_id": 34,
"text": "s"
},
{
"math_id": 35,
"text": "C"
},
{
"math_id": 36,
"text": "\\begin{align}\nC^\\perp\n:&= \\{\\mathbf{x} \\in H : \\langle \\mathbf{x}, \\mathbf{c} \\rangle = 0 \\ \\ \\forall \\ \\mathbf{c} \\in C\\} \\\\\n&= \\{\\mathbf{x} \\in H : \\langle \\mathbf{c}, \\mathbf{x} \\rangle = 0 \\ \\ \\forall \\ \\mathbf{c} \\in C\\} \n\\end{align}"
},
{
"math_id": 37,
"text": "C^{\\bot} = \\left(\\operatorname{cl}_H \\left(\\operatorname{span} C\\right)\\right)^{\\bot}"
},
{
"math_id": 38,
"text": "C^{\\bot} \\cap \\operatorname{cl}_H \\left(\\operatorname{span} C\\right) = \\{ 0 \\}"
},
{
"math_id": 39,
"text": "C^{\\bot} \\cap \\left(\\operatorname{span} C\\right) = \\{ 0 \\}"
},
{
"math_id": 40,
"text": "C \\subseteq \\left(C^{\\bot}\\right)^{\\bot}"
},
{
"math_id": 41,
"text": "\\operatorname{cl}_H \\left(\\operatorname{span} C\\right) = \\left(C^{\\bot}\\right)^{\\bot}"
},
{
"math_id": 42,
"text": "C^{\\bot} = \\left\\{\\mathbf{x} \\in H : \\|\\mathbf{x}\\| \\leq \\|\\mathbf{x} + \\mathbf{c}\\| \\ \\ \\forall \\ \\mathbf{c} \\in C \\right\\}."
},
{
"math_id": 43,
"text": "H = C \\oplus C^{\\bot} \\qquad \\text{ and } \\qquad \\left(C^{\\bot}\\right)^{\\bot} = C"
},
{
"math_id": 44,
"text": "H = C \\oplus C^{\\bot}"
},
{
"math_id": 45,
"text": "C^{\\bot}"
},
{
"math_id": 46,
"text": "C^{\\bot}."
},
{
"math_id": 47,
"text": "W,"
},
{
"math_id": 48,
"text": "\\left(W^\\bot\\right)^\\bot = \\overline W."
},
{
"math_id": 49,
"text": "X"
},
{
"math_id": 50,
"text": "Y"
},
{
"math_id": 51,
"text": "X^\\bot = \\overline{X}^{\\bot}"
},
{
"math_id": 52,
"text": "Y \\subseteq X"
},
{
"math_id": 53,
"text": "X^\\bot \\subseteq Y^\\bot"
},
{
"math_id": 54,
"text": "X \\cap X^\\bot = \\{ 0 \\}"
},
{
"math_id": 55,
"text": "X \\subseteq (X^\\bot)^\\bot"
},
{
"math_id": 56,
"text": "(X^\\bot)^\\bot = X"
},
{
"math_id": 57,
"text": "H = X \\oplus X^\\bot,"
},
{
"math_id": 58,
"text": "n"
},
{
"math_id": 59,
"text": "k"
},
{
"math_id": 60,
"text": "(n-k)"
},
{
"math_id": 61,
"text": "\\left(W^{\\bot}\\right)^{\\bot} = W."
},
{
"math_id": 62,
"text": "\\mathbf{A} \\in \\mathbb{M}_{mn}"
},
{
"math_id": 63,
"text": "\\mathcal{R}(\\mathbf{A})"
},
{
"math_id": 64,
"text": "\\mathcal{C} (\\mathbf{A})"
},
{
"math_id": 65,
"text": "\\mathcal{N} (\\mathbf{A})"
},
{
"math_id": 66,
"text": "\\left(\\mathcal{R} (\\mathbf{A}) \\right)^{\\bot} = \\mathcal{N} (\\mathbf{A}) \\qquad \\text{ and } \\qquad \\left(\\mathcal{C} (\\mathbf{A}) \\right)^{\\bot} = \\mathcal{N} (\\mathbf{A}^{\\operatorname{T}})."
},
{
"math_id": 67,
"text": "W^\\bot = \\left\\{ x\\in V^* : \\forall y\\in W, x(y) = 0 \\right\\}. "
},
{
"math_id": 68,
"text": "V^*"
},
{
"math_id": 69,
"text": "W^{\\perp \\perp}"
},
{
"math_id": 70,
"text": "V^{**}"
},
{
"math_id": 71,
"text": "i"
},
{
"math_id": 72,
"text": "i\\overline{W} = W^{\\perp\\perp}."
},
{
"math_id": 73,
"text": "\\eta"
}
] | https://en.wikipedia.org/wiki?curid=1058833 |
10591072 | Axiom of reducibility | The axiom of reducibility was introduced by Bertrand Russell in the early 20th century as part of his ramified theory of types. Russell devised and introduced the axiom in an attempt to manage the contradictions he had discovered in his analysis of set theory.
History.
With Russell's discovery (1901, 1902) of a paradox in Gottlob Frege's 1879 "Begriffsschrift" and Frege's acknowledgment of the same (1902), Russell tentatively introduced his solution as "Appendix B: Doctrine of Types" in his 1903 "The Principles of Mathematics". This contradiction can be stated as "the class of all classes that do not contain themselves as elements". At the end of this appendix Russell asserts that his "doctrine" would solve the immediate problem posed by Frege, but "there is at least one closely analogous contradiction which is probably not soluble by this doctrine. The totality of all logical objects, or of all propositions, involves, it would seem a fundamental logical difficulty. What the complete solution of the difficulty may be, I have not succeeded in discovering; but as it affects the very foundations of reasoning..."
By the time of his 1908 "Mathematical logic as based on the theory of types" Russell had studied "the contradictions" (among them the Epimenides paradox, the Burali-Forti paradox, and Richard's paradox) and concluded that "In all the contradictions there is a common characteristic, which we may describe as self-reference or reflexiveness".
In 1903, Russell defined "predicative" functions as those whose order is one more than the highest-order function occurring in the expression of the function. While these were fine for the situation, "impredicative" functions had to be disallowed:
<templatestyles src="Template:Blockquote/styles.css" />A function whose argument is an individual and whose value is always a first-order proposition will be called a first-order function. A function involving a first-order function or proposition as apparent variable will be called a second-order function, and so on. A function of one variable which is of the order next above that of its argument will be called a "predicative" function; the same name will be given to a function of several variables [etc].
He repeats this definition in a slightly different way later in the paper (together with a subtle prohibition that they would express more clearly in 1913):
<templatestyles src="Template:Blockquote/styles.css" />A predicative function of "x" is one whose values are propositions of the type next above that of "x", if "x" is an individual or a proposition, or that of values of "x" if "x" is a function. It may be described as one in which the apparent variables, if any, are all of the same type as "x" or of lower type; and a variable is of lower type than "x" if it can significantly occur as argument to "x", or as argument to an argument to "x", and so forth. [emphasis added]
This usage carries over to Alfred North Whitehead and Russell's 1913 "Principia Mathematica" wherein the authors devote an entire subsection of their Chapter II: "The Theory of Logical Types" to subchapter I. "The Vicious-Circle Principle": "We will define a function of one variable as "predicative" when it is of the next order above that of its argument, i.e. of the lowest order compatible with its having that argument. . . A function of several arguments is predicative if there is one of its arguments such that, when the other arguments have values assigned to them, we obtain a predicative function of the one undetermined argument."
They again propose the definition of a "predicative function" as one that does not violate The Theory of Logical Types. Indeed the authors assert such violations are "incapable [to achieve]" and "impossible":
<templatestyles src="Template:Blockquote/styles.css" />We are thus led to the conclusion, both from the vicious-circle principle and from direct inspection, that the functions to which a given object "a" can be an argument are incapable of being arguments to each other, and that they have no term in common with the functions to which they can be arguments. We are thus led to construct a hierarchy.
The authors stress the word "impossible":
<templatestyles src="Template:Blockquote/styles.css" />if we are not mistaken, that not only is it impossible for a function φz^ to have itself or anything derived from it as argument, but that, if ψz^ is another function such there are arguments "a" with which both "φa" and "ψa" are significant, then ψz^ and anything derived from it cannot significantly be argument to φz^.
Russell's axiom of reducibility.
The axiom of reducibility states that any truth function (i.e. propositional function) can be expressed by a formally equivalent "predicative" truth function. It made its first appearance in Bertrand Russell's (1908) "Mathematical logic as based on the theory of types", but only after some five years of trial and error. In his words:
<templatestyles src="Template:Blockquote/styles.css" />Thus a predicative function of an individual is a first-order function; and for higher types of arguments, predicative functions take the place that first-order functions take in respect of individuals. We assume then, that every function is equivalent, for all its values, to some predicative function of the same argument. This assumption seems to be the essence of the usual assumption of classes [modern sets] . . . we will call this assumption the "axiom of classes", or the axiom of reducibility.
For relations (functions of two variables such as "For all x and for all y, those values for which f(x,y) is true" i.e. ∀x∀y: f(x,y)), Russell assumed an "axiom of relations", or [the same] axiom of reducibility.
In 1903, he proposed a possible process of evaluating such a 2-place function by comparing the process to double integration: One after another, plug into "x" definite values "am" (i.e. the particular "aj" is "a constant" or a parameter held constant), then evaluate f("am","yn") across all the "n" instances of possible "yn". For all "yn" evaluate f("a1", "yn"), then for all "yn" evaluate f("a2", "yn"), etc., until all the "x" = "am" are exhausted. This would create an "m" by "n" matrix of values: TRUE or UNKNOWN. (In this exposition, the use of indices is a modern convenience.)
In 1908, Russell made no mention of this matrix of "x", "y" values that render a two-place function (e.g. relation) TRUE, but by 1913 he has introduced a matrix-like concept into "function". In *12 of Principia Mathematica (1913) he defines "a matrix" as "any function, of however many variables, which does not involve any apparent variables. Then any possible function other than a matrix is derived from a matrix by means of generalisation, i.e. by considering the proposition which asserts that the function in question is true with all possible values or with some values of one of the arguments, the other argument or arguments remaining undetermined". For example, if one asserts that "∀y: f(x, y) is true", then "x" is the apparent variable because it is unspecified.
Russell now defines a matrix of "individuals" as a "first-order" matrix, and he follows a similar process to define a "second-order matrix", etc. Finally, he introduces the definition of a "predicative function":
<templatestyles src="Template:Blockquote/styles.css" />A function is said to be "predicative" when it is a matrix. It will be observed that, in a hierarchy in which all the variables are individuals or matrices, a matrix is the same thing as an elementary function [cf. 1913:127, meaning: the function contains "no" apparent variables]. ¶ "Matrix" or "predicative function" is a primitive idea.
From this reasoning, he then uses the same wording to propose the same "axioms of reducibility" as he did in his 1908.
As an aside, Russell in his 1903 considered, and then rejected, "a temptation to regard a relation as definable in extension as a class of couples", i.e. the modern set-theoretic notion of ordered pair. An intuitive version of this notion appeared in Frege's (1879) "Begriffsschrift" (translated in van Heijenoort 1967:23); Russell's 1903 followed closely the work of Frege (cf. Russell 1903:505ff). Russell worried that "it is necessary to give sense to the couple, to distinguish the referent from the relatum: thus a couple becomes essentially distinct from a class of two terms, and must itself be introduced as a primitive idea. It would seem, viewing the idea philosophically, that sense can only be derived from some relational proposition . . . it seems therefore more correct to take an intensional view of relations, and to identify them rather with class-concepts than with classes". As shown below, Norbert Wiener (1914) reduced the notion of relation to class by his definition of an ordered pair.
Criticism.
Zermelo 1908.
The outright prohibition implied by Russell's "axiom of reducibility" was roundly criticised by Ernst Zermelo in his 1908 "Investigations in the foundations of set theory I", stung as he was by a demand similar to that of Russell that came from Poincaré:
<templatestyles src="Template:Blockquote/styles.css" />According to Poincaré (1906, p. 307) a definition is "predicative" and logically admissible only if it "excludes" all objects that are "dependent" upon the notion defined, that is, that can in any way be determined by it.
Zermelo countered:
<templatestyles src="Template:Blockquote/styles.css" />A definition may very well rely upon notions that are equivalent to the one being defined; indeed in every definition "definiens" and "definiendum" are equivalent notions, and the strict observance of Poincaré's demand would make every definition, hence all of science, impossible.
Wiener 1914.
In his 1914 "A simplification of the logic of relations", Norbert Wiener removed the need for the axiom of reducibility as applied to relations between two variables "x", and "y" e.g. φ("x","y"). He did this by introducing a way to express a relation as a set of ordered pairs: "It will be seen that what we have done is practically to revert to Schröder's treatment of a relation as a class [set] of ordered couples". Van Heijenoort observes that "[b]y giving a definition of the ordered pair of two-elements in terms of class operations, the note reduced the theory of relations to that of classes." But Wiener opined that while he had dispatched Russell and Whitehead's two-variable version of the axiom *12.11, the single-variable version of the axiom of reducibility for (axiom *12.1 in "Principia Mathematica") was still necessary.
Wittgenstein 1918.
Ludwig Wittgenstein, while held in a prisoner-of-war camp, finished his "Tractatus Logico-Philosophicus". His introduction credits "the great works of Frege and the writings of my friend Bertrand Russell". Not a self-effacing intellectual, he pronounced that "the "truth" of the thoughts communicated here seems to me unassailable and definitive. I am, therefore, of the opinion that the problems have in essentials been finally solved." So given such an attitude, it is no surprise that Russell's theory of types comes under criticism:
<templatestyles src="Template:Blockquote/styles.css" />3.33
In logical syntax the meaning of a sign ought never to play a role; it must admit of being established without mention being thereby made of the "meaning" of a sign; it ought to presuppose "only" the description of the expressions.
3.331
From this observation we get a further view – into Russell's "Theory of Types". Russell's error is shown by the fact that in drawing up his symbolic rules he has to speak of the meaning of the signs.
3.332
No proposition can say anything about itself, because the proposition sign cannot be contained in itself (that is the "whole theory of types").
3.333
A function cannot be its own argument, because the functional sign already contains the prototype of its own argument and it cannot contain itself. ... Herewith Russell's paradox vanishes.
This appears to support the same argument Russell uses to erase his "paradox". This "using the signs" to "speak of the signs" Russell criticises in his introduction that preceded the original English translation:
<templatestyles src="Template:Blockquote/styles.css" />What causes hesitation is the fact that, after all, Mr Wittgenstein manages to say a good deal about what cannot be said, thus suggesting to the sceptical reader that possibly there may be some loophole through a hierarchy of languages, or by some other exit.
This problem appears later when Wittgenstein arrives at this gentle disavowal of the axiom of reducibility—one interpretation of the following is that Wittgenstein is saying that Russell has made (what is known today as) a category error; Russell has asserted (inserted into the theory) a "further law of logic" when "all" the laws (e.g. the unbounded Sheffer stroke adopted by Wittgenstein) have "already" been asserted:
<templatestyles src="Template:Blockquote/styles.css" />6.123
It is clear that the laws of logic cannot themselves obey further logical laws. (There is not, as Russell supposed, for every "type" a special law of contradiction; but one is sufficient, since it is not applied to itself.)
6.1231
The mark of logical propositions is not their general validity. To be general is only to be accidentally valid for all things. An ungeneralised proposition can be tautologous just as well as a generalised one.
6.1232
Logical general validity, we could call essential as opposed to accidental general validity, e.g., of the proposition "all men are mortal". Propositions like Russell's "axiom of reducibility" are not logical propositions, and this explains our feeling that, if true, they can only be true by a happy chance.
6.1233
We can imagine a world in which the axiom of reducibility is not valid. But it is clear that logic has nothing to do with the question of whether our world is really of this kind or not.
Russell 1919.
Bertrand Russell in his 1919 "Introduction to Mathematical Philosophy", a non-mathematical companion to his first edition of "PM", discusses his Axiom of Reducibility in Chapter 17 "Classes" (pp. 146ff). He concludes that "we cannot accept "class" as a primitive idea; the symbols for classes are "mere conveniences" and classes are "logical fictions, or (as we say) 'incomplete symbols' ... classes cannot be regarded as part of the ultimate furniture of the world" (p. 146). The reason for this is because of the problem of impredicativity: "classes cannot be regarded as a species of individuals, on account of the contradiction about classes which are not members of themselves ... and because we can prove that the number of classes is greater than the number of individuals, [etc]". What he then does is propose 5 obligations that must be satisfied with respect to a theory of classes, and the result is his axiom of reducibility. He states that this axiom is "a generalised form of Leibniz's identity of indiscernibles" (p. 155). But he concludes Leibniz's assumption is not necessarily true for all possible predicates in all possible worlds, so he concludes that:
<templatestyles src="Template:Blockquote/styles.css" />I do not see any reason to believe that the axiom of reducibility is logically necessary, which is what would be meant by saying that it is true in all possible worlds. The admission of this axiom into a system of logic is therefore a defect ... a dubious assumption. (p. 155)
The goal that he sets for himself then is "adjustments to his theory" of avoiding classes:
<templatestyles src="Template:Blockquote/styles.css" />in its reduction of propositions nominally about classes to propositions about their defining functions. The avoidance of classes as entities by this method must, it would be seem, be sound in principle, however the detail may still require adjustment. (p. 155)
Skolem 1922.
Thoralf Skolem in his 1922 "Some remarks on axiomatised set theory" took a less than positive attitude toward "Russell and Whitehead" (i.e. their work "Principia Mathematica"):
<templatestyles src="Template:Blockquote/styles.css" />Until now, so far as I know, only "one" such system of axioms has found rather general acceptance, namely that constructed by Zermelo (1908). Russell and Whitehead, too, constructed a system of logic that provides a foundation for set theory; if I am not mistaken, however, mathematicians have taken but little interest in it.
Skolem then observes the problems of what he called "nonpredicative definition" in the set theory of Zermelo:
<templatestyles src="Template:Blockquote/styles.css" />the difficulty is that we have to form some sets whose existence depends upon "all" sets ... Poincaré called this kind of definition and regarded it as the real logical weakness of set theory.
While Skolem is mainly addressing a problem with Zermelo's set theory, he does make this observation about the "axiom of reducibility":
<templatestyles src="Template:Blockquote/styles.css" />they [Russell and Whitehead], too, simply content themselves with circumventing the difficulty by introducing a stipulation, the "axiom of reducibility". Actually, this axiom decrees that the nonpredicative stipulations will be satisfied. There is no proof of that; besides, so far as I can see, such a proof must be impossible from Russell and Whitehead's point of view as well as from Zermelo's. [emphasis added]
Russell 1927.
In his 1927 "Introduction" to the second edition of "Principia Mathematica", Russell criticises his own axiom:
<templatestyles src="Template:Blockquote/styles.css" />One point in regard to which improvement is obviously desirable is the axiom of reducibility (*12.1.11). This axiom has a purely pragmatic justification: it leads to the desired results, and to no others. But clearly it is not the sort of axiom with which we can rest content. On this subject, however, it cannot be said that a satisfactory solution is as yet obtainable. ... There is another course recommended by Wittgenstein† [† "Tractatus Logico-Philosophicus", *5.54ff] for philosophical reasons. This is to assume that functions of propositions are always truth-functions, and that a function can only occur as in a proposition through its values. There are difficulties ... It involves the consequence that all functions of functions are extensional. ... [But the consequences of his logic are that] the theory of infinite Dedekindian and well-ordering collapses, so that irrationals, and real numbers generally, can no longer be adequately dealt with. Also Cantor's proof that 2n > "n" breaks down unless "n" is finite. Perhaps some further axiom, less objectionable than the axiom of reducibility, might give these results, but we have not succeeded in finding such an axiom.
Wittgenstein's 5.54ff is more centred on the notion of function:
<templatestyles src="Template:Blockquote/styles.css" />5.54
In the general propositional form, propositions occur in a proposition only as bases of the truth-operations.
5.541
At first sight it appears as if there were also a different way in which one proposition could occur in another. ¶ Especially in certain propositional forms of psychology, like "A thinks, that "p" is the case," or "A thinks "p"," etc. ¶ Here it appears superficially as if the proposition "p" stood to the object A in a kind of relation. ¶ (And in modern epistemology [Russell, Moore, etc.] those propositions have been conceived in this way.)
5.542
But it is clear that "A believes that "p", "A thinks "p"", "A says "p"", are of the form " ' "p" ' thinks "p" "; and here we have no co-ordination of a fact and an object, but a co-ordination of facts by means of a co-ordination of their objects.
5.5421 [etc: "A composite soul would not be a soul any longer."]
5.5422
The correct explanation of the form of the proposition "A judges "p"" must show that it is impossible to judge a nonsense. (Russell's theory does not satisfy this condition).
A possible interpretation of Wittgenstein's stance is that the thinker A i.e. "'p'" "is identically" the thought "p", in this way the "soul" remains a unit and not a composite. So to utter "the thought thinks the thought" is nonsense, because per 5.542 the utterance does not specify anything.
von Neumann 1925.
John von Neumann in his 1925 "An axiomatisation of set theory" wrestled with the same issues as did Russell, Zermelo, Skolem, and Fraenkel. He summarily rejected the effort of Russell:
<templatestyles src="Template:Blockquote/styles.css" />Here Russell, J. Konig, Weyl, and Brouwer must be mentioned. They arrived at entirely different results [from the set theorists], but the over-all effect of their activity seems to me outright devastating. In Russell, all of mathematics and set theory seems to rest upon the highly problematic "axiom of reducibility", while Weyl and Brouwer systematically reject the larger part of mathematics and set theory as completely meaningless.
He then notes the work of the set theorists Zermelo, Fraenkel and Schoenflies, in which "one understands by "set" nothing but an object of which one knows no more and wants to know no more than what follows about it from the postulates. The postulates [of set theory] are to be formulated in such a way that all the desired theorems of Cantor's set theory follow from them, but not the antinomies.
While he mentions the efforts of David Hilbert to prove the consistency of his axiomatisation of mathematics, von Neumann placed him in the same group as Russell. Rather, von Neumann considered his proposal to be "in the spirit of the second group ... We must, however, avoid forming sets by collecting or separating elements [durch Zusammenfassung oder Aussonderung von Elementen], and so on, as well as eschew the unclear principle of 'definiteness' that can still be found in Zermelo. [...] We prefer, however, to axiomatise not 'set' but 'function'."
Van Heijenoort observes that ultimately this axiomatic system of von Neumann's "was simplified, revised, and expanded ... and it came to be known as the von Neumann-Bernays-Gödel set theory."
David Hilbert 1927.
The axiomatic system that David Hilbert presents in his 1925 "The Foundations of Mathematics" is the mature expression of a task he set about in the early 1900s but let lapse for a while (cf. his 1904 "On the foundations of logic and arithmetic"). His system is neither set theoretic nor derived directly from Russell and Whitehead. Rather, it invokes 13 axioms of logic (four axioms of implication, six axioms of logical AND and logical OR, two axioms of logical negation, and one ε-axiom or "existence" axiom), plus a version of the Peano axioms in 4 axioms including mathematical induction, some definitions that "have the character of axioms, and certain "recursion axioms" that result from a general recursion schema", plus some formation rules that "govern the use of the axioms".
Hilbert states that, with regard to this system, i.e. "Russell and Whitehead's theory of foundations[,] ... the foundation that it provides for mathematics rests, first, upon the axiom of infinity and, then upon what is called the axiom of reducibility, and both of these axioms are genuine contentual assumptions that are not supported by a consistency proof; they are assumptions whose validity in fact remains dubious and that, in any case, my theory does not require ... reducibility is not presupposed in my theory ... the execution of the reduction would be required only in case a proof of a contradiction were given, and then, according to my proof theory, this reduction would always be bound to succeed."
It is upon this foundation that modern recursion theory rests.
Ramsey 1925.
In 1925, Frank Plumpton Ramsey argued that the axiom is not needed. However, in the second edition of Principia Mathematica (1927, page xiv) and in Ramsey's 1926 paper, it is stated that certain theorems about real numbers could not be proved using Ramsey's approach. Most later mathematical formalisms (Hilbert's formalism or Brouwer's intuitionism, for example) do not use it.
Ramsey showed that it is possible to reformulate the definition of "predicative" by using the definitions in Wittgenstein's Tractatus Logico-Philosophicus. As a result, all functions of a given order are "predicative", irrespective of how they are expressed. He goes on to show that his formulation still avoids the paradoxes. However, the "Tractatus" theory did not appear strong enough to prove some mathematical results.
Gödel 1944.
Kurt Gödel in his 1944 "Russell's mathematical logic" offers, in the words of his commentator Charles Parsons, "[what] might be seen as a defense of these [realist] attitudes of Russell against the reductionism prominent in his philosophy and implicit in much of his actual logical work. It was perhaps the most robust defense of realism about mathematics and its objects since the paradoxes had come to the consciousness of the mathematical world after 1900".
In general, Gödel is sympathetic to the notion that a propositional function can be reduced to (identified with) the "real objects" that satisfy it, but this causes problems with respect to the theory of real numbers, and even integers (p. 134). He observes that the first edition of "PM" "abandoned" the realist (constructivistic) "attitude" with Russell's proposal of the axiom of reducibility (p. 133). However, within the introduction to the second edition of "PM" (1927) Gödel asserts "the constructivistic attitude is resumed again" (p. 133) when Russell "dropped" the axiom of reducibility in favour of the matrix (truth-functional) theory; Russell "stated explicitly that all primitive predicates belong to the lowest type and that the only purpose of variables (and evidently also of constants) is to make it possible to assert more complicated truth-functions of atomic propositions ... [i.e.] the higher types and orders are solely a "façon de parler"" (p. 134). But this only works when the number of individuals and primitive predicates is finite, for one can construct finite strings of symbols such as:
formula_0 [example on page 134]
And from such strings one can form strings of strings to obtain the equivalent of classes of classes, with a mixture of types possible. However, from such finite strings the whole of mathematics cannot be constructed because they cannot be "analyzed", i.e. reducible to the law of identity or disprovable by a negation of the law:
<templatestyles src="Template:Blockquote/styles.css" />Even the theory of integers is non-analytic, provided that one requires of the rules of elimination that they allow one actually to carry out the elimination in a finite number of steps in each case.44 (44Because this would imply the existence of a decision procedure for all arithmetical propositions. Cf. "Turing 1937".) ... [Thus] the whole of mathematics as applied to sentences of infinite length has to be presupposed to prove [the] analyticity [of the theory of integers], e.g., the axiom of choice can be proved to be analytic only if it is assumed to be true. (p. 139)
But he observes that "this procedure seems to presuppose arithmetic in some form or other" (p. 134), and he states in the next paragraph that "the question of whether (or to what extent) the theory of integers can be obtained on the basis of the ramified hierarchy must be considered as unsolved." (p. 135)
Gödel proposed that one should take a "more conservative approach":
<templatestyles src="Template:Blockquote/styles.css" />make the meaning of the terms "class" and "concept" clearer, and to set up a consistent theory of classes and concepts as objectively existing entities. This is the course which the actual development of mathematical logic has been taking ... Major among the attempts in this direction ... are the simple theory of types ... and axiomatic set theory, both of which have been successful at least to this extent, that they permit the derivation of modern mathematics and at the same time avoid all known paradoxes. Many symptoms show only too clearly, however, that the primitive concepts need further elucidation. (p. 140)
Quine 1967.
In a critique that also discusses the pros and cons of Ramsey (1931), W. V. O. Quine finds Russell's formulation of "types" to be "troublesome ... the confusion persists as he attempts to define '"n"th order propositions'... the method is indeed oddly devious ... the axiom of reducibility is self-effacing", etc.
Like Stephen Kleene, Quine observes that Ramsey (1926) divided the various paradoxes into two varieties (i) "those of pure set theory" and (ii) those derived from "semantic concepts such as falsity and specifiability", and Ramsey believed that the second variety should have been left out of Russell's solution. Quine ends with the opinion that "because of the confusion of propositions with sentences, and of attributes with their expressions, Russell's purported solution of the semantic paradoxes was enigmatic anyway."
Kleene 1952.
In his section "§12. First inferences from the paradoxes" (subchapter "LOGICISM"), Stephen Kleene (1952) traces the development of Russell's theory of types:
<templatestyles src="Template:Blockquote/styles.css" />To adapt the logicistic [sic] construction of mathematics to the situation arising from the discovery of the paradoxes, Russell excluded impredicative definitions by his "ramified theory of types" (1908, 1910).
Kleene observes that "to exclude impredicative definitions within a type, the types above type 0 [primary objects or individuals "not subjected to logical analysis"] are further separated into orders. Thus for type 1 [properties of individuals, i.e. logical results of the propositional calculus ], properties defined without mentioning any totality belong to "order" 0, and properties defined using the totality of properties of a given order below to the next higher order)".
Kleene, however, parenthetically observes that "the logicistic definition of natural number now becomes predicative when the [property] P in it is specified to range only over properties of a given order; in [this] case the property of being a natural number is of the next higher order". But this separation into orders makes it impossible to construct the familiar analysis, which [see Kleene's example at Impredicativity] contains impredicative definitions. To escape this outcome, Russell postulated his "axiom of reducibility". But, Kleene wonders, "on what grounds should we believe in the axiom of reducibility?" He observes that, whereas "Principia Mathematica" is presented as derived from "intuitively"-derived axioms that "were intended to be believed about the world, or at least to be accepted as plausible hypotheses concerning the world[,] ... if properties are to be constructed, the matter should be settled on the basis of constructions, not by an axiom." Indeed, he quotes Whitehead and Russell (1927) questioning their own axiom: "clearly it is not the sort of axiom with which we can rest content".
Kleene references the work of Ramsey 1926, but notes that "neither Whitehead and Russell nor Ramsey succeeded in attaining the logicistic goal constructively" and "an interesting proposal ... by Langford 1927 and Carnap 1931-2, is also not free of difficulties." Kleene ends this discussion with quotes from Weyl (1946) that "the system of "Principia Mathematica" ... [is founded on] a sort of logician's paradise" and anyone "who is ready to believe in this 'transcendental world' could also accept the system of axiomatic set theory (Zermelo, Fraenkel, etc), which, for the deduction of mathematics, has the advantage of being simpler in structure."
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x = a_1 \\vee x = a_2 \\vee \\dots \\vee x = a_k"
}
] | https://en.wikipedia.org/wiki?curid=10591072 |
10592503 | Charging station | Installation for charging electric vehicles
A charging station, also known as a charge point, chargepoint, or electric vehicle supply equipment (EVSE), is a power supply device that supplies electrical power for recharging plug-in electric vehicles (including battery electric vehicles, electric trucks, electric buses, neighborhood electric vehicles, and plug-in hybrid vehicles).
There are two main types of EV chargers: Alternating current (AC) charging stations and direct current (DC) charging stations. Electric vehicle batteries can only be charged by direct current electricity, while most mains electricity is delivered from the power grid as alternating current. For this reason, most electric vehicles have a built-in AC-to-DC converter commonly known as the "onboard charger" (OBC). At an AC charging station, AC power from the grid is supplied to this onboard charger, which converts it into DC power to recharge the battery. DC chargers provide higher power charging (which requires much larger AC-to-DC converters) by building the converter into the charging station instead of the vehicle to avoid size and weight restrictions. The station then directly supplies DC power to the vehicle, bypassing the onboard converter. Most modern electric car models can accept both AC and DC power.
Charging stations provide connectors that conform to a variety of international standards. DC charging stations are commonly equipped with multiple connectors to charge various vehicles that use competing standards.
Public charging stations are typically found street-side or at retail shopping centers, government facilities, and other parking areas. Private charging stations are usually found at residences, workplaces, and hotels.
<templatestyles src="Template:TOC limit/styles.css" />
Standards.
Multiple standards have been established for charging technology to enable interoperability across vendors. Standards are available for nomenclature, power, and connectors. Tesla developed proprietary technology in these areas and began building its charging network in 2012.
Nomenclature.
In 2011, the European Automobile Manufacturers Association (ACEA) defined the following terms:
The terms "electric vehicle connector" and "electric vehicle inlet" were previously defined in the same way under Article 625 of the United States National Electric Code (NEC) of 1999. NEC-1999 also defined the term "electric vehicle supply equipment" as the entire unit "installed specifically for the purpose of delivering energy from the premises wiring to the electric vehicle", including "conductors ... electric vehicle connectors, attachment plugs, and all other fittings, devices, power outlets, or apparatuses".
Tesla, Inc. uses the term "charging station" as the location of a group of chargers, and the term "connector" for an individual EVSE.
Voltage and power.
Early standards.
The National Electric Transportation Infrastructure Working Council (IWC) was formed in 1991 by the Electric Power Research Institute with members drawn from automotive manufacturers and the electric utilities to define standards in the United States; early work by the IWC led to the definition of three levels of charging in the 1999 National Electric Code (NEC) Handbook.
Under the 1999 NEC, Level 1 charging equipment (as defined in the NEC handbook but not in the code) was connected to the grid through a standard NEMA 5-20R 3-prong electrical outlet with grounding, and a ground-fault circuit interrupter was required within of the plug. The supply circuit required protection at 125% of the maximum rated current; for example, charging equipment rated at 16 amperes ("amps" or "A") continuous current required a breaker sized to 20 A.
Level 2 charging equipment (as defined in the handbook) was permanently wired and fastened at a fixed location under NEC-1999. It also required grounding and ground-fault protection; in addition, it required an interlock to prevent vehicle startup during charging and a safety breakaway for the cable and connector. A 40 A breaker (125% of continuous maximum supply current) was required to protect the branch circuit. For convenience and speedier charging, many early EVs preferred that owners and operators install Level 2 charging equipment, which was connected to the EV either through an inductive paddle (Magne Charge) or a conductive connector (Avcon).
Level 3 charging equipment used an off-vehicle rectifier to convert the input AC power to DC, which was then supplied to the vehicle. At the time it was written, the 1999 NEC handbook anticipated that Level 3 charging equipment would require utilities to upgrade their distribution systems and transformers.
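The 125% sizing rule amounts to a one-line calculation. The helper below is a hypothetical illustration only; the function name and the list of standard breaker ratings are assumptions made for the example, not text from the code or handbook.

```python
# Common standard breaker ratings in amperes (assumed list for illustration).
STANDARD_BREAKERS = [15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 80, 90, 100]

def required_breaker(continuous_amps: float) -> int:
    """Smallest standard breaker rated for at least 125% of the continuous load."""
    minimum = 1.25 * continuous_amps
    return next(b for b in STANDARD_BREAKERS if b >= minimum)

print(required_breaker(16))   # 20 A, matching the Level 1 example above
print(required_breaker(32))   # 40 A, consistent with the Level 2 figure above
```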
SAE.
The Society of Automotive Engineers (SAE International) defines the general physical, electrical, communication, and performance requirements for EV charging systems used in North America, as part of standard SAE J1772, initially developed in 2001. SAE J1772 defines four levels of charging, two levels each for AC and DC supplies; the differences between levels are based upon the power distribution type, standards and maximum power.
Alternating current (AC).
AC charging stations connect the vehicle's onboard charging circuitry directly to the AC supply.
Direct current (DC).
Commonly, though incorrectly, called "Level 3" charging based on the older NEC-1999 definition, DC charging is categorized separately in the SAE standard. In DC fast-charging, grid AC power is passed through an AC-to-DC converter in the station before reaching the vehicle's battery, bypassing any AC-to-DC converter on board the vehicle.
Additional standards released by SAE for charging include SAE J3068 (three-phase AC charging, using the Type 2 connector defined in IEC 62196-2) and SAE J3105 (automated connection of DC charging devices).
IEC.
In 2003, the International Electrotechnical Commission (IEC) adopted a majority of the SAE J1772 standard under IEC 62196-1 for international implementation.
The IEC alternatively defines charging in "modes" (IEC 61851-1):
The connection between the electric grid and "charger" (electric vehicle supply equipment) is defined by three cases (IEC 61851-1):
Tesla NACS.
The North American Charging Standard (NACS) was developed by Tesla, Inc. for use in the company's vehicles. It remained a proprietary standard until 2022, when its specifications were published by Tesla. The connector is physically smaller than the J1772/CCS connector, and uses the same pins for both AC and DC charging functionality.
As of November 2023, automakers Ford, General Motors, Rivian, Volvo, Polestar, Mercedes-Benz, Nissan, Honda, Jaguar, Fisker, Hyundai, BMW, Toyota, Subaru, and Lucid Motors have all committed to equipping their North American vehicles with NACS connectors in the future. Automotive startup Aptera Motors has also adopted the connector standard in its vehicles. Other automakers, such as Stellantis and Volkswagen have not made an announcement.
To meet European Union (EU) requirements on recharging points, Tesla vehicles sold in the EU are equipped with a CCS Combo 2 port. Both the North American and the EU ports take 480V DC fast charging through Tesla's network of Superchargers, which variously use NACS and CCS charging connectors. Depending on the Supercharger version, power is supplied at 72, 150, or 250 kW, the first corresponding to DC Level 1 and the second and third corresponding to DC Level 2 of SAE J1772. As of Q4 2021, Tesla reported 3,476 supercharging locations worldwide and 31,498 supercharging chargers (about 9 chargers per location on average).
Future development.
An extension to the CCS DC fast-charging standard for electric cars and light trucks is under development, which will provide higher power charging for large commercial vehicles (Class 8, and possibly 6 and 7 as well, including school and transit buses). When the Charging Interface Initiative e. V. (CharIN) task force was formed in March 2018, the new standard being developed was originally called High Power Charging (HPC) for Commercial Vehicles (HPCCV), later renamed Megawatt Charging System (MCS). MCS is expected to operate in the range of 200–1500V and 0–3000A for a theoretical maximum power of 4.5 megawatts (MW). The proposal calls for MCS charge ports to be compatible with existing CCS and HPC chargers. The task force released aggregated requirements in February 2019, which called for maximum limits of 1000V DC (optionally, 1500V DC) and 3000A continuous rating.
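The 4.5 MW figure quoted above is simply the product of the proposed voltage and current ceilings, as the one-line check below illustrates.

```python
volts, amps = 1500, 3000            # proposed MCS maximum limits
print(volts * amps / 1_000_000)     # 4.5 (megawatts)
```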
A connector design was selected in May 2019 and tested at the National Renewable Energy Laboratory (NREL) in September 2020. Thirteen manufacturers participated in the test, which checked the coupling and thermal performance of seven vehicle inlets and eleven charger connectors. The final connector requirements and specification were adopted in December 2021 as MCS connector version 3.2.
With support from Portland General Electric, on 21 April 2021 Daimler Trucks North America opened the "Electric Island", the first heavy-duty vehicle charging station, across the street from its headquarters in Portland, Oregon. The station is capable of charging eight vehicles simultaneously, and the charging bays are sized to accommodate tractor-trailers. In addition, the design is capable of accommodating >1 MW chargers once they are available. A startup company, WattEV, announced plans in May 2021 to build a 40-stall truck stop/charging station in Bakersfield, California. At full capacity, it would provide a combined 25 MW of charging power, partially drawn from an on-site solar array and battery storage.
Connectors.
Common connectors include Type 1 (Yazaki), Type 2 (Mennekes), Type 3 (Scame), CCS Combo 1 and 2, CHAdeMO, and Tesla. Many standard plug types are defined in IEC 62196-2 (for AC supplied power) and IEC 62196-3 (for DC supplied power).
<templatestyles src="Reflist/styles.css" />
CCS DC charging requires power-line communication (PLC). Two connectors are added at the bottom of Type 1 or Type 2 vehicle inlets and charging plugs to supply DC current. These are commonly known as Combo 1 or Combo 2 connectors. The choice of inlet style is normally standardized on a per-country basis, so that public chargers do not need to fit cables with both variants. Generally, North America uses Combo 1 style vehicle inlets, while most of the rest of the world uses Combo 2.
The CHAdeMO standard is favored by Nissan, Mitsubishi, and Toyota, while the SAE J1772 Combo standard is backed by GM, Ford, Volkswagen, BMW, and Hyundai. Both systems charge to 80% in approximately 20 minutes, but the two systems are incompatible. Richard Martin, editorial director for the clean technology market research and consulting firm Navigant Research, stated:
The broader conflict between the CHAdeMO and SAE Combo connectors, we see that as a hindrance to the market over the next several years that needs to be worked out.
Historical connectors.
In the United States, many of the EVs first marketed in the late 1990s and early 2000s such as the GM EV1, Ford Ranger EV, and Chevrolet S-10 EV preferred the use of Level 2 (single-phase AC) EVSE, as defined under NEC-1999, to maintain acceptable charging speed. These EVSEs were fitted with either an inductive connector (Magne Charge) or a conductive connector (generally AVCON). Proponents of the inductive system were GM, Nissan, and Toyota; DaimlerChrysler, Ford, and Honda backed the conductive system.
Magne Charge paddles were available in two different sizes: an older, larger paddle (used for the EV1 and S-10 EV) and a newer, smaller paddle (used for the first-generation Toyota RAV4 EV, but backwards compatible with large-paddle vehicles through an adapter). The larger paddle (introduced in 1994) was required to accommodate a liquid-cooled vehicle inlet charge port; the smaller paddle (introduced in 2000) interfaced with an air-cooled inlet instead. SAE J1773, which described the technical requirements for inductive paddle coupling, was first issued in January 1995, with another revision issued in November 1999.
The influential California Air Resources Board adopted the conductive connector as its standard on 28 June 2001, based on lower costs and durability, and the Magne Charge paddle was discontinued by the following March. Three conductive connectors existed at the time, named according to their manufacturers: Avcon (aka butt-and-pin, used by Ford, Solectria, and Honda); Yazaki (aka pin-and-sleeve, on the RAV4 EV); and ODU (used by DaimlerChrysler). The Avcon butt-and-pin connector supported Level 2 and Level 3 (DC) charging and was described in the appendix of the first version (1996) of the SAE J1772 recommended practice; the 2001 version moved the connector description into the body of the practice, making it the de facto standard for the United States. IWC recommended the Avcon butt connector for North America, based on environmental and durability testing. As implemented, the Avcon connector used four contacts for Level 2 (L1, L2, Pilot, Ground) and added five more (three for serial communications, and two for DC power) for Level 3 (L1, L2, Pilot, Com1, Com2, Ground, Clean Data ground, DC+, DC−). By 2009, J1772 had instead adopted the round pin-and-sleeve (Yazaki) connector as its standard implementation, and the rectangular Avcon butt connector was rendered obsolete.
Charging time.
Charging time depends on the battery's capacity, power density, and charging power. The larger the capacity, the more charge the battery can hold (analogous to the size of a fuel tank). Higher power density allows the battery to accept more charge per unit time (the size of the tank opening). Higher charging power supplies more energy per unit time (analogous to a pump's flow rate). An important downside of charging at fast speeds is that it also adds stress to the mains electricity grid.
The California Air Resources Board specified a target minimum range of 150 miles to qualify as a zero-emission vehicle, and further specified that the vehicle should allow for fast-charging.
Charge time can be calculated as:
formula_0
The effective charging power can be lower than the maximum charging power due to limitations of the battery or battery management system and due to charging losses (which can be as high as 25%), and it can vary over time because of charging limits applied by a charge controller.
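As a rough sketch of the charge-time formula above, the helper below also folds in a charging-efficiency factor to approximate losses; the 90% default efficiency and the example battery and charger sizes are illustrative assumptions, and the calculation ignores the tapering a real charge controller applies near full charge.

```python
def charge_time_hours(battery_kwh: float, charger_kw: float, efficiency: float = 0.9) -> float:
    """Estimate the hours needed to charge an empty battery.

    battery_kwh: usable battery capacity in kWh
    charger_kw:  effective charging power in kW
    efficiency:  fraction of supplied energy that reaches the battery
                 (charging losses can reach ~25%, i.e. efficiency ~0.75)
    """
    return battery_kwh / (charger_kw * efficiency)

# Illustrative comparison for an assumed 60 kWh pack:
print(f"7 kW home AC charger:   {charge_time_hours(60, 7):.1f} h")    # ~9.5 h
print(f"150 kW DC fast charger: {charge_time_hours(60, 150):.2f} h")  # ~0.44 h
```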
Battery capacity.
The usable battery capacity of a first-generation electric vehicle, such as the original Nissan Leaf, was about 20 kilowatt-hours (kWh). Tesla was the first company to introduce longer-range vehicles, initially releasing its Model S with battery capacities of 40 kWh, 60 kWh, and 85 kWh. Current plug-in hybrid vehicles typically have an electric range of 15 to 60 miles.
AC to DC conversion.
Batteries are charged with DC power. To charge from the AC power supplied by the electrical grid, EVs have a small AC-to-DC converter built into the vehicle. The charging cable supplies AC power directly from the grid, and the vehicle converts this power to DC internally and charges its battery. The built-in converters on most EVs typically support charging speeds up to 6–7 kW, sufficient for overnight charging. This is known as "AC charging". To facilitate rapid recharging of EVs, much higher power (50–100+ kW) is necessary. This requires a much larger AC-to-DC converter, which is not practical to integrate into the vehicle. Instead, the AC-to-DC conversion is performed by the charging station, and DC power is supplied to the vehicle directly, bypassing the built-in converter. This is known as "DC fast charging".
Safety.
Charging stations are usually accessible to multiple electric vehicles and are equipped with current or connection sensing mechanisms to disconnect the power when the EV is not charging.
The two main types of safety sensors are current sensors and sensor wires.
Sensor wires react more quickly, have fewer parts to fail, and are possibly less expensive to design and implement. Current sensors, however, can use standard connectors and allow suppliers to monitor or bill for the electricity actually consumed.
Public charging stations.
Longer drives require a network of public charging stations. In addition, they are essential for vehicles that lack access to a home charging station, as is common in multi-family housing. Costs vary greatly by country, power supplier, and power source. Some services charge by the minute, while others charge by the amount of energy received (measured in kilowatt-hours). In the United States, some states have prohibited charging stations from billing by the kilowatt-hour.
Charging stations may not need much new infrastructure in developed countries, less than is needed to deliver a new fuel over a new network, because they can leverage the existing, ubiquitous electrical grid.
Charging stations are offered by public authorities, commercial enterprises, and some major employers to address a range of barriers. Options include simple charging posts for roadside use, charging cabinets for covered parking places, and fully automated charging stations integrated with power distribution equipment.
As of 2012, around 50,000 non-residential charging points were deployed in the U.S., Europe, Japan and China. As of 2014, some 3,869 CHAdeMO quick chargers were deployed, with 1,978 in Japan, 1,181 in Europe, 686 in the United States, and 24 in other countries. As of December 2021, the total number of public and private EV charging stations was over 57,000 in the United States and Canada combined. As of May 2023, there were over 3.9 million public EV charging points worldwide, with Europe having over 600,000 and China leading with over 2.7 million; the United States had over 138,100 charging outlets for plug-in electric vehicles (EVs). In January 2023, S&P Global Mobility estimated that the US had about 126,500 Level 2 and 20,431 Level 3 charging stations, plus another 16,822 Tesla Superchargers and Tesla destination chargers.
Asia/Pacific.
As of 2012, Japan had 1,381 public DC fast-charging stations, the largest deployment of fast chargers in the world, but only around 300 AC chargers. As of 2012, China had around 800 public slow charging points, and no fast charging stations.
As of 2013, the largest public charging networks in Australia were in the capital cities of Perth and Melbourne, with around 30 stations (7 kW AC) established in both cities – smaller networks exist in other capital cities.
Europe.
As of 2013, Estonia was the only country that had completed the deployment of an EV charging network with nationwide coverage, with 165 fast chargers available along highways at regular intervals and a higher density in urban areas.
As of 2012, about 15,000 charging stations had been installed in Europe.
As of 2013, Norway had 4,029 charging points and 127 DC fast-charging stations. As part of its commitment to environmental sustainability, the Dutch government initiated a plan to establish over 200 fast (DC) charging stations across the country by 2015. The rollout was to be undertaken by ABB and the Dutch startup Fastned, with the aim of putting a fast charger within a short driving distance of all of the Netherlands' 16 million residents. In addition, the E-laad foundation has installed about 3,000 public (slow) charge points since 2009.
Compared to other markets, such as China, the European electric car market has developed slowly. This, together with the lack of charging stations, has limited the number of electric models available in Europe. In 2018 and 2019 the European Investment Bank (EIB) signed several projects with companies like Allego, Greenway, BeCharge and Enel X. The EIB loans will support the deployment of the charging station infrastructure with a total of €200 million. The UK government has announced that it will ban the sale of new petrol and diesel vehicles by 2035 as part of a complete shift to electric vehicles.
North America.
As of October 2023, there are 69,222 charging stations, including Level 1, Level 2 and DC fast charging stations, across the United States and Canada.
As of October 2023, in the U.S. and Canada, there are 6,502 stations with CHAdeMO connectors, 7,480 stations with SAE CCS1 connectors, and 7,171 stations with Tesla North American Charging Standard (NACS) connectors, according to the U.S. Department of Energy's Alternative Fuels Data Center.
As of August 2018, 800,000 electric vehicles and 18,000 charging stations operated in the United States, up from 5,678 public charging stations and 16,256 public charging points in 2013. By July 2020, Tesla had installed 1,971 stations (17,467 plugs).
Colder areas in northern US states and Canada have some infrastructure for public power receptacles provided primarily for use by block heaters. Although their circuit breakers prevent large current draws for other uses, they can be used to recharge electric vehicles, albeit slowly. In public lots, some such outlets are turned on only when the temperature falls below −20 °C, further limiting their value.
As of late 2023, a limited number of Tesla Superchargers are starting to open to non-Tesla vehicles through the use of a built-in CCS adapter on existing Superchargers.
Other charging networks are available for all electric vehicles. Networks like Electrify America, EVgo, ChargeFinder, and ChargePoint are popular among consumers. Electrify America currently has 15 agreements with various automakers, including Audi, BMW, Ford, Hyundai, Kia, Lucid Motors, Mercedes, and Volkswagen, under which their electric vehicles can use its network of chargers, in some cases at discounted rates or with complimentary charging. Prices are generally based on local rates, and other networks may accept cash or a credit card.
In June 2022, United States President Biden announced a plan for a standardized nationwide network of 500,000 electric vehicle charging stations in the United States by 2030 that will be agnostic to EV brand, charging company, and location. The US will provide US$5 billion between 2022 and 2026 to states through the National Electric Vehicle Infrastructure (NEVI) Formula Program to build charging stations along major highways and corridors. One such proposed corridor, called Greenlane, plans to establish charging infrastructure between Los Angeles, California, and Las Vegas, Nevada. However, by December 2023, no charging stations had been built.
Africa.
South Africa-based ElectroSA and automobile manufacturers including BMW, Nissan, and Jaguar have so far installed 80 electric car chargers nationwide.
South America.
In April 2017 YPF, the state-owned oil company of Argentina, announced that it would install 220 fast-charging stations for electric vehicles at 110 of its service stations across the country.
Projects.
Electric car manufacturers, charging infrastructure providers, and regional governments have entered into agreements and ventures to promote and provide electric vehicle networks of public charging stations.
The EV Plug Alliance is an association of 21 European manufacturers that proposed an IEC norm and a European standard for sockets and plugs. Members (Schneider Electric, Legrand, Scame, Nexans, etc.) claimed that the system was safer because it uses shutters. The prior consensus was that the IEC 62196 and IEC 61851-1 standards had already established safety by making parts non-live when touchable.
Home chargers.
Over 80% of electric vehicle charging is done at home, usually in a garage. In North America, Level 1 charging is connected to a standard 120 volt outlet and provides less than 5 miles of range per hour of charging.
To address the need for faster charging, Level 2 charging stations have become more prevalent. These stations operate at 240 volts and can significantly increase the charging speed, delivering up to 30+ miles of range per hour. Level 2 chargers offer a more practical solution for EV owners, especially for those who have higher daily mileage requirements.
Charging stations can be installed using two main methods: hardwired connections to the main electrical panel box or through a cord and plug connected to a 240-volt receptacle. A popular choice for the latter is the NEMA 14-50 receptacle. This type of outlet provides 240 volts and, when wired to a 50-amp circuit, can support charging at 40 amps according to North American electrical code. This translates to a power supply of up to 9.6 kilowatts, offering a faster and more efficient charging experience.
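The arithmetic behind those figures can be sketched as follows; the 3.5 miles-per-kWh vehicle efficiency used to translate power into range is an assumed, illustrative value.

```python
# Sketch of the North American continuous-load rule applied to a NEMA 14-50 circuit,
# using the figures quoted above (240 V, 50 A breaker, 80% continuous limit).
breaker_rating_a = 50
continuous_fraction = 0.8            # continuous loads are limited to 80% of the breaker rating
voltage_v = 240

charging_current_a = breaker_rating_a * continuous_fraction   # 40 A
charging_power_kw = voltage_v * charging_current_a / 1000     # 9.6 kW

miles_per_kwh = 3.5                                            # assumed vehicle efficiency
print(f"Charging power: {charging_power_kw:.1f} kW")
print(f"Approximate range added per hour: {charging_power_kw * miles_per_kwh:.0f} miles")
```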
Battery swap.
A battery swapping (or switching) station allows vehicles to exchange a discharged battery pack for a charged one, eliminating the charge interval. Battery swapping is common in electric forklift applications.
History.
The concept of an exchangeable battery service was proposed as early as 1896. It was first offered between 1910 and 1924, by Hartford Electric Light Company, through the GeVeCo battery service, serving electric trucks. The vehicle owner purchased the vehicle, without a battery, from General Vehicle Company (GeVeCo), part-owned by General Electric. The power was purchased from Hartford Electric in the form of an exchangeable battery. Both vehicles and batteries were designed to facilitate a fast exchange. The owner paid a variable per-mile charge and a monthly service fee to cover truck maintenance and storage. These vehicles covered more than 6 million miles.
Beginning in 1917, a similar service operated in Chicago for owners of Milburn Electric cars. 91 years later, a rapid battery replacement system was implemented to service 50 electric buses at the 2008 Summer Olympics.
Better Place, Tesla, and Mitsubishi Heavy Industries considered battery switch approaches. One complicating factor was that the approach requires vehicle design modifications.
In 2012, Tesla started building a proprietary fast-charging Tesla Supercharger network. In 2013, Tesla announced it would also support battery pack swaps. A demonstration swapping station was built at Harris Ranch and operated for a short period of time. However, customers vastly preferred using the Superchargers, so the swapping program was shut down.
Benefits.
A number of benefits were claimed for battery swapping, notably that exchanging a discharged pack for a charged one takes minutes rather than the much longer time needed to recharge it.
Providers.
The Better Place network was the first modern attempt at the battery switching model. The Renault Fluence Z.E. was the first car enabled to adopt the approach and was offered in Israel and Denmark.
Better Place launched its first battery-swapping station in Israel, in Kiryat Ekron, near Rehovot in March 2011. The exchange process took five minutes. Better Place filed for bankruptcy in Israel in May 2013.
In June 2013, Tesla announced its plan to offer battery swapping. Tesla showed that a battery swap with the Model S took just over 90 seconds. Elon Musk said the service would be offered at a price comparable to a tank of gasoline, at June 2013 prices. The vehicle purchase included one battery pack. After a swap, the owner could later return and receive their original battery pack fully charged. A second option would be to keep the swapped battery and receive or pay the difference in value between the original and the replacement; pricing for this option was not announced. In 2015 the company abandoned the idea for lack of customer interest.
By 2022, Chinese luxury carmaker Nio had built more than 900 battery swap stations across China and Europe, up from 131 in 2020.
Sites.
Unlike filling stations, which need to be located near roads that tank trucks can enter conveniently, charging stations can theoretically be placed anywhere with access to electric power and adequate parking.
Private locations include residences, workplaces, and hotels. Residences are by far the most common charging location. Residential charging stations typically lack user authentication and separate metering, and may require a dedicated circuit. Many vehicles being charged at residences simply use a cable that plugs into a standard household electrical outlet. These cables may be wall mounted.
Public stations have been sited along highways, in shopping centers, hotels, government facilities and at workplaces. Some gas stations offer EV charging stations. Some charging stations have been criticized as inaccessible, hard to find, out of order, and slow, thus slowing EV adoption.
Public charge stations may charge a fee or offer free service based on government or corporate promotions. Charge rates vary from residential rates for electricity to many times higher. The premium is usually for the convenience of faster charging. Vehicles can typically be charged without the owner present, allowing the owner to partake in other activities. Sites include malls, freeway rest areas, transit stations, and government offices. Typically, AC Type 1/Type 2 plugs are used.
Wireless charging uses inductive charging mats that charge without a wired connection and can be embedded in parking stalls or even on roadways.
Mobile charging involves another vehicle that brings the charge station to the electric vehicle; the power is supplied via a fuel generator (typically gasoline or diesel), or a large battery.
An offshore electricity recharging system named Stillstrom, to be launched by Danish shipping firm Maersk Supply Service, will give ships access to renewable energy while at sea. Connecting ships to electricity generated by offshore wind farms, Stillstrom is designed to cut emissions from idling ships.
Related technologies.
Smart grid.
A smart grid is a power grid that can adapt to changing conditions by limiting service or adjusting prices. Some charging stations can communicate with the grid and activate charging when conditions are optimal, such as when prices are relatively low. Some vehicles allow the operator to control recharging. Vehicle-to-grid scenarios allow the vehicle battery to supply the grid during periods of peak demand. This requires communication between the grid, charging station, and vehicle. SAE International is developing related standards. These include SAE J2847/1. ISO and IEC are developing similar standards known as ISO/IEC 15118, which also provide protocols for automatic payment.
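One way to picture this coordination is a simple price-threshold policy that charges only during the cheapest hours; the sketch below is purely illustrative, does not model the actual SAE J2847/1 or ISO/IEC 15118 message formats, and uses made-up example prices.

```python
# Purely illustrative smart-charging policy: pick the cheapest hours in which to charge.
def plan_charging(hourly_prices, hours_needed):
    """Return the set of hour indices in which to charge, choosing the cheapest hours."""
    ranked = sorted(range(len(hourly_prices)), key=lambda h: hourly_prices[h])
    return set(ranked[:hours_needed])

prices = [0.32, 0.28, 0.12, 0.09, 0.08, 0.10, 0.25, 0.35]  # example prices in $/kWh
charge_hours = plan_charging(prices, hours_needed=3)
for hour, price in enumerate(prices):
    action = "charge" if hour in charge_hours else "idle"
    print(f"hour {hour}: {price:.2f} $/kWh -> {action}")
```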
Renewable energy.
Electric vehicles (EVs) can be powered by renewable energy sources like wind, solar, hydropower, geothermal, biogas, and some low-impact hydroelectric sources. Renewable energy sources are generally less expensive, cleaner, and more sustainable than non-renewable sources like coal, natural gas, and petroleum power.
Charging stations are powered by whatever the power grid runs on, which might include oil, coal, and natural gas. However, many companies have been making advancements towards clean energy for their charging stations. As of November 2023, Electrify America has invested over $5 million to develop over 50 solar-powered electric vehicle (EV) charging stations in rural California, including areas like Fresno County. These resilient Level 2 (L2) stations are not tied to the electrical grid, and they provide drivers in rural areas access to EV charging via renewable resources. Electrify America's Solar Glow 1 project, a 75-megawatt solar power initiative in San Bernardino County, is expected to generate 225,000 megawatt-hours of clean electricity annually, enough to power over 20,000 homes.
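A back-of-the-envelope check of the Solar Glow 1 figures (75 MW, 225,000 MWh per year, 20,000 homes) yields the implied capacity factor and per-home consumption; the snippet below is only arithmetic on those quoted numbers.

```python
# Sanity-check arithmetic on the figures quoted above.
capacity_mw = 75
annual_energy_mwh = 225_000
homes_powered = 20_000

hours_per_year = 8760
capacity_factor = annual_energy_mwh / (capacity_mw * hours_per_year)
mwh_per_home = annual_energy_mwh / homes_powered

print(f"Implied capacity factor: {capacity_factor:.0%}")           # about 34%
print(f"Implied consumption per home: {mwh_per_home:.1f} MWh/yr")  # about 11.3 MWh/yr
```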
Tesla's Superchargers and Destination Chargers are mostly powered by solar energy. Tesla's Superchargers have solar canopies with solar panels that generate energy to offset electricity use. Some Destination Chargers have solar panels mounted on canopies or nearby rooftops to generate energy. As of 2023, Tesla's global network was 100% renewable, achieved through a combination of onsite resources and annual renewable matching.
The E-Move Charging Station is equipped with eight monocrystalline solar panels, which can supply 1.76 kW of solar power.
In 2012, Urban Green Energy introduced the world's first wind-powered electric vehicle charging station, the Sanya SkyPump. The design features a 4 kW vertical-axis wind turbine paired with a GE WattStation.
In 2021, Nova Innovation introduced the world's first direct from tidal power EV charge station.
Alternative technologies.
Along a section of the E20 highway in Sweden, which connects Stockholm, Gothenburg and Malmö, charging plates have been embedded under the asphalt; they interface with compatible electric cars, recharging them through an electromagnetic coil receiver.
This allows greater driving range and reduces the required battery size. The technology is planned to be implemented along 3,000 km of Swedish roads. Sweden's first electrified stretch of road, and the world's first permanent one, connects the Hallsberg and Örebro area. The work is scheduled for completion by 2025.
See also.
<templatestyles src="Div col/styles.css"/>
Commercial projects:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\text{Charging Time (h)} = \\frac{ \\text{Battery capacity (kWh)} }{ \\text{Charging power (kW)} }\n"
}
] | https://en.wikipedia.org/wiki?curid=10592503 |
10592733 | Positive form | In complex geometry, the term "positive form" refers to several classes of real differential forms of Hodge type "(p, p)".
(1,1)-forms.
Real ("p","p")-forms on a complex manifold "M" are forms which are of type ("p","p") and real, that is, lie in the intersection formula_0 A real (1,1)-form formula_1 is called semi-positive (sometimes just "positive"), respectively, positive (or "positive definite") if any of the following equivalent conditions holds:
Positive line bundles.
In algebraic geometry, positive definite (1,1)-forms arise as curvature forms of ample line bundles (also known as "positive line bundles"). Let "L" be a holomorphic Hermitian line bundle on a complex manifold,
formula_13
its complex structure operator. Then "L" is equipped with a unique connection preserving the Hermitian structure and satisfying
formula_14.
This connection is called "the Chern connection".
The curvature formula_15 of the Chern connection is always a
purely imaginary (1,1)-form. A line bundle "L" is called positive if formula_16 is a positive (1,1)-form. (Note that the de Rham cohomology class of formula_16 is formula_17 times the first Chern class of "L".) The Kodaira embedding theorem claims that a positive line bundle is ample, and conversely, any ample line bundle admits a Hermitian metric with formula_16 positive.
Positivity for "(p, p)"-forms.
Semi-positive (1,1)-forms on "M" form a convex cone. When "M" is a compact complex surface, formula_18, this cone is self-dual with respect to the Poincaré pairing formula_19.
For "(p, p)"-forms, where formula_20, there are two different notions of positivity. A form is called
strongly positive if it is a linear combination of products of semi-positive forms, with positive real coefficients. A real "(p, p)"-form formula_21 on an "n"-dimensional complex manifold "M" is called weakly positive if for all strongly positive "(n-p, n-p)"-forms ζ with compact support, we have formula_22.
Weakly positive and strongly positive forms form convex cones. On compact manifolds these cones are dual with respect to the Poincaré pairing.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Lambda^{p,p}(M)\\cap \\Lambda^{2p}(M,{\\mathbb R})."
},
{
"math_id": 1,
"text": "\\omega"
},
{
"math_id": 2,
"text": "-\\omega"
},
{
"math_id": 3,
"text": "dz_1, ... dz_n"
},
{
"math_id": 4,
"text": "\\Lambda^{1,0}M"
},
{
"math_id": 5,
"text": "\\omega = \\sqrt{-1} \\sum_i \\alpha_i dz_i\\wedge d\\bar z_i,"
},
{
"math_id": 6,
"text": "\\alpha_i"
},
{
"math_id": 7,
"text": "v\\in T^{1,0}M"
},
{
"math_id": 8,
"text": "-\\sqrt{-1}\\omega(v, \\bar v) \\geq 0"
},
{
"math_id": 9,
"text": ">0"
},
{
"math_id": 10,
"text": "v\\in TM"
},
{
"math_id": 11,
"text": "\\omega(v, I(v)) \\geq 0"
},
{
"math_id": 12,
"text": "I:\\; TM\\mapsto TM"
},
{
"math_id": 13,
"text": " \\bar\\partial:\\; L\\mapsto L\\otimes \\Lambda^{0,1}(M)"
},
{
"math_id": 14,
"text": "\\nabla^{0,1}=\\bar\\partial"
},
{
"math_id": 15,
"text": "\\Theta"
},
{
"math_id": 16,
"text": "\\sqrt{-1}\\Theta"
},
{
"math_id": 17,
"text": "2\\pi"
},
{
"math_id": 18,
"text": "dim_{\\mathbb C}M=2"
},
{
"math_id": 19,
"text": " \\eta, \\zeta \\mapsto \\int_M \\eta\\wedge\\zeta"
},
{
"math_id": 20,
"text": "2\\leq p \\leq dim_{\\mathbb C}M-2"
},
{
"math_id": 21,
"text": "\\eta"
},
{
"math_id": 22,
"text": "\\int_M \\eta\\wedge\\zeta\\geq 0 "
}
] | https://en.wikipedia.org/wiki?curid=10592733 |
1059530 | Simple polygon | Shape bounded by non-intersecting line segments
In geometry, a simple polygon is a polygon that does not intersect itself and has no holes. That is, it is a piecewise-linear Jordan curve consisting of finitely many line segments. These polygons include as special cases the convex polygons, star-shaped polygons, and monotone polygons.
The sum of external angles of a simple polygon is formula_0. Every simple polygon with formula_1 sides can be triangulated by formula_2 of its diagonals, and by the art gallery theorem its interior is visible from some formula_3 of its vertices.
Simple polygons are commonly seen as the input to computational geometry problems, including point in polygon testing, area computation, the convex hull of a simple polygon, triangulation, and Euclidean shortest paths.
Other constructions in geometry related to simple polygons include Schwarz–Christoffel mapping, used to find conformal maps involving simple polygons, polygonalization of point sets, constructive solid geometry formulas for polygons, and visibility graphs of polygons.
Definitions.
A simple polygon is a closed curve in the Euclidean plane consisting of straight line segments, meeting end-to-end to form a polygonal chain. Two line segments meet at every endpoint, and there are no other points of intersection between the line segments. No proper subset of the line segments has the same properties. The qualifier "simple" is sometimes omitted, with the word "polygon" assumed to mean a simple polygon.
The line segments that form a polygon are called its "edges" or "sides". An endpoint of a segment is called a "vertex" (plural: vertices) or a "corner". "Edges" and "vertices" are more formal, but may be ambiguous in contexts that also involve the edges and vertices of a graph; the more colloquial terms "sides" and "corners" can be used to avoid this ambiguity. The number of edges always equals the number of vertices. Some sources allow two line segments to form a straight angle (180°), while others disallow this, instead requiring collinear segments of a closed polygonal chain to be merged into a single longer side. Two vertices are "neighbors" if they are the two endpoints of one of the sides of the polygon.
Simple polygons are sometimes called Jordan polygons, because they are Jordan curves; the Jordan curve theorem can be used to prove that such a polygon divides the plane into two regions. Indeed, Camille Jordan's original proof of this theorem took the special case of simple polygons (stated without proof) as its starting point. The region inside the polygon (its "interior") forms a bounded set topologically equivalent to an open disk by the Jordan–Schönflies theorem, with a finite but nonzero area. The polygon itself is topologically equivalent to a circle, and the region outside (the "exterior") is an unbounded connected open set, with infinite area. Although the formal definition of a simple polygon is typically as a system of line segments, it is also possible (and common in informal usage) to define a simple polygon as a closed set in the plane, the union of these line segments with the interior of the polygon.
A "diagonal" of a simple polygon is any line segment that has two polygon vertices as its endpoints, and that otherwise is entirely interior to the polygon.
Properties.
The "internal angle" of a simple polygon, at one of its vertices, is the angle spanned by the interior of the polygon at that vertex. A vertex is "convex" if its internal angle is less than formula_4 (a straight angle, 180°) and "concave" if the internal angle is greater than formula_4. If the internal angle is formula_5, the "external angle" at the same vertex is defined to be its supplement formula_6, the turning angle from one directed side to the next. The external angle is positive at a convex vertex or negative at a concave vertex. For every simple polygon, the sum of the external angles is formula_0 (one full turn, 360°). Thus the sum of the internal angles, for a simple polygon with formula_1 sides is formula_7.
Every simple polygon can be partitioned into non-overlapping triangles by a subset of its diagonals. When the polygon has formula_1 sides, this produces formula_8 triangles, separated by formula_2 diagonals. The resulting partition is called a "polygon triangulation". The shape of a triangulated simple polygon can be uniquely determined by the internal angles of the polygon and by the cross-ratios of the quadrilaterals formed by pairs of triangles that share a diagonal.
According to the two ears theorem, every simple polygon that is not a triangle has at least two "ears", vertices whose two neighbors are the endpoints of a diagonal. A related theorem states that every simple polygon that is not a convex polygon has a "mouth", a vertex whose two neighbors are the endpoints of a line segment that is otherwise entirely exterior to the polygon. The polygons that have exactly two ears and one mouth are called "anthropomorphic polygons".
According to the art gallery theorem, in a simple polygon with formula_1 vertices, it is always possible to find a subset of at most formula_3 of the vertices with the property that every point in the polygon is visible from one of the selected vertices. This means that, for each point formula_9 in the polygon, there exists a line segment connecting formula_9 to a selected vertex, passing only through interior points of the polygon. One way to prove this is to use graph coloring on a triangulation of the polygon: it is always possible to color the vertices with three colors, so that each side or diagonal in the triangulation has two endpoints of different colors. Each point of the polygon is visible to a vertex of each color, for instance one of the three vertices of the triangle containing that point in the chosen triangulation. One of the colors is used by at most formula_3 of the vertices, proving the theorem.
Special cases.
Every convex polygon is a simple polygon. Another important class of simple polygons are the star-shaped polygons, the polygons that have a point (interior or on their boundary) from which every point is visible.
A monotone polygon, with respect to a straight line formula_10, is a polygon for which every line perpendicular to formula_10 intersects the interior of the polygon in a connected set. Equivalently, it is a polygon whose boundary can be partitioned into two monotone polygonal chains, subsequences of edges whose vertices, when projected perpendicularly onto formula_10, have the same order along formula_10 as they do in the chain.
Computational problems.
In computational geometry, several important computational tasks involve inputs in the form of a simple polygon. These include point in polygon testing (given a polygon formula_11 and a query point formula_12, determining whether formula_12 lies in the interior of formula_11), computing the area enclosed by the polygon, constructing the convex hull of a simple polygon, triangulation, and computing Euclidean shortest paths between points inside the polygon.
Other computational problems studied for simple polygons include constructions of the longest diagonal or the longest line segment interior to a polygon, of the convex skull (the largest convex polygon within the given simple polygon), and of various one-dimensional "skeletons" approximating its shape, including the medial axis and straight skeleton. Researchers have also studied producing other polygons from simple polygons using their offset curves, unions and intersections, and Minkowski sums, but these operations do not always produce a simple polygon as their result. They can be defined in a way that always produces a two-dimensional region, but this requires careful definitions of the intersection and difference operations in order to avoid creating one-dimensional features or isolated points.
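As one concrete instance of these computational tasks, the area of a simple polygon can be computed from its vertex coordinates with the shoelace (surveyor's) formula; the sketch below assumes the vertices are given in order around the boundary, which is exactly the simple-polygon condition.

```python
def polygon_area(vertices):
    """Area of a simple polygon via the shoelace formula.

    vertices: list of (x, y) pairs in order (clockwise or counterclockwise) around
    the boundary; taking the absolute value makes the result orientation-independent.
    """
    n = len(vertices)
    twice_area = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # next vertex, wrapping around to the first
        twice_area += x1 * y2 - x2 * y1
    return abs(twice_area) / 2.0

# Example: a unit square has area 1.
print(polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1.0
```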
Related constructions.
According to the Riemann mapping theorem, any simply connected open subset of the plane can be conformally mapped onto a disk. Schwarz–Christoffel mapping provides a method to explicitly construct a map from a disk to any simple polygon using specified vertex angles and pre-images of the polygon vertices on the boundary of the disk. These "pre-vertices" are typically computed numerically.
Every finite set of points in the plane that does not lie on a single line can be connected to form the vertices of a simple polygon (allowing 180° angles); for instance, one such polygon is the solution to the traveling salesperson problem. Connecting points to form a polygon in this way is called polygonalization.
Every simple polygon can be represented by a formula in constructive solid geometry that constructs the polygon (as a closed set including the interior) from unions and intersections of half-planes, with each side of the polygon appearing once as a half-plane in the formula. Converting an formula_1-sided polygon into this representation can be performed in time formula_13.
The visibility graph of a simple polygon connects its vertices by edges representing the sides and diagonals of the polygon. It always contains a Hamiltonian cycle, formed by the polygon sides. The computational complexity of reconstructing a polygon that has a given graph as its visibility graph, with a specified Hamiltonian cycle as its cycle of sides, remains an open problem.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2\\pi"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "n-3"
},
{
"math_id": 3,
"text": "\\lfloor n/3\\rfloor"
},
{
"math_id": 4,
"text": "\\pi"
},
{
"math_id": 5,
"text": "\\theta"
},
{
"math_id": 6,
"text": "\\pi-\\theta"
},
{
"math_id": 7,
"text": "(n-2)\\pi"
},
{
"math_id": 8,
"text": "n-2"
},
{
"math_id": 9,
"text": "p"
},
{
"math_id": 10,
"text": "L"
},
{
"math_id": 11,
"text": "P"
},
{
"math_id": 12,
"text": "q"
},
{
"math_id": 13,
"text": "O(n\\log n)"
}
] | https://en.wikipedia.org/wiki?curid=1059530 |