Metal–organic framework
Metal–organic frameworks (MOFs) are a class of porous polymers consisting of metal clusters (also known as secondary building units, SBUs) coordinated to organic ligands to form one-, two- or three-dimensional structures. The organic ligands included are sometimes referred to as "struts" or "linkers", one example being 1,4-benzenedicarboxylic acid (BDC).
More formally, a metal–organic framework is a potentially porous extended structure made from metal ions and organic linkers. An extended structure is a structure whose sub-units occur in a constant ratio and are arranged in a repeating pattern. MOFs are a subclass of coordination networks: a coordination network is a coordination compound extending, through repeating coordination entities, in one dimension but with cross-links between two or more individual chains, loops, or spiro-links, or a coordination compound extending through repeating coordination entities in two or three dimensions. Coordination networks, including MOFs, in turn belong to the broader class of coordination polymers, which are coordination compounds with repeating coordination entities extending in one, two, or three dimensions. Most MOFs reported in the literature are crystalline compounds, but amorphous MOFs and other disordered phases also exist.
In most cases for MOFs, the pores are stable during the elimination of the guest molecules (often solvents) and could be refilled with other compounds. Because of this property, MOFs are of interest for the storage of gases such as hydrogen and carbon dioxide. Other possible applications of MOFs are in gas purification, in gas separation, in water remediation, in catalysis, as conducting solids and as supercapacitors.
The synthesis and properties of MOFs constitute the primary focus of the discipline called reticular chemistry (from Latin reticulum, "small net"). In contrast to MOFs, covalent organic frameworks (COFs) are made entirely from light elements (H, B, C, N, and O) with extended structures.
Structure.
MOFs are composed of two main components: an inorganic metal cluster (often referred to as a secondary building unit or SBU) and an organic molecule called a linker. For this reason, the materials are often referred to as hybrid organic-inorganic materials. The organic units are typically mono-, di-, tri-, or tetravalent ligands. The choice of metal and linker dictates the structure and hence the properties of the MOF. For example, the metal's coordination preference influences the size and shape of pores by dictating how many ligands can bind to the metal, and in which orientation.
To describe and organize the structures of MOFs, a system of nomenclature has been developed. Subunits of a MOF, called secondary building units (SBUs), can be described by topologies common to several structures. Each topology, also called a net, is assigned a symbol, consisting of three lower-case letters in bold. MOF-5, for example, has a pcu net.
Attached to the SBUs are bridging ligands. For MOFs, typical bridging ligands are di- and tricarboxylic acids. These ligands typically have rigid backbones. Examples are benzene-1,4-dicarboxylic acid (BDC or terephthalic acid), biphenyl-4,4′-dicarboxylic acid (BPDC), and the tricarboxylic acid trimesic acid.
Synthesis.
General synthesis.
The study of MOFs has roots in coordination chemistry and solid-state inorganic chemistry, but it developed into a new field. Unlike zeolites, MOFs are constructed from bridging organic ligands that remain intact throughout the synthesis. Zeolite synthesis often makes use of a "template". Templates are ions that influence the structure of the growing inorganic framework. Typical templating ions are quaternary ammonium cations, which are removed later. In MOFs, the framework is templated by the SBU (secondary building unit) and the organic ligands. A templating approach that is useful for MOFs intended for gas storage is the use of metal-binding solvents such as N,N-diethylformamide and water. In these cases, metal sites are exposed when the solvent is evacuated, allowing hydrogen to bind at these sites.
Four developments were particularly important in advancing the chemistry of MOFs. (1) The geometric principle of construction, in which metal-containing units are kept in rigid shapes. Early MOFs contained single metal atoms linked to ditopic coordinating linkers. The approach not only led to the identification of a small number of preferred topologies that could be targeted in designed synthesis, but was also central to achieving permanent porosity. (2) The use of the isoreticular principle, in which the size and the nature of a structure change without changing its topology, led to MOFs with ultrahigh porosity and unusually large pore openings. (3) Post-synthetic modification of MOFs increased their functionality by reacting organic units and metal–organic complexes with linkers. (4) Multifunctional MOFs incorporated multiple functionalities in a single framework.
Since ligands in MOFs typically bind reversibly, the slow growth of crystals often allows defects to be redissolved, resulting in a material with millimeter-scale crystals and a near-equilibrium defect density. Solvothermal synthesis is useful for growing crystals suitable to structure determination, because crystals grow over the course of hours to days. However, the use of MOFs as storage materials for consumer products demands an immense scale-up of their synthesis. Scale-up of MOFs has not been widely studied, though several groups have demonstrated that microwaves can be used to nucleate MOF crystals rapidly from solution. This technique, termed "microwave-assisted solvothermal synthesis", is widely used in the zeolite literature, and produces micron-scale crystals in a matter of seconds to minutes, in yields similar to the slow growth methods.
Some MOFs, such as the mesoporous MIL-100(Fe), can be obtained under mild conditions at room temperature and in green solvents (water, ethanol) through scalable synthesis methods.
A solvent-free synthesis of a range of crystalline MOFs has been described. Usually the metal acetate and the organic proligand are mixed and ground up with a ball mill. Cu3(BTC)2 can be quickly synthesized in this way in quantitative yield. In the case of Cu3(BTC)2, the morphology of the solvent-free synthesized product was the same as that of the industrially made Basolite C300. It is thought that localized melting of the components due to the high collision energy in the ball mill may assist the reaction. The formation of acetic acid as a by-product may also assist the reaction by exerting a solvent effect in the ball mill. It has been shown that the addition of small quantities of ethanol in the mechanochemical synthesis of Cu3(BTC)2 significantly reduces the amount of structural defects in the obtained material.
A recent advancement in the solvent-free preparation of MOF films and composites is their synthesis by chemical vapor deposition. This process, MOF-CVD, was first demonstrated for ZIF-8 and consists of two steps. In the first step, metal oxide precursor layers are deposited. In the second step, these precursor layers are exposed to sublimed ligand molecules, which induce a phase transformation to the MOF crystal lattice. Formation of water during this reaction plays a crucial role in directing the transformation. This process has been successfully scaled up to an integrated cleanroom process conforming to industrial microfabrication standards.
Numerous methods have been reported for the growth of MOFs as oriented thin films. However, these methods are suitable only for the synthesis of a small number of MOF topologies. One such example is vapor-assisted conversion (VAC), which can be used for the thin-film synthesis of several UiO-type MOFs.
High-throughput synthesis.
High-throughput (HT) methods are a part of combinatorial chemistry and a tool for increasing efficiency. There are two synthetic strategies within HT methods: in the combinatorial approach, all reactions take place in one vessel, which leads to product mixtures, whereas in parallel synthesis the reactions take place in different vessels. Furthermore, a distinction is made between thin-film and solvent-based methods.
Solvothermal synthesis can be carried out conventionally in a Teflon reactor in a convection oven or in glass reactors in a microwave oven (high-throughput microwave synthesis). The use of a microwave oven can change the reaction parameters, in some cases dramatically.
In addition to solvothermal synthesis, there have been advances in using supercritical fluids as solvents in continuous flow reactors. Supercritical water was first used in 2012 to synthesize copper- and nickel-based MOFs in just seconds. In 2020, supercritical carbon dioxide was used in a continuous flow reactor on the same time scale as the supercritical water-based method, but the lower critical point of carbon dioxide allowed for the synthesis of the zirconium-based MOF UiO-66.
High-throughput solvothermal synthesis.
In high-throughput solvothermal synthesis, a solvothermal reactor with (e.g.) 24 cavities for Teflon reactors is used. Such a reactor is sometimes referred to as a multiclav. The reactor block or reactor insert is made of stainless steel and contains 24 reaction chambers, which are arranged in four rows. With the miniaturized Teflon reactors, volumes of up to 2 mL can be used. The reactor block is sealed in a stainless steel autoclave; for this purpose, the filled Teflon reactors are inserted into the reactor block, the Teflon reactors are sealed with two Teflon films, and the top side of the reactor is put on. The autoclave is then closed in a hydraulic press. The sealed solvothermal reactor can then be subjected to a temperature-time program. The reusable Teflon film serves to withstand the mechanical stress, while the disposable Teflon film seals the reaction vessels. After the reaction, the products can be isolated and washed in parallel in a vacuum filter device. On the filter paper, the products are then present separately in a so-called sample library and can subsequently be characterized by automated X-ray powder diffraction. The information obtained is then used to plan further syntheses.
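Planning such a sample library is essentially bookkeeping over a grid of composition parameters. The following minimal Python sketch enumerates a hypothetical 24-experiment composition screen for a single reactor block; the salts, linkers, and molar ratios are illustrative placeholders rather than reported reaction conditions, and in practice the whole block shares one temperature-time program.

```python
from itertools import product

# Hypothetical composition screen for one 24-cavity reactor block; the
# reagents and ratios are placeholders, not literature recipes.
metal_salts = ["Zn(NO3)2·6H2O", "Cu(NO3)2·3H2O"]   # 2 options
linkers = ["H2BDC", "H3BTC", "H2BPDC"]              # 3 options
metal_to_linker = [0.5, 1.0, 2.0, 3.0]              # 4 molar ratios

plan = [
    {"cavity": i + 1, "metal": m, "linker": l, "ratio": r}
    for i, (m, l, r) in enumerate(product(metal_salts, linkers, metal_to_linker))
]
assert len(plan) == 24   # exactly fills the 24-cavity block

for exp in plan:
    print(f"cavity {exp['cavity']:2d}: {exp['metal']} + {exp['linker']}, "
          f"metal:linker = {exp['ratio']}")
```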
Pseudomorphic replication.
Pseudomorphic mineral replacement events occur whenever a mineral phase comes into contact with a fluid with which it is out of equilibrium. Re-equilibration will tend to take place to reduce the free energy and transform the initial phase into a more thermodynamically stable phase, involving dissolution and reprecipitation subprocesses.
Inspired by such geological processes, MOF thin films can be grown through the combination of atomic layer deposition (ALD) of aluminum oxide onto a suitable substrate (e.g. FTO) and subsequent solvothermal microwave synthesis. The aluminum oxide layer serves both as an architecture-directing agent and as a metal source for the backbone of the MOF structure. The construction of the porous 3D metal-organic framework takes place during the microwave synthesis, when the atomic-layer-deposited substrate is exposed to a solution of the requisite linker in a DMF/H2O 3:1 mixture (v/v) at elevated temperature. In an analogous approach, Kornienko and coworkers described in 2015 the synthesis of a cobalt-porphyrin MOF (Al2(OH)2TCPP-Co; TCPP-H2=4,4′,4″,4‴-(porphyrin-5,10,15,20-tetrayl)tetrabenzoate), the first MOF catalyst constructed for the electrocatalytic conversion of aqueous CO2 to CO.
Post-synthetic modification.
Although the three-dimensional structure and internal environment of the pores can be in theory controlled through proper selection of nodes and organic linking groups, the direct synthesis of such materials with the desired functionalities can be difficult due to the high sensitivity of MOF systems. Thermal and chemical sensitivity, as well as high reactivity of reaction materials, can make forming desired products challenging to achieve. The exchange of guest molecules and counter-ions and the removal of solvents allow for some additional functionality but are still limited to the integral parts of the framework. The post-synthetic exchange of organic linkers and metal ions is an expanding area of the field and opens up possibilities for more complex structures, increased functionality, and greater system control.
Ligand exchange.
Post-synthetic modification techniques can be used to exchange an existing organic linking group in a prefabricated MOF with a new linker by ligand exchange or partial ligand exchange. This exchange allows the pores and, in some cases, the overall framework of MOFs to be tailored for specific purposes. Some of these uses include fine-tuning the material for selective adsorption, gas storage, and catalysis. To perform ligand exchange, prefabricated MOF crystals are washed with solvent and then soaked in a solution of the new linker. The exchange often requires heat and occurs on the time scale of a few days. Post-synthetic ligand exchange also enables the incorporation of functional groups into MOFs that otherwise would not survive MOF synthesis, due to temperature, pH, or other reaction conditions, or would hinder the synthesis itself by competing with donor groups on the incoming ligand.
Metal exchange.
Post-synthetic modification techniques can also be used to exchange an existing metal ion in a prefabricated MOF with a new metal ion by metal ion exchange. The complete metal metathesis from an integral part of the framework has been achieved without altering the framework or pore structure of the MOF. Similarly to post-synthetic ligand exchange, post-synthetic metal exchange is performed by washing prefabricated MOF crystals with solvent and then soaking the crystal in a solution of the new metal. Post-synthetic metal exchange allows for a simple route to the formation of MOFs with the same framework yet different metal ions.
Stratified synthesis.
In addition to modifying the functionality of the ligands and metals themselves, post-synthetic modification can be used to expand upon the structure of the MOF. Using post-synthetic modification, MOFs can be converted from a highly ordered crystalline material into a heterogeneous porous material. Post-synthetic techniques also make it possible to install, in a controlled way, domains within a MOF crystal that exhibit unique structural and functional characteristics. Core-shell MOFs and other layered MOFs have been prepared in which layers have unique functionalization but, in most cases, remain crystallographically compatible from layer to layer.
Open coordination sites.
In some cases MOF metal nodes have an unsaturated coordination environment, and it is possible to modify this environment using different techniques. If the size of the ligand matches the size of the pore aperture, additional ligands can be installed onto the existing MOF structure. Sometimes metal nodes have a good binding affinity for inorganic species. For instance, it has been shown that metal nodes can be extended to form a bond with the uranyl cation.
Composite materials.
Another approach to increasing adsorption in MOFs is to alter the system in such a way that chemisorption becomes possible. This functionality has been introduced by making a composite material, which contains a MOF and a complex of platinum with activated carbon. In an effect known as hydrogen spillover, H2 can bind to the platinum surface through a dissociative mechanism which cleaves the hydrogen molecule into two hydrogen atoms and enables them to travel down the activated carbon onto the surface of the MOF. This innovation produced a threefold increase in the room-temperature storage capacity of a MOF; however, desorption can take upwards of 12 hours, and reversible desorption is sometimes observed for only two cycles. The relationship between hydrogen spillover and hydrogen storage properties in MOFs is not well understood but may prove relevant to hydrogen storage.
Catalysis.
MOFs have potential as heterogeneous catalysts, although applications have not been commercialized. Their high surface area, tunable porosity, and diversity in metals and functional groups make them especially attractive for use as catalysts. Zeolites are extraordinarily useful in catalysis, but they are limited by the fixed tetrahedral coordination of the Si/Al connecting points and the two-coordinate oxide linkers. Fewer than 200 zeolites are known. In contrast with this limited scope, MOFs exhibit more diverse coordination geometries, polytopic linkers, and ancillary ligands (F−, OH−, H2O among others). It is also difficult to obtain zeolites with pore sizes larger than 1 nm, which limits the catalytic applications of zeolites to relatively small organic molecules (typically no larger than xylenes). Furthermore, the mild synthetic conditions typically employed for MOF synthesis allow direct incorporation of delicate functionalities into the framework structures. Such a process would not be possible with zeolites or other microporous crystalline oxide-based materials because of the harsh conditions typically used for their synthesis (e.g., calcination at high temperatures to remove organic templates). MIL-101, which incorporates different transition metals such as Cr, is one of the most widely used MOFs for catalysis. However, the stability of some MOF photocatalysts in aqueous media and under strongly oxidizing conditions is very low.
Zeolites still cannot be obtained in enantiopure form, which precludes their application in catalytic asymmetric synthesis, e.g., for the pharmaceutical, agrochemical, and fragrance industries. Enantiopure chiral ligands or their metal complexes have been incorporated into MOFs to give efficient asymmetric catalysts. Some MOF materials may even bridge the gap between zeolites and enzymes by combining isolated polynuclear sites, dynamic host–guest responses, and a hydrophobic cavity environment. MOFs might also be useful for making semiconductors. Theoretical calculations show that MOFs are semiconductors or insulators with band gaps between 1.0 and 5.5 eV, which can be altered by changing the degree of conjugation in the ligands, indicating their potential as photocatalysts.
Design.
Like other heterogeneous catalysts, MOFs may allow for easier post-reaction separation and recyclability than homogeneous catalysts. In some cases, they also give a highly enhanced catalyst stability. Additionally, they typically offer substrate-size selectivity. Nevertheless, while clearly important for reactions in living systems, selectivity on the basis of substrate size is of limited value in abiotic catalysis, as reasonably pure feedstocks are generally available.
Metal ions or metal clusters.
Among the earliest reports of MOF-based catalysis was the cyanosilylation of aldehydes by a 2D MOF (layered square grids) of formula Cd(4,4′-bpy)2(NO3)2. This investigation centered mainly on size- and shape-selective clathration. A second set of examples was based on a two-dimensional, square-grid MOF containing single Pd(II) ions as nodes and 2-hydroxypyrimidinolates as struts. Despite initial coordinative saturation, the palladium centers in this MOF catalyze alcohol oxidation, olefin hydrogenation, and Suzuki C–C coupling. At a minimum, these reactions necessarily entail redox oscillations of the metal nodes between Pd(II) and Pd(0) intermediates, accompanied by drastic changes in coordination number, which would certainly lead to destabilization and potential destruction of the original framework if all the Pd centers were catalytically active. The observation of substrate shape- and size-selectivity implies that the catalytic reactions are heterogeneous and are indeed occurring within the MOF. Nevertheless, at least for hydrogenation, it is difficult to rule out the possibility that catalysis is occurring at the surface of MOF-encapsulated palladium clusters/nanoparticles (i.e., partial decomposition sites) or defect sites, rather than at transiently labile, but otherwise intact, single-atom MOF nodes. "Opportunistic" MOF-based catalysis has been described for the cubic compound MOF-5. This material comprises coordinatively saturated Zn4O nodes and fully complexed BDC struts (see above for abbreviation); yet it apparently catalyzes the Friedel–Crafts tert-butylation of both toluene and biphenyl. Furthermore, para alkylation is strongly favored over ortho alkylation, a behavior thought to reflect the encapsulation of reactants by the MOF.
Functional struts.
The porous-framework material [Cu3(btc)2(H2O)3], also known as HKUST-1, contains large cavities with windows of diameter ~6 Å. The coordinated water molecules are easily removed, leaving open Cu(II) sites. Kaskel and co-workers showed that these Lewis acid sites could catalyze the cyanosilylation of benzaldehyde or acetone. The anhydrous version of HKUST-1 is an acid catalyst. The product selectivities of three reactions that distinguish Brønsted- from Lewis-acid-catalyzed pathways (the isomerization of α-pinene oxide, the cyclization of citronellal, and the rearrangement of α-bromoacetals) indicate that [Cu3(btc)2] functions primarily as a Lewis acid catalyst. The product selectivity and yield of catalytic reactions (e.g. cyclopropanation) have also been shown to be affected by defect sites, such as Cu(I) or incompletely deprotonated carboxylic acid moieties of the linkers.
MIL-101, a large-cavity MOF having the formula [Cr3F(H2O)2O(BDC)3], is a cyanosilylation catalyst. The coordinated water molecules in MIL-101 are easily removed to expose Cr(III) sites. As one might expect, given the greater Lewis acidity of Cr(III) vs. Cu(II), MIL-101 is much more active than HKUST-1 as a catalyst for the cyanosilylation of aldehydes. Additionally, the Kaskel group observed that the catalytic sites of MIL-101, in contrast to those of HKUST-1, are immune to unwanted reduction by benzaldehyde. The Lewis-acid-catalyzed cyanosilylation of aromatic aldehydes has also been carried out by Long and co-workers using a MOF of the formula Mn3[(Mn4Cl)3BTT8(CH3OH)10]. This material contains a three-dimensional pore structure, with a pore diameter of 10 Å. In principle, either of the two types of Mn(II) sites could function as a catalyst. Noteworthy features of this catalyst are high conversion yields (for small substrates) and good substrate-size selectivity, consistent with channel-localized catalysis.
Encapsulated catalysts.
The MOF encapsulation approach invites comparison to earlier studies of oxidative catalysis by zeolite-encapsulated Fe(porphyrin) as well as Mn(porphyrin) systems. The zeolite studies generally employed iodosylbenzene (PhIO), rather than TBHP ("tert"-butyl hydroperoxide), as oxidant. The difference is likely mechanistically significant, thus complicating comparisons. Briefly, PhIO is a single-oxygen-atom donor, while TBHP is capable of more complex behavior. In addition, for the MOF-based system, it is conceivable that oxidation proceeds via both oxygen transfer from a manganese oxo intermediate and a manganese-initiated radical chain reaction pathway. Regardless of mechanism, the approach is a promising one for isolating and thereby stabilizing the porphyrins against both oxo-bridged dimer formation and oxidative degradation.
Metal-free organic cavity modifiers.
Most examples of MOF-based catalysis make use of metal ions or atoms as active sites. Among the few exceptions are two nickel- and two copper-containing MOFs synthesized by Rosseinsky and co-workers. These compounds employ amino acids (L- or D-aspartate) together with dipyridyls as struts. The coordination chemistry is such that the amine group of the aspartate cannot be protonated by added HCl, but one of the aspartate carboxylates can. Thus, the framework-incorporated amino acid can exist in a form that is not accessible for the free amino acid. While the nickel-based compounds are marginally porous, on account of tiny channel dimensions, the copper versions are clearly porous.
The Rosseinsky group showed that the carboxylic acids behave as Brønsted acidic catalysts, facilitating (in the copper cases) the ring-opening methanolysis of a small, cavity-accessible epoxide at up to 65% yield. Superior homogeneous catalysts exist however.
Kitagawa and co-workers have reported the synthesis of a catalytic MOF having the formula [Cd(4-BTAPA)2(NO3)2]. The MOF is three-dimensional, consisting of an identical catenated pair of networks, yet still featuring pores of molecular dimensions. The nodes consist of single cadmium ions, octahedrally ligated by pyridyl nitrogens. From a catalysis standpoint, however, the most interesting feature of this material is the presence of guest-accessible amide functionalities. The amides are capable of base-catalyzing the Knoevenagel condensation of benzaldehyde with malononitrile. Reactions with larger nitriles, however, are only marginally accelerated, implying that catalysis takes place chiefly within the material's channels rather than on its exterior. A noteworthy finding is the lack of catalysis by the free strut in homogeneous solution, evidently due to intermolecular H-bonding between the 4-BTAPA molecules. Thus, the MOF architecture elicits catalytic activity not otherwise encountered.
In an interesting alternative approach, Férey and coworkers were able to modify the interior of MIL-101 via Cr(III) coordination of one of the two available nitrogen atoms of each of several ethylenediamine molecules. The free non-coordinated ends of the ethylenediamines were then used as Brønsted basic catalysts, again for Knoevenagel condensation of benzaldehyde with nitriles.
A third approach has been described by Kimoon Kim and coworkers. Using a pyridine-functionalized derivative of tartaric acid and a Zn(II) source, they were able to synthesize a 2D MOF termed POST-1. POST-1 possesses 1D channels whose cross sections are defined by six trinuclear zinc clusters and six struts. While three of the six pyridines are coordinated by zinc ions, the remaining three are protonated and directed toward the channel interior. When neutralized, the noncoordinated pyridyl groups are found to catalyze transesterification reactions, presumably by facilitating deprotonation of the reactant alcohol. The absence of significant catalysis when large alcohols are employed strongly suggests that the catalysis occurs within the channels of the MOF.
Achiral catalysis.
Metals as catalytic sites.
The metals in the MOF structure often act as Lewis acids. The metals in MOFs often coordinate to labile solvent molecules or counter-ions, which can be removed after activation of the framework. The Lewis acidic nature of such unsaturated metal centers can activate the coordinated organic substrates for subsequent organic transformations. The use of unsaturated metal centers was demonstrated in the cyanosilylation of aldehydes and imines by Fujita and coworkers in 2004. They reported a MOF of composition {[Cd(4,4′-bpy)2(H2O)2] • (NO3)2 • 4H2O}, obtained by treating the linear bridging ligand 4,4′-bipyridine (bpy) with cadmium nitrate. The Cd(II) centers in this MOF possess a distorted octahedral geometry, with four pyridines in the equatorial positions and two water molecules in the axial positions, forming a two-dimensional infinite network. On activation, the two water molecules are removed, leaving the metal centers unsaturated and Lewis acidic. The Lewis acidic character of the metal center was tested in cyanosilylation reactions of imines, in which the imine attaches to the Lewis-acidic metal center, increasing the electrophilicity of the imine. For the cyanosilylation of imines, most of the reactions were complete within 1 h, affording aminonitriles in quantitative yield. Kaskel and coworkers carried out similar cyanosilylation reactions with coordinatively unsaturated metals in three-dimensional (3D) MOFs as heterogeneous catalysts. The 3D framework [Cu3(btc)2(H2O)3] (btc: benzene-1,3,5-tricarboxylate) (HKUST-1) used in this study was first reported by Williams "et al." The open framework of [Cu3(btc)2(H2O)3] is built from dimeric cupric tetracarboxylate units (paddle-wheels) with aqua molecules coordinating to the axial positions and btc bridging ligands. After removal of the two water molecules from the axial positions, the resulting framework possesses porous channels. This activated MOF catalyzes the trimethylcyanosilylation of benzaldehydes with a very low conversion (<5% in 24 h) at 293 K. As the reaction temperature was raised to 313 K, a good conversion of 57% with a selectivity of 89% was obtained after 72 h. In comparison, less than 10% conversion was observed for the background reaction (without MOF) under the same conditions. However, this strategy suffers from several problems: (1) decomposition of the framework at higher reaction temperatures owing to the reduction of Cu(II) to Cu(I) by aldehydes; (2) a strong solvent-inhibition effect, in which electron-donating solvents such as THF compete with aldehydes for coordination to the Cu(II) sites, so that no cyanosilylation product is observed in these solvents; and (3) framework instability in some organic solvents. Several other groups have also reported the use of metal centers in MOFs as catalysts. The electron-deficient nature of some metals and metal clusters also makes the resulting MOFs efficient oxidation catalysts. Mori and coworkers reported MOFs with Cu2 paddle-wheel units as heterogeneous catalysts for the oxidation of alcohols. The catalytic activity of the resulting MOF was examined by carrying out alcohol oxidation with H2O2 as the oxidant. It catalyzed the oxidation of primary, secondary, and benzylic alcohols with high selectivity. Hill "et al." have demonstrated the sulfoxidation of thioethers using a MOF based on vanadium-oxo cluster V6O13 building units.
Functional linkers as catalytic sites.
Functional linkers can also be utilized as catalytic sites. A 3D MOF {[Cd(4-BTAPA)2(NO3)2] • 6H2O • 2DMF} (4-BTAPA = 1,3,5-benzene tricarboxylic acid tris[N-(4-pyridyl)amide], DMF = N,N-dimethylformamide), constructed from tridentate amide linkers and a cadmium salt, catalyzes the Knoevenagel condensation reaction. The pyridine groups on the ligand 4-BTAPA act as ligands binding to the octahedral cadmium centers, while the amide groups can provide the functionality for interaction with the incoming substrates. Specifically, the −NH moiety of the amide group can act as an electron acceptor whereas the C=O group can act as an electron donor to activate organic substrates for subsequent reactions. Férey "et al." reported a robust and highly porous MOF [Cr3(μ3-O)F(H2O)2(BDC)3] (BDC: benzene-1,4-dicarboxylate) in which, instead of directly using the unsaturated Cr(III) centers as catalytic sites, the authors grafted ethylenediamine (ED) onto the Cr(III) sites. The uncoordinated ends of ED can act as base catalytic sites. The ED-grafted MOF was investigated for Knoevenagel condensation reactions, and a significant increase in conversion was observed compared to the untreated framework (98% vs. 36%). Another example of linker modification to generate catalytic sites is the iodo-functionalization of the well-known Al-based MOFs MIL-53 and DUT-5 and the Zr-based MOFs UiO-66 and UiO-67 for the catalytic oxidation of diols.
Entrapment of catalytically active noble metal nanoparticles.
The entrapment of catalytically active noble metals can be accomplished by grafting functional groups onto the unsaturated metal sites of MOFs. Ethylenediamine (ED) has been shown to be grafted onto the Cr metal sites and can be further modified to encapsulate noble metals such as Pd. The entrapped Pd has catalytic activity similar to that of Pd/C in the Heck reaction. Ruthenium nanoparticles entrapped in the MOF-5 framework have catalytic activity in a number of reactions. This Ru-encapsulated MOF catalyzes the oxidation of benzyl alcohol to benzaldehyde, although degradation of the MOF occurs. The same catalyst was used in the hydrogenation of benzene to cyclohexane. In another example, Pd nanoparticles embedded within a defective HKUST-1 framework enable the generation of tunable Lewis basic sites. This multifunctional Pd/MOF composite is therefore able to perform stepwise benzyl alcohol oxidation and Knoevenagel condensation.
Reaction hosts with size selectivity.
MOFs might prove useful for both photochemical and polymerization reactions due to the tunability of the size and shape of their pores. A 3D MOF {[Co(bpdc)3(bpy)] • 4DMF • H2O} (bpdc: biphenyldicarboxylate, bpy: 4,4′-bipyridine) was synthesized by Li and coworkers. Using this MOF, the photochemistry of "o"-methyl dibenzyl ketone ("o"-MeDBK) was extensively studied. This molecule was found to have a variety of photochemical reaction properties, including the production of cyclopentanol. MOFs have been used to study polymerization in the confined space of MOF channels. Polymerization reactions in confined space might have different properties than polymerization in open space. Styrene, divinylbenzene, substituted acetylenes, methyl methacrylate, and vinyl acetate have all been studied by Kitagawa and coworkers as possible activated monomers for radical polymerization. Due to the different linker sizes, the MOF channel cross section could be tuned on the order of roughly 25 to 100 Å2. The channels were shown to stabilize propagating radicals and suppress termination reactions when used as radical polymerization sites.
Asymmetric catalysis.
Several strategies exist for constructing homochiral MOFs. Crystallization of homochiral MOFs via self-resolution from achiral linker ligands is one way to accomplish this goal; however, the resulting bulk samples contain both enantiomorphs and are racemic. Aoyama and coworkers successfully obtained homochiral MOFs in the bulk from achiral ligands by carefully controlling nucleation in the crystal growth process. Zheng and coworkers reported the synthesis of homochiral MOFs from achiral ligands by chemically manipulating the statistical fluctuation of the formation of enantiomeric pairs of crystals. Growing MOF crystals under chiral influences is another approach to obtaining homochiral MOFs using achiral linker ligands. Rosseinsky and coworkers have introduced a chiral coligand to direct the formation of homochiral MOFs by controlling the handedness of the helices during crystal growth. Morris and coworkers utilized an ionic liquid with chiral cations as the reaction medium for synthesizing MOFs, and obtained homochiral MOFs. The most straightforward and rational strategy for synthesizing homochiral MOFs, however, is to use readily available chiral linker ligands for their construction.
Homochiral MOFs with interesting functionalities and reagent-accessible channels.
Homochiral MOFs have been made by Lin and coworkers using 2,2′-bis(diphenylphosphino)-1,1′-binaphthyl (BINAP) and 1,1′-bi-2,2′-naphthol (BINOL) as chiral ligands. These ligands can coordinate with catalytically active metal sites to enhance the enantioselectivity. A variety of linking groups such as pyridine, phosphonic acid, and carboxylic acid can be selectively introduced to the 3,3′, 4,4′, and the 6,6′ positions of the 1,1'-binaphthyl moiety. Moreover, by changing the length of the linker ligands the porosity and framework structure of the MOF can be selectively tuned.
Postmodification of homochiral MOFs.
Lin and coworkers have shown that the postmodification of MOFs can be achieved to produce enantioselective homochiral MOFs for use as catalysts. The resulting 3D homochiral MOF {[Cd3(L)3Cl6] • 4DMF • 6MeOH • 3H2O} (L = (R)-6,6'-dichloro-2,2'-dihydroxyl-1,1'-binaphthyl-bipyridine) synthesized by Lin was shown to have a catalytic efficiency for the diethylzinc addition reaction similar to that of the homogeneous analogue when it was pretreated with Ti(OiPr)4 to generate the grafted Ti-BINOLate species. The catalytic activity of MOFs can vary depending on the framework structure. Lin and others found that MOFs synthesized from the same materials could have drastically different catalytic activities depending on the framework structure present.
Homochiral MOFs with precatalysts as building blocks.
Another approach to constructing catalytically active homochiral MOFs is to incorporate chiral metal complexes, which are either active catalysts or precatalysts, directly into the framework structures. For example, Hupp and coworkers have combined a chiral ligand with bpdc (bpdc: biphenyldicarboxylate) and obtained twofold interpenetrating 3D networks. The orientation of the chiral ligand in the framework makes all Mn(III) sites accessible through the channels. The resulting open frameworks showed catalytic activity toward asymmetric olefin epoxidation reactions. No significant decrease in catalyst activity was observed during the reaction, and the catalyst could be recycled and reused several times. Lin and coworkers have reported zirconium phosphonate-derived Ru-BINAP systems. Zirconium phosphonate-based chiral porous hybrid materials containing the Ru(BINAP)(diamine)Cl2 precatalyst showed excellent enantioselectivity (up to 99.2% ee) in the asymmetric hydrogenation of aromatic ketones.
Biomimetic design and photocatalysis.
Some MOF materials may resemble enzymes in that they combine isolated polynuclear sites, dynamic host–guest responses, and a hydrophobic cavity environment, which are characteristics of an enzyme. Some well-known examples of cooperative catalysis involving two metal ions in biological systems include the diiron sites in methane monooxygenase, dicopper in cytochrome c oxidase, and tricopper oxidases, which have analogy with polynuclear clusters found in 0D coordination polymers, such as the binuclear Cu2 paddlewheel units found in MOP-1 and in [Cu3(btc)2] (btc = benzene-1,3,5-tricarboxylate) in HKUST-1, or trinuclear units such as those in MIL-88 and IRMOP-51. Thus, 0D MOFs have accessible biomimetic catalytic centers. In enzymatic systems, protein units show "molecular recognition", a high affinity for specific substrates. It seems that molecular recognition effects are limited in zeolites by the rigid zeolite structure. In contrast, dynamic features and guest-shape response make MOFs more similar to enzymes. Indeed, many hybrid frameworks contain organic parts that can rotate as a result of stimuli, such as light and heat. The porous channels in MOF structures can be used as photocatalysis sites. In photocatalysis, the use of mononuclear complexes is usually limited either because they only undergo single-electron processes or because they require high-energy irradiation. In this case, binuclear systems have a number of attractive features for the development of photocatalysts. For 0D MOF structures, polycationic nodes can act as semiconductor quantum dots, which can be activated upon photostimulation, with the linkers serving as photon antennae. Theoretical calculations show that MOFs are semiconductors or insulators with band gaps between 1.0 and 5.5 eV, which can be altered by changing the degree of conjugation in the ligands. Experimental results show that the band gap of IRMOF-type samples can be tuned by varying the functionality of the linker. An integrated MOF nanozyme was developed for anti-inflammation therapy.
Mechanical properties.
Implementing MOFs in industry necessitates a thorough understanding of their mechanical properties, since most processing techniques (e.g. extrusion and pelletization) expose the MOFs to substantial mechanical compressive stresses. The mechanical response of porous structures is of interest as these structures can exhibit unusual responses to high pressures. While zeolites (microporous aluminosilicate minerals) can give some insight into the mechanical response of MOFs, the presence of organic linkers, which zeolites lack, gives rise to novel mechanical responses. MOFs are very structurally diverse, meaning that it is challenging to classify all of their mechanical properties. Additionally, variability in MOFs from batch to batch and extreme experimental conditions (diamond anvil cells) mean that experimental determination of the mechanical response to loading is limited; however, many computational models have been made to determine structure-property relationships. The main MOF systems that have been explored are zeolitic imidazolate frameworks (ZIFs), carboxylate MOFs, and zirconium-based MOFs, among others. Generally, MOFs undergo three processes under compressive loading (which is relevant in a processing context): amorphization, hyperfilling, and/or pressure-induced phase transitions. During amorphization, linkers buckle and the internal porosity within the MOF collapses. During hyperfilling, a MOF that is hydrostatically compressed in a liquid (typically a solvent) will expand rather than contract due to filling of the pores with the loading medium. Finally, pressure-induced phase transitions, in which the structure of the crystal is altered during loading, are possible. The response of the MOF is predominantly dependent on the linker species and the inorganic nodes.
Zeolitic imidazolate frameworks (ZIFs).
Several different mechanical phenomena have been observed in zeolitic imidazolate frameworks (ZIFs), the most widely studied MOFs for mechanical properties due to their many similarities to zeolites. A general trend for the ZIF family is that the Young's modulus and hardness of ZIFs tend to decrease as the accessible pore volume increases. The bulk moduli of the ZIF-62 series increase with increasing benzimidazolate (bim−) concentration. ZIF-62 shows a continuous phase transition from an open-pore (op) to a closed-pore (cp) phase when the bim− concentration exceeds 0.35 per formula unit. The accessible pore size and volume of ZIF-62-bim0.35 can be precisely tuned by applying adequate pressure. Another study has shown that under hydrostatic loading in solvent the ZIF-8 material expands rather than contracts, a result of hyperfilling of the internal pores with solvent. A computational study demonstrated that ZIF-4 and ZIF-8 materials undergo a shear-softening mechanism, amorphizing at ~0.34 GPa under hydrostatic loading, while still possessing a bulk modulus on the order of 6.5 GPa. Additionally, the ZIF-4 and ZIF-8 MOFs are subject to many pressure-dependent phase transitions.
Carboxylate-based MOFs.
Carboxylate MOFs come in many forms and have been widely studied. Herein, HKUST-1, MOF-5, and the MIL series are discussed as representative examples of the carboxylate MOF class.
HKUST-1.
HKUST-1 consists of dimeric Cu paddlewheels and possesses two pore types. Under pelletization, MOFs such as HKUST-1 exhibit pore collapse. Although most carboxylate MOFs have a negative thermal expansion (they densify during heating), it was found that the hardness and Young's modulus unexpectedly decrease with increasing temperature owing to disordering of the linkers. It was also found computationally that a more mesoporous structure has a lower bulk modulus. However, an increased bulk modulus was observed in systems with a few large mesopores versus many small mesopores, even though both pore size distributions had the same total pore volume. HKUST-1 shows a hyperfilling phenomenon similar to that of the ZIF structures under hydrostatic loading.
MOF-5.
MOF-5 has tetranuclear nodes in an octahedral configuration with an overall cubic structure. MOF-5 has a compressibility and Young's modulus (~14.9 GPa) comparable to wood, which was confirmed with density functional theory (DFT) and nanoindentation. While it has been shown that MOF-5 can demonstrate the hyperfilling phenomenon within a solvent loading medium, these MOFs are very sensitive to pressure and undergo amorphization/pressure-induced pore collapse at a pressure of 3.5 MPa when there is no fluid in the pores.
MIL-53.
MIL-53 MOFs possess a "wine rack" structure. These MOFs have been explored for the anisotropy of their Young's modulus, which arises from the flexibility of the framework under loading, and for their potential negative linear compressibility when compressed in one direction, due to the ability of the wine rack to open during loading.
Zirconium-based MOFs.
Zirconium-based MOFs such as UiO-66 are a very robust class of MOFs (attributed to the strong hexanuclear Zr6 metal nodes) with increased resistance to heat, solvents, and other harsh conditions, which makes them of interest in terms of mechanical properties. Determinations of shear modulus and pelletization behavior have shown that the UiO-66 MOFs are mechanically very robust and have a high tolerance against pore collapse when compared to ZIFs and carboxylate MOFs. Although the UiO-66 MOF shows increased stability under pelletization, UiO-66 MOFs amorphize fairly rapidly under ball-milling conditions due to destruction of the coordination between the linkers and the inorganic nodes.
Applications.
Hydrogen storage.
Molecular hydrogen has the highest specific energy of any fuel. However, unless the hydrogen gas is compressed, its volumetric energy density is very low, so the transportation and storage of hydrogen require energy-intensive compression and liquefaction processes. Therefore, the development of new hydrogen storage methods which decrease the pressure required for a practical volumetric energy density is an active area of research. MOFs attract attention as materials for adsorptive hydrogen storage because of their high specific surface areas and surface-to-volume ratios, as well as their chemically tunable structures.
Compared to an empty gas cylinder, a MOF-filled gas cylinder can store more hydrogen at a given pressure because hydrogen molecules adsorb to the surface of MOFs. Furthermore, MOFs are free of dead-volume, so there is almost no loss of storage capacity as a result of space-blocking by non-accessible volume. Also, because the hydrogen uptake is based primarily on physisorption, many MOFs have a fully reversible uptake-and-release behavior. No large activation barriers are required when liberating the adsorbed hydrogen. The storage capacity of a MOF is limited by the liquid-phase density of hydrogen because the benefits provided by MOFs can be realized only if the hydrogen is in its gaseous state.
The extent to which a gas can adsorb to a MOF's surface depends on the temperature and pressure of the gas. In general, adsorption increases with decreasing temperature and increasing pressure (until a maximum is reached, typically 20–30 bar, after which the adsorption capacity decreases). However, MOFs to be used for hydrogen storage in automotive fuel cells need to operate efficiently at ambient temperature and pressures between 1 and 100 bar, as these are the values that are deemed safe for automotive applications.
The U.S. Department of Energy (DOE) has published a list of yearly technical system targets for on-board hydrogen storage for light-duty fuel cell vehicles which guide researchers in the field (5.5 wt %/40 g L−1 by 2017; 7.5 wt %/70 g L−1 ultimate). Materials with high porosity and high surface area such as MOFs have been designed and synthesized in an effort to meet these targets. These adsorptive materials generally work via physical adsorption rather than chemisorption due to the large HOMO-LUMO gap and low HOMO energy level of molecular hydrogen. A benchmark material to this end is MOF-177 which was found to store hydrogen at 7.5 wt % with a volumetric capacity of 32 g L−1 at 77 K and 70 bar. MOF-177 consists of [Zn4O]6+ clusters interconnected by 1,3,5-benzenetribenzoate organic linkers and has a measured BET surface area of 4630 m2 g−1. Another exemplary material is PCN-61 which exhibits a hydrogen uptake of 6.24 wt % and 42.5 g L−1 at 35 bar and 77 K and 2.25 wt % at atmospheric pressure. PCN-61 consists of [Cu2]4+ paddle-wheel units connected through 5,5′,5′′-benzene-1,3,5-triyltris(1-ethynyl-2-isophthalate) organic linkers and has a measured BET surface area of 3000 m2 g−1. Despite these promising MOF examples, the classes of synthetic porous materials with the highest performance for practical hydrogen storage are activated carbon and covalent organic frameworks (COFs).
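Gravimetric (wt %) and volumetric (g L−1) capacities are related through the density of the adsorbent, so quoted pairs of values can be cross-checked. The short Python sketch below back-calculates the packing density implied by the MOF-177 figures above, under the assumption that wt % is defined as the mass of H2 divided by the total mass of MOF plus H2; it is an arithmetic illustration, not reported data.

```python
def implied_density(wt_percent: float, volumetric_g_per_L: float) -> float:
    """Adsorbent density (g/L) implied by a (wt %, g/L) capacity pair,
    assuming wt % = m_H2 / (m_H2 + m_MOF) * 100."""
    w = wt_percent / 100.0
    m_h2_per_g_mof = w / (1.0 - w)          # g H2 stored per g MOF
    return volumetric_g_per_L / m_h2_per_g_mof

# MOF-177 figures quoted above: 7.5 wt % and 32 g/L at 77 K and 70 bar
print(f"{implied_density(7.5, 32):.0f} g/L")   # about 395 g/L, i.e. ~0.4 g/cm3
```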
Design principles.
Practical applications of MOFs for hydrogen storage are met with several challenges. For hydrogen adsorption near room temperature, the hydrogen binding energy would need to be increased considerably. Several classes of MOFs have been explored, including carboxylate-based MOFs, heterocyclic azolate-based MOFs, metal-cyanide MOFs, and covalent organic frameworks. Carboxylate-based MOFs have by far received the most attention, in part because the carboxylic acid linkers are inexpensive or easy to synthesize and are readily deprotonated during framework assembly.
The most common transition metals employed in carboxylate-based frameworks are Cu2+ and Zn2+. Lighter main-group metal ions have also been explored. Be12(OH)12(BTB)4, the first successfully synthesized and structurally characterized MOF consisting of a light main group metal ion, shows high hydrogen storage capacity, but it is too toxic to be employed practically. There is considerable effort being put forth in developing MOFs composed of other light main group metal ions, such as magnesium in Mg4(BDC)3.
A number of MOFs had been identified as having the best properties for hydrogen storage as of May 2012. While each of these MOFs has its advantages, none of them reaches all of the standards set by the U.S. DOE. Therefore, it is not yet known whether materials with high surface areas, small pores, or di- or trivalent metal clusters produce the most favorable MOFs for hydrogen storage.
Structural impacts on hydrogen storage capacity.
To date, hydrogen storage in MOFs at room temperature is a battle between maximizing storage capacity and maintaining reasonable desorption rates, while conserving the integrity of the adsorbent framework (e.g. completely evacuating pores, preserving the MOF structure, etc.) over many cycles. There are two major strategies governing the design of MOFs for hydrogen storage:
1) to increase the theoretical storage capacity of the material, and
2) to bring the operating conditions closer to ambient temperature and pressure. Rowsell and Yaghi have identified several directions to these ends in some of the early papers.
Surface area.
The general trend in MOFs used for hydrogen storage is that the greater the surface area, the more hydrogen the MOF can store. High surface area materials tend to exhibit increased micropore volume and inherently low bulk density, allowing for more hydrogen adsorption to occur.
Hydrogen adsorption enthalpy.
High hydrogen adsorption enthalpy is also important. Theoretical studies have shown that interactions of 22–25 kJ/mol are ideal for hydrogen storage at room temperature, as they are strong enough to adsorb H2 but weak enough to allow quick desorption. The interaction between hydrogen and uncharged organic linkers is not this strong, and so a considerable amount of work has gone into the synthesis of MOFs with exposed metal sites, to which hydrogen adsorbs with an enthalpy of 5–10 kJ/mol. Open metal coordination sites can provide increased binding energy; however, high metal-hydrogen bond dissociation energies result in a large release of heat upon loading with hydrogen, which is not favorable for fuel cells. MOFs, therefore, should avoid orbital interactions that lead to such strong metal-hydrogen bonds and employ simple charge-induced dipole interactions, as demonstrated in Mn3[(Mn4Cl)3(BTT)8]2.
An association energy of 22–25 kJ/mol is typical of charge-induced dipole interactions, and so there is interest in the use of charged linkers and metals. The metal–hydrogen bond strength is diminished in MOFs, probably due to charge diffusion, so 2+ and 3+ metal ions are being studied to strengthen this interaction even further. A problem with this approach is that MOFs with exposed metal surfaces have lower concentrations of linkers; this makes them difficult to synthesize, as they are prone to framework collapse. This may diminish their useful lifetimes as well.
Sensitivity to airborne moisture.
MOFs are frequently sensitive to moisture in the air. In particular, IRMOF-1 degrades in the presence of small amounts of water at room temperature. Studies on metal analogues have revealed the ability of metals other than Zn to withstand higher water concentrations at high temperatures.
To compensate for this, specially constructed storage containers are required, which can be costly. Strong metal-ligand bonds, such as in metal-imidazolate, -triazolate, and -pyrazolate frameworks, are known to decrease a MOF's sensitivity to air, reducing the expense of storage.
Pore size.
In a microporous material where physisorption and weak van der Waals forces dominate adsorption, the storage density is greatly dependent on the size of the pores. Calculations of idealized homogeneous materials, such as graphitic carbons and carbon nanotubes, predict that a microporous material with 7 Å-wide pores will exhibit maximum hydrogen uptake at room temperature. At this width, exactly two layers of hydrogen molecules adsorb on opposing surfaces with no space left in between. 10 Å-wide pores are also of ideal size because at this width, exactly three layers of hydrogen can exist with no space in between. (A hydrogen molecule has a bond length of 0.74 Å with a van der Waals radius of 1.17 Å for each atom; therefore, its effective van der Waals length is 3.08 Å.)
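Spelled out, the geometry behind these pore-size optima is (all lengths in Å):

$$\ell_{\mathrm{H_2}} = 0.74 + 2(1.17) = 3.08, \qquad 2\,\ell_{\mathrm{H_2}} \approx 6.2 \lesssim 7, \qquad 3\,\ell_{\mathrm{H_2}} \approx 9.2 \lesssim 10,$$

so two adsorbed layers fit, with little space to spare, in a pore about 7 Å wide and three layers in a pore about 10 Å wide.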
Structural defects.
Structural defects also play an important role in the performance of MOFs. Room-temperature hydrogen uptake via bridged spillover is mainly governed by structural defects, which can have two effects:
1) a partially collapsed framework can block access to pores; thereby reducing hydrogen uptake, and
2) lattice defects can create an intricate array of new pores and channels causing increased hydrogen uptake.
Structural defects can also leave metal-containing nodes incompletely coordinated. This enhances the performance of MOFs used for hydrogen storage by increasing the number of accessible metal centers. Finally, structural defects can affect the transport of phonons, which affects the thermal conductivity of the MOF.
Hydrogen adsorption.
Adsorption is the process of trapping atoms or molecules that are incident on a surface; therefore the adsorption capacity of a material increases with its surface area. In three dimensions, the maximum surface area will be obtained by a structure which is highly porous, such that atoms and molecules can access internal surfaces. This simple qualitative argument suggests that the highly porous metal-organic frameworks (MOFs) should be excellent candidates for hydrogen storage devices.
Adsorption can be broadly classified as being one of two types: physisorption or chemisorption. Physisorption is characterized by weak van der Waals interactions, and bond enthalpies typically less than 20 kJ/mol. Chemisorption, alternatively, is defined by stronger covalent and ionic bonds, with bond enthalpies between 250 and 500 kJ/mol. In both cases, the adsorbate atoms or molecules (i.e. the particles which adhere to the surface) are attracted to the adsorbent (solid) surface because of the surface energy that results from unoccupied bonding locations at the surface. The degree of orbital overlap then determines if the interactions will be physisorptive or chemisorptive.
Adsorption of molecular hydrogen in MOFs is physisorptive. Since molecular hydrogen only has two electrons, dispersion forces are weak, typically 4–7 kJ/mol, and are only sufficient for adsorption at temperatures below 298 K.
A complete explanation of the H2 sorption mechanism in MOFs was achieved by statistical averaging in the grand canonical ensemble, exploring a wide range of pressures and temperatures.
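The statistical averaging can be illustrated with a deliberately simplified model: a lattice of independent adsorption sites of binding energy ε in equilibrium with a gas reservoir at chemical potential μ. The Python sketch below samples site occupancies with grand-canonical Metropolis moves and reproduces the analytic (Langmuir-type) occupancy; the published MOF studies use continuum simulations with atomistic force fields, so this is only a conceptual toy, and the energy values are arbitrary placeholders.

```python
import math
import random

def gcmc_lattice(eps, mu, kT, n_sites=1000, n_sweeps=2000, seed=0):
    """Grand-canonical Monte Carlo for independent adsorption sites.

    eps : energy of an occupied site (negative = favorable), same units as mu and kT.
    Each move toggles one site and is accepted with probability
    min(1, exp(-(dU - mu*dN)/kT)).  Returns the average fractional occupancy."""
    rng = random.Random(seed)
    occupied = [False] * n_sites
    n_occ = 0
    total = 0.0
    for sweep in range(n_sweeps):
        for _ in range(n_sites):
            i = rng.randrange(n_sites)
            if occupied[i]:                  # attempt desorption: dU = -eps, dN = -1
                d_omega = -eps + mu
            else:                            # attempt adsorption: dU = +eps, dN = +1
                d_omega = eps - mu
            if d_omega <= 0 or rng.random() < math.exp(-d_omega / kT):
                occupied[i] = not occupied[i]
                n_occ += 1 if occupied[i] else -1
        if sweep >= n_sweeps // 2:           # discard the first half as equilibration
            total += n_occ / n_sites
    return total / (n_sweeps - n_sweeps // 2)

# Arbitrary example: eps = -6, mu = -8, kT = 2.5 (roughly kJ/mol at ~300 K)
theta_mc = gcmc_lattice(eps=-6.0, mu=-8.0, kT=2.5)
theta_exact = 1.0 / (1.0 + math.exp((-6.0 + 8.0) / 2.5))   # Langmuir occupancy
print(f"MC occupancy {theta_mc:.3f} vs analytic {theta_exact:.3f}")
```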
Determining hydrogen storage capacity.
Two hydrogen-uptake measurement methods are used for the characterization of MOFs as hydrogen storage materials: gravimetric and volumetric. To obtain the total amount of hydrogen in the MOF, both the amount of hydrogen adsorbed on its surface and the amount of hydrogen residing in its pores should be considered. To calculate the absolute adsorbed amount ("N"abs), the surface excess amount ("N"ex) is added to the product of the bulk density of hydrogen (ρbulk) and the pore volume of the MOF ("V"pore), as shown in the following equation:
formula_0
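A minimal Python sketch of this bookkeeping is given below; the surface-excess value, hydrogen bulk density, and pore volume are illustrative assumptions, not data for any particular MOF.

```python
# Sketch of the absolute-uptake bookkeeping N_abs = N_ex + rho_bulk * V_pore.
# All numerical values are illustrative placeholders, not measured data.

def absolute_uptake(n_excess, rho_bulk, v_pore):
    """Absolute adsorbed amount per gram of MOF (g H2 per g MOF)."""
    return n_excess + rho_bulk * v_pore

n_ex = 0.040       # g H2 per g MOF, surface excess (assumed)
rho_bulk = 0.008   # g/cm3, bulk H2 density at the measurement p and T (assumed)
v_pore = 1.5       # cm3 per g MOF, pore volume (assumed)

print(f"N_abs = {absolute_uptake(n_ex, rho_bulk, v_pore):.3f} g H2 / g MOF")
```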
Gravimetric method.
The increased mass of the MOF due to the stored hydrogen is directly calculated by a highly sensitive microbalance. Due to buoyancy, the detected mass of adsorbed hydrogen decreases again when a sufficiently high pressure is applied to the system because the density of the surrounding gaseous hydrogen becomes more and more important at higher pressures. Thus, this "weight loss" has to be corrected using the volume of the MOF's frame and the density of hydrogen.
Volumetric method.
The change in the amount of hydrogen stored in the MOF is measured by detecting the change in hydrogen pressure at constant volume. The volume of adsorbed hydrogen in the MOF is then calculated by subtracting the volume of hydrogen in free space from the total volume of dosed hydrogen.
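The accounting behind the volumetric (Sieverts-type) measurement can be sketched as follows; ideal-gas behaviour is assumed for simplicity (real measurements use an accurate equation of state for hydrogen), and all dosing values are placeholders.

```python
# Volumetric-method sketch: adsorbed amount = moles dosed into the cell
# minus moles remaining in the gas phase of the known free volume.
# Ideal-gas behaviour and all numerical values are assumptions.

R = 8.314  # J/(mol K)

def moles(p_pa, v_m3, t_k):
    return p_pa * v_m3 / (R * t_k)

T = 77.0          # K, measurement temperature (assumed)
v_dose = 50e-6    # m^3, dosing volume (assumed)
v_free = 20e-6    # m^3, free volume of the sample cell (assumed)
p_dose = 10e5     # Pa, pressure of the dose before expansion (assumed)
p_eq = 3e5        # Pa, equilibrium pressure after expansion (assumed)

n_adsorbed = moles(p_dose, v_dose, T) - moles(p_eq, v_dose + v_free, T)
print(f"adsorbed: {n_adsorbed:.4f} mol H2")
```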
Other methods of hydrogen storage.
There are six possible methods that can be used for the reversible storage of hydrogen with a high volumetric and gravimetric density, which are summarized in the following table (where ρm is the gravimetric density, ρv is the volumetric density, "T" is the working temperature, and "P" is the working pressure):
Of these, high-pressure gas cylinders and liquid hydrogen in cryogenic tanks are the least practical ways to store hydrogen for the purpose of fuel, due to the extremely "high" pressure required for storing hydrogen gas or the extremely "low" temperature required for storing liquid hydrogen. The other methods are all being studied and developed extensively.
Electrocatalysis.
The high surface area and atomic metal sites of MOFs make them suitable candidates for electrocatalysts, especially for energy-related reactions.
So far, MOFs have been used extensively as electrocatalysts for water splitting (the hydrogen evolution reaction and the oxygen evolution reaction), carbon dioxide reduction, and the oxygen reduction reaction. Currently there are two routes: 1. using MOFs as precursors to prepare electrocatalysts with carbon support; 2. using MOFs directly as electrocatalysts. However, some results have shown that some MOFs are not stable in electrochemical environments. The electrochemical conversion of MOFs during electrocatalysis may produce the real catalyst materials, and the MOFs are then precatalysts under such conditions. Therefore, claiming MOFs as the electrocatalysts requires "in situ" techniques coupled with electrocatalysis.
Biological imaging and sensing.
A potential application for MOFs is biological imaging and sensing via photoluminescence. A large subset of luminescent MOFs use lanthanides in the metal clusters. Lanthanide photoluminescence has many unique properties that make them ideal for imaging applications, such as characteristically sharp and generally non-overlapping emission bands in the visible and near-infrared (NIR) regions of the spectrum, resistance to photobleaching or "blinking", and long luminescence lifetimes. However, lanthanide emissions are difficult to sensitize directly because they must undergo Laporte-forbidden f-f transitions. Indirect sensitization of lanthanide emission can be accomplished by employing the "antenna effect", where the organic linkers act as antennae and absorb the excitation energy, transfer the energy to the excited state of the lanthanide, and yield lanthanide luminescence upon relaxation. A prime example of the antenna effect is demonstrated by MOF-76, which combines trivalent lanthanide ions and 1,3,5-benzenetricarboxylate (BTC) linkers to form infinite rod SBUs coordinated into a three-dimensional lattice. As demonstrated by multiple research groups, the BTC linker can effectively sensitize the lanthanide emission, resulting in a MOF with variable emission wavelengths depending on the lanthanide identity. Additionally, the Yan group has shown that Eu3+- and Tb3+- MOF-76 can be used for selective detection of acetophenone from other volatile monoaromatic hydrocarbons. Upon acetophenone uptake, the MOF shows a very sharp decrease, or quenching, of the luminescence intensity.
For use in biological imaging, however, two main obstacles must be overcome:
Regarding the first point, nanoscale MOF (NMOF) synthesis has been mentioned in an earlier section. The latter obstacle addresses the limitation of the antenna effect. Smaller linkers tend to improve MOF stability, but have higher energy absorptions, predominantly in the ultraviolet (UV) and high-energy visible regions. A design strategy for MOFs with redshifted absorption properties has been accomplished by using large, chromophoric linkers. These linkers are often composed of polyaromatic species, leading to large pore sizes and thus decreased stability. To circumvent the use of large linkers, other methods are required to redshift the absorbance of the MOF so lower energy excitation sources can be used. Post-synthetic modification (PSM) is one promising strategy. Luo et al. introduced a new family of lanthanide MOFs with functionalized organic linkers. The MOFs, deemed MOF-1114, MOF-1115, MOF-1130, and MOF-1131, are composed of octahedral SBUs bridged by amino functionalized dicarboxylate linkers. The amino groups on the linkers served as sites for covalent PSM reactions with either salicylaldehyde or 3-hydroxynaphthalene-2-carboxaldehyde. Both of these reactions extend the π-conjugation of the linker, causing a redshift in the absorbance wavelength from 450 nm to 650 nm. The authors also propose that this technique could be adapted to similar MOF systems and, by increasing pore volumes with increasing linker lengths, larger pi-conjugated reactants can be used to further redshift the absorption wavelengths. Biological imaging using MOFs has been realized by several groups, namely Foucault-Collet and co-workers. In 2013, they synthesized a NIR-emitting Yb3+-NMOF using phenylenevinylene dicarboxylate (PVDC) linkers. They observed cellular uptake in both HeLa cells and NIH-3T3 cells using confocal, visible, and NIR spectroscopy. Although low quantum yields persist in water and Hepes buffer solution, the luminescence intensity is still strong enough to image cellular uptake in both the visible and NIR regimes.
Nuclear wasteform materials.
The development of new pathways for efficient nuclear waste administration is essential in the wake of increased public concern about radioactive contamination due to nuclear plant operation and nuclear weapon decommissioning. Synthesis of novel materials capable of selective actinide sequestration and separation is one of the current challenges acknowledged in the nuclear waste sector. Metal–organic frameworks (MOFs) are a promising class of materials to address this challenge due to their porosity, modularity, crystallinity, and tunability. Every building block of a MOF structure can incorporate actinides. First, a MOF can be synthesized starting from actinide salts; in this case the metal nodes are actinides. Alternatively, existing metal nodes can be extended, or actinides can be introduced by cation exchange. Organic linkers can be functionalized with groups capable of actinide uptake. Lastly, the porosity of MOFs can be used to incorporate guest molecules and trap them in the structure by installation of additional or capping linkers.
Drug delivery systems.
The synthesis, characterization, and drug-related studies of low toxicity, biocompatible MOFs has shown that they have potential for medical applications. Many groups have synthesized various low toxicity MOFs and have studied their uses in loading and releasing various therapeutic drugs for potential medical applications. A variety of methods exist for inducing drug release, such as pH-response, magnetic-response, ion-response, temperature-response, and pressure response.
In 2010 Smaldone et al., an international research group, synthesized a biocompatible MOF termed CD-MOF-1 from cheap edible natural products. CD-MOF-1 consists of repeating base units of 6 γ-cyclodextrin rings bound together by potassium ions. γ-cyclodextrin (γ-CD) is a symmetrical cyclic oligosaccharide that is mass-produced enzymatically from starch and consists of eight asymmetric α-1,4-linked D-glucopyranosyl residues. The molecular structure of these glucose derivatives, which approximates a truncated cone, bucket, or torus, generates a hydrophilic exterior surface and a nonpolar interior cavity. Cyclodextrins can interact with appropriately sized drug molecules to yield an inclusion complex. Smaldone's group proposed a cheap and simple synthesis of the CD-MOF-1 from natural products. They dissolved sugar (γ-cyclodextrin) and an alkali salt (KOH, KCl, potassium benzoate) in distilled bottled water and allowed 190 proof grain alcohol (Everclear) to vapor diffuse into the solution for a week. The synthesis resulted in a cubic (γ-CD) repeating motif with a pore size of approximately 1 nm. Subsequently, in 2017 Hartlieb et al. at Northwestern did further research with CD-MOF-1 involving the encapsulation of ibuprofen. The group studied different methods of loading the MOF with ibuprofen as well as performing related bioavailability studies on the ibuprofen-loaded MOF. They investigated two different methods of loading CD-MOF-1 with ibuprofen; crystallization using the potassium salt of ibuprofen as the alkali cation source for production of the MOF, and absorption and deprotonation of the free-acid of ibuprofen into the MOF. From there the group performed in vitro and in vivo studies to determine the applicability of CD-MOF-1 as a viable delivery method for ibuprofen and other NSAIDs. In vitro studies showed no toxicity or effect on cell viability up to 100 μM. In vivo studies in mice showed the same rapid uptake of ibuprofen as the ibuprofen potassium salt control sample with a peak plasma concentration observed within 20 minutes, and the cocrystal has the added benefit of double the half-life in blood plasma samples. The increase in half-life is due to CD-MOF-1 increasing the solubility of ibuprofen compared to the pure salt form.
Since these developments many groups have done further research into drug delivery with water-soluble, biocompatible MOFs involving common over-the-counter drugs. In March 2018 Sara Rojas and her team published their research on drug incorporation and delivery with various biocompatible MOFs other than CD-MOF-1 through simulated cutaneous administration. The group studied the loading and release of ibuprofen (hydrophobic) and aspirin (hydrophilic) in three biocompatible MOFs (MIL-100(Fe), UiO-66(Zr), and MIL-127(Fe)). Under simulated cutaneous conditions (aqueous media at 37 °C) the six different combinations of drug-loaded MOFs fulfilled "the requirements to be used as topical drug delivery systems, such as released payload between 1 and 7 days" and delivering a therapeutic concentration of the drug of choice without causing unwanted side effects. The group discovered that the drug uptake is "governed by the hydrophilic/hydrophobic balance between cargo and matrix" and "the accessibility of the drug through the framework". The "controlled release under cutaneous conditions follows different kinetics profiles depending on: (i) the structure of the framework, with either a fast delivery from the very open structure MIL-100 or a slower drug release from the narrow 1D pore system of MIL-127 or (ii) the hydrophobic/hydrophilic nature of the cargo, with a fast (Aspirin) and slow (Ibuprofen) release from the UiO-66 matrix." Moreover, a simple ball milling technique is used to efficiently encapsulate the model drugs 5-fluorouracil, caffeine, para-aminobenzoic acid, and benzocaine. Both computational and experimental studies confirm the suitability of [Zn4O(dmcapz)3] to incorporate high loadings of the studied bioactive molecules.
Recent research involving MOFs as a drug delivery method includes more than just the encapsulation of everyday drugs like ibuprofen and aspirin. In early 2018 Chen et al. published work detailing the use of the MOF ZIF-8 (zeolitic imidazolate framework-8) in antitumor research "to control the release of an autophagy inhibitor, 3-methyladenine (3-MA), and prevent it from dissipating in a large quantity before reaching the target." The group performed in vitro studies and determined that "the autophagy-related proteins and autophagy flux in HeLa cells treated with 3-MA@ZIF-8 NPs show that the autophagosome formation is significantly blocked, which reveals that the pH-sensitive dissociation increases the efficiency of autophagy inhibition at the equivalent concentration of 3-MA." This shows promise for future research and applicability with MOFs as drug delivery methods in the fight against cancer.
Semiconductors.
In 2014 researchers created electrically conductive thin films of the MOF Cu3(BTC)2 (also known as HKUST-1; BTC = benzene-1,3,5-tricarboxylic acid) infiltrated with the molecule 7,7,8,8-tetracyanoquinodimethane, which could be used in applications including photovoltaics, sensors, and electronic materials, and which offer a path toward creating MOF-based semiconductors. The team demonstrated tunable, air-stable electrical conductivity with values as high as 7 siemens per meter.
Ni3(2,3,6,7,10,11-hexaiminotriphenylene)2 was shown to be a metal-organic graphene analogue that has a natural band gap, making it a semiconductor, and is able to self-assemble. It is an example of conductive metal-organic framework. It represents a family of similar compounds. Because of the symmetry and geometry in 2,3,6,7,10,11-hexaiminotriphenylene (HITP), the overall organometallic complex has an almost fractal nature that allows it to perfectly self-organize. By contrast, graphene must be doped to give it the properties of a semiconductor. Ni3(HITP)2 pellets had a conductivity of 2 S/cm, a record for a metal-organic compound.
In 2018 researchers synthesized a two-dimensional semiconducting MOF, Fe3(THT)2(NH4)3 (where THT is 2,3,6,7,10,11-triphenylenehexathiol), and showed that it has high charge mobility at room temperature. In 2020 the same material was integrated into a photodetecting device covering a broad wavelength range from the UV to the NIR (400–1575 nm). This was the first time a two-dimensional semiconducting MOF was demonstrated in optoelectronic devices.
<chem>Cu3(HHTP)2</chem> (HHTP = 2,3,6,7,10,11-hexahydroxytriphenylene) is a 2D MOF, one of the few examples of a material that is intrinsically conductive, porous, and crystalline. Layered 2D MOFs have porous crystalline structures that show electrical conductivity. These materials are constructed from trigonal linker molecules (phenylene or triphenylene derivatives) bearing six functional groups (–OH, –<chem>NH2</chem>, or –SH). The trigonal linkers and square-planar coordinated metal ions such as <chem>Cu^{2+}</chem>, <chem>Ni^{2+}</chem>, <chem>Co^{2+}</chem>, and <chem>Pt^{2+}</chem> form layers with hexagonal structures that resemble graphene on a larger scale. Stacking of these layers builds one-dimensional pore systems. Graphene-like 2D MOFs have shown decent conductivities, which makes them good candidates as electrode materials for hydrogen evolution from water, oxygen reduction reactions, supercapacitors, and sensing of volatile organic compounds (VOCs). Among these MOFs, <chem>Cu3(HHTP)2</chem> has exhibited the lowest conductivity but the strongest response in VOC sensing.
Bio-mimetic mineralization.
Biomolecules can be incorporated during the MOF crystallization process. Biomolecules including proteins, DNA, and antibodies could be encapsulated within ZIF-8. Enzymes encapsulated in this way were stable and active even after being exposed to harsh conditions (e.g. aggressive solvents and high temperature). ZIF-8, MIL-88A, HKUST-1, and several luminescent MOFs containing lanthanide metals were used for the biomimetic mineralization process.
Carbon capture.
Adsorbent.
MOFs' small, tunable pore sizes and high void fractions make them promising adsorbents to capture CO2. MOFs could provide a more efficient alternative to traditional amine solvent-based methods for CO2 capture from coal-fired power plants.
MOFs could be employed in each of the main three carbon capture configurations for coal-fired power plants: pre-combustion, post-combustion, and oxy-combustion. The post-combustion configuration is the only one that can be retrofitted to existing plants, drawing the most interest and research. The flue gas would be fed through a MOF in a packed-bed reactor setup. Flue gas is generally 40 to 60 °C with a partial pressure of CO2 at 0.13 – 0.16 bar. CO2 can bind to the MOF surface through either physisorption (via Van der Waals interactions) or chemisorption (via covalent bond formation).
Once the MOF is saturated, the CO2 is extracted from the MOF through either a temperature swing or a pressure swing. This process is known as regeneration. In a temperature swing regeneration, the MOF would be heated until CO2 desorbs. To achieve working capacities comparable to the amine process, the MOF must be heated to around 200 °C. In a pressure swing, the pressure would be decreased until CO2 desorbs.
Another relevant MOF property is their low heat capacities. Monoethanolamine (MEA) solutions, the leading capture method, have a heat capacity between 3-4 J/(g⋅K) since they are mostly water. This high heat capacity contributes to the energy penalty in the solvent regeneration step, i.e. when the adsorbed CO2 is removed from the MEA solution. MOF-177, a MOF designed for CO2 capture, has a heat capacity of 0.5 J/(g⋅K) at ambient temperature.
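The effect of heat capacity on the sensible-heat part of the regeneration penalty can be illustrated with a rough estimate; the heat capacities follow the values quoted above, while the temperature swing and CO2 working capacities are assumptions for illustration only.

```python
# Back-of-the-envelope sensible-heat penalty for temperature-swing regeneration.
# Heat capacities follow the text (MEA solution ~3.5 J/(g K), MOF-177 ~0.5 J/(g K));
# the temperature swing and CO2 working capacities are assumed values.

def sensible_heat_mj_per_kg_co2(cp_j_per_g_k, delta_t_k, capacity_g_co2_per_g):
    heat_per_g_sorbent = cp_j_per_g_k * delta_t_k        # J per g of sorbent
    g_sorbent_per_kg_co2 = 1000.0 / capacity_g_co2_per_g
    return heat_per_g_sorbent * g_sorbent_per_kg_co2 / 1e6

dT = 80.0  # K, assumed temperature swing
print("MEA solution:", sensible_heat_mj_per_kg_co2(3.5, dT, 0.05), "MJ per kg CO2")
print("MOF-177     :", sensible_heat_mj_per_kg_co2(0.5, dT, 0.15), "MJ per kg CO2")
```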
Using a vacuum pressure swing process, MOFs can adsorb 90% of the CO2 from flue gas. The MOF Mg(dobdc) has a 21.7 wt% CO2 loading capacity. Applied to a large-scale power plant, the cost of energy would increase by 65%, while a U.S. NETL baseline amine-based system would cause an increase of 81% (the goal is 35%). The capture cost would be $57 / ton, while for the amine system the cost is estimated to be $72 / ton. At that rate, the capital required to implement such a project in a 580 MW power plant would be $354 million.
Catalyst.
A MOF loaded with propylene oxide can act as a catalyst, converting CO2 into cyclic carbonates (ring-shaped molecules with many applications). They can also remove carbon from biogas. This MOF is based on lanthanides, which provide chemical stability. This is especially important because the gases the MOF will be exposed to are hot, high in humidity, and acidic. Triaminoguanidinium-based POFs and Zn/POFs are new multifunctional materials for environmental remediation and biomedical applications.
Desalination/ion separation.
MOF membranes can exhibit substantial ion selectivity, which offers potential for use in desalination and water treatment. As of 2018, reverse osmosis supplied more than half of global desalination capacity and was used in the last stage of most water treatment processes. Reverse osmosis does not exploit the dehydration of ions or the selective ion transport found in biological channels, and it is not energy efficient. The mining industry uses membrane-based processes to reduce water pollution and to recover metals. MOFs could be used to extract metals such as lithium from seawater and waste streams.
MOF membranes such as ZIF-8 and UiO-66 membranes with uniform subnanometer pores consisting of angstrom-scale windows and nanometer-scale cavities displayed ultrafast selective transport of alkali metal ions. The windows acted as ion selectivity filters for alkali metal ions, while the cavities functioned as pores for transport. The ZIF-8 and UiO-66 membranes showed a LiCl/RbCl selectivity of ~4.6 and ~1.8, respectively, much higher than the 0.6 to 0.8 selectivity in traditional membranes. A 2020 study suggested that a new MOF called PSP-MIL-53 could be used along with sunlight to purify water in just half an hour.
Gas separation.
Computational high-throughput screening based on adsorption or gas breakthrough/diffusion properties predicts that MOFs can be very effective media for separating gases at low energy cost. One example is NbOFFIVE-1-Ni, also referred to as KAUST-7, which can separate propane and propylene via diffusion at nearly 100% selectivity. The molecule-specific selectivity provided by a Cu-BDC surface-mounted metal–organic framework (SURMOF-2) grown on an alumina layer on top of a back-gated graphene field-effect transistor (GFET) can yield a sensor that is sensitive to ethanol but not to methanol or isopropanol.
Water vapor capture and dehumidification.
MOFs have been demonstrated to capture water vapor from the air. In 2021, under humid conditions, a polymer–MOF lab prototype yielded 17 liters (4.5 gal) of water per kg per day without added energy.
MOFs could also be used to increase energy efficiency in room temperature space cooling applications.
When cooling outdoor air, a cooling unit must deal with both the air's sensible heat and latent heat. Typical vapor-compression air-conditioning (VCAC) units manage the latent heat in air through cooling fins held below the dew point temperature of the moist air at the intake. These fins condense the water, dehydrating the air and thus substantially reducing the air's heat content. The cooler's energy usage is highly dependent on the cooling coil's temperature and would be improved greatly if the temperature of this coil could be raised above the dew point. This makes it desirable to handle dehumidification through means other than condensation. One such means is by adsorbing the water from the air into a desiccant coated onto the heat exchangers, using the waste heat exhausted from the unit to desorb the water from the sorbent and thus regenerate the desiccant for repeated usage. This is accomplished by having two condenser/evaporator units through which the flow of refrigerant can be reversed once the desiccant on the condenser is saturated, thus making the condenser the evaporator and vice versa.
MOFs' extremely high surface areas and porosities have made them the subject of much research in water adsorption applications. Chemistry can help tune the optimal relative humidity for adsorption/desorption, and the sharpness of the water uptake.
Ferroelectrics and multiferroics.
Some MOFs also exhibit spontaneous electric polarization, which occurs due to the ordering of electric dipoles (polar linkers or guest molecules) below a certain phase-transition temperature. If this long-range dipolar order can be controlled by an external electric field, the MOF is called ferroelectric. Some ferroelectric MOFs also exhibit magnetic ordering, making them single-structural-phase multiferroics. This material property is highly interesting for the construction of memory devices with high information density. In the type-I molecular multiferroic [(CH3)2NH2][Ni(HCOO)3], the coupling between the magnetic and electric orders is indirect, mediated by spontaneous elastic strain.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N_{\\rm abs}=N_{\\rm ex} + \\rho_{\\rm bulk} V_{\\rm pore} "
}
] | https://en.wikipedia.org/wiki?curid=9821563 |
9823717 | Construction of t-norms | Mathematics
In mathematics, t-norms are a special kind of binary operations on the real unit interval [0, 1]. Various constructions of t-norms, either by explicit definition or by transformation from previously known functions, provide a plenitude of examples and classes of t-norms. This is important, e.g., for finding counter-examples or supplying t-norms with particular properties for use in engineering applications of fuzzy logic. The main ways of construction of t-norms include using "generators", defining "parametric classes" of t-norms, "rotations", or "ordinal sums" of t-norms.
Relevant background can be found in the article on t-norms.
Generators of t-norms.
The method of constructing t-norms by generators consists in using a unary function ("generator") to transform some known binary function (most often, addition or multiplication) into a t-norm.
In order to allow using non-bijective generators, which do not have the inverse function, the following notion of "pseudo-inverse function" is employed:
Let "f": ["a", "b"] → ["c", "d"] be a monotone function between two closed subintervals of extended real line. The "pseudo-inverse function" to "f" is the function "f" (−1): ["c", "d"] → ["a", "b"] defined as
formula_0
Additive generators.
The construction of t-norms by additive generators is based on the following theorem:
Let "f": [0, 1] → [0, +∞] be a strictly decreasing function such that "f"(1) = 0 and "f"("x") + "f"("y") is in the range of "f" or equal to "f"(0+) or +∞ for all "x", "y" in [0, 1]. Then the function "T": [0, 1]2 → [0, 1] defined as
"T"("x", "y") = "f" (-1)("f"("x") + "f"("y"))
is a t-norm.
Alternatively, one may avoid using the notion of pseudo-inverse function by having formula_1. The corresponding residuum can then be expressed as formula_2. And the biresiduum as formula_3.
If a t-norm "T" results from the latter construction by a function "f" which is right-continuous in 0, then "f" is called an "additive generator" of "T".
Examples: the additive generator "f"("x") = 1 − "x" generates the Łukasiewicz t-norm "T"("x", "y") = max("x" + "y" − 1, 0), and "f"("x") = −log "x" generates the product t-norm "T"("x", "y") = "x"·"y".
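A minimal Python sketch of this construction (not a general-purpose implementation) builds the two t-norms above from their additive generators, using the alternative min-based form of the theorem given above.

```python
import math

# Sketch: a t-norm from an additive generator f via
# T(x, y) = f_inverse(min(f(0+), f(x) + f(y))).  Illustrative only.

def t_norm_from_generator(f, f_inverse, f_at_zero):
    def T(x, y):
        return f_inverse(min(f_at_zero, f(x) + f(y)))
    return T

# f(x) = 1 - x generates the Lukasiewicz t-norm max(0, x + y - 1).
lukasiewicz = t_norm_from_generator(lambda x: 1.0 - x,
                                    lambda u: 1.0 - u,
                                    f_at_zero=1.0)

# f(x) = -log x generates the product t-norm x * y.
product = t_norm_from_generator(lambda x: -math.log(x),
                                lambda u: math.exp(-u),
                                f_at_zero=math.inf)

print(lukasiewicz(0.7, 0.6))  # ~0.3
print(product(0.7, 0.6))      # ~0.42
```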
Basic properties of additive generators are summarized by the following theorem:
Let "f": [0, 1] → [0, +∞] be an additive generator of a t-norm "T". Then:
* "T" is an Archimedean t-norm.
* "T" is continuous if and only if "f" is continuous.
* "T" is strictly monotone if and only if "f"(0) = +∞.
* Each element of (0, 1) is a nilpotent element of "T" if and only if f(0) < +∞.
* The multiple of "f" by a positive constant is also an additive generator of "T".
* "T" has no non-trivial idempotents. (Consequently, e.g., the minimum t-norm has no additive generator.)
Multiplicative generators.
The isomorphism between addition on [0, +∞] and multiplication on [0, 1] given by the logarithm and the exponential function allows two-way transformations between additive and multiplicative generators of a t-norm. If "f" is an additive generator of a t-norm "T", then the function "h": [0, 1] → [0, 1] defined as "h"("x") = e−"f" ("x") is a "multiplicative generator" of "T", that is, a function "h" such that "T"("x", "y") = "h" (−1)("h"("x") · "h"("y")).
Vice versa, if "h" is a multiplicative generator of "T", then "f": [0, 1] → [0, +∞] defined by "f"("x") = −log("h"(x)) is an additive generator of "T".
Parametric classes of t-norms.
Many families of related t-norms can be defined by an explicit formula depending on a parameter "p". This section lists the best known parameterized families of t-norms. The following definitions will be used in the list: a family of t-norms "T""p" parameterized by "p" is called increasing if "T""p"("x", "y") ≤ "T""q"("x", "y") for all "x", "y" whenever "p" ≤ "q" (similarly for decreasing and strictly increasing or decreasing), and it is called continuous with respect to the parameter "p" if
formula_4
for all values "p"0 of the parameter.
Schweizer–Sklar t-norms.
The family of "Schweizer–Sklar t-norms", introduced by Berthold Schweizer and Abe Sklar in the early 1960s, is given by the parametric definition
formula_5
A Schweizer–Sklar t-norm formula_6 is
* Archimedean if and only if "p" > −∞,
* continuous if and only if "p" < +∞,
* strict if and only if −∞ < "p" ≤ 0 (for "p" = 0 it is the product t-norm), and
* nilpotent if and only if 0 < "p" < +∞ (for "p" = 1 it is the Łukasiewicz t-norm).
The family is strictly decreasing for "p" ≥ 0 and continuous with respect to "p" in [−∞, +∞]. An additive generator for formula_6 for −∞ < "p" < +∞ is
formula_7
Hamacher t-norms.
The family of "Hamacher t-norms", introduced by Horst Hamacher in the late 1970s, is given by the following parametric definition for 0 ≤ "p" ≤ +∞:
formula_8
The t-norm formula_9 is called the "Hamacher product."
Hamacher t-norms are the only t-norms which are rational functions.
The Hamacher t-norm formula_10 is strict if and only if "p" < +∞ (for "p" = 1 it is the product t-norm). The family is strictly decreasing and continuous with respect to "p". An additive generator of formula_10 for "p" < +∞ is
formula_11
Frank t-norms.
The family of "Frank t-norms", introduced by M.J. Frank in the late 1970s, is given by the parametric definition for 0 ≤ "p" ≤ +∞ as follows:
formula_12
The Frank t-norm formula_13 is strict if "p" < +∞. The family is strictly decreasing and continuous with respect to "p". An additive generator for formula_13 is
formula_14
Yager t-norms.
The family of "Yager t-norms", introduced in the early 1980s by Ronald R. Yager, is given for 0 ≤ "p" ≤ +∞ by
formula_15
The Yager t-norm formula_16 is nilpotent if and only if 0 < "p" < +∞ (for "p" = 1 it is the Łukasiewicz t-norm). The family is strictly increasing and continuous with respect to "p". The Yager t-norm formula_16 for 0 < "p" < +∞ arises from the Łukasiewicz t-norm by raising its additive generator to the power of "p". An additive generator of formula_16 for 0 < "p" < +∞ is
formula_17
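A short Python sketch of the family, taken directly from the parametric definition above; the boundary cases "p" = 0 (drastic t-norm) and "p" = +∞ (minimum) are omitted for brevity.

```python
# Sketch of the Yager t-norms T_p^Y for 0 < p < +infinity,
# following the parametric definition above.

def yager(p):
    def T(x, y):
        return max(0.0, 1.0 - ((1.0 - x) ** p + (1.0 - y) ** p) ** (1.0 / p))
    return T

print(yager(1.0)(0.7, 0.6))    # p = 1: Lukasiewicz, max(0, 0.7 + 0.6 - 1) = 0.3
print(yager(2.0)(0.7, 0.6))    # 0.5
print(yager(10.0)(0.7, 0.6))   # approaches min(0.7, 0.6) = 0.6 for large p
```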
Aczél–Alsina t-norms.
The family of "Aczél–Alsina t-norms", introduced in the early 1980s by János Aczél and Claudi Alsina, is given for 0 ≤ "p" ≤ +∞ by
formula_18
The Aczél–Alsina t-norm formula_19 is strict if and only if 0 < "p" < +∞ (for "p" = 1 it is the product t-norm). The family is strictly increasing and continuous with respect to "p". The Aczél–Alsina t-norm formula_19 for 0 < "p" < +∞ arises from the product t-norm by raising its additive generator to the power of "p". An additive generator of formula_19 for 0 < "p" < +∞ is
formula_20
Dombi t-norms.
The family of "Dombi t-norms", introduced by József Dombi (1982), is given for 0 ≤ "p" ≤ +∞ by
formula_21
The Dombi t-norm formula_22 is strict if and only if 0 < "p" < +∞ (for "p" = 1 it is the Hamacher product). The family is strictly increasing and continuous with respect to "p". The Dombi t-norm formula_22 for 0 < "p" < +∞ arises from the Hamacher product t-norm by raising its additive generator to the power of "p". An additive generator of formula_22 for 0 < "p" < +∞ is
formula_23
Sugeno–Weber t-norms.
The family of "Sugeno–Weber t-norms" was introduced in the early 1980s by Siegfried Weber; the dual t-conorms were defined already in the early 1970s by Michio Sugeno. It is given for −1 ≤ "p" ≤ +∞ by
formula_24
The Sugeno–Weber t-norm formula_25 is nilpotent if and only if −1 < "p" < +∞ (for "p" = 0 it is the Łukasiewicz t-norm). The family is strictly increasing and continuous with respect to "p". An additive generator of formula_25 for −1 < "p" < +∞ is
formula_26
Ordinal sums.
The ordinal sum constructs a t-norm from a family of t-norms, by shrinking them into disjoint subintervals of the interval [0, 1] and completing the t-norm by using the minimum on the rest of the unit square. It is based on the following theorem:
Let "T""i" for "i" in an index set "I" be a family of t-norms and ("a""i", "b""i") a family of pairwise disjoint (non-empty) open subintervals of [0, 1]. Then the function "T": [0, 1]2 → [0, 1] defined as
formula_27
is a t-norm.
The resulting t-norm is called the "ordinal sum" of the summands ("T"i, "a"i, "b"i) for "i" in "I", denoted by
formula_28
or formula_29 if "I" is finite.
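A sketch of the construction for finitely many summands, following the defining formula; the two summands chosen here (the product and Łukasiewicz t-norms on [0, 0.5] and [0.5, 1]) are arbitrary examples.

```python
# Sketch of a finite ordinal sum: each summand (T_i, a_i, b_i) acts on the
# square [a_i, b_i]^2 after rescaling, and the minimum is used elsewhere.

def ordinal_sum(summands):
    # summands: list of (T_i, a_i, b_i) with pairwise disjoint open intervals.
    def T(x, y):
        for Ti, a, b in summands:
            if a <= x <= b and a <= y <= b:
                return a + (b - a) * Ti((x - a) / (b - a), (y - a) / (b - a))
        return min(x, y)
    return T

product = lambda x, y: x * y
lukasiewicz = lambda x, y: max(0.0, x + y - 1.0)

T = ordinal_sum([(product, 0.0, 0.5), (lukasiewicz, 0.5, 1.0)])
print(T(0.2, 0.4))  # inside [0, 0.5]^2: 0.5 * (0.4 * 0.8) = 0.16
print(T(0.6, 0.9))  # inside [0.5, 1]^2: 0.5 + 0.5 * max(0, 0.2 + 0.8 - 1) = 0.5
print(T(0.2, 0.9))  # otherwise: min(0.2, 0.9) = 0.2
```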
Ordinal sums of t-norms enjoy the following properties:
If formula_30 is a left-continuous t-norm, then its residuum "R" is given as follows:
formula_31
where "R"i is the residuum of "T"i, for each "i" in "I".
Ordinal sums of continuous t-norms.
The ordinal sum of a family of continuous t-norms is a continuous t-norm. By the Mostert–Shields theorem, every continuous t-norm is expressible as the ordinal sum of Archimedean continuous t-norms. Since the latter are either nilpotent (and then isomorphic to the Łukasiewicz t-norm) or strict (then isomorphic to the product t-norm), each continuous t-norm is isomorphic to the ordinal sum of Łukasiewicz and product t-norms.
Important examples of ordinal sums of continuous t-norms are the following ones:
Rotations.
The construction of t-norms by rotation was introduced by Sándor Jenei (2000). It is based on the following theorem:
Let "T" be a left-continuous t-norm without zero divisors, "N": [0, 1] → [0, 1] the function that assigns 1 − "x" to "x" and "t" = 0.5. Let "T"1 be the linear transformation of "T" into ["t", 1] and formula_32 Then the function
formula_33
is a left-continuous t-norm, called the "rotation" of the t-norm "T".
Geometrically, the construction can be described as first shrinking the t-norm "T" to the interval [0.5, 1] and then rotating it by the angle 2π/3 in both directions around the line connecting the points (0, 0, 1) and (1, 1, 0).
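A numerical sketch of the construction applied to the product t-norm (which is left-continuous and has no zero divisors); the residuum of the rescaled t-norm is approximated by a grid search, so the returned values are only accurate to the grid resolution.

```python
import numpy as np

# Numeric sketch of the rotation construction for T = product t-norm,
# with N(x) = 1 - x and t = 0.5.  The residuum is approximated on a grid.

t = 0.5
N = lambda x: 1.0 - x

def T1(z, x):
    # Linear transformation of the product t-norm onto [t, 1].
    return t + (1.0 - t) * ((z - t) / (1.0 - t)) * ((x - t) / (1.0 - t))

def R_T1(x, y, grid=np.linspace(t, 1.0, 2001)):
    # sup { z in [t, 1] : T1(z, x) <= y }, approximated on the grid.
    ok = grid[T1(grid, x) <= y + 1e-12]
    return float(ok.max()) if ok.size else t

def T_rot(x, y):
    if x > t and y > t:
        return T1(x, y)
    if x > t and y <= t:
        return N(R_T1(x, N(y)))
    if x <= t and y > t:
        return N(R_T1(y, N(x)))
    return 0.0

print(T_rot(0.8, 0.9))  # both arguments above t: rescaled product
print(T_rot(0.8, 0.3))  # mixed case, uses the residuum
print(T_rot(0.3, 0.2))  # both arguments below t: 0
```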
The theorem can be generalized by taking for "N" any "strong negation", that is, an involutive strictly decreasing continuous function on [0, 1], and for "t" taking the unique fixed point of "N".
The resulting t-norm enjoys the following "rotation invariance" property with respect to "N":
"T"("x", "y") ≤ "z" if and only if "T"("y", "N"("z")) ≤ "N"("x") for all "x", "y", "z" in [0, 1].
The negation induced by "T"rot is the function "N", that is, "N"("x") = "R"rot("x", 0) for all "x", where "R"rot is the residuum of "T"rot. | [
{
"math_id": 0,
"text": "f^{(-1)}(y) = \\begin{cases}\n \\sup \\{ x\\in[a,b] \\mid f(x) < y \\} & \\text{for } f \\text{ non-decreasing} \\\\\n \\sup \\{ x\\in[a,b] \\mid f(x) > y \\} & \\text{for } f \\text{ non-increasing.}\n\\end{cases}"
},
{
"math_id": 1,
"text": "T(x,y)=f^{-1}\\left(\\min\\left(f(0^+),f(x)+f(y)\\right)\\right)"
},
{
"math_id": 2,
"text": "(x \\Rightarrow y) = f^{-1}\\left(\\max\\left(0,f(y)-f(x)\\right)\\right)"
},
{
"math_id": 3,
"text": "(x \\Leftrightarrow y) = f^{-1}\\left(\\left|f(x)-f(y)\\right|\\right)"
},
{
"math_id": 4,
"text": "\\lim_{p\\to p_0} T_p = T_{p_0}"
},
{
"math_id": 5,
"text": "T^{\\mathrm{SS}}_p(x,y) = \\begin{cases}\n T_{\\min}(x,y) & \\text{if } p = -\\infty \\\\\n (x^p + y^p - 1)^{1/p} & \\text{if } -\\infty < p < 0 \\\\\n T_{\\mathrm{prod}}(x,y) & \\text{if } p = 0 \\\\\n (\\max(0, x^p + y^p - 1))^{1/p} & \\text{if } 0 < p < +\\infty \\\\\n T_{\\mathrm{D}}(x,y) & \\text{if } p = +\\infty.\n\\end{cases}"
},
{
"math_id": 6,
"text": "T^{\\mathrm{SS}}_p"
},
{
"math_id": 7,
"text": "f^{\\mathrm{SS}}_p (x) = \\begin{cases}\n -\\log x & \\text{if } p = 0 \\\\\n \\frac{1 - x^p}{p} & \\text{otherwise.}\n\\end{cases}"
},
{
"math_id": 8,
"text": "T^{\\mathrm{H}}_p (x,y) = \\begin{cases}\n T_{\\mathrm{D}}(x,y) & \\text{if } p = +\\infty \\\\\n 0 & \\text{if } p = x = y = 0 \\\\\n \\frac{xy}{p + (1 - p)(x + y - xy)} & \\text{otherwise.}\n\\end{cases}"
},
{
"math_id": 9,
"text": "T^{\\mathrm{H}}_0"
},
{
"math_id": 10,
"text": "T^{\\mathrm{H}}_p"
},
{
"math_id": 11,
"text": "f^{\\mathrm{H}}_p(x) = \\begin{cases}\n \\frac{1 - x}{x} & \\text{if } p = 0 \\\\\n \\log\\frac{p + (1 - p)x}{x} & \\text{otherwise.}\n\\end{cases}"
},
{
"math_id": 12,
"text": "T^{\\mathrm{F}}_p(x,y) = \\begin{cases}\n T_{\\mathrm{min}}(x,y) & \\text{if } p = 0 \\\\\n T_{\\mathrm{prod}}(x,y) & \\text{if } p = 1 \\\\\n T_{\\mathrm{Luk}}(x,y) & \\text{if } p = +\\infty \\\\\n \\log_p\\left(1 + \\frac{(p^x - 1)(p^y - 1)}{p - 1}\\right) & \\text{otherwise.}\n\\end{cases}"
},
{
"math_id": 13,
"text": "T^{\\mathrm{F}}_p"
},
{
"math_id": 14,
"text": "f^{\\mathrm{F}}_p(x) = \\begin{cases}\n -\\log x & \\text{if } p = 1 \\\\\n 1 - x & \\text{if } p = +\\infty \\\\\n \\log\\frac{p - 1}{p^x - 1} & \\text{otherwise.}\n\\end{cases}\n"
},
{
"math_id": 15,
"text": "T^{\\mathrm{Y}}_p (x,y) = \\begin{cases}\n T_{\\mathrm{D}}(x,y) & \\text{if } p = 0 \\\\\n \\max\\left(0, 1 - ((1 - x)^p + (1 - y)^p)^{1/p}\\right) & \\text{if } 0 < p < +\\infty \\\\\n T_{\\mathrm{min}}(x,y) & \\text{if } p = +\\infty\n\\end{cases}\n"
},
{
"math_id": 16,
"text": "T^{\\mathrm{Y}}_p"
},
{
"math_id": 17,
"text": "f^{\\mathrm{Y}}_p(x) = (1 - x)^p."
},
{
"math_id": 18,
"text": "T^{\\mathrm{AA}}_p (x,y) = \\begin{cases}\n T_{\\mathrm{D}}(x,y) & \\text{if } p = 0 \\\\\n e^{-\\left(|-\\log x|^p + |-\\log y|^p\\right)^{1/p}} & \\text{if } 0 < p < +\\infty \\\\\n T_{\\mathrm{min}}(x,y) & \\text{if } p = +\\infty\n\\end{cases}"
},
{
"math_id": 19,
"text": "T^{\\mathrm{AA}}_p"
},
{
"math_id": 20,
"text": "f^{\\mathrm{AA}}_p(x) = (-\\log x)^p."
},
{
"math_id": 21,
"text": "T^{\\mathrm{D}}_p (x,y) = \\begin{cases}\n 0 & \\text{if } x = 0 \\text{ or } y = 0 \\\\\n T_{\\mathrm{D}}(x,y) & \\text{if } p = 0 \\\\\n T_{\\mathrm{min}}(x,y) & \\text{if } p = +\\infty \\\\\n \\frac{1}{1 + \\left(\n \\left(\\frac{1 - x}{x}\\right)^p + \\left(\\frac{1 - y}{y}\\right)^p\n \\right)^{1/p}} & \\text{otherwise.} \\\\\n\\end{cases}\n"
},
{
"math_id": 22,
"text": "T^{\\mathrm{D}}_p"
},
{
"math_id": 23,
"text": "f^{\\mathrm{D}}_p(x) = \\left(\\frac{1-x}{x}\\right)^p."
},
{
"math_id": 24,
"text": "T^{\\mathrm{SW}}_p (x,y) = \\begin{cases}\n T_{\\mathrm{D}}(x,y) & \\text{if } p = -1 \\\\\n \\max\\left(0, \\frac{x + y - 1 + pxy}{1 + p}\\right) & \\text{if } -1 < p < +\\infty \\\\\n T_{\\mathrm{prod}}(x,y) & \\text{if } p = +\\infty \n\\end{cases}\n"
},
{
"math_id": 25,
"text": "T^{\\mathrm{SW}}_p"
},
{
"math_id": 26,
"text": "f^{\\mathrm{SW}}_p(x) = \\begin{cases}\n 1 - x & \\text{if } p = 0 \\\\\n 1 - \\log_{1 + p}(1 + px) & \\text{otherwise.}\n\\end{cases}"
},
{
"math_id": 27,
"text": "T(x, y) = \\begin{cases}\n a_i + (b_i - a_i) \\cdot T_i\\left(\\frac{x - a_i}{b_i - a_i}, \\frac{y - a_i}{b_i - a_i}\\right)\n & \\text{if } x, y \\in [a_i, b_i]^2 \\\\\n \\min(x, y) & \\text{otherwise}\n\\end{cases}"
},
{
"math_id": 28,
"text": "T = \\bigoplus\\nolimits_{i\\in I} (T_i, a_i, b_i),"
},
{
"math_id": 29,
"text": "(T_1, a_1, b_1) \\oplus \\dots \\oplus (T_n, a_n, b_n)"
},
{
"math_id": 30,
"text": "T = \\bigoplus\\nolimits_{i\\in I} (T_i, a_i, b_i)"
},
{
"math_id": 31,
"text": "R(x, y) = \\begin{cases}\n 1 & \\text{if } x \\le y \\\\\n a_i + (b_i - a_i) \\cdot R_i\\left(\\frac{x - a_i}{b_i - a_i}, \\frac{y - a_i}{b_i - a_i}\\right)\n & \\text{if } a_i < y < x \\le b_i \\\\\n y & \\text{otherwise.}\n\\end{cases}"
},
{
"math_id": 32,
"text": "R_{T_1}(x,y) = \\sup\\{z \\mid T_1(z,x)\\le y\\}."
},
{
"math_id": 33,
"text": "T_{\\mathrm{rot}} = \\begin{cases}\n T_1(x, y) & \\text{if } x, y \\in (t, 1] \\\\\n N(R_{T_1}(x, N(y))) & \\text{if } x \\in (t, 1] \\text{ and } y \\in [0, t] \\\\\n N(R_{T_1}(y, N(x))) & \\text{if } x \\in [0, t] \\text{ and } y \\in (t, 1] \\\\\n 0 & \\text{if } x, y \\in [0, t]\n\\end{cases}"
}
] | https://en.wikipedia.org/wiki?curid=9823717 |
982386 | Killing form | Mathematical concept
In mathematics, the Killing form, named after Wilhelm Killing, is a symmetric bilinear form that plays a basic role in the theories of Lie groups and Lie algebras. Cartan's criteria (the criterion of solvability and the criterion of semisimplicity) show that the Killing form has a close relationship to the solvability and semisimplicity of Lie algebras.
History and name.
The Killing form was essentially introduced into Lie algebra theory by Élie Cartan (1894) in his thesis. In a historical survey of Lie theory, Armand Borel has described how the term "Killing form" first occurred in 1951 during one of his own reports for the Séminaire Bourbaki; it arose as a misnomer, since the form had previously been used by Lie theorists, without a name attached. Some other authors now employ the term "Cartan-Killing form". At the end of the 19th century, Killing had noted that the coefficients of the characteristic equation of a regular semisimple element of a Lie algebra are invariant under the adjoint group, from which it follows that the Killing form (i.e. the degree 2 coefficient) is invariant, but he did not make much use of the fact. A basic result that Cartan made use of was Cartan's criterion, which states that the Killing form is non-degenerate if and only if the Lie algebra is a direct sum of simple Lie algebras.
Definition.
Consider a Lie algebra formula_0 over a field "K". Every element "x" of formula_1 defines the adjoint endomorphism ad("x") (also written as ad"x") of formula_0 with the help of the Lie bracket, as
formula_2
Now, supposing formula_0 is of finite dimension, the trace of the composition of two such endomorphisms defines a symmetric bilinear form
formula_3
with values in "K", the Killing form on formula_0.
Properties.
The following properties follow as theorems from the above definition.
* The Killing form "B" is symmetric and bilinear.
* The Killing form is invariant, in the sense that it is associative with respect to the bracket:
formula_4
where [ , ] is the Lie bracket.
* The Killing form is invariant under automorphisms of the algebra:
formula_5
for every automorphism "s" of formula_0.
Matrix elements.
Given a basis "ei" of the Lie algebra formula_0, the matrix elements of the Killing form are given by
formula_6
Here
formula_7
in Einstein summation notation, where the "c""ij""k" are the structure coefficients of the Lie algebra. The index "k" functions as column index and the index "n" as row index in the matrix ad("e""i")ad("e""j"). Taking the trace amounts to putting "k" = "n" and summing, and so we can write
formula_8
The Killing form is the simplest 2-tensor that can be formed from the structure constants. The form itself is then formula_9
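As a numerical sketch of this indexed expression, the snippet below evaluates "B""ij" = "c""im""n" "c""jn""m" for the structure constants of so(3) (the Levi-Civita symbol), for which the Killing form comes out as −2 times the identity matrix.

```python
import numpy as np

# Sketch: Killing form from structure constants, B_ij = c_{im}^n c_{jn}^m.
# Example: so(3) with [e_i, e_j] = sum_k eps_{ijk} e_k.

def levi_civita(i, j, k):
    return (i - j) * (j - k) * (k - i) / 2.0  # +1, -1 or 0 for indices 0, 1, 2

c = np.array([[[levi_civita(i, j, k) for k in range(3)]
               for j in range(3)]
              for i in range(3)])              # c[i, j, k] = c_{ij}^k

B = np.einsum('imn,jnm->ij', c, c)
print(B)   # expected: -2 * identity (negative definite, a compact algebra)
```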
In the above indexed definition, we are careful to distinguish upper and lower indices ("co-" and "contra-variant" indices). This is because, in many cases, the Killing form can be used as a metric tensor on a manifold, in which case the distinction becomes an important one for the transformation properties of tensors. When the Lie algebra is semisimple over a zero-characteristic field, its Killing form is nondegenerate, and hence can be used as a metric tensor to raise and lower indexes. In this case, it is always possible to choose a basis for formula_0 such that the structure constants with all upper indices are completely antisymmetric.
The Killing forms for some Lie algebras formula_0 are (for "X", "Y" in formula_0 viewed in their fundamental matrix representation):
The table shows that the Dynkin index for the adjoint representation is equal to twice the dual Coxeter number.
Connection with real forms.
Suppose that formula_0 is a semisimple Lie algebra over the field of real numbers formula_10. By Cartan's criterion, the Killing form is nondegenerate, and can be diagonalized in a suitable basis with the diagonal entries ±1. By Sylvester's law of inertia, the number of positive entries is an invariant of the bilinear form, i.e. it does not depend on the choice of the diagonalizing basis, and is called the index of the Lie algebra formula_0. This is a number between 0 and the dimension of formula_0 which is an important invariant of the real Lie algebra. In particular, a real Lie algebra formula_0 is called compact if the Killing form is negative definite (or negative semidefinite if the Lie algebra is not semisimple). Note that this is one of two inequivalent definitions commonly used for compactness of a Lie algebra; the other states that a Lie algebra is compact if it corresponds to a compact Lie group. The definition of compactness in terms of negative definiteness of the Killing form is more restrictive, since using this definition it can be shown that under the Lie correspondence, compact Lie algebras correspond to compact Lie groups.
If formula_11 is a semisimple Lie algebra over the complex numbers, then there are several non-isomorphic real Lie algebras whose complexification is formula_11, which are called its real forms. It turns out that every complex semisimple Lie algebra admits a unique (up to isomorphism) compact real form formula_0. The real forms of a given complex semisimple Lie algebra are frequently labeled by the positive index of inertia of their Killing form.
For example, the complex special linear algebra formula_12 has two real forms, the real special linear algebra, denoted formula_13, and the special unitary algebra, denoted formula_14. The first one is noncompact, the so-called split real form, and its Killing form has signature (2, 1). The second one is the compact real form and its Killing form is negative definite, i.e. has signature (0, 3). The corresponding Lie groups are the noncompact group formula_15 of 2 × 2 real matrices with the unit determinant and the special unitary group formula_16, which is compact.
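These signatures can be checked numerically from the definition "B"("x", "y") = trace(ad("x") ∘ ad("y")). The sketch below computes the adjoint matrices of a chosen basis and the eigenvalues of the resulting Gram matrix for formula_13 and formula_14; the basis choices are conventional but otherwise arbitrary.

```python
import numpy as np

# Sketch: Killing form of a matrix Lie algebra from the definition
# B(x, y) = trace(ad(x) ad(y)), with ad computed in a chosen basis.

def ad_matrix(x, basis):
    # Columns solve [x, e_j] = sum_i (ad x)_{ij} e_i.
    M = np.column_stack([e.flatten() for e in basis])
    cols = [np.linalg.lstsq(M, (x @ e - e @ x).flatten(), rcond=None)[0]
            for e in basis]
    return np.column_stack(cols)

def killing_matrix(basis):
    ads = [ad_matrix(b, basis) for b in basis]
    return np.array([[np.trace(p @ q).real for q in ads] for p in ads])

sl2r = [np.array([[1., 0.], [0., -1.]]),   # H
        np.array([[0., 1.], [0., 0.]]),    # E
        np.array([[0., 0.], [1., 0.]])]    # F

su2 = [np.array([[1j, 0.], [0., -1j]]),    # i*sigma_z
       np.array([[0., 1.], [-1., 0.]]),    # i*sigma_y
       np.array([[0., 1j], [1j, 0.]])]     # i*sigma_x

for name, basis in [("sl(2,R)", sl2r), ("su(2)", su2)]:
    eigs = np.linalg.eigvalsh(killing_matrix(basis))
    print(name, "Killing-form eigenvalues:", np.round(eigs, 3))
# sl(2,R): eigenvalues of both signs, signature (2, 1)
# su(2):   all eigenvalues negative, i.e. negative definite (compact real form)
```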
Trace forms.
Let formula_17 be a finite-dimensional Lie algebra over the field formula_18, and formula_19 be a Lie algebra representation. Let formula_20 be the trace functional on formula_21. Then we can define the trace form for the representation formula_22 as
formula_23
formula_24
Then the Killing form is the special case that the representation is the adjoint representation, formula_25.
It is easy to show that this is symmetric, bilinear and invariant for any representation formula_22.
If furthermore formula_17 is simple and formula_22 is irreducible, then it can be shown formula_26 where formula_27 is the index of the representation.
Citations.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathfrak g"
},
{
"math_id": 1,
"text": "\\mathfrak g "
},
{
"math_id": 2,
"text": "\\operatorname{ad}(x)(y) = [x, y]."
},
{
"math_id": 3,
"text": "B(x, y) = \\operatorname{trace}(\\operatorname{ad}(x) \\circ \\operatorname{ad}(y)),"
},
{
"math_id": 4,
"text": "B([x, y], z) = B(x, [y, z])"
},
{
"math_id": 5,
"text": "B(s(x), s(y)) = B(x, y)"
},
{
"math_id": 6,
"text": "B_{ij}= \\mathrm{trace}(\\mathrm{ad}(e_i) \\circ \\mathrm{ad}(e_j))."
},
{
"math_id": 7,
"text": "\\left(\\textrm{ad}(e_i) \\circ \\textrm{ad}(e_j)\\right)(e_k)= [e_i, [e_j, e_k]] = [e_i, {c_{jk}}^{m}e_m] = {c_{im}}^{n} {c_{jk}}^{m} e_n"
},
{
"math_id": 8,
"text": "B_{ij} = {c_{im}}^{n} {c_{jn}}^{m}"
},
{
"math_id": 9,
"text": "B=B_{ij} e^i \\otimes e^j."
},
{
"math_id": 10,
"text": "\\mathbb R"
},
{
"math_id": 11,
"text": "\\mathfrak g_{\\mathbb C}"
},
{
"math_id": 12,
"text": "\\mathfrak {sl}(2, \\mathbb C)"
},
{
"math_id": 13,
"text": "\\mathfrak {sl}(2, \\mathbb R)"
},
{
"math_id": 14,
"text": "\\mathfrak {su}(2)"
},
{
"math_id": 15,
"text": "\\mathrm {SL}(2, \\mathbb R)"
},
{
"math_id": 16,
"text": "\\mathrm {SU}(2)"
},
{
"math_id": 17,
"text": "\\mathfrak{g}"
},
{
"math_id": 18,
"text": "K"
},
{
"math_id": 19,
"text": "\\rho:\\mathfrak{g}\\rightarrow \\text{End}(V)"
},
{
"math_id": 20,
"text": "\\text{Tr}_{V}:\\text{End}(V)\\rightarrow K"
},
{
"math_id": 21,
"text": "V"
},
{
"math_id": 22,
"text": "\\rho"
},
{
"math_id": 23,
"text": "\\text{Tr}_\\rho:\\mathfrak{g}\\times\\mathfrak{g}\\rightarrow K,"
},
{
"math_id": 24,
"text": "\\text{Tr}_\\rho(X,Y) = \\text{Tr}_V(\\rho(X)\\rho(Y))."
},
{
"math_id": 25,
"text": "\\text{Tr}_\\text{ad} = B"
},
{
"math_id": 26,
"text": "\\text{Tr}_\\rho = I(\\rho)B"
},
{
"math_id": 27,
"text": "I(\\rho)"
}
] | https://en.wikipedia.org/wiki?curid=982386 |
9825116 | Diffraction from slits | Diffraction processes affecting waves are amenable to quantitative description and analysis. Such treatments are applied to a wave passing through one or more slits whose width is specified as a proportion of the wavelength. Numerical approximations may be used, including the Fresnel and Fraunhofer approximations.
General diffraction.
Because diffraction is the result of addition of all waves (of given wavelength) along all unobstructed paths, the usual procedure is to consider the contribution of an infinitesimally small neighborhood around a certain path (this contribution is usually called a wavelet) and then integrate over all paths (= add all wavelets) from the source to the detector (or given point on a screen).
Thus in order to determine the pattern produced by diffraction, the phase and the amplitude of each of the wavelets is calculated. That is, at each point in space we must determine the distance to each of the simple sources on the incoming wavefront. If the distance to each of the simple sources differs by an integer number of wavelengths, all the wavelets will be in phase, resulting in constructive interference. If the distance to each source is an integer plus one half of a wavelength, there will be complete destructive interference. Usually, it is sufficient to determine these minima and maxima to explain the observed diffraction effects.
The simplest descriptions of diffraction are those in which the situation can be reduced to a two-dimensional problem. For water waves, this is already the case, as water waves propagate only on the surface of the water. For light, we can often neglect one dimension if the diffracting object extends in that direction over a distance far greater than the wavelength. In the case of light shining through small circular holes we will have to take into account the full three-dimensional nature of the problem.
Several qualitative observations can be made of diffraction in general:
Approximations.
The problem of calculating what a diffracted wave looks like is the problem of determining the phase of each of the simple sources on the incoming wavefront. It is mathematically easier to consider the case of far-field or Fraunhofer diffraction, where the point of observation is far from that of the diffracting obstruction and, as a result, involves less complex mathematics than the more general case of near-field or Fresnel diffraction. To make this statement more quantitative, consider a diffracting object at the origin that has a size formula_0. For definiteness let us say we are diffracting light and we are interested in what the intensity looks like on a screen a distance formula_1 away from the object. At some point on the screen the path length to one side of the object is given by the Pythagorean theorem
formula_2
If we now consider the situation where formula_3, the path length becomes
formula_4
This is the Fresnel approximation. To further simplify things: If the diffracting object is much smaller than the distance formula_1, the last term will contribute much less than a wavelength to the path length, and will then not change the phase appreciably. That is formula_5. The result is the Fraunhofer approximation, which is only valid very far away from the object
formula_6
Depending on the size of the diffraction object, the distance to the object and the wavelength of the wave, the Fresnel approximation, the Fraunhofer approximation or neither approximation may be valid. As the distance between the measured point of diffraction and the obstruction point increases, the diffraction patterns or results predicted converge towards those of Fraunhofer diffraction, which is more often observed in nature due to the extremely small wavelength of visible light.
Multiple narrow slits.
A simple quantitative description.
Multiple-slit arrangements can be mathematically considered as multiple simple wave sources, if the slits are narrow enough. For light, a slit is an opening that is infinitely extended in one dimension, and this has the effect of reducing a wave problem in 3D-space to a simpler problem in 2D-space.
The simplest case is that of two narrow slits, spaced a distance formula_7 apart. To determine the maxima and minima in the amplitude we must determine the path difference to the first slit and to the second one. In the Fraunhofer approximation, with the observer far away from the slits, the difference in path length to the two slits can be seen from the geometry to be
formula_8
Maxima in the intensity occur if this path length difference is an integer number of wavelengths.
formula_9
where formula_10 is an integer that labels the order of each maximum, formula_11 is the wavelength, and formula_12 is the angle at which constructive interference occurs.
The corresponding minima are at path differences of an integer number plus one half of the wavelength:
formula_13
For an array of slits, the positions of the minima and maxima are not changed; however, the "fringes" visible on a screen become sharper.
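A short numerical sketch of the maximum condition "a" sin "θ" = "n""λ"; the slit spacing and wavelength below are assumed example values.

```python
import math

# Angles of the first interference maxima from a * sin(theta) = n * wavelength.
# Slit spacing and wavelength are assumed example values.

wavelength = 633e-9   # m (a typical red laser line)
a = 5e-6              # m, slit spacing (assumed)

for n in range(4):
    s = n * wavelength / a
    if s <= 1.0:
        print(f"n = {n}: theta = {math.degrees(math.asin(s)):.2f} degrees")
```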
Mathematical description.
To calculate this intensity pattern, one needs to introduce some more sophisticated methods. The mathematical representation of a radial wave is given by
formula_14
where formula_15, formula_11 is the wavelength, formula_16 is frequency of the wave and formula_17 is the phase of the wave at the slits at time "t" = 0. The wave at a screen some distance away from the plane of the slits is given by the sum of the waves emanating from each of the slits.
To make this problem a little easier, we introduce the complex wave formula_18, the real part of which is equal to formula_19
formula_20
formula_21
The absolute value of this function gives the wave amplitude, and the complex phase of the function corresponds to the phase of the wave. formula_18 is referred to as the complex amplitude.
With formula_22 slits, the total wave at point formula_23 on the screen is
formula_24
Since we are for the moment only interested in the amplitude and relative phase, we can ignore any overall phase factors that are not dependent on formula_25 or formula_10. We approximate formula_26. In the Fraunhofer limit we can neglect terms of order formula_27 in the exponential, and any terms involving formula_28 or formula_29 in the denominator. The sum becomes
formula_30
The sum has the form of a geometric sum and can be evaluated to give
formula_31
The intensity is given by the absolute value of the complex amplitude squared
formula_32
where formula_33 denotes the complex conjugate of formula_34.
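The derived intensity pattern can be evaluated directly, as in the sketch below; the wavelength, slit spacing, screen distance, and number of slits are assumed example values, and the ratio of sines is written with NumPy's normalized sinc so that the limit at "x" = 0 is handled gracefully.

```python
import numpy as np

# Sketch of the N-slit Fraunhofer intensity derived above,
# I(x) = I0 * (sin(N k a x / 2L) / sin(k a x / 2L))^2.
# All parameters are assumed example values.

wavelength = 633e-9            # m
k = 2 * np.pi / wavelength
a = 5e-6                       # m, slit spacing
L = 1.0                        # m, slit-to-screen distance
N = 4                          # number of slits
I0 = 1.0

x = np.linspace(-0.05, 0.05, 5)        # positions on the screen (m)
u = k * a * x / (2 * L)
# np.sinc(t) = sin(pi t) / (pi t), so sin(N u)/sin(u) = N sinc(N u/pi)/sinc(u/pi).
I = I0 * (N * np.sinc(N * u / np.pi) / np.sinc(u / np.pi)) ** 2
print(np.round(I, 3))                  # the principal maximum reaches I0 * N^2
```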
Single slit.
As an example, an exact equation can now be derived for the intensity of the diffraction pattern as a function of angle in the case of single-slit diffraction.
A mathematical representation of Huygens' principle can be used to start an equation.
Consider a monochromatic complex plane wave formula_35 of wavelength "λ" incident on a slit of width "a".
If the slit lies in the x′-y′ plane, with its center at the origin, then it can be assumed that diffraction generates a complex wave ψ, traveling radially in the r direction away from the slit, and this is given by:
formula_36
Let ("x"′, "y"′, 0) be a point inside the slit over which it is being integrated. If ("x", 0, "z") is the location at which the intensity of the diffraction pattern is being computed, the slit extends from formula_37 to formula_38, and from formula_39 to formula_40.
The distance "r" from the slit is:
formula_41
formula_42
Assuming Fraunhofer diffraction results in the condition formula_43. In other words, the distance to the target is much larger than the diffraction width on the target.
By the binomial expansion rule, ignoring terms quadratic and higher, the quantity on the right can be estimated to be:
formula_44
formula_45
It can be seen that 1/"r" in front of the equation is non-oscillatory, i.e. its contribution to the magnitude of the intensity is small compared to our exponential factors. Therefore, we will lose little accuracy by approximating it as 1/"z".
formula_46
To make things cleaner, a placeholder "C" is used to denote constants in the equation. It is important to keep in mind that "C" can contain imaginary numbers, thus the wave function will be complex. However, at the end, the "ψ" will be bracketed, which will eliminate any imaginary components.
Now, in Fraunhofer diffraction, formula_47 is small, so formula_48 (note that formula_49 participates in this exponential and it is being integrated).
In contrast, the term formula_50 can be eliminated from the equation, since when bracketed it gives 1.
formula_51
(For the same reason, the constant phase factor formula_52 has also been dropped in the expression above.)
Taking formula_53 results in:
formula_54
It can be noted through Euler's formula and its derivatives that formula_55 and formula_56.
formula_57
where the (unnormalized) sinc function is defined by formula_58.
Now, substituting in formula_59, the intensity (squared amplitude) formula_60 of the diffracted waves at an angle "θ" is given by:
formula_61
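A sketch of the resulting single-slit pattern, written in the standard form "I"("θ") = "I"0 [sin(π"a" sin "θ"/"λ") / (π"a" sin "θ"/"λ")]^2; the slit width and wavelength are assumed example values.

```python
import numpy as np

# Sketch of the single-slit Fraunhofer intensity,
# I(theta) = I0 * [sin(pi a sin(theta)/lambda) / (pi a sin(theta)/lambda)]^2.
# Slit width and wavelength are assumed example values.

wavelength = 633e-9   # m
a = 20e-6             # m, slit width (assumed)
I0 = 1.0

def intensity(theta_rad):
    u = a * np.sin(theta_rad) / wavelength
    return I0 * np.sinc(u) ** 2        # np.sinc(u) = sin(pi u) / (pi u)

theta_first_min = np.arcsin(wavelength / a)   # first zero at sin(theta) = lambda/a
print(f"first minimum at {np.degrees(theta_first_min):.2f} degrees")
print(intensity(0.0), intensity(theta_first_min))  # 1.0 and ~0.0
```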
Multiple slits.
Let us again start with the mathematical representation of Huygens' principle.
formula_36
Consider formula_62 slits in the prime plane of equal size formula_63 and spacing formula_64 spread along the formula_49 axis. As above, the distance formula_65 from slit 1 is:
formula_42
To generalize this to formula_62 slits, we make the observation that while formula_66 and formula_67 remain constant, formula_49 shifts by
formula_68
Thus
formula_69
and the sum of all formula_62 contributions to the wave function is:
formula_70
Again noting that formula_71 is small, so formula_72, we have:
formula_73
Now, we can use the following identity
formula_74
Substituting into our equation, we find:
formula_75
We now make our formula_76 substitution as before and represent all non-oscillating constants by the formula_77 variable as in the 1-slit diffraction and bracket the result. Remember that
formula_78
This allows us to discard the trailing exponent, and we have our answer:
formula_79
General case for far field.
In the far field, where "r" is essentially constant, the equation:
formula_36
is equivalent to doing a Fourier transform on the gaps in the barrier.
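This correspondence can be illustrated numerically: the squared magnitude of the discrete Fourier transform of a one-dimensional aperture (transmission) function approximates the far-field intensity. The grid size and the double-slit mask below are arbitrary illustration choices.

```python
import numpy as np

# Sketch: far-field (Fraunhofer) pattern as the Fourier transform of the
# aperture function, here a crude one-dimensional double-slit mask.

n = 4096
aperture = np.zeros(n)
aperture[1000:1020] = 1.0   # slit 1 (20 samples wide)
aperture[1100:1120] = 1.0   # slit 2

far_field = np.fft.fftshift(np.fft.fft(aperture))
intensity = np.abs(far_field) ** 2
intensity /= intensity.max()

# Fine two-slit fringes appear under the broad single-slit envelope.
print(np.round(intensity[n // 2 - 3 : n // 2 + 4], 3))
```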
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " a"
},
{
"math_id": 1,
"text": " L"
},
{
"math_id": 2,
"text": "S = \\sqrt{L^2+(x+a/2)^2}"
},
{
"math_id": 3,
"text": " L\\gg(x+a/2)"
},
{
"math_id": 4,
"text": "S\\approx\\left(L+\\frac{(x+a/2)^2}{2 L}\\right)= L + \\frac{x^2}{2L}+\\frac{x a}{2L}+\\frac{a^2}{8L}"
},
{
"math_id": 5,
"text": "\\frac{a^2}{L}\\ll\\lambda"
},
{
"math_id": 6,
"text": "S \\approx L + \\frac{x^2}{2L}+\\frac{x a}{2L}"
},
{
"math_id": 7,
"text": "\\ a"
},
{
"math_id": 8,
"text": " \\Delta S={a} \\sin \\theta"
},
{
"math_id": 9,
"text": "a \\sin \\theta = n \\lambda "
},
{
"math_id": 10,
"text": " n"
},
{
"math_id": 11,
"text": " \\lambda"
},
{
"math_id": 12,
"text": " \\theta"
},
{
"math_id": 13,
"text": " a \\sin \\theta = \\lambda (n+1/2) \\,."
},
{
"math_id": 14,
"text": " E(r) = A \\cos (k r - \\omega t + \\phi)/r"
},
{
"math_id": 15,
"text": " k = \\frac{2 \\pi}{\\lambda}"
},
{
"math_id": 16,
"text": " \\omega"
},
{
"math_id": 17,
"text": " \\phi"
},
{
"math_id": 18,
"text": " \\Psi"
},
{
"math_id": 19,
"text": " E"
},
{
"math_id": 20,
"text": " \\Psi(r)=A e^{i (k r-\\omega t +\\phi)} / r"
},
{
"math_id": 21,
"text": " E(r) = \\operatorname{Re}(\\Psi(r))"
},
{
"math_id": 22,
"text": " N"
},
{
"math_id": 23,
"text": "\\ x"
},
{
"math_id": 24,
"text": "\\Psi_\\text{total}=A e^{i(-\\omega t +\\phi)}\\sum_{n=0}^{N-1} \\frac{e^{i k \\sqrt{(x-n a)^2+L^2}}}{\\sqrt{\\left(x-n a\\right)^2+L^2}}."
},
{
"math_id": 25,
"text": " x"
},
{
"math_id": 26,
"text": "\\sqrt{(x-n a)^2+L^2}\\approx L+ (x-na)^2/2L"
},
{
"math_id": 27,
"text": "\\frac{a^2}{2L}"
},
{
"math_id": 28,
"text": " a/L"
},
{
"math_id": 29,
"text": " x/L"
},
{
"math_id": 30,
"text": "\\Psi = A \\frac{e^{i\\left( k (\\frac{x^2}{2 L}+L)-\\omega t +\\phi\\right)}}{L} \\sum_{n=0}^{N-1} e^{-i k \\frac{x n a}{L}}"
},
{
"math_id": 31,
"text": "\\Psi=A \\frac{e^{i\\left( k (\\frac{x^2-(N-1)ax}{2 L}+L)-\\omega t +\\phi\\right)}}{L} \\frac {\\sin\\left(\\frac{Nkax}{2L}\\right)} {\\sin\\left(\\frac{kax}{2L}\\right)}"
},
{
"math_id": 32,
"text": "I(x)=\\Psi \\Psi^*=|\\Psi|^2=I_0\\left( \\frac{\\sin\\left(\\frac{Nkax}{2L}\\right)}{\\sin\\left(\\frac{kax}{2L}\\right)} \\right)^2 "
},
{
"math_id": 33,
"text": "\\Psi^*"
},
{
"math_id": 34,
"text": "\\Psi"
},
{
"math_id": 35,
"text": "\\Psi^\\prime"
},
{
"math_id": 36,
"text": "\\Psi = \\int_{\\mathrm{slit}} \\frac{i}{r\\lambda} \\Psi^\\prime e^{-ikr}\\,d\\mathrm{slit}"
},
{
"math_id": 37,
"text": "x' = -a/2"
},
{
"math_id": 38,
"text": "+a/2\\,"
},
{
"math_id": 39,
"text": "y'=-\\infty"
},
{
"math_id": 40,
"text": "\\infty"
},
{
"math_id": 41,
"text": "r = \\sqrt{\\left(x - x^\\prime\\right)^2 + y^{\\prime2} + z^2}"
},
{
"math_id": 42,
"text": "r = z \\left(1 + \\frac{\\left(x - x^\\prime\\right)^2 + y^{\\prime2}}{z^2}\\right)^\\frac{1}{2}"
},
{
"math_id": 43,
"text": "z \\gg \\big|\\left(x - x^\\prime\\right)\\big|"
},
{
"math_id": 44,
"text": "r \\approx z \\left( 1 + \\frac{1}{2} \\frac{\\left(x - x' \\right)^2 + y^{\\prime 2}}{z^2} \\right)"
},
{
"math_id": 45,
"text": "r \\approx z + \\frac{\\left(x - x'\\right)^2 + y^{\\prime 2}}{2z}"
},
{
"math_id": 46,
"text": "\\begin{align}\n\\Psi &= \\frac{i \\Psi'}{z \\lambda} \\int_{-\\frac{a}{2}}^{\\frac{a}{2}}\\int_{-\\infty}^{\\infty} e^{-ik\\left[z+\\frac{ \\left(x - x' \\right)^2 + y^{\\prime 2}}{2z}\\right]} \\,dy' \\,dx' \\\\\n&= \\frac{i \\Psi^\\prime}{z \\lambda} e^{-ikz} \\int_{-\\frac{a}{2}}^{\\frac{a}{2}}e^{-ik\\left[\\frac{\\left(x - x' \\right)^2}{2z}\\right]} \\,dx^\\prime \\int_{-\\infty}^{\\infty} e^{-ik\\left[\\frac{y^{\\prime 2}}{2z}\\right]} \\,dy' \\\\\n&=\\Psi^\\prime \\sqrt{\\frac{i}{z\\lambda}} e^\\frac{-ikx^2}{2z} \\int_{-\\frac{a}{2}}^{\\frac{a}{2}} e^\\frac{ikxx'}{z} e^\\frac{-ikx^{\\prime 2}}{2z} \\,dx'\n\\end{align}"
},
{
"math_id": 47,
"text": "kx^{\\prime 2}/z"
},
{
"math_id": 48,
"text": "e^\\frac{-ikx^{\\prime 2}}{2z} \\approx 1"
},
{
"math_id": 49,
"text": "x^\\prime"
},
{
"math_id": 50,
"text": "e^\\frac{-ikx^2}{2z}"
},
{
"math_id": 51,
"text": "\\left\\langle e^\\frac{-ikx^2}{2z}|e^\\frac{-ikx^2}{2z} \\right\\rangle=e^\\frac{-ikx^2}{2z} \\left(e^\\frac{-ikx^2}{2z}\\right)^* = e^\\frac{-ikx^2}{2z} e^\\frac{+ikx^2}{2z} = e^0 = 1"
},
{
"math_id": 52,
"text": "e^{-ikz}"
},
{
"math_id": 53,
"text": "C = \\Psi^\\prime \\sqrt{\\frac{i}{z\\lambda}}"
},
{
"math_id": 54,
"text": " \\Psi = C \\int_{-\\frac{a}{2}}^{\\frac{a}{2}} e^\\frac{ikxx^\\prime}{z} \\,dx^\\prime = C \\frac{e^\\frac{ikax}{2z} - e^\\frac{-ikax}{2z}}{\\frac{ikx}{z}}"
},
{
"math_id": 55,
"text": "\\sin x = \\frac{e^{ix} - e^{-ix}}{2i}"
},
{
"math_id": 56,
"text": "\\sin \\theta = \\frac{x}{z}"
},
{
"math_id": 57,
"text": "\\Psi = aC \\frac{\\sin\\frac{ka\\sin\\theta}{2}}{\\frac{ka\\sin\\theta}{2}} = aC \\left[ \\operatorname{sinc} \\left( \\frac{ka\\sin\\theta}{2} \\right) \\right]"
},
{
"math_id": 58,
"text": "\\operatorname{sinc}(x) \\ \\stackrel{\\mathrm{def}}{=}\\ \\frac{\\sin(x)}{x}"
},
{
"math_id": 59,
"text": "\\frac{2\\pi}{\\lambda} = k"
},
{
"math_id": 60,
"text": "I"
},
{
"math_id": 61,
"text": "I(\\theta) = I_0 {\\left[ \\operatorname{sinc} \\left( \\frac{\\pi a}{\\lambda} \\sin \\theta \\right) \\right] }^2 "
},
{
"math_id": 62,
"text": "N"
},
{
"math_id": 63,
"text": "a"
},
{
"math_id": 64,
"text": "d"
},
{
"math_id": 65,
"text": "r"
},
{
"math_id": 66,
"text": "z"
},
{
"math_id": 67,
"text": "y"
},
{
"math_id": 68,
"text": "x_{j=0 \\cdots n-1}^{\\prime} = x_0^\\prime - j d "
},
{
"math_id": 69,
"text": "r_j = z \\left(1 + \\frac{\\left(x - x^\\prime - j d \\right)^2 + y^{\\prime2}}{z^2}\\right)^\\frac{1}{2}"
},
{
"math_id": 70,
"text": "\\Psi = \\sum_{j=0}^{N-1} C \\int_{-{a}/{2}}^{{a}/{2}} e^\\frac{ikx\\left(x' - jd\\right)}{z} e^\\frac{-ik\\left(x' - jd\\right)^2}{2z} \\,dx^\\prime"
},
{
"math_id": 71,
"text": "\\frac{k\\left(x^\\prime -jd\\right)^2}{z}"
},
{
"math_id": 72,
"text": "e^\\frac{-ik\\left(x' - jd\\right)^2}{2z} \\approx 1"
},
{
"math_id": 73,
"text": "\\begin{align}\n\\Psi &= C\\sum_{j=0}^{N-1} \\int_{-{a}/{2}}^{{a}/{2}} e^\\frac{ikx\\left(x^\\prime - jd\\right)}{z} \\,dx^\\prime \\\\\n&= a C \\sum_{j=0}^{N-1} \\frac{\\left(e^{\\frac{ikax}{2z} - \\frac{ijkxd}{z}} - e^{\\frac{-ikax}{2z}-\\frac{ijkxd}{z}}\\right)}{\\frac{2ikax}{2z}} \\\\\n&= a C \\sum_{j=0}^{N-1} e^\\frac{ijkxd}{z} \\frac{\\left(e^\\frac{ikax}{2z} - e^\\frac{-ikax}{2z}\\right)}{\\frac{2ikax}{2z}} \\\\\n&= a C \\frac{\\sin\\frac{ka\\sin\\theta}{2}}{\\frac{ka\\sin\\theta}{2}} \\sum_{j=0}^{N-1} e^{ijkd\\sin\\theta}\n\\end{align}"
},
{
"math_id": 74,
"text": "\\sum_{j=0}^{N-1} e^{x j} = \\frac{1 - e^{Nx}}{1 - e^x}."
},
{
"math_id": 75,
"text": "\\begin{align}\n\\Psi &= a C \\frac{\\sin\\frac{ka\\sin\\theta}{2}}{\\frac{ka\\sin\\theta}{2}}\\left(\\frac{1 - e^{iNkd\\sin\\theta}}{1 - e^{ikd\\sin\\theta}}\\right) \\\\[1ex]\n&= a C \\frac{\\sin\\frac{ka\\sin\\theta}{2}}{\\frac{ka\\sin\\theta}{2}}\\left(\\frac{e^{-iNkd\\frac{\\sin\\theta}{2}}-e^{iNkd\\frac{\\sin\\theta}{2}}}{e^{-ikd\\frac{\\sin\\theta}{2}}-e^{ikd\\frac{\\sin\\theta}{2}}}\\right)\\left(\\frac{e^{iNkd\\frac{\\sin\\theta}{2}}}{e^{ikd\\frac{\\sin\\theta}{2}}}\\right) \\\\[1ex]\n&= a C \\frac{\\sin\\frac{ka\\sin\\theta}{2}}{\\frac{ka\\sin\\theta}{2}}\\frac{\\frac{e^{-iNkd \\frac{\\sin\\theta}{2}} - e^{iNkd\\frac{\\sin\\theta}{2}}}{2i}}{\\frac{e^{-ikd\\frac{\\sin\\theta}{2}} - e^{ikd\\frac{\\sin\\theta}{2}}}{2i}} \\left(e^{i(N-1)kd\\frac{\\sin\\theta}{2}}\\right) \\\\[1ex]\n&= a C \\frac{\\sin\\left(\\frac{ka\\sin\\theta}{2}\\right)}{\\frac{ka\\sin\\theta}{2}} \\frac{\\sin\\left(\\frac{Nkd\\sin\\theta}{2}\\right)} {\\sin\\left(\\frac{kd\\sin\\theta}{2}\\right)}e^{i\\left(N-1\\right)kd\\frac{\\sin\\theta}{2}}\n\\end{align}"
},
{
"math_id": 76,
"text": "k"
},
{
"math_id": 77,
"text": "I_0"
},
{
"math_id": 78,
"text": "\\left\\langle e^{ix} \\Big| e^{ix}\\right\\rangle = e^0 = 1"
},
{
"math_id": 79,
"text": "I\\left(\\theta\\right) = I_0 \\left[ \\operatorname{sinc} \\left( \\frac{\\pi a}{\\lambda} \\sin \\theta \\right) \\right]^2 \\cdot \\left[\\frac{\\sin\\left(\\frac{N\\pi d}{\\lambda}\\sin\\theta\\right)}{\\sin\\left(\\frac{\\pi d}{\\lambda}\\sin\\theta\\right)}\\right]^2"
}
] | https://en.wikipedia.org/wiki?curid=9825116 |
9827398 | Peierls stress | Peierls stress (or Peierls-Nabarro stress, also known as the lattice friction stress) is the force (first described by Rudolf Peierls and modified by Frank Nabarro) needed to move a dislocation within a plane of atoms in the unit cell. The magnitude varies periodically as the dislocation moves within the plane. Peierls stress depends on the size and width of a dislocation and the distance between planes. Because of this, Peierls stress decreases with increasing distance between atomic planes. Yet since the distance between planes increases with planar atomic density, slip of the dislocation is preferred on closely packed planes.
formula_0
Peierls–Nabarro stress proportionality.
Where:
formula_1 the dislocation width
formula_2 = shear modulus
formula_3 = Poisson's ratio
formula_4 = slip distance or Burgers vector
formula_5 = interplanar spacing
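A minimal numerical sketch of this proportionality (in Python; the proportionality constant is omitted and the parameter values are illustrative, not material data):

import math

def peierls_stress_factor(nu, d, b):
    """Return exp(-2*pi*W/b) with W = d/(1 - nu); the Peierls-Nabarro stress is
    proportional to the shear modulus G times this factor."""
    W = d / (1.0 - nu)              # dislocation width
    return math.exp(-2.0 * math.pi * W / b)

# A larger interplanar spacing d (relative to the Burgers vector b) gives a lower
# lattice friction stress; closely packed planes have the widest spacing, which is
# why slip is preferred on them.
for d_over_b in (0.5, 0.7, 1.0):
    print(d_over_b, peierls_stress_factor(nu=0.3, d=d_over_b, b=1.0))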
The Peierls stress and yield strength temperature sensitivity.
The Peierls stress also relates to the temperature sensitivity of a material's yield strength because it depends strongly on both short-range atomic order and atomic bond strength. As temperature increases, atomic vibrations increase, and thus both the Peierls stress and the yield strength decrease as a result of the weaker atomic bonding at higher temperatures.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\tau_\\mathrm{PN} \\propto Ge^{-2{\\pi}W/b}"
},
{
"math_id": 1,
"text": "W = \\frac{d}{1-\\nu}="
},
{
"math_id": 2,
"text": "G"
},
{
"math_id": 3,
"text": "\\nu"
},
{
"math_id": 4,
"text": "b"
},
{
"math_id": 5,
"text": "d"
}
] | https://en.wikipedia.org/wiki?curid=9827398 |
982771 | Double coset | In group theory, a field of mathematics, a double coset is a collection of group elements which are equivalent under the symmetries coming from two subgroups, generalizing the notion of a single coset.
Definition.
Let "G" be a group, and let "H" and "K" be subgroups. Let "H" act on "G" by left multiplication and let "K" act on "G" by right multiplication. For each "x" in "G", the ("H", "K")-double coset of "x" is the set
formula_0
When "H" = "K", this is called the "H"-double coset of "x". Equivalently, "HxK" is the equivalence class of "x" under the equivalence relation
"x" ~ "y" if and only if there exist "h" in "H" and "k" in "K" such that "hxk" = "y".
The set of all formula_1-double cosets is denoted by formula_2
Properties.
Suppose that "G" is a group with subgroups "H" and "K" acting by left and right multiplication, respectively. The ("H", "K")-double cosets of "G" may be equivalently described as orbits for the product group "H" × "K" acting on "G" by ("h", "k") ⋅ "x" = "hxk"−1. Many of the basic properties of double cosets follow immediately from the fact that they are orbits. However, because "G" is a group and "H" and "K" are subgroups acting by multiplication, double cosets are more structured than orbits of arbitrary group actions, and they have additional properties that are false for more general actions.
There is an equivalent description of double cosets in terms of single cosets. Let "H" and "K" both act by right multiplication on "G". Then "G" acts by left multiplication on the product of coset spaces "G" / "H" × "G" / "K". The orbits of this action are in one-to-one correspondence with "H" \ "G" / "K". This correspondence identifies ("xH", "yK") with the double coset "Hx"−1"yK". Briefly, this is because every "G"-orbit admits representatives of the form ("H", "xK"), and the representative "x" is determined only up to left multiplication by an element of "H". Similarly, "G" acts by right multiplication on "H" \ "G" × "K" \ "G", and the orbits of this action are in one-to-one correspondence with the double cosets "H" \ "G" / "K". Conceptually, this identifies the double coset space "H" \ "G" / "K" with the space of relative configurations of an "H"-coset and a "K"-coset. Additionally, this construction generalizes to the case of any number of subgroups. Given subgroups "H"1, ..., "H""n", the space of ("H"1, ..., "H""n")-multicosets is the set of "G"-orbits of "G" / "H"1 × ... × "G" / "H""n".
The analog of Lagrange's theorem for double cosets is false. This means that the size of a double coset need not divide the order of "G". For example, let "G" = "S"3 be the symmetric group on three letters, and let "H" and "K" be the cyclic subgroups generated by the transpositions (1 2) and (1 3), respectively. If "e" denotes the identity permutation, then
formula_11
This has four elements, and four does not divide six, the order of "S"3. It is also false that different double cosets have the same size. Continuing the same example,
formula_12
which has two elements, not four.
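This decomposition can be checked by brute force. The following Python sketch is a minimal, self-contained illustration (permutations of the three letters, written here as 0, 1, 2, are represented as tuples rather than through any group-theory library):

from itertools import permutations

def compose(p, q):
    """Composition of permutations given as tuples: (p*q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))   # the symmetric group S3 on {0, 1, 2}
e = (0, 1, 2)
H = [e, (1, 0, 2)]                 # subgroup generated by the transposition (1 2)
K = [e, (2, 1, 0)]                 # subgroup generated by the transposition (1 3)

double_cosets = {frozenset(compose(compose(h, x), k) for h in H for k in K) for x in G}
for coset in double_cosets:
    print(sorted(coset), len(coset))   # one double coset of size 4 and one of size 2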
However, suppose that "H" is normal. As noted earlier, in this case the double coset space equals the left coset space "G" / "HK". Similarly, if "K" is normal, then "H" \ "G" / "K" is the right coset space "HK" \ "G". Standard results about left and right coset spaces then imply the following facts.
Products in the free abelian group on the set of double cosets.
Suppose that "G" is a group and that "H", "K", and "L" are subgroups. Under certain finiteness conditions, there is a product on the free abelian group generated by the ("H", "K")- and ("K", "L")-double cosets with values in the free abelian group generated by the ("H", "L")-double cosets. This means there is a bilinear function
formula_17
Assume for simplicity that "G" is finite. To define the product, reinterpret these free abelian groups in terms of the group algebra of "G" as follows. Every element of Z["H" \ "G" / "K"] has the form
formula_18
where { "f""HxK" } is a set of integers indexed by the elements of "H" \ "G" / "K". This element may be interpreted as a Z-valued function on "H" \ "G" / "K", specifically, "HxK" ↦ "f""HxK". This function may be pulled back along the projection "G" → "H" \ "G" / "K" which sends "x" to the double coset "HxK". This results in a function "x" ↦ "f""HxK". By the way in which this function was constructed, it is left invariant under "H" and right invariant under "K". The corresponding element of the group algebra Z["G"] is
formula_19
and this element is invariant under left multiplication by "H" and right multiplication by "K". Conceptually, this element is obtained by replacing "HxK" by the elements it contains, and the finiteness of "G" ensures that the sum is still finite. Conversely, every element of Z["G"] which is left invariant under "H" and right invariant under "K" is the pullback of a function on Z["H" \ "G" / "K"]. Parallel statements are true for Z["K" \ "G" / "L"] and Z["H" \ "G" / "L"].
When elements of Z["H" \ "G" / "K"], Z["K" \ "G" / "L"], and Z["H" \ "G" / "L"] are interpreted as invariant elements of Z["G"], then the product whose existence was asserted above is precisely the multiplication in Z["G"]. Indeed, it is trivial to check that the product of a left-"H"-invariant element and a right-"L"-invariant element continues to be left-"H"-invariant and right-"L"-invariant. The bilinearity of the product follows immediately from the bilinearity of multiplication in Z["G"]. It also follows that if "M" is a fourth subgroup of "G", then the product of ("H", "K")-, ("K", "L")-, and ("L", "M")-double cosets is associative. Because the product in Z["G"] corresponds to convolution of functions on "G", this product is sometimes called the convolution product.
An important special case is when "H" = "K" = "L". In this case, the product is a bilinear function
formula_20
This product turns Z["H" \ "G" / "H"] into an associative ring whose identity element is the class of the trivial double coset ["H"]. In general, this ring is non-commutative. For example, if "H" = {1}, then the ring is the group algebra Z["G"], and a group algebra is a commutative ring if and only if the underlying group is abelian.
If "H" is normal, so that the "H"-double cosets are the same as the elements of the quotient group "G" / "H", then the product on Z["H" \ "G" / "H"] is the product in the group algebra Z["G" / "H"]. In particular, it is the usual convolution of functions on "G" / "H". In this case, the ring is commutative if and only if "G" / "H" is abelian, or equivalently, if and only if "H" contains the commutator subgroup of "G".
If "H" is not normal, then Z["H" \ "G" / "H"] may be commutative even if "G" is non-abelian. A classical example is the product of two Hecke operators. This is the product in the Hecke algebra, which is commutative even though the group "G" is the modular group, which is non-abelian, and the subgroup is an arithmetic subgroup and in particular does not contain the commutator subgroup. Commutativity of the convolution product is closely tied to Gelfand pairs.
When the group "G" is a topological group, it is possible to weaken the assumption that the number of left and right cosets in each double coset is finite. The group algebra Z["G"] is replaced by an algebra of functions such as "L"2("G") or "C"∞("G"), and the sums are replaced by integrals. The product still corresponds to convolution. For instance, this happens for the Hecke algebra of a locally compact group.
Applications.
When a group formula_21 has a transitive group action on a set formula_22, computing certain double coset decompositions of formula_21 reveals extra information about structure of the action of formula_21 on formula_23. Specifically, if formula_24 is the stabilizer subgroup of some element formula_25, then formula_21 decomposes as exactly two double cosets of formula_26 if and only if formula_21 acts transitively on the set of distinct pairs of formula_22. See 2-transitive groups for more information about this action.
Double cosets are important in connection with representation theory, when a representation of "H" is used to construct an induced representation of "G", which is then restricted to "K". The corresponding double coset structure carries information about how the resulting representation decomposes. In the case of finite groups, this is Mackey's decomposition theorem.
They are also important in functional analysis, where in some important cases functions left-invariant and right-invariant by a subgroup "K" can form a commutative ring under convolution: see Gelfand pair.
In geometry, a Clifford–Klein form is a double coset space Γ\"G"/"H", where "G" is a reductive Lie group, "H" is a closed subgroup, and Γ is a discrete subgroup (of "G") that acts properly discontinuously on the homogeneous space "G"/"H".
In number theory, the Hecke algebra corresponding to a congruence subgroup "Γ" of the modular group is spanned by elements of the double coset space formula_27; the algebra structure is that acquired from the multiplication of double cosets described above. Of particular importance are the Hecke operators formula_28 corresponding to the double cosets formula_29 or formula_30, where formula_31 (these have different properties depending on whether "m" and "N" are coprime or not), and the diamond operators formula_32 given by the double cosets formula_33 where formula_34 and we require formula_35 (the choice of "a", "b", "c" does not affect the answer).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "HxK = \\{ hxk \\colon h \\in H, k \\in K \\}."
},
{
"math_id": 1,
"text": "(H,K)"
},
{
"math_id": 2,
"text": "H \\,\\backslash G / K."
},
{
"math_id": 3,
"text": "\\begin{align}\nHxK &= \\bigcup_{k \\in K} Hxk = \\coprod_{Hxk \\,\\in\\, H \\backslash HxK} Hxk, \n\\\\\nHxK &= \\bigcup_{h \\in H} hxK = \\coprod_{hxK \\,\\in\\, HxK / K} hxK.\n\\end{align}"
},
{
"math_id": 4,
"text": "HgK \\to H(gK)"
},
{
"math_id": 5,
"text": "HgK \\to (Hg)K"
},
{
"math_id": 6,
"text": "\\begin{align}\n|HxK| &= [H : H \\cap xKx^{-1}] |K| = |H| [K : K \\cap x^{-1}Hx], \\\\\n \\left[G : H\\right] &= \\sum_{HxK \\,\\in\\, H \\backslash G / K} [K : K \\cap x^{-1}Hx], \\\\\n \\left[G : K\\right] &= \\sum_{HxK \\,\\in\\, H \\backslash G / K} [H : H \\cap xKx^{-1}].\n\\end{align}"
},
{
"math_id": 7,
"text": "\\begin{align}\n|HxK| &= \\frac{|H||K|}{|H \\cap xKx^{-1}|} = \\frac{|H||K|}{|K \\cap x^{-1}Hx|}, \\\\\n \\left[G : H\\right] &= \\sum_{HxK \\,\\in\\, H \\backslash G / K} \\frac{|K|}{|K \\cap x^{-1}Hx|}, \\\\\n \\left[G : K\\right] &= \\sum_{HxK \\,\\in\\, H \\backslash G / K} \\frac{|H|}{|H \\cap xKx^{-1}|}.\n\\end{align}"
},
{
"math_id": 8,
"text": "(H \\times K)_x = \\{(h, x^{-1}h^{-1}x) \\colon h \\in H\\} \\cap H \\times K = \\{(xk^{-1}x^{-1}, k) \\colon k \\in K\\} \\cap H \\times K."
},
{
"math_id": 9,
"text": "|HxK| = [H \\times K : (H \\times K)_x] = |H \\times K| / |(H \\times K)_x|."
},
{
"math_id": 10,
"text": "|H \\,\\backslash G / K| = \\frac{1}{|H||K|}\\sum_{(h, k) \\in H \\times K} |G^{(h, k)}|."
},
{
"math_id": 11,
"text": "HeK = HK = \\{ e, (1 2), (1 3), (1 3 2) \\}."
},
{
"math_id": 12,
"text": "H(2 3)K = \\{ (2 3), (1 2 3) \\},"
},
{
"math_id": 13,
"text": "n"
},
{
"math_id": 14,
"text": "\\gamma_1 S_{n-1}, \\gamma_2 S_{n-1}, ..., \\gamma_n S_{n-1}"
},
{
"math_id": 15,
"text": "\\gamma_i(n) = i"
},
{
"math_id": 16,
"text": "B \\,\\backslash\\! \\operatorname{GL}_2(\\mathbf{R}) / B = \\left\\{ B\\begin{pmatrix} 1 & 0 \\\\ 0 & 1 \\end{pmatrix}B,\\ B\\begin{pmatrix} 0 & 1 \\\\ 1 & 0 \\end{pmatrix}B \\right\\}."
},
{
"math_id": 17,
"text": "\\mathbf{Z}[H \\backslash G / K] \\times \\mathbf{Z}[K \\backslash G / L] \\to \\mathbf{Z}[H \\backslash G / L]."
},
{
"math_id": 18,
"text": "\\sum_{HxK \\in H \\backslash G / K} f_{HxK} \\cdot [HxK],"
},
{
"math_id": 19,
"text": "\\sum_{x \\in G} f_{HxK} \\cdot [x],"
},
{
"math_id": 20,
"text": "\\mathbf{Z}[H \\backslash G / H] \\times \\mathbf{Z}[H \\backslash G / H] \\to \\mathbf{Z}[H \\backslash G / H]."
},
{
"math_id": 21,
"text": "G "
},
{
"math_id": 22,
"text": "S"
},
{
"math_id": 23,
"text": "S "
},
{
"math_id": 24,
"text": "H "
},
{
"math_id": 25,
"text": "s\\in S "
},
{
"math_id": 26,
"text": "(H,H) "
},
{
"math_id": 27,
"text": "\\Gamma \\backslash \\mathrm{GL}_2^+(\\mathbb{Q}) / \\Gamma"
},
{
"math_id": 28,
"text": "T_m"
},
{
"math_id": 29,
"text": "\\Gamma_0(N) g \\Gamma_0(N)"
},
{
"math_id": 30,
"text": "\\Gamma_1(N) g \\Gamma_1(N)"
},
{
"math_id": 31,
"text": "g= \\left( \\begin{smallmatrix} 1 & 0 \\\\ 0 & m \\end{smallmatrix} \\right)"
},
{
"math_id": 32,
"text": " \\langle d \\rangle"
},
{
"math_id": 33,
"text": " \\Gamma_1(N) \\left(\\begin{smallmatrix} a & b \\\\ c & d \\end{smallmatrix} \\right) \\Gamma_1(N)"
},
{
"math_id": 34,
"text": " d \\in (\\mathbb{Z}/N\\mathbb{Z})^\\times"
},
{
"math_id": 35,
"text": " \\left( \\begin{smallmatrix} a & b \\\\ c & d \\end{smallmatrix} \\right)\\in \\Gamma_0(N)"
}
] | https://en.wikipedia.org/wiki?curid=982771 |
982970 | Mertens' theorems | In analytic number theory, Mertens' theorems are three 1874 results related to the density of prime numbers proved by Franz Mertens.
In the following, let formula_0 mean all primes not exceeding "n".
First theorem.
Mertens' first theorem is that
formula_1
does not exceed 2 in absolute value for any formula_2. ()
Second theorem.
Mertens' second theorem is
formula_3
where "M" is the Meissel–Mertens constant (). More precisely, Mertens proves that the expression under the limit does not in absolute value exceed
formula_4
for any formula_2.
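The statement, together with the stated bound, can be observed numerically. The Python sketch below is a minimal illustration: the sieve helper is written out here rather than taken from a library, and the decimal value of M is truncated.

import math

def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, n + 1) if sieve[p]]

M = 0.2614972128476428                       # Meissel-Mertens constant (truncated)
for n in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    diff = sum(1.0 / p for p in primes_up_to(n)) - math.log(math.log(n)) - M
    bound = 4.0 / math.log(n + 1) + 2.0 / (n * math.log(n))
    print(n, diff, bound)                    # |diff| tends to 0 and stays below the bound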
Proof.
The main step in the proof of Mertens' second theorem is
formula_5
where the last equality needs formula_6 which follows from formula_7.
Thus, we have proved that
formula_8.
Since the sum over prime powers with formula_9 converges, this implies
formula_10.
A partial summation yields
formula_11.
Changes in sign.
In a paper on the growth rate of the sum-of-divisors function published in 1983, Guy Robin proved that in Mertens' 2nd theorem the difference
formula_12
changes sign infinitely often, and that in Mertens' 3rd theorem the difference
formula_13
changes sign infinitely often. Robin's results are analogous to Littlewood's famous theorem that the difference π("x") − li("x") changes sign infinitely often. No analog of the Skewes number (an upper bound on the first natural number "x" for which π("x") > li("x")) is known in the case of Mertens' 2nd and 3rd theorems.
Relation to the prime number theorem.
Regarding this asymptotic formula Mertens refers in his paper to "two curious formula of Legendre", the first one being Mertens' second theorem's prototype (and the second one being Mertens' third theorem's prototype: see the very first lines of the paper). He recalls that it is contained in Legendre's third edition of his "Théorie des nombres" (1830; it is in fact already mentioned in the second edition, 1808), and also that a more elaborate version was proved by Chebyshev in 1851. Note that, already in 1737, Euler knew the asymptotic behaviour of this sum.
Mertens diplomatically describes his proof as more precise and rigorous. In reality none of the previous proofs are acceptable by modern standards: Euler's computations involve the infinity (and the hyperbolic logarithm of infinity, and the logarithm of the logarithm of infinity!); Legendre's argument is heuristic; and Chebyshev's proof, although perfectly sound, makes use of the Legendre-Gauss conjecture, which was not proved until 1896 and became better known as the prime number theorem.
Mertens' proof does not appeal to any unproved hypothesis (in 1874), and only to elementary real analysis. It comes 22 years before the first proof of the prime number theorem which, by contrast, relies on a careful analysis of the behavior of the Riemann zeta function as a function of a complex variable.
Mertens' proof is in that respect remarkable. Indeed, with modern notation it yields
formula_14
whereas the prime number theorem (in its simplest form, without error estimate), can be shown to imply
formula_15
In 1909 Edmund Landau, by using the best version of the prime number theorem then at his disposal, proved that
formula_16
holds; in particular the error term is smaller than formula_17 for any fixed integer "k". A simple summation by parts exploiting the strongest form known of the prime number theorem improves this to
formula_18
for some formula_19.
Similarly a partial summation shows that formula_20 is implied by the PNT.
Third theorem.
Mertens' third theorem is
formula_21
where γ is the Euler–Mascheroni constant ().
Relation to sieve theory.
An estimate of the probability of formula_22 (formula_23) having no factor formula_24 is given by
formula_25
This is closely related to Mertens' third theorem which gives an asymptotic approximation of
formula_26
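Both the limit in the third theorem and this sieve-theoretic interpretation are easy to check numerically. The Python sketch below is a minimal illustration (the sieve helper is written out for self-containment, and the value of γ is truncated):

import math

def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, n + 1) if sieve[p]]

gamma = 0.5772156649015329                   # Euler-Mascheroni constant (truncated)
for n in (10 ** 3, 10 ** 5, 10 ** 6):
    product = math.exp(sum(math.log1p(-1.0 / p) for p in primes_up_to(n)))
    # log(n) * prod(1 - 1/p) tends to exp(-gamma) ~ 0.5615; equivalently, the density of
    # integers with no prime factor <= n behaves like 1/(exp(gamma) * log n).
    print(n, math.log(n) * product, math.exp(-gamma))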
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p\\le n"
},
{
"math_id": 1,
"text": " \\sum_{p \\le n} \\frac{\\log p}{p} - \\log n"
},
{
"math_id": 2,
"text": " n\\ge 2"
},
{
"math_id": 3,
"text": "\\lim_{n\\to\\infty}\\left(\\sum_{p\\le n}\\frac1p -\\log\\log n-M\\right) =0,"
},
{
"math_id": 4,
"text": " \\frac 4{\\log(n+1)} +\\frac 2{n\\log n}"
},
{
"math_id": 5,
"text": "O(n)+n\\log n=\\log n! =\\sum_{p^k\\le n} \\lfloor n/p^k\\rfloor\\log p =\n\\sum_{p^k\\le n} \\left(\\frac{n}{p^k}+O(1)\\right)\\log p= n \\sum_{p^k\\le n}\\frac{\\log p}{p^k}\\ + O(n)"
},
{
"math_id": 6,
"text": "\\sum_{p^k\\le n}\\log p =O(n)"
},
{
"math_id": 7,
"text": "\\sum_{p\\in (n,2n]}\\log p\\le \\log{2n\\choose n}=O(n)"
},
{
"math_id": 8,
"text": "\\sum_{p^k\\le n}\\frac{\\log p}{p^k}=\\log n+O(1)"
},
{
"math_id": 9,
"text": "k \\ge 2"
},
{
"math_id": 10,
"text": "\\sum_{p\\le n}\\frac{\\log p}{p}=\\log n+O(1)"
},
{
"math_id": 11,
"text": "\\sum_{p\\le n} \\frac1{p} = \\log\\log n+M+O(1/\\log n)"
},
{
"math_id": 12,
"text": "\\sum_{p\\le n}\\frac1p -\\log\\log n-M"
},
{
"math_id": 13,
"text": "\\log n\\prod_{p\\le n}\\left(1-\\frac1p\\right)-e^{-\\gamma}"
},
{
"math_id": 14,
"text": "\\sum_{p\\le x}\\frac1p=\\log\\log x+M+O(1/\\log x)"
},
{
"math_id": 15,
"text": "\\sum_{p\\le x}\\frac1p=\\log\\log x+M+o(1/\\log x)."
},
{
"math_id": 16,
"text": "\\sum_{p\\le x}\\frac1p=\\log\\log x+M+O(e^{-(\\log x)^{1/14}})"
},
{
"math_id": 17,
"text": "1/(\\log x)^k"
},
{
"math_id": 18,
"text": "\\sum_{p\\le x}\\frac1p=\\log\\log x+M+O(e^{-c(\\log x)^{3/5}(\\log\\log x)^{-1/5}})"
},
{
"math_id": 19,
"text": "c > 0"
},
{
"math_id": 20,
"text": "\\sum_{p\\le x} \\frac{\\log p}{p} = \\log x+ C+o(1)"
},
{
"math_id": 21,
"text": "\\lim_{n\\to\\infty}\\log n\\prod_{p\\le n}\\left(1-\\frac1p\\right)=e^{-\\gamma} \\approx 0.561459483566885,"
},
{
"math_id": 22,
"text": "X"
},
{
"math_id": 23,
"text": "X \\gg n"
},
{
"math_id": 24,
"text": "\\le n"
},
{
"math_id": 25,
"text": "\\prod_{p\\le n}\\left(1-\\frac1p\\right)"
},
{
"math_id": 26,
"text": "P(p \\nmid X\\ \\forall p \\le n) = \\frac{1}{e^\\gamma \\log n }"
}
] | https://en.wikipedia.org/wiki?curid=982970 |
983022 | Gliese 65 | Binary star in the constellation Cetus
[Star system infobox omitted; J2000.0 coordinates: 01h 39m 01.54s, −17° 57′ 01.8″.]
Gliese 65, also known as Luyten 726-8, is a binary star system that is one of Earth's nearest neighbors, at from Earth in the constellation Cetus. The two component stars are both flare stars with the variable star designations BL Ceti and UV Ceti.
Star system.
The star system was discovered in 1948 by Willem Jacob Luyten in the course of compiling a catalog of stars of high proper motion; he noted its exceptionally high proper motion of 3.37 arc seconds annually and cataloged it as Luyten 726-8. The two stars are of nearly equal brightness, with visual magnitudes of 12.7 and 13.2 as seen from Earth. They orbit one another every 26.5 years. The distance between the two stars varies from . The Gliese 65 system is approximately from Earth's Solar System, in the constellation Cetus, and is thus the seventh-closest star system to Earth. Its own nearest neighbor is Tau Ceti, away from it. If formula_0 km/s then approximately 28,700 years ago Gliese 65 was at its minimal distance of 2.21 pc (7.2 ly) from the Sun.
Gliese 65 A was later found to be a variable star and given the variable star designation BL Ceti. It is a red dwarf of spectral type M5.5V. It is also a flare star, and classified as a UV Ceti variable type, but it is not nearly as remarkable or extreme in its behavior as its companion star UV Ceti.
Soon after the discovery of Gliese 65 A, the companion star Gliese 65 B was discovered. Like Gliese 65 A, this star was also found to be variable and given the variable star designation UV Ceti. Although UV Ceti was not the first flare star discovered, it is the most prominent example of such a star, so similar flare stars are now classified as UV Ceti type variable stars. This star goes through fairly extreme changes of brightness: for instance, in 1952, its brightness increased by 75 times in only 20 seconds. UV Ceti is a red dwarf of spectral type M6V.
Both stars are listed as spectral standard stars for their respective classes, being considered typical examples of the classes.
In approximately 31,500 years, Gliese 65 will have a close encounter with Epsilon Eridani at a minimum distance of about 0.93 ly. Gliese 65 may then pass through a conjectured Oort cloud around Epsilon Eridani and gravitationally perturb some of its long-period comets. The two star systems will spend about 4,600 years within 1 ly of each other.
Gliese 65 is a possible member of the Hyades Stream.
Candidate planet.
In 2024, a candidate super-Neptune-mass planet was detected in the Gliese 65 system via astrometry with the Very Large Telescope's GRAVITY instrument. If it exists, it would orbit one of the two stars (it is unclear which) with a period of 156 days. The planet's properties change slightly depending on which star it orbits, but in general its mass is estimated to be about 40 Earth masses and its semi-major axis about 30% of an astronomical unit. It is estimated to be about seven times the size of Earth based on mass–radius relationships.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R_v=+29"
}
] | https://en.wikipedia.org/wiki?curid=983022 |
9830367 | Squared deviations from the mean | Squared deviations from the mean (SDM) result from squaring deviations. In probability theory and statistics, the definition of "variance" is either the expected value of the SDM (when considering a theoretical distribution) or its average value (for actual experimental data). Computations for "analysis of variance" involve the partitioning of a sum of SDM.
Background.
An understanding of the computations involved is greatly enhanced by a study of the statistical value
formula_0, where formula_1 is the expected value operator.
For a random variable formula_2 with mean formula_3 and variance formula_4,
formula_5
(Its derivation is shown here.) Therefore,
formula_6
From the above, the following can be derived:
formula_7
formula_8
Sample variance.
The sum of squared deviations needed to calculate sample variance (before deciding whether to divide by "n" or "n" − 1) is most easily calculated as
formula_9
From the two derived expectations above the expected value of this sum is
formula_10
which implies
formula_11
This effectively proves the use of the divisor "n" − 1 in the calculation of an unbiased sample estimate of "σ"2.
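A small Monte Carlo check of this unbiasedness can be written in a few lines of Python (a sketch; the normal distribution, the parameter values and the number of trials are arbitrary choices):

import random

def sum_squared_deviations(xs):
    """S = sum(x^2) - (sum x)^2 / n, as above."""
    n = len(xs)
    return sum(x * x for x in xs) - sum(xs) ** 2 / n

random.seed(0)
mu, sigma, n, trials = 3.0, 2.0, 10, 200_000
total = 0.0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    total += sum_squared_deviations(sample)

# E(S) = (n - 1) * sigma^2 = 9 * 4 = 36, so S/(n - 1) is an unbiased estimate of sigma^2
print(total / trials)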
Partition — analysis of variance.
When data are available for "k" different treatment groups of size "n""i", where "i" varies from 1 to "k", it is assumed that the expected mean of each group is
formula_12
and the variance of each treatment group is unchanged from the population variance formula_4.
Under the null hypothesis that the treatments have no effect, each of the formula_13 will be zero.
It is now possible to calculate three sums of squares:
formula_14
formula_15
formula_16
formula_17
formula_18
Under the null hypothesis that the treatments cause no differences and all the formula_13 are zero, the expectation simplifies to
formula_19
formula_20
formula_21
Sums of squared deviations.
Under the null hypothesis, the difference of any pair of "I", "T", and "C" does not contain any dependency on formula_3, only formula_4.
formula_22 total squared deviations aka "total sum of squares"
formula_23 treatment squared deviations aka "explained sum of squares"
formula_24 residual squared deviations aka "residual sum of squares"
The constants ("n" − 1), ("k" − 1), and ("n" − "k") are normally referred to as the number of degrees of freedom.
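In practice the three sums are computed directly from the data. The short Python sketch below does this for the two-treatment data used in the example of the next section (first treatment 1, 2, 3; second treatment 4, 6) and reproduces the partition stated there:

groups = [[1, 2, 3], [4, 6]]                       # data from the example below
values = [x for g in groups for x in g]
n, k = len(values), len(groups)

I = sum(x * x for x in values)                     # sum of squares of all observations
T = sum(sum(g) ** 2 / len(g) for g in groups)      # treatment term
C = sum(values) ** 2 / n                           # correction term

print(I - C, n - 1)     # total squared deviations     = 14.8 with 4 degrees of freedom
print(T - C, k - 1)     # treatment squared deviations = 10.8 with 1 degree of freedom
print(I - T, n - k)     # residual squared deviations  = 4.0  with 3 degrees of freedom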
Example.
In a very simple example, 5 observations arise from two treatments. The first treatment gives three values 1, 2, and 3, and the second treatment gives two values, 4 and 6.
formula_25
formula_26
formula_27
Giving
Total squared deviations = 66 − 51.2 = 14.8 with 4 degrees of freedom.
Treatment squared deviations = 62 − 51.2 = 10.8 with 1 degree of freedom.
Residual squared deviations = 66 − 62 = 4 with 3 degrees of freedom. | [
{
"math_id": 0,
"text": "\\operatorname{E}( X ^ 2 )"
},
{
"math_id": 1,
"text": "\\operatorname{E}"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "\\mu"
},
{
"math_id": 4,
"text": "\\sigma^2"
},
{
"math_id": 5,
"text": "\\sigma^2 = \\operatorname{E}( X ^ 2 ) - \\mu^2."
},
{
"math_id": 6,
"text": "\\operatorname{E}( X ^ 2 ) = \\sigma^2 + \\mu^2."
},
{
"math_id": 7,
"text": "\\operatorname{E}\\left( \\sum\\left( X ^ 2\\right) \\right) = n\\sigma^2 + n\\mu^2,"
},
{
"math_id": 8,
"text": "\\operatorname{E}\\left( \\left(\\sum X \\right)^ 2 \\right) = n\\sigma^2 + n^2\\mu^2."
},
{
"math_id": 9,
"text": "S = \\sum x ^ 2 - \\frac{\\left(\\sum x\\right)^2}{n}"
},
{
"math_id": 10,
"text": "\\operatorname{E}(S) = n\\sigma^2 + n\\mu^2 - \\frac{n\\sigma^2 + n^2\\mu^2}{n}"
},
{
"math_id": 11,
"text": "\\operatorname{E}(S) = (n - 1)\\sigma^2. "
},
{
"math_id": 12,
"text": "\\operatorname{E}(\\mu_i) = \\mu + T_i"
},
{
"math_id": 13,
"text": "T_i"
},
{
"math_id": 14,
"text": "I = \\sum x^2 "
},
{
"math_id": 15,
"text": "\\operatorname{E}(I) = n\\sigma^2 + n\\mu^2"
},
{
"math_id": 16,
"text": "T = \\sum_{i=1}^k \\left(\\left(\\sum x\\right)^2/n_i\\right)"
},
{
"math_id": 17,
"text": "\\operatorname{E}(T) = k\\sigma^2 + \\sum_{i=1}^k n_i(\\mu + T_i)^2"
},
{
"math_id": 18,
"text": "\\operatorname{E}(T) = k\\sigma^2 + n\\mu^2 + 2\\mu \\sum_{i=1}^k (n_iT_i) + \\sum_{i=1}^k n_i(T_i)^2"
},
{
"math_id": 19,
"text": "\\operatorname{E}(T) = k\\sigma^2 + n\\mu^2."
},
{
"math_id": 20,
"text": "C = \\left(\\sum x\\right)^2/n"
},
{
"math_id": 21,
"text": "\\operatorname{E}(C) = \\sigma^2 + n\\mu^2"
},
{
"math_id": 22,
"text": "\\operatorname{E}(I - C) = (n - 1)\\sigma^2"
},
{
"math_id": 23,
"text": "\\operatorname{E}(T - C) = (k - 1)\\sigma^2"
},
{
"math_id": 24,
"text": "\\operatorname{E}(I - T) = (n - k)\\sigma^2"
},
{
"math_id": 25,
"text": "I = \\frac{1^2}{1} + \\frac{2^2}{1} + \\frac{3^2}{1} + \\frac{4^2}{1} + \\frac{6^2}{1} = 66"
},
{
"math_id": 26,
"text": "T = \\frac{(1 + 2 + 3)^2}{3} + \\frac{(4 + 6)^2}{2} = 12 + 50 = 62"
},
{
"math_id": 27,
"text": "C = \\frac{(1 + 2 + 3 + 4 + 6)^2}{5} = 256/5 = 51.2"
}
] | https://en.wikipedia.org/wiki?curid=9830367 |
9832087 | Bloch space | Space of holomorphic functions on the open unit disk in the complex plane
In the mathematical field of complex analysis, the Bloch space, named after French mathematician André Bloch and denoted formula_0 or ℬ, is the space of holomorphic functions "f" defined on the open unit disc D in the complex plane, such that the function
formula_1
is bounded. formula_0 is a Banach space, with the norm defined by
formula_2
This is referred to as the Bloch norm and the elements of the Bloch space are called Bloch functions.
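As an illustration, the Bloch norm of a concrete function can be estimated by sampling the disc. The sketch below (in Python with NumPy; the grid resolution is an arbitrary choice) uses f(z) = −log(1 − z), a standard example of an unbounded Bloch function: here (1 − |z|²)|f′(z)| = (1 − |z|²)/|1 − z| ≤ 1 + |z| < 2 and f(0) = 0, so the Bloch norm equals 2.

import numpy as np

# f(z) = -log(1 - z) on the unit disc, with f'(z) = 1/(1 - z)
r = np.linspace(0.0, 0.999, 500)
t = np.linspace(0.0, 2.0 * np.pi, 1000)
R, T = np.meshgrid(r, t)
Z = R * np.exp(1j * T)

weight = (1.0 - np.abs(Z) ** 2) / np.abs(1.0 - Z)   # (1 - |z|^2) |f'(z)|
bloch_norm_estimate = 0.0 + weight.max()            # |f(0)| = 0
print(bloch_norm_estimate)                          # close to (and below) the exact value 2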
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{B}"
},
{
"math_id": 1,
"text": "(1-|z|^2)|f^\\prime(z)|"
},
{
"math_id": 2,
"text": " \\|f\\|_\\mathcal{B} = |f(0)| + \\sup_{z \\in \\mathbf{D}} (1-|z|^2) |f'(z)|. "
}
] | https://en.wikipedia.org/wiki?curid=9832087 |
984061 | Orbit phasing | In astrodynamics, orbit phasing is the adjustment of the time-position of spacecraft along its orbit, usually described as adjusting the orbiting spacecraft's true anomaly. Orbital phasing is primarily used in scenarios where a spacecraft in a given orbit must be moved to a different location within the same orbit. The change in position within the orbit is usually defined as the phase angle, "ϕ", and is the change in true anomaly required between the spacecraft's current position to the final position.
The phase angle can be converted in terms of time using Kepler's Equation:
formula_0
formula_1
where "t" is the time corresponding to the phase angle, "T"1 is the period of the original orbit, "E" is the eccentric anomaly corresponding to "ϕ", and "e"1 is the eccentricity of the original orbit.
This time derived from the phase angle is the required time the spacecraft must gain or lose to be located at the final position within the orbit. To gain or lose this time, the spacecraft must be subjected to a simple two-impulse Hohmann transfer which takes the spacecraft away from, and then back to, its original orbit. The first impulse to change the spacecraft's orbit is performed at a specific point in the original orbit (point of impulse, POI), usually performed in the original orbit's periapsis or apoapsis. The impulse creates a new orbit called the “phasing orbit” and is larger or smaller than the original orbit resulting in a different period time than the original orbit. The difference in period time between the original and phasing orbits will be equal to the time converted from the phase angle. Once one period of the phasing orbit is complete, the spacecraft will return to the POI and the spacecraft will once again be subjected to a second impulse, equal and opposite to the first impulse, to return it to the original orbit. When complete, the spacecraft will be in the targeted final position within the original orbit.
To find some of the phasing orbital parameters, first one must find the required period time of the phasing orbit using the following equation.
formula_2
where "T"2 is the period of the phasing orbit, "T"1 is the period of the original orbit, and "t" is the time corresponding to the phase angle.
Once phasing orbit period is determined, the phasing orbit semimajor axis can be derived from the period formula:
formula_3
where "a"2 is the semimajor axis of the phasing orbit, "T"2 is the period of the phasing orbit, and "μ" is the standard gravitational parameter of the central body.
From the semimajor axis, the phase orbit apogee and perigee can be calculated:
formula_4
where "r"a and "r"p are the apoapsis and periapsis radii of the phasing orbit, and "a"2 is its semimajor axis.
Finally, the phasing orbit's angular momentum can be found from the equation:
formula_5
where "h"2 is the angular momentum of the phasing orbit, and "r"a and "r"p are its apoapsis and periapsis radii.
To find the impulse required to change the spacecraft from its original orbit to the phasing orbit, the change of spacecraft velocity, ∆"V", at POI must be calculated from the angular momentum formula:
formula_6
where "v"1 and "v"2 are the speeds of the spacecraft at the POI on the original and phasing orbits, "h"1 and "h"2 are the angular momenta of the original and phasing orbits, and "r" is the radius of the POI from the focus of the orbit.
Remember that this change in velocity, ∆"V", is only the amount required to change the spacecraft from its original orbit to the phasing orbit. A second change in velocity equal to the magnitude but opposite in direction of the first must be done after the spacecraft travels one phase orbit period to return the spacecraft from the phasing orbit to the original orbit. Total change of velocity required for the phasing maneuver is equal to two times ∆"V".
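For the simplest case of a circular original orbit (where Kepler's equation reduces to t = (ϕ/2π)"T"1, since "e"1 = 0), the whole computation can be scripted. The Python sketch below uses an Earth geostationary orbit and a 30° phase angle purely as illustrative values:

import math

mu = 398_600.4418                 # Earth's gravitational parameter, km^3/s^2
r1 = 42_164.0                     # radius of the circular original orbit, km (geostationary)
phi = math.radians(30.0)          # phase angle to be made up

T1 = 2.0 * math.pi * math.sqrt(r1 ** 3 / mu)     # period of the original orbit
t = (phi / (2.0 * math.pi)) * T1                 # time to gain; E = phi when e1 = 0
T2 = T1 - t                                      # required period of the phasing orbit

a2 = (math.sqrt(mu) * T2 / (2.0 * math.pi)) ** (2.0 / 3.0)   # phasing-orbit semimajor axis
rp = r1                           # the POI remains a point of both orbits
ra = 2.0 * a2 - r1                # the other apsis (below r1 here, since the phasing orbit is faster)
h1 = math.sqrt(mu * r1)                                      # angular momentum of the circular orbit
h2 = math.sqrt(2.0 * mu) * math.sqrt(ra * rp / (ra + rp))    # angular momentum of the phasing orbit

dV = abs(h2 - h1) / r1            # impulse at the POI, km/s
print(T2 / 3600.0, a2, 2.0 * dV)  # phasing period (h), semimajor axis (km), total delta-v (km/s)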
Orbit phasing is also the basis of co-orbital rendezvous, such as the approach to a space station in a docking maneuver. Here, two spacecraft on the same orbit but at different true anomalies rendezvous when either one or both of the spacecraft enter phasing orbits that cause them to return to their original orbit at the same true anomaly at the same time.
Phasing maneuvers are also commonly employed by geosynchronous satellites, either to conduct station-keeping maneuvers to maintain their orbit above a specific longitude, or to change longitude altogether.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "t = \\frac{T_1}{2 \\pi} (E-e_1 \\sin E)"
},
{
"math_id": 1,
"text": "E = 2 \\arctan \\left(\\sqrt{\\frac{1-e_1}{1+e_1}} \\tan{\\frac{\\phi}{2}}\\right) "
},
{
"math_id": 2,
"text": "T_2 = T_1 - t"
},
{
"math_id": 3,
"text": "a_2 = \\left(\\frac{\\sqrt{\\mu} T_2}{2 \\pi}\\right)^{2/3}"
},
{
"math_id": 4,
"text": "2 a_2 = r_a + r_p"
},
{
"math_id": 5,
"text": "h_2= \\sqrt{2 \\mu} \\sqrt{\\frac{r_a r_p}{r_a+r_p}}"
},
{
"math_id": 6,
"text": "\\Delta V = v_2 - v_1 = \\frac{h_2}{r} - \\frac{h_1}{r}"
}
] | https://en.wikipedia.org/wiki?curid=984061 |
9841185 | Tangent developable | In the mathematical study of the differential geometry of surfaces, a tangent developable is a particular kind of developable surface obtained from a curve in Euclidean space as the surface swept out by the tangent lines to the curve. Such a surface is also the envelope of the tangent planes to the curve.
Parameterization.
Let formula_0 be a parameterization of a smooth space curve. That is, formula_1 is a twice-differentiable function with nowhere-vanishing derivative that maps its argument formula_2 (a real number) to a point in space; the curve is the image of formula_1. Then a two-dimensional surface, the tangent developable of formula_1, may be parameterized by the map
formula_3
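For a concrete curve the map is easy to evaluate. The Python sketch below (the helix and the parameter ranges are arbitrary illustrative choices) samples the tangent developable of the circular helix γ(t) = (cos t, sin t, t):

import numpy as np

def gamma(t):
    """The directrix: a circular helix."""
    return np.stack([np.cos(t), np.sin(t), t], axis=-1)

def gamma_prime(t):
    return np.stack([-np.sin(t), np.cos(t), np.ones_like(t)], axis=-1)

t = np.linspace(0.0, 4.0 * np.pi, 200)
s = np.linspace(0.0, 1.5, 40)
S, T = np.meshgrid(s, t)                              # T[i, j] = t[i], S[i, j] = s[j]
surface = gamma(T) + S[..., None] * gamma_prime(T)    # (s, t) -> gamma(t) + s * gamma'(t)

print(surface.shape)                                  # (200, 40, 3) grid of surface points
print(np.allclose(surface[:, 0, :], gamma(t)))        # the s = 0 boundary is the curve itself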
The original curve forms a boundary of the tangent developable, and is called its directrix or edge of regression. This curve is obtained by first developing the surface into the plane, and then considering the image in the plane of the generators of the ruling on the surface. The envelope of this family of lines is a plane curve whose inverse image under the development is the edge of regression. Intuitively, it is a curve along which the surface needs to be folded during the process of developing into the plane.
Properties.
The tangent developable is a developable surface; that is, it is a surface with zero Gaussian curvature. It is one of three fundamental types of developable surface; the other two are the generalized cones (the surface traced out by a one-dimensional family of lines through a fixed point), and the cylinders (surfaces traced out by a one-dimensional family of parallel lines). (The plane is sometimes given as a fourth type, or may be seen as a special case of either of these two types.) Every developable surface in three-dimensional space may be formed by gluing together pieces of these three types; it follows from this that every developable surface is a ruled surface, a union of a one-dimensional family of lines. However, not every ruled surface is developable; the helicoid provides a counterexample.
The tangent developable of a curve containing a point of zero torsion will contain a self-intersection.
History.
Tangent developables were first studied by Leonhard Euler in 1772. Until that time, the only known developable surfaces were the generalized cones and the cylinders. Euler showed that tangent developables are developable and that every developable surface is of one of these types.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\gamma(t)"
},
{
"math_id": 1,
"text": "\\gamma"
},
{
"math_id": 2,
"text": "t"
},
{
"math_id": 3,
"text": "(s,t)\\mapsto \\gamma(t) + s\\gamma{\\,'}(t)."
}
] | https://en.wikipedia.org/wiki?curid=9841185 |
984289 | Jiuzhaigou | Nature reserve and national park in Sichuan, China
Jiuzhaigou is a nature reserve and national park located in the north of Sichuan Province in southwestern China. The southern end is the Minshan Garna Peak, and the northern end is the Huanglong Scenic Area. It originates from the Baishui River area, one of the headwaters of the Jialing River and a part of the Yangtze River system. A long valley running north to south, Jiuzhaigou was inscribed by UNESCO as a World Heritage Site in 1992 and a World Biosphere Reserve in 1997. It belongs to category V (Protected Landscape) in the IUCN system of protected area categorization.
The Jiuzhaigou valley is part of the Min Mountains on the edge of the Tibetan Plateau and stretches over . It has an altitude of over 4800 meters and is composed of a series of diverse forest ecosystems. It is known for its many multi-level waterfalls, colorful lakes, and snow-capped peaks. Its elevation ranges from .
History.
Jiuzhaigou (literally "Nine Settlement Valley") takes its name from the nine Tibetan settlements along its length.
The remote region was inhabited by various Tibetan and Qiang peoples for centuries. Until 1975 this inaccessible area was little known. Extensive logging took place until 1979, when the Chinese government banned such activity and made the area a national park in 1982. An Administration Bureau was established and the site officially opened to tourism in 1984; layout of facilities and regulations were completed in 1987.
The site was inscribed by UNESCO as a World Heritage Site in 1992 and a World Biosphere Reserve in 1997. The tourism area is classified as a AAAAA scenic area by the China National Tourism Administration.
Since opening, tourist activity has increased every year: from 5,000 in 1984 to 170,000 in 1991, 160,000 in 1995, to 200,000 in 1997, including about 3,000 foreigners. Visitors numbered 1,190,000 in 2002. As of 2004, the site averages 7,000 visits per day, with a quota of 12,000 being reportedly enforced during high season. The Town of Zhangzha at the exit of the valley and the nearby Songpan County feature an ever-increasing number of hotels, including several luxury five-stars, such as Sheraton.
Developments related to mass tourism in the region have caused concerns about the impact on the environment around the park.
Population.
7 of the 9 Tibetan villages are still populated today. The main agglomerations that are readily accessible to tourists are Heye, Shuzheng and Zechawa along the main paths that cater to tourists, selling various handicrafts, souvenirs and snacks. There is also Rexi in the smaller Zaru Valley and behind Heye village are Jianpan, Panya and Yana villages. Guodu and Hejiao villages are no longer populated. Penbu, Panxing and Yongzhu villages lie along the road that passes through the town of Jiuzhaigou/Zhangza outside the valley.
In 2003, the permanent population of the valley was about 1,000 comprising 112 families, and due to the protected nature of the park, agriculture is no longer permitted so the locals now rely on tourism and local government subsidies to make a living.
Geography and climate.
Jiuzhaigou lies at the southern end of the Minshan mountain range, north of the provincial capital of Chengdu. It is part of the Jiuzhaigou County (formerly Nanping County) in the Aba Tibetan Qiang Autonomous Prefecture of northwestern Sichuan province, near the Gansu border.
The valley covers , with buffer zones covering an additional . Its elevation, depending on the area considered, ranges from 1,998 to 2,140 m (at the mouth of Shuzheng Gully) to 4,558-4,764 m (on Mount Ganzigonggai at the top of Zechawa Gully).
The climate is subtropical to temperate monsoon with a mean annual temperature of 7.8 °C, with means of −3.7 °C in January and 16.8 °C in July. Total annual rainfall is 761 mm but in the cloud forest it is at least 1,000 mm. 80% of rainfall occurs between May and October. Due to the monsoon moving towards the valley, summer is mild, cloudy, and moderately humid. Above an altitude of 3500 meters, the climate is colder and drier.
Ecology.
Jiuzhaigou's ecosystem is classified as temperate broad-leaf forest and woodlands, with mixed mountain and highland systems. Nearly of the core scenic area are covered by virgin mixed forests. Those forests take on attractive (vibrant) yellow, orange and red hues in the autumn, making that season a popular one for visitors. They are home to a number of plant species of interest, such as endemic varieties of rhododendron and bamboo.
Local fauna includes the endangered giant panda and golden snub-nosed monkey. Both populations are very small (fewer than 20 individuals for the pandas) and isolated. Their survival is in question in a valley subject to increasing tourism. It is one of only three known locations for the threatened Duke of Bedford's vole. Jiuzhaigou is also home to approximately 140 bird species.
The region is a natural showcase for mountain karst hydrology and its research. It preserves a series of important forest ecosystems, including ancient forests that provide important habitats for many endangered animal and plant species, such as giant pandas and antelopes. The Jiuzhaigou Valley Scenic and Historic Interest Area also contains a large number of well-preserved Quaternary glacial relics, which are of great scenic value.
Geology and hydrology.
Jiuzhaigou's landscape is made up of high-altitude karsts shaped by glacial, hydrological and tectonic activity. It lies on major faults on the diverging belt between the Qinghai-Tibet Plate and the Yangtze Plate, and earthquakes have also shaped the landscape. The rock strata are mostly made up of carbonate rocks such dolomite and tufa, as well as some sandstone and shales.
The region contains large deposits of tufa, a type of limestone formed by the rapid precipitation of calcium carbonate in freshwater. It is deposited on rocks, lake beds, and even fallen trees in the water, sometimes accumulating into terraces, shoals, and dam barriers in the lakes.
The valley includes the catchment area of three gullies (which due to their large size are often called valleys themselves), and is one of the sources of the Jialing River via the Bailong River, part of the Yangtze River system.
Jiuzhaigou's best-known feature is its dozens of blue, green and turquoise-colored lakes. The local Tibetan people call them "Haizi" in Chinese, meaning "son of the sea". Originating in glacial activity, they were dammed by rockfalls and other natural phenomena, then solidified by processes of carbonate deposition. Some lakes have a high concentration of calcium carbonate, and their water is very clear so that the bottom is often visible even at high depths. The lakes vary in color and aspect according to their depths, residues, and surroundings.
Some of the less stable dams and formations have been artificially reinforced, and direct contact with the lakes or other features is forbidden to tourists.
Notable features.
Jiuzhaigou is composed of three valleys arranged in a Y shape. The Rize and Zechawa valleys flow from the south and meet at the centre of the site where they form the Shuzheng valley, flowing north to the mouth of the valley. The mountainous watersheds of these gullies are lined with of roads for shuttle buses, as well as wooden boardwalks and small pavilions. The boardwalks are typically located on the opposite side of the lakes from the road, shielding them from disturbance by passing buses.
Most visitors will first take the shuttle bus to the end of Rize and/or Shuzheng gully, then make their way back downhill by foot on the boardwalks, taking the bus instead when the next site is too distant. Here is a summary of the sites found in each of the gullies:
Rize Valley.
The Rize Valley (日则沟, pinyin: Rìzé Gōu) is the south-western branch of Jiuzhaigou. It contains the largest variety of sites and is typically visited first. Going downhill from its highest point, one passes the following sites:
Zechawa Valley.
The Zechawa Gully (则查洼沟, Zécháwā Gōu) is the south-eastern branch of Jiuzhaigou. It is approximately the same length as Rize gully (18 km) but climbs to a higher altitude (3150 m at the Long Lake). Going downhill from its highest point, it features the following sites:
Shuzheng Valley.
The Shuzheng Valley (树正沟, Shùzhèng Gōu) is the northern (main) branch of Jiuzhaigou. It ends after at the Y-shaped intersection of the three gullies. Going downhill from the intersection to the mouth of the valley, visitors encounter the following:
Tourism.
The "Zharu Valley" (扎如沟, Zhārú Gōu) runs southeast from the main Shuzheng gully and is rarely visited by tourists. The valley begins at the Zharu Buddhist monastery and ends at the Red, Black, and Daling lakes.
Zharu Valley is the home of tourism in Jiuzhaigou. The valley has recently been opened to a small number of tourists wishing to go hiking and camping off the beaten track. Visitors can choose from day walks and multiple day hikes, depending on their time availability. Knowledgeable guides accompany tourists through the valley, sharing their knowledge about the unique biodiversity and local culture of the national park. The Zharu Valley has 40% of all the plant species that exist in China and it is the best place to spot wildlife inside the national park.
The main hike follows the pilgrimage of the local Benbo Buddhists circumnavigating the sacred 4,528 m Zha Yi Zha Ga Mountain.
Access.
Jiuzhaigou, compared with other high-traffic scenic spots in China, can be difficult to reach by land. The majority of tourists reach the valley by a ten-hour bus ride from Chengdu along the Min River canyon, which is prone to occasional minor rock-slides and, in the rainy season, mudslides that can add several hours to the trip. The new highway constructed along this route was badly damaged during the 2008 Sichuan Earthquake, but has since been repaired. Further repairs from Mao Xian to Chuan Zhu Si are proceeding, but the road is open to public buses and private vehicles.
Since 2003, it has been possible to fly from Chengdu or Chongqing to Jiuzhai Huanglong Airport on a mountainside in Songpan County, and then take an hour-long bus ride to Huanglong, or a 90-minute bus ride to Jiuzhaigou. Since 2006, a daily flight to Xi'an has operated during the peak season. In October 2009, new direct flights were added from Beijing, Shanghai, and Hangzhou. Jiuzhaigou and Huanglong National Parks did not experience any damage during the earthquake of May 2008, and did not close after the event.
The Chengdu–Lanzhou Railway is under construction and will have a station in Jiuzhaigou County.
2017 earthquake impact.
In August 2017, a magnitude 7.0 earthquake struck Jiuzhaigou County, causing significant structural damage. The authorities closed the valley to tourists until March 3, 2018, before reopening the park with limited access.
The Jiuzhaigou earthquake in Sichuan, China had a significant impact on the scenic area. The earthquake also resulted in the damage and breakage of two natural dams, namely the Nuorilang Waterfall dam and the Huohua Lake dam.
Dam damage of Nuorilang Waterfall.
The Nuorilang Waterfall suffered damage due to its initial low stability and topographic effects. The dam was composed of poor material with low mechanical strength, making it prone to rockfalls even during non-earthquake periods. The nearly vertical structure of the dam amplified the seismic influences at its upper part, increasing the likelihood of deformation and collapse.
Dam break of Huohua Lake.
The Jiuzhaigou earthquake caused the Huohua Lake dam to break. After the dam broke, the water discharge increased from the normal level of formula_0 to a maximum of formula_1. As a result, the water level rapidly descended, leading to collapses along the dam.
Protection.
As a national park and national nature reserve, Jiuzhaigou Valley Scenic and Historic Interest Area is protected by national and provincial laws and regulations, ensuring the long-term management and protection of the heritage. In 2004, the Sichuan Provincial Regulations on the Protection of World Heritage and the Implementation of the Sichuan Provincial Regulations on the Protection of World Heritage in Aba Autonomous Prefecture came into force, providing a stricter legal basis for protecting the heritage.
The Sichuan Provincial Construction Commission is fully responsible for the protection and management of the site. The Jiuzhaigou Valley Scenic and Historic Interest Area Administrative Bureau (ABJ) consists of several departments, including the Protection Section, the Construction Section and the police station, which are responsible for field administration. In addition to national legislation, there are many relevant local government laws and regulations. The management plan revised in 2001 is based on these laws and contains specific regulations and recommendations: it prohibits the logging of trees and forests as well as activities that cause pollution, and takes full account of the needs of local Tibetan residents.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "9.3 m^3/s"
},
{
"math_id": 1,
"text": "21.5 m^3/s"
}
] | https://en.wikipedia.org/wiki?curid=984289 |
9845792 | Leibniz harmonic triangle | The Leibniz harmonic triangle is a triangular arrangement of unit fractions in which the outermost diagonals consist of the reciprocals of the row numbers and each inner cell is the cell diagonally above and to the left minus the cell to the left. To put it algebraically, "L"("r", 1) = 1/"r" (where "r" is the number of the row, starting from 1, and "c" is the column number, never more than "r") and "L"("r", "c") = "L"("r" − 1, "c" − 1) − "L"("r", "c" − 1).
Values.
The first eight rows are:
formula_0
The denominators are listed in (sequence in the OEIS), while the numerators are all 1s.
Terms.
The terms are given by the recurrences
formula_1
formula_2
and explicitly by
formula_3
where formula_4 is a binomial coefficient.
Relation to Pascal's triangle.
Whereas each entry in Pascal's triangle is the sum of the two entries in the above row, each entry in the Leibniz triangle is the sum of the two entries in the row "below" it. For example, in the 5th row, the entry (1/30) is the sum of the two (1/60)s in the 6th row.
Just as Pascal's triangle can be computed by using binomial coefficients, so can Leibniz's: formula_5. Furthermore, the entries of this triangle can be computed from Pascal's: "The terms in each row are the initial term divided by the corresponding Pascal triangle entries." In fact, each diagonal relates to the corresponding Pascal triangle diagonal: the first Leibniz diagonal consists of the reciprocals of 1 × the natural numbers, the second of the reciprocals of 2 × the triangular numbers, the third of the reciprocals of 3 × the tetrahedral numbers, and so on.
Moreover, each entry in the harmonic triangle is equal to the reciprocal of the respective entry in Pascal's triangle multiplied by the reciprocal of the respective row number formula_6, that is formula_7, where formula_8 is the entry in the harmonic triangle and formula_9 is the respective entry in Pascal's triangle.
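The explicit formula formula_5 lends itself to direct computation. The following is a minimal Rust sketch (not taken from the sources here; the function names are illustrative) that builds the denominators of the first rows and checks the row-sum property discussed under Properties below.
fn binomial(n: u64, k: u64) -> u64 {
    // Binomial coefficient C(n, k), computed with exact integer arithmetic.
    let k = k.min(n - k);
    let mut result = 1u64;
    for i in 0..k {
        // result holds C(n, i) here, and result * (n - i) is divisible by (i + 1)
        result = result * (n - i) / (i + 1);
    }
    result
}

/// Denominator of the entry L(r, c) = 1 / (r * C(r - 1, c - 1)), with 1-based r and c.
fn leibniz_denominator(r: u64, c: u64) -> u64 {
    r * binomial(r - 1, c - 1)
}

fn main() {
    // Denominators of the first five rows: [1], [2, 2], [3, 6, 3], [4, 12, 12, 4], [5, 20, 30, 20, 5]
    for r in 1..=5u64 {
        let row: Vec<u64> = (1..=r).map(|c| leibniz_denominator(r, c)).collect();
        println!("{:?}", row);
    }
    // Row-sum property (see Properties below): denominators of row n sum to n * 2^(n - 1)
    let n = 3u64;
    let sum: u64 = (1..=n).map(|c| leibniz_denominator(n, c)).sum();
    assert_eq!(sum, n * 2u64.pow(n as u32 - 1)); // 3 + 6 + 3 = 12
}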
Infinite series.
The infinite sum of all the terms in any diagonal equals the first term in the previous diagonal, that is, formula_10, because the recurrence can be used to telescope the series as formula_11, where formula_12.
formula_13
For example,
formula_14
formula_15
Replacing the coefficients by their explicit formula gives the infinite series formula_16; the first example given here appeared originally in work of Leibniz around 1694.
Properties.
If one takes the denominators of the "n"th row and adds them, then the result will equal formula_17. For example, for the 3rd row, we have 3 + 6 + 3 = 12 = 3 × 22.
We have formula_18
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{array}{cccccccccccccccccc}\n& & & & & & & & & 1 & & & & & & & &\\\\\n& & & & & & & & \\frac{1}{2} & & \\frac{1}{2} & & & & & & &\\\\\n& & & & & & & \\frac{1}{3} & & \\frac{1}{6} & & \\frac{1}{3} & & & & & &\\\\\n& & & & & & \\frac{1}{4} & & \\frac{1}{12} & & \\frac{1}{12} & & \\frac{1}{4} & & & & &\\\\\n& & & & & \\frac{1}{5} & & \\frac{1}{20} & & \\frac{1}{30} & & \\frac{1}{20} & & \\frac{1}{5} & & & &\\\\\n& & & & \\frac{1}{6} & & \\frac{1}{30} & & \\frac{1}{60} & & \\frac{1}{60} & & \\frac{1}{30} & & \\frac{1}{6} & & &\\\\\n& & & \\frac{1}{7} & & \\frac{1}{42} & & \\frac{1}{105} & & \\frac{1}{140} & & \\frac{1}{105} & & \\frac{1}{42} & & \\frac{1}{7} & &\\\\\n& & \\frac{1}{8} & & \\frac{1}{56} & & \\frac{1}{168} & & \\frac{1}{280} & & \\frac{1}{280} & & \\frac{1}{168} & & \\frac{1}{56} & & \\frac{1}{8} &\\\\\n& & & & &\\vdots & & & & \\vdots & & & & \\vdots& & & & \\\\\n\\end{array}"
},
{
"math_id": 1,
"text": "a_{n,1} = \\frac{1}{n},"
},
{
"math_id": 2,
"text": "a_{n,k} = \\frac{1}{n\\binom{n-1}{k-1}},"
},
{
"math_id": 3,
"text": "a_{n,k} = \\frac{1}{\\binom{n}{k}k}"
},
{
"math_id": 4,
"text": "\\binom{n}{k}"
},
{
"math_id": 5,
"text": "L(r, c) = \\frac{1}{r {r-1 \\choose c-1}}"
},
{
"math_id": 6,
"text": "r"
},
{
"math_id": 7,
"text": " h_{(r,c)} = \\frac{1}{p_{(r,c)}}\\times \\frac{1}{r} "
},
{
"math_id": 8,
"text": " h_{(r,c)} "
},
{
"math_id": 9,
"text": " p_{(r,c)} "
},
{
"math_id": 10,
"text": "\\sum_{r=c}^{\\infty} L(r,c)=L(c-1,c-1)\n"
},
{
"math_id": 11,
"text": "\\sum_{r=c}^{\\infty} L(r,c)=\\sum_{r=c}^{\\infty} L(r-1,c-1)-L(r,c-1)=L(c-1,c-1)-\\cancelto{0}{L(\\infty,c-1)}\n:"
},
{
"math_id": 12,
"text": "L(\\infty,c-1)=\\lim_{r\\to\\infty}L(r,c-1)=\\lim_{r\\to\\infty}\\frac{1}{r{r-1 \\choose c-2}}=0"
},
{
"math_id": 13,
"text": "\\begin{array}{cccccccccccccccccc}\n& & & & & & {\\color{red}1} & & & & & &\\\\\n& & & & & \\frac{1}{2} & & {\\color{blue}\\frac{1}{2}} & & & &\\\\\n& & & & \\frac{1}{3} & & {\\color{blue}\\frac{1}{6}} & & \\frac{1}{3} & & &\\\\\n& & & \\frac{1}{4} & & {\\color{blue}\\frac{1}{12}} & & \\frac{1}{12} & & {\\color{red}\\frac{1}{4}} & &\\\\\n& & \\frac{1}{5} & & {\\color{blue}\\frac{1}{20}} & & \\frac{1}{30} & & \\frac{1}{20} & & {\\color{blue}\\frac{1}{5}} &\\\\\n& \\frac{1}{6} & & {\\color{blue}\\frac{1}{30}} & & \\frac{1}{60} & & \\frac{1}{60} & & {\\color{blue}\\frac{1}{30}} & & \\frac{1}{6}\\\\\n& & & &\\vdots & & & &\\vdots & & &\\\\\n\\end{array}"
},
{
"math_id": 14,
"text": "{\\color{blue}\\frac{1}{2}+\\frac{1}{6}+\\frac{1}{12}+...}=\\frac{1}{1}-\\frac{1}{2}+\\frac{1}{2}-\\frac{1}{3}+\\frac{1}{3}-\\frac{1}{4}+...={\\color{red}1}"
},
{
"math_id": 15,
"text": "{\\color{blue}\\frac{1}{5}+\\frac{1}{30}+\\frac{1}{105}+...}=\\frac{1}{4}-\\frac{1}{20}+\\frac{1}{20}-\\frac{1}{60}+\\frac{1}{60}-\\frac{1}{140}+...={\\color{red}\\frac{1}{4}}"
},
{
"math_id": 16,
"text": "\\sum_{r=c}^{\\infty} \\frac{1}{r{r-1 \\choose c-1}}=\\frac{1}{c-1}\n"
},
{
"math_id": 17,
"text": "n 2^{n - 1}"
},
{
"math_id": 18,
"text": "L(r, c) = \\int_0^1 \\! x ^ {c-1} (1-x)^{r-c} \\,dx."
}
] | https://en.wikipedia.org/wiki?curid=9845792 |
984752 | Lyapunov equation | Equation from stability analysis
The Lyapunov equation, named after the Russian mathematician Aleksandr Lyapunov, is a matrix equation used in the stability analysis of linear dynamical systems.
In particular, the discrete-time Lyapunov equation (also known as Stein equation) for formula_0 is
formula_1
where formula_2 is a Hermitian matrix and formula_3 is the conjugate transpose of formula_4, while the continuous-time Lyapunov equation is
formula_5.
Application to stability.
In the following theorems formula_6, and formula_7 and formula_2 are symmetric. The notation formula_8 means that the matrix formula_7 is positive definite.
Theorem (continuous time version). Given any formula_9, there exists a unique formula_8 satisfying formula_10 if and only if the linear system formula_11 is globally asymptotically stable. The quadratic function formula_12 is a Lyapunov function that can be used to verify stability.
Theorem (discrete time version). Given any formula_9, there exists a unique formula_8 satisfying formula_13 if and only if the linear system formula_14 is globally asymptotically stable. As before, formula_15 is a Lyapunov function.
Computational aspects of solution.
The Lyapunov equation is linear, and so if formula_0 contains formula_16 entries, it can be solved in formula_17 time using standard matrix factorization methods.
However, specialized algorithms are available which can yield solutions much quicker owing to the specific structure of the Lyapunov equation. For the discrete case, the Schur method of Kitagawa is often used. For the continuous Lyapunov equation the Bartels–Stewart algorithm can be used.
Analytic solution.
Defining the vectorization operator formula_18 as stacking the columns of a matrix formula_4 and formula_19 as the Kronecker product of formula_4 and formula_20, the continuous time and discrete time Lyapunov equations can be expressed as solutions of a matrix equation. Furthermore, if the matrix formula_4 is "stable", the solution can also be expressed as an integral (continuous time case) or as an infinite sum (discrete time case).
Discrete time.
Using the result that formula_21, one has
formula_22
where formula_23 is a conformable identity matrix and formula_24 is the element-wise complex conjugate of formula_4. One may then solve for formula_25 by inverting or solving the linear equations. To get formula_0, one must just reshape formula_26 appropriately.
Moreover, if formula_4 is stable (in the sense of Schur stability, i.e., having eigenvalues with magnitude less than 1), the solution formula_0 can also be written as
formula_27.
For comparison, consider the one-dimensional case, where this just says that the solution of formula_28 is
formula_29.
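As a concrete sanity check of the series solution formula_27 for a Schur-stable formula_4, the following Rust sketch (illustrative only; production code would use the Schur method of Kitagawa or a library routine) sums the truncated series for a small real example and verifies the residual of the discrete-time equation. For real matrices the conjugate transpose reduces to the ordinary transpose.
type Mat2 = [[f64; 2]; 2];

fn matmul(a: &Mat2, b: &Mat2) -> Mat2 {
    let mut c = [[0.0; 2]; 2];
    for i in 0..2 {
        for j in 0..2 {
            for k in 0..2 {
                c[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    c
}

fn transpose(a: &Mat2) -> Mat2 {
    [[a[0][0], a[1][0]], [a[0][1], a[1][1]]]
}

fn main() {
    // A Schur-stable matrix (eigenvalues inside the unit circle) and a symmetric Q (toy values).
    let a: Mat2 = [[0.5, 0.1], [0.0, 0.3]];
    let q: Mat2 = [[1.0, 0.0], [0.0, 2.0]];

    // X = sum_{k >= 0} A^k Q (A^T)^k, truncated once the terms become negligible.
    let mut x = q;
    let mut term = q;
    for _ in 0..200 {
        term = matmul(&a, &matmul(&term, &transpose(&a)));
        for i in 0..2 {
            for j in 0..2 {
                x[i][j] += term[i][j];
            }
        }
    }

    // Residual of the discrete Lyapunov equation A X A^T - X + Q should be ~0.
    let axa = matmul(&a, &matmul(&x, &transpose(&a)));
    for i in 0..2 {
        for j in 0..2 {
            let r = axa[i][j] - x[i][j] + q[i][j];
            assert!(r.abs() < 1e-9, "residual too large: {}", r);
        }
    }
    println!("X = {:?}", x);
}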
Continuous time.
Using again the Kronecker product notation and the vectorization operator, one has the matrix equation
formula_30
where formula_24 denotes the matrix obtained by complex conjugating the entries of formula_4.
Similar to the discrete-time case, if formula_4 is stable (in the sense of Hurwitz stability, i.e., having eigenvalues with negative real parts), the solution formula_0 can also be written as
formula_31,
which holds because
formula_32
For comparison, consider the one-dimensional case, where this just says that the solution of formula_33 is
formula_34.
Relationship between discrete and continuous Lyapunov equations.
We start with the continuous-time linear dynamics:
formula_35.
And then discretize it as follows:
formula_36
where formula_37 indicates a small forward displacement in time. Substituting the approximation for the derivative into the dynamics and rearranging terms, we get a discrete-time equation for formula_38.
formula_39
where we have defined formula_40. Now we can use the discrete-time Lyapunov equation for formula_41:
formula_42
Plugging in our definition for formula_41, we get:
formula_43
Expanding this expression out yields:
formula_44
Recall that formula_45 is a small displacement in time. As formula_45 goes to zero, the discretization approaches the continuous dynamics, so the continuous-time Lyapunov equation should be recovered in the same limit. Dividing through by formula_45 on both sides, and then letting formula_46, we find that:
formula_47
which is the continuous-time Lyapunov equation, as desired.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "A X A^{H} - X + Q = 0"
},
{
"math_id": 2,
"text": "Q"
},
{
"math_id": 3,
"text": "A^H"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "AX + XA^H + Q = 0"
},
{
"math_id": 6,
"text": "A, P, Q \\in \\mathbb{R}^{n \\times n}"
},
{
"math_id": 7,
"text": "P"
},
{
"math_id": 8,
"text": "P>0"
},
{
"math_id": 9,
"text": "Q>0"
},
{
"math_id": 10,
"text": "A^T P + P A + Q = 0"
},
{
"math_id": 11,
"text": "\\dot{x}=A x"
},
{
"math_id": 12,
"text": "V(x)=x^T P x"
},
{
"math_id": 13,
"text": "A^T P A -P + Q = 0"
},
{
"math_id": 14,
"text": "x_{t+1}=A x_{t}"
},
{
"math_id": 15,
"text": "x^T P x"
},
{
"math_id": 16,
"text": "n"
},
{
"math_id": 17,
"text": "\\mathcal O(n^3)"
},
{
"math_id": 18,
"text": "\\operatorname{vec} (A)"
},
{
"math_id": 19,
"text": "A \\otimes B"
},
{
"math_id": 20,
"text": "B"
},
{
"math_id": 21,
"text": " \\operatorname{vec}(ABC)=(C^{T} \\otimes A)\\operatorname{vec}(B) "
},
{
"math_id": 22,
"text": " (I_{n^2}-\\bar{A} \\otimes A)\\operatorname{vec}(X) = \\operatorname{vec}(Q) "
},
{
"math_id": 23,
"text": "I_{n^2}"
},
{
"math_id": 24,
"text": "\\bar{A}"
},
{
"math_id": 25,
"text": "\\operatorname{vec}(X)"
},
{
"math_id": 26,
"text": "\\operatorname{vec} (X)"
},
{
"math_id": 27,
"text": " X = \\sum_{k=0}^{\\infty} A^{k} Q (A^{H})^k "
},
{
"math_id": 28,
"text": " (1 - a^2) x = q "
},
{
"math_id": 29,
"text": " x = \\frac{q}{1-a^2} = \\sum_{k=0}^{\\infty} qa^{2k} "
},
{
"math_id": 30,
"text": " (I_n \\otimes A + \\bar{A} \\otimes I_n) \\operatorname{vec}X = -\\operatorname{vec}Q, "
},
{
"math_id": 31,
"text": " X = \\int_0^\\infty {e}^{A \\tau} Q \\mathrm{e}^{A^H \\tau} d\\tau "
},
{
"math_id": 32,
"text": " \\begin{align}AX+XA^H =& \\int_0^\\infty A{e}^{A \\tau} Q \\mathrm{e}^{A^H \\tau}+{e}^{A \\tau} Q \\mathrm{e}^{A^H \\tau}A^H d\\tau \\\\\n=&\\int_0^\\infty \\frac{d}{d\\tau} {e}^{A \\tau} Q \\mathrm{e}^{A^H \\tau} d\\tau \\\\\n=& {e}^{A \\tau} Q \\mathrm{e}^{A^H \\tau} \\bigg|_0^\\infty\\\\\n=& -Q.\n\\end{align}\n"
},
{
"math_id": 33,
"text": " 2ax = - q "
},
{
"math_id": 34,
"text": " x = \\frac{-q}{2a} = \\int_0^{\\infty} q{e}^{2 a \\tau} d\\tau "
},
{
"math_id": 35,
"text": " \\dot{\\mathbf{x}} = \\mathbf{A}\\mathbf{x} "
},
{
"math_id": 36,
"text": " \\dot{\\mathbf{x}} \\approx \\frac{\\mathbf{x}_{t+1}-\\mathbf{x}_{t}}{\\delta}"
},
{
"math_id": 37,
"text": " \\delta > 0 "
},
{
"math_id": 38,
"text": " \\mathbf{x}_{t+1}"
},
{
"math_id": 39,
"text": " \\mathbf{x}_{t+1} = \\mathbf{x}_t + \\delta \\mathbf{A} \\mathbf{x}_t = (\\mathbf{I} + \\delta\\mathbf{A})\\mathbf{x}_t = \\mathbf{B}\\mathbf{x}_t"
},
{
"math_id": 40,
"text": " \\mathbf{B} \\equiv \\mathbf{I} + \\delta\\mathbf{A}"
},
{
"math_id": 41,
"text": " \\mathbf{B}"
},
{
"math_id": 42,
"text": " \\mathbf{B}^T\\mathbf{M}\\mathbf{B} - \\mathbf{M} = -\\delta\\mathbf{Q}"
},
{
"math_id": 43,
"text": " (\\mathbf{I} + \\delta \\mathbf{A})^T\\mathbf{M}(\\mathbf{I} + \\delta \\mathbf{A}) - \\mathbf{M} = -\\delta \\mathbf{Q}"
},
{
"math_id": 44,
"text": " (\\mathbf{M} + \\delta \\mathbf{A}^T\\mathbf{M}) (\\mathbf{I} + \\delta \\mathbf{A}) - \\mathbf{M} = \\delta(\\mathbf{A}^T\\mathbf{M} + \\mathbf{M}\\mathbf{A}) + \\delta^2 \\mathbf{A}^T\\mathbf{M}\\mathbf{A} = -\\delta \\mathbf{Q}"
},
{
"math_id": 45,
"text": " \\delta "
},
{
"math_id": 46,
"text": " \\delta \\to 0 "
},
{
"math_id": 47,
"text": " \\mathbf{A}^T\\mathbf{M} + \\mathbf{M}\\mathbf{A} = -\\mathbf{Q} "
}
] | https://en.wikipedia.org/wiki?curid=984752 |
9849183 | Wien bridge | The Wien bridge is a type of bridge circuit that was developed by Max Wien in 1891. The bridge consists of four resistors and two capacitors.
At the time of the Wien bridge's invention, bridge circuits were a common way of measuring component values by comparing them to known values. Often an unknown component would be put in one arm of a bridge, and then the bridge would be nulled by adjusting the other arms or changing the frequency of the voltage source. See, for example, the Wheatstone bridge.
The Wien bridge is one of many common bridges. Wien's bridge is used for precision measurement of capacitance in terms of resistance and frequency. It was also used to measure audio frequencies.
The Wien bridge does not require equal values of "R" or "C". At some frequency, the reactance of the series "R"2–"C"2 arm will be an exact multiple of the shunt "R""x"–"C""x" arm. If the two "R"3 and "R"4 arms are adjusted to the same ratio, then the bridge is balanced.
The bridge is balanced when:
formula_0 and formula_1
The equations simplify if one chooses "R"2 = "R""x" and "C"2 = "C"x; the result is "R"4 = 2"R"3.
In practice, the values of "R" and "C" will never be exactly equal, but the equations above show that for fixed values in the "2" and "x" arms, the bridge will balance at some "ω" and some ratio of "R"4/"R"3.
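The balance conditions can be turned directly into a small numerical check. The following Rust sketch uses illustrative component values (not taken from any particular instrument) to compute the balance frequency from formula_0 and to verify that the choice "R"2 = "R""x", "C"2 = "C""x" in formula_1 gives "R"4 = 2"R"3.
fn main() {
    // Illustrative component values (assumed, not from the text above).
    let (r_x, r2): (f64, f64) = (10_000.0, 10_000.0); // ohms
    let (c_x, c2): (f64, f64) = (10e-9, 10e-9);       // farads

    // Balance condition: omega^2 = 1 / (R_x * R2 * C_x * C2)
    let omega = 1.0 / (r_x * r2 * c_x * c2).sqrt();
    let f = omega / (2.0 * std::f64::consts::PI);
    println!("balance frequency: {:.1} Hz", f); // about 1591.5 Hz for these values

    // With R2 = R_x and C2 = C_x, the ratio-arm condition reduces to R4 = 2 * R3.
    let r3: f64 = 1_000.0;
    let r4 = r3 * ((c_x / c2) + (r2 / r_x));
    assert!((r4 - 2.0 * r3).abs() < 1e-9);
}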
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\omega^2 = {1 \\over R_x R_2 C_x C_2}"
},
{
"math_id": 1,
"text": " {C_x \\over C_2} = {R_4 \\over R_3} - {R_2 \\over R_x} \\, ."
}
] | https://en.wikipedia.org/wiki?curid=9849183 |
9852079 | Ragsdale conjecture | The Ragsdale conjecture is a mathematical conjecture that concerns the possible arrangements of real algebraic curves embedded in the projective plane. It was proposed by Virginia Ragsdale in her dissertation in 1906 and was disproved in 1979. It has been called "the oldest and most famous conjecture on the topology of real algebraic curves".
Formulation of the conjecture.
Ragsdale's dissertation, "On the Arrangement of the Real Branches of Plane Algebraic Curves," was published by the American Journal of Mathematics in 1906. The dissertation was a treatment of Hilbert's sixteenth problem, which had been proposed by Hilbert in 1900, along with 22 other unsolved problems of the 19th century; it is one of the handful of Hilbert's problems that remains wholly unresolved. Ragsdale formulated a conjecture that provided an upper bound on the number of topological circles of a certain type, along with supporting evidence.
Conjecture.
Ragsdale's main conjecture is as follows.
Assume that an algebraic curve of degree 2"k" contains "p" even and "n" odd ovals. Ragsdale conjectured that
formula_0
She also posed the inequality
formula_1
and showed that the inequality could not be further improved. This inequality was later proved by Petrovsky.
Disproving the conjecture.
The conjecture was considered very important in the field of real algebraic geometry for most of the twentieth century. In 1980, Oleg Viro introduced a technique known as "patchworking algebraic curves" and used it to generate a counterexample to the conjecture.
In 1993, Ilia Itenberg produced additional counterexamples to the Ragsdale conjecture, so Viro and Itenberg wrote a paper in 1996 discussing their work on disproving the conjecture using the "patchworking" technique.
The problem of finding a sharp upper bound remains unsolved. | [
{
"math_id": 0,
"text": " p \\le \\tfrac32 k(k-1) + 1 \\quad\\text{and}\\quad n \\le \\tfrac32 k(k-1). "
},
{
"math_id": 1,
"text": " | 2(p-n)-1 | \\le 3k^2 - 3k + 1, "
}
] | https://en.wikipedia.org/wiki?curid=9852079 |
985410 | Binary GCD algorithm | The binary GCD algorithm, also known as Stein's algorithm or the binary Euclidean algorithm, is an algorithm that computes the greatest common divisor (GCD) of two nonnegative integers. Stein's algorithm uses simpler arithmetic operations than the conventional Euclidean algorithm; it replaces division with arithmetic shifts, comparisons, and subtraction.
Although the algorithm in its contemporary form was first published by the physicist and programmer Josef Stein in 1967, it was known by the 2nd century BCE, in ancient China.
Algorithm.
The algorithm finds the GCD of two nonnegative numbers formula_0 and formula_1 by repeatedly applying these identities:
1. formula_2: everything divides zero, and formula_0 is the largest number that divides formula_0.
2. formula_3: formula_4 is a common divisor.
3. formula_5 if formula_0 is odd: formula_4 is then not a common divisor.
4. formula_6 if formula_7 are both odd and formula_8.
As GCD is commutative (formula_9), those identities still apply if the operands are swapped: formula_10, formula_11 if formula_1 is odd, etc.
Implementation.
While the above description of the algorithm is mathematically correct, performant software implementations typically differ from it in a few notable ways: they strip all factors of two at once using bit-level operations (counting trailing zeros) rather than one at a time, they keep track of the common power of two to restore at the end, and they iterate rather than recurse.
The following is an implementation of the algorithm in Rust exemplifying those differences, adapted from "uutils":
use std::cmp::min;
use std::mem::swap;

pub fn gcd(mut u: u64, mut v: u64) -> u64 {
    // Base cases: gcd(n, 0) = gcd(0, n) = n
    if u == 0 {
        return v;
    } else if v == 0 {
        return u;
    }

    // Using identities 2 and 3:
    // gcd(2ⁱ u, 2ʲ v) = 2ᵏ gcd(u, v) with u, v odd and k = min(i, j)
    // 2ᵏ is the greatest power of two that divides both 2ⁱ u and 2ʲ v
    let i = u.trailing_zeros();  u >>= i;
    let j = v.trailing_zeros();  v >>= j;
    let k = min(i, j);

    loop {
        // u and v are odd at the start of the loop
        debug_assert!(u % 2 == 1, "u = {} should be odd", u);
        debug_assert!(v % 2 == 1, "v = {} should be odd", v);

        // Swap if necessary so u ≤ v
        if u > v {
            swap(&mut u, &mut v);
        }

        // Identity 4: gcd(u, v) = gcd(u, v-u) as u ≤ v and u, v are both odd
        v -= u;
        // v is now even

        if v == 0 {
            // Identity 1: gcd(u, 0) = u
            // The shift by k is necessary to add back the 2ᵏ factor that was removed before the loop
            return u << k;
        }

        // Identity 3: gcd(u, 2ʲ v) = gcd(u, v) as u is odd
        v >>= v.trailing_zeros();
    }
}
Note: The implementation above accepts "unsigned" (non-negative) integers; given that formula_13, the signed case can be handled as follows:
/// Computes the GCD of two signed 64-bit integers
/// The result is unsigned and not always representable as i64: gcd(i64::MIN, i64::MIN) == 1 << 63
pub fn signed_gcd(u: i64, v: i64) -> u64 {
    gcd(u.unsigned_abs(), v.unsigned_abs())
}
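A few illustrative spot checks of the two functions above, assuming they are in scope (the chosen values are arbitrary):
fn main() {
    assert_eq!(gcd(48, 18), 6);
    assert_eq!(gcd(0, 7), 7);                    // base case: gcd(0, n) = n
    assert_eq!(gcd(1 << 20, 1 << 12), 1 << 12);  // powers of two, handled by identity 2
    assert_eq!(signed_gcd(-48, 18), 6);          // signs are ignored
    println!("all checks passed");
}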
Complexity.
Asymptotically, the algorithm requires formula_14 steps, where formula_15 is the number of bits in the larger of the two numbers, as every two steps reduce at least one of the operands by at least a factor of formula_4. Each step involves only a few arithmetic operations (formula_16 with a small constant); when working with word-sized numbers, each arithmetic operation translates to a single machine operation, so the number of machine operations is on the order of formula_15, i.e. formula_17.
For arbitrarily-large numbers, the asymptotic complexity of this algorithm is formula_18, as each arithmetic operation (subtract and shift) involves a linear number of machine operations (one per word in the numbers' binary representation).
If the numbers can be represented in the machine's memory, "i.e." each number's "size" can be represented by a single machine word, this bound is reduced to:
formula_19
This is the same as for the Euclidean algorithm, though a more precise analysis by Akhavi and Vallée proved that binary GCD uses about 60% fewer bit operations.
Extensions.
The binary GCD algorithm can be extended in several ways, either to output additional information, deal with arbitrarily-large integers more efficiently, or to compute GCDs in domains other than the integers.
The "extended binary GCD" algorithm, analogous to the extended Euclidean algorithm, fits in the first kind of extension, as it provides the Bézout coefficients in addition to the GCD: integers formula_20 and formula_21 such that formula_22.
In the case of large integers, the best asymptotic complexity is formula_23, with formula_24 the cost of formula_15-bit multiplication; this is near-linear and vastly smaller than the binary GCD algorithm's formula_18, though concrete implementations only outperform older algorithms for numbers larger than about 64 kilobits ("i.e." greater than 8×10^19265). This is achieved by extending the binary GCD algorithm using ideas from the Schönhage–Strassen algorithm for fast integer multiplication.
The binary GCD algorithm has also been extended to domains other than natural numbers, such as Gaussian integers, Eisenstein integers, quadratic rings, and integer rings of number fields.
Historical description.
An algorithm for computing the GCD of two numbers was known in ancient China, under the Han dynasty, as a method to reduce fractions:
<templatestyles src="Template:Blockquote/styles.css" />If possible halve it; otherwise, take the denominator and the numerator, subtract the lesser from the greater, and do that alternately to make them the same. Reduce by the same number.
The phrase "if possible halve it" is ambiguous.
Further reading.
Covers the extended binary GCD, and a probabilistic analysis of the algorithm.
Covers a variety of topics, including the extended binary GCD algorithm which outputs Bézout coefficients, efficient handling of multi-precision integers using a variant of Lehmer's GCD algorithm, and the relationship between GCD and continued fraction expansions of real numbers.
An analysis of the algorithm in the average case, through the lens of functional analysis: the algorithms' main parameters are cast as a dynamical system, and their average value is related to the invariant measure of the system's transfer operator. | [
{
"math_id": 0,
"text": "u"
},
{
"math_id": 1,
"text": "v"
},
{
"math_id": 2,
"text": "\\gcd(u, 0) = u"
},
{
"math_id": 3,
"text": "\\gcd(2u, 2v) = 2 \\cdot \\gcd(u, v)"
},
{
"math_id": 4,
"text": "2"
},
{
"math_id": 5,
"text": "\\gcd(u, 2v) = \\gcd(u, v)"
},
{
"math_id": 6,
"text": "\\gcd(u, v) = \\gcd(u, v - u)"
},
{
"math_id": 7,
"text": "u, v"
},
{
"math_id": 8,
"text": "u \\leq v"
},
{
"math_id": 9,
"text": "\\gcd(u, v) = \\gcd(v, u)"
},
{
"math_id": 10,
"text": "\\gcd(0, v) = v"
},
{
"math_id": 11,
"text": "\\gcd(2u, v) = \\gcd(u, v)"
},
{
"math_id": 12,
"text": "v = 0"
},
{
"math_id": 13,
"text": "\\gcd(u, v) = \\gcd(\\pm{}u, \\pm{}v)"
},
{
"math_id": 14,
"text": "O(n)"
},
{
"math_id": 15,
"text": "n"
},
{
"math_id": 16,
"text": "O(1)"
},
{
"math_id": 17,
"text": "\\log_{2}(\\max(u, v))"
},
{
"math_id": 18,
"text": "O(n^2)"
},
{
"math_id": 19,
"text": "\nO\\left(\\frac{n^2}{\\log_2 n}\\right)\n"
},
{
"math_id": 20,
"text": "a"
},
{
"math_id": 21,
"text": "b"
},
{
"math_id": 22,
"text": "a\\cdot{}u + b\\cdot{}v = \\gcd(u, v)"
},
{
"math_id": 23,
"text": "O(M(n) \\log n)"
},
{
"math_id": 24,
"text": "M(n)"
}
] | https://en.wikipedia.org/wiki?curid=985410 |
9854301 | LBOZ | Coefficient used in spectrophotometry
LBOZ is a coefficient used in spectrophotometry to estimate selectivity (amount of overlapping of spectra) in a quantitative manner. It is named after its creators: Lorber, Bergmann, von Oepen, and Zinn.
Definition.
Let formula_0 be a matrix of the spectra (absorbances), where the "k" rows correspond to the components in the mixture and the "n" columns correspond to the sequence of wavelengths. The LBOZ criterion for the "k"th component is calculated from the following formula:
formula_1
where formula_2 denotes the pseudoinverse of the matrix and formula_3 denotes the Euclidean length of a vector.
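The following Rust sketch illustrates the formula formula_1 for a made-up two-component, four-wavelength spectra matrix (the numbers are invented for illustration). Because the toy matrix has full row rank, its pseudoinverse is computed via the normal equations X⁺ = Xᵀ(XXᵀ)⁻¹; a real implementation would typically use an SVD-based pseudoinverse.
fn main() {
    // Spectra matrix X: rows = components, columns = wavelengths (toy numbers).
    let x: [[f64; 4]; 2] = [
        [1.0, 0.8, 0.3, 0.1],
        [0.1, 0.4, 0.9, 1.0],
    ];

    // G = X X^T (2x2 Gram matrix)
    let mut g = [[0.0f64; 2]; 2];
    for i in 0..2 {
        for j in 0..2 {
            g[i][j] = (0..4).map(|t| x[i][t] * x[j][t]).sum();
        }
    }

    // Explicit inverse of the 2x2 matrix G
    let det = g[0][0] * g[1][1] - g[0][1] * g[1][0];
    let g_inv = [
        [g[1][1] / det, -g[0][1] / det],
        [-g[1][0] / det, g[0][0] / det],
    ];

    // X^+ = X^T G^{-1} is 4x2; column k of X^+ corresponds to component k.
    let mut x_pinv = [[0.0f64; 2]; 4];
    for t in 0..4 {
        for k in 0..2 {
            x_pinv[t][k] = (0..2).map(|j| x[j][t] * g_inv[j][k]).sum();
        }
    }

    // xi_k = 1 / (||row k of X|| * ||column k of X^+||); each value lies in (0, 1].
    for k in 0..2 {
        let row_norm: f64 = x[k].iter().map(|v| v * v).sum::<f64>().sqrt();
        let col_norm: f64 = (0..4).map(|t| x_pinv[t][k] * x_pinv[t][k]).sum::<f64>().sqrt();
        let xi = 1.0 / (row_norm * col_norm);
        println!("xi_{} = {:.3}", k + 1, xi);
    }
}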
Properties.
The image above shows synthetic Gaussian spectra. The LBOZ criteria are: 0.561 for the black compound, 0.402 for the red compound, 0.899 for the green and 0.549 for the blue. LBOZ always lies in the range <0,1> and has a clear mathematical meaning – it represents the fraction of a compound's spectral signal that is not overlapped by the others. Hence, the uncertainty of a compound's quantity increases by a factor of formula_4 in the presence of the other compounds. In this case, the highest uncertainty is expected during determination of the red compound – theoretically 2.38 times greater than during determination of this compound alone. | [
{
"math_id": 0,
"text": "\\mathbf{X}"
},
{
"math_id": 1,
"text": "\\xi_k = \\frac{1}{\\|\\mathbf{X}_{k-row}\\| \\|\\mathbf{X}^{+}_{k-col}\\|}"
},
{
"math_id": 2,
"text": "\\mathbf{X}^{+}"
},
{
"math_id": 3,
"text": "\\| \\cdots \\|"
},
{
"math_id": 4,
"text": "1/\\xi"
}
] | https://en.wikipedia.org/wiki?curid=9854301 |
985583 | Sothic cycle | 1460 year calendar cycle of ancient Egypt
The Sothic cycle or Canicular period is a period of 1,461 Egyptian civil years of 365 days each or 1,460 Julian years averaging <templatestyles src="Fraction/styles.css" />365+1⁄4 days each. During a Sothic cycle, the 365-day year loses enough time that the start of its year once again coincides with the heliacal rising of the star Sirius ( or , 'Triangle'; Greek: , ) on 19 July in the Julian calendar. It is an important aspect of Egyptology, particularly with regard to reconstructions of the Egyptian calendar and its history. Astronomical records of this displacement may have been responsible for the later establishment of the more accurate Julian and Alexandrian calendars.
Mechanics.
The ancient Egyptian civil year, its holidays, and religious records reflect its apparent establishment at a point when the return of the bright star Sirius to the night sky was considered to herald the annual flooding of the Nile. However, because the civil calendar was exactly 365 days long and did not incorporate leap years until 22 BCE, its months "wandered" backwards through the solar year at the rate of about one day in every four years. This almost exactly corresponded to its displacement against the Sothic year as well. (The Sothic year is about a minute longer than a Julian year.) The sidereal year of 365.25636 days is valid only for stars on the ecliptic (the apparent path of the Sun across the sky) and having no proper motion, whereas Sirius's displacement ~40° below the ecliptic, its proper motion, and the wobbling of the celestial equator cause the period between its heliacal risings to be almost exactly 365.25 days long instead. This steady loss of one relative day every four years over the course of the 365-day calendar meant that the "wandering" day would return to its original place relative to the solar and Sothic year after precisely 1461 Egyptian civil years or 1460 Julian years.
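The arithmetic of the drift can be checked directly. The following is a minimal Rust sketch using the idealized year lengths quoted above (365 and 365.25 days); it is illustrative only.
fn main() {
    // Idealized year lengths: a civil year of exactly 365 days and a Sothic year of 365.25 days.
    let civil = 365.0_f64;
    let sothic = 365.25_f64;

    // The calendar slips by the difference each civil year, i.e. one day every four years.
    let drift_per_civil_year = sothic - civil; // 0.25 day
    // Realignment occurs once the accumulated slip amounts to a whole Sothic year.
    let cycle_in_civil_years = sothic / drift_per_civil_year; // 1461
    println!("cycle length: {} civil years", cycle_in_civil_years);

    // 1461 civil years and 1460 Julian years span the same number of days (533,265).
    assert_eq!(1461_u64 * 365, (1460.0_f64 * 365.25) as u64);
}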
Discovery.
This calendar cycle was well known in antiquity. Censorinus described it in his book "De Die Natali", in CE 238, and stated that the cycle had renewed 100 years earlier on the 12th of August. In the ninth century, Syncellus epitomized the Sothic Cycle in the "Old Egyptian Chronicle." Isaac Cullimore, an early Egyptologist and member of the Royal Society, published a discourse on it in 1833 in which he was the first to suggest that Censorinus had fudged the terminus date, and that it was more likely to fall in CE 136. He also computed the likely date of its invention as being around 1600 BCE.
In 1904, seven decades after Cullimore, Eduard Meyer carefully combed known Egyptian inscriptions and written materials to find any mention of the calendar dates when Sirius rose at dawn. He found six of them, on which the dates of much of conventional Egyptian chronology are based. A heliacal rise of Sirius was recorded by Censorinus as having happened on the Egyptian New Year's Day between 139 CE and 142 CE.
The record itself actually refers to 21 July 140 CE, but astronomical calculation definitely dates the heliacal rising at 20 July 139 CE, Julian. This correlates the Egyptian calendar to the Julian calendar. A Julian leap day occurs in 140 CE, and so the new year on 1 Thoth is 20 July in 139 CE but it is 19 July for 140–142 CE. Thus Meyer was able to compare the Egyptian civil calendar date on which Sirius was observed rising heliacally to the Julian calendar date on which Sirius "ought" to have risen, count the number of intercalary days needed, and determine how many years were between the beginning of a cycle and the observation.
To calculate a date astronomically, one also needs to know the place of observation, since the latitude of the observation changes the day when the heliacal rising of Sirius can be seen, and mislocating an observation can potentially throw off the resulting chronology by several decades. Official observations are known to have been made at Heliopolis (or Memphis, near Cairo), Thebes, and Elephantine (near Aswan), with the rising of Sirius observed at Cairo about 8 days after it is seen at Aswan.
Meyer concluded that the Egyptian civil calendar was created in 4241 BCE. Recent scholarship, however, has discredited that claim. Most scholars either move the observation upon which he based this forward by one cycle of Sirius, to 19 July 2781 BCE, or reject the assumption that the document on which Meyer relied indicates a rise of Sirius at all.
Chronological interpretation.
Three specific observations of the heliacal rise of Sirius are extremely important for Egyptian chronology. The first is an ivory tablet from the reign of Djer which supposedly indicates the beginning of a Sothic cycle, the rising of Sirius on the same day as the new year. If this does indicate the beginning of a Sothic cycle, it must date to about 17 July 2773 BCE. However, this date is too late for Djer's reign, so many scholars believe that it indicates a correlation between the rising of Sirius and the Egyptian "lunar" calendar, instead of the solar Egyptian civil calendar, which would render the tablet essentially devoid of chronological value.
Gautschy "et al". (2017) claimed that a newly discovered Sothis date from the Old Kingdom and a subsequent astronomic study confirms the Sothic cycle model.
The second observation is clearly a reference to a heliacal rising, and is believed to date to the seventh year of Senusret III. This observation was almost certainly made at Itj-Tawy, the Twelfth Dynasty capital, which would date the Twelfth Dynasty from 1963 to 1786 BCE. The Ramses or Turin Papyrus Canon says 213 years (1991–1778 BCE); Parker reduces it to 206 years (1991–1785 BCE), based on 17 July 1872 BCE as the Sothic date (120th year of the 12th dynasty, a drift of 30 leap days). Prior to Parker's investigation of lunar dates, the 12th dynasty was placed at 213 years, 2007–1794 BCE, interpreting the date 21 July 1888 BCE as the 120th year, and then at 2003–1790 BCE, interpreting the date 20 July 1884 BCE as the 120th year.
The third observation was in the reign of Amenhotep I, and, assuming it was made in Thebes, dates his reign between 1525 and 1504 BCE. If made in Memphis, Heliopolis, or some other Delta site instead, as a minority of scholars still argue, the entire chronology of the 18th Dynasty needs to be extended some 20 years.
Observational procedure and precession.
The Sothic cycle is a specific example of two cycles of differing length interacting to cycle together, here called a tertiary cycle. This is mathematically defined by the formula formula_0, making the tertiary cycle half the harmonic mean of the two periods. In the case of the Sothic cycle the two cycles are the Egyptian civil year and the Sothic year.
The Sothic year is the length of time for the star Sirius to visually return to the same position in relation to the sun. Star years measured in this way vary due to axial precession, the movement of the Earth's axis in relation to the sun.
The length of time for a star to make a yearly path can be marked when it rises to a defined altitude above a local horizon at the time of sunrise. This altitude does not have to be the altitude of first possible visibility, nor the exact position observed. Throughout the year the star will rise to whatever altitude was chosen near the horizon approximately four minutes earlier each successive sunrise. Eventually the star will return to the same relative location at sunrise, regardless of the altitude chosen. This length of time can be called an "observational year". Stars that reside close to the ecliptic or the ecliptic meridian will – on average – exhibit observational years close to the sidereal year of 365.2564 days. The ecliptic and the meridian cut the sky into four quadrants. The axis of the Earth wobbles slowly, moving the observer and changing the observation of the event. If the axis swings the observer closer to the event, its observational year will be shortened. Likewise, the observational year can be lengthened when the axis swings away from the observer. This depends upon which quadrant of the sky the phenomenon is observed in.
The Sothic year is remarkable because its average duration happened to have been nearly exactly 365.25 days, in the early 4th millennium BCE before the unification of Egypt. The slow rate of change from this value is also of note. If observations and records could have been maintained during predynastic times the Sothic rise would optimally return to the same calendar day after 1461 calendar years. This value would drop to about 1456 calendar years by the Middle Kingdom. The value 1461 could also be maintained if the date of the Sothic rise were artificially maintained by moving the feast in celebration of this event one day every fourth year instead of rarely adjusting it according to observation.
It has been noticed, and the Sothic cycle confirms, that Sirius does not move retrograde across the sky as other stars do, a phenomenon widely known as the precession of the equinox:
Sirius remains about the same distance from the equinoxes – and so from the solstices – throughout these many centuries, despite precession. — J.Z. Buchwald (2003)
For the same reason, the heliacal rising or zenith of Sirius does not slip through the calendar at the precession rate of about one day per 71.6 years as other stars do, but much slower. This remarkable stability within the solar year may be one reason that the Egyptians used it as a basis for their calendar. The coincidence of a heliacal rising of Sirius and the New Year reported by Censorinus occurred about 20 July, that is a month "after" the summer solstice.
Problems and criticisms.
Determining the date of a heliacal rise of Sirius has been shown to be difficult, especially considering the need to know the exact latitude of the observation. Another problem is that because the Egyptian calendar loses one day every four years, a heliacal rise will take place on the same day for four years in a row, and any observation of that rise can date to any of those four years, making the observation imprecise.
A number of criticisms have been levelled against the reliability of dating by the Sothic cycle. Some are serious enough to be considered problematic. Firstly, none of the astronomical observations have dates that mention the specific pharaoh in whose reign they were observed, forcing Egyptologists to supply that information on the basis of a certain amount of informed speculation. Secondly, there is no information regarding the nature of the civil calendar throughout the course of Egyptian history, forcing Egyptologists to assume that it existed unchanged for thousands of years; the Egyptians would only have needed to carry out one calendar reform in a few thousand years for these calculations to be worthless. Other criticisms are not considered as problematic, e.g. that there is no extant mention of the Sothic cycle in ancient Egyptian writing, which may simply be a result either of it being so obvious to Egyptians that it did not merit mention, or of relevant texts being destroyed over time or still awaiting discovery.
Marc Van de Mieroop, in his discussion of chronology and dating, does not mention the Sothic cycle at all, and asserts that the bulk of historians nowadays would consider that it is not possible to put forward exact dates earlier than the 8th century BCE.
Some have recently claimed that the Theran eruption marks the beginning of the Eighteenth Dynasty, due to Theran ash and pumice discovered in the ruins of Avaris, in layers that mark the end of the Hyksos era. Because the evidence of dendrochronologists indicates the eruption took place in 1626 BCE, this has been taken to indicate that dating by the Sothic cycle is off by 50–80 years at the outset of the 18th Dynasty. Claims that the Thera eruption is described on the "Tempest Stele" of Ahmose I have been disputed by writers such as Peter James.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{1}{a} + \\frac{1}{b} = \\frac{1}{t}"
}
] | https://en.wikipedia.org/wiki?curid=985583 |
9856577 | Point-finite collection | Topological concept for collections of sets
In mathematics, a collection or family formula_0 of subsets of a topological space formula_1 is said to be point-finite if every point of formula_1 lies in only finitely many members of formula_2
A metacompact space is a topological space in which every open cover admits a point-finite open refinement. Every locally finite collection of subsets of a topological space is also point-finite.
A topological space in which every open cover admits a locally finite open refinement is called a paracompact space. Every paracompact space is therefore metacompact.
References.
<templatestyles src="Reflist/styles.css" />
"This article incorporates material from point finite on PlanetMath, which is licensed under the ." | [
{
"math_id": 0,
"text": "\\mathcal{U}"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "\\mathcal{U}."
}
] | https://en.wikipedia.org/wiki?curid=9856577 |
9857613 | Power delay profile | The power delay profile (PDP) gives the intensity of a signal received through a multipath channel as a function of time delay. The time delay is the difference in travel time between multipath arrivals. The abscissa is in units of time and the ordinate is usually in decibels.
It is easily measured empirically and can be used to extract certain channel's parameters such as the delay spread.
For small-scale channel modeling, the power delay profile of the channel is found by taking the spatial average of the channel's baseband impulse response, i.e. formula_0, over a local area.
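As an illustration of how channel parameters such as the delay spread are extracted from a measured profile, the following Rust sketch computes the mean excess delay and RMS delay spread of a toy power delay profile; the delay/power pairs are invented for illustration.
fn main() {
    // Toy power delay profile: (excess delay in microseconds, relative power in dB).
    let pdp_db: [(f64, f64); 4] = [(0.0, 0.0), (1.0, -3.0), (2.0, -9.0), (5.0, -20.0)];

    // Convert dB to linear power and form power-weighted moments of the delay.
    let lin: Vec<(f64, f64)> = pdp_db
        .iter()
        .map(|&(tau, p_db)| (tau, 10.0_f64.powf(p_db / 10.0)))
        .collect();
    let p_total: f64 = lin.iter().map(|&(_, p)| p).sum();
    let mean_delay: f64 = lin.iter().map(|&(tau, p)| tau * p).sum::<f64>() / p_total;
    let second_moment: f64 = lin.iter().map(|&(tau, p)| tau * tau * p).sum::<f64>() / p_total;

    // RMS delay spread: power-weighted standard deviation of the delay.
    let rms_delay_spread = (second_moment - mean_delay * mean_delay).sqrt();
    println!("mean excess delay = {:.3} us", mean_delay);
    println!("rms delay spread  = {:.3} us", rms_delay_spread);
}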
References.
<templatestyles src="Reflist/styles.css" />
{
"math_id": 0,
"text": "|h_{b}(t, \\tau)|^{2}"
}
] | https://en.wikipedia.org/wiki?curid=9857613 |
985897 | Derived category | Homological construction
In mathematics, the derived category "D"("A") of an abelian category "A" is a construction of homological algebra introduced to refine and in a certain sense to simplify the theory of derived functors defined on "A". The construction proceeds on the basis that the objects of "D"("A") should be chain complexes in "A", with two such chain complexes considered isomorphic when there is a chain map that induces an isomorphism on the level of homology of the chain complexes. Derived functors can then be defined for chain complexes, refining the concept of hypercohomology. The definitions lead to a significant simplification of formulas otherwise described (not completely faithfully) by complicated spectral sequences.
The development of the derived category, by Alexander Grothendieck and his student Jean-Louis Verdier shortly after 1960, now appears as one terminal point in the explosive development of homological algebra in the 1950s, a decade in which it had made remarkable strides. The basic theory of Verdier was written down in his dissertation, published finally in 1996 in Astérisque (a summary had earlier appeared in SGA 4½). The axiomatics required an innovation, the concept of triangulated category, and the construction is based on localization of a category, a generalization of localization of a ring. The original impulse to develop the "derived" formalism came from the need to find a suitable formulation of Grothendieck's coherent duality theory. Derived categories have since become indispensable also outside of algebraic geometry, for example in the formulation of the theory of D-modules and microlocal analysis. Recently derived categories have also become important in areas nearer to physics, such as D-branes and mirror symmetry.
Unbounded derived categories were introduced by Spaltenstein in 1988.
Motivations.
In coherent sheaf theory, pushing to the limit of what could be done with Serre duality without the assumption of a non-singular scheme, the need to take a whole complex of sheaves in place of a single "dualizing sheaf" became apparent. In fact the Cohen–Macaulay ring condition, a weakening of non-singularity, corresponds to the existence of a single dualizing sheaf; and this is far from the general case. From the top-down intellectual position, always assumed by Grothendieck, this signified a need to reformulate. With it came the idea that the 'real' tensor product and "Hom" functors would be those existing on the derived level; with respect to those, Tor and Ext become more like computational devices.
Despite the level of abstraction, derived categories became accepted over the following decades, especially as a convenient setting for sheaf cohomology. Perhaps the biggest advance was the formulation of the Riemann–Hilbert correspondence in dimensions greater than 1 in derived terms, around 1980. The Sato school adopted the language of derived categories, and the subsequent history of D-modules was of a theory expressed in those terms.
A parallel development was the category of spectra in homotopy theory. The homotopy category of spectra and the derived category of a ring are both examples of triangulated categories.
Definition.
Let formula_0 be an abelian category. (Examples include the category of modules over a ring and the category of sheaves of abelian groups on a topological space.) The derived category formula_1 is defined by a universal property with respect to the category formula_2 of cochain complexes with terms in formula_0. The objects of formula_2 are of the form
formula_3
where each "X""i" is an object of formula_0 and each of the composites formula_4 is zero. The "i"th cohomology group of the complex is formula_5. If formula_6 and formula_7 are two objects in this category, then a morphism formula_8 is defined to be a family of morphisms formula_9 such that formula_10. Such a morphism induces morphisms on cohomology groups formula_11, and formula_12 is called a quasi-isomorphism if each of these morphisms is an isomorphism in formula_0.
The universal property of the derived category is that it is a localization of the category of complexes with respect to quasi-isomorphisms. Specifically, the derived category formula_1 is a category, together with a functor formula_13, having the following universal property: Suppose formula_14 is another category (not necessarily abelian) and formula_15 is a functor such that, whenever formula_12 is a quasi-isomorphism in formula_2, its image formula_16 is an isomorphism in formula_14; then formula_17 factors through formula_18. Any two categories having this universal property are equivalent.
Relation to the homotopy category.
If formula_19 and formula_20 are two morphisms formula_21 in formula_2, then a chain homotopy or simply homotopy formula_22 is a collection of morphisms formula_23 such that formula_24 for every "i". It is straightforward to show that two homotopic morphisms induce identical morphisms on cohomology groups. We say that formula_25 is a chain homotopy equivalence if there exists formula_26 such that formula_27 and formula_28 are chain homotopic to the identity morphisms on formula_29 and formula_30, respectively. The homotopy category of cochain complexes formula_31 is the category with the same objects as formula_2 but whose morphisms are equivalence classes of morphisms of complexes with respect to the relation of chain homotopy. There is a natural functor formula_32 which is the identity on objects and which sends each morphism to its chain homotopy equivalence class. Since every chain homotopy equivalence is a quasi-isomorphism, formula_18 factors through this functor. Consequently formula_1 can be equally well viewed as a localization of the homotopy category.
From the point of view of model categories, the derived category "D"("A") is the true 'homotopy category' of the category of complexes, whereas "K"("A") might be called the 'naive homotopy category'.
Constructing the derived category.
There are several possible constructions of the derived category. When formula_0 is a small category, then there is a direct construction of the derived category by formally adjoining inverses of quasi-isomorphisms. This is an instance of the general construction of a category by generators and relations.
When formula_0 is a large category, this construction does not work for set theoretic reasons. This construction builds morphisms as equivalence classes of paths. If formula_0 has a proper class of objects, all of which are isomorphic, then there is a proper class of paths between any two of these objects. The generators and relations construction therefore only guarantees that the morphisms between two objects form a proper class. However, the morphisms between two objects in a category are usually required to be sets, and so this construction fails to produce an actual category.
Even when formula_0 is small, however, the construction by generators and relations generally results in a category whose structure is opaque, where morphisms are arbitrarily long paths subject to a mysterious equivalence relation. For this reason, it is conventional to construct the derived category more concretely even when set theory is not at issue.
These other constructions go through the homotopy category. The collection of quasi-isomorphisms in formula_31 forms a multiplicative system. This is a collection of conditions that allow complicated paths to be rewritten as simpler ones. The Gabriel–Zisman theorem implies that localization at a multiplicative system has a simple description in terms of roofs. A morphism formula_21 in formula_1 may be described as a pair formula_33, where for some complex formula_34, formula_35 is a quasi-isomorphism and formula_36 is a chain homotopy equivalence class of morphisms. Conceptually, this represents formula_37. Two roofs are equivalent if they have a common overroof.
Replacing chains of morphisms with roofs also enables the resolution of the set-theoretic issues involved in derived categories of large categories. Fix a complex formula_29 and consider the category formula_38 whose objects are quasi-isomorphisms in formula_31 with codomain formula_29 and whose morphisms are commutative diagrams. Equivalently, this is the category of objects over formula_29 whose structure maps are quasi-isomorphisms. Then the multiplicative system condition implies that the morphisms in formula_1 from formula_29 to formula_30 are
formula_39
assuming that this colimit is in fact a set. While formula_38 is potentially a large category, in some cases it is controlled by a small category. This is the case, for example, if formula_0 is a Grothendieck abelian category (meaning that it satisfies AB5 and has a set of generators), with the essential point being that only objects of bounded cardinality are relevant. In these cases, the limit may be calculated over a small subcategory, and this ensures that the result is a set. Then formula_1 may be defined to have these sets as its formula_40 sets.
There is a different approach based on replacing morphisms in the derived category by morphisms in the homotopy category. A morphism in the derived category with codomain being a bounded below complex of injective objects is the same as a morphism to this complex in the homotopy category; this follows from termwise injectivity. By replacing termwise injectivity by a stronger condition, one gets a similar property that applies even to unbounded complexes. A complex formula_41 is "K"-injective if, for every acyclic complex formula_29, we have formula_42. A straightforward consequence of this is that, for every complex formula_29, morphisms formula_43 in formula_31 are the same as such morphisms in formula_1. A theorem of Serpé, generalizing work of Grothendieck and of Spaltenstein, asserts that in a Grothendieck abelian category, every complex is quasi-isomorphic to a K-injective complex with injective terms, and moreover, this is functorial. In particular, we may define morphisms in the derived category by passing to K-injective resolutions and computing morphisms in the homotopy category. The functoriality of Serpé's construction ensures that composition of morphisms is well-defined. Like the construction using roofs, this construction also ensures suitable set theoretic properties for the derived category, this time because these properties are already satisfied by the homotopy category.
Derived Hom-sets.
As noted before, in the derived category the hom sets are expressed through roofs, or valleys formula_44, where formula_45 is a quasi-isomorphism. To get a better picture of what elements look like, consider an exact sequence
formula_46
We can use this to construct a morphism formula_47 by truncating the complex above, shifting it, and using the obvious morphisms above. In particular, we have the picture
formula_48
where the bottom complex has formula_49 concentrated in degree formula_50, the only non-trivial upward arrow is the equality morphism, and the only non-trivial downward arrow is formula_51. This diagram of complexes defines a morphism
formula_52
in the derived category. One application of this observation is the construction of the Atiyah-class.
Remarks.
For certain purposes (see below) one uses "bounded-below" (formula_53 for formula_54), "bounded-above" (formula_53 for formula_55) or "bounded" (formula_53 for formula_56) complexes instead of unbounded ones. The corresponding derived categories are usually denoted "D+(A)", "D−(A)" and "Db(A)", respectively.
If one adopts the classical point of view on categories, that there is a set of morphisms from one object to another (not just a class), then one has to give an additional argument to prove this. If, for example, the abelian category "A" is small, i.e. has only a set of objects, then this issue will be no problem. Also, if "A" is a Grothendieck abelian category, then the derived category "D"("A") is equivalent to a full subcategory of the homotopy category "K"("A"), and hence has only a set of morphisms from one object to another. Grothendieck abelian categories include the category of modules over a ring, the category of sheaves of abelian groups on a topological space, and many other examples.
Composition of morphisms, i.e. roofs, in the derived category is accomplished by finding a third roof on top of the two roofs to be composed. It may be checked that this is possible and gives a well-defined, associative composition.
Since "K(A)" is a triangulated category, its localization "D(A)" is also triangulated. For an integer "n" and a complex "X", define the complex "X"["n"] to be "X" shifted down by "n", so that
formula_57
with differential
formula_58
By definition, a distinguished triangle in "D(A)" is a triangle that is isomorphic in "D(A)" to the triangle "X" → "Y" → Cone("f") → "X"[1] for some map of complexes "f": "X" → "Y". Here Cone("f") denotes the mapping cone of "f". In particular, for a short exact sequence
formula_59
in "A", the triangle "X" → "Y" → "Z" → "X"[1] is distinguished in "D(A)". Verdier explained that the definition of the shift "X"[1] is forced by requiring "X"[1] to be the cone of the morphism "X" → 0.
By viewing an object of "A" as a complex concentrated in degree zero, the derived category "D(A)" contains "A" as a full subcategory. Morphisms in the derived category include information about all Ext groups: for any objects "X" and "Y" in "A" and any integer "j",
formula_60
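As a concrete illustration of this identity (a standard textbook computation, not taken from the references here), take "A" to be the category of abelian groups, "X" = Z/"n" and "Y" = Z; resolving Z/"n" by the projective resolution 0 → Z → Z → Z/"n" → 0, whose first map is multiplication by "n", and applying Hom(–, Z) gives, in LaTeX notation:
% Morphisms from Z/n to shifts of Z in the derived category are the Ext groups:
\[
  \operatorname{Hom}_{D(A)}(\mathbb{Z}/n, \mathbb{Z}) \cong \operatorname{Ext}^0(\mathbb{Z}/n, \mathbb{Z}) = \operatorname{Hom}(\mathbb{Z}/n, \mathbb{Z}) = 0,
\]
\[
  \operatorname{Hom}_{D(A)}(\mathbb{Z}/n, \mathbb{Z}[1]) \cong \operatorname{Ext}^1(\mathbb{Z}/n, \mathbb{Z}) \cong \mathbb{Z}/n .
\]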
Projective and injective resolutions.
One can easily show that a homotopy equivalence is a quasi-isomorphism, so the second step in the above construction may be omitted. The definition is usually given in this way because it reveals the existence of a canonical functor
formula_61
In concrete situations, it is very difficult or impossible to handle morphisms in the derived category directly. Therefore, one looks for a more manageable category which is equivalent to the derived category. Classically, there are two (dual) approaches to this: projective and injective resolutions. In both cases, the restriction of the above canonical functor to an appropriate subcategory will be an equivalence of categories.
In the following we will describe the role of injective resolutions in the context of the derived category, which is the basis for defining right derived functors, which in turn have important applications in cohomology of sheaves on topological spaces or more advanced cohomology theories like étale cohomology or group cohomology.
In order to apply this technique, one has to assume that the abelian category in question has "enough injectives", which means that every object "X" of the category admits a monomorphism to an injective object "I". (Neither the map nor the injective object has to be uniquely specified.) For example, every Grothendieck abelian category has enough injectives. Embedding "X" into some injective object "I"0, the cokernel of this map into some injective "I"1 etc., one constructs an "injective resolution" of "X", i.e. an exact (in general infinite) sequence
formula_62
where the "I"* are injective objects. This idea generalizes to give resolutions of bounded-below complexes "X", i.e. "Xn = 0" for sufficiently small "n". As remarked above, injective resolutions are not uniquely defined, but it is a fact that any two resolutions are homotopy equivalent to each other, i.e. isomorphic in the homotopy category. Moreover, morphisms of complexes extend uniquely to a morphism of two given injective resolutions.
This is the point where the homotopy category comes into play again: mapping an object "X" of "A" to (any) injective resolution "I"* of "A" extends to a functor
formula_63
from the bounded below derived category to the bounded below homotopy category of complexes whose terms are injective objects in "A".
It is not difficult to see that this functor is actually inverse to the restriction of the canonical localization functor mentioned in the beginning. In other words, morphisms Hom("X","Y") in the derived category may be computed by resolving both "X" and "Y" and computing the morphisms in the homotopy category, which is at least theoretically easier. In fact, it is enough to resolve "Y": for any complex "X" and any bounded below complex "Y" of injectives,
formula_64
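As a concrete illustration of this principle (a standard computation, not part of the original text): to compute morphisms out of "X" = Z/2 into "Y" = Z in the category of abelian groups, one resolves Z by the injective resolution 0 → Z → Q → Q/Z → 0, so that "I"* = [Q → Q/Z] is a bounded below complex of injectives in degrees 0 and 1, and

```latex
\operatorname{Hom}(\mathbf{Z}/2,\mathbf{Q}) = 0, \qquad
\operatorname{Hom}(\mathbf{Z}/2,\mathbf{Q}/\mathbf{Z}) \cong \mathbf{Z}/2,
```

so that Hom in the derived category gives Hom(Z/2, Z) = 0 in degree 0 and Ext^1(Z/2, Z) ≅ Z/2 in degree 1, in agreement with the classical computation.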
Dually, assuming that "A" has "enough projectives", i.e. for every object "X" there is an epimorphism from a projective object "P" to "X", one can use projective resolutions instead of injective ones.
In 1988 Spaltenstein defined an unbounded derived category, which immediately proved useful in the study of singular spaces; see, for example, the book by Kashiwara and Schapira (Categories and Sheaves) on various applications of the unbounded derived category. Spaltenstein used so-called "K-injective" and "K-projective" resolutions.
Keller (1994) and May (2006) describe the derived category of modules over DG-algebras. Keller also gives applications to Koszul duality, Lie algebra cohomology, and Hochschild homology.
More generally, carefully adapting the definitions, it is possible to define the derived category of an exact category.
The relation to derived functors.
The derived category is a natural framework to define and study derived functors. In the following, let "F": "A" → "B" be a functor of abelian categories. There are two dual concepts: right derived functors, which arise from left exact functors and are computed via injective resolutions, and left derived functors, which arise from right exact functors and are computed via projective resolutions.
In the following we will describe right derived functors. So, assume that "F" is left exact. Typical examples are "F": "A" → Ab given by "X" ↦ Hom("X", "A") or "X" ↦ Hom("A", "X") for some fixed object "A", or the global sections functor on sheaves or the direct image functor. Their right derived functors are Ext"n"(–,"A"), Ext"n"("A",–), "H""n"("X", "F") or "R""n""f"∗ ("F"), respectively.
The derived category allows us to encapsulate all derived functors "RnF" in one functor, namely the so-called "total derived functor" "RF": "D"+("A") → "D"+("B"). It is the following composition: "D"+("A") ≅ "K"+(Inj("A")) → "K"+("B") → "D"+("B"), where the first equivalence of categories is described above. The classical derived functors are related to the total one via "RnF"("X") = "Hn"("RF"("X")). One might say that the "RnF" forget the chain complex and keep only the cohomologies, whereas "RF" does keep track of the complexes.
Derived categories are, in a sense, the "right" place to study these functors. For example, the Grothendieck spectral sequence of a composition of two functors
formula_65
such that "F" maps injective objects in "A" to "G"-acyclics (i.e. "R""i""G"("F"("I")) = 0 for all "i" > 0 and injective "I"), is an expression of the following identity of total derived functors
"R"("G"∘"F") ≅ "RG"∘"RF".
J.-L. Verdier showed how derived functors associated with an abelian category "A" can be viewed as Kan extensions along embeddings of "A" into suitable derived categories [Mac Lane].
Derived equivalence.
It may happen that two abelian categories "A" and "B" are not equivalent, but their derived categories D("A") and D("B") are. Often this is an interesting relation between "A" and "B". Such equivalences are related to the theory of t-structures in triangulated categories. Here are some examples.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
Four textbooks that discuss derived categories are: | [
{
"math_id": 0,
"text": "\\mathcal{A}"
},
{
"math_id": 1,
"text": "D(\\mathcal{A})"
},
{
"math_id": 2,
"text": "\\operatorname{Kom}(\\mathcal{A})"
},
{
"math_id": 3,
"text": "\\cdots \\to \nX^{-1} \\xrightarrow{d^{-1}}\nX^0 \\xrightarrow{d^0}\nX^1 \\xrightarrow{d^1}\nX^2 \\to \\cdots,"
},
{
"math_id": 4,
"text": "d^{i+1} \\circ d^i"
},
{
"math_id": 5,
"text": "H^i(X^\\bullet) = \\operatorname{ker} d^i / \\operatorname{im} d^{i-1}"
},
{
"math_id": 6,
"text": "(X^\\bullet, d_X^\\bullet)"
},
{
"math_id": 7,
"text": "(Y^\\bullet, d_Y^\\bullet)"
},
{
"math_id": 8,
"text": "f^\\bullet \\colon (X^\\bullet, d_X^\\bullet) \\to (Y^\\bullet, d_Y^\\bullet)"
},
{
"math_id": 9,
"text": "f_i \\colon X^i \\to Y^i"
},
{
"math_id": 10,
"text": "f_{i+1} \\circ d_X^i = d_Y^i \\circ f_i"
},
{
"math_id": 11,
"text": "H^i(f^\\bullet) \\colon H^i(X^\\bullet) \\to H^i(Y^\\bullet)"
},
{
"math_id": 12,
"text": "f^\\bullet"
},
{
"math_id": 13,
"text": "Q \\colon \\operatorname{Kom}(\\mathcal{A}) \\to D(\\mathcal{A})"
},
{
"math_id": 14,
"text": "\\mathcal{C}"
},
{
"math_id": 15,
"text": "F \\colon \\operatorname{Kom}(\\mathcal{A}) \\to \\mathcal{C}"
},
{
"math_id": 16,
"text": "F(f^\\bullet)"
},
{
"math_id": 17,
"text": "F"
},
{
"math_id": 18,
"text": "Q"
},
{
"math_id": 19,
"text": "f"
},
{
"math_id": 20,
"text": "g"
},
{
"math_id": 21,
"text": "X^\\bullet \\to Y^\\bullet"
},
{
"math_id": 22,
"text": "h \\colon f \\to g"
},
{
"math_id": 23,
"text": "h^i \\colon X^i \\to Y^{i-1}"
},
{
"math_id": 24,
"text": "f^i - g^i = d_Y^{i-1} \\circ h^i + h^{i+1} \\circ d_X^i"
},
{
"math_id": 25,
"text": "f \\colon X^\\bullet \\to Y^\\bullet"
},
{
"math_id": 26,
"text": "g \\colon Y^\\bullet \\to X^\\bullet"
},
{
"math_id": 27,
"text": "g \\circ f"
},
{
"math_id": 28,
"text": "f \\circ g"
},
{
"math_id": 29,
"text": "X^\\bullet"
},
{
"math_id": 30,
"text": "Y^\\bullet"
},
{
"math_id": 31,
"text": "K(\\mathcal{A})"
},
{
"math_id": 32,
"text": "\\operatorname{Kom}(\\mathcal{A}) \\to K(\\mathcal{A})"
},
{
"math_id": 33,
"text": "(s, f)"
},
{
"math_id": 34,
"text": "Z^\\bullet"
},
{
"math_id": 35,
"text": "s \\colon Z^\\bullet \\to X^\\bullet"
},
{
"math_id": 36,
"text": "f \\colon Z^\\bullet \\to Y^\\bullet"
},
{
"math_id": 37,
"text": "f \\circ s^{-1}"
},
{
"math_id": 38,
"text": "I_{X^\\bullet}"
},
{
"math_id": 39,
"text": "\\varinjlim_{I_{X^\\bullet}} \\operatorname{Hom}_{K(\\mathcal{A})}((X')^\\bullet, Y^\\bullet),"
},
{
"math_id": 40,
"text": "\\operatorname{Hom}"
},
{
"math_id": 41,
"text": "I^\\bullet"
},
{
"math_id": 42,
"text": "\\operatorname{Hom}_{K(\\mathcal{A})}(X^\\bullet, I^\\bullet) = 0"
},
{
"math_id": 43,
"text": "X^\\bullet \\to I^\\bullet"
},
{
"math_id": 44,
"text": "X \\rightarrow Y' \\leftarrow Y"
},
{
"math_id": 45,
"text": "Y \\to Y'"
},
{
"math_id": 46,
"text": "\n0 \\to \\mathcal{E}_n \\overset{\\phi_{n,n-1}}{\\rightarrow} \\mathcal{E}_{n-1} \\overset{\\phi_{n-1,n-2}}{\\rightarrow} \\cdots \\overset{\\phi_{1,0}}{\\rightarrow} \\mathcal{E}_0 \\to 0\n"
},
{
"math_id": 47,
"text": "\\phi: \\mathcal{E}_0 \\to \\mathcal{E}_n[+(n-1)]"
},
{
"math_id": 48,
"text": "\n\\begin{matrix}\n0 &\\to& \\mathcal{E}_n &\\to & 0 & \\to & \\cdots & \\to & 0 & \\to & 0\\\\\n\\uparrow & & \\uparrow & & \\uparrow & & \\cdots & & \\uparrow & & \\uparrow \\\\\n0 &\\to& \\mathcal{E}_n& \\to &\\mathcal{E}_{n-1} & \\to & \\cdots & \\to &\\mathcal{E}_1 &\\to &0 \\\\\n\\downarrow& & \\downarrow & & \\downarrow & & \\cdots & & \\downarrow & & \\downarrow \\\\\n0 & \\to & 0 & \\to & 0 & \\to & \\cdots & \\to & \\mathcal{E}_0 &\\to& 0\n\\end{matrix}\n"
},
{
"math_id": 49,
"text": "\\mathcal{E}_0"
},
{
"math_id": 50,
"text": "0"
},
{
"math_id": 51,
"text": "\\phi_{1,0}:\\mathcal{E}_1 \\to \\mathcal{E}_0"
},
{
"math_id": 52,
"text": "\n\\phi \\in \\mathbf{RHom}(\\mathcal{E}_0, \\mathcal{E}_n[+(n-1)])\n"
},
{
"math_id": 53,
"text": "X^n = 0"
},
{
"math_id": 54,
"text": "n \\ll 0"
},
{
"math_id": 55,
"text": "n \\gg 0"
},
{
"math_id": 56,
"text": "|n| \\gg 0"
},
{
"math_id": 57,
"text": "X[n]^{i} = X^{n+i},"
},
{
"math_id": 58,
"text": "d_{X[n]} = (-1)^n d_X."
},
{
"math_id": 59,
"text": "0 \\rightarrow X \\rightarrow Y \\rightarrow Z \\rightarrow 0"
},
{
"math_id": 60,
"text": "\\text{Hom}_{D(\\mathcal{A})}(X,Y[j]) = \\text{Ext}^j_{\\mathcal{A}}(X,Y)."
},
{
"math_id": 61,
"text": "K(\\mathcal A) \\rightarrow D(\\mathcal A)."
},
{
"math_id": 62,
"text": "0 \\rightarrow X \\rightarrow I^0 \\rightarrow I^1 \\rightarrow \\cdots, \\, "
},
{
"math_id": 63,
"text": "D^+(\\mathcal A) \\rightarrow K^+(\\mathrm{Inj}(\\mathcal A))"
},
{
"math_id": 64,
"text": "\\mathrm{Hom}_{D(A)}(X, Y) = \\mathrm{Hom}_{K(A)}(X, Y)."
},
{
"math_id": 65,
"text": "\\mathcal A \\stackrel{F}{\\rightarrow} \\mathcal B \\stackrel{G}{\\rightarrow} \\mathcal C, \\,"
},
{
"math_id": 66,
"text": "\\mathrm{Coh}(\\mathbb{P}^1)"
}
] | https://en.wikipedia.org/wiki?curid=985897 |
985963 | Lambda-CDM model | Model of Big Bang cosmology
The Lambda-CDM, Lambda cold dark matter, or ΛCDM model is a mathematical model of the Big Bang theory with three major components: a cosmological constant, denoted by Λ (lambda), associated with dark energy; the postulated cold dark matter; and ordinary matter.
It is referred to as the "standard model" of Big Bang cosmology because it is the simplest model that provides a reasonably good account of: the existence and structure of the cosmic microwave background; the large-scale structure in the distribution of galaxies; the observed abundances of hydrogen (including deuterium), helium, and lithium; and the accelerating expansion of the universe observed in the light from distant galaxies and supernovae.
The model assumes that general relativity is the correct theory of gravity on cosmological scales. It emerged in the late 1990s as a concordance cosmology, after a period of time when disparate observed properties of the universe appeared mutually inconsistent, and there was no consensus on the makeup of the energy density of the universe.
Some alternative models challenge the assumptions of the ΛCDM model. Examples of these are modified Newtonian dynamics, entropic gravity, modified gravity, theories of large-scale variations in the matter density of the universe, bimetric gravity, scale invariance of empty space, and decaying dark matter (DDM).
Overview.
The ΛCDM model includes an expansion of metric space that is well documented, both as the redshift of prominent spectral absorption or emission lines in the light from distant galaxies, and as the time dilation in the light decay of supernova luminosity curves. Both effects are attributed to a Doppler shift in electromagnetic radiation as it travels across expanding space. Although this expansion increases the distance between objects that are not under shared gravitational influence, it does not increase the size of the objects (e.g. galaxies) in space. It also allows for distant galaxies to recede from each other at speeds greater than the speed of light; local expansion is less than the speed of light, but expansion summed across great distances can collectively exceed the speed of light.
The letter Λ (lambda) represents the cosmological constant, which is associated with a vacuum energy or dark energy in empty space that is used to explain the contemporary accelerating expansion of space against the attractive effects of gravity. A cosmological constant has negative pressure, formula_0, which contributes to the stress–energy tensor that, according to the general theory of relativity, causes accelerating expansion. The fraction of the total energy density of our (flat or almost flat) universe that is dark energy, formula_1, is estimated to be 0.669 ± 0.038 based on the 2018 Dark Energy Survey results using Type Ia supernovae, or, based on the 2018 release of "Planck" satellite data, more than 68.3 % (2018 estimate) of the mass–energy density of the universe.
Dark matter is postulated in order to account for gravitational effects observed in very large-scale structures (the "non-keplerian" rotation curves of galaxies; the gravitational lensing of light by galaxy clusters; and the enhanced clustering of galaxies) that cannot be accounted for by the quantity of observed matter.
The ΛCDM model proposes specifically cold dark matter, hypothesized as: non-baryonic, i.e. consisting of matter other than protons, neutrons and electrons; cold, i.e. with a velocity far less than the speed of light at the epoch of radiation–matter equality; dissipationless, i.e. unable to cool by radiating photons; and collisionless, i.e. interacting with itself and other particles only through gravity and possibly the weak force.
Dark matter constitutes about 26.5 % of the mass–energy density of the universe. The remaining 4.9 % comprises all ordinary matter observed as atoms, chemical elements, gas and plasma, the stuff of which visible planets, stars and galaxies are made. The great majority of ordinary matter in the universe is unseen, since visible stars and gas inside galaxies and clusters account for less than 10 % of the ordinary matter contribution to the mass–energy density of the universe.
The model includes a single originating event, the "Big Bang", which was not an explosion but the abrupt appearance of expanding spacetime containing radiation at temperatures of around 10^15 K. This was immediately (within 10^−29 seconds) followed by an exponential expansion of space by a scale multiplier of 10^27 or more, known as cosmic inflation. The early universe remained hot (above 10 000 K) for several hundred thousand years, a state that is detectable as a residual cosmic microwave background, or CMB, a very low-energy radiation emanating from all parts of the sky. The "Big Bang" scenario, with cosmic inflation and standard particle physics, is the only cosmological model consistent with the observed continuing expansion of space, the observed distribution of lighter elements in the universe (hydrogen, helium, and lithium), and the spatial texture of minute irregularities (anisotropies) in the CMB radiation. Cosmic inflation also addresses the "horizon problem" in the CMB; indeed, it seems likely that the universe is larger than the observable particle horizon.
The model uses the Friedmann–Lemaître–Robertson–Walker metric, the Friedmann equations, and the cosmological equations of state to describe the observable universe from approximately 0.1 s to the present.
Cosmic expansion history.
The expansion of the universe is parameterized by a dimensionless scale factor formula_2 (with time formula_3 counted from the birth of the universe), defined relative to the present time, so formula_4; the usual convention in cosmology is that subscript 0 denotes present-day values, so formula_5 denotes the age of the universe. The scale factor is related to the observed redshift formula_6 of the light emitted at time formula_7 by
formula_8
The expansion rate is described by the time-dependent Hubble parameter, formula_9, defined as
formula_10
where formula_11 is the time-derivative of the scale factor. The first Friedmann equation gives the expansion rate in terms of the matter+radiation density formula_12, the curvature formula_13, and the cosmological constant formula_14,
formula_15
where, as usual formula_16 is the speed of light and formula_17 is the gravitational constant.
A critical density formula_18 is the present-day density which gives zero curvature formula_13, assuming the cosmological constant formula_14 is zero, regardless of its actual value. Substituting these conditions into the Friedmann equation gives
formula_19
where formula_20 is the reduced Hubble constant.
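As a quick numerical check of this expression (a minimal sketch; the value "h" ≈ 0.674 used below is an assumed, representative present-day estimate rather than a value quoted in this article):

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.0857e22          # one megaparsec in metres

h = 0.674                            # assumed reduced Hubble constant
H0 = 100.0 * h * 1.0e3 / Mpc         # Hubble constant in s^-1

# Critical density rho_crit = 3 H0^2 / (8 pi G)
rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)

print(f"H0           = {H0:.3e} s^-1")
print(f"rho_crit     = {rho_crit:.3e} kg m^-3")
print(f"rho_crit/h^2 = {rho_crit / h**2:.4e} kg m^-3   # compare 1.878e-26")
```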
If the cosmological constant were actually zero, the critical density would also mark the dividing line between eventual recollapse of the universe to a Big Crunch, or unlimited expansion. For the Lambda-CDM model with a positive cosmological constant (as observed), the universe is predicted to expand forever regardless of whether the total density is slightly above or below the critical density; though other outcomes are possible in extended models where the dark energy is not constant but actually time-dependent.
It is standard to define the present-day density parameter formula_21 for various species as the dimensionless ratio
formula_22
where the subscript formula_23 is one of formula_24 for baryons, formula_25 for cold dark matter, formula_26 for radiation (photons plus relativistic neutrinos), and formula_14 for dark energy.
Since the densities of various species scale as different powers of formula_27, e.g. formula_28 for matter etc.,
the Friedmann equation can be conveniently rewritten in terms of the various density parameters as
formula_29
where formula_30 is the equation of state parameter of dark energy, and assuming negligible neutrino mass (significant neutrino mass requires a more complex equation). The various formula_31 parameters add up to formula_32 by construction. In the general case this is integrated by computer to give the expansion history formula_33 and also observable distance–redshift relations for any chosen values of the cosmological parameters, which can then be compared with observations such as supernovae and baryon acoustic oscillations.
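The computer integration mentioned above can be sketched in a few lines of code. The following is only an illustrative toy calculation, not a production cosmology code: the density parameters are assumed, Planck-like values, neutrino mass is neglected, and the age follows from integrating dt = da/(aH(a)):

```python
import numpy as np

# Assumed, Planck-like parameters (illustrative only)
H0_km_s_Mpc = 67.4
Om, Orad, Ok = 0.315, 9.0e-5, 0.0
Ol, w = 1.0 - Om - Orad - Ok, -1.0          # flat universe by construction

H0 = H0_km_s_Mpc / 3.0857e19                # convert km/s/Mpc -> s^-1
Gyr = 3.156e16                              # seconds per gigayear

def H(a):
    """Hubble parameter H(a) from the Friedmann equation written above."""
    return H0 * np.sqrt(Om * a**-3 + Orad * a**-4 + Ok * a**-2 + Ol * a**(-3.0 * (1.0 + w)))

def cosmic_time(a_end, n=200_000):
    """Time since the Big Bang, t(a_end) = integral of da / (a H(a))."""
    a = np.linspace(1e-8, a_end, n)
    f = 1.0 / (a * H(a))
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(a))   # trapezoidal rule

t0 = cosmic_time(1.0)
print(f"Age of the universe  : {t0 / Gyr:.2f} Gyr")
print(f"Lookback time to z=1 : {(t0 - cosmic_time(0.5)) / Gyr:.2f} Gyr")
```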
In the minimal 6-parameter Lambda-CDM model, it is assumed that curvature formula_34 is zero and formula_35, so this simplifies to
formula_36
Observations show that the radiation density is very small today, formula_37; if this term is neglected
the above has an analytic solution
formula_38
where formula_39
this is fairly accurate for formula_40 or formula_41 million years.
Solving for formula_42 gives the present age of the universe formula_43 in terms of the other parameters.
It follows that the transition from decelerating to accelerating expansion (the second derivative formula_44 crossing zero) occurred when
formula_45
which evaluates to formula_46 or formula_47 for the best-fit parameters estimated from the "Planck" spacecraft.
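The analytic solution makes both numbers easy to reproduce, and cross-checks the numerical integration sketched earlier. A short sketch (again with assumed, Planck-like values Ω_m ≈ 0.315, Ω_Λ ≈ 0.685 and H0 ≈ 67.4 km s^−1 Mpc^−1, so the exact outputs are illustrative):

```python
import math

Om, Ol = 0.315, 0.685                      # assumed matter and dark-energy fractions
H0 = 67.4 / 3.0857e19                      # km/s/Mpc -> s^-1
Gyr = 3.156e16                             # seconds per gigayear

t_lambda = 2.0 / (3.0 * H0 * math.sqrt(Ol))

# a(t) = (Om/Ol)^(1/3) sinh^(2/3)(t/t_lambda); setting a = 1 gives the present age
t0 = t_lambda * math.asinh(math.sqrt(Ol / Om))

# Deceleration-acceleration transition at a = (Om / (2 Ol))^(1/3)
a_acc = (Om / (2.0 * Ol)) ** (1.0 / 3.0)
z_acc = 1.0 / a_acc - 1.0

print(f"t0 = {t0 / Gyr:.2f} Gyr")                      # roughly 13.8 Gyr
print(f"a_acc = {a_acc:.2f}, z_acc = {z_acc:.2f}")     # roughly 0.61 and 0.63
```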
Historical development.
The discovery of the cosmic microwave background (CMB) in 1964 confirmed a key prediction of the Big Bang cosmology. From that point on, it was generally accepted that the universe started in a hot, dense state and has been expanding over time. The rate of expansion depends on the types of matter and energy present in the universe, and in particular, whether the total density is above or below the so-called critical density.
During the 1970s, most attention focused on pure-baryonic models, but there were serious challenges explaining the formation of galaxies, given the small anisotropies in the CMB (upper limits at that time). In the early 1980s, it was realized that this could be resolved if cold dark matter dominated over the baryons, and the theory of cosmic inflation motivated models with critical density.
During the 1980s, most research focused on cold dark matter with critical density in matter, around 95 % CDM and 5 % baryons: these showed success at forming galaxies and clusters of galaxies, but problems remained; notably, the model required a Hubble constant lower than preferred by observations, and observations around 1988–1990 showed more large-scale galaxy clustering than predicted.
These difficulties sharpened with the discovery of CMB anisotropy by the Cosmic Background Explorer in 1992, and several modified CDM models, including ΛCDM and mixed cold and hot dark matter, came under active consideration through the mid-1990s. The ΛCDM model then became the leading model following the observations of accelerating expansion in 1998, and was quickly supported by other observations: in 2000, the BOOMERanG microwave background experiment measured the total (matter–energy) density to be close to 100 % of critical, whereas in 2001 the 2dFGRS galaxy redshift survey measured the matter density to be near 25 %; the large difference between these values supports a positive Λ or dark energy. Much more precise spacecraft measurements of the microwave background from WMAP in 2003–2010 and "Planck" in 2013–2015 have continued to support the model and pin down the parameter values, most of which are constrained below 1 percent uncertainty.
Research is active into many aspects of the ΛCDM model, both to refine the parameters and to resolve the tensions between recent observations and the ΛCDM model, such as the Hubble tension and the CMB dipole. In addition, ΛCDM has no explicit physical theory for the origin or physical nature of dark matter or dark energy; the nearly scale-invariant spectrum of the CMB perturbations, and their image across the celestial sphere, are believed to result from very small thermal and acoustic irregularities at the point of recombination.
Historically, a large majority of astronomers and astrophysicists support the ΛCDM model or close relatives of it, but recent observations that contradict the ΛCDM model have led some astronomers and astrophysicists to search for alternatives to the ΛCDM model, which include dropping the Friedmann–Lemaître–Robertson–Walker metric or modifying dark energy. On the other hand, Milgrom, McGaugh, and Kroupa have long been leading critics of the ΛCDM model, attacking the dark matter portions of the theory from the perspective of galaxy formation models and supporting the alternative modified Newtonian dynamics (MOND) theory, which requires a modification of the Einstein field equations and the Friedmann equations as seen in proposals such as modified gravity theory (MOG theory) or tensor–vector–scalar gravity theory (TeVeS theory). Other proposals by theoretical astrophysicists of cosmological alternatives to Einstein's general relativity that attempt to account for dark energy or dark matter include f(R) gravity, scalar–tensor theories such as galileon theories, brane cosmologies, the DGP model, and massive gravity and its extensions such as bimetric gravity.
Successes.
In addition to explaining many pre-2000 observations, the model has made a number of successful predictions: notably the existence of the baryon acoustic oscillation feature, discovered in 2005 in the predicted location; and the statistics of weak gravitational lensing, first observed in 2000 by several teams. The polarization of the CMB, discovered in 2002 by DASI, has been successfully predicted by the model: in the 2015 "Planck" data release, there are seven observed peaks in the temperature (TT) power spectrum, six peaks in the temperature–polarization (TE) cross spectrum, and five peaks in the polarization (EE) spectrum. The six free parameters can be well constrained by the TT spectrum alone, and then the TE and EE spectra can be predicted theoretically to few-percent precision with no further adjustments allowed.
Challenges.
Over the years, numerous simulations of ΛCDM and observations of our universe have been made that challenge the validity of the ΛCDM model, to the point where some cosmologists believe that the ΛCDM model may be superseded by a different, as yet unknown cosmological model.
Lack of detection.
Extensive searches for dark matter particles have so far shown no well-agreed detection, while dark energy may be almost impossible to detect in a laboratory, and its value is extremely small compared to vacuum energy theoretical predictions.
Violations of the cosmological principle.
The ΛCDM model has been shown to satisfy the cosmological principle, which states that, on a large-enough scale, the universe looks the same in all directions (isotropy) and from every location (homogeneity); "the universe looks the same whoever and wherever you are." The cosmological principle exists because when the predecessors of the ΛCDM model were being developed, there was not sufficient data available to distinguish between more complex anisotropic or inhomogeneous models, so homogeneity and isotropy were assumed to simplify the models, and the assumptions were carried over into the ΛCDM model. However, recent findings have suggested that violations of the cosmological principle, especially of isotropy, exist. These violations have called the ΛCDM model into question, with some authors suggesting that the cosmological principle is obsolete or that the Friedmann–Lemaître–Robertson–Walker metric breaks down in the late universe. This has additional implications for the validity of the cosmological constant in the ΛCDM model, as dark energy is implied by observations only if the cosmological principle is true.
Violations of isotropy.
Evidence from galaxy clusters, quasars, and type Ia supernovae suggest that isotropy is violated on large scales.
Data from the Planck Mission shows hemispheric bias in the cosmic microwave background in two respects: one with respect to average temperature (i.e. temperature fluctuations), the second with respect to larger variations in the degree of perturbations (i.e. densities). The European Space Agency (the governing body of the Planck Mission) has concluded that these anisotropies in the CMB are, in fact, statistically significant and can no longer be ignored.
Already in 1967, Dennis Sciama predicted that the cosmic microwave background has a significant dipole anisotropy. In recent years, the CMB dipole has been tested, and the results suggest our motion with respect to distant radio galaxies and quasars differs from our motion with respect to the cosmic microwave background. The same conclusion has been reached in recent studies of the Hubble diagram of Type Ia supernovae and quasars. This contradicts the cosmological principle.
The CMB dipole is hinted at through a number of other observations. First, even within the cosmic microwave background, there are curious directional alignments and an anomalous parity asymmetry that may have an origin in the CMB dipole. Separately, the CMB dipole direction has emerged as a preferred direction in studies of alignments in quasar polarizations, scaling relations in galaxy clusters, strong lensing time delay, Type Ia supernovae, and quasars and gamma-ray bursts as standard candles. The fact that all these independent observables, based on different physics, are tracking the CMB dipole direction suggests that the Universe is anisotropic in the direction of the CMB dipole.
Nevertheless, some authors have stated that the universe around Earth is isotropic at high significance by studies of the cosmic microwave background temperature maps.
Violations of homogeneity.
Based on N-body simulations in ΛCDM, Yadav and his colleagues showed that the spatial distribution of galaxies is statistically homogeneous if averaged over scales 260/h Mpc or more. However, many large-scale structures have been discovered, and some authors have reported some of the structures to be in conflict with the predicted scale of homogeneity for ΛCDM, including the Clowes–Campusano LQG, the Sloan Great Wall, the Huge-LQG, the Hercules–Corona Borealis Great Wall, and the Giant Arc.
Other authors claim that the existence of structures larger than the scale of homogeneity in the ΛCDM model does not necessarily violate the cosmological principle in the ΛCDM model.
El Gordo galaxy cluster collision.
El Gordo is a massive interacting galaxy cluster in the early Universe (formula_48). The extreme properties of El Gordo in terms of its redshift, mass, and the collision velocity lead to strong (formula_49) tension with the ΛCDM model. The properties of El Gordo are however consistent with cosmological simulations in the framework of MOND due to more rapid structure formation.
KBC void.
The KBC void is an immense, comparatively empty region of space containing the Milky Way approximately 2 billion light-years (600 megaparsecs, Mpc) in diameter. Some authors have said the existence of the KBC void violates the assumption that the CMB reflects baryonic density fluctuations at formula_50 or Einstein's theory of general relativity, either of which would violate the ΛCDM model, while other authors have claimed that supervoids as large as the KBC void are consistent with the ΛCDM model.
Hubble tension.
Statistically significant differences remain in measurements of the Hubble constant based on the cosmic background radiation compared to astronomical distance measurements. This difference has been called the Hubble tension.
The Hubble tension in cosmology is widely acknowledged to be a major problem for the ΛCDM model. In December 2021, "National Geographic" reported that the cause of the Hubble tension discrepancy is not known. However, if the cosmological principle fails (see Violations of the cosmological principle), then the existing interpretations of the Hubble constant and the Hubble tension have to be revised, which might resolve the Hubble tension.
Some authors postulate that the Hubble tension can be explained entirely by the KBC void, as measuring galactic supernovae inside a void is predicted by the authors to yield a larger local value for the Hubble constant than cosmological measures of the Hubble constant. However, other work has found no evidence for this in observations, finding the scale of the claimed underdensity to be incompatible with observations which extend beyond its radius. Important deficiencies were subsequently pointed out in this analysis, leaving open the possibility that the Hubble tension is indeed caused by outflow from the KBC void.
As a result of the Hubble tension, other researchers have called for new physics beyond the ΛCDM model. Moritz Haslbauer et al. proposed that MOND would resolve the Hubble tension. Another group of researchers led by Marc Kamionkowski proposed a cosmological model with early dark energy to replace ΛCDM.
"S"8 tension.
The formula_51 tension in cosmology is another major problem for the ΛCDM model. The formula_51 parameter in the ΛCDM model quantifies the amplitude of matter fluctuations in the late universe and is defined as
formula_52
Early-time measurements (e.g. from CMB data collected using the Planck observatory) and late-time measurements (e.g. measuring weak gravitational lensing events) facilitate increasingly precise values of formula_51. However, these two categories of measurement differ by more standard deviations than their uncertainties. This discrepancy is called the formula_51 tension. The name "tension" reflects that the disagreement is not merely between two data sets: the many sets of early- and late-time measurements agree well within their own categories, but there is an unexplained difference between values obtained from different points in the evolution of the universe. Such a tension indicates that the ΛCDM model may be incomplete or in need of correction.
Reported values of formula_51 include those from Planck (2020), KiDS (2021), DES (2022), DES+KiDS (2023), HSC-SSP (2023), and eROSITA (2024). Values have also been obtained from peculiar-velocity studies (2020), among other methods.
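The definition of formula_51 is simple enough to evaluate directly. The inputs below are rough, assumed numbers (an approximately Planck-like early-time pair and a weak-lensing-like late-time pair) used only to illustrate how the tension is quantified:

```python
import math

def S8(sigma8, omega_m):
    """S8 = sigma8 * sqrt(Omega_m / 0.3), the late-time fluctuation amplitude."""
    return sigma8 * math.sqrt(omega_m / 0.3)

early = S8(sigma8=0.81, omega_m=0.315)   # illustrative CMB-like (early-time) values
late  = S8(sigma8=0.76, omega_m=0.30)    # illustrative lensing-like (late-time) values

print(f"S8 (early-time fit) ~ {early:.2f}")
print(f"S8 (late-time fit)  ~ {late:.2f}")
```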
Axis of evil.
The ΛCDM model assumes that the data of the cosmic microwave background and our interpretation of the CMB are correct. However, there exists an apparent correlation between the plane of the Solar System, the rotation of galaxies, and certain aspects of the CMB. This may indicate that there is something wrong with the data or the interpretation of the cosmic microwave background used as evidence for the ΛCDM model, or that the Copernican principle and cosmological principle are violated.
Cosmological lithium problem.
The actual observable amount of lithium in the universe is less than the calculated amount from the ΛCDM model by a factor of 3–4. If every calculation is correct, then solutions beyond the existing ΛCDM model might be needed.
Shape of the universe.
The ΛCDM model assumes that the shape of the universe is flat (zero curvature). However, recent Planck data have hinted that the shape of the universe might in fact be closed (positive curvature), which would contradict the ΛCDM model. Some authors have suggested that the Planck data detecting a positive curvature could be evidence of a local inhomogeneity in the curvature of the universe rather than the universe actually being closed.
Violations of the strong equivalence principle.
The ΛCDM model assumes that the strong equivalence principle is true. However, in 2020 a group of astronomers analyzed data from the Spitzer Photometry and Accurate Rotation Curves (SPARC) sample, together with estimates of the large-scale external gravitational field from an all-sky galaxy catalog. They concluded that there was highly statistically significant evidence of violations of the strong equivalence principle in weak gravitational fields in the vicinity of rotationally supported galaxies. They observed an effect inconsistent with tidal effects in the ΛCDM model. These results have been challenged as failing to consider inaccuracies in the rotation curves and correlations between galaxy properties and clustering strength, and as inconsistent with similar analysis of other galaxies.
Cold dark matter discrepancies.
Several discrepancies between the predictions of cold dark matter in the ΛCDM model and observations of galaxies and their clustering have arisen. Some of these problems have proposed solutions, but it remains unclear whether they can be solved without abandoning the ΛCDM model.
Cuspy halo problem.
The density distributions of dark matter halos in cold dark matter simulations (at least those that do not include the impact of baryonic feedback) are much more peaked than what is observed in galaxies by investigating their rotation curves.
Dwarf galaxy problem.
Cold dark matter simulations predict large numbers of small dark matter halos, more numerous than the number of small dwarf galaxies that are observed around galaxies like the Milky Way.
Satellite disk problem.
Dwarf galaxies around the Milky Way and Andromeda galaxies are observed to be orbiting in thin, planar structures whereas the simulations predict that they should be distributed randomly about their parent galaxies. However, the latest research suggests this seemingly bizarre alignment is just a quirk which will dissolve over time.
High-velocity galaxy problem.
Galaxies in the NGC 3109 association are moving away too rapidly to be consistent with expectations in the ΛCDM model. In this framework, NGC 3109 is too massive and distant from the Local Group for it to have been flung out in a three-body interaction involving the Milky Way or Andromeda Galaxy.
Galaxy morphology problem.
If galaxies grew hierarchically, then massive galaxies required many mergers. Major mergers inevitably create a classical bulge. On the contrary, about 80 % of observed galaxies give evidence of no such bulges, and giant pure-disc galaxies are commonplace. The tension can be quantified by comparing the observed distribution of galaxy shapes today with predictions from high-resolution hydrodynamical cosmological simulations in the ΛCDM framework, revealing a highly significant problem that is unlikely to be solved by improving the resolution of the simulations. The high bulgeless fraction was nearly constant for 8 billion years.
Fast galaxy bar problem.
If galaxies were embedded within massive halos of cold dark matter, then the bars that often develop in their central regions would be slowed down by dynamical friction with the halo. This is in serious tension with the fact that observed galaxy bars are typically fast.
Small scale crisis.
Comparison of the model with observations may have some problems on sub-galaxy scales, possibly predicting too many dwarf galaxies and too much dark matter in the innermost regions of galaxies. This problem is called the "small scale crisis". These small scales are harder to resolve in computer simulations, so it is not yet clear whether the problem is the simulations, non-standard properties of dark matter, or a more radical error in the model.
High redshift galaxies.
Observations from the James Webb Space Telescope have resulted in various galaxies confirmed by spectroscopy at high redshift, such as JADES-GS-z13-0 at cosmological redshift of 13.2. Other candidate galaxies which have not been confirmed by spectroscopy include CEERS-93316 at cosmological redshift of 16.4.
The existence of surprisingly massive galaxies in the early universe challenges the preferred models describing how dark matter halos drive galaxy formation. It remains to be seen whether a revision of the Lambda-CDM model with parameters given by the Planck Collaboration is necessary to resolve this issue. The discrepancies could also be explained by particular properties (stellar masses or effective volume) of the candidate galaxies, a yet unknown force or particle outside of the Standard Model through which dark matter interacts, more efficient baryonic matter accumulation by the dark matter halos, early dark energy models, or the hypothesized long-sought Population III stars.
Missing baryon problem.
Massimo Persic and Paolo Salucci first estimated the baryonic density present today in ellipticals, spirals, groups and clusters of galaxies.
They performed an integration of the baryonic mass-to-light ratio over luminosity (in the following formula_53), weighted with the luminosity function formula_54 over the previously mentioned classes of astrophysical objects:
formula_55
The result was:
formula_56
where formula_57.
Note that this value is much lower than the prediction of standard cosmic nucleosynthesis formula_58, so that stars and gas in galaxies and in galaxy groups and clusters account for less than 10 % of the primordially synthesized baryons. This issue is known as the problem of the "missing baryons".
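The structure of the Persic–Salucci estimate can be illustrated schematically: one integrates an assumed baryonic mass-to-light ratio against an assumed luminosity function for each class of object. The Schechter form and every parameter value in the sketch below are placeholders chosen only to show the mechanics of the integral, not the inputs of the original work:

```python
import numpy as np

# Placeholder Schechter luminosity function phi(L) (all parameters illustrative)
phi_star, L_star, alpha = 5.0e-3, 1.0, -1.1

def phi(L):
    return (phi_star / L_star) * (L / L_star) ** alpha * np.exp(-L / L_star)

M_over_L = 5.0                      # assumed constant baryonic mass-to-light ratio

# rho_b = integral of L * phi(L) * (M_b / L) dL, evaluated with the trapezoidal rule
L = np.logspace(-4, 2, 4001)        # luminosities in units of L_star
f = L * phi(L) * M_over_L
rho_b = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(L))

print(f"Schematic baryon density integral: rho_b ~ {rho_b:.3e} (in the adopted toy units)")
```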
The missing baryon problem is claimed to be resolved. Using observations of the kinematic Sunyaev–Zel'dovich effect spanning more than 90 % of the lifetime of the Universe, in 2021 astrophysicists found that approximately 50 % of all baryonic matter is outside dark matter haloes, filling the space between galaxies. Together with the amount of baryons inside galaxies and surrounding them, the total amount of baryons in the late time Universe is compatible with early Universe measurements.
Unfalsifiability.
It has been argued that the ΛCDM model is built upon a foundation of conventionalist stratagems, rendering it unfalsifiable in the sense defined by Karl Popper.
Parameters.
The simple ΛCDM model is based on six parameters: physical baryon density parameter; physical dark matter density parameter; the age of the universe; scalar spectral index; curvature fluctuation amplitude; and reionization optical depth. In accordance with Occam's razor, six is the smallest number of parameters needed to give an acceptable fit to the observations; other possible parameters are fixed at "natural" values, e.g. total density parameter = 1.00, dark energy equation of state = −1. (See below for extended models that allow these to vary.)
The values of these six parameters are mostly not predicted by theory (though, ideally, they may be related by a future "Theory of Everything"), except that most versions of cosmic inflation predict the scalar spectral index should be slightly smaller than 1, consistent with the estimated value 0.96. The parameter values, and uncertainties, are estimated using large computer searches to locate the region of parameter space providing an acceptable match to cosmological observations. From these six parameters, the other model values, such as the Hubble constant and the dark energy density, can be readily calculated.
Commonly, the set of observations fitted includes the cosmic microwave background anisotropy, the brightness/redshift relation for supernovae, and large-scale galaxy clustering including the baryon acoustic oscillation feature. Other observations, such as the Hubble constant, the abundance of galaxy clusters, weak gravitational lensing and globular cluster ages, are generally consistent with these, providing a check of the model, but are less precisely measured at present.
Parameter values listed below are from the "Planck" Collaboration Cosmological parameters 68 % confidence limits for the base ΛCDM model from "Planck" CMB power spectra, in combination with lensing reconstruction and external data (BAO + JLA + H0). See also Planck (spacecraft).
<templatestyles src="Reflist/styles.css" />
Extended models.
Extended models allow one or more of the "fixed" parameters above to vary, in addition to the basic six; so these models join smoothly to the basic six-parameter model in the limit that the additional parameter(s) approach the default values. For example, possible extensions of the simplest ΛCDM model allow for spatial curvature (formula_59 may be different from 1); or quintessence rather than a cosmological constant where the equation of state of dark energy is allowed to differ from −1. Cosmic inflation predicts tensor fluctuations (gravitational waves). Their amplitude is parameterized by the tensor-to-scalar ratio (denoted formula_60), which is determined by the unknown energy scale of inflation. Other modifications allow hot dark matter in the form of neutrinos more massive than the minimal value, or a running spectral index; the latter is generally not favoured by simple cosmic inflation models.
Allowing additional variable parameter(s) will generally "increase" the uncertainties in the standard six parameters quoted above, and may also shift the central values slightly. The table below shows results for each of the possible "6+1" scenarios with one additional variable parameter; this indicates that, as of 2015, there is no convincing evidence that any additional parameter is different from its default value.
Some researchers have suggested that there is a running spectral index, but no statistically significant study has revealed one. Theoretical expectations suggest that the tensor-to-scalar ratio formula_60 should be between 0 and 0.3, and the latest results are within those limits.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " p = - \\rho c^{2} "
},
{
"math_id": 1,
"text": "\\Omega_{\\Lambda}"
},
{
"math_id": 2,
"text": "a = a(t)"
},
{
"math_id": 3,
"text": "t"
},
{
"math_id": 4,
"text": "a_0 = a(t_0) = 1 "
},
{
"math_id": 5,
"text": "t_0"
},
{
"math_id": 6,
"text": "z"
},
{
"math_id": 7,
"text": "t_\\mathrm{em}"
},
{
"math_id": 8,
"text": "a(t_\\text{em}) = \\frac{1}{1 + z}\\,."
},
{
"math_id": 9,
"text": "H(t)"
},
{
"math_id": 10,
"text": "H(t) \\equiv \\frac{\\dot a}{a},"
},
{
"math_id": 11,
"text": "\\dot a"
},
{
"math_id": 12,
"text": "\\rho"
},
{
"math_id": 13,
"text": "k"
},
{
"math_id": 14,
"text": "\\Lambda"
},
{
"math_id": 15,
"text": "H^2 = \\left(\\frac{\\dot{a}}{a}\\right)^2 = \\frac{8 \\pi G}{3} \\rho - \\frac{kc^2}{a^2} + \\frac{\\Lambda c^2}{3}, "
},
{
"math_id": 16,
"text": "c"
},
{
"math_id": 17,
"text": "G"
},
{
"math_id": 18,
"text": "\\rho_\\mathrm{crit}"
},
{
"math_id": 19,
"text": "\\rho_\\mathrm{crit} = \\frac{3 H_0^2}{8 \\pi G} = 1.878\\;47(23) \\times 10^{-26} \\; h^2 \\; \\mathrm{kg{\\cdot}m^{-3}},"
},
{
"math_id": 20,
"text": " h \\equiv H_0 / (100 \\; \\mathrm{km{\\cdot}s^{-1}{\\cdot}Mpc^{-1}}) "
},
{
"math_id": 21,
"text": "\\Omega_x"
},
{
"math_id": 22,
"text": "\\Omega_x \\equiv \\frac{\\rho_x(t=t_0)}{\\rho_\\mathrm{crit} } = \\frac{8 \\pi G\\rho_x(t=t_0)}{3 H_0^2}"
},
{
"math_id": 23,
"text": "x"
},
{
"math_id": 24,
"text": "\\mathrm b"
},
{
"math_id": 25,
"text": "\\mathrm c"
},
{
"math_id": 26,
"text": "\\mathrm{rad}"
},
{
"math_id": 27,
"text": "a"
},
{
"math_id": 28,
"text": "a^{-3}"
},
{
"math_id": 29,
"text": "H(a) \\equiv \\frac{\\dot{a}}{a} = H_0 \\sqrt{ (\\Omega_{\\rm c} + \\Omega_{\\rm b}) a^{-3} + \\Omega_\\mathrm{rad} a^{-4} + \\Omega_k a^{-2} + \\Omega_{\\Lambda} a^{-3(1+w)} } ,"
},
{
"math_id": 30,
"text": "w"
},
{
"math_id": 31,
"text": " \\Omega "
},
{
"math_id": 32,
"text": "1"
},
{
"math_id": 33,
"text": "a(t)"
},
{
"math_id": 34,
"text": "\\Omega_k"
},
{
"math_id": 35,
"text": " w = -1 "
},
{
"math_id": 36,
"text": " H(a) = H_0 \\sqrt{ \\Omega_{\\rm m} a^{-3} + \\Omega_\\mathrm{rad} a^{-4} + \\Omega_\\Lambda } "
},
{
"math_id": 37,
"text": " \\Omega_\\text{rad} \\sim 10^{-4} "
},
{
"math_id": 38,
"text": " a(t) = (\\Omega_{\\rm m} / \\Omega_\\Lambda)^{1/3} \\, \\sinh^{2/3} ( t / t_\\Lambda) "
},
{
"math_id": 39,
"text": " t_\\Lambda \\equiv 2 / (3 H_0 \\sqrt{\\Omega_\\Lambda} ) \\ ; "
},
{
"math_id": 40,
"text": "a > 0.01"
},
{
"math_id": 41,
"text": "t > 10"
},
{
"math_id": 42,
"text": " a(t) = 1 "
},
{
"math_id": 43,
"text": " t_0 "
},
{
"math_id": 44,
"text": " \\ddot{a} "
},
{
"math_id": 45,
"text": " a = ( \\Omega_{\\rm m} / 2 \\Omega_\\Lambda )^{1/3} ,"
},
{
"math_id": 46,
"text": "a \\sim 0.6"
},
{
"math_id": 47,
"text": "z \\sim 0.66"
},
{
"math_id": 48,
"text": "z = 0.87"
},
{
"math_id": 49,
"text": "6.16\\sigma"
},
{
"math_id": 50,
"text": "z = 1100"
},
{
"math_id": 51,
"text": "S_8"
},
{
"math_id": 52,
"text": "S_8 \\equiv \\sigma_8\\sqrt{\\Omega_{\\rm m}/0.3}"
},
{
"math_id": 53,
"text": " M_{\\rm b}/L "
},
{
"math_id": 54,
"text": "\\phi(L)"
},
{
"math_id": 55,
"text": "\\rho_{\\rm b} = \\sum \\int L\\phi(L) \\frac{M_{\\rm b}}{L} \\, dL."
},
{
"math_id": 56,
"text": " \\Omega_{\\rm b}=\\Omega_*+\\Omega_\\text{gas}=2.2\\times10^{-3}+1.5\\times10^{-3}\\;h^{-1.3}\\simeq0.003 ,"
},
{
"math_id": 57,
"text": " h\\simeq0.72 "
},
{
"math_id": 58,
"text": " \\Omega_{\\rm b}\\simeq0.0486 "
},
{
"math_id": 59,
"text": "\\Omega_\\text{tot}"
},
{
"math_id": 60,
"text": "r"
}
] | https://en.wikipedia.org/wiki?curid=985963 |
986029 | Strong CP problem | Question of why quantum chromodynamics does not seem to break CP-symmetry
The strong CP problem is a question in particle physics, which brings up the following quandary: why does quantum chromodynamics (QCD) seem to preserve CP-symmetry?
In particle physics, CP stands for the combination of charge conjugation symmetry (C) and parity symmetry (P). According to the current mathematical formulation of quantum chromodynamics, a violation of CP-symmetry in strong interactions could occur. However, no violation of the CP-symmetry has ever been seen in any experiment involving only the strong interaction. As there is no known reason in QCD for it to necessarily be conserved, this is a "fine tuning" problem known as the strong CP problem.
The strong CP problem is sometimes regarded as an unsolved problem in physics, and has been referred to as "the most underrated puzzle in all of physics." There are several proposed solutions to solve the strong CP problem. The most well-known is Peccei–Quinn theory, involving new pseudoscalar particles called axions.
Theory.
CP-symmetry states that physics should be unchanged if particles were swapped with their antiparticles and then left-handed and right-handed particles were also interchanged. This corresponds to performing a charge conjugation transformation and then a parity transformation. The symmetry is known to be broken in the Standard Model through weak interactions, but it is also expected to be broken through strong interactions which govern quantum chromodynamics (QCD), something that has not yet been observed.
To illustrate how the CP violation can come about in QCD, consider a Yang–Mills theory with a single massive quark. The most general mass term possible for the quark is a complex mass written as formula_0 for some arbitrary phase formula_1. In that case the Lagrangian describing the theory consists of four terms:
formula_2
The first and third terms are the CP-symmetric kinetic terms of the gauge and quark fields. The fourth term is the quark mass term which is CP violating for non-zero phases formula_3 while the second term is the so-called θ-term, which also violates CP-symmetry.
Quark fields can always be redefined by performing a chiral transformation by some angle formula_4 as
formula_5
which changes the complex mass phase by formula_6 while leaving the kinetic terms unchanged. The transformation also changes the θ-term as formula_7 due to a change in the path integral measure, an effect closely connected to the chiral anomaly.
The theory would be CP invariant if one could eliminate both sources of CP violation through such a field redefinition. But this cannot be done unless formula_8. This is because even under such field redefinitions, the combination formula_9 remains unchanged. For example, the CP violation due to the mass term can be eliminated by picking formula_10, but then all the CP violation goes to the θ-term which is now proportional to formula_11. If instead the θ-term is eliminated through a chiral transformation, then there will be a CP violating complex mass with a phase formula_11. Practically, it is usually useful to put all the CP violation into the θ-term and thus only deal with real masses.
In the Standard Model where one deals with six quarks whose masses are described by the Yukawa matrices formula_12 and formula_13, the physical CP violating angle is formula_14. Since the θ-term has no contributions to perturbation theory, all effects from strong CP violation are entirely non-perturbative. Notably, it gives rise to a neutron electric dipole moment
formula_15
Current experimental measurements of the dipole moment give an upper bound of formula_16 cm, which requires formula_17. The angle formula_11 can take any value between zero and formula_18, so the fact that it takes on such a particularly small value is a fine-tuning problem called the strong CP problem.
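The order of magnitude of this bound follows from dividing the two numbers above (a trivial arithmetic sketch):

```python
coefficient = 5.2e-16     # neutron EDM per unit theta-bar, in e*cm (from the expression above)
d_n_limit   = 1.0e-26     # experimental upper bound on the neutron EDM, e*cm

theta_bar_limit = d_n_limit / coefficient
print(f"theta_bar < {theta_bar_limit:.1e}")   # about 2e-11, i.e. at or below the 1e-10 level
```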
Proposed solutions.
The strong CP problem is solved automatically if one of the quarks is massless. In that case one can perform a set of chiral transformations on all the massive quark fields to get rid of their complex mass phases and then perform another chiral transformation on the massless quark field to eliminate the residual θ-term without also introducing a complex mass term for that field. This then gets rid of all CP violating terms in the theory. The problem with this solution is that all quarks are known to be massive from experimental matching with lattice calculations. Even if one of the quarks was essentially massless to solve the problem, this would in itself just be another fine-tuning problem since there is nothing requiring a quark mass to take on such a small value.
The most popular solution to the problem is through the Peccei–Quinn mechanism. This introduces a new global anomalous symmetry which is then spontaneously broken at low energies, giving rise to a pseudo-Goldstone boson called an axion. The axion ground state dynamically forces the theory to be CP-symmetric by setting formula_19. Axions are also considered viable candidates for dark matter and axion-like particles are also predicted by string theory.
Other less popular proposed solutions exist such as Nelson–Barr models. These set formula_19 at some high energy scale where CP-symmetry is exact but the symmetry is then spontaneously broken. The Nelson–Barr mechanism is a way of explaining why formula_11 remains small at low energies while the CP breaking phase in the CKM matrix is large.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "m e^{i\\theta' \\gamma_5}"
},
{
"math_id": 1,
"text": "\\theta'"
},
{
"math_id": 2,
"text": "\n\\mathcal L = -\\frac{1}{4}F_{\\mu \\nu}F^{\\mu \\nu} +\\theta \\frac{g^2}{32\\pi^2}F_{\\mu \\nu}\\tilde F^{\\mu \\nu} +\\bar \\psi(i\\gamma^\\mu D_\\mu -me^{i\\theta' \\gamma_5})\\psi.\n"
},
{
"math_id": 3,
"text": "\\theta' \\neq 0"
},
{
"math_id": 4,
"text": "\\alpha"
},
{
"math_id": 5,
"text": "\n\\psi' = e^{i\\alpha \\gamma_5/2}\\psi, \\ \\ \\ \\ \\ \\ \\bar \\psi' = \\bar \\psi e^{i\\alpha \\gamma_5/2},\n"
},
{
"math_id": 6,
"text": "\\theta' \\rightarrow \\theta'-\\alpha"
},
{
"math_id": 7,
"text": "\\theta \\rightarrow \\theta + \\alpha"
},
{
"math_id": 8,
"text": "\\theta = -\\theta'"
},
{
"math_id": 9,
"text": "\\theta'+ \\theta \\rightarrow (\\theta'-\\alpha) + (\\theta + \\alpha) = \\theta'+\\theta"
},
{
"math_id": 10,
"text": "\\alpha = \\theta'"
},
{
"math_id": 11,
"text": "\\bar \\theta"
},
{
"math_id": 12,
"text": "Y_u"
},
{
"math_id": 13,
"text": "Y_d"
},
{
"math_id": 14,
"text": "\\bar \\theta = \\theta - \\arg \\det(Y_u Y_d)"
},
{
"math_id": 15,
"text": "\nd_N = (5.2 \\times 10^{-16}\\text{e}\\cdot\\text{cm}) \\bar \\theta.\n"
},
{
"math_id": 16,
"text": "d_N < 10^{-26} \\text{e}\\cdot"
},
{
"math_id": 17,
"text": "\\bar \\theta < 10^{-10}"
},
{
"math_id": 18,
"text": "2\\pi"
},
{
"math_id": 19,
"text": "\\bar \\theta = 0"
}
] | https://en.wikipedia.org/wiki?curid=986029 |
986096 | Fermi surface | Abstract boundary in condensed matter physics
In condensed matter physics, the Fermi surface is the surface in reciprocal space which separates occupied from unoccupied electron states at zero temperature. The shape of the Fermi surface is derived from the periodicity and symmetry of the crystalline lattice and from the occupation of electronic energy bands. The existence of a Fermi surface is a direct consequence of the Pauli exclusion principle, which allows a maximum of one electron per quantum state. The study of the Fermi surfaces of materials is called fermiology.
Theory.
Consider a spin-less ideal Fermi gas of formula_0 particles. According to Fermi–Dirac statistics, the mean occupation number of a state with energy formula_1 is given by
formula_2
where formula_3 is the mean occupation number of the formula_4th state, formula_5 is the chemical potential (at zero temperature it equals the Fermi energy formula_6), formula_7 is the absolute temperature, and formula_8 is the Boltzmann constant.
Suppose we consider the limit formula_9. Then we have,
formula_10
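The zero-temperature limit can be checked numerically: evaluating the Fermi–Dirac occupation at a few energies around the chemical potential, for progressively smaller temperatures, shows the distribution approaching a step function (the energy scale below is arbitrary and purely illustrative):

```python
import numpy as np

def occupation(eps, mu, kT):
    """Mean Fermi-Dirac occupation number 1 / (exp((eps - mu)/kT) + 1)."""
    return 1.0 / (np.exp((eps - mu) / kT) + 1.0)

mu = 1.0                                    # chemical potential (arbitrary units)
eps = np.array([0.90, 0.99, 1.01, 1.10])    # sample energies around mu

for kT in (0.1, 0.01, 0.001):
    occ = occupation(eps, mu, kT)
    print(f"kT = {kT:5.3f} :", "  ".join(f"{n:5.3f}" for n in occ))
# As kT -> 0 the occupation tends to 1 for eps < mu and to 0 for eps > mu.
```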
By the Pauli exclusion principle, no two fermions can be in the same state. Additionally, at zero temperature the enthalpy of the electrons must be minimal, meaning that they cannot change state. If, for a particle in some state, there existed an unoccupied lower state that it could occupy, then the energy difference between those states would give the electron an additional enthalpy. Hence, the enthalpy of the electron would not be minimal. Therefore, at zero temperature all the lowest energy states must be saturated. For a large ensemble the Fermi level will be approximately equal to the chemical potential of the system, and hence every state below this energy must be occupied. Thus, particles fill up all energy levels below the Fermi level at absolute zero, which is equivalent to saying that the Fermi level is the energy level below which there are exactly formula_0 states.
In momentum space, these particles fill up a ball of radius formula_11, the surface of which is called the Fermi surface.
The linear response of a metal to an electric, magnetic, or thermal gradient is determined by the shape of the Fermi surface, because currents are due to changes in the occupancy of states near the Fermi energy. In reciprocal space, the Fermi surface of an ideal Fermi gas is a sphere of radius
formula_12,
determined by the valence electron concentration where formula_13 is the reduced Planck constant. A material whose Fermi level falls in a gap between bands is an insulator or semiconductor depending on the size of the bandgap. When a material's Fermi level falls in a bandgap, there is no Fermi surface.
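For a concrete number, the Fermi sphere of a simple metal can be estimated from its conduction-electron density. The sketch below assumes the standard spin-degenerate free-electron relation k_F = (3π²n)^(1/3) (not written out above) together with a copper-like electron density; both are illustrative assumptions:

```python
import math

hbar = 1.0546e-34      # reduced Planck constant, J*s
m_e  = 9.109e-31       # electron mass, kg
eV   = 1.602e-19       # joules per electronvolt

n = 8.5e28             # assumed conduction-electron density (copper-like), m^-3

k_F = (3.0 * math.pi**2 * n) ** (1.0 / 3.0)     # free-electron Fermi wavevector, m^-1
E_F = (hbar * k_F) ** 2 / (2.0 * m_e)           # corresponding Fermi energy, J

print(f"k_F = {k_F:.2e} m^-1  ({k_F * 1e-10:.2f} per angstrom)")
print(f"E_F = {E_F / eV:.1f} eV")
# A copper-like density gives k_F of order 1e10 m^-1 and E_F of order 7 eV.
```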
Materials with complex crystal structures can have quite intricate Fermi surfaces. Figure 2 illustrates the anisotropic Fermi surface of graphite, which has both electron and hole pockets in its Fermi surface due to multiple bands crossing the Fermi energy along the formula_14 direction. Often in a metal, the Fermi surface radius formula_11 is larger than the size of the first Brillouin zone, which results in a portion of the Fermi surface lying in the second (or higher) zones. As with the band structure itself, the Fermi surface can be displayed in an extended-zone scheme where formula_15 is allowed to have arbitrarily large values or a reduced-zone scheme where wavevectors are shown modulo formula_16 (in the 1-dimensional case) where a is the lattice constant. In the three-dimensional case the reduced zone scheme means that from any wavevector formula_15 there is an appropriate number of reciprocal lattice vectors formula_17 subtracted that the new formula_15 now is closer to the origin in formula_15-space than to any formula_17. Solids with a large density of states at the Fermi level become unstable at low temperatures and tend to form ground states where the condensation energy comes from opening a gap at the Fermi surface. Examples of such ground states are superconductors, ferromagnets, Jahn–Teller distortions and spin density waves.
The state occupancy of fermions like electrons is governed by Fermi–Dirac statistics so at finite temperatures the Fermi surface is accordingly broadened. In principle all fermion energy level populations are bound by a Fermi surface although the term is not generally used outside of condensed-matter physics.
Experimental determination.
Electronic Fermi surfaces have been measured through observation of the oscillation of transport properties in magnetic fields formula_18, for example the de Haas–van Alphen effect (dHvA) and the Shubnikov–de Haas effect (SdH). The former is an oscillation in magnetic susceptibility and the latter in resistivity. The oscillations are periodic versus formula_19 and occur because of the quantization of energy levels in the plane perpendicular to a magnetic field, a phenomenon first predicted by Lev Landau. The new states are called Landau levels and are separated by an energy formula_20 where formula_21 is called the cyclotron frequency, formula_22 is the electronic charge, formula_23 is the electron effective mass and formula_24 is the speed of light. In a famous result, Lars Onsager proved that the period of oscillation formula_25 is related to the cross-section of the Fermi surface (typically given in Å^−2) perpendicular to the magnetic field direction formula_26 by the equation formula_27. Thus the determination of the periods of oscillation for various applied field directions allows mapping of the Fermi surface. Observation of the dHvA and SdH oscillations requires magnetic fields large enough that the circumference of the cyclotron orbit is smaller than a mean free path. Therefore, dHvA and SdH experiments are usually performed at high-field facilities like the High Field Magnet Laboratory in the Netherlands, Grenoble High Magnetic Field Laboratory in France, the Tsukuba Magnet Laboratory in Japan or the National High Magnetic Field Laboratory in the United States.
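In practice the oscillations are usually characterised by a dHvA frequency F (the inverse of the period in 1/H), and in SI units the Onsager relation takes the form A⊥ = 2πeF/ħ. A minimal sketch converting an assumed, illustrative frequency into an extremal cross-sectional area:

```python
import math

e    = 1.602e-19       # elementary charge, C
hbar = 1.0546e-34      # reduced Planck constant, J*s

F = 1000.0             # assumed dHvA frequency, tesla (illustrative)

A_m2  = 2.0 * math.pi * e * F / hbar    # extremal cross-section, m^-2 (SI Onsager relation)
A_ang = A_m2 * 1.0e-20                  # convert m^-2 to angstrom^-2

print(f"A_perp = {A_m2:.2e} m^-2 = {A_ang:.3f} angstrom^-2")
# A 1000 T frequency corresponds to a cross-section of roughly 0.1 per square angstrom.
```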
The most direct experimental technique to resolve the electronic structure of crystals in the momentum-energy space (see reciprocal lattice), and, consequently, the Fermi surface, is the angle-resolved photoemission spectroscopy (ARPES). An example of the Fermi surface of superconducting cuprates measured by ARPES is shown in Figure 3.
With positron annihilation it is also possible to determine the Fermi surface as the annihilation process conserves the momentum of the initial particle. Since a positron in a solid will thermalize prior to annihilation, the annihilation radiation carries the information about the electron momentum. The corresponding experimental technique is called angular correlation of electron–positron annihilation radiation (ACAR) as it measures the angular deviation from collinearity of the two annihilation quanta. In this way it is possible to probe the electron momentum density of a solid and determine the Fermi surface. Furthermore, using spin polarized positrons, the momentum distribution for the two spin states in magnetized materials can be obtained. ACAR has many advantages and disadvantages compared to other experimental techniques: It does not rely on UHV conditions, cryogenic temperatures, high magnetic fields or fully ordered alloys. However, ACAR needs samples with a low vacancy concentration as they act as effective traps for positrons. In this way, the first determination of a "smeared Fermi surface" in a 30% alloy was obtained in 1978.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "\\epsilon_i"
},
{
"math_id": 2,
"text": "\\langle n_i\\rangle =\\frac{1}{e^{(\\epsilon_i-\\mu)/k_{\\rm B}T}+1},"
},
{
"math_id": 3,
"text": "\\left\\langle n_i\\right\\rangle"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "\\mu"
},
{
"math_id": 6,
"text": "E_{\\rm F}"
},
{
"math_id": 7,
"text": "T"
},
{
"math_id": 8,
"text": "k_{\\rm B}"
},
{
"math_id": 9,
"text": "T\\to 0"
},
{
"math_id": 10,
"text": "\\left\\langle n_i\\right\\rangle\\to\\begin{cases}1 & (\\epsilon_i<\\mu) \\\\ 0 & (\\epsilon_i>\\mu)\\end{cases}."
},
{
"math_id": 11,
"text": "k_{\\rm F}"
},
{
"math_id": 12,
"text": "k_{\\rm F} = \\frac{p_{\\rm F}}{\\hbar}= \\frac{\\sqrt{2 m E_{\\rm F}}} {\\hbar}"
},
{
"math_id": 13,
"text": "\\hbar"
},
{
"math_id": 14,
"text": "\\mathbf{k}_z"
},
{
"math_id": 15,
"text": "\\mathbf{k}"
},
{
"math_id": 16,
"text": "\\frac{2 \\pi} {a}"
},
{
"math_id": 17,
"text": "\\mathbf{K}"
},
{
"math_id": 18,
"text": "H"
},
{
"math_id": 19,
"text": "1/H"
},
{
"math_id": 20,
"text": "\\hbar \\omega_{\\rm c}"
},
{
"math_id": 21,
"text": "\\omega_{\\rm c} = eH/m^*c"
},
{
"math_id": 22,
"text": "e"
},
{
"math_id": 23,
"text": "m^*"
},
{
"math_id": 24,
"text": "c"
},
{
"math_id": 25,
"text": "\\Delta H"
},
{
"math_id": 26,
"text": "A_{\\perp}"
},
{
"math_id": 27,
"text": "A_{\\perp} = \\frac{2 \\pi e \\Delta H}{\\hbar c}"
}
] | https://en.wikipedia.org/wiki?curid=986096 |
986135 | Peccei–Quinn theory | Now-discarded theory in particle physics
In particle physics, the Peccei–Quinn theory is a well-known, long-standing proposal for the resolution of the strong CP problem formulated by Roberto Peccei and Helen Quinn in 1977. The theory introduces a new anomalous symmetry to the Standard Model along with a new scalar field which spontaneously breaks the symmetry at low energies, giving rise to an axion that suppresses the problematic CP violation. This model has long since been ruled out by experiments and has instead been replaced by similar invisible axion models which utilize the same mechanism to solve the strong CP problem.
Overview.
Quantum chromodynamics (QCD) has a complicated vacuum structure which gives rise to a CP violating θ-term in the Lagrangian. Such a term can have a number of non-perturbative effects, one of which is to give the neutron an electric dipole moment. The absence of this dipole moment in experiments requires the fine-tuning of the θ-term to be very small, something known as the strong CP problem. Motivated as a solution to this problem, Peccei–Quinn (PQ) theory introduces a new complex scalar field formula_0 in addition to the standard Higgs doublet. This scalar field couples to d-type quarks through Yukawa terms, while the Higgs now only couples to the up-type quarks. Additionally, a new global chiral anomalous U(1) symmetry is introduced, the Peccei–Quinn symmetry, under which formula_0 is charged, requiring some of the fermions also have a PQ charge. The scalar field also has a potential
formula_1
where formula_2 is a dimensionless parameter and formula_3 is known as the decay constant. The potential results in formula_0 having the vacuum expectation value of formula_4 at the electroweak phase transition.
Spontaneous symmetry breaking of the Peccei–Quinn symmetry below the electroweak scale gives rise to a pseudo-Goldstone boson known as the axion formula_5, with the resulting Lagrangian taking the form
formula_6
where the first term is the Standard Model (SM) and axion Lagrangian which includes axion-fermion interactions arising from the Yukawa terms. The second term is the CP violating θ-term, with formula_7 the strong coupling constant, formula_8 the gluon field strength tensor, and formula_9 the dual field strength tensor. The third term is known as the color anomaly, a consequence of the Peccei–Quinn symmetry being anomalous, with formula_10 determined by the choice of PQ charges for the quarks. If the symmetry is also anomalous in the electromagnetic sector, there will additionally be an anomaly term coupling the axion to photons. Due to the presence of the color anomaly, the effective formula_11 angle is modified to formula_12, giving rise to an effective potential through instanton effects, which can be approximated in the dilute gas approximation as
formula_13
To minimize the ground state energy, the axion field picks the vacuum expectation value formula_14, with axions now being excitations around this vacuum. This prompts the field redefinition formula_15 which leads to the cancellation of the formula_11 angle, dynamically solving the strong CP problem. It is important to point out that the axion is massive since the Peccei–Quinn symmetry is explicitly broken by the chiral anomaly, with the axion mass roughly given in terms of the pion mass and pion decay constant as formula_16.
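As a rough numerical illustration (an added sketch, not taken from the original papers), the order-of-magnitude relation above can be evaluated for different decay constants: an electroweak-scale decay constant gives an axion mass in the tens-of-keV range, whereas the very large decay constants of the invisible axion models discussed below give masses of order micro-electronvolts. The pion values used are the standard ones (conventions for the pion decay constant differ by factors of √2), and the decay constants are illustrative choices.

```python
# Rough illustration of m_a ~ f_pi * m_pi / f_a.
# The decay constants below are illustrative choices, not predictions.
M_PI_GEV = 0.135  # neutral pion mass, GeV
F_PI_GEV = 0.092  # pion decay constant, GeV (one common convention)

def rough_axion_mass_ev(f_a_gev):
    """Order-of-magnitude axion mass in eV for a decay constant f_a in GeV."""
    return F_PI_GEV * M_PI_GEV / f_a_gev * 1.0e9  # GeV -> eV

for f_a in (250.0, 1.0e12):  # roughly electroweak scale vs. an "invisible" axion
    print(f"f_a = {f_a:.2e} GeV  ->  m_a ~ {rough_axion_mass_ev(f_a):.2e} eV")
```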
Invisible axion models.
For the Peccei–Quinn model to work, the decay constant must be set at the electroweak scale, leading to a heavy axion. Such an axion has long been ruled out by experiments, for example through bounds on rare kaon decays formula_17. Instead, there are a variety of modified models called invisible axion models which introduce the new scalar field formula_0 independently of the electroweak scale, enabling much larger vacuum expectation values, hence very light axions.
The most popular such models are the Kim–Shifman–Vainshtein–Zakharov (KSVZ) and the Dine–Fischler–Srednicki–Zhitnitsky (DFSZ) models. The KSVZ model introduces a new heavy quark doublet with PQ charge, acquiring its mass through a Yukawa term involving formula_0. Since in this model the only fermions that carry a PQ charge are the heavy quarks, there are no tree-level couplings between the SM fermions and the axion. Meanwhile, the DFSZ model replaces the usual Higgs with two PQ-charged Higgs doublets, formula_18 and formula_19, that give mass to the SM fermions through the usual Yukawa terms, while the new scalar only interacts with the Standard Model through a quartic coupling formula_20. Since the two Higgs doublets carry PQ charge, the resulting axion couples to SM fermions at tree level.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\varphi"
},
{
"math_id": 1,
"text": " \nV(\\varphi) = \\mu^2\\bigg(|\\varphi|^2 - \\frac{f^2_a}{2}\\bigg)^2, \n"
},
{
"math_id": 2,
"text": "\\mu"
},
{
"math_id": 3,
"text": "f_a"
},
{
"math_id": 4,
"text": "\\langle \\varphi \\rangle = f_a/\\sqrt 2"
},
{
"math_id": 5,
"text": "a"
},
{
"math_id": 6,
"text": " \n\\mathcal L_{\\text{tot}} = \\mathcal L_{\\text{SM,axions}} + \\theta \\frac{g_s^2}{32\\pi^2}\\tilde G^{\\mu \\nu}_b G_{b\\mu \\nu} +\\xi \\frac{a}{f_a}\\frac{g_s^2}{32\\pi^2}\\tilde G^{\\mu \\nu}_b G_{b\\mu \\nu},\n"
},
{
"math_id": 7,
"text": "g_s"
},
{
"math_id": 8,
"text": "G_{b\\mu \\nu}"
},
{
"math_id": 9,
"text": "\\tilde G_{b\\mu \\nu}"
},
{
"math_id": 10,
"text": "\\xi"
},
{
"math_id": 11,
"text": "\\theta"
},
{
"math_id": 12,
"text": "\\theta + \\xi a/f_a"
},
{
"math_id": 13,
"text": " \nV_{\\text{eff}} \\sim \\cos \\bigg(\\theta+\\xi \\frac{\\langle a\\rangle}{f_a}\\bigg).\n"
},
{
"math_id": 14,
"text": "\\langle a \\rangle = -f_a \\theta/\\xi"
},
{
"math_id": 15,
"text": "a \\rightarrow a+\\langle a\\rangle"
},
{
"math_id": 16,
"text": "m_a \\approx f_\\pi m_\\pi/f_a"
},
{
"math_id": 17,
"text": "K^+ \\rightarrow \\pi^+ + a"
},
{
"math_id": 18,
"text": "H_u"
},
{
"math_id": 19,
"text": "H_d"
},
{
"math_id": 20,
"text": "\\varphi^2 H_u H_d"
}
] | https://en.wikipedia.org/wiki?curid=986135 |
9861379 | Potentiometric titration | Measuring electric potential of a solution as titrant is added
In analytical chemistry, potentiometric titration is a technique similar to direct titration of a redox reaction. It is a useful means of characterizing an acid. No indicator is used; instead the electric potential is measured across the analyte, typically an electrolyte solution. To do this, two electrodes are used: an indicator electrode (such as a glass electrode or a metal-ion indicator electrode) and a reference electrode. Reference electrodes generally used are hydrogen electrodes, calomel electrodes, and silver chloride electrodes. The indicator electrode forms an electrochemical half-cell with the ions of interest in the test solution. The reference electrode forms the other half-cell.
The overall electric potential is calculated as
formula_0
"E"sol is the potential drop over the test solution between the two electrodes. "E"cell is recorded at intervals as the titrant is added. A graph of potential against volume added can be drawn and the end point of the reaction is halfway between the jump in voltage.
"E"cell depends on the concentration of the interested ions with which the indicator electrode is in contact. For example, the electrode reaction may be
formula_1
As the concentration of M"n"+ changes, the "E"cell changes correspondingly. Thus the potentiometric titration involves measurement of "E"cell as the titrant is added. Types of potentiometric titration include acid–base titration (total alkalinity and total acidity), redox titration (HI/HY and cerate), precipitation titration (halides), and complexometric titration (free EDTA and Antical #5).
History.
The first potentiometric titration was carried out in 1893 by Robert Behrend at Ostwald's Institute in Leipzig. He titrated a mercurous nitrate solution with potassium chloride, potassium bromide, and potassium iodide. He used a mercury electrode along with a mercury/mercurous nitrate reference electrode. He found that in a cell composed of mercurous nitrate and mercurous nitrate/mercury, the initial voltage is 0. If potassium chloride is added to the mercurous nitrate on one side, mercury(I) chloride is precipitated. This decreases the osmotic pressure of mercury(I) ions on that side and creates a potential difference. This potential difference increases slowly as additional potassium chloride is added, but then increases more rapidly. He found that the greatest potential difference is achieved once all of the mercurous nitrate has been precipitated. This was used to discern the end points of titrations.
Wilhelm Böttger then developed the tool of potentiometric titration while working at Ostwald's Institute. He used potentiometric titration to observe the differences in titration between strong and weak acids, as well as the behavior of polybasic acids. He introduced the idea of using potentiometric titration for acids and bases that could not be titrated with a colorimetric indicator.
Potentiometric titrations were first used for redox titrations by Crotogino. He titrated halide ions with potassium permanganate using a shiny platinum electrode and a calomel electrode. He said that if an oxidizing agent is added to a reducing solution, then the equilibrium between the reducing substance and the reaction product will shift towards the reaction product. This changes the potential very slowly until the amount of reducing substance becomes very small. A large change in potential then occurs upon a small further addition of the titrating solution, as the final amounts of reducing agent are removed and the potential corresponds solely to the oxidizing agent. This large increase in potential difference signifies the endpoint of the reaction.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E_{\\rm cell} = E_{\\rm ind} - E_{\\rm ref} + E_{\\rm sol}."
},
{
"math_id": 1,
"text": "\\ce{M}^{n+} + n\\ \\ce{e- -> M}"
}
] | https://en.wikipedia.org/wiki?curid=9861379 |
9861462 | Limit point compact | In mathematics, a topological space formula_0 is said to be limit point compact or weakly countably compact if every infinite subset of formula_0 has a limit point in formula_1 This property generalizes a property of compact spaces. In a metric space, limit point compactness, compactness, and sequential compactness are all equivalent. For general topological spaces, however, these three notions of compactness are not equivalent.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "X."
},
{
"math_id": 2,
"text": "\\Reals"
},
{
"math_id": 3,
"text": "X = \\Z \\times Y"
},
{
"math_id": 4,
"text": "\\Z"
},
{
"math_id": 5,
"text": "Y = \\{0,1\\}"
},
{
"math_id": 6,
"text": "X = \\Reals,"
},
{
"math_id": 7,
"text": "(x, \\infty)."
},
{
"math_id": 8,
"text": "a \\in X,"
},
{
"math_id": 9,
"text": "x<a"
},
{
"math_id": 10,
"text": "\\{a\\}."
},
{
"math_id": 11,
"text": "Y"
},
{
"math_id": 12,
"text": "f = \\pi_{\\Z}"
},
{
"math_id": 13,
"text": "f(X) = \\Z"
},
{
"math_id": 14,
"text": "f = \\pi_{\\Z},"
},
{
"math_id": 15,
"text": "\\Reals."
},
{
"math_id": 16,
"text": "A = \\{x_1, x_2, x_3, \\ldots\\}"
},
{
"math_id": 17,
"text": "f"
},
{
"math_id": 18,
"text": "A"
},
{
"math_id": 19,
"text": "f(x_n) = n"
},
{
"math_id": 20,
"text": "(X, \\tau)"
},
{
"math_id": 21,
"text": "(X, \\sigma)"
},
{
"math_id": 22,
"text": "\\sigma"
},
{
"math_id": 23,
"text": "\\tau"
},
{
"math_id": 24,
"text": "(X, \\tau)."
}
] | https://en.wikipedia.org/wiki?curid=9861462 |
9862802 | Layered hidden Markov model | The layered hidden Markov model (LHMM) is a statistical model derived from the hidden Markov model (HMM).
A layered hidden Markov model (LHMM) consists of "N" levels of HMMs, where the HMMs on level "i" + 1 correspond to observation symbols or probability generators at level "i".
Every level "i" of the LHMM consists of "K""i" HMMs running in parallel.
Background.
LHMMs are sometimes useful in specific structures because they can facilitate learning and generalization. For example, even though a fully connected HMM could always be used if enough training data were available, it is often useful to constrain the model by not allowing arbitrary state transitions. In the same way it can be beneficial to embed the HMM in a layered structure which, theoretically, may not be able to solve any problems the basic HMM cannot, but can solve some problems more efficiently because less training data is needed.
The layered hidden Markov model.
A layered hidden Markov model (LHMM) consists of formula_0 levels of HMMs where the HMMs on level formula_1 correspond to observation symbols or probability generators at level formula_0.
Every level formula_2 of the LHMM consists of formula_3 HMMs running in parallel.
At any given level formula_4 in the LHMM a sequence of formula_5 observation symbols
formula_6 can be used to classify the input into one of formula_7 classes, where each class corresponds to each of the formula_7 HMMs at level formula_4. This classification can then be used to generate a new observation for the level formula_8 HMMs. At the lowest layer, i.e. level formula_0, primitive observation symbols formula_9 would be generated directly from observations of the modeled process. For example, in a trajectory tracking task the primitive observation symbols would originate from the quantized sensor values. Thus at each layer in the LHMM the observations originate from the classification of the underlying layer, except for the lowest layer where the observation symbols originate from measurements of the observed process.
It is not necessary to run all levels at the same time granularity. For example, it is possible to use windowing at any level in the structure so that the classification takes the average of several classifications into consideration before passing the results up the layers of the LHMM.
Instead of simply using the winning HMM at level formula_10 as an input symbol for the HMM at level formula_4 it is possible to use it as a probability generator by passing the complete probability distribution up the layers of the LHMM. Thus instead of having a "winner takes all" strategy where the most probable HMM is selected as an observation symbol, the likelihood formula_11 of observing the formula_2th HMM can be used in the recursion formula of the level formula_4 HMM to account for the uncertainty in the classification of the HMMs at level formula_10. Thus, if the classification of the HMMs at level formula_12 is uncertain, it is possible to pay more attention to the a-priori information encoded in the HMM at level formula_4.
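The following sketch (illustrative code, not from any published implementation) shows the "winner takes all" variant for a two-level LHMM with discrete observations: each lower-level HMM scores a window of primitive symbols with the forward algorithm, and the index of the winning HMM becomes an observation symbol for the upper-level HMM. All model parameters are invented for illustration.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (scaled forward algorithm); B[state, symbol] are emission probabilities."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_p += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_p

# Two illustrative lower-level HMMs (K = 2), defined over 3 primitive symbols.
lower_models = [
    (np.array([0.6, 0.4]),                           # pi
     np.array([[0.7, 0.3], [0.2, 0.8]]),             # A
     np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]])),  # B: favours symbols 0 and 1
    (np.array([0.5, 0.5]),
     np.array([[0.9, 0.1], [0.1, 0.9]]),
     np.array([[0.1, 0.1, 0.8], [0.2, 0.2, 0.6]])),  # B: favours symbol 2
]

def classify_window(window):
    """Winner takes all: index of the lower-level HMM with the highest likelihood."""
    return int(np.argmax([forward_loglik(window, *m) for m in lower_models]))

# Primitive observation symbols, processed in windows of four.
primitive = [0, 1, 0, 0, 2, 2, 1, 2, 2, 2, 2, 2]
upper_obs = [classify_window(primitive[i:i + 4]) for i in range(0, len(primitive), 4)]
print("Observation symbols passed to the upper level:", upper_obs)

# The upper-level HMM is then evaluated over these K = 2 symbols in the same way.
pi_u = np.array([0.5, 0.5])
A_u = np.array([[0.8, 0.2], [0.2, 0.8]])
B_u = np.array([[0.9, 0.1], [0.1, 0.9]])
print("Upper-level log-likelihood:", forward_loglik(upper_obs, pi_u, A_u, B_u))
```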
An LHMM could in practice be transformed into a single-layered HMM where all the different models are concatenated together. One advantage that may be expected from using the LHMM over a large single-layer HMM is that the LHMM is less likely to suffer from overfitting, since the individual sub-components are trained independently on smaller amounts of data. A consequence of this is that a significantly smaller amount of training data is required for the LHMM to achieve a performance comparable to that of the HMM. Another advantage is that the layers at the bottom of the LHMM, which are more sensitive to changes in the environment such as the type of sensors, sampling rate etc., can be retrained separately without altering the higher layers of the LHMM.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "N+1"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "K_i"
},
{
"math_id": 4,
"text": "L"
},
{
"math_id": 5,
"text": "T_L"
},
{
"math_id": 6,
"text": "\\mathbf{o}_L=\\{o_1, o_2, \\dots, o_{T_L}\\}"
},
{
"math_id": 7,
"text": "K_L"
},
{
"math_id": 8,
"text": "L-1"
},
{
"math_id": 9,
"text": "\\mathbf{o}_p=\\{o_1, o_2, \\dots, o_{T_p}\\}"
},
{
"math_id": 10,
"text": "L+1"
},
{
"math_id": 11,
"text": "L(i)"
},
{
"math_id": 12,
"text": "n+1"
}
] | https://en.wikipedia.org/wiki?curid=9862802 |
986625 | Total shareholder return | Total shareholder return (TSR) (or simply total return) is a measure of the performance of different companies' stocks and shares over time. It combines share price appreciation and dividends paid to show the total return to the shareholder expressed as an annualized percentage. It is calculated by the growth in capital from purchasing a share in the company assuming that the dividends are reinvested each time they are paid. This growth is expressed as a percentage as the compound annual growth rate.
The main benefit of TSR is that it allows the performance of shares to be compared even though some of the shares may have a high growth and low dividends and others may have low growth and high dividends.
Most stock market indices only use the growth of the prices of the companies making up the index. However, when they use TSR for the companies it is called a total return index or accumulation index. For example, corresponding to the S&P 500 index calculated by Standard and Poor's, there is the S&P 500 TR index.
In the technology sector, a study has found that regardless of a company's size, the more diverse the portfolio, the more difficult it is to generate high TSR.
In practice TSR is difficult to calculate since it involves knowing the price of the shares at the time the dividends are paid. However, as an approximation over one year it can be calculated as follows with:
formula_0 = share price at beginning of year,
formula_1 = share price at end of year,
"Dividends" = dividends paid over year and
"TSR" = total shareholder return, TSR is computed as
formula_2
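A minimal sketch of this one-year approximation (the prices and dividend below are invented for illustration):

```python
def total_shareholder_return(price_begin, price_end, dividends):
    """One-year TSR approximation: (price_end - price_begin + dividends) / price_begin."""
    return (price_end - price_begin + dividends) / price_begin

# Illustrative numbers only: a share bought at 100, worth 112 a year later,
# having paid 3 in dividends, gives a TSR of 15%.
print(f"TSR = {total_shareholder_return(100.0, 112.0, 3.0):.1%}")
```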
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Price_{begin}"
},
{
"math_id": 1,
"text": "Price_{end}"
},
{
"math_id": 2,
"text": "\nTSR={(Price_{end} - Price_{begin} + Dividends)}/{Price_{begin}}"
}
] | https://en.wikipedia.org/wiki?curid=986625 |
98663 | Orbital elements | Parameters that uniquely identify a specific orbit
Orbital elements are the parameters required to uniquely identify a specific orbit. In celestial mechanics these elements are considered in two-body systems using a Kepler orbit. There are many different ways to mathematically describe the same orbit, but certain schemes, each consisting of a set of six parameters, are commonly used in astronomy and orbital mechanics.
A real orbit and its elements change over time due to gravitational perturbations by other objects and the effects of general relativity. A Kepler orbit is an idealized, mathematical approximation of the orbit at a particular time.
Keplerian elements.
The traditional orbital elements are the six Keplerian elements, after Johannes Kepler and his laws of planetary motion.
When viewed from an inertial frame, two orbiting bodies trace out distinct trajectories. Each of these trajectories has its focus at the common center of mass. When viewed from a non-inertial frame centered on one of the bodies, only the trajectory of the opposite body is apparent; Keplerian elements describe these non-inertial trajectories. An orbit has two sets of Keplerian elements depending on which body is used as the point of reference. The reference body (usually the most massive) is called the "primary", the other body is called the "secondary". The primary does not necessarily possess more mass than the secondary, and even when the bodies are of equal mass, the orbital elements depend on the choice of the primary.
Two elements define the shape and size of the ellipse: the eccentricity (e), which describes how elongated the ellipse is compared to a circle, and the semi-major axis (a), half the distance between the periapsis and the apoapsis.
Two elements define the orientation of the orbital plane in which the ellipse is embedded: the inclination (i), the tilt of the orbital plane with respect to the reference plane, measured at the ascending node, and the longitude of the ascending node (Ω), which orients the ascending node with respect to the reference direction (the vernal point).
The remaining two elements are as follows: the argument of periapsis (ω), the angle measured in the orbital plane from the ascending node to the periapsis, and an anomaly at epoch (usually the true anomaly ν or the mean anomaly M), which fixes the position of the body along the ellipse at a specific time, the epoch.
The mean anomaly M is a mathematically convenient fictitious "angle" which does not correspond to a real geometric angle, but rather varies linearly with time, one whole orbital period being represented by an "angle" of 2π radians. It can be converted into the true anomaly ν, which does represent the real geometric angle in the plane of the ellipse, between periapsis (closest approach to the central body) and the position of the orbiting body at any given time. Thus, the true anomaly is shown as the red angle ν in the diagram, and the mean anomaly is not shown.
The angles of inclination, longitude of the ascending node, and argument of periapsis can also be described as the Euler angles defining the orientation of the orbit relative to the reference coordinate system.
Note that non-elliptic trajectories also exist, but are not closed, and are thus not orbits. If the eccentricity is greater than one, the trajectory is a hyperbola. If the eccentricity is equal to one, the trajectory is a parabola. Regardless of eccentricity, the orbit degenerates to a radial trajectory if the angular momentum equals zero.
Required parameters.
Given an inertial frame of reference and an arbitrary epoch (a specified point in time), exactly six parameters are necessary to unambiguously define an arbitrary and unperturbed orbit.
This is because the problem contains six degrees of freedom. These correspond to the three spatial dimensions which define position (x, y, z in a Cartesian coordinate system), plus the velocity in each of these dimensions. These can be described as orbital state vectors, but this is often an inconvenient way to represent an orbit, which is why Keplerian elements are commonly used instead.
Sometimes the epoch is considered a "seventh" orbital parameter, rather than part of the reference frame.
If the epoch is defined to be at the moment when one of the elements is zero, the number of unspecified elements is reduced to five. (The sixth parameter is still necessary to define the orbit; it is merely numerically set to zero by convention or "moved" into the definition of the epoch with respect to real-world clock time.)
Alternative parametrizations.
Keplerian elements can be obtained from orbital state vectors (a three-dimensional vector for the position and another for the velocity) by manual transformations or with computer software.
Other orbital parameters can be computed from the Keplerian elements such as the period, apoapsis, and periapsis. (When orbiting the Earth, the last two terms are known as the apogee and perigee.) It is common to specify the period instead of the semi-major axis in Keplerian element sets, as each can be computed from the other provided the standard gravitational parameter, GM, is given for the central body.
Instead of the mean anomaly at epoch, the mean anomaly M, mean longitude, true anomaly "ν"0, or (rarely) the eccentric anomaly might be used.
Using, for example, the "mean anomaly" instead of "mean anomaly at epoch" means that time t must be specified as a seventh orbital element. Sometimes it is assumed that mean anomaly is zero at the epoch (by choosing the appropriate definition of the epoch), leaving only the five other orbital elements to be specified.
Different sets of elements are used for various astronomical bodies. The eccentricity, e, and either the semi-major axis, a, or the distance of periapsis, q, are used to specify the shape and size of an orbit. The longitude of the ascending node, Ω, the inclination, i, and the argument of periapsis, ω, or the longitude of periapsis, ϖ, specify the orientation of the orbit in its plane. Either the longitude at epoch, "L"0, the mean anomaly at epoch, "M"0, or the time of perihelion passage, "T"0, are used to specify a known point in the orbit. The choices made depend whether the vernal equinox or the node are used as the primary reference. The semi-major axis is known if the mean motion and the gravitational mass are known.
It is also quite common to see either the mean anomaly (M) or the mean longitude (L) expressed directly, without either "M"0 or "L"0 as intermediary steps, as a polynomial function with respect to time. This method of expression will consolidate the mean motion (n) into the polynomial as one of the coefficients. The appearance will be that L or M are expressed in a more complicated manner, but we will appear to need one fewer orbital element.
Mean motion can also be obscured behind citations of the orbital period P.
Euler angle transformations.
The angles Ω, i, ω are the Euler angles (corresponding to α, β, γ in the notation used in that article) characterizing the orientation of the coordinate system
<templatestyles src="Block indent/styles.css"/>x̂, ŷ, ẑ from the inertial coordinate frame Î, Ĵ, K̂
where:
Then, the transformation from the Î, Ĵ, K̂ coordinate frame to the x̂, ŷ, ẑ frame with the Euler angles Ω, i, ω is:
formula_0
formula_1
where
formula_2
The inverse transformation, which computes the 3 coordinates in the I-J-K system given the 3 (or 2) coordinates in the x-y-z system, is represented by the inverse matrix. According to the rules of matrix algebra, the inverse matrix of the product of the 3 rotation matrices is obtained by inverting the order of the three matrices and switching the signs of the three Euler angles.
That is,
formula_3
where
formula_4
The transformation from x̂, ŷ, ẑ to Euler angles Ω, i, ω is:
formula_5
where arg("x","y") signifies the polar argument that can be computed with the standard function atan2(y,x) available in many programming languages.
Orbit prediction.
Under ideal conditions of a perfectly spherical central body, zero perturbations and negligible relativistic effects, all orbital elements except the mean anomaly are constants. The mean anomaly changes linearly with time, scaled by the mean motion,
formula_6
where "μ" is the standard gravitational parameter. Hence if at any instant "t"0 the orbital parameters are ("e"0, "a"0, "i"0, Ω0, "ω"0, "M"0), then the elements at time "t" = "t"0 + "δt" is given by ("e"0, "a"0, "i"0, Ω0, "ω"0, "M"0 + "n" "δt").
Perturbations and elemental variance.
Unperturbed, two-body, Newtonian orbits are always conic sections, so the Keplerian elements define an ellipse, parabola, or hyperbola. Real orbits have perturbations, so a given set of Keplerian elements accurately describes an orbit only at the epoch. Evolution of the orbital elements takes place due to the gravitational pull of bodies other than the primary, the nonsphericity of the primary, atmospheric drag, relativistic effects, radiation pressure, electromagnetic forces, and so on.
Keplerian elements can often be used to produce useful predictions at times near the epoch. Alternatively, real trajectories can be modeled as a sequence of Keplerian orbits that osculate ("kiss" or touch) the real trajectory. They can also be described by the so-called planetary equations, differential equations which come in different forms developed by Lagrange, Gauss, Delaunay, Poincaré, or Hill.
Two-line elements.
Keplerian element parameters can be encoded as text in a number of formats. The most common of them is the NASA / NORAD "two-line elements" (TLE) format, originally designed for use with 80-column punched cards but still in use because 80-character ASCII records can be handled efficiently by modern databases.
Depending on the application and object orbit, the data derived from TLEs older than 30 days can become unreliable. Orbital positions can be calculated from TLEs through simplified perturbation models (SGP4 / SDP4 / SGP8 / SDP8).
Example of a two-line element:
Delaunay variables.
The Delaunay orbital elements were introduced by Charles-Eugène Delaunay during his study of the motion of the Moon. Commonly called "Delaunay variables", they are a set of canonical variables, which are action-angle coordinates. The angles are simple sums of some of the Keplerian angles: the mean longitude formula_7, the longitude of periapsis formula_8, and the longitude of the ascending node formula_9,
along with their respective conjugate momenta, L, G, and H. The momenta L, G, and H are the "action" variables and are more elaborate combinations of the Keplerian elements a, e, and i.
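In one common convention (per unit mass, with "μ" the standard gravitational parameter, "a" the semi-major axis, "e" the eccentricity and "i" the inclination), the actions can be written explicitly as
L = \sqrt{\mu a}, \qquad G = \sqrt{\mu a\,(1 - e^2)} = L\sqrt{1 - e^2}, \qquad H = G \cos i,
so that G is the magnitude of the orbital angular momentum and H is its component along the axis perpendicular to the reference plane.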
Delaunay variables are used to simplify perturbative calculations in celestial mechanics, for example while investigating the Kozai–Lidov oscillations in hierarchical triple systems. The advantage of the Delaunay variables is that they remain well defined and non-singular (except for h, which can be tolerated) when e and / or i are very small: When the test particle's orbit is very nearly circular (formula_10), or very nearly "flat" (formula_11).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\nx_1 &= \\cos \\Omega \\cdot \\cos \\omega - \\sin \\Omega \\cdot \\cos i \\cdot \\sin \\omega\\ ;\\\\\nx_2 &= \\sin \\Omega \\cdot \\cos \\omega + \\cos \\Omega \\cdot \\cos i \\cdot \\sin \\omega\\ ;\\\\\nx_3 &= \\sin i \\cdot \\sin \\omega ;\\\\\n\\, \\\\\ny_1 &=-\\cos \\Omega \\cdot \\sin \\omega - \\sin \\Omega \\cdot \\cos i \\cdot \\cos \\omega\\ ;\\\\\ny_2 &=-\\sin \\Omega \\cdot \\sin \\omega + \\cos \\Omega \\cdot \\cos i \\cdot \\cos \\omega\\ ;\\\\\ny_3 &= \\sin i \\cdot \\cos \\omega\\ ;\\\\\n\\, \\\\\nz_1 &= \\sin i \\cdot \\sin \\Omega\\ ;\\\\\nz_2 &=-\\sin i \\cdot \\cos \\Omega\\ ;\\\\\nz_3 &= \\cos i\\ ;\\\\\n\\end{align}"
},
{
"math_id": 1,
"text": "\\begin{bmatrix}\nx_1 & x_2 & x_3 \\\\\ny_1 & y_2 & y_3 \\\\\nz_1 & z_2 & z_3\n\\end{bmatrix}\n =\n\\begin{bmatrix}\n\\cos\\omega & \\sin\\omega & 0 \\\\\n-\\sin\\omega & \\cos\\omega& 0 \\\\\n0 & 0 & 1\n\\end{bmatrix}\n\\,\n\\begin{bmatrix}\n1 & 0 &0 \\\\\n0 & \\cos i & \\sin i\\\\\n0 & -\\sin i & \\cos i\n\\end{bmatrix}\n\\,\n\\begin{bmatrix}\n\\cos\\Omega & \\sin\\Omega & 0 \\\\\n-\\sin\\Omega & \\cos\\Omega& 0 \\\\\n0 & 0 & 1\n\\end{bmatrix}\\,; "
},
{
"math_id": 2,
"text": "\\begin{align}\n\\mathbf\\hat{x} &= x_1\\mathbf\\hat{I} + x_2\\mathbf\\hat{J} + x_3\\mathbf\\hat{K} ~;\\\\\n\\mathbf\\hat{y} &= y_1\\mathbf\\hat{I} + y_2\\mathbf\\hat{J} + y_3\\mathbf\\hat{K} ~;\\\\\n\\mathbf\\hat{z} &= z_1\\mathbf\\hat{I} + z_2\\mathbf\\hat{J} + z_3\\mathbf\\hat{K} ~.\\\\\n \\end{align}"
},
{
"math_id": 3,
"text": "\\begin{bmatrix}\ni_1 & i_2 & i_3 \\\\\nj_1 & j_2 & j_3 \\\\\nk_1 & k_2 & k_3\n\\end{bmatrix}\n =\n\\begin{bmatrix}\n\\cos\\Omega & -\\sin\\Omega & 0 \\\\\n\\sin\\Omega & \\cos\\Omega& 0 \\\\\n0 & 0 & 1\n\\end{bmatrix}\n\\,\n\\begin{bmatrix}\n1 & 0 &0 \\\\\n0 & \\cos i & -\\sin i\\\\\n0 & \\sin i & \\cos i\n\\end{bmatrix}\n\\,\n\\begin{bmatrix}\n\\cos\\omega & -\\sin\\omega & 0 \\\\\n\\sin\\omega & \\cos\\omega& 0 \\\\\n0 & 0 & 1\n\\end{bmatrix}\\,;\n "
},
{
"math_id": 4,
"text": "\\begin{align}\n\\mathbf\\hat{I} &= i_1\\mathbf\\hat{x} + i_2\\mathbf\\hat{y} + i_3\\mathbf\\hat{z} ~;\\\\\n\\mathbf\\hat{J} &= j_1\\mathbf\\hat{x} + j_2\\mathbf\\hat{y} + j_3\\mathbf\\hat{z} ~;\\\\\n\\mathbf\\hat{K} &= k_1\\mathbf\\hat{x} + k_2\\mathbf\\hat{y} + k_3\\mathbf\\hat{z} ~.\\\\\n \\end{align}"
},
{
"math_id": 5,
"text": "\\begin{align}\n\\Omega &= \\operatorname{arg}\\left( -z_2, z_1 \\right)\\\\\ni &= \\operatorname{arg}\\left( z_3, \\sqrt{{z_1}^2 + {z_2}^2} \\right)\\\\\n\\omega &= \\operatorname{arg}\\left( y_3, x_3 \\right)\\\\\n \\end{align}"
},
{
"math_id": 6,
"text": "n = \\sqrt{\\frac{\\mu } {a^3}}."
},
{
"math_id": 7,
"text": "\\ell = M + \\omega + \\Omega"
},
{
"math_id": 8,
"text": "g = \\omega + \\Omega"
},
{
"math_id": 9,
"text": "h = \\Omega"
},
{
"math_id": 10,
"text": "e \\approx 0"
},
{
"math_id": 11,
"text": "i \\approx 0"
}
] | https://en.wikipedia.org/wiki?curid=98663 |
986932 | Symbolic method (combinatorics) | In combinatorics, the symbolic method is a technique for counting combinatorial objects. It uses the internal structure of the objects to derive formulas for their generating functions. The method is mostly associated with Philippe Flajolet and is detailed in Part A of his book with Robert Sedgewick, "Analytic Combinatorics", while the rest of the book explains how to use complex analysis in order to get asymptotic and probabilistic results on the corresponding generating functions.
For two centuries, generating functions arose mainly through the recurrences satisfied by their coefficients (as can be seen in the seminal works of Bernoulli, Euler, Arthur Cayley, Schröder, Ramanujan, Riordan, Knuth, Comtet, etc.).
It was then slowly realized that the generating functions were capturing many other facets of the initial discrete combinatorial objects, and that this could be done in a more direct formal way: The recursive nature of some combinatorial structures
translates, via some isomorphisms, into noteworthy identities on the corresponding generating functions.
Following the works of Pólya, further advances were thus done in this spirit in the 1970s with generic uses of languages for specifying combinatorial classes and their generating functions, as found in works by Foata and Schützenberger on permutations,
Bender and Goldman on prefabs, and Joyal on combinatorial species.
Note that this symbolic method in enumeration is unrelated to "Blissard's symbolic method", which is just another old name for umbral calculus.
The symbolic method in combinatorics constitutes the first step of many analyses of combinatorial structures,
which can then lead to fast computation schemes, to asymptotic properties and limit laws, to random generation, all of them being suitable to automatization via computer algebra.
Classes of combinatorial structures.
Consider the problem of distributing objects given by a generating function into a set of "n" slots, where a permutation group "G" of degree "n" acts on the slots to create an equivalence relation of filled slot configurations, and asking about the generating function of the configurations by weight of the configurations with respect to this equivalence relation, where the weight of a configuration is the sum of the weights of the objects in the slots. We will first explain how to solve this problem in the labelled and the unlabelled case and use the solution to motivate the creation of classes of combinatorial structures.
The Pólya enumeration theorem solves this problem in the unlabelled case. Let "f"("z") be the ordinary generating function (OGF) of the objects, then the OGF of the configurations is given by the substituted cycle index
formula_0
In the labelled case we use an exponential generating function (EGF) "g"("z") of the objects and apply the Labelled enumeration theorem, which says that the EGF of the configurations is given by
formula_1
We are able to enumerate filled slot configurations using either PET in the unlabelled case or the labelled enumeration theorem in the labelled case. We now ask about the generating function of configurations obtained when there is more than one set of slots, with a permutation group acting on each. Clearly the orbits do not intersect and we may add the respective generating functions. Suppose, for example, that we want to enumerate unlabelled sequences of length two or three of some objects contained in a set "X". There are two sets of slots, the first one containing two slots, and the second one, three slots. The group acting on the first set is formula_2, and on the second slot, formula_3. We represent this by the following formal power series in "X":
formula_4
where the term formula_5 is used to denote the set of orbits under "G" and formula_6, which denotes in the obvious way the process of distributing the objects from "X" with repetition into the "n" slots. Similarly, consider the labelled problem of creating cycles of arbitrary length from a set of labelled objects "X". This yields the following series of actions of cyclic groups:
formula_7
Clearly we can assign meaning to any such power series of quotients (orbits) with respect to permutation groups, where we restrict the groups of degree "n" to the conjugacy classes formula_8 of the symmetric group formula_9, which form a unique factorization domain. (The orbits with respect to two groups from the same conjugacy class are isomorphic.) This motivates the following definition.
A class formula_10 of combinatorial structures is a formal series
formula_11
where formula_12 (the "A" is for "atoms") is the set of primes of the UFD formula_13 and formula_14
In the following we will simplify our notation a bit and write e.g.
formula_15
for the classes mentioned above.
The Flajolet–Sedgewick fundamental theorem.
A theorem in the Flajolet–Sedgewick theory of symbolic combinatorics treats the enumeration problem of labelled and unlabelled combinatorial classes by means of the creation of symbolic operators that make it possible to translate equations involving combinatorial structures directly (and automatically) into equations in the generating functions of these structures.
Let formula_16 be a class of combinatorial structures. The OGF formula_17 of formula_18 where "X" has OGF formula_19 and the EGF formula_20 of formula_18 where "X" is labelled with EGF formula_21 are given by
formula_22
and
formula_23
In the labelled case we have the additional requirement that "X" not contain elements of size zero. It will sometimes prove convenient to add one to formula_20 to indicate the presence of one copy of the empty set. It is possible to assign meaning to both formula_24 (the most common example is the case of unlabelled sets) and formula_25 To prove the theorem simply apply PET (Pólya enumeration theorem) and the labelled enumeration theorem.
The power of this theorem lies in the fact that it makes it possible to construct operators on generating functions that represent combinatorial classes. A structural equation between combinatorial classes thus translates directly into an equation in the corresponding generating functions. Moreover, in the labelled case it is evident from the formula that we may replace formula_21 by the atom "z" and compute the resulting operator, which may then be applied to EGFs. We now proceed to construct the most important operators. The reader may wish to compare with the data on the cycle index page.
The sequence operator SEQ.
This operator corresponds to the class
formula_26
and represents sequences, i.e. the slots are not being permuted and there is exactly one empty sequence. We have
formula_27
and
formula_28
The cycle operator CYC.
This operator corresponds to the class
formula_29
i.e., cycles containing at least one object. We have
formula_30
or
formula_31
and
formula_32
This operator, together with the set operator SET, and their restrictions to specific degrees are used to compute random permutation statistics. There are two useful restrictions of this operator, namely to even and odd cycles.
The labelled even cycle operator CYCeven is
formula_33
which yields
formula_34
This implies that the labelled odd cycle operator CYCodd
formula_35
is given by
formula_36
The multiset/set operator MSET/SET.
The series is
formula_37
i.e., the symmetric group is applied to the slots. This creates multisets in the unlabelled case and sets in the labelled case (there are no multisets in the labelled case because the labels distinguish multiple instances of the same object from the set being put into different slots). We include the empty set in both the labelled and the unlabelled case.
The unlabelled case is done using the function
formula_38
so that
formula_39
Evaluating formula_40 we obtain
formula_41
For the labelled case we have
formula_42
In the labelled case we denote the operator by SET, and in the unlabelled case, by MSET. This is because in the labeled case there are no multisets (the labels distinguish the constituents of a compound combinatorial class) whereas in the unlabeled case there are multisets and sets, with the latter being given by
formula_43
Procedure.
Typically, one starts with the "neutral class" formula_44, containing a single object of size 0 (the "neutral object", often denoted by formula_45), and one or more "atomic classes" formula_46, each containing a single object of size 1. Next, set-theoretic relations involving various simple operations, such as disjoint unions, products, sets, sequences, and multisets define more complex classes in terms of the already defined classes. These relations may be recursive. The elegance of symbolic combinatorics lies in that the set theoretic, or "symbolic", relations translate directly into "algebraic" relations involving the generating functions.
In this article, we will follow the convention of using script uppercase letters to denote combinatorial classes and the corresponding plain letters for the generating functions (so the class formula_47 has generating function formula_48).
There are two types of generating functions commonly used in symbolic combinatorics—ordinary generating functions, used for combinatorial classes of unlabelled objects, and exponential generating functions, used for classes of labelled objects.
It is trivial to show that the generating functions (either ordinary or exponential) for formula_44 and formula_46 are formula_49 and formula_50, respectively. The disjoint union is also simple — for disjoint sets formula_51 and formula_52, formula_53 implies formula_54. The relations corresponding to other operations depend on whether we are talking about labelled or unlabelled structures (and ordinary or exponential generating functions).
Combinatorial sum.
The restriction of unions to disjoint unions is an important one; however, in the formal specification of symbolic combinatorics, it is too much trouble to keep track of which sets are disjoint. Instead, we make use of a construction that guarantees there is no intersection ("be careful, however; this affects the semantics of the operation as well"). In defining the "combinatorial sum" of two sets formula_47 and formula_51, we mark members of each set with a distinct marker, for example formula_55 for members of formula_47 and formula_56 for members of formula_51. The combinatorial sum is then:
formula_57
This is the operation that formally corresponds to addition.
Unlabelled structures.
With unlabelled structures, an ordinary generating function (OGF) is used. The OGF of a sequence formula_58 is defined as
formula_59
Product.
The product of two combinatorial classes formula_47 and formula_51 is specified by defining the size of an ordered pair as the sum of the sizes of the elements in the pair. Thus we have for formula_60 and formula_61, formula_62. This should be a fairly intuitive definition. We now note that the number of elements in formula_63 of size n is
formula_64
Using the definition of the OGF and some elementary algebra, we can show that
formula_65 implies formula_66
Sequence.
The "sequence construction", denoted by formula_67 is defined as
formula_68
In other words, a sequence is the neutral element, or an element of formula_51, or an ordered pair, ordered triple, etc. This leads to the relation
formula_69
Set.
The "set" (or "powerset") "construction", denoted by formula_70 is defined as
formula_71
which leads to the relation
formula_72
where the expansion
formula_73
was used to go from line 4 to line 5.
Multiset.
The "multiset construction", denoted formula_74 is a generalization of the set construction. In the set construction, each element can occur zero or one times. In a multiset, each element can appear an arbitrary number of times. Therefore,
formula_75
This leads to the relation
formula_76
where, similar to the above set construction, we expand formula_77, swap the sums, and substitute for the OGF of formula_51.
Other elementary constructions.
Other important elementary constructions are the cycle construction (formula_78), which is like the sequence construction except that cyclic rotations are not considered distinct; pointing (formula_79), in which one atom of each member of formula_51 is distinguished; and substitution (formula_80), in which each atom of a member of formula_51 is replaced by a member of formula_52.
The derivations for these constructions are too complicated to show here. The results are as follows: for the cycle construction, the OGF is obtained by applying the CYC operator described above to "B"("z"); for pointing, the OGF is "z" times the derivative "B"′("z"); and for substitution, it is the composition "B"("C"("z")).
Examples.
Many combinatorial classes can be built using these elementary constructions. For example, the class of plane trees (that is, trees embedded in the plane, so that the order of the subtrees matters) is specified by the recursive relation
formula_82
In other words, a tree is a root node of size 1 and a sequence of subtrees. This gives
formula_83
we solve for "G"("z") by multiplying formula_84 to get
formula_85
subtracting z and solving for G(z) using the quadratic formula gives
formula_86
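As a quick numerical check (an illustrative sketch, not part of the original treatment), the coefficients of this generating function can be computed directly from the quadratic relation satisfied by "G"("z"); they are the Catalan numbers, counting plane trees by their number of nodes.

```python
def plane_tree_counts(n_max):
    """Coefficients of G(z) from G = z + G^2 (equivalent to G = z/(1 - G)):
    the number of plane trees with n nodes, i.e. the Catalan numbers C_{n-1}."""
    g = [0] * (n_max + 1)
    g[1] = 1
    for n in range(2, n_max + 1):
        g[n] = sum(g[k] * g[n - k] for k in range(1, n))
    return g[1:]

print(plane_tree_counts(8))  # [1, 1, 2, 5, 14, 42, 132, 429]
```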
Another example (and a classic combinatorics problem) is integer partitions. First, define the class of positive integers formula_87, where the size of each integer is its value:
formula_88
The OGF of formula_87 is then
formula_89
Now, define the set of partitions formula_90 as
formula_91
The OGF of formula_90 is
formula_92
Unfortunately, there is no closed form for formula_93; however, the OGF can be used to derive a recurrence relation, or using more advanced methods of analytic combinatorics, calculate the asymptotic behavior of the counting sequence.
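The counting sequence itself is easy to compute from this product form (an illustrative sketch): multiplying in the geometric-series factor for each part size "k" one at a time yields the familiar partition numbers.

```python
def partition_counts(n_max):
    """Coefficients of P(z) = prod_{k >= 1} 1/(1 - z^k), computed by
    multiplying in one geometric-series factor at a time."""
    p = [1] + [0] * n_max          # start from the empty product, P(z) = 1
    for k in range(1, n_max + 1):  # multiply by 1/(1 - z^k)
        for n in range(k, n_max + 1):
            p[n] += p[n - k]
    return p

print(partition_counts(10))  # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```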
Specification and specifiable classes.
The elementary constructions mentioned above allow us to define the notion of "specification". This specification allows us to use a set of recursive equations, with multiple combinatorial classes.
Formally, a specification for a set of combinatorial classes formula_94 is a set of formula_95 equations formula_96, where formula_97 is an expression, whose atoms are formula_98 and the formula_99's, and whose operators are the elementary constructions listed above.
A class of combinatorial structures is said to be "constructible" or "specifiable" when it admits a specification.
For example, the set of trees whose leaves' depth is even (respectively, odd) can be defined using the specification with two classes formula_100 and formula_101. Those classes should satisfy the equations formula_102 and formula_103.
Labelled structures.
An object is "weakly labelled" if each of its atoms has a nonnegative integer label, and each of these labels is distinct. An object is ("strongly" or "well") "labelled", if furthermore, these labels comprise the consecutive integers formula_104. "Note: some combinatorial classes are best specified as labelled structures or unlabelled structures, but some readily admit both specifications." A good example of labelled structures is the class of labelled graphs.
With labelled structures, an exponential generating function (EGF) is used. The EGF of a sequence formula_58 is defined as
formula_105
Product.
For labelled structures, we must use a different definition for product than for unlabelled structures. In fact, if we simply used the cartesian product, the resulting structures would not even be well labelled. Instead, we use the so-called "labelled product", denoted formula_106
For a pair formula_107 and formula_108, we wish to combine the two structures into a single structure. In order for the result to be well labelled, this requires some relabelling of the atoms in formula_109 and formula_110. We will restrict our attention to relabellings that are consistent with the order of the original labels. Note that there are still multiple ways to do the relabelling; thus, each pair of members determines not a single member in the product, but a set of new members. The details of this construction are found on the page of the Labelled enumeration theorem.
To aid this development, let us define a function, formula_111, that takes as its argument a (possibly weakly) labelled object formula_112 and relabels its atoms in an order-consistent way so that formula_113 is well labelled. We then define the labelled product for two objects formula_112 and formula_109 as
formula_114
Finally, the labelled product of two classes formula_47 and formula_51 is
formula_115
The EGF can be derived by noting that for objects of size formula_116 and formula_117, there are formula_118 ways to do the relabelling. Therefore, the total number of objects of size formula_119 is
formula_120
This "binomial convolution" relation for the terms is equivalent to multiplying the EGFs,
formula_121
Sequence.
The "sequence construction" formula_67 is defined similarly to the unlabelled case:
formula_122
and again, as above,
formula_123
Set.
In labelled structures, a set of formula_116 elements corresponds to exactly formula_124 sequences. This is different from the unlabelled case, where some of the permutations may coincide. Thus for formula_70, we have
formula_125
Cycle.
Cycles are also easier than in the unlabelled case. A cycle of length formula_116 corresponds to formula_116 distinct sequences. Thus for formula_81, we have
formula_126
Boxed product.
In labelled structures, the min-boxed product formula_127 is a variation of the original product which requires that the element of formula_51 in the product carry the minimal label. Similarly, a max-boxed product, denoted by formula_128, can be defined in the same manner. Then we have,
formula_129
or equivalently,
formula_130
Example.
An increasing Cayley tree is a labelled non-plane and rooted tree whose labels along any branch stemming from the root form an increasing sequence. Then, let formula_131 be the class of such trees. The recursive specification is now formula_132
Other elementary constructions.
The operators CYCeven, CYCodd, SETeven, and SETodd represent cycles of even and odd length, and sets of even and odd cardinality.
Example.
Stirling numbers of the second kind may be derived and analyzed using the structural decomposition
formula_133
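Assuming the decomposition referred to above is the usual one, a set of non-empty sets of atoms, the EGF for a fixed number k of blocks is (exp(z) - 1)^k / k!, and expanding it with the binomial theorem gives the Stirling numbers of the second kind directly (an illustrative sketch):

```python
from math import comb, factorial

def stirling2(n, k):
    """[z^n / n!] of (e^z - 1)^k / k!, the EGF of a set of k non-empty sets,
    expanded by the binomial theorem; equals the Stirling number S(n, k)."""
    return sum((-1) ** (k - j) * comb(k, j) * j ** n for j in range(k + 1)) // factorial(k)

print([stirling2(4, k) for k in range(5)])  # [0, 1, 7, 6, 1]
```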
The decomposition
formula_134
is used to study unsigned Stirling numbers of the first kind, and in the derivation of the statistics of random permutations. A detailed examination of the exponential generating functions associated to Stirling numbers within symbolic combinatorics may be found on the page on Stirling numbers and exponential generating functions in symbolic combinatorics.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Z(G)(f(z), f(z^2), \\ldots, f(z^n))."
},
{
"math_id": 1,
"text": "\\frac{g(z)^n}{|G|}."
},
{
"math_id": 2,
"text": "E_2"
},
{
"math_id": 3,
"text": "E_3"
},
{
"math_id": 4,
"text": " X^2/E_2 \\; + \\; X^3/E_3 "
},
{
"math_id": 5,
"text": "X^n/G"
},
{
"math_id": 6,
"text": "X^n = X \\times \\cdots \\times X"
},
{
"math_id": 7,
"text": " X/C_1 \\; + \\; X^2/C_2 \\; + \\; X^3/C_3 \\; + \\; X^4/C_4 \\; + \\cdots."
},
{
"math_id": 8,
"text": "\\operatorname{Cl}(S_n)"
},
{
"math_id": 9,
"text": "S_n"
},
{
"math_id": 10,
"text": "\\mathcal{C}\\in \\mathbb{N}[\\mathfrak{A}]"
},
{
"math_id": 11,
"text": "\\mathcal{C} = \\sum_{n \\ge 1} \\sum_{G\\in \\operatorname{Cl}(S_n)} c_G (X^n/G)"
},
{
"math_id": 12,
"text": "\\mathfrak{A}"
},
{
"math_id": 13,
"text": "\\{\\operatorname{Cl}(S_n)\\}_{n\\ge 1}"
},
{
"math_id": 14,
"text": "c_G \\in \\mathbb{N}."
},
{
"math_id": 15,
"text": " E_2 + E_3 \\text{ and } C_1 + C_2 + C_3 + \\cdots."
},
{
"math_id": 16,
"text": "\\mathcal{C}\\in\\mathbb{N}[\\mathfrak{A}]"
},
{
"math_id": 17,
"text": "F(z)"
},
{
"math_id": 18,
"text": "\\mathcal{C}(X)"
},
{
"math_id": 19,
"text": "f(z)"
},
{
"math_id": 20,
"text": "G(z)"
},
{
"math_id": 21,
"text": "g(z)"
},
{
"math_id": 22,
"text": "F(z) = \\sum_{n \\ge 1} \\sum_{G\\in \\operatorname{Cl}(S_n)} c_G Z(G)(f(z), f(z^2), \\ldots, f(z^n))"
},
{
"math_id": 23,
"text": "G(z) = \\sum_{n \\ge 1} \\left(\\sum_{G\\in \\operatorname{Cl}(S_n)} \\frac{c_G}{|G|}\\right) g(z)^n. "
},
{
"math_id": 24,
"text": "\\mathcal{C}\\in\\mathbb{Z}[\\mathfrak{A}]"
},
{
"math_id": 25,
"text": "\\mathcal{C}\\in\\mathbb{Q}[\\mathfrak{A}]."
},
{
"math_id": 26,
"text": "1 + E_1 + E_2 + E_3 + \\cdots"
},
{
"math_id": 27,
"text": " F(z) = 1 + \\sum_{n\\ge 1} Z(E_n)(f(z), f(z^2), \\ldots, f(z^n)) =\n1 + \\sum_{n\\ge 1} f(z)^n = \\frac{1}{1-f(z)}"
},
{
"math_id": 28,
"text": " G(z) = 1 + \\sum_{n\\ge 1} \\left(\\frac{1}{|E_n|}\\right) g(z)^n = \n\\frac{1}{1-g(z)}."
},
{
"math_id": 29,
"text": "C_1 + C_2 + C_3 + \\cdots"
},
{
"math_id": 30,
"text": " \nF(z) = \\sum_{n\\ge 1} Z(C_n)(f(z), f(z^2), \\ldots, f(z^n)) =\n\\sum_{n\\ge 1} \\frac{1}{n} \\sum_{d\\mid n} \\varphi(d) f(z^d)^{n/d}"
},
{
"math_id": 31,
"text": "\nF(z) = \\sum_{k\\ge 1} \\varphi(k) \\sum_{m\\ge 1} \\frac{1}{km} f(z^k)^m =\n\\sum_{k\\ge 1} \\frac{\\varphi(k)}{k} \\log \\frac{1}{1-f(z^k)}"
},
{
"math_id": 32,
"text": " G(z) = \\sum_{n\\ge 1} \\left(\\frac{1}{|C_n|}\\right) g(z)^n = \n\\log \\frac{1}{1-g(z)}."
},
{
"math_id": 33,
"text": "C_2 + C_4 + C_6 + \\cdots"
},
{
"math_id": 34,
"text": " G(z) = \\sum_{n\\ge 1} \\left(\\frac{1}{|C_{2n}|}\\right) g(z)^{2n} = \n\\frac{1}{2} \\log \\frac{1}{1-g(z)^2}."
},
{
"math_id": 35,
"text": "C_1 + C_3 + C_5 + \\cdots"
},
{
"math_id": 36,
"text": " G(z) = \\log \\frac{1}{1-g(z)} - \\frac{1}{2} \\log \\frac{1}{1-g(z)^2} =\n\\frac{1}{2} \\log \\frac{1+g(z)}{1-g(z)}."
},
{
"math_id": 37,
"text": "1 + S_1 + S_2 + S_3 + \\cdots"
},
{
"math_id": 38,
"text": "M(f(z), y) = \\sum_{n\\ge 0} y^n Z(S_n)(f(z), f(z^2), \\ldots, f(z^n))"
},
{
"math_id": 39,
"text": "\\mathfrak{M}(f(z)) = M(f(z), 1)."
},
{
"math_id": 40,
"text": "M(f(z), 1)"
},
{
"math_id": 41,
"text": " F(z) = \\exp \\left( \\sum_{\\ell\\ge 1} \\frac{f(z^\\ell)}{\\ell} \\right)."
},
{
"math_id": 42,
"text": " G(z) = 1 + \\sum_{n\\ge 1} \\left(\\frac{1}{|S_n|}\\right) g(z)^n = \n\\sum_{n\\ge 0} \\frac{g(z)^n}{n!} = \\exp g(z)."
},
{
"math_id": 43,
"text": " F(z) = \\exp \\left( \\sum_{\\ell\\ge 1} (-1)^{\\ell-1} \\frac{f(z^\\ell)} \\ell \\right)."
},
{
"math_id": 44,
"text": "\\mathcal{E}"
},
{
"math_id": 45,
"text": "\\epsilon"
},
{
"math_id": 46,
"text": "\\mathcal{Z}"
},
{
"math_id": 47,
"text": "\\mathcal{A}"
},
{
"math_id": 48,
"text": "A(z)"
},
{
"math_id": 49,
"text": "E(z) = 1"
},
{
"math_id": 50,
"text": "Z(z) = z"
},
{
"math_id": 51,
"text": "\\mathcal{B}"
},
{
"math_id": 52,
"text": "\\mathcal{C}"
},
{
"math_id": 53,
"text": "\\mathcal{A} = \\mathcal{B} \\cup \\mathcal{C}"
},
{
"math_id": 54,
"text": "A(z) = B(z) + C(z)"
},
{
"math_id": 55,
"text": "\\circ"
},
{
"math_id": 56,
"text": "\\bullet"
},
{
"math_id": 57,
"text": "\\mathcal{A} + \\mathcal{B} = (\\mathcal{A} \\times \\{\\circ\\}) \\cup (\\mathcal{B} \\times \\{\\bullet\\})"
},
{
"math_id": 58,
"text": "A_n"
},
{
"math_id": 59,
"text": "A(x)=\\sum_{n=0}^\\infty A_n x^n"
},
{
"math_id": 60,
"text": "a \\in \\mathcal{A}"
},
{
"math_id": 61,
"text": "b \\in \\mathcal{B}"
},
{
"math_id": 62,
"text": "|(a,b)| = |a| + |b|"
},
{
"math_id": 63,
"text": "\\mathcal{A} \\times \\mathcal{B}"
},
{
"math_id": 64,
"text": "\\sum_{k=0}^n A_k B_{n-k}."
},
{
"math_id": 65,
"text": "\\mathcal{A} = \\mathcal{B} \\times \\mathcal{C}"
},
{
"math_id": 66,
"text": "A(z) = B(z) \\cdot C(z)."
},
{
"math_id": 67,
"text": "\\mathcal{A} = \\mathfrak{G}\\{\\mathcal{B}\\}"
},
{
"math_id": 68,
"text": "\\mathfrak{G}\\{\\mathcal{B}\\} = \\mathcal{E} + \\mathcal{B} + (\\mathcal{B} \\times \\mathcal{B}) + (\\mathcal{B} \\times \\mathcal{B} \\times \\mathcal{B}) + \\cdots."
},
{
"math_id": 69,
"text": "A(z) = 1 + B(z) + B(z)^{2} + B(z)^{3} + \\cdots = \\frac{1}{1 - B(z)}."
},
{
"math_id": 70,
"text": "\\mathcal{A} = \\mathfrak{P}\\{\\mathcal{B}\\}"
},
{
"math_id": 71,
"text": "\\mathfrak{P}\\{\\mathcal{B}\\} = \\prod_{\\beta \\in \\mathcal{B}}(\\mathcal{E} + \\{\\beta\\}),"
},
{
"math_id": 72,
"text": "\\begin{align}A(z) &{} = \\prod_{\\beta \\in \\mathcal{B}}(1 + z^{|\\beta|}) \\\\\n &{} = \\prod_{n=1}^\\infty (1 + z^n)^{B_n} \\\\\n &{} = \\exp \\left ( \\ln \\prod_{n=1}^{\\infty}(1 + z^n)^{B_n} \\right ) \\\\\n &{} = \\exp \\left ( \\sum_{n = 1}^\\infty B_n \\ln(1 + z^n) \\right ) \\\\\n &{} = \\exp \\left ( \\sum_{n = 1}^\\infty B_n \\cdot \\sum_{k = 1}^{\\infty} \\frac{(-1)^{k-1}z^{nk}} k \\right ) \\\\\n &{} = \\exp \\left ( \\sum_{k = 1}^\\infty \\frac{(-1)^{k-1}} k \\cdot \\sum_{n = 1}^\\infty B_n z^{nk} \\right ) \\\\\n &{} = \\exp \\left ( \\sum_{k = 1}^\\infty \\frac{(-1)^{k-1} B(z^k)} k \\right),\n\\end{align}"
},
{
"math_id": 73,
"text": "\\ln(1 + u) = \\sum_{k = 1}^\\infty \\frac{(-1)^{k-1} u^k} k "
},
{
"math_id": 74,
"text": "\\mathcal{A} = \\mathfrak{M}\\{\\mathcal{B}\\}"
},
{
"math_id": 75,
"text": "\\mathfrak{M}\\{\\mathcal{B}\\} = \\prod_{\\beta \\in \\mathcal{B}} \\mathfrak{G}\\{\\beta\\}."
},
{
"math_id": 76,
"text": "\\begin{align} A(z) &{} = \\prod_{\\beta \\in \\mathcal{B}} (1 - z^{|\\beta|})^{-1} \\\\\n &{} = \\prod_{n = 1}^\\infty (1 - z^n)^{-B_n} \\\\\n &{} = \\exp \\left ( \\ln \\prod_{n = 1}^\\infty (1 - z^n)^{-B_n} \\right ) \\\\\n &{} = \\exp \\left ( \\sum_{n=1}^\\infty-B_n \\ln (1 - z^n) \\right ) \\\\\n &{} = \\exp \\left ( \\sum_{k=1}^\\infty \\frac{B(z^k)} k \\right ),\n\\end{align}\n"
},
{
"math_id": 77,
"text": "\\ln (1 - z^n)"
},
{
"math_id": 78,
"text": "\\mathfrak{C}\\{\\mathcal{B}\\}"
},
{
"math_id": 79,
"text": "\\Theta\\mathcal{B}"
},
{
"math_id": 80,
"text": "\\mathcal{B} \\circ \\mathcal{C}"
},
{
"math_id": 81,
"text": "\\mathcal{A} = \\mathfrak{C}\\{\\mathcal{B}\\}"
},
{
"math_id": 82,
"text": "\\mathcal{G} = \\mathcal{Z} \\times \\operatorname{SEQ}\\{\\mathcal{G}\\}."
},
{
"math_id": 83,
"text": "G(z) = \\frac{z}{1 - G(z)}"
},
{
"math_id": 84,
"text": "1 - G(z)"
},
{
"math_id": 85,
"text": "G(z) - G(z)^2 = z"
},
{
"math_id": 86,
"text": "G(z) = \\frac{1 - \\sqrt{1 - 4z}}{2}."
},
{
"math_id": 87,
"text": "\\mathcal{I}"
},
{
"math_id": 88,
"text": "\\mathcal{I} = \\mathcal{Z} \\times \\operatorname{SEQ}\\{\\mathcal{Z}\\}"
},
{
"math_id": 89,
"text": "I(z) = \\frac{z}{1 - z}."
},
{
"math_id": 90,
"text": "\\mathcal{P}"
},
{
"math_id": 91,
"text": "\\mathcal{P} = \\operatorname{MSET}\\{\\mathcal{I}\\}. "
},
{
"math_id": 92,
"text": "P(z) = \\exp \\left ( I(z) + \\frac{1}{2} I(z^{2}) + \\frac{1}{3} I(z^{3}) + \\cdots \\right ). "
},
{
"math_id": 93,
"text": "P(z)"
},
{
"math_id": 94,
"text": "(\\mathcal A_1,\\dots,\\mathcal A_r)"
},
{
"math_id": 95,
"text": "r"
},
{
"math_id": 96,
"text": "\\mathcal A_i=\\Phi_i(\\mathcal A_1,\\dots,\\mathcal A_r)"
},
{
"math_id": 97,
"text": "\\Phi_i"
},
{
"math_id": 98,
"text": "\\mathcal E,\\mathcal Z"
},
{
"math_id": 99,
"text": "\\mathcal A_i"
},
{
"math_id": 100,
"text": "\\mathcal A_\\text{even}"
},
{
"math_id": 101,
"text": "\\mathcal A_\\text{odd}"
},
{
"math_id": 102,
"text": "\\mathcal A_\\text{odd}=\\mathcal Z\\times \\operatorname{Seq}_{\\ge1}\\mathcal A_\\text{even}"
},
{
"math_id": 103,
"text": "\\mathcal A_\\text{even} = \\mathcal Z\\times \\operatorname{Seq}\\mathcal A_\\text{odd}"
},
{
"math_id": 104,
"text": "[1 \\ldots n]"
},
{
"math_id": 105,
"text": "A(x)=\\sum_{n=0}^\\infty A_n \\frac{x^n}{n!}."
},
{
"math_id": 106,
"text": "\\mathcal{A} \\star \\mathcal{B}."
},
{
"math_id": 107,
"text": "\\beta \\in \\mathcal{B}"
},
{
"math_id": 108,
"text": "\\gamma \\in \\mathcal{C}"
},
{
"math_id": 109,
"text": "\\beta"
},
{
"math_id": 110,
"text": "\\gamma"
},
{
"math_id": 111,
"text": "\\rho"
},
{
"math_id": 112,
"text": "\\alpha"
},
{
"math_id": 113,
"text": "\\rho(\\alpha)"
},
{
"math_id": 114,
"text": "\\alpha \\star \\beta = \\{(\\alpha',\\beta'): (\\alpha',\\beta') \\text{ is well-labelled, } \\rho(\\alpha') = \\alpha, \\rho(\\beta') = \\beta \\}."
},
{
"math_id": 115,
"text": "\\mathcal{A} \\star \\mathcal{B} = \\bigcup_{\\alpha \\in \\mathcal{A}, \\beta \\in \\mathcal{B}} (\\alpha \\star \\beta)."
},
{
"math_id": 116,
"text": "k"
},
{
"math_id": 117,
"text": "n-k"
},
{
"math_id": 118,
"text": "{n \\choose k}"
},
{
"math_id": 119,
"text": "n"
},
{
"math_id": 120,
"text": "\\sum_{k=0}^n {n \\choose k} A_k B_{n-k}."
},
{
"math_id": 121,
"text": "A(z) \\cdot B(z)."
},
{
"math_id": 122,
"text": "\\mathfrak{G}\\{\\mathcal{B}\\} = \\mathcal{E} + \\mathcal{B} + (\\mathcal{B} \\star \\mathcal{B}) + (\\mathcal{B} \\star \\mathcal{B} \\star \\mathcal{B}) + \\cdots"
},
{
"math_id": 123,
"text": "A(z) = \\frac{1}{1 - B(z)}"
},
{
"math_id": 124,
"text": "k!"
},
{
"math_id": 125,
"text": "A(z) = \\sum_{k = 0}^\\infty \\frac{B(z)^k}{k!} = \\exp(B(z))"
},
{
"math_id": 126,
"text": "A(z) = \\sum_{k = 0}^\\infty \\frac{B(z)^k}{k} = \\ln\\left(\\frac 1 {1-B(z)}\\right)."
},
{
"math_id": 127,
"text": "\\mathcal{A}_{\\min} = \\mathcal{B}^{\\square}\\star \\mathcal{C}"
},
{
"math_id": 128,
"text": "\\mathcal{A}_{\\max} = \\mathcal{B}^\\blacksquare \\star \\mathcal{C}"
},
{
"math_id": 129,
"text": "A_{\\min}(z)=A_{\\max}(z)=\\int^z_0 B'(t)C(t)\\,dt."
},
{
"math_id": 130,
"text": "A_{\\min}'(t)=A_{\\max}'(t)=B'(t)C(t)."
},
{
"math_id": 131,
"text": "\\mathcal{L}"
},
{
"math_id": 132,
"text": "\\mathcal{L}=\\mathcal{Z}^\\square\\star \\operatorname{SET}(\\mathcal{L})."
},
{
"math_id": 133,
"text": " \\operatorname{SET}(\\operatorname{SET}_{\\ge 1}(\\mathcal{Z}))."
},
{
"math_id": 134,
"text": " \\operatorname{SET}(\\operatorname{CYC}(\\mathcal{Z}))"
}
] | https://en.wikipedia.org/wiki?curid=986932 |
9871405 | Image rectification | Image rectification is a transformation process used to project images onto a common image plane. This process has several degrees of freedom and there are many strategies for transforming images to the common plane. Image rectification is used in computer stereo vision to simplify the problem of finding matching points between images (i.e. the correspondence problem), and in geographic information systems (GIS) to merge images taken from multiple perspectives into a common map coordinate system.
In computer vision.
Computer stereo vision takes two or more images with known relative camera positions that show an object from different viewpoints. For each pixel it then determines the corresponding scene point's depth (i.e. distance from the camera) by first finding matching pixels (i.e. pixels showing the same scene point) in the other image(s) and then applying triangulation to the found matches to determine their depth.
Finding matches in stereo vision is restricted by epipolar geometry: Each pixel's match in another image can only be found on a line called the epipolar line.
If two images are coplanar, i.e. they were taken such that the right camera is only offset horizontally compared to the left camera (not being moved towards the object or rotated), then each pixel's epipolar line is horizontal and at the same vertical position as that pixel. However, in general settings (the camera does move towards the object or rotate) the epipolar lines are slanted. Image rectification warps both images such that they appear as if they have been taken with only a horizontal displacement and as a consequence all epipolar lines are horizontal, which slightly simplifies the stereo matching process. Note however, that rectification does not fundamentally change the stereo matching process: It searches on lines, slanted ones before and horizontal ones after rectification.
Image rectification is also an equivalent (and more often used) alternative to perfect camera coplanarity. Even with high-precision equipment, image rectification is usually performed because it may be impractical to maintain perfect coplanarity between cameras.
Image rectification can only be performed with two images at a time and simultaneous rectification of more than two images is generally impossible.
Transformation.
If the images to be rectified are taken from camera pairs without geometric distortion, this calculation can easily be made with a linear transformation. X & Y rotation puts the images on the same plane, scaling makes the image frames the same size, and Z rotation & skew adjustments make the image pixel rows line up directly. The rigid alignment of the cameras needs to be known (from calibration), and the calibration coefficients are used by the transform.
In performing the transform, if the cameras themselves are calibrated for internal parameters, an essential matrix provides the relationship between the cameras. The more general case (without camera calibration) is represented by the fundamental matrix. If the fundamental matrix is not known, it is necessary to find preliminary point correspondences between stereo images to facilitate its extraction.
Algorithms.
There are three main categories for image rectification algorithms: planar rectification, cylindrical rectification and polar rectification.
Implementation details.
All rectified images satisfy the following two properties: all epipolar lines are parallel to the horizontal axis, and corresponding points have identical vertical coordinates.
In order to transform the original image pair into a rectified image pair, it is necessary to find a projective transformation "H". Constraints are placed on "H" to satisfy the two properties above. For example, constraining the epipolar lines to be parallel with the horizontal axis means that epipoles must be mapped to the infinite point "[1,0,0]T" in homogeneous coordinates. Even with these constraints, "H" still has four degrees of freedom. It is also necessary to find a matching "H' " to rectify the second image of an image pair. Poor choices of "H" and "H' " can result in rectified images that are dramatically changed in scale or severely distorted.
There are many different strategies for choosing a projective transform "H" for each image from all possible solutions. One advanced method is minimizing the disparity or least-square difference of corresponding points on the horizontal axis of the rectified image pair. Another method is separating "H" into a specialized projective transform, similarity transform, and shearing transform to minimize image distortion. One simple method is to rotate both images to look perpendicular to the line joining their collective optical centers, twist the optical axes so the horizontal axis of each image points in the direction of the other image's optical center, and finally scale the smaller image to match for line-to-line correspondence. This process is demonstrated in the following example.
Example.
Our model for this example is based on a pair of images that observe a 3D point "P", which corresponds to "p" and "p' " in the pixel coordinates of each image. "O" and "O' " represent the optical centers of each camera, with known camera matrices formula_0 and formula_1 (we assume the world origin is at the first camera). We will briefly outline and depict the results for a simple approach to find a "H" and "H' " projective transformation that rectify the image pair from the example scene.
First, we compute the epipoles, "e" and "e' " in each image:
formula_2
formula_3
Second, we find a projective transformation "H1" that rotates our first image to be parallel to the baseline connecting "O" and "O' " (row 2, column 1 of 2D image set). This rotation can be found by using the cross product between the original and the desired optical axes. Next, we find the projective transformation "H2" that takes the rotated image and twists it so that the horizontal axis aligns with the baseline. If calculated correctly, this second transformation should map the "e" to infinity on the x axis (row 3, column 1 of 2D image set). Finally, define formula_4 as the projective transformation for rectifying the first image.
Third, through an equivalent operation, we can find "H' " to rectify the second image (column 2 of 2D image set). Note that "H'1" should rotate the second image's optical axis to be parallel with the transformed optical axis of the first image. One strategy is to pick a plane parallel to the line where the two original optical axes intersect to minimize distortion from the reprojection process. In this example, we simply define "H' " using the rotation matrix "R" and initial projective transformation "H" as formula_5.
Finally, we scale both images to the same approximate resolution and align the now horizontal epipoles for easier horizontal scanning for correspondences (row 4 of 2D image set).
Note that it is possible to perform this and similar algorithms without having the camera parameter matrices "M" and "M' ". All that is required is a set of seven or more image to image correspondences to compute the fundamental matrices and epipoles.
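For concreteness, the epipole formulas above can be evaluated numerically. The following NumPy sketch uses hypothetical intrinsic matrices, rotation and translation; they stand in for calibrated values and are not taken from the figures:
<syntaxhighlight lang="python">
import numpy as np

# hypothetical camera parameters (in practice these come from calibration)
K  = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
K2 = K.copy()                                   # second camera intrinsics K'
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0,            1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])   # small rotation about the y axis
T = np.array([1.0, 0.0, 0.1])                   # baseline translation

# epipoles in homogeneous image coordinates, as in the formulas above:
# e = -K R^T T   and   e' = K' T
e  = -K @ R.T @ T
e2 = K2 @ T

print(e / e[2])      # epipole of the first image, normalized
print(e2 / e2[2])    # epipole of the second image, normalized
</syntaxhighlight>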
In geographic information system.
Image rectification in GIS converts images to a standard map coordinate system. This is done by matching ground control points (GCP) in the mapping system to points in the image. These GCPs are used to compute the necessary image transforms.
Primary difficulties in the process occur
The maps that are used with rectified images are non-topographical. However, the images to be used may contain distortion from terrain. Image orthorectification additionally removes these effects.
Image rectification is a standard feature available with GIS software packages.
See also.
<templatestyles src="Div col/styles.css"/> | [
{
"math_id": 0,
"text": "M=K[I~ 0]"
},
{
"math_id": 1,
"text": "M'=K'[R~ T]"
},
{
"math_id": 2,
"text": "\ne=M \\begin{bmatrix} O' \\\\ 1 \\end{bmatrix}\n=M\\begin{bmatrix} -R^T T \\\\ 1 \\end{bmatrix} = K[I~ 0]\\begin{bmatrix} -R^T T \\\\ 1 \\end{bmatrix} = -KR^T T\n"
},
{
"math_id": 3,
"text": "\ne'=M'\\begin{bmatrix} O \\\\ 1 \\end{bmatrix} = M'\\begin{bmatrix} 0 \\\\ 1 \\end{bmatrix} = K'[R~T]\\begin{bmatrix} 0 \\\\ 1 \\end{bmatrix} = K'T\n"
},
{
"math_id": 4,
"text": "H=H_2H_1"
},
{
"math_id": 5,
"text": "H' = HR^T"
}
] | https://en.wikipedia.org/wiki?curid=9871405 |
9871765 | Tango tree | A tango tree is a type of binary search tree proposed by Erik D. Demaine, Dion Harmon, John Iacono, and Mihai Pătrașcu in 2004. It is named after Buenos Aires, of which the tango is emblematic.
It is an online binary search tree that achieves an formula_0 competitive ratio relative to the offline optimal binary search tree, while only using formula_0 additional bits of memory per node. This improved upon the previous best known competitive ratio, which was formula_1.
Structure.
Tango trees work by partitioning a binary search tree into a set of "preferred paths", which are themselves stored in auxiliary trees (so the tango tree is represented as a tree of trees).
Reference tree.
To construct a tango tree, we simulate a complete binary search tree called the "reference tree", which is simply a traditional binary search tree containing all the elements. This tree never shows up in the actual implementation, but is the conceptual basis behind the following pieces of a tango tree.
In particular, the height of the reference tree is ⌈log2("n"+1)⌉. This equals the length of the longest path, and therefore the size of the largest auxiliary tree. By keeping the auxiliary trees reasonably balanced, the height of the auxiliary trees can be bounded to "O"(log log "n"). This is the source of the algorithm's performance guarantees.
Preferred paths.
First, we define for each node its "preferred child", which informally is the most-recently visited child by a traditional binary search tree lookup. More formally, consider a subtree "T", rooted at "p", with children "l" (left) and "r" (right). We set "r" as the preferred child of "p" if the most recently accessed node in "T" is in the subtree rooted at "r", and "l" as the preferred child otherwise. Note that if the most recently accessed node of "T" is "p" itself, then "l" is the preferred child by definition.
A preferred path is defined by starting at the root and following the preferred children until reaching a leaf node. Removing the nodes on this path partitions the remainder of the tree into a number of subtrees, and we recurse on each subtree (forming a preferred path from its root, which partitions the subtree into more subtrees).
Auxiliary trees.
To represent a preferred path, we store its nodes in a balanced binary search tree, specifically a red–black tree. For each non-leaf node "n" in a preferred path "P", it has a non-preferred child "c", which is the root of a new auxiliary tree. We attach this other auxiliary tree's root ("c") to "n" in "P", thus linking the auxiliary trees together. We also augment the auxiliary tree by storing at each node the minimum and maximum depth (depth in the reference tree, that is) of nodes in the subtree under that node.
Algorithm.
Searching.
To search for an element in the tango tree, we simply simulate searching the reference tree. We start by searching the preferred path connected to the root, which is simulated by searching the auxiliary tree corresponding to that preferred path. If the auxiliary tree doesn't contain the desired element, the search terminates on the parent of the root of the subtree containing the desired element (the beginning of another preferred path), so we simply proceed by searching the auxiliary tree for that preferred path, and so forth.
Updating.
In order to maintain the structure of the tango tree (auxiliary trees correspond to preferred paths), we must do some updating work whenever preferred children change as a result of searches. When a preferred child changes, the top part of a preferred path becomes detached from the bottom part (which becomes its own preferred path) and reattached to another preferred path (which becomes the new bottom part). In order to do this efficiently, we'll define "cut" and "join" operations on our auxiliary trees.
Join.
Our "join" operation will combine two auxiliary trees as long as they have the property that the top node of one (in the reference tree) is a child of the bottom node of the other (essentially, that the corresponding preferred paths can be concatenated). This will work based on the "concatenate" operation of red–black trees, which combines two trees as long as they have the property that all elements of one are less than all elements of the other, and "split", which does the reverse. In the reference tree, note that there exist two nodes in the top path such that a node is in the bottom path if and only if its key-value is between them. Now, to join the bottom path to the top path, we simply "split" the top path between those two nodes, then "concatenate" the two resulting auxiliary trees on either side of the bottom path's auxiliary tree, and we have our final, joined auxiliary tree.
Cut.
Our "cut" operation will break a preferred path into two parts at a given node, a top part and a bottom part. More formally, it'll partition an auxiliary tree into two auxiliary trees, such that one contains all nodes at or above a certain depth in the reference tree, and the other contains all nodes below that depth. As in "join", note that the top part has two nodes that bracket the bottom part. Thus, we can simply "split" on each of these two nodes to divide the path into three parts, then "concatenate" the two outer ones so we end up with two parts, the top and bottom, as desired.
Analysis.
In order to bound the competitive ratio for tango trees, we must find a lower bound on the performance of the optimal offline tree that we use as a benchmark. Once we find an upper bound on the performance of the tango tree, we can divide them to bound the competitive ratio.
Interleave bound.
To find a lower bound on the work done by the optimal offline binary search tree, we again use the notion of preferred children. When considering an access sequence (a sequence of searches), we keep track of how many times a reference tree node's preferred child switches. The total number of switches (summed over all nodes) gives an asymptotic lower bound on the work done by any binary search tree algorithm on the given access sequence. This is called the "interleave lower bound".
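The interleave count can be illustrated with a short simulation. The sketch below walks an implicit perfect reference tree over the keys 0, ..., n−1 for each access and counts how often a node's preferred child changes; treating the very first assignment of a preferred child as "no switch" is a simplifying convention of this sketch, not part of the formal definition:
<syntaxhighlight lang="python">
def interleave_lower_bound(n, accesses):
    """Count preferred-child switches in an implicit perfect binary search
    tree over the keys 0..n-1 (the reference tree) for an access sequence."""
    preferred = {}                       # node -> 'left' or 'right'
    switches = 0
    for x in accesses:
        lo, hi = 0, n - 1
        while lo <= hi:
            node = (lo + hi) // 2
            # by definition, accessing the node itself counts toward its left child
            direction = 'left' if x <= node else 'right'
            if node in preferred and preferred[node] != direction:
                switches += 1
            preferred[node] = direction
            if x == node:
                break
            if x < node:
                hi = node - 1
            else:
                lo = node + 1
    return switches

print(interleave_lower_bound(15, [0, 14, 0, 14, 0, 14]))   # alternating accesses force many switches
</syntaxhighlight>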
Tango tree.
In order to connect this to tango trees, we will find an upper bound on the work done by the tango tree for a given access sequence. Our upper bound will be formula_2, where "k" is the number of interleaves.
The total cost is divided into two parts, searching for the element, and updating the structure of the tango tree to maintain the proper invariants (switching preferred children and re-arranging preferred paths).
Searching.
To see that the searching (not updating) fits in this bound, simply note that every time an auxiliary tree search is unsuccessful and we have to move to the next auxiliary tree, that results in a preferred child switch (since the parent preferred path now switches directions to join the child preferred path). Since all auxiliary tree searches are unsuccessful except the last one (we stop once a search is successful, naturally), we search formula_3 auxiliary trees. Each search takes formula_0, because an auxiliary tree's size is bounded by formula_4, the height of the reference tree.
Updating.
The update cost fits within this bound as well, because we only have to perform one "cut" and one "join" for every visited auxiliary tree. A single "cut" or "join" operation takes only a constant number of searches, "splits", and "concatenates", each of which takes logarithmic time in the size of the auxiliary tree, so our update cost is formula_2.
Competitive ratio.
Tango trees are formula_0-competitive, because the work done by the optimal offline binary search tree is at least linear in "k" (the total number of preferred child switches), and the work done by the tango tree is at most formula_2. | [
{
"math_id": 0,
"text": "O(\\log \\log n)"
},
{
"math_id": 1,
"text": "O(\\log n)"
},
{
"math_id": 2,
"text": "(k+1) O(\\log \\log n)"
},
{
"math_id": 3,
"text": "k+1"
},
{
"math_id": 4,
"text": "\\log n"
}
] | https://en.wikipedia.org/wiki?curid=9871765 |
9875658 | Channel surface | Surface formed from spheres centered along a curve
In geometry and topology, a channel or canal surface is a surface formed as the envelope of a family of spheres whose centers lie on a space curve, its "directrix". If the radii of the generating spheres are constant, the canal surface is called a pipe surface. Simple examples are: a right circular cylinder (a pipe surface whose directrix is a line, the axis, with constant radius), a torus (a pipe surface whose directrix is a circle), and a right circular cone (a canal surface whose directrix is a line, the axis, with linearly varying radius).
Canal surfaces play an essential role in descriptive geometry, because in the case of an orthographic projection their contour curves can be drawn as envelopes of circles.
Envelope of a pencil of implicit surfaces.
Given the pencil of implicit surfaces
formula_0,
two neighboring surfaces formula_1 and
formula_2 intersect in a curve that fulfills the equations
formula_3 and formula_4.
For the limit formula_5 one gets
formula_6.
The last equation is the reason for the following definition.
If formula_8 is a formula_7-function, the surface defined by the two equations formula_9 is the envelope of the given pencil of surfaces.
Canal surface.
Let formula_10 be a regular space curve and formula_11 a formula_12-function with formula_13 and formula_14. The last condition means that the curvature of the curve is less than that of the corresponding sphere.
The envelope of the 1-parameter pencil of spheres
formula_15
is called a canal surface and formula_16 its directrix. If the radii are constant, it is called a pipe surface.
Parametric representation of a canal surface.
The envelope condition
formula_17
of the canal surface above is for any value of formula_18 the equation of a plane, which is orthogonal to the tangent
formula_19 of the directrix. Hence the envelope is a collection of circles.
This property is the key for a parametric representation of the canal surface. The center of the circle (for parameter formula_18) has the distance
formula_20 (see condition above)
from the center of the corresponding sphere and its radius is formula_21. Hence
*formula_22
where the vectors formula_23 and the tangent vector formula_24 form an orthonormal basis, is a parametric representation of the canal surface.
For formula_25 one gets the parametric representation of a pipe surface:
* formula_26
a) The first picture shows a canal surface with
#the helix formula_27 as directrix and
#the radius function formula_28.
#The choice for formula_23 is the following:
formula_29.
b) For the second picture the radius is constant: formula_30, i.e. the canal surface is a pipe surface.
c) For the third picture the pipe surface of b) has parameter formula_31.
d) The fourth picture shows a pipe knot. Its directrix is a curve on a torus.
e) The fifth picture shows a Dupin cyclide (a canal surface).
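The parametric representation formula_22 can be evaluated directly. The following NumPy sketch samples points of the canal surface of example a), using the helix directrix, the radius function formula_28 and the frame chosen in formula_29:
<syntaxhighlight lang="python">
import numpy as np

def canal_surface_point(u, v):
    c  = np.array([np.cos(u), np.sin(u), 0.25 * u])       # helix directrix c(u) of example a)
    dc = np.array([-np.sin(u), np.cos(u), 0.25])           # c'(u)
    r  = 0.2 + 0.8 * u / (2 * np.pi)                       # radius function r(u)
    dr = 0.8 / (2 * np.pi)                                 # r'(u)
    e1 = np.array([np.cos(u), np.sin(u), 0.0])             # (b', -a', 0)/||.||, already unit length
    e2 = np.cross(e1, dc)
    e2 /= np.linalg.norm(e2)                               # e2 = (e1 x c')/||.||
    nc2 = dc @ dc                                          # ||c'(u)||^2
    centre = c - (r * dr / nc2) * dc                       # centre of the characteristic circle
    rho = r * np.sqrt(1.0 - dr ** 2 / nc2)                 # its radius
    return centre + rho * (np.cos(v) * e1 + np.sin(v) * e2)

# sample a grid of surface points for u in [0, 4] and v in [0, 2*pi]
pts = np.array([[canal_surface_point(u, v)
                 for v in np.linspace(0.0, 2.0 * np.pi, 40)]
                for u in np.linspace(0.0, 4.0, 80)])
print(pts.shape)   # (80, 40, 3)
</syntaxhighlight>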
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Phi_c: f({\\mathbf x},c)=0 , c\\in [c_1,c_2]"
},
{
"math_id": 1,
"text": "\\Phi_c"
},
{
"math_id": 2,
"text": "\\Phi_{c+\\Delta c}"
},
{
"math_id": 3,
"text": " f({\\mathbf x},c)=0"
},
{
"math_id": 4,
"text": "f({\\mathbf x},c+\\Delta c)=0"
},
{
"math_id": 5,
"text": "\\Delta c \\to 0"
},
{
"math_id": 6,
"text": "f_c({\\mathbf x},c)= \\lim_{\\Delta c \\to \\ 0} \\frac{f({\\mathbf x},c)-f({\\mathbf x},c+\\Delta c)}{\\Delta c}=0"
},
{
"math_id": 7,
"text": "C^2"
},
{
"math_id": 8,
"text": "f"
},
{
"math_id": 9,
"text": " f({\\mathbf x},c)=0, \\quad f_c({\\mathbf x},c)=0 "
},
{
"math_id": 10,
"text": "\\Gamma: {\\mathbf x}={\\mathbf c}(u)=(a(u),b(u),c(u))^\\top"
},
{
"math_id": 11,
"text": "r(t)"
},
{
"math_id": 12,
"text": "C^1"
},
{
"math_id": 13,
"text": "r>0"
},
{
"math_id": 14,
"text": "|\\dot{r}|<\\|\\dot{\\mathbf c}\\|"
},
{
"math_id": 15,
"text": "f({\\mathbf x};u):= \\big\\|{\\mathbf x}-{\\mathbf c}(u)\\big\\|^2-r^2(u)=0"
},
{
"math_id": 16,
"text": "\\Gamma"
},
{
"math_id": 17,
"text": "f_u({\\mathbf x},u)= \n2\\Big(-\\big({\\mathbf x}-{\\mathbf c}(u)\\big)^\\top\\dot{\\mathbf c}(u)-r(u)\\dot{r}(u)\\Big)=0"
},
{
"math_id": 18,
"text": "u"
},
{
"math_id": 19,
"text": "\\dot{\\mathbf c}(u)"
},
{
"math_id": 20,
"text": "d:=\\frac{r\\dot{r}}{\\|\\dot{\\mathbf c}\\|}<r"
},
{
"math_id": 21,
"text": "\\sqrt{r^2-d^2}"
},
{
"math_id": 22,
"text": "{\\mathbf x}={\\mathbf x}(u,v):=\n{\\mathbf c}(u)-\\frac{r(u)\\dot{r}(u)}{\\|\\dot{\\mathbf c}(u)\\|^2}\\dot{\\mathbf c}(u)\n+r(u)\\sqrt{1-\\frac{\\dot{r}(u)^2}{\\|\\dot{\\mathbf c}(u)\\|^2}}\n\\big({\\mathbf e}_1(u)\\cos(v)+ {\\mathbf e}_2(u)\\sin(v)\\big),"
},
{
"math_id": 23,
"text": "{\\mathbf e}_1,{\\mathbf e}_2"
},
{
"math_id": 24,
"text": "\\dot{\\mathbf c}/\\|\\dot{\\mathbf c}\\|"
},
{
"math_id": 25,
"text": "\\dot{r}=0"
},
{
"math_id": 26,
"text": "{\\mathbf x}={\\mathbf x}(u,v):=\n{\\mathbf c}(u)+r\\big({\\mathbf e}_1(u)\\cos(v)+ {\\mathbf e}_2(u)\\sin(v)\\big)."
},
{
"math_id": 27,
"text": "(\\cos(u),\\sin(u), 0.25u), u\\in[0,4]"
},
{
"math_id": 28,
"text": "r(u):= 0.2+0.8u/2\\pi"
},
{
"math_id": 29,
"text": "{\\mathbf e}_1:=(\\dot{b},-\\dot{a},0)/\\|\\cdots\\|,\\ \n {\\mathbf e}_2:= ({\\mathbf e}_1\\times \\dot{\\mathbf c})/\\|\\cdots\\|"
},
{
"math_id": 30,
"text": "r(u):= 0.2"
},
{
"math_id": 31,
"text": "u\\in[0,7.5]"
}
] | https://en.wikipedia.org/wiki?curid=9875658 |
9875692 | Stochastic frontier analysis | Stochastic frontier analysis (SFA) is a method of economic modeling. It has its starting point in the stochastic production frontier models simultaneously introduced by Aigner, Lovell and Schmidt (1977) and Meeusen and Van den Broeck (1977).
The "production frontier model" without random component can be written as:
formula_0
where "yi" is the observed scalar output of the producer "i"; "i=1..I, xi" is a vector of "N" inputs used by the producer "i"; formula_1 is a vector of technology parameters to be estimated; and "f(xi, β)" is the production frontier function.
"TEi" denotes the technical efficiency defined as the ratio of observed output to maximum feasible output.
"TEi = 1" shows that the "i-th" firm obtains the maximum feasible output, while "TEi < 1" provides a measure of the shortfall of the observed output from maximum feasible output.
A stochastic component that describes random shocks affecting the production process is added. These shocks are not directly attributable to the producer or the underlying technology. These shocks may come from weather changes, economic adversities or plain luck. We denote these effects with formula_2. Each producer is facing a different shock, but we assume the shocks are random and they are described by a common distribution.
The stochastic production frontier will become:
formula_3
We assume that "TEi" is also a stochastic variable, with a specific distribution function, common to all producers.
We can also write it as an exponential formula_4, where "ui ≥ 0", since we required "TEi ≤ 1". Thus, we obtain the following equation:
formula_5
Now, if we also assume that "f(xi, β)" takes the log-linear Cobb–Douglas form, the model can be written as:
formula_6
where "vi" is the “noise” component, which we will almost always consider as a two-sided normally distributed variable, and "ui" is the non-negative technical inefficiency component. Together they constitute a compound error term, with a specific distribution to be determined, hence the name of “composed error model” as is often referred.
Stochastic frontier analysis has also examined "cost" and "profit" efficiency. The "cost frontier" approach attempts to measure how far the firm is from full-cost minimization (i.e. cost-efficiency). Modeling-wise, the non-negative cost-inefficiency component is added rather than subtracted in the stochastic specification. "Profit frontier analysis" examines the case where producers are treated as profit-maximizers (both output and inputs should be decided by the firm) and not as cost-minimizers (where the level of output is considered exogenously given). The specification here is similar to the "production frontier" one.
Stochastic frontier analysis has also been applied in micro data of consumer demand in an attempt to benchmark consumption and segment consumers. In a two-stage approach, a stochastic frontier model is estimated and subsequently deviations from the frontier are regressed on consumer characteristics.
Extensions: The two-tier stochastic frontier model.
Polacheck & Yoon (1987) have introduced a three-component error structure, where one non-negative error term is added to, while the other is subtracted from, the zero-mean symmetric random disturbance. This modeling approach attempts to measure the impact of informational inefficiencies (incomplete and imperfect information) on the prices of realized transactions, inefficiencies that in most cases characterize both parties in a transaction (hence the two inefficiency components, to disentangle the two effects).
In the 2010s, various non-parametric and semi-parametric approaches were proposed in the literature, where no parametric assumption on the functional form of production relationship is made.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "y_i = f(x_i ;\\beta ) \\cdot TE_i"
},
{
"math_id": 1,
"text": "\\beta "
},
{
"math_id": 2,
"text": "\\exp \\left\\{ {v_i } \\right\\}"
},
{
"math_id": 3,
"text": "y_i = f(x_i ;\\beta ) \\cdot TE_i \\cdot \\exp \\left\\{ {v_i } \\right\\}"
},
{
"math_id": 4,
"text": "TE_i = \\exp \\left\\{ { - u_i } \\right\\}"
},
{
"math_id": 5,
"text": "y_i = f(x_i ;\\beta ) \\cdot \\exp \\left\\{ { - u_i } \\right\\} \\cdot \\exp \\left\\{ {v_i } \\right\\} "
},
{
"math_id": 6,
"text": "\\ln y_i = \\beta _0 + \\sum\\limits_n {\\beta _n \\ln x_{ni} + v_i - u_i } "
}
] | https://en.wikipedia.org/wiki?curid=9875692 |
9875880 | Intercept theorem | On ratios of line segments formed when 2 intersecting lines are cut by a pair of parallels
The intercept theorem, also known as Thales's theorem, basic proportionality theorem or side splitter theorem, is an important theorem in elementary geometry about the ratios of various line segments that are created if two rays with a common starting point are intercepted by a pair of parallels. It is equivalent to the theorem about ratios in similar triangles. It is traditionally attributed to Greek mathematician Thales. It was known to the ancient Babylonians and Egyptians, although its first known proof appears in Euclid's "Elements".
Formulation of the theorem.
Suppose S is the common starting point of two rays, and two parallel lines are intersecting those two rays (see figure). Let A, B be the intersections of the first ray with the two parallels, such that B is further away from S than A, and similarly C, D are the intersections of the second ray with the two parallels such that D is further away from S than C. In this configuration the following statements hold:
Extensions and conclusions.
The first two statements remain true if the two rays are replaced by two lines intersecting in formula_5. In this case there are two scenarios with regard to formula_5: either it lies between the two parallels (X figure) or it does not (V figure). If formula_5 is not located between the two parallels, the original theorem applies directly. If formula_5 lies between the two parallels, then a reflection of formula_6 and formula_7 at formula_5 yields a V figure with identical measures, to which the original theorem now applies. The third statement (the converse), however, does not remain true for lines.
If there are more than two rays starting at formula_5 or more than two lines intersecting at formula_5, then each parallel contains more than one line segment and the ratio of two line segments on one parallel equals the ratio of the according line segments on the other parallel. For instance if there's a third ray starting at formula_5 and intersecting the parallels in formula_8 and formula_9, such that formula_9 is further away from formula_5 than formula_8, then the following equalities holds:
formula_10 , formula_11
For the second equation the converse is true as well, that is if the 3 rays are intercepted by two lines and the ratios of the according line segments on each line are equal, then those 2 lines must be parallel.
Related concepts.
Similarity and similar triangles.
The intercept theorem is closely related to similarity. It is equivalent to the concept of similar triangles, i.e. it can be used to prove the properties of similar triangles and similar triangles can be used to prove the intercept theorem. By matching identical angles you can always place two similar triangles in one another so that you get the configuration in which the intercept theorem applies; and conversely the intercept theorem configuration always contains two similar triangles.
Scalar multiplication in vector spaces.
In a normed vector space, the axioms concerning the scalar multiplication (in particular formula_12 and formula_13) ensure that the intercept theorem holds. One has
formula_14
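The identity can be checked numerically for any concrete choice of vectors and scalar; the values in the following sketch are arbitrary:
<syntaxhighlight lang="python">
import numpy as np

a, b, lam = np.array([3.0, 1.0]), np.array([-1.0, 2.0]), -2.5   # arbitrary vectors and scalar
for v in (a, b, a + b):
    # each ratio ||lam*v|| / ||v|| equals |lam| = 2.5
    print(np.linalg.norm(lam * v) / np.linalg.norm(v))
</syntaxhighlight>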
Applications.
Algebraic formulation of compass and ruler constructions.
There are three famous problems in elementary geometry which were posed by the Greeks in terms of compass and straightedge constructions:
It took more than 2000 years until all three of them were finally shown to be impossible. This was achieved in the 19th century with the help of algebraic methods that had become available by then. In order to reformulate the three problems in algebraic terms using field extensions, one needs to match field operations with compass and straightedge constructions (see constructible number). In particular it is important to assure that for two given line segments, a new line segment can be constructed such that its length equals the product of the lengths of the other two. Similarly one needs to be able to construct, for a line segment of length formula_15, a new line segment of length formula_16. The intercept theorem can be used to show that in both cases such a construction is possible.
Dividing a line segment in a given ratio.
To divide an arbitrary line segment formula_17 in a formula_18 ratio, draw an arbitrary angle in A with formula_17 as one leg. On the other leg construct formula_19 equidistant points, then draw the line through the last point and B and parallel line through the "m"th point. This parallel line divides formula_17 in the desired ratio. The graphic to the right shows the partition of a line segment formula_17 in a formula_20 ratio.
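In coordinates, the point produced by this construction is the corresponding affine combination of the endpoints. A minimal sketch with hypothetical endpoint coordinates:
<syntaxhighlight lang="python">
def divide(A, B, m, n):
    """Return the point dividing the segment AB in the ratio m:n, measured from A."""
    return tuple(a + (m / (m + n)) * (b - a) for a, b in zip(A, B))

print(divide((0.0, 0.0), (8.0, 2.0), 5, 3))   # the 5:3 partition point of AB
</syntaxhighlight>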
Measuring and survey.
Height of the Cheops pyramid.
According to some historical sources the Greek mathematician Thales applied the intercept theorem to determine the height of the Cheops' pyramid. The following description illustrates the use of the intercept theorem to compute the height of the pyramid. It does not, however, recount Thales' original work, which was lost.
Thales measured the length of the pyramid's base and the height of his pole. Then at the same time of the day he measured the length of the pyramid's shadow and the length of the pole's shadow. This yielded the following data: the height of the pole (A) was 1.63 m, the length of the pole's shadow (B) was 2 m, the length of the pyramid's base was 230 m, and the length of the pyramid's shadow was 65 m.
From this he computed
formula_21
Knowing A, B and C he was now able to apply the intercept theorem to compute
formula_22
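The computation can be reproduced in a few lines of Python from the measurements listed above:
<syntaxhighlight lang="python">
pole_height    = 1.63     # A, in metres
pole_shadow    = 2.0      # B
base_length    = 230.0
pyramid_shadow = 65.0

C = pyramid_shadow + base_length / 2    # distance from the shadow tip to the pyramid's centre
D = C * pole_height / pole_shadow       # intercept theorem: D/C = A/B
print(D)                                # approximately 146.7 m
</syntaxhighlight>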
Measuring the width of a river.
The intercept theorem can be used to determine a distance that cannot be measured directly, such as the width of a river or a lake, the height of tall buildings or similar. The graphic to the right illustrates measuring the width of a river. The segments formula_23,formula_24,formula_25 are measured and used to compute the wanted distance formula_26.
Parallel lines in triangles and trapezoids.
The intercept theorem can be used to prove that a certain construction yields parallel line (segment)s.
Historical aspects.
The theorem is traditionally attributed to the Greek mathematician Thales of Miletus, who may have used some form of the theorem to determine the heights of pyramids in Egypt and to compute the distance of a ship from the shore.
Proof.
An elementary proof of the theorem uses triangles of equal area to derive the basic statements about the ratios (claim 1). The other claims then follow by applying the first claim and contradiction. | [
{
"math_id": 0,
"text": "\\frac{| SA |}{| AB |} =\\frac{| SC |}{ | CD |}"
},
{
"math_id": 1,
"text": "\\frac{| SB |}{| AB |} =\\frac{| SD |}{| CD |} "
},
{
"math_id": 2,
"text": "\\frac{| SA |}{| SB |} =\\frac{| SC |}{| SD |} "
},
{
"math_id": 3,
"text": "\\frac{| SA |}{| SB |} = \\frac{| SC |}{| SD |} =\\frac{| AC |}{| BD |} "
},
{
"math_id": 4,
"text": "\\frac{| SA |}{| AB |} =\\frac{| SC |}{| CD |} "
},
{
"math_id": 5,
"text": "S"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "C"
},
{
"math_id": 8,
"text": "E"
},
{
"math_id": 9,
"text": "F"
},
{
"math_id": 10,
"text": "\\frac{| AE |}{| BF |} =\\frac{| EC |}{ | FD |}"
},
{
"math_id": 11,
"text": "\\frac{| AE |}{ | EC |} =\\frac{| BF |}{| FD |} "
},
{
"math_id": 12,
"text": " \\lambda \\cdot (\\vec{a}+\\vec{b})=\\lambda \\cdot \\vec{a}+ \\lambda \\cdot \\vec{b} "
},
{
"math_id": 13,
"text": " \\|\\lambda \\vec{a}\\|=|\\lambda|\\cdot\\ \\|\\vec{a}\\| "
},
{
"math_id": 14,
"text": "\n\\frac{ \\| \\lambda \\cdot \\vec{a} \\| }{ \\| \\vec{a} \\|}\n=\\frac{\\|\\lambda\\cdot\\vec{b}\\|}{\\|\\vec{b}\\|}\n=\\frac{\\|\\lambda\\cdot(\\vec{a}+\\vec{b}) \\|}{\\|\\vec{a}+\\vec{b}\\|}\n=|\\lambda|\n"
},
{
"math_id": 15,
"text": " a "
},
{
"math_id": 16,
"text": " a^{-1} "
},
{
"math_id": 17,
"text": "\\overline{AB}"
},
{
"math_id": 18,
"text": "m:n "
},
{
"math_id": 19,
"text": "m+n "
},
{
"math_id": 20,
"text": "5:3"
},
{
"math_id": 21,
"text": " C = 65~\\text{m}+\\frac{230~\\text{m}}{2}=180~\\text{m} "
},
{
"math_id": 22,
"text": " D=\\frac{C \\cdot A}{B}=\\frac{1.63~\\text{m} \\cdot 180~\\text{m}}{2~\\text{m}}=146.7~\\text{m}"
},
{
"math_id": 23,
"text": "|CF|"
},
{
"math_id": 24,
"text": "|CA|"
},
{
"math_id": 25,
"text": "|FE|"
},
{
"math_id": 26,
"text": " |AB|=\\frac{|AC||FE|}{|FC|} "
}
] | https://en.wikipedia.org/wiki?curid=9875880 |
98759 | Coset | Disjoint, equal-size subsets of a group's underlying set
In mathematics, specifically group theory, a subgroup H of a group G may be used to decompose the underlying set of G into disjoint, equal-size subsets called cosets. There are "left cosets" and "right cosets". Cosets (both left and right) have the same number of elements (cardinality) as does H. Furthermore, H itself is both a left coset and a right coset. The number of left cosets of H in G is equal to the number of right cosets of H in G. This common value is called the index of H in G and is usually denoted by ["G" : "H"].
Cosets are a basic tool in the study of groups; for example, they play a central role in Lagrange's theorem that states that for any finite group G, the number of elements of every subgroup H of G divides the number of elements of G. Cosets of a particular type of subgroup (a normal subgroup) can be used as the elements of another group called a quotient group or factor group. Cosets also appear in other areas of mathematics such as vector spaces and error-correcting codes.
Definition.
Let H be a subgroup of the group G whose operation is written multiplicatively (juxtaposition denotes the group operation). Given an element g of G, the left cosets of H in G are the sets obtained by multiplying each element of H by a fixed element g of G (where g is the left factor). In symbols these are,
<templatestyles src="Block indent/styles.css"/>"gH" = {"gh" : "h" an element of "H"} for g in G.
The right cosets are defined similarly, except that the element g is now a right factor, that is,
<templatestyles src="Block indent/styles.css"/>"Hg" = {"hg" : "h" an element of "H"} for g in G.
As g varies through the group, it would appear that many cosets (right or left) would be generated. Nevertheless, it turns out that any two left cosets (respectively right cosets) are either disjoint or are identical as sets.
If the group operation is written additively, as is often the case when the group is abelian, the notation used changes to "g" + "H" or "H" + "g", respectively.
The symbol "G"/"H" is sometimes used for the set of (left) cosets {"gH" : "g" an element of "G"} (see below for a extension to right cosets and double cosets). However, some authors (including Dummit & Foote and Rotman) reserve this notation specifically for representing the quotient group formed from the cosets in the case where "H" is a "normal" subgroup of "G."
First example.
Let G be the dihedral group of order six. Its elements may be represented by {"I", "a", "a"2, "b", "ab", "a"2"b"}. In this group, "a"3 = "b"2 = "I" and "ba" = "a"2"b". This is enough information to fill in the entire Cayley table:
Let T be the subgroup {"I", "b"}. The (distinct) left cosets of T are "IT" = "T" = {"I", "b"}, "aT" = {"a", "ab"}, and "a"2"T" = {"a"2, "a"2"b"}.
Since all the elements of G have now appeared in one of these cosets, generating any more can not give new cosets; any new coset would have to have an element in common with one of these and therefore would be identical to one of these cosets. For instance, "abT" = {"ab", "a"} = "aT".
The right cosets of T are "TI" = "T" = {"I", "b"}, "Ta" = {"a", "ba"} = {"a", "a"2"b"}, and "Ta"2 = {"a"2, "ba"2} = {"a"2, "ab"}.
In this example, except for T, no left coset is also a right coset.
Let H be the subgroup {"I", "a", "a"2}. The left cosets of H are "IH" = "H" and "bH" = {"b", "ba", "ba"2}. The right cosets of H are "HI" = "H" and "Hb" = {"b", "ab", "a"2"b"} = {"b", "ba"2, "ba"}. In this case, every left coset of H is also a right coset of H.
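These examples can be checked mechanically. The following Python sketch encodes each element as a pair of exponents (i, j), so that (1, 1) stands for "ab" and (2, 1) for "a"2"b", implements the relations "a"3 = "b"2 = "I" and "ba" = "a"2"b", and prints the Cayley table together with the left and right cosets of T and H:
<syntaxhighlight lang="python">
elements = [(i, j) for j in (0, 1) for i in (0, 1, 2)]   # (i, j) stands for a^i b^j

def mul(x, y):
    """Product in the dihedral group of order six: a^3 = b^2 = I, ba = a^2 b."""
    (i1, j1), (i2, j2) = x, y
    if j1 == 0:                               # a^i1 * a^i2 b^j2 = a^(i1+i2) b^j2
        return ((i1 + i2) % 3, j2)
    return ((i1 - i2) % 3, (1 + j2) % 2)      # a^i1 b * a^i2 b^j2 = a^(i1-i2) b^(1+j2)

def name(x):
    i, j = x
    return ({0: "", 1: "a", 2: "a2"}[i] + ("b" if j else "")) or "I"

# Cayley table (row element times column element)
print("     " + "  ".join(f"{name(y):>3}" for y in elements))
for x in elements:
    print(f"{name(x):>3}  " + "  ".join(f"{name(mul(x, y)):>3}" for y in elements))

T = [(0, 0), (0, 1)]               # {I, b}
H = [(0, 0), (1, 0), (2, 0)]       # {I, a, a2}
for S, label in ((T, "T"), (H, "H")):
    left  = {frozenset(mul(g, s) for s in S) for g in elements}
    right = {frozenset(mul(s, g) for s in S) for g in elements}
    print(label, "left cosets :", [sorted(map(name, c)) for c in left])
    print(label, "right cosets:", [sorted(map(name, c)) for c in right])
</syntaxhighlight>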
Let H be a subgroup of a group G and suppose that "g"1, "g"2 ∈ "G". The following statements are equivalent: "g"1"H" = "g"2"H"; "g"2 ∈ "g"1"H"; "g"1−1"g"2 ∈ "H"; and "g"1 and "g"2 belong to the same left coset of "H".
Properties.
The disjointness of non-identical cosets is a result of the fact that if x belongs to "gH" then "gH" = "xH". For if "x" ∈ "gH" then there must exist an "a" ∈ "H" such that "ga" = "x". Thus "xH" = ("ga")"H" = "g"("aH"). Moreover, since "H" is a group, left multiplication by a is a bijection, and "aH" = "H".
Thus every element of "G" belongs to exactly one left coset of the subgroup "H", and "H" is itself a left coset (and the one that contains the identity).
Two elements being in the same left coset also provide a natural equivalence relation. Define two elements of G, x and y, to be equivalent with respect to the subgroup H if "xH" = "yH" (or equivalently if "x"−1"y" belongs to H). The equivalence classes of this relation are the left cosets of H. As with any set of equivalence classes, they form a partition of the underlying set. A coset representative is a representative in the equivalence class sense. A set of representatives of all the cosets is called a transversal. There are other types of equivalence relations in a group, such as conjugacy, that form different classes which do not have the properties discussed here.
Similar statements apply to right cosets.
If "G" is an abelian group, then "g" + "H" = "H" + "g" for every subgroup "H" of "G" and every element g of "G". For general groups, given an element g and a subgroup "H" of a group "G", the right coset of "H" with respect to g is also the left coset of the conjugate subgroup "g"−1"Hg" with respect to g, that is, "Hg" = "g"("g"−1"Hg").
Normal subgroups.
A subgroup "N" of a group "G" is a normal subgroup of "G" if and only if for all elements g of "G" the corresponding left and right cosets are equal, that is, "gN" = "Ng". This is the case for the subgroup H in the first example above. Furthermore, the cosets of "N" in "G" form a group called the quotient group or factor group "G" / "N".
If "H" is not normal in "G", then its left cosets are different from its right cosets. That is, there is an a in "G" such that no element b satisfies "aH" = "Hb". This means that the partition of "G" into the left cosets of "H" is a different partition than the partition of "G" into right cosets of "H". This is illustrated by the subgroup T in the first example above. ("Some" cosets may coincide. For example, if a is in the center of "G", then "aH" = "Ha".)
On the other hand, if the subgroup "N" is normal the set of all cosets forms a group called the quotient group "G" / "N" with the operation ∗ defined by ("aN") ∗ ("bN") = "abN". Since every right coset is a left coset, there is no need to distinguish "left cosets" from "right cosets".
Index of a subgroup.
Every left or right coset of "H" has the same number of elements (or cardinality in the case of an infinite "H") as "H" itself. Furthermore, the number of left cosets is equal to the number of right cosets and is known as the index of "H" in "G", written as ["G" : "H"]. Lagrange's theorem allows us to compute the index in the case where "G" and "H" are finite:
formula_0
This equation can be generalized to the case where the groups are infinite.
More examples.
Integers.
Let "G" be the additive group of the integers, Z = ({..., −2, −1, 0, 1, 2, ...}, +) and "H" the subgroup (3Z, +) = ({..., −6, −3, 0, 3, 6, ...}, +). Then the cosets of "H" in "G" are the three sets 3Z, 3Z + 1, and 3Z + 2, where 3Z + "a" = {..., −6 + "a", −3 + "a", "a", 3 + "a", 6 + "a", ...}. These three sets partition the set Z, so there are no other right cosets of H. Due to the commutivity of addition "H" + 1 = 1 + "H" and "H" + 2 = 2 + "H". That is, every left coset of H is also a right coset, so H is a normal subgroup. (The same argument shows that every subgroup of an Abelian group is normal.)
This example may be generalized. Again let "G" be the additive group of the integers, Z = ({..., −2, −1, 0, 1, 2, ...}, +), and now let "H" be the subgroup ("m"Z, +) = ({..., −2"m", −"m", 0, "m", 2"m", ...}, +), where m is a positive integer. Then the cosets of "H" in "G" are the m sets "m"Z, "m"Z + 1, ..., "m"Z + ("m" − 1), where "m"Z + "a" = {..., −2"m" + "a", −"m" + "a", "a", "m" + "a", 2"m" + "a", ...}. There are no more than m cosets, because "m"Z + "m" = "m"(Z + 1) = "m"Z. The coset ("m"Z + "a", +) is the congruence class of a modulo m. The subgroup "m"Z is normal in Z, and so can be used to form the quotient group Z / "m"Z, the group of integers mod "m".
Vectors.
Another example of a coset comes from the theory of vector spaces. The elements (vectors) of a vector space form an abelian group under vector addition. The subspaces of the vector space are subgroups of this group. For a vector space "V", a subspace "W", and a fixed vector a in "V", the sets
formula_1
are called affine subspaces, and are cosets (both left and right, since the group is abelian). In terms of 3-dimensional geometric vectors, these affine subspaces are all the "lines" or "planes" parallel to the subspace, which is a line or plane going through the origin. For example, consider the plane R2. If m is a line through the origin O, then m is a subgroup of the abelian group R2. If P is in R2, then the coset "P" + "m" is a line "m"′ parallel to m and passing through P.
Matrices.
Let G be the multiplicative group of matrices,
formula_2
and the subgroup H of G,
formula_3
For a fixed element of G consider the left coset
formula_4
That is, the left cosets consist of all the matrices in G having the same upper-left entry. This subgroup H is normal in G, but the subgroup
formula_5
is not normal in G.
As orbits of a group action.
A subgroup H of a group G can be used to define an action of H on G in two natural ways. A "right action", "G" × "H" → "G" given by ("g", "h") → "gh" or a "left action", "H" × "G" → "G" given by ("h", "g") → "hg". The orbit of g under the right action is the left coset gH, while the orbit under the left action is the right coset Hg.
History.
The concept of a coset dates back to Galois's work of 1830–31. He introduced a notation but did not provide a name for the concept. The term "co-set" apparently appears for the first time in 1910 in a paper by G. A. Miller in the "Quarterly Journal of Pure and Applied Mathematics" (vol. 41, p. 382). Various other terms have been used including the German "Nebengruppen" (Weber) and "conjugate group" (Burnside). (Note that Miller abbreviated his self-citation to the "Quarterly Journal of Mathematics"; this does not refer to the journal of the same name, which did not start publication until 1930.)
Galois was concerned with deciding when a given polynomial equation was solvable by radicals. A tool that he developed was in noting that a subgroup H of a group of permutations G induced two decompositions of G (what we now call left and right cosets). If these decompositions coincided, that is, if the left cosets are the same as the right cosets, then there was a way to reduce the problem to one of working over H instead of G. Camille Jordan in his commentaries on Galois's work in 1865 and 1869 elaborated on these ideas and defined normal subgroups as we have above, although he did not use this term.
Calling the coset gH the "left coset" of g with respect to H, while most common today, has not been universally true in the past. For instance, would call gH a "right coset", emphasizing the subgroup being on the right.
An application from coding theory.
A binary linear code is an n-dimensional subspace C of an m-dimensional vector space V over the binary field GF(2). As V is an additive abelian group, C is a subgroup of this group. Codes can be used to correct errors that can occur in transmission. When a "codeword" (element of C) is transmitted some of its bits may be altered in the process and the task of the receiver is to determine the most likely codeword that the corrupted "received word" could have started out as. This procedure is called "decoding" and if only a few errors are made in transmission it can be done effectively with only a very few mistakes. One method used for decoding uses an arrangement of the elements of V (a received word could be any element of V) into a standard array. A standard array is a coset decomposition of V put into tabular form in a certain way. Namely, the top row of the array consists of the elements of C, written in any order, except that the zero vector should be written first. Then, an element of V with a minimal number of ones that does not already appear in the top row is selected and the coset of C containing this element is written as the second row (namely, the row is formed by taking the sum of this element with each element of C directly above it). This element is called a coset leader and there may be some choice in selecting it. Now the process is repeated, a new vector with a minimal number of ones that does not already appear is selected as a new coset leader and the coset of C containing it is the next row. The process ends when all the vectors of V have been sorted into the cosets.
An example of a standard array for the 2-dimensional code "C" = {00000, 01101, 10110, 11011} in the 5-dimensional space V (with 32 vectors) is as follows:
The decoding procedure is to find the received word in the table and then add to it the coset leader of the row it is in. Since in binary arithmetic adding is the same operation as subtracting, this always results in an element of C. In the event that the transmission errors occurred in precisely the non-zero positions of the coset leader the result will be the right codeword. In this example, if a single error occurs, the method will always correct it, since all possible coset leaders with a single one appear in the array.
Syndrome decoding can be used to improve the efficiency of this method. It is a method of computing the correct coset (row) that a received word will be in. For an n-dimensional code C in an m-dimensional binary vector space, a parity check matrix is an ("m" − "n") × "m" matrix H having the property that "xH"T = 0 if and only if x is in C. The vector "xH"T is called the "syndrome" of x, and by linearity, every vector in the same coset will have the same syndrome. To decode, the search is now reduced to finding the coset leader that has the same syndrome as the received word.
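As an illustration, syndrome decoding for the example code "C" = {00000, 01101, 10110, 11011} can be sketched as follows. The parity-check matrix below is one valid choice for this code (it maps exactly the four codewords to the zero syndrome), and the coset leaders are chosen greedily by weight:
<syntaxhighlight lang="python">
import numpy as np
from itertools import product

H = np.array([[1, 1, 1, 0, 0],      # one valid parity-check matrix for C:
              [1, 0, 0, 1, 0],      # x H^T = 0 (mod 2) exactly for the four codewords
              [0, 1, 0, 0, 1]])

def syndrome(x):
    return tuple(np.dot(x, H.T) % 2)

# coset leader for each syndrome: the first (minimum-weight) vector encountered
leaders = {}
for bits in sorted(product((0, 1), repeat=5), key=sum):
    leaders.setdefault(syndrome(np.array(bits)), np.array(bits))

def decode(received):
    r = np.array(received)
    return (r + leaders[syndrome(r)]) % 2   # add the coset leader of r's row

print(decode([1, 1, 1, 1, 0]))   # 10110 with its second bit flipped -> [1 0 1 1 0]
</syntaxhighlight>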
Double cosets.
Given two subgroups, "H" and "K" (which need not be distinct) of a group "G", the double cosets of "H" and "K" in "G" are the sets of the form "HgK" = {"hgk" : "h" an element of "H", "k" an element of "K"}. These are the left cosets of "K" and right cosets of "H" when "H" = 1 and "K" = 1 respectively.
Two double cosets "HxK" and "HyK" are either disjoint or identical. The set of all double cosets for fixed H and K form a partition of G.
A double coset "HxK" contains the complete right cosets of H (in G) of the form "Hxk", with k an element of K and the complete left cosets of K (in G) of the form "hxK", with h in H.
Notation.
Let "G" be a group with subgroups "H" and "K". Several authors working with these sets have developed a specialized notation for their work, where
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "|G| = [G : H]|H|."
},
{
"math_id": 1,
"text": "\\{\\mathbf{x} \\in V \\mid \\mathbf{x} = \\mathbf{a} + \\mathbf{w}, \\mathbf{w} \\in W\\}"
},
{
"math_id": 2,
"text": "G = \\left \\{\\begin{bmatrix} a & 0 \\\\ b & 1 \\end{bmatrix} \\colon a, b \\in \\R, a \\neq 0 \\right\\},"
},
{
"math_id": 3,
"text": "H= \\left \\{\\begin{bmatrix} 1 & 0 \\\\ c & 1 \\end{bmatrix} \\colon c \\in \\mathbb{R} \\right\\}."
},
{
"math_id": 4,
"text": "\\begin{align}\n\\begin{bmatrix} a & 0 \\\\ b & 1 \\end{bmatrix} H = &~ \\left \\{\\begin{bmatrix} a & 0 \\\\ b & 1 \\end{bmatrix}\\begin{bmatrix} 1 & 0 \\\\ c & 1 \\end{bmatrix} \\colon c \\in \\R \\right\\} \\\\\n=&~ \\left \\{\\begin{bmatrix} a & 0 \\\\ b + c & 1 \\end{bmatrix} \\colon c \\in \\mathbb{R}\\right\\} \\\\\n=&~ \\left \\{\\begin{bmatrix} a & 0 \\\\ d & 1 \\end{bmatrix} \\colon d \\in \\mathbb{R}\\right\\}.\n\\end{align}"
},
{
"math_id": 5,
"text": "T= \\left \\{\\begin{bmatrix} a & 0 \\\\ 0 & 1 \\end{bmatrix} \\colon a \\in \\mathbb{R} - \\{0\\} \\right\\}"
}
] | https://en.wikipedia.org/wiki?curid=98759 |
987604 | Tobin's q | Ratio between a physical asset's market value and its replacement value
Tobin's q (or the q ratio, and Kaldor's v) is the ratio between a physical asset's market value and its replacement value. It was first introduced by Nicholas Kaldor in 1966 in his paper "Marginal Productivity and the Macro-Economic Theories of Distribution: Comment on Samuelson and Modigliani". It was popularised a decade later by James Tobin, who, in 1970, described its two quantities as:
<templatestyles src="Template:Blockquote/styles.css" />One, the numerator, is the market valuation: the going price in the market for exchanging existing assets. The other, the denominator, is the replacement or reproduction cost: the price in the market for newly produced commodities. We believe that this ratio has considerable macroeconomic significance and usefulness, as the nexus between financial markets and markets for goods and services.
Measurement.
Single company.
Although it is not the direct equivalent of Tobin's q, it has become common practice in the finance literature to calculate the ratio by comparing the market value of a company's equity and liabilities with its corresponding book values, as the replacement values of a company's assets are hard to estimate:
Tobin's q = formula_0
It is also common practice to assume equivalence of the liabilities market and book value, yielding:
Tobin's q = formula_1.
Even if market and book value of liabilities are assumed to be equal, this is not equal to the "Market to Book Ratio" or "Price to Book Ratio", used in financial analysis. The latter ratio is only calculated for equity values: Market to Book Ratio= formula_2.
Financial analysis also often uses the inverse of this ratio, the "Book to Market Ratio", i.e. Book to Market Ratio= formula_3
For stock-listed companies, the market value of equity or market capitalization is often quoted in financial databases. It can be calculated for a specific point in time by formula_4.
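As a small illustration of the ratios above (the firm and all figures below are hypothetical), the following Python sketch computes the common book-value approximation of Tobin's q alongside the market-to-book ratio, showing that the two generally differ:

def tobins_q(equity_market_value, equity_book_value, liabilities_book_value):
    # Common approximation: liabilities' market value is assumed equal to book value.
    return (equity_market_value + liabilities_book_value) / (equity_book_value + liabilities_book_value)

def market_to_book(equity_market_value, equity_book_value):
    return equity_market_value / equity_book_value

# Hypothetical firm: 10 million shares at $25, equity book value $150m, liabilities $100m.
equity_mv = 10e6 * 25.0   # market capitalization = number of shares x share price
print(round(tobins_q(equity_mv, 150e6, 100e6), 2))  # 1.4
print(round(market_to_book(equity_mv, 150e6), 2))   # 1.67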
Aggregate corporations.
Another use for q is to determine the valuation of the whole market in ratio to the aggregate corporate assets. The formula for this is: formula_5
The following graph is an example of Tobin's q for all U.S. corporations. The line shows the ratio of the US stock market value to US net assets at replacement cost since 1900.
Effect on capital investment.
If the market value reflected solely the recorded assets of a company, Tobin's q would be 1.0.
If Tobin's q is greater than 1.0, then the market value is greater than the value of the company's recorded assets. This suggests that the market value reflects some unmeasured or unrecorded assets of the company. High Tobin's q values encourage companies to invest more in capital because they are "worth" more than the price they paid for them.
If a company's stock price (which is a measure of the company's capital market value) is $2 and the price of the capital in the current market is $1, so that q > 1, the company can issue shares and with the proceeds invest in capital, thus obtaining economic profit.
On the other hand, if Tobin's q is less than 1, the market value is less than the recorded value of the assets of the company. This suggests that the market may be undervaluing the company, or that the company could increase profit by getting rid of some capital stock, either by selling it or by declining to replace it as it wears out.
John Mihaljevic points out that "no straightforward balancing mechanism exists in the case of low Q ratios, i.e., when the market is valuing an asset below its replacement cost (Q<1). When Q is less than parity, the market seems to be saying that the deployed real assets will not earn a sufficient rate of return and that, therefore, the owners of such assets must accept a discount to the replacement value if they desire to sell their assets in the market. If the real assets can be sold off at replacement cost, for example via an asset liquidation, such an action would be beneficial to shareholders because it would drive the Q ratio back up toward parity (Q->1). In the case of the stock market as a whole, rather than a single firm, the conclusion that assets should be liquidated does not typically apply. A low Q ratio for the entire market does not mean that blanket redeployment of resources across the economy will create value. Instead, when market-wide Q is less than parity, investors are probably being overly pessimistic about future asset returns."
Lang and Stulz found that diversified companies have a lower q-ratio than focused firms, because the market penalizes the value of the firm's assets.
Tobin's insights show that movements in stock prices will be reflected in changes in consumption and investment, although empirical evidence shows that the relationship is not as tight as one would have thought. This is largely because firms do not blindly base fixed investment decisions on movements in the stock price; rather they examine future interest rates and the present value of expected profits.
Other influences on q.
Tobin's q measures two variables - the current price of capital assets as measured by accountants or statisticians and the market value of equity and bonds - but there are other elements that may affect the value of q, namely:
Tobin's q is said to be influenced by market hype and intangible assets so that we see swings in q around the value of 1.
Kaldor's v and Tobin's q.
In his 1966 paper "Marginal Productivity and the Macro-Economic Theory of Distribution: Comment on Samuelson and Modigliani" co-authored with Luigi Pasinetti, Nicholas Kaldor introduced this relationship as part of his broader, non-marginalist theory of distribution. This theory is today known as the ‘Cambridge Growth Model’ after the location (University of Cambridge, UK) where the theory was devised. In the paper Kaldor writes:
The "valuation ratio" (v) [is] the relation of the market value of shares to the capital employed by corporations."
Kaldor then goes on to explore the properties of v at a properly macroeconomic level. He ends up deriving the following equation:
where "c" is net consumption out of capital, "sw" is the savings of workers, "g" is the growth rate, "Y" is income, "K" is capital, "sc" is savings out of capital and "i" is the fraction of new securities issued by firms. Kaldor then supplements this with a price, "p", equation for securities which is as follows:
He then goes on to lay out his interpretation of these equations:
The interpretation of these equations is as follows. Given the savings-coefficients and the capital-gains-coefficient, there will be a certain valuation ratio which will secure just enough savings by the personal sector to take up the new securities issued by corporations. Hence the "net" savings of the personal sector (available for investment by the business sector) will depend, not only on the savings propensities of individuals, but on the policies of corporations towards new issues. In the absence of new issues the level of security prices will be established at the point at which the purchases of securities by the savers will be balanced by the sale of securities of the dis-savers, making the net savings of the personal sector zero. The issue of new securities by corporations will depress security prices (i.e. the valuation ratio "v") just enough to reduce the sale of securities by the dis-savers sufficiently to induce the net savings required to take up the new issues. If "i" were negative and the corporations were net "purchasers" of securities from the personal sector (which they could be through the redemption of past securities, or purchasing shares from the personal sector for the acquisition of subsidiaries) the valuation ratio "v" would be driven up to the point at which net personal savings would be negative to the extent necessary to match the sale of securities by the personal sector.
Kaldor is clearly laying out equilibrium condition by which, "ceteris paribus", the stock of savings in existence at any given time is matched to the total numbers of securities outstanding in the market. He goes on to state:
In a state of Golden Age equilibrium (given a constant "g" and a constant "K/Y", however determined), "v" will be constant, with a value that can be ≷ 1, depending on the values of "sc", "sw", "c", and "i".
In this sentence Kaldor is laying out the determination of the "v" ratio in equilibrium (a constant "g" and a constant "K/Y") by: the savings out of capital, the savings of workers, net consumption out of capital and the issuance of new shares by firms.
Kaldor goes further still. Prior to this he had asserted that "the share of investment in total income is higher than the share of savings in wages, or in total personal income" is a "matter of fact" (i.e. a matter of empirical investigation that Kaldor thought would likely hold true). This is the so-called "Pasinetti inequality" and if we allow for it we can say something more concrete about the determination of "v":
[One] can assert that, given the Pasinetti inequality, "gK>sw.Y", "v"<1 when "c"=(1-"sw"). "i"=0; with "i">0 this will be true "a fortiori".
This fits nicely with the fact that Kaldor's "v" and Tobin's "q" tend on average to be below 1 thus suggesting that Pasinetti's inequality likely does hold in empirical reality.
Finally, Kaldor considers whether this exercise gives us any clue to the future development of income distribution in the capitalist system. The neoclassicals tended to argue that capitalism would eventually liquidate the capitalists and lead to a more homogeneous income distribution. Kaldor lays out a case whereby this might take place in his framework:
Has this "neo-Pasinetti theorem" any very-long-run "Pasinetti" or "anti-Pasinetti" solution? So far we have not taken any account of the change in distribution of assets between "workers" (i.e. pension funds) and "capitalists" - indeed we assumed it to be constant. However since the capitalists are "selling" shares (if "c">0) and the pension funds are buying them, one could suppose that the "share" of total assets in the hands of the capitalists would diminish continually, whereas the share of assets in the hands of the workers' funds would increase continuously until, at some distant day, the capitalists have no shares left; the pension funds and insurance companies would own them all!.
While this is a possible interpretation of the analysis Kaldor warns against it and lays out an alternative interpretation of the results:
But this view ignores that the ranks of the capitalist class are constantly renewed by the sons and daughters of the new Captains of Industry, replacing the grandsons and granddaughters of the older Captains who gradually dissipate their inheritance through living beyond their dividend income. It is reasonable to assume that the value of the shares of the newly formed and growing companies grows at a higher rate than the average, whilst those of older companies (which decline in relative importance) grow at a lower rate. This means that the rate of capital appreciation of the shares in the hands of the capitalist group as a whole, for the reasons given above, is greater than the rate of appreciation of the assets in the hands of pension funds, etc. Given the difference in the rates of appreciation of the two funds of securities-and this depends on the rate at which new corporations emerge and replace older ones-I think it can be shown that there will be, for any given constellation of the value of the parameters, a long run equilibrium distribution of the assets between capitalists and pension funds which will remain constant.
Kaldor's theory of "v" is comprehensive and provides an equilibrium determination of the variable based on macroeconomic theory that was missing in most other discussions. But today it is largely neglected and the focus is placed on Tobin's later contribution - hence the fact that the variable is known as Tobin's "q" and not Kaldor's "v".
Cassel's q.
In September 1996, at a lunch at the European Bank for Reconstruction and Development (EBRD), attended by Tobin, Mark Cutis of the EBRD and Brian Reading and Gabriel Stein of Lombard Street Research Ltd, Tobin mentioned that "in common with most American economists, [he] did not read anything in a foreign language" and that "in common with most post-War economists [he] did not read anything published before World War II". He was therefore greatly embarrassed when he discovered that in the 1920s, the Swedish economist Gustav Cassel had introduced a ratio between a physical asset's market value and its replacement value, which he called 'q'. Cassel's q thus antedates both Kaldor's and Tobin's by a number of decades.
Tobin's marginal q.
Tobin's marginal q is the ratio of the market value of an additional unit of capital to its replacement cost.
Price-to-book ratio (P/B).
In inflationary times, q will be lower than the price-to-book ratio. During periods of very high inflation, the book value would understate the cost of replacing a firm's assets, since the inflated prices of its assets would not be reflected on its balance sheet.
Criticism.
Olivier Blanchard, Changyong Rhee and Lawrence Summers found, with data on the US economy from the 1920s to the 1990s, that "fundamentals" predict investment much better than Tobin's q. What these authors call fundamentals is, however, the rate of profit, which connects these empirical findings with older ideas of authors such as Wesley Mitchell, or even Karl Marx, that profits are the basic engine of the market economy.
Doug Henwood, in his book "Wall Street", argues that the q ratio fails to accurately predict investment, as Tobin claims. "The data for Tobin and Brainard’s 1977 paper covers 1960 to 1974, a period for which "q" seemed to explain investment pretty well," he writes. "But as the chart [see right] shows, things started going awry even before the paper was published. While "q" and investment seemed to move together for the first half of the chart, they part ways almost at the middle; "q" collapsed during the bearish stock markets of the 1970s, yet investment rose." (p. 145)
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{\\text{(Equity Market Value + Liabilities Market Value)}}{\\text{(Equity Book Value + Liabilities Book Value)}}"
},
{
"math_id": 1,
"text": "\\frac{\\text{(Equity Market Value + Liabilities Book Value)}}{\\text{(Equity Book Value + Liabilities Book Value)}}"
},
{
"math_id": 2,
"text": "\\frac{\\text{Equity Market Value}}{\\text{Equity Book Value}}"
},
{
"math_id": 3,
"text": "\\frac{\\text{Equity Book Value}}{\\text{Equity Market Value}}"
},
{
"math_id": 4,
"text": "\\text{number of shares} \\times \\text{share price}"
},
{
"math_id": 5,
"text": "q=\\frac{\\text{value of stock market}}{\\text{corporate net worth}}"
}
] | https://en.wikipedia.org/wiki?curid=987604 |
98770 | Hidden Markov model | Statistical Markov model
A hidden Markov model (HMM) is a Markov model in which the observations are dependent on a latent (or "hidden") Markov process (referred to as formula_0). An HMM requires that there be an observable process formula_1 whose outcomes depend on the outcomes of formula_2 in a known way. Since formula_0 cannot be observed directly, the goal is to learn about the state of formula_2 by observing formula_3 By definition of being a Markov model, an HMM has an additional requirement that the outcome of formula_1 at time formula_4 must be "influenced" exclusively by the outcome of formula_0 at formula_4 and that the outcomes of formula_0 and formula_1 at formula_5 must be conditionally independent of formula_1 at formula_6 given formula_0 at time formula_7 Estimation of the parameters in an HMM can be performed using maximum likelihood. For linear chain HMMs, the Baum–Welch algorithm can be used to estimate the parameters.
Hidden Markov models are known for their applications to thermodynamics, statistical mechanics, physics, chemistry, economics, finance, signal processing, information theory, pattern recognition—such as speech, handwriting, gesture recognition, part-of-speech tagging, musical score following, partial discharges and bioinformatics.
Definition.
Let formula_8 and formula_9 be discrete-time stochastic processes and formula_10. The pair formula_11 is a "hidden Markov model" if
formula_12
for every formula_13 formula_14 and every Borel set formula_15.
Let formula_16 and formula_17 be continuous-time stochastic processes. The pair formula_18 is a "hidden Markov model" if
formula_19
for every formula_20 every Borel set formula_21 and every family of Borel sets formula_22
Terminology.
The states of the process formula_8 (resp. formula_23 are called "hidden states", and formula_24 (resp. formula_25 is called "emission probability" or "output probability".
Examples.
Drawing balls from hidden urns.
In its discrete form, a hidden Markov process can be visualized as a generalization of the urn problem with replacement (where each item from the urn is returned to the original urn before the next step). Consider this example: in a room that is not visible to an observer there is a genie. The room contains urns X1, X2, X3, ... each of which contains a known mix of balls, with each ball having a unique label y1, y2, y3, ... . The genie chooses an urn in that room and randomly draws a ball from that urn. It then puts the ball onto a conveyor belt, where the observer can observe the sequence of the balls but not the sequence of urns from which they were drawn. The genie has some procedure to choose urns; the choice of the urn for the "n"-th ball depends only upon a random number and the choice of the urn for the ("n" − 1)-th ball. The choice of urn does not directly depend on the urns chosen before this single previous urn; therefore, this is called a Markov process. It can be described by the upper part of Figure 1.
The Markov process itself cannot be observed, only the sequence of labeled balls, thus this arrangement is called a "hidden Markov process". This is illustrated by the lower part of the diagram shown in Figure 1, where one can see that balls y1, y2, y3, y4 can be drawn at each state. Even if the observer knows the composition of the urns and has just observed a sequence of three balls, "e.g." y1, y2 and y3 on the conveyor belt, the observer still cannot be "sure" which urn ("i.e.", at which state) the genie has drawn the third ball from. However, the observer can work out other information, such as the likelihood that the third ball came from each of the urns.
Weather guessing game.
Consider two friends, Alice and Bob, who live far apart from each other and who talk together daily over the telephone about what they did that day. Bob is only interested in three activities: walking in the park, shopping, and cleaning his apartment. The choice of what to do is determined exclusively by the weather on a given day. Alice has no definite information about the weather, but she knows general trends. Based on what Bob tells her he did each day, Alice tries to guess what the weather must have been like.
Alice believes that the weather operates as a discrete Markov chain. There are two states, "Rainy" and "Sunny", but she cannot observe them directly, that is, they are "hidden" from her. On each day, there is a certain chance that Bob will perform one of the following activities, depending on the weather: "walk", "shop", or "clean". Since Bob tells Alice about his activities, those are the "observations". The entire system is that of a hidden Markov model (HMM).
Alice knows the general weather trends in the area, and what Bob likes to do on average. In other words, the parameters of the HMM are known. They can be represented as follows in Python:
states = ("Rainy", "Sunny")
observations = ("walk", "shop", "clean")
transition_probability = {
"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
"Sunny": {"Rainy": 0.4, "Sunny": 0.6},
emission_probability = {
"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
"Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1},
In this piece of code, codice_0 represents Alice's belief about which state the HMM is in when Bob first calls her (all she knows is that it tends to be rainy on average). The particular probability distribution used here is not the equilibrium one, which is (given the transition probabilities) approximately codice_1. The codice_2 represents the change of the weather in the underlying Markov chain. In this example, there is only a 30% chance that tomorrow will be sunny if today is rainy. The codice_3 represents how likely Bob is to perform a certain activity on each day. If it is rainy, there is a 50% chance that he is cleaning his apartment; if it is sunny, there is a 60% chance that he is outside for a walk.
"A similar example is further elaborated in the Viterbi algorithm page."
Structural architecture.
The diagram below shows the general architecture of an instantiated HMM. Each oval shape represents a random variable that can adopt any of a number of values. The random variable "x"("t") is the hidden state at time t (with the model from the above diagram, "x"("t") ∈ { "x"1, "x"2, "x"3 }). The random variable "y"("t") is the observation at time t (with "y"("t") ∈ { "y"1, "y"2, "y"3, "y"4 }). The arrows in the diagram (often called a trellis diagram) denote conditional dependencies.
From the diagram, it is clear that the conditional probability distribution of the hidden variable "x"("t") at time t, given the values of the hidden variable x at all times, depends "only" on the value of the hidden variable "x"("t" − 1); the values at time "t" − 2 and before have no influence. This is called the Markov property. Similarly, the value of the observed variable "y"("t") only depends on the value of the hidden variable "x"("t") (both at time t).
In the standard type of hidden Markov model considered here, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from a categorical distribution) or continuous (typically from a Gaussian distribution). The parameters of a hidden Markov model are of two types, "transition probabilities" and "emission probabilities" (also known as "output probabilities"). The transition probabilities control the way the hidden state at time t is chosen given the hidden state at time formula_26.
The hidden state space is assumed to consist of one of N possible values, modelled as a categorical distribution. (See the section below on extensions for other possibilities.) This means that for each of the N possible states that a hidden variable at time t can be in, there is a transition probability from this state to each of the N possible states of the hidden variable at time formula_27, for a total of formula_28 transition probabilities. Note that the set of transition probabilities for transitions from any given state must sum to 1. Thus, the formula_29 matrix of transition probabilities is a Markov matrix. Because any transition probability can be determined once the others are known, there are a total of formula_30 transition parameters.
In addition, for each of the N possible states, there is a set of emission probabilities governing the distribution of the observed variable at a particular time given the state of the hidden variable at that time. The size of this set depends on the nature of the observed variable. For example, if the observed variable is discrete with M possible values, governed by a categorical distribution, there will be formula_31 separate parameters, for a total of formula_32 emission parameters over all hidden states. On the other hand, if the observed variable is an M-dimensional vector distributed according to an arbitrary multivariate Gaussian distribution, there will be M parameters controlling the means and formula_33 parameters controlling the covariance matrix, for a total of formula_34 emission parameters. (In such a case, unless the value of M is small, it may be more practical to restrict the nature of the covariances between individual elements of the observation vector, e.g. by assuming that the elements are independent of each other, or less restrictively, are independent of all but a fixed number of adjacent elements.)
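As a small worked check of the counting argument above, using the dimensions of the weather example (two hidden states, three observation symbols); the snippet is purely illustrative:

N, M = 2, 3                          # hidden states and discrete observation symbols
transition_parameters = N * (N - 1)  # each row of the N x N Markov matrix sums to 1
emission_parameters = N * (M - 1)    # each categorical emission distribution sums to 1
print(transition_parameters, emission_parameters)  # 2 4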
Inference.
Several inference problems are associated with hidden Markov models, as outlined below.
Probability of an observed sequence.
The task is to compute, given the parameters of the model, the probability of a particular output sequence as efficiently as possible. This requires summation over all possible state sequences:
The probability of observing a sequence
formula_35
of length "L" is given by
formula_36
where the sum runs over all possible hidden-node sequences
formula_37
Applying the principle of dynamic programming, this problem, too, can be handled efficiently using the forward algorithm.
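A minimal forward-algorithm sketch in Python is shown below, restating the weather model's parameters as arrays (the states and observations are encoded as integers, and the rain-leaning initial distribution is an assumption of this sketch):

import numpy as np

# States: 0 = Rainy, 1 = Sunny; observations: 0 = walk, 1 = shop, 2 = clean.
start = np.array([0.6, 0.4])                          # assumed initial distribution
trans = np.array([[0.7, 0.3], [0.4, 0.6]])            # transition probabilities
emit = np.array([[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]])   # emission probabilities

def forward(obs):
    # alpha[i] = P(y(0), ..., y(t), x(t) = i); summing over states at the end gives P(Y).
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return alpha.sum()

print(forward([0, 1, 2]))  # probability of observing walk, shop, clean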
Probability of the latent variables.
A number of related tasks ask about the probability of one or more of the latent variables, given the model's parameters and a sequence of observations formula_38
Filtering.
The task is to compute, given the model's parameters and a sequence of observations, the distribution over hidden states of the last latent variable at the end of the sequence, i.e. to compute formula_39. This task is used when the sequence of latent variables is thought of as the underlying states that a process moves through at a sequence of points in time, with corresponding observations at each point. Then, it is natural to ask about the state of the process at the end.
This problem can be handled efficiently using the forward algorithm. An example is when the algorithm is applied to a Hidden Markov Network to determine formula_40.
Smoothing.
This is similar to filtering but asks about the distribution of a latent variable somewhere in the middle of a sequence, i.e. to compute formula_41 for some formula_42. From the perspective described above, this can be thought of as the probability distribution over hidden states for a point in time "k" in the past, relative to time "t".
The forward-backward algorithm is a good method for computing the smoothed values for all hidden state variables.
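The sketch below adds a backward pass to the same toy model to obtain the smoothed posteriors formula_41 for every position in the sequence (the integer encoding and the assumed initial distribution are as in the forward sketch above):

import numpy as np

start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
emit = np.array([[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]])

def smooth(obs):
    T, N = len(obs), len(start)
    alpha, beta = np.zeros((T, N)), np.ones((T, N))
    alpha[0] = start * emit[:, obs[0]]
    for t in range(1, T):                 # forward pass
        alpha[t] = (alpha[t - 1] @ trans) * emit[:, obs[t]]
    for t in range(T - 2, -1, -1):        # backward pass
        beta[t] = trans @ (emit[:, obs[t + 1]] * beta[t + 1])
    posterior = alpha * beta              # proportional to P(x(k), y(1), ..., y(t))
    return posterior / posterior.sum(axis=1, keepdims=True)

print(smooth([0, 1, 2]))  # one row per day: posterior over (Rainy, Sunny)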
Most likely explanation.
The task, unlike the previous two, asks about the joint probability of the "entire" sequence of hidden states that generated a particular sequence of observations (see illustration on the right). This task is generally applicable when HMM's are applied to different sorts of problems from those for which the tasks of filtering and smoothing are applicable. An example is part-of-speech tagging, where the hidden states represent the underlying parts of speech corresponding to an observed sequence of words. In this case, what is of interest is the entire sequence of parts of speech, rather than simply the part of speech for a single word, as filtering or smoothing would compute.
This task requires finding a maximum over all possible state sequences, and can be solved efficiently by the Viterbi algorithm.
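A compact Viterbi sketch for the same toy model is given below (same integer encoding and assumed initial distribution); it returns the single most probable sequence of hidden states rather than per-position marginals:

import numpy as np

start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
emit = np.array([[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]])
state_names = ("Rainy", "Sunny")

def viterbi(obs):
    T, N = len(obs), len(start)
    delta = np.zeros((T, N))            # best joint probability of any path ending in each state
    back = np.zeros((T, N), dtype=int)  # best predecessor of each state at each time
    delta[0] = start * emit[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * trans   # scores[i, j]: arrive in j coming from i
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * emit[:, obs[t]]
    path = [int(delta[-1].argmax())]             # best final state, then trace backwards
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [state_names[s] for s in reversed(path)]

print(viterbi([0, 1, 2]))  # ['Sunny', 'Rainy', 'Rainy'] for walk, shop, clean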
Statistical significance.
For some of the above problems, it may also be interesting to ask about statistical significance. What is the probability that a sequence drawn from some null distribution will have an HMM probability (in the case of the forward algorithm) or a maximum state sequence probability (in the case of the Viterbi algorithm) at least as large as that of a particular output sequence? When an HMM is used to evaluate the relevance of a hypothesis for a particular output sequence, the statistical significance indicates the false positive rate associated with failing to reject the hypothesis for the output sequence.
Learning.
The parameter learning task in HMMs is to find, given an output sequence or a set of such sequences, the best set of state transition and emission probabilities. The task is usually to derive the maximum likelihood estimate of the parameters of the HMM given the set of output sequences. No tractable algorithm is known for solving this problem exactly, but a local maximum likelihood can be derived efficiently using the Baum–Welch algorithm or the Baldi–Chauvin algorithm. The Baum–Welch algorithm is a special case of the expectation-maximization algorithm.
If the HMMs are used for time series prediction, more sophisticated Bayesian inference methods, like Markov chain Monte Carlo (MCMC) sampling, are proven to be favorable over finding a single maximum likelihood model both in terms of accuracy and stability. Since MCMC imposes a significant computational burden, in cases where computational scalability is also of interest, one may alternatively resort to variational approximations to Bayesian inference. Indeed, approximate variational inference offers computational efficiency comparable to expectation-maximization, while yielding an accuracy profile only slightly inferior to exact MCMC-type Bayesian inference.
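For the learning task, the following is a bare-bones Baum–Welch (expectation–maximization) sketch in plain numpy, written for readability rather than numerical robustness (no scaling or log-space arithmetic, and a single observation sequence); the observation sequence, random initialization and iteration count below are arbitrary choices:

import numpy as np

rng = np.random.default_rng(1)

def baum_welch(obs, N, M, n_iter=50):
    start = np.full(N, 1.0 / N)
    trans = rng.dirichlet(np.ones(N), size=N)
    emit = rng.dirichlet(np.ones(M), size=N)
    T = len(obs)
    for _ in range(n_iter):
        # E-step: forward and backward passes.
        alpha, beta = np.zeros((T, N)), np.ones((T, N))
        alpha[0] = start * emit[:, obs[0]]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ trans) * emit[:, obs[t]]
        for t in range(T - 2, -1, -1):
            beta[t] = trans @ (emit[:, obs[t + 1]] * beta[t + 1])
        likelihood = alpha[-1].sum()
        gamma = alpha * beta / likelihood                 # P(x(t) = i | Y)
        xi = np.zeros((T - 1, N, N))                      # P(x(t) = i, x(t+1) = j | Y)
        for t in range(T - 1):
            xi[t] = alpha[t][:, None] * trans * (emit[:, obs[t + 1]] * beta[t + 1])[None, :] / likelihood
        # M-step: re-estimate the parameters from the expected counts.
        start = gamma[0]
        trans = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        emit = np.array([[gamma[np.array(obs) == k, i].sum() for k in range(M)] for i in range(N)])
        emit /= emit.sum(axis=1, keepdims=True)
    return start, trans, emit

obs = [0, 1, 2, 0, 2, 2, 1, 0, 1, 2]   # an arbitrary sequence over three symbols
print(baum_welch(obs, N=2, M=3))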
Applications.
HMMs can be applied in many fields where the goal is to recover a data sequence that is not immediately observable (but other data that depend on the sequence are). Applications include:
History.
Hidden Markov models were described in a series of statistical papers by Leonard E. Baum and other authors in the second half of the 1960s. One of the first applications of HMMs was speech recognition, starting in the mid-1970s.
In the second half of the 1980s, HMMs began to be applied to the analysis of biological sequences, in particular DNA. Since then, they have become ubiquitous in the field of bioinformatics.
Extensions.
General state spaces.
In the hidden Markov models considered above, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from a categorical distribution) or continuous (typically from a Gaussian distribution). Hidden Markov models can also be generalized to allow continuous state spaces. Examples of such models are those where the Markov process over hidden variables is a linear dynamical system, with a linear relationship among related variables and where all hidden and observed variables follow a Gaussian distribution. In simple cases, such as the linear dynamical system just mentioned, exact inference is tractable (in this case, using the Kalman filter); however, in general, exact inference in HMMs with continuous latent variables is infeasible, and approximate methods must be used, such as the extended Kalman filter or the particle filter.
Nowadays, inference in hidden Markov models is performed in nonparametric settings, where the dependency structure enables identifiability of the model and the learnability limits are still under exploration.
Bayesian modeling of the transitions probabilities.
Hidden Markov models are generative models, in which the joint distribution of observations and hidden states, or equivalently both the prior distribution of hidden states (the "transition probabilities") and conditional distribution of observations given states (the "emission probabilities"), is modeled. The above algorithms implicitly assume a uniform prior distribution over the transition probabilities. However, it is also possible to create hidden Markov models with other types of prior distributions. An obvious candidate, given the categorical distribution of the transition probabilities, is the Dirichlet distribution, which is the conjugate prior distribution of the categorical distribution. Typically, a symmetric Dirichlet distribution is chosen, reflecting ignorance about which states are inherently more likely than others. The single parameter of this distribution (termed the "concentration parameter") controls the relative density or sparseness of the resulting transition matrix. A choice of 1 yields a uniform distribution. Values greater than 1 produce a dense matrix, in which the transition probabilities between pairs of states are likely to be nearly equal. Values less than 1 result in a sparse matrix in which, for each given source state, only a small number of destination states have non-negligible transition probabilities. It is also possible to use a two-level prior Dirichlet distribution, in which one Dirichlet distribution (the upper distribution) governs the parameters of another Dirichlet distribution (the lower distribution), which in turn governs the transition probabilities. The upper distribution governs the overall distribution of states, determining how likely each state is to occur; its concentration parameter determines the density or sparseness of states. Such a two-level prior distribution, where both concentration parameters are set to produce sparse distributions, might be useful for example in unsupervised part-of-speech tagging, where some parts of speech occur much more commonly than others; learning algorithms that assume a uniform prior distribution generally perform poorly on this task. The parameters of models of this sort, with non-uniform prior distributions, can be learned using Gibbs sampling or extended versions of the expectation-maximization algorithm.
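The effect of the concentration parameter described above can be illustrated with a short sketch that draws each row of a transition matrix from a symmetric Dirichlet distribution (the number of states and the two parameter values below are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
N = 4  # number of hidden states

for alpha in (5.0, 0.1):
    # Each row of the transition matrix is an independent draw from Dirichlet(alpha, ..., alpha).
    rows = rng.dirichlet(alpha * np.ones(N), size=N)
    print(f"alpha = {alpha}:")
    print(rows.round(2))
# A value well above 1 gives dense, near-uniform rows; a value well below 1 concentrates
# each row's mass on only a few destination states, i.e. a sparse transition matrix.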
An extension of the previously described hidden Markov models with Dirichlet priors uses a Dirichlet process in place of a Dirichlet distribution. This type of model allows for an unknown and potentially infinite number of states. It is common to use a two-level Dirichlet process, similar to the previously described model with two levels of Dirichlet distributions. Such a model is called a "hierarchical Dirichlet process hidden Markov model", or "HDP-HMM" for short. It was originally described under the name "Infinite Hidden Markov Model" and was further formalized in "Hierarchical Dirichlet Processes".
Discriminative approach.
A different type of extension uses a discriminative model in place of the generative model of standard HMMs. This type of model directly models the conditional distribution of the hidden states given the observations, rather than modeling the joint distribution. An example of this model is the so-called "maximum entropy Markov model" (MEMM), which models the conditional distribution of the states using logistic regression (also known as a "maximum entropy model"). The advantage of this type of model is that arbitrary features (i.e. functions) of the observations can be modeled, allowing domain-specific knowledge of the problem at hand to be injected into the model. Models of this sort are not limited to modeling direct dependencies between a hidden state and its associated observation; rather, features of nearby observations, of combinations of the associated observation and nearby observations, or in fact of arbitrary observations at any distance from a given hidden state can be included in the process used to determine the value of a hidden state. Furthermore, there is no need for these features to be statistically independent of each other, as would be the case if such features were used in a generative model. Finally, arbitrary features over pairs of adjacent hidden states can be used rather than simple transition probabilities. The disadvantages of such models are: (1) The types of prior distributions that can be placed on hidden states are severely limited; (2) It is not possible to predict the probability of seeing an arbitrary observation. This second limitation is often not an issue in practice, since many common usages of HMM's do not require such predictive probabilities.
A variant of the previously described discriminative model is the linear-chain conditional random field. This uses an undirected graphical model (aka Markov random field) rather than the directed graphical models of MEMM's and similar models. The advantage of this type of model is that it does not suffer from the so-called "label bias" problem of MEMM's, and thus may make more accurate predictions. The disadvantage is that training can be slower than for MEMM's.
Other extensions.
Yet another variant is the "factorial hidden Markov model", which allows for a single observation to be conditioned on the corresponding hidden variables of a set of formula_43 independent Markov chains, rather than a single Markov chain. It is equivalent to a single HMM, with formula_44 states (assuming there are formula_45 states for each chain), and therefore, learning in such a model is difficult: for a sequence of length formula_46, a straightforward Viterbi algorithm has complexity formula_47. To find an exact solution, a junction tree algorithm could be used, but it results in an formula_48 complexity. In practice, approximate techniques, such as variational approaches, could be used.
All of the above models can be extended to allow for more distant dependencies among hidden states, e.g. allowing for a given state to be dependent on the previous two or three states rather than a single previous state; i.e. the transition probabilities are extended to encompass sets of three or four adjacent states (or in general formula_43 adjacent states). The disadvantage of such models is that dynamic-programming algorithms for training them have an formula_49 running time, for formula_43 adjacent states and formula_46 total observations (i.e. a length-formula_46 Markov chain). This extension has been widely used in bioinformatics, in the modeling of DNA sequences.
Another recent extension is the "triplet Markov model", in which an auxiliary underlying process is added to model some data specificities. Many variants of this model have been proposed. One should also mention the interesting link that has been established between the "theory of evidence" and the "triplet Markov models", which allows one to fuse data in a Markovian context and to model nonstationary data. Note that alternative multi-stream data fusion strategies have also been proposed in the recent literature.
Finally, a different rationale towards addressing the problem of modeling nonstationary data by means of hidden Markov models was suggested in 2012. It consists in employing a small recurrent neural network (RNN), specifically a reservoir network, to capture the evolution of the temporal dynamics in the observed data. This information, encoded in the form of a high-dimensional vector, is used as a conditioning variable of the HMM state transition probabilities. Under such a setup, we eventually obtain a nonstationary HMM the transition probabilities of which evolve over time in a manner that is inferred from the data itself, as opposed to some unrealistic ad-hoc model of temporal evolution.
In 2023, two innovative algorithms were introduced for the Hidden Markov Model. These algorithms enable the computation of the posterior distribution of the HMM without the necessity of explicitly modeling the joint distribution, utilizing only the conditional distributions. Unlike traditional methods such as the Forward-Backward and Viterbi algorithms, which require knowledge of the joint law of the HMM and can be computationally intensive to learn, the Discriminative Forward-Backward and Discriminative Viterbi algorithms circumvent the need for the observation's law. This breakthrough allows the HMM to be applied as a discriminative model, offering a more efficient and versatile approach to leveraging Hidden Markov Models in various applications.
The model suitable in the context of longitudinal data is named latent Markov model. The basic version of this model has been extended to include individual covariates, random effects and to model more complex data structures such as multilevel data. A complete overview of the latent Markov models, with special attention to the model assumptions and to their practical use, is provided in the literature.
Measure theory.
Given a Markov transition matrix and an invariant distribution on the states, we can impose a probability measure on the set of subshifts. For example, consider the Markov chain given on the left on the states formula_50, with invariant distribution formula_51. If we "forget" the distinction between formula_52, we project this space of subshifts on formula_50 into another space of subshifts on formula_53, and this projection also projects the probability measure down to a probability measure on the subshifts on formula_53.
The curious thing is that the probability measure on the subshifts on formula_53 is not created by a Markov chain on formula_53, not even multiple orders. Intuitively, this is because if one observes a long sequence of formula_54, then one would become increasingly sure that formula_55, meaning that the observable part of the system can be affected by something infinitely in the past.
Conversely, there exists a space of subshifts on 6 symbols, projected to subshifts on 2 symbols, such that any Markov measure on the smaller subshift has a preimage measure that is not Markov of any order (Example 2.6 ).
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " X "
},
{
"math_id": 1,
"text": " Y "
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": " Y. "
},
{
"math_id": 4,
"text": " t = t_0 "
},
{
"math_id": 5,
"text": " t < t_0 "
},
{
"math_id": 6,
"text": " t=t_0 "
},
{
"math_id": 7,
"text": " t = t_0. "
},
{
"math_id": 8,
"text": "X_n"
},
{
"math_id": 9,
"text": "Y_n"
},
{
"math_id": 10,
"text": "n\\geq 1"
},
{
"math_id": 11,
"text": "(X_n,Y_n)"
},
{
"math_id": 12,
"text": "\\operatorname{\\mathbf{P}}\\bigl(Y_n \\in A\\ \\bigl|\\ X_1=x_1,\\ldots,X_n=x_n\\bigr)=\\operatorname{\\mathbf{P}}\\bigl(Y_n \\in A\\ \\bigl|\\ X_n=x_n\\bigr),"
},
{
"math_id": 13,
"text": "n\\geq 1,"
},
{
"math_id": 14,
"text": "x_1,\\ldots, x_n,"
},
{
"math_id": 15,
"text": "A"
},
{
"math_id": 16,
"text": "X_t"
},
{
"math_id": 17,
"text": "Y_t"
},
{
"math_id": 18,
"text": "(X_t,Y_t)"
},
{
"math_id": 19,
"text": "\\operatorname{\\mathbf{P}}(Y_{t_0} \\in A \\mid \\{X_t \\in B_t\\}_{ t\\leq t_0}) = \\operatorname{\\mathbf{P}}(Y_{t_0} \\in A \\mid X_{t_0} \\in B_{t_0})"
},
{
"math_id": 20,
"text": " t_0, "
},
{
"math_id": 21,
"text": " A, "
},
{
"math_id": 22,
"text": " \\{B_t\\}_{t \\leq t_0}. "
},
{
"math_id": 23,
"text": "X_t)"
},
{
"math_id": 24,
"text": "\\operatorname{\\mathbf{P}}\\bigl(Y_n \\in A \\mid X_n=x_n\\bigr)"
},
{
"math_id": 25,
"text": "\\operatorname{\\mathbf{P}}\\bigl(Y_t \\in A \\mid X_t \\in B_t\\bigr))"
},
{
"math_id": 26,
"text": "t-1"
},
{
"math_id": 27,
"text": "t+1"
},
{
"math_id": 28,
"text": "N^2"
},
{
"math_id": 29,
"text": "N \\times N"
},
{
"math_id": 30,
"text": "N(N-1)"
},
{
"math_id": 31,
"text": "M-1"
},
{
"math_id": 32,
"text": "N(M-1)"
},
{
"math_id": 33,
"text": "\\frac {M(M+1)} 2"
},
{
"math_id": 34,
"text": "N \\left(M + \\frac{M(M+1)}{2}\\right) = \\frac {NM(M+3)} 2 = O(NM^2)"
},
{
"math_id": 35,
"text": "Y=y(0), y(1),\\dots,y(L-1)\\,"
},
{
"math_id": 36,
"text": "P(Y)=\\sum_{X}P(Y\\mid X)P(X),\\,"
},
{
"math_id": 37,
"text": "X=x(0), x(1), \\dots, x(L-1).\\,"
},
{
"math_id": 38,
"text": "y(1),\\dots,y(t)."
},
{
"math_id": 39,
"text": "P(x(t)\\ |\\ y(1),\\dots,y(t))"
},
{
"math_id": 40,
"text": "\\mathrm{P}\\big( h_t \\ | v_{1:t} \\big)"
},
{
"math_id": 41,
"text": "P(x(k)\\ |\\ y(1), \\dots, y(t))"
},
{
"math_id": 42,
"text": "k < t"
},
{
"math_id": 43,
"text": "K"
},
{
"math_id": 44,
"text": "N^K"
},
{
"math_id": 45,
"text": "N"
},
{
"math_id": 46,
"text": "T"
},
{
"math_id": 47,
"text": "O(N^{2K} \\, T)"
},
{
"math_id": 48,
"text": "O(N^{K+1} \\, K \\, T)"
},
{
"math_id": 49,
"text": "O(N^K \\, T)"
},
{
"math_id": 50,
"text": "A, B_1, B_2 "
},
{
"math_id": 51,
"text": "\\pi = (2/7, 4/7, 1/7) "
},
{
"math_id": 52,
"text": "B_1, B_2"
},
{
"math_id": 53,
"text": "A, B "
},
{
"math_id": 54,
"text": "B^n"
},
{
"math_id": 55,
"text": "Pr(A | B^n) \\to \\frac 23 "
}
] | https://en.wikipedia.org/wiki?curid=98770 |
9877288 | Skew lattice | In abstract algebra, a skew lattice is an algebraic structure that is a non-commutative generalization of a lattice. While the term "skew lattice" can be used to refer to any non-commutative generalization of a lattice, since 1989 it has been used primarily as follows.
Definition.
A skew lattice is a set "S" equipped with two associative, idempotent binary operations formula_0 and formula_1, called "meet" and "join", that validate the following dual pair of absorption laws
formula_2,
formula_3.
Given that formula_4 and formula_5 are associative and idempotent, these identities are equivalent to validating the following dual pair of statements:
formula_6 if formula_7,
formula_8 if formula_9.
Historical background.
For over 60 years, noncommutative variations of lattices have been studied with differing motivations. For some the motivation has been an interest in the conceptual boundaries of lattice theory; for others it was a search for noncommutative forms of logic and Boolean algebra; and for others it has been the behavior of idempotents in rings. A "noncommutative lattice", generally speaking, is an algebra formula_10 where formula_5 and formula_4 are associative, idempotent binary operations connected by absorption identities guaranteeing that formula_5 in some way dualizes formula_4. The precise identities chosen depends upon the underlying motivation, with differing choices producing distinct varieties of algebras.
Pascual Jordan, motivated by questions in quantum logic, initiated a study of "noncommutative lattices" in his 1949 paper, "Über Nichtkommutative Verbände", choosing the absorption identities
formula_11
He referred to those algebras satisfying them as "Schrägverbände". By varying or augmenting these identities, Jordan and others obtained a number of varieties of noncommutative lattices.
Beginning with Jonathan Leech's 1989 paper, "Skew lattices in rings", skew lattices as defined above have been the primary objects of study. This was aided by previous results about bands. This was especially the case for many of the basic properties.
Basic properties.
Natural partial order and natural quasiorder
In a skew lattice formula_12, the natural partial order is defined by formula_13 if formula_14, or dually, formula_15. The natural preorder on formula_12 is given by formula_16 if formula_17 or dually formula_18. While formula_19 and formula_20 agree on lattices, formula_19 properly refines formula_20 in the noncommutative case. The induced natural equivalence formula_21 is defined by formula_22 if formula_23, that is, formula_24
and formula_25 or dually, formula_18 and formula_26. The blocks of the partition formula_27 are
lattice ordered by formula_28 if formula_29 and formula_30 exist such that formula_31. This permits us to draw Hasse diagrams of skew lattices such as the following pair:
E.g., in the diagram on the left above, that formula_32 and formula_33 are formula_21 related is expressed by the dashed
segment. The slanted lines reveal the natural partial order between elements of the distinct formula_21-classes. The elements formula_34, formula_35 and formula_36 form the singleton formula_21-classes.
Rectangular Skew Lattices
Skew lattices consisting of a single formula_21-class are called rectangular. They are characterized by the equivalent identities: formula_37, formula_26 and formula_38. Rectangular skew lattices are isomorphic to skew lattices having the following construction (and conversely): given nonempty
sets formula_39 and formula_40, on formula_41 define formula_42 and formula_43. The formula_21-class partition of a skew lattice formula_12, as indicated in the above diagrams, is the unique partition of formula_12 into its maximal rectangular subalgebras. Moreover, formula_21 is a congruence with the induced quotient algebra formula_44 being the maximal lattice image of formula_12, thus making every skew lattice formula_12 a lattice of rectangular subalgebras. This is the Clifford–McLean theorem for skew lattices, first given for bands separately by Clifford and McLean. It is also known as "the first decomposition theorem for skew lattices".
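The rectangular construction can be checked mechanically. The sketch below uses one common convention for the two operations on the Cartesian product of the two sets (which coordinate each operation keeps is our choice here; the dual convention works equally well) and verifies idempotency, associativity, the absorption laws and rectangularity by brute force on a small example:

from itertools import product

L, R = range(2), range(3)
S = list(product(L, R))   # the underlying set: pairs (l, r)

def join(x, y):           # x v y keeps x's first coordinate and y's second
    return (x[0], y[1])

def meet(x, y):           # x ^ y keeps y's first coordinate and x's second
    return (y[0], x[1])

assert all(meet(x, x) == x and join(x, x) == x for x in S)                                    # idempotency
assert all(meet(meet(x, y), z) == meet(x, meet(y, z)) for x, y, z in product(S, repeat=3))    # associativity
assert all(join(join(x, y), z) == join(x, join(y, z)) for x, y, z in product(S, repeat=3))
assert all(meet(x, join(x, y)) == x == meet(join(y, x), x) for x, y in product(S, repeat=2))  # absorption
assert all(join(x, meet(x, y)) == x == join(meet(y, x), x) for x, y in product(S, repeat=2))
assert all(meet(meet(x, y), x) == x for x, y in product(S, repeat=2))                         # rectangularity
print("all skew lattice axioms hold on the 2 x 3 example")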
Right (left) handed skew lattices and the Kimura factorization
A skew lattice is right-handed if it satisfies the identity formula_45 or dually, formula_46.
These identities essentially assert that formula_47 and formula_48 in each formula_21-class. Every skew lattice formula_12 has a unique maximal right-handed image formula_49 where the congruence formula_39 is defined by formula_50 if both formula_51 and formula_52 (or dually, formula_53 and formula_54). Likewise a skew lattice is left-handed if formula_51 and formula_53 in each formula_21-class. Again the maximal left-handed image of a skew lattice formula_12 is the image formula_55 where the congruence formula_40 is defined in dual fashion to formula_39. Many examples of skew lattices are either right- or left-handed. In the lattice of congruences, formula_56 and formula_57 is the identity congruence formula_58. The induced epimorphism formula_59 factors through both induced epimorphisms formula_60 and formula_61. Setting formula_62, the homomorphism formula_63 defined by formula_64, induces an isomorphism formula_65. This is the Kimura factorization of formula_12 into a fibred product of its maximal right- and left-handed images.
Like the Clifford–McLean theorem, Kimura factorization (or the "second decomposition theorem for skew lattices") was first given for regular bands (bands that satisfy the middle absorption
identity, formula_66). Indeed, both formula_5 and formula_4 are regular band operations. The above symbols formula_21, formula_40 and formula_39 come, of course, from basic semigroup theory.
Subvarieties of skew lattices.
Skew lattices form a variety. Rectangular skew lattices, left-handed and right-handed skew lattices all form subvarieties that are central to the basic structure theory of skew lattices. Here are several
more.
Symmetric skew lattices
A skew lattice "S" is symmetric if for any formula_67 , formula_68 if formula_69. Occurrences of commutation are thus unambiguous for such skew lattices, with subsets of pairwise commuting elements generating commutative subalgebras, i.e., sublattices. (This is not true for skew lattices in general.) Equational bases for this subvariety, first given by Spinks are:
formula_70 and formula_71.
A lattice section of a skew lattice formula_12 is a sublattice formula_72 of formula_12 meeting each formula_21-class of formula_12 at a single element. formula_72 is thus an internal copy of the lattice formula_44 with the composition formula_73 being an isomorphism. All symmetric skew lattices for which formula_74 admit a lattice section. Symmetric or not, having a lattice section formula_72 guarantees that formula_12 also has internal copies of formula_49 and formula_55 given respectively by formula_75 and formula_76, where formula_77 and formula_78 are the formula_40 and formula_39 congruence classes of formula_79 in formula_72. Thus formula_80 and formula_81 are isomorphisms. This leads to a commuting diagram of embeddings dualizing the preceding Kimura diagram.
Cancellative skew lattices
A skew lattice is cancellative if formula_82 and formula_83 implies formula_84 and likewise formula_85 and formula_86 implies formula_87. Cancellative skew lattices are symmetric and can be shown to form a variety. Unlike the situation for lattices, cancellative skew lattices need not be distributive, and distributive skew lattices need not be cancellative.
Distributive skew lattices
Distributive skew lattices are determined by the identities:
formula_88 (D1)
formula_89 (D'1)
Unlike lattices, (D1) and (D'1) are not equivalent in general for skew lattices, but they are for symmetric skew lattices. The condition (D1) can be strengthened to
formula_90 (D2)
in which case (D'1) is a consequence. A skew lattice formula_12 satisfies both (D2) and its dual, formula_91, if and only if it factors as the product of a distributive lattice and a rectangular skew lattice. In this latter case (D2) can be strengthened to
formula_92 and formula_93. (D3)
On its own, (D3) is equivalent to (D2) when symmetry is added. We thus have six subvarieties of skew lattices determined respectively by (D1), (D2), (D3) and their duals.
Normal skew lattices
As seen above, formula_5 and formula_4 satisfy the identity formula_66. Bands satisfying the stronger identity, formula_94, are called normal. A skew lattice is normal skew if it satisfies
formula_95
For each element a in a normal skew lattice formula_12, the set formula_96 defined by {formula_97} or equivalently {formula_98} is a sublattice of formula_12, and conversely. (Thus normal skew lattices have also been called local lattices.) When both formula_0 and formula_1 are normal, formula_12 splits isomorphically into a product formula_99 of a lattice formula_72 and a rectangular skew lattice formula_21, and conversely. Thus both normal skew lattices and split skew lattices form varieties. Returning to distributivity, formula_100, so that formula_101 characterizes the variety of distributive, normal skew lattices, and (D3) characterizes the variety of symmetric, distributive, normal skew lattices.
Categorical skew lattices
A skew lattice is categorical if nonempty composites of coset bijections are coset bijections. Categorical skew lattices form a variety. Skew lattices in rings and normal skew lattices are examples
of algebras in this variety. Let formula_102 with formula_29, formula_30 and formula_103, formula_104 be the coset bijection from formula_105 to formula_106 taking formula_32 to formula_33, formula_107 be the coset bijection from formula_106 to formula_108 taking formula_33 to formula_35 and finally formula_109 be the coset bijection from formula_105 to formula_108 taking formula_32 to formula_35. A skew lattice formula_12 is categorical if one always has the equality formula_110, i.e., if the composite partial bijection formula_111, if nonempty, is a coset bijection from a formula_108-coset of formula_105 to an formula_105-coset of formula_108. That is, formula_112.
All distributive skew lattices are categorical, though symmetric skew lattices need not be. In a sense this reveals the independence between the properties of symmetry and distributivity.
Skew Boolean algebras.
A zero element in a skew lattice "S" is an element 0 of "S" such that for all formula_113 formula_114 or, dually, formula_115 (0)
A Boolean skew lattice is a symmetric, distributive normal skew lattice with 0, formula_116 such that formula_96 is a Boolean lattice for each formula_117 Given such a skew lattice "S", a difference operator \ is defined by x \ y = formula_118 where the latter is evaluated in the Boolean lattice formula_119 In the presence of (D3) and (0), \ is characterized by the identities:
formula_120 and formula_121 (S B)
One thus has a variety of skew Boolean algebras formula_122 characterized by identities (D3), (0) and (S B). A primitive skew Boolean algebra consists of 0 and a single non-0 "D"-class. Thus it is the result of adjoining a 0 to a rectangular skew lattice "D" via (0) with formula_123, if formula_124
and formula_36 otherwise. Every skew Boolean algebra is a subdirect product of primitive algebras. Skew Boolean algebras play an important role in the study of discriminator varieties and other generalizations in universal algebra of Boolean behavior.
Skew lattices in rings.
Let formula_105 be a ring and let formula_125 denote the set of all idempotents in formula_105. For all formula_126 set formula_127 and formula_128.
Clearly formula_5 but also formula_4 is associative. If a subset formula_129 is closed under formula_5 and formula_4, then formula_130 is a distributive, cancellative skew lattice. To find such skew lattices in formula_125 one looks at bands in formula_125, especially the ones that are maximal with respect to some constraint. In fact, every multiplicative band in formula_131 that is maximal with respect to being right regular (that is, satisfying the identity "xyx" = "yx") is also closed under formula_4 and so forms a right-handed skew lattice. In general, every right regular band in formula_125 generates a right-handed skew lattice in formula_125. Dual remarks also hold for left regular bands (bands satisfying the identity formula_132) in formula_125. Maximal regular bands need not be closed under formula_4 as defined; counterexamples are easily found using multiplicative rectangular bands. These cases are closed, however, under the cubic variant of formula_4 defined by formula_133 since in these cases formula_134 reduces to formula_135 to give the dual rectangular band. By replacing the condition of regularity by normality formula_136, every maximal normal multiplicative band formula_12 in formula_125 is also closed under formula_137, with formula_138 where formula_139, and so forms a Boolean skew lattice. When formula_125 itself is closed under multiplication, it is a normal band and thus forms a Boolean skew lattice. In fact, any skew Boolean algebra can be embedded into such an algebra. When A has a multiplicative identity formula_34, the condition that formula_125 is multiplicatively closed is well known to imply that formula_125 forms a Boolean algebra. Skew lattices in rings continue to be a good source of examples and motivation.
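A small numerical illustration is given below. It assumes the usual choice of operations in this setting, namely the ring product for the meet and "e" + "f" − "ef" for the join, and uses a specific pair of non-commuting idempotent 2 × 2 real matrices chosen purely for demonstration:

import numpy as np

e = np.array([[1, 0], [0, 0]])   # idempotent: e @ e == e
f = np.array([[1, 0], [1, 0]])   # idempotent, and e @ f != f @ e

def meet(x, y):                  # assumed meet: the ring product
    return x @ y

def join(x, y):                  # assumed join: x + y - xy
    return x + y - x @ y

results = {"e^f": meet(e, f), "f^e": meet(f, e), "e v f": join(e, f), "f v e": join(f, e)}
for name, m in results.items():
    assert np.array_equal(m @ m, m), name   # every result is again an idempotent
print(np.array_equal(meet(e, f), e), np.array_equal(join(e, f), f))  # True True: {e, f} is closed

Here {"e", "f"} is a two-element rectangular subalgebra of the idempotents of the matrix ring.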
Primitive skew lattices.
Skew lattices consisting of exactly two "D"-classes are called primitive skew lattices. Given such a skew lattice formula_12 with formula_21-classes formula_28 in formula_27, then for any formula_29 and formula_30, the subsets
formula_140 {formula_141} formula_142 and formula_143 {formula_144} formula_145
are called, respectively, "cosets of A in B" and "cosets of B in A". These cosets partition B and A with formula_146 and formula_147. Cosets are always rectangular subalgebras in their formula_21-classes. What is more, the partial order formula_148 induces a coset bijection formula_149 defined by:
formula_150 iff formula_151, for formula_152 and formula_153.
Collectively, coset bijections describe formula_148 between the subsets formula_105 and formula_106. They also determine formula_4 and formula_0 for pairs of elements from distinct formula_21-classes. Indeed, given formula_29 and formula_30, let formula_154 be the
coset bijection between the cosets formula_155 in formula_105 and formula_156 in formula_106. Then:
formula_157 and formula_158.
In general, given formula_159 and formula_160 with formula_31 and formula_161, then formula_162 belong to a common formula_106-coset in formula_105 and formula_163 belong to a common formula_105-coset in formula_106 if and only if formula_164. Thus each coset bijection is, in some sense, a maximal collection of mutually parallel pairs formula_31.
Every primitive skew lattice formula_12 factors as the fibred product of its maximal left and right-handed primitive images formula_165. Right-handed primitive skew lattices are constructed as follows. Let formula_166 and formula_167 be partitions of disjoint nonempty sets formula_105 and formula_106, where all formula_168 and formula_169 share a common size. For each pair formula_170 pick a fixed bijection formula_171 from formula_168 onto formula_169. On formula_105 and formula_106 separately set formula_172 and formula_173; but given formula_29 and formula_30, set
formula_174 and formula_175
where formula_176 and formula_177 with formula_178 belonging to the cell formula_168 of formula_32 and formula_179 belonging to the cell formula_169 of formula_33. The various formula_180 are the coset bijections. This is illustrated in the following partial Hasse diagram where formula_181 and the arrows indicate the formula_182-outputs and formula_148 from formula_105 and formula_106.
One constructs left-handed primitive skew lattices in dual fashion. All right [left] handed primitive skew lattices can be constructed in this fashion.
The coset structure of skew lattices.
A nonrectangular skew lattice formula_12 is covered by its maximal primitive skew lattices: given comparable formula_21-classes formula_28 in formula_27, formula_183 forms a maximal primitive subalgebra of formula_12 and every formula_21-class in formula_12 lies in such a subalgebra. The coset structures on these primitive subalgebras combine to determine the outcomes formula_184 and formula_185 at least when formula_186 and formula_187 are comparable under formula_188. It turns out that formula_184 and formula_185 are determined in general by cosets and their bijections, although in
a slightly less direct manner than the formula_188-comparable case. In particular, given two incomparable "D"-classes A and B with join "D"-class "J" and meet "D"-class formula_189 in formula_27, interesting connections arise between the two coset decompositions of J (or M) with respect to A and B.
Thus a skew lattice may be viewed as a coset atlas of rectangular skew lattices placed on the vertices of a lattice and coset bijections between them, the latter seen as partial isomorphisms
between the rectangular algebras with each coset bijection determining a corresponding pair of cosets. This perspective gives, in essence, the Hasse diagram of the skew lattice, which is easily
drawn in cases of relatively small order. (See the diagrams in Section 3 above.) Given a chain of "D"-classes formula_190 in formula_27, one has three sets of coset bijections: from A to B, from B to C and from A to C. In general, given coset bijections formula_191 and formula_192, the composition of partial bijections formula_193 could be empty. If it is not, then a unique coset bijection formula_194 exists such that formula_195. (Again, formula_196 is a bijection between a pair of cosets in formula_105 and formula_108.) This inclusion can be strict. It is always an equality (given formula_197) on a given skew lattice "S" precisely when "S" is categorical. In this case, by including the identity maps on each rectangular "D"-class and adjoining empty bijections between properly comparable "D"-classes, one has a category of rectangular algebras and coset bijections between them. The simple examples in Section 3 are categorical.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\wedge"
},
{
"math_id": 1,
"text": "\\vee"
},
{
"math_id": 2,
"text": "x\\wedge (x\\vee y) = x = (y\\vee x)\\wedge x "
},
{
"math_id": 3,
"text": " x\\vee (x\\wedge y) = x = (y\\wedge x)\\vee x "
},
{
"math_id": 4,
"text": "\\vee "
},
{
"math_id": 5,
"text": "\\wedge "
},
{
"math_id": 6,
"text": "x\\vee y= x"
},
{
"math_id": 7,
"text": "x\\wedge y=y"
},
{
"math_id": 8,
"text": "x\\wedge y=x"
},
{
"math_id": 9,
"text": "x\\vee y=y"
},
{
"math_id": 10,
"text": "(S; \\wedge, \\vee)"
},
{
"math_id": 11,
"text": "x\\wedge (y\\vee x) = x = (x\\wedge y)\\vee x."
},
{
"math_id": 12,
"text": "S"
},
{
"math_id": 13,
"text": "y\\leq x"
},
{
"math_id": 14,
"text": "x \\wedge y = y = y \\wedge x"
},
{
"math_id": 15,
"text": "x \\vee y = x = y \\vee x"
},
{
"math_id": 16,
"text": "y \\preceq x"
},
{
"math_id": 17,
"text": "y \\wedge x\\wedge y = y"
},
{
"math_id": 18,
"text": "x \\vee y \\vee x = x"
},
{
"math_id": 19,
"text": "\\leq"
},
{
"math_id": 20,
"text": "\\preceq"
},
{
"math_id": 21,
"text": "D"
},
{
"math_id": 22,
"text": "xDy"
},
{
"math_id": 23,
"text": "x \\preceq y \\preceq x"
},
{
"math_id": 24,
"text": "x \\wedge y \\wedge x = x "
},
{
"math_id": 25,
"text": "y \\wedge x \\wedge y = y"
},
{
"math_id": 26,
"text": "y \\vee x \\vee y = y"
},
{
"math_id": 27,
"text": "S/D"
},
{
"math_id": 28,
"text": "A > B"
},
{
"math_id": 29,
"text": "a \\in A"
},
{
"math_id": 30,
"text": "b \\in B"
},
{
"math_id": 31,
"text": "a > b"
},
{
"math_id": 32,
"text": "a"
},
{
"math_id": 33,
"text": "b"
},
{
"math_id": 34,
"text": "1"
},
{
"math_id": 35,
"text": "c"
},
{
"math_id": 36,
"text": "0"
},
{
"math_id": 37,
"text": "x \\wedge y \\wedge x = x"
},
{
"math_id": 38,
"text": "x \\vee y = y \\wedge x"
},
{
"math_id": 39,
"text": "L"
},
{
"math_id": 40,
"text": "R"
},
{
"math_id": 41,
"text": "L \\times R"
},
{
"math_id": 42,
"text": "(x, y) \\vee (z , w ) = (z , y)"
},
{
"math_id": 43,
"text": "(x, y) \\wedge (z , w ) = (x, w )"
},
{
"math_id": 44,
"text": "S/ D"
},
{
"math_id": 45,
"text": "x\\wedge y \\wedge x = y \\wedge x"
},
{
"math_id": 46,
"text": "x \\vee y \\vee x = x \\vee y"
},
{
"math_id": 47,
"text": "x \\wedge y = y"
},
{
"math_id": 48,
"text": "x \\vee y = x"
},
{
"math_id": 49,
"text": "S/ L"
},
{
"math_id": 50,
"text": "xLy"
},
{
"math_id": 51,
"text": "x \\wedge y = x"
},
{
"math_id": 52,
"text": "y \\wedge x = y "
},
{
"math_id": 53,
"text": "x \\vee y = y"
},
{
"math_id": 54,
"text": "y \\vee x = x"
},
{
"math_id": 55,
"text": "S/ R"
},
{
"math_id": 56,
"text": "R \\vee L = D"
},
{
"math_id": 57,
"text": "R \\cap L"
},
{
"math_id": 58,
"text": "\\Delta "
},
{
"math_id": 59,
"text": "S\\rightarrow S/D"
},
{
"math_id": 60,
"text": "S \\rightarrow S/L"
},
{
"math_id": 61,
"text": "S \\rightarrow S/R"
},
{
"math_id": 62,
"text": "T = S/D"
},
{
"math_id": 63,
"text": "k : S \\rightarrow S/L \\times S/R"
},
{
"math_id": 64,
"text": "k(x) = (L_x, R_x)"
},
{
"math_id": 65,
"text": "k* : S\\sim S/ L \\times _T S/R"
},
{
"math_id": 66,
"text": "xyxzx = xyzx"
},
{
"math_id": 67,
"text": "x, y \\in S"
},
{
"math_id": 68,
"text": "x \\wedge y = y \\wedge x"
},
{
"math_id": 69,
"text": "x \\vee y = y \\vee x"
},
{
"math_id": 70,
"text": "x \\vee y \\vee (x \\wedge y) = (y \\wedge x) \\vee y \\vee x"
},
{
"math_id": 71,
"text": "x \\wedge y \\wedge (x \\vee y) = (y \\vee x) \\wedge y \\wedge x"
},
{
"math_id": 72,
"text": "T"
},
{
"math_id": 73,
"text": "T \\subseteq S \\rightarrow S/D"
},
{
"math_id": 74,
"text": "|S/D| \\leq \\aleph_0"
},
{
"math_id": 75,
"text": "T [R] = \\bigcup_{t\\in T} R_t"
},
{
"math_id": 76,
"text": "T [L] = \\bigcup_{t\\in T} L_t"
},
{
"math_id": 77,
"text": "R_t"
},
{
"math_id": 78,
"text": "Lt"
},
{
"math_id": 79,
"text": "t"
},
{
"math_id": 80,
"text": "T [ R] \\subseteq S \\rightarrow S/L"
},
{
"math_id": 81,
"text": "T [L] \\subseteq S \\rightarrow S/R"
},
{
"math_id": 82,
"text": "x \\vee y = x \\vee z"
},
{
"math_id": 83,
"text": "x \\wedge y = x \\wedge z"
},
{
"math_id": 84,
"text": "y = z"
},
{
"math_id": 85,
"text": "x \\vee z = y \\vee z"
},
{
"math_id": 86,
"text": "x \\wedge z = y \\wedge z"
},
{
"math_id": 87,
"text": "x = y"
},
{
"math_id": 88,
"text": "x \\wedge (y \\vee z ) \\wedge x = (x \\wedge y \\wedge x) \\vee (x \\wedge z \\wedge x)"
},
{
"math_id": 89,
"text": "x \\vee (y \\wedge z ) \\vee x = (x \\vee y \\vee x) \\wedge (x \\vee z \\vee x)."
},
{
"math_id": 90,
"text": "x \\wedge (y \\vee z ) \\wedge w = (x \\wedge y \\wedge w) \\vee (x \\wedge z \\wedge w) "
},
{
"math_id": 91,
"text": "x \\vee (y \\wedge z ) \\vee w = (x \\vee y \\vee w) \\wedge (x \\vee z \\vee w)"
},
{
"math_id": 92,
"text": "x \\wedge (y \\vee z ) = (x \\wedge y) \\vee (x \\wedge z )"
},
{
"math_id": 93,
"text": "(y \\vee z ) \\wedge w = (y \\wedge w) \\vee (z \\wedge w)"
},
{
"math_id": 94,
"text": "xyzx = xzyx"
},
{
"math_id": 95,
"text": "x \\wedge y \\wedge z \\wedge x = x \\wedge z \\wedge y \\wedge x. (N) "
},
{
"math_id": 96,
"text": "a \\wedge S \\wedge a"
},
{
"math_id": 97,
"text": "a \\wedge x \\wedge a | x \\in S "
},
{
"math_id": 98,
"text": " x \\in S | x\\leq a "
},
{
"math_id": 99,
"text": "T \\times D"
},
{
"math_id": 100,
"text": "(D2) = (D1) + (N)"
},
{
"math_id": 101,
"text": "(D2)"
},
{
"math_id": 102,
"text": "a > b > c"
},
{
"math_id": 103,
"text": "c \\in C"
},
{
"math_id": 104,
"text": "\\varphi"
},
{
"math_id": 105,
"text": "A"
},
{
"math_id": 106,
"text": "B"
},
{
"math_id": 107,
"text": "\\psi"
},
{
"math_id": 108,
"text": "C"
},
{
"math_id": 109,
"text": "\\chi"
},
{
"math_id": 110,
"text": "\\psi \\circ \\varphi = \\chi"
},
{
"math_id": 111,
"text": "\\psi \\circ \\varphi"
},
{
"math_id": 112,
"text": "(A \\wedge b \\wedge A) \\cap (C \\vee b \\vee C ) = (C \\vee a \\vee C ) \\wedge b \\wedge (C \\vee a \\vee C ) = (A \\wedge c \\wedge A) \\vee b \\vee (A \\wedge c \\wedge A)"
},
{
"math_id": 113,
"text": "x \\in S,"
},
{
"math_id": 114,
"text": "0 \\wedge x = 0 = x \\wedge 0"
},
{
"math_id": 115,
"text": "0 \\vee x = x = x \\vee 0."
},
{
"math_id": 116,
"text": "(S ; \\vee, \\wedge, 0),"
},
{
"math_id": 117,
"text": "a \\in S."
},
{
"math_id": 118,
"text": "x - x \\wedge y \\wedge x"
},
{
"math_id": 119,
"text": "x \\wedge S \\wedge x."
},
{
"math_id": 120,
"text": "y \\wedge x \\setminus y = 0 = x \\setminus y \\wedge y"
},
{
"math_id": 121,
"text": "(x \\wedge y \\wedge x) \\vee x \\setminus y = x = x \\setminus y \\vee (x \\wedge y \\wedge x)."
},
{
"math_id": 122,
"text": "(S ; \\vee, \\wedge, \\, 0)"
},
{
"math_id": 123,
"text": "x \\setminus y = x"
},
{
"math_id": 124,
"text": "y = 0"
},
{
"math_id": 125,
"text": "E(A)"
},
{
"math_id": 126,
"text": "x, y \\in A"
},
{
"math_id": 127,
"text": "x \\wedge y = xy"
},
{
"math_id": 128,
"text": "x \\vee y = x + y - xy "
},
{
"math_id": 129,
"text": "S \\subseteq E(A)"
},
{
"math_id": 130,
"text": "(S, \\wedge, \\vee)"
},
{
"math_id": 131,
"text": "()"
},
{
"math_id": 132,
"text": "xyx = xy"
},
{
"math_id": 133,
"text": "x \\nabla y = x + y + yx - xyx - yxy "
},
{
"math_id": 134,
"text": "x \\nabla y"
},
{
"math_id": 135,
"text": "yx"
},
{
"math_id": 136,
"text": "(xyz w = xz yw)"
},
{
"math_id": 137,
"text": "\\nabla"
},
{
"math_id": 138,
"text": "(S ; \\wedge, \\vee, /, 0)"
},
{
"math_id": 139,
"text": "x/y = x - xyx"
},
{
"math_id": 140,
"text": "A \\wedge b \\wedge A ="
},
{
"math_id": 141,
"text": "u \\wedge b \\wedge u : u \\in A"
},
{
"math_id": 142,
"text": "\\subseteq B"
},
{
"math_id": 143,
"text": "B \\vee a \\vee B ="
},
{
"math_id": 144,
"text": "v \\vee a \\vee v : v \\in B"
},
{
"math_id": 145,
"text": "\\subseteq A"
},
{
"math_id": 146,
"text": "b \\in A\\wedge b\\wedge A"
},
{
"math_id": 147,
"text": "a \\in B\\wedge a\\wedge B"
},
{
"math_id": 148,
"text": "\\geq "
},
{
"math_id": 149,
"text": "\\varphi : B \\vee a \\vee B \\rightarrow A \\wedge b \\wedge A"
},
{
"math_id": 150,
"text": "\\phi (x) = y"
},
{
"math_id": 151,
"text": "x > y"
},
{
"math_id": 152,
"text": "x \\in B \\vee a \\vee B"
},
{
"math_id": 153,
"text": "y \\in A \\wedge b \\wedge A"
},
{
"math_id": 154,
"text": "\\varphi "
},
{
"math_id": 155,
"text": "B\\vee a\\vee B"
},
{
"math_id": 156,
"text": "A \\wedge b \\wedge A"
},
{
"math_id": 157,
"text": "a\\vee b = a\\vee \\varphi -1(b), b\\vee a = \\varphi -1(b)\\vee a"
},
{
"math_id": 158,
"text": " a\\wedge b = \\varphi(a)\\wedge b , b\\wedge a = b\\wedge \\varphi (a)"
},
{
"math_id": 159,
"text": "a, c \\in A"
},
{
"math_id": 160,
"text": "b, d \\in B"
},
{
"math_id": 161,
"text": "c > d"
},
{
"math_id": 162,
"text": "a, c"
},
{
"math_id": 163,
"text": "b, d"
},
{
"math_id": 164,
"text": "a > b // c > d"
},
{
"math_id": 165,
"text": "S/R \\times_2 S/L"
},
{
"math_id": 166,
"text": "A = \\cup _i A_i"
},
{
"math_id": 167,
"text": "B = \\cup _j B_j"
},
{
"math_id": 168,
"text": "A_i"
},
{
"math_id": 169,
"text": "B_j"
},
{
"math_id": 170,
"text": "i, j"
},
{
"math_id": 171,
"text": "\\varphi_i,j"
},
{
"math_id": 172,
"text": "x\\wedge y = y"
},
{
"math_id": 173,
"text": "x\\vee y = x"
},
{
"math_id": 174,
"text": "a \\vee b = a, b \\vee a = a', a \\wedge b = b"
},
{
"math_id": 175,
"text": "b \\wedge a = b' "
},
{
"math_id": 176,
"text": "\\varphi_{i,j}(a') = b"
},
{
"math_id": 177,
"text": "\\varphi_{i,j}(a) = b'"
},
{
"math_id": 178,
"text": "a'"
},
{
"math_id": 179,
"text": "b'"
},
{
"math_id": 180,
"text": "\\varphi i,j"
},
{
"math_id": 181,
"text": "|A_i| = |B_j| = 2"
},
{
"math_id": 182,
"text": "\\varphi_{i,j}"
},
{
"math_id": 183,
"text": "A \\cup B"
},
{
"math_id": 184,
"text": "x\\vee y"
},
{
"math_id": 185,
"text": "x\\wedge y"
},
{
"math_id": 186,
"text": "x"
},
{
"math_id": 187,
"text": "y"
},
{
"math_id": 188,
"text": "\\preceq "
},
{
"math_id": 189,
"text": "M"
},
{
"math_id": 190,
"text": "A > B > C"
},
{
"math_id": 191,
"text": "\\varphi: A \\rightarrow B"
},
{
"math_id": 192,
"text": "\\psi: B \\rightarrow C"
},
{
"math_id": 193,
"text": "\\psi \\varphi "
},
{
"math_id": 194,
"text": "\\chi: A \\rightarrow C "
},
{
"math_id": 195,
"text": "\\psi \\varphi \\subseteq \\chi "
},
{
"math_id": 196,
"text": "\\chi "
},
{
"math_id": 197,
"text": "\\psi \\varphi \\neq \\empty "
}
] | https://en.wikipedia.org/wiki?curid=9877288 |
987959 | Gram matrix | Matrix of inner products of a set of vectors
In linear algebra, the Gram matrix (or Gramian matrix, Gramian) of a set of vectors formula_0 in an inner product space is the Hermitian matrix of inner products, whose entries are given by the inner product formula_1. If the vectors formula_0 are the columns of matrix formula_2 then the Gram matrix is formula_3 in the general case that the vector coordinates are complex numbers, which simplifies to formula_4 for the case that the vector coordinates are real numbers.
An important application is to compute linear independence: a set of vectors are linearly independent if and only if the Gram determinant (the determinant of the Gram matrix) is non-zero.
It is named after Jørgen Pedersen Gram.
Examples.
For finite-dimensional real vectors in formula_5 with the usual Euclidean dot product, the Gram matrix is formula_6, where formula_7 is a matrix whose columns are the vectors formula_8 and formula_9 is its transpose whose rows are the vectors formula_10. For complex vectors in formula_11, formula_12, where formula_13 is the conjugate transpose of formula_7.
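For instance, the following NumPy sketch (with arbitrarily chosen example vectors) builds the Gram matrix of three vectors both entry by entry from dot products and all at once as formula_6:
import numpy as np

# columns of V are the example vectors v_1, v_2, v_3 (arbitrary values)
V = np.array([[1.0, 0.0, 2.0],
              [2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0]])

G = V.T @ V                                # Gram matrix: G[i, j] = v_i · v_j
v1, v2 = V[:, 0], V[:, 1]
assert np.isclose(G[0, 1], v1 @ v2)        # matches the defining inner product
print(G)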
Given square-integrable functions formula_14 on the interval formula_15, the Gram matrix formula_16 is:
formula_17
where formula_18 is the complex conjugate of formula_19.
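As a numerical illustration (the functions 1, t and t^2 on [0, 1] are an assumed example, not taken from the text), such an integral Gram matrix can be approximated by quadrature; for these monomials the exact entries are 1/(i + j + 1), a Hilbert matrix:
import numpy as np

t = np.linspace(0.0, 1.0, 100001)
dt = t[1] - t[0]
funcs = [t**0, t**1, t**2]                 # assumed example functions on [0, 1]

def inner(f, g):                           # trapezoidal approximation of the integral of f*(τ) g(τ)
    y = f * g
    return dt * (y.sum() - 0.5 * (y[0] + y[-1]))

G = np.array([[inner(fi, fj) for fj in funcs] for fi in funcs])
exact = np.array([[1.0 / (i + j + 1) for j in range(3)] for i in range(3)])
assert np.allclose(G, exact, atol=1e-8)
print(G)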
For any bilinear form formula_20 on a finite-dimensional vector space over any field we can define a Gram matrix formula_21 attached to a set of vectors formula_22 by formula_23. The matrix will be symmetric if the bilinear form formula_20 is symmetric.
Properties.
Positive-semidefiniteness.
The Gram matrix is symmetric in the case the inner product is real-valued; it is Hermitian in the general, complex case by definition of an inner product.
The Gram matrix is positive semidefinite, and every positive semidefinite matrix is the Gramian matrix for some set of vectors. The fact that the Gramian matrix is positive-semidefinite can be seen from the following simple derivation:
formula_34
The first equality follows from the definition of matrix multiplication, the second and third from the bi-linearity of the inner-product, and the last from the positive definiteness of the inner product.
Note that this also shows that the Gramian matrix is positive definite if and only if the vectors formula_35 are linearly independent (that is, formula_36 for all nonzero formula_37).
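A quick numerical check of both facts (random example vectors; a sketch, not a proof):
import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((5, 3))            # three independent columns (with probability 1)
G = V.T @ V
assert np.all(np.linalg.eigvalsh(G) > 0)   # positive definite

V[:, 2] = 2.0 * V[:, 0] - V[:, 1]          # force a linear dependence
G = V.T @ V
eig = np.linalg.eigvalsh(G)
assert np.all(eig > -1e-10)                # still positive semidefinite
assert np.isclose(eig.min(), 0.0, atol=1e-10)   # but no longer positive definite
print("Gram matrices are PSD; definite exactly when the vectors are independent")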
Finding a vector realization.
Given any positive semidefinite matrix formula_29, one can decompose it as:
formula_38,
where formula_39 is the conjugate transpose of formula_20 (or formula_40 in the real case).
Here formula_20 is a formula_41 matrix, where formula_24 is the rank of formula_29. Various ways to obtain such a decomposition include computing the Cholesky decomposition or taking the non-negative square root of formula_29.
The columns formula_42 of formula_20 can be seen as "n" vectors in formula_43 (or "k"-dimensional Euclidean space formula_44, in the real case). Then
formula_45
where the dot product formula_46 is the usual inner product on formula_43.
Thus a Hermitian matrix formula_29 is positive semidefinite if and only if it is the Gram matrix of some vectors formula_42. Such vectors are called a vector realization of formula_29. The infinite-dimensional analog of this statement is Mercer's theorem.
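A brief sketch of recovering such a vector realization in code (the symmetric matrix below is an assumed example): for a positive definite formula_29 one can take the transpose of the Cholesky factor, and for a general positive semidefinite matrix the non-negative square root from an eigendecomposition works as well.
import numpy as np

M = np.array([[4.0, 2.0, 0.0],
              [2.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])            # assumed positive definite example

L = np.linalg.cholesky(M)                  # M = L Lᵀ, with L lower triangular
B = L.T                                    # columns of B are a vector realization
assert np.allclose(B.T @ B, M)

w, Q = np.linalg.eigh(M)                   # works for any PSD matrix
B2 = Q @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ Q.T   # non-negative square root
assert np.allclose(B2.T @ B2, M)
print("both factorizations reproduce M as a Gram matrix")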
Uniqueness of vector realizations.
If formula_29 is the Gram matrix of vectors formula_47 in formula_44 then applying any rotation or reflection of formula_44 (any orthogonal transformation, that is, any Euclidean isometry preserving 0) to the sequence of vectors results in the same Gram matrix. That is, for any formula_48 orthogonal matrix formula_49, the Gram matrix of formula_50 is also formula_29.
This is the only way in which two real vector realizations of formula_29 can differ: the vectors formula_47 are unique up to orthogonal transformations. In other words, the dot products formula_51 and formula_52 are equal if and only if some rigid transformation of formula_44 transforms the vectors formula_47 to formula_53 and 0 to 0.
The same holds in the complex case, with unitary transformations in place of orthogonal ones.
That is, if the Gram matrix of vectors formula_22 is equal to the Gram matrix of vectors formula_53 in formula_43 then there is a unitary formula_48 matrix formula_54 (meaning formula_55) such that formula_56 for formula_57.
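A small numerical check of this invariance (random example vectors and a random orthogonal matrix obtained from a QR factorization):
import numpy as np

rng = np.random.default_rng(1)
V = rng.standard_normal((3, 4))                      # four vectors in 3-space, as columns
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))     # a random orthogonal matrix
assert np.allclose(Q.T @ Q, np.eye(3))

assert np.allclose(V.T @ V, (Q @ V).T @ (Q @ V))     # Gram matrix unchanged by the rotation/reflection
print("orthogonal transformations preserve the Gram matrix")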
Gram determinant.
The Gram determinant or Gramian is the determinant of the Gram matrix:
formula_60
If formula_22 are vectors in formula_61 then it is the square of the "n"-dimensional volume of the parallelotope formed by the vectors. In particular, the vectors are linearly independent if and only if the parallelotope has nonzero "n"-dimensional volume, if and only if the Gram determinant is nonzero, if and only if the Gram matrix is nonsingular. When "n" > "m" the determinant and volume are zero. When "n" = "m", this reduces to the standard theorem that the absolute value of the determinant of "n" "n"-dimensional vectors is the "n"-dimensional volume. The Gram determinant is also useful for computing the volume of the simplex formed by the vectors; its volume is Volume(parallelotope) / "n"!.
The Gram determinant can also be expressed in terms of the exterior product of vectors by
formula_62
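For two vectors in three-dimensional space (arbitrary example values), the Gram determinant equals the squared area of the parallelogram they span, which the following sketch cross-checks against the cross product:
import numpy as np

v1 = np.array([1.0, 2.0, 2.0])
v2 = np.array([3.0, 0.0, 4.0])

G = np.array([[v1 @ v1, v1 @ v2],
              [v2 @ v1, v2 @ v2]])
cross = np.cross(v1, v2)
assert np.isclose(np.linalg.det(G), cross @ cross)   # det G = |v1 × v2|² = (area)²
print(np.linalg.det(G), cross @ cross)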
When the vectors formula_63 are defined from the positions of points formula_64 relative to some reference point formula_65,
formula_66
then the Gram determinant can be written as the difference of two Gram determinants,
formula_67
where each formula_68 is the corresponding point formula_69 supplemented with the coordinate value of 1 for an formula_70-st dimension. Note that in the common case that "n" = "m", the second term on the right-hand side will be zero.
Constructing an orthonormal basis.
Given a set of linearly independent vectors formula_71 with Gram matrix formula_21 defined by formula_72, one can construct an orthonormal basis
formula_73
In matrix notation, formula_74, where formula_54 has orthonormal basis vectors formula_75 and the matrix formula_7 is composed of the given column vectors formula_71.
The matrix formula_76 is guaranteed to exist. Indeed, formula_21 is Hermitian, and so can be decomposed as formula_77 with formula_54 a unitary matrix and formula_78 a real diagonal matrix. Additionally, the formula_79 are linearly independent if and only if formula_21 is positive definite, which implies that the diagonal entries of formula_78 are positive. formula_76 is therefore uniquely defined by formula_80. One can check that these new vectors are orthonormal:
formula_81
where we used formula_82.
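A sketch of this construction in code (with arbitrarily chosen, linearly independent starting vectors): formula_76 is computed from the eigendecomposition of formula_21, and the resulting vectors are checked for orthonormality.
import numpy as np

V = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])            # columns: linearly independent example vectors v_i

G = V.T @ V
w, Q = np.linalg.eigh(G)                   # G = Q diag(w) Qᵀ with w > 0
G_inv_sqrt = Q @ np.diag(w ** -0.5) @ Q.T

U = V @ G_inv_sqrt                         # u_i = Σ_j (G^{-1/2})_{ji} v_j
assert np.allclose(U.T @ U, np.eye(3))     # the new vectors are orthonormal
print(U.T @ U)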
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "v_1,\\dots, v_n"
},
{
"math_id": 1,
"text": "G_{ij} = \\left\\langle v_i, v_j \\right\\rangle"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "X^\\dagger X"
},
{
"math_id": 4,
"text": "X^\\top X"
},
{
"math_id": 5,
"text": "\\mathbb{R}^n"
},
{
"math_id": 6,
"text": "G = V^\\top V"
},
{
"math_id": 7,
"text": "V"
},
{
"math_id": 8,
"text": "v_k"
},
{
"math_id": 9,
"text": "V^\\top"
},
{
"math_id": 10,
"text": "v_k^\\top"
},
{
"math_id": 11,
"text": "\\mathbb{C}^n"
},
{
"math_id": 12,
"text": "G = V^\\dagger V"
},
{
"math_id": 13,
"text": "V^\\dagger"
},
{
"math_id": 14,
"text": "\\{\\ell_i(\\cdot),\\, i = 1,\\dots,n\\}"
},
{
"math_id": 15,
"text": "\\left[t_0, t_f\\right]"
},
{
"math_id": 16,
"text": "G = \\left[G_{ij}\\right]"
},
{
"math_id": 17,
"text": "G_{ij} = \\int_{t_0}^{t_f} \\ell_i^*(\\tau)\\ell_j(\\tau)\\, d\\tau. "
},
{
"math_id": 18,
"text": "\\ell_i^*(\\tau)"
},
{
"math_id": 19,
"text": "\\ell_i(\\tau)"
},
{
"math_id": 20,
"text": "B"
},
{
"math_id": 21,
"text": "G"
},
{
"math_id": 22,
"text": "v_1, \\dots, v_n"
},
{
"math_id": 23,
"text": "G_{ij} = B\\left(v_i, v_j\\right)"
},
{
"math_id": 24,
"text": "k"
},
{
"math_id": 25,
"text": "M\\subset \\mathbb{R}^n"
},
{
"math_id": 26,
"text": "\\phi: U\\to M"
},
{
"math_id": 27,
"text": "(x_1, \\ldots, x_k)\\in U\\subset\\mathbb{R}^k"
},
{
"math_id": 28,
"text": "\\omega"
},
{
"math_id": 29,
"text": "M"
},
{
"math_id": 30,
"text": "\\omega = \\sqrt{\\det G}\\ dx_1 \\cdots dx_k,\\quad G = \\left[\\left\\langle \\frac{\\partial\\phi}{\\partial x_i},\\frac{\\partial\\phi}{\\partial x_j}\\right\\rangle\\right]."
},
{
"math_id": 31,
"text": "\\phi:U\\to S\\subset \\mathbb{R}^3"
},
{
"math_id": 32,
"text": "(x, y)\\in U\\subset\\mathbb{R}^2"
},
{
"math_id": 33,
"text": "\\int_S f\\ dA = \\iint_U f(\\phi(x, y))\\, \\left|\\frac{\\partial\\phi}{\\partial x}\\,{\\times}\\,\\frac{\\partial\\phi}{\\partial y}\\right|\\, dx\\, dy."
},
{
"math_id": 34,
"text": "\n x^\\dagger \\mathbf{G} x =\n \\sum_{i,j}x_i^* x_j\\left\\langle v_i, v_j \\right\\rangle =\n \\sum_{i,j}\\left\\langle x_i v_i, x_j v_j \\right\\rangle =\n \\biggl\\langle \\sum_i x_i v_i, \\sum_j x_j v_j \\biggr\\rangle =\n \\biggl\\| \\sum_i x_i v_i \\biggr\\|^2 \\geq 0 .\n"
},
{
"math_id": 35,
"text": " v_i "
},
{
"math_id": 36,
"text": "\\sum_i x_i v_i \\neq 0"
},
{
"math_id": 37,
"text": "x"
},
{
"math_id": 38,
"text": "M = B^\\dagger B"
},
{
"math_id": 39,
"text": "B^\\dagger"
},
{
"math_id": 40,
"text": "M = B^\\textsf{T} B"
},
{
"math_id": 41,
"text": "k \\times n"
},
{
"math_id": 42,
"text": "b^{(1)}, \\dots, b^{(n)}"
},
{
"math_id": 43,
"text": "\\mathbb{C}^k"
},
{
"math_id": 44,
"text": "\\mathbb{R}^k"
},
{
"math_id": 45,
"text": "M_{ij} = b^{(i)} \\cdot b^{(j)}"
},
{
"math_id": 46,
"text": "a \\cdot b = \\sum_{\\ell=1}^k a_\\ell^* b_\\ell"
},
{
"math_id": 47,
"text": "v_1,\\dots,v_n"
},
{
"math_id": 48,
"text": "k \\times k"
},
{
"math_id": 49,
"text": "Q"
},
{
"math_id": 50,
"text": "Q v_1,\\dots, Q v_n"
},
{
"math_id": 51,
"text": "v_i \\cdot v_j"
},
{
"math_id": 52,
"text": "w_i \\cdot w_j"
},
{
"math_id": 53,
"text": "w_1, \\dots, w_n"
},
{
"math_id": 54,
"text": "U"
},
{
"math_id": 55,
"text": "U^\\dagger U = I"
},
{
"math_id": 56,
"text": "v_i = U w_i"
},
{
"math_id": 57,
"text": "i = 1, \\dots, n"
},
{
"math_id": 58,
"text": "G = G^\\dagger"
},
{
"math_id": 59,
"text": "G^\\dagger"
},
{
"math_id": 60,
"text": "\\bigl|G(v_1, \\dots, v_n)\\bigr| = \\begin{vmatrix}\n \\langle v_1,v_1\\rangle & \\langle v_1,v_2\\rangle &\\dots & \\langle v_1,v_n\\rangle \\\\\n \\langle v_2,v_1\\rangle & \\langle v_2,v_2\\rangle &\\dots & \\langle v_2,v_n\\rangle \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n \\langle v_n,v_1\\rangle & \\langle v_n,v_2\\rangle &\\dots & \\langle v_n,v_n\\rangle\n\\end{vmatrix}."
},
{
"math_id": 61,
"text": "\\mathbb{R}^m"
},
{
"math_id": 62,
"text": "\\bigl|G(v_1, \\dots, v_n)\\bigr| = \\| v_1 \\wedge \\cdots \\wedge v_n\\|^2."
},
{
"math_id": 63,
"text": "v_1, \\ldots, v_n \\in \\mathbb{R}^m"
},
{
"math_id": 64,
"text": "p_1, \\ldots, p_n"
},
{
"math_id": 65,
"text": "p_{n+1}"
},
{
"math_id": 66,
"text": "(v_1, v_2, \\ldots, v_n) = (p_1 - p_{n+1}, p_2 - p_{n+1}, \\ldots, p_n - p_{n+1})\\,,"
},
{
"math_id": 67,
"text": "\n\\bigl|G(v_1, \\dots, v_n)\\bigr| = \\bigl|G((p_1, 1), \\dots, (p_{n+1}, 1))\\bigr| - \\bigl|G(p_1, \\dots, p_{n+1})\\bigr|\\,,\n"
},
{
"math_id": 68,
"text": "(p_j, 1)"
},
{
"math_id": 69,
"text": "p_j"
},
{
"math_id": 70,
"text": "(m+1)"
},
{
"math_id": 71,
"text": "\\{v_i\\}"
},
{
"math_id": 72,
"text": "G_{ij}:= \\langle v_i,v_j\\rangle"
},
{
"math_id": 73,
"text": "u_i := \\sum_j \\bigl(G^{-1/2}\\bigr)_{ji} v_j."
},
{
"math_id": 74,
"text": "U = V G^{-1/2} "
},
{
"math_id": 75,
"text": "\\{u_i\\}"
},
{
"math_id": 76,
"text": "G^{-1/2}"
},
{
"math_id": 77,
"text": "G=UDU^\\dagger"
},
{
"math_id": 78,
"text": "D"
},
{
"math_id": 79,
"text": "v_i"
},
{
"math_id": 80,
"text": "G^{-1/2}:=UD^{-1/2}U^\\dagger"
},
{
"math_id": 81,
"text": "\\begin{align}\n\\langle u_i,u_j \\rangle\n&= \\sum_{i'} \\sum_{j'} \\Bigl\\langle \\bigl(G^{-1/2}\\bigr)_{i'i} v_{i'},\\bigl(G^{-1/2}\\bigr)_{j'j} v_{j'} \\Bigr\\rangle \\\\[10mu]\n&= \\sum_{i'} \\sum_{j'} \\bigl(G^{-1/2}\\bigr)_{ii'} G_{i'j'} \\bigl(G^{-1/2}\\bigr)_{j'j} \\\\[8mu]\n&= \\bigl(G^{-1/2} G G^{-1/2}\\bigr)_{ij} = \\delta_{ij}\n\\end{align}"
},
{
"math_id": 82,
"text": "\\bigl(G^{-1/2}\\bigr)^\\dagger=G^{-1/2} "
}
] | https://en.wikipedia.org/wiki?curid=987959 |
9880197 | Cohomological dimension | In abstract algebra, cohomological dimension is an invariant of a group which measures the homological complexity of its representations. It has important applications in geometric group theory, topology, and algebraic number theory.
Cohomological dimension of a group.
As most cohomological invariants, the cohomological dimension involves a choice of a "ring of coefficients" "R", with a prominent special case given by formula_0, the ring of integers. Let "G" be a discrete group, "R" a non-zero ring with a unit, and formula_1 the group ring. The group "G" has cohomological dimension less than or equal to "n", denoted formula_2, if the trivial formula_1-module "R" has a projective resolution of length "n", i.e. there are projective formula_1-modules formula_3 and formula_1-module homomorphisms formula_4 and formula_5, such that the image of formula_6 coincides with the kernel of formula_7 for formula_8 and the kernel of formula_9 is trivial.
Equivalently, the cohomological dimension is less than or equal to "n" if for an arbitrary formula_1-module "M", the cohomology of "G" with coefficients in "M" vanishes in degrees formula_10, that is, formula_11 whenever formula_10. The "p"-cohomological dimension for prime "p" is similarly defined in terms of the "p"-torsion groups formula_12.
The smallest "n" such that the cohomological dimension of "G" is less than or equal to "n" is the cohomological dimension of "G" (with coefficients "R"), which is denoted formula_13.
A free resolution of formula_14 can be obtained from a free action of the group "G" on a contractible topological space "X". In particular, if "X" is a contractible CW complex of dimension "n" with a free action of a discrete group "G" that permutes the cells, then formula_15.
Examples.
In the first group of examples, let the ring "R" of coefficients be formula_14.
Now consider the case of a general ring "R".
Cohomological dimension of a field.
The "p"-cohomological dimension of a field "K" is the "p"-cohomological dimension of the Galois group of a separable closure of "K". The cohomological dimension of "K" is the supremum of the "p"-cohomological dimension over all primes "p".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R=\\Z"
},
{
"math_id": 1,
"text": "RG"
},
{
"math_id": 2,
"text": "\\operatorname{cd}_R(G)\\le n"
},
{
"math_id": 3,
"text": "P_0,\\dots , P_n"
},
{
"math_id": 4,
"text": "d_k\\colon P_k\\to P_{k-1} (k = 1,\\dots, n)"
},
{
"math_id": 5,
"text": "d_0\\colon P_0\\to R"
},
{
"math_id": 6,
"text": "d_k"
},
{
"math_id": 7,
"text": "d_{k-1}"
},
{
"math_id": 8,
"text": "k = 1, \\dots, n"
},
{
"math_id": 9,
"text": "d_n"
},
{
"math_id": 10,
"text": "k>n"
},
{
"math_id": 11,
"text": "H^k(G,M) = 0"
},
{
"math_id": 12,
"text": "H^k(G,M){p}"
},
{
"math_id": 13,
"text": "n=\\operatorname{cd}_{R}(G)"
},
{
"math_id": 14,
"text": "\\Z"
},
{
"math_id": 15,
"text": "\\operatorname{cd}_{\\Z}(G)\\le n"
},
{
"math_id": 16,
"text": "\\hat{\\Z}"
},
{
"math_id": 17,
"text": "k((t))"
}
] | https://en.wikipedia.org/wiki?curid=9880197 |
9880707 | Logarithmic spiral beach | A logarithmic spiral beach is a type of beach which develops in the direction under which it is sheltered by a headland, in an area called the "shadow zone". It is shaped like a logarithmic spiral when seen in a map, plan view, or aerial photograph. These beaches are also commonly referred to as half heart beach, crenulate-shaped bay, or headland-bay beach.
Logarithmic spiral relation.
The logarithmic spiral can be determined using the equation (written in polar coordinates):
formula_0
where:
formula_1 = the angle of rotation, measured between two lines drawn from the origin to any two points on the spiral.
formula_2 = the ratio of the lengths of two lines that extend out from the origin. The two lines are given as formula_3 and formula_4.
So formula_2 also equals the ratio formula_5.
formula_6 = the angle between any line formula_4 from the origin and the line tangent to the spiral which is at the point where line formula_4 intersects the spiral.
formula_6 is a constant for any given logarithmic spiral.
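A short Python sketch (with an assumed value of formula_6, for illustration only) that generates plan-view coordinates of such a spiral and checks that the ratio of two radii depends only on the angle of rotation between them:
import numpy as np

alpha = np.deg2rad(70.0)                       # assumed constant spiral angle
theta = np.linspace(0.0, 4.0 * np.pi, 400)     # angles of rotation

r = np.exp(theta / np.tan(alpha))              # r = e^(θ cot α)
x, y = r * np.cos(theta), r * np.sin(theta)    # plan-view (map) coordinates

R_O, R = r[50], r[150]                         # two radii on the spiral
rotation = theta[150] - theta[50]
assert np.isclose(R / R_O, np.exp(rotation / np.tan(alpha)))
print(list(zip(x[:3], y[:3])))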
Spiral development.
This type of beach forms due to the refraction of approaching waves and their diffraction by an upcoast headland. The approaching wave front curves as a result of wave diffraction at the headland, which in turn causes the shoreline to bend and yield a log spiral shape. Log spiral beaches are often found on swell-dominated coasts where waves generally approach the shoreline from one main direction at an oblique angle. The obliquely approaching waves refract and diffract into the "shadow zone", which can be considered a relatively sheltered hook of beach behind the headland. An increase in sediment size, wave height, berm height, and swash-zone gradient away from the upcoast headland generally characterizes the concave, seaward-curving part of the beach.
{
"math_id": 0,
"text": "r = e^{\\theta \\cot \\alpha}"
},
{
"math_id": 1,
"text": "\\theta"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "R_O"
},
{
"math_id": 4,
"text": "R"
},
{
"math_id": 5,
"text": "R/R_O"
},
{
"math_id": 6,
"text": "\\alpha"
}
] | https://en.wikipedia.org/wiki?curid=9880707 |
9884592 | Suspended load | The suspended load of a flow of fluid, such as a river, is the portion of its sediment uplifted by the fluid's flow in the process of sediment transportation. It is kept suspended by the fluid's turbulence. The suspended load generally consists of smaller particles, like clay, silt, and fine sands"."
Sediment transportation.
The suspended load is one of the three layers of the fluvial sediment transportation system. The bed load consists of the larger sediment, which is transported by saltation, rolling, and dragging on the riverbed. The suspended load is the middle layer, consisting of the smaller sediment that is kept in suspension. The wash load is the uppermost layer, which consists of the smallest sediment that can be seen with the naked eye; the wash load is easily mixed with the suspended load during transportation because the two are moved by very similar processes. The wash load never touches the bed, even outside of a current.
Composition.
The boundary between bed load and suspended load is not straightforward because whether a particle is in suspension or not depends on the flow velocity – it is easy to imagine a particle moving between bed load, part-suspension and full suspension in a fluid with variable flow. Suspended load generally consists of fine sand, silt and clay size particles although larger particles (coarser sands) may be carried in the lower water column in more intense flows.
Suspended load vs suspended sediment.
"Suspended load" and "suspended sediment" are very similar, but are not the same. Suspended sediment is sediment uplifted in fluvial zones, but unlike the suspended load no turbulence is required to keep it uplifted. The suspended load requires sufficient flow velocity to keep the sediment transported above the bed; at low velocity the sediment will deposit.
Velocity.
The suspended load is carried within the lower to middle part of the water column and moves at a large fraction of the mean flow velocity of the stream, with a Rouse number between 0.8 and 1.2. The Rouse number indicates the mode in which sediment will be transported at the current velocity; it is the ratio of the fall velocity of a grain to the upward velocity acting on it.
Diagrams.
Suspended load is often visualised using two diagrams. The Hjulström curve uses flow velocity and sediment size to compare the rates of erosion, transportation, and deposition. One limitation of the Hjulström curve is that it does not take the water depth into account, so it gives only an estimated rate.
The second diagram used is the Shields diagram, which uses the critical Shields stress and the Reynolds number to estimate the transportation rate. The Shields diagram is considered the more precise chart for estimating suspended load.
Measuring suspended load.
Shear stress.
To estimate the stream power available for sediment transportation, the bed shear stress is used; it quantifies the force the flow exerts on the bed:
formula_0
Critical shear stress.
The critical shear stress is the threshold at which sediment begins to be transported within a stream:
formula_1
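A hedged numerical sketch of these two quantities (it uses the standard depth–slope product for the bed shear stress and the Shields criterion for the critical value; the depth, slope, grain size and Shields parameter below are assumed example values, not figures from this article):
rho_w = 1000.0     # water density, kg/m^3
rho_s = 2650.0     # sediment (quartz) density, kg/m^3
g = 9.81           # gravitational acceleration, m/s^2

depth = 1.5        # assumed flow depth, m
slope = 0.002      # assumed channel slope (dimensionless)
d50 = 0.0005       # assumed median grain size, m (medium sand)
shields = 0.047    # assumed dimensionless critical Shields parameter

tau = rho_w * g * depth * slope                 # bed shear stress, Pa
tau_c = shields * g * (rho_s - rho_w) * d50     # critical shear stress, Pa

print(f"bed shear stress      = {tau:.2f} Pa")
print(f"critical shear stress = {tau_c:.2f} Pa")
print("sediment is mobilized" if tau > tau_c else "sediment stays in place")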
Suspended load transport rate.
formula_2
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\tau= Pw.g.d.s"
},
{
"math_id": 1,
"text": "\\tau {\\scriptstyle\\text{c}}= \\tau{\\scriptstyle\\text{c}} . g. (p{\\scriptstyle\\text{s}}-p{\\scriptstyle\\text{w}})d50"
},
{
"math_id": 2,
"text": "q{\\scriptstyle\\text{s}}=w.h.c{\\scriptstyle\\text{a}} .[((a/h)^z-(a/h))/((1-a/h)Z . (1.2-Z)) ]\n\n"
}
] | https://en.wikipedia.org/wiki?curid=9884592 |
9885419 | Tip-speed ratio | Measurement used for wind turbines
The tip-speed ratio, λ, or TSR for wind turbines is the ratio between the tangential speed of the tip of a blade and the actual speed of the wind, "v". The tip-speed ratio is related to efficiency, with the optimum varying with blade design. Higher tip speeds result in higher noise levels and require stronger blades due to larger centrifugal forces.
formula_0
The tip speed of the blade can be calculated as formula_1, where formula_2 is the rotational speed of the rotor and "R" is the rotor radius. Therefore, we can also write:
formula_3
where formula_4 is the wind speed at the height of the blade hub.
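For example (a minimal sketch with assumed rotor radius, rotational speed and wind speed):
import math

R = 40.0                               # assumed rotor radius, m
rpm = 15.0                             # assumed rotor speed, revolutions per minute
v = 10.0                               # assumed wind speed at hub height, m/s

omega = rpm * 2.0 * math.pi / 60.0     # rotational speed, rad/s
tip_speed = omega * R                  # tangential speed of the blade tip, m/s
tsr = tip_speed / v                    # tip-speed ratio λ = ωR / v
print(f"tip speed = {tip_speed:.1f} m/s, tip-speed ratio = {tsr:.2f}")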
"Cp"–λ curves.
The power coefficient, formula_5, expresses what fraction of the power in the wind is being extracted by the wind turbine. It is generally assumed to be a function of both tip-speed ratio and pitch angle. Below is a plot of the variation of the power coefficient with variations in the tip-speed ratio when the pitch is held constant:
The case for variable speed wind turbines.
Originally, wind turbines were fixed speed. This has the benefit that the rotor speed in the generator is constant, so that the frequency of the AC voltage is fixed. This allows the wind turbine to be directly connected to a transmission system. However, from the figure above, we can see that the power coefficient is a function of the tip-speed ratio. By extension, the efficiency of the wind turbine is a function of the tip-speed ratio.
Ideally, one would like to have a turbine operating at the maximum value of "Cp" at all wind speeds. This means that as the wind speed changes, the rotor speed must change as well such that "Cp" = "Cp max". A wind turbine with a variable rotor speed is called a variable-speed wind turbine. Whilst this does mean that the wind turbine operates at or close to "Cp max" for a range of wind speeds, the frequency of the AC voltage generator will not be constant. This can be seen in the equation
formula_6
where "N" is the rotor's angular speed, "f" is the frequency of the AC voltage generated in the stator windings, and "P" is the number of poles in the generator inside the nacelle. Therefore, direct connection to a transmission system for a variable speed is not permissible. What is required is a power converter which converts the signal generated by the turbine generator into DC and then converts that signal to an AC signal with the grid/transmission system frequency.
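A small illustration of this relationship (the pole count and rotor speeds are assumed example values; it shows why a changing rotor speed produces a changing electrical frequency):
P = 4                                   # assumed number of generator poles
for N in (1500.0, 1650.0, 1800.0):      # assumed rotor speeds, rpm
    f = N * P / 120.0                   # rearranged from N = 120 f / P
    print(f"rotor speed {N:.0f} rpm -> electrical frequency {f:.1f} Hz")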
The case against variable speed wind turbines.
Variable-speed wind turbines cannot be directly connected to a transmission system. One of the drawbacks of this is that the inertia of the transmission system is reduced as more variable-speed wind turbines are put online. This can result in more significant drops in the transmission system's voltage frequency in the event of the loss of a generating unit. Furthermore, variable-speed wind turbines require power electronics, which increases the complexity of the turbine and introduces new sources of failures. On the other hand, it has been suggested that the additional energy capture achieved by a variable-speed wind turbine compared to a fixed-speed wind turbine is approximately 2%.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\lambda = \\frac{\\mbox{tip speed of blade}}{\\mbox{wind speed}}"
},
{
"math_id": 1,
"text": "\\omega \\cdot R"
},
{
"math_id": 2,
"text": "\\omega"
},
{
"math_id": 3,
"text": "\\lambda = \\frac{\\omega R}{v},"
},
{
"math_id": 4,
"text": "v"
},
{
"math_id": 5,
"text": "C_p"
},
{
"math_id": 6,
"text": "\nN = \\frac{120f}{P},\n"
}
] | https://en.wikipedia.org/wiki?curid=9885419 |
9886801 | Invariant factor | The invariant factors of a module over a principal ideal domain (PID) occur in one form of the structure theorem for finitely generated modules over a principal ideal domain.
If formula_0 is a PID and formula_1 a finitely generated formula_0-module, then
formula_2
for some integer formula_3 and a (possibly empty) list of nonzero elements formula_4 for which formula_5. The nonnegative integer formula_6 is called the "free rank" or "Betti number" of the module formula_1, while formula_7 are the "invariant factors" of formula_1 and are unique up to associatedness.
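As a hedged computational sketch (the matrices below are assumed small examples over the integers, not taken from the text): for a module presented by an integer matrix, the invariant factors can be obtained from the determinantal divisors, where d_k is the greatest common divisor of all k-by-k minors and the k-th invariant factor is d_k / d_(k-1). For the diagonal matrix with entries 2 and 3 this yields invariant factors 1 and 6, reflecting that the direct sum of Z/2 and Z/3 is isomorphic to Z/6; the Smith normal form mentioned below packages the same data.
from itertools import combinations
from math import gcd
from functools import reduce

def det(A):                                      # integer determinant by cofactor expansion
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def invariant_factors(M):
    d = [1]                                      # determinantal divisors, with d_0 = 1
    for k in range(1, min(len(M), len(M[0])) + 1):
        k_minors = [det([[M[i][j] for j in cols] for i in rows])
                    for rows in combinations(range(len(M)), k)
                    for cols in combinations(range(len(M[0])), k)]
        dk = reduce(gcd, (abs(m) for m in k_minors), 0)
        if dk == 0:                              # the rank of M has been reached
            break
        d.append(dk)
    return [d[k] // d[k - 1] for k in range(1, len(d))]

print(invariant_factors([[2, 0], [0, 3]]))       # [1, 6]
print(invariant_factors([[2, 4], [6, 8]]))       # [2, 4]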
The invariant factors of a matrix over a PID occur in the Smith normal form and provide a means of computing the structure of a module from a set of generators and relations. | [
{
"math_id": 0,
"text": "R"
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "M\\cong R^r\\oplus R/(a_1)\\oplus R/(a_2)\\oplus\\cdots\\oplus R/(a_m)"
},
{
"math_id": 3,
"text": "r\\geq0"
},
{
"math_id": 4,
"text": "a_1,\\ldots,a_m\\in R"
},
{
"math_id": 5,
"text": "a_1 \\mid a_2 \\mid \\cdots \\mid a_m"
},
{
"math_id": 6,
"text": "r"
},
{
"math_id": 7,
"text": "a_1,\\ldots,a_m"
}
] | https://en.wikipedia.org/wiki?curid=9886801 |
9888821 | Trimean | In statistics the trimean (TM), or Tukey's trimean, is a measure of a probability distribution's location defined as a weighted average of the distribution's median and its two quartiles:
formula_0
This is equivalent to the average of the median and the midhinge:
formula_1
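A minimal sketch computing the trimean from the sample quartiles (the data values are arbitrary, and quartile conventions differ slightly between software packages):
import numpy as np

data = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 50])      # arbitrary sample with one outlier

q1, q2, q3 = np.percentile(data, [25, 50, 75])
trimean = (q1 + 2 * q2 + q3) / 4
midhinge = (q1 + q3) / 2
assert np.isclose(trimean, (q2 + midhinge) / 2)            # average of median and midhinge

print(f"median = {q2}, trimean = {trimean}, mean = {data.mean():.2f}")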
The foundations of the trimean were part of Arthur Bowley's teachings, and later popularized by statistician John Tukey in his 1977 book which has given its name to a set of techniques called exploratory data analysis.
Like the median and the midhinge, but unlike the sample mean, it is a statistically resistant L-estimator with a breakdown point of 25%. This beneficial property has been described as follows:
<templatestyles src="Template:Blockquote/styles.css" />An advantage of the trimean as a measure of the center (of a distribution) is that it combines the median's emphasis on center values with the midhinge's attention to the extremes.
Efficiency.
Despite its simplicity, the trimean is a remarkably efficient estimator of population mean. More precisely, for a large data set (over 100 points) from a symmetric population, the average of the 18th, 50th, and 82nd percentile is the most efficient 3-point L-estimator, with 88% efficiency. For context, the best single point estimate by L-estimators is the median, with an efficiency of 64% or better (for all "n"), while using two points (for a large data set of over 100 points from a symmetric population), the most efficient estimate is the 27% midsummary (mean of 27th and 73rd percentiles), which has an efficiency of about 81%. Using quartiles, these optimal estimators can be approximated by the midhinge and the trimean. Using further points yield higher efficiency, though it is notable that only three points are needed for very high efficiency.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "TM= \\frac{Q_1 + 2Q_2 + Q_3}{4}"
},
{
"math_id": 1,
"text": "TM= \\frac{1}{2}\\left(Q_2 + \\frac{Q_1 + Q_3}{2}\\right)"
}
] | https://en.wikipedia.org/wiki?curid=9888821 |
9889683 | Action description language | Robot programming language
In artificial intelligence, action description language (ADL) is a language for specifying actions in automated planning and scheduling systems, in particular for robots. It is considered an advancement of STRIPS. Edwin Pednault (a specialist in the field of data abstraction and modelling who has been an IBM Research Staff Member in the Data Abstraction Research Group since 1996) proposed this language in 1987. It is an example of an action language.
Origins.
Pednault observed that the expressive power of STRIPS could be improved by allowing the effects of an operator to be conditional. This is the main idea of ADL-A, which is roughly the propositional fragment of the ADL proposed by Pednault, with ADL-B an extension of -A. In the -B extension, actions can be described with indirect effects by the introduction of a new kind of proposition: "static laws". A third variation of ADL is ADL-C, which is similar to -B in the sense that its propositions can be classified into static and dynamic laws, but with some more particularities.
The sense of a planning language is to represent certain conditions in the environment and, based on these, automatically generate a chain of actions which lead to a desired goal. A goal is a certain partially specified condition. Before an action can be executed its preconditions must be fulfilled; after the execution the action yields effects, by which the environment changes. The environment is described by means of certain predicates, which are either fulfilled or not.
Contrary to STRIPS, the principle of the open world applies with ADL: everything not occurring in the conditions is unknown (instead of being assumed false). In addition, whereas in STRIPS only positive literals and conjunctions are permitted, ADL allows negative literals and disjunctions as well.
Syntax of ADL.
An ADL schema consists of an action name, an optional parameter list and four optional groups of clauses labeled Precond, Add, Delete and Update.
The Precond group is a list of formulae that define the preconditions for the execution of an action. If the set is empty the value "TRUE" is inserted into the group and the preconditions are always evaluated as holding conditions.
The Add and Delete conditions are specified by the Add and Delete groups, respectively. Each group consists of a set of clauses of the forms shown in the left-hand column of the figure 1:
The Update groups are used to specify the update conditions to change the values of function symbols. An Update group consists of a set of clauses of the forms shown in the left column of the figure 2:
Semantics of ADL.
The formal semantics of ADL is defined by four constraints. The first constraint is that actions may not change the set of objects that exist in the world; this means that for every action α and every current-state/next-state pair ("s", "t") ∈ α, the domain of "t" must be equal to the domain of "s".
The second constraint is that actions in ADL must be deterministic. If ("s", "t"1) and ("s", "t"2) are current-state/next-state pairs of action α, then it must be the case that "t"1 = "t"2.
The third constraint incorporated into ADL is that the functions introduced above must be representable as first-order formulas. For every "n"-ary relation symbol "R", there must exist a formula Φ"a""R"("x"1, ..., "x""n") with free variables "x"1, ..., "x""n" such that "f""a""R"("s") is given by:
formula_0
Consequently, "F"("x"1, ..., "x""n") = "y" will be true after performing action "α" if and only if Φ"a""F"("x"1, ..., "x""n", "y") was true beforehand. Note that this representability requirement relies on the first constraint (the domain of "t" should be equal to the domain of "s").
The fourth and final constraint incorporated into ADL is that set of states in which an action is executable must also be representable as a formula. For every action "α" that can be represented in ADL, there must exist a formula Πa with the property that "s" |= Π"a" if and only if there is some state "t" for which ("s", "t") ∈ "α" (i.e. action α is executable in state "s")
Complexity of planning.
In terms of computational efficiency, ADL can be located between STRIPS and the Situation Calculus. Any ADL problem can be translated into a STRIPS instance – however, existing compilation techniques are worst-case exponential. This worst case cannot be improved if we are willing to preserve the length of plans polynomially, and thus ADL is strictly more concise than STRIPS.
ADL planning is still a PSPACE-complete problem. Most of the algorithms use polynomial space even if the preconditions and effects are complex formulae.
Most of the top-performing approaches to classical planning internally utilize a STRIPS-like representation. In fact most of the planners (FF, LPG, Fast-Downward, SGPLAN5 and LAMA) first translate the ADL instance into one that is essentially a STRIPS one (without conditional or quantified effects or goals).
Comparison between STRIPS and ADL.
The expressiveness of the STRIPS language is constrained by the types of transformations on sets of formulas that can be described in the language. Transformations on sets of formulas using STRIPS operators are accomplished by removing some formulas from the set to be transformed and adding new additional formulas. For a given STRIPS operator the formulas to be added and deleted are fixed for all sets of formulas to be transformed. Consequently, STRIPS operators cannot adequately model actions whose effects depend on the situations in which they are performed. Consider a rocket which is going to be fired for a certain amount of time. The trajectory may vary not only because of the burn duration but also because of the velocity, mass and orientation of the rocket. It cannot be modelled by means of a STRIPS operator because the formulas that would have to be added and deleted would depend on the set of formulas to be transformed.
Although an efficient reasoning is possible when the STRIPS language is being used it is generally recognized that the expressiveness of STRIPS is not suitable for modeling actions in many real world applications. This inadequacy motivated the development of the ADL language. ADL expressiveness and complexity lies between the STRIPS language and the situation calculus. Its expressive power is sufficient to allow the rocket example described above to be represented yet, at the same time, it is restrictive enough to allow efficient reasoning algorithms to be developed.
As an example in a more complex version of the blocks world: it could be that block A is twice as big as blocks B and C, so the action xMoveOnto(B,A) might only have the effect of negating Clear(A) if On(A,C) is already true — a conditional effect that depends on the sizes of the blocks. This kind of conditional effect would be hard to express in STRIPS notation.
Example.
Consider the problem of air freight transport, where certain goods must be transported from an airport to another airport by plane and where airplanes need to be loaded and unloaded.
The necessary actions would be "loading", "unloading" and "flying"; with the descriptors one could express whether a freight "c" is in an airplane "p" and whether an object "x" is at an airport "A".
The actions could be defined then as follows:
Action (
    Load (c: Freight, p: Airplane, A: Airport)
    Precondition: At(c, A) ^ At(p, A)
    Effect: ¬At(c, A) ^ In(c, p)
)
Action (
    Unload (c: Freight, p: Airplane, A: Airport)
    Precondition: In(c, p) ^ At(p, A)
    Effect: At(c, A) ^ ¬In(c, p)
)
Action (
    Fly (p: Airplane, from: Airport, to: Airport)
    Precondition: At(p, from)
    Effect: ¬At(p, from) ^ At(p, to)
)
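The same idea can be sketched in Python (a hypothetical encoding for illustration, not an established planner API); a state is represented as a set of ground atoms and an action is applied by checking its preconditions and then processing its delete and add effects. Note that this simple sketch handles effects STRIPS-style and omits ADL's open-world semantics and conditional effects:
def apply(state, preconditions, add, delete):
    """Apply a ground action to a state (a set of ground atoms) if its preconditions hold."""
    if not preconditions <= state:
        raise ValueError("preconditions not satisfied")
    return (state - delete) | add

# hypothetical ground objects: freight C1, airplane P1, airports SFO and JFK
state = {("At", "C1", "SFO"), ("At", "P1", "SFO")}

# Load(C1, P1, SFO)
state = apply(state,
              preconditions={("At", "C1", "SFO"), ("At", "P1", "SFO")},
              add={("In", "C1", "P1")},
              delete={("At", "C1", "SFO")})

# Fly(P1, SFO, JFK)
state = apply(state,
              preconditions={("At", "P1", "SFO")},
              add={("At", "P1", "JFK")},
              delete={("At", "P1", "SFO")})

print(state)    # contains In(C1, P1) and At(P1, JFK)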
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " t(R) = f^a_R(s) = (d_1, \\ldots, d_n) \\in \\operatorname{Dom}(s)^n \\mid s[d_1/x_1, \\ldots, d_n/x_n \\models \\Phi^a_R(x_1, \\ldots, x_n)] "
}
] | https://en.wikipedia.org/wiki?curid=9889683 |
9891 | Entropy | Property of a thermodynamic system
Entropy is a scientific concept that is most commonly associated with a state of disorder, randomness, or uncertainty. The term and the concept are used in diverse fields, from classical thermodynamics, where it was first recognized, to the microscopic description of nature in statistical physics, and to the principles of information theory. It has found far-ranging applications in chemistry and physics, in biological systems and their relation to life, in cosmology, economics, sociology, weather science, climate change, and information systems including the transmission of information in telecommunication.
Entropy is central to the second law of thermodynamics, which states that the entropy of an isolated system left to spontaneous evolution cannot decrease with time. As a result, isolated systems evolve toward thermodynamic equilibrium, where the entropy is highest. A consequence of the second law of thermodynamics is that certain processes are irreversible.
The thermodynamic concept was referred to by Scottish scientist and engineer William Rankine in 1850 with the names "thermodynamic function" and "heat-potential". In 1865, German physicist Rudolf Clausius, one of the leading founders of the field of thermodynamics, defined it as the quotient of an infinitesimal amount of heat to the instantaneous temperature. He initially described it as "transformation-content", in German "Verwandlungsinhalt", and later coined the term "entropy" from a Greek word for "transformation".
Austrian physicist Ludwig Boltzmann explained entropy as the measure of the number of possible microscopic arrangements or states of individual atoms and molecules of a system that comply with the macroscopic condition of the system. He thereby introduced the concept of statistical disorder and probability distributions into a new field of thermodynamics, called statistical mechanics, and found the link between the microscopic interactions, which fluctuate about an average configuration, to the macroscopically observable behavior, in form of a simple logarithmic law, with a proportionality constant, the Boltzmann constant, that has become one of the defining universal constants for the modern International System of Units (SI).
History.
In his 1803 paper "Fundamental Principles of Equilibrium and Movement", the French mathematician Lazare Carnot proposed that in any machine, the accelerations and shocks of the moving parts represent losses of "moment of activity"; in any natural process there exists an inherent tendency towards the dissipation of useful energy. In 1824, building on that work, Lazare's son, Sadi Carnot, published "Reflections on the Motive Power of Fire", which posited that in all heat-engines, whenever "caloric" (what is now known as heat) falls through a temperature difference, work or motive power can be produced from the actions of its fall from a hot to cold body. He used an analogy with how water falls in a water wheel. That was an early insight into the second law of thermodynamics. Carnot based his views of heat partially on the early 18th-century "Newtonian hypothesis" that both heat and light were types of indestructible forms of matter, which are attracted and repelled by other matter, and partially on the contemporary views of Count Rumford, who showed in 1789 that heat could be created by friction, as when cannon bores are machined. Carnot reasoned that if the body of the working substance, such as a body of steam, is returned to its original state at the end of a complete engine cycle, "no change occurs in the condition of the working body".
The first law of thermodynamics, deduced from the heat-friction experiments of James Joule in 1843, expresses the concept of energy and its conservation in all processes; the first law, however, is unsuitable to separately quantify the effects of friction and dissipation.
In the 1850s and 1860s, German physicist Rudolf Clausius objected to the supposition that no change occurs in the working body, and gave that change a mathematical interpretation, by questioning the nature of the inherent loss of usable heat when work is done, e.g., heat produced by friction. He described his observations as a dissipative use of energy, resulting in a "transformation-content" ("Verwandlungsinhalt" in German) of a thermodynamic system or working body of chemical species during a change of state. That was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass. Clausius discovered that the non-usable energy increases as steam proceeds from inlet to exhaust in a steam engine. From the prefix "en-", as in 'energy', and from the Greek word "tropē", which is translated in an established lexicon as "turning" or "change" and that he rendered in German as "Verwandlung", a word often translated into English as "transformation", in 1865 Clausius coined the name of that property as "entropy". The word was adopted into the English language in 1868.
Later, scientists such as Ludwig Boltzmann, Josiah Willard Gibbs, and James Clerk Maxwell gave entropy a statistical basis. In 1877, Boltzmann visualized a probabilistic way to measure the entropy of an ensemble of ideal gas particles, in which he defined entropy as proportional to the natural logarithm of the number of microstates such a gas could occupy. The proportionality constant in this definition, called the Boltzmann constant, has become one of the defining universal constants for the modern International System of Units (SI). Henceforth, the essential problem in statistical thermodynamics has been to determine the distribution of a given amount of energy "E" over "N" identical systems. Constantin Carathéodory, a Greek mathematician, linked entropy with a mathematical definition of irreversibility, in terms of trajectories and integrability.
Etymology.
In 1865, Clausius named the concept of "the differential of a quantity which depends on the configuration of the system", "entropy", after the Greek word for 'transformation'. He gave "transformational content" ("Verwandlungsinhalt") as a synonym, paralleling his "thermal and ergonal content" as the name of "U", but preferring the term "entropy" as a close parallel of the word "energy", as he found the concepts nearly "analogous in their physical significance". This term was formed by replacing the root of 'ergon' ('work') by that of 'tropy' ('transformation').
In more detail, Clausius explained his choice of "entropy" as a name as follows:
I prefer going to the ancient languages for the names of important scientific quantities, so that they may mean the same thing in all living tongues. I propose, therefore, to call "S" the "entropy" of a body, after the Greek word "transformation". I have designedly coined the word "entropy" to be similar to energy, for these two quantities are so analogous in their physical significance, that an analogy of denominations seems to me helpful.
Leon Cooper added that in this way "he succeeded in coining a word that meant the same thing to everybody: nothing".
Definitions and descriptions.
<templatestyles src="Template:Quote_box/styles.css" />
Any method involving the notion of entropy, the very existence of which depends on the second law of thermodynamics, will doubtless seem to many far-fetched, and may repel beginners as obscure and difficult of comprehension.
Willard Gibbs, "Graphical Methods in the Thermodynamics of Fluids"
The concept of entropy is described by two principal approaches, the macroscopic perspective of classical thermodynamics, and the microscopic description central to statistical mechanics. The classical approach defines entropy in terms of macroscopically measurable physical properties, such as bulk mass, volume, pressure, and temperature. The statistical definition of entropy defines it in terms of the statistics of the motions of the microscopic constituents of a system — modeled at first classically, e.g. Newtonian particles constituting a gas, and later quantum-mechanically (photons, phonons, spins, etc.). The two approaches form a consistent, unified view of the same phenomenon as expressed in the second law of thermodynamics, which has found universal applicability to physical processes.
State variables and functions of state.
Many thermodynamic properties are defined by physical variables that define a state of thermodynamic equilibrium; these are state variables. State variables depend only on the equilibrium condition, not on the path taken to reach that state. State variables can be functions of state, also called state functions, in the sense that one state variable is a mathematical function of other state variables. Often, if some properties of a system are determined, they are sufficient to determine the state of the system and thus the values of other properties. For example, the temperature and pressure of a given quantity of gas determine its state, and thus also its volume via the ideal gas law. A system composed of a pure substance of a single phase at a particular uniform temperature and pressure is determined, and is thus a particular state, and has a particular volume. The fact that entropy is a function of state makes it useful. In the Carnot cycle, the working fluid returns to the same state that it had at the start of the cycle, hence the change or line integral of any state function, such as entropy, over this reversible cycle is zero.
Reversible process.
The entropy change "formula_0" of a system excluding its surroundings can be well-defined as a small portion of heat "formula_1" transferred to the system during reversible process divided by the temperature "formula_2" of the system during this heat transfer:formula_3The reversible process is quasistatic (i.e., it occurs without any dissipation, deviating only infinitesimally from the thermodynamic equilibrium), and it may conserve total entropy. For example, in the Carnot cycle, while the heat flow from a hot reservoir to a cold reservoir represents the increase in the entropy in a cold reservoir, the work output, if reversibly and perfectly stored, represents the decrease in the entropy which could be used to operate the heat engine in reverse, returning to the initial state; thus the total entropy change may still be zero at all times if the entire process is reversible.
In contrast, an irreversible process increases the total entropy of the system and its surroundings. Any process that happens quickly enough to deviate from thermal equilibrium cannot be reversible: the total entropy increases, and the potential for maximum work to be done during the process is lost.
Carnot cycle.
The concept of entropy arose from Rudolf Clausius's study of the Carnot cycle, a thermodynamic cycle performed by a Carnot heat engine as a reversible heat engine. In a Carnot cycle the heat formula_4 is transferred from a hot reservoir to a working gas at the constant temperature formula_5 during the isothermal expansion stage, and the heat formula_6 is transferred from the working gas to a cold reservoir at the constant temperature formula_7 during the isothermal compression stage. According to Carnot's theorem, a heat engine with two thermal reservoirs can produce work formula_8 if and only if there is a temperature difference between the reservoirs. Originally, Carnot did not distinguish between the heats formula_4 and formula_6, as he assumed caloric theory to be valid and hence that the total heat in the system was conserved. But in fact, the magnitude of heat formula_4 is greater than the magnitude of heat formula_6. Through the efforts of Clausius and Kelvin, the work formula_8 done by a reversible heat engine was found to be the product of the Carnot efficiency (i.e., the efficiency of all reversible heat engines with the same pair of thermal reservoirs) and the heat formula_4 absorbed by the working body of the engine during isothermal expansion:formula_9To derive the Carnot efficiency, Kelvin had to evaluate the ratio of the work output to the heat absorbed during the isothermal expansion with the help of the Carnot–Clapeyron equation, which contained an unknown function called the Carnot function. The possibility that the Carnot function could be the temperature as measured from a zero point of temperature was suggested by Joule in a letter to Kelvin. This allowed Kelvin to establish his absolute temperature scale.
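The proportionality between the work of a reversible engine and the heat drawn from the hot reservoir can be illustrated numerically. The sketch below is not from the original text; the reservoir temperatures and heat input are assumed illustrative values.

```python
# A minimal sketch of the relation above: the work of a reversible engine
# equals the Carnot efficiency times the heat absorbed from the hot
# reservoir. Temperatures and heat input are assumed illustrative values.

def carnot_work(q_hot, t_hot, t_cold):
    """W = (1 - Tc/Th) * Qh for a reversible engine; temperatures in kelvin."""
    return (1.0 - t_cold / t_hot) * q_hot

q_h, t_h, t_c = 1000.0, 500.0, 300.0   # J, K, K (assumed)
w = carnot_work(q_h, t_h, t_c)
q_c = q_h - w                          # heat rejected to the cold reservoir
print(f"efficiency = {1 - t_c / t_h:.2f}, W = {w:.0f} J, Qc = {q_c:.0f} J")
```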
It is known that the work formula_10 produced by an engine over a cycle equals the net heat formula_11 absorbed over the cycle. Thus, with the sign convention for heat formula_12 transferred in a thermodynamic process (formula_13 for absorption and formula_14 for release) we get:formula_15Since this equality holds over an entire Carnot cycle, it gave Clausius the hint that at each stage of the cycle the difference between the work and the net heat, rather than the net heat itself, would be conserved. This means there exists a state function formula_16 with a change of formula_17. It is called the internal energy and forms a central concept for the first law of thermodynamics.
Finally, comparison of both representations of the work output in a Carnot cycle gives us:formula_18Similarly to the derivation of internal energy, this equality implies the existence of a state function formula_19 with a change of formula_20 that is conserved over an entire cycle. Clausius called this state function "entropy".
In addition, the total change of entropy in both thermal reservoirs over the Carnot cycle is also zero, since the inversion of the heat transfer direction means a sign inversion for the heat transferred during the isothermal stages:formula_21Here we denote the entropy change for a thermal reservoir by formula_22, where formula_23 is either formula_24 for the hot reservoir or formula_25 for the cold one.
If we consider a heat engine which is less efficient than the Carnot cycle (i.e., the work formula_26 produced by this engine is less than the maximum predicted by Carnot's theorem), its work output is capped by the Carnot efficiency as:formula_27Substitution of the work formula_8 as the net heat into the inequality above gives us:formula_28or in terms of the entropy change formula_29:formula_30The Carnot cycle and the entropy introduced above prove to be useful in the study of any classical thermodynamic heat engine: other cycles, such as the Otto, Diesel or Brayton cycle, can be analyzed from the same standpoint. Notably, any machine or cyclic process that converts heat into work (i.e., a heat engine) and is claimed to produce an efficiency greater than the Carnot efficiency is not viable, because it would violate the second law of thermodynamics.
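A short numerical sketch of this reservoir bookkeeping: for a reversible engine the entropy changes of the two reservoirs cancel, while an engine producing less than the Carnot work leaves a positive total. All numbers below are illustrative assumptions, not values from the text.

```python
# Reservoir entropy bookkeeping for one cycle: the hot reservoir loses Qh at
# Th, the cold reservoir gains Qh - W at Tc. For the reversible (Carnot) work
# the two changes cancel; for any smaller work output the total is positive.

def total_reservoir_entropy_change(q_h, w, t_h, t_c):
    q_c = q_h - w                      # heat dumped into the cold reservoir
    return -q_h / t_h + q_c / t_c      # J/K

t_h, t_c, q_h = 500.0, 300.0, 1000.0   # assumed values
w_carnot = (1.0 - t_c / t_h) * q_h     # 400 J, the reversible limit
print(total_reservoir_entropy_change(q_h, w_carnot, t_h, t_c))  # 0.0
print(total_reservoir_entropy_change(q_h, 250.0, t_h, t_c))     # positive
```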
For further analysis of sufficiently discrete systems, such as an assembly of particles, statistical thermodynamics must be used. Additionally, descriptions of devices operating near the limit of de Broglie waves, e.g. photovoltaic cells, have to be consistent with quantum statistics.
Classical thermodynamics.
The thermodynamic definition of entropy was developed in the early 1850s by Rudolf Clausius and essentially describes how to measure the entropy of an isolated system in thermodynamic equilibrium with its parts. Clausius created the term entropy as an extensive thermodynamic variable that was shown to be useful in characterizing the Carnot cycle. Heat transfer in the isotherm steps (isothermal expansion and isothermal compression) of the Carnot cycle was found to be proportional to the temperature of a system (known as its absolute temperature). This relationship was expressed in an increment of entropy that is equal to incremental heat transfer divided by temperature. Entropy was found to vary in the thermodynamic cycle but eventually returned to the same value at the end of every cycle. Thus it was found to be a function of state, specifically a thermodynamic state of the system.
While Clausius based his definition on a reversible process, there are also irreversible processes that change entropy. Following the second law of thermodynamics, entropy of an isolated system always increases for irreversible processes. The difference between an isolated system and closed system is that energy may "not" flow to and from an isolated system, but energy flow to and from a closed system is possible. Nevertheless, for both closed and isolated systems, and indeed, also in open systems, irreversible thermodynamics processes may occur.
According to the Clausius equality, for a reversible cyclic thermodynamic process: formula_31which means the line integral formula_32 is path-independent. Thus we can define a state function formula_19, called "entropy":formula_3Therefore thermodynamic entropy has the dimension of energy divided by temperature, and the unit joule per kelvin (J/K) in the International System of Units (SI).
To find the entropy difference between any two states of the system, the integral must be evaluated for some reversible path between the initial and final states. Since entropy is a state function, the entropy change of the system for an irreversible path is the same as for a reversible path between the same two states. However, the heat transferred to or from the surroundings differs, as does the entropy change of the surroundings.
We can calculate the change of entropy only by integrating the above formula. To obtain the absolute value of the entropy, we consider the third law of thermodynamics: perfect crystals at absolute zero have an entropy formula_33.
From a macroscopic perspective, in classical thermodynamics the entropy is interpreted as a state function of a thermodynamic system: that is, a property depending only on the current state of the system, independent of how that state came to be achieved. In any process where the system gives up an amount of energy formula_34 to the surroundings at the temperature formula_2 and its entropy falls by formula_35, at least formula_36 of that energy must be given up to the system's surroundings as heat; otherwise the process cannot go forward. In classical thermodynamics, the entropy of a system is defined if and only if it is in thermodynamic equilibrium (though chemical equilibrium is not required: for example, the entropy of a mixture of two moles of hydrogen and one mole of oxygen at standard conditions is well-defined).
Statistical mechanics.
The statistical definition was developed by Ludwig Boltzmann in the 1870s by analyzing the statistical behavior of the microscopic components of the system. Boltzmann showed that this definition of entropy was equivalent to the thermodynamic entropy to within a constant factor—known as the Boltzmann constant. In short, the thermodynamic definition of entropy provides the experimental verification of entropy, while the statistical definition of entropy extends the concept, providing an explanation and a deeper understanding of its nature.
The interpretation of entropy in statistical mechanics is the measure of uncertainty, disorder, or "mixedupness" in the phrase of Gibbs, which remains about a system after its observable macroscopic properties, such as temperature, pressure and volume, have been taken into account. For a given set of macroscopic variables, the entropy measures the degree to which the probability of the system is spread out over different possible microstates. In contrast to the macrostate, which characterizes plainly observable average quantities, a microstate specifies all molecular details about the system including the position and momentum of every molecule. The more such states are available to the system with appreciable probability, the greater the entropy. In statistical mechanics, entropy is a measure of the number of ways a system can be arranged, often taken to be a measure of "disorder" (the higher the entropy, the higher the disorder). This definition describes the entropy as being proportional to the natural logarithm of the number of possible microscopic configurations of the individual atoms and molecules of the system (microstates) that could cause the observed macroscopic state (macrostate) of the system. The constant of proportionality is the Boltzmann constant.
The Boltzmann constant, and therefore entropy, have dimensions of energy divided by temperature, which has a unit of joules per kelvin (J⋅K−1) in the International System of Units (or kg⋅m2⋅s−2⋅K−1 in terms of base units). The entropy of a substance is usually given as an intensive property — either entropy per unit mass (SI unit: J⋅K−1⋅kg−1) or entropy per unit amount of substance (SI unit: J⋅K−1⋅mol−1).
Specifically, entropy is a logarithmic measure for the system with a number of states, each with a probability formula_37 of being occupied (usually given by the Boltzmann distribution):formula_38where formula_39 is the Boltzmann constant and the summation is performed over all possible microstates of the system.
If the states are defined in a continuous manner, the summation is replaced by an integral over all possible states, or equivalently we can consider the expected value of the logarithm of the probability that a microstate is occupied:formula_40This definition assumes the basis states to be picked in such a way that there is no information on their relative phases. In the general case the expression is:formula_41where formula_42 is a density matrix, formula_43 is a trace operator and formula_44 is a matrix logarithm. The density matrix formalism is not required if the system is in thermal equilibrium, so long as the basis states are chosen to be eigenstates of the Hamiltonian. For most practical purposes this can be taken as the fundamental definition of entropy, since all other formulae for formula_19 can be derived from it, but not vice versa.
In what has been called "the fundamental postulate in statistical mechanics", among system microstates of the same energy (i.e., degenerate microstates) each microstate is assumed to be populated with equal probability formula_45, where formula_46 is the number of microstates whose energy equals that of the system. Usually, this assumption is justified for an isolated system in thermodynamic equilibrium. Then in the case of an isolated system the previous formula reduces to:formula_47In thermodynamics, such a system is one with a fixed volume, number of molecules, and internal energy, called a microcanonical ensemble.
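The Gibbs entropy formula and its microcanonical reduction can be checked numerically. The sketch below is an illustration, not part of the original text: the energy levels are assumed values, and the result is reported in joules per kelvin.

```python
import math

# Gibbs entropy S = -kB * sum(p_i * ln p_i) for a Boltzmann distribution over
# a few assumed energy levels, plus a check that uniform probabilities over
# Omega microstates reduce to S = kB * ln(Omega).

K_B = 1.380649e-23  # Boltzmann constant, J/K

def gibbs_entropy(probabilities):
    return -K_B * sum(p * math.log(p) for p in probabilities if p > 0.0)

def boltzmann_distribution(energies, temperature):
    weights = [math.exp(-e / (K_B * temperature)) for e in energies]
    z = sum(weights)
    return [w / z for w in weights]

p = boltzmann_distribution([0.0, 1e-21, 2e-21], 300.0)  # assumed levels, J
print(gibbs_entropy(p))                                  # J/K

omega = 4
print(gibbs_entropy([1.0 / omega] * omega), K_B * math.log(omega))  # equal
```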
The most general interpretation of entropy is as a measure of the extent of uncertainty about a system. The equilibrium state of a system maximizes the entropy because it does not reflect all information about the initial conditions, except for the conserved variables. This uncertainty is not of the everyday subjective kind, but rather the uncertainty inherent to the experimental method and interpretative model.
The interpretative model has a central role in determining entropy. The qualifier "for a given set of macroscopic variables" above has deep implications when two observers use different sets of macroscopic variables. For example, consider observer A using the variables formula_16, formula_48, formula_8 and observer B using the variables formula_16, formula_48, formula_8, formula_49. If observer B changes variable formula_49, then observer A will see a violation of the second law of thermodynamics, since A does not possess information about variable formula_49 and its influence on the system. In other words, one must choose a complete set of macroscopic variables to describe the system, i.e. every independent parameter that may change during the experiment.
Entropy can also be defined for any Markov processes with reversible dynamics and the detailed balance property.
In Boltzmann's 1896 "Lectures on Gas Theory", he showed that this expression gives a measure of entropy for systems of atoms and molecules in the gas phase, thus providing a measure for the entropy of classical thermodynamics.
Entropy of a system.
Entropy arises directly from the Carnot cycle. It can also be described as the reversible heat divided by temperature. Entropy is a fundamental function of state.
In a thermodynamic system, pressure and temperature tend to become uniform over time because the equilibrium state has higher probability (more possible combinations of microstates) than any other state.
As an example, for a glass of ice water in air at room temperature, the difference in temperature between the warm room (the surroundings) and the cold glass of ice and water (the system and not part of the room) decreases as portions of the thermal energy from the warm surroundings spread to the cooler system of ice and water. Over time the temperature of the glass and its contents and the temperature of the room become equal. In other words, the entropy of the room has decreased as some of its energy has been dispersed to the ice and water, of which the entropy has increased.
However, as calculated in the example, the entropy of the system of ice and water has increased more than the entropy of the surrounding room has decreased. In an isolated system such as the room and ice water taken together, the dispersal of energy from warmer to cooler always results in a net increase in entropy. Thus, when the "universe" of the room and ice water system has reached a temperature equilibrium, the entropy change from the initial state is at a maximum. The entropy of the thermodynamic system is a measure of how far the equalization has progressed.
Thermodynamic entropy is a non-conserved state function that is of great importance in the sciences of physics and chemistry. Historically, the concept of entropy evolved to explain why some processes (permitted by conservation laws) occur spontaneously while their time reversals (also permitted by conservation laws) do not; systems tend to progress in the direction of increasing entropy. For isolated systems, entropy never decreases. This fact has several important consequences in science: first, it prohibits "perpetual motion" machines; and second, it implies the arrow of entropy has the same direction as the arrow of time. Increases in the total entropy of system and surroundings correspond to irreversible changes, because some energy is expended as waste heat, limiting the amount of work a system can do.
Unlike many other functions of state, entropy cannot be directly observed but must be calculated. Absolute standard molar entropy of a substance can be calculated from the measured temperature dependence of its heat capacity. The molar entropy of ions is obtained as a difference in entropy from a reference state defined as zero entropy. The second law of thermodynamics states that the entropy of an isolated system must increase or remain constant. Therefore, entropy is not a conserved quantity: for example, in an isolated system with non-uniform temperature, heat might irreversibly flow and the temperature become more uniform such that entropy increases. Chemical reactions cause changes in entropy and system entropy, in conjunction with enthalpy, plays an important role in determining in which direction a chemical reaction spontaneously proceeds.
One dictionary definition of entropy is that it is "a measure of thermal energy per unit temperature that is not available for useful work" in a cyclic process. For instance, a substance at uniform temperature is at maximum entropy and cannot drive a heat engine. A substance at non-uniform temperature is at a lower entropy (than if the heat distribution is allowed to even out) and some of the thermal energy can drive a heat engine.
A special case of entropy increase, the entropy of mixing, occurs when two or more different substances are mixed. If the substances are at the same temperature and pressure, there is no net exchange of heat or work – the entropy change is entirely due to the mixing of the different substances. At a statistical mechanical level, this results due to the change in available volume per particle with mixing.
Equivalence of definitions.
Proofs of equivalence between the entropy in statistical mechanics — the Gibbs entropy formula:formula_38and the entropy in classical thermodynamics:formula_3together with the fundamental thermodynamic relation are known for the microcanonical ensemble, the canonical ensemble, the grand canonical ensemble, and the isothermal–isobaric ensemble. These proofs are based on the probability density of microstates of the generalized Boltzmann distribution and the identification of the thermodynamic internal energy as the ensemble average formula_50. Thermodynamic relations are then employed to derive the well-known Gibbs entropy formula. However, the equivalence between the Gibbs entropy formula and the thermodynamic definition of entropy is not a fundamental thermodynamic relation but rather a consequence of the form of the generalized Boltzmann distribution.
Furthermore, it has been shown that the definition of entropy in statistical mechanics is the only entropy that is equivalent to the classical thermodynamic entropy under the following postulates:
Second law of thermodynamics.
The second law of thermodynamics requires that, in general, the total entropy of any system does not decrease other than by increasing the entropy of some other system. Hence, in a system isolated from its environment, the entropy of that system tends not to decrease. It follows that heat cannot flow from a colder body to a hotter body without the application of work to the colder body. Secondly, it is impossible for any device operating on a cycle to produce net work from a single temperature reservoir; the production of net work requires flow of heat from a hotter reservoir to a colder reservoir, or a single expanding reservoir undergoing adiabatic cooling, which performs adiabatic work. As a result, there is no possibility of a perpetual motion machine. It follows that a reduction in the increase of entropy in a specified process, such as a chemical reaction, means that it is energetically more efficient.
It follows from the second law of thermodynamics that the entropy of a system that is not isolated may decrease. An air conditioner, for example, may cool the air in a room, thus reducing the entropy of the air of that system. The heat expelled from the room (the system), which the air conditioner transports and discharges to the outside air, always makes a bigger contribution to the entropy of the environment than the decrease of the entropy of the air of that system. Thus, the total of entropy of the room plus the entropy of the environment increases, in agreement with the second law of thermodynamics.
In mechanics, the second law in conjunction with the fundamental thermodynamic relation places limits on a system's ability to do useful work. The entropy change of a system at temperature formula_2 absorbing an infinitesimal amount of heat formula_51 in a reversible way, is given by formula_52. More explicitly, an energy formula_53 is not available to do useful work, where formula_54 is the temperature of the coldest accessible reservoir or heat sink external to the system. For further discussion, see "Exergy".
Statistical mechanics demonstrates that entropy is governed by probability, thus allowing for a decrease in disorder even in an isolated system. Although this is possible, such an event has a small probability of occurring, making it unlikely.
The applicability of a second law of thermodynamics is limited to systems in or sufficiently near equilibrium state, so that they have defined entropy. Some inhomogeneous systems out of thermodynamic equilibrium still satisfy the hypothesis of local thermodynamic equilibrium, so that entropy density is locally defined as an intensive quantity. For such systems, there may apply a principle of maximum time rate of entropy production. It states that such a system may evolve to a steady state that maximizes its time rate of entropy production. This does not mean that such a system is necessarily always in a condition of maximum time rate of entropy production; it means that it may evolve to such a steady state.
Applications.
The fundamental thermodynamic relation.
The entropy of a system depends on its internal energy and its external parameters, such as its volume. In the thermodynamic limit, this fact leads to an equation relating the change in the internal energy formula_16 to changes in the entropy and the external parameters. This relation is known as the "fundamental thermodynamic relation". If external pressure formula_55 bears on the volume formula_48 as the only external parameter, this relation is:formula_56Since both internal energy and entropy are monotonic functions of temperature formula_2, implying that the internal energy is fixed when one specifies the entropy and the volume, this relation is valid even if the change from one state of thermal equilibrium to another with infinitesimally larger entropy and volume happens in a non-quasistatic way (so during this change the system may be very far out of thermal equilibrium and then the whole-system entropy, pressure, and temperature may not exist).
The fundamental thermodynamic relation implies many thermodynamic identities that are valid in general, independent of the microscopic details of the system. Important examples are the Maxwell relations and the relations between heat capacities.
Entropy in chemical thermodynamics.
Thermodynamic entropy is central in chemical thermodynamics, enabling changes to be quantified and the outcome of reactions predicted. The second law of thermodynamics states that entropy in an isolated system, the combination of a subsystem under study and its surroundings, increases during all spontaneous chemical and physical processes. The Clausius equation introduces the measurement of entropy change, which describes the direction and quantifies the magnitude of simple changes such as heat transfer between systems, always spontaneously from the hotter body to the cooler one.
Thermodynamic entropy is an extensive property, meaning that it scales with the size or extent of a system. In many processes it is useful to specify the entropy as an intensive property independent of the size, as a specific entropy characteristic of the type of system studied. Specific entropy may be expressed relative to a unit of mass, typically the kilogram (unit: J⋅kg−1⋅K−1). Alternatively, in chemistry, it is also referred to one mole of substance, in which case it is called the "molar entropy" with a unit of J⋅mol−1⋅K−1.
Thus, when one mole of substance at about 0 K is warmed by its surroundings to 298 K, the sum of the incremental values of formula_57 constitute each element's or compound's standard molar entropy, an indicator of the amount of energy stored by a substance at 298 K. Entropy change also measures the mixing of substances as a summation of their relative quantities in the final mixture.
Entropy is equally essential in predicting the extent and direction of complex chemical reactions. For such applications, formula_35 must be incorporated in an expression that includes both the system and its surroundings: formula_58Via additional steps this expression becomes the equation of Gibbs free energy change formula_59 for reactants and products in the system at the constant pressure and temperature formula_2:formula_60where formula_61 is the enthalpy change and formula_35 is the entropy change.
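As a hedged illustration of the Gibbs free energy criterion, the sketch below evaluates the free energy change for a reaction with an assumed negative enthalpy change and negative entropy change, showing how the sign of the result, and hence spontaneity, can switch with temperature. The numerical values are assumptions for illustration only.

```python
# Gibbs free energy change, delta G = delta H - T * delta S, for an assumed
# reaction. A negative delta G indicates a spontaneous process at that T.

def gibbs_free_energy_change(delta_h, delta_s, temperature):
    return delta_h - temperature * delta_s   # J/mol

delta_h = -50_000.0   # J/mol, assumed enthalpy change
delta_s = -100.0      # J/(mol K), assumed entropy change

for t in (200.0, 298.15, 600.0):
    dg = gibbs_free_energy_change(delta_h, delta_s, t)
    print(f"T = {t:6.1f} K  delta G = {dg:8.0f} J/mol  spontaneous: {dg < 0}")
```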
World's technological capacity to store and communicate entropic information.
A 2011 study in "Science" estimated the world's technological capacity to store and communicate optimally compressed information, normalized on the most effective compression algorithms available in the year 2007, therefore estimating the entropy of the technologically available sources. The authors estimate that humankind's technological capacity to store information grew from 2.6 (entropically compressed) exabytes in 1986 to 295 (entropically compressed) exabytes in 2007. The world's technological capacity to receive information through one-way broadcast networks grew from 432 exabytes of (entropically compressed) information in 1986 to 1.9 zettabytes in 2007. The world's effective capacity to exchange information through two-way telecommunication networks grew from 281 petabytes of (entropically compressed) information in 1986 to 65 (entropically compressed) exabytes in 2007.
Entropy balance equation for open systems.
In chemical engineering, the principles of thermodynamics are commonly applied to "open systems", i.e. those in which heat, work, and mass flow across the system boundary. In general, flow of heat formula_62, flow of shaft work formula_63 and pressure-volume work formula_64 across the system boundaries cause changes in the entropy of the system. Heat transfer entails entropy transfer formula_65, where formula_2 is the absolute thermodynamic temperature of the system at the point of the heat flow. If there are mass flows across the system boundaries, they also influence the total entropy of the system. This account, in terms of heat and work, is valid only for cases in which the work and heat transfers are by paths physically distinct from the paths of entry and exit of matter from the system.
To derive a generalized entropy balance equation, we start with the general balance equation for the change in any extensive quantity formula_66 in a thermodynamic system, a quantity that may be either conserved, such as energy, or non-conserved, such as entropy. The basic generic balance expression states that formula_67, i.e. the rate of change of formula_66 in the system, equals the rate at which formula_66 enters the system at the boundaries, minus the rate at which formula_66 leaves the system across the system boundaries, plus the rate at which formula_66 is generated within the system. For an open thermodynamic system in which heat and work are transferred by paths separate from the paths for transfer of matter, using this generic balance equation, with respect to the rate of change with time formula_68 of the extensive quantity entropy formula_19, the entropy balance equation is:formula_69where formula_70 is the net rate of entropy flow due to the flows of mass formula_71 into and out of the system with entropy per unit mass formula_72, formula_73 is the rate of entropy flow due to the flow of heat across the system boundary and formula_74 is the rate of entropy generation within the system, e.g. by chemical reactions, phase transitions, internal heat transfer or frictional effects such as viscosity.
In case of multiple heat flows the term formula_65 is replaced by formula_75, where formula_76 is the heat flow through formula_77-th port into the system and formula_78 is the temperature at the formula_77-th port.
The nomenclature "entropy balance" is misleading and often deemed inappropriate because entropy is not a conserved quantity. In other words, the term formula_74 is never a known quantity but always a derived one based on the expression above. Therefore, the open system version of the second law is more appropriately described as the "entropy generation equation", since it specifies that:formula_79with the value zero for a reversible process and positive values for an irreversible one.
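A minimal sketch of the entropy balance above, for one inlet, one outlet and a single heat flow, is given below. The stream data, heat flow and generation rate are assumed illustrative values, not quantities from the text.

```python
# Entropy balance for an open system: dS/dt = sum(Mdot_k * Shat_k) + Qdot/T
# + Sdot_gen. Inflows are taken as positive mass flows, outflows as negative.

def entropy_rate(mass_flows, specific_entropies, q_dot, temperature, s_gen):
    stream_term = sum(m * s for m, s in zip(mass_flows, specific_entropies))
    return stream_term + q_dot / temperature + s_gen   # W/K

ds_dt = entropy_rate(
    mass_flows=[0.5, -0.5],               # kg/s: inlet, outlet (assumed)
    specific_entropies=[1200.0, 1350.0],  # J/(kg K), assumed stream values
    q_dot=2.0e4,                          # W of heat entering the system
    temperature=350.0,                    # K at the point of heat transfer
    s_gen=5.0,                            # W/K generated inside the system
)
print(ds_dt)   # net rate of change of the system entropy, W/K
```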
Entropy change formulas for simple processes.
For certain simple transformations in systems of constant composition, the entropy changes are given by simple formulas.
Isothermal expansion or compression of an ideal gas.
For the expansion (or compression) of an ideal gas from an initial volume formula_80 and pressure formula_81 to a final volume formula_48 and pressure formula_82 at any constant temperature, the change in entropy is given by:formula_83Here formula_84 is the amount of gas (in moles) and formula_85 is the ideal gas constant. These equations also apply for expansion into a finite vacuum or a throttling process, where the temperature, internal energy and enthalpy for an ideal gas remain constant.
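For example, the isothermal formula can be evaluated directly; the sketch below assumes one mole of ideal gas doubling its volume.

```python
import math

R = 8.314462618  # ideal gas constant, J/(mol K)

def delta_s_isothermal(n, v_initial, v_final):
    """delta S = n R ln(V / V0) for an ideal gas at constant temperature."""
    return n * R * math.log(v_final / v_initial)

print(delta_s_isothermal(1.0, 0.010, 0.020))  # doubling the volume: about +5.76 J/K
```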
Cooling and heating.
For pure heating or cooling of any system (gas, liquid or solid) at constant pressure from an initial temperature formula_86 to a final temperature formula_2, the entropy change is:
formula_87
provided that the constant-pressure molar heat capacity (or specific heat) formula_88 is constant and that no phase transition occurs in this temperature interval.
Similarly at constant volume, the entropy change is:formula_89where the constant-volume molar heat capacity formula_90 is constant and there is no phase change.
At low temperatures near absolute zero, heat capacities of solids quickly drop off to near zero, so the assumption of constant heat capacity does not apply.
Since entropy is a state function, the entropy change of any process in which temperature and volume both vary is the same as for a path divided into two steps – heating at constant volume and expansion at constant temperature. For an ideal gas, the total entropy change is:formula_91Similarly if the temperature and pressure of an ideal gas both vary:formula_92
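The heating formulas above are equally easy to evaluate. The sketch below assumes a monatomic ideal gas, for which the constant-volume molar heat capacity is 3R/2, and illustrative initial and final states.

```python
import math

R = 8.314462618        # J/(mol K)
C_V = 1.5 * R          # assumed monatomic ideal gas
C_P = C_V + R

def delta_s_constant_pressure(n, t0, t):
    return n * C_P * math.log(t / t0)

def delta_s_temperature_and_volume(n, t0, t, v0, v):
    # Two-step path: heating at constant volume, then isothermal expansion.
    return n * C_V * math.log(t / t0) + n * R * math.log(v / v0)

print(delta_s_constant_pressure(1.0, 300.0, 600.0))                     # J/K
print(delta_s_temperature_and_volume(1.0, 300.0, 600.0, 0.010, 0.025))  # J/K
```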
Phase transitions.
Reversible phase transitions occur at constant temperature and pressure. The reversible heat is the enthalpy change for the transition, and the entropy change is the enthalpy change divided by the thermodynamic temperature. For fusion (i.e., melting) of a solid to a liquid at the melting point formula_93, the entropy of fusion is:formula_94Similarly, for vaporization of a liquid to a gas at the boiling point formula_95, the entropy of vaporization is:formula_96
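As a worked illustration, the sketch below divides commonly tabulated transition enthalpies of water by the corresponding transition temperatures; the enthalpy values are approximate and should be treated as assumptions.

```python
def transition_entropy(delta_h, t_transition):
    """delta S = delta H / T for a reversible phase change."""
    return delta_h / t_transition

# Approximate values for water, used here as illustrative assumptions:
print(transition_entropy(6.01e3, 273.15))   # fusion, about 22 J/(mol K)
print(transition_entropy(40.7e3, 373.15))   # vaporization, about 109 J/(mol K)
```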
Approaches to understanding entropy.
As a fundamental aspect of thermodynamics and physics, several different approaches to entropy beyond that of Clausius and Boltzmann are valid.
Standard textbook definitions.
The following is a list of additional definitions of entropy from a collection of textbooks:
In Boltzmann's analysis in terms of constituent particles, entropy is a measure of the number of possible microscopic states (or microstates) of a system in thermodynamic equilibrium.
Order and disorder.
Entropy is often loosely associated with the amount of order or disorder, or of chaos, in a thermodynamic system. The traditional qualitative description of entropy is that it refers to changes in the state of the system and is a measure of "molecular disorder" and the amount of wasted energy in a dynamical energy transformation from one state or form to another. In this direction, several recent authors have derived exact entropy formulas to account for and measure disorder and order in atomic and molecular assemblies. One of the simpler entropy order/disorder formulas is that derived in 1984 by thermodynamic physicist Peter Landsberg, based on a combination of thermodynamics and information theory arguments. He argues that when constraints operate on a system, such that it is prevented from entering one or more of its possible or permitted states, as contrasted with its forbidden states, the measure of the total amount of "disorder" in the system is given by:formula_97Similarly, the total amount of "order" in the system is given by:formula_98In which formula_99 is the "disorder" capacity of the system, which is the entropy of the parts contained in the permitted ensemble, formula_100 is the "information" capacity of the system, an expression similar to Shannon's channel capacity, and formula_101 is the "order" capacity of the system.
Energy dispersal.
The concept of entropy can be described qualitatively as a measure of energy dispersal at a specific temperature. Similar terms have been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels.
Ambiguities in the terms "disorder" and "chaos", which usually have meanings directly opposed to equilibrium, contribute to widespread confusion and hamper comprehension of entropy for most students. As the second law of thermodynamics shows, in an isolated system internal portions at different temperatures tend to adjust to a single uniform temperature and thus produce equilibrium. A recently developed educational approach avoids ambiguous terms and describes such spreading out of energy as dispersal, which leads to loss of the differentials required for work even though the total energy remains constant in accordance with the first law of thermodynamics (compare discussion in next section). Physical chemist Peter Atkins, in his textbook "Physical Chemistry", introduces entropy with the statement that "spontaneous changes are always accompanied by a dispersal of energy or matter and often both".
Relating entropy to energy "usefulness".
It is possible (in a thermal context) to regard lower entropy as a measure of the "effectiveness" or "usefulness" of a particular quantity of energy. Energy supplied at a higher temperature (i.e. with low entropy) tends to be more useful than the same amount of energy available at a lower temperature. Mixing a hot parcel of a fluid with a cold one produces a parcel of intermediate temperature, in which the overall increase in entropy represents a "loss" that can never be replaced.
As the entropy of the universe is steadily increasing, its total energy is becoming less useful. Eventually, this is theorized to lead to the heat death of the universe.
Entropy and adiabatic accessibility.
A definition of entropy based entirely on the relation of adiabatic accessibility between equilibrium states was given by E. H. Lieb and J. Yngvason in 1999. This approach has several predecessors, including the pioneering work of Constantin Carathéodory from 1909 and the monograph by R. Giles. In the setting of Lieb and Yngvason, one starts by picking, for a unit amount of the substance under consideration, two reference states formula_102 and formula_103 such that the latter is adiabatically accessible from the former but not conversely. Defining the entropies of the reference states to be 0 and 1 respectively, the entropy of a state formula_49 is defined as the largest number formula_104 such that formula_49 is adiabatically accessible from a composite state consisting of an amount formula_104 in the state formula_103 and a complementary amount, formula_105, in the state formula_102. A simple but important result within this setting is that entropy is uniquely determined, apart from a choice of unit and an additive constant for each chemical element, by the following properties: it is monotonic with respect to the relation of adiabatic accessibility, additive on composite systems, and extensive under scaling.
Entropy in quantum mechanics.
In quantum statistical mechanics, the concept of entropy was developed by John von Neumann and is generally referred to as "von Neumann entropy":formula_41where formula_42 is the density matrix, formula_106 is the trace operator and formula_39 is the Boltzmann constant.
This upholds the correspondence principle, because in the classical limit, when the phases between the basis states are purely random, this expression is equivalent to the familiar classical definition of entropy for states with classical probabilities formula_37:formula_38i.e. in such a basis the density matrix is diagonal.
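The von Neumann entropy can be computed from the eigenvalues of the density matrix. The sketch below, an assumption-laden illustration rather than part of the original text, evaluates it for a pure and a maximally mixed qubit state and reports the result in units of the Boltzmann constant.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -tr(rho ln rho), in units of the Boltzmann constant."""
    eigenvalues = np.linalg.eigvalsh(rho)
    eigenvalues = eigenvalues[eigenvalues > 1e-12]   # treat 0 * ln 0 as 0
    return float(-np.sum(eigenvalues * np.log(eigenvalues)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])    # pure qubit state
mixed = np.array([[0.5, 0.0], [0.0, 0.5]])   # maximally mixed qubit state

print(von_neumann_entropy(pure))    # 0.0
print(von_neumann_entropy(mixed))   # ln 2, about 0.693
```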
Von Neumann established a rigorous mathematical framework for quantum mechanics with his work "Mathematische Grundlagen der Quantenmechanik" (Mathematical Foundations of Quantum Mechanics). He provided in this work a theory of measurement, where the usual notion of wave function collapse is described as an irreversible process (the so-called von Neumann or projective measurement). Using this concept, in conjunction with the density matrix he extended the classical concept of entropy into the quantum domain.
Information theory.
<templatestyles src="Template:Quote_box/styles.css" />
I thought of calling it "information", but the word was overly used, so I decided to call it "uncertainty". [...] Von Neumann told me, "You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage."
Conversation between Claude Shannon and John von Neumann regarding what name to give to the attenuation in phone-line signals
When viewed in terms of information theory, the entropy state function is the amount of information in the system that is needed to fully specify the microstate of the system. Entropy is the measure of the amount of missing information before reception. Often called "Shannon entropy", it was originally devised by Claude Shannon in 1948 to study the size of information of a transmitted message. The definition of information entropy is expressed in terms of a discrete set of probabilities formula_37 so that:formula_107where the base of the logarithm determines the units (for example, the binary logarithm corresponds to bits).
In the case of transmitted messages, these probabilities were the probabilities that a particular message was actually transmitted, and the entropy of the message system was a measure of the average size of information of a message. For the case of equal probabilities (i.e. each message is equally probable), the Shannon entropy (in bits) is just the number of binary questions needed to determine the content of the message.
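The following sketch evaluates the Shannon entropy in bits for two small, assumed message distributions, illustrating the equal-probability case described above.

```python
import math

def shannon_entropy_bits(probabilities):
    """H(X) = -sum p log2 p, with 0 log 0 taken as 0."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0.0)

print(shannon_entropy_bits([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits: two yes/no questions
print(shannon_entropy_bits([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits on average
```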
Most researchers consider information entropy and thermodynamic entropy directly linked to the same concept, while others argue that they are distinct. Both expressions are mathematically similar. If formula_8 is the number of microstates that can yield a given macrostate, and each microstate has the same "a priori" probability, then that probability is formula_108. The Shannon entropy (in nats) is:formula_109and if entropy is measured in units of formula_110 per nat, then the entropy is given by:formula_111which is the Boltzmann entropy formula, where formula_110 is the Boltzmann constant, which may be interpreted as the thermodynamic entropy per nat. Some authors argue for dropping the word entropy for the formula_112 function of information theory and using Shannon's other term, "uncertainty", instead.
Measurement.
The entropy of a substance can be measured, although in an indirect way. The measurement, known as entropymetry, is done on a closed system with constant number of particles formula_113 and constant volume formula_48, and it uses the definition of temperature in terms of entropy, while limiting energy exchange to heat formula_114:formula_115The resulting relation describes how entropy changes formula_0 when a small amount of energy formula_116 is introduced into the system at a certain temperature formula_2.
The process of measurement goes as follows. First, a sample of the substance is cooled as close to absolute zero as possible. At such temperatures, the entropy approaches zero – due to the definition of temperature. Then, small amounts of heat are introduced into the sample and the change in temperature is recorded, until the temperature reaches a desired value (usually 25 °C). The obtained data allow the user to integrate the equation above, yielding the absolute value of entropy of the substance at the final temperature. This value of entropy is called calorimetric entropy.
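A hedged sketch of this calorimetric procedure is given below: the measured heat capacity is divided by temperature and integrated numerically. The heat-capacity data are hypothetical, chosen only to show the integration step.

```python
def calorimetric_entropy(temperatures, heat_capacities):
    """Approximate the integral of C(T)/T dT with the trapezoid rule."""
    s = 0.0
    for i in range(1, len(temperatures)):
        t0, t1 = temperatures[i - 1], temperatures[i]
        y0, y1 = heat_capacities[i - 1] / t0, heat_capacities[i] / t1
        s += 0.5 * (y0 + y1) * (t1 - t0)
    return s

temps = [10.0, 50.0, 100.0, 200.0, 298.15]   # K, hypothetical measurement grid
c_p = [0.5, 12.0, 22.0, 27.0, 29.0]          # J/(mol K), hypothetical data
print(calorimetric_entropy(temps, c_p))      # estimated absolute molar entropy, J/(mol K)
```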
Interdisciplinary applications.
Although the concept of entropy was originally a thermodynamic concept, it has been adapted in other fields of study, including information theory, psychodynamics, thermoeconomics/ecological economics, and evolution.
Philosophy and theoretical physics.
Entropy is the only quantity in the physical sciences that seems to imply a particular direction of progress, sometimes called an arrow of time. The second law of thermodynamics states that, as time progresses, the entropy of an isolated system never decreases in large systems over significant periods of time. Hence, from this perspective, entropy measurement is thought of as a clock in these conditions.
Biology.
Chiavazzo "et al." proposed that where cave spiders choose to lay their eggs can be explained through entropy minimization.
Entropy has been proven useful in the analysis of base pair sequences in DNA. Many entropy-based measures have been shown to distinguish between different structural regions of the genome, differentiate between coding and non-coding regions of DNA, and can also be applied for the recreation of evolutionary trees by determining the evolutionary distance between different species.
Cosmology.
Assuming that a finite universe is an isolated system, the second law of thermodynamics states that its total entropy is continually increasing. It has been speculated, since the 19th century, that the universe is fated to a heat death in which all the energy ends up as a homogeneous distribution of thermal energy so that no more work can be extracted from any source.
If the universe can be considered to have generally increasing entropy, then – as Roger Penrose has pointed out – gravity plays an important role in the increase because gravity causes dispersed matter to accumulate into stars, which collapse eventually into black holes. The entropy of a black hole is proportional to the surface area of the black hole's event horizon. Jacob Bekenstein and Stephen Hawking have shown that black holes have the maximum possible entropy of any object of equal size. This makes them likely end points of all entropy-increasing processes, if they are totally effective matter and energy traps. However, the escape of energy from black holes might be possible due to quantum activity (see Hawking radiation).
The role of entropy in cosmology remains a controversial subject since the time of Ludwig Boltzmann. Recent work has cast some doubt on the heat death hypothesis and the applicability of any simple thermodynamic model to the universe in general. Although entropy does increase in the model of an expanding universe, the maximum possible entropy rises much more rapidly, moving the universe further from the heat death with time, not closer. This results in an "entropy gap" pushing the system further away from the posited heat death equilibrium. Other complicating factors, such as the energy density of the vacuum and macroscopic quantum effects, are difficult to reconcile with thermodynamical models, making any predictions of large-scale thermodynamics extremely difficult.
Current theories suggest the entropy gap to have been originally opened up by the early rapid exponential expansion of the universe.
Economics.
Romanian American economist Nicholas Georgescu-Roegen, a progenitor in economics and a paradigm founder of ecological economics, made extensive use of the entropy concept in his magnum opus on "The Entropy Law and the Economic Process". Due to Georgescu-Roegen's work, the laws of thermodynamics form an integral part of the ecological economics school. Although his work was blemished somewhat by mistakes, a full chapter on the economics of Georgescu-Roegen has approvingly been included in one elementary physics textbook on the historical development of thermodynamics.
In economics, Georgescu-Roegen's work has generated the term 'entropy pessimism'. Since the 1990s, leading ecological economist and steady-state theorist Herman Daly – a student of Georgescu-Roegen – has been the economics profession's most influential proponent of the entropy pessimism position.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{d} S"
},
{
"math_id": 1,
"text": "\\delta Q_{\\mathsf{rev}}"
},
{
"math_id": 2,
"text": "T"
},
{
"math_id": 3,
"text": "\\mathrm{d} S = \\frac{\\delta Q_\\mathsf{rev}}{T}"
},
{
"math_id": 4,
"text": "Q_\\mathsf{H}"
},
{
"math_id": 5,
"text": "T_\\mathsf{H}"
},
{
"math_id": 6,
"text": "Q_\\mathsf{C}"
},
{
"math_id": 7,
"text": "T_\\mathsf{C}"
},
{
"math_id": 8,
"text": "W"
},
{
"math_id": 9,
"text": "W = \\frac{ T_\\mathsf{H} - T_\\mathsf{C} }{ T_\\mathsf{H} } \\cdot Q_\\mathsf{H} = \\left( 1 - \\frac{ T_\\mathsf{C} }{ T_\\mathsf{H} } \\right) Q_\\mathsf{H}"
},
{
"math_id": 10,
"text": "W > 0"
},
{
"math_id": 11,
"text": " Q_\\Sigma = \\left\\vert Q_\\mathsf{H} \\right\\vert - \\left\\vert Q_\\mathsf{C} \\right\\vert "
},
{
"math_id": 12,
"text": " Q"
},
{
"math_id": 13,
"text": " Q > 0"
},
{
"math_id": 14,
"text": " Q < 0"
},
{
"math_id": 15,
"text": "W - Q_\\Sigma = W - \\left\\vert Q_\\mathsf{H} \\right\\vert + \\left\\vert Q_\\mathsf{C} \\right\\vert = W - Q_\\mathsf{H} - Q_\\mathsf{C} = 0"
},
{
"math_id": 16,
"text": "U"
},
{
"math_id": 17,
"text": "\\mathrm{d} U = \\delta Q - \\mathrm{d} W"
},
{
"math_id": 18,
"text": "\\frac{\\left\\vert Q_\\mathsf{H} \\right\\vert}{T_\\mathsf{H}} - \\frac{\\left\\vert Q_\\mathsf{C} \\right\\vert}{T_\\mathsf{C}} = \\frac{Q_\\mathsf{H}}{T_\\mathsf{H}} + \\frac{Q_\\mathsf{C}}{T_\\mathsf{C}} = 0"
},
{
"math_id": 19,
"text": "S"
},
{
"math_id": 20,
"text": "\\mathrm{d} S = \\delta Q / T"
},
{
"math_id": 21,
"text": "- \\frac{ Q_\\mathsf{H} }{ T_\\mathsf{H} } - \\frac{ Q_\\mathsf{C} }{ T_\\mathsf{C} } = \\Delta S_\\mathsf{r, H} + \\Delta S_\\mathsf{r, C} = 0"
},
{
"math_id": 22,
"text": "\\Delta S_{\\mathsf{r}, i} = - Q_i / T_i"
},
{
"math_id": 23,
"text": "i"
},
{
"math_id": 24,
"text": "\\mathsf{H}"
},
{
"math_id": 25,
"text": "\\mathsf{C}"
},
{
"math_id": 26,
"text": " W"
},
{
"math_id": 27,
"text": " W < \\left( 1 - \\frac{T_\\mathsf{C}}{T_\\mathsf{H}} \\right) Q_\\mathsf{H} "
},
{
"math_id": 28,
"text": "\\frac{Q_\\mathsf{H}}{T_\\mathsf{H}} + \\frac{Q_\\mathsf{C}}{T_\\mathsf{C}} < 0"
},
{
"math_id": 29,
"text": "\\Delta S_{\\mathsf{r}, i}"
},
{
"math_id": 30,
"text": "\\Delta S_\\mathsf{r, H} + \\Delta S_\\mathsf{r, C} > 0"
},
{
"math_id": 31,
"text": "\\oint{\\frac{\\delta Q_\\mathsf{rev}}{T}} = 0"
},
{
"math_id": 32,
"text": "\\int_L{\\delta Q_\\mathsf{rev} / T}"
},
{
"math_id": 33,
"text": "S = 0"
},
{
"math_id": 34,
"text": "\\Delta E"
},
{
"math_id": 35,
"text": "\\Delta S"
},
{
"math_id": 36,
"text": "T \\cdot \\Delta S"
},
{
"math_id": 37,
"text": "p_i"
},
{
"math_id": 38,
"text": "S = - k_\\mathsf{B} \\sum_i{p_i \\ln{p_i}}"
},
{
"math_id": 39,
"text": "k_\\mathsf{B}"
},
{
"math_id": 40,
"text": "S = - k_\\mathsf{B} \\left\\langle \\ln{p} \\right\\rangle"
},
{
"math_id": 41,
"text": "S = - k_\\mathsf{B}\\ \\mathrm{tr}{\\left( \\hat{\\rho} \\times \\ln{\\hat{\\rho}} \\right)}"
},
{
"math_id": 42,
"text": "\\hat{\\rho}"
},
{
"math_id": 43,
"text": "\\mathrm{tr}"
},
{
"math_id": 44,
"text": "\\ln"
},
{
"math_id": 45,
"text": "p_i = 1 / \\Omega"
},
{
"math_id": 46,
"text": "\\Omega"
},
{
"math_id": 47,
"text": "S = k_\\mathsf{B} \\ln{\\Omega}"
},
{
"math_id": 48,
"text": "V"
},
{
"math_id": 49,
"text": "X"
},
{
"math_id": 50,
"text": "U = \\left\\langle E_i \\right\\rangle "
},
{
"math_id": 51,
"text": "\\delta q"
},
{
"math_id": 52,
"text": "\\delta q / T"
},
{
"math_id": 53,
"text": "T_R S"
},
{
"math_id": 54,
"text": "T_R"
},
{
"math_id": 55,
"text": "p"
},
{
"math_id": 56,
"text": "\\mathrm{d} U = T\\ \\mathrm{d} S - p\\ \\mathrm{d} V"
},
{
"math_id": 57,
"text": "q_\\mathsf{rev} / T"
},
{
"math_id": 58,
"text": "\\Delta S_\\mathsf{universe} = \\Delta S_\\mathsf{surroundings} + \\Delta S_\\mathsf{system}"
},
{
"math_id": 59,
"text": "\\Delta G"
},
{
"math_id": 60,
"text": "\\Delta G = \\Delta H - T\\ \\Delta S"
},
{
"math_id": 61,
"text": "\\Delta H"
},
{
"math_id": 62,
"text": "\\dot{Q}"
},
{
"math_id": 63,
"text": " \\dot{W}_\\mathsf{S} "
},
{
"math_id": 64,
"text": "P \\dot{V}"
},
{
"math_id": 65,
"text": "\\dot{Q}/T"
},
{
"math_id": 66,
"text": "\\theta"
},
{
"math_id": 67,
"text": "\\mathrm{d} \\theta / \\mathrm{d} t"
},
{
"math_id": 68,
"text": "t"
},
{
"math_id": 69,
"text": "\\frac{\\mathrm{d} S}{\\mathrm{d} t} = \\sum_{k=1}^K{\\dot{M}_k \\hat{S}_k + \\frac{\\dot{Q}}{T} + \\dot{S}_\\mathsf{gen}}"
},
{
"math_id": 70,
"text": "\\sum_{k=1}^K{\\dot{M}_k \\hat{S}_k}"
},
{
"math_id": 71,
"text": "\\dot{M}_k "
},
{
"math_id": 72,
"text": "\\hat{S}_k"
},
{
"math_id": 73,
"text": "\\dot{Q} / T"
},
{
"math_id": 74,
"text": "\\dot{S}_\\mathsf{gen}"
},
{
"math_id": 75,
"text": "\\sum_j{\\dot{Q}_j/T_j}"
},
{
"math_id": 76,
"text": "\\dot{Q}_j"
},
{
"math_id": 77,
"text": "j"
},
{
"math_id": 78,
"text": "T_j"
},
{
"math_id": 79,
"text": "\\dot{S}_\\mathsf{gen} \\ge 0"
},
{
"math_id": 80,
"text": "V_0"
},
{
"math_id": 81,
"text": "P_0"
},
{
"math_id": 82,
"text": "P"
},
{
"math_id": 83,
"text": "\\Delta S = n R \\ln{\\frac{V}{V_0}} = - n R \\ln{\\frac{P}{P_0}}"
},
{
"math_id": 84,
"text": "n"
},
{
"math_id": 85,
"text": "R"
},
{
"math_id": 86,
"text": "T_0"
},
{
"math_id": 87,
"text": "\\Delta S = n C_\\mathrm{P} \\ln{\\frac{T}{T_0}}"
},
{
"math_id": 88,
"text": "C_\\mathrm{P}"
},
{
"math_id": 89,
"text": "\\Delta S = n C_\\mathrm{V} \\ln{\\frac{T}{T_0}}"
},
{
"math_id": 90,
"text": "C_\\mathrm{V} "
},
{
"math_id": 91,
"text": "\\Delta S = n C_\\mathrm{V} \\ln{\\frac{T}{T_0}} + n R \\ln{\\frac{V}{V_0}}"
},
{
"math_id": 92,
"text": "\\Delta S = n C_\\mathrm{P} \\ln{\\frac{T}{T_0}} - n R \\ln{\\frac{P}{P_0}}"
},
{
"math_id": 93,
"text": "T_\\mathsf{m} "
},
{
"math_id": 94,
"text": "\\Delta S_\\mathsf{fus} = \\frac{\\Delta H_\\mathsf{fus}}{T_\\mathsf{m}}."
},
{
"math_id": 95,
"text": "T_\\mathsf{b}"
},
{
"math_id": 96,
"text": "\\Delta S_\\mathsf{vap} = \\frac{\\Delta H_\\mathsf{vap}}{T_\\mathsf{b}}"
},
{
"math_id": 97,
"text": "\\mathsf{Disorder} = \\frac{C_\\mathsf{D}}{C_\\mathsf{I}}"
},
{
"math_id": 98,
"text": "\\mathsf{Order} = 1 - \\frac{C_\\mathsf{O}}{C_\\mathsf{I}}"
},
{
"math_id": 99,
"text": "C_\\mathsf{D}"
},
{
"math_id": 100,
"text": "C_\\mathsf{I}"
},
{
"math_id": 101,
"text": "C_\\mathsf{O}"
},
{
"math_id": 102,
"text": "X_0"
},
{
"math_id": 103,
"text": "X_1"
},
{
"math_id": 104,
"text": "\\lambda"
},
{
"math_id": 105,
"text": "(1 - \\lambda)"
},
{
"math_id": 106,
"text": "\\mathrm{tr}"
},
{
"math_id": 107,
"text": "H(X) = - \\sum_{i=1}^n{p(x_i) \\log{p(x_i)}}"
},
{
"math_id": 108,
"text": "p = 1/W"
},
{
"math_id": 109,
"text": "H = - \\sum_{i=1}^W{p_i \\ln{p_i}} = \\ln{W}"
},
{
"math_id": 110,
"text": "k"
},
{
"math_id": 111,
"text": "H = k \\ln{W}"
},
{
"math_id": 112,
"text": "H"
},
{
"math_id": 113,
"text": "N"
},
{
"math_id": 114,
"text": "\\mathrm{d} U \\rightarrow \\mathrm{d} Q"
},
{
"math_id": 115,
"text": "T := {\\left( \\frac{\\partial U}{\\partial S} \\right)}_{V, N}\\ \\Rightarrow\\ \\cdots\\ \\Rightarrow\\ \\mathrm{d} S = \\frac{\\mathrm{d} Q}{T}"
},
{
"math_id": 116,
"text": "\\mathrm{d} Q"
}
] | https://en.wikipedia.org/wiki?curid=9891 |
9891368 | Subgradient method | Subgradient methods are convex optimization methods which use subderivatives. Originally developed by Naum Z. Shor and others in the 1960s and 1970s, subgradient methods are convergent when applied even to a non-differentiable objective function. When the objective function is differentiable, sub-gradient methods for unconstrained problems use the same search direction as the method of steepest descent.
Subgradient methods are slower than Newton's method when applied to minimize twice continuously differentiable convex functions. However, Newton's method fails to converge on problems that have non-differentiable kinks.
In recent years, some interior-point methods have been suggested for convex minimization problems, but subgradient projection methods and related bundle methods of descent remain competitive. For convex minimization problems with a very large number of dimensions, subgradient-projection methods are suitable because they require little storage.
Subgradient projection methods are often applied to large-scale problems with decomposition techniques. Such decomposition methods often allow a simple distributed method for a problem.
Classical subgradient rules.
Let formula_0 be a convex function with domain formula_1
A classical subgradient method iterates
formula_2
where formula_3 denotes "any" subgradient of formula_4 at formula_5 and formula_6 is the formula_7 iterate of formula_8
If formula_9 is differentiable, then its only subgradient is the gradient vector formula_10 itself.
It may happen that formula_11 is not a descent direction for formula_9 at formula_12 We therefore maintain a list formula_13 that keeps track of the lowest objective function value found so far, i.e.
formula_14
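The iteration above translates directly into code. The following is a minimal sketch (Python with NumPy; the problem, function names and step-size choice are illustrative, not taken from the article) that minimizes the non-differentiable function f(x) = ||Ax - b||_1 using the subgradient A^T sign(Ax - b) and a diminishing step size, while tracking the best value found so far as in the formula above.

```python
import numpy as np

def subgradient_method(A, b, x0, iterations=2000):
    """Minimize f(x) = ||A x - b||_1 with the classical subgradient iteration.

    Uses the diminishing step size alpha_k = 1/(k+1) and keeps track of the
    best objective value seen so far, since the iteration is not a descent method.
    """
    f = lambda v: np.abs(A @ v - b).sum()
    x = x0.copy()
    f_best, x_best = f(x), x.copy()
    for k in range(iterations):
        g = A.T @ np.sign(A @ x - b)      # a subgradient of f at x
        x = x - (1.0 / (k + 1)) * g       # x^(k+1) = x^(k) - alpha_k g^(k)
        if f(x) < f_best:                 # maintain f_best^(k)
            f_best, x_best = f(x), x.copy()
    return x_best, f_best

# Small example: an overdetermined L1 regression problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
b = A @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(20)
x_best, f_best = subgradient_method(A, b, x0=np.zeros(3))
print(x_best, f_best)   # approximate minimizer and best objective value found
```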
Step size rules.
Many different types of step-size rules are used by subgradient methods. This article notes five classical step-size rules for which convergence proofs are known:
For all five rules, the step-sizes are determined "off-line", before the method is iterated; the step-sizes do not depend on preceding iterations. This "off-line" property of subgradient methods differs from the "on-line" step-size rules used for descent methods for differentiable functions: Many methods for minimizing differentiable functions satisfy Wolfe's sufficient conditions for convergence, where step-sizes typically depend on the current point and the current search-direction. An extensive discussion of stepsize rules for subgradient methods, including incremental versions, is given in the books by Bertsekas and by Bertsekas, Nedic, and Ozdaglar.
Convergence results.
For constant step-length and scaled subgradients having Euclidean norm equal to one, the subgradient method converges to an arbitrarily close approximation to the minimum value, that is
formula_22 by a result of Shor.
These classical subgradient methods have poor performance and are no longer recommended for general use. However, they are still used widely in specialized applications because they are simple and they can be easily adapted to take advantage of the special structure of the problem at hand.
Subgradient-projection and bundle methods.
During the 1970s, Claude Lemaréchal and Phil Wolfe proposed "bundle methods" of descent for problems of convex minimization. The meaning of the term "bundle methods" has changed significantly since that time. Modern versions and full convergence analysis were provided by Kiwiel.
Contemporary bundle-methods often use "level control" rules for choosing step-sizes, developing techniques from the "subgradient-projection" method of Boris T. Polyak (1969). However, there are problems on which bundle methods offer little advantage over subgradient-projection methods.
Constrained optimization.
Projected subgradient.
One extension of the subgradient method is the projected subgradient method, which solves the constrained optimization problem
minimize formula_23 subject to formula_24
where formula_25 is a convex set.
The projected subgradient method uses the iteration
formula_26
where formula_27 is projection on formula_25 and formula_3 is any subgradient of formula_9 at formula_12
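As a hedged illustration (not part of the original article), the projected iteration is just the plain subgradient step followed by the projection onto the convex set. The Python sketch below minimizes ||x - c||_1 over a box constraint, for which the projection is coordinate-wise clipping.

```python
import numpy as np

def projected_subgradient(c, lo=0.0, hi=1.0, iterations=200):
    """Minimize f(x) = ||x - c||_1 subject to lo <= x <= hi (a box constraint).

    The projection P onto the box is coordinate-wise clipping.
    """
    x = np.clip(np.zeros_like(c), lo, hi)
    for k in range(iterations):
        g = np.sign(x - c)                 # a subgradient of f at x
        x = x - (1.0 / (k + 1)) * g        # plain subgradient step
        x = np.clip(x, lo, hi)             # x^(k+1) = P(x^(k) - alpha_k g^(k))
    return x

c = np.array([2.0, -0.3, 0.4])
print(projected_subgradient(c))   # expected to approach [1.0, 0.0, 0.4]
```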
General constraints.
The subgradient method can be extended to solve the inequality constrained problem
minimize formula_28 subject to formula_29
where formula_30 are convex. The algorithm takes the same form as the unconstrained case
formula_2
where formula_31 is a step size, and formula_3 is a subgradient of the objective or one of the constraint functions at formula_32 Take
formula_33
where formula_34 denotes the subdifferential of formula_35 If the current point is feasible, the algorithm uses an objective subgradient; if the current point is infeasible, the algorithm chooses a subgradient of any violated constraint.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f : \\Reals^n \\to \\Reals"
},
{
"math_id": 1,
"text": "\\Reals^n."
},
{
"math_id": 2,
"text": "x^{(k+1)} = x^{(k)} - \\alpha_k g^{(k)} \\ "
},
{
"math_id": 3,
"text": "g^{(k)}"
},
{
"math_id": 4,
"text": " f \\ "
},
{
"math_id": 5,
"text": "x^{(k)}, \\ "
},
{
"math_id": 6,
"text": "x^{(k)}"
},
{
"math_id": 7,
"text": "k^{th}"
},
{
"math_id": 8,
"text": "x."
},
{
"math_id": 9,
"text": "f \\ "
},
{
"math_id": 10,
"text": "\\nabla f"
},
{
"math_id": 11,
"text": "-g^{(k)}"
},
{
"math_id": 12,
"text": "x^{(k)}."
},
{
"math_id": 13,
"text": "f_{\\rm{best}} \\ "
},
{
"math_id": 14,
"text": "f_{\\rm{best}}^{(k)} = \\min\\{f_{\\rm{best}}^{(k-1)} , f(x^{(k)}) \\}."
},
{
"math_id": 15,
"text": "\\alpha_k = \\alpha."
},
{
"math_id": 16,
"text": "\\alpha_k = \\gamma/\\lVert g^{(k)} \\rVert_2,"
},
{
"math_id": 17,
"text": "\\lVert x^{(k+1)} - x^{(k)} \\rVert_2 = \\gamma."
},
{
"math_id": 18,
"text": "\\alpha_k\\geq0,\\qquad\\sum_{k=1}^\\infty \\alpha_k^2 < \\infty,\\qquad \\sum_{k=1}^\\infty \\alpha_k = \\infty."
},
{
"math_id": 19,
"text": "\\alpha_k \\geq 0,\\qquad \\lim_{k\\to\\infty} \\alpha_k = 0,\\qquad \\sum_{k=1}^\\infty \\alpha_k = \\infty."
},
{
"math_id": 20,
"text": "\\alpha_k = \\gamma_k/\\lVert g^{(k)} \\rVert_2,"
},
{
"math_id": 21,
"text": "\\gamma_k \\geq 0,\\qquad \\lim_{k\\to\\infty} \\gamma_k = 0,\\qquad \\sum_{k=1}^\\infty \\gamma_k = \\infty."
},
{
"math_id": 22,
"text": "\\lim_{k\\to\\infty} f_{\\rm{best}}^{(k)} - f^* <\\epsilon"
},
{
"math_id": 23,
"text": "f(x) \\ "
},
{
"math_id": 24,
"text": "x \\in \\mathcal{C}"
},
{
"math_id": 25,
"text": "\\mathcal{C}"
},
{
"math_id": 26,
"text": "x^{(k+1)} = P \\left(x^{(k)} - \\alpha_k g^{(k)}\\right)"
},
{
"math_id": 27,
"text": "P"
},
{
"math_id": 28,
"text": "f_0(x) \\ "
},
{
"math_id": 29,
"text": "f_i (x) \\leq 0,\\quad i = 1,\\ldots,m"
},
{
"math_id": 30,
"text": "f_i"
},
{
"math_id": 31,
"text": "\\alpha_k>0"
},
{
"math_id": 32,
"text": "x. \\ "
},
{
"math_id": 33,
"text": "g^{(k)} = \n\\begin{cases} \n \\partial f_0 (x) & \\text{ if } f_i(x) \\leq 0 \\; \\forall i = 1 \\dots m \\\\\n \\partial f_j (x) & \\text{ for some } j \\text{ such that } f_j(x) > 0 \n\\end{cases}"
},
{
"math_id": 34,
"text": "\\partial f"
},
{
"math_id": 35,
"text": "f. \\ "
}
] | https://en.wikipedia.org/wiki?curid=9891368 |
989287 | Residue number system | Multi-modular arithmetic
A residue numeral system (RNS) is a numeral system representing integers by their values modulo several pairwise coprime integers called the moduli. This representation is allowed by the Chinese remainder theorem, which asserts that, if M is the product of the moduli, there is, in an interval of length M, exactly one integer having any given set of modular values.
Using a residue numeral system for arithmetic operations is also called multi-modular arithmetic.
Multi-modular arithmetic is widely used for computation with large integers, typically in linear algebra, because it provides faster computation than with the usual numeral systems, even when the time for converting between numeral systems is taken into account. Other applications of multi-modular arithmetic include polynomial greatest common divisor, Gröbner basis computation and cryptography.
Definition.
A residue numeral system is defined by a set of k integers
formula_0
called the "moduli", which are generally supposed to be pairwise coprime (that is, any two of them have a greatest common divisor equal to one). Residue number systems have been defined for non-coprime moduli, but are not commonly used because of worse properties. Therefore, they will not be considered in the remainder of this article.
An integer x is represented in the residue numeral system by the set of its remainders
formula_1
under Euclidean division by the moduli. That is
formula_2
and
formula_3
for every i
Let M be the product of all the formula_4. Two integers whose difference is a multiple of M have the same representation in the residue numeral system defined by the "mi"s. More precisely, the Chinese remainder theorem asserts that each of the M different sets of possible residues represents exactly one residue class modulo M. That is, each set of residues represents exactly one integer formula_5 in the interval formula_6. For signed numbers, the dynamic range is formula_7
(when formula_8 is even, generally an extra negative value is represented).
Arithmetic operations.
For adding, subtracting and multiplying numbers represented in a residue number system, it suffices to perform the same modular operation on each pair of residues. More precisely, if
formula_9
is the list of moduli, the sum of the integers x and y, respectively represented by the residues formula_10 and formula_11 is the integer z represented by formula_12 such that
formula_13
for "i" = 1, ..., "k" (as usual, mod denotes the modulo operation consisting of taking the remainder of the Euclidean division by the right operand). Subtraction and multiplication are defined similarly.
For a succession of operations, it is not necessary to apply the modulo operation at each step. It may be applied at the end of the computation, or, during the computation, for avoiding overflow of hardware operations.
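A small Python sketch (requires Python 3.8+ for pow(a, -1, m) and math.prod; the moduli and function names are chosen here for illustration) of residue encoding, component-wise arithmetic, and reconstruction via the Chinese remainder theorem:

```python
from math import prod

MODULI = (3, 5, 7)          # pairwise coprime moduli; M = 105
M = prod(MODULI)

def to_residues(x):
    """Represent x by its remainders modulo each modulus."""
    return tuple(x % m for m in MODULI)

def add(xs, ys):
    return tuple((x + y) % m for x, y, m in zip(xs, ys, MODULI))

def mul(xs, ys):
    return tuple((x * y) % m for x, y, m in zip(xs, ys, MODULI))

def from_residues(xs):
    """Reconstruct the unique representative in [0, M) via the CRT."""
    total = 0
    for x, m in zip(xs, MODULI):
        Mi = M // m
        total += x * Mi * pow(Mi, -1, m)   # pow(a, -1, m) is the modular inverse
    return total % M

a, b = 17, 30
assert from_residues(add(to_residues(a), to_residues(b))) == (a + b) % M
assert from_residues(mul(to_residues(a), to_residues(b))) == (a * b) % M
```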
However, operations such as magnitude comparison, sign computation, overflow detection, scaling, and division are difficult to perform in a residue number system.
Comparison.
If two integers are equal, then all their residues are equal. Conversely, if all residues are equal, then the two integers are equal, or their difference is a multiple of M. It follows that testing equality is easy.
By contrast, testing inequalities ("x" < "y") is difficult and usually requires converting integers to the standard representation. As a consequence, this representation of numbers is not suitable for algorithms using inequality tests, such as Euclidean division and the Euclidean algorithm.
Division.
Division in residue numeral systems is problematic. On the other hand, if formula_14 is coprime with formula_8 (that is formula_15) then
formula_16
can be easily calculated by
formula_17
where formula_18 is multiplicative inverse of formula_14 modulo formula_8, and formula_19 is multiplicative inverse of formula_20 modulo formula_4.
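Continuing the toy sketch above, when every residue of B is nonzero (so B is coprime with M), C = A·B⁻¹ mod M is computed component-wise with modular inverses; in the example below B happens to divide A, so C is the ordinary quotient.

```python
MODULI = (3, 5, 7)                      # same toy moduli as above; M = 105

def divide(a_res, b_res):
    """Compute A * B^{-1} component-wise; each b_i must be invertible mod m_i."""
    return tuple((a * pow(b, -1, m)) % m
                 for a, b, m in zip(a_res, b_res, MODULI))

A, B = 64, 8                            # 64 / 8 = 8 exactly, and gcd(8, 105) = 1
a_res = tuple(A % m for m in MODULI)
b_res = tuple(B % m for m in MODULI)
print(divide(a_res, b_res))             # the residues of 8, i.e. (2, 3, 1)
```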
Applications.
RNSs have applications in the field of digital computer arithmetic. By decomposing in this way a large integer into a set of smaller integers, a large calculation can be performed as a series of smaller calculations that can be carried out independently and in parallel.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\{ m_1, m_2, m_3,\\ldots, m_k\\},"
},
{
"math_id": 1,
"text": "\\{ x_1,x_2,x_3,\\ldots,x_k \\}"
},
{
"math_id": 2,
"text": " x_i = x \\operatorname{mod}m_i,"
},
{
"math_id": 3,
"text": "0\\le x_i<m_i"
},
{
"math_id": 4,
"text": "m_i"
},
{
"math_id": 5,
"text": "X"
},
{
"math_id": 6,
"text": "0,\\dots,M-1"
},
{
"math_id": 7,
"text": "{-\\lfloor M/2 \\rfloor} \\le X \\le \\lfloor (M-1)/2 \\rfloor"
},
{
"math_id": 8,
"text": "M"
},
{
"math_id": 9,
"text": "[m_1, \\ldots, m_k]"
},
{
"math_id": 10,
"text": "[x_1,\\ldots, x_k]"
},
{
"math_id": 11,
"text": "[y_1,\\ldots, y_k],"
},
{
"math_id": 12,
"text": "[z_1,\\ldots, z_k],"
},
{
"math_id": 13,
"text": "z_i= (x_i+y_i)\\operatorname{mod} m_i,"
},
{
"math_id": 14,
"text": "B"
},
{
"math_id": 15,
"text": "b_i\\not =0"
},
{
"math_id": 16,
"text": "C=A\\cdot B^{-1} \\mod M"
},
{
"math_id": 17,
"text": "c_i=a_i \\cdot b_i^{-1} \\mod m_i,"
},
{
"math_id": 18,
"text": "B^{-1}"
},
{
"math_id": 19,
"text": "b_i^{-1}"
},
{
"math_id": 20,
"text": "b_i"
}
] | https://en.wikipedia.org/wiki?curid=989287 |
9896374 | Hubbert linearization | The Hubbert linearization is a way to plot production data to estimate two important parameters of a Hubbert curve, the approximated production rate of a nonrenewable resource following a logistic distribution:
The linearization technique was introduced by Marion King Hubbert in his 1982 review paper. The Hubbert curve is the first derivative of a logistic function, which has been used for modeling the depletion of crude oil in particular, the depletion of finite mineral resources in general and also population growth patterns.
Principle.
The first step of the Hubbert linearization consists of plotting the yearly production data ("P" in bbl/y) as a fraction of the cumulative production ("Q" in bbl) on the vertical axis and the cumulative production on the horizontal axis. This representation exploits the linear property of the logistic differential equation:
with
We can rewrite (1) as the following:
The above relation is a line equation in the "P/Q" versus "Q" plane. Consequently, a linear regression on the data points gives us an estimate of the line slope calculated by "-k/URR" and intercept from which we can derive the Hubbert curve parameters:
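As a sketch of the fitting step (Python with NumPy; the synthetic data and names are invented for illustration and are not from the article), one fits a straight line to P/Q versus Q and reads URR and k off the intercept and slope:

```python
import numpy as np

def hubbert_linearization(P, Q):
    """Fit P/Q = k - (k/URR) * Q by ordinary least squares.

    P: yearly production, Q: cumulative production (arrays of equal length).
    Returns (URR, k) estimated from the intercept and slope of the fitted line.
    """
    slope, intercept = np.polyfit(Q, P / Q, deg=1)
    k = intercept                  # value of P/Q extrapolated to Q = 0
    URR = -intercept / slope       # Q at which the fitted line reaches P/Q = 0
    return URR, k

# Synthetic data generated from a logistic curve with URR = 200, k = 0.06,
# just to check that the estimates are recovered.
URR_true, k_true = 200.0, 0.06
Q = np.linspace(20.0, 120.0, 50)
P = k_true * Q * (1.0 - Q / URR_true)
print(hubbert_linearization(P, Q))   # approximately (200.0, 0.06)
```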
Examples.
Global oil production.
The geologist Kenneth S. Deffeyes applied this technique in 2005 to make a prediction about the peak of overall oil production at the end of the same year, which has since been found to be premature. He did not make a distinction between "conventional" oil and "non-conventional" oil produced by fracturing, also known as tight oil, which has continued to drive further growth in oil production. However, since 2005 conventional oil production has not grown.
US oil production.
The charts below gives an example of the application of the Hubbert Linearization technique in the case of the US Lower-48 oil production. The fit of a line using the data points from 1956 to 2005 (in green) gives a URR of 199 Gb and a logistic growth rate of 6%.
Norway oil production.
The Norwegian Hubbert linearization estimates an URR = 30 Gb and a logistic growth rate of k = 17%.
Alternative techniques.
Second Hubbert linearization.
The Hubbert linearization principle can be extended to the first derivatives of the production rate by computing the derivative of (2):
The left term, the rate of change of production per current production, is often called the decline rate. The decline curve is a line that starts at +k, crosses zero at URR/2 and ends at −k. Thus, we can derive the Hubbert curve parameters:
Hubbert parabola.
This representation was proposed by Roberto Canogar and applied to the oil depletion problem. It is equation (2) multiplied by Q.
The parabola starts from the origin (0,0) and passes through (URR,0). Data points until t are used by the least squares fitting method to find an estimate for URR.
Logit transform.
David Rutledge applied the logit transform for the analysis of coal production data, which often has a worse signal-to-noise ratio than the production data for hydrocarbons. The integrative nature of cumulation serves as a low pass, filtering noise effects. The production data is fitted to the logistic curve after transformation using "e"("t") as normalized exhaustion parameter going from 0 to 1.
The value of URR is varied so that the linearized logit gives a best fit with a maximal coefficient of determination formula_0.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R^2"
}
] | https://en.wikipedia.org/wiki?curid=9896374 |
98974 | Linear classifier | Statistical classification in machine learning
In the field of machine learning, the goal of statistical classification is to use an object's characteristics to identify which class (or group) it belongs to. A linear classifier achieves this by making a classification decision based on the value of a linear combination of the characteristics. An object's characteristics are also known as feature values and are typically presented to the machine in a vector called a feature vector. Such classifiers work well for practical problems such as document classification, and more generally for problems with many variables (features), reaching accuracy levels comparable to non-linear classifiers while taking less time to train and use.
Definition.
If the input feature vector to the classifier is a real vector formula_0, then the output score is
formula_1
where formula_2 is a real vector of weights and "f" is a function that converts the dot product of the two vectors into the desired output. (In other words, formula_3 is a one-form or linear functional mapping formula_0 onto R.) The weight vector formula_4 is learned from a set of labeled training samples. Often "f" is a threshold function, which maps all values of formula_5 above a certain threshold to the first class and all other values to the second class; e.g.,
formula_6
The superscript T indicates the transpose and formula_7 is a scalar threshold. A more complex "f" might give the probability that an item belongs to a certain class.
For a two-class classification problem, one can visualize the operation of a linear classifier as splitting a high-dimensional input space with a hyperplane: all points on one side of the hyperplane are classified as "yes", while the others are classified as "no".
A linear classifier is often used in situations where the speed of classification is an issue, since it is often the fastest classifier, especially when formula_0 is sparse. Also, linear classifiers often work very well when the number of dimensions in formula_0 is large, as in document classification, where each element in formula_0 is typically the number of occurrences of a word in a document (see document-term matrix). In such cases, the classifier should be well-regularized.
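The decision rule above is just a dot product followed by a comparison with the threshold; a minimal NumPy sketch with arbitrary illustrative weights:

```python
import numpy as np

def linear_classify(X, w, theta=0.0):
    """Return 1 where w . x > theta and 0 otherwise, for each row x of X."""
    return (X @ w > theta).astype(int)

# Toy example: classify 2-D points by whether x1 + 2*x2 exceeds 1.
w = np.array([1.0, 2.0])
X = np.array([[2.0, 1.0],
              [0.0, 0.0],
              [1.0, -0.2]])
print(linear_classify(X, w, theta=1.0))   # [1, 0, 0]
```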
Generative models vs. discriminative models.
There are two broad classes of methods for determining the parameters of a linear classifier formula_4: generative and discriminative models. Methods of the former model the joint probability distribution, whereas methods of the latter model the conditional density functions formula_8. Examples of such algorithms include:
The second set of methods includes discriminative models, which attempt to maximize the quality of the output on a training set. Additional terms in the training cost function can easily perform regularization of the final model. Examples of discriminative training of linear classifiers include:
Note: Despite its name, LDA does not belong to the class of discriminative models in this taxonomy. However, its name makes sense when we compare LDA to the other main linear dimensionality reduction algorithm: principal components analysis (PCA). LDA is a supervised learning algorithm that utilizes the labels of the data, while PCA is an unsupervised learning algorithm that ignores the labels. To summarize, the name is a historical artifact.
Discriminative training often yields higher accuracy than modeling the conditional density functions. However, handling missing data is often easier with conditional density models.
All of the linear classifier algorithms listed above can be converted into non-linear algorithms operating on a different input space formula_9, using the kernel trick.
Discriminative training.
Discriminative training of linear classifiers usually proceeds in a supervised way, by means of an optimization algorithm that is given a training set with desired outputs and a loss function that measures the discrepancy between the classifier's outputs and the desired outputs. Thus, the learning algorithm solves an optimization problem of the form
formula_10
where
Popular loss functions include the hinge loss (for linear SVMs) and the log loss (for linear logistic regression). If the regularization function R is convex, then the above is a convex problem. Many algorithms exist for solving such problems; popular ones for linear classification include (stochastic) gradient descent, L-BFGS, coordinate descent and Newton methods. | [
{
"math_id": 0,
"text": "\\vec x"
},
{
"math_id": 1,
"text": "y = f(\\vec{w}\\cdot\\vec{x}) = f\\left(\\sum_j w_j x_j\\right),"
},
{
"math_id": 2,
"text": "\\vec w "
},
{
"math_id": 3,
"text": "\\vec{w}"
},
{
"math_id": 4,
"text": "\\vec w"
},
{
"math_id": 5,
"text": "\\vec{w}\\cdot\\vec{x}"
},
{
"math_id": 6,
"text": "\nf(\\mathbf{x}) = \\begin{cases}1 & \\text{if }\\ \\mathbf{w}^T \\cdot \\mathbf{x} > \\theta,\\\\0 & \\text{otherwise}\\end{cases}\n"
},
{
"math_id": 7,
"text": " \\theta "
},
{
"math_id": 8,
"text": "P({\\rm class}|\\vec x)"
},
{
"math_id": 9,
"text": "\\varphi(\\vec x)"
},
{
"math_id": 10,
"text": "\\underset{\\mathbf{w}}{\\arg\\min} \\;R(\\mathbf{w}) + C \\sum_{i=1}^N L(y_i, \\mathbf{w}^\\mathsf{T} \\mathbf{x}_i)"
}
] | https://en.wikipedia.org/wiki?curid=98974 |
9897552 | Biarc | A biarc is a smooth curve formed from two circular arcs. In order to make the biarc smooth ("G"1 continuous), the two arcs should have the same tangent at the connecting point where they meet.
Biarcs are commonly used in geometric modeling and computer graphics. They can be used to approximate splines and other plane curves by placing the two outer endpoints of the biarc along the curve to be approximated, with a tangent that matches the curve, and then choosing a middle point that best fits the curve. This choice of three points and two tangents determines a unique pair of circular arcs, and the locus of middle points for which these two arcs form a biarc is itself a circular arc. In particular, to approximate a Bézier curve in this way, the middle point of the biarc should be chosen as the incenter of the triangle formed by the two endpoints of the Bézier curve and the point where their two tangents meet. More generally, one can approximate a curve by a smooth sequence of biarcs; using more biarcs in the sequence will in general improve the approximation's closeness to the original curve.
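As a hedged sketch of this construction (Python with NumPy; all names are illustrative, and degenerate configurations such as parallel tangents are not handled), the junction point can be taken as the incenter of the triangle formed by the two endpoints and the intersection of the tangent lines, and each arc is then the circle through its endpoint and the junction point that matches the prescribed tangent there:

```python
import numpy as np

def biarc(A, tA, B, tB):
    """Return (J, C1, r1, C2, r2): junction point and the two arc centres/radii.

    A, B are endpoints; tA, tB are unit tangent vectors at A and B.
    J is the incenter of the triangle A, B, T, where T is the intersection of
    the two tangent lines. Each arc is the circle through its endpoint and J
    that is tangent to the given direction at that endpoint.
    """
    # T solves A + s*tA = B + u*tB (assumes the tangent lines are not parallel).
    s, _ = np.linalg.solve(np.column_stack((tA, -tB)), B - A)
    T = A + s * tA
    # Incenter: weights are the lengths of the opposite sides.
    a, b, t = np.linalg.norm(B - T), np.linalg.norm(A - T), np.linalg.norm(A - B)
    J = (a * A + b * B + t * T) / (a + b + t)

    def arc_centre(P, tP):
        n = np.array([-tP[1], tP[0]])      # normal to the tangent at P
        d = J - P
        r = d @ d / (2.0 * (n @ d))        # chosen so that |P + r*n - J| = |r|
        return P + r * n, abs(r)

    C1, r1 = arc_centre(A, tA)
    C2, r2 = arc_centre(B, tB)
    return J, C1, r1, C2, r2

# Example: endpoints with a horizontal tangent at A and a vertical tangent at B.
A, tA = np.array([0.0, 0.0]), np.array([1.0, 0.0])
B, tB = np.array([2.0, 2.0]), np.array([0.0, 1.0])
print(biarc(A, tA, B, tB))
```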
Examples of biarc curves.
Different colours in figures 3, 4, 5 are explained below as subfamilies
formula_0,
formula_1,
formula_2.
In particular, for biarcs, shown in brown on shaded background (lens-like or lune-like), the following holds:
Family of biarcs with common end tangents.
A family of biarcs with common end points formula_7, formula_8, and common end tangents (1) is denoted as formula_9 or, briefly, as formula_10 with formula_11 being the family parameter. Biarc properties are described below.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\color{sienna}\\mathcal{B}^{\\,+}"
},
{
"math_id": 1,
"text": "\\color{blue}\\mathcal{B}^{\\,-}_1"
},
{
"math_id": 2,
"text": "\\color{green}\\mathcal{B}^{\\,-}_2"
},
{
"math_id": 3,
"text": "\\beta-\\alpha"
},
{
"math_id": 4,
"text": "\\beta-\\alpha\\pm 2\\pi"
},
{
"math_id": 5,
"text": "\\sgn(\\alpha+\\beta)=\\sgn(k_2-k_1)"
},
{
"math_id": 6,
"text": "\\alpha+\\beta"
},
{
"math_id": 7,
"text": "A = (-c,0)"
},
{
"math_id": 8,
"text": "B = (c,0)"
},
{
"math_id": 9,
"text": "\\mathcal{B}(p;\\,\\alpha,\\beta,c),"
},
{
"math_id": 10,
"text": "\\mathcal{B}(p),"
},
{
"math_id": 11,
"text": "p"
}
] | https://en.wikipedia.org/wiki?curid=9897552 |
98981 | Algebraic integer | Complex number that solves a monic polynomial with integer coefficients
In algebraic number theory, an algebraic integer is a complex number that is integral over the integers. That is, an algebraic integer is a complex root of some monic polynomial (a polynomial whose leading coefficient is 1) whose coefficients are integers. The set of all algebraic integers A is closed under addition, subtraction and multiplication and therefore is a commutative subring of the complex numbers.
The ring of integers of a number field K, denoted by "O""K", is the intersection of K and A: it can also be characterised as the maximal order of the field K. Each algebraic integer belongs to the ring of integers of some number field. A number α is an algebraic integer if and only if the ring formula_1 is finitely generated as an abelian group, which is to say, as a formula_0-module.
Definitions.
The following are equivalent definitions of an algebraic integer. Let K be a number field (i.e., a finite extension of formula_2, the field of rational numbers), in other words, formula_3 for some algebraic number formula_4 by the primitive element theorem.
Algebraic integers are a special case of integral elements of a ring extension. In particular, an algebraic integer is an integral element of a finite extension formula_10.
Finite generation of ring extension.
For any α, the ring extension (in the sense that is equivalent to field extension) of the integers by α, denoted by formula_19, is finitely generated if and only if α is an algebraic integer.
The proof is analogous to that of the corresponding fact regarding algebraic numbers, with formula_20 there replaced by formula_8 here, and the notion of field extension degree replaced by finite generation (using the fact that formula_8 is finitely generated itself); the only required change is that only non-negative powers of α are involved in the proof.
The analogy is possible because both algebraic integers and algebraic numbers are defined as roots of monic polynomials over either formula_8 or formula_20, respectively.
Ring.
The sum, difference and product of two algebraic integers is an algebraic integer. In general their quotient is not. Thus the algebraic integers form a ring.
This can be shown analogously to the corresponding proof for algebraic numbers, using the integers formula_8 instead of the rationals formula_20.
One may also construct explicitly the monic polynomial involved, which is generally of higher degree than those of the original algebraic integers, by taking resultants and factoring. For example, if "x"2 − "x" − 1 = 0, "y"3 − "y" − 1 = 0 and "z" = "xy", then eliminating x and y from "z" − "xy" = 0 and the polynomials satisfied by x and y using the resultant gives "z"6 − 3"z"4 − 4"z"3 + "z"2 + "z" − 1 = 0, which is irreducible, and is the monic equation satisfied by the product. (To see that xy is a root of the x-resultant of "z" − "xy" and "x"2 − "x" − 1, one might use the fact that the resultant is contained in the ideal generated by its two input polynomials.)
Integral closure.
Every root of a monic polynomial whose coefficients are algebraic integers is itself an algebraic integer. In other words, the algebraic integers form a ring that is integrally closed in any of its extensions.
Again, the proof is analogous to the corresponding proof for algebraic numbers being algebraically closed.
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{Z}"
},
{
"math_id": 1,
"text": "\\mathbb{Z}[\\alpha]"
},
{
"math_id": 2,
"text": "\\mathbb{Q}"
},
{
"math_id": 3,
"text": "K = \\Q(\\theta)"
},
{
"math_id": 4,
"text": "\\theta \\in \\Complex"
},
{
"math_id": 5,
"text": "f(x) \\in \\Z[x]"
},
{
"math_id": 6,
"text": "\\Z[x]"
},
{
"math_id": 7,
"text": "\\Z[\\alpha]"
},
{
"math_id": 8,
"text": "\\Z"
},
{
"math_id": 9,
"text": "M \\subset \\Complex"
},
{
"math_id": 10,
"text": "K / \\mathbb{Q}"
},
{
"math_id": 11,
"text": "\\sqrt{n}"
},
{
"math_id": 12,
"text": "K = \\mathbb{Q}(\\sqrt{d}\\,)"
},
{
"math_id": 13,
"text": "\\sqrt{d}"
},
{
"math_id": 14,
"text": "\\frac{1}{2}(1 + \\sqrt{d}\\,)"
},
{
"math_id": 15,
"text": "F = \\Q[\\alpha]"
},
{
"math_id": 16,
"text": "\\begin{cases}\n1, \\alpha, \\dfrac{\\alpha^2 \\pm k^2 \\alpha + k^2}{3k} & m \\equiv \\pm 1 \\bmod 9 \\\\\n1, \\alpha, \\dfrac{\\alpha^2}k & \\text{otherwise}\n\\end{cases}"
},
{
"math_id": 17,
"text": "\\Q(\\zeta_n)"
},
{
"math_id": 18,
"text": "\\Z[\\zeta_n]"
},
{
"math_id": 19,
"text": "\\Z(\\alpha) \\equiv \\{\\sum_{i=0}^n \\alpha^i z_i | z_i\\in \\Z, n\\in \\Z\\}"
},
{
"math_id": 20,
"text": "\\Q"
},
{
"math_id": 21,
"text": "\\alpha\\in\\Q"
},
{
"math_id": 22,
"text": "\\alpha\\in\\Z"
}
] | https://en.wikipedia.org/wiki?curid=98981 |
9898864 | Berger code | In telecommunication, a Berger code is a unidirectional error detecting code, named after its inventor, J. M. Berger. Berger codes can detect all unidirectional errors. Unidirectional errors are errors that only flip ones into zeroes or only zeroes into ones, such as in asymmetric channels. The check bits of Berger codes are computed by counting all the zeroes in the information word, and expressing that number in natural binary. If the information word consists of formula_0 bits, then the Berger code needs formula_1 "check bits", giving a Berger code of length k+n. (In other words, the formula_2 check bits are enough to check up to formula_3 information bits).
Berger codes can detect any number of one-to-zero bit-flip errors, as long as no zero-to-one errors occurred in the same code word.
Similarly, Berger codes can detect any number of zero-to-one bit-flip errors, as long as no one-to-zero bit-flip errors occur in the same code word.
Berger codes cannot correct any error.
Like all unidirectional error detecting codes,
Berger codes can also be used in delay-insensitive circuits.
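A minimal Python sketch of Berger encoding and checking (function names are illustrative):

```python
def berger_encode(info_bits):
    """Append the count of zero bits, in binary (MSB first), as check bits."""
    n = len(info_bits)
    k = n.bit_length()                  # k = ceil(log2(n + 1)) check bits
    zeros = info_bits.count(0)
    check = [(zeros >> i) & 1 for i in reversed(range(k))]
    return info_bits + check

def berger_verify(codeword, n):
    """Return True if the check bits equal the number of zeros in the info bits."""
    info, check = codeword[:n], codeword[n:]
    value = 0
    for bit in check:
        value = (value << 1) | bit
    return value == info.count(0)

word = [1, 0, 1, 1, 0, 0, 1, 0]           # 8 information bits -> 4 check bits
code = berger_encode(word)
print(code, berger_verify(code, len(word)))    # valid codeword -> True

corrupted = code.copy()
corrupted[0] = 0                          # a 1 -> 0 error in the information bits
print(berger_verify(corrupted, len(word)))     # detected -> False
```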
Unidirectional error detection.
As stated above, Berger codes detect "any" number of unidirectional errors. For a "given code word", if the only errors that have occurred are that some (or all) bits with value 1 have changed to value 0, then this transformation will be detected by the Berger code implementation. To understand why, consider that there are three such cases: the errors affect only the information bits (case 1), only the check bits (case 2), or both (case 3).
For case 1, the number of 0-valued bits in the information section will, by definition of the error, increase. Therefore, our Berger check code will be lower than the actual 0-bit-count for the data, and so the check will fail.
For case 2, the number of 0-valued bits in the information section have stayed the same, but the value of the check data has changed. Since we know some 1s turned into 0s, but no 0s have turned into 1s (that's how we defined the error model in this case), the encoded binary value of the check data will go down (e.g., from binary 1011 to 1010, or to 1001, or 0011). Since the information data has stayed the same, it has the same number of zeros it did before, and that will no longer match the mutated check value.
For case 3, where bits have changed in both the information and the check sections, notice that the number of zeros in the information section has "gone up", as described for case 1, and the binary value stored in the check portion has "gone down", as described for case 2. Therefore, there is no chance that the two will end up mutating in such a way as to become a different valid code word.
A similar analysis can be performed, and is perfectly valid, in the case where the only errors that occur are that some 0-valued bits change to 1. Therefore, if all the errors that occur on a specific codeword all occur in the same direction, these errors will be detected. For the next code word being transmitted (for instance), the errors can go in the opposite direction, and they will still be detected, as long as they all go in the same direction as each other.
Unidirectional errors are common in certain situations. For instance, in flash memory, bits can more easily be programmed to a 0 than can be reset to a 1. | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "k = \\lceil \\log_2 (n+1)\\rceil "
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "n = 2^k - 1"
}
] | https://en.wikipedia.org/wiki?curid=9898864 |
9899778 | Implicit certificate | In cryptography, implicit certificates are a variant of public key certificate. A subject's public key is reconstructed from the data in an implicit certificate, and is then said to be "implicitly" verified. Tampering with the certificate will result in the reconstructed public key being invalid, in the sense that it is infeasible to find the matching private key value, as would be required to make use of the tampered certificate.
By comparison, traditional public-key certificates include a copy of the subject's public key, and a digital signature made by the issuing certificate authority (CA). The public key must be explicitly validated, by verifying the signature using the CA's public key. For the purposes of this article, such certificates will be called "explicit" certificates.
Elliptic Curve Qu-Vanstone (ECQV) is one kind of implicit certificate scheme. It is described in the document "Standards for Efficient Cryptography 4 (SEC4)". This article will use ECQV as a concrete example to illustrate implicit certificates.
Comparison of ECQV with explicit certificates.
Conventional explicit certificates are made up of three parts: subject identification data, a public key and a digital signature which binds the public key to the user's identification data (ID). These are distinct data elements within the certificate, and contribute to the size of the certificate: for example, a standard X.509 certificate is on the order of 1KB in size (~8000 bits).
An ECQV implicit certificate consists of identification data and a single cryptographic value. This value, an elliptic curve point, combines the function of public key data and CA signature. ECQV implicit certificates can therefore be considerably smaller than explicit certificates, and so are useful in highly constrained environments such as radio-frequency identification (RFID) tags, where little memory or bandwidth is available.
ECQV certificates are useful for any ECC scheme where the private and public keys are of the form ( "d", "dG" ). This includes key agreement protocols such as ECDH and ECMQV, or signing algorithms such as ECDSA. The operation will fail if the certificate has been altered, as the reconstructed public key will be invalid. Reconstructing the public key is fast (a single point multiplication operation) compared to ECDSA signature verification.
Comparison with ID-based cryptography.
Implicit certificates are not to be confused with identity-based cryptography. In ID-based schemes, the subject's identity itself is used to derive their public key; there is no 'certificate' as such. The corresponding private key is calculated and issued to the subject by a trusted third party.
In an implicit certificate scheme, the subject has a private key which is not revealed to the CA during the certificate-issuing process. The CA is trusted to issue certificates correctly, but not to hold individual user's private keys. Wrongly issued certificates can be revoked, whereas there is no comparable mechanism for misuse of private keys in an identity-based scheme.
Description of the ECQV scheme.
Initially the scheme parameters must be agreed upon. These are:
The certificate authority CA will have private key formula_9 and public key formula_10
Certificate request protocol.
Here, Alice will be the user who requests the implicit certificate from the CA. She has identifying information formula_11.
Using the certificate.
Here, Alice wants to prove her identity to Bob, who trusts the CA.
Proof of equivalence of private and public keys.
Alice's private key is formula_36
The public key reconstruction value formula_37
Alice's public key is formula_38
Therefore, formula_28, which completes the proof.
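The algebra of this proof can be checked with a toy model in which the elliptic-curve group is replaced by the additive group of integers modulo a prime, so that a "point" d·G is simply (d·G) mod n. This preserves the linear relations used by ECQV but has no cryptographic security; all parameter values in the Python sketch below are invented for illustration.

```python
import hashlib

# Toy parameters: the "curve" is the additive group Z_n with generator G,
# so a "point" d*G is just (d * G) mod n. This has no security whatsoever.
n = 2**61 - 1                       # group order (a prime)
G = 5

def H(cert):
    return int.from_bytes(hashlib.sha256(cert).digest(), 'big') % n

# CA key pair.
c = 123456789
Q_CA = (c * G) % n

# Alice's request value.
alpha = 987654321
A = (alpha * G) % n

# CA side: pick k, form gamma, the certificate, and the contribution s.
k = 555555555
gamma = (A + k * G) % n
cert = b'ID_A=alice|' + str(gamma).encode()    # stand-in for Encode(gamma, ID_A)
e = H(cert)
s = (e * k + c) % n

# Alice's side: private key a and reconstructed public key Q_A.
a = (e * alpha + s) % n
Q_A = (e * gamma + Q_CA) % n

assert Q_A == (a * G) % n           # the reconstructed public key matches a*G
print("key pair consistent:", Q_A == (a * G) % n)
```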
Security.
A security proof for ECQV has been published by Brown et al.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G \\,"
},
{
"math_id": 1,
"text": "n \\,"
},
{
"math_id": 2,
"text": " \\textrm{Encode}(\\gamma, ID)"
},
{
"math_id": 3,
"text": " \\gamma"
},
{
"math_id": 4,
"text": "ID"
},
{
"math_id": 5,
"text": "\\textrm{Decode}_{\\gamma}( \\sdot )"
},
{
"math_id": 6,
"text": "\\gamma"
},
{
"math_id": 7,
"text": "H_n( \\sdot )"
},
{
"math_id": 8,
"text": "[0, n-1]"
},
{
"math_id": 9,
"text": "c \\,"
},
{
"math_id": 10,
"text": "Q_{CA} = cG"
},
{
"math_id": 11,
"text": "ID_A"
},
{
"math_id": 12,
"text": "\\alpha \\,"
},
{
"math_id": 13,
"text": "A = \\alpha\\,G \\,"
},
{
"math_id": 14,
"text": "A"
},
{
"math_id": 15,
"text": "k \\,"
},
{
"math_id": 16,
"text": "[1, n-1] \\,"
},
{
"math_id": 17,
"text": "kG \\,"
},
{
"math_id": 18,
"text": "\\gamma = A + kG \\,"
},
{
"math_id": 19,
"text": " Cert= \\textrm{Encode} ( \\gamma , \\textrm{ID}_A ) \\,"
},
{
"math_id": 20,
"text": "e = H_n( Cert ) "
},
{
"math_id": 21,
"text": "s = ek + c \\pmod{n} \\,"
},
{
"math_id": 22,
"text": " s \\,"
},
{
"math_id": 23,
"text": "(s, Cert) \\,"
},
{
"math_id": 24,
"text": "e' = H_n (Cert)"
},
{
"math_id": 25,
"text": "a = e'\\alpha + s \\pmod{n} \\,"
},
{
"math_id": 26,
"text": "\\gamma' = \\textrm{Decode}_{\\gamma} (Cert)"
},
{
"math_id": 27,
"text": "Q_A = e'\\gamma' + Q_{CA} \\,"
},
{
"math_id": 28,
"text": "Q_A = aG"
},
{
"math_id": 29,
"text": "Cert"
},
{
"math_id": 30,
"text": "C"
},
{
"math_id": 31,
"text": "a"
},
{
"math_id": 32,
"text": "\\gamma'' = \\textrm{Decode}_{\\gamma} (Cert)"
},
{
"math_id": 33,
"text": "e'' = H_n (Cert)"
},
{
"math_id": 34,
"text": "Q_A' = e''\\gamma'' + Q_{CA}"
},
{
"math_id": 35,
"text": "Q_A'"
},
{
"math_id": 36,
"text": "a = e'\\alpha + s = e\\alpha + ek + c \\pmod{n}"
},
{
"math_id": 37,
"text": "\\gamma = A + kG = (\\alpha + k)G"
},
{
"math_id": 38,
"text": "Q_A = e\\gamma + Q_{CA} = e(\\alpha + k)G + cG = (e\\alpha + ek + c)G"
}
] | https://en.wikipedia.org/wiki?curid=9899778 |
9901365 | Stable normal bundle | In surgery theory, a branch of mathematics, the stable normal bundle of a differentiable manifold is an invariant which encodes the stable normal (dually, tangential) data. There are analogs for generalizations of manifold, notably PL-manifolds and topological manifolds. There is also an analogue in homotopy theory for Poincaré spaces, the Spivak spherical fibration, named after Michael Spivak.
Construction via embeddings.
Given an embedding of a manifold in Euclidean space (provided by the theorem of Hassler Whitney), it has a normal bundle. The embedding is not unique, but for high dimension of the Euclidean space it is unique up to isotopy, thus the (class of the) bundle is unique, and called the "stable normal bundle".
This construction works for any Poincaré space "X": a finite CW-complex admits a stably unique (up to homotopy) embedding in Euclidean space, via general position, and this embedding yields a spherical fibration over "X". For more restricted spaces (notably PL-manifolds and topological manifolds), one gets stronger data.
Details.
Two embeddings formula_0 are "isotopic" if they are homotopic
through embeddings. Given a manifold or other suitable space "X," with two embeddings into Euclidean space formula_1 formula_2 these will not in general be isotopic, or even maps into the same space (formula_3 need not equal formula_4). However, one can embed these into a larger space formula_5 by letting the last formula_6 coordinates be 0:
formula_7.
This process of adjoining trivial copies of Euclidean space is called "stabilization."
One can thus arrange for any two embeddings into Euclidean space to map into the same Euclidean space (taking formula_8), and, further, if formula_9 is sufficiently large, these embeddings are isotopic, which is a theorem.
Thus there is a unique stable isotopy class of embedding: it is not a particular embedding (as there are many embeddings), nor an isotopy class (as the target space is not fixed: it is just "a sufficiently large Euclidean space"), but rather a stable isotopy class of maps. The normal bundle associated with this (stable class of) embeddings is then the stable normal bundle.
One can replace this stable isotopy class with an actual isotopy class by fixing the target space, either by using Hilbert space as the target space, or (for a fixed dimension of manifold formula_4) using a fixed formula_9 sufficiently large, as "N" depends only on "n", not the manifold in question.
More abstractly, rather than stabilizing the embedding, one can take any embedding, and then take a vector bundle direct sum with a sufficient number of trivial line bundles; this corresponds exactly to the normal bundle of the stabilized embedding.
Construction via classifying spaces.
An "n"-manifold "M" has a tangent bundle, which has a classifying map (up to homotopy)
formula_10
Composing with the inclusion formula_11 yields (the homotopy class of a classifying map of) the stable tangent bundle. The normal bundle of an embedding formula_12 (formula_13 large) is an inverse formula_14 for formula_15, such that the Whitney sum formula_16 is trivial. The homotopy class of the composite
formula_17 is independent of the choice of embedding, classifying the stable normal bundle formula_18.
Motivation.
There is no intrinsic notion of a normal vector to a manifold, unlike tangent or cotangent vectors – for instance, the normal space depends on which dimension one is embedding into – so the stable normal bundle instead provides a notion of a stable normal space: a normal space (and normal vectors) up to trivial summands.
Why stable normal, instead of stable tangent? Stable normal data is used instead of unstable tangential data because generalizations of manifolds have natural stable normal-type structures, coming from tubular neighborhoods and generalizations, but not unstable tangential ones, as the local structure is not smooth.
Spherical fibrations over a space "X" are classified by the homotopy classes of maps formula_19 to a
classifying space formula_20, with homotopy groups the stable homotopy groups of spheres
formula_21.
The forgetful map formula_22 extends to a fibration sequence
formula_23.
A Poincaré space "X" does not have a tangent bundle, but it does have a well-defined stable spherical fibration, which for a differentiable manifold is the spherical fibration associated to the stable normal bundle; thus a primary obstruction to "X" having the homotopy type of a differentiable manifold is that the spherical fibration lifts to a vector bundle, i.e., the Spivak spherical fibration formula_19 must lift to formula_24, which is equivalent to the map formula_25 being null homotopic
Thus the bundle obstruction to the existence of a (smooth) manifold structure is the class formula_25.
The secondary obstruction is the Wall surgery obstruction.
Applications.
The stable normal bundle is fundamental in surgery theory as a primary obstruction:
More generally, its generalizations serve as replacements for the (unstable) tangent bundle.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "i,i'\\colon X \\hookrightarrow \\R^m"
},
{
"math_id": 1,
"text": "i\\colon X \\hookrightarrow \\R^m,"
},
{
"math_id": 2,
"text": "j\\colon X \\hookrightarrow \\R^n,"
},
{
"math_id": 3,
"text": "m"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "\\mathbf{R}^N"
},
{
"math_id": 6,
"text": "N-m"
},
{
"math_id": 7,
"text": "i\\colon X \\hookrightarrow \\R^m \\cong \\R^m \\times \\left\\{(0,\\dots,0)\\right\\} \\subset \\R^m \\times \\R^{N-m} \\cong \\R^N"
},
{
"math_id": 8,
"text": "N = \\max(m,n)"
},
{
"math_id": 9,
"text": "N"
},
{
"math_id": 10,
"text": "\\tau_M\\colon M \\to B\\textrm{O}(n)."
},
{
"math_id": 11,
"text": "B\\textrm{O}(n) \\to B\\textrm{O}"
},
{
"math_id": 12,
"text": "M \\subset \\R^{n+k}"
},
{
"math_id": 13,
"text": "k"
},
{
"math_id": 14,
"text": "\\nu_M\\colon M \\to B\\textrm{O}(k)"
},
{
"math_id": 15,
"text": "\\tau_M"
},
{
"math_id": 16,
"text": "\\tau_M\\oplus \\nu_M\n\\colon M \\to B\\textrm{O}(n+k)"
},
{
"math_id": 17,
"text": "\\nu_M\\colon M \\to B\\textrm{O}(k) \\to B\\textrm{O}"
},
{
"math_id": 18,
"text": "\\nu_M"
},
{
"math_id": 19,
"text": "X \\to BG"
},
{
"math_id": 20,
"text": "BG"
},
{
"math_id": 21,
"text": "\\pi_*(BG)=\\pi_{*-1}^S"
},
{
"math_id": 22,
"text": "B\\textrm{O} \\to BG"
},
{
"math_id": 23,
"text": "B\\textrm{O} \\to BG \\to B(G/\\textrm{O})"
},
{
"math_id": 24,
"text": "X \\to B\\textrm{O}"
},
{
"math_id": 25,
"text": "X \\to B(G/\\textrm{O})"
},
{
"math_id": 26,
"text": "f\\colon M \\to N"
}
] | https://en.wikipedia.org/wiki?curid=9901365 |
9902787 | Structure theorem for finitely generated modules over a principal ideal domain | Statement in abstract algebra
In mathematics, in the field of abstract algebra, the structure theorem for finitely generated modules over a principal ideal domain is a generalization of the fundamental theorem of finitely generated abelian groups and roughly states that finitely generated modules over a principal ideal domain (PID) can be uniquely decomposed in much the same way that integers have a prime factorization. The result provides a simple framework to understand various canonical form results for square matrices over fields.
Statement.
When a vector space over a field "F" has a finite generating set, then one may extract from it a basis consisting of a finite number "n" of vectors, and the space is therefore isomorphic to "F""n". The corresponding statement with "F" generalized to a principal ideal domain "R" is no longer true, since a basis for a finitely generated module over "R" might not exist. However such a module is still isomorphic to a quotient of some module "Rn" with "n" finite (to see this it suffices to construct the morphism that sends the elements of the canonical basis of "Rn" to the generators of the module, and take the quotient by its kernel.) By changing the choice of generating set, one can in fact describe the module as the quotient of some "Rn" by a particularly simple submodule, and this is the structure theorem.
The structure theorem for finitely generated modules over a principal ideal domain usually appears in the following two forms.
Invariant factor decomposition.
For every finitely generated module "M" over a principal ideal domain "R", there is a unique decreasing sequence of proper ideals formula_0 such that "M" is isomorphic to the sum of cyclic modules:
formula_1
The generators formula_2 of the ideals are unique up to multiplication by a unit, and are called invariant factors of "M". Since the ideals should be proper, these factors must not themselves be invertible (this avoids trivial factors in the sum), and the inclusion of the ideals means one has divisibility formula_3. The free part is visible in the part of the decomposition corresponding to factors formula_4. Such factors, if any, occur at the end of the sequence.
While the direct sum is uniquely determined by "M", the isomorphism giving the decomposition itself is "not unique" in general. For instance if "R" is actually a field, then all occurring ideals must be zero, and one obtains the decomposition of a finite dimensional vector space into a direct sum of one-dimensional subspaces; the number of such factors is fixed, namely the dimension of the space, but there is a lot of freedom for choosing the subspaces themselves (if dim "M" > 1).
The nonzero formula_2 elements, together with the number of formula_2 which are zero, form a complete set of invariants for the module. Explicitly, this means that any two modules sharing the same set of invariants are necessarily isomorphic.
Some prefer to write the free part of "M" separately:
formula_5
where the visible formula_2 are nonzero, and "f" is the number of formula_2's in the original sequence which are 0.
Primary decomposition.
Every finitely generated module "M" over a principal ideal domain "R" is isomorphic to one of the form
formula_6
where formula_7 and the formula_8 are primary ideals. The formula_9 are unique (up to multiplication by units).
The elements formula_9 are called the "elementary divisors" of "M". In a PID, nonzero primary ideals are powers of primes, and so formula_10. When formula_11, the resulting indecomposable module is formula_12 itself, and this is inside the part of "M" that is a free module.
The summands formula_13 are indecomposable, so the primary decomposition is a decomposition into indecomposable modules, and thus every finitely generated module over a PID is a completely decomposable module. Since PID's are Noetherian rings, this can be seen as a manifestation of the Lasker-Noether theorem.
As before, it is possible to write the free part (where formula_11) separately and express "M" as
formula_14
where the visible formula_15 are nonzero.
Proofs.
One proof proceeds as follows: every finitely generated module over a PID is finitely presented (since a PID is Noetherian), so it is the cokernel of a map formula_16 between finitely generated free modules; putting the matrix of this map into Smith normal form gives a presentation by a diagonal matrix whose successive diagonal entries divide one another.
This yields the invariant factor decomposition, and the diagonal entries of Smith normal form are the invariant factors.
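A hedged sketch (pure Python, practical only for small matrices, with illustrative names) that computes the invariant factors from determinantal divisors, i.e. the gcds of the k x k minors, whose successive quotients are the diagonal entries of the Smith normal form up to sign:

```python
from itertools import combinations
from math import gcd
from functools import reduce

def det(M):
    """Determinant by Laplace expansion (fine for the small matrices used here)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * a * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j, a in enumerate(M[0]))

def determinantal_divisor(A, k):
    """gcd of all k x k minors of A (taken >= 0)."""
    rows, cols = range(len(A)), range(len(A[0]))
    minors = [det([[A[i][j] for j in cs] for i in rs])
              for rs in combinations(rows, k) for cs in combinations(cols, k)]
    return reduce(gcd, minors, 0)

def invariant_factors(A):
    """d_k = D_k / D_{k-1}, where D_k is the k-th determinantal divisor."""
    factors, D_prev = [], 1
    for k in range(1, min(len(A), len(A[0])) + 1):
        D_k = determinantal_divisor(A, k)
        if D_k == 0:
            break                    # remaining diagonal entries are zero (free part)
        factors.append(D_k // D_prev)
        D_prev = D_k
    return factors

# Z^2 / image of [[2, 0], [0, 3]] has invariant factors [1, 6], i.e. it is
# isomorphic to Z/6 (equivalently Z/2 + Z/3).
print(invariant_factors([[2, 0], [0, 3]]))
```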
Another outline of a proof:
formula_17
where the map is the projection onto the quotient. "M"/"tM" is a finitely generated torsion-free module, and such a module over a commutative PID is a free module of finite rank, so it is isomorphic to formula_18 for a positive integer "n".
Since every free module is a projective module, there exists a right inverse of the projection map (it suffices to lift each of the generators of "M"/"tM" into "M"). By the splitting lemma (left split), "M" splits as formula_19.
Corollaries.
This includes the classification of finite-dimensional vector spaces as a special case, where formula_21. Since fields have no non-trivial ideals, every finitely generated vector space is free.
Taking formula_22 yields the fundamental theorem of finitely generated abelian groups.
Let "T" be a linear operator on a finite-dimensional vector space "V" over "K". Taking formula_23, the algebra of polynomials with coefficients in "K" evaluated at "T", yields structure information about "T". "V" can be viewed as a finitely generated module over formula_24. The last invariant factor is the minimal polynomial, and the product of invariant factors is the characteristic polynomial. Combined with a standard matrix form for formula_25, this yields various canonical forms:
Uniqueness.
While the invariants (rank, invariant factors, and elementary divisors) are unique, the isomorphism between "M" and its canonical form is not unique, and does not even preserve the direct sum decomposition. This follows because there are non-trivial automorphisms of these modules which do not preserve the summands.
However, one has a canonical torsion submodule "T", and similar canonical submodules corresponding to each (distinct) invariant factor, which yield a canonical sequence:
formula_26
Compare composition series in Jordan–Hölder theorem.
For instance, if formula_27, and formula_28 is one basis, then
formula_29 is another basis, and the change of basis matrix formula_30 does not preserve the summand formula_31. However, it does preserve the formula_32 summand, as this is the torsion submodule (equivalently here, the 2-torsion elements).
Generalizations.
Groups.
The Jordan–Hölder theorem is a more general result for finite groups (or modules over an arbitrary ring). In this generality, one obtains a composition series, rather than a direct sum.
The Krull–Schmidt theorem and related results give conditions under which a module has something like a primary decomposition, a decomposition as a direct sum of indecomposable modules in which the summands are unique up to order.
Primary decomposition.
The primary decomposition generalizes to finitely generated modules over commutative Noetherian rings, and this result is called the Lasker–Noether theorem.
Indecomposable modules.
By contrast, unique decomposition into "indecomposable" submodules does not generalize as far, and the failure is measured by the ideal class group, which vanishes for PIDs.
For rings that are not principal ideal domains, unique decomposition need not even hold for modules over a ring generated by two elements. For the ring "R" = Z[√−5], both the module "R" and its submodule "M" generated by 2 and 1 + √−5 are indecomposable. While "R" is not isomorphic to "M", "R" ⊕ "R" is isomorphic to "M" ⊕ "M"; thus the images of the "M" summands give indecomposable submodules "L"1, "L"2 < "R" ⊕ "R" which give a different decomposition of "R" ⊕ "R". The failure of uniquely factorizing "R" ⊕ "R" into a direct sum of indecomposable modules is directly related (via the ideal class group) to the failure of the unique factorization of elements of "R" into irreducible elements of "R".
However, over a Dedekind domain the ideal class group is the only obstruction, and the structure theorem generalizes to finitely generated modules over a Dedekind domain with minor modifications. There is still a unique torsion part, with a torsionfree complement (unique up to isomorphism), but a torsionfree module over a Dedekind domain is no longer necessarily free. Torsionfree modules over a Dedekind domain are determined (up to isomorphism) by rank and Steinitz class (which takes value in the ideal class group), and the decomposition into a direct sum of copies of "R" (rank one free modules) is replaced by a direct sum into rank one projective modules: the individual summands are not uniquely determined, but the Steinitz class (of the sum) is.
Non-finitely generated modules.
Similarly for modules that are not finitely generated, one cannot expect such a nice decomposition: even the number of factors may vary. There are Z-submodules of Q4 which are simultaneously direct sums of two indecomposable modules and direct sums of three indecomposable modules, showing the analogue of the primary decomposition cannot hold for infinitely generated modules, even over the integers, Z.
Another issue that arises with non-finitely generated modules is that there are torsion-free modules which are not free. For instance, consider the ring Z of integers. Then Q is a torsion-free Z-module which is not free. Another classical example of such a module is the Baer–Specker group, the group of all sequences of integers under termwise addition. In general, the question of which infinitely generated torsion-free abelian groups are free depends on which large cardinals exist. A consequence is that any structure theorem for infinitely generated modules depends on a choice of set theory axioms and may be invalid under a different choice.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "(d_1)\\supseteq(d_2)\\supseteq\\cdots\\supseteq(d_n)"
},
{
"math_id": 1,
"text": "M\\cong\\bigoplus_i R/(d_i) = R/(d_1)\\oplus R/(d_2)\\oplus\\cdots\\oplus R/(d_n)."
},
{
"math_id": 2,
"text": "d_i"
},
{
"math_id": 3,
"text": "d_1\\,|\\,d_2\\,|\\,\\cdots\\,|\\,d_n"
},
{
"math_id": 4,
"text": "d_i = 0"
},
{
"math_id": 5,
"text": "R^f \\oplus \\bigoplus_i R/(d_i) = R^f \\oplus R/(d_1)\\oplus R/(d_2)\\oplus\\cdots\\oplus R/(d_{n-f})\\;,"
},
{
"math_id": 6,
"text": "\\bigoplus_i R/(q_i)\\;,"
},
{
"math_id": 7,
"text": "(q_i) \\neq R"
},
{
"math_id": 8,
"text": "(q_i)"
},
{
"math_id": 9,
"text": "q_i"
},
{
"math_id": 10,
"text": "(q_i)=(p_i^{r_i}) = (p_i)^{r_i}"
},
{
"math_id": 11,
"text": "q_i=0"
},
{
"math_id": 12,
"text": "R"
},
{
"math_id": 13,
"text": "R/(q_i)"
},
{
"math_id": 14,
"text": "R^f \\oplus(\\bigoplus_i R/(q_i))\\;,"
},
{
"math_id": 15,
"text": "q_i "
},
{
"math_id": 16,
"text": "R^r \\to R^g"
},
{
"math_id": 17,
"text": " 0\\rightarrow tM \\rightarrow M \\rightarrow M/tM \\rightarrow 0 "
},
{
"math_id": 18,
"text": "R^n"
},
{
"math_id": 19,
"text": "M= tM\\oplus F"
},
{
"math_id": 20,
"text": "N_p= \\{m\\in tM\\mid \\exists i, mp^i=0\\}"
},
{
"math_id": 21,
"text": "R = K"
},
{
"math_id": 22,
"text": "R=\\mathbb{Z}"
},
{
"math_id": 23,
"text": "R = K[T]"
},
{
"math_id": 24,
"text": "K[T]"
},
{
"math_id": 25,
"text": "K[T]/p(T)"
},
{
"math_id": 26,
"text": "0 < \\cdots < T < M."
},
{
"math_id": 27,
"text": "M \\approx \\mathbf{Z} \\oplus \\mathbf{Z}/2"
},
{
"math_id": 28,
"text": "(1,\\bar{0}), (0,\\bar{1})"
},
{
"math_id": 29,
"text": "(1,\\bar{1}), (0,\\bar{1})"
},
{
"math_id": 30,
"text": "\\begin{bmatrix}1&0\\\\1&1\\end{bmatrix}"
},
{
"math_id": 31,
"text": "\\mathbf{Z}"
},
{
"math_id": 32,
"text": "\\mathbf{Z}/2"
}
] | https://en.wikipedia.org/wiki?curid=9902787 |
9903220 | Complete set of invariants | In mathematics, a complete set of invariants for a classification problem is a collection of maps
formula_0
(where formula_1 is the collection of objects being classified, up to some equivalence relation formula_2, and the formula_3 are some sets), such that formula_4 if and only if formula_5 for all formula_6. In words, such that two objects are equivalent if and only if all invariants are equal.
Symbolically, a complete set of invariants is a collection of maps such that
formula_7
is injective.
As invariants are, by definition, equal on equivalent objects, equality of invariants is a "necessary" condition for equivalence; a "complete" set of invariants is a set such that equality of these is also "sufficient" for equivalence. In the context of a group action, this may be stated as: invariants are functions of coinvariants (equivalence classes, orbits), and a complete set of invariants characterizes the coinvariants (is a set of defining equations for the coinvariants).
Realizability of invariants.
A complete set of invariants does not immediately yield a classification theorem: not all combinations of invariants may be realized. Symbolically, one must also determine the image of
formula_8
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f_i : X \\to Y_i "
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "\\sim"
},
{
"math_id": 3,
"text": "Y_i"
},
{
"math_id": 4,
"text": "x \\sim x'"
},
{
"math_id": 5,
"text": "f_i(x) = f_i(x')"
},
{
"math_id": 6,
"text": "i"
},
{
"math_id": 7,
"text": "\\left( \\prod f_i \\right) : (X/\\sim) \\to \\left( \\prod Y_i \\right)"
},
{
"math_id": 8,
"text": "\\prod f_i : X \\to \\prod Y_i."
}
] | https://en.wikipedia.org/wiki?curid=9903220 |
9903344 | Orientation character | In algebraic topology, a branch of mathematics, an orientation character on a group formula_0 is a group homomorphism where:
formula_1
This notion is of particular significance in surgery theory.
Motivation.
Given a manifold "M", one takes formula_2 (the fundamental group), and then formula_3 sends an element of formula_0 to formula_4 if and only if the class it represents is orientation-reversing.
This map formula_3 is trivial if and only if "M" is orientable.
The orientation character is an algebraic structure on the fundamental group of a manifold, which captures which loops are orientation reversing and which are orientation preserving.
Twisted group algebra.
The orientation character defines a twisted involution (*-ring structure) on the group ring formula_5, by formula_6 (i.e., formula_7, accordingly as formula_8 is orientation preserving or reversing). This is denoted formula_9.
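As a concrete and purely illustrative example, the Python sketch below implements this twisted involution for the two-element group {e, t} with the nontrivial orientation character ω(t) = −1, as arises for a non-orientable manifold whose fundamental group has order two. Elements of the group ring are stored as dictionaries from group elements to integer coefficients; the group, its multiplication table, and the test elements are ad-hoc choices, not taken from the article.

# Group pi = {e, t} with t*t = e; orientation character w(e) = +1, w(t) = -1.
omega = {"e": 1, "t": -1}
inverse = {"e": "e", "t": "t"}          # every element is its own inverse here
table = {("e", "e"): "e", ("e", "t"): "t", ("t", "e"): "t", ("t", "t"): "e"}

def star(a):
    # Twisted involution: sum a_g g  |->  sum omega(g) a_g g^{-1}.
    out = {"e": 0, "t": 0}
    for g, coeff in a.items():
        out[inverse[g]] += omega[g] * coeff
    return out

def multiply(a, b):
    # Multiplication in the group ring Z[pi].
    out = {"e": 0, "t": 0}
    for g, cg in a.items():
        for h, ch in b.items():
            out[table[(g, h)]] += cg * ch
    return out

a = {"e": 2, "t": 3}
b = {"e": -1, "t": 5}
assert star(star(a)) == a                                   # an involution
assert star(multiply(a, b)) == multiply(star(b), star(a))   # anti-multiplicative
print(star(a))   # {'e': 2, 't': -3}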
Properties.
The orientation character is either trivial or has kernel an index 2 subgroup, which determines the map completely.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\pi"
},
{
"math_id": 1,
"text": "\\omega\\colon \\pi \\to \\left\\{\\pm 1\\right\\}"
},
{
"math_id": 2,
"text": "\\pi=\\pi_1 M"
},
{
"math_id": 3,
"text": "\\omega"
},
{
"math_id": 4,
"text": "-1"
},
{
"math_id": 5,
"text": "\\mathbf{Z}[\\pi]"
},
{
"math_id": 6,
"text": "g \\mapsto \\omega(g)g^{-1}"
},
{
"math_id": 7,
"text": "\\pm g^{-1}"
},
{
"math_id": 8,
"text": "g"
},
{
"math_id": 9,
"text": "\\mathbf{Z}[\\pi]^\\omega"
}
] | https://en.wikipedia.org/wiki?curid=9903344 |
990534 | Norm (mathematics) | Length in a vector space
In mathematics, a norm is a function from a real or complex vector space to the non-negative real numbers that behaves in certain ways like the distance from the origin: it commutes with scaling, obeys a form of the triangle inequality, and is zero only at the origin. In particular, the Euclidean distance in a Euclidean space is defined by a norm on the associated Euclidean vector space, called the Euclidean norm, the 2-norm, or, sometimes, the magnitude of the vector. This norm can be defined as the square root of the inner product of a vector with itself.
A seminorm satisfies the first two properties of a norm, but may be zero for vectors other than the origin. A vector space with a specified norm is called a normed vector space. In a similar manner, a vector space with a seminorm is called a "seminormed vector space".
The term pseudonorm has been used for several related meanings. It may be a synonym of "seminorm".
A pseudonorm may satisfy the same axioms as a norm, with the equality replaced by an inequality "formula_0" in the homogeneity axiom.
It can also refer to a norm that can take infinite values, or to certain functions parametrised by a directed set.
<templatestyles src="Template:TOC limit/styles.css" />
Definition.
Given a vector space formula_1 over a subfield formula_2 of the complex numbers formula_3 a norm on formula_1 is a real-valued function formula_4 with the following properties, where formula_5 denotes the usual absolute value of a scalar formula_6: (1.) subadditivity, also called the triangle inequality: formula_7 for all formula_8 (2.) absolute homogeneity: formula_9 for all formula_10 and all scalars formula_11 (3.) positive definiteness: for all formula_12 formula_13 implies formula_14
A seminorm on formula_1 is a function formula_4 that has properties (1.) and (2.), so that in particular, every norm is also a seminorm (and thus also a sublinear functional). However, there exist seminorms that are not norms. Properties (1.) and (2.) imply that if formula_16 is a norm (or more generally, a seminorm) then formula_17 and that formula_16 also has the following property: it is non-negative, meaning that every value it takes is greater than or equal to zero.
Some authors include non-negativity as part of the definition of "norm", although this is not necessary.
Although this article defined "positive" to be a synonym of "positive definite", some authors instead define "positive" to be a synonym of "non-negative"; these definitions are not equivalent.
Equivalent norms.
Suppose that formula_16 and formula_18 are two norms (or seminorms) on a vector space formula_19 Then formula_16 and formula_18 are called equivalent, if there exist two positive real constants formula_20 and formula_21 with formula_22 such that for every vector formula_12
formula_23
The relation "formula_16 is equivalent to formula_18" is reflexive, symmetric (formula_24 implies formula_25), and transitive and thus defines an equivalence relation on the set of all norms on formula_19
The norms formula_16 and formula_18 are equivalent if and only if they induce the same topology on formula_19 Any two norms on a finite-dimensional space are equivalent but this does not extend to infinite-dimensional spaces.
Notation.
If a norm formula_26 is given on a vector space formula_27 then the norm of a vector formula_28 is usually denoted by enclosing it within double vertical lines: formula_29 Such notation is also sometimes used if formula_16 is only a seminorm. For the length of a vector in Euclidean space (which is an example of a norm, as explained below), the notation formula_30 with single vertical lines is also widespread.
Examples.
Every (real or complex) vector space admits a norm: If formula_31 is a Hamel basis for a vector space formula_1 then the real-valued map that sends formula_32 (where all but finitely many of the scalars formula_33 are formula_34) to formula_35 is a norm on formula_19 There are also a large number of norms that exhibit additional properties that make them useful for specific problems.
Absolute-value norm.
The absolute value
formula_30
is a norm on the vector space formed by the real or complex numbers. The complex numbers form a one-dimensional vector space over themselves and a two-dimensional vector space over the reals; the absolute value is a norm for these two structures.
Any norm formula_16 on a one-dimensional vector space formula_1 is equivalent (up to scaling) to the absolute value norm, meaning that there is a norm-preserving isomorphism of vector spaces formula_36 where formula_37 is either formula_38 or formula_3 and norm-preserving means that formula_39
This isomorphism is given by sending formula_40 to a vector of norm formula_41 which exists since such a vector is obtained by multiplying any non-zero vector by the inverse of its norm.
Euclidean norm.
On the formula_42-dimensional Euclidean space formula_43 the intuitive notion of length of the vector formula_44 is captured by the formula
formula_45
This is the Euclidean norm, which gives the ordinary distance from the origin to the point X—a consequence of the Pythagorean theorem.
This operation may also be referred to as "SRSS", which is an acronym for the square root of the sum of squares.
The Euclidean norm is by far the most commonly used norm on formula_43 but there are other norms on this vector space as will be shown below.
However, all these norms are equivalent in the sense that they all define the same topology on finite-dimensional spaces.
The inner product of two vectors of a Euclidean vector space is the dot product of their coordinate vectors over an orthonormal basis.
Hence, the Euclidean norm can be written in a coordinate-free way as
formula_46
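For concreteness, here is a short, illustrative Python sketch (not part of the article) that evaluates the Euclidean norm both from the coordinate formula and from the coordinate-free inner-product form; the two computations agree.

import math

def euclidean_norm(x):
    # Square root of the sum of squares ("SRSS").
    return math.sqrt(sum(xi * xi for xi in x))

def dot(x, y):
    # Dot product over the standard orthonormal basis.
    return sum(xi * yi for xi, yi in zip(x, y))

x = [3.0, 4.0, 12.0]
print(euclidean_norm(x))       # 13.0
print(math.sqrt(dot(x, x)))    # 13.0, the coordinate-free form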
The Euclidean norm is also called the quadratic norm, formula_47 norm, formula_48 norm, 2-norm, or square norm; see formula_49 space.
It defines a distance function called the Euclidean length, formula_47 distance, or formula_48 distance.
The set of vectors in formula_50 whose Euclidean norm is a given positive constant forms an formula_42-sphere.
Euclidean norm of complex numbers.
The Euclidean norm of a complex number is the absolute value (also called the modulus) of it, if the complex plane is identified with the Euclidean plane formula_51 This identification of the complex number formula_52 as a vector in the Euclidean plane, makes the quantity formula_53 (as first suggested by Euler) the Euclidean norm associated with the complex number. For formula_54, the norm can also be written as formula_55 where formula_56 is the complex conjugate of formula_57
Quaternions and octonions.
There are exactly four Euclidean Hurwitz algebras over the real numbers. These are the real numbers formula_58 the complex numbers formula_3 the quaternions formula_59 and lastly the octonions formula_60 where the dimensions of these spaces over the real numbers are formula_61 respectively.
The canonical norms on formula_38 and formula_62 are their absolute value functions, as discussed previously.
The canonical norm on formula_63 of quaternions is defined by
formula_64
for every quaternion formula_65 in formula_66 This is the same as the Euclidean norm on formula_63 considered as the vector space formula_67 Similarly, the canonical norm on the octonions is just the Euclidean norm on formula_68
Finite-dimensional complex normed spaces.
On an formula_42-dimensional complex space formula_69 the most common norm is
formula_70
In this case, the norm can be expressed as the square root of the inner product of the vector and itself:
formula_71
where formula_72 is represented as a column vector formula_73 and formula_74 denotes its conjugate transpose.
This formula is valid for any inner product space, including Euclidean and complex spaces. For complex spaces, the inner product is equivalent to the complex dot product. Hence the formula in this case can also be written using the following notation:
formula_75
Taxicab norm or Manhattan norm.
formula_76
The name relates to the distance a taxi has to drive in a rectangular street grid (like that of the New York borough of Manhattan) to get from the origin to the point formula_77
The set of vectors whose 1-norm is a given constant forms the surface of a cross polytope, which has dimension equal to the dimension of the vector space minus 1.
The Taxicab norm is also called the formula_78 norm. The distance derived from this norm is called the Manhattan distance or formula_78 distance.
The 1-norm is simply the sum of the absolute values of the components of the vector.
In contrast,
formula_79
is not a norm because it may yield negative results.
"p"-norm.
Let formula_80 be a real number.
The formula_16-norm (also called formula_81-norm) of vector formula_82 is
formula_83
For formula_84 we get the taxicab norm, for formula_85 we get the Euclidean norm, and as formula_16 approaches formula_86 the formula_16-norm approaches the infinity norm or maximum norm:
formula_87
The formula_16-norm is related to the generalized mean or power mean.
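The following Python sketch (illustrative only; the test vector is arbitrary) evaluates the formula above for several values of p, showing that p = 1 recovers the taxicab norm, p = 2 the Euclidean norm, and that large p already approximates the maximum norm.

def p_norm(x, p):
    # (sum_i |x_i|^p)^(1/p) for finite p >= 1.
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

x = [3.0, -4.0, 1.0]
print(p_norm(x, 1))              # 8.0, the taxicab norm
print(p_norm(x, 2))              # ~5.0990, the Euclidean norm
print(p_norm(x, 100))            # ~4.0000, close to the maximum norm
print(max(abs(xi) for xi in x))  # 4.0, the infinity norm itself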
For formula_88 the formula_89-norm is even induced by a canonical inner product formula_90 meaning that formula_91 for all vectors formula_92 This inner product can be expressed in terms of the norm by using the polarization identity.
On formula_93 this inner product is the "<templatestyles src="Template:Visible anchor/styles.css" />Euclidean inner product" defined by
formula_94
while for the space formula_95 associated with a measure space formula_96 which consists of all square-integrable functions, this inner product is
formula_97
This definition is still of some interest for formula_98 but the resulting function does not define a norm, because it violates the triangle inequality.
What is true for this case of formula_98 even in the measurable analog, is that the corresponding formula_49 class is a vector space, and it is also true that the function
formula_99
(without formula_16th root) defines a distance that makes formula_100 into a complete metric topological vector space. These spaces are of great interest in functional analysis, probability theory and harmonic analysis.
However, aside from trivial cases, this topological vector space is not locally convex, and has no continuous non-zero linear forms. Thus the topological dual space contains only the zero functional.
The partial derivative of the formula_16-norm is given by
formula_101
The derivative with respect to formula_102 therefore, is
formula_103
where formula_104 denotes Hadamard product and formula_105 is used for absolute value of each component of the vector.
For the special case of formula_88 this becomes
formula_106
or
formula_107
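The closed form can be sanity-checked numerically. The Python sketch below (illustrative; the step size and test vector are arbitrary choices) compares the p = 2 gradient formula with central finite differences.

def p_norm(x, p):
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

def grad_2norm(x):
    # Closed form for p = 2: the k-th partial derivative is x_k / ||x||_2.
    n = p_norm(x, 2)
    return [xk / n for xk in x]

def numeric_grad(x, p, h=1e-6):
    # Central finite differences, one coordinate at a time.
    g = []
    for k in range(len(x)):
        xp = list(x); xp[k] += h
        xm = list(x); xm[k] -= h
        g.append((p_norm(xp, p) - p_norm(xm, p)) / (2 * h))
    return g

x = [1.0, -2.0, 2.0]
print(grad_2norm(x))        # [0.333..., -0.666..., 0.666...]
print(numeric_grad(x, 2))   # numerically the same values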
Maximum norm (special case of: infinity norm, uniform norm, or supremum norm).
If formula_108 is some vector such that formula_109 then:
formula_110
The set of vectors whose infinity norm is a given constant, formula_111 forms the surface of a hypercube with edge length formula_112
Zero norm.
In probability and functional analysis, the zero norm induces a complete metric topology for the space of measurable functions and for the F-space of sequences with F–norm formula_113
Here we mean by "F-norm" some real-valued function formula_114 on an F-space with distance formula_115 such that formula_116 The "F"-norm described above is not a norm in the usual sense because it lacks the required homogeneity property.
Hamming distance of a vector from zero.
In metric geometry, the discrete metric takes the value one for distinct points and zero otherwise. When applied coordinate-wise to the elements of a vector space, the discrete distance defines the "Hamming distance", which is important in coding and information theory.
In the field of real or complex numbers, the distance of the discrete metric from zero is not homogeneous in its non-zero argument; indeed, the distance from zero remains one as its non-zero argument approaches zero.
However, the discrete distance of a number from zero does satisfy the other properties of a norm, namely the triangle inequality and positive definiteness.
When applied component-wise to vectors, the discrete distance from zero behaves like a non-homogeneous "norm", which counts the number of non-zero components in its vector argument; again, this non-homogeneous "norm" is discontinuous.
In signal processing and statistics, David Donoho referred to the "zero" "norm" with quotation marks.
Following Donoho's notation, the zero "norm" of formula_117 is simply the number of non-zero coordinates of formula_102 or the Hamming distance of the vector from zero.
When this "norm" is localized to a bounded set, it is the limit of formula_16-norms as formula_16 approaches 0.
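A small, illustrative Python sketch (not from the article): the number of non-zero coordinates, together with the p-th power sum of absolute values for small p, which is one way to make the limiting statement precise for vectors with bounded entries.

def count_nonzero(x):
    # Hamming distance of the vector from zero.
    return sum(1 for xi in x if xi != 0)

def p_power_sum(x, p):
    # sum_i |x_i|^p; for bounded non-zero entries this tends to the
    # number of non-zero coordinates as p -> 0.
    return sum(abs(xi) ** p for xi in x)

x = [0.0, 2.5, 0.0, -0.3, 7.0]
print(count_nonzero(x))      # 3
print(p_power_sum(x, 0.1))   # ~3.2
print(p_power_sum(x, 0.01))  # ~3.02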
Of course, the zero "norm" is not truly a norm, because it is not positive homogeneous.
Indeed, it is not even an F-norm in the sense described above, since it is discontinuous, jointly and severally, with respect to the scalar argument in scalar–vector multiplication and with respect to its vector argument.
Abusing terminology, some engineers omit Donoho's quotation marks and inappropriately call the number-of-non-zeros function the formula_118 norm, echoing the notation for the Lebesgue space of measurable functions.
Infinite dimensions.
The generalization of the above norms to an infinite number of components leads to formula_81 and formula_49 spaces for formula_119 with norms
formula_120
for complex-valued sequences and functions on formula_121 respectively, which can be further generalized (see Haar measure). These norms are also valid in the limit as formula_122, giving a supremum norm, and are called formula_123 and formula_124
Any inner product induces in a natural way the norm formula_125
Other examples of infinite-dimensional normed vector spaces can be found in the Banach space article.
Generally, these norms do not give the same topologies. For example, an infinite-dimensional formula_81 space gives a strictly finer topology than an infinite-dimensional formula_126 space when formula_127
Composite norms.
Other norms on formula_128 can be constructed by combining the above; for example
formula_129
is a norm on formula_67
For any norm and any injective linear transformation formula_130 we can define a new norm of formula_102 equal to
formula_131
In 2D, with formula_130 a rotation by 45° and a suitable scaling, this changes the taxicab norm into the maximum norm. Each formula_130 applied to the taxicab norm, up to inversion and interchanging of axes, gives a different unit ball: a parallelogram of a particular shape, size, and orientation.
In 3D, this is similar but different for the 1-norm (octahedrons) and the maximum norm (prisms with parallelogram base).
There are examples of norms that are not defined by "entrywise" formulas. For instance, the Minkowski functional of a centrally-symmetric convex body in formula_128 (centered at zero) defines a norm on formula_128 (see below).
All the above formulas also yield norms on formula_132 without modification.
There are also norms on spaces of matrices (with real or complex entries), the so-called matrix norms.
In abstract algebra.
Let formula_133 be a finite extension of a field formula_134 of inseparable degree formula_135 and let formula_134 have algebraic closure formula_136 If the distinct embeddings of formula_133 are formula_137 then the Galois-theoretic norm of an element formula_138 is the value formula_139 As that function is homogeneous of degree formula_140, the Galois-theoretic norm is not a norm in the sense of this article. However, the formula_140-th root of the norm (assuming that concept makes sense) is a norm.
Composition algebras.
The concept of norm formula_141 in composition algebras does not share the usual properties of a norm since null vectors are allowed. A composition algebra formula_142 consists of an algebra over a field formula_143 an involution formula_144 and a quadratic form formula_145 called the "norm".
The characteristic feature of composition algebras is the homomorphism property of formula_146: for the product formula_147 of two elements formula_148 and formula_149 of the composition algebra, its norm satisfies formula_150 In the case of division algebras formula_58 formula_3 formula_59 and formula_151 the composition algebra norm is the square of the norm discussed above. In those cases the norm is a definite quadratic form. In the split algebras the norm is an isotropic quadratic form.
Properties.
For any norm formula_26 on a vector space formula_27 the reverse triangle inequality holds:
formula_152
If formula_153 is a continuous linear map between normed spaces, then the norm of formula_154 and the norm of the transpose of formula_154 are equal.
For the formula_49 norms, we have Hölder's inequality
formula_155
A special case of this is the Cauchy–Schwarz inequality:
formula_156
Every norm is a seminorm and thus satisfies all properties of the latter. In turn, every seminorm is a sublinear function and thus satisfies all properties of the latter. In particular, every norm is a convex function.
Equivalence.
The concept of unit circle (the set of all vectors of norm 1) is different in different norms: for the 1-norm, the unit circle is a square oriented as a diamond; for the 2-norm (Euclidean norm), it is the well-known unit circle; while for the infinity norm, it is an axis-aligned square. For any formula_16-norm, it is a superellipse with congruent axes (see the accompanying illustration). Due to the definition of the norm, the unit circle must be convex and centrally symmetric (therefore, for example, the unit ball may be a rectangle but cannot be a triangle, and formula_80 for a formula_16-norm).
In terms of the vector space, the seminorm defines a topology on the space, and this is a Hausdorff topology precisely when the seminorm can distinguish between distinct vectors, which is again equivalent to the seminorm being a norm. The topology thus defined (by either a norm or a seminorm) can be understood either in terms of sequences or open sets. A sequence of vectors formula_157 is said to converge in norm to formula_158 if formula_159 as formula_160 Equivalently, the topology consists of all sets that can be represented as a union of open balls. If formula_161 is a normed space then
formula_162
Two norms formula_163 and formula_164 on a vector space formula_1 are called <templatestyles src="Template:Visible anchor/styles.css" />equivalent if they induce the same topology, which happens if and only if there exist positive real numbers formula_21 and formula_165 such that for all formula_10
formula_166
For instance, if formula_167 on formula_69 then
formula_168
In particular,
formula_169
formula_170
formula_171
That is,
formula_172
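These inequalities are easy to spot-check numerically; the illustrative Python sketch below verifies the chain on randomly generated vectors (the dimension, ranges, and tolerances are arbitrary choices).

import math, random

def norm1(x):   return sum(abs(v) for v in x)
def norm2(x):   return math.sqrt(sum(v * v for v in x))
def norminf(x): return max(abs(v) for v in x)

random.seed(0)
n = 5
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(n)]
    assert norminf(x) <= norm2(x) + 1e-12
    assert norm2(x) <= norm1(x) + 1e-12
    assert norm1(x) <= math.sqrt(n) * norm2(x) + 1e-12
    assert math.sqrt(n) * norm2(x) <= n * norminf(x) + 1e-12
print("the chain of inequalities held on all samples")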
If the vector space is a finite-dimensional real or complex one, all norms are equivalent. On the other hand, in the case of infinite-dimensional vector spaces, not all norms are equivalent.
Equivalent norms define the same notions of continuity and convergence and for many purposes do not need to be distinguished. To be more precise the uniform structure defined by equivalent norms on the vector space is uniformly isomorphic.
Classification of seminorms: absolutely convex absorbing sets.
All seminorms on a vector space formula_1 can be classified in terms of absolutely convex absorbing subsets formula_130 of formula_19 To each such subset corresponds a seminorm formula_173 called the gauge of formula_143 defined as
formula_174
where formula_175 is the infimum, with the property that
formula_176
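As an illustrative Python sketch (not from the article), the gauge can be approximated by bisection on the scaling factor r, given only a membership test for the set; here the set is taken, as an arbitrary example, to be the closed unit ball of the taxicab norm, so the gauge should recover that norm.

def gauge(in_A, x, hi=1e6, tol=1e-9):
    # p_A(x) = inf { r > 0 : x in r*A }, located by bisection on r.
    # Assumes A is absorbing and that r = hi is already large enough.
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if in_A([xi / mid for xi in x]):   # x in mid*A  <=>  x/mid in A
            hi = mid
        else:
            lo = mid
    return hi

def in_unit_1_ball(x):
    # An absolutely convex absorbing set: the closed unit ball of the 1-norm.
    return sum(abs(xi) for xi in x) <= 1.0

x = [0.5, -1.5, 1.0]
print(gauge(in_unit_1_ball, x))   # ~3.0
print(sum(abs(xi) for xi in x))   # 3.0, the taxicab norm of x, for comparison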
Conversely:
Any locally convex topological vector space has a local basis consisting of absolutely convex sets. A common method to construct such a basis is to use a family formula_177 of seminorms formula_16 that separates points: the collection of all finite intersections of sets formula_178 turns the space into a locally convex topological vector space so that every p is continuous.
Such a method is used to design weak and weak* topologies.
In the case of a single norm: suppose now that formula_177 contains a single formula_179 since formula_177 is separating, formula_16 is a norm, and formula_180 is its open unit ball. Then formula_130 is an absolutely convex bounded neighbourhood of 0, and formula_181 is continuous.
The converse is due to Andrey Kolmogorov: any locally convex and locally bounded topological vector space is normable. Precisely:
If formula_1 is an absolutely convex bounded neighbourhood of 0, then the gauge formula_182 (so that formula_183) is a norm.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\,\\leq\\,"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "F"
},
{
"math_id": 3,
"text": "\\Complex,"
},
{
"math_id": 4,
"text": "p : X \\to \\Reals"
},
{
"math_id": 5,
"text": "|s|"
},
{
"math_id": 6,
"text": "s"
},
{
"math_id": 7,
"text": "p(x + y) \\leq p(x) + p(y)"
},
{
"math_id": 8,
"text": "x, y \\in X."
},
{
"math_id": 9,
"text": "p(s x) = |s| p(x)"
},
{
"math_id": 10,
"text": "x \\in X"
},
{
"math_id": 11,
"text": "s."
},
{
"math_id": 12,
"text": "x \\in X,"
},
{
"math_id": 13,
"text": "p(x) = 0"
},
{
"math_id": 14,
"text": "x = 0."
},
{
"math_id": 15,
"text": "p(0) = 0,"
},
{
"math_id": 16,
"text": "p"
},
{
"math_id": 17,
"text": "p(0) = 0"
},
{
"math_id": 18,
"text": "q"
},
{
"math_id": 19,
"text": "X."
},
{
"math_id": 20,
"text": "c"
},
{
"math_id": 21,
"text": "C"
},
{
"math_id": 22,
"text": "c > 0"
},
{
"math_id": 23,
"text": "c q(x) \\leq p(x) \\leq C q(x)."
},
{
"math_id": 24,
"text": "c q \\leq p \\leq C q"
},
{
"math_id": 25,
"text": "\\tfrac{1}{C} p \\leq q \\leq \\tfrac{1}{c} p"
},
{
"math_id": 26,
"text": "p : X \\to \\R"
},
{
"math_id": 27,
"text": "X,"
},
{
"math_id": 28,
"text": "z \\in X"
},
{
"math_id": 29,
"text": "\\|z\\| = p(z)."
},
{
"math_id": 30,
"text": "|x|"
},
{
"math_id": 31,
"text": "x_{\\bull} = \\left(x_i\\right)_{i \\in I}"
},
{
"math_id": 32,
"text": "x = \\sum_{i \\in I} s_i x_i \\in X"
},
{
"math_id": 33,
"text": "s_i"
},
{
"math_id": 34,
"text": "0"
},
{
"math_id": 35,
"text": "\\sum_{i \\in I} \\left|s_i\\right|"
},
{
"math_id": 36,
"text": "f : \\mathbb{F} \\to X,"
},
{
"math_id": 37,
"text": "\\mathbb{F}"
},
{
"math_id": 38,
"text": "\\R"
},
{
"math_id": 39,
"text": "|x| = p(f(x))."
},
{
"math_id": 40,
"text": "1 \\isin \\mathbb{F}"
},
{
"math_id": 41,
"text": "1,"
},
{
"math_id": 42,
"text": "n"
},
{
"math_id": 43,
"text": "\\R^n,"
},
{
"math_id": 44,
"text": "\\boldsymbol{x} = \\left(x_1, x_2, \\ldots, x_n\\right)"
},
{
"math_id": 45,
"text": "\\|\\boldsymbol{x}\\|_2 := \\sqrt{x_1^2 + \\cdots + x_n^2}."
},
{
"math_id": 46,
"text": "\\|\\boldsymbol{x}\\| := \\sqrt{\\boldsymbol{x} \\cdot \\boldsymbol{x}}."
},
{
"math_id": 47,
"text": "L^2"
},
{
"math_id": 48,
"text": "\\ell^2"
},
{
"math_id": 49,
"text": "L^p"
},
{
"math_id": 50,
"text": "\\R^{n+1}"
},
{
"math_id": 51,
"text": "\\R^2."
},
{
"math_id": 52,
"text": "x + i y"
},
{
"math_id": 53,
"text": "\\sqrt{x^2 + y^2}"
},
{
"math_id": 54,
"text": "z = x +iy"
},
{
"math_id": 55,
"text": "\\sqrt{\\bar z z}"
},
{
"math_id": 56,
"text": "\\bar z"
},
{
"math_id": 57,
"text": "z\\,."
},
{
"math_id": 58,
"text": "\\R,"
},
{
"math_id": 59,
"text": "\\mathbb{H},"
},
{
"math_id": 60,
"text": "\\mathbb{O},"
},
{
"math_id": 61,
"text": "1, 2, 4, \\text{ and } 8,"
},
{
"math_id": 62,
"text": "\\Complex"
},
{
"math_id": 63,
"text": "\\mathbb{H}"
},
{
"math_id": 64,
"text": "\\lVert q \\rVert = \\sqrt{\\,qq^*~} = \\sqrt{\\,q^*q~} = \\sqrt{\\, a^2 + b^2 + c^2 + d^2 ~}"
},
{
"math_id": 65,
"text": "q = a + b\\,\\mathbf i + c\\,\\mathbf j + d\\,\\mathbf k"
},
{
"math_id": 66,
"text": "\\mathbb{H}."
},
{
"math_id": 67,
"text": "\\R^4."
},
{
"math_id": 68,
"text": "\\R^8."
},
{
"math_id": 69,
"text": "\\Complex^n,"
},
{
"math_id": 70,
"text": "\\|\\boldsymbol{z}\\| := \\sqrt{\\left|z_1\\right|^2 + \\cdots + \\left|z_n\\right|^2} = \\sqrt{z_1 \\bar z_1 + \\cdots + z_n \\bar z_n}."
},
{
"math_id": 71,
"text": "\\|\\boldsymbol{x}\\| := \\sqrt{\\boldsymbol{x}^H ~ \\boldsymbol{x}},"
},
{
"math_id": 72,
"text": "\\boldsymbol{x}"
},
{
"math_id": 73,
"text": "\\begin{bmatrix} x_1 \\; x_2 \\; \\dots \\; x_n \\end{bmatrix}^{\\rm T}"
},
{
"math_id": 74,
"text": "\\boldsymbol{x}^H"
},
{
"math_id": 75,
"text": "\\|\\boldsymbol{x}\\| := \\sqrt{\\boldsymbol{x} \\cdot \\boldsymbol{x}}."
},
{
"math_id": 76,
"text": "\\|\\boldsymbol{x}\\|_1 := \\sum_{i=1}^n \\left|x_i\\right|."
},
{
"math_id": 77,
"text": "x."
},
{
"math_id": 78,
"text": "\\ell^1"
},
{
"math_id": 79,
"text": "\\sum_{i=1}^n x_i"
},
{
"math_id": 80,
"text": "p \\geq 1"
},
{
"math_id": 81,
"text": "\\ell^p"
},
{
"math_id": 82,
"text": "\\mathbf{x} = (x_1, \\ldots, x_n)"
},
{
"math_id": 83,
"text": "\\|\\mathbf{x}\\|_p := \\left(\\sum_{i=1}^n \\left|x_i\\right|^p\\right)^{1/p}."
},
{
"math_id": 84,
"text": "p = 1,"
},
{
"math_id": 85,
"text": "p = 2"
},
{
"math_id": 86,
"text": "\\infty"
},
{
"math_id": 87,
"text": "\\|\\mathbf{x}\\|_\\infty := \\max_i \\left|x_i\\right|."
},
{
"math_id": 88,
"text": "p = 2,"
},
{
"math_id": 89,
"text": "\\|\\,\\cdot\\,\\|_2"
},
{
"math_id": 90,
"text": "\\langle \\,\\cdot,\\,\\cdot\\rangle,"
},
{
"math_id": 91,
"text": "\\|\\mathbf{x}\\|_2 = \\sqrt{\\langle \\mathbf{x}, \\mathbf{x} \\rangle}"
},
{
"math_id": 92,
"text": "\\mathbf{x}."
},
{
"math_id": 93,
"text": "\\ell^2,"
},
{
"math_id": 94,
"text": "\\langle \\left(x_n\\right)_{n}, \\left(y_n\\right)_{n} \\rangle_{\\ell^2} ~=~ \\sum_n \\overline{x_n} y_n"
},
{
"math_id": 95,
"text": "L^2(X, \\mu)"
},
{
"math_id": 96,
"text": "(X, \\Sigma, \\mu),"
},
{
"math_id": 97,
"text": "\\langle f, g \\rangle_{L^2} = \\int_X \\overline{f(x)} g(x)\\, \\mathrm dx."
},
{
"math_id": 98,
"text": "0 < p < 1,"
},
{
"math_id": 99,
"text": "\\int_X |f(x) - g(x)|^p ~ \\mathrm d \\mu"
},
{
"math_id": 100,
"text": "L^p(X)"
},
{
"math_id": 101,
"text": "\\frac{\\partial}{\\partial x_k} \\|\\mathbf{x}\\|_p = \\frac{x_k \\left|x_k\\right|^{p-2}} { \\|\\mathbf{x}\\|_p^{p-1}}."
},
{
"math_id": 102,
"text": "x,"
},
{
"math_id": 103,
"text": "\\frac{\\partial \\|\\mathbf{x}\\|_p}{\\partial \\mathbf{x}} =\\frac{\\mathbf{x} \\circ |\\mathbf{x}|^{p-2}} {\\|\\mathbf{x}\\|^{p-1}_p}."
},
{
"math_id": 104,
"text": "\\circ"
},
{
"math_id": 105,
"text": "|\\cdot|"
},
{
"math_id": 106,
"text": "\\frac{\\partial}{\\partial x_k} \\|\\mathbf{x}\\|_2 = \\frac{x_k}{\\|\\mathbf{x}\\|_2},"
},
{
"math_id": 107,
"text": "\\frac{\\partial}{\\partial \\mathbf{x}} \\|\\mathbf{x}\\|_2 = \\frac{\\mathbf{x}}{ \\|\\mathbf{x}\\|_2}."
},
{
"math_id": 108,
"text": "\\mathbf{x}"
},
{
"math_id": 109,
"text": "\\mathbf{x} = (x_1, x_2, \\ldots ,x_n),"
},
{
"math_id": 110,
"text": "\\|\\mathbf{x}\\|_\\infty := \\max \\left(\\left|x_1\\right| , \\ldots , \\left|x_n\\right|\\right)."
},
{
"math_id": 111,
"text": "c,"
},
{
"math_id": 112,
"text": "2 c."
},
{
"math_id": 113,
"text": "(x_n) \\mapsto \\sum_n{2^{-n} x_n/(1+x_n)}."
},
{
"math_id": 114,
"text": "\\lVert \\cdot \\rVert"
},
{
"math_id": 115,
"text": "d,"
},
{
"math_id": 116,
"text": "\\lVert x \\rVert = d(x,0)."
},
{
"math_id": 117,
"text": "x"
},
{
"math_id": 118,
"text": "L^0"
},
{
"math_id": 119,
"text": "p \\ge 1\\,,"
},
{
"math_id": 120,
"text": "\\|x\\|_p = \\bigg(\\sum_{i \\in \\N} \\left|x_i\\right|^p\\bigg)^{1/p} \\text{ and }\\ \\|f\\|_{p,X} = \\bigg(\\int_X |f(x)|^p ~ \\mathrm d x\\bigg)^{1/p}"
},
{
"math_id": 121,
"text": "X \\sube \\R^n"
},
{
"math_id": 122,
"text": "p \\rightarrow +\\infty"
},
{
"math_id": 123,
"text": "\\ell^\\infty"
},
{
"math_id": 124,
"text": "L^\\infty\\,."
},
{
"math_id": 125,
"text": "\\|x\\| := \\sqrt{\\langle x , x\\rangle}."
},
{
"math_id": 126,
"text": "\\ell^q"
},
{
"math_id": 127,
"text": "p < q\\,."
},
{
"math_id": 128,
"text": "\\R^n"
},
{
"math_id": 129,
"text": "\\|x\\| := 2 \\left|x_1\\right| + \\sqrt{3 \\left|x_2\\right|^2 + \\max (\\left|x_3\\right| , 2 \\left|x_4\\right|)^2}"
},
{
"math_id": 130,
"text": "A"
},
{
"math_id": 131,
"text": "\\|A x\\|."
},
{
"math_id": 132,
"text": "\\Complex^n"
},
{
"math_id": 133,
"text": "E"
},
{
"math_id": 134,
"text": "k"
},
{
"math_id": 135,
"text": "p^{\\mu},"
},
{
"math_id": 136,
"text": "K."
},
{
"math_id": 137,
"text": "\\left\\{\\sigma_j\\right\\}_j,"
},
{
"math_id": 138,
"text": "\\alpha \\in E"
},
{
"math_id": 139,
"text": "\\left(\\prod_j {\\sigma_k(\\alpha)}\\right)^{p^{\\mu}}."
},
{
"math_id": 140,
"text": "[E : k]"
},
{
"math_id": 141,
"text": "N(z)"
},
{
"math_id": 142,
"text": "(A, {}^*, N)"
},
{
"math_id": 143,
"text": "A,"
},
{
"math_id": 144,
"text": "{}^*,"
},
{
"math_id": 145,
"text": "N(z) = z z^*"
},
{
"math_id": 146,
"text": "N"
},
{
"math_id": 147,
"text": "w z"
},
{
"math_id": 148,
"text": "w"
},
{
"math_id": 149,
"text": "z"
},
{
"math_id": 150,
"text": "N(wz) = N(w) N(z)."
},
{
"math_id": 151,
"text": "\\mathbb{O}"
},
{
"math_id": 152,
"text": "p(x \\pm y) \\geq |p(x) - p(y)| \\text{ for all } x, y \\in X."
},
{
"math_id": 153,
"text": "u : X \\to Y"
},
{
"math_id": 154,
"text": "u"
},
{
"math_id": 155,
"text": "|\\langle x, y \\rangle| \\leq \\|x\\|_p \\|y\\|_q \\qquad \\frac{1}{p} + \\frac{1}{q} = 1."
},
{
"math_id": 156,
"text": "\\left|\\langle x, y \\rangle\\right| \\leq \\|x\\|_2 \\|y\\|_2."
},
{
"math_id": 157,
"text": "\\{v_n\\}"
},
{
"math_id": 158,
"text": "v,"
},
{
"math_id": 159,
"text": "\\left\\|v_n - v\\right\\| \\to 0"
},
{
"math_id": 160,
"text": "n \\to \\infty."
},
{
"math_id": 161,
"text": "(X, \\|\\cdot\\|)"
},
{
"math_id": 162,
"text": "\\|x - y\\| = \\|x - z\\| + \\|z - y\\| \\text{ for all } x, y \\in X \\text{ and } z \\in [x, y]."
},
{
"math_id": 163,
"text": "\\|\\cdot\\|_\\alpha"
},
{
"math_id": 164,
"text": "\\|\\cdot\\|_\\beta"
},
{
"math_id": 165,
"text": "D"
},
{
"math_id": 166,
"text": "C \\|x\\|_\\alpha \\leq \\|x\\|_\\beta \\leq D \\|x\\|_\\alpha."
},
{
"math_id": 167,
"text": "p > r \\geq 1"
},
{
"math_id": 168,
"text": "\\|x\\|_p \\leq \\|x\\|_r \\leq n^{(1/r-1/p)} \\|x\\|_p."
},
{
"math_id": 169,
"text": "\\|x\\|_2 \\leq \\|x\\|_1 \\leq \\sqrt{n} \\|x\\|_2"
},
{
"math_id": 170,
"text": "\\|x\\|_\\infty \\leq \\|x\\|_2 \\leq \\sqrt{n} \\|x\\|_\\infty"
},
{
"math_id": 171,
"text": "\\|x\\|_\\infty \\leq \\|x\\|_1 \\leq n \\|x\\|_\\infty ,"
},
{
"math_id": 172,
"text": "\\|x\\|_\\infty \\leq \\|x\\|_2 \\leq \\|x\\|_1 \\leq \\sqrt{n} \\|x\\|_2 \\leq n \\|x\\|_\\infty."
},
{
"math_id": 173,
"text": "p_A"
},
{
"math_id": 174,
"text": "p_A(x) := \\inf \\{r \\in \\R : r > 0, x \\in r A\\}"
},
{
"math_id": 175,
"text": "\\inf_{}"
},
{
"math_id": 176,
"text": "\\left\\{x \\in X : p_A(x) < 1\\right\\} ~\\subseteq~ A ~\\subseteq~ \\left\\{x \\in X : p_A(x) \\leq 1\\right\\}."
},
{
"math_id": 177,
"text": "(p)"
},
{
"math_id": 178,
"text": "\\{p < 1/n\\}"
},
{
"math_id": 179,
"text": "p:"
},
{
"math_id": 180,
"text": "A = \\{p < 1\\}"
},
{
"math_id": 181,
"text": "p = p_A"
},
{
"math_id": 182,
"text": "g_X"
},
{
"math_id": 183,
"text": "X = \\{g_X < 1\\}"
}
] | https://en.wikipedia.org/wiki?curid=990534 |
9905395 | G-index | Citation metric
The "g"-index is an author-level metric suggested in 2006 by Leo Egghe. The index is calculated based on the distribution of citations received by a given researcher's publications, such that given a set of articles ranked in decreasing order of the number of citations that they received, the "g"-index is the unique largest number such that the top "g" articles received together at least "g"² citations. Hence, a "g"-index of 10 indicates that the top 10 publications of an author have been cited at least 100 times (10²), while a "g"-index of 20 indicates that the top 20 publications of an author have been cited at least 400 times (20²).
It can be equivalently defined as the largest number "n" of highly cited articles for which the average number of citations is at least "n". This is in fact a rewriting of the definition
formula_0
as
formula_1
The "g"-index is an alternative for the older "h"-index. The "h"-index does not average the number of citations. Instead, the "h"-index only requires a minimum of n citations for the least-cited article in the set and thus ignores the citation count of very highly cited papers. Roughly, the effect is that "h" is the number of papers of a quality threshold that rises as h rises; "g" allows citations from higher-cited papers to be used to bolster lower-cited papers in meeting this threshold. In effect, the "g"-index is the maximum reachable value of the "h"-index if a fixed number of citations can be distributed freely over a fixed number of papers. Therefore, in all cases "g" is at least "h", and is in most cases higher. The "g"-index often separates authors based on citations to a greater extent compared to the "h"-index. However, unlike the "h"-index, the "g"-index saturates whenever the average number of citations for all published papers exceeds the total number of published papers; the way it is defined, the "g"-index is not adapted to this situation. However, if an author with a saturated "g"-index publishes more papers, their "g"-index will increase.
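A direct computation from a list of citation counts may make the definitions concrete. The Python sketch below is illustrative, computes both indices for comparison, and uses invented citation counts.

def g_index(citations):
    # Largest g such that the top g papers together have at least g^2 citations.
    c = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, ci in enumerate(c, start=1):
        total += ci
        if total >= i * i:
            g = i
    return g

def h_index(citations):
    # Largest h such that the h-th most cited paper has at least h citations.
    c = sorted(citations, reverse=True)
    return max([i for i, ci in enumerate(c, start=1) if ci >= i], default=0)

citations = [10, 8, 5, 4, 3, 0, 0]   # made-up example data
print(h_index(citations))   # 4
print(g_index(citations))   # 5, since 10+8+5+4+3 = 30 >= 25 but 30 < 36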
The "g"-index has been characterized in terms of three natural axioms by Woeginger (2008). The simplest of these three axioms states that by moving citations from weaker articles to stronger articles, one's research index should not decrease. Like the "h"-index, the "g"-index is a natural number and thus lacks in discriminatory power. Therefore, Tol (2008) proposed a rational generalisation.
Tol also proposed a collective "g"-index.
Given a set of researchers ranked in decreasing order of their "g"-index, the "g"₁-index is the (unique) largest number such that the top "g"₁ researchers have on average at least a "g"-index of "g"₁.
{
"math_id": 0,
"text": "g^2 \\le \\sum_{{i \\le g }}c_{i} "
},
{
"math_id": 1,
"text": "g \\le \\frac1g \\sum_{{i\\le g}}c_{i}"
}
] | https://en.wikipedia.org/wiki?curid=9905395 |
9908 | Equation of state | An equation describing the state of matter under a given set of physical conditions
In physics and chemistry, an equation of state is a thermodynamic equation relating state variables, which describe the state of matter under a given set of physical conditions, such as pressure, volume, temperature, or internal energy. Most modern equations of state are formulated in the Helmholtz free energy. Equations of state are useful in describing the properties of pure substances and mixtures in liquids, gases, and solid states as well as the state of matter in the interior of stars.
<templatestyles src="Template:TOC limit/styles.css" />
Overview.
At present, there is no single equation of state that accurately predicts the properties of all substances under all conditions. One example of an equation of state is the ideal gas law, which correlates the densities of gases and liquids to temperatures and pressures; it is roughly accurate for weakly polar gases at low pressures and moderate temperatures. This equation becomes increasingly inaccurate at higher pressures and lower temperatures, and fails to predict condensation from a gas to a liquid.
The general form of an equation of state may be written as
formula_0
where formula_1 is the pressure, formula_2 the volume, and formula_3 the temperature of the system; other state variables may also be used in this form. The form is directly related to the Gibbs phase rule: the number of independent variables depends on the number of substances and phases in the system.
An equation used to model this relationship is called an equation of state. In most cases this model will comprise some empirical parameters that are usually adjusted to measurement data. Equations of state can also describe solids, including the transition of solids from one crystalline state to another. Equations of state are also used for the modeling of the state of matter in the interior of stars, including neutron stars, dense matter (quark–gluon plasmas) and radiation fields. A related concept is the perfect fluid equation of state used in cosmology.
Equations of state are applied in many fields such as process engineering and petroleum industry as well as pharmaceutical industry.
Any consistent set of units may be used, although SI units are preferred. Absolute temperature refers to the use of the Kelvin (K), with zero being absolute zero.
Historical background.
Boyle's law was one of the earliest formulations of an equation of state. In 1662, the Irish physicist and chemist Robert Boyle performed a series of experiments employing a J-shaped glass tube, which was sealed on one end. Mercury was added to the tube, trapping a fixed quantity of air in the short, sealed end of the tube. Then the volume of gas was measured as additional mercury was added to the tube. The pressure of the gas could be determined by the difference between the mercury level in the short end of the tube and that in the long, open end. Through these experiments, Boyle noted that the gas volume varied inversely with the pressure. In mathematical form, this can be stated as:
formula_11
The above relationship has also been attributed to Edme Mariotte and is sometimes referred to as Mariotte's law. However, Mariotte's work was not published until 1676.
In 1787 the French physicist Jacques Charles found that oxygen, nitrogen, hydrogen, carbon dioxide, and air expand to roughly the same extent over the same 80-kelvin interval. This is known today as Charles's law. Later, in 1802, Joseph Louis Gay-Lussac published results of similar experiments, indicating a linear relationship between volume and temperature:
formula_12
Dalton's law (1801) of partial pressures states that the pressure of a mixture of gases is equal to the sum of the pressures of all of the constituent gases alone.
Mathematically, this can be represented for formula_4 species as:
formula_13
In 1834, Émile Clapeyron combined Boyle's law and Charles's law into the first statement of the "ideal gas law". Initially, the law was formulated as "pV"_m = "R"("T"_C + 267) (with temperature expressed in degrees Celsius), where "R" is the gas constant. However, later work revealed that the number should actually be closer to 273.2, and the Celsius scale was then defined with formula_14, giving:
formula_15
In 1873, J. D. van der Waals introduced the first equation of state derived by the assumption of a finite volume occupied by the constituent molecules. His new formula revolutionized the study of equations of state, and was the starting point of cubic equations of state, which most famously continued via the Redlich–Kwong equation of state and the Soave modification of Redlich–Kwong.
The van der Waals equation of state can be written as
formula_16
where formula_17 is a parameter describing the attractive energy between particles and formula_18 is a parameter describing the volume of the particles.
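Rearranged for pressure, the van der Waals equation reads p = RT/(V_m − b) − a/V_m². The Python sketch below is illustrative; the values used for carbon dioxide are commonly quoted approximate textbook constants and should be treated as assumptions rather than reference data.

R = 8.314              # J/(mol K), molar gas constant

def vdw_pressure(T, Vm, a, b):
    # p = R T / (Vm - b) - a / Vm^2, the van der Waals equation solved for p.
    return R * T / (Vm - b) - a / Vm ** 2

# Approximate constants for CO2, assumed inputs:
a_co2, b_co2 = 0.364, 4.27e-5     # Pa m^6/mol^2 and m^3/mol

T, Vm = 300.0, 1.0e-3             # 300 K, one litre per mole
print(vdw_pressure(T, Vm, a_co2, b_co2))   # ~2.24e6 Pa
print(R * T / Vm)                          # ~2.49e6 Pa for the ideal gas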
Ideal gas law.
Classical ideal gas law.
The classical ideal gas law may be written
formula_19
In the form shown above, the equation of state is thus
formula_20
If the calorically perfect gas approximation is used, then the ideal gas law may also be expressed as follows
formula_21
where formula_22 is the mass density of the gas, formula_23 is the (constant) adiabatic index (ratio of specific heats), formula_24 is the internal energy per unit mass (the "specific internal energy"), formula_25 is the specific heat capacity at constant volume, and formula_26 is the specific heat capacity at constant pressure.
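As a numerical illustration (Python; the property values assumed for air are typical textbook figures, not taken from the article), the molar form p = nRT/V and the calorically perfect form p = ρ(γ − 1)e with e = c_v T give consistent, roughly atmospheric pressures.

R = 8.314                       # J/(mol K)

# Molar form: p = n R T / V.
n, V, T = 1.0, 0.0245, 298.15   # 1 mol in 24.5 litres at 298.15 K
print(n * R * T / V)            # ~1.01e5 Pa, about one atmosphere

# Calorically perfect form: p = rho (gamma - 1) e, with e = c_v T.
rho   = 1.18                    # kg/m^3, assumed density of air
gamma = 1.4                     # assumed ratio of specific heats for air
c_v   = 718.0                   # J/(kg K), assumed specific heat of air
e     = c_v * T
print(rho * (gamma - 1) * e)    # ~1.01e5 Pa again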
Quantum ideal gas law.
Since for atomic and molecular gases, the classical ideal gas law is well suited in most cases, let us describe the equation of state for elementary particles with mass formula_27 and spin formula_28 that takes into account quantum effects. In the following, the upper sign will always correspond to Fermi–Dirac statistics and the lower sign to Bose–Einstein statistics. The equation of state of such gases with formula_29 particles occupying a volume formula_2 with temperature formula_3 and pressure formula_1 is given by
formula_30
where formula_31 is the Boltzmann constant and formula_32 the chemical potential is given by the following implicit function
formula_33
In the limiting case where formula_34, this equation of state will reduce to that of the classical ideal gas. It can be shown that the above equation of state in the limit formula_35 reduces to
formula_36
With a fixed number density formula_37, decreasing the temperature causes, in a Fermi gas, an increase in pressure above its classical value, implying an effective repulsion between particles (this is an apparent repulsion due to quantum exchange effects, not to actual interactions, since interaction forces are neglected in an ideal gas), and, in a Bose gas, a decrease in pressure below its classical value, implying an effective attraction. The quantum nature of this equation lies in its dependence on s and ħ.
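A rough numerical illustration of the expansion above (Python; the gas, temperature, and number density are arbitrary choices, and the particle mass is an approximate value for a helium-4 atom): the leading correction to pV/(Nk_BT) is small under these conditions, entering with a minus sign for bosons such as helium-4 and a plus sign for fermions.

import math

hbar = 1.054571817e-34   # J s
kB   = 1.380649e-23      # J / K

def quantum_correction(n_density, m, T, s):
    # Magnitude of the leading term in pV/(N kB T) - 1:
    #   pi^(3/2) / (2 (2s+1)) * (N/V) * hbar^3 / (m kB T)^(3/2)
    return (math.pi ** 1.5 / (2 * (2 * s + 1))
            * n_density * hbar ** 3 / (m * kB * T) ** 1.5)

# Helium-4 (spin 0, mass ~6.6e-27 kg) at 4 K and an assumed number density.
corr = quantum_correction(n_density=1e26, m=6.6e-27, T=4.0, s=0)
print(corr)   # ~1.5e-3; for a Bose gas the pressure is below the classical
              # value by roughly this relative amount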
Cubic equations of state.
Cubic equations of state are called such because they can be rewritten as a cubic function of formula_5. Cubic equations of state originated from the van der Waals equation of state; hence, all cubic equations of state can be considered modified van der Waals equations of state. There is a very large number of such cubic equations of state. For process engineering, cubic equations of state are still highly relevant today, e.g. the Peng–Robinson equation of state or the Soave–Redlich–Kwong equation of state.
Virial equations of state.
Virial equation of state.
formula_38
Although usually not the most convenient equation of state, the virial equation is important because it can be derived directly from statistical mechanics. This equation is also called the Kamerlingh Onnes equation. If appropriate assumptions are made about the mathematical form of intermolecular forces, theoretical expressions can be developed for each of the coefficients. "A" is the first virial coefficient, which has a constant value of 1 and expresses the fact that, when the volume is large, all fluids behave like ideal gases. The second virial coefficient "B" corresponds to interactions between pairs of molecules, "C" to triplets, and so on. In principle, accuracy can be increased by considering higher-order terms. The coefficients "B", "C", "D", etc. are functions of temperature only.
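A truncated virial series is straightforward to evaluate; in the Python sketch below (illustrative only, with made-up placeholder coefficients standing in for fitted values of B and C at the chosen temperature) the compressibility factor Z = pV_m/(RT) is computed from the first three terms.

R = 8.314   # J/(mol K)

def compressibility(Vm, B, C):
    # Z = p Vm / (R T) = 1 + B/Vm + C/Vm^2, the series truncated after C.
    return 1.0 + B / Vm + C / Vm ** 2

def pressure(T, Vm, B, C):
    return compressibility(Vm, B, C) * R * T / Vm

B, C = -1.5e-4, 7.0e-9        # placeholder values, m^3/mol and m^6/mol^2
T, Vm = 300.0, 1.0e-3
print(compressibility(Vm, B, C))   # ~0.857
print(pressure(T, Vm, B, C))       # ~2.14e6 Pa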
The BWR equation of state.
formula_39
where "p" is the pressure, "ρ" is the molar density, and the remaining symbols are empirically determined, substance-specific parameters.
Values of the various parameters can be found in reference materials. The BWR equation of state has also frequently been used for the modelling of the Lennard-Jones fluid. There are several extensions and modifications of the classical BWR equation of state available.
The Benedict–Webb–Rubin–Starling equation of state is a modified BWR equation of state and can be written as
formula_40
Note that in this virial equation, the fourth and fifth virial terms are zero. The second virial coefficient is monotonically decreasing as temperature is lowered. The third virial coefficient is monotonically increasing as temperature is lowered.
The Lee–Kesler equation of state is based on the corresponding states principle, and is a modification of the BWR equation of state.
formula_41
Physically based equations of state.
There is a large number of physically based equations of state available today. Most of those are formulated in the Helmholtz free energy as a function of temperature and density (and, for mixtures, additionally the composition). The Helmholtz energy is formulated as a sum of multiple terms modelling different types of molecular interaction or molecular structures, e.g. the formation of chains or dipolar interactions. Hence, physically based equations of state model the effect of molecular size, attraction and shape as well as hydrogen bonding and polar interactions of fluids. In general, physically based equations of state give more accurate results than traditional cubic equations of state, especially for systems containing liquids or solids. Most physically based equations of state are built on a monomer term describing the Lennard-Jones fluid or the Mie fluid.
Perturbation theory-based models.
Perturbation theory is frequently used for modelling dispersive interactions in an equation of state. There is a large number of perturbation theory based equations of state available today, e.g. for the classical Lennard-Jones fluid. The two most important theories used for these types of equations of state are the Barker-Henderson perturbation theory and the Weeks–Chandler–Andersen perturbation theory.
Statistical associating fluid theory (SAFT).
An important contribution for physically based equations of state is the statistical associating fluid theory (SAFT) that contributes the Helmholtz energy that describes the association (a.k.a. hydrogen bonding) in fluids, which can also be applied for modelling chain formation (in the limit of infinite association strength). The SAFT equation of state was developed using statistical mechanical methods (in particular the perturbation theory of Wertheim) to describe the interactions between molecules in a system. The idea of a SAFT equation of state was first proposed by Chapman et al. in 1988 and 1989. Many different versions of the SAFT models have been proposed, but all use the same chain and association terms derived by Chapman et al.
Multiparameter equations of state.
Multiparameter equations of state are empirical equations of state that can be used to represent pure fluids with high accuracy. Multiparameter equations of state are empirical correlations of experimental data and are usually formulated in the Helmholtz free energy. The functional form of these models is in most parts not physically motivated. They can be usually applied in both liquid and gaseous states. Empirical multiparameter equations of state represent the Helmholtz energy of the fluid as the sum of ideal gas and residual terms. Both terms are explicit in temperature and density:
formula_42
with formula_43
The reduced density formula_44 and reduced temperature formula_45 are in most cases the critical values for the pure fluid. Because integration of the multiparameter equations of state is not required and thermodynamic properties can be determined using classical thermodynamic relations, there are few restrictions as to the functional form of the ideal or residual terms. Typical multiparameter equations of state use upwards of 50 fluid specific parameters, but are able to represent the fluid's properties with high accuracy. Multiparameter equations of state are available currently for about 50 of the most common industrial fluids including refrigerants. The IAPWS95 reference equation of state for water is also a multiparameter equations of state. Mixture models for multiparameter equations of state exist, as well. Yet, multiparameter equations of state applied to mixtures are known to exhibit artifacts at times.
One example of such an equation of state is the form proposed by Span and Wagner.
formula_46
This is a somewhat simpler form that is intended to be used more in technical applications. Equations of state that require a higher accuracy use a more complicated form with more terms.
List of further equations of state.
Stiffened equation of state.
When considering water under very high pressures, in situations such as underwater nuclear explosions, sonic shock lithotripsy, and sonoluminescence, the stiffened equation of state is often used:
formula_47
where formula_48 is the internal energy per unit mass, formula_49 is an empirically determined constant typically taken to be about 6.1, and formula_50 is another constant, representing the molecular attraction between water molecules. The magnitude of the correction is about 2 gigapascals (20,000 atmospheres).
The equation is stated in this form because the speed of sound in water is given by formula_51.
Thus water behaves as though it is an ideal gas that is "already" under about 20,000 atmospheres (2 GPa) pressure, which explains why water is commonly assumed to be incompressible: when the external pressure changes from 1 atmosphere to 2 atmospheres (100 kPa to 200 kPa), the water behaves as an ideal gas would when changing from 20,001 to 20,002 atmospheres (2000.1 MPa to 2000.2 MPa).
This equation mispredicts the specific heat capacity of water but few simple alternatives are available for severely nonisentropic processes such as strong shocks.
Morse oscillator equation of state.
An equation of state of Morse oscillator has been derived, and it has the following form:
formula_52
where formula_53 is the first-order virial parameter, which depends on the temperature; formula_54 is the second-order virial parameter of the Morse oscillator, which depends on the parameters of the Morse oscillator in addition to the absolute temperature; and formula_55 is the fractional volume of the system.
Ultrarelativistic equation of state.
An ultrarelativistic fluid has equation of state
formula_56
where formula_1 is the pressure, formula_57 is the mass density, and formula_58 is the speed of sound.
Ideal Bose equation of state.
The equation of state for an ideal Bose gas is
formula_59
where "α" is an exponent specific to the system (e.g. in the absence of a potential field, α = 3/2), "z" is exp("μ"/"k"B"T") where "μ" is the chemical potential, Li is the polylogarithm, ζ is the Riemann zeta function, and "T""c" is the critical temperature at which a Bose–Einstein condensate begins to form.
Jones–Wilkins–Lee equation of state for explosives (JWL equation).
The equation of state from Jones–Wilkins–Lee is used to describe the detonation products of explosives.
formula_60
The ratio formula_61 is defined by using formula_62, which is the density of the explosive (solid part), and formula_63, which is the density of the detonation products. The parameters formula_64, formula_65, formula_66, formula_67 and formula_68 are given by several references. In addition, the initial density (solid part) formula_69, speed of detonation formula_70, Chapman–Jouguet pressure formula_71 and the chemical energy per unit volume of the explosive formula_72 are given in such references. These parameters are obtained by fitting the JWL-EOS to experimental results; typical parameters for common explosives are tabulated in such references.
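Once the fitted parameters are known, the JWL pressure is simple to evaluate. The Python sketch below is illustrative and deliberately uses arbitrary placeholder numbers, not fitted data for any real explosive.

import math

def jwl_pressure(V, A, B, R1, R2, omega, e0):
    # p = A (1 - w/(R1 V)) exp(-R1 V) + B (1 - w/(R2 V)) exp(-R2 V) + w e0 / V,
    # with V = rho_e / rho the expansion ratio of the detonation products.
    return (A * (1.0 - omega / (R1 * V)) * math.exp(-R1 * V)
            + B * (1.0 - omega / (R2 * V)) * math.exp(-R2 * V)
            + omega * e0 / V)

# Placeholder parameters chosen only to exercise the formula (pressures in GPa);
# real values come from the fits reported in the literature.
A, B, R1, R2, omega, e0 = 350.0, 4.0, 4.2, 1.0, 0.3, 6.0
for V in (1.0, 2.0, 4.0):
    print(V, jwl_pressure(V, A, B, R1, R2, omega, e0))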
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(p, V, T) = 0"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "V"
},
{
"math_id": 3,
"text": "T"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "V_m"
},
{
"math_id": 6,
"text": "\\frac{V}{n}"
},
{
"math_id": 7,
"text": "R"
},
{
"math_id": 8,
"text": "p_c"
},
{
"math_id": 9,
"text": "V_c"
},
{
"math_id": 10,
"text": "T_c"
},
{
"math_id": 11,
"text": " pV = \\mathrm{constant}."
},
{
"math_id": 12,
"text": "\\frac{V_1}{T_1} = \\frac{V_2}{T_2}."
},
{
"math_id": 13,
"text": "p_\\text{total} = p_1 + p_2 + \\cdots + p_n = \\sum_{i=1}^n p_i."
},
{
"math_id": 14,
"text": "0~^{\\circ}\\mathrm{C} = 273.15~\\mathrm{K}"
},
{
"math_id": 15,
"text": "pV_m = R \\left(T_C + 273.15\\ {}^\\circ\\text{C}\\right)."
},
{
"math_id": 16,
"text": "\\left(P+a\\frac1{V_m^2}\\right)(V_m-b)=R T"
},
{
"math_id": 17,
"text": "a"
},
{
"math_id": 18,
"text": "b"
},
{
"math_id": 19,
"text": "pV = nRT."
},
{
"math_id": 20,
"text": "f(p, V, T) = pV - nRT = 0."
},
{
"math_id": 21,
"text": "p = \\rho(\\gamma - 1)e"
},
{
"math_id": 22,
"text": "\\rho"
},
{
"math_id": 23,
"text": "\\gamma = C_p/C_v"
},
{
"math_id": 24,
"text": "e = C_v T"
},
{
"math_id": 25,
"text": "C_v"
},
{
"math_id": 26,
"text": "C_p"
},
{
"math_id": 27,
"text": "m"
},
{
"math_id": 28,
"text": "s"
},
{
"math_id": 29,
"text": "N"
},
{
"math_id": 30,
"text": "p= \\frac{(2s+1)\\sqrt{2m^3k_\\text{B}^5T^5}}{3\\pi^2\\hbar^3}\\int_0^\\infty\\frac{z^{3/2}\\,\\mathrm{d}z}{e^{z-\\mu/(k_\\text{B} T)}\\pm 1}"
},
{
"math_id": 31,
"text": "k_\\text{B}"
},
{
"math_id": 32,
"text": "\\mu(T,N/V)"
},
{
"math_id": 33,
"text": "\\frac{N}{V}=\\frac{(2s+1)(m k_\\text{B}T)^{3/2}}{\\sqrt 2\\pi^2\\hbar^3}\\int_0^\\infty\\frac{z^{1/2}\\,\\mathrm{d}z}{e^{z-\\mu / (k_\\text{B} T)}\\pm 1}."
},
{
"math_id": 34,
"text": "e^{\\mu / (k_\\text{B} T)}\\ll 1"
},
{
"math_id": 35,
"text": "e^{\\mu/(k_\\text{B} T)}\\ll 1"
},
{
"math_id": 36,
"text": "pV = N k_\\text{B} T\\left[1\\pm\\frac{\\pi^{3/2}}{2(2s+1)} \\frac{N\\hbar^3}{V(m k_\\text{B} T)^{3/2}}+\\cdots\\right]"
},
{
"math_id": 37,
"text": "N/V"
},
{
"math_id": 38,
"text": "\\frac{pV_m}{RT} = A + \\frac{B}{V_m} + \\frac{C}{V_m^2} + \\frac{D}{V_m^3} + \\cdots"
},
{
"math_id": 39,
"text": "\n p = \\rho RT +\n \\left(B_0 RT - A_0 - \\frac{C_0}{T^2} + \\frac{D_0}{T^3} - \\frac{E_0}{T^4}\\right) \\rho^2 +\n \\left(bRT - a - \\frac{d}{T}\\right) \\rho^3 +\n \\alpha\\left(a + \\frac{d}{T}\\right) \\rho^6 +\n \\frac{c\\rho^3}{T^2}\\left(1 + \\gamma\\rho^2\\right)\\exp\\left(-\\gamma\\rho^2\\right)\n"
},
{
"math_id": 40,
"text": "p=\\rho RT + \\left(B_0 RT-A_0 - \\frac{C_0}{T^2} + \\frac{D_0}{T^3} - \\frac{E_0}{T^4}\\right) \\rho^2 + \\left(bRT-a-\\frac{d}{T} + \\frac{c}{T^2}\\right) \\rho^3 + \\alpha\\left(a+\\frac{d}{T}\\right) \\rho^6 "
},
{
"math_id": 41,
"text": "\np = \\frac{RT}{V} \\left( 1 + \\frac{B}{V_r} + \\frac{C}{V_r^2} + \\frac{D}{V_r^5} + \\frac{c_4}{T_r^3 V_r^2} \\left( \\beta + \\frac{\\gamma}{V_r^2} \\right) \\exp \\left( \\frac{-\\gamma}{V_r^2} \\right) \\right)\n"
},
{
"math_id": 42,
"text": "\\frac{a(T, \\rho)}{RT} = \n \\frac{a^\\mathrm{ideal\\,gas}(\\tau, \\delta) + a^\\textrm{residual}(\\tau, \\delta)}{RT}"
},
{
"math_id": 43,
"text": "\\tau = \\frac{T_r}{T}, \\delta = \\frac{\\rho}{\\rho_r}"
},
{
"math_id": 44,
"text": "\\rho_r"
},
{
"math_id": 45,
"text": "T_r"
},
{
"math_id": 46,
"text": "\na^\\mathrm{residual} = \\sum_{i=1}^8 \\sum_{j=-8}^{12} n_{i,j} \\delta^i \\tau^{j/8} + \\sum_{i=1}^5 \\sum_{j=-8}^{24} n_{i,j} \\delta^i \\tau^{j/8} \\exp \\left( -\\delta \\right) + \\sum_{i=1}^5 \\sum_{j=16}^{56} n_{i,j} \\delta^i \\tau^{j/8} \\exp \\left( -\\delta^2 \\right) + \\sum_{i=2}^4 \\sum_{j=24}^{38} n_{i,j} \\delta^i \\tau^{j/2} \\exp \\left( -\\delta^3 \\right)\n"
},
{
"math_id": 47,
"text": "p = \\rho(\\gamma - 1)e - \\gamma p^0 \\,"
},
{
"math_id": 48,
"text": "e"
},
{
"math_id": 49,
"text": "\\gamma"
},
{
"math_id": 50,
"text": "p^0"
},
{
"math_id": 51,
"text": "c^2 = \\gamma\\left(p + p^0\\right)/\\rho"
},
{
"math_id": 52,
"text": "p=\\Gamma_1\\nu+\\Gamma_2\\nu^2"
},
{
"math_id": 53,
"text": "\\Gamma_1"
},
{
"math_id": 54,
"text": "\\Gamma_2"
},
{
"math_id": 55,
"text": "\\nu"
},
{
"math_id": 56,
"text": "p = \\rho_m c_s^2"
},
{
"math_id": 57,
"text": "\\rho_m"
},
{
"math_id": 58,
"text": "c_s"
},
{
"math_id": 59,
"text": "p V_m =\n RT~\\frac{\\operatorname{Li}_{\\alpha+1}(z)}{\\zeta(\\alpha)}\n \\left(\\frac{T}{T_c}\\right)^\\alpha\n"
},
{
"math_id": 60,
"text": "p = A \\left( 1 - \\frac{\\omega}{R_1 V} \\right) \\exp(-R_1 V) + B \\left( 1 - \\frac{\\omega}{R_2 V} \\right) \\exp\\left(-R_2 V\\right) + \\frac{\\omega e_0}{V}"
},
{
"math_id": 61,
"text": " V = \\rho_e / \\rho "
},
{
"math_id": 62,
"text": " \\rho_e "
},
{
"math_id": 63,
"text": " \\rho "
},
{
"math_id": 64,
"text": " A "
},
{
"math_id": 65,
"text": " B "
},
{
"math_id": 66,
"text": " R_1 "
},
{
"math_id": 67,
"text": " R_2 "
},
{
"math_id": 68,
"text": " \\omega "
},
{
"math_id": 69,
"text": " \\rho_0 "
},
{
"math_id": 70,
"text": " V_D "
},
{
"math_id": 71,
"text": " P_{CJ} "
},
{
"math_id": 72,
"text": " e_0 "
}
] | https://en.wikipedia.org/wiki?curid=9908 |
990809 | Moving average | Type of statistical measure over subsets of a dataset
In statistics, a moving average (rolling average or running average or moving mean or rolling mean) is a calculation to analyze data points by creating a series of averages of different selections of the full data set. Variations include: simple, cumulative, or weighted forms.
Mathematically, a moving average is a type of convolution. Thus in signal processing it is viewed as a low-pass finite impulse response filter. Because the boxcar function outlines its filter coefficients, it is called a boxcar filter. It is sometimes followed by downsampling.
Given a series of numbers and a fixed subset size, the first element of the moving average is obtained by taking the average of the initial fixed subset of the number series. Then the subset is modified by "shifting forward"; that is, excluding the first number of the series and including the next value in the subset.
A moving average is commonly used with time series data to smooth out short-term fluctuations and highlight longer-term trends or cycles. The threshold between short-term and long-term depends on the application, and the parameters of the moving average will be set accordingly. It is also used in economics to examine gross domestic product, employment or other macroeconomic time series. When used with non-time series data, a moving average filters higher frequency components without any specific connection to time, although typically some kind of ordering is implied. Viewed simplistically it can be regarded as smoothing the data.
Simple moving average.
In financial applications a simple moving average (SMA) is the unweighted mean of the previous formula_0 data-points. However, in science and engineering, the mean is normally taken from an equal number of data on either side of a central value. This ensures that variations in the mean are aligned with the variations in the data rather than being shifted in time. An example of a simple equally weighted running mean is the mean over the last formula_0 entries of a data-set containing formula_1 entries. Let those data-points be formula_2. This could be closing prices of a stock. The mean over the last formula_0 data-points (days in this example) is denoted as formula_3 and calculated as:
formula_4
When calculating the next mean formula_5 with the same sampling width formula_0 the range from formula_6 to formula_7 is considered. A new value formula_8 comes into the sum and the oldest value formula_9 drops out. This simplifies the calculations by reusing the previous mean formula_10.
formula_11
This means that the moving average filter can be computed quite cheaply on real time data with a FIFO / circular buffer and only 3 arithmetic steps.
During the initial filling of the FIFO / circular buffer, the sampling window is equal to the data-set size, thus formula_12, and the average calculation is performed as a cumulative moving average.
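As an illustration, the constant-time update above can be sketched in Python; the function name `running_sma`, the use of a deque as the FIFO/circular buffer, and the example price series are choices made for this sketch rather than anything prescribed by the article.

```python
from collections import deque

def running_sma(stream, k):
    """Yield the simple moving average of the last k values seen so far.

    While fewer than k values have arrived, the average of all values seen
    (a cumulative moving average) is yielded instead, mirroring the
    FIFO / circular-buffer behaviour described above.
    """
    window = deque(maxlen=k)    # acts as the FIFO / circular buffer
    total = 0.0
    for value in stream:
        if len(window) == k:
            total -= window[0]  # oldest value drops out of the sum
        window.append(value)    # new value comes in (deque evicts the oldest)
        total += value
        yield total / len(window)

# Example: 3-point moving average of a short price series
prices = [11, 12, 13, 14, 15, 16, 17]
print(list(running_sma(prices, 3)))  # [11.0, 11.5, 12.0, 13.0, 14.0, 15.0, 16.0]
```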
The period selected (formula_0) depends on the type of movement of interest, such as short, intermediate, or long-term.
If the data used are not centered around the mean, a simple moving average lags behind the latest datum by half the sample width. An SMA can also be disproportionately influenced by old data dropping out or new data coming in. One characteristic of the SMA is that if the data has a periodic fluctuation, then applying an SMA of that period will eliminate that variation (the average always containing one complete cycle). But a perfectly regular cycle is rarely encountered.
For a number of applications, it is advantageous to avoid the shifting induced by using only "past" data. Hence a central moving average can be computed, using data equally spaced on either side of the point in the series where the mean is calculated. This requires using an odd number of points in the sample window.
A major drawback of the SMA is that it lets through a significant amount of the signal shorter than the window length. Worse, it "actually inverts it." This can lead to unexpected artifacts, such as peaks in the smoothed result appearing where there were troughs in the data. It also leads to the result being less smooth than expected since some of the higher frequencies are not properly removed.
Its frequency response is a type of low-pass filter called sinc-in-frequency.
Continuous moving average.
The continuous moving average is defined by the following integral. The formula_13-environment formula_14 around formula_15 determines how strongly the graph of the function is smoothed.
formula_16
The continuous moving average of the function formula_17 is defined as:
formula_18
A larger formula_19 smooths the source function formula_17 more strongly. Animations can show how the moving average changes for different values of formula_19. The fraction formula_20 is used because formula_21 is the width of the integration interval.
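As a rough numerical illustration of this definition, the following Python sketch approximates the integral with a midpoint sum; the function name, the step count and the example function are arbitrary choices made here.

```python
def continuous_moving_average(f, x, eps, steps=1000):
    """Approximate (1 / (2 * eps)) times the integral of f over [x - eps, x + eps]
    with a midpoint sum; a larger eps gives stronger smoothing."""
    h = 2.0 * eps / steps
    total = sum(f(x - eps + (i + 0.5) * h) for i in range(steps))
    return total * h / (2.0 * eps)

# Example: smoothing f(t) = t**2 around x = 1; the exact value is 1 + eps**2 / 3.
f = lambda t: t * t
print(continuous_moving_average(f, 1.0, 0.3))  # ~ 1.03
print(1 + 0.3 ** 2 / 3)                        # exact: 1.03
```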
Cumulative average.
In a cumulative average (CA), the data arrive in an ordered datum stream, and the user would like to get the average of all of the data up until the current datum. For example, an investor may want the average price of all of the stock transactions for a particular stock up until the current time. As each new transaction occurs, the average price at the time of the transaction can be calculated for all of the transactions up to that point using the cumulative average, typically an equally weighted average of the sequence of "n" values formula_22 up to the current time:
formula_23
The brute-force method to calculate this would be to store all of the data and calculate the sum and divide by the number of points every time a new datum arrived. However, it is possible to simply update the cumulative average as a new value formula_24 becomes available, using the formula
formula_25
Thus the current cumulative average for a new datum is equal to the previous cumulative average, times "n", plus the latest datum, all divided by the number of points received so far, "n"+1. When all of the data arrive ("n" = "N"), then the cumulative average will equal the final average. It is also possible to store a running total of the data as well as the number of points and divide the total by the number of points to get the CA each time a new datum arrives.
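A minimal Python sketch of this constant-time update, written in the equivalent incremental form derived below; the function name and example values are invented for this illustration.

```python
def cumulative_averages(stream):
    """Yield the running (cumulative) average after each new datum, using the
    incremental form CA_{n+1} = CA_n + (x_{n+1} - CA_n) / (n + 1)."""
    ca = 0.0
    for n, x in enumerate(stream):   # n points have been averaged before x arrives
        ca += (x - ca) / (n + 1)
        yield ca

print(list(cumulative_averages([2, 4, 6, 8])))  # [2.0, 3.0, 4.0, 5.0]
```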
The derivation of the cumulative average formula is straightforward. Using
formula_26
and similarly for "n" + 1, it is seen that
formula_27
formula_28
Solving this equation for formula_29 results in
formula_30
Weighted moving average.
A weighted average is an average that has multiplying factors to give different weights to data at different positions in the sample window. Mathematically, the weighted moving average is the convolution of the data with a fixed weighting function. One application is removing pixelization from a digital graphical image.
In the financial field, and more specifically in the analyses of financial data, a weighted moving average (WMA) has the specific meaning of weights that decrease in arithmetical progression. In an "n"-day WMA the latest day has weight "n", the second latest formula_31, etc., down to one.
formula_32
The denominator is a triangle number equal to formula_33 In the more general case the denominator will always be the sum of the individual weights.
When calculating the WMA across successive values, the difference between the numerators of formula_34 and formula_35 is formula_36. If we denote the sum formula_37 by formula_38, then
formula_39
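The running Total and Numerator updates above translate directly into code. The following Python sketch is illustrative only; the function name, variable names and example data are assumptions of this example, and it assumes the series has at least "n" points.

```python
def weighted_moving_averages(data, n):
    """Weighted moving averages with weights n, n-1, ..., 1 (newest to oldest),
    maintained incrementally with the running Total and Numerator from above.
    Assumes len(data) >= n."""
    denom = n * (n + 1) // 2                                  # triangle number
    total = sum(data[:n])                                     # p_M + ... + p_(M-n+1)
    numer = sum((i + 1) * p for i, p in enumerate(data[:n]))  # 1*oldest + ... + n*newest
    out = [numer / denom]
    for m in range(n, len(data)):
        numer += n * data[m] - total    # Numerator_(M+1) = Numerator_M + n*p_(M+1) - Total_M
        total += data[m] - data[m - n]  # Total_(M+1) = Total_M + p_(M+1) - p_(M-n+1)
        out.append(numer / denom)
    return out

# 3-day WMA of a short series (weights 3, 2, 1 from newest to oldest)
print(weighted_moving_averages([1, 2, 3, 4, 5], 3))  # [2.33..., 3.33..., 4.33...]
```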
The weights decrease from the highest weight for the most recent data down to zero. They can be compared to the weights in the exponential moving average which follows.
Exponential moving average.
An exponential moving average (EMA), also known as an exponentially weighted moving average (EWMA), is a first-order infinite impulse response filter that applies weighting factors which decrease exponentially. The weighting for each older datum decreases exponentially, never reaching zero.
This formulation is according to Hunter (1986).
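The exact formulation referenced above is not reproduced here. As an illustrative sketch only, the following Python code uses one common EMA recurrence, s_t = α·x_t + (1 − α)·s_{t−1}, seeded with the first observation; the function name, the seeding choice and the example data are assumptions of this example.

```python
def exponential_moving_averages(data, alpha):
    """One common EMA recurrence: s_t = alpha * x_t + (1 - alpha) * s_{t-1},
    with the first observation used as the starting value. The weight applied
    to each older datum decays by a factor (1 - alpha) per step, so it
    approaches but never reaches zero."""
    it = iter(data)
    s = next(it)
    out = [s]
    for x in it:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out

print(exponential_moving_averages([10, 11, 12, 13], alpha=0.5))
# [10, 10.5, 11.25, 12.125]
```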
Other weightings.
Other weighting systems are used occasionally – for example, in share trading a volume weighting will weight each time period in proportion to its trading volume.
A further weighting, used by actuaries, is Spencer's 15-Point Moving Average (a central moving average). Its symmetric weight coefficients are [−3, −6, −5, 3, 21, 46, 67, 74, 67, 46, 21, 3, −5, −6, −3], which factor as a convolution of simpler filters and leave samples of any quadratic or cubic polynomial unchanged.
Outside the world of finance, weighted running means have many forms and applications. Each weighting function or "kernel" has its own characteristics. In engineering and science the frequency and phase response of the filter is often of primary importance in understanding the desired and undesired distortions that a particular filter will apply to the data.
A mean does not just "smooth" the data. A mean is a form of low-pass filter. The effects of the particular filter used should be understood in order to make an appropriate choice. On this point, the French version of this article discusses the spectral effects of 3 kinds of means (cumulative, exponential, Gaussian).
Moving median.
From a statistical point of view, the moving average, when used to estimate the underlying trend in a time series, is susceptible to rare events such as rapid shocks or other anomalies. A more robust estimate of the trend is the simple moving median over "n" time points:
formula_40
where the median is found by, for example, sorting the values inside the brackets and finding the value in the middle. For larger values of "n", the median can be efficiently computed by updating an indexable skiplist.
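An illustrative Python sketch of the simple moving median; it keeps the window in a plain sorted list rather than the indexable skiplist mentioned above, which is adequate only for modest window sizes. The function name and example data are invented for this sketch.

```python
from bisect import insort, bisect_left
from collections import deque

def moving_medians(data, n):
    """Simple moving median over a window of n points, emitted once the
    window is full. The window contents are mirrored in a sorted list so the
    middle value can be read off directly."""
    window = deque()
    sorted_window = []
    out = []
    for x in data:
        window.append(x)
        insort(sorted_window, x)
        if len(window) > n:
            old = window.popleft()
            del sorted_window[bisect_left(sorted_window, old)]
        if len(window) == n:
            m = n // 2
            if n % 2:
                out.append(sorted_window[m])
            else:
                out.append((sorted_window[m - 1] + sorted_window[m]) / 2)
    return out

# Example: a 3-point moving median ignores the single spike at 100
print(moving_medians([1, 2, 100, 3, 4, 5], 3))  # [2, 3, 4, 4]
```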
Statistically, the moving average is optimal for recovering the underlying trend of the time series when the fluctuations about the trend are normally distributed. However, the normal distribution does not place high probability on very large deviations from the trend, which explains why such deviations will have a disproportionately large effect on the trend estimate. It can be shown that if the fluctuations are instead assumed to be Laplace distributed, then the moving median is statistically optimal. For a given variance, the Laplace distribution places higher probability on rare events than does the normal, which explains why the moving median tolerates shocks better than the moving mean.
When the simple moving median above is central, the smoothing is identical to the median filter, which has applications in, for example, image signal processing. Because it is found by sorting the values inside the time window and taking the middle value, the moving median is more resistant to rare events such as rapid shocks or anomalies than the moving average, and therefore provides a more reliable and stable estimate of the underlying trend when the time series is affected by large deviations.
Moving average regression model.
In a moving average regression model, a variable of interest is assumed to be a weighted moving average of unobserved independent error terms; the weights in the moving average are parameters to be estimated.
The moving average used for smoothing and the moving-average regression model are often confused due to their names, but while they share many similarities, they represent distinct methods and are used in very different contexts.
| [
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "p_1, p_2, \\dots, p_n"
},
{
"math_id": 3,
"text": "\\textit{SMA}_{k}"
},
{
"math_id": 4,
"text": "\\begin{align}\n \\textit{SMA}_{k} &= \\frac{p_{n-k+1} + p_{n-k+2} + \\cdots + p_{n}}{k} \\\\\n &= \\frac{1}{k} \\sum_{i=n-k+1}^{n} p_{i}\n\\end{align}"
},
{
"math_id": 5,
"text": "\\textit{SMA}_{k,\\text{next}}"
},
{
"math_id": 6,
"text": " n - k + 2 "
},
{
"math_id": 7,
"text": " n+1 "
},
{
"math_id": 8,
"text": "p_{n+1}"
},
{
"math_id": 9,
"text": "p_{n-k+1}"
},
{
"math_id": 10,
"text": "\\textit{SMA}_{k,\\text{prev}}"
},
{
"math_id": 11,
"text": "\n\\begin{align}\n \\textit{SMA}_{k, \\text{next}} &= \\frac{1}{k} \\sum_{i=n-k+2}^{n+1} p_{i} \\\\\n &= \\frac{1}{k} \\Big( \\underbrace{ p_{n-k+2} + p_{n-k+3} + \\dots + p_{n} + p_{n+1} }_{ \\sum_{i=n-k+2}^{n+1} p_{i} } + \\underbrace{ p_{n-k+1} - p_{n-k+1} }_{= 0} \\Big) \\\\\n &= \\underbrace{ \\frac{1}{k} \\Big( p_{n-k+1} + p_{n-k+2} + \\dots + p_{n} \\Big) }_{= \\textit{SMA}_{k, \\text{prev}}} - \\frac{p_{n-k+1}}{k} + \\frac{p_{n+1}}{k} \\\\\n &= \\textit{SMA}_{k, \\text{prev}} + \\frac{1}{k} \\Big( p_{n+1} - p_{n-k+1} \\Big)\n\\end{align}\n"
},
{
"math_id": 12,
"text": " k = n "
},
{
"math_id": 13,
"text": "\\varepsilon"
},
{
"math_id": 14,
"text": " [ x_o-\\varepsilon, x_o+\\varepsilon] "
},
{
"math_id": 15,
"text": "x_o"
},
{
"math_id": 16,
"text": "\n \\begin{array}{rrcl} \n f: & \\mathbb{R} & \\rightarrow & \\mathbb{R} \\\\ \n & x & \\mapsto & f\\left( x \\right) \n \\end{array} \n"
},
{
"math_id": 17,
"text": "f"
},
{
"math_id": 18,
"text": "\n \\begin{array}{rrcl} \n MA_f: & \\mathbb{R} & \\rightarrow & \\mathbb{R} \\\\ \n & x & \\mapsto & \\displaystyle \\frac{1}{2\\cdot \\varepsilon} \\cdot \\int_{x_o -\\varepsilon}^{x_o +\\varepsilon} f\\left( t \\right) \\, dt \n \\end{array} \n "
},
{
"math_id": 19,
"text": " \\varepsilon > 0 "
},
{
"math_id": 20,
"text": "\\frac{1}{2\\cdot \\varepsilon} "
},
{
"math_id": 21,
"text": " 2\\cdot \\varepsilon "
},
{
"math_id": 22,
"text": "x_1. \\ldots, x_n"
},
{
"math_id": 23,
"text": "\\textit{CA}_n = {{x_1 + \\cdots + x_n} \\over n}\\,."
},
{
"math_id": 24,
"text": "x_{n+1}"
},
{
"math_id": 25,
"text": "\\textit{CA}_{n+1} = {{x_{n+1} + n \\cdot \\textit{CA}_n} \\over {n+1}}."
},
{
"math_id": 26,
"text": "x_1 + \\cdots + x_n = n \\cdot \\textit{CA}_n"
},
{
"math_id": 27,
"text": "x_{n+1} = (x_1 + \\cdots + x_{n+1}) - (x_1 + \\cdots + x_n)"
},
{
"math_id": 28,
"text": "x_{n+1} = (n + 1) \\cdot \\textit{CA}_{n + 1} - n \\cdot \\textit{CA}_n "
},
{
"math_id": 29,
"text": "\\textit{CA}_{n+1}"
},
{
"math_id": 30,
"text": "\\begin{align}\n\\textit{CA}_{n+1} & = {x_{n+1} + n \\cdot \\textit{CA}_n \\over {n+1}} \\\\[6pt]\n& = {x_{n+1} + (n + 1 - 1) \\cdot \\textit{CA}_n \\over {n+1}} \\\\[6pt]\n& = {(n + 1) \\cdot \\textit{CA}_n + x_{n+1} - \\textit{CA}_n \\over {n+1}} \\\\[6pt]\n& = {\\textit{CA}_n} + {{x_{n+1} - \\textit{CA}_n} \\over {n+1}}\n\\end{align}"
},
{
"math_id": 31,
"text": "n-1"
},
{
"math_id": 32,
"text": "\\text{WMA}_{M} = { n p_{M} + (n-1) p_{M-1} + \\cdots + 2 p_{((M-n)+2)} + p_{((M-n)+1)} \\over n + (n-1) + \\cdots + 2 + 1}"
},
{
"math_id": 33,
"text": "\\frac{n(n + 1)}{2}."
},
{
"math_id": 34,
"text": "\\text{WMA}_{M+1}"
},
{
"math_id": 35,
"text": "\\text{WMA}_{M}"
},
{
"math_id": 36,
"text": "np_{M+1} - p_{M} - \\dots - p_{M-n+1}"
},
{
"math_id": 37,
"text": "p_{M} + \\dots + p_{M-n+1}"
},
{
"math_id": 38,
"text": "\\text{Total}_{M}"
},
{
"math_id": 39,
"text": "\\begin{align}\n \\text{Total}_{M+1} &= \\text{Total}_M + p_{M+1} - p_{M-n+1} \\\\[3pt]\n\\text{Numerator}_{M+1} &= \\text{Numerator}_M + n p_{M+1} - \\text{Total}_M \\\\[3pt]\n \\text{WMA}_{M+1} &= { \\text{Numerator}_{M+1} \\over n + (n-1) + \\cdots + 2 + 1}\n\\end{align}"
},
{
"math_id": 40,
"text": "\\widetilde{p}_\\text{SM} = \\text{Median}( p_M, p_{M-1}, \\ldots, p_{M-n+1} )"
}
] | https://en.wikipedia.org/wiki?curid=990809 |
9908503 | Core (graph theory) | In the mathematical field of graph theory, a core is a notion that describes behavior of a graph with respect to graph homomorphisms.
Definition.
A graph formula_0 is a core if every homomorphism formula_1 is an isomorphism, that is, a bijection of the vertices of formula_0.
A core of a graph formula_2 is a graph formula_0 such that there exist homomorphisms from formula_2 to formula_0 and from formula_0 to formula_2, and formula_0 is minimal with this property (equivalently, formula_0 is itself a core).
Two graphs are said to be homomorphism equivalent or hom-equivalent if they have isomorphic cores.
Properties.
Every finite graph has a core, which is determined uniquely, up to isomorphism. The core of a graph "G" is always an induced subgraph of "G". If formula_3 and formula_4 then the graphs formula_2 and formula_5 are necessarily homomorphically equivalent.
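As an illustration of the definition above, the following Python sketch tests by brute force whether a small graph is a core; the function name, the edge representation and the examples are choices made for this sketch, and the search is exponential, so it is usable only for very small graphs.

```python
from itertools import product

def is_core(vertices, edges):
    """Brute-force test of the definition: a finite graph is a core when every
    homomorphism from the graph to itself is a bijection of its vertices.
    Edges are undirected pairs."""
    vertices = list(vertices)
    edge_set = {frozenset(e) for e in edges}
    for images in product(vertices, repeat=len(vertices)):
        f = dict(zip(vertices, images))
        # f is a homomorphism if it maps every edge onto an edge
        if all(frozenset((f[u], f[v])) in edge_set for u, v in edge_set):
            if len(set(f.values())) != len(vertices):  # non-bijective endomorphism
                return False
    return True

# The triangle K3 is a core; the path on three vertices is not, because it
# maps homomorphically onto a single edge.
print(is_core({0, 1, 2}, [(0, 1), (1, 2), (2, 0)]))  # True
print(is_core({0, 1, 2}, [(0, 1), (1, 2)]))          # False
```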
Computational complexity.
It is NP-complete to test whether a graph has a homomorphism to a proper subgraph, and co-NP-complete to test whether a graph is its own core (i.e. whether no such homomorphism exists). | [
{
"math_id": 0,
"text": "C"
},
{
"math_id": 1,
"text": "f:C \\to C"
},
{
"math_id": 2,
"text": "G"
},
{
"math_id": 3,
"text": "G \\to H"
},
{
"math_id": 4,
"text": "H \\to G"
},
{
"math_id": 5,
"text": "H"
}
] | https://en.wikipedia.org/wiki?curid=9908503 |
9909979 | Nowhere-zero flow | Concept in graph theory
In graph theory, a nowhere-zero flow or NZ flow is a network flow that is nowhere zero. It is intimately connected (by duality) to coloring planar graphs.
Definitions.
Let "G" = ("V","E") be a digraph and let "M" be an abelian group. A map "φ": "E" → "M" is an "M"-circulation if for every vertex "v" ∈ "V"
formula_0
where "δ"+("v") denotes the set of edges out of "v" and "δ"−("v") denotes the set of edges into "v". Sometimes, this condition is referred to as Kirchhoff's law.
If "φ"("e") ≠ 0 for every "e" ∈ "E", we call "φ" a nowhere-zero flow, an M"-flow, or an NZ-flow. If "k" is an integer and 0 < |"φ"("e")| < "k" then "φ" is a k"-flow.
Other notions.
Let "G" = ("V","E") be an undirected graph. An orientation of "E" is a modular "k"-flow if for every vertex "v" ∈ "V" we have:
formula_1
Flow polynomial.
Let formula_4 be the number of "M"-flows on "G". It satisfies the deletion–contraction formula:
formula_5
Combining this with induction we can show formula_4 is a polynomial in formula_6 where formula_7 is the order of the group "M". We call formula_4 the flow polynomial of "G" and abelian group "M".
The above implies that two groups of equal order have an equal number of NZ flows. The order is the only group parameter that matters, not the structure of "M". In particular formula_8 if formula_9
The above results were proved by Tutte in 1953 when he was studying the Tutte polynomial, a generalization of the flow polynomial.
Flow-coloring duality.
Bridgeless Planar Graphs.
There is a duality between "k"-face colorings and "k"-flows for bridgeless planar graphs. To see this, let "G" be a directed bridgeless planar graph with a proper "k"-face-coloring with colors formula_10 Construct a map
formula_11
by the following rule: if the edge "e" has a face of color "x" to the left and a face of color "y" to the right, then let "φ"("e") = "x" – "y". Then "φ" is a (NZ) "k"-flow since "x" and "y" must be different colors.
So if "G" and "G*" are planar dual graphs and "G*" is "k"-colorable (there is a coloring of the faces of "G"), then "G" has a NZ "k"-flow. Using induction on |"E"("G")| Tutte proved the converse is also true. This can be expressed concisely as:
formula_12
where the RHS is the flow number, the smallest "k" for which "G" permits a "k"-flow.
General Graphs.
The duality is true for general "M"-flows as well: given an "M"-face-coloring function formula_13, define formula_14, where "r"1 and "r"2 are the faces to the left and to the right of the edge "e", respectively. For every "M"-circulation formula_15 there is a coloring function formula_13 such that formula_16, and formula_13 is a proper face coloring if and only if formula_17 is a nowhere-zero "M"-flow.
The duality follows by combining the last two points. We can specialize to formula_18 to obtain the similar results for "k"-flows discussed above. Given this duality between NZ flows and colorings, and since we can define NZ flows for arbitrary graphs (not just planar), we can use this to extend face-colorings to non-planar graphs.
Existence of "k"-flows.
<templatestyles src="Unsolved/styles.css" />
Unsolved problem in mathematics:
Does every bridgeless graph have a nowhere zero 5-flow? Does every bridgeless graph that does not have the Petersen graph as a minor have a nowhere zero 4-flow?
Interesting questions arise when trying to find nowhere-zero "k"-flows for small values of "k". The following have been proven:
Jaeger's 4-flow Theorem. Every 4-edge-connected graph has a 4-flow.
Seymour's 6-flow Theorem. Every bridgeless graph has a 6-flow.
3-flow, 4-flow and 5-flow conjectures.
As of 2019, the following are currently unsolved (due to Tutte):
3-flow Conjecture. Every 4-edge-connected graph has a nowhere-zero 3-flow.
4-flow Conjecture. Every bridgeless graph that does not have the Petersen graph as a minor has a nowhere-zero 4-flow.
5-flow Conjecture. Every bridgeless graph has a nowhere-zero 5-flow.
The converse of the 4-flow Conjecture does not hold since the complete graph "K"11 contains a Petersen graph "and" a 4-flow. For bridgeless cubic graphs with no Petersen minor, 4-flows exist by the snark theorem (Seymour, et al 1998, not yet published). The four color theorem is equivalent to the statement that no snark is planar.
| [
{
"math_id": 0,
"text": "\\sum_{e \\in \\delta^+(v)} \\varphi(e) = \\sum_{e \\in \\delta^-(v)} \\varphi(e),"
},
{
"math_id": 1,
"text": "|\\delta^+(v)| \\equiv |\\delta^-(v)| \\bmod k."
},
{
"math_id": 2,
"text": "\\Z_k"
},
{
"math_id": 3,
"text": "h \\geq k"
},
{
"math_id": 4,
"text": "N_M(G)"
},
{
"math_id": 5,
"text": "N_M(G) = N_M(G/ e) - N_M(G\\setminus e)."
},
{
"math_id": 6,
"text": "|M|-1"
},
{
"math_id": 7,
"text": "|M|"
},
{
"math_id": 8,
"text": "N_{M_1}(G) = N_{M_2}(G)"
},
{
"math_id": 9,
"text": "|M_1| = |M_2|."
},
{
"math_id": 10,
"text": "\\{0, 1, \\ldots, k-1\\}."
},
{
"math_id": 11,
"text": "\\phi: E(G)\\to \\{-(k-1), \\ldots, -1, 0, 1, \\ldots, k-1\\}"
},
{
"math_id": 12,
"text": "\\chi(G^*) = \\phi(G),"
},
{
"math_id": 13,
"text": "c"
},
{
"math_id": 14,
"text": "\\phi_c(e) = c(r_1) - c(r_2)"
},
{
"math_id": 15,
"text": "\\phi"
},
{
"math_id": 16,
"text": "\\phi = \\phi_c"
},
{
"math_id": 17,
"text": "\\phi_c"
},
{
"math_id": 18,
"text": "M = \\Z_k"
},
{
"math_id": 19,
"text": "K = \\Z_2 \\times \\Z_2"
},
{
"math_id": 20,
"text": "\\Z"
}
] | https://en.wikipedia.org/wiki?curid=9909979 |
991 | Absolute value | Distance from zero to a number
In mathematics, the absolute value or modulus of a real number formula_0, denoted formula_1, is the non-negative value of formula_0 without regard to its sign. Namely, formula_2 if formula_0 is a positive number, and formula_3 if formula_0 is negative (in which case negating formula_0 makes formula_4 positive), and formula_5. For example, the absolute value of 3 is 3, and the absolute value of −3 is also 3. The absolute value of a number may be thought of as its distance from zero.
Generalisations of the absolute value for real numbers occur in a wide variety of mathematical settings. For example, an absolute value is also defined for the complex numbers, the quaternions, ordered rings, fields and vector spaces. The absolute value is closely related to the notions of magnitude, distance, and norm in various mathematical and physical contexts.
Terminology and notation.
In 1806, Jean-Robert Argand introduced the term "module", meaning "unit of measure" in French, specifically for the "complex" absolute value, and it was borrowed into English in 1866 as the Latin equivalent "modulus". The term "absolute value" has been used in this sense from at least 1806 in French and 1857 in English. The notation | · |, with a vertical bar on each side, was introduced by Karl Weierstrass in 1841. Other names for "absolute value" include "numerical value" and "magnitude". In programming languages and computational software packages, the absolute value of formula_6 is generally represented by codice_0, or a similar expression.
The vertical bar notation also appears in a number of other mathematical contexts: for example, when applied to a set, it denotes its cardinality; when applied to a matrix, it denotes its determinant. Vertical bars denote the absolute value only for algebraic objects for which the notion of an absolute value is defined, notably an element of a normed division algebra, for example a real number, a complex number, or a quaternion. A closely related but distinct notation is the use of vertical bars for either the Euclidean norm or sup norm of a vector in formula_7, although double vertical bars with subscripts (formula_8 and formula_9, respectively) are a more common and less ambiguous notation.
Definition and properties.
Real numbers.
For any real number formula_0, the absolute value or modulus of formula_0 is denoted by formula_1, with a vertical bar on each side of the quantity, and is defined as
formula_10
The absolute value of formula_0 is thus always either a positive number or zero, but never negative. When formula_0 itself is negative (formula_11), then its absolute value is necessarily positive (formula_12).
From an analytic geometry point of view, the absolute value of a real number is that number's distance from zero along the real number line, and more generally the absolute value of the difference of two real numbers (their absolute difference) is the distance between them. The notion of an abstract distance function in mathematics can be seen to be a generalisation of the absolute value of the difference (see "Distance" below).
Since the square root symbol represents the unique "positive" square root, when applied to a positive number, it follows that
formula_13
This is equivalent to the definition above, and may be used as an alternative definition of the absolute value of real numbers.
The absolute value has the following four fundamental properties (formula_14, formula_15 are real numbers), which are used for generalization of this notion to other domains: non-negativity, |"a"| ≥ 0; positive-definiteness, |"a"| = 0 if and only if "a" = 0; multiplicativity, |"ab"| = |"a"| |"b"|; and subadditivity (the triangle inequality), |"a" + "b"| ≤ |"a"| + |"b"|.
Non-negativity, positive definiteness, and multiplicativity are readily apparent from the definition. To see that subadditivity holds, first note that formula_16 where formula_17, with its sign chosen to make the result positive. Now, since formula_18 and formula_19, it follows that, whichever of formula_20 is the value of formula_21, one has formula_22 for all real formula_0. Consequently, formula_23, as desired.
Some additional useful properties are given below. These are either immediate consequences of the definition or implied by the four fundamental properties above.
Two other useful properties concerning inequalities are: |"a"| ≤ "b" if and only if −"b" ≤ "a" ≤ "b", and |"a"| ≥ "b" if and only if "a" ≤ −"b" or "a" ≥ "b".
These relations may be used to solve inequalities involving absolute values. For example, |"x" − 3| ≤ 9 is equivalent to −9 ≤ "x" − 3 ≤ 9, that is, −6 ≤ "x" ≤ 12.
The absolute value, as "distance from zero", is used to define the absolute difference between arbitrary real numbers, the standard metric on the real numbers.
Complex numbers.
Since the complex numbers are not ordered, the definition given at the top for the real absolute value cannot be directly applied to complex numbers. However, the geometric interpretation of the absolute value of a real number as its distance from 0 can be generalised. The absolute value of a complex number is defined by the Euclidean distance of its corresponding point in the complex plane from the origin. This can be computed using the Pythagorean theorem: for any complex number
formula_25
where formula_0 and formula_26 are real numbers, the absolute value or modulus of formula_24 is denoted formula_27 and is defined by
formula_28
the Pythagorean addition of formula_0 and formula_26, where formula_29 and formula_30 denote the real and imaginary parts of formula_24, respectively. When the imaginary part formula_26 is zero, this coincides with the definition of the absolute value of the real number formula_0.
When a complex number formula_24 is expressed in its polar form as formula_31 its absolute value is formula_32
Since the product of any complex number formula_24 and its complex conjugate formula_33, with the same absolute value, is always the non-negative real number formula_34, the absolute value of a complex number formula_24 is the square root of formula_35 which is therefore called the absolute square or "squared modulus" of formula_24:
formula_36
This generalizes the alternative definition for reals: formula_37.
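A small Python illustration of these equivalent ways to compute the modulus; the variable names and the example value are arbitrary, and only standard-library calls (math.hypot, abs, cmath.polar) are used.

```python
import cmath
import math

z = 3 - 4j

# Three equivalent ways to obtain |z| for z = x + iy:
via_pythagoras = math.hypot(z.real, z.imag)           # sqrt(x**2 + y**2)
via_conjugate = math.sqrt((z * z.conjugate()).real)   # sqrt(z * conj(z))
via_builtin = abs(z)                                  # Python's built-in modulus

print(via_pythagoras, via_conjugate, via_builtin)     # 5.0 5.0 5.0
print(cmath.polar(z)[0])                              # r in the polar form, also 5.0
```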
The complex absolute value shares the four fundamental properties given above for the real absolute value. The identity formula_38 is a special case of multiplicativity that is often useful by itself.
Absolute value function.
The real absolute value function is continuous everywhere. It is differentiable everywhere except for "x" = 0. It is monotonically decreasing on the interval (−∞, 0] and monotonically increasing on the interval [0, +∞). Since a real number and its opposite have the same absolute value, it is an even function, and is hence not invertible. The real absolute value function is a piecewise linear, convex function.
For both real and complex numbers the absolute value function is idempotent (meaning that the absolute value of any absolute value is itself).
Relationship to the sign function.
The absolute value function of a real number returns its value irrespective of its sign, whereas the sign (or signum) function returns a number's sign irrespective of its value. The following equations show the relationship between these two functions:
formula_39
or
formula_40
and for "x" ≠ 0,
formula_41
Relationship to the max and min functions.
Let formula_42, then
formula_43
and
formula_44
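A quick Python spot-check of the two identities; the integer range sampled here is an arbitrary choice for illustration.

```python
# Spot-check of the identities |t - s| = -2*min(s, t) + s + t
#                          and |t - s| =  2*max(s, t) - s - t  (here on integers)
for s in range(-3, 4):
    for t in range(-3, 4):
        assert abs(t - s) == -2 * min(s, t) + s + t
        assert abs(t - s) == 2 * max(s, t) - s - t
print("identities hold for all sampled integer pairs")
```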
Derivative.
The real absolute value function has a derivative for every "x" ≠ 0, but is not differentiable at "x" = 0. Its derivative for "x" ≠ 0 is given by the step function:
formula_45
The real absolute value function is an example of a continuous function that achieves a global minimum where the derivative does not exist.
The subdifferential of | · | at "x" = 0 is the interval [−1, 1].
The complex absolute value function is continuous everywhere but complex differentiable "nowhere" because it violates the Cauchy–Riemann equations.
The second derivative of | · | with respect to x is zero everywhere except zero, where it does not exist. As a generalised function, the second derivative may be taken as two times the Dirac delta function.
Antiderivative.
The antiderivative (indefinite integral) of the real absolute value function is
formula_46
where C is an arbitrary constant of integration. This is not a complex antiderivative because complex antiderivatives can only exist for complex-differentiable (holomorphic) functions, which the complex absolute value function is not.
Derivatives of compositions.
The following two formulae are special cases of the chain rule:
formula_47
if the absolute value is inside a function, and
formula_48
if another function is inside the absolute value. The derivative is always discontinuous at formula_49 in the first case and where formula_50 in the second case.
Distance.
The absolute value is closely related to the idea of distance. As noted above, the absolute value of a real or complex number is the distance from that number to the origin, along the real number line, for real numbers, or in the complex plane, for complex numbers, and more generally, the absolute value of the difference of two real or complex numbers is the distance between them.
The standard Euclidean distance between two points
formula_51
and
formula_52
in Euclidean n-space is defined as:
formula_53
This can be seen as a generalisation, since for formula_54 and formula_55 real, i.e. in a 1-space, according to the alternative definition of the absolute value,
formula_56
and for formula_57 and formula_58 complex numbers, i.e. in a 2-space, the absolute value of their difference again equals the Euclidean distance between the corresponding points in the complex plane.
The above shows that the "absolute value"-distance, for real and complex numbers, agrees with the standard Euclidean distance, which they inherit as a result of considering them as one and two-dimensional Euclidean spaces, respectively.
The properties of the absolute value of the difference of two real or complex numbers: non-negativity, identity of indiscernibles, symmetry and the triangle inequality given above, can be seen to motivate the more general notion of a distance function as follows:
A real valued function d on a set "X" × "X" is called a metric (or a "distance function") on X, if it satisfies the following four axioms: non-negativity, "d"("a", "b") ≥ 0; identity of indiscernibles, "d"("a", "b") = 0 if and only if "a" = "b"; symmetry, "d"("a", "b") = "d"("b", "a"); and the triangle inequality, "d"("a", "c") ≤ "d"("a", "b") + "d"("b", "c").
Generalizations.
Ordered rings.
The definition of absolute value given for real numbers above can be extended to any ordered ring. That is, if a is an element of an ordered ring "R", then the absolute value of a, denoted by |"a"|, is defined to be:
formula_59
where −"a" is the additive inverse of a, 0 is the additive identity, and < and ≥ have the usual meaning with respect to the ordering in the ring.
Fields.
The four fundamental properties of the absolute value for real numbers can be used to generalise the notion of absolute value to an arbitrary field, as follows.
A real-valued function v on a field F is called an "absolute value" (also a "modulus", "magnitude", "value", or "valuation") if it satisfies the following four axioms: non-negativity, "v"("a") ≥ 0; positive-definiteness, "v"("a") = 0 if and only if "a" = 0; multiplicativity, "v"("ab") = "v"("a") "v"("b"); and subadditivity, "v"("a" + "b") ≤ "v"("a") + "v"("b"). Here 0 denotes the additive identity of F. It follows from positive-definiteness and multiplicativity that "v"(1) = 1, where 1 denotes the multiplicative identity of F. The real and complex absolute values defined above are examples of absolute values for an arbitrary field.
If v is an absolute value on F, then the function d on "F" × "F", defined by "d"("a", "b") = "v"("a" − "b"), is a metric and the following are equivalent: d satisfies the ultrametric inequality formula_60 for all "x", "y", "z" in F; the set formula_61 is bounded; formula_62 for every formula_63; formula_64 for all formula_65; formula_66 for all formula_67.
An absolute value which satisfies any (hence all) of the above conditions is said to be non-Archimedean, otherwise it is said to be Archimedean.
Vector spaces.
Again the fundamental properties of the absolute value for real numbers can be used, with a slight modification, to generalise the notion to an arbitrary vector space.
A real-valued function on a vector space V over a field F, represented as ‖ · ‖, is called an absolute value, but more usually a norm, if it satisfies the following axioms for all a in F, and v, u in V: non-negativity, ‖v‖ ≥ 0; positive-definiteness, ‖v‖ = 0 if and only if v is the zero vector; absolute homogeneity, ‖"a"v‖ = |"a"| ‖v‖; and subadditivity (the triangle inequality), ‖v + u‖ ≤ ‖v‖ + ‖u‖.
The norm of a vector is also called its "length" or "magnitude".
In the case of Euclidean space formula_68, the function defined by
formula_69
is a norm called the Euclidean norm. When the real numbers formula_70 are considered as the one-dimensional vector space formula_71, the absolute value is a norm, and is the p-norm (see Lp space) for any p. In fact the absolute value is the "only" norm on formula_71, in the sense that, for every norm ‖ · ‖ on formula_71, ‖"x"‖ = ‖1‖ ⋅ |"x"|.
The complex absolute value is a special case of the norm in an inner product space, which is identical to the Euclidean norm when the complex plane is identified as the Euclidean plane formula_72.
Composition algebras.
Every composition algebra "A" has an involution "x" → "x"* called its conjugation. The product in "A" of an element "x" and its conjugate "x"* is written "N"("x") = "x x"* and called the norm of x.
The real numbers formula_70, complex numbers formula_73, and quaternions formula_74 are all composition algebras with norms given by definite quadratic forms. The absolute value in these division algebras is given by the square root of the composition algebra norm.
In general the norm of a composition algebra may be a quadratic form that is not definite and has null vectors. However, as in the case of division algebras, when an element "x" has a non-zero norm, then "x" has a multiplicative inverse given by "x"*/"N"("x").
| [
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "|x|"
},
{
"math_id": 2,
"text": "|x|=x"
},
{
"math_id": 3,
"text": "|x|=-x"
},
{
"math_id": 4,
"text": "-x"
},
{
"math_id": 5,
"text": "|0|=0"
},
{
"math_id": 6,
"text": "x"
},
{
"math_id": 7,
"text": "\\R^n"
},
{
"math_id": 8,
"text": "\\|\\cdot\\|_2"
},
{
"math_id": 9,
"text": "\\|\\cdot\\|_\\infty"
},
{
"math_id": 10,
"text": "|x| =\n \\begin{cases}\n x, & \\text{if } x \\geq 0 \\\\\n -x, & \\text{if } x < 0.\n \\end{cases}\n "
},
{
"math_id": 11,
"text": "x < 0"
},
{
"math_id": 12,
"text": "|x|=-x>0"
},
{
"math_id": 13,
"text": "|x| = \\sqrt{x^2}."
},
{
"math_id": 14,
"text": "a"
},
{
"math_id": 15,
"text": "b"
},
{
"math_id": 16,
"text": "|a+b|=s(a+b)"
},
{
"math_id": 17,
"text": "s=\\pm 1"
},
{
"math_id": 18,
"text": "-1 \\cdot x \\le |x|"
},
{
"math_id": 19,
"text": "+1 \\cdot x \\le |x|"
},
{
"math_id": 20,
"text": "\\pm1"
},
{
"math_id": 21,
"text": "s"
},
{
"math_id": 22,
"text": "s \\cdot x\\leq |x|"
},
{
"math_id": 23,
"text": "|a+b|=s \\cdot (a+b) = s \\cdot a + s \\cdot b \\leq |a| + |b|"
},
{
"math_id": 24,
"text": "z"
},
{
"math_id": 25,
"text": "z = x + iy,"
},
{
"math_id": 26,
"text": "y"
},
{
"math_id": 27,
"text": "|z|"
},
{
"math_id": 28,
"text": "|z| = \\sqrt{\\operatorname{Re}(z)^2 + \\operatorname{Im}(z)^2}=\\sqrt{x^2 + y^2},"
},
{
"math_id": 29,
"text": "\\operatorname{Re}(z)=x"
},
{
"math_id": 30,
"text": "\\operatorname{Im}(z)=y"
},
{
"math_id": 31,
"text": "z = r e^{i \\theta},"
},
{
"math_id": 32,
"text": "|z| = r."
},
{
"math_id": 33,
"text": "\\bar z = x - iy"
},
{
"math_id": 34,
"text": "\\left(x^2 + y^2\\right)"
},
{
"math_id": 35,
"text": "z \\cdot \\overline{z},"
},
{
"math_id": 36,
"text": "|z| = \\sqrt{z \\cdot \\overline{z}}."
},
{
"math_id": 37,
"text": "|x| = \\sqrt{x\\cdot x}"
},
{
"math_id": 38,
"text": "|z|^2 = |z^2|"
},
{
"math_id": 39,
"text": "|x| = x \\sgn(x),"
},
{
"math_id": 40,
"text": " |x| \\sgn(x) = x,"
},
{
"math_id": 41,
"text": "\\sgn(x) = \\frac{|x|}{x} = \\frac{x}{|x|}."
},
{
"math_id": 42,
"text": "s,t\\in\\R"
},
{
"math_id": 43,
"text": "|t-s|= -2 \\min(s,t)+s+t"
},
{
"math_id": 44,
"text": "|t-s|=2 \\max(s,t)-s-t."
},
{
"math_id": 45,
"text": "\\frac{d\\left|x\\right|}{dx} = \\frac{x}{|x|} = \\begin{cases} -1 & x<0 \\\\ 1 & x>0. \\end{cases}"
},
{
"math_id": 46,
"text": "\\int \\left|x\\right| dx = \\frac{x\\left|x\\right|}{2} + C,"
},
{
"math_id": 47,
"text": "{d \\over dx} f(|x|)={x \\over |x|} (f'(|x|))"
},
{
"math_id": 48,
"text": "{d \\over dx} |f(x)|={f(x) \\over |f(x)|} f'(x)"
},
{
"math_id": 49,
"text": "x=0"
},
{
"math_id": 50,
"text": "f(x)=0"
},
{
"math_id": 51,
"text": "a = (a_1, a_2, \\dots , a_n) "
},
{
"math_id": 52,
"text": "b = (b_1, b_2, \\dots , b_n) "
},
{
"math_id": 53,
"text": "\\sqrt{\\textstyle\\sum_{i=1}^n(a_i-b_i)^2}. "
},
{
"math_id": 54,
"text": "a_1"
},
{
"math_id": 55,
"text": "b_1"
},
{
"math_id": 56,
"text": "|a_1 - b_1| = \\sqrt{(a_1 - b_1)^2} = \\sqrt{\\textstyle\\sum_{i=1}^1(a_i-b_i)^2},"
},
{
"math_id": 57,
"text": " a = a_1 + i a_2 "
},
{
"math_id": 58,
"text": " b = b_1 + i b_2 "
},
{
"math_id": 59,
"text": "|a| = \\left\\{\n \\begin{array}{rl}\n a, & \\text{if } a \\geq 0 \\\\\n -a, & \\text{if } a < 0.\n \\end{array}\\right.\n "
},
{
"math_id": 60,
"text": "d(x, y) \\leq \\max(d(x,z),d(y,z))"
},
{
"math_id": 61,
"text": " \\left\\{ v\\left( \\sum_{k=1}^n \\mathbf{1}\\right) : n \\in \\N \\right\\} "
},
{
"math_id": 62,
"text": " v\\left({\\textstyle \\sum_{k=1}^n } \\mathbf{1}\\right) \\le 1\\ "
},
{
"math_id": 63,
"text": "n \\in \\N"
},
{
"math_id": 64,
"text": " v(a) \\le 1 \\Rightarrow v(1+a) \\le 1\\ "
},
{
"math_id": 65,
"text": "a \\in F"
},
{
"math_id": 66,
"text": " v(a + b) \\le \\max \\{v(a), v(b)\\}\\ "
},
{
"math_id": 67,
"text": "a, b \\in F"
},
{
"math_id": 68,
"text": "\\mathbb{R}^n"
},
{
"math_id": 69,
"text": "\\|(x_1, x_2, \\dots , x_n) \\| = \\sqrt{\\textstyle\\sum_{i=1}^{n} x_i^2}"
},
{
"math_id": 70,
"text": "\\mathbb{R}"
},
{
"math_id": 71,
"text": "\\mathbb{R}^1"
},
{
"math_id": 72,
"text": "\\mathbb{R}^2"
},
{
"math_id": 73,
"text": "\\mathbb{C}"
},
{
"math_id": 74,
"text": "\\mathbb{H}"
}
] | https://en.wikipedia.org/wiki?curid=991 |
9910593 | Proto-Indo-European root | Most basic form of words in the Proto-Indo-European language
The roots of the reconstructed Proto-Indo-European language (PIE) are basic parts of words to carry a lexical meaning, so-called morphemes. PIE roots usually have verbal meaning like "to eat" or "to run". Roots never occurred alone in the language. Complete inflected verbs, nouns, and adjectives were formed by adding further morphemes to a root and potentially changing the root's vowel in a process called ablaut.
A root consists of a central vowel that is preceded and followed by at least one consonant each. A number of rules have been determined to specify which consonants can occur together, and in which order. The modern understanding of these rules is that the consonants with the highest sonority ("*l, *r, *y, *n") are nearest to the vowel, and the ones with the lowest sonority such as plosives are furthest away. There are some exceptions to these rules such as thorn clusters.
Sometimes new roots were created in PIE or its early descendants by various processes such as root extensions (adding a sound to the end of an existing root) or metathesis.
Word formation.
Typically, a root plus a suffix forms a stem, and adding an ending forms a word.
formula_0
For example, "*bʰéreti" 'he bears' can be split into the root "*bʰer-" 'to bear', the suffix "*-e-" which governs the imperfective aspect, and the ending "*-ti", which governs the present tense, third-person singular.
The suffix is sometimes missing, which has been interpreted as a zero suffix. Words with zero suffix are termed "root verbs" and "root nouns". An example is '[I] am'. Beyond this basic structure, there is the nasal infix which functions as a present tense marker, and reduplication, a prefix with a number of grammatical and derivational functions.
Finite verbs.
Verbal suffixes, including the zero suffix, convey grammatical information about tense and aspect, two grammatical categories that are not clearly distinguished. Imperfective (present, durative) and perfective aspect (aorist, punctual) are universally recognised, while some of the other aspects remain controversial. Two of the four moods, the subjunctive and the optative, are also formed with suffixes, which sometimes results in forms with two consecutive suffixes: "*bʰér-e-e-ti" > "*bʰérēti" 'he would bear', with the first "*e" being the present tense marker, and the second the subjunctive marker. Reduplication can mark the present and the perfect.
Verbal endings convey information about grammatical person, number and voice. The imperative mood has its own set of endings.
Nouns and adjectives.
Nouns usually derive from roots or verb stems by suffixation or by other means. (See the morphology of the Proto-Indo-European noun for some examples.) This can hold even for roots that are often translated as nouns: "*ped-", for example, can mean 'to tread' or 'foot', depending on the ablaut grade and ending. Some noun stems like "" 'lamb', however, do not derive from known verbal roots. In any case, the meaning of a noun is given by its stem, whether this is composed of a root plus a suffix or not. This leaves the ending, which conveys case and number.
Adjectives are also derived by suffixation of (usually verbal) roots. An example is 'begotten, produced' from the root 'to beget, to produce'. The endings are the same as with nouns.
Infinitives and participles.
Infinitives are verbal nouns and, just like other nouns, are formed with suffixes. It is not clear whether any of the infinitive suffixes reconstructed from the daughter languages ("*-dʰye-", "*-tu-", "*-ti-", among others) was actually used to express an infinitive in PIE.
Participles are verbal adjectives formed with the suffixes "*-ent-" (active imperfective and aorist participle), "*-wos-" (perfect participle) and "*-mh₁no-" or "*-m(e)no-" (mediopassive participle), among others.
Shape of a root.
In its base form, a PIE root consists of a single vowel, preceded and followed by consonants. Except for a very few cases, the root is fully characterized by its consonants, while the vowel may change in accordance with inflection or word derivation. Thus, the root "*bʰer-" can also appear as "*bʰor-", with a long vowel as "*bʰēr-" or "*bʰōr-", or even unsyllabic as "*bʰr-", in different grammatical contexts. This process is called ablaut, and the different forms are called ablaut grades. The five ablaut grades are the e-grade, o-grade, lengthened e- and o-grades, and the zero-grade that lacks a vowel.
In linguistic works, "*e" is used to stand in for the various ablaut grades that the vowel may appear in. Some reconstructions also include roots with "*a" as the vowel, but the existence of "*a" as a distinct vowel is disputed; see Indo-European ablaut: a-grade. The vowel is flanked on both sides by one or more consonants; the preceding consonants are the "onset", the following ones are the "coda".
The onset and coda must contain at least one consonant; a root may not begin or end with the ablaut vowel. Consequently, the simplest roots have an onset and coda consisting of one consonant each. Such simple roots are common; examples are: 'to give', "*bʰer-" 'to bear', 'to put', 'to run', 'to eat', 'sharp', "*ped-" 'to tread', 'to sit', and 'to clothe'. Roots can also have a more complex onset and coda, consisting of a consonant cluster (multiple consonants). These include: 'to breathe', 'red', 'to plough', 'straight', 'to bind', 'to freeze', 'to flow', 'to sleep', and 'to moisten'. The maximum number of consonants seems to be five, as in 'to twine'.
Early PIE scholars reconstructed a number of roots beginning or ending with a vowel. The latter type always had a long vowel ("*dʰē-" 'to put', "*bʰwā-" 'to grow', "*dō-" 'to give'), while this restriction did not hold for vowel-initial roots ("*ed-" 'to eat', "*aǵ-" 'to drive', "*od-" 'to smell'). Laryngeal theory can explain this behaviour by reconstructing a laryngeal following the vowel ("*dʰeh₁-", , "*deh₃-", resulting in a long vowel) or preceding it ("*h₁ed-", , , resulting in a short vowel). These reconstructions obey the mentioned rules.
Sonority hierarchy.
When the onset or coda of a root contains a consonant cluster, the consonants in this cluster must be ordered according to their sonority. The vowel constitutes a sonority peak, and the sonority must progressively rise in the onset and progressively fall in the coda.
PIE roots distinguish three main classes of consonants, arranged from high to low sonority:
The following rules apply:
Laryngeals can also occur in the coda "before" a sonorant, as in 'small'.
Obstruent clusters.
The obstruent slot of an onset or coda may consist of multiple obstruents itself. Here, too, only one member of each subgroup of obstruents may appear in the cluster; a cluster may not contain multiple laryngeals or plosives.
The rules for the ordering within a cluster of obstruents are somewhat different, and do not fit into the general sonority hierarchy:
In several roots, a phenomenon called s-mobile occurs, where some descendants include a prepended "*s" while other forms lack it. There does not appear to be any particular pattern; sometimes forms with "*s" and without it even occur side by side in the same language.
Further restrictions.
PIE abided by the general cross-linguistic constraint against the co-occurrence of two similar consonants in a word root. In particular, no examples are known of roots containing two plain voiced plosives ("**ged-") or two glides ("**ler-"). A few examples of roots with two fricatives or two nasals ( 'to burn', 'to give, to take', etc.) can be reconstructed, but they were rare as well. An exception, however, were the voiced aspirated and voiceless plosives, which relatively commonly co-occurred (e.g. 'to burn', "*peth₂-" 'to fly'). In particular, roots with two voiced aspirates were more than twice as common than could be expected to occur by chance.
An additional constraint prohibited roots containing both a voiced aspirated and a voiceless plosive ("**tebʰ-"), unless the latter occurs in a word-initial cluster after an "*s" (e.g. 'to stiffen'). Taken together with the abundance of "*DʰeDʰ"-type roots, it has been proposed that this distribution results from a limited process of voice assimilation in pre-PIE, where a voiceless stop was assimilated to a voiced aspirate, if another one followed or preceded within a root.
Exceptions.
Thorn clusters are sequences of a dental ("*t, *d, *dʰ") plus a velar plosive ("*k, *g, *gʰ" etc.). Their role in PIE phonotactics is unknown. Roots like "to perish" apparently violate the phonotactical rules, but are quite common.
Some roots cannot be reconstructed with an ablauting "*e", an example being "*bʰuh₂-" 'to grow, to become'. Such roots can be seen as generalized zero grades of unattested forms like "**bʰweh₂-", and thus follow the phonotactical rules.
Some roots like "*pster-" 'to sneeze' or "*pteh₂k-" 'to duck' do not appear to follow these rules. This might be due to incomplete understanding of PIE phonotactics or to wrong reconstructions. "*pster-", for example, might not have existed in PIE at all, if the Indo-European words usually traced back to it are onomatopoeias.
Lexical meaning.
The meaning of a reconstructed root is conventionally that of a verb; the terms "root" and "verbal root" are almost synonymous in PIE grammar. This is because, apart from a limited number of so-called root nouns, PIE roots overwhelmingly participate in verbal inflection through well-established morphological and phonological mechanisms. Their meanings are not always directly reconstructible, due to semantic shifts that led to discrepancies in the meanings of reflexes in the attested daughter languages. Many nouns and adjectives are derived from verbal roots via suffixes and ablaut.
Nevertheless, some roots did exist that did not have a primary verbal derivation. Apart from the aforementioned root nouns, the most important of these were the so-called Caland roots, which had adjectival meaning. Such roots generally formed proterokinetic adjectives with the suffix "*-u-", thematic adjectives in "*-ró-" and compounding stems in "*-i-". They included at least "*h₁rewdʰ-" 'red', 'white', 'deep' and 'heavy'.
Verbal roots were inherently either imperfective or perfective. To form a verb from the root's own aspect, verb endings were attached directly to the root, either with or without a thematic vowel. The other aspect, if it were needed, would then be a "characterised" stem, as detailed in Proto-Indo-European verb. The characterised imperfective stems are often different in different descendants, but with no association between certain forms and the various branches of Indo-European, which suggests that a number of aspects fell together before PIE split up.
Creation of new roots.
Roots were occasionally created anew within PIE or its early descendants. A variety of methods have been observed.
Root extensions.
Root extensions are additions of one or two sounds, often plosives, to the end of a root. These extensions do not seem to change the meaning of a root, and often lead to variant root forms across different descendants. The source and function of these extensions is not known.
For "*(s)tew-" 'to push, hit, thrust', we can reconstruct:
Sonorant metathesis.
When the root contains a sonorant, the zero grade is ambiguous as to whether the sonorant should be placed before the ablaut vowel or after it. Speakers occasionally analysed such roots the "wrong" way, and this has led to some roots being created from existing ones by swapping the position of the sonorant.
An example of such a pair of roots, both meaning 'to increase, to enlarge':
Another example concerns the root 'sky', which formed a vṛddhi derivative in this way:
Back-formations.
Sometimes, commonly used words became the template for a new root that was back-formed from the word, different from the root from which the word was originally formed. For example, the ablauting noun 'lifetime' was formed as a u-stem derivative of the root . The oblique stem alternant "*h₂yéw-" was then reinterpreted as the e-grade of a new root, which formed a new neuter s-stem , a formation which is only created from roots.
References.
| [
{
"math_id": 0,
"text": "\n\\underbrace{\\underbrace{\\mathrm{root+suffix}}_{\\mathrm{stem}} + \\mathrm{ending}}_{\\mathrm{word}}\n"
}
] | https://en.wikipedia.org/wiki?curid=9910593 |
991111 | Harold Davenport | English mathematician
Harold Davenport FRS (30 October 1907 – 9 June 1969) was an English mathematician, known for his extensive work in number theory.
Early life.
Born on 30 October 1907 in Huncoat, Lancashire, Davenport was educated at Accrington Grammar School, the University of Manchester (graduating in 1927), and Trinity College, Cambridge. He became a research student of John Edensor Littlewood, working on the question of the distribution of quadratic residues.
First steps in research.
The attack on the distribution question leads quickly to problems that are now seen to be special cases of those on local zeta-functions, for the particular case of some special hyperelliptic curves such as formula_0.
Bounds for the zeroes of the local zeta-function immediately imply bounds for sums formula_1, where χ is the Legendre symbol "modulo" a prime number "p", and the sum is taken over a complete set of residues mod "p".
In the light of this connection it was appropriate that, with a Trinity research fellowship, Davenport in 1932–1933 spent time in Marburg and Göttingen working with Helmut Hasse, an expert on the algebraic theory. This produced the work on the Hasse–Davenport relations for Gauss sums, and contact with Hans Heilbronn, with whom Davenport would later collaborate. In fact, as Davenport later admitted, his inherent prejudices against algebraic methods ("what can you "do" with algebra?") probably limited the amount he learned, in particular in the "new" algebraic geometry and Artin/Noether approach to abstract algebra.
He proved in 1946 that 8436 is the largest tetrahedral number of the form formula_2 for some nonnegative integers formula_3 and formula_4 and also in 1947 that 5040 is the largest factorial of the form formula_5 for some integer formula_6 by using Brun sieve and other advanced methods.
Later career.
He took an appointment at the University of Manchester in 1937, just at the time when Louis Mordell had recruited émigrés from continental Europe to build an outstanding department. He moved into the areas of diophantine approximation and geometry of numbers. These were fashionable, and complemented the technical expertise he had in the Hardy–Littlewood circle method; he was later, though, to let drop the comment that he wished he'd spent more time on the Riemann hypothesis.
He was President of the London Mathematical Society from 1957 to 1959. After professorial positions at the University of Wales and University College London, he was appointed to the Rouse Ball Chair of Mathematics in Cambridge in 1958. There he remained until his death, of lung cancer.
Personal life.
Davenport married Anne Lofthouse, whom he met at the University College of North Wales at Bangor in 1944; they had two children, Richard and James, the latter going on to become Hebron and Medlock Professor of Information Technology at the University of Bath.
Influence.
From about 1950, Davenport was the obvious leader of a "school", somewhat unusually in the context of British mathematics. The successor to the school of mathematical analysis of G. H. Hardy and J. E. Littlewood, it was also more narrowly devoted to number theory, and indeed to its analytic side, as had flourished in the 1930s. This implied problem-solving, and hard-analysis methods. The outstanding works of Klaus Roth and Alan Baker exemplify what this can do, in diophantine approximation. Two reported sayings, "the problems are there", and "I don't care how you get hold of the gadget, I just want to know how big or small it is", sum up the attitude, and could be transplanted today into any discussion of combinatorics. This concrete emphasis on problems stood in sharp contrast with the abstraction of Bourbaki, who were then active just across the English Channel. | [
{
"math_id": 0,
"text": "Y^2 = X(X-1)(X-2)\\ldots (X-k)"
},
{
"math_id": 1,
"text": "\\sum \\chi(X(X-1)(X-2)\\ldots (X-k))"
},
{
"math_id": 2,
"text": "2^a+3^b+1"
},
{
"math_id": 3,
"text": "a"
},
{
"math_id": 4,
"text": "b"
},
{
"math_id": 5,
"text": "n(n+4)(n+6)"
},
{
"math_id": 6,
"text": "n"
}
] | https://en.wikipedia.org/wiki?curid=991111 |
991210 | Divisibility rule | Shorthand way of determining whether a given number is divisible by a fixed divisor
A divisibility rule is a shorthand and useful way of determining whether a given integer is divisible by a fixed divisor without performing the division, usually by examining its digits. Although there are divisibility tests for numbers in any radix, or base, and they are all different, this article presents rules and examples only for decimal, or base 10, numbers. Martin Gardner explained and popularized these rules in his September 1962 "Mathematical Games" column in "Scientific American".
Divisibility rules for numbers 1–30.
The rules given below transform a given number into a generally smaller number, while preserving divisibility by the divisor of interest. Therefore, unless otherwise noted, the resulting number should be evaluated for divisibility by the same divisor. In some cases the process can be iterated until the divisibility is obvious; for others (such as examining the last "n" digits) the result must be examined by other means.
For divisors with multiple rules, the rules are generally ordered first for those appropriate for numbers with many digits, then those useful for numbers with fewer digits.
To test the divisibility of a number by a power of 2 or a power of 5 (2^"n" or 5^"n", in which "n" is a positive integer), one need only look at the last "n" digits of that number.
To test divisibility by any number expressed as the product of prime factors formula_0, we can separately test for divisibility by each prime to its appropriate power. For example, testing divisibility by 24 (24 = 8×3 = 2^3×3) is equivalent to testing divisibility by 8 (2^3) and 3 simultaneously, thus we need only show divisibility by 8 and by 3 to prove divisibility by 24.
Step-by-step examples.
Divisibility by 2.
First, take any number (for this example it will be 376) and note the last digit in the number, discarding the other digits. Then take that digit (6) while ignoring the rest of the number and determine if it is divisible by 2. If it is divisible by 2, then the original number is divisible by 2.
Example
Divisibility by 3 or 9.
First, take any number (for this example it will be 492) and add together each digit in the number (4 + 9 + 2 = 15). Then take that sum (15) and determine if it is divisible by 3. The original number is divisible by 3 (or 9) if and only if the sum of its digits is divisible by 3 (or 9).
Adding the digits of a number up, and then repeating the process with the result until only one digit remains, will give the remainder of the original number if it were divided by nine (unless that single digit is nine itself, in which case the number is divisible by nine and the remainder is zero).
This can be generalized to any standard positional system, in which the divisor in question then becomes one less than the radix; thus, in base-twelve, the digits will add up to the remainder of the original number if divided by eleven, and numbers are divisible by eleven if and only if the digit sum is divisible by eleven.
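As a quick illustration of the digit-sum rule (a minimal sketch, not part of the original article; the function names are illustrative), the following Python snippet sums decimal digits and compares the result with direct computation of the remainder:

```python
def digit_sum(n: int) -> int:
    """Sum of the decimal digits of a non-negative integer."""
    return sum(int(d) for d in str(n))

def digital_root(n: int) -> int:
    """Repeatedly sum digits until a single digit remains."""
    while n >= 10:
        n = digit_sum(n)
    return n

n = 492
print(digit_sum(n))            # 15, divisible by 3, so 492 is divisible by 3
print(n % 3 == 0)              # True
print(digital_root(n), n % 9)  # 6 and 6: the digital root gives the remainder mod 9
                               # (a digital root of 9 means the remainder is 0)
```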
Example.
Divisibility by 4.
The basic rule for divisibility by 4 is that if the number formed by the last two digits in a number is divisible by 4, the original number is divisible by 4; this is because 100 is divisible by 4 and so adding hundreds, thousands, etc. is simply adding another number that is divisible by 4. If any number ends in a two digit number that you know is divisible by 4 (e.g. 24, 04, 08, etc.), then the whole number will be divisible by 4 regardless of what is before the last two digits.
Alternatively, one can just add half of the last digit to the penultimate digit (or to the remaining number). If the result is a whole even number, the original number is divisible by 4.
Also, one can simply divide the number by 2 and then check whether the result is divisible by 2. If it is, the original number is divisible by 4; halving the result once more gives the original number divided by 4.
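The three tests just described can be written out in a few lines of Python (a sketch for illustration only; the function names are not from the original article):

```python
def div4_last_two(n: int) -> bool:
    # General rule: the number formed by the last two digits must be divisible by 4.
    return (n % 100) % 4 == 0

def div4_half_digit(n: int) -> bool:
    # Second method: half of the (even) last digit plus the penultimate digit must be even.
    last = n % 10
    if last % 2:          # an odd last digit already rules out divisibility by 4
        return False
    return (last // 2 + (n // 10) % 10) % 2 == 0

def div4_halving(n: int) -> bool:
    # Third method: halve the number, then check that the result is still even.
    return n % 2 == 0 and (n // 2) % 2 == 0

for n in (376, 2092, 1375, 100):
    assert div4_last_two(n) == div4_half_digit(n) == div4_halving(n) == (n % 4 == 0)
```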
Example.<br>
General rule
Second method
Third method
Divisibility by 5.
Divisibility by 5 is easily determined by checking the last digit in the number (475), and seeing if it is either 0 or 5. If the last number is either 0 or 5, the entire number is divisible by 5.
If the last digit in the number is 0, then the quotient is the remaining digits multiplied by 2. For example, the number 40 ends in a zero, so take the remaining digits (4) and multiply that by two (4 × 2 = 8). The result is the same as the result of 40 divided by 5 (40/5 = 8).
If the last digit in the number is 5, then the quotient is the remaining digits multiplied by two, plus one. For example, the number 125 ends in a 5, so take the remaining digits (12), multiply them by two (12 × 2 = 24), then add one (24 + 1 = 25). The result is the same as the result of 125 divided by 5 (125/5 = 25).
Example.<br>
If the last digit is 0
If the last digit is 5
Divisibility by 6.
Divisibility by 6 is determined by checking the original number to see if it is both an even number (divisible by 2) and divisible by 3.
If the final digit is even the number is divisible by two, and thus may be divisible by 6. If it is divisible by 2 continue by adding the digits of the original number and checking if that sum is a multiple of 3. Any number which is both a multiple of 2 and of 3 is a multiple of 6.
Example.
Divisibility by 7.
Divisibility by 7 can be tested by a recursive method. A number of the form 10"x" + "y" is divisible by 7 if and only if "x" − 2"y" is divisible by 7. In other words, subtract twice the last digit from the number formed by the remaining digits. Continue to do this until a number is obtained for which it is known whether it is divisible by 7. The original number is divisible by 7 if and only if the number obtained using this procedure is divisible by 7. For example, the number 371: 37 − (2×1) = 37 − 2 = 35; 3 − (2 × 5) = 3 − 10 = −7; thus, since −7 is divisible by 7, 371 is divisible by 7.
Similarly a number of the form 10"x" + "y" is divisible by 7 if and only if "x" + 5"y" is divisible by 7. So add five times the last digit to the number formed by the remaining digits, and continue to do this until a number is obtained for which it is known whether it is divisible by 7.
Another method is multiplication by 3. A number of the form 10"x" + "y" has the same remainder when divided by 7 as 3"x" + "y". One must multiply the leftmost digit of the original number by 3, add the next digit, take the remainder when divided by 7, and continue from the beginning: multiply by 3, add the next digit, etc. For example, the number 371: 3×3 + 7 = 16 remainder 2, and 2×3 + 1 = 7. This method can be used to find the remainder of division by 7.
A more complicated algorithm for testing divisibility by 7 uses the fact that 10^0 ≡ 1, 10^1 ≡ 3, 10^2 ≡ 2, 10^3 ≡ 6, 10^4 ≡ 4, 10^5 ≡ 5, 10^6 ≡ 1, ... (mod 7). Take each digit of the number (371) in reverse order (173), multiplying them successively by the digits 1, 3, 2, 6, 4, 5, repeating with this sequence of multipliers as long as necessary (1, 3, 2, 6, 4, 5, 1, 3, 2, 6, 4, 5, ...), and adding the products (1×1 + 7×3 + 3×2 = 1 + 21 + 6 = 28). The original number is divisible by 7 if and only if the number obtained using this procedure is divisible by 7 (hence 371 is divisible by 7 since 28 is).
This method can be simplified by removing the need to multiply. All it would take with this simplification is to memorize the sequence above (132645...), and to add and subtract, but always working with one-digit numbers.
The simplification goes as follows:
If through this procedure you obtain a 0 or any recognizable multiple of 7, then the original number is a multiple of 7. If you obtain any number from 1 to 6, that will indicate how much you should subtract from the original number to get a multiple of 7. In other words, you will find the remainder of dividing the number by 7. For example, take the number 186:
Now we have a number smaller than 7, and this number (4) is the remainder of dividing 186/7. So 186 minus 4, which is 182, must be a multiple of 7.
Note: The reason why this works is that if we have: a+b=c and b is a multiple of any given number n, then a and c will necessarily produce the same remainder when divided by n. In other words, in 2 + 7 = 9, 7 is divisible by 7. So 2 and 9 must have the same remainder when divided by 7. The remainder is 2.
Therefore, if a number "n" is a multiple of 7 (i.e.: the remainder of "n"/7 is 0), then adding (or subtracting) multiples of 7 cannot change that property.
What this procedure does, as explained above for most divisibility rules, is simply subtract little by little multiples of 7 from the original number until reaching a number that is small enough for us to remember whether it is a multiple of 7. If 1 becomes a 3 in the following decimal position, that is just the same as converting 10×10^"n" into 3×10^"n". And that is actually the same as subtracting 7×10^"n" (clearly a multiple of 7) from 10×10^"n".
Similarly, when you turn a 3 into a 2 in the following decimal position, you are turning 30×10^"n" into 2×10^"n", which is the same as subtracting 30×10^"n" − 28×10^"n", and this is again subtracting a multiple of 7. The same reason applies for all the remaining conversions:
First method example<br>
1050 → 105 − 0 = 105 → 10 − 10 = 0. ANSWER: 1050 is divisible by 7.
Second method example<br>
1050 → 0501 (reverse) → 0×1 + 5×3 + 0×2 + 1×6 = 0 + 15 + 0 + 6 = 21 (multiply and add). ANSWER: 1050 is divisible by 7.
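The rules above can be checked with a short Python sketch (the function names are illustrative; the digit-weight method uses the 1, 3, 2, 6, 4, 5 sequence, and the multiplication-by-3 method reproduces the remainder, as in the worked example further below):

```python
def rule_subtract_twice(n: int) -> bool:
    # 10x + y is divisible by 7 iff x - 2y is divisible by 7; iterate until small.
    n = abs(n)
    while n >= 70:
        n = abs(n // 10 - 2 * (n % 10))
    return n % 7 == 0

def remainder_times_three(n: int) -> int:
    # Work left to right: multiply the running value by 3, add the next digit, reduce mod 7.
    r = 0
    for d in str(n):
        r = (3 * r + int(d)) % 7
    return r

def remainder_weights(n: int) -> int:
    # Weight the digits, taken from the right, by 1, 3, 2, 6, 4, 5 (the powers of 10 mod 7).
    weights = [1, 3, 2, 6, 4, 5]
    digits = [int(d) for d in str(n)][::-1]
    return sum(d * weights[i % 6] for i, d in enumerate(digits)) % 7

print(rule_subtract_twice(371))           # True  (371 = 7 * 53)
print(remainder_times_three(371))         # 0
print(remainder_weights(371))             # 0 (the weighted sum is 28)
print(remainder_times_three(1036125837))  # 5, matching the worked example below
```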
Vedic method of divisibility by osculation<br>
Divisibility by seven can be tested by multiplication by the "Ekhādika". Convert the divisor seven to the nines family by multiplying by seven: 7×7 = 49. Add one, drop the units digit and take the 5, the "Ekhādika", as the multiplier. Start on the right. Multiply by 5, add the product to the next digit to the left. Set down that result on a line below that digit. Repeat that method of multiplying the units digit by five and adding that product to the number of tens. Add the result to the next digit to the left. Write down that result below the digit. Continue to the end. If the result is zero or a multiple of seven, then yes, the number is divisible by seven. Otherwise, it is not. This follows the Vedic ideal, one-line notation.
Vedic method example:
Is 438,722,025 divisible by seven? Multiplier = 5.
4 3 8 7 2 2 0 2 5
42 37 46 37 6 40 37 27
YES
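The osculation procedure can be sketched in Python as follows (the running-value update is one reading of the steps described above, so treat it as illustrative rather than definitive):

```python
def osculate_by_7(n: int) -> bool:
    # Ekhadika for 7 is 5 (from 7*7 = 49 -> 49 + 1 = 50 -> drop the units digit).
    digits = [int(d) for d in str(n)]
    value = digits[-1]                  # start from the rightmost digit
    for d in reversed(digits[:-1]):
        # multiply the units digit of the running value by 5, add its tens, add the next digit
        value = (value % 10) * 5 + value // 10 + d
    return value % 7 == 0

print(osculate_by_7(438722025))  # True; the running values reproduce the row 27, 37, 40, 6, 37, 46, 37, 42
```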
Pohlman–Mass method of divisibility by 7<br>
The Pohlman–Mass method provides a quick solution that can determine if most integers are divisible by seven in three steps or less. This method could be useful in a mathematics competition such as MATHCOUNTS, where time is a factor to determine the solution without a calculator in the Sprint Round.
Step A:
If the integer is 1000 or less, subtract twice the last digit from the number formed by the remaining digits. If the result is a multiple of seven, then so is the original number (and vice versa). For example:
112 -> 11 − (2×2) = 11 − 4 = 7 YES
98 -> 9 − (8×2) = 9 − 16 = −7 YES
634 -> 63 − (4×2) = 63 − 8 = 55 NO
Because 1001 is divisible by seven, an interesting pattern develops for repeating sets of 1, 2, or 3 digits that form 6-digit numbers (leading zeros are allowed) in that all such numbers are divisible by seven. For example:
001 001 = 1,001 / 7 = 143
010 010 = 10,010 / 7 = 1,430
011 011 = 11,011 / 7 = 1,573
100 100 = 100,100 / 7 = 14,300
101 101 = 101,101 / 7 = 14,443
110 110 = 110,110 / 7 = 15,730
01 01 01 = 10,101 / 7 = 1,443
10 10 10 = 101,010 / 7 = 14,430
111,111 / 7 = 15,873
222,222 / 7 = 31,746
999,999 / 7 = 142,857
576,576 / 7 = 82,368
For all of the above examples, subtracting the first three digits from the last three results in a multiple of seven. Notice that leading zeros are permitted to form a 6-digit pattern.
This phenomenon forms the basis for Steps B and C.
Step B:
If the integer is between 1001 and one million, find a repeating pattern of 1, 2, or 3 digits that forms a 6-digit number that is close to the integer (leading zeros are allowed and can help you visualize the pattern). If the positive difference is less than 1000, apply Step A. This can be done by subtracting the first three digits from the last three digits. For example:
341,355 − 341,341 = 14 -> 1 − (4×2) = 1 − 8 = −7 YES
67,326 − 067,067 = 259 -> 25 − (9×2) = 25 − 18 = 7 YES
The fact that 999,999 is a multiple of 7 can be used for determining divisibility of integers larger than one million by reducing the integer to a 6-digit number that can be determined using Step B. This can be done easily by adding the digits left of the first six to the last six and follow with Step A.
Step C:
If the integer is larger than one million, subtract the nearest multiple of 999,999 and then apply Step B. For even larger numbers, use larger sets such as 12-digits (999,999,999,999) and so on. Then, break the integer into a smaller number that can be solved using Step B. For example:
22,862,420 − (999,999 × 22) = 22,862,420 − 21,999,978 -> 862,420 + 22 = 862,442
862,442 -> 862 − 442 (Step B) = 420 -> 42 − (0×2) (Step A) = 42 YES
This allows adding and subtracting alternating sets of three digits to determine divisibility by seven. Understanding these patterns allows you to quickly calculate divisibility of seven as seen in the following examples:
Pohlman–Mass method of divisibility by 7, examples:
Is 98 divisible by seven?
98 -> 9 − (8×2) = 9 − 16 = −7 YES (Step A)
Is 634 divisible by seven?
634 -> 63 − (4×2) = 63 − 8 = 55 NO (Step A)
Is 355,341 divisible by seven?
355,341 − 341,341 = 14,000 (Step B) -> 014 − 000 (Step B) -> 14 = 1 − (4×2) (Step A) = 1 − 8 = −7 YES
Is 42,341,530 divisible by seven?
42,341,530 -> 341,530 + 42 = 341,572 (Step C)
341,572 − 341,341 = 231 (Step B)
231 -> 23 − (1×2) = 23 − 2 = 21 YES (Step A)
Using quick alternating additions and subtractions:
42,341,530 -> 530 − 341 + 42 = 189 + 42 = 231 -> 23 − (1×2) = 21 YES
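The alternating-sums shortcut in the last line works because 1001 = 7 × 11 × 13, so three-digit groups taken from the right with alternating signs preserve the remainder modulo 7 (and also modulo 11 and 13). A minimal Python sketch (illustrative only):

```python
def alternating_groups_of_three(n: int) -> int:
    """Alternating sum of three-digit groups taken from the right (valid mod 7, 11 and 13)."""
    total, sign = 0, 1
    while n:
        total += sign * (n % 1000)
        n //= 1000
        sign = -sign
    return total

print(alternating_groups_of_three(42341530))      # 231 = 530 - 341 + 42
print(alternating_groups_of_three(42341530) % 7)  # 0, so 42,341,530 is divisible by 7
```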
Multiplication by 3 method of divisibility by 7, examples:
Is 98 divisible by seven?
98 -> 9 remainder 2 -> 2×3 + 8 = 14 YES
Is 634 divisible by seven?
634 -> 6×3 + 3 = 21 -> remainder 0 -> 0×3 + 4 = 4 NO
Is 355,341 divisible by seven?
3 × 3 + 5 = 14 -> remainder 0 -> 0×3 + 5 = 5 -> 5×3 + 3 = 18 -> remainder 4 -> 4×3 + 4 = 16 -> remainder 2 -> 2×3 + 1 = 7 YES
Find remainder of 1036125837 divided by 7
1×3 + 0 = 3
3×3 + 3 = 12 remainder 5
5×3 + 6 = 21 remainder 0
0×3 + 1 = 1
1×3 + 2 = 5
5×3 + 5 = 20 remainder 6
6×3 + 8 = 26 remainder 5
5×3 + 3 = 18 remainder 4
4×3 + 7 = 19 remainder 5
Answer is 5
Finding remainder of a number when divided by 7
Minimum magnitude sequence for 7: (1, 3, 2, −1, −3, −2), with the cycle repeating for the next six digits. Period: 6 digits.
Recurring numbers: 1, 3, 2, −1, −3, −2
<br>Positive sequence for 7: (1, 3, 2, 6, 4, 5), with the cycle repeating for the next six digits. Period: 6 digits.
Recurring numbers: 1, 3, 2, 6, 4, 5
Multiply the rightmost digit of the number by the leftmost (first) number in the sequence, the second rightmost digit by the second number in the sequence, and so forth. Next, compute the sum of all the products and reduce it modulo 7.
<br>Example: What is the remainder when 1036125837 is divided by 7? <br>
<br>Multiplication of the rightmost digit = 1 × 7 = 7 <br>
<br>Multiplication of the second rightmost digit = 3 × 3 = 9 <br>
<br>Third rightmost digit = 8 × 2 = 16 <br>
<br>Fourth rightmost digit = 5 × −1 = −5 <br>
<br>Fifth rightmost digit = 2 × −3 = −6 <br>
<br>Sixth rightmost digit = 1 × −2 = −2 <br>
<br>Seventh rightmost digit = 6 × 1 = 6 <br>
<br>Eighth rightmost digit = 3 × 3 = 9 <br>
<br>Ninth rightmost digit = 0 × 2 = 0 <br>
<br>Tenth rightmost digit = 1 × −1 = −1 <br>
<br>Sum = 33 <br>
<br>33 modulus 7 = 5 <br>
<br>Remainder = 5
Digit pair method of divisibility by 7
This method uses the 1, −3, 2 pattern on the "digit pairs". That is, the divisibility of any number by seven can be tested by first separating the number into digit pairs, and then applying the algorithm on three digit pairs (six digits). When the number is smaller than six digits, append zeros on the right side until there are six digits. When the number is larger than six digits, repeat the cycle on the next six-digit group and then add the results. Repeat the algorithm until the result is a small number. The original number is divisible by seven if and only if the number obtained using this algorithm is divisible by seven. This method is especially suitable for large numbers.
"Example 1:"<br>
The number to be tested is 157514.
First we separate the number into three digit pairs: 15, 75 and 14.<br>
Then we apply the algorithm: 1 × 15 − 3 × 75 + 2 × 14 = −182; since only divisibility matters, the sign can be dropped and we continue with 182.<br>
Because the resulting 182 has fewer than six digits, we append zeros on the right side until it has six digits.<br>
Then we apply our algorithm again: 1 × 18 − 3 × 20 + 2 × 0 = −42<br>
The result −42 is divisible by seven, thus the original number 157514 is divisible by seven.
"Example 2:"<br>
The number to be tested is 15751537186.<br>
(1 × 15 − 3 × 75 + 2 × 15) + (1 × 37 − 3 × 18 + 2 × 60) = −180 + 103 = −77<br>
The result −77 is divisible by seven, thus the original number 15751537186 is divisible by seven.
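A compact Python sketch of this digit-pair procedure (the padding and grouping conventions follow the description above; the function name is illustrative):

```python
def digit_pair_test_7(n: int) -> int:
    """Apply the 1, -3, 2 pattern to digit pairs, six digits at a time, until small."""
    while abs(n) >= 100:
        s = str(abs(n))
        s += "0" * (-len(s) % 6)            # pad zeros on the right to a multiple of six digits
        pairs = [int(s[i:i + 2]) for i in range(0, len(s), 2)]
        weights = [1, -3, 2]
        n = sum(w * p for p, w in zip(pairs, weights * (len(pairs) // 3)))
    return n

print(digit_pair_test_7(157514) % 7 == 0)       # True
print(digit_pair_test_7(15751537186) % 7 == 0)  # True
```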
Another digit pair method of divisibility by 7
Method
This is a non-recursive method to find the remainder left by a number on dividing by 7: break the number into pairs of digits starting from the ones place, multiply the pairs (taken from the right) by 1, 2, 4, 1, 2, 4, ..., add the products, and reduce the sum modulo 7.
For example:
The number 194,536 leaves a remainder of 6 on dividing by 7.
The number 510,517,813 leaves a remainder of 1 on dividing by 7.
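Under this reading (pairs from the right weighted by 1, 2, 4 repeating, which is justified by the proof below), a minimal Python check reproduces both examples:

```python
def remainder_mod_7_pairs(n: int) -> int:
    # Digit pairs from the right, weighted by 100^k mod 7, i.e. 1, 2, 4, 1, 2, 4, ...
    weights = [1, 2, 4]
    total, k = 0, 0
    while n:
        total += (n % 100) * weights[k % 3]
        n //= 100
        k += 1
    return total % 7

print(remainder_mod_7_pairs(194536))     # 6
print(remainder_mod_7_pairs(510517813))  # 1
```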
Proof of correctness of the method
The method is based on the observation that 100 leaves a remainder of 2 when divided by 7. And since we are breaking the number into digit pairs we essentially have powers of 100.
1 mod 7 = 1
100 mod 7 = 2
10,000 mod 7 = 2^2 = 4
1,000,000 mod 7 = 2^3 mod 7 = 8 mod 7 = 1
100,000,000 mod 7 = 2^4 mod 7 = 16 mod 7 = 2
10,000,000,000 mod 7 = 2^5 mod 7 = 32 mod 7 = 4
And so on.
The correctness of the method is then established by the following chain of equalities:
Let N be the given number formula_1.
formula_2
formula_3
formula_4
formula_5
Divisibility by 11.
Method
In order to check divisibility by 11, consider the alternating sum of the digits. For example with 907,071:
formula_6
so 907,071 is divisible by 11.
We can either start with formula_7 or formula_8 since multiplying the whole by formula_9 does not change anything.
Proof of correctness of the method
Considering that formula_10, we can write for any integer:
formula_11
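A one-function Python sketch of the alternating-sum test (illustrative only):

```python
def alternating_digit_sum(n: int) -> int:
    """Alternating sum of decimal digits, starting with + at the leftmost digit."""
    digits = [int(d) for d in str(n)]
    return sum(d if i % 2 == 0 else -d for i, d in enumerate(digits))

print(alternating_digit_sum(907071))            # 22
print(alternating_digit_sum(907071) % 11 == 0)  # True, so 907,071 is divisible by 11
```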
Divisibility by 13.
Remainder Test
For 13, the digit weights are (1, −3, −4, −1, 3, 4), with the cycle repeating.
If you are not comfortable with negative numbers, then use this sequence: (1, 10, 9, 12, 3, 4).
Multiply the rightmost digit of the number by the leftmost (first) number in the sequence shown above, the second rightmost digit by the second number in the sequence, and so on, cycling through the sequence as needed.
Example: What is the remainder when 321 is divided by 13?
Using the first sequence, <br>
Ans: 1 × 1 + 2 × −3 + 3 × −4 = −17
Remainder = −17 mod 13 = 9
Example: What is the remainder when 1234567 is divided by 13?
Using the second sequence, <br>
Answer: 7 × 1 + 6 × 10 + 5 × 9 + 4 × 12 + 3 × 3 + 2 × 4 + 1 × 1 = 178; 178 mod 13 = 9
Remainder = 9
A recursive method can be derived using the fact that formula_12 and that formula_13. This implies that a number is divisible by 13 if and only if removing the first digit and subtracting 3 times that digit from the new first digit yields a number divisible by 13. We also have the rule that 10"x" + "y" is divisible by 13 if and only if "x" + 4"y" is divisible by 13. For example, to test the divisibility of 1761 by 13 we can reduce this to the divisibility of 461 by the first rule. Using the second rule, this reduces to the divisibility of 50, and doing that again yields 5. So, 1761 is not divisible by 13.
Testing 871 this way reduces it to the divisibility of 91 using the second rule, and then 13 using that rule again, so we see that 871 is divisible by 13.
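The second rule lends itself to a tiny Python sketch (illustrative; the stopping threshold is an arbitrary convenience):

```python
def div13(n: int) -> bool:
    # 10x + y is divisible by 13 iff x + 4y is divisible by 13; iterate until small.
    while n > 99:
        n = n // 10 + 4 * (n % 10)
    return n % 13 == 0

print(div13(871))   # True  (871 = 13 * 67)
print(div13(1761))  # False (1761 itself leaves remainder 6 on division by 13)
```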
Beyond 30.
Divisibility properties of numbers can be determined in two ways, depending on the type of the divisor.
Composite divisors.
A number is divisible by a given divisor if it is divisible by the highest power of each of its prime factors. For example, to determine divisibility by 36, check divisibility by 4 and by 9. Note that checking 3 and 12, or 2 and 18, would not be sufficient. A table of prime factors may be useful.
A composite divisor may also have a rule formed using the same procedure as for a prime divisor, given below, with the caveat that the manipulations involved must not introduce any factor which is present in the divisor. For instance, one cannot make a rule for 14 that involves multiplying the equation by 7. This is not an issue for prime divisors because they have no smaller factors.
Prime divisors.
The goal is to find an inverse to 10 modulo the prime under consideration (does not work for 2 or 5) and use that as a multiplier to make the divisibility of the original number by that prime depend on the divisibility of the new (usually smaller) number by the same prime.
Using 31 as an example, since 10 × (−3) = −30 = 1 mod 31, we get the rule for using "y" − 3"x" in the table below. Likewise, since 10 × (28) = 280 = 1 mod 31 also, we obtain a complementary rule "y" + 28"x" of the same kind - our choice of addition or subtraction being dictated by arithmetic convenience of the smaller value. In fact, this rule for prime divisors besides 2 and 5 is "really" a rule for divisibility by any integer relatively prime to 10 (including 33 and 39; see the table below). This is why the last divisibility condition in the tables above and below for any number relatively prime to 10 has the same kind of form (add or subtract some multiple of the last digit from the rest of the number).
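The multiplier can be found mechanically as a modular inverse of 10; a short Python check (using the three-argument pow with a negative exponent, available in Python 3.8+; variable names follow the text, with "y" the truncated number and "x" the last digit):

```python
p = 31
m = pow(10, -1, p)      # modular inverse of 10 mod 31
print(m, m - p)         # 28 and -3: the two equivalent multipliers, y + 28x and y - 3x
for n in (217, 961, 962):          # 217 = 7*31, 961 = 31*31, 962 is not a multiple of 31
    y, x = n // 10, n % 10         # y = all but the last digit, x = last digit
    print(n, (y - 3 * x) % p == 0, n % p == 0)
```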
Generalized divisibility rule.
To test for divisibility by "D", where "D" ends in 1, 3, 7, or 9, the following method can be used. Find any multiple of "D" ending in 9. (If "D" ends respectively in 1, 3, 7, or 9, then multiply by 9, 3, 7, or 1.) Then add 1 and divide by 10, denoting the result as "m". Then a number "N" = 10"t" + "q" is divisible by "D" if and only if "mq + t" is divisible by "D". If the number is too large, you can also break it down into several strings with "e" digits each, satisfying either 10"e" = 1 or 10"e" = −1 (mod "D"). The sum (or alternating sum) of the numbers have the same divisibility as the original one.
For example, to determine if 913 = 10×91 + 3 is divisible by 11, find that "m" = (11×9+1)÷10 = 10. Then "mq+t" = 10×3+91 = 121; this is divisible by 11 (with quotient 11), so 913 is also divisible by 11. As another example, to determine if 689 = 10×68 + 9 is divisible by 53, find that "m" = (53×3+1)÷10 = 16. Then "mq+t" = 16×9 + 68 = 212, which is divisible by 53 (with quotient 4); so 689 is also divisible by 53.
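A Python sketch of the generalized rule (the function names and the stopping threshold are illustrative choices, not part of the rule itself):

```python
def multiplier(D: int) -> int:
    """m from the rule above: take a multiple of D ending in 9, add 1, divide by 10."""
    factor = {1: 9, 3: 3, 7: 7, 9: 1}[D % 10]   # D must end in 1, 3, 7 or 9
    return (D * factor + 1) // 10

def divisible(N: int, D: int) -> bool:
    m = multiplier(D)
    while N > 10 * D:                 # shrink N until it is small enough to check directly
        t, q = N // 10, N % 10
        N = m * q + t
    return N % D == 0

print(multiplier(11), multiplier(53))          # 10 and 16, as in the worked examples
print(divisible(913, 11), divisible(689, 53))  # True True
```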
Alternatively, any number Q = 10c + d is divisible by n = 10a + b, such that gcd("n", 10) = 1 (i.e., "n" is coprime to 2 and 5), if c + D(n)d = An for some integer A, where:
formula_14
The first few terms of the sequence, generated by D(n), are 1, 1, 5, 1, 10, 4, 12, 2, ... (sequence A333448 in OEIS).
The piecewise form of D(n) and the sequence generated by it were first published by Bulgarian mathematician Ivan Stoykov in March 2020.
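The piecewise definition can be evaluated directly (a sketch; the function name is illustrative):

```python
def D(n: int) -> int:
    # n must end in 1, 3, 7 or 9
    a, last = n // 10, n % 10
    return {1: 9 * a + 1, 3: 3 * a + 1, 7: 7 * a + 5, 9: a + 1}[last]

print([D(n) for n in (1, 3, 7, 9, 11, 13, 17, 19)])  # [1, 1, 5, 1, 10, 4, 12, 2]
# For n = 7, D(n) = 5 reproduces the "add five times the last digit" rule given earlier.
```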
Proofs.
Proof using basic algebra.
Many of the simpler rules can be produced using only algebraic manipulation, creating binomials and rearranging them. By writing a number as the sum of each digit times a power of 10, each digit's power can be manipulated individually.
Case where all digits are summed
This method works for divisors that are factors of 10 − 1 = 9.
Using 3 as an example, 3 divides 9 = 10 − 1. That means formula_15 (see modular arithmetic). The same for all the higher powers of 10: formula_16 They are all congruent to 1 modulo 3. Since two things that are congruent modulo 3 are either both divisible by 3 or both not, we can interchange values that are congruent modulo 3. So, in a number such as the following, we can replace all the powers of 10 by 1:
formula_17
which is exactly the sum of the digits.
Case where the alternating sum of digits is used
This method works for divisors that are factors of 10 + 1 = 11.
Using 11 as an example, 11 divides 11 = 10 + 1. That means formula_18. For the higher powers of 10, they are congruent to 1 for even powers and congruent to −1 for odd powers:
formula_19
Like the previous case, we can substitute powers of 10 with congruent values:
formula_20
which is also the difference between the sum of digits at odd positions and the sum of digits at even positions.
Case where only the last digit(s) matter
This applies to divisors that are a factor of a power of 10. This is because sufficiently high powers of the base are multiples of the divisor, and can be eliminated.
For example, in base 10, the factors of 10^1 include 2, 5, and 10. Therefore, divisibility by 2, 5, and 10 only depends on whether the last 1 digit is divisible by those divisors. The factors of 10^2 include 4 and 25, and divisibility by those only depends on the last 2 digits.
Case where only the last digit(s) are removed
Most numbers do not divide 9 or 10 evenly, but do divide a higher power of 10 (10^"n") or 10^"n" − 1. In this case the number is still written in powers of 10, but not fully expanded.
For example, 7 does not divide 9 or 10, but does divide 98, which is close to 100. Thus, proceed from
formula_21
where in this case a is any integer, and b can range from 0 to 99. Next,
formula_22
and again expanding
formula_23
and after eliminating the known multiple of 7, the result is
formula_24
which is the rule "double the number formed by all but the last two digits, then add the last two digits".
Case where the last digit(s) is multiplied by a factor
The representation of the number may also be multiplied by any number relatively prime to the divisor without changing its divisibility. After observing that 7 divides 21, we can perform the following:
formula_25
after multiplying by 2, this becomes
formula_26
and then
formula_27
Eliminating the 21 gives
formula_28
and multiplying by −1 gives
formula_29
Either of the last two rules may be used, depending on which is easier to perform. They correspond to the rule "subtract twice the last digit from the rest".
Proof using modular arithmetic.
This section will illustrate the basic method; all the rules can be derived following the same procedure. The following requires a basic grounding in modular arithmetic; for divisibility other than by 2's and 5's the proofs rest on the basic fact that 10 mod "m" is invertible if 10 and "m" are relatively prime.
For 2"n" or 5"n":
Only the last "n" digits need to be checked.
formula_30
Representing "x" as formula_31
formula_32
and the divisibility of "x" is the same as that of "z".
For 7:
Since 10 × 5 ≡ 10 × (−2) ≡ 1 (mod 7) we can do the following:
Representing "x" as formula_33
formula_34
so "x" is divisible by 7 if and only if "y" − 2"z" is divisible by 7. | [
{
"math_id": 0,
"text": "p_1^n p_2^m p_3^q"
},
{
"math_id": 1,
"text": "\\overline{a_{2n} a_{2n-1} ... a_2a_1}"
},
{
"math_id": 2,
"text": "\\overline{a_{2n}a_{2n-1}...a_2a_1}\\mod 7"
},
{
"math_id": 3,
"text": "[\\sum_{k=1}^n(a_{2k}a_{2k-1}) \\times 10^{2k-2}] \\bmod 7"
},
{
"math_id": 4,
"text": "\\sum_{k=1}^n(a_{2k}a_{2k-1} \\times 10^{2k-2}) \\bmod 7"
},
{
"math_id": 5,
"text": "\\sum_{k=1}^n(a_{2k}a_{2k-1} \\bmod 7) \\times (10^{2k-2} \\bmod 7)"
},
{
"math_id": 6,
"text": "9-0+7-0+7-1 = 22 = 2\\times 11, "
},
{
"math_id": 7,
"text": "-"
},
{
"math_id": 8,
"text": "+"
},
{
"math_id": 9,
"text": "-1"
},
{
"math_id": 10,
"text": "10 \\equiv -1 \\bmod 11"
},
{
"math_id": 11,
"text": " \\overline{a_{n} a_{n-1}...a_1 a_0} = \\sum_{i=0}^n a_i 10^i \\equiv \\sum_{i=0}^n (-1)^i a_i \\bmod 11."
},
{
"math_id": 12,
"text": "10 = -3\\bmod 13"
},
{
"math_id": 13,
"text": "10^{-1} = 4\\bmod 13"
},
{
"math_id": 14,
"text": "D(n) \\equiv \\begin{cases} 9a+1, & \\mbox{if }n\\mbox{ = 10a+1} \\\\ 3a+1, & \\mbox{if }n\\mbox{ = 10a+3} \\\\ 7a+5, & \\mbox{if }n\\mbox{ = 10a+7} \\\\ a+1, & \\mbox{if }n\\mbox{ = 10a+9}\\end{cases} \\ "
},
{
"math_id": 15,
"text": "10 \\equiv 1 \\pmod{3}"
},
{
"math_id": 16,
"text": "10^n \\equiv 1^n \\equiv 1 \\pmod{3}"
},
{
"math_id": 17,
"text": "100\\cdot a + 10\\cdot b + 1\\cdot c \\equiv (1)a + (1)b + (1)c \\pmod{3}"
},
{
"math_id": 18,
"text": "10 \\equiv -1 \\pmod{11}"
},
{
"math_id": 19,
"text": "10^n \\equiv (-1)^n \\equiv \\begin{cases} 1, & \\mbox{if }n\\mbox{ is even} \\\\ -1, & \\mbox{if }n\\mbox{ is odd} \\end{cases} \\pmod{11}."
},
{
"math_id": 20,
"text": "1000\\cdot a + 100\\cdot b + 10\\cdot c + 1\\cdot d \\equiv (-1)a + (1)b + (-1)c + (1)d \\pmod{11}"
},
{
"math_id": 21,
"text": "100 \\cdot a + b"
},
{
"math_id": 22,
"text": "(98+2) \\cdot a + b"
},
{
"math_id": 23,
"text": "98 \\cdot a + 2 \\cdot a + b,"
},
{
"math_id": 24,
"text": "2 \\cdot a + b,"
},
{
"math_id": 25,
"text": "10 \\cdot a + b,"
},
{
"math_id": 26,
"text": "20 \\cdot a + 2 \\cdot b,"
},
{
"math_id": 27,
"text": "(21 - 1) \\cdot a + 2 \\cdot b."
},
{
"math_id": 28,
"text": " -1 \\cdot a + 2 \\cdot b,"
},
{
"math_id": 29,
"text": " a - 2 \\cdot b."
},
{
"math_id": 30,
"text": "10^n = 2^n \\cdot 5^n \\equiv 0 \\pmod{2^n \\mathrm{\\ or\\ } 5^n}"
},
{
"math_id": 31,
"text": "10^n \\cdot y + z,"
},
{
"math_id": 32,
"text": "x = 10^n \\cdot y + z \\equiv z \\pmod{2^n \\mathrm{\\ or\\ } 5^n}"
},
{
"math_id": 33,
"text": "10 \\cdot y + z,"
},
{
"math_id": 34,
"text": "-2x \\equiv y -2z \\pmod{7},"
}
] | https://en.wikipedia.org/wiki?curid=991210 |
9912921 | Zintl phase | Product of a chemical reaction between elements of periodic groups 1-2 and groups 13-16
In chemistry, a Zintl phase is a product of a reaction between a group 1 (alkali metal) or group 2 (alkaline earth metal) element and a main group metal or metalloid (from groups 13, 14, 15, or 16). It is characterized by intermediate metallic/ionic bonding. Zintl phases are a subgroup of brittle, high-melting intermetallic compounds that are diamagnetic or exhibit temperature-independent paramagnetism and are poor conductors or semiconductors.
This type of solid is named after German chemist Eduard Zintl who investigated them in the 1930s. The term "Zintl Phases" was first used by Laves in 1941. In his early studies, Zintl noted that there was an atomic volume contraction upon the formation of these products and realized that this could indicate cation formation. He suggested that the structures of these phases were ionic, with complete electron transfer from the more electropositive metal to the more electronegative main group element. The structure of the anion within the phase is then considered on the basis of the resulting electronic state. These ideas are further developed in the Zintl-Klemm-Busmann concept, where the polyanion structure should be similar to that of the isovalent element. Further, the anionic sublattice can be isolated as polyanions (Zintl ions) in solution and are the basis of a rich subfield of main group inorganic chemistry.
History.
A "Zintl Phase" was first observed in 1891 by M. Joannis, who noted an unexpected green colored solution after dissolving lead and sodium in liquid ammonia, indicating the formation of a new product. It was not until many years later, in 1930, that the stoichiometry of the new product was identified as Na4Pb94− by titrations performed by Zintl et al.; and it was not until 1970 that the structure was confirmed by crystallization with ethylenediamine (en) by Kummer.
In the intervening years and in the years since, many other reaction mixtures of metals were explored to provide a great number of examples of this type of system. There are hundreds of both compounds composed of group 14 elements and group 15 elements, plus dozens of others beyond those groups, all spanning a variety of different geometries. Corbett has contributed improvements to the crystallization of Zintl ions by demonstrating the use of chelating ligands, such as cryptands, as cation sequestering agents.
More recently, Zintl phase and ion reactivity in more complex systems, with organic ligands or transition metals, have been investigated, as well as their use in practical applications, such as for catalytic purposes or in materials science.
Zintl phases.
Zintl phases are intermetallic compounds that have a pronounced ionic bonding character. They are made up of a polyanionic substructure and group 1 or 2 counter ions, and their structure can be understood by a formal electron transfer from the electropositive element to the more electronegative element in their composition. Thus, the valence electron concentration (VEC) of the anionic element is increased, and it formally moves to the right in its row of the periodic table. Generally the anion does not reach an octet, so to reach that closed shell configuration, bonds are formed. The structure can be explained by the 8-N rule (replacing the number of valence electrons, N, by VEC), making it comparable to an isovalent element. The formed polyanionic substructures can be chains (one-dimensional), rings, and other two- or three-dimensional networks or molecule-like entities.
The Zintl line is a hypothetical boundary drawn between groups 13 and 14. It separates the columns based on the tendency for group 13 elements to form metals when reacted with electropositive group 1 or 2 elements and for group 14 and above to form ionic solids. The 'typical salts' formed in these reactions become more metallic as the main group element becomes heavier.
Synthesis.
Zintl phases can be prepared by conventional solid-state reactions, usually performed under an inert atmosphere or in a molten salt. Typical methods include direct reduction of the corresponding oxides and solution-phase reactions in liquid ammonia or mercury. The product can be purified in some cases via zone refining, though often careful annealing will result in large single crystals of a desired phase.
Characterization.
Many of the usual methods are useful for determining physical and structural properties of Zintl phases. Some Zintl phases can be decomposed into a Zintl ion—the polyanion that composes the anionic substructure of the phase—and counter ion, which can be studied as described below. The heat of formation of these phases can be evaluated. Often their magnitude is comparable to those of salt formation, providing evidence for the ionic character of these phases. Density measurements indicate a contraction of the product compared to reactants, similarly indicating ionic bonding within the phase. X-ray spectroscopy gives additional information about the oxidation state of the elements, and correspondingly the nature of their bonding. Conductivity and magnetization measurements can also be taken. Finally, the structure of a Zintl phase or ion is most reliably confirmed via X-ray crystallography.
Examples.
An illustrative example: there are two types of Zintl ions in K12Si17: 2× [Si4]4− (pseudo-P4; according to Wade's rules, 12 = 2n + 4 skeletal electrons, corresponding to a "nido" form derived from a trigonal bipyramid) and 1× [Si9]4− (according to Wade's rules, 22 = 2n + 4 skeletal electrons, corresponding to a "nido" form derived from a bicapped square antiprism).
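These skeletal-electron counts can be reproduced with a few lines of Python (a sketch only, using the usual counting convention for bare main-group clusters, in which each vertex atom keeps one exo lone pair and contributes its remaining valence electrons to the skeleton):

```python
def wade_classification(n_vertices: int, valence_electrons_per_atom: int, charge: int) -> str:
    """Classify a bare main-group cluster [E_n]^(charge) by Wade's rules."""
    # each vertex atom keeps one exo lone pair (2 e-) and donates the rest to the skeleton
    skeletal = n_vertices * (valence_electrons_per_atom - 2) - charge
    label = {2: "closo", 4: "nido", 6: "arachno"}.get(skeletal - 2 * n_vertices, "other")
    return f"{skeletal} skeletal electrons -> {label}"

print(wade_classification(4, 4, -4))  # [Si4]4-: 12 skeletal electrons -> nido
print(wade_classification(9, 4, -4))  # [Si9]4-: 22 skeletal electrons -> nido
```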
Examples from Müller's 1973 review paper with known structures are listed in the table below.
Exceptions.
There are examples of a new class of compounds that, on the basis of their chemical formulae, would appear to be Zintl phases, e.g., K8In11, which is metallic and paramagnetic. Molecular orbital calculations have shown that the anion is [In11]7− and that the extra electron is distributed over the cations and, possibly, the anion antibonding orbitals. Another exception is the metallic InBi. InBi fulfills the Zintl phase requisite of element–element bonds but not the requisite of the polyanionic structure fitting a normal valence compound, i.e., the Bi–Bi polyanionic structure does not correspond to a normal valence structure such as the diamond-like Tl− network in NaTl.
Zintl ions.
Zintl phases that contain molecule-like polyanions will often separate into their constituent anions and cations in liquid ammonia, ethylenediamine, crown ether, or cryptand solutions. Therefore, they are referred to as Zintl ions. The term 'clusters' is also used to emphasize them as groups with homonuclear bonding. The structures can be described by Wade's rules and occupy an area of transition between localized covalent bonds and delocalized skeletal bonding. Beyond the "aesthetic simplicity and beauty of their structures" and distinctive electronic properties, Zintl ions are also of interest in synthesis because of their unique and unpredictable behavior in solution.
The largest subcategory of Zintl ions is homoatomic clusters of group 14 or 15 elements. Some examples are listed below.
Many examples similarly exist for heteroatomic clusters where the polyanion is composed of more than one main group element. Some examples are listed below. Zintl ions are also capable of reacting with ligands and transition metals, and further heteroatomic examples (intermetalloid clusters) are discussed below. In some solvents, atom exchange can occur between heteroatomic clusters. Additionally, it is notable that fewer examples of large clusters exist.
Synthesis.
Zintl ions are typically prepared through one of two methods. The first is a direct reduction route performed at low temperature. In this method, dry ammonia is condensed over a mixture of the two (or more) metals under an inert atmosphere. The reaction initially produces solvated electrons in ammonia that reduce the more electronegative element over the course of the reaction. This reaction can be monitored by a color change from blue (solvated electrons) to the color of the Zintl phase. The second method, performed at higher temperatures, is to dissolve a Zintl phase in liquid ammonia or another polar aprotic solvent like ethylenediamine (on rare occasions DMF or pyridine is used). Some Zintl ions, such as Si- and Ge-based ions, can only be prepared via this indirect method because they cannot be reduced at low temperatures.
Characterization.
The structure of Zintl ions can be confirmed through X-ray crystallography. Corbett has also improved the crystallization of Zintl ions by demonstrating the use of chelating ligands, such as cryptands, as cation sequestering agents.
Many of the main group elements have NMR active nuclei, thus NMR experiments are also valuable for gaining structural and electronic information; they can reveal information about the flexibility of clusters. For example, differently charged species can be present in solution because the polyanions are highly reduced and may be oxidized by solvent molecules. NMR experiments have shown a low barrier to change and thus similar energies for different states. NMR is also useful for gaining information about the coupling between individual atoms of the polyanion and with the counter-ion, a coordinated transition metal, or ligand. Nucleus independent chemical shifts can also be an indicator for 3D aromaticity, which causes magnetic shielding at special points.
Additionally, EPR can be used to detect paramagnetism in relevant clusters, of which there are a number of examples of the [E9]3− type, among others.
Reactivity.
As highly reduced species in solution, Zintl ions offer many, and often unexpected, reaction possibilities, and their discrete nature positions them as potentially important starting materials in inorganic synthesis.
In solution, individual Zintl ions can react with each other to form oligomers and polymers. In fact, anions with high nuclearity can be viewed as oxidative coupling products of monomers. After oxidation, the clusters may sometimes persist as radicals that can be used as precursors in other reactions. Zintl ions can oxidize without the presence of specific oxidizing agents through solvent molecules or impurities, for example in the presence of cryptand, which is often used to aid crystallization.
Zintl ion clusters can be functionalized with a variety of ligands in a similar reaction to their oligomerization. As such, functionalization competes with those reactions and both can be observed to occur. Organic groups, for example phenyl, TMS, and bromomethane, form exo bonds to the electronegative main group atoms. These ligands can also stabilize high nuclearity clusters, in particular heteroatomic examples.
Similarly in solids, Zintl phases can incorporate hydrogen. Such Zintl phase hydrides can be either formed by direct synthesis of the elements or element hydrides in a hydrogen atmosphere or by a hydrogenation reaction of a pristine Zintl phase. Since hydrogen has a comparable electronegativity as the post-transition metal it is incorporated as part of the polyanionic spatial structure. There are two structural motifs present. A monatomic hydride can be formed occupying an interstitial site that is coordinated by cations exclusively (interstitial hydride) or it can bind covalently to the polyanion (polyanionic hydride).
The Zintl ion itself can also act as a ligand in transition metal complexes. This reactivity is usually seen in clusters composed of greater than 9 atoms, and it is more common for group 15 clusters. A change in geometry often accompanies complexation; however zero electrons are contributed from the metal to the complex, so the electron count with respect to Wade's rules does not change. In some cases the transition metal will cap the face of the cluster. Another mode of reaction is the formation of endohedral complexes where the metal is encapsulated inside the cluster. These types of complexes lend themselves to comparison with the solid state structure of the corresponding Zintl phase. These reactions tend to be unpredictable and highly dependent on temperature, among other reaction conditions.
Electronic structure and bonding.
Wade's rules.
The geometry and bonding of a Zintl ion cannot be easily described by classical two-electron two-center bonding theories; however, the geometries of Zintl ions can be well described by Wade's rules for boranes. Wade's rules offer an alternative model for the relationship between geometry and electron count in delocalized, electron-deficient systems. The rules were developed to predict the geometries of boranes from the number of electrons and can be applied to these polyanions by replacing the BH unit with a lone pair. Some unique clusters of Ge occur in non-deltahedral shapes that cannot be described by Wade's rules. The rules also become more convoluted in intermetallic clusters with transition metals, and consideration needs to be taken for the location of the additional electrons.
Zintl-Klemm-Busmann concept.
The Zintl-Klemm-Busmann concept describes how in an anionic cluster, the atoms arrange in typical geometries found for the element to the right of it on the periodic table. So “the anionic lattice is isometric with elemental lattices having the same number of valence electrons.” In this formulation, the average charge on each atom of the cluster can be calculated by:
formula_0
where "na" is number of anion atoms and VEC is the valence electron concentration per anion atom, then:
formula_1.
The number of bonds per anion predicts structure based on isoelectronic neighbor. This rule is also referred to as the 8 - N rule and can also be written as:
formula_2.
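As a worked illustration of this counting (a sketch only; the compounds and valence electron counts used here are standard textbook examples, not part of this section), the Zintl–Klemm prediction for NaTl and NaSi can be computed in a few lines of Python:

```python
def bonds_per_anion(cation_electrons: int, anion_electrons: int,
                    n_cations: int, n_anions: int) -> int:
    # VEC per anion atom, then the 8 - VEC bond count predicted by the Zintl-Klemm concept
    vec = (n_cations * cation_electrons + n_anions * anion_electrons) / n_anions
    return int(8 - vec)

# NaTl: VEC(Tl) = 4, so each Tl forms 4 bonds, consistent with the diamond-like Tl- network.
print(bonds_per_anion(1, 3, 1, 1))  # 4
# NaSi: VEC(Si) = 5, so each Si forms 3 bonds, like phosphorus (Si4 tetrahedra, pseudo-P4).
print(bonds_per_anion(1, 4, 1, 1))  # 3
```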
Not all phases follow the Zintl-Klemm-Busmann concept, particularly when there is a high content of either the electronegative or electropositive element. There are still other examples where this does not apply.
Electronic theory.
Wade's rules are successful in describing the geometry of the anionic sublattice of Zintl phases and of Zintl ions, but not their electronic structure. Other 'spherical shell models' use spherical harmonic wave functions for molecular orbitals (analogous to atomic orbitals) and describe the clusters as pseudo-elements. The Jellium model uses a spherical potential from the nuclei to give orbitals with global nodal properties. Again, this formulates the cluster as a 'super atom' with an electron configuration comparable to a single atom. The model is best applied to spherically symmetric systems, and two examples for which it works well are the icosahedral [Al13]− and [Sn@Cu12@Sn20]12− clusters. DFT or ab initio molecular orbital calculations similarly treat the clusters with atomic-like orbitals, and correspondingly label them S, P, D, etc. These closed shell configurations have prompted some investigation of 3D aromaticity. This concept was first suggested for fullerenes and corresponds to a 2(N + 1)^2 rule in the spherical shell model. An indicator of this phenomenon is a negative Nucleus Independent Chemical Shift (NICS) value at the center of the cluster or at certain additional high-symmetry points.
Use in catalysis and materials science.
Some Zintl ions show the ability to activate small molecules. One example from Dehnen and coworkers is the capture of O2 by the intermetallic cluster [Bi9{Ru(cod)}2]3−. Another ruthenium intermetallic cluster, [Ru@Sn9]6−, was used as a precursor to selectively disperse the CO2 hydrogenation catalyst Ru-SnOx onto CeO2, resulting in nearly 100% CO selectivity for methanation.
In materials science, [Ge9]4− has been used as a source of Ge in lithium-ion batteries, where it can be deposited in a microporous layer of alpha-Ge. The discrete nature of Zintl ions opens the possibility for the bottom-up synthesis of nanostructured semiconductors and the surface modification of solids. The oxidation and polymerization of Zintl ions may also be a source of new materials. For example, polymerization of Ge clusters was used to create guest-free germanium clathrate, in other words a particular allotrope of pure Ge.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{\\text{anion valence} + \\text{cation valence} }{n_a} = \\text{VEC}"
},
{
"math_id": 1,
"text": "8 - \\text{VEC} = \\text{number of bonds per anion atom}"
},
{
"math_id": 2,
"text": "\\frac{n_e + b_a - b_c}{n_a} = 8"
}
] | https://en.wikipedia.org/wiki?curid=9912921 |
9917583 | X-ray diffraction | Elastic interaction of x-rays with electrons
X-ray diffraction is a generic term for phenomena associated with changes in the direction of X-ray beams due to interactions with the electrons around atoms. It occurs due to elastic scattering, when there is no change in the energy of the waves. The resulting map of the directions of the X-rays far from the sample is called a diffraction pattern. It is different from X-ray crystallography which exploits X-ray diffraction to determine the arrangement of atoms in materials, and also has other components such as ways to map from experimental diffraction measurements to the positions of atoms.
This article provides an overview of X-ray diffraction, starting with the early history of X-rays and the discovery that their wavelengths are comparable to the spacings in crystals, so that they can be diffracted. In many cases these diffraction patterns can be interpreted using a single scattering or kinematical theory with conservation of energy (and hence of the magnitude of the wave vector). Many different types of X-ray sources exist, ranging from ones used in laboratories to higher-brightness synchrotron light sources. Similar diffraction patterns can be produced by related scattering techniques such as electron diffraction or neutron diffraction. If single crystals of sufficient size cannot be obtained, various other X-ray methods can be applied to obtain less detailed information; such methods include fiber diffraction, powder diffraction and (if the sample is not crystallized) small-angle X-ray scattering (SAXS).
History.
When Wilhelm Röntgen discovered X-rays in 1895, physicists were uncertain of the nature of X-rays, but suspected that they were waves of electromagnetic radiation. The Maxwell theory of electromagnetic radiation was well accepted, and experiments by Charles Glover Barkla showed that X-rays exhibited phenomena associated with electromagnetic waves, including transverse polarization and spectral lines akin to those observed in the visible wavelengths. Barkla created the x-ray notation for sharp spectral lines, noting in 1909 two separate energies, at first naming them "A" and "B"; supposing that there might be lines prior to "A", he began an alphabetic labelling with "K". Single-slit experiments in the laboratory of Arnold Sommerfeld suggested that X-rays had a wavelength of about 1 angstrom. X-rays are not only waves but also have particle properties; Sommerfeld coined the name Bremsstrahlung for the continuous spectra formed when electrons bombard a material. Albert Einstein introduced the photon concept in 1905, but it was not broadly accepted until 1922, when Arthur Compton confirmed it by the scattering of X-rays from electrons. The particle-like properties of X-rays, such as their ionization of gases, had prompted William Henry Bragg to argue in 1907 that X-rays were "not" electromagnetic radiation. Bragg's view proved unpopular and the observation of X-ray diffraction by Max von Laue in 1912 confirmed that X-rays are a form of electromagnetic radiation.
The idea that crystals could be used as a diffraction grating for X-rays arose in 1912 in a conversation between Paul Peter Ewald and Max von Laue in the English Garden in Munich. Ewald had proposed a resonator model of crystals for his thesis, but this model could not be validated using visible light, since the wavelength was much larger than the spacing between the resonators. Von Laue realized that electromagnetic radiation of a shorter wavelength was needed, and suggested that X-rays might have a wavelength comparable to the spacing in crystals. Von Laue worked with two technicians, Walter Friedrich and his assistant Paul Knipping, to shine a beam of X-rays through a copper sulfate crystal and record its diffraction pattern on a photographic plate. After being developed, the plate showed a large number of well-defined spots arranged in a pattern of intersecting circles around the spot produced by the central beam. The results were presented to the Bavarian Academy of Sciences and Humanities in June 1912 as "Interferenz-Erscheinungen bei Röntgenstrahlen" (Interference phenomena in X-rays). Von Laue developed a law that connects the scattering angles and the size and orientation of the unit-cell spacings in the crystal, for which he was awarded the Nobel Prize in Physics in 1914.
After Von Laue's pioneering research the field developed rapidly, most notably by physicists William Lawrence Bragg and his father William Henry Bragg. In 1912–1913, the younger Bragg developed Bragg's law, which connects the scattering with evenly spaced planes within a crystal. The Braggs, father and son, shared the 1915 Nobel Prize in Physics for their work in crystallography. The earliest structures were generally simple; as computational and experimental methods improved over the next decades, it became feasible to deduce reliable atomic positions for more complicated arrangements of atoms; see X-ray crystallography for more details.
Introduction to x-ray diffraction theory.
Basics.
Crystals are regular arrays of atoms, and X-rays are electromagnetic waves. Atoms scatter X-ray waves, primarily through the atoms' electrons. Just as an ocean wave striking a lighthouse produces secondary circular waves emanating from the lighthouse, so an X-ray striking an electron produces secondary spherical waves emanating from the electron. This phenomenon is known as elastic scattering, and the electron (or lighthouse) is known as the "scatterer". A regular array of scatterers produces a regular array of spherical waves. Although these waves cancel one another out in most directions through destructive interference, they add constructively in a few specific directions.
An intuitive understanding of X-ray diffraction can be obtained from the Bragg model of diffraction. In this model, a given reflection is associated with a set of evenly spaced sheets running through the crystal, usually passing through the centers of the atoms of the crystal lattice. The orientation of a particular set of sheets is identified by its three Miller indices ("h", "k", "l"), and their spacing by "d". William Lawrence Bragg proposed a model where the incoming X-rays are scattered specularly (mirror-like) from each plane; from that assumption, X-rays scattered from adjacent planes will combine constructively (constructive interference) when the angle θ between the plane and the X-ray results in a path-length difference that is an integer multiple "n" of the X-ray wavelength λ.
formula_0
A reflection is said to be "indexed" when its Miller indices (or, more correctly, its reciprocal lattice vector components) have been identified from the known wavelength and the scattering angle 2θ. Such indexing gives the unit-cell parameters, the lengths and angles of the unit-cell, as well as its space group.
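As a numerical illustration (a sketch; the lattice parameter and wavelength are example values, with the lattice parameter chosen to be roughly that of aluminium), Bragg's law predicts the scattering angles 2θ for a cubic crystal, where the interplanar spacing is d = a/√(h² + k² + l²):

```python
import math

wavelength = 1.5406   # Cu K-alpha, in angstroms
a = 4.05              # cubic lattice parameter in angstroms

for h, k, l in [(1, 1, 1), (2, 0, 0), (2, 2, 0)]:
    d = a / math.sqrt(h * h + k * k + l * l)
    theta = math.asin(wavelength / (2 * d))      # first-order reflection, n = 1
    print(f"({h}{k}{l})  d = {d:.3f} A   2theta = {2 * math.degrees(theta):.2f} deg")
```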
Ewald's sphere.
Each X-ray diffraction pattern represents a spherical slice of reciprocal space, as may be seen by the Ewald sphere construction. For a given incident wavevector k0 the only wavevectors with the same energy lie on the surface of a sphere. In the diagram, the wavevector k1 lies on the Ewald sphere and also ends at a reciprocal lattice vector g1, so it satisfies Bragg's law. In contrast, the wavevector k2 misses the reciprocal lattice point; it differs from g2 by the vector s, which is called the excitation error. For the large single crystals primarily used in crystallography, only the Bragg's law case matters; for electron diffraction and some other types of X-ray diffraction, non-zero values of the excitation error also matter.
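A minimal numerical sketch of the Ewald construction (purely illustrative; the 1/λ convention and the example angle are assumptions): a reciprocal lattice vector g is excited when kout = kin + g has the same length as kin, and the length mismatch is a simple measure of how far the point lies from the sphere.

```python
import numpy as np

def excitation_mismatch(k_in: np.ndarray, g: np.ndarray) -> float:
    """|k_in + g| - |k_in|; zero when the reciprocal lattice vector g lies on the Ewald sphere."""
    return float(np.linalg.norm(k_in + g) - np.linalg.norm(k_in))

k_len = 1 / 1.54                                   # |k| = 1/lambda convention, lambda = 1.54 A
k_in = np.array([0.0, 0.0, k_len])
two_theta = np.radians(38.5)
k_out = k_len * np.array([0.0, np.sin(two_theta), np.cos(two_theta)])  # elastically scattered beam
g_on_sphere = k_out - k_in                         # by construction satisfies the Bragg condition
print(round(excitation_mismatch(k_in, g_on_sphere), 6))          # ~0: this g is exactly excited
print(round(excitation_mismatch(k_in, 1.05 * g_on_sphere), 4))   # nonzero: off the sphere
```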
Scattering amplitudes.
X-ray scattering is determined by the density of electrons within the crystal. Since the energy of an X-ray is much greater than that of a valence electron, the scattering may be modeled as Thomson scattering, the elastic interaction of an electromagnetic ray with a charged particle.
The intensity of Thomson scattering for one particle with mass "m" and elementary charge "q" is:
formula_1
Hence the atomic nuclei, which are much heavier than an electron, contribute negligibly to the scattered X-rays. Consequently, the coherent scattering detected from an atom can be accurately approximated by analyzing the collective scattering from the electrons in the system.
The incoming X-ray beam has a polarization and should be represented as a vector wave; however, for simplicity, it will be represented here as a scalar wave. We will ignore the time dependence of the wave and just concentrate on the wave's spatial dependence. Plane waves can be represented by a wave vector kin, and so the incoming wave at time "t" = 0 is given by
formula_2
At a position r within the sample, consider a density of scatterers "f"(r); these scatterers produce a scattered spherical wave of amplitude proportional to the local amplitude of the incoming wave times the number of scatterers in a small volume "dV" about r
formula_3
where "S" is the proportionality constant.
Consider the fraction of scattered waves that leave with an outgoing wave-vector of kout and strike a screen (detector) at rscreen. Since no energy is lost (elastic, not inelastic scattering), the wavelengths are the same as are the magnitudes of the wave-vectors |kin| = |kout|. From the time that the photon is scattered at r until it is absorbed at rscreen, the photon undergoes a change in phase
formula_4
The net radiation arriving at rscreen is the sum of all the scattered waves throughout the crystal
formula_5
which may be written as a Fourier transform
formula_6
where g = kout – kin is a reciprocal lattice vector that satisfies Bragg's law and the Ewald construction mentioned above. The measured intensity of the reflection will be square of this amplitude
formula_7
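The Fourier-transform relation can be illustrated by evaluating a discrete structure-factor sum for a toy unit cell (a sketch only; the atom positions, unit scattering factors, and sign convention are example choices, not a definitive implementation):

```python
import numpy as np

def structure_factor(hkl, positions, f):
    """F(hkl) = sum_j f_j * exp(2*pi*i*(h*x_j + k*y_j + l*z_j)) over fractional coordinates."""
    hkl = np.asarray(hkl, dtype=float)
    return sum(fj * np.exp(2j * np.pi * np.dot(hkl, rj)) for fj, rj in zip(f, positions))

# toy body-centred cell: identical atoms at (0,0,0) and (1/2,1/2,1/2), unit scattering factor
positions = [np.zeros(3), np.full(3, 0.5)]
f = [1.0, 1.0]

for hkl in [(1, 0, 0), (1, 1, 0), (1, 1, 1), (2, 0, 0)]:
    F = structure_factor(hkl, positions, f)
    print(hkl, round(abs(F) ** 2, 3))   # intensity |F|^2: 0 when h+k+l is odd, 4 when even
```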
The above assumes that the crystalline regions are somewhat large, for instance microns across, but also not so large that the X-rays are scattered more than once. If either of these conditions is not met, then the diffracted intensities will be more complicated.
X-ray sources.
Rotating anode.
Small scale diffraction experiments can be done with a local X-ray tube source, typically coupled with an image plate detector. These have the advantage of being relatively inexpensive and easy to maintain, and allow for quick screening and collection of samples. However, the wavelength of the X-rays produced is limited by the availability of different anode materials. Furthermore, the intensity is limited by the power applied and cooling capacity available to avoid melting the anode. In such systems, electrons are boiled off of a cathode and accelerated through a strong electric potential of ~50 kV; having reached a high speed, the electrons collide with a metal plate, emitting "bremsstrahlung" and some strong spectral lines corresponding to the excitation of inner-shell electrons of the metal. The most common metal used is copper, which can be kept cool easily due to its high thermal conductivity, and which produces strong Kα and Kβ lines. The Kβ line is sometimes suppressed with a thin (~10 μm) nickel foil. The simplest and cheapest variety of sealed X-ray tube has a stationary anode (the Crookes tube) and runs with ~2 kW of electron beam power. The more expensive variety has a rotating-anode type source that runs with ~14 kW of e-beam power.
X-rays are generally filtered (by use of X-ray filters) to a single wavelength (made monochromatic) and collimated to a single direction before they are allowed to strike the crystal. The filtering not only simplifies the data analysis, but also removes radiation that degrades the crystal without contributing useful information. Collimation is done either with a collimator (basically, a long tube) or with an arrangement of gently curved mirrors. Mirror systems are preferred for small crystals (under 0.3 mm) or with large unit cells (over 150 Å).
Microfocus tube.
A more recent development is the microfocus tube, which can deliver at least as high a beam flux (after collimation) as rotating-anode sources but requires a beam power of only a few tens or hundreds of watts rather than several kilowatts.
Synchrotron radiation.
Synchrotron radiation sources are some of the brightest light sources on earth and are some of the most powerful tools available for X-ray diffraction and crystallography. X-ray beams are generated in synchrotrons which accelerate electrically charged particles, often electrons, to nearly the speed of light and confine them in a (roughly) circular loop using magnetic fields.
Synchrotrons are generally national facilities, each with several dedicated beamlines where data is collected without interruption. Synchrotrons were originally designed for use by high-energy physicists studying subatomic particles and cosmic phenomena. The largest component of each synchrotron is its electron storage ring. This ring is not a perfect circle, but a many-sided polygon. At each corner of the polygon, or sector, precisely aligned magnets bend the electron stream. As the electrons' path is bent, they emit bursts of energy in the form of X-rays.
The intense ionizing radiation can cause radiation damage to samples, particularly macromolecular crystals. Cryo crystallography can protect the sample from radiation damage, by freezing the crystal at liquid nitrogen temperatures (~100 K). Cryocrystallography methods are applied to home rotating-anode sources as well. However, synchrotron radiation frequently has the advantage of user-selectable wavelengths, allowing for anomalous scattering experiments which maximize the anomalous signal. This is critical in experiments such as single wavelength anomalous dispersion (SAD) and multi-wavelength anomalous dispersion (MAD).
Free-electron laser.
Free-electron lasers have been developed for use in X-ray diffraction and crystallography. These are the brightest X-ray sources currently available, with the X-rays coming in femtosecond bursts. The intensity of the source is such that atomic resolution diffraction patterns can be resolved for crystals otherwise too small for collection. However, the intense light source also destroys the sample, requiring multiple crystals to be shot. As each crystal is randomly oriented in the beam, hundreds of thousands of individual diffraction images must be collected in order to get a complete data set. This method, serial femtosecond crystallography, has been used in solving the structure of a number of protein crystal structures, sometimes noting differences with equivalent structures collected from synchrotron sources.
Related scattering techniques.
Other X-ray techniques.
Other forms of elastic X-ray scattering besides single-crystal diffraction include powder diffraction, small-angle X-ray scattering (SAXS) and several types of X-ray fiber diffraction, which was used by Rosalind Franklin in determining the double-helix structure of DNA. In general, single-crystal X-ray diffraction offers more structural information than these other techniques; however, it requires a sufficiently large and regular crystal, which is not always available.
These scattering methods generally use "monochromatic" X-rays, which are restricted to a single wavelength with minor deviations. A broad spectrum of X-rays (that is, a blend of X-rays with different wavelengths) can also be used to carry out X-ray diffraction, a technique known as the Laue method. This is the method used in the original discovery of X-ray diffraction. Laue scattering provides much structural information with only a short exposure to the X-ray beam, and is therefore used in structural studies of very rapid events (time resolved crystallography). However, it is not as well-suited as monochromatic scattering for determining the full atomic structure of a crystal and therefore works better with crystals with relatively simple atomic arrangements.
The Laue back reflection mode records X-rays scattered backwards from a broad spectrum source. This is useful if the sample is too thick for X-rays to transmit through it. The diffracting planes in the crystal are determined by knowing that the normal to the diffracting plane bisects the angle between the incident beam and the diffracted beam. A Greninger chart can be used to interpret the back reflection Laue photograph.
Electron diffraction.
Because they interact via Coulomb forces, the scattering of electrons by matter is 1000 or more times stronger than for X-rays. Hence electron beams produce strong multiple or dynamical scattering even for relatively thin crystals (>10 nm). While there are similarities between the diffraction of X-rays and electrons, as can be found in the book by John M. Cowley, the approach is different as it is based upon the original approach of Hans Bethe and solving the Schrödinger equation for relativistic electrons, rather than a kinematical or Bragg's law approach. Information about very small regions, down to single atoms, is possible. The range of applications for electron diffraction, transmission electron microscopy and transmission electron crystallography with high energy electrons is extensive; see the relevant links for more information and citations. In addition to transmission methods, low-energy electron diffraction is a technique where electrons are back-scattered off surfaces and has been extensively used to determine surface structures at the atomic scale, and reflection high-energy electron diffraction is another which is extensively used to monitor thin film growth.
Neutron diffraction.
Neutron diffraction is used for structure determination, although it has been difficult to obtain intense, monochromatic beams of neutrons in sufficient quantities. Traditionally, nuclear reactors have been used, although sources producing neutrons by spallation are becoming increasingly available. Being uncharged, neutrons scatter from the atomic nuclei rather than from the electrons. Therefore, neutron scattering is useful for observing the positions of light atoms with few electrons, especially hydrogen, which is essentially invisible in X-ray diffraction. Neutron scattering also has the property that the solvent can be made invisible by adjusting the ratio of normal water, H2O, and heavy water, D2O.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2 d\\sin\\theta = n\\lambda."
},
{
"math_id": 1,
"text": "\nI_o = I_e \\left(\\frac{q^4}{m^2c^4}\\right)\\frac{1+\\cos^22\\theta}{2} = I_e7.94\\times10^{-26}\\frac{1+\\cos^22\\theta}{2} = I_ef\n"
},
{
"math_id": 2,
"text": "A \\mathrm{e}^{2\\pi \\mathrm{i}\\mathbf{k}_{\\mathrm{in}} \\cdot \\mathbf{r}}."
},
{
"math_id": 3,
"text": "\\text{amplitude of scattered wave} = A \\mathrm{e}^{2\\pi \\mathrm{i}\\mathbf{k} \\cdot \\mathbf{r}} S f(\\mathbf{r}) \\, \\mathrm{d}V,"
},
{
"math_id": 4,
"text": "e^{2\\pi i \\mathbf{k}_\\text{out} \\cdot \\left( \\mathbf{r}_\\text{screen} - \\mathbf{r} \\right)}."
},
{
"math_id": 5,
"text": "A S \\int \\mathrm{d}\\mathbf{r} \\, f(\\mathbf{r}) \\mathrm{e}^{2\\pi \\mathrm{i} \\mathbf{k}_\\text{in} \\cdot \\mathbf{r}}\ne^{2\\pi i \\mathbf{k}_\\text{out} \\cdot \\left( \\mathbf{r}_\\text{screen} - \\mathbf{r} \\right)} =\nA S e^{2\\pi i \\mathbf{k}_\\text{out} \\cdot \\mathbf{r}_\\text{screen}}\n\\int \\mathrm{d}\\mathbf{r} \\, f(\\mathbf{r}) \\mathrm{e}^{2\\pi \\mathrm{i} \\left( \\mathbf{k}_\\text{in} - \\mathbf{k}_\\text{out} \\right) \\cdot \\mathbf{r}}, "
},
{
"math_id": 6,
"text": "A S \\mathrm{e}^{2\\pi \\mathrm{i} \\mathbf{k}_\\text{out} \\cdot \\mathbf{r}_\\text{screen}}\n\\int d\\mathbf{r} f(\\mathbf{r}) \\mathrm{e}^{-2\\pi \\mathrm{i} \\mathbf{g} \\cdot \\mathbf{r}} =\nA S \\mathrm{e}^{2\\pi \\mathrm{i} \\mathbf{k}_\\text{out} \\cdot \\mathbf{r}_\\text{screen}} F(\\mathbf{g}),"
},
{
"math_id": 7,
"text": "A^2 S^2 \\left|F(\\mathbf{g}) \\right|^2."
}
] | https://en.wikipedia.org/wiki?curid=9917583 |
991784 | Circumscribed sphere | Sphere touching all of a polyhedron's vertices
In geometry, a circumscribed sphere of a polyhedron is a sphere that contains the polyhedron and touches each of the polyhedron's vertices. The word circumsphere is sometimes used to mean the same thing, by analogy with the term "circumcircle". As in the case of two-dimensional circumscribed circles (circumcircles), the radius of a sphere circumscribed around a polyhedron P is called the circumradius of P, and the center point of this sphere is called the circumcenter of P.
Existence and optimality.
When it exists, a circumscribed sphere need not be the smallest sphere containing the polyhedron; for instance, the tetrahedron formed by a vertex of a cube and its three neighbors has the same circumsphere as the cube itself, but can be contained within a smaller sphere having the three neighboring vertices on its equator. However, the smallest sphere containing a given polyhedron is always the circumsphere of the convex hull of a subset of the vertices of the polyhedron.
In "De solidorum elementis" (circa 1630), René Descartes observed that, for a polyhedron with a circumscribed sphere, all faces have circumscribed circles, the circles where the plane of the face meets the circumscribed sphere. Descartes suggested that this necessary condition for the existence of a circumscribed sphere is sufficient, but it is not true: some bipyramids, for instance, can have circumscribed circles for their faces (all of which are triangles) but still have no circumscribed sphere for the whole polyhedron. However, whenever a simple polyhedron has a circumscribed circle for each of its faces, it also has a circumscribed sphere.
Related concepts.
The circumscribed sphere is the three-dimensional analogue of the circumscribed circle.
All regular polyhedra have circumscribed spheres, but most irregular polyhedra do not have one, since in general not all vertices lie on a common sphere. The circumscribed sphere (when it exists) is an example of a bounding sphere, a sphere that contains a given shape. It is possible to define the smallest bounding sphere for any polyhedron, and compute it in linear time.
Other spheres defined for some but not all polyhedra include a midsphere, a sphere tangent to all edges of a polyhedron, and an inscribed sphere, a sphere tangent to all faces of a polyhedron. In the regular polyhedra, the inscribed sphere, midsphere, and circumscribed sphere all exist and are concentric.
When the circumscribed sphere is the set of infinite limiting points of hyperbolic space, a polyhedron that it circumscribes is known as an ideal polyhedron.
Point on the circumscribed sphere.
There are five convex regular polyhedra, known as the Platonic solids. All Platonic solids have circumscribed spheres. For an arbitrary point formula_0 on the circumscribed sphere of a Platonic solid with formula_1 vertices, if formula_2 are the distances to the vertices formula_3, then
formula_4
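As an informal numerical check (added here for illustration, not part of the original text), the short Python sketch below verifies this identity for the cube, using the eight vertices (±1, ±1, ±1) and a randomly chosen point M on the circumscribed sphere of radius √3.

```python
import numpy as np

rng = np.random.default_rng(0)

# Vertices of a cube with circumradius sqrt(3).
vertices = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)])
n = len(vertices)                       # n = 8

# Random point M on the circumscribed sphere.
v = rng.normal(size=3)
M = np.sqrt(3) * v / np.linalg.norm(v)

d2 = np.sum((vertices - M) ** 2, axis=1)    # squared distances MA_i^2
lhs = 4 * np.sum(d2) ** 2                   # 4 * (sum MA_i^2)^2
rhs = 3 * n * np.sum(d2 ** 2)               # 3n * sum MA_i^4
print(lhs, rhs)    # the two sides agree up to floating-point rounding
```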
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "MA_i"
},
{
"math_id": 3,
"text": "A_i"
},
{
"math_id": 4,
"text": "4(MA_1^{2}+MA_2^{2}+...+MA_n^{2})^2=3n(MA_1^{4}+MA_2^{4}+...+MA_n^{4})."
}
] | https://en.wikipedia.org/wiki?curid=991784 |
9919374 | Elitserien (speedway) | Swedish motorcycle speedway tier one league
Elitserien (literally, "The Elite Series") is the highest league in the league system of speedway in Sweden and currently comprises the top eight Swedish speedway teams. The first season began in 1982. Before that Allsvenskan was the highest division. The winner of the Elitserien is crowned the Swedish Speedway Team Championship winner.
History.
From the start of Swedish league speedway in 1948 until 1981 the highest speedway league in Sweden was called Division 1. In 1982 Elitserien was formed, consisting of only six teams where the top four teams qualified for the playoffs. Getingarna became the first winners after defeating Njudungarna in the finals. The league size was increased to seven teams in 1983 and to eight teams in 1984, a league size that was maintained until the 1996 season when the league expanded to nine teams and then to ten teams the following season. In 1986 the playoffs lost its championship status; instead the winner of the league became the Swedish champions. However, in 2000, playoffs were reintroduced as a championship final.
Rules.
A season consists of 18 rounds in which all teams meet twice, once home and once away. Teams score 2 points for a win, 1 point for a draw and 0 points for a loss. 1 additional bonus point is awarded to the team with the highest rider points aggregate from the home and away matches against each opponent. The first six teams in the league after 18 rounds qualify for the playoffs.
Playoffs.
The playoffs decide the winner of the Swedish Speedway Team Championship. They are run in three rounds: quarter-finals, semi-finals and the final. The first six teams of the main series are split into two quarter-final groups with three teams each. The two best teams in each quarter-final group qualify for the semi-finals. In the semi-finals and the final the teams meet once home and once away. The team with the highest rider points aggregate after the two matches goes through to the next round.
Squads.
Each team has a squad of nine riders with an initial match average above 2,00 and an unlimited number of riders with an initial match average of 2,00. In addition most teams have a farm club agreement with a team in another league, which makes it possible to farm Swedish riders with an Elitserien initial match average less than or equal to 6,00. A team can use a maximum of two riders from the farm club in each match.
Team selection.
For each match a team has to select seven riders from its squad. The selected team must include at least one Swedish junior rider (under 21). The total initial match average of the team must not exceed 42,00 and must not be lower than 35,00. The two riders with the lowest consecutive match averages must be placed as reserves in jackets number 6 and 7. An additional reserve can be added to the team in jacket number 8 if the rider is a Swedish junior rider and has a lower consecutive match average than all other riders in the team. The 8th rider has no scheduled rides.
Initial match average
In the beginning of each season, each rider in the league is provided an initial match average by the Swedish motorcycle sports association, Svemo. This average remains unchanged throughout the season and is used for team selection.
Calculation of initial match averages
From the 2010 season a rider's initial match average is equal to the Calculated Match Average of the last season the rider competed in the Swedish league system. Before 2010 the last consecutive match average was used.
The first initial match average for a Swedish rider that has not yet competed in the Swedish league system is 2,00. Foreign riders that have not yet competed for a Swedish team get a first initial match average equal to the rider's last official home league average, bonus points excluded. Since some foreign leagues are not of the same quality as the Swedish leagues, the initial match averages for foreign riders are modified by multiplying them by a factor according to the table below. However, a foreign rider cannot be awarded a first initial match average below 5,00.
Consecutive match average
A rider's consecutive match average is calculated after each match and is used to determine which riders are allowed to race as reserves in a match. The last consecutive match average of the season becomes the initial match average for the next season.
Calculation of consecutive match average
At the start of the season the consecutive match average is equal to the initial match average. The consecutive match average is calculated according to the following formula
formula_0
"Where:"
N is the New consecutive match average
C is the Current consecutive match average
M is the total number of league rounds
I is the first consecutive match average of the season. i.e. the Initial match average
p is the total points in the latest match
h is the number of heats the rider participated in during the latest match
The new consecutive match average is rounded down to two decimals.
"Example:"
A rider has a current consecutive match average (C) of 8,11 and an initial match average (I) of 7,76. In the following match he rides 5 heats (h) scoring 14 points (p). The number of rounds in his league (M) is 12. This leaves the rider with a new consecutive match average of 8,39
formula_1
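The same calculation, expressed as a small Python helper (added here for illustration; the function name is arbitrary), reproduces the example above and rounds the result down to two decimals as the rules require:

```python
import math

def new_consecutive_average(C, I, M, points, heats):
    """N = (C*M + (points*4/heats - I)) / M, rounded down to two decimals."""
    N = (C * M + (points * 4 / heats - I)) / M
    return math.floor(N * 100) / 100

# Example from the text: C = 8.11, I = 7.76, 14 points from 5 heats, M = 12 rounds.
print(new_consecutive_average(8.11, 7.76, 12, 14, 5))   # 8.39
```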
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N = \\frac{ C * M + ((p * h / 4) - I)} { M}"
},
{
"math_id": 1,
"text": "8,3966667 = \\frac{ 8,11 * 12 + ((14 * 5 / 4) - 7,76)} { 12}"
}
] | https://en.wikipedia.org/wiki?curid=9919374 |
9920 | Electronic oscillator | Type of electronic circuit
An electronic oscillator is an electronic circuit that produces a periodic, oscillating or alternating current (AC) signal, usually a sine wave, square wave or a triangle wave, powered by a direct current (DC) source. Oscillators are found in many electronic devices, such as radio receivers, television sets, radio and television broadcast transmitters, computers, computer peripherals, cellphones, radar, and many other devices.
Oscillators are often characterized by the frequency of their output signal:
There are two general types of electronic oscillators: the linear or harmonic oscillator, and the nonlinear or relaxation oscillator. The two types are fundamentally different in how oscillation is produced, as well as in the characteristic type of output signal that is generated.
The most-common linear oscillator in use is the crystal oscillator, in which the output frequency is controlled by a piezo-electric resonator consisting of a vibrating quartz crystal. Crystal oscillators are ubiquitous in modern electronics, being the source for the clock signal in computers and digital watches, as well as a source for the signals generated in radio transmitters and receivers. As a crystal oscillator’s “native” output waveform is sinusoidal, a signal-conditioning circuit may be used to convert the output to other waveform types, such as the square wave typically utilized in computer clock circuits.
Harmonic oscillators.
Linear or harmonic oscillators generate a sinusoidal (or nearly-sinusoidal) signal. There are two types:
Feedback oscillator.
The most common form of linear oscillator is an electronic amplifier such as a transistor or operational amplifier connected in a feedback loop with its output fed back into its input through a frequency selective electronic filter to provide positive feedback. When the power supply to the amplifier is switched on initially, electronic noise in the circuit provides a non-zero signal to get oscillations started. The noise travels around the loop and is amplified and filtered until very quickly it converges on a sine wave at a single frequency.
Feedback oscillator circuits can be classified according to the type of frequency selective filter they use in the feedback loop:
Negative-resistance oscillator.
In addition to the feedback oscillators described above, which use two-port amplifying active elements such as transistors and operational amplifiers, linear oscillators can also be built using one-port (two terminal) devices with negative resistance, such as magnetron tubes, tunnel diodes, IMPATT diodes and Gunn diodes. Negative-resistance oscillators are usually used at high frequencies in the microwave range and above, since at these frequencies feedback oscillators perform poorly due to excessive phase shift in the feedback path.
In negative-resistance oscillators, a resonant circuit, such as an LC circuit, crystal, or cavity resonator, is connected across a device with negative differential resistance, and a DC bias voltage is applied to supply energy. A resonant circuit by itself is "almost" an oscillator; it can store energy in the form of electronic oscillations if excited, but because it has electrical resistance and other losses the oscillations are damped and decay to zero. The negative resistance of the active device cancels the (positive) internal loss resistance in the resonator, in effect creating a resonator with no damping, which generates spontaneous continuous oscillations at its resonant frequency.
The negative-resistance oscillator model is not limited to one-port devices like diodes; feedback oscillator circuits with two-port amplifying devices such as transistors and tubes also have negative resistance. At high frequencies, three terminal devices such as transistors and FETs are also used in negative resistance oscillators. At high frequencies these devices do not need a feedback loop, but with certain loads applied to one port can become unstable at the other port and show negative resistance due to internal feedback. The negative resistance port is connected to a tuned circuit or resonant cavity, causing them to oscillate. High-frequency oscillators in general are designed using negative-resistance techniques.
List of harmonic oscillator circuits.
Some of the many harmonic oscillator circuits are listed below:
Relaxation oscillator.
A nonlinear or relaxation oscillator produces a non-sinusoidal output, such as a square, sawtooth or triangle wave. It consists of an energy-storing element (a capacitor or, more rarely, an inductor) and a nonlinear switching device (a latch, Schmitt trigger, or negative-resistance element) connected in a feedback loop. The switching device periodically charges the storage element with energy and when its voltage or current reaches a threshold discharges it again, thus causing abrupt changes in the output waveform.
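A minimal time-stepping sketch of this charge/discharge mechanism is given below (added here for illustration; the component values and thresholds are assumptions, and the circuit modeled is a generic comparator-plus-RC relaxation oscillator rather than any specific design discussed in this article). With switching thresholds at one third and two thirds of the supply, the analytic period is 2RC·ln 2, and the simulation reproduces it.

```python
import numpy as np

R, C = 100e3, 10e-9                  # assumed values: 100 kOhm, 10 nF
Vdd = 5.0                            # supply rails 0 V and 5 V
Vlow, Vhigh = Vdd / 3, 2 * Vdd / 3   # assumed comparator thresholds

dt, t_end = 1e-7, 20e-3
v, out = 0.0, Vdd                    # capacitor voltage and comparator output
switch_times = []

for step in range(int(t_end / dt)):
    v += (out - v) / (R * C) * dt    # RC charging toward the output rail
    if out == Vdd and v >= Vhigh:    # upper threshold reached: start discharging
        out = 0.0
        switch_times.append(step * dt)
    elif out == 0.0 and v <= Vlow:   # lower threshold reached: start charging
        out = Vdd
        switch_times.append(step * dt)

period = 2 * np.mean(np.diff(switch_times))     # two switchings per cycle
print("simulated period:", period)
print("analytic 2*R*C*ln(2):", 2 * R * C * np.log(2.0))
```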
Square-wave relaxation oscillators are used to provide the clock signal for sequential logic circuits such as timers and counters, although crystal oscillators are often preferred for their greater stability. Triangle-wave or sawtooth oscillators are used in the timebase circuits that generate the horizontal deflection signals for cathode ray tubes in analogue oscilloscopes and television sets. They are also used in voltage-controlled oscillators (VCOs), inverters and switching power supplies, dual-slope analog to digital converters (ADCs), and in function generators to generate square and triangle waves for testing equipment. In general, relaxation oscillators are used at lower frequencies and have poorer frequency stability than linear oscillators.
Ring oscillators are built of a ring of active delay stages. Generally the ring has an odd number of inverting stages, so that there is no single stable state for the internal ring voltages. Instead, a single transition propagates endlessly around the ring.
Some of the more common relaxation oscillator circuits are listed below:
Voltage-controlled oscillator (VCO).
An oscillator can be designed so that the oscillation frequency can be varied over some range by an input voltage or current. These voltage controlled oscillators are widely used in phase-locked loops, in which the oscillator's frequency can be locked to the frequency of another oscillator. These are ubiquitous in modern communications circuits, used in filters, modulators, demodulators, and forming the basis of frequency synthesizer circuits which are used to tune radios and televisions.
Radio frequency VCOs are usually made by adding a varactor diode to the tuned circuit or resonator in an oscillator circuit. Changing the DC voltage across the varactor changes its capacitance, which changes the resonant frequency of the tuned circuit. Voltage controlled relaxation oscillators can be constructed by charging and discharging the energy storage capacitor with a voltage controlled current source. Increasing the input voltage increases the rate of charging the capacitor, decreasing the time between switching events.
Theory of feedback oscillators.
A feedback oscillator circuit consists of two parts connected in a feedback loop; an amplifier formula_0 and an electronic filter formula_1. The filter's purpose is to limit the frequencies that can pass through the loop so the circuit only oscillates at the desired frequency. Since the filter and wires in the circuit have resistance they consume energy and the amplitude of the signal drops as it passes through the filter. The amplifier is needed to increase the amplitude of the signal to compensate for the energy lost in the other parts of the circuit, so the loop will oscillate, as well as supply energy to the load attached to the output.
Frequency of oscillation - the Barkhausen criterion.
To determine the frequencies formula_2 at which a feedback oscillator circuit will oscillate, the feedback loop is thought of as broken at some point (see diagrams) to give an input and output port. A sine wave is applied to the input formula_3 and the amplitude and phase of the sine wave after going through the loop formula_4 are calculated
formula_5 and formula_6 so formula_7
Since in the complete circuit formula_8 is connected to formula_9, for oscillations to exist
formula_10
The ratio of output to input of the loop, formula_11, is called the loop gain. So the condition for oscillation is that the loop gain must be one
formula_12
Since formula_13 is a complex number with two parts, a magnitude and an angle, the above equation actually consists of two conditions:
formula_14
so that after a trip around the loop the sine wave is the same amplitude. It can be seen intuitively that if the loop gain were greater than one, the amplitude of the sinusoidal signal would increase as it travels around the loop, resulting in a sine wave that grows exponentially with time, without bound. If the loop gain were less than one, the signal would decrease around the loop, resulting in an exponentially decaying sine wave that decreases to zero.
formula_15
Equations (1) and (2) are called the "Barkhausen stability criterion". It is a necessary but not a sufficient criterion for oscillation, so there are some circuits which satisfy these equations that will not oscillate. An equivalent condition often used instead of the Barkhausen condition is that the circuit's closed loop transfer function (the circuit's complex impedance at its output) have a pair of poles on the imaginary axis.
In general, the phase shift of the feedback network increases with increasing frequency so there are only a few discrete frequencies (often only one) which satisfy the second equation. If the amplifier gain formula_0 is high enough that the loop gain is unity (or greater, see Startup section) at one of these frequencies, the circuit will oscillate at that frequency. Many amplifiers such as common-emitter transistor circuits are "inverting", meaning that their output voltage decreases when their input increases. In these the amplifier provides 180° phase shift, so the circuit will oscillate at the frequency at which the feedback network provides the other 180° phase shift.
At frequencies well below the poles of the amplifying device, the amplifier will act as a pure gain formula_0, but if the oscillation frequency formula_16 is near the amplifier's cutoff frequency formula_17, within formula_18, the active device can no longer be considered a 'pure gain', and it will contribute some phase shift to the loop.
An alternate mathematical stability test sometimes used instead of the Barkhausen criterion is the Nyquist stability criterion. This has a wider applicability than the Barkhausen, so it can identify some of the circuits which pass the Barkhausen criterion but do not oscillate.
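As an illustration of how the two Barkhausen conditions pick out the oscillation frequency, the Python sketch below (added here; the component values are arbitrary assumptions) sweeps the loop gain of an amplifier of gain A = 3 around a Wien-type RC feedback network, whose transfer function is β(jω) = 1/(3 + j(ωRC − 1/ωRC)). The loop phase crosses zero and |Aβ| reaches unity at ω0 = 1/RC, so the circuit can oscillate there.

```python
import numpy as np

R, C, A = 10e3, 10e-9, 3.0          # assumed values: 10 kOhm, 10 nF, amplifier gain 3
w = np.logspace(3, 6, 200000)       # angular-frequency sweep, rad/s

beta = 1.0 / (3.0 + 1j * (w * R * C - 1.0 / (w * R * C)))   # Wien-type network
loop_gain = A * beta

i = np.argmin(np.abs(np.angle(loop_gain)))   # frequency where the loop phase is ~0
print("zero-phase frequency      :", w[i] / (2 * np.pi), "Hz")
print("predicted f0 = 1/(2*pi*R*C):", 1 / (2 * np.pi * R * C), "Hz")
print("|A*beta| at that frequency:", abs(loop_gain[i]))     # ~1, Barkhausen satisfied
```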
Frequency stability.
Temperature changes, other environmental changes, aging, and manufacturing tolerances will cause component values to "drift" away from their designed values. Changes in "frequency determining" components such as the tank circuit in LC oscillators will cause the oscillation frequency to change, so for a constant frequency these components must have stable values. How stable the oscillator's frequency is to other changes in the circuit, such as changes in values of other components, gain of the amplifier, the load impedance, or the supply voltage, is mainly dependent on the Q factor ("quality factor") of the feedback filter. Since the "amplitude" of the output is constant due to the nonlinearity of the amplifier (see Startup section below), changes in component values cause changes in the phase shift formula_19 of the feedback loop. Since oscillation can only occur at frequencies where the phase shift is a multiple of 360°, formula_20, shifts in component values cause the oscillation frequency formula_16 to change to bring the loop phase back to 360n°. The amount of frequency change formula_21 caused by a given phase change formula_22 depends on the slope of the loop phase curve at formula_16, which is determined by the formula_23
formula_24 so formula_25
RC oscillators have the equivalent of a very low formula_26, so the phase changes very slowly with frequency, therefore a given phase change will cause a large change in the frequency. In contrast, LC oscillators have tank circuits with high formula_26 (~10^2). This means the phase shift of the feedback network increases rapidly with frequency near the resonant frequency of the tank circuit. So a large change in phase causes only a small change in frequency. Therefore the circuit's oscillation frequency is very close to the natural resonant frequency of the tuned circuit, and doesn't depend much on other components in the circuit. The quartz crystal resonators used in crystal oscillators have even higher formula_26 (10^4 to 10^6) and their frequency is very stable and independent of other circuit components.
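A rough numerical illustration of this relation (added here; the Q values are typical orders of magnitude only, not data from the original text): for the same small phase disturbance, the resulting fractional frequency shift Δω/ω0 = −Δφ/(2Q) shrinks as Q grows.

```python
import math

dphi = math.radians(1.0)    # an assumed 1 degree shift in loop phase
for name, Q in [("RC network", 0.5), ("LC tank", 100.0), ("quartz crystal", 1e5)]:
    dw_over_w0 = dphi / (2 * Q)              # magnitude of the fractional shift
    print(f"{name:15s} Q={Q:>8.0f}  |df/f0| ~ {dw_over_w0:.2e}")
```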
Tunability.
The frequency of RC and LC oscillators can be tuned over a wide range by using variable components in the filter. A microwave cavity can be tuned mechanically by moving one of the walls. In contrast, a quartz crystal is a mechanical resonator whose resonant frequency is mainly determined by its dimensions, so a crystal oscillator's frequency is only adjustable over a very narrow range, a tiny fraction of one percent.
Its frequency can be changed slightly by using a trimmer capacitor in series or parallel with the crystal.
Startup and amplitude of oscillation.
The Barkhausen criterion above, eqs. (1) and (2), merely gives the frequencies at which steady-state oscillation is possible, but says nothing about the amplitude of the oscillation, whether the amplitude is stable, or whether the circuit will start oscillating when the power is turned on. For a practical oscillator two additional requirements are necessary:
formula_27
A typical rule of thumb is to make the small signal loop gain at the oscillation frequency 2 or 3. When the power is turned on, oscillation is started by the power turn-on transient or random electronic noise present in the circuit. Noise guarantees that the circuit will not remain "balanced" precisely at its unstable DC equilibrium point (Q point) indefinitely. Due to the narrow passband of the filter, the response of the circuit to a noise pulse will be sinusoidal, it will excite a small sine wave of voltage in the loop. Since for small signals the loop gain is greater than one, the amplitude of the sine wave increases exponentially.
During startup, while the amplitude of the oscillation is small, the circuit is approximately linear, so the analysis used in the Barkhausen criterion is applicable. When the amplitude becomes large enough that the amplifier becomes nonlinear, technically the frequency domain analysis used in normal amplifier circuits is no longer applicable, so the "gain" of the circuit is undefined. However the filter attenuates the harmonic components produced by the nonlinearity of the amplifier, so the fundamental frequency component formula_28 mainly determines the loop gain (this is the "harmonic balance" analysis technique for nonlinear circuits).
The sine wave cannot grow indefinitely; in all real oscillators some nonlinear process in the circuit limits its amplitude, reducing the gain as the amplitude increases, resulting in stable operation at some constant amplitude. In most oscillators this nonlinearity is simply the saturation (limiting or clipping) of the amplifying device, the transistor, vacuum tube or op-amp. The maximum voltage swing of the amplifier's output is limited by the DC voltage provided by its power supply. Another possibility is that the output may be limited by the amplifier slew rate.
As the amplitude of the output nears the power supply voltage rails, the amplifier begins to saturate on the peaks (top and bottom) of the sine wave, flattening or "clipping" the peaks. Since the output of the amplifier can no longer increase with increasing input, further increases in amplitude cause the equivalent gain of the amplifier and thus the loop gain to decrease. The amplitude of the sine wave, and the resulting clipping, continues to grow until the loop gain is reduced to unity, formula_29, satisfying the Barkhausen criterion, at which point the amplitude levels off and steady state operation is achieved, with the output a slightly distorted sine wave with peak amplitude determined by the supply voltage. This is a stable equilibrium; if the amplitude of the sine wave increases for some reason, increased clipping of the output causes the loop gain formula_30 to drop below one temporarily, reducing the sine wave's amplitude back to its unity-gain value. Similarly if the amplitude of the wave decreases, the decreased clipping will cause the loop gain to increase above one, increasing the amplitude.
The amount of harmonic distortion in the output is dependent on how much excess loop gain the circuit has:
An exception to the above are high Q oscillator circuits such as crystal oscillators; the narrow bandwidth of the crystal removes the harmonics from the output, producing a 'pure' sinusoidal wave with almost no distortion even with large loop gains.
Design procedure.
Since oscillators depend on nonlinearity for their operation, the usual linear frequency domain circuit analysis techniques used for amplifiers based on the Laplace transform, such as root locus and gain and phase plots (Bode plots), cannot capture their full behavior. To determine startup and transient behavior and calculate the detailed shape of the output waveform, electronic circuit simulation computer programs like SPICE are used. A typical design procedure for oscillator circuits is to use linear techniques such as the Barkhausen stability criterion or Nyquist stability criterion to design the circuit, use a rule of thumb to choose the loop gain, then simulate the circuit on computer to make sure it starts up reliably and to determine the nonlinear aspects of operation such as harmonic distortion. Component values are tweaked until the simulation results are satisfactory. The distorted oscillations of real-world (nonlinear) oscillators are called limit cycles and are studied in nonlinear control theory.
Amplitude-stabilized oscillators.
In applications where a 'pure' very low distortion sine wave is needed, such as precision signal generators, a nonlinear component is often used in the feedback loop that provides a 'slow' gain reduction with amplitude. This stabilizes the loop gain at an amplitude below the saturation level of the amplifier, so it does not saturate and "clip" the sine wave. Resistor-diode networks and FETs are often used for the nonlinear element. An older design uses a thermistor or an ordinary incandescent light bulb; both provide a resistance that increases with temperature as the current through them increases.
As the amplitude of the signal current through them increases during oscillator startup, the increasing resistance of these devices reduces the loop gain. The essential characteristic of all these circuits is that the nonlinear gain-control circuit must have a long time constant, much longer than a single period of the oscillation. Therefore over a single cycle they act as virtually linear elements, and so introduce very little distortion. The operation of these circuits is somewhat analogous to an automatic gain control (AGC) circuit in a radio receiver. The Wien bridge oscillator is a widely used circuit in which this type of gain stabilization is used.
Frequency limitations.
At high frequencies it becomes difficult to physically implement feedback oscillators because of shortcomings of the components. Since at high frequencies the tank circuit has very small capacitance and inductance, parasitic capacitance and parasitic inductance of component leads and PCB traces become significant. These may create unwanted feedback paths between the output and input of the active device, creating instability and oscillations at unwanted frequencies (parasitic oscillation). Parasitic feedback paths inside the active device itself, such as the interelectrode capacitance between output and input, make the device unstable. The input impedance of the active device falls with frequency, so it may load the feedback network. As a result, stable feedback oscillators are difficult to build for frequencies above 500 MHz, and negative resistance oscillators are usually used for frequencies above this.
History.
The first practical oscillators were based on electric arcs, which were used for lighting in the 19th century. The current through an arc light is unstable due to its negative resistance, and often breaks into spontaneous oscillations, causing the arc to make hissing, humming or howling sounds which had been noticed by Humphry Davy in 1821, Benjamin Silliman in 1822, Auguste Arthur de la Rive in 1846, and David Edward Hughes in 1878. Ernst Lecher in 1888 showed that the current through an electric arc could be oscillatory.
An oscillator was built by Elihu Thomson in 1892 by placing an LC tuned circuit in parallel with an electric arc and included a magnetic blowout. Independently, in the same year, George Francis FitzGerald realized that if the damping resistance in a resonant circuit could be made zero or negative, the circuit would produce oscillations, and, unsuccessfully, tried to build a negative resistance oscillator with a dynamo, what would now be called a parametric oscillator. The arc oscillator was rediscovered and popularized by William Duddell in 1900. Duddell, a student at London Technical College, was investigating the hissing arc effect. He attached an LC circuit (tuned circuit) to the electrodes of an arc lamp, and the negative resistance of the arc excited oscillation in the tuned circuit. Some of the energy was radiated as sound waves by the arc, producing a musical tone. Duddell demonstrated his oscillator before the London Institute of Electrical Engineers by sequentially connecting different tuned circuits across the arc to play the national anthem "God Save the Queen". Duddell's "singing arc" did not generate frequencies above the audio range. In 1902 Danish physicists Valdemar Poulsen and P. O. Pederson were able to increase the frequency produced into the radio range by operating the arc in a hydrogen atmosphere with a magnetic field, inventing the Poulsen arc radio transmitter, the first continuous wave radio transmitter, which was used through the 1920s.
The vacuum-tube feedback oscillator was invented around 1912, when it was discovered that feedback ("regeneration") in the recently invented audion (triode) vacuum tube could produce oscillations. At least six researchers independently made this discovery, although not all of them can be said to have a role in the invention of the oscillator. In the summer of 1912, Edwin Armstrong observed oscillations in audion radio receiver circuits and went on to use positive feedback in his invention of the regenerative receiver. Austrian Alexander Meissner independently discovered positive feedback and invented oscillators in March 1913. Irving Langmuir at General Electric observed feedback in 1913. Fritz Lowenstein may have preceded the others with a crude oscillator in late 1911. In Britain, H. J. Round patented amplifying and oscillating circuits in 1913. In August 1912, Lee De Forest, the inventor of the audion, had also observed oscillations in his amplifiers, but he didn't understand the significance and tried to eliminate it until he read Armstrong's patents in 1914, which he promptly challenged. Armstrong and De Forest fought a protracted legal battle over the rights to the "regenerative" oscillator circuit which has been called "the most complicated patent litigation in the history of radio". De Forest ultimately won before the Supreme Court in 1934 on technical grounds, but most sources regard Armstrong's claim as the stronger one.
The first and most widely used relaxation oscillator circuit, the astable multivibrator, was invented in 1917 by French engineers Henri Abraham and Eugene Bloch. They called their cross-coupled, dual-vacuum-tube circuit a "multivibrateur", because the square-wave signal it produced was rich in harmonics, compared to the sinusoidal signal of other vacuum-tube oscillators.
Vacuum-tube feedback oscillators became the basis of radio transmission by 1920. However, the triode vacuum tube oscillator performed poorly above 300 MHz because of interelectrode capacitance. To reach higher frequencies, new "transit time" (velocity modulation) vacuum tubes were developed, in which electrons traveled in "bunches" through the tube. The first of these was the Barkhausen–Kurz oscillator (1920), the first tube to produce power in the UHF range. The most important and widely used were the klystron (R. and S. Varian, 1937) and the cavity magnetron (J. Randall and H. Boot, 1940).
Mathematical conditions for feedback oscillations, now called the Barkhausen criterion, were derived by Heinrich Georg Barkhausen in 1921. The first analysis of a nonlinear electronic oscillator model, the Van der Pol oscillator, was done by Balthasar van der Pol in 1927. He showed that the stability of the oscillations (limit cycles) in actual oscillators was due to the nonlinearity of the amplifying device. He originated the term "relaxation oscillation" and was first to distinguish between linear and relaxation oscillators. Further advances in mathematical analysis of oscillation were made by Hendrik Wade Bode and Harry Nyquist in the 1930s. In 1969 Kaneyuki Kurokawa derived necessary and sufficient conditions for oscillation in negative-resistance circuits, which form the basis of modern microwave oscillator design.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "\\beta(j\\omega)"
},
{
"math_id": 2,
"text": "\\omega_0\\;=\\;2\\pi f_0"
},
{
"math_id": 3,
"text": "v_i(t) = V_ie^{j\\omega t}"
},
{
"math_id": 4,
"text": "v_o = V_o e^{j(\\omega t + \\phi)}"
},
{
"math_id": 5,
"text": "v_o = A v_f\\,"
},
{
"math_id": 6,
"text": "v_f = \\beta(j\\omega) v_i \\,"
},
{
"math_id": 7,
"text": "v_o = A\\beta(j\\omega) v_i\\,"
},
{
"math_id": 8,
"text": "v_o"
},
{
"math_id": 9,
"text": "v_i"
},
{
"math_id": 10,
"text": "v_o(t) = v_i(t)"
},
{
"math_id": 11,
"text": "{v_o \\over v_i} = A\\beta(j\\omega)"
},
{
"math_id": 12,
"text": "A\\beta(j\\omega_0) = 1\\,"
},
{
"math_id": 13,
"text": "A\\beta(j\\omega) "
},
{
"math_id": 14,
"text": "|A||\\beta(j\\omega_0)| = 1\\, \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\text{(1)} "
},
{
"math_id": 15,
"text": "\\angle A + \\angle \\beta = 2 \\pi n \\qquad n \\in 0, 1, 2... \\, \\qquad\\qquad \\text{(2)}"
},
{
"math_id": 16,
"text": "\\omega_0"
},
{
"math_id": 17,
"text": "\\omega_C"
},
{
"math_id": 18,
"text": "0.1\\omega_C"
},
{
"math_id": 19,
"text": "\\phi\\;=\\;\\angle A\\beta(j\\omega)"
},
{
"math_id": 20,
"text": "\\phi\\;=\\;360n^\\circ"
},
{
"math_id": 21,
"text": "\\Delta \\omega"
},
{
"math_id": 22,
"text": "\\Delta \\phi"
},
{
"math_id": 23,
"text": "Q "
},
{
"math_id": 24,
"text": "{d\\phi \\over d\\omega}\\Bigg|_{\\omega_0} = -{2Q \\over \\omega_0}\\,"
},
{
"math_id": 25,
"text": "\\Delta \\omega = -{\\omega_0 \\over 2Q}\\Delta \\phi \\,"
},
{
"math_id": 26,
"text": "Q"
},
{
"math_id": 27,
"text": "|A\\beta(j\\omega_0)| > 1\\,"
},
{
"math_id": 28,
"text": "\\sin \\omega_0 t"
},
{
"math_id": 29,
"text": "|A\\beta(j\\omega_0)|\\;=\\;1\\,"
},
{
"math_id": 30,
"text": "|A\\beta(j\\omega_0)|"
}
] | https://en.wikipedia.org/wiki?curid=9920 |
9920139 | Ground dipole | Radio antenna that radiates extremely low frequency electromagnetic waves
In radio communication, a ground dipole, also referred to as an earth dipole antenna, transmission line antenna, and in technical literature as a horizontal electric dipole (HED), is a huge, specialized type of radio antenna that radiates extremely low frequency (ELF) electromagnetic waves. It is the only type of transmitting antenna that can radiate practical amounts of power in the frequency range of 3 Hz to 3 kHz, commonly called ELF waves. A ground dipole consists of two ground electrodes buried in the earth, separated by tens to hundreds of kilometers, linked by overhead transmission lines to a power plant transmitter located between them. Alternating current electricity flows in a giant loop between the electrodes through the ground, radiating ELF waves, so the ground is part of the antenna. To be most effective, ground dipoles must be located over certain types of underground rock formations. The idea was proposed by U.S. Dept. of Defense physicist Nicholas Christofilos in 1959.
Although small ground dipoles have been used for years as sensors in geological and geophysical research, their only use as antennas has been in a few military ELF transmitter facilities to communicate with submerged submarines. Besides small research and experimental antennas, four full-scale ground dipole installations are known to have been constructed; two by the U.S. Navy at Republic, Michigan and Clam Lake, Wisconsin, one by the Russian Navy on the Kola peninsula near Murmansk, Russia, and one in India at the INS Kattabomman naval base. The U.S. facilities were used between 1985 and 2004 but are now decommissioned.
Antennas at ELF frequencies.
Although the official ITU definition of extremely low frequencies is 3 Hz to 30 Hz, the wider band of frequencies of 3 Hz to 3 kHz, with corresponding wavelengths from 100,000 km to 100 km, is used for ELF communication, and these waves are commonly called ELF waves. The frequency used in the U.S. and Russian transmitters, about 80 Hz, generates waves 3750 km (2300 miles) long, roughly 30% of the Earth's diameter. ELF waves have been used in very few manmade communications systems because of the difficulty of building efficient antennas for such long waves. Ordinary types of antenna (half-wave dipoles and quarter-wave monopoles) cannot be built for such extremely long waves because of their size. A half wave dipole for 80 Hz would be 1162 miles long. So even the largest practical antennas for ELF frequencies are very electrically short, very much smaller than the wavelength of the waves they radiate. The disadvantage of this is that the efficiency of an antenna drops as its size is reduced below a wavelength. An antenna's radiation resistance, and the amount of power it radiates, is proportional to (L/λ)^2, where L is its length and λ is the wavelength. So even physically large ELF antennas have very small radiation resistance, and so radiate only a tiny fraction of the input power as ELF waves; most of the power applied to them is dissipated as heat in various ohmic resistances in the antenna. ELF antennas must be tens to hundreds of kilometers long, and must be driven by powerful transmitters in the megawatt range, to produce even a few watts of ELF radiation. Fortunately, the attenuation of ELF waves with distance is so low (1–2 dB per 1000 km) that a few watts of radiated power is enough to communicate worldwide.
A second problem stems from the required polarization of the waves. ELF waves only propagate long distances in vertical polarization, with the direction of the magnetic field lines horizontal and the electric field lines vertical. Vertically oriented antennas are required to generate vertically polarized waves. Even if sufficiently large conventional antennas could be built on the surface of the Earth, these would generate horizontally polarized, not vertically polarized waves.
History.
Submarines when submerged are shielded by seawater from all ordinary radio signals, and therefore are cut off from communication with military command authorities. VLF radio waves can penetrate 50–75 feet into seawater and have been used since WWII to communicate with submarines, but the submarine must rise close to the surface, making it vulnerable to detection. In 1958, the realization that ELF waves could penetrate deeper into seawater, to normal submarine operating depths led U.S. physicist Nicholas Christofilos to suggest that the U.S. Navy use them to communicate with submarines. The U.S. military researched many different types of antenna for use at ELF frequencies. Cristofilos proposed applying currents to the Earth to create a vertical loop antenna, and it became clear that this was the most practical design. The feasibility of the ground dipole idea was tested in 1962 with a 42 km leased power line in Wyoming, and in 1963 with a 176 km prototype wire antenna extending from West Virginia to North Carolina.
How a ground dipole works.
A ground dipole functions as an enormous vertically oriented loop antenna ("see drawing, right"). It consists of two widely separated electrodes ("G") buried in the ground, connected by overhead transmission cables to a transmitter ("P") located between them. The alternating current from the transmitter ("I") travels in a loop through one transmission line, kilometers deep into bedrock from one ground electrode to the other, and back through the other transmission line. This creates an alternating magnetic field ("H") through the loop, which radiates ELF waves. Due to their low frequency, ELF waves have a large skin depth and can penetrate a significant distance through earth, so it doesn't matter that half the antenna is under the ground. The axis of the magnetic field produced is horizontal, so it generates vertically polarized waves. The radiation pattern of the antenna is directional, a dipole pattern, with two lobes (maxima) in the plane of the loop, off the ends of the transmission lines. In the U.S. installations two ground dipoles are used, oriented perpendicular to each other, to allow the beam to be steered in any direction by altering the relative phase of the currents in the antennas.
The amount of power radiated by a loop antenna is proportional to ("IA")^2, where I is the AC current in the loop and A is the area enclosed. To radiate practical power at ELF frequencies, the loop has to carry a current of hundreds of amperes and enclose an area of at least several square miles. Christofilos found that the lower the electrical conductivity of the underlying rock, the deeper the current will go, and the larger the effective loop area. Radio frequency current will penetrate into the ground to a depth equal to the "skin depth" of the ground at that frequency, which is inversely proportional to the square root of ground conductivity σ. The ground dipole forms a loop with effective area of "A" = "L δ", where L is the total length of the transmission lines and δ is the skin depth. Thus, ground dipoles are sited over low conductivity underground rock formations (this contrasts with ordinary radio antennas, which require "good" earth conductivity for a low resistance ground connection for their transmitters). The two U.S. Navy antennas were located in the Upper Peninsula of Michigan, on the Canadian Shield (Laurentian Shield) formation, which has unusually low conductivity of 2×10^−4 siemens/meter, resulting in an increase in antenna efficiency of 20 dB. The earth conductivity at the site of the Russian transmitter is even lower.
Because of their lack of civilian applications, little information about ground dipoles is available in antenna technical literature.
U.S. Navy ELF antennas.
After initially considering several larger systems (Project Sanguine), the U.S. Navy constructed two ELF transmitter facilities, one at Clam Lake, Wisconsin and the other at Republic, Michigan, 145 miles apart, transmitting at 76 Hz. They could operate independently, or phase synchronized as one antenna for greater output power. The Clam Lake site, the initial test facility, transmitted its first signal in 1982 and began operation in 1985, while the Republic site became operational in 1989. With an input power of 2.6 megawatts, the total radiated ELF output power of both sites working together was 8 watts. However, due to the low attenuation of ELF waves this tiny radiated power was able to communicate with submarines over about half the Earth's surface.
Both transmitters were shut down in 2004. The official Navy explanation was that advances in VLF communication systems had made them unnecessary.
Russian Navy ZEVS antennas.
The Russian Navy operates an ELF transmitter facility, named ZEVS ("Zeus"), to communicate with its submarines, located 30 km southeast of Murmansk on the Kola peninsula in northern Russia. Signals from it were detected in the 1990s at Stanford University and elsewhere. It normally operates at 82 Hz, using MSK (minimum shift keying) modulation, although it reportedly can cover the frequency range from 20 to 250 Hz. It reportedly consists of two parallel ground dipole antennas 60 km long, driven at currents of 200–300 amperes. Calculations from intercepted signals indicate it is 10 dB more powerful than the U.S. transmitters. Unlike them it is used for geophysical research in addition to military communications.
Indian Navy antennas.
The Indian Navy has an operational ELF communication facility at the INS Kattabomman naval base, in Tamil Nadu, to communicate with its Arihant class and Akula class submarines.
Radiated power.
The total power radiated by a ground dipole is
formula_0
where f is the frequency, I is the RMS current in the loop, L is the length of the transmission line, c is the speed of light, h is the height above ground of the ionosphere’s D layer, and σ is the ground conductivity.
The radiated power of an electrically small loop antenna normally scales with the fourth power of the frequency, but at ELF frequencies the effects of the ionosphere result in a less severe reduction in power proportional to the square of frequency.
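Evaluating this expression with round numbers loosely based on the figures quoted in this article gives a radiated power of a few watts from megawatt-level input. The Python sketch below is added here for illustration; the antenna current, line length and D-layer height are assumptions, not the actual design values of any of the installations described above.

```python
import math

f     = 76.0      # Hz, frequency used by the U.S. transmitters
I     = 300.0     # A, assumed antenna current
L     = 45e3      # m, assumed transmission-line length (~28 miles)
c     = 3.0e8     # m/s, speed of light
h     = 75e3      # m, assumed effective height of the ionospheric D layer
sigma = 2e-4      # S/m, ground conductivity quoted for the Laurentian Shield

P = (math.pi**2 * f**2 * I**2 * L**2) / (2 * c**2 * h * sigma)
print(f"radiated ELF power ~ {P:.1f} W")    # on the order of a few watts
```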
Receiving antennas.
Ground dipoles are not needed for reception of ELF signals, although some radio amateurs use small ones for this purpose. Instead, various loop and ferrite coil antennas have been used for reception.
The requirements for receiving antennas at ELF frequencies are far less stringent than for transmitting antennas: in ELF receivers, noise in the signal is dominated by the large atmospheric noise in the band. Even the tiny signal captured by a small, inefficient receiving antenna contains noise that greatly exceeds the small amount of noise generated in the receiver itself. Because the outside noise is what limits reception, very little power from the antenna is needed for the intercepted signal to overwhelm the internal noise, and hence small receive antennas can be used with no disadvantage.
Footnotes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P = \\frac {\\pi^2 f^2 I^2 L^2}{2 c^2 h \\sigma} \\,"
}
] | https://en.wikipedia.org/wiki?curid=9920139 |
9920876 | Undefined (mathematics) | Expression which is not assigned an interpretation
In mathematics, the term undefined is often used to refer to an expression which is not assigned an interpretation or a value (such as an indeterminate form, which has the possibility of assuming different values). The term can take on several different meanings depending on the context. For example:
Undefined terms.
In ancient times, geometers attempted to define every term. For example, Euclid defined a point as "that which has no part". In modern times, mathematicians recognize that attempting to define every word inevitably leads to circular definitions, and therefore leave some terms (such as "point") undefined (see primitive notion for more).
This more abstract approach allows for fruitful generalizations. In topology, a topological space may be defined as a set of points endowed with certain properties, but in the general setting, the nature of these "points" is left entirely undefined. Likewise, in category theory, a category consists of "objects" and "arrows", which are again primitive, undefined terms. This allows such abstract mathematical theories to be applied to very diverse concrete situations.
In arithmetic.
The expression formula_2 is undefined in arithmetic, as explained in division by zero (the formula_3 expression is used in calculus to represent an indeterminate form).
Mathematicians have different opinions as to whether 0⁰ should be defined to equal 1, or be left undefined.
Values for which functions are undefined.
The set of numbers for which a function is defined is called the "domain" of the function. If a number is not in the domain of a function, the function is said to be "undefined" for that number. Two common examples are formula_4, which is undefined for formula_1, and formula_5, which is undefined (in the real number system) for negative formula_6.
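In a programming environment the same situations show up as run-time errors; a minimal Python illustration of the two examples just given:

```python
# Evaluating functions outside their domain: 1/x at x = 0 and the real square
# root of a negative number both fail at run time.
import math

try:
    1 / 0
except ZeroDivisionError as e:
    print("1/0:", e)

try:
    math.sqrt(-1)
except ValueError as e:
    print("sqrt(-1):", e)
```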
In trigonometry.
In trigonometry, for all formula_7, the functions formula_8 and formula_9 are undefined for all formula_10, while the functions formula_11 and formula_12 are undefined for all formula_13.
In complex analysis.
In complex analysis, a point formula_14 where a holomorphic function is undefined is called a singularity. One distinguishes between removable singularities (i.e., the function can be extended holomorphically to formula_15), poles (i.e., the function can be extended meromorphically to formula_15), and essential singularities (i.e., no meromorphic extension to formula_15 can exist).
In computability theory.
Notation using ↓ and ↑.
In computability theory, if formula_16 is a partial function on formula_17 and formula_18 is an element of formula_17, then this is written as formula_19, and is read as ""f"("a") is "defined"."
If formula_18 is not in the domain of formula_16, then this is written as formula_20, and is read as "formula_21 is "undefined"".
It is important to distinguish the "logic of existence" (the standard one) from the "logic of definiteness".
Neither arrow is well-defined as a predicate in the logic of existence, which normally uses the semantics of total functions. The expression f(x) is a term and therefore has some value, for example formula_22, but at the same time formula_22 can be a legitimate value of a function.
Therefore the predicate "defined" does not respect equality, and so it is not well-defined.
The logic of definiteness has a different predicate calculus; for example, specializing a formula with a universal quantifier requires the term to be well-defined:
formula_23
Moreover, it requires the introduction of a quasi-equality notion, which makes it necessary to reformulate the axioms.
The symbols of infinity.
In analysis, measure theory and other mathematical disciplines, the symbol formula_24 is frequently used to denote an infinite pseudo-number, along with its negative, formula_25. The symbol has no well-defined meaning by itself, but an expression like formula_26 is shorthand for a divergent sequence that eventually becomes larger than any given real number.
Performing standard arithmetic operations with the symbols formula_27 is undefined. Some extensions, though, define the following conventions of addition and multiplication:
formula_28 for all formula_29;
formula_30 for all formula_31;
formula_32 for all formula_33.
No sensible extension of addition and multiplication with formula_24 exists in the following cases:
formula_34;
formula_35 (although in measure theory, this is often defined as formula_36);
formula_37.
For more detail, see extended real number line.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " f(x)=\\frac{1}{x} "
},
{
"math_id": 1,
"text": "x=0"
},
{
"math_id": 2,
"text": " \\frac{n}{0}, n \\ne 0 "
},
{
"math_id": 3,
"text": " \\frac{0}{0}"
},
{
"math_id": 4,
"text": " f(x)=\\frac{1}{x}"
},
{
"math_id": 5,
"text": " f(x)=\\sqrt{x}"
},
{
"math_id": 6,
"text": " x "
},
{
"math_id": 7,
"text": "n \\in \\mathbb{Z}"
},
{
"math_id": 8,
"text": "\\tan \\theta"
},
{
"math_id": 9,
"text": "\\sec \\theta"
},
{
"math_id": 10,
"text": "\\theta = \\pi \\left(n - \\frac{1}{2}\\right)"
},
{
"math_id": 11,
"text": "\\cot \\theta"
},
{
"math_id": 12,
"text": "\\csc \\theta"
},
{
"math_id": 13,
"text": "\\theta = \\pi n"
},
{
"math_id": 14,
"text": "z\\in\\mathbb{C}"
},
{
"math_id": 15,
"text": "z"
},
{
"math_id": 16,
"text": " f"
},
{
"math_id": 17,
"text": " S"
},
{
"math_id": 18,
"text": " a"
},
{
"math_id": 19,
"text": " f(a)\\downarrow"
},
{
"math_id": 20,
"text": " f(a)\\uparrow"
},
{
"math_id": 21,
"text": " f(a)"
},
{
"math_id": 22,
"text": "\\emptyset"
},
{
"math_id": 23,
"text": "(\\forall x:\\varphi) \\& (t\\downarrow) \\Longrightarrow \\varphi[t/x]"
},
{
"math_id": 24,
"text": "\\infty"
},
{
"math_id": 25,
"text": " -\\infty"
},
{
"math_id": 26,
"text": "\\left\\{a_n\\right\\}\\rightarrow\\infty"
},
{
"math_id": 27,
"text": "\\pm\\infty"
},
{
"math_id": 28,
"text": "x+\\infty=\\infty"
},
{
"math_id": 29,
"text": " x \\in\\R\\cup\\{\\infty\\}"
},
{
"math_id": 30,
"text": "-\\infty+x=-\\infty"
},
{
"math_id": 31,
"text": " x\\in\\R\\cup\\{-\\infty\\}"
},
{
"math_id": 32,
"text": "x\\cdot\\infty=\\infty"
},
{
"math_id": 33,
"text": " x\\in\\R^{+}"
},
{
"math_id": 34,
"text": "\\infty-\\infty"
},
{
"math_id": 35,
"text": "0\\cdot\\infty"
},
{
"math_id": 36,
"text": "0"
},
{
"math_id": 37,
"text": "\\frac{\\infty}{\\infty}"
},
{
"math_id": 38,
"text": "-\\infty [{1(0)}]"
}
] | https://en.wikipedia.org/wiki?curid=9920876 |
9924 | Electronic mixer | An electronic mixer is a device that combines two or more electrical or electronic signals into one or two composite output signals. There are two basic circuits that both use the term "mixer", but they are very different types of circuits: additive mixers and multiplicative mixers. Additive mixers are also known as analog adders to distinguish from the related digital adder circuits.
Simple additive mixers use Kirchhoff's circuit laws to add the currents of two or more signals together, and this terminology ("mixer") is only used in the realm of audio electronics where audio mixers are used to add together audio signals such as voice signals, music signals, and sound effects.
Multiplicative mixers multiply together two time-varying input signals instantaneously (instant-by-instant). If the two input signals are both sinusoids of specified frequencies f1 and f2, then the output of the mixer will contain two new sinusoids: one at the sum frequency f1 + f2 and one at the difference frequency |f1 − f2|.
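This behaviour is easy to verify numerically. The following Python sketch (with arbitrarily chosen sample rate and test frequencies) multiplies two sinusoids and inspects the spectrum of the product; only the sum and difference frequencies appear.

```python
import numpy as np

fs = 10_000          # sample rate, Hz (arbitrary choice)
t = np.arange(0, 1.0, 1 / fs)
f1, f2 = 1_000, 150  # input frequencies, Hz (arbitrary choices)

# Instantaneous product of the two sinusoids (an ideal multiplicative mixer).
product = np.sin(2 * np.pi * f1 * t) * np.sin(2 * np.pi * f2 * t)

spectrum = np.abs(np.fft.rfft(product))
freqs = np.fft.rfftfreq(len(product), 1 / fs)
peaks = freqs[spectrum > 0.1 * spectrum.max()]
print(peaks)  # expected: peaks at |f1 - f2| = 850 Hz and f1 + f2 = 1150 Hz
```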
Any nonlinear electronic block driven by two signals with frequencies f1 and f2 will generate intermodulation (mixing) products. A multiplier (which is a nonlinear device) will ideally generate only the sum and difference frequencies, whereas an arbitrary nonlinear block will also generate signals at 2·f1 − 3·f2, etc. Because any nonlinearity suffices for mixing, ordinary nonlinear amplifiers or even single diodes have been used as mixers in place of a more complex multiplier. A multiplier, however, usually has the advantage of rejecting – at least partly – undesired higher-order intermodulation products, and of providing larger conversion gain.
Additive mixers.
Additive mixers add two or more signals, giving out a composite signal that contains the frequency components of each of the source signals. The simplest additive mixers are resistor networks, and thus purely passive, while more complex matrix mixers employ active components such as buffer amplifiers for impedance matching and better isolation.
Multiplicative mixers.
An ideal multiplicative mixer produces an output signal equal to the product of the two input signals. In communications, a multiplicative mixer is often used together with an oscillator to modulate signal frequencies. A multiplicative mixer can be coupled with a filter to either up-convert or down-convert an input signal frequency, but it is more commonly used to down-convert to a lower frequency to allow for simpler filter designs, as done in superheterodyne receivers. In many typical circuits, the single output signal actually contains multiple waveforms, namely those at the sum and difference of the two input frequencies, as well as harmonics. The desired output signal may be obtained by removing the other signal components with a filter.
Mathematical treatment.
The received signal can be represented as
formula_0
and that of the local oscillator can be represented as
formula_1
For simplicity, assume that the output "I" of the detector is proportional to the square of the amplitude:
formula_2
formula_3
formula_4
formula_5
formula_6
formula_7
The output has high frequency (formula_8, formula_9 and formula_10) and constant components. In heterodyne detection, the high frequency components and usually the constant components are filtered out, leaving the intermediate (beat) frequency at formula_11. The amplitude of this last component is proportional to the amplitude of the signal radiation. With appropriate signal analysis the phase of the signal can be recovered as well.
If formula_12 is equal to formula_13, then the beat component is a recovered version of the original signal, with the amplitude equal to the product of formula_14 and formula_15; that is, the received signal is amplified by mixing with the local oscillator. This is the basis for a direct-conversion receiver.
Implementations.
Multiplicative mixers have been implemented in many ways. The most popular are Gilbert cell mixers, diode mixers, diode ring mixers (ring modulation) and switching mixers. Diode mixers take advantage of the non-linearity of diode devices to produce the desired multiplication in the squared term. They are very inefficient as most of the power output is in other unwanted terms which need filtering out. Inexpensive AM radios still use diode mixers.
Electronic mixers are usually made with transistors and/or diodes arranged in a balanced circuit or even a double-balanced circuit. They are readily manufactured as monolithic integrated circuits or hybrid integrated circuits. They are designed for a wide variety of frequency ranges, and they are mass-produced to tight tolerances by the hundreds of thousands, making them relatively cheap.
Double-balanced mixers are very widely used in microwave communications, satellite communications, ultrahigh frequency (UHF) communications transmitters, radio receivers, and radar systems.
Gilbert cell mixers are an arrangement of transistors that multiplies the two signals.
Switching mixers use arrays of field-effect transistors or vacuum tubes. These are used as electronic switches, to alternate the signal direction. They are controlled by the signal being mixed. They are especially popular with digitally controlled radios. Switching mixers pass more power and usually insert less distortion than Gilbert cell mixers. | [
{
"math_id": 0,
"text": "E_\\mathrm{sig} \\cos(\\omega_\\mathrm{sig}t+\\varphi)\\,"
},
{
"math_id": 1,
"text": "E_\\mathrm{LO} \\cos(\\omega_\\mathrm{LO}t).\\,"
},
{
"math_id": 2,
"text": "I\\propto \\left( E_\\mathrm{sig}\\cos(\\omega_\\mathrm{sig}t+\\varphi) + E_\\mathrm{LO}\\cos(\\omega_\\mathrm{LO}t) \\right)^2"
},
{
"math_id": 3,
"text": " =\\frac{E_\\mathrm{sig}^2}{2}\\left( 1+\\cos(2\\omega_\\mathrm{sig}t+2\\varphi) \\right)"
},
{
"math_id": 4,
"text": " + \\frac{E_\\mathrm{LO}^2}{2}(1+\\cos(2\\omega_\\mathrm{LO}t)) "
},
{
"math_id": 5,
"text": " + E_\\mathrm{sig}E_\\mathrm{LO} \\left[\n\\cos((\\omega_\\mathrm{sig}+\\omega_\\mathrm{LO})t+\\varphi)\n+ \\cos((\\omega_\\mathrm{sig}-\\omega_\\mathrm{LO})t+\\varphi)\n\\right]\n"
},
{
"math_id": 6,
"text": " =\\underbrace{\\frac{E_\\mathrm{sig}^2+E_\\mathrm{LO}^2}{2}}_{constant\\;component}+\\underbrace{\\frac{E_\\mathrm{sig}^2}{2}\\cos(2\\omega_\\mathrm{sig}t+2\\varphi) + \\frac{E_\\mathrm{LO}^2}{2}\\cos(2\\omega_\\mathrm{LO}t) + E_\\mathrm{sig}E_\\mathrm{LO} \\cos((\\omega_\\mathrm{sig}+\\omega_\\mathrm{LO})t+\\varphi)}_{high\\;frequency\\;component}"
},
{
"math_id": 7,
"text": " + \\underbrace{E_\\mathrm{sig}E_\\mathrm{LO} \\cos((\\omega_\\mathrm{sig}-\\omega_\\mathrm{LO})t+\\varphi)}_{beat\\;component}.\n"
},
{
"math_id": 8,
"text": "2\\omega_\\mathrm{sig}"
},
{
"math_id": 9,
"text": "2\\omega_\\mathrm{LO}"
},
{
"math_id": 10,
"text": "\\omega_\\mathrm{sig}+\\omega_\\mathrm{LO}"
},
{
"math_id": 11,
"text": "\\omega_\\mathrm{sig}-\\omega_\\mathrm{LO}"
},
{
"math_id": 12,
"text": "\\omega_\\mathrm{LO}"
},
{
"math_id": 13,
"text": "\\omega_\\mathrm{sig} "
},
{
"math_id": 14,
"text": " E_\\mathrm{sig} "
},
{
"math_id": 15,
"text": "E_\\mathrm{LO} "
}
] | https://en.wikipedia.org/wiki?curid=9924 |
992611 | Polonium-210 | Isotope of polonium
Polonium-210 (210Po, Po-210, historically radium F) is an isotope of polonium. It undergoes alpha decay to stable 206Pb with a half-life of 138.376 days (about <templatestyles src="Fraction/styles.css" />4+1⁄2 months), the longest half-life of all naturally occurring polonium isotopes (210–218Po). First identified in 1898, and also marking the discovery of the element polonium, 210Po is generated in the decay chain of uranium-238 and radium-226. 210Po is a prominent contaminant in the environment, mostly affecting seafood and tobacco. Its extreme toxicity is attributed to intense radioactivity, mostly due to alpha particles, which easily cause radiation damage, including cancer in surrounding tissue. The specific activity of 210Po is 166 TBq/g, "i.e.", 1.66 × 1014 Bq/g. At the same time, 210Po is not readily detected by common radiation detectors, because its gamma rays have a very low energy. Therefore, 210Po can be considered as a quasi-pure alpha emitter.
History.
In 1898, Marie and Pierre Curie discovered a strongly radioactive substance in pitchblende and determined that it was a new element; it was one of the first radioactive elements discovered. Having identified it as such, they named the element polonium after Marie's home country, Poland. Willy Marckwald discovered a similar radioactive activity in 1902 and named it radio-tellurium, and at roughly the same time, Ernest Rutherford identified the same activity in his analysis of the uranium decay chain and named it radium F (originally radium E). By 1905, Rutherford concluded that all these observations were due to the same substance, 210Po. Further discoveries and the concept of isotopes, first proposed in 1913 by Frederick Soddy, firmly placed 210Po as the penultimate step in the uranium series.
In 1943, 210Po was studied as a possible neutron initiator in nuclear weapons, as part of the Dayton Project. In subsequent decades, concerns for the safety of workers handling 210Po led to extensive studies on its health effects.
In the 1950s, scientists of the United States Atomic Energy Commission at Mound Laboratories, Ohio explored the possibility of using 210Po in radioisotope thermoelectric generators (RTGs) as a heat source to power satellites. A 2.5-watt atomic battery using 210Po was developed by 1958. However, the isotope plutonium-238 was chosen instead, as it has a longer half-life of 87.7 years.
Polonium-210 was used to kill Russian dissident and ex-FSB officer Alexander V. Litvinenko in 2006, and was suspected as a possible cause of Yasser Arafat's death, following exhumation and analysis of his corpse in 2012–2013. The radioisotope may also have been used to kill Yuri Shchekochikhin, Lecha Islamov and Roman Tsepov.
Decay properties.
210Po is an alpha emitter that has a half-life of 138.376 days; it decays directly to stable 206Pb. The majority of the time, 210Po decays by emission of an alpha particle only, not by emission of an alpha particle and a gamma ray; about one in 100,000 decays results in the emission of a gamma ray.
formula_0
This low gamma ray production rate makes it more difficult to find and identify this isotope. Rather than gamma ray spectroscopy, alpha spectroscopy is the best method of measuring this isotope.
Owing to its much shorter half-life, a milligram of 210Po emits as many alpha particles per second as 5 grams of 226Ra. A few curies of 210Po emit a blue glow caused by excitation of surrounding air.
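The specific activity quoted in the introduction (about 166 TBq per gram) follows directly from the half-life; the short Python estimate below rounds the molar mass to 210 g/mol, which is an approximation.

```python
import math

HALF_LIFE_S = 138.376 * 86_400        # half-life of Po-210 in seconds
AVOGADRO = 6.022_140_76e23            # atoms per mole
MOLAR_MASS = 210.0                    # g/mol, approximate

decay_constant = math.log(2) / HALF_LIFE_S           # s^-1
atoms_per_gram = AVOGADRO / MOLAR_MASS
specific_activity = decay_constant * atoms_per_gram  # becquerels per gram

print(f"{specific_activity:.3g} Bq/g")  # roughly 1.66e14 Bq/g = 166 TBq/g
```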
210Po occurs in minute amounts in nature, where it is the penultimate isotope in the uranium series decay chain. It is generated via beta decay from 210Pb and 210Bi.
The astrophysical s-process is terminated by the decay of 210Po, as the neutron flux is insufficient to lead to further neutron captures in the short lifetime of 210Po. Instead, 210Po alpha decays to 206Pb, which then captures more neutrons to become 210Po and repeats the cycle, thus consuming the remaining neutrons. This results in a buildup of lead and bismuth, and ensures that heavier elements such as thorium and uranium are only produced in the much faster r-process.
Production.
Deliberate.
Although 210Po occurs in trace amounts in nature, it is not abundant enough (0.1 ppb) for extraction from uranium ore to be feasible. Instead, most 210Po is produced synthetically, through neutron bombardment of 209Bi in a nuclear reactor. This process converts 209Bi to 210Bi, which beta decays to 210Po with a five-day half-life. Through this method, approximately of 210Po are produced in Russia and shipped to the United States every month for commercial applications. By irradiating certain bismuth salts containing light element nuclei such as beryllium, a cascading (α,n) reaction can also be induced to produce 210Po in large quantities.
Byproduct.
The production of polonium-210 is a downside to reactors cooled with lead-bismuth eutectic rather than pure lead. However, given the eutectic properties of this alloy, some proposed Generation IV reactor designs still rely on lead-bismuth.
Applications.
A single gram of 210Po generates 140 watts of power. Because it emits many alpha particles, which are stopped within a very short distance in dense media and release their energy, 210Po has been used as a lightweight heat source to power thermoelectric cells in artificial satellites; for instance, a 210Po heat source was also in each of the Lunokhod rovers deployed on the surface of the Moon, to keep their internal components warm during the lunar nights. Some anti-static brushes, used for neutralizing static electricity on materials like photographic film, contain a few microcuries of 210Po as a source of charged particles. 210Po was also used in initiators for atomic bombs through the (α,n) reaction with beryllium. Small neutron sources reliant on the (α,n) reaction also usually use polonium as a convenient source of alpha particles due to its comparatively low gamma emissions (allowing easy shielding) and high specific activity.
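The figure of roughly 140 watts per gram can be estimated in the same way from the specific activity, if the energy deposited per decay is taken as about 5.3 MeV (the approximate alpha-particle energy, used here as an assumption for the estimate).

```python
import math

HALF_LIFE_S = 138.376 * 86_400
AVOGADRO = 6.022_140_76e23
MOLAR_MASS = 210.0                    # g/mol, approximate
MEV_TO_JOULE = 1.602_176_634e-13
ENERGY_PER_DECAY_MEV = 5.3            # assumed alpha energy per decay

activity_per_gram = math.log(2) / HALF_LIFE_S * AVOGADRO / MOLAR_MASS  # Bq/g
power_per_gram = activity_per_gram * ENERGY_PER_DECAY_MEV * MEV_TO_JOULE

print(f"{power_per_gram:.0f} W/g")    # on the order of 140 W per gram
```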
Hazards.
210Po is extremely toxic; it and other polonium isotopes are some of the most radiotoxic substances to humans. With one microgram of 210Po being more than enough to kill the average adult, it is 250,000 times more toxic than hydrogen cyanide by weight. One gram of 210Po would hypothetically be enough to kill 50 million people and sicken another 50 million. This is a consequence of its ionizing alpha radiation, as alpha particles are especially damaging to organic tissues inside the body. However, 210Po does not pose a radiation hazard when contained outside the body. The alpha particles it produces cannot penetrate the outer layer of dead skin cells.
The toxicity of 210Po stems entirely from its radioactivity. It is not chemically toxic in itself, but its solubility in aqueous solution as well as that of its salts poses a hazard because its spread throughout the body is facilitated in solution. Intake of 210Po occurs primarily through contaminated air, food, or water, as well as through open wounds. Once inside the body, 210Po concentrates in soft tissues (especially in the reticuloendothelial system) and the bloodstream. Its biological half-life is approximately 50 days.
In the environment, 210Po can accumulate in seafood. It has been detected in various organisms in the Baltic Sea, where it can propagate in, and thus contaminate, the food chain. 210Po is also known to contaminate vegetation, primarily originating from the decay of atmospheric radon-222 and absorption from soil.
In particular, 210Po attaches to, and concentrates in, tobacco leaves. Elevated concentrations of 210Po in tobacco were documented as early as 1964, and cigarette smokers were thus found to be exposed to considerably greater doses of radiation from 210Po and its parent 210Pb. Heavy smokers may be exposed to the same amount of radiation (estimates vary from 100 µSv to 160 mSv per year) as individuals in Poland were from Chernobyl fallout traveling from Ukraine. As a result, 210Po is most dangerous when inhaled from cigarette smoke.
Polonium-210 has been used in silent killings. Significant amounts were found in the body of Alexander Litvinenko at the time of his murder in London in 2006.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathrm{\n\n^{210}_{\\ 84}Po\\ \\xrightarrow [138.376 \\ d]{}\\ ^{206}_{\\ 82}Pb\\ + ^{\\ 4}_{\\ 2}He\n\n}\n"
}
] | https://en.wikipedia.org/wiki?curid=992611 |
9927028 | Polynomial greatest common divisor | Greatest common divisor of polynomials
In algebra, the greatest common divisor (frequently abbreviated as GCD) of two polynomials is a polynomial, of the highest possible degree, that is a factor of both the two original polynomials. This concept is analogous to the greatest common divisor of two integers.
In the important case of univariate polynomials over a field the polynomial GCD may be computed, like for the integer GCD, by the Euclidean algorithm using long division. The polynomial GCD is defined only up to the multiplication by an invertible constant.
The similarity between the integer GCD and the polynomial GCD allows extending to univariate polynomials all the properties that may be deduced from the Euclidean algorithm and Euclidean division. Moreover, the polynomial GCD has specific properties that make it a fundamental notion in various areas of algebra. Typically, the roots of the GCD of two polynomials are the common roots of the two polynomials, and this provides information on the roots without computing them. For example, the multiple roots of a polynomial are the roots of the GCD of the polynomial and its derivative, and further GCD computations allow computing the square-free factorization of the polynomial, which provides polynomials whose roots are the roots of a given multiplicity of the original polynomial.
The greatest common divisor may be defined and exists, more generally, for multivariate polynomials over a field or the ring of integers, and also over a unique factorization domain. There exist algorithms to compute them as soon as one has a GCD algorithm in the ring of coefficients. These algorithms proceed by a recursion on the number of variables to reduce the problem to a variant of the Euclidean algorithm. They are a fundamental tool in computer algebra, because computer algebra systems use them systematically to simplify fractions. Conversely, most of the modern theory of polynomial GCD has been developed to satisfy the need for efficiency of computer algebra systems.
General definition.
Let "p" and "q" be polynomials with coefficients in an integral domain "F", typically a field or the integers.
A greatest common divisor of "p" and "q" is a polynomial "d" that divides "p" and "q", and such that every common divisor of "p" and "q" also divides "d". Every pair of polynomials (not both zero) has a GCD if and only if "F" is a unique factorization domain.
If "F" is a field and "p" and "q" are not both zero, a polynomial "d" is a greatest common divisor if and only if it divides both "p" and "q", and it has the greatest degree among the polynomials having this property. If "p" = "q" = 0, the GCD is 0. However, some authors consider that it is not defined in this case.
The greatest common divisor of "p" and "q" is usually denoted "gcd("p", "q")".
The greatest common divisor is not unique: if "d" is a GCD of "p" and "q", then the polynomial "f" is another GCD if and only if there is an invertible element "u" of "F" such that
formula_0
and
formula_1
In other words, the GCD is unique up to the multiplication by an invertible constant.
In the case of the integers, this indetermination has been settled by choosing, as the GCD, the unique one which is positive (there is another one, which is its opposite). With this convention, the GCD of two integers is also the greatest (for the usual ordering) common divisor. However, since there is no natural total order for polynomials over an integral domain, one cannot proceed in the same way here. For univariate polynomials over a field, one can additionally require the GCD to be monic (that is to have 1 as its coefficient of the highest degree), but in more general cases there is no general convention. Therefore, equalities like "d" = gcd("p", "q") or gcd("p", "q") = gcd("r", "s") are common abuses of notation which should be read ""d" is a GCD of "p" and "q"" and ""p" and "q" have the same set of GCDs as "r" and "s"". In particular, gcd("p", "q") = 1 means that the invertible constants are the only common divisors. In this case, by analogy with the integer case, one says that "p" and "q" are <templatestyles src="Template:Visible anchor/styles.css" />coprime polynomials.
GCD by hand computation.
There are several ways to find the greatest common divisor of two polynomials. Two of them, factoring and the Euclidean algorithm, are described below.
Factoring.
To find the GCD of two polynomials using factoring, simply factor the two polynomials completely. Then, take the product of all common factors. At this stage, we do not necessarily have a monic polynomial, so finally multiply this by a constant to make it a monic polynomial. This will be the GCD of the two polynomials as it includes all common divisors and is monic.
Example one: Find the GCD of "x"2 + 7"x" + 6 and "x"2 − 5"x" − 6.
<templatestyles src="Block indent/styles.css"/>"x"2 + 7"x" + 6 = ("x" + 1)("x" + 6)
<templatestyles src="Block indent/styles.css"/>"x"2 − 5"x" − 6 = ("x" + 1)("x" − 6)
Thus, their GCD is "x" + 1.
Euclidean algorithm.
Factoring polynomials can be difficult, especially if the polynomials have a large degree. The Euclidean algorithm is a method that works for any pair of polynomials. It makes repeated use of Euclidean division. When using this algorithm on two numbers, the size of the numbers decreases at each stage. With polynomials, the degree of the polynomials decreases at each stage. The last nonzero remainder, made monic if necessary, is the GCD of the two polynomials.
More specifically, for finding the gcd of two polynomials "a"("x") and "b"("x"), one can suppose "b" ≠ 0 (otherwise, the GCD is "a"("x")), and
formula_16
The Euclidean division provides two polynomials "q"("x"), the "quotient" and "r"("x"), the "remainder" such that
formula_17
A polynomial "g"("x") divides both "a"("x") and "b"("x") if and only if it divides both "b"("x") and "r"0("x"). Thus
formula_18
Setting
formula_19
one can repeat the Euclidean division to get new polynomials "q"1("x"), "r"1("x"), "a"2("x"), "b"2("x") and so on. At each stage we have
formula_20
so the sequence will eventually reach a point at which
formula_21
and one has got the GCD:
formula_22
Example: finding the GCD of "x"2 + 7"x" + 6 and "x"2 − 5"x" − 6:
<templatestyles src="Block indent/styles.css"/>"x"2 + 7"x" + 6 = 1 ⋅ ("x"2 − 5"x" − 6) + (12 "x" + 12)
<templatestyles src="Block indent/styles.css"/>"x"2 − 5"x" − 6 = (12 "x" + 12) (1⁄12 "x" − 1⁄2) + 0
Since 12 "x" + 12 is the last nonzero remainder, it is a GCD of the original polynomials, and the monic GCD is "x" + 1.
In this example, it is not difficult to avoid introducing denominators by factoring out 12 before the second step. This can always be done by using pseudo-remainder sequences, but, without care, this may introduce very large integers during the computation. Therefore, for computer computation, other algorithms are used, that are described below.
This method works only if one can test the equality to zero of the coefficients that occur during the computation. So, in practice, the coefficients must be integers, rational numbers, elements of a finite field, or must belong to some finitely generated field extension of one of the preceding fields. If the coefficients are floating-point numbers that represent real numbers that are known only approximately, then one must know the degree of the GCD to have a well-defined computation result (that is, a numerically stable result); in such cases other techniques may be used, usually based on singular value decomposition.
Univariate polynomials with coefficients in a field.
The case of univariate polynomials over a field is especially important for several reasons. Firstly, it is the most elementary case and therefore appears in most first courses in algebra. Secondly, it is very similar to the case of the integers, and this analogy is the source of the notion of Euclidean domain. A third reason is that the theory and the algorithms for the multivariate case and for coefficients in a unique factorization domain are strongly based on this particular case. Last but not least, polynomial GCD algorithms and derived algorithms allow one to get useful information on the roots of a polynomial, without computing them.
Euclidean division.
Euclidean division of polynomials, which is used in Euclid's algorithm for computing GCDs, is very similar to Euclidean division of integers. Its existence is based on the following theorem: Given two univariate polynomials "a" and "b" ≠ 0 defined over a field, there exist two polynomials "q" (the "quotient") and "r" (the "remainder") which satisfy
formula_23
and
formula_24
where "deg(...)" denotes the degree and the degree of the zero polynomial is defined as being negative. Moreover, "q" and "r" are uniquely defined by these relations.
The difference from Euclidean division of the integers is that, for the integers, the degree is replaced by the absolute value, and that to have uniqueness one has to suppose that "r" is non-negative. The rings for which such a theorem exists are called Euclidean domains.
Like for the integers, the Euclidean division of the polynomials may be computed by the long division algorithm. This algorithm is usually presented for paper-and-pencil computation, but it works well on computers when formalized as follows (note that the names of the variables correspond exactly to the regions of the paper sheet in a pencil-and-paper computation of long division). In the following computation "deg" stands for the degree of its argument (with the convention deg(0) < 0), and "lc" stands for the leading coefficient, the coefficient of the highest degree of the variable.
<templatestyles src="Pre/styles.css"/>
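A minimal Python sketch of this long-division procedure is given below. It assumes the coefficients lie in a field, here the rational numbers via fractions.Fraction, and represents a polynomial by the list of its coefficients (poly[i] being the coefficient of X^i); both choices are conventions of the sketch rather than part of the algorithm.

```python
from fractions import Fraction

def degree(p):
    """Degree of p, with deg(0) treated as -1 (i.e. negative)."""
    return len(p) - 1

def normalize(p):
    """Strip trailing zero coefficients so that degree() is meaningful."""
    while p and p[-1] == 0:
        p.pop()
    return p

def poly_divmod(a, b):
    """Return (q, r) with a = b*q + r and deg(r) < deg(b)."""
    a, b = normalize(list(a)), normalize(list(b))
    if not b:
        raise ZeroDivisionError("division by the zero polynomial")
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    r = [Fraction(x) for x in a]
    while r and degree(r) >= degree(b):
        d = degree(r) - degree(b)
        c = r[-1] / b[-1]              # lc(r) / lc(b)
        q[d] = c
        # r <- r - c * X**d * b
        for i, coeff in enumerate(b):
            r[i + d] -= c * coeff
        normalize(r)
    return normalize(q), r

# Example: divide X**2 + 7X + 6 by X + 1 (coefficients listed lowest degree first).
print(poly_divmod([6, 7, 1], [1, 1]))  # quotient X + 6, remainder 0 (printed as Fractions)
```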
The proof of the validity of this algorithm relies on the fact that during the whole "while" loop, we have "a" = "bq" + "r" and deg("r") is a non-negative integer that decreases at each iteration. Thus the proof of the validity of this algorithm also proves the validity of the Euclidean division.
Euclid's algorithm.
As for the integers, the Euclidean division allows us to define Euclid's algorithm for computing GCDs.
Starting from two polynomials "a" and "b", Euclid's algorithm consists of recursively replacing the pair ("a", "b") by ("b", rem("a", "b")) (where "rem("a", "b")" denotes the remainder of the Euclidean division, computed by the algorithm of the preceding section), until "b" = 0. The GCD is the last non zero remainder.
Euclid's algorithm may be formalized in the recursive programming style as:
formula_25
In the imperative programming style, the same algorithm becomes, giving a name to each intermediate remainder:
<templatestyles src="Pre/styles.css"/>
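A Python rendering of this loop is sketched below, using SymPy's rem() for the Euclidean division step; it reproduces the hand computation of the earlier example, up to the usual invertible constant.

```python
from sympy import symbols, rem, gcd

x = symbols("x")

def euclid_gcd(a, b):
    """Euclid's algorithm for univariate polynomials over the rationals."""
    while b != 0:
        a, b = b, rem(a, b, x)
    return a

g = euclid_gcd(x**2 + 7*x + 6, x**2 - 5*x - 6)
print(g)                                     # 12*x + 12, i.e. x + 1 up to a constant
print(gcd(x**2 + 7*x + 6, x**2 - 5*x - 6))   # x + 1, SymPy's own (monic) answer
```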
The sequence of the degrees of the "ri" is strictly decreasing. Thus, after at most deg("b") steps, one gets a null remainder, say "rk". As ("a", "b") and ("b", rem("a","b")) have the same divisors, the set of the common divisors is not changed by Euclid's algorithm, and thus all pairs ("ri", "r""i"+1) have the same set of common divisors. The common divisors of a and b are thus the common divisors of "r""k"−1 and 0. Thus "r""k"−1 is a GCD of a and b.
This not only proves that Euclid's algorithm computes GCDs but also proves that GCDs exist.
Bézout's identity and extended GCD algorithm.
Bézout's identity is a GCD related theorem, initially proved for the integers, which is valid for every principal ideal domain. In the case of the univariate polynomials over a field, it may be stated as follows.
<templatestyles src="Block indent/styles.css"/>If g is the greatest common divisor of two polynomials a and b (not both zero), then there are two polynomials u and v such that
<templatestyles src="Block indent/styles.css"/>formula_26 (Bézout's identity)
and either "u" = 1, "v" = 0, or "u" = 0, "v" = 1, or
formula_27
The interest of this result in the case of the polynomials is that there is an efficient algorithm to compute the polynomials u and v. This algorithm differs from Euclid's algorithm by a few more computations done at each iteration of the loop. It is therefore called the extended GCD algorithm. Another difference from Euclid's algorithm is that it also uses the quotient, denoted "quo", of the Euclidean division, instead of only the remainder. This algorithm works as follows.
<templatestyles src="Pre/styles.css"/>
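A minimal Python sketch of this extended GCD loop is given below, using SymPy's div() for the quotient and remainder at each step; the variable names mirror the r_i, s_i, t_i of the discussion that follows. It is a sketch of the idea, not of the exact pseudocode.

```python
from sympy import symbols, div, expand

x = symbols("x")

def extended_gcd(a, b):
    """Return (g, u, v) with a*u + b*v = g, g a GCD of a and b."""
    r0, r1 = a, b
    s0, s1 = 1, 0          # coefficients of a
    t0, t1 = 0, 1          # coefficients of b
    while r1 != 0:
        q, r = div(r0, r1, x)
        r0, r1 = r1, r
        s0, s1 = s1, expand(s0 - q * s1)
        t0, t1 = t1, expand(t0 - q * t1)
    return r0, s0, t0

a = x**2 + 7*x + 6
b = x**2 - 5*x - 6
g, u, v = extended_gcd(a, b)
print(g)                        # 12*x + 12
print(expand(a*u + b*v - g))    # 0, checking Bezout's identity
```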
The proof that the algorithm satisfies its output specification relies on the fact that, for every i we have
formula_28
formula_29
the latter equality implying
formula_30
The assertion on the degrees follows from the fact that, at every iteration, the degrees of "si" and "ti" increase at most as the degree of "ri" decreases.
An interesting feature of this algorithm is that, when the coefficients of Bezout's identity are needed, one gets for free the quotient of the input polynomials by their GCD.
Arithmetic of algebraic extensions.
An important application of the extended GCD algorithm is that it allows one to compute division in algebraic field extensions.
Let L an algebraic extension of a field K, generated by an element whose minimal polynomial f has degree n. The elements of L are usually represented by univariate polynomials over K of degree less than n.
The addition in L is simply the addition of polynomials:
formula_31
The multiplication in L is the multiplication of polynomials followed by the division by f:
formula_32
The inverse of a nonzero element a of L is the coefficient u in Bézout's identity "au" + "fv" = 1, which may be computed by the extended GCD algorithm (the GCD is 1 because the minimal polynomial f is irreducible). The inequality on the degrees in the specification of the extended GCD algorithm shows that a further division by f is not needed to get deg(u) < deg(f).
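As a concrete illustration, the Python/SymPy sketch below inverts an element of the extension defined by f = X^2 + 1 over the rationals (so L is Q(i)); the minimal polynomial and the element are arbitrary choices for the example.

```python
from sympy import symbols, gcdex, rem, expand

x = symbols("x")
f = x**2 + 1            # minimal polynomial defining the extension
a = x + 3               # an element of L, a polynomial of degree < deg(f)

u, v, g = gcdex(a, f, x)          # u*a + v*f = g = 1, so u is the inverse of a in L
print(g)                          # 1
print(rem(expand(u * a), f, x))   # 1: u*a reduces to 1 modulo f
```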
Subresultants.
In the case of univariate polynomials, there is a strong relationship between the greatest common divisors and resultants. More precisely, the resultant of two polynomials "P", "Q" is a polynomial function of the coefficients of "P" and "Q" which has the value zero if and only if the GCD of "P" and "Q" is not constant.
The subresultants theory is a generalization of this property that allows characterizing generically the GCD of two polynomials, and the resultant is the 0-th subresultant polynomial.
The i-th "subresultant polynomial" "Si"("P" ,"Q") of two polynomials "P" and "Q" is a polynomial of degree at most i whose coefficients are polynomial functions of the coefficients of "P" and "Q", and the i-th "principal subresultant coefficient" "si"("P" ,"Q") is the coefficient of degree i of "Si"("P", "Q"). They have the property that the GCD of "P" and "Q" has a degree d if and only if
formula_33
In this case, "Sd"("P" ,"Q") is a GCD of "P" and "Q" and
formula_34
Every coefficient of the subresultant polynomials is defined as the determinant of a submatrix of the Sylvester matrix of "P" and "Q". This implies that subresultants "specialize" well. More precisely, subresultants are defined for polynomials over any commutative ring "R", and have the following property.
Let "φ" be a ring homomorphism of "R" into another commutative ring "S". It extends to another homomorphism, denoted also "φ" between the polynomials rings over "R" and "S". Then, if "P" and "Q" are univariate polynomials with coefficients in "R" such that
formula_35
and
formula_36
then the subresultant polynomials and the principal subresultant coefficients of "φ"("P") and "φ"("Q") are the image by "φ" of those of "P" and "Q".
The subresultants have two important properties which make them fundamental for the computation on computers of the GCD of two polynomials with integer coefficients.
Firstly, their definition through determinants allows bounding, through Hadamard inequality, the size of the coefficients of the GCD.
Secondly, this bound and the property of good specialization allow computing the GCD of two polynomials with integer coefficients through modular computation and Chinese remainder theorem (see below).
Technical definition.
Let
formula_37
be two univariate polynomials with coefficients in a field "K". Let us denote by formula_38 the "K" vector space of dimension i of polynomials of degree less than i. For non-negative integer i such that "i" ≤ "m" and "i" ≤ "n", let
formula_39
be the linear map such that
formula_40
The resultant of "P" and "Q" is the determinant of the Sylvester matrix, which is the (square) matrix of formula_41 on the bases of the powers of "X". Similarly, the "i"-subresultant polynomial is defined in term of determinants of submatrices of the matrix of formula_42
Let us describe these matrices more precisely;
Let "p""i" = 0 for "i" < 0 or "i" > "m", and "q""i" = 0 for "i" < 0 or "i" > "n". The Sylvester matrix is the ("m" + "n") × ("m" + "n")-matrix such that the coefficient of the "i"-th row and the "j"-th column is "p""m"+"j"−"i" for "j" ≤ "n" and "q""j"−"i" for "j" > "n":
formula_43
The matrix "Ti" of formula_44 is the ("m" + "n" − "i") × ("m" + "n" − 2"i")-submatrix of "S" which is obtained by removing the last "i" rows of zeros in the submatrix of the columns 1 to "n" − "i" and "n" + 1 to "m" + "n" − "i" of "S" (that is removing "i" columns in each block and the "i" last rows of zeros). The "principal subresultant coefficient" "si" is the determinant of the "m" + "n" − 2"i" first rows of "Ti".
Let "Vi" be the ("m" + "n" − 2"i") × ("m" + "n" − "i") matrix defined as follows. First we add ("i" + 1) columns of zeros to the right of the ("m" + "n" − 2"i" − 1) × ("m" + "n" − 2"i" − 1) identity matrix. Then we border the bottom of the resulting matrix by a row consisting in ("m" + "n" − "i" − 1) zeros followed by "Xi", "X""i"−1, ..., "X", 1:
formula_45
With this notation, the "i"-th "subresultant polynomial" is the determinant of the matrix product "ViTi". Its coefficient of degree "j" is the determinant of the square submatrix of "Ti" consisting in its "m" + "n" − 2"i" − 1 first rows and the ("m" + "n" − "i" − "j")-th row.
Sketch of the proof.
It is not obvious that, as defined, the subresultants have the desired properties. Nevertheless, the proof is rather simple if the properties of linear algebra and those of polynomials are put together.
As defined, the columns of the matrix "Ti" are the vectors of the coefficients of some polynomials belonging to the image of formula_44. The definition of the "i"-th subresultant polynomial "Si" shows that the vector of its coefficients is a linear combination of these column vectors, and thus that "Si" belongs to the image of formula_42
If the degree of the GCD is greater than "i", then Bézout's identity shows that every non zero polynomial in the image of formula_44 has a degree larger than "i". This implies that "Si" = 0.
If, on the other hand, the degree of the GCD is "i", then Bézout's identity again allows proving that the multiples of the GCD that have a degree lower than "m" + "n" − "i" are in the image of formula_44. The vector space of these multiples has the dimension "m" + "n" − 2"i" and has a base of polynomials of pairwise different degrees, not smaller than "i". This implies that the submatrix of the "m" + "n" − 2"i" first rows of the column echelon form of "Ti" is the identity matrix and thus that "si" is not 0. Thus "Si" is a polynomial in the image of formula_44, which is a multiple of the GCD and has the same degree. It is thus a greatest common divisor.
GCD and root finding.
Square-free factorization.
Most root-finding algorithms behave badly with polynomials that have multiple roots. It is therefore useful to detect and remove them before calling a root-finding algorithm. A GCD computation allows detection of the existence of multiple roots, since the multiple roots of a polynomial are the roots of the GCD of the polynomial and its derivative.
After computing the GCD of the polynomial and its derivative, further GCD computations provide the complete "square-free factorization" of the polynomial, which is a factorization
formula_46
where, for each "i", the polynomial "f""i" either is 1 if "f" does not have any root of multiplicity "i" or is a square-free polynomial (that is a polynomial without multiple root) whose roots are exactly the roots of multiplicity "i" of "f" (see Yun's algorithm).
Thus the square-free factorization reduces root-finding of a polynomial with multiple roots to root-finding of several square-free polynomials of lower degree. The square-free factorization is also the first step in most polynomial factorization algorithms.
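A short SymPy illustration of these two steps, detecting multiple roots through gcd(f, f′) and then computing the full square-free factorization, is given below; the polynomial is an arbitrary example with roots of multiplicities 1, 2 and 3.

```python
from sympy import symbols, expand, diff, gcd, sqf_list

x = symbols("x")
f = expand((x - 1) * (x - 2)**2 * (x - 3)**3)

# Multiple roots show up in gcd(f, f'): here (x - 2)*(x - 3)**2, printed expanded.
print(gcd(f, diff(f, x)))

# Square-free factorization: the leading coefficient and the (f_i, i) pairs of the text.
print(sqf_list(f))   # (1, [(x - 1, 1), (x - 2, 2), (x - 3, 3)])
```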
Sturm sequence.
The "Sturm sequence" of a polynomial with real coefficients is the sequence of the remainders provided by a variant of Euclid's algorithm applied to the polynomial and its derivative. For getting the Sturm sequence, one simply replaces the instruction
formula_47
of Euclid's algorithm by
formula_48
Let "V"("a") be the number of changes of signs in the sequence, when evaluated at a point "a". Sturm's theorem asserts that "V"("a") − "V"("b") is the number of real roots of the polynomial in the interval ["a", "b"]. Thus the Sturm sequence allows computing the number of real roots in a given interval. By subdividing the interval until every subinterval contains at most one root, this provides an algorithm that locates the real roots in intervals of arbitrary small length.
GCD over a ring and its field of fractions.
In this section, we consider polynomials over a unique factorization domain "R", typically the ring of the integers, and over its field of fractions "F", typically the field of the rational numbers, and we denote "R"["X"] and "F"["X"] the rings of polynomials in a set of variables over these rings.
Primitive part–content factorization.
The "content" of a polynomial "p" ∈ "R"["X"], denoted "cont("p")", is the GCD of its coefficients. A polynomial "q" ∈ "F"["X"] may be written
formula_49
where "p" ∈ "R"["X"] and "c" ∈ "R": it suffices to take for "c" a multiple of all denominators of the coefficients of "q" (for example their product) and "p" = "cq". The "content" of "q" is defined as:
formula_50
In both cases, the content is defined up to the multiplication by a unit of "R".
The "primitive part" of a polynomial in "R"["X"] or "F"["X"] is defined by
formula_51
In both cases, it is a polynomial in "R"["X"] that is "primitive", which means that 1 is a GCD of its coefficients.
Thus every polynomial in "R"["X"] or "F"["X"] may be factorized as
formula_52
and this factorization is unique up to the multiplication of the content by a unit of "R" and of the primitive part by the inverse of this unit.
Gauss's lemma implies that the product of two primitive polynomials is primitive. It follows that
formula_53
and
formula_54
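A tiny pure-Python illustration of this content / primitive-part factorization, for a polynomial with integer coefficients represented by its coefficient list (an arbitrary example; poly[i] is the coefficient of X^i):

```python
from math import gcd
from functools import reduce

p = [24, -18, 12, 6]                 # 6X^3 + 12X^2 - 18X + 24, lowest degree first
cont = reduce(gcd, p)                # the content: GCD of the coefficients
primpart = [c // cont for c in p]    # the primitive part

print(cont)       # 6
print(primpart)   # [4, -3, 2, 1], i.e. X^3 + 2X^2 - 3X + 4
```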
Relation between the GCD over "R" and over "F".
The relations of the preceding section imply a strong relation between the GCD's in "R"["X"] and in "F"["X"]. To avoid ambiguities, the notation "gcd" will be indexed, in the following, by the ring in which the GCD is computed.
If "q"1 and "q"2 belong to "F"["X"], then
formula_55
If "p"1 and "p"2 belong to "R"["X"], then
formula_56
and
formula_57
Thus the computation of polynomial GCD's is essentially the same problem over "F"["X"] and over "R"["X"].
For univariate polynomials over the rational numbers, one may think that Euclid's algorithm is a convenient method for computing the GCD. However, it involves simplifying a large number of fractions of integers, and the resulting algorithm is not efficient. For this reason, methods have been designed to modify Euclid's algorithm for working only with polynomials over the integers. They consist of replacing the Euclidean division, which introduces fractions, by a so-called "pseudo-division", and replacing the remainder sequence of Euclid's algorithm by so-called "pseudo-remainder sequences" (see below).
Proof that GCD exists for multivariate polynomials.
In the previous section we have seen that the GCD of polynomials in "R"["X"] may be deduced from GCDs in "R" and in "F"["X"]. A closer look at the proof shows that this allows us to prove the existence of GCDs in "R"["X"], if they exist in "R" and in "F"["X"]. In particular, if GCDs exist in "R", and if "X" is reduced to one variable, this proves that GCDs exist in "R"["X"] (Euclid's algorithm proves the existence of GCDs in "F"["X"]).
A polynomial in n variables may be considered as a univariate polynomial over the ring of polynomials in ("n" − 1) variables. Thus a recursion on the number of variables shows that if GCDs exist and may be computed in "R", then they exist and may be computed in every multivariate polynomial ring over "R". In particular, if "R" is either the ring of the integers or a field, then GCDs exist in "R"["x"1, ..., "xn"], and what precedes provides an algorithm to compute them.
The proof that a polynomial ring over a unique factorization domain is also a unique factorization domain is similar, but it does not provide an algorithm, because there is no general algorithm to factor univariate polynomials over a field (there are examples of fields for which there does not exist any factorization algorithm for the univariate polynomials).
Pseudo-remainder sequences.
In this section, we consider an integral domain "Z" (typically the ring Z of the integers) and its field of fractions "Q" (typically the field Q of the rational numbers). Given two polynomials "A" and "B" in the univariate polynomial ring "Z"["X"], the Euclidean division (over "Q") of "A" by "B" provides a quotient and a remainder which may not belong to "Z"["X"].
For example, if one applies Euclid's algorithm to the following polynomials
formula_58
and
formula_59
the successive remainders of Euclid's algorithm are
formula_60
One sees that, despite the small degree and the small size of the coefficients of the input polynomials, one has to manipulate and simplify integer fractions of rather large size.
The "<templatestyles src="Template:Visible anchor/styles.css" />pseudo-division" has been introduced to allow a variant of Euclid's algorithm for which all remainders belong to "Z"["X"].
If formula_61 and formula_62 and "a" ≥ "b", the pseudo-remainder of the pseudo-division of "A" by "B", denoted by prem("A","B") is
formula_63
where lc("B") is the leading coefficient of "B" (the coefficient of "X""b").
The pseudo-remainder of the pseudo-division of two polynomials in "Z"["X"] always belongs to "Z"["X"].
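A small SymPy check of this definition, applied to the two example polynomials introduced at the beginning of this section, is sketched below; the ordinary remainder has rational coefficients, while the pseudo-remainder obtained from the formula has integer coefficients.

```python
from sympy import symbols, rem, LC, degree

x = symbols("x")
A = x**8 + x**6 - 3*x**4 - 3*x**3 + 8*x**2 + 2*x - 5
B = 3*x**6 + 5*x**4 - 4*x**2 - 9*x + 21

d = degree(A, x) - degree(B, x) + 1
print(rem(A, B, x))                # ordinary remainder: rational coefficients
print(rem(LC(B, x)**d * A, B, x))  # pseudo-remainder: integer coefficients
```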
A pseudo-remainder sequence is the sequence of the (pseudo) remainders "r""i" obtained by replacing the instruction
formula_64
of Euclid's algorithm by
formula_65
where "α" is an element of "Z" that divides exactly every coefficient of the numerator. Different choices of "α" give different pseudo-remainder sequences, which are described in the next subsections.
As the common divisors of two polynomials are not changed if the polynomials are multiplied by invertible constants (in "Q"), the last nonzero term in a pseudo-remainder sequence is a GCD (in "Q"["X"]) of the input polynomials. Therefore, pseudo-remainder sequences allow computing GCDs in "Q"["X"] without introducing fractions in "Q".
In some contexts, it is essential to control the sign of the leading coefficient of the pseudo-remainder. This is typically the case when computing resultants and subresultants, or for using Sturm's theorem. This control can be done either by replacing lc("B") by its absolute value in the definition of the pseudo-remainder, or by controlling the sign of α (if α divides all coefficients of a remainder, the same is true for −"α").
Trivial pseudo-remainder sequence.
The simplest (to define) remainder sequence consists in always taking "α" = 1. In practice, it is not interesting, as the size of the coefficients grows exponentially with the degree of the input polynomials. This appears clearly in the example of the preceding section, for which the successive pseudo-remainders are
formula_66
formula_67
formula_68
formula_69
The number of digits of the coefficients of the successive remainders is more than doubled at each iteration of the algorithm. This is typical behavior of the trivial pseudo-remainder sequences.
Primitive pseudo-remainder sequence.
The "primitive pseudo-remainder sequence" consists in taking for α the content of the numerator. Thus all the "r""i" are primitive polynomials.
The primitive pseudo-remainder sequence is the pseudo-remainder sequence that generates the smallest coefficients. However, it requires computing a number of GCDs in "Z", and is therefore not sufficiently efficient to be used in practice, especially when "Z" is itself a polynomial ring.
With the same input as in the preceding sections, the successive remainders, after division by their content are
formula_70
formula_71
formula_72
formula_73
The small size of the coefficients hides the fact that a number of integer GCDs and divisions by the GCD have been computed.
Subresultant pseudo-remainder sequence.
A subresultant sequence can also be computed with pseudo-remainders. The process consists in choosing α in such a way that every "r""i" is a subresultant polynomial. Surprisingly, the computation of α is very easy (see below). On the other hand, the proof of correctness of the algorithm is difficult, because it must take into account all the possibilities for the difference of the degrees of two consecutive remainders.
The coefficients in the subresultant sequence are rarely much larger than those of the primitive pseudo-remainder sequence. As GCD computations in "Z" are not needed, the subresultant sequence with pseudo-remainders gives the most efficient computation.
With the same input as in the preceding sections, the successive remainders are
formula_74
formula_75
formula_76
formula_77
The coefficients have a reasonable size. They are obtained without any GCD computation, only exact divisions. This makes this algorithm more efficient than that of primitive pseudo-remainder sequences.
The algorithm computing the subresultant sequence with pseudo-remainders is given below. In this algorithm, the input ("a", "b") is a pair of polynomials in "Z"["X"]. The "r""i" are the successive pseudo remainders in "Z"["X"], the variables "i" and "d""i" are non negative integers, and the Greek letters denote elements in "Z". The functions codice_0 and codice_1 denote the degree of a polynomial and the remainder of the Euclidean division. In the algorithm, this remainder is always in "Z"["X"]. Finally the divisions denoted / are always exact and have their result either in "Z"["X"] or in "Z".
<templatestyles src="Pre/styles.css"/>
Note: "lc" stands for the leading coefficient, the coefficient of the highest degree of the variable.
This algorithm computes not only the greatest common divisor (the last non zero "r""i"), but also all the subresultant polynomials: The remainder "r""i" is the (deg("r""i"−1) − 1)-th subresultant polynomial. If deg("r""i") < deg("r""i"−1) − 1, the deg("r""i")-th subresultant polynomial is lc("r""i")deg("r""i"−1)−deg("r""i")−1"r""i". All the other subresultant polynomials are zero.
Sturm sequence with pseudo-remainders.
One may use pseudo-remainders for constructing sequences having the same properties as Sturm sequences. This requires controlling the signs of the successive pseudo-remainders, in order to have the same signs as in the Sturm sequence. This may be done by defining a modified pseudo-remainder as follows.
If formula_61 and formula_62 and "a" ≥ "b", the modified pseudo-remainder prem2("A", "B") of the pseudo-division of "A" by "B" is
formula_78
where |lc("B")| is the absolute value of the leading coefficient of "B" (the coefficient of "X""b").
For input polynomials with integer coefficients, this allows retrieval of Sturm sequences consisting of polynomials with integer coefficients. The subresultant pseudo-remainder sequence may be modified similarly, in which case the signs of the remainders coincide with those computed over the rationals.
Note that the algorithm for computing the subresultant pseudo-remainder sequence given above will compute wrong subresultant polynomials if one uses formula_79 instead of formula_80.
Modular GCD algorithm.
If "f" and "g" are polynomials in "F"["x"] for some finitely generated field "F", the Euclidean Algorithm is the most natural way to compute their GCD. However, modern computer algebra systems only use it if "F" is finite because of a phenomenon called intermediate expression swell. Although degrees keep decreasing during the Euclidean algorithm, if "F" is not finite then the bit size of the polynomials can increase (sometimes dramatically) during the computations because repeated arithmetic operations in "F" tends to lead to larger expressions. For example, the addition of two rational numbers whose denominators are bounded by "b" leads to a rational number whose denominator is bounded by "b"2, so in the worst case, the bit size could nearly double with just one operation.
To expedite the computation, take a ring "D" for which "f" and "g" are in "D"["x"], and take an ideal "I" such that "D"/"I" is a finite ring. Then compute the GCD over this finite ring with the Euclidean Algorithm. Using reconstruction techniques (Chinese remainder theorem, rational reconstruction, etc.) one can recover the GCD of "f" and "g" from its image modulo a number of ideals "I". One can prove that this works provided that one discards modular images with non-minimal degrees, and avoids ideals "I" modulo which a leading coefficient vanishes.
Suppose formula_81, formula_82, formula_83 and formula_84. If we take formula_85 then formula_86 is a finite ring (not a field since formula_87 is not maximal in formula_88). The Euclidean algorithm applied to the images of formula_89 in formula_90 succeeds and returns 1. This implies that the GCD of formula_89 in formula_91 must be 1 as well. Note that this example could easily be handled by any method because the degrees were too small for expression swell to occur, but it illustrates that if two polynomials have GCD 1, then the modular algorithm is likely to terminate after a single ideal formula_87.
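A hedged sketch of the modular idea in Python/SymPy is given below: reduce the polynomials modulo a prime, compute the GCD over the resulting finite field, and use a trivial image to conclude that the GCD over the integers is trivial as well. The polynomials and the prime are arbitrary illustrative choices, not those of the example discussed above.

```python
from sympy import symbols, Poly, gcd

x = symbols("x")
f_expr = x**4 + 3*x**3 - 2*x + 7
g_expr = 2*x**3 - x + 5

p = 13  # a prime that does not divide either leading coefficient
f_mod = Poly(f_expr, x, modulus=p)
g_mod = Poly(g_expr, x, modulus=p)

print(gcd(f_mod, g_mod))     # a constant over GF(13) ...
print(gcd(f_expr, g_expr))   # ... consistent with the GCD being 1 over the rationals
```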
Notes.
<templatestyles src="Reflist/styles.css" />
References.
Citations.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "f=u d"
},
{
"math_id": 1,
"text": "d=u^{-1} f."
},
{
"math_id": 2,
"text": "\\gcd(p,q)= \\gcd(q,p)."
},
{
"math_id": 3,
"text": "\\gcd(p, q)= \\gcd(q,p+rq)"
},
{
"math_id": 4,
"text": "\\gcd(p,q)=\\gcd(p,kq)"
},
{
"math_id": 5,
"text": "\\gcd(p,q)=\\gcd(a_1p+b_1q,a_2p+b_2q)"
},
{
"math_id": 6,
"text": "a_1, b_1, a_2, b_2"
},
{
"math_id": 7,
"text": "a_1 b_2 - a_2 b_1"
},
{
"math_id": 8,
"text": "\\gcd(p, r)=1"
},
{
"math_id": 9,
"text": "\\gcd(p, q)=\\gcd(p, qr)"
},
{
"math_id": 10,
"text": "\\gcd(q, r)=1"
},
{
"math_id": 11,
"text": "\\gcd(p, qr)=\\gcd(p, q)\\,\\gcd(p, r)"
},
{
"math_id": 12,
"text": "\\gcd(p,q)=ap+bq"
},
{
"math_id": 13,
"text": "\\gcd(p,q)"
},
{
"math_id": 14,
"text": "\\gcd(p, q, r) = \\gcd(p, \\gcd(q, r)),"
},
{
"math_id": 15,
"text": "\\gcd(p_1, p_2, \\dots , p_n) = \\gcd( p_1, \\gcd(p_2, \\dots , p_n))."
},
{
"math_id": 16,
"text": "\\deg(b(x)) \\le \\deg(a(x)) \\,."
},
{
"math_id": 17,
"text": "a(x) = q_0(x) b(x) + r_0(x)\n\\quad \\text{and} \\quad\n\\deg(r_0(x)) < \\deg(b(x))"
},
{
"math_id": 18,
"text": "\\gcd(a(x), b(x)) = \\gcd(b(x), r_0(x))."
},
{
"math_id": 19,
"text": "a_1(x) = b(x), b_1(x) = r_0(x),"
},
{
"math_id": 20,
"text": "\\deg(a_{k+1})+\\deg(b_{k+1}) < \\deg(a_{k})+\\deg(b_{k}),"
},
{
"math_id": 21,
"text": "b_N(x) = 0"
},
{
"math_id": 22,
"text": "\\gcd(a,b) = \\gcd(a_1,b_1) = \\cdots = \\gcd(a_N, 0) = a_N ."
},
{
"math_id": 23,
"text": "a = bq + r"
},
{
"math_id": 24,
"text": "\\deg(r) < \\deg(b),"
},
{
"math_id": 25,
"text": "\\gcd(a,b):= \\begin{cases}\na & \\text{if } b = 0 \\\\\n\\gcd(b, \\operatorname{rem}(a,b)) & \\text{otherwise}.\n\\end{cases}"
},
{
"math_id": 26,
"text": "au + bv = g"
},
{
"math_id": 27,
"text": "\\deg(u)<\\deg(b)-\\deg(g), \\quad \\deg(v)<\\deg(a)-\\deg(g)."
},
{
"math_id": 28,
"text": "r_i = a s_i + b t_i"
},
{
"math_id": 29,
"text": "s_i t_{i+1} - t_i s_{i+1} = s_i t_{i-1} - t_i s_{i-1},"
},
{
"math_id": 30,
"text": "s_i t_{i+1} - t_i s_{i+1} = (-1)^i."
},
{
"math_id": 31,
"text": "a+_Lb=a+_{K[X]}b."
},
{
"math_id": 32,
"text": "a\\cdot_Lb=\\operatorname{rem}(a._{K[X]}b,f)."
},
{
"math_id": 33,
"text": "s_0(P,Q) = \\cdots = s_{d-1}(P,Q) =0 \\ , s_d(P,Q)\\neq 0."
},
{
"math_id": 34,
"text": "S_0(P,Q)=\\cdots=S_{d-1}(P,Q) =0."
},
{
"math_id": 35,
"text": "\\deg(P)=\\deg(\\varphi(P))"
},
{
"math_id": 36,
"text": "\\deg(Q)=\\deg(\\varphi(Q)),"
},
{
"math_id": 37,
"text": "P=p_0+p_1 X+\\cdots +p_m X^m,\\quad Q=q_0+q_1 X+\\cdots +q_n X^n."
},
{
"math_id": 38,
"text": "\\mathcal{P}_i"
},
{
"math_id": 39,
"text": "\\varphi_i:\\mathcal{P}_{n-i} \\times \\mathcal{P}_{m-i} \\rightarrow \\mathcal{P}_{m+n-i}"
},
{
"math_id": 40,
"text": "\\varphi_i(A,B) = AP + BQ."
},
{
"math_id": 41,
"text": "\\varphi_0"
},
{
"math_id": 42,
"text": "\\varphi_i."
},
{
"math_id": 43,
"text": "S=\\begin{pmatrix} \np_m & 0 & \\cdots & 0 & q_n & 0 & \\cdots & 0 \\\\\np_{m-1} & p_m & \\cdots & 0 & q_{n-1} & q_n & \\cdots & 0 \\\\\np_{m-2} & p_{m-1} & \\ddots & 0 & q_{n-2} & q_{n-1} & \\ddots & 0 \\\\\n\\vdots &\\vdots & \\ddots & p_m & \\vdots &\\vdots & \\ddots & q_n \\\\\n\\vdots &\\vdots & \\cdots & p_{m-1} & \\vdots &\\vdots & \\cdots & q_{n-1} \\\\\np_0 & p_1 & \\cdots & \\vdots & q_0 & q_1 & \\cdots & \\vdots \\\\\n0 & p_0 & \\ddots & \\vdots & 0 & q_0 & \\ddots & \\vdots \\\\\n\\vdots & \\vdots & \\ddots & p_1 & \\vdots & \\vdots & \\ddots & q_1 \\\\\n0 & 0 & \\cdots & p_0 & 0 & 0 & \\cdots & q_0 \n\\end{pmatrix}."
},
{
"math_id": 44,
"text": "\\varphi_i"
},
{
"math_id": 45,
"text": "V_i=\\begin{pmatrix} \n1 & 0 & \\cdots & 0 & 0 & 0 & \\cdots & 0 \\\\\n0 & 1 & \\cdots & 0 & 0 & 0 & \\cdots & 0 \\\\\n\\vdots &\\vdots & \\ddots & \\vdots & \\vdots &\\ddots & \\vdots & 0 \\\\\n0 & 0 & \\cdots & 1 & 0 & 0 & \\cdots & 0 \\\\\n0 & 0 & \\cdots & 0 & X^i & X^{i-1} & \\cdots & 1 \n\\end{pmatrix}."
},
{
"math_id": 46,
"text": "f = \\prod_{i=1}^{\\deg(f)} f_i^i"
},
{
"math_id": 47,
"text": "r_{i+1} := \\operatorname{rem}(r_{i-1},r_{i})"
},
{
"math_id": 48,
"text": "r_{i+1} := -\\operatorname{rem}(r_{i-1},r_{i})."
},
{
"math_id": 49,
"text": "q = \\frac{p}{c}"
},
{
"math_id": 50,
"text": "\\operatorname{cont} (q) = \\frac{\\operatorname{cont} (p)}{c}."
},
{
"math_id": 51,
"text": "\\operatorname{primpart} (p) =\\frac{p}{\\operatorname{cont} (p)}."
},
{
"math_id": 52,
"text": "p = \\operatorname{cont} (p)\\,\\operatorname{primpart} (p),"
},
{
"math_id": 53,
"text": "\\operatorname{primpart} (pq)=\\operatorname{primpart} (p) \\operatorname{primpart}(q)"
},
{
"math_id": 54,
"text": "\\operatorname{cont} (pq)=\\operatorname{cont} (p) \\operatorname{cont}(q)."
},
{
"math_id": 55,
"text": "\\operatorname{primpart}(\\gcd_{F[X]}(q_1,q_2))=\\gcd_{R[X]}(\\operatorname{primpart}(q_1),\\operatorname{primpart}(q_2))."
},
{
"math_id": 56,
"text": "\\gcd_{R[X]}(p_1,p_2)=\\gcd_R(\\operatorname{cont}(p_1),\\operatorname{cont}(p_2)) \\gcd_{R[X]}(\\operatorname{primpart}(p_1),\\operatorname{primpart}(p_2)),"
},
{
"math_id": 57,
"text": "\\gcd_{R[X]}(\\operatorname{primpart}(p_1),\\operatorname{primpart}(p_2))=\\operatorname{primpart}(\\gcd_{F[X]}(p_1,p_2))."
},
{
"math_id": 58,
"text": "X^8 + X^6 - 3 X^4 - 3 X^3 + 8 X^2 + 2 X - 5"
},
{
"math_id": 59,
"text": "3 X^6 + 5 X^4 - 4 X^2 - 9 X + 21,"
},
{
"math_id": 60,
"text": "\\begin{align}\n&-\\tfrac{5}{9}X^4+\\tfrac{1}{9}X^2-\\tfrac{1}{3},\\\\\n&-\\tfrac{117}{25}X^2-9X+\\tfrac{441}{25},\\\\\n&\\tfrac{233150}{19773}X-\\tfrac{102500}{6591},\\\\\n&-\\tfrac{1288744821}{543589225}.\n\\end{align}"
},
{
"math_id": 61,
"text": "\\deg(A) = a"
},
{
"math_id": 62,
"text": "\\deg(B) = b"
},
{
"math_id": 63,
"text": "\\operatorname{prem}(A,B) = \\operatorname{rem}(\\operatorname{lc}(B)^{a-b+1}A,B),"
},
{
"math_id": 64,
"text": "r_{i+1}:=\\operatorname{rem}(r_{i-1},r_{i})"
},
{
"math_id": 65,
"text": "r_{i+1}:=\\frac{\\operatorname{prem}(r_{i-1},r_{i})}{\\alpha},"
},
{
"math_id": 66,
"text": "-15\\, X^4 + 3\\, X^2 - 9,"
},
{
"math_id": 67,
"text": "15795\\, X^2 + 30375\\, X - 59535,"
},
{
"math_id": 68,
"text": "1254542875143750\\, X - 1654608338437500,"
},
{
"math_id": 69,
"text": "12593338795500743100931141992187500."
},
{
"math_id": 70,
"text": "-5\\,X^4+X^2-3,"
},
{
"math_id": 71,
"text": "13\\,X^2+25\\,X-49,"
},
{
"math_id": 72,
"text": "4663\\,X-6150,"
},
{
"math_id": 73,
"text": "1."
},
{
"math_id": 74,
"text": "15\\,X^4-3\\,X^2+9,"
},
{
"math_id": 75,
"text": "65\\,X^2+125\\,X-245,"
},
{
"math_id": 76,
"text": "9326\\,X-12300,"
},
{
"math_id": 77,
"text": "260708."
},
{
"math_id": 78,
"text": "\\operatorname{prem2}(A,B) = -\\operatorname{rem}(\\left|\\operatorname{lc}(B)\\right|^{a-b+1}A,B),"
},
{
"math_id": 79,
"text": "-\\mathrm{prem2}(A,B)"
},
{
"math_id": 80,
"text": "\\operatorname{prem}(A,B)"
},
{
"math_id": 81,
"text": "F = \\mathbb{Q}(\\sqrt{3})"
},
{
"math_id": 82,
"text": "D = \\mathbb{Z}[\\sqrt{3}]"
},
{
"math_id": 83,
"text": "f = \\sqrt{3}x^3 - 5 x^2 + 4x + 9"
},
{
"math_id": 84,
"text": "g = x^4 + 4 x^2 + 3\\sqrt{3}x - 6"
},
{
"math_id": 85,
"text": "I = (2)"
},
{
"math_id": 86,
"text": "D/I"
},
{
"math_id": 87,
"text": "I"
},
{
"math_id": 88,
"text": "D"
},
{
"math_id": 89,
"text": "f,g"
},
{
"math_id": 90,
"text": "(D/I)[x]"
},
{
"math_id": 91,
"text": "F[x]"
}
] | https://en.wikipedia.org/wiki?curid=9927028 |
9928681 | Split graph | Graph which partitions into a clique and independent set
In graph theory, a branch of mathematics, a split graph is a graph in which the vertices can be partitioned into a clique and an independent set. Split graphs were first studied by Földes and Hammer (1977a, 1977b), and independently introduced by Tyshkevich and Chernyak (1979), where they called these graphs "polar graphs" ().
A split graph may have more than one partition into a clique and an independent set; for instance, the path "a"–"b"–"c" is a split graph, the vertices of which can be partitioned in three different ways: the clique {"a", "b"} and the independent set {"c"}, the clique {"b", "c"} and the independent set {"a"}, or the clique {"b"} and the independent set {"a", "c"}.
Split graphs can be characterized in terms of their forbidden induced subgraphs: a graph is split if and only if no induced subgraph is a cycle on four or five vertices, or a pair of disjoint edges (the complement of a 4-cycle).
Relation to other graph families.
From the definition, split graphs are clearly closed under complementation. Another characterization of split graphs involves complementation: they are chordal graphs the complements of which are also chordal. Just as chordal graphs are the intersection graphs of subtrees of trees, split graphs are the intersection graphs of distinct substars of star graphs. Almost all chordal graphs are split graphs; that is, in the limit as "n" goes to infinity, the fraction of "n"-vertex chordal graphs that are split approaches one.
Because chordal graphs are perfect, so are the split graphs. The double split graphs, a family of graphs derived from split graphs by doubling every vertex (so the clique comes to induce an antimatching and the independent set comes to induce a matching), figure prominently as one of five basic classes of perfect graphs from which all others can be formed in the proof by of the Strong Perfect Graph Theorem.
If a graph is both a split graph and an interval graph, then its complement is both a split graph and a comparability graph, and vice versa. The split comparability graphs, and therefore also the split interval graphs, can be characterized in terms of a set of three forbidden induced subgraphs. The split cographs are exactly the threshold graphs. The split permutation graphs are exactly the interval graphs that have interval graph complements;
these are the permutation graphs of skew-merged permutations. Split graphs have cochromatic number 2.
Algorithmic problems.
Let "G" be a split graph, partitioned into a clique "C" and an independent set "I". Then every maximal clique in a split graph is either "C" itself, or the neighborhood of a vertex in "I". Thus, it is easy to identify the maximum clique, and complementarily the maximum independent set, in a split graph. In any split graph, one of the following three possibilities must be true: there is a vertex "x" in "I" such that "C" ∪ {"x"} is complete (in which case "C" ∪ {"x"} is a maximum clique and "I" is a maximum independent set); there is a vertex "x" in "C" such that "I" ∪ {"x"} is independent (in which case "I" ∪ {"x"} is a maximum independent set and "C" is a maximum clique); or "C" is a maximal clique and "I" is a maximal independent set (in which case ("C", "I") is the unique partition into a clique and an independent set, "C" is the maximum clique, and "I" is the maximum independent set).
Some other optimization problems that are NP-complete on more general graph families, including graph coloring, are similarly straightforward on split graphs. Finding a Hamiltonian cycle remains NP-complete even for split graphs which are strongly chordal graphs. It is also well known that the Minimum Dominating Set problem remains NP-complete for split graphs.
Degree sequences.
One remarkable property of split graphs is that they can be recognized solely from their degree sequences. Let the degree sequence of a graph G be "d"1 ≥ "d"2 ≥ … ≥ "dn", and let m be the largest value of i such that "di" ≥ "i" – 1. Then G is a split graph if and only if
formula_0
If this is the case, then the m vertices with the largest degrees form a maximum clique in G, and the remaining vertices constitute an independent set.
The splittance of an arbitrary graph measures the extent to which this inequality fails to be true. If a graph is not a split graph, then the smallest sequence of edge insertions and removals that make it into a split graph can be obtained by adding all missing edges between the m vertices with the largest degrees, and removing all edges between pairs of the remaining vertices; the splittance counts the number of operations in this sequence.
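The degree-sequence test and the splittance can be checked with a short script. The sketch below is plain Python with ad hoc function names; it uses the closed-form splittance expression of Hammer and Simeone, which depends only on the degree sequence, and translates the 1-indexed condition above into 0-based indexing.
<syntaxhighlight lang="python">
def splittance(degrees):
    # splittance from the degree sequence d1 >= d2 >= ... >= dn;
    # m is the largest (1-based) index with d_m >= m - 1
    d = sorted(degrees, reverse=True)
    m = max((j + 1 for j in range(len(d)) if d[j] >= j), default=0)
    return (m * (m - 1) + sum(d[m:]) - sum(d[:m])) // 2

def is_split(degrees):
    # G is a split graph exactly when its splittance is zero
    return splittance(degrees) == 0

print(is_split([2, 1, 1]))      # True: the path a-b-c discussed above
print(is_split([1, 1, 1, 1]))   # False: two disjoint edges (splittance 1)
</syntaxhighlight>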
Counting split graphs.
showed that (unlabeled) "n"-vertex split graphs are in one-to-one correspondence with certain Sperner families. Using this fact, he determined a formula for the number of nonisomorphic split graphs on "n" vertices. For small values of "n", starting from "n" = 1, these numbers are
1, 2, 4, 9, 21, 56, 164, 557, 2223, 10766, 64956, 501696, ... (sequence in the OEIS).
This enumerative result was also proved earlier by .
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum_{i=1}^m d_i = m(m-1) + \\sum_{i=m+1}^n d_i."
}
] | https://en.wikipedia.org/wiki?curid=9928681 |
9929142 | 1 + 1 + 1 + 1 + ⋯ | Divergent series
In mathematics, 1 + 1 + 1 + 1 + ⋯, also written in summation notation as ∑"n"0, ∑1"n", or simply ∑1 (with "n" running from 1 to infinity), is a divergent series. Nevertheless, it is sometimes imputed to have a value of −1/2, especially in physics. This value can be justified by certain mathematical methods for obtaining values from divergent series, including zeta function regularization.
As a divergent series.
1 + 1 + 1 + 1 + ⋯ is a divergent series, meaning that its sequence of partial sums does not converge to a limit in the real numbers.
The sequence 1n can be thought of as a geometric series with the common ratio 1. For some other divergent geometric series, including Grandi's series with ratio −1, and the series 1 + 2 + 4 + 8 + ⋯ with ratio 2, one can use the general solution for the sum of a geometric series with base 1 and ratio "r", obtaining 1/(1 − "r"), but this summation method fails for 1 + 1 + 1 + 1 + ⋯, producing a division by zero.
Together with Grandi's series, this is one of two geometric series with rational ratio that diverges both for the real numbers and for all systems of p-adic numbers.
In the context of the extended real number line
formula_0
since its sequence of partial sums increases monotonically without bound.
Zeta function regularization.
Where the sum of "n"0 occurs in physical applications, it may sometimes be interpreted by zeta function regularization, as the value at "s" = 0 of the Riemann zeta function:
formula_1
The two formulas given above are not valid at zero however, but the analytic continuation is
formula_2
Using this one gets (given that Γ(1) = 1),
formula_3
where the power series expansion for "ζ"("s") about "s" = 1 follows because "ζ"("s") has a simple pole of residue one there. In this sense 1 + 1 + 1 + 1 + ⋯ = "ζ"(0) = −1/2.
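The regularized value can also be checked numerically. The snippet below is a minimal sketch assuming the arbitrary-precision mpmath library is available; it evaluates the analytically continued zeta function at and near zero.
<syntaxhighlight lang="python">
from mpmath import mp, zeta

mp.dps = 30                      # 30 decimal digits of working precision
print(zeta(0))                   # -0.5
for s in (0.1, 0.01, 0.001):
    print(s, zeta(s))            # approaches -0.5 as s -> 0
</syntaxhighlight>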
Emilio Elizalde presents a comment from others about the series, suggesting the centrality of the zeta function regularization of this series in physics:
<templatestyles src="Template:Blockquote/styles.css" />In a short period of less than a year, two distinguished physicists, A. Slavnov and F. Yndurain, gave seminars in Barcelona, about different subjects. It was remarkable that, in both presentations, at some point the speaker addressed the audience with these words: "'As everybody knows", 1 + 1 + 1 + ⋯ = −.' Implying maybe: "If you do not know this, it is no use to continue listening."
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum_{n=1}^{\\infin} 1 = +\\infty \\, ,"
},
{
"math_id": 1,
"text": "\\zeta(s)=\\sum_{n=1}^\\infty\\frac{1}{n^s}=\\frac{1}{1-2^{1-s}}\\sum_{n=1}^\\infty \\frac{(-1)^{n+1}}{n^s}\\,."
},
{
"math_id": 2,
"text": "\n\\zeta(s) = 2^s\\pi^{s-1}\\ \\sin\\left(\\frac{\\pi s}{2}\\right)\\ \\Gamma(1-s)\\ \\zeta(1-s)\n"
},
{
"math_id": 3,
"text": "\\zeta(0) = \\frac{1}{\\pi} \\lim_{s \\rightarrow 0} \\ \\sin\\left(\\frac{\\pi s}{2}\\right)\\ \\zeta(1-s) = \\frac{1}{\\pi} \\lim_{s \\rightarrow 0} \\ \\left( \\frac{\\pi s}{2} - \\frac{\\pi^3 s^3}{48} + ... \\right)\\ \\left( -\\frac{1}{s} + ... \\right) = -\\frac{1}{2}"
}
] | https://en.wikipedia.org/wiki?curid=9929142 |
9930635 | Lubrication theory | Flow of fluids within extremely thin regions
In fluid dynamics, lubrication theory describes the flow of fluids (liquids or gases) in a geometry in which one dimension is significantly smaller than the others. An example is the flow above air hockey tables, where the thickness of the air layer beneath the puck is much smaller than the dimensions of the puck itself.
Internal flows are those where the fluid is fully bounded. Internal flow lubrication theory has many industrial applications because of its role in the design of fluid bearings. Here a key goal of lubrication theory is to determine the pressure distribution in the fluid volume, and hence the forces on the bearing components. The working fluid in this case is often termed a lubricant.
Free film lubrication theory is concerned with the case in which one of the surfaces containing the fluid is a free surface. In that case, the position of the free surface is itself unknown, and one goal of lubrication theory is then to determine this. Examples include the flow of a viscous fluid over an inclined plane or over topography. Surface tension may be significant, or even dominant. Issues of wetting and dewetting then arise. For very thin films (thickness less than one micrometre), additional intermolecular forces, such as Van der Waals forces or disjoining forces, may become significant.
Theoretical basis.
Mathematically, lubrication theory can be seen as exploiting the disparity between two length scales. The first is the characteristic film thickness, formula_0, and the second is a characteristic substrate length scale formula_1. The key requirement for lubrication theory is that the ratio formula_2 is small, that is, formula_3.
The Navier–Stokes equations (or Stokes equations, when fluid inertia may be neglected) are expanded in this small parameter, and the leading-order equations are then
formula_4
where formula_5 and formula_6 are coordinates in the direction of the substrate and perpendicular to it respectively. Here formula_7 is the fluid pressure, and formula_8 is the fluid velocity component parallel to the substrate; formula_9 is the fluid viscosity. The equations show, for example, that pressure variations across the gap are small, and that those along the gap are proportional to the fluid viscosity. A more general formulation of the lubrication approximation would include a third dimension, and the resulting differential equation is known as the Reynolds equation.
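As a concrete illustration of these leading-order equations, consider a uniform gap of height "h" with an imposed constant pressure gradient and the lower wall sliding at a constant speed; both assumptions are introduced here purely for illustration and are not part of the general theory above. Integrating the second equation twice in the wall-normal coordinate with no-slip boundary conditions gives the classical Couette–Poiseuille profile, which the following sketch (assuming NumPy is available) evaluates numerically.
<syntaxhighlight lang="python">
import numpy as np

mu, h, dpdx, U = 1.0e-3, 1.0e-4, -1.0e4, 0.1   # illustrative values (SI units)
z = np.linspace(0.0, h, 201)

# u(z) solving mu * d^2u/dz^2 = dp/dx with u(0) = U (moving wall), u(h) = 0
u = dpdx / (2.0 * mu) * z * (z - h) + U * (1.0 - z / h)

# volume flux per unit width, q = integral of u dz (trapezoidal rule)
q = np.sum((u[:-1] + u[1:]) / 2.0 * np.diff(z))
print(q)   # analytically: -dpdx * h**3 / (12 * mu) + U * h / 2
</syntaxhighlight>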
Further details can be found in the literature or in the textbooks given in the bibliography.
Applications.
An important application area is lubrication of machinery components such as fluid bearings and mechanical seals. Coating is another major application area including the preparation of thin films, printing, painting and adhesives.
Biological applications have included studies of red blood cells in narrow capillaries and of liquid flow in the lung and eye.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H"
},
{
"math_id": 1,
"text": "L"
},
{
"math_id": 2,
"text": "\\varepsilon = H/L"
},
{
"math_id": 3,
"text": "\\epsilon \\ll 1"
},
{
"math_id": 4,
"text": "\n\\begin{align}\n\\frac{\\partial p}{\\partial z} & = 0 \\\\[6pt]\n\\frac{\\partial p}{\\partial x} & = \\mu\\frac{\\partial^2 u}{\\partial z^2}\n\\end{align}\n"
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": "z"
},
{
"math_id": 7,
"text": "p"
},
{
"math_id": 8,
"text": "u"
},
{
"math_id": 9,
"text": "\\mu"
}
] | https://en.wikipedia.org/wiki?curid=9930635 |
9931978 | Absolute radio-frequency channel number | In GSM cellular networks, an absolute radio-frequency channel number (ARFCN) is a code that specifies a pair of physical radio carriers used for transmission and reception in a land mobile radio system, one for the uplink signal and one for the downlink signal. ARFCNs for GSM are defined in Specification 45.005 Section 2. There are also other variants of the ARFCN numbering scheme that are in use for other systems that are not GSM. One such example is the TETRA system that has 25 kHz channel spacing and uses different base frequencies for numbering.
Different frequencies (ARFCNs) are used for the frequency-based component of GSM's multiple access scheme (FDMA — frequency-division multiple access). Uplink/downlink channel pairs in GSM are identified by ARFCN. Together with the time-based component (TDMA — time-division multiple access) the physical channel is defined by selecting a certain ARFCN and a certain time slot. Note that this physical channel should not be confused with the logical channels (e.g. BCCH — Broadcast Control Channel) that are time-multiplexed onto it under the rules of GSM Specification 05.03.
ARFCN table for common GSM systems.
This table shows the common channel numbers and corresponding uplink and downlink frequencies associated with a particular ARFCN, as well as the way to calculate the frequency from the ARFCN number and vice versa.
Observe this table only deals with GSM systems. There are other mobile telecommunications systems that do use ARFCN to number their channels, but they may use different offsets, channel spacing and so on.
Other versions of ARFCN.
TETRA uses different channel spacing compared to GSM systems. The standard is 25 kHz spacing, and the center frequency of each channel may be offset in a number of fashions, such as ±12.5 kHz or even ±6.25 kHz. This makes it more tricky to correlate the ARFCN strictly to a pair of frequencies; one needs to know the specifics of the system. Also, the duplex spacing is generally 10 MHz in TETRA, although other versions are available for certain applications.
In TETRA the ARFCN is always given for the downlink frequency, the uplink is by standard 10 MHz lower in frequency than the downlink frequency.
In the 3G (UMTS) and 4G (LTE) mobile telephone systems, the ARFCN is replaced with the UARFCN and EARFCN respectively, which are simpler and always have a direct relation between the frequency and the channel number.
Example ARFCN for TETRA.
In many countries in Europe there is a standardised set of frequencies used for "blue light services", i.e. the police, fire brigade, rescue services and so on. This set of frequencies corresponds to ARFCNs with a base of 300 MHz and an offset of 12.5 kHz.
To calculate the ARFCN from frequency the following method is used:
formula_0
Where:
"f" is the actual frequency [MHz]
"fb" is the base frequency [MHz]
"fo" is the offset frequency [MHz]
"fc" is the channel spacing frequency [MHz]
The range of frequencies used in these TETRA systems is defined by 380-385 MHz for the uplink (mobile to radio base station) paired with 390-395 MHz for the downlink (radio base station to mobile). Therefore, the base frequency "fb" is 300 MHz (the reference frequency to count from), the offset is 0.0125 MHz (12.5 kHz), and thus we get the relation:
formula_1
Inserting the frequency of the first channel, 390.0125 MHz, gives an ARFCN of 3600.
Calculating the frequency from ARFCN is just the reverse of this:
formula_2
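These two relations are simple enough to express directly in code. The short sketch below is plain Python with illustrative function names; the constants are the ones from the TETRA example above, and it converts in both directions.
<syntaxhighlight lang="python">
FB, FO, FC = 300.0, 0.0125, 0.025      # base, offset and channel spacing in MHz

def arfcn_from_freq(f_mhz):
    return round((f_mhz - FB - FO) / FC)

def freq_from_arfcn(arfcn):
    return FC * arfcn + FB + FO

print(arfcn_from_freq(390.0125))       # 3600, the first downlink channel of the example
print(freq_from_arfcn(3600))           # 390.0125 MHz downlink; uplink is 10 MHz lower
</syntaxhighlight>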
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{ARFCN} = \\frac{f - f_b - f_o}{f_c}"
},
{
"math_id": 1,
"text": "\\mathrm{ARFCN}=\\frac{f - 300 - 0.0125}{0.025}"
},
{
"math_id": 2,
"text": "f = f_c \\cdot \\mathrm{ARFCN} + f_b + f_o"
}
] | https://en.wikipedia.org/wiki?curid=9931978 |
9933325 | Thymidylate synthase | Enzyme
Thymidylate synthase (TS) (EC 2.1.1.45) is an enzyme that catalyzes the conversion of deoxyuridine monophosphate (dUMP) to deoxythymidine monophosphate (dTMP). Thymidine is one of the nucleotides in DNA. With inhibition of TS, an imbalance of deoxynucleotides and increased levels of dUMP arise. Both cause DNA damage.
Function.
The following reaction is catalyzed by thymidylate synthase:
5,10-methylenetetrahydrofolate + dUMP formula_0 dihydrofolate + dTMP
By means of reductive methylation, deoxyuridine monophosphate (dUMP) and N5,N10-methylene tetrahydrofolate are together used to form dTMP, yielding dihydrofolate as a secondary product.
This provides the sole de novo pathway for production of dTMP and is the only enzyme in folate metabolism in which the 5,10-methylenetetrahydrofolate is oxidised during one-carbon transfer. The enzyme is essential for regulating the balanced supply of the 4 DNA precursors in normal DNA replication: defects in the enzyme activity affecting the regulation process cause various biological and genetic abnormalities, such as thymineless death. The enzyme is an important target for certain chemotherapeutic drugs. Thymidylate synthase is an enzyme of about 30 to 35 kDa in most species except in protozoa and plants where it exists as a bifunctional enzyme that includes a dihydrofolate reductase domain. A cysteine residue is involved in the catalytic mechanism (it covalently binds the 5,6-dihydro-dUMP intermediate). The sequence around the active site of this enzyme is conserved from phages to vertebrates.
Thymidylate synthase is induced by the transcription factor LSF/TFCP2, and LSF is an oncogene in hepatocellular carcinoma. LSF and thymidylate synthase play significant roles in liver cancer proliferation, progression, and drug resistance.
Clinical significance.
Thymidylate synthase (TS) plays a crucial role in the early stages of DNA biosynthesis. DNA damage or deletion occurs on a daily basis as a result of both endogenous and environmental factors. Such environmental factors include ultraviolet light and cigarette smoke, which contains a variety of carcinogens. Therefore, synthesis and insertion of healthy DNA is vital for normal body functions and avoidance of cancerous activity. In addition, inhibiting the synthesis of nucleotides necessary for cell growth is an important strategy against proliferating cells. For this reason, TS has become an important target for cancer treatment by means of chemotherapy. The susceptibility of TS to inhibitors is a key part of its success as a target in the treatment of colorectal, pancreatic, ovarian, gastric, and breast cancers.
Using TS as a drug target.
The use of TS inhibitors has become a main focus of using TS as a drug target. The most widely used inhibitor is 5-fluorouracil (5-FU) and its metabolized form 5-fluorodeoxyuridine monophosphate (5-FdUMP), which acts as an antimetabolite that irreversibly inhibits TS by competitive binding. However, due to a low level of 5-FU found in many patients, it has been discovered that in combination with leucovorin (LV), 5-FU has greater success in down regulating tumor progression mechanisms and increasing immune system activity.
Experimentally, it has been shown that low levels of TS expression leads to a better response to 5-FU and higher success rates and survival of colon and liver cancer patients. However, additional experiments have merely stated that levels of TS may be associated with stage of disease, cell proliferation and tumor differentiation for those with lung adenocarcinoma but low levels are not necessarily indicators of high success. Expression levels of TS mRNA may be helpful in predicting the malignant potential of certain cancerous cells, thus improving cancer treatment targets and yielding higher survival rates among cancer patients [Hashimoto].
TS's relation to the cell cycle also contributes to its use in cancer treatment. Several cell-cycle-dependent kinases and transcription factors influence TS levels in the cell cycle, increasing its activity during the S phase but decreasing its activity when cells are no longer proliferating. In an auto-regulatory manner, TS controls not only its own translation but also that of other proteins, such as p53, which through mutation is the root of much tumor growth. Through its translation, TS has a varying expression in cancer cells and tumors, which leads to early cell death.
Interactive pathway map.
"Click on genes, proteins and metabolites below to link to respective articles."
Fluorouracil (5-FU) Activity edit
<templatestyles src="Reflist/styles.css" />
Mechanism description.
In the proposed mechanism, TS forms a covalent bond to the substrate dUMP through a 1,4-addition involving a cysteine nucleophile. The substrate tetrahydrofolate donates a methyl group to the alpha carbon while reducing the new methyl on dUMP to form dTMP.
It has been proven that the imine formed through reaction with THF and dUMP is an intermediate in the reaction with dUMP through mutations in the structure of TS that inhibit the completion of the mechanism. V316Am TS, a mutant with deletion of C terminal valines from both subunits, allows the catalysis of dehalogenation of BrdUMP preceding the mechanism described above and the covalent bond to THF and dUMP. The mutant TS is unable to accomplish the C-terminal conformational change needed to break covalent bonds to form dTMP, thus showing the proposed mechanism to be true. The structure was deduced through x-ray crystallography of V316Am TS to illustrate full homodimer TS structure (Figure 1). In addition, it showed possible interactions of the 175Arg and 174Arg between dimers. These arginines are thought to stabilize the UMP structures within the active sites by creating hydrogen bonds to the phosphate group (Figure 2). [Stroud and Finer-Moore]
5-FU is an inhibitor of TS. Upon entering the cell, 5-fluorouracil (5-FU) is converted to a variety of active metabolites, intracellularly. One such metabolite is FdUMP which differs from dUMP by a fluorine in place of a hydrogen on the alpha carbon. FdUMP is able to inhibit TS by binding to the nucleotide-binding site of dUMP. This competitive binding inhibits the normal function of dTMP synthesis from dUMP [Longley]. Thus the dUMP is unable to have an elimination reaction and complete the methyl donation from THF.
Figure 1. This figure depicts the homodimer that is TS. As you can see the orange and teal backbones never connect or intertwine, but there are side chains interactions between the dimers. On the orange protein, you can visibly detect two long side chains that enter the teal protein (this is located within the yellow circle). The other beige parts are side chains that interact within the active site. Just below the yellow circle, you are able to see the same pattern of side chains and configuration.
Figure 2. This figure shows the possible H-bond interactions between the arginines and the UMP in the active site of thymidylate synthase. This can be seen by the faint lines between the blue tips and the red tips. These arginines are used to hold the position of the UMP molecule so that the interaction may occur correctly. The two arginines in the top right corner that are located next to each other on the back bone are actually from the other protein of this dimer enzyme. This interaction is one of the many intermolecular forces that holds these two tertiary structures together. The yellow stand in the top-middle region shows a sulfur bond that forms between a cysteine side chain and UMP. This covalently holds the UMP within the active site until it is reacted to yield TMP.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=9933325 |
9933752 | Hartman–Grobman theorem | In mathematics, in the study of dynamical systems, the Hartman–Grobman theorem or linearisation theorem is a theorem about the local behaviour of dynamical systems in the neighbourhood of a hyperbolic equilibrium point. It asserts that linearisation—a natural simplification of the system—is effective in predicting qualitative patterns of behaviour. The theorem owes its name to Philip Hartman and David M. Grobman.
The theorem states that the behaviour of a dynamical system in a domain near a hyperbolic equilibrium point is qualitatively the same as the behaviour of its linearization near this equilibrium point, where hyperbolicity means that no eigenvalue of the linearization has real part equal to zero. Therefore, when dealing with such dynamical systems one can use the simpler linearization of the system to analyse its behaviour around equilibria.
Main theorem.
Consider a system evolving in time with state formula_0 that satisfies the differential equation formula_1 for some smooth map formula_2. Now suppose the map has a hyperbolic equilibrium state formula_3: that is, formula_4 and the Jacobian matrix formula_5 of formula_6 at state formula_7 has no eigenvalue with real part equal to zero. Then there exists a neighbourhood formula_8 of the equilibrium formula_7 and a homeomorphism formula_9,
such that formula_10 and such that in the neighbourhood formula_8 the flow of formula_1 is topologically conjugate by the continuous map formula_11 to the flow of its linearisation formula_12. A like result holds for iterated maps, and for fixed points of flows or maps on manifolds.
A mere topological conjugacy does not provide geometric information about the behavior near the equilibrium. Indeed, neighborhoods of any two equilibria are topologically conjugate so long as the dimensions of the contracting directions (negative eigenvalues) match and the dimensions of the expanding directions (positive eigenvalues) match. But the topological conjugacy in this context does provide the full geometric picture. In effect, the nonlinear phase portrait near the equilibrium is a thumbnail of the phase portrait of the linearized system. This is the meaning of the following regularity results, and it is illustrated by the saddle equilibrium in the example below.
Even for infinitely differentiable maps formula_6, the homeomorphism formula_13 need not be smooth, nor even locally Lipschitz. However, it turns out to be Hölder continuous, with exponent arbitrarily close to 1. Moreover, on a surface, i.e., in dimension 2, the linearizing homeomorphism and its inverse are continuously differentiable (with, as in the example below, the differential at the equilibrium being the identity) but need not be formula_14. And in any dimension, if formula_6 has Hölder continuous derivative, then the linearizing homeomorphism is differentiable at the equilibrium and its differential at the equilibrium is the identity.
The Hartman–Grobman theorem has been extended to infinite-dimensional Banach spaces, non-autonomous systems formula_15 (potentially stochastic), and to cater for the topological differences that occur when there are eigenvalues with zero or near-zero real-part.
Example.
The algebra necessary for this example is easily carried out by a web service that computes normal form coordinate transforms of systems of differential equations, autonomous or non-autonomous, deterministic or stochastic.
Consider the 2D system in variables formula_16 evolving according to the pair of coupled differential equations
formula_17
By direct computation it can be seen that the only equilibrium of this system lies at the origin, that is formula_18. The coordinate transform, formula_19 where formula_20, given by
formula_21
is a smooth map between the original formula_16 and new formula_20 coordinates, at least near the equilibrium at the origin. In the new coordinates the dynamical system transforms to its linearisation
formula_22
That is, a distorted version of the linearisation gives the original dynamics in some finite neighbourhood.
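This claim can be checked numerically by pushing the exact linear flow ("Y", "Z") = ("Y"0e−3"t", "Z"0e"t") through the coordinate change above and comparing the result with a direct integration of the nonlinear system. The sketch below assumes NumPy and SciPy are available; since the transform is only given up to cubic terms, agreement is approximate and improves closer to the origin.
<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import solve_ivp

def transform(Y, Z):
    # the coordinate change u = h^{-1}(U) given above, truncated at cubic order
    y = Y + Y * Z + Y**3 / 42 + Y * Z**2 / 2
    z = Z - Y**2 / 7 - Y**2 * Z / 3
    return y, z

def rhs(t, u):
    y, z = u
    return [-3 * y + y * z, z + y**2]

Y0, Z0 = 0.1, 0.05
t = np.linspace(0.0, 1.0, 11)

# nonlinear trajectory started from the transformed initial condition
sol = solve_ivp(rhs, (t[0], t[-1]), transform(Y0, Z0), t_eval=t, rtol=1e-10, atol=1e-12)

# prediction: exact linear flow (Y0 e^{-3t}, Z0 e^t) mapped through the transform
y_pred, z_pred = transform(Y0 * np.exp(-3 * t), Z0 * np.exp(t))

print(max(abs(sol.y[0] - y_pred)), max(abs(sol.y[1] - z_pred)))   # both small
</syntaxhighlight>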
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "u(t)\\in\\mathbb R^n"
},
{
"math_id": 1,
"text": "du/dt=f(u)"
},
{
"math_id": 2,
"text": "f: \\mathbb{R}^n \\to \\mathbb{R}^n"
},
{
"math_id": 3,
"text": "u^*\\in\\mathbb R^n"
},
{
"math_id": 4,
"text": "f(u^*)=0"
},
{
"math_id": 5,
"text": "A=[\\partial f_i/\\partial x_j]"
},
{
"math_id": 6,
"text": "f"
},
{
"math_id": 7,
"text": "u^*"
},
{
"math_id": 8,
"text": "N"
},
{
"math_id": 9,
"text": "h : N \\to \\mathbb{R}^n"
},
{
"math_id": 10,
"text": "h(u^*)=0"
},
{
"math_id": 11,
"text": "U=h(u)"
},
{
"math_id": 12,
"text": "dU/dt=AU"
},
{
"math_id": 13,
"text": "h"
},
{
"math_id": 14,
"text": "C^2"
},
{
"math_id": 15,
"text": "du/dt=f(u,t)"
},
{
"math_id": 16,
"text": "u=(y,z)"
},
{
"math_id": 17,
"text": " \\frac{dy}{dt} = -3y+yz\\quad\\text{and}\\quad \\frac{dz}{dt} = z+y^2."
},
{
"math_id": 18,
"text": "u^*=0"
},
{
"math_id": 19,
"text": "u=h^{-1}(U)"
},
{
"math_id": 20,
"text": "U=(Y,Z)"
},
{
"math_id": 21,
"text": "\n\\begin{align}\ny & \\approx Y+YZ+\\dfrac1{42}Y^3+\\dfrac1 2Y Z^2 \\\\[5pt]\nz & \\approx Z-\\dfrac1 7Y^2-\\dfrac1 3Y^2 Z\n\\end{align}\n"
},
{
"math_id": 22,
"text": " \\frac{dY}{dt}=-3Y\\quad\\text{and}\\quad \\frac{dZ}{dt} = Z."
}
] | https://en.wikipedia.org/wiki?curid=9933752 |
9934503 | Radiation damage | Radiation damage is the effect of ionizing radiation on physical objects including non-living structural materials. It can be either detrimental or beneficial for materials.
Radiobiology is the study of the action of ionizing radiation on living things, including the health effects of radiation in humans. High doses of ionizing radiation can cause damage to living tissue such as radiation burning and harmful mutations such as causing cells to become cancerous, and can lead to health problems such as radiation poisoning.
Causes.
This radiation may take several forms:
Effects on materials and devices.
Radiation may affect materials and devices in deleterious and beneficial ways:
Many of the radiation effects on materials are produced by collision cascades and covered by radiation chemistry.
Effects on metals and concrete.
Radiation can have harmful effects on solid materials as it can degrade their properties so that they are no longer mechanically sound. This is of special concern as it can greatly affect their ability to perform in nuclear reactors and is the emphasis of radiation material science, which seeks to mitigate this danger.
As a result of their usage and exposure to radiation, the effects on metals and concrete are particular areas of study. For metals, exposure to radiation can result in radiation hardening which strengthens the material while subsequently embrittling it (lowers toughness, allowing brittle fracture to occur). This occurs as a result of knocking atoms out of their lattice sites through both the initial interaction as well as a resulting cascade of damage, leading to the creation of defects, dislocations (similar to work hardening and precipitation hardening). Grain boundary engineering through thermomechanical processing has been shown to mitigate these effects by changing the fracture mode from intergranular (occurring along grain boundaries) to transgranular. This increases the strength of the material, mitigating the embrittling effect of radiation. Radiation can also lead to segregation and diffusion of atoms within materials, leading to phase segregation and voids as well as enhancing the effects of stress corrosion cracking through changes in both the water chemistry and alloy microstructure.
As concrete is used extensively in the construction of nuclear power plants, where it provides structure as well as containing radiation, the effect of radiation on it is also of major interest. During its lifetime, concrete will change properties naturally due to its normal aging process; however, nuclear exposure will lead to a loss of mechanical properties due to swelling of the concrete aggregates, thus damaging the bulk material. For instance, the biological shield of the reactor is frequently composed of Portland cement, where dense aggregates are added in order to decrease the radiation flux through the shield. These aggregates can swell and make the shield mechanically unsound. Numerous studies have shown decreases in both compressive and tensile strength as well as elastic modulus of concrete at a dosage of around 10^19 neutrons per square centimeter. These trends were also shown to exist in reinforced concrete, a composite of both concrete and steel.
The knowledge gained from current analyses of materials in fission reactors in regards to the effects of temperature, irradiation dosage, materials compositions, and surface treatments will be helpful in the design of future fission reactors as well as the development of fusion reactors.
Solids subject to radiation are constantly being bombarded with high energy particles. The interaction between particles, and atoms in the lattice of the reactor materials causes displacement in the atoms. Over the course of sustained bombardment, some of the atoms do not come to rest at lattice sites, which results in the creation of defects. These defects cause changes in the microstructure of the material, and ultimately result in a number of radiation effects.
Radiation cross section.
The probability of an interaction between two atoms is dependent on the thermal neutron cross section (measured in barn). Given a macroscopic cross section of formula_0 (where formula_1 is the microscopic cross section, and formula_2 is the density of atoms in the target), and a reaction rate of formula_3 (where formula_4 is the beam flux), the probability of interaction becomes formula_5.
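A quick numerical illustration of these relations is given below in plain Python; the cross section, atom density, flux and slab thickness are placeholder values chosen only for the example, not data taken from the table that follows.
<syntaxhighlight lang="python">
barn = 1.0e-24                 # cm^2
sigma = 2.56 * barn            # microscopic cross section (placeholder)
rho_A = 8.5e22                 # target atom density, atoms/cm^3 (placeholder)
phi = 1.0e13                   # beam flux, particles/(cm^2 s) (placeholder)
dx = 0.1                       # thin slab thickness, cm

Sigma = sigma * rho_A          # macroscopic cross section, 1/cm
R = phi * Sigma                # reaction rate density, reactions/(cm^3 s)
P = Sigma * dx                 # interaction probability over the thin slab

print(Sigma, R, P)
</syntaxhighlight>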
Listed below are the cross sections of common atoms or alloys.
Thermal Neutron Cross Sections (Barn)
Microstructural evolution under irradiation.
Microstructural evolution is driven in the material by the accumulation of defects over a period of sustained radiation. This accumulation is limited by defect recombination, by clustering of defects, and by the annihilation of defects at sinks. Defects must thermally migrate to sinks, and in doing so often recombine, or arrive at sinks to recombine. In most cases, Drad = DvCv + DiCi ≫ Dtherm; that is to say, the motion of interstitials and vacancies throughout the lattice structure of a material as a result of radiation often outweighs the thermal diffusion of the same material.
One consequence of a flux of vacancies towards sinks is a corresponding flux of atoms away from the sink. If vacancies are not annihilated or recombined before collecting at sinks, they will form voids. At sufficiently high temperature, dependent on the material, these voids can fill with gases from the decomposition of the alloy, leading to swelling in the material. This is a tremendous issue for pressure sensitive or constrained materials that are under constant radiation bombardment, like pressurized water reactors. In many cases, the radiation flux is non-stoichiometric, which causes segregation within the alloy. This non-stoichiometric flux can result in significant change in local composition near grain boundaries, where the movement of atoms and dislocations is impeded. When this flux continues, solute enrichment at sinks can result in the precipitation of new phases.
Thermo-mechanical effects of irradiation.
Hardening.
Radiation hardening is the strengthening of the material in question by the introduction of defect clusters, impurity-defect cluster complexes, dislocation loops, dislocation lines, voids, bubbles and precipitates. For pressure vessels, the loss in ductility that occurs as a result of the increase in hardness is a particular concern.
Embrittlement.
Radiation embrittlement results in a reduction of the energy to fracture, due to a reduction in strain hardening (as hardening is already occurring during irradiation). This is motivated for very similar reasons to those that cause radiation hardening; development of defect clusters, dislocations, voids, and precipitates. Variations in these parameters make the exact amount of embrittlement difficult to predict, but the generalized values for the measurement show predictable consistency.
Creep.
Thermal creep in irradiated materials is negligible by comparison to the irradiation creep, which can exceed 10^−6 sec^−1. The mechanism is not enhanced diffusivities, as would be intuitive from the elevated temperature, but rather interaction between the stress and the developing microstructure. Stress induces the nucleation of loops, and causes preferential absorption of interstitials at dislocations, which results in swelling. Swelling, in combination with the embrittlement and hardening, can have disastrous effects on any nuclear material under substantial pressure.
Growth.
Growth in irradiated materials is caused by Diffusion Anisotropy Difference (DAD). This phenomenon frequently occurs in zirconium, graphite, and magnesium because of natural properties.
Conductivity.
Thermal and electrical conductivity rely on the transport of energy through the electrons and the lattice of a material. Defects in the lattice and substitution of atoms via transmutation disturb these pathways, leading to a reduction in both types of conduction by radiation damage. The magnitude of reduction depends on the dominant type of conductivity in the material (electronic, related to the Wiedemann–Franz law, or phononic) and the details of the radiation damage and is therefore still hard to predict.
Effects on polymers.
Radiation damage can affect polymers that are found in nuclear reactors, medical devices, electronic packaging, and aerospace parts, as well as polymers that undergo sterilization or irradiation for use in food and pharmaceutical industries. Ionizing radiation can also be used to intentionally strengthen and modify the properties of polymers. Research in this area has focused on the three most common sources of radiation used for these applications, including gamma, electron beam, and x-ray radiation.
The mechanisms of radiation damage are different for polymers and metals, since dislocations and grain boundaries do not have real significance in a polymer. Instead, polymers deform via the movement and rearrangement of chains, which interact through Van der Waals forces and hydrogen bonding. In the presence of high energy, such as ionizing radiation, the covalent bonds that connect the polymer chains themselves can overcome their forces of attraction to form a pair of free radicals. These radicals then participate in a number of polymerization reactions that fall under the classification of radiation chemistry. Crosslinking describes the process through which carbon-centered radicals on different chains combine to form a network of crosslinks. In contrast, chain scission occurs when a carbon-centered radical on the polymer backbone reacts with another free radical, typically from oxygen in the atmosphere, causing a break in the main chain. Free radicals can also undergo reactions that graft new functional groups onto the backbone, or laminate two polymer sheets without an adhesive.
There is contradictory information about the expected effects of ionizing radiation for most polymers, since the conditions of radiation are so influential. For example, dose rate determines how fast free radicals are formed and whether they are able to diffuse through the material to recombine, or participate in chemical reactions. The ratio of crosslinking to chain scission is also affected by temperature, environment, presence of oxygen versus inert gases, radiation source (changing the penetration depth), and whether the polymer has been dissolved in an aqueous solution.
Crosslinking and chain scission have diverging effects on mechanical properties. Irradiated polymers typically undergo both types of reactions simultaneously, but not necessarily to the same extent. Crosslinks strengthen the polymer by preventing chain sliding, effectively leading to thermoset behavior. Crosslinks and branching lead to higher molecular weight and polydispersity. Thus, these polymers will generally have increased stiffness, tensile strength, and yield strength, and decreased solubility. Polyethylene is well known to experience improved mechanical properties as a result of crosslinking, including increased tensile strength and decreased elongation at break. Thus, it has “several advantageous applications in areas as diverse as rock bolts for mining, reinforcement of concrete, manufacture of light weight high strength ropes and high performance fabrics.”
In contrast, chain scission reactions will weaken the material by decreasing the average molecular weight of the chains, such that tensile and flexural strength decrease and solubility increases. Chain scission occurs primarily in the amorphous regions of the polymer. It can increase crystallinity in these regions by making it easier for the short chains to reassemble. Thus, it has been observed that crystallinity increases with dose, leading to a more brittle material on the macroscale. In addition, “gaseous products, such as CO2, may be trapped in the polymer, and this can lead to subsequent crazing and cracking due to accumulated local stresses." An example of this phenomenon is 3D printed materials, which are often porous as a result of their printing configuration. Oxygen can diffuse into the pores and react with the surviving free radicals, leading to embrittlement. Some materials continue to weaken through aging, as the remaining free radicals react.
The resistance of these polymers to radiation damage can be improved by grafting or copolymerizing aromatic groups, which enhance stability and decrease reactivity, and by adding antioxidants and nanomaterials, which act as free radical scavengers. In addition, higher molecular weight polymers will be more resistant to radiation.
Effects on gases.
Exposure to radiation causes chemical changes in gases. The least susceptible to damage are noble gases, where the major concern is the nuclear transmutation with follow-up chemical reactions of the nuclear reaction products.
High-intensity ionizing radiation in air can produce a visible ionized air glow of telltale bluish-purplish color. The glow can be observed e.g. during criticality accidents, around mushroom clouds shortly after a nuclear explosion, or inside of a damaged nuclear reactor like during the Chernobyl disaster.
Significant amounts of ozone can be produced. Even small amounts of ozone can cause ozone cracking in many polymers over time, in addition to the damage by the radiation itself.
Gas-filled radiation detectors.
In some gaseous ionisation detectors, radiation damage to gases plays an important role in the device's ageing, especially in devices exposed for long periods to high intensity radiation, e.g. detectors for the Large Hadron Collider or the Geiger–Müller tube.
Ionization processes require energy above 10 eV, while splitting covalent bonds in molecules and generating free radicals requires only 3-4 eV. The electrical discharges initiated by the ionization events by the particles result in a plasma populated by a large amount of free radicals. The highly reactive free radicals can recombine back to original molecules, or initiate a chain of free-radical polymerization reactions with other molecules, yielding compounds with increasing molecular weight. These high molecular weight compounds then precipitate from gaseous phase, forming conductive or non-conductive deposits on the electrodes and insulating surfaces of the detector and distorting its response. Gases containing hydrocarbon quenchers, e.g. argon–methane, are typically sensitive to aging by polymerization; addition of oxygen tends to lower the aging rates. Trace amounts of silicone oils, present from outgassing of silicone elastomers and especially from traces of silicone lubricants, tend to decompose and form deposits of silicon crystals on the surfaces. Gaseous mixtures of argon (or xenon) with carbon dioxide and optionally also with 2-3% of oxygen are highly tolerant to high radiation fluxes. The oxygen is added because a noble gas with carbon dioxide alone has too high a transparency for high-energy photons; ozone formed from the oxygen is a strong absorber of ultraviolet photons. Carbon tetrafluoride can be used as a component of the gas for high-rate detectors; however, the fluorine radicals produced during operation limit the choice of materials for the chambers and electrodes (e.g. gold electrodes are required, as the fluorine radicals attack metals, forming fluorides). Addition of carbon tetrafluoride can however eliminate the silicon deposits. Presence of hydrocarbons with carbon tetrafluoride leads to polymerization. A mixture of argon, carbon tetrafluoride, and carbon dioxide shows low aging in high hadron flux.
Effects on liquids.
Like gases, liquids lack fixed internal structure; the effects of radiation is therefore mainly limited to radiolysis, altering the chemical composition of the liquids. As with gases, one of the primary mechanisms is formation of free radicals.
All liquids are subject to radiation damage, with few exotic exceptions; e.g. molten sodium, where there are no chemical bonds to be disrupted, and liquid hydrogen fluoride, which produces gaseous hydrogen and fluorine, which spontaneously react back to hydrogen fluoride.
Effects on water.
Water subjected to ionizing radiation forms free radicals of hydrogen and hydroxyl, which can recombine to form gaseous hydrogen, oxygen, hydrogen peroxide, hydroxyl radicals, and peroxide radicals. In living organisms, which are composed mostly of water, majority of the damage is caused by the reactive oxygen species, free radicals produced from water. The free radicals attack the biomolecules forming structures within the cells, causing oxidative stress (a cumulative damage which may be significant enough to cause the cell death, or may cause DNA damage possibly leading to cancer).
In cooling systems of nuclear reactors, the formation of free oxygen would promote corrosion and is counteracted by addition of hydrogen to the cooling water. The hydrogen is not consumed as for each molecule reacting with oxygen one molecule is liberated by radiolysis of water; the excess hydrogen just serves to shift the reaction equilibriums by providing the initial hydrogen radicals. The reducing environment in pressurized water reactors is less prone to buildup of oxidative species. The chemistry of boiling water reactor coolant is more complex, as the environment can be oxidizing. Most of the radiolytic activity occurs in the core of the reactor where the neutron flux is highest; the bulk of energy is deposited in water from fast neutrons and gamma radiation, the contribution of thermal neutrons is much lower. In air-free water, the concentration of hydrogen, oxygen, and hydrogen peroxide reaches steady state at about 200 Gy of radiation. In presence of dissolved oxygen, the reactions continue until the oxygen is consumed and the equilibrium is shifted. Neutron activation of water leads to buildup of low concentrations of nitrogen species; due to the oxidizing effects of the reactive oxygen species, these tend to be present in the form of nitrate anions. In reducing environments, ammonia may be formed. Ammonia ions may be however also subsequently oxidized to nitrates. Other species present in the coolant water are the oxidized corrosion products (e.g. chromates) and fission products (e.g. pertechnetate and periodate anions, uranyl and neptunyl cations). Absorption of neutrons in hydrogen nuclei leads to buildup of deuterium and tritium in the water.
Behavior of supercritical water, important for the supercritical water reactors, differs from the radiochemical behavior of liquid water and steam and is currently under investigation.
The magnitude of the effects of radiation on water is dependent on the type and energy of the radiation, namely its linear energy transfer. Gas-free water subjected to low-LET gamma rays yields almost no radiolysis products and sustains an equilibrium with their low concentration. High-LET alpha radiation produces larger amounts of radiolysis products. In presence of dissolved oxygen, radiolysis always occurs. Dissolved hydrogen completely suppresses radiolysis by low-LET radiation, while radiolysis still occurs with high-LET radiation.
The presence of reactive oxygen species has strongly disruptive effect on dissolved organic chemicals. This is exploited in groundwater remediation by electron beam treatment.
Countermeasures.
Two main approaches to reduce radiation damage are reducing the amount of energy deposited in the sensitive material (e.g. by shielding, distance from the source, or spatial orientation), or modification of the material to be less sensitive to radiation damage (e.g. by adding antioxidants, stabilizers, or choosing a more suitable material).
In addition to the electronic device hardening mentioned above, some degree of protection may be obtained by shielding, usually with the interposition of high density materials (particularly lead, where space is critical, or concrete where space is available) between the radiation source and areas to be protected. For biological effects of substances such as radioactive iodine the ingestion of non-radioactive isotopes may substantially reduce the biological uptake of the radioactive form, and chelation therapy may be applied to accelerate the removal of radioactive materials formed from heavy metals from the body by natural processes.
For solid radiation damage.
Solid countermeasures to radiation damage consist of three approaches. Firstly, saturating the matrix with oversized solutes. This acts to trap the swelling that occurs as a result of the creep and dislocation motion. They also act to help prevent diffusion, which restricts the ability of the material to undergo radiation induced segregation. Secondly, dispersing an oxide inside the matrix of the material. Dispersed oxide helps to prevent creep, and to mitigate swelling and reduce radiation induced segregation as well, by preventing dislocation motion and the formation and motion of interstitials. Finally, by engineering grain boundaries to be as small as possible, dislocation motion can be impeded, which prevents the embrittlement and hardening that result in material failure.
Effects on humans.
Ionizing radiation is generally harmful and potentially lethal to living things but can have health benefits in radiation therapy for the treatment of cancer and thyrotoxicosis. Its most common impact is the induction of cancer with a latent period of years or decades after exposure. High doses can cause visually dramatic radiation burns, and/or rapid fatality through acute radiation syndrome. Controlled doses are used for medical imaging and radiotherapy.
Most adverse health effects of radiation exposure may be grouped in two general categories: deterministic effects (harmful tissue reactions), due in large part to the killing or malfunction of cells following high doses, and stochastic effects, i.e. cancer and heritable effects resulting from mutation of somatic or reproductive (germ) cells.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Sigma = \\sigma \\rho_A"
},
{
"math_id": 1,
"text": "\\sigma"
},
{
"math_id": 2,
"text": "\\rho_A"
},
{
"math_id": 3,
"text": "R = \\Phi \\Sigma = \\Phi \\sigma \\rho_A "
},
{
"math_id": 4,
"text": "\\Phi"
},
{
"math_id": 5,
"text": "P\\,dx = N_j \\sigma(E_i) \\, dx = \\Sigma \\, dx"
}
] | https://en.wikipedia.org/wiki?curid=9934503 |
9938244 | Graph operations | Procedures for constructing new graphs in graph theory
In the mathematical field of graph theory, graph operations are operations which produce new graphs from initial ones. They include both unary (one input) and binary (two input) operations.
Unary operations.
Unary operations create a new graph from a single initial graph.
Elementary operations.
Elementary operations or editing operations, which are also known as graph edit operations, create a new graph from an initial one by a simple local change, such as addition or deletion of a vertex or of an edge, merging and splitting of vertices, edge contraction, etc.
The graph edit distance between a pair of graphs is the minimum number of elementary operations required to transform one graph into the other.
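For illustration, the graph edit distance between two small graphs can be computed with the networkx Python library; this is a sketch assuming a networkx version that provides graph_edit_distance, and it is not part of the formal definition above.
import networkx as nx

# A path on three vertices becomes a triangle after one elementary operation
# (adding a single edge), so with unit edit costs the distance is 1.
P3 = nx.path_graph(3)
C3 = nx.cycle_graph(3)
print(nx.graph_edit_distance(P3, C3))   # 1.0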
Advanced operations.
Advanced operations create a new graph from an initial one by a complex change, such as:
Binary operations.
Binary operations create a new graph from two initial graphs "G"1 = ("V"1, "E"1) and "G"2 = ("V"2, "E"2), such as: | [
{
"math_id": 0,
"text": "G_1 \\nabla G_2"
}
] | https://en.wikipedia.org/wiki?curid=9938244 |
9938459 | Disjoint union of graphs | Combining the vertex and edge sets of two graphs
In graph theory, a branch of mathematics, the disjoint union of graphs is an operation that combines two or more graphs to form a larger graph.
It is analogous to the disjoint union of sets, and is constructed by making the vertex set of the result be the disjoint union of the vertex sets of the given graphs, and by making the edge set of the result be the disjoint union of the edge sets of the given graphs. Any disjoint union of two or more nonempty graphs is necessarily disconnected.
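For illustration, a disjoint union can be formed with the networkx Python library, which relabels the vertices of the two arguments so that their vertex sets do not overlap (a sketch assuming networkx is available):
import networkx as nx

# Disjoint union of a triangle and a single edge: 5 vertices, 4 edges, and,
# as for any disjoint union of two nonempty graphs, the result is disconnected.
G = nx.cycle_graph(3)
H = nx.path_graph(2)
D = nx.disjoint_union(G, H)
print(D.number_of_nodes(), D.number_of_edges())   # 5 4
print(nx.number_connected_components(D))          # 2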
Notation.
The disjoint union is also called the graph sum, and may be represented either by a plus sign or a circled plus sign: If formula_0 and formula_1 are two graphs, then formula_2 or formula_3 denotes their disjoint union.
Related graph classes.
Certain special classes of graphs may be represented using disjoint union operations. In particular:
The forests are the disjoint unions of trees.
The cluster graphs are the disjoint unions of complete graphs.
The 2-regular graphs are the disjoint unions of cycle graphs.
More generally, every graph is the disjoint union of connected graphs, its connected components.
The cographs are the graphs that can be constructed from single-vertex graphs by a combination of disjoint union and complement operations.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "H"
},
{
"math_id": 2,
"text": "G+H"
},
{
"math_id": 3,
"text": "G\\oplus H"
}
] | https://en.wikipedia.org/wiki?curid=9938459 |
9939158 | 1 − 2 + 4 − 8 + ⋯ | In mathematics, 1 − 2 + 4 − 8 + ⋯ is the infinite series whose terms are the successive powers of two with alternating signs. As a geometric series, it is characterized by its first term, 1, and its common ratio, −2.
formula_0
As a series of real numbers, it diverges, so in the usual sense it has no sum. In a much broader sense, the series is associated with another value besides ∞, namely 1/3, which is the limit of the series using the 2-adic metric.
Historical arguments.
Gottfried Leibniz considered the divergent alternating series 1 − 2 + 4 − 8 + 16 − ⋯ as early as 1673. He argued that by subtracting either on the left or on the right, one could produce either positive or negative infinity, and therefore both answers are wrong and the whole should be finite:
Now normally nature chooses the middle if neither of the two is permitted, or rather if it cannot be determined which of the two is permitted, and the whole is equal to a finite quantity
Leibniz did not quite assert that the series had a "sum", but he did infer an association with 1/3 following Mercator's method. The attitude that a series could equal some finite quantity without actually adding up to it as a sum would be commonplace in the 18th century, although no distinction is made in modern mathematics.
After Christian Wolff read Leibniz's treatment of Grandi's series in mid-1712, Wolff was so pleased with the solution that he sought to extend the arithmetic mean method to more divergent series such as 1 − 2 + 4 − 8 + 16 − ⋯. Briefly, if one expresses a partial sum of this series as a function of the penultimate term, one obtains either (4"n" + 1)/3 or (1 − 4"m")/3, according as that penultimate term is negative or positive. The mean of these values is (2"n" − 2"m" + 1)/3, and assuming that "m" = "n" at infinity yields 1/3 as the value of the series. Leibniz's intuition prevented him from straining his solution this far, and he wrote back that Wolff's idea was interesting but invalid for several reasons. The arithmetic means of neighboring partial sums do not converge to any particular value, and for all finite cases one has "n" = 2"m", not "n" = "m". Generally, the terms of a summable series should decrease to zero; even 1 − 1 + 1 − 1 + ⋯ could be expressed as a limit of such series. Leibniz counsels Wolff to reconsider so that he "might produce something worthy of science and himself."
Modern methods.
Geometric series.
Any summation method possessing the properties of regularity, linearity, and stability will sum a geometric series
formula_1
In this case "a" = 1 and "r" = −2, so the sum is 1/3.
Euler summation.
In his 1755 "Institutiones", Leonhard Euler effectively took what is now called the Euler transform of 1 − 2 + 4 − 8 + ⋯, arriving at the convergent series 1/2 − 1/4 + 1/8 − 1/16 + ⋯. Since the latter sums to 1/3, Euler concluded that 1 − 2 + 4 − 8 + ... = 1/3. His ideas on infinite series do not quite follow the modern approach; today one says that 1 − 2 + 4 − 8 + ... is Euler-summable and that its Euler sum is 1/3.
The Euler transform begins with the sequence of positive terms:
"a"0 = 1,
"a"1 = 2,
"a"2 = 4,
"a"3 = 8...
The sequence of forward differences is then
Δ"a"0 = "a"1 − "a"0 = 2 − 1 = 1,
Δ"a"1 = "a"2 − "a"1 = 4 − 2 = 2,
Δ"a"2 = "a"3 − "a"2 = 8 − 4 = 4,
Δ"a"3 = "a"4 − "a"3 = 16 − 8 = 8...
which is just the same sequence. Hence the iterated forward difference sequences all start with Δ"n""a"0 = 1 for every "n". The Euler transform is the series
formula_2
This is a convergent geometric series whose sum is 1/3 by the usual formula.
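The forward differences and the transformed series can be checked with a short exact-arithmetic computation; the following Python sketch is illustrative and not part of Euler's argument.
from fractions import Fraction

# Terms of the series with the signs removed: 1, 2, 4, 8, ...
a = [2 ** k for k in range(12)]

# Iterated forward differences; leading[n] is Delta^n a_0.
leading, row = [], a[:]
while row:
    leading.append(row[0])
    row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
print(leading[:6])           # [1, 1, 1, 1, 1, 1] -- every Delta^n a_0 equals 1

# Partial sums of the Euler transform  sum_n (-1)^n Delta^n a_0 / 2^(n+1)
total = Fraction(0)
for n, d in enumerate(leading):
    total += Fraction((-1) ** n * d, 2 ** (n + 1))
print(total, float(total))   # 1365/4096, approximately 1/3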
Borel summation.
The Borel sum of 1 − 2 + 4 − 8 + ⋯ is also 1/3; when Émile Borel introduced the limit formulation of Borel summation in 1896, this was one of his first examples after 1 − 1 + 1 − 1 + ⋯
"p"-adic numbers.
The sequence of partial sums associated with formula_3 in the 2-adic metric is
formula_4
and when expressed in base 2 using two's complement,
formula_5
and the limit of this sequence is formula_6 in the 2-adic metric. Thus formula_7.
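This convergence can be illustrated numerically: the 2-adic distance from the nth partial sum s_n to 1/3 is 2^−(n+1), because 3s_n − 1 = −(−2)^(n+1) and 3 is a 2-adic unit. The following Python sketch (illustrative only) computes the 2-adic valuation of 3s_n − 1.
def v2(x):
    """2-adic valuation of a nonzero integer."""
    v = 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return v

s = 0
for n in range(10):
    s += (-2) ** n
    # |s_n - 1/3|_2 = 2 ** -v2(3*s_n - 1), because 3 is a 2-adic unit
    print(n, s, v2(3 * s - 1))   # the valuation is n + 1, so the distance tends to 0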
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum_{k=0}^{n} (-2)^k"
},
{
"math_id": 1,
"text": "\\sum_{k=0}^\\infty a r^k = \\frac{a}{1-r}."
},
{
"math_id": 2,
"text": "\\frac{a_0}{2}-\\frac{\\Delta a_0}{4}+\\frac{\\Delta^2 a_0}{8}-\\frac{\\Delta^3 a_0}{16}+\\cdots = \\frac12-\\frac14+\\frac18-\\frac{1}{16}+\\cdots."
},
{
"math_id": 3,
"text": "1 - 2 + 4 - 8 \\ldots"
},
{
"math_id": 4,
"text": "1, -1, 3, -5, 11, \\ldots"
},
{
"math_id": 5,
"text": "\\overline{0}1, \\overline{1}1, \\overline{0}11, \\overline{1}011, \\overline{0}1011, \\ldots"
},
{
"math_id": 6,
"text": "\\overline{01}1 = \\frac{1}{3}"
},
{
"math_id": 7,
"text": "1 - 2 + 4 - 8 \\ldots = \\frac{1}{3}"
}
] | https://en.wikipedia.org/wiki?curid=9939158 |
993979 | Golem (ILP) | Golem is an inductive logic programming algorithm developed by Stephen Muggleton and Cao Feng in 1990. It uses the technique of relative least general generalisation proposed by Gordon Plotkin, leading to a bottom-up search through the subsumption lattice. In 1992, shortly after its introduction, Golem was considered the only inductive logic programming system capable of scaling to tens of thousands of examples.
Description.
Golem takes as input a definite program B as background knowledge together with sets of positive and negative examples, denoted formula_0 and formula_1 respectively. The overall idea is to construct the least general generalisation of formula_0 with respect to the background knowledge. However, if B is not merely a finite set of ground atoms, then this relative least general generalisation may not exist.
Therefore, rather than using B directly, Golem uses the set formula_2 of all ground atoms that can be resolved from B in at most h resolution steps. An additional difficulty is that if formula_1 is non-empty, the least general generalisation of formula_0 may entail a negative example. In this case, Golem generalises different subsets of formula_0 separately to obtain a program of several clauses.
Golem also employs some restrictions on the hypothesis space, ensuring that relative least general generalisations are polynomial in the number of training examples. Golem demands that all variables in the head of a clause also appear in a literal of the clause body; that the number of substitutions needed to instantiate existentially quantified variables introduced in a literal is bounded; and that the depth of the chain of substitutions needed to instantiate such a variable is also bounded.
Example.
The following example about learning definitions of family relations uses the abbreviations
"par": "parent", "fem": "female", "dau": "daughter", "g": "George", "h": "Helen", "m": "Mary", "t": "Tom", "n": "Nancy", and "e": "Eve".
It starts from the background knowledge (cf. picture)
formula_3,
the positive examples
formula_4,
and the trivial proposition
true
to denote the absence of negative examples.
The relative least general generalisation is now computed as follows to obtain a definition of the "daughter" relation.
Relativize each positive example with the complete background knowledge: formula_5
Convert the two clauses into clause normal form: formula_6
Anti-unify each compatible pair of literals from the two clauses: formula_7 from formula_8 and formula_9, formula_10 from formula_11 and formula_12, formula_13 from formula_14 and formula_15, formula_17 from formula_16 and formula_12, and similarly for the remaining pairs of background-knowledge literals.
Delete all negated literals containing variables that do not occur in the head literal, that is, keep only negated literals whose variables are among formula_18 as well as the negated ground background-knowledge literals; apart from the latter, only formula_19 remains.
Convert the resulting clause back to Horn form: formula_20
The resulting Horn clause is the hypothesis h obtained by Golem. Informally, the clause reads "formula_21 is called a daughter of formula_22 if formula_22 is the parent of formula_21 and formula_21 is female", which is a commonly accepted definition.
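For illustration, the pairwise anti-unification that underlies least general generalisation can be sketched in a few lines of Python. This is not Golem itself; the term representation, the variable naming, and the omission of the background knowledge are simplifying assumptions of the sketch.
from itertools import count

def lgg(t1, t2, table, fresh):
    """Anti-unify two terms; `table` remembers the variable chosen for each
    pair of mismatching subterms, so repeated pairs reuse the same variable."""
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(lgg(x, y, table, fresh) for x, y in zip(t1[1:], t2[1:]))
    if (t1, t2) not in table:
        table[(t1, t2)] = "X" + str(next(fresh))
    return table[(t1, t2)]

table, fresh = {}, count(1)
# dau(m,h) and dau(e,t) generalise to dau(X1, X2) ...
print(lgg(("dau", "m", "h"), ("dau", "e", "t"), table, fresh))
# ... and par(h,m), par(t,e) generalise to par(X2, X1) with the same variables,
# which is what makes the generalised clause meaningful.
print(lgg(("par", "h", "m"), ("par", "t", "e"), table, fresh))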
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E^{+}"
},
{
"math_id": 1,
"text": "E^{-}"
},
{
"math_id": 2,
"text": "B^{h}"
},
{
"math_id": 3,
"text": "\\textit{par}(h,m) \\land \\textit{par}(h,t) \\land \\textit{par}(g,m) \\land \\textit{par}(t,e) \\land \\textit{par}(n,e) \\land \\textit{fem}(h) \\land \\textit{fem}(m) \\land \\textit{fem}(n) \\land \\textit{fem}(e)"
},
{
"math_id": 4,
"text": "\\textit{dau}(m,h) \\land \\textit{dau}(e,t)"
},
{
"math_id": 5,
"text": "\\begin{align}\n\\textit{dau}(m,h) \\leftarrow \\textit{par}(h,m) \\land \\textit{par}(h,t) \\land \\textit{par}(g,m) \\land \\textit{par}(t,e) \\land \\textit{par}(n,e) \\land \\textit{fem}(h) \\land \\textit{fem}(m) \\land \\textit{fem}(n) \\land \\textit{fem}(e) \\\\\n\\textit{dau}(e,t) \\leftarrow \\textit{par}(h,m) \\land \\textit{par}(h,t) \\land \\textit{par}(g,m) \\land \\textit{par}(t,e) \\land \\textit{par}(n,e) \\land \\textit{fem}(h) \\land \\textit{fem}(m) \\land \\textit{fem}(n) \\land \\textit{fem}(e)\n\\end{align}"
},
{
"math_id": 6,
"text": "\\begin{align}\n\\textit{dau}(m,h) \\lor \\lnot \\textit{par}(h,m) \\lor \\lnot \\textit{par}(h,t) \\lor \\lnot \\textit{par}(g,m) \\lor \\lnot \\textit{par}(t,e) \\lor \\lnot \\textit{par}(n,e) \\lor \\lnot \\textit{fem}(h) \\lor \\lnot \\textit{fem}(m) \\lor \\lnot \\textit{fem}(n) \\lor \\lnot \\textit{fem}(e) \\\\\n\\textit{dau}(e,t) \\lor \\lnot \\textit{par}(h,m) \\lor \\lnot \\textit{par}(h,t) \\lor \\lnot \\textit{par}(g,m) \\lor \\lnot \\textit{par}(t,e) \\lor \\lnot \\textit{par}(n,e) \\lor \\lnot \\textit{fem}(h) \\lor \\lnot \\textit{fem}(m) \\lor \\lnot \\textit{fem}(n) \\lor \\lnot \\textit{fem}(e)\n\\end{align}"
},
{
"math_id": 7,
"text": "\\textit{dau}(x_{me},x_{ht})"
},
{
"math_id": 8,
"text": "\\textit{dau}(m,h)"
},
{
"math_id": 9,
"text": "\\textit{dau}(e,t)"
},
{
"math_id": 10,
"text": "\\lnot \\textit{par}(x_{ht},x_{me})"
},
{
"math_id": 11,
"text": "\\lnot \\textit{par}(h,m)"
},
{
"math_id": 12,
"text": "\\lnot \\textit{par}(t,e)"
},
{
"math_id": 13,
"text": "\\lnot \\textit{fem}(x_{me})"
},
{
"math_id": 14,
"text": "\\lnot \\textit{fem}(m)"
},
{
"math_id": 15,
"text": "\\lnot \\textit{fem}(e)"
},
{
"math_id": 16,
"text": "\\lnot \\textit{par}(g,m)"
},
{
"math_id": 17,
"text": "\\lnot \\textit{par}(x_{gt},x_{me})"
},
{
"math_id": 18,
"text": "x_{me},x_{ht}"
},
{
"math_id": 19,
"text": "\\textit{dau}(x_{me},x_{ht}) \\lor \\lnot \\textit{par}(x_{ht},x_{me}) \\lor \\lnot \\textit{fem}(x_{me})"
},
{
"math_id": 20,
"text": "\\textit{dau}(x_{me},x_{ht}) \\leftarrow \\textit{par}(x_{ht},x_{me}) \\land \\textit{fem}(x_{me}) \\land (\\text{all background knowledge facts})"
},
{
"math_id": 21,
"text": "x_{me}"
},
{
"math_id": 22,
"text": "x_{ht}"
}
] | https://en.wikipedia.org/wiki?curid=993979 |
99438 | Extended Euclidean algorithm | Method for computing the relation of two integers with their greatest common divisor
In arithmetic and computer programming, the extended Euclidean algorithm is an extension to the Euclidean algorithm, and computes, in addition to the greatest common divisor (gcd) of integers "a" and "b", also the coefficients of Bézout's identity, which are integers "x" and "y" such that
formula_0
This is a certifying algorithm, because the gcd is the only number that can simultaneously satisfy this equation and divide the inputs.
It allows one to compute also, with almost no extra cost, the quotients of "a" and "b" by their greatest common divisor.
Extended Euclidean algorithm also refers to a very similar algorithm for computing the polynomial greatest common divisor and the coefficients of Bézout's identity of two univariate polynomials.
The extended Euclidean algorithm is particularly useful when "a" and "b" are coprime. With that provision, "x" is the modular multiplicative inverse of "a" modulo "b", and "y" is the modular multiplicative inverse of "b" modulo "a". Similarly, the polynomial extended Euclidean algorithm allows one to compute the multiplicative inverse in algebraic field extensions and, in particular in finite fields of non prime order. It follows that both extended Euclidean algorithms are widely used in cryptography. In particular, the computation of the modular multiplicative inverse is an essential step in the derivation of key-pairs in the RSA public-key encryption method.
Description.
The standard Euclidean algorithm proceeds by a succession of Euclidean divisions whose quotients are not used. Only the "remainders" are kept. For the extended algorithm, the successive quotients are used. More precisely, the standard Euclidean algorithm with "a" and "b" as input, consists of computing a sequence formula_1 of quotients and a sequence formula_2 of remainders such that
formula_3
It is the main property of Euclidean division that the inequalities on the right define uniquely formula_4 and formula_5 from formula_6 and formula_7
The computation stops when one reaches a remainder formula_8 which is zero; the greatest common divisor is then the last non zero remainder formula_9
The extended Euclidean algorithm proceeds similarly, but adds two other sequences, as follows
formula_10
The computation also stops when formula_11 and gives
formula_12 is the greatest common divisor of the input formula_13 and formula_14
The Bézout coefficients are formula_15 and formula_16 that is formula_17
formula_18 and formula_19 are, up to their sign, the quotients of "b" and "a" by the greatest common divisor.
Moreover, if "a" and "b" are both positive and formula_20, then
formula_21
for formula_22 where formula_23 denotes the integral part of x, that is the greatest integer not greater than x.
This implies that the pair of Bézout's coefficients provided by the extended Euclidean algorithm is the "minimal pair" of Bézout coefficients, as being the unique pair satisfying both above inequalities.
Also it means that the algorithm can be done without integer overflow by a computer program using integers of a fixed size that is larger than that of "a" and "b".
Example.
The following table shows how the extended Euclidean algorithm proceeds with input 240 and 46. The greatest common divisor is the last non zero entry, 2, in the column "remainder". The computation stops at row 6, because the remainder in it is 0. Bézout coefficients appear in the last two entries of the second-to-last row. In fact, it is easy to verify that −9 × 240 + 47 × 46 = 2. Finally the last two entries 23 and −120 of the last row are, up to the sign, the quotients of the input 46 and 240 by the greatest common divisor 2.
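The rows of such a table can be reproduced with a direct translation of the algorithm into Python; the following is an illustrative sketch whose variable names follow the description above.
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b), printing each row."""
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
        print("quotient", q, "remainder", r, "s", s, "t", t)
    return old_r, old_s, old_t

g, x, y = extended_gcd(240, 46)
print(g, x, y)              # 2 -9 47
print(x * 240 + y * 46)     # 2, confirming the Bezout identity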
Proof.
As formula_24 the sequence of the formula_25 is a decreasing sequence of nonnegative integers (from "i" = 2 on). Thus it must stop with some formula_26 This proves that the algorithm stops eventually.
As formula_27 the greatest common divisor is the same for formula_28 and formula_29 This shows that the greatest common divisor of the input formula_30 is the same as that of formula_31 This proves that formula_32 is the greatest common divisor of "a" and "b". (Until this point, the proof is the same as that of the classical Euclidean algorithm.)
As formula_33 and formula_34 we have formula_35 for "i" = 0 and 1. The relation follows by induction for all formula_36:
formula_37
Thus formula_15 and formula_38 are Bézout coefficients.
Consider the matrix
formula_39
The recurrence relation may be rewritten in matrix form
formula_40
The matrix formula_41 is the identity matrix and its determinant is one. The determinant of the rightmost matrix in the preceding formula is −1. It follows that the determinant of formula_42 is formula_43 In particular, for formula_44 we have formula_45 Viewing this as a Bézout's identity, this shows that formula_46 and formula_47 are coprime. The relation formula_48 that has been proved above and Euclid's lemma show that formula_46 divides b, that is that formula_49 for some integer d. Dividing by formula_46 the relation formula_48 gives formula_50 So, formula_46 and formula_51 are coprime integers that are the quotients of a and b by a common factor, which is thus their greatest common divisor or its opposite.
To prove the last assertion, assume that "a" and "b" are both positive and formula_20. Then, formula_52, and if formula_53, it can be seen that the "s" and "t" sequences for ("a","b") under the EEA are, up to initial 0s and 1s, the "t" and "s" sequences for ("b","a"). The definitions then show that the ("a","b") case reduces to the ("b","a") case. So assume that formula_54 without loss of generality.
It can be seen that formula_55 is 1 and formula_56 (which exists by formula_20) is a negative integer. Thereafter, the formula_57 alternate in sign and strictly increase in magnitude, which follows inductively from the definitions and the fact that formula_58 for formula_59; the case formula_60 holds because formula_54. The same is true for the formula_61 after the first few terms, for the same reason. Furthermore, it is easy to see that formula_62 (when "a" and "b" are both positive and formula_20). Thus, noticing that formula_63, we obtain
formula_64
This, together with the fact that formula_65 are larger than or equal in absolute value to any previous formula_57 or formula_61 respectively, completes the proof.
Polynomial extended Euclidean algorithm.
For univariate polynomials with coefficients in a field, everything works similarly, Euclidean division, Bézout's identity and extended Euclidean algorithm. The first difference is that, in the Euclidean division and the algorithm, the inequality formula_66 has to be replaced by an inequality on the degrees formula_67 Otherwise, everything which precedes in this article remains the same, simply by replacing integers by polynomials.
A second difference lies in the bound on the size of the Bézout coefficients provided by the extended Euclidean algorithm, which is more accurate in the polynomial case, leading to the following theorem.
"If a and b are two nonzero polynomials, then the extended Euclidean algorithm produces the unique pair of polynomials" ("s", "t") "such that"
formula_68
"and"
formula_69
A third difference is that, in the polynomial case, the greatest common divisor is defined only up to the multiplication by a non zero constant. There are several ways to define unambiguously a greatest common divisor.
In mathematics, it is common to require that the greatest common divisor be a monic polynomial. To get this, it suffices to divide every element of the output by the leading coefficient of formula_9 This allows that, if "a" and "b" are coprime, one gets 1 in the right-hand side of Bézout's inequality. Otherwise, one may get any non-zero constant. In computer algebra, the polynomials commonly have integer coefficients, and this way of normalizing the greatest common divisor introduces too many fractions to be convenient.
The second way to normalize the greatest common divisor in the case of polynomials with integer coefficients is to divide every output by the content of formula_70 to get a primitive greatest common divisor. If the input polynomials are coprime, this normalisation also provides a greatest common divisor equal to 1. The drawback of this approach is that many fractions have to be computed and simplified during the computation.
A third approach consists in extending the algorithm of subresultant pseudo-remainder sequences in a way that is similar to the extension of the Euclidean algorithm to the extended Euclidean algorithm. This allows that, when starting with polynomials with integer coefficients, all polynomials that are computed have integer coefficients. Moreover, every computed remainder formula_71 is a subresultant polynomial. In particular, if the input polynomials are coprime, then the Bézout's identity becomes
formula_72
where formula_73 denotes the resultant of "a" and "b". In this form of Bézout's identity, there is no denominator in the formula. If one divides everything by the resultant one gets the classical Bézout's identity, with an explicit common denominator for the rational numbers that appear in it.
Pseudocode.
To implement the algorithm that is described above, one should first remark that only the two last values of the indexed variables are needed at each step. Thus, for saving memory, each indexed variable must be replaced by just two variables.
For simplicity, the following algorithm (and the other algorithms in this article) uses parallel assignments. In a programming language which does not have this feature, the parallel assignments need to be simulated with an auxiliary variable. For example, the first one,
(old_r, r) := (r, old_r - quotient * r)
is equivalent to
prov := r;
r := old_r - quotient × prov;
old_r := prov;
and similarly for the other parallel assignments.
This leads to the following code:
function extended_gcd(a, b)
(old_r, r) := (a, b)
(old_s, s) := (1, 0)
(old_t, t) := (0, 1)
while r ≠ 0 do
quotient := old_r div r
(old_r, r) := (r, old_r − quotient × r)
(old_s, s) := (s, old_s − quotient × s)
(old_t, t) := (t, old_t − quotient × t)
output "Bézout coefficients:", (old_s, old_t)
output "greatest common divisor:", old_r
output "quotients by the gcd:", (t, s)
The quotients of "a" and "b" by their greatest common divisor, which is output, may have an incorrect sign. This is easy to correct at the end of the computation but has not been done here for simplifying the code. Similarly, if either "a" or "b" is zero and the other is negative, the greatest common divisor that is output is negative, and all the signs of the output must be changed.
Finally, notice that in Bézout's identity, formula_74, one can solve for formula_75 given formula_76. Thus, an optimization to the above algorithm is to compute only the formula_15 sequence (which yields the Bézout coefficient formula_77), and then compute formula_75 at the end:
function extended_gcd(a, b)
s := 0; old_s := 1
r := b; old_r := a
while r ≠ 0 do
quotient := old_r div r
(old_r, r) := (r, old_r − quotient × r)
(old_s, s) := (s, old_s − quotient × s)
if b ≠ 0 then
bezout_t := (old_r − old_s × a) div b
else
bezout_t := 0
output "Bézout coefficients:", (old_s, bezout_t)
output "greatest common divisor:", old_r
However, in many cases this is not really an optimization: whereas the former algorithm is not susceptible to overflow when used with machine integers (that is, integers with a fixed upper bound of digits), the multiplication of "old_s * a" in computation of "bezout_t" can overflow, limiting this optimization to inputs which can be represented in less than half the maximal size. When using integers of unbounded size, the time needed for multiplication and division grows quadratically with the size of the integers. This implies that the "optimisation" replaces a sequence of multiplications/divisions of small integers by a single multiplication/division, which requires more computing time than the operations that it replaces, taken together.
Simplification of fractions.
A fraction "a"/"b" is in canonical simplified form if "a" and "b" are coprime and "b" is positive. This canonical simplified form can be obtained by replacing the three output lines of the preceding pseudo code by
if "s" = 0 then output "Division by zero"
if "s" < 0 then "s" := −"s"; "t" := −"t" ("for avoiding negative denominators")
if "s" = 1 then output -"t" ("for avoiding denominators equal to" 1)
output -"t"/"s"
The proof of this algorithm relies on the fact that "s" and "t" are two coprime integers such that "as" + "bt" = 0, and thus formula_78. To get the canonical simplified form, it suffices to move the minus sign for having a positive denominator.
If "b" divides "a" evenly, the algorithm executes only one iteration, and we have "s" = 1 at the end of the algorithm. It is the only case where the output is an integer.
Computing multiplicative inverses in modular structures.
The extended Euclidean algorithm is the essential tool for computing multiplicative inverses in modular structures, typically the modular integers and the algebraic field extensions. A notable instance of the latter case are the finite fields of non-prime order.
Modular integers.
If "n" is a positive integer, the ring Z/"nZ may be identified with the set {0, 1, ..., "n"-1} of the remainders of Euclidean division by "n", the addition and the multiplication consisting in taking the remainder by "n" of the result of the addition and the multiplication of integers. An element "a" of Z/"nZ has a multiplicative inverse (that is, it is a unit) if it is coprime to "n". In particular, if "n" is prime, "a" has a multiplicative inverse if it is not zero (modulo "n"). Thus Z/"n"Z is a field if and only if "n" is prime.
Bézout's identity asserts that "a" and "n" are coprime if and only if there exist integers "s" and "t" such that
formula_79
Reducing this identity modulo "n" gives
formula_80
Thus "t", or, more exactly, the remainder of the division of "t" by "n", is the multiplicative inverse of "a" modulo "n".
To adapt the extended Euclidean algorithm to this problem, one should remark that the Bézout coefficient of "n" is not needed, and thus does not need to be computed. Also, for getting a result which is positive and lower than "n", one may use the fact that the integer "t" provided by the algorithm satisfies |"t"| < "n". That is, if "t" < 0, one must add "n" to it at the end. This results in the pseudocode, in which the input "n" is an integer larger than 1.
function inverse(a, n)
t := 0; newt := 1
r := n; newr := a
while newr ≠ 0 do
quotient := r div newr
(t, newt) := (newt, t − quotient × newt)
(r, newr) := (newr, r − quotient × newr)
if r > 1 then
return "a is not invertible"
if t < 0 then
t := t + n
return t
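A direct Python translation of this pseudocode is sketched below (illustrative; on Python 3.8 and later the result can be cross-checked against the built-in modular pow):
def inverse(a, n):
    """Multiplicative inverse of a modulo n (n > 1), or None when gcd(a, n) != 1."""
    t, newt = 0, 1
    r, newr = n, a
    while newr != 0:
        q = r // newr
        t, newt = newt, t - q * newt
        r, newr = newr, r - q * newr
    if r > 1:
        return None        # a is not invertible modulo n
    return t % n           # equivalent to adding n when t is negative, since |t| < n

print(inverse(7, 40))                      # 23, since 7 * 23 = 161 = 4 * 40 + 1
print(inverse(7, 40) == pow(7, -1, 40))    # True on Python 3.8+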
Simple algebraic field extensions.
The extended Euclidean algorithm is also the main tool for computing multiplicative inverses in simple algebraic field extensions. An important case, widely used in cryptography and coding theory, is that of finite fields of non-prime order. In fact, if "p" is a prime number, and "q" = "p""d", the field of order "q" is a simple algebraic extension of the prime field of "p" elements, generated by a root of an irreducible polynomial of degree "d".
A simple algebraic extension "L" of a field "K", generated by the root of an irreducible polynomial "p" of degree "d" may be identified to the quotient ring formula_81, and its elements are in bijective correspondence with the polynomials of degree less than "d". The addition in "L" is the addition of polynomials. The multiplication in "L" is the remainder of the Euclidean division by "p" of the product of polynomials. Thus, to complete the arithmetic in "L", it remains only to define how to compute multiplicative inverses. This is done by the extended Euclidean algorithm.
The algorithm is very similar to that provided above for computing the modular multiplicative inverse. There are two main differences: firstly the last but one line is not needed, because the Bézout coefficient that is provided always has a degree less than "d". Secondly, the greatest common divisor which is provided, when the input polynomials are coprime, may be any non zero elements of "K"; this Bézout coefficient (a polynomial generally of positive degree) has thus to be multiplied by the inverse of this element of "K". In the pseudocode which follows, "p" is a polynomial of degree greater than one, and "a" is a polynomial.
function inverse(a, p)
t := 0; newt := 1
r := p; newr := a
while newr ≠ 0 do
quotient := r div newr
(r, newr) := (newr, r − quotient × newr)
(t, newt) := (newt, t − quotient × newt)
if degree(r) > 0 then
return "Either p is not irreducible or a is a multiple of p"
return (1/r) × t
Example.
For example, if the polynomial used to define the finite field GF(28) is "p" = "x"8 + "x"4 + "x"3 + "x" + 1, and "a" = "x"6 + "x"4 + "x" + 1 is the element whose inverse is desired, then performing the algorithm results in the computation described in the following table. (Let us recall that in fields of order 2"n", one has −"z" = "z" and "z" + "z" = 0 for every element "z" in the field.) Since 1 is the only nonzero element of GF(2), the adjustment in the last line of the pseudocode is not needed.
Thus, the inverse is "x"7 + "x"6 + "x"3 + "x", as can be confirmed by multiplying the two elements together, and taking the remainder by p of the result.
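This computation can be reproduced by representing polynomials over GF(2) as integer bitmasks, where bit k stores the coefficient of x to the power k; the following Python sketch is illustrative, and its helper functions are not standard library routines.
# Polynomials over GF(2) as integer bitmasks: bit k holds the coefficient of x^k.
P = 0b100011011        # p = x^8 + x^4 + x^3 + x + 1
A = 0b001010011        # a = x^6 + x^4 + x + 1

def deg(f):
    return f.bit_length() - 1

def polydivmod(f, g):
    """Euclidean division in GF(2)[x]; returns (quotient, remainder)."""
    q = 0
    while f != 0 and deg(f) >= deg(g):
        shift = deg(f) - deg(g)
        q ^= 1 << shift
        f ^= g << shift
    return q, f

def polymul(f, g, mod=None):
    """Multiplication in GF(2)[x], optionally reduced modulo `mod`."""
    out = 0
    while g:
        if g & 1:
            out ^= f
        f <<= 1
        g >>= 1
    return out if mod is None else polydivmod(out, mod)[1]

def poly_inverse(a, p):
    t, newt = 0, 1
    r, newr = p, a
    while newr != 0:
        q, _ = polydivmod(r, newr)
        t, newt = newt, t ^ polymul(q, newt)   # subtraction is XOR over GF(2)
        r, newr = newr, r ^ polymul(q, newr)
    assert r == 1, "p is not irreducible or a is a multiple of p"
    return t   # over GF(2) the leading coefficient of r is 1, so no final scaling is needed

inv = poly_inverse(A, P)
print(hex(inv))                  # 0xca, i.e. x^7 + x^6 + x^3 + x
print(polymul(A, inv, mod=P))    # 1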
The case of more than two numbers.
One can handle the case of more than two numbers iteratively. First we show that formula_82. To prove this let formula_83. By definition of gcd, formula_84 is a divisor of formula_85 and formula_86. Thus formula_87 for some formula_88. Similarly formula_84 is a divisor of formula_89, so formula_90 for some formula_91. Let formula_92. By our construction of formula_93, formula_94, but since formula_84 is the greatest common divisor, formula_93 is a unit. And since formula_95 the result is proven.
So if formula_96 then there are formula_77 and formula_75 such that formula_97 so the final equation will be
formula_98
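The same bookkeeping extends iteratively to any number of inputs; the following Python sketch (illustrative only) folds the two-argument algorithm over a list and rescales the accumulated coefficients.
def extended_gcd(a, b):
    old_r, r, old_s, s, old_t, t = a, b, 1, 0, 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

def extended_gcd_many(numbers):
    """gcd(a_1, ..., a_n) with coefficients c_i such that sum(c_i * a_i) == gcd."""
    g, coeffs = numbers[0], [1]
    for a in numbers[1:]:
        g, x, y = extended_gcd(g, a)
        coeffs = [x * c for c in coeffs] + [y]   # distribute x over the earlier coefficients
    return g, coeffs

g, coeffs = extended_gcd_many([56, 44, 100])
print(g, coeffs)                                          # 4 [4, -5, 0]
print(sum(c * a for c, a in zip(coeffs, [56, 44, 100])))  # 4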
So then to apply to "n" numbers we use induction
formula_99
with the equations following directly. | [
{
"math_id": 0,
"text": "ax + by = \\gcd(a, b)."
},
{
"math_id": 1,
"text": "q_1,\\ldots, q_k"
},
{
"math_id": 2,
"text": "r_0,\\ldots, r_{k+1}"
},
{
"math_id": 3,
"text": "\n\\begin{align}\nr_0 & =a \\\\\nr_1 & =b \\\\\n& \\,\\,\\,\\vdots \\\\\nr_{i+1} & =r_{i-1}-q_i r_i \\quad \\text {and} \\quad 0\\le r_{i+1} < |r_i| \\quad\\text{(this defines }q_i)\\\\\n& \\,\\,\\, \\vdots\n\\end{align}\n"
},
{
"math_id": 4,
"text": "q_i"
},
{
"math_id": 5,
"text": "r_{i+1}"
},
{
"math_id": 6,
"text": "r_{i-1}"
},
{
"math_id": 7,
"text": "r_{i}."
},
{
"math_id": 8,
"text": "r_{k+1}"
},
{
"math_id": 9,
"text": "r_{k}."
},
{
"math_id": 10,
"text": "\n\\begin{align}\nr_0 & =a & r_1 & =b \\\\\ns_0 & =1 & s_1 & =0 \\\\\nt_0 & =0 & t_1 & =1 \\\\\n& \\,\\,\\,\\vdots & & \\,\\,\\,\\vdots \\\\\nr_{i+1} & =r_{i-1}-q_i r_i & \\text {and } 0 & \\le r_{i+1} < |r_i| & \\text{(this defines } q_i \\text{)}\\\\\ns_{i+1} & =s_{i-1}-q_i s_i \\\\\nt_{i+1} & =t_{i-1}-q_i t_i \\\\\n& \\,\\,\\, \\vdots\n\\end{align}\n"
},
{
"math_id": 11,
"text": "r_{k+1}=0"
},
{
"math_id": 12,
"text": "r_k"
},
{
"math_id": 13,
"text": "a=r_0"
},
{
"math_id": 14,
"text": "b=r_1."
},
{
"math_id": 15,
"text": "s_k"
},
{
"math_id": 16,
"text": "t_k,"
},
{
"math_id": 17,
"text": "\\gcd(a,b)=r_k=as_k+bt_k"
},
{
"math_id": 18,
"text": "s_{k+1}=\\pm\\frac{b}{\\gcd(a,b)}"
},
{
"math_id": 19,
"text": "t_{k+1}=\\pm\\frac{a}{\\gcd(a,b)}"
},
{
"math_id": 20,
"text": "\\gcd(a,b)\\neq\\min(a,b)"
},
{
"math_id": 21,
"text": "|s_i| \\leq \\left\\lfloor\\frac{b}{2\\gcd(a,b)}\\right\\rfloor\\quad \\text{and} \\quad |t_i| \\leq \\left\\lfloor\\frac{a}{2\\gcd(a,b)}\\right\\rfloor"
},
{
"math_id": 22,
"text": "0\\leq i \\leq k,"
},
{
"math_id": 23,
"text": "\\lfloor x\\rfloor"
},
{
"math_id": 24,
"text": " 0\\le r_{i+1}<|r_i|, "
},
{
"math_id": 25,
"text": " r_i "
},
{
"math_id": 26,
"text": " r_{k+1}=0."
},
{
"math_id": 27,
"text": " r_{i+1}= r_{i-1} - r_i q_i,"
},
{
"math_id": 28,
"text": "(r_{i-1}, r_i)"
},
{
"math_id": 29,
"text": "(r_{i}, r_{i+1})."
},
{
"math_id": 30,
"text": "a=r_0, b=r_1 "
},
{
"math_id": 31,
"text": " r_k, r_{k+1}=0."
},
{
"math_id": 32,
"text": " r_k "
},
{
"math_id": 33,
"text": " a=r_0"
},
{
"math_id": 34,
"text": " b=r_1,"
},
{
"math_id": 35,
"text": "as_i+bt_i=r_i"
},
{
"math_id": 36,
"text": "i>1"
},
{
"math_id": 37,
"text": "r_{i+1} = r_{i-1} - r_i q_i = (as_{i-1}+bt_{i-1}) - (as_i+bt_i)q_i = (as_{i-1}-as_iq_i) + (bt_{i-1}-bt_iq_i) = as_{i+1}+bt_{i+1}."
},
{
"math_id": 38,
"text": "t_k"
},
{
"math_id": 39,
"text": "A_i=\\begin{pmatrix} s_{i-1} & s_i\\\\ t_{i-1} & t_i \\end{pmatrix}."
},
{
"math_id": 40,
"text": "A_{i+1} = A_i \\cdot \\begin{pmatrix} 0 & 1\\\\ 1 & -q_i \\end{pmatrix}."
},
{
"math_id": 41,
"text": "A_1"
},
{
"math_id": 42,
"text": "A_i"
},
{
"math_id": 43,
"text": "(-1)^{i-1}."
},
{
"math_id": 44,
"text": "i=k+1,"
},
{
"math_id": 45,
"text": "s_k t_{k+1} - t_k s_{k+1} = (-1)^k."
},
{
"math_id": 46,
"text": "s_{k+1}"
},
{
"math_id": 47,
"text": "t_{k+1}"
},
{
"math_id": 48,
"text": "as_{k+1}+bt_{k+1}=0"
},
{
"math_id": 49,
"text": "b=ds_{k+1}"
},
{
"math_id": 50,
"text": "a=-dt_{k+1}."
},
{
"math_id": 51,
"text": "-t_{k+1}"
},
{
"math_id": 52,
"text": "a \\neq b "
},
{
"math_id": 53,
"text": "a < b"
},
{
"math_id": 54,
"text": "a > b"
},
{
"math_id": 55,
"text": "s_2"
},
{
"math_id": 56,
"text": "s_3"
},
{
"math_id": 57,
"text": "s_i"
},
{
"math_id": 58,
"text": "q_i\\geq 1"
},
{
"math_id": 59,
"text": "1 \\leq i \\leq k"
},
{
"math_id": 60,
"text": "i = 1"
},
{
"math_id": 61,
"text": "t_i"
},
{
"math_id": 62,
"text": "q_k \\geq 2"
},
{
"math_id": 63,
"text": "|s_{k+1}| = |s_{k-1}| + q_k |s_k|"
},
{
"math_id": 64,
"text": "|s_{k+1}|=\\left |\\frac{b}{\\gcd(a,b)} \\right | \\geq 2|s_k| \\qquad \\text{and} \\qquad |t_{k+1}|= \\left |\\frac{a}{\\gcd(a,b)} \\right | \\geq 2|t_k|."
},
{
"math_id": 65,
"text": "s_k,t_k"
},
{
"math_id": 66,
"text": "0\\le r_{i+1}<|r_i|"
},
{
"math_id": 67,
"text": "\\deg r_{i+1}<\\deg r_i."
},
{
"math_id": 68,
"text": "as+bt=\\gcd(a,b)"
},
{
"math_id": 69,
"text": "\\deg s < \\deg b - \\deg (\\gcd(a,b)), \\quad \\deg t < \\deg a - \\deg (\\gcd(a,b))."
},
{
"math_id": 70,
"text": "r_{k},"
},
{
"math_id": 71,
"text": "r_i"
},
{
"math_id": 72,
"text": "as+bt=\\operatorname{Res}(a,b),"
},
{
"math_id": 73,
"text": "\\operatorname{Res}(a,b)"
},
{
"math_id": 74,
"text": "ax + by = \\gcd(a, b)"
},
{
"math_id": 75,
"text": "y"
},
{
"math_id": 76,
"text": "a, b, x, \\gcd(a, b)"
},
{
"math_id": 77,
"text": "x"
},
{
"math_id": 78,
"text": "\\frac{a}{b} = -\\frac{t}{s}"
},
{
"math_id": 79,
"text": "ns+at=1"
},
{
"math_id": 80,
"text": "at \\equiv 1 \\mod n."
},
{
"math_id": 81,
"text": "K[X]/\\langle p\\rangle,"
},
{
"math_id": 82,
"text": "\\gcd(a,b,c) = \\gcd(\\gcd(a,b),c)"
},
{
"math_id": 83,
"text": "d=\\gcd(a,b,c)"
},
{
"math_id": 84,
"text": "d"
},
{
"math_id": 85,
"text": "a"
},
{
"math_id": 86,
"text": "b"
},
{
"math_id": 87,
"text": "\\gcd(a,b)=k d"
},
{
"math_id": 88,
"text": "k"
},
{
"math_id": 89,
"text": "c"
},
{
"math_id": 90,
"text": "c=jd"
},
{
"math_id": 91,
"text": "j"
},
{
"math_id": 92,
"text": "u=\\gcd(k,j)"
},
{
"math_id": 93,
"text": "u"
},
{
"math_id": 94,
"text": "ud | a,b,c"
},
{
"math_id": 95,
"text": "ud=\\gcd(\\gcd(a,b),c)"
},
{
"math_id": 96,
"text": "na + mb = \\gcd(a,b)"
},
{
"math_id": 97,
"text": "x\\gcd(a,b) + yc = \\gcd(a,b,c)"
},
{
"math_id": 98,
"text": "x(na + mb) + yc = (xn)a + (xm)b + yc = \\gcd(a,b,c).\\,"
},
{
"math_id": 99,
"text": "\\gcd(a_1,a_2,\\dots,a_n) =\\gcd(a_1,\\, \\gcd(a_2,\\, \\gcd(a_3,\\dots, \\gcd(a_{n-1}\\,,a_n))),\\dots),"
}
] | https://en.wikipedia.org/wiki?curid=99438 |
9943932 | Adjoint bundle | In mathematics, an adjoint bundle is a vector bundle naturally associated to any principal bundle. The fibers of the adjoint bundle carry a Lie algebra structure making the adjoint bundle into a (nonassociative) algebra bundle. Adjoint bundles have important applications in the theory of connections as well as in gauge theory.
Formal definition.
Let "G" be a Lie group with Lie algebra formula_0, and let "P" be a principal "G"-bundle over a smooth manifold "M". Let
formula_1
be the (left) adjoint representation of "G". The adjoint bundle of "P" is the associated bundle
formula_2
The adjoint bundle is also commonly denoted by formula_3. Explicitly, elements of the adjoint bundle are equivalence classes of pairs ["p", "X"] for "p" ∈ "P" and "X" ∈ formula_0 such that
formula_4
for all "g" ∈ "G". Since the structure group of the adjoint bundle consists of Lie algebra automorphisms, the fibers naturally carry a Lie algebra structure making the adjoint bundle into a bundle of Lie algebras over "M".
Restriction to a closed subgroup.
Let "G" be any Lie group with Lie algebra formula_0, and let "H" be a closed subgroup of G.
Via the (left) adjoint representation of G on formula_0, G becomes a topological transformation group of formula_0.
By restricting the adjoint representation of G to the subgroup H,
formula_5
H also acts as a topological transformation group on formula_0. For every h in H, formula_6 is a Lie algebra automorphism.
Since H is a closed subgroup of the Lie group G, the homogeneous space M=G/H is the base space of a principal bundle formula_7 with total space G and structure group H. So the existence of H-valued transition functions formula_8 is assured, where formula_9 is an open covering for M, and the transition functions formula_10 form a cocycle of transition functions on M.
The associated fibre bundle formula_11 is a bundle of Lie algebras, with typical fibre formula_0, and a continuous mapping formula_12 induces on each fibre the Lie bracket.
Properties.
Differential forms on "M" with values in formula_13 are in one-to-one correspondence with horizontal, "G"-equivariant Lie algebra-valued forms on "P". A prime example is the curvature of any connection on "P" which may be regarded as a 2-form on "M" with values in formula_13.
The space of sections of the adjoint bundle is naturally an (infinite-dimensional) Lie algebra. It may be regarded as the Lie algebra of the infinite-dimensional Lie group of gauge transformations of "P" which can be thought of as sections of the bundle formula_14 where conj is the action of "G" on itself by (left) conjugation.
If formula_15 is the frame bundle of a vector bundle formula_16, then formula_17 has fibre the general linear group formula_18 (either real or complex, depending on formula_19) where formula_20. This structure group has Lie algebra consisting of all formula_21 matrices formula_22, and these can be thought of as the endomorphisms of the vector bundle formula_19. Indeed there is a natural isomorphism formula_23.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathfrak g"
},
{
"math_id": 1,
"text": "\\mathrm{Ad}: G\\to\\mathrm{Aut}(\\mathfrak g)\\sub\\mathrm{GL}(\\mathfrak g)"
},
{
"math_id": 2,
"text": "\\mathrm{ad} P = P\\times_{\\mathrm{Ad}}\\mathfrak g"
},
{
"math_id": 3,
"text": "\\mathfrak g_P"
},
{
"math_id": 4,
"text": "[p\\cdot g,X] = [p,\\mathrm{Ad}_{g}(X)]"
},
{
"math_id": 5,
"text": "\\mathrm{Ad\\vert_H}: H \\hookrightarrow G \\to \\mathrm{Aut}(\\mathfrak g) "
},
{
"math_id": 6,
"text": "Ad\\vert_H(h): \\mathfrak g \\mapsto \\mathfrak g"
},
{
"math_id": 7,
"text": "G \\to M"
},
{
"math_id": 8,
"text": "g_{ij}: U_{i}\\cap U_{j} \\rightarrow H"
},
{
"math_id": 9,
"text": "U_{i}"
},
{
"math_id": 10,
"text": "g_{ij}"
},
{
"math_id": 11,
"text": " \\xi= (E,p,M,\\mathfrak g) = G[(\\mathfrak g, \\mathrm{Ad\\vert_H})] "
},
{
"math_id": 12,
"text": " \\Theta :\\xi \\oplus \\xi \\rightarrow \\xi "
},
{
"math_id": 13,
"text": "\\mathrm{ad} P"
},
{
"math_id": 14,
"text": "P \\times_{\\mathrm conj} G"
},
{
"math_id": 15,
"text": "P=\\mathcal{F}(E)"
},
{
"math_id": 16,
"text": "E\\to M"
},
{
"math_id": 17,
"text": "P"
},
{
"math_id": 18,
"text": "\\operatorname{GL}(r)"
},
{
"math_id": 19,
"text": "E"
},
{
"math_id": 20,
"text": "\\operatorname{rank}(E) = r"
},
{
"math_id": 21,
"text": "r\\times r"
},
{
"math_id": 22,
"text": "\\operatorname{Mat}(r)"
},
{
"math_id": 23,
"text": "\\operatorname{ad} \\mathcal{F}(E) = \\operatorname{End}(E)"
}
] | https://en.wikipedia.org/wiki?curid=9943932 |
9944002 | 1/2 − 1/4 + 1/8 − 1/16 + ⋯ | In mathematics, the infinite series 1/2 − 1/4 + 1/8 − 1/16 + ⋯
is a simple example of an alternating series that converges absolutely.
It is a geometric series whose first term is 1/2 and whose common ratio is −1/2, so its sum is
formula_0
Hackenbush and the surreals.
A slight rearrangement of the series reads
formula_1
The series has the form of a positive integer plus a series containing every negative power of two with either a positive or negative sign, so it can be translated into the infinite blue-red Hackenbush string that represents the surreal number 1/3:
LRRLRLR... = 1/3.
A slightly simpler Hackenbush string eliminates the repeated R:
LRLRLRL... = 2/3.
In terms of the Hackenbush game structure, this equation means that the board depicted on the right has a value of 0; whichever player moves second has a winning strategy.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum_{n=1}^\\infty \\frac{(-1)^{n+1}}{2^n}=\\frac12-\\frac14+\\frac18-\\frac{1}{16}+\\cdots=\\frac{\\frac12}{1-(-\\frac12)} = \\frac13."
},
{
"math_id": 1,
"text": "1-\\frac12-\\frac14+\\frac18-\\frac{1}{16}+\\cdots=\\frac13."
}
] | https://en.wikipedia.org/wiki?curid=9944002 |
9944209 | Cactus graph | Mathematical tree of cycles
In graph theory, a cactus (sometimes called a cactus tree) is a connected graph in which any two simple cycles have at most one vertex in common. Equivalently, it is a connected graph in which every edge belongs to at most one simple cycle, or (for nontrivial cacti) in which every block (maximal subgraph without a cut-vertex) is an edge or a cycle.
Properties.
Cacti are outerplanar graphs. Every pseudotree is a cactus. A nontrivial graph is a cactus if and only if every block is either a simple cycle or a single edge.
The family of graphs in which each component is a cactus is downwardly closed under graph minor operations. This graph family may be characterized by a single forbidden minor, the four-vertex diamond graph formed by removing an edge from the complete graph "K"4.
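The block characterisation gives a simple test: a connected graph is a cactus exactly when every biconnected component with three or more vertices has as many edges as vertices. The following Python sketch assumes the networkx library and is illustrative only.
import networkx as nx

def is_cactus(G):
    """True when G is connected and every biconnected component is an edge or a cycle."""
    if G.number_of_nodes() == 0 or not nx.is_connected(G):
        return False
    for block in nx.biconnected_components(G):
        edges_in_block = sum(1 for u, v in G.edges() if u in block and v in block)
        if len(block) >= 3 and edges_in_block != len(block):
            return False
    return True

# Two triangles sharing a vertex (a friendship graph) form a cactus ...
F2 = nx.Graph([(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)])
print(is_cactus(F2))          # True
# ... but the diamond graph, the forbidden minor mentioned above, is not a cactus.
diamond = nx.Graph([(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)])
print(is_cactus(diamond))     # False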
Triangular cactus.
A triangular cactus is a special type of cactus graph such that each cycle has length three and each edge belongs to a cycle. For instance, the friendship graphs, graphs formed from a collection of triangles joined together at a single shared vertex, are triangular cacti. As well as being cactus graphs the triangular cacti are also block graphs and locally linear graphs.
Triangular cacti have the property that they remain connected if any matching is removed from them; for a given number of vertices, they have the fewest possible edges with this property. Every tree with an odd number of vertices may be augmented to a triangular cactus by adding edges to it, giving a minimal augmentation with the property of remaining connected after the removal of a matching.
The largest triangular cactus in any graph may be found in polynomial time using an algorithm for the matroid parity problem. Since triangular cactus graphs are planar graphs, the largest triangular cactus can be used as an approximation to the largest planar subgraph, an important subproblem in planarization. As an approximation algorithm, this method has approximation ratio 4/9, the best known for the maximum planar subgraph problem.
The algorithm for finding the largest triangular cactus is associated with a theorem of Lovász and Plummer that characterizes the number of triangles in this largest cactus.
Lovász and Plummer consider pairs of partitions of the vertices and edges of the given graph into subsets, with the property that every triangle of the graph either has two vertices in a single class of the vertex partition or all three edges in a single class of the edge partition; they call a pair of partitions with this property "valid".
Then the number of triangles in the largest triangular cactus equals the maximum, over pairs of valid partitions formula_0 and formula_1, of
formula_2,
where formula_3 is the number of vertices in the given graph and formula_4 is the number of vertex classes met by edge class formula_5.
Every plane graph formula_6 contains a cactus subgraph formula_7 which includes at least a formula_8 fraction of the triangular faces of formula_6. This result implies a direct analysis of the 4/9-approximation algorithm for the maximum planar subgraph problem without using the above min-max formula.
Rosa's Conjecture.
An important conjecture related to the triangular cactus is Rosa's Conjecture, named after Alexander Rosa, which says that all triangular cacti are graceful or nearly-graceful. More precisely,
"All triangular cacti with t ≡ 0, 1 mod 4 blocks are graceful, and those with t ≡ 2, 3 mod 4 are near graceful."
Algorithms and applications.
Some facility location problems which are NP-hard for general graphs, as well as some other graph problems, may be solved in polynomial time for cacti.
Since cacti are special cases of outerplanar graphs, a number of combinatorial optimization problems on graphs may be solved for them in polynomial time.
Cacti represent electrical circuits that have useful properties. An early application of cacti was associated with the representation of op-amps.
Cacti have also been used in comparative genomics as a way of representing the relationship between different genomes or parts of genomes.
If a cactus is connected, and each of its vertices belongs to at most two blocks, then it is called a Christmas cactus. Every polyhedral graph has a Christmas cactus subgraph that includes all of its vertices, a fact that plays an essential role in a proof that every polyhedral graph has a greedy embedding in the Euclidean plane, an assignment of coordinates to the vertices for which greedy forwarding succeeds in routing messages between all pairs of vertices.
In topological graph theory, the graphs whose cellular embeddings are all planar are exactly the subfamily of the cactus graphs with the additional property that each vertex belongs to at most one cycle. These graphs have two forbidden minors, the diamond graph and the five-vertex friendship graph.
History.
Cacti were first studied under the name of Husimi trees, bestowed on them by Frank Harary and George Eugene Uhlenbeck in honor of previous work on these graphs by Kôdi Husimi. The same Harary–Uhlenbeck paper reserves the name "cactus" for graphs of this type in which every cycle is a triangle, but now allowing cycles of all lengths is standard.
Meanwhile, the name Husimi tree commonly came to refer to graphs in which every block is a complete graph (equivalently, the intersection graphs of the blocks in some other graph). This usage had little to do with the work of Husimi, and the more pertinent term block graph is now used for this family; however, because of this ambiguity this phrase has become less frequently used to refer to cactus graphs.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{P}=\\{V_1, V_2, \\dots, V_k\\}"
},
{
"math_id": 1,
"text": "\\mathcal{Q} = \\{E_1, E_2, \\dots, E_m\\}"
},
{
"math_id": 2,
"text": "\\sum_{i=1}^{m}\\frac{(u_i - 1)}{2} + n - k,"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "u_i"
},
{
"math_id": 5,
"text": "E_i"
},
{
"math_id": 6,
"text": "G"
},
{
"math_id": 7,
"text": "C \\subseteq G"
},
{
"math_id": 8,
"text": "1/6"
}
] | https://en.wikipedia.org/wiki?curid=9944209 |
994556 | Born–Haber cycle | Approach to analyzing reaction energies
The Born–Haber cycle is an approach to analyze reaction energies. It was named after two German scientists, Max Born and Fritz Haber, who developed it in 1919. It was also independently formulated by Kasimir Fajans and published concurrently in the same journal. The cycle is concerned with the formation of an ionic compound from the reaction of a metal (often a Group I or Group II element) with a halogen or other non-metallic element such as oxygen.
Born–Haber cycles are used primarily as a means of calculating lattice energy (or more precisely enthalpy), which cannot otherwise be measured directly. The lattice enthalpy is the enthalpy change involved in the formation of an ionic compound from gaseous ions (an exothermic process), or sometimes defined as the energy to break the ionic compound into gaseous ions (an endothermic process). A Born–Haber cycle applies Hess's law to calculate the lattice enthalpy by comparing the standard enthalpy change of formation of the ionic compound (from the elements) to the enthalpy required to make gaseous ions from the elements.
This lattice calculation is complex. To make gaseous ions from elements it is necessary to atomise the elements (turn each into gaseous atoms) and then to ionise the atoms. If the element is normally a molecule then we first have to consider its bond dissociation enthalpy (see also bond energy). The energy required to remove one or more electrons to make a cation is a sum of successive ionization energies; for example, the energy needed to form Mg2+ is the ionization energy required to remove the first electron from Mg, plus the ionization energy required to remove the second electron from Mg+. Electron affinity is defined as the amount of energy released when an electron is added to a neutral atom or molecule in the gaseous state to form a negative ion.
The Born–Haber cycle applies only to fully ionic solids such as certain alkali halides. Most compounds include covalent and ionic contributions to chemical bonding and to the lattice energy, which is represented by an extended Born–Haber thermodynamic cycle. The extended Born–Haber cycle can be used to estimate the polarity and the atomic charges of polar compounds.
Examples.
Formation of LiF.
The enthalpy of formation of lithium fluoride (LiF) from its elements in their standard states (Li(s) and F2(g)) is modeled in five steps in the diagram:
the atomization (sublimation) enthalpy "V" of solid lithium;
half the bond-dissociation enthalpy "B" of gaseous F2, giving one mole of fluorine atoms;
the first ionization energy of gaseous lithium;
the electron affinity of a fluorine atom, which enters the equation with a negative sign because this energy is released;
the lattice enthalpy formula_2 for the formation of solid LiF from the gaseous ions Li+ and F−.
The sum of the energies for each step of the process must equal the enthalpy of formation of lithium fluoride, formula_0.
formula_1
The net enthalpy of formation and the first four of the five energies can be determined experimentally, but the lattice enthalpy cannot be measured directly. Instead, the lattice enthalpy is calculated by subtracting the other four energies in the Born–Haber cycle from the net enthalpy of formation.
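As a numerical illustration, the lattice enthalpy of LiF can be obtained by rearranging the equation above; the figures below are rounded, textbook-style values in kJ/mol and are assumptions of this sketch rather than data from the article.
# Rounded, textbook-style values in kJ/mol; illustrative assumptions only.
dH_f   = -617.0   # standard enthalpy of formation of LiF(s)
V      =  161.0   # atomisation (sublimation) enthalpy of Li(s)
half_B =   79.5   # half the bond-dissociation enthalpy of F2(g)
IE     =  520.0   # first ionisation energy of Li(g)
EA     =  328.0   # electron affinity of F(g), counted as energy released

# Rearranging  dH_f = V + B/2 + IE - EA + U_L  for the lattice enthalpy:
U_L = dH_f - V - half_B - IE + EA
print(U_L)        # roughly -1050 kJ/mol for forming the LiF lattice from gaseous ions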
A similar calculation applies for any metal other than lithium and/or any non-metal other than fluorine.
The word "cycle" refers to the fact that one can also equate to zero the total enthalpy change for a cyclic process, starting and ending with LiF(s) in the example. This leads to
formula_3
which is equivalent to the previous equation.
Formation of NaBr.
At ordinary temperatures, Na is solid and Br2 is liquid, so the enthalpy of vaporization of liquid bromine is added to the equation:
formula_4
In the above equation, formula_5 is the enthalpy of vaporization of Br2 at the temperature of interest (usually in kJ/mol).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta H_{f}"
},
{
"math_id": 1,
"text": "\\Delta H_{f} = V + \\frac{1}{2}B + \\mathit{IE}_\\ce{M} - \\ce{EA}_\\ce{X} + U_L"
},
{
"math_id": 2,
"text": "U_L"
},
{
"math_id": 3,
"text": " 0 = - \\Delta H_{f} + V + \\frac{1}{2}B + \\mathit{IE}_\\ce{M} - \\mathit{EA}_\\ce{X} + U_L"
},
{
"math_id": 4,
"text": "\\Delta H_{f} = V + \\frac{1}{2}B + \\frac{1}{2}\\Delta_{vap}H + \\mathit{IE}_\\ce{M} - \\ce{EA}_\\ce{X} + U_L"
},
{
"math_id": 5,
"text": "\\Delta_{vap}H"
}
] | https://en.wikipedia.org/wiki?curid=994556 |
9945920 | Robin boundary condition | Class of problems in PDEs
In mathematics, the Robin boundary condition, or third type boundary condition, is a type of boundary condition, named after Victor Gustave Robin (1855–1897). When imposed on an ordinary or a partial differential equation, it is a specification of a linear combination of the values of a function "and" the values of its derivative on the boundary of the domain. Other equivalent names in use are Fourier-type condition and radiation condition.
Definition.
Robin boundary conditions are a weighted combination of Dirichlet boundary conditions and Neumann boundary conditions. This contrasts to mixed boundary conditions, which are boundary conditions of different types specified on different subsets of the boundary. Robin boundary conditions are also called impedance boundary conditions, from their application in electromagnetic problems, or convective boundary conditions, from their application in heat transfer problems (Hahn, 2012).
If Ω is the domain on which the given equation is to be solved and ∂Ω denotes its boundary, the Robin boundary condition is:
formula_0
for some non-zero constants "a" and "b" and a given function "g" defined on ∂Ω. Here, "u" is the unknown solution defined on Ω, and ∂"u"/∂"n" denotes the normal derivative at the boundary. More generally, "a" and "b" are allowed to be (given) functions, rather than constants.
In one dimension, if, for example, Ω = [0,1], the Robin boundary condition becomes the conditions:
formula_1
Notice the change of sign in front of the term involving a derivative: that is because the normal to [0,1] at 0 points in the negative direction, while at 1 it points in the positive direction.
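As an illustrative sketch (not part of the definition), a Robin condition can be imposed in a one-dimensional finite-difference solver for −"u"″ = "f" by using first-order one-sided differences for the boundary derivative; the manufactured solution u(x) = sin(πx) and the NumPy library are assumptions of this example.
import numpy as np

a_c, b_c = 1.0, 1.0                     # the constants a and b of the Robin condition
N = 200
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]

u_exact = np.sin(np.pi * x)             # assumed manufactured solution
f = np.pi ** 2 * np.sin(np.pi * x)      # so that -u'' = f
g0 = a_c * u_exact[0] - b_c * np.pi * np.cos(0.0)      # a*u(0) - b*u'(0)
g1 = a_c * u_exact[-1] + b_c * np.pi * np.cos(np.pi)   # a*u(1) + b*u'(1)

A = np.zeros((N + 1, N + 1))
rhs = np.zeros(N + 1)
for i in range(1, N):                   # interior rows discretise -u'' = f
    A[i, i - 1] = A[i, i + 1] = -1.0 / h ** 2
    A[i, i] = 2.0 / h ** 2
    rhs[i] = f[i]
# Robin rows: a*u0 - b*(u1 - u0)/h = g0  and  a*uN + b*(uN - u_{N-1})/h = g1
A[0, 0], A[0, 1] = a_c + b_c / h, -b_c / h
rhs[0] = g0
A[N, N], A[N, N - 1] = a_c + b_c / h, -b_c / h
rhs[N] = g1

u = np.linalg.solve(A, rhs)
print(np.max(np.abs(u - u_exact)))      # small; first-order in h because of the boundary stencil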
Application.
Robin boundary conditions are commonly used in solving Sturm–Liouville problems which appear in many contexts in science and engineering.
In addition, the Robin boundary condition is a general form of the insulating boundary condition for convection–diffusion equations. Here, the convective and diffusive fluxes at the boundary sum to zero:
formula_2
where "D" is the diffusive constant, "u" is the convective velocity at the boundary and "c" is the concentration. The second term is a result of Fick's law of diffusion.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a u + b \\frac{\\partial u}{\\partial n} =g \\qquad \\text{on } \\partial \\Omega"
},
{
"math_id": 1,
"text": "\\begin{align} a u(0) - bu'(0) &=g(0) \\\\ a u(1) + bu'(1) &=g(1) \\end{align}"
},
{
"math_id": 2,
"text": "u_x(0)\\,c(0) -D \\frac{\\partial c(0)}{\\partial x}=0"
}
] | https://en.wikipedia.org/wiki?curid=9945920 |