Khintchine inequality
Theorem in probability In mathematics, the Khintchine inequality, named after Aleksandr Khinchin and spelled in multiple ways in the Latin alphabet, is a theorem from probability that is also frequently used in analysis. Heuristically, it says that if we pick $N$ complex numbers $x_1,\dots,x_N \in\mathbb{C}$ and add them together, each multiplied by a random sign $\pm 1$, then the expected value of the sum's modulus (the modulus the sum will be closest to on average) will not be too far from $\sqrt{|x_1|^{2}+\cdots + |x_N|^{2}}$.

Statement. Let $\{\varepsilon_n\}_{n=1}^N$ be i.i.d. random variables with $P(\varepsilon_n=\pm1)=\frac12$ for $n=1,\ldots,N$, i.e., a sequence with Rademacher distribution. Let $0<p<\infty$ and let $x_1,\ldots,x_N\in \mathbb{C}$. Then

$$A_p \left( \sum_{n=1}^N |x_n|^2 \right)^{1/2} \leq \left(\operatorname{E} \left|\sum_{n=1}^N \varepsilon_n x_n\right|^p \right)^{1/p} \leq B_p \left(\sum_{n=1}^N |x_n|^2\right)^{1/2}$$

for some constants $A_p,B_p>0$ depending only on $p$ (see Expected value for notation). The sharp values of the constants $A_p,B_p$ were found by Haagerup (Ref. 2; see Ref. 3 for a simpler proof). It is a simple matter to see that $A_p = 1$ when $p \ge 2$, and $B_p = 1$ when $0 < p \le 2$. Haagerup found that

$$A_p = \begin{cases}
2^{1/2-1/p} & 0<p\le p_0, \\
2^{1/2}\left(\Gamma((p+1)/2)/\sqrt{\pi}\right)^{1/p} & p_0 < p < 2, \\
1 & 2 \le p < \infty,
\end{cases}
\qquad
B_p = \begin{cases}
1 & 0 < p \le 2, \\
2^{1/2}\left(\Gamma((p+1)/2)/\sqrt{\pi}\right)^{1/p} & 2 < p < \infty,
\end{cases}$$

where $p_0\approx 1.847$ and $\Gamma$ is the Gamma function. One may note in particular that $B_p$ matches exactly the moments of a normal distribution.

Uses in analysis. The uses of this inequality are not limited to applications in probability theory. One example of its use in analysis is the following: if we let $T$ be a linear operator between two $L^p$ spaces $L^p(X,\mu)$ and $L^p(Y,\nu)$, $1 < p < \infty$, with bounded norm $\|T\|<\infty$, then one can use Khintchine's inequality to show that

$$\left\|\left(\sum_{n=1}^N |Tf_n|^2 \right)^{1/2} \right\|_{L^p(Y,\nu)}\leq C_p \left\|\left(\sum_{n=1}^N |f_n|^2\right)^{1/2} \right\|_{L^p(X,\mu)}$$

for some constant $C_p>0$ depending only on $p$ and $\|T\|$.

Generalizations. For the case of Rademacher random variables, Pawel Hitczenko showed that the sharpest version is:

$$A \left(\sqrt{p}\left(\sum_{n=b+1}^N x_n^2\right)^{1/2} + \sum_{n=1}^b x_n\right)
\leq \left(\operatorname{E} \left|\sum_{n=1}^N \varepsilon_n x_n\right|^p \right)^{1/p}
\leq B \left(\sqrt{p}\left(\sum_{n=b+1}^N x_n^2\right)^{1/2} + \sum_{n=1}^b x_n\right)$$

where $b = \lfloor p\rfloor$, and $A$ and $B$ are universal constants independent of $p$. Here we assume that the $x_i$ are non-negative and non-increasing.

References.
https://en.wikipedia.org/wiki?curid=10002357
Brushed DC electric motor
Internally commutated electric motor A brushed DC electric motor is an internally commutated electric motor designed to be run from a direct current power source and utilizing an electric brush for contact. Brushed motors were the first commercially important application of electric power to the driving of mechanical loads, and DC distribution systems were used for more than 100 years to operate motors in commercial and industrial buildings. Brushed DC motors can be varied in speed by changing the operating voltage or the strength of the magnetic field. Depending on the connections of the field to the power supply, the speed and torque characteristics of a brushed motor can be altered to provide steady speed or speed inversely proportional to the mechanical load. Brushed motors continue to be used for electrical propulsion, cranes, paper machines and steel rolling mills. Since the brushes wear down and require replacement, brushless DC motors using power electronic devices have displaced brushed motors from many applications. Simple two-pole DC motor. The following graphics illustrate a simple, two-pole, brushed DC motor. When a current passes through the coil wound around a soft iron core situated inside an external magnetic field, the side of the coil nearest the positive pole is acted upon by an upward force, while the other side is acted upon by a downward force. According to Fleming's left-hand rule, these forces cause a turning effect on the coil, making it rotate. To make the motor rotate in a constant direction, "direct current" commutators make the current reverse in direction every half-cycle (in a two-pole motor), thus causing the motor to continue to rotate in the same direction. A problem with the motor shown above is that when the plane of the coil is parallel to the magnetic field (i.e. when the rotor poles are 90 degrees from the stator poles) the torque is zero. 
In the pictures above, this occurs when the core of the coil is horizontal (the position it is just about to reach in the second-to-last picture on the right). The motor would not be able to start in this position. However, once it was started, it would continue to rotate through this position by momentum. There is a second problem with this simple pole design. At the zero-torque position, both commutator brushes are touching (bridging) both commutator plates, resulting in a short circuit. The power leads are shorted together through the commutator plates, and the coil is also short-circuited through both brushes (the coil is shorted twice, once through each brush independently). Note that this problem is independent of the non-starting problem above; even if there were a high current in the coil at this position, there would still be zero torque. The problem here is that this short uselessly consumes power without producing any motion (nor even any coil current). In a low-current battery-powered demonstration this short-circuiting is generally not considered harmful. However, if a two-pole motor were designed to do actual work with several hundred watts of power output, this shorting could result in severe commutator overheating, brush damage, and potential welding of the brushes (if they were metallic) to the commutator. Carbon brushes, which are often used, would not weld. In any case, a short like this is very wasteful, drains batteries rapidly and, at a minimum, requires power supply components to be designed to much higher standards than would be needed just to run the motor without the shorting. One simple solution is to put a gap between the commutator plates which is wider than the ends of the brushes. This increases the zero-torque range of angular positions but eliminates the shorting problem; if the motor is started spinning by an outside force it will continue spinning. 
With this modification, it can also be effectively turned off simply by stalling (stopping) it in a position in the zero-torque (i.e. commutator non-contacting) angle range. This design is sometimes seen in homebuilt hobby motors, e.g. for science fairs, and such designs can be found in some published science project books. A clear downside of this simple solution is that the motor now coasts through a substantial arc of rotation twice per revolution and the torque is pulsed. This may work for electric fans or to keep a flywheel spinning, but there are many applications, even where starting and stopping are not necessary, for which it is completely inadequate, such as driving the capstan of a tape transport, or any similar instance where frequent and rapid speeding up and slowing down is required. Another disadvantage is that, since the coils have a measure of self-inductance, current flowing in them cannot suddenly stop. The current attempts to jump the opening gap between the commutator segment and the brush, causing arcing. Even for fans and flywheels, the clear weaknesses remaining in this design (especially that it is not self-starting from all positions) make it impractical for working use, especially considering the better alternatives that exist. Unlike the demonstration motor above, DC motors are commonly designed with more than two poles, are able to start from any position, and do not have any position where current can flow without producing electromotive power by passing through some coil. Many common small brushed DC motors used in toys and small consumer appliances, the simplest mass-produced DC motors to be found, have three-pole armatures. The brushes can now bridge two adjacent commutator segments without causing a short circuit. These three-pole armatures also have the advantage that current from the brushes either flows through two coils in series or through just one coil. 
Starting with the current in an individual coil at half its nominal value (as a result of flowing through two coils in series), it rises to its nominal value and then falls to half this value. The sequence then continues with current in the reverse direction. This results in a closer step-wise approximation to the ideal sinusoidal coil current, producing a more even torque than the two-pole motor, where the current in each coil is closer to a square wave. Since current changes are half those of a comparable two-pole motor, arcing at the brushes is consequently less. If the shaft of a DC motor is turned by an external force, the motor will act like a generator and produce an electromotive force (EMF). During normal operation, the spinning of the motor produces a voltage, known as the counter-EMF (CEMF) or back EMF, because it opposes the applied voltage on the motor. The back EMF is the reason that the motor when free-running does not appear to have the same low electrical resistance as the wire contained in its winding. This is the same EMF that is produced when the motor is used as a generator (for example when an electrical load, such as a light bulb, is placed across the terminals of the motor and the motor shaft is driven with an external torque). Therefore, the total voltage drop across a motor consists of the CEMF voltage drop, and the parasitic voltage drop resulting from the internal resistance of the armature's windings. The current through a motor is given by the following equation:

$$I = \frac{V_\text{applied} - V_\text{cemf}}{R_\text{armature}}$$

The mechanical power produced by the motor is given by:

$$P = I \cdot V_\text{cemf}$$

As an unloaded DC motor spins, it generates a backwards-flowing electromotive force that resists the current being applied to the motor. The current through the motor drops as the rotational speed increases, and a free-spinning motor has very little current. It is only when a load is applied to the motor that slows the rotor that the current draw through the motor increases. The commutating plane. 
In a dynamo, a plane through the centers of the contact areas where a pair of brushes touch the commutator and parallel to the axis of rotation of the armature is referred to as the "commutating plane". In this diagram the commutating plane is shown for just one of the brushes, assuming the other brush made contact on the other side of the commutator with radial symmetry, 180 degrees from the brush shown. Compensation for stator field distortion. In a real dynamo, the field is never perfectly uniform. Instead, as the rotor spins it induces field effects which drag and distort the magnetic lines of the outer non-rotating stator. The faster the rotor spins, the greater the degree of field distortion. Because the dynamo operates most efficiently with the rotor field at right angles to the stator field, it is necessary to either retard or advance the brush position to put the rotor's field into the correct position to be at a right angle to the distorted field. These field effects are reversed when the direction of spin is reversed. It is therefore difficult to build an efficient reversible commutated dynamo, since for highest field strength it is necessary to move the brushes to the opposite side of the normal neutral plane. The effect can be considered to be somewhat similar to timing advance in an internal combustion engine. Generally a dynamo that has been designed to run at a certain fixed speed will have its brushes permanently fixed to align the field for highest efficiency at that speed. DC machines with wound stators compensate the distortion with commutating field windings and compensation windings. Motor design variations. DC motors. Brushed DC motors are constructed with wound rotors and either wound or permanent-magnet stators. Wound stators. The field coils have traditionally existed in four basic formats: separately excited (sepex), series-wound, shunt-wound, and a combination of the latter two, compound-wound. 
In a series wound motor, the field coils are connected electrically in series with the armature coils (via the brushes). In a shunt wound motor, the field coils are connected in parallel, or "shunted", to the armature coils. In a separately excited (sepex) motor the field coils are supplied from an independent source, such as a motor-generator, and the field current is unaffected by changes in the armature current. The sepex system was sometimes used in DC traction motors to facilitate control of wheelslip. Permanent-magnet motors. Permanent-magnet types have some performance advantages over direct-current excited synchronous types, and have become predominant in fractional horsepower applications. They are smaller, lighter, more efficient and reliable than other singly-fed electric machines. Originally all large industrial DC motors used wound field or rotor magnets. Permanent magnets have traditionally only been useful on small motors because it was difficult to find a material capable of retaining a high-strength field. Only recently have advances in materials technology allowed the creation of high-intensity permanent magnets, such as neodymium magnets, allowing the development of compact, high-power motors without the extra volume of field coils and excitation means. But as these high-performance permanent magnets are increasingly applied in electric motor and generator systems, other problems become apparent (see Permanent magnet synchronous generator). Axial field motors. Traditionally, the field has been applied radially, in and away from the rotation axis of the motor. However some designs have the field flowing along the axis of the motor, with the rotor cutting the field lines as it rotates. This allows for much stronger magnetic fields, particularly if Halbach arrays are employed. This, in turn, gives power to the motor at lower speeds. 
However, the focused flux density cannot rise above the limited residual flux density of the permanent magnet despite high coercivity and, like all electric machines, the flux density of magnetic core saturation is the design constraint. Speed control. Generally, the rotational speed of a DC motor is proportional to the EMF in its coil (the voltage applied to it minus the voltage lost on its resistance), and the torque is proportional to the current. Speed control can be achieved by variable battery tappings, variable supply voltage, resistors or electronic controls. The direction of a wound field DC motor can be changed by reversing either the field or armature connections, but not both. This is commonly done with a special set of contactors (direction contactors). The effective voltage can be varied by inserting a series resistor or by an electronically controlled switching device made of thyristors, transistors, or, formerly, mercury arc rectifiers. Series-parallel. Series-parallel control was the standard method of controlling railway traction motors before the advent of power electronics. An electric locomotive or train would typically have four motors which could be grouped in three different ways: all four in series, two parallel pairs of two motors in series, or all four in parallel. This provided three running speeds with minimal resistance losses. For starting and acceleration, additional control was provided by resistances. This system has been superseded by electronic control systems. Field weakening. The speed of a DC motor can be increased by field weakening. Reducing the field strength is done by inserting resistance in series with a shunt field, or inserting resistances around a series-connected field winding, to reduce current in the field winding. When the field is weakened, the back-emf reduces, so a larger current flows through the armature winding and this increases the speed. Field weakening is not used on its own but in combination with other methods, such as series-parallel control. 
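The proportionalities described above (speed tracking back-EMF over flux, so that weakening the field raises steady-state speed) can be sketched numerically. This is an illustrative model with made-up motor constants, not values from the article:

```python
# Illustrative sketch of DC-motor speed behavior (hypothetical constants):
# steady-state speed n = (V - I*R) / (k_b * flux), so reducing the field
# flux ("field weakening") raises the speed for the same supply voltage.

def motor_speed(v_supply, i_armature, r_armature, k_b, flux):
    """Steady-state speed from back-EMF: n = (V - I*R) / (k_b * flux)."""
    back_emf = v_supply - i_armature * r_armature
    return back_emf / (k_b * flux)

V, I, R, KB = 100.0, 10.0, 0.5, 0.05        # made-up ratings
full_field = motor_speed(V, I, R, KB, flux=1.0)
weak_field = motor_speed(V, I, R, KB, flux=0.8)  # field weakened by 20%

assert weak_field > full_field  # weakening the field increases speed
print(full_field, weak_field)
```

With these numbers the speed rises from 1900 to 2375 (arbitrary units), matching the article's statement that a weakened field increases speed; in practice the larger armature current this causes is why field weakening is combined with other control methods.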
Chopper. In a circuit known as a chopper, the average voltage applied to the motor is varied by switching the supply voltage very rapidly. As the "on" to "off" ratio is varied to alter the average applied voltage, the speed of the motor varies. The percentage "on" time multiplied by the supply voltage gives the average voltage applied to the motor. Therefore, with a 100 V supply and a 25% "on" time, the average voltage at the motor will be 25 V. During the "off" time, the armature's inductance causes the current to continue through a diode called a "flyback diode", in parallel with the motor. At this point in the cycle, the supply current will be zero, and therefore the average motor current will always be higher than the supply current unless the percentage "on" time is 100%. At 100% "on" time, the supply and motor current are equal. The rapid switching wastes less energy than series resistors. This method is also called pulse-width modulation (PWM) and is often controlled by a microprocessor. An output filter is sometimes installed to smooth the average voltage applied to the motor and reduce motor noise. Since the series-wound DC motor develops its highest torque at low speed, it is often used in traction applications such as electric locomotives, and trams. Another application is starter motors for petrol and small diesel engines. Series motors must never be used in applications where the drive can fail (such as belt drives). As the motor accelerates, the armature (and hence field) current reduces. The reduction in field causes the motor to speed up, and in extreme cases the motor can even destroy itself, although this is much less of a problem in fan-cooled motors (with self-driven fans). This can be a problem with railway motors in the event of a loss of adhesion since, unless quickly brought under control, the motors can reach speeds far higher than they would do under normal circumstances. 
This can not only cause problems for the motors themselves and the gears, but due to the differential speed between the rails and the wheels it can also cause serious damage to the rails and wheel treads as they heat and cool rapidly. Field weakening is used in some electronic controls to increase the top speed of an electric vehicle. The simplest form uses a contactor and field-weakening resistor; the electronic control monitors the motor current and switches the field weakening resistor into circuit when the motor current reduces below a preset value (this will be when the motor is at its full design speed). Once the resistor is in circuit, the motor will increase speed above its normal speed at its rated voltage. When motor current increases, the control will disconnect the resistor and low speed torque is made available. Ward Leonard. A Ward Leonard control is usually used for controlling a shunt or compound wound DC motor, and developed as a method of providing a speed-controlled motor from an AC supply, though it is not without its advantages in DC schemes. The AC supply is used to drive an AC motor, usually an induction motor that drives a DC generator or dynamo. The DC output from the armature is directly connected to the armature of the DC motor (sometimes but not always of identical construction). The shunt field windings of both DC machines are independently excited through variable resistors. Extremely good speed control from standstill to full speed, and consistent torque, can be obtained by varying the generator and/or motor field current. This method of control was the "de facto" method from its development until it was superseded by solid state thyristor systems. It found service in almost any environment where good speed control was required, from passenger lifts through to large mine pit head winding gear and even industrial process machinery and electric cranes. 
Its principal disadvantage was that three machines were required to implement a scheme (five in very large installations, as the DC machines were often duplicated and controlled by a tandem variable resistor). In many applications, the motor-generator set was often left permanently running, to avoid the delays that would otherwise be caused by starting it up as required. Although electronic (thyristor) controllers have replaced most small to medium Ward-Leonard systems, some very large ones (thousands of horsepower) remain in service. The field currents are much lower than the armature currents, allowing a moderate-sized thyristor unit to control a much larger motor than it could control directly. For example, in one installation, a 300 amp thyristor unit controls the field of the generator. The generator output current is in excess of 15,000 amperes, which would be prohibitively expensive (and inefficient) to control directly with thyristors. Torque and speed of a DC motor. A DC motor's speed and torque characteristics vary according to three different magnetization sources, separately excited field, self-excited field or permanent field, which are used selectively to control the motor over the mechanical load's range. Self-excited field motors can be series, shunt, or compound wound, connected to the armature. Basic properties. Counter EMF equation. The DC motor's counter emf is proportional to the product of the machine's total flux strength and armature speed: $E_b = k_b \Phi n$. Voltage balance equation. The DC motor's input voltage must overcome the counter emf as well as the voltage drop created by the armature current across the motor resistance, that is, the combined resistance across the brushes, armature winding and series field winding, if any: $V_m = E_b + R_m I_a$. Torque equation. The DC motor's torque is proportional to the product of the armature current and the machine's total flux strength:

$$T = \frac{1}{2\pi} k_b I_a \Phi = k_T I_a \Phi$$

where $k_T = \frac{k_b}{2\pi}$. Speed equation. 
Since $n = \frac{E_b}{k_b \Phi}$ and $V_m = E_b + R_m I_a$, we have

$$n = \frac{V_m - R_m I_a}{k_b \Phi} = k_n \frac{V_m - R_m I_a}{\Phi}$$

where $k_n = \frac{1}{k_b}$. Torque and speed characteristics. Shunt wound motor. With the shunt wound motor's high-resistance field winding connected in parallel with the armature, $V_m$, $R_m$ and $\Phi$ are constant such that the no load to full load speed regulation is seldom more than 5%. Speed control is achieved three ways: by varying the voltage applied to the armature, by varying the resistance in series with the armature, or by varying the field flux. Series wound motor. The series motor responds to increased load by slowing down; the current increases and the torque rises in proportion to the square of the current, since the same current flows in both the armature and the field windings. If the motor is stalled, the current is limited only by the total resistance of the windings and the torque can be very high, but there is a danger of the windings becoming overheated. Series wound motors were widely used as traction motors in rail transport of every kind, but are being phased out in favour of power inverter-fed AC induction motors. The counter EMF aids the armature resistance to limit the current through the armature. When power is first applied to a motor, the armature does not rotate, the counter EMF is zero and the only factor limiting the armature current is the armature resistance. As the prospective current through the armature is very large, the need arises for an additional resistance in series with the armature to limit the current until the motor rotation can build up the counter EMF. As the motor rotation builds up, the resistance is gradually cut out. The series wound DC motor's most notable characteristic is that its speed is almost entirely dependent on the torque required to drive the load. This suits large inertial loads, as the motor accelerates from maximum torque, the torque reducing gradually as speed increases. As the series motor's speed can be dangerously high, series motors are often geared or direct-connected to the load. Permanent magnet motor. 
A permanent magnet DC motor is characterized by a linear relationship between stall torque (the maximum torque, produced with the shaft at standstill) and no-load speed (the maximum output speed, with no applied shaft torque). Between these two endpoints the output power varies quadratically with speed, peaking at half the no-load speed. Protection. To extend a DC motor's service life, protective devices and motor controllers are used to protect it from mechanical damage, excessive moisture, high dielectric stress and high temperature or thermal overloading. These protective devices sense motor fault conditions and either activate an alarm to notify the operator or automatically de-energize the motor when a faulty condition occurs. For overloaded conditions, motors are protected with thermal overload relays. Bi-metal thermal overload protectors are embedded in the motor's windings and made from two dissimilar metals. They are designed such that the bimetallic strips will bend in opposite directions when a temperature set point is reached to open the control circuit and de-energize the motor. Heaters are external thermal overload protectors connected in series with the motor's windings and mounted in the motor contactor. Solder pot heaters melt in an overload condition, which causes the motor control circuit to de-energize the motor. Bimetallic heaters function the same way as embedded bimetallic protectors. Fuses and circuit breakers are overcurrent or short circuit protectors. Ground fault relays also provide overcurrent protection. They monitor the electric current between the motor's windings and earth system ground. In motor-generators, reverse current relays prevent the battery from discharging and motorizing the generator. Since DC motor field loss can cause a hazardous runaway or overspeed condition, loss of field relays are connected in parallel with the motor's field to sense field current. When the field current decreases below a set point, the relay will de-energize the motor's armature. 
A locked rotor condition prevents a motor from accelerating after its starting sequence has been initiated. Distance relays protect motors from locked-rotor faults. Undervoltage motor protection is typically incorporated into motor controllers or starters. In addition, motors can be protected from overvoltages or surges with isolation transformers, power conditioning equipment, MOVs, arresters and harmonic filters. Environmental conditions, such as dust, explosive vapors, water, and high ambient temperatures, can adversely affect the operation of a DC motor. To protect a motor from these environmental conditions, the National Electrical Manufacturers Association (NEMA) and the International Electrotechnical Commission (IEC) have standardized motor enclosure designs based upon the environmental protection they provide from contaminants. Modern software can also be used in the design stage, such as Motor-CAD, to help increase the thermal efficiency of a motor. DC motor starters. The counter-emf aids the armature resistance to limit the current through the armature. When power is first applied to a motor, the armature does not rotate. At that instant the counter-emf is zero and the only factor limiting the armature current is the armature resistance and inductance. Usually the armature resistance of a motor is less than 1 Ω; therefore the current through the armature would be very large when the power is applied. This current can make an excessive voltage drop affecting other equipment in the circuit and even trip overload protective devices. Therefore, the need arises for an additional resistance in series with the armature to limit the current until the motor rotation can build up the counter-emf. As the motor rotation builds up, the resistance is gradually cut out. Manual-starting rheostat. When electrical and DC motor technology was first developed, much of the equipment was constantly tended by an operator trained in the management of motor systems. 
The very first motor management systems were almost completely manual, with an attendant starting and stopping the motors, cleaning the equipment, repairing any mechanical failures, and so forth. The first DC motor-starters were also completely manual. Normally it took the operator about ten seconds to slowly advance the rheostat across the contacts to gradually increase input power up to operating speed. There were two different classes of these rheostats, one used for starting only, and one for starting and speed regulation. The starting rheostat was less expensive, but had smaller resistance elements that would burn out if required to run a motor at a constant reduced speed. This starter includes a no-voltage magnetic holding feature, which causes the rheostat to spring to the off position if power is lost, so that the motor does not later attempt to restart in the full-voltage position. It also has overcurrent protection that trips the lever to the off position if excessive current over a set amount is detected. Three-point starter. The incoming power wires are called L1 and L2. As the name implies there are only three connections to the starter, one to incoming power, one to the armature, and one to the field. The connections to the armature are called A1 and A2. The ends of the field (excitation) coil are called F1 and F2. In order to control the speed, a field rheostat is connected in series with the shunt field. One side of the line is connected to the arm of the starter. The arm is spring-loaded so it will return to the "Off" position when not held at any other position. Four-point starter. The four-point starter eliminates the drawback of the three-point starter. In addition to the same three points that were in use with the three-point starter, the other side of the line, L1, is the fourth point brought to the starter when the arm is moved from the "Off" position. The coil of the holding magnet is connected across the line. 
The holding magnet and starting resistors function exactly as in the three-point starter. References.
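The article's current equation, together with the starter discussion (counter-EMF is zero at standstill, so only the small armature resistance limits current), can be sketched numerically. All constants below are hypothetical, chosen only for illustration:

```python
# Illustrative sketch (hypothetical constants) of why a DC motor needs a
# starting resistance: at standstill the counter-EMF is zero, so current
# is limited only by the small armature resistance, per I = (V - Vcemf)/R.

def armature_current(v_applied, v_cemf, r_total):
    """Motor current equation: I = (V_applied - V_cemf) / R."""
    return (v_applied - v_cemf) / r_total

V = 120.0        # supply voltage, volts
R_ARM = 0.5      # armature resistance, under 1 ohm as is typical
R_START = 5.0    # added starting resistance, cut out as the motor spins up

inrush_unprotected = armature_current(V, 0.0, R_ARM)             # standstill
inrush_with_starter = armature_current(V, 0.0, R_ARM + R_START)  # standstill
running = armature_current(V, 110.0, R_ARM)   # counter-EMF built up at speed

assert inrush_unprotected > 10 * running  # inrush dwarfs running current
print(inrush_unprotected, inrush_with_starter, running)
```

Without the starter the standstill current is 240 A against a 20 A running current, which is the "prospective current" the text says the series resistance must limit until the counter-EMF builds up.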
https://en.wikipedia.org/wiki?curid=10004115
10004409
Shear strength (soil)
Magnitude of the shear stress that a soil can sustain Shear strength is a term used in soil mechanics to describe the magnitude of the shear stress that a soil can sustain. The shear resistance of soil is a result of friction and interlocking of particles, and possibly cementation or bonding of particle contacts. Due to interlocking, particulate material may expand or contract in volume as it is subject to shear strains. If soil expands its volume, the density of particles will decrease and the strength will decrease; in this case, the peak strength would be followed by a reduction of shear stress. The stress-strain relationship levels off when the material stops expanding or contracting, and when interparticle bonds are broken. The theoretical state at which the shear stress and density remain constant while the shear strain increases may be called the critical state, steady state, or residual strength. The volume change behavior and interparticle friction depend on the density of the particles, the intergranular contact forces, and to a somewhat lesser extent, other factors such as the rate of shearing and the direction of the shear stress. The average normal intergranular contact force per unit area is called the effective stress. If water is not allowed to flow in or out of the soil, the stress path is called an "undrained stress path". During undrained shear, if the particles are surrounded by a nearly incompressible fluid such as water, then the density of the particles cannot change without drainage, but the water pressure and effective stress will change. On the other hand, if the fluids are allowed to freely drain out of the pores, then the pore pressures will remain constant and the test path is called a "drained stress path". The soil is free to dilate or contract during shear if the soil is drained. In reality, soil is partially drained, somewhere between the perfectly undrained and drained idealized conditions. 
The shear strength of soil depends on the effective stress, the drainage conditions, the density of the particles, the rate of strain, and the direction of the strain. For undrained, constant volume shearing, the Tresca theory may be used to predict the shear strength, but for drained conditions, the Mohr–Coulomb theory may be used. Two important theories of soil shear are the critical state theory and the steady state theory. There are key differences between the critical state condition and the steady state condition and the resulting theory corresponding to each of these conditions. Factors controlling shear strength of soils. The stress-strain relationship of soils, and therefore the shearing strength, is affected by a number of factors. Undrained strength. This term describes a type of shear strength in soil mechanics as distinct from drained strength. Conceptually, there is no such thing as "the" undrained strength of a soil, since it depends on a number of factors. Undrained strength is typically defined by Tresca theory, based on Mohr's circle, as "σ1 - σ3 = 2 Su", where "σ1" is the major principal stress, "σ3" is the minor principal stress, and formula_0 is the shear strength "(σ1 - σ3)/2"; hence formula_0 = "Su" (or sometimes "cu"), the undrained strength. It is commonly adopted in limit equilibrium analyses where the rate of loading is very much greater than the rate at which pore water pressure, generated by the action of shearing the soil, dissipates. Examples include the rapid loading of sands during an earthquake and the failure of a clay slope during heavy rain; the undrained condition applies to most failures that occur during construction. As an implication of the undrained condition, no elastic volumetric strains occur, and thus Poisson's ratio is assumed to remain 0.5 throughout shearing. The Tresca soil model also assumes that no plastic volumetric strains occur. This is of significance in more advanced analyses such as finite element analysis.
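The Tresca definition above is a one-liner; a sketch in Python with hypothetical principal stresses (the numbers are assumptions for illustration, not measured data):

```python
def undrained_strength(sigma_1, sigma_3):
    """Tresca: S_u = (sigma_1 - sigma_3) / 2, the radius of Mohr's circle."""
    return (sigma_1 - sigma_3) / 2.0

# Hypothetical principal stresses at failure, in kPa:
s_u = undrained_strength(sigma_1=180.0, sigma_3=100.0)  # 40.0 kPa
```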
In these advanced analysis methods, soil models other than Tresca may be used to model the undrained condition, including Mohr-Coulomb and critical state soil models such as the modified Cam-clay model, provided Poisson's ratio is maintained at 0.5. One relationship used extensively by practising engineers is the empirical observation that the ratio of the undrained shear strength c to the original consolidation stress p' is approximately constant for a given overconsolidation ratio (OCR). This relationship was first formalized in work that also showed that the stress-strain characteristics of remolded clays could be normalized with respect to the original consolidation stress. The constant c/p relationship can also be derived from theory for both critical-state and steady-state soil mechanics. This fundamental normalization property of the stress-strain curves is found in many clays, and was refined into the empirical SHANSEP (stress history and normalized soil engineering properties) method. Drained shear strength. The drained shear strength is the shear strength of the soil when pore fluid pressures, generated during the course of shearing the soil, are able to dissipate during shearing. It also applies where no pore water exists in the soil (the soil is dry) and hence pore fluid pressures are negligible. It is commonly approximated using the Mohr-Coulomb equation. (Karl von Terzaghi called it "Coulomb's equation" in 1942 and combined it with the principle of effective stress.) In terms of effective stresses, the shear strength is often approximated by: formula_0 = "σ' tan(φ') + c'", where "σ' = (σ - u)" is defined as the effective stress, "σ" is the total stress applied normal to the shear plane, and "u" is the pore water pressure acting on the same plane. "φ'" is the effective stress friction angle, or the 'angle of internal friction' after Coulomb friction. The coefficient of friction formula_1 is equal to tan(φ').
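The Mohr-Coulomb approximation formula_0 = σ' tan(φ') + c' translates directly to code; the stress, pore pressure, and friction-angle values below are assumptions for illustration, not data from the article:

```python
import math

def drained_strength(sigma_total, pore_pressure, phi_prime_deg, c_prime=0.0):
    """Mohr-Coulomb: tau = (sigma - u) * tan(phi') + c'."""
    sigma_eff = sigma_total - pore_pressure  # effective stress sigma' = sigma - u
    return sigma_eff * math.tan(math.radians(phi_prime_deg)) + c_prime

# sigma = 200 kPa total stress, u = 50 kPa pore pressure, phi' = 30 degrees:
tau = drained_strength(200.0, 50.0, 30.0)  # about 86.6 kPa
```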
Different values of friction angle can be defined, including the peak friction angle, φ'p, the critical state friction angle, φ'cv, or the residual friction angle, φ'r. c' is called the cohesion; however, it usually arises as a consequence of forcing a straight line to fit through measured values of (τ,σ') even though the data actually fall on a curve. The intercept of the straight line on the shear stress axis is called the cohesion. It is well known that the resulting intercept depends on the range of stresses considered: it is not a fundamental soil property. The curvature (nonlinearity) of the failure envelope occurs because the dilatancy of closely packed soil particles depends on confining pressure. Critical state theory. A more advanced understanding of the behaviour of soil undergoing shearing led to the development of the critical state theory of soil mechanics. In critical state soil mechanics, a distinct shear strength is identified where the soil undergoing shear does so at a constant volume, also called the 'critical state'. Thus there are three commonly identified shear strengths for a soil undergoing shear: the peak strength, the constant volume (critical state) strength, and the residual strength. The peak strength may occur before or at the critical state, depending on the initial state of the soil particles undergoing shear. The constant volume (or critical state) shear strength is said to be intrinsic to the soil, and independent of the initial density or packing arrangement of the soil grains. In this state the grains being sheared are said to be 'tumbling' over one another, with no significant granular interlock or sliding plane development affecting the resistance to shearing. At this point, no inherited fabric or bonding of the soil grains affects the soil strength. The residual strength occurs for some soils where the particles that make up the soil become aligned during shearing (forming a slickenside), resulting in reduced resistance to continued shearing (further strain softening).
This is particularly true for most clays that comprise plate-like minerals, but is also observed in some granular soils with more elongated grains. Clays that do not have plate-like minerals (like allophanic clays) do not tend to exhibit residual strengths. Use in practice: if one is to adopt critical state theory and take c' = 0, formula_0p may be used, provided the level of anticipated strains is taken into account, and the effects of potential rupture or strain softening to critical state strengths are considered. For large strain deformation, the potential to form a slickensided surface with a φ'r should be considered (such as in pile driving). The critical state occurs at the quasi-static strain rate; the theory does not allow for differences in shear strength based on different strain rates. Also, at the critical state there is no particle alignment or specific soil structure. Almost as soon as it was first introduced, the critical state concept was subjected to much criticism, chiefly for its inability to match readily available test data from testing a wide variety of soils. This is primarily due to the theory's inability to account for particle structure. A major consequence of this is its inability to model the post-peak strain softening commonly observed in contractive soils that have anisotropic grain shapes/properties. Further, an assumption commonly made to make the model mathematically tractable is that shear stress cannot cause volumetric strain nor volumetric stress cause shear strain. Since this is not the case in reality, it is an additional cause of the poor matches to readily available empirical test data. Additionally, critical state elasto-plastic models assume that elastic strains drive volumetric changes. Since this too is not the case in real soils, this assumption results in poor fits to volume and pore pressure change data. Steady state (dynamical systems based soil shear). A refinement of the critical state concept is the steady state concept.
The steady state strength is defined as the shear strength of the soil when it is at the steady state condition. The steady state condition is defined as "that state in which the mass is continuously deforming at constant volume, constant normal effective stress, constant shear stress, and constant velocity." Steve J. Poulos, then an Associate Professor in the Soil Mechanics Department of Harvard University, built on a hypothesis that Arthur Casagrande was formulating towards the end of his career. Steady state based soil mechanics is sometimes called "Harvard soil mechanics". The steady state condition is not the same as the "critical state" condition. The steady state occurs only after all particle breakage, if any, is complete and all the particles are oriented in a statistically steady state condition, so that the shear stress needed to continue deformation at a constant velocity of deformation does not change. It applies to both the drained and the undrained case. The steady state has a slightly different value depending on the strain rate at which it is measured. Thus the steady state shear strength at the quasi-static strain rate (the strain rate at which the critical state is defined to occur) would seem to correspond to the critical state shear strength. However, there is an additional difference between the two states. This is that at the steady state condition the grains position themselves in the steady state structure, whereas no such structure occurs for the critical state. In the case of shearing to large strains for soils with elongated particles, this steady state structure is one where the grains are oriented (perhaps even aligned) in the direction of shear. In the case where the particles are strongly aligned in the direction of shear, the steady state corresponds to the "residual condition."
Three common misconceptions regarding the steady state are that a) it is the same as the critical state (it is not), b) that it applies only to the undrained case (it applies to all forms of drainage), and c) that it does not apply to sands (it applies to any granular material). A primer on the steady state theory can be found in a report by Poulos. Its use in earthquake engineering is described in detail in another publication by Poulos. The difference between the steady state and the critical state is not merely one of semantics, as is sometimes thought, and it is incorrect to use the two terms/concepts interchangeably. The additional requirements of the strict definition of the steady state over and above the critical state, viz. a constant deformation velocity and a statistically constant structure (the steady state structure), place the steady state condition within the framework of dynamical systems theory. This strict definition of the steady state was used to describe soil shear as a dynamical system. Dynamical systems are ubiquitous in nature (the Great Red Spot on Jupiter is one example) and mathematicians have extensively studied such systems. The underlying basis of the soil shear dynamical system is simple friction. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\tau" }, { "math_id": 1, "text": "\\mu" } ]
https://en.wikipedia.org/wiki?curid=10004409
1000441
Artificial chemistry
An artificial chemistry is a chemical-like system that usually consists of objects, called molecules, that interact according to rules resembling chemical reaction rules. Artificial chemistries are created and studied in order to understand fundamental properties of chemical systems, including prebiotic evolution, as well as for developing chemical computing systems. Artificial chemistry is a field within computer science wherein chemical reactions—often biochemical ones—are computer-simulated, yielding insights on evolution, self-assembly, and other biochemical phenomena. The field does not use actual chemicals, and should not be confused with either synthetic chemistry or computational chemistry. Rather, bits of information are used to represent the starting molecules, and the end products are examined along with the processes that led to them. The field originated in artificial life but has been shown to be a versatile method with applications in many fields such as chemistry, economics, sociology and linguistics. Formal definition. An artificial chemistry is defined in general as a triple (S,R,A), where S is the set of possible molecules, R the set of reaction rules, and A the algorithm describing how the rules are applied. In some cases it is sufficient to define it as a tuple (S,I). History of artificial chemistries. Artificial chemistries emerged as a sub-field of artificial life, in particular from strong artificial life. The idea behind this field was that if one wanted to build something alive, it had to be done by a combination of non-living entities. For instance, a cell is itself alive, and yet is a combination of non-living molecules. Artificial chemistry enlists, among others, researchers that believe in an extreme bottom-up approach to artificial life. In artificial life, bits of information were used to represent bacteria or members of a species, each of which moved, multiplied, or died in computer simulations. In artificial chemistry bits of information are used to represent starting molecules capable of reacting with one another.
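A toy instance of the triple (S,R,A) can make the definition concrete. Here S = {a, b, c}, R holds two invented reaction rules, and A is a stochastic collision algorithm; everything below is a made-up illustration, not any published model:

```python
import random

# R: reaction rules over the (invented) molecule set S = {"a", "b", "c"}.
RULES = {
    ("a", "b"): ["c"],            # a + b -> c
    ("c", "c"): ["a", "b", "c"],  # c + c -> a + b + c
}

def react(soup, steps, seed=0):
    """A: pick random pairs of molecules and apply a rule if one matches."""
    rng = random.Random(seed)
    soup = list(soup)
    for _ in range(steps):
        i, j = rng.sample(range(len(soup)), 2)  # a random 'collision'
        pair = tuple(sorted((soup[i], soup[j])))
        if pair in RULES:                       # elastic collision otherwise
            for idx in sorted((i, j), reverse=True):
                del soup[idx]
            soup.extend(RULES[pair])
    return soup
```

Running `react(["a", "b"], 1)` consumes the a and b and yields `["c"]`; longer runs let the population of molecules evolve under the rule set.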
The field has pertained to artificial intelligence by virtue of the fact that, over billions of years, non-living matter evolved into primordial life forms which in turn evolved into intelligent life forms. Important contributors. The first reference to artificial chemistries comes from a technical paper written by John McCaskill. Walter Fontana, working with Leo Buss, then took up the work, developing the AlChemy model. The model was presented at the second International Conference on Artificial Life. In his first papers he presented the concept of an organization, as a set of molecules that is algebraically closed and self-maintaining. This concept was further developed by Dittrich and Speroni di Fenizio into a theory of chemical organizations. Two main schools of artificial chemistries have been in Japan and Germany. In Japan the main researchers have been Takashi Ikegami, Hideaki Suzuki, and Yasuhiro Suzuki. In Germany, it was Wolfgang Banzhaf, who, together with his students Peter Dittrich and Jens Ziegler, developed various artificial chemistry models. Their 2001 paper 'Artificial Chemistries - A Review' became a standard in the field. Jens Ziegler, as part of his PhD thesis, proved that an artificial chemistry could be used to control a small Khepera robot. Among other models, Peter Dittrich developed the Seceder model, which is able to explain group formation in society through some simple rules. Since then he became a professor in Jena where he investigates artificial chemistries as a way to define a general theory of constructive dynamical systems. Applications of artificial chemistries. Artificial chemistries are often used in the study of protobiology, in trying to bridge the gap between chemistry and biology. A further motivation to study artificial chemistries is the interest in constructive dynamical systems.
Yasuhiro Suzuki has modeled various systems such as membrane systems, signaling pathways (P53), ecosystems, and enzyme systems by using his method, abstract rewriting system on multisets (ARMS). Artificial chemistry in popular culture. In the 1994 science-fiction novel "Permutation City" by Greg Egan, brain-scanned emulated humans known as Copies inhabit a simulated world which includes the Autoverse, an artificial life simulator based on a cellular automaton complex enough to represent the substratum of an artificial chemistry. Tiny environments are simulated in the Autoverse and filled with populations of a simple, designed lifeform, "Autobacterium lamberti". The purpose of the Autoverse is to allow Copies to explore the life that had evolved there after it had been run on a significantly large segment of the simulated universe (referred to as "Planet Lambert"). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\subset" } ]
https://en.wikipedia.org/wiki?curid=1000441
1000450
Johann Heinrich von Thünen
German economist (1783–1850) Johann Heinrich von Thünen (24 June 1783 – 22 September 1850), sometimes spelled Thuenen, was a prominent nineteenth-century economist and a native of Mecklenburg-Strelitz, now in northern Germany. Even though he never held a professorial position, von Thünen had substantial influence on economics. He has been described as one of the founders of agricultural economics and economic geography. He made substantial contributions to economic debates on rent, land use, and wages. Early life. Von Thünen was born on 24 June 1783 on his father's estate, Canarienhausen. His father was from an old feudal family. Von Thünen lost his father at the age of two. His mother remarried a merchant and the family moved to Hooksiel. Von Thünen expected to take over his father's estate, which led him to study practical farming. In 1803, von Thünen published his first economic ideas. He was influenced by Albrecht Thaer. Von Thünen married in 1806. Work. Model of agricultural land use. Thünen was a Mecklenburg landowner, who in the first volume of his treatise "The Isolated State" (1826) developed the first serious treatment of spatial economics and economic geography, connecting it with the theory of rent. The importance lies less in the pattern of land use predicted than in its analytical approach. Thünen developed the basics of the theory of marginal productivity in a mathematically rigorous way, summarizing it in the formula formula_0 in which R = land rent; Y = yield per unit of land; c = production expenses per unit of commodity; p = market price per unit of commodity; F = freight rate (per agricultural unit, per mile); and m = distance to market. Thünen's model of agricultural land, created before industrialization, made several simplifying assumptions. The use to which a piece of land is put is a function of the cost of transport to market and the land rent a farmer can afford to pay (determined by yield, which is held constant here).
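The rent formula formula_0 can be sketched numerically. The crop values below are hypothetical, chosen only to show rent falling linearly to zero at the margin of production m* = (p − c)/F:

```python
def land_rent(Y, p, c, F, m):
    """Thunen's location rent: R = Y(p - c) - Y*F*m."""
    return Y * (p - c) - Y * F * m

# Hypothetical crop: Y = 1000 units, p = 5.0, c = 3.0, F = 0.25 per unit-mile.
rents = [land_rent(1000, 5.0, 3.0, 0.25, m) for m in (0, 4, 8)]
# Rent is 2000 at the market, 1000 at m = 4, and 0 at the margin m* = 8.
```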
The model generated four concentric rings of agricultural activity. Dairying and intensive farming lie closest to the city. Since vegetables, fruit, milk and other dairy products must get to market quickly, they would be produced close to the city. Timber and firewood would be produced for fuel and building materials in the second ring. Wood was a very important fuel for heating and cooking and is very heavy and difficult to transport, so it is located close to the city. The third zone consists of extensive field crops such as grain. Since grains last longer than dairy products and are much lighter than fuel, reducing transport costs, they can be located further from the city. Ranching is located in the final ring. Animals can be raised far from the city because they are self-transporting. Animals can walk to the central city for sale or for butchering. Beyond the fourth ring lies the wilderness, which is too great a distance from the central city for any type of agricultural product. Thünen's rings proved especially useful in economic history, as in Fernand Braudel's "Civilization and Capitalism", for untangling the economic history of Europe and of European colonialism before the Industrial Revolution blurred the patterns on the ground. In economics, Thünen rent is an economic rent created by spatial variation or location of a resource. It is "that which can be earned "above" that which can be earned at the margin of production". Natural wage. In the second volume of his great work "The Isolated State", Thünen developed some of the mathematical foundations of marginal productivity theory and wrote about the natural wage, indicated by the formula √(AP), in which A equals the value of the product of labor and capital, and P equals the subsistence of the laborer and their family.
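Thünen's natural wage is the geometric mean of A and P; a small sketch with hypothetical values (the numbers for A and P are invented for illustration):

```python
import math

def natural_wage(A, P):
    """Thunen's natural wage: sqrt(A * P), the geometric mean of A and P."""
    return math.sqrt(A * P)

# If the value of the product of labor and capital is A = 900 and
# subsistence is P = 400, the natural wage is sqrt(900 * 400) = 600.
wage = natural_wage(900.0, 400.0)
```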
The idea he presented is that a surplus will arise on the earlier units of an investment of either capital or labor, but as time goes on the diminishing return of newer investments will mean that if wages vary with the level of productivity, those that are early will receive a greater reward for their labor and capital. If wage rates were instead determined using his formula, labor would receive a share that varies as the geometric mean: the square root of the joint product of the two factors, A and P. This formula was so important to him that it was a dying wish of his that it be placed on his tombstone. In "The Isolated State", he also coined the term "Grenzkosten" (marginal cost), which would later be popularized by Alfred Marshall in his "Principles of Economics". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "R = Y(p - c) - YFm \\," } ]
https://en.wikipedia.org/wiki?curid=1000450
10005756
Sample mean and covariance
Statistics computed from a sample of data The sample mean (sample average) or empirical mean (empirical average), and the sample covariance or empirical covariance are statistics computed from a sample of data on one or more random variables. The sample mean is the average value (or mean value) of a sample of numbers taken from a larger population of numbers, where "population" indicates not number of people but the entirety of relevant data, whether collected or not. A sample of 40 companies' sales from the Fortune 500 might be used for convenience instead of looking at the population, all 500 companies' sales. The sample mean is used as an estimator for the population mean, the average value in the entire population, where the estimate is more likely to be close to the population mean if the sample is large and representative. The reliability of the sample mean is estimated using the standard error, which in turn is calculated using the variance of the sample. If the sample is random, the standard error falls with the size of the sample and the sample mean's distribution approaches the normal distribution as the sample size increases. The term "sample mean" can also be used to refer to a vector of average values when the statistician is looking at the values of several variables in the sample, e.g. the sales, profits, and employees of a sample of Fortune 500 companies. In this case, there is not just a sample variance for each variable but a sample variance-covariance matrix (or simply "covariance matrix") showing also the relationship between each pair of variables. This would be a 3×3 matrix when 3 variables are being considered. The sample covariance is useful in judging the reliability of the sample means as estimators and is also useful as an estimate of the population covariance matrix. 
Due to their ease of calculation and other desirable characteristics, the sample mean and sample covariance are widely used in statistics to represent the location and dispersion of the distribution of values in the sample, and to estimate the values for the population. Definition of the sample mean. The sample mean is the average of the values of a variable in a sample, which is the sum of those values divided by the number of values. Using mathematical notation, if a sample of "N" observations on variable "X" is taken from the population, the sample mean is: formula_0 Under this definition, if the sample (1, 4, 1) is taken from the population (1,1,3,4,0,2,1,0), then the sample mean is formula_1, as compared to the population mean of formula_2. Even if a sample is random, it is rarely perfectly representative, and other samples would have other sample means even if the samples were all from the same population. The sample (2, 1, 0), for example, would have a sample mean of 1. If the statistician is interested in "K" variables rather than one, each observation having a value for each of those "K" variables, the overall sample mean consists of "K" sample means for individual variables. Let formula_3 be the "i"th independently drawn observation ("i"=1...,"N") on the "j"th random variable ("j"=1...,"K"). These observations can be arranged into "N" column vectors, each with "K" entries, with the "K"×1 column vector giving the "i"-th observations of all variables being denoted formula_4 ("i"=1...,"N"). The sample mean vector formula_5 is a column vector whose "j"-th element formula_6 is the average value of the "N" observations of the "j"th variable: formula_7 Thus, the sample mean vector contains the average of the observations for each variable, and is written formula_8 Definition of sample covariance. 
The sample covariance matrix is a "K"-by-"K" matrix formula_9 with entries formula_10 where formula_11 is an estimate of the covariance between the jth variable and the kth variable of the population underlying the data. In terms of the observation vectors, the sample covariance is formula_12 Alternatively, arranging the observation vectors as the columns of a matrix, so that formula_13, which is a matrix of "K" rows and "N" columns. Here, the sample covariance matrix can be computed as formula_14, where formula_15 is an "N" by 1 vector of ones. If the observations are arranged as rows instead of columns, so formula_5 is now a 1×"K" row vector and formula_16 is an "N"×"K" matrix whose column "j" is the vector of "N" observations on variable "j", then applying transposes in the appropriate places yields formula_17 Like covariance matrices for random vector, sample covariance matrices are positive semi-definite. To prove it, note that for any matrix formula_18 the matrix formula_19 is positive semi-definite. Furthermore, a covariance matrix is positive definite if and only if the rank of the formula_20 vectors is K. Unbiasedness. The sample mean and the sample covariance matrix are unbiased estimates of the mean and the covariance matrix of the random vector formula_21, a row vector whose "j"th element ("j = 1, ..., K") is one of the random variables. The sample covariance matrix has formula_22 in the denominator rather than formula_23 due to a variant of Bessel's correction: In short, the sample covariance relies on the difference between each observation and the sample mean, but the sample mean is slightly correlated with each observation since it is defined in terms of all observations. If the population mean formula_24 is known, the analogous unbiased estimate formula_25 using the population mean, has formula_23 in the denominator. 
This is an example of why in probability and statistics it is essential to distinguish between random variables (upper case letters) and realizations of the random variables (lower case letters). The maximum likelihood estimate of the covariance formula_26 for the Gaussian distribution case has "N" in the denominator as well. The ratio of 1/"N" to 1/("N" − 1) approaches 1 for large "N", so the maximum likelihood estimate approximately equals the unbiased estimate when the sample is large. Distribution of the sample mean. For each random variable, the sample mean is a good estimator of the population mean, where a "good" estimator is defined as being efficient and unbiased. Of course the estimator will likely not be the true value of the population mean since different samples drawn from the same distribution will give different sample means and hence different estimates of the true mean. Thus the sample mean is a random variable, not a constant, and consequently has its own distribution. For a random sample of "N" observations on the "j"th random variable, the sample mean's distribution itself has mean equal to the population mean formula_27 and variance equal to formula_28, where formula_29 is the population variance. The arithmetic mean of a population, or population mean, is often denoted "μ". The sample mean formula_30 (the arithmetic mean of a sample of values drawn from the population) makes a good estimator of the population mean, as its expected value is equal to the population mean (that is, it is an unbiased estimator). The sample mean is a random variable, not a constant, since its calculated value will randomly differ depending on which members of the population are sampled, and consequently it will have its own distribution. 
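The definitions above, including the N − 1 (unbiased) versus N (maximum likelihood) denominators, can be sketched in plain Python; this is a minimal illustration, not a library-quality implementation:

```python
def sample_mean(obs):
    """Column-wise mean of N observations on K variables."""
    n = len(obs)
    return [sum(row[j] for row in obs) / n for j in range(len(obs[0]))]

def sample_covariance(obs, unbiased=True):
    """K-by-K sample covariance matrix; denominator N-1 (Bessel) or N (ML)."""
    n, k = len(obs), len(obs[0])
    m = sample_mean(obs)
    denom = n - 1 if unbiased else n
    return [[sum((row[j] - m[j]) * (row[l] - m[l]) for row in obs) / denom
             for l in range(k)] for j in range(k)]

# The scalar sample (1, 4, 1) from the text has sample mean 2:
print(sample_mean([[1], [4], [1]]))  # [2.0]
```

For large N the ratio (N − 1)/N approaches 1, so the two covariance estimates nearly coincide, matching the remark about the maximum likelihood estimate above.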
For a random sample of "n" independent observations, the expected value of the sample mean is formula_31 and the variance of the sample mean is formula_32 If the samples are not independent, but correlated, then special care has to be taken in order to avoid the problem of pseudoreplication. If the population is normally distributed, then the sample mean is normally distributed as follows: formula_33 If the population is not normally distributed, the sample mean is nonetheless approximately normally distributed if "n" is large and "σ"2/"n" &lt; +∞. This is a consequence of the central limit theorem. Weighted samples. In a weighted sample, each vector formula_34 (each set of single observations on each of the "K" random variables) is assigned a weight formula_35. Without loss of generality, assume that the weights are normalized: formula_36 (If they are not, divide the weights by their sum). Then the weighted mean vector formula_37 is given by formula_38 and the elements formula_11 of the weighted covariance matrix formula_39 are formula_40 If all weights are the same, formula_41, the weighted mean and covariance reduce to the sample mean and covariance mentioned above. Criticism. The sample mean and sample covariance are not robust statistics, meaning that they are sensitive to outliers. As robustness is often a desired trait, particularly in real-world applications, robust alternatives may prove desirable, notably quantile-based statistics such as the sample median for location, and interquartile range (IQR) for dispersion. Other alternatives include trimming and Winsorising, as in the trimmed mean and the Winsorized mean. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\bar{X}=\\frac{1}{N}\\sum_{i=1}^{N}X_{i}." }, { "math_id": 1, "text": "\\bar{x} = (1+4+1)/3 = 2" }, { "math_id": 2, "text": "\\mu = (1+1+3+4+0+2+1+0) /8 = 12/8 = 1.5" }, { "math_id": 3, "text": "x_{ij}" }, { "math_id": 4, "text": "\\mathbf{x}_i" }, { "math_id": 5, "text": "\\mathbf{\\bar{x}}" }, { "math_id": 6, "text": "\\bar{x}_{j}" }, { "math_id": 7, "text": " \\bar{x}_{j}=\\frac{1}{N} \\sum_{i=1}^{N} x_{ij},\\quad j=1,\\ldots,K. " }, { "math_id": 8, "text": " \\mathbf{\\bar{x}}=\\frac{1}{N}\\sum_{i=1}^{N}\\mathbf{x}_i = \\begin{bmatrix}\n\\bar{x}_1 \\\\\n\\vdots \\\\\n\\bar{x}_j \\\\ \n\\vdots \\\\\n\\bar{x}_K\n\\end{bmatrix} " }, { "math_id": 9, "text": "\\textstyle \\mathbf{Q}=\\left[ q_{jk}\\right] " }, { "math_id": 10, "text": " q_{jk}=\\frac{1}{N-1}\\sum_{i=1}^{N}\\left( x_{ij}-\\bar{x}_j \\right) \\left( x_{ik}-\\bar{x}_k \\right), " }, { "math_id": 11, "text": "q_{jk}" }, { "math_id": 12, "text": "\\mathbf{Q} = {1 \\over {N-1}}\\sum_{i=1}^N (\\mathbf{x}_i.-\\mathbf{\\bar{x}}) (\\mathbf{x}_i.-\\mathbf{\\bar{x}})^\\mathrm{T}," }, { "math_id": 13, "text": "\\mathbf{F} = \\begin{bmatrix}\\mathbf{x}_1 & \\mathbf{x}_2 & \\dots & \\mathbf{x}_N \\end{bmatrix}" }, { "math_id": 14, "text": "\\mathbf{Q} = \\frac{1}{N-1}( \\mathbf{F} - \\mathbf{\\bar{x}} \\,\\mathbf{1}_N^\\mathrm{T} ) ( \\mathbf{F} - \\mathbf{\\bar{x}} \\,\\mathbf{1}_N^\\mathrm{T} )^\\mathrm{T}" }, { "math_id": 15, "text": "\\mathbf{1}_N" }, { "math_id": 16, "text": "\\mathbf{M}=\\mathbf{F}^\\mathrm{T}" }, { "math_id": 17, "text": "\\mathbf{Q} = \\frac{1}{N-1}( \\mathbf{M} - \\mathbf{1}_N \\mathbf{\\bar{x}} )^\\mathrm{T} ( \\mathbf{M} - \\mathbf{1}_N \\mathbf{\\bar{x}} )." 
}, { "math_id": 18, "text": "\\mathbf{A}" }, { "math_id": 19, "text": "\\mathbf{A}^T\\mathbf{A}" }, { "math_id": 20, "text": "\\mathbf{x}_i.-\\mathbf{\\bar{x}}" }, { "math_id": 21, "text": "\\textstyle \\mathbf{X}" }, { "math_id": 22, "text": "\\textstyle N-1" }, { "math_id": 23, "text": "\\textstyle N" }, { "math_id": 24, "text": "\\operatorname{E}(\\mathbf{X})" }, { "math_id": 25, "text": " q_{jk}=\\frac{1}{N}\\sum_{i=1}^N \\left( x_{ij}-\\operatorname{E}(X_j)\\right) \\left( x_{ik}-\\operatorname{E}(X_k)\\right), " }, { "math_id": 26, "text": " q_{jk}=\\frac{1}{N}\\sum_{i=1}^N \\left( x_{ij}-\\bar{x}_j \\right) \\left( x_{ik}-\\bar{x}_k \\right) " }, { "math_id": 27, "text": "E(X_j)" }, { "math_id": 28, "text": " \\sigma^2_j/N" }, { "math_id": 29, "text": "\\sigma^2_j" }, { "math_id": 30, "text": " \\bar{x}" }, { "math_id": 31, "text": " \\operatorname E (\\bar{x}) = \\mu " }, { "math_id": 32, "text": " \\operatorname{var}(\\bar{x}) = \\frac{\\sigma^2} n. " }, { "math_id": 33, "text": "\\bar{x} \\thicksim N\\left\\{\\mu, \\frac{\\sigma^2}{n}\\right\\}." }, { "math_id": 34, "text": "\\textstyle \\textbf{x}_{i}" }, { "math_id": 35, "text": "\\textstyle w_i \\geq0" }, { "math_id": 36, "text": " \\sum_{i=1}^{N}w_i = 1. " }, { "math_id": 37, "text": "\\textstyle \\mathbf{\\bar{x}}" }, { "math_id": 38, "text": " \\mathbf{\\bar{x}}=\\sum_{i=1}^N w_i \\mathbf{x}_i." }, { "math_id": 39, "text": "\\textstyle \\mathbf{Q}" }, { "math_id": 40, "text": " q_{jk}=\\frac{1}{1-\\sum_{i=1}^{N}w_i^2}\n\\sum_{i=1}^N w_i \\left( x_{ij}-\\bar{x}_j \\right) \\left( x_{ik}-\\bar{x}_k \\right) . " }, { "math_id": 41, "text": "\\textstyle w_{i}=1/N" } ]
https://en.wikipedia.org/wiki?curid=10005756
10006830
Disk loading
Characteristic of rotors/propellers In fluid dynamics, disk loading or disc loading is the average pressure change across an actuator disk, such as an airscrew. Airscrews with a relatively low disk loading are typically called rotors, including helicopter main rotors and tail rotors; propellers typically have a higher disk loading. The V-22 Osprey tiltrotor aircraft has a high disk loading relative to a helicopter in the hover mode, but a relatively low disk loading in fixed-wing mode compared to a turboprop aircraft. Rotors. Disk loading of a hovering helicopter is the ratio of its weight to the total main rotor disk area. It is determined by dividing the total helicopter weight by the rotor disk area, which is the area swept by the blades of a rotor. Disk area can be found by using the span of one rotor blade as the radius of a circle and then determining the area the blades encompass during a complete rotation. When a helicopter is being maneuvered, its disk loading changes. The higher the loading, the more power needed to maintain rotor speed. A low disk loading is a direct indicator of high lift thrust efficiency. Increasing the weight of a helicopter increases disk loading. For a given weight, a helicopter with shorter rotors will have higher disk loading, and will require more engine power to hover. A low disk loading improves autorotation performance in rotorcraft. Typically, an autogyro (or gyroplane) has a lower rotor disk loading than a helicopter, which provides a slower rate of descent in autorotation. Propellers. In reciprocating and propeller engines, disk loading can be defined as the ratio between propeller-induced velocity and freestream velocity. Lower disk loading will increase efficiency, so it is generally desirable to have larger propellers from an efficiency standpoint. 
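As a numerical illustration of the definition above, disk loading in hover is simply weight divided by the swept disk area. The figures in the sketch below are made up for the example, not data from the article:

```python
import math

def disk_loading(weight_n, rotor_radius_m):
    """Disk loading in N/m^2: total weight over the area swept by the rotor blades."""
    disk_area = math.pi * rotor_radius_m ** 2  # circle with one blade's span as radius
    return weight_n / disk_area

# A hypothetical 50 kN (~5,100 kg) helicopter with an 8 m rotor radius:
loading = disk_loading(50_000.0, 8.0)  # roughly 250 N/m^2 for these made-up numbers
```

For a fixed weight, shortening the rotor radius raises the loading quadratically, which matches the text's point that shorter rotors require more power to hover.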
Maximum efficiency is reduced as disk loading is increased due to the rotating slipstream; using contra-rotating propellers can alleviate this problem allowing high maximum efficiency even at relatively high disc loading. The Airbus A400M fixed-wing aircraft will have a very high disk loading on its propellers. Theory. The "momentum theory" or "disk actuator theory" describes a mathematical model of an ideal actuator disk, developed by W.J.M. Rankine (1865), Alfred George Greenhill (1888) and R.E. Froude (1889). The helicopter rotor is modeled as an infinitesimally thin disk with an infinite number of blades that induce a constant pressure jump over the disk area and along the axis of rotation. For a helicopter that is hovering, the aerodynamic force is vertical and exactly balances the helicopter weight, with no lateral force. The downward force on the air flowing through the rotor is accompanied by an upward force on the helicopter rotor disk. The downward force produces a downward acceleration of the air, increasing its kinetic energy. This energy transfer from the rotor to the air is the induced power loss of the rotary wing, which is analogous to the lift-induced drag of a fixed-wing aircraft. Conservation of linear momentum relates the induced velocity downstream in the far wake field to the rotor thrust per unit of mass flow. Conservation of energy considers these parameters as well as the induced velocity at the rotor disk. Conservation of mass relates the mass flow to the induced velocity. The momentum theory applied to a helicopter gives the relationship between induced power loss and rotor thrust, which can be used to analyze the performance of the aircraft. Viscosity and compressibility of the air, frictional losses, and rotation of the slipstream in the wake are not considered. Momentum theory. 
For an actuator disk of area formula_0, with uniform induced velocity formula_1 at the rotor disk, and with formula_2 as the density of air, the mass flow rate formula_3 through the disk area is: formula_4 By conservation of mass, the mass flow rate is constant across the slipstream both upstream and downstream of the disk (regardless of velocity). Since the flow far upstream of a helicopter in a level hover is at rest, the starting velocity, momentum, and energy are zero. If the homogeneous slipstream far downstream of the disk has velocity formula_5, by conservation of momentum the total thrust formula_6 developed over the disk is equal to the rate of change of momentum, which assuming zero starting velocity is: formula_7 By conservation of energy, the work done by the rotor must equal the energy change in the slipstream: formula_8 Substituting for formula_6 and eliminating terms, we get: formula_9 So the velocity of the slipstream far downstream of the disk is twice the velocity at the disk, which is the same result as for an elliptically loaded wing predicted by lifting-line theory. Bernoulli's principle. To compute the disk loading using Bernoulli's principle, we assume the pressure in the slipstream far downstream is equal to the starting pressure formula_10, which is equal to the atmospheric pressure. From the starting point to the disk we have: formula_11 Between the disk and the distant wake, we have: formula_12 Combining equations, the disk loading formula_13 is: formula_14 The total pressure in the distant wake is: formula_15 So the pressure change across the disk is equal to the disk loading. Above the disk the pressure change is: formula_16 Below the disk, the pressure change is: formula_17 The pressure along the slipstream is always falling downstream, except for the positive pressure jump across the disk. Power required. 
From the momentum theory, thrust is: formula_18 The induced velocity is: formula_19 Where formula_20 is the disk loading as before, and the power formula_21 required in hover (in the ideal case) is: formula_22 Therefore, the induced velocity can be expressed as: formula_23 So, the induced velocity is inversely proportional to the power loading formula_24. References. &lt;templatestyles src="Reflist/styles.css" /&gt;  This article incorporates public domain material from
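The hover relations above (the induced velocity formula_19 and the ideal power formula_22) are easy to evaluate numerically. The sketch below assumes sea-level air density; the function names are illustrative:

```python
import math

RHO_SEA_LEVEL = 1.225  # air density in kg/m^3

def induced_velocity(thrust_n, disk_area_m2, rho=RHO_SEA_LEVEL):
    """Induced velocity at the disk in hover: v = sqrt(T / (2 * rho * A))."""
    return math.sqrt(thrust_n / (2.0 * rho * disk_area_m2))

def ideal_hover_power(thrust_n, disk_area_m2, rho=RHO_SEA_LEVEL):
    """Ideal (induced) hover power: P = T * v. Real rotors need more than this."""
    return thrust_n * induced_velocity(thrust_n, disk_area_m2, rho)

# Per the momentum theory, the far-wake velocity is twice the disk value: w = 2 * v.
```

Note that for fixed thrust, a larger disk area lowers the induced velocity and hence the ideal power — the momentum-theory statement of why low disk loading improves hover efficiency.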
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "v" }, { "math_id": 2, "text": "\\rho" }, { "math_id": 3, "text": "\\dot{m}" }, { "math_id": 4, "text": "\\dot m = \\rho \\, A \\, v." }, { "math_id": 5, "text": "w" }, { "math_id": 6, "text": "T" }, { "math_id": 7, "text": " T= \\dot m\\, w." }, { "math_id": 8, "text": " T\\, v= \\tfrac12\\, \\dot m\\, {w^2}." }, { "math_id": 9, "text": " v= \\tfrac12\\, w." }, { "math_id": 10, "text": "p_0" }, { "math_id": 11, "text": " p_0 =\\, p_1 +\\ \\tfrac12\\, \\rho\\, v^2." }, { "math_id": 12, "text": " p_2 +\\ \\tfrac12\\, \\rho\\, v^2 =\\, p_0 +\\ \\tfrac12\\, \\rho\\, w^2." }, { "math_id": 13, "text": "T /\\, A" }, { "math_id": 14, "text": "\\frac {T}{A} = p_2 -\\, p_1 = \\tfrac12\\, \\rho\\, w^2" }, { "math_id": 15, "text": " p_0 + \\tfrac12\\, \\rho\\, w^2 =\\, p_0 + \\frac {T}{A}." }, { "math_id": 16, "text": " p_0 - \\tfrac12\\, \\rho\\, v^2 =\\, p_0 -\\, \\tfrac14 \\frac {T}{A}." }, { "math_id": 17, "text": " p_0 + \\tfrac32\\, \\rho\\, v^2 =\\, p_0 +\\, \\tfrac34 \\frac {T}{A}." }, { "math_id": 18, "text": " T = \\dot m\\, w = \\dot m\\, (2 v) = 2 \\rho\\, A\\, v^2." }, { "math_id": 19, "text": "v = \\sqrt{\\frac{T}{A} \\cdot \\frac{1}{2 \\rho}}." }, { "math_id": 20, "text": "T/A" }, { "math_id": 21, "text": "P" }, { "math_id": 22, "text": "P = T v = T \\sqrt{\\frac{T}{A} \\cdot \\frac{1}{2 \\rho}}." }, { "math_id": 23, "text": " v = \\frac{P}{T} = \\left [ \\frac{T}{P} \\right ] ^{-1}." }, { "math_id": 24, "text": "T/P" } ]
https://en.wikipedia.org/wiki?curid=10006830
10008
Electrode
Electrical conductor used to make contact with nonmetallic parts of a circuit An electrode is an electrical conductor used to make contact with a nonmetallic part of a circuit (e.g. a semiconductor, an electrolyte, a vacuum or air). Electrodes are essential parts of batteries and can consist of a variety of materials (chemicals) depending on the type of battery. The electrophore, invented by Johan Wilcke, was an early version of an electrode used to study static electricity. Anode and cathode in electrochemical cells. Electrodes are an essential part of any battery. The first electrochemical battery was devised by Alessandro Volta and was aptly named the Voltaic cell. This battery consisted of a stack of copper and zinc electrodes separated by brine-soaked paper disks. Due to fluctuations in the voltage provided by the voltaic cell, it was not very practical. The first practical battery was invented in 1836 and named the Daniell cell after John Frederic Daniell. It still made use of the zinc–copper electrode combination. Since then, many more batteries have been developed using various materials. The basis of all of these is still the use of two electrodes, an anode and a cathode. Anode (-). 'Anode' was coined by William Whewell at Michael Faraday's request, derived from the Greek words ἄνω (ano), 'upwards' and ὁδός (hodós), 'a way'. The anode is the electrode through which the conventional current enters from the electrical circuit of an electrochemical cell (battery) into the non-metallic cell. The electrons then flow to the other side of the battery. Benjamin Franklin surmised that the electrical flow moved from positive to negative. The electrons flow away from the anode and the conventional current flows towards it; from both it can be concluded that the charge of the anode is negative. The electron entering the anode comes from the oxidation reaction that takes place next to it. Cathode (+). The cathode is in many ways the opposite of the anode.
The name (also coined by Whewell) comes from the Greek words κάτω (kato), 'downwards' and ὁδός (hodós), 'a way'. It is the positive electrode, meaning the electrons flow from the electrical circuit through the cathode into the non-metallic part of the electrochemical cell. At the cathode, the reduction reaction takes place: the electrons arriving from the wire connected to the cathode are absorbed by the oxidizing agent. Primary cell. A primary cell is a battery designed to be used once and then discarded. This is because the electrochemical reactions taking place at the electrodes in the cell are not reversible. An example of a primary cell is the discardable alkaline battery commonly used in flashlights, which consists of a zinc anode and a manganese dioxide cathode; ZnO is formed at the anode. The half-reactions are: Zn(s) + 2OH−(aq) → ZnO(s) + H2O(l) + 2e− formula_0 [E0oxidation = +1.28 V] 2MnO2(s) + H2O(l) + 2e− → Mn2O3(s) + 2OH−(aq)formula_1 [E0reduction = +0.15 V] Overall reaction: Zn(s) + 2MnO2(s) ⇌ ZnO(s) + Mn2O3(s)formula_0 [E0total = +1.43 V] The ZnO is prone to clumping and will give a less efficient discharge if the cell is recharged. It is possible to recharge these batteries, but the manufacturers advise against it for safety reasons. Other primary cells include zinc–carbon, zinc–chloride, and lithium iron disulfide. Secondary cell. Contrary to the primary cell, a secondary cell can be recharged. The first was the lead–acid battery, invented in 1859 by French physicist Gaston Planté. This type of battery is still widely used, in automobiles among other applications. The cathode consists of lead dioxide (PbO2) and the anode of solid lead. Other commonly used rechargeable batteries are nickel–cadmium, nickel–metal hydride, and lithium-ion. The last of these will be explained more thoroughly in this article due to its importance. Marcus' theory of electron transfer. Marcus theory is a theory originally developed by Nobel laureate Rudolph A.
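The arithmetic behind the overall potential above is simply E_cell = E_cathode − E_anode in terms of reduction potentials (the zinc half-reaction, written as a reduction, has E⁰ ≈ −1.28 V). A minimal check, with the helper name chosen for illustration:

```python
def cell_potential(e_red_cathode_v, e_red_anode_v):
    """Standard cell potential from the two half-reactions' reduction potentials (volts)."""
    return e_red_cathode_v - e_red_anode_v

# Alkaline cell: MnO2/Mn2O3 cathode (+0.15 V) vs. Zn/ZnO anode (-1.28 V as a reduction)
e_cell = cell_potential(0.15, -1.28)  # 1.43 V, matching the overall reaction above
```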
Marcus, and it explains the rate at which an electron can move from one chemical species to another; in the context of this article, this can be seen as the electron 'jumping' from the electrode to a species in the solvent, or vice versa. We can represent the problem as calculating the transfer rate for the transfer of an electron from a donor to an acceptor D + A → D+ + A− The potential energy of the system is a function of the translational, rotational, and vibrational coordinates of the reacting species and the molecules of the surrounding medium, collectively called the reaction coordinates. The abscissa of the figure to the right represents these. From the classical electron transfer theory, the expression of the reaction rate constant (probability of reaction) can be calculated, if a non-adiabatic process and parabolic potential energy surfaces are assumed, by finding the point of intersection (Qx). One important point, noted by Marcus when he developed the theory, is that the electron transfer must abide by the law of conservation of energy and the Franck–Condon principle. Doing this and rearranging leads to the expression of the free energy of activation (formula_2) in terms of the overall free energy of the reaction (formula_3). formula_4 in which formula_5 is the reorganisation energy. Inserting this result into the classically derived Arrhenius equation formula_6 leads to formula_7 with A being the pre-exponential factor, which is usually determined experimentally, although a semiclassical derivation provides more information, as is explained below. This classically derived result qualitatively reproduced observations of a maximum electron transfer rate under the conditions formula_8. For a more extensive mathematical treatment one could read the paper by Newton. For an interpretation of this result and a closer look at the physical meaning of formula_9, one can read the paper by Marcus.
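The classical rate expression formula_7 can be explored numerically. The sketch below is illustrative (an arbitrary pre-exponential factor A = 1, an assumed reorganisation energy, energies in eV); it shows the rate peaking when the driving force matches λ, i.e. ΔG⁰ = −λ, with the 'inverted region' beyond it:

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def marcus_rate(dG0_eV, lam_eV, T=298.0, A=1.0):
    """Classical Marcus rate: k = A * exp(-(dG0 + lam)^2 / (4 * lam * kB * T))."""
    return A * math.exp(-(dG0_eV + lam_eV) ** 2 / (4.0 * lam_eV * K_B * T))

lam = 0.5  # assumed reorganisation energy, eV (illustrative)
# The rate is maximal (equal to A) when the driving force matches lambda:
rates = [marcus_rate(dG0, lam) for dG0 in (-1.0, -0.5, 0.0)]
# rates[1] (dG0 = -lam) is the largest; larger driving forces fall in the inverted region
```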
The situation at hand can be described more accurately using the displaced harmonic oscillator model, in which quantum tunneling is allowed. This is needed to explain why electron transfers still occur even near zero kelvin, in contradiction to the classical theory. Without going into too much detail on the derivation, it rests on applying Fermi's golden rule from time-dependent perturbation theory to the full Hamiltonian of the system. One looks at the overlap of the wavefunctions of the reactants and the products (the right and left sides of the chemical reaction) to determine when their energies coincide and electron transfer is allowed. As noted before, this must happen because only then is conservation of energy obeyed. Skipping over a few mathematical steps, the probability of electron transfer can be calculated (albeit with some difficulty) using the following formula formula_10 with formula_11 being the electronic coupling constant describing the interaction between the two states (reactants and products) and formula_12 being the line shape function. Taking the classical limit of this expression, meaning formula_13, and making some substitutions yields an expression very similar to the classically derived one, as expected: formula_14 The main difference is that the pre-exponential factor is now described by more physical parameters instead of the experimental factor formula_15. The reader is once again referred to the sources listed below for a more in-depth and rigorous mathematical derivation and interpretation. Efficiency. The physical properties of electrodes are mainly determined by the electrode material and its topology. The required properties depend on the application, and therefore there are many kinds of electrodes in circulation. The defining property for a material to be used as an electrode is that it be conductive.
Any conducting material such as metals, semiconductors, graphite or conductive polymers can therefore be used as an electrode. Often electrodes consist of a combination of materials, each with a specific task. Typical constituents are the active materials, which serve as the particles that are oxidized or reduced; conductive agents, which improve the conductivity of the electrode; and binders, which are used to contain the active particles within the electrode. The efficiency of electrochemical cells is judged by a number of properties; important quantities are the self-discharge time, the discharge voltage and the cycle performance. The physical properties of the electrodes play an important role in determining these quantities. Important properties of the electrodes are: the electrical resistivity, the specific heat capacity (c_p), the electrode potential and the hardness. Of course, for technological applications, the cost of the material is also an important factor. The values of these properties at room temperature (T = 293 K) for some commonly used materials are listed in the table below. Surface effects. The surface topology of the electrode plays an important role in determining its efficiency. The efficiency of the electrode can be reduced by contact resistance; to create an efficient electrode it is therefore important to design it so as to minimize contact resistance. Manufacturing. The production of electrodes for Li-ion batteries is done in various steps as follows: Structure of the electrode. For a given selection of electrode constituents, the final efficiency is determined by the internal structure of the electrode. The important factors of the internal structure in determining the performance of the electrode are: These properties can be influenced in the production of the electrodes in a number of manners. The most important step in the manufacturing of the electrodes is creating the electrode slurry.
As can be seen above, the important properties of the electrode all have to do with the even distribution of its components. Therefore, it is very important that the electrode slurry be as homogeneous as possible. Multiple procedures have been developed to improve this mixing stage, and research on it is still ongoing. Electrodes in lithium ion batteries. A modern application of electrodes is in lithium-ion batteries (Li-ion batteries). A Li-ion battery, whose layout can be seen in the image on the right, is an example of a secondary cell since it is rechargeable. It can act as either a galvanic or an electrolytic cell. Li-ion batteries use lithium ions as the solute in the electrolyte, dissolved in an organic solvent. Lithium electrodes were first studied by Gilbert N. Lewis and Frederick G. Keyes in 1913. In the following century these electrodes were used to create and study the first Li-ion batteries. Li-ion batteries are very popular due to their great performance. Applications include mobile phones and electric cars. Due to their popularity, much research is being done to reduce the cost and increase the safety of Li-ion batteries. The anodes and cathodes are integral parts of Li-ion batteries; therefore, much research is being done into increasing the efficiency and safety of these electrodes specifically, and into reducing their cost. Cathodes. In Li-ion batteries, the cathode consists of an intercalated lithium compound (a layered material consisting of layers of molecules composed of lithium and other elements). A common element which makes up part of the molecules in the compound is cobalt. Another frequently used element is manganese. The best choice of compound usually depends on the application of the battery.
Advantages of cobalt-based compounds over manganese-based compounds are their high specific heat capacity, high volumetric heat capacity, low self-discharge rate, high discharge voltage and high cycle durability. There are, however, also drawbacks to using cobalt-based compounds, such as their high cost and their low thermostability. Manganese has similar advantages and a lower cost; however, there are some problems associated with using manganese. The main problem is that manganese tends to dissolve into the electrolyte over time. For this reason, cobalt is still the most common element used in the lithium compounds. There is much research being done into finding new materials which can be used to create cheaper and longer-lasting Li-ion batteries. Anodes. The anodes used in mass-produced Li-ion batteries are either carbon-based (usually graphite) or made out of spinel lithium titanate (Li4Ti5O12). Graphite anodes have been successfully implemented in many modern commercially available batteries due to their low price, longevity and high energy density. However, graphite presents issues of dendrite growth, which risks shorting the battery and poses a safety issue. Li4Ti5O12 has the second-largest market share of anodes, due to its stability and good rate capability, but with challenges such as low capacity. During the early 2000s, silicon anode research began picking up pace, making silicon one of the decade's most promising candidates for future lithium-ion battery anodes. Silicon has one of the highest gravimetric capacities compared to graphite and Li4Ti5O12, as well as a high volumetric one. Furthermore, silicon has the advantage of operating under a reasonable open circuit voltage without parasitic lithium reactions. However, silicon anodes have a major issue: a volumetric expansion of around 360% during lithiation. This expansion may pulverize the anode, resulting in poor performance.
To fix this problem, scientists looked into varying the dimensionality of the Si. Many studies have explored Si nanowires, Si tubes and Si sheets. As a result, composite hierarchical Si anodes have become the major technology for future applications in lithium-ion batteries. In the early 2020s, the technology began reaching commercial levels, with factories being built for mass production of anodes in the United States. Furthermore, metallic lithium is another possible candidate for the anode. It boasts a higher specific capacity than silicon; however, it comes with the drawback of working with highly unstable metallic lithium. As with graphite anodes, dendrite formation is a major limitation of metallic lithium, with the solid electrolyte interphase being a major design challenge. In the end, if stabilized, metallic lithium would be able to produce batteries that hold the most charge while being the lightest. Mechanical properties. A common failure mechanism of batteries is mechanical shock, which breaks either the electrode or the system's container, leading to poor conductivity and electrolyte leakage. However, the relevance of the mechanical properties of electrodes goes beyond resistance to collisions with the environment. During standard operation, the incorporation of ions into electrodes leads to a change in volume. This is well exemplified by Si electrodes in lithium-ion batteries expanding around 300% during lithiation. Such change may lead to deformations in the lattice and, therefore, stresses in the material. The origin of stresses may be geometric constraints in the electrode or inhomogeneous plating of the ion. This phenomenon is very concerning as it may lead to electrode fracture and performance loss. Thus, mechanical properties are crucial to enable the development of new electrodes for long-lasting batteries.
A possible strategy for measuring the mechanical behavior of electrodes during operation is nanoindentation. The method is able to analyze how stresses evolve during the electrochemical reactions, making it a valuable tool in evaluating possible pathways for coupling mechanical behavior and electrochemistry. More than just affecting the electrode's morphology, stresses are also able to impact electrochemical reactions. While the chemical driving forces are usually higher in magnitude than the mechanical energies, this is not true for Li-ion batteries. A study by Dr. Larché established a direct relation between the applied stress and the chemical potential of the electrode. Though it neglects multiple variables, such as the variation of elastic constraints, it subtracts from the total chemical potential the elastic energy induced by the stress. formula_16 In this equation, μ represents the chemical potential, with μ° being its reference value. T stands for the temperature and k the Boltzmann constant. The term γ inside the logarithm is the activity coefficient and x is the ratio of the ion to the total composition of the electrode. The novel term Ω is the partial molar volume of the ion in the host and σ corresponds to the mean stress felt by the system. The result of this equation is that diffusion, which is dependent on the chemical potential, is affected by the added stress, which in turn changes the battery's performance. Furthermore, mechanical stresses may also impact the electrode's solid-electrolyte-interphase layer. This interface, which regulates ion and charge transfer, can be degraded by stress; when that happens, more ions from the solution will be consumed to reform it, diminishing the overall efficiency of the system. Other anodes and cathodes.
The electrons enter the device through the cathode and exit the device through the anode. Many devices have other electrodes to control operation, e.g., base, gate, control grid. In a three-electrode cell, a counter electrode, also called an auxiliary electrode, is used only to make a connection to the electrolyte so that a current can be applied to the working electrode. The counter electrode is usually made of an inert material, such as a noble metal or graphite, to keep it from dissolving. Welding electrodes. In arc welding, an electrode is used to conduct current through a workpiece to fuse two pieces together. Depending upon the process, the electrode is either consumable, in the case of gas metal arc welding or shielded metal arc welding, or non-consumable, such as in gas tungsten arc welding. For a direct current system, the weld rod or stick may be a cathode for a filling type weld or an anode for other welding processes. For an alternating current arc welder, the welding electrode would not be considered an anode or cathode. Alternating current electrodes. For electrical systems which use alternating current, the electrodes are the connections from the circuitry to the object to be acted upon by the electric current but are not designated anode or cathode because the direction of flow of the electrons changes periodically, usually many times per second. Chemically modified electrodes. Chemically modified electrodes are electrodes that have their surfaces chemically modified to change the electrode's physical, chemical, electrochemical, optical, electrical, and transportive properties. These electrodes are used for advanced purposes in research and investigation. Uses. Electrodes are used to provide current through nonmetal objects to alter them in numerous ways and to measure conductivity for numerous purposes. Examples include: See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\qquad \\qquad" }, { "math_id": 1, "text": "\\qquad" }, { "math_id": 2, "text": "\\Delta G^{\\dagger}" }, { "math_id": 3, "text": "\\Delta G^{0}" }, { "math_id": 4, "text": "\\Delta G^{\\dagger} = \\frac{1}{4 \\lambda} (\\Delta G^{0} + \\lambda)^{2} " }, { "math_id": 5, "text": " \\lambda " }, { "math_id": 6, "text": "k = A\\, \\exp\\left(\\frac{- \\Delta G^{\\dagger}}{kT}\\right)," }, { "math_id": 7, "text": "k = A\\, \\exp\\left[{\\frac {-(\\Delta G^{0} + \\lambda)^{2}}{4 \\lambda k T}}\\right]" }, { "math_id": 8, "text": "\\Delta G^{\\dagger} = \\lambda" }, { "math_id": 9, "text": "\\lambda" }, { "math_id": 10, "text": "w_{ET}= \\frac{|J|^{2}}{\\hbar^{2}}\\int_{-\\infty}^{+\\infty}dt\\, e^{-i \\Delta Et / \\hbar - g (t)}" }, { "math_id": 11, "text": " J " }, { "math_id": 12, "text": " g(t) " }, { "math_id": 13, "text": " \\hbar \\omega \\ll k T " }, { "math_id": 14, "text": "w_{ET} = \\frac{|J|^{2}}{\\hbar} \\sqrt{\\frac{\\pi}{\\lambda k T}}\\exp\\left[\\frac {- ( \\Delta E + \\lambda )^{2}} {4 \\lambda k T}\\right]" }, { "math_id": 15, "text": " A " }, { "math_id": 16, "text": "\\mu = \\mu^o + k\\cdot T\\cdot\\log (\\gamma\\cdot x) + \\Omega \\cdot \\sigma" } ]
https://en.wikipedia.org/wiki?curid=10008
1001293
Irreducibility (mathematics)
In mathematics, the concept of irreducibility is used in several ways. &lt;templatestyles src="Dmbox/styles.css" /&gt; Index of articles associated with the same name This includes a list of related items that share the same name (or similar names). &lt;br&gt; If an internal link incorrectly led you here, you may wish to change the link to point directly to the intended article.
[ { "math_id": 0, "text": "\\mathbb RP^2" } ]
https://en.wikipedia.org/wiki?curid=1001293
1001329
Class function
In mathematics, especially in the fields of group theory and representation theory of groups, a class function is a function on a group "G" that is constant on the conjugacy classes of "G". In other words, it is invariant under the conjugation map on "G". Such functions play a basic role in representation theory. Characters. The character of a linear representation of "G" over a field "K" is always a class function with values in "K". The class functions form the center of the group ring "K"["G"]. Here a class function "f" is identified with the element formula_0. Inner products. The set of class functions of a group "G" with values in a field "K" form a "K"-vector space. If "G" is finite and the characteristic of the field does not divide the order of "G", then there is an inner product defined on this space defined by formula_1 where |"G"| denotes the order of "G" and bar is conjugation in the field "K". The set of irreducible characters of "G" forms an orthogonal basis, and if "K" is a splitting field for "G", for instance if "K" is algebraically closed, then the irreducible characters form an orthonormal basis. In the case of a compact group and "K" = C the field of complex numbers, the notion of Haar measure allows one to replace the finite sum above with an integral: formula_2 When "K" is the real numbers or the complex numbers, the inner product is a non-degenerate Hermitian bilinear form.
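As a concrete sketch of the inner product above, one can check the orthonormality of the irreducible characters for the symmetric group S₃ (the class sizes and character values used are the standard ones; the helper names are illustrative):

```python
from fractions import Fraction

# Conjugacy classes of S3: the identity, the 3 transpositions, the 2 three-cycles
class_sizes = [1, 3, 2]
group_order = sum(class_sizes)  # |S3| = 6

# Character table of S3 (rows: trivial, sign, 2-dimensional standard character)
chars = [
    [1,  1,  1],
    [1, -1,  1],
    [2,  0, -1],
]

def inner(phi, psi):
    """<phi, psi> = (1/|G|) * sum over g of phi(g) * conj(psi(g)),
    computed class-by-class; these characters are real, so no conjugation is needed."""
    return Fraction(sum(n * a * b for n, a, b in zip(class_sizes, phi, psi)),
                    group_order)

# The irreducible characters form an orthonormal set:
gram = [[inner(a, b) for b in chars] for a in chars]
```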
[ { "math_id": 0, "text": " \\sum_{g \\in G} f(g) g" }, { "math_id": 1, "text": " \\langle \\phi , \\psi \\rangle = \\frac{1}{|G|} \\sum_{g \\in G} \\phi(g) \\overline{\\psi(g)} " }, { "math_id": 2, "text": " \\langle \\phi, \\psi \\rangle = \\int_G \\phi(t) \\overline{\\psi(t)}\\, dt. " } ]
https://en.wikipedia.org/wiki?curid=1001329
1001361
Semisimple module
Direct sum of irreducible modules In mathematics, especially in the area of abstract algebra known as module theory, a semisimple module or completely reducible module is a type of module that can be understood easily from its parts. A ring that is a semisimple module over itself is known as an Artinian semisimple ring. Some important rings, such as group rings of finite groups over fields of characteristic zero, are semisimple rings. An Artinian ring is initially understood via its largest semisimple quotient. The structure of Artinian semisimple rings is well understood by the Artin–Wedderburn theorem, which exhibits these rings as finite direct products of matrix rings. For a group-theory analog of the same notion, see "Semisimple representation". Definition. A module over a (not necessarily commutative) ring is said to be semisimple (or completely reducible) if it is the direct sum of simple (irreducible) submodules. For a module "M", the following are equivalent: "M" is a direct sum of simple modules; "M" is the sum of its simple submodules; every submodule of "M" is a direct summand of "M". The most basic example of a semisimple module is a module over a field, i.e., a vector space. On the other hand, the ring Z of integers is not a semisimple module over itself, since the submodule 2Z is not a direct summand. Semisimple is stronger than completely decomposable, which is a direct sum of indecomposable submodules. Let "A" be an algebra over a field "K". Then a left module "M" over "A" is said to be absolutely semisimple if, for any field extension "F" of "K", "F" ⊗"K" "M" is a semisimple module over "F" ⊗"K" "A". Semisimple rings. A ring is said to be (left-)semisimple if it is semisimple as a left module over itself. Surprisingly, a left-semisimple ring is also right-semisimple and vice versa. The left/right distinction is therefore unnecessary, and one can speak of semisimple rings without ambiguity.
A semisimple ring may be characterized in terms of homological algebra: namely, a ring "R" is semisimple if and only if any short exact sequence of left (or right) "R"-modules splits. That is, for a short exact sequence formula_0 there exists "s" : "C" → "B" such that the composition "g" ∘ "s" : "C" → "C" is the identity. The map "s" is known as a section. From this it follows that formula_1 or in more exact terms formula_2 In particular, any module over a semisimple ring is injective and projective. Since "projective" implies "flat", a semisimple ring is a von Neumann regular ring. Semisimple rings are of particular interest to algebraists. For example, if the base ring "R" is semisimple, then all "R"-modules are automatically semisimple. Furthermore, every simple (left) "R"-module is isomorphic to a minimal left ideal of "R", that is, "R" is a left Kasch ring. Semisimple rings are both Artinian and Noetherian. From the above properties, a ring is semisimple if and only if it is Artinian and its Jacobson radical is zero. If an Artinian semisimple ring contains a field as a central subring, it is called a semisimple algebra. Simple rings. One should beware that despite the terminology, "not all simple rings are semisimple". The problem is that the ring may be "too big", that is, not (left/right) Artinian. In fact, if "R" is a simple ring with a minimal left/right ideal, then "R" is semisimple. Classic examples of simple, but not semisimple, rings are the Weyl algebras, such as the Q-algebra formula_3 which is a simple noncommutative domain. These and many other nice examples are discussed in more detail in several noncommutative ring theory texts, including chapter 3 of Lam's text, in which they are described as nonartinian simple rings. The module theory for the Weyl algebras is well studied and differs significantly from that of semisimple rings. Jacobson semisimple.
A ring is called "Jacobson semisimple" (or "J-semisimple" or "semiprimitive") if the intersection of the maximal left ideals is zero, that is, if the Jacobson radical is zero. Every ring that is semisimple as a module over itself has zero Jacobson radical, but not every ring with zero Jacobson radical is semisimple as a module over itself. A J-semisimple ring is semisimple if and only if it is an artinian ring, so semisimple rings are often called "artinian semisimple rings" to avoid confusion. For example, the ring of integers, Z, is J-semisimple, but not artinian semisimple. Citations. References.
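The Artin–Wedderburn decomposition can be illustrated numerically in a tiny case. A minimal sketch, using the group algebra C[Z/3] as an assumed example (the group, the matrices, and all names are illustrative, not content from the article): by Maschke's theorem C[Z/3] is semisimple, and its decomposition is C × C × C, three one-dimensional simple summands, which appear as the eigenspaces of the regular representation of a generator.

```python
import numpy as np

# Regular representation of a generator of Z/3: the cyclic shift matrix.
# Its three eigenvalues are the cube roots of unity, one for each
# one-dimensional simple summand of C[Z/3] ~ C x C x C.
g = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=complex)

eigvals = np.linalg.eigvals(g)

print(np.allclose(eigvals ** 3, 1))                           # each eigenvalue is a cube root of 1
print(np.allclose(np.linalg.matrix_power(g, 3), np.eye(3)))   # g has order 3
```

Since the three eigenvalues are distinct, the regular representation is diagonalizable, exhibiting C[Z/3] as a direct sum of simple submodules.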
[ { "math_id": 0, "text": "0 \\to A \\xrightarrow{f} B \\xrightarrow{g} C \\to 0 " }, { "math_id": 1, "text": "B \\cong A \\oplus C" }, { "math_id": 2, "text": "B \\cong f(A) \\oplus s(C)." }, { "math_id": 3, "text": " A=\\mathbf{Q}{\\left[x,y\\right]}/\\langle xy-yx-1\\rangle\\ ," } ]
https://en.wikipedia.org/wiki?curid=1001361
10013925
Multiple inert gas elimination technique
Medical technique The multiple inert gas elimination technique (MIGET) is a medical technique used mainly in pulmonology that involves measuring the concentrations of various infused, inert gases in mixed venous blood, arterial blood, and expired gas of a subject. The technique quantifies true shunt, physiological dead space ventilation, ventilation versus blood flow (VA/Q) ratios, and diffusion limitation. Background. Hypoxemia is generally attributed to one of four processes: hypoventilation, shunt (right to left), diffusion limitation, and ventilation/perfusion (VA/Q) inequality. Moreover, there are also "extrapulmonary" factors that can contribute to fluctuations in arterial PO2. There are several measures of hypoxemia that can be assessed, but there are various limitations associated with each. It was for this reason that the MIGET was developed, to overcome the shortcomings of previous methods. Theoretical basis. Steady-state gas exchange in the lungs obeys the principles of conservation of mass. This leads to the ventilation/perfusion equation for oxygen: formula_0 and for carbon dioxide: formula_1 where Cc' and Cv denote end-capillary and mixed venous contents, and PI and PA denote inspired and alveolar partial pressures. For the purposes of utilizing the MIGET, the equations have been generalized for an inert gas (IG): formula_2 where PvIG, Pc'IG, and PAIG are the mixed venous, end-capillary, and alveolar partial pressures of the inert gas. Assuming diffusion equilibration is complete for the inert gas, dropping the subscript IG, and substituting the blood-gas partition coefficient (λ) renders: formula_3 Rearranging: formula_4 where λ is the blood-gas partition coefficient of the gas. This equation is the foundation for the MIGET, and it demonstrates that the fraction of inert gas not eliminated from the blood via the lung is a function of the partition coefficient and the VA/Q ratio. This equation operates under the presumption that the lung is perfectly homogeneous. In this model, retention (R) is measured as the ratio of arterial to mixed venous partial pressure. Stated mathematically: formula_5 From this equation, we can measure the levels of each inert gas retained in the blood.
The relationship between retention (R) and VA/Q can be summarized as follows: as VA/Q for a given λ increases, R decreases; this relationship between VA/Q and R is most pronounced at values of VA/Q within a factor of ten of the gas's λ. Beyond this, however, it is possible to measure the concentrations of the inert gases in the expired gas from the subject. The ratio of the mixed expired concentration to the mixed venous concentration has been termed excretion (E) and describes the ventilation to regions of varying VA/Q. When taken together: formula_6 where VIG is the rate of elimination of the inert gas, VE is the mixed expired (minute) ventilation, and QT is the total blood flow (cardiac output). When observing a collection of alveoli in which PO2 and PCO2 are uniform, local alveolar ventilation and local blood flow define VA/Q: formula_7 From these equations it can be deduced that knowledge of either retention or excretion implies knowledge of the other. Moreover, a similar understanding exists for the relationship between the distribution of blood flow and the distribution of ventilation. Limitations. The data produced by the MIGET is an approximation of the distribution of VA/Q ratios across the entire lung. It has been estimated that nearly 100,000 gas exchange units exist in the human lung, which would give a theoretical maximum of as many as 100,000 VA/Q compartments. References.
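The retention relation R = λ / (λ + VA/Q) lends itself to a quick numerical illustration. A minimal sketch, where the two partition coefficients are placeholder values chosen for contrast, not values taken from the article:

```python
# Retention of an infused inert gas in blood leaving a homogeneous lung unit,
# R = lambda / (lambda + VA/Q).  lambda = blood-gas partition coefficient.
def retention(lam, va_q):
    """Fraction of the gas retained (arterial / mixed venous partial pressure)."""
    return lam / (lam + va_q)

low_lambda, high_lambda = 0.01, 10.0   # assumed low- and high-solubility gases
for va_q in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(va_q, retention(low_lambda, va_q), retention(high_lambda, va_q))
```

The printout shows the behavior described above: for a given λ, retention falls as VA/Q rises, and R passes through 0.5 exactly where VA/Q equals λ, so each gas is most informative about VA/Q values near its own partition coefficient.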
[ { "math_id": 0, "text": "V_A/Q=8.63 \\times \\frac{C_{c'}\\ce{O2} - C_v\\ce{O2}}{P_I\\ce{O2} - P_A\\ce{O2}}" }, { "math_id": 1, "text": "V_A/Q=8.63 \\times \\frac{C_v\\ce{CO2} - C_{c'}\\ce{CO2}}{P_A\\ce{CO2}}" }, { "math_id": 2, "text": "V_A/Q = 8.63 \\times \\ce{solubility} \\times \\frac {P_V\\ce{IG} - P_{C'}\\ce{IG}}{P_A\\ce{IG}}" }, { "math_id": 3, "text": " V_A/Q = {\\lambda} \\times \\frac{P_v - P_A}{P_A} " }, { "math_id": 4, "text": "P_A/P_v = \\frac{{\\lambda}}{{\\lambda} + V_A/Q} = P_{c'}/P_v " }, { "math_id": 5, "text": "R = \\frac{\\lambda}{\\lambda+V_A/Q}" }, { "math_id": 6, "text": "V_{IG} = V_E \\times E = \\lambda \\times Q_T \\times [1-R]" }, { "math_id": 7, "text": "V_A = Q \\times V_A/Q" } ]
https://en.wikipedia.org/wiki?curid=10013925
10014466
Copper cable certification
Cable testing regimen In copper twisted pair wire networks, copper cable certification is achieved through a thorough series of tests in accordance with Telecommunications Industry Association (TIA) or International Organization for Standardization (ISO) standards. These tests are done using a certification-testing tool, which provides "pass" or "fail" information. While certification can be performed by the owner of the network, certification is primarily done by datacom contractors. It is this certification that allows the contractors to warranty their work. Need for certification. Installers who need to prove to the network owner that the installation has been done correctly and meets TIA or ISO standards need to certify their work. Network owners who want to guarantee that the infrastructure is capable of handling a certain application (e.g. Voice over Internet Protocol) will use a tester to certify the network infrastructure. In some cases, these testers are used to pinpoint specific problems. Certification tests are vital if there is a discrepancy between the installer and network owner after an installation has been performed. Standards. The performance tests and their procedures have been defined in the ANSI/TIA-568.2 standard and the ISO/IEC 11801 standard. The TIA standard defines performance in categories (Cat 3, Cat 5e, Cat 6, Cat 6A, and Cat 8) and the ISO defines classes (Class C, D, E, EA, F and FA). These standards define the procedure to certify that an installation meets performance criteria in a given category or class.
The significance of each category or class is the limit values against which the Pass/Fail and frequency ranges are measured: Cat 3 and Class C (no longer used) test and define communication with 16 MHz bandwidth, Cat 5e and Class D with 100 MHz bandwidth, Cat 6 and Class E up to 250 MHz, Cat 6A and Class EA up to 500 MHz, Cat 7 and Class F up to 600 MHz, Cat 7A and Class FA through 1000 MHz, and Cat 8, Class I, and Class II through 2000 MHz. The standards also define that data from each test result must be collected and stored in either print or electronic format for future inspection. Tests. Wiremap. The wiremap test is used to identify physical installation errors: improper pin termination, shorts between any two or more wires, continuity to the remote end, split pairs, crossed pairs, reversed pairs, and any other mis-wiring. Propagation delay. The propagation delay test measures the time it takes for a signal to be sent from one end and received by the other end. Delay skew. The delay skew test is used to find the difference in propagation delay between the fastest and slowest set of wire pairs. An ideal skew is between 25 and 50 nanoseconds over a 100-meter cable. The lower this skew the better; less than 25 ns is excellent, but 45 to 50 ns is marginal. (Traveling between 50% and 80% of the speed of light, an electronic wave requires between 417 and 667 ns to traverse a 100-meter cable.) Cable length. The cable length test verifies that the copper cable from the transmitter to receiver does not exceed the maximum recommended distance of 100 meters in a 10BASE-T/100BASE-TX/1000BASE-T network. Insertion loss. Insertion loss, also referred to as attenuation, refers to the loss of signal strength at the far end of a line compared to the signal that was introduced into the line.
This loss is due to the electrical resistance of the copper cable, the loss of energy through the cable insulation, and impedance mismatches introduced at the connectors. Insertion loss is usually expressed in decibels (dB). Insertion loss increases with distance and frequency. For every roughly 3 dB of loss, signal power is reduced by a factor of formula_0 and signal amplitude is reduced by a factor of formula_1. Return loss. Return loss is the measurement (in dB) of the amount of signal that is reflected back toward the transmitter. The reflection of the signal is caused by the variations of impedance in the connectors and cable and is usually attributed to a poorly terminated wire. The greater the variation in impedance, the greater the return loss reading. If three pairs of wire pass by a substantial margin, but the fourth pair barely passes, it usually is an indication of a bad crimp or bad connection at the RJ45 plug. Return loss usually affects a link not through significant loss of signal strength, but through signal jitter. Near-end crosstalk (NEXT). In twisted-pair cabling, near-end crosstalk (NEXT) is a measure that describes the effect caused by a signal from one wire pair coupling into another wire pair and interfering with the signal therein. It is the difference, expressed in dB, between the amplitude of a transmitted signal and the amplitude of the signal coupled into another cable pair, "at the signal-source end" of a cable. A higher value is desirable, as it indicates that less of the transmitted signal is coupled into the victim wire pair. NEXT is measured 30 meters (about 98 feet) from the injector/generator. Higher near-end crosstalk values correspond to higher overall circuit performance. Low NEXT values on a UTP LAN used with older signaling standards (IEEE 802.3 and earlier) are particularly detrimental. Excessive near-end crosstalk can be an indication of improper termination. Power sum NEXT (PSNEXT).
Power sum NEXT (PSNEXT) is the sum of NEXT values from 3 wire pairs as they affect the other wire pair. The combined effect of NEXT can be very detrimental to the signal. Equal-level far-end crosstalk (ELFEXT). The equal-level far-end crosstalk (ELFEXT) test measures far-end crosstalk (FEXT). FEXT is very similar to NEXT, but happens at the receiver side of the connection. Due to attenuation on the line, the signal causing the crosstalk diminishes as it gets further away from the transmitter. Because of this, FEXT is usually less detrimental to a signal than NEXT, but it is still significant. Recently the designation was changed from ELFEXT to ACR-F (far-end ACR). Power sum ELFEXT (PSELFEXT). Power sum ELFEXT (PSELFEXT) is the sum of FEXT values from 3 wire pairs as they affect the other wire pair, minus the insertion loss of the channel. Recently the designation was changed from PSELFEXT to PSACR-F (far-end ACR). Attenuation-to-crosstalk ratio (ACR). Attenuation-to-crosstalk ratio (ACR) is the difference between the attenuated signal and the NEXT, and is measured in decibels (dB). The ACR indicates how much stronger the attenuated signal is than the crosstalk at the destination (receiving) end of a communications circuit. The ACR figure must be at least several decibels for proper performance. If the ACR is not large enough, errors will be frequent. In many cases, even a small improvement in ACR can cause a dramatic reduction in the bit error rate. Sometimes it may be necessary to switch from unshielded twisted pair (UTP) cable to shielded twisted pair (STP) in order to increase the ACR. Power sum ACR (PSACR). Power sum ACR (PSACR) is computed in the same way as ACR, but using the PSNEXT value in the calculation rather than NEXT. DC loop resistance. DC loop resistance measures the total resistance through one wire pair looped at one end of the connection. This will increase with the length of the cable.
DC resistance usually has less effect on a signal than insertion loss, but plays a major role if power over Ethernet is required. Also measured in ohms is the characteristic impedance of the cable, which is independent of the cable length.
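The power and amplitude factors quoted in the insertion loss section follow directly from the definition of the decibel; a minimal sketch (the function names are illustrative):

```python
# Decibel relations for insertion loss: a loss of L dB reduces signal power
# by a factor of 10**(L/10) and signal amplitude by a factor of 10**(L/20),
# so roughly 3 dB halves the power and divides the amplitude by ~sqrt(2).
def power_ratio(loss_db):
    """Factor by which signal power is reduced for a given loss in dB."""
    return 10 ** (loss_db / 10)

def amplitude_ratio(loss_db):
    """Factor by which signal amplitude is reduced for a given loss in dB."""
    return 10 ** (loss_db / 20)

print(power_ratio(3))      # ~2.0
print(amplitude_ratio(3))  # ~1.41, i.e. ~sqrt(2)
```

The "roughly" in the text is visible here: 3 dB corresponds to a power factor of 10^0.3 ≈ 1.995, not exactly 2.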
[ { "math_id": 0, "text": "2" }, { "math_id": 1, "text": "\\sqrt 2" } ]
https://en.wikipedia.org/wiki?curid=10014466
1001490
Convex conjugate
Generalization of the Legendre transformation In mathematics and mathematical optimization, the convex conjugate of a function is a generalization of the Legendre transformation which applies to non-convex functions. It is also known as Legendre–Fenchel transformation, Fenchel transformation, or Fenchel conjugate (after Adrien-Marie Legendre and Werner Fenchel). The convex conjugate is widely used for constructing the dual problem in optimization theory, thus generalizing Lagrangian duality. Definition. Let formula_0 be a real topological vector space and let formula_1 be the dual space to formula_0. Denote by formula_2 the canonical dual pairing, which is defined by formula_3 For a function formula_4 taking values on the extended real number line, its convex conjugate is the function formula_5 whose value at formula_6 is defined to be the supremum: formula_7 or, equivalently, in terms of the infimum: formula_8 This definition can be interpreted as an encoding of the convex hull of the function's epigraph in terms of its supporting hyperplanes. Examples. The convex conjugate of an affine function formula_9 is formula_10 The convex conjugate of a power function formula_11 is formula_12 The convex conjugate of the absolute value function formula_13 is formula_14 The convex conjugate of the exponential function formula_15 is formula_16 The convex conjugate and Legendre transform of the exponential function agree except that the domain of the convex conjugate is strictly larger, as the Legendre transform is only defined for positive real numbers. Connection with expected shortfall (average value at risk). Let "F" denote a cumulative distribution function of a random variable "X". Then (integrating by parts), formula_17 has the convex conjugate formula_18 Ordering. A particular interpretation has the transform formula_19 as this is a nondecreasing rearrangement of the initial function "f"; in particular, formula_20 for "f" nondecreasing. Properties. The convex conjugate of a closed convex function is again a closed convex function. The convex conjugate of a polyhedral convex function (a convex function with polyhedral epigraph) is again a polyhedral convex function. Order reversing.
Declare that formula_21 if and only if formula_22 for all formula_23 Then convex-conjugation is order-reversing, which by definition means that if formula_21 then formula_24 For a family of functions formula_25 it follows from the fact that supremums may be interchanged that formula_26 and from the max–min inequality that formula_27 Biconjugate. The convex conjugate of a function is always lower semi-continuous. The biconjugate formula_28 (the convex conjugate of the convex conjugate) is also the closed convex hull, i.e. the largest lower semi-continuous convex function with formula_29 For proper functions formula_30 formula_31 if and only if formula_32 is convex and lower semi-continuous, by the Fenchel–Moreau theorem. Fenchel's inequality. For any function f and its convex conjugate "f" *, Fenchel's inequality (also known as the Fenchel–Young inequality) holds for every formula_33 and formula_34: formula_35 Furthermore, the equality holds only when formula_36. The proof follows from the definition of convex conjugate: formula_37 Convexity. For two functions formula_38 and formula_39 and a number formula_40 the convexity relation formula_41 holds. The formula_42 operation is a convex mapping itself. Infimal convolution. The infimal convolution (or epi-sum) of two functions formula_32 and formula_43 is defined as formula_44 Let formula_45 be proper, convex and lower semicontinuous functions on formula_46 Then the infimal convolution is convex and lower semicontinuous (but not necessarily proper), and satisfies formula_47 The infimal convolution of two functions has a geometric interpretation: The (strict) epigraph of the infimal convolution of two functions is the Minkowski sum of the (strict) epigraphs of those functions. Maximizing argument. 
If the function formula_32 is differentiable, then its derivative is the maximizing argument in the computation of the convex conjugate: formula_48 and formula_49 hence formula_50 formula_51 and moreover formula_52 formula_53 Scaling properties. If for some formula_54 formula_55, then formula_56 Behavior under linear transformations. Let formula_57 be a bounded linear operator. For any convex function formula_32 on formula_58 formula_59 where formula_60 is the preimage of formula_32 with respect to formula_61 and formula_62 is the adjoint operator of formula_63 A closed convex function formula_32 is symmetric with respect to a given set formula_64 of orthogonal linear transformations, formula_65 for all formula_66 and all formula_67 if and only if its convex conjugate formula_68 is symmetric with respect to formula_69 Table of selected convex conjugates. The following table provides Legendre transforms for many common functions as well as a few useful properties.
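The defining supremum can be approximated numerically. A minimal grid-search sketch (the function names and the example are assumptions, not from the article), checking the classical fact that f(x) = x²/2 is self-conjugate, the power-function pair with p = q = 2:

```python
# Grid-search approximation of the convex conjugate f*(p) = sup_x (p*x - f(x)).
def conjugate(f, p, xs):
    """Approximate f*(p) by maximizing p*x - f(x) over the sample points xs."""
    return max(p * x - f(x) for x in xs)

def f(x):
    return x * x / 2

xs = [i / 100 for i in range(-500, 501)]   # grid on [-5, 5], step 0.01

for p in (-2.0, 0.0, 1.0, 3.0):
    print(p, conjugate(f, p, xs))          # should approximate p**2 / 2
```

The maximizer here is x = p, matching the "maximizing argument" property above (f'(x) = x), and the Fenchel inequality p·x ≤ f(x) + f*(p) holds by construction at every grid point.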
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "X^{*}" }, { "math_id": 2, "text": "\\langle \\cdot , \\cdot \\rangle : X^{*} \\times X \\to \\mathbb{R}" }, { "math_id": 3, "text": "\\left( x^*, x \\right) \\mapsto x^* (x)." }, { "math_id": 4, "text": "f : X \\to \\mathbb{R} \\cup \\{ - \\infty, + \\infty \\}" }, { "math_id": 5, "text": "f^{*} : X^{*} \\to \\mathbb{R} \\cup \\{ - \\infty, + \\infty \\}" }, { "math_id": 6, "text": "x^* \\in X^{*}" }, { "math_id": 7, "text": "f^{*} \\left( x^{*} \\right) := \\sup \\left\\{ \\left\\langle x^{*}, x \\right\\rangle - f (x) ~\\colon~ x \\in X \\right\\}," }, { "math_id": 8, "text": "f^{*} \\left( x^{*} \\right) := - \\inf \\left\\{ f (x) - \\left\\langle x^{*}, x \\right\\rangle ~\\colon~ x \\in X \\right\\}." }, { "math_id": 9, "text": " f(x) = \\left\\langle a, x \\right\\rangle - b" }, { "math_id": 10, "text": " f^{*}\\left(x^{*} \\right)\n= \\begin{cases} b, & x^{*} = a\n \\\\ +\\infty, & x^{*} \\ne a.\n \\end{cases}\n" }, { "math_id": 11, "text": " f(x) = \\frac{1}{p}|x|^p, 1 < p < \\infty " }, { "math_id": 12, "text": "\nf^{*}\\left(x^{*} \\right) = \\frac{1}{q}|x^{*}|^q, 1<q<\\infty, \\text{where} \\tfrac{1}{p} + \\tfrac{1}{q} = 1." 
}, { "math_id": 13, "text": "f(x) = \\left| x \\right|" }, { "math_id": 14, "text": "\nf^{*}\\left(x^{*} \\right)\n= \\begin{cases} 0, & \\left|x^{*} \\right| \\le 1\n \\\\ \\infty, & \\left|x^{*} \\right| > 1.\n \\end{cases}\n" }, { "math_id": 15, "text": "f(x)= e^x" }, { "math_id": 16, "text": "\nf^{*}\\left(x^{*} \\right)\n= \\begin{cases} x^{*} \\ln x^{*} - x^{*} , & x^{*} > 0\n \\\\ 0 , & x^{*} = 0\n \\\\ \\infty , & x^{*} < 0.\n \\end{cases}\n" }, { "math_id": 17, "text": "f(x):= \\int_{-\\infty}^x F(u) \\, du = \\operatorname{E}\\left[\\max(0,x-X)\\right] = x-\\operatorname{E} \\left[\\min(x,X)\\right]" }, { "math_id": 18, "text": "f^{*}(p)= \\int_0^p F^{-1}(q) \\, dq = (p-1)F^{-1}(p)+\\operatorname{E}\\left[\\min(F^{-1}(p),X)\\right] \n = p F^{-1}(p)-\\operatorname{E}\\left[\\max(0,F^{-1}(p)-X)\\right]." }, { "math_id": 19, "text": "f^\\text{inc}(x):= \\arg \\sup_t t\\cdot x-\\int_0^1 \\max\\{t-f(u),0\\} \\, du," }, { "math_id": 20, "text": "f^\\text{inc}= f" }, { "math_id": 21, "text": "f \\le g" }, { "math_id": 22, "text": "f(x) \\le g(x)" }, { "math_id": 23, "text": "x." }, { "math_id": 24, "text": "f^* \\ge g^*." }, { "math_id": 25, "text": "\\left(f_\\alpha\\right)_\\alpha" }, { "math_id": 26, "text": "\\left(\\inf_\\alpha f_\\alpha\\right)^*(x^*) = \\sup_\\alpha f_\\alpha^*(x^*)," }, { "math_id": 27, "text": "\\left(\\sup_\\alpha f_\\alpha\\right)^*(x^*) \\le \\inf_\\alpha f_\\alpha^*(x^*)." }, { "math_id": 28, "text": "f^{**}" }, { "math_id": 29, "text": "f^{**} \\le f." }, { "math_id": 30, "text": "f," }, { "math_id": 31, "text": "f = f^{**}" }, { "math_id": 32, "text": "f" }, { "math_id": 33, "text": "x \\in X" }, { "math_id": 34, "text": "p \\in X^{*}" }, { "math_id": 35, "text": "\\left\\langle p,x \\right\\rangle \\le f(x) + f^*(p)." }, { "math_id": 36, "text": "p \\in \\partial f(x)" }, { "math_id": 37, "text": "f^*(p) = \\sup_{\\tilde x} \\left\\{ \\langle p,\\tilde x \\rangle - f(\\tilde x) \\right\\} \\ge \\langle p,x \\rangle - f(x)." 
}, { "math_id": 38, "text": "f_0" }, { "math_id": 39, "text": "f_1" }, { "math_id": 40, "text": "0 \\le \\lambda \\le 1" }, { "math_id": 41, "text": "\\left((1-\\lambda) f_0 + \\lambda f_1\\right)^{*} \\le (1-\\lambda) f_0^{*} + \\lambda f_1^{*}" }, { "math_id": 42, "text": "{*}" }, { "math_id": 43, "text": "g" }, { "math_id": 44, "text": "\\left( f \\operatorname{\\Box} g \\right)(x) = \\inf \\left\\{ f(x-y) + g(y) \\mid y \\in \\mathbb{R}^n \\right\\}." }, { "math_id": 45, "text": "f_1, \\ldots, f_{m}" }, { "math_id": 46, "text": "\\mathbb{R}^{n}." }, { "math_id": 47, "text": "\\left( f_1 \\operatorname{\\Box} \\cdots \\operatorname{\\Box} f_m \\right)^{*} = f_1^{*} + \\cdots + f_m^{*}." }, { "math_id": 48, "text": "f^\\prime(x) = x^*(x):= \\arg\\sup_{x^{*}} {\\langle x, x^{*}\\rangle} -f^{*}\\left( x^{*} \\right)" }, { "math_id": 49, "text": "f^{{*}\\prime}\\left( x^{*} \\right) = x\\left( x^{*} \\right):= \\arg\\sup_x {\\langle x, x^{*}\\rangle} - f(x);" }, { "math_id": 50, "text": "x = \\nabla f^{{*}}\\left( \\nabla f(x) \\right)," }, { "math_id": 51, "text": "x^{*} = \\nabla f\\left( \\nabla f^{{*}}\\left( x^{*} \\right)\\right)," }, { "math_id": 52, "text": "f^{\\prime\\prime}(x) \\cdot f^{{*}\\prime\\prime}\\left( x^{*}(x) \\right) = 1," }, { "math_id": 53, "text": "f^{{*}\\prime\\prime}\\left( x^{*} \\right) \\cdot f^{\\prime\\prime}\\left( x(x^{*}) \\right) = 1." }, { "math_id": 54, "text": "\\gamma>0," }, { "math_id": 55, "text": "g(x) = \\alpha + \\beta x + \\gamma \\cdot f\\left( \\lambda x + \\delta \\right)" }, { "math_id": 56, "text": "g^{*}\\left( x^{*} \\right)= - \\alpha - \\delta\\frac{x^{*}-\\beta} \\lambda + \\gamma \\cdot f^{*}\\left(\\frac {x^{*}-\\beta}{\\lambda \\gamma}\\right)." 
}, { "math_id": 57, "text": "A : X \\to Y" }, { "math_id": 58, "text": "X," }, { "math_id": 59, "text": "\\left(A f\\right)^{*} = f^{*} A^{*}" }, { "math_id": 60, "text": "(A f)(y) = \\inf\\{ f(x) : x \\in X , A x = y \\}" }, { "math_id": 61, "text": "A" }, { "math_id": 62, "text": "A^{*}" }, { "math_id": 63, "text": "A." }, { "math_id": 64, "text": "G" }, { "math_id": 65, "text": "f(A x) = f(x)" }, { "math_id": 66, "text": "x" }, { "math_id": 67, "text": "A \\in G" }, { "math_id": 68, "text": "f^{*}" }, { "math_id": 69, "text": "G." } ]
https://en.wikipedia.org/wiki?curid=1001490
10016360
Excellent ring
In commutative algebra, a quasi-excellent ring is a Noetherian commutative ring that behaves well with respect to the operation of completion, and is called an excellent ring if it is also universally catenary. Excellent rings are one answer to the problem of finding a natural class of "well-behaved" rings containing most of the rings that occur in number theory and algebraic geometry. At one time it seemed that the class of Noetherian rings might be an answer to this problem, but Masayoshi Nagata and others found several strange counterexamples showing that in general Noetherian rings need not be well-behaved: for example, a normal Noetherian local ring need not be analytically normal. The class of excellent rings was defined by Alexander Grothendieck (1965) as a candidate for such a class of well-behaved rings. Quasi-excellent rings are conjectured to be the base rings for which the problem of resolution of singularities can be solved; Hironaka showed this in characteristic 0, but the positive characteristic case is (as of 2024) still a major open problem. Essentially all Noetherian rings that occur naturally in algebraic geometry or number theory are excellent; in fact it is quite hard to construct examples of Noetherian rings that are not excellent. Definitions. The definition of excellent rings is quite involved, so we recall the definitions of the technical conditions it satisfies. Although it seems like a long list of conditions, most rings in practice are excellent, such as fields, polynomial rings, complete Noetherian rings, Dedekind domains of characteristic 0 (such as formula_0), and quotients and localizations of these rings. Recalled definitions. A Noetherian ring formula_1 containing a field formula_2 is called geometrically regular over formula_2 if for any finite extension formula_3 of formula_2 the ring formula_4 is regular. A homomorphism of rings formula_5 is called regular if it is flat and for every formula_6 the fiber formula_7 is geometrically regular over the residue field formula_8 of formula_9. A ring formula_1 is called a G-ring if it is Noetherian and its formal fibers are geometrically regular; that is, for any prime formula_9, the map formula_10 is regular in the above sense. Finally, a ring is J-2 if any finite type formula_1-algebra formula_11 is J-1, meaning the regular subscheme formula_12 is open. Definition of (quasi-)excellence. A ring formula_1 is called quasi-excellent if it is a G-ring and J-2 ring.
It is called excellent if it is quasi-excellent and universally catenary. In practice almost all Noetherian rings are universally catenary, so there is little difference between excellent and quasi-excellent rings. A scheme is called excellent or quasi-excellent if it has a cover by open affine subschemes with the same property, which implies that every open affine subscheme has this property. Properties. Because an excellent ring formula_1 is a G-ring, it is Noetherian by definition. Because it is universally catenary, every maximal chain of prime ideals has the same length. This is useful for studying the dimension theory of such rings because their dimension can be bounded by a fixed maximal chain. In practice, this means that infinite-dimensional Noetherian rings, which are constructed inductively from ever-longer maximal chains of prime ideals, cannot be excellent. Schemes. Given an excellent scheme formula_13 and a locally finite type morphism formula_14, the scheme formula_15 is excellent. Quasi-excellence. Any quasi-excellent ring is a Nagata ring. Any quasi-excellent reduced local ring is analytically reduced. Any quasi-excellent normal local ring is analytically normal. Examples. Excellent rings. Most naturally occurring commutative rings in number theory or algebraic geometry are excellent. In particular, fields, complete Noetherian local rings, Dedekind domains of characteristic 0 (such as formula_0), and quotients and localizations of these rings are excellent, as are finitely generated algebras over them, such as formula_16. A J-2 ring that is not a G-ring. Here is an example of a discrete valuation ring "A" of dimension 1 and characteristic "p" &gt; 0 which is J-2 but not a G-ring and so is not quasi-excellent. If "k" is any field of characteristic "p" with ["k" : "k""p"] = ∞ and "A" is the ring of power series Σ"a""i""x""i" such that ["k""p"("a"0, "a"1, ...) : "k""p"] is finite then the formal fibers of "A" are not all geometrically regular so "A" is not a G-ring. It is a J-2 ring as all Noetherian local rings of dimension at most 1 are J-2 rings. It is also universally catenary as it is a Dedekind domain.
Here "k""p" denotes the image of "k" under the Frobenius morphism "a" → "a""p". A G-ring that is not a J-2 ring. Here is an example of a ring that is a G-ring but not a J-2 ring and so not quasi-excellent. If "R" is the subring of the polynomial ring "k"["x"1,"x"2...] in infinitely many generators generated by the squares and cubes of all generators, and "S" is obtained from "R" by adjoining inverses to all elements not in any of the ideals generated by some "x""n", then "S" is a 1-dimensional Noetherian domain that is not a J-1 ring as "S" has a cusp singularity at every closed point, so the set of singular points is not closed, though it is a G-ring. This ring is also universally catenary, as its localization at every prime ideal is a quotient of a regular ring. A quasi-excellent ring that is not excellent. Nagata's example of a 2-dimensional Noetherian local ring that is catenary but not universally catenary is a G-ring, and is also a J-2 ring as any local G-ring is a J-2 ring . So it is a quasi-excellent catenary local ring that is not excellent. Resolution of singularities. Quasi-excellent rings are closely related to the problem of resolution of singularities, and this seems to have been Grothendieck's motivationpg 218 for defining them. Grothendieck (1965) observed that if it is possible to resolve singularities of all complete integral local Noetherian rings, then it is possible to resolve the singularities of all reduced quasi-excellent rings. Hironaka (1964) proved this for all complete integral Noetherian local rings over a field of characteristic 0, which implies his theorem that all singularities of excellent schemes over a field of characteristic 0 can be resolved. Conversely if it is possible to resolve all singularities of the spectra of all integral finite algebras over a Noetherian ring "R" then the ring "R" is quasi-excellent. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{Z}" }, { "math_id": 1, "text": "R" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "K" }, { "math_id": 4, "text": "R\\otimes_kK" }, { "math_id": 5, "text": "R \\to S" }, { "math_id": 6, "text": "\\mathfrak{p} \\in \\text{Spec}(R)" }, { "math_id": 7, "text": "S\\otimes_R\\kappa(\\mathfrak{p})" }, { "math_id": 8, "text": "\\kappa(\\mathfrak{p})" }, { "math_id": 9, "text": "\\mathfrak{p}" }, { "math_id": 10, "text": "R_\\mathfrak{p} \\to \\hat{R_\\mathfrak{p}}" }, { "math_id": 11, "text": "S" }, { "math_id": 12, "text": "\\text{Reg}(\\text{Spec}(S)) \\subset \\text{Spec}(S)" }, { "math_id": 13, "text": "X" }, { "math_id": 14, "text": "f:X'\\to X" }, { "math_id": 15, "text": "X'" }, { "math_id": 16, "text": "R[x_1,\\ldots, x_n]/(f_1,\\ldots,f_k)" } ]
https://en.wikipedia.org/wiki?curid=10016360
1002045
Émilie du Châtelet
French mathematician, physicist, and author (1706–1749) Gabrielle Émilie Le Tonnelier de Breteuil, Marquise du Châtelet (; 17 December 1706 – 10 September 1749) was a French natural philosopher and mathematician from the early 1730s until her death due to complications during childbirth in 1749. Her most recognized achievement is her translation of and commentary on Isaac Newton's 1687 book "Philosophiæ Naturalis Principia Mathematica" containing basic laws of physics. The translation, published posthumously in 1756, is still considered the standard French translation. Her commentary includes a contribution to Newtonian mechanics—the postulate of an additional conservation law for total energy, of which kinetic energy of motion is one element. This led her to conceptualize energy, and to derive its quantitative relationships to the mass and velocity of an object. Her philosophical magnum opus, "Institutions de Physique" (Paris, 1740, first edition; "Foundations of Physics"), circulated widely, generated heated debates, and was republished and translated into several other languages within two years of its original publication. She participated in the famous "vis viva" debate, concerning the best way to measure the force of a body and the best means of thinking about conservation principles. Posthumously, her ideas were heavily represented in the most famous text of the French Enlightenment, the "Encyclopédie" of Denis Diderot and Jean le Rond d'Alembert, first published shortly after du Châtelet's death. She is also known as the intellectual collaborator with and romantic partner of Voltaire. Numerous biographies, books and plays have been written about her life and work in the two centuries since her death. In the early 21st century, her life and ideas have generated renewed interest. Contribution to philosophy. 
In addition to producing famous translations of works by authors such as Bernard Mandeville and Isaac Newton, du Châtelet wrote a number of significant philosophical essays, letters and books that were well known in her time. Because of her well-known collaboration and romantic involvement with Voltaire, which spanned much of her adult life, du Châtelet has been known as the romantic partner of and collaborator with her famous intellectual companion. Despite her notable achievements and intelligence, her accomplishments have often been subsumed under his and, as a result, even today she is often mentioned only within the context of Voltaire's life and work during the period of the early French Enlightenment. In her own right, she was a strong and influential philosopher, with the ideas of her works ranging from individual empowerment to issues of the social contract. Recently, however, professional philosophers and historians have transformed the reputation of du Châtelet. Historical evidence indicates that her work had a very significant influence on the philosophical and scientific conversations of the 1730s and 1740s – in fact, she was famous and was respected by the greatest thinkers of her time. Francesco Algarotti styled the dialogue of "Il Newtonianismo per le dame" based on conversations he observed between Du Châtelet and Voltaire in Cirey. Du Châtelet corresponded with renowned mathematicians such as Johann II Bernoulli and Leonhard Euler, early developers of calculus. She was also tutored by Bernoulli's prodigy students, Pierre Louis Moreau de Maupertuis and Alexis Claude Clairaut. Frederick the Great of Prussia, who re-founded the Academy of Sciences in Berlin, was her great admirer, and corresponded with both Voltaire and du Châtelet regularly. He introduced du Châtelet to Leibniz's philosophy by sending her the works of Christian Wolff, and du Châtelet sent him a copy of her "Institutions". 
Her works were published and republished in Paris, London, and Amsterdam; they were translated into German and Italian; and they were discussed in the most important scholarly journals of the era, including the "Memoires des Trévoux", the "Journal des Sçavans", and others. Perhaps most intriguingly, many of her ideas were represented in various sections of the "Encyclopédie" of Diderot and D'Alembert, and some of the articles in the "Encyclopédie" are a direct copy of her work (this is an active area of current academic research - the latest research can be found at Project Vox, a Duke University research initiative). Biography. Early life. Émilie du Châtelet was born on 17 December 1706 in Paris, the only girl amongst six children. Three brothers lived to adulthood: René-Alexandre (b. 1698), Charles-Auguste (b. 1701), and Elisabeth-Théodore (b. 1710). Her eldest brother, René-Alexandre, died in 1720, and the next brother, Charles-Auguste, died in 1731. However, her younger brother, Elisabeth-Théodore, lived to a successful old age, becoming an abbot and eventually a bishop. Two other brothers died very young. Du Châtelet also had a half-sister, Michelle, born in 1686 to her father and Anne Bellinzani, an intelligent woman who was interested in astronomy and married to an important Parisian official. Her father was Louis Nicolas le Tonnelier de Breteuil (1648–1728), a member of the lesser nobility. At the time of du Châtelet's birth, her father held the position of the Principal Secretary and Introducer of Ambassadors to King Louis XIV. He held a weekly "salon" on Thursdays, to which well-respected writers and scientists were invited. Her mother was Gabrielle Anne de Froullay (1670–1740), Baronne de Breteuil. Her paternal uncle was the cleric Claude Le Tonnelier de Breteuil (1644–1698). Among her cousins was the nobleman François Victor Le Tonnelier de Breteuil (1686–1743), son of her uncle François Le Tonnelier de Breteuil (1638–1705). 
Early education. Du Châtelet's education has been the subject of much speculation, but nothing is known with certainty. Among their acquaintances was Fontenelle, the perpetual secretary of the French Académie des Sciences. Du Châtelet's father Louis-Nicolas, recognizing her early brilliance, arranged for Fontenelle to visit and talk about astronomy with her when she was 10 years old. Her mother, Gabrielle-Anne de Froulay, had been brought up in a convent, which was at that time the predominant educational institution available to French girls and women. While some sources believe her mother did not approve of her intelligent daughter, or of her husband's encouragement of Émilie's intellectual curiosity, there are also other indications that her mother not only approved of du Châtelet's early education, but actually encouraged her to vigorously question stated fact. In either case, such encouragement would have been seen as unusual for parents of their time and status. When she was small, her father arranged training for her in physical activities such as fencing and riding, and as she grew older, he brought tutors to the house for her. As a result, by the age of twelve she was fluent in Latin, Italian, Greek and German; she was later to publish translations into French of Greek and Latin plays and philosophy. She received education in mathematics, literature, and science. Du Châtelet also liked to dance, was a passable performer on the harpsichord, sang opera, and was an amateur actress. As a teenager, short of money for books, she used her mathematical skills to devise highly successful strategies for gambling. Marriage. On 12 June 1725, she married the Marquis Florent-Claude du Chastellet-Lomont (1695–1765). Her marriage conferred the title of Marquise du Chastellet. Like many marriages among the nobility, theirs was arranged. 
As a wedding gift, her husband was made governor of Semur-en-Auxois in Burgundy by his father; the recently married couple moved there at the end of September 1725. Du Châtelet was eighteen at the time, her husband thirty-four. Children. Émilie du Châtelet and the Marquis Florent-Claude du Chastellet-Lomont had three children: Françoise-Gabrielle-Pauline (30 June 1726 – 1754), married in 1743 to Alfonso Carafa, Duca di Montenero (1713–1760), Louis Marie Florent (born 20 November 1727), and Victor-Esprit (born 11 April 1733). Victor-Esprit died as an infant in late summer 1734, likely the last Sunday in August. On 4 September 1749 Émilie du Châtelet gave birth to Stanislas-Adélaïde du Châtelet, daughter of Jean François de Saint-Lambert. She died as a toddler in Lunéville on 6 May 1751. Resumption of studies. After bearing three children, Émilie, Marquise du Châtelet, considered her marital responsibilities fulfilled and reached an agreement with her husband to live separate lives while still maintaining one household. In 1733, aged 26, du Châtelet resumed her mathematical studies. Initially, she was tutored in algebra and calculus by Moreau de Maupertuis, a member of the Academy of Sciences; although mathematics was not his forte, he had received a solid education from Johann Bernoulli, who also taught Leonhard Euler. However by 1735 du Châtelet had turned for her mathematical training to Alexis Clairaut, a mathematical prodigy known best for Clairaut's equation and Clairaut's theorem. Du Châtelet resourcefully sought some of France's best tutors and scholars to mentor her in mathematics. On one occasion at the Café Gradot, a place where men frequently gathered for intellectual discussion, she was politely ejected when she attempted to join one of her teachers. Undeterred, she returned and entered after having men's clothing made for her. Relationship with Voltaire. 
Du Châtelet may have met Voltaire in her childhood at one of her father's "salons"; Voltaire himself dates their meeting to 1729, when he returned from his exile in London. However, their friendship developed from May 1733 when she re-entered society after the birth of her third child. Du Châtelet invited Voltaire to live at her country house at Cirey in Haute-Marne, northeastern France, and he became her long-time companion. There she studied physics and mathematics, and published scientific articles and translations. To judge from Voltaire's letters to friends and their commentaries on each other's work, they lived together with great mutual liking and respect. As a literary rather than scientific person, Voltaire implicitly acknowledged her contributions to his 1738 "Elements of the Philosophy of Newton". This was through a poem dedicated to her at the beginning of the text and in the preface, where Voltaire praised her study and contributions. The book's chapters on optics show strong similarities with her own "Essai sur l'optique". She was able to contribute further to the campaign by a laudatory review in the "Journal des savants". Sharing a passion for science, Voltaire and du Châtelet collaborated scientifically. They set up a laboratory in du Châtelet's home in Lorraine. In a healthy competition, they both entered the 1738 Paris Academy prize contest on the nature of fire, since du Châtelet disagreed with Voltaire's essay. Although neither of them won, both essays received honourable mention and were published. She thus became the first woman to have a scientific paper published by the Academy. Social life after living with Voltaire. Du Châtelet's relationship with Voltaire caused her to give up most of her social life to become more involved in her study of mathematics with her teacher, Pierre-Louis Moreau de Maupertuis. He introduced the ideas of Isaac Newton to her. 
Letters written by du Châtelet explain how she felt during the transition from Parisian socialite to rural scholar, from "one life to the next." Final pregnancy and death. In May 1748, du Châtelet began an affair with the poet Jean François de Saint-Lambert and became pregnant. In a letter to a friend, she confided her fears that she would not survive her pregnancy. On the night of 4 September 1749 she gave birth to a daughter, Stanislas-Adélaïde. Du Châtelet died on 10 September 1749 at Château de Lunéville, from a pulmonary embolism. She was 42. Her infant daughter died 20 months later. Scientific research and publications. Criticizing Locke and the debate on "thinking matter". In her writings, du Châtelet criticized John Locke's philosophy. She emphasizes the necessity of the verification of knowledge through experience: "Locke's idea of the possibility of "thinking matter" is […] abstruse." Her critique on Locke originated in her commentary on Bernard de Mandeville's "The Fable of the Bees". She resolutely favored universal principles which precondition human knowledge and action, and maintained that this kind of law is innate. Du Châtelet claimed the necessity of a universal presupposition, because if there is no such beginning, all our knowledge is relative. In that way, Du Châtelet rejected Locke's aversion to innate ideas and prior principles. She also reversed Locke's negation of the principle of contradiction, which would constitute the basis of her methodic reflections in the "Institutions". On the contrary, she affirmed her arguments in favor of the necessity of prior and universal principles. "Two and two could then make as well 4 as 6 if prior principles did not exist." Pierre Louis Moreau de Maupertuis' and Julien Offray de La Mettrie's references to du Châtelet's deliberations on motion, free will, "thinking matter", numbers, and the way to do metaphysics are a sign of the importance of her reflections. 
She rebuts the claim to finding truth by using mathematical laws, and argues against Maupertuis. Warmth and brightness. In 1737 du Châtelet published a paper "Dissertation sur la nature et la propagation du feu", based upon her research into the science of fire. In it she speculated that there may be colors in other suns that are not found in the spectrum of sunlight on Earth. "Institutions de Physique". Her book "Institutions de Physique" ("Lessons in Physics") was published in 1740; it was presented as a review of new ideas in science and philosophy to be studied by her 13-year-old son, but it incorporated and sought to reconcile complex ideas from the leading thinkers of the time. The book and subsequent debate contributed to her becoming a member of the Academy of Sciences of the Institute of Bologna in 1746. Du Châtelet originally preferred anonymity in her role as the author, because she wished to conceal her sex. Ultimately, however, "Institutions" was convincing to salon-dwelling intellectuals in spite of the commonplace sexism. "Institutions" discussed, refuted, and synthesized many ideas of prominent mathematicians and physicists of the time. In particular, the text is famous for discussing ideas that originated with G.W. Leibniz and Christian Wolff, and for using the principle of sufficient reason often associated with their philosophical work. This main work is equally famous for providing a detailed discussion and evaluation of ideas that originated with Isaac Newton and his followers. That combination is more remarkable than it might seem now, since the ideas of Leibniz and Newton were regarded as fundamentally opposed to one another by most of the major philosophical figures of the 18th century. In chapter I, du Châtelet included a description of her rules of reasoning, based largely on Descartes’s principle of contradiction and Leibniz’s principle of sufficient reason. 
In chapter II, she applied these rules of reasoning to metaphysics, discussing God, space, time, and matter. In chapters III through VI, du Châtelet continued to discuss the role of God and his relationship to his creation. In chapter VII, she broke down the concept of matter into three parts: the macroscopic substance available to sensory perception, the atoms composing that macroscopic material, and an even smaller constituent unit similarly imperceptible to human senses. However, she carefully added that there was no way to know how many levels truly existed. The remainder of "Institutions" considered more metaphysics and classical mechanics. Du Châtelet discussed the concepts of space and time in a manner more consistent with modern relativity than her contemporaries. She described both space and time in the abstract, as representations of the relationships between coexistent bodies rather than physical substances. This included an acknowledgement that "absolute" place is an idealization and that "relative" place is the only real, measurable quantity. Du Châtelet also presented a thorough explanation of Newton’s laws of motion and their function on earth. Forces Vives. In 1741 du Châtelet published a book titled "Réponse de Madame la Marquise du Chastelet, a la lettre que M. de Mairan". D'Ortous de Mairan, secretary of the Academy of Sciences, had published a set of arguments addressed to her regarding the appropriate mathematical expression for "forces vives" ("living forces"). Du Châtelet presented a point-by-point rebuttal of de Mairan's arguments, causing him to withdraw from the controversy. Immanuel Kant's first publication in 1747, 'Thoughts on the True Estimation of Living Forces' ("Gedanken zur wahren Schätzung der lebendigen Kräfte"), focused on du Châtelet's pamphlet against the secretary of the French Academy of Sciences, Mairan. Kant's opponent, Johann Augustus Eberhard, accused Kant of taking ideas from du Châtelet. 
Interestingly, Kant, in his "Observations on the Feeling of the Beautiful and Sublime", wrote sexist critiques of learned women of the time including Mme Du Châtelet, stating: "A woman who has a head full of Greek, like Mme. Dacier, or who conducts disputations about mechanics, like the Marquise du Châtelet might as well also wear a beard; for that might perhaps better express the mien of depth for which they strive." Advocacy of kinetic energy. Although in the early 18th century the concepts of force and momentum had been long understood, the idea of energy as being transferable between different systems was still in its infancy, and would not be fully resolved until the 19th century. It is now accepted that the total mechanical momentum of a system is conserved and that none is lost to friction. Simply put, there is no 'momentum friction', and momentum cannot transfer between different forms, and in particular there is no 'potential momentum'. In the 20th century, Emmy Noether proved that such conservation laws hold for all problems in which the initial state is symmetric in generalized coordinates. Mechanical energy, by contrast, whether kinetic or potential, may be lost to another form, but the total energy is conserved in time. Du Châtelet's contribution was the hypothesis of the conservation of total energy, as distinct from momentum. In doing so, she became the first to elucidate the concept of energy as such, and to quantify its relationship to mass and velocity based on her own empirical studies. Inspired by the theories of Gottfried Leibniz, she repeated and publicized an experiment originally devised by Willem 's Gravesande in which heavy balls were dropped from different heights into a sheet of soft clay. 
Each ball's kinetic energy - as indicated by the quantity of material displaced - was shown to be proportional to the square of the velocity. She showed that if two balls were identical except for their mass, they would make the same size indentation in the clay if the quantity formula_0 (then called "vis viva") were the same for each ball. Newton's work assumed the exact conservation of only mechanical momentum. A broad range of mechanical problems in physics is soluble only if energy conservation is included. The collision and scattering of two point masses is one example. Leonhard Euler and Joseph-Louis Lagrange established a more formal framework for mechanics using the results of du Châtelet. Translation and commentary on Newton's "Principia". In 1749, the year of du Châtelet's death, she completed the work regarded as her outstanding achievement: her translation into French, with her commentary, of Newton's "Philosophiae Naturalis Principia Mathematica" (often referred to as simply the "Principia"), including her derivation of the notion of conservation of energy from its principles of mechanics. Despite modern misconceptions, Newton's work on his "Principia" was not perfect. Du Châtelet took on the task of not only translating his work from Latin to French, but adding important information to it as well. Her commentary was as essential to her contemporaries as her spreading of Newton's ideas. Du Châtelet's commentary was very extensive, comprising almost two-thirds of volume II of her edition. To undertake a formidable project such as this, du Châtelet prepared to translate the "Principia" by continuing her studies in analytic geometry, mastering calculus, and reading important works in experimental physics. It was her rigorous preparation that allowed her to add much more accurate information to her commentary, both from herself and other scientists she studied or worked with. 
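The 's Gravesande clay experiment described above can be sketched numerically. Under the simple illustrative assumption that the clay resists penetration with a constant force, the work-energy theorem makes the indentation depth proportional to the "vis viva" mv²; the resisting-force value below is a made-up number, not a historical measurement.

```python
def penetration_depth(m, v, resisting_force=100.0):
    """Depth (m) to which a ball of mass m (kg) moving at speed v (m/s)
    sinks into clay that opposes it with a constant force (N).
    Work-energy theorem: F * d = (1/2) * m * v**2, so the depth d is
    proportional to the 'vis viva' m * v**2."""
    return m * v**2 / (2 * resisting_force)

# Two balls with equal vis viva m*v**2 leave equal indentations,
# even though their momenta m*v differ (2.0 vs 4.0 here):
assert penetration_depth(m=1.0, v=2.0) == penetration_depth(m=4.0, v=1.0)

# Doubling the speed quadruples the depth: proportional to v**2, not v.
assert penetration_depth(1.0, 4.0) == 4 * penetration_depth(1.0, 2.0)
```

The asserts encode exactly what the clay indentations showed: the depth tracks mv², not the momentum mv.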
She was one of only 20 or so people in the 1700s who could understand such advanced math and apply the knowledge to other works. This helped du Châtelet greatly, not only with her work on the "Principia" but also in her other important works like the "Institutions de Physique". Du Châtelet made very important corrections in her translation that helped support Newton's theories about the universe. Newton, based on the theory of fluids, suggested that gravitational attraction would cause the poles of the earth to flatten, thus causing the earth to bulge outwards at the equator. In Clairaut's "Memoire", which confirmed Newton's hypothesis about the shape of the earth and gave more accurate approximations, Clairaut discovered a way to determine the shape of the other planets in the solar system. Du Châtelet used Clairaut's proposal that the planets had different densities in her commentary to correct Newton's belief that the earth and the other planets were made of homogeneous substances. Du Châtelet used the work of Daniel Bernoulli, a Swiss mathematician and physicist, to further explain Newton's theory of the tides. This proof depended upon the three-body problem which still confounded even the best mathematicians in 18th century Europe. Using Clairaut's hypothesis about the differing of the planets' densities, Bernoulli theorized that the moon was 70 times denser than Newton had believed. Du Châtelet used this discovery in her commentary of the "Principia", further supporting Newton's theory about the law of gravitation. Published ten years after her death, today du Châtelet's translation of the "Principia" is still the standard translation of the work into French, and remains the only complete rendition in that language. Her translation was so important that it was the only one in any language used by Newtonian expert I. Bernard Cohen to write his own English version of Newton's "Principia". 
Du Châtelet not only used the works of other great scientists to revise Newton's work, but she added her own thoughts and ideas as a scientist in her own right. Her contributions in the French translation made Newton and his ideas look even better in the scientific community and around the world, and recognition for this is owed to du Châtelet. This enormous project, along with her "Foundations of Physics", proved du Châtelet's abilities as a great mathematician. Her translation and commentary of the "Principia" contributed to the completion of the scientific revolution in France and to its acceptance in Europe. Illusions and happiness. In her "Discours sur le bonheur" ("Discourse on Happiness"), Émilie Du Châtelet argues that illusions are an instrument for happiness. To be happy, “one must have freed oneself of prejudice, one must be virtuous, healthy, have tastes and passions, and be susceptible to illusions...”. She mentions many things one needs for happiness, but emphasizes the necessity of illusions and that one should not dismiss all illusions. One should not abandon all illusions because they can bestow positivity and hope, which can ameliorate one's well-being. But Du Châtelet also warns against trusting all illusions, because many illusions are harmful to oneself. They may cause negativity through a false reality, which can cause disappointment or even limit one’s abilities. This lack of self-awareness from so many illusions may cause one to be self-deceived. She suggests a balance of trusting and rejecting illusions for happiness, so as not to become self-deceived. In "Foundation of Physics", Émilie Du Châtelet discusses avoiding error by applying two principles – the principle of contradiction and the principle of sufficient reason. Du Châtelet presumed that all knowledge is developed from more fundamental knowledge that relies on infallible knowledge. She states that this infallible fundamental knowledge is most reliable because it is self-explanatory and exists with a small number of conclusions. 
Her logic and principles are used for an arguably less flawed understanding of physics, metaphysics, and morals. The principle of contradiction essentially claims that the thing implying a contradiction is impossible. So, if one does not use the principle of contradiction, one will have errors including the failure to reject a contradiction-causing element. To get from the possible or impossible to the actual or real, the principle of sufficient reason was revised by Du Châtelet from Leibniz's concept and integrated into science. The principle of sufficient reason suggests that every true thing has a reason for being so, and things without a reason do not exist. In essence, every effect has a cause, so the element in question must have a reasonable cause to be so. In application, Émilie Du Châtelet proposed that being happy and immoral are mutually exclusive. According to Du Châtelet, this principle is embedded within the hearts of all individuals, and even wicked individuals have an undeniable consciousness of this contradiction that is grueling. It suggests one cannot be living a happy life while living immorally. So, her suggested happiness requires illusions with a virtuous life. These illusions are naturally given like passions and tastes, and cannot be created. Du Châtelet recommended we maintain the illusions we receive and work to not dismantle the trustworthy illusions, because we cannot get them back. In other words, true happiness is a blending of illusions and morality. If one merely attempts to be moral, one will not obtain the happiness one deeply seeks. If one just strives for the illusions, one will not get the happiness that is genuinely desired. One needs to endeavor in both illusions and morality to get the sincerest happiness. Other contributions. Development of financial derivatives. Du Châtelet lost the then-considerable sum of 84,000 francs—some of it borrowed—in one evening at the table at the Court of Fontainebleau, to card cheats. 
To raise the money to pay back her debts, she devised an ingenious financing arrangement similar to modern derivatives, whereby she paid tax collectors a fairly low sum for the right to their future earnings (they were allowed to keep a portion of the taxes they collected for the King), and promised to pay the court gamblers part of these future earnings. Biblical scholarship. Du Châtelet wrote a critical analysis of the entire Bible. A synthesis of her remarks on the Book of Genesis was published in English in 1967 by Ira O. Wade of Princeton in his book "Voltaire and Madame du Châtelet: An Essay on Intellectual Activity at Cirey" and a book of her complete notes was published in 2011, in the original French, edited and annotated by Bertram Eugene Schwarzbach. Translation of the "Fable of the Bees", and other works. Du Châtelet translated "The Fable of the Bees" in a free adaptation. She also wrote works on optics, rational linguistics, and the nature of free will. Support of women's education. In her first independent work, the preface to her translation of the "Fable of the Bees", du Châtelet argued strongly for women's education, particularly a strong secondary education as was available for young men in the French "collèges". By denying women a good education, she argued, society prevents women from becoming eminent in the arts and sciences. Legacy. Du Châtelet made a crucial scientific contribution in making Newton's historic work more accessible in a timely, accurate and insightful French translation, augmented by her own original concept of energy conservation. A main-belt minor planet and a crater on Venus have been named in her honor, and she is the subject of three plays: "Legacy of Light" by Karen Zacarías; "Émilie: La Marquise Du Châtelet Defends Her Life Tonight" by Lauren Gunderson and "Urania: the Life of Émilie du Châtelet" by Jyl Bonaguro. The opera "Émilie" by Kaija Saariaho is about the last moments of her life. 
Du Châtelet is often represented in portraits with mathematical iconography, such as holding a pair of dividers or a page of geometrical calculations. In the early nineteenth century, a French pamphlet of celebrated women ("Femmes célèbres") introduced a possibly apocryphal story of her childhood. According to this story, a servant fashioned a doll for her by dressing up wooden dividers as a doll; however, du Châtelet undressed the dividers, and intuiting their original purpose, drew a circle with them. The Institut Émilie du Châtelet, which was founded in France in 2006, supports "the development and diffusion of research on women, sex, and gender". Since 2016, the French Society of Physics (la Société Française de Physique) has awarded the Émilie Du Châtelet Prize to a physicist or team of researchers for excellence in Physics. Duke University also presents an annual Du Châtelet Prize in Philosophy of Physics "for previously unpublished work in philosophy of physics by a graduate student or junior scholar". On December 17, 2021, Google Doodle honored du Châtelet. Émilie du Châtelet was portrayed by the actress Hélène de Fougerolles in the docudrama "Einstein's Big Idea". Works. Scientific Other References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "mv^2" } ]
https://en.wikipedia.org/wiki?curid=1002045
1002128
Giant magnetoresistance
Phenomenon involving the change of conductivity in metallic layers Giant magnetoresistance (GMR) is a quantum mechanical magnetoresistance effect observed in multilayers composed of alternating ferromagnetic and non-magnetic conductive layers. The 2007 Nobel Prize in Physics was awarded to Albert Fert and Peter Grünberg for the discovery of GMR, which also sets the foundation for the study of spintronics. The effect is observed as a significant change in the electrical resistance depending on whether the magnetizations of adjacent ferromagnetic layers are in a parallel or an antiparallel alignment. The overall resistance is relatively low for parallel alignment and relatively high for antiparallel alignment. The magnetization direction can be controlled, for example, by applying an external magnetic field. The effect is based on the dependence of electron scattering on spin orientation. The main application of GMR is in magnetic field sensors, which are used to read data in hard disk drives, biosensors, microelectromechanical systems (MEMS) and other devices. GMR multilayer structures are also used in magnetoresistive random-access memory (MRAM) as cells that store one bit of information. In literature, the term giant magnetoresistance is sometimes confused with colossal magnetoresistance of ferromagnetic and antiferromagnetic semiconductors, which is not related to a multilayer structure. Formulation. Magnetoresistance is the dependence of the electrical resistance of a sample on the strength of an external magnetic field. Numerically, it is characterized by the value formula_0 where R(H) is the resistance of the sample in a magnetic field H, and R(0) corresponds to H = 0. Alternative forms of this expression may use electrical resistivity instead of resistance, a different sign for δH, and are sometimes normalized by R(H) rather than R(0). 
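In code, using the common convention δH = (R(H) − R(0))/R(0) for the quantity defined above (the resistance numbers below are illustrative, not measured data):

```python
def magnetoresistance(r_h, r_0):
    """delta_H = (R(H) - R(0)) / R(0), normalized by the zero-field
    resistance.  Negative values mean the resistance drops when the
    field is applied."""
    if r_0 == 0:
        raise ValueError("R(0) must be nonzero")
    return (r_h - r_0) / r_0

# Illustrative: antiparallel layers give 2.0 ohm at H = 0; a saturating
# field aligns them and the resistance falls to 1.0 ohm.
delta = magnetoresistance(r_h=1.0, r_0=2.0)
print(delta)  # -0.5, i.e. a 'giant' 50% drop in resistance
```

Choosing R(0) or R(H) as the denominator changes the numerical value but not the sign, which is why the article notes that alternative normalizations exist.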
The term "giant magnetoresistance" indicates that the value δH for multilayer structures significantly exceeds the anisotropic magnetoresistance, which has a typical value within a few percent. History. GMR was discovered in 1988 independently by the groups of Albert Fert of the University of Paris-Sud, France, and Peter Grünberg of Forschungszentrum Jülich, Germany. The practical significance of this experimental discovery was recognized by the Nobel Prize in Physics awarded to Fert and Grünberg in 2007. Early steps. The first mathematical model describing the effect of magnetization on the mobility of charge carriers in solids, related to the spin of those carriers, was reported in 1936. Experimental evidence of the potential enhancement of δH has been known since the 1960s. By the late 1980s, the anisotropic magnetoresistance had been well explored, but the corresponding value of δH did not exceed a few percent. The enhancement of δH became possible with the advent of sample preparation techniques such as molecular beam epitaxy, which allows manufacturing multilayer thin films with a thickness of several nanometers. Experiment and its interpretation. Fert and Grünberg studied electrical resistance of structures incorporating ferromagnetic and non-ferromagnetic materials. In particular, Fert worked on multilayer films, and Grünberg in 1986 discovered the antiferromagnetic exchange interaction in Fe/Cr films. The GMR discovery work was carried out by the two groups on slightly different samples. The Fert group used (001)Fe/(001) Cr superlattices wherein the Fe and Cr layers were deposited in a high vacuum on a (001) GaAs substrate kept at 20 °C and the magnetoresistance measurements were taken at low temperature (typically 4.2 K). The Grünberg work was performed on multilayers of Fe and Cr on (110) GaAs at room temperature. 
In Fe/Cr multilayers with 3-nm-thick iron layers, increasing the thickness of the non-magnetic Cr layers from 0.9 to 3 nm weakened the antiferromagnetic coupling between the Fe layers and reduced the demagnetization field, which also decreased when the sample was heated from 4.2 K to room temperature. Changing the thickness of the non-magnetic layers led to a significant reduction of the residual magnetization in the hysteresis loop. Electrical resistance changed by up to 50% with the external magnetic field at 4.2 K. Fert named the new effect giant magnetoresistance, to highlight its difference from the anisotropic magnetoresistance. The Grünberg group made the same discovery, but the effect was less pronounced (3% compared to 50%) because the samples were at room temperature rather than low temperature. The discoverers suggested that the effect is based on spin-dependent scattering of electrons in the superlattice, particularly on the dependence of the resistance of the layers on the relative orientations of magnetization and electron spins. The theory of GMR for different directions of the current was developed in the next few years. In 1989, Camley and Barnaś treated the "current in plane" (CIP) geometry, where the current flows along the layers, in the classical approximation, whereas Levy "et al." used the quantum formalism. The theory of the GMR for the current perpendicular to the layers (current perpendicular to the plane or CPP geometry), known as the Valet-Fert theory, was reported in 1993. Applications favor the CPP geometry because it provides a greater magnetoresistance ratio (δH), thus resulting in greater device sensitivity.
Scattering depends on the relative orientations of the electron spins and those magnetic moments: it is weakest when they are parallel and strongest when they are antiparallel; it is relatively strong in the paramagnetic state, in which the magnetic moments of the atoms have random orientations. For good conductors such as gold or copper, the Fermi level lies within the "sp" band, and the "d" band is completely filled. In ferromagnets, the dependence of electron-atom scattering on the orientation of their magnetic moments is related to the filling of the band responsible for the magnetic properties of the metal, e.g., 3"d" band for iron, nickel or cobalt. The "d" band of ferromagnets is split, as it contains a different number of electrons with spins directed up and down. Therefore, the density of electronic states at the Fermi level is also different for spins pointing in opposite directions. The Fermi level for majority-spin electrons is located within the "sp" band, and their transport is similar in ferromagnets and non-magnetic metals. For minority-spin electrons the "sp" and "d" bands are hybridized, and the Fermi level lies within the "d" band. The hybridized "spd" band has a high density of states, which results in stronger scattering and thus shorter mean free path λ for minority-spin than majority-spin electrons. In cobalt-doped nickel, the ratio λ↑/λ↓ can reach 20. According to the Drude theory, the conductivity is proportional to λ, which ranges from several to several tens of nanometers in thin metal films. Electrons "remember" the direction of spin within the so-called spin relaxation length (or spin diffusion length), which can significantly exceed the mean free path. Spin-dependent transport refers to the dependence of electrical conductivity on the spin direction of the charge carriers. In ferromagnets, it occurs due to electron transitions between the unsplit 4"s" and split 3"d" bands. 
In some materials, the interaction between electrons and atoms is weakest when their magnetic moments are antiparallel rather than parallel. A combination of both types of materials can result in a so-called inverse GMR effect. CIP and CPP geometries. Electric current can be passed through magnetic superlattices in two ways. In the current in plane (CIP) geometry, the current flows along the layers, and the electrodes are located on one side of the structure. In the current perpendicular to plane (CPP) configuration, the current is passed perpendicular to the layers, and the electrodes are located on different sides of the superlattice. The CPP geometry results in a more than twice higher GMR, but is more difficult to realize in practice than the CIP configuration. Carrier transport through a magnetic superlattice. Magnetic ordering differs in superlattices with ferromagnetic and antiferromagnetic interaction between the layers. In the former case, the magnetization directions are the same in different ferromagnetic layers in the absence of an applied magnetic field, whereas in the latter case, opposite directions alternate in the multilayer. Electrons traveling through the ferromagnetic superlattice interact with it much more weakly when their spin directions are opposite to the magnetization of the lattice than when they are parallel to it. Such anisotropy is not observed for the antiferromagnetic superlattice; as a result, it scatters electrons more strongly than the ferromagnetic superlattice and exhibits a higher electrical resistance. Applications of the GMR effect require dynamic switching between the parallel and antiparallel magnetization of the layers in a superlattice.
To a first approximation, the energy density of the interaction between two ferromagnetic layers separated by a non-magnetic layer is proportional to the scalar product of their magnetizations: formula_1 The coefficient "J" is an oscillatory function of the thickness of the non-magnetic layer ds; therefore "J" can change its magnitude and sign. If the ds value corresponds to the antiparallel state, then an external field can switch the superlattice from the antiparallel state (high resistance) to the parallel state (low resistance). The total resistance of the structure can be written as formula_2 where R0 is the resistance of the ferromagnetic superlattice, ΔR is the GMR increment and θ is the angle between the magnetizations of adjacent layers. Mathematical description. The GMR phenomenon can be described using two spin-related conductivity channels corresponding to the conduction of electrons, for which the resistance is minimum or maximum. The relation between them is often defined in terms of the coefficient of spin anisotropy β. This coefficient can be defined using the minimum and maximum of the specific electrical resistivity ρF± for the spin-polarized current in the form formula_3 where "ρF" is the average resistivity of the ferromagnet. Resistor model for CIP and CPP structures. If scattering of charge carriers at the interface between the ferromagnetic and non-magnetic metal is small, and the direction of the electron spins persists long enough, it is convenient to consider a model in which the total resistance of the sample is a combination of the resistances of the magnetic and non-magnetic layers. In this model, there are two conduction channels for electrons with various spin directions relative to the magnetization of the layers. Therefore, the equivalent circuit of the GMR structure consists of two parallel connections corresponding to each of the channels.
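A small numerical sketch of the two quantities introduced above, the angular resistance R = R0 + ΔR sin²(θ/2) and the spin-resolved resistivities ρF± = 2ρF/(1 ± β); all parameter values are illustrative:

```python
import math

def resistance(theta: float, r0: float, delta_r: float) -> float:
    """R = R0 + dR * sin^2(theta/2), theta = angle between layer magnetizations."""
    return r0 + delta_r * math.sin(theta / 2.0) ** 2

def spin_resistivities(rho_f: float, beta: float) -> tuple[float, float]:
    """rho_F+ and rho_F- from rho_F+/- = 2 * rho_F / (1 +/- beta)."""
    return 2.0 * rho_f / (1.0 + beta), 2.0 * rho_f / (1.0 - beta)

# Parallel (theta = 0) gives the low-resistance state, antiparallel (theta = pi)
# the high-resistance state; illustrative values in ohms.
print(resistance(0.0, 10.0, 3.0))      # 10.0 (parallel, low resistance)
print(resistance(math.pi, 10.0, 3.0))  # 13.0 (antiparallel, high resistance)

# A spin-asymmetry coefficient of beta = 0.5 splits an average resistivity
# of 1.0 into unequal resistivities for the two spin channels.
print(spin_resistivities(1.0, 0.5))
```

Intermediate angles interpolate smoothly between the two resistance extremes, which is what makes the structure usable as an analog field sensor as well as a binary element.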
In this case, the GMR can be expressed as formula_4 Here the subscripts of R denote collinear and oppositely oriented magnetizations in the layers, "χ = b/a" is the thickness ratio of the magnetic and non-magnetic layers, and ρN is the resistivity of the non-magnetic metal. This expression is applicable to both CIP and CPP structures. Under the condition formula_5 this relationship can be simplified using the coefficient of spin asymmetry: formula_6 Such a device, with resistance depending on the orientation of electron spin, is called a spin valve. It is "open" if the magnetizations of its layers are parallel, and "closed" otherwise. Valet-Fert model. In 1993, Thierry Valet and Albert Fert presented a model for the giant magnetoresistance in the CPP geometry, based on the Boltzmann equations. In this model, the chemical potential inside the magnetic layer is split into two functions, corresponding to electrons with spins parallel and antiparallel to the magnetization of the layer. If the non-magnetic layer is sufficiently thin, then in the external field E0 the corrections to the electrochemical potential and the field inside the sample will take the form formula_7 formula_8 where "ℓ"s is the average length of spin relaxation, and the z coordinate is measured from the boundary between the magnetic and non-magnetic layers (z < 0 corresponds to the ferromagnet). Thus electrons with a larger chemical potential will accumulate at the boundary of the ferromagnet. This can be represented by the potential of spin accumulation "V"AS or by the so-called interface resistance (inherent to the boundary between a ferromagnet and a non-magnetic material) formula_9 where "j" is the current density in the sample, and "ℓ"sN and "ℓ"sF are the lengths of spin relaxation in the non-magnetic and magnetic materials, respectively. Device preparation. Materials and experimental data.
Many combinations of materials exhibit GMR; among the most common are Fe/Cr and Co/Cu superlattices. The magnetoresistance depends on many parameters such as the geometry of the device (CIP or CPP), its temperature, and the thicknesses of the ferromagnetic and non-magnetic layers. At a temperature of 4.2 K and a cobalt-layer thickness of 1.5 nm, increasing the thickness of the copper layers dCu from 1 to 10 nm decreased δH from 80 to 10% in the CIP geometry. Meanwhile, in the CPP geometry the maximum of δH (125%) was observed at dCu = 2.5 nm, and increasing dCu to 10 nm reduced δH to 60% in an oscillating manner. When a Co(1.2 nm)/Cu(1.1 nm) superlattice was heated from near zero to 300 K, its δH decreased from 40 to 20% in the CIP geometry, and from 100 to 55% in the CPP geometry. The non-magnetic layers can be non-metallic. For example, δH of up to 40% was demonstrated for organic layers at 11 K. Graphene spin valves of various designs exhibited δH of about 12% at 7 K and 10% at 300 K, far below the theoretical limit of 109%. The GMR effect can be enhanced by spin filters that select electrons with a certain spin orientation; they are made of metals such as cobalt. For a filter of thickness "t" the change in conductivity ΔG can be expressed as formula_10 where ΔGSV is the change in the conductivity of the spin valve without the filter, ΔGf is the maximum increase in conductivity with the filter, and β is a parameter of the filter material. Types of GMR. GMR is often classified by the type of device which exhibits the effect. Films. Antiferromagnetic superlattices. GMR in films was first observed by Fert and Grünberg in a study of superlattices composed of ferromagnetic and non-magnetic layers. The thickness of the non-magnetic layers was chosen such that the interaction between the layers was antiferromagnetic and the magnetization in adjacent magnetic layers was antiparallel.
Then an external magnetic field could make the magnetization vectors parallel, thereby affecting the electrical resistance of the structure. Magnetic layers in such structures interact through antiferromagnetic coupling, which results in the oscillating dependence of the GMR on the thickness of the non-magnetic layer. In the first magnetic field sensors using antiferromagnetic superlattices, the saturation field was very large, up to tens of thousands of oersteds, due to the strong antiferromagnetic interaction between their layers (made of chromium, iron or cobalt) and the strong anisotropy fields in them. Therefore, the sensitivity of the devices was very low. The use of permalloy for the magnetic and silver for the non-magnetic layers lowered the saturation field to tens of oersteds. Spin valves using exchange bias. In the most successful spin valves the GMR effect originates from exchange bias. They comprise a sensitive layer, a "fixed" layer and an antiferromagnetic layer. The last layer freezes the magnetization direction in the "fixed" layer. The sensitive and antiferromagnetic layers are made thin to reduce the resistance of the structure. The valve reacts to the external magnetic field by changing the magnetization direction in the sensitive layer relative to the "fixed" layer. The main difference between these spin valves and other multilayer GMR devices is the monotonic dependence of the amplitude of the effect on the thickness "dN" of the non-magnetic layers: formula_11 where δH0 is a normalization constant, λN is the mean free path of electrons in the non-magnetic material, and "d"0 is an effective thickness that includes the interaction between layers. The dependence on the thickness of the ferromagnetic layer can be given as: formula_12 The parameters have the same meaning as in the previous equation, but they now refer to the ferromagnetic layer. Non-interacting multilayers (pseudospin valves).
GMR can also be observed in the absence of antiferromagnetic coupling layers. In this case, the magnetoresistance results from differences in the coercive forces (which are smaller for permalloy than for cobalt, for example). In multilayers such as permalloy/Cu/Co/Cu, the external magnetic field switches the direction of saturation magnetization to parallel in strong fields and to antiparallel in weak fields. Such systems exhibit a lower saturation field and a larger δH than superlattices with antiferromagnetic coupling. A similar effect is observed in Co/Cu structures. The existence of these structures means that GMR does not require interlayer coupling, and can originate from a distribution of the magnetic moments that can be controlled by an external field. Inverse GMR effect. In the inverse GMR, the resistance is minimum for the antiparallel orientation of the magnetization in the layers. Inverse GMR is observed when the magnetic layers are composed of different materials, such as NiCr/Cu/Co/Cu. The resistivity for electrons with opposite spins can be written as formula_13; it has different values, i.e. different coefficients β, for spin-up and spin-down electrons. If the NiCr layer is not too thin, its contribution may exceed that of the Co layer, resulting in inverse GMR. Note that the GMR inversion depends on the sign of the "product" of the coefficients β in adjacent ferromagnetic layers, but not on the signs of the individual coefficients. Inverse GMR is also observed if the NiCr alloy is replaced by vanadium-doped nickel, but not for doping of nickel with iron, cobalt, manganese, gold or copper. GMR in granular structures. GMR in granular alloys of ferromagnetic and non-magnetic metals was discovered in 1992 and subsequently explained by the spin-dependent scattering of charge carriers at the surface and in the bulk of the grains. The grains form ferromagnetic clusters about 10 nm in diameter embedded in a non-magnetic metal, forming a kind of superlattice.
A necessary condition for the GMR effect in such structures is poor mutual solubility of its components (e.g., cobalt and copper). Their properties strongly depend on the measurement and annealing temperature. They can also exhibit inverse GMR. Applications. Spin-valve sensors. General principle. One of the main applications of GMR materials is in magnetic field sensors, e.g., in hard disk drives and biosensors, as well as detectors of oscillations in MEMS. A typical GMR-based sensor consists of seven layers: a substrate, a binder layer, a sensing (non-fixed) layer, a non-magnetic layer, a fixed layer, an antiferromagnetic (pinning) layer, and a protective layer. The binder and protective layers are often made of tantalum, and a typical non-magnetic material is copper. In the sensing layer, magnetization can be reoriented by the external magnetic field; it is typically made of NiFe or cobalt alloys. FeMn or NiMn can be used for the antiferromagnetic layer. The fixed layer is made of a magnetic material such as cobalt. Such a sensor has an asymmetric hysteresis loop owing to the presence of the magnetically hard, fixed layer. Spin valves may exhibit anisotropic magnetoresistance, which leads to an asymmetry in the sensitivity curve. Hard disk drives. In hard disk drives (HDDs), information is encoded using magnetic domains, and a change in the direction of their magnetization is associated with the logical level 1 while no change represents a logical 0. There are two recording methods: longitudinal and perpendicular. In the longitudinal method, the magnetization lies in the plane of the surface, whereas in the perpendicular method it is normal to the surface. A transition region (domain wall) is formed between domains, in which the magnetic field exits the material. If the domain wall is located at the interface of two north-pole domains then the field is directed outward, and for two south-pole domains it is directed inward. To read the direction of the magnetic field above the domain wall, the magnetization direction is fixed normal to the surface in the antiferromagnetic layer and parallel to the surface in the sensing layer.
Changing the direction of the external magnetic field deflects the magnetization in the sensing layer. When the field tends to align the magnetizations in the sensing and fixed layers, the electrical resistance of the sensor decreases, and vice versa. Magnetic RAM. A cell of magnetoresistive random-access memory (MRAM) has a structure similar to the spin-valve sensor. The value of the stored bits can be encoded via the magnetization direction in the sensor layer; it is read by measuring the resistance of the structure. The advantages of this technology are independence of power supply (the information is preserved when the power is switched off owing to the potential barrier for reorienting the magnetization), low power consumption and high speed. In a typical GMR-based storage unit, a CIP structure is located between two wires oriented perpendicular to each other. These conductors are called the row and column lines. Pulses of electric current passing through the lines generate a vortex magnetic field, which affects the GMR structure. The field lines are ellipsoidal in shape, and the field direction (clockwise or counterclockwise) is determined by the direction of the current in the line. In the GMR structure, the magnetization is oriented along the line. The direction of the field produced by the column line is almost parallel to the magnetic moments, and it cannot reorient them. The row line is perpendicular, and regardless of the magnitude of the field it can rotate the magnetization by only 90°. When pulses pass simultaneously along the row and column lines, the total magnetic field at the location of the GMR structure is directed at an acute angle with respect to some magnetic moments and at an obtuse angle to others. If the field exceeds some critical value, the magnetization changes its direction. There are several storage and reading methods for the described cell.
In one method, the information is stored in the sensing layer; it is read via a resistance measurement and is erased upon reading. In another scheme, the information is kept in the fixed layer, which requires higher recording currents compared to reading currents. Tunnel magnetoresistance (TMR) is an extension of spin-valve GMR, in which the electrons travel with their spins oriented perpendicularly to the layers across a thin insulating tunnel barrier (replacing the non-ferromagnetic spacer). This makes it possible to achieve a larger impedance, a larger magnetoresistance value (~10× at room temperature) and a negligible temperature dependence. TMR has now replaced GMR in MRAMs and disk drives, in particular for high area densities and perpendicular recording. Other applications. Magnetoresistive insulators for contactless signal transmission between two electrically isolated parts of electrical circuits were first demonstrated in 1997 as an alternative to opto-isolators. A Wheatstone bridge of four identical GMR devices is insensitive to a uniform magnetic field and reacts only when the field directions are antiparallel in the neighboring arms of the bridge. Such devices were reported in 2003 and may be used as rectifiers with a linear frequency response. Notes. <templatestyles src="Reflist/styles.css" /> Citations. <templatestyles src="Reflist/styles.css" /> Bibliography. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "\\delta_H = \\frac{R(H)-R(0)}{R(0)}" }, { "math_id": 1, "text": "w = - J (\\mathbf M_1 \\cdot \\mathbf M_2). " }, { "math_id": 2, "text": "R = R_0 + \\Delta R \\sin^2 \\frac{\\theta}{2}," }, { "math_id": 3, "text": "\\rho_{F\\pm}=\\frac{2\\rho_F}{1\\pm\\beta}," }, { "math_id": 4, "text": "\\delta_H = \\frac{\\Delta R}{R}=\\frac{R_{\\uparrow\\downarrow}-R_{\\uparrow\\uparrow}}{R_{\\uparrow\\uparrow}}=\\frac{(\\rho_{F+}-\\rho_{F-})^2}{(2\\rho_{F+}+\\chi\\rho_N)(2\\rho_{F-}+\\chi\\rho_N)}." }, { "math_id": 5, "text": "\\chi\\rho_N \\ll \\rho_{F\\pm}" }, { "math_id": 6, "text": "\\delta_H = \\frac{\\beta^2}{1-\\beta^2}." }, { "math_id": 7, "text": "\\Delta\\mu = \\frac{\\beta}{1-\\beta^2}eE_0\\ell_se^{z/\\ell_s}," }, { "math_id": 8, "text": "\\Delta E = \\frac{\\beta^2}{1-\\beta^2}eE_0\\ell_se^{z/\\ell_s}," }, { "math_id": 9, "text": "R_i= \\frac{\\beta(\\mu_{\\uparrow\\downarrow}-\\mu_{\\uparrow\\uparrow})}{2ej} = \\frac{\\beta^2\\ell_{sN}\\rho_N}{1+(1-\\beta^2)\\ell_{sN}\\rho_N/(\\ell_{sF}\\rho_F)}," }, { "math_id": 10, "text": "\\Delta G = \\Delta G_{SV} + \\Delta G_f (1 - e^{\\beta t/\\lambda})," }, { "math_id": 11, "text": "\\delta_H(d_N) = \\delta_{H0} \\frac{\\exp\\left(-d_N/\\lambda_N\\right)}{1 + d_N/d_0}," }, { "math_id": 12, "text": "\\delta_H(d_F) = \\delta_{H1} \\frac{1 - \\exp\\left(-d_F/\\lambda_F\\right)}{1 + d_F/d_0}." }, { "math_id": 13, "text": "\\rho_{\\uparrow,\\downarrow}=\\frac{2\\rho_F}{1\\pm\\beta}" } ]
https://en.wikipedia.org/wiki?curid=1002128
10022123
Vaccine efficacy
Reduction of disease among the vaccinated compared to the unvaccinated Vaccine efficacy or vaccine effectiveness is the percentage reduction of disease cases in a vaccinated group of people compared to an unvaccinated group. For example, a vaccine efficacy or effectiveness of 80% indicates an 80% decrease in the number of disease cases among a group of vaccinated people compared to a group in which nobody was vaccinated. When a study is carried out using the most favorable, ideal or perfectly controlled conditions, such as those in a clinical trial, the term "vaccine efficacy" is used. On the other hand, when a study is carried out to show how well a vaccine works when it is used in a larger, typical population under less-than-perfectly controlled conditions, the term "vaccine effectiveness" is used. Vaccine efficacy was first defined and calculated by Greenwood and Yule in 1915 for the cholera and typhoid vaccines. It is best measured using double-blind, randomized controlled clinical trials, such that it is studied under "best case scenarios". Vaccine efficacy studies are used to measure several important and critical outcomes of interest such as disease attack rates, hospitalizations due to the disease, deaths due to the disease, asymptomatic infection, serious adverse events due to vaccination, vaccine reactogenicity, and cost effectiveness of the vaccine. Vaccine efficacy is calculated for a particular population (and therefore is not a constant value across populations), and may be misinterpreted as indicating how efficacious a vaccine is in all populations. Testing. Vaccine efficacy differs from vaccine effectiveness: vaccine efficacy shows how effective a vaccine could be given ideal circumstances and 100% vaccine uptake (such as the conditions within a controlled clinical trial); vaccine effectiveness measures how well a vaccine performs when it is used in routine circumstances in the community.
What makes vaccine efficacy relevant is that it captures the disease attack rates as well as careful tracking of vaccination status. Vaccine effectiveness is less expensive to measure than vaccine efficacy: the measurement of vaccine effectiveness relies on observational studies, which are usually easier to perform, whereas a vaccine efficacy measurement requires randomized controlled trials, which are time and capital intensive. Because a clinical trial is based on people who are taking the vaccine and those who are not, there is a risk of disease, and optimal treatment is needed for those who become infected. The advantages of measuring vaccine efficacy include the ability to control for selection bias, prospective, active monitoring of disease attack rates, and careful tracking of vaccination status for a study population; there is normally also a subset with laboratory confirmation of the infectious outcome of interest and a sampling of vaccine immunogenicity. The major disadvantages of vaccine efficacy trials are the complexity and expense of performing them, especially for relatively uncommon infectious outcomes, for which the required sample size is driven up to achieve clinically useful statistical power. Vaccine effectiveness estimates obtained from observational studies are usually subject to selection bias. Since 2014, epidemiologists have used quasi-experimental designs to obtain unbiased estimates of vaccine effectiveness. Standardized statements of efficacy may be parametrically expanded to include multiple categories of efficacy in a table format. While conventional efficacy/effectiveness data typically show the ability to prevent a symptomatic infection, this expanded approach could include prevention of outcomes categorized by symptom class, minor or serious viral damage, hospital admission, ICU admission, death, various viral shedding levels, etc.
Capturing effectiveness at preventing each of these "outcome categories" is typically part of any study and could be provided in a table with clear definitions, instead of being inconsistently presented in study discussions as has typically been done in the past. Biological factors. Biological exposures such as parasites affect the immune responses after vaccination. This can be seen in areas with a high burden of parasitic infections, where vaccine responses are low for vaccines such as BCG. Infections like malaria suppress immune responses to polysaccharide vaccines. A potential solution is to give curative treatment before vaccination in areas where malaria is present. The effect of parasites on vaccine response has also been observed in individuals infected by helminths in areas that have a high burden of infectious diseases. Established helminth infections at the time of vaccination affect vaccine responses. Other biological factors such as smoking, age, sex, and nutrition also affect vaccine responses. In the case of the hepatitis B vaccine, for example, increasing age, being male, having a body mass index > 25, and smoking can result in lower seroprotection rates. In addition, other factors such as the composition of the gut microbiota can affect responses to vaccination. Formula. The outcome data (vaccine efficacy) generally are expressed as a proportionate reduction in disease attack rate (AR) between the unvaccinated (ARU) and vaccinated (ARV), or can be calculated from the relative risk (RR) of disease among the vaccinated group. The basic formula is written as formula_0 with formula_1 the vaccine efficacy, formula_2 the attack rate in the unvaccinated group, and formula_3 the attack rate in the vaccinated group. An alternative, equivalent formulation of vaccine efficacy is formula_4 where formula_5 is the relative risk of developing the disease for vaccinated people compared to unvaccinated people. The design of clinical trials ensures that regulatory approval is issued only for effective vaccines.
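The two equivalent formulas above can be sketched in code; the attack rates used here are illustrative, not data from any trial:

```python
def vaccine_efficacy(aru: float, arv: float) -> float:
    """VE = (ARU - ARV) / ARU * 100, with attack rates given as fractions."""
    return (aru - arv) / aru * 100.0

def vaccine_efficacy_from_rr(rr: float) -> float:
    """Equivalent form: VE = (1 - RR) * 100."""
    return (1.0 - rr) * 100.0

# Illustrative attack rates: 5% among the unvaccinated, 1% among the vaccinated.
aru, arv = 0.05, 0.01
print(vaccine_efficacy(aru, arv))           # about 80, i.e. an 80% reduction
print(vaccine_efficacy_from_rr(arv / aru))  # same value via the relative risk
```

Both routes give the same number because RR is simply ARV/ARU, so (1 − RR) and (ARU − ARV)/ARU are the same quantity.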
However, during research, it is possible that an intervention actually "increases" the risk to participants, as happened, for example, in the STEP and Phambili studies, which were both intended to test an experimental HIV vaccine. In these cases, the formula would yield a negative efficacy value because formula_6. A negative efficacy value is sometimes present in the lower limit of a confidence interval of an estimate of vaccine efficacy for specific clinical endpoints. While this means that the intervention may actually have a negative effect, it could also be simply due to small sample size or sample variability. Relative risk. First, the baseline risk can be calculated for each group: for example, with 24 cases among 12,221 vaccinated participants and 106 cases among 12,198 unvaccinated participants, the risk is formula_7 in the vaccinated group and formula_8 in the unvaccinated group, giving a relative risk of formula_9. The vaccine efficacy (the relative risk reduction, RRR) then follows: formula_10 Also, the absolute risk reduction (ARR) for any vaccine can simply be obtained by calculating the difference of risks between the groups, i.e. 0.86% − 0.196%, which yields a value of about 0.66% for the above example. Cases studied. "The New England Journal of Medicine" did a study on the efficacy of a vaccine for the influenza A virus. A total of 1,952 subjects were enrolled and received study vaccines in the fall of 2007. Influenza activity occurred from January through April 2008, with the circulation of both influenza type A and type B viruses. Absolute efficacy against both types of influenza, as measured by isolating the virus in culture, identifying it on real-time polymerase-chain-reaction assay, or both, was 68% (95% confidence interval [CI], 46 to 81) for the inactivated vaccine and 36% (95% CI, 0 to 59) for the live attenuated vaccine. In terms of relative efficacy, there was a 50% (95% CI, 20 to 69) reduction in laboratory-confirmed influenza among subjects who received inactivated vaccine as compared with those given live attenuated vaccine. Subjects were healthy adults. The efficacy against the influenza A virus was 72% for the inactivated vaccine and 29% for the live attenuated vaccine, with a relative efficacy of 60%.
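As a check on the relative-risk arithmetic above (24 cases among 12,221 vaccinated participants and 106 among 12,198 unvaccinated ones):

```python
# Counts quoted in the relative-risk example above.
cases_vaccinated, n_vaccinated = 24, 12221
cases_unvaccinated, n_unvaccinated = 106, 12198

arv = cases_vaccinated / n_vaccinated        # ~0.196% risk in the vaccinated
aru = cases_unvaccinated / n_unvaccinated    # ~0.86% risk in the unvaccinated

rr = arv / aru             # relative risk, ~0.23
ve = (1.0 - rr) * 100.0    # vaccine efficacy, ~77%
arr = (aru - arv) * 100.0  # absolute risk reduction in percentage points, ~0.67
                           # (the ~0.66% in the text comes from rounding the
                           # rates to 0.86% and 0.196% before subtracting)

print(f"RR = {rr:.2f}, VE = {ve:.0f}%, ARR = {arr:.2f}%")
```

The contrast between the ~77% relative reduction and the ~0.66 percentage point absolute reduction is why both figures are usually reported together.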
The influenza vaccine is not 100% efficacious in preventing disease, but it is close to 100% safe, and much safer than the disease. Since 2004, results from clinical trials testing the efficacy of the influenza vaccine have gradually accumulated: 2,058 people were vaccinated in October and November 2005. Influenza activity was prolonged but of low intensity; type A (H3N2) was the predominant circulating virus, and it closely resembled the vaccine strain. The efficacy of the inactivated vaccine was 16% (95% confidence interval [CI], -171% to 70%) for the virus identification end point (virus isolation in cell culture or identification through polymerase chain reaction) and 54% (95% CI, 4%–77%) for the primary end point (virus isolation or increase in serum antibody titer). The absolute efficacies of the live attenuated vaccine for these end points were 8% (95% CI, -194% to 67%) and 43% (95% CI, -15% to 71%). With serologic end points included, efficacy was demonstrated for the inactivated vaccine in a year with low influenza attack rates. Influenza vaccines are effective in reducing cases of influenza, especially when the content accurately predicts the circulating types and circulation is high. However, they are less effective in reducing cases of influenza-like illness and have a modest impact on working days lost. There is insufficient evidence to assess their impact on complications. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "VE = \\frac{ARU - ARV}{ARU} \\times 100\\%," }, { "math_id": 1, "text": "VE" }, { "math_id": 2, "text": "ARU" }, { "math_id": 3, "text": "ARV" }, { "math_id": 4, "text": "VE = (1 - RR) \\times 100\\%," }, { "math_id": 5, "text": "RR" }, { "math_id": 6, "text": "ARV > ARU" }, { "math_id": 7, "text": "{24\\over 12221}=0.196 \\%" }, { "math_id": 8, "text": "{106\\over 12198}=0.86 \\%" }, { "math_id": 9, "text": "RR={0.196 \\over 0.86} \\approx 0.23" }, { "math_id": 10, "text": "VE=(1-RR) \\times 100 \\implies (1-0.23) \\times 100 \\approx 77\\%" } ]
https://en.wikipedia.org/wiki?curid=10022123
10023038
Benefit–cost ratio
Indicator of value-for-money of a project or proposal A benefit–cost ratio (BCR) is an indicator, used in cost–benefit analysis, that attempts to summarize the overall value for money of a project or proposal. A BCR is the ratio of the benefits of a project or proposal, expressed in monetary terms, relative to its costs, also expressed in monetary terms. All benefits and costs should be expressed in discounted present values. A BCR can be a profitability index in for-profit contexts. A BCR takes into account the amount of monetary gain realized by performing a project versus the amount it costs to execute the project. The higher the BCR, the better the investment. The general rule of thumb is that if the benefit is higher than the cost the project is a good investment. The practice of cost–benefit analysis in some countries refers to the BCR as the cost–benefit ratio, but this is still calculated as the ratio of benefits to costs. Rationale. In the absence of funding constraints, the best-value-for-money projects are those with the highest net present value (NPV). Where there is a budget constraint, the ratio of NPV to the expenditure falling within the constraint should be used. In practice, the ratio of present value (PV) of future net benefits to expenditure is expressed as a BCR. (NPV-to-investment is net BCR.) BCRs have been used most extensively in the field of transport cost–benefit appraisals. The NPV should be evaluated over the service life of the project. Problems. Long-term BCRs, such as those involved in climate change, are very sensitive to the discount rate used in the calculation of net present value, and there is often no consensus on the appropriate rate to use. The handling of non-monetary impacts also presents problems. These impacts are usually incorporated by estimating them in monetary terms, using measures such as WTP (willingness to pay), though these are often difficult to assess. 
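A minimal sketch of the ratio described above, with benefits and costs discounted to present value. The cash-flow figures and the 5% discount rate are made-up illustration values, not data from any real appraisal.

```python
# Benefit-cost ratio with discounting: BCR = PV(benefits) / PV(costs).

def present_value(flows, rate):
    """Discount a list of yearly amounts (year 0 first) to present value."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

benefits = [0, 300, 300, 300, 300]   # incremental benefits per year
costs    = [800, 50, 50, 50, 50]     # incremental costs per year
rate = 0.05                          # assumed annual discount rate

bcr = present_value(benefits, rate) / present_value(costs, rate)
npv = present_value(benefits, rate) - present_value(costs, rate)

print(f"BCR = {bcr:.2f}")   # > 1 suggests discounted benefits exceed costs
print(f"NPV = {npv:.1f}")
```

As the sensitivity discussion above suggests, re-running this with a higher discount rate shrinks both the BCR and the NPV, since the benefits arrive later than the up-front cost.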
Alternative approaches include the UK's New Approach to Appraisal framework. A further complication with BCRs concerns the precise definitions of benefits and costs. These can vary depending on the funding agency. In its general form, the ratio is given by: formula_0 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "BCR = \\frac{\\text{Discounted value of incremental benefits}}{\\text{Discounted value of incremental costs}}" } ]
https://en.wikipedia.org/wiki?curid=10023038
10023138
Chandrasekhar number
The Chandrasekhar number is a dimensionless quantity used in magnetic convection to represent the ratio of the Lorentz force to the viscosity. It is named after the Indian astrophysicist Subrahmanyan Chandrasekhar. The number's main function is as a measure of the magnetic field, being proportional to the square of a characteristic magnetic field in a system. Definition. The Chandrasekhar number is usually denoted by the letter formula_0, and is motivated by a dimensionless form of the Navier–Stokes equation in the presence of a magnetic force in the equations of magnetohydrodynamics: formula_1 where formula_2 is the Prandtl number, and formula_3 is the magnetic Prandtl number. The Chandrasekhar number is thus defined as: formula_4 where formula_5 is the magnetic permeability, formula_6 is the density of the fluid, formula_7 is the kinematic viscosity, and formula_8 is the magnetic diffusivity. formula_9 and formula_10 are a characteristic magnetic field and a length scale of the system, respectively. It is related to the Hartmann number, formula_11, by the relation: formula_12 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
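As a numerical sketch of the definition above, Q = B0² d² / (μ0 ρ ν λ), and of the relation Q = Ha². The input values are arbitrary illustration numbers, not data for any particular physical system.

```python
# Chandrasekhar number and its relation to the Hartmann number.

import math

mu0 = 4e-7 * math.pi   # magnetic permeability of free space [H/m]

def chandrasekhar(B0, d, rho, nu, lam):
    """Q = B0^2 * d^2 / (mu0 * rho * nu * lambda), per the definition above."""
    return B0**2 * d**2 / (mu0 * rho * nu * lam)

# Arbitrary example inputs: field strength, length scale, density,
# kinematic viscosity, magnetic diffusivity.
Q = chandrasekhar(B0=0.01, d=0.1, rho=1.0e3, nu=1.0e-6, lam=1.0)
Ha = math.sqrt(Q)   # Hartmann number, since Q = Ha^2

print(f"Q  = {Q:.3g}")
print(f"Ha = {Ha:.3g}")
```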
[ { "math_id": 0, "text": "\\ Q" }, { "math_id": 1, "text": "\\frac{1}{\\sigma}\\left(\\frac{\\partial^{}\\mathbf{u}}{\\partial t^{}}\\ +\\ (\\mathbf{u} \\cdot \\nabla) \\mathbf{u}\\right)\\ =\\ - {\\mathbf \\nabla }p\\ +\\ \\nabla^2 \\mathbf{u}\\ +\\frac {\\sigma}{\\zeta} {Q}\\ ({\\mathbf \\nabla} \\wedge \\mathbf{B}) \\wedge\\mathbf{B}, " }, { "math_id": 2, "text": "\\ \\sigma" }, { "math_id": 3, "text": "\\ \\zeta" }, { "math_id": 4, "text": " {Q}\\ =\\ \\frac{{B_0}^2 d^2}{\\mu_0 \\rho \\nu \\lambda} " }, { "math_id": 5, "text": "\\ \\mu_0" }, { "math_id": 6, "text": "\\ \\rho" }, { "math_id": 7, "text": "\\ \\nu" }, { "math_id": 8, "text": "\\ \\lambda" }, { "math_id": 9, "text": "\\ B_0" }, { "math_id": 10, "text": "\\ d" }, { "math_id": 11, "text": "\\ Ha" }, { "math_id": 12, "text": " Q\\ {=}\\ Ha^2\\ " } ]
https://en.wikipedia.org/wiki?curid=10023138
1002551
Flap (aeronautics)
Anti-stalling high-lift device on aircraft A flap is a high-lift device used to reduce the stalling speed of an aircraft wing at a given weight. Flaps are usually mounted on the wing trailing edges of a fixed-wing aircraft. Flaps are used to reduce the take-off distance and the landing distance. Flaps also cause an increase in drag so they are retracted when not needed. The flaps installed on most aircraft are partial-span flaps, extending spanwise from near the wing root to the inboard end of the ailerons. When partial-span flaps are extended they alter the spanwise lift distribution on the wing by causing the inboard half of the wing to supply an increased proportion of the lift, and the outboard half to supply a reduced proportion of the lift. Reducing the proportion of the lift supplied by the outboard half of the wing is accompanied by a reduction in the angle of attack on the outboard half. This is beneficial because it increases the margin above the stall of the outboard half, maintaining aileron effectiveness and reducing the likelihood of asymmetric stall and spinning. The ideal lift distribution across a wing is elliptical, and extending partial-span flaps causes a significant departure from the elliptical distribution. This increases lift-induced drag, which can be beneficial during approach and landing because it allows the aircraft to descend at a steeper angle. Extending the wing flaps increases the camber or curvature of the wing, raising the maximum lift coefficient or the upper limit to the lift a wing can generate. This allows the aircraft to generate the required lift at a lower speed, reducing the minimum speed (known as stall speed) at which the aircraft will safely maintain flight. For most aircraft configurations, a useful side effect of flap deployment is a decrease in aircraft pitch angle, which lowers the nose, thereby improving the pilot's view of the runway over the nose of the aircraft during landing. 
There are many different designs of flaps, with the specific choice depending on the size, speed and complexity of the aircraft on which they are to be used, as well as the era in which the aircraft was designed. Plain flaps, slotted flaps, and Fowler flaps are the most common. Krueger flaps are positioned on the leading edge of the wings and are used on many jet airliners. The Fowler, Fairey-Youngman and Gouge types of flap increase the wing area in addition to changing the camber. The larger lifting surface reduces wing loading, hence further reducing the stalling speed. Some flaps are fitted elsewhere. Leading-edge flaps form the wing leading edge and when deployed they rotate down to increase the wing camber. The de Havilland DH.88 Comet racer had flaps running beneath the fuselage and forward of the wing trailing edge. Many of the Waco Custom Cabin series biplanes have the flaps at mid-chord on the underside of the top wing. Principles of operation. The general airplane lift equation demonstrates these relationships: formula_0 where L is the lift, formula_1 is the air density, V is the true airspeed, and S is the wing area. Here, it can be seen that increasing the area (S) and lift coefficient (formula_2) allows a similar amount of lift to be generated at a lower airspeed (V). Thus, flaps are used extensively for short takeoffs and landings (STOL). Extending the flaps also increases the drag coefficient of the aircraft. Therefore, for any given weight and airspeed, flaps increase the drag force. Flaps increase the drag coefficient of an aircraft due to higher induced drag caused by the distorted spanwise lift distribution on the wing with flaps extended. Some flaps increase the wing area and, for any given speed, this also increases the parasitic drag component of total drag. Flaps during takeoff. Depending on the aircraft type, flaps may be partially extended for takeoff. When used during takeoff, flaps trade runway distance for climb rate: using flaps reduces ground roll but also reduces the climb rate. 
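The effect of flaps on stall speed can be illustrated by rearranging the lift equation for the speed at which lift equals weight, V = sqrt(2W / (ρ S C_L)). The aircraft numbers below are arbitrary illustration values, not data for any real type.

```python
# How raising C_L (and, for Fowler-type flaps, the area S) lowers stall speed.

import math

def stall_speed(weight_n, rho, area_m2, cl_max):
    """Speed at which lift equals weight: solve L = 0.5*rho*V^2*S*CL for V."""
    return math.sqrt(2 * weight_n / (rho * area_m2 * cl_max))

rho = 1.225          # sea-level air density [kg/m^3]
weight = 10_000.0    # aircraft weight [N], illustration value
area = 16.0          # wing area [m^2], illustration value

v_clean = stall_speed(weight, rho, area, cl_max=1.5)        # flaps retracted
v_flaps = stall_speed(weight, rho, area * 1.1, cl_max=2.2)  # flaps extended:
                                                            # more area and camber

print(f"clean: {v_clean:.1f} m/s, flaps: {v_flaps:.1f} m/s")
```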
The amount of flap used on takeoff is specific to each type of aircraft, and the manufacturer will suggest limits and may indicate the reduction in climb rate to be expected. The "Cessna 172S Pilot Operating Handbook" recommends 10° of flaps for takeoff from soft ground or short runways, and 0° otherwise. Flaps during landing. Flaps may be fully extended for landing to give the aircraft a lower stall speed so the approach to landing can be flown more slowly, which also allows the aircraft to land in a shorter distance. The higher lift and drag associated with fully extended flaps allows a steeper and slower approach to the landing site, but imposes handling difficulties in aircraft with very low wing loading (i.e. having little weight and a large wing area). Winds across the line of flight, known as "crosswinds", cause the windward side of the aircraft to generate more lift and drag, causing the aircraft to roll, yaw and pitch off its intended flight path, and as a result many light aircraft land with reduced flap settings in crosswinds. Furthermore, once the aircraft is on the ground, the flaps may decrease the effectiveness of the brakes since the wing is still generating lift and preventing the entire weight of the aircraft from resting on the tires, thus increasing stopping distance, particularly in wet or icy conditions. Usually, the pilot will raise the flaps as soon as possible to prevent this from occurring. Maneuvering flaps. Some gliders not only use flaps when landing, but also in flight to optimize the camber of the wing for the chosen speed. While thermalling, flaps may be partially extended to reduce the stall speed so that the glider can be flown more slowly and thereby reduce the rate of sink, which lets the glider use the rising air of the thermal more efficiently, and to turn in a smaller circle to make best use of the core of the thermal. 
At higher speeds a negative flap setting is used to reduce the nose-down pitching moment. This reduces the balancing load required on the horizontal stabilizer, which in turn reduces the trim drag associated with keeping the glider in longitudinal trim. Negative flap may also be used during the initial stage of an aerotow launch and at the end of the landing run in order to maintain better control by the ailerons. Like gliders, some fighters such as the Nakajima Ki-43 also use special flaps to improve maneuverability during air combat, allowing the fighter to create more lift at a given speed, allowing for much tighter turns. The flaps used for this must be designed specifically to handle the greater stresses and most flaps have a maximum speed at which they can be deployed. Control line model aircraft built for precision aerobatics competition usually have a type of maneuvering flap system that moves them in an opposing direction to the elevators, to assist in tightening the radius of a maneuver. Flap tracks. Manufactured most often from PH steels and titanium, flap tracks control the flaps located on the trailing edge of an aircraft's wings. Extending flaps often run on guide tracks. Where these run outside the wing structure they may be faired in to streamline them and protect them from damage. Some flap track fairings are designed to act as anti-shock bodies, which reduce drag caused by local sonic shock waves where the airflow becomes transonic at high speeds. Thrust gates. Thrust gates, or gaps, in the trailing edge flaps may be required to minimise interference between the engine flow and deployed flaps. In the absence of an inboard aileron, which provides a gap in many flap installations, a modified flap section may be needed. The thrust gate on the Boeing 757 was provided by a single-slotted flap in between the inboard and outboard double-slotted flaps. The A320, A330, A340 and A380 have no inboard aileron. 
No thrust gate is required in the continuous, single-slotted flap. Interference in the go-around case while the flaps are still fully deployed can cause increased drag which must not compromise the climb gradient. Types of flap. Plain flap. The rear portion of the airfoil rotates downwards on a simple hinge mounted at the front of the flap. The Royal Aircraft Factory and National Physical Laboratory in the United Kingdom tested flaps in 1913 and 1914, but these were never installed in an actual aircraft. In 1916, the Fairey Aviation Company made a number of improvements to a Sopwith Baby they were rebuilding, including their Patent Camber Changing Gear, making the Fairey Hamble Baby, as they renamed it, the first aircraft to fly with flaps. These were full span plain flaps which incorporated ailerons, making it also the first instance of flaperons. Fairey were not alone however, as Breguet soon incorporated automatic flaps into the lower wing of their Breguet 14 reconnaissance/bomber in 1917. Owing to the greater efficiency of other flap types, the plain flap is normally only used where simplicity is required. Split flap. The rear portion of the lower surface of the airfoil hinges downwards from the leading edge of the flap, while the upper surface stays immobile. This can cause large changes in longitudinal trim, pitching the nose either down or up. At full deflection, a split flap acts much like a spoiler, adding significantly to drag coefficient. It also adds a little to lift coefficient. It was invented by Orville Wright and James M. H. Jacobs in 1920, but only became common in the 1930s and was then quickly superseded. The Douglas DC-1 (progenitor to the DC-3 and C-47) was one of the first of many aircraft types to use split flaps. Slotted flap. A gap between the flap and the wing forces high pressure air from below the wing over the flap helping the airflow remain attached to the flap, increasing lift compared to a split flap. 
Additionally, lift across the entire chord of the primary airfoil is greatly increased as the velocity of air leaving its trailing edge is raised, from the typical non-flap 80% of freestream, to that of the higher-speed, lower-pressure air flowing around the leading edge of the slotted flap. Any flap that allows air to pass between the wing and the flap is considered a slotted flap. The slotted flap was a result of research at Handley-Page, a variant of the slot that dates from the 1920s, but was not widely used until much later. Some flaps use multiple slots to further boost the effect. Fowler flap. A split flap that slides backwards, before hinging downward, thereby increasing first chord, then camber. The flap may form part of the upper surface of the wing, like a plain flap, or it may not, like a split flap, but it must slide rearward before lowering. As a defining feature – distinguishing it from the Gouge Flap – it always provides a slot effect. The flap was invented by Harlan D. Fowler in 1924, and tested by Fred Weick at NACA in 1932. First used on the Martin 146 prototype in 1935, it entered production on the 1937 Lockheed Super Electra, and remains in widespread use on modern aircraft, often with multiple slots. Junkers flap. A slotted plain flap fixed below the trailing edge of the wing, and rotating about its forward edge. When not in use, it has more drag than other types, but is more effective at creating additional lift than a plain or split flap, while retaining their mechanical simplicity. Invented by Otto Mader at Junkers in the late 1920s, they were most often seen on the Junkers Ju 52 and the Junkers Ju 87 "Stuka", though the same basic design can also be found on many modern ultralights, like the Denney Kitfox. This type of flap is sometimes referred to as an external-airfoil flap. Gouge flap. 
A type of split flap that slides backward along curved tracks that force the trailing edge downward, increasing chord and camber without affecting trim or requiring any additional mechanisms. It was invented by Arthur Gouge for Short Brothers in 1936 and used on the Short Empire and Sunderland flying boats, which used the very thick Shorts A.D.5 airfoil. Short Brothers may have been the only company to use this type. Fairey-Youngman flap. Drops down (becoming a Junkers Flap) before sliding aft and then rotating up or down. Fairey was one of the few exponents of this design, which was used on the Fairey Firefly and Fairey Barracuda. When in the extended position, it could be angled up (to a negative angle of incidence) so that the aircraft could be dived vertically without needing excessive trim changes. Zap flap. The Zap flap was invented by Edward F. Zaparka while he was with Berliner/Joyce and tested on a General Airplanes Corporation Aristocrat in 1932 and on other types periodically thereafter, but it saw little use on production aircraft other than on the Northrop P-61 Black Widow. The leading edge of the flap is mounted on a track, while a point at mid chord on the flap is connected via an arm to a pivot just above the track. When the flap's leading edge moves aft along the track, the triangle formed by the track, the shaft and the surface of the flap (fixed at the pivot) gets narrower and deeper, forcing the flap down. Krueger flap. A hinged flap which folds out from under the wing's leading edge while not forming a part of the leading edge of the wing when retracted. This increases the camber and thickness of the wing, which in turn increases lift and drag. This is not the same as a leading edge droop flap, as that is formed from the entire leading edge. Invented by Werner Krüger in 1943 and evaluated in Goettingen, Krueger flaps are found on many modern swept wing airliners. Gurney flap. 
A small fixed perpendicular tab of between 1 and 2% of the wing chord, mounted on the high pressure side of the trailing edge of an airfoil. It was named for racing car driver Dan Gurney who rediscovered it in 1971, and has since been used on some helicopters such as the Sikorsky S-76B to correct control problems without having to resort to a major redesign. It boosts the efficiency of even basic theoretical airfoils (made up of a triangle and a circle overlapped) to the equivalent of a conventional airfoil. The principle was discovered in the 1930s, but was rarely used and was then forgotten. Late marks of the Supermarine Spitfire used a bead on the trailing edge of the elevators, which functioned in a similar manner. Leading edge flap. The entire leading edge of the wing rotates downward, effectively increasing camber and also slightly reducing chord. Most commonly found on fighters with very thin wings unsuited to other leading edge high lift devices. Blown flap. A type of Boundary Layer Control System, blown flaps pass engine-generated air or exhaust over the flaps to increase lift beyond that attainable with mechanical flaps. Types include the original (internally blown flap) which blows compressed air from the engine over the top of the flap, the externally blown flap, which blows engine exhaust over the upper and lower surfaces of the flap, and upper surface blowing which blows engine exhaust over the top of the wing and flap. While testing was done in Britain and Germany before the Second World War, and flight trials started, the first production aircraft with blown flaps was not until the 1957 Lockheed T2V SeaStar. Upper Surface Blowing was used on the Boeing YC-14 in 1976. Flexible flap. Also known as the FlexFoil. A modern interpretation of wing warping, internal mechanical actuators bend a lattice that changes the airfoil shape. It may have a flexible gap seal at the transition between fixed and flexible airfoils. Flaperon. 
A type of aircraft control surface that combines the functions of both flaps and ailerons. Continuous trailing-edge flap. As of 2014, U.S. Army Research Laboratory (ARL) researchers at NASA's Langley Research Center developed an active-flap design for helicopter rotor blades. The Continuous Trailing-Edge Flap (CTEF) uses components to change blade camber during flight, eliminating mechanical hinges in order to improve system reliability. Prototypes were constructed for wind-tunnel testing. A team from ARL completed a live-fire test of a rotor blade with individual blade control technology in January 2016. The live fire experiments explored the ballistic vulnerability of blade control technologies. Researchers fired three shots representative of typical ground fire on a 7-foot-span, 10-inch-chord rotor blade section with a 4-foot-long CTEF at ARL's Airbase Experimental Facility. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "L = \\tfrac12 \\rho V^2 S C_L" }, { "math_id": 1, "text": "\\rho" }, { "math_id": 2, "text": "C_L" } ]
https://en.wikipedia.org/wiki?curid=1002551
100267
Dihedral group
Group of symmetries of a regular polygon In mathematics, a dihedral group is the group of symmetries of a regular polygon, which includes rotations and reflections. Dihedral groups are among the simplest examples of finite groups, and they play an important role in group theory, geometry, and chemistry. The notation for the dihedral group differs in geometry and abstract algebra. In geometry, D"n" or Dih"n" refers to the symmetries of the "n"-gon, a group of order 2"n". In abstract algebra, D2"n" refers to this same dihedral group. This article uses the geometric convention, D"n". Definition. The word "dihedral" comes from "di-" and "-hedron". The latter comes from the Greek word hédra, which means "face of a geometrical solid". Overall it thus refers to the two faces of a polygon. Elements. A regular polygon with formula_0 sides has formula_1 different symmetries: formula_0 rotational symmetries and formula_0 reflection symmetries. Usually, we take formula_2 here. The associated rotations and reflections make up the dihedral group formula_3. If formula_0 is odd, each axis of symmetry connects the midpoint of one side to the opposite vertex. If formula_0 is even, there are formula_4 axes of symmetry connecting the midpoints of opposite sides and formula_4 axes of symmetry connecting opposite vertices. In either case, there are formula_0 axes of symmetry and formula_1 elements in the symmetry group. Reflecting in one axis of symmetry followed by reflecting in another axis of symmetry produces a rotation through twice the angle between the axes. Group structure. As with any geometric object, the composition of two symmetries of a regular polygon is again a symmetry of this object. With composition of symmetries to produce another as the binary operation, this gives the symmetries of a polygon the algebraic structure of a finite group. The following Cayley table shows the effect of composition in the group D3 (the symmetries of an equilateral triangle). 
r0 denotes the identity; r1 and r2 denote counterclockwise rotations by 120° and 240° respectively, and s0, s1 and s2 denote reflections across the three lines shown in the adjacent picture. For example, s2s1 = r1, because the reflection s1 followed by the reflection s2 results in a rotation of 120°. The order of elements denoting the composition is right to left, reflecting the convention that the element acts on the expression to its right. The composition operation is not commutative. In general, the group D"n" has elements r0, ..., r"n"−1 and s0, ..., s"n"−1, with composition given by the following formulae: formula_5 In all cases, addition and subtraction of subscripts are to be performed using modular arithmetic with modulus "n". Matrix representation. If we center the regular polygon at the origin, then elements of the dihedral group act as linear transformations of the plane. This lets us represent elements of D"n" as matrices, with composition being matrix multiplication. This is an example of a (2-dimensional) group representation. For example, the elements of the group D4 can be represented by the following eight matrices: formula_6 In general, the matrices for elements of D"n" have the following form: formula_7 r"k" is a rotation matrix, expressing a counterclockwise rotation through an angle of 2"πk"/"n". s"k" is a reflection across a line that makes an angle of "πk"/"n" with the "x"-axis. Other definitions. D"n" can also be defined as the group with presentation formula_8 Using the relation formula_9, we obtain the relation formula_10. It follows that formula_11 is generated by formula_12 and formula_13. This substitution also shows that formula_11 has the presentation formula_14 In particular, D"n" belongs to the class of Coxeter groups. Small dihedral groups. D1 is isomorphic to Z2, the cyclic group of order 2. D2 is isomorphic to K4, the Klein four-group. 
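The composition rules given above (formula_5) can be checked directly with a short script. Modeling elements as ('r'|'s', k) pairs is an illustrative encoding, not standard notation.

```python
# Composition in D_n per the rules: r_i r_j = r_{i+j}, r_i s_j = s_{i+j},
# s_i r_j = s_{i-j}, s_i s_j = r_{i-j}, subscripts taken mod n.

def compose(a, b, n):
    """Compose two elements of D_n, acting right-to-left as in the article."""
    ta, i = a
    tb, j = b
    if ta == 'r' and tb == 'r':
        return ('r', (i + j) % n)
    if ta == 'r' and tb == 's':
        return ('s', (i + j) % n)
    if ta == 's' and tb == 'r':
        return ('s', (i - j) % n)
    return ('r', (i - j) % n)

# The worked example from the Cayley table of D_3: s_2 s_1 = r_1.
print(compose(('s', 2), ('s', 1), 3))  # ('r', 1)
```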
D1 and D2 are exceptional in that D"n" is a subgroup of the symmetric group S"n" only for "n" ≥ 3; for "n" = 1 or "n" = 2, D"n" is too large to be a subgroup. The cycle graphs of dihedral groups consist of an "n"-element cycle and "n" 2-element cycles. The dark vertex in the cycle graphs below of various dihedral groups represents the identity element, and the other vertices are the other elements of the group. A cycle consists of successive powers of either of the elements connected to the identity element. The dihedral group as symmetry group in 2D and rotation group in 3D. An example of abstract group D"n", and a common way to visualize it, is the group of Euclidean plane isometries which keep the origin fixed. These groups form one of the two series of discrete point groups in two dimensions. D"n" consists of "n" rotations of multiples of 360°/"n" about the origin, and reflections across "n" lines through the origin, making angles of multiples of 180°/"n" with each other. This is the symmetry group of a regular polygon with "n" sides (for "n" ≥ 3; this extends to the cases "n" = 1 and "n" = 2 where we have a plane with respectively a point offset from the "center" of the "1-gon" and a "2-gon" or line segment). D"n" is generated by a rotation r of order "n" and a reflection s of order 2 such that formula_15 In geometric terms: in the mirror a rotation looks like an inverse rotation. In terms of complex numbers: multiplication by formula_16 and complex conjugation. In matrix form, by setting formula_17 and defining formula_18 and formula_19 for formula_20 we can write the product rules for D"n" as formula_21 The dihedral group D2 is generated by the rotation r of 180 degrees, and the reflection s across the "x"-axis. The elements of D2 can then be represented as {e, r, s, rs}, where e is the identity or null transformation and rs is the reflection across the "y"-axis. D2 is isomorphic to the Klein four-group. 
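The matrix representation given earlier can be verified numerically, for example the defining relation s r s^-1 = r^-1 (and since each reflection s_k is its own inverse, s r s must equal r^-1). This is a minimal sketch using only the standard library.

```python
# Check that the rotation/reflection matrices of D_n satisfy s r s^-1 = r^-1.

import math

def mat_r(k, n):
    """Rotation matrix r_k: counterclockwise rotation by 2*pi*k/n."""
    a = 2 * math.pi * k / n
    return [[math.cos(a), -math.sin(a)], [math.sin(a), math.cos(a)]]

def mat_s(k, n):
    """Reflection matrix s_k: reflection across a line at angle pi*k/n."""
    a = 2 * math.pi * k / n
    return [[math.cos(a), math.sin(a)], [math.sin(a), -math.cos(a)]]

def mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B, eps=1e-9):
    return all(abs(A[i][j] - B[i][j]) < eps for i in range(2) for j in range(2))

n = 5
r1, s0 = mat_r(1, n), mat_s(0, n)
# s0 is its own inverse, so s0 r1 s0 should equal r1^-1 = r_{n-1}.
assert close(mul(mul(s0, r1), s0), mat_r(n - 1, n))
```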
For "n" &gt; 2 the operations of rotation and reflection in general do not commute and D"n" is not abelian; for example, in D4, a rotation of 90 degrees followed by a reflection yields a different result from a reflection followed by a rotation of 90 degrees. Thus, beyond their obvious application to problems of symmetry in the plane, these groups are among the simplest examples of non-abelian groups, and as such arise frequently as easy counterexamples to theorems which are restricted to abelian groups. The 2"n" elements of D"n" can be written as e, r, r2, ... , r"n"−1, s, r s, r2s, ... , r"n"−1s. The first "n" listed elements are rotations and the remaining "n" elements are axis-reflections (all of which have order 2). The product of two rotations or two reflections is a rotation; the product of a rotation and a reflection is a reflection. So far, we have considered D"n" to be a subgroup of O(2), i.e. the group of rotations (about the origin) and reflections (across axes through the origin) of the plane. However, notation D"n" is also used for a subgroup of SO(3) which is also of abstract group type D"n": the proper symmetry group of a "regular polygon embedded in three-dimensional space" (if "n" ≥ 3). Such a figure may be considered as a degenerate regular solid with its face counted twice. Therefore, it is also called a "dihedron" (Greek: solid with two faces), which explains the name "dihedral group" (in analogy to "tetrahedral", "octahedral" and "icosahedral group", referring to the proper symmetry groups of a regular tetrahedron, octahedron, and icosahedron respectively). Properties. The properties of the dihedral groups D"n" with "n" ≥ 3 depend on whether "n" is even or odd. 
For example, the center of D"n" consists only of the identity if "n" is odd, but if "n" is even the center has two elements, namely the identity and the element r"n"/2 (with D"n" as a subgroup of O(2), this is inversion; since it is scalar multiplication by −1, it is clear that it commutes with any linear transformation). In the case of 2D isometries, this corresponds to adding inversion, giving rotations and mirrors in between the existing ones. For "n" twice an odd number, the abstract group D"n" is isomorphic with the direct product of D"n" / 2 and Z2. Generally, if "m" divides "n", then D"n" has "n"/"m" subgroups of type D"m", and one subgroup formula_22"m". Therefore, the total number of subgroups of D"n" ("n" ≥ 1), is equal to "d"("n") + σ("n"), where "d"("n") is the number of positive divisors of "n" and "σ"("n") is the sum of the positive divisors of "n". See list of small groups for the cases "n" ≤ 8. The dihedral group of order 8 (D4) is the smallest example of a group that is not a T-group. Any of its two Klein four-group subgroups (which are normal in D4) has as normal subgroup order-2 subgroups generated by a reflection (flip) in D4, but these subgroups are not normal in D4. Conjugacy classes of reflections. All the reflections are conjugate to each other whenever "n" is odd, but they fall into two conjugacy classes if "n" is even. If we think of the isometries of a regular "n"-gon: for odd "n" there are rotations in the group between every pair of mirrors, while for even "n" only half of the mirrors can be reached from one by these rotations. Geometrically, in an odd polygon every axis of symmetry passes through a vertex and a side, while in an even polygon there are two sets of axes, each corresponding to a conjugacy class: those that pass through two vertices and those that pass through two sides. 
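The subgroup count "d"("n") + "σ"("n") quoted above can be verified by brute force for small "n": enumerate every subset of D"n" containing the identity and check closure (a nonempty closed subset of a finite group is automatically a subgroup). The encoding of elements as ('r'|'s', k) pairs is an illustrative choice; the enumeration is only feasible for tiny groups.

```python
# Brute-force verification of the subgroup-count formula d(n) + sigma(n).

from itertools import combinations

def compose(a, b, n):
    """Compose two elements of D_n using the composition rules of the article."""
    ta, i = a
    tb, j = b
    if ta == 'r':
        return (tb, (i + j) % n)          # r_i r_j = r_{i+j}, r_i s_j = s_{i+j}
    return ('r' if tb == 's' else 's', (i - j) % n)  # s_i r_j = s_{i-j}, s_i s_j = r_{i-j}

def count_subgroups(n):
    elems = [(t, k) for t in 'rs' for k in range(n)]
    count = 0
    for size in range(1, 2 * n + 1):
        for subset in combinations(elems, size):
            s = set(subset)
            if ('r', 0) in s and all(compose(a, b, n) in s for a in s for b in s):
                count += 1
    return count

def d_plus_sigma(n):
    divisors = [m for m in range(1, n + 1) if n % m == 0]
    return len(divisors) + sum(divisors)

for n in (1, 2, 3, 4):
    assert count_subgroups(n) == d_plus_sigma(n)
```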
Algebraically, this is an instance of the conjugate Sylow theorem (for "n" odd): for "n" odd, each reflection, together with the identity, form a subgroup of order 2, which is a Sylow 2-subgroup (2 = 21 is the maximum power of 2 dividing 2"n" = 2[2"k" + 1]), while for "n" even, these order 2 subgroups are not Sylow subgroups because 4 (a higher power of 2) divides the order of the group. For "n" even there is instead an outer automorphism interchanging the two types of reflections (properly, a class of outer automorphisms, which are all conjugate by an inner automorphism). Automorphism group. The automorphism group of D"n" is isomorphic to the holomorph of formula_22/"n"formula_22, i.e., to Hol(formula_22/"n"formula_22) = {"ax" + "b" | ("a", "n") = 1} and has order "nϕ"("n"), where "ϕ" is Euler's totient function, the number of "k" in 1, ..., "n" − 1 coprime to "n". It can be understood in terms of the generators of a reflection and an elementary rotation (rotation by "k"(2"π"/"n"), for "k" coprime to "n"); which automorphisms are inner and outer depends on the parity of "n". For "n" odd the inner automorphism group has order 2"n", and for "n" even (other than "n" = 2) it has order "n"; automorphisms that multiply angles of rotation by "k" (coprime to "n") are outer unless "k" = ±1. Examples of automorphism groups. D9 has 18 inner automorphisms. As 2D isometry group D9, the group has mirrors at 20° intervals. The 18 inner automorphisms provide rotation of the mirrors by multiples of 20°, and reflections. As isometry group these are all automorphisms. As abstract group there are in addition to these, 36 outer automorphisms; e.g., multiplying angles of rotation by 2. D10 has 10 inner automorphisms. As 2D isometry group D10, the group has mirrors at 18° intervals. The 10 inner automorphisms provide rotation of the mirrors by multiples of 36°, and reflections. As isometry group there are 10 more automorphisms; they are conjugates by isometries outside the group, rotating the mirrors 18° with respect to the inner automorphisms. 
As abstract group there are in addition to these 10 inner and 10 outer automorphisms, 20 more outer automorphisms; e.g., multiplying rotations by 3. Compare the values 6 and 4 of Euler's totient function "φ"("n"), the order of the multiplicative group of integers modulo "n", for "n" = 9 and 10 respectively. This triples and doubles the number of automorphisms compared with the two automorphisms as isometries (keeping the order of the rotations the same or reversing the order). The only values of "n" for which "φ"("n") = 2 are 3, 4, and 6, and consequently, there are only three dihedral groups that are isomorphic to their own automorphism groups, namely D3 (order 6), D4 (order 8), and D6 (order 12). Inner automorphism group. The inner automorphism group of D"n" is isomorphic to: the trivial group if "n" = 1 or 2; D"n" itself if "n" is odd ("n" ≥ 3); D"n" / Z2 if "n" is even ("n" ≥ 4). Generalizations. There are several important generalizations of the dihedral groups: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "2n" }, { "math_id": 2, "text": "n \\ge 3" }, { "math_id": 3, "text": "\\mathrm{D}_n" }, { "math_id": 4, "text": "n/2" }, { "math_id": 5, "text": "\\mathrm{r}_i\\,\\mathrm{r}_j = \\mathrm{r}_{i+j}, \\quad \\mathrm{r}_i\\,\\mathrm{s}_j = \\mathrm{s}_{i+j}, \\quad \\mathrm{s}_i\\,\\mathrm{r}_j = \\mathrm{s}_{i-j}, \\quad \\mathrm{s}_i\\,\\mathrm{s}_j = \\mathrm{r}_{i-j}." }, { "math_id": 6, "text": "\\begin{matrix}\n \\mathrm{r}_0 = \\left(\\begin{smallmatrix} 1 & 0 \\\\[0.2em] 0 & 1 \\end{smallmatrix}\\right), &\n \\mathrm{r}_1 = \\left(\\begin{smallmatrix} 0 & -1 \\\\[0.2em] 1 & 0 \\end{smallmatrix}\\right), &\n \\mathrm{r}_2 = \\left(\\begin{smallmatrix} -1 & 0 \\\\[0.2em] 0 & -1 \\end{smallmatrix}\\right), &\n \\mathrm{r}_3 = \\left(\\begin{smallmatrix} 0 & 1 \\\\[0.2em] -1 & 0 \\end{smallmatrix}\\right), \\\\[1em]\n \\mathrm{s}_0 = \\left(\\begin{smallmatrix} 1 & 0 \\\\[0.2em] 0 & -1 \\end{smallmatrix}\\right), &\n \\mathrm{s}_1 = \\left(\\begin{smallmatrix} 0 & 1 \\\\[0.2em] 1 & 0 \\end{smallmatrix}\\right), &\n \\mathrm{s}_2 = \\left(\\begin{smallmatrix} -1 & 0 \\\\[0.2em] 0 & 1 \\end{smallmatrix}\\right), &\n \\mathrm{s}_3 = \\left(\\begin{smallmatrix} 0 & -1 \\\\[0.2em] -1 & 0 \\end{smallmatrix}\\right).\n\\end{matrix}" }, { "math_id": 7, "text": "\\begin{align}\n \\mathrm{r}_k & = \\begin{pmatrix}\n \\cos \\frac{2\\pi k}{n} & -\\sin \\frac{2\\pi k}{n} \\\\\n \\sin \\frac{2\\pi k}{n} & \\cos \\frac{2\\pi k}{n}\n \\end{pmatrix}\\ \\ \\text{and} \\\\[5pt]\n \\mathrm{s}_k & = \\begin{pmatrix}\n \\cos \\frac{2\\pi k}{n} & \\sin \\frac{2\\pi k}{n} \\\\\n \\sin \\frac{2\\pi k}{n} & -\\cos \\frac{2\\pi k}{n}\n \\end{pmatrix}\n .\n\\end{align}" }, { "math_id": 8, "text": "\\begin{align}\n \\mathrm{D}_n &= \\left\\langle r, s \\mid \\operatorname{ord}(r) = n, \\operatorname{ord}(s) = 2, srs^{-1} = r^{-1} \\right\\rangle \\\\\n &= \\left\\langle r,s \\mid r^n = s^2 = (sr)^2 = 1 \\right\\rangle.\n\\end{align}" }, 
{ "math_id": 9, "text": "s^2 = 1" }, { "math_id": 10, "text": "r= s \\cdot sr" }, { "math_id": 11, "text": "\\mathrm D_n" }, { "math_id": 12, "text": "s" }, { "math_id": 13, "text": "t:=sr" }, { "math_id": 14, "text": "\n \\mathrm{D}_n = \\left\\langle s,t \\mid s^2=1, t^2 = 1, (st)^n=1\\right\\rangle\n.\n" }, { "math_id": 15, "text": "\\mathrm{srs} = \\mathrm{r}^{-1} \\, " }, { "math_id": 16, "text": "e^{2\\pi i \\over n}" }, { "math_id": 17, "text": "\n \\mathrm{r}_1 = \\begin{bmatrix}\n \\cos{2\\pi \\over n} & -\\sin{2\\pi \\over n} \\\\[4pt]\n \\sin{2\\pi \\over n} & \\cos{2\\pi \\over n}\n \\end{bmatrix}\\qquad\n \\mathrm{s}_0 = \\begin{bmatrix}\n 1 & 0 \\\\\n 0 & -1\n \\end{bmatrix}\n" }, { "math_id": 18, "text": "\\mathrm{r}_j = \\mathrm{r}_1^j" }, { "math_id": 19, "text": "\\mathrm{s}_j = \\mathrm{r}_j \\, \\mathrm{s}_0" }, { "math_id": 20, "text": "j \\in \\{1,\\ldots,n-1\\}" }, { "math_id": 21, "text": "\\begin{align}\n \\mathrm{r}_j \\, \\mathrm{r}_k &= \\mathrm{r}_{(j+k) \\text{ mod }n} \\\\\n \\mathrm{r}_j \\, \\mathrm{s}_k &= \\mathrm{s}_{(j+k) \\text{ mod }n} \\\\\n \\mathrm{s}_j \\, \\mathrm{r}_k &= \\mathrm{s}_{(j-k) \\text{ mod }n} \\\\\n \\mathrm{s}_j \\, \\mathrm{s}_k &= \\mathrm{r}_{(j-k) \\text{ mod }n}\n\\end{align}" }, { "math_id": 22, "text": "\\mathbb{Z}" } ]
https://en.wikipedia.org/wiki?curid=100267
1002779
Azeotropic distillation
Any of a range of techniques used to break an azeotrope in distillation In chemistry, azeotropic distillation is any of a range of techniques used to break an azeotrope in distillation. In chemical engineering, "azeotropic distillation" usually refers to the specific technique of adding another component to generate a new, lower-boiling azeotrope that is heterogeneous (e.g. producing two immiscible liquid phases), such as the example below with the addition of benzene to water and ethanol. This practice of adding an entrainer which forms a separate phase is a specific subset of (industrial) azeotropic distillation methods, or a combination thereof. In some senses, adding an entrainer is similar to extractive distillation. Material separation agent. The addition of a material separation agent, such as benzene to an ethanol/water mixture, changes the molecular interactions and eliminates the azeotrope. Added in the liquid phase, the new component can alter the activity coefficients of various compounds in different ways, thus altering a mixture's relative volatility. Greater deviations from Raoult's law make it easier to achieve significant changes in relative volatility with the addition of another component. In azeotropic distillation the volatility of the added component is the same as that of the mixture, and a new azeotrope is formed with one or more of the components based on differences in polarity. If the material separation agent is selected to form azeotropes with more than one component in the feed then it is referred to as an entrainer. The added entrainer should be recovered by distillation, decantation, or another separation method and returned near the top of the original column. Distillation of ethanol/water. A common historical example of azeotropic distillation is its use in dehydrating ethanol and water mixtures. For this, a near azeotropic mixture is sent to the final column where azeotropic distillation takes place. 
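The way activity coefficients deviating from Raoult's law create an azeotrope can be illustrated numerically. The Python sketch below uses the van Laar activity model with rough, illustrative constants for ethanol(1)/water(2) near its normal boiling point; the parameter values, vapor pressures, and function names are assumptions made for this sketch, not data from the article. An azeotrope appears where the relative volatility α = (γ1·P1sat)/(γ2·P2sat) crosses 1:

```python
import math

# Illustrative (approximate, NOT authoritative) constants for ethanol(1)/water(2)
# near atmospheric boiling: van Laar parameters and pure-component vapor
# pressures in kPa at roughly 78 deg C. All numbers are assumptions for the sketch.
A12, A21 = 1.68, 0.92          # van Laar constants (dimensionless)
P1SAT, P2SAT = 100.0, 44.0     # vapor pressures of ethanol and water, kPa

def activity(x1):
    """van Laar activity coefficients (gamma1, gamma2) at liquid mole fraction x1."""
    x2 = 1.0 - x1
    d = A12 * x1 + A21 * x2
    g1 = math.exp(A12 * (A21 * x2 / d) ** 2)
    g2 = math.exp(A21 * (A12 * x1 / d) ** 2)
    return g1, g2

def relative_volatility(x1):
    """alpha = (gamma1 P1sat) / (gamma2 P2sat); an azeotrope sits where alpha = 1."""
    g1, g2 = activity(x1)
    return (g1 * P1SAT) / (g2 * P2SAT)

def azeotrope(lo=0.01, hi=0.99, steps=10_000):
    """Scan for the composition where alpha crosses 1 (vapor composition = liquid)."""
    prev_x, prev_f = lo, relative_volatility(lo) - 1.0
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        f = relative_volatility(x) - 1.0
        if prev_f * f <= 0.0:
            return 0.5 * (prev_x + x)   # sign change: azeotrope bracketed here
        prev_x, prev_f = x, f
    return None                          # no azeotrope under this model
```

With these illustrative numbers the crossing lands at a high ethanol mole fraction, qualitatively consistent with the real ethanol–water azeotrope; the point of the sketch is only that activity coefficients departing from Raoult's law produce the α = 1 crossing that ideal mixtures lack.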
Several entrainers can be used for this specific process: benzene, pentane, cyclohexane, hexane, heptane, isooctane, acetone, and diethyl ether are all options as the entrainer. Of these, benzene and cyclohexane have been used the most extensively, but since the identification of benzene as a carcinogen, toluene is used instead. Pressure-swing distillation. Another method, pressure-swing distillation, relies on the fact that an azeotrope is pressure dependent. An azeotrope is not a range of concentrations that cannot be distilled, but the point at which the activity coefficients of the components cross one another. If the azeotrope can be "jumped over", distillation can continue, although because the activity coefficients have crossed, the component which is boiling will change. For instance, in a distillation of ethanol and water, water will boil out of the remaining ethanol, rather than the ethanol out of the water as at lower concentrations. Overall, pressure-swing distillation is a robust and relatively unsophisticated method compared to multi-component distillation or membrane processes, but its energy demand is in general higher. The investment cost of the distillation columns is also higher, due to the pressure inside the vessels. Molecular sieves. For low boiling azeotropes, distillation may not allow the components to be fully separated, and one must make use of separation methods that do not rely on distillation. A common approach involves the use of molecular sieves. Treatment of 96% ethanol with molecular sieves gives anhydrous alcohol, the sieves having adsorbed water from the mixture. The sieves can be subsequently regenerated by dehydration using a vacuum oven. Dehydration reactions. In organic chemistry, some dehydration reactions are subject to unfavorable but fast equilibria. 
One example is the formation of dioxolanes from aldehydes: RCHO + (CH2OH)2 formula_0 RCH(OCH2)2 + H2O Such unfavorable reactions proceed when water is removed by azeotropic distillation.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=1002779
1003242
Rodrigues' formula
Formula for the Legendre polynomials In mathematics, Rodrigues' formula (formerly called the Ivory–Jacobi formula) generates the Legendre polynomials. It was independently introduced by Olinde Rodrigues (1816), Sir James Ivory (1824) and Carl Gustav Jacobi (1827). The name "Rodrigues formula" was introduced by Heine in 1878, after Hermite pointed out in 1865 that Rodrigues was the first to discover it. The term is also used to describe similar formulas for other orthogonal polynomials. describes the history of the Rodrigues formula in detail. Statement. Let formula_0 be a sequence of orthogonal polynomials defined on the interval formula_1 satisfying the orthogonality condition formula_2 where formula_3 is a suitable weight function, formula_4 is a constant depending on formula_5, and formula_6 is the Kronecker delta. If the weight function formula_3 satisfies the following differential equation (called Pearson's differential equation), formula_7 where formula_8 is a polynomial with degree at most 1 and formula_9 is a polynomial with degree at most 2 and, further, the limits formula_10 then it can be shown that formula_11 satisfies a relation of the form, formula_12 for some constants formula_13. This relation is called "Rodrigues' type formula", or just "Rodrigues' formula". The best-known applications of Rodrigues' type formulas are the formulas for Legendre, Laguerre and Hermite polynomials: Rodrigues stated his formula for Legendre polynomials formula_14: formula_15 Laguerre polynomials are usually denoted "L"0, "L"1, ..., and the Rodrigues formula can be written as formula_16 The Rodrigues formula for the Hermite polynomials can be written as formula_17 Similar formulae hold for many other sequences of orthogonal functions arising from Sturm–Liouville equations, and these are also called the Rodrigues formula (or Rodrigues' type formula) for that case, especially when the resulting sequence is polynomial. References. 
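As a concrete check of Rodrigues' formula for the Legendre polynomials, the following Python sketch (exact rational arithmetic; the helper names are my own) expands (x² − 1)^n, differentiates "n" times, and scales by 1/(2^n n!), recovering the familiar low-degree Legendre polynomials:

```python
from fractions import Fraction
from math import factorial

def poly_mul(p, q):
    """Multiply polynomials given as ascending coefficient lists."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_diff(p):
    """Differentiate once: d/dx sum c_i x^i = sum i c_i x^(i-1)."""
    return [Fraction(i) * c for i, c in enumerate(p)][1:] or [Fraction(0)]

def legendre_rodrigues(n):
    """P_n via Rodrigues' formula: (1 / (2^n n!)) d^n/dx^n (x^2 - 1)^n."""
    base = [Fraction(-1), Fraction(0), Fraction(1)]   # x^2 - 1, ascending coefficients
    p = [Fraction(1)]
    for _ in range(n):
        p = poly_mul(p, base)                         # build (x^2 - 1)^n
    for _ in range(n):
        p = poly_diff(p)                              # n-th derivative
    scale = Fraction(1, 2 ** n * factorial(n))
    return [scale * c for c in p]
```

For instance, `legendre_rodrigues(2)` yields the coefficients of P2(x) = (3x² − 1)/2.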
&lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(P_n(x))_{n=0}^\\infty" }, { "math_id": 1, "text": "[a, b]" }, { "math_id": 2, "text": "\\int_a^b P_m(x) P_n(x) w(x) \\, dx = K_n \\delta_{m,n}," }, { "math_id": 3, "text": "w(x)" }, { "math_id": 4, "text": "K_n" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "\\delta_{m,n}" }, { "math_id": 7, "text": "\\frac{w'(x)}{w(x)} = \\frac{A(x)}{B(x)}," }, { "math_id": 8, "text": "A(x)" }, { "math_id": 9, "text": "B(x)" }, { "math_id": 10, "text": "\\lim_{x \\to a} w(x) B(x) = 0, \\qquad \\lim_{x \\to b} w(x) B(x) = 0." }, { "math_id": 11, "text": "P_n(x)" }, { "math_id": 12, "text": "P_n(x) = \\frac{c_n}{w(x)} \\frac{d^n}{dx^n} \\!\\left[ B(x)^n w(x)\\right]," }, { "math_id": 13, "text": "c_n" }, { "math_id": 14, "text": "P_n" }, { "math_id": 15, "text": "P_n(x) = \\frac{1}{2^n n!} \\frac{d^n}{dx^n} \\!\\left[ (x^2 -1)^n \\right]\\!." }, { "math_id": 16, "text": "L_n(x) = \\frac{e^x}{n!} \\frac{d^n}{dx^n} \\!\\left[e^{-x} x^n\\right] = \\frac{1}{n!} \\left( \\frac{d}{dx} -1 \\right) ^n x^n," }, { "math_id": 17, "text": "H_n(x)=(-1)^n e^{x^2} \\frac{d^n}{dx^n} \\!\\left[e^{-x^2}\\right] = \\left(2x-\\frac{d}{dx} \\right)^n\\cdot 1." } ]
https://en.wikipedia.org/wiki?curid=1003242
100337
Standard ML
General-purpose functional programming language Standard ML (SML) is a general-purpose, high-level, modular, functional programming language with compile-time type checking and type inference. It is popular for writing compilers, for programming language research, and for developing theorem provers. Standard ML is a modern dialect of ML, the language used in the Logic for Computable Functions (LCF) theorem-proving project. It is distinctive among widely used languages in that it has a formal specification, given as typing rules and operational semantics in "The Definition of Standard ML". Language. Standard ML is a functional programming language with some impure features. Programs written in Standard ML consist of expressions in contrast to statements or commands, although some expressions of type unit are only evaluated for their side-effects. Functions. Like all functional languages, a key feature of Standard ML is the function, which is used for abstraction. The factorial function can be expressed as follows: fun factorial n = if n = 0 then 1 else n * factorial (n - 1) Type inference. An SML compiler must infer the static type int -&gt; int without user-supplied type annotations. It has to deduce that n is only used with integer expressions, and must therefore itself be an integer, and that all terminal expressions are integer expressions. Declarative definitions. The same function can be expressed with clausal function definitions where the "if"-"then"-"else" conditional is replaced with templates of the factorial function evaluated for specific values: fun factorial 0 = 1 | factorial n = n * factorial (n - 1) Imperative definitions. or iteratively: fun factorial n = let val i = ref n and acc = ref 1 in while !i &gt; 0 do (acc := !acc * !i; i := !i - 1); !acc end Lambda functions. 
or as a lambda function: val rec factorial = fn 0 =&gt; 1 | n =&gt; n * factorial (n - 1) Here, the keyword val introduces a binding of an identifier to a value, fn introduces an anonymous function, and rec allows the definition to be self-referential. Local definitions. The encapsulation of an invariant-preserving tail-recursive tight loop with one or more accumulator parameters within an invariant-free outer function, as seen here, is a common idiom in Standard ML. Using a local function, it can be rewritten in a more efficient tail-recursive style: local fun loop (0, acc) = acc | loop (m, acc) = loop (m - 1, m * acc) in fun factorial n = loop (n, 1) end Type synonyms. A type synonym is defined with the keyword type. Here is a type synonym for points on a plane, and functions computing the distances between two points, and the area of a triangle with the given corners as per Heron's formula. (These definitions will be used in subsequent examples). type loc = real * real fun square (x : real) = x * x fun dist (x, y) (x', y') = Math.sqrt (square (x' - x) + square (y' - y)) fun heron (a, b, c) = let val x = dist a b val y = dist b c val z = dist a c val s = (x + y + z) / 2.0 in Math.sqrt (s * (s - x) * (s - y) * (s - z)) end Algebraic datatypes. Standard ML provides strong support for algebraic datatypes (ADT). A data type can be thought of as a disjoint union of tuples (or a "sum of products"). They are easy to define and easy to use, largely because of pattern matching, and most Standard ML implementations' pattern-exhaustiveness checking and pattern redundancy checking. In object-oriented programming languages, a disjoint union can be expressed as class hierarchies. However, in contrast to class hierarchies, ADTs are closed. Thus, the extensibility of ADTs is orthogonal to the extensibility of class hierarchies. Class hierarchies can be extended with new subclasses which implement the same interface, while the functions of ADTs can be extended for the fixed set of constructors. 
See expression problem. A datatype is defined with the keyword datatype, as in: datatype shape = Circle of loc * real (* center and radius *) | Square of loc * real (* upper-left corner and side length; axis-aligned *) | Triangle of loc * loc * loc (* corners *) Note that a type synonym cannot be recursive; datatypes are necessary to define recursive constructors. (This is not at issue in this example.) Pattern matching. Patterns are matched in the order in which they are defined. C programmers can use tagged unions, dispatching on tag values, to do what ML does with datatypes and pattern matching. Nevertheless, while a C program decorated with appropriate checks will, in a sense, be as robust as the corresponding ML program, those checks will of necessity be dynamic; ML's static checks provide strong guarantees about the correctness of the program at compile time. Function arguments can be defined as patterns as follows: fun area (Circle (_, r)) = Math.pi * square r | area (Square (_, s)) = square s | area (Triangle p) = heron p (* see above *) The so-called "clausal form" of function definition, where arguments are defined as patterns, is merely syntactic sugar for a case expression: fun area shape = case shape of Circle (_, r) =&gt; Math.pi * square r | Square (_, s) =&gt; square s | Triangle p =&gt; heron p Exhaustiveness checking. Pattern-exhaustiveness checking will make sure that each constructor of the datatype is matched by at least one pattern. The following pattern is not exhaustive: fun center (Circle (c, _)) = c | center (Square ((x, y), s)) = (x + s / 2.0, y + s / 2.0) There is no pattern for the Triangle case in the center function. The compiler will issue a warning that the case expression is not exhaustive, and if a Triangle is passed to this function at runtime, the exception Match will be raised. Redundancy checking. 
The pattern in the second clause of the following (meaningless) function is redundant: fun f (Circle ((x, y), r)) = x + y | f (Circle _) = 1.0 | f _ = 0.0 Any value that would match the pattern in the second clause would also match the pattern in the first clause, so the second clause is unreachable. Therefore, this definition as a whole exhibits redundancy, and causes a compile-time warning. The following function definition is exhaustive and not redundant: val hasCorners = fn (Circle _) =&gt; false | _ =&gt; true If control gets past the first pattern (Circle _), we know the shape must be either a Square or a Triangle. In either of those cases, we know the shape has corners, so we can return true without discerning the actual shape. Higher-order functions. Functions can consume functions as arguments: fun map f (x, y) = (f x, f y) Functions can produce functions as return values: fun constant k = (fn _ =&gt; k) Functions can also both consume and produce functions: fun compose (f, g) = (fn x =&gt; f (g x)) The function map from the basis library is one of the most commonly used higher-order functions in Standard ML: fun map _ [] = [] | map f (x :: xs) = f x :: map f xs A more efficient implementation with tail-recursive List.foldl: fun map f = List.rev o List.foldl (fn (x, acc) =&gt; f x :: acc) [] Exceptions. Exceptions are raised with the keyword raise and handled with the pattern matching construct handle. The exception system can implement non-local exit; this optimization technique is suitable for functions like the following. local exception Zero; val p = fn (0, _) =&gt; raise Zero | (a, b) =&gt; a * b in fun prod xs = List.foldl p 1 xs handle Zero =&gt; 0 end When Zero is raised, control leaves the function altogether. Consider the alternative: the value 0 would be returned, it would be multiplied by the next integer in the list, the resulting value (inevitably 0) would be returned, and so on. The raising of the exception allows control to skip over the entire chain of frames and avoid the associated computation. 
Note the use of the underscore (_) as a wildcard pattern. The same optimization can be obtained with a tail call. local fun p a (0 :: _) = 0 | p a (x :: xs) = p (a * x) xs | p a [] = a in val prod = p 1 end Module system. Standard ML's advanced module system allows programs to be decomposed into hierarchically organized "structures" of logically related type and value definitions. Modules provide not only namespace control but also abstraction, in the sense that they allow the definition of abstract data types. Three main syntactic constructs comprise the module system: signatures, structures and functors. Signatures. A "signature" is an interface, usually thought of as a type for a structure; it specifies the names of all entities provided by the structure, the arity of each type component, the type of each value component, and the signature of each substructure. The definitions of type components are optional; type components whose definitions are hidden are "abstract types". For example, the signature for a queue may be: signature QUEUE = sig type 'a queue exception QueueError; val empty : 'a queue val isEmpty : 'a queue -&gt; bool val singleton : 'a -&gt; 'a queue val fromList : 'a list -&gt; 'a queue val insert : 'a * 'a queue -&gt; 'a queue val peek : 'a queue -&gt; 'a val remove : 'a queue -&gt; 'a * 'a queue end This signature describes a module that provides a polymorphic type 'a queue, an exception QueueError, and values that define basic operations on queues. Structures. A "structure" is a module; it consists of a collection of types, exceptions, values and structures (called "substructures") packaged together into a logical unit. 
A queue structure can be implemented as follows: structure TwoListQueue :&gt; QUEUE = struct type 'a queue = 'a list * 'a list exception QueueError; val empty = ([], []) fun isEmpty ([], []) = true | isEmpty _ = false fun singleton a = ([], [a]) fun fromList a = ([], a) fun insert (a, ([], [])) = singleton a | insert (a, (ins, outs)) = (a :: ins, outs) fun peek (_, []) = raise QueueError | peek (ins, outs) = List.hd outs fun remove (_, []) = raise QueueError | remove (ins, [a]) = (a, ([], List.rev ins)) | remove (ins, a :: outs) = (a, (ins, outs)) end This definition declares that TwoListQueue implements QUEUE. Furthermore, the "opaque ascription" denoted by :&gt; states that any types which are not defined in the signature (i.e. queue) should be abstract, meaning that the definition of a queue as a pair of lists is not visible outside the module. The structure implements all of the definitions in the signature. The types and values in a structure can be accessed with "dot notation": val q : string TwoListQueue.queue = TwoListQueue.empty val q' = TwoListQueue.insert (Real.toString Math.pi, q) Functors. A "functor" is a function from structures to structures; that is, a functor accepts one or more arguments, which are usually structures of a given signature, and produces a structure as its result. Functors are used to implement generic data structures and algorithms. One popular algorithm for breadth-first search of trees makes use of queues. Here is a version of that algorithm parameterized over an abstract queue structure: functor BFS (Q: QUEUE) = struct datatype 'a tree = E | T of 'a * 'a tree * 'a tree local fun bfsQ q = if Q.isEmpty q then [] else search (Q.remove q) and search (E, q) = bfsQ q | search (T (x, l, r), q) = x :: bfsQ (insert (insert q l) r) and insert q a = Q.insert (a, q) in fun bfs t = bfsQ (Q.singleton t) end end structure QueueBFS = BFS (TwoListQueue) Within BFS, the representation of the queue is not visible. 
More concretely, there is no way to select the first list in the two-list queue, if that is indeed the representation being used. This data abstraction mechanism makes the breadth-first search truly agnostic to the queue's implementation. This is in general desirable; in this case, the queue structure can safely maintain any logical invariants on which its correctness depends behind the bulletproof wall of abstraction. Code examples. Snippets of SML code are most easily studied by entering them into an interactive top-level. Hello, world! The following is a "Hello, World!" program: print "Hello, world!\n"; Algorithms. Insertion sort. Insertion sort for int lists (ascending) can be expressed concisely as follows: fun insert (x, []) = [x] | insert (x, h :: t) = sort x (h, t) and sort x (h, t) = if x &lt; h then [x, h] @ t else h :: insert (x, t) val insertionsort = List.foldl insert [] Mergesort. Here, the classic mergesort algorithm is implemented in three functions: split, merge and mergesort. Also note the absence of types, with the exception of the syntax [] and :: which signify lists. This code will sort lists of any type, so long as a consistent ordering function is defined. Using Hindley–Milner type inference, the types of all variables can be inferred, even complicated types such as that of the merge function below. Split is implemented with a stateful closure which alternates between true and false, ignoring the input: fun alternator {} = let val state = ref true in fn a =&gt; !state before state := not (!state) end (* Split a list into near-halves which will either be the same length, * or the first will have one more element than the other. * Runs in O(n) time, where n = |xs|. *) fun split xs = List.partition (alternator {}) xs Merge Merge uses a local function loop for efficiency. The inner loop is defined in terms of cases: when both lists are non-empty ((xs, y :: ys)) and when one list is empty ((xs, [])). This function merges two sorted lists into one sorted list. 
Note how the accumulator is built backwards, then reversed before being returned. This is a common technique, since a list is represented as a linked list; this technique requires more clock time, but the asymptotics are not worse. (* Merge two ordered lists using the order cmp. * Pre: each list must already be ordered per cmp. * Runs in O(n) time, where n = |xs| + |ys|. *) fun merge cmp (xs, []) = xs | merge cmp (xs, y :: ys) = let fun loop (a, acc) (xs, []) = List.revAppend (a :: acc, xs) | loop (a, acc) (xs, y :: ys) = if cmp (a, y) then loop (y, a :: acc) (ys, xs) else loop (a, y :: acc) (xs, ys) in loop (y, []) (ys, xs) end Mergesort The main function: fun ap f (x, y) = (f x, f y) (* Sort a list according to the given ordering operation cmp. * Runs in O(n log n) time, where n = |xs|. *) fun mergesort cmp [] = [] | mergesort cmp [x] = [x] | mergesort cmp xs = (merge cmp o ap (mergesort cmp) o split) xs Quicksort. Quicksort can be expressed as follows. Here, quicksort is a closure that consumes an order operator «. infix « fun quicksort (op «) = let fun part p = List.partition (fn x =&gt; x « p) fun sort [] = [] | sort (p :: xs) = join p (part p xs) and join p (l, r) = sort l @ p :: sort r in sort end Expression interpreter. 
Note the relative ease with which a small expression language can be defined and processed: exception TyErr; datatype ty = IntTy | BoolTy fun unify (IntTy, IntTy) = IntTy | unify (BoolTy, BoolTy) = BoolTy | unify (_, _) = raise TyErr datatype exp = True | False | Int of int | Not of exp | Add of exp * exp | If of exp * exp * exp fun infer True = BoolTy | infer False = BoolTy | infer (Int _) = IntTy | infer (Not e) = (assert e BoolTy; BoolTy) | infer (Add (a, b)) = (assert a IntTy; assert b IntTy; IntTy) | infer (If (e, t, f)) = (assert e BoolTy; unify (infer t, infer f)) and assert e t = unify (infer e, t) fun eval True = True | eval False = False | eval (Int n) = Int n | eval (Not e) = if eval e = True then False else True | eval (Add (a, b)) = (case (eval a, eval b) of (Int x, Int y) =&gt; Int (x + y)) | eval (If (e, t, f)) = eval (if eval e = True then t else f) fun run e = (infer e; SOME (eval e)) handle TyErr =&gt; NONE Example usage on well-typed and ill-typed expressions: val SOME (Int 3) = run (Add (Int 1, Int 2)) (* well-typed *) val NONE = run (If (Not (Int 1), True, False)) (* ill-typed *) Arbitrary-precision integers. The IntInf module provides arbitrary-precision integer arithmetic. Moreover, integer literals may be used as arbitrary-precision integers without the programmer having to do anything. The following program implements an arbitrary-precision factorial function: fun factorial (n : IntInf.int) : IntInf.int = if n = 0 then 1 else n * factorial (n - 1) Partial application. Curried functions have many applications, such as eliminating redundant code. For example, a module may require functions of one type, but it is more convenient to write functions of another type where there is a fixed relationship between the two types. A single curried function can factor out this commonality. This is an example of the adapter pattern. 
In this example, d computes the numerical derivative of a given function f at the point x: - fun d delta f x = (f (x + delta) - f (x - delta)) / (2.0 * delta) val d = fn : real -&gt; (real -&gt; real) -&gt; real -&gt; real The type of d indicates that it maps a "real" onto a function with the type (real -&gt; real) -&gt; real -&gt; real. This allows us to partially apply arguments, known as currying. In this case, the function d can be specialised by partially applying it with the argument delta. A good choice for delta when using this algorithm is the cube root of the machine epsilon. - val d' = d 1E~8; val d' = fn : (real -&gt; real) -&gt; real -&gt; real The inferred type indicates that d' expects a function with the type real -&gt; real as its first argument. We can compute an approximation to the derivative of formula_0 at formula_1. The correct answer is formula_2. - d' (fn x =&gt; x * x * x - x - 1.0) 3.0; val it = 25.9999996644 : real Libraries. Standard. The Basis Library has been standardized and ships with most implementations. It provides modules for trees, arrays, and other data structures, and input/output and system interfaces. Third party. For numerical computing, a Matrix module exists (but is currently broken), https://www.cs.cmu.edu/afs/cs/project/pscico/pscico/src/matrix/README.html. For graphics, cairo-sml is an open source interface to the Cairo graphics library. For machine learning, a library for graphical models exists. Implementations. Implementations of Standard ML include the following: Standard Derivative Research All of these implementations are open-source and freely available. Most are implemented themselves in Standard ML. There are no longer any commercial implementations; Harlequin, now defunct, once produced a commercial IDE and compiler called MLWorks which passed on to Xanalys and was later open-sourced after it was acquired by Ravenbrook Limited on April 26, 2013. Major projects using SML. 
The IT University of Copenhagen's entire enterprise architecture is implemented in around 100,000 lines of SML, including staff records, payroll, course administration and feedback, student project management, and web-based self-service interfaces. The proof assistants HOL4, Isabelle, LEGO, and Twelf are written in Standard ML. It is also used by compiler writers and integrated circuit designers such as ARM. References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. About Standard ML About successor ML Practical Academic
[ { "math_id": 0, "text": "f(x) = x^3-x-1" }, { "math_id": 1, "text": "x=3" }, { "math_id": 2, "text": "f'(3) = 27-1 = 26" } ]
https://en.wikipedia.org/wiki?curid=100337
1003410
S transform
S transform as a time–frequency distribution was developed in 1994 for analyzing geophysics data. In this way, the "S" transform is a generalization of the short-time Fourier transform (STFT), extending the continuous wavelet transform and overcoming some of its disadvantages. For one, modulation sinusoids are fixed with respect to the time axis; this localizes the scalable Gaussian window dilations and translations in the "S" transform. Moreover, the "S" transform doesn't have a cross-term problem and yields a better signal clarity than the Gabor transform. However, the "S" transform has its own disadvantages: the clarity is worse than that of the Wigner distribution function and Cohen's class distribution functions. A fast "S" transform algorithm was invented in 2010. It reduces the computational complexity from O[N²·log(N)] to O[N·log(N)] and makes the transform one-to-one, where the transform has the same number of points as the source signal or image, compared to a storage complexity of N² for the original formulation. An implementation is available to the research community under an open source license. A general formulation of the S transform makes clear the relationship to other time–frequency transforms such as the Fourier, short-time Fourier, and wavelet transforms. Definition. There are several ways to represent the idea of the "S" transform. Here, the "S" transform is derived as the phase correction of the continuous wavelet transform, with the window being the Gaussian function. formula_0 formula_1 Modified form. The above definition implies that the s-transform function can be expressed as the convolution of formula_2 and formula_3. Applying the Fourier transform to both formula_2 and formula_3 gives formula_4. From the spectrum form of the S-transform, we can derive the discrete-time S-transform. 
&lt;br&gt; Let formula_5, where formula_6 is the sampling interval and formula_7 is the sampling frequency.&lt;br&gt; The discrete-time "S" transform can then be expressed as: &lt;br&gt; formula_8 Implementation of discrete-time S-transform. Below is the pseudo code of the implementation.&lt;br&gt; Step 1. Compute formula_9. &lt;br&gt; Loop over m (voices): &lt;br&gt; Step 2. Compute formula_10 for formula_11. &lt;br&gt; Step 3. Move formula_12 to formula_13. &lt;br&gt; Step 4. Multiply the results of Step 2 and Step 3: formula_14. &lt;br&gt; Step 5. Compute the IDFT of formula_15. Comparison with other time–frequency analysis tools. Comparison with Gabor transform. The only difference between the Gabor transform (GT) and the "S" transform is the window. For the GT, the window is a fixed Gaussian function formula_16, whereas the window of the "S" transform is a function of "f". With a window whose width scales inversely with frequency, the "S" transform performs well in frequency-domain analysis when the input frequency is low, and offers better clarity in the time domain when the input frequency is high, as the table below summarizes. This property makes the "S" transform a powerful tool for analyzing sound, because humans are most sensitive to the low-frequency content of a sound signal. Comparison with Wigner transform. The main problem with the Wigner transform is the cross term, which stems from the auto-correlation function in its definition. Cross terms may introduce noise and distortion into signal analyses; "S"-transform analyses avoid this issue. Comparison with the short-time Fourier transform. We can compare the "S" transform and the short-time Fourier transform (STFT). First, a high-frequency signal, a low-frequency signal, and a high-frequency burst signal are used in the experiment to compare the performance. The frequency-dependent resolution of the "S" transform allows the detection of the high-frequency burst. On the other hand, because the STFT uses a constant window width, its result has poorer definition. 
In the second experiment, two more high-frequency bursts are added to crossed chirps. In the result, all four frequencies were detected by the "S" transform. On the other hand, the two high-frequency bursts are not detected by the STFT; their cross term causes the STFT to show a single component at a lower frequency. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
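The spectral formulation and the five pseudo-code steps above can be collected into a short implementation. The sketch below is ours, not part of the original article: the function name `s_transform`, the choice to keep only the non-negative voices, and the use of signed (wrapped) frequency indices for the Gaussian window are assumptions; DFT sign and normalization conventions vary between references.

```python
import numpy as np

def s_transform(x):
    """Discrete-time S transform via the spectral formulation: for each
    voice m > 0, shift the DFT of x by m (Step 3), apply the Gaussian
    exp(-pi p^2 / m^2) in the frequency variable p (Steps 2 and 4), and
    take an inverse DFT over p (Step 5).  Rows of the result are voices
    m = 0 .. N//2, columns are time samples n."""
    N = len(x)
    X = np.fft.fft(x)                        # Step 1: spectrum of the signal
    # signed frequency index 0, 1, ..., N/2-1, -N/2, ..., -1 so the
    # Gaussian voice window wraps around correctly
    p = np.fft.fftfreq(N, d=1.0 / N)
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0, :] = np.mean(x)                     # zero-frequency voice: the signal mean
    for m in range(1, N // 2 + 1):
        voice = np.roll(X, -m) * np.exp(-np.pi * p**2 / m**2)
        S[m, :] = np.fft.ifft(voice)
    return S
```

For a pure sinusoid at frequency bin k, the magnitude of the transform concentrates in voice m = k at all times, which is a convenient sanity check.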
[ { "math_id": 0, "text": " S_x(t,f) = \\int_{-\\infty}^\\infty x(\\tau)|f|e^{- \\pi (t- \\tau)^2 f^2} e^{-j2 \\pi f \\tau} \\, d \\tau " }, { "math_id": 1, "text": "x(\\tau) = \\int_{-\\infty}^\\infty \\left[\\int_{-\\infty}^{\\infty}S_x(t,f)\\, dt\\right]\\,e^{j2\\pi f\\tau}\\, df" }, { "math_id": 2, "text": "( x(\\tau) e^{-j2 \\pi f \\tau} )" }, { "math_id": 3, "text": "( |f|e^{- \\pi t^2 f^2} )" }, { "math_id": 4, "text": " S_x(t,f) = \\int_{-\\infty}^\\infty X(f+\\alpha)\\,e^{-\\pi\\alpha^2 /f^2}\\,e^{j2\\pi\\alpha t}\\, d\\alpha " }, { "math_id": 5, "text": "t = n\\Delta_T\\,\\, f = m\\Delta_F\\,\\, \\alpha = p\\Delta_F" }, { "math_id": 6, "text": "\\Delta_T" }, { "math_id": 7, "text": "\\Delta_F" }, { "math_id": 8, "text": "S_x(n\\Delta_T\\, ,m\\Delta_F) = \\sum_{p=0}^{N-1} X[(p+m)\\,\\Delta_F]\\,e^{-\\pi\\frac{p^2}{m^2}}\\,e^{\\frac{j2\\pi pn}{N}}" }, { "math_id": 9, "text": "X[p\\Delta_{F}]\\," }, { "math_id": 10, "text": "e^{-\\pi \\frac{p^2}{m^2}}" }, { "math_id": 11, "text": "f=m\\Delta_{F}" }, { "math_id": 12, "text": "X[p\\Delta_{F}]" }, { "math_id": 13, "text": "X[(p+m)\\Delta_{F}]" }, { "math_id": 14, "text": "B[m,p] = X[(p+m)\\Delta_{F}]\\cdot e^{-\\pi \\frac{p^2}{m^2}}" }, { "math_id": 15, "text": "B[m,p]" }, { "math_id": 16, "text": "( e^{-\\pi (t-\\tau)^2} )" } ]
https://en.wikipedia.org/wiki?curid=1003410
10034460
Catenary ring
In mathematics, a commutative ring "R" is catenary if for any pair of prime ideals "p", "q", any two strictly increasing chains "p" = "p"0 ⊂ "p"1 ⊂ ... ⊂ "p""n" = "q" of prime ideals are contained in maximal strictly increasing chains from "p" to "q" of the same (finite) length. In a geometric situation, in which the dimension of an algebraic variety attached to a prime ideal will decrease as the prime ideal becomes bigger, the length of such a chain "n" is usually the difference in dimensions. A ring is called universally catenary if all finitely generated algebras over it are catenary rings. The word 'catenary' is derived from the Latin word "catena", which means "chain". There is the following chain of inclusions. Universally catenary rings ⊃ Cohen–Macaulay rings ⊃ Gorenstein rings ⊃ complete intersection rings ⊃ regular local rings Dimension formula. Suppose that "A" is a Noetherian domain and "B" is a domain containing "A" that is finitely generated over "A". If "P" is a prime ideal of "B" and "p" its intersection with "A", then formula_0 The dimension formula for universally catenary rings says that equality holds if "A" is universally catenary. Here κ("P") is the residue field of "P" and tr.deg. means the transcendence degree (of quotient fields). In fact, even when "A" is not universally catenary, equality also holds if formula_1. Examples. Almost all Noetherian rings that appear in algebraic geometry are universally catenary. In particular, the following rings are universally catenary: A ring that is catenary but not universally catenary. It is delicate to construct examples of Noetherian rings that are not universally catenary. The first example was found by Masayoshi Nagata (1956, 1962, page 203 example 2), who found a 2-dimensional Noetherian local domain that is catenary but not universally catenary. Nagata's example is as follows. 
Choose a field "k" and a formal power series "z"=Σ"i"&gt;0"a""i""x""i" in the ring "S" of formal power series in "x" over "k" such that "z" and "x" are algebraically independent. Define "z"1 = "z" and "z""i"+1="z""i"/x–"a""i". Let "R" be the (non-Noetherian) ring generated by "x" and all the elements "z""i". Let "m" be the ideal ("x"), and let "n" be the ideal generated by "x"–1 and all the elements "z""i". These are both maximal ideals of "R", with residue fields isomorphic to "k". The local ring "R""m" is a regular local ring of dimension 1 (the proof of this uses the fact that "z" and "x" are algebraically independent) and the local ring "R""n" is a regular Noetherian local ring of dimension 2. Let "B" be the localization of "R" with respect to all elements not in either "m" or "n". Then "B" is a 2-dimensional Noetherian semi-local ring with 2 maximal ideals, "mB" (of height 1) and "nB" (of height 2). Let "I" be the Jacobson radical of "B", and let "A" = "k"+"I". The ring "A" is a local domain of dimension 2 with maximal ideal "I", so is catenary because all 2-dimensional local domains are catenary. The ring "A" is Noetherian because "B" is Noetherian and is a finite "A"-module. However "A" is not universally catenary, because if it were then the ideal "mB" of "B" would have the same height as "mB"∩"A" by the dimension formula for universally catenary rings, but the latter ideal has height equal to dim("A")=2. Nagata's example is also a quasi-excellent ring, so gives an example of a quasi-excellent ring that is not an excellent ring. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\text{height}(P)\\le \\text{height}(p)+ \\text{tr.deg.}_A(B) - \\text{tr.deg.}_{\\kappa(p)}(\\kappa(P))." }, { "math_id": 1, "text": "B=A[x_1,\\dots,x_n]" } ]
https://en.wikipedia.org/wiki?curid=10034460
100349
Legendre polynomials
System of complete and orthogonal polynomials In mathematics, Legendre polynomials, named after Adrien-Marie Legendre (1782), are a system of complete and orthogonal polynomials with a vast number of mathematical properties and numerous applications. They can be defined in many ways, and the various definitions highlight different aspects as well as suggest generalizations and connections to different mathematical structures and physical and numerical applications. Closely related to the Legendre polynomials are associated Legendre polynomials, Legendre functions, Legendre functions of the second kind, big q-Legendre polynomials, and associated Legendre functions. Definition and representation. Definition by construction as an orthogonal system. In this approach, the polynomials are defined as an orthogonal system with respect to the weight function formula_0 over the interval formula_1. That is, formula_2 is a polynomial of degree formula_3, such that formula_4 With the additional standardization condition formula_5, all the polynomials can be uniquely determined. We then start the construction process: formula_6 is the only correctly standardized polynomial of degree 0. formula_7 must be orthogonal to formula_8, leading to formula_9, and formula_10 is determined by demanding orthogonality to formula_8 and formula_11, and so on. formula_12 is fixed by demanding orthogonality to all formula_13 with formula_14. This gives formula_15 conditions, which, along with the standardization formula_16 fixes all formula_17 coefficients in formula_18. With work, all the coefficients of every polynomial can be systematically determined, leading to the explicit representation in powers of formula_19 given below. This definition of the formula_12's is the simplest one. First, it does not appeal to the theory of differential equations. Second, the completeness of the polynomials follows immediately from the completeness of the powers 1, formula_20. 
Finally, by defining them via orthogonality with respect to the most obvious weight function on a finite interval, it sets up the Legendre polynomials as one of the three classical orthogonal polynomial systems. The other two are the Laguerre polynomials, which are orthogonal over the half line formula_21, and the Hermite polynomials, orthogonal over the full line formula_22, with weight functions that are the most natural analytic functions that ensure convergence of all integrals. Definition via generating function. The Legendre polynomials can also be defined as the coefficients in a formal expansion in powers of formula_23 of the generating function The coefficient of formula_24 is a polynomial in formula_25 of degree formula_3 with formula_26. Expanding up to formula_27 gives formula_28 Expansion to higher orders gets increasingly cumbersome, but is possible to do systematically, and again leads to one of the explicit forms given below. It is possible to obtain the higher formula_12's without resorting to direct expansion of the Taylor series, however. Equation 2 is differentiated with respect to t on both sides and rearranged to obtain formula_29 Replacing the quotient of the square root with its definition in Eq. 2, and equating the coefficients of powers of "t" in the resulting expansion gives "Bonnet’s recursion formula" formula_30 This relation, along with the first two polynomials "P"0 and "P"1, allows all the rest to be generated recursively. The generating function approach is directly connected to the multipole expansion in electrostatics, as explained below, and is how the polynomials were first defined by Legendre in 1782. Definition via differential equation. A third definition is in terms of solutions to Legendre's differential equation: This differential equation has regular singular points at "x" = ±1 so if a solution is sought using the standard Frobenius or power series method, a series about the origin will only converge for |"x"| &lt; 1 in general. 
When "n" is an integer, the solution "Pn"("x") that is regular at "x" = 1 is also regular at "x" = −1, and the series for this solution terminates (i.e. it is a polynomial). The orthogonality and completeness of these solutions is best seen from the viewpoint of Sturm–Liouville theory. We rewrite the differential equation as an eigenvalue problem, formula_31 with the eigenvalue formula_32 in lieu of formula_33. If we demand that the solution be regular at formula_34, the differential operator on the left is Hermitian. The eigenvalues are found to be of the form "n"("n" + 1), with formula_35 and the eigenfunctions are the formula_2. The orthogonality and completeness of this set of solutions follows at once from the larger framework of Sturm–Liouville theory. The differential equation admits another, non-polynomial solution, the Legendre functions of the second kind formula_36. A two-parameter generalization of (Eq. 1) is called Legendre's "general" differential equation, solved by the Associated Legendre polynomials. Legendre functions are solutions of Legendre's differential equation (generalized or not) with "non-integer" parameters. In physical settings, Legendre's differential equation arises naturally whenever one solves Laplace's equation (and related partial differential equations) by separation of variables in spherical coordinates. From this standpoint, the eigenfunctions of the angular part of the Laplacian operator are the spherical harmonics, of which the Legendre polynomials are (up to a multiplicative constant) the subset that is left invariant by rotations about the polar axis. The polynomials appear as formula_37 where formula_38 is the polar angle. This approach to the Legendre polynomials provides a deep connection to rotational symmetry. 
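Whichever definition one starts from, Bonnet's recursion formula quoted earlier, together with "P"0("x") = 1 and "P"1("x") = "x", gives a simple and numerically stable way to evaluate the polynomials. A minimal sketch (the function name `legendre_p` is ours):

```python
def legendre_p(n, x):
    """Evaluate P_n(x) by Bonnet's recursion:
    (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)."""
    if n == 0:
        return 1.0
    p_prev, p_cur = 1.0, x                   # P_0(x) and P_1(x)
    for k in range(1, n):
        p_prev, p_cur = p_cur, ((2 * k + 1) * x * p_cur - k * p_prev) / (k + 1)
    return p_cur
```

For example, `legendre_p(2, 0.5)` returns −0.125, matching "P"2("x") = (3"x"² − 1)/2 at "x" = 0.5.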
Many of their properties which are found laboriously through the methods of analysis — for example the addition theorem — are more easily found using the methods of symmetry and group theory, and acquire profound physical and geometrical meaning. Rodrigues' formula and other explicit formulas. An especially compact expression for the Legendre polynomials is given by Rodrigues' formula: formula_39 This formula enables derivation of a large number of properties of the formula_12's. Among these are explicit representations such as formula_40 Expressing the polynomial as a power series, formula_41, the coefficients of powers of formula_19 can also be calculated using a general formula: formula_42 The Legendre polynomial is determined by the values used for the two constants formula_43 and formula_44, where formula_45 if formula_3 is odd and formula_46 if formula_3 is even. In the fourth representation, formula_47 stands for the largest integer less than or equal to formula_48. The last representation, which is also immediate from the recursion formula, expresses the Legendre polynomials by simple monomials and involves the generalized form of the binomial coefficient. The first few Legendre polynomials are: The graphs of these polynomials (up to "n" = 5) are shown below: Main properties. Orthogonality. The standardization formula_5 fixes the normalization of the Legendre polynomials (with respect to the "L"2 norm on the interval −1 ≤ "x" ≤ 1). Since they are also orthogonal with respect to the same norm, the two statements can be combined into the single equation, formula_49 (where "δmn" denotes the Kronecker delta, equal to 1 if "m" = "n" and to 0 otherwise). This normalization is most readily found by employing Rodrigues' formula, given above. Completeness. That the polynomials are complete means the following. 
Given any piecewise continuous function formula_50 with finitely many discontinuities in the interval [−1, 1], the sequence of sums formula_51 converges in the mean to formula_50 as formula_52, provided we take formula_53 This completeness property underlies all the expansions discussed in this article, and is often stated in the form formula_54 with −1 ≤ "x" ≤ 1 and −1 ≤ "y" ≤ 1. Applications. Expanding an inverse distance potential. The Legendre polynomials were first introduced in 1782 by Adrien-Marie Legendre as the coefficients in the expansion of the Newtonian potential formula_55 where "r" and "r"′ are the lengths of the vectors x and x′ respectively and "γ" is the angle between those two vectors. The series converges when "r" &gt; "r"′. The expression gives the gravitational potential associated to a point mass or the Coulomb potential associated to a point charge. The expansion using Legendre polynomials might be useful, for instance, when integrating this expression over a continuous mass or charge distribution. Legendre polynomials occur in the solution of Laplace's equation of the static potential, ∇²Φ(x) = 0, in a charge-free region of space, using the method of separation of variables, where the boundary conditions have axial symmetry (no dependence on an azimuthal angle). Where ẑ is the axis of symmetry and "θ" is the angle between the position of the observer and the ẑ axis (the zenith angle), the solution for the potential will be formula_56 "Al" and "Bl" are to be determined according to the boundary condition of each problem. They also appear when solving the Schrödinger equation in three dimensions for a central force. In multipole expansions. Legendre polynomials are also useful in expanding functions of the form (this is the same as before, written a little differently): formula_57 which arise naturally in multipole expansions. The left-hand side of the equation is the generating function for the Legendre polynomials. 
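The expansion coefficients formula_53 from the completeness discussion above can be computed numerically. The following sketch is ours, not part of the article: the function name `legendre_coeffs` and the use of a Gauss–Legendre rule for the integral are our choices; it relies on NumPy's Legendre helpers `legval` and `leggauss`.

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

def legendre_coeffs(f, nmax):
    """a_l = (2l+1)/2 * integral_{-1}^{1} f(x) P_l(x) dx for l = 0..nmax,
    computed with a Gauss-Legendre quadrature rule (exact when f is a
    polynomial of modest degree)."""
    x, w = leggauss(2 * nmax + 16)           # quadrature nodes and weights
    a = np.empty(nmax + 1)
    for l in range(nmax + 1):
        c = np.zeros(l + 1)
        c[l] = 1.0                           # Legendre-series coefficients selecting P_l
        a[l] = (2 * l + 1) / 2 * np.sum(w * f(x) * legval(x, c))
    return a
```

For instance, for f("x") = "x"² the routine returns a₀ = 1/3, a₂ = 2/3 and zeros elsewhere, matching "x"² = ("P"0 + 2"P"2)/3.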
As an example, the electric potential Φ("r","θ") (in spherical coordinates) due to a point charge located on the "z"-axis at "z" = "a" (see diagram right) varies as formula_58 If the radius "r" of the observation point P is greater than "a", the potential may be expanded in the Legendre polynomials formula_59 where we have defined "η" = "a"/"r" &lt; 1 and "x" = cos "θ". This expansion is used to develop the normal multipole expansion. Conversely, if the radius "r" of the observation point P is smaller than "a", the potential may still be expanded in the Legendre polynomials as above, but with "a" and "r" exchanged. This expansion is the basis of the interior multipole expansion. In trigonometry. The trigonometric functions cos "nθ", also denoted as the Chebyshev polynomials "Tn"(cos "θ") ≡ cos "nθ", can also be multipole expanded by the Legendre polynomials "Pn"(cos "θ"). The first several orders are as follows: formula_60 Another property is the expression for sin ("n" + 1)"θ", which is formula_61 In recurrent neural networks. A recurrent neural network that contains a "d"-dimensional memory vector, formula_62, can be optimized such that its neural activities obey the linear time-invariant system given by the following state-space representation: formula_63 formula_64 In this case, the sliding window of formula_65 across the past formula_38 units of time is best approximated by a linear combination of the first formula_66 shifted Legendre polynomials, weighted together by the elements of formula_67 at time formula_23: formula_68 When combined with deep learning methods, these networks can be trained to outperform long short-term memory units and related architectures, while using fewer computational resources. Additional properties. Legendre polynomials have definite parity. That is, they are even or odd, according to formula_69 Another useful property is formula_70 which follows from considering the orthogonality relation with formula_6. 
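The state-space matrices formula_63, formula_64 defined in the recurrent-network passage above are purely combinatorial and can be built directly from the stated entry formulas. A sketch (the helper name `lmu_state_space` is ours):

```python
import numpy as np

def lmu_state_space(d):
    """Build (A, B) with a_ij = (2i+1) * (-1 if i < j else (-1)^(i-j+1))
    and b_i = (2i+1) * (-1)^i, for indices i, j = 0..d-1, as in the text."""
    i = np.arange(d)[:, None]
    j = np.arange(d)[None, :]
    A = (2 * i + 1) * np.where(i < j, -1.0, (-1.0) ** (i - j + 1))
    B = ((2 * np.arange(d) + 1) * (-1.0) ** np.arange(d)).reshape(d, 1)
    return A, B
```

For d = 3 this yields A = [[−1, −1, −1], [3, −3, −3], [−5, 5, −5]] and B = [1, −3, 5]ᵀ.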
It is convenient when a Legendre series formula_71 is used to approximate a function or experimental data: the "average" of the series over the interval [−1, 1] is simply given by the leading expansion coefficient formula_72. Since the differential equation and the orthogonality property are independent of scaling, the Legendre polynomials' definitions are "standardized" (sometimes called "normalization", but the actual norm is not 1) by being scaled so that formula_73 The derivative at the end point is given by formula_74 The Askey–Gasper inequality for Legendre polynomials reads formula_75 The Legendre polynomials of a scalar product of unit vectors can be expanded with spherical harmonics using formula_76 where the unit vectors "r" and "r"′ have spherical coordinates ("θ", "φ") and ("θ"′, "φ"′), respectively. The product of two Legendre polynomials formula_77 where formula_78 is the complete elliptic integral of the first kind. Recurrence relations. As discussed above, the Legendre polynomials obey the three-term recurrence relation known as Bonnet's recursion formula given by formula_79 and formula_80 or, with the alternative expression, which also holds at the endpoints formula_81 Useful for the integration of Legendre polynomials is formula_82 From the above one can also see that formula_83 or equivalently formula_84 where the norm over the interval −1 ≤ "x" ≤ 1 is given by formula_85 Asymptotics. Asymptotically, for formula_86, the Legendre polynomials can be written as formula_87 and for arguments of magnitude greater than 1 formula_88 where "J"0 and "I"0 are Bessel functions. Zeros. All formula_89 zeros of formula_2 are real, distinct from each other, and lie in the interval formula_90. Furthermore, if we regard them as dividing the interval formula_91 into formula_92 subintervals, each subinterval will contain exactly one zero of formula_93. This is known as the interlacing property. 
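The interlacing property is easy to observe numerically. A sketch using NumPy's Legendre root finder (`legroots` computes the roots of a Legendre series; the helper name `pn_zeros` is ours):

```python
import numpy as np
from numpy.polynomial.legendre import legroots

def pn_zeros(n):
    """Sorted zeros of P_n: the coefficient vector selects P_n itself."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.sort(legroots(c))

r5, r6 = pn_zeros(5), pn_zeros(6)
# every zero of P_5 lies strictly between two consecutive zeros of P_6
interlaced = np.all(r6[:-1] < r5) and np.all(r5 < r6[1:])
```

The zeros also come in symmetric pairs about the origin, as the parity property predicts.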
Because of the parity property, it is evident that if formula_94 is a zero of formula_2, so is formula_95. These zeros play an important role in numerical integration based on Gaussian quadrature. The specific quadrature based on the formula_12's is known as Gauss-Legendre quadrature. From this property and the fact that formula_96, it follows that formula_97 has formula_98 local minima and maxima in formula_99. Equivalently, formula_100 has formula_101 zeros in formula_99. Pointwise evaluations. The parity and normalization imply that the values at the boundaries formula_102 are formula_103 At the origin formula_104 one can show that the values are given by formula_105 formula_106 Variants with transformed argument. Shifted Legendre polynomials. The shifted Legendre polynomials are defined as formula_107 Here the "shifting" function "x" ↦ 2"x" − 1 is an affine transformation that bijectively maps the interval [0, 1] to the interval [−1, 1], implying that the polynomials "P̃n"("x") are orthogonal on [0, 1]: formula_108 An explicit expression for the shifted Legendre polynomials is given by formula_109 The analogue of Rodrigues' formula for the shifted Legendre polynomials is formula_110 The first few shifted Legendre polynomials are: Legendre rational functions. The Legendre rational functions are a sequence of orthogonal functions on [0, ∞). They are obtained by composing the Cayley transform with Legendre polynomials. A rational Legendre function of degree "n" is defined as: formula_111 They are eigenfunctions of the singular Sturm–Liouville problem: formula_112 with eigenvalues formula_113 See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
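The role of the zeros in Gauss-Legendre quadrature mentioned above can be illustrated in a few lines: an n-point rule whose nodes are the zeros of formula_2 integrates polynomials of degree up to 2n − 1 exactly over [−1, 1]. A sketch using NumPy's built-in rule (`leggauss` returns the nodes and weights):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

nodes, weights = leggauss(5)            # 5 nodes: the zeros of P_5
# a 5-point rule is exact for polynomials up to degree 9
approx = np.sum(weights * nodes**8)
exact = 2.0 / 9.0                       # integral of x^8 over [-1, 1]
```

Note also that the weights sum to 2, the length of the interval, since the rule integrates the constant 1 exactly.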
[ { "math_id": 0, "text": "w(x) = 1" }, { "math_id": 1, "text": " [-1,1]" }, { "math_id": 2, "text": "P_n(x)" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "\\int_{-1}^1 P_m(x) P_n(x) \\,dx = 0 \\quad \\text{if } n \\ne m." }, { "math_id": 5, "text": "P_n(1) = 1" }, { "math_id": 6, "text": "P_0(x) = 1" }, { "math_id": 7, "text": "P_1(x)" }, { "math_id": 8, "text": "P_0" }, { "math_id": 9, "text": "P_1(x) = x" }, { "math_id": 10, "text": "P_2(x)" }, { "math_id": 11, "text": "P_1" }, { "math_id": 12, "text": "P_n" }, { "math_id": 13, "text": "P_m" }, { "math_id": 14, "text": " m < n " }, { "math_id": 15, "text": " n " }, { "math_id": 16, "text": " P_n(1) = 1" }, { "math_id": 17, "text": " n+1" }, { "math_id": 18, "text": " P_n(x)" }, { "math_id": 19, "text": "x" }, { "math_id": 20, "text": " x, x^2, x^3, \\ldots" }, { "math_id": 21, "text": "[0,\\infty)" }, { "math_id": 22, "text": "(-\\infty,\\infty)" }, { "math_id": 23, "text": "t" }, { "math_id": 24, "text": "t^n" }, { "math_id": 25, "text": " x " }, { "math_id": 26, "text": "|x| \\leq 1" }, { "math_id": 27, "text": "t^1" }, { "math_id": 28, "text": "P_0(x) = 1 \\,,\\quad P_1(x) = x." }, { "math_id": 29, "text": "\\frac{x-t}{\\sqrt{1-2xt+t^2}} = \\left(1-2xt+t^2\\right) \\sum_{n=1}^\\infty n P_n(x) t^{n-1} \\,." }, { "math_id": 30, "text": " (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x)\\,." }, { "math_id": 31, "text": "\\frac{d}{dx} \\left( \\left(1-x^2\\right) \\frac{d}{dx} \\right) P(x) = -\\lambda P(x) \\,," }, { "math_id": 32, "text": "\\lambda" }, { "math_id": 33, "text": " n(n+1)" }, { "math_id": 34, "text": "x = \\pm 1" }, { "math_id": 35, "text": "n = 0, 1, 2, \\ldots" }, { "math_id": 36, "text": "Q_n" }, { "math_id": 37, "text": "P_n(\\cos\\theta)" }, { "math_id": 38, "text": "\\theta" }, { "math_id": 39, "text": "P_n(x) = \\frac{1}{2^n n!} \\frac{d^n}{dx^n} (x^2 -1)^n \\,." 
}, { "math_id": 40, "text": "\\begin{align}\nP_n(x) & = [t^n] \\frac{\\left((t+x)^2 - 1\\right)^n}{2^n} = [t^n] \\frac{\\left(t+x+1\\right)^n \\left(t+x-1\\right)^n}{2^n}, \\\\[1ex]\nP_n(x)&= \\frac{1}{2^n} \\sum_{k=0}^n \\binom{n}{k}^{\\!2} (x-1)^{n-k}(x+1)^k, \\\\[1ex]\nP_n(x)&= \\sum_{k=0}^n \\binom{n}{k} \\binom{n+k}{k} \\left( \\frac{x-1}{2} \\right)^{\\!k}, \\\\[1ex]\nP_n(x)&= \\frac{1}{2^n}\\sum_{k=0}^{\\left\\lfloor n/2 \\right\\rfloor} \\left(-1\\right)^k \\binom{n}{k}\\binom{2n-2k}n x^{n-2k},\\\\[1ex]\nP_n(x)&= 2^n \\sum_{k=0}^n x^k \\binom{n}{k} \\binom{\\frac{n+k-1}{2}}{n}.\n\\end{align}" }, { "math_id": 41, "text": "P_n(x) = \\sum a_k x^k " }, { "math_id": 42, "text": "a_{k+2} = - \\frac{(l-k)(l+k+1)}{(k+2)(k+1)}a_k. " }, { "math_id": 43, "text": "a_0 " }, { "math_id": 44, "text": "a_1 " }, { "math_id": 45, "text": "a_0=0 " }, { "math_id": 46, "text": "a_1=0 " }, { "math_id": 47, "text": "\\lfloor n/2 \\rfloor" }, { "math_id": 48, "text": "n/2" }, { "math_id": 49, "text": "\\int_{-1}^1 P_m(x) P_n(x)\\,dx = \\frac{2}{2n + 1} \\delta_{mn}," }, { "math_id": 50, "text": " f(x) " }, { "math_id": 51, "text": " f_n(x) = \\sum_{\\ell=0}^n a_\\ell P_\\ell(x)" }, { "math_id": 52, "text": " n \\to \\infty " }, { "math_id": 53, "text": " a_\\ell = \\frac{2\\ell + 1}{2} \\int_{-1}^1 f(x) P_\\ell(x)\\,dx." }, { "math_id": 54, "text": "\\sum_{\\ell=0}^\\infty \\frac{2\\ell + 1}{2} P_\\ell(x)P_\\ell(y) = \\delta(x-y), " }, { "math_id": 55, "text": "\\frac{1}{\\left| \\mathbf{x}-\\mathbf{x}' \\right|} = \\frac{1}{\\sqrt{r^2+{r'}^2-2r{r'}\\cos\\gamma}} = \\sum_{\\ell=0}^\\infty \\frac{{r'}^\\ell}{r^{\\ell+1}} P_\\ell(\\cos \\gamma)," }, { "math_id": 56, "text": "\\Phi(r,\\theta) = \\sum_{\\ell=0}^\\infty \\left( A_\\ell r^\\ell + B_\\ell r^{-(\\ell+1)} \\right) P_\\ell(\\cos\\theta) \\,." 
}, { "math_id": 57, "text": "\\frac{1}{\\sqrt{1 + \\eta^2 - 2\\eta x}} = \\sum_{k=0}^\\infty \\eta^k P_k(x)," }, { "math_id": 58, "text": "\\Phi (r, \\theta ) \\propto \\frac{1}{R} = \\frac{1}{\\sqrt{r^2 + a^2 - 2ar \\cos\\theta}}." }, { "math_id": 59, "text": "\\Phi(r, \\theta) \\propto \\frac{1}{r} \\sum_{k=0}^\\infty \\left( \\frac{a}{r} \\right)^k P_k(\\cos \\theta)," }, { "math_id": 60, "text": "\\begin{alignat}{2}\nT_0(\\cos\\theta)&=1 &&=P_0(\\cos\\theta),\\\\[4pt]\nT_1(\\cos\\theta)&=\\cos \\theta&&=P_1(\\cos\\theta),\\\\[4pt]\nT_2(\\cos\\theta)&=\\cos 2\\theta&&=\\tfrac{1}{3}\\bigl(4P_2(\\cos\\theta)-P_0(\\cos\\theta)\\bigr),\\\\[4pt]\nT_3(\\cos\\theta)&=\\cos 3\\theta&&=\\tfrac{1}{5}\\bigl(8P_3(\\cos\\theta)-3P_1(\\cos\\theta)\\bigr),\\\\[4pt]\nT_4(\\cos\\theta)&=\\cos 4\\theta&&=\\tfrac{1}{105}\\bigl(192P_4(\\cos\\theta)-80P_2(\\cos\\theta)-7P_0(\\cos\\theta)\\bigr),\\\\[4pt]\nT_5(\\cos\\theta)&=\\cos 5\\theta&&=\\tfrac{1}{63}\\bigl(128P_5(\\cos\\theta)-56P_3(\\cos\\theta)-9P_1(\\cos\\theta)\\bigr),\\\\[4pt]\nT_6(\\cos\\theta)&=\\cos 6\\theta&&=\\tfrac{1}{1155}\\bigl(2560P_6(\\cos\\theta)-1152P_4(\\cos\\theta)-220P_2(\\cos\\theta)-33P_0(\\cos\\theta)\\bigr).\n\\end{alignat}" }, { "math_id": 61, "text": "\\frac{\\sin (n+1)\\theta}{\\sin\\theta}=\\sum_{\\ell=0}^n P_\\ell(\\cos\\theta) P_{n-\\ell}(\\cos\\theta)." 
}, { "math_id": 62, "text": "\\mathbf{m} \\in \\R^d" }, { "math_id": 63, "text": "\\theta \\dot{\\mathbf{m}}(t) = A\\mathbf{m}(t) + Bu(t)," }, { "math_id": 64, "text": "\\begin{align}\nA &= \\left[ a \\right]_{ij} \\in \\R^{d \\times d} \\text{,} \\quad\n&& a_{ij} = \\left(2i + 1\\right)\n\\begin{cases}\n -1 & i < j \\\\\n (-1)^{i-j+1} & i \\ge j\n\\end{cases},\\\\\nB &= \\left[ b \\right]_i \\in \\R^{d \\times 1} \\text{,} \\quad\n&& b_i = (2i + 1) (-1)^i .\n\\end{align}" }, { "math_id": 65, "text": "u" }, { "math_id": 66, "text": "d" }, { "math_id": 67, "text": "\\mathbf{m}" }, { "math_id": 68, "text": "u(t - \\theta') \\approx \\sum_{\\ell=0}^{d-1} \\widetilde{P}_\\ell \\left(\\frac{\\theta'}{\\theta} \\right) \\, m_{\\ell}(t) , \\quad 0 \\le \\theta' \\le \\theta ." }, { "math_id": 69, "text": "P_n(-x) = (-1)^n P_n(x) \\,." }, { "math_id": 70, "text": "\\int_{-1}^1 P_n(x)\\,dx = 0 \\text{ for } n\\ge1," }, { "math_id": 71, "text": "\\sum_i a_i P_i" }, { "math_id": 72, "text": "a_0" }, { "math_id": 73, "text": "P_n(1) = 1 \\,." }, { "math_id": 74, "text": "P_n'(1) = \\frac{n(n+1)}{2} \\,. " }, { "math_id": 75, "text": "\\sum_{j=0}^n P_j(x) \\ge 0 \\quad \\text{for }\\quad x\\ge -1 \\,." 
}, { "math_id": 76, "text": "P_\\ell \\left(r \\cdot r'\\right) = \\frac{4\\pi}{2\\ell + 1} \\sum_{m=-\\ell}^\\ell Y_{\\ell m}(\\theta,\\varphi) Y_{\\ell m}^*(\\theta',\\varphi')\\,," }, { "math_id": 77, "text": "\n \\sum_{p=0}^\\infty t^{p}P_p(\\cos\\theta_1)P_p(\\cos\\theta_2)=\\frac2\\pi\\frac{\\mathbf K\\left( 2\\sqrt{\\frac{t\\sin\\theta_1\\sin\\theta_2}{t^2-2t\\cos\\left( \\theta_1+\\theta_2 \\right)+1}} \\right)}{\\sqrt{t^2-2t\\cos\\left( \\theta_1+\\theta_2 \\right)+1}}\\,," }, { "math_id": 78, "text": "K(\\cdot)" }, { "math_id": 79, "text": " (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x)" }, { "math_id": 80, "text": " \\frac{x^2-1}{n} \\frac{d}{dx} P_n(x) = xP_n(x) - P_{n-1}(x) " }, { "math_id": 81, "text": " \\frac{d}{dx} P_{n+1}(x) = (n+1)P_n(x) + x \\frac{d}{dx}P_{n}(x) \\,." }, { "math_id": 82, "text": "(2n+1) P_n(x) = \\frac{d}{dx} \\bigl( P_{n+1}(x) - P_{n-1}(x) \\bigr) \\,." }, { "math_id": 83, "text": "\\frac{d}{dx} P_{n+1}(x) = (2n+1) P_n(x) + \\bigl(2(n-2)+1\\bigr) P_{n-2}(x) + \\bigl(2(n-4)+1\\bigr) P_{n-4}(x) + \\cdots" }, { "math_id": 84, "text": "\\frac{d}{dx} P_{n+1}(x) = \\frac{2 P_n(x)}{\\left\\| P_n \\right\\|^2} + \\frac{2 P_{n-2}(x)}{\\left\\| P_{n-2} \\right\\|^2} + \\cdots" }, { "math_id": 85, "text": "\\| P_n \\| = \\sqrt{\\int_{-1}^1 \\bigl(P_n(x)\\bigr)^2 \\,dx} = \\sqrt{\\frac{2}{2 n + 1}} \\,." 
}, { "math_id": 86, "text": "\\ell \\to \\infty" }, { "math_id": 87, "text": "\\begin{align}\nP_\\ell (\\cos \\theta) &= \\sqrt{\\frac{\\theta}{\\sin\\left(\\theta\\right)}} \\, J_0{\\left(\\left(\\ell+\\tfrac{1}{2}\\right)\\theta\\right)} + \\mathcal{O}\\left(\\ell^{-1}\\right) \\\\[1ex]\n&= \\sqrt{\\frac{2}{\\pi \\ell\\sin\\left(\\theta\\right)}}\\cos\\left(\\left(\\ell + \\tfrac{1}{2} \\right)\\theta - \\tfrac{\\pi}{4}\\right) + \\mathcal{O}\\left(\\ell^{-3/2}\\right), \\quad \\theta \\in (0,\\pi),\n\\end{align}" }, { "math_id": 88, "text": "\\begin{align}\nP_\\ell \\left(\\cosh\\xi\\right) &= \\sqrt{\\frac{\\xi}{\\sinh\\xi}} I_0\\left(\\left(\\ell+\\frac{1}{2}\\right)\\xi\\right)\\left(1+\\mathcal{O}\\left(\\ell^{-1}\\right)\\right)\\,,\\\\\nP_\\ell \\left(\\frac{1}{\\sqrt{1-e^2}}\\right) &= \\frac{1}{\\sqrt{2\\pi\\ell e}} \\frac{(1+e)^\\frac{\\ell+1}{2}}{(1-e)^\\frac{\\ell}{2}} + \\mathcal{O}\\left(\\ell^{-1}\\right)\n\\end{align}" }, { "math_id": 89, "text": " n" }, { "math_id": 90, "text": "(-1,1)" }, { "math_id": 91, "text": "[-1,1]" }, { "math_id": 92, "text": " n+1 " }, { "math_id": 93, "text": "P_{n+1}" }, { "math_id": 94, "text": "x_k" }, { "math_id": 95, "text": "-x_k" }, { "math_id": 96, "text": " P_n(\\pm 1) \\ne 0 " }, { "math_id": 97, "text": " P_n(x) " }, { "math_id": 98, "text": " n-1 " }, { "math_id": 99, "text": " (-1,1) " }, { "math_id": 100, "text": " dP_n(x)/dx " }, { "math_id": 101, "text": " n -1 " }, { "math_id": 102, "text": " x=\\pm 1 " }, { "math_id": 103, "text": "\n P_n(1) = 1\n \\,, \\quad\n P_n(-1) = (-1)^n\n" }, { "math_id": 104, "text": " x=0 " }, { "math_id": 105, "text": "\n P_{2n}(0) = \\frac{(-1)^{n}}{4^n} \\binom{2n}{n} = \\frac{(-1)^{n}}{2^{2n}} \\frac{(2n)!}{\\left(n!\\right)^2}\n= (-1)^n\\frac{(2n-1)!!}{(2n)!!}\n" }, { "math_id": 106, "text": "\n P_{2n+1}(0) = 0\n" }, { "math_id": 107, "text": "\\widetilde{P}_n(x) = P_n(2x-1) \\,." 
}, { "math_id": 108, "text": "\\int_0^1 \\widetilde{P}_m(x) \\widetilde{P}_n(x)\\,dx = \\frac{1}{2n + 1} \\delta_{mn} \\,." }, { "math_id": 109, "text": "\\widetilde{P}_n(x) = (-1)^n \\sum_{k=0}^n \\binom{n}{k} \\binom{n+k}{k} (-x)^k \\,." }, { "math_id": 110, "text": "\\widetilde{P}_n(x) = \\frac{1}{n!} \\frac{d^n}{dx^n} \\left(x^2 -x \\right)^n \\,." }, { "math_id": 111, "text": "R_n(x) = \\frac{\\sqrt{2}}{x+1}\\,P_n\\left(\\frac{x-1}{x+1}\\right)\\,." }, { "math_id": 112, "text": "\\left(x+1\\right) \\frac{d}{dx} \\left(x \\frac{d}{dx} \\left[\\left(x+1\\right) v(x)\\right]\\right) + \\lambda v(x) = 0" }, { "math_id": 113, "text": "\\lambda_n=n(n+1)\\,." } ]
https://en.wikipedia.org/wiki?curid=100349
10039004
Sergey Yablonsky
Soviet and Russian mathematician Sergey Vsevolodovich Yablonsky (Russian: Серге́й Все́володович Ябло́нский, 6 December 1924 – 26 May 1998) was a Soviet and Russian mathematician, one of the founders of the Soviet school of mathematical cybernetics and discrete mathematics. He is the author of a number of classic results on synthesis, reliability, and classification of control systems (), the term used in the USSR and Russia for a generalization of finite state automata, Boolean circuits and multi-valued logic circuits. Yablonsky is credited with helping to overcome the pressure from Soviet ideologists against the term and the discipline of cybernetics and establishing what in the Soviet Union was called mathematical cybernetics as a separate field of mathematics. Yablonsky and his students were among the first in the world to raise the issue of the potentially inherent unavoidability of brute-force search for some problems, a precursor of the P = NP problem, though Gödel's letter to von Neumann, dated 20 March 1956 and discovered in 1988, may have preceded them. <templatestyles src="Template:Blockquote/styles.css" />In Russia, a group led by Yablonsky had the idea that combinatorial problems are hard in proportion to the amount of brute-force search required to find a solution. In particular, they noticed that for many problems they could not find a useful way to organize the space of potential solutions so as to avoid brute force search. They began to suspect that these problems had an "inherently" unorganized solution space, and the best method for solving them would require enumerating an exponential (in the size of the problem instance) number of potential solutions. That is, the problems seem to require formula_0 "shots in the dark" (for some constant formula_1) when the length of the problem description is formula_2. However, despite their "leading-edge" taste in mathematics, Yablonsky's group never quite formulated this idea precisely. Biography. 
Childhood. Yablonsky was born in Moscow, to the family of a professor of mechanics. His mathematical talents became apparent at an early age. In 1940 he became the winner of the sixth Moscow secondary school mathematical olympiad. War. In August 1942, after completing his first year at Moscow State University's Faculty of Mechanics and Mathematics, Yablonsky, then 17, went to serve in the Soviet Army, fighting in the Second World War as a member of the tank brigade 242. For his service he was awarded two Orders of the Patriotic War, two Orders of the Red Star, the Order of Glory of the 3rd class, and numerous medals. He returned to his studies after the war ended in 1945 and went on to graduate with distinction. Post-war period. Yablonsky graduated from the Faculty of Mechanics and Mathematics of Moscow State University in 1950. During his student years he worked under the supervision of Nina Bari. This collaboration resulted in his first research paper, "On the converging sequences of continuous functions" (1950). He joined the graduate program of the Faculty of Mechanics and Mathematics in 1950, where his advisor was Pyotr Novikov. There Yablonsky's research was on issues of expressibility in mathematical logic. He approached this problem in terms of the theory of k-valued discrete functions. Among the problems addressed in his PhD thesis, titled "Issues of functional completeness in k-valued calculus" (1953), was the definitive answer to the question of completeness in 3-valued logic. Starting from 1953, Yablonsky worked at the Department of Applied Mathematics of the Steklov Institute of Mathematics, which in 1966 became the separate Institute of Applied Mathematics. Over the period of the 1950s and 1960s, together with Alexey Lyapunov, Yablonsky organized the seminar on cybernetics, showing his support for the new field of mathematics that had been the subject of significant controversy fueled by Soviet ideologists. 
He actively participated in the creation of the periodical publication Problems of Cybernetics, with Lyapunov as its first editor-in-chief. Yablonsky succeeded Lyapunov as the editor-in-chief of Problems of Cybernetics in 1974 (the publication changed its name to Mathematical Issues of Cybernetics in 1989). In 1966 Yablonsky (together with Yuri Zhuravlyov and Oleg Lupanov) was awarded the Lenin Prize for their work on the theory of control systems (in the discrete-mathematical sense, as explained above). In 1968 Yablonsky was elected a corresponding member of the Academy of Sciences of the Soviet Union (division of mathematics). Yablonsky played an active role in the creation of the Faculty of Computational Mathematics and Cybernetics at Moscow State University in 1970. In 1971 he became the founding head of the department of mathematical cybernetics (initially the department of automata theory and mathematical logic) at the Faculty of Computational Mathematics and Cybernetics. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "c^n" }, { "math_id": 1, "text": "c" }, { "math_id": 2, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=10039004
10040846
Random measure
In probability theory, a random measure is a measure-valued random element. Random measures are used, for example, in the theory of random processes, where they form many important point processes such as Poisson point processes and Cox processes. Definition. Random measures can be defined as transition kernels or as random elements. Both definitions are equivalent. For the definitions, let formula_0 be a separable complete metric space and let formula_1 be its Borel formula_2-algebra. (The most common example of a separable complete metric space is formula_3.) As a transition kernel. A random measure formula_4 is an (a.s.) locally finite transition kernel from an abstract probability space formula_5 to formula_6. Being a transition kernel means that: for any fixed formula_7, the mapping formula_8 is measurable from formula_9 to formula_10; for any fixed formula_11, the mapping formula_12 is a measure on formula_6. Being locally finite means that the measures formula_13 satisfy formula_14 for all bounded measurable sets formula_15 and for all formula_11 except some formula_16-null set. In the context of stochastic processes there is the related concept of a stochastic kernel (also called a probability kernel or Markov kernel). As a random element. Define formula_17 and the subset of locally finite measures by formula_18 For all bounded measurable formula_19, define the mappings formula_20 from formula_21 to formula_22. Let formula_23 be the formula_2-algebra induced by the mappings formula_24 on formula_21 and formula_25 the formula_2-algebra induced by the mappings formula_24 on formula_26. Note that formula_27. A random measure is a random element from formula_5 to formula_28 that almost surely takes values in formula_29. Basic related concepts. Intensity measure. For a random measure formula_30, the measure formula_31 satisfying formula_32 for every positive measurable function formula_33 is called the intensity measure of formula_4. The intensity measure exists for every random measure and is an s-finite measure. Supporting measure. 
For a random measure formula_30, the measure formula_34 satisfying formula_35 for all positive measurable functions is called the supporting measure of formula_30. The supporting measure exists for all random measures and can be chosen to be finite. Laplace transform. For a random measure formula_30, the Laplace transform is defined as formula_36 for every positive measurable function formula_33. Basic properties. Measurability of integrals. For a random measure formula_4, the integrals formula_37 and formula_38 for positive formula_1-measurable formula_33 are measurable, so they are random variables. Uniqueness. The distribution of a random measure is uniquely determined by the distributions of formula_37 for all continuous functions with compact support formula_33 on formula_0. For a fixed semiring formula_39 that generates formula_1 in the sense that formula_40, the distribution of a random measure is also uniquely determined by the integral over all positive simple formula_41-measurable functions formula_33. Decomposition. A measure can in general be decomposed as: formula_42 Here formula_43 is a diffuse measure without atoms, while formula_44 is a purely atomic measure. Random counting measure. A random measure of the form: formula_45 where formula_46 is the Dirac measure, and formula_47 are random variables, is called a "point process" or random counting measure. This random measure describes the set of "N" particles whose locations are given by the (generally vector-valued) random variables formula_47. The diffuse component formula_43 is null for a counting measure. In the formal notation above, a random counting measure is a map from a probability space to the measurable space (formula_48, formula_49). Here formula_48 is the space of all boundedly finite integer-valued measures formula_50 (called counting measures). 
The definitions of expectation measure, Laplace functional, moment measures and stationarity for random measures follow those of point processes. Random measures are useful in the description and analysis of Monte Carlo methods, such as Monte Carlo numerical quadrature and particle filters.
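The intensity measure and the random counting measure described above lend themselves to a direct simulation check. The sketch below is illustrative only: it assumes a homogeneous Poisson random counting measure on [0, 1] with rate lam (so the intensity measure of an interval is lam times its length), and all names and parameter values are chosen for the example, not taken from the article.

```python
import random

def sample_counting_measure(lam, rng):
    """One realization of a homogeneous Poisson random counting measure on [0, 1]:
    N ~ Poisson(lam) atoms at i.i.d. uniform locations X_1, ..., X_N."""
    # Count exponential inter-arrival times falling before time 1 => N ~ Poisson(lam).
    n, t = 0, rng.expovariate(lam)
    while t < 1.0:
        n += 1
        t += rng.expovariate(lam)
    points = [rng.random() for _ in range(n)]
    # zeta(B) for an interval B = [a, b): the number of atoms falling in B.
    return lambda a, b: sum(1 for x in points if a <= x < b)

rng = random.Random(0)
lam, a, b = 5.0, 0.2, 0.7
trials = 20000
avg = sum(sample_counting_measure(lam, rng)(a, b) for _ in range(trials)) / trials
# The intensity measure gives E[zeta([a, b))] = lam * (b - a) = 2.5 here.
print(avg)
```

Averaging the counting measure of a fixed interval over many realizations recovers its intensity measure, as the defining identity for the intensity measure predicts.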
[ { "math_id": 0, "text": " E " }, { "math_id": 1, "text": " \\mathcal E " }, { "math_id": 2, "text": " \\sigma " }, { "math_id": 3, "text": " \\R^n " }, { "math_id": 4, "text": " \\zeta " }, { "math_id": 5, "text": " (\\Omega, \\mathcal A, P) " }, { "math_id": 6, "text": " (E, \\mathcal E) " }, { "math_id": 7, "text": " B \\in \\mathcal \\mathcal E " }, { "math_id": 8, "text": " \\omega \\mapsto \\zeta(\\omega,B) " }, { "math_id": 9, "text": " (\\Omega, \\mathcal A) " }, { "math_id": 10, "text": " (\\R, \\mathcal B(\\R)) " }, { "math_id": 11, "text": " \\omega \\in \\Omega " }, { "math_id": 12, "text": " B \\mapsto \\zeta(\\omega, B) \\quad (B \\in \\mathcal E)" }, { "math_id": 13, "text": " B \\mapsto \\zeta(\\omega, B) " }, { "math_id": 14, "text": " \\zeta(\\omega,\\tilde B) < \\infty " }, { "math_id": 15, "text": " \\tilde B \\in \\mathcal E " }, { "math_id": 16, "text": " P " }, { "math_id": 17, "text": " \\tilde \\mathcal M:= \\{ \\mu \\mid \\mu \\text{ is measure on } (E, \\mathcal E) \\} " }, { "math_id": 18, "text": " \\mathcal M:= \\{ \\mu \\in \\tilde \\mathcal M \\mid \\mu(\\tilde B) < \\infty \\text{ for all bounded measurable } \\tilde B \\in \\mathcal E \\} " }, { "math_id": 19, "text": " \\tilde B " }, { "math_id": 20, "text": " I_{\\tilde B } \\colon \\mu \\mapsto \\mu(\\tilde B) " }, { "math_id": 21, "text": " \\tilde \\mathcal M " }, { "math_id": 22, "text": " \\R " }, { "math_id": 23, "text": " \\tilde \\mathbb M " }, { "math_id": 24, "text": " I_{\\tilde B } " }, { "math_id": 25, "text": " \\mathbb M " }, { "math_id": 26, "text": " \\mathcal M " }, { "math_id": 27, "text": " \\tilde\\mathbb M|_{\\mathcal M}= \\mathbb M " }, { "math_id": 28, "text": " (\\tilde \\mathcal M, \\tilde \\mathbb M) " }, { "math_id": 29, "text": " (\\mathcal M, \\mathbb M) " }, { "math_id": 30, "text": " \\zeta" }, { "math_id": 31, "text": " \\operatorname E \\zeta " }, { "math_id": 32, "text": " \\operatorname E \\left[ \\int f(x) \\; \\zeta (\\mathrm dx )\\right] = 
\\int f(x) \\; \\operatorname E \\zeta (\\mathrm dx)" }, { "math_id": 33, "text": " f " }, { "math_id": 34, "text": " \\nu " }, { "math_id": 35, "text": " \\int f(x) \\; \\zeta(\\mathrm dx )=0 \\text{ a.s. } \\text{ iff } \\int f(x) \\; \\nu (\\mathrm dx)=0" }, { "math_id": 36, "text": " \\mathcal L_\\zeta(f)= \\operatorname E \\left[ \\exp \\left( -\\int f(x) \\; \\zeta (\\mathrm dx ) \\right) \\right]" }, { "math_id": 37, "text": " \\int f(x) \\zeta(\\mathrm dx) " }, { "math_id": 38, "text": " \\zeta(A) := \\int \\mathbf 1_A(x) \\zeta(\\mathrm dx) " }, { "math_id": 39, "text": " \\mathcal I \\subset \\mathcal E " }, { "math_id": 40, "text": " \\sigma(\\mathcal I)=\\mathcal E " }, { "math_id": 41, "text": " \\mathcal I " }, { "math_id": 42, "text": " \\mu=\\mu_d + \\mu_a = \\mu_d + \\sum_{n=1}^N \\kappa_n \\delta_{X_n}, " }, { "math_id": 43, "text": "\\mu_d" }, { "math_id": 44, "text": "\\mu_a" }, { "math_id": 45, "text": " \\mu=\\sum_{n=1}^N \\delta_{X_n}, " }, { "math_id": 46, "text": "\\delta" }, { "math_id": 47, "text": "X_n" }, { "math_id": 48, "text": "N_X" }, { "math_id": 49, "text": "\\mathfrak{B}(N_X)" }, { "math_id": 50, "text": "N \\in M_X" } ]
https://en.wikipedia.org/wiki?curid=10040846
10042977
Wind engineering
Study of the effects of wind on natural and built environments Wind engineering is a subset of mechanical engineering, structural engineering, meteorology, and applied physics that analyzes the effects of wind in the natural and the built environment and studies the possible damage, inconvenience or benefits which may result from wind. In the field of engineering it includes strong winds, which may cause discomfort, as well as extreme winds, such as in a tornado, hurricane or heavy storm, which may cause widespread destruction. In the fields of wind energy and air pollution it also includes low and moderate winds as these are relevant to electricity production and dispersion of contaminants. Wind engineering draws upon meteorology, fluid dynamics, mechanics, geographic information systems, and a number of specialist engineering disciplines, including aerodynamics and structural dynamics. The tools used include atmospheric models, atmospheric boundary layer wind tunnels, and computational fluid dynamics models. Wind engineering involves, among other topics: Wind engineering may be considered by structural engineers to be closely related to earthquake engineering and explosion protection. Some sports stadiums such as Candlestick Park and Arthur Ashe Stadium are known for their strong, sometimes swirly winds, which affect the playing conditions. History. Wind engineering as a separate discipline can be traced to the UK in the 1960s, when informal meetings were held at the National Physical Laboratory, the Building Research Establishment, and elsewhere. The term "wind engineering" was first coined in 1970. Alan Garnett Davenport was one of the most prominent contributors to the development of wind engineering. He is well known for developing the Alan Davenport wind-loading chain or in short "wind-loading chain" that describes how different components contribute to the final load calculated on the structure. Wind loads on buildings. 
The design of buildings must account for wind loads, and these are affected by wind shear. For engineering purposes, a power law wind-speed profile may be defined as: formula_0 where formula_1 is the speed of the wind at height formula_2, formula_3 is the gradient wind at gradient height formula_4, and formula_5 is the exponential coefficient. Typically, buildings are designed to resist a strong wind with a very long return period, such as 50 years or more. The design wind speed is determined from historical records using extreme value theory to predict future extreme wind speeds. Wind speeds are generally calculated based on some regional design standard or standards. The design standards for building wind loads include: Wind comfort. The advent of high-rise tower blocks led to concerns regarding the wind nuisance caused by these buildings to pedestrians in their vicinity. A number of wind comfort and wind danger criteria were developed from 1971, based on different pedestrian activities, such as: Other criteria classified a wind environment as completely unacceptable or dangerous. Building geometries consisting of one and two rectangular buildings have a number of well-known effects: For more complex geometries, pedestrian wind comfort studies are required. These can use an appropriately scaled model in a boundary-layer wind tunnel, or, more recently, computational fluid dynamics techniques. The pedestrian-level wind speeds for a given exceedance probability are calculated to allow for regional wind-speed statistics. The vertical wind profile used in these studies varies according to the terrain in the vicinity of the buildings (which may differ by wind direction), and is often grouped in categories, such as: Wind turbines. Wind turbines are affected by wind shear. Vertical wind-speed profiles result in different wind speeds at the blades nearest to ground level compared to those at the top of blade travel, and this, in turn, affects the turbine operation.
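The power-law profile defined above is straightforward to evaluate numerically. The sketch below is illustrative only; the gradient wind speed, gradient height, and exponent are assumed values, not taken from any design standard.

```python
def wind_speed(z, v_g, z_g, alpha):
    """Power-law wind profile: v_z = v_g * (z / z_g)**(1 / alpha), valid for 0 < z < z_g."""
    if not 0 < z < z_g:
        raise ValueError("profile is defined for 0 < z < z_g")
    return v_g * (z / z_g) ** (1.0 / alpha)

# Assumed illustrative values: gradient wind of 40 m/s at a 400 m gradient height,
# alpha = 7 (roughly open terrain); wind speed at 50 m above ground:
v_50 = wind_speed(50.0, 40.0, 400.0, 7.0)
print(round(v_50, 2))
```

In a real design check, the exponent and gradient height would come from the applicable regional standard rather than the assumed values used here.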
The wind gradient can create a large bending moment in the shaft of a two-bladed turbine when the blades are vertical. The reduced wind gradient over water means shorter and less expensive wind turbine towers can be used in shallow seas. For wind turbine engineering, wind speed variation with height is often approximated using a power law: formula_6 where formula_7 is the velocity of the wind at height formula_8 [m/s], formula_9 is the velocity of the wind at some reference height formula_10 [m/s], and formula_11 is the Hellman exponent (also known as the power law exponent or shear exponent; approximately 1/7 in neutral flow, but it can exceed 1). Significance. The knowledge of wind engineering is used to analyze and design all high-rise buildings, cable-suspension bridges and cable-stayed bridges, electricity transmission towers and telecommunication towers, and all other types of towers and chimneys. The wind load is the dominant load in the analysis of many tall buildings, so wind engineering is essential for their analysis and design. Likewise, wind load is a dominant load in the analysis and design of all long-span cable bridges. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\ v_z = v_g \\cdot \\left( \\frac {z} {z_g} \\right)^ \\frac {1} {\\alpha}, 0 < z < z_g\n" }, { "math_id": 1, "text": "\\ v_z" }, { "math_id": 2, "text": "\\ z" }, { "math_id": 3, "text": "\\ v_g" }, { "math_id": 4, "text": "\\ z_g " }, { "math_id": 5, "text": "\\ \\alpha" }, { "math_id": 6, "text": "\\ v_w(h) = v_{ref} \\cdot \\left( \\frac {h} {h_{ref}} \\right)^ a\n" }, { "math_id": 7, "text": "\\ v_w(h)" }, { "math_id": 8, "text": " h" }, { "math_id": 9, "text": "\\ v_{ref}" }, { "math_id": 10, "text": " h_{ref} " }, { "math_id": 11, "text": "\\ a" } ]
https://en.wikipedia.org/wiki?curid=10042977
10043
Estimator
Rule for calculating an estimate of a given quantity based on observed data In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data: thus the rule (the estimator), the quantity of interest (the estimand) and its result (the estimate) are distinguished. For example, the sample mean is a commonly used estimator of the population mean. There are point and interval estimators. The point estimators yield single-valued results. This is in contrast to an interval estimator, where the result would be a range of plausible values. "Single value" does not necessarily mean "single number", but includes vector valued or function valued estimators. "Estimation theory" is concerned with the properties of estimators; that is, with defining properties that can be used to compare different estimators (different rules for creating estimates) for the same quantity, based on the same data. Such properties can be used to determine the best rules to use under given circumstances. However, in robust statistics, statistical theory goes on to consider the balance between having good properties, if tightly defined assumptions hold, and having worse properties that hold under wider conditions. Background. An "estimator" or "point estimate" is a statistic (that is, a function of the data) that is used to infer the value of an unknown parameter in a statistical model. A common way of phrasing it is "the estimator is the method selected to obtain an estimate of an unknown parameter". The parameter being estimated is sometimes called the "estimand". It can be either finite-dimensional (in parametric and semi-parametric models), or infinite-dimensional (semi-parametric and non-parametric models). If the parameter is denoted formula_0 then the estimator is traditionally written by adding a circumflex over the symbol: formula_1. 
Being a function of the data, the estimator is itself a random variable; a particular realization of this random variable is called the "estimate". Sometimes the words "estimator" and "estimate" are used interchangeably. The definition places virtually no restrictions on which functions of the data can be called the "estimators". The attractiveness of different estimators can be judged by looking at their properties, such as unbiasedness, mean square error, consistency, asymptotic distribution, etc. The construction and comparison of estimators are the subjects of the estimation theory. In the context of decision theory, an estimator is a type of decision rule, and its performance may be evaluated through the use of loss functions. When the word "estimator" is used without a qualifier, it usually refers to point estimation. The estimate in this case is a single point in the parameter space. There also exists another type of estimator: interval estimators, where the estimates are subsets of the parameter space. The problem of density estimation arises in two applications. Firstly, in estimating the probability density functions of random variables and secondly in estimating the spectral density function of a time series. In these problems the estimates are functions that can be thought of as point estimates in an infinite dimensional space, and there are corresponding interval estimation problems. Definition. Suppose a fixed "parameter" formula_2 needs to be estimated. Then an "estimator" is a function that maps the sample space to a set of "sample estimates". An estimator of formula_2 is usually denoted by the symbol formula_1. It is often convenient to express the theory using the algebra of random variables: thus if "X" is used to denote a random variable corresponding to the observed data, the estimator (itself treated as a random variable) is symbolised as a function of that random variable, formula_3. 
The estimate for a particular observed data value formula_4 (i.e. for formula_5) is then formula_6, which is a fixed value. Often an abbreviated notation is used in which formula_1 is interpreted directly as a random variable, but this can cause confusion. Quantified properties. The following definitions and attributes are relevant. Error. For a given sample formula_7, the "error" of the estimator formula_1 is defined as formula_8 where formula_9 is the parameter being estimated. The error, "e", depends not only on the estimator (the estimation formula or procedure), but also on the sample. Mean squared error. The mean squared error of formula_1 is defined as the expected value (probability-weighted average, over all samples) of the squared errors; that is, formula_10 It is used to indicate how far, on average, the collection of estimates are from the single parameter being estimated. Consider the following analogy. Suppose the parameter is the bull's-eye of a target, the estimator is the process of shooting arrows at the target, and the individual arrows are estimates (samples). Then high MSE means the average distance of the arrows from the bull's eye is high, and low MSE means the average distance from the bull's eye is low. The arrows may or may not be clustered. For example, even if all arrows hit the same point, yet grossly miss the target, the MSE is still relatively large. However, if the MSE is relatively low then the arrows are likely more highly clustered (than highly dispersed) around the target. Sampling deviation. For a given sample formula_7, the "sampling deviation" of the estimator formula_1 is defined as formula_11 where formula_12 is the expected value of the estimator. The sampling deviation, "d", depends not only on the estimator, but also on the sample. Variance. The variance of formula_1 is the expected value of the squared sampling deviations; that is, formula_13. 
It is used to indicate how far, on average, the collection of estimates are from the "expected value" of the estimates. (Note the difference between MSE and variance.) If the parameter is the bull's-eye of a target, and the arrows are estimates, then a relatively high variance means the arrows are dispersed, and a relatively low variance means the arrows are clustered. Even if the variance is low, the cluster of arrows may still be far off-target, and even if the variance is high, the diffuse collection of arrows may still be unbiased. Finally, even if all arrows grossly miss the target, if they nevertheless all hit the same point, the variance is zero. Bias. The bias of formula_1 is defined as formula_14. It is the distance between the average of the collection of estimates, and the single parameter being estimated. The bias of formula_1 is a function of the true value of formula_2 so saying that the bias of formula_1 is formula_15 means that for every formula_2 the bias of formula_1 is formula_15. There are two kinds of estimators: biased estimators and unbiased estimators. Whether an estimator is biased or not can be identified by the relationship between formula_16 and 0: The bias is also the expected value of the error, since formula_19. If the parameter is the bull's eye of a target and the arrows are estimates, then a relatively high absolute value for the bias means the average position of the arrows is off-target, and a relatively low absolute bias means the average position of the arrows is on target. They may be dispersed, or may be clustered. The relationship between bias and variance is analogous to the relationship between accuracy and precision. The estimator formula_1 is an unbiased estimator of formula_2 if and only if formula_20. Bias is a property of the estimator, not of the estimate. 
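The bias defined here can be estimated by simulation. The sketch below is illustrative (the Uniform(0, θ) model, sample size, and trial count are assumptions made for the example): the sample maximum is a biased estimator of the upper endpoint θ, while rescaling it by (n + 1)/n removes the bias.

```python
import random

rng = random.Random(0)
theta, n, trials = 3.0, 10, 40000
tot_max = tot_rescaled = 0.0
for _ in range(trials):
    xs = [rng.uniform(0.0, theta) for _ in range(n)]
    m = max(xs)                       # biased: E[max] = n / (n + 1) * theta
    tot_max += m
    tot_rescaled += (n + 1) / n * m   # unbiased rescaling of the maximum
bias_max = tot_max / trials - theta             # close to -theta / (n + 1)
bias_rescaled = tot_rescaled / trials - theta   # close to 0
print(bias_max, bias_rescaled)
```

The averaged estimates show a systematic shortfall for the raw maximum and essentially none for the rescaled version, which is exactly the distinction between a biased and an unbiased estimator.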
Often, people refer to a "biased estimate" or an "unbiased estimate", but they really are talking about an "estimate from a biased estimator", or an "estimate from an unbiased estimator". Also, people often confuse the "error" of a single estimate with the "bias" of an estimator. That the error for one estimate is large does not mean the estimator is biased. In fact, even if all estimates have astronomical absolute values for their errors, if the expected value of the error is zero, the estimator is unbiased. Also, an estimator's being biased does not preclude the error of an estimate from being zero in a particular instance. The ideal situation is to have an unbiased estimator with low variance, and also try to limit the number of samples where the error is extreme (that is, have few outliers). Yet unbiasedness is not essential. Often, if just a little bias is permitted, then an estimator can be found with lower mean squared error and/or fewer outlier sample estimates. An alternative to the version of "unbiased" above is "median-unbiased", where the median of the distribution of estimates agrees with the true value; thus, in the long run half the estimates will be too low and half too high. While this applies immediately only to scalar-valued estimators, it can be extended to any measure of central tendency of a distribution: see median-unbiased estimators. In a practical problem, formula_1 can always have a functional relationship with formula_2. For example, suppose a genetic theory states there is a type of leaf (starchy green) that occurs with probability formula_21, with formula_22. Then, for formula_23 leaves, the random variable formula_24, or the number of starchy green leaves, can be modeled with a formula_25 distribution. The number can be used to express the following estimator for formula_2: formula_26. One can show that formula_1 is an unbiased estimator for formula_2: formula_27 formula_28 formula_29 formula_30 formula_31 formula_32 formula_33. Unbiased. 
A desired property for estimators is unbiasedness, where an estimator is shown to have no systematic tendency to produce estimates larger or smaller than the true value of the parameter. Additionally, unbiased estimators with smaller variances are preferred over those with larger variances because the estimates will be closer to the "true" value of the parameter. The unbiased estimator with the smallest variance is known as the minimum-variance unbiased estimator (MVUE). To check whether an estimator is unbiased, one evaluates the equation formula_36 for the estimator formula_1. With estimator "T" and parameter of interest formula_2, if solving the previous equation yields formula_37, the estimator is unbiased. In the figure to the right, formula_34 is the only unbiased estimator; but if the distributions overlapped and were both centered around formula_2, then distribution formula_35 would actually be the preferred unbiased estimator. Expectation. When the quantity of interest is the expectation of the model distribution, there is an unbiased estimator that should satisfy the two equations below. formula_38 formula_39 Variance. Similarly, when the quantity of interest is the variance of the model distribution, there is also an unbiased estimator that should satisfy the two equations below. formula_40 formula_41 Note that we divide by "n" − 1 because dividing by "n" would give an estimator with a negative bias, which would thus produce estimates that are too small for formula_42. It should also be mentioned that even though formula_43 is unbiased for formula_42, the reverse is not true. Behavioral properties. Consistency. A consistent sequence of estimators is a sequence of estimators that converge in probability to the quantity being estimated as the index (usually the sample size) grows without bound. In other words, increasing the sample size increases the probability of the estimator being close to the population parameter. 
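The role of the "n" − 1 divisor can be seen in a short simulation comparing the two divisors; a sketch, assuming standard-normal data with sample size and trial count chosen purely for illustration:

```python
import random

def variance_estimates(xs):
    """Return the (n - 1)-divisor (unbiased) and n-divisor (biased) variance estimates."""
    n = len(xs)
    m = sum(xs) / n
    ss = sum((x - m) ** 2 for x in xs)
    return ss / (n - 1), ss / n

rng = random.Random(1)
n, trials = 5, 30000          # true variance of N(0, 1) samples is 1
tot_unbiased = tot_biased = 0.0
for _ in range(trials):
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    u, b = variance_estimates(xs)
    tot_unbiased += u
    tot_biased += b
avg_unbiased = tot_unbiased / trials   # close to 1
avg_biased = tot_biased / trials       # close to (n - 1) / n = 0.8
print(avg_unbiased, avg_biased)
```

The n-divisor average falls systematically below the true variance by the factor (n − 1)/n, which is the negative bias described above.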
Mathematically, a sequence of estimators {"tn"; "n" ≥ 0} is a consistent estimator for parameter "θ" if and only if, for all "ε" > 0, no matter how small, we have formula_45. The consistency defined above may be called weak consistency. The sequence is "strongly consistent" if it converges almost surely to the true value. An estimator that converges to a "multiple" of a parameter can be made into a consistent estimator by multiplying the estimator by a scale factor, namely the true value divided by the asymptotic value of the estimator. This occurs frequently in estimation of scale parameters by measures of statistical dispersion. Fisher consistency. An estimator can be considered Fisher consistent as long as the estimator is the same functional of the empirical distribution function as of the true distribution function. This is expressed by the formula formula_46, where formula_47 and formula_48 are the empirical and theoretical distribution functions, respectively. An easy way to see whether an estimator is Fisher consistent is to check the mean and the variance: for the mean, confirm that formula_49, and for the variance, confirm that formula_50. Asymptotic normality. An asymptotically normal estimator is a consistent estimator whose distribution around the true parameter "θ" approaches a normal distribution with standard deviation shrinking in proportion to formula_51 as the sample size "n" grows. Using formula_52 to denote convergence in distribution, "tn" is asymptotically normal if formula_53 for some "V". In this formulation "V/n" can be called the "asymptotic variance" of the estimator. However, some authors also call "V" the "asymptotic variance". Note that convergence will not necessarily have occurred for any finite "n", therefore this value is only an approximation to the true variance of the estimator, while in the limit the asymptotic variance (V/n) is simply zero.
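Weak consistency of the sample mean can be illustrated by estimating the exceedance probability directly; a sketch, with the Uniform(0, 1) model, ε = 0.1, and the sample sizes chosen purely for illustration:

```python
import random

def exceedance_prob(n, eps, trials, rng):
    """Estimate P(|sample mean - 0.5| > eps) for n i.i.d. Uniform(0, 1) draws."""
    hits = 0
    for _ in range(trials):
        m = sum(rng.random() for _ in range(n)) / n
        if abs(m - 0.5) > eps:
            hits += 1
    return hits / trials

rng = random.Random(2)
p_small_n = exceedance_prob(10, 0.1, 2000, rng)    # modest sample size
p_large_n = exceedance_prob(500, 0.1, 2000, rng)   # larger sample size
print(p_small_n, p_large_n)
```

The estimated probability of landing more than ε away from the true mean shrinks toward zero as n grows, which is the defining property of weak consistency.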
To be more specific, the distribution of the estimator "tn" converges weakly to a Dirac delta function centered at formula_2. The central limit theorem implies asymptotic normality of the sample mean formula_54 as an estimator of the true mean. More generally, maximum likelihood estimators are asymptotically normal under fairly weak regularity conditions; see the asymptotics section of the maximum likelihood article. However, not all estimators are asymptotically normal; the simplest examples are found when the true value of a parameter lies on the boundary of the allowable parameter region. Efficiency. The efficiency of an estimator describes how well it estimates the quantity of interest in a "minimum error" manner. In reality, there is no single explicitly best estimator; there can only be better estimators. Whether an estimator is efficient depends on the choice of a particular loss function, and efficiency is reflected by two naturally desirable properties of estimators: being unbiased, formula_18, and having minimal mean squared error (MSE), formula_55. These cannot in general both be satisfied simultaneously: a biased estimator may have a lower mean squared error than any unbiased estimator (see estimator bias). A simple identity relates the mean squared error to the estimator bias. formula_56 The left-hand side is the mean squared error; the first term on the right-hand side is the square of the estimator bias, and the second term is the variance of the estimator. The quality of an estimator can therefore be judged by comparing the variances, the squared biases, or the MSEs of candidate estimators. The variance of a good estimator (good efficiency) is smaller than the variance of a bad estimator (bad efficiency); the squared bias of a good estimator is smaller than that of a bad estimator; and the MSE of a good estimator is smaller than the MSE of a bad estimator. 
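The bias–variance decomposition of the MSE can be verified numerically: for any collection of estimates of a parameter, the empirical MSE equals the squared empirical bias plus the empirical (population) variance. The sketch below uses plain Python with illustrative numbers.

```python
# Sketch: empirical check of MSE = bias^2 + variance.
# 'estimates' stands for repeated realisations of an estimator of a
# true parameter theta; the numbers are illustrative, not from the article.

theta = 2.0
estimates = [1.8, 2.1, 2.4, 1.9, 2.3]

n = len(estimates)
mean_est = sum(estimates) / n
mse = sum((t - theta) ** 2 for t in estimates) / n
bias = mean_est - theta
var = sum((t - mean_est) ** 2 for t in estimates) / n  # population variance

print(mse, bias ** 2 + var)  # the two values agree
assert abs(mse - (bias ** 2 + var)) < 1e-12
```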
Suppose there are two estimators: formula_35 is the good estimator and formula_34 is the bad estimator. The above relationship can be expressed by the following formulas. formula_57 formula_58 formula_59 Besides using formulas to identify the efficiency of an estimator, it can also be identified through a graph. If an estimator is efficient, then in the frequency vs. value graph there will be a curve with high frequency at the center and low frequency on the two sides. If an estimator is not efficient, the frequency vs. value graph will show a relatively gentler curve. To put it simply, the good estimator has a narrow curve, while the bad estimator has a wider one. Plotting these two curves on one graph with a shared "y"-axis makes the difference more obvious. Among unbiased estimators, there often exists one with the lowest variance, called the minimum variance unbiased estimator (MVUE). In some cases an unbiased efficient estimator exists, which, in addition to having the lowest variance among unbiased estimators, satisfies the Cramér–Rao bound, which is an absolute lower bound on variance for statistics of a variable. Concerning such "best unbiased estimators", see also Cramér–Rao bound, Gauss–Markov theorem, Lehmann–Scheffé theorem, Rao–Blackwell theorem. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\theta " }, { "math_id": 1, "text": "\\widehat{\\theta}" }, { "math_id": 2, "text": "\\theta" }, { "math_id": 3, "text": "\\widehat{\\theta}(X)" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "X=x" }, { "math_id": 6, "text": "\\widehat{\\theta}(x)" }, { "math_id": 7, "text": " x " }, { "math_id": 8, "text": "e(x)=\\widehat{\\theta}(x) - \\theta," }, { "math_id": 9, "text": "\\theta " }, { "math_id": 10, "text": "\\operatorname{MSE}(\\widehat{\\theta}) = \\operatorname{E}[(\\widehat{\\theta}(X) - \\theta)^2]." }, { "math_id": 11, "text": "d(x) =\\widehat{\\theta}(x) - \\operatorname{E}( \\widehat{\\theta}(X) ) =\\widehat{\\theta}(x) - \\operatorname{E}( \\widehat{\\theta} )," }, { "math_id": 12, "text": " \\operatorname{E}( \\widehat{\\theta}(X) ) " }, { "math_id": 13, "text": "\\operatorname{Var}(\\widehat{\\theta}) = \\operatorname{E}[(\\widehat{\\theta} - \\operatorname{E}[\\widehat{\\theta}]) ^2]" }, { "math_id": 14, "text": "B(\\widehat{\\theta}) = \\operatorname{E}(\\widehat{\\theta}) - \\theta" }, { "math_id": 15, "text": "b" }, { "math_id": 16, "text": "\\operatorname{E}(\\widehat{\\theta}) - \\theta" }, { "math_id": 17, "text": "\\operatorname{E}(\\widehat{\\theta}) - \\theta\\neq0" }, { "math_id": 18, "text": "\\operatorname{E}(\\widehat{\\theta}) - \\theta=0" }, { "math_id": 19, "text": " \\operatorname{E}(\\widehat{\\theta}) - \\theta = \\operatorname{E}(\\widehat{\\theta} - \\theta ) " }, { "math_id": 20, "text": "B(\\widehat{\\theta}) = 0" }, { "math_id": 21, "text": "p_1=1/4\\cdot(\\theta + 2)" }, { "math_id": 22, "text": "0<\\theta<1" }, { "math_id": 23, "text": "n" }, { "math_id": 24, "text": "N_1" }, { "math_id": 25, "text": "Bin(n,p_1)" }, { "math_id": 26, "text": "\\widehat{\\theta}=4/n\\cdot N_1-2" }, { "math_id": 27, "text": "E[\\widehat{\\theta}]=E[4/n\\cdot N_1-2]" }, { "math_id": 28, "text": "=4/n\\cdot E[N_1]-2" }, { "math_id": 29, "text": "=4/n\\cdot np_1-2" }, { "math_id": 30, "text": "=4\\cdot 
p_1-2" }, { "math_id": 31, "text": "=4\\cdot1/4\\cdot(\\theta+2)-2" }, { "math_id": 32, "text": "=\\theta+2-2" }, { "math_id": 33, "text": "=\\theta" }, { "math_id": 34, "text": "\\theta_2" }, { "math_id": 35, "text": "\\theta_1" }, { "math_id": 36, "text": "\\operatorname E(\\widehat{\\theta}) - \\theta=0" }, { "math_id": 37, "text": "\\operatorname E[T] = \\theta" }, { "math_id": 38, "text": "1. \\quad \\overline X_n = \\frac{X_1 + X_2+ \\cdots + X_n} n" }, { "math_id": 39, "text": "2. \\quad \\operatorname E\\left[\\overline X_n \\right] = \\mu" }, { "math_id": 40, "text": "1. \\quad S^2_n = \\frac{1}{n-1}\\sum_{i = 1}^n (X_i - \\bar{X_n})^2" }, { "math_id": 41, "text": " 2. \\quad \\operatorname E\\left[S^2_n\\right] = \\sigma^2" }, { "math_id": 42, "text": "\\sigma^2" }, { "math_id": 43, "text": "S^2_n" }, { "math_id": 44, "text": "\\operatorname{MSE}(\\widehat{\\theta}) = \\operatorname{Var}(\\widehat\\theta) + (B(\\widehat{\\theta}))^2," }, { "math_id": 45, "text": "\n\\lim_{n\\to\\infty}\\Pr\\left\\{\n\\left|\nt_n-\\theta\\right|<\\varepsilon\n\\right\\}=1\n" }, { "math_id": 46, "text": "\\widehat{\\theta} = h(T_n), \\theta = h(T_\\theta)" }, { "math_id": 47, "text": "T_n" }, { "math_id": 48, "text": "T_\\theta" }, { "math_id": 49, "text": "\\widehat{\\mu} = \\bar{X}" }, { "math_id": 50, "text": "\\widehat{\\sigma}^2 = SSD/n" }, { "math_id": 51, "text": "1/\\sqrt{n}" }, { "math_id": 52, "text": "\\xrightarrow{D}" }, { "math_id": 53, "text": "\\sqrt{n}(t_n - \\theta) \\xrightarrow{D} N(0,V)," }, { "math_id": 54, "text": "\\bar X" }, { "math_id": 55, "text": "\\operatorname{E}[(\\widehat{\\theta} - \\theta )^2]" }, { "math_id": 56, "text": " \\operatorname{E}[(\\widehat{\\theta} - \\theta )^2]=(\\operatorname{E}(\\widehat{\\theta}) - \\theta)^2+\\operatorname{Var}(\\theta)\\ " }, { "math_id": 57, "text": "\\operatorname{Var}(\\theta_1)<\\operatorname{Var}(\\theta_2)" }, { "math_id": 58, "text": "|\\operatorname{E}(\\theta_1) - 
\\theta|<\\left|\\operatorname{E}(\\theta_2) - \\theta\\right|" }, { "math_id": 59, "text": "\\operatorname{MSE}(\\theta_1)<\\operatorname{MSE}(\\theta_2)" } ]
https://en.wikipedia.org/wiki?curid=10043
10043801
Homotopy category of chain complexes
In homological algebra in mathematics, the homotopy category "K(A)" of chain complexes in an additive category "A" is a framework for working with chain homotopies and homotopy equivalences. It lies intermediate between the category of chain complexes "Kom(A)" of "A" and the derived category "D(A)" of "A" when "A" is abelian; unlike the former it is a triangulated category, and unlike the latter its formation does not require that "A" is abelian. Philosophically, while "D(A)" turns into isomorphisms any maps of complexes that are quasi-isomorphisms in "Kom(A)", "K(A)" does so only for those that are quasi-isomorphisms for a "good reason", namely actually having an inverse up to homotopy equivalence. Thus, "K(A)" is more understandable than "D(A)". Definitions. Let "A" be an additive category. The homotopy category "K(A)" is based on the following definition: if we have complexes "A", "B" and maps "f", "g" from "A" to "B", a chain homotopy from "f" to "g" is a collection of maps formula_0 ("not" a map of complexes) such that formula_1 or simply formula_2 This can be depicted as: We also say that "f" and "g" are chain homotopic, or that formula_3 is null-homotopic or homotopic to 0. It is clear from the definition that the maps of complexes which are null-homotopic form a group under addition. The homotopy category of chain complexes "K(A)" is then defined as follows: its objects are the same as the objects of "Kom(A)", namely chain complexes. Its morphisms are "maps of complexes modulo homotopy": that is, we define an equivalence relation formula_4 if "f" is homotopic to "g" and define formula_5 to be the quotient by this relation. It is clear that this results in an additive category if one notes that this is the same as taking the quotient by the subgroup of null-homotopic maps. 
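As a concrete illustration of the definition (a standard example, not drawn from the text above): the identity map of a two-term complex whose differential is the identity is null-homotopic, which is exactly the mechanism by which a nonzero complex can become isomorphic to 0 in "K(A)".

```latex
% The complex X, concentrated in degrees 0 and 1 with d^0 = id_A:
%   X : \cdots \to 0 \to A \xrightarrow{\,\mathrm{id}_A\,} A \to 0 \to \cdots
% Take h^1 = \mathrm{id}_A : X^1 \to X^0 and h^n = 0 for n \neq 1.
% With f = \mathrm{id}_X and g = 0, the homotopy identity
% f^n - g^n = d^{n-1} h^n + h^{n+1} d^n reads:
\begin{align*}
  f^0 - g^0 &= d^{-1} h^0 + h^1 d^0 = 0 + \mathrm{id}_A \circ \mathrm{id}_A = \mathrm{id}_A, \\
  f^1 - g^1 &= d^0 h^1 + h^2 d^1 = \mathrm{id}_A \circ \mathrm{id}_A + 0 = \mathrm{id}_A.
\end{align*}
% Hence \mathrm{id}_X \sim 0, so X \cong 0 in K(A) although X \neq 0 in Kom(A).
```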
The following variants of the definition are also widely used: if one takes only "bounded-below" ("An=0 for n«0"), "bounded-above" ("An=0 for n»0"), or "bounded" ("An=0 for |n|»0") complexes instead of unbounded ones, one speaks of the "bounded-below homotopy category" etc. They are denoted by "K+(A)", "K−(A)" and "Kb(A)", respectively. A morphism formula_6 which is an isomorphism in "K(A)" is called a homotopy equivalence. In detail, this means there is another map formula_7, such that the two compositions are homotopic to the identities: formula_8 and formula_9. The name "homotopy" comes from the fact that homotopic maps of topological spaces induce homotopic (in the above sense) maps of singular chains. Remarks. Two chain homotopic maps "f" and "g" induce the same maps on homology because "(f − g)" sends cycles to boundaries, which are zero in homology. In particular a homotopy equivalence is a quasi-isomorphism. (The converse is false in general.) This shows that there is a canonical functor formula_10 to the derived category (if "A" is abelian). The triangulated structure. The "shift" "A[1]" of a complex "A" is the following complex formula_11 (note that formula_12), where the differential is formula_13. For the cone of a morphism "f" we take the mapping cone. There are natural maps formula_14 This diagram is called a "triangle". The homotopy category "K(A)" is a triangulated category, if one defines distinguished triangles to be isomorphic (in "K(A)", i.e. homotopy equivalent) to the triangles above, for arbitrary "A", "B" and "f". The same is true for the bounded variants "K+(A)", "K−(A)" and "Kb(A)". Although triangles make sense in "Kom(A)" as well, that category is not triangulated with respect to these distinguished triangles; for example, formula_15 is not distinguished since the cone of the identity map is not isomorphic to the complex 0 (however, the zero map formula_16 is a homotopy equivalence, so that this triangle "is" distinguished in "K(A)"). 
Furthermore, the rotation of a distinguished triangle is obviously not distinguished in "Kom(A)", but (less obviously) is distinguished in "K(A)". See the references for details. Generalization. More generally, the homotopy category "Ho(C)" of a differential graded category "C" is defined to have the same objects as "C", but morphisms are defined by formula_17. (This boils down to the homotopy of chain complexes if "C" is the category of complexes whose morphisms do not have to respect the differentials). If "C" has cones and shifts in a suitable sense, then "Ho(C)" is a triangulated category, too.
[ { "math_id": 0, "text": "h^n \\colon A^n \\to B^{n - 1}" }, { "math_id": 1, "text": "f^n - g^n = d_B^{n - 1} h^n + h^{n + 1} d_A^n," }, { "math_id": 2, "text": " f - g = d_B h + h d_A." }, { "math_id": 3, "text": "f - g" }, { "math_id": 4, "text": "f \\sim g\\ " }, { "math_id": 5, "text": "\\operatorname{Hom}_{K(A)}(A, B) = \\operatorname{Hom}_{Kom(A)}(A,B)/\\sim" }, { "math_id": 6, "text": "f : A \\rightarrow B" }, { "math_id": 7, "text": "g : B \\rightarrow A" }, { "math_id": 8, "text": "f \\circ g \\sim Id_B" }, { "math_id": 9, "text": "g \\circ f \\sim Id_A" }, { "math_id": 10, "text": "K(A) \\rightarrow D(A)" }, { "math_id": 11, "text": "A[1]: ... \\to A^{n+1} \\xrightarrow{d_{A[1]}^n} A^{n+2} \\to ..." }, { "math_id": 12, "text": "(A[1])^n = A^{n + 1}" }, { "math_id": 13, "text": "d_{A[1]}^n := - d_A^{n+1}" }, { "math_id": 14, "text": "A \\xrightarrow{f} B \\to C(f) \\to A[1]" }, { "math_id": 15, "text": "X \\xrightarrow{id} X \\to 0 \\to" }, { "math_id": 16, "text": "C(id) \\to 0" }, { "math_id": 17, "text": "\\operatorname{Hom}_{Ho(C)}(X, Y) = H^0 \\operatorname{Hom}_C (X, Y)" } ]
https://en.wikipedia.org/wiki?curid=10043801
1004401
Nambu–Goto action
The Nambu–Goto action is the simplest invariant action in bosonic string theory, and is also used in other theories that investigate string-like objects (for example, cosmic strings). It is the starting point of the analysis of zero-thickness (infinitely thin) string behavior, using the principles of Lagrangian mechanics. Just as the action for a free point particle is proportional to its proper time — "i.e.", the "length" of its world-line — a relativistic string's action is proportional to the area of the sheet which the string traces as it travels through spacetime. It is named after Japanese physicists Yoichiro Nambu and Tetsuo Goto. Background. Relativistic Lagrangian mechanics. The basic principle of Lagrangian mechanics, the principle of stationary action, is that an object subjected to outside influences will "choose" a path which makes a certain quantity, the "action", an extremum. The action is a functional, a mathematical relationship which takes an entire path and produces a single number. The "physical path", that which the object actually follows, is the path for which the action is "stationary" (or extremal): any small variation of the path from the physical one does not significantly change the action. (Often, this is equivalent to saying the physical path is the one for which the action is a minimum.) Actions are typically written using Lagrangians, formulas which depend upon the object's state at a particular point in space and/or time. In non-relativistic mechanics, for example, a point particle's Lagrangian is the difference between kinetic and potential energy: formula_0. The action, often written formula_1, is then the integral of this quantity from a starting time to an ending time: formula_2 This approach to mechanics has the advantage that it is easily extended and generalized. For example, we can write a Lagrangian for a relativistic particle, which will be valid even if the particle is traveling close to the speed of light. 
To preserve Lorentz invariance, the action should only depend upon quantities that are the same for all (Lorentz) observers, i.e. the action should be a Lorentz scalar. The simplest such quantity is the "proper time", the time measured by a clock carried by the particle. According to special relativity, all Lorentz observers watching a particle move will compute the same value for the quantity formula_3 and formula_4 is then an infinitesimal proper time. For a point particle not subject to external forces ("i.e.", one undergoing inertial motion), the relativistic action is formula_5 World-sheets. Just as a zero-dimensional point traces out a world-line on a spacetime diagram, a one-dimensional string is represented by a "world-sheet". All world-sheets are two-dimensional surfaces, hence we need two parameters to specify a point on a world-sheet. String theorists use the symbols formula_6 and formula_7 for these parameters. As it turns out, string theories involve higher-dimensional spaces than the 3D world with which we are familiar; bosonic string theory requires 25 spatial dimensions and one time axis. If formula_8 is the number of spatial dimensions, we can represent a point by the vector formula_9 We describe a string using functions which map a position in the parameter space (formula_6, formula_7) to a point in spacetime. For each value of formula_6 and formula_7, these functions specify a unique spacetime vector: formula_10 The functions formula_11 determine the shape which the world-sheet takes. Different Lorentz observers will disagree on the coordinates they assign to particular points on the world-sheet, but they must all agree on the total "proper area" which the world-sheet has. The Nambu–Goto action is chosen to be proportional to this total proper area. Let formula_12 be the metric on the formula_13-dimensional spacetime. Then, formula_14 is the induced metric on the world-sheet, where formula_15 and formula_16. 
For the area formula_17 of the world-sheet the following holds: formula_18 where formula_19 and formula_20 Using the notation that: formula_21 and formula_22 one can rewrite the metric formula_23: formula_24 formula_25 The Nambu–Goto action is then defined as minus formula_27/formula_28 times the total proper area formula_17 of the world-sheet, where formula_26. The factors before the integral give the action the correct units, energy multiplied by time. formula_27 is the tension in the string, and formula_28 is the speed of light. Typically, string theorists work in "natural units" where formula_28 is set to 1 (along with Planck's constant formula_29 and Newton's constant formula_30). Also, partly for historical reasons, they use the "slope parameter" formula_31 instead of formula_27. With these changes, the Nambu–Goto action becomes formula_32 These two forms are, of course, entirely equivalent: choosing one over the other is a matter of convention and convenience. Two further equivalent forms (on shell but not off shell) are formula_33 and formula_34 The conjugate momentum field is formula_35. Then, formula_36 is a primary constraint. The secondary constraint is formula_37. These constraints generate timelike diffeomorphisms and spacelike diffeomorphisms on the worldsheet. The canonical Hamiltonian vanishes: formula_38. The extended Hamiltonian is given by formula_39 where formula_40 and formula_41 are Lagrange multipliers. The equations of motion satisfy the Virasoro constraints formula_42 and formula_43. Typically, the Nambu–Goto action does not yet have the form appropriate for studying the quantum physics of strings. For this it must be modified in a similar way as the action of a point particle. That is classically equal to minus mass times the invariant length in spacetime, but must be replaced by a quadratic expression with the same classical value. For strings the analog correction is provided by the Polyakov action, which is classically equivalent to the Nambu–Goto action, but gives the 'correct' quantum theory. 
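The area formula can be sanity-checked numerically. For a static straight string, X(τ, σ) = (cτ, σ, 0, 0) with σ ∈ [0, L], one has Ẋ² = −c², X'² = 1 and Ẋ · X' = 0, so formula_25 gives g = −c², and the action −(formula_27/formula_28) × (proper area) evaluates to −formula_27 · L · Δt. The plain-Python sketch below (constants and step counts are illustrative choices of mine) confirms this by direct numerical integration.

```python
# Sketch: Nambu-Goto action S = -(T0/c) * integral of sqrt(-g) dsigma dtau,
# evaluated for a static straight string X(tau, sigma) = (c*tau, sigma, 0, 0).
# Analytically S = -T0 * L * dt. Constants below are illustrative.
import math

ETA = (-1.0, 1.0, 1.0, 1.0)          # mostly-plus Minkowski metric (diagonal)

def dot(u, v):
    return sum(e * a * b for e, a, b in zip(ETA, u, v))

c, T0, L, dt = 3.0, 2.0, 5.0, 1.5    # illustrative values
n_tau, n_sigma = 200, 200

x_dot = (c, 0.0, 0.0, 0.0)           # dX/dtau (constant on this worldsheet)
x_prime = (0.0, 1.0, 0.0, 0.0)       # dX/dsigma

g = dot(x_dot, x_dot) * dot(x_prime, x_prime) - dot(x_dot, x_prime) ** 2
area = sum(math.sqrt(-g) * (dt / n_tau) * (L / n_sigma)
           for _ in range(n_tau) for _ in range(n_sigma))
S = -(T0 / c) * area

print(S)  # analytically S = -T0 * L * dt = -15
```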
It is, however, possible to develop a quantum theory from the Nambu–Goto action in the light cone gauge. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "L=K-U" }, { "math_id": 1, "text": "S" }, { "math_id": 2, "text": "S = \\int_{t_i}^{t_f} L \\, dt." }, { "math_id": 3, "text": "-ds^2 = -(c \\, dt)^2 + dx^2 + dy^2 + dz^2, \\ " }, { "math_id": 4, "text": "ds/c" }, { "math_id": 5, "text": "S = -mc \\int ds." }, { "math_id": 6, "text": "\\tau" }, { "math_id": 7, "text": "\\sigma" }, { "math_id": 8, "text": "d" }, { "math_id": 9, "text": "x = (x^0, x^1, x^2, \\ldots, x^d)." }, { "math_id": 10, "text": "X (\\tau, \\sigma) = (X^0(\\tau,\\sigma), X^1(\\tau,\\sigma), X^2(\\tau,\\sigma), \\ldots, X^d(\\tau,\\sigma))." }, { "math_id": 11, "text": "X^\\mu (\\tau,\\sigma)" }, { "math_id": 12, "text": " \\eta_{\\mu \\nu} " }, { "math_id": 13, "text": "(d+1)" }, { "math_id": 14, "text": " g_{ab} = \\eta_{\\mu \\nu} \\frac{\\partial X^\\mu}{\\partial y^a} \\frac{\\partial X^\\nu}{\\partial y^b} \\ " }, { "math_id": 15, "text": " a,b = 0,1 " }, { "math_id": 16, "text": " y^0 = \\tau , y^1 = \\sigma " }, { "math_id": 17, "text": " \\mathcal{A} " }, { "math_id": 18, "text": " \\mathrm{d} \\mathcal{A} = \\mathrm{d}^2 \\Sigma \\sqrt{-g} " }, { "math_id": 19, "text": "\\mathrm{d}^2\\Sigma = \\mathrm{d}\\sigma \\, \\mathrm{d}\\tau" }, { "math_id": 20, "text": " g = \\mathrm{det} \\left( g_{ab} \\right) \\ " }, { "math_id": 21, "text": "\\dot{X} = \\frac{\\partial X}{\\partial \\tau}" }, { "math_id": 22, "text": "X' = \\frac{\\partial X}{\\partial \\sigma}," }, { "math_id": 23, "text": " g_{ab} " }, { "math_id": 24, "text": " g_{ab} = \\left( \\begin{array}{cc} \\dot{X}^2 & \\dot{X} \\cdot X' \\\\ X' \\cdot \\dot{X} & X'^2 \\end{array} \\right) \\ " }, { "math_id": 25, "text": " g = \\dot{X}^2 X'^2 - (\\dot{X} \\cdot X')^2 " }, { "math_id": 26, "text": " X \\cdot Y := \\eta_{\\mu \\nu}X^\\mu Y^\\nu " }, { "math_id": 27, "text": "T_0" }, { "math_id": 28, "text": "c" }, { "math_id": 29, "text": "\\hbar" }, { "math_id": 30, "text": "G" }, { "math_id": 31, "text": "\\alpha'" }, { "math_id": 32, "text": 
"\\mathcal{S} = -\\frac{1}{2\\pi\\alpha'} \\int \\mathrm{d}^2 \\Sigma \n\\sqrt{(\\dot{X} \\cdot X')^2 - (\\dot{X})^2 (X')^2}." }, { "math_id": 33, "text": "\\mathcal{S} = -\\frac{1}{2\\pi\\alpha'} \\int \\mathrm{d}^2 \\Sigma \\sqrt{{\\dot{X}} ^2 - {X'}^2}," }, { "math_id": 34, "text": "\\mathcal{S} = -\\frac{1}{4\\pi\\alpha'} \\int \\mathrm{d}^2 \\Sigma ({\\dot{X}}^2 - {X' }^2)." }, { "math_id": 35, "text": "P=-\\frac{T}{\\sqrt{(\\dot X\\cdot X')^2-{\\dot X}^2{X'}^2}}\\left[X'(\\dot X\\cdot X')-\\dot X {X'}^2\\right]" }, { "math_id": 36, "text": "P^2=\\frac{T^2}{(\\dot X\\cdot X')^2-{\\dot X}^2{X'}^2}\\left[ {X'}^2(\\dot X\\cdot X')^2-2(\\dot X\\cdot X')^2 X'^2+{\\dot X}^2{X'}^4 \\right]=-T^2{X'}^2" }, { "math_id": 37, "text": "P\\cdot X'=0" }, { "math_id": 38, "text": "H=P\\cdot \\dot X-\\mathcal{L}=0" }, { "math_id": 39, "text": "H=\\int d\\sigma \\left[\\lambda(P^2+T^2{X'}^2)+\\rho P\\cdot X'\\right]" }, { "math_id": 40, "text": "\\lambda" }, { "math_id": 41, "text": "\\rho" }, { "math_id": 42, "text": "{\\dot X}^2+X'^2=0" }, { "math_id": 43, "text": "\\dot X\\cdot X'=0" } ]
https://en.wikipedia.org/wiki?curid=1004401
10046650
Optimal matching
Sequence analysis in social science Optimal matching is a sequence analysis method used in social science, to assess the dissimilarity of ordered arrays of tokens that usually represent a time-ordered sequence of socio-economic states two individuals have experienced. Once such distances have been calculated for a set of observations (e.g. individuals in a cohort) classical tools (such as cluster analysis) can be used. The method was tailored to social sciences from a technique originally introduced to study molecular biology (protein or genetic) sequences (see sequence alignment). Optimal matching uses the Needleman-Wunsch algorithm. Algorithm. Let formula_0 be a sequence of states formula_1 belonging to a finite set of possible states. Let us denote formula_2 the sequence space, i.e. the set of all possible sequences of states. Optimal matching algorithms work by defining simple operator algebras that manipulate sequences, i.e. a set of operators formula_3. In the simplest approach, a set composed of only three basic operations to transform sequences is used: one can insert a new state into the sequence, formula_5; delete a state, formula_6; or substitute the state formula_7 with a state formula_8, formula_9. Imagine now that a "cost" formula_10 is associated with each operator. Given two sequences formula_11 and formula_12, the idea is to measure the "cost" of obtaining formula_12 from formula_11 using operators from the algebra. Let formula_13 be a sequence of operators such that the application of all the operators of this sequence formula_14 to the first sequence formula_11 gives the second sequence formula_12: formula_15 where formula_16 denotes the compound operator. To this set we associate the cost formula_17, which represents the total cost of the transformation. One should consider at this point that there might exist different such sequences formula_14 that transform formula_11 into formula_12; a reasonable choice is to select the cheapest of such sequences. 
We thus call distance formula_18 that is, the cost of the least expensive set of transformations that turn formula_11 into formula_12. Notice that formula_19 is by definition nonnegative, since it is a sum of positive costs, and trivially formula_20 if and only if formula_21, that is, there is no cost. The distance function is symmetric if insertion and deletion costs are equal, formula_22; the term "indel" cost usually refers to the common cost of insertion and deletion. Considering a set composed of only the three basic operations described above, this proximity measure satisfies the triangle inequality. Transitivity, however, depends on the definition of the set of elementary operations. Criticism. Although optimal matching techniques are widely used in sociology and demography, such techniques also have their flaws. As was pointed out by several authors (for example L. L. Wu), the main problem in the application of optimal matching is to appropriately define the costs formula_23.
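The distance defined in the Algorithm section can be sketched as a small dynamic program (plain Python; the unit indel cost and the constant substitution cost are illustrative choices, real applications typically use state-pair-specific substitution costs).

```python
# Sketch: optimal matching distance between two state sequences,
# computed by dynamic programming (Needleman-Wunsch style).
# indel: common cost of insertion/deletion; sub(a, b): substitution cost.

def om_distance(s1, s2, indel=1.0, sub=lambda a, b: 0.0 if a == b else 2.0):
    n, m = len(s1), len(s2)
    # d[i][j] = cheapest cost of turning s1[:i] into s2[:j]
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * indel
    for j in range(1, m + 1):
        d[0][j] = j * indel
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + indel,            # delete s1[i-1]
                          d[i][j - 1] + indel,            # insert s2[j-1]
                          d[i - 1][j - 1] + sub(s1[i - 1], s2[j - 1]))
    return d[n][m]

# Example: employment-state sequences (illustrative tokens)
a = "EEUUE"  # E = employed, U = unemployed
b = "EUUEE"
print(om_distance(a, b))  # 2.0; symmetric since insertion and deletion costs agree
```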
[ { "math_id": 0, "text": "S = (s_1, s_2, s_3, \\ldots s_T)" }, { "math_id": 1, "text": "s_i" }, { "math_id": 2, "text": "{\\mathbf S}" }, { "math_id": 3, "text": "a_i: {\\mathbf S} \\rightarrow {\\mathbf S}" }, { "math_id": 4, "text": "s" }, { "math_id": 5, "text": "a^{\\rm Ins}_{s'} (s_1, s_2, s_3, \\ldots s_T) = (s_1, s_2, s_3, \\ldots, s', \\ldots s_T) " }, { "math_id": 6, "text": "a^{\\rm Del}_{s_2} (s_1, s_2, s_3, \\ldots s_T) = (s_1, s_3, \\ldots s_T)" }, { "math_id": 7, "text": "s_1" }, { "math_id": 8, "text": "s'_1" }, { "math_id": 9, "text": "a^{\\rm Sub}_{s_1,s'_1} (s_1, s_2, s_3, \\ldots s_T) = (s'_1, s_2, s_3, \\ldots s_T)" }, { "math_id": 10, "text": "c(a_i) \\in {\\mathbf R}^+_0" }, { "math_id": 11, "text": "S_1" }, { "math_id": 12, "text": "S_2" }, { "math_id": 13, "text": "A={a_1, a_2, \\ldots a_n}" }, { "math_id": 14, "text": "A" }, { "math_id": 15, "text": "S_2 = a_1 \\circ a_2 \\circ \\ldots \\circ a_{n} (S_1)" }, { "math_id": 16, "text": "a_1 \\circ a_2" }, { "math_id": 17, "text": "c(A) = \\sum_{i=1}^n c(a_i)" }, { "math_id": 18, "text": "d(S_1,S_2)= \\min_A \\left \\{ c(A)~{\\rm such~that}~S_2 = A (S_1) \\right \\} " }, { "math_id": 19, "text": "d(S_1,S_2)" }, { "math_id": 20, "text": "d(S_1,S_2)=0" }, { "math_id": 21, "text": "S_1=S_2" }, { "math_id": 22, "text": "c(a^{\\rm Ins}) = c(a^{\\rm Del})" }, { "math_id": 23, "text": "c(a_i)" } ]
https://en.wikipedia.org/wiki?curid=10046650
10046651
G-network
In queueing theory, a discipline within the mathematical theory of probability, a G-network (generalized queueing network, often called a Gelenbe network) is an open network of G-queues first introduced by Erol Gelenbe as a model for queueing systems with specific control functions, such as traffic re-routing or traffic destruction, as well as a model for neural networks. A G-queue is a queue with several types of novel and useful customers: "positive" customers, which arrive from other queues or from outside and obey standard service and routing disciplines; "negative" customers, which on arrival at a non-empty queue destroy a customer there; and, in extended versions of the model, "triggers", which move a customer from one queue to another. A product-form solution superficially similar in form to Jackson's theorem, but which requires the solution of a system of non-linear equations for the traffic flows, exists for the stationary distribution of G-networks; the traffic equations of a G-network are in fact surprisingly non-linear, and the model does not obey partial balance. This broke previous assumptions that partial balance was a necessary condition for a product-form solution. A powerful property of G-networks is that they are universal approximators for continuous and bounded functions, so that they can be used to approximate quite general input-output behaviours. Definition. A network of "m" interconnected queues is a "G-network" if each queue serves customers one at a time with exponentially distributed service times; positive customers arrive at queue "i" from outside the network according to a Poisson process of rate formula_0, and negative customers according to a Poisson process of rate formula_1; and a customer completing service at queue "i" either moves to queue "j" as a positive customer with probability formula_2, moves there as a negative customer with probability formula_3, or departs the network with probability formula_4. A queue in such a network is known as a G-queue. Stationary distribution. Define the utilization at each node, formula_5 where the formula_6 for formula_7 satisfy the (non-linear) traffic equations of the network, equations (1) and (2). Then writing ("n"1, ... ,"n"m) for the state of the network (with queue length "n""i" at node "i"), if a unique non-negative solution formula_8 exists to the above equations (1) and (2) such that "ρ""i" &lt; 1 for all "i", then the stationary probability distribution π exists and is given by formula_9 Proof. It is sufficient to show that formula_10 satisfies the global balance equations which, quite differently from Jackson networks, are non-linear. We note that the model also allows for multiple classes. 
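The utilizations formula_6 solve a non-linear fixed-point problem. In Gelenbe's formulation (this specific form is recalled from the G-network literature, not from the text above) the traffic equations read λ⁺ᵢ = Λᵢ + Σⱼ ρⱼ μⱼ p⁺ⱼᵢ and λ⁻ᵢ = λᵢ + Σⱼ ρⱼ μⱼ p⁻ⱼᵢ, with ρᵢ = λ⁺ᵢ/(μᵢ + λ⁻ᵢ). A plain-Python sketch of solving them by fixed-point iteration for a two-queue example (parameters illustrative):

```python
# Sketch: fixed-point iteration for the G-network traffic equations
#   lam_plus[i]  = Lambda[i] + sum_j rho[j]*mu[j]*p_plus[j][i]
#   lam_minus[i] = lam[i]    + sum_j rho[j]*mu[j]*p_minus[j][i]
#   rho[i]       = lam_plus[i] / (mu[i] + lam_minus[i])
# (form taken from the G-network literature; parameters are illustrative)

def solve_utilizations(Lambda, lam, mu, p_plus, p_minus, iters=200):
    m = len(mu)
    rho = [0.0] * m
    for _ in range(iters):
        lam_plus = [Lambda[i] + sum(rho[j] * mu[j] * p_plus[j][i] for j in range(m))
                    for i in range(m)]
        lam_minus = [lam[i] + sum(rho[j] * mu[j] * p_minus[j][i] for j in range(m))
                     for i in range(m)]
        rho = [lam_plus[i] / (mu[i] + lam_minus[i]) for i in range(m)]
    return rho

# Two-queue example: queue 0 feeds queue 1 with positive customers only.
Lambda = [1.0, 0.0]       # external positive arrival rates
lam = [0.0, 0.5]          # external negative arrival rates
mu = [2.0, 3.0]           # service rates
p_plus = [[0.0, 1.0], [0.0, 0.0]]   # routing of positive customers
p_minus = [[0.0, 0.0], [0.0, 0.0]]  # no internal negative routing

rho = solve_utilizations(Lambda, lam, mu, p_plus, p_minus)
print(rho)  # rho[0] = 1/2; rho[1] = rho[0]*mu[0] / (mu[1] + 0.5) = 2/7
```

With no negative customers at all, the same code reduces to ordinary Jackson-style traffic flows.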
G-networks have been used in a wide range of applications, including to represent Gene Regulatory Networks, the mix of control and payload in packet networks, neural networks, and the representation of colour images and medical images such as Magnetic Resonance Images. Response time distribution. The response time is the length of time a customer spends in the system. The response time distribution for a single G-queue is known where customers are served using a FCFS discipline at rate "μ", with positive arrivals at rate "λ"+ and negative arrivals at rate "λ"− which kill customers from the end of the queue. The Laplace transform of response time distribution in this situation is formula_11 where "λ" = "λ"+ + "λ"− and "ρ" = "λ"+/("λ"− + "μ"), requiring "ρ" &lt; 1 for stability. The response time for a tandem pair of G-queues (where customers who finish service at the first node immediately move to the second, then leave the network) is also known, and it is thought extensions to larger networks will be intractable. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\scriptstyle{\\Lambda_i}" }, { "math_id": 1, "text": "\\scriptstyle{\\lambda_i}" }, { "math_id": 2, "text": "\\scriptstyle{p_{ij}^{+}}" }, { "math_id": 3, "text": "\\scriptstyle{p_{ij}^{-}}" }, { "math_id": 4, "text": "\\scriptstyle{d_i}" }, { "math_id": 5, "text": "\\rho_i = \\frac{\\lambda^+_i}{\\mu_i + \\lambda^-_i}" }, { "math_id": 6, "text": "\\scriptstyle{\\lambda^+_i, \\lambda^-_i}" }, { "math_id": 7, "text": "\\scriptstyle{i=1,\\ldots,m}" }, { "math_id": 8, "text": "\\scriptstyle{(\\lambda^+_i,\\lambda^-_i)}" }, { "math_id": 9, "text": "\\pi(n_1,n_2,\\ldots,n_m) = \\prod_{i=1}^m (1 - \\rho_i)\\rho_i^{n_i}." }, { "math_id": 10, "text": "\\pi" }, { "math_id": 11, "text": "W^\\ast(s) = \\frac{\\mu(1-\\rho)}{\\lambda^+}\\frac{s+\\lambda+\\mu(1-\\rho)-\\sqrt{[s+\\lambda+\\mu(1-\\rho)]^2-4\\lambda^+\\lambda^-}}{\\lambda^--\\lambda^+-\\mu(1-\\rho)-s+\\sqrt{[s+\\lambda+\\mu(1-\\rho)]^2-4\\lambda^+\\lambda^-}}" } ]
https://en.wikipedia.org/wiki?curid=10046651
1004679
Needleman–Wunsch algorithm
Method for aligning biological sequences The Needleman–Wunsch algorithm is an algorithm used in bioinformatics to align protein or nucleotide sequences. It was one of the first applications of dynamic programming to compare biological sequences. The algorithm was developed by Saul B. Needleman and Christian D. Wunsch and published in 1970. The algorithm essentially divides a large problem (e.g. the full sequence) into a series of smaller problems, and it uses the solutions to the smaller problems to find an optimal solution to the larger problem. It is also sometimes referred to as the optimal matching algorithm and the global alignment technique. The Needleman–Wunsch algorithm is still widely used for optimal global alignment, particularly when the quality of the global alignment is of the utmost importance. The algorithm assigns a score to every possible alignment, and the purpose of the algorithm is to find all possible alignments having the highest score. Introduction. This algorithm can be used for any two strings. This guide will use two small DNA sequences as examples as shown in Figure 1: GCATGCG GATTACA Constructing the grid. First construct a grid such as one shown in Figure 1 above. Start the first string in the top of the third column and start the other string at the start of the third row. Fill out the rest of the column and row headers as in Figure 1. There should be no numbers in the grid yet. Choosing a scoring system. Next, decide how to score each individual pair of letters. Using the example above, one possible alignment candidate might be: 12345678 &lt;samp class="dna-sequence" style="word-break: break-all;"&gt;GCATG-CG&lt;/samp&gt; &lt;samp class="dna-sequence" style="word-break: break-all;"&gt;G-ATTACA&lt;/samp&gt; The letters may match, mismatch, or be matched to a gap (a deletion or insertion (indel)): Each of these scenarios is assigned a score and the sum of the scores of all the pairings is the score of the whole alignment candidate. 
Different systems exist for assigning scores; some have been outlined in the Scoring systems section below. For now, the system used by Needleman and Wunsch will be used: For the example above, the score of the alignment would be 0: GCATG-CG G-ATTACA +−++−−+− → 1*4 + (−1)*4 = 0 Filling in the table. Start with a zero in the first row, first column (not including the cells containing nucleotides). Move through the cells row by row, calculating the score for each cell. The score is calculated by comparing the scores of the cells neighboring to the left, top or top-left (diagonal) of the cell and adding the appropriate score for match, mismatch or indel. Take the maximum of the candidate scores for each of the three possibilities: The resulting score for the cell is the highest of the three candidate scores. Since there are no 'top' or 'top-left' cells for the first row, only the existing cell to the left can be used to calculate the score of each cell. Hence −1 is added for each shift to the right, as this represents an indel from the previous score. This results in the first row being 0, −1, −2, −3, −4, −5, −6, −7. The same applies to the first column, as only the existing score above each cell can be used. Thus the resulting table is: The first case with existing scores in all 3 directions is the intersection of our first letters (in this case G and G). The surrounding cells are below: This cell has three possible candidate sums: The highest candidate is 1 and is entered into the cell: The cell which gave the highest candidate score must also be recorded. In the completed diagram in figure 1 above, this is represented as an arrow from the cell in row and column 2 to the cell in row and column 1.
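The candidate-scoring scheme just described (match = 1, mismatch = −1, indel = −1) can be sketched in a few lines of Python; the function name and defaults are illustrative, not part of the original description:

```python
def alignment_score(a, b, match=1, mismatch=-1, indel=-1):
    """Score a gapped alignment candidate pair by pair, using '-' for gaps."""
    assert len(a) == len(b), "aligned strings must have equal length"
    score = 0
    for x, y in zip(a, b):
        if x == "-" or y == "-":
            score += indel       # gap position (insertion/deletion)
        elif x == y:
            score += match       # identical letters
        else:
            score += mismatch    # differing letters
    return score

# The example candidate from the text: four matches, four −1 positions.
print(alignment_score("GCATG-CG", "G-ATTACA"))  # 0
```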
In the next example, the diagonal step for both X and Y represents a mismatch: X: Y: For both X and Y, the highest score is zero: The highest candidate score may be reached by two of the neighboring cells: In this case, all directions reaching the highest candidate score must be noted as possible origin cells in the finished diagram in figure 1, e.g. in the cell in row and column 6. Filling in the table in this manner gives the scores of all possible alignment candidates; the score in the cell on the bottom right represents the alignment score for the best alignment. Tracing arrows back to origin. Mark a path from the cell on the bottom right back to the cell on the top left by following the direction of the arrows. From this path, the sequence is constructed by these rules: Following these rules, the steps for one possible alignment candidate in figure 1 are: G → CG → GCG → -GCG → T-GCG → AT-GCG → CAT-GCG → GCAT-GCG A → CA → ACA → TACA → TTACA → ATTACA → -ATTACA → G-ATTACA (branch) → TGCG → -TGCG → ... → TACA → TTACA → ... Scoring systems. Basic scoring schemes. The simplest scoring schemes simply give a value for each match, mismatch and indel. The step-by-step guide above uses match = 1, mismatch = −1, indel = −1. Thus the lower the alignment score, the larger the edit distance; for this scoring system one wants a high score. Another scoring system might be: For this system the alignment score will represent the edit distance between the two strings. Different scoring systems can be devised for different situations; for example, if gaps are considered very bad for your alignment you may use a scoring system that penalises gaps heavily, such as: Similarity matrix. More complicated scoring systems attribute values not only for the type of alteration, but also for the letters that are involved. For example, a match between A and A may be given 1, but a match between T and T may be given 4.
Here (assuming the first scoring system) more importance is given to the Ts matching than the As, i.e. the Ts matching is assumed to be more significant to the alignment. This weighting based on letters also applies to mismatches. In order to represent all the possible combinations of letters and their resulting scores, a similarity matrix is used. The similarity matrix for the most basic system is represented as: Each score represents a switch from one of the letters the cell matches to the other. Hence this represents all possible matches and mismatches (for an alphabet of ACGT). Note that all the matches go along the diagonal; also, not all of the table needs to be filled, only this triangle, because the scores are symmetric (score for A → C = score for C → A). If implementing the T-T = 4 rule from above the following similarity matrix is produced: Different scoring matrices have been statistically constructed which give weight to different actions appropriate to a particular scenario. Having weighted scoring matrices is particularly important in protein sequence alignment due to the varying frequency of the different amino acids. There are two broad families of scoring matrices, each with further alterations for specific scenarios: Gap penalty. When aligning sequences there are often gaps (i.e. indels), sometimes large ones. Biologically, a large gap is more likely to occur as one large deletion as opposed to multiple single deletions. Hence two small indels should have a worse score than one large one. The simple and common way to do this is via a large gap-start score for a new indel and a smaller gap-extension score for every letter which extends the indel. For example, new-indel may cost −5 and extend-indel may cost −1. In this way an alignment such as: GAAAAAAT G--A-A-T which has multiple equally scoring alignments, some of them containing several small gaps, will now align as: GAAAAAAT GAA----T or any alignment with a 4-long gap in preference over multiple small gaps.
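As a sketch of how such a symmetric similarity matrix can be stored and queried, only one triangle is kept here and the lookup sorts the letter pair (the dictionary name is illustrative; the values follow the basic scheme above with the T-T = 4 rule applied):

```python
# Upper triangle of an ACGT similarity matrix; symmetry makes the
# other half redundant (score for A->C equals score for C->A).
SIM = {
    ("A", "A"): 1, ("C", "C"): 1, ("G", "G"): 1, ("T", "T"): 4,
    ("A", "C"): -1, ("A", "G"): -1, ("A", "T"): -1,
    ("C", "G"): -1, ("C", "T"): -1, ("G", "T"): -1,
}

def S(a, b):
    """Look up the similarity score for a pair of bases, in either order."""
    return SIM[tuple(sorted((a, b)))]

print(S("T", "T"), S("C", "A"))  # 4 -1
```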
Advanced presentation of algorithm. Scores for aligned characters are specified by a similarity matrix. Here, "S"("a", "b") is the similarity of characters "a" and "b". It uses a linear gap penalty, here called d. For example, if the similarity matrix was then the alignment: AGACTAGTTAC CGA---GACGT with a gap penalty of −5, would have the following score: "S"(A,C) + "S"(G,G) + "S"(A,A) + (3 × "d") + "S"(G,G) + "S"(T,A) + "S"(T,C) + "S"(A,G) + "S"(C,T) = −3 + 7 + 10 − (3 × 5) + 7 + (−4) + 0 + (−1) + 0 = 1 To find the alignment with the highest score, a two-dimensional array (or matrix) "F" is allocated. The entry in row "i" and column "j" is denoted here by formula_1. There is one row for each character in sequence "A", and one column for each character in sequence "B". Thus, if aligning sequences of sizes "n" and "m", the amount of memory used is in formula_2. Hirschberg's algorithm only holds a subset of the array in memory and uses formula_3 space, but is otherwise similar to Needleman–Wunsch (and still requires formula_2 time). As the algorithm progresses, the formula_1 will be assigned to be the optimal score for the alignment of the first formula_4 characters in "A" and the first formula_5 characters in "B". The principle of optimality is then applied as follows: formula_6 formula_7 formula_8 The pseudo-code for the algorithm to compute the F matrix therefore looks like this:

d ← Gap penalty score
for i = 0 to length(A)
    F(i,0) ← d * i
for j = 0 to length(B)
    F(0,j) ← d * j
for i = 1 to length(A)
    for j = 1 to length(B)
        Match ← F(i−1, j−1) + S(Ai, Bj)
        Delete ← F(i−1, j) + d
        Insert ← F(i, j−1) + d
        F(i,j) ← max(Match, Insert, Delete)

Once the "F" matrix is computed, the entry formula_9 gives the maximum score among all possible alignments. To compute an alignment that actually gives this score, you start from the bottom right cell, and compare the value with the three possible sources (Match, Insert, and Delete above) to see which it came from.
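A minimal Python translation of the matrix-fill pseudo-code above, together with the traceback described next, might look like the following sketch (a simple match/mismatch scorer stands in for a full similarity matrix, and the function names are illustrative):

```python
def needleman_wunsch(A, B, S, d):
    """Global alignment. S(a, b) scores a character pair; d is the
    (negative) linear gap penalty per gap position."""
    n, m = len(A), len(B)
    # F[i][j] = best score aligning the first i chars of A with first j of B.
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = d * i
    for j in range(1, m + 1):
        F[0][j] = d * j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            F[i][j] = max(F[i-1][j-1] + S(A[i-1], B[j-1]),  # match/mismatch
                          F[i-1][j] + d,                    # deletion (gap in B)
                          F[i][j-1] + d)                    # insertion (gap in A)
    # Trace back from the bottom-right cell to recover one optimal alignment.
    alignA, alignB, i, j = "", "", n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and F[i][j] == F[i-1][j-1] + S(A[i-1], B[j-1]):
            alignA, alignB, i, j = A[i-1] + alignA, B[j-1] + alignB, i - 1, j - 1
        elif i > 0 and F[i][j] == F[i-1][j] + d:
            alignA, alignB, i = A[i-1] + alignA, "-" + alignB, i - 1
        else:
            alignA, alignB, j = "-" + alignA, B[j-1] + alignB, j - 1
    return F[n][m], alignA, alignB

match = lambda a, b: 1 if a == b else -1
score, a1, a2 = needleman_wunsch("GCATGCG", "GATTACA", match, -1)
print(score)  # 0, the optimal score for the worked example above
```

When several sources tie, this sketch keeps only one of them (preferring the diagonal), so it reports a single optimal alignment rather than all of them.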
If Match, then formula_10 and formula_11 are aligned, if Delete, then formula_10 is aligned with a gap, and if Insert, then formula_11 is aligned with a gap. (In general, more than one choice may have the same value, leading to alternative optimal alignments.)

AlignmentA ← ""
AlignmentB ← ""
i ← length(A)
j ← length(B)
while (i > 0 or j > 0)
    if (i > 0 and j > 0 and F(i, j) == F(i−1, j−1) + S(Ai, Bj))
        AlignmentA ← Ai + AlignmentA
        AlignmentB ← Bj + AlignmentB
        i ← i − 1
        j ← j − 1
    else if (i > 0 and F(i, j) == F(i−1, j) + d)
        AlignmentA ← Ai + AlignmentA
        AlignmentB ← "−" + AlignmentB
        i ← i − 1
    else
        AlignmentA ← "−" + AlignmentA
        AlignmentB ← Bj + AlignmentB
        j ← j − 1

Complexity. Computing the score formula_1 for each cell in the table is an formula_12 operation. Thus the time complexity of the algorithm for two sequences of length formula_13 and formula_14 is formula_0. It has been shown that it is possible to improve the running time to formula_15 using the Method of Four Russians. Since the algorithm fills an formula_16 table the space complexity is formula_17 Historical notes and algorithm development. The original purpose of the algorithm described by Needleman and Wunsch was to find similarities in the amino acid sequences of two proteins. Needleman and Wunsch describe their algorithm explicitly for the case when the alignment is penalized solely by the matches and mismatches, and gaps have no penalty ("d"=0). The original publication from 1970 suggests the recursion formula_18. The corresponding dynamic programming algorithm takes cubic time. The paper also points out that the recursion can accommodate arbitrary gap penalization formulas: A penalty factor, a number subtracted for every gap made, may be assessed as a barrier to allowing the gap. The penalty factor could be a function of the size and/or direction of the gap.
[page 444] A better dynamic programming algorithm with quadratic running time for the same problem (no gap penalty) was introduced later by David Sankoff in 1972. Similar quadratic-time algorithms were discovered independently by T. K. Vintsyuk in 1968 for speech processing ("time warping"), and by Robert A. Wagner and Michael J. Fischer in 1974 for string matching. Needleman and Wunsch formulated their problem in terms of maximizing similarity. Another possibility is to minimize the edit distance between sequences, introduced by Vladimir Levenshtein. Peter H. Sellers showed in 1974 that the two problems are equivalent. The Needleman–Wunsch algorithm is still widely used for optimal global alignment, particularly when the quality of the global alignment is of the utmost importance. However, the algorithm is expensive with respect to time and space, proportional to the product of the length of two sequences and hence is not suitable for long sequences. Recent development has focused on improving the time and space cost of the algorithm while maintaining quality. For example, in 2013, a Fast Optimal Global Sequence Alignment Algorithm (FOGSAA), suggested alignment of nucleotide/protein sequences faster than other optimal global alignment methods, including the Needleman–Wunsch algorithm. The paper claims that when compared to the Needleman–Wunsch algorithm, FOGSAA achieves a time gain of 70–90% for highly similar nucleotide sequences (with &gt; 80% similarity), and 54–70% for sequences having 30–80% similarity. Applications outside bioinformatics. Computer stereo vision. Stereo matching is an essential step in the process of 3D reconstruction from a pair of stereo images. When images have been rectified, an analogy can be drawn between aligning nucleotide and protein sequences and matching pixels belonging to scan lines, since both tasks aim at establishing optimal correspondence between two strings of characters. 
Although in many applications image rectification can be performed, e.g. by camera resectioning or calibration, it is sometimes impossible or impractical, since the computational cost of accurate rectification models prohibits their usage in real-time applications. Moreover, none of these models is suitable when a camera lens displays unexpected distortions, such as those generated by raindrops, weatherproof covers or dust. By extending the Needleman–Wunsch algorithm, a line in the 'left' image can be associated to a curve in the 'right' image by finding the alignment with the highest score in a three-dimensional array (or matrix). Experiments demonstrated that such an extension allows dense pixel matching between unrectified or distorted images. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "O(mn)" }, { "math_id": 1, "text": "F_{ij}" }, { "math_id": 2, "text": "O(nm)" }, { "math_id": 3, "text": "\\Theta(\\min \\{n,m\\})" }, { "math_id": 4, "text": "i=0,\\dotsc,n" }, { "math_id": 5, "text": "j=0,\\dotsc,m" }, { "math_id": 6, "text": "F_{0j} = d*j" }, { "math_id": 7, "text": "F_{i0} = d*i" }, { "math_id": 8, "text": "F_{ij} = \\max(F_{i-1,j-1} + S(A_{i}, B_{j}), \\; F_{i,j-1} + d, \\; F_{i-1,j} + d)" }, { "math_id": 9, "text": "F_{nm}" }, { "math_id": 10, "text": "A_i" }, { "math_id": 11, "text": "B_j" }, { "math_id": 12, "text": "O(1)" }, { "math_id": 13, "text": "n" }, { "math_id": 14, "text": "m" }, { "math_id": 15, "text": "O(mn/ \\log n)" }, { "math_id": 16, "text": "n \\times m" }, { "math_id": 17, "text": "O(mn)." }, { "math_id": 18, "text": "F_{ij} = \\max_{h<i,k<j} \\{ F_{h,j-1}+S(A_{i},B_{j}), F_{i-1,k}+S(A_i,B_j) \\}" } ]
https://en.wikipedia.org/wiki?curid=1004679
1004743
Similarity measure
Real-valued function that quantifies similarity between two objects In statistics and related fields, a similarity measure or similarity function or similarity metric is a real-valued function that quantifies the similarity between two objects. Although no single definition of a similarity exists, usually such measures are in some sense the inverse of distance metrics: they take on large values for similar objects and either zero or a negative value for very dissimilar objects. In broader terms, though, a similarity function may also satisfy metric axioms. Cosine similarity is a commonly used similarity measure for real-valued vectors, used in (among other fields) information retrieval to score the similarity of documents in the vector space model. In machine learning, common kernel functions such as the RBF kernel can be viewed as similarity functions. Use of different similarity measure formulas. Different types of similarity measures exist for various types of objects, depending on the objects being compared. For each type of object there are various similarity measurement formulas. Similarity between two data points. There are many options available when it comes to finding similarity between two data points, some of which are a combination of other similarity methods. Some of the methods for similarity measures between two data points include Euclidean distance, Manhattan distance, Minkowski distance, and Chebyshev distance. The Euclidean distance formula is used to find the distance between two points on a plane, which is visualized in the image below. Manhattan distance is commonly used in GPS applications, as it can be used to find the shortest route between two addresses. Generalizing the Euclidean and Manhattan distance formulas yields the Minkowski distance formula, which can be used in a wide variety of applications.
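The generalization just mentioned can be made concrete with a small sketch: the Minkowski distance of order r reduces to the Manhattan distance at r = 1 and to the Euclidean distance at r = 2 (the function name is illustrative):

```python
def minkowski(p, q, r):
    """Minkowski distance of order r between two points.
    r=1 gives the Manhattan distance, r=2 the Euclidean distance."""
    return sum(abs(a - b) ** r for a, b in zip(p, q)) ** (1 / r)

p, q = (0, 0), (3, 4)
print(minkowski(p, q, 1))  # 7.0 (Manhattan: |3| + |4|)
print(minkowski(p, q, 2))  # 5.0 (Euclidean: sqrt(9 + 16))
```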
Similarity between strings. For comparing strings, there are various measures of string similarity that can be used. Some of these methods include edit distance, Levenshtein distance, Hamming distance, and Jaro distance. The best-fit formula is dependent on the requirements of the application. For example, edit distance is frequently used for natural language processing applications and features, such as spell-checking. Jaro distance is commonly used in record linkage to compare first and last names to other sources. Similarity between two probability distributions. Typical measures of similarity for probability distributions are the Bhattacharyya distance and the Hellinger distance. Both provide a quantification of similarity for two probability distributions on the same domain, and they are mathematically closely linked. The Bhattacharyya distance does not fulfill the triangle inequality, meaning it does not form a metric. The Hellinger distance does form a metric on the space of probability distributions. Similarity between two sets. The Jaccard index formula measures the similarity between two sets based on the number of items that are present in both sets relative to the total number of items. It is commonly used in recommendation systems and social media analysis. The Sørensen–Dice coefficient also compares the number of items in both sets to the total number of items present but the weight for the number of shared items is larger. The Sørensen–Dice coefficient is commonly used in biology applications, measuring the similarity between two sets of genes or species. Similarity between two sequences. When comparing temporal sequences (time series), some similarity measures must additionally account for similarity of two sequences that are not fully aligned. Use in clustering. Clustering or cluster analysis is a data mining technique that is used to discover patterns in data by grouping similar objects together.
It involves partitioning a set of data points into groups or clusters based on their similarities. One of the fundamental aspects of clustering is how to measure similarity between data points. Similarity measures play a crucial role in many clustering techniques, as they are used to determine how closely related two data points are and whether they should be grouped together in the same cluster. A similarity measure can take many different forms depending on the type of data being clustered and the specific problem being solved. One of the most commonly used similarity measures is the Euclidean distance, which is used in many clustering techniques including K-means clustering and Hierarchical clustering. The Euclidean distance is a measure of the straight-line distance between two points in a high-dimensional space. It is calculated as the square root of the sum of the squared differences between the corresponding coordinates of the two points. For example, if we have two data points formula_0 and formula_1, the Euclidean distance between them is formula_2. Another commonly used similarity measure is the Jaccard index or Jaccard similarity, which is used in clustering techniques that work with binary data such as presence/absence data or Boolean data. The Jaccard similarity is particularly useful for clustering techniques that work with text data, where it can be used to identify clusters of similar documents based on their shared features or keywords. It is calculated as the size of the intersection of two sets divided by the size of the union of the two sets: formula_3. Similarities among 162 relevant nuclear profiles are tested using the Jaccard similarity measure (see figure with heatmap). The Jaccard similarity of the nuclear profiles ranges from 0 to 1, with 0 indicating no similarity between the two sets and 1 indicating perfect similarity, with the aim of clustering the most similar nuclear profiles.
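The Jaccard computation for binary presence/absence data maps directly onto Python sets (the profile contents below are made up for illustration):

```python
def jaccard(a, b):
    """Jaccard similarity of two sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

# Two hypothetical presence/absence profiles, expressed as sets of
# feature names (e.g. genomic loci present in a nuclear profile).
p1 = {"locus1", "locus2", "locus3"}
p2 = {"locus2", "locus3", "locus4"}
print(jaccard(p1, p2))  # 0.5 (2 shared items out of 4 total)
```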
Manhattan distance, also known as Taxicab geometry, is a commonly used similarity measure in clustering techniques that work with continuous data. It is a measure of the distance between two data points in a high-dimensional space, calculated as the sum of the absolute differences between the corresponding coordinates of the two points formula_4. When dealing with mixed-type data, including nominal, ordinal, and numerical attributes per object, Gower's distance (or similarity) is a common choice as it can handle different types of variables implicitly. It first computes similarities between the pair of variables in each object, and then combines those similarities to a single weighted average per object-pair. As such, for two objects formula_5 and formula_6 having formula_7 descriptors, the similarity formula_8 is defined as: formula_9 where the formula_10 are non-negative weights and formula_11 is the similarity between the two objects regarding their formula_12-th variable. In spectral clustering, a similarity, or affinity, measure is used to transform data to overcome difficulties related to lack of convexity in the shape of the data distribution. The measure gives rise to an formula_13-sized similarity matrix for a set of n points, where the entry formula_14 in the matrix can be simply the (reciprocal of the) Euclidean distance between formula_5 and formula_6, or it can be a more complex measure of distance such as the Gaussian formula_15. Further modifying this result with network analysis techniques is also common. The choice of similarity measure depends on the type of data being clustered and the specific problem being solved. For example, working with continuous data such as gene expression data, the Euclidean distance or cosine similarity may be appropriate. If working with binary data such as the presence of a genomic locus in a nuclear profile, the Jaccard index may be more appropriate.
Lastly, working with data that is arranged in a grid or lattice structure, such as image or signal processing data, the Manhattan distance is particularly useful for the clustering. Use in recommender systems. Similarity measures are used to develop recommender systems. It observes a user's perception and liking of multiple items. On recommender systems, the method is using a distance calculation such as Euclidean distance or cosine similarity to generate a similarity matrix with values representing the similarity of any pair of targets. Then, by analyzing and comparing the values in the matrix, it is possible to match two targets to a user's preference or link users based on their marks. In this system, it is relevant to observe the value itself and the absolute distance between two values. Gathering this data can indicate a mark's likeliness to a user as well as how mutually closely two marks are either rejected or accepted. It is possible then to recommend to a user targets with high similarity to the user's likes. Recommender systems are observed in multiple online entertainment platforms, in social media and streaming websites. The logic for the construction of these systems is based on similarity measures. Use in sequence alignment. Similarity matrices are used in sequence alignment. Higher scores are given to more-similar characters, and lower or negative scores for dissimilar characters. Nucleotide similarity matrices are used to align nucleic acid sequences. Because there are only four nucleotides commonly found in DNA (Adenine (A), Cytosine (C), Guanine (G) and Thymine (T)), nucleotide similarity matrices are much simpler than protein similarity matrices. For example, a simple matrix will assign identical bases a score of +1 and non-identical bases a score of −1.
A more complicated matrix would give a higher score to transitions (changes from a pyrimidine such as C or T to another pyrimidine, or from a purine such as A or G to another purine) than to transversions (from a pyrimidine to a purine or vice versa). The match/mismatch ratio of the matrix sets the target evolutionary distance. The +1/−3 DNA matrix used by BLASTN is best suited for finding matches between sequences that are 99% identical; a +1/−1 (or +4/−4) matrix is much more suited to sequences with about 70% similarity. Matrices for lower similarity sequences require longer sequence alignments. Amino acid similarity matrices are more complicated, because there are 20 amino acids coded for by the genetic code, and so a larger number of possible substitutions. Therefore, the similarity matrix for amino acids contains 400 entries (although it is usually symmetric). The first approach scored all amino acid changes equally. A later refinement was to determine amino acid similarities based on how many base changes were required to change a codon to code for that amino acid. This model is better, but it doesn't take into account the selective pressure of amino acid changes. Better models took into account the chemical properties of amino acids. One approach has been to empirically generate the similarity matrices. The Dayhoff method used phylogenetic trees and sequences taken from species on the tree. This approach has given rise to the PAM series of matrices. PAM matrices are labelled based on how many nucleotide changes have occurred, per 100 amino acids. While the PAM matrices benefit from having a well understood evolutionary model, they are most useful at short evolutionary distances (PAM10–PAM120). At long evolutionary distances, for example PAM250 or 20% identity, it has been shown that the BLOSUM matrices are much more effective. The BLOSUM series were generated by comparing a number of divergent sequences. 
The BLOSUM series are labeled based on how much entropy remains unmutated between all sequences, so a lower BLOSUM number corresponds to a higher PAM number. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " (x_1,y_1)" }, { "math_id": 1, "text": " (x_2,y_2)" }, { "math_id": 2, "text": " d = \\sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}" }, { "math_id": 3, "text": " J(A,B)={ |A\\cap B|\\over |A\\cup B|}" }, { "math_id": 4, "text": " \\left\\vert x_1 - x_2 \\right\\vert +\\left\\vert y_1 -y_2 \\right\\vert" }, { "math_id": 5, "text": "i" }, { "math_id": 6, "text": "j" }, { "math_id": 7, "text": "p" }, { "math_id": 8, "text": "S" }, { "math_id": 9, "text": "S_{ij} = \\frac{\\sum_{k=1}^pw_{ijk}s_{ijk}}{\\sum_{k=1}^pw_{ijk}}," }, { "math_id": 10, "text": "w_{ijk}" }, { "math_id": 11, "text": "s_{ijk}" }, { "math_id": 12, "text": "k" }, { "math_id": 13, "text": "(n, n)" }, { "math_id": 14, "text": "(i,j)" }, { "math_id": 15, "text": " e^{-\\|s_1 - s_2\\|^2/2\\sigma^2}" } ]
https://en.wikipedia.org/wiki?curid=1004743
1004764
Gap penalty
A Gap penalty is a method of scoring alignments of two or more sequences. When aligning sequences, introducing gaps in the sequences can allow an alignment algorithm to match more terms than a gap-less alignment can. However, minimizing gaps in an alignment is important to create a useful alignment. Too many gaps can cause an alignment to become meaningless. Gap penalties are used to adjust alignment scores based on the number and length of gaps. The five main types of gap penalties are constant, linear, affine, convex, and profile-based. Bioinformatics applications. Global alignment. A global alignment performs an end-to-end alignment of the query sequence with the reference sequence. Ideally, this alignment technique is most suitable for closely related sequences of similar lengths. The Needleman-Wunsch algorithm is a dynamic programming technique used to conduct global alignment. Essentially, the algorithm divides the problem into a set of sub-problems, then uses the results of the sub-problems to reconstruct a solution to the original query. Semi-global alignment. Semi-global alignment is used to find a particular match within a large sequence, for example when seeking promoters within a DNA sequence. Unlike global alignment, it does not penalize end gaps in one or both sequences. If end gaps are penalized in sequence 1 but not in sequence 2, it produces an alignment that contains sequence 2 within sequence 1. Local alignment. A local sequence alignment matches a contiguous sub-section of one sequence with a contiguous sub-section of another. The Smith-Waterman algorithm is motivated by giving scores for matches and mismatches. Matches increase the overall score of an alignment whereas mismatches decrease the score. A good alignment then has a positive score and a poor alignment has a negative score.
The local algorithm finds an alignment with the highest score by considering only alignments that score positives and picking the best one from those. The algorithm is a dynamic programming algorithm. When comparing proteins, one uses a similarity matrix which assigns a score to each possible residue pair. The score should be positive for similar residues and negative for dissimilar residue pairs. Gaps are usually penalized using a linear gap function that assigns an initial penalty for a gap opening, and an additional penalty for gap extensions, increasing the gap length. Scoring matrix. Substitution matrices such as BLOSUM are used for sequence alignment of proteins. A substitution matrix assigns a score for aligning any possible pair of residues. In general, different substitution matrices are tailored to detecting similarities among sequences that are diverged by differing degrees. A single matrix may be reasonably efficient over a relatively broad range of evolutionary change. The BLOSUM-62 matrix is one of the best substitution matrices for detecting weak protein similarities. BLOSUM matrices with high numbers are designed for comparing closely related sequences, while those with low numbers are designed for comparing distantly related sequences. For example, BLOSUM-80 is used for alignments that are more similar in sequence, and BLOSUM-45 is used for alignments that have diverged from each other. For particularly long and weak alignments, the BLOSUM-45 matrix may provide the best results. Short alignments are more easily detected using a matrix with a higher "relative entropy" than that of BLOSUM-62. The BLOSUM series does not include any matrices with relative entropies suitable for the shortest queries. Indels. During DNA replication, the cellular replication machinery is prone to making two types of errors while duplicating the DNA. These two replication errors are insertions and deletions of single DNA bases from the DNA strand (indels).
Indels can have severe biological consequences by causing mutations in the DNA strand that could result in the inactivation or overactivation of the target protein. For example, if a one or two nucleotide indel occurs in a coding sequence the result will be a shift in the reading frame, or a frameshift mutation that may render the protein inactive. The biological consequences of indels are often deleterious and are frequently associated with pathologies such as cancer. However, not all indels are frameshift mutations. If indels occur in trinucleotides, the result is an extension of the protein sequence that may also have implications on protein function. Types. Constant. This is the simplest type of gap penalty: a fixed negative score is given to every gap, regardless of its length. This encourages the algorithm to make fewer, larger gaps, leaving larger contiguous sections. ATTGACCTGA AT---CCTGA Aligning two short DNA sequences, with '-' depicting a gap of one base pair. If each match was worth 1 point and the whole gap −1, the total score would be 7 − 1 = 6. Linear. Compared to the constant gap penalty, the linear gap penalty takes into account the length (L) of each insertion/deletion in the gap. Therefore, if the penalty for each inserted/deleted element is B and the length of the gap L, the total gap penalty would be the product of the two, BL. This method favors shorter gaps, with the total score decreasing with each additional gap position. ATTGACCTGA AT---CCTGA Unlike the constant gap penalty, the size of the gap is considered. With a match scoring 1 and each gap position −1, the score here is 7 − 3 = 4. Affine. The most widely used gap penalty function is the affine gap penalty. The affine gap penalty combines the components in both the constant and linear gap penalty, taking the form formula_0. This introduces new terms: A is known as the gap opening penalty, B the gap extension penalty and L the length of the gap.
Gap opening refers to the cost required to open a gap of any length, and gap extension the cost to extend the length of an existing gap by 1. Often it is unclear as to what the values A and B should be, as they differ according to purpose. In general, if the interest is to find closely related matches (e.g. removal of vector sequence during genome sequencing), a higher gap penalty should be used to reduce gap openings. On the other hand, the gap penalty should be lowered when interested in finding a more distant match. The relationship between A and B also has an effect on gap size. If the size of the gap is important, a small A and large B (more costly to extend a gap) is used and vice versa. Only the ratio A/B is important, as multiplying both by the same positive constant formula_1 will scale all penalties by formula_1: formula_2 which does not change the relative penalty between different alignments. Convex. Using the affine gap penalty requires the assigning of fixed penalty values for both opening and extending a gap. This can be too rigid for use in a biological context. The logarithmic gap takes the form formula_3 and was proposed as studies had shown the distribution of indel sizes obeys a power law. Another proposed issue with the use of affine gaps is the favoritism of aligning sequences with shorter gaps. The logarithmic gap penalty was invented to modify the affine gap so that long gaps are desirable. However, in contrast to this, it has been found that using logarithmic models produced poor alignments when compared to affine models. Profile-based. Profile–profile alignment algorithms are powerful tools for detecting protein homology relationships with improved alignment accuracy. Profile-profile alignments are based on the statistical indel frequency profiles from multiple sequence alignments generated by PSI-BLAST searches.
Rather than using substitution matrices to measure the similarity of amino acid pairs, profile–profile alignment methods require a profile-based scoring function to measure the similarity of profile vector pairs. Profile-profile alignments employ gap penalty functions. The gap information is usually used in the form of indel frequency profiles, which is more specific to the sequences to be aligned. ClustalW and MAFFT adopted this kind of gap penalty determination for their multiple sequence alignments. Alignment accuracies can be improved using this model, especially for proteins with low sequence identity. Some profile–profile alignment algorithms also use secondary structure information as one term in their scoring functions, which improves alignment accuracy. Comparing time complexities. The use of alignment in computational biology often involves sequences of varying lengths. It is important to pick a model that runs efficiently at a known input size. The time taken to run the algorithm is known as the time complexity. Challenges. There are a few challenges when it comes to working with gaps. When working with popular algorithms there seems to be little theoretical basis for the form of the gap penalty functions. Consequently, for any alignment situation gap placement must be empirically determined. Also, pairwise alignment gap penalties, such as the affine gap penalty, are often implemented independently of the amino acid types in the inserted or deleted fragment or at the broken ends, despite evidence that specific residue types are preferred in gap regions. Finally, alignment of sequences implies alignment of the corresponding structures, but the relationships between structural features of gaps in proteins and their corresponding sequences are only imperfectly known. Because of this, incorporating structural information into gap penalties is difficult. Some algorithms use predicted or actual structural information to bias the placement of gaps.
However, only a minority of sequences have known structures, and most alignment problems involve sequences of unknown secondary and tertiary structure.
[ { "math_id": 0, "text": "A+B\\cdot (L-1)" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "kA+kB (L-1) = k(A+B(L-1))" }, { "math_id": 3, "text": "G(L)=A+C\\ln L" } ]
https://en.wikipedia.org/wiki?curid=1004764
10048386
Milner Baily Schaefer
Milner Baily ("Benny") Schaefer (1912 in Cheyenne, Wyoming – 1970 in San Diego, California) is notable for his work on the population dynamics of fisheries. Career. Schaefer worked as a biologist at the Washington State Fisheries Department, and from 1937 to 1942 as a scientist for the International Pacific Salmon Fisheries Commission in New Westminster, British Columbia, Canada. In 1946 he joined the United States Fish and Wildlife Service and held various posts at the Fishery Biology Headquarters at Stanford University. Later, he worked at the Pacific Oceanic Fisheries Investigations Laboratory in Honolulu, Hawaii and completed a fisheries doctorate at the University of Washington in 1950. In 1951 Schaefer became Director of Investigations at the Inter-American Tropical Tuna Commission (IATTC). IATTC established its first headquarters at the Scripps Institution of Oceanography. Schaefer short-term catch equation. During his period at the IATTC, Schaefer worked on the development of theories of fishery dynamics and published a fishery equilibrium model based on the Verhulst model and an assumption of a bilinear catch equation, often referred to as the Schaefer short-term catch equation: formula_0 where the variables are: "H", the catch (harvest) over a given period of time (e.g. a year); "E", the fishing effort over the given period; "X", the fish stock biomass at the beginning of the period (or the average biomass); and the parameter "q", representing the catchability of the stock. Assuming the catch to equal the net natural growth in the population over the same period (formula_1), the equilibrium catch is a function of the long-term fishing effort "E": formula_2 "r" and "K" being biological parameters representing the intrinsic growth rate and the natural equilibrium biomass respectively.
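The equilibrium yield curve can be sketched numerically. The parameter values below are arbitrary illustrations, not estimates from Schaefer's data; the maximum of the parabola follows from elementary calculus.

```python
# Schaefer model: short-term catch H = qEX, equilibrium yield
# H(E) = qKE(1 - qE/r).  Illustrative parameter values only.
r, K, q = 0.5, 1000.0, 0.01   # growth rate, equilibrium biomass, catchability

def short_term_catch(E, X):
    return q * E * X           # the bilinear catch equation

def equilibrium_yield(E):
    return q * K * E * (1 - q * E / r)

# The yield curve is a downward parabola in E; its maximum (the
# maximum sustainable yield) occurs at E = r/(2q), where H = rK/4.
E_msy = r / (2 * q)
print(E_msy, equilibrium_yield(E_msy), r * K / 4)
```

With these numbers the model predicts that effort beyond E = r/(2q) reduces the sustainable catch, the central policy insight drawn from the Schaefer curve.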
During the 1950s Schaefer published a range of empirical studies based on the model, the most famous perhaps being "A study of the dynamics of the fishery for yellowfin tuna in the Eastern Tropical Pacific Ocean". Other researchers also soon saw the potential of developing the model tools further. Gordon-Schaefer model. Schaefer's seminal paper further extends the biological model to account for the dynamics of fishing pressure in an unregulated fishery, assuming that fishing effort increases until profit can no longer be made. Thus, the fishery reaches an equilibrium, referred to as the "bionomic equilibrium" by H. Scott Gordon in a paper published the same year as Schaefer's but focused purely on the economics of fishing. Apparently, Schaefer and Gordon did not know about each other's work, and today their bioeconomic model is known as the Gordon-Schaefer model. It is common to credit Schaefer only with the biological part of this model, but this is a mistake. Together, the work by Schaefer and Gordon laid the basis for quantitative analyses of fisheries economics. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "H(E,X)=q E X\\!" }, { "math_id": 1, "text": "\\dot{X}=0" }, { "math_id": 2, "text": "H(E)=q K E \\left(1-\\frac{qE}{r}\\right)" } ]
https://en.wikipedia.org/wiki?curid=10048386
10049713
Victorian Railways Dd class
Class of 261 Australian 4-6-0 and 58 Australian 4-6-2T steam locomotives The DD class (later reclassified into D1, D2 and D3 subclasses) was a passenger and mixed traffic steam locomotive that ran on Victorian Railways from 1902 to 1974. Originally introduced on mainline express passenger services, they were quickly superseded by the much larger A2 class and were relegated to secondary and branch line passenger and goods service, where they gave excellent service for the next fifty years. The DD design was adapted into a 4-6-2T tank locomotive for suburban passenger use, the DDE (later D4) class. They were the most numerous locomotive class on the VR, with a total of 261 DD and 58 formula_0 locomotives built. History. By 1900, Victoria's express passenger locomotive fleet was almost exclusively made up of 4-4-0 designs of the Old A, New A, and the most recent AA class. These locomotives reflected contemporary British locomotive practice (as did the VR's fleet of 0-6-0 goods locomotives), in no small part due to the Victorian Government having appointed, in 1884, a Midland Railway manager, Richard Speight, as its first Chief Railways Commissioner. The commissioners then asked British locomotive engineer Edward Jeffreys to design five standard types of locos, in partnership with the British locomotive manufacturer Kitson & Company of Leeds. At the turn of the century, in what marked a major shift in policy, the recently appointed VR Commissioner, John Mathieson, set up a Locomotive Design Section for in-house development of future motive power. The DD class locomotives were the first product of this exercise. A 4-6-0 design equipped with 5 ft 1 in driving wheels, a saturated steam boiler and a Belpaire firebox, the DD reflected the considerable talent of VR's design team, which included ex-Beyer, Peacock & Company recruit Eugene Siepen, future VR Chief Mechanical Engineer Alfred Smith, and Rolling Stock Branch manager Thomas Woodroffe. Production.
The first DD was number 560, constructed at the Victorian Railways' Newport Workshops and entering service in 1902. It was followed by engines 582 to 700, evens only, all constructed at Newport with the exception of 602, 604, 606, 608, 610, 632 and 634. These seven engines were notable as the last locomotives to be built by Ballarat's Phoenix Foundry, which had been the main supplier of locomotives to the VR for over thirty years. That was because the conservative Irvine government sought to reduce the costs of locomotive construction, and Newport Workshops was asked to tender for the construction of the DD class locomotives. A fierce tender war between Newport and Phoenix eventually resulted in a Royal Commission, which found that Newport could produce a locomotive for £3,364, some £497 cheaper than the Phoenix Foundry. Phoenix produced just seven DD locomotives and received no further orders, going into voluntary liquidation a year later. Engines 702 to 796, again evens only, were delivered as tank engines of the formula_0 class up to the end of 1910. By this point the odds/evens locomotive numbering scheme had been abandoned, so the last nine of the batch were delivered as 701-717 to start filling gaps. As part of the competitive tendering process, in early 1912 contracts were signed with each of Beyer, Peacock & Company of Manchester, England, Baldwin Locomotive Works of the US, Walkers Limited of Maryborough, Queensland and Austral Otis, to compare against the cost of building engines at Newport Workshops. Ritchie Brothers of Sydney had also tendered but failed to win any of the orders. The contracts were for 20 engines each, with rights to a 20-engine extension and the possibility of up to a total of 100 engines. Respectively, Beyer, Peacock & Company delivered engines 531-569, Baldwin delivered 571-609 and Newport 611-649 (plus tank engine 719) in 1912. The following year saw Walkers deliver 651-689 while Newport supplied tank engines 721-749.
Austral Otis encountered difficulties and withdrew from the contract in November 1912, leading to that contract being re-offered. From 1914 newly delivered engines were consecutively numbered. Between 1914 and 1919 Newport delivered three batches of 20 engines each, numbered 873-912, 943-962 and 1013-1032, at a rate of 20 per year except the final two, delivered in 1918 and 1919 respectively. The firm Thompsons & Co successfully won the contract for the 20 engines not being constructed by Austral Otis, and these were delivered from the end of 1914 numbered 893-912. A repeat order was placed in 1916 with deliveries of 963-982, and work had started on a further 20 engines when pressures of World War I led to the firm abandoning the remainder of the DD contract extensions. The parts already constructed were forwarded to Victorian Railways workshops, initially with five each being built at Bendigo and Ballarat (1033-1037 and 1038-1042 respectively), and the next ten were split between Newport (1043-1046), Bendigo (1047-1049) and Ballarat (1050-1052). These three workshops turned out virtually all subsequent locomotives for the Victorian railway system until the post-war era. (Some references exist to a further ten Thompsons engines, but no evidence is available to support the claim.) Regular service. DD class locomotives were initially assigned to hauling the "Adelaide Express" over the steep gradients between Melbourne and Ballarat, but were soon seen on mainline passenger services on a number of lines. The first years of the 20th century saw, on the VR as elsewhere in the world, a considerable increase in both the amount of traffic and the size and weight of rolling stock being hauled. In 1907, the DD class was supplanted by the much larger and more powerful A2 class on principal mainline services.
However, with their light axle load (just 12 t 10 cwt in their original form), they were quickly reassigned to the VR's branchline network, where they became a fixture for the next fifty years. From July until September 1918, 1032 was loaned to the South Australian Railways for trials against an Rx class operating from Adelaide to Murray Bridge and Victor Harbor. Commissioner's Engines. With their light axle load and express passenger speed, the DD was also an ideal choice as motive power for the Victorian Railways Commissioner's train (used to carry the VR Commissioners on inspection tours to every corner of the VR network). In January 1917, Commissioners' locomotive No. 100, a 2-4-0 built in 1872, was scrapped and replaced with the brand new DD 980 from Thompsons Foundry in Castlemaine. It was later renumbered DD 718, DD 600 and D1 600, until March 1937 when it was placed into normal service as D1 576, operating until 1959. There is photographic evidence of D1 600 as Commissioners' Engine throughout the 1930s in the K.V. Scott collection. The new Commissioners' engine from 1937 was D3 683, specially fitted with an electric headlight (Mort Clark Bulletin Article), and in August 1950 it was replaced by D3 639. 639 herself was withdrawn in July 1956 and replaced with D3 658; however, 639's numbers were transferred to 658. D3 639 (658) was replaced by the new 40 M.P.H. Clyde EMD diesel-electric Y 123 in January 1964. In August 1968 the new diesel-electric Y 175, geared for 60 M.P.H. running, took over until the Commissioners' Train was discontinued about 1979/80. In 1983 new Chief General Manager Mr. John Hearsch reinstated the Inspection Train with Clyde diesel-electric T 410. The Inspection train was discontinued after Hearsch left for Queensland Rail circa 1991. DDE tank engine.
The expansion of Melbourne's population into new suburbs early in the 20th century, and the delay of the suburban electrification project, saw the need for faster and more powerful steam locomotives for the suburban rail network. In 1908, the basic design of the DD was adapted to create 4-6-2T tank locomotives, classed formula_0. They were put to work on longer and hillier suburban routes such as the Dandenong, Frankston, Upper Ferntree Gully, Williamstown, Werribee, Lilydale, Darling and Kew railway lines. A total of 58 were built between 1908 and 1913. With electrification of the suburban network already on the drawing board (the first electrified lines opening in 1919), the formula_0 was designed for easy conversion to DD tender engines in the event of electrification making them redundant. However, only two were modified in that way. Ten were scrapped in 1924, followed by another four in 1925, and formula_0 704 was sold to the State Electricity Commission of Victoria. The remaining formula_0 locomotives remained in service on non-electrified outer suburban routes or found new roles as suburban goods locomotives or shunters. Some were allotted to Ballarat to work the short branch line to Newlyn. Design improvements. During the construction of the DD class, a number of changes were made. The first locomotives built featured low running plates with splashers over the driving wheels and a narrow cab. However, after 26 such examples were built the design was altered with high running plates mounted above the driving wheels and a more comfortable full-width pressed metal cab of Canadian design, a feature incorporated at the request of Victorian Railways Chief Commissioner and former Canadian Pacific Transportation Manager Thomas Tait. These became hallmarks of all subsequent VR steam locomotive designs. 
Although the DD was considered to be a successful design, it had a key shortcoming in that its boiler performance was not sufficient for the traffic demands being placed on it. In 1914, an experimental superheater was fitted to DD 882 and was found to be very successful. Both DD and A2 designs (both locomotive classes still under construction at the time) were modified with superheated boilers (with all of the existing A2 class locomotives eventually fitted with superheated boilers). Superheaters were also fitted to three of the formula_0 locomotives. Further DD locomotives were also built with 19 in. diameter cylinders in place of the original 18 in. cylinders. In 1923–4, DD 1022 was experimentally fitted with Pulverised Brown Coal (PBC) burning equipment. Reclassing: D1, D2 and D4 class. In 1922 a complex renumbering and reclassing of VR locomotives saw the DD class split into two subclasses, the D1 class (comprising all the original saturated steam locomotives with 18 in. cylinders) and the D2 class (comprising superheated locomotives with either 18 or 19 in. cylinders). With the introduction of a further D3 class in 1929, the formula_0 tank locomotives were reclassified as the D4 class. The D3 class. Despite the success of superheating the DD boiler, it was still somewhat limited in steam-raising capabilities. In 1922, a new design of 2-8-0 branch line goods locomotive, the K class, was introduced, with noticeably superior boiler performance to that of the DD. In 1929, a DD class locomotive was rebuilt with a larger boiler derived from the K class design. Based on the success of the rebuild, a further 93 D1 or D2 class locomotives were converted between 1929 and 1947, and classified D3. The D3s were economical and efficient, and also renowned for their superior performance. They could be worked hard and were a favourite with crews. Although restricted to a maximum permitted speed of , the D3s were known to be capable of up to .
With its low axle load and its ability to travel at a relatively high speed, the D3 helped to speed up passenger services on many lightly laid branch lines. Conversions and Renumberings. In the period 1922-1927 well over half the fleet of DD engines were renumbered, some twice, to clean up the mess left behind by the former odds/evens system and group engines of the same design into a consecutive series. In 1922 the proposed range was 490-799 for the Dd engines and 250-269 for the formula_0s, although in practice the ranges ended up as 500-799 and 250-287 with many numbers unfilled. Note the total of these groups would have been 350 engines, against 319 actually built. During this period two of the formula_0 engines were converted to tender engines, one sold and a further 17 scrapped. In 1929 the DD series was further segregated into D1, D2 and D3 taking slots 500-645, 700-799 and 638-699 respectively. The first of the latter was D1 542 to D3 685 in 1929, followed by further examples of the D3 upgrade completed in 1930 to give the number range 675-689 and this was further extended to 670-699 by the end of 1932. Later conversions between 1933 and 1946 counted down from 669 to 607 in 1946, and finally 604 ex D2 717 entering service in 1947. It is not clear which, if any, engines were intended to take the slots of D3 605 or 606. Otherwise, the DD group was reclassed as either D1 or D2 as appropriate, for the most part without renumbering. Unlike with other renumbering projects, engines converted to D3 and renumbered did not have their previous slots immediately filled. In 1951, to make way for new J and R Class engines being ordered under Operation Phoenix, the remaining D1 and D2 engines were renumbered to the range 561-579 and 580-604 respectively, with D3 604 changing to 606. At the time, engines D1 573, 578, 579 and 585 were still in service and retained their numbers, leaving gaps at numbers 575, 577, 583, 602 and 605. 
Assuming 585 would have been renumbered to replace D1 572 withdrawn that year, the remaining open slots in each group correspond to the number of engines withdrawn in 1951. Engine renumbering histories. These tables are based on: Demise. Scrapping of DD class locomotives commenced as early as 1927 when DD 712 was wrecked, followed by D1 535 in 1928. A full 20 engines (including the newest of the fleet, DD 1052) were scrapped in 1929 as newer K and N class locomotives took over branch line goods services and Petrol Electric Rail Motors started to replace mixed trains and locomotive-hauled branch line passenger services. The unrebuilt saturated steam D1 class locomotives were the first to go, and by 1951 no fewer than 120 had been scrapped. By 1951, the remaining D1 locomotives were shunters, the D2 locomotives providing suburban goods and branch line goods and passenger service, and the D3 performing both branch line and mainline service. However, with the massive postwar upgrading of the VR locomotive fleet as part of 'Operation Phoenix' came the introduction of J class 2-8-0 branch line steam locomotives and T class (EMD G8) diesel electric locomotives to replace the various remaining DD locomotives. The first D3 locomotive to be scrapped was none other than Commissioner's locomotive D3 639 in July 1956. However, this locomotive had attained sufficient prestige that its brass fittings and number plates were transferred to another locomotive, D3 658, which took over its role as Commissioner's locomotive and its identity as "D3 639". Withdrawals and scrappings continued throughout the 1950s and 60s. The last DD in VR service was the Commissioner's locomotive D3 639 (formerly D3 658), which was replaced in this role by a Y class (EMD G6B) diesel electric locomotive, Y 123 from January 1964, then Y175 from August 1968. 
However, D3 639 had since October 1964 taken on a new role providing motive power for the ARHS 'Vintage Train' as the first 'Special Trains Vintage Engine', and continued in this popular role until deteriorated boiler condition saw it finally withdrawn from service in 1974. Preservation. Operational. D3 639 was restored to operating condition in 1984 and was recommissioned into service by Prime Minister Bob Hawke on 17 November 1984. Since this date, it has continued in service hauling various rail enthusiast special trains. It has also been used in a number of films, and could be seen hauling passenger trains beneath an inoperable overhead catenary in the 2000 remake of the post-apocalyptic film "On the Beach". From 5 December 1970 the engine was painted red with black undergear and a brass dome, and by the Austeam '88 festival it had been named "Spirit of Ballarat". As a rebuild of a 1903 DD locomotive, it made a special long-distance journey to Mildura in 2002 as that line approached its centenary, and celebrated its own 100th anniversary in 2003 with a journey to Swan Hill. Between 2007 and 2009 the engine operated with its previous number of 658. In 2014 the engine masqueraded as DD 893 for the centenary of the Thompsons Foundry in Castlemaine. While most of its fittings were retained for the day, the number, letter and builders plates were swapped for the occasion. Notably, the first DD built by Thompsons was in fact preserved, having been converted to D3 640 in 1937 then renumbered D3 688 in 1964. It is displayed on a plinth in Swan Hill, and more recently returned to its previous 640 identity. Static. A single example of each of the D2 (604) and D4 (268) locomotives were retained for preservation and today are preserved at the Newport Railway Museum, where they are displayed along with D3 635. Notably, 604 is coupled to a tender consisting of a D2 tank on a slightly longer A2 frame. 
13 other D3 class locomotives remain, either preserved on static display or stored awaiting restoration or use as a supply of parts. No original D1 class locomotives have survived into preservation. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathrm{D^D_E}" } ]
https://en.wikipedia.org/wiki?curid=10049713
10049748
Cophasing
Process in astronomy of controlling the individual segments of a segmented telescope mirror In astronomy, the term cophasing or phasing describes the process of controlling the individual segments in a segmented mirror or telescope so that the segments form a larger composite mirror surface. Cophasing implies precise, active control of three degrees of freedom of each individual segment mirror: translation along the optical axis (piston) and rotation about two axes perpendicular to the optical axis (tip-tilt). Each segment of a segmented telescope is a solid body with six degrees of freedom, exposed to gravity, wind, and other mechanical forces. If the position of each segment is not controlled, the resolution of the whole telescope will be no better than that of a telescope with a diameter equal to the size of one segment. To achieve a resolution commensurate with that of a monolithic telescope of the same diameter, the segmented surface must be controlled with a precision better than formula_0 surface rms. Projects for future extremely large telescopes (ELTs) generally depend on the use of a segmented primary mirror. While the basic technologies required for segmented telescopes have been demonstrated on the 10 m Keck and GTC telescopes, ELTs with diameters from 50 to 100 m represent a qualitative change, with respect to wavefront control related to segmentation, in comparison with the current 10 m technology. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\lambda/40" } ]
https://en.wikipedia.org/wiki?curid=10049748
10050297
Maximal ergodic theorem
The maximal ergodic theorem is a theorem in ergodic theory, a discipline within mathematics. Suppose that formula_0 is a probability space, that formula_1 is a (possibly noninvertible) measure-preserving transformation, and that formula_2. Define formula_3 by formula_4 Then the maximal ergodic theorem states that formula_5 for any λ ∈ R. This theorem is used to prove the point-wise ergodic theorem.
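The inequality can be checked numerically for a simple measure-preserving system. The Monte Carlo sketch below uses illustrative choices (an irrational rotation of the circle, a cosine observable, λ = 0.5) and approximates formula_4 by the maximum over finitely many averages, for which the inequality also holds.

```python
import numpy as np

# T(x) = x + alpha mod 1 on [0, 1) preserves Lebesgue measure.
rng = np.random.default_rng(0)
alpha = np.sqrt(2) - 1                      # irrational rotation angle
f = lambda x: np.cos(2 * np.pi * x) + 0.3   # an integrable observable

x = rng.random(50_000)                      # draws from the invariant measure
N = 50
orbit = (x[None, :] + alpha * np.arange(N)[:, None]) % 1.0   # T^i applied to x
partial_means = np.cumsum(f(orbit), axis=0) / np.arange(1, N + 1)[:, None]
f_star = partial_means.max(axis=0)          # max of averages over 1 <= n <= N

lam = 0.5
on_set = f_star > lam                       # the event {f* > lam}
lhs = np.mean(f(x) * on_set)                # approximates the integral of f over the set
rhs = lam * np.mean(on_set)                 # approximates lam times its measure
print(lhs >= rhs)
```

Up to Monte Carlo error, the left-hand estimate dominates the right-hand one, as the theorem requires.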
[ { "math_id": 0, "text": "(X, \\mathcal{B},\\mu)" }, { "math_id": 1, "text": "T : X\\to X" }, { "math_id": 2, "text": "f\\in L^1(\\mu,\\mathbb{R})" }, { "math_id": 3, "text": "f^*" }, { "math_id": 4, "text": "f^* = \\sup_{N\\geq 1} \\frac{1}{N} \\sum_{i=0}^{N-1} f \\circ T^i. " }, { "math_id": 5, "text": " \\int_{f^{*} > \\lambda} f \\, d\\mu \\ge \\lambda \\cdot \\mu\\{ f^{*} > \\lambda\\} " } ]
https://en.wikipedia.org/wiki?curid=10050297
10050999
Median absolute deviation
Statistical measure of variability In statistics, the median absolute deviation (MAD) is a robust measure of the variability of a univariate sample of quantitative data. It can also refer to the population parameter that is estimated by the MAD calculated from a sample. For a univariate data set "X"1, "X"2, ..., "Xn", the MAD is defined as the median of the absolute deviations from the data's median formula_0: formula_1 that is, starting with the residuals (deviations) from the data's median, the MAD is the median of their absolute values. Example. Consider the data (1, 1, 2, 2, 4, 6, 9). It has a median value of 2. The absolute deviations about 2 are (1, 1, 0, 0, 2, 4, 7) which in turn have a median value of 1 (because the sorted absolute deviations are (0, 0, 1, 1, 2, 4, 7)). So the median absolute deviation for this data is 1. Uses. The median absolute deviation is a measure of statistical dispersion. Moreover, the MAD is a robust statistic, being more resilient to outliers in a data set than the standard deviation. In the standard deviation, the distances from the mean are squared, so large deviations are weighted more heavily, and thus outliers can heavily influence it. In the MAD, the deviations of a small number of outliers are irrelevant. Because the MAD is a more robust estimator of scale than the sample variance or standard deviation, it works better with distributions without a mean or variance, such as the Cauchy distribution. Relation to standard deviation. The MAD may be used similarly to how one would use the standard deviation for the mean. In order to use the MAD as a consistent estimator for the estimation of the standard deviation formula_2, one takes formula_3 where formula_4 is a constant scale factor, which depends on the distribution.
For normally distributed data formula_4 is taken to be formula_5 i.e., the reciprocal of the quantile function formula_6 (also known as the inverse of the cumulative distribution function) for the standard normal distribution formula_7. Derivation. The argument 3/4 is such that formula_8 covers 50% (between 1/4 and 3/4) of the standard normal cumulative distribution function, i.e. formula_9 Therefore, we must have that formula_10 Noticing that formula_11 we have that formula_12, from which we obtain the scale factor formula_13. Another way of establishing the relationship is noting that MAD equals the half-normal distribution median: formula_14 This form is used in, e.g., the probable error. In the case of complex values ("X"+i"Y"), the relation of MAD to the standard deviation is unchanged for normally distributed data. MAD using geometric median. Analogously to how the median generalizes to the geometric median (gm) in multivariate data, MAD can be generalized to MADGM (median of distances to gm) in n dimensions. This is done by replacing the absolute differences in one dimension by Euclidean distances of the data points to the geometric median in n dimensions. This gives a result identical to the univariate MAD in 1 dimension and generalizes to any number of dimensions. MADGM needs the geometric median to be found, which is done by an iterative process. The population MAD. The population MAD is defined analogously to the sample MAD, but is based on the complete distribution rather than on a sample. For a symmetric distribution with zero mean, the population MAD is the 75th percentile of the distribution. Unlike the variance, which may be infinite or undefined, the population MAD is always a finite number. For example, the standard Cauchy distribution has undefined variance, but its MAD is 1. The earliest known mention of the concept of the MAD occurred in 1816, in a paper by Carl Friedrich Gauss on the determination of the accuracy of numerical observations.
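The definition, the worked example, and the normal-distribution scale factor can all be reproduced in a few lines; the second sample below, with one gross outlier, is an invented illustration.

```python
import statistics
from statistics import NormalDist

def mad(data):
    med = statistics.median(data)
    return statistics.median(abs(x - med) for x in data)

data = [1, 1, 2, 2, 4, 6, 9]      # example from the text
print(mad(data))                   # -> 1

# Scale factor for normally distributed data: k = 1 / Phi^{-1}(3/4)
k = 1 / NormalDist().inv_cdf(0.75)  # about 1.4826

sample = [2.1, 1.9, 2.0, 2.2, 1.8, 2.0, 50.0]  # one gross outlier
print(k * mad(sample))             # robust scale estimate, barely moved by 50.0
print(statistics.stdev(sample))    # classical estimate, dominated by the outlier
```

The robust estimate stays near the spread of the bulk of the data, while the sample standard deviation is inflated by the single outlying value, illustrating the resilience described above.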
Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\tilde{X}=\\operatorname{median}(X) " }, { "math_id": 1, "text": "\n\\operatorname{MAD} = \\operatorname{median}( |X_i - \\tilde{X}|)\n" }, { "math_id": 2, "text": "\\sigma" }, { "math_id": 3, "text": "\\hat{\\sigma} = k \\cdot \\operatorname{MAD}," }, { "math_id": 4, "text": "k" }, { "math_id": 5, "text": "k = 1/\\left(\\Phi^{-1}(3/4)\\right) \\approx 1/0.67449 \\approx 1.4826," }, { "math_id": 6, "text": "\\Phi^{-1}" }, { "math_id": 7, "text": "Z = (X - \\mu) / \\sigma" }, { "math_id": 8, "text": "\\pm \\operatorname{MAD}" }, { "math_id": 9, "text": "\\frac 12 = P(|X - \\mu| \\le \\operatorname{MAD}) = P\\left(\\left|\\frac{X - \\mu}{\\sigma}\\right| \\le \\frac {\\operatorname{MAD}} \\sigma\\right) = P\\left(|Z| \\le \\frac{\\operatorname{MAD}}{\\sigma}\\right)." }, { "math_id": 10, "text": "\\Phi\\left(\\operatorname{MAD} / \\sigma\\right) - \\Phi\\left(-\\operatorname{MAD} / \\sigma\\right) = 1/2." }, { "math_id": 11, "text": "\\Phi\\left(-\\operatorname{MAD} / \\sigma\\right) = 1 - \\Phi\\left(\\operatorname{MAD} / \\sigma\\right)," }, { "math_id": 12, "text": "\\operatorname{MAD} / \\sigma = \\Phi^{-1}(3/4) = 0.67449" }, { "math_id": 13, "text": "k = 1 / \\Phi^{-1}(3/4) = 1.4826" }, { "math_id": 14, "text": "\\operatorname{MAD} = \\sigma\\sqrt{2}\\operatorname{erf}^{-1}(1/2) \\approx 0.67449 \\sigma." } ]
https://en.wikipedia.org/wiki?curid=10050999
10053499
Oleg Lupanov
Russian mathematician (1932–2006) Oleg Borisovich Lupanov (2 June 1932 – 3 May 2006) was a Soviet and Russian mathematician, dean of the Moscow State University's Faculty of Mechanics and Mathematics (1980–2006), head of the Chair of Discrete Mathematics of the Faculty of Mechanics and Mathematics (1981–2006). Together with his graduate school advisor, Sergey Yablonsky, he is considered one of the founders of the Soviet school of Mathematical Cybernetics. In particular he authored pioneering works on synthesis and complexity of Boolean circuits, and of control systems in general, the term used in the USSR and Russia for a generalization of finite state automata, Boolean circuits and multi-valued logic circuits. Ingo Wegener, in his book "The Complexity of Boolean Functions," credits O. B. Lupanov for coining the term "Shannon effect" in his 1970 paper, to refer to the fact that almost all Boolean functions have nearly the same circuit complexity as the hardest function. O. B. Lupanov is best known for his ("k", "s")-Lupanov representation of Boolean functions that he used to devise an asymptotically optimal method of Boolean circuit synthesis, thus proving the asymptotically tight upper bound on Boolean circuit complexity: formula_0 Biography. O. B. Lupanov graduated from Moscow State University's Faculty of Mechanics and Mathematics in 1955. He received his PhD in 1958 from the Academy of Sciences of the Soviet Union and his Doctorate degree in 1963. He began teaching at Moscow State University in 1959 and became a professor there in 1967. From 1955 he held an appointment at the Institute of Applied Mathematics, and he was a professor at the Faculty of Computational Mathematics and Cybernetics (1970–1980). He had served as the Dean of the Moscow State University's Faculty of Mechanics and Mathematics (1980–2006), and as the founding head of the Chair of Discrete Mathematics of the Faculty of Mechanics and Mathematics (1981–2006).
Lupanov became a corresponding member of the Academy of Sciences of the Soviet Union in 1972 and a full member of the Russian Academy of Sciences in 2003. He was a lead scientist at the Keldysh Institute of Applied Mathematics from 1993 and was awarded the title of a distinguished professor of Moscow State University in 2002. He was a recipient of the prestigious Lenin Prize (1966) and of the Moscow State University's Lomonosov Award (1993). His students include more than 30 PhD holders and 6 holders of the Soviet/Russian Doctorate degree. As dean of the Faculty of Mechanics and Mathematics he had a reputation as a democratic and accessible person. Personal life. Lupanov died at around 7pm on 3 May 2006 in his office at the Faculty of Mechanics and Mathematics of Moscow State University. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "C(f)\\le \\frac{2^n}{n} + o\\left(\\frac{2^n}{n}\\right). " } ]
https://en.wikipedia.org/wiki?curid=10053499
100558
A* search algorithm
Algorithm used for pathfinding and graph traversal A* (pronounced "A-star") is a graph traversal and pathfinding algorithm, which is used in many fields of computer science due to its completeness, optimality, and optimal efficiency. Given a weighted graph, a source node and a goal node, the algorithm finds the shortest path (with respect to the given weights) from source to goal. One major practical drawback is its formula_0 space complexity where d is the depth of the solution (the length of the shortest path) and b is the branching factor (the average number of successors per state), as it stores all generated nodes in memory. Thus, in practical travel-routing systems, it is generally outperformed by algorithms that can pre-process the graph to attain better performance, as well as by memory-bounded approaches; however, A* is still the best solution in many cases. Peter Hart, Nils Nilsson and Bertram Raphael of Stanford Research Institute (now SRI International) first published the algorithm in 1968. It can be seen as an extension of Dijkstra's algorithm. A* achieves better performance by using heuristics to guide its search. Compared to Dijkstra's algorithm, the A* algorithm only finds the shortest path from a specified source to a specified goal, and not the shortest-path tree from a specified source to all possible goals. This is a necessary trade-off for using a specific-goal-directed heuristic. For Dijkstra's algorithm, since the entire shortest-path tree is generated, every node is a goal, and there can be no specific-goal-directed heuristic. History. A* was created as part of the Shakey project, which had the aim of building a mobile robot that could plan its own actions. Nils Nilsson originally proposed using the Graph Traverser algorithm for Shakey's path planning. Graph Traverser is guided by a heuristic function "h"("n"), the estimated distance from node n to the goal node: it entirely ignores "g"("n"), the distance from the start node to n. 
Bertram Raphael suggested using the sum, "g"("n") + "h"("n"). Peter Hart invented the concepts we now call admissibility and consistency of heuristic functions. A* was originally designed for finding least-cost paths when the cost of a path is the sum of its costs, but it has been shown that A* can be used to find optimal paths for any problem satisfying the conditions of a cost algebra. The original 1968 A* paper contained a theorem stating that no A*-like algorithm could expand fewer nodes than A* if the heuristic function is consistent and A*'s tie-breaking rule is suitably chosen. A "correction" was published a few years later claiming that consistency was not required, but this was shown to be false in 1985 in Dechter and Pearl's definitive study of A*'s optimality (now called optimal efficiency), which gave an example of A* with a heuristic that was admissible but not consistent expanding arbitrarily more nodes than an alternative A*-like algorithm. Description. A* is an informed search algorithm, or a best-first search, meaning that it is formulated in terms of weighted graphs: starting from a specific starting node of a graph, it aims to find a path to the given goal node having the smallest cost (least distance travelled, shortest time, etc.). It does this by maintaining a tree of paths originating at the start node and extending those paths one edge at a time until the goal node is reached. At each iteration of its main loop, A* needs to determine which of its paths to extend. It does so based on the cost of the path and an estimate of the cost required to extend the path all the way to the goal. Specifically, A* selects the path that minimizes formula_1 where n is the next node on the path, "g"("n") is the cost of the path from the start node to n, and "h"("n") is a heuristic function that estimates the cost of the cheapest path from n to the goal. The heuristic function is problem-specific. 
If the heuristic function is admissible – meaning that it never overestimates the actual cost to get to the goal – A* is guaranteed to return a least-cost path from start to goal. Typical implementations of A* use a priority queue to perform the repeated selection of minimum (estimated) cost nodes to expand. This priority queue is known as the "open set", "fringe" or "frontier". At each step of the algorithm, the node with the lowest "f"("x") value is removed from the queue, the f and g values of its neighbors are updated accordingly, and these neighbors are added to the queue. The algorithm continues until a removed node (thus the node with the lowest f value out of all fringe nodes) is a goal node. The f value of that goal is then also the cost of the shortest path, since h at the goal is zero in an admissible heuristic. The algorithm described so far gives us only the length of the shortest path. To find the actual sequence of steps, the algorithm can be easily revised so that each node on the path keeps track of its predecessor. After this algorithm is run, the ending node will point to its predecessor, and so on, until some node's predecessor is the start node. As an example, when searching for the shortest route on a map, "h"("x") might represent the straight-line distance to the goal, since that is physically the smallest possible distance between any two points. For a grid map from a video game, using the Taxicab distance or the Chebyshev distance becomes better depending on the set of movements available (4-way or 8-way). If the heuristic h satisfies the additional condition "h"("x") ≤ "d"("x", "y") + "h"("y") for every edge ("x", "y") of the graph (where d denotes the length of that edge), then h is called monotone, or consistent. 
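Both conditions can be checked mechanically on a finite graph. A minimal Python sketch, assuming the graph is given as a dict mapping each node to a neighbor-to-edge-weight map (a representation chosen here purely for illustration):

```python
def is_consistent(graph, h, goal):
    """Check the consistency condition h(x) <= d(x, y) + h(y) on every edge,
    plus h(goal) == 0; such a heuristic is also admissible."""
    if h(goal) != 0:
        return False
    return all(h(x) <= d + h(y)
               for x, neighbors in graph.items()
               for y, d in neighbors.items())

# A path graph A -> B -> C with edge length 2; h_exact is the true remaining distance.
graph = {"A": {"B": 2.0}, "B": {"C": 2.0}, "C": {}}
h_exact = {"A": 4.0, "B": 2.0, "C": 0.0}
h_inflated = {"A": 5.0, "B": 2.0, "C": 0.0}   # overestimates at A

ok = is_consistent(graph, h_exact.get, "C")       # True
bad = is_consistent(graph, h_inflated.get, "C")   # False: 5 > 2 + h(B)
```

Checking only the edge-wise condition is what makes consistency convenient in practice: admissibility quantifies over all paths to the goal, while consistency is a local test.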
With a consistent heuristic, A* is guaranteed to find an optimal path without processing any node more than once and A* is equivalent to running Dijkstra's algorithm with the reduced cost "d"'("x", "y") = "d"("x", "y") + "h"("y") − "h"("x"). Pseudocode. The following pseudocode describes the algorithm:

function reconstruct_path(cameFrom, current)
    total_path := {current}
    while current in cameFrom.Keys:
        current := cameFrom[current]
        total_path.prepend(current)
    return total_path

// A* finds a path from start to goal.
// h is the heuristic function. h(n) estimates the cost to reach goal from node n.
function A_Star(start, goal, h)
    // The set of discovered nodes that may need to be (re-)expanded.
    // Initially, only the start node is known.
    // This is usually implemented as a min-heap or priority queue rather than a hash-set.
    openSet := {start}

    // For node n, cameFrom[n] is the node immediately preceding it on the cheapest path from the start
    // to n currently known.
    cameFrom := an empty map

    // For node n, gScore[n] is the cost of the cheapest path from start to n currently known.
    gScore := map with default value of Infinity
    gScore[start] := 0

    // For node n, fScore[n] := gScore[n] + h(n). fScore[n] represents our current best guess as to
    // how cheap a path could be from start to finish if it goes through n.
    fScore := map with default value of Infinity
    fScore[start] := h(start)

    while openSet is not empty
        // This operation can occur in O(Log(N)) time if openSet is a min-heap or a priority queue
        current := the node in openSet having the lowest fScore[] value
        if current = goal
            return reconstruct_path(cameFrom, current)

        openSet.Remove(current)
        for each neighbor of current
            // d(current,neighbor) is the weight of the edge from current to neighbor
            // tentative_gScore is the distance from start to the neighbor through current
            tentative_gScore := gScore[current] + d(current, neighbor)
            if tentative_gScore < gScore[neighbor]
                // This path to neighbor is better than any previous one. Record it!
                cameFrom[neighbor] := current
                gScore[neighbor] := tentative_gScore
                fScore[neighbor] := tentative_gScore + h(neighbor)
                if neighbor not in openSet
                    openSet.add(neighbor)

    // Open set is empty but goal was never reached
    return failure

Remark: In this pseudocode, if a node is reached by one path, removed from openSet, and subsequently reached by a cheaper path, it will be added to openSet again. This is essential to guarantee that the path returned is optimal if the heuristic function is admissible but not consistent. If the heuristic is consistent, when a node is removed from openSet the path to it is guaranteed to be optimal, so the test tentative_gScore < gScore[neighbor] will always fail if the node is reached again. Example. An example of an A* algorithm in action where nodes are cities connected with roads and h(x) is the straight-line distance to the target point: Key: green: start; blue: goal; orange: visited. The A* algorithm has real-world applications. In this example, edges are railroads and h(x) is the great-circle distance (the shortest possible distance on a sphere) to the target. The algorithm is searching for a path between Washington, D.C., and Los Angeles. Implementation details. There are a number of simple optimizations or implementation details that can significantly affect the performance of an A* implementation. The first detail to note is that the way the priority queue handles ties can have a significant effect on performance in some situations. If ties are broken so the queue behaves in a LIFO manner, A* will behave like depth-first search among equal cost paths (avoiding exploring more than one equally optimal solution). When a path is required at the end of the search, it is common to keep with each node a reference to that node's parent. At the end of the search, these references can be used to recover the optimal path. 
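The pseudocode in the previous section translates almost directly into Python. The sketch below uses heapq as the priority queue and simply skips stale heap entries instead of performing a true decrease-priority operation; the adjacency-dict graph format is an assumption for illustration, not part of the algorithm:

```python
import heapq

def a_star(graph, start, goal, h):
    """A* over a graph given as {node: {neighbor: edge_weight}}.

    h(n) is the heuristic estimate of the cost from n to goal.
    Returns (cost, path) or None if the goal is unreachable.
    """
    g_score = {start: 0.0}
    came_from = {}
    # Heap entries are (f, node). Stale entries are skipped when popped,
    # which stands in for a decrease-priority operation.
    open_heap = [(h(start), start)]
    while open_heap:
        f, current = heapq.heappop(open_heap)
        if f > g_score.get(current, float("inf")) + h(current):
            continue  # stale entry: a cheaper path to current was found later
        if current == goal:
            # Walk predecessor links back to the start to recover the path.
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return g_score[goal], path[::-1]
        for neighbor, weight in graph[current].items():
            tentative = g_score[current] + weight
            if tentative < g_score.get(neighbor, float("inf")):
                came_from[neighbor] = current
                g_score[neighbor] = tentative
                heapq.heappush(open_heap, (tentative + h(neighbor), neighbor))
    return None  # open set exhausted without reaching the goal

# Example: four-node graph; with h == 0 everywhere, A* reduces to Dijkstra's algorithm.
graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
cost, path = a_star(graph, "A", "D", lambda n: 0)
```

Tolerating stale entries keeps the code short at the price of a slightly larger heap; the hash-table-augmented heap described below is the alternative when memory is tight.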
If these references are being kept then it can be important that the same node doesn't appear in the priority queue more than once (each entry corresponding to a different path to the node, and each with a different cost). A standard approach here is to check if a node about to be added already appears in the priority queue. If it does, then the priority and parent pointers are changed to correspond to the lower-cost path. A standard binary heap based priority queue does not directly support the operation of searching for one of its elements, but it can be augmented with a hash table that maps elements to their position in the heap, allowing this decrease-priority operation to be performed in logarithmic time. Alternatively, a Fibonacci heap can perform the same decrease-priority operations in constant amortized time. Special cases. Dijkstra's algorithm, as another example of a uniform-cost search algorithm, can be viewed as a special case of A* where "h"("x") = 0 for all "x". General depth-first search can be implemented using A* by considering that there is a global counter "C" initialized with a very large value. Every time we process a node we assign "C" to all of its newly discovered neighbors. After every single assignment, we decrease the counter "C" by one. Thus the earlier a node is discovered, the higher its "h" value. Both Dijkstra's algorithm and depth-first search can be implemented more efficiently without including an "h" value at each node. Properties. Termination and completeness. On finite graphs with non-negative edge weights A* is guaranteed to terminate and is "complete", i.e. it will always find a solution (a path from start to goal) if one exists. On infinite graphs with a finite branching factor and edge costs that are bounded away from zero (formula_2 for some fixed formula_3), A* is guaranteed to terminate only if there exists a solution. Admissibility. 
A search algorithm is said to be "admissible" if it is guaranteed to return an optimal solution. If the heuristic function used by A* is admissible, then A* is admissible. An intuitive "proof" of this is as follows: When A* terminates its search, it has found a path from start to goal whose actual cost is lower than the estimated cost of any path from start to goal through any open node (the node's "f" value). When the heuristic is admissible, those estimates are optimistic (not quite—see the next paragraph), so A* can safely ignore those nodes because they cannot possibly lead to a cheaper solution than the one it already has. In other words, A* will never overlook the possibility of a lower-cost path from start to goal and so it will continue to search until no such possibilities exist. The actual proof is a bit more involved because the "f" values of open nodes are not guaranteed to be optimistic even if the heuristic is admissible. This is because the "g" values of open nodes are not guaranteed to be optimal, so the sum "g" + "h" is not guaranteed to be optimistic. Optimality and consistency. Algorithm A is optimally efficient with respect to a set of alternative algorithms Alts on a set of problems P if for every problem "P" in P and every algorithm A′ in Alts, the set of nodes expanded by A in solving "P" is a subset (possibly equal) of the set of nodes expanded by A′ in solving "P". The definitive study of the optimal efficiency of A* is due to Rina Dechter and Judea Pearl. They considered a variety of definitions of Alts and P in combination with A*'s heuristic being merely admissible or being both consistent and admissible. The most interesting positive result they proved is that A*, with a consistent heuristic, is optimally efficient with respect to all admissible A*-like search algorithms on all "non-pathological" search problems. 
Roughly speaking, their notion of the non-pathological problem is what we now mean by "up to tie-breaking". This result does not hold if A*'s heuristic is admissible but not consistent. In that case, Dechter and Pearl showed there exist admissible A*-like algorithms that can expand arbitrarily fewer nodes than A* on some non-pathological problems. Optimal efficiency is about the "set" of nodes expanded, not the "number" of node expansions (the number of iterations of A*'s main loop). When the heuristic being used is admissible but not consistent, it is possible for a node to be expanded by A* many times, an exponential number of times in the worst case. In such circumstances, Dijkstra's algorithm could outperform A* by a large margin. However, more recent research found that this pathological case only occurs in certain contrived situations where the edge weight of the search graph is exponential in the size of the graph and that certain inconsistent (but admissible) heuristics can lead to a reduced number of node expansions in A* searches. Bounded relaxation. While the admissibility criterion guarantees an optimal solution path, it also means that A* must examine all equally meritorious paths to find the optimal path. To compute approximate shortest paths, it is possible to speed up the search at the expense of optimality by relaxing the admissibility criterion. Oftentimes we want to bound this relaxation, so that we can guarantee that the solution path is no worse than (1 + "ε") times the optimal solution path. This new guarantee is referred to as "ε"-admissible. There are a number of "ε"-admissible algorithms: Complexity. The time complexity of A* depends on the heuristic. In the worst case of an unbounded search space, the number of nodes expanded is exponential in the depth of the solution (the shortest path) d: "O"("bd"), where b is the branching factor (the average number of successors per state). 
This assumes that a goal state exists at all, and is reachable from the start state; if it is not, and the state space is infinite, the algorithm will not terminate. The heuristic function has a major effect on the practical performance of A* search, since a good heuristic allows A* to prune away many of the b^d nodes that an uninformed search would expand. Its quality can be expressed in terms of the "effective" branching factor "b"*, which can be determined empirically for a problem instance by measuring the number of nodes generated by expansion, N, and the depth of the solution, then solving formula_9 Good heuristics are those with low effective branching factor (the optimal being "b"* = 1). The time complexity is polynomial when the search space is a tree, there is a single goal state, and the heuristic function "h" meets the following condition: formula_10 where "h"* is the optimal heuristic, the exact cost to get from x to the goal. In other words, the error of h will not grow faster than the logarithm of the "perfect heuristic" "h"* that returns the true distance from x to the goal. The space complexity of A* is roughly the same as that of all other graph search algorithms, as it keeps all generated nodes in memory. In practice, this turns out to be the biggest drawback of the A* search, leading to the development of memory-bounded heuristic searches, such as Iterative deepening A*, memory-bounded A*, and SMA*. Applications. A* is often used for the common pathfinding problem in applications such as video games, but was originally designed as a general graph traversal algorithm. It finds applications in diverse problems, including the problem of parsing using stochastic grammars in NLP. Other cases include an Informational search with online learning. Relations to other algorithms. What sets A* apart from a greedy best-first search algorithm is that it takes the cost/distance already traveled, "g"("n"), into account. 
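The equation defining the effective branching factor above has no closed-form solution in general, but the left-hand side is monotone in "b"*, so it can be solved numerically. A bisection sketch (the bracket and tolerance are arbitrary choices, not part of any standard definition):

```python
def effective_branching_factor(n_generated, depth, tol=1e-9):
    """Solve N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d for b* by bisection.

    Assumes n_generated >= depth, so the root lies in [1, N + 1].
    """
    def total(b):
        return sum(b ** i for i in range(depth + 1))  # 1 + b + ... + b^d

    lo, hi = 1.0, float(n_generated + 1)  # total(N + 1) always overshoots N + 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < n_generated + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For instance, a complete binary tree of depth 3 generates N = 14 nodes, and the solver recovers b* = 2.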
Some common variants of Dijkstra's algorithm can be viewed as a special case of A* where the heuristic formula_11 for all nodes; in turn, both Dijkstra and A* are special cases of dynamic programming. A* itself is a special case of a generalization of branch and bound. A* is similar to beam search except that beam search maintains a limit on the number of paths that it has to explore. Variants. A* can also be adapted to a bidirectional search algorithm. Special care needs to be taken for the stopping criterion. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "O(b^d)" }, { "math_id": 1, "text": "f(n) = g(n) + h(n)" }, { "math_id": 2, "text": "d(x,y)>\\varepsilon>0" }, { "math_id": 3, "text": "\\varepsilon" }, { "math_id": 4, "text": "w(n) = \\begin{cases} 1 - \\frac{d(n)}{N} & d(n) \\le N \\\\ 0 & \\text{otherwise} \\end{cases}" }, { "math_id": 5, "text": "A^*_\\varepsilon" }, { "math_id": 6, "text": "f_\\alpha(n) = (1 + w_\\alpha(n)) f(n)" }, { "math_id": 7, "text": "w_\\alpha(n) = \\begin{cases} \\lambda & g(\\pi(n)) \\le g(\\tilde{n}) \\\\ \\Lambda & \\text{otherwise} \\end{cases}" }, { "math_id": 8, "text": "\\lambda \\le \\Lambda" }, { "math_id": 9, "text": "N + 1 = 1 + b^* + (b^*)^2 + \\dots + (b^*)^d." }, { "math_id": 10, "text": "|h(x) - h^*(x)| = O(\\log h^*(x))" }, { "math_id": 11, "text": "h(n) = 0" } ]
https://en.wikipedia.org/wiki?curid=100558
10056274
Data transformation (statistics)
Application of a function to each point in a data set In statistics, data transformation is the application of a deterministic mathematical function to each point in a data set—that is, each data point "zi" is replaced with the transformed value "yi" = "f"("zi"), where "f" is a function. Transforms are usually applied so that the data appear to more closely meet the assumptions of a statistical inference procedure that is to be applied, or to improve the interpretability or appearance of graphs. Nearly always, the function that is used to transform the data is invertible, and generally is continuous. The transformation is usually applied to a collection of comparable measurements. For example, if we are working with data on people's incomes in some currency unit, it would be common to transform each person's income value by the logarithm function. Motivation. Guidance for how data should be transformed, or whether a transformation should be applied at all, should come from the particular statistical analysis to be performed. For example, a simple way to construct an approximate 95% confidence interval for the population mean is to take the sample mean plus or minus two standard error units. However, the constant factor 2 used here is particular to the normal distribution, and is only applicable if the sample mean varies approximately normally. The central limit theorem states that in many situations, the sample mean does vary normally if the sample size is reasonably large. However, if the population is substantially skewed and the sample size is at most moderate, the approximation provided by the central limit theorem can be poor, and the resulting confidence interval will likely have the wrong coverage probability. Thus, when there is evidence of substantial skew in the data, it is common to transform the data to a symmetric distribution before constructing a confidence interval. 
If desired, the confidence interval for the quantiles (such as the median) can then be transformed back to the original scale using the inverse of the transformation that was applied to the data. Data can also be transformed to make them easier to visualize. For example, suppose we have a scatterplot in which the points are the countries of the world, and the data values being plotted are the land area and population of each country. If the plot is made using untransformed data (e.g. square kilometers for area and the number of people for population), most of the countries would be plotted in a tight cluster of points in the lower left corner of the graph. The few countries with very large areas and/or populations would be spread thinly around most of the graph's area. Simply rescaling units (e.g., to thousand square kilometers, or to millions of people) will not change this. However, following logarithmic transformations of both area and population, the points will be spread more uniformly in the graph. Another reason for applying data transformation is to improve interpretability, even if no formal statistical analysis or visualization is to be performed. For example, suppose we are comparing cars in terms of their fuel economy. These data are usually presented as "kilometers per liter" or "miles per gallon". However, if the goal is to assess how much additional fuel a person would use in one year when driving one car compared to another, it is more natural to work with the data transformed by applying the reciprocal function, yielding liters per kilometer, or gallons per mile. In regression. Data transformation may be used as a remedial measure to make data suitable for modeling with linear regression if the original data violates one or more assumptions of linear regression. 
For example, the simplest linear regression models assume a linear relationship between the expected value of "Y" (the response variable to be predicted) and each independent variable (when the other independent variables are held fixed). If linearity fails to hold, even approximately, it is sometimes possible to transform either the independent or dependent variables in the regression model to improve the linearity. For example, addition of quadratic functions of the original independent variables may lead to a linear relationship with expected value of "Y," resulting in a polynomial regression model, a special case of linear regression. Another assumption of linear regression is homoscedasticity, that is, the variance of errors must be the same regardless of the values of predictors. If this assumption is violated (i.e. if the data is heteroscedastic), it may be possible to find a transformation of "Y" alone, or transformations of both "X" (the predictor variables) and "Y", such that the homoscedasticity assumption (in addition to the linearity assumption) holds true on the transformed variables and linear regression may therefore be applied on these. Yet another application of data transformation is to address the problem of lack of normality in error terms. Univariate normality is not needed for least squares estimates of the regression parameters to be meaningful (see Gauss–Markov theorem). However confidence intervals and hypothesis tests will have better statistical properties if the variables exhibit multivariate normality. Transformations that stabilize the variance of error terms (i.e. those that address heteroscedasticity) often also help make the error terms approximately normal. Examples. Equation: formula_0 Meaning: A unit increase in X is associated with an average of b units increase in Y. 
Equation: formula_1 (From exponentiating both sides of the equation: formula_2) Meaning: A unit increase in X is associated with an average increase of b units in formula_3, or equivalently, Y increases on an average by a multiplicative factor of formula_4. For illustrative purposes, if base-10 logarithm were used instead of natural logarithm in the above transformation and the same symbols ("a" and "b") are used to denote the regression coefficients, then a unit increase in X would lead to a formula_5 times increase in Y on an average. If b were 1, then this implies a 10-fold increase in Y for a unit increase in X. Equation: formula_6 Meaning: A k-fold increase in X is associated with an average of formula_7 units increase in Y. For illustrative purposes, if base-10 logarithm were used instead of natural logarithm in the above transformation and the same symbols ("a" and "b") are used to denote the regression coefficients, then a tenfold increase in X would result in an average increase of formula_8 units in Y. Equation: formula_9 (From exponentiating both sides of the equation: formula_10) Meaning: A k-fold increase in X is associated with a formula_11 multiplicative increase in Y on an average. Thus if X doubles, it would result in Y changing by a multiplicative factor of formula_12. Alternative. Generalized linear models (GLMs) provide a flexible generalization of ordinary linear regression that allows for response variables that have error distribution models other than a normal distribution. GLMs allow the linear model to be related to the response variable via a link function and allow the magnitude of the variance of each measurement to be a function of its predicted value. Common cases. The logarithm transformation and square root transformation are commonly used for positive data, and the multiplicative inverse transformation (reciprocal transformation) can be used for non-zero data. 
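The coefficient interpretations above can be checked numerically. The sketch below evaluates the log–log model Y = e^a · X^b at X and kX and confirms the k^b multiplicative factor (the coefficient values and the baseline X are arbitrary illustrative numbers):

```python
import math

a, b = 0.5, 0.7   # arbitrary illustrative regression coefficients
k = 2.0           # factor by which X increases

def y(x):
    # log(Y) = a + b*log(X)  is equivalent to  Y = e^a * x^b
    return math.exp(a) * x ** b

factor = y(k * 3.0) / y(3.0)   # observed multiplicative change in Y
# factor equals k**b regardless of the baseline X value, since e^a cancels
```

Running the same check at a different baseline (say X = 7 instead of 3) gives the identical factor, which is exactly the "k-fold increase in X scales Y by k^b" statement.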
The "power transformation" is a family of transformations parameterized by a non-negative value λ that includes the logarithm, square root, and multiplicative inverse transformations as special cases. To approach data transformation systematically, it is possible to use statistical estimation techniques to estimate the parameter λ in the power transformation, thereby identifying the transformation that is approximately the most appropriate in a given setting. Since the power transformation family also includes the identity transformation, this approach can also indicate whether it would be best to analyze the data without a transformation. In regression analysis, this approach is known as the "Box–Cox transformation". The reciprocal transformation, some power transformations such as the Yeo–Johnson transformation, and certain other transformations such as applying the inverse hyperbolic sine, can be meaningfully applied to data that include both positive and negative values (the power transformation is invertible over all real numbers if λ is an odd integer). However, when both negative and positive values are observed, it is sometimes common to begin by adding a constant to all values, producing a set of non-negative data to which any power transformation can be applied. A common situation where a data transformation is applied is when a value of interest ranges over several orders of magnitude. Many physical and social phenomena exhibit such behavior — incomes, species populations, galaxy sizes, and rainfall volumes, to name a few. Power transforms, and in particular the logarithm, can often be used to induce symmetry in such data. The logarithm is often favored because it is easy to interpret its result in terms of "fold changes". The logarithm also has a useful effect on ratios. 
If we are comparing positive quantities "X" and "Y" using the ratio "X" / "Y", then if "X" &lt; "Y", the ratio is in the interval (0,1), whereas if "X" &gt; "Y", the ratio is in the half-line (1,∞), where the ratio of 1 corresponds to equality. In an analysis where "X" and "Y" are treated symmetrically, the log-ratio log("X" / "Y") is zero in the case of equality, and it has the property that if "X" is "K" times greater than "Y", the log-ratio is the same distance from zero as in the situation where "Y" is "K" times greater than "X" (the log-ratios are log("K") and −log("K") in these two situations). If values are naturally restricted to be in the range 0 to 1, not including the end-points, then a logit transformation may be appropriate: this yields values in the range (−∞,∞). Transforming to normality. 1. It is not always necessary or desirable to transform a data set to resemble a normal distribution. However, if symmetry or normality are desired, they can often be induced through one of the power transformations. 2. A linguistic power function is distributed according to the Zipf-Mandelbrot law. The distribution is extremely spiky and leptokurtic; this is the reason why researchers had to turn their backs on statistics to solve e.g. authorship attribution problems. Nevertheless, usage of Gaussian statistics is perfectly possible by applying data transformation. 3. To assess whether normality has been achieved after transformation, any of the standard normality tests may be used. A graphical approach is usually more informative than a formal statistical test and hence a normal quantile plot is commonly used to assess the fit of a data set to a normal population. Alternatively, rules of thumb based on the sample skewness and kurtosis have also been proposed. Transforming to a uniform distribution or an arbitrary distribution. 
If we observe a set of "n" values "X"1, ..., "X""n" with no ties (i.e., there are "n" distinct values), we can replace "X""i" with the transformed value "Y""i" = "k", where "k" is defined such that "X""i" is the "k"th largest among all the "X" values. This is called the "rank transform", and creates data with a perfect fit to a uniform distribution. This approach has a population analogue. Using the probability integral transform, if "X" is any random variable, and "F" is the cumulative distribution function of "X", then as long as "F" is invertible, the random variable "U" = "F"("X") follows a uniform distribution on the unit interval [0,1]. From a uniform distribution, we can transform to any distribution with an invertible cumulative distribution function. If "G" is an invertible cumulative distribution function, and "U" is a uniformly distributed random variable, then the random variable "G"−1("U") has "G" as its cumulative distribution function. Putting the two together, if "X" is any random variable, "F" is the invertible cumulative distribution function of "X", and "G" is an invertible cumulative distribution function then the random variable "G"−1("F"("X")) has "G" as its cumulative distribution function. Variance stabilizing transformations. Many types of statistical data exhibit a "variance-on-mean relationship", meaning that the variability is different for data values with different expected values. As an example, in comparing different populations in the world, the variance of income tends to increase with mean income. If we consider a number of small area units (e.g., counties in the United States) and obtain the mean and variance of incomes within each county, it is common that the counties with higher mean income also have higher variances. A variance-stabilizing transformation aims to remove a variance-on-mean relationship, so that the variance becomes constant relative to the mean. 
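The rank transform and the inverse-CDF step described above fit in a few lines of Python. The sketch below uses the standard exponential distribution as the target, whose inverse CDF is G^(-1)(u) = −ln(1 − u); the sample values are made up, and mapping the smallest value to rank 1 (rather than the largest) is an arbitrary but equivalent convention:

```python
import math

def rank_transform(xs):
    """Replace each value by rank/n, an exact fit to a uniform grid on (0, 1]."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for k, i in enumerate(order, start=1):
        ranks[i] = k / len(xs)       # the k-th smallest value gets k/n
    return ranks

def uniform_to_exponential(u):
    """G^{-1}(u) for the standard exponential distribution, G(x) = 1 - e^{-x}."""
    return -math.log(1.0 - u)

data = [5.1, 0.3, 2.2, 9.9, 1.7]     # arbitrary values, no ties
u = rank_transform(data)             # uniform on {1/5, 2/5, ..., 5/5}
# Composing G^{-1} with the empirical ranks carries the data to the target shape;
# the largest rank is exactly 1.0, so it is shrunk slightly to stay in G^{-1}'s domain.
exp_like = [uniform_to_exponential(min(ui, 0.999)) for ui in u]
```

The same two-step recipe works for any target distribution with an invertible CDF: rank to uniform, then push the uniform values through G^(-1).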
Examples of variance-stabilizing transformations are the Fisher transformation for the sample correlation coefficient, the square root transformation or Anscombe transform for Poisson data (count data), the Box–Cox transformation for regression analysis, and the arcsine square root transformation or angular transformation for proportions (binomial data). While commonly used for statistical analysis of proportional data, the arcsine square root transformation is not recommended because logistic regression or a logit transformation are more appropriate for binomial or non-binomial proportions, respectively, especially due to decreased type-II error. Transformations for multivariate data. Univariate functions can be applied point-wise to multivariate data to modify their marginal distributions. It is also possible to modify some attributes of a multivariate distribution using an appropriately constructed transformation. For example, when working with time series and other types of sequential data, it is common to difference the data to improve stationarity. If data generated by a random vector "X" are observed as vectors "X"i of observations with covariance matrix Σ, a linear transformation can be used to decorrelate the data. To do this, the Cholesky decomposition is used to express Σ = "A" "A"'. Then the transformed vector "Y"i = "A"−1"X"i has the identity matrix as its covariance matrix. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "Y = a + bX" }, { "math_id": 1, "text": "\\log(Y) = a + bX" }, { "math_id": 2, "text": "Y = e^a e^{bX}" }, { "math_id": 3, "text": "\\log(Y)" }, { "math_id": 4, "text": "e^{b}\\!" }, { "math_id": 5, "text": "10^{b}" }, { "math_id": 6, "text": "Y = a + b \\log(X)" }, { "math_id": 7, "text": "b \\times \\log(k)" }, { "math_id": 8, "text": "b \\times \\log_{10}(10) = b" }, { "math_id": 9, "text": "\\log(Y) = a + b \\log(X)" }, { "math_id": 10, "text": "Y = e^a X^{b}" }, { "math_id": 11, "text": "k^{b}" }, { "math_id": 12, "text": "2^{b}\\!" } ]
https://en.wikipedia.org/wiki?curid=10056274
100563
System on a chip
Micro-electronic component A system on a chip or system-on-chip (SoC; pl. "SoCs") is an integrated circuit that integrates most or all components of a computer or other electronic system. These components almost always include an on-chip central processing unit (CPU), memory interfaces, input/output devices and interfaces, and secondary storage interfaces, often alongside other components such as radio modems and a graphics processing unit (GPU) – all on a single substrate or microchip. SoCs may contain digital and also analog, mixed-signal and often radio frequency signal processing functions (otherwise it may be considered a discrete application processor). Higher-performance SoCs are often paired with dedicated and physically separate memory and secondary storage (such as LPDDR and eUFS or eMMC, respectively) chips that may be layered on top of the SoC in what is known as a package on package (PoP) configuration, or be placed close to the SoC. Additionally, SoCs may use separate wireless modems (especially WWAN modems). An SoC integrates a microcontroller, microprocessor or perhaps several processor cores with peripherals like a GPU, Wi-Fi and cellular network radio modems or one or more coprocessors. Similar to how a microcontroller integrates a microprocessor with peripheral circuits and memory, an SoC can be seen as integrating a microcontroller with even more advanced peripherals. Compared to a multi-chip architecture, an SoC with equivalent functionality will have reduced power consumption as well as a smaller semiconductor die area. This comes at the cost of reduced replaceability of components. By definition, SoC designs are fully or nearly fully integrated across different component modules. For these reasons, there has been a general trend towards tighter integration of components in the computer hardware industry, in part due to the influence of SoCs and lessons learned from the mobile and embedded computing markets. 
SoCs are very common in the mobile computing (as in smart devices such as smartphones and tablet computers) and edge computing markets. Types. In general, there are three distinguishable types of SoCs: Applications. SoCs can be applied to any computing task. However, they are typically used in mobile computing such as tablets, smartphones, smartwatches and netbooks as well as embedded systems and in applications where previously microcontrollers would be used. Embedded systems. Where previously only microcontrollers could be used, SoCs are rising to prominence in the embedded systems market. Tighter system integration offers better reliability and mean time between failures, and SoCs offer more advanced functionality and computing power than microcontrollers. Applications include AI acceleration, embedded machine vision, data collection, telemetry, vector processing and ambient intelligence. Often embedded SoCs target the internet of things, multimedia, networking, telecommunications and edge computing markets. Some examples of SoCs for embedded applications include: Mobile computing. Mobile computing SoCs always bundle processors, memories, on-chip caches, wireless networking capabilities and often digital camera hardware and firmware. With increasing memory sizes, high-end SoCs will often have no memory or flash storage; instead, the memory and flash memory will be placed right next to, or above (package on package), the SoC. Some examples of mobile computing SoCs include: Personal computers. In 1992, Acorn Computers produced the A3010, A3020 and A4000 range of personal computers with the ARM250 SoC. It combined the original Acorn ARM2 processor with a memory controller (MEMC), video controller (VIDC), and I/O controller (IOC). In previous Acorn ARM-powered computers, these were four discrete chips. 
The ARM7500 chip was their second-generation SoC, based on the ARM700, VIDC20 and IOMD controllers, and was widely licensed in embedded devices such as set-top-boxes, as well as later Acorn personal computers. Tablet and laptop manufacturers have learned lessons from embedded systems and smartphone markets about reduced power consumption, better performance and reliability from tighter integration of hardware and firmware modules, and LTE and other wireless network communications integrated on chip (integrated network interface controllers). Structure. An SoC consists of hardware functional units, including microprocessors that run software code, as well as a communications subsystem to connect, control, direct and interface between these functional modules. Functional components. Processor cores. An SoC must have at least one processor core, but typically an SoC has more than one core. Processor cores can be a microcontroller, microprocessor (μP), digital signal processor (DSP) or application-specific instruction set processor (ASIP) core. ASIPs have instruction sets that are customized for an application domain and designed to be more efficient than general-purpose instructions for a specific type of workload. Multiprocessor SoCs have more than one processor core by definition. The ARM architecture is a common choice for SoC processor cores because some ARM-architecture cores are soft processors specified as IP cores. Memory. SoCs must have semiconductor memory blocks to perform their computation, as do microcontrollers and other embedded systems. Depending on the application, SoC memory may form a memory hierarchy and cache hierarchy. In the mobile computing market, this is common, but in many low-power embedded microcontrollers, this is not necessary. Memory technologies for SoCs include read-only memory (ROM), random-access memory (RAM), Electrically Erasable Programmable ROM (EEPROM) and flash memory. 
As in other computer systems, RAM can be subdivided into relatively faster but more expensive static RAM (SRAM) and the slower but cheaper dynamic RAM (DRAM). When an SoC has a cache hierarchy, SRAM will usually be used to implement processor registers and cores' built-in caches whereas DRAM will be used for main memory. "Main memory" may be specific to a single processor (which can be multi-core) when the SoC has multiple processors; in this case it is distributed memory and must be sent via the on-chip interconnect to be accessed by a different processor. For further discussion of multi-processing memory issues, see cache coherence and memory latency. Interfaces. SoCs include external interfaces, typically for communication protocols. These are often based upon industry standards such as USB, FireWire, Ethernet, USART, SPI, HDMI, I²C, CSI, etc. These interfaces will differ according to the intended application. Wireless networking protocols such as Wi-Fi, Bluetooth, 6LoWPAN and near-field communication may also be supported. When needed, SoCs include analog interfaces including analog-to-digital and digital-to-analog converters, often for signal processing. These may be able to interface with different types of sensors or actuators, including smart transducers. They may interface with application-specific modules or shields. Or they may be internal to the SoC, such as if an analog sensor is built into the SoC and its readings must be converted to digital signals for mathematical processing. Digital signal processors. Digital signal processor (DSP) cores are often included on SoCs. They perform signal processing operations in SoCs for sensors, actuators, data collection, data analysis and multimedia processing. DSP cores typically feature very long instruction word (VLIW) and single instruction, multiple data (SIMD) instruction set architectures, and are therefore highly amenable to exploiting instruction-level parallelism through parallel processing and superscalar execution. 
DSP cores most often feature application-specific instructions, and as such are typically application-specific instruction set processors (ASIPs). Such application-specific instructions correspond to dedicated hardware functional units that compute those instructions. Typical DSP instructions include multiply-accumulate, Fast Fourier transform, fused multiply-add, and convolutions. Other. As with other computer systems, SoCs require timing sources to generate clock signals, control execution of SoC functions and provide time context to signal processing applications of the SoC, if needed. Popular time sources are crystal oscillators and phase-locked loops. SoC peripherals include counter-timers, real-time timers and power-on reset generators. SoCs also include voltage regulators and power management circuits. Intermodule communication. SoCs comprise many execution units. These units must often send data and instructions back and forth. Because of this, all but the most trivial SoCs require communications subsystems. Originally, as with other microcomputer technologies, data bus architectures were used, but recently designs based on sparse intercommunication networks known as networks-on-chip (NoC) have risen to prominence and are forecast to overtake bus architectures for SoC design in the near future. Bus-based communication. Historically, a shared global computer bus typically connected the different components, also called "blocks" of the SoC. A very common bus for SoC communications is ARM's royalty-free Advanced Microcontroller Bus Architecture (AMBA) standard. Direct memory access controllers route data directly between external interfaces and SoC memory, bypassing the CPU or control unit, thereby increasing the data throughput of the SoC. This is similar to some device drivers of peripherals on component-based multi-chip module PC architectures. 
Wire delay is not scalable due to continued miniaturization, system performance does not scale with the number of cores attached, the SoC's operating frequency must decrease with each additional core attached for power to be sustainable, and long wires consume large amounts of electrical power. These challenges are prohibitive to supporting manycore systems on chip. Network on a chip. In the late 2010s, a trend of SoCs implementing communications subsystems in terms of a network-like topology instead of bus-based protocols has emerged. A trend towards more processor cores on SoCs has caused on-chip communication efficiency to become one of the key factors in determining the overall system performance and cost. This has led to the emergence of interconnection networks with router-based packet switching known as "networks on chip" (NoCs) to overcome the bottlenecks of bus-based networks. Networks-on-chip have advantages including destination- and application-specific routing, greater power efficiency and reduced possibility of bus contention. Network-on-chip architectures take inspiration from communication protocols like TCP and the Internet protocol suite for on-chip communication, although they typically have fewer network layers. Optimal network-on-chip network architectures are an ongoing area of much research interest. NoC architectures range from traditional distributed computing network topologies such as torus, hypercube, meshes and tree networks to genetic algorithm scheduling to randomized algorithms such as random walks with branching and randomized time to live (TTL). Many SoC researchers consider NoC architectures to be the future of SoC design because they have been shown to efficiently meet power and throughput needs of SoC designs. Current NoC architectures are two-dimensional. 
2D IC design has limited floorplanning choices as the number of cores in SoCs increases, so as three-dimensional integrated circuits (3DICs) emerge, SoC designers are looking towards building three-dimensional on-chip networks known as 3DNoCs. Design flow. A system on a chip consists of both the hardware and the software controlling the microcontroller, microprocessor or digital signal processor cores, peripherals and interfaces. The design flow for an SoC aims to develop this hardware and software at the same time, also known as architectural co-design. The design flow must also take into account optimizations and constraints. Most SoCs are developed from pre-qualified hardware component IP core specifications for the hardware elements and execution units, collectively "blocks", described above, together with software device drivers that may control their operation. Of particular importance are the protocol stacks that drive industry-standard interfaces like USB. The hardware blocks are put together using computer-aided design tools, specifically electronic design automation tools; the software modules are integrated using a software integrated development environment. SoC components are also often designed in high-level programming languages such as C++, MATLAB or SystemC and converted to RTL designs through high-level synthesis (HLS) tools such as C to HDL or flow to HDL. HLS products called "algorithmic synthesis" allow designers to use C++ to model and synthesize system, circuit, software and verification levels all in one high level language commonly known to computer engineers in a manner independent of time scales, which are typically specified in HDL. Other components can remain software and be compiled and embedded onto soft-core processors included in the SoC as modules in HDL as IP cores. 
Once the architecture of the SoC has been defined, any new hardware elements are written in an abstract hardware description language termed register transfer level (RTL) which defines the circuit behavior, or synthesized into RTL from a high level language through high-level synthesis. These elements are connected together in a hardware description language to create the full SoC design. The logic specified to connect these components and convert between possibly different interfaces provided by different vendors is called glue logic. Design verification. Chips are verified for logical correctness before being sent to a semiconductor foundry. This process is called functional verification and it accounts for a significant portion of the time and energy expended in the chip design life cycle, often quoted as 70%. With the growing complexity of chips, hardware verification languages like SystemVerilog, SystemC, e, and OpenVera are being used. Bugs found in the verification stage are reported to the designer. Traditionally, engineers have employed simulation acceleration, emulation or prototyping on reprogrammable hardware to verify and debug hardware and software for SoC designs prior to the finalization of the design, known as tape-out. Field-programmable gate arrays (FPGAs) are favored for prototyping SoCs because FPGA prototypes are reprogrammable, allow debugging and are more flexible than application-specific integrated circuits (ASICs). With high capacity and fast compilation time, simulation acceleration and emulation are powerful technologies that provide wide visibility into systems. Both technologies, however, operate slowly, on the order of MHz, which may be significantly slower – up to 100 times slower – than the SoC's operating frequency. Acceleration and emulation boxes are also very large and expensive at over US$1 million. 
FPGA prototypes, in contrast, use FPGAs directly to enable engineers to validate and test at, or close to, a system's full operating frequency with real-world stimuli. Tools such as Certus are used to insert probes in the FPGA RTL that make signals available for observation. This is used to debug hardware, firmware and software interactions across multiple FPGAs with capabilities similar to a logic analyzer. In parallel, the hardware elements are grouped and passed through a process of logic synthesis, during which performance constraints, such as operational frequency and expected signal delays, are applied. This generates an output known as a netlist describing the design as a physical circuit and its interconnections. These netlists are combined with the glue logic connecting the components to produce the schematic description of the SoC as a circuit which can be printed onto a chip. This process is known as place and route and precedes tape-out in the event that the SoCs are produced as application-specific integrated circuits (ASIC). Optimization goals. SoCs must optimize power use, area on die, communication, positioning for locality between modular units and other factors. Optimization is necessarily a design goal of SoCs. If optimization was not necessary, the engineers would use a multi-chip module architecture without accounting for the area use, power consumption or performance of the system to the same extent. Common optimization targets for SoC designs follow, with explanations of each. In general, optimizing any of these quantities may be a hard combinatorial optimization problem, and can indeed be NP-hard fairly easily. Therefore, sophisticated optimization algorithms are often required and it may be practical to use approximation algorithms or heuristics in some cases. Additionally, most SoC designs contain multiple variables to optimize simultaneously, so Pareto efficient solutions are sought after in SoC design. 
Oftentimes the goals of optimizing some of these quantities are directly at odds, further adding complexity to design optimization of SoCs and introducing trade-offs in system design. For broader coverage of trade-offs and requirements analysis, see requirements engineering. Targets. Power consumption. SoCs are optimized to minimize the electrical power used to perform the SoC's functions. Most SoCs must use low power. SoC systems often require long battery life (such as smartphones), can potentially spend months or years without a power source while needing to maintain autonomous function, and often are limited in power use by a high number of embedded SoCs being networked together in an area. Additionally, energy costs can be high and conserving energy will reduce the total cost of ownership of the SoC. Finally, waste heat from high energy consumption can damage other circuit components if too much heat is dissipated, giving another pragmatic reason to conserve energy. The amount of energy used in a circuit is the integral of power consumed with respect to time, and the average rate of power consumption is the product of current and voltage. Equivalently, by Ohm's law, power is current squared times resistance or voltage squared divided by resistance: formula_0 SoCs are frequently embedded in portable devices such as smartphones, GPS navigation devices, digital watches (including smartwatches) and netbooks. Customers want long battery lives for mobile computing devices, another reason that power consumption must be minimized in SoCs. Multimedia applications are often executed on these devices, including video games, video streaming, image processing; all of which have grown in computational complexity in recent years with user demands and expectations for higher-quality multimedia. 
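As a quick aside, the chain of power identities quoted above follows directly from Ohm's law and can be checked numerically (the values here are purely illustrative):

```python
# For a resistive load, V = I * R (Ohm's law), so the three forms
# P = I*V, P = V**2 / R and P = I**2 * R must agree.
current = 0.5                     # amperes (illustrative)
resistance = 8.0                  # ohms (illustrative)
voltage = current * resistance    # volts, by Ohm's law

p_iv = current * voltage          # P = I * V
p_v2r = voltage ** 2 / resistance # P = V^2 / R
p_i2r = current ** 2 * resistance # P = I^2 * R
print(p_iv, p_v2r, p_i2r)         # all three give 2.0 watts
```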
Computation is more demanding as expectations move towards 3D video at high resolution with multiple standards, so SoCs performing multimedia tasks must be computationally capable platforms while being low-power enough to run off a standard mobile battery. Performance per watt. SoCs are optimized to maximize power efficiency in performance per watt: maximize the performance of the SoC given a budget of power usage. Many applications such as edge computing, distributed processing and ambient intelligence require a certain level of computational performance, but power is limited in most SoC environments. Waste heat. SoC designs are optimized to minimize waste heat output on the chip. As with other integrated circuits, heat generated due to high power density is the bottleneck to further miniaturization of components. The power densities of high speed integrated circuits, particularly microprocessors and including SoCs, have become highly uneven. Too much waste heat can damage circuits and erode reliability of the circuit over time. High temperatures and thermal stress negatively impact reliability, causing stress migration, decreased mean time between failures, electromigration, wire bonding failures, metastability and other performance degradation of the SoC over time. In particular, most SoCs are in a small physical area or volume and therefore the effects of waste heat are compounded because there is little room for it to diffuse out of the system. Because of high transistor counts on modern devices, oftentimes a layout of sufficient throughput and high transistor density is physically realizable from fabrication processes but would result in unacceptably high amounts of heat in the circuit's volume. These thermal effects force SoC and other chip designers to apply conservative design margins, creating less performant devices to mitigate the risk of catastrophic failure. 
Due to increased transistor densities as length scales get smaller, each process generation produces more heat output than the last. Compounding this problem, SoC architectures are usually heterogeneous, creating spatially inhomogeneous heat fluxes, which cannot be effectively mitigated by uniform passive cooling. Throughput. SoCs are optimized to maximize computational and communications throughput. Latency. SoCs are optimized to minimize latency for some or all of their functions. This can be accomplished by laying out elements with proper proximity and locality to each other to minimize the interconnection delays and maximize the speed at which data is communicated between modules, functional units and memories. In general, optimizing to minimize latency is an NP-complete problem equivalent to the Boolean satisfiability problem. For tasks running on processor cores, latency and throughput can be improved with task scheduling. Some tasks run in application-specific hardware units, however, and even task scheduling may not be sufficient to optimize all software-based tasks to meet timing and throughput constraints. Methodologies. Systems on chip are modeled with standard hardware verification and validation techniques, but additional techniques are used to model and optimize SoC design alternatives to make the system optimal with respect to multiple-criteria decision analysis on the above optimization targets. Task scheduling. Task scheduling is an important activity in any computer system with multiple processes or threads sharing a single processor core. It is important to reduce latency and increase throughput for embedded software running on an SoC's processor cores. Not every important computing activity in an SoC is performed in software running on on-chip processors, but scheduling can drastically improve performance of software-based tasks and other tasks involving shared resources. 
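A toy model can illustrate why scheduling order matters for latency when tasks share a single core (this sketch is generic and not tied to any particular SoC scheduler): running shorter tasks first reduces the average time tasks spend waiting.

```python
def avg_waiting_time(bursts):
    """Average waiting time for tasks run back-to-back (non-preemptive)
    on one core, in the order given. Each entry is a task's run time."""
    total_wait, clock = 0.0, 0.0
    for burst in bursts:
        total_wait += clock  # this task waited until the clock reached it
        clock += burst
    return total_wait / len(bursts)

bursts = [5.0, 1.0, 2.0]                 # illustrative task run times
fcfs = avg_waiting_time(bursts)          # first-come, first-served order
sjf = avg_waiting_time(sorted(bursts))   # shortest-job-first order
print(fcfs, sjf)  # shortest-job-first gives the lower average wait
```

The same workload waits 11/3 time units on average under the arrival order but only 4/3 under shortest-job-first, illustrating how scheduling policy alone changes latency with no hardware change.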
Software running on SoCs often schedules tasks according to network scheduling and randomized scheduling algorithms. Pipelining. Hardware and software tasks are often pipelined in processor design. Pipelining is an important principle for speedup in computer architecture. Pipelines are frequently used in GPUs (graphics pipeline) and RISC processors (evolutions of the classic RISC pipeline), but are also applied to application-specific tasks such as digital signal processing and multimedia manipulations in the context of SoCs. Probabilistic modeling. SoCs are often analyzed through probabilistic models, queueing networks, and Markov chains. For instance, Little's law allows SoC states and NoC buffers to be modeled as arrival processes and analyzed through Poisson random variables and Poisson processes. Markov chains. SoCs are often modeled with Markov chains, both discrete-time and continuous-time variants. Markov chain modeling allows asymptotic analysis of the SoC's steady state distribution of power, heat, latency and other factors to allow design decisions to be optimized for the common case. Fabrication. SoC chips are typically fabricated using metal–oxide–semiconductor (MOS) technology. The netlists described above are used as the basis for the physical design (place and route) flow to convert the designers' intent into the design of the SoC. Throughout this conversion process, the design is analyzed with static timing modeling, simulation and other tools to ensure that it meets the specified operational parameters such as frequency, power consumption and dissipation, functional integrity (as described in the register transfer level code) and electrical integrity. When all known bugs have been rectified and these have been re-verified and all physical design checks are done, the physical design files describing each layer of the chip are sent to the foundry's mask shop where a full set of glass lithographic masks will be etched. 
These are sent to a wafer fabrication plant to create the SoC dice before packaging and testing. SoCs can be fabricated by several technologies, including: ASICs consume less power and are faster than FPGAs but cannot be reprogrammed and are expensive to manufacture. FPGA designs are more suitable for lower volume designs, but after enough units of production ASICs reduce the total cost of ownership. SoC designs consume less power and have a lower cost and higher reliability than the multi-chip systems that they replace. With fewer packages in the system, assembly costs are reduced as well. However, like most very-large-scale integration (VLSI) designs, the total cost is higher for one large chip than for the same functionality distributed over several smaller chips, because of lower yields and higher non-recurring engineering costs. When it is not feasible to construct an SoC for a particular application, an alternative is a system in package (SiP) comprising a number of chips in a single package. When produced in large volumes, SoC is more cost-effective than SiP because its packaging is simpler. Another reason SiP may be preferred is waste heat may be too high in a SoC for a given purpose because functional components are too close together, and in an SiP heat will dissipate better from different functional modules since they are physically further apart. Examples. Some examples of systems on a chip are: Benchmarks. SoC research and development often compares many options. Benchmarks, such as COSMIC, are developed to help such evaluations. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "P = IV = \\frac{V^2}{R} = {I^2}{R}" } ]
https://en.wikipedia.org/wiki?curid=100563
1005746
Supercritical flow
Flow velocity larger than wave velocity A supercritical flow is a flow whose velocity is larger than the wave velocity. The analogous condition in gas dynamics is supersonic speed. According to the website Civil Engineering Terms, supercritical flow is defined as follows: "The flow at which depth of the channel is less than critical depth, velocity of flow is greater than critical velocity and slope of the channel is also greater than the critical slope is known as supercritical flow." Information travels at the wave velocity. This is the velocity at which waves travel outwards from a pebble thrown into a lake. The flow velocity is the velocity at which a leaf in the flow travels. If a pebble is thrown into a supercritical flow then the ripples will all move downstream, whereas in a subcritical flow some would travel upstream and some would travel downstream. It is only in supercritical flows that hydraulic jumps (bores) can occur. In fluid dynamics, the change from one behaviour to the other is often described by a dimensionless quantity, where the transition occurs whenever this number becomes less or more than one. One of these numbers is the Froude number: formula_0 where "U" is the characteristic flow velocity, "g" is the gravitational acceleration and "h" is the characteristic flow depth. If formula_1, we call the flow subcritical; if formula_2, we call the flow supercritical. If formula_3, it is critical. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Chanson, Hubert (1999). "The Hydraulics of Open Channel Flow: An Introduction". Physical Modelling of Hydraulics.
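The Froude-number criterion described above reduces to a one-line computation; a minimal Python sketch (variable names are illustrative):

```python
import math

def froude(U, h, g=9.81):
    """Froude number Fr = U / sqrt(g*h) for flow velocity U (m/s)
    and flow depth h (m); g is the gravitational acceleration."""
    return U / math.sqrt(g * h)

def classify(U, h):
    """Classify open-channel flow by its Froude number."""
    Fr = froude(U, h)
    if Fr < 1:
        return "subcritical"
    if Fr > 1:
        return "supercritical"
    return "critical"

# A fast, shallow flow is supercritical; a slow, deep flow is subcritical.
print(classify(3.0, 0.2))  # supercritical
print(classify(0.5, 2.0))  # subcritical
```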
[ { "math_id": 0, "text": "Fr \\ \\stackrel{\\mathrm{def}}{=}\\ \\frac{U}{\\sqrt{gh}}," }, { "math_id": 1, "text": " Fr < 1 " }, { "math_id": 2, "text": " Fr > 1 " }, { "math_id": 3, "text": " Fr \\approx 1 " } ]
https://en.wikipedia.org/wiki?curid=1005746
10058495
Admittance parameters
Properties of an electrical network in terms of a matrix of ratios of currents to voltages Admittance parameters or Y-parameters (the elements of an admittance matrix or Y-matrix) are properties used in many areas of electrical engineering, such as power, electronics, and telecommunications. These parameters are used to describe the electrical behavior of linear electrical networks. They are also used to describe the small-signal (linearized) response of non-linear networks. Y parameters are also known as short circuited admittance parameters. They are members of a family of similar parameters used in electronic engineering, other examples being: S-parameters, Z-parameters, H-parameters, T-parameters or ABCD-parameters. The Y-parameter matrix. A Y-parameter matrix describes the behaviour of any linear electrical network that can be regarded as a black box with a number of ports. A "port" in this context is a pair of electrical terminals carrying equal and opposite currents into and out of the network, and having a particular voltage between them. The Y-matrix gives no information about the behaviour of the network when the currents at any port are not balanced in this way (should this be possible), nor does it give any information about the voltage between terminals not belonging to the same port. Typically, it is intended that each external connection to the network is between the terminals of just one port, so that these limitations are appropriate. For a generic multi-port network definition, it is assumed that each of the ports is allocated an integer n ranging from 1 to N, where N is the total number of ports. For port n, the associated Y-parameter definition is in terms of the port voltage and port current, Vn and In respectively. 
For all ports the currents may be defined in terms of the Y-parameter matrix and the voltages by the following matrix equation: formula_0 where Y is an "N" × "N" matrix the elements of which can be indexed using conventional matrix notation. In general the elements of the Y-parameter matrix are complex numbers and functions of frequency. For a one-port network, the Y-matrix reduces to a single element, being the ordinary admittance measured between the two terminals. Two-port networks. The Y-parameter matrix for the two-port network is probably the most common. In this case the relationship between the port voltages, port currents and the Y-parameter matrix is given by: formula_1. where formula_2 For the general case of an n-port network, formula_3 Admittance relations. The input admittance of a two-port network is given by: formula_4 where YL is the admittance of the load connected to port two. Similarly, the output admittance is given by: formula_5 where YS is the admittance of the source connected to port one. Relation to S-parameters. The Y-parameters of a network are related to its S-parameters by formula_6 and formula_7 where IN is the identity matrix, formula_8 is a diagonal matrix having the square root of the characteristic admittance (the reciprocal of the characteristic impedance) at each port as its non-zero elements, formula_9 and formula_10 is the corresponding diagonal matrix of square roots of characteristic impedances. In these expressions the matrices represented by the bracketed factors commute and so, as shown above, may be written in either order. Two port. In the special case of a two-port network, with the same and real characteristic admittance formula_11 at each port, the above expressions reduce to formula_12 where formula_13 The above expressions will generally use complex numbers for formula_14 and formula_15. 
Note that the value of formula_16 can become 0 for specific values of formula_14 so the division by formula_16 in the calculations of formula_15 may lead to a division by 0. The two-port S-parameters may also be obtained from the equivalent two-port Y-parameters by means of the following expressions. formula_17 where formula_18 and formula_19 is the characteristic impedance at each port (assumed the same for the two ports). Relation to Z-parameters. Conversion from Z-parameters to Y-parameters is much simpler, as the Y-parameter matrix is just the inverse of the Z-parameter matrix. The following expressions show the applicable relations: formula_20 where formula_21 In this case formula_22 is the determinant of the Z-parameter matrix. Vice versa the Y-parameters can be used to determine the Z-parameters, essentially using the same expressions since formula_23 and formula_24
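Since the Y-matrix is simply the inverse of the Z-matrix, the two-port conversion above is easy to check numerically. A minimal sketch (the impedance values are arbitrary, chosen only for illustration; exact rational arithmetic avoids rounding):

```python
from fractions import Fraction

def z_to_y(z11, z12, z21, z22):
    """Convert two-port Z-parameters to Y-parameters via Y = Z^-1
    (explicit 2x2 inverse, matching the formulas in the text)."""
    det = z11 * z22 - z12 * z21      # |Z|, determinant of the Z-matrix
    return (z22 / det, -z12 / det,   # Y11, Y12
            -z21 / det, z11 / det)   # Y21, Y22

# Arbitrary illustrative Z-matrix (ohms); |Z| = 8*5 - 2*3 = 34
z11, z12, z21, z22 = Fraction(8), Fraction(2), Fraction(3), Fraction(5)
y11, y12, y21, y22 = z_to_y(z11, z12, z21, z22)

# Round trip: applying the same conversion to Y recovers Z, since Z = Y^-1
assert z_to_y(y11, y12, y21, y22) == (z11, z12, z21, z22)
print(y11, y12, y21, y22)  # 5/34 -1/17 -3/34 4/17
```

The same `z_to_y` function performs both directions of the conversion, because the defining expressions are symmetric in Z and Y.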
[ { "math_id": 0, "text": "I = Y V\\," }, { "math_id": 1, "text": "\\begin{pmatrix}I_1 \\\\ I_2\\end{pmatrix} = \\begin{pmatrix} Y_{11} & Y_{12} \\\\ Y_{21} & Y_{22} \\end{pmatrix}\\begin{pmatrix}V_1 \\\\ V_2\\end{pmatrix}" }, { "math_id": 2, "text": "\\begin{align} \nY_{11} &= {I_1 \\over V_1 } \\bigg|_{V_2 = 0} \\qquad Y_{12} = {I_1 \\over V_2 } \\bigg|_{V_1 = 0} \\\\[8pt]\nY_{21} &= {I_2 \\over V_1 } \\bigg|_{V_2 = 0} \\qquad Y_{22} = {I_2 \\over V_2 } \\bigg|_{V_1 = 0}\n\\end{align}" }, { "math_id": 3, "text": "Y_{nm} = {I_n \\over V_m } \\bigg|_{V_k = 0 \\text{ for } k \\ne m}" }, { "math_id": 4, "text": "Y_{in} = Y_{11} - \\frac{Y_{12}Y_{21}}{Y_{22}+Y_L}" }, { "math_id": 5, "text": "Y_{out} = Y_{22} - \\frac{Y_{12}Y_{21}}{Y_{11}+Y_S}" }, { "math_id": 6, "text": " \\begin{align}\nY &= \\sqrt{y} (I_N - S) (I_N + S)^{-1} \\sqrt{y} \\\\\n &= \\sqrt{y} (I_N + S)^{-1} (I_N - S) \\sqrt{y} \\\\\n\\end{align} " }, { "math_id": 7, "text": " \\begin{align}\nS &= (I_N - \\sqrt{z}Y\\sqrt{z}) (I_N + \\sqrt{z}Y\\sqrt{z})^{-1} \\\\\n &= (I_N + \\sqrt{z}Y\\sqrt{z})^{-1} (I_N - \\sqrt{z}Y\\sqrt{z}) \\\\\n\\end{align} " }, { "math_id": 8, "text": "\\sqrt{y}" }, { "math_id": 9, "text": "\\sqrt{y} = \\begin{pmatrix}\n \\sqrt{y_{01}} & \\\\\n & \\sqrt{y_{02}} \\\\\n & & \\ddots \\\\\n & & & \\sqrt{y_{0N}}\n\\end{pmatrix}\n" }, { "math_id": 10, "text": "\\sqrt{z} = (\\sqrt{y})^{-1}" }, { "math_id": 11, "text": "y_{01} = y_{02} = Y_0" }, { "math_id": 12, "text": "\\begin{align}\nY_{11} &= {(1 - S_{11}) (1 + S_{22}) + S_{12} S_{21} \\over \\Delta_S} Y_0 \\\\\nY_{12} &= {-2 S_{12} \\over \\Delta_S} Y_0 \\\\[4pt]\nY_{21} &= {-2 S_{21} \\over \\Delta_S} Y_0 \\\\[4pt]\nY_{22} &= {(1 + S_{11}) (1 - S_{22}) + S_{12} S_{21} \\over \\Delta_S} Y_0\n\\end{align}" }, { "math_id": 13, "text": "\\Delta_S = (1 + S_{11}) (1 + S_{22}) - S_{12} S_{21} ." 
}, { "math_id": 14, "text": "S_{ij}" }, { "math_id": 15, "text": "Y_{ij}" }, { "math_id": 16, "text": "\\Delta" }, { "math_id": 17, "text": "\\begin{align}\nS_{11} &= {(1 - Z_0 Y_{11}) (1 + Z_0 Y_{22}) + Z^2_0 Y_{12} Y_{21} \\over \\Delta} \\\\\nS_{12} &= {-2 Z_0 Y_{12} \\over \\Delta} \\\\[4pt]\nS_{21} &= {-2 Z_0 Y_{21} \\over \\Delta} \\\\[4pt]\nS_{22} &= {(1 + Z_0 Y_{11}) (1 - Z_0 Y_{22}) + Z^2_0 Y_{12} Y_{21} \\over \\Delta} \n\\end{align}" }, { "math_id": 18, "text": "\\Delta = (1 + Z_0 Y_{11}) (1 + Z_0 Y_{22}) - Z^2_0 Y_{12} Y_{21} \\," }, { "math_id": 19, "text": "Z_0" }, { "math_id": 20, "text": "\\begin{align}\nY_{11} &= {Z_{22} \\over |Z|} \\\\[4pt]\nY_{12} &= {-Z_{12} \\over |Z|} \\\\[4pt]\nY_{21} &= {-Z_{21} \\over |Z|} \\\\[4pt]\nY_{22} &= {Z_{11} \\over |Z|} \n\\end{align}" }, { "math_id": 21, "text": "|Z| = Z_{11} Z_{22} - Z_{12} Z_{21} \\," }, { "math_id": 22, "text": "|Z|" }, { "math_id": 23, "text": "Y = Z^{-1} \\," }, { "math_id": 24, "text": "Z = Y^{-1} ." } ]
https://en.wikipedia.org/wiki?curid=10058495
1005868
Tachymeter (watch)
Scale sometimes inscribed around the rim of an analog watch A tachymeter (pronounced ) is a scale sometimes inscribed around the rim of an analog watch with a chronograph. It can be used to conveniently compute the frequency in inverse-hours of an event of a known second-defined period, such as speed (distance over hours) based on travel time (distance over speed), or measure distance based on speed. The spacings between the marks on the tachymeter dial are therefore proportional to 1⁄"t", where "t" is the elapsed time. The function performed by a tachymeter is independent of the unit of distance (e.g. statute miles, nautical miles, kilometres, metres etc.) as long as the same unit of length is used for all calculations. It can also be used to measure the frequency of any regular event in occurrences per hour, such as the units output by an industrial process. A tachymeter is simply a means of converting "elapsed time" (in seconds per unit) to "rate" (in units per hour). Measuring speed. To use a tachymeter-equipped watch for measuring speed, the chronograph is started at a starting marker of a known distance. At the next marker, the point on the scale adjacent to the second hand indicates the speed (in distance between markers per hour) of travel between the two. The typical tachymeter scale on a watch converts between the number of seconds it takes for an event to happen and the number of times that event will occur in one hour. The formula used to create this type of tachymeter scale is formula_0 where T is the tachymeter scale value; t is the time in seconds that it takes for the event to occur; and 3600 is the number of seconds in an hour. As a sample calculation, if it takes 35 seconds to travel one mile, then the average speed is 103 miles/hour. On the watch, 35 seconds gives scale value 103. Similarly, if one kilometre takes 35 seconds then the average speed would be 103 km/hour. 
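The scale relation "T" = 3600/"t" is straightforward to compute directly; a minimal sketch reproducing the sample calculations above:

```python
def tachymeter(t_seconds):
    """Convert elapsed seconds per unit of distance (or per event)
    to units (or events) per hour, as a tachymeter scale does."""
    return 3600 / t_seconds

print(round(tachymeter(35)))  # 103 (units per hour), the sample above
print(round(tachymeter(20)))  # 180
print(round(tachymeter(45)))  # 80
```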
Note that the tachymeter scale only calculates the average speed. As a second example, if it takes 20 seconds to travel one unit of distance, then the average speed is 180 units of distance per hour (actual tachymeter scales may vary slightly). For events that happen either very quickly or slowly, one can adjust the sixty-second tachymeter scale commonly found on watches. Smaller fractional units can be used for slower objects, but the same X/hour function remains constant. The scale on a watch is only valid for things that happen in 60 seconds or fewer, and the scale is also difficult to resolve for events that take fewer than 10 seconds or so to occur. As an example, if it takes 100 seconds to eat an apple, cutting that number in half allows one to say that it takes 50 seconds to eat half an apple. Using the tachymeter scale one can calculate that 72 half apples (36 whole apples) could be eaten in one hour. Some uncommon watches have "wraparound" or "scroll" scales, which extend the readings to lower speeds, typically 45 units. Measuring distance. A tachymeter-equipped watch can be used to measure distance by timing the travel over the distance while the speed is held constant. The tachymeter scale is rotated to align with the second hand at the start of the length to be measured. When the second hand reaches the point on the scale where the speed indicated equals the speed of the vehicle, one unit of distance (miles if speed is miles per hour, kilometres if kilometres per hour, etc.) has been covered. For example, if you travel at a constant 80 mph (or at 80 km/h), then the distance travelled while the second hand sweeps to "80" (45 seconds) will be exactly 1 mile (or 1 kilometre at 80 km/h). Rotating scale. Some tachymeter scales are on a rotating, indexed bezel. 
This allows two additional modes of use: The tachymeter bezel can be aligned with a free running second hand, and, more subtly, can be used to find the average speed over longer times/distances. Set the rotary bezel index to the position of the minute hand and note the current mileage/distance. Glance at the position of the minute hand on the tachymeter scale 60 units of distance later, and the "average" speed will be indicated. A little mental math allows interim averages, easiest at 1/4 (15 units) and other integer values. Alternatively, instead of using the minute hand, align the bezel index to the second hand; after one unit of distance has passed, the position of the second hand will indicate the average speed. References.
[ { "math_id": 0, "text": "T = \\frac{3600}{t}" } ]
https://en.wikipedia.org/wiki?curid=1005868
10058792
Backlash (engineering)
Clearance between mating components In mechanical engineering, backlash, sometimes called lash, play, or slop, is a clearance or lost motion in a mechanism caused by gaps between the parts. It can be defined as "the maximum distance or angle through which any part of a mechanical system may be moved in one direction without applying appreciable force or motion to the next part in mechanical sequence."p. 1-8 An example, in the context of gears and gear trains, is the amount of clearance between mated gear teeth. It can be seen when the direction of movement is reversed and the slack or lost motion is taken up before the reversal of motion is complete. It can be heard from the railway couplings when a train reverses direction. Another example is in a valve train with mechanical tappets, where a certain range of lash is necessary for the valves to work properly. Depending on the application, backlash may or may not be desirable. Some amount of backlash is unavoidable in nearly all reversing mechanical couplings, although its effects can be negated or compensated for. In many applications, the theoretical ideal would be zero backlash, but in actual practice some backlash must be allowed to prevent jamming. Reasons for specifying a requirement for backlash include allowing for lubrication, manufacturing errors, deflection under load, and thermal expansion. A principal cause of undesired backlash is wear. Gears. Factors affecting the amount of backlash required in a gear train include errors in profile, pitch, tooth thickness, helix angle and center distance, and run-out. The greater the accuracy the smaller the backlash needed. Backlash is most commonly created by cutting the teeth deeper into the gears than the ideal depth. Another way of introducing backlash is by increasing the center distances between the gears. 
Backlash due to tooth thickness changes is typically measured along the pitch circle and is defined by: formula_0 where "b"t is the backlash due to tooth thickness changes, "t"i is the tooth thickness on the ideal gear, and "t"a is the actual tooth thickness. Backlash, measured on the pitch circle, due to operating center modifications is defined by: formula_1 where "b"c is the backlash due to the change in center distance, Δ"c" is the change in center distance, and "φ" is the pressure angle. Standard practice is to make allowance for half the backlash in the tooth thickness of each gear. However, if the pinion (the smaller of the two gears) is significantly smaller than the gear it is meshing with then it is common practice to account for all of the backlash in the larger gear. This maintains as much strength as possible in the pinion's teeth. The amount of additional material removed when making the gears depends on the pressure angle of the teeth. For a 14.5° pressure angle the extra distance the cutting tool is moved in equals the amount of backlash desired. For a 20° pressure angle the distance equals 0.73 times the amount of backlash desired. As a rule of thumb the average backlash is defined as 0.04 divided by the diametral pitch; the minimum being 0.03 divided by the diametral pitch and the maximum 0.05 divided by the diametral pitch. In metric units, simply multiply the values by the module: formula_2 In a gear train, backlash is cumulative. When a gear-train is reversed the driving gear is turned a short distance, equal to the total of all the backlashes, before the final driven gear begins to rotate. At low power outputs, backlash results in inaccurate calculation from the small errors introduced at each change of direction; at large power outputs backlash sends shocks through the whole system and can damage teeth and other components. 
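The relations above are easy to evaluate numerically. A minimal sketch (the module and center-distance values are illustrative, not from the text): the rule-of-thumb backlash for a module-2 gear pair, and the backlash introduced by a center-distance change at a 20° pressure angle:

```python
import math

def backlash_from_center_change(delta_c, pressure_angle_deg):
    """b_c = 2 (dc) tan(phi): backlash due to a change in
    operating center distance, same length unit as delta_c."""
    return 2 * delta_c * math.tan(math.radians(pressure_angle_deg))

def rule_of_thumb_backlash(module):
    """Min/average/max backlash for metric gears: 0.03m, 0.04m, 0.05m."""
    return 0.03 * module, 0.04 * module, 0.05 * module

# Illustrative values: module 2 mm gears; centers moved apart by 0.1 mm at 20 deg
print(rule_of_thumb_backlash(2.0))             # (0.06, 0.08, 0.1) mm
print(backlash_from_center_change(0.1, 20.0))  # ~0.0728 mm
```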
Specialized gear designs allow this. One of the more common designs splits the gear into two gears, each half the thickness of the original. One half of the gear is fixed to its shaft while the other half of the gear is allowed to turn on the shaft, but pre-loaded in rotation by small coil springs that rotate the free gear relative to the fixed gear. In this way, the spring compression rotates the free gear until all of the backlash in the system has been taken out; the teeth of the fixed gear press against one side of the teeth of the pinion while the teeth of the free gear press against the other side of the teeth on the pinion. Loads smaller than the force of the springs do not compress the springs and with no gaps between the teeth to be taken up, backlash is eliminated. Leadscrews where positioning and power are both important. Another area where backlash matters is in leadscrews. Again, as with the gear train example, the culprit is lost motion when reversing a mechanism that is supposed to transmit motion accurately. Instead of gear teeth, the context is screw threads. The linear sliding axes (machine slides) of machine tools are an example application. Most machine slides for many decades, and many even today, have been simple (but accurate) cast-iron linear bearing surfaces, such as a dovetail- or box-slide, with an Acme leadscrew drive. With just a simple nut, some backlash is inevitable. On manual (non-CNC) machine tools, a machinist's means for compensating for backlash is to approach all precise positions using the same direction of travel, that is, if they have been dialing left, and next want to move to a rightward point, they will move rightward "past" it, then dial leftward back to it; the setups, tool approaches, and toolpaths must in that case be designed within this constraint. 
The next-more complex method than the simple nut is a split nut, whose halves can be adjusted, and locked with screws, so that one side rides against the leftward-facing thread faces and the other side rides against the rightward-facing faces. Notice the analogy here with the radio dial example using split gears, where the split halves are pushed in opposing directions. Unlike in the radio dial example, the spring tension idea is not useful here, because machine tools taking a cut put too much force against the screw. Any spring light enough to allow slide movement at all would allow cutter chatter at best and slide movement at worst. These screw-adjusted split-nut-on-an-Acme-leadscrew designs cannot eliminate "all" backlash on a machine slide unless they are adjusted so tight that the travel starts to bind. Therefore, this idea can't totally obviate the always-approach-from-the-same-direction concept; nevertheless, backlash can be held to a small amount (1 or 2 thousandths of an inch), which is more convenient, and in some non-precise work is enough to allow one to "ignore" the backlash, i.e., to design as if there were none. 
This allows them to do 3D contouring with a ball-nosed endmill, for example, where the endmill travels around in many directions with constant rigidity and without delays. In mechanical computers a more complex solution is required, namely a frontlash gearbox. This works by turning slightly faster when the direction is reversed to 'use up' the backlash slack. Some motion controllers include backlash compensation. Compensation may be achieved by simply adding extra compensating motion (as described earlier) or by sensing the load's position in a closed loop control scheme. The dynamic response of backlash itself, essentially a delay, makes the position loop less stable and thus more prone to oscillation. Minimum backlash. Minimum backlash is calculated as the minimum transverse backlash at the operating pitch circle allowable when the gear teeth with the greatest allowable functional tooth thickness are in mesh with the pinion teeth with their greatest allowable functional tooth thickness, at the smallest allowable center distance, under static conditions. Backlash variation is defined as the difference between the maximum and minimum backlash occurring in a whole revolution of the larger of a pair of mating gears. Applications. Backlash in gear couplings allows for slight angular misalignment. There can be significant backlash in unsynchronized transmissions because of the intentional gap between the dogs in dog clutches. The gap is necessary to engage dogs when input shaft (engine) speed and output shaft (driveshaft) speed are imperfectly synchronized. If there was a smaller clearance, it would be nearly impossible to engage the gears because the dogs would interfere with each other in most configurations. In synchronized transmissions, synchromesh solves this problem. However, backlash is undesirable in precision positioning applications such as machine tool tables. 
It can be minimized by choosing ball screws or leadscrews with preloaded nuts, and mounting them in preloaded bearings. A preloaded bearing uses a spring and/or a second bearing to provide a compressive axial force that maintains bearing surfaces in contact despite reversal of the load direction. References.
[ { "math_id": 0, "text": "b_t=t_i-t_a\\;" }, { "math_id": 1, "text": "b_c = 2 \\left( \\Delta c \\right) \\tan\\phi" }, { "math_id": 2, "text": "b_{avg}=0.04 * m" } ]
https://en.wikipedia.org/wiki?curid=10058792
10059553
Futile cycle
Metabolic process A futile cycle, also known as a substrate cycle, occurs when two metabolic pathways run simultaneously in opposite directions and have no overall effect other than to dissipate energy in the form of heat. The cycle was called "futile" because it appeared to operate with no net utility for the organism; as such, it was thought of as a quirk of metabolism. Further investigation showed that futile cycles are very important for regulating the concentrations of metabolites. For example, if glycolysis and gluconeogenesis were to be active at the same time, glucose would be converted to pyruvate by glycolysis and then converted back to glucose by gluconeogenesis, with an overall consumption of ATP. Futile cycles may have a role in metabolic regulation, where a futile cycle would be a system oscillating between two states and very sensitive to small changes in the activity of any of the enzymes involved. The cycle does generate heat, and may be used to maintain thermal homeostasis, for example in the brown adipose tissue of young mammals, or to generate heat rapidly, for example in insect flight muscles and in hibernating animals during periodical arousal from torpor. It has been reported that the glucose metabolism substrate cycle is not a futile cycle but a regulatory process. For example, when energy is suddenly needed, ATP is replaced by AMP, a much more reactive adenine. Example. The simultaneous carrying out of glycolysis and gluconeogenesis is an example of a futile cycle, represented by the following equation: ATP + H2O formula_0 ADP + Pi + Heat For example, during glycolysis, fructose-6-phosphate is converted to fructose-1,6-bisphosphate in a reaction catalysed by the enzyme phosphofructokinase 1 (PFK-1). 
&lt;templatestyles src="Block indent/styles.css"/&gt;ATP + fructose-6-phosphate → Fructose-1,6-bisphosphate + ADP But during gluconeogenesis (i.e. synthesis of glucose from pyruvate and other compounds) the reverse reaction takes place, being catalyzed by fructose-1,6-bisphosphatase (FBPase-1). &lt;templatestyles src="Block indent/styles.css"/&gt;Fructose-1,6-bisphosphate + H2O → fructose-6-phosphate + Pi Giving an overall reaction of: &lt;templatestyles src="Block indent/styles.css"/&gt;ATP + H2O → ADP + Pi + Heat That is, hydrolysis of ATP without any useful metabolic work being done. Clearly, if these two reactions were allowed to proceed simultaneously at a high rate in the same cell, a large amount of chemical energy would be dissipated as heat. This uneconomical process has therefore been called a futile cycle. Futile Cycle's role in Obesity and Homeostasis. There are not many drugs that can effectively treat or reverse obesity. Obesity can increase ones risk of diseases primarily linked to health problems such as diabetes, hypertension, cardiovascular disease and even certain types of cancers. A study revolving around treatment and prevention of obesity using transgenic mice to experiment on reports positive feedback that proposes miR-378 may sure be a promising agent for preventing and treating obesity in humans. The study findings demonstrate that activation of the pyruvate-PEP futile cycle in skeletal muscle through miR-378 is the primary cause of elevated lipolysis in adipose tissues of miR-378 transgenic mice, and it helps orchestrate the crosstalk between muscle and fat to control energy homeostasis in mice. Our general understanding of futile cycle is a substrate cycle, occurring when two overlapping metabolic pathways run in opposite directions, that when left without regulation will continue to go on uncontrolled without any actual production until all the cells energy is depleted. 
However, the idea behind the study indicates that the miR-378-activated pyruvate-phosphoenolpyruvate futile cycle provides a regulatory benefit. Not only does miR-378 result in lower body fat mass due to enhanced lipolysis, it is also speculated that futile cycles regulate metabolism to maintain energy homeostasis. miR-378 has a unique function in regulating metabolic communication between the muscle and adipose tissues to control energy homeostasis at whole-body levels. Examples of futile cycle operating in different species. To understand how the presence of a futile cycle helps maintain low levels of ATP and generate heat in some species, we look at metabolic pathways dealing with reciprocal regulation of glycolysis and gluconeogenesis. The swim bladder of many fish, such as zebrafish, is an organ internally filled with gas that helps contribute to their buoyancy. These gas gland cells are located where the capillaries and nerves are found. Analyses of metabolic enzymes demonstrated that a gluconeogenesis enzyme fructose-1,6-bisphosphatase (Fbp1) and a glycolytic enzyme glyceraldehyde-3-phosphate dehydrogenase (Gapdh) are highly expressed in gas gland cells. The study noted that the zebrafish swim bladder would not be expected to express the fructose-1,6-bisphosphatase gene. The tissue of the swim bladder is known to be very high in glycolytic activity and lacking in gluconeogenesis, yet a predominant amount of Fbp was found to be expressed. This finding suggests that in the gas gland cell, Fbp forms an ATP-dependent metabolic futile cycle. Generation of heat is critically important for the gas gland cells to synthesize lactic acid because the process is strongly inhibited if ATP is accumulated. 
Another example suggests that heat generated in the fugu swim bladder will be transported out of the site of generation; however, it may still be constantly recovered through the rete mirabile so as to maintain the temperature of the gas gland higher than that of other areas of the body. The overall net reaction of the futile cycle involves the consumption of ATP and generation of heat as follows: ATP + H2O → ADP + Pi + Heat Another example of a futile cycle generating heat is found in bumblebees. The futile cycle involving Fbp and Pfk is used by bumblebees to produce heat in flight muscles and warm up their bodies considerably at low ambient temperatures. References.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=10059553
10059597
GPS signals
Signals broadcast by GPS satellites GPS signals are broadcast by Global Positioning System satellites to enable satellite navigation. Receivers on or near the Earth's surface can determine location, time, and velocity using this information. The GPS satellite constellation is operated by the 2nd Space Operations Squadron (2SOPS) of Space Delta 8, United States Space Force. GPS signals include ranging signals, which are used to measure the distance to the satellite, and navigation messages. The navigation messages include "ephemeris" data which are used both in trilateration to calculate the position of each satellite in orbit and also to provide information about the time and status of the entire satellite constellation, called the "almanac". There are four GPS signal specifications designed for civilian use. In order of date of introduction, these are: L1 C/A, L2C, L5 and L1C. L1 C/A is also called the "legacy signal" and is broadcast by all currently operational satellites. L2C, L5 and L1C are "modernized signals" and are only broadcast by newer satellites (or not yet at all). Furthermore, as of January 2021, none of these three signals are yet considered to be fully operational for civilian use. In addition to the four aforementioned signals, there are "restricted signals" with published frequencies and chip rates, but the signals use encrypted coding, restricting use to authorized parties. Some limited use of restricted signals can still be made by civilians without decryption; this is called "codeless" and "semi-codeless" access, and this is officially supported. The interface to the User Segment (GPS receivers) is described in the Interface Control Documents (ICD). The format of civilian signals is described in the Interface Specification (IS) which is a subset of the ICD. Common characteristics. 
The GPS satellites (called "space vehicles" in the GPS interface specification documents) transmit simultaneously several ranging codes and navigation data using binary phase-shift keying (BPSK). Only a limited number of central frequencies are used. Satellites using the same frequency are distinguished by using different ranging codes. In other words, GPS uses code-division multiple access. The ranging codes are also called "chipping codes" (in reference to CDMA/DSSS), "pseudorandom noise" and "pseudorandom binary sequences" (in reference to the fact that the sequences are predictable yet that they statistically resemble noise). Some satellites transmit several BPSK streams at the same frequency in quadrature, in a form of quadrature amplitude modulation. However, unlike typical QAM systems where a single bit stream is split into two, half-symbol-rate bit streams to improve spectral efficiency, the in-phase and quadrature components of GPS signals are modulated by separate (but functionally related) bit streams. Satellites are uniquely identified by a serial number called "space vehicle number" (SVN) which does not change during its lifetime. In addition, all operating satellites are numbered with a "space vehicle identifier" (SV ID) and "pseudorandom noise number" (PRN number) which uniquely identifies the ranging codes that a satellite uses. There is a fixed one-to-one correspondence between SV identifiers and PRN numbers described in the interface specification. Unlike SVNs, the SV ID/PRN number of a satellite may be changed (resulting in a change to the ranging codes it uses). That is, no two active satellites can share any one active SV ID/PRN number. The current SVNs and PRN numbers for the GPS constellation are published at NAVCEN. Legacy GPS signals. 
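As a toy illustration of the code-division principle (these short ±1 chip sequences are made up for the example and are not real GPS ranging codes): two mutually orthogonal spreading codes let two bit streams share one channel, and correlating the received sum with one code recovers only that stream's bit:

```python
# Two made-up, mutually orthogonal 8-chip spreading codes (chips as +/-1)
code_a = [1, -1, 1, 1, -1, -1, 1, -1]
code_b = [1, 1, -1, 1, -1, 1, -1, -1]
assert sum(a * b for a, b in zip(code_a, code_b)) == 0  # orthogonal

def spread(bit, code):
    """BPSK-style spreading: multiply one data bit (+/-1) onto every chip."""
    return [bit * c for c in code]

def despread(signal, code):
    """Correlate the received sum with one code to recover that stream's bit."""
    corr = sum(s * c for s, c in zip(signal, code))
    return 1 if corr > 0 else -1

# Both "satellites" transmit at once; the receiver sees only the sum
bit_a, bit_b = 1, -1
received = [x + y for x, y in zip(spread(bit_a, code_a), spread(bit_b, code_b))]

assert despread(received, code_a) == bit_a
assert despread(received, code_b) == bit_b
```

Real C/A codes achieve the same near-orthogonality statistically, over 1023 chips, rather than exactly.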
The original GPS design contains two ranging codes: the "coarse/acquisition" (C/A) code, which is freely available to the public, and the restricted "precision" (P) code, usually reserved for military applications. Frequency information. For the ranging codes and navigation message to travel from the satellite to the receiver, they must be modulated onto a carrier wave. In the case of the original GPS design, two frequencies are utilized; one at 1575.42 MHz (10.23 MHz × 154) called L1; and a second at 1227.60 MHz (10.23 MHz × 120), called L2. The C/A code is transmitted on the L1 frequency as a 1.023 MHz signal using a bi-phase shift keying (BPSK) modulation technique. The P(Y)-code is transmitted on both the L1 and L2 frequencies as a 10.23 MHz signal using the same BPSK modulation, however the P(Y)-code carrier is in quadrature with the C/A carrier (meaning it is 90° out of phase). Besides redundancy and increased resistance to jamming, a critical benefit of having two frequencies transmitted from one satellite is the ability to measure directly, and therefore remove, the ionospheric delay error for that satellite. Without such a measurement, a GPS receiver must use a generic model or receive ionospheric corrections from another source (such as the Wide Area Augmentation System or WAAS). Advances in the technology used on both the GPS satellites and the GPS receivers has made ionospheric delay the largest remaining source of error in the signal. A receiver capable of performing this measurement can be significantly more accurate and is typically referred to as a "dual frequency receiver". Modulation codes. Coarse/acquisition code. The C/A PRN codes are Gold codes with a period of 1023 chips transmitted at 1.023 Mchip/s, causing the code to repeat every 1 millisecond. They are exclusive-ored with a 50 bit/s navigation message and the result phase modulates the carrier as previously described. 
These codes only match up, or strongly autocorrelate, when they are almost exactly aligned. Each satellite uses a unique PRN code, which does not correlate well with any other satellite's PRN code; in other words, the PRN codes are highly orthogonal to one another. The 1 ms period of the C/A code corresponds to 299.8 km of distance, and each chip corresponds to a distance of 293 m. Receivers track these codes well within one chip of accuracy, so measurement errors are considerably smaller than 293 m. The C/A codes are generated by combining (using "exclusive or") two bit streams, each generated by a different maximal-period 10-stage linear-feedback shift register (LFSR). Different codes are obtained by selectively delaying one of those bit streams. Thus: C/A"i"("t") = "A"("t") ⊕ "B"("t" − "Di") where: C/A"i" is the code with PRN number "i". "A" is the output of the first LFSR, whose generator polynomial is "x" → "x"^10 + "x"^3 + 1 and whose initial state is 1111111111₂. "B" is the output of the second LFSR, whose generator polynomial is "x" → "x"^10 + "x"^9 + "x"^8 + "x"^6 + "x"^3 + "x"^2 + 1 and whose initial state is also 1111111111₂. "Di" is a delay (by an integer number of periods) specific to each PRN number "i"; it is designated in the GPS interface specification. ⊕ is exclusive or. The arguments of the functions therein are the number of bits or chips since their epochs, starting at 0. The epoch of the LFSRs is the point at which they are at the initial state; for the overall C/A codes it is the start of any UTC second plus any integer number of milliseconds. The output of the LFSRs at negative arguments is defined consistently with the period, which is 1,023 chips (this provision is necessary because "B" may have a negative argument in the above equation). 
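The two-LFSR construction just described can be sketched in Python. This is a simplified model: the per-PRN delays "Di" come from the interface specification, and the delay value of 5 used in the test below is only an illustrative input, not an assertion about any particular PRN.

```python
def lfsr_sequence(taps, nstages=10, length=1023):
    """Fibonacci LFSR with all-ones initial state; output taken from the last stage.
    `taps` are the 1-based stage numbers appearing in the generator polynomial."""
    state = [1] * nstages
    out = []
    for _ in range(length):
        out.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]  # shift; new bit enters stage 1
    return out

def ca_code(delay):
    """C/A_i(t) = A(t) xor B(t - D_i), indices taken modulo the 1,023-chip period."""
    a = lfsr_sequence([3, 10])              # x^10 + x^3 + 1
    b = lfsr_sequence([2, 3, 6, 8, 9, 10])  # x^10 + x^9 + x^8 + x^6 + x^3 + x^2 + 1
    return [a[t] ^ b[(t - delay) % 1023] for t in range(1023)]
```

Because both generator polynomials are primitive, each LFSR produces a maximal-length sequence: period 1,023 with exactly 512 ones, so each resulting Gold code also has period 1,023.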
The delay for PRN numbers 34 and 37 is the same; therefore their C/A codes are identical and are not transmitted at the same time (which may make one or both of those signals unusable due to mutual interference, depending on the relative power levels received by each GPS receiver). Precision code. The P-code is a PRN sequence much longer than the C/A code: 6.187104 × 10^12 chips. Even though the P-code chip rate (10.23 Mchip/s) is ten times that of the C/A code, it repeats only once per week, eliminating range ambiguity. It was assumed that receivers could not directly acquire such a long and fast code, so they would first "bootstrap" themselves with the C/A code to acquire the spacecraft ephemerides, produce an approximate time and position fix, and then acquire the P-code to refine the fix. Whereas the C/A PRNs are unique for each satellite, each satellite transmits a different segment of a master P-code sequence approximately 2.35 × 10^14 chips long (235,000,000,000,000 chips). Each satellite repeatedly transmits its assigned segment of the master code, restarting every Sunday at 00:00:00 GPS time. For reference, the GPS epoch was Sunday, January 6, 1980 at 00:00:00 UTC, but GPS does not follow UTC exactly because GPS time does not incorporate leap seconds; thus, GPS time is ahead of UTC by an integer (whole) number of seconds. The P code is public, so to prevent unauthorized users from using or potentially interfering with it through spoofing, the P-code is XORed with the "W-code", a cryptographically generated sequence, to produce the "Y-code". The Y-code is what the satellites have been transmitting since the anti-spoofing module was enabled. The encrypted signal is referred to as the "P(Y)-code". The details of the W-code are secret, but it is known that it is applied to the P-code at approximately 500 kHz, about 20 times slower than the P-code chip rate. This has led to semi-codeless approaches for tracking the P(Y) signal without knowing the W-code. 
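The quoted code lengths follow directly from the chip rates and repetition intervals stated above; a quick arithmetic check:

```python
CHIP_RATE_P = 10.23e6             # P-code chips per second
CHIP_RATE_CA = 1.023e6            # C/A-code chips per second
SECONDS_PER_WEEK = 7 * 24 * 3600  # 604,800

# One week of P-code chips matches the quoted length of 6.187104e12 chips.
p_code_length = CHIP_RATE_P * SECONDS_PER_WEEK

# One millisecond of C/A code is the quoted 1,023-chip period.
ca_code_length = CHIP_RATE_CA / 1000
```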
Navigation message. In addition to the PRN ranging codes, a receiver needs to know the time and position of each active satellite. GPS encodes this information into the "navigation message" and modulates it onto both the C/A and P(Y) ranging codes at 50 bit/s. The navigation message format described in this section is called LNAV data (for "legacy navigation"). The navigation message conveys information of three types: the GPS date, time and satellite health; the ephemeris; and the almanac. An ephemeris is valid for only four hours, while an almanac is valid, with little dilution of precision, for up to two weeks. The receiver uses the almanac to acquire a set of satellites based on stored time and location. As the receiver acquires each satellite, that satellite's ephemeris is decoded so that the satellite can be used for navigation. The navigation message consists of 30-second "frames" 1,500 bits long, divided into five 6-second "subframes" of ten 30-bit words each. Each subframe has the GPS time in 6-second increments. Subframe 1 contains the GPS date (week number), satellite clock correction information, satellite status and satellite health. Subframes 2 and 3 together contain the transmitting satellite's ephemeris data. Subframes 4 and 5 contain pages 1 through 25 of the 25-page almanac. The almanac is 15,000 bits long and takes 12.5 minutes to transmit. A frame begins at the start of the GPS week and every 30 seconds thereafter. Each week begins with the transmission of almanac page 1. There are two navigation message types: LNAV-L is used by satellites with PRN numbers 1 to 32 (called "lower PRN numbers") and LNAV-U is used by satellites with PRN numbers 33 to 63 (called "upper PRN numbers"). The two types use very similar formats. Subframes 1 to 3 are the same, while subframes 4 and 5 are almost the same. Each message type contains almanac data for all satellites using the same navigation message type, but not for those using the other. 
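The frame arithmetic above can be verified directly; all constants below are taken from the text (50 bit/s, 30-bit words, ten words per subframe, five subframes per frame, almanac pages in subframes 4 and 5 of 25 frames):

```python
BIT_RATE = 50             # LNAV data rate, bit/s
WORD_BITS = 30
WORDS_PER_SUBFRAME = 10
SUBFRAMES_PER_FRAME = 5

subframe_bits = WORD_BITS * WORDS_PER_SUBFRAME    # 300 bits -> 6 seconds
frame_bits = subframe_bits * SUBFRAMES_PER_FRAME  # 1,500 bits -> 30 seconds
frame_seconds = frame_bits / BIT_RATE

# The almanac occupies subframes 4 and 5 across 25 pages (one page per frame):
almanac_bits = 25 * 2 * subframe_bits             # 15,000 bits
almanac_minutes = 25 * frame_seconds / 60         # 12.5 minutes of transmission
```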
Each subframe begins with a Telemetry Word (TLM), which enables the receiver to detect the beginning of a subframe and determine the receiver clock time at which the navigation subframe begins. Next is the handover word (HOW), which gives the GPS time (as the time when the first bit of the next subframe will be transmitted) and identifies the specific subframe within a complete frame. The remaining eight words of the subframe contain the actual data specific to that subframe. Each word includes 6 bits of parity generated using an algorithm based on Hamming codes, which takes into account the 24 non-parity bits of that word and the last 2 bits of the previous word. After a subframe has been read and interpreted, the time at which the next subframe was sent can be calculated through the use of the clock correction data and the HOW. The receiver knows the receiver clock time at which the beginning of the next subframe was received from detection of the Telemetry Word, thereby enabling computation of the transit time and thus the pseudorange. Time. GPS time is expressed with a resolution of 1.5 seconds as a week number and a time of week count (TOW). Its zero point (week 0, TOW 0) is defined to be 1980-01-06T00:00Z. The TOW count is a value ranging from 0 to 403,199, representing the number of 1.5-second periods elapsed since the beginning of the GPS week. Expressing the TOW count thus requires 19 bits (2^19 = 524,288). GPS time is a continuous time scale in that it does not include leap seconds; therefore the start/end of GPS weeks may differ from that of the corresponding UTC day by an integer (whole) number of seconds. In each subframe, each hand-over word (HOW) contains the most significant 17 bits of the TOW count corresponding to the start of the next following subframe. Note that the 2 least significant bits can be safely omitted because one HOW occurs in the navigation message every 6 seconds, which is equal to the resolution of the truncated TOW count. 
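Recovering the full 19-bit TOW count from the 17-bit truncated value in the HOW is a simple shift, since the two dropped bits are always zero at subframe boundaries (a minimal sketch; the helper names are illustrative):

```python
def tow_from_how(truncated_tow):
    """Full 19-bit TOW count (in 1.5 s units) at the start of the next subframe,
    given the 17 most significant bits carried in the HOW."""
    assert 0 <= truncated_tow < 2 ** 17
    return truncated_tow << 2   # the 2 dropped LSBs are zero every 6 seconds

def tow_seconds(tow_count):
    """Seconds since the start of the GPS week."""
    return tow_count * 1.5
```

For example, a truncated TOW of 1 expands to a count of 4, i.e. 6 seconds into the week, matching the 6-second subframe cadence.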
Equivalently, the truncated TOW count is the time duration since the last GPS week start/end to the beginning of the next frame, in units of 6 seconds. Each frame contains (in subframe 1) the 10 least significant bits of the corresponding GPS week number. Note that each frame is entirely within one GPS week because GPS frames do not cross GPS week boundaries. Since rollover occurs every 1,024 GPS weeks (approximately every 19.6 years; 1,024 is 2^10), a receiver that computes current calendar dates needs to deduce the upper week number bits or obtain them from a different source. One possible method is for the receiver to save its current date in memory when shut down, and when powered on, assume that the newly decoded truncated week number corresponds to the period of 1,024 weeks that starts at the last saved date. This method correctly deduces the full week number if the receiver is never allowed to remain shut down (or without a time and position fix) for more than 1,024 weeks (~19.6 years). Almanac. The "almanac" consists of coarse orbit and status information for each satellite in the constellation, an ionospheric model, and information to relate GPS-derived time to Coordinated Universal Time (UTC). Each frame contains a part of the almanac (in subframes 4 and 5) and the complete almanac is transmitted by each satellite in 25 frames total (requiring 12.5 minutes). The almanac serves several purposes. The first is to assist in the acquisition of satellites at power-up by allowing the receiver to generate a list of visible satellites based on stored position and time, while an ephemeris from each satellite is needed to compute position fixes using that satellite. In older hardware, lack of an almanac in a new receiver would cause long delays before providing a valid position, because the search for each satellite was a slow process. Advances in hardware have made the acquisition process much faster, so not having an almanac is no longer an issue. 
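The week-number deduction strategy described above can be sketched as follows; `last_known_week` stands for the full GPS week number the receiver saved at shutdown (the function name is illustrative):

```python
def resolve_week(week10, last_known_week):
    """Recover the full GPS week number from its 10 LSBs, assuming the true
    week lies in the 1,024-week window starting at last_known_week."""
    assert 0 <= week10 < 1024
    base = last_known_week - (last_known_week % 1024)
    week = base + week10
    if week < last_known_week:
        week += 1024  # the decoded value wrapped past a rollover inside the window
    return week
```

For instance, a receiver last used in full week 2100 that decodes the 10-bit value 60 resolves it to week 2108, while a decoded value of 10 (which would lie before the saved date) is pushed into the next 1,024-week cycle.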
The second purpose is to relate time derived from the GPS (called GPS time) to the international time standard of UTC. Finally, the almanac allows a single-frequency receiver to correct for ionospheric delay error by using a global ionospheric model. The corrections are not as accurate as those of GNSS augmentation systems like WAAS or of dual-frequency receivers; however, they are often better than no correction, since ionospheric error is the largest error source for a single-frequency GPS receiver. Data updates. Satellite data is typically updated every 24 hours, with up to 60 days' worth of data loaded in case there is a disruption in the ability to make updates regularly. Typically the updates contain new ephemerides, with new almanacs uploaded less frequently. The Control Segment guarantees that during normal operations a new almanac will be uploaded at least every 6 days. Satellites broadcast a new ephemeris every two hours. The ephemeris is generally valid for 4 hours, with provisions for updates every 4 hours or longer in non-nominal conditions. The time needed to acquire the ephemeris is becoming a significant element of the delay to first position fix, because as the receiver hardware becomes more capable, the time to lock onto the satellite signals shrinks; however, the ephemeris data requires 18 to 36 seconds before it is received, due to the low data transmission rate. Modernization and additional GPS signals. Having reached full operational capability on July 17, 1995, the GPS system had completed its original design goals. However, additional advances in technology and new demands on the existing system led to the effort to "modernize" the GPS system. Announcements from the Vice President and the White House in 1998 heralded the beginning of these changes, and in 2000, the U.S. Congress reaffirmed the effort, referred to as "GPS III". The project involves new ground stations and new satellites, with additional navigation signals for both civilian and military users. 
It aims to improve the accuracy and availability for all users. An implementation goal of 2013 was established, and contractors were offered incentives if they could complete it by 2011. General features. Modernized GPS civilian signals have two general improvements over their legacy counterparts: a dataless acquisition aid and forward error correction (FEC) coding of the NAV message. A dataless acquisition aid is an additional signal, called a pilot carrier in some cases, broadcast alongside the data signal. This dataless signal is designed to be easier to acquire than the data-encoded signal and, upon successful acquisition, can be used to acquire the data signal. This technique improves acquisition of the GPS signal and boosts power levels at the correlator. The second advancement is to use forward error correction (FEC) coding on the NAV message itself. Due to the relatively slow transmission rate of NAV data (usually 50 bits per second), small interruptions can have potentially large impacts. Therefore, FEC on the NAV message is a significant improvement in overall signal robustness. L2C. One of the first announcements was the addition of a new civilian-use signal, to be transmitted on a frequency other than the L1 frequency used for the coarse/acquisition (C/A) signal. Ultimately, this became the L2C signal, so called because it is broadcast on the L2 frequency. Because it requires new hardware on board the satellite, it is only transmitted by the so-called Block IIR-M and later design satellites. The L2C signal is tasked with improving accuracy of navigation, providing an easy-to-track signal, and acting as a redundant signal in case of localized interference. L2C signals have been broadcast beginning in April 2014 on satellites capable of broadcasting them, but are still considered pre-operational. As of January 2021, L2C is broadcast on 23 satellites and is expected on 24 satellites by 2023. 
Unlike the C/A code, L2C contains two distinct PRN code sequences to provide ranging information: the "civil-moderate" code (called CM), and the "civil-long" code (called CL). The CM code is 10,230 chips long, repeating every 20 ms. The CL code is 767,250 chips long, repeating every 1,500 ms. Each signal is transmitted at 511,500 chips per second (chip/s); however, they are multiplexed together to form a 1,023,000-chip/s signal. CM is modulated with the CNAV navigation message (see below), whereas CL does not contain any modulated data and is called a "dataless sequence". The long, dataless sequence provides for approximately 24 dB greater correlation (~250 times stronger) than the L1 C/A-code. When compared to the C/A signal, L2C has 2.7 dB greater data recovery and 0.7 dB greater carrier-tracking, although its transmission power is 2.3 dB weaker. The current status of the L2C signal as of July 3, 2023 is: CM and CL codes. The civil-moderate and civil-long ranging codes are generated by a modular LFSR which is reset periodically to a predetermined initial state. The period of CM and CL is determined by this resetting and not by the natural period of the LFSR (as is the case with the C/A code). The initial states are designated in the interface specification and are different for different PRN numbers and for CM/CL. The feedback polynomial/mask is the same for CM and CL. The ranging codes are thus given by: CM"i"("t") = "A"("Xi", "t" mod 10 230) CL"i"("t") = "A"("Yi", "t" mod 767 250) where: CM"i" and CL"i" are the ranging codes for PRN number "i" and their arguments are the integer number of chips elapsed (starting at 0) since the start/end of the GPS week, or equivalently since the origin of the GPS time scale (see § Time). "A"("x", "t") is the output of the LFSR when initialized with initial state "x" after being clocked "t" times. "Xi" and "Yi" are the initial states for CM and CL, respectively, for PRN number "i". mod is the remainder of division operation. 
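The chip-by-chip time multiplexing of CM and CL (each at 511,500 chip/s, combined into a 1,023,000 chip/s stream) can be sketched as below. The convention that CM occupies the even chip slots is an assumption made here for illustration; the actual ordering is designated in the interface specification.

```python
def l2c_multiplexed_chip(cm, cl, k):
    """k-th chip of the combined 1,023,000 chip/s stream, built by alternating
    chips from the CM (10,230-chip) and CL (767,250-chip) code sequences."""
    if k % 2 == 0:                   # even slots: CM chips (assumed ordering)
        return cm[(k // 2) % len(cm)]
    return cl[(k // 2) % len(cl)]    # odd slots: CL chips
```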
"t" is the integer number of CM and CL chip periods since the origin of GPS time or, equivalently, since any GPS second (starting from 0). The initial states are described in the GPS interface specification as numbers expressed in octal, following the convention that the LFSR state is interpreted as the binary representation of a number where the output bit is the least significant bit and the bit where new bits are shifted in is the most significant bit. Using this convention, the LFSR shifts from the most significant bit to the least significant bit and, when seen in big-endian order, it shifts to the right. The states called "final state" in the IS are obtained after cycles for CM and after cycles for CL (just before reset in both cases). CNAV navigation message. The CNAV data is an upgraded version of the original NAV navigation message. It contains a higher-precision representation and nominally more accurate data than the NAV data. The same type of information (time, status, ephemeris, and almanac) is still transmitted using the new CNAV format; however, instead of using a frame / subframe architecture, it uses a new pseudo-packetized format made of 12-second 300-bit "messages" analogous to LNAV frames. While LNAV frames have a fixed information content, CNAV messages may be of one of several defined types. The type of a frame determines its information content. Messages do not follow a fixed schedule regarding which message types will be used, allowing the Control Segment some versatility. However, for some message types there are lower bounds on how often they will be transmitted. In CNAV, at least 1 out of every 4 packets is ephemeris data, and the same lower bound applies to clock data packets. The design allows for a wide variety of packet types to be transmitted. With a 32-satellite constellation, and the current requirements of what needs to be sent, less than 75% of the bandwidth is used. 
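The octal state convention just described can be decoded mechanically; a sketch, where the default register length of 27 stages is an assumption for illustration (the actual length is given in the interface specification):

```python
def state_bits_from_octal(octal_str, nstages=27):
    """Decode an octal state per the stated convention: the least significant
    bit of the number is the output stage, and the most significant bit is the
    stage where new bits are shifted in."""
    value = int(octal_str, 8)
    assert value < 2 ** nstages
    # index 0 = output stage (LSB), index nstages-1 = shift-in stage (MSB)
    return [(value >> k) & 1 for k in range(nstages)]
```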
Only a small fraction of the available packet types have been defined; this enables the system to grow and incorporate advances without breaking compatibility. There are many important changes in the new CNAV message: CNAV messages begin and end at the start/end of the GPS week plus an integer multiple of 12 seconds. Specifically, the beginning of the first bit (with convolution encoding already applied) to contain information about a message matches the aforesaid synchronization. CNAV messages begin with an 8-bit preamble, which is a fixed bit pattern whose purpose is to enable the receiver to detect the beginning of a message. Forward error correction code. The convolutional code used to encode CNAV is described by: formula_1 where: formula_2 and formula_3 are the unordered outputs of the convolutional encoder. formula_4 is the raw (non-FEC-encoded) navigation data, consisting of the simple concatenation of the 300-bit messages. formula_5 is the integer number of non-FEC-encoded navigation data bits elapsed since an arbitrary point in time (starting at 0). formula_6 is the FEC-encoded navigation data. formula_7 is the integer number of FEC-encoded navigation data bits elapsed since the same epoch as formula_5 (likewise starting at 0). Since the FEC-encoded bit stream runs at twice the rate of the non-FEC-encoded bit stream, as already described, formula_8. FEC encoding is performed independently of navigation message boundaries; this follows from the above equations. L2C frequency information. An immediate effect of having two civilian frequencies transmitted is that civilian receivers can now directly measure the ionospheric error in the same way as dual-frequency P(Y)-code receivers. However, users utilizing the L2C signal alone can expect 65% more position uncertainty due to ionospheric error than with the L1 signal alone. Military (M-code). A major component of the modernization process is a new military signal. 
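A rate-1/2, constraint-length-7 convolutional encoder of the kind used for CNAV can be sketched as follows. The generator pair 171/133 (octal) is the standard pairing for this code class; its assignment to the two (unordered) output streams, and the shift direction, are conventions assumed here for illustration.

```python
def convolutional_encode_r12_k7(bits):
    """Rate-1/2, K=7 convolutional encoder: two output symbols per input bit."""
    G1, G2 = 0o171, 0o133  # generator polynomials over the last 7 input bits
    register = 0
    out = []
    for b in bits:
        register = ((register << 1) | b) & 0x7F  # keep the last 7 input bits
        out.append(bin(register & G1).count("1") % 2)  # parity of G1 taps
        out.append(bin(register & G2).count("1") % 2)  # parity of G2 taps
    return out
```

Note how the output runs at twice the input rate, matching the relation between the FEC-encoded and raw bit counts stated above.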
Called the Military code, or M-code, it was designed to further improve the anti-jamming and secure access of the military GPS signals. Very little has been published about this new, restricted code. It contains a PRN code of unknown length transmitted at 5.115 MHz. Unlike the P(Y)-code, the M-code is designed to be autonomous, meaning that a user can calculate their position using only the M-code signal. Under the P(Y)-code's original design, users had to first lock onto the C/A code and then transfer the lock to the P(Y)-code. Later, direct-acquisition techniques were developed that allowed some users to operate autonomously with the P(Y)-code. MNAV navigation message. A little more is known about the new navigation message, which is called "MNAV". Similar to the new CNAV, this new MNAV is packeted instead of framed, allowing for very flexible data payloads. Also like CNAV, it can utilize forward error correction (FEC) and advanced error detection (such as a CRC). M-code frequency information. The M-code is transmitted in the same L1 and L2 frequencies already in use by the previous military code, the P(Y)-code. The new signal is shaped to place most of its energy at the edges (away from the existing P(Y) and C/A carriers). Not every satellite transmits it; M-code was switched off for SVN62/PRN25 on 5 April 2011. In a major departure from previous GPS designs, the M-code is intended to be broadcast from a high-gain directional antenna, in addition to a full-Earth antenna. This directional antenna's signal, called a spot beam, is intended to be aimed at a specific region (several hundred kilometers in diameter) and increase the local signal strength by 20 dB (approximately 100 times stronger). A side effect of having two antennas is that, to receivers inside the spot beam, the GPS satellite will appear to be two GPS satellites occupying the same position. 
While the whole-Earth M-code signal is available on the Block IIR-M satellites, the spot beam antennas will not be deployed until the Block III satellites are deployed, which began in December 2018. An interesting side effect of having each satellite transmit four separate signals is that the MNAV can potentially transmit four different data channels, offering increased data bandwidth. The modulation method is binary offset carrier, using a 10.23 MHz subcarrier against the 5.115 MHz code. This signal will have an overall bandwidth of approximately 24 MHz, with significantly separated sideband lobes. The sidebands can be used to improve signal reception. L5. The L5 signal provides a means of radionavigation secure and robust enough for life-critical applications, such as aircraft precision approach guidance. The signal is broadcast in a frequency band protected by the ITU for aeronautical radionavigation services. It was first demonstrated from satellite USA-203 (Block IIR-M), and is available on all satellites from GPS IIF and GPS III. L5 signals have been broadcast beginning in April 2014 on satellites that support it. The status of the L5 signal as of July 3, 2023 is: The L5 band provides additional robustness in the form of interference mitigation, the band being internationally protected, redundancy with existing bands, geostationary satellite augmentation, and ground-based augmentation. The added robustness of this band also benefits terrestrial applications. Two PRN ranging codes are transmitted on L5 in quadrature: the in-phase code (called "I5-code") and the quadrature-phase code (called "Q5-code"). Both codes are 10,230 chips long, transmitted at 10.23 Mchip/s (1 ms repetition period), and are generated identically (differing only in initial states). Then, I5 is modulated (by exclusive-or) with navigation data (called L5 CNAV) and a 10-bit Neuman-Hofman code clocked at 1 kHz. 
Similarly, the Q5-code is then modulated, but with only a 20-bit Neuman-Hofman code that is also clocked at 1 kHz. Compared to L1 C/A and L2, these are some of the changes in L5: I5 and Q5 codes. The I5-code and Q5-code are generated using the same structure but with different parameters. These codes are the combination (by exclusive-or) of the outputs of 2 differing linear-feedback shift registers (LFSRs) which are selectively reset. 5"i"("t") = "U"("t") ⊕ "Vi"("t") "U"("t") = "XA"(("t" mod 10 230) mod 8 190) "Vi"("t") = "XBi"("Xi", "t" mod 10 230) where: "i" is an ordered pair ("P", "n") where "P" ∈ {I, Q} for in-phase and quadrature-phase, and "n" a PRN number; both phases and a single PRN are required for the L5 signal from a single satellite. 5"i" is the ranging code for "i"; also denoted as I5"n" and Q5"n". "U" and "Vi" are intermediate codes, with "U" not depending on phase "or" PRN. The output of two 13-stage LFSRs with clock state "t"' is used: "XA"("x","t"') has feedback polynomial "x"^13 + "x"^12 + "x"^10 + "x"^9 + 1, and initial state 1111111111111₂. "XBi"("x","t"') has feedback polynomial "x"^13 + "x"^12 + "x"^8 + "x"^7 + "x"^6 + "x"^4 + "x"^3 + "x" + 1, and initial state "Xi". "Xi" is the initial state specified for the phase and PRN number given by "i" (designated in the IS). "t" is the integer number of chip periods since the origin of GPS time or, equivalently, since any GPS second (starting from 0). "XA" and "XB" are maximal-length LFSRs. The modulo operations correspond to resets. Note that both are reset each millisecond (synchronized with C/A code epochs). In addition, the extra modulo operation in the description of "U" is due to the fact that "XA" is reset 1 cycle before its natural period (which is 8,191), so that the next repetition becomes offset by 1 cycle with respect to "XB" (otherwise, since both sequences would repeat, I5 and Q5 would repeat within any 1 ms period as well, degrading correlation characteristics). L5 navigation message. 
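The reset structure above can be sketched as follows, given precomputed XA and XB output sequences (here simply Python lists of the first chips emitted from the relevant initial states; the inputs below are placeholders, not real code sequences):

```python
def l5_chip(xa_seq, xb_seq, t):
    """5_i(t) = U(t) xor V_i(t): XA is reset one cycle before its natural
    8,191-chip period (so it cycles every 8,190 chips within each 10,230-chip
    millisecond epoch), while XB is reset every 10,230 chips."""
    u = xa_seq[(t % 10230) % 8190]
    v = xb_seq[t % 10230]
    return u ^ v
```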
The L5 CNAV data includes SV ephemerides, system time, SV clock behavior data, status messages and time information, etc. The 50 bit/s data is coded by a rate-1/2 convolutional coder. The resulting 100 symbols per second (sps) symbol stream is modulo-2 added to the I5-code only; the resultant bit train is used to modulate the L5 in-phase (I5) carrier. This combined signal is called the L5 Data signal. The L5 quadrature-phase (Q5) carrier has no data and is called the L5 Pilot signal. The format used for L5 CNAV is very similar to that of L2 CNAV. One difference is that it uses twice the data rate. The bit fields within each message, message types, and forward error correction code algorithm are the same as those of L2 CNAV. L5 CNAV messages begin and end at the start/end of the GPS week plus an integer multiple of 6 seconds (this applies to the beginning of the first bit to contain information about a message, as is the case for L2 CNAV). L5 frequency information. L5 is broadcast on the frequency 1176.45 MHz (10.23 MHz × 115), which is an aeronautical navigation band. The frequency was chosen so that the aviation community can manage interference to L5 more effectively than L2. L1C. L1C is a civilian-use signal, to be broadcast on the L1 frequency (1575.42 MHz), which contains the C/A signal used by all current GPS users. The L1C signals will be broadcast from GPS III and later satellites, the first of which was launched in December 2018. As of January 2021, L1C signals are not yet broadcast, and only four operational satellites are capable of broadcasting them. L1C is expected on 24 GPS satellites in the late 2020s. L1C consists of a pilot (called L1CP) and a data (called L1CD) component. These components use carriers with the same phase (within a margin of error of 100 milliradians), instead of carriers in quadrature as with L5. The PRN codes are 10,230 chips long and transmitted at 1.023 Mchip/s, thus repeating in 10 ms. 
The pilot component is also modulated by an overlay code called L1CO (a secondary code that has a lower rate than the ranging code and is also predefined, like the ranging code). Of the total L1C signal power, 25% is allocated to the data and 75% to the pilot. The modulation technique used is BOC(1,1) for the data signal and TMBOC for the pilot. The time-multiplexed binary offset carrier (TMBOC) is BOC(1,1) for all except 4 of 33 cycles, when it switches to BOC(6,1). The current status of the L1C signal as of July 3, 2023 is: L1C ranging code. The L1C pilot and data ranging codes are based on a Legendre sequence of length 10,223 (= 10,230 − 7) used to build an intermediate code (called a "Weil code"), which is expanded with a fixed 7-bit sequence to the required 10,230 bits. This 10,230-bit sequence is the ranging code and varies between PRN numbers and between the pilot and data components. The ranging codes are described by: formula_9 where: formula_10 is the ranging code for PRN number and component formula_0. formula_11 represents a period of formula_10; it is introduced only to allow a clearer notation. To obtain a direct formula for formula_12, start from the right side of the formula for formula_13 and replace all instances of formula_7 with formula_14. formula_5 is the integer number of L1C chip periods (which is 1⁄1.023 μs) since the origin of GPS time or, equivalently, since any GPS second (starting from 0). formula_0 is an ordered pair identifying a PRN number and a code (L1CP or L1CD) and is of the form formula_15 or formula_16, where formula_17 is the PRN number of the satellite, and formula_18 are symbols (not variables) that indicate the L1CP code or L1CD code, respectively. formula_19 is an intermediate code: a Legendre sequence whose domain is the set of integers formula_17 for which formula_20. formula_21 is an intermediate code called a Weil code, with the same domain as formula_19. 
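The construction just described (Legendre sequence → Weil code via XOR with a shifted copy → insertion of a fixed 7-bit expansion) can be sketched as follows. The polarity convention of the Legendre sequence, the 7-bit expansion value, and the example Weil/insertion indices are assumptions for illustration; the real per-PRN values are designated in the IS.

```python
P = 10223  # length of the Legendre sequence (a prime)

def legendre_sequence(p=P):
    """seq[t] = 1 when t is a nonzero quadratic residue mod p (polarity assumed)."""
    seq = [0] * p
    for x in range(1, p):
        seq[(x * x) % p] = 1
    return seq

def weil_code(weil_index, p=P):
    """XOR of the Legendre sequence with a copy shifted by the Weil index."""
    l = legendre_sequence(p)
    return [l[t] ^ l[(t + weil_index) % p] for t in range(p)]

def l1c_ranging_code(weil_index, insertion_index, expansion=(0, 1, 1, 0, 1, 0, 0)):
    """Expand the 10,223-chip Weil code to 10,230 chips by inserting the
    7-bit expansion sequence at a (0-based) per-PRN insertion index."""
    w = weil_code(weil_index)
    return w[:insertion_index] + list(expansion) + w[insertion_index:]
```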
formula_22 is a 7-bit-long sequence defined for 0-based indexes 0 to 6. formula_23 is the 0-based insertion index of the sequence formula_22 into the ranging code (specific for PRN number and code formula_0). It is defined in the Interface Specification (IS) as a 1-based index formula_24, therefore formula_25. formula_26 is the Weil index for PRN number and code formula_0 designated in the IS. formula_27 is the remainder of division (or modulo) operation, which differs from the notation in statements of modular congruence, also used in this article. According to the formula above and the GPS IS, the first formula_26 bits (equivalently, up to the insertion point of formula_22) of formula_11 and formula_12 are the first bits of the corresponding Weil code; the next 7 bits are formula_22; the remaining bits are the remaining bits of the Weil code. The IS asserts that formula_28. For clarity, the formula for formula_11 does not account for the hypothetical case in which formula_29, which would cause the instance of formula_22 inserted into formula_11 to wrap from index to 0. L1C overlay code. The overlay codes are 1,800 bits long and are transmitted at 100 bit/s, synchronized with the navigation message encoded in L1CD. For PRN numbers 1 to 63 they are the truncated outputs of maximal-period LFSRs which vary in initial conditions and feedback polynomials. For PRN numbers 64 to 210 they are truncated Gold codes generated by combining 2 LFSR outputs (formula_30 and formula_31, where formula_0 is the PRN number) whose initial state varies. formula_30 has one of the 4 feedback polynomials used overall (among PRN numbers 64–210). formula_31 has the same feedback polynomial for all PRN numbers in the range 64–210. CNAV-2 navigation message. The L1C navigation data (called CNAV-2) is broadcast in 1,800-bit-long frames (including FEC) and is transmitted at 100 bit/s. The frames of L1C are analogous to the messages of L2C and L5. 
While L2 CNAV and L5 CNAV use a dedicated message type for ephemeris data, all CNAV-2 frames include that information. The common structure of all messages consists of 3 subframes, as listed in the adjacent table. The content of subframe 3 varies according to its page number, which is analogous to the type number of L2 CNAV and L5 CNAV messages. Pages are broadcast in an arbitrary order. The time of messages (not to be confused with clock correction parameters) is expressed in a different format than the format of the previous civilian signals. Instead it consists of 3 components: TOI is the only content of subframe 1. The week number and ITOW are contained in subframe 2 along with other information. Subframe 1 is encoded by a modified BCH code. Specifically, the 8 least significant bits are BCH-encoded to generate 51 bits, then combined using exclusive-or with the most significant bit, and finally the most significant bit is appended as the most significant bit of the previous result to obtain the final 52 bits. Subframes 2 and 3 are individually expanded with a 24-bit CRC, then individually encoded using a low-density parity-check code, and then interleaved as a single unit using a block interleaver. Overview of frequencies. All satellites broadcast at the same two frequencies, 1.57542 GHz (L1 signal) and 1.2276 GHz (L2 signal). The satellite network uses a CDMA spread-spectrum technique where the low-bitrate message data is encoded with a high-rate pseudorandom noise (PRN) sequence that is different for each satellite. The receiver must know the PRN codes for each satellite to reconstruct the actual message data. The C/A code, for civilian use, transmits data at 1.023 million chips per second, whereas the P code, for U.S. military use, transmits at 10.23 million chips per second. The L1 carrier is modulated by both the C/A and P codes, while the L2 carrier is only modulated by the P code. 
The P code can be encrypted as a so-called P(Y) code which is only available to military equipment with a proper decryption key. Both the C/A and P(Y) codes impart the precise time-of-day to the user. Each composite signal (in-phase and quadrature phase) becomes: formula_32 where formula_33 and formula_34 represent signal powers; formula_35 and formula_36 represent codes with/without data formula_37. This is a formula for the ideal case (which is not attained in practice) as it does not model timing errors, noise, amplitude mismatch between components or quadrature error (when components are not exactly in quadrature). Demodulation and decoding. A GPS receiver processes the GPS signals received on its antenna to determine position, velocity and/or timing. The signal at the antenna is amplified, down converted to baseband or intermediate frequency, filtered (to remove frequencies outside the intended frequency range for the digital signal that would alias into it) and digitalized; these steps may be chained in a different order. Note that aliasing is sometimes intentional (specifically, when undersampling is used) but filtering is still required to discard frequencies not intended to be present in the digital representation. For each satellite used by the receiver, the receiver must first acquire the signal and then track it as long as that satellite is in use; both are performed in the digital domain in the vast majority of (if not all) receivers. Acquiring a signal is the process of determining the frequency and code phase (both relative to receiver time) when it was previously unknown. Code phase must be determined within an accuracy that depends on the receiver design (especially the tracking loop); 0.5 times the duration of code chips (approx. 0.489 μs) is a representative value. Tracking is the process of continuously adjusting the estimated frequency and phase to match the received signal as closely as possible and therefore is a phase locked loop.
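The ideal composite-signal formula above can be checked numerically. This minimal sketch uses arbitrary example powers, chips and carrier parameters (not real GPS values), and recovers the two code components by correlating over an integer number of carrier cycles:

```python
from math import cos, sin, sqrt, pi

# Numerical sketch of the ideal composite-signal formula above.  Powers,
# chips, carrier frequency and phase are arbitrary example values.
P_I, P_Q = 2.0, 1.0           # in-phase and quadrature signal powers
x_i, x_q = +1, -1             # current code chips (each +/-1)
w = 2 * pi * 8.0              # carrier angular frequency: 8 cycles per unit time
phi = 0.3                     # carrier phase
N = 800                       # samples over one unit of time (whole carrier cycles)

ts = [n / N for n in range(N)]
s = [sqrt(P_I) * x_i * cos(w * t + phi) - sqrt(P_Q) * x_q * sin(w * t + phi)
     for t in ts]

# Correlating with the in-phase and quadrature reference carriers splits
# the two components; the cross terms average to zero over whole cycles.
corr_i = 2 / N * sum(v * cos(w * t + phi) for v, t in zip(s, ts))
corr_q = 2 / N * sum(v * -sin(w * t + phi) for v, t in zip(s, ts))
```

The recovered values approximate √P_I·X_I and √P_Q·X_Q, mirroring the way a receiver separates the in-phase and quadrature channels after carrier wipe-off.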
Note that acquisition is performed to start using a particular satellite, but tracking is performed as long as that satellite is in use. In this section, one possible procedure is described for L1 C/A acquisition and tracking, but the process is very similar for the other signals. The described procedure is based on computing the correlation of the received signal with a locally generated replica of the ranging code and detecting the highest peak or lowest valley. The offset of the highest peak or lowest valley contains information about the code phase relative to receiver time. The duration of the local replica is set by receiver design and is typically shorter than the duration of navigation data bits, which is 20 ms. Acquisition. Acquisition of a given PRN number can be conceptualized as searching for a signal in a bidimensional search space where the dimensions are (1) code phase, (2) frequency. In addition, a receiver may not know which PRN number to search for, and in that case a third dimension is added to the search space: (3) PRN number. If the almanac information has previously been acquired, the receiver picks which satellites to listen for by their PRNs. If the almanac information is not in memory, the receiver enters a search mode and cycles through the PRN numbers until a lock is obtained on one of the satellites. To obtain a lock, it is necessary that there be an unobstructed line of sight from the receiver to the satellite. The receiver can then decode the almanac and determine the satellites it should listen for. As it detects each satellite's signal, it identifies it by its distinct C/A code pattern. Simple correlation. The simplest way to acquire the signal (not necessarily the most effective or least computationally expensive) is to compute the dot product of a window of the digitalized signal with a set of locally generated replicas. 
The locally generated replicas vary in carrier frequency and code phase to cover the entire search space already mentioned, which is the Cartesian product of the frequency search space and the code phase search space. The carrier is a complex number where real and imaginary components are both sinusoids as described by Euler's formula. The replica that generates the highest magnitude of dot product is likely the one that best matches the code phase and frequency of the signal; therefore, if that magnitude is above a threshold, the receiver proceeds to track the signal or further refine the estimated parameters before tracking. The threshold is used to minimize false positives (apparently detecting a signal when there is in fact no signal), but some may still occur occasionally. Using a complex carrier allows the replicas to match the digitalized signal regardless of the signal's carrier phase and to detect that phase (the principle is the same used by the Fourier transform). The dot product is a complex number; its magnitude represents the level of similarity between the replica and the signal, as with an ordinary correlation of real-valued time series. The argument of the dot product is an approximation of the phase of the corresponding carrier in the digitalized signal. As an example, assume that the granularity for the search in code phase is 0.5 chips and in frequency is 500 Hz, then there are 1,023/0.5=2,046 code phases and 10,000 Hz/500 Hz=20 frequencies to try for a total of 20×2,046=40,920 local replicas. Note that each frequency bin is centered on its interval and therefore covers 250 Hz in each direction; for example, the first bin has a carrier at −4,750 Hz and covers the interval −5,000 Hz to −4,500 Hz. Code phases are equivalent modulo 1,023 because the ranging code is periodic; for example, phase −0.5 is equivalent to phase 1,022.5. The following table depicts the local replicas that would be compared against the digitalized signal in this example.
"•" means a single local replica while "..." is used for elided local replicas: Fourier transform. As an improvement over the simple correlation method, it is possible to implement the computation of dot products more efficiently with a Fourier transform. Instead of performing one dot product for each element in the Cartesian product of code and frequency, a single operation involving FFT and covering all frequencies is performed for each code phase; each such operation is more computationally expensive, but it may still be faster overall than the previous method due to the efficiency of FFT algorithms, and it recovers carrier frequency with a higher accuracy, because the frequency bins are much more closely spaced in a DFT. Specifically, for all code phases in the search space, the digitalized signal window is multiplied element by element with a local replica of the code (with no carrier), then processed with a discrete Fourier transform. Given the previous example to be processed with this method, assume real-valued data (as opposed to complex data, which would have in-phase and quadrature components), a sampling rate of 5 MHz, a signal window of 10 ms, and an intermediate frequency of 2.5 MHz. There will be 5 MHz×10 ms=50,000 samples in the digital signal, and therefore 25,001 frequency components ranging from 0 Hz to 2.5 MHz in steps of 100 Hz (note that the 0 Hz component is real because it is the average of a real-valued signal and the 2.5 MHz component is real as well because it is the critical frequency). Only the components (or bins) within 5 kHz of the central frequency are examined, which is the range from 2.495 MHz to 2.505 MHz, covered by 101 frequency components. There are 2,046 code phases as in the previous case, thus in total 101×2,046=206,646 complex frequency components will be examined. Circular correlation with Fourier transform. 
Likewise, as an improvement over the simple correlation method, it is possible to perform a single operation covering all code phases for each frequency bin. The operation performed for each frequency bin involves forward FFT, element-wise multiplication in the frequency domain, inverse FFT, and extra processing so that overall, it computes circular correlation instead of circular convolution. This yields more accurate "code phase determination" than the simple correlation method, in contrast with the previous method, which yields more accurate "carrier frequency determination". Tracking and navigation message decoding. Since the carrier frequency received can vary due to Doppler shift, the points where received PRN sequences begin may not differ from O by an exact integral number of milliseconds. Because of this, carrier frequency tracking along with PRN code tracking are used to determine when the received satellite's PRN code begins. Unlike the earlier computation of offset in which trials of all 1,023 offsets could potentially be required, the tracking to maintain lock usually requires shifting of half a pulse width or less. To perform this tracking, the receiver observes two quantities: phase error and received frequency offset. The correlation of the received PRN code with respect to the receiver generated PRN code is computed to determine if the bits of the two signals are misaligned. Comparisons of the received PRN code with receiver generated PRN code shifted half a pulse width early and half a pulse width late are used to estimate the adjustment required. The amount of adjustment required for maximum correlation is used in estimating phase error. Received frequency offset from the frequency generated by the receiver provides an estimate of phase rate error. 
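The early/late comparison used for code tracking can be illustrated with a toy discriminator. In this sketch the code length, oversampling factor and delays are arbitrary example values, and the "received signal" is a noiseless circular copy of the local replica; a real tracking loop would also handle noise, carrier and data bits.

```python
import random

# Toy early/late code-tracking discriminator.  The code, oversampling
# factor and delays are arbitrary illustration values.
spc = 10                               # samples per chip
random.seed(0)
chips = [random.choice([-1, 1]) for _ in range(32)]
ref = [c for c in chips for _ in range(spc)]      # oversampled local replica
n_samp = len(ref)

def corr(sig, est_delay):
    # Circular correlation of sig against the replica delayed by est_delay samples
    return sum(s * ref[(n - est_delay) % n_samp] for n, s in enumerate(sig))

true_delay = 23                        # samples; unknown to the receiver
sig = [ref[(n - true_delay) % n_samp] for n in range(n_samp)]

def early_late(est_delay, half=spc // 2):
    # Early-minus-late discriminator: zero at alignment, negative when the
    # estimate is too early (shift later), positive when too late.
    early = corr(sig, est_delay - half)
    late = corr(sig, est_delay + half)
    return early - late
```

Because the code autocorrelation is symmetric about its peak, the sign of the early-minus-late difference tells the loop which way to shift its delay estimate, exactly the role the half-chip early and late replicas play above.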
The command for the frequency generator and any further PRN code shifting required are computed as a function of the phase error and the phase rate error in accordance with the control law used. The Doppler velocity is computed as a function of the frequency offset from the carrier nominal frequency. The Doppler velocity is the velocity component along the line of sight of the receiver relative to the satellite. As the receiver continues to read successive PRN sequences, it will encounter a sudden change in the phase of the 1,023-bit received PRN signal. This indicates the beginning of a data bit of the navigation message. This enables the receiver to begin reading the 20 millisecond bits of the navigation message. The TLM word at the beginning of each subframe of a navigation frame enables the receiver to detect the beginning of a subframe and determine the receiver clock time at which the navigation subframe begins. The HOW word then enables the receiver to determine which specific subframe is being transmitted. There can be a delay of up to 30 seconds before the first estimate of position because of the need to read the ephemeris data before computing the intersections of sphere surfaces. After a subframe has been read and interpreted, the time the next subframe was sent can be calculated through the use of the clock correction data and the HOW. The receiver knows the receiver clock time of when the beginning of the next subframe was received from detection of the Telemetry Word thereby enabling computation of the transit time and thus the pseudorange. The receiver is potentially capable of getting a new pseudorange measurement at the beginning of each subframe or every 6 seconds. Then the orbital position data, or ephemeris, from the navigation message is used to calculate precisely where the satellite was at the start of the message. 
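The transit-time and pseudorange computation described above reduces to simple arithmetic once the transmit and receive clock readings are known. The clock values in this sketch are invented example numbers, not real GPS data:

```python
# Toy computation of transit time and pseudorange from clock readings.
# The clock values are invented example numbers, not real GPS data.
C = 299_792_458.0             # speed of light, m/s

t_transmit = 102.000000000    # satellite clock time when the subframe left (s)
t_receive = 102.072345678     # receiver clock time when the subframe arrived (s)

transit_time = t_receive - t_transmit
# "Pseudo" because the receiver clock bias is still contained in the result;
# it is removed later when solving for position and time together.
pseudorange = transit_time * C
```

A transit time on the order of 70 ms corresponds to a pseudorange of roughly 21,000 km, consistent with GPS orbital altitudes.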
A more sensitive receiver will potentially acquire the ephemeris data more quickly than a less sensitive receiver, especially in a noisy environment. Sources and references. Bibliography. GPS Interface Specification Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "i" }, { "math_id": 1, "text": "\\begin{align}\n X_1(t) &= d(t) \\oplus d(t - 2) \\oplus d(t - 3) \\oplus d(t - 5) \\oplus d(t - 6) \\\\\n X_2(t) &= d(t) \\oplus d(t - 1) \\oplus d(t - 2) \\oplus d(t - 3) \\oplus d(t - 6) \\\\\n d'(t') &= \\begin{cases}\n X_1\\left(\\frac{t'}{2}\\right) & \\text{if } t' \\equiv 0 \\pmod{2} \\\\\n X_2\\left(\\frac{t'-1}{2}\\right) & \\text{if } t' \\equiv 1 \\pmod{2} \\\\\n \\end{cases}\n\\end{align}" }, { "math_id": 2, "text": "X_1" }, { "math_id": 3, "text": "X_2" }, { "math_id": 4, "text": "d" }, { "math_id": 5, "text": "t" }, { "math_id": 6, "text": "d'" }, { "math_id": 7, "text": "t'" }, { "math_id": 8, "text": "t=\\left\\lfloor\\tfrac{t'}{2}\\right\\rfloor" }, { "math_id": 9, "text": "\\begin{align}\n \\text{L1C}_i(t) &= \\text{L1C}'(t \\bmod{10\\,230}) \\\\\n \\text{L1C}'_i(t') &= \\begin{cases}\n W_i(t') & \\text{ if } t' < p'_i \\\\\n S(t'-p'_i) & \\text{ if } p'_i \\le t' < p'_i + 7\\\\\n W_i(t'-7) & \\text{ if } t' \\ge p'_i + 7 \\\\\n \\end{cases} \\\\\n S &= (0, 1, 1, 0, 1, 0, 0) \\\\\n W_i(n) &= L(n) \\oplus L((n + w_i) \\bmod{10\\,223}) \\\\\n L(n) &= \\begin{cases}\n 1 & \\text{ if } n \\neq 0 \\text{ and there is an integer }\n m \\text{ such that } n \\equiv m^2 \\pmod{10\\,223} \\\\\n 0 & \\text{ otherwise} \\\\\n \\end{cases}\n\\end{align}" }, { "math_id": 10, "text": "\\text{L1C}_i" }, { "math_id": 11, "text": "\\text{L1C}'_i" }, { "math_id": 12, "text": "\\text{L1C}" }, { "math_id": 13, "text": "\\text{L1C}'" }, { "math_id": 14, "text": "t \\bmod{10\\,230}" }, { "math_id": 15, "text": "(\\text{P}, n)" }, { "math_id": 16, "text": "(\\text{D}, n)" }, { "math_id": 17, "text": "n" }, { "math_id": 18, "text": "\\text{P, D}" }, { "math_id": 19, "text": "L" }, { "math_id": 20, "text": "0 \\le n \\le 10\\,222" }, { "math_id": 21, "text": "W_i" }, { "math_id": 22, "text": "S" }, { "math_id": 23, "text": "p'_i" }, { "math_id": 24, "text": "p" }, { "math_id": 25, "text": "p'_i = p_i-1" }, { 
"math_id": 26, "text": "w_i" }, { "math_id": 27, "text": "\\operatorname{mod}" }, { "math_id": 28, "text": "0 \\le p'_i \\le 10\\,222" }, { "math_id": 29, "text": "p'_i > 10\\,222" }, { "math_id": 30, "text": "\\text{S1}_i" }, { "math_id": 31, "text": "\\text{S2}_i" }, { "math_id": 32, "text": "\n S(t) =\n \\sqrt{P_\\operatorname{I}} X_\\operatorname{I}(t) \\cos\\left(\\omega t + \\phi_0\\right) -\n \\sqrt{P_\\operatorname{Q}} X_\\operatorname{Q}(t) \\underbrace{\\sin\\left(\\omega t + \\phi_0\\right)}_{-\\cos\\left(\\omega t + \\phi_0 + \\frac{\\pi}{2}\\right)} ,\n" }, { "math_id": 33, "text": "\\scriptstyle P_\\operatorname{I}" }, { "math_id": 34, "text": "\\scriptstyle P_\\operatorname{Q}" }, { "math_id": 35, "text": "\\scriptstyle X_\\operatorname{I}(t)" }, { "math_id": 36, "text": "\\scriptstyle X_\\operatorname{Q}(t)" }, { "math_id": 37, "text": "\\scriptstyle (= \\;\\pm 1)" } ]
https://en.wikipedia.org/wiki?curid=10059597
10059981
Category of manifolds
Category theory In mathematics, the category of manifolds, often denoted Man"p", is the category whose objects are manifolds of smoothness class "C""p" and whose morphisms are "p"-times continuously differentiable maps. This is a category because the composition of two "C""p" maps is again continuous and of class "C""p". One is often interested only in "C""p"-manifolds modeled on spaces in a fixed category "A", and the category of such manifolds is denoted Man"p"("A"). Similarly, the category of "C""p"-manifolds modeled on a fixed space "E" is denoted Man"p"("E"). One may also speak of the category of smooth manifolds, Man∞, or the category of analytic manifolds, Man"ω". Man"p" is a concrete category. Like many categories, the category Man"p" is a concrete category, meaning its objects are sets with additional structure (i.e. a topology and an equivalence class of atlases of charts defining a "C""p"-differentiable structure) and its morphisms are functions preserving this structure. There is a natural forgetful functor "U" : Man"p" → Top to the category of topological spaces which assigns to each manifold the underlying topological space and to each "p"-times continuously differentiable function the underlying continuous function of topological spaces. Similarly, there is a natural forgetful functor "U"′ : Man"p" → Set to the category of sets which assigns to each manifold the underlying set and to each "p"-times continuously differentiable function the underlying function. Pointed manifolds and the tangent space functor. It is often convenient or necessary to work with the category of manifolds along with a distinguished point: Man•p analogous to Top• - the category of pointed spaces. The objects of Man•p are pairs formula_0 where formula_1 is a formula_2 manifold along with a basepoint formula_3, and its morphisms are basepoint-preserving "p"-times continuously differentiable maps: e.g. 
formula_4 such that formula_5 The category of pointed manifolds is an example of a comma category - Man•p is exactly formula_6 where formula_7 represents an arbitrary singleton set, and the formula_8 represents a map from that singleton to an element of Manp, picking out a basepoint. The tangent space construction can be viewed as a functor from Man•p to VectR as follows: given pointed manifolds formula_9 and formula_10 with a formula_2 map formula_11 between them, we can assign the vector spaces formula_12 and formula_13 with a linear map between them given by the pushforward (differential): formula_14 This construction is a genuine functor because the pushforward of the identity map formula_15 is the vector space isomorphism formula_16, and the chain rule ensures that formula_17
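The two functor laws above (identity and chain rule) can be checked numerically by computing pushforwards as finite-difference Jacobians. The maps `g`, `f` and the base point `p0` below are arbitrary smooth examples on R², chosen only for illustration:

```python
from math import sin, cos, exp

# Numerical sketch of the tangent-space functor: pushforwards computed as
# finite-difference Jacobians, checked against the identity law and the
# chain rule.  The maps g, f and base point p0 are arbitrary examples.
def jacobian(F, p, h=1e-6):
    # Finite-difference Jacobian of F at p -- the pushforward F_*,p
    n = len(p)
    Fp = F(p)
    J = []
    for i in range(len(Fp)):
        row = []
        for j in range(n):
            q = list(p)
            q[j] += h
            row.append((F(q)[i] - Fp[i]) / h)
        J.append(row)
    return J

def matmul(A, B):
    # Composition of linear maps as matrix multiplication
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

g = lambda p: [sin(p[0]) + p[1], p[0] * p[1]]
f = lambda q: [exp(q[0]), q[0] - cos(q[1])]
p0 = [0.3, -0.7]

lhs = jacobian(lambda p: f(g(p)), p0)                  # (f o g)_*,p0
rhs = matmul(jacobian(f, g(p0)), jacobian(g, p0))      # f_*,g(p0) o g_*,p0
```

Up to discretization error, `lhs` and `rhs` agree, and the Jacobian of the identity map is the identity matrix, which is exactly the functoriality expressed by the two formulas above.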
[ { "math_id": 0, "text": "(M, p_0)," }, { "math_id": 1, "text": "M" }, { "math_id": 2, "text": "C^p" }, { "math_id": 3, "text": "p_0 \\in M ," }, { "math_id": 4, "text": "F: (M,p_0) \\to (N,q_0)," }, { "math_id": 5, "text": "F(p_0) = q_0." }, { "math_id": 6, "text": "\\scriptstyle {( \\{ \\bull \\} \\downarrow \\mathbf{Man^p})}," }, { "math_id": 7, "text": "\\{ \\bull \\}" }, { "math_id": 8, "text": "\\downarrow" }, { "math_id": 9, "text": "(M, p_0)" }, { "math_id": 10, "text": "(N, F(p_0))," }, { "math_id": 11, "text": "F: (M,p_0) \\to (N,F(p_0))" }, { "math_id": 12, "text": "T_{p_0}M" }, { "math_id": 13, "text": "T_{F(p_0)}N," }, { "math_id": 14, "text": "F_{*,p}:T_{p_0}M \\to T_{F(p_0)}N." }, { "math_id": 15, "text": "\\mathbb{1}_M:M \\to M" }, { "math_id": 16, "text": "(\\mathbb{1}_M)_{*,p_0}:T_{p_0}M \\to T_{p_0}M," }, { "math_id": 17, "text": "(f\\circ g)_{*,p_0} = f_{*,g(p_0)} \\circ g_{*,p_0}." } ]
https://en.wikipedia.org/wiki?curid=10059981
10061569
Perpendicular axis theorem
The perpendicular axis theorem (or plane figure theorem) states that, "The moment of inertia ("Iz") of a laminar body about an axis (z) perpendicular to its plane is the sum of its moments of inertia about two mutually perpendicular axes (x and y) in its plane, all the three axes being concurrent." Define perpendicular axes formula_0, formula_1, and formula_2 (which meet at origin formula_3) so that the body lies in the formula_4 plane, and the formula_2 axis is perpendicular to the plane of the body. Let "I""x", "I""y" and "I""z" be moments of inertia about axis "x", "y", "z" respectively. Then the perpendicular axis theorem states that formula_5 This rule can be applied with the parallel axis theorem and the stretch rule to find polar moments of inertia for a variety of shapes. If a planar object has rotational symmetry such that formula_6 and formula_7 are equal, then the perpendicular axes theorem provides the useful relationship: formula_8 Derivation. Working in Cartesian coordinates, the moment of inertia of the planar body about the formula_2 axis is given by: formula_9 On the plane, formula_10, so these two terms are the moments of inertia about the formula_0 and formula_1 axes respectively, giving the perpendicular axis theorem. The converse of this theorem is also derived similarly. Note that formula_11 because in formula_12, formula_13 measures the distance from the "axis of rotation", so for a "y"-axis rotation, deviation distance from the axis of rotation of a point is equal to its "x" coordinate. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
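The theorem can be verified numerically for any planar set of point masses. This sketch uses arbitrary example masses and positions in the z = 0 plane, plus a symmetric configuration illustrating the I_z = 2I_x special case:

```python
# Numerical check of the perpendicular axis theorem for point masses
# lying in the z = 0 plane.  Masses and positions are arbitrary examples.
masses = [(2.0, (1.0, 0.5)), (0.5, (-1.5, 2.0)), (1.2, (0.3, -0.8))]  # (m, (x, y))

I_x = sum(m * y ** 2 for m, (x, y) in masses)            # distance from x-axis is |y|
I_y = sum(m * x ** 2 for m, (x, y) in masses)            # distance from y-axis is |x|
I_z = sum(m * (x ** 2 + y ** 2) for m, (x, y) in masses) # distance from z-axis is r

# Rotationally symmetric case: four equal masses arranged so that
# I_x = I_y, hence I_z = 2 * I_x.
square = [(1.0, (1.0, 0.0)), (1.0, (-1.0, 0.0)),
          (1.0, (0.0, 1.0)), (1.0, (0.0, -1.0))]
I_x_sym = sum(m * y ** 2 for m, (x, y) in square)
I_z_sym = sum(m * (x ** 2 + y ** 2) for m, (x, y) in square)
```

The identity I_z = I_x + I_y holds term by term here because each mass contributes m(x² + y²) to I_z and the same two pieces, m y² and m x², to I_x and I_y respectively.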
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "y" }, { "math_id": 2, "text": "z" }, { "math_id": 3, "text": "O" }, { "math_id": 4, "text": "xy" }, { "math_id": 5, "text": "I_z = I_x + I_y" }, { "math_id": 6, "text": "I_x" }, { "math_id": 7, "text": "I_y" }, { "math_id": 8, "text": "I_z = 2I_x = 2I_y" }, { "math_id": 9, "text": "I_{z} = \\int (x^2 + y^2) \\,dm = \\int x^2\\,dm + \\int y^2\\,dm = I_{y} + I_{x}" }, { "math_id": 10, "text": "z=0" }, { "math_id": 11, "text": "\\int x^2\\,dm = I_{y} \\ne I_{x}" }, { "math_id": 12, "text": "\\int r^2\\,dm " }, { "math_id": 13, "text": "r" } ]
https://en.wikipedia.org/wiki?curid=10061569
100625
Inverter (logic gate)
Logic gate implementing negation In digital logic, an inverter or NOT gate is a logic gate which implements logical negation. It outputs a bit opposite of the bit that is put into it. The bits are typically implemented as two differing voltage levels. Description. The NOT gate outputs a zero when given a one, and a one when given a zero. Hence, it inverts its inputs. Colloquially, this inversion of bits is called "flipping" bits. As with all binary logic gates, other pairs of symbols — such as true and false, or high and low — may be used in lieu of one and zero. It is equivalent to the logical negation operator (¬) in mathematical logic. Because it has only one input, it is a unary operation and has the simplest type of truth table. It is also called the complement gate because it produces the ones' complement of a binary number, swapping 0s and 1s. The NOT gate is one of three basic logic gates from which any Boolean circuit may be built up. Together with the AND gate and the OR gate, any function in binary mathematics may be implemented. All other logic gates may be made from these three. The terms "programmable inverter" or "controlled inverter" do not refer to this gate; instead, these terms refer to the XOR gate because it can conditionally function like a NOT gate. Symbols. The traditional symbol for an inverter circuit is a triangle touching a small circle or "bubble". Input and output lines are attached to the symbol; the bubble is typically attached to the output line. To symbolize active-low input, sometimes the bubble is instead placed on the input line. Sometimes only the circle portion of the symbol is used, and it is attached to the input or output of another gate; the symbols for NAND and NOR are formed in this way. A bar or overline ( ‾ ) above a variable can denote negation (or inversion or complement) performed by a NOT gate. A slash (/) before the variable is also used. Electronic implementation. 
An inverter circuit outputs a voltage representing the opposite logic-level to its input. Its main function is to invert the input signal applied. If the applied input is low then the output becomes high and vice versa. Inverters can be constructed using a single NMOS transistor or a single PMOS transistor coupled with a resistor. Since this "resistive-drain" approach uses only a single type of transistor, it can be fabricated at a low cost. However, because current flows through the resistor in one of the two states, the resistive-drain configuration is disadvantaged for power consumption and processing speed. Alternatively, inverters can be constructed using two complementary transistors in a CMOS configuration. This configuration greatly reduces power consumption since one of the transistors is always off in both logic states. Processing speed can also be improved due to the relatively low resistance compared to the NMOS-only or PMOS-only type devices. Inverters can also be constructed with bipolar junction transistors (BJT) in either a resistor–transistor logic (RTL) or a transistor–transistor logic (TTL) configuration. Digital electronics circuits operate at fixed voltage levels corresponding to a logical 0 or 1 (see binary). An inverter circuit serves as the basic logic gate to swap between those two voltage levels. Implementation determines the actual voltage, but common levels include (0, +5V) for TTL circuits. Digital building block. The inverter is a basic building block in digital electronics. Multiplexers, decoders, state machines, and other sophisticated digital devices may use inverters. The "hex inverter" is an integrated circuit that contains six ("hexa-") inverters. For example, the 7404 TTL chip which has 14 pins and the 4049 CMOS chip which has 16 pins, 2 of which are used for power/referencing, and 12 of which are used by the inputs and outputs of the six inverters (the 4049 has 2 pins with no connection). Analytical representation. 
formula_0 is the analytical representation of the NOT gate: formula_1 and formula_2. Alternatives. If no specific NOT gates are available, one can be made from the universal NAND or NOR gates, or an XOR gate by setting one input to high. Performance measurement. Digital inverter quality is often measured using the voltage transfer curve (VTC), which is a plot of output vs. input voltage. From such a graph, device parameters including noise tolerance, gain, and operating logic levels can be obtained. Ideally, the VTC appears as an inverted step function – this would indicate precise switching between "on" and "off" – but in real devices, a gradual transition region exists. The VTC indicates that for low input voltage, the circuit outputs high voltage; for high input, the output tapers off towards the low level. The slope of this transition region is a measure of quality – steep (close to vertical) slopes yield precise switching. The tolerance to noise can be measured by comparing the minimum input to the maximum output for each region of operation (on / off). Linear region as analog amplifier. Since the transition region is steep and approximately linear, a properly-biased CMOS inverter digital logic gate may be used as a high-gain analog linear amplifier or even combined to form an opamp. Maximum gain is achieved when the input and output operating points are the same voltage, which can be biased by connecting a resistor between the output and input. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
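The analytical form and the alternatives above can be sketched directly. This toy Python model treats bits as the integers 0 and 1 and builds NOT from NAND, NOR and XOR as described:

```python
# Toy model of the NOT gate and its alternatives, with bits as the
# integers 0 and 1; every variant agrees with the analytical form
# f(a) = 1 - a.
def nand(a, b): return 1 - (a & b)
def nor(a, b):  return 1 - (a | b)
def xor(a, b):  return a ^ b

def not_nand(a): return nand(a, a)   # tie both NAND inputs together
def not_nor(a):  return nor(a, a)    # tie both NOR inputs together
def not_xor(a):  return xor(a, 1)    # set one XOR input high
```

Tying both inputs of a NAND (or NOR) gate together, or fixing one XOR input high, yields the same truth table as the inverter, which is why these gates can substitute for a missing NOT.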
[ { "math_id": 0, "text": "f(a)=1-a" }, { "math_id": 1, "text": "f(0)=1-0=1" }, { "math_id": 2, "text": "f(1)=1-1=0" } ]
https://en.wikipedia.org/wiki?curid=100625
10063629
Rank–size distribution
Rank–size distribution is the distribution of size by rank, in decreasing order of size. For example, if a data set consists of items of sizes 5, 100, 5, and 8, the rank-size distribution is 100, 8, 5, 5 (ranks 1 through 4). This is also known as the rank–frequency distribution, when the source data are from a frequency distribution. These are particularly of interest when the data vary significantly in scale, such as city size or word frequency. These distributions frequently follow a power law distribution, or less well-known ones such as a stretched exponential function or parabolic fractal distribution, at least approximately for certain ranges of ranks; see below. A rank-size distribution is not a probability distribution or cumulative distribution function. Rather, it is a discrete form of a quantile function (inverse cumulative distribution) in reverse order, giving the size of the element at a given rank. Simple rank–size distributions. In the case of city populations, the resulting distribution in a country, a region, or the world will be characterized by its largest city, with other cities decreasing in size relative to it, initially at a rapid rate and then more slowly. This results in a few large cities and a much larger number of cities orders of magnitude smaller. For example, a rank 3 city would have one-third the population of a country's largest city, a rank 4 city would have one-fourth the population of the largest city, and so on.
These frequently have some adjectives added, most significantly "long tail", also "fat belly", "chunky middle", etc. In more traditional terms, these may be called "top-tier", "mid-tier", and "bottom-tier". The relative sizes and weights of these segments (how many ranks in each segment, and what proportion of the total population is in a given segment) qualitatively characterize a distribution, analogously to the skewness or kurtosis of a probability distribution. Namely: is it dominated by a few top members (head-heavy, like profits in the recorded music industry), or is it dominated by many small members (tail-heavy, like internet search queries), or distributed in some other way? Practically, this determines strategy: where should attention be focused? These distinctions may be made for various reasons. For example, they may arise from differing properties of the population, as in the 90–9–1 principle, which posits that in an internet community, 90% of the participants of a community only view content, 9% of the participants edit content, and 1% of the participants actively create new content. As another example, in marketing, one may pragmatically consider the head as all members that receive personalized attention, such as personal phone calls; while the tail is everything else, which does not receive personalized attention, for example receiving form letters; and the line is simply set at a point that resources allow, or where it makes business sense to stop. Purely quantitatively, a conventional way of splitting a distribution into head and tail is to consider the head to be the first "p" portion of ranks, which account for formula_0 of the overall population, as in the 80:20 Pareto principle, where the top 20% (head) comprises 80% of the overall population. The exact cutoff depends on the distribution – each distribution has a single such cutoff point – and for power laws it can be computed from the Pareto index. 
Segments may arise naturally due to actual changes in the behavior of the distribution as rank varies. Most common is the king effect, where the behavior of the top handful of items does not fit the pattern of the rest, as illustrated at the top for country populations, and above for most common words in English Wikipedia. For higher ranks, behavior may change at some point, and be well-modeled by different relations in different regions; on the whole by a piecewise function. For example, if two different power laws fit better in different regions, one can use a broken power law for the overall relation; the word frequency in English Wikipedia (above) also demonstrates this. The Yule–Simon distribution that results from preferential attachment (intuitively, "the rich get richer" and "success breeds success") simulates a broken power law and has been shown to "very well capture" word frequency versus rank distributions. It originated from trying to explain the population versus rank in different species. It has also been shown to fit city population versus rank better. Rank–size rule. The rank-size rule (or law) describes the remarkable regularity in many phenomena, including the distribution of city sizes, the sizes of businesses, the sizes of particles (such as sand), the lengths of rivers, the frequencies of word usage, and wealth among individuals. All are real-world observations that follow power laws, such as Zipf's law, the Yule distribution, or the Pareto distribution. If one ranks the population size of cities in a given country or in the entire world and calculates the natural logarithm of the rank and of the city population, the resulting graph will show a linear pattern. This is the rank-size distribution. Known exceptions to simple rank–size distributions. While Zipf's law works well in many cases, it tends to not fit the largest cities in many countries; one type of deviation is known as the King effect. 
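The log–log linearity asserted by the rank-size rule can be checked numerically. The sequence below is an exact Zipf sequence with an arbitrary rank-1 size, so the fitted slope comes out at −1; real city data would only approximate this:

```python
from math import log

# Sketch of the rank-size rule: for an exact Zipf sequence s_r = K / r,
# a least-squares fit of log(size) against log(rank) gives slope -1.
# K is an arbitrary example size for the rank-1 city.
K = 8_000_000
ranks = range(1, 101)
sizes = [K / r for r in ranks]

xs = [log(r) for r in ranks]
ys = [log(s) for s in sizes]
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
```

Fitting the same regression to empirical city sizes gives the Pareto exponent of the data; deviations of the largest cities from the fitted line are precisely the exceptions (such as the King effect) discussed in this section.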
A 2002 study found that Zipf's law was rejected in 53 of 73 countries, far more than would be expected based on random chance. The study also found that variations of the Pareto exponent are better explained by political variables than by economic geography variables like proxies for economies of scale or transportation costs. A 2004 study showed that Zipf's law did not work well for the five largest cities in six countries. In the richer countries, the distribution was flatter than predicted. For instance, in the United States, although its largest city, New York City, has more than twice the population of second-place Los Angeles, the two cities' metropolitan areas (also the two largest in the country) are much closer in population. In metropolitan-area population, New York City is only 1.3 times larger than Los Angeles. In other countries, the largest city would dominate much more than expected. For instance, in the Democratic Republic of the Congo, the capital, Kinshasa, is more than eight times larger than the second-largest city, Lubumbashi.

When considering the entire distribution of cities, including the smallest ones, the rank-size rule does not hold. Instead, the distribution is log-normal. This follows from Gibrat's law of proportionate growth.

Because exceptions are so easy to find, the function of the rule for analyzing cities today is to compare the city systems in different countries. The rank-size rule is a common standard by which urban primacy is established. A distribution such as that in the United States or China does not exhibit a pattern of primacy, but countries with a dominant "primate city" clearly vary from the rank-size rule in the opposite manner. Therefore, the rule helps to classify national (or regional) city systems according to the degree of dominance exhibited by the largest city. Countries with a primate city, for example, have typically had a colonial history that accounts for that city pattern.
If a normal city distribution pattern is expected to follow the rank-size rule (i.e. if the rank-size principle correlates with central place theory), then it suggests that those countries or regions with distributions that do not follow the rule have experienced some conditions that have altered the normal distribution pattern. For example, the presence of multiple regions within large nations such as China and the United States tends to favor a pattern in which more large cities appear than would be predicted by the rule. By contrast, small countries that had been connected (e.g. colonially/economically) to much larger areas will exhibit a distribution in which the largest city is much larger than would fit the rule, compared with the other cities—the excessive size of the city theoretically stems from its connection with a larger system rather than the natural hierarchy that central place theory would predict within that one country or region alone.
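The log–log linearity underlying the rank-size rule can be checked with a least-squares fit of log(size) against log(rank). The "city sizes" below are synthetic, drawn from an exact 1/rank law, so the fitted slope should come out near -1; real data would scatter around the line.

```python
# Sketch of the log-log check described above: for Zipf-distributed data,
# log(size) against log(rank) is close to a straight line with slope
# near -1. The sizes here are synthetic (exact 1/rank law), not real data.
import math

ranks = range(1, 101)
sizes = [1_000_000 / r for r in ranks]          # exact Zipf: size proportional to 1/rank

xs = [math.log(r) for r in ranks]
ys = [math.log(s) for s in sizes]

# Ordinary least squares for the slope of y on x.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(round(slope, 3))  # close to -1 for Zipf-distributed data
```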
[ { "math_id": 0, "text": "1 - p" } ]
https://en.wikipedia.org/wiki?curid=10063629
10063692
Constant-Q transform
Short-time Fourier transform with variable resolution In mathematics and signal processing, the constant-Q transform and variable-Q transform, known simply as the CQT and VQT, transform a data series to the frequency domain. They are related to the Fourier transform and very closely related to the complex Morlet wavelet transform, and their design is well suited to musical representation.

The transform can be thought of as a series of filters "f""k", logarithmically spaced in frequency, with the "k"-th filter having a spectral width "δf""k" equal to a multiple of the previous filter's width: formula_0 where "δf""k" is the bandwidth of the "k"-th filter, "f"min is the central frequency of the lowest filter, and "n" is the number of filters per octave.

Calculation. The short-time Fourier transform of "x"["n"] for a frame shifted to sample "m" is calculated as follows: formula_1 Given a data series at sampling frequency "f"s = 1/"T", "T" being the sampling period of our data, for each frequency bin we can define the following: formula_2 This is shown below to be the integer number of cycles processed at a center frequency "fk". As such, this somewhat defines the time complexity of the transform. formula_3 Since "fs"/"fk" is the number of samples processed per cycle at frequency "fk", "Q" is the number of integer cycles processed at this central frequency. The equivalent transform kernel can be found by using the following substitutions: formula_4 formula_5 The digital frequency formula_6 becomes formula_7. After these modifications, we are left with formula_8

Variable-Q bandwidth calculation. The variable-Q transform is the same as the constant-Q transform except that the filter Q is variable, hence the name. The variable-Q transform is useful at low frequencies, where a strict constant Q forces very long analysis windows and hence poor time resolution. There are several ways to calculate the bandwidth of the VQT; one of them uses the equivalent rectangular bandwidth as the bandwidth of each VQT bin.
The simplest way to implement a variable-Q transform is to add a bandwidth offset called "γ", like this one: formula_9 This formula can be modified to have extra parameters that adjust the sharpness of the transition between constant-Q and constant-bandwidth, like this: formula_10 with "α" as a parameter for the transition sharpness, where "α" = 2 is equivalent to a hyperbolic sine frequency scale in terms of frequency resolution.

Fast calculation. The direct calculation of the constant-Q transform (either using a naive DFT or the slightly faster Goertzel algorithm) is slow compared with the fast Fourier transform (FFT). However, the FFT can itself be employed, in conjunction with the use of a kernel, to perform the equivalent calculation much faster. An approximate inverse to such an implementation was proposed in 2006; it works by going back to the DFT, and is only suitable for pitch instruments. A development on this method with improved invertibility involves performing CQT (via FFT) octave-by-octave, using lowpass-filtered and downsampled results for consecutively lower pitches. Implementations of this method include the MATLAB implementation and LibROSA's Python implementation. LibROSA combines the subsampled method with the direct FFT method (which it dubs "pseudo-CQT") by having the latter process the higher frequencies as a whole. The sliding DFT can also be used for faster calculation of the constant-Q transform, since the sliding DFT is not restricted to linear frequency spacing or a single window size per bin. Alternatively, the constant-Q transform can be approximated by using multiple FFTs with different window sizes and/or sampling rates over different frequency ranges and then stitching the results together. This is called a multiresolution STFT; however, its window sizes differ per octave rather than per bin.

Comparison with the Fourier transform.
In general, the transform is well suited to musical data, and this can be seen in some of its advantages over the fast Fourier transform. As the output of the transform is effectively amplitude/phase against log frequency, fewer frequency bins are required to cover a given range effectively, and this proves useful where frequencies span several octaves. As the range of human hearing covers approximately ten octaves from 20 Hz to around 20 kHz, this reduction in output data is significant.

The transform exhibits a reduction in frequency resolution at higher frequency bins, which is desirable for auditory applications. It mirrors the human auditory system, in which spectral resolution is better at lower frequencies while temporal resolution improves at higher frequencies. At the bottom of the piano scale (about 30 Hz), a difference of 1 semitone is a difference of approximately 1.5 Hz, whereas at the top of the musical scale (about 5 kHz), a difference of 1 semitone is a difference of approximately 200 Hz. So for musical data the exponential frequency resolution of the constant-Q transform is ideal.

In addition, the harmonics of musical notes form a pattern characteristic of the timbre of the instrument in this transform. Assuming the same relative strengths of each harmonic, as the fundamental frequency changes, the relative position of these harmonics remains constant. This can make identification of instruments much easier. The constant-Q transform can also be used for automatic recognition of musical keys based on accumulated chroma content.

Relative to the Fourier transform, implementation of this transform is trickier. This is due to the varying number of samples used in the calculation of each frequency bin, which also affects the length of any windowing function implemented.
Also note that, because the frequency scale is logarithmic, there is no true zero-frequency (DC) term, which may be a drawback in applications that are interested in the DC term; for applications that are not, such as audio, this is not a drawback.
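The direct (slow) evaluation of the constant-Q transform described above can be sketched as follows, using the Hamming-style window with α = 25/46 and window lengths N[k] = Q·fs/fk; the signal and parameter choices are illustrative only.

```python
# Minimal direct evaluation of the constant-Q transform (the slow method;
# see "Fast calculation" above). Signal and parameters are illustrative.
import cmath, math

def cqt(x, fs, f_min, bins_per_octave, n_bins):
    Q = 1.0 / (2 ** (1.0 / bins_per_octave) - 1)    # Q from the filter spacing
    out = []
    for k in range(n_bins):
        fk = f_min * 2 ** (k / bins_per_octave)      # center frequency of bin k
        Nk = int(round(Q * fs / fk))                 # window length N[k]
        acc = 0j
        for n in range(min(Nk, len(x))):
            # Hamming-style window with alpha = 25/46
            w = 25/46 - (1 - 25/46) * math.cos(2 * math.pi * n / (Nk - 1))
            acc += w * x[n] * cmath.exp(-2j * math.pi * Q * n / Nk)
        out.append(acc / Nk)
    return out

fs = 8000
t = [n / fs for n in range(4000)]
x = [math.sin(2 * math.pi * 440 * ti) for ti in t]   # a 440 Hz tone
mags = [abs(c) for c in cqt(x, fs, f_min=55.0, bins_per_octave=12, n_bins=48)]
peak_bin = mags.index(max(mags))
# bin k has center frequency 55 * 2**(k/12); the peak should land near 440 Hz
```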
[ { "math_id": 0, "text": "\\delta f_k = 2^{1/n} \\cdot \\delta f_{k-1}\n= \\left( 2^{1/n} \\right)^k \\cdot \\delta f_\\text{min}," }, { "math_id": 1, "text": "X[k,m] = \\sum_{n=0}^{N-1} W[n-m] x[n] e^{-j 2 \\pi k n/N}. " }, { "math_id": 2, "text": " Q = \\frac{f_k}{\\delta f_k}." }, { "math_id": 3, "text": " N[k] = \\frac{f_\\text{s}}{\\delta f_k} = \\frac{f_\\text{s}}{f_k} Q. " }, { "math_id": 4, "text": "N = N[k] = Q \\frac{f_\\text{s}}{f_k}." }, { "math_id": 5, "text": " W[k,n] = \\alpha - (1 - \\alpha) \\cos \\frac{2 \\pi n}{N[k] - 1}, \\quad \\alpha = 25/46, \\quad 0 \\leqslant n \\leqslant N[k] - 1." }, { "math_id": 6, "text": " \\frac{2 \\pi k}{N} " }, { "math_id": 7, "text": " \\frac{2 \\pi Q}{N[k]} " }, { "math_id": 8, "text": "X[k] = \\frac{1}{N[k]} \\sum_{n=0}^{N[k]-1} W[k,n] x[n] e^{\\frac{-j2 \\pi Qn}{N[k]}}. " }, { "math_id": 9, "text": " \\delta f_k = \\left(\\frac{2}{f_k + \\gamma} \\right) Q. " }, { "math_id": 10, "text": " \\delta f_k = \\left(\\frac{2}{\\sqrt[\\alpha]{f_k^\\alpha + \\gamma^\\alpha}} \\right) Q. " } ]
https://en.wikipedia.org/wiki?curid=10063692
100638
Controllability
Dynamic system property Controllability is an important property of a control system and plays a crucial role in many control problems, such as stabilization of unstable systems by feedback, or optimal control. Controllability and observability are dual aspects of the same problem. Roughly, the concept of controllability denotes the ability to move a system around in its entire configuration space using only certain admissible manipulations. The exact definition varies slightly within the framework or the type of models applied. The following are examples of variations of controllability notions which have been introduced in the systems and control literature: State controllability. The state of a deterministic system, which is the set of values of all the system's state variables (those variables characterized by dynamic equations), completely describes the system at any given time. In particular, no information on the past of a system is needed to help in predicting the future, if the states at the present time are known and all current and future values of the control variables (those whose values can be chosen) are known. "Complete state controllability" (or simply "controllability" if no other context is given) describes the ability of an external input (the vector of control variables) to move the internal state of a system from any initial state to any final state in a finite time interval. That is, we can informally define controllability as follows: If for any initial state formula_0 and any final state formula_1 there exists an input sequence to transfer the system state from formula_0 to formula_1 in a finite time interval, then the system modeled by the state-space representation is controllable. For the simplest example of a continuous, LTI system, the row dimension of the state space expression formula_2 determines the interval; each row contributes a vector in the state space of the system. 
If there are not enough such vectors to span the state space of formula_3, then the system cannot achieve controllability. It may be necessary to modify formula_4 and formula_5 to better approximate the underlying differential relationships the model estimates in order to achieve controllability. Controllability does not mean that a reached state can be maintained, merely that any state can be reached. Controllability does not mean that arbitrary paths can be made through state space, only that there exists a path within the prescribed finite time interval.

Continuous linear systems. Consider the continuous linear system formula_6 formula_7 There exists a control formula_8 from state formula_9 at time formula_10 to state formula_11 at time formula_12 if and only if formula_13 is in the column space of formula_14 where formula_15 is the state-transition matrix, and formula_16 is the Controllability Gramian. In fact, if formula_17 is a solution to formula_18 then a control given by formula_19 would make the desired transfer. Note that the matrix formula_20 defined as above has the following properties for formula_21: formula_22 formula_23

Rank condition for controllability. The Controllability Gramian involves integration of the state-transition matrix of a system. A simpler condition for controllability is a rank condition analogous to the Kalman rank condition for time-invariant systems. Consider a continuous-time linear system formula_24 smoothly varying in an interval formula_25 of formula_26: formula_6 formula_7 The state-transition matrix formula_15 is also smooth. Introduce the n × m matrix-valued function formula_27 and define formula_28 = formula_29. Consider the matrix of matrix-valued functions obtained by listing all the columns of the formula_30, formula_31: formula_32. If there exists a formula_33 and a nonnegative integer k such that formula_34, then formula_24 is controllable.
If formula_24 is also analytically varying in an interval formula_25, then formula_24 is controllable on every nontrivial subinterval of formula_25 if and only if there exists a formula_33 and a nonnegative integer k such that formula_35. The above methods can still be complex to check, since they involve the computation of the state-transition matrix formula_15. Another equivalent condition is defined as follows. Let formula_36, and for each formula_37, define formula_38 = formula_39 In this case, each formula_40 is obtained directly from the data formula_41 The system is controllable if there exists a formula_33 and a nonnegative integer formula_42 such that formula_43.

Example. Consider a system varying analytically in formula_44 with matrices formula_45, formula_46 Then formula_47 and since this matrix has rank 3, the system is controllable on every nontrivial interval of formula_26.

Continuous linear time-invariant (LTI) systems. Consider the continuous linear time-invariant system formula_48 formula_49 where formula_3 is the formula_50 "state vector", formula_51 is the formula_52 "output vector", formula_53 is the formula_54 "input (or control) vector", formula_55 is the formula_56 "state matrix", formula_57 is the formula_58 "input matrix", formula_59 is the formula_60 "output matrix", and formula_61 is the formula_62 "feedthrough (or feedforward) matrix". The formula_63 controllability matrix is given by formula_64 The system is controllable if the controllability matrix has full row rank (i.e. formula_65).

Discrete linear time-invariant (LTI) systems. For a discrete-time linear state-space system (i.e. time variable formula_66) the state equation is formula_67 where formula_55 is an formula_56 matrix and formula_57 is a formula_58 matrix (i.e. formula_53 is formula_68 inputs collected in a formula_54 vector). The test for controllability is that the formula_63 matrix formula_69 has full row rank (i.e., formula_70).
That is, if the system is controllable, formula_71 will have formula_72 columns that are linearly independent; if formula_72 columns of formula_71 are linearly independent, each of the formula_72 states is reachable by giving the system proper inputs through the variable formula_73.

Derivation. Given the state formula_74 at an initial time, arbitrarily denoted as "k"=0, the state equation gives formula_75 then formula_76 and so on with repeated back-substitutions of the state variable, eventually yielding formula_77 or equivalently formula_78 Imposing any desired value of the state vector formula_79 on the left side, this can always be solved for the stacked vector of control vectors if and only if the matrix of matrices at the beginning of the right side has full row rank.

Example. For example, consider the case when formula_80 and formula_81 (i.e. only one control input). Thus, formula_57 and formula_82 are formula_83 vectors. If formula_84 has rank 2 (full rank), then formula_57 and formula_85 are linearly independent and span the entire plane. If the rank is 1, then formula_57 and formula_85 are collinear and do not span the plane.

Assume that the initial state is zero. At time formula_86: formula_87 At time formula_88: formula_89 At time formula_86 all of the reachable states are on the line formed by the vector formula_57. At time formula_88 all of the reachable states are linear combinations of formula_85 and formula_57. If the system is controllable then these two vectors span the entire plane, and this can be achieved by time formula_90. The assumption made that the initial state is zero is merely for convenience. Clearly if all states can be reached from the origin then any state can be reached from another state (merely a shift in coordinates). This example holds for all positive formula_72, but the case of formula_80 is easier to visualize.

Analogy for example of "n" = 2. Consider an analogy to the previous example system.
You are sitting in your car on an infinite, flat plane and facing north. The goal is to reach any point in the plane by driving a distance in a straight line, coming to a full stop, turning, and driving another distance, again in a straight line. If your car has no steering then you can only drive straight, which means you can only drive on a line (in this case the north-south line, since you started facing north). The lack of steering would be analogous to the rank of formula_71 being 1 (the two distances you drove are on the same line). Now, if your car did have steering then you could easily drive to any point in the plane, and this would be the analogous case to the rank of formula_71 being 2.

If you change this example to formula_91 then the analogy would be flying in space to reach any position in 3D space (ignoring the orientation of the aircraft). You are again allowed to fly in a straight line, come to a stop, and turn to face any direction before continuing. Although the 3-dimensional case is harder to visualize, the concept of controllability is still analogous.

Nonlinear systems. Nonlinear systems in the control-affine form formula_92 are locally accessible about formula_9 if the accessibility distribution formula_93 spans formula_72 space, when formula_72 equals the rank of formula_94 and R is given by: formula_95 Here, formula_96 is the repeated Lie bracket operation defined by formula_97 The controllability matrix for linear systems in the previous section can in fact be derived from this equation.

Null Controllability. If a discrete control system is null-controllable, it means that there exists a control formula_73 such that formula_98 for any initial state formula_99. In other words, it is equivalent to the condition that there exists a matrix formula_100 such that formula_101 is nilpotent. This can be easily shown by the controllable-uncontrollable decomposition.

Output controllability.
"Output controllability" is the related notion for the output of the system (denoted "y" in the previous equations); output controllability describes the ability of an external input to move the output from any initial condition to any final condition in a finite time interval. There need not be any relationship between state controllability and output controllability; in particular, a state controllable system is not necessarily output controllable, and an output controllable system is not necessarily state controllable. For a linear continuous-time system, like the example above, described by matrices formula_55, formula_57, formula_59, and formula_61, the formula_102 "output controllability matrix" formula_103 has full row rank (i.e. rank formula_104) if and only if the system is output controllable.

Controllability under input constraints. In systems with limited control authority, it is often no longer possible to move any initial state to any final state inside the controllable subspace. This phenomenon is caused by constraints on the input, which could be inherent to the system (e.g. due to a saturating actuator) or imposed on the system for other reasons (e.g. due to safety-related concerns). The controllability of systems with input and state constraints is studied in the context of reachability and viability theory.

Controllability in the behavioral framework. In the so-called behavioral system theoretic approach due to Willems (see people in systems and control), models considered do not directly define an input–output structure. In this framework systems are described by admissible trajectories of a collection of variables, some of which might be interpreted as inputs or outputs. A system is then defined to be controllable in this setting if any past part of a behavior (trajectory of the external variables) can be concatenated with any future trajectory of the behavior in such a way that the concatenation is contained in the behavior, i.e. is part of the admissible system behavior.

Stabilizability. A slightly weaker notion than controllability is that of stabilizability.
A system is said to be stabilizable when all uncontrollable state variables can be made to have stable dynamics. Thus, even though some of the state variables cannot be controlled (as determined by the controllability test above), all the state variables will still remain bounded during the system's behavior.

Reachable set. Let T ∈ 𝒯 and x ∈ "X" (where X is the set of all possible states and 𝒯 is an interval of time). The reachable set from x in time T is defined as: formula_105, where the notation x →T z denotes that there exists a state transition from x to z in time T. For autonomous systems the reachable set is given by: formula_106, where R is the controllability matrix. In terms of the reachable set, the system is controllable if and only if formula_107.

Proof. We have the following equalities: formula_108 formula_109 formula_110 Considering that the system is controllable, R must have n linearly independent columns. So: formula_111 formula_112 formula_113

A related set to the reachable set is the controllable set, defined by: formula_114. The relation between reachability and controllability is presented by Sontag: (a) An n-dimensional discrete linear system is controllable if and only if: formula_115 (where X is the set of all possible values or states of x and k is the time step). (b) A continuous-time linear system is controllable if and only if formula_116 for all e &gt; 0, if and only if formula_117 for all e &gt; 0.

Example. Let the system be an n-dimensional discrete-time-invariant system given by the formula: Φ(n,0,0,w) = formula_118 (where Φ(final time, initial time, state variable, restrictions) denotes the transition of the state variable x from an initial time 0 to a final time n under some input restrictions w).
It follows that the future state is in formula_119 ⇔ it is in the image of the linear map: Im(R) = R(A,B) ≜ Im(formula_120), which maps formula_121 → X. When formula_122 and formula_123, we identify R(A,B) with an n by nm matrix whose columns are the columns of formula_124, in that order. If the system is controllable, the rank of formula_120 is n. If this is true, the image of the linear map R is all of X. Based on that, we have: formula_115 with X = formula_125.
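The Kalman rank test and the input-solving step from the derivation above can be sketched numerically; the matrices below are illustrative, chosen so that B and AB span the plane, as in the n = 2 example.

```python
# Sketch (illustrative matrices): checking the Kalman rank condition
# rank([B  AB  ...  A^{n-1}B]) = n, then steering x(0) = 0 to a target
# in n steps by solving the linear system from the derivation above.
import numpy as np

def controllability_matrix(A, B):
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_controllable(A, B):
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]

# n = 2, single input: B and AB are linearly independent.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
R = controllability_matrix(A, B)           # [B  AB]
print(is_controllable(A, B))

# Steering x(0) = 0 to a target in n = 2 steps: the derivation gives
# x(2) = AB u(0) + B u(1) = [B  AB] [u(1), u(0)]^T, so solve for the inputs.
target = np.array([3.0, -1.0])
u1, u0 = np.linalg.solve(R, target)        # unknowns ordered (u(1), u(0))

x = np.zeros(2)
for u in (u0, u1):                         # simulate x(k+1) = A x(k) + B u(k)
    x = A @ x + B.flatten() * u
print(np.allclose(x, target))
```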
[ { "math_id": 0, "text": "\\mathbf{x_0}" }, { "math_id": 1, "text": "\\mathbf{x_f}" }, { "math_id": 2, "text": "\\dot{\\mathbf{x}} = \\mathbf{A}\\mathbf{x}(t) + \\mathbf{B}\\mathbf{u}(t)" }, { "math_id": 3, "text": "\\mathbf{x}" }, { "math_id": 4, "text": "\\mathbf{A}" }, { "math_id": 5, "text": "\\mathbf{B}" }, { "math_id": 6, "text": "\\dot{\\mathbf{x}}(t) = A(t) \\mathbf{x}(t) + B(t) \\mathbf{u}(t)" }, { "math_id": 7, "text": "\\mathbf{y}(t) = C(t) \\mathbf{x}(t) + D(t) \\mathbf{u}(t)." }, { "math_id": 8, "text": "u" }, { "math_id": 9, "text": "x_0" }, { "math_id": 10, "text": "t_0" }, { "math_id": 11, "text": "x_1" }, { "math_id": 12, "text": "t_1 > t_0" }, { "math_id": 13, "text": "x_1 - \\phi(t_0,t_1)x_0" }, { "math_id": 14, "text": "W(t_0,t_1) = \\int_{t_0}^{t_1} \\phi(t_0,t)B(t)B(t)^{T}\\phi(t_0,t)^{T} dt" }, { "math_id": 15, "text": "\\phi" }, { "math_id": 16, "text": "W(t_0,t_1)" }, { "math_id": 17, "text": "\\eta_0" }, { "math_id": 18, "text": "W(t_0,t_1)\\eta = x_1 - \\phi(t_0,t_1)x_0" }, { "math_id": 19, "text": "u(t) = -B(t)^{T}\\phi(t_0,t)^{T}\\eta_0" }, { "math_id": 20, "text": "W" }, { "math_id": 21, "text": "t_1 \\geq t_0" }, { "math_id": 22, "text": "\\frac{d}{dt}W(t,t_1) = A(t)W(t,t_1)+W(t,t_1)A(t)^{T}-B(t)B(t)^{T}, \\; W(t_1,t_1) = 0" }, { "math_id": 23, "text": "W(t_0,t_1) = W(t_0,t) + \\phi(t_0,t)W(t,t_1)\\phi(t_0,t)^{T}" }, { "math_id": 24, "text": "\\Sigma" }, { "math_id": 25, "text": "[t_0,t]" }, { "math_id": 26, "text": "\\mathbb{R}" }, { "math_id": 27, "text": "M_0(t) = \\phi(t_0,t)B(t)" }, { "math_id": 28, "text": "M_k(t)" }, { "math_id": 29, "text": "\\frac{\\mathrm{d^k} M_0}{\\mathrm{d} t^k}(t), k\\geqslant 1" }, { "math_id": 30, "text": "M_i" }, { "math_id": 31, "text": "i = 0,1, \\ldots, k" }, { "math_id": 32, "text": "M^{(k)}(t) := \\left[M_0(t), \\ldots, M_k(t)\\right] " }, { "math_id": 33, "text": "\\bar{t} \\in [t_0,t]" }, { "math_id": 34, "text": "\\operatorname{rank}M^{(k)}(\\bar{t})=n" }, { "math_id": 35, "text": 
"\\operatorname{rank}M^{(k)}(t_i)=n" }, { "math_id": 36, "text": "B_0(t) = B(t)" }, { "math_id": 37, "text": "i \\geq 0" }, { "math_id": 38, "text": "B_{i+1}(t) " }, { "math_id": 39, "text": "A(t)B_i(t) - \\frac{\\mathrm{d}}{\\mathrm{d} t}B_i(t). " }, { "math_id": 40, "text": "B_i" }, { "math_id": 41, "text": " (A(t),B(t))." }, { "math_id": 42, "text": "k" }, { "math_id": 43, "text": "\\textrm{rank}( \\left[ B_0(\\bar{t}), B_1(\\bar{t}), \\ldots, B_k(\\bar{t}) \\right]) = n " }, { "math_id": 44, "text": " (-\\infty,\\infty) " }, { "math_id": 45, "text": "A(t) = \\begin{bmatrix}\nt & 1 & 0\\\\ \n0 & t^{3} & 0\\\\ \n0 & 0 & t^{2} \n\\end{bmatrix}" }, { "math_id": 46, "text": "B(t) = \\begin{bmatrix}\n0 \\\\ \n1 \\\\ \n1 \n\\end{bmatrix}." }, { "math_id": 47, "text": " [B_0(0),B_1(0),B_2(0),B_3(0)] = \\begin{bmatrix}\n0 & 1 & 0 &-1\\\\ \n1 & 0 & 0&0\\\\ \n1 & 0 & 0&2\n\\end{bmatrix}" }, { "math_id": 48, "text": "\\dot{\\mathbf{x}}(t) = A \\mathbf{x}(t) + B \\mathbf{u}(t)" }, { "math_id": 49, "text": "\\mathbf{y}(t) = C \\mathbf{x}(t) + D \\mathbf{u}(t)" }, { "math_id": 50, "text": "n \\times 1" }, { "math_id": 51, "text": "\\mathbf{y}" }, { "math_id": 52, "text": "m \\times 1" }, { "math_id": 53, "text": "\\mathbf{u}" }, { "math_id": 54, "text": "r \\times 1" }, { "math_id": 55, "text": "A" }, { "math_id": 56, "text": "n \\times n" }, { "math_id": 57, "text": "B" }, { "math_id": 58, "text": "n \\times r" }, { "math_id": 59, "text": "C" }, { "math_id": 60, "text": "m \\times n" }, { "math_id": 61, "text": "D" }, { "math_id": 62, "text": "m \\times r" }, { "math_id": 63, "text": "n \\times nr" }, { "math_id": 64, "text": "R = \\begin{bmatrix}B & AB & A^{2}B & ...& A^{n-1}B\\end{bmatrix}" }, { "math_id": 65, "text": "\\operatorname{rank}(R)=n" }, { "math_id": 66, "text": "k\\in\\mathbb{Z}" }, { "math_id": 67, "text": "\\textbf{x}(k+1) = A\\textbf{x}(k) + B\\textbf{u}(k)" }, { "math_id": 68, "text": "r" }, { "math_id": 69, "text": "\\mathcal{C} = \\begin{bmatrix}B & AB & 
A^{2}B & \\cdots & A^{n-1}B\\end{bmatrix}" }, { "math_id": 70, "text": "\\operatorname{rank}(\\mathcal C) = n" }, { "math_id": 71, "text": "\\mathcal C" }, { "math_id": 72, "text": "n" }, { "math_id": 73, "text": "u(k)" }, { "math_id": 74, "text": "\\textbf{x}(0)" }, { "math_id": 75, "text": "\\textbf{x}(1) = A\\textbf{x}(0) + B\\textbf{u}(0)," }, { "math_id": 76, "text": "\\textbf{x}(2) = A\\textbf{x}(1) + B\\textbf{u}(1)= A^2\\textbf{x}(0)+AB\\textbf{u}(0)+B\\textbf{u}(1)," }, { "math_id": 77, "text": "\\textbf{x}(n)=B\\textbf{u}(n-1) + AB\\textbf{u}(n-2) + \\cdots + A^{n-1}B\\textbf{u}(0) + A^n\\textbf{x}(0)" }, { "math_id": 78, "text": "\\textbf{x}(n)-A^n\\textbf{x}(0)= [B \\, \\, AB \\, \\, \\cdots \\, \\, A^{n-1}B] [\\textbf{u}^T(n-1) \\, \\, \\textbf{u}^T(n-2) \\, \\, \\cdots \\, \\, \\textbf{u}^T(0)]^T." }, { "math_id": 79, "text": "\\textbf{x}(n)" }, { "math_id": 80, "text": "n=2" }, { "math_id": 81, "text": "r=1" }, { "math_id": 82, "text": "A B" }, { "math_id": 83, "text": "2 \\times 1" }, { "math_id": 84, "text": "\\begin{bmatrix}B & AB\\end{bmatrix}" }, { "math_id": 85, "text": "AB" }, { "math_id": 86, "text": "k=0" }, { "math_id": 87, "text": "x(1) = A\\textbf{x}(0) + B\\textbf{u}(0) = B\\textbf{u}(0)" }, { "math_id": 88, "text": "k=1" }, { "math_id": 89, "text": "x(2) = A\\textbf{x}(1) + B\\textbf{u}(1) = AB\\textbf{u}(0) + B\\textbf{u}(1)" }, { "math_id": 90, "text": "k=2" }, { "math_id": 91, "text": "n=3" }, { "math_id": 92, "text": "\\dot{\\mathbf{x}} = \\mathbf{f(x)} + \\sum_{i=1}^m \\mathbf{g}_i(\\mathbf{x})u_i" }, { "math_id": 93, "text": "R" }, { "math_id": 94, "text": "x" }, { "math_id": 95, "text": "R = \\begin{bmatrix} \\mathbf{g}_1 & \\cdots & \\mathbf{g}_m & [\\mathrm{ad}^k_{\\mathbf{g}_i}\\mathbf{\\mathbf{g}_j}] & \\cdots & [\\mathrm{ad}^k_{\\mathbf{f}}\\mathbf{\\mathbf{g}_i}] \\end{bmatrix}." 
}, { "math_id": 96, "text": "[\\mathrm{ad}^k_{\\mathbf{f}}\\mathbf{\\mathbf{g}}]" }, { "math_id": 97, "text": "[\\mathrm{ad}^k_{\\mathbf{f}}\\mathbf{\\mathbf{g}}] = \\begin{bmatrix} \\mathbf{f} & \\cdots & j & \\cdots & \\mathbf{[\\mathbf{f}, \\mathbf{g}]} \\end{bmatrix}. " }, { "math_id": 98, "text": "x(k_0) = 0" }, { "math_id": 99, "text": "x(0) = x_0" }, { "math_id": 100, "text": "F" }, { "math_id": 101, "text": "A+BF" }, { "math_id": 102, "text": "m \\times (n+1)r" }, { "math_id": 103, "text": "\\begin{bmatrix} CB & CAB & CA^2B & \\cdots & CA^{n-1}B & D\\end{bmatrix}" }, { "math_id": 104, "text": "m" }, { "math_id": 105, "text": "R^T{(x)} = \\left\\{ z \\in X : x \\overset{T}{\\rightarrow} z \\right\\}" }, { "math_id": 106, "text": "\\mathrm{Im}(R)=\\mathrm{Im}(B)+\\mathrm{Im}(AB)+....+\\mathrm{Im}(A^{n-1}B)" }, { "math_id": 107, "text": "\\mathrm{Im}(R)=\\mathbb{R}^n" }, { "math_id": 108, "text": "R=[B\\ AB ....A^{n-1}B]" }, { "math_id": 109, "text": "\\mathrm{Im}(R)=\\mathrm{Im}([B\\ AB ....A^{n-1}B])" }, { "math_id": 110, "text": "\\mathrm{dim(Im}(R))=\\mathrm{rank}(R)" }, { "math_id": 111, "text": "\\mathrm{dim(Im}(R))=n" }, { "math_id": 112, "text": "\\mathrm{rank}(R)=n" }, { "math_id": 113, "text": "\\mathrm{Im}(R)=\\R^{n}\\quad \\blacksquare" }, { "math_id": 114, "text": "C^T{(x)} = \\left\\{ z \\in X : z \\overset{T}{\\rightarrow} x \\right\\}" }, { "math_id": 115, "text": "R(0)=R^k{(0)=X}" }, { "math_id": 116, "text": "R(0)=R^e{(0)=X}" }, { "math_id": 117, "text": "C(0)=C^e{(0)=X}" }, { "math_id": 118, "text": "\\sum\\limits_{i=1}^n A^{i-1}Bw(n-1)" }, { "math_id": 119, "text": "R^k{(0)}" }, { "math_id": 120, "text": "[B\\ AB ....A^{n-1}B]" }, { "math_id": 121, "text": "u^{n}" }, { "math_id": 122, "text": "u=K^{m}" }, { "math_id": 123, "text": "X=K^{n}" }, { "math_id": 124, "text": "B,\\ AB, ....,A^{n-1}B" }, { "math_id": 125, "text": "\\R^{n}" } ]
https://en.wikipedia.org/wiki?curid=100638
10063937
Dual cone and polar cone
Concepts in convex analysis Dual cone and polar cone are closely related concepts in convex analysis, a branch of mathematics.

Dual cone. In a vector space. The dual cone "C*" of a subset "C" in a linear space "X" over the reals, e.g. Euclidean space R"n", with dual space "X*" is the set formula_0 where formula_1 is the duality pairing between "X" and "X*", i.e. formula_2. "C*" is always a convex cone, even if "C" is neither convex nor a cone.

In a topological vector space. If "X" is a topological vector space over the real or complex numbers, then the dual cone of a subset "C" ⊆ "X" is the following set of continuous linear functionals on "X": formula_3, which is the polar of the set -"C". No matter what "C" is, formula_4 will be a convex cone. If "C" ⊆ {0} then formula_5.

In a Hilbert space (internal dual cone). Alternatively, many authors define the dual cone in the context of a real Hilbert space (such as R"n" equipped with the Euclidean inner product) to be what is sometimes called the "internal dual cone". formula_6

Properties. Using this latter definition for "C*", we have that when "C" is a cone, the following properties hold: "C*" is a closed convex cone, and taking dual cones reverses inclusions, i.e. if formula_7, then formula_8.

Self-dual cones. A cone "C" in a vector space "X" is said to be "self-dual" if "X" can be equipped with an inner product ⟨⋅,⋅⟩ such that the internal dual cone relative to this inner product is equal to "C". Those authors who define the dual cone as the internal dual cone in a real Hilbert space usually say that a cone is self-dual if it is equal to its internal dual. This is slightly different from the above definition, which permits a change of inner product. For instance, the above definition makes a cone in R"n" with ellipsoidal base self-dual, because the inner product can be changed to make the base spherical, and a cone with spherical base in R"n" is equal to its internal dual.
The nonnegative orthant of R"n" and the space of all positive semidefinite matrices are self-dual, as are the cones with ellipsoidal base (often called "spherical cones", "Lorentz cones", or sometimes "ice-cream cones"). So are all cones in R3 whose base is the convex hull of a regular polygon with an odd number of vertices. A less regular example is the cone in R3 whose base is the "house": the convex hull of a square and a point outside the square forming an equilateral triangle (of the appropriate height) with one of the sides of the square. Polar cone. For a set "C" in "X", the polar cone of "C" is the set formula_9 It can be seen that the polar cone is equal to the negative of the dual cone, i.e. "Co" = −"C*". For a closed convex cone "C" in "X", the polar cone is equivalent to the polar set for "C". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
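As an illustrative numerical sketch (not part of the article; the sampling-based membership tests below are an assumption of this sketch), the self-duality of the nonnegative orthant and the relation "Co" = −"C*" can be checked for a simple cone in R2:

```python
import numpy as np

def in_dual_cone(y, cone_samples):
    """y is in the (internal) dual cone if <y, x> >= 0 for all sampled x in C."""
    return all(np.dot(y, x) >= -1e-12 for x in cone_samples)

def in_polar_cone(y, cone_samples):
    """y is in the polar cone if <y, x> <= 0 for all sampled x in C."""
    return all(np.dot(y, x) <= 1e-12 for x in cone_samples)

# Sample the nonnegative orthant C = {x in R^2 : x >= 0} by unit directions.
C = [np.array([np.cos(t), np.sin(t)]) for t in np.linspace(0, np.pi / 2, 50)]

# The orthant is self-dual: vectors with all components >= 0 lie in C* = C ...
assert in_dual_cone(np.array([1.0, 2.0]), C)       # in C, hence in C*
assert not in_dual_cone(np.array([-1.0, 2.0]), C)  # negative component -> not in C*

# ... and the polar cone C^o = -C* is the nonpositive orthant.
assert in_polar_cone(np.array([-1.0, -2.0]), C)
assert not in_polar_cone(np.array([1.0, -2.0]), C)
print("orthant checks passed")
```

Sampling only certifies membership against the sampled directions; for the orthant the extreme rays (1, 0) and (0, 1) are what matter, and the grid covers them.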
[ { "math_id": 0, "text": "C^* = \\left \\{y\\in X^*: \\langle y , x \\rangle \\geq 0 \\quad \\forall x\\in C \\right \\}," }, { "math_id": 1, "text": "\\langle y, x \\rangle" }, { "math_id": 2, "text": "\\langle y, x\\rangle = y(x)" }, { "math_id": 3, "text": "C^{\\prime} := \\left\\{ f \\in X^{\\prime} : \\operatorname{Re} \\left( f (x) \\right) \\geq 0 \\text{ for all } x \\in C \\right\\}" }, { "math_id": 4, "text": "C^{\\prime}" }, { "math_id": 5, "text": "C^{\\prime} = X^{\\prime}" }, { "math_id": 6, "text": "C^*_\\text{internal} := \\left \\{y\\in X: \\langle y , x \\rangle \\geq 0 \\quad \\forall x\\in C \\right \\}." }, { "math_id": 7, "text": "C_1 \\subseteq C_2" }, { "math_id": 8, "text": "C_2^* \\subseteq C_1^*" }, { "math_id": 9, "text": "C^o = \\left \\{y\\in X^*: \\langle y , x \\rangle \\leq 0 \\quad \\forall x\\in C \\right \\}." } ]
https://en.wikipedia.org/wiki?curid=10063937
10064136
Separation principle
In control theory, a separation principle, more formally known as a principle of separation of estimation and control, states that under some assumptions the problem of designing an optimal feedback controller for a stochastic system can be solved by designing an optimal observer for the state of the system, which feeds into an optimal deterministic controller for the system. Thus the problem can be broken into two separate parts, which facilitates the design. The first instance of such a principle is in the setting of deterministic linear systems, namely that if a stable observer and a stable state feedback are designed for a linear time-invariant system (LTI system hereafter), then the combined observer and feedback is stable. The separation principle does not hold in general for nonlinear systems. Another instance of the separation principle arises in the setting of linear stochastic systems, namely that state estimation (possibly nonlinear) together with an optimal state feedback controller designed to minimize a quadratic cost, is optimal for the stochastic control problem with output measurements. When process and observation noise are Gaussian, the optimal solution separates into a Kalman filter and a linear-quadratic regulator. This is known as linear-quadratic-Gaussian control. More generally, under suitable conditions and when the noise is a martingale (with possible jumps), again a separation principle applies and is known as the separation principle in stochastic control. The separation principle also holds for high gain observers used for state estimation of a class of nonlinear systems and control of quantum systems. Proof of separation principle for deterministic LTI systems. Consider a deterministic LTI system: formula_0 where formula_1 represents the input signal, formula_2 represents the output signal, and formula_3 represents the internal state of the system. 
We can design an observer of the form formula_4 and state feedback formula_5 Define the error "e": formula_6 Then formula_7 formula_8 Substituting the feedback law into the plant dynamics, we can write the closed-loop dynamics as formula_9 Since this is a block triangular matrix, the eigenvalues are just those of "A" − "BK" together with those of "A" − "LC". Thus the stability of the observer and the state feedback are independent, and the two gains can be designed separately. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
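This eigenvalue argument can be verified numerically; the following sketch uses an arbitrary example system and gains (the particular matrices are illustrative choices, not from the article):

```python
import numpy as np

# Hypothetical second-order plant and gains (any stabilizing choice would do).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[1.0, 1.0]])   # state-feedback gain
L = np.array([[4.0],
              [3.0]])        # observer gain

# Closed-loop matrix in (x, e) coordinates: [[A - BK, BK], [0, A - LC]].
top = np.hstack([A - B @ K, B @ K])
bottom = np.hstack([np.zeros((2, 2)), A - L @ C])
closed_loop = np.vstack([top, bottom])

# The spectrum of the block triangular matrix is the union of the block spectra.
eig_cl = np.sort_complex(np.linalg.eigvals(closed_loop))
eig_sep = np.sort_complex(np.concatenate([np.linalg.eigvals(A - B @ K),
                                          np.linalg.eigvals(A - L @ C)]))
assert np.allclose(eig_cl, eig_sep)
print("separation verified")
```

Because the spectra coincide, placing the feedback poles and the observer poles can be treated as two independent design problems.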
[ { "math_id": 0, "text": "\n\\begin{align}\n\\dot{x}(t) & = A x(t) + B u(t) \\\\\ny(t) & = C x(t)\n\\end{align}\n" }, { "math_id": 1, "text": "u(t)" }, { "math_id": 2, "text": "y(t)" }, { "math_id": 3, "text": "x(t)" }, { "math_id": 4, "text": "\\dot{\\hat{x}} = ( A - L C ) \\hat{x} + B u + L y \\, " }, { "math_id": 5, "text": "u(t) = - K \\hat{x} \\, ." }, { "math_id": 6, "text": "e = x - \\hat{x} \\, ." }, { "math_id": 7, "text": "\\dot{e} = (A - L C) e \\, " }, { "math_id": 8, "text": "u(t) = - K ( x - e ) \\, ." }, { "math_id": 9, "text": "\\begin{bmatrix}\n\\dot{x} \\\\\n\\dot{e} \\\\\n\\end{bmatrix} = \n\\begin{bmatrix}\nA - B K & BK \\\\\n0 & A - L C \\\\\n\\end{bmatrix}\n\\begin{bmatrix}\nx \\\\\ne \\\\\n\\end{bmatrix}." } ]
https://en.wikipedia.org/wiki?curid=10064136
10065
Empirical formula
Simplest whole number ratio of atoms present in a compound In chemistry, the empirical formula of a chemical compound is the simplest whole number ratio of atoms present in a compound. A simple example of this concept is that the empirical formula of sulfur monoxide, or SO, would simply be SO, as is the empirical formula of disulfur dioxide, S2O2. Thus, sulfur monoxide and disulfur dioxide, both compounds of sulfur and oxygen, have the same empirical formula. However, their molecular formulas, which express the number of atoms in each molecule of a chemical compound, are not the same. An empirical formula makes no mention of the arrangement or number of atoms. It is standard for many ionic compounds, like calcium chloride (CaCl2), and for macromolecules, such as silicon dioxide (SiO2). The molecular formula, on the other hand, shows the number of each type of atom in a molecule. The structural formula shows the arrangement of the molecule. It is also possible for different types of compounds to have equal empirical formulas. In the early days of chemistry, information regarding the composition of compounds came from elemental analysis, which gives information about the relative amounts of elements present in a compound, which can be written as percentages or mole ratios. However, chemists were not able to determine the exact amounts of these elements and were only able to know their ratios, hence the name "empirical formula". Since ionic compounds are extended networks of anions and cations, all formulas of ionic compounds are empirical. Calculation example. A chemical analysis of a sample of methyl acetate provides the following elemental data: 48.64% carbon (C), 8.16% hydrogen (H), and 43.20% oxygen (O). For the purposes of determining empirical formulas, it's assumed that we have 100 grams of the compound. If this is the case, the percentages will be equal to the mass of each element in grams. 
Step 1: Change each percentage to an expression of the mass of each element in grams. That is, 48.64% C becomes 48.64 g C, 8.16% H becomes 8.16 g H, and 43.20% O becomes 43.20 g O. Step 2: Convert the amount of each element in grams to its amount in moles formula_0 formula_1 formula_2 Step 3: Divide each of the resulting values by the smallest of these values (2.7) formula_3 formula_4 formula_5 Step 4: If necessary, multiply these numbers by integers in order to get whole numbers; if an operation is done to one of the numbers, it must be done to all of them. formula_6 formula_7 formula_8 Thus, the empirical formula of methyl acetate is C3H6O2. This formula also happens to be methyl acetate's molecular formula. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
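The four steps above can be sketched as a small routine (illustrative; the `limit_denominator` bound of 10 is an assumption of this sketch that covers common small ratios such as 1.5):

```python
from math import gcd
from fractions import Fraction

def empirical_formula(mass_percent, atomic_mass):
    """Steps 1-4: percent -> grams (per 100 g) -> moles -> ratios -> whole numbers."""
    # Steps 1-2: assume 100 g of compound, so percent == grams; divide by atomic mass.
    moles = {el: mass_percent[el] / atomic_mass[el] for el in mass_percent}
    # Step 3: divide by the smallest mole amount.
    smallest = min(moles.values())
    ratios = {el: Fraction(m / smallest).limit_denominator(10)
              for el, m in moles.items()}
    # Step 4: multiply every ratio by the LCM of the denominators to clear fractions.
    lcm = 1
    for r in ratios.values():
        lcm = lcm * r.denominator // gcd(lcm, r.denominator)
    return {el: int(r * lcm) for el, r in ratios.items()}

masses = {"C": 12.01, "H": 1.007, "O": 16.00}
sample = {"C": 48.64, "H": 8.16, "O": 43.20}
print(empirical_formula(sample, masses))  # {'C': 3, 'H': 6, 'O': 2}
```

For the methyl acetate data this reproduces the 3 : 6 : 2 ratio worked out above.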
[ { "math_id": 0, "text": "\\left(\\frac{48.64 \\mbox{ g C}}{1}\\right)\\left(\\frac{1 \\mbox{ mol }}{12.01 \\mbox{ g C}}\\right) = 4.049\\ \\text{mol}" }, { "math_id": 1, "text": "\\left(\\frac{8.16 \\mbox{ g H}}{1}\\right)\\left(\\frac{1 \\mbox{ mol }}{1.007 \\mbox{ g H}}\\right) = 8.095\\ \\text{mol}" }, { "math_id": 2, "text": "\\left(\\frac{43.20 \\mbox{ g O}}{1}\\right)\\left(\\frac{1 \\mbox{ mol }}{16.00 \\mbox{ g O}}\\right) = 2.7\\ \\text{mol}" }, { "math_id": 3, "text": "\\frac{4.049 \\mbox{ mol }}{2.7 \\mbox{ mol }} = 1.5" }, { "math_id": 4, "text": "\\frac{8.095 \\mbox{ mol }}{2.7 \\mbox{ mol }} = 3" }, { "math_id": 5, "text": "\\frac{2.7 \\mbox{ mol }}{2.7 \\mbox{ mol }} = 1" }, { "math_id": 6, "text": "1.5 \\times 2 = 3" }, { "math_id": 7, "text": "3 \\times 2 = 6" }, { "math_id": 8, "text": "1 \\times 2 = 2" } ]
https://en.wikipedia.org/wiki?curid=10065
1006597
Nanorobotics
Emerging technology field Nanoid robotics, or for short, nanorobotics or nanobotics, is an emerging technology field creating machines or robots, which are called nanorobots or simply nanobots, whose components are at or near the scale of a nanometer (10−9 meters). More specifically, nanorobotics (as opposed to microrobotics) refers to the nanotechnology engineering discipline of designing and building nanorobots with devices ranging in size from 0.1 to 10 micrometres and constructed of nanoscale or molecular components. The terms "nanobot", "nanoid", "nanite", "nanomachine" and "nanomite" have also been used to describe such devices currently under research and development. Nanomachines are largely in the research and development phase, but some primitive molecular machines and nanomotors have been tested. An example is a sensor having a switch approximately 1.5 nanometers across, able to count specific molecules in the chemical sample. The first useful applications of nanomachines may be in nanomedicine. For example, biological machines could be used to identify and destroy cancer cells. Another potential application is the detection of toxic chemicals, and the measurement of their concentrations, in the environment. Rice University has demonstrated a single-molecule car developed by a chemical process and including Buckminsterfullerenes (buckyballs) for wheels. It is actuated by controlling the environmental temperature and by positioning a scanning tunneling microscope tip. Another definition is a robot that allows precise interactions with nanoscale objects, or can manipulate with nanoscale resolution. Such devices are more related to microscopy or scanning probe microscopy, instead of the description of nanorobots as molecular machines. Using the microscopy definition, even a large apparatus such as an atomic force microscope can be considered a nanorobotic instrument when configured to perform nanomanipulation. 
For this viewpoint, macroscale robots or microrobots that can move with nanoscale precision can also be considered nanorobots. Nanorobotics theory. According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a "medical" use for Feynman's theoretical micro-machines (see biological machine). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the surgeon". The idea was incorporated into Feynman's case study 1959 essay "There's Plenty of Room at the Bottom." Since nano-robots would be microscopic in size, it would probably be necessary for very large numbers of them to work together to perform microscopic and macroscopic tasks. These nano-robot swarms, both those unable to replicate (as in utility fog) and those able to replicate unconstrained in the natural environment (as in grey goo and synthetic biology), are found in many science fiction stories, such as the Borg nano-probes in "Star Trek" and "The Outer Limits" episode "The New Breed". Some proponents of nano-robotics, in reaction to the grey goo scenarios that they earlier helped to propagate, hold the view that nano-robots able to replicate outside of a restricted factory environment do not form a necessary part of a purported productive nanotechnology, and that the process of self-replication, were it ever to be developed, could be made inherently safe. They further assert that their current plans for developing and using molecular manufacturing do not in fact include free-foraging replicators. A detailed theoretical discussion of nanorobotics, including specific design issues such as sensing, power communication, navigation, manipulation, locomotion, and onboard computation, has been presented in the medical context of nanomedicine by Robert Freitas. 
Some of these discussions remain at the level of unbuildable generality and do not approach the level of detailed engineering. Legal and ethical implications. Open technology. A document with a proposal on nanobiotech development using open design technology methods, as in open-source hardware and open-source software, has been addressed to the United Nations General Assembly. According to the document sent to the United Nations, in the same way that open source has in recent years accelerated the development of computer systems, a similar approach should benefit the society at large and accelerate nanorobotics development. The use of nanobiotechnology should be established as a human heritage for the coming generations, and developed as an open technology based on ethical practices for peaceful purposes. Open technology is stated as a fundamental key for such an aim. Nanorobot race. In the same ways that technology research and development drove the space race and nuclear arms race, a race for nanorobots is occurring. There is plenty of ground allowing nanorobots to be included among the emerging technologies. Some of the reasons are that large corporations, such as General Electric, Hewlett-Packard, Synopsys, Northrop Grumman and Siemens have been recently working in the development and research of nanorobots; surgeons are getting involved and starting to propose ways to apply nanorobots for common medical procedures; universities and research institutes were granted funds by government agencies exceeding $2 billion towards research developing nanodevices for medicine; bankers are also strategically investing with the intent to acquire beforehand rights and royalties on future nanorobots commercialisation. Some aspects of nanorobot litigation and related issues linked to monopoly have already arisen. 
A large number of patents have been granted recently on nanorobots, mostly by patent agents, companies specializing solely on building patent portfolios, and lawyers. After a long series of patents and eventually litigations, see for example the invention of radio, or the war of currents, emerging fields of technology tend to become a monopoly, which normally is dominated by large corporations. Approaches to manufacturing. Manufacturing nanomachines assembled from molecular components is a very challenging task. Because of the level of difficulty, many engineers and scientists continue working cooperatively across multidisciplinary approaches to achieve breakthroughs in this new area of development. Thus, it is quite understandable the importance of the following distinct techniques currently applied towards manufacturing nanorobots: Biochip. The joint use of nanoelectronics, photolithography, and new biomaterials provides a possible approach to manufacturing nanorobots for common medical uses, such as surgical instrumentation, diagnosis, and drug delivery. This method for manufacturing on nanotechnology scale is in use in the electronics industry since 2008. So, practical nanorobots should be integrated as nanoelectronics devices, which will allow tele-operation and advanced capabilities for medical instrumentation. Nubots. A "nucleic acid robot" (nubot) is an organic molecular machine at the nanoscale. DNA structure can provide means to assemble 2D and 3D nanomechanical devices. DNA based machines can be activated using small molecules, proteins and other molecules of DNA. Biological circuit gates based on DNA materials have been engineered as molecular machines to allow in-vitro drug delivery for targeted health problems. Such material based systems would work most closely to smart biomaterial drug system delivery, while not allowing precise in vivo teleoperation of such engineered prototypes. Surface-bound systems. 
Several reports have demonstrated the attachment of synthetic molecular motors to surfaces. These primitive nanomachines have been shown to undergo machine-like motions when confined to the surface of a macroscopic material. The surface-anchored motors could potentially be used to move and position nanoscale materials on a surface in the manner of a conveyor belt. Positional nanoassembly. Nanofactory Collaboration, founded by Robert Freitas and Ralph Merkle in 2000 and involving 23 researchers from 10 organizations and 4 countries, focuses on developing a practical research agenda specifically aimed at developing positionally-controlled diamond mechanosynthesis and a diamondoid nanofactory that would have the capability of building diamondoid medical nanorobots. Biohybrids. The emerging field of bio-hybrid systems combines biological and synthetic structural elements for biomedical or robotic applications. The constituent elements of bio-nanoelectromechanical systems (BioNEMS) are of nanoscale size, for example DNA, proteins or nanostructured mechanical parts. Thiol-ene e-beam resists allow the direct writing of nanoscale features, followed by the functionalization of the natively reactive resist surface with biomolecules. Other approaches use a biodegradable material attached to magnetic particles that allow them to be guided around the body. Bacteria-based. This approach proposes the use of biological microorganisms, like the bacterium "Escherichia coli" and "Salmonella typhimurium". Thus the model uses a flagellum for propulsion purposes. Electromagnetic fields normally control the motion of this kind of biological integrated device. Chemists at the University of Nebraska have created a humidity gauge by fusing a bacterium to a silicon computer chip. Virus-based. Retroviruses can be retrained to attach to cells and replace DNA. They go through a process called reverse transcription to deliver genetic packaging in a vector. 
Usually, these devices use the Pol and Gag genes of the virus for the capsid and delivery system. This process is called retroviral gene therapy, having the ability to re-engineer cellular DNA by use of viral vectors. This approach has appeared in the form of retroviral, adenoviral, and lentiviral gene delivery systems. These gene therapy vectors have been used in cats to send genes into the genetically modified organism (GMO), causing it to display the trait. Magnetic helical nanorobots. Research has led to the creation of helical silica particles coated with magnetic materials that can be maneuvered using a rotating magnetic field. Such nanorobots are not dependent on chemical reactions to fuel the propulsion. A triaxial Helmholtz coil can provide a directed rotating field in space. It was shown how such nanomotors can be used to measure the viscosity of non-Newtonian fluids at a resolution of a few microns. This technology promises the creation of viscosity maps inside cells and the extracellular milieu. Such nanorobots have been demonstrated to move in blood. Researchers have managed to controllably move such nanorobots inside cancer cells, allowing them to trace out patterns inside a cell. Nanorobots moving through the tumor microenvironment have demonstrated the presence of sialic acid in the cancer-secreted extracellular matrix.
In short, magnetic helical nanorobots translate a rotational motion into translational movement through a fluid in low Reynolds number environments. These nanorobots have been inspired by naturally occurring helical structures and microorganisms such as flagella, cilia, and "Escherichia coli" (otherwise known as E. coli), which rotate in a helical wave. Movement of magnetic helical nanorobots. One approach to the wireless manipulation of helical swimmers is through an externally applied rotating magnetic field. This can be done with a Helmholtz coil, as the helical swimmers are actuated by a rotating magnetic field. All magnetized objects within an externally imposed magnetic field will have both forces and torques exerted on them. The helical swimmers can rotate due to the magnetic field received by the magnetic head and the forces acting upon it. Once the whole structure feels the field, the helical shape of its body converts this rotational movement into a propulsive force. Magnetic forces (F) are proportional to the gradient of the magnetic field (∇B) on the magnetized object, and act to move the object to local maxima. Also, magnetic torques (τ) are proportional to the magnetic field (B) and act to align the internal magnetization of an object (M) with the field. The equations that express the interactions are as follows, where V is the volume of the magnetized object. formula_0 (Equation 1) formula_1 (Equation 2) Equation 1 indicates that increasing the volume of the magnetic material will increase the force experienced by the material proportionally. If the volume is doubled, the force will also double, assuming the magnetization (M) and the gradient of the magnetic field (∇B) remain constant. The same holds for the torque of the magnetic material, since it too is proportional to the volume. This increase in magnetic dipoles enhances the overall magnetic response of the material to an external magnetic field, resulting in greater force and torque. 
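The volume scaling expressed by Equations 1 and 2 can be illustrated numerically. The sketch below uses illustrative values, and representing ∇B as a 3 × 3 gradient matrix acting on M is an interpretive assumption of this sketch:

```python
import numpy as np

def magnetic_force(V, M, grad_B):
    """Equation 1: F = V * (M . grad(B)), with grad_B[i, j] = dB_i/dx_j."""
    return V * (grad_B @ M)

def magnetic_torque(V, M, B):
    """Equation 2: tau = V * (M x B)."""
    return V * np.cross(M, B)

M = np.array([0.0, 0.0, 5e5])        # magnetization along z, A/m (example value)
B = np.array([1e-3, 0.0, 0.0])       # field along x, T (example value)
grad_B = np.diag([0.1, 0.1, -0.2])   # example field gradient, T/m (trace-free)
V = 1e-18                            # roughly 1 um^3 of magnetic material, m^3

# Doubling the volume doubles both torque and force, with M, B, grad_B fixed.
tau1 = magnetic_torque(V, M, B)
tau2 = magnetic_torque(2 * V, M, B)
assert np.allclose(tau2, 2 * tau1)

f1 = magnetic_force(V, M, grad_B)
f2 = magnetic_force(2 * V, M, grad_B)
assert np.allclose(f2, 2 * f1)
```

This is exactly the proportionality the text describes: more magnetic volume means a proportionally larger response to the external field.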
Hence, when the magnetic material gets bigger, the helical swimmer can move faster. Movement of a helical swimmer with square magnetic head. To use the rotating magnetic field, a permanent magnet can be planted in the helical swimmer’s head, whose magnetization direction would be perpendicular to the swimmer body. When a rotating magnetic field is applied, the swimmer’s head experiences a magnetic torque, causing it to rotate. The helical shape converts this rotational movement into a propulsive force. As the swimmer’s head rotates, its helical tail generates a force against the surrounding fluid, propelling it forward. According to Equation 2, the magnetic torque around the "x"-axis is zero formula_2 at the initial position. After the magnet manipulator turns 45°, the magnetic field near the head position of the square magnet turns at an angle around the "x"-axis, as shown in the figure below. If the square magnet stays in its initial position, it will be subject to a magnetic torque around the "x"-axis formula_3 Thus, the helical swimmer will follow the magnetic field. If the magnet manipulator rotates one turn, the magnetic field near the head position of the swimmer projected on the plane "yoz" rotates a whole turn around the "x"-axis. This causes the helical shape to move, resulting in propulsion as follows: formula_4 This propulsion helps the helical structure to rotate with the angle of the force. As a result, the magnetic robot rotates around the "x"-axis by the action of the rotating magnetic field. Example biomedical applications. Due to their small scale and the propulsion provided by their helical shape, helical swimmers can be used in some biomedical applications such as targeted drug delivery and targeted cell delivery. In 2018, a biocompatible and biodegradable chitosan-based helical micro/nanoswimmer loaded with doxorubicin (DOX), a common anticancer drug, was proposed, designed to deliver its payload to a desired location. 
Using 3.4 × 10–1 W/cm2 intensity UV light radiation, when the swimmer approached the target location, a dose of 60% of the total DOX was released within 5 minutes. However, it was seen that the dosage release rate slowed down after the initial 5 minutes reported. This was theorized to be caused by a decreasing diffusion rate of DOX molecules coming from the center of the swimmer. Another group’s spirulina-based helical micro/nanoswimmer, also carrying DOX, used a different method for controlled drug release. Once the swimmer had reached its destination, near-infrared (NIR) laser irradiation was used to heat up the location to dissolve the swimmer into individual particles, releasing the drug in the process. Through multiple tests, it was found that weakly acidic external environments led to an increase in the dosage release rate. Using magnetic helical micro/nanorobots for cell transport can also lead to opportunities in solving male infertility, repairing damaged tissue, and cell assembly. In 2015, a helical micro-/nanomotor with a holding ring on the head was used to successfully capture and transport sperm cells with motion deficiencies. The helix device would approach the sperm cell’s tail and confine it with the body of the micro-/nanomotor. It would then use the holding ring to loosely capture the head of the sperm cell to prevent escape. After reaching the target location, the sperm cell would be released into the membrane of the oocyte by reversing the rotation of the helix device. This strategy was considered efficient while also reducing the risk of damage to the sperm cells. 3D printing. 3D printing is the process by which a three-dimensional structure is built through the various processes of additive manufacturing. Nanoscale 3D printing involves many of the same processes, incorporated at a much smaller scale. To print a structure at the 5-400 μm scale, the precision of the 3D printing machine needs to be improved greatly. 
A two-step process, combining 3D printing with laser-etched plates, was incorporated as an improvement technique. To be more precise at a nanoscale, the 3D printing process uses a laser etching machine, which etches the details needed for the segments of nanorobots into each plate. The plate is then transferred to the 3D printer, which fills the etched regions with the desired nanoparticle. The 3D printing process is repeated until the nanorobot is built from the bottom up. This 3D printing process has many benefits. First, it increases the overall accuracy of the printing process. Second, it has the potential to create functional segments of a nanorobot. The 3D printer uses a liquid resin, which is hardened at precisely the correct spots by a focused laser beam. The focal point of the laser beam is guided through the resin by movable mirrors and leaves behind a hardened line of solid polymer, just a few hundred nanometers wide. This fine resolution enables the creation of intricately structured sculptures as tiny as a grain of sand. This process takes place by using photoactive resins, which are hardened by the laser at an extremely small scale to create the structure. This process is quick by nanoscale 3D printing standards. Ultra-small features can be made with the 3D micro-fabrication technique used in multiphoton photopolymerisation. This approach uses a focused laser to trace the desired 3D object into a block of gel. Due to the nonlinear nature of photo excitation, the gel is cured to a solid only in the places where the laser was focused, while the remaining gel is then washed away. Feature sizes of under 100 nm are easily produced, as well as complex structures with moving and interlocked parts. Challenges in designing nanorobots. There are a number of challenges and problems that should be addressed when designing and building nanoscale machines with movable parts. 
The most obvious one is the need to develop very fine tools and manipulation techniques capable of assembling individual nanostructures with high precision into an operational device. A less evident challenge is related to the peculiarities of adhesion and friction at the nanoscale. It is impossible to take an existing design of a macroscopic device with movable parts and just reduce it to the nanoscale. Such an approach will not work due to the high surface energy of nanostructures, which means that all contacting parts will stick together following the energy minimization principle. The adhesion and static friction between parts can easily exceed the strength of materials, so the parts will break before they start to move relative to each other. This leads to the need to design movable structures with minimal contact area. In spite of the fast development of nanorobots, most of which are designed for drug delivery purposes, there is "still a long way to go before their commercialization and clinical applications can be achieved." Potential uses. Nanomedicine. Potential uses for nanorobotics in medicine include early diagnosis and targeted drug-delivery for cancer, biomedical instrumentation, surgery, pharmacokinetics, monitoring of diabetes, and health care. In such plans, future medical nanotechnology is expected to employ nanorobots injected into the patient to perform work at a cellular level. Such nanorobots intended for use in medicine should be non-replicating, as replication would needlessly increase device complexity, reduce reliability, and interfere with the medical mission. Nanotechnology provides a wide range of new technologies for developing customized means to optimize the delivery of pharmaceutical drugs. Today, harmful side effects of treatments such as chemotherapy are commonly a result of drug delivery methods that do not pinpoint their intended target cells accurately. 
Researchers at Harvard and MIT, however, have been able to attach special RNA strands, measuring nearly 10 nm in diameter, to nanoparticles, filling them with a chemotherapy drug. These RNA strands are attracted to cancer cells. When the nanoparticle encounters a cancer cell, it adheres to it, and releases the drug into the cancer cell. This directed method of drug delivery has great potential for treating cancer patients while avoiding negative effects (commonly associated with improper drug delivery). The first demonstration of nanomotors operating in living organisms was carried out in 2014 at the University of California, San Diego. MRI-guided nanocapsules are one potential precursor to nanorobots. Another useful application of nanorobots is assisting in the repair of tissue cells alongside white blood cells. Recruiting inflammatory cells or white blood cells (which include neutrophil granulocytes, lymphocytes, monocytes, and mast cells) to the affected area is the first response of tissues to injury. Because of their small size, nanorobots could attach themselves to the surface of recruited white cells, to squeeze their way out through the walls of blood vessels and arrive at the injury site, where they can assist in the tissue repair process. Certain substances could possibly be used to accelerate the recovery. The science behind this mechanism is quite complex. Passage of cells across the blood endothelium, a process known as transmigration, is a mechanism involving engagement of cell surface receptors to adhesion molecules, active force exertion and dilation of the vessel walls and physical deformation of the migrating cells. By attaching themselves to migrating inflammatory cells, the robots can in effect "hitch a ride" across the blood vessels, bypassing the need for a complex transmigration mechanism of their own. As of 2016, in the United States, the Food and Drug Administration (FDA) regulates nanotechnology on the basis of size. 
Nanocomposite particles that are controlled remotely by an electromagnetic field have also been developed. This series of nanorobots, now listed in Guinness World Records, can be used to interact with biological cells. Scientists suggest that this technology can be used for the treatment of cancer. Magnetic nanorobots have demonstrated capabilities to prevent and treat antimicrobial-resistant bacteria. The application of nanomotor implants has been proposed to achieve thorough disinfection of the dentine. Cultural references. The Nanites are characters on the TV show "Mystery Science Theater 3000". They're self-replicating, bio-engineered organisms that work on the ship and reside in the SOL's computer systems. They made their first appearance in Season 8. Nanites are used in a number of episodes in the television series "Travelers". They can be programmed and injected into injured people to perform repairs, and first appear in season 1. Nanites also feature in the 2016 expansion for the video game "Destiny", in which SIVA, a self-replicating nanotechnology, is used as a weapon. Nanites (referred to more often as nanomachines) are often referenced in Konami's "Metal Gear" series, being used to enhance and regulate abilities and body functions. In the "Star Trek" franchise TV shows, nanites serve as an important plot device. Starting with " in the third season of ", Borg Nanoprobes perform the function of maintaining the Borg cybernetic systems, as well as repairing damage to the organic parts of a Borg. They generate new technology inside a Borg when needed, as well as protecting them from many forms of disease. Nanites play a role in the "Deus Ex" video game series, being the basis of the nano-augmentation technology which gives augmented people superhuman abilities. Nanites are also mentioned in the Arc of a Scythe book series by Neal Shusterman and are used to heal all nonfatal injuries, regulate bodily functions, and considerably lessen pain. 
Nanites are also an integral part of "Stargate SG-1" and "Stargate Atlantis", where grey goo scenarios are portrayed. Nanomachines are central to the plot of the "Silo" book series, in which they are used as a weapon of mass destruction propagated through the air; they enter the human body undetected and, upon receiving a signal, kill the recipient. They are then used to wipe out the majority of the human race. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\boldsymbol{F} = V \\cdot (\\boldsymbol{M} \\cdot \\nabla \\boldsymbol{B})" }, { "math_id": 1, "text": "\\boldsymbol{\\tau} = V \\cdot (\\boldsymbol{M} \\boldsymbol{\\times} \\boldsymbol{B})" }, { "math_id": 2, "text": "(\\boldsymbol{M} \\boldsymbol{\\times} \\boldsymbol{B})*\\boldsymbol{ux} = 0" }, { "math_id": 3, "text": "(\\boldsymbol{M} \\boldsymbol{\\times} \\boldsymbol{B})*\\boldsymbol{ux} \\neq 0" }, { "math_id": 4, "text": "i_{push} = \\sin{\\varphi_{in}} \\sin{\\theta_{in}} \\cos{\\varphi_{in}} \\cos{\\theta_{in}} \\cos{\\varphi_{in}}" } ]
https://en.wikipedia.org/wiki?curid=1006597
10066313
Burgers vector
Vector representing lattice distortion due to dislocations in a crystal In materials science, the Burgers vector, named after Dutch physicist Jan Burgers, is a vector, often denoted as b, that represents the magnitude and direction of the lattice distortion resulting from a dislocation in a crystal lattice. Concepts. The vector's magnitude and direction are best understood when the dislocation-bearing crystal structure is first visualized "without" the dislocation, that is, in the "perfect" crystal structure. In this perfect crystal structure, a rectangle whose length and width are integer multiples of a (the unit cell edge length) is drawn "encompassing" the site of the original dislocation's origin. Once this encompassing rectangle is drawn, the dislocation can be introduced. This dislocation will have the effect of deforming, not only the perfect crystal structure, but the rectangle as well. The rectangle could have one of its sides disjoined from the perpendicular side, severing the connection of the length and width line segments of the rectangle at one of the rectangle's corners, and displacing each line segment from each other. What was once a rectangle before the dislocation was introduced is now an open geometric figure, whose opening defines the direction and magnitude of the Burgers vector. Specifically, the breadth of the opening defines the magnitude of the Burgers vector, and, when a set of fixed coordinates is introduced, an angle between the termini of the dislocated rectangle's length line segment and width line segment may be specified. When calculating the Burgers vector practically, one may draw a rectangular clockwise circuit (Burgers circuit) from a starting point to enclose the dislocation. The Burgers vector will be the vector needed to complete the circuit, i.e., from the start to the end of the circuit. One can also use a counterclockwise Burgers circuit from a starting point to enclose the dislocation. 
The Burgers vector will instead be from the end to the start of the circuit (see picture above). The direction of the vector depends on the plane of dislocation, which is usually on one of the closest-packed crystallographic planes. The magnitude is usually represented by the equation (for BCC and FCC lattices only): formula_0 where a is the unit cell edge length of the crystal, formula_1 is the magnitude of the Burgers vector, and h, k, and l are the components of the Burgers vector, formula_2 The coefficient a/2 appears because in BCC and FCC lattices, the shortest lattice vectors can be expressed as formula_3 Comparatively, for simple cubic lattices, formula_4 and hence the magnitude is represented by formula_5 Generally, the Burgers vector of a dislocation is defined by performing a line integral over the distortion field around the dislocation line formula_6 where the integration path L is a Burgers circuit around the dislocation line, ui is the displacement field, and formula_7 is the distortion field. In most metallic materials, the magnitude of the Burgers vector for a dislocation is equal to the interatomic spacing of the material, since a single dislocation will offset the crystal lattice by one close-packed crystallographic spacing unit. In edge dislocations, the Burgers vector and dislocation line are perpendicular to one another. In screw dislocations, they are parallel. The Burgers vector is significant in determining the yield strength of a material by affecting solute hardening, precipitation hardening and work hardening. The Burgers vector plays an important role in determining the direction of the dislocation line. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
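As a concrete illustration of the magnitude formulas above, here is a short numeric sketch. The function name and the lattice constants (copper FCC, a ≈ 0.3615 nm; iron BCC, a ≈ 0.2866 nm) are chosen for the example, not taken from the article:

```python
import math

def burgers_magnitude(a, h, k, l, centered=True):
    """Magnitude of the Burgers vector: |b| = (a/2)*sqrt(h^2+k^2+l^2)
    for BCC/FCC lattices (centered=True), or |b| = a*sqrt(h^2+k^2+l^2)
    for simple cubic lattices (centered=False)."""
    factor = a / 2 if centered else a
    return factor * math.sqrt(h**2 + k**2 + l**2)

# FCC copper: a ~ 0.3615 nm, shortest lattice vector along <110>
b_fcc = burgers_magnitude(0.3615, 1, 1, 0)
# BCC iron: a ~ 0.2866 nm, shortest lattice vector along <111>
b_bcc = burgers_magnitude(0.2866, 1, 1, 1)
print(b_fcc, b_bcc)  # roughly 0.256 nm and 0.248 nm
```

Both values come out close to the interatomic spacing of the respective metals, consistent with the remark above that a dislocation offsets the lattice by one close-packed spacing unit.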
[ { "math_id": 0, "text": "\n\\|\\mathbf{b}\\|\\ = (a/2)\\sqrt{h^2+k^2+l^2}\n" }, { "math_id": 1, "text": "\\|\\mathbf{b}\\|" }, { "math_id": 2, "text": "\\mathbf b = \\tfrac{a}{2} \\langle h k l \\rangle ;" }, { "math_id": 3, "text": "\\tfrac{a}{2} \\langle h k l \\rangle ." }, { "math_id": 4, "text": "\\mathbf b = a \\langle h k l \\rangle " }, { "math_id": 5, "text": "\n\\|\\mathbf{b}\\|\\ = a\\sqrt{h^2+k^2+l^2}\n" }, { "math_id": 6, "text": "\nb_i = \\oint_{L}w_{ij}d{x_j} = \\oint_{L}\\frac{\\partial u_i}{\\partial x_j}d{x_j}\n" }, { "math_id": 7, "text": "w_{ij}= \\tfrac{\\partial u_i}{\\partial x_j}" } ]
https://en.wikipedia.org/wiki?curid=10066313
10066673
Causal sets
Approach to quantum gravity using discrete spacetime The causal sets program is an approach to quantum gravity. Its founding principles are that spacetime is fundamentally discrete (a collection of discrete spacetime points, called the elements of the causal set) and that spacetime events are related by a partial order. This partial order has the physical meaning of the causality relations between spacetime events. History. For some decades after the formulation of General Relativity, work on Lorentzian geometry was mostly dedicated to understanding its physical implications and was not concerned with foundational issues. However, early attempts to use causality as a starting point were provided by Weyl and Lorentz. Alfred Robb, in two books published in 1914 and 1936, suggested an axiomatic framework in which causal precedence played a critical role. The first explicit proposal of quantising the causal structure of spacetime is attributed by S. Surya to Kronheimer and Penrose, who invented "Causal spaces" in order to "admit structures which can be very different from a manifold". Causal spaces are defined axiomatically, by considering not only causal precedence, but also chronological precedence. The program of causal sets is based on a theorem by David Malament, extending earlier results by E. C. Zeeman and by Hawking, King, and McCarthy. Malament's theorem states that if there is a bijective map between two past and future distinguishing spacetimes that preserves their causal structure, then the map is a conformal isomorphism. The conformal factor that is left undetermined is related to the volume of regions in the spacetime. This volume factor can be recovered by specifying a volume element for each spacetime point. The volume of a spacetime region could then be found by counting the number of points in that region. The causal sets program was initiated by Rafael Sorkin, who continues to be its main proponent. 
He has coined the slogan "Order + Number = Geometry" to characterize the above argument. The program provides a theory in which spacetime is fundamentally discrete while retaining local Lorentz invariance. Definition. A causal set (or causet) is a set formula_0 with a partial order relation formula_1 that is reflexive (for all formula_2, we have formula_3), antisymmetric (for all formula_4, formula_5 and formula_6 imply formula_7), transitive (for all formula_8, formula_5 and formula_9 imply formula_10), and locally finite (for all formula_11, the set formula_12 is finite). We'll write formula_13 if formula_14 and formula_15. The set formula_0 represents the set of spacetime events and the order relation formula_1 represents the causal relationship between events (see causal structure for the analogous idea in a Lorentzian manifold). Although this definition uses the reflexive convention, we could have chosen the irreflexive convention, in which the order relation is irreflexive and asymmetric. The causal relation of a Lorentzian manifold (without closed causal curves) satisfies the first three conditions. It is the local finiteness condition that introduces spacetime discreteness. Comparison to the continuum. Given a causal set we may ask whether it can be embedded into a Lorentzian manifold. An embedding would be a map taking elements of the causal set into points in the manifold such that the order relation of the causal set matches the causal ordering of the manifold. A further criterion is needed however before the embedding is suitable. If, on average, the number of causal set elements mapped into a region of the manifold is proportional to the volume of the region then the embedding is said to be "faithful". In this case we can consider the causal set to be 'manifold-like'. A central conjecture of the causal set program, called the "Hauptvermutung" ('fundamental conjecture'), is that the same causal set cannot be faithfully embedded into two spacetimes that are not similar on large scales. It is difficult to define this conjecture precisely because it is difficult to decide when two spacetimes are 'similar on large scales'. 
Modelling spacetime as a causal set would require us to restrict attention to those causal sets that are 'manifold-like'. Given a causal set this is a difficult property to determine. Sprinkling. The difficulty of determining whether a causal set can be embedded into a manifold can be approached from the other direction. We can create a causal set by sprinkling points into a Lorentzian manifold. By sprinkling points in proportion to the volume of the spacetime regions and using the causal order relations in the manifold to induce order relations between the sprinkled points, we can produce a causal set that (by construction) can be faithfully embedded into the manifold. To maintain Lorentz invariance this sprinkling of points must be done randomly using a Poisson process. Thus the probability of sprinkling formula_16 points into a region of volume formula_17 is formula_18 where formula_19 is the density of the sprinkling. Sprinkling points as a regular lattice would not maintain Lorentz invariance: under a boost, the number of lattice points in a region would no longer track its volume. Geometry. Some geometrical constructions in manifolds carry over to causal sets. When defining these we must remember to rely only on the causal set itself, not on any background spacetime into which it might be embedded. For an overview of these constructions, see. Geodesics. A "link" in a causal set is a pair of elements formula_4 such that formula_13 but with no formula_20 such that formula_21. A "chain" is a sequence of elements formula_22 such that formula_23 for formula_24. The length of a chain is formula_16. If every formula_25 in the chain forms a link, then the chain is called a "path". We can use this to define the notion of a geodesic between two causal set elements, provided they are order comparable, that is, causally connected (physically, this means they are time-like). 
A geodesic between two elements formula_26 is a chain consisting only of links such that formula_27 and formula_28, and whose length is maximal over all chains from formula_29 to formula_30. In general there can be more than one geodesic between two comparable elements. Myrheim first suggested that the length of such a geodesic should be directly proportional to the proper time along a timelike geodesic joining the two spacetime points. Tests of this conjecture have been made using causal sets generated from sprinklings into flat spacetimes. The proportionality has been shown to hold and is conjectured to hold for sprinklings in curved spacetimes too. Dimension estimators. Much work has been done in estimating the manifold dimension of a causal set. This involves algorithms using the causal set aiming to give the dimension of the manifold into which it can be faithfully embedded. The algorithms developed so far are based on finding the dimension of a Minkowski spacetime into which the causal set can be faithfully embedded. One approach relies on estimating the number of formula_31-length chains present in a sprinkling into formula_32-dimensional Minkowski spacetime. Counting the number of formula_31-length chains in the causal set then allows an estimate for formula_32 to be made. Another approach relies on the relationship between the proper time between two points in Minkowski spacetime and the volume of the spacetime interval between them. By computing the maximal chain length (to estimate the proper time) between two points formula_29 and formula_30 and counting the number of elements formula_33 such that formula_21 (to estimate the volume of the spacetime interval), the dimension of the spacetime can be calculated. These estimators should give the correct dimension for causal sets generated by high-density sprinklings into formula_32-dimensional Minkowski spacetime. Tests in conformally-flat spacetimes have shown these two methods to be accurate. Dynamics. An ongoing task is to develop the correct dynamics for causal sets. 
These would provide a set of rules that determine which causal sets correspond to physically realistic spacetimes. The most popular approach to developing causal set dynamics is based on the "sum-over-histories" version of quantum mechanics. This approach would perform a "sum-over-causal sets" by "growing" a causal set one element at a time. Elements would be added according to quantum mechanical rules and interference would ensure a large manifold-like spacetime would dominate the contributions. The best model for dynamics at the moment is a classical model in which elements are added according to probabilities. This model, due to David Rideout and Rafael Sorkin, is known as "classical sequential growth" (CSG) dynamics. The classical sequential growth model is a way to generate causal sets by adding new elements one after another. Rules for how new elements are added are specified and, depending on the parameters in the model, different causal sets result. In analogy to the path integral formulation of quantum mechanics, one approach to developing a quantum dynamics for causal sets has been to apply an action principle in the sum-over-causal sets approach. Sorkin has proposed a discrete analogue for the d'Alembertian, which can in turn be used to define the Ricci curvature scalar and thereby the "Benincasa–Dowker action" on a causal set. Monte-Carlo simulations have provided evidence for a continuum phase in 2D using the Benincasa–Dowker action. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
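The sprinkling construction and the chain-length estimate of proper time described above can be illustrated numerically. The following is an illustrative sketch with invented function names, not standard causal set tooling: it Poisson-sprinkles points into a causal diamond of 1+1-dimensional Minkowski spacetime and computes the longest chain by dynamic programming.

```python
import numpy as np

def sprinkle_diamond(rho, T=1.0, rng=None):
    """Poisson sprinkling into the causal diamond between (t,x)=(0,0)
    and (T,0) in 1+1 Minkowski spacetime (spacetime volume T^2/2)."""
    rng = np.random.default_rng() if rng is None else rng
    n = rng.poisson(rho * T * T / 2)
    # The diamond is a square of side T/sqrt(2) in light-cone
    # coordinates u = (t+x)/sqrt(2), v = (t-x)/sqrt(2).
    u = rng.uniform(0, T / np.sqrt(2), n)
    v = rng.uniform(0, T / np.sqrt(2), n)
    return (u + v) / np.sqrt(2), (u - v) / np.sqrt(2)  # t, x

def longest_chain(t, x):
    """Number of elements in the longest chain under the induced causal
    order: i precedes j iff t_j - t_i >= |x_j - x_i|."""
    order = np.argsort(t)
    t, x = t[order], x[order]
    best = np.ones(len(t), dtype=int)
    for j in range(len(t)):
        for i in range(j):
            if t[j] - t[i] >= abs(x[j] - x[i]):
                best[j] = max(best[j], best[i] + 1)
    return int(best.max()) if len(t) else 0

rng = np.random.default_rng(0)
t, x = sprinkle_diamond(rho=500, rng=rng)
# Myrheim's conjecture: the longest chain length grows in proportion
# to the proper time between the diamond's endpoints (here T = 1).
print(len(t), longest_chain(t, x))
```

In light-cone coordinates the causal order becomes a coordinate-wise order, so the longest chain is a longest increasing subsequence, which for a sprinkling of N points is known to grow like 2*sqrt(N).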
[ { "math_id": 0, "text": "C" }, { "math_id": 1, "text": "\\preceq" }, { "math_id": 2, "text": "x \\in C" }, { "math_id": 3, "text": " x \\preceq x " }, { "math_id": 4, "text": "x, y \\in C" }, { "math_id": 5, "text": " x \\preceq y" }, { "math_id": 6, "text": "y \\preceq x" }, { "math_id": 7, "text": "x = y" }, { "math_id": 8, "text": "x, y, z \\in C" }, { "math_id": 9, "text": "y \\preceq z " }, { "math_id": 10, "text": " x \\preceq z " }, { "math_id": 11, "text": "x, z \\in C" }, { "math_id": 12, "text": "\\{y \\in C | x \\preceq y \\preceq z\\}" }, { "math_id": 13, "text": "x \\prec y" }, { "math_id": 14, "text": "x \\preceq y " }, { "math_id": 15, "text": "x \\neq y" }, { "math_id": 16, "text": "n" }, { "math_id": 17, "text": "V" }, { "math_id": 18, "text": "P(n) = \\frac{(\\rho V)^n e^{-\\rho V}}{n!}" }, { "math_id": 19, "text": " \\rho " }, { "math_id": 20, "text": "z \\in C" }, { "math_id": 21, "text": "x \\prec z \\prec y" }, { "math_id": 22, "text": "x_0,x_1,\\ldots,x_n" }, { "math_id": 23, "text": "x_i \\prec x_{i+1}" }, { "math_id": 24, "text": "i=0,\\ldots,n-1" }, { "math_id": 25, "text": "x_i, x_{i+1}" }, { "math_id": 26, "text": "x \\preceq y \\in C" }, { "math_id": 27, "text": "x_0 = x" }, { "math_id": 28, "text": "x_n = y" }, { "math_id": 29, "text": "x" }, { "math_id": 30, "text": "y" }, { "math_id": 31, "text": "k" }, { "math_id": 32, "text": "d" }, { "math_id": 33, "text": "z" } ]
https://en.wikipedia.org/wiki?curid=10066673
10067215
Quality engineering
Principles and practice of product and service quality assurance and control Quality engineering is the discipline of engineering concerned with the principles and practice of product and service quality assurance and control. In software development, it is the management, development, operation and maintenance of IT systems and enterprise architectures to a high quality standard. Description. Quality engineering is the discipline of engineering that creates and implements strategies for quality assurance in product development and production as well as software development. Quality engineers focus on optimizing product quality, which W. Edwards Deming defined as: formula_0 The quality engineering body of knowledge includes: Roles. Auditor: Quality engineers may be responsible for auditing their own companies or their suppliers for compliance to international quality standards such as ISO9000 and AS9100. They may also be independent auditors under an auditing body. Process quality: Quality engineers may be tasked with value stream mapping and statistical process control to determine if a process is likely to produce a defective product. They may create inspection plans and criteria to ensure defective parts are detected prior to completion. Supplier quality: Quality engineers may be responsible for auditing suppliers or performing root cause and corrective action at their facility or overseeing such activity to prevent the delivery of defective products. Software. IT services are increasingly interlinked in workflows across platform, device and organisational boundaries, for example in cyber-physical systems, business-to-business workflows or when using cloud services. In such contexts, quality engineering facilitates the necessary all-embracing consideration of quality attributes. In such contexts an "end-to-end" view of quality from management to operation is vital. 
Quality engineering integrates methods and tools from enterprise architecture management, software product management, IT service management, software engineering and systems engineering, and from software quality management and information security management. This means that quality engineering goes beyond the classic disciplines of software engineering, information security management or software product management, since it integrates management issues (such as business and IT strategy, risk management, business process views, knowledge and information management, operative performance management), design considerations (including the software development process, requirements analysis, software testing) and operative considerations (such as configuration, monitoring, IT service management). In many of the fields where it is used, quality engineering is closely linked to compliance with legal and business requirements, contractual obligations and standards. As far as quality attributes are concerned, reliability, security and safety of IT services play a predominant role. In quality engineering, quality objectives are implemented in a collaborative process. This process requires the interaction of largely independent actors whose knowledge is based on different sources of information. Quality objectives. Quality objectives describe basic requirements for software quality. In quality engineering they often address the quality attributes of availability, security, safety, reliability and performance. With the help of quality models like ISO/IEC 25000 and methods like the Goal Question Metric approach it is possible to attribute metrics to quality objectives. This allows measuring the degree of attainment of quality objectives. This is a key component of the quality engineering process and, at the same time, is a prerequisite for its continuous monitoring and control. 
To ensure effective and efficient measuring of quality objectives the integration of core numbers, which were identified manually (e.g. by expert estimates or reviews), and automatically identified metrics (e.g. by statistical analysis of source codes or automated regression tests) as a basis for decision-making is favourable. Actors. The end-to-end quality management approach to quality engineering requires numerous actors with different responsibilities and tasks, different expertise and involvement in the organisation. Different roles involved in quality engineering: Typically, these roles are distributed over geographic and organisational boundaries. Therefore, appropriate measures need to be taken to coordinate the heterogeneous tasks of the various roles in quality engineering and to consolidate and synchronize the data and information necessary to fulfill the tasks, and make them available to each actor in an appropriate form. Knowledge management. Knowledge management plays an important part in quality engineering. The quality engineering knowledge base comprises manifold structured and unstructured data, ranging from code repositories via requirements specifications, standards, test reports and enterprise architecture models to system configurations and runtime logs. Software and system models play an important role in mapping this knowledge. The data of the quality engineering knowledge base are generated, processed and made available both manually as well as tool-based in a geographically, organisationally and technically distributed context. Of prime importance is the focus on quality assurance tasks, early recognition of risks, and appropriate support for the collaboration of actors. This results in the following requirements for a quality engineering knowledge base: Collaborative processes. 
The quality engineering process comprises all tasks carried out manually and in a (semi-)automated way to identify, fulfil and measure any quality features in a chosen context. The process is highly collaborative in the sense that it requires the interaction of actors acting largely independently from each other. The quality engineering process has to integrate any existing sub-processes, which may comprise highly structured processes such as IT service management and processes with limited structure such as agile software development. Another important aspect is a change-driven procedure, in which change events, such as changed requirements, are dealt with in the local context of information and actors affected by such change. A prerequisite for this is methods and tools that support change propagation and change handling. The objective of an efficient quality engineering process is the coordination of automated and manual quality assurance tasks. Code review or elicitation of quality objectives are examples of manual tasks, while regression tests and the collection of code metrics are examples of automatically performed tasks. The quality engineering process (or its sub-processes) can be supported by tools such as ticketing systems or security management tools. See also. Associations References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\text{Quality} = \\frac{\\text{Results of work efforts}}{\\text{Total costs}}" } ]
https://en.wikipedia.org/wiki?curid=10067215
10067276
Shifted Gompertz distribution
The shifted Gompertz distribution is the distribution of the larger of two independent random variables one of which has an exponential distribution with parameter formula_1 and the other has a Gumbel distribution with parameters formula_2 and formula_1. In its original formulation the distribution was expressed referring to the Gompertz distribution instead of the Gumbel distribution but, since the Gompertz distribution is a reverted Gumbel distribution, the labelling can be considered as accurate. It has been used as a model of adoption of innovations. It was proposed by Bemmaor (1994). Some of its statistical properties have been studied further by Jiménez and Jodrá (2009) and Jiménez Torres (2014). It has been used to predict the growth and decline of social networks and on-line services and shown to be superior to the Bass model and Weibull distribution (Bauckhage and Kersting 2014). Specification. Probability density function. The probability density function of the shifted Gompertz distribution is: formula_3 where formula_0 is a scale parameter and formula_4 is a shape parameter. In the context of diffusion of innovations, formula_5 can be interpreted as the overall appeal of the innovation and formula_6 is the propensity to adopt in the propensity-to-adopt paradigm. The larger formula_1 is, the stronger the appeal and the larger formula_6 is, the smaller the propensity to adopt. The distribution can be reparametrized according to the external versus internal influence paradigm with formula_7 as the coefficient of external influence and formula_8 as the coefficient of internal influence. Hence: formula_9 formula_10 When formula_11, the shifted Gompertz distribution reduces to an exponential distribution. When formula_12, the proportion of adopters is nil: the innovation is a complete failure. The shape parameter of the probability density function is equal to formula_13. 
Similar to the Bass model, the hazard rate formula_14 is equal to formula_15 when formula_16 is equal to formula_17; it approaches formula_18 as formula_16 gets close to formula_19. See Bemmaor and Zheng for further analysis. Cumulative distribution function. The cumulative distribution function of the shifted Gompertz distribution is: formula_20 Equivalently, formula_21 formula_22 Properties. The shifted Gompertz distribution is right-skewed for all values of formula_2. It is more flexible than the Gumbel distribution. The hazard rate is a concave function of formula_23 which increases from formula_24 to formula_5: its curvature is all the steeper as formula_6 is large. In the context of the diffusion of innovations, the effect of word of mouth (i.e., the previous adopters) on the likelihood to adopt decreases as the proportion of adopters increases. (For comparison, in the Bass model, the effect remains the same over time). The parameter formula_25 captures the growth of the hazard rate when formula_26 varies from formula_27 to formula_19. Shapes. The shifted Gompertz density function can take on different shapes depending on the values of the shape parameter formula_2: formula_30 where formula_31 is the smallest root of formula_32 which is formula_33 Related distributions. When formula_2 varies according to a gamma distribution with shape parameter formula_34 and scale parameter formula_35 (mean = formula_36), the distribution of formula_37 is Gamma/Shifted Gompertz (G/SG). When formula_34 is equal to one, the G/SG reduces to the Bass model (Bemmaor 1994). The three-parameter G/SG has been applied by Dover, Goldenberg and Shapira (2009) and Van den Bulte and Stremersch (2004) among others in the context of the diffusion of innovations. 
The model is discussed in Chandrasekaran and Tellis (2007). Similar to the shifted Gompertz distribution, the G/SG can either be represented according to the propensity-to-adopt paradigm or according to the innovation-imitation paradigm. In the latter case, it includes three parameters: formula_38 and formula_39 with formula_40 and formula_8. The parameter formula_39 modifies the curvature of the hazard rate as expressed as a function of formula_41: when formula_39 is less than 0.5, it decreases to a minimum prior to increasing at an increasing rate as formula_42 increases; it is convex when formula_39 is less than one and larger than or equal to 0.5, linear when formula_39 is equal to one, and concave when formula_39 is larger than one. Here are some special cases of the G/SG distribution in the case of homogeneity (across the population) with respect to the likelihood to adopt at a given time: formula_43 = Exponential formula_44; formula_45 = left-skewed two-parameter distribution formula_46; formula_47 = Bass model formula_46; formula_48 = shifted Gompertz formula_46; with: formula_49 One can compare the parameters formula_15 and formula_50 across the values of formula_39 as they capture the same notions. In all the cases, the hazard rate is either constant or a monotonically increasing function of formula_41 (positive word of mouth). As the diffusion curve is all the more skewed as formula_39 becomes large, we expect formula_50 to decrease as the level of right-skew increases. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
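Because the shifted Gompertz distribution is defined as the larger of an exponential and a Gumbel random variable, it can be simulated directly. In the sketch below (function names and parameter values are arbitrary choices for illustration), the empirical CDF of such maxima is compared with the closed-form CDF given above; the Gumbel component with CDF exp(-η e^(-bx)) corresponds to location ln(η)/b and scale 1/b in the standard parametrization.

```python
import numpy as np

def shifted_gompertz_sample(b, eta, size, rng=None):
    """Draw from the shifted Gompertz distribution as the maximum of an
    Exp(b) variable and a Gumbel variable with CDF exp(-eta*exp(-b*x)),
    i.e. Gumbel(loc=ln(eta)/b, scale=1/b)."""
    rng = np.random.default_rng() if rng is None else rng
    expo = rng.exponential(scale=1.0 / b, size=size)
    gumb = rng.gumbel(loc=np.log(eta) / b, scale=1.0 / b, size=size)
    return np.maximum(expo, gumb)

def shifted_gompertz_cdf(x, b, eta):
    """F(x; b, eta) = (1 - exp(-b*x)) * exp(-eta*exp(-b*x))."""
    return (1.0 - np.exp(-b * x)) * np.exp(-eta * np.exp(-b * x))

rng = np.random.default_rng(1)
b, eta = 1.5, 2.0  # example parameter values
samples = shifted_gompertz_sample(b, eta, 200_000, rng)
for x in (0.5, 1.0, 2.0):
    # Empirical and theoretical CDF values should agree closely.
    print(x, np.mean(samples <= x), shifted_gompertz_cdf(x, b, eta))
```

The CDF of a maximum of independent variables is the product of their CDFs, which is exactly how the closed form above factors into the exponential part and the Gumbel part.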
[ { "math_id": 0, "text": "b \\geq 0" }, { "math_id": 1, "text": " b " }, { "math_id": 2, "text": "\\eta" }, { "math_id": 3, "text": " f(x;b,\\eta) = b e^{-bx} e^{-\\eta e^{-bx}}\\left[1 + \\eta\\left(1 - e^{-bx}\\right)\\right] \\text{ for }x \\geq 0. \\," }, { "math_id": 4, "text": "\\eta \\geq 0" }, { "math_id": 5, "text": " b " }, { "math_id": 6, "text": "\\eta " }, { "math_id": 7, "text": " p = f(0;b,\\eta) = be^{-\\eta }" }, { "math_id": 8, "text": " q = b - p " }, { "math_id": 9, "text": " f(x;p,q) = (p + q) e^{-(p + q)x} e^{-\\ln(1 + q/p) e^{-(p+q)x}}\\left[1 + \\ln(1 + q/p)\\left(1 - e^{-(p + q)x}\\right)\\right] \\text{ for }x \\geq 0, p, q \\geq 0. \\," }, { "math_id": 10, "text": " = (p + q) e^{-(p + q)x} {(1 + q/p)^{-e^{-(p+q)x}}}\\left[1 + \\ln(1 + q/p)\\left(1 - e^{-(p + q)x}\\right)\\right] \\text{ for }x \\geq 0, p, q \\geq 0. \\," }, { "math_id": 11, "text": " q = 0 " }, { "math_id": 12, "text": " p = 0" }, { "math_id": 13, "text": " q/p " }, { "math_id": 14, "text": "z(x;p,q)" }, { "math_id": 15, "text": " p " }, { "math_id": 16, "text": "x " }, { "math_id": 17, "text": " 0 " }, { "math_id": 18, "text": " p + q " }, { "math_id": 19, "text": "\\infty" }, { "math_id": 20, "text": " F(x;b,\\eta) = \\left(1 - e^{-bx}\\right)e^{-\\eta e^{-bx}} \\text{ for }x \\geq 0. \\," }, { "math_id": 21, "text": " F(x;p, q) = \\left(1 - e^{-(p + q)x}\\right)e^{-\\ln(1 + q/p)e^{-(p+q)x}} \\text{ for }x \\geq 0. \\," }, { "math_id": 22, "text": " = \\left(1 - e^{-(p + q)x}\\right){(1 + q/p)^{-e^{-(p+q)x}}} \\text{ for }x \\geq 0. 
\\," }, { "math_id": 23, "text": "F(x;b,\\eta)" }, { "math_id": 24, "text": " be^{-\\eta}" }, { "math_id": 25, "text": " q = b(1-e^{-\\eta}) " }, { "math_id": 26, "text": " x " }, { "math_id": 27, "text": " 0 " }, { "math_id": 28, "text": "0 < \\eta \\leq 0.5\\," }, { "math_id": 29, "text": "\\eta > 0.5\\," }, { "math_id": 30, "text": "\\text{mode}=-\\frac{\\ln(z^\\star)}{b}\\, \\qquad 0 < z^\\star < 1" }, { "math_id": 31, "text": "z^\\star\\," }, { "math_id": 32, "text": "\\eta^2z^2 - \\eta(3 + \\eta)z + \\eta + 1 = 0\\,," }, { "math_id": 33, "text": "z^\\star = [3 + \\eta - (\\eta^2 + 2\\eta + 5)^{1/2}]/(2\\eta)." }, { "math_id": 34, "text": "\\alpha" }, { "math_id": 35, "text": "\\beta" }, { "math_id": 36, "text": "\\alpha\\beta" }, { "math_id": 37, "text": "x" }, { "math_id": 38, "text": " p, q " }, { "math_id": 39, "text": "\\alpha " }, { "math_id": 40, "text": " p = f(0;b,\\beta, \\alpha) = b/(1+\\beta)^{\\alpha }" }, { "math_id": 41, "text": "F(x;p,q, \\alpha)" }, { "math_id": 42, "text": "F(x;p,q, \\alpha < 1/2)" }, { "math_id": 43, "text": "F(x;p,q, \\alpha = 0)" }, { "math_id": 44, "text": "(p + q)" }, { "math_id": 45, "text": "F(x;p,q, \\alpha = 1/2)" }, { "math_id": 46, "text": "(p,q)" }, { "math_id": 47, "text": "F(x;p,q, \\alpha = 1)" }, { "math_id": 48, "text": "F(x;p,q, \\alpha = \\infty)" }, { "math_id": 49, "text": " F(x;p, q,\\alpha = 1/2) = \\left(1 - e^{-(p + q)x}\\right)/{(1 + (q/p)(2+q/p)e^{-(p+q)x})^{1/2}} \\text{ for }x \\geq 0,p, q \\geq 0. \\," }, { "math_id": 50, "text": " q " } ]
https://en.wikipedia.org/wiki?curid=10067276
10070469
Herbertsmithite
Halide mineral Herbertsmithite is a mineral with chemical formula ZnCu3(OH)6Cl2. It is named after the mineralogist Herbert Smith (1872–1953) and was first found in 1972 in Chile. It is polymorphous with kapellasite and closely related to paratacamite. Herbertsmithite is generally found in and around Anarak, Iran, hence its other name, anarakite. Herbertsmithite is associated with copper mineralizations in syenitic porphyries and granites in Chile and in Triassic dolomite formations in Iran. It has also been reported from the Osborn District in the Big Horn Mountains of Maricopa County, Arizona and the Lavrion District Mines of Attica, Greece. Herbertsmithite has a vitreous luster and is fairly transparent with a light-green to blue-green color. Herbertsmithite has a Mohs hardness of between 3 and 3.5 and is known to have a brittle tenacity. The crystal's density has been calculated at 3.76 g/cm3. Herbertsmithite, in a pure synthetic form, was discovered in 2012 to exhibit the properties of a quantum spin liquid, a strongly correlated quantum state of matter, owing to its kagome lattice structure. Herbertsmithite is the first mineral known to exhibit this unique state of magnetism: it is neither a ferromagnet with mostly aligned magnetic particles, nor is it an antiferromagnet with mostly opposed adjacent magnetic particles; rather, its magnetic particles have constantly fluctuating, scattered orientations. Optical conductivity observations suggest the magnetic state in herbertsmithite is a type of emergent gauge field of a gapless U(1) Dirac spin liquid. Other experiments and some numerical calculations suggest instead that it is a formula_0 spin liquid (or in other words, that it has formula_0 topological order). Further experiments are needed to clarify which description is correct. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{Z}_2" } ]
https://en.wikipedia.org/wiki?curid=10070469
10070867
Bidirectional map
In computer science, a bidirectional map is an associative data structure in which the formula_0 pairs form a one-to-one correspondence. Thus the binary relation is functional in each direction: each formula_1 can also be mapped to a unique formula_2. A pair formula_3 thus provides a unique coupling between formula_4 and formula_5 so that formula_5 can be found when formula_4 is used as a key and formula_4 can be found when formula_5 is used as a key. Mathematically, a bidirectional map can be defined as a bijection formula_6 between two different sets of keys formula_7 and formula_8 of equal cardinality, thus constituting an injective and surjective function: formula_9
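As an illustration of the data structure (a hypothetical sketch, not drawn from any particular library), a bidirectional map can be kept as two ordinary dictionaries held in sync, one per lookup direction:

```python
class BiMap:
    """A bidirectional map: a one-to-one correspondence between keys and values."""

    def __init__(self):
        self._fwd = {}  # key -> value
        self._bwd = {}  # value -> key

    def insert(self, key, value):
        # Enforce the one-to-one property: reject a pair that would give
        # an existing key or an existing value a second partner.
        if key in self._fwd or value in self._bwd:
            raise ValueError("key or value already present")
        self._fwd[key] = value
        self._bwd[value] = key

    def by_key(self, key):
        return self._fwd[key]

    def by_value(self, value):
        return self._bwd[value]
```

Both lookup directions are then O(1) on average, at the cost of storing every pair twice; the check in `insert` preserves the one-to-one property that the definition above requires.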
[ { "math_id": 0, "text": "(key, value)" }, { "math_id": 1, "text": "value" }, { "math_id": 2, "text": "key" }, { "math_id": 3, "text": "(a, b)" }, { "math_id": 4, "text": "a" }, { "math_id": 5, "text": "b" }, { "math_id": 6, "text": "f: X \\to Y" }, { "math_id": 7, "text": "X" }, { "math_id": 8, "text": "Y" }, { "math_id": 9, "text": "\\begin{cases}\n & \\forall x, x' \\in X, f(x) = f(x') \\Rightarrow x = x' \\\\ \n & \\forall y \\in Y, \\exists x \\in X : y=f(x)\n\\end{cases} \\Rightarrow \\exists f^{-1}(x)" } ]
https://en.wikipedia.org/wiki?curid=10070867
10072717
Categorical distribution
Discrete probability distribution In probability theory and statistics, a categorical distribution (also called a generalized Bernoulli distribution or multinoulli distribution) is a discrete probability distribution that describes the possible results of a random variable that can take on one of "K" possible categories, with the probability of each category separately specified. There is no innate underlying ordering of these outcomes, but numerical labels are often attached for convenience in describing the distribution (e.g. 1 to "K"). The "K"-dimensional categorical distribution is the most general distribution over a "K"-way event; any other discrete distribution over a size-"K" sample space is a special case. The parameters specifying the probabilities of each possible outcome are constrained only by the fact that each must be in the range 0 to 1, and all must sum to 1. The categorical distribution is the generalization of the Bernoulli distribution for a categorical random variable, i.e. for a discrete variable with more than two possible outcomes, such as the roll of a die. On the other hand, the categorical distribution is a special case of the multinomial distribution, in that it gives the probabilities of potential outcomes of a single drawing rather than multiple drawings. Terminology. Occasionally, the categorical distribution is termed the "discrete distribution". However, this properly refers not to one particular family of distributions but to a general class of distributions. In some fields, such as machine learning and natural language processing, the categorical and multinomial distributions are conflated, and it is common to speak of a "multinomial distribution" when a "categorical distribution" would be more precise. 
This imprecise usage stems from the fact that it is sometimes convenient to express the outcome of a categorical distribution as a "1-of-"K"" vector (a vector with one element containing a 1 and all other elements containing a 0) rather than as an integer in the range 1 to "K"; in this form, a categorical distribution is equivalent to a multinomial distribution for a single observation (see below). However, conflating the categorical and multinomial distributions can lead to problems. For example, in a Dirichlet-multinomial distribution, which arises commonly in natural language processing models (although not usually with this name) as a result of collapsed Gibbs sampling where Dirichlet distributions are collapsed out of a hierarchical Bayesian model, it is very important to distinguish categorical from multinomial. The joint distribution of the same variables with the same Dirichlet-multinomial distribution has two different forms depending on whether it is characterized as a distribution whose domain is over individual categorical nodes or over multinomial-style counts of nodes in each particular category (similar to the distinction between a set of Bernoulli-distributed nodes and a single binomial-distributed node). Both forms have very similar-looking probability mass functions (PMFs), which both make reference to multinomial-style counts of nodes in a category. However, the multinomial-style PMF has an extra factor, a multinomial coefficient, that is a constant equal to 1 in the categorical-style PMF. Confusing the two can easily lead to incorrect results in settings where this extra factor is not constant with respect to the distributions of interest. The factor is frequently constant in the complete conditionals used in Gibbs sampling and the optimal distributions in variational methods. Formulating distributions. A categorical distribution is a discrete probability distribution whose sample space is the set of "k" individually identified items. 
It is the generalization of the Bernoulli distribution for a categorical random variable. In one formulation of the distribution, the sample space is taken to be a finite sequence of integers. The exact integers used as labels are unimportant; they might be {0, 1, ..., "k" − 1} or {1, 2, ..., "k"} or any other arbitrary set of values. In the following descriptions, we use {1, 2, ..., "k"} for convenience, although this disagrees with the convention for the Bernoulli distribution, which uses {0, 1}. In this case, the probability mass function "f" is: formula_1 where formula_2, formula_3 represents the probability of seeing element "i" and formula_4. Another formulation that appears more complex but facilitates mathematical manipulations is as follows, using the Iverson bracket: formula_5 where formula_0 evaluates to 1 if formula_6, 0 otherwise. This formulation has various advantages; for example, it makes it easier to write out the likelihood function of a set of independent, identically distributed categorical variables. Yet another formulation makes explicit the connection between the categorical and multinomial distributions by treating the categorical distribution as a special case of the multinomial distribution in which the parameter "n" of the multinomial distribution (the number of sampled items) is fixed at 1. In this formulation, the sample space can be considered to be the set of 1-of-"K" encoded random vectors x of dimension "k" having the property that exactly one element has the value 1 and the others have the value 0. The particular element having the value 1 indicates which category has been chosen. The probability mass function "f" in this formulation is: formula_7 where formula_3 represents the probability of seeing element "i" and formula_8. This is the formulation adopted by Bishop. For a categorical random variable formula_13, one can define the random vector "Y" with components formula_14 where "I" is the indicator function. Then "Y" has a distribution which is a special case of the multinomial distribution with parameter formula_15. 
The sum of formula_16 independent and identically distributed such random variables "Y" constructed from a categorical distribution with parameter formula_17 is multinomially distributed with parameters formula_16 and formula_18 Bayesian inference using conjugate prior. In Bayesian statistics, the Dirichlet distribution is the conjugate prior distribution of the categorical distribution (and also the multinomial distribution). This means that if a data point has a categorical distribution with unknown parameter vector p, and (in standard Bayesian style) we choose to treat this parameter as a random variable and give it a prior distribution defined using a Dirichlet distribution, then the posterior distribution of the parameter, after incorporating the knowledge gained from the observed data, is also a Dirichlet. Intuitively, in such a case, starting from what is known about the parameter prior to observing the data point, knowledge can then be updated based on the data point, yielding a new distribution of the same form as the old one. As such, knowledge of a parameter can be successively updated by incorporating new observations one at a time, without running into mathematical difficulties. Formally, this can be expressed as follows. Given a model formula_21 then the following holds: formula_22 This relationship is used in Bayesian statistics to estimate the underlying parameter p of a categorical distribution given a collection of "N" samples. Intuitively, we can view the hyperprior vector α as pseudocounts, i.e. as representing the number of observations in each category that we have already seen. Then we simply add in the counts for all the new observations (the vector c) in order to derive the posterior distribution. 
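The pseudocount update described above can be sketched in a few lines of plain Python (the function names and the numbers are illustrative only, not taken from the text):

```python
def dirichlet_posterior(alpha, observations, k):
    """Conjugate update of a Dirichlet prior over a k-category parameter.

    alpha        -- list of k prior concentration parameters (pseudocounts)
    observations -- iterable of observed category labels in {0, ..., k-1}
    Returns the posterior concentration vector alpha + counts.
    """
    counts = [0] * k
    for x in observations:
        counts[x] += 1
    return [a + c for a, c in zip(alpha, counts)]


def posterior_mean(alpha_post):
    """Expected value of p_i under the posterior: (c_i + alpha_i) / (N + sum alpha)."""
    total = sum(alpha_post)
    return [a / total for a in alpha_post]


# A uniform prior over 3 categories, then 10 observations:
post = dirichlet_posterior([1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 2, 2, 2], k=3)
means = posterior_mean(post)  # expected per-category probabilities
```

Here `post` is simply the prior pseudocounts plus the per-category counts, and `posterior_mean` reproduces the expected-value formula given just below in the text.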
Further intuition comes from the expected value of the posterior distribution (see the article on the Dirichlet distribution): formula_23 This says that the expected probability of seeing a category "i" among the various discrete distributions generated by the posterior distribution is simply equal to the proportion of occurrences of that category actually seen in the data, including the pseudocounts in the prior distribution. This makes a great deal of intuitive sense: if, for example, there are three possible categories, and category 1 is seen in the observed data 40% of the time, one would expect on average to see category 1 40% of the time in the posterior distribution as well. MAP estimation. The maximum-a-posteriori estimate of the parameter p in the above model is simply the mode of the posterior Dirichlet distribution, i.e., formula_31 In many practical applications, the only way to guarantee the condition that formula_32 is to set formula_33 for all "i". Marginal likelihood. In the above model, the marginal likelihood of the observations (i.e. the joint distribution of the observations, with the prior parameter marginalized out) is a Dirichlet-multinomial distribution: formula_34 This distribution plays an important role in hierarchical Bayesian models, because when doing inference over such models using methods such as Gibbs sampling or variational Bayes, Dirichlet prior distributions are often marginalized out. See the article on this distribution for more details. Posterior predictive distribution. The posterior predictive distribution of a new observation in the above model is the distribution that a new observation formula_35 would take given the set formula_36 of "N" categorical observations. 
As shown in the Dirichlet-multinomial distribution article, it has a very simple form: formula_37 There are various relationships between this formula and the previous ones. The reason for the equivalence between posterior predictive probability and the expected value of the posterior distribution of p is evident with re-examination of the above formula. As explained in the posterior predictive distribution article, the formula for the posterior predictive probability has the form of an expected value taken with respect to the posterior distribution: formula_38 The crucial line above is the third. The second follows directly from the definition of expected value. The third line is particular to the categorical distribution, and follows from the fact that, in the categorical distribution specifically, the expected value of seeing a particular value "i" is directly specified by the associated parameter "pi". The fourth line is simply a rewriting of the third in a different notation, using the notation farther up for an expectation taken with respect to the posterior distribution of the parameters. Suppose one observes the data points one by one, each time considering their predictive probability before observing the data point and updating the posterior. For any given data point, the probability of that point assuming a given category depends on the number of data points already in that category. In this scenario, if a category has a high frequency of occurrence, then new data points are more likely to join that category, further enriching the same category. This type of scenario is often termed a preferential attachment (or "rich get richer") model. This models many real-world processes, and in such cases the choices made by the first few data points have an outsize influence on the rest of the data points. Posterior conditional distribution. 
In Gibbs sampling, one typically needs to draw from conditional distributions in multi-variable Bayes networks where each variable is conditioned on all the others. In networks that include categorical variables with Dirichlet priors (e.g. mixture models and models including mixture components), the Dirichlet distributions are often "collapsed out" (marginalized out) of the network, which introduces dependencies among the various categorical nodes dependent on a given prior (specifically, their joint distribution is a Dirichlet-multinomial distribution). One of the reasons for doing this is that in such a case, the distribution of one categorical node given the others is exactly the posterior predictive distribution of the remaining nodes. That is, for a set of nodes formula_36, if the node in question is denoted as formula_39 and the remainder as formula_40, then formula_41 where formula_42 is the number of nodes having category "i" among the nodes other than node "n". Sampling. There are a number of methods, but the most common way to sample from a categorical distribution uses a type of inverse transform sampling. Assume a distribution is expressed as "proportional to" some expression, with unknown normalizing constant. Before taking any samples, one prepares some values as follows: compute the unnormalized value of the distribution for each category, and form the running (cumulative) sums of these values, the last of which is the normalizing constant. Then, each time it is necessary to sample a value: pick a uniformly distributed number between 0 and the normalizing constant, and locate (e.g. by binary search) the first cumulative sum that is greater than or equal to it; the corresponding category is the sampled value. If it is necessary to draw many values from the same categorical distribution, the following approach is more efficient. It draws n samples in O(n) time (assuming an O(1) approximation is used to draw values from the binomial distribution). Sampling via the Gumbel distribution. In machine learning it is typical to parametrize the categorical distribution, formula_43, via an unconstrained representation in formula_44, whose components are given by: formula_45 where formula_46 is any real constant. 
Given this representation, formula_43 can be recovered using the softmax function, which can then be sampled using the techniques described above. There is, however, a more direct sampling method that uses samples from the Gumbel distribution. Let formula_47 be "k" independent draws from the standard Gumbel distribution; then formula_48 will be a sample from the desired categorical distribution. (If formula_49 is a sample from the standard uniform distribution, then formula_50 is a sample from the standard Gumbel distribution.) Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
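The Gumbel-max sampling just described can be sketched with the standard library alone (the helper name is ours; the log-probabilities may be unnormalized, since any additive constant formula_46 cancels in the argmax):

```python
import math
import random


def gumbel_max_sample(log_probs):
    """Draw one category index as argmax_i (log p_i + g_i), g_i ~ Gumbel(0, 1)."""
    best_i, best_val = None, -math.inf
    for i, lp in enumerate(log_probs):
        # A standard Gumbel draw from a standard uniform draw u:
        # g = -log(-log u).
        g = -math.log(-math.log(random.random()))
        if lp + g > best_val:
            best_i, best_val = i, lp + g
    return best_i


# Empirical check against p = (0.2, 0.3, 0.5):
random.seed(0)
p = [0.2, 0.3, 0.5]
log_p = [math.log(x) for x in p]
n = 20000
counts = [0, 0, 0]
for _ in range(n):
    counts[gumbel_max_sample(log_p)] += 1
freqs = [c / n for c in counts]  # should be close to p
```

Shifting every entry of `log_p` by the same constant leaves `freqs` statistically unchanged, which is exactly the invariance the unconstrained parametrization relies on.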
[ { "math_id": 0, "text": "[x=i]" }, { "math_id": 1, "text": "\nf(x=i\\mid \\boldsymbol{p} ) = p_i ,\n" }, { "math_id": 2, "text": "\\boldsymbol{p} = (p_1,\\ldots,p_k)" }, { "math_id": 3, "text": "p_i" }, { "math_id": 4, "text": "\\textstyle{\\sum_{i=1}^k p_i = 1}" }, { "math_id": 5, "text": "\nf(x\\mid \\boldsymbol{p} ) = \\prod_{i=1}^k p_i^{[x=i]} ,\n" }, { "math_id": 6, "text": "x=i" }, { "math_id": 7, "text": "\nf( \\mathbf{x}\\mid \\boldsymbol{p} ) = \\prod_{i=1}^k p_i^{x_i} ,\n" }, { "math_id": 8, "text": "\\textstyle{\\sum_i p_i = 1}" }, { "math_id": 9, "text": "p_i = P(X = i)" }, { "math_id": 10, "text": "(k-1)" }, { "math_id": 11, "text": "p_1+p_2=1, 0 \\leq p_1,p_2 \\leq 1 ." }, { "math_id": 12, "text": "\\operatorname{E} \\left[ \\mathbf{x} \\right] = \\boldsymbol{p}" }, { "math_id": 13, "text": "\\boldsymbol{X}" }, { "math_id": 14, "text": "Y_i=I(\\boldsymbol{X}=i)," }, { "math_id": 15, "text": "n=1" }, { "math_id": 16, "text": "n" }, { "math_id": 17, "text": "\\boldsymbol{p}" }, { "math_id": 18, "text": "\\boldsymbol{p} ." }, { "math_id": 19, "text": "\\delta_{xi}," }, { "math_id": 20, "text": "p_i ." 
}, { "math_id": 21, "text": "\\begin{array}{lclcl}\n\\boldsymbol\\alpha &=& (\\alpha_1, \\ldots, \\alpha_K) &=& \\text{concentration hyperparameter} \\\\\n\\mathbf{p}\\mid\\boldsymbol\\alpha &=& (p_1, \\ldots, p_K) &\\sim& \\operatorname{Dir}(K, \\boldsymbol\\alpha) \\\\\n\\mathbb{X}\\mid\\mathbf{p} &=& (x_1, \\ldots, x_N) &\\sim& \\operatorname{Cat}(K,\\mathbf{p})\n\\end{array}\n" }, { "math_id": 22, "text": "\\begin{array}{lclcl}\n\\mathbf{c} &=& (c_1, \\ldots, c_K) &=& \\text{number of occurrences of category }i, \\text{ so that } c_i = \\sum_{j=1}^N [x_j=i] \\\\\n\\mathbf{p} \\mid \\mathbb{X},\\boldsymbol\\alpha &\\sim& \\operatorname{Dir}(K,\\mathbf{c}+\\boldsymbol\\alpha) &=& \\operatorname{Dir}(K,c_1+\\alpha_1,\\ldots,c_K+\\alpha_K)\n\\end{array}\n" }, { "math_id": 23, "text": " \\operatorname{E}[p_i \\mid \\mathbb{X},\\boldsymbol\\alpha] = \\frac{c_i+\\alpha_i}{N+\\sum_k\\alpha_k}" }, { "math_id": 24, "text": "\\alpha_i" }, { "math_id": 25, "text": "\\alpha_i-1" }, { "math_id": 26, "text": "i" }, { "math_id": 27, "text": "c_i+\\alpha_i" }, { "math_id": 28, "text": "c_i+\\alpha_i-1" }, { "math_id": 29, "text": "\\boldsymbol\\alpha = (1,1,\\ldots)" }, { "math_id": 30, "text": "\\cdots-1" }, { "math_id": 31, "text": "\n \\operatorname{arg\\,max}\\limits_{\\mathbf{p}} p(\\mathbf{p} \\mid \\mathbb{X}) = \\frac{\\alpha_i + c_i - 1}{\\sum_i (\\alpha_i + c_i - 1)}, \\qquad \\forall i \\; \\alpha_i + c_i > 1\n" }, { "math_id": 32, "text": "\\forall i \\; \\alpha_i + c_i > 1" }, { "math_id": 33, "text": "\\alpha_i > 1" }, { "math_id": 34, "text": "\n\\begin{align}\np(\\mathbb{X}\\mid\\boldsymbol{\\alpha}) &= \\int_{\\mathbf{p}}p(\\mathbb{X}\\mid \\mathbf{p})p(\\mathbf{p}\\mid\\boldsymbol{\\alpha})\\textrm{d}\\mathbf{p} \\\\\n&= \\frac{\\Gamma\\left(\\sum_k \\alpha_k\\right)}\n{\\Gamma\\left(N+\\sum_k \\alpha_k\\right)}\\prod_{k=1}^K\\frac{\\Gamma(c_{k}+\\alpha_{k})}{\\Gamma(\\alpha_{k})}\n\\end{align}\n" }, { "math_id": 35, "text": "\\tilde{x}" }, { "math_id": 36, 
"text": "\\mathbb{X}" }, { "math_id": 37, "text": "\n\\begin{align}\np(\\tilde{x}=i\\mid\\mathbb{X},\\boldsymbol{\\alpha}) &= \\int_{\\mathbf{p}}p(\\tilde{x}=i\\mid\\mathbf{p})\\,p(\\mathbf{p}\\mid\\mathbb{X},\\boldsymbol{\\alpha})\\,\\textrm{d}\\mathbf{p} \\\\\n&=\\, \\frac{c_i + \\alpha_i}{N+\\sum_k \\alpha_k} \\\\\n&=\\, \\mathbb{E}[p_i \\mid \\mathbb{X},\\boldsymbol\\alpha] \\\\\n&\\propto\\, c_i + \\alpha_i. \\\\\n\\end{align}\n" }, { "math_id": 38, "text": "\n\\begin{align}\np(\\tilde{x}=i\\mid\\mathbb{X},\\boldsymbol{\\alpha}) &= \\int_{\\mathbf{p}}p(\\tilde{x}=i\\mid\\mathbf{p})\\,p(\\mathbf{p}\\mid\\mathbb{X},\\boldsymbol{\\alpha})\\,\\textrm{d}\\mathbf{p} \\\\\n&=\\, \\operatorname{E}_{\\mathbf{p}\\mid\\mathbb{X},\\boldsymbol{\\alpha}} \\left[p(\\tilde{x}=i\\mid\\mathbf{p})\\right] \\\\\n&=\\, \\operatorname{E}_{\\mathbf{p}\\mid\\mathbb{X},\\boldsymbol{\\alpha}} \\left[p_i\\right] \\\\\n&=\\, \\operatorname{E}[p_i \\mid \\mathbb{X},\\boldsymbol\\alpha].\n\\end{align}\n" }, { "math_id": 39, "text": "x_n" }, { "math_id": 40, "text": "\\mathbb{X}^{(-n)}" }, { "math_id": 41, "text": "\n\\begin{align}\np(x_n=i\\mid\\mathbb{X}^{(-n)},\\boldsymbol{\\alpha}) &=\\, \\frac{c_i^{(-n)} + \\alpha_i}{N-1+\\sum_i \\alpha_i}\n&\\propto\\, c_i^{(-n)} + \\alpha_i\n\\end{align}\n" }, { "math_id": 42, "text": "c_i^{(-n)}" }, { "math_id": 43, "text": "p_1,\\ldots,p_k" }, { "math_id": 44, "text": "\\mathbb{R}^k" }, { "math_id": 45, "text": "\n\\gamma_i = \\log p_i + \\alpha\n" }, { "math_id": 46, "text": "\\alpha" }, { "math_id": 47, "text": "g_1,\\ldots,g_k" }, { "math_id": 48, "text": "\nc = \\operatorname{arg\\,max}\\limits_i \\left( \\gamma_i + g_i \\right)\n" }, { "math_id": 49, "text": "u_i" }, { "math_id": 50, "text": "g_i=-\\log(-\\log u_i)" } ]
https://en.wikipedia.org/wiki?curid=10072717
10073693
1/2 + 1/4 + 1/8 + 1/16 + ⋯
Mathematical infinite series In mathematics, the infinite series 1/2 + 1/4 + 1/8 + 1/16 + ⋯ is an elementary example of a geometric series that converges absolutely. The sum of the series is 1. In summation notation, this may be expressed as formula_0 The series is related to philosophical questions considered in antiquity, particularly to Zeno's paradoxes. Proof. As with any infinite series, the sum formula_1 is defined to mean the limit of the partial sum of the first n terms formula_2 as n approaches infinity. By various arguments, one can show that this finite sum is equal to formula_3 As n approaches infinity, the term formula_4 approaches 0 and so sn tends to 1. History. Zeno's paradox. This series was used as a representation of many of Zeno's paradoxes. For example, in the paradox of Achilles and the Tortoise, the warrior Achilles was to race against a tortoise. The track is 100 meters long. Achilles could run at 10 m/s, while the tortoise could manage only 5 m/s. The tortoise, with a 10-meter advantage, Zeno argued, would win. Achilles would have to move 10 meters to catch up to the tortoise, but the tortoise would already have moved another five meters by then. Achilles would then have to move 5 meters, where the tortoise would move 2.5 meters, and so on. Zeno argued that the tortoise would always remain ahead of Achilles. The Dichotomy paradox also states that to move a certain distance, you have to move half of it, then half of the remaining distance, and so on, therefore having infinitely many time intervals. This can be easily resolved by noting that each time interval is a term of the infinite geometric series, and will sum to a finite number. The Eye of Horus. The parts of the Eye of Horus were once thought to represent the first six summands of the series. In "Zhuangzi". A version of the series appears in the ancient Taoist book "Zhuangzi". 
The miscellaneous chapters "All Under Heaven" include the following sentence: "Take a chi long stick and remove half every day, in a myriad ages it will not be exhausted." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
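The closed form for the partial sums used in the proof above, formula_3, can be checked with exact rational arithmetic (a small verification sketch):

```python
from fractions import Fraction


def partial_sum(n):
    """s_n = 1/2 + 1/4 + ... + 1/2^n, summed term by term with exact rationals."""
    return sum(Fraction(1, 2 ** k) for k in range(1, n + 1))


# The term-by-term sums match the closed form s_n = 1 - 1/2^n,
# so the distance to 1 is exactly 1/2^n and shrinks to 0 as n grows.
for n in range(1, 30):
    assert partial_sum(n) == 1 - Fraction(1, 2 ** n)
```

Using `Fraction` avoids floating-point rounding, so the identity is verified exactly rather than approximately.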
[ { "math_id": 0, "text": "\\frac12+\\frac14+\\frac18+\\frac{1}{16}+\\cdots = \\sum_{n=1}^\\infty \\left({\\frac 12}\\right)^n = 1. " }, { "math_id": 1, "text": "\\frac12+\\frac14+\\frac18+\\frac{1}{16}+\\cdots" }, { "math_id": 2, "text": "s_n=\\frac12+\\frac14+\\frac18+\\frac{1}{16}+\\cdots+\\frac{1}{2^{n-1}}+\\frac{1}{2^n}" }, { "math_id": 3, "text": "s_n = 1-\\frac{1}{2^{n}}." }, { "math_id": 4, "text": "\\frac{1}{2^{n}}" } ]
https://en.wikipedia.org/wiki?curid=10073693
10073845
1/4 + 1/16 + 1/64 + 1/256 + ⋯
Infinite series equal to 1/3 at its limit In mathematics, the infinite series 1/4 + 1/16 + 1/64 + 1/256 + ⋯ is an example of one of the first infinite series to be summed in the history of mathematics; it was used by Archimedes circa 250–200 BC. As it is a geometric series with first term 1/4 and common ratio 1/4, its sum is formula_0 Visual demonstrations. The series 1/4 + 1/16 + 1/64 + 1/256 + ⋯ lends itself to some particularly simple visual demonstrations because a square and a triangle both divide into four similar pieces, each of which contains 1/4 the area of the original. In the figure on the left, if the large square is taken to have area 1, then the largest black square has area 1/2 × 1/2 = 1/4. Likewise, the second largest black square has area 1/16, and the third largest black square has area 1/64. The area taken up by all of the black squares together is therefore 1/4 + 1/16 + 1/64 + ⋯, and this is also the area taken up by the gray squares and the white squares. Since these three areas cover the unit square, the figure demonstrates that formula_1 Archimedes' own illustration, adapted at top, was slightly different, being closer to the equation formula_2 See below for details on Archimedes' interpretation. The same geometric strategy also works for triangles, as in the figure on the right: if the large triangle has area 1, then the largest black triangle has area 1/4, and so on. The figure as a whole has a self-similarity between the large triangle and its upper sub-triangle. A related construction making the figure similar to all three of its corner pieces produces the Sierpiński triangle. Proof by Archimedes. Archimedes encounters the series in his work "Quadrature of the Parabola". He finds the area inside a parabola by the method of exhaustion, and he gets a series of triangles; each stage of the construction adds an area 1/4 times the area of the previous stage. His desired result is that the total area is 4/3 times the area of the first stage. To get there, he takes a break from parabolas to introduce an algebraic lemma: Proposition 23. 
Given a series of areas "A", "B", "C", "D", ... , "Z", of which "A" is the greatest, and each is equal to four times the next in order, then formula_3 Archimedes proves the proposition by first calculating formula_4 On the other hand, formula_5 Subtracting this equation from the previous equation yields formula_6 and adding "A" to both sides gives the desired result. Today, a more standard phrasing of Archimedes' proposition is that the partial sums of the series 1 + 1/4 + 1/16 + ⋯ are: formula_7 This form can be proved by multiplying both sides by 1 − 1/4 and observing that all but the first and the last of the terms on the left-hand side of the equation cancel in pairs. The same strategy works for any finite geometric series. The limit. Archimedes' Proposition 24 applies the finite (but indeterminate) sum in Proposition 23 to the area inside a parabola by a double "reductio ad absurdum". He does not "quite" take the limit of the above partial sums, but in modern calculus this step is easy enough: formula_8 Since the sum of an infinite series is defined as the limit of its partial sums, formula_9 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
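Archimedes' partial-sum identity, and the limit 1/3 after dropping the leading 1, can likewise be verified with exact rational arithmetic (a small sketch):

```python
from fractions import Fraction


def partial_sum(n):
    """1 + 1/4 + 1/4^2 + ... + 1/4^n, summed term by term."""
    return sum(Fraction(1, 4 ** k) for k in range(n + 1))


# Modern form of Proposition 23: the partial sums equal
# (1 - (1/4)^(n+1)) / (1 - 1/4).
for n in range(20):
    closed = (1 - Fraction(1, 4 ** (n + 1))) / (1 - Fraction(1, 4))
    assert partial_sum(n) == closed

# Dropping the leading 1 leaves 1/4 + 1/16 + 1/64 + ..., which approaches 1/3:
tail = partial_sum(30) - 1
```

The remaining gap `1/3 - tail` is exactly (1/4)^30/3, so it shrinks geometrically, matching the limit argument in the text.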
[ { "math_id": 0, "text": " \\sum_{n=1}^\\infty \\frac{1}{4^n}=\\frac {\\frac 1 4} {1 - \\frac 1 4} = \\frac 1 3. " }, { "math_id": 1, "text": " 3\\left(\\frac14+\\frac{1}{4^2}+\\frac{1}{4^3}+\\frac{1}{4^4}+\\cdots\\right) = 1." }, { "math_id": 2, "text": " \\sum_{n=1}^\\infty \\frac{3}{4^n}=\\frac34+\\frac{3}{4^2}+\\frac{3}{4^3}+\\frac{3}{4^4}+\\cdots = 1." }, { "math_id": 3, "text": " A + B + C + D + \\cdots + Z + \\frac13 Z = \\frac43 A." }, { "math_id": 4, "text": "\\begin{array}{rcl}\n\\displaystyle B+C+\\cdots+Z+\\frac{B}{3}+\\frac{C}{3}+\\cdots+\\frac{Z}{3} & = &\\displaystyle \\frac{4B}{3}+\\frac{4C}{3}+\\cdots+\\frac{4Z}{3} \\\\[1em]\n & = &\\displaystyle \\frac13(A+B+\\cdots+Y).\n\\end{array}" }, { "math_id": 5, "text": "\\frac{B}{3}+\\frac{C}{3}+\\cdots+\\frac{Y}{3} = \\frac13(B+C+\\cdots+Y)." }, { "math_id": 6, "text": "B+C+\\cdots+Z+\\frac{Z}{3} = \\frac13 A" }, { "math_id": 7, "text": " 1+\\frac{1}{4}+\\frac{1}{4^2}+\\cdots+\\frac{1}{4^n}=\\frac{1-\\left(\\frac14\\right)^{n+1}}{1-\\frac14}." }, { "math_id": 8, "text": "\\lim_{n\\to\\infty} \\frac{1-\\left(\\frac14\\right)^{n+1}}{1-\\frac14} = \\frac{1}{1-\\frac14} = \\frac43." }, { "math_id": 9, "text": "1+\\frac14+\\frac{1}{4^2}+\\frac{1}{4^3}+\\cdots = \\frac43." } ]
https://en.wikipedia.org/wiki?curid=10073845
1007613
Bell state
Quantum states of two qubits In quantum information science, the Bell states or EPR pairs are specific quantum states of two qubits that represent the simplest examples of quantum entanglement. The Bell states are a form of entangled and normalized basis vectors. This normalization implies that the overall probability of the particle being in one of the mentioned states is 1: formula_0. Entanglement is a basis-independent result of superposition. Due to this superposition, measurement of the qubit will "collapse" it into one of its basis states with a given probability. Because of the entanglement, measurement of one qubit will "collapse" the other qubit to a state whose measurement will yield one of two possible values, where the value depends on which Bell state the two qubits are in initially. Bell states can be generalized to certain quantum states of multi-qubit systems, such as the GHZ state for three or more subsystems. Understanding of Bell states is useful in analysis of quantum communication, such as superdense coding and quantum teleportation. The no-communication theorem prevents this behavior from transmitting information faster than the speed of light. Bell states. The Bell states are four specific maximally entangled quantum states of two qubits. They are in a superposition of 0 and 1 – a linear combination of the two states. Their entanglement means the following: The qubit held by Alice (subscript "A") can be in a superposition of 0 and 1. If Alice measured her qubit in the standard basis, the outcome would be either 0 or 1, each with probability 1/2; if Bob (subscript "B") also measured his qubit, the outcome would be the same as for Alice. Thus, Alice and Bob would each seemingly have a random outcome. Through communication they would discover that, although their outcomes separately seemed random, these were perfectly correlated. 
This perfect correlation at a distance is special: maybe the two particles "agreed" in advance, when the pair was created (before the qubits were separated), which outcome they would show in case of a measurement. Hence, following Einstein, Podolsky, and Rosen in their famous 1935 "EPR paper", there is something missing in the description of the qubit pair given above – namely this "agreement", called more formally a hidden variable. In his famous paper of 1964, John S. Bell showed by simple probability theory arguments that these correlations (the one for the 0, 1 basis and the one for the +, − basis) cannot "both" be made perfect by the use of any "pre-agreement" stored in some hidden variables – but that quantum mechanics predicts perfect correlations. In a more refined formulation known as the Bell–CHSH inequality, it is shown that a certain correlation measure cannot exceed the value 2 if one assumes that physics respects the constraints of local "hidden-variable" theory (a sort of common-sense formulation of how information is conveyed), but certain systems permitted in quantum mechanics can attain values as high as formula_1. Thus, quantum theory violates the Bell inequality and the idea of local "hidden variables". Bell basis. Four specific two-qubit states with the maximal value of formula_1 are designated as "Bell states". They are known as the four "maximally entangled two-qubit Bell states" and form a maximally entangled basis, known as the Bell basis, of the four-dimensional Hilbert space for two qubits: formula_2 formula_3 formula_4 formula_5 Creating Bell states via quantum circuits. Although there are many possible ways to create entangled Bell states through quantum circuits, the simplest takes a computational basis as the input, and contains a Hadamard gate and a CNOT gate (see picture). 
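The Hadamard-then-CNOT construction can be checked with elementary linear algebra in plain Python (a self-contained sketch; basis states are ordered |00⟩, |01⟩, |10⟩, |11⟩):

```python
import math


def apply(gate, state):
    """Multiply a 4x4 gate matrix by a 4-component state vector."""
    return [sum(gate[i][j] * state[j] for j in range(4)) for i in range(4)]


s = 1 / math.sqrt(2)
# Hadamard on the first qubit (H tensor I), in the |00>,|01>,|10>,|11> basis:
H_I = [[s, 0, s, 0],
       [0, s, 0, s],
       [s, 0, -s, 0],
       [0, s, 0, -s]]
# CNOT with the first qubit as control, flipping the second qubit:
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

state = [1, 0, 0, 0]        # |00>
state = apply(H_I, state)   # (|00> + |10>) / sqrt(2)
state = apply(CNOT, state)  # (|00> + |11>) / sqrt(2), the first Bell state
```

Starting the same circuit from the other three computational basis states produces the remaining three Bell states, as the text notes.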
As an example, the pictured quantum circuit takes the two-qubit input formula_7 and transforms it to the first Bell state formula_8 Explicitly, the Hadamard gate transforms formula_7 into a superposition of formula_9. This will then act as a control input to the CNOT gate, which only inverts the target (the second qubit) when the control (the first qubit) is 1. Thus, the CNOT gate transforms the second qubit as follows: formula_10. For the four basic two-qubit inputs, formula_11, the circuit outputs the four Bell states (listed above). More generally, the circuit transforms the input in accordance with the equation formula_12 where formula_13 is the negation of formula_14. Properties of Bell states. The result of a measurement of a single qubit in a Bell state is indeterminate, but upon measuring the first qubit in the "z"-basis, the result of measuring the second qubit is guaranteed to yield the same value (for the formula_15 Bell states) or the opposite value (for the formula_16 Bell states). This implies that the measurement outcomes are correlated. John Bell was the first to prove that the measurement correlations in the Bell states are stronger than could ever exist between classical systems. This hints that quantum mechanics allows information processing beyond what is possible with classical mechanics. In addition, the Bell states form an orthonormal basis and can therefore be defined with an appropriate measurement. Because Bell states are entangled states, information on the entire system may be known, while withholding information on the individual subsystems. For example, the Bell state is a pure state, but the reduced density operator of the first qubit is a mixed state. The mixed state implies that not all the information on this first qubit is known. Bell states are either symmetric or antisymmetric with respect to the subsystems. 
Bell states are maximally entangled in the sense that their reduced density operators are maximally mixed; the multipartite generalization of Bell states in this spirit is called the absolutely maximally entangled (AME) state. Bell state measurement. The Bell measurement is an important concept in quantum information science: It is a joint quantum-mechanical measurement of two qubits that determines which of the four Bell states the two qubits are in. A helpful example of quantum measurement in the Bell basis can be seen in quantum computing. If a CNOT gate is applied to qubits A and B, followed by a Hadamard gate on qubit A, a measurement can be made in the computational basis. The CNOT gate performs the act of un-entangling the two previously entangled qubits. This allows the information to be converted from quantum information to a measurement of classical information. Quantum measurement obeys two key principles. The first, the principle of deferred measurement, states that any measurement can be moved to the end of the circuit. The second, the principle of implicit measurement, states that at the end of a quantum circuit, measurement can be assumed for any unterminated wires. The following are applications of Bell state measurements: Bell state measurement is the crucial step in quantum teleportation. The result of a Bell state measurement is used by one's co-conspirator to reconstruct the original state of a teleported particle from half of an entangled pair (the "quantum channel") that was previously shared between the two ends. Experiments that utilize so-called "linear evolution, local measurement" techniques cannot realize a complete Bell state measurement. Linear evolution means that the detection apparatus acts on each particle independently of the state or evolution of the other, and local measurement means that each particle is localized at a particular detector registering a "click" to indicate that a particle has been detected.
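The CNOT-then-Hadamard disentangling circuit described above maps each Bell state to a distinct computational basis state, which is why a subsequent computational-basis measurement acts as a Bell measurement. A small numpy sketch (amplitude ordering |00⟩, |01⟩, |10⟩, |11⟩ assumed):

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1.0, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

s = 1 / np.sqrt(2)
bell = {'Phi+': np.array([s, 0, 0, s]), 'Psi+': np.array([0, s, s, 0]),
        'Phi-': np.array([s, 0, 0, -s]), 'Psi-': np.array([0, s, -s, 0])}

# CNOT, then Hadamard on the first qubit, sends each Bell state to a
# distinct computational basis state, so an ordinary measurement
# distinguishes all four
expected = {'Phi+': 0, 'Psi+': 1, 'Phi-': 2, 'Psi-': 3}
for name, state in bell.items():
    out = np.kron(H, I2) @ (CNOT @ state)
    assert np.isclose(abs(out[expected[name]]), 1.0)
```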
Such devices can be constructed from, for example, mirrors, beam splitters, and wave plates – and are attractive from an experimental perspective because they are easy to use and have a high measurement cross-section. For entanglement in a single qubit variable, only three distinct classes out of four Bell states are distinguishable using such linear optical techniques. This means two Bell states cannot be distinguished from each other, limiting the efficiency of quantum communication protocols such as teleportation. If a Bell state is measured from this ambiguous class, the teleportation event fails. Entangling particles in multiple qubit variables, such as (for photonic systems) polarization and a two-element subset of orbital angular momentum states, allows the experimenter to trace over one variable and achieve a complete Bell state measurement in the other. Leveraging so-called hyper-entangled systems thus has an advantage for teleportation. It also has advantages for other protocols such as superdense coding, in which hyper-entanglement increases the channel capacity. In general, for hyper-entanglement in formula_17 variables, one can distinguish between at most formula_18 classes out of formula_19 Bell states using linear optical techniques. Bell state correlations. Independent measurements made on the two qubits of a Bell state are perfectly correlated if each qubit is measured in the relevant basis. For the formula_6 state, this means selecting the same basis for both qubits. If an experimenter chose to measure both qubits in a formula_20 Bell state using the same basis, the qubits would appear positively correlated when measuring in the formula_21 basis, anti-correlated in the formula_22 basis, and partially (probabilistically) correlated in other bases. The formula_23 correlations can be understood by measuring both qubits in the same basis and observing perfectly anti-correlated results.
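These basis-dependent correlations can be computed directly as expectation values of observables of the form op ⊗ op. A hedged numpy sketch (Z stands for the 0/1 basis, X for the +/− basis; amplitude ordering |00⟩, |01⟩, |10⟩, |11⟩):

```python
import numpy as np

Z = np.diag([1.0, -1.0])                 # 0/1 ("z") basis observable
X = np.array([[0.0, 1.0], [1.0, 0.0]])   # +/- ("x") basis observable

s = 1 / np.sqrt(2)
phi_minus = np.array([s, 0, 0, -s])      # |Phi->
psi_plus = np.array([0, s, s, 0])        # |Psi+>

def corr(state, op):
    """Expectation of op (x) op: +1 = perfect correlation, -1 = anti-correlation."""
    return state @ np.kron(op, op) @ state

assert np.isclose(corr(phi_minus, Z), 1.0)   # correlated in the 0/1 basis
assert np.isclose(corr(phi_minus, X), -1.0)  # anti-correlated in the +/- basis
assert np.isclose(corr(psi_plus, Z), -1.0)   # anti-correlated in the same basis
```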
More generally, formula_23 can be understood by measuring the first qubit in basis formula_24, the second qubit in basis formula_25, and observing perfectly positively correlated results. Applications. Superdense coding. Superdense coding allows two individuals to communicate two bits of classical information by sending only a single qubit. The basis of this phenomenon is the entangled states or Bell states of a two-qubit system. In this example, Alice and Bob are very far from each other, and have each been given one qubit of the entangled state. formula_26. In this example, Alice is trying to communicate two bits of classical information, one of four two-bit strings: formula_27 or formula_28. If Alice chooses to send the two-bit message formula_29, she performs the phase flip formula_30 on her qubit. Similarly, if Alice wants to send formula_31, she applies a NOT gate; if she wants to send formula_28, she applies the formula_32 gate to her qubit; and finally, if Alice wants to send the two-bit message formula_33, she does nothing to her qubit. Alice performs these quantum gate transformations locally, transforming the initial entangled state formula_34 into one of the four Bell states. The steps below show the necessary quantum gate transformations, and resulting Bell states, that Alice needs to apply to her qubit for each possible two-bit message she desires to send to Bob. formula_35 formula_36 formula_37 formula_38. After Alice applies the desired transformations to her qubit, she sends it to Bob. Bob then performs a measurement on the Bell state, which projects the entangled state onto one of the four two-qubit basis vectors, one of which will coincide with the original two-bit message Alice was trying to send. Quantum teleportation. Quantum teleportation is the transfer of a quantum state over a distance. It is facilitated by entanglement between A, the giver, and B, the receiver of this quantum state.
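The four superdense-coding encodings listed above can be checked numerically: each of Alice's single-qubit gates sends |Φ+⟩ to a different Bell state, and the four results are mutually orthogonal, which is why Bob's Bell measurement recovers both bits. A hedged numpy sketch:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])   # NOT gate
Z = np.diag([1.0, -1.0])                 # phase flip

s = 1 / np.sqrt(2)
phi_plus = np.array([s, 0, 0, s])        # shared state |Phi+>

# Alice encodes two classical bits by acting only on her (first) qubit;
# X @ Z is exactly the article's -iY matrix [[0, -1], [1, 0]]
encodings = {'00': I2, '01': Z, '10': X, '11': X @ Z}
states = {bits: np.kron(U, I2) @ phi_plus for bits, U in encodings.items()}

# The four resulting Bell states are mutually orthogonal, so Bob's Bell
# measurement can recover the two bits with certainty
S = np.column_stack([states[b] for b in ('00', '01', '10', '11')])
assert np.allclose(np.abs(S.T @ S), np.eye(4))
```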
This process has become a fundamental research topic for quantum communication and computing. More recently, scientists have been testing its applications in information transfer through optical fibers. The process of quantum teleportation is defined as follows: Alice and Bob share an EPR pair, and each takes one qubit before they are separated. Alice must deliver a qubit of information to Bob, but she does not know the state of this qubit and can only send classical information to Bob. It is performed step by step as follows: The following quantum circuit describes teleportation: Quantum cryptography. Quantum cryptography is the use of quantum mechanical properties to encode and send information safely. The theory behind this process is the fact that it is impossible to measure a quantum state of a system without disturbing the system. This can be used to detect eavesdropping within a system. The most common form of quantum cryptography is quantum key distribution. It enables two parties to produce a shared random secret key that can be used to encrypt messages. The private key is created between the two parties through a public channel. Quantum cryptography can also make use of entanglement between two multi-dimensional systems, known as two-qudit (quantum digit) entanglement. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\langle \\Phi|\\Phi \\rangle = 1" }, { "math_id": 1, "text": "2\\sqrt{2}" }, { "math_id": 2, "text": "|\\Phi^+\\rangle = \\frac{1}{\\sqrt{2}} \\big(|0\\rangle_A \\otimes |0\\rangle_B + |1\\rangle_A \\otimes |1\\rangle_B\\big) \\qquad (1)" }, { "math_id": 3, "text": "|\\Phi^-\\rangle = \\frac{1}{\\sqrt{2}} \\big(|0\\rangle_A \\otimes |0\\rangle_B - |1\\rangle_A \\otimes |1\\rangle_B\\big) \\qquad (2)" }, { "math_id": 4, "text": "|\\Psi^+\\rangle = \\frac{1}{\\sqrt{2}} \\big(|0\\rangle_A \\otimes |1\\rangle_B + |1\\rangle_A \\otimes |0\\rangle_B\\big) \\qquad (3)" }, { "math_id": 5, "text": "|\\Psi^-\\rangle = \\frac{1}{\\sqrt{2}} \\big(|0\\rangle_A \\otimes |1\\rangle_B - |1\\rangle_A \\otimes |0\\rangle_B\\big) \\qquad (4)" }, { "math_id": 6, "text": "|\\Phi^+\\rangle" }, { "math_id": 7, "text": "|00\\rangle" }, { "math_id": 8, "text": "|\\Phi^+\\rangle." }, { "math_id": 9, "text": "(|0\\rangle|0\\rangle + |1\\rangle|0\\rangle) \\over \\sqrt{2}" }, { "math_id": 10, "text": "\\frac{(|00\\rangle + |11\\rangle)}{\\sqrt{2} } = |\\Phi^+\\rangle" }, { "math_id": 11, "text": "|00\\rangle, |01\\rangle, |10\\rangle, |11\\rangle" }, { "math_id": 12, "text": "|\\beta(x,y)\\rangle = \\left ( \\frac{|0,y\\rangle + (-1)^x|1,\\bar{y}\\rangle}{\\sqrt{2}} \\right )," }, { "math_id": 13, "text": "\\bar{y}" }, { "math_id": 14, "text": "y" }, { "math_id": 15, "text": "\\Phi" }, { "math_id": 16, "text": "\\Psi" }, { "math_id": 17, "text": "n" }, { "math_id": 18, "text": "2^{n+1} - 1" }, { "math_id": 19, "text": "4^n" }, { "math_id": 20, "text": "|\\Phi^-\\rangle" }, { "math_id": 21, "text": "\\{|0\\rangle,|1\\rangle\\}" }, { "math_id": 22, "text": "\\{|+\\rangle,|-\\rangle\\}" }, { "math_id": 23, "text": "|\\Psi^+\\rangle" }, { "math_id": 24, "text": "b_1" }, { "math_id": 25, "text": "b_2 = X.b_1" }, { "math_id": 26, "text": "|\\psi \\rangle = \\frac{|00\\rangle + |11\\rangle}{\\sqrt{2}}" }, { "math_id": 27, "text": "'00', '01', '10'," }, { "math_id": 28, 
"text": "'11'" }, { "math_id": 29, "text": "'01'" }, { "math_id": 30, "text": "Z" }, { "math_id": 31, "text": "'10'" }, { "math_id": 32, "text": "iY" }, { "math_id": 33, "text": "'00'" }, { "math_id": 34, "text": "|\\psi\\rangle" }, { "math_id": 35, "text": "00: I = \\begin{bmatrix} 1 & 0 \\\\ 0 & 1 \\end{bmatrix} \\longrightarrow |\\psi \\rangle = \\frac{|00\\rangle + |11\\rangle}{\\sqrt2}\\equiv |{\\Phi^+}\\rangle" }, { "math_id": 36, "text": "01: Z = \\begin{bmatrix} 1 & 0 \\\\ 0 & -1 \\end{bmatrix}\\longrightarrow |\\psi \\rangle = \\frac{|00\\rangle - |11\\rangle}{\\sqrt2}\\equiv |{\\Phi^-}\\rangle" }, { "math_id": 37, "text": "10: X = \\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix}\\longrightarrow |\\psi \\rangle = \\frac{|01\\rangle + |10\\rangle}{\\sqrt2}\\equiv |{\\Psi^+}\\rangle" }, { "math_id": 38, "text": "11: -iY = XZ = \\begin{bmatrix} 0 & -1 \\\\ 1 & 0 \\end{bmatrix}\\longrightarrow |\\psi \\rangle = \\frac{|01\\rangle - |10\\rangle}{\\sqrt2}\\equiv |{\\Psi^-}\\rangle" } ]
https://en.wikipedia.org/wiki?curid=1007613
1007660
Self-verifying theories
Systems capable of proving their own consistency Self-verifying theories are consistent first-order systems of arithmetic, much weaker than Peano arithmetic, that are capable of proving their own consistency. Dan Willard was the first to investigate their properties, and he has described a family of such systems. According to Gödel's incompleteness theorem, these systems cannot contain the theory of Peano arithmetic or its weak fragment, Robinson arithmetic; nonetheless, they can contain strong theorems. In outline, the key to Willard's construction of his system is to formalise enough of the Gödel machinery to talk about provability internally without being able to formalise diagonalisation. Diagonalisation depends upon being able to prove that multiplication is a total function (and in the earlier versions of the result, addition also). Addition and multiplication are not function symbols of Willard's language; instead, subtraction and division are, with the addition and multiplication predicates being defined in terms of these. Here, one cannot prove the formula_0 sentence expressing totality of multiplication: formula_1 where formula_2 is the three-place predicate which stands for formula_3 When the operations are expressed in this way, provability of a given sentence can be encoded as an arithmetic sentence describing termination of an analytic tableau. Provability of consistency can then simply be added as an axiom. The resulting system can be proven consistent by means of a relative consistency argument with respect to ordinary arithmetic. One can further add any true formula_4 sentence of arithmetic to the theory while still retaining consistency of the theory.
[ { "math_id": 0, "text": "\\Pi^0_2" }, { "math_id": 1, "text": "(\\forall x,y)\\ (\\exists z)\\ {\\rm multiply}(x,y,z)." }, { "math_id": 2, "text": "{\\rm multiply}" }, { "math_id": 3, "text": "z/y=x." }, { "math_id": 4, "text": "\\Pi^0_1" } ]
https://en.wikipedia.org/wiki?curid=1007660
10077292
Eventually (mathematics)
In the mathematical areas of number theory and analysis, an infinite sequence or a function is said to eventually have a certain property if it does not have the said property across all its ordered instances, but will after some instances have passed. The use of the term "eventually" can often be rephrased as "for sufficiently large numbers", and can also be extended to the class of properties that apply to elements of any ordered set (such as sequences and subsets of formula_0). Notation. The general form where the phrase eventually (or sufficiently large) is found appears as follows: formula_1 is "eventually" true for formula_2 (formula_1 is true for "sufficiently large" formula_2), where formula_3 and formula_4 are the universal and existential quantifiers, which is actually a shorthand for: formula_5 such that formula_1 is true formula_6 or somewhat more formally: formula_7 This does not necessarily mean that any particular value for formula_8 is known, but only that such an formula_8 exists. The phrase "sufficiently large" should not be confused with the phrases "arbitrarily large" or "infinitely large". For more, see Arbitrarily large#Arbitrarily large vs. sufficiently large vs. infinitely large. Motivation and definition. For an infinite sequence, one is often more interested in the long-term behaviors of the sequence than the behaviors it exhibits early on. In that case, one way to formally capture this concept is to say that the sequence possesses a certain property "eventually", or equivalently, that the property is satisfied by one of its subsequences formula_9, for some formula_10. For example, the definition of a sequence of real numbers formula_11 converging to some limit "formula_8" is: For each positive number formula_12, there exists a natural number formula_13 such that for all formula_14, formula_15.
When the term "eventually" is used as a shorthand for "there exists a natural number formula_13 such that for all formula_16", the convergence definition can be restated more simply as: For each positive number formula_17, eventually formula_18. Here, notice that the set of natural numbers that do not satisfy this property is a finite set; that is, the set is empty or has a maximum element. As a result, the use of "eventually" in this case is synonymous with the expression "for all but a finite number of terms" – a special case of the expression "for almost all terms" (although "almost all" can also be used to allow for infinitely many exceptions as well). At the basic level, a sequence can be thought of as a function with natural numbers as its domain, and the notion of "eventually" applies to functions on more general sets as well—in particular to those that have an ordering with no greatest element. More specifically, if formula_19 is such a set and there is an element formula_20 in formula_19 such that the function formula_21 is defined for all elements greater than formula_20, then formula_21 is said to have some property eventually if there is an element formula_22 such that whenever "formula_23", formula_24 has the said property. This notion is used, for example, in the study of Hardy fields, which are fields made up of real functions, each of which has certain properties eventually.
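The restated definition can be illustrated computationally: for a concrete sequence, one can search for a witness N beyond which the property holds. A small sketch (the sequence a_n = 1/n with limit 0 is this example's own choice):

```python
# The sequence a_n = 1/n converges to 0: for each eps > 0, the property
# |a_n - 0| < eps holds "eventually", i.e. for every n past some witness N.
def witness(eps):
    m = 1
    while not abs(1.0 / m) < eps:   # first index where the property holds...
        m += 1
    return m - 1                    # ...so it holds for every n > m - 1

for eps in (0.5, 0.1, 0.001):
    N = witness(eps)
    # the property holds for a long stretch past the witness
    assert all(abs(1.0 / n) < eps for n in range(N + 1, N + 1000))
    # but "eventually" is not "always": the property still fails at N itself
    assert not abs(1.0 / N) < eps
```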
[ { "math_id": 0, "text": "\\mathbb{R}" }, { "math_id": 1, "text": "P" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "\\forall" }, { "math_id": 4, "text": "\\exists" }, { "math_id": 5, "text": "\\exists a \\in \\mathbb{R}" }, { "math_id": 6, "text": "\\forall x \\ge a" }, { "math_id": 7, "text": "\\exists a \\in \\mathbb{R}: \\forall x \\in \\mathbb{R}:x \\ge a \\Rightarrow P(x)" }, { "math_id": 8, "text": "a" }, { "math_id": 9, "text": "(a_n)_{n \\geq N}" }, { "math_id": 10, "text": "N \\in \\N" }, { "math_id": 11, "text": "(a_n)" }, { "math_id": 12, "text": "\\varepsilon" }, { "math_id": 13, "text": "N" }, { "math_id": 14, "text": "n >N " }, { "math_id": 15, "text": "\\left\\vert a_n - a \\right\\vert<\\varepsilon" }, { "math_id": 16, "text": "n > N" }, { "math_id": 17, "text": "\\varepsilon>0" }, { "math_id": 18, "text": "\\left\\vert a_n-a \\right\\vert<\\varepsilon" }, { "math_id": 19, "text": "S" }, { "math_id": 20, "text": "s" }, { "math_id": 21, "text": "f" }, { "math_id": 22, "text": "x_0" }, { "math_id": 23, "text": "x>x_0" }, { "math_id": 24, "text": "f(x)" } ]
https://en.wikipedia.org/wiki?curid=10077292
1007903
Generalized singular value decomposition
Name of two different techniques based on the singular value decomposition In linear algebra, the generalized singular value decomposition (GSVD) is the name of two different techniques based on the singular value decomposition (SVD). The two versions differ because one version decomposes two matrices (somewhat like the higher-order or tensor SVD) and the other version uses a set of constraints imposed on the left and right singular vectors of a single-matrix SVD. First version: two-matrix decomposition. The generalized singular value decomposition (GSVD) is a matrix decomposition on a pair of matrices which generalizes the singular value decomposition. It was introduced by Van Loan in 1976 and later developed by Paige and Saunders; the latter is the version described here. In contrast to the SVD, the GSVD simultaneously decomposes a pair of matrices with the same number of columns. The SVD and the GSVD, as well as some other possible generalizations of the SVD, are extensively used in the study of the conditioning and regularization of linear systems with respect to quadratic semi-norms. In the following, let formula_0, or formula_1. Definition. The generalized singular value decomposition of matrices formula_2 and formula_3 is formula_4 where We denote formula_26, formula_27, formula_28, and formula_29. While formula_30 is diagonal, formula_31 is not always diagonal, because of the leading rectangular zero matrix; instead formula_32 is "bottom-right-diagonal". Variations. There are many variations of the GSVD. These variations are related to the fact that it is always possible to multiply formula_33 from the left by formula_34 where formula_35 is an arbitrary unitary matrix. We denote Here are some variations of the GSVD: Generalized singular values. A "generalized singular value" of formula_45 and formula_46 is a pair formula_47 such that formula_48 We have By these properties we can show that the generalized singular values are exactly the pairs formula_51.
We have formula_52 Therefore formula_53 This expression is zero exactly when formula_54 and formula_55 for some formula_56. In some references, the generalized singular values are claimed to be those which solve formula_57. However, this claim only holds when formula_58, since otherwise the determinant is zero for every pair formula_47; this can be seen by substituting formula_59 above. Generalized inverse. Define formula_60 for any invertible matrix formula_35, formula_61 for any zero matrix formula_62, and formula_63 for any block-diagonal matrix. Then define formula_64 It can be shown that formula_65 as defined here is a generalized inverse of formula_66; in particular a formula_67-inverse of formula_66. Since it does not in general satisfy formula_68, this is not the Moore–Penrose inverse; otherwise we could derive formula_69 for any choice of matrices, which only holds for a certain class of matrices. Suppose formula_70, where formula_71 and formula_72. This generalized inverse has the following properties: Quotient SVD. A "generalized singular ratio" of formula_45 and formula_46 is formula_81. By the above properties, formula_82. Note that formula_77 is diagonal, and that, ignoring the leading zeros, it contains the singular ratios in decreasing order. If formula_46 is invertible, then formula_83 has no leading zeros, and the generalized singular ratios are the singular values, and formula_84 and formula_85 are the matrices of singular vectors, of the matrix formula_86. In fact, computing the SVD of formula_87 is one of the motivations for the GSVD, as "forming formula_88 and finding its SVD can lead to unnecessary and large numerical errors when formula_89 is ill-conditioned for solution of equations". Hence the sometimes-used name "quotient SVD", although this is not the only reason for using GSVD. If formula_46 is not invertible, then formula_90 is still the SVD of formula_91 if we relax the requirement of having the singular values in decreasing order.
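The quotient-SVD relationship can be checked numerically when the second matrix is square and invertible: the generalized singular ratios are then the singular values of A1·A2⁻¹, and their squares are the eigenvalues of (A2*A2)⁻¹(A1*A1). A hedged sketch with random test matrices (numpy; the diagonal shift of A2 is only there to keep it well-conditioned):

```python
import numpy as np

rng = np.random.default_rng(0)
A1 = rng.standard_normal((5, 3))
A2 = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # square, well-conditioned

# Generalized singular ratios = singular values of A1 @ inv(A2)
ratios = np.linalg.svd(A1 @ np.linalg.inv(A2), compute_uv=False)

# Each ratio sigma = alpha/beta for a normalized pair with alpha^2 + beta^2 = 1
alpha = ratios / np.sqrt(1 + ratios**2)
beta = 1 / np.sqrt(1 + ratios**2)
assert np.allclose(alpha**2 + beta**2, 1.0)

# The squared ratios are the eigenvalues of (A2* A2)^{-1} (A1* A1)
w = np.linalg.eigvals(np.linalg.inv(A2.T @ A2) @ A1.T @ A1)
assert np.allclose(np.sort(w.real), np.sort(ratios**2))
```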
Alternatively, a decreasing order SVD can be found by moving the leading zeros to the back: formula_92, where formula_93 and formula_94 are appropriate permutation matrices. Since rank equals the number of non-zero singular values, formula_95. Construction. Let Then formula_110 We also have formula_111 Therefore formula_112 Since formula_113 has orthonormal columns, formula_114. Therefore formula_115 We also have for each formula_116 such that formula_117 that formula_118 Therefore formula_119, and formula_120 Applications. The GSVD, formulated as a comparative spectral decomposition, has been successfully applied to signal processing and data science, e.g., in genomic signal processing. These applications inspired several additional comparative spectral decompositions, i.e., the higher-order GSVD (HO GSVD) and the tensor GSVD. It has also found application in estimating the spectral decompositions of linear operators when the eigenfunctions are parameterized with a linear model, i.e. a reproducing kernel Hilbert space. Second version: weighted single-matrix decomposition. The weighted version of the generalized singular value decomposition (GSVD) is a constrained matrix decomposition with constraints imposed on the left and right singular vectors of the singular value decomposition. This form of the "GSVD" is an extension of the "SVD" as such. Given the "SVD" of an "m×n" real or complex matrix "M" formula_121 where formula_122 where "I" is the identity matrix and where formula_123 and formula_124 are orthonormal given their constraints (formula_125 and formula_126). Additionally, formula_125 and formula_126 are positive definite matrices (often diagonal matrices of weights). This form of the "GSVD" is the core of certain techniques, such as generalized principal component analysis and Correspondence analysis.
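For positive-definite weights, one common way to compute this weighted decomposition is to reduce it to an ordinary SVD of W_u^{1/2} M W_v^{1/2}. A hedged numpy sketch with diagonal weights (the matrix sizes and variable names are this example's own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 4))
Wu = np.diag(rng.random(5) + 0.5)   # positive-definite row weights
Wv = np.diag(rng.random(4) + 0.5)   # positive-definite column weights

# Ordinary SVD of Wu^{1/2} M Wv^{1/2}
su = np.sqrt(np.diag(Wu))
sv = np.sqrt(np.diag(Wv))
P, s, Qt = np.linalg.svd(su[:, None] * M * sv, full_matrices=False)

U = P / su[:, None]                 # U = Wu^{-1/2} P
V = Qt.T / sv[:, None]              # V = Wv^{-1/2} Q

assert np.allclose(U @ np.diag(s) @ V.T, M)    # M = U Sigma V*
assert np.allclose(U.T @ Wu @ U, np.eye(4))    # U* Wu U = I
assert np.allclose(V.T @ Wv @ V, np.eye(4))    # V* Wv V = I
```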
The weighted form of the "GSVD" is so called because, with the correct selection of weights, it "generalizes" many techniques (such as multidimensional scaling and linear discriminant analysis). References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{F} = \\mathbb{R}" }, { "math_id": 1, "text": "\\mathbb{F} = \\mathbb{C}" }, { "math_id": 2, "text": "A_1 \\in \\mathbb{F}^{m_1 \\times n}" }, { "math_id": 3, "text": "A_2 \\in \\mathbb{F}^{m_2 \\times n}" }, { "math_id": 4, "text": "\n\\begin{align}\nA_1 & = U_1\\Sigma_1 [ W^* D, 0_D] Q^*, \\\\\nA_2 & = U_2\\Sigma_2 [ W^* D, 0_D] Q^*,\n\\end{align}\n" }, { "math_id": 5, "text": "U_1 \\in \\mathbb{F}^{m_1 \\times m_1}" }, { "math_id": 6, "text": "U_2 \\in \\mathbb{F}^{m_2 \\times m_2}" }, { "math_id": 7, "text": "Q \\in \\mathbb{F}^{n \\times n}" }, { "math_id": 8, "text": "\nW \\in \\mathbb{F}^{k \\times k}\n" }, { "math_id": 9, "text": "\nD \\in \\mathbb{R}^{k \\times k}\n" }, { "math_id": 10, "text": "C = \\begin{bmatrix} A_1 \\\\ A_2 \\end{bmatrix}" }, { "math_id": 11, "text": "0_D = 0 \\in \\mathbb{R}^{k \\times (n - k)} " }, { "math_id": 12, "text": "\\Sigma_1 = \\lceil I_A, S_1, 0_A \\rfloor \\in \\mathbb{R}^{m_1 \\times k}" }, { "math_id": 13, "text": "S_1 = \\lceil \\alpha_{r + 1}, \\dots, \\alpha_{r + s} \\rfloor" }, { "math_id": 14, "text": " 1 > \\alpha_{r + 1} \\ge \\cdots \\ge \\alpha_{r + s} > 0" }, { "math_id": 15, "text": "I_A = I_r" }, { "math_id": 16, "text": "0_A = 0 \\in \\mathbb{R}^{(m_1 - r - s) \\times (k - r - s)} " }, { "math_id": 17, "text": "\\Sigma_2 = \\lceil 0_B, S_2, I_B \\rfloor \\in \\mathbb{R}^{m_2 \\times k}" }, { "math_id": 18, "text": "S_2 = \\lceil \\beta_{r + 1}, \\dots, \\beta_{r + s} \\rfloor " }, { "math_id": 19, "text": " 0 < \\beta_{r + 1} \\le \\cdots \\le \\beta_{r + s} < 1" }, { "math_id": 20, "text": "I_B = I_{k - r - s}" }, { "math_id": 21, "text": "0_B = 0 \\in \\mathbb{R}^{(m_2 - k + r) \\times r} " }, { "math_id": 22, "text": "\\Sigma_1^* \\Sigma_1 = \\lceil\\alpha_1^2, \\dots, \\alpha_k^2\\rfloor" }, { "math_id": 23, "text": "\\Sigma_2^* \\Sigma_2 = \\lceil\\beta_1^2, \\dots, \\beta_k^2\\rfloor" }, { "math_id": 24, "text": "\\Sigma_1^* \\Sigma_1 + \\Sigma_2^* \\Sigma_2 = I_k" 
}, { "math_id": 25, "text": "k = \\textrm{rank}(C)" }, { "math_id": 26, "text": "\\alpha_1 = \\cdots = \\alpha_r = 1" }, { "math_id": 27, "text": "\\alpha_{r + s + 1} = \\cdots = \\alpha_k = 0" }, { "math_id": 28, "text": "\\beta_1 = \\cdots = \\beta_r = 0" }, { "math_id": 29, "text": "\\beta_{r + s + 1} = \\cdots = \\beta_k = 1" }, { "math_id": 30, "text": "\\Sigma_1" }, { "math_id": 31, "text": "\\Sigma_2 " }, { "math_id": 32, "text": "\\Sigma_2" }, { "math_id": 33, "text": "Q^*" }, { "math_id": 34, "text": "E E^* = I" }, { "math_id": 35, "text": "E \\in \\mathbb{F}^{n \\times n}" }, { "math_id": 36, "text": "X = ([W^* D, 0_D] Q^*)^*" }, { "math_id": 37, "text": "\nX^* = [0, R] \\hat{Q}^*\n\n" }, { "math_id": 38, "text": "\nR \\in \\mathbb{F}^{k \\times k}\n\n" }, { "math_id": 39, "text": "\n\\hat{Q} \\in \\mathbb{F}^{n \\times n}\n\n" }, { "math_id": 40, "text": "Y = W^* D" }, { "math_id": 41, "text": "\nY\n" }, { "math_id": 42, "text": "\n\\begin{aligned}\nA_1 & = U_1 \\Sigma_1 X^*, \\\\\nA_2 & = U_2 \\Sigma_2 X^*.\n\\end{aligned}\n\n" }, { "math_id": 43, "text": "\n\\begin{aligned}\nA_1 & = U_1 \\Sigma_1 [0, R] \\hat{Q}^*, \\\\\nA_2 & = U_2 \\Sigma_2 [0, R] \\hat{Q}^*.\n\\end{aligned}\n\n" }, { "math_id": 44, "text": "\n\\begin{align}\nA_1 & = U_1\\Sigma_1 [ Y, 0_D] Q^*, \\\\\nA_2 & = U_2\\Sigma_2 [ Y, 0_D] Q^*.\n\\end{align}\n" }, { "math_id": 45, "text": "A_1" }, { "math_id": 46, "text": "A_2" }, { "math_id": 47, "text": "(a, b) \\in \\mathbb{R}^2" }, { "math_id": 48, "text": "\n\\begin{align}\n\\lim_{\\delta \\to 0} \\det(b^2 A_1^* A_1 - a^2 A_2^* A_2 + \\delta I_n) / \\det(\\delta I_{n - k}) & = 0, \\\\\na^2 + b^2 & = 1, \\\\\na, b & \\geq 0.\n\\end{align}\n" }, { "math_id": 49, "text": " A_i A_j^* = U_i \\Sigma_i Y Y^* \\Sigma_j^* U_j^*" }, { "math_id": 50, "text": " A_i^* A_j = Q \\begin{bmatrix} Y^* \\Sigma_i^* \\Sigma_j Y & 0 \\\\ 0 & 0 \\end{bmatrix} Q^* = Q_1 Y^* \\Sigma_i^* \\Sigma_j Y Q_1^* " }, { "math_id": 51, "text": "(\\alpha_i, \\beta_i)" }, { 
"math_id": 52, "text": "\n\\begin{aligned}\n& \\det(b^2 A_1^* A_1 - a^2 A_2^* A_2 + \\delta I_n) \\\\\n= & \\det(b^2 A_1^* A_1 - a^2 A_2^* A_2 + \\delta Q Q^*) \\\\\n= & \\det\\left(Q \\begin{bmatrix} Y^* (b^2 \\Sigma_1^* \\Sigma_1 - a^2 \\Sigma_2^* \\Sigma_2) Y + \\delta I_k & 0 \\\\ 0 & \\delta I_{n - k} \\end{bmatrix} Q^*\\right) \\\\\n= & \\det(\\delta I_{n - k}) \\det(Y^* (b^2 \\Sigma_1^* \\Sigma_1 - a^2 \\Sigma_2^* \\Sigma_2) Y + \\delta I_k).\n\\end{aligned}\n" }, { "math_id": 53, "text": "\n\\begin{aligned}\n{} & \\lim_{\\delta \\to 0} \\det(b^2 A_1^* A_1 - a^2 A_2^* A_2 + \\delta I_n) / \\det(\\delta I_{n - k}) \\\\\n= & \\lim_{\\delta \\to 0} \\det(Y^* (b^2 \\Sigma_1^* \\Sigma_1 - a^2 \\Sigma_2^* \\Sigma_2) Y + \\delta I_k) \\\\\n= & \\det(Y^* (b^2 \\Sigma_1^* \\Sigma_1 - a^2 \\Sigma_2^* \\Sigma_2) Y) \\\\\n= & |\\det(Y)|^2 \\prod_{i = 1}^k (b^2 \\alpha_i^2 - a^2 \\beta_i^2).\n\\end{aligned}\n" }, { "math_id": 54, "text": "a = \\alpha_i" }, { "math_id": 55, "text": "b = \\beta_i" }, { "math_id": 56, "text": "i" }, { "math_id": 57, "text": "\\det(b^2 A_1^* A_1 - a^2 A_2^* A_2) = 0" }, { "math_id": 58, "text": "k = n" }, { "math_id": 59, "text": "\\delta = 0" }, { "math_id": 60, "text": "E^+ = E^{-1}" }, { "math_id": 61, "text": "0^+ = 0^*" }, { "math_id": 62, "text": "0 \\in \\mathbb{F}^{m \\times n}" }, { "math_id": 63, "text": "\\left\\lceil E_1, E_2 \\right\\rfloor^+ = \\left\\lceil E_1^+, E_2^+ \\right\\rfloor" }, { "math_id": 64, "text": "A_i^+ = Q \\begin{bmatrix} Y^{-1} \\\\ 0 \\end{bmatrix} \\Sigma_i^+ U_i^*" }, { "math_id": 65, "text": "A_i^+" }, { "math_id": 66, "text": "A_i" }, { "math_id": 67, "text": "\\{1, 2, 3\\}" }, { "math_id": 68, "text": "(A_i^+ A_i)^* = A_i^+ A_i" }, { "math_id": 69, "text": "(AB)^+ = B^+ A^+" }, { "math_id": 70, "text": " Q = \\begin{bmatrix}Q_1 & Q_2\\end{bmatrix} " }, { "math_id": 71, "text": "Q_1 \\in \\mathbb{F}^{n \\times k}" }, { "math_id": 72, "text": "Q_2 \\in \\mathbb{F}^{n \\times (n - k)}" }, { "math_id": 
73, "text": " \\Sigma_1^+ = \\lceil I_A, S_1^{-1}, 0_A^T \\rfloor " }, { "math_id": 74, "text": " \\Sigma_2^+ = \\lceil 0^T_B, S_2^{-1}, I_B \\rfloor " }, { "math_id": 75, "text": " \\Sigma_1 \\Sigma_1^+ = \\lceil I, I, 0 \\rfloor " }, { "math_id": 76, "text": " \\Sigma_2 \\Sigma_2^+ = \\lceil 0, I, I \\rfloor " }, { "math_id": 77, "text": " \\Sigma_1 \\Sigma_2^+ = \\lceil 0, S_1 S_2^{-1}, 0 \\rfloor " }, { "math_id": 78, "text": " \\Sigma_1^+ \\Sigma_2 = \\lceil 0, S_1^{-1} S_2, 0 \\rfloor " }, { "math_id": 79, "text": " A_i A_j^+ = U_i \\Sigma_i \\Sigma_j^+ U_j^*" }, { "math_id": 80, "text": " A_i^+ A_j = Q \\begin{bmatrix} Y^{-1} \\Sigma_i^+ \\Sigma_j Y & 0 \\\\ 0 & 0 \\end{bmatrix} Q^* = Q_1 Y^{-1} \\Sigma_i^+ \\Sigma_j Y Q_1^* " }, { "math_id": 81, "text": "\\sigma_i=\\alpha_i \\beta_i^+" }, { "math_id": 82, "text": " A_1 A_2^+ = U_1 \\Sigma_1 \\Sigma_2^+ U_2^*" }, { "math_id": 83, "text": " \\Sigma_1 \\Sigma_2^+ " }, { "math_id": 84, "text": "U_1" }, { "math_id": 85, "text": "U_2" }, { "math_id": 86, "text": "A_1 A_2^+ = A_1 A_2^{-1}" }, { "math_id": 87, "text": "A_1 A_2^{-1}" }, { "math_id": 88, "text": "AB^{-1}" }, { "math_id": 89, "text": "B" }, { "math_id": 90, "text": " U_1 \\Sigma_1 \\Sigma_2^+ U_2^*" }, { "math_id": 91, "text": " A_1 A_2^+" }, { "math_id": 92, "text": " U_1 \\Sigma_1 \\Sigma_2^+ U_2^* = (U_1 P_1) P_1^* \\Sigma_1 \\Sigma_2^+ P_2 (P_2^* U_2^*)" }, { "math_id": 93, "text": " P_1" }, { "math_id": 94, "text": " P_2" }, { "math_id": 95, "text": " \\mathrm{rank}(A_1 A_2^+)=s" }, { "math_id": 96, "text": "C = P \\lceil D, 0 \\rfloor Q^*" }, { "math_id": 97, "text": "P \\in \\mathbb{F}^{(m_1 + m_2) \\times (m_1 \\times m_2)}" }, { "math_id": 98, "text": "Q" }, { "math_id": 99, "text": "D" }, { "math_id": 100, "text": "P = [P_1, P_2]" }, { "math_id": 101, "text": "P_1 \\in \\mathbb{F}^{(m_1 + m_2) \\times k}" }, { "math_id": 102, "text": "P_2 \\in \\mathbb{F}^{(m_1 + m_2) \\times (n - k)}" }, { "math_id": 103, "text": "P_1 = \\begin{bmatrix} 
P_{11} \\\\ P_{21} \\end{bmatrix}" }, { "math_id": 104, "text": "P_{11} \\in \\mathbb{F}^{m_1 \\times k}" }, { "math_id": 105, "text": "P_{21} \\in \\mathbb{F}^{m_2 \\times k}" }, { "math_id": 106, "text": "P_{11} = U_1 \\Sigma_1 W^*" }, { "math_id": 107, "text": "P_{11}" }, { "math_id": 108, "text": "W" }, { "math_id": 109, "text": "P_{21} W = U_2 \\Sigma_2" }, { "math_id": 110, "text": "\\begin{aligned}\nC & = P \\lceil D, 0 \\rfloor Q^* \\\\\n{} & = [P_1 D, 0] Q^* \\\\\n{} & = \\begin{bmatrix} U_1 \\Sigma_1 W^* D & 0 \\\\ U_2 \\Sigma_2 W^* D & 0 \\end{bmatrix} Q^* \\\\\n{} & = \\begin{bmatrix} U_1 \\Sigma_1 [W^* D, 0] Q^* \\\\ U_2 \\Sigma_2 [W^* D, 0] Q^* \\end{bmatrix} .\n\\end{aligned}" }, { "math_id": 111, "text": "\\begin{bmatrix} U_1^* & 0 \\\\ 0 & U_2^* \\end{bmatrix} P_1 W = \\begin{bmatrix} \\Sigma_1 \\\\ \\Sigma_2 \\end{bmatrix}." }, { "math_id": 112, "text": "\\Sigma_1^* \\Sigma_1 + \\Sigma_2^* \\Sigma_2 = \\begin{bmatrix} \\Sigma_1 \\\\ \\Sigma_2 \\end{bmatrix}^* \\begin{bmatrix} \\Sigma_1 \\\\ \\Sigma_2 \\end{bmatrix} = W^* P_1^* \\begin{bmatrix} U_1 & 0 \\\\ 0 & U_2 \\end{bmatrix} \\begin{bmatrix} U_1^* & 0 \\\\ 0 & U_2^* \\end{bmatrix} P_1 W = I." }, { "math_id": 113, "text": "P_1" }, { "math_id": 114, "text": "||P_1||_2 \\leq 1" }, { "math_id": 115, "text": "||\\Sigma_1||_2 = ||U_1^* P_1 W||_2 = ||P_1||_2 \\leq 1." }, { "math_id": 116, "text": "x \\in \\mathbb{R}^k" }, { "math_id": 117, "text": "||x||_2 = 1" }, { "math_id": 118, "text": "||P_{21} x||_2^2 \\leq ||P_{11} x||_2^2 + ||P_{21} x||_2^2 = ||P_{1} x||_2^2 \\leq 1." }, { "math_id": 119, "text": "||P_{21}||_2 \\leq 1" }, { "math_id": 120, "text": "||\\Sigma_2||_2 = || U_2^* P_{21} W ||_2 = ||P_{21}||_2 \\leq 1." }, { "math_id": 121, "text": "M = U\\Sigma V^* \\," }, { "math_id": 122, "text": "U^* W_u U = V^* W_v V = I." }, { "math_id": 123, "text": "U" }, { "math_id": 124, "text": "V" }, { "math_id": 125, "text": "W_u" }, { "math_id": 126, "text": "W_v" } ]
https://en.wikipedia.org/wiki?curid=1007903
1007969
Iterative reconstruction
Iterative reconstruction refers to iterative algorithms used to reconstruct 2D and 3D images in certain imaging techniques. For example, in computed tomography an image must be reconstructed from projections of an object. Here, iterative reconstruction techniques are usually a better, but computationally more expensive alternative to the common filtered back projection (FBP) method, which directly calculates the image in a single reconstruction step. In recent research works, scientists have shown that extremely fast computations and massive parallelism are possible for iterative reconstruction, which makes iterative reconstruction practical for commercialization. Basic concepts. The reconstruction of an image from the acquired data is an inverse problem. Often, it is not possible to exactly solve the inverse problem directly. In this case, a direct algorithm has to approximate the solution, which might cause visible reconstruction artifacts in the image. Iterative algorithms approach the correct solution using multiple iteration steps, which allows one to obtain a better reconstruction at the cost of a higher computation time. There is a large variety of algorithms, but each starts with an assumed image, computes projections from the image, compares the original projection data and updates the image based upon the difference between the calculated and the actual projections. Algebraic reconstruction. The Algebraic Reconstruction Technique (ART) was the first iterative reconstruction technique used for computed tomography by Hounsfield. iterative Sparse Asymptotic Minimum Variance. The iterative Sparse Asymptotic Minimum Variance algorithm is an iterative, parameter-free superresolution tomographic reconstruction method inspired by compressed sensing, with applications in synthetic-aperture radar, computed tomography scan, and magnetic resonance imaging (MRI). Statistical reconstruction. 
There are typically five components to statistical iterative image reconstruction algorithms. Learned Iterative Reconstruction. In learned iterative reconstruction, the updating algorithm is learned from training data using techniques from machine learning such as convolutional neural networks, while still incorporating the image formation model. This typically gives faster and higher quality reconstructions and has been applied to CT and MRI reconstruction. Advantages. The advantages of the iterative approach include improved insensitivity to noise and the capability of reconstructing an optimal image in the case of incomplete data. The method has been applied in emission tomography modalities like SPECT and PET, where there is significant attenuation along ray paths and noise statistics are relatively poor. Statistical, likelihood-based approaches: Statistical, likelihood-based iterative expectation-maximization algorithms are now the preferred method of reconstruction. Such algorithms compute estimates of the likely distribution of annihilation events that led to the measured data, based on statistical principles, often providing better noise profiles and resistance to the streak artifacts common with FBP. Since the density of the radioactive tracer is a function in a function space, and therefore of extremely high dimension, methods which regularize the maximum-likelihood solution, turning it towards penalized or maximum a-posteriori methods, can have significant advantages for low counts. Examples such as Ulf Grenander's Sieve estimator, Bayes penalty methods, or I.J. Good's roughness method may yield superior performance to expectation-maximization-based methods which involve a Poisson likelihood function only. As another example, the iterative approach is considered superior when one does not have a large set of projections available, when the projections are not distributed uniformly in angle, or when the projections are sparse or missing at certain orientations. 
These scenarios may occur in intraoperative CT, in cardiac CT, or when metal artifacts require the exclusion of some portions of the projection data. In magnetic resonance imaging it can be used to reconstruct images from data acquired with multiple receive coils and with sampling patterns different from the conventional Cartesian grid, and it allows the use of improved regularization techniques (e.g. total variation) or an extended modeling of physical processes to improve the reconstruction. For example, with iterative algorithms it is possible to reconstruct images from data acquired in a very short time as required for real-time MRI (rt-MRI). In cryo-electron tomography, where a limited number of projections is acquired due to hardware limitations and to avoid damaging the biological specimen, it can be used along with compressive sensing techniques or regularization functions (e.g. the Huber function) to improve the reconstruction for better interpretation. Here is an example that illustrates the benefits of iterative image reconstruction for cardiac MRI. References. <templatestyles src="Reflist/styles.css" />
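The generic loop described under Basic concepts above (assume an image, compute projections, compare with the measured data, update from the difference) can be sketched with a simple Landweber-style iteration. The system matrix, sizes, and step size here are illustrative assumptions, not taken from any particular scanner:

```python
import numpy as np

# Toy sketch of a generic iterative-reconstruction loop (Landweber iteration).
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))        # forward projector: image -> projections
x_true = rng.normal(size=20)         # object we try to recover
p = A @ x_true                       # "measured" projection data

x = np.zeros(20)                     # start from an assumed (empty) image
step = 1.0 / np.linalg.norm(A, 2) ** 2   # step below 2/sigma_max^2 for convergence
for _ in range(500):
    residual = p - A @ x             # compare computed vs. actual projections
    x = x + step * A.T @ residual    # update the image from the difference

print(np.linalg.norm(p - A @ x))     # residual shrinks toward zero
```

Real algorithms differ mainly in the projector model, the update rule (ART updates ray by ray, statistical methods use a likelihood gradient), and the regularization added to the update.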
[ { "math_id": 0, "text": "f(r)" }, { "math_id": 1, "text": "\\mathbf{A}x+\\epsilon" }, { "math_id": 2, "text": "\\epsilon" } ]
https://en.wikipedia.org/wiki?curid=1007969
10083278
5-cubic honeycomb
Tiling of five-dimensional space In geometry, the 5-cubic honeycomb or penteractic honeycomb is the only regular space-filling tessellation (or honeycomb) in Euclidean 5-space. Four 5-cubes meet at each cubic cell, and it is more explicitly called an "order-4 penteractic honeycomb". It is analogous to the square tiling of the plane and to the cubic honeycomb of 3-space, and the tesseractic honeycomb of 4-space. Constructions. There are many different Wythoff constructions of this honeycomb. The most symmetric form is regular, with Schläfli symbol {4,33,4}. Another form has two alternating 5-cube facets (like a checkerboard) with Schläfli symbol {4,3,3,31,1}. The lowest symmetry Wythoff construction has 32 types of facets around each vertex and a prismatic product Schläfli symbol {∞}(5). Related polytopes and honeycombs. The [4,33,4], , Coxeter group generates 63 permutations of uniform tessellations, 35 with unique symmetry and 34 with unique geometry. The expanded 5-cubic honeycomb is geometrically identical to the 5-cubic honeycomb. The "5-cubic honeycomb" can be alternated into the 5-demicubic honeycomb, replacing the 5-cubes with 5-demicubes, and the alternated gaps are filled by 5-orthoplex facets. It is also related to the regular 6-cube which exists in 6-space with three 5-cubes on each cell. This could be considered as a tessellation on the 5-sphere, an "order-3 penteractic honeycomb", {4,34}. The Penrose tilings are 2-dimensional aperiodic tilings that can be obtained as a projection of the 5-cubic honeycomb along a 5-fold rotational axis of symmetry. The vertices correspond to points in the 5-dimensional cubic lattice, and the tiles are formed by connecting points in a predefined manner. Tritruncated 5-cubic honeycomb. A tritruncated 5-cubic honeycomb, , contains all bitruncated 5-orthoplex facets and is the Voronoi tessellation of the D5* lattice. 
Facets can be identically colored from a doubled formula_0×2, [[4,33,4]] symmetry, alternately colored from formula_0, [4,33,4] symmetry, three colors from formula_1, [4,3,3,31,1] symmetry, and 4 colors from formula_2, [31,1,3,31,1] symmetry. See also. Regular and uniform honeycombs in 5-space: References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "{\\tilde{C}}_5" }, { "math_id": 1, "text": "{\\tilde{B}}_5" }, { "math_id": 2, "text": "{\\tilde{D}}_5" } ]
https://en.wikipedia.org/wiki?curid=10083278
10083462
6-cube
6-dimensional hypercube In geometry, a 6-cube is a six-dimensional hypercube with 64 vertices, 192 edges, 240 square faces, 160 cubic cells, 60 tesseract 4-faces, and 12 5-cube 5-faces. It has Schläfli symbol {4,34}, being composed of 3 5-cubes around each 4-face. It can be called a hexeract, a portmanteau of tesseract (the "4-cube") with "hex" for six (dimensions) in Greek. It can also be called a regular dodeca-6-tope or dodecapeton, being a 6-dimensional polytope constructed from 12 regular facets. Related polytopes. It is a part of an infinite family of polytopes, called hypercubes. The dual of a 6-cube can be called a 6-orthoplex, and is a part of the infinite family of cross-polytopes. It is composed of various 5-cubes, at perpendicular angles on the u-axis, forming coordinates (x,y,z,w,v,u). Applying an "alternation" operation, deleting alternating vertices of the 6-cube, creates another uniform polytope, called a 6-demicube, (part of an infinite family called demihypercubes), which has 12 5-demicube and 32 5-simplex facets. As a configuration. This configuration matrix represents the 6-cube. The rows and columns correspond to vertices, edges, faces, cells, 4-faces and 5-faces. The diagonal numbers say how many of each element occur in the whole 6-cube. The nondiagonal numbers say how many of the column's element occur in or at the row's element. formula_0 Cartesian coordinates. Cartesian coordinates for the vertices of a 6-cube centered at the origin and edge length 2 are (±1,±1,±1,±1,±1,±1) while the interior of the same consists of all points (x0, x1, x2, x3, x4, x5) with −1 &lt; xi &lt; 1. Construction. There are three Coxeter groups associated with the 6-cube, one regular, with the C6 or [4,3,3,3,3] Coxeter group, and a half symmetry (D6) or [33,1,1] Coxeter group. The lowest symmetry construction is based on hyperrectangles or proprisms, cartesian products of lower dimensional hypercubes. Related polytopes. 
The 64 vertices of a 6-cube also represent a regular skew 4-polytope {4,3,4 | 4}. Its net can be seen as a 4×4×4 matrix of 64 cubes, a periodic subset of the cubic honeycomb, {4,3,4}, in 3 dimensions. It has 192 edges, and 192 square faces. Opposite faces fold together into a 4-cycle. Each fold direction adds 1 dimension, raising it into 6-space. The "6-cube" is 6th in a series of hypercubes: This polytope is one of 63 uniform 6-polytopes generated from the B6 Coxeter plane, including the regular 6-cube or 6-orthoplex.
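The element counts in the configuration matrix above can be checked directly from the (±1, ..., ±1) Cartesian coordinates; a short sketch:

```python
from itertools import combinations, product
from math import comb

# Vertices of the 6-cube are all sign patterns (+/-1)^6.
verts = list(product([-1, 1], repeat=6))

# Two vertices are joined by an edge iff they differ in exactly one coordinate.
edges = [(u, v) for u, v in combinations(verts, 2)
         if sum(a != b for a, b in zip(u, v)) == 1]

# A k-face lets k coordinates vary and fixes the other 6-k signs,
# so there are C(6,k) * 2^(6-k) of them.
counts = [comb(6, k) * 2 ** (6 - k) for k in range(7)]
print(len(verts), len(edges))   # 64 192
print(counts)                   # [64, 192, 240, 160, 60, 12, 1]
```

The list reproduces the 64 vertices, 192 edges, 240 squares, 160 cubes, 60 tesseracts, and 12 5-cubes quoted in the opening paragraph.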
[ { "math_id": 0, "text": "\\begin{bmatrix}\\begin{matrix}64 & 6 & 15 & 20 & 15 & 6 \\\\ 2 & 192 & 5 & 10 & 10 & 5 \\\\ 4 & 4 & 240 & 4 & 6 & 4 \\\\ 8 & 12 & 6 & 160 & 3 & 3 \\\\ 16 & 32 & 24 & 8 & 60 & 2 \\\\ 32 & 80 & 80 & 40 & 10 & 12 \\end{matrix}\\end{bmatrix}" } ]
https://en.wikipedia.org/wiki?curid=10083462
10083518
6-orthoplex
In geometry, a 6-orthoplex, or 6-cross polytope, is a regular 6-polytope with 12 vertices, 60 edges, 160 triangle faces, 240 tetrahedron cells, 192 5-cell "4-faces", and 64 "5-faces". It has two constructed forms, the first being regular with Schläfli symbol {34,4}, and the second with alternately labeled (checkerboarded) facets, with Schläfli symbol {3,3,3,31,1} or Coxeter symbol 311. It is a part of an infinite family of polytopes, called cross-polytopes or "orthoplexes". The dual polytope is the 6-hypercube, or hexeract. As a configuration. This configuration matrix represents the 6-orthoplex. The rows and columns correspond to vertices, edges, faces, cells, 4-faces and 5-faces. The diagonal numbers say how many of each element occur in the whole 6-orthoplex. The nondiagonal numbers say how many of the column's element occur in or at the row's element. formula_0 Construction. There are three Coxeter groups associated with the 6-orthoplex, one regular, dual of the hexeract with the C6 or [4,3,3,3,3] Coxeter group, and a half symmetry with two copies of 5-simplex facets, alternating, with the D6 or [33,1,1] Coxeter group. A lowest symmetry construction is based on a dual of a 6-orthotope, called a 6-fusil. Cartesian coordinates. Cartesian coordinates for the vertices of a 6-orthoplex, centered at the origin are (±1,0,0,0,0,0), (0,±1,0,0,0,0), (0,0,±1,0,0,0), (0,0,0,±1,0,0), (0,0,0,0,±1,0), (0,0,0,0,0,±1) Every vertex pair is connected by an edge, except opposites. Related polytopes. The 6-orthoplex can be projected down to 3-dimensions into the vertices of a regular icosahedron. It is in a dimensional series of uniform polytopes and honeycombs, expressed by Coxeter as 3k1 series. (A degenerate 4-dimensional case exists as 3-sphere tiling, a tetrahedral hosohedron.) This polytope is one of 63 uniform 6-polytopes generated from the B6 Coxeter plane, including the regular 6-cube or 6-orthoplex.
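The vertex and edge counts stated above follow directly from the signed-unit-vector coordinates; a quick check:

```python
import numpy as np
from itertools import combinations

# Vertices of the 6-orthoplex: the 12 signed unit vectors (+/-1 in one slot).
verts = [s * np.eye(6)[i] for i in range(6) for s in (1, -1)]

# Every vertex pair is joined by an edge except the 6 antipodal pairs,
# giving C(12,2) - 6 = 60 edges.
edges = [(u, v) for u, v in combinations(verts, 2) if np.any(u + v != 0)]
print(len(verts), len(edges))  # 12 60
```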
[ { "math_id": 0, "text": "\\begin{bmatrix}\\begin{matrix}12 & 10 & 40 & 80 & 80 & 32 \\\\ 2 & 60 & 8 & 24 & 32 & 16 \\\\ 3 & 3 & 160 & 6 & 12 & 8 \\\\ 4 & 6 & 4 & 240 & 4 & 4 \\\\ 5 & 10 & 10 & 5 & 192 & 2 \\\\ 6 & 15 & 20 & 15 & 6 & 64 \\end{matrix}\\end{bmatrix}" } ]
https://en.wikipedia.org/wiki?curid=10083518
1008471
Wigner–Eckart theorem
Theorem used in quantum mechanics for angular momentum calculations The Wigner–Eckart theorem is a theorem of representation theory and quantum mechanics. It states that matrix elements of spherical tensor operators in the basis of angular momentum eigenstates can be expressed as the product of two factors, one of which is independent of angular momentum orientation, and the other a Clebsch–Gordan coefficient. The name derives from physicists Eugene Wigner and Carl Eckart, who developed the formalism as a link between the symmetry transformation groups of space (applied to the Schrödinger equations) and the laws of conservation of energy, momentum, and angular momentum. Mathematically, the Wigner–Eckart theorem is generally stated in the following way. Given a tensor operator formula_0 and two states of angular momenta formula_1 and formula_2, there exists a constant formula_3 such that for all formula_4, formula_5, and formula_6, the following equation is satisfied: formula_7 where The Wigner–Eckart theorem states indeed that operating with a spherical tensor operator of rank "k" on an angular momentum eigenstate is like adding a state with angular momentum "k" to the state. The matrix element one finds for the spherical tensor operator is proportional to a Clebsch–Gordan coefficient, which arises when considering adding two angular momenta. When stated another way, one can say that the Wigner–Eckart theorem is a theorem that tells how vector operators behave in a subspace. Within a given subspace, a component of a vector operator will behave in a way proportional to the same component of the angular momentum operator. This definition is given in the book "Quantum Mechanics" by Cohen–Tannoudji, Diu and Laloe. Background and overview. Motivating example: position operator matrix elements for 4d → 2p transition. Let's say we want to calculate transition dipole moments for an electron transition from a 4d to a 2p orbital of a hydrogen atom, i.e. 
the matrix elements of the form formula_11, where "r""i" is either the "x", "y", or "z" component of the position operator, and "m"1, "m"2 are the magnetic quantum numbers that distinguish different orbitals within the 2p or 4d subshell. If we do this directly, it involves calculating 45 different integrals: there are 3 possibilities for "m"1 (−1, 0, 1), 5 possibilities for "m"2 (−2, −1, 0, 1, 2), and 3 possibilities for "i", so the total is 3 × 5 × 3 = 45. The Wigner–Eckart theorem allows one to obtain the same information after evaluating just "one" of those 45 integrals ("any" of them can be used, as long as it is nonzero). Then the other 44 integrals can be inferred from that first one—without the need to write down any wavefunctions or evaluate any integrals—with the help of Clebsch–Gordan coefficients, which can be easily looked up in a table or computed by hand or computer. Qualitative summary of proof. The Wigner–Eckart theorem works because all 45 of these different calculations are related to each other by rotations. If an electron is in one of the 2p orbitals, rotating the system will generally move it into a "different" 2p orbital (usually it will wind up in a quantum superposition of all three basis states, "m" = +1, 0, −1). Similarly, if an electron is in one of the 4d orbitals, rotating the system will move it into a different 4d orbital. Finally, an analogous statement is true for the position operator: when the system is rotated, the three different components of the position operator are effectively interchanged or mixed. If we start by knowing just one of the 45 values (say, we know that formula_12) and then we rotate the system, we can infer that "K" is also the matrix element between the rotated version of formula_13, the rotated version of formula_14, and the rotated version of formula_15. This gives an algebraic relation involving "K" and some or all of the 44 unknown matrix elements. 
Different rotations of the system lead to different algebraic relations, and it turns out that there is enough information to figure out all of the matrix elements in this way. In terms of representation theory. To state these observations more precisely and to prove them, it helps to invoke the mathematics of representation theory. For example, the set of all possible 4d orbitals (i.e., the 5 states "m" = −2, −1, 0, 1, 2 and their quantum superpositions) form a 5-dimensional abstract vector space. Rotating the system transforms these states into each other, so this is an example of a "group representation", in this case, the 5-dimensional irreducible representation ("irrep") of the rotation group SU(2) or SO(3), also called the "spin-2 representation". Similarly, the 2p quantum states form a 3-dimensional irrep (called "spin-1"), and the components of the position operator also form the 3-dimensional "spin-1" irrep. Now consider the matrix elements formula_11. It turns out that these are transformed by rotations according to the tensor product of those three representations, i.e. the spin-1 representation of the 2p orbitals, the spin-1 representation of the components of r, and the spin-2 representation of the 4d orbitals. This direct product, a 45-dimensional representation of SU(2), is "not" an irreducible representation, instead it is the direct sum of a spin-4 representation, two spin-3 representations, three spin-2 representations, two spin-1 representations, and a spin-0 (i.e. trivial) representation. The nonzero matrix elements can only come from the spin-0 subspace. The Wigner–Eckart theorem works because the direct product decomposition contains one and only one spin-0 subspace, which implies that all the matrix elements are determined by a single scale factor. 
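The direct-sum decomposition quoted above can be reproduced with the triangle rule for coupling angular momenta, j1 ⊗ j2 = |j1 − j2| ⊕ ... ⊕ (j1 + j2); a quick sketch:

```python
# Couple spin-1 (2p states), spin-1 (components of r), and spin-2 (4d states)
# by the triangle rule and confirm the 45-dimensional direct sum.
def couple(j1, j2):
    """Spins appearing in the tensor product of spins j1 and j2 (integers)."""
    return list(range(abs(j1 - j2), j1 + j2 + 1))

spins = sorted(j for j12 in couple(1, 1) for j in couple(j12, 2))
print(spins)                          # [0, 1, 1, 2, 2, 2, 3, 3, 4]
print(sum(2 * j + 1 for j in spins))  # 45
print(spins.count(0))                 # exactly one spin-0 subspace
```

The single spin-0 entry is the fact the argument rests on: one invariant subspace means one overall scale factor.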
Apart from the overall scale factor, calculating the matrix element formula_11 is equivalent to calculating the projection of the corresponding abstract vector (in 45-dimensional space) onto the spin-0 subspace. The results of this calculation are the Clebsch–Gordan coefficients. The key qualitative aspect of the Clebsch–Gordan decomposition that makes the argument work is that in the decomposition of the tensor product of two irreducible representations, each irreducible representation occurs only once. This allows Schur's lemma to be used. Proof. Starting with the definition of a spherical tensor operator, we have formula_16 which we use to then calculate formula_17 If we expand the commutator on the LHS by calculating the action of the "J"± on the bra and ket, then we get formula_18 We may combine these two results to get formula_19 This recursion relation for the matrix elements closely resembles that of the Clebsch–Gordan coefficient. In fact, both are of the form Σ"c" "a""b", "c" "x""c" = 0. We therefore have two sets of linear homogeneous equations: formula_20 one for the Clebsch–Gordan coefficients ("xc") and one for the matrix elements ("yc"). It is not possible to exactly solve for "xc". We can only say that the ratios are equal, that is formula_21 or that "xc" ∝ "yc", where the coefficient of proportionality is independent of the indices. Hence, by comparing recursion relations, we can identify the Clebsch–Gordan coefficient ⟨"j"1 "m"1 "j"2 ("m"2 ± 1)|"j m"⟩ with the matrix element ⟨"j"′ "m"′|"T"("k")"q" ± 1|"j" "m"⟩, then we may write formula_22 Alternative conventions. There are different conventions for the reduced matrix elements. One convention, used by Racah and Wigner, includes an additional phase and normalization factor, formula_23 where the 2 × 3 array denotes the 3-j symbol. (Since in practice "k" is often an integer, the (−1)2 "k" factor is sometimes omitted in literature.) 
With this choice of normalization, the reduced matrix element satisfies the relation: formula_24 where the Hermitian adjoint is defined with the "k" − "q" convention. Although this relation is not affected by the presence or absence of the (−1)2 "k" phase factor in the definition of the reduced matrix element, it is affected by the phase convention for the Hermitian adjoint. Another convention for reduced matrix elements is that of Sakurai's "Modern Quantum Mechanics": formula_25 Example. Consider the position expectation value ⟨"n j m"|"x"|"n j m"⟩. This matrix element is the expectation value of a Cartesian operator in a spherically symmetric hydrogen-atom-eigenstate basis, which is a nontrivial problem. However, the Wigner–Eckart theorem simplifies the problem. (In fact, we could obtain the solution quickly using parity, although a slightly longer route will be taken.) We know that "x" is one component of r, which is a vector. Since vectors are rank-1 spherical tensor operators, it follows that "x" must be some linear combination of a rank-1 spherical tensor "T"(1)"q" with "q" ∈ {−1, 0, 1}. In fact, it can be shown that formula_26 where we define the spherical tensors as formula_27 and "Y""l""m" are spherical harmonics, which themselves are also spherical tensors of rank "l". Additionally, "T"(1)0 = "z", and formula_28 Therefore, formula_29 The above expression gives us the matrix element for "x" in the |"n j m"⟩ basis. To find the expectation value, we set "n"′ = "n", "j"′ = "j", and "m"′ = "m". The selection rule for "m"′ and "m" is "m" ± 1 = "m"′ for the "T"(1)±1 spherical tensors. As we have "m"′ = "m", this makes the Clebsch–Gordan coefficients zero, leading to the expectation value to be equal to zero.
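The theorem can also be checked numerically. A minimal sketch for j = j′ = 1, using the angular momentum operator itself as a rank-1 spherical tensor (T(1)±1 = ∓(Jx ± iJy)/√2, T(1)0 = Jz); the hardcoded values are the standard ⟨1 m1; 1 m2|1 (m1+m2)⟩ Clebsch–Gordan table, and every surviving ratio should equal the single reduced matrix element:

```python
import numpy as np

ms = [1, 0, -1]                                  # basis order |1 1>, |1 0>, |1 -1>
Jp = np.array([[0, np.sqrt(2), 0],               # raising operator J+ for j = 1
               [0, 0, np.sqrt(2)],
               [0, 0, 0]])
# Spherical components of J as a rank-1 tensor operator.
T = {+1: -Jp / np.sqrt(2), 0: np.diag([1.0, 0.0, -1.0]), -1: Jp.T / np.sqrt(2)}

# Standard Clebsch-Gordan table <1 m1; 1 m2 | 1 (m1 + m2)>.
s = 1 / np.sqrt(2)
cg = {(1, 0): s, (0, 1): -s, (1, -1): s, (-1, 1): -s, (0, -1): s, (-1, 0): -s}

ratios = []
for q, Tq in T.items():
    for a, mp in enumerate(ms):                  # bra <1 m'|
        for b, m in enumerate(ms):               # ket |1 m>
            c = cg.get((m, q), 0.0)              # <1 m; 1 q | 1 m'>, zero unless m + q = m'
            if c and m + q == mp:
                ratios.append(Tq[a, b] / c)      # should all equal <1||T||1>
print(ratios)  # every ratio is sqrt(2): one reduced matrix element for all components
```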
[ { "math_id": 0, "text": "T^{(k)}" }, { "math_id": 1, "text": "j" }, { "math_id": 2, "text": "j'" }, { "math_id": 3, "text": "\\langle j \\| T^{(k)} \\| j' \\rangle" }, { "math_id": 4, "text": "m" }, { "math_id": 5, "text": "m'" }, { "math_id": 6, "text": "q" }, { "math_id": 7, "text": "\n \\langle j \\, m | T^{(k)}_q | j' \\, m'\\rangle\n = \\langle j' \\, m' \\, k \\, q | j \\, m \\rangle \\langle j \\| T^{(k)} \\| j'\\rangle,\n" }, { "math_id": 8, "text": "T^{(k)}_q" }, { "math_id": 9, "text": "|j m\\rangle" }, { "math_id": 10, "text": "\\langle j' m' k q | j m\\rangle" }, { "math_id": 11, "text": "\\langle 2p,m_1 | r_i | 4d,m_2 \\rangle" }, { "math_id": 12, "text": "\\langle 2p,m_1 | r_i | 4d,m_2 \\rangle = K" }, { "math_id": 13, "text": "\\langle 2p,m_1 |" }, { "math_id": 14, "text": "r_i" }, { "math_id": 15, "text": "| 4d,m_2 \\rangle" }, { "math_id": 16, "text": "[J_\\pm, T^{(k)}_q] = \\hbar \\sqrt{(k \\mp q)(k \\pm q + 1)}T_{q\\pm 1}^{(k)}," }, { "math_id": 17, "text": "\n\\begin{align}\n &\\langle j \\, m | [J_\\pm, T^{(k)}_q] | j' \\, m' \n\\rangle = \\hbar \\sqrt{(k \\mp q) (k \\pm q + 1)} \\, \n \\langle j \\, m | T^{(k)}_{q \\pm 1} | j' \\, m' \\rangle.\n\\end{align}\n" }, { "math_id": 18, "text": "\n\\begin{align} \n \\langle j \\, m | [J_\\pm, T^{(k)}_q] | j' \\, m' \n\\rangle ={} &\\hbar\\sqrt{(j \\pm m) (j \\mp m + 1)} \\, \\langle j \\, (m \\mp 1) | T^{(k)}_q | j' \\, m' \\rangle \\\\\n &-\\hbar\\sqrt{(j' \\mp m')(j' \\pm m' + 1)} \\, \\langle j \\, m | T^{(k)}_q | j' \\, (m' \\pm 1) \\rangle.\n\\end{align}\n" }, { "math_id": 19, "text": "\n\\begin{align} \n \\sqrt{(j \\pm m) (j \\mp m + 1)} \\langle j \\, (m \\mp 1) | T^{(k)}_q | j' \\, m' \n\\rangle = &\\sqrt{(j' \\mp m') (j' \\pm m' + 1)} \\, \\langle j \\, m | T^{(k)}_q | j' \\, (m' \\pm 1) \\rangle \\\\\n &+\\sqrt{(k \\mp q) (k \\pm q + 1)} \\, \\langle j \\, m | T^{(k)}_{q \\pm 1} | j' \\, m' \\rangle.\n\\end{align}\n" }, { "math_id": 20, "text": "\n\\begin{align}\n \\sum_c a_{b, c} x_c &= 
0, &\n \\sum_c a_{b, c} y_c &= 0.\n\\end{align}\n" }, { "math_id": 21, "text": "\\frac{x_c}{x_d} = \\frac{y_c}{y_d}" }, { "math_id": 22, "text": "\n \\langle j' \\, m' | T^{(k)}_{q \\pm 1} | j \\, m\\rangle\n \\propto \\langle j \\, m \\, k \\, (q \\pm 1) | j' \\, m' \\rangle.\n" }, { "math_id": 23, "text": "\n \\langle j \\, m | T^{(k)}_q | j' \\, m'\\rangle\n = \\frac{(-1)^{2 k} \\langle j' \\, m' \\, k \\, q | j \\, m \\rangle \\langle j \\| T^{(k)} \\| j'\\rangle_{\\mathrm{R}}}{\\sqrt{2 j + 1}}\n = (-1)^{j - m}\n \\begin{pmatrix}\n j & k & j' \\\\\n -m & q & m'\n \\end{pmatrix} \\langle j \\| T^{(k)} \\| j'\\rangle_{\\mathrm{R}}.\n" }, { "math_id": 24, "text": "\\langle j \\| T^{\\dagger (k)} \\| j'\\rangle_{\\mathrm{R}} = (-1)^{k + j' - j} \\langle j' \\| T^{(k)} \\| j\\rangle_{\\mathrm{R}}^*," }, { "math_id": 25, "text": "\n \\langle j \\, m | T^{(k)}_q | j' \\, m'\\rangle\n = \\frac{\\langle j' \\, m' \\, k \\, q | j \\, m \\rangle \\langle j \\| T^{(k)} \\| j'\\rangle}{\\sqrt{2 j' + 1}}.\n" }, { "math_id": 26, "text": "x = \\frac{T^{(1)}_{-1} - T^{(1)}_1}{\\sqrt{2}}," }, { "math_id": 27, "text": "T^{(1)}_{q} = \\sqrt{\\frac{4 \\pi}{3}} r Y_1^q" }, { "math_id": 28, "text": "T^{(1)}_{\\pm 1} = \\mp \\frac{x \\pm i y}{\\sqrt{2}}." }, { "math_id": 29, "text": "\n\\begin{align}\n\\langle n \\, j \\, m | x | n' \\, j' \\, m'\n\\rangle\n & = \\left\\langle n \\, j \\, m \\left| \\frac{T^{(1)}_{-1} - T^{(1)}_1}{\\sqrt{2}} \\right| n' \\, j' \\, m'\n\\right\\rangle \\\\\n & = \\frac{1}{\\sqrt{2}} \\langle n \\, j \\| T^{(1)} \\| n' \\, j'\\rangle \\,\n \\big(\\langle j' \\, m' \\, 1 \\, (-1) | j \\, m \\rangle - \\langle j' \\, m' \\, 1 \\, 1 | j \\, m \\rangle\\big).\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=1008471
10084899
Static synchronous compensator
Regulating device used on transmission networks In electrical engineering, a static synchronous compensator (STATCOM) is a shunt-connected, reactive compensation device used on transmission networks. It uses power electronics to form a voltage-source converter that can act as either a source or sink of reactive AC power to an electricity network. It is a member of the FACTS family of devices. STATCOMs are alternatives to other passive reactive power devices, such as capacitors and inductors (reactors). They have a variable reactive power output, can change their output within milliseconds, and are able to supply and consume both capacitive and inductive vars. While they can be used for voltage support and power factor correction, their speed and capability are better suited for dynamic situations like supporting the grid under fault conditions or contingency events. The use of voltage-source based FACTS devices had been desirable for some time, as it helps mitigate the limitations of current-source based devices, whose reactive output decreases with system voltage. However, limitations in technology have historically prevented wide adoption of STATCOMs. When gate turn-off thyristors (GTO) became more widely available in the 1990s and had the ability to switch both on and off at higher power levels, the first STATCOMs began to be commercially available. These devices typically used 3-level topologies and pulse-width modulation (PWM) to simulate voltage waveforms. Modern STATCOMs now make use of insulated-gate bipolar transistors (IGBTs), which allow for faster switching at high power levels. 3-level topologies have begun to give way to modular multi-level converter (MMC) topologies, which allow for more levels in the voltage waveform, reducing harmonics and improving performance. History. When AC won the War of Currents in the late 19th century, and electric grids began expanding and connecting cities and states, the need for reactive compensation became apparent. 
While AC offered benefits with transformation and reduced current, the alternating nature of voltage and current led to additional challenges with the natural capacitance and inductance of transmission lines. Heavily loaded lines consumed reactive power due to the line's inductance, and as transmission voltage increased throughout the 20th century, the higher voltage supplied capacitive reactive power. As operating a transmission line only at its surge impedance loading (SIL) was not feasible, other means to manage the reactive power were needed. Synchronous machines were commonly used at the time for generators, and could provide some reactive power support, however they were limited due to the increase in losses this caused. They also became less effective as higher voltage transmission lines moved loads further from sources. Fixed, shunt capacitor and reactor banks filled this need by being deployed where needed. In particular, shunt capacitors switched by circuit breakers provided an effective means of managing varying reactive power requirements due to changing loads. However, this was not without limitations. Shunt capacitors and reactors are fixed devices, only able to be switched on and off. This required either a careful study of the exact size needed, or accepting less than ideal effects on the voltage of a transmission line. The need for a more dynamic and flexible solution was realized with the mercury-arc valve in the early 20th century. Similar to a vacuum tube, the mercury-arc valve was a high-powered rectifier, capable of converting high AC voltages to DC. As the technology improved, inverting became possible as well, and mercury valves found use in power systems and HVDC ties. When connected to a reactor, different switching patterns could be used to vary the effective inductance connected, allowing for more dynamic control. Arc valves continued to dominate power electronics until the rise of solid-state semiconductors in the mid 20th century. 
As semiconductors replaced vacuum tubes, the thyristor created the first modern FACTS device in the static VAR compensator (SVC). Effectively working as a circuit breaker that could switch on in milliseconds, it allowed for quickly switching capacitor banks. Connected to a reactor and switched sub-cycle, it allowed the effective inductance to be varied. The thyristor also greatly improved the control system, allowing an SVC to detect and react to faults to better support the system. The thyristor dominated the FACTS and HVDC world until the late 20th century, when the IGBT began to match its power ratings. With the IGBT, the first voltage-sourced converters and STATCOMs began to enter the FACTS world. A prototype 1 MVAr STATCOM was described in a report by Empire State Electric Energy Research Corporation in 1987. The first production 100 MVAr STATCOM made by Westinghouse Electric was installed at the Tennessee Valley Authority Sullivan substation in 1995 but was quickly retired due to obsolescence of its components. Theory. The basis of a STATCOM is a voltage source converter (VSC) connected in series with some type of reactance, either a fixed inductor or a power transformer. This allows a STATCOM to control power flow much like a transmission line, albeit without any active (real) power flow. Given an inductor connected between two AC voltages, the reactive power flow between the two points is given by: formula_0 where formula_1: Reactive Power formula_2: Sending-End Voltage formula_3: Magnitude difference in formula_2 and receiving end voltage formula_4 formula_5: Reactance of the inductor or transformer formula_6: Phase-Angle difference between formula_2 and formula_4 With formula_6 close to zero (as the STATCOM provides no real power and only consumes a small amount as losses) and formula_5 a fixed size, reactive power flow is controlled by the difference in magnitude of the two AC voltages. 
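A minimal numerical sketch of this relation with the phase angle taken as zero; the per-unit voltages and the 0.1 pu coupling reactance are illustrative assumptions, not values from the text:

```python
def statcom_q(v_statcom, v_system, x):
    """Reactive power exchanged with the system for a small phase angle:
    Q = V_statcom * (V_statcom - V_system) / X (all quantities per-unit)."""
    return v_statcom * (v_statcom - v_system) / x

X = 0.1  # assumed coupling transformer/inductor reactance, pu
print(statcom_q(1.05, 1.00, X))  # positive: higher VSC voltage supplies (capacitive) vars
print(statcom_q(0.95, 1.00, X))  # negative: lower VSC voltage absorbs (inductive) vars
```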
From the equation, if the STATCOM creates a voltage magnitude greater than the system voltage, it supplies capacitive reactive power to the system. If the STATCOM's voltage magnitude is less, it consumes inductive reactive power from the system. As most modern VSCs are made of power electronics that are capable of making small voltage changes very quickly, a dynamic reactive power output is possible. This compares to a traditional, fixed capacitor or inductor, that is either off (0 MVar) or at its maximum (for example, 50 MVar). A similarly sized STATCOM would range from 50 MVar capacitive to 50 MVar inductive, in as small as 1 MVar steps. VSC topologies. Since a STATCOM varies its voltage magnitude to control reactive power, the topology of how the VSC is designed and connected defines how effectively and quickly it can operate. There are numerous different topologies available for VSCs and power electronic based converters; the most common ones are covered below. IGBTs are listed as the power electronic device below; however, older devices also used GTO thyristors. Two-level converter. One of the earliest VSC topologies was the two-level converter, adapted from the three-phase bridge rectifier. Also referred to as a 6-pulse rectifier, it is able to connect the AC voltage through different IGBT paths based on switching. When used as a rectifier to convert AC to DC, this allows both the positive and negative portions of the waveform to be converted to DC. When used in a VSC for a STATCOM, a capacitor can be connected across the DC side to produce a square wave with two levels. This alone offers no real advantages for a STATCOM, as the voltage magnitude is fixed. However, if the IGBTs can be switched fast enough, pulse-width modulation (PWM) can be used to control the voltage magnitude. By varying the durations of the pulses, the effective magnitude of the voltage waveform can be controlled. 
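How PWM controls the effective magnitude can be sketched by comparing a sinusoidal reference to a triangular carrier; the fundamental of the resulting two-level square-wave train tracks the modulation index. The frequencies, modulation index, and ±1 output levels here are illustrative assumptions:

```python
import numpy as np

# Sine-triangle PWM: two-level output switching between +/- Vdc/2 (here +/-1).
f, fc, m = 50.0, 21 * 50.0, 0.8      # fundamental, carrier, modulation index
t = np.linspace(0.0, 1.0 / f, 200001, endpoint=False)
ref = m * np.sin(2 * np.pi * f * t)
tri = 2 / np.pi * np.arcsin(np.sin(2 * np.pi * fc * t))   # triangle carrier in [-1, 1]
v = np.where(ref > tri, 1.0, -1.0)                        # two-level PWM waveform

a1 = 2 * np.mean(v * np.sin(2 * np.pi * f * t))           # fundamental amplitude
print(a1)  # ~0.8: the effective magnitude follows the commanded pulse widths
```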
Since PWM still only produces square waves, harmonic generation is quite significant. Some harmonic reduction can be achieved by applying analytical techniques to different switching patterns; however, this is limited by controller complexity. Each level of the two-level converter also generally comprises multiple series IGBTs to create the needed final voltage, so coordination and timing between individual devices is challenging. Three-level converter. Adding levels to a converter topology has the benefit of more closely mirroring a true voltage sine wave, which reduces harmonic generation and improves performance. If each of the three phases of a VSC uses its own two-level converter, the phase-to-phase voltage will have three levels (while the three phases have the same switching pattern, they are shifted in time relative to each other). This allows a positive and negative peak in addition to a zero level, which adds positive and negative symmetry and eliminates even-order harmonics. Another option is to enhance the two-level topology into a three-level converter. By adding two additional IGBTs to the converter, three different levels can be created by having two IGBTs on at once. If each phase has its own three-level converter, then a total of five levels can be created. This creates a very crude sine wave, but PWM still offers lower harmonic generation (as the pulses are present on all five levels). Three-level converters can also be combined with transformers and phase shifting to create additional levels. A transformer with two secondaries, one wye-wye and the other wye-delta, can be connected to two separate three-phase, three-level converters to double the number of levels. Additional phase-shifted windings can be used to turn the traditional 6 pulses of a three-level converter into 12, 24, or even 48 pulses.
With this many pulses and levels, the waveform better approximates a true sine wave, and the harmonics generated are of a much higher order that can be filtered out with a low-pass filter. Modular multi-level converter. While adding phase shifting to three-level converters improves harmonic performance, it comes at the cost of adding two, three, or even four additional converters. It also adds little to no redundancy, as the switching pattern is too complex to accommodate the loss of one converter. As the idea of the three-level converter is to add levels to better approximate a voltage sine wave, another topology, the modular multi-level converter (MMC), offers further benefits. The MMC topology is similar to the three-level converter in that switching various IGBTs connects different capacitors to the circuit. As each IGBT "switch" has its own capacitor, voltage can be built up in discrete steps. Adding levels increases the number of steps, better approximating a sine wave. With enough levels, PWM is not necessary, as the waveform created is close enough to a true voltage sine wave and generates very few harmonics. The IGBT arrangement around the capacitor for each step depends on the DC needs. If a DC bus is needed (for an HVDC tie or a STATCOM with synthetic inertia), then only two IGBTs are needed per capacitor level. If a DC bus is not needed, and there are benefits to connecting the three phases in a delta arrangement to eliminate zero-sequence harmonics, four IGBTs can be used to surround the capacitor, allowing it to be bypassed or switched in at either polarity. Operation. As a STATCOM's VSC operation is based on changing current flow to affect voltage, its voltage-current (VI) characteristic controls how it operates. The VI characteristic can be divided into two distinct parts: a sloped region between its inductive and capacitive maximums, and its maximum operating points.
While in the sloped region between its maximums, the STATCOM is said to be in voltage regulation mode, where it either supplies capacitive vars to increase the voltage or consumes inductive vars to lower the voltage. The rate at which it does this is set by the slope, which functions similarly to a generator's droop speed control. This slope is programmable and can be set to a high value (to have the STATCOM regulate voltage like a traditional fixed reactive device) or to near zero, producing a very flat line and reserving the STATCOM's capacity for dynamic or transient events. The maximum slope is generally around 5%, to keep the system voltage within 5% of its nominal value. When operating at either of its maximums, the STATCOM is said to be in VAR control mode, where it supplies or consumes its maximum reactive output. Unlike a traditional SVC, whose capacitive reactive output falls with the square of the voltage, a STATCOM can maintain its maximum capacitive current at any voltage. This offers an advantage over SVCs, as a STATCOM's effectiveness is not dependent on the voltage drop caused by the fault. While technically capable of responding at near-zero voltage magnitudes, a STATCOM is typically set to ride through voltage drops only down to around 0.2 pu, to prevent it from causing a high over-voltage when the fault clears and the voltage returns to normal. A STATCOM may also have a transient rating, where it can provide above its maximum current for a very short time, allowing it to better support the system during larger faults. This rating depends on the specific design, but can be as high as 3.0 pu. To control the operation of a STATCOM when in voltage control mode, a closed-loop PID regulator is typically used, which provides feedback on how changing the current flow is affecting the system voltage.
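The VI characteristic can be sketched as a simple droop function in per-unit quantities (the 3% slope and 1.0 pu current limit below are illustrative settings, not standard values):

```python
def statcom_current(v_pu, v_ref=1.0, slope=0.03, i_max=1.0):
    """Reactive current commanded by the droop characteristic.
    Positive = capacitive (raises voltage), negative = inductive.
    In the sloped region i = (v_ref - v)/slope; outside it the unit
    clamps at its maximum, i.e. VAR control mode."""
    i = (v_ref - v_pu) / slope
    return max(-i_max, min(i_max, i))

# 1% low voltage -> partial capacitive output (voltage regulation mode):
i_reg = statcom_current(0.99)
# deep sag -> clamped at full capacitive output (VAR control mode):
i_sag = statcom_current(0.90)
# high voltage -> clamped at full inductive output:
i_high = statcom_current(1.10)
```

A steeper `slope` makes the unit behave more like a fixed reactive device; a slope near zero keeps the output small in steady state, reserving capacity for transients.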
A simplified PID regulator is shown; however, a separate closed loop is sometimes used to determine the reference voltage with respect to the slope and any other modes a STATCOM may have. A full PID system can be used, but typically the derivative component is removed (or set very low) to prevent noise from the system or measurements from causing unwanted fluctuations. A STATCOM may also have additional modes besides voltage regulation or VAR control, depending on the specific needs of the system. Examples include active filtering of system harmonics and gain control to accommodate system strength changes due to outages of generation or loads. Application. As a fast, dynamic, multi-quadrant source of reactive power, a STATCOM can be used for a wide variety of applications; however, it is best suited to supporting the grid during fault, transient, or contingency events. One popular use is to place a STATCOM along a transmission line to improve system power flow. Under normal operation the STATCOM does very little; however, in the event of a fault on a nearby line, the power that line was serving is forced onto other transmission lines. Ordinarily the increased power flow causes a larger voltage drop, but with a STATCOM available, it can supply reactive power to raise the voltage until either the fault is removed (if temporary) or a fixed capacitor can be switched in (if the fault is permanent). In some cases, a STATCOM can be installed at a substation to help support multiple lines rather than just one, and to help reduce the complexity of the protection on the line with a STATCOM on it. Depending on the available control functions, STATCOMs can also be used for more advanced applications, such as active filtering, power oscillation damping (POD), or even limited active power interactions. With the growth of distributed energy resources (DER) and energy storage, there has been research into using STATCOMs to aid or augment these uses.
One area of recent research is virtual inertia: the use of an energy source on the DC side of a STATCOM to give it an inertia response similar to a synchronous condenser or generator. STATCOM vs. SVC. Fundamentally, a STATCOM is a type of static VAR compensator (SVC), with the main difference being that a STATCOM is a voltage-sourced converter while a traditional SVC is a current-sourced converter. Historically, STATCOMs have been costlier than SVCs, in part due to the higher cost of IGBTs, but in recent years IGBT power ratings have increased, closing the gap. The response time of a STATCOM is shorter than that of an SVC, mainly due to the fast switching times provided by the IGBTs of the voltage source converter (thyristors cannot be switched off and must be commutated). As a result, the reaction time of a STATCOM is one to two cycles vs. two to three cycles for an SVC. The STATCOM also provides better reactive power support at low AC voltages than an SVC, since the reactive power from a STATCOM decreases only linearly with the AC voltage (the current can be maintained at the rated value even down to low AC voltage), as opposed to varying with the square of the voltage for an SVC. An SVC is not used in severe undervoltage conditions (less than 0.6 pu), since leaving the capacitors on can worsen the transient overvoltage once the fault is cleared, while a STATCOM can operate down to 0.2–0.3 pu (this limit is due to possible loss of synchronism and cooling). The footprint of a STATCOM is smaller, as it does not need the large capacitors used by an SVC for its TSCs or filters. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "Q=\\frac{V_S*(\\Delta V)}{X}*\\cos(\\delta)" }, { "math_id": 1, "text": "Q" }, { "math_id": 2, "text": "V_S" }, { "math_id": 3, "text": "\\Delta V" }, { "math_id": 4, "text": "V_R" }, { "math_id": 5, "text": "X" }, { "math_id": 6, "text": "\\delta" } ]
https://en.wikipedia.org/wiki?curid=10084899
10086335
Complete theory
In mathematical logic, a theory is complete if it is consistent and for every closed formula in the theory's language, either that formula or its negation is provable. That is, for every sentence formula_0 the theory formula_1 contains the sentence or its negation but not both (that is, either formula_2 or formula_3). Recursively axiomatizable first-order theories that are consistent and rich enough to allow general mathematical reasoning to be formulated cannot be complete, as demonstrated by Gödel's first incompleteness theorem. This sense of "complete" is distinct from the notion of a complete "logic", which asserts that for every theory that can be formulated in the logic, all semantically valid statements are provable theorems (for an appropriate sense of "semantically valid"). Gödel's completeness theorem is about this latter kind of completeness. Complete theories are closed under a number of conditions internally modelling the T-schema: for a set of formulas formula_4, formula_5 if and only if formula_6 and formula_7; formula_8 if and only if formula_6 or formula_7. Maximal consistent sets are a fundamental tool in the model theory of classical logic and modal logic. Their existence in a given case is usually a straightforward consequence of Zorn's lemma, based on the idea that a contradiction involves use of only finitely many premises. In the case of modal logics, the collection of maximal consistent sets extending a theory "T" (closed under the necessitation rule) can be given the structure of a model of "T", called the canonical model. Examples. Some examples of complete theories are:
[ { "math_id": 0, "text": "\\varphi," }, { "math_id": 1, "text": "T" }, { "math_id": 2, "text": "T \\vdash \\varphi" }, { "math_id": 3, "text": "T \\vdash \\neg \\varphi" }, { "math_id": 4, "text": "S" }, { "math_id": 5, "text": "A \\land B \\in S" }, { "math_id": 6, "text": "A \\in S" }, { "math_id": 7, "text": "B \\in S" }, { "math_id": 8, "text": "A \\lor B \\in S" } ]
https://en.wikipedia.org/wiki?curid=10086335
10087500
Impedance parameters
Set of properties used in electrical engineering Impedance parameters or Z-parameters (the elements of an impedance matrix or Z-matrix) are properties used in electrical engineering, electronic engineering, and communication systems engineering to describe the electrical behavior of linear electrical networks. They are also used to describe the small-signal (linearized) response of non-linear networks. They are members of a family of similar parameters used in electronic engineering, other examples being: S-parameters, Y-parameters, H-parameters, T-parameters or ABCD-parameters. Z-parameters are also known as "open-circuit impedance parameters" as they are calculated under open-circuit conditions, i.e., Ix = 0, where x = 1, 2 refer to the input and output currents flowing through the ports (of a two-port network in this case) respectively. The Z-parameter matrix. A Z-parameter matrix describes the behaviour of any linear electrical network that can be regarded as a black box with a number of ports. A "port" in this context is a pair of electrical terminals carrying equal and opposite currents into and out of the network, and having a particular voltage between them. The Z-matrix gives no information about the behaviour of the network when the currents at any port are not balanced in this way (should this be possible), nor does it give any information about the voltage between terminals not belonging to the same port. Typically, it is intended that each external connection to the network is between the terminals of just one port, so that these limitations are appropriate. For a generic multi-port network definition, it is assumed that each of the ports is allocated an integer "n" ranging from 1 to "N", where "N" is the total number of ports. For port "n", the associated Z-parameter definition is in terms of the port current and port voltage, formula_0 and formula_1 respectively.
For all ports the voltages may be defined in terms of the Z-parameter matrix and the currents by the following matrix equation: formula_2 where Z is an "N" × "N" matrix the elements of which can be indexed using conventional matrix notation. In general the elements of the Z-parameter matrix are complex numbers and functions of frequency. For a one-port network, the Z-matrix reduces to a single element, being the ordinary impedance measured between the two terminals. The Z-parameters are also known as the open circuit parameters because they are measured or calculated by applying current to one port and determining the resulting voltages at all the ports while the undriven ports are terminated into open circuits. Two-port networks. The Z-parameter matrix for the two-port network is probably the most common. In this case the relationship between the port currents, port voltages and the Z-parameter matrix is given by: formula_3. where formula_4 formula_5 For the general case of an "N"-port network, formula_6 Impedance relations. The input impedance of a two-port network is given by: formula_7 where ZL is the impedance of the load connected to port two. Similarly, the output impedance is given by: formula_8 where ZS is the impedance of the source connected to port one. Relation to S-parameters. The Z-parameters of a network are related to its S-parameters by formula_9  and formula_10  where formula_11 is the identity matrix, formula_12 is a diagonal matrix having the square root of the characteristic impedance at each port as its non-zero elements, formula_13 and formula_14 is the corresponding diagonal matrix of square roots of characteristic admittances. In these expressions the matrices represented by the bracketed factors commute and so, as shown above, may be written in either order. Two port. 
In the special case of a two-port network, with the same characteristic impedance formula_15 at each port, the above expressions reduce to formula_16 formula_17 formula_18 formula_19 Where formula_20 The two-port S-parameters may be obtained from the equivalent two-port Z-parameters by means of the following expressions formula_21 formula_22 formula_23 formula_24 where formula_25 The above expressions will generally use complex numbers for formula_26 and formula_27. Note that the value of formula_28 can become 0 for specific values of formula_27 so the division by formula_29 in the calculations of formula_30 may lead to a division by 0. Relation to Y-parameters. Conversion from Y-parameters to Z-parameters is much simpler, as the Z-parameter matrix is just the inverse of the Y-parameter matrix. For a two-port: formula_31 formula_32 formula_33 formula_34 where formula_35 is the determinant of the Y-parameter matrix.
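The two-port relations above are easy to check numerically. The sketch below assumes an equal 50 Ω reference impedance at both ports and an arbitrary symmetric example network; it verifies the input-impedance formula, the Z ↔ S round trip, and that the Z-matrix is the inverse of the Y-matrix:

```python
import numpy as np

Z0 = 50.0  # common reference impedance at both ports

def z_in(Z, z_load):
    """Input impedance of a two-port terminated in z_load."""
    return Z[0, 0] - Z[0, 1] * Z[1, 0] / (Z[1, 1] + z_load)

def s_from_z(Z, z0=Z0):
    I = np.eye(Z.shape[0])
    zn = Z / z0                      # equals sqrt(y) Z sqrt(y) for equal z0
    return (zn - I) @ np.linalg.inv(zn + I)

def z_from_s(S, z0=Z0):
    I = np.eye(S.shape[0])
    return z0 * (I + S) @ np.linalg.inv(I - S)

# Example: a symmetric T-network with 10-ohm series arms and a 30-ohm shunt
# gives Z11 = Z22 = 40 ohm and Z12 = Z21 = 30 ohm.
Z = np.array([[40.0, 30.0],
              [30.0, 40.0]])
zin = z_in(Z, z_load=50.0)   # 40 - 900/90 = 30 ohm
S = s_from_z(Z)
Y = np.linalg.inv(Z)         # the Y-matrix is simply the inverse of Z
```

Because the bracketed factors commute, `s_from_z` could equally multiply the inverse on the left; the matrix forms reduce to the scalar two-port expressions given above when expanded.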
[ { "math_id": 0, "text": "I_n\\," }, { "math_id": 1, "text": "V_n\\," }, { "math_id": 2, "text": "V = Z I\\," }, { "math_id": 3, "text": "\\begin{pmatrix} V_1 \\\\ V_2\\end{pmatrix} = \\begin{pmatrix} Z_{11} & Z_{12} \\\\ Z_{21} & Z_{22} \\end{pmatrix}\\begin{pmatrix}I_1 \\\\ I_2\\end{pmatrix} " }, { "math_id": 4, "text": "Z_{11} = {V_1 \\over I_1 } \\bigg|_{I_2 = 0} \\qquad Z_{12} = {V_1 \\over I_2 } \\bigg|_{I_1 = 0}" }, { "math_id": 5, "text": "Z_{21} = {V_2 \\over I_1 } \\bigg|_{I_2 = 0} \\qquad Z_{22} = {V_2 \\over I_2 } \\bigg|_{I_1 = 0}" }, { "math_id": 6, "text": "Z_{nm} = {V_n \\over I_m } \\bigg|_{I_k = 0 \\text{ for } k \\ne m}" }, { "math_id": 7, "text": "Z_\\text{in} = Z_{11} - \\frac{Z_{12}Z_{21}}{Z_{22}+Z_L}" }, { "math_id": 8, "text": "Z_\\text{out} = Z_{22} - \\frac{Z_{12}Z_{21}}{Z_{11}+Z_S}" }, { "math_id": 9, "text": " \\begin{align}\nZ &= \\sqrt{z} (1_{\\!N} + S) (1_{\\!N} - S)^{-1} \\sqrt{z} \\\\\n &= \\sqrt{z} (1_{\\!N} - S)^{-1} (1_{\\!N} + S) \\sqrt{z} \\\\\n\\end{align} " }, { "math_id": 10, "text": " \\begin{align}\nS &= (\\sqrt{y}Z\\sqrt{y} \\,- 1_{\\!N}) (\\sqrt{y}Z\\sqrt{y} \\,+ 1_{\\!N})^{-1} \\\\\n &= (\\sqrt{y}Z\\sqrt{y} \\,+ 1_{\\!N})^{-1} (\\sqrt{y}Z\\sqrt{y} \\,- 1_{\\!N}) \\\\\n\\end{align} " }, { "math_id": 11, "text": "1_{\\!N}" }, { "math_id": 12, "text": "\\sqrt{z}" }, { "math_id": 13, "text": "\\sqrt{z} = \\begin{pmatrix}\n \\sqrt{z_{01}} & \\\\\n & \\sqrt{z_{02}} \\\\\n & & \\ddots \\\\\n & & & \\sqrt{z_{0N}}\n\\end{pmatrix}\n" }, { "math_id": 14, "text": "\\sqrt{y} = (\\sqrt{z})^{-1}" }, { "math_id": 15, "text": "z_{01} = z_{02} = Z_0" }, { "math_id": 16, "text": "Z_{11} = {((1 + S_{11}) (1 - S_{22}) + S_{12} S_{21}) \\over \\Delta_S} Z_0 \\," }, { "math_id": 17, "text": "Z_{12} = {2 S_{12} \\over \\Delta_S} Z_0 \\," }, { "math_id": 18, "text": "Z_{21} = {2 S_{21} \\over \\Delta_S} Z_0 \\," }, { "math_id": 19, "text": "Z_{22} = {((1 - S_{11}) (1 + S_{22}) + S_{12} S_{21}) \\over \\Delta_S} Z_0 \\," }, { "math_id": 20, 
"text": "\\Delta_S = (1 - S_{11}) (1 - S_{22}) - S_{12} S_{21} \\," }, { "math_id": 21, "text": "S_{11} = {(Z_{11} - Z_0) (Z_{22} + Z_0) - Z_{12} Z_{21} \\over \\Delta}" }, { "math_id": 22, "text": "S_{12} = {2 Z_0 Z_{12} \\over \\Delta} \\," }, { "math_id": 23, "text": "S_{21} = {2 Z_0 Z_{21} \\over \\Delta} \\," }, { "math_id": 24, "text": "S_{22} = {(Z_{11} + Z_0) (Z_{22} - Z_0) - Z_{12} Z_{21} \\over \\Delta}" }, { "math_id": 25, "text": "\\Delta = (Z_{11} + Z_0) (Z_{22} + Z_0) - Z_{12} Z_{21} \\, " }, { "math_id": 26, "text": "S_{ij} \\, " }, { "math_id": 27, "text": "Z_{ij} \\, " }, { "math_id": 28, "text": "\\Delta\\, " }, { "math_id": 29, "text": "\\Delta \\, " }, { "math_id": 30, "text": "S_{ij} \\," }, { "math_id": 31, "text": "Z_{11} = {Y_{22} \\over \\Delta_Y} \\," }, { "math_id": 32, "text": "Z_{12} = {-Y_{12} \\over \\Delta_Y} \\," }, { "math_id": 33, "text": "Z_{21} = {-Y_{21} \\over \\Delta_Y} \\," }, { "math_id": 34, "text": "Z_{22} = {Y_{11} \\over \\Delta_Y} \\," }, { "math_id": 35, "text": "\\Delta_Y = Y_{11} Y_{22} - Y_{12} Y_{21} \\," } ]
https://en.wikipedia.org/wiki?curid=10087500
10087606
Symmetry operation
Geometric transformation which produces an identical image In mathematics, a symmetry operation is a geometric transformation of an object that leaves the object looking the same after it has been carried out. For example, a 1⁄3 turn rotation of a regular triangle about its center, a reflection of a square across its diagonal, a translation of the Euclidean plane, or a point reflection of a sphere through its center are all symmetry operations. Each symmetry operation is performed with respect to some symmetry element (a point, line or plane). In the context of molecular symmetry, a symmetry operation is a permutation of atoms such that the molecule or crystal is transformed into a state indistinguishable from the starting state. Two basic facts follow from this definition, which emphasizes its usefulness. In the context of molecular symmetry, quantum wavefunctions need not be invariant, because the operation can multiply them by a phase or mix states within a degenerate representation, without affecting any physical property. Molecules. Identity Operation. The identity operation corresponds to doing nothing to the object. Because every molecule is indistinguishable from itself if nothing is done to it, every object possesses at least the identity operation. The identity operation is denoted by E or I. In the identity operation, no change can be observed for the molecule. Even the most asymmetric molecule possesses the identity operation. The need for such an identity operation arises from the mathematical requirements of group theory. Reflection through mirror planes. The reflection operation is carried out with respect to symmetry elements known as planes of symmetry or mirror planes. Each such plane is denoted as σ (sigma). Its orientation relative to the principal axis of the molecule is indicated by a subscript. The plane must pass through the molecule and cannot be completely outside it.
Through the reflection of each mirror plane, the molecule must be able to produce an identical image of itself. Inversion operation. In an inversion through a centre of symmetry, i (the element), we imagine taking each point in a molecule and then moving it out the same distance on the other side. In summary, the inversion operation projects each atom through the centre of inversion and out to the same distance on the opposite side. The inversion center is a point in space that lies in the geometric center of the molecule. As a result, all the cartesian coordinates of the atoms are inverted (i.e. x,y,z to –x,–y,–z). The symbol used to represent inversion center is i. When the inversion operation is carried out n times, it is denoted by in, where formula_0 when n is even and formula_1 when n is odd. Examples of molecules that have an inversion center include certain molecules with octahedral geometry (general formula ), square planar geometry (general formula ), and ethylene (). Examples of molecules without inversion centers are cyclopentadienide () and molecules with trigonal pyramidal geometry (general formula ). Proper rotation operations or "n"-fold rotation. A "proper rotation" refers to simple rotation about an axis. Such operations are denoted by Cnm (the rotation Cn performed m times), where Cn is a rotation of 2π/n, or 360°/n. The superscript m is omitted if it is equal to one. "C"1 is a rotation through 360°, where "n" = 1. It is equivalent to the Identity (E) operation. "C"2 is a rotation of 180°, as 360°/2 = 180°; "C"3 is a rotation of 120°, as 360°/3 = 120°; and so on. Here the molecule can be rotated into equivalent positions around an axis. An example of a molecule with "C"2 symmetry is the water () molecule. If the molecule is rotated by 180° about an axis passing through the oxygen atom, no detectable difference before and after the "C"2 operation is observed.
Order n of an axis can be regarded as a number of times that, for the least rotation which gives an equivalent configuration, that rotation must be repeated to give a configuration identical to the original structure (i.e. a 360° or 2π rotation). An example of this is the "C"3 proper rotation, which rotates by 120°. "C"3 represents the first rotation around the "C"3 axis by 120°; "C"3² is the rotation by 240°, while "C"3³ is the rotation by 360°. "C"3³ is the identical configuration because it gives the original structure, and it is called an "identity element" (E). Therefore, "C"3 is an order of three, and is often referred to as a "threefold" axis. Improper rotation operations. An improper rotation involves two operation steps: a proper rotation followed by reflection through a plane perpendicular to the rotation axis. The improper rotation is represented by the symbol Sn where n is the order. Since the improper rotation is the combination of a proper rotation and a reflection, Sn will always exist whenever Cn and a perpendicular plane exist separately. "S"1 is usually denoted as σ, a reflection operation about a mirror plane. "S"2 is usually denoted as i, an inversion operation about an inversion center. When n is an even number formula_2 but when n is odd formula_3 Rotation axes, mirror planes and inversion centres are symmetry elements, not symmetry operations. The rotation axis of the highest order is known as the principal rotation axis. It is conventional to set the Cartesian z-axis of the molecule to contain the principal rotation axis. Examples. Dichloromethane, CH2Cl2. There is a "C"2 rotation axis which passes through the carbon atom and the midpoints between the two hydrogen atoms and the two chlorine atoms. Define the z axis as co-linear with the "C"2 axis, the xz plane as containing CH2 and the yz plane as containing CCl2.
A "C"2 rotation operation permutes the two hydrogen atoms and the two chlorine atoms. Reflection in the yz plane permutes the hydrogen atoms while reflection in the xz plane permutes the chlorine atoms. The four symmetry operations E, "C"2, σ("xz") and σ("yz") form the point group "C"2"v". Note that if any two operations are carried out in succession the result is the same as if a single operation of the group had been performed. Methane, CH4. In addition to the proper rotations of order 2 and 3 there are three mutually perpendicular "S"4 axes which pass half-way between the C-H bonds and six mirror planes. Note that formula_4 Crystals. In crystals, screw rotations and/or glide reflections are additionally possible. These are rotations or reflections together with partial translation. These operations may change based on the dimensions of the crystal lattice. The Bravais lattices may be considered as representing translational symmetry operations. Combinations of operations of the crystallographic point groups with the additional symmetry operations produce the 230 crystallographic space groups. See also. Molecular symmetry Crystal structure Crystallographic restriction theorem References. F. A. Cotton "Chemical applications of group theory", Wiley, 1962, 1971
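The closure property noted for "C"2"v" (any two operations performed in succession equal a single operation of the group) can be verified with explicit 3×3 Cartesian matrices; a small sketch, taking the z axis along the "C"2 axis as above:

```python
import numpy as np

E    = np.eye(3)                     # identity
C2   = np.diag([-1.0, -1.0, 1.0])    # 180-degree rotation about z
s_xz = np.diag([1.0, -1.0, 1.0])     # reflection through the xz plane (y -> -y)
s_yz = np.diag([-1.0, 1.0, 1.0])     # reflection through the yz plane (x -> -x)

group = [E, C2, s_xz, s_yz]

def in_group(m):
    return any(np.allclose(m, g) for g in group)

# Closure: every product of two operations is again an operation of the group.
closed = all(in_group(a @ b) for a in group for b in group)

# e.g. reflecting in both mirror planes is the same as the C2 rotation:
both_reflections = s_xz @ s_yz
```

The same matrix representation shows the inversion relation mentioned earlier: the inversion matrix is −E, so applying it twice gives E.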
[ { "math_id": 0, "text": "i^n=E" }, { "math_id": 1, "text": "i^n=-E" }, { "math_id": 2, "text": "S_n^n = E," }, { "math_id": 3, "text": "S_n^{2n} = E." }, { "math_id": 4, "text": "S_4^2 = C_2." } ]
https://en.wikipedia.org/wiki?curid=10087606
10088188
Order of integration
Summary statistic In statistics, the order of integration, denoted "I"("d"), of a time series is a summary statistic, which reports the minimum number of differences required to obtain a covariance-stationary series. Integration of order "d". A time series is integrated of order "d" if formula_0 is a stationary process, where formula_1 is the lag operator and formula_2 is the first difference, i.e. formula_3 In other words, a process is integrated to order "d" if taking repeated differences "d" times yields a stationary process. In particular, if a series is integrated of order 0, then formula_4 is stationary. Constructing an integrated series. An "I"("d") process can be constructed by summing an "I"("d" − 1) process formula_5: the partial sums formula_6 satisfy formula_7 where formula_8
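The construction can be demonstrated with simulated data: summing white noise (an "I"(0) series) gives a random walk ("I"(1)), summing again gives an "I"(2) series, and differencing exactly undoes each summation step. A small sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.standard_normal(10_000)  # I(0): white noise, stationary by construction

x = np.cumsum(eps)                 # I(1): random walk, one difference to stationarity
z = np.cumsum(x)                   # I(2): needs two differences

# Differencing inverts the summation:
d1 = np.diff(z)        # recovers x (offset by one observation)
d2 = np.diff(z, n=2)   # recovers the stationary innovations
```

Each application of `np.diff` implements the first-difference operator (1 − L), so applying it d times to an I(d) series returns a stationary one.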
[ { "math_id": 0, "text": "(1-L)^d X_t \\ " }, { "math_id": 1, "text": "L" }, { "math_id": 2, "text": "1-L " }, { "math_id": 3, "text": "(1-L) X_t = X_t - X_{t-1} = \\Delta X. " }, { "math_id": 4, "text": "(1-L)^0 X_t = X_t " }, { "math_id": 5, "text": "X_t " }, { "math_id": 6, "text": "Z_t = \\sum_{k=0}^t X_k" }, { "math_id": 7, "text": " \\Delta Z_t = X_t," }, { "math_id": 8, "text": "X_t \\sim I(d-1). \\," } ]
https://en.wikipedia.org/wiki?curid=10088188
10088265
Papkovich–Neuber solution
The Papkovich–Neuber solution is a technique for generating analytic solutions to the Newtonian incompressible Stokes equations, though it was originally developed to solve the equations of linear elasticity. It can be shown that any Stokes flow with body force formula_0 can be written in the form: formula_1 formula_2 where formula_3 is a harmonic vector potential and formula_4 is a harmonic scalar potential. The properties and ease of construction of harmonic functions makes the Papkovich–Neuber solution a powerful technique for solving the Stokes Equations in a variety of domains.
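The construction is straightforward to verify symbolically. The sketch below picks an arbitrary harmonic vector potential and harmonic scalar potential (illustrative choices, not special in any way), builds u and p as above, and checks that the incompressible Stokes equations μ∇²u = ∇p and ∇·u = 0 hold:

```python
import sympy as sp

x, y, z, mu = sp.symbols('x y z mu', positive=True)
coords = (x, y, z)
X = sp.Matrix([x, y, z])

def laplacian(f):
    return sum(sp.diff(f, v, 2) for v in coords)

# Arbitrary harmonic potentials (each satisfies Laplace's equation):
Phi = sp.Matrix([x*y, y*z, x**2 - y**2])
chi = x*y*z
assert all(laplacian(c) == 0 for c in list(Phi) + [chi])

# Papkovich-Neuber construction:
phi = X.dot(Phi) + chi
u = (sp.Matrix([sp.diff(phi, v) for v in coords]) - 2*Phi) / (2*mu)
p = sum(sp.diff(Phi[i], coords[i]) for i in range(3))  # div Phi

momentum_ok = all(sp.simplify(mu*laplacian(u[i]) - sp.diff(p, coords[i])) == 0
                  for i in range(3))
incompressible = sp.simplify(sum(sp.diff(u[i], coords[i]) for i in range(3))) == 0
```

The check works for any harmonic Φ and χ: since ∇²(x·Φ) = 2∇·Φ when Φ is harmonic, the Laplacian of the gradient term produces exactly 2∇p, while the divergence of u cancels identically.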
[ { "math_id": 0, "text": "\\mathbf{f}=0" }, { "math_id": 1, "text": "\\mathbf{u} = {1\\over{2 \\mu}} \\left[ \\nabla ( \\mathbf{x} \\cdot \\mathbf{\\Phi} + \\chi) - 2 \\mathbf{\\Phi} \\right]" }, { "math_id": 2, "text": "p = \\nabla \\cdot \\mathbf{\\Phi}" }, { "math_id": 3, "text": "\\mathbf{\\Phi}" }, { "math_id": 4, "text": "\\chi" } ]
https://en.wikipedia.org/wiki?curid=10088265
10090547
Peano existence theorem
Theorem regarding the existence of a solution to a differential equation. In mathematics, specifically in the study of ordinary differential equations, the Peano existence theorem, Peano theorem or Cauchy–Peano theorem, named after Giuseppe Peano and Augustin-Louis Cauchy, is a fundamental theorem which guarantees the existence of solutions to certain initial value problems. History. Peano first published the theorem in 1886 with an incorrect proof. In 1890 he published a new correct proof using successive approximations. Theorem. Let formula_0 be an open subset of formula_1 with formula_2 a continuous function and formula_3 a continuous, explicit first-order differential equation defined on "D". Then every initial value problem formula_4 for "f" with formula_5 has a local solution formula_6 where formula_7 is a neighbourhood of formula_8 in formula_9, such that formula_10 for all formula_11. The solution need not be unique: one and the same initial value formula_12 may give rise to many different solutions formula_13. Proof. By replacing formula_14 with formula_15 and formula_16 with formula_17, we may assume formula_18. As formula_0 is open, there is a rectangle formula_19. Because formula_20 is compact and formula_21 is continuous, we have formula_22, and by the Stone–Weierstrass theorem there exists a sequence of Lipschitz functions formula_23 converging uniformly to formula_21 in formula_20. Without loss of generality, we assume formula_24 for all formula_25. We define Picard iterations formula_26 as follows, where formula_27: formula_28, and formula_29. They are well-defined by induction: as formula_30 formula_31 is within the domain of formula_32. We have formula_33 where formula_34 is the Lipschitz constant of formula_32. Thus for the maximal difference formula_35, we have a bound formula_36, and formula_37 By induction, this implies the bound formula_38 which tends to zero as formula_39 for all formula_40.
The functions formula_41 are equicontinuous, as for formula_42 we have formula_43 so by the Arzelà–Ascoli theorem they are relatively compact. In particular, for each formula_25 there is a subsequence formula_44 converging uniformly to a continuous function formula_45. Taking the limit formula_39 in formula_46 we conclude that formula_47. The functions formula_48 are in the closure of a relatively compact set, so they are themselves relatively compact. Thus there is a subsequence formula_49 converging uniformly to a continuous function formula_50. Taking the limit formula_51 in formula_52 we conclude that formula_53, using the fact that formula_54 are equicontinuous by the Arzelà–Ascoli theorem. By the fundamental theorem of calculus, formula_55 in formula_7. Related theorems. The Peano theorem can be compared with another existence result in the same context, the Picard–Lindelöf theorem. The Picard–Lindelöf theorem both assumes more and concludes more. It requires Lipschitz continuity, while the Peano theorem requires only continuity; but it proves both existence and uniqueness, where the Peano theorem proves only the existence of solutions. To illustrate, consider the ordinary differential equation formula_56 on the domain formula_57. According to the Peano theorem, this equation has solutions, but the Picard–Lindelöf theorem does not apply since the right-hand side is not Lipschitz continuous in any neighbourhood containing 0. Thus we can conclude existence but not uniqueness. It turns out that this ordinary differential equation has two kinds of solutions when starting at formula_58: either formula_59 or formula_60. The transition between formula_61 and formula_62 can happen at any formula_63. The Carathéodory existence theorem is a generalization of the Peano existence theorem with weaker conditions than continuity.
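The non-uniqueness in the example above can be checked symbolically. The right-hand side is taken here to be f(x, y) = √|y| (a standard choice for this illustration; it is an assumption standing in for the equation referenced above). Both the zero solution and a parabolic solution satisfy the equation with y(0) = 0:

```python
import sympy as sp

x = sp.symbols('x', positive=True)  # restrict to x > 0 for the check

f = lambda y: sp.sqrt(sp.Abs(y))    # right-hand side of y' = sqrt(|y|)

y_zero = sp.Integer(0)              # y(x) = 0
y_parab = x**2 / 4                  # y(x) = x^2/4, also with y(0) = 0

ok_zero = sp.simplify(sp.diff(y_zero, x) - f(y_zero)) == 0
ok_parab = sp.simplify(sp.diff(y_parab, x) - f(y_parab)) == 0
```

One can also patch the two branches together, staying at zero up to any switch point and following a shifted parabola afterwards, which is why the solution through the origin is far from unique.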
The Peano existence theorem cannot be straightforwardly extended to a general Hilbert space formula_64: for an open subset formula_0 of formula_65, the continuity of formula_2 alone is insufficient to guarantee the existence of solutions for the associated initial value problem. Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "D" }, { "math_id": 1, "text": "\\mathbb{R}\\times\\mathbb{R}" }, { "math_id": 2, "text": "f\\colon D \\to \\mathbb{R}" }, { "math_id": 3, "text": "y'(x) = f\\left(x,y(x)\\right)" }, { "math_id": 4, "text": "y\\left(x_0\\right) = y_0" }, { "math_id": 5, "text": "(x_0, y_0) \\in D" }, { "math_id": 6, "text": "z\\colon I \\to \\mathbb{R}" }, { "math_id": 7, "text": "I" }, { "math_id": 8, "text": "x_0" }, { "math_id": 9, "text": "\\mathbb{R}" }, { "math_id": 10, "text": " z'(x) = f\\left(x,z(x)\\right) " }, { "math_id": 11, "text": " x \\in I " }, { "math_id": 12, "text": "(x_0,y_0)" }, { "math_id": 13, "text": "z" }, { "math_id": 14, "text": "y" }, { "math_id": 15, "text": "y-y_0" }, { "math_id": 16, "text": "x" }, { "math_id": 17, "text": "x-x_0" }, { "math_id": 18, "text": "x_0=y_0=0" }, { "math_id": 19, "text": "R=[-x_1,x_1]\\times[-y_1,y_1]\\subset D" }, { "math_id": 20, "text": "R" }, { "math_id": 21, "text": "f" }, { "math_id": 22, "text": "\\textstyle\\sup_R|f|\\le C<\\infty" }, { "math_id": 23, "text": "f_k:R\\to\\mathbb{R}" }, { "math_id": 24, "text": "\\textstyle\\sup_R|f_k|\\le2C" }, { "math_id": 25, "text": "k" }, { "math_id": 26, "text": "y_{k,n}:I=[-x_2,x_2]\\to\\mathbb{R}" }, { "math_id": 27, "text": "x_2=\\min\\{x_1,y_1/(2C)\\}" }, { "math_id": 28, "text": "y_{k,0}(x)\\equiv0" }, { "math_id": 29, "text": "\\textstyle y_{k,n+1}(x)=\\int_0^x f_k(x',y_{k,n}(x'))\\,\\mathrm{d}x'" }, { "math_id": 30, "text": "\\begin{aligned}|y_{k,n+1}(x)|&\\le\\textstyle\\left|\\int_0^x|f_k(x',y_{k,n}(x'))|\\,\\mathrm{d}x'\\right|\\\\&\\le \\textstyle |x|\\sup_R|f_k|\\\\&\\le x_2\\cdot2C\\le y_1,\\end{aligned}" }, { "math_id": 31, "text": "(x',y_{k,n+1}(x'))" }, { "math_id": 32, "text": "f_k" }, { "math_id": 33, "text": "\\begin{aligned}|y_{k,n+1}(x)-y_{k,n}(x)|&\\le\\textstyle\\left|\\int_0^x|f_k(x',y_{k,n}(x'))-f_k(x',y_{k,n-1}(x'))|\\,\\mathrm{d}x'\\right|\\\\&\\le \\textstyle 
L_k\\left|\\int_0^x|y_{k,n}(x')-y_{k,n-1}(x')|\\,\\mathrm{d}x'\\right|,\\end{aligned}" }, { "math_id": 34, "text": "L_k" }, { "math_id": 35, "text": "\\textstyle M_{k,n}(x)=\\sup_{x'\\in[0,x]}|y_{k,n+1}(x')-y_{k,n}(x')|" }, { "math_id": 36, "text": "\\textstyle M_{k,n}(x)\\le L_k\\left|\\int_0^x M_{k,n-1}(x')\\,\\mathrm{d}x'\\right|" }, { "math_id": 37, "text": "\\begin{aligned}M_{k,0}(x)&\\le\\textstyle\\left|\\int_0^x|f_k(x',0)|\\,\\mathrm{d}x'\\right|\\\\&\\le |x|\\textstyle\\sup_R|f_k|\\le 2C|x|.\\end{aligned}" }, { "math_id": 38, "text": "M_{k,n}(x)\\le 2CL_k^n|x|^{n+1}/(n+1)!" }, { "math_id": 39, "text": "n\\to\\infty" }, { "math_id": 40, "text": "x\\in I" }, { "math_id": 41, "text": "y_{k,n}" }, { "math_id": 42, "text": "-x_2\\le x<x'\\le x_2" }, { "math_id": 43, "text": "\\begin{aligned}|y_{k,n+1}(x')-y_{k,n+1}(x)|&\\le\\textstyle\\int_x^{x'}|f_k(x'',y_{k,n}(x''))|\\,\\mathrm{d}x''\\\\&\\textstyle\\le|x'-x|\\sup_R|f_k|\\le 2C|x'-x|,\\end{aligned}" }, { "math_id": 44, "text": "(y_{k,\\varphi_k(n)})_{n\\in\\mathbb{N}}" }, { "math_id": 45, "text": "y_k:I\\to\\mathbb{R}" }, { "math_id": 46, "text": "\\begin{aligned}\\textstyle \\left|y_{k,\\varphi_k(n)}(x)-\\int_0^xf_k(x',y_{k,\\varphi_k(n)}(x'))\\,\\mathrm{d}x'\\right|&=|y_{k,\\varphi_k(n)}(x)-y_{k,\\varphi_k(n)+1}(x)|\\\\&\\le M_{k,\\varphi_k(n)}(x_2)\\end{aligned}" }, { "math_id": 47, "text": "\\textstyle y_k(x)=\\int_0^xf_k(x',y_k(x'))\\,\\mathrm{d}x'" }, { "math_id": 48, "text": "y_k" }, { "math_id": 49, "text": "y_{\\psi(k)}" }, { "math_id": 50, "text": "z:I\\to\\mathbb{R}" }, { "math_id": 51, "text": "k\\to\\infty" }, { "math_id": 52, "text": "\\textstyle y_{\\psi(k)}(x)=\\int_0^xf_{\\psi(k)}(x',y_{\\psi(k)}(x'))\\,\\mathrm{d}x'" }, { "math_id": 53, "text": "\\textstyle z(x)=\\int_0^xf(x',z(x'))\\,\\mathrm{d}x'" }, { "math_id": 54, "text": "f_{\\psi(k)}" }, { "math_id": 55, "text": "z'(x)=f(x,z(x))" }, { "math_id": 56, "text": "y' = \\left\\vert y\\right\\vert^{\\frac{1}{2}}" }, { "math_id": 57, "text": 
" \\left[0, 1\\right]." }, { "math_id": 58, "text": "y(0)=0" }, { "math_id": 59, "text": "y(x)=0" }, { "math_id": 60, "text": "y(x)=x^2/4" }, { "math_id": 61, "text": "y=0" }, { "math_id": 62, "text": "y=(x-C)^2/4" }, { "math_id": 63, "text": "C" }, { "math_id": 64, "text": "\\mathcal{H}" }, { "math_id": 65, "text": "\\mathbb{R}\\times \\mathcal{H}" } ]
https://en.wikipedia.org/wiki?curid=10090547
10092186
Nagata ring
In commutative algebra, an N-1 ring is an integral domain formula_0 whose integral closure in its quotient field is a finitely generated formula_0-module. It is called a Japanese ring (or an N-2 ring) if for every finite extension formula_1 of its quotient field formula_2, the integral closure of formula_0 in formula_1 is a finitely generated formula_0-module (or equivalently a finite formula_0-algebra). A ring is called universally Japanese if every finitely generated integral domain over it is Japanese, and is called a Nagata ring, named for Masayoshi Nagata, or a pseudo-geometric ring if it is Noetherian and universally Japanese (or, which turns out to be the same, if it is Noetherian and all of its quotients by a prime ideal are N-2 rings). A ring is called geometric if it is the local ring of an algebraic variety or a completion of such a local ring, but this concept is not used much. Examples. Fields and rings of polynomials or power series in finitely many indeterminates over fields are examples of Japanese rings. Another important example is a Noetherian integrally closed domain (e.g. a Dedekind domain) having a perfect field of fractions. On the other hand, a principal ideal domain or even a discrete valuation ring is not necessarily Japanese. Any quasi-excellent ring is a Nagata ring, so in particular almost all Noetherian rings that occur in algebraic geometry are Nagata rings. The first example of a Noetherian domain that is not a Nagata ring was given by . Here is an example of a discrete valuation ring that is not a Japanese ring. Choose a prime formula_3 and an infinite degree field extension formula_2 of a characteristic formula_3 field formula_4, such that formula_5. Let the discrete valuation ring formula_6 be the ring of formal power series over formula_2 whose coefficients generate a finite extension of formula_4. 
If formula_7 is any formal power series not in formula_6 then the ring formula_8 is not an N-1 ring (its integral closure is not a finitely generated module) so formula_6 is not a Japanese ring. If formula_6 is the subring of the polynomial ring formula_9 in infinitely many generators generated by the squares and cubes of all generators, and formula_10 is obtained from formula_6 by adjoining inverses to all elements not in any of the ideals generated by some formula_11, then formula_10 is a 1-dimensional Noetherian domain that is not an N-1 ring, in other words its integral closure in its quotient field is not a finitely generated formula_10-module. Also formula_10 has a cusp singularity at every closed point, so the set of singular points is not closed. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "L" }, { "math_id": 2, "text": "K" }, { "math_id": 3, "text": "p" }, { "math_id": 4, "text": "k" }, { "math_id": 5, "text": "K^p\\subseteq k" }, { "math_id": 6, "text": "R" }, { "math_id": 7, "text": "y" }, { "math_id": 8, "text": "R[y]" }, { "math_id": 9, "text": "k[x_1, x_2, ...]" }, { "math_id": 10, "text": "S" }, { "math_id": 11, "text": "x_n" } ]
https://en.wikipedia.org/wiki?curid=10092186
10092550
Body force
Force which acts throughout the volume of a body In physics, a body force is a force that acts throughout the volume of a body. Forces due to gravity, electric fields and magnetic fields are examples of body forces. Body forces contrast with "contact forces" or "surface forces", which are exerted on the surface of an object. Fictitious forces such as the centrifugal force, Euler force, and the Coriolis effect are other examples of body forces. Definition. Qualitative. A body force is simply a type of force, and so it has the same dimensions as force, [M][L][T]−2. However, it is often convenient to talk about a body force in terms of either the force per unit volume or the force per unit mass. If the force per unit volume is of interest, it is referred to as the force density throughout the system. A body force is distinct from a contact force in that the force does not require contact for transmission. Thus, common forces associated with pressure gradients and conductive and convective heat transmission are not body forces, as they require contact between systems to exist. Radiation heat transfer, on the other hand, is a perfect example of a body force. Gravitational and electromagnetic forces are the most common examples of body forces. Fictitious forces (or inertial forces), such as the centrifugal, Euler, and Coriolis forces, can also be viewed as body forces. However, fictitious forces are not actually forces. Rather, they are corrections to Newton's second law when it is formulated in an accelerating reference frame. (Gravity can also be considered a fictitious force in the context of General Relativity.) Quantitative. The body force density is defined so that its volume integral (throughout a volume of interest) gives the total force acting throughout the body: formula_0 where d"V" is an infinitesimal volume element, and f is the "external body force density field" acting on the system. Acceleration. Like any other force, a body force will cause an object to accelerate.
For a non-rigid object, Newton's second law applied to a small volume element is formula_1, where "ρ"(r) is the mass density of the substance, ƒ the force density, and a(r) the acceleration, all at point r. The case of gravity. In the case of a body in the gravitational field on a planet's surface, a(r) is nearly constant (g) and uniform. Near the Earth, formula_2. In this case, simply formula_3, where "m" is the mass of the body. References. <templatestyles src="Reflist/styles.css" />
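The reduction F_body = ∫ρ(r) g dV = m g can be checked numerically on a toy body; the box dimensions and the linearly varying density below are arbitrary illustrative choices:

```python
import numpy as np

g = np.array([0.0, 0.0, -9.81])   # uniform gravitational acceleration, m/s^2
lx, ly, lz = 0.2, 0.1, 0.05       # box side lengths, m (arbitrary)

def rho(x, y, z):
    """A non-uniform mass density (kg/m^3), increasing linearly with height."""
    return 1000.0 * (1.0 + z / lz)

# Midpoint-rule volume integral over the box.
n = 40
xs = (np.arange(n) + 0.5) * lx / n
ys = (np.arange(n) + 0.5) * ly / n
zs = (np.arange(n) + 0.5) * lz / n
X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
dV = (lx / n) * (ly / n) * (lz / n)

m = np.sum(rho(X, Y, Z)) * dV     # total mass  m = ∫ rho dV  (exactly 1.5 kg here)
F_body = m * g                    # since g is constant, ∫ rho(r) g dV = m g
```

The midpoint rule is exact for this linear density, so the computed mass matches the analytic value ∫ρ dV = 1.5 kg.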
[ { "math_id": 0, "text": "\\mathbf{F}_{\\mathrm{body}} = \\int\\limits_{V}\\mathbf{f}(\\mathbf{r}) \\mathrm{d} V \\,," }, { "math_id": 1, "text": "\\mathbf{f} (\\mathbf{r})=\\rho (\\mathbf{r})\\mathbf{a} (\\mathbf{r})" }, { "math_id": 2, "text": "g = 9.81 \\frac{\\mathrm m}{\\mathrm s^2}" }, { "math_id": 3, "text": "\\mathbf{F}_{\\mathrm{body}} = \\int\\limits_{V}\\rho (\\mathbf{r})\\mathbf{g}\\mathrm{d} V = \\int\\limits_{V}\\rho (\\mathbf{r})\\mathrm{d} V \\cdot \\mathbf{g} = m \\mathbf{g}" } ]
https://en.wikipedia.org/wiki?curid=10092550
10094198
Hardy–Littlewood maximal function
In mathematics, the Hardy–Littlewood maximal operator "M" is a significant non-linear operator used in real analysis and harmonic analysis. Definition. The operator takes a locally integrable function "f" : R"d" → C and returns another function "Mf". For any point "x" ∈ R"d", the function "Mf" returns the supremum of a set of reals, namely the set of average values of "f" over all the balls "B"("x", "r") of any radius "r" centered at "x". Formally, formula_0 where |"E"| denotes the "d"-dimensional Lebesgue measure of a subset "E" ⊂ R"d". The averages are jointly continuous in "x" and "r", so the maximal function "Mf", being the supremum over "r" > 0, is measurable. It is not obvious that "Mf" is finite almost everywhere. This is a corollary of the Hardy–Littlewood maximal inequality. Hardy–Littlewood maximal inequality. This theorem of G. H. Hardy and J. E. Littlewood states that "M" is bounded as a sublinear operator from "Lp"(R"d") to itself for "p" > 1. That is, for "p" > 1, if "f" ∈ "Lp"(R"d") then "Mf" ∈ "Lp"(R"d"), while for "f" ∈ "L"1(R"d") only the weak-type bound below holds. Before stating the theorem more precisely, for simplicity, let {"f" > "t"} denote the set {"x" | "f"("x") > "t"}. Now we have: Theorem (Weak Type Estimate). For "d" ≥ 1, there is a constant "Cd" > 0 such that for all λ > 0 and "f" ∈ "L"1(R"d"), we have: formula_1 With the Hardy–Littlewood maximal inequality in hand, the following "strong-type" estimate is an immediate consequence of the Marcinkiewicz interpolation theorem: Theorem (Strong Type Estimate). For "d" ≥ 1, 1 < "p" ≤ ∞, and "f" ∈ "Lp"(R"d"), there is a constant "Cp,d" > 0 such that formula_2 In the strong-type estimate, the best bounds for "Cp,d" are unknown. However, Elias M. Stein subsequently used the Calderón–Zygmund method of rotations to prove the following: Theorem (Dimension Independence). For 1 < "p" ≤ ∞ one can pick "Cp,d" = "Cp" independent of "d". Proof.
While there are several proofs of this theorem, a common one is given below: For "p" = ∞, the inequality is trivial (since the average of a function is no larger than its essential supremum). For 1 &lt; "p" &lt; ∞, first we shall use the following version of the Vitali covering lemma to prove the weak-type estimate. (See the article for the proof of the lemma.) Lemma. Let "X" be a separable metric space and formula_3 a family of open balls with bounded diameter. Then formula_3 has a countable subfamily formula_4 consisting of disjoint balls such that formula_5 where 5"B" is "B" with 5 times radius. For every "x" such that "Mf"("x") &gt; "t", by definition, we can find a ball "Bx" centered at "x" such that formula_6 Thus {"Mf" &gt; "t"} is a subset of the union of such balls, as "x" varies in {"Mf" &gt; "t"}. This is trivial since "x" is contained in "Bx". By the lemma, we can find, among such balls, a sequence of disjoint balls "Bj" such that the union of 5"Bj" covers {"Mf" &gt; "t"}. It follows: formula_7 This completes the proof of the weak-type estimate. We next deduce from this the "Lp" bounds. Define "b" by "b"("x") = "f"("x") if |"f"("x")| &gt; "t"/2 and 0 otherwise. By the weak-type estimate applied to "b", we have: formula_8 with "C" = 5"d". Then formula_9 By the estimate above we have: formula_10 where the constant "Cp" depends only on "p" and "d". This completes the proof of the theorem. Note that the constant formula_11 in the proof can be improved to formula_12 by using the inner regularity of the Lebesgue measure, and the finite version of the Vitali covering lemma. See the Discussion section below for more about optimizing the constant. Applications. Some applications of the Hardy–Littlewood Maximal Inequality include proving the following results: Here we use a standard trick involving the maximal function to give a quick proof of Lebesgue differentiation theorem. 
(But remember that in the proof of the maximal theorem, we used the Vitali covering lemma.) Let "f" ∈ "L"1(R"n") and formula_13 where formula_14 We write "f" = "h" + "g" where "h" is continuous and has compact support and "g" ∈ "L"1(R"n") with norm that can be made arbitrary small. Then formula_15 by continuity. Now, Ω"g" ≤ 2"Mg" and so, by the theorem, we have: formula_16 Now, we can let formula_17 and conclude Ω"f" = 0 almost everywhere; that is, formula_18 exists for almost all "x". It remains to show the limit actually equals "f"("x"). But this is easy: it is known that formula_19 (approximation of the identity) and thus there is a subsequence formula_20 almost everywhere. By the uniqueness of limit, "fr" → "f" almost everywhere then. Discussion. It is still unknown what the smallest constants "Cp,d" and "Cd" are in the above inequalities. However, a result of Elias Stein about spherical maximal functions can be used to show that, for 1 &lt; "p" &lt; ∞, we can remove the dependence of "Cp,d" on the dimension, that is, "Cp,d" = "Cp" for some constant "Cp" &gt; 0 only depending on "p". It is unknown whether there is a weak bound that is independent of dimension. There are several common variants of the Hardy-Littlewood maximal operator which replace the averages over centered balls with averages over different families of sets. For instance, one can define the "uncentered" HL maximal operator (using the notation of Stein-Shakarchi) formula_21 where the balls "Bx" are required to merely contain x, rather than be centered at x. There is also the "dyadic" HL maximal operator formula_22 where "Qx" ranges over all dyadic cubes containing the point "x". Both of these operators satisfy the HL maximal inequality.
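A discrete one-dimensional sketch of the centered maximal operator: averages over symmetric windows, truncated at the ends of the grid (a simplification of the ball averages above, not the operator itself). The pointwise bound Mf ≥ |f| and the weak-type inequality with the constant 5^d = 5 from the proof can then be checked on a sample function:

```python
import numpy as np

def maximal_function(f):
    """Discrete centered maximal function on a uniform grid:
    Mf[i] = max over r of the average of |f| on the window [i - r, i + r],
    with windows truncated at the ends of the grid."""
    absf = np.abs(f)
    n = len(f)
    Mf = np.empty(n)
    for i in range(n):
        Mf[i] = max(absf[max(0, i - r): i + r + 1].mean() for r in range(n))
    return Mf

h = 1.0 / 200
x = np.arange(200) * h
f = ((x >= 0.4) & (x < 0.6)).astype(float)       # indicator of an interval

Mf = maximal_function(f)
lam = 0.1
weak_lhs = np.sum(Mf > lam) * h                  # measure of {Mf > lam}
weak_rhs = (5.0 / lam) * np.sum(np.abs(f)) * h   # (5 / lam) * ||f||_1
```

The r = 0 window average is |f[i]| itself, so Mf ≥ |f| holds pointwise by construction.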
[ { "math_id": 0, "text": " Mf(x)=\\sup_{r>0} \\frac{1}{|B(x, r)|}\\int_{B(x, r)} |f(y)|\\, dy " }, { "math_id": 1, "text": "\\left |\\{Mf > \\lambda\\} \\right |< \\frac{C_d}{\\lambda} \\Vert f\\Vert_{L^1 (\\mathbf{R}^d)}." }, { "math_id": 2, "text": " \\Vert Mf\\Vert_{L^p (\\mathbf{R}^d)}\\leq C_{p,d}\\Vert f\\Vert_{L^p(\\mathbf{R}^d)}." }, { "math_id": 3, "text": "\\mathcal{F}" }, { "math_id": 4, "text": "\\mathcal{F}'" }, { "math_id": 5, "text": "\\bigcup_{B \\in \\mathcal{F}} B \\subset \\bigcup_{B \\in \\mathcal{F'}} 5B" }, { "math_id": 6, "text": "\\int_{B_x} |f|dy > t|B_x|." }, { "math_id": 7, "text": "|\\{Mf > t\\}| \\le 5^d \\sum_j |B_j| \\le {5^d \\over t} \\int |f|dy." }, { "math_id": 8, "text": "|\\{Mf > t\\}| \\le {2C \\over t} \\int_{|f| > \\frac{t}{2}} |f|dx, " }, { "math_id": 9, "text": "\\|Mf\\|_p^p = \\int \\int_0^{Mf(x)} pt^{p-1} dt dx = p \\int_0^\\infty t^{p-1} |\\{ Mf > t \\}| dt" }, { "math_id": 10, "text": "\\|Mf\\|_p^p \\leq p \\int_0^\\infty t^{p-1} \\left ({2C \\over t} \\int_{|f| > \\frac{t}{2}} |f|dx \\right ) dt = 2C p \\int_0^\\infty \\int_{|f| > \\frac{t}{2}} t^{p-2} |f| dx dt = C_p \\|f\\|_p^p" }, { "math_id": 11, "text": "C=5^d" }, { "math_id": 12, "text": "3^d" }, { "math_id": 13, "text": "\\Omega f (x) = \\limsup_{r \\to 0} f_r(x) - \\liminf_{r \\to 0} f_r(x)" }, { "math_id": 14, "text": "f_r(x) = \\frac{1}{|B(x, r)|} \\int_{B(x, r)} f(y) dy." 
}, { "math_id": 15, "text": "\\Omega f \\le \\Omega g + \\Omega h = \\Omega g" }, { "math_id": 16, "text": "\\left | \\{ \\Omega g > \\varepsilon \\} \\right | \\le \\frac{2\\,M}{\\varepsilon} \\|g\\|_1" }, { "math_id": 17, "text": "\\|g\\|_1 \\to 0" }, { "math_id": 18, "text": "\\lim_{r \\to 0} f_r(x)" }, { "math_id": 19, "text": "\\|f_r - f\\|_1 \\to 0" }, { "math_id": 20, "text": "f_{r_k} \\to f" }, { "math_id": 21, "text": " f^*(x) = \\sup_{x \\in B_x} \\frac{1}{|B_x|} \\int_{B_x} |f(y)| dy" }, { "math_id": 22, "text": "M_\\Delta f(x) = \\sup_{x \\in Q_x} \\frac{1}{|Q_x|} \\int_{Q_x} |f(y)| dy" } ]
https://en.wikipedia.org/wiki?curid=10094198
10099552
Zakai equation
In filtering theory, the Zakai equation is a linear stochastic partial differential equation for the un-normalized density of a hidden state. In contrast, the Kushner equation gives a non-linear stochastic partial differential equation for the normalized density of the hidden state. In principle, either approach allows one to estimate a quantity (the state of a dynamical system) from noisy measurements, even when the system is non-linear (thus generalizing the earlier results of Wiener and Kalman for linear systems and solving a central problem in estimation theory). The application of this approach to a specific engineering situation may be problematic, however, as these equations are quite complex. The Zakai equation is a bilinear stochastic partial differential equation. It was named after Moshe Zakai. Overview. Assume the state of the system evolves according to formula_0 and a noisy measurement of the system state is available: formula_1 where formula_2 are independent Wiener processes. Then the unnormalized conditional probability density formula_3 of the state at time t is given by the Zakai equation: formula_4 where formula_5 is a Kolmogorov forward operator. As previously mentioned, formula_6 is an unnormalized density and thus does not necessarily integrate to 1. After solving for formula_6, integration and normalization can be done if desired (an extra step not required in the Kushner approach). Note that if the last term on the right-hand side is omitted (by choosing h identically zero), the result is a nonstochastic PDE: the familiar Fokker–Planck equation, which describes the evolution of the state when no measurement information is available. References. <templatestyles src="Reflist/styles.css" />
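A finite-difference sketch of the Zakai recursion for a scalar example. The drift f(x) = −x, observation function h(x) = x, grid, and step sizes are all illustrative choices, and the density is renormalized at each step purely for numerical conditioning (the equation itself evolves an unnormalized density):

```python
import numpy as np

rng = np.random.default_rng(0)

f = lambda x: -x      # state drift in dx = f(x) dt + dw
h = lambda x: x       # observation function in dz = h(x) dt + dv

xs = np.linspace(-4.0, 4.0, 201)   # spatial grid for the density
dxg = xs[1] - xs[0]
dt, steps = 2e-4, 2000

p = np.exp(-xs**2 / 2.0)           # prior density, normalized below
p /= p.sum() * dxg

x_true = 0.5                       # hidden state to be estimated
for _ in range(steps):
    dz = h(x_true) * dt + np.sqrt(dt) * rng.standard_normal()   # measurement
    x_true += f(x_true) * dt + np.sqrt(dt) * rng.standard_normal()

    # dp = L[p] dt + p h dz, with L[p] = -(f p)' + p''/2 (central differences)
    Lp = -np.gradient(f(xs) * p, dxg) + 0.5 * np.gradient(np.gradient(p, dxg), dxg)
    p = p + Lp * dt + p * h(xs) * dz
    p /= p.sum() * dxg             # renormalize (numerical conditioning only)

posterior_mean = (xs * p).sum() * dxg
```

The explicit time step is chosen well inside the diffusion stability limit dt ≤ dx² for this grid.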
[ { "math_id": 0, "text": "dx = f(x,t) dt + dw" }, { "math_id": 1, "text": "dz = h(x,t) dt + dv" }, { "math_id": 2, "text": "w, v" }, { "math_id": 3, "text": "p(x,t)" }, { "math_id": 4, "text": "dp = L[p] dt + p h^T dz" }, { "math_id": 5, "text": "L[p] = -\\sum \\frac{\\partial (f_i p)}{\\partial x_i} + \\frac12 \\sum \\frac{\\partial^2 p}{\\partial x_i \\partial x_j}" }, { "math_id": 6, "text": "p" } ]
https://en.wikipedia.org/wiki?curid=10099552
1010127
Carbon-13
Rare isotope of carbon Carbon-13 (13C) is a natural, stable isotope of carbon with a nucleus containing six protons and seven neutrons. As one of the environmental isotopes, it makes up about 1.1% of all natural carbon on Earth. Detection by mass spectrometry. A mass spectrum of an organic compound will usually contain a small peak one mass unit greater than the apparent molecular ion peak (M) of the whole molecule. This is known as the M+1 peak and comes from the few molecules that contain a 13C atom in place of a 12C. A molecule containing one carbon atom will be expected to have an M+1 peak of approximately 1.1% of the size of the M peak, as 1.1% of the molecules will have a 13C rather than a 12C. Similarly, a molecule containing two carbon atoms will be expected to have an M+1 peak of approximately 2.2% of the size of the M peak, as there is double the previous likelihood that any molecule will contain a 13C atom. In the above, the mathematics and chemistry have been simplified; however, the method can be used effectively to give the number of carbon atoms for small- to medium-sized organic molecules. In the following formula, the result should be rounded to the nearest integer: formula_0 where "C" = number of C atoms, "X" = amplitude of the M ion peak, and "Y" = amplitude of the M+1 ion peak. 13C-enriched compounds are used in the research of metabolic processes by means of mass spectrometry. Such compounds are safe because they are non-radioactive. In addition, 13C is used to quantify proteins (quantitative proteomics). One important application is in stable isotope labeling by amino acids in cell culture (SILAC). 13C-enriched compounds are used in medical diagnostic tests such as the urea breath test. Analysis in these tests is usually of the ratio of 13C to 12C by isotope ratio mass spectrometry. The ratio of 13C to 12C is slightly higher in plants employing C4 carbon fixation than in plants employing C3 carbon fixation.
Because the different isotope ratios for the two kinds of plants propagate through the food chain, it is possible to determine if the principal diet of a human or other animal consists primarily of C3 plants or C4 plants by measuring the isotopic signature of their collagen and other tissues. Uses in science. Due to differential uptake in plants as well as marine carbonates of 13C, it is possible to use these isotopic signatures in earth science. Biological processes preferentially take up the lower mass isotope through kinetic fractionation. In aqueous geochemistry, by analyzing the δ13C value of carbonaceous material found in surface and ground waters, the source of the water can be identified. This is because atmospheric, carbonate, and plant derived δ13C values all differ. In biology, the ratio of carbon-13 and carbon-12 isotopes in plant tissues is different depending on the type of plant photosynthesis and this can be used, for example, to determine which types of plants were consumed by animals. Greater carbon-13 concentrations indicate stomatal limitations, which can provide information on plant behaviour during drought. Tree ring analysis of carbon isotopes can be used to retrospectively understand forest photosynthesis and how it is impacted by drought. In geology, the 13C/12C ratio is used to identify the layer in sedimentary rock created at the time of the Permian extinction 252 Mya when the ratio changed abruptly by 1%. More information about usage of 13C/12C ratio in science can be found in the article about isotopic signatures. Carbon-13 has a non-zero spin quantum number of , and hence allows the structure of carbon-containing substances to be investigated using carbon-13 nuclear magnetic resonance. The carbon-13 urea breath test is a safe and highly accurate diagnostic tool to detect the presence of "Helicobacter pylori" infection in the stomach. 
The urea breath test utilizing carbon-13 is preferred to carbon-14 for certain vulnerable populations due to its non-radioactive nature. Production. Bulk carbon-13 for commercial use, e.g. in chemical synthesis, is enriched from its natural 1% abundance. Although carbon-13 can be separated from the major carbon-12 isotope via techniques such as thermal diffusion, chemical exchange, gas diffusion, and laser and cryogenic distillation, currently only cryogenic distillation of methane or carbon monoxide is an economically feasible industrial production technique. Industrial carbon-13 production plants represent a substantial investment: cryogenic distillation columns greater than 100 meters tall are needed to separate the carbon-12- or carbon-13-containing compounds. The largest reported commercial carbon-13 production plant in the world as of 2014 has a production capability of ~400 kg of carbon-13 annually. In contrast, a 1969 carbon monoxide cryogenic distillation pilot plant at Los Alamos Scientific Laboratories could produce 4 kg of carbon-13 annually. Notes. <templatestyles src="Reflist/styles.css" />
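The M+1 peak arithmetic from the mass-spectrometry section above can be written directly; the benzene-like example values below are illustrative:

```python
def carbon_count(m_peak, m1_peak):
    """Estimate the number of carbon atoms in a molecule from the heights of
    the molecular-ion peak (M) and the isotope peak one unit heavier (M+1),
    using C = 100 * Y / (1.1 * X) rounded to the nearest integer."""
    return round(100.0 * m1_peak / (1.1 * m_peak))

# An M+1 peak about 6.6% as tall as the M peak suggests six carbon atoms.
n_carbons = carbon_count(100.0, 6.6)
```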
[ { "math_id": 0, "text": "C = \\frac{100Y}{1.1X}" } ]
https://en.wikipedia.org/wiki?curid=1010127
1010141
Gilbreath's conjecture
Conjecture in number theory Gilbreath's conjecture is a conjecture in number theory regarding the sequences generated by applying the forward difference operator to consecutive prime numbers and leaving the results unsigned, and then repeating this process on consecutive terms in the resulting sequence, and so forth. The statement is named after Norman L. Gilbreath who, in 1958, presented it to the mathematical community after observing the pattern by chance while doing arithmetic on a napkin. In 1878, eighty years before Gilbreath's discovery, François Proth had, however, published the same observations along with an attempted proof, which was later shown to be incorrect. Motivating arithmetic. Gilbreath observed a pattern while playing with the ordered sequence of prime numbers 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, ... Computing the absolute value of the difference between term "n" + 1 and term "n" in this sequence yields the sequence 1, 2, 2, 4, 2, 4, 2, 4, 6, 2, ... If the same calculation is done for the terms in this new sequence, and the sequence that is the outcome of this process, and again "ad infinitum" for each sequence that is the output of such a calculation, the following five sequences in this list are 1, 0, 2, 2, 2, 2, 2, 2, 4, ... 1, 2, 0, 0, 0, 0, 0, 2, ... 1, 2, 0, 0, 0, 0, 2, ... 1, 2, 0, 0, 0, 2, ... 1, 2, 0, 0, 2, ... What Gilbreath—and François Proth before him—noticed is that the first term in each series of differences appears to be 1. The conjecture. Stating Gilbreath's observation formally is significantly easier to do after devising a notation for the sequences in the previous section. Toward this end, let formula_0 denote the ordered sequence of prime numbers, and define each term in the sequence formula_1 by formula_2 where formula_3 is positive. 
Also, for each integer formula_4 greater than 1, let the terms in formula_5 be given by formula_6 Gilbreath's conjecture states that every term in the sequence formula_7 for positive formula_4 is equal to 1. Verification and attempted proofs. François Proth released what he believed to be a proof of the statement that was later shown to be flawed. Andrew Odlyzko verified that formula_8 is equal to 1 for formula_9 in 1993, but the conjecture remains an open problem. Instead of evaluating "n" rows, Odlyzko evaluated 635 rows and established that the 635th row started with a 1 and continued with only 0s and 2s for the next "n" numbers. This implies that the next "n" rows begin with a 1. Generalizations. In 1980, Martin Gardner published a conjecture by Hallard Croft that stated that the property of Gilbreath's conjecture (having a 1 in the first term of each difference sequence) should hold more generally for every sequence that begins with 2, subsequently contains only odd numbers, and has a sufficiently low bound on the gaps between consecutive elements in the sequence. This conjecture has also been repeated by later authors. However, it is false: for every initial subsequence of 2 and odd numbers, and every non-constant growth rate, there is a continuation of the subsequence by odd numbers whose gaps obey the growth rate but whose difference sequences fail to begin with 1 infinitely often. is more careful, writing of certain heuristic reasons for believing Gilbreath's conjecture that "the arguments above apply to many other sequences in which the first element is a 1, the others even, and where the gaps between consecutive elements are not too large and are sufficiently random." However, he does not give a formal definition of what "sufficiently random" means. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
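The difference triangle is easy to compute; the sketch below checks the leading-1 property for a few hundred rows, far inside the range already verified:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(n + 1) if sieve[i]]

def unsigned_diff_rows(seq, rows):
    """Repeatedly apply the unsigned forward difference operator."""
    out, cur = [], list(seq)
    for _ in range(rows):
        cur = [abs(b - a) for a, b in zip(cur, cur[1:])]
        out.append(cur)
    return out

primes = primes_up_to(10_000)              # the 1229 primes below 10000
rows = unsigned_diff_rows(primes, 200)
leading_terms = [row[0] for row in rows]   # conjecturally all equal to 1
```

The first row reproduces the prime gaps 1, 2, 2, 4, 2, 4, ... from the motivating arithmetic above.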
[ { "math_id": 0, "text": "(p_n)" }, { "math_id": 1, "text": "(d^1_n)" }, { "math_id": 2, "text": "d^1_n = p_{n+1} - p_n," }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "k" }, { "math_id": 5, "text": "(d_n^k)" }, { "math_id": 6, "text": "d_n^k = |d_{n+1}^{k-1} - d_n^{k-1}|." }, { "math_id": 7, "text": "a_k = d_1^k" }, { "math_id": 8, "text": "d_1^k" }, { "math_id": 9, "text": "k \\leq n = 3.4 \\times 10^{11}" } ]
https://en.wikipedia.org/wiki?curid=1010141
10101991
Newton's inequalities
In mathematics, the Newton inequalities are named after Isaac Newton. Suppose "a"1, "a"2, ..., "a""n" are non-negative real numbers and let formula_0 denote the "k"th elementary symmetric polynomial in "a"1, "a"2, ..., "a""n". Then the elementary symmetric means, given by formula_1 satisfy the inequality formula_2 Equality holds if and only if all the numbers "a""i" are equal. It can be seen that "S"1 is the arithmetic mean, and "S""n" is the "n"-th power of the geometric mean.
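A direct check of the inequality chain for a sample list of non-negative reals (the numbers are arbitrary):

```python
from itertools import combinations
from math import comb, prod

def symmetric_means(a):
    """Return S_0, ..., S_n where S_k = e_k(a) / C(n, k)."""
    n = len(a)
    return [sum(prod(c) for c in combinations(a, k)) / comb(n, k)
            for k in range(n + 1)]

a = [1.0, 2.0, 3.0, 5.0]
S = symmetric_means(a)

# Newton's inequalities: S_{k-1} * S_{k+1} <= S_k^2 for 1 <= k <= n - 1.
holds = all(S[k - 1] * S[k + 1] <= S[k] ** 2 + 1e-12 for k in range(1, len(a)))
```

As noted above, S_1 is the arithmetic mean and S_n is the n-th power of the geometric mean, which can also be verified on this example.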
[ { "math_id": 0, "text": "e_k" }, { "math_id": 1, "text": "S_k = \\frac{e_k}{\\binom{n}{k}}," }, { "math_id": 2, "text": "S_{k-1}S_{k+1} \\le S_k^2." } ]
https://en.wikipedia.org/wiki?curid=10101991
10102876
Vitali covering lemma
Combinatorial and geometric result used in measure theory of Euclidean spaces In mathematics, the Vitali covering lemma is a combinatorial and geometric result commonly used in measure theory of Euclidean spaces. This lemma is an intermediate step, of independent interest, in the proof of the Vitali covering theorem. The covering theorem is credited to the Italian mathematician Giuseppe Vitali. The theorem states that it is possible to cover, up to a Lebesgue-negligible set, a given subset "E" of R"d" by a disjoint family extracted from a "Vitali covering" of "E". Vitali covering lemma. There are two basic versions of the lemma, a finite version and an infinite version. Both lemmas can be proved in the general setting of a metric space; typically, these results are applied to the special case of the Euclidean space formula_0. In both theorems we will use the following notation: if formula_1 is a ball and formula_2, we will write formula_3 for the ball formula_4. Finite version. Theorem (Finite Covering Lemma). Let formula_5 be any finite collection of balls contained in an arbitrary metric space. Then there exists a subcollection formula_6 of these balls which are disjoint and satisfy formula_7 Proof: Without loss of generality, we assume that the collection of balls is not empty; that is, "n" > 0. Let formula_8 be the ball of largest radius. Inductively, assume that formula_9 have been chosen. If there is some ball in formula_10 that is disjoint from formula_11, let formula_12 be such a ball with maximal radius (breaking ties arbitrarily); otherwise, we set "m" := "k" and terminate the inductive definition. Now set formula_13. It remains to show that formula_14 for every formula_15. This is clear if formula_16. Otherwise, there necessarily is some formula_17 such that formula_18 intersects formula_19. We choose the minimal possible formula_20 and note that the radius of formula_19 is at least as large as that of formula_18.
The triangle inequality then implies that formula_21, as needed. This completes the proof of the finite version. Infinite version. Theorem (Infinite Covering Lemma). Let formula_22 be an arbitrary collection of balls in a separable metric space such that formula_23 where formula_24 denotes the radius of the ball "B". Then there exists a countable sub-collection formula_25 such that the balls of formula_26 are pairwise disjoint and satisfy formula_27 Moreover, each formula_28 intersects some formula_29 with formula_30. Proof: Consider the partition of F into subcollections F"n", "n" ≥ 0, defined by formula_31 That is, formula_32 consists of the balls "B" whose radius is in (2−"n"−1"R", 2−"n""R"]. A sequence G"n", with G"n" ⊂ F"n", is defined inductively as follows. First, set H0 = F0 and let G0 be a maximal disjoint subcollection of H0 (such a subcollection exists by Zorn's lemma). Assuming that G0, ..., G"n" have been selected, let formula_33 and let G"n"+1 be a maximal disjoint subcollection of H"n"+1. The subcollection formula_34 of F satisfies the requirements of the theorem: G is a disjoint collection, and is thus countable since the given metric space is separable. Moreover, every ball "B" ∈ F intersects a ball "C" ∈ G such that "B" ⊂ 5 "C". Indeed, if we are given some formula_28, there must be some "n" such that "B" belongs to F"n". Either "B" does not belong to H"n", which implies "n" &gt; 0 and means that "B" intersects a ball from the union of G0, ..., G"n"−1, or "B" ∈ H"n" and by maximality of G"n", "B" intersects a ball in G"n". In any case, "B" intersects a ball "C" that belongs to the union of G0, ..., G"n". Such a ball "C" must have a radius larger than 2−"n"−1"R". Since the radius of "B" is less than or equal to 2−"n""R," we can conclude by the triangle inequality that "B" ⊂ 5 "C," as claimed. From this, formula_35 follows immediately, completing the proof. Applications and method of use.
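The greedy selection in the finite-version proof above is effectively an algorithm. A minimal sketch for closed intervals on the real line (centres and radii are invented purely for illustration): balls are scanned in order of decreasing radius, each one is kept if it is disjoint from every ball kept so far, and every input ball then lies inside the 3-fold enlargement of some kept ball, as the lemma asserts.

```python
import random

def vitali_greedy(balls):
    """Greedy selection from the finite-version proof: repeatedly keep the
    largest-radius ball that is disjoint from every ball kept so far."""
    chosen = []
    for x, r in sorted(balls, key=lambda b: -b[1]):  # decreasing radius
        # closed intervals [x-r, x+r] are disjoint iff the centre distance
        # exceeds the sum of the radii
        if all(abs(x - cx) > r + cr for cx, cr in chosen):
            chosen.append((x, r))
    return chosen

def covered_by_3x(ball, chosen):
    """Check that `ball` lies inside the 3-fold enlargement of a kept ball."""
    x, r = ball
    return any(abs(x - cx) + r <= 3 * cr for cx, cr in chosen)

random.seed(0)
balls = [(random.uniform(0, 10), random.uniform(0.1, 2)) for _ in range(50)]
chosen = vitali_greedy(balls)
assert all(covered_by_3x(b, chosen) for b in balls)  # the 3B cover of the lemma
```

The same scan with the 5-fold enlargement, in place of the 3-fold one, is what the measure-theoretic applications of the lemma rely on.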
An application of the Vitali lemma is in proving the Hardy–Littlewood maximal inequality. As in this proof, the Vitali lemma is frequently used when we are, for instance, considering the "d"-dimensional Lebesgue measure, formula_36, of a set "E" ⊂ R"d", which we know is contained in the union of a certain collection of balls formula_37, each of which has a measure we can more easily compute, or has a special property one would like to exploit. Hence, if we compute the measure of this union, we will have an upper bound on the measure of "E". However, it is difficult to compute the measure of the union of all these balls if they overlap. By the Vitali lemma, we may choose a subcollection formula_38 which is disjoint and such that formula_39. Therefore, formula_40 Now, since increasing the radius of a "d"-dimensional ball by a factor of five increases its volume by a factor of 5"d", we know that formula_41 and thus formula_42 Vitali covering theorem. In the covering theorem, the aim is to cover, "up to" a "negligible set", a given set "E" ⊆ R"d" by a disjoint subcollection extracted from a "Vitali covering" for "E" : a Vitali class or Vitali covering formula_43 for "E" is a collection of sets such that, for every "x" ∈ "E" and "δ" &gt; 0, there is a set "U" in the collection formula_44 such that "x" ∈ "U" and the diameter of "U" is non-zero and less than "δ". In the classical setting of Vitali, the negligible set is a "Lebesgue negligible set", but measures other than the Lebesgue measure, and spaces other than R"d" have also been considered, as is shown in the relevant section below. The following observation is useful: if formula_44 is a Vitali covering for "E" and if "E" is contained in an open set Ω ⊆ R"d", then the subcollection of sets "U" in formula_44 that are contained in Ω is also a Vitali covering for "E". Vitali's covering theorem for the Lebesgue measure. The next covering theorem for the Lebesgue measure "λ""d" is due to . 
A collection formula_43 of measurable subsets of R"d" is a "regular family" (in the sense of Lebesgue) if there exists a constant "C" such that formula_45 for every set "V" in the collection formula_44. The family of cubes is an example of a regular family formula_44, as is the family formula_46 of rectangles in R2 such that the ratio of sides stays between "m"−1 and "m", for some fixed "m" ≥ 1. If an arbitrary norm is given on R"d", the family of balls for the metric associated to the norm is another example. By contrast, the family of "all" rectangles in R2 is "not" regular. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem — Let "E" ⊆ R"d" be a measurable set with finite Lebesgue measure, and let formula_44 be a regular family of closed subsets of R"d" that is a Vitali covering for "E". Then there exists a finite or countably infinite disjoint subcollection formula_47 such that formula_48 The original result of is a special case of this theorem, in which "d" = 1 and formula_44 is a collection of intervals that is a Vitali covering for a measurable subset "E" of the real line having finite measure. The theorem above remains true without assuming that "E" has finite measure. This is obtained by applying the covering result in the finite measure case, for every integer "n" ≥ 0, to the portion of "E" contained in the open annulus Ω"n" of points "x" such that "n" &lt; |"x"| &lt; "n"+1. A somewhat related covering theorem is the Besicovitch covering theorem. To each point "a" of a subset "A" ⊆ R"d", a Euclidean ball "B"("a", "ra") with center "a" and positive radius "ra" is assigned. Then, as in the Vitali covering lemma, a subcollection of these balls is selected in order to cover "A" in a specific way.
The main differences between the Besicovitch covering theorem and the Vitali covering lemma are that, on one hand, the disjointness requirement of Vitali is relaxed to the fact that the number "N""x" of the selected balls containing an arbitrary point "x" ∈ R"d" is bounded by a constant "B""d" depending only upon the dimension "d"; on the other hand, the selected balls do cover the set "A" of all the given centers. Vitali's covering theorem for the Hausdorff measure. One may have a similar objective when considering Hausdorff measure instead of Lebesgue measure. The following theorem applies in that case. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem — Let "H""s" denote "s"-dimensional Hausdorff measure, let "E" ⊆ R"d" be an "H""s"-measurable set and formula_44 a Vitali class of closed sets for "E". Then there exists a (finite or countably infinite) disjoint subcollection formula_47 such that either formula_49 or formula_50 Furthermore, if "E" has finite "s"-dimensional Hausdorff measure, then for any "ε" &gt; 0, we may choose this subcollection {"U""j"} such that formula_51 This theorem implies the result of Lebesgue given above. Indeed, when "s" = "d", the Hausdorff measure "H""s" on R"d" coincides with a multiple of the "d"-dimensional Lebesgue measure. If a disjoint collection formula_52 is regular and contained in a measurable region "B" with finite Lebesgue measure, then formula_53 which excludes the second possibility in the first assertion of the previous theorem. It follows that "E" is covered, up to a Lebesgue-negligible set, by the selected disjoint subcollection. From the covering lemma to the covering theorem. The covering lemma can be used as an intermediate step in the proof of the following basic form of the Vitali covering theorem.
&lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem — For every subset E of Rd and every Vitali cover of E by a collection F of closed balls, there exists a disjoint subcollection G which covers E up to a Lebesgue-negligible set. Proof: Without loss of generality, one can assume that all balls in F are nondegenerate and have radius less than or equal to 1. By the infinite form of the covering lemma, there exists a countable disjoint subcollection formula_54 of F such that every ball "B" ∈ F intersects a ball "C" ∈ G for which "B" ⊂ 5 "C". Let "r" &gt; 0 be given, and let "Z" denote the set of points "z" ∈ "E" that are not contained in any ball from G and belong to the "open" ball "B"("r") of radius "r", centered at 0. It is enough to show that "Z" is Lebesgue-negligible for every given "r". Let formula_55 denote the subcollection of those balls in G that meet "B"("r"). Note that formula_56 may be finite or countably infinite. Let "z" ∈ "Z" be fixed. For each "N," "z" does not belong to the closed set formula_57 by the definition of "Z". But by the Vitali cover property, one can find a ball "B" ∈ F containing "z", contained in "B"("r"), and disjoint from "K". By the property of G, the ball "B" intersects some ball formula_58 and is contained in formula_59. But because "K" and "B" are disjoint, we must have "i &gt; N." So formula_60 for some "i &gt; N," and therefore formula_61 This gives, for every "N", the inequality formula_62 But since the balls of formula_56 are contained in "B(r+2)", and these balls are disjoint, we see formula_63 Therefore, the term on the right side of the above inequality converges to 0 as "N" goes to infinity, which shows that "Z" is negligible as needed. Infinite-dimensional spaces. The Vitali covering theorem is not valid in infinite-dimensional settings.
The first result in this direction was given by David Preiss in 1979: there exists a Gaussian measure "γ" on an (infinite-dimensional) separable Hilbert space "H" so that the Vitali covering theorem fails for ("H", Borel("H"), "γ"). This result was strengthened in 2003 by Jaroslav Tišer: the Vitali covering theorem in fact fails for "every" infinite-dimensional Gaussian measure on any (infinite-dimensional) separable Hilbert space. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{R}^d" }, { "math_id": 1, "text": "B = B(x,r)" }, { "math_id": 2, "text": "c \\geq 0 " }, { "math_id": 3, "text": "cB" }, { "math_id": 4, "text": "B(x,cr)" }, { "math_id": 5, "text": " B_{1}, \\dots, B_{n} " }, { "math_id": 6, "text": " B_{j_{1}}, B_{j_{2}}, \\dots, B_{j_{m}} " }, { "math_id": 7, "text": " B_{1}\\cup B_{2}\\cup\\dots \\cup B_{n}\\subseteq 3B_{j_1} \\cup 3B_{j_2} \\cup \\dots \\cup 3B_{j_m}." }, { "math_id": 8, "text": "B_{j_1}" }, { "math_id": 9, "text": "B_{j_1},\\dots,B_{j_k}" }, { "math_id": 10, "text": "B_1,\\dots,B_n" }, { "math_id": 11, "text": "B_{j_1}\\cup B_{j_2}\\cup\\dots\\cup B_{j_k}" }, { "math_id": 12, "text": "B_{j_{k+1}}" }, { "math_id": 13, "text": "X:=\\bigcup_{k=1}^m 3\\,B_{j_k}" }, { "math_id": 14, "text": " B_i\\subset X" }, { "math_id": 15, "text": "i=1,2,\\dots,n" }, { "math_id": 16, "text": "i\\in\\{j_1,\\dots,j_m\\}" }, { "math_id": 17, "text": "k \\in \\{1,\\dots,m\\}" }, { "math_id": 18, "text": "B_i" }, { "math_id": 19, "text": "B_{j_k}" }, { "math_id": 20, "text": " k " }, { "math_id": 21, "text": "B_i\\subset 3\\,B_{j_k}\\subset X" }, { "math_id": 22, "text": " \\mathbf{F}" }, { "math_id": 23, "text": " R := \\sup \\, \\{ \\mathrm{rad}(B) : B \\in \\mathbf{F} \\} <\\infty " }, { "math_id": 24, "text": " \\mathrm{rad}(B) " }, { "math_id": 25, "text": " \\mathbf{G} \\subset \\mathbf{F}" }, { "math_id": 26, "text": " \\mathbf{G}" }, { "math_id": 27, "text": " \\bigcup_{B \\in \\mathbf{F}} B \\subseteq \\bigcup_{C \\in \\mathbf{G}} 5\\,C. " }, { "math_id": 28, "text": "B \\in \\mathbf{F}" }, { "math_id": 29, "text": "C \\in \\mathbf{G}" }, { "math_id": 30, "text": "B \\subset 5C" }, { "math_id": 31, "text": " \\mathbf{F}_n = \\{ B \\in \\mathbf{F} : 2^{-n-1}R < \\text{rad}(B) \\leq 2^{-n}R \\}. 
" }, { "math_id": 32, "text": "\\mathbf{F}_n" }, { "math_id": 33, "text": " \\mathbf{H}_{n+1} = \\{ B \\in \\mathbf{F}_{n+1} : \\ B \\cap C = \\emptyset, \\ \\ \\forall C \\in \\mathbf{G}_0 \\cup \\mathbf{G}_1 \\cup \\dots \\cup \\mathbf{G}_n \\}, " }, { "math_id": 34, "text": "\\mathbf{G} := \\bigcup_{n=0}^\\infty \\mathbf{G}_n" }, { "math_id": 35, "text": " \\bigcup_{B \\in \\mathbf{F}} B \\subseteq \\bigcup_{C \\in \\mathbf{G}} 5\\,C " }, { "math_id": 36, "text": "\\lambda_d" }, { "math_id": 37, "text": " \\{B_{j}:j\\in J\\}" }, { "math_id": 38, "text": " \\left\\{ B_{j} : j\\in J' \\right\\} " }, { "math_id": 39, "text": "\\bigcup_{j\\in J'}5 B_j\\supset \\bigcup_{j\\in J} B_j\\supset E" }, { "math_id": 40, "text": " \\lambda_d(E)\\leq \\lambda_d \\biggl( \\bigcup_{j\\in J}B_{j} \\biggr) \\leq \\lambda_d \\biggl( \\bigcup_{j\\in J'}5B_{j} \\biggr) \\leq \\sum_{j\\in J'} \\lambda_d(5 B_{j})." }, { "math_id": 41, "text": " \\sum_{j\\in J'} \\lambda_d(5B_{j}) = 5^d \\sum_{j\\in J'} \\lambda_d(B_{j})" }, { "math_id": 42, "text": " \\lambda_d(E) \\leq 5^{d} \\sum_{j\\in J'}\\lambda_d(B_{j}). " }, { "math_id": 43, "text": " \\mathcal{V} " }, { "math_id": 44, "text": "\\mathcal{V}" }, { "math_id": 45, "text": "\\operatorname{diam}(V)^d \\le C \\, \\lambda_d(V)" }, { "math_id": 46, "text": "\\mathcal{V}(m)" }, { "math_id": 47, "text": "\\{U_{j}\\}\\subseteq \\mathcal{V}" }, { "math_id": 48, "text": " \\lambda_d \\biggl( E \\setminus \\bigcup_{j}U_{j} \\biggr) = 0." }, { "math_id": 49, "text": " H^{s} \\left( E\\setminus \\bigcup_{j}U_{j} \\right)=0 " }, { "math_id": 50, "text": "\\sum_{j} \\operatorname{diam} (U_{j})^{s}=\\infty." }, { "math_id": 51, "text": " H^{s}(E)\\leq \\sum_{j} \\mathrm{diam} (U_{j})^{s}+\\varepsilon." 
}, { "math_id": 52, "text": "\\{U_{j}\\}" }, { "math_id": 53, "text": "\\sum_j \\operatorname{diam}(U_j)^d \\le C \\sum_j \\lambda_d(U_j) \\le C \\, \\lambda_d(B) < +\\infty" }, { "math_id": 54, "text": "\\mathbf{G}" }, { "math_id": 55, "text": "\\mathbf{G}_r = \\{ C_n\\}_{n}" }, { "math_id": 56, "text": "\\mathbf{G}_r" }, { "math_id": 57, "text": "K = \\bigcup_{n \\leq N} C_n" }, { "math_id": 58, "text": "C_i \\in \\mathbf{G}" }, { "math_id": 59, "text": "5C_i" }, { "math_id": 60, "text": "z \\in 5C_i" }, { "math_id": 61, "text": " Z \\subset \\bigcup_{n > N} 5C_n." }, { "math_id": 62, "text": " \\lambda_d(Z) \\le \\sum_{n > N} \\lambda_d(5C_n) = 5^d \\sum_{n > N} \\lambda_d(C_n). " }, { "math_id": 63, "text": "\\sum_n \\lambda_d(C_n) < \\infty." } ]
https://en.wikipedia.org/wiki?curid=10102876
10103
Electroweak interaction
Unified description of electromagnetism and the weak interaction In particle physics, the electroweak interaction or electroweak force is the unified description of two of the four known fundamental interactions of nature: electromagnetism (electromagnetic interaction) and the weak interaction. Although these two forces appear very different at everyday low energies, the theory models them as two different aspects of the same force. Above the unification energy, on the order of 246 GeV, they would merge into a single force. Thus, if the temperature is high enough – approximately 10^15 K – then the electromagnetic force and weak force merge into a combined electroweak force. During the quark epoch (shortly after the Big Bang), the electroweak force split into the electromagnetic and weak force. It is thought that the required temperature of 10^15 K has not been seen widely throughout the universe since before the quark epoch, and the highest human-made temperature in thermal equilibrium to date, reached at the Large Hadron Collider, remains far below it. Sheldon Glashow, Abdus Salam, and Steven Weinberg were awarded the 1979 Nobel Prize in Physics for their contributions to the unification of the weak and electromagnetic interaction between elementary particles, known as the Weinberg–Salam theory. The existence of the electroweak interactions was experimentally established in two stages, the first being the discovery of neutral currents in neutrino scattering by the Gargamelle collaboration in 1973, and the second in 1983 by the UA1 and the UA2 collaborations that involved the discovery of the W and Z gauge bosons in proton–antiproton collisions at the converted Super Proton Synchrotron. In 1999, Gerardus 't Hooft and Martinus Veltman were awarded the Nobel Prize for showing that the electroweak theory is renormalizable. History.
After the Wu experiment in 1956 discovered parity violation in the weak interaction, a search began for a way to relate the weak and electromagnetic interactions. Extending his doctoral advisor Julian Schwinger's work, Sheldon Glashow first experimented with introducing two different symmetries, one chiral and one achiral, and combined them such that their overall symmetry was unbroken. This did not yield a renormalizable theory, and its gauge symmetry had to be broken by hand as no spontaneous mechanism was known, but it predicted a new particle, the Z boson. This received little notice, as it matched no experimental finding. In 1964, Salam and John Clive Ward had the same idea, but predicted a massless photon and three massive gauge bosons with a manually broken symmetry. Later around 1967, while investigating spontaneous symmetry breaking, Weinberg found a set of symmetries predicting a massless, neutral gauge boson. Initially rejecting such a particle as useless, he later realized his symmetries produced the electroweak force, and he proceeded to predict rough masses for the W and Z bosons. Significantly, he suggested this new theory was renormalizable. In 1971, Gerard 't Hooft proved that spontaneously broken gauge symmetries are renormalizable even with massive gauge bosons. Formulation. Mathematically, electromagnetism is unified with the weak interactions as a Yang–Mills field with an SU(2) × U(1) gauge group, which describes the formal operations that can be applied to the electroweak gauge fields without changing the dynamics of the system. These fields are the weak isospin fields W1, W2, and W3, and the weak hypercharge field B. This invariance is known as electroweak symmetry. The generators of SU(2) and U(1) are given the name weak isospin (labeled T) and weak hypercharge (labeled Y) respectively. 
These then give rise to the gauge bosons that mediate the electroweak interactions – the three W bosons of weak isospin ("W"1, "W"2, and "W"3), and the "B" boson of weak hypercharge, respectively, all of which are "initially" massless. These are not physical fields yet, before spontaneous symmetry breaking and the associated Higgs mechanism. In the Standard Model, the observed physical particles, the W and Z bosons, and the photon, are produced through the spontaneous symmetry breaking of the electroweak symmetry SU(2) × U(1) to U(1)em, effected by the Higgs mechanism (see also Higgs boson), an elaborate quantum-field-theoretic phenomenon that "spontaneously" alters the realization of the symmetry and rearranges degrees of freedom. The electric charge arises as the particular (nontrivial) linear combination of Y (weak hypercharge) and the T3 component of weak isospin (formula_0) that does "not" couple to the Higgs boson. That is to say: the Higgs and the electromagnetic field have no effect on each other, at the level of the fundamental forces ("tree level"), while any "other" combination of the hypercharge and the weak isospin must interact with the Higgs. This causes an apparent separation between the weak force, which interacts with the Higgs, and electromagnetism, which does not. Mathematically, the electric charge is a specific combination of the hypercharge and T3 outlined in the figure. U(1)em (the symmetry group of electromagnetism only) is defined to be the group generated by this special linear combination, and the symmetry described by the U(1)em group is unbroken, since it does not "directly" interact with the Higgs. The above spontaneous symmetry breaking makes the W3 and B bosons coalesce into two different physical bosons with different masses – the Z boson and the photon (γ), formula_1 where θ is the "weak mixing angle". The axes representing the particles have essentially just been rotated, in the (W3, B) plane, by the angle θ.
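This mixing is easy to check numerically. The sketch below uses rough illustrative inputs (sin²θW ≈ 0.231 and a W mass of about 80.4 GeV are approximations chosen for this example, not authoritative figures) to rotate (B, W3) into the physical (photon, Z) pair and to evaluate the tree-level mass relation of the next paragraph:

```python
import math

# Illustrative inputs only: rough values, not precision figures.
sin2_theta_w = 0.231   # sin^2 of the weak mixing angle
m_w = 80.4             # W boson mass, GeV

theta_w = math.asin(math.sqrt(sin2_theta_w))

def to_mass_basis(b, w3):
    """Rotate the (B, W3) fields into the physical (photon, Z) pair,
    following the 2x2 mixing matrix above."""
    photon = math.cos(theta_w) * b + math.sin(theta_w) * w3
    z = -math.sin(theta_w) * b + math.cos(theta_w) * w3
    return photon, z

# Tree-level mass relation: m_Z = m_W / cos(theta_W)
m_z = m_w / math.cos(theta_w)
print(f"theta_W = {math.degrees(theta_w):.1f} deg, m_Z = {m_z:.1f} GeV")
```

With these inputs the tree-level relation gives roughly 91.7 GeV for the Z mass; the small gap to the measured value reflects radiative corrections that the tree-level formula omits.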
This also introduces a mismatch between the mass of the Z and the mass of the W particles (denoted as mZ and mW, respectively), formula_2 The W1 and W2 bosons, in turn, combine to produce the charged massive bosons W±: formula_3 Lagrangian. Before electroweak symmetry breaking. The Lagrangian for the electroweak interactions is divided into four parts before electroweak symmetry breaking becomes manifest, formula_4 The formula_5 term describes the interaction between the three W vector bosons and the B vector boson, formula_6 where formula_7 (formula_8) and formula_9 are the field strength tensors for the weak isospin and weak hypercharge gauge fields. formula_10 is the kinetic term for the Standard Model fermions. The interaction of the gauge bosons and the fermions is through the gauge covariant derivative, formula_11 where the subscript j sums over the three generations of fermions; Q, u, and d are the left-handed doublet, right-handed singlet up, and right-handed singlet down quark fields; and L and e are the left-handed doublet and right-handed singlet electron fields. The Feynman slash formula_12 means the contraction of the 4-gradient with the Dirac matrices, defined as formula_13 and the covariant derivative (excluding the gluon gauge field for the strong interaction) is defined as formula_14 Here formula_15 is the weak hypercharge and the formula_16 are the components of the weak isospin. The formula_17 term describes the Higgs field formula_18 and its interactions with itself and the gauge bosons, formula_19 where formula_20 is the vacuum expectation value. The formula_21 term describes the Yukawa interaction with the fermions, formula_22 and generates their masses, manifest when the Higgs field acquires a nonzero vacuum expectation value, discussed next. The formula_23 for formula_24 are matrices of Yukawa couplings. After electroweak symmetry breaking.
The Lagrangian reorganizes itself as the Higgs field acquires a non-vanishing vacuum expectation value dictated by the potential of the previous section. As a result of this rewriting, the symmetry breaking becomes manifest. In the history of the universe, this is believed to have happened shortly after the hot big bang, when the universe cooled below a critical temperature (assuming the Standard Model of particle physics). Due to its complexity, this Lagrangian is best described by breaking it up into several parts as follows. formula_25 The kinetic term formula_26 contains all the quadratic terms of the Lagrangian, which include the dynamic terms (the partial derivatives) and the mass terms (conspicuously absent from the Lagrangian before symmetry breaking) formula_27 where the sum runs over all the fermions of the theory (quarks and leptons), and the fields formula_28 formula_29 formula_30 and formula_31 are given as formula_32 with formula_33 to be replaced by the relevant field (formula_34 formula_35 formula_36) and f^abc by the structure constants of the appropriate gauge group. The neutral current formula_37 and charged current formula_38 components of the Lagrangian contain the interactions between the fermions and gauge bosons, formula_39 where formula_40 The electromagnetic current formula_41 is formula_42 where formula_43 denotes the fermions' electric charges. The neutral weak current formula_44 is formula_45 where formula_46 is the fermions' weak isospin. The charged current part of the Lagrangian is given by formula_47 where formula_48 is the right-handed singlet neutrino field, and the CKM matrix formula_49 determines the mixing between mass and weak eigenstates of the quarks.
formula_50 contains the Higgs three-point and four-point self interaction terms, formula_51 formula_52 contains the Higgs interactions with gauge vector bosons, formula_53 formula_54 contains the gauge three-point self interactions, formula_55 formula_56 contains the gauge four-point self interactions, formula_57 formula_58 contains the Yukawa interactions between the fermions and the Higgs field, formula_59 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "Q = T_3 + \\tfrac{1}{2}\\,Y_\\mathrm{W}" }, { "math_id": 1, "text": " \\begin{pmatrix}\n\\gamma \\\\\nZ^0 \\end{pmatrix} = \\begin{pmatrix}\n\\cos \\theta_\\text{W} & \\sin \\theta_\\text{W} \\\\\n-\\sin \\theta_\\text{W} & \\cos \\theta_\\text{W} \\end{pmatrix} \\begin{pmatrix}\nB \\\\\nW_3 \\end{pmatrix} ," }, { "math_id": 2, "text": "m_\\text{Z} = \\frac{m_\\text{W}}{\\,\\cos\\theta_\\text{W}\\,} ~." }, { "math_id": 3, "text": "W^{\\pm} = \\frac{1}{\\sqrt{2\\,}}\\,\\bigl(\\,W_1 \\mp i W_2\\,\\bigr) ~." }, { "math_id": 4, "text": "\\mathcal{L}_{\\mathrm{EW}} = \\mathcal{L}_g + \\mathcal{L}_f + \\mathcal{L}_h + \\mathcal{L}_y~." }, { "math_id": 5, "text": "\\mathcal{L}_g" }, { "math_id": 6, "text": "\\mathcal{L}_g = -\\tfrac{1}{4} W_{a}^{\\mu\\nu}W_{\\mu\\nu}^a - \\tfrac{1}{4} B^{\\mu\\nu}B_{\\mu\\nu}," }, { "math_id": 7, "text": "W_{a}^{\\mu\\nu}" }, { "math_id": 8, "text": "a=1,2,3" }, { "math_id": 9, "text": "B^{\\mu\\nu}" }, { "math_id": 10, "text": "\\mathcal{L}_f" }, { "math_id": 11, "text": "\\mathcal{L}_f = \\overline{Q}_j iD\\!\\!\\!\\!/\\; Q_j+ \\overline{u}_j iD\\!\\!\\!\\!/\\; u_j+ \\overline{d}_j iD\\!\\!\\!\\!/\\; d_j + \\overline{L}_j iD\\!\\!\\!\\!/\\; L_j + \\overline{e}_j iD\\!\\!\\!\\!/\\; e_j," }, { "math_id": 12, "text": "D\\!\\!\\!\\!/" }, { "math_id": 13, "text": "D\\!\\!\\!\\!/ \\equiv \\gamma^\\mu\\ D_\\mu," }, { "math_id": 14, "text": "\\ D_\\mu \\equiv \\partial_\\mu - i\\ \\frac{g'}{2}\\ Y\\ B_\\mu - i\\ \\frac{g}{2}\\ T_j\\ W_\\mu^j." 
}, { "math_id": 15, "text": "\\ Y\\ " }, { "math_id": 16, "text": "\\ T_j\\ " }, { "math_id": 17, "text": "\\mathcal{L}_h" }, { "math_id": 18, "text": "h" }, { "math_id": 19, "text": "\\mathcal{L}_h = |D_\\mu h|^2 - \\lambda \\left(|h|^2 - \\frac{v^2}{2}\\right)^2\\ ," }, { "math_id": 20, "text": "v" }, { "math_id": 21, "text": "\\ \\mathcal{L}_y\\ " }, { "math_id": 22, "text": "\\mathcal{L}_y = - y_{u\\ ij}\\epsilon^{ab}\\ h_b^\\dagger\\ \\overline{Q}_{ia} u_j^c - y_{d\\ ij}\\ h\\ \\overline{Q}_i d^c_j - y_{e\\,ij}\\ h\\ \\overline{L}_i e^c_j + \\mathrm{h.c.} ~," }, { "math_id": 23, "text": "\\ y_k^{ij}\\ ," }, { "math_id": 24, "text": "\\ k \\in \\{ \\mathrm{u, d, e} \\}\\ ," }, { "math_id": 25, "text": "\\mathcal{L}_{\\mathrm{EW}} = \\mathcal{L}_\\mathrm{K} + \\mathcal{L}_\\mathrm{N} + \\mathcal{L}_\\mathrm{C} + \\mathcal{L}_\\mathrm{H} + \\mathcal{L}_{\\mathrm{HV}} + \\mathcal{L}_{\\mathrm{WWV}} + \\mathcal{L}_{\\mathrm{WWVV}} + \\mathcal{L}_\\mathrm{Y} ~." }, { "math_id": 26, "text": "\\mathcal{L}_K" }, { "math_id": 27, "text": " \n\\begin{align}\n\\mathcal{L}_\\mathrm{K} = \\sum_f \\overline{f}(i\\partial\\!\\!\\!/\\!\\;-m_f)\\ f - \\frac{1}{4}\\ A_{\\mu\\nu}\\ A^{\\mu\\nu} - \\frac{1}{2}\\ W^+_{\\mu\\nu}\\ W^{-\\mu\\nu} + m_W^2\\ W^+_\\mu\\ W^{-\\mu} \n\\\\\n\\qquad -\\frac{1}{4}\\ Z_{\\mu\\nu}Z^{\\mu\\nu} + \\frac{1}{2}\\ m_Z^2\\ Z_\\mu\\ Z^\\mu + \\frac{1}{2}\\ (\\partial^\\mu\\ H)(\\partial_\\mu\\ H) - \\frac{1}{2}\\ m_H^2\\ H^2 ~,\n\\end{align}\n" }, { "math_id": 28, "text": "\\ A_{\\mu\\nu}\\ ," }, { "math_id": 29, "text": "\\ Z_{\\mu\\nu}\\ ," }, { "math_id": 30, "text": "\\ W^-_{\\mu\\nu}\\ ," }, { "math_id": 31, "text": "\\ W^+_{\\mu\\nu} \\equiv (W^-_{\\mu\\nu})^\\dagger\\ " }, { "math_id": 32, "text": "X^{a}_{\\mu\\nu} = \\partial_\\mu X^{a}_\\nu - \\partial_\\nu X^{a}_\\mu + g f^{abc}X^{b}_{\\mu}X^{c}_{\\nu} ~," }, { "math_id": 33, "text": "X" }, { "math_id": 34, "text": "A," }, { "math_id": 35, "text": "Z," }, { "math_id": 36, "text": "W^\\pm" }, 
{ "math_id": 37, "text": "\\ \\mathcal{L}_\\mathrm{N}\\ " }, { "math_id": 38, "text": "\\ \\mathcal{L}_\\mathrm{C}\\ " }, { "math_id": 39, "text": "\\mathcal{L}_\\mathrm{N} = e\\ J_\\mu^\\mathrm{em}\\ A^\\mu + \\frac{g}{\\ \\cos\\theta_W\\ }\\ (\\ J_\\mu^3 - \\sin^2\\theta_W\\ J_\\mu^\\mathrm{em}\\ )\\ Z^\\mu ~," }, { "math_id": 40, "text": "~e = g\\ \\sin \\theta_\\mathrm{W} = g'\\ \\cos \\theta_\\mathrm{W} ~." }, { "math_id": 41, "text": "\\; J_\\mu^{\\mathrm{em}} \\;" }, { "math_id": 42, "text": "J_\\mu^\\mathrm{em} = \\sum_f \\ q_f\\ \\overline{f}\\ \\gamma_\\mu\\ f ~," }, { "math_id": 43, "text": "\\ q_f\\ " }, { "math_id": 44, "text": "\\ J_\\mu^3\\ " }, { "math_id": 45, "text": "J_\\mu^3 = \\sum_f\\ T^3_f\\ \\overline{f}\\ \\gamma_\\mu\\ \\frac{\\ 1-\\gamma^5\\ }{2}\\ f ~," }, { "math_id": 46, "text": "T^3_f" }, { "math_id": 47, "text": "\\mathcal{L}_\\mathrm{C} = -\\frac{g}{\\ \\sqrt{2 \\;}\\ }\\ \\left[\\ \\overline{u}_i\\ \\gamma^\\mu\\ \\frac{\\ 1 - \\gamma^5\\ }{2} \\; M^{\\mathrm{CKM}}_{ij}\\ d_j + \\overline{\\nu}_i\\ \\gamma^\\mu\\;\\frac{\\ 1-\\gamma^5\\ }{2} \\; e_i\\ \\right]\\ W_\\mu^{+} + \\mathrm{h.c.} ~," }, { "math_id": 48, "text": "\\ \\nu\\ " }, { "math_id": 49, "text": "M_{ij}^\\mathrm{CKM}" }, { "math_id": 50, "text": "\\mathcal{L}_\\mathrm{H}" }, { "math_id": 51, "text": "\\mathcal{L}_\\mathrm{H} = -\\frac{\\ g\\ m_\\mathrm{H}^2\\,}{\\ 4\\ m_\\mathrm{W}\\ }\\;H^3 - \\frac{\\ g^2\\ m_\\mathrm{H}^2\\ }{32\\ m_\\mathrm{W}^2}\\;H^4 ~." }, { "math_id": 52, "text": "\\mathcal{L}_{\\mathrm{HV}}" }, { "math_id": 53, "text": "\\mathcal{L}_\\mathrm{HV} =\\left(\\ g\\ m_\\mathrm{HV} + \\frac{\\ g^2\\ }{4}\\;H^2\\ \\right)\\left(\\ W^{+}_\\mu\\ W^{-\\mu} + \\frac{1}{\\ 2\\ \\cos^2\\ \\theta_\\mathrm{W}\\ }\\;Z_\\mu\\ Z^\\mu\\ \\right) ~." 
}, { "math_id": 54, "text": "\\mathcal{L}_{\\mathrm{WWV}}" }, { "math_id": 55, "text": "\\mathcal{L}_{\\mathrm{WWV}} = -i\\ g\\ \\left[\\; \\left(\\ W_{\\mu\\nu}^{+}\\ W^{-\\mu} - W^{+\\mu}\\ W^{-}_{\\mu\\nu}\\ \\right)\\left(\\ A^\\nu\\ \\sin \\theta_\\mathrm{W} - Z^\\nu\\ \\cos\\theta_\\mathrm{W}\\ \\right) + W^{-}_\\nu\\ W^{+}_\\mu\\ \\left(\\ A^{\\mu\\nu}\\ \\sin \\theta_\\mathrm{W} - Z^{\\mu\\nu}\\ \\cos \\theta_\\mathrm{W}\\ \\right) \\;\\right] ~." }, { "math_id": 56, "text": "\\mathcal{L}_{\\mathrm{WWVV}}" }, { "math_id": 57, "text": "\n\\begin{align}\n\\mathcal{L}_{\\mathrm{WWVV}} = -\\frac{\\ g^2\\ }{4}\\ \\Biggl\\{\\ &\\Bigl[\\ 2\\ W^{+}_\\mu\\ W^{-\\mu} + (\\ A_\\mu\\ \\sin \\theta_\\mathrm{W} - Z_\\mu\\ \\cos \\theta_\\mathrm{W} \\ )^2\\ \\Bigr]^2\n\\\\\n&- \\Bigl[\\ W_\\mu^{+}\\ W_\\nu^{-} + W^{+}_\\nu\\ W^{-}_\\mu + \\left(\\ A_\\mu\\ \\sin \\theta_\\mathrm{W} - Z_\\mu\\ \\cos \\theta_\\mathrm{W}\\ \\right)\\left(\\ A_\\nu\\ \\sin \\theta_\\mathrm{W} - Z_\\nu\\ \\cos \\theta_\\mathrm{W}\\ \\right)\\ \\Bigr]^2\\,\\Biggr\\} ~.\n\\end{align}\n" }, { "math_id": 58, "text": "\\ \\mathcal{L}_\\mathrm{Y}\\ " }, { "math_id": 59, "text": "\\mathcal{L}_\\mathrm{Y} = -\\sum_f\\ \\frac{\\ g\\ m_f\\ }{2\\ m_\\mathrm{W}} \\; \\overline{f}\\ f\\ H ~." } ]
https://en.wikipedia.org/wiki?curid=10103
10103794
Slenderness ratio
Ratio of width and height in architecture In architecture, the slenderness ratio, or simply slenderness, is an aspect ratio, the quotient between the height and the width of a building. In structural engineering, slenderness is used to calculate the propensity of a column to buckle. It is defined as formula_0 where formula_1 is the effective length of the column and formula_2 is the least radius of gyration, the latter defined by formula_3 where formula_4 is the area of the cross-section of the column and formula_5 is the second moment of area of the cross-section. The effective length is calculated from the actual length of the member considering the rotational and relative translational boundary conditions at the ends. Slenderness captures the influence on buckling of all the geometric aspects of the column, namely its length, area, and second moment of area. The influence of the material is represented separately by the material's modulus of elasticity formula_6. Structural engineers generally consider a skyscraper as slender if the height:width ratio exceeds 10:1 or 12:1. Slender towers require specific measures to counter the high wind loads on the vertical cantilever, such as additional structures that give the building greater rigidity, or various types of tuned mass dampers to prevent unwanted swaying. Tall buildings with a high slenderness ratio are sometimes referred to as pencil towers. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
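The structural definitions above can be illustrated with a short numerical sketch for a pin-ended rectangular column (the dimensions below are invented purely for illustration). Buckling is governed by the least radius of gyration, i.e. the weaker bending axis:

```python
import math

# Pin-ended rectangular column; dimensions invented for illustration.
b, h = 0.30, 0.45   # cross-section width and depth, metres
L_eff = 6.0         # effective length, metres (pinned-pinned ends: K = 1)

A = b * h                                  # cross-sectional area
I_strong = b * h**3 / 12.0                 # second moment of area, strong axis
I_weak = h * b**3 / 12.0                   # second moment of area, weak axis
k = math.sqrt(min(I_strong, I_weak) / A)   # least radius of gyration, k^2 = I/A
slenderness = L_eff / k
print(f"least k = {k * 1000:.1f} mm, slenderness l/k = {slenderness:.1f}")
```

For a rectangle the least radius of gyration reduces to the smaller side divided by √12, so here k ≈ 86.6 mm and the slenderness is about 69.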
[ { "math_id": 0, "text": "l/k" }, { "math_id": 1, "text": "l" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "k^2=I/A" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "I" }, { "math_id": 6, "text": "E" } ]
https://en.wikipedia.org/wiki?curid=10103794
10104622
Kuratowski's closure-complement problem
In point-set topology, Kuratowski's closure-complement problem asks for the largest number of distinct sets obtainable by repeatedly applying the set operations of closure and complement to a given starting subset of a topological space. The answer is 14. This result was first published by Kazimierz Kuratowski in 1922. It gained additional exposure in Kuratowski's fundamental monograph "Topologie" (first published in French in 1933; the first English translation appeared in 1966) before achieving fame as a textbook exercise in John L. Kelley's 1955 classic, "General Topology". Proof. Letting formula_0 denote an arbitrary subset of a topological space, write formula_1 for the closure of formula_0, and formula_2 for the complement of formula_0. The following three identities imply that no more than 14 distinct sets are obtainable: formula_3, formula_4, and formula_5 (whence also formula_6). The first two are trivial. The third follows from the identity formula_7 where formula_8 is the interior of formula_0, which is equal to the complement of the closure of the complement of formula_0, formula_9. (The operation formula_10 is idempotent.) A subset realizing the maximum of 14 is called a 14-set. The space of real numbers under the usual topology contains 14-sets. Here is one example: formula_11 where formula_12 denotes an open interval and formula_13 denotes a closed interval. Let formula_14 denote this set. Then the following 14 sets are accessible: formula_14 itself, formula_15, formula_16, formula_17, formula_18, formula_19, formula_20, formula_21, formula_22, formula_23, formula_24, formula_25, formula_26, and formula_27. Further results. Despite its origin within the context of a topological space, Kuratowski's closure-complement problem is actually more algebraic than topological. A surprising abundance of closely related problems and results has appeared since 1960, many of which have little or nothing to do with point-set topology. The closure-complement operations yield a monoid that can be used to classify topological spaces. References. <templatestyles src="Reflist/styles.css" />
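The algebraic side of the argument can be checked mechanically. Writing compositions of closure and complement as words over the letters k and c, the three identities become string-rewriting rules (the rule kckckck → kck follows from the third identity by cancelling a trailing complement, since complement is an involution). Reducing every word up to a modest length to a normal form shows the monoid has exactly 14 elements, matching the bound in the proof. A minimal sketch:

```python
from itertools import product

def reduce_word(w):
    # Apply the Kuratowski identities until no rule fires:
    #   kk -> k        (closure is idempotent)
    #   cc -> (empty)  (complement is an involution)
    #   kckckck -> kck (from kckckckc = kckc, cancelling the trailing c)
    # Each rule shortens the word, so the loop terminates.
    rules = [("kk", "k"), ("cc", ""), ("kckckck", "kck")]
    changed = True
    while changed:
        changed = False
        for old, new in rules:
            if old in w:
                w = w.replace(old, new, 1)
                changed = True
    return w

# Enumerate all operator words of length <= 10 and collect their normal forms.
normal_forms = {reduce_word("".join(p)) for n in range(11) for p in product("kc", repeat=n)}
print(len(normal_forms))  # 14
```

The 14 normal forms are the empty word, the alternating words starting with k of length at most 6, and those starting with c of length at most 7, which correspond exactly to the 14 sets listed for the example on the real line.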
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "kS" }, { "math_id": 2, "text": "cS" }, { "math_id": 3, "text": "kkS=kS" }, { "math_id": 4, "text": "ccS=S" }, { "math_id": 5, "text": "kckckckcS=kckcS" }, { "math_id": 6, "text": "kckckckS=kckckckccS=kckS" }, { "math_id": 7, "text": "kikiS=kiS" }, { "math_id": 8, "text": "iS" }, { "math_id": 9, "text": "iS=ckcS" }, { "math_id": 10, "text": "ki=kckc" }, { "math_id": 11, "text": "(0,1)\\cup(1,2)\\cup\\{3\\}\\cup\\bigl([4,5]\\cap\\Q\\bigr)," }, { "math_id": 12, "text": "(1,2)" }, { "math_id": 13, "text": "[4,5]" }, { "math_id": 14, "text": "X" }, { "math_id": 15, "text": "cX=(-\\infty,0]\\cup\\{1\\}\\cup[2,3)\\cup(3,4)\\cup\\bigl((4,5)\\setminus\\Q\\bigr)\\cup(5,\\infty)" }, { "math_id": 16, "text": "kcX=(-\\infty,0]\\cup\\{1\\}\\cup[2,\\infty)" }, { "math_id": 17, "text": "ckcX=(0,1)\\cup(1,2)" }, { "math_id": 18, "text": "kckcX=[0,2]" }, { "math_id": 19, "text": "ckckcX=(-\\infty,0)\\cup(2,\\infty)" }, { "math_id": 20, "text": "kckckcX=(-\\infty,0]\\cup[2,\\infty)" }, { "math_id": 21, "text": "ckckckcX=(0,2)" }, { "math_id": 22, "text": "kX=[0,2]\\cup\\{3\\}\\cup[4,5]" }, { "math_id": 23, "text": "ckX=(-\\infty,0)\\cup(2,3)\\cup(3,4)\\cup(5,\\infty)" }, { "math_id": 24, "text": "kckX=(-\\infty,0]\\cup[2,4]\\cup[5,\\infty)" }, { "math_id": 25, "text": "ckckX=(0,2)\\cup(4,5)" }, { "math_id": 26, "text": "kckckX=[0,2]\\cup[4,5]" }, { "math_id": 27, "text": "ckckckX=(-\\infty,0)\\cup(2,4)\\cup(5,\\infty)" } ]
https://en.wikipedia.org/wiki?curid=10104622